
How to Restore MySQL Logical Backup at Maximum Speed


The ability to restore MySQL logical backups is a significant part of disaster recovery procedures. It’s a last line of defense.

Even if you lose all data from a production server, physical backups (data-file snapshots created with an offline copy or with Percona XtraBackup) can carry the same internal database structure corruption as the production data. Backups in a simple plain-text format let you avoid such corruption and migrate between database formats (e.g., during a software upgrade or downgrade), or even help with migration to a completely different database solution.

Unfortunately, the restore speed for logical backups is usually poor, and for a big database it can take days or even weeks to get the data back. It is therefore important to tune the backups and MySQL for the fastest possible restore, and to change the settings back before resuming production operations.

Disclaimer

All results are specific to my combination of hardware and dataset, but they can serve as an illustration of MySQL tuning procedures related to logical backup restores.

Benchmark

There is no general advice for tuning a MySQL database for a bulk logical backup load, and any parameter should be verified with a test on your hardware and database. In this article, we will explore some variables that help that process. To illustrate the tuning procedure, I’ve downloaded IMDB CSV files and created a MySQL database with pyimdb.

You may repeat the whole benchmark procedure, or just look at settings changed and resulting times.

Database:

  • 16GB – InnoDB database size
  • 6.6GB – uncompressed mysqldump sql
  • 5.8GB – uncompressed CSV + create table statements.

The simplest restore procedure for logical backups created by the mysqldump tool:

mysql -e 'create database imdb;'
time mysql imdb < imdb.sql
# real 129m51.389s

This requires slightly more than two hours to restore the backup into the MySQL instance started with default settings.

I’m using the Docker image percona:latest – it contains Percona Server 5.7.20-19 running on a laptop with 16GB RAM, Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz, two disks: SSD KINGSTON RBU-SNS and HDD HGST HTS721010A9.

Let’s start with some “good” settings: a buffer pool bigger than the default, 2x1GB transaction log files, disabled sync (because we are using a slow HDD), and big values for IO capacity. The load should be faster with big batches, so use 1GB for max_allowed_packet.

The values were chosen to be bigger than the MySQL defaults because I’m trying to see the difference from the usually suggested values (like “80% of RAM should belong to the InnoDB buffer pool”).

docker run --publish-all --name p57 -it -e MYSQL_ALLOW_EMPTY_PASSWORD=1 percona:5.7 \
  --innodb_buffer_pool_size=4GB \
  --innodb_log_file_size=1G \
  --skip-log-bin \
  --innodb_flush_log_at_trx_commit=0 \
  --innodb_flush_method=nosync \
  --innodb_io_capacity=2000 \
  --innodb_io_capacity_max=3000 \
  --max_allowed_packet=1G

time (mysql --max_allowed_packet=1G imdb1 < imdb.sql )
# real 59m34.252s

The load is IO-bound, and setting global foreign_key_checks=0 and unique_checks=0 has no effect, because these variables are already disabled in the dump file.
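
For reference, the header of a file produced by mysqldump typically already contains lines like the following, so both checks are disabled for the loading session anyway:

/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;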

How can we reduce IO?

Disable InnoDB double write: --innodb_doublewrite=0

time (mysql --max_allowed_packet=1G imdb1 < imdb.sql )
# real 44m49.963s

A huge improvement, but we still have an IO-bound load.
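
A quick way to confirm that the load is still IO-bound is to watch the data disk’s %util column while the import runs:

# extended statistics in MB, refreshed every 5 seconds;
# %util close to 100% with modest write throughput means the disk, not the CPU, is the bottleneck
iostat -xm 5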

We will not be able to improve the load time significantly while it is IO-bound. Let’s move to SSD:

time (mysql --max_allowed_packet=1G imdb1 < imdb.sql )
# real 33m36.975s

Is it vital to disable disk sync for the InnoDB transaction log?

sudo rm -rf mysql/*
docker rm p57
docker run -v /home/ihanick/Private/Src/tmp/data-movies/imdb.sql:/root/imdb.sql \
  -v /home/ihanick/Private/Src/tmp/data-movies/mysql:/var/lib/mysql \
  --name p57 -it -e MYSQL_ALLOW_EMPTY_PASSWORD=1 percona:5.7 \
  --innodb_buffer_pool_size=4GB \
  --innodb_log_file_size=1G \
  --skip-log-bin \
  --innodb_flush_log_at_trx_commit=0 \
  --innodb_io_capacity=700 \
  --innodb_io_capacity_max=1500 \
  --max_allowed_packet=1G \
  --innodb_doublewrite=0
# real 33m49.724s

There is no significant difference.

By default, mysqldump produces SQL statements, but it can also save the data in CSV format:

cd /var/lib/mysql-files
mkdir imdb
chown mysql:mysql imdb/
time mysqldump --max_allowed_packet=128M --tab /var/lib/mysql-files/imdb imdb1
# real 1m45.983s
sudo rm -rf mysql/*
docker rm p57
docker run -v /srv/ihanick/tmp/imdb:/var/lib/mysql-files/imdb \
  -v /home/ihanick/Private/Src/tmp/data-movies/mysql:/var/lib/mysql \
  --name p57 -it -e MYSQL_ALLOW_EMPTY_PASSWORD=1 percona:5.7 \
  --innodb_buffer_pool_size=4GB \
  --innodb_log_file_size=1G \
  --skip-log-bin \
  --innodb_flush_log_at_trx_commit=0 \
  --innodb_io_capacity=700 \
  --innodb_io_capacity_max=1500 \
  --max_allowed_packet=1G \
  --innodb_doublewrite=0
time (
mysql -e 'drop database imdb1;create database imdb1;set global FOREIGN_KEY_CHECKS=0;'
(echo "SET FOREIGN_KEY_CHECKS=0;";cat *.sql) | mysql imdb1 ;
for i in $PWD/*.txt ; do mysqlimport imdb1 $i ; done
)
# real 21m56.049s
1.5X faster, just because of changing the format from SQL to CSV!
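
For reference, mysqldump --tab writes two files per table into the target directory: a .sql file with the CREATE TABLE statement and a tab-separated .txt file with the data (the table names below are only illustrative):

ls /var/lib/mysql-files/imdb
# name.sql  name.txt  title.sql  title.txt  ...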

We’re still using only one CPU core; let’s improve the load with the --use-threads=4 option:

time (
mysql -e 'drop database if exists imdb1;create database imdb1;set global FOREIGN_KEY_CHECKS=0;'
(echo "SET FOREIGN_KEY_CHECKS=0;";cat *.sql) | mysql imdb1
mysqlimport --use-threads=4 imdb1 $PWD/*.txt
)
# real 15m38.147s

Toward the end, the load is still not fully parallel because of one big table: all other tables are loaded, but one thread is still active.
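
You can see the remaining import in the server’s process list: mysqlimport issues one LOAD DATA INFILE statement per file, so the last big table shows up as a single long-running query (the output line is illustrative):

mysql -e 'SHOW PROCESSLIST' | grep 'LOAD DATA'
# 42  root  localhost  imdb1  Query  905  executing  LOAD DATA INFILE '/var/lib/mysql-files/imdb/title.txt' ...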

Let’s split the CSV files into smaller ones, for example 100k rows per file, and load them with GNU parallel:

# /var/lib/mysql-files/imdb/test-restore.sh
apt-get update ; apt-get install -y parallel
cd /var/lib/mysql-files/imdb
time (
mkdir -p split1 ; cd split1
# split every table dump into 100k-row chunks: title.txt -> title.aaaaaa, title.aaaaab, ...
for i in ../*.txt ; do echo $i ; split -a 6 -l 100000 -- $i `basename $i .txt`. ; done
# group the chunks by suffix into split-aaaaaa, split-aaaaab, ... directories,
# renaming each chunk back to <table>.txt so mysqlimport maps it to the right table
for i in `ls *.*|sed 's/^[^.]*\.//'|sort -u` ; do
mkdir ../split-$i
for j in *.$i ; do mv $j ../split-$i/${j/$i/txt} ; done
done
)
# real 2m26.566s
time (
mysql -e 'drop database if exists imdb1;create database imdb1;set global FOREIGN_KEY_CHECKS=0;'
(echo "SET FOREIGN_KEY_CHECKS=0;";cat *.sql) | mysql imdb1
parallel 'mysqlimport imdb1 /var/lib/mysql-files/imdb/{}/*.txt' ::: split-*
)
# real 16m50.314s

Split is not free, but you can split your dump files right after backup.

The load is parallel now, but the single big table strikes back with ‘setting auto-inc lock’ in SHOW ENGINE INNODB STATUS\G.

Using the --innodb_autoinc_lock_mode=2 option fixes this issue: 16m2.567s.
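
To double-check, you can look for auto-inc lock waits in the InnoDB status output and confirm the lock mode in effect; note that innodb_autoinc_lock_mode is not dynamic, so it has to be passed at server startup as above:

mysql -e 'SHOW ENGINE INNODB STATUS\G' | grep -i 'auto-inc'
mysql -e 'SELECT @@innodb_autoinc_lock_mode'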

We got slightly better results with just mysqlimport --use-threads=4. Let’s check whether hyperthreading helps and whether the problem is caused by the “parallel” tool:

  • Using four parallel jobs for load: 17m3.662s
  • Using four parallel jobs for load and two threads: 16m4.218s

There is no difference between GNU Parallel and the --use-threads option of mysqlimport.

Why 100k rows? With 500k rows per file: 15m33.258s.

Now we have performance better than for mysqlimport --use-threads=4.

How about 1M rows at once? Just 16m52.357s.

I see a periodic “flushing logs” message, so let’s try bigger transaction logs (2x4GB), which gives 12m18.160s:

--innodb_buffer_pool_size=4GB --innodb_log_file_size=4G --skip-log-bin --innodb_flush_log_at_trx_commit=0 --innodb_io_capacity=700 --innodb_io_capacity_max=1500 --max_allowed_packet=1G --innodb_doublewrite=0 --innodb_autoinc_lock_mode=2 --performance-schema=0
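
A rough way to see whether the redo log is the bottleneck is to compare the log sequence number with the last checkpoint while the load runs; if the gap keeps approaching the combined redo log size, InnoDB has to stall and flush pages:

mysql -e 'SHOW ENGINE INNODB STATUS\G' | grep -E 'Log sequence number|Last checkpoint at'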

Let’s compare these numbers with myloader 0.6.1, also running with four threads (the myloader invocation differs only in its -d parameter; each myloader execution time is listed under the corresponding mydumper command):

# oversized statement size to get 0.5M rows in one statement, single statement per chunk file
mydumper -B imdb1 --no-locks --rows 500000 --statement-size 536870912 -o 500kRows512MBstatement
17m59.866s
mydumper -B imdb1 --no-locks -o default_options
17m15.175s
mydumper -B imdb1 --no-locks --chunk-filesize 128 -o chunk128MB
16m36.878s
mydumper -B imdb1 --no-locks --chunk-filesize 64 -o chunk64MB
18m15.266s
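
The myloader command itself is not shown above; under the assumption that only the dump directory changes between runs, each restore looked roughly like this (a sketch, not the exact benchmark invocation):

time myloader --threads 4 -d chunk128MB -B imdb1 --overwrite-tables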

It would be great to test mydumper with CSV format, but unfortunately it hasn’t been implemented in the past 1.5 years: https://bugs.launchpad.net/mydumper/+bug/1640550.

Returning to the parallel CSV file load, with even bigger transaction logs (2x8GB): 11m15.132s.

What about a bigger buffer pool: --innodb_buffer_pool_size=12G? 9m41.519s

Let’s check six-year-old server-grade hardware: an Intel(R) Xeon(R) CPU E5-2430 with a SAS RAID array (used only for the single SQL file restore test) and NVMe storage (Intel Corporation PCIe Data Center SSD, used for all other tests).

I’m using options similar to the previous tests, with the 100k-row split for the CSV file load:

--innodb_buffer_pool_size=8GB --innodb_log_file_size=8G --skip-log-bin --innodb_flush_log_at_trx_commit=0 --innodb_io_capacity=700 --innodb_io_capacity_max=1500 --max_allowed_packet=1G --innodb_doublewrite=0 --innodb_autoinc_lock_mode=2

  • A single SQL file created by mysqldump loaded in 117m29.062s = 2x slower.
  • 24 parallel processes of mysqlimport: 11m51.718s
  • Again, hyperthreading makes a huge difference! 12 parallel jobs: 18m3.699s.
  • Due to the higher concurrency, the adaptive hash index becomes a source of locking contention. After disabling it with --skip-innodb_adaptive_hash_index: 10m52.788s.
  • Disabling unique checks is referred to in many places as a performance booster: 10m52.489s.
    There is plenty of advice about unique_checks out there; it made no difference for this dataset, but it might help databases with many unique indexes (in addition to the primary key).
  • The buffer pool is smaller than the dataset; can changing the old/new pages split make inserts faster? No: --innodb_old_blocks_pct=5: 10m59.517s.
  • O_DIRECT is also recommended: --innodb_flush_method=O_DIRECT: 11m1.742s.
  • O_DIRECT does not improve performance by itself, but it lets you use a bigger buffer pool: O_DIRECT + a 30% bigger buffer pool (--innodb_buffer_pool_size=11G): 10m46.716s.

Conclusions

  • There is no common solution to improve logical backup restore procedure.
  • If you have IO-bounded restore: disable InnoDB double write. It’s safe because even if the database crashes during restore, you can restart the operation.
  • Do not use SQL dumps for databases > 5-10GB. CSV files are much faster than mysqldump+mysql: use mysqldump --tab + mysqlimport, or mydumper/myloader with an appropriate chunk-filesize.
  • The number of rows per LOAD DATA INFILE batch is important: usually 100K-1M. Use binary search (2-3 iterations) to find a good value for your dataset.
  • InnoDB log file size and buffer pool size are really important options for backup restore performance.
  • O_DIRECT reduces insert speed, but it’s good if you can increase the buffer pool size.
  • If you have enough RAM or SSD, the restore procedure is limited by CPU. Use a faster CPU (higher frequency, turboboost).
  • Hyperthreading also counts.
  • A powerful server could be slower than your laptop (12×2.4GHz vs. 4×2.8+turboboost).
  • Even with modern hardware, it’s hard to expect a backup restore faster than 50MBps (relative to the final size of the InnoDB database).
  • You can find a lot of different advice on how to improve backup load speed. Unfortunately, it’s not possible to implement improvements blindly, and you should know the limits of your system with general Unix performance tools like vmstat, iostat and various MySQL commands like SHOW ENGINE INNODB STATUS (all can be collected together with pt-stalk).
  • Percona Monitoring and Management (PMM) also provides good graphs, but you should be careful with QAN: full slow query log during logical database dump restore can cause significant processing load.
  • Default MySQL settings can cost you a 10x backup restore slowdown.
  • This benchmark is aimed at speeding up the restore procedure while the application is not running and the server is not in production use. Make sure that you revert all configuration parameters to their production values after the load (see the sketch after this list). For example, if you disable the InnoDB doublewrite buffer during the restore and leave it disabled in production, you may get scary data corruption due to partial InnoDB page writes.
  • If the application is running during the restore, in most cases you will end up with an inconsistent database, because the restore methods discussed above do not provide proper locking or transactional consistency.
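
As a checklist, the restore-time overrides used in this benchmark can be kept in a separate option file that is removed before the server goes back into production. The file name below is hypothetical, and the values are simply the ones from the tests above, not recommendations for your hardware:

# /etc/my.cnf.d/restore.cnf - hypothetical file, include only while loading the backup
[mysqld]
innodb_buffer_pool_size        = 4G
innodb_log_file_size           = 4G
skip-log-bin
innodb_flush_log_at_trx_commit = 0
innodb_doublewrite             = 0
innodb_autoinc_lock_mode       = 2
max_allowed_packet             = 1G
# remove this file and restart mysqld before returning to production;
# in particular, the doublewrite buffer must be enabled again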

Sorry, Collectors, Nobody Wants Your Beanie Babies Anymore


“A lot of them are never going to come back, and so they’re definitely going to become valuable,” said Mr. Percy. His online listings include Slowpoke, a sloth, and Teddy, a holiday bear.

Not. So. Fast.

A box of Beanie Babies Cameron Percy found in the basement of his new home.

Two decades after the great Beanie Baby boom, when old and young hit Hallmark stores around the country in search of rare examples, the speculators are back, living in an alternate Beanie reality.

Some on eBay insist that their rare Beanies, with mistakes in the tags, from a specific generation or with PVC beads as their fillings instead of the more common polyethylene, are worth tens of thousands of dollars.

Such dreams are crushed for anyone calling Rogue Toys, a collectibles store with branches in Las Vegas and Portland, Ore. The store’s answering machine specifically says the store doesn’t want to buy your Beanies.

“If you bring Beanies to me and try to sell them to me in bulk, I’ll give you about 20 cents. That’s me telling you I don’t want them,” said  Steve Johnston, the store’s owner. “Give them away.”

Many hoped that the 20th anniversary of Princess Diana’s death last summer would inspire interest in her memorial bear. Others have seen viral news stories touting the return of the market. Some, mostly retirees looking to downsize, thought that now the time might finally be ripe.

Mr. Percy has managed to sell only one Beanie after posting 11 listings.

“It is not like they are selling fast,” he said.

The recent wave of Beanie optimism, toy appraisers say, has been fueled by viral stories online. Ryan Tedards, who runs Love My Beanies, a Beanie Babies price guide website, says many prospective sellers cite articles or videos claiming certain beanies are worth large amounts of money.

“There’s just this cycle of fake news about Beanie Babies, some of them are the same article, copied and pasted, but they get a ton of traffic,” said Mr. Tedards, who fields about a hundred questions a week from people asking what their Beanies are worth. “It is always a bit of a battle. I have to explain to them that what they read or saw is not true.”

This flurry of posts has confused some collectors, filling them with hope and then letting them down.

Gary Larson’s collection includes a Princess bear.

Among the disappointed is Lynn Bowman, who started collecting Beanies in 1994 with her then 6-year-old son. She would travel across Illinois and eat McDonald’s burgers to collect the “Teenie Beanies,” toys that could only be purchased with a meal, in the hopes that they would one day be worth thousands.

Now, her old hobby has turned into a bit of a nightmare. No one wants them, not even the five bears she packaged with (fresh) candy that she tried to sell for just $20 in time for Valentine’s Day.

“I kept them really well, and now I find they aren’t worth very much,” she said, having spent recent weeks researching and sorting the Beanies, kept pristine in three tubs in her attic. “I’ll probably end up selling them for cheap—it is really sad.”

Gary Larson, a 65-year-old retiree, remembers the “casual excitement” he felt seeing articles claiming a Mystic, a white unicorn, or a Clubbie, made to commemorate the Beanie Baby club, could be worth thousands.

“But then I’d take out my box of Beanies and realize they were just the normal, ordinary ones, not the ones people were saying are rare,” he said. He is having difficulty selling any of his roughly 200 Beanies for even $2 each, and has given some away.

It wasn’t supposed to be this way. A 1997 price guide, self-published by a New Jersey father, predicted some Beanies, made by Chicago-based toy company Ty Inc., would appreciate 8,000% within the following decade.

The prediction didn’t even hold up for a year. A 1998 Wall Street Journal article found that the “American Trio” of Beanies—Lefty the donkey, Righty the elephant and Libearty the white bear—had fallen to $899 at one Florida shop from $1,299 earlier that year. Ty didn’t respond to a request for comment.

Today, that same trio would probably be worth about $50, says antiques and rare toys appraiser Bruce Zalkin. He remembers a half-million-dollar deal he made in the mid-1990s for 28,000 bears, which would be worth “hardly any money today.”

Part of the problem, Mr. Zalkin says, is that Beanies were made to be collected, driving up hype and saturating the market. By contrast, children actually played with tin toys from the 1920s or plastic “Star Wars” toys, making pristine examples rare. Toy appraisers predict Beanies will never make a comeback, since the 1990s children—millennials—aren’t collecting like generations before them.

“Sometimes these things that sound like a great idea just don’t pan out,” said Mr. Larson, who remembers going to a 1997 Chicago Cubs game to get his hands on a commemorative Cubs bear even though he is a die-hard White Sox fan. “I definitely fell for it.”

Write to Shibani Mahtani at shibani.mahtani@wsj.com


Comments

fxer, 9 hours ago: don't collect shit. millennials seem to get it, good for them.

DMack, 8 hours ago: Divorcing couple in 1999, splitting up their beanie babies collection: https://i.redd.it/iq4taexvvggy.jpg

Florida Prohibits Municipalities From Enacting Gun-Control Laws


Rachel Martin talks to South Miami Mayor Phil Stoddard, who wants to pass gun-control measures in his city following last week’s mass shooting, but Florida law prohibits local firearm regulations.


Low-carb vs low-fat? Both led to ~12lb loss after a year, regardless of genes


(credit: Allan Foster / Flickr)

In a 609-person, year-long study, dieters lost an average of about 12 pounds—regardless of whether they were trying to stick to a low-fat or a low-carb diet and regardless of whether they carried genetic variations linked to success on one of those diets.

The lackluster finding, published by Stanford researchers this week in JAMA, knocks back hopes that we’re at the point of harnessing genetic information to tighten our waistlines. Previous studies had whetted dieters’ appetites for the idea, picking out specific blips in metabolic genes that appeared to help explain why some people easily shed poundage on a given diet, while others struggled. Biotech companies have even begun serving up DNA tests that claim to help hungry dieters pair their menus with their biological blueprints.

But according to the new study, that order isn’t up yet.

The authors, led by nutrition researcher Christopher Gardner, enrolled 609 participants, who were aged 18 to 50 and had body mass indexes from 28 to 40 (spanning overweight to extreme obesity), with a mean of 33 (obese) and an average weight of around 212 pounds. Of those, 305 participants were randomly assigned to eat a “healthy low-fat diet” for a year, while the remaining 304 were assigned a “healthy low-carbohydrate diet.”

The dieters weren’t strictly monitored or required to stick to a rigid plan. Instead, they were offered 22 hour-long classes led by registered dietitians on how to follow their assigned diet without feeling deprived, as well as general advice on healthy eating.

Lean data

For instance, the low-fat group was advised to avoid oils, fatty meats, full-fat dairy, and nuts, while the low-carb group was cautioned to avoid cereals, grains, starchy vegetables, and legumes. But both diet groups were told to “(1) maximize vegetable intake; (2) minimize intake of added sugars, refined flours, and trans fats; and (3) focus on whole foods that were minimally processed, nutrient dense, and prepared at home whenever possible.” Dietitians also went over emotional awareness—to avoid stress binges, for instance—and common behavioral modifications—such as setting goals—that can help with dieting.

Otherwise, the dieters were given general targets: members of the low-carb group tried to get their daily carb intake down to 20 grams in the first eight weeks. The low-fat dieters tried to get their daily fat intake down to 20 grams in the first eight weeks. Then, both groups were instructed to find the minimal level that they thought they could maintain indefinitely.

Both groups cut back but on average didn’t hold to the targets, based on interviews. The low-carb dieters reported eating, on average, about 246.5 grams of carbohydrates per day at the beginning of the trial. They got that down to an average of 97 at the three-month mark but crept back up to 132 by the end. The low-fat dieters were, on average, eating 87 grams of fat per day at the start. They were down to 42 after three months and inched up to 57 by the end.

Though the researchers didn’t tell dieters to count or cut calories, both groups were eating around 500 to 600 fewer calories each day throughout the study.

At the end, the low-carb group lost an average of about 13.2 pounds per person, while the low-fat dieters lost an average of 11.7 pounds. The difference between the two groups was not statistically significant. But, there was wild variation within both groups. Some dieters lost upwards of 60 pounds, while others gained more than 20.

To see if genetics could help explain that weighty wobbling, the researchers turned to small genetic differences in three genes involved in fat and carbohydrate metabolism. Earlier work had suggested that these could predict the success of certain diets. Among the 304 low-carb dieters, 97 (32 percent) had a matching low-carb genotype, 114 (37.5 percent) had a mismatched low-fat genotype, and the rest had genotypes that matched neither. Among the 305 low-fat dieters, 130 (42 percent) had a matching low-fat genotype while 83 (27 percent) had a mismatched low-carb genotype.

Statistics on the side

The researchers ran the data to see if those with the matching genotypes did better on their diets than others. They didn’t. There were no statistically significant differences in success or failure among the genotypes in either dieting group.

The researchers also collected insulin data from the participants to see if that could predict dieting outcomes. The idea here is that people whose bodies don’t release enough insulin—which is involved in metabolizing carbohydrates—may have a better chance of losing weight on a low-carb diet, which would demand less insulin. But, that too, didn’t pan out. Insulin secretion didn’t link to better or worse weight-loss outcomes.

Overall, researchers concluded that neither the genetic variations nor insulin measurements were “helpful in identifying which diet was better for whom.”

Of course, the study had limitations. Despite being large and randomized, it relied on self-reported diet information from participants who didn’t adhere to strict menus. This replicates real-life conditions for most dieters, who try but do not always succeed at consistently following a strict—and potentially punishing—food plan. That said, stricter diets could have led to different outcomes during the study. Also, while the genetic variations the authors homed in on didn’t seem to predict weight-loss outcomes, other genetic factors or combinations may one day prove useful on this front.

The researchers are currently picking through leftover genetic data from the participants to see if any other bits of code may help explain the weight-loss variation.

JAMA, 2018. DOI: 10.1001/jama.2018.0245  (About DOIs).


'Jehovah's Witness Simulator 2018' Gives a Candid Glance at an Insular World


No one likes it when a Jehovah's Witness knocks on your door and tries to sell you on their religion when you’d rather be doing anything else. There’s a good chance the Witness hates it even more than you do though, especially if they’re a teenage boy.

Misha Verollet is an ex-Jehovah's Witness living in Vienna, Austria. Jehovah's Witness Simulator 2018 is a video game he made that condenses his childhood experience down to about four minutes.

Verollet used simple graphics and text boxes to convey the stifling life he lived. Caleb—the protagonist—explores his home, goes door-to-door to spread the church’s message, and dodges impolite questions from other kids at public school. It’s a short 'walking simulator'-type game with an effective message—being a Jehovah’s Witness can be alienating and strange.

The creator opened up about his game and his early life in the church in an Ask Me Anything on Reddit. One user asked him when he decided to leave the church.

“It basically came down to realizing I couldn't do the whole grind anymore and a life in short-term freedom plus death at armageddon would be better than being caged in until armageddon and then dying anyway because God could read my thoughts,” he wrote.

The church is a Christian sect that believes the world is wicked and God will fix it by ushering in the end of the world. As Verollet said, members tend to live in the moment and focus on “saving” as many people as possible because they believe Armageddon is just around the corner.

He left the church by forcing it to excommunicate him. “I could have just told the Elders that I didn't want to be a Jehovah's Witness anymore, but...I didn't have the courage,” he wrote on Reddit.

Instead, he slept with a woman he wasn’t married to and told everyone about it. “I had to stand in front of judicial committee (a tribunal of local Elders), confess to my sin and then refused to repent – therefore I was disfellowshipped and have been shunned ever since by family and friends,” he explained.

Verollet’s game is free to play here.




Homemade Cannoli That Live Up to the Hype


Good cannoli are impossible to find, but they’re more than possible to make at home—if you have the right ingredients, anyway. Start with the best-quality ricotta, something irresistibly fresh and creamy all on its own. Stirred into a batch of homemade vanilla pudding, that ricotta becomes a sweet and silky filling for the crispy homemade shells. Fortunately, both elements can be made in advance, then assembled at the last minute and finished with a sprinkling of toasted pistachios or dark chocolate.
fxer, 9 hours ago: oh shiiiitttttt