Curio Cabinet / Daily Curio
-
Mind + Body Daily Curio
What’s all this then? The official national dish of Great Britain may be chicken tikka masala, but fish and chips is certainly a close second. Brits love this hearty, battered meal, and they’ve been enjoying it since at least the mid-1800s, though two different cities claim to have invented it.
Fish and chips is a dish of battered fish and french fries (called “chips” in the UK). The fish is usually cod or haddock, coated in a batter made of flour, baking powder, milk, eggs, and seasonings. Sometimes, beer is added to the batter for extra flavor. The chips are usually cut thick, so some Americans might consider them “steak fries.” While both cod and haddock have been eaten in England since before the country’s founding, the practice of battering fish didn’t become popular until around the 16th century. That’s when Jewish immigrants from Spain and Portugal first brought the practice to England. It quickly caught on with the Brits, who served it alongside bread or mashed potatoes. As for chips, they were likely invented in the 1680s by Belgian housewives who fried potatoes in place of more-expensive fish. By 1830, chips had made their way to England, where they were popular with working people.
Some claim that the first recorded instance of a restaurant selling fish and chips side by side comes from London in the 1860s. Joseph Malin, a Jewish immigrant who had fled persecution in Eastern Europe to work as a rug weaver in England, opened a small fish and chips shop, also known as a “chippy,” to supplement his income. The shop exploded in popularity…so it seems that London can claim fish and chips as its own, right? Not quite. Some people claim that the first fish and chips shop was actually a wooden hut opened in 1863 at the Mossley market, an outdoor market in Lancashire that still exists today. The hut’s owner was John Lees, a local businessman.
Though we’ll probably never know which man (or which city) first spawned fish and chips, there’s no doubt that the dish has staying power. It remains as popular as ever in Britain, where it’s also a point of national pride. Each year, the National Federation of Fish Friers even awards one restaurant the title of Fish & Chip Takeaway of the Year, and the winner gets a free trip to Norway. Not too shabby for a chippy!
[Image description: Fish and chips with mashed peas and white dipping sauce on a blue plate.] Credit & copyright: Famifranquoi, Pixabay -
Science Daily Curio #2732
That’s no dog, that’s a dingo! A video from Australia showing a dingo committing literal daylight robbery recently went viral. The tenacious pup swam to a boat, where it proceeded to steal food and even a handbag. While dingoes may look similar to domesticated dogs, they’re considered wild animals. In fact, these pirate pests have carved out a niche for themselves in one of the planet’s harshest ecosystems.
Dingoes first arrived in Australia thousands of years ago, though it’s not known exactly when. Genetic analysis shows that dingoes are closely related to East Asian domestic dogs, meaning that they were likely brought over by humans who used them to hunt vermin. Archaeological evidence shows that dingoes have been in Australia for at least 3,500 years, but they likely arrived no earlier than 12,000 years ago. That’s around the time that rising sea levels separated Tasmania from the mainland, and no dingoes are found on Tasmania today. On the mainland, though, they went from human companions to apex predators.
Unfortunately, dingoes were considered pests by many early Australian settlers looking to raise livestock. This led to massive culling programs in southeastern Australia that all but wiped out dingoes in the region. Then, farmers constructed a 5,600-kilometer-long fence meant to keep out rabbits, a more recently-introduced invasive species. The fence did little to keep rabbits out, but it did keep dingoes out. Settlers didn’t understand that dingoes had, by then, become an integral part of Australia’s ecosystem, even filling the ecological niche left over when thylacines (or Tasmanian tigers) went extinct. Without dingoes to keep kangaroos and other herbivore populations in check, the fenced-in area has become overgrazed, leading to a crash in biodiversity. The effects are so drastic that, today, the difference in vegetation levels on either side of the fence can be seen from space.
The dingo is listed as a vulnerable species by the International Union for Conservation of Nature. Despite the fact that they weren’t originally wild animals, they’ve been part of the ecosystem for so long now that their demise would likely mean ecological disaster for Australia. Dingoes are still being baited, trapped, and hunted as pests, while conservationists work to educate the public about the valuable role dingoes play as wild predators. It might be a long road ahead, but every dog has his day.
[Image description: A tan-colored dingo stands in green grass.] Credit & copyright: TheOtherKev, Pixabay -
Outdoors Daily Curio #2731
You’re deathly ill, trapped in the pitch-black dark and—worst of all—nearly a mile underground. What do you do? In the case of American caver Mark Dickey, who was recently rescued from Morca Cave in Turkey, all he could do was wait. Caving, also called cave exploration or spelunking, might not sound particularly dangerous to the uninitiated. After all, tons of tourists flock to massive caverns along well-lit walkways every day without incident, but that’s not really what caving is. What draws hardcore cavers are deep, unexplored underground passages that often have little wiggle room. Traversing a cave often means exhaling deeply and squeezing through a hole or tunnel barely big enough for a person. It’s a dangerous activity that leaves little room for error or sheer bad luck. Dickey, an experienced caver who’s been exploring underground spaces for over 20 years, simply came down with a stomach illness that left him unable to make the passage out of one of Turkey’s deepest caves. He was reportedly throwing up blood and required emergency medical attention after weeks spent trapped in the cave.
Some cavers have met their end due to sudden flooding, while others have fallen from great heights or gotten stuck between a literal rock and a hard place. The problem is that, when cavers need rescuing, it usually takes another caver to get them out or even reach them. Dickey’s grueling ordeal could have been a lot worse—while he may have been 0.8 miles underground, the deepest cave system in the world is Veryovkina Cave in Georgia, at 1.3 miles. That distance might be easy to traverse aboveground, but getting through the rocky, winding passages of a deep cave takes a long time. In fact, it can take days to explore caves like Morca or Veryovkina. Despite these immense risks, caving remains a popular hobby among extremophiles who prefer slow and steady over the quick thrill of activities like BASE jumping. If you’re considering cave exploration for your next adventure, you might want to give it some deep thought.
[Image description: A cave with ferns growing around its sunlit entrance.] Credit & copyright: Tama66 -
Science Daily Curio #2730
Not everyone can be a Nobel Prize winner, so why not aim your ambitions just a tad lower? For those conducting scientific research that “makes people laugh…then think,” there are the Ig Nobel Awards, which just selected ten winners for 2023. The prizes have been awarded since 1991 as a way to honor scientists who make quirky or humorous discoveries. One of this year’s winners was chosen for their research on why scientists like to lick rock samples, while a team of international engineers was selected for trying to reanimate dead spiders.
According to the journal Nature, “The Ig Nobel awards are arguably the highlight of the scientific calendar.” While the “Igs,” as they’re also called, might seem like a mocking jab at frivolous research, they’re actually just the opposite. The award recognizes the value of oddball endeavors. The rock-licking study, for instance, was actually about the effectiveness of a real technique used in the field by paleontologists. Jan Zalasiewicz, who won the chemistry and geology prize, expounded in the study, “Wetting the surface allows fossil and mineral textures to stand out sharply, rather than being lost in the blur of intersecting micro-reflections and micro-refractions that come out of a dry surface.” In the case of the dead spiders, the engineers were taking inspiration from the arachnid’s anatomy to develop better mechanical grippers. They found that when spiders die and curl up, their legs are returning to their default “gripping” state. Based on this principle, the gripper they designed opens when pressure is applied instead of the other way around and is better at holding irregularly-sized objects.
Overall, the award is a tongue-in-cheek way for the scientific community to recognize the ostensibly absurd nature of their shared work while rewarding legitimate research. The awards are largely presented by Nobel Laureates, and at least one Ig Nobel winner has gone on to win a Nobel Prize. So it’s safe to say that the Igs are all in good fun. Even the prize money is absurd: each winner is given a “cash reward” of a ten-trillion-dollar bill. Of course, that’s in Zimbabwe dollars, which haven’t been recognized as legal tender since 2009. Who said that scientists have no sense of humor?
[Image description: Two arms, each holding a large, gold trophy, reach in front of a yellow background.] Credit & copyright: Anna Shvets, Pexels -
Daily Curio #2729
This might be the most unlikely museum conservation of all time. We’ve written before about thylacines (commonly known as Tasmanian tigers) and their extinction, but what about their resurrection? Recently, scientists from Sweden and Norway retrieved RNA from a thylacine specimen, marking the first time in history that RNA has been recovered from an extinct species. RNA is what reads the genetically-encoded instructions found in DNA. However, RNA is much more fragile and difficult to preserve. If a tissue sample containing RNA isn’t put into cold storage quickly enough, it’s promptly destroyed by enzymes. Against all odds, researchers managed to recover intact RNA from a thylacine tissue sample at the Swedish Museum of Natural History in Stockholm even though it had been kept at room temperature. Using the surviving RNA, scientists were able to catalog a list of transcriptomes, or actively expressed genes. By looking at transcriptomes, they can tell what proteins were being produced by the tissues found in the sample when the animal was still alive.
Thylacines went extinct around 130 years ago, but before their disappearance, they were the top predators in Tasmania. Predators play a key role in their respective ecosystems, so some conservationists believe that resurrecting and reintroducing thylacines to Tasmania would be good for the island’s health. Unfortunately for those hoping for a Jurassic Park situation, it’s unlikely that a thylacine will be cloned using the recently-retrieved RNA. What is possible, however, is a synthetic reconstruction of the species via gene editing, though even that would be a long way off. Still, there are immediate benefits to the RNA’s recovery. It opens up a world of possibilities for recovering RNA from similarly-kept samples, most of which were assumed to be lost causes. The RNA will also undoubtedly give scientists a detailed glimpse into the lives of these extinct creatures. The eyes may be the windows to the soul, but RNA is a porthole to the past.
[Image description: A drawing of two thylacines looking to the left.] Credit & copyright:
Henry Constantine Richter after John Gould, 1863, Wikimedia Commons. This work is in the public domain in the United States because it was published (or registered with the U.S. Copyright Office) before January 1, 1928. -
Mind + Body Daily Curio
It’s sweet, it’s sour, it’s scrumptious. It’s also associated with Russia even though it wasn’t invented there, and named after an ingredient that is rarely used to make it anymore. Suffice it to say that borscht has a long and complicated history. Luckily, the tangy soup itself is quite simple to make.
Borscht is a bright red soup made from meat stock, sautéed vegetables such as potatoes, cabbage, and carrots, and fermented beetroot juice, called beet sour. This final ingredient is what gives the soup its famous color. One ingredient you won’t find in most modern borscht is hogweed (also called cow-parsnip), a common European weed related to fennel. Yet the original Slavic name for borscht refers to this plant. That’s because, before beets were widely cultivated, hogweed was the ingredient that gave borscht its sour kick. In fact, borscht dates back to around the fifth century C.E., when foraging for wild ingredients was common practice.
In what is now Ukraine, ancient Slavic peoples would pick and chop up hogweed in early summer, then place it in clay pots filled with water and leave it to ferment. The result was a sour ingredient that could be combined with meat, cream, and egg yolks to make tarts or used as a primary component in soup. Hogweed remained the main ingredient in borscht until around the 17th century, by which time the soup had spread throughout what we now think of as Eastern Europe. Despite the fact that many people, including famed Polish botanist Simon Syrenius, considered hogweed a great hangover cure at the time, borscht began to change alongside Slavic farming practices. No one knows when, exactly, beetroot replaced hogweed as the soup’s primary ingredient, but many historians believe that it happened east of the Dnieper River in the late 17th century. The change was likely made by Ukrainian farmers living under Russian rule who turned to their own crops for sustenance.
As Russia became a dominant force in 18th century Europe, borscht’s popularity grew further. Meatless versions were often eaten by Christians during religious fasts, and Russian churches would sometimes distribute bowls of borscht to the poor. Today, this flavorful soup is considered a Russian staple food, and is often served with a dollop of sour cream on top, alongside boiled potatoes and hard-boiled eggs. Still, its humble origins have never been forgotten. In Poland, the common phrase “tani jak barszcz” even means “cheap as borscht.” Hey, the most delicious things in life aren’t always the fanciest.
[Image description: A bowl of borscht topped with sour cream and fennel against a white background. There is sliced, brown bread in the upper right.] Credit & copyright: Polina Tankilevitch, Pexels -
Literature Daily Curio #2728
The greatest adventures can start in the most unexpected places. J.R.R. Tolkien’s The Hobbit was published on this day in 1937. The beloved classic is known as a child-friendly prequel to the author’s grander fantasy epic, The Lord of the Rings. Yet, few people know that it wasn’t actually intended to be a prequel at all.
Compared to the cosmically-high stakes of The Lord of the Rings, its predecessor seems fairly tame. It’s the story of Bilbo Baggins, a hobbit with no ambitions of being a great adventurer, who goes on a journey with a gang of dwarves and an eccentric wizard. Being a hobbit, Bilbo is literally small in stature, and his people are insular folk unconcerned with anything that happens beyond their land, called The Shire. After helping the dwarves reclaim their home from a dragon, Bilbo returns to The Shire as a braver and more worldly hobbit, and thus his adventure ends.
So why does this simple story lead into one of the greatest fantasy epics of all time? Well, it wasn’t really supposed to. Tolkien first conceived of The Hobbit as a way to entertain his children. As he made up the story, he added things his children liked, such as bears, silly jokes, and the hobbits’ famous love of food. When Tolkien finally decided to write the story into a book and publish it, The Hobbit was wildly successful, prompting his publisher to ask for more stories from the same world in which it took place.
Tolkien was a great world-builder and a renowned linguist with a keen interest in genealogy. So, when his publishers asked for more stories, he started work on The Silmarillion, a book that incorporated his passions. It greatly fleshed out the early mythology of Middle-earth, where The Hobbit took place. Unfortunately for Tolkien, his publishers really just wanted more hobbits. So, Tolkien began work on what would become The Lord of the Rings, featuring Bilbo’s nephew, Frodo, as the protagonist at the heart of a war for the soul of Middle-earth. The Lord of the Rings is decidedly darker and more mature than The Hobbit, yet at times it’s just as whimsical. Mythology and epic battles are great…but so are hobbits.
[Image description: A wooden writing desk with an open notebook, a leather bag, an inkwell, a jar of pencils, a stack of books, and a clock sitting on it.] Credit & copyright: mozlase__, Pixabay -
Humanities Daily Curio #2727
Raise the curtain! The first Cannes Film Festival took place on this day in 1946, and it continues to be one of the most prestigious events in the worldwide film industry. Named for the French city of Cannes (pronounced more like “pan” than “pawn,” with a silent “s”), the festival is heavily associated with the glitz and glamor of international movie stars gracing the scenic streets of its namesake. However, the festival actually has its origins in the tumultuous political climate preceding WWII.
French diplomat and historian Philippe Erlanger was at the International Venice Film Festival in 1938 when a Nazi propaganda film was announced as the winner due to political pressure from both Hitler and Mussolini. This led Erlanger to convince the French Minister of Education, Jean Zay, to create a film festival that could be free of political meddling. They followed through on this plan in 1939, holding an international film festival in Cannes at the same time as the one in Venice. Unfortunately, the first festival didn’t go as planned. Scheduled to begin on September 1, the festival debuted on the same day as Germany’s invasion of Poland. With the onset of WWII, the organizers couldn’t see the festival through. In the end, there was only a single, private screening featuring the American film Quasimodo, and the festival couldn’t be held again until after the war ended.
Between 1945 and 1946, Erlanger once again campaigned for a film festival held in France. At the time, there wasn’t any government funding available for the festival at the national or local level, so the money was collected through a public fundraising effort. Although there were numerous technical difficulties, the Cannes Film Festival was able to debut for the “first” time for a second time. Today, the festival is known for the unfettered creativity of the contributors, and is a place where experimental or unconventional projects can find an audience. Between screenings, producers and distributors get together to make deals on films that perform well with viewers and judges. The most coveted prize, of course, is the Palme d’Or (“golden palm”). Films that win the award often go on to be critically and commercially successful upon wider release, like Apocalypse Now and Pulp Fiction. You’ve just got to have a Cannes-do attitude and a certain je ne sais quoi to do well.
[Image description: An empty movie theater with red seats and a gold curtain.] Credit & copyright: onkelglocke, Pixabay -
US History Daily Curio #2726
Americans love fireworks, but this takes things to another level. Just over 242 years ago, Benedict Arnold turned traitor during the American Revolution and burned down the town of New London, Connecticut. In turn, New London burns him in effigy every year. The latest burning took place on September 9, during the annual Burning of Benedict Arnold Festival, with residents chanting, “Burn the traitor!” Arnold was a general in the Continental Army who, unhappy with his treatment by his colleagues and superiors, became a “turncoat,” joining the British “Redcoats” to fight against the former colonies.
Arnold is perhaps the most well-known traitor in American history, and his name is still synonymous with betrayal in common parlance today. But there’s actually more history behind the festival that bears his name. Soon after the end of the Revolutionary War, there were many local events throughout the U.S. akin to the Burning of Benedict Arnold Festival. The reason, aside from leftover animosity against traitors, was Guy Fawkes Day. Before the war, American colonists celebrated the day by burning an effigy of England’s most famous traitor, because they considered themselves English. But after the war, they couldn’t exactly go back to burning Guy Fawkes for treason when they had just fought a war against the king of England. Instead of letting the holiday die, they simply began burning effigies of American traitors. New London specifically picked Benedict Arnold for a little payback since he’d burned down their town. It’s all in good fun, though. This year, there were even self-proclaimed supporters of Arnold who showed up in powdered wigs to defend his honor in jest. As for how Arnold himself would feel about it, he probably wouldn’t be surprised—he actually saw himself being burned in effigy during his lifetime. And despite joining the British, he was never really accepted by British society. However, contrary to some enduring myths, Arnold never expressed any regrets for his decision, and there is no evidence to suggest that he was buried in his blue Continental uniform upon his death, though his tombstone reads, “Sometime general in the army of George Washington.” That’s some selective wording.
[Image description: A painting of Benedict Arnold in a hat and military uniform with one hand raised.] Credit & copyright: Thomas Hart, Wikimedia Commons, From the Anne S. K. Brown Collection at Brown University, Public Domain (published or registered with the U.S. Copyright Office before January 1, 1928.) -
Mind + Body Daily Curio #2725
Here's a troubling heatwave that has nothing to do with the weather. A Massachusetts teen recently passed away after eating an extremely spicy “One Chip Challenge” tortilla chip, leaving some people wondering about the dangers of spicy foods. The company that makes the chips pulled them from store shelves, but they’re not the only extremely spicy products on the market. In fact, spicy food challenges have been popular online for years. TikTokers routinely face off to see who can handle the spiciest peppers, and the beloved YouTube series Hot Ones features celebrities eating spicy chicken wings while answering interview questions. So, is spicy food really dangerous? Thankfully, the answer is usually no.
Peppers are the powerhouses behind many spicy eating challenges. They’re hot due to a compound called capsaicin, which causes a burning sensation when it binds to a pain receptor in the mouth called TRPV1. Despite the pain, eating spicy foods can actually have some health benefits for the average person. They can improve stomach health, lower cholesterol, and have been linked to longer lifespans. Capsaicin also doesn’t cause ulcers; that’s just a myth. It’s actually a key ingredient in some pain management medications.
That’s not to say that capsaicin can’t be unsafe in rare circumstances. When a person eats too much capsaicin at once, it can cause an acute reaction. In 2018, a man participating in a chili-eating contest landed in the hospital with brain abnormalities after eating a Carolina Reaper. These chilis are bred to have an excessive amount of capsaicin. The man’s body had a rare but immediate reaction, causing blood vessels in his head to constrict dangerously. It's a good reminder for people with underlying medical conditions that affect their digestive or vascular systems to steer clear of “extreme” spicy challenges. The burning sensation might not be real, but the danger can be.
[Image description: Two green peppers on fire against a black background.] Credit & copyright: holdosi, Pixabay -
Mind + Body Daily Curio
This fanciful dessert may have surprisingly bawdy origins…or not. Though theories abound, the exact circumstances of tiramisu’s invention are a mystery. What’s not so mysterious is its enduring popularity. This caffeine-laced dessert has been delighting diners around the world since at least the 1960s.
Tiramisu is a layered dish that looks similar to cake, though it requires no baking. It has six main ingredients, the most famous of which are ladyfingers, a type of long cookie or biscuit. These are soaked in coffee and layered with mascarpone, a type of cream cheese, which has been combined with eggs and sugar. The entire confection is topped with cocoa powder, making for a sweet treat with a bitter edge. Coffee is key to one popular, saucy legend about tiramisu’s origins. Supposedly, a 19th-century brothel owner in Treviso, Italy, created tiramisu to sell to her customers so that its caffeine and calories could reinvigorate them when they returned to their wives. Tirami su does translate to “pick me up” in the traditional Treviso dialect, which seems to indicate that the dish was invented there. But the world of Italian desserts is complicated. A fierce tiramisu-based rivalry has been raging for decades between two Italian regions, both of which claim to have invented it. On one side is Veneto, the region of which Treviso is a part, which officially claims that tiramisu was invented there by restaurateur Ado Campeol and his wife, Alba di Pillo, at Campeol’s Le Beccherie restaurant in 1969. On the other side is Friuli Venezia Giulia, which claims that a chef named Norma Pielli invented the dish there in 1959 at the Albergo Roma hotel in Tolmezzo. When the Italian government officially backed Friuli Venezia Giulia’s claims in 2017, outrage ensued in Veneto. The region’s local government spoke out against Friuli Venezia Giulia’s attempt to “steal” tiramisu and even threatened legal action. Still, the biggest tiramisu-cooking competition in the world is held each year in Veneto, and Italians are generally split on where, exactly, their famed dessert comes from.
What’s certain, though, is that tiramisu didn’t hit a sweet spot in the U.S. until the 1980s. Italian food was experiencing something of a stateside popularity boom at the time, and tiramisu was different enough from traditional American desserts to stand out. At the same time, its flavors were familiar enough for it to fit right in. Tiramisu’s modern popularity proves that it’s a dish able to stand the test of time…even if we’ll never know exactly how long it's been around.
[Image description: A slice of tiramisu on a white plate in front of a brick wall. To the left is a container reading “Ti Bar”; to the right is a glass of wine.] Credit & copyright: Anestiev, Pixabay -
Humanities Daily Curio #2724
Okay, Google: is Google okay? A giant among Big Tech giants, Google is now on trial for allegedly breaking antitrust laws. When it comes to search engines, Google is the top choice for over 90 percent of people in the U.S. Its popularity is such that the name has become a verb since its debut in 1998. Other search engines have come and gone, leaving a few competitors like Microsoft’s Bing. It’s one thing to be number one by being the best, but it’s against federal law for a company to sabotage its competition. Now, following a three-year investigation, the federal government is accusing Google of using unfair practices to secure its current dominance. Among the accusations levied against the company is that it uses exclusionary contracts to push out competition. For example, the government says that Google paid Apple to make its search engine the default option in Apple’s proprietary browser, Safari, which is used on iPhones and other Apple devices.
Then there’s the issue of Google’s own smartphone operating system, Android. On Android devices, Google is accused of preloading its own apps, thereby discouraging the installation and use of other search engines. Beyond playing king of the hill, Google is also accused of selling its advertisers short. According to a complaint by the Department of Justice (DOJ), Google’s search advertising tool, Search Ads 360 (SA360), has been designed to be largely incompatible with services like Bing on purpose. Essentially, Google is forcing advertisers to use their platform, even if there might be better options for their specific needs. On top of that, the DOJ claims that Google destroyed evidence ahead of the trial. For example, they believe that Google allowed employee chat logs to be auto-deleted, even though they knew that the logs should have been preserved. The trial is expected to last around 10 weeks. Antitrust cases can be complicated, but if you want to know more, you can always do a quick online search!
[Image description: A wooden gavel, a black briefcase, and gold scales against a white background.] Credit & copyright: succo, Pixabay -
FREEHumanities Daily Curio #2723Free1 CQ
What is it called when someone makes a hole in the Great Wall of China? A monumental mistake. The government of China also calls it a crime—understandably—and has arrested a pair of suspects accused of breaking through a section of the Great Wall to create a shortcut. A 38-year-old man and a 55-year-old woman were recently taken into custody after authorities discovered a dirt road going straight through where the wall once stood. Oddly enough, though, this isn’t the first time that the wall has been brought down by locals—in the past, it’s been plundered for its bricks. That, combined with erosion from natural forces, has all but erased about a third of the wall. It’s still a formidable structure that spans approximately 5,500 miles (though it can’t actually be seen from space) and is obviously a point of national pride for China.
Much of what remains of the wall today is actually the result of decades of painstaking reconstruction. Construction of the original wall began during the Qin dynasty in the 3rd century B.C.E., and the structure saw various additions over the next two millennia. Many of the people who worked on the wall’s construction were soldiers or peasants, but the job was also handed out as punishment to criminals. Tax evaders were often sentenced to build the wall, while guard duty along its walkways was given to outlaws. It was common for those laboring on the wall to die in the process, though there are no exact figures for how many lives were lost. Despite all the effort, the wall was never all that effective at protecting China’s borders. By the time of the Communist Revolution in China, much of the wall had been abandoned and picked apart, which may be hard to believe considering the Great Wall’s status today as a UNESCO World Heritage Site. It’ll take more than spackle and elbow grease to repair the recently-made hole, but at least those responsible won’t be sentenced to do it themselves.
[Image description: A section of the Great Wall of China surrounded by green forests.] Credit & copyright: Wikimedia Commons, Ahazan, CC0 1.0 Universal (CC0 1.0) Public Domain Dedication -
FREEBiology Daily Curio #2722Free1 CQ
It’s gonna take a lot more than extinction to get rid of this bird. Once thought to have been wiped out, a flightless bird known as the takahē has been reintroduced to the wilderness of New Zealand. While these islands are no strangers to flightless birds like kiwis and kākāpō, one member of this illustrious club has been absent for over a century: the mysterious takahē. It may look like a strange mix between a turkey and a parrot, but this bird has a long, proud history. In fact, fossils indicate that the takahē has been around since at least the Pleistocene epoch. With a round, plump body covered in blue feathers and a short, stout, vibrant red beak, it looks like an animal that could only be seen in a book or a museum of prehistory. Yet it was declared extinct in 1898. The birds’ numbers were already dwindling by the time European settlers arrived in New Zealand, but the invasion of cats, rats, and ferrets that accompanied the human arrivals was thought to have picked off what was left of them. However, just 50 years later, a group of hunters spotted a strange bird standing, somewhat ironically, outside a museum. It was around 50 centimeters long and made strange, squeaky whistles unlike anything they’d ever heard. Their discovery was confirmed later by scientists, but only after the hunters returned with a net and a camera to capture the prehistoric bird on film.
Since then, conservationists have been raising the takahē in captivity in the hopes of restoring their natural population to sustainable levels. Currently, there are 500 takahē in captivity, and they’ve been successfully bred to increase their numbers. In August, 18 of them were released on the South Island, where they haven’t been seen for over 100 years. However, there’s more to securing the future of the takahē than setting a group of them loose. The government of New Zealand has gotten serious, in recent years, about eradicating the invasive species that drove these birds (and other native fauna) to near extinction in the first place. The greatest threats are rats, possums, and stoats, which are being trapped systematically to establish sanctuaries for native species. Takahē might be flightless, but hopefully they’ll be taking off in no time.
[Image description: Description ] Credit & copyright: Wikimedia Commons, Bernard Spragg. NZ, Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEOutdoors Daily Curio #2721Free1 CQ
Many people have heard the call of the sea, daring them to seek adventure and conquer its unforgiving waters. Some might be better off hanging up on that call, though. In a recent, bizarre story of the sea, a man was taken into custody by the U.S. Coast Guard (USCG) after attempting to cross the Atlantic Ocean in a giant, homemade vessel resembling a hamster wheel. Reza Baluchi was discovered by members of the USCG in what he calls a “hydro-pod”, a vessel made of buoys surrounding an enclosure. The vessel moves as the person inside it runs, causing paddles on the outside to push against the water. While this might be a fun way to traverse a small, calm body of water, Baluchi was 70 nautical miles off the coast of Florida on a 4,000-mile voyage to London, England. To top it all off, Hurricane Franklin was approaching the area. The USCG only spotted Baluchi while preparing for the hurricane’s arrival. They made repeated attempts over several days to get him aboard their ship. Allegedly, Baluchi refused, threatened to hurt himself, and even claimed that he had an explosive with him (though he later admitted that it was a lie). After around three days of negotiations, the Coast Guard was able to convince him to abandon his vessel and his voyage. Later, Baluchi was released on a $250,000 bond.
But could he have made it? Was the Coast Guard just being a bunch of spoilsports? Well, in 2014, Baluchi made a similar attempt to reach Bermuda from Boca Raton in a hamster ball instead of a wheel. That attempt failed due to a lack of navigation equipment and his inability to fight the ocean currents. Moreover, a long-distance trip in such a vessel is practically impossible because a running human isn’t an effective form of propulsion. However, after Baluchi failed in his Bermuda voyage, former USCG servicemember Todd Coggeshall remarked that Baluchi was more likely to be carried by the currents to England than to make it to the island. Maybe that’s where he got the idea for this trip’s intended destination.
[Image description: Two members of the U.S. Coast Guard stand aboard a ship and speak to Reza Baluchi, who is visible in the center of his round, homemade vessel.] Credit & copyright: Wikimedia Commons, Public Domain, this image or file is a work of a United States Coast Guard service personnel or employee, taken or made as part of that person's official duties. As a work of the U.S. federal government, the image or file is in the public domain (17 U.S.C. § 101 and § 105) -
FREEMind + Body Daily CurioFree1 CQ
This succulent dish is wrapped in puff pastry…and in secrets. Beef Wellington is synonymous with British cuisine today, but its origins are murky at best. One would think it was named after Arthur Wellesley, the real-life 1st Duke of Wellington, who served as British Prime Minister from 1828 to 1830 and was famous for defeating Napoleon Bonaparte at the Battle of Waterloo. Some legends certainly claim that the dish and the duke are related, but history seems to suggest otherwise.
Beef Wellington is a dish of fillet steak coated in duxelles (a minced mushroom and herb mixture) and pâté (often foie gras, which is made from duck or goose liver), then baked inside puff pastry. It can be tricky to make, but a successful beef Wellington makes for a flaky, juicy, unique meal. It’s no wonder that many European chefs, including famous ones like Gordon Ramsay, feature beef Wellington prominently on their menus. Yet, despite being known as a British dish, no one is quite sure whether beef Wellington actually originated in England. Some historians believe that it was invented in Ireland, since a dish known as “steak Wellington” was popular around the time that the duke himself rose to fame, and he did have some Irish ancestry. Others think that beef Wellington originated in France, since it's similar to a French dish called filet de boeuf en croûte, which predates the duke’s time in office. The first mention of beef Wellington, specifically, dates to an 1899 cruise ship menu, long after Duke Wellington was dead. Of course, it’s possible that the dish was named posthumously in his honor, but we’ll never know for sure.
What’s certain is that Julia Child, one of the U.S.’s first celebrity chefs, is responsible for popularizing the dish in America. After a recipe for beef Wellington was featured in her 1961 cookbook Mastering the Art of French Cooking, the dish became all the rage in fine dining circles. In fact, it was soon known to be a favorite of President John F. Kennedy. That’s a high honor, no matter how you slice it.
[Image description: Beef Wellington on a white plate with carrots.] Credit & copyright: pzphone, Pixabay -
FREEScience Daily Curio #2720Free1 CQ
This famously edgy festival is looking a bit washed-up this year. Recently, around 70,000 Burning Man Festival attendees were stranded in the Black Rock Desert following an unusually heavy storm that turned the event grounds into a mud pit. The Burning Man Festival has been going on every year since 1986 and takes place in the middle of the Black Rock Desert in Nevada. It’s a gathering of artists, celebrities, and anyone willing to fork out a small fortune to camp out for a few days. While the unexpected is to be expected at the event, this weather-related calamity caused unprecedented chaos as the ground turned into thick, sticky mud.
With a horde of RVs and other motor vehicles unable to get in or out easily, attendees were ordered to stay put while emergency personnel arrived to provide assistance, although not everyone complied. It may sound strange to hear about a desert flooding in the middle of summer, but it’s not completely unheard of. Most deserts do, in fact, get rainfall, and dry ground isn’t particularly absorbent, allowing water to wash over it and cause flash flooding. However, climate experts are saying that this particular storm was unusual for the region. It was caused by moisture pushed in by the Southwest monsoon, which prompted officials in California and Nevada to issue flood warnings. Over two days during the festival, the desert received as much as two months’ worth of rain all at once. This amount of precipitation might become more common due to climate change, but it’s unlikely to spell the end of the annual gathering. Rainfall in deserts tends to be localized, so it’s still a wild coincidence that the festival was hit so directly by the storm. Fortunately, the festival didn’t see any fatalities related to the flooding (the one death reported at the festival wasn’t due to weather), and although the eponymous Burning Man’s immolation was delayed, his wooden body was eventually incinerated. He was a real stick in the mud this year, anyway.
[Image description: A black-and-white photo of a desert landscape under a cloudy sky.] Credit & copyright: Brett Sayles, Pexels -
FREEAstronomy Daily Curio #2719Free1 CQ
Behind every great moon rover is a woman…or a hundred. The Indian Space Research Organisation (ISRO) recently made history with lunar mission Chandrayaan-3, which landed a rover on the moon’s south pole. To commemorate the achievement, the landing site was given the name Shiv Shakti, after the concept of feminine energy in Hindu mythology. The reference is far from arbitrary; it’s meant to honor the women who helped make the ambitious space mission possible. ISRO states that around 25 percent of its 16,000 employees are women. Over 100 women were involved in the lunar mission as scientists or engineers, and they helped make India the first nation to place a rover on one of the moon’s poles, all on a modest budget of $75 million. Among them is the mission’s deputy project director, Kalpana Kalahasti, a satellite specialist who has previously worked on imaging devices for taking high-resolution pictures of the Earth’s surface and is currently involved in the Mars orbiter mission. There’s also robotics specialist Reema Ghosh, who helped create Pragyan, the rover that is crawling around the lunar surface.
Pragyan has already sent back images and video footage of itself roaming the dusty surface, and ISRO stated that it has discovered traces of sulfur on the moon. The mission’s success has been highly publicized in India and around the world, as has India’s push to encourage more women to enter STEM fields. Currently, 43 percent of STEM graduates in the country are women, but women make up only 14 percent of the workforce in STEM industries. Those at ISRO who worked on the lunar mission hope that their success will encourage other Indian employers to accept more women into their ranks. There’s certainly enough space for everyone.
[Image description: A photo of the partially-obscured moon in a dark blue sky.] Credit & copyright: Ponciano, Pixabay -
FREEHumanities Daily Curio #2718Free1 CQ
This case was a breath of fresh air. A historic ruling by Montana’s First Judicial District Court has established that the Treasure State violated the constitutional rights of its residents by failing to provide its youth with a “clean and healthful environment” due to poor air quality. The landmark case was brought to court by 16 youths between the ages of 5 and 22, who were represented by nonprofit law firm Our Children’s Trust, the Western Environmental Law Center, and McGarvey Law. It seems far-fetched that anyone could successfully sue a state over dirty air. However, the plaintiffs cited actual Montana law guaranteeing the right to clean air. In Montana’s state constitution, Article II, Section 3 states, “All persons are born free and have certain inalienable rights. They include the right to a clean and healthful environment.” The young plaintiffs’ case wasn’t just about making a broad claim that Montana wasn’t clean enough. Instead, they specifically challenged the Montana Environmental Policy Act (MEPA). This act has a provision that specifically forbids state agencies from considering the impacts of greenhouse gas (GHG) emissions or climate change when conducting environmental reviews. So, if a company wanted to set up a coal mining operation, the state would only be able to take into consideration the direct impact on local health and the environment, but not the long-term costs of emitting GHG. Since the matter of a “clean and healthful environment” was already in the books, plaintiffs just had to prove that MEPA violated the state’s constitution.
To that end, the plaintiffs provided expert witnesses who testified that the state had failed to provide the “clean and healthful environment” and had contributed to the droughts, wildfires, and other natural disasters caused in part by climate change. As for the age of the plaintiffs, they argued that, as young people, they would have to endure more of the negative consequences of the state’s failure to protect them. This ruling sets a significant precedent that might allow similar cases to go to trial in other states. These young plaintiffs just proved that their dream of a cleaner future is more than just pie in the Big Sky!
[Image description: A mountain and field under a blue sky in Montana.] Credit & copyright: Kerry, Pexels -
FREETravel Daily Curio #2717Free1 CQ
There’s never been a better time to play with your food. On August 30, the Spanish town of Buñol held Tomatina, an annual tomato-throwing festival that’s been going on for nearly 80 years. Located in the province of Valencia, Buñol is a small town of around 9,000 people. Yet, every year, its population more than doubles as tourists from all over the world gather for a unique celebration of the humble tomato. This year, around 15,000 participants gathered in the streets of the rural town as workers distributed 120 tons of fresh tomatoes from the tops of trucks. Attendees then threw the tomatoes at each other, covering themselves, the streets, and houses in bright red tomato pulp. Not to worry though; once the tomatoes run out, everything is hosed down and washed away.
Tomatina is every child’s food-fight daydream come true, and that’s supposedly how it all started. In 1944 or 1945 (accounts vary), a group of misbehaving children or adolescents in Buñol knocked the headpiece off of a parade performer who was marching in a celebration for San Luis Bertran, the town’s patron saint. In the ensuing argument, someone grabbed a bunch of tomatoes from a nearby vendor’s stand and threw them at another person in the brawl. That led to others joining in, but, as the story goes, all of the ill will was forgotten in a red haze of flying tomatoes. In fact, the fight ended up being so much fun that locals decided to repeat the saucy skirmish the following year. Tomatoes are a signature crop in the area, so Buñol is unlikely to ever run out of their weapon of choice. That’s good news, since Tomatina is one of Spain’s most recognizable cultural events, alongside the Running of the Bulls. Of course, in that event, the bulls throw you.
[Image description: A pile of red tomatoes.] Credit & copyright: LoggaWiggler, Pixabay