Curio Cabinet / Daily Curio
-
World History Daily Curio #3057
Just because a war is undeclared doesn't mean it's not real. Just take a look at the Falklands War, a conflict between Argentina and the U.K. that began on this day in 1982. As its name implies, the war was fought over which nation had sovereignty over the Falkland Islands, which are located about 300 miles from the coast of Argentina. For most of their history, the Falkland Islands were uninhabited, but by the early 1800s, there were Argentine residents living there. Then, in 1833, Britain took control of the islands and forced out their inhabitants, establishing a population of British residents. After that, the islands were recognized as belonging to the U.K., despite objections over the years from the government of Argentina.
In 1982, tensions finally boiled over after negotiations between the two countries fell through, and the Argentine military junta launched an invasion of the islands. The decision to reclaim them was partially motivated by the junta's declining grip on Argentina; its leaders believed that retaking the islands would rally public support. British troops were also sent to the islands, and fighting commenced. The conflict lasted over two months and claimed more than 900 lives (649 Argentine, 255 British, and 3 Falkland Islanders), yet neither the U.K. nor Argentina officially declared war. The Falkland Islands remain under British control today, and their residents have rebuffed any attempts by the Argentine government to incorporate them into Argentina. Of the over 2,500 people currently living on the islands, nearly all are English-speaking and of British descent. The islands' economy depends largely on a modest agricultural industry and tourism, without much in the way of natural resources. The dispute over the islands’ ownership is mostly just a matter of national pride for both sides. Who knew island life could be so controversial?
[Image description: The flag of the Falkland Islands, featuring a dark blue background, British flag in the upper left, and a seal with a sheep and a sailing ship in the lower right.] Credit & copyright: Government of Great Britain, Wikimedia Commons. Public Domain. -
Political Science Daily Curio #3056
This wasn’t an April Fools’ joke, but it almost seems like one. The state of Illinois recently allowed its citizens to vote on a new design for their state flag, and by far the largest share of the votes went to the existing design. Last year, Minnesota voted to adopt a new design for its state flag, and maybe Illinois was feeling a little left out. In the end, the state’s redesign contest came down to 10 finalists, and out of around 385,000 voters, 43 percent wanted to keep the same old, same old.
Illinois originally adopted its flag in 1915, and it's not exactly known for its vexillological beauty. It features an eagle atop a rock, a shield decorated with the stars and stripes, and the setting sun in the background. Next to the eagle, a banner shows the state motto—“State Sovereignty, National Union”—and on the rock are two dates: 1868, the year the state seal (the eagle design featured on the flag itself) was adopted, and 1818, the year Illinois became a state. While it may seem strange to hold a flag-design contest, Illinois’ current flag was actually chosen via a similar contest in 1915, organized by the Daughters of the American Revolution, and that wasn't even the first time someone tried to come up with a different state flag for Illinois. A few years prior to that contest, a man named Wallace Rice designed a flag featuring blue and white stripes, 20 blue stars, and one white star. The 21 stars were meant to represent the fact that Illinois was the 21st state to be added to the Union, but no matter the symbolism, the flag was never approved by the state legislature. Other flags considered in the past included banners created for the state’s Centennial and Sesquicentennial celebrations in 1918 and 1968, respectively, and those two were also among the 10 finalists in the latest vote. Even with flags, it seems most people agree: if it ain't broke, don't fix it!
[Image description: The Illinois state flag: a white flag with an eagle in the center. The eagle holds a red banner reading “NATIONAL UNION” and “STATE SOVEREIGNTY” while standing on a rock listing the years 1868 and 1818. There is a yellow setting sun in the background.] Credit & copyright: Public Domain. -
Nutrition Daily Curio #3055
Here’s a citrus to celebrate. Researchers at Harvard Medical School have discovered that eating citrus might be an effective way to lower the risk of developing depression. Depression is an extremely common condition, yet it can be notoriously difficult to treat. Around 290 million people worldwide are thought to suffer from the disorder, and for many of them, treatments aren’t effective. In fact, around 70 percent of those with depression don’t find antidepressants to be effective. However, in recent years, researchers have found a strong link between an individual's gut microbiome and their mental health, and the Mediterranean diet has been found to reduce the risk of depression by almost 35 percent. Now, similar effects have been found in people who eat at least one orange every day.
Harvard researchers recently examined a study known as the Nurses’ Health Study II (NHS2), which began in 1989 and involved detailed dietary and lifestyle interviews with around 100,000 participants. Those who ate a lot of citrus tended to have significantly lower rates of depression compared to those who didn't. Based on the data, just one medium orange every day might lower the risk of depression by up to 20 percent. But it's not the orange itself that's helping directly. Rather, citrus consumption promotes the growth of F. prausnitzii, a beneficial bacterium found in the gut. Researchers believe that F. prausnitzii affects the production of serotonin and dopamine in the intestines, which can make their way to the brain. Serotonin and dopamine are neurotransmitters that are often lacking in people with depression. Apples might keep the doctor away, but it seems that oranges really keep the blues at bay.
[Image description: Rows of cut oranges.] Credit & copyright: Engin Akyurt, Pexels -
Mind + Body Daily Curio
You can’t help but catch a whiff as you chow down on this dish. Referring to a food as “stinky” might seem rude, but it’s actually a point of pride for makers of stinky tofu. This Chinese dish’s actual name, chòu dòufu, literally translates to “smelly tofu,” and the dish is lovingly referred to as stinky tofu in English. It has gone viral in recent years as influencers descend on Asian food markets to try unusual dishes on camera, but stinky tofu’s history predates the internet by quite a few years. In fact, it dates back centuries.
Stinky tofu is, of course, a kind of tofu, a food made from soybeans. To make tofu, soybeans are ground with water to produce soy milk, then a coagulant is added, causing the milk to curdle; the resulting curds are pressed into solid blocks. Normal tofu has a very mild smell, though it’s great at soaking up the smells and flavors of dishes that it’s added to. Unlike regular tofu, stinky tofu is fermented, and its pungent aroma, which is sometimes compared to that of rotting vegetables, comes from the brine it’s made in. Fermentation is the same process that turns cucumbers into pickles; it involves submerging food in a brine and keeping it in a sealed container until yeast and bacteria create chemical changes that make it taste (and smell) different. Stinky tofu is usually fermented in a brine of veggies like bamboo shoots and greens, meat products like dried shrimp, fermented milk, and spices. While stinky tofu’s flavor is stronger than that of normal tofu, it’s not nearly as overpowering as its smell. The dish is creamy and rich, with a sour, somewhat salty flavor. It can be eaten in many different ways: cold, steamed, or fried. It’s usually served with spicy sauce for dipping.
Stinky tofu dates all the way back to China’s Qing Dynasty, which lasted from 1644 to 1912. Unlike many traditional dishes, stinky tofu can be traced to a specific inventor: a scholar-turned-tofu-merchant named Wang Zhihe. In 1669, he journeyed to Beijing from his home in Anhui province to try his hand at becoming part of China’s state bureaucracy. However, he failed the official examination for the job and found himself low on funds after his journey. To keep afloat, Wang set up a tofu stand in the city. His bad luck continued, though, and he ended up with a lot of unsold tofu. Rather than let it go to waste, Wang fermented the tofu in jars. This new, stinky tofu was a hit, as it stood out from Beijing’s other street food offerings. To this day, stinky tofu is mainly sold as a street food, both at permanent food stalls and at pop-up events like festivals and night markets. It’s especially popular in Taiwan, and is considered by many to be Taiwan’s unofficial “national snack food.” Sometimes, pungency is perfection.
[Image description: A plate of five thick tofu squares with shredded vegetables in the center.] Credit & copyright: Pilzland, Wikimedia Commons. The copyright holder of this work has made it available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
Biology Daily Curio #3054
This four-cat sighting is rarer than a four-leaf clover. Conservationists around the world were pleasantly surprised when four snow leopards were recently spotted traversing harrowing mountain terrain together in Pakistan. These elusive cats are notoriously difficult to capture on camera. Native to Asia, snow leopards live at elevations between 6,000 and 18,000 feet. They thrive in cold, snowy, mountainous terrain, and with only around 6,500 individuals left in the wild, any sighting of them is good news.
For big cats, snow leopards aren’t all that big, weighing between 50 and 120 pounds. Their modest size, combined with their camouflage and their ability to deftly navigate nearly impassable terrain, makes snow leopards practically invisible in the mountains. These cats tend to lead solitary lives, with each individual claiming up to 15.4 square miles of territory. Capturing a single snow leopard on camera often involves days or weeks of tracking, so finding four of them together is fairly unheard of. According to the photographer who captured the snow leopards on video in Central Karakoram National Park, the foursome consists of a mother and her three cubs. Aside from being a rare sight, the video is evidence that conservation efforts in Pakistan might be paying off. While snow leopards are proficient predators with no equal in their natural habitat, that habitat is under threat from climate change, human encroachment, and poaching. These amazing animals are currently listed as "vulnerable” by the International Union for Conservation of Nature (IUCN). That's a welcome improvement over their previous status of "endangered," which was changed back in 2017 after conservationists discovered a calculation error in a 2008 population assessment. Who could blame them for the mistake, considering how elusive snow leopards are?
[Image description: A snow leopard sitting in green grass at the Doué-la-Fontaine Zoo in France.] Credit & copyright: Vassil, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide. -
World History Daily Curio #3053
Cults are dangerous at the best of times, and 1995 was not the best of times. On March 20 of that year, a group then known as Aum Shinrikyo terrorized Tokyo with a series of deadly sarin gas attacks on the city's subway system. The cult, which mixed aspects of Buddhism and Christianity while emphasizing a series of doomsday prophecies, was seemingly attempting to speed up the apocalypse with its attacks. Now known as Aleph, the cult carried out the attacks during morning rush hour, targeting several different trains and subway lines, leading to 14 deaths and around 6,000 injuries. It’s considered one of the worst terrorist attacks in postwar Japan, and to this day many survivors continue to deal with the health consequences of being exposed to the gas.
Sarin was invented in 1938 and was originally meant to be used as a pesticide. When it was discovered that the gas worked as a deadly nerve agent, some considered using it as a chemical weapon during WWII, but the idea was scrapped. The consequences would have been disastrous, since sarin can taint water supplies and contaminate food in addition to killing people via inhalation. The gas is a deadly neurotoxin that affects the function of muscles and the respiratory system.
Even three decades later, many survivors of the subway attacks experience severe nerve pains, fatigue, dizziness, and other debilitating symptoms. While there are antidotes against the toxin, there is unfortunately no cure or universal treatment available for those who suffer permanent damage. As for the attacks themselves, they were the deadly climax to a series of smaller-scale attacks by members of the cult, one of which claimed eight lives and injured around 500 people. Luckily, many members involved with the attacks—including the leader—were arrested, though the very last suspect wasn't caught until 2012. Better for justice to come late than never. -
Humanities Daily Curio #3052
Who put the "art" in escape artist? Harry Houdini, of course! The Hungarian-American magician was born this month in 1874, and nearly a century after his untimely death, his name remains synonymous with the type of escape performances that he popularized. Yet, when he wasn’t wowing audiences, Houdini was a surprisingly grounded man. It might seem an unlikely role for a magician, but Houdini was a vocal skeptic in his time, especially as a spiritualist movement swept through America. This movement convinced many people that they could speak to dead loved ones, or pay someone to do so, but Houdini was having none of it.
Houdini started performing at an early age. By the time he was nine, he was part of a trapeze act. At 17, he adopted his stage name, Harry Houdini (his birth name was Ehrich Weisz), and began performing magic acts. As his career exploded around the turn of the century, Houdini became particularly famous for his escape acts. These death-defying stunts, in which Houdini was often chained underwater with seemingly no way out, seemed like magic when he managed to slip his bonds and survive. However, Houdini was a vocal critic of anyone who claimed to have true supernatural powers, especially spiritual mediums.
Houdini felt that, as a performer himself, he had a duty to expose the fraudulent practices rife in the spiritualism industry. His criticism reached the point of activism when he testified before Congress to call for the criminalization of fortune-telling and similar practices. Houdini's skepticism also led to public tensions with his longtime friend, Sherlock Holmes author Sir Arthur Conan Doyle. Doyle was a believer in spiritualism and regularly held seances with his wife, who claimed to be a medium herself. Sometime before he died, Houdini and his wife devised a way to test whether mediumship was actually real: they created a coded message that they agreed to pass to each other from beyond the grave, should they ever truly be contacted. In the decade following Houdini’s death (on Halloween, no less), his wife held seances, challenging any medium to pass along their secret message. None succeeded, and on the 10th anniversary of his death, his widow announced to the world that her husband could not be reached. Even in death, Houdini was still putting on a show and keeping skepticism alive.
[Image description: Harry Houdini making a mysterious face with one hand raised near his cheek.] Credit & copyright: Library of Congress, 1925. This work is in the public domain in the United States because it was published (or registered with the U.S. Copyright Office) before January 1, 1930. -
Sports Daily Curio #3051
Bullfights without blood? It’s no bull! Legislators in Mexico City recently banned "violent" bullfighting. The controversial sport has existed for centuries in various forms; some cultures favored the practice of pitting bulls against bulls, while others had them face off against people. The most popular iteration of bullfighting comes from the Spanish tradition. It started as a sport in which human participants lanced bulls from horseback, but in the 18th century, Joaquín Rodríguez Costillares became one of the first professional matadors to bullfight on foot. He was also responsible for introducing much of the pomp and panache associated with modern bullfighting. Thanks to him, dramatic movements and flamboyant costumes became the norm for matadors.
While bullfighting remains popular in parts of Spain and many Spanish-speaking countries, it has its fair share of detractors. Every year, around 180,000 bulls are killed in bullfights, during which the animals are skewered with short spears. The recent bill in Mexico City, which passed 61 to 1, isn't the first attempt to ban violent bullfighting. A previous ban in 2022 was overturned in 2023, heralding the return of an industry that generates around $400 million a year and employs 80,000 people in Mexico. However, the current ban may play out differently. The bill doesn't ban bullfighting outright, but allows for a less violent version of the sport in which matadors attempt to "skewer" the bulls using harmless poles against a Velcro pad attached to the animals’ backs. Still, the bill attracted fervent opposition from supporters of traditional bullfighting, many of whom consider the sport a point of national pride. Of course, while the bulls will be safer in this new version, the matadors won’t be any better protected. They’ll still have to remember that when you mess with the bull, you get the horns!
[Image description: A painting depicting two 18th century bullfights in a divided arena with a large crowd.] Credit & copyright: Bullfight in a Divided Ring, Attributed to Goya (Francisco de Goya y Lucientes) (Spanish, Fuendetodos 1746–1828 Bordeaux). The Metropolitan Museum of Art, Catharine Lorillard Wolfe Collection, Wolfe Fund, 1922. Public Domain. -
Mind + Body Daily Curio
St. Patrick’s Day might be over, but it’s not too late to learn about this classic holiday dish. Corned beef, usually served with cabbage and potatoes, is considered a classic St. Patrick’s Day meal in the U.S., especially in places with large Irish-American populations, like New York City. There’s one place you won’t find many people eating corned beef on St. Patrick’s Day, though: Ireland. While corned beef does have roots on the Emerald Isle, its connection to the holiday began in the U.S.
Corned beef doesn’t actually contain corn. Rather, it’s a type of cured beef. “Corning” beef involves brining beef brisket in a solution of spices, sodium nitrite, and large-grain rock salt (the salt granules are roughly the size of corn kernels, hence the name). During the curing process, the sodium nitrite reacts with a protein in the beef called myoglobin, turning the meat pink. Corned beef is often served with cabbage, which is boiled in the same brine as the beef, and boiled potatoes.
To understand why the Irish don’t eat much corned beef today, it’s important to look at their history with beef in general. Before the British began taking over large swaths of Ireland in the 12th century, pork was the most commonly eaten meat in Ireland. The Irish did raise cows, but they were used mainly for milk and for plowing. The Gaels, one of Ireland’s native peoples, even considered cows sacred, and wouldn’t eat beef unless the cows were too old to work or produce milk. As England took greater control of Ireland, culminating in a complete takeover in the mid-17th century, the country’s culinary landscape changed.
Beef became big business in Ireland, with many Irish cows being exported to England. This rankled beef farmers living in England, though, and to protect their businesses, the Cattle Acts of 1663 and 1667 made it illegal for Ireland to export any more beef to England. Ireland was therefore stuck with a surplus of cows. The solution? Corned beef. Because salting meat kills harmful bacteria, the technique has been used for millennia to keep stored meat edible for long periods. Since Ireland had a much lower salt tax than England, it had access to higher-quality salt, and thus corned beef became a major Irish export. However, because of England’s legal and economic oppression of Irish Catholics, most Irish people couldn’t actually afford to eat beef.
Beginning in 1845, during Ireland's Great Famine, more than a million Irish immigrants came to the U.S., forming communities in major cities. For the first time, they could actually afford to eat beef…though not the most expensive, fresh cuts. Instead, they opted for corned beef, exported from their own native country. The practice of serving corned beef with cabbage and potatoes actually comes from Jewish culinary tradition, which mixed with Irish tradition in U.S. cities. Back in Ireland, St. Patrick’s Day had been a quiet, religious holiday, but Irish immigrants turned it into a celebration of Irish culture, including Irish food. Thus, corned beef became a holiday staple. In the U.S., it remains so to this day, while back in Ireland, pork has gone back to being the most popular meat. Hey, a new home means new traditions.
[Image description: A plate of corned beef, cabbage, and small, whole potatoes.] Credit & copyright: A1stopshop, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide. -
Biology Daily Curio #3050
Modern medicine wouldn’t be half as modern without one woman’s critical work. Born on this day in 1879, Canadian physician and biochemist Maud Leonora Menten forever changed her fields of research. In doing so, she opened the door for the development of new, lifesaving medical therapies that we still benefit from today.
Born in Port Lambton, Ontario, Canada, Menten earned her medical doctorate at the University of Toronto, at a time when almost no women worked as scientists or physicians. In fact, she was one of the first women in Canadian history to earn the degree. After graduation, Menten’s ambitions immediately clashed with the norms of early 20th-century Canadian society, as women weren’t usually permitted to participate in scientific research there. Undeterred, Menten took a ship across the Atlantic to Germany in 1912, despite fears from friends and family about the safety of the journey (after all, the Titanic had sunk earlier that year).
In Germany, Menten began working with biochemist Leonor Michaelis. They studied enzyme reactions, which were not well understood at the time. Enzymes are proteins that speed up chemical reactions in the body, like those involved in metabolism. Enzymes act upon specific molecules called substrates during these reactions. For example, the enzyme amylase acts upon starch. Together, Menten and Michaelis developed an equation describing how an enzyme's reaction rate depends on how much substrate is available: the rate rises as substrate concentration increases, then levels off as the enzyme becomes saturated. Known as the Michaelis-Menten equation, this breakthrough laid the groundwork for an entire new field of study, known as enzyme kinetics. The equation is still taught in biochemistry classes today, and because it helped researchers understand how drugs are metabolized in the body, many modern medicines wouldn’t exist without it.
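For readers curious about the math, the relationship is usually written today as:

v = Vmax[S] / (Km + [S])

Here, v is the reaction rate, [S] is the substrate concentration, Vmax is the maximum rate the enzyme can reach when saturated with substrate, and Km, the Michaelis constant, is the substrate concentration at which the reaction runs at half its maximum speed.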
From Germany, Menten journeyed to the U.S. where she obtained her PhD in biochemistry from the University of Chicago. After continuing her medical research in the U.S. for a time, she joined the faculty at the University of Pittsburgh’s School of Medicine. Over time, she became a treasured member of the Pittsburgh community, and the head of pathology at the Children's Hospital of Pittsburgh. She retired in 1950. At a time when social and legal barriers kept women from being able to study, let alone practice science, Menten studied it, earned degrees in it, broke new ground in its practice, and ultimately taught it to others. There are some pioneers you just can’t keep down.
[Image description: Four varying glass measuring containers filled with brightly colored liquids.] Credit & copyright: Kindel Media, Pexels -
Physical Therapy Daily Curio #3049
Feeling achy? Break out the nature documentaries. A recent study conducted by researchers from the University of Exeter and University of Vienna found that people experiencing pain felt better when viewing scenes from nature—even if those scenes were shown to them via a screen.
The study used functional magnetic resonance imaging (fMRI) to look at the brain activity of 49 volunteers in real time. The volunteers were given a series of small electric shocks while watching and listening to different types of scenes on a screen. These included scenes of an office, a city, and a lake surrounded by trees. While watching the nature scenes, participants reported feeling less pain. Their brain imaging also showed a decrease in nociception, the process by which the body senses pain.
This surprisingly powerful result could be due to something called attention restoration theory, a psychological theory holding that nature naturally captures human attention and that spending time in it improves overall cognitive function. If a person suffering from low-grade pain is able to focus on something else, they’re less likely to feel the pain as intensely. Now, it seems that simply looking at nature is enough to achieve an effect similar to actually being outside. While pain medication is, of course, still needed to treat severe pain, it’s always useful for medical professionals and the general public to expand their knowledge of medication-free pain relief methods. After all, nature videos are almost always a click away these days, making them a uniquely accessible form of therapy. Previous research has also shown that viewing nature has other beneficial physical effects, such as lowering levels of cortisol, the body’s main stress hormone. Time to follow some nature accounts on social media!
[Image description: A spruce forest in Sweden. Tall trees are visible with greenery on the forest floor.] Credit & copyright: W.carter, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
World History Daily Curio #3048
This waka is making waves once again! A recent archaeological discovery in New Zealand is making headlines not only for its historical and cultural significance, but for its rarity. It wasn’t discovered during an archaeological dig, though, but by a local fisherman and his son who found some unusual-looking timber while walking along a beach. The wood turned out to be parts of a waka, a kind of boat, similar to a large canoe, used by the Moriori people, who were the first known inhabitants of New Zealand’s Chatham Islands and have lived there since around 1500 C.E.
Wakas were used by several Polynesian peoples, including the Moriori, to journey between islands in the Pacific Ocean’s Polynesian Triangle. The Moriori also journeyed between New Zealand’s mainland and the Chatham Islands, almost 500 miles away. In fact, some pieces of the recently-discovered waka came from the mainland. Obviously, the Moriori were skilled navigators, using the stars, wind direction, and other natural indicators to find their way across vast swaths of ocean. Today, between 3,000 and 6,000 people of Moriori descent live on the Chatham Islands and New Zealand’s mainland.
The recent waka find has been one of the most complete in history. After archeologists descended on the site where the original pieces of timber were found, many more waka segments were uncovered, including pieces of the boat’s sail, rope, and corking. Such pieces are rarely unearthed, since they degrade more quickly than large chunks of wood. Some braided fibers were still tied to specially carved holes in pieces of wood, giving insight into the shipbuilding process. In all, more than 450 waka pieces and other items were found at the site. That’s a lot of history uncovered thanks to a simple day at the beach.
[Image description: The surface of water.] Credit & copyright: Matt Hardy, Pexels -
Science Daily Curio #3047
The house might be burning, but at least the roof is intact. As climate change continues to affect Earth’s weather, there’s still some good news about the environment: the ozone layer is doing better and better, according to recent research. Concerns over the state of the ozone layer first emerged in the 1980s, when researchers discovered a hole in the layer over Antarctica. That was bad news considering how crucial the ozone layer is to the health of life on Earth. Made up of molecules containing three oxygen atoms, the ozone layer limits the amount of harmful UV radiation that reaches our planet’s surface. Without it, humans and animals would be much more prone to skin cancer and cataracts. Many plants, including some crops, could also die of excess radiation.
In 1986, researchers from the National Oceanic and Atmospheric Administration (NOAA) set out on an expedition to Antarctica and discovered the culprit behind the missing patch of ozone layer: chlorofluorocarbons. Better known as CFCs, these synthetic chemicals were widely used at the time as refrigerants, insulation, and aerosol propellants, showing up in common, everyday items like air conditioners and hair spray. Following the discovery, an international treaty limiting the use of CFCs was adopted. The benefits of that treaty, known as the Montreal Protocol, are becoming clearer by the day. A recent study from MIT shows that the ozone layer is recovering, and that the recovery is a direct result of CFC reduction. Susan Solomon, one of the study’s authors, said in a university statement, “The conclusion is, with 95 percent confidence, it is recovering. Which is awesome. And it shows we can actually solve environmental problems.” So far, it’s 1-0 for ozone.
[Image description: A blue sky with white clouds.] Credit & copyright: Johann Piber, Pexels -
FREEMind + Body Daily CurioFree1 CQ
There are so many layers to love. With its meaty sauce and layers of pasta, lasagna is one of the world’s best-known foods, and it’s available at just about every Italian restaurant on Earth. Yet, this famously Italian dish didn’t originate in Italy. Like modern mathematics and philosophy, the first form of lasagna actually came from ancient Greece.
Lasagna is a dish made with large, flat sheets of pasta layered on top of one another, with fillings like chopped tomatoes, meat, cheese, or a combination of the three in between the layers. Usually, lasagna is smothered in tomato sauce or ragù, a type of meat sauce, and topped with cheese (usually mozzarella) before being baked and cut into squares for serving.
The lasagna we know today began as an ancient Greek dish called laganon. Like modern lasagna, laganon utilized large, flat sheets of pasta, but these sheets were cut into strips, sprinkled with toppings like crumbly cheese or chopped vegetables, and eaten with a pointed stick. Things changed around 146 B.C.E., when the Romans conquered Greece and began expanding upon Greek recipes. Over the next century, laganon morphed into a Roman dish called lasagne patina, which was cut into squares but varied greatly from modern lasagna when it came to its ingredients. Some recipes called for fish to fill the layers between pasta sheets, others for pork belly or mixed vegetables. Sauce was still not standard for lasagna, though cheese did become one of the most popular Roman fillings and toppings.
Sauce, specifically tomato sauce, didn’t become the gold standard for lasagna until the dish got popular in Naples. By the 1600s, Neapolitans were eating their lasagna with ricotta cheese, ragù, and mozzarella cheese, though the dish still wasn’t served in layers. Then, in 1863, Francesco Zambrini, a scholar of ancient Italian texts from Bologna, Italy, published a lost, 14th-century cookbook called Libro di Cucina. Inside was a recipe for lasagna that called for layering egg pasta sheets with cheese filling. This recipe, mixed with the already-in-vogue practice of serving lasagna with tomatoes and meat sauce, resulted in the beloved dish that’s so popular today. All it took to make it happen was the formation of the Roman Empire, a love for tomatoes, and a long-lost cookbook!
[Image description: Lasagna topped with greens on a plate with silverware.] Credit & copyright: alleksana, Pexels -
FREEParenting Daily Curio #3046Free1 CQ
Grief affects everyone differently, but the one constant is that it’s never easy. Now, at least, British parents who experience a miscarriage will have the right to take bereavement leave thanks to new workers’ rights reforms. The new law is part of changes to the employment rights bill proposed by the Labour Party and extends bereavement leave of up to two weeks to pregnant people who suffer a miscarriage before 24 weeks, as well as their partners. That’s good news for parents who are trying to have children, especially since most miscarriages happen early in the course of a pregnancy.
As tragic as they are, miscarriages are unfortunately extremely common. Though estimates vary, it’s believed that up to 20 percent of pregnancies end in miscarriage, with around 80 percent of them occurring in the first trimester, or in the first 12 weeks. Miscarriages can happen for a variety of reasons, but the most common cause is an issue with the number of fetal chromosomes. Extra chromosomes or missing chromosomes can lead to a fetus or embryo not developing properly, which, in turn, leads to a miscarriage. Viruses, illnesses, and food poisoning can also lead to miscarriages. Miscarriage symptoms also vary widely. Bleeding, cramping, or rapid heartbeat while pregnant can all be signs of a miscarriage, but sometimes there are no symptoms at all. In such cases, the miscarriage might go completely unnoticed, meaning that the actual miscarriage rate could be much higher than is currently estimated. Since miscarriages can have so many causes, many of them can’t be prevented—much of it is down to simple luck. Still, avoiding alcohol, smoking, and particularly risky sports can give a pregnancy a better chance at viability. At least with Britain’s new law, parents will have some time to breathe if bad luck strikes. -
FREEUS History Daily Curio #3045Free1 CQ
This was one march that could turn on a dime. March of Dimes recently appointed a new CEO, making this the perfect time to look back on the nonprofit’s impressive, 85-year history. Not only did the organization play a major hand in helping to eradicate polio, it has pivoted and widened its scope several times throughout its history. Initially named the National Foundation for Infantile Paralysis, March of Dimes was founded by President Franklin D. Roosevelt to combat polio. Infantile paralysis was another name for polio at the time, and the president himself relied on a wheelchair for much of his life due to his own bout with polio in 1921. The catchy name for the organization, March of Dimes, came from a campaign asking the public to send money to the White House in pursuit of a cure. While coming up with the idea for the campaign, popular comedian Eddie Cantor suggested that they call it “The March of Dimes,” a play on the name of a newsreel at the time, “The March of Time.” Much of the money sent to the White House was indeed in the form of dimes and was used to fund research headed by Jonas Salk, the developer of one of the first successful polio vaccines. The campaign was heavily promoted on the radio, and its success helped develop the fundraising model used by other medical nonprofits today.
By the mid-1950s, Salk had successfully developed a vaccine for polio, and by the end of the 1970s, polio was all but eradicated in the U.S. Sadly, Roosevelt passed away in 1945, before he could see the creation of the vaccine. In 1979, the organization officially changed its name to the March of Dimes Foundation. Having fulfilled its original mission, March of Dimes also diversified its efforts. The organization continues to fund research into various childhood diseases while lobbying for better access to prenatal care. As part of its mission statement, March of Dimes acknowledges that the U.S. has some of the highest infant and maternal mortality rates among developed nations. Clearly, the march is far from over.
[Image description: A jar of coins with some coins sitting beside it.] Credit & copyright: Miguel Á. Padriñán, Pexels -
FREEEngineering Daily Curio #3044Free1 CQ
It’s a good thing your eye comes with a spare. Researchers at the Dana-Farber Cancer Institute, Massachusetts Eye and Ear, and Boston Children’s Hospital have found a way to repair previously irreversible corneal damage in one eye using stem cells from a person’s other, healthy eye.
Along with the lens, an eye’s cornea plays a critical role in focusing light. Damage to the cornea, whether from injury or disease, can permanently impair vision or even lead to blindness. The cornea also protects the eye behind it by keeping out debris and germs. Since the cornea is literally at the front and center of the eye, however, it is particularly vulnerable to damage from the very things it’s designed to protect against. Unfortunately, damage to the cornea is notoriously difficult to treat.
Now, researchers have managed to extract what they call cultivated autologous limbal epithelial cells (CALEC) from a healthy eye to restore function to a damaged cornea. The process works like this: CALEC is extracted via a biopsy from the healthy eye, then placed in a cellular tissue graft, where it takes up to three weeks to grow. Once ready, the graft is transplanted into the damaged eye to replace the damaged cornea. One of the researchers, Ula Jurkunas, MD, said in a press release, “Now we have this new data supporting that CALEC is more than 90% effective at restoring the cornea’s surface, which makes a meaningful difference in individuals with cornea damage that was considered untreatable.” Still, as they say, an ounce of prevention is worth a pound of cure. Common causes of corneal injury include damage from foreign objects entering the eye during yard work or while working with tools or chemicals. Too much exposure to UV rays can also damage the cornea, as can physical trauma during sports. The best way to prevent these injuries, of course, is to use eye protection. Even if they can fix your cornea, having someone poke around inside your one good eye should probably remain a last resort.
[Image description: An illustrated diagram of the human eye from the side, with labels.] Credit & copyright: Archives of Pearson Scott Foresman, donated to the Wikimedia Foundation. This work has been released into the public domain by its author, Pearson Scott Foresman. This applies worldwide. -
FREELiterature Daily Curio #3043Free1 CQ
She might be gone, but her work lives on! Pulitzer Prize-winning American author Harper Lee published only two books before passing away in 2016: 1960’s To Kill A Mockingbird and 2015’s Go Set a Watchman. Now, in a great surprise to fans, a collection of short stories that Lee wrote prior to 1960 is set to be published by Harper, an imprint of HarperCollins, this October.
The stories will be part of The Land of Sweet Forever: Stories and Essays, which will also include eight of Lee’s nonfiction pieces printed in various publications throughout her life. Some of the collection’s short stories draw upon themes that are also present in To Kill A Mockingbird, and include elements inspired by her own life in Alabama, where she grew up, and New York City, where she moved in 1949 and lived part-time for around 40 years. Ailah Ahmed, publishing director of the new book’s UK publisher, Hutchinson Heinemann, told The Guardian that the stories “...will prove an invaluable resource for anyone interested in Lee’s development as a writer.”
A famously private author, Lee wrote unflinchingly about the racism that plagued the Deep South during the 1930s in To Kill A Mockingbird. The empathetic voice of Scout Finch, the book’s child narrator, offers some hope for a better future throughout an otherwise somber tale. Lee’s willingness to portray Atticus Finch, a white lawyer, fighting for the rights of Tom Robinson, a Black man unjustly accused of rape, showcases the idea that bravery and empathy are the ultimate antidotes to prejudice, even if injustice ultimately wins the day, as it does in the story. No doubt Lee’s fans will relish the chance to glimpse into the author’s past, to a time before To Kill A Mockingbird forever changed America’s literary landscape. Short stories like this just don’t happen every day.
[Image description: A stack of books without titles visible.] Credit & copyright: Jess Bailey Designs, Pexels -
FREEMind + Body Daily CurioFree1 CQ
Let’s have an entire tray of these crispy confections, s'il vous plaît! Macarons are some of the best-known cookies in the world, and they come in so many bright colors that it’s no wonder they’re native to the fashion capital of the world: Paris, France. Or are they? Although macarons are famous symbols of France that were undoubtedly popularized in Paris, they might not have actually been invented there.
Macarons shouldn’t be confused with macaroons, which are drop cookies made from a paste of shredded coconut. Macarons are sandwich cookies made from a unique list of ingredients. Instead of being made from dough, like most cookies, macarons’ outer shells are made from almond flour, eggs, and powdered sugar that’s been whipped into a crispy meringue. The chewy filling is traditionally made from buttercream, which itself is made by beating butter and sugar together. All sorts of fillings are used in modern macarons, though, including jams, lemon curd, and chocolate ganache. Food coloring gives macarons their color, which means that they can come in just about any shade, and they’re often made purposefully bright to draw attention in window displays.
Macarons likely owe their invention to the long history of almond-based confections, like marzipan, and meringue-based desserts that have been popular in both Europe and the Middle East for centuries. Some believe that an early version of the macaroon was invented in the Middle East and brought to Europe by traders. Others say the cookies were invented in al-Andalus, an area that is now part of Spain, in the early 11th century, then brought to Morocco, where they were eaten during Ramadan. Still others claim that they were invented in Italy and brought to France by an Italian chef, though there are no written records of this. An early version of macarons, which were made from meringue but were not sandwich cookies, were sold in Nancy, France, in the 1790s. The sellers were two nuns who had sought asylum there during the French Revolution, though it’s unclear where their recipe came from. The cookies were so popular in Nancy that the nuns were nicknamed the “Macaron Sisters.”
While we’ll never know for certain where early macarons were first made, the type of macarons we know and love today undoubtedly comes from Paris. Just which Parisian chef invented them is a topic of some debate, however. Some claim that Pierre Desfontaines, a pastry chef at Paris’s famous Ladurée pâtisserie, created the macaron sandwich cookie in the 1930s to match the colorful decor that the pâtisserie was famous for. Around the same time, another chef, Claude Gerbet, was also making macarons at his own Parisian bakery. Some believe that Gerbet invented modern macarons, since the cookies were known in Paris, for a time, as “Gerbets.” What we know for sure is that macarons quickly became an extremely popular, fast-selling snack at pâtisseries and coffee shops throughout the city, and by the mid-1940s they were heavily associated with France, as they continue to be today. Delicate, colorful, and sweet? Très magnifique!
[Image description: A plate of light purple macarons with a matching teacup and tablecloth.] Credit & copyright: Jill Wellington, Pexels -
FREEPhysics Daily Curio #3042Free1 CQ
Here’s some hot news that’s worth reflecting on: volcanic eruptions can turn human brains into glass. In the 1960s, archaeologists unearthed many artifacts and preserved human bodies from the ancient Roman city of Pompeii and the town of Herculaneum, both of which were destroyed by the eruption of Mount Vesuvius in 79 C.E. The bodies from these sites are famous for being incredibly well-preserved by layers of volcanic ash, showing the exact poses and sometimes even expressions of the volcano’s victims in their dying moments. In 2018, however, one researcher discovered something even more interesting about one particular body, which had belonged to a 20-year-old man killed in the eruption. Italian anthropologist Pier Paolo Petrone noticed that there were shiny areas inside the body’s skull, and upon further investigation discovered that part of the victim’s brain and spine had turned into glass. Now, scientists believe they’ve uncovered the process behind this extremely rare phenomenon.
Glass does sometimes form in nature without human intervention, but the process, known as vitrification, requires extreme conditions. It can happen when lightning strikes sand, rapidly heating the grains to over 50,000 degrees Fahrenheit, which is hotter than the surface of the sun. As soon as the lightning is done striking, the sand can rapidly cool, forming tubes or crusts of glass known as fulgurites. Glass can form after volcanic eruptions, too. Obsidian is known as volcanic glass because it’s created when lava rapidly cools. However, 2018 was the first time that a vitrified human organ had ever been discovered. Researchers now believe that they know how it happened. First, a superheated ash cloud from the eruption of Vesuvius swept through Herculaneum, instantly killing those in its wake with temperatures of around 1,000 degrees Fahrenheit. Instead of incinerating victims’ bodies, the cloud left them covered in layers of ash. The cloud then dissipated quickly, allowing the bodies to cool. The brain in question was somewhat protected by the skull surrounding it, allowing it to cool rapidly and form into glass rather than being completely destroyed. It seems this ancient, cranial mystery is no longer a head-scratcher.
[Image description: A gray model of a human brain against a black background.] Credit & copyright: KATRIN BOLOVTSOVA, Pexels