Curio Cabinet / Daily Curio
-
Physics Daily Curio #3042
Here’s some hot news that’s worth reflecting on: volcanic eruptions can turn human brains into glass. In the 1960s, archaeologists unearthed many artifacts and preserved human bodies from the ancient Roman city of Pompeii and the town of Herculaneum, both of which were destroyed by the eruption of Mount Vesuvius in 79 C.E. The bodies from these sites are famous for being incredibly well-preserved by layers of volcanic ash, showing the exact poses and sometimes even expressions of the volcano’s victims in their dying moments. In 2018, however, one researcher discovered something even more interesting about one particular body, which had belonged to a 20-year-old man killed in the eruption. Italian anthropologist Pier Paolo Petrone noticed shiny areas inside the body’s skull, and upon further investigation discovered that part of the victim’s brain and spine had turned into glass. Now, scientists believe they’ve uncovered the process behind this extremely rare phenomenon.
Glass does sometimes form in nature without human intervention, but the process, known as vitrification, requires extreme conditions. It can happen when lightning strikes sand, rapidly heating the grains to over 50,000 degrees Fahrenheit, hotter than the surface of the sun. Once the strike ends, the sand can cool just as rapidly, forming tubes or crusts of glass known as fulgurites. Glass can form after volcanic eruptions, too: obsidian is known as volcanic glass because it’s created when lava rapidly cools. However, 2018 was the first time that a vitrified human organ had ever been discovered. Researchers now believe they know how it happened. First, a superheated ash cloud from the eruption of Vesuvius swept through Herculaneum, instantly killing those in its path with temperatures of around 1,000 degrees Fahrenheit. Instead of incinerating victims’ bodies, the cloud left them covered in layers of ash. The cloud then dissipated quickly, allowing the bodies to cool. The brain in question was somewhat protected by the surrounding skull, allowing it to cool rapidly and turn to glass rather than being completely destroyed. It seems this ancient, cranial mystery is no longer a head-scratcher.
[Image description: A gray model of a human brain against a black background.] Credit & copyright: KATRIN BOLOVTSOVA, Pexels
-
US History Daily Curio #3041
There were no shots fired, but it still kicked things off! The Battles of Lexington and Concord on April 19, 1775, are generally thought of as the first conflicts of the Revolutionary War. Yet, there was another skirmish, of sorts, between British troops and American colonists that took place months before these famous battles, and those who were present for it considered it to be the true start of the war. Today, the standoff is known as Leslie’s Retreat.
On February 26, 1775, British Lieutenant Colonel Alexander Leslie led a regiment of British soldiers from Boston to Salem, Massachusetts. He had received word that a colonial militia had formed there and that it had been stockpiling weapons, including cannons. With an entire regiment at his command, Leslie was confident that he could seize any such weapons. Thomas Gage, Commander in Chief of the British forces in North America, had ordered Leslie to conduct the raid on a Sunday, since he believed that the townspeople would be in church and thus caught off guard. Little did he know that militia member Major John Pedrick had seen Leslie’s troops marching toward Salem and had rushed ahead to warn the town. The Salemites had, indeed, been gathered at church, which only made it easier for Pedrick to pass on his news and for the townsfolk to mobilize.
As he approached the bridge leading into Salem, Leslie found more than he bargained for. Militia members and unarmed townsfolk alike turned out in great numbers to block the streets and stop him from advancing. Fearing the outbreak of a violent battle, Leslie didn’t dare fire on the town. Instead, he was forced into a tense negotiation. According to Charles Moses Endicott, Salem’s unofficial historian who recorded a detailed account of the conflict, Leslie told the townspeople, “I am determined to pass over this bridge before I return to Boston, if I remain here until next autumn.” However, after nearly two hours, Leslie was forced to strike a deal with the townsfolk: he and his troops were allowed to cross the bridge, advance no more than 275 yards into town, then leave without harming anyone. Leslie endured a humiliating march back to Boston, and Endicott later wrote that the standoff represented “the first blow” in the war for independence. It seems that Salem’s hidden cannons really got the cannonball rolling.
[Image description: A black-and-white illustration depicting the Battle of Lexington in the Revolutionary War. Opposing soldiers struggle in a field with tall trees in the distance. Some soldiers fire down from a rocky ridge.] Credit & copyright: The Metropolitan Museum of Art, Battle of Lexington, April 19, 1775, Designed and engraved by John Baker (American, active 1830–40). Bequest of Charles Allen Munn, 1924.
-
Science Daily Curio #3040
Your living space can never be too clean…right? According to a team of American researchers, it actually can, and their proof lies in the International Space Station (ISS). They found that the space station is far more sterile than most environments on Earth, and that could be a bad thing considering the way that human immune systems function.
Researchers began by collecting more than 800 samples from various areas aboard the ISS. When they compared the samples to ones taken from buildings on Earth, like homes and office buildings, they found that microbial diversity on the space station was severely lacking. On Earth, microbes from soil, water, dust, and other sources keep our immune systems robust by exposing them to different stimuli, allowing them to build immunities to common bacteria and other harmful microscopic matter. But on the ISS, almost all of the bacteria come from human skin shed by the astronauts who live and work there. Also worrying was the fact that chemicals from cleaning products used on board seemed to have built up, since fresh air and sunlight can’t help break them down over time as they would on Earth.
Even on our planet’s surface, environments that are too sterile are known to cause health problems, including immune dysfunction, cold sores, and spontaneous allergic reactions. As with many things in space, there is no obvious, simple solution. Certain bacteria could be deliberately added to the ISS, but as of right now there’s no way to know whether that’s safe. After all, bacteria are living things capable of evolving. Just because they behave and adapt a certain way on Earth doesn’t mean they’d do the same thing in space, and unchecked bacterial growth could lead to all sorts of new health problems. For now, space will likely remain a largely microbe-free place. Hey, at least ISS astronauts don’t have to worry about pandemics.
[Image description: A starry sky with some purple visible.] Credit & copyright: Felix Mittermeier, Pexels
-
World History Daily Curio #3039
They’ve been all around the world, but now they’re heading home. The Netherlands recently announced that it will be sending more than 100 bronze sculptures, known as Benin bronzes, back to their original home in Nigeria. The sculptures were looted from Nigeria’s Benin City in the late 19th century, but in recent years a number of countries and individual museums have pledged to return them to their country of origin.
In 1897, Benin wasn’t yet part of Nigeria. It was a kingdom unto itself known as the Edo Kingdom of Benin, and though it enjoyed a good trade relationship with some other nations, it wasn’t willing to establish such relations with the British. At the time, the British were attempting to exert more control over African trade routes, which the Kingdom of Benin didn’t appreciate. When Britain sent Niger Coast Protectorate official James Robert Phillips to Benin City in January 1897 to pressure the Kingdom into a trade deal, he and his men were attacked and killed. In retaliation, the British launched a full-scale siege on the city the following month, burning the royal palace, exiling the Kingdom’s leader, or Oba, and seizing control of the area for themselves. In the process, countless artistic and historical treasures were stolen and sold off to European museums and private collectors. The British eventually colonized the former Kingdom of Benin and incorporated it into Nigeria.
Among the artifacts stolen from Benin was a group of sculptures collectively known as the Benin bronzes. Most of these bronze sculptures are small enough to be carried by one person, which made them easier to steal. Some are objects used in religious ceremonies, but most depict people and animals. Busts of former Obas, statues of men holding weapons, and sculptures of big cats are plentiful among the Benin bronzes. According to the AFP news agency, Eppo Bruins, the Dutch Minister of Education, Culture and Science, recently explained, “With this return, we are contributing to the redress of a historical injustice that is still felt today.” It’s never too late to do the right thing.
[Image description: A small, circular bronze statue with human figures standing around a textured circle.] Credit & copyright: Altar to the Hand (Ikegobo), Edo peoples, late 18th century. The Metropolitan Museum of Art, The Michael C. Rockefeller Memorial Collection, Bequest of Nelson A. Rockefeller, 1979. Public Domain.
-
Mind + Body Daily Curio
Have a hoppy breakfast! Don’t worry, though—there’s not actually any toad in the famed British dish called toad in the hole. This cheekily named breakfast food is actually made with sausage, and it’s been popular in England for centuries.
Toad in the hole is made by baking sausages in a Yorkshire pudding batter made from eggs, flour, and milk. The sausages are normally arranged in a line or other pattern on top of the batter so that they’re half-submerged during baking. This allows them to get crispy on top while their flavor sinks into the batter below. The result is a warm, savory, meaty breakfast dish that’s usually served with onion gravy.
The first written record of toad in the hole comes from England in the 18th century, though dishes that combined meat and pastry, such as meat pies, existed long beforehand. Unlike meat pies, which were considered an upper-class dish due to how much meat they contained, toad in the hole was created as a way for poorer families to make use of whatever bits of meat they had, usually as leftovers. Beef and pork weren’t always available to peasants; a 1747 recipe for the dish called for pigeon meat, while others called for organ meats, such as lamb kidney. As the years went by and England’s lower classes had more opportunities for economic advancement, toad in the hole became a heartier, meatier dish. Today, it’s sometimes served in British schools at lunchtime, but it’s most popular as a breakfast food.
As for the dish’s unusual name, no one really knows where it came from, though we do know that it was never made with actual toad (or frog) meat. The “hole” part of the name might come from the fact that sausages leave behind holes if they’re picked out of cooked batter, while “toad” might be a somewhat derisive reference to the cheap kinds of meat originally used in the dish. The name might also refer to the fact that toads sometimes hide in holes with the tops of their heads poking out to wait for prey, just like the sausages in toad in the hole poke halfway out of the batter. Either way, don’t let its name dissuade you from trying this meaty marvel the next time you find yourself across the pond. It won’t croak, and neither will you!
[Image description: A glass pan full of toad-in-the-hole, five sausages cooked in a pastry batter.] Credit & copyright: Robert Gibert, Wikimedia Commons. This work has been released into the public domain by its author, Robert Gibert. This applies worldwide.
-
Mind + Body Daily Curio #3038
Delicious things shouldn’t be hazardous. Yet, just as cheeseburgers can lead to high cholesterol if consumed in abundance, tuna can lead to mercury poisoning. This dangerous condition can damage the central nervous system and is particularly harmful to children. As for how mercury ends up in fish: the metal is naturally present in the ocean, where bacteria turn it into toxic methylmercury. Plankton absorb this toxic compound, then pass it along to the small fish that eat them, which pass it along to the larger fish that eat them. The larger a fish is, the more mercury it is exposed to, and since tuna reach average weights of around 40 pounds (with some massive ones weighing as much as 2,000 pounds), mercury in tuna meat is bound to be an issue. This is why pregnant women, nursing mothers, and people with certain medical conditions are told to steer clear of tuna, and even healthy people are advised not to eat too much. However, a recent discovery might make mercury-laden tuna a thing of the past, at least when it comes to canned meat.
Researchers from Chalmers University of Technology in Sweden found that when tuna was packaged in a water solution containing cysteine, an amino acid, up to 35 percent of the mercury was removed from the meat. While this is a welcome breakthrough for tuna-lovers everywhere, there may be no need to wait until the new packaging becomes available. For most people, two to three servings of tuna per week are already deemed safe, and different kinds of tuna contain different levels of mercury, making it safe to eat some kinds (such as canned light tuna) more often than others, like albacore. No need to throw out that tuna sandwich; just be mindful of how many you’re eating per week!
[Image description: A can of tuna from above, with some green leaves visible beside the can.] Credit & copyright: Towfiqu barbhuiya, Pexels
-
Biology Daily Curio #3037
You could say that these researchers discovered something fishy. Recently, in the Mediterranean Sea near the island of Corsica, scientists went diving to study fish behavior. There was a problem, though. Every time the researchers went in the water, they took food with them to give to the fish as rewards for following certain commands. However, the seabream in the area always seemed to know who was carrying the food and would swarm that person immediately. The researchers even used other divers as decoys to no avail, and it seemed that their research progress was more or less halted by the hungry, keen-eyed fish.
Instead of giving up, the team simply pivoted and took their research in another direction. Katinka Soller, one of the researchers, spent 12 days training two different species of seabream to follow her around by enticing them with food. She started out wearing a bright red vest but gradually shed it over the course of the experiment. Then she had another diver join her wearing different colors. At first, they were both swarmed by fish, but when it became clear that only Soller was giving out food, the fish ignored the other diver. It seems that humans have been largely underestimating fish cognition, as the seabream were able to differentiate between people based on what those people were wearing. To confirm this, the researchers went down again, this time wearing identical gear. They found that the fish weren’t interested in either of them, since they couldn’t tell which person might have food. Yet even small differences in clothing, like the color of the divers’ flippers, were enough for the fish to distinguish between the two. These brainy fish must be breaming with pride.
[Image description: Two seabream fish against a black background.] Credit & copyright: Beyza Kaplan, Pexels
-
US History Daily Curio #3036
Every delivery has to start somewhere. This month in 1792, President George Washington signed the Post Office Act, creating a nationwide mail service that persists to this day. Before the Post Office Act, the thirteen American colonies had their first taste of a comprehensive mail delivery service thanks to Benjamin Franklin, who developed the colonial mail service. As the American Revolution approached, the value of expanding such a service was clear: should conflict break out against the British, communication between the colonies would be paramount. Naturally, Franklin was chosen as the first postmaster general. This early version of the postal service used couriers to relay messages from the battlefield to the Continental Congress, and it was essential to the success of the revolution. Still, there was no permanent form of nationwide mail delivery until 1792, when the Post Office Act established the Post Office Department.
Besides delivering packages and war correspondence, the early Post Office made newspapers much more widely available by making their delivery cheaper. The Post Office Act also established some important rules that helped people trust the new system. Firstly, the act strictly forbade the government from opening the mail of private citizens for the purposes of surveillance. Secondly, it gave the power of establishing new mail routes to Congress, not the executive branch. Throughout the 19th century, the Post Office was critical for the U.S. as it expanded its territories and its citizens became ever more spread out. During this time, the Post Office Department came up with several innovations, like postage stamps and standardized rates. Nearly two centuries later, the Post Office was reorganized by the Postal Reorganization Act of 1970, which changed its name to the United States Postal Service (USPS). While private parcel delivery companies exist today, many of them rely on the USPS to make their business models feasible. Remember to thank your mail carrier, especially when they’re beset by snow or rain or heat or gloom of night!
[Image description: Three shipping boxes on the ground, with one on a dolly. Two stickers on the boxes read “FRAGILE.”] Credit & copyright: Tima Miroshnichenko, Pexels
-
World History Daily Curio #3035
This gives new meaning to “too much of a good thing.” This month in 1478, George Plantagenet, Duke of Clarence, was supposedly drowned in a barrel of wine. We’ll never know for certain whether this unusual method of execution was actually used, since stories in the 15th century were commonly passed around until they became exaggerated. We do know that, one way or another, the duke was executed for treason. We also know some details about the events, both sad and violent, leading up to his death.
George was the younger brother of Edward IV of England. The brothers’ father led the House of York in the Wars of the Roses against the House of Lancaster over the right to the English throne. While their father died in battle, Edward eventually took the throne. Given a dukedom by his brother, George could have lived the rest of his life in ease and prosperity. Instead, he, along with others in the House of York, was grievously insulted when Edward married Elizabeth Woodville, a widow from the House of Lancaster. The relationship between the two brothers only soured further when George married the cousin of a man who was a vocal critic of Edward. At one point, George even helped lead a rebellion against his own brother, captured him, and held him prisoner for a time, though wartime trouble with the Scots eventually led to Edward’s release.
Edward suspected that George was plotting to overthrow him, and George was forced to flee to France for a time to avoid Edward’s wrath. The brothers briefly reconciled after Henry VI was restored to the throne via political machinations, and George helped Edward defeat him and retake the throne. Then, two things happened that ended the feud between the brothers once and for all. First, George’s wife passed away a few months after giving birth, and he accused one of her female servants, or “ladies,” of poisoning her. Without the proper authority to do so, he had the servant arrested and executed, which angered Edward. Around the same time, someone in George’s household was accused of “imagining the king’s death by necromancy.” When George publicly protested the charge, he was charged with treason himself. Although historical accounts show that he was executed in private, rumors began circulating shortly after his death that he had been drowned in Malmsey wine, an expensive fortified wine from Portugal. Some accounts even claim that George requested the method himself to mock his brother’s drinking habits. With his own brother constantly trying to overthrow him, could you blame the guy for having a drink now and then?
[Image description: A black-and-white illustration of the Duke of Clarence being drowned in a barrel of wine. The Duke is being held upside down by his feet as a guard pushes on his head.] Credit & copyright: John Cassell's illustrated history of England vol. II (London, 1858). Internet Archive. Public Domain Mark 1.0
-
Mind + Body Daily Curio
In many parts of the U.S., temperatures are currently plunging…making it a perfect time to cozy up with some warm, sugary beignets. These doughy treats are heavily associated with New Orleans today, but, like much of The Big Easy’s cuisine, beignets originated in France.
Beignets are sometimes referred to as donuts since they’re also made from deep-fried dough. In fact, beignets are one of only two official state donuts. Unlike most donuts, though, beignets are rectangular and traditionally made from pâte à choux, a French dough made from flour, butter, eggs, and a substantial amount of water. During cooking, the excess water turns to steam, making the dough puff up and become airy. Not all beignets are made this way, as some do use leavened dough, making for a thicker pastry. French beignets were often served with hot chocolate for dunking, while Louisiana beignets are traditionally served with butter and powdered sugar.
People have been frying dough for centuries all over the world, and France is no exception. Beignets began as a 16th-century French Mardi Gras food, served during and after the yearly celebration. Then, during the French and Indian War in the 18th century, the British forced large swaths of French people from their homes in Acadia, a territory spanning parts of modern Nova Scotia, New Brunswick, and Prince Edward Island. Seeking a new French home far from the British, these displaced people came to Louisiana, which was a French colony at the time. The term “French Acadian” was soon shortened to “Cajun,” and the culinary traditions they brought with them from France changed Louisiana’s food landscape forever.
At first, Louisiana beignets remained mostly a Mardi Gras tradition. But their sugary goodness couldn’t be contained, and, in 1862, a coffee stand called Café du Monde opened in New Orleans, selling beignets alongside their drinks. Soon, many other New Orleans restaurants and food stands were selling beignets outside of Mardi Gras season. To this day, Café du Monde sells beignets 24/7. Hey, there’s never a bad time for something this delicious.
[Image description: A tray of beignets covered in powdered sugar on a table with two coffee drinks.] Credit & copyright: Hamalya Comeau, Pexels
-
Mind + Body Daily Curio #3034
Do you ever feel like the world’s a little brighter in the morning, and not just because of the sun? Turns out, feeling better in the morning is more common than previously thought, according to new research coming out of University College London.
“Sleep on it” is age-old advice for anyone fretting over a major decision or dealing with bad news. Now, science seems to confirm that it’s worth listening to. Researchers surveyed almost 50,000 people over a period of two years, asking them to keep track of their moods throughout the day, and the data shows that people do, in fact, feel better in the morning. Moreover, people’s moods fluctuate on a relatively predictable schedule throughout the day and even throughout the week. Generally, people feel their best in the morning, with mood peaking in the late morning. By mid-afternoon, decision fatigue begins to set in, and mood declines. People’s moods continue to decline throughout the evening, reaching their lowest point at midnight. All this is despite the fact that the stress hormone cortisol is at its highest levels in the morning and at its lowest at night.
Surprisingly, despite Monday’s lackluster reputation, people tend to feel better on Mondays and Fridays than they do on Sundays. Specifically, they tend to feel more satisfied with their lives on those days, while happiness peaks on Tuesdays. Does that mean that the best time of the week is Tuesday morning? More research is needed before we’ll know for sure. While researchers accounted for age, health, and employment status in their study, they didn’t gather data on sleep cycles, weather, and other factors that might contribute to fluctuating moods. They also didn’t differentiate between physical and mental well-being this time around. Nevertheless, their research might lead to improvements in mental health care. In particular, they believe that more mental health care should be available later in the day, when people are feeling their lowest. Late-night therapy sessions don’t sound like the worst idea.
[Image description: A grassy field behind a wooden gate at dawn, with sunbeams shining through clouds.] Credit & copyright: Matthias Groeneveld, Pexels
-
Literature Daily Curio #3033
Women face a lot of pressure. That might seem like an obvious statement, but someone had to be the first to write about it. On this day in 1963, The Feminine Mystique by American activist Betty Friedan was published. The book was a frank indictment of the prevailing myths surrounding women’s lives in post-World War II America.
Friedan defined the “feminine mystique” as the prevailing cultural idea that all women should feel fulfilled by dedicating themselves to domesticity. According to this idea, homemaking, raising children, and being dutiful wives to their husbands was all it took for women to be fully realized, completely satisfied individuals. This notion was not just a matter of sociology, but of psychology. Friedan drew inspiration from psychologists who believed that clinging to the ideals of the feminine mystique denied women the chance to grow and develop properly as adults, a particularly salient point in the post-World War II era. During the war, women had been enlisted into the workforce while many working-age men were deployed. Upon the men’s return, the vast majority of women were ousted from the workforce, suddenly denying them financial and social independence. The ideal of the nuclear family was heavily promoted in the U.S. during the Cold War, making the role of housewife a matter of patriotism.
In reality, plenty of women during this time were highly dissatisfied with their lives. Friedan cited a number of different statistics in her book showing that the 1950s and 1960s were a period of social regression for women. Fewer women went to college, fewer women stayed in college, and in interviews with housewives, Friedan found that many of them didn’t find their limited roles fulfilling. Meanwhile, popular culture often blamed this dissatisfaction on the women themselves, attributing their lack of fulfillment to their higher education or professional ambitions. In the decades following the publication of her book, Friedan faced criticism from other feminists as well, largely for statistical inconsistencies and for the increasingly dated scope of her book as more women returned to the workforce. Regardless, The Feminine Mystique is remembered as a landmark book that helped jumpstart a national conversation about women’s rights. After all, there are some problems that should become dated.
[Image description: The feminine gender symbol in black against a pink background.] Credit & copyright: Author-created photo. Public Domain.
-
Literature Daily Curio #3032
Have you ever bought a book because a quote from another author told you to? Such endorsements, printed on the backs of books or on the inside of their dust jackets, are called blurbs, and one publisher is doing away with them. Simon & Schuster is one of the largest publishing houses in the U.S., and like every other publishing house, it has implicitly required its writers to solicit blurbs from other writers for their books. The most common argument in favor of blurbs is that they give potential buyers more confidence in their purchase, especially if the blurb is from an author they’re already familiar with. Others believe that bookstores and other large-scale buyers place a similar level of trust in blurbs, which helps sales.
The practice of acquiring blurbs, however, can be very taxing. Writers, especially those with little name recognition, must ask more established writers to read and endorse their book. Writers’ agents devote much of their time to finding blurb-writers for their clients, and editors, too, have to run through their contact lists to reach out for blurbs. Then there’s the fact that blurbs aren’t always sincere. Many writers exchange blurbs as favors, though some end up writing many more than they ever receive. The amount of time it takes to read a book means that many writers don’t actually finish the entire thing before writing a blurb. Sean Manning, the current publisher of Simon & Schuster’s flagship imprint, considers blurbs a waste of time and believes that they don’t always reflect the artistic merit of the book. After all, many of the world’s greatest novels were published without blurbs, and books with blurb-covered jackets don’t always do well commercially or critically. Still, writers are divided. Some believe that blurbs are meaningless, but others believe they’re an important part of marketing. Only time will tell how blurbless books will do.
[Image description: A stack of books.] Credit & copyright: Poppy Thomas Hill, Pexels -
Music Appreciation Daily Curio #3031
There are instruments that play music, and there are instruments that help define it. A Stradivarius violin, known as the “Joachim-Ma” Stradivarius, recently sold at auction for $10 million. After fees, the final price came out to be $11.5 million for the anonymous buyer, but even that didn’t break the record for the most expensive Stradivarius ever. That honor goes to the “Lady Blunt” Stradivarius, which sold in 2011 for $15.9 million. As eye-popping as the prices seem, there are plenty of reasons why these famed historical violins continue to be so highly prized centuries after they were made.
Handcrafted by Antonio Stradivari in the 17th and 18th centuries, Stradivarius violins are renowned for the quality of their sound. Stradivari made around 1,200 instruments during his career, of which around 500 survive today, but he is known mostly for his violins. Some speculate that his violins are unique due to the wood from which they’re made. Stradivari used spruce, maple, and willow for his instruments, and the specific trees he relied on supposedly grew during the Little Ice Age, between 1300 and 1850 C.E., which made their wood denser than modern wood.
Others believe that there isn’t anything inherently superior about Stradivarius violins, and indeed, in blind tests, high-end violins made today often outperform the legendary instruments. In the case of the Joachim-Ma Stradivarius, however, it’s the instrument’s history that gives it value. It was formerly owned by one of its namesakes, the legendary violinist Joseph Joachim, in the 1800s. Then, it was purchased by another legend and namesake, Si-Hon Ma, a violinist and the inventor of the Sihon mute, a device that can stay attached to a violin to dampen its sound as needed. Ma’s estate donated the Stradivarius to the New England Conservatory (NEC) in 2009, and proceeds from the sale will be used to fund a scholarship program at the conservatory. The conservatory was originally hoping for a final sale price between $12 and $18 million, but hey, $10 million is nothing to sneeze at!
[Image description: A wooden Stradivarius violin against a gray background.] Credit & copyright: "Gould" Violin, Antonio Stradivari, Italian, 1693. The Metropolitan Museum of Art, Gift of George Gould, 1955. Public Domain, Creative Commons Zero (CC0). -
Mind + Body Daily Curio
Happy Valentine’s Day! If you’re dining out with someone special this evening, chances are good that you’ll see lobster on the menu. After all, what could be more romantic and fancy than a succulent, butter-dipped seafood meal in the perfect shade of Valentine’s red? Well, it might be hard to believe, but for much of American history, people would have turned up their noses at such a meal. In fact, in the early 1700s, lobsters were considered the “poor man’s chicken” and were even referred to as the “cockroaches of the sea.”
Lobsters are large, ocean-dwelling crustaceans that come in a variety of colors and sizes; in fact, there are 800 known lobster species. The most commonly-eaten species is the American lobster (also called the Maine lobster), which lives off the North Atlantic coast of the U.S. They’re the largest lobsters in the world, reaching lengths of around 24 inches and weighing up to 9 pounds, though some wild American lobsters have reached gargantuan weights of over 40 pounds. Lobsters are usually cooked whole and, though it’s controversial, they are often killed just before they’re served, usually by being dropped into boiling water. This isn’t just done for freshness or flavor, but because of health concerns. Lobsters’ bodies harbor bacteria that begin to spread quickly upon their death, so there isn’t much time to safely eat a lobster after it dies. Killing it just before serving is the preferred method to avoid food poisoning. Though lobster meat can be added to plenty of dishes, like soup or pasta, lobsters are most commonly eaten whole. Diners use special tools called lobster crackers to break through the shell and reach the light, sweet meat within. Some choose to dip the meat in melted butter, which is meant to enhance its subtle flavors.
It’s safe to say that early American colonists didn’t know what to make of lobsters. Though they’d been eaten in various European countries and used for fertilizer and fish bait by some Native Americans for centuries, English settlers near the Atlantic Ocean were perplexed by the crustaceans. They were so populous at the time that their bodies would sometimes wash ashore in enormous piles. Eventually, the colonists began making use of the plentiful dead lobsters by feeding them to prisoners, slaves, and indentured servants…but since they didn’t understand the health concerns surrounding less-than-fresh lobster meat, this resulted in some serious food poisoning. Supposedly, some servants and workers even demanded clauses in their contracts stating that they wouldn’t eat lobster more than twice per week. Even after it was discovered that serving them fresh prevented food poisoning, lobsters’ reputation remained poor for some time in the U.S.
Strangely, it was the rise of long-distance passenger trains that changed things. Train managers were able to serve fresh lobster to passengers who were unfamiliar with Atlantic seafood and had no inkling of its bad reputation. Passengers, including wealthy travelers, soon became obsessed with the crustaceans, which they saw as a delicacy. By the 1870s, Maine was famous for lobster, and the first lobster pound, or large storage area for live lobsters, was established in Vinalhaven, Maine. By the 1880s, lobster had officially become expensive, and chefs in big American cities began including it on their menus in various forms. Today, lobster maintains a fancy reputation, though it’s also served in cheaper forms, especially on lobster rolls and in soups. You can put rubber bands on their claws, but these pinchy critters can’t be constrained by social class.
[Image description: Lobster dish with head and tail visible in a white bowl with a wine bottle and glasses in the background.] Credit & copyright: ROMAN ODINTSOV, Pexels -
Nutrition Daily Curio #3030
Throw out those boxed dyes—vegetables are the key to keeping your youthful hair color. At least, that’s what one group of researchers from Nagoya University in Japan seems to have concluded in a recent study. While hair-graying is a natural and harmless process caused by the breakdown of pigment-delivering cells, not everyone is happy to lose their original hair color as they age. Now, it seems they may have a choice in the matter, as scientists have identified a common antioxidant that can prevent gray hair in mice. For their study, researchers focused on three antioxidants: luteolin, hesperetin, and diosmetin. They took mice that were bred to go gray just as humans do, and exposed them to all three antioxidants both orally and topically. Luteolin turned out to be the most effective; it prevented the mice from going gray regardless of how it was given. This could be good news, since luteolin is fairly common and inexpensive. It can be found in a variety of vegetables, like celery, broccoli, onions, and peppers, and it’s even available as a supplement. The antioxidant prevents gray hair by supporting the health of melanocytes, specialized cells that help distribute melanin, or biological pigment, in hair, skin, and eyes. Usually, these cells have a fairly short lifespan, and they die off around the time that an average person reaches middle age. Luteolin seems to extend the cells’ lifespan, but it still isn’t a cure-all for every hair-related issue. It isn’t known to improve hair’s shine or texture, and it isn’t believed to prevent hair loss. Hey, at least you can keep your hair color as you age, if not your hair count!
[Image description: Sliced broccoli in a metal bowl.] Credit & copyright: Cats Coming, Pexels -
Games Daily Curio #3029
What’s the latest youth craze? Ask your grandma! In major cities across the U.S., the ancient Chinese game of mahjong is gaining traction with young people. Until recently, it was largely seen as a game for older, usually Chinese or Chinese-American, players. Mahjong means “sparrows” in Mandarin, and it has its roots in the 19th century, though its exact origins are murky. Early on, mahjong had regional variations unique to different provinces in China, and though around 40 different versions of the game still exist today, the vast majority of modern mahjong is based on the version that gained popularity in the early 1900s. The game is played with 144 tiles called pais, traditionally made of bamboo, cow bone, or ivory, though today they’re usually plastic. Each piece features an image of a sparrow, a Chinese character, or other symbols. Players take turns drawing and discarding tiles, trying to form matching sets. The first player to create a hand of 14 tiles—meaning four sets and a pair—wins.
While the game was largely unknown outside of China for much of its early history, it gained popularity in the West thanks to American businessman Joseph Babcock. Babcock learned of mahjong while working for Standard Oil and living in Shanghai prior to WWI. He started importing mahjong sets to the U.S. in the 1920s, where the game became popular with wealthy women who could afford the expensive, hand-carved sets and had the time to play. In addition to the Chinese Americans who had enjoyed mahjong for years, Jewish families in the U.S. embraced the game starting in the 1950s, and they form a significant proportion of American mahjong players to this day. In recent years, mahjong clubs have been popping up in cities like Los Angeles and New York, where people from all walks of life gather together, united by their passion for the game. Indeed, it has always been a game that transcended social and cultural barriers, having even been played by the Empress Dowager of China in the late 1800s. From the royal court to social clubs, all you need are some friends and a mahjong set to pick up this ancient game.
[Image description: Rows of Mahjong tiles with numbers and Chinese letters.] Credit & copyright: HandigeHarry at Dutch Wikipedia. This work has been released into the public domain by its author, HandigeHarry at Dutch Wikipedia. This applies worldwide. -
Art Appreciation Daily Curio #3028
Properly maintained, a canvas painting can last for centuries. In times of war, however, it takes no time at all for it to be lost. Such was the fate of countless works of art in Ukraine, where the 2022 Russian invasion destroyed some museums and left others badly damaged or looted. Fortunately, many surviving pieces have been taken to a museum in Germany and will be featured in a moving exhibition. When Russia’s invasion started, the dedicated workers of the Odesa Museum of Western and Eastern Art evacuated the building’s canvas tenants, taking them to a storage facility in Lviv. The museum’s collection included paintings from the likes of Andreas Achenbach, Cornelis de Heem, and Frits Thaulow, as well as other artifacts of cultural and historical significance.
However, poor storage conditions led to concerns that the artifacts were being damaged by moisture and mold. In 2023, 74 of the paintings from Odesa were sent to Germany for restoration, some of them already removed from their frames. In Germany, the paintings were cleaned up and, last month, most of them were placed on display at the Gemäldegalerie, an art museum in Berlin. They form a new exhibition called From Odesa to Berlin: European Painting From the 16th to the 19th Century, which is divided into nine “chapters.” Sixty of the paintings are the refugees from Odesa, while an additional 25 come from the Gemäldegalerie’s own collection. The exhibition is a showcase of both Ukrainian artists and the Odesa museum’s collection, which was largely overlooked by the rest of Europe. As the museum’s press release stated, the exhibition shows the “multifaceted nature of the Ukrainian collection, which has hitherto been little known in Western Europe.” What better time for art to shine its uniquely hopeful light?
[Image description: The Ukrainian flag flying at the Hall of Warsaw in Poland.] Credit & copyright: Cybularny, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
Science Daily Curio #3027
Who knew that you can use the same stuff to clean dirt and water? Scientists at the University of Waterloo have found a way to remove a harmful form of chromium from the environment using a form of charcoal called biochar. Biochar is a popular soil amendment made by burning organic waste in a low-oxygen environment. Like regular charcoal, biochar’s particular properties can vary depending on what types of plants are used to make it, but no matter what, it’s a carbon-rich substance that can improve soil health. One of the ways it does this is by absorbing toxic pollutants in the soil, but the recent discovery of its effects on chromium has researchers particularly excited.
Chromium is the element that makes stainless steel stainless. The heavy metal exists in two very different forms. One is chromium(III), which is not only harmless, but essential to the proper function of the human body as a micronutrient. Chromium(VI), on the other hand, is a dangerous carcinogen produced during industrial processes like leather tanning and stainless steel manufacturing. Symptoms of chromium(VI) exposure can be as mild as skin rashes or upset stomachs, but it can also lead to stomach ulcers, respiratory problems, reproductive issues, damage to the kidneys or liver, and ovarian cancer. Chromium(VI) can be difficult to remove from groundwater, but that’s where biochar comes in. Researchers found that when biochar was added to contaminated water, it was highly effective at absorbing chromium(VI). In fact, biochar converted the toxic chromium(VI) into non-toxic chromium(III). Apparently, in the presence of biochar, the chromium isotopes fractionated, with lighter isotopes being removed faster than heavier ones. Tracking this ratio of isotopes also makes it much easier for researchers to monitor the progress of groundwater cleanup. It’s a dirty job, but biochar can do it.
[Image description: A pile of biochar on a white tarp.] Credit & copyright: USDA Forest Service photo by Deborah Page-Dumroese. Public Domain. -
Daily Curio
This just might be the coolest dessert there is. As we kick off Black History Month, we're paying homage to Alfred L. Cralle, the Black American inventor who, on February 2, 1897, patented the ice cream scoop. Cralle was interested in mechanics from an early age and, after studying at Wayland Seminary in Washington, D.C., took jobs as a porter at a hotel and pharmacy in Pittsburgh, Pennsylvania. It was these jobs that inspired his most famous invention. Cralle saw firsthand how difficult it was to scoop ice cream one-handed, since it stuck to regular serving utensils. So, he developed a scoop with a built-in scraper to dislodge the ice cream easily, even with one hand. Such scoops are the standard in ice cream shops to this day. Cralle’s invention allowed for ice cream to be served faster, which undoubtedly helped it become one of the most important aspects of soda fountain culture in the early 20th century. It was from that culture that today’s featured food was born: the iconic sundae known as the banana split.
A banana split is an ice cream sundae traditionally made with three scoops of ice cream arranged between two halves of a banana. Toppings can vary, but usually include whipped cream, chopped walnuts or peanuts, chocolate syrup, caramel sauce, and diced strawberries or pineapple. The whipped cream is often topped with maraschino cherries.
Soda fountains played a big role in the banana split’s creation. Sometimes called candy kitchens or ice cream parlors, soda fountains evolved from early American pharmacies where pharmacists would compound, or mix, different medicinal ingredients together. Soda water was originally used in stomach-soothing tonics, with flavors added to cover up the taste of medicine. But soda soon became popular all on its own, especially when people started adding scoops of ice cream to carbonated drinks to create floats. From floats, loaded with toppings like whipped cream and chopped nuts, came sundaes—creations made entirely from ice cream in different configurations.
As for where exactly the banana split was invented and by whom, that’s a topic of heated debate. In fact, the towns of Latrobe, Pennsylvania, and Wilmington, Ohio, are both so sure that the banana split was created in their respective communities that they each hold festivals celebrating its supposedly local invention. Those in Latrobe claim that the split was invented by pharmacist David Strickler in 1904, after a young customer demanded “something different.” Strickler’s quick thinking in splitting a banana and making it into a decorative sundae served him well, as his invention became such a hit that he soon had to order custom-made glass serving bowls that could more easily accommodate bananas. Wilmington, on the other hand, attests that the banana split was invented by E.R. Hazard, a restaurateur who came up with the eye-catching, topping-heavy sundae in 1907 to attract college students to his restaurant. While most food historians feel that there’s more evidence for Latrobe’s—and Strickler’s—claim, we’ll never know for certain. There’s no doubt that banana splits would be a whole lot harder to scoop without Cralle’s invention, though!
[Image description: A banana split with three mounds of whipped cream topped with cherries against a pink background.] Credit & copyright: David Disponett, Pexels