Curio Cabinet / Daily Curio
Art Appreciation / Daily Curio #3018
Some artists live and die by what critics say…others just can’t be bothered to care. Édouard Manet, born on this day in 1832, was definitely in the latter camp. Born in Paris, France, Manet had a typical upbringing and education for the time, but he showed an interest in painting even as a young student. His father had aspirations for him to become a lawyer, but Manet wasn’t interested. When Manet refused to enroll in law school, his father wouldn't fund his artistic education, so Manet applied to the naval college instead, but was rejected. He then worked aboard a transport vessel before returning to Paris and applying to the naval college again. When he was rejected a second time, his father finally relented and allowed Manet to pursue art.
As a painter, Manet cared very little about what critics thought. He went against the grain and eschewed the biblical and mythological themes that were popular in his time. Manet preferred to paint subjects that he personally related to or was familiar with, depicting common people and common scenes. Moreover, his style sought to capture movement and light in their ephemeral states, which angered critics but inspired other artists who would go on to form the growing Impressionist movement. One painting that showcases his style is his portrait of Berthe Morisot, whom Manet painted to convey a sense of motion as she turns to look at the artist. Manet painted her hair as unkempt and her outfit as somewhat abstract, leaving her exact posture to the viewer’s imagination. Another of Manet’s paintings, Olympia, caused quite a controversy upon its debut. It depicts a nude woman reclining while looking brazenly at the viewer instead of looking away demurely, and it was considered vulgar at the time. Despite being a significant influence on the Impressionists, Manet himself never completely associated with them. Defiant and independent to the end, he painted what he liked, as he liked, staying true to his own vision of art and nothing else. It wasn’t until after his death that Manet was fully appreciated as the influential artist he was, rather than the lightning rod of controversy critics had branded him as. They say “different strokes for different folks,” but some folks clearly had it wrong.
[Image description: A portion of Edouard Manet’s Berthe Morisot painting, showing a woman in an elaborate hat and fur coat.] Credit & copyright: The Cleveland Museum of Art, Bequest of Leonard C. Hanna Jr. 1958.34. Public Domain, Creative Commons Zero (CC0) designation.
US History / Daily Curio #3017
Why did they call it a ration when it was so irrational? Pre-sliced bread became popular starting in the late 1920s, and it quickly became so ingrained in consumers’ preferences that when it was banned during WWII, it caused quite an uproar. Bread has been around for millennia, but pre-sliced bread has only been around for about a century and a half. The very first bread-slicing device was invented in 1860 and used parallel blades to cut a loaf of bread all at once. However, it wasn’t until Otto Frederick Rohwedder of Iowa invented an automated version in 1928 that pre-sliced bread really took off. Soon, innovations saw machines that could slice and wrap bread at the same time, and consumers were glad to buy loaves that they could more conveniently consume. There was also an added benefit: because sliced bread came wrapped and consumers only had to take out as much as they needed at a time, the bread lasted longer compared to whole loaves, which had to be completely unwrapped to slice at home.
When World War II food rationing began in the U.S., Claude R. Wickard, Secretary of Agriculture and head of the War Food Administration, issued Food Distribution Order 1, which banned sliced bread in order to save on the nation’s supply of wax paper. The American public went into an immediate uproar, and Wickard was criticized in the press for the short-sighted measure. Firstly, the lack of sliced bread meant that housewives all over the nation had to vie for the same supply of bread knives, which were made of steel, another rationed resource. Secondly, because the machines that wrapped bread had also sliced it, wrapping now had to be done by hand, which increased labor costs. Thirdly, since whole loaves went stale faster, more food was wasted during a time when families could only buy as much as their ration books allowed. Fortunately, the government reversed course on the decision, and the ban was lifted less than two months after it took effect. Let’s raise a toast to sliced bread.
[Image description: Slices of bread in front of a divided white-and-gray background. Some slices are white bread and some have whole grains on top.] Credit & copyright: Mariana Kurnyk, Pexels
Biology / Daily Curio #3016
Don’t read this if you can’t stand to have your heart broken. Many penguins famously mate for life, a romantic fact that has helped make them some of the world’s best-loved birds. However, a 13-year study into the breeding habits of little penguins (Eudyptula minor) has revealed that the diminutive birds are surprisingly prone to “divorce.” Also known as fairy penguins, little penguins, as their name suggests, grow to only around 14 inches tall and weigh about three pounds. But big drama sometimes comes in small packages. Researchers from Monash University in Australia tracked the breeding habits of around a thousand little penguin pairs on Phillip Island. The island is home to the world’s largest colony of the species, with a population of 37,000 or so. Of all the pairs they studied, around 250 ended up “divorced,” with the pairs splitting up and seeking new breeding partners.
So, what causes penguin divorce? Struggles with infertility, mostly. Penguin couples were much more likely to part ways when they failed to produce offspring. While divorce rates could be as high as 26 percent in some years, rates went down when the colony saw more successful hatchings. Marital bliss isn’t determined by offspring alone, though. According to one of the researchers, Richard Reina, little penguins aren’t exactly known for their faithfulness. In a university press release, he explained, “In good times, they largely stick with their partners, although there’s often a bit of hanky-panky happening on the side.” It might be hard to swallow the idea of adorable penguins divorcing and cheating on each other, but this study into little penguin behavior is important for the future of conservation. Current efforts to protect penguin species are focused on the impact of climate change, but studies like this show that there are complex social dynamics to consider as well when trying to maintain a healthy population. No word yet on whether there are little penguin divorce lawyers.
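For a quick sense of scale, the figures above can be checked with simple arithmetic. The sketch below uses only the article’s rounded numbers (roughly 1,000 pairs tracked, roughly 250 splits), not the Monash study’s actual dataset, and it highlights that the 26 percent figure is a peak annual rate while the 250-of-1,000 figure is cumulative over the whole study.

```python
# Rough arithmetic check of the little penguin "divorce" figures cited above.
# These are the article's rounded numbers, not the study's raw data.

pairs_tracked = 1_000    # "around a thousand" pairs followed over 13 years
pairs_divorced = 250     # "around 250 ended up divorced"

cumulative_rate = pairs_divorced / pairs_tracked
print(f"Share of tracked pairs that split over the study: {cumulative_rate:.0%}")  # ~25%

# By comparison, the reported peak *annual* divorce rate was about 26 percent,
# so the two figures describe different things: one cumulative, one per-year.
peak_annual_rate = 0.26
print(f"Reported peak annual rate: {peak_annual_rate:.0%}")
```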
[Image description: A little penguin standing just underneath some type of wooden structure.] Credit & copyright: Sklmsta (Sklmsta~commonswiki), Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
Mind + Body / Daily Curio #3015
You can paint the town red all you want, but you probably shouldn’t eat all the red you want. The FDA recently banned Red No. 3, a ubiquitous food coloring agent. Also called erythrosine, Red No. 3 is a synthetic dye made from petroleum, and it’s been standing out like a red thumb for decades thanks to being a known carcinogen. While its use in cosmetics was banned years ago, the dye is still used in over 9,200 food products. The FDA is giving companies until the beginning of 2027 to remove the dye from their formulas, bringing an end to a decades-long battle by activists to ban the dye from the food supply. Red No. 3 was first approved for use in food in 1907, and since then, it has been the go-to dye to give sodas, candies, and other sweets a vibrant, cherry-red coloration. The color may make the food appealing to the eye, but it’s not exactly kind to the rest of the body. The dye was first identified as a possible carcinogen in the 1980s, when it was shown to cause cancer in male rats that were exposed to high doses. Since then, groups like the Center for Science in the Public Interest have been pressuring the FDA to ban the dye, while several states did so of their own accord. For example, the dye has been banned in California since 2023. Outside the U.S., the dye has already been banned by several countries in the European Union, Australia, and Japan, and the list is growing. However, Red No. 3 isn’t the only dye to cause controversy. Red No. 40 has been linked in recent years to behavioral issues in children, but it’s not facing a ban yet. It seems red is a tough color to dye for.
[Image description: A red rectangle.] Credit & copyright: Author’s own photo. Public Domain.
Mind + Body / Daily Curio
You could call it a kingly dish…too bad it’s been forgotten! Chicken à la king was once one of the U.S.’s most popular dishes. It was a hit at dinner parties in the 1950s and 60s, and could also be found in plenty of fancy restaurants. Today, you’d be hard pressed to find it anywhere. So, what happened?
Despite its royal name, chicken à la king is a fairly simple dish, made from easy-to-source ingredients. It consists of chopped chicken in a cream sauce with veggies like mushrooms, tomatoes, and peas. Sherry is sometimes added to the sauce. The dish is usually served over noodles, rice, or toast, making chicken à la king a sort of sauce itself.
No one knows who invented chicken à la king, though most theories suggest it dates back to the mid to late 1800s. Some claim that it was invented by a chef at the famous New York restaurant Delmonico's, where it was called “Chicken à la Keene.” There are various stories of other New York City chefs creating the dish, though one tale links chicken à la king to Philadelphia. Supposedly, in the 1890s, a cook named William "Bill" King created it while working at the Bellevue Hotel.
Wherever it came from, there’s no doubt that chicken à la king’s popularity began in New York City, where several fancy restaurants began serving it in the early to mid 1900s. Between 1910 and 1960, the dish appeared on more than 300 menus in New York City. Beginning in the 1940s, dinner parties with friends and neighbors became one of the most popular ways for suburbanites to socialize. Chicken à la king, with its short prep time and easy-to-find ingredients, quickly became one of the most commonly found foods at such parties, not to mention at weddings and other large-scale get-togethers.
As for why the dish fell out of fashion…no one’s really sure. As the dish became more common, it’s possible that quicker and cheaper versions of it convinced some people that it didn’t live up to its original hype. Or perhaps its meteoric rise in popularity was also its downfall, and people simply got sick of it being served at every major function. One thing’s for sure: chicken à la king was here for a good time…not for a long time.
[Image description: Two pieces of raw chicken with sprigs of green herbs on a white plate.] Credit & copyright: Leeloo The First, Pexels
Science / Daily Curio #3014
Can you feel the heat? Devastating wildfires are wreaking havoc in populated areas of California, and as firefighters continue to battle the blazes, you may be wondering why it seems like such an uphill fight. As our climate warms, it’s increasingly important to understand how wildfires start, how they spread, and why fighting them can be extraordinarily difficult. Wildfires can start in a number of natural ways, from lightning strikes to the concentrated heat of the sun, but the most common culprit is human interference. Most wildfires are started by simple, careless actions, like discarding lit cigarettes in a dry area or failing to follow proper safety procedures with a campfire. Regardless of how they start, though, wildfires can grow out of control at an unbelievable pace. The speed at which a wildfire grows is based on three main factors: fuel, weather, and topography.
The density and material properties of a fire’s fuel (lush vegetation vs. dead, dry vegetation) can greatly impact how fast the fire spreads, but once a fire grows intense enough, the difference matters very little. Even healthy, green vegetation can be quickly dried out by intense heat, and as long as there is net energy from a given source of fuel, the fire will spread. Topography, or the geography of a given location, matters a lot too. For example, fire tends to spread faster uphill because hot gases from the fire rise upward to preheat and dry out vegetation ahead of the flames. In grass fires, flames can spread up to four times faster uphill. Then, there is the weather. Humidity affects how quickly a fire spreads, since the fire has to burn away ambient moisture in the atmosphere, but in California, firefighting efforts have largely been hampered by strong winds. Wind provides more oxygen for the flames, helping them burn hotter while carrying ashes and other flammable material over long distances, potentially spreading the fire to unconnected areas. Strong winds can also make it difficult to fly over the wildfires and douse them from above, hindering firefighters’ ability to contain the spread. Once wildfires spread to densely populated areas, the fires can easily destroy most buildings, whether they’re made of wood or brick, although the latter would last a little longer. If you’re given orders to evacuate ahead of an approaching wildfire, don’t try to weather the firestorm with a garden hose. It’s better to lose your home than your life.
[Image description: A nighttime wildfire burning among pine trees at Lick Creek, Umatilla National Forest, Oregon.] Credit & copyright: Brendan O'Reilly, U.S. Forest Service- Pacific Northwest Region. This image is a work of the Forest Service of the United States Department of Agriculture. As a work of the U.S. federal government, the image is in the public domain.
World History / Daily Curio #3013
Think going to the dentist hurts? Imagine going to one over 4,000 years ago, well before anesthetics existed. Archaeologists working in the ancient Egyptian capital of Memphis have uncovered the tomb of a dentist who may have once served the pharaoh himself. The discovery was made in Saqqara, a necropolis within the city that is home to some of the best-preserved tombs from ancient Egypt. One of the tombs belonged to someone named Tetinebefou, and by all accounts, he was a man who wore many hats, many of them medical. Inscriptions in the tomb list his various titles, which include priest, physician, director of medicinal plants, conjurer of the goddess Serket, and chief dentist.
Ancient Egypt had some of the most advanced medical practices for its time, and many physicians specialized in a specific part of the body. Tetinebefou, however, held multiple credentials, several of which are relatively rare. The title of “conjurer of the goddess Serket” sounds esoteric, but it indicates that he specialized in treating venomous wounds, Serket being a goddess who healed snake bites and scorpion stings. The titles of chief dentist and director of medicinal plants are much rarer. While dentistry in ancient Egypt was most certainly practiced, there is very little physical evidence of it, and it’s unclear how advanced it was. Researchers have found some evidence of tooth extractions, prosthetics, and treated dental abscesses. This includes the discovery known as the “Giza bridge,” where a loose tooth was seemingly stabilized by connecting it to a neighboring tooth using gold wire. However, there’s still debate as to whether or not the Giza bridge and other prosthetics were really used in living patients. It’s possible that these were created for some other, non-medical purpose, or affixed to a body after death. Whatever the case, the tooth must be out there somewhere.
[Image description: A black-and-white diagram of teeth, one from the side and one from above.] Credit & copyright: Archives of Pearson Scott Foresman, donated to the Wikimedia Foundation. This work has been released into the public domain by its author, Pearson Scott Foresman. This applies worldwide.
Biology / Daily Curio #3012
There’s a new tissue in town…sort of. Scientists at the University of California, Irvine have rediscovered a largely forgotten type of skeletal tissue, and their findings have massive potential in the field of regenerative medicine. Most cartilage in the body is held together and keeps its shape thanks to an extracellular matrix, a network of molecules that provides rigidity and structure. Then there are adipocyte fat cells, which are soft and fluctuate in size based on water and food availability. Cartilage forms connections between bones, and the same kind of cartilage was thought to provide the structure for our noses, ears, and other soft, flexible body parts. However, researchers have now discovered that there is another type of skeletal tissue, called lipocartilage, which doesn’t rely on an extracellular matrix. Instead, it’s filled with lipochondrocytes, which are similar to adipocyte fat cells but don’t change size or shape over time, regardless of food availability. They’re soft, yet can maintain their shape under pressure, like resilient bubbles.
The existence of lipochondrocytes was actually first discovered in 1854 by Dr. Franz von Leydig, who noticed the presence of fat in the cartilage of rats’ ears. Just as in von Leydig’s rats, lipocartilage is found in human ears and noses, allowing them to maintain their shape while remaining highly flexible. Von Leydig’s discovery, however, was largely unexplored by the wider medical community until scientists at UC Irvine began their own research into the subject. The discovery of lipocartilage could change the way that surgery on cartilage works in the future. Currently, the best option to replace and repair damaged cartilage is to harvest donor tissue from the ribs. In the future, lipochondrocytes could be used to grow custom-made cartilage, which might even be 3D printed into specific shapes. All this because someone decided to take a closer look at rats’ ears!
[Image description: A diagram of the skeletal system with individual bones named.] Credit & copyright: LadyofHats (Mariana Ruiz Villarreal), Wikimedia Commons. This work has been released into the public domain by its author, LadyofHats. This applies worldwide.
Science / Daily Curio #3011
Even legends come and go, but dreams never really die. In the 1960s, acting legend Marlon Brando purchased an atoll, a ring-shaped island surrounding a lagoon, in the South Pacific with the hopes that it would one day become a “university of the sea” for scientists. Now, researchers are honoring the late actor’s ambitions. Located around 30 miles north of Tahiti, Tetiaroa is an atoll consisting of 12 coral islets (also known as motus) around a lagoon. In his autobiography, Brando wrote, “The lagoon was… infused with more shades of blue than I thought possible: turquoise, deep blue, light blue, indigo blue, cobalt blue, royal blue, robin’s egg blue, aquamarine.” But for Brando, Tetiaroa was more than just a tranquil tropical getaway. The actor saw its potential as a haven for marine research, a potential that is now being realized thanks to the Tetiaroa Society, a nonprofit focused on education and conservation of the atoll. Back in 2014, the Tetiaroa Society opened the Ecostation, a research facility for visiting scientists. Not long after, they built 35 private villas for visitors of all kinds, including students from nearby communities who are given a chance to learn about their Polynesian heritage. The resort provides around 70 percent of the funding for the Tetiaroa Society, which allows them to make use of the atoll’s relatively pristine state to study the effects of climate change, microplastics, and shoreline erosion. The islets of the atoll are home to green sea turtle nesting sites, which scientists keep a close eye on to see how environmental problems affect them. Unfortunately, the atoll has its share of unwanted intruders in the form of mosquitoes and other invasive species, but its isolated nature is allowing scientists to research better ways to eradicate them. That kind of vigorous research must take atoll!
[Image description: The surface of water.] Credit & copyright: Matt Hardy, Pexels
Mind + Body / Daily Curio
Savory, cheesy, creamy, sweet—it’s not every day that a dish can be all these things at once! French onion soup is one of the most ubiquitous French dishes in the world, and though it’s popular year-round, it makes a particularly good shield against frigid winter weather. Praised as a hangover cure in its native France, this soup has a surprisingly long history…though the version we eat today is fairly new.
French onion soup is made by cooking fried onions in meat stock or water. Croutons and cheese are then added, and the cheese (usually Gruyère) is browned to form a sort of crust on top of the bowl. Some versions are topped with puff pastry, which breaks apart into the soup as one eats it. While various kinds of onion soup have been popular in France for centuries (and in ancient Rome before that), they were considered peasant dishes for most of their history. That’s because onions were so ubiquitous that just about anyone could get their hands on them. Early onion soups in France didn’t usually contain cheese, but they did contain bread, since soup helped to make cheap or stale bread more palatable.
Onion soup didn’t reach France’s upper classes until the 18th century, when the Polish King Stanislas, a father-in-law of Louis XV, tried it at an inn. He loved it so much that he learned the recipe and passed it on to his royal relatives, though this version still didn’t have the cheese we associate with French onion soup today. In fact, cheese didn’t become a regular part of French onion soup until the mid-19th century, and it was all thanks to Les Halles, a famous open-air market that existed in Paris from 1135 to 1973. There, in the mid-19th century, restaurants around the market competed to have the most popular onion soup. Their not-so-secret weapon was cheese, which they began serving au gratin, or with a browned crust. This version of French onion soup reached all social classes. Les Halles workers would eat it for breakfast, and upper-class, late-night Parisian revelers swore by it as a hangover cure—a reputation that the soup maintains in France to this day.
French onion soup made its way to Britain thanks to the 1827 cookbook The Cook and Housewife's Manual, and by the 20th century it was a favorite dish in London’s Piccadilly Circus area. Though it could be found in French restaurants in America around the same time, it didn’t get truly popular in the U.S. until the 1960s, thanks to American cookbook author and television personality Julia Child. Her French recipes caused French cuisine to explode in popularity in the United States, and today French onion soup can be found not only at French restaurants in the U.S., but at sandwich shops, brunch spots, and even diners. Once again, this simple soup proved that it can’t be constrained by social class or even location.
[Image description: A white bowl of soup with puff pastry on top.] Credit & copyright: Valeria Boltneva, Pexels
Mind + Body / Daily Curio #3010
Majestic, graceful and revered creatures of the sky. None of these words have ever been used to describe Canada geese. After getting fed up with the overpopulation of aggressive, poop-spreading Canada geese, the city of Denver, Colorado, finally took action against the birds, which are normally protected by the government. Despite their name, Canada geese aren’t limited to Canada. They’re abundant throughout much of North America, and while they aren’t particularly dangerous, they have been known to clash with people when conditions become cramped. Still, they’ve rarely been as much of a nuisance as they were in Denver, where around 5,000 geese used to fly every summer. Even for a big city, that many geese can pose some serious problems. Each goose is capable of producing a pound of droppings a day, which equates to a whopping 35,000 pounds of droppings a week, or around 140,000 pounds a month. All that poop not only makes picnics downright unsanitary, it can have a serious impact on local waterways.
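The poundage figures above follow from simple multiplication. Here is a minimal back-of-the-envelope check, assuming only the article’s rounded inputs (5,000 geese, one pound of droppings per goose per day, and a four-week month):

```python
# Back-of-the-envelope check of the Denver goose-dropping estimate above.
# Inputs are the article's rounded figures, not measured data.

geese = 5_000                    # peak summer goose population cited for Denver
pounds_per_goose_per_day = 1     # "a pound of droppings a day" per goose

pounds_per_week = geese * pounds_per_goose_per_day * 7
pounds_per_month = pounds_per_week * 4   # treating a month as four weeks

print(f"Droppings per week:  {pounds_per_week:,} lbs")    # 35,000 lbs
print(f"Droppings per month: {pounds_per_month:,} lbs")   # 140,000 lbs
```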
Canada geese are normally protected by the Migratory Bird Treaty Act, but the city of Denver was so overrun that it was able to obtain special permission in 2019 to cull some of the birds. Since then, the culling program has been wildly successful, with only around 757 geese left in the city as of late 2024. Today, the city is employing gentler anti-goose methods, like a remote-controlled fanboat dubbed the “Goosinator” that scares geese off of lakes. And while the city’s former, lethal approach garnered some criticism, it also served a dual purpose. The birds’ meat was donated to charities, which in turn gave it to families in need. That’s surely a case of killing two birds with one stone.
[Image description: A Canada goose walking with one foot in the air.] Credit & copyright: Adrian Pingstone (Arpingstone), Wikimedia Commons. This work has been released into the public domain by its author, Arpingstone. This applies worldwide.
Mind + Body / Daily Curio #3009
Camp’s closed for good, and for good reason! A camp started in the 1990s to serve children with HIV/AIDS is closing down, but it’s not as bad as it sounds. For 30 years, children with HIV/AIDS could go to One Heartland to spend some time camping in a judgment-free space. The camp was founded in 1993 by 22-year-old Neil Willenson, who was inspired by the story of a 5-year-old boy in Wisconsin who faced daily discrimination due to his HIV-positive status. When the camp was first founded, it was called Camp Heartland, and it didn’t have a permanent site of its own. Instead, it rented space for a week at a time at various locations throughout the Midwest. By 1996, the camp was able to secure a permanent home in Willow River, Minnesota, around 40 minutes from Duluth. The camp largely served children who contracted HIV from their mothers during pregnancy, birth, or breastfeeding. Children with HIV were heavily discriminated against, and it was difficult to find an accepting space where they could be open about their HIV-positive status.
Now, after decades of invaluable service to a marginalized community, One Heartland is finally closing its doors. The reason isn’t for want of support or lack of funding, though. It’s just that after 30 years, so few children have HIV or face discrimination because of it that the camp is no longer needed. Nowadays, perinatal HIV transmission rates are extremely low in comparison to when the camp was founded, thanks in large part to the advancement of antiretroviral therapy (ART). Patients with HIV can now live long, healthy lives with the right treatments. In fact, while ART can’t cure HIV, it’s effective enough that it can render the virus undetectable in many patients. One Heartland was an incredible thing to have, but it’s also strangely positive to see the need for it become a thing of the past.
[Image description: A loop of red ribbon against a white background.] Credit & copyright: Anna Shvets, Pexels
Running / Daily Curio #3008
Litterbugs, beware! If these athletes catch you littering, you’ll have nowhere to run…because they’re already running. Joggers around the world are preparing for the World Plogging Championship 2025, a competition where the goal isn’t just to finish the course, but to clean it up in the process. The name “plogging” is a combination of “jogging” and “plocka upp,” which means “to pick up” in Swedish. The unique sport was created in 2015 by Erik Ahlström of Sweden, who came up with the idea while jogging in Stockholm and observing the litter there. The next year, the city hosted its very first “plogga,” where runners gathered together to jog and pick up litter as a large group. Since then, the sport has spread around the world, and Championships have been held in Italy since 2021. The latest race was held in September of 2024, bringing 80 competitors from 13 different countries to Gandino and Bergamo. Over the course of several races, including trail and urban races, competitors collected over 2,800 pounds of litter. The runners were judged not just on their time, but on the amount and quality of the litter they collected along the way. The organizers used an algorithm to determine the amount of CO2 that the litter would have otherwise emitted into the atmosphere, and combined that with the time and elevation changes (in the case of the trail competition) to create a final score. Serbian Milos Stanojević won the men’s division, while Italian Donatella Boglione won the women’s division, with scores of 211,436 and 122,803 points, respectively. While competitors have already started training for this year’s championship, proponents of plogging say that anyone can participate in the sport. There are currently around three million ploggers who clean up the environment as they stay fit, and Ahlström himself has said that it’s not necessary to jog the entire time. In an interview with The Plastic Runner, he explained, “One does not necessarily have to run or jog—plogging can be done walking down the street, on the golf course, biking, paddle boarding, swimming or wherever you are.” It’s good to see that this innovator is still plugging for plogging.
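The article notes that scores combine the CO2 the collected litter would otherwise have emitted with race time and, for trail races, elevation change, but it doesn’t give the organizers’ actual formula. The sketch below is a purely hypothetical illustration of how such a composite score might be assembled; the function name, weights, and example values are invented for demonstration and are not the official World Plogging Championship algorithm.

```python
# Hypothetical composite plogging score built from the three factors the
# article mentions: avoided CO2, race time, and elevation gain.
# The weights and structure here are illustrative assumptions only.

def plogging_score(co2_avoided_kg: float, time_minutes: float,
                   elevation_gain_m: float = 0.0) -> float:
    """Higher avoided CO2 and more climbing raise the score; slower times lower it."""
    co2_points = co2_avoided_kg * 100        # reward environmental impact most heavily
    climb_bonus = elevation_gain_m * 0.5     # reward tougher trail courses
    time_penalty = time_minutes * 2.0        # penalize slower finishing times
    return co2_points + climb_bonus - time_penalty

# Example: a trail runner whose litter haul avoids ~50 kg of CO2-equivalent,
# finishing in 90 minutes with 400 m of elevation gain.
print(plogging_score(co2_avoided_kg=50, time_minutes=90, elevation_gain_m=400))  # 5020.0
```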
[Image description: Plastic litter on a sandy beach.] Credit & copyright: Ron Lach, Pexels
FREEEngineering Daily Curio #3007Free1 CQ
This dry winter air has us parched. If only we could pull water from thin air! Actually, fog catchers have been doing just that for decades, making it easier to hydrate some of the driest parts of the world. Now, the technology has made a leap forward thanks to a new innovation from the Canary Islands. Fog catchers were invented in the 1970s by Chilean physicist Carlos Espinosa to bring water to Chile’s arid Antofagasta region. One of the driest places in the world, the northern region was experiencing a severe drought when Espinosa developed the idea of placing fine mesh at high altitudes, where moisture is abundant, to catch that moisture and drain it into containers below. It worked, and fog catchers have since been deployed in other parts of the world to great success. However, the technology hasn’t changed much until recently. Researchers in the Canary Islands have developed a new technique called “cloud milking.” A new device, called a fog collector, gathers moisture directly from clouds while solving some of the problems that plague traditional fog catchers. Since fog catchers are made from a fine mesh, both the mesh and the structures that hold it up are easily damaged by strong winds. The solution involved taking a little inspiration from nature: fog collectors gather moisture using fine metal fronds modeled after pine needles, which allow air to pass through freely, even in strong winds. The collected water drains directly into the ground without using any energy, and there’s no need to dig wells. So far, the new invention has proved to be a resounding success. As part of an EU-backed project called Life Nieblas (Spanish for “fog”), the Canary Islands have deployed fog collectors to combat the increasing desertification of recent years. When you can’t find an oasis, make your own.
[Image description: Fog over a rainforest, from above.] Credit & copyright: Ayacop, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide. -
FREEMind + Body Daily CurioFree1 CQ
How did you ring in the new year? If you ventured out to a party, chances are that champagne played a part in your celebrations. For centuries, this unique, sparkling wine has played a major role in all sorts of festivities, yet its creation was something of a fluke, and its signature bubbles were once considered a flaw.
Champagne is a type of sparkling wine from the Champagne wine region in northeastern France. The borders of the region are specifically defined by an appellation, a legal designation which also lays out rules about how champagne can be produced. Only certain grapes can be used, the three most common varieties being pinot noir and pinot meunier, which are red grapes, and chardonnay, which is green. One of the most important production rules is that the wine must undergo secondary fermentation inside the bottle, which is what causes it to become carbonated. Modern champagne is always pale gold to light pink in color.
In the 5th century, ancient Romans planted the first vineyards in what is, today, France’s Champagne region. By the time the Kingdom of the West Franks was established in 843 CE, different regions in the area were already known for producing their own varieties of wine. Hugh Capet, a descendant of Charlemagne who was crowned King of France in 987, was so fond of the Champagne region’s wine that he featured it prominently at royal banquets, and other European leaders would journey from afar just to have a taste. Modern champagne-lovers might have been taken aback by this wine, though, since it lacked bubbles.
At the time, the Burgundy region, in eastern France, was most famous for its dry, red wines. It was much harder to produce red wine in Champagne, to the north, since colder winters made grapes harder to grow and often kept them from ripening fully. This harsh weather also led to modern champagne’s most famous feature: its bubbles. Much to early Champagne-region winemakers’ displeasure, bottled wine stored there during winter would often cease to ferment because of the cold. This was all well and good until spring, when dormant yeast cells awakened and started the fermentation process up again. Since the wine had already been bottled, the carbon dioxide released during fermentation had nowhere to go, which sometimes caused the bottles of wine to explode. This led early Champagne-region wine to be nicknamed “le vin du diable,” or “the devil’s wine.” Even if the bottles didn’t explode, winemakers weren’t pleased to find bubbles in their wine, since they considered it a flaw.
In the late 17th century, the wine of the Champagne region gained one of its most famous champions: a monk named Dom Pérignon. As cellar master of the Abbey of Hautvillers, he established a set of rules for growing grapes and creating wine in the region that many winemakers after him came to follow. However, it’s not completely clear how Pérignon felt about champagne’s bubbles. One common myth claims that, upon tasting sparkling wine for the first time, the monk shouted, “Come quickly, I am tasting stars!” But the quote actually came from an 1800s wine advertisement. Still, by the early 18th century plenty of people had finally warmed up to champagne’s effervescent nature, most famously Philippe II, Duke of Orléans, who made the wine a hit amongst French nobility. The wine’s association with the upper class made it even more popular, and winemakers in the Champagne region began making their wine sparkle on purpose…though it was a tricky process to perfect. Plenty of wine bottles still exploded every year until the 19th century, when better glass-making methods allowed for bottles strong enough to withstand the pressure.
Despite several political upheavals that affected the Champagne region during and after the 18th century (including two world wars), its wine has remained popular the world over. Today, champagne’s upscale reputation has made it a go-to drink for fancy parties and big celebrations. Here’s hoping that your 2025 is filled with a fun moment for every bubble in your champagne glass.
[Image description: Glasses of champagne on a table shown at a tilted angle.] Credit & copyright: Rene Terp, Pexels -
FREEMind + Body Daily Curio #3006Free1 CQ
Are you a night owl or a morning person? Do you know someone who’s both? Some people seem to naturally thrive on less sleep than others, and modern research is shedding some light on why. Sleep deprivation is no joke. Sleep was once thought to be just a way for the body to rest, but in recent years scientists have been learning about its many important functions. While sleeping, the body actually performs crucial brain maintenance, replenishing energy stores and flushing away toxins that build up throughout the day. The brain also consolidates long-term memories during sleep. No wonder, then, that extreme sleep deprivation can have devastating—even fatal—consequences.
For most people, it takes around seven to nine hours of sleep each night to feel rested, but a small percentage of people do fine with just four to six. They’re called “natural short sleepers,” and they’re now said to have a condition called Short Sleeper Syndrome (SSS). It sounds like a detrimental disease, but people who have SSS show no ill health effects from their diminished sleep. Not only do they sleep less than most people, they also have an easier time falling asleep, often wake up without the need for an alarm, and have more energy and a better mood during the day. The lack of negative side effects is leading scientists to believe that people with SSS are getting higher-quality sleep than those without the condition, which is why they can reap benefits from sleeping for such a short time. While SSS appears to have a genetic component (scientists have identified seven genes associated with SSS), researchers believe that studying other mechanisms behind SSS could help those who struggle to get quality sleep. Of course, there are some things that anyone can do to get the most out of their sleep: going to bed at the same time each night, getting plenty of sun during the day (especially in the morning), and keeping bedrooms dark and quiet. Eating too much or drinking alcohol before bed can also decrease sleep quality. It’s all a tad more complicated than chamomile at night and coffee in the morning.
[Image description: A black alarm clock sitting on a nightstand on top of a notebook beside a blue-and-white porcelain cup.] Credit & copyright: Aphiwat chuangchoem, Pexels -
FREEDaily Curio #3005Free1 CQ
This is the one time we want to soak up microplastics. As recent research has shown, microplastics are everywhere and in everything. They’re notoriously difficult to remove once they’ve taken hold somewhere, and their environmental and health effects are only now beginning to be understood. Fortunately, a group of researchers at Wuhan University in China might have come up with a way to remove microplastics from water using biodegradable materials. Microplastics are defined as any piece of plastic five millimeters or smaller, and an estimated 15.5 million tons of them are sitting on the ocean floor alone. That’s not even counting the microplastics that wash up on shore to mingle with sand or the ones in bodies of water further inland. With plastic production only set to increase in the coming years, battling microplastics seems like a hopeless task. But cleaning this mess might just require the right sponge.
Researchers in China have managed to develop a sponge made of cotton cellulose and squid bones—both relatively inexpensive materials—that can simply soak up microplastic particles from water. The sponge acts like a filter, and when the researchers tested it in different bodies of water, they found that it was 99.9 percent effective at removing microplastics while maintaining efficiency for several decontamination cycles. That’s another thing: the sponge can be reused many times in addition to being completely biodegradable. There are some limits, though. Firstly, the sponge is only effective at removing microplastics floating around in water; it can’t do much about removing what’s already mixed in with sediment. Secondly, using the sponge would require a means to safely contain whatever microplastics have been removed, which is a separate problem entirely. Still, it could help prevent further contamination of water if scaled up and deployed widely. Even when the cleaning task is this momentous, it seems you can’t go wrong with a humble sponge.
[Image description: The surface of water under an open sky.] Credit & copyright: Matt Hardy, Pexels -
FREEPolitical Science Daily Curio #3004Free1 CQ
As 2024 draws to a close, we're taking another look at the life of President Jimmy Carter, who passed away on December 29 at 100 years old.
For better or worse, modern American politics are a bombastic affair involving celebrity endorsements and plenty of talking heads. Former President Jimmy Carter, who recently became the first U.S. President to celebrate his 100th birthday, lived a different sort of life than many modern politicians. His first home lacked electricity and indoor plumbing, and his career involved more quiet service than political bravado.
Born on October 1, 1924, in Plains, Georgia, James Earl “Jimmy” Carter Jr. was the first U.S. President to be born in a hospital, as home births were more common at the time. His early childhood was fairly humble. His father, Earl, was a peanut farmer and businessman who enlisted young Jimmy’s help in packing goods to be sold in town, while his mother was a trained nurse who provided healthcare services to impoverished Black families. As a student, Carter excelled at school, encouraged by his parents to be hardworking and enterprising. Aside from helping his father, he also sought work with the Sumter County Library Board, where he helped set up the bookmobile, a traveling library that served the rural areas of the county. After graduating high school in 1941, Carter attended the Georgia Institute of Technology for a year before entering the U.S. Naval Academy. He met his future wife, Rosalynn Smith, during his last year at the Academy, and the two were married in 1946. After graduating from the Academy that same year, Carter joined the U.S. Navy’s submarine service, though it was a dangerous job. He even worked with Captain Hyman Rickover, the “father of the nuclear Navy,” and studied nuclear engineering as part of the Navy’s efforts to build its first nuclear submarines. Carter would have served aboard the U.S.S. Seawolf, one of the first two such vessels, but the death of his father in 1953 prompted him to resign so that he could return to Georgia and take over the struggling family farm.
On returning to his home state, Carter and his family moved into a public housing project in Plains due to a post-war housing shortage. This experience inspired him to work with Habitat for Humanity decades later, and it also made him the first president to have lived in public housing. While turning around the fortunes of the family’s peanut farm, Carter became involved in politics, earning a seat on the Sumter County Board of Education in 1955. In 1962, he ran for a seat in the Georgia State Senate, where he made a name for himself by targeting wasteful spending and laws meant to disenfranchise Black voters. Although he failed to win the Democratic primary in 1966 for a seat in the U.S. Congress (largely due to his support of the civil rights movement), he refocused his efforts toward the 1970 gubernatorial election. After a successful campaign, he surprised many in Georgia by advocating for integration and appointing more Black staff members than previous administrations. Though his idealism attracted criticism, Carter was largely popular in the state for his work in reducing government bureaucracy and increasing funding for schools.
Jimmy Carter’s political ambitions eventually led him to the White House when he took office in 1977. His Presidency took place during a chaotic time, in which the Iranian hostage crisis, a war in Afghanistan, and economic worries were just some of the problems he was tasked with helping to solve. After losing the 1980 Presidential race to Ronald Reagan, Carter and his wife moved back into their modest, ranch-style home in Georgia where they lived for more than 60 years, making him one of just a few presidents to return to their pre-presidential residences. Today, Carter is almost as well-known for his work after his presidency as during it, since he dedicated much of his life to charity work, especially building homes with Habitat for Humanity. He also wrote over 30 books, including three that he recorded as audio books which won him three Grammy Awards in the Spoken Word Album category. Not too shabby for a humble peanut farmer.
[Image description: Jimmy Carter’s official Presidential portrait; he wears a dark blue suit with a light blue shirt and striped tie.] Credit & copyright: Department of Defense. Department of the Navy. Naval Photographic Center. Wikimedia Commons. This work is in the public domain in the United States because it is a work prepared by an officer or employee of the United States Government as part of that person’s official duties under the terms of Title 17, Chapter 1, Section 105 of the US Code. -
FREEWork Daily Curio #3003Free1 CQ
Home is where the agriculture is. A previously struggling village of just 300 residents in India is bouncing back after it won ownership rights to a nearby bamboo forest. Its success is due to little-known pieces of legislation that might end up helping other communities in similar situations.
For generations, the rural village of Pachgaon in Central India was in decline. Its population was dwindling as its residents went to cities in search of work, and those who stayed struggled to make ends meet thanks to violent seasonal floods that frequently destroyed their crops. But the villagers saw a potential answer in the Panchayat Act of 1996 and the Forest Rights Act of 2006, historic pieces of legislation that were designed to allow panchayats (tribal village councils) to apply for “community forest rights papers.” The papers would in turn allow villagers to harvest various natural resources from the forests they inhabited, but many communities were unaware of their rights. The people of Pachgaon, however, sought the help of activist Vijay Dethe. Together, they applied for community forest rights with the government in 2009 and finally received them in 2012. Today, the villagers of Pachgaon have the right to work 2,486 acres of forest land and to harvest the plentiful bamboo that grows in the area.
Bamboo is used for everything from scaffolding and setting concrete to paper production, making it an in-demand resource. Different species of bamboo have different properties, so some are better suited for certain purposes than others. The bamboo from Pachgaon, for example, isn’t suitable for being turned into pulp for paper mills, but it has plenty of other uses. In the past ten years, harvesting bamboo has brought in 34 million rupees (around $400,000) to the village, and some residents who had moved away to cities have come back. Thanks to the availability of work in the village, residents can now make a comfortable living. And unlike traditional crops, the bamboo forests aren’t affected by flooding, so there is no seasonal threat to their livelihood. Meanwhile, the operation is managed by the gram sabha (village assembly), and profits are distributed equitably among the workers. Notably, the workers are paid equally regardless of gender, and there is no formal hierarchy in the management of operations. Seems like no one’s getting bamboozled there.
[Image description: Green bamboo against a dark background.] Credit & copyright: Valeriia Miller, Pexels -
FREEMind + Body Daily CurioFree1 CQ
Here’s one from the holiday archives: A look back at the history of one of the season’s most festive drinks!
Love it or hate it, there’s no doubt that it’s festive. Eggnog is possibly the most divisive of all holiday drinks, but it’s also one of the most enduring. Eggnog has a surprisingly long history, and though it’s associated with homey holiday parties today, it was once considered too fancy for everyday drinkers.
Modern eggnog is an alcoholic cocktail most often made with cream, sugar, egg yolks, whipped egg whites, nutmeg, and either rum, brandy, or bourbon. Cinnamon is sometimes added for an extra festive kick. It’s easy to see why not everyone is eager to down a glass of the frothy concoction—eggnog may be sweet, but plenty of people will pause at the thought of drinking eggs. Yet, at the time of eggnog’s invention, eggs were a fairly normal ingredient. Eggnog is thought to date back to 13th-century England, where it was named after two words: “grog,” meaning rum, and “noggin,” meaning a wooden mug. The cocktail evolved from posset, a nonalcoholic celebratory drink that included milk, eggs, and figs and was often served as punch at social gatherings. Like posset, early eggnog was served hot. It didn’t even include alcohol until the 17th century, when celebrants added sherry to the mix. Since both sherry and eggs were expensive in Europe at the time, eggnog was considered an upper-class drink, and was mainly enjoyed by the aristocracy.
Things changed when European settlers began making their way to the U.S. The colonies included many farms, so eggs were widely available, and, unlike wine and sherry, rum and whiskey weren’t heavily taxed. So, alcoholic American eggnog began making its way into colonial celebrations, including Christmas parties. It’s thought that the drink became associated with winter because it was originally served hot, and since Christmas is the biggest wintertime celebration, the two were naturally conflated.
Eggnog remained warm until the early 1900s, when the addition of ice to many cocktails convinced Americans to try it cold. The chill has stuck since then, and even most Europeans take their eggnog cold today. We’re guessing that anyone hesitant to try an egg-heavy cocktail wouldn’t warm up to the idea if it was served hot!
[Image description: A container labeled “Egg Nog” with holly on the label behind a glass of eggnog with a striped straw, on a table with holiday decorations.] Credit & copyright: Jill Wellington, Pexels