Curio Cabinet / Daily Curio
-
Biology Daily Curio #3070
The colors of spring are always a sight to behold, but some of them we can’t actually see. While we’ve known for decades that there are certain colors the human eye can’t detect, new research has uncovered a previously unknown one—and has even helped a few people to see it.
Humans see color because of light-sensitive cells in our eyes called cones. Some cones are sensitive to long wavelengths of light, some to medium wavelengths, and others to short wavelengths. While short-sensitive cones can be stimulated by blue light and long-sensitive cones by red light, no light stimulates medium-sensitive cones on their own; anything that activates them also activates their neighbors. To see what would happen if these medium-sensitive cones were stimulated directly, U.S. researchers first mapped the retinas of five study participants, noting the exact positions of their cones. Then, a laser was used to stimulate only the medium-sensitive cones in each person’s eye.
Participants reported seeing a large patch of color different from any they’d seen before, described as an impossibly saturated blue-green. The new color has been dubbed “olo,” a name based on the binary code 010, which indicates that only the medium-sensitive cones were activated. To ensure that participants had actually seen the same color, they each took color-matching tests. When given an adjustable color wheel and asked to match it as closely as possible to olo, all participants selected a teal color.
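Here’s a minimal sketch of that naming scheme (our own illustration, not code from the study; the L, M, S ordering is assumed, though the medium cone sits in the middle digit either way):

```python
# Illustrative only (not code from the study): encode which cone classes a
# light stimulates as a three-letter name, writing "o" for 0 (off) and "l"
# for 1 (on), one letter per cone class.
def cone_code(stimulated):
    order = ("L", "M", "S")  # long-, medium-, short-wavelength cones (assumed ordering)
    return "".join("l" if cone in stimulated else "o" for cone in order)

print(cone_code({"M"}))       # "olo" -> binary 010, only medium-sensitive cones
print(cone_code({"L", "M"}))  # "llo" -> long and medium cones activated together
```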
As amazing as the results seem, some scientists are dubious that olo is actually its own color, claiming that, though it can only be seen via unnatural stimulation, the color itself is just a highly saturated green. As much as we’d love to see whether they’re right, we’re not quite ready to have lasers flashed in our eyes. For now, we’ll stick with regular, springtime green.
[Image description: A digital illustration representing rainbow light shining through a triangular, white prism.] Credit & copyright: Author-created illustration. Public Domain.
-
Sports Daily Curio #3069
When you’re at a baseball game, the only sound sweeter than the crack of a bat is the peal of a pipe organ. On April 26, 1941, a pipe organ was played for the first time at a professional baseball game, creating an unexpected musical tradition that has lasted for decades. While the sound of a pipe organ is heavily associated with baseball today, live music was once something of a novelty at large sporting events. The first musician to play a pipe organ at a ballpark was Roy Nelson, who entertained fans at Wrigley Field in Chicago. At the time, the music couldn’t be played over the loudspeakers, so Nelson’s performance was a pre-game event. Due to copyright concerns (since the games were being aired on the radio), Nelson was only able to play for two days, but the trend caught on anyway. In 1942, Gladys Goodding, a silent-film musician who had experience playing large events at Madison Square Garden, became the first professional organist in baseball history. Her music, which punctuated different parts of the game and encouraged audience participation, made her something of a legendary figure. She even earned the nickname “The Ebbets Field Organ Queen” during her tenure playing for the Brooklyn Dodgers. Her career as a baseball organist lasted until 1957, when the team moved to Los Angeles. Other ballparks wanted musicians of their own, and even other sports were eager to get in on the action. For example, organist John Kiley played for the Celtics basketball team, the Red Sox baseball team, and the Bruins ice hockey team in Boston. While it ultimately didn’t catch on in other sports, today organ music is associated with baseball games almost as much as it’s associated with churches. Of course, dedicated fans would probably tell you that there’s little difference between baseball and religion.
[Image description: A black-and-white photo of a baseball on the ground.] Credit & copyright: Rachel Xiao, Pexels
-
World History Daily Curio #3068
Nobody likes Mondays, but you’ve probably never had one as bad as this. On Easter Monday in 1360, a deadly hailstorm devastated English forces in the Hundred Years' War so badly that they ended up signing a peace treaty. The Hundred Years' War between England and France was already a bloody conflict, but on one fateful day in 1360, death was dealt not by soldiers, but by inclement weather. King Edward III of England had crossed the English Channel with his troops and was making his way through the French countryside, pillaging throughout the winter. In April, Edward III's army was approaching Paris when they stopped to camp outside the town of Chartres. They weren't in any danger from enemy forces, but they would suffer heavy losses regardless. On what would come to be known as "Black Monday," a devastating hailstorm broke out over the area. First, a lightning strike killed several people; then massive hailstones fell from the sky, killing 1,000 English soldiers and 6,000 horses.
It might seem unbelievable, but there are modern records of hailstones as wide as eight inches, weighing nearly two pounds. That’s heavy enough to be lethal. Understandably, the hailstorm was seen as a divine omen, and Edward III went on to negotiate the Treaty of Brétigny. According to the treaty, Edward III was to renounce his claims to the throne of France and was given extensive territory in France in exchange. The treaty didn't end the Hundred Years' War for good. The conflict started up again just nine years later, after the King of France accused Edward III of violating the terms of the treaty. The war, which began in 1337, didn’t officially conclude until 1453. Maybe weirder weather could have ended it sooner!
[Image description: Hailstones on ice.] Credit & copyright: Julia Filirovska, Pexels
-
US History Daily Curio #3067
San Francisco is no stranger to earthquakes, but this one was a particular doozy. This month in 1906, the City by the Bay was devastated and permanently reshaped by what would come to be known as the Great 1906 San Francisco Earthquake. On the morning of April 18, 1906, at 5:12 AM, many San Francisco residents were woken up by foreshocks, smaller earthquakes that can occur hours to minutes ahead of a larger one. Just 20 seconds or so later, an earthquake with a magnitude of 7.9 hit the city in earnest, shaking the ground for a full minute. The epicenter of the earthquake was on the San Andreas Fault, where 296 miles of the fault's northern portion ruptured, sending out a destructive quake that could be felt as far north as Oregon and as far south as Los Angeles. The earthquake was so powerful that buildings toppled and streets were torn apart, but that was only part of the event’s destructive power. There's a reason that it's sometimes called the Great San Francisco Earthquake and Fire. The ensuing flames, caused by burst gas pipes and upended stoves, caused almost as much damage as the earthquake itself. Over the course of four days, 28,000 buildings across 500 blocks were reduced to rubble and ash. The damage came to around $350 million, but the loss of property paled in comparison to the loss of life. An estimated 3,000 people died in the earthquake, and around 250,000 people were left homeless in its aftermath. The disaster had just one silver lining: geologic observations of the fault and a survey of the devastation proved to be a massive help in understanding how earthquakes cause damage, and the city was quickly rebuilt to be more earthquake- and fire-resistant. No matter what, though, the real fault lies with the fault.
[Image description: A black-and-white photo of San Francisco after the 1906 earthquake, with many ruined buildings.] Credit & copyright: National Archives Catalog. Photographer: Chadwick, H. D. (U.S. Gov War Department. Office of the Chief Signal Officer.) Images Collected by Brigadier General Adolphus W. Greely, Chief Signal Officer (1887-1906), between 1865–1935. Unrestricted Access, Unrestricted Use, Public Domain.
-
Mind + Body Daily Curio
Fire up the grill; backyard barbecue season is nearly upon us! In many places in the U.S., no outdoor get-together is complete without a scoop of Boston baked beans. This famous side’s sweet flavor sets it apart from other baked beans. Its origins, though, are anything but sweet.
Like other kinds of baked beans, Boston baked beans are made by boiling beans (usually white common beans or navy beans) and then baking them in sauce. The sauce for Boston baked beans is sweetened with molasses and brown sugar, but also has a savory edge since bacon or salt pork is often added.
Boston baked beans are responsible for giving their titular city the nickname “Beantown.” In the years leading up to and directly following the Revolutionary War, Boston boasted more molasses than any other American city, but Bostonians didn’t produce it themselves. The city’s coastal position made it a major hub of the Triangle Trade between the Americas, Europe, and Africa. In this brutal trade, Europe shipped goods to Africa, which were traded for enslaved people, who were shipped to the Americas to farm and produce goods like cotton and rum, which were then shipped to Europe. Boston’s molasses was produced by enslaved people on sugar plantations in the Caribbean, then used in Boston to produce rum as part of the Triangle Trade. Leftover molasses became a common household item in Boston, and was used to create many New England foods that are still famous today, from molasses cookies to Boston baked beans.
In the late 19th century, large food companies began using new, industrial technology to mass produce and can goods. This included foods that were only famous in specific regions, like Boston baked beans. Once they were shipped across the country, Boston baked beans became instantly popular outside of New England. Today, most baked beans on grocery shelves are sweet and syrupy, even if they don’t call themselves Boston baked beans. If you get popular enough, your name sometimes dissolves into the sauce of the general culture.
[Image description: A white bowl filled with baked beans and sliced hot dogs.] Credit & copyright: Thomson200, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Science Daily Curio #3066
"Don't you wish there was more plastic at the beach?", said no one ever. Those perturbed by plastic pollution in Australia now have a reason to rejoice, as new research shows that coastal plastic waste levels there have dropped by nearly 40 percent since 2013. In Australia, as in many places, most waste found on beaches (around 75 percent) is plastic. This waste is unsightly, hazardous to people (especially children), and potentially deadly for wildlife. It's good news, then, that researchers in Australia are finding less and less of the stuff every year. According to CSIRO, Australia's national science agency, there has been a 39 percent reduction in plastic waste found in coastal areas. Researchers examined a number of different Australian locales, including Hobart in Tasmania, Newcastle in New South Wales, Perth in Western Australia, Port Augusta in South Australia, Sunshine Coast in Queensland, and Alice Springs in the Northern Territory. They didn't just look at beaches by the sea, either. Areas surveyed included inland, riverine, and coastal habitats, and all were found to have reduced plastic waste. Moreover, there was a 16 percent increase in areas that were completely free of plastic waste. CSIRO researchers also identified the most common types of plastic waste: polystyrene and cigarette butts, which accounted for 24 percent and 20 percent. Other common forms of waste included beverage containers (bottles and cans) and food wrappers, as well as plenty of unspecified plastic fragments. This type of research has allowed for more focused efforts when it comes to waste collection and prevention, but CSIRO isn't ready to rest on their laurels just yet. They hope to achieve an 80 percent reduction of plastic waste by 2030 by identifying sources of waste and better understanding how it enters the environment. Australia's National Waste Policy also aims to recycle or reuse all plastic waste by 2040. As they say: waste not, want not.
[Image description: The surface of water under a sky at sunset.] Credit & copyright: Matt Hardy, Pexels
-
Parenting Daily Curio #3065
Are middle children really mediators? Are older children really the most responsible? Is there any truth to common stereotypes about birth order? A new study shows that a person's place among their siblings can affect their personality, but there's more to it. When a family has three or more children, conventional wisdom says that the eldest will be bold and independent, the middle child will be the peacemaker, and the youngest will be the most easygoing (because they’re able to get away with everything). Obviously, these archetypes don't always hold true, but birth order can contribute to someone's personality in surprising ways. Researchers in Canada conducted a large-scale study using the HEXACO framework, which measures six general traits: Honesty-Humility, Emotionality, Extraversion, Agreeableness, Conscientiousness, and Openness to Experience. Using data from almost 800,000 participants from various English-speaking countries, the researchers examined how birth order affects personality.
When it came to Honesty-Humility and Agreeableness, second or middle children scored the highest, followed by the youngest and then the eldest. Those with no siblings scored the lowest of all, but they did redeem themselves somewhat. Compared to those who have siblings, "only children" scored higher when it came to openness to experience and tended to have higher levels of intellectual curiosity. Overall, researchers found that those who came from larger families tended to be more cooperative and modest compared to those from smaller families, likely from having to share more resources and settle disputes. That's not to say that birth order is the end-all-be-all when it comes to determining personalities. In fact, researchers pointed out that these statistical differences are small, albeit consistent. They also noted that cultural differences might yield different results, and they hope to launch similar studies in non-English speaking countries. Of course, there's probably no culture on Earth without sibling rivalry.
[Image description: Three dark red hearts on a pink background.] Credit & copyright: Author-created image. Public Domain.
-
Engineering Daily Curio #3064
Chewing gum? Did you bring enough to share with everyone? Most chewing gums can only freshen your breath, but a new antiviral gum developed by researchers at the School of Dental Medicine at the University of Pennsylvania can fight the influenza virus and the herpes simplex virus (HSV). Influenza claims up to 650,000 lives per year, and while HSV isn't as deadly, the infection never goes away. According to the World Health Organization (WHO), around 3.8 billion people under 50 are infected with herpes simplex virus type 1 (HSV-1), while around 520 million people between the ages of 15 and 49 are infected with herpes simplex virus type 2 (HSV-2). HSV-1 is responsible for most cases of oral herpes, while HSV-2 is responsible for most cases of genital herpes. HSV-1 doesn't claim as many lives as influenza, but it's still the leading cause of infectious blindness in Western countries. Both influenza and HSV infections can go unnoticed or misdiagnosed, and in the case of HSV, many people can be asymptomatic for long periods of time.
Managing the spread of these diseases is a seemingly Sisyphean task, but the antiviral gum from the University of Pennsylvania might make that uphill climb a little easier. The special ingredient in the gum is lablab beans, which are full of an antiviral trap protein (FRIL) that ensnares viruses in the human body and stops them from replicating. Studies show that chewing on the gum can lower viral loads by 95 percent, significantly reducing the likelihood of transmission. Delivering the treatment via gum isn’t just a cute gimmick, either. Prolonged chewing releases the FRIL from the bean gum consistently over time, increasing its effectiveness. The question remains, though: should the flavor be spearmint or something fruity?
[Image description: A piece of chewed gum in a foil wrapper.] Credit & copyright: ToTheDemosToTheStars, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Biology Daily Curio #3063
Social distancing? These birds have never heard of it! The annual spring migration of sandhill cranes in Nebraska had researchers concerned about the possibility of a bird flu super-spreader event, but so far those fears have thankfully gone unrealized. This year, a record-breaking 736,000 sandhill cranes gathered in central Nebraska. Conservationists and bird-lovers would normally hail this as a joyous occasion, but this year is a little different. That’s because the H5N1 virus, also known as bird flu, killed 1,500 of the cranes earlier this year in Indiana. It was only natural to be concerned about the much larger Nebraska gathering, which accounts for around 80 percent of the total sandhill crane population in North America. In such a large group, just a few sick birds would have been enough to cause devastation. Unfortunately, the danger for this year hasn’t completely passed. Sandhill crane migration begins in February and continues through April, so there could still be some latecomers who might be carrying the virus.
That would be especially bad news since sandhill cranes have a hard time recovering from population dips. The cranes can begin breeding at age two, but many of them wait until they are at least seven years old. The cranes mate for life, continuing to breed for upwards of 20 years, but chicks take a while to become independent. Hatchlings stick close to their parents and only strike out on their own after seven months. Once they mature, sandhill cranes are some of the largest birds in North America, measuring over 47 inches long with a wingspan of nearly 78 inches. If only they could use that impressive wingspan to keep at wing’s length from one another.
[Image description: A sandhill crane, with white feathers and some red on its head, flies over a snowy landscape.] Credit & copyright: National Park Service, Jacob W. Frank. NPGallery Digital Asset Management System, Asset ID: 06545f00-50a9-41bd-9195-f8663386cb17. Public domain: Full Granting Rights.
-
Mind + Body Daily Curio
It’s flavorful, traditional…and controversial. Foie gras is a quintessentially French food that’s been beloved for centuries, but it’s come under fire in recent years due to animal rights concerns. Foie gras is made from fatty duck or goose liver (its name literally translates to “fatty liver”), and making it involves force-feeding ducks or geese in order to plump them up, a stressful process that can result in injuries. The birds are also confined to keep them from exercising. Foie gras has become so controversial that some cities and even entire countries, like Switzerland, have banned the dish. Now, though, there may be a way to have happier birds and eat foie gras too. A team of researchers in Germany recently found that adding lipases (naturally occurring, fat-digesting enzymes) to normal duck liver after butchering caused the liver’s fat to form large, irregular clumps, just as fat in natural foie gras does. This leads to foie gras that is creamy and fatty, even without force-feeding. The study's results were published in the journal Physics of Fluids and seem to suggest that cruelty-free foie gras is possible.
Regardless of how it's made, all foie gras has a sought-after, buttery flavor and creamy consistency that sets it apart from normal duck or goose liver. It can be served in slices or as a pâté, to be spread on crackers or bread. The dish’s history goes back a long way. The practice of force-feeding birds to fatten them up dates back to ancient Egypt, where various artworks depict workers forcing food into birds’ mouths. Ancient Romans specifically ate the livers of geese fattened with figs. By the 1500s, fattened goose liver was a delicacy in many European Jewish communities. Since fat from pigs and from certain parts of cows wasn’t considered kosher in these communities, they used fat from overfed geese in their cooking, and a version of foie gras was created as a byproduct.
For at least a century, foie gras was inexpensive, as it was considered a peasant dish throughout most of Europe. Then, in 1779, French chef Jean-Joseph Clause created a pâté, or paste, from foie gras, allowing it to be easily spread over bread and other foods. Clause created an entire business supplying his foie gras to French royalty and aristocracy, ensuring that it became a food associated with luxury. It’s a reputation that endures to this day, despite the controversy currently surrounding foie gras. Would you eat fatty duck liver in the City of Light?
[Image description: Two slices of foie gras on a white plate with salad and bread.] Credit & copyright: Benoît Prieur (1975–), Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Cats Daily Curio #3062
They’re the fuzziest, friendliest criminals you’ll ever meet. Cats are nearly essential fixtures in New York City’s bodegas, semi-outdoor convenience stores that often include deli counters or grab-and-go food. While they’re universally appreciated by the businesses’ patrons, a recent petition to protect them has highlighted the fact that, technically, they’re illegal.
Bodegas are convenient places for busy New Yorkers to pick up household essentials or a quick lunch. Animal lovers have yet another reason to visit bodegas, as they’re frequently inhabited by at least one cat. Bodega owners in New York started keeping cats in their stores in the early 1900s. The felines kept rodents at bay, which helped prevent inventory loss. Over time, bodega cats have become more than a pest-control solution, though many still serve that purpose. These days, they’re also beloved mascots of the neighborhoods they inhabit. Some become minor celebrities with their own social media pages, while most are content to receive the adoration of regular customers. Nevertheless, their presence in the stores is against the law. Specifically, it goes against a state law banning live animals from retail food stores.
Concerned New Yorkers recently submitted a 10,000-signature petition to the city asking that the cats be exempted, but the law is actually enforced by the New York State Department of Agriculture and Markets. The agency has the authority to issue fines regarding the cats, but so far it has been lenient on the matter. Those who support bodega cats say that they still serve an essential function as pest control against the city’s ubiquitous rats and cockroaches, while store owners themselves say that the cats help build and maintain ties to the communities they serve. So far, there has been very little public controversy on the matter, as everyone from shoppers to government workers seems to love the bodega cats. Who said New Yorkers couldn’t agree on anything?
[Image description: A close-up photo of a cat’s face with white-and-gray fur and yellow eyes.] Credit & copyright: Wilfredo Rafael Rodriguez Hernandez, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Mind + Body Daily Curio #3061
They say that the best way to keep your mind sharp is to keep your body healthy. That usually refers to diet and exercise, but it turns out that vaccines can be good for the mind too. A recently published study in Nature by researchers from Stanford University shows that the shingles vaccine might be the most effective dementia prevention tool ever developed. The link between the shingles vaccine and dementia has actually been explored before, but the correlation between vaccination rates and the likelihood of developing dementia was never firmly established. That’s because people who voluntarily receive the vaccine tend to be more health-conscious. Since it’s already well established that a healthier lifestyle with better diet and exercise reduces the risk of dementia, researchers had no way to confirm that the vaccine was responsible for lowering the risk as well.
In 2013, a public health policy in Wales allowed those who were 79 on September 1 of that year to receive the shingles vaccine, while those who were 80 weren’t eligible to get it at all. The policy was put in place due to limited vaccine supply, but it had the unintended consequence of creating a nearly perfect, randomized, controlled trial of the vaccine’s effectiveness against dementia. Those involved in the “study” were nearly the same age, and there were plenty of people who didn’t receive the vaccine because they couldn’t, not because they didn’t want to. When researchers combed over the data, they found that those who received the vaccine were 20 percent less likely to develop dementia compared to those who didn’t, and they drew the same conclusions from countries like Australia and Canada, where the shingles vaccine was distributed in a similar fashion. Twenty percent might not seem like much, but it’s a huge difference when it comes to keeping a sound mind!
[Image description: Medical needles against a pink background.] Credit & copyright: Tara Winstead, Pexels
-
Humanities Daily Curio #3060
The effects of war don’t always end with war. Cambodia’s decades-long civil war ended in 1998, but the country is still suffering casualties from landmines, buried long ago. Luckily, an unlikely hero has emerged from the devastation: Ronin, a giant African pouched rat trained to sniff out these deadly remnants. While we’ve written about mine-sniffing rats before, Ronin recently broke a world record, making him a rat of particular renown.
Ronin is a 5-year-old mine detection rat (MDR) working for APOPO, a Belgium-based charity helping to rid Cambodia of landmines. African pouched rats like Ronin are already record-setters as the largest rats in the world. They can weigh up to nine pounds and grow up to 35 inches long. In some places, like Florida, African pouched rats are unwelcome invasive species that devastate the local ecosystem, but in Cambodia, they’re saving lives. Ronin recently earned a Guinness World Records title for sniffing out 109 mines and 15 other explosives at work, beating the previous record held by fellow rat Magawa. In a press release, APOPO stated, "His exceptional accomplishments have earned him the Guinness World Records title for most landmines detected by a rat, highlighting the critical role of HeroRATS in humanitarian demining." Rats like Ronin can safely tread through heavily mined areas without setting off the mines because, while they’re heavy for rats, they’re still much lighter than people. Their keen sense of smell allows them to locate any explosives, which can then be dug up and disposed of safely. Rats may have a reputation as pests, but Ronin and his coworkers are truly lionhearted rodents.
[Image description: A yellow, multilingual sign warning of landmines.] Credit & copyright: User:Mattes, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide.
-
Astronomy Daily Curio #3059
You can always come back home…but it might take a while to adjust. Earlier this month, astronauts Butch Wilmore and Suni Williams of NASA returned to Earth after a nine-month stay at the International Space Station (ISS). We’ve written before about the duo’s extended stay in space. Originally, the astronauts were meant to make only a week-long stopover at the ISS after arriving via the Boeing Starliner last June. Then, technical issues compromised their ability to return safely and ended up delaying their return time and time again. After finally catching a ride back to Earth aboard the SpaceX Crew Dragon capsule on March 18, Wilmore and Williams were carried away on stretchers. Unfortunately, this wasn’t just out of an abundance of caution. Prolonged stays in space can actually have serious health consequences, affecting the body in various negative ways.
According to NASA’s Human Research Program (HRP), which has been studying the issue for five decades, astronauts lose around 1 to 1.5 percent of their bone density for every month they spend in space. That makes them more vulnerable to fractures, but their problems don't end there. There's also the issue of muscle loss, which can severely weaken astronauts even with regular exercise. During the Apollo missions of the 1960s and 70s, astronauts who spent just days in microgravity had to be pulled out of their capsules upon landing because they were unable to stand on their own. If all that weren't enough, dangerous levels of radiation in space increase the risks of cancer and other diseases. The confined environments inside spacecraft aren't great for mental health either, and crewmembers getting along can become a problem even among the most well-disciplined astronauts. These issues may not be as glamorous as the technological innovations involved in space travel, but they're some of the most limiting factors for extended missions (like a hypothetical manned mission to Mars). There's a long way to go before we can go such a long way.
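To put that bone-density figure in perspective, here’s a rough back-of-the-envelope sketch (our own arithmetic, not an HRP model; it assumes the monthly loss compounds) of what 1 to 1.5 percent per month adds up to over a nine-month stay like Wilmore and Williams’:

```python
# Rough illustration only: compound the reported 1-1.5% monthly bone-density
# loss over a nine-month stay. Treating the loss as compounding month over
# month is a simplification for illustration, not NASA's actual model.

def remaining_density(months, monthly_loss_rate):
    """Fraction of starting bone density left after `months` of compounding loss."""
    return (1.0 - monthly_loss_rate) ** months

for rate in (0.01, 0.015):                 # 1% and 1.5% per month
    left = remaining_density(9, rate)      # a nine-month mission
    print(f"{rate:.1%} per month -> about {1 - left:.1%} total loss")
# Prints roughly 8.6% and 12.7% total loss over nine months.
```

Either way, that’s on the order of a tenth of an astronaut’s bone density gone in under a year.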
[Image description: A starry night sky with some purple visible.] Credit & copyright: Felix Mittermeier, Pexels -
FREEMind + Body Daily CurioFree1 CQ
It’s spring, which means it’s time to rise, shine, and risi e bisi! This scrumptious starter is a traditional springtime food in Venice. Its name translates to “rice and peas”, and its cheery green color perfectly reflects the hue of the season. For a dish made with such humble ingredients, it has a surprisingly posh past.
Risi e bisi is made with young peas, medium-grain rice, butter, and broth. While chicken broth is often used in modern risi e bisi because of its wide availability, traditionally the dish was made with a broth of pea shells and other veggies, like carrots and onions. Though it’s similar to risotto, another Italian rice dish that employs broth, risi e bisi isn’t stirred constantly the way risotto is. The peas in risi e bisi remain firm since they’re added toward the end of the cooking process, while veggies in risotto are added early so that they can soften. Risotto’s texture requires that it be eaten with a spoon, while risi e bisi is solid enough to be eaten with a fork.
Risi e bisi has always been associated with spring, since peas reach peak ripeness early in the season. Venice came to love peas in the 15th century, after conquering several other Italian city-states, including the northeastern city of Vicenza, where conditions were perfect for pea farming. The dish soon became a springtime staple in Venice, but was largely seen as a peasant dish until sometime around the 16th century. That’s when the leader of the Venetian Republic, known as the Doge of Venice, was served risi e bisi during the April 25th Feast of Saint Mark, which celebrates the city’s patron saint, Mark the Evangelist.
Popular as it remains, though, risi e bisi is still difficult for tourists in Italy to find. That’s because it’s still a mostly homemade dish, considered too simple to serve in restaurants. Luckily, it's pretty easy to make, doesn't require expensive ingredients, and takes less than an hour to prepare. You can’t be too busy for risi e bisi.
[Image description: Rice with peas on a gray plate. There are two lemons to the right.] Credit & copyright: Alesia Kozik, Pexels -
FREEEngineering Daily Curio #3058Free1 CQ
It sounds gross, but it really is good for you! This month in 1942, Anne Miller became the first patient to be successfully treated for a streptococcal infection with penicillin, the first antibiotic. The antibiotic properties of penicillin, a drug derived from a mold called Penicillium, were first discovered by Alexander Fleming in the late 1920s. Yet, it wasn’t until March 14, 1942, that the drug was used to save a civilian’s life. Miller was an everyday woman who had the tragic misfortune of suffering a complicated miscarriage. Within weeks, she began experiencing symptoms of a grave infection with a fever reaching 106.5 degrees Fahrenheit. The culprit was streptococcal septicemia, a deadly infection that was once common after miscarriages. Penicillin's potential had long been known by this point, and its use was already being tested in the U.K., but there were a couple of problems keeping it from widespread usage. First, a little conflict called WWII was taking place at the time, making it difficult to safely transport penicillin across the Atlantic. Secondly, penicillin was exceedingly difficult to extract, making it prohibitively expensive. Its supply was so limited that some patients who were being treated for infections that penicillin could have defeated passed away anyway because they couldn't receive a full course.
As luck would have it, Miller's physician, Dr. John Bumstead, just so happened to be treating Dr. John Fulton, who just so happened to be friends with Howard Florey, an Australian researcher who was a pioneer in the use of penicillin for therapeutic purposes. Bumstead pleaded with Fulton to acquire a sample of the precious penicillin from his friend, and Florey obliged, using his connections in the pharmaceutical industry to secure a sample produced in the U.S. The penicillin was delivered to the hospital where Miller and Fulton were staying, and thanks to the miracle drug, Miller made a full recovery. She would go on to live another 57 years, passing away at the age of 90 in 1999. Next time you see a moldy piece of bread, show a little respect!
[Image description: A close-up photo of moldy bread.] Credit & copyright: Ciar, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide. -
FREEWorld History Daily Curio #3057Free1 CQ
Just because a war is undeclared doesn't mean it's not real. Just take a look at the Falkland Islands War, a conflict between Argentina and the U.K. that began on this day in 1982. As its name implies, the Falkland Islands War was fought over which nation had sovereignty over the islands, which are located 300 miles from the coast of Argentina. For most of their history, the Falkland Islands were uninhabited, but by the early 1800s, there were Argentine residents living there. Then, in 1833, Britain took control of the islands and forced out their inhabitants, establishing a population of British residents. After that, the islands were recognized as belonging to the U.K., despite objections over the years from the government of Argentina.
In 1982, tensions finally boiled over after negotiations between the two countries fell through, and the Argentine military junta launched an invasion of the islands. The decision to reclaim them was partially motivated by the junta's declining grip on Argentina, and they believed that taking back the islands would garner support. British troops were also sent to the islands, and fighting commenced. The conflict lasted over two months and claimed nearly a thousand lives (649 Argentine, 255 British, and three Falkland Islanders), yet neither the U.K. nor Argentina officially declared war. The Falkland Islands remain under British control today, and their residents have rebuffed any attempts by the Argentine government to incorporate them into Argentina. Of the over 2,500 people currently living on the islands, nearly all are English-speaking and of British descent. The islands' economy depends largely on a modest agricultural industry and tourism, without much in the way of natural resources. The dispute over the islands’ ownership is mostly just a matter of national pride for both sides. Who knew island life could be so controversial?
[Image description: The flag of the Falkland Islands, featuring a dark blue background, British flag in the upper left, and a seal with a sheep and a sailing ship in the lower right.] Credit & copyright: Government of Great Britain, Wikimedia Commons. Public Domain. -
FREEPolitical Science Daily Curio #3056Free1 CQ
This wasn’t an April Fools’ joke, but it almost seems like one. The state of Illinois recently allowed its citizens to vote on a new design for their state flag, and by far the largest share of the votes went to the existing design. Last year, Minnesota voted to adopt a new design for its state flag, and maybe Illinois was feeling a little left out. In the end, the state’s redesign contest came down to 10 finalists, and out of around 385,000 voters, 43 percent wanted to keep the same old, same old.
Illinois originally adopted its flag in 1915, and it's not exactly known for its vexillological beauty. It features an eagle atop a rock and a shield decorated with the stars and stripes, with the setting sun in the background. Next to the eagle, a banner shows the state motto—“State Sovereignty, National Union”—and on the rock are two dates. One is 1868, the year the state seal (the eagle design featured on the flag itself) was adopted; the other is 1818, the year Illinois became a state. While it may seem strange to hold a flag-design contest, Illinois’ current flag was actually chosen via a similar contest in 1915 organized by the Daughters of the American Revolution, and that wasn't even the first time someone tried to come up with a different state flag for Illinois. A few years prior to that contest, a man named Wallace Rice designed a flag featuring blue and white stripes, 20 blue stars, and one white star. The 21 stars were meant to represent the fact that Illinois was the 21st state to be added to the Union, but no matter the symbolism, the flag was never approved by the state legislature. Other flags considered in the past included banners created for the state’s Centennial and Sesquicentennial celebrations in 1918 and 1968 respectively, and those two were also among the 10 finalists in the latest vote. Even with flags, it seems most people agree: if it ain't broke, don't fix it!
[Image description: The Illinois state flag: a white flag with an eagle in the center. The eagle holds a red banner reading “NATIONAL UNION” and “STATE SOVEREIGNTY" while standing on a rock listing the years 1868 and 1818. There is a yellow setting sun in the background.] Credit & copyright: Public Domain. -
FREENutrition Daily Curio #3055Free1 CQ
Here’s a citrus to celebrate. Researchers at Harvard Medical School have discovered that eating citrus might be an effective way to lower the risk of developing depression. Depression is an extremely common condition, yet it can be extremely difficult to treat. Around 290 million people worldwide are thought to suffer from the disorder, and for many of them, treatments aren’t effective. In fact, around 70 percent of those with depression don’t find antidepressants to be effective. However, in recent years, researchers have found a strong link between an individual's gut microbiome and their mental health, and the Mediterranean diet has been found to reduce the risk of depression by almost 35 percent. Now, similar effects have been found in patients who eat at least one orange every day.
Harvard researchers recently examined a study known as the Nurses’ Health Study II (NHS2), which began in 1989 and involved detailed interviews with around 100,000 participants regarding their diets and lifestyles. Those who ate a lot of citrus tended to have significantly lower rates of depression compared to those who didn't. Based on the data, just one medium orange every day might lower the risk of depression by up to 20 percent. But it's not the orange that's helping directly. Rather, citrus consumption promotes the growth of F. prausnitzii, a beneficial bacterium found in the gut. Researchers believe that F. prausnitzii affects the production of serotonin and dopamine in the intestines, which can make their way to the brain. Serotonin and dopamine are neurotransmitters that are often lacking in people with depression. Apples might keep the doctor away, but it seems that oranges really keep the blues at bay.
[Image description: Rows of cut oranges.] Credit & copyright: Engin Akyurt, Pexels -
FREEMind + Body Daily CurioFree1 CQ
You can’t help but catch a whiff as you chow down on this dish. Referring to a food as “stinky” might seem rude, but it’s actually a point of pride for makers of stinky tofu. This Chinese dish’s actual name, chòu dòufu, literally translates to “smelly tofu”, and the dish is lovingly referred to as stinky tofu in English. It has gone viral in recent years as influencers descend on Asian food markets to try unusual dishes on camera, but stinky tofu’s history predates the internet by quite a few years. In fact, it’s centuries old.
Stinky tofu is, of course, a kind of tofu, a gelatinous food made from soybeans. Soaked soybeans are ground and strained to make soy milk, then a coagulant is added, causing the milk to curdle; the resulting curds are pressed into solid pieces of tofu. Normal tofu has a very mild smell, though it’s great at soaking up the smells and flavors of dishes that it’s added to. Unlike regular tofu, stinky tofu is fermented, and its pungent aroma, which is sometimes compared to that of rotting vegetables, comes from the brine it’s made in. Fermentation is the same process that turns cucumbers into pickles; it involves submerging food in a brine and keeping it in a sealed container until yeast and bacteria create chemical changes that make it taste (and smell) different. Stinky tofu is usually fermented in a brine of veggies like bamboo shoots and greens, meat products like dried shrimp, fermented milk, and spices. While stinky tofu’s flavor is stronger than that of normal tofu, it’s not nearly as overpowering as its smell. The dish is creamy and rich, with a sour, somewhat salty flavor. It can be eaten in many different ways: cold, steamed, or fried. It’s usually served with spicy sauce for dipping.
Stinky tofu dates all the way back to China’s Qing Dynasty, which lasted from 1644 to 1912. Unlike with many traditional foods, we actually know who invented stinky tofu: a scholar-turned-tofu-merchant named Wang Zhihe. In 1669, he journeyed to Beijing from his home in Anhui province to try his hand at becoming part of China’s state bureaucracy. However, he failed the official examination for the job, and found himself low on funds after his journey. To keep afloat, Wang set up a tofu stand in the city. His bad luck continued, though, and he ended up with a lot of unsold tofu. Rather than let it go to waste, Wang fermented the tofu in jars. This new, stinky tofu was a hit, as it stood out from Beijing’s other street food offerings. To this day, stinky tofu is mainly sold as a street food, both at permanent food stalls and at pop-up events, like festivals and night markets. It’s especially popular in Taiwan, and is considered by many to be Taiwan’s unofficial “national snack food.” Sometimes, pungency is perfection.
[Image description: A plate of five thick tofu squares with shredded vegetables in the center.] Credit & copyright: Pilzland, Wikimedia Commons. The copyright holder of this work has made it available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.