Curio Cabinet / Daily Curio
-
Mind + Body Daily Curio
This snack is creamy, cheesy, vegetable-y, spicy and portable. Elote, also known as Mexican street corn, really does it all. As the weather warms up and elote makes an appearance at fairs and festivals all over the world, it’s worth taking a look at this street food’s surprisingly long history.
Elote, which in Spanish can refer to either a plain ear of corn or the street food, is made by either boiling ears of corn in their husks or, more commonly, by grilling them. The corn is then slathered with mayo and cotija cheese, and sprinkled with chili powder and other seasonings, like cumin. Lime is sometimes squeezed on top for extra zest. Elote is usually put on a skewer for easy carrying, or shaved from the cob into a cup in a preparation known as esquites.
Corn is native to the lowlands of west-central Mexico, and has been cultivated there for more than 7,000 years. Corn was a staple food in both the Aztec and Mayan Empires, and was used to make tortillas, tamales, soups, and even drinks. In fact, corn was so important that it was considered holy; the Popol Vuh, a Mayan sacred text, states that the first human was made from corn. Eventually, corn cultivation spread throughout Mexico, then to what is now the Southwestern U.S. as people migrated there. By the time Europeans arrived in North America, Native Americans had been growing corn for at least 1,000 years.
We’ll never know exactly who invented the elote we know today, nor exactly when. We do know that it has been served in various parts of Mexico for centuries, and that its popularity has a lot to do with busy lifestyles in places like Mexico City. Just like New Yorkers love their ultra-portable hot dogs, those in Mexican cities enjoy eating elote on the go. Like hot dogs, elote is also a common food to find at backyard get-togethers and family functions. Don’t forget to grab a cob next time you’re out and about.
[Image description: An ear of corn on a stick, covered in white cheese and red spices, on white paper.] Credit & copyright: Daderot, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Astronomy Daily Curio #3078
The Greeks had nothing on this ancient astronomer! For centuries, the oldest surviving star catalog, mapping the exact positions of heavenly bodies, was known to have come from ancient Greece. Created by the Greek astronomer Hipparchus of Nicaea sometime around 130 B.C.E., it gave ancient Greece the distinction of being the first civilization to map stars using coordinates. Now, researchers in China have turned that idea on its head, as they claim to have dated a Chinese star catalog to more than 100 years before the Greeks’. It was compiled by Chinese astrologer and astronomer Shi Shen sometime around 335 B.C.E. and is being called The Star Manual of Master Shi.
While this new star catalog shows detailed information about 120 stars, including their names and coordinates, it doesn’t include a date. To determine exactly when it was made, researchers had to get creative. We know that stars’ positions change over time relative to earthbound viewers due to a phenomenon called precession, in which the Earth wobbles slightly on its axis in slow, 26,000-year cycles. Researchers first compared The Star Manual of Master Shi to other manuals made in later periods, like the Tang and Yuan dynasties. Then, they used a specially made algorithm to compare the positions in Shi’s manual to 10,000 different moments in later periods, factoring in precession. The algorithm found that The Star Manual of Master Shi had to have been created in 335 B.C.E., which makes sense, since that year falls right within Shi’s lifetime, at the height of his career. In the process of comparing Shi’s work to that of later astronomers, they also found that his coordinates had been meticulously and purposefully updated by another famous ancient Chinese astronomer: Grand Astronomer Zhang Heng of the Han Dynasty. We may have just discovered how important Shi’s manual was, but it seems that other astronomers already knew what was up (in the sky).
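The dating method is easy to sketch in code. Below is a toy Python illustration, not the researchers’ actual algorithm or data: it assumes precession shifts every star’s ecliptic longitude by about 50.3 arcseconds per year (a first-order simplification) and grid-searches candidate epochs for the one that best reconciles a catalog’s coordinates with modern measurements. The star values are invented for demonstration.

```python
# Toy sketch of dating a star catalog via precession. Assumes precession
# advances each star's ecliptic longitude ~50.3 arcseconds/year while
# latitude stays roughly fixed (a first-order simplification).
RATE = 50.3 / 3600.0  # degrees per year

# Hypothetical pairs: (longitude recorded in the old catalog,
#                      longitude measured in the year 2000)
stars = [(10.0, 42.6), (105.4, 138.0), (200.2, 232.8)]

def misfit(epoch, stars, ref_year=2000):
    """Sum of squared longitude errors if the catalog dated from `epoch`."""
    total = 0.0
    for lon_catalog, lon_modern in stars:
        predicted = (lon_modern - RATE * (ref_year - epoch)) % 360.0
        diff = abs(predicted - lon_catalog)
        diff = min(diff, 360.0 - diff)  # wrap around the circle
        total += diff ** 2
    return total

# Grid-search many candidate epochs, echoing the study's comparison of
# the manual against thousands of possible moments in time.
best = min(range(-1500, 1500), key=lambda y: misfit(y, stars))
print(f"Best-fit epoch: {best}")  # ~ -333, i.e. the fourth century B.C.E.
```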
[Image description: A starry sky with some purple visible.] Credit & copyright: Felix Mittermeier, Pexels
-
Sports Daily Curio #3077
Aging out? Never heard of it! American gymnast Simone Biles recently announced that she’s unsure whether or not she’ll compete in the 2028 Summer Olympics, when she’ll be 31 years old. If she did choose to participate, she would undoubtedly be one of the oldest gymnasts competing in 2028…but possibly not the oldest! She’d also be far from the oldest ever to compete at the Olympics.
It’s no secret that age counts for a lot in competitive sports, and that’s truer in gymnastics than in most other disciplines. While age can bring experience and even lend a competitive edge to athletes in some other sports, gymnastics is notoriously hard on the body, making it more difficult for aging athletes to compete and recover without pain. Those flips and jumps also require a lot of muscle mass, which tends to decline as people age. That’s why Olympic gymnasts tend to be younger on average than, say, swimmers or marathon runners.
Of course, there are some exceptions. Take 49-year-old Uzbek gymnast Oksana Chusovitina, the oldest female gymnast to ever compete at the Olympics. She’s aiming to come back yet again in 2028 after missing out on Paris last year; she last competed in the 2020 Summer Olympics in Tokyo at the age of 46. Throughout her long career, she has earned a gold medal in the 1992 team all-around competition in Barcelona and a silver for vault in 2008 in Beijing. Then there’s Bulgarian gymnast Yordan Yovchev. He’s retired now, but when he last competed in 2012, he was the oldest gymnast participating, at the age of 39. He brought home four Olympic medals, including a silver in rings at the 2004 Athens Olympics. Yovchev also boasts the most consecutive Olympic appearances by any male gymnast, having competed six times between 1992 and 2012. Compared to these legendary athletes, Biles is practically a spring chicken!
-
Biology Daily Curio #3076
This week, as the weather continues to warm, we're looking back on some of our favorite springtime curios from years past.
Do you have trouble falling asleep? Do you get rocky "half-sleep"? Well, hibernation might be just the cure for you. In two recent, unrelated experiments, researchers isolated the neurons in the brain that "switch on" hibernation in mammals. One study, led by neurobiologist Sinisa Hrvatin of Harvard, reached its findings intentionally. Hrvatin and his team first hypothesized that they could trick mice into hibernation, mostly by limiting their diets and exposing them to cold temperatures. They were correct. The team observed that this combination of variables led some mice to enter a state of torpor within 10 hours, and others within 48 hours. As the mice drifted off, the scientists observed and tagged neurons in the rodents' hypothalami. The hypothalamus is an area of the brain largely concerned with primordial drives like feeding, thirst, and body temperature. Once they had tagged and cataloged the neurons involved in torpor, the scientists could stimulate those neurons on command. In other words, they could instantly thrust mice into a pleasant siesta. The second study, based in Japan, came to largely the same conclusion, but unintentionally. Both teams posit that artificial hibernation could carry over to humans, allowing for the long-sought-after suspended sleep during space flights, metabolic control of body temperature during surgery, and a much safer form of sedation for unruly patients. And of course, it may bring z's to all us purple-eyed, groggy insomniacs. Just remember to set a couple of alarms beside your head before you drift off. Otherwise, you might oversleep until the spring of 2021!
Image credit & copyright: Huntsmanleader, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Outdoors Daily Curio #3075
This week, as the weather continues to warm, we're looking back on some of our favorite springtime curios from years past.
The fastest growing sport in the U.S. probably isn’t what you’d expect. With another spring comes another wave of outdoor activities, and for many fair-weather athletes, the name of the game is pickleball. Pickleball was invented in 1965 by three dads who wanted to keep their kids entertained during summer vacation. The sport’s founding fathers, Joel Pritchard, Bill Bell, and Barney McCallum, took a wiffle ball, lowered a badminton net to the ground, and, with elements from tennis, ping-pong, and badminton, cobbled together a sport that was easy and fun.
Part of the sport’s appeal comes from the small court on which it is played, which allows for an exciting game for all ages. The paddles are roughly twice the size of ping-pong paddles, and while the originals were made out of scrap plywood, a number of manufacturers now make pickleball-specific paddles and other equipment. According to the USA Pickleball Association, the sport is played on a 20-by-44-foot court with a net that hangs 36 inches high at the sides and 34 inches at the middle. It can be played as singles or doubles, just like tennis. Pickleball has experienced a recent surge in popularity due to its gentle learning curve and the pandemic, which had people looking for easy outdoor activities with a social lean; even before the pandemic, though, the number of players grew by 10.5 percent between 2017 and 2020. As for the name? Some claim the sport was named after the Pritchards’ family dog, Pickles, while others claim that the dog was named after the sport, and that the name is a reference to “pickle boats” in rowing, which are crewed by athletes left over from other teams. Either way, grab a paddle!
[Image description: Yellow pickleballs on a blue court.] Credit & copyright: Stephen James Hall, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Mind + Body Daily Curio
This week, as the weather continues to warm, we're looking back on some of our favorite springtime curios from years past.
This sticky topping is more than just a pancake accessory. Maple syrup has a uniquely North American history, beginning with the continent’s first inhabitants. Over the centuries, it’s been used as a medicine, a drink, and a food topping, and it even helped early U.S. colonists avoid hefty import fees.
As its name suggests, maple syrup is made from the sap of maple trees, usually black maples, sugar maples, or red maples. These trees are unique in that they store starch in their trunks and roots, which turns into sugar and is carried throughout the tree via sap. In late winter and early spring, when the trees are full of this sugary sap, holes are drilled in their trunks and the sap is collected. It is then heated to get rid of excess water. The result is a runny, brown, sweet-tasting syrup that’s used as a topping on many foods, most famously pancakes.
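To get a feel for how much "excess water" is involved, here is a rough mass-balance sketch. The 2 percent and 66 percent sugar figures are typical ballpark values assumed for illustration, not numbers from this article:

```python
# Rough sap-to-syrup ratio: sugar is (approximately) conserved while
# water boils off, so sap_volume * sap_sugar ≈ syrup_volume * syrup_sugar.
# Both percentages are assumed, typical ballpark values.
sap_sugar = 0.02    # raw maple sap is ~2% sugar
syrup_sugar = 0.66  # finished syrup is ~66% sugar

sap_per_syrup = syrup_sugar / sap_sugar
print(f"~{sap_per_syrup:.0f} gallons of sap per gallon of syrup")  # ~33
```

The traditional rule of thumb is closer to 40 to 1, since syrup is considerably denser than sap, but the simple ratio shows why sugaring takes so much boiling.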
No one knows who, exactly, first discovered that maple sap was sweet and edible, but maple trees grow throughout North America, and native peoples have been making sugar and syrup from their sap for centuries. For the Algonquian people, who lived mainly in what is today New England and Canada, maple syrup held particular cultural significance. They collected maple sap in clay buckets and turned it into syrup by letting it freeze and then throwing out the ice that formed on top, thereby getting rid of excess moisture. The pots were sometimes boiled over large fires, too. The syrup was not only used as a topping but was also mixed into a drink with herbs and spices.
When European settlers made their way to North America, the Algonquians and other peoples showed them how to make maple sugar and maple syrup. This was lucky, since, in the 17th century, sugarcane had to be imported from the West Indies at a considerable cost. Using maple syrup and sugar as their main sources of sweetness allowed colonists to save considerable money and enjoy desserts at the same time. By the early 19th century, maple syrup was sold and prized throughout North America, and was even exported to other countries. To this day, Canada is particularly proud of its maple syrup, and it’s considered a national staple. Pretty sweet, eh?
[Image description: A stack of three pancakes with whipped cream and berries. Maple syrup is being poured over them.] Credit & copyright: Sydney Troxell, Pexels
-
Travel Daily Curio #3074
This week, as the weather continues to warm, we're looking back on some of our favorite springtime curios from years past.
Party on. In the U.S., most colleges give students a one-to-two-week vacation in late March or early April. This "spring break" gives students and faculty a nice rest, but it also gives a select few destination towns an enormous influx of economic activity. John Laurie wrote his PhD dissertation in economic development on this very topic. Tersely titled Spring Break: The Economic, Socio-Cultural and Public Governance Impacts of College Students on Spring Break Host Locations, Laurie's study found some pretty amazing things. Each year, spring breakers inject $1 billion into tourist towns in Florida and Texas. In Panama City Beach, nearly half a million students spent $170 million over a six-week period. That's a lot, especially because it is concentrated in a select few spring-break-geared businesses like hotels and liquor stores. Conversely, the people who carry the heaviest workload during spring break derive very little profit from it. Most work for city or government agencies: emergency responders, hospitals, street cleaners, and park or beach workers. One strange byproduct of wealthy students descending on normally lower-middle-class towns is an increase in entrepreneurship; people start businesses just to profit off of a few weeks of activity every year. The people most affected by spring break are police officers, whose citation rates jump dramatically in March and April. Those poor people should get a commission for every ticket they write to all the students behaving like idiots!
[Image description: Umbrellas and lawn chairs on a sandy beach.] Credit & copyright: Jebulon, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
STEM Daily Curio #3073
This week, as the weather continues to warm, we're looking back on some of our favorite springtime curios from years past.
It's a bird! It's a plane! Nope, it's a drunk bird. And Gilbert, a tiny city in northeastern Minnesota, seems to have been attacked by thousands of them. The Gilbert Chief of Police, Ty Techar, put out a notice that residents should watch out for "tipsy birds" flying into windows, dive-bombing residents, slamming into cars, and generally "flopping all over the place." The birds are also exhibiting an atypical willingness to stay put when humans approach. All of this has led authorities to conclude the birds are suffering from fermentation toxicity. In other words, they're drunk. It commonly occurs when animals eat berries that have fermented into alcohol—usually during the spring thaw, when berries are susceptible to wild yeasts fermenting their remaining sugars. Normally it affects bigger animals like deer, since most birds have migrated south for the winter. But this year, an early frost seems to have kicked off the phenomenon much earlier than normal, meaning tiny sparrows and robins, with blood alcohol tolerances much lower than a deer's, are gobbling up the berries. Some bird experts say there's no real evidence the birds are drunk, and that when migration begins there are always increased episodes of birds going berserk. But Chief Techar isn't buying it. While he admits he hasn't given the birds a breathalyzer, he says it's easy to tell. Still, he wants residents to calm down: "it's not like every bird in our town is hammered." You can find the joke there yourself.
Image credit & copyright: PookieFugglestein, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Gardening Daily Curio #3072
This week, as the weather continues to warm, we're looking back on some of our favorite springtime curios from years past.
This fall, consider leafing your lawn alone. The National Wildlife Federation (NWF) recently advised homeowners not to rake up leaves in the fall, since they are beneficial to lawns and to wildlife. Leaf-raking has always been a somewhat contentious issue among lawncare enthusiasts, and the answer to whether or not to rake is a bit complicated. Those who rake say that leaves can pile up and block sunlight, killing the grass. Those who don’t rake say that simply running over the leaves with a lawnmower on the “mulch” setting is enough to let light through and fertilize the lawn in the process. But now there is another reason not to rake: the leaves are a haven for beneficial insects and other critters.
Butterfly and moth caterpillars seek shelter under the leaves, then emerge in the spring to act as pollinators. Meanwhile, birds that eat mosquitoes and other pests also use the leaves as nesting material. Environmentalists are now asking those who rake to think twice before throwing out bags of leaves, as around 33 million tons of leaves are sent to landfills, accounting for 13 percent of all solid waste, according to the EPA. Once discarded, leaves are unable to properly decompose in the anaerobic environment of landfills, leading to the production of methane, a potent greenhouse gas.
Throwing away leaves can also be akin to throwing away money, since they are very useful to overall lawn and garden care. Putting dead leaves on flower beds can help choke out weeds that come up in the spring, rendering expensive mulch and weed killers unnecessary. Extra leaves also make great compost, which can be used in potting mix or in vegetable gardens. Even just letting mulched leaves decompose on one’s lawn returns nitrogen to the soil, feeding the grass and promoting growth. Truly a natural resource worth falling for.
[Image description: White flowers grow in grass near rocks.] Credit & copyright: W.carter, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
STEM Daily Curio #3071
Falling in quicksand is a lot like falling in love, in that most people don't really know how it works. A Michigan man recently made headlines when he found himself stuck in quicksand. When he was rescued, he not only regained his freedom but also came away with a love connection.
Though most people don’t encounter it on a daily basis, quicksand can be a common hazard in certain areas, especially places with high levels of groundwater. Recently, Mitchell O'Brien and Breanne Sika were rock hunting on the shores of Lake Michigan when the former stepped into quicksand and suddenly found himself stuck, waist-deep. If sand becomes saturated with enough water and air, it can become a sand-water suspension that can't support a person's weight on the surface. Luckily, a person stuck in quicksand won’t usually sink so deep that their entire body is covered, since the human body is less dense than the suspension. That's not to say that quicksand isn’t dangerous. When a person falls in, the sheer weight of the sand can lock their limbs into place. If the person struggles and loses their balance, they can drown even if they're not fully submerged. There's also a risk of drowning if a person is caught in quicksand near water during a rising tide.
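The waist-deep floating claim follows from simple buoyancy arithmetic. Here's a minimal sketch assuming ballpark densities of about 985 kg/m³ for the human body and 1,600 kg/m³ for a sand-water suspension; neither figure comes from this incident:

```python
# Archimedes' principle: a floating body sinks until it displaces its own
# weight, so the submerged fraction = body density / fluid density.
body_density = 985.0        # kg/m^3, rough human-body average
quicksand_density = 1600.0  # kg/m^3, rough sand-water suspension value

submerged = body_density / quicksand_density
print(f"Submerged fraction: {submerged:.0%}")  # ~62%, roughly waist-to-chest deep
```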
While it's difficult to free oneself from quicksand without assistance, it’s not impossible. Leaning backward when stepping in quicksand can prevent the feet and legs from sinking in further, and it's possible to escape by slowly moving each limb back and forth. Unfortunately, O'Brien had to wait for firefighters to arrive to pull him out, but the "emotional torment" that he went through with his rock-hunting partner apparently brought them closer together, allowing them to express their latent romantic feelings for one another. Who needs dating apps when you have quicksand?
[Image description: A close-up photo of sand.] Credit & copyright: Annajohepworth, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Mind + Body Daily Curio
Oui, more syrup, please! French toast is one of the most popular breakfast dishes on Earth, but it doesn’t actually come from France. This indulgently sweet dish has surprisingly ancient origins, and might have gotten its name not from a country, but from one enterprising man.
French toast is made by coating thick slices of bread in an egg batter. Often, this batter is made by whisking eggs with milk or cream, though sometimes no dairy is involved. The bread is then fried in butter, and spices like cinnamon or vanilla are sometimes added. French toast is often served with powdered sugar, maple syrup, and a side of butter. It’s a simple, inexpensive dish that can even be a way to use up leftover bread, since stale bread maintains its shape better when wet.
French toast has real staying power, as it dates all the way back to ancient Rome. A recipe for Aliter Dulcia, or “another sweet dish,” appeared in Rome's famous De Re Coquinaria cookbook. The book was compiled in the 4th or 5th century C.E. but contained recipes that might have dated all the way back to the 1st century. While this dish didn’t involve butter, it’s still recognizable as French toast, as it called for dipping stale bread in a wash of eggs and milk, then frying it and serving it with honey. European recipes for French toast, under various names, have been found throughout the centuries, from 13th-century France to 17th-century Spain. The French version of the dish, pain perdu, or “lost bread,” became particularly popular and remains a staple of French cuisine to this day.
The dish’s French popularity might lead one to believe that its American name was given as a tribute. However, the most popular story of how French toast got its name has to do with a man named Joseph French, an innkeeper living in the American colonies in the 18th century. Supposedly, he recreated a European recipe for the dish, then named it after himself, calling it “French’s toast.” Somewhere along the line, people stopped using the possessive name, and simply called it French toast. We’ll never know for sure whether this story is true; American chefs might have dubbed it “French” toast simply to make it sound fancier. If you want to sell stale bread, sometimes it pays to get creative.
[Image description: A plate of French toast with a round ball of butter in the middle.] Credit & copyright: Benoît Prieur (1975–), Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Biology Daily Curio #3070
The colors of spring are always a sight to behold, but some of them we can’t actually see. While we’ve known for decades that there are certain colors the human eye can’t detect, new research has uncovered a previously unknown one—and has even helped a few people to see it.
Humans see color because of light-sensitive cells in our eyes called cones. Some cones are sensitive to long wavelengths of light, some to medium wavelengths, and others to short wavelengths. While short-sensitive cones can be stimulated by white-blue light and long-sensitive cones by red light, medium-sensitive cones can’t be stimulated by any light without also stimulating other cones. To see what would happen if these medium-sensitive cones were stimulated directly, U.S. researchers first mapped the retinas of five study participants, noting the exact positions of their cones. Then, a laser was used to stimulate only the medium-sensitive cones in each person’s eye.
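Why can't ordinary light manage this? Because the medium (M) cone's sensitivity curve sits between, and overlaps, the short (S) and long (L) curves, so any wavelength that excites M also excites a neighbor. The sketch below makes that concrete using crude Gaussian stand-ins for the real sensitivity curves; the peak wavelengths are approximate and the curve shapes are simplified for illustration:

```python
import math

# Crude Gaussian stand-ins for human cone sensitivities. Peaks (nm) are
# approximate; real curves are broader and asymmetric. Illustration only.
PEAKS = {"S": 445.0, "M": 535.0, "L": 565.0}

def responses(wavelength, width=60.0):
    return {cone: math.exp(-((wavelength - peak) / width) ** 2)
            for cone, peak in PEAKS.items()}

def m_isolation(wavelength):
    """How much the M response exceeds the stronger neighboring cone."""
    r = responses(wavelength)
    return r["M"] - max(r["S"], r["L"])

# Scan the visible spectrum for the most M-selective wavelength.
best = max(range(380, 701), key=m_isolation)
print(best, {k: round(v, 2) for k, v in responses(best).items()})
# Even at the best wavelength (~505 nm here), S and L still fire at
# roughly half the M level -- which is why seeing "olo" required
# laser-targeting individual cones instead of shining light at the eye.
```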
Participants reported seeing a large patch of color different from any they’d seen before, described as an impossibly saturated blue-green. The new color has been dubbed “olo,” a name based on the binary code 010, indicating that only the medium-sensitive cones were activated. To ensure that participants had actually seen the same color, they each took color-matching tests. When given an adjustable color wheel and asked to match it as closely as possible to olo, all participants selected a teal color.
As amazing as the results seem, some scientists are dubious that olo is actually its own color, claiming that, though it can only be seen via unnatural stimulation, the color itself is just a highly saturated green. As much as we’d love to see whether they’re right, we’re not quite ready to have lasers flashed in our eyes. For now, we’ll stick with regular, springtime green.
[Image description: A digital illustration representing rainbow light shining through a triangular, white prism.] Credit & copyright: Author-created illustration. Public Domain.
-
Sports Daily Curio #3069
When you're at a baseball game, the only sound sweeter than the crack of a bat is the peal of a pipe organ. On April 26, 1941, a pipe organ was played for the first time at a professional baseball game, creating an unexpected musical tradition that has lasted for decades. While the sound of a pipe organ is heavily associated with baseball today, live music was once something of a novelty at large sporting events. The first musician to play a pipe organ at the ballpark was Roy Nelson, who entertained fans at Wrigley Field in Chicago. At the time, the music couldn't be played over the loudspeakers, so Nelson's performance was a pre-game event. Due to copyright concerns (since the games were being aired on the radio), Nelson was only able to play for two days, but the trend caught on anyway. In 1942, Gladys Goodding, a silent film musician who had experience playing large events at Madison Square Garden, became the first professional organist in baseball history. Her music, which punctuated different parts of the game and encouraged audience participation, made her something of a legendary figure; she even earned the nickname "The Ebbets Field Organ Queen" during her tenure playing for the Brooklyn Dodgers. Her career as a baseball organist lasted until 1957, when the team moved to Los Angeles. Other ballparks wanted musicians of their own, and even other sports were eager to get in on the action. For example, organist John Kiley played for the Celtics basketball team, the Red Sox baseball team, and the Bruins ice hockey team in Boston. While organ music ultimately didn't catch on in other sports, today it's associated with baseball games almost as much as it's associated with churches. Of course, dedicated fans would probably tell you that there's little difference between baseball and religion.
[Image description: A black-and-white photo of a baseball on the ground.] Credit & copyright: Rachel Xiao, Pexels
-
World History Daily Curio #3068
Nobody likes Mondays, but you’ve probably never had one as bad as this. On Easter Monday in 1360, a deadly hailstorm devastated English forces in the Hundred Years' War so badly that they ended up signing a peace treaty. The Hundred Years' War between England and France was already a bloody conflict, but on one fateful day in 1360, death was dealt not by soldiers but by inclement weather. King Edward III of England had crossed the English Channel with his troops and was making his way through the French countryside, pillaging throughout the winter. In April, Edward III's army was approaching Paris when they stopped to camp outside the town of Chartres. They weren't in any danger from enemy forces, but they would suffer heavy losses regardless. On what would come to be known as "Black Monday," a devastating hailstorm broke out over the area. First, a lightning strike killed several people, then massive hailstones fell from the sky, killing an estimated 1,000 English soldiers and 6,000 horses.
It might seem unbelievable, but there are modern records of hailstones as wide as eight inches and weighing nearly two pounds. That’s heavy enough to be lethal. Understandably, the hailstorm was seen as a divine omen, and Edward III went on to negotiate the Treaty of Brétigny. Under the treaty, Edward III renounced his claims to the throne of France and was given French territory in exchange. The treaty didn't end the Hundred Years' War for good; the conflict started up again just nine years later, after the King of France accused Edward III of violating its terms. The war, which began in 1337, didn’t officially conclude until 1453. Maybe weirder weather could have ended it sooner!
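Just how lethal? A back-of-envelope terminal-velocity estimate makes the point; all input figures below are rough, assumed values, not measurements from 1360:

```python
import math

# Impact energy of a large hailstone falling at terminal velocity,
# where weight balances air drag: v = sqrt(2mg / (rho * Cd * A)).
mass = 0.9        # kg, roughly a two-pound hailstone
diameter = 0.20   # m, roughly eight inches across
rho_air = 1.2     # kg/m^3, air near the ground
cd = 0.5          # drag coefficient, typical for a sphere
g = 9.81          # m/s^2

area = math.pi * (diameter / 2) ** 2
v = math.sqrt(2 * mass * g / (rho_air * cd * area))
energy = 0.5 * mass * v ** 2
print(f"~{v:.0f} m/s impact speed, ~{energy:.0f} J")  # ~31 m/s, ~420 J
# For comparison, a major-league fastball carries on the order of 130 J.
```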
[Image description: Hailstones on ice.] Credit & copyright: Julia Filirovska, Pexels
-
FREEUS History Daily Curio #3067Free1 CQ
San Francisco is no stranger to earthquakes, but this one was a particular doozy. This month in 1906, the City by the Bay was devastated and permanently reshaped by what would come to be known as the Great 1906 San Francisco Earthquake. On the morning of April 18, 1906, at 5:12 AM, many San Francisco residents were woken by foreshocks, smaller earthquakes that can occur hours to minutes ahead of a larger one. Just 20 seconds or so later, an earthquake with a magnitude of 7.9 hit the city in earnest, shaking the ground for a full minute. The epicenter was on the San Andreas Fault, where 296 miles of the fault's northern portion ruptured, sending out a destructive quake that could be felt as far north as Oregon and as far south as Los Angeles. The earthquake was so powerful that buildings toppled and streets were torn apart, but that was only part of the event’s destructive power. There's a reason it's sometimes called the Great San Francisco Earthquake and Fire: the ensuing flames, caused by burst gas pipes and upended stoves, did almost as much damage as the earthquake itself. Over the course of four days, 28,000 buildings across 500 blocks were reduced to rubble and ash, around $350 million worth of damage. Even so, the loss of property paled in comparison to the loss of life: an estimated 3,000 people died, and around 250,000 were left homeless in the aftermath. The disaster had just one silver lining: geologic observations of the fault and surveys of the devastation proved to be a massive help in understanding how earthquakes cause damage, and the city was quickly rebuilt to be more earthquake- and fire-resistant. No matter what, though, the real fault lies with the fault.
[Image description: A black-and-white photo of San Francisco after the 1906 earthquake, with many ruined buildings.] Credit & copyright: National Archives Catalog. Photographer: Chadwick, H. D. (U.S. Gov War Department. Office of the Chief Signal Officer.) Images Collected by Brigadier General Adolphus W. Greely, Chief Signal Officer (1887-1906), between 1865–1935. Unrestricted Access, Unrestricted Use, Public Domain. -
FREEMind + Body Daily CurioFree1 CQ
Fire up the grill: backyard barbecue season is nearly upon us! In many places in the U.S., no outdoor get-together is complete without a scoop of Boston baked beans. This famous side’s sweet flavor sets it apart from other baked beans. Its origins, though, are anything but sweet.
Like other kinds of baked beans, Boston baked beans are made by boiling beans (usually white common beans or navy beans) and then baking them in sauce. The sauce for Boston baked beans is sweetened with molasses and brown sugar, but also has a savory edge since bacon or salt pork is often added.
Boston baked beans are responsible for giving their titular city the nickname “Beantown.” In the years leading up to and directly following the Revolutionary War, Boston boasted more molasses than any other American city, but Bostonians didn’t produce it themselves. The city’s coastal position made it a major hub of the Triangle Trade between the Americas, Europe, and Africa. In this brutal trade, European goods were shipped to Africa and traded for enslaved people, who were then shipped to the Americas to farm and produce goods like cotton and rum, which were in turn shipped to Europe. Boston’s molasses was produced by enslaved people on sugar plantations in the Caribbean, then used in Boston to make rum as part of this trade. Leftover molasses became a common household item in the city, and was used to create many New England foods that are still famous today, from molasses cookies to Boston baked beans.
In the late 19th century, large food companies began using new, industrial technology to mass produce and can goods. This included foods that were only famous in specific regions, like Boston baked beans. Once they were shipped across the country, Boston baked beans became instantly popular outside of New England. Today, most baked beans on grocery shelves are sweet and syrupy, even if they don’t call themselves Boston baked beans. If you get popular enough, your name sometimes dissolves into the sauce of the general culture.
[Image description: A white bowl filled with baked beans and sliced hot dogs.] Credit & copyright: Thomson200, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEScience Daily Curio #3066Free1 CQ
"Don't you wish there was more plastic at the beach?", said no one ever. Those perturbed by plastic pollution in Australia now have a reason to rejoice, as new research shows that coastal plastic waste levels there have dropped by nearly 40 percent since 2013. In Australia, as in many places, most waste found on beaches (around 75 percent) is plastic. This waste is unsightly, hazardous to people (especially children), and potentially deadly for wildlife. It's good news, then, that researchers in Australia are finding less and less of the stuff every year. According to CSIRO, Australia's national science agency, there has been a 39 percent reduction in plastic waste found in coastal areas. Researchers examined a number of different Australian locales, including Hobart in Tasmania, Newcastle in New South Wales, Perth in Western Australia, Port Augusta in South Australia, Sunshine Coast in Queensland, and Alice Springs in the Northern Territory. They didn't just look at beaches by the sea, either. Areas surveyed included inland, riverine, and coastal habitats, and all were found to have reduced plastic waste. Moreover, there was a 16 percent increase in areas that were completely free of plastic waste. CSIRO researchers also identified the most common types of plastic waste: polystyrene and cigarette butts, which accounted for 24 percent and 20 percent. Other common forms of waste included beverage containers (bottles and cans) and food wrappers, as well as plenty of unspecified plastic fragments. This type of research has allowed for more focused efforts when it comes to waste collection and prevention, but CSIRO isn't ready to rest on their laurels just yet. They hope to achieve an 80 percent reduction of plastic waste by 2030 by identifying sources of waste and better understanding how it enters the environment. Australia's National Waste Policy also aims to recycle or reuse all plastic waste by 2040. As they say: waste not, want not.
[Image description: The surface of water under a sky at sunset.] Credit & copyright: Matt Hardy, Pexels -
FREEParenting Daily Curio #3065Free1 CQ
Are middle children really mediators? Are older children really the most responsible? Is there any truth to common stereotypes about birth order? A new study shows that a person's place among their siblings can affect their personality, but there's more to it than the stereotypes suggest. When a family has three or more children, conventional wisdom says that the eldest will be bold and independent, the middle child will be the peacemaker, and the youngest will be the most easygoing (because they’re able to get away with everything). Obviously, these archetypes don't always hold true, but birth order can contribute to someone's personality in surprising ways. Researchers in Canada conducted a large-scale study using the HEXACO framework, which measures six general traits: Honesty-Humility, Emotionality, Extraversion, Agreeableness, Conscientiousness, and Openness to Experience. Using data from almost 800,000 participants in various English-speaking countries, the researchers examined how birth order relates to personality.
When it came to Honesty-Humility and Agreeableness, second or middle children scored the highest, followed by the youngest and then the eldest. Those with no siblings scored the lowest of all, but they redeemed themselves somewhat in other areas: compared to those with siblings, only children scored higher in Openness to Experience and tended to have higher levels of intellectual curiosity. Overall, researchers found that those from larger families tended to be more cooperative and modest than those from smaller families, likely because they had to share more resources and settle more disputes growing up. That's not to say that birth order is the be-all and end-all when it comes to determining personality. In fact, the researchers pointed out that these statistical differences, while consistent, are small. They also noted that cultural differences might yield different results, and they hope to launch similar studies in non-English-speaking countries. Of course, there's probably no culture on Earth without sibling rivalry.
[Image description: Three dark red hearts on a pink background.] Credit & copyright: Author-created image. Public Domain. -
FREEEngineering Daily Curio #3064Free1 CQ
Chewing gum? Did you bring enough to share with everyone? Most chewing gums can only freshen your breath, but a new antiviral gum developed by researchers at the School of Dental Medicine at the University of Pennsylvania can fight the influenza virus and the herpes simplex virus (HSV). Influenza claims up to 650,000 lives per year, and while HSV isn't as deadly, the infection never goes away. According to the World Health Organization (WHO), around 3.8 billion people under 50 are infected with herpes simplex virus type 1 (HSV-1), while around 520 million people between the ages of 15 and 49 are infected with herpes simplex virus type 2 (HSV-2). HSV-1 is responsible for most cases of oral herpes, while HSV-2 is responsible for most cases of genital herpes. HSV-1 doesn't claim as many lives as influenza, but it's still the leading cause of infectious blindness in Western countries. Both influenza and HSV infections can go unnoticed or misdiagnosed, and in the case of HSV, many people can be asymptomatic for long periods of time.
Managing the spread of these diseases is a seemingly Sisyphean task, but the antiviral gum from the University of Pennsylvania might make that uphill climb a little easier. The special ingredient in the gum is the lablab bean, which is full of an antiviral trap protein (FRIL) that ensnares viruses in the human body and stops them from replicating. Studies show that chewing the gum can lower viral loads by 95 percent, significantly reducing the likelihood of transmission. Delivering the treatment via gum isn’t just a cute gimmick, either: prolonged chewing releases the FRIL consistently over time, increasing its effectiveness. The question remains, though: should the flavor be spearmint or something fruity?
[Image description: A piece of chewed gum in a foil wrapper.] Credit & copyright: ToTheDemosToTheStars, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEBiology Daily Curio #3063Free1 CQ
Social distancing? These birds have never heard of it! The annual spring migration of sandhill cranes in Nebraska had researchers concerned about the possibility of a bird flu super-spreader event, but those fears were thankfully put to rest. This year, a record-breaking 736,000 sandhill cranes gathered in central Nebraska. Conservationists and bird-lovers would normally hail this as a purely joyous occasion, but this year was a little different. That’s because the H5N1 virus, also known as bird flu, killed 1,500 sandhill cranes in Indiana earlier this year. It was only natural to be concerned about the much larger Nebraska gathering, which accounts for around 80 percent of the total sandhill crane population in North America. In such a large group, just a few sick birds would have been enough to cause devastation. Unfortunately, the danger for this year hasn’t completely passed: sandhill crane migration begins in February and continues through April, so there could still be some latecomers carrying the virus.
That would be especially bad news since sandhill cranes have a hard time recovering from population dips. The cranes can begin breeding at age two, but many of them wait until they are at least seven years old. The cranes mate for life, continuing to breed for upwards of 20 years, but chicks take a while to become independent. Hatchlings stick close to their parents and only strike out on their own after seven months. Once they mature, sandhill cranes are some of the largest birds in North America, measuring over 47 inches long with a wingspan of nearly 78 inches. If only they could use that impressive wingspan to keep at wing’s length from one another.
[Image description: A sandhill crane, with white feathers and some red on its head, flies over a snowy landscape.] Credit & copyright: National Park Service, Jacob W. Frank. NPGallery Digital Asset Management System, Asset ID: 06545f00-50a9-41bd-9195-f8663386cb17. Public domain: Full Granting Rights.