Curio Cabinet / Daily Curio
-
Mind + Body Daily Curio
Fruit salad, bird beaks…the origins of pico de gallo are a lot more complicated than those of most salsas! This beloved dip is truly ancient, with roots in the Aztec Empire. It’s also surprisingly healthy and fairly simple to make.
Pico de gallo is a type of salsa, but unlike other salsas, which usually feature a liquid base made from blended vegetables, pico de gallo is “dry”, meaning that it has little to no liquid. Rather, it's simply a combination of finely chopped tomatoes, onions, and peppers (most often serrano peppers). It is seasoned with salt and cilantro, often with lime juice squeezed on top. Like other salsas, it can be used as a dip for chips or as a garnish on other dishes. It’s important to know what you’re ordering ahead of time though, as “pico de gallo” (which means “rooster’s beak”) can refer to a lot of different salads or salsas, including a popular type of fruit salad made with chopped fruit and red chili powder. As for how all these dishes got their name, it’s not about the flavor or the ingredients, but about how they were first eaten. It was once common to eat pico de gallo by pinching a bit of it between the thumb and index finger, making one’s hand look like the head and beak of a rooster.
Pico de gallo has been enjoyed in Mexico for a long time, starting with the Aztecs, whose empire flourished between the 14th and 16th centuries. Their version was a bit simpler, made with tomatoes and ground peppers, but over time it spread throughout the Yucatan Peninsula and the rest of Mexico. Centuries later, during the Mexican Revolution of 1910–1920, many Mexican civilians fled to the U.S. to escape the violence. During the 1920s, Mexican restaurants and cantinas began popping up in cities where Mexican refugees settled, and America’s love affair with Mexican food began in earnest. Today, Mexican restaurants can be found all over the world, and pico de gallo is a staple in just about all of them. You might not literally pinch it today, but a pinch of this salsa still adds a lot of flavor to a meal.
[Image description: A small bowl of pico de gallo garnished with a green vegetable leaf.] Credit & copyright: Shameel mukkath, Pexels -
US History Daily Curio #2914
Talk about a blast from the past. Archaeologists just unearthed four musket balls in Concord, Massachusetts, where the first battle of the American Revolution took place. This centuries-old ammunition may look primitive by modern standards, but musket balls were state-of-the-art at the time. Firearms, and muskets specifically, have been around a long time. The first weapon that used gunpowder and could be considered a firearm was invented in the 14th century, although it resembled a small cannon more than a gun. Over the following centuries, firearms became increasingly complex, powerful, and, crucially, portable. By the 16th century, the musket was the weapon of choice for many militaries in the Western world. Muskets were smoothbore, muzzle-loading firearms, meaning that the inside of the barrel had no rifling grooves and that the weapon had to be loaded from the muzzle, the open end of the barrel. The earliest muskets used a matchlock, a mechanism in which a piece of lit cord ignited the gunpowder.
By the time of the Revolutionary War, matchlocks had been replaced by flintlocks, which used a piece of flint to create a spark. The “lock” in flintlock refers to the firing mechanism itself, an intricate piece of metalworking that not many gunsmiths could make. Basically, the lock included an L-shaped piece called the hammer, which held the flint. When the trigger was pulled, the flint scraped against a piece of steel called the frizzen, showering sparks into the pan, a small open tray of gunpowder leading into the barrel. The resulting ignition then fired a musket ball. To reload, the shooter had to pour gunpowder into the barrel, pack another musket ball down the barrel with a ramrod, pull the hammer back, and pour more gunpowder onto the pan. If that sounds like a lot just to fire one shot, that’s because it was. Experienced soldiers with a lot of practice could fire one shot every 15 seconds under ideal conditions, but most took 30 seconds or more between shots. Even then, muskets had an effective range of around 50 yards, beyond which they became increasingly inaccurate. If an enemy soldier got too close for comfort, the options were either to run or to charge with the musket’s bayonet. All that technology, and you could still essentially end up fighting with sharp sticks.
[Image description: A painting of a uniformed soldier of the American Continental Army in the Revolutionary War cleaning his musket.] Credit & copyright: Private, 1st Georgia Continental Infantry, Charles M. Lefferts (1873–1923), Wikimedia Commons. This work is in the public domain in its country of origin and other countries and areas where the copyright term is the author's life plus 70 years or fewer. -
Sports Daily Curio #2913
They say tough times don’t last, but tough people do. Well, these are some tough people! Two rowers from the U.K., Charlotte Harris and Jessica Oliver, have just completed the World’s Toughest Row – Pacific, a grueling race that took them 2,800 miles across the ocean. The World’s Toughest Row isn’t for the faint of heart. Taking competitors from Monterey, California, to Hanalei Bay on the Hawaiian island of Kauai, the race can take a team months to complete, if they complete it at all. The voyage is so harrowing that before the race was established just last year, only 82 people in 33 boats had ever managed the crossing. This year, competitors from around the world formed 12 crews consisting of pairs, trios, and teams of four, with the majority of them being women. Wild Waves, the pairs team of Harris and Oliver, finished ahead of most of the pack, completing the voyage in 37 days, 11 hours, and 43 minutes. That made them the first pairs team to finish the race and therefore the fastest duo ever to row across the Pacific. They came in second overall, losing out to the Salty Slappers, a four-man team from Britain who finished in 36 days.
The name “Pacific” might mean “peaceful,” but the voyage is anything but. Over the 2,800 miles, Wild Waves rowed over 40-foot waves and sometimes lost an entire day’s progress to unfavorable conditions. Besides natural hazards, they also had to watch out for man-made threats, avoiding a head-on collision with a tanker by just 33 feet. Of course, the two were experienced rowers who had already completed the Talisker Whisky Atlantic Challenge, a 3,000-mile race across the Atlantic Ocean, back in 2021. On that voyage, they finished with a time of 45 days, 7 hours, and 25 minutes, which broke the previous women’s record by five days. They might be rowers, but they’re not exactly wet behind the ears.
[Image description: A close-up photo of the ocean’s surface with a sunset above.] Credit & copyright: Sebastian Voortman, Pexels -
Political Science Daily Curio
Just when you thought this election couldn’t get any stranger! On July 21, President Biden announced that he was dropping out of the Presidential race, leaving just a few months for Vice President Kamala Harris to take over and campaign as the Democratic nominee. Yet, Biden isn’t the first U.S. President to step aside during or just before an election year. It’s actually happened a few times, first way back in 1844 and most recently (before Biden) in 1968.
The first President to actually drop out of an ongoing election was John Tyler, in 1844. Tyler had never been particularly popular. In fact, he’d only become President in 1841, after President William Henry Harrison died unexpectedly. Tyler lost a lot of public support due to his belief that the U.S. should annex Texas, which at the time was an independent republic that Mexico still claimed as its own. Both the Democratic party and the Whig party refused to nominate Tyler during the 1844 election, and he was forced to drop out. In 1852, Millard Fillmore, the 13th U.S. President, also failed to secure his party’s nomination due to a loss of political support. Like Tyler, he had taken over the Presidency after the death of a sitting President, Zachary Taylor, under whom he had served as Vice President. In 1856, the Democratic party refused to re-nominate President Franklin Pierce, who had angered party leadership with his support for the Kansas-Nebraska Act. The Democratic party of the time was pro-slavery, and they viewed the Act as an anti-slavery law. Thus, Pierce was effectively forced to drop his re-election bid. Even Andrew Johnson, Vice President to Abraham Lincoln himself, wasn’t immune from political hardship. In 1868, he was refused the Democratic nomination after being impeached for removing the Secretary of War from office. In 1884, Chester Arthur was similarly forced to drop his re-election bid after being denied the Republican nomination. The reason? Arthur opposed political kickbacks and had sought to ban them.
The first Presidential dropout of the 1900s was Harry S. Truman, whose popularity tanked due to a series of scandals and a contentious political agenda. Truman left partway through the 1952 election. Then, in 1968, Lyndon B. Johnson announced that he wouldn’t seek re-election just a few months before said election was to take place. His policies regarding the Vietnam War had made him extremely unpopular. All that is to say…Biden’s not exactly alone. Rather, he’s in good (but odd) political company.
[Image description: The front of the White House in Washington, D.C.] Credit & copyright: Benoît Prieur (1975–), Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
Science Daily Curio #2911
Anyone worried about the ongoing helium shortage can breathe a sigh of relief…and sound like a chipmunk afterwards. Helium might be one of the most abundant elements in the universe, but it’s hard to come by on Earth. Thankfully, a massive, easy-to-access deposit of the gas has been discovered in Minnesota. Considering how common helium-filled balloons are at parties and outdoor events, it might seem strange that there’s a shortage of the stuff. The truth is, the lighter-than-air gas is in relatively limited supply but in high demand, and it’s not because of birthday decorations. Helium is actually a fantastic coolant and is used in everything from rockets to nuclear reactors. Super-cooled liquid helium is also used to chill superconducting magnets, which makes it a vital component of MRI machines. Because helium is so critical to a variety of industries, it would be a disaster if the existing supply were to run out. This has been a known issue for a long time, which is why the U.S. created the Federal Helium Reserve almost a hundred years ago to ensure a steady supply.
Once known sources of helium run out, it’s effectively gone, since it takes millions of years for the gas to be churned out by the nuclear reactions far under Earth’s surface. Manufacturing it, on the other hand, is costly and only produces small amounts. But now, just outside the small city of Babbitt, Minnesota, a company called Pulsar Helium discovered a huge reservoir of the gas that’s almost too good to be true. Trapped 1,750 to 2,200 feet underground, the deposit contains helium concentrations ranging between 8.7 percent and 14.5 percent. That might not seem like much, but even concentrations as low as 0.3 percent are considered profitable. Additionally, the gas naturally flows to the surface at a rate of 821,000 cubic feet per day, making extraction a breeze. Since the helium is contained in an underground pocket, there’s no need to build additional storage on the surface. The gas just needs to be pulled out as needed. So, don’t fret over helium—the party’s not over yet.
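For a rough sense of scale, here is a back-of-the-envelope sketch in Python using only the figures above (the flow rate and concentration range come from the article; the calculation itself is purely illustrative):

    # Rough estimate of recoverable helium from the reported Babbitt, Minnesota figures.
    flow_cubic_feet_per_day = 821_000                          # total gas flow reported at the deposit
    helium_fraction_low, helium_fraction_high = 0.087, 0.145   # 8.7 to 14.5 percent helium

    helium_low = flow_cubic_feet_per_day * helium_fraction_low
    helium_high = flow_cubic_feet_per_day * helium_fraction_high
    print(f"Helium: roughly {helium_low:,.0f} to {helium_high:,.0f} cubic feet per day")

By that rough math, the natural flow could carry somewhere around 70,000 to 120,000 cubic feet of helium per day, before any processing losses.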
[Image description: A bundle of pastel balloons floating.] Credit & copyright: Polina Tankilevitch, Pexels -
Mind + Body Daily Curio
There aren’t many meals that are as juicy as they are crispy, but the humble BLT is one of them. While this simple staple is undeniably one of America’s most popular sandwiches, it probably didn’t originate in the U.S. Rather, it owes its existence to the tradition of afternoon tea in England.
A BLT (which stands for bacon, lettuce, and tomato) is a sandwich made with those three ingredients along with mayo on white bread. It’s about as simple as a sandwich can get, and that’s exactly why it’s so popular: the natural flavors of the ingredients pair well together, plus they’re cheap and easy to come by.
While BLTs are a cherished but unfancy sandwich in the modern U.S., their ancestors were a bit posher. In Victorian England, afternoon tea was a very popular tradition (and still is, in some parts of the UK). It involved a small afternoon meal of hot tea, scones, and bite-sized “finger sandwiches.” Sandwiches featuring cheddar cheese and tomato chutney were very popular, as were sandwiches featuring bacon. It’s not hard to imagine that some of these ingredients might have come together at some point. By the 1900s, people in England and the U.S. had begun eating club sandwiches, which include all the ingredients of a BLT plus turkey. It’s no wonder, then, that many food historians consider the club sandwich to be the direct ancestor of the BLT. References to “bacon sandwiches,” or what we would call BLTs, began popping up in recipe books and newspapers in the 1920s. Then, in the 1940s, as supermarkets began to spread throughout the country, BLT ingredients became easier than ever to get. Wartime rationing also made BLTs attractive, since they didn’t require many ingredients to make. Still, the sandwich’s iconic name wasn’t used much until around the 1950s, as diners surged in popularity. Busy diner staff often abbreviated the names of certain dishes to make workflow faster, and this is likely where the name “BLT” originated—though we’ll never know for certain. What we do know is that, by the 1970s, the name had stuck, and the sandwich remains an American icon to this day. Hey, a simple sandwich deserves a no-fuss name!
[Image description: A close-up photo of BLT sandwiches.] Credit & copyright: Nano Erdozain, Pexels -
Science Daily Curio #2910
Some pills are hard to swallow; some shots are hard to give. On this day in 1892, when cholera outbreaks were claiming countless lives in 19th-century India, scientist Waldemar Haffkine risked his life by testing a new vaccine for the disease on himself. Born in 1860 in the city of Odessa, then part of the Russian Empire, Haffkine studied under Nobel laureate Ilya Mechnikov before becoming one of the preeminent scientists in the growing field of bacteriology. During the 1890s, cholera outbreaks were killing hundreds of thousands of people in Asia and Europe. After developing a successful vaccine in 1892, Haffkine tested it first on animals, then on himself. However, he didn’t have the means to test the vaccine on a larger scale, so in 1894, he traveled to Calcutta, India, to further study the disease and test his vaccine. But there were a few problems he had to overcome. Firstly, Haffkine wasn’t a medical doctor, and his vaccine wasn’t taken seriously by the British medical establishment. As India was under British rule at the time, he couldn’t directly appeal to Indian authorities. Even if he could have, though, there was another issue: that of trust. People in India were suspicious of foreign experts after the British government forced medical procedures on the populace. Understandably, when Haffkine showed up with what was all but a miracle cure for cholera, he was treated with the same suspicion. So he started small, first inoculating 116 of the 200 people in a community where everyone shared a common, contaminated water source. After proving the vaccine’s efficacy, he worked with Indian doctors and nurses to gain the people’s trust, but there was one more thing he did to prove that he meant no harm and that his vaccine was safe: he injected himself in front of crowds. By showing that he was willing to take the same shot as everyone else, he managed to convince the city that his vaccine was safe. Soon, people were lining up all over Calcutta to receive it. How’s that for a practical demonstration?
[Image description: Three medical needles against a yellow background.] Credit & copyright: Karolina Kaboompics, Pexels -
Travel Daily Curio #2909
Which would surprise you more—that there’s a cheese museum in France or that it only just opened? French cheeses are beloved around the world, and now the country has finally opened a museum dedicated to the dairy delight. Every year, the average French person consumes around 50 pounds of cheese. Pierre Brisson is likely above that average, but it’s not his appetite that’s on display at the Musée du Fromage, which he opened in June. There, visitors can learn about the hundreds of varieties of French cheeses and how they’re made. Brisson conceived of a museum dedicated to cheese after he moved to Paris and found that, despite its strong ties to French cuisine, cheese seemed enormously underappreciated. He told The Guardian, “When I moved to Paris I realized there were lots of places promoting wine, its culture, and how it is made and lots of shops selling cheese, but nothing showing people how it is made.” He also mentioned that, when most French children are asked where cheese comes from, they respond, “from the supermarket.”
That just won’t do in a country that has 56 officially recognized registered varieties of cheese and hundreds more unregistered. By some estimates, there may be as many as 1,500 varieties, considering many people make their own cheeses locally or even at home without registering them. To promote the virtues of the noble cheeses, Brisson’s museum not only teaches visitors about cheese but makes the stuff in-house. Among the museum’s staff are six cheesemakers who educate visitors on the process of making cheese and how different milks affect the final product. According to the experts, the cow’s diet and even mood can imbue their milk with distinct properties that show up in cheese. For anyone interested, the Musée du Fromage is located in Paris. If you happen to find yourself in the City of Light, there’s no whey you should miss it.
[Image description: A glass of red wine and a chunk of bleu cheese on a cutting board. A small, French flag made of paper is sticking out of the cheese.] Credit & copyright: Polina Kovaleva, Pexels -
Writing Music Daily Curio #2908
If you feel like pop music just isn’t as good as it used to be, it could be a sign that you’re over the hill…or maybe you’re actually on to something. A recent study analyzing top hits from 1950 to 2023 found that pop melodies have actually grown less complex over the decades. The study was conducted by researchers at Queen Mary University of London, who looked at the top five songs from each year. These songs were then broken down using algorithms and mathematical models that revealed how complex their melodies were based on their pitch and rhythm. According to the study’s co-author, Madeline Hamilton, “Conservatively, they have both decreased by 30 percent.” But this change didn’t take place all at once. The researchers found that there were actually three distinct “melodic revolutions” over the past seven decades.
While melodies steadily simplified over time for the most part, there were also several sudden drops in complexity. The first of these revolutions came in 1975, and much of the credit (or blame) goes to the sharp rise in popularity of disco and stadium anthems. A lesser drop occurred in 1996, possibly due to the rise of electronic music. Another sharp drop took place just four years later, in 2000. Researchers don’t believe that any specific genres are responsible for the decline; the changes probably have more to do with how most modern music is made. Over the decades, advances in technology made composing and recording easier and easier. The researchers wrote, “Today, with the accessibility of digital music production software and libraries of millions of samples and loops, anyone with a laptop and an internet connection can create any sound they can imagine.” Of course, the study doesn’t imply that complex music no longer exists on the pop charts, just that it’s not as common as it once was. Complex or no, those earworms still know how to wriggle their way in.
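The researchers’ actual models aren’t described here, but as a loose illustration of the general idea, the toy Python sketch below scores a melody by the Shannon entropy of its pitch intervals (one simple, hypothetical proxy for melodic complexity, not the method used in the study):

    from collections import Counter
    from math import log2

    def interval_entropy(midi_pitches):
        # Toy complexity proxy: entropy of the intervals between successive notes.
        intervals = [b - a for a, b in zip(midi_pitches, midi_pitches[1:])]
        counts = Counter(intervals)
        total = len(intervals)
        return -sum((c / total) * log2(c / total) for c in counts.values())

    # A repetitive two-note melody scores lower than one that moves around more.
    print(interval_entropy([60, 62, 60, 62, 60, 62, 60, 62]))  # about 1 bit
    print(interval_entropy([60, 64, 62, 67, 65, 72, 69, 71]))  # noticeably higher

A real analysis would also have to account for rhythm, which this sketch ignores entirely.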
[Image description: Labeled music notes on a treble and bass clef.] Credit & copyright: Jono4174, Wikimedia Commons. This work has been released into the public domain by its author, Jono4174, at the English Wikipedia project. This applies worldwide. -
Travel Daily Curio #2907
Save some green by going green in Denmark. The Danish capital of Copenhagen has launched a new initiative to attract tourists by rewarding them for being climate-conscious. There are a lot of nice amenities in the historic city of Copenhagen, and some of them will be free for certain, considerate travelers between July 15 and August 11 thanks to the new program, called CopenPay. Travelers can be rewarded for doing small things like picking up trash or riding a bike to their destinations. Tourists who bring a piece of plastic trash from the streets of Copenhagen to the National Gallery of Denmark can participate in a workshop where they’ll learn how to turn trash into art. Visitors arriving at the city’s heating plant by bike will get a complimentary trip down the ski slope that’s built into the uniquely-designed structure. Additional rewards include free meals, wine, and kayak trips. While it doesn’t seem like a big deal for an individual tourist to take a plastic bottle off the streets or ride a bike instead of driving a car, Denmark has a booming tourism industry, and that industry can take an environmental toll. In 2023, the country welcomed 15 million tourists who collectively spent around $17.5 billion during their visits. Most of those visitors traveled to Copenhagen. So, though the tourism board believes that only a small percentage of tourists will participate in CopenPay, the initiative would still alleviate issues of traffic congestion and carbon emissions. For tourists, the benefits are clear: they’ll get to save money while visiting the city, which can get expensive for budget-minded travelers. Aside from the freebies, they’ll also get to see Copenhagen in the same way as the 62 percent of citizens who commute by bike. What better way to spend your vacation than by living like a local?
[Image description: Colorful buildings by a canal with boats in Copenhagen.] Credit & copyright: Jebulon, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. The person who associated a work with this deed has dedicated the work to the public domain by waiving all of their rights to the work worldwide under copyright law. -
Mind + Body Daily Curio
Unsweetened tea? Perish the thought! In the American South, tea is two things: iced and sweet. Its history is also both long and short, depending on how you look at it. Iced tea couldn’t be kept on-hand in Southern households until the relatively recent advent of in-home refrigeration. At the same time, people have been drinking tea, in one form or another, for centuries.
Sweet tea is made with black tea, as opposed to green, white, or herbal varieties. Some people use loose tea leaves to brew their sweet tea, but tea bags are common too. While the tea is brewing, sugar, artificial sweetener, or simple syrup is added—usually a lot of it. In fact, an average glass of sweet tea contains about as much sugar as a similarly-sized glass of soda. Once the tea is brewed, it’s usually poured into a pitcher, which is refrigerated until the tea is chilled. It is then served over ice. Although many Southerners have strong opinions about what makes a “real” glass of sweet tea, there are still plenty of variations. Some people add fresh fruit or mint to their tea pitchers to impart flavor over time. Others get creative with how they brew their tea. One variation, called “sun tea”, involves adding tea bags to a pitcher of cold water and leaving it out in the sun for around two to four hours. Those who swear by this method say that the prolonged brewing time makes for a richer flavor.
While the ancient Chinese are thought to have brewed the first tea around 2737 B.C.E., there’s no written record of modern-style sweet tea until 1879, when a recipe for it was published in Housekeeping in Old Virginia. Since refrigeration wasn’t yet a household staple, the recipe called for letting tea reach room temperature before pouring it over ice. As for how an average American would have acquired ice in the 1800s, it certainly wasn’t easy. For people in New England at the time, cutting large blocks of ice from frozen ponds and shipping them all over the country in insulated rail cars was big business. Such ice cost a pretty penny, which meant that sweet tea was mainly a drink for special occasions…until refrigerators became common household appliances in the 1920s, that is. The rest is simply sweet, Southern, culinary history.
[Image description: A glass of sweet tea with lemon slices. Some blurred lemons sit in the background.] Credit & copyright: Adenrele Owoyemi, Pexels -
Relationships Daily Curio #2906
This advancement is a wheely big deal! The state of Tennessee is now providing all-terrain wheelchairs in many of its state parks in an effort to make them accessible for people with limited mobility. Due to a lack of accessible building entrances and ramps, getting around in a wheelchair can be tough, even in a city. The issue is only compounded out in nature. In place of pavement and ramps are sand, rocks, and uneven ground, none of which play well with conventional wheelchairs. That’s a big problem for disabled nature-lovers, especially in a place like Tennessee, with all its breathtaking mountains and wilderness. To alleviate this issue, the state’s Department of Environment and Conservation (TDEC), its Department of Disability and Aging, and Sunrise Medical came together to offer the latter’s Magic Mobility power chairs at state parks. Tennessee had already added the all-terrain wheelchairs at 12 state parks, and now 10 more have joined the list.
Unlike conventional wheelchairs, these mechanized devices feature oversized, pneumatic tires. Powered by a motor, the chairs can climb hills and make light work of sand, roots, rocks, and other obstacles that can make an area completely inaccessible. Moreover, each chair can be controlled by either the rider or by a caretaker, allowing it to accommodate park visitors who have extremely restricted mobility. There are a limited number of available chairs, though, and visitors may have to call in advance to make a reservation. The TDEC is hoping to increase the number of available chairs in the future, and the goal is to have them available at every state park. The Volunteer State isn’t alone in offering a more accessible option for their park-goers. Other states like Minnesota, Colorado, and South Dakota have similar programs for their visitors, though not all of them are as extensive as Tennessee’s. If this innovative idea gets around more, it could help plenty of people across the U.S. get around more too.
[Image description: A wooden hiking path through a forest.] Credit & copyright: Amanda Klamrowski, Pexels -
US History Daily Curio #2905
It may be the most important document in the U.S., but for much of its existence it was treated little better than a movie poster in a college dorm. Now that we’re done celebrating America’s independence, let’s learn about a more embarrassing part of U.S. history: the misadventures of the Declaration of Independence. The Declaration of Independence was the document that officially severed ties between the American colonies and Britain. While the two parties were already engaged in military conflicts, the Declaration’s purpose was to announce that the colonies were now a new nation: the United States of America. Yet, despite its historical significance, the document wasn’t handled very carefully—not even by the Founding Fathers themselves.
Written on parchment with iron gall ink, the Declaration wasn’t exactly made of long-lasting materials. Parchment is just untanned animal hide, after all, which is highly vulnerable to moisture. Iron gall ink also isn’t immune to the elements. As the American Revolution got into full swing, keeping the document out of British hands was a higher priority than conservation, so it was often tightly rolled up and carried around without much protection. Several direct copies of the Declaration were made…but making copies before copy machines wasn’t a gentle process. First, the document was dampened with water, then a copper plate was pressed onto it, transferring some of the ink onto the plate, which could then be pressed onto another piece of parchment. This, of course, faded the writing on the original Declaration, and the problem was exacerbated after the war, when it was hung in the National Portrait Gallery in Washington, D.C. directly facing a window. For decades, sunlight further faded the writing, until the Declaration was moved to the Library of Congress in 1921. Then the document was moved again during WWII, and had a corner torn off in the process. Because of these incidents and several misguided attempts at conservation, by the time it was placed in the National Archives behind bulletproof glass and state-of-the-art monitoring, the famous words on the Declaration were barely legible. Today, there are signs of tearing, creasing, and even a handprint on the yellowed parchment, making it look even older than it is. Maybe it’s a shame for it to be in such poor condition, or maybe it just adds some scrappy, American character.
[Image description: A painting of the Declaration of Independence being presented to Congress.] Credit & copyright: Declaration of Independence by John Trumbull (1756–1843). Wikimedia Commons. This work is in the public domain in its country of origin and other countries and areas where the copyright term is the author's life plus 100 years or fewer. This image is a work of an employee of the Architect of the Capitol, taken or made as part of that person's official duties. As a work of the U.S. federal government, all images created or made by the Architect of the Capitol are in the public domain in the United States. -
FREEBiology Daily Curio #2904Free1 CQ
A bigger body doesn’t always mean better cells. Scientists have long observed that, in wild animals, there’s a connection between body size and lifespan. Small animals, like insects, mice, and songbirds, tend to live much shorter lives than big ones, like horses, giant tortoises, and elephants. Yet, when it comes to domesticated dogs, the opposite holds true. Chihuahuas, the smallest domesticated dogs, live an average of 12 to 20 years, while Great Danes, the largest dogs, live just 8 to 10 years. The same pattern holds true for most small vs. large breeds, so what’s going on? Dr. Chen Hou, an associate professor of biological sciences at Missouri University of Science and Technology, recently published a paper explaining the phenomenon. According to his paper, “Energetic cost of biosynthesis is a missing link between growth and longevity in mammals,” published in the journal Proceedings of the National Academy of Sciences, it’s not so much about how large an animal grows, but how fast it does so. “The existing life history suggested a tradeoff between growth and somatic maintenance, i.e., more energy spent on growth would result in less for maintaining health. However, this study shows that in mammals, allocating more energy to making better cellular materials enhances the somatic maintenance and extends lifespan and therefore opens a door to explore this aspect,” Hou explains in the paper. In other words, the reason that large animals like elephants, which don’t reach full size until around age 15, have much longer lives than Great Danes, which reach full size at around age two, is that elephants’ bodies put more energy into creating healthy cells than into growing as quickly as possible. Healthy cellular materials are the key to long life. Unfortunately for Great Danes and other giant dog breeds, such as Irish Wolfhounds, their cellular health is compromised by their rapid growth. Fast cell division means more “mistakes” in the dividing cells’ DNA, which over time can lead to a multitude of health problems, including cancer. Hou believes that a deeper understanding of this connection between cell health, growth, and longevity could one day be used to better understand the aging process in humans. If only it could extend the lives of our biggest canine companions!
[Image description: A black-and-white Great Dane standing near a tree.] Credit & copyright: David Kanigan, Pexels -
FREEBiology Daily Curio #2903Free1 CQ
Rhinos might be tough animals, but even they need a helping hand sometimes. In South Africa, home to 15,000 rhinos, poaching is an ongoing crisis. To address the issue, conservationists from the University of the Witwatersrand are injecting radioactive isotopes into rhino horns. It may seem counter-intuitive to expose endangered animals to radioactive material, but there’s actually a very good reason for doing so. Every year, hundreds of rhinos are killed by poachers, and the number seems to be on the rise. Almost 500 rhinos were poached in 2023 in South Africa, an 11 percent increase from the previous year, and demand for the horns isn’t exactly on the decline. The horns have no practical purpose. They’re most commonly used in alternative medicines or as status symbols, and as rhinos themselves become scarcer, their horns rise in price, further incentivizing poaching.
Filling the horns with radioisotopes, however, renders them potentially deadly for human consumption and may even set off radiation detectors at border posts that smugglers often travel through. In short, it makes the horns too risky to handle or use even if poachers get their hands on them. The novel endeavor is part of the Rhisotope Project, led by James Larkin of the University of the Witwatersrand and based near a rhino orphanage. As well-intentioned as the Rhisotope Project is, not everyone is convinced that it will be effective. Critics of the project say that poachers and smugglers often avoid airports and border checkpoints where the radioactive horns might be detected. However, if the method does work, it would be much more cost-effective than the current practice of de-horning (trimming the horns every few years to devalue them). Larkin and his colleagues are injecting 20 rhinos with radioisotopes as a practical test. While it may sound dangerous, there’s actually no threat to the rhinos themselves. The amount of radioactive material isn’t enough to harm the animals, and the rhinos are placed under sedation while the researchers drill a hole in which to place the radioisotopes. It’s a medical procedure with no “charge”, so to speak.
[Image description: Two rhinos eating in a field.] Credit & copyright: Wikimedia Commons, Komencanto, modified by ArtMechanic. This work has been released into the public domain by its author, Komencanto. This applies worldwide. -
FREEMind + Body Daily CurioFree1 CQ
Happy belated Fourth of July! If a local fair was part of your celebrations this year, chances are good that you enjoyed some cotton candy with the fireworks. Highly portable and able to be served on sticks, in bags, or as a garnish on the rim of cocktails, cotton candy is a staple at American fairs. That’s fitting, since cotton candy made its worldwide debut at the 1904 World’s Fair in Saint Louis. Funnily enough, despite the fact that this sweet treat isn’t great for your teeth, it was actually invented by a dentist.
Cotton candy, sometimes called candy floss outside of the U.S., is a confection made from spun sugar. Sugar is liquefied with heat and spun through small holes, forming long threads that are gathered into fluffy masses. Food coloring (most popularly pastel pink and blue) and flavoring are often added for variety. Though pre-bagged cotton candy is popular, so is the fresh-spun variety, for which watching a vendor create the candy in a large, cylindrical tub is part of the experience.
Although historical records show that spun sugar has existed in various forms since at least the 1400s, it was very difficult to make before the 1897 invention of the cotton candy machine. That machine was created in Tennessee by confectioner John C. Wharton and dentist William Morrison. The duo debuted their sweet treat (which they called “fairy floss”) at the 1904 World’s Fair, to a crowd who had mostly never seen spun sugar. To say it was a hit would be an understatement. They sold over 68,000 boxes of it, and cotton candy machines were soon in high demand. In 1921, following the proud tradition of dentists making cotton-candy-related breakthroughs, a dentist from New Orleans named Joseph Lascaux coined the term “cotton candy” after inventing a machine similar to the original cotton candy machine. By the 1970s, automatic cotton candy machines made it possible for packages of the fluffy confection to be sold in stores. Today, cotton candy is sold all over the world, but it still retains its roots as a fair food. Personally, we think it’s still best enjoyed alongside a fresh corndog.
[Image description: A small piece of pink cotton candy against a hot pink background.] Credit & copyright: Nataliya Vaitkevich, Pexels -
FREEMind + Body Daily Curio #2902Free1 CQ
These instant noodles are causing lasting problems. Officials at the National Park Office of Mount Halla in South Korea are urging hikers to mind where they throw out their leftover ramen broth, as the salty soup has been found to be detrimental to wildlife. Mount Halla, located on Jeju Island, is the tallest mountain in South Korea and a popular hiking destination. In recent years, the hike has grown more popular than ever thanks to social media, and there’s even a trend where hikers post photos of themselves on the trail eating cups of instant ramen. It seems harmless enough, but the trend has led to some unexpected problems. According to officials, visitors have been dumping between 26 and 31 gallons of leftover ramen broth a day on the ground, and while that might not seem like a whole lot, it’s enough to disrupt the water supply that local fauna and flora depend on.
The issue with ramen broth is its high salt content, which officials say poses a danger to endangered plants found only on the island, as well as several species of insects and amphibians. Another issue is the smell—the aroma of ramen broth is apparently as enticing to weasels, crows, and badgers as it is to people. When broth is dumped on the ground, it attracts these animals, which venture into areas where they wouldn’t normally be found, disrupting the fragile ecosystem of the small island. To address the broth problem, officials are taking a two-pronged approach: they’re placing special containers on the trail where visitors are required to dump unfinished broth, and they’re raising awareness of the issue on social media. Visitors who dump broth outside of the designated containers can face a fine of 200,000 won, which is the equivalent of about $150 or around 100 bowls of instant ramen. Imagine landing in hot water and not even having ramen to make with it.
[Image description: A block of instant ramen noodles against a gray background.] Credit & copyright: Ninosan, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREENutrition Daily Curio #2901Free1 CQ
Sometimes, it pays to not go with your gut. Our gut bacteria are an integral part of our digestive systems, but they don’t always play nice with us. As some scientists recently discovered, the wrong kind of bacteria can cause food addiction. Food addiction isn’t as well known as some other forms of addiction like alcoholism, but it can be just as devastating. Excessive eating can lead to obesity and all its associated health issues, significantly shortening someone’s life expectancy. Furthermore, food addiction can be difficult to overcome because it’s impossible to avoid food completely. Like any addiction, it can be treated with the right support, but food addiction isn’t recognized as an official diagnosis by many medical professionals. That may change, though, thanks to scientists at the Laboratory of Neuropharmacology-NeuroPhar at the Universitat Pompeu Fabra in Barcelona, Spain. They found that food addiction might actually be caused by a certain kind of bacteria in the gut, since its presence is correlated with food-addictive behaviors. The researchers studied the gut bacteria of mice and human patients and rated their behavior on the Yale Food Addiction Scale (YFAS 2.0), which looks for food-seeking behaviors, motivation, and compulsive behaviors associated with food addiction. According to the results, humans with more bacteria from the Proteobacteria phylum and fewer from the Actinobacteria phylum were more likely to have food addiction. Researchers believe that Proteobacteria might be interfering with the expression of certain genes that regulate food-seeking and compulsive behavior. While gut bacteria have been increasingly found to affect behavior and mental health in recent years, this discovery is the first time that scientists have found a direct link between specific bacteria and their effect on gene expression. With this in mind, it might be possible in the future to treat food addiction by targeting the responsible bacteria in the gut as well as promoting the growth of beneficial bacteria. That’s a lot to digest!
[Image description: A painting of food on a table including a basket of fruits and vegetables, a plate of pigs’ feet, a red plate with a fish on it, and a white-and-blue plate of butter.] Credit & copyright: Still Life with Meat, Fish, Vegetables, and Fruit c. 1615–20. Gift of Janice Hammond and Edward Hemmelgarn, Cleveland Museum of Art, Public Domain Creative Commons Zero (CC0) designation. -
FREEScience Daily Curio #2900Free1 CQ
They may have been primitive, but they weren't cruel. Neanderthals are often portrayed as violent brutes, but several recent paleontological finds have proved that misconception very wrong. In fact, these prehistoric hominids looked out for and cared for one another as a matter of survival. Recently, researchers discovered a fossil belonging to a Neanderthal child who seems to have had Down syndrome and was at least six years old. The only way the child could have survived so long before the advent of modern medicine was for their family to provide them with continuous care. Down syndrome is a genetic disorder in which a person is born with 47 chromosomes instead of the usual 46 (23 inherited from each parent). This seemingly small change can have major impacts on a person’s health. The condition can affect the development of a person’s brain, leading to learning disorders, behavioral symptoms, and developmental delays. A host of other physical issues, including trouble breathing or seeing clearly, can also pop up. The syndrome can make one more vulnerable to infections and diseases and cause congenital heart defects that can greatly shorten a person’s life expectancy. It’s incredible that, faced with such immense odds, the Neanderthal child managed to survive for around six years. The 273,000-year-old fossil shows that these hominids were not only compassionate enough to care for a member of their family who could not help them in return, but that they had the means and knowledge to do so. It's all part of a growing body of evidence that Neanderthals might have actually valued compassion. In that way, they were more advanced than some of today’s Homo sapiens.
[Image description: A painting of a family of neanderthals standing at the mouth of a cave. One is holding a spear.] Credit & copyright: Neanderthal Flintworkers, Le Moustier Cavern, Dordogne, France, Charles Robert Knight (1874–1953). Wikimedia Commons, American Museum of Natural History. Public Domain. -
FREEScience Daily Curio #2899Free1 CQ
Picture this—a forest in the middle of the Sahara. It might sound like a description of a mirage, but it was once reality. Archaeologists have just discovered 5,000-year-old cave paintings in Sudan that depict the Sahara as a green paradise, giving researchers an idea of what the region might have looked like before it became the barren desert it is today. It’s also a picture of what the desert could look like in the distant future. The Sahara is an expansive desert—for now. Over eons, it goes through cycles, turning from verdant, lush grassland into a harsh, inhospitable desert. This is due in part to its location and the effects of the Earth’s gradually shifting orbit, which changes the amount of sunlight that falls on the region. But the last time the Sahara turned into a desert, around 8,000 years ago, it happened earlier and faster than it should have based on the Earth’s orbit. For decades, the discrepancy has been a mystery to both archaeologists and paleoecologists, but the mystery may finally be solved. According to archaeologists, the culprit might have been humans—specifically, ancient peoples with grazing livestock.
Part of the key to understanding the premature desertification of the Sahara is realizing that it happened in patches between 8,000 and 4,500 years ago. Researchers found that this lined up with the spread of people and their livestock in these areas. As their goats and cattle moved into an area, they would reduce the amount of atmospheric moisture by overgrazing. Removing vegetation also affects albedo, or the amount of sunlight reflected from the Earth’s surface, further contributing to desertification. Lastly, fire may have been used as a land management tool, which also destroys large swaths of grasslands. Left to its own devices, the Sahara may very well turn into grasslands and forests again—just keep the goats out next time.
[Image description: Sandy desert dunes.] Credit & copyright: Amine M'siouri, Pexels