Curio Cabinet / Daily Curio
-
Mind + Body Daily Curio
Happy Fourth of July! This year, we’re highlighting a food that’s as American as apple pie…actually, much more so. Chicken and waffles is a U.S.-born soul food staple, but exactly where, when, and how it developed is a source of heated debate.
Chicken and waffles is exactly what its name implies: a dish of waffles, usually served with butter and maple syrup, alongside fried chicken. The chicken is dredged in seasoned flour before cooking, and the exact spices used in the dredge vary from recipe to recipe. Black pepper, paprika, garlic powder, and onion powder are all common choices. The exact pieces of chicken served, whether breast meat, wings, or thighs, also vary. Sometimes, honey is substituted for syrup.
The early history of chicken and waffles is shrouded in mystery. Though there’s no doubt that it’s an American dish, there are different stories about exactly how it developed. Some say that it came about in Jazz Age Harlem, when partiers and theater-goers stayed out so late that they craved a combination of breakfast and dinner foods. This story fits with chicken and waffles’ modern designation as soul food, since Harlem was largely segregated during the Jazz Age, and soul food comes from the culinary traditions of Black Americans. Still others say that the dish was actually made famous by founding father Thomas Jefferson, who popularized waffles after he purchased waffle irons (which were fairly expensive at the time) from Amsterdam in the 1780s. Another story holds that the Pennsylvania Dutch created chicken and waffles based on German traditions.
Though we’ll never know for certain, it’s likely that all three tales are simply parts of a larger story. Dutch colonists brought waffles to North America as early as the 1600s, where they made their way into the new culinary traditions of different groups of European settlers. This included the “Pennsylvania Dutch,” who were actually from Germany, where it was common to eat meat with bread or biscuits to sop up juices. They served waffles with different types of meat, including chicken with a creamy sauce. Thomas Jefferson did, indeed, help to popularize waffles, but it was the enslaved people who cooked for him and other colonists who changed the dish into what it is today. They standardized the use of seasoned, sometimes even spicy, fried chicken served with waffles, pancakes, or biscuits. After the Civil War, chicken and waffles fell out of favor with white Americans, but it was still frequently served in Black-owned restaurants, including well-known establishments in Harlem and in Black communities throughout the South. For decades, the dish was known mainly as Southern soul food. Then, in the 1990s, chicken and waffles had a sudden surge in nationwide popularity, possibly due to the rise of food-centric TV and “foodie” culture. Today, it can be found everywhere from Southern soul food restaurants to swanky brunch cafes in northern states. Its origins were humble, but its delicious reach is undeniable.
[Image description: Chicken wings and a waffle on a white plate with an orange slice.] Credit & copyright: Joost.janssens, Wikimedia Commons. This work has been released into the public domain by its author, Joost.janssens at English Wikipedia. This applies worldwide.
-
STEM Daily Curio #3110
When the fungus kicked ash, the ash started fighting back. For over a decade, ash trees in the U.K. have been under threat from a deadly fungus. Now, the trees appear to be developing a resistance. No matter where they grow, ash trees just can’t seem to catch a break. Invasive emerald ash borers started devastating ash trees in North America in the 1990s. Then, around 30 years ago, the fungus Hymenoscyphus fraxineus arrived in Europe, making its way through the continent one forest at a time. Finally, it reached the U.K. in 2012. H. fraxineus is native to East Asia and is the cause of chalara, also called ash dieback. It’s particularly devastating to Fraxinus excelsior, better known as European ash, and it has already reshaped much of the U.K.’s landscape. While the fungus directly kills only ash trees, it presents a wider threat to the overall ecology of the affected areas. H. fraxineus also poses an economic threat, since ash lumber is used for everything from hand tools to furniture.
When not being felled by fungus or bugs, ash trees are capable of growing in a wide range of conditions, creating a loose canopy that allows sunlight to reach the forest floor. That, in turn, encourages the growth of other vegetation. A variety of insect species and lichen also depend on ash trees for survival. Luckily, for the past few years, researchers have been seeing a light at the end of the fungus-infested tunnel. Some ash trees have started showing signs of fungal resistance, and a genetic analysis has now revealed that the trees are adapting at a faster rate than previously thought. If even a small percentage of ash trees become fully immune to the fungus, it may be just a matter of time before their population is replenished. Ash trees are great at reproducing, as they’re each capable of producing around 10,000 seeds that are genetically distinct from each other. That also means that ash trees may be able to avoid creating a genetic bottleneck, even though their population has sharply declined due to dieback. Still, scientists estimate around 85 percent of the remaining non-immune ash trees will be gone by the time all is said and done. It’s darkest before the dawn, especially in an ash forest.
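The takeaway, that even partial resistance plus heavy seed production can rebuild a population, is easy to see in a toy model. Below is a minimal Python sketch; the starting resistance fraction, survival rates, and generation count are all invented for illustration and are not figures from the genetic study.

```python
# Toy model: dieback culls susceptible trees each generation, so the
# share of resistant trees rises even from a small starting fraction.
# All numbers below are illustrative assumptions, not study data.

def simulate(generations=6, resistant_frac=0.05,
             resistant_survival=0.95, susceptible_survival=0.15):
    """Track the resistant fraction as dieback culls each generation."""
    for gen in range(1, generations + 1):
        resistant = resistant_frac * resistant_survival
        susceptible = (1 - resistant_frac) * susceptible_survival
        resistant_frac = resistant / (resistant + susceptible)
        print(f"generation {gen}: {resistant_frac:.1%} resistant")

simulate()
# Even starting at 5%, resistant trees dominate within a few generations,
# while each survivor's ~10,000 genetically distinct seeds keep numbers up.
```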
[Image description: An upward shot of ash tree limbs affected with dieback disease against a blue sky. Some limbs still have green leaves, others are bare.] Credit & copyright: Sarang, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide.
-
Engineering Daily Curio #3109
They’re turning greenhouse gases into rocky masses. A London-based startup has developed a device that can not only reduce emissions from cargo ships, but turn them into something useful. Cargo ships, as efficient as they are in some ways, still produce an enormous amount of emissions. In fact, they account for roughly three percent of all greenhouse gas emissions globally. Reducing their emissions even a little could have a big environmental impact, and there have been efforts to develop wind-based technology to reduce fuel consumption, as well as alternative fuels. In the case of the startup Seabound, the approach is to scrub as much of the carbon from cargo ship exhaust as possible. Their device is the shape and size of a standard shipping container and can be retrofitted onto existing ships. Once in place, it’s filled with quicklime pellets, which soak up carbon from the ship’s exhaust. By the time the exhaust reaches the atmosphere, 78 percent of the carbon and 90 percent of the sulfur have been removed from it. The process also converts the quicklime back into limestone, sequestering the carbon.
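For the chemically curious, the capture step described above is ordinary lime carbonation, and conventional quicklime manufacturing is its reverse. A minimal sketch of the two reactions (the calcination temperature is approximate):

```latex
% Capture on board: quicklime absorbs CO2 from the exhaust (carbonation)
\mathrm{CaO} + \mathrm{CO_2} \rightarrow \mathrm{CaCO_3}

% Conventional quicklime production is the reverse reaction (calcination,
% roughly 900 degrees Celsius), which re-releases CO2, hence the need
% for greener lime supplies discussed below
\mathrm{CaCO_3} \rightarrow \mathrm{CaO} + \mathrm{CO_2}
```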
Similar carbon scrubbing technology is already in use in some factories, so the concept is sound, but there are some downsides. The most common method of quicklime production involves heating limestone to high temperatures, which releases carbon from the limestone and creates emissions from the energy required to heat it. There are greener methods to produce quicklime, but supply is highly limited for the time being. In addition, the process requires an enormous quantity of quicklime, reducing the overall cargo capacity of the ships. Meanwhile, some critics believe that such devices might delay the development and adoption of alternatives that could lead to net zero emissions for the shipping industry. It’s not easy charting a course for a greener future.
[Image description: A gray limestone formation in grass photographed from above.] Credit & copyright: Northernhenge, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Running Daily Curio #3108
They’re more than sneakers—they’re a tribute. In commemoration of the 45th anniversary of Terry Fox’s “Marathon of Hope,” Adidas will soon be bringing back the very shoes he wore during his run across Canada. The blue-and-white-striped shoes were worn by Fox in 1980 when he embarked on a journey that would go on to inspire millions. At the time, though, no one was looking at his shoes. Born on July 28, 1958, in Winnipeg, Manitoba, Fox was diagnosed with osteogenic sarcoma in 1977 at the age of 18. The disease didn’t claim his life then, but Fox lost his right leg just above the knee. By 1979, Fox had mastered the use of his artificial limb and completed a marathon, but he was determined to do more. Fox was driven by his personal experiences with cancer, including his time in the cancer ward. He believed that cancer research needed more funding, and he came up with the idea to run across Canada to raise awareness.
Fox started his marathon on April 12, 1980, by dipping his prosthetic leg in the Atlantic Ocean, and in the first days of his journey, he attracted little attention. For months, Fox ran at a pace averaging close to a marathon a day, and his persistence paid off. Over time, more and more people rallied behind Fox and began to stand along his route to cheer him on. Then, after over 3,300 miles, Fox started suffering from chest pains. The culprit was his cancer, which had spread to his lungs and forced him to stop his marathon prematurely. Fox passed away the following year on June 28, and though he never managed to reach the Pacific side of Canada, he accomplished something more. He surpassed his goal of $24 million CAD, the equivalent of $1 from every single Canadian at the time. Fox also became a national hero for his dedication, and he is the youngest Canadian ever to be made a Companion of the Order of Canada, the country’s highest civilian honor. Since his passing, the Terry Fox Foundation has raised a further $850 million CAD, and a statue in his honor stands in Ottawa, Ontario. A true hero of the Great White North.
[Image description: A statue of Terry Fox running, with another wall-like memorial behind it. In the background is a building and trees.] Credit & copyright: Raysonho, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Science Daily Curio #3107
Beware the pharaoh’s… cure? A deadly fungus that once “cursed” those who entered the tomb of King Tutankhamun has been engineered into a treatment for cancer by researchers at the University of Pennsylvania. When a team of archaeologists opened up King Tutankhamun’s fabled tomb back in 1922, they couldn’t have known about the terrible fate they had been dealt. One by one, those who entered the tomb died from an unknown illness. Then, in the 1970s, a similar string of tragedies befell those who entered the 15th-century tomb of King Casimir IV in Poland. One such incident might have been dismissed as an unfortunate accident, but two meant that there was something else at play. Despite speculation about ancient curses, the likely culprit was found to be a fungus called Aspergillus flavus. It’s capable of producing spores that can stay alive seemingly indefinitely, and the spores contain toxins that are deadly when inhaled by humans. As they say, though, it’s the dose that makes the poison. In this case, the proper dose can instead be a cure. Researchers studying the deadly toxins within the fungal spores found a class of compounds called RiPPs (ribosomally synthesized and post-translationally modified peptides), which are capable of killing cancer cells. Moreover, the compounds seem to be able to target only cancer cells without affecting healthy ones. That’s a huge improvement over conventional treatments like chemotherapy, which can harm a variety of healthy cells as much as they harm cancer. Interestingly, the compounds can be enhanced by combining them with lipid molecules like those found in royal jelly (the nutrient-rich secretion fed exclusively to queen bees), making it easier for them to pass through cell membranes. Fungus and royal jelly coming together to cure cancer? Sounds like a sweet (and savory) deal.
[Image description: A petri dish containing a culture of the fungus Aspergillus flavus against a black background. The fungus appears as a white-ish circle.] Credit & copyright: CDC Public Health Image Library, Dr. Hardin. This image is in the public domain and thus free of any copyright restrictions.
-
Mind + Body Daily Curio
If only eating your veggies was always this sweet. On the surface, carrot cake seems like an odd confection. After all, we don’t typically use carrots (or other vegetables, for that matter) in sweet desserts these days. The dessert first took off due to World War II sugar rationing, when foods had to be used a bit more flexibly, and its full history stretches back even further.
Carrot cake has a flavor profile similar to traditional spice cake, since its batter often includes cinnamon and nutmeg. However, carrot cake has a chunkier texture, since finely grated carrots, walnuts, and sometimes raisins are also included in the batter. Carrot cake is almost always topped with cream cheese frosting, which gives the entire cake a slight tang.
No one knows exactly how and where carrot cake originated, but food historians have a few clues. In the Middle Ages, sugar was difficult for most people throughout Europe to afford. This resulted in dessert recipes that utilized sweet vegetables, like carrots and parsnips. One 1591 recipe from England for “carrot pudding” consisted of a carrot stuffed with meat and baked with a batter of cream, eggs, dates, clove, and breadcrumbs. Such puddings evolved over time and had many regional variations. Some were baked in pans and had crusts, like pies; others were mashed desserts similar to modern, American-style pudding. By the 1800s, carrots made their way into British cake batters. A recipe for carrot cake with some similarities to the modern version was published in 1814 by Antoine Beauvilliers, who once worked as Louis XVI’s personal chef. Similar recipes were commonly found in France, Sweden, England, and Switzerland over the following century, but none were as popular as the carrot cake we enjoy today.
Modern carrot cake was truly born during World War II, when sugar was rationed in both the U.S. and England. The British government promoted carrots as a healthy alternative to sugar, and even included a carrot cake recipe (albeit without the cream cheese frosting) in a wartime cooking leaflet. Carrot cake’s famous cream cheese frosting was first adopted in the U.S. after British carrot cake recipes made their way overseas. Americans were already using cream cheese frosting on tomato soup cake, which did actually use a can of condensed tomato soup as a key ingredient but mostly tasted like a traditional spice cake. Carrot cake’s flavor profile was very similar, so cream cheese frosting became a popular topping for it and remained so even after tomato soup cake fell into obscurity. Eventually, even Europeans came to adopt cream cheese frosting for their cakes. Unlike many World War II recipes, carrot cake has managed to retain its popularity to this day. Once you’ve used soup in your cake, carrots don’t seem like such a strange addition anymore.
[Image description: A large carrot cake and a slice of carrot cake on white plates with carrot-shaped decorations.] Credit & copyright: Muago, Wikimedia Commons.
-
Science Daily Curio #3106
Here’s a look at everything, near and far. Astronomers at the newly operational Vera C. Rubin Observatory just released the first batch of images from the state-of-the-art facility, and it’s revealing new things about the solar system while giving a clearer view of the greater universe. Named after American astronomer Dr. Vera C. Rubin, the observatory was designed to push the boundaries of what is possible with ground-based telescopes. That’s fitting, considering that Rubin herself pushed the envelope of astrophysics during her lifetime. Her most significant achievement was finding compelling evidence for the existence of dark matter.
Located on Cerro Pachón in Chile, the Rubin Observatory is equipped with the largest digital camera ever built, capable of capturing 3,200-megapixel images. Its main purpose—for now—is to continuously survey the sky for the next ten years. Its decade-long mission is off to a strong start with an initial batch of images featuring distant galaxies and Milky Way stars in unprecedented clarity. Its first ten hours of observation alone revealed 2,104 previously unseen asteroids within the solar system (thankfully, none of them are on their way to crash into Earth anytime soon). When all is said and done, the observatory will gather around 500 petabytes’ worth of images, a veritable treasure trove of space imagery. It also has one other purpose. Its ability to observe dim and distant objects could prove useful in the search for the fabled “Planet Nine,” which (if it exists) orbits the sun every 10,000 to 20,000 years. It seems that the more you can see, the more there is to find out.
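As a back-of-envelope exercise, the figures quoted above imply striking data rates. A minimal Python sketch; the 2-bytes-per-pixel raw size is an assumption for illustration, since actual file sizes depend on encoding and compression:

```python
# Back-of-envelope math using only the figures quoted above.
pixels = 3_200_000_000           # 3,200 megapixels per image
bytes_per_pixel = 2              # assumption: ~16-bit raw data, uncompressed
survey_total_bytes = 500e15      # 500 petabytes over the 10-year survey
days = 10 * 365

image_gb = pixels * bytes_per_pixel / 1e9
daily_tb = survey_total_bytes / days / 1e12

print(f"~{image_gb:.1f} GB per raw exposure")     # ~6.4 GB
print(f"~{daily_tb:.0f} TB per day, on average")  # ~137 TB
```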
[Image description: A starry night sky with a line of dark trees in the foreground.] Credit & copyright: tommy haugsveen, Pexels
-
Science Daily Curio #3105
You just can’t beat this heat. This year’s summer is getting off to a brutal start for much of the U.S. as a heat dome stretches over multiple states. Heat domes are defined by suffocating heat and humidity, which work together to make it feel even hotter than it actually is. While heat domes can cause heat waves, the two meteorological phenomena are not the same. The source of a heat dome’s elevated temperatures and humidity is a lingering high-pressure system in the atmosphere that prevents heat on Earth’s surface from rising. The high pressure comes from the jet stream after it weakens and deviates beyond its normal course. Until the jet stream corrects itself, the heat dome will persist, and the longer it lasts, the worse it gets. Because the high pressure also prevents cloud formation, the sun’s rays beat down on the ground, making the heat dome hotter over time. Temperatures can easily exceed 100 degrees Fahrenheit, and it can feel tens of degrees hotter. Sometimes a heat dome will dissipate after just a few days, but it can also last for weeks.
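That “feels tens of degrees hotter” gap between air temperature and apparent temperature can be estimated with the U.S. National Weather Service’s Rothfusz regression for heat index. Below is a minimal Python sketch; the NWS applies extra adjustments near the formula’s edges, which are omitted here for brevity.

```python
def heat_index_f(temp_f: float, rel_humidity: float) -> float:
    """NWS Rothfusz regression: apparent temperature in degrees F.

    temp_f: air temperature in Fahrenheit (valid roughly >= 80)
    rel_humidity: relative humidity in percent (valid roughly >= 40)
    Edge-case adjustments used by the NWS are omitted for brevity.
    """
    t, rh = temp_f, rel_humidity
    return (-42.379 + 2.04901523 * t + 10.14333127 * rh
            - 0.22475541 * t * rh - 6.83783e-3 * t**2
            - 5.481717e-2 * rh**2 + 1.22874e-3 * t**2 * rh
            + 8.5282e-4 * t * rh**2 - 1.99e-6 * t**2 * rh**2)

# A 100 degree F day at 50% relative humidity feels like roughly 118 degrees F.
print(round(heat_index_f(100, 50)))  # -> 118
```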
In 1995, a particularly devastating heat dome claimed over 700 lives in the Chicago area in less than a week. Even worse, a heat dome over the southern Plains back in 1980 claimed around 10,000 lives. Part of what makes a heat dome so dangerous isn’t just the heat itself, but the humidity, which makes it impossible for sweat to effectively wick excess heat away from our bodies. When a heat dome forms over a given area, it’s best to avoid venturing outside. The best policy is to stay in a climate-controlled area and drink plenty of water until the heat dome dissipates. Some problems are better avoided than faced head-on.
[Image description: The sun shining above a treetop in a clear blue sky.] Credit & copyright: TheUltimateGrass, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Biology Daily Curio #3104
Chromosomes are fundamental to creating life, but you can have too much of a good thing. Just one extra copy of chromosome 21 is responsible for causing Down syndrome, which itself causes many different health problems. Now, scientists at Mie University in Japan have developed a way to remove the extra chromosome using CRISPR technology. The condition of carrying a third copy of chromosome 21 is called trisomy 21. Someone born with this extra chromosome ends up with 47 total chromosomes, rather than the usual 46. This results in a range of health effects, including congenital heart problems and cognitive issues.
Until recently, genetic disorders like Down syndrome were considered untreatable, but medical advancements have been changing things. Back in 2023, the FDA approved Casgevy and Lyfgenia, both of which are cell-based gene therapies to treat sickle cell disease (SCD) in patients over age 12. The treatments were developed using CRISPR-Cas9, which utilizes enzymes to accurately target parts of the DNA strand responsible for the disease. It’s the same technology used by the scientists at Mie University, who targeted trisomy 21 in a process called allele-specific editing, or, as one of the researchers described, “Trisomic rescue via allele-specific multiple chromosome cleavage using CRISPR-Cas9 in trisomy 21 cells.” The process was performed on lab-grown cells which quickly recovered and began functioning like any other cells. It’s unlikely that this new development will signal an immediate reversal of Down syndrome, as it will be a while before the treatment can undergo human trials. One particular hurdle is that the treatment can sometimes target healthy chromosomes. Still, it shows that CRISPR-Cas9 can be used to remove entire chromosomes and that cells affected by trisomy 21 can make a full recovery with treatment. That’s a lot of medical advancement in one crisp swoop.
[Image description: A diagram of a DNA strand with a key for each labeled part. The key from top to bottom reads: Adenine, Thymine, Cytosine, Guanine, and phosphate backbone.] Credit & copyright: Forluvoft, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide.
-
Gardening Daily Curio #3103
They may be small, but they’re no saplings! The Brooklyn Bonsai Museum is celebrating its 100th birthday by inviting the public to learn more about the ancient art of bonsai, which has roots that go beyond just Japan. Bonsai involves growing trees in containers, carefully pruning and maintaining them to let them thrive in a confined space. When done properly, a tree kept in such a manner will resemble a full-sized tree in miniaturized form and not just look like a stunted specimen. Experienced practitioners can also guide the growth of the trunk and branches to form artful, often dramatic shapes.
Bonsai has been gaining popularity in the U.S. in the past century, and its history goes all the way back to 8th-century China, when dwarf trees were grown in containers and cultivated as luxury gifts. Then, in the Kamakura period, which lasted from the late 12th century to the early 14th century, Japan adopted many of China’s cultural and artistic practices and sensibilities, including what they would come to call bonsai.
For a tree to be a bonsai tree, it has to be grown in a shallow container, which limits its overall growth while still allowing it to mature. While most bonsai trees are small enough to be placed on a desk or table, it’s not really the size that dictates what is or isn’t a bonsai. As long as it’s grown in a shallow container, a tree can be considered bonsai. In fact, there are some downright large specimens that dwarf their human caretakers. A category of bonsai called “Imperial bonsai” typically ranges from five to seven feet tall, but the largest bonsai in existence is a sixteen-foot red pine at the Akao Herb & Rose Garden in Shizuoka, Japan. Bonsai trees can also live just as long as their container-free counterparts. The oldest currently in existence is a Ficus retusa Linn. at the Crespi Bonsai Museum in Italy, which is over 1,000 years old and was originally grown in China, presumably before the practice even spread to Japan. If this tree ever falls—in a forest or not—you can bet that someone’s going to make a lot of noise.
[Image description: A potted bonsai tree sitting on a table with a bamboo fence in the background.] Credit & copyright: Daderot, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Mind + Body Daily Curio
This isn’t your grandma’s ice cream…unless she happens to be a native Alaskan. Akutaq, also known as Alaskan ice cream, is one of the most unusual frozen foods in the world. Although it resembles ice cream, as its nickname suggests, it is savory and involves no dairy. It includes plenty of animal fat, though.
Akutaq is a traditional Alaskan dish enjoyed for centuries by different groups of peoples native to the region. It is made by mixing ice, often from freshly-fallen snow, with berries, meat, animal oil, and whipped animal tallow. Tallow is made by melting fat and then cooling it into a solid, waxy substance. Whipping the tallow gives it a lighter, fluffier texture. Different Alaskan animals can be used to make akutaq, including caribou, seals, moose, or fish. The dish is traditionally mixed in an oval wooden bowl called a tumnaq.
No one knows which group of native Alaskans first invented akutaq. It is most often attributed to the Yupik people, who are indigenous to both Alaska and Eastern Russia, since the name “akutaq” means “mix them together” in the Yup'ik language. However, other Alaskan peoples, including the Inuit, also make akutaq, and since recipes were traditionally passed down orally, it’s unlikely that we’ll ever know exactly how it came to be.
What we do know is what it was used for: quick energy for long journeys. Unlike actual ice cream, which is considered a sweet treat and nothing more, akutaq is serious business, providing protein and fat that is much-needed before long expeditions through the snow. Since many native Alaskan peoples were nomadic, moving from place to place throughout the year in order to follow herds of prey animals, long treks in cold weather were inevitable. Akutaq helped ensure that everyone had enough energy and strength to make the trip. The ingredients could also vary widely based on where people were at any given time. Before particularly long journeys, akutaq might include dried meat for added protein, while akutaq made near coasts included more fish and seal meat. It might not come on a cone, but there’s no doubt that this frozen dish is more useful than ice cream.
[Image description: A wooden bowl, called a tumnaq, made for making akutaq.] Credit & copyright: Caroline Léna Becker, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Biology Daily Curio #3102
Telling people apart is as easy as breathing. Researchers recently found that the pattern of a person’s breathing may be unique to the individual, much like their fingerprints. Currently, there are only a few surefire ways to identify someone: fingerprints, eye scans, and DNA tests. Soon, another option might be available in the form of breathing. Researchers at the Weizmann Institute of Science in Israel hypothesized that a person’s breathing pattern might be unique to them, and tested the idea with the help of 100 participants. The participants were equipped with special devices that tracked their breathing throughout the day, measuring the frequency and duration of each breath, along with the amount of air passing through their nasal cavities. Over the course of two years, the researchers entered the data they collected into a machine learning program, which learned to positively identify a person through their breathing alone with an accuracy of 96.8 percent.
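The pipeline behind that 96.8 percent figure isn’t detailed here, but the general shape of the task, telling people apart from summary breathing features, can be sketched with off-the-shelf tools. Below is an illustrative example using scikit-learn on synthetic data; the three feature names and every number are invented for demonstration and are not the researchers’ actual model or measurements.

```python
# Illustrative only: classify "who is breathing" from summary features.
# Feature names and synthetic data are invented for demonstration; the
# Weizmann study used long-term nasal airflow recordings, not this setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_people, samples_each = 10, 40

# Each person gets a characteristic (mean) breathing profile:
# [breath duration (s), pause between breaths (s), airflow volume (a.u.)]
profiles = rng.uniform([2.0, 0.2, 0.5], [5.0, 1.5, 2.0], (n_people, 3))
X = np.vstack([rng.normal(p, 0.1, (samples_each, 3)) for p in profiles])
y = np.repeat(np.arange(n_people), samples_each)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.0%}")
```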
There may be another use for this type of analysis besides identification. Researchers found that a person’s breathing revealed not only their identity, but information about their physical and mental health. People of similar body mass indexes share similarities in the way they breathe, and so do those who suffer from depression or anxiety. Those with depression tend to exhale quickly, while those with anxiety have shorter inhales and pause their breathing more frequently during sleep. According to the researchers, their next step is to find out if and how breathing can be used as a diagnostic tool. In the future, they hope that it may even be possible to change people’s breathing patterns for the better. As Noam Sobel, a co-author of the study, said in a statement, “We intuitively assume that how depressed or anxious you are changes the way you breathe. But it might be the other way around. Perhaps the way you breathe makes you anxious or depressed. If that’s true, we might be able to change the way you breathe to change those conditions.” We’ll be able to breathe easy, then.
[Image description: The black nose of a dog with brown fur. The rest of the dog’s face is not visible.] Credit & copyright: HTO, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide.
-
FREEUS History Daily Curio #3101Free1 CQ
Not all name tags are important enough to get a name of their own. Recently, the dog tag of a World War II serviceman was returned to his family after it was lost for 80 years. While the dog tag didn't get to serve its intended purpose, there’s a reason why such military name tags have been used, in various forms, for thousands of years.
Joseph L. Gray passed away in 1945 after the B-17 he was serving on crashed on the Isle of Man. The only consolation was that the crash was well documented, so his death was known at the time. His dog tag was found decades later with a metal detector, after which it was donated to a local museum. The dog tag was then spotted by a descendant of a fellow crewmate, leading to its return to Gray’s family. Similar tales abound when it comes to dog tags, which have long been used in the U.S. military to serve as identification in combat. However, such name tags are a surprisingly ancient concept.
Ancient Romans issued something similar to modern dog tags in the form of a signaculum, a piece of metal worn on a legionary’s neck with identifying information. During the American Civil War, soldiers weren’t issued a standardized tag, so many fashioned their own from spare pieces of lead, copper, or even coins. Marines of the time used a piece of wood on a string for the same purpose. The first official American military ID tags were issued in 1899, during the Spanish-American War. Since then, the U.S. military has continued to issue them, with the design changing gradually over time. The name “dog tag,” however, doesn’t come from the military at all. Credit for that name goes to William Randolph Hearst, who vehemently opposed President Franklin D. Roosevelt’s New Deal and Social Security program. Hearst claimed that Roosevelt would force people to wear metal tags with their names and Social Security numbers, like “dog tags.” The nickname was adopted by soldiers soon after. Funnily enough, one proposed idea for issuing Social Security numbers involved metal plates instead of paper cards, and one unused prototype of the much-dreaded tag still exists as a museum display at the Social Security Administration’s headquarters. It seems that every dog tag has its day.
[Image description: An American flag on a wooden post.] Credit & copyright: Crefollet, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEScience Daily Curio #3100Free1 CQ
Well, at least you can’t accuse these legislators of being boring. Several states in the U.S. have introduced legislation to ban fabled “chemtrails.” This is a name used by those who believe that the white, cloud-like lines left by airplanes contain deadly chemicals. The true nature of these streaks in the sky is not nearly that insidious, but they’re not completely harmless either.
The latest state to hop on the anti-chemtrail bandwagon is Louisiana, where a state legislator introduced a bill to outlaw the lines left in the wake of airplanes. A decades-old conspiracy theory holds that these are actually the result of a shadowy effort to disperse harmful chemicals to the general populace, but the proper term for them is “contrails,” short for condensation trails. Contrails generally form at altitudes between 32,000 and 42,000 feet due to the water vapor released from jet engines. At those altitudes, the hot water vapor cools rapidly after exiting the engine and condenses, leaving visible streaks in the sky. Of course, the conditions for this to occur have to be just right, or else the sky would be covered in an endless criss-crossing of airplane flightpaths. Aside from the altitude of the plane, the air has to be cold and humid enough for the contrails to form.
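For illustration, those formation conditions can be boiled down to a toy rule of thumb. The sketch below is a simplification under stated assumptions: the altitude band comes from the figures above, while the temperature and humidity cutoffs are rough stand-ins (a real model would use the Schmidt–Appleman criterion), so treat it as a mnemonic, not meteorology.

```python
# Toy rule of thumb for contrail formation. The altitude band follows the
# figures above; the temperature and humidity cutoffs are rough assumptions,
# not a real atmospheric model.
def contrail_likely(altitude_ft: float, air_temp_c: float, rel_humidity: float) -> bool:
    in_band = 32_000 <= altitude_ft <= 42_000  # typical formation altitudes
    cold = air_temp_c <= -40                   # exhaust vapor must condense and freeze quickly
    humid = rel_humidity >= 0.6                # enough ambient moisture for trails to persist
    return in_band and cold and humid

print(contrail_likely(38_000, -55, 0.7))  # True: cruising altitude, cold, humid air
print(contrail_likely(10_000, -5, 0.8))   # False: too low and too warm
```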
While contrails aren’t the product of nefarious intentions, that doesn’t mean they’re beyond reproach. According to some studies, contrails may actually be contributing to global warming in an unexpected way by trapping excess heat in the atmosphere, especially when they form at night or last until nighttime. Ironically, efforts to reduce carbon emissions and save money might be making the problem worse. Modern airliners are designed to cruise around 38,000 feet, where thinner air reduces drag and saves fuel, but that altitude sits squarely in the range where contrails are most likely to form. The warming effect is so pronounced that contrails may contribute more to atmospheric warming than the carbon emissions from the engines themselves. It’s a matter worth looking into for legislators, but they might want to familiarize themselves with the science around contrails first.
[Image description: A white plane with four contrails against a blue sky.] Credit & copyright: Adrian Pingstone (Arpingstone), Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide. -
FREESTEM Daily Curio #3099Free1 CQ
It’s the comeback of the century. The European beaver has made its triumphant return to Portugal for the first time in centuries, and that’s not the only place where the endangered builder is reclaiming its old range. Like its counterpart in North America, the European beaver is a keystone species, reshaping its environment by building dams in waterways and digging channels to direct the flow of water. Yet for all its mighty endeavors, the beaver was, for centuries, at the mercy of a greater power: human hunters. Beavers were hunted to near extinction in most of Europe for their meat, fur, and castoreum, an aromatic substance that comes from internal sacs near the base of beavers’ tails. Beavers were even eaten during Lent after the Catholic Church classified the mammals as fish.
Beavers had a lot going against them when it came to their relationship with humans. Their fur happens to be ideal for making felt and has been considered a valuable commodity for centuries. Although castoreum has a distinctly unpleasant odor in its raw form (beavers mix it with their urine to mark their territory), it can be processed into an effective fixative agent for use in perfumes. Thus, beavers were trapped and hunted until they became extinct in many European locales, including Portugal, where they were last seen around the end of the 15th century. Then, over 500 years later, in 2023, a beaver was sighted less than 500 feet from the Portuguese border, giving hope to conservationists. Indeed, the presence of European beavers in Portugal has now been confirmed through telltale signs like dams and gnawing marks on trees. It’s a long-overdue arrival for Portugal, and it’s also the result of a century and a half of conservation efforts around Europe. While their numbers are still much lower than they once were, beavers are now found in most countries on the continent, and conservation efforts continue to help bring back the creature that once helped shape the very land. It’s time for these endangered engineers to get back to being busy beavers.
[Image description: A close-up photo of a beaver with wet fur.] Credit & copyright: National Park Service photo, Asset ID: d34648d9-10ec-44bb-bfce-9d4c2b20ee18. Public domain: Full Granting Rights. -
FREEMind + Body Daily CurioFree1 CQ
Some cheeses are a transcendent taste experience. You could even say that this one is…holy. Swiss cheese is famous for its mild flavor and its unusual consistency, which famously includes holes. How these holes came to be, and how Swiss cheese got so popular across the pond from Switzerland, are just two parts of this cheese’s intriguing backstory.
Swiss cheese is a light-yellow-to-white cheese with a nutty, slightly sweet flavor. Like many cheeses, Swiss cheese is made by heating milk and then treating it with bacterial cultures to help it form curds before it is pressed and aged. Swiss cheese is usually aged for a few months, but some varieties are aged for two years or more. The longer the cheese ages, the more intense its flavor. Swiss is one of the world’s most popular sandwich cheeses, and is commonly found at delis in the U.S. and throughout Europe.
The secret to how it got so popular lies in its origins. As its name suggests, Swiss cheese is from Switzerland, which is famous for its dairy industry to this day. Nowhere is this more true than in west-central Switzerland, in a valley commonly called Emmental. This area has been used for dairy farming for centuries, as its grassy, rolling hills make for perfect grazing land. The Swiss cheese we know and love today was invented in Emmental sometime in the 1300s. To this day, it’s known as “Emmental cheese” in Switzerland. In the mid-1800s, Swiss immigrants in Wisconsin made Swiss cheese an American favorite too, and cemented Wisconsin’s modern reputation as a dairy hotspot.
Yet, how this cheese got its famous holes was a mystery until fairly recently. For years, some farmers believed that the holes formed due to the specific cultures used to make the cheese, or due to a certain amount of humidity in the barns where it was aged. Later, scientists posited that the holes could be due to carbon dioxide released by bacteria in the cheese. It wasn’t until 2015 that Agroscope, a Swiss government agricultural research facility, discovered the actual secret: hay. Because Swiss cheese is often made in a traditional dairy farm setting, microscopic pieces of hay naturally fall into buckets of milk used to make the cheese. Holes then expand around these tiny impurities as the cheese ages. This also explains why fewer holes appear in factory-made Swiss, since hay is less likely to fall into milk in a factory setting. Hay, how’s that for solving a dairy mystery?
[Image description: A wedge of Swiss cheese with four holes.] Credit & copyright: National Cancer Institute Visuals Online, Renee Comet (Photographer). Public Domain. -
FREESTEM Daily Curio #3098Free1 CQ
These dinosaurs might have been impressive to look at, but their table manners were awful. While most animals have to chew their food thoroughly, it seems that wasn’t the case for sauropods, some of the largest dinosaurs ever to walk the Earth. Based on a recently discovered fossil, scientists now believe that sauropods hardly chewed their food at all.
Sauropods were members of Sauropoda, a clade of enormous, long-necked, vegetarian dinosaurs. Yet, for a long time, scientists didn’t know many specifics about sauropod diets. Paleontologists assumed that they ate plants based on two factors: their flat teeth, which are well suited to processing plant matter, and their sheer size, since, much like today’s largest herbivores, there was no feasible way for animals that big to have subsisted on anything other than plants. Besides, their gigantic bodies, long necks, and long tails would have made them clumsy hunters. Now, not only do we have confirmation that sauropods ate plants, we know quite a bit about how they did it.
Researchers discovered a cololite—fossilized intestinal contents—that belonged to Diamantinasaurus matildae, a species of sauropod that lived around 100 million years ago. By performing a CT scan on the cololite, they found that the remains were composed entirely of plant matter. The leaves of the fern-like plant were largely intact, suggesting that the sauropod barely chewed them before swallowing. This means that sauropods were probably bulk feeders, ingesting as much plant matter as possible and relying on the natural fermentation process inside their digestive systems to break down their food. It’s a more extreme version of what many herbivores do today. Cows and other ruminants rely on fermentation to digest their food, and they also spend much of their time ruminating, which means they regurgitate their food to chew it again. You really needed a strong stomach to live in the Cretaceous period.
[Image description: A black-and-white illustration of a long-necked sauropod dinosaur.] Credit & copyright: Pearson Scott Foresman, Wikimedia Commons. This work has been released into the public domain by its author, Pearson Scott Foresman. This applies worldwide. -
FREEMusic Appreciation Daily Curio #3097Free1 CQ
You’ll probably never hear someone sing it at a karaoke bar, but it’s still the most frequently sung song in English. Happy Birthday is an indispensable part of birthday celebrations around the world, and the composer of its melody, Mildred J. Hill, was born this month in 1859 in Louisville, Kentucky. Hill came up with the now-famous tune in 1893, and the lyrics were written by her sister Patty, but the song they wrote wasn’t actually Happy Birthday. Instead, it was called Good Morning to All, and it was meant to be sung by a teacher and their classroom. Patty was a pioneer in early childhood education. In fact, she is credited as the inventor of the modern concept of kindergarten, and she sang Good Morning to All in her own classroom as a daily greeting.
The Hill sisters published Good Morning to All and other compositions in 1893’s Song Stories for the Kindergarten. Soon, the melody took on a life of its own. No one knows exactly how it happened, but the tune began to be used to wish people a happy birthday. One credible account even credits the Hill sisters themselves, who are said to have changed the lyrics during a birthday get-together they were attending. Regardless of how it happened, Happy Birthday began to spread. By the early 20th century, the song appeared in movies, plays, and even other songbooks without crediting the Hill sisters. Mildred passed away in 1916 and Patty in 1946, with neither credited as the originators of Happy Birthday. Their youngest sister, Jessica Hill, took it upon herself to copyright the song and have the publisher of Song Stories for the Kindergarten re-release it in 1935. The rights eventually went to another publishing company, and for decades they remained privately held, which is why movies had to pay royalties to use the song, and why restaurants wishing their patrons a happy birthday had to sing a proprietary or royalty-free song instead. Then, in 2013, the publishing company was taken to court over claims that the copyright to Happy Birthday had expired years earlier. Finally, in 2016, the song entered the public domain. It’s a short and simple ditty, but its story is anything but.
[Image description: A birthday cake with lit candles in a dark setting.] Credit & copyright: Fancibaer, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEEngineering Daily Curio #3096Free1 CQ
When it comes to engineering, there are always new uses for old standbys. Putting ice in your drink is a pretty rudimentary way to keep cool when it’s hot out, but Manhattan is putting a new twist on it by using ice to cool an entire building. Most modern air conditioners are a double-edged sword because, while they keep people comfortable and safe from extreme heat, they also consume a lot of electricity. As average global temperatures continue to rise, that puts more and more strain on cities’ power grids, especially during peak daytime hours. The cooling system at New York City’s iconic Eleven Madison building is different. It does most of its work at night, when the city’s energy grid isn’t nearly as taxed.
Created by Trane Technologies, the system is called an ice battery. Every night, it uses electricity to freeze water into around 500,000 pounds of ice. During the day, the ice is used to cool the air being pushed through the building’s vents. Since electricity costs more to produce during peak hours, the system can lower energy bills by as much as 40 percent. The ice battery also drastically reduces the overall amount of energy used to cool the building, which is good news for the grid and the environment as a whole. If more buildings adopt ice batteries in the near future, it could reduce the need for more power plants to be built, even as the climate continues to warm. That’s less land and fewer resources that will have to be devoted to cooling buildings.
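The savings come down to simple arithmetic: shift the electricity use from expensive peak hours to cheap overnight hours. Here’s a back-of-the-envelope sketch; every number in it is an invented assumption (only the “as much as 40 percent” figure comes from the article), not Eleven Madison’s actual load or New York’s actual rates.

```python
# Back-of-the-envelope sketch of why an ice battery saves money.
# All numbers below are illustrative assumptions.
PEAK_RATE = 0.30        # $/kWh, daytime peak rate (assumed)
OFFPEAK_RATE = 0.12     # $/kWh, overnight rate (assumed)
COOLING_KWH = 4_000     # kWh of cooling-related electricity per day (assumed)
FREEZE_OVERHEAD = 1.15  # ice-making uses somewhat more energy than direct cooling (assumed)

# Conventional AC buys all its electricity at the peak rate.
conventional = COOLING_KWH * PEAK_RATE
# An ice battery buys slightly more electricity, but at the overnight rate.
ice_battery = COOLING_KWH * FREEZE_OVERHEAD * OFFPEAK_RATE

savings = 1 - ice_battery / conventional
print(f"conventional: ${conventional:,.0f}/day, ice battery: ${ice_battery:,.0f}/day")
print(f"estimated savings: {savings:.0%}")  # ~54% with these made-up numbers
```

The exact savings depend entirely on the spread between peak and off-peak rates and on the freezing overhead, which is why real-world figures like the 40 percent above vary from building to building.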
Of course, it still takes quite a bit of electricity to freeze ice, even at night. Research is already underway to see if chilled but unfrozen water might be a viable alternative. If enough buildings and homes are able to use such thermal energy storage systems to replace traditional HVAC systems, the environmental impact would be enormous, even though the new systems aren’t entirely carbon neutral. A step in the right direction is always better than a step back.
[Image description: A piece of clear ice with a jagged edge on top.] Credit & copyright: Dāvis Mosāns from Salaspils, Latvia. Flickr, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEBiology Daily Curio #3095Free1 CQ
What's the matter, cat got your head? Burmese pythons and other invasive species have been wreaking havoc in the Florida Everglades for years, but it seems the local wildlife is starting to fight back. Burmese pythons are a particularly big problem in Florida. The snakes have no natural predators once fully grown, and they're prolific breeders. State officials have tried everything to get rid of the reptilian invaders, including declaring open season on the snakes and rewarding hunters for every one they bring in, but it seems that nothing can wipe them out completely. Meanwhile, pythons are capable of eating anything that can fit inside their surprisingly stretchy jaws, including other, native predators like alligators. For years, scientists have been keeping a keen eye on the state’s python population, partly by strapping radio trackers onto male pythons during breeding season. The males lead researchers to nests so that eggs and female pythons can be removed.
Yet, when scientists rolled up to the location of one of these radio-tracked pythons recently, they didn't find a cozy love nest. Instead, they found the snake’s decapitated body, which weighed a whopping 52 pounds. After setting up a trail camera near the corpse, they found the culprit—a common bobcat happily munching away on the remains. This marks the first time a bobcat has been known to take down a python, and it's all the more shocking considering the python's size. Until now, bobcats had never been known to hunt and eat pythons, though the snakes have been found with bobcat claws still inside them. That led scientists to believe that bobcats were unable to defend themselves against the snakes. On paper, it's obvious why—adult bobcats weigh around 30 to 40 pounds, while Burmese pythons can weigh around 200 pounds. Maybe nature has simply had enough, or maybe this cat was just particularly skilled at punching (or clawing) above its weight.
[Image description: A bobcat in tall grass from the chest up.] Credit & copyright: National Park Service, Asset ID: 8859334f-c426-41db-9049-96e7d5dd5779. Public domain: Full Granting Rights.