Curio Cabinet
September 23, 2025
- Music Appreciation Song Curio
Some songs have profound lyrics, and others make simple sounds uniquely theirs. Can’t Get You Out of My Head by Australian pop star Kylie Minogue is definitely in the second category. The song’s most famous words are, undoubtedly, “la la la”, sung in a sultry voice before each chorus. Yet, despite its seeming simplicity, the 2001 pop hit remains Minogue’s most popular song. It topped the UK charts this month in its release year, and its music video, which features nostalgic, early-2000s CGI, won several awards. Can’t Get You Out of My Head is an extremely danceable song, and its easy-to-remember lyrics and electronic sound made it perfect for clubs. While the track helped Minogue achieve fame outside her home country and the UK, it wasn’t originally written for her at all. The songwriting duo of Cathy Dennis and Rob Davis originally penned the track for British pop group S Club 7, whose manager rejected it. It was then offered to British singer Sophie Ellis-Bextor, who also turned it down, before Minogue’s manager said yes. Third time’s clearly a charm!
- Biology Daily Curio #3156
A zebra might not be able to change its stripes, but a cow can—with a little help, anyway. A group of Japanese scientists have been awarded the Ig Nobel Prize (awards given by the Annals of Improbable Research for unusual and imaginative research) for proving that painting stripes on cows can imbue them with the same fly-repellent powers that zebras possess.
Contrary to popular belief, a zebra’s stripes are not strictly for camouflage. The high-contrast, black-and-white striping does make it more difficult for predators to see a specific zebra’s outline in a herd, but it also confuses flies, preventing irritating bites. In Africa, horse flies and tsetse flies are a constant source of aggravation for the native fauna, as their painful bites can transmit fatal diseases like African horse sickness. As it turns out, this benefit can be conferred onto domesticated cows, which deal with flying pests of their own. The scientists who won the Ig Nobel’s Biology Prize proved this by painting some cows with black-and-white stripes and others with black stripes while leaving some unpainted. The cows were then left where they would be exposed to horse flies, and the cows painted black-and-white fared much better than the others. The black-striped and unpainted cows received up to 110 bites in 30 minutes, while the zebra-striped cows received only 60 in the same time frame.
Flies don’t avoid stripes because they find them tacky, but because the stripes mess with the flies’ visual perception, preventing them from landing effectively. It’s a simple, low-tech solution to a problem that costs the cattle industry billions each year. Fighting back against the insects usually involves expensive pesticides, which have the additional risk of being toxic to the environment. A few fly bites might not seem like much, but with millions of cattle all over the world, the losses add up. Flies sure are small to be causing such big problems.
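For the numerically inclined, here is a minimal, purely illustrative sketch (in Python) of the comparison above, using only the bite counts quoted in this curio (up to 110 bites per 30 minutes for unpainted or black-striped cows versus about 60 for zebra-striped ones); the variable names are assumptions made for the example.

```python
# Rough comparison using the bite counts quoted above (illustrative only).
bites_control = 110  # worst case reported for unpainted or black-striped cows, per 30 minutes
bites_striped = 60   # reported for black-and-white ("zebra") striped cows, per 30 minutes

# Relative reduction in bites for the zebra-striped cows.
reduction = (bites_control - bites_striped) / bites_control
print(f"Zebra striping cut bites by roughly {reduction:.0%}")  # ~45%
```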
[Image description: A black-and-white cow standing in tall grass.] Credit & copyright: Courtney Celley/USFWS. Media Usage Rights/License: Public Domain.
September 22, 2025
- Photography Art Curio
Some artists have more range than others. Samuel Bourne was an English photographer who conducted photographic expeditions to the Himalayas. His piece above, The Manirung Pass, shows seven people walking through a mountain pass. The landscape is covered in snow, and more of the mountain range can be seen in the distance between the peaks. After arriving in India in 1863, Bourne spent seven years traveling the country and photographing everything he saw along the way. His photographs, which numbered around 2,500 by the time he departed the country, became widely popular and were collectively the most thorough visual documentation of India at the time. While in India, Bourne traveled to the Himalayas three times. During his last expedition, he took the above photograph at an elevation of 18,600 feet, setting a new record. The purpose of the last expedition was to reach the source of the Ganges, the Gangotri Glacier. Though he and his party nearly died due to the extreme environment, Bourne’s expedition was successful, to say the least. It’s like he was Bourne for it.
The Manirung Pass, Samuel Bourne (British, 1834–1912), 1860s, Albumen silver print from glass negative, 9.31 x 11.62 in. (23.7 x 29.6 cm.), The Metropolitan Museum of Art, New York City, New York
[Image credit & copyright: The Manirung Pass, Samuel Bourne. The Metropolitan Museum of Art, Gilman Collection, Purchase, Cynthia Hazen Polsky Gift, 2005. Public Domain.]
- Science Daily Curio #3155
Like resin from a tree, the continents are drifting ever so slowly. It’s fitting, then, that amber is helping scientists learn more about Gondwana, a supercontinent that started to break up around 180 million years ago. Here’s a curious fact about amber: most deposits of the fossilized resin are found in the northern hemisphere. That fact has puzzled scientists for quite a while, and the lack of amber in the southern hemisphere has also been a detriment to paleontologists. That’s because a chunk of amber is almost like a time capsule containing information about the environment it was formed in. Although it resembles stone, amber isn’t a mineral at all, since it’s formed from the resin of conifer trees, which hardens over time, turning into copal. Over millions of years, the concentration of essential oils in the resin dissipates, oxidizing the copal and producing amber. Everything from pollen to insects can be caught in amber, where they become known as bioinclusions. Amber bioinclusions can be extremely valuable to scientists, since they’re often lifeforms or substances that don’t fossilize well. No wonder, then, that scientists are so excited about finding insects preserved in amber for the first time in South America. Found in a quarry in a rain forest in Ecuador, the amber contains the preserved remains of several orders of insects, including beetles and ants, as well as pieces of spider webs. Moreover, the pollen and leaves contained in the amber show that the region was once home to ferns and conifers not found there today. In all, the rare amber deposit is proving to be a treasure trove of information about the supercontinent Gondwana, which was made up of South America, Africa, Australia, and several other landmasses. Maybe it’s time we got the gang back together to see what they’ve been up to.
[Image description: A chunk of amber with insects trapped inside.] Credit & copyright: Vassil, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide.
September 21, 2025
- Humanities PP&T Curio
Bombs away! Prior to the U.S. joining World War II, the U.S. military foresaw the need for a long-range, high-altitude, heavy bomber that could both evade enemy fire and defend itself. Less than a year after the nation joined the conflict, on this day in 1942, the United States Army Air Forces (AAF) introduced the B-29 Superfortress, a bomber unlike any other that—despite its flaws—was one of the most innovative aircraft of its time.
If necessity is the mother of invention, then war is necessity’s greatest sponsor. For better and for worse, military conflicts have always driven the development of industry and technology. In the case of the B-29, the mere possibility of conflict was enough for the AAF to begin development on a new aircraft to meet ambitious demands. As Germany began invading its neighbors, General Hap Arnold of the AAF feared that a German victory would deprive the U.S. of airbases across the Atlantic. The solution, then, was to create a bomber with enough range that it didn’t matter if there were no airbases across the ocean. In 1942, the B-29 debuted with a test run over the continental U.S. after taking off from Seattle. Ironically, when the bomber entered service in 1944, its first mission was in the Pacific theater against Japanese forces, not in Europe. As part of Operation Matterhorn, B-29s operating out of India, China, and other Asian countries bombed Japanese military targets and even the Japanese mainland. Eventually, two B-29s would be used in the first nuclear attacks in history: the Enola Gay dropped Little Boy on Hiroshima on August 6, 1945, while Bockscar dropped Fat Man on Nagasaki on August 9.
The B-29 was undoubtedly one of the world’s most devastating weapons, yet it wasn’t without a myriad of flaws that made it dangerous for its own crew. Some of these flaws were due to the aircraft’s ambitious design. One of the things that set the B-29 apart from other bombers was that it was pressurized, allowing it to fly higher without forcing the crew to rely on oxygen masks. Thanks to pressurization, B-29s could fly as high as 40,000 feet, increasing fuel efficiency while avoiding much of the fighting, as well as potentially dangerous weather. Pressurization had only been in limited use until then, and the B-29 was the first mass-produced aircraft to feature it. Instead of pressurizing the entire fuselage, however, there were three individual compartments connected by tunnels. Unfortunately, a loss in pressure while someone was in the tunnel could result in them being ejected from the plane. Another flaw was that its engines were prone to overheating and catching fire, leading to many casualties. The reason for this was the shape of the engine, which was made to be aerodynamic. Its odd shape had the unintended consequence of reducing the engine’s ability to cool down. Still, the Superfortress did earn its name. The plane was equipped with a radar bombing system and an array of remote-controlled turrets that also used radar to aim with increased accuracy against enemy fighters flying at hundreds of miles per hour. The B-29 wasn’t invincible by any means, but it was less reliant on escort fighters for defense.
For history buffs, the B-29 remains one of the most iconic aircraft of WWII. The plane continued to be used by the U.S. military until the late 1950s, and the innovations that its development pioneered are now commonplace in aviation. B-29s even assisted in the U.S.’s discovery of the jet stream. Japanese meteorologist Wasaburo Ooishi discovered the jet stream in the 1920s and had been studying it ever since, and the Japanese military used that knowledge to send bomb-laden balloons across the Pacific to strike at America's West Coast. The U.S., on the other hand, didn’t discover the jet stream until its B-29s encountered it on some of their missions, where the unexpected bands of high-altitude wind threw off their aim and affected their fuel efficiency. What a way to wander into a new kind of weather.
[Image description: A Boeing B-29 Superfortress on display at the Museum of Flight in Seattle, WA.] Credit & copyright: NeonMaenad, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
September 20, 2025
- Sports Sporty Curio
You might know how to fly a kite, but does your kite know how to fly you? British kitesurfer Jake Scrace just set a new record by soaring more than 1,500 feet into the air. Scrace set his personal best and world record height of 1,587 feet off the coast of the Isle of Wight, demolishing the previous record of 908.7 feet. Of course, when it comes to kitesurfing, it’s not just about the altitude, but the waves. Kitesurfing, like its cousin windsurfing, was inspired by surfing. The kite looks more like a paraglider, below which surfers stand, holding onto handles, with their feet attached to a board. While surfing has been around for centuries, kitesurfing first began to emerge as a sport after French brothers Bruno and Dominique Legaignoux invented the inflatable kite, which remains the basis of kitesurfing kites to this day. The sport then grew through the 1990s until the first competitive event was held in Maui in 1998. Currently, there are two major types of kite-based water sports: kitesurfing and kiteboarding. The difference? Most people agree that the former makes use of a directional board like in surfing, while the latter uses twin-tip boards like in snowboarding. Either way, it’s impossible to be bored on these boards.
September 19, 2025
- Mind + Body Daily Curio
You’ve heard of green tea, but what about beef tea? It may sound absurd, but beef tea was an actual beverage popular in the 19th century. It was mainly meant as a remedy for colds and other medical conditions, which might not seem so strange considering that it is very similar to a modern health-movement drink: bone broth.
Beef tea was a type of broth drink made by “steeping” cubed chunks of beef, along with herbs and spices thought to be healthful, such as cloves, in a simmering pan, then adding water to the gravy that was produced. Once any fat was skimmed off, what remained was a brownish, transparent, savory-tasting liquid.
In Victorian times, beef tea was frequently served at hospitals and sanitariums, and was said to help those struggling with all kinds of illnesses, especially tuberculosis and other respiratory ailments. Beef tea was so popular that it even appeared at the World’s Fair several times. Scottish entrepreneur John Lawson Johnston created his own brand of meat extract paste, called Bovril, that made beef tea brewing easier, and exhibited it at the Paris Exposition Universelle in 1889. Bovril is still popular in Europe today, where it’s used as a cooking ingredient and as a savory spread.
While beef tea was definitely invented in Europe, it’s impossible to say which country created it. One of the oldest written records of it comes from a newspaper recipe in Dublin, Ireland, in 1760. Its popularity surge during the Victorian era probably had a lot to do with the prevalence of tuberculosis, which had no cure. Like modern chicken noodle soup, beef tea was an easy-to-digest food that provided ill people with protein. Bone broth, which is currently popular as a health and wellness go-to, is very similar to beef tea, though it’s made by cooking bones rather than chunks of meat. It seems that the health world, like the fashion world, has trends that always seem to come back around.
[Image description: A brown figurine of a cow lying by a tree stump.] Credit & copyright: Recumbant Cow, Lyman, Fenton & Co. (1849–52). The Metropolitan Museum of Art, Gift of Mr. and Mrs. Stanley Herzman, 1983. Public Domain.
September 18, 2025
- Biology Nerdy Curio
These now-extinct people are currently making a big impact. According to a paper published in the Journal of Human Evolution by researchers at the University of Pannonia in Hungary, the remnants of Denisovan DNA might be responsible for granting some people immunity against tropical diseases. Denisovans were a group of hominins that emerged around 370,000 years ago, and their classification is still a matter of debate, owing to the limited number of fossils found. What is known, however, is that genetic traces of Denisovans still remain in modern humans, much like with Neanderthals. Some of those traces are apparently responsible for immunity against certain tropical diseases, like malaria. Researchers came to this conclusion after using a computer model to reconstruct the ancient climates of three regions where Denisovan remains were found: Siberia, the Tibetan Plateau, and Laos. The results of the model were then compared to the habitats of disease-carrying pests like mosquitoes and ticks, as well as data regarding Denisovan DNA in people today. What they found was that the Denisovans likely lived in all types of environments and were exposed to many diseases that still plague people today. In places where those diseases are more common, more of the Denisovan genome remains. For example, people in Melanesia are more likely to carry HLA-H*02:07, a gene originating from Denisovans that is associated with immunity against some tropical diseases. By comparing extant genes and the diseases they are associated with, researchers can also track the migration of ancient Denisovans. For example, the aforementioned Melanesians only carry the genes related to tropical diseases, not Lyme disease, indicating that they’re not descended from the same Denisovans who lived in Siberia or Tibet and developed a resistance to that disease. Denisovans might be gone, but it seems they may never stop helping us out.
- Mind + Body Daily Curio #3154
Sure, it saves daylight, but at what cost? Medical experts have been criticizing daylight saving time for decades, citing its negative short-term effects on health and safety. Now, scientists have concrete evidence that it’s no good in the long run either.
Daylight saving time (DST) has one undeniable benefit: it stays light later in the evening. That’s why retailers love it—shoppers are more likely to be out and about when it’s light. The problem is that they’re also more likely to get into a car accident on the way to the mall or have a heart attack. Alternatively, they might have a stroke instead, or the forklift driver moving pallets of merchandise for those retailers might have an accident on the job. That’s because the biannual ritual of turning the clock forward in the spring and back in the autumn disrupts people’s circadian rhythms, also known as their bodies’ internal clocks.
When Indiana adopted DST in 2006, the state saw a 27 percent increase in heart attacks the following year. On the other hand, proponents of DST claim that the extra hour of evening daylight leads to energy savings, though that difference might seem small compared to the lives affected. Indeed, the latest research from Stanford University shows that doing away with DST completely could prevent an estimated 300,000 strokes per year and around 2.6 million cases of obesity. Alternatively, permanent DST could be about two-thirds as beneficial as permanent standard time, and it would split the difference between the benefits and downsides of both. So, should we permanently gain an hour or lose an hour?
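As a purely illustrative aside, here is a minimal sketch (in Python) of the arithmetic implied above, assuming the Stanford figures quoted in this curio and the claim that permanent DST would deliver roughly two-thirds of the benefit of permanent standard time; all names and numbers are taken from, or assumed for, this example only.

```python
# Illustrative arithmetic only, using the figures quoted above.
strokes_prevented_std = 300_000    # strokes prevented per year under permanent standard time
obesity_prevented_std = 2_600_000  # obesity cases prevented under permanent standard time

dst_benefit_fraction = 2 / 3       # permanent DST said to be ~two-thirds as beneficial

# Scale the standard-time estimates to see what permanent DST would imply.
print(f"Permanent standard time: ~{strokes_prevented_std:,} strokes, "
      f"~{obesity_prevented_std:,} obesity cases prevented")
print(f"Permanent DST: ~{strokes_prevented_std * dst_benefit_fraction:,.0f} strokes, "
      f"~{obesity_prevented_std * dst_benefit_fraction:,.0f} obesity cases prevented")
```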
[Image description: A clock under glass, with a white base.] Credit & copyright: Clock, ca. 1852, The Metropolitan Museum of Art. Gift of Mr. and Mrs. Stuart P. Feld, 1983. Public Domain.
September 17, 2025
- World History Daily Curio #3153
All the other mummies must be rolling in their graves! Until recently, it was thought that the oldest mummies were around 7,000 years old, originating from the Chinchorro people in South America. Now, scientists have discovered mummies nearly twice as old in southeast Asia.
The process of deliberately mummifying a person’s remains is as varied as the cultures that practiced it. In ancient Egypt, the first mummies were created around 5000 B.C.E. thanks to the region’s naturally dry climate, which helped preserve bodies without extensive effort. More elaborate rituals developed later, around 4330 B.C.E., when Egyptians began using advanced embalming techniques to mummify the dead. Also around 5000 B.C.E., the Chinchorro people in modern-day Peru and Chile developed artificial mummification, and their mummies were thought to be the oldest in history. However, a string of new discoveries in south China, Vietnam, Laos, and other parts of Asia show that the history of artificial mummification is much older.
Researchers found human remains of hunter-gatherers that had been deliberately desiccated with fire over a long period of time. In addition to the heat, the smoke helped preserve the bodies as they dried out over several months. The bodies were also positioned carefully to avoid letting the skeleton fall apart while drying, showing that significant effort was put into the process. According to the researchers, these hunter-gatherers were using the smoke-drying method nearly 12,000 years ago, predating the Chinchorro mummies by thousands of years. The method itself is also rare, but not unheard of. Smoke-drying was used by the indigenous peoples of Australia, and it’s still used by some groups in the highlands of New Guinea. While it’s not as famous today as ancient Egyptian mummification techniques, the smoke-drying method must have been well known in pre-history, since it appears to have been used widely over millennia. You could say these mummies are tried, dried, and true.
- Biology Nerdy Curio
You can ko-a-la us crazy, but we think these are some of the most fascinating animals on the planet. They’re about to get a lot healthier, too: Australia recently approved a koala-specific vaccine to help the species fight chlamydia, a sexually transmitted disease that around 48 percent of wild koalas are thought to have. The new vaccine reduces mortality by around 65 percent, which is especially good news since koala populations are currently in decline.
Koalas, like many animals native to Australia, are marsupials, meaning that they give birth to tiny, underdeveloped young which then attach themselves to a pouch and continue developing there. Many marsupials also carry their young in their pouches even after they’re fully developed, to protect them from predators and shield them from the elements. Koalas are around the size of small dogs, growing up to 33 inches long and weighing up to 33 pounds. They’re famous for their cute, teddy-bear-like appearance, but there’s a lot more to them than that.
Even among marsupials, koalas are oddballs. It’s thought that their name originates from the Dharug language, in which it means “no drink” or “no water.” Indeed, koalas almost never drink water directly, unless there’s a fire or particularly devastating heat wave. Instead, they get all their water from their only source of food: eucalyptus leaves. Yes, koalas only eat one thing, and it’s poisonous. Most animals would die from consuming eucalyptus, but koalas have special liver enzymes and gut bacteria that allow them to digest the toxic leaves with no issue.
There’s a price to pay for their unusual diet, though. First of all, eating just one thing means that koalas can’t adapt to other environments very well, so deforestation and diseases that affect eucalyptus trees inevitably devastate koala populations. Then there’s the fact that digesting poisonous leaves requires a lot of energy, which means that koalas have to conserve it in other areas. This is part of the reason that they spend up to 22 hours a day sleeping. It might also contribute to their extremely small brain size. Koalas have one of the smallest brain-to-body mass ratios in the entire animal kingdom, with their brains making up just 0.2 percent of their body mass. As if that’s not embarrassing enough, koalas’ brains are also smooth, which makes complex or out-of-the-norm tasks (such as eating eucalyptus leaves from anything other than an actual branch) difficult for them to learn. Large, complex brains take a lot of energy to maintain, and koalas simply don’t have enough to spare. Despite koalas’ declining population, Australia regards them as a national symbol, and conservationists are doing all they can to preserve their habitats and fight the climate change and deforestation that impact them. They may be small (and sleepy), but they’ve got a lot of dedicated people in their corner.
[Image description: A koala sitting in a tree, grasping leaves in its paws.] Credit & copyright: Sklmsta, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.