Curio Cabinet
January 22, 2025
- Work Business Curio (8 min, 5 CQ)
California officials and insurance representatives are holding workshops starting this weekend to help people deal with their insurance companies amid the fi...
- Science Nerdy Curio (1 CQ)
Gorillas really aren’t supposed to fly. Earlier this month, a five-month-old gorilla was rescued from a plane’s cargo hold after someone tried to illegally import him into Thailand by way of Istanbul, Turkey. The baby primate, now named Zeytin, is recovering at Polonezkoy Zoo, and workers there hope that he may one day be reintroduced to the wild. Zeytin’s plight highlights a growing problem for wild gorilla populations: the illegal pet trade. But this is far from the only threat faced by the world’s largest primates.
Male gorillas can stand up to six feet tall and weigh up to 500 pounds, while females generally grow to around 4.5 feet tall and weigh around 250 pounds. Despite their enormous size and strength, these giants are fairly gentle. Most of their diet is made up of plants, though they also eat insects, like termites. Male gorillas may be famous for pounding their chests and shrieking, but such displays are actually fairly rare and are used to intimidate opponents in order to avoid real fights.
There are two gorilla species: Eastern and Western, each of which has its own subspecies. All four kinds live in central and east African rainforests, and all four are endangered. Like many rainforest animals, their habitat has been rapidly shrinking due to human encroachment and the expansion of the logging industry. However, the biggest and most violent threat to gorillas is illegal poaching. Ape meat is seen as a delicacy in some wealthy areas, and gorillas are prone to being killed for their meat since they do not typically attack or run from people who get close to them.
All gorillas live in groups called families or troops that can have up to 50 members. Troops are composed of a dominant male, called a silverback, several adult females, and their young offspring. Gorillas don’t leave the troop they were born into until they’re between eight and twelve years old, which highlights another challenge they face: slow birth and growth rates. Gorillas live to be between 35 and 40 years old in the wild, but females only have one baby at a time, with gestation taking around 8.5 months. Since each baby takes around a decade to fully mature, gorilla populations struggle to bounce back after poaching attacks or habitat destruction. Luckily, conservationists have implemented captive breeding programs around the world and some countries have enacted laws to protect gorilla habitats from further destruction. Here’s hoping that brighter times are ahead for these dark-furred wonders.
[Image description: A gorilla sitting in green grass at the Pittsburgh Zoo.] Credit & copyright: Daderot, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
- US History Daily Curio #3017 (1 CQ)
Why did they call it a ration when it was so irrational? Pre-sliced bread became popular starting in the late 1920s, and it quickly became so ingrained in consumers’ preferences that when it was banned during WWII, it caused quite an uproar. Bread has been around for millennia, but pre-sliced bread has only been around for about a century and a half. The very first bread-slicing device was invented in 1860 and used parallel blades to cut a loaf of bread all at once. However, it wasn’t until Otto Frederick Rohwedder of Iowa invented an automated version in 1928 that pre-sliced bread really took off. Soon, innovations saw machines that could slice and wrap bread at the same time, and consumers were glad to buy loaves that they could more conveniently consume. There was also an added benefit: because sliced bread came wrapped and consumers only had to take out as much as they needed at a time, the bread lasted longer compared to whole loaves, which had to be completely unwrapped to slice at home.
When World War II food rationing began in the U.S., Claude R. Wickard, Secretary of Agriculture and head of the War Food Administration, issued Food Distribution Order 1, which banned sliced bread in order to save on the nation’s supply of wax paper. The American public went into an immediate uproar, and Wickard was criticized in the press for the short-sighted measure. Firstly, the lack of sliced bread meant that housewives all over the nation had to vie for the same supply of bread knives, which were made of steel, another rationed resource. Secondly, because the machines that sliced bread also wrapped it, wrapping had to be done by hand again whether a loaf was sliced or not, which increased labor costs. Thirdly, since whole loaves went stale faster, more food was wasted during a time when families could only buy as much as their ration books allowed. Fortunately, the government reversed course on the decision, and the ban was lifted less than two months after it took effect. Let’s raise a toast to sliced bread.
[Image description: Slices of bread in front of a divided white-and-gray background. Some slices are white bread and some have whole grains on top.] Credit & copyright: Mariana Kurnyk, Pexels
January 21, 2025
- Work Business Curio (7 min, 4 CQ)
From the BBC World Service: As President Donald Trump begins his second term in office, he’s been talking tariffs — but not for China, as many expected. ...
- Humanities Word Curio (2 min, 2 CQ)
Word of the Day: January 21, 2025
\GOOR-mahnd\ noun
What It Means
A gourmand is a person who loves and appreciates good food and drink. Gourm...
with Merriam-Webster
- Music Appreciation Song Curio (2 CQ)
Sometimes you get a hit song and a film title in one fell swoop. On this day in 1978, the soundtrack for Saturday Night Fever, a movie about New York’s disco scene, began a 24-week run at number one on the U.S. album chart. It remains the only disco album to ever win a Grammy for Album of the Year. But its name (and sound) would have been very different if the Bee Gees hadn’t been approached to work on the film. In 1977, the Bee Gees’ manager, Robert Stigwood, told them about a movie he was producing called Saturday Night and asked them to write a song with the same name for it. Instead, they gave Stigwood a song they had already written, called Night Fever, and persuaded him to change the movie’s title to Saturday Night Fever to fit it. The rest is disco history. Night Fever is one of the best-remembered disco songs of all time, featuring a danceable beat, the Bee Gees' signature harmonized falsetto, and lyrics about (what else?) dancing all night. The song helped make the movie and its soundtrack a resounding hit. The Bee Gees really knew how to work smarter, not harder!
- Biology Daily Curio #3016 (1 CQ)
Don’t read this if you can’t stand to have your heart broken. Many penguins famously mate for life, a romantic fact that has helped make them some of the world’s best-loved birds. However, a 13-year study into the breeding habits of little penguins (Eudyptula minor) has revealed that the diminutive birds are surprisingly prone to “divorce.” Also known as fairy penguins, little penguins, as their name suggests, only grow to around 14 inches tall and weigh about three pounds. But big drama sometimes comes in small packages. Researchers from Monash University in Australia tracked the breeding habits of around a thousand little penguin pairs on Phillip Island. The island is home to the world’s largest colony of the species, with a population of 37,000 or so. Of all the pairs they studied, around 250 ended up “divorced,” with the pairs splitting up and seeking new breeding partners.
So, what causes penguin divorce? Struggles with infertility, mostly. Penguin couples were much more likely to part ways when they failed to produce offspring. While divorce rates could be as high as 26 percent in some years, rates went down when the colony saw more successful hatchings. Marital bliss isn’t determined by offspring alone, though. According to one of the researchers, Richard Reina, little penguins aren’t exactly known for their faithfulness. In a university press release, he explained, “In good times, they largely stick with their partners, although there’s often a bit of hanky-panky happening on the side.” It might be hard to swallow the idea of adorable penguins divorcing and cheating on each other, but this study into little penguin behavior is important for the future of conservation. Current efforts to protect penguin species are focused on the impact of climate change, but studies like this show that there are complex social dynamics to consider as well when trying to maintain a healthy population. No word yet on whether there are little penguin divorce lawyers.
[Image description: A little penguin standing just underneath some type of wooden structure.] Credit & copyright: Sklmsta (Sklmsta~commonswiki), Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
January 20, 2025
- Humanities Word Curio (2 min, 2 CQ)
Word of the Day: January 20, 2025
\in-IM-it-uh-bul\ adjective
What It Means
Inimitable describes someone or something that is impossible to c...
with Merriam-Webster
- Art Appreciation Art Curio (1 CQ)
“Prayer nut” sounds like a name for someone who loves to go to church, but its meaning is actually a lot more literal! Real prayer nuts are intricate, miniature sculptures contained inside a wooden sphere. The image above shows two round pieces of carved wood connected by a hinge. Both pieces feature detailed, carved scenes. The top scene shows a man about to be beheaded, while the bottom shows a king sitting on a throne before an audience. Also known as paternosters, prayer nuts were often no larger than a golf ball and featured such minutely detailed scenes that magnification was required to truly view them properly. They were popular in the Netherlands in the early 1500s, and were likely prohibitively expensive thanks to the level of detail involved in making them. While they were religious in nature, they were also valued as displays of wealth. Today, only around 150 prayer nuts remain, and their use as devotional items is heavily debated. It’s unclear if they were ever used for prayer at all. This is a tough nut to crack.
Prayer Nut with Scenes from the Life of St. James the Greater, Adam Dircksz (active c. 1500), c. 1500–1530, Boxwood, 2.31 x 1.87 in. (5.8 x 4.8 cm.), The Cleveland Museum of Art, Cleveland, Ohio
[Image credit & copyright: The Cleveland Museum of Art, Purchase from the J. H. Wade Fund 1961.87, Public domain, Creative Commons Zero (CC0) designation.]
- Mind + Body Daily Curio #3015 (1 CQ)
You can paint the town red all you want, but you probably shouldn’t eat all the red you want. The FDA recently banned Red No. 3, a ubiquitous food coloring agent. Also called erythrosine, Red No. 3 is a synthetic dye made from petroleum, and it’s been standing out like a red thumb for decades thanks to being a known carcinogen. While its use in cosmetics was banned years ago, the dye is still used in over 9,200 food products. The FDA is giving companies until the beginning of 2027 to remove the dye from their formulas, bringing an end to a decades-long battle by activists to ban the dye from the food supply. Red No. 3 was first approved for use in food in 1907, and since then, it has been the go-to dye to give sodas, candies, and other sweets a vibrant, cherry red coloration. The color may make the food appealing to the eye, but it’s not exactly kind to the rest of the body. The dye was first identified as a possible carcinogen in the 1980s when it was shown to cause cancer in male rats that were exposed to high doses. Since then, groups like the Center for Science in the Public Interest have been pressuring the FDA to ban the dye, while several states did so of their own accord. For example, the dye has been banned in California since 2023. Outside the U.S., the dye has already been banned by several countries in the European Union, Australia, and Japan, and the list is growing. However, Red No. 3 isn’t the only dye to cause controversy. Red No. 40 has been linked in recent years to behavioral issues in children, but it’s not facing a ban yet. It seems red is a tough color to dye for.
[Image description: A red rectangle.] Credit & copyright: Author’s own photo. Public Domain.
- Work Business Curio (8 min, 5 CQ)
“We already felt like we’re being priced out,” said Claire Contreras, a teacher who lost her Altadena apartment to a fire. “All of this just kind of puts a b...
January 19, 2025
- Biology PP&T Curio (1 CQ)
You really shouldn’t spray paint at church—especially not on the grave of the world’s most famous biologist. Two climate activists recently made headlines for spray painting a message on Charles Darwin’s grave in London’s Westminster Abbey. They hoped to draw attention to the fact that Earth’s average global temperature was more than 1.5 degrees Celsius (about 2.7 degrees Fahrenheit) above pre-industrial levels for the first time in 2024. While there’s no way to know how Darwin would feel about our modern climate crisis, during his lifetime he wasn’t focused on global temperatures. Rather, he wanted to learn how living things adapted to their environments. His theory of natural selection was groundbreaking…though, contrary to popular belief, Darwin was far from the first scientist to notice that organisms changed over time.
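(A quick note on that figure: a temperature anomaly is a difference, not an absolute reading, so it converts between scales using only the 9/5 ratio, with no 32-degree offset. Here is a minimal sketch of that arithmetic in Python, assuming the 1.5-degree-Celsius anomaly cited above.)

```python
# Convert a temperature *difference* (anomaly) from Celsius to Fahrenheit.
# Differences scale by 9/5 only; the +32 offset applies to absolute readings.
def anomaly_c_to_f(delta_c: float) -> float:
    return delta_c * 9.0 / 5.0

print(anomaly_c_to_f(1.5))  # 2.7 -> roughly 2.7 degrees Fahrenheit above pre-industrial levels
```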
Born on February 12, 1809, in Shrewsbury, England, Charles Darwin was already interested in nature and an avid collector of plants and insects by the time he was a teen. Still, he didn’t set out to study the natural world, at first. Instead, he apprenticed with his father, a doctor, then enrolled at the University of Edinburgh’s medical school in 1825. Alas, Darwin wasn’t cut out to be a doctor. Not only was he bored by medical lectures, he was deeply (and understandably) upset by medical practices of the time. This was especially true of a surgery he witnessed in which doctors operated on a child without anesthetics—because they hadn’t been invented yet. After leaving medical school, Darwin didn’t have a clear direction in life. He studied taxidermy for a time and later enrolled at Cambridge University to study theology. Yet again, Darwin found himself drawn away from his schooling, finally spurning theology to join the five-year voyage of the HMS Beagle to serve as its naturalist. The Beagle was set to circumnavigate the globe and survey the coastline of South America, among other things, allowing Darwin to travel to remote locations rarely visited by anyone.
During the voyage, Darwin did just what he’d done as a child, collecting specimens of insects, plants, animals, and fossils. He didn’t quite have the same “leave only footprints” mantra as modern scientists, though. In fact, Darwin not only documented the various lifeforms he encountered on his journey, he dined on them too. This was actually a habit dating back to his days at Cambridge, where he was the founding member of the Gourmet Club (also known as the Glutton Club). The goal of the club had been to feast on “birds and beasts which were before unknown to human palate,” and Darwin certainly made good on that motto during his time aboard the Beagle. According to his notes, Darwin ate iguanas, giant tortoises, armadillos, and even a puma, which he said was "remarkably like veal in taste." His most important contribution as a naturalist, though, was his theory of natural selection.
Darwin came up with his most famous idea after observing 13 different species of finches on the Galápagos Islands. Examining their behavior in the wild and studying their anatomy from captured specimens, Darwin found that the finches all had differently shaped beaks for different purposes. Some were better suited for eating seeds, while others ate insects. Despite these differences, Darwin concluded that they were all descended from the same bird, having many common characteristics, with specializations arising over time. Darwin wasn’t the first person to posit the possibility of evolution, though. French naturalist Jean-Baptiste Lamarck believed that animals changed their bodies throughout their lives based on their environment, while Darwin’s contemporary Alfred Russel Wallace independently arrived at the same theory of natural selection. In fact, the two published a joint statement and gave a presentation at the Linnean Society in London in 1858. Darwin didn’t actually coin the phrase “survival of the fittest,” either. English philosopher Herbert Spencer came up with it in 1864 while comparing his economic and sociological theories to Darwin’s theory of evolution.
Despite Darwin’s confidence in his theory and praise from his peers in the scientific world, he actually waited 20 years to publish his findings. He was fearful of how his theory would be received by the religious community in England, since it contradicted much of what was written in the Bible. However, despite some public criticism, Darwin was mostly celebrated upon his theory’s publication. When he died in 1882, he was laid to rest in London’s Westminster Abbey, alongside England’s greatest heroes. It seems he didn’t have much to fear if his countrymen were willing to bury him in a church!
[Image description: A black-and-white photograph of Charles Darwin with a white beard.] Credit & copyright: Library of Congress, Prints & Photographs Division, LC-DIG-ggbain-03485, George Grantham Bain Collection. No known restrictions on publication.
- Work Business Curio (9 min, 5 CQ)
From a new so-called Department of Government Efficiency to an incoming Republican Congress, deep cuts to the federal government are promised this year. Amon...
January 18, 2025
- Humanities Word Curio (2 min, 2 CQ)
Word of the Day: January 18, 2025
\MIN-uh-skyool\ adjective
What It Means
Something described as minuscule is very small. Minuscule can also ...
with Merriam-Webster
- Sports Sporty Curio (1 CQ)
Forget the polish: they’re starting from scratch. Some Olympians who earned medals during the 2024 Paris Olympics are starting to ask for replacements after their prizes started showing signs of significant deterioration. Designed by Parisian jewelry house Chaumet and manufactured by the Monnaie de Paris, the French mint, 5,084 medals were handed out during the Paris Olympics and Paralympics last year. The medals were made with something extra inside them—a piece of the Eiffel Tower itself. These pieces came from girders and other parts of the tower that were replaced during renovations. With 18,038 iron parts making up the entirety of the tower, renovation is an ongoing process that often involves swapping out old components. But it seems that the Olympic medals that contain pieces of the tower need renovations of their own. Some athletes posted pictures of their medals deteriorating while the games were still ongoing, like American skateboarder Nyjah Huston, whose video went viral on social media. Since then, many more have spoken out about the issue. The affected medals are described as having “crocodile skin” from corrosion. The actual cause of the damage is unknown, but the Monnaie de Paris is set to begin making replacements in the coming weeks. Replacing over 5,000 medals sounds like an Olympic feat of its own.
- Work Business Curio (7 min, 4 CQ)
California officials and insurance representatives are holding workshops starting this weekend to help people deal with their insurance companies amid the fi...
January 17, 2025
- Work Business Curio (8 min, 5 CQ)
From the BBC World Service: China’s economy grew by 5% last year, beating expectations. This growth was driven by the country’s manufacturing sector, with th...
- Humanities Word Curio (2 min, 2 CQ)
Word of the Day: January 17, 2025
\ap-rih-HEN-shun\ noun
What It Means
Apprehension most often refers to the fear that something bad or unple...
with Merriam-Webster
- Mind + Body Daily Curio (1 CQ)
You could call it a kingly dish…too bad it’s been forgotten! Chicken à la king was once one of the U.S.’s most popular dishes. It was a hit at dinner parties in the 1950s and 60s, and could also be found in plenty of fancy restaurants. Today, you’d be hard pressed to find it anywhere. So, what happened?
Despite its royal name, chicken à la king is a fairly simple dish, made from easy-to-source ingredients. It consists of chopped chicken in a cream sauce with veggies like mushrooms, tomatoes, and peas. Sherry is sometimes added to the sauce. The dish is usually served over noodles, rice, or toast, making chicken à la king a sort of sauce itself.
No one knows who invented chicken à la king, though most theories suggest it dates back to the mid to late 1800s. Some claim that it was invented by a chef at the famous New York restaurant Delmonico's, where it was called “Chicken à la Keene.” There are various stories of other New York City chefs creating the dish, though one tale links chicken à la king to Philadelphia. Supposedly, in the 1890s, a cook named William "Bill" King created it while working at the Bellevue Hotel.
Wherever it came from, there’s no doubt that chicken à la king’s popularity began in New York City, where several fancy restaurants began serving it in the early to mid 1900s. Between 1910 and 1960, the dish appeared on more than 300 menus in New York City. Beginning in the 1940s, dinner parties with friends and neighbors became one of the most popular ways for suburbanites to socialize. Chicken à la king, with its short prep time and easy-to-find ingredients, quickly became one of the most commonly-found foods at such parties, not to mention at weddings and other large-scale get-togethers.
As for why the dish fell out of fashion…no one’s really sure. As the dish became more common, it’s possible that quicker and cheaper versions of it convinced some people that it didn’t live up to its original hype. Or perhaps its meteoric rise in popularity was also its downfall, and people simply got sick of it being served at every major function. One thing’s for sure: chicken à la king was here for a good time…not for a long time.
[Image description: Two pieces of raw chicken with sprigs of green herbs on a white plate.] Credit & copyright: Leeloo The First, Pexels
January 16, 2025
- Work Business Curio (9 min, 5 CQ)
Every bottle of alcohol sold in the U.S. already has a warning label. But late last week the U.S. Surgeon General recommended changes to those labels, includ...
- Astronomy Nerdy Curio (1 CQ)
How did Pluto get its moon? By playing it cool, of course. Scientists have long wondered how a small dwarf planet like Pluto managed to trap an entire moon in its orbit. Now, researchers at the Lunar and Planetary Laboratory of the University of Arizona think an icy “kiss” might have been the key. Pluto and its moon, Charon, make for an unusual pair. Most planets are substantially bigger than their moons, but that’s not so with Pluto. The icy dwarf planet is around 1,400 miles wide, and its moon is 754 miles wide while being about 12 percent the mass of Pluto. They are, essentially, two dwarf planets orbiting around one another. In fact, scientists sometimes refer to them as a double dwarf planet system. Researchers are just now piecing together how the two of them ended up together, and the answer appears to be an unusual process they’re calling “kiss-and-capture.” Billions of years ago, Charon collided with Pluto, but since both of them were solid enough to withstand the impact, they ended up stuck together in a snowman-like configuration. This is different from a standard “collision capture,” where the impact deforms both colliding bodies as if they were fluids. Because Charon rotates more slowly than Pluto, the two couldn’t merge together. Instead, the dwarf planet and moon remained attached for around 10 to 15 hours, after which point Charon started to migrate away, into its current orbit. Scientists at the University of Arizona are basing this theory on an advanced computer simulation where the material properties of both bodies were used to determine how they would react during a collision. It seems that, even in a simulated environment, these two were made for each other.
[Image description: A starry sky with some purple visible.] Credit & copyright: Felix Mittermeier, Pexels
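For a sense of scale, here is a minimal back-of-the-envelope sketch in Python using the approximate figures quoted above (a roughly 1,400-mile-wide Pluto, a 754-mile-wide Charon, and a 12 percent mass fraction). Charon spans more than half of Pluto’s diameter; Earth’s Moon, by comparison, is only about 1.2 percent of Earth’s mass.

```python
# Rough ratios based on the approximate figures quoted in the article above.
pluto_diameter_mi = 1400
charon_diameter_mi = 754
charon_mass_fraction = 0.12  # Charon's mass relative to Pluto's

print(f"Diameter ratio (Charon/Pluto): {charon_diameter_mi / pluto_diameter_mi:.2f}")  # ~0.54
print(f"Mass ratio (Charon/Pluto): {charon_mass_fraction:.2f}")                        # 0.12
```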
- Science Daily Curio #3014 (1 CQ)
Can you feel the heat? Devastating wildfires are wreaking havoc in populated areas of California, and as firefighters continue to battle the blazes, you may be wondering why it seems like such an uphill fight. As our climate warms, it’s increasingly important to understand how wildfires start, how they spread, and why fighting them can be extraordinarily difficult. Wildfires can start in a number of natural ways, from lightning strikes to the concentrated heat of the sun, but the most common culprit is human interference. Most wildfires are started by simple, careless actions, like discarding lit cigarettes in a dry area or failing to follow proper safety procedures with a campfire. Regardless of how they start, though, wildfires can grow out of control at an unbelievable pace. The speed at which a wildfire grows is based on three main factors: fuel, weather, and topography.
The density and material properties of a fire’s fuel (lush vegetation vs. dead, dry vegetation) can greatly impact how fast the fire spreads, but once a fire becomes intense enough, there’s very little difference. Even healthy, green vegetation can be quickly dried out by intense heat, and as long as there is net energy from a given source of fuel, the fire will spread. Topography, or the geography of a given location, matters a lot too. For example, fire tends to spread faster uphill because hot gases from the fire rise upward to preheat and dry out vegetation ahead of the flames. In grass fires, flames can spread up to four times faster uphill. Then, there is the weather. Humidity affects how quickly a fire spreads since it has to burn away ambient moisture in the atmosphere, but in California, firefighting efforts have largely been hampered by strong winds. Wind provides more oxygen for the flames, helping the fire burn hotter while carrying ashes and other flammable material over long distances, potentially spreading it to unconnected areas. Strong winds can also make it difficult to fly over the wildfires and douse them from above, hindering firefighters’ ability to contain the spread. Once wildfires spread to densely populated areas, the fires can easily destroy most buildings, whether they’re made of wood or brick, although the latter would last a little longer. If you’re given orders to evacuate ahead of an approaching wildfire, don’t try to weather the firestorm with a garden hose. It’s better to lose your home than your life.
[Image description: A nighttime wildfire burning among pine trees at Lick Creek, Umatilla National Forest, Oregon.] Credit & copyright: Brendan O'Reilly, U.S. Forest Service- Pacific Northwest Region. This image is a work of the Forest Service of the United States Department of Agriculture. As a work of the U.S. federal government, the image is in the public domain.