Curio Cabinet / Daily Curio
-
Biology Daily Curio #3104
Chromosomes are fundamental to creating life, but you can have too much of a good thing. Just one extra copy of chromosome 21 is responsible for causing Down syndrome, which itself causes many different health problems. Now, scientists at Mie University in Japan have developed a way to remove the extra chromosome using CRISPR technology. The condition of carrying an extra copy of chromosome 21 is called trisomy 21. Someone born with this extra chromosome ends up with 47 total chromosomes rather than the usual 46, which results in a range of health effects, including congenital heart problems and cognitive issues.
Until recently, genetic disorders like Down syndrome were considered untreatable, but medical advancements have been changing things. Back in 2023, the FDA approved Casgevy and Lyfgenia, both cell-based gene therapies that treat sickle cell disease (SCD) in patients 12 and older. The treatments were developed using CRISPR-Cas9, which uses enzymes to precisely target the parts of the DNA strand responsible for the disease. It’s the same technology used by the scientists at Mie University, who targeted trisomy 21 in a process called allele-specific editing, or, as the researchers described it, “Trisomic rescue via allele-specific multiple chromosome cleavage using CRISPR-Cas9 in trisomy 21 cells.” The process was performed on lab-grown cells, which quickly recovered and began functioning like any other cells. This new development is unlikely to signal an immediate reversal of Down syndrome, as it will be a while before the treatment can undergo human trials. One particular hurdle is that the treatment can sometimes target healthy chromosomes. Still, it shows that CRISPR-Cas9 can be used to remove entire chromosomes and that cells affected by trisomy 21 can make a full recovery with treatment. That’s a lot of medical advancement in one crisp swoop.
[Image description: A diagram of a DNA strand with a key for each labeled part. The key from top to bottom reads: Adenine, Thymine, Cytosine, Guanine, and phosphate backbone.] Credit & copyright: Forluvoft, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide.
-
Gardening Daily Curio #3103
They may be small, but they’re no saplings! The Brooklyn Bonsai Museum is celebrating its 100th birthday by inviting the public to learn more about the ancient art of bonsai, which has roots that go beyond just Japan. Bonsai involves growing trees in containers, carefully pruning and maintaining them to let them thrive in a confined space. When done properly, a tree kept in such a manner will resemble a full-sized tree in miniaturized form and not just look like a stunted specimen. Experienced practitioners can also guide the growth of the trunk and branches to form artful, often dramatic shapes.
Bonsai has been gaining popularity in the U.S. over the past century, but its history goes all the way back to 8th-century China, when dwarf trees were grown in containers and cultivated as luxury gifts. Then, during the Kamakura period, which lasted from the late 12th century to the early 14th century, Japan adopted many of China’s cultural and artistic practices and sensibilities, including what it would come to call bonsai.
For a tree to be a bonsai tree, it has to be grown in a shallow container, which limits its overall growth while still allowing it to mature. While most bonsai trees are small enough to be placed on a desk or table, it’s not really the size that dictates what is or isn’t a bonsai. As long as it’s grown in a shallow container, a tree can be considered bonsai. In fact, there are some downright large specimens that dwarf their human caretakers. A category of bonsai called “Imperial bonsai” typically ranges from five to seven feet tall, but the largest bonsai in existence is a sixteen-foot red pine at the Akao Herb & Rose Garden in Shizuoka, Japan. Bonsai trees can also live just as long as their container-free counterparts. The oldest currently in existence is a Ficus retusa Linn at the Crespi Bonsai Museum in Italy, which is over 1,000 years old and was originally grown in China, presumably before the practice even spread to Japan. If this tree ever falls—in a forest or not—you can bet that someone’s going to make a lot of noise.
[Image description: A potted bonsai tree sitting on a table with a bamboo fence in the background.] Credit & copyright: Daderot, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Mind + Body Daily Curio
This isn’t your grandma’s ice cream…unless she happens to be a native Alaskan. Akutaq, also known as Alaskan ice cream, is one of the world’s most unusual frozen foods. Although it resembles ice cream, as its nickname suggests, it is savory and contains no dairy. It does include plenty of animal fat, though.
Akutaq is a traditional Alaskan dish enjoyed for centuries by different groups of peoples native to the region. It is made by mixing ice, often from freshly-fallen snow, with berries, meat, animal oil, and whipped animal tallow. Tallow is made by melting fat and then cooling it into a solid, waxy substance; whipping it gives it a lighter, fluffier texture. A range of Alaskan animals can be used to make akutaq, including caribou, seals, moose, and fish. The dish is traditionally mixed in an oval wooden bowl called a tumnaq.
No one knows which group of native Alaskans first invented akutaq. It is most often attributed to the Yupik people, who are indigenous to both Alaska and eastern Russia, since the name “akutaq” means “mix them together” in the Yup'ik language. However, other Alaskan peoples, including the Inuit, also make akutaq, and since recipes were traditionally passed down orally, it’s unlikely that we’ll ever know exactly how it came to be.
What we do know is what it was used for: quick energy for long journeys. Unlike actual ice cream, which is considered a sweet treat and nothing more, akutaq is serious business, providing much-needed protein and fat before long expeditions through the snow. Since many native Alaskan peoples were nomadic, moving from place to place throughout the year in order to follow herds of prey animals, long treks in cold weather were inevitable. Akutaq helped ensure that everyone had enough energy and strength to make the trip. The ingredients could also vary widely based on where people were at any given time. Before particularly long journeys, akutaq might include dried meat for added protein, while akutaq made near the coast included more fish and seal meat. It might not come on a cone, but there’s no doubt that this frozen dish is more useful than ice cream.
[Image description: A wooden bowl, called a tumnaq, made for making akutaq.] Credit & copyright: Caroline Léna Becker, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Biology Daily Curio #3102
Telling people apart is as easy as breathing. Researchers recently found that the pattern of a person’s breathing may be unique to the individual, much like their fingerprints. Currently, there are only a few surefire ways to identify someone: fingerprints, eye scans, and DNA tests. Soon, another option might be available in the form of breathing. Researchers at the Weizmann Institute of Science in Israel hypothesized that a person’s breathing pattern might be unique to them, and tested the idea with the help of 100 participants. The participants were equipped with special devices that tracked their breathing throughout the day, measuring the frequency and duration of each breath, along with the amount of air passing through their nasal cavities. Over the course of two years, the researchers entered the data they collected into a machine learning program, which learned to positively identify a person through their breathing alone with an accuracy of 96.8 percent.
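For readers curious what that kind of analysis might look like in practice, here is a minimal sketch of training a classifier on breathing features. The feature names, the synthetic data, and the use of scikit-learn are all illustrative assumptions; the article does not describe the study’s actual data pipeline.

```python
# Illustrative sketch only: identify "people" from hypothetical breathing features.
# Features (all assumed): breaths per minute, mean inhale duration (s),
# mean exhale duration (s), and nasal airflow volume (arbitrary units).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_people, samples_per_person = 100, 50

X, y = [], []
for person in range(n_people):
    # Each person gets a stable "breathing signature"...
    signature = rng.normal([14.0, 1.8, 2.2, 1.0], [2.0, 0.3, 0.3, 0.2])
    for _ in range(samples_per_person):
        # ...and each recorded sample varies slightly around that signature.
        X.append(signature + rng.normal(0.0, 0.1, size=4))
        y.append(person)
X, y = np.array(X), np.array(y)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"Identification accuracy: {accuracy_score(y_test, clf.predict(X_test)):.1%}")
```

On tidy synthetic data like this the accuracy comes out near-perfect; the study’s 96.8 percent figure reflects the much messier variation in real, day-long recordings.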
There may be another use for this type of analysis besides identification. Researchers found that a person’s breathing revealed not only their identity, but information about their physical and mental health. People of similar body mass indexes share similarities in the way they breathe, and so do those who suffer from depression or anxiety. Those with depression tend to exhale quickly, while those with anxiety have shorter inhales and pause their breathing more frequently during sleep. According to the researchers, their next step is to find out if and how breathing can be used as a diagnostic tool. In the future, they hope that it may even be possible to change people’s breathing patterns for the better. As Noam Sobel, a co-author of the study, said in a statement, “We intuitively assume that how depressed or anxious you are changes the way you breathe. But it might be the other way around. Perhaps the way you breathe makes you anxious or depressed. If that’s true, we might be able to change the way you breathe to change those conditions.” We’ll be able to breathe easy, then.
[Image description: The black nose of a dog with brown fur. The rest of the dog’s face is not visible.] Credit & copyright: HTO, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide.
-
US History Daily Curio #3101
Not all name tags are important enough to get a name of their own. Recently, the dog tag of a World War II serviceman was returned to his family after it was lost for 80 years. While the dog tag didn't get to serve its intended purpose, there’s a reason why such military name tags have been used, in various forms, for thousands of years.
Joseph L. Gray passed away in 1945 after the B-17 he was serving on crashed on the Isle of Man. The only consolation was that the crash was well enough documented that his death was known about at the time. His dog tag was found decades later with a metal detector, after which it was donated to a local museum. The dog tag was then spotted by a descendant of a fellow crewmate, leading to its return to Gray’s family. Similar tales abound when it comes to dog tags, which have long been used in the U.S. military to serve as identification in combat. However, such name tags are a surprisingly ancient concept.
Ancient Romans issued something similar to modern dog tags in the form of a signaculum, a piece of metal worn around a legionary’s neck with identifying information. During the American Civil War, soldiers weren’t issued a standardized tag, so many fashioned their own from spare pieces of lead, copper, or even coins. Marines of the time used a piece of wood on a string for the same purpose. The first official American military ID tags were issued in 1899 during the Spanish-American War. Since then, the U.S. military has continued to issue them, with the design changing gradually over time. The name “dog tag,” however, doesn’t come from the military at all. Credit for that name goes to William Randolph Hearst, who vehemently opposed President Franklin D. Roosevelt’s New Deal and Social Security program. Hearst claimed that Roosevelt would force people to wear metal tags with their names and Social Security numbers, like “dog tags.” The nickname was adopted by soldiers soon after. Funnily enough, one proposed idea for issuing Social Security numbers involved metal plates instead of paper cards, and an unused prototype of these much-dreaded “dog tags” still exists as a museum display at the Social Security Administration's headquarters. It seems that every dog tag has its day.
[Image description: An American flag on a wooden post.] Credit & copyright: Crefollet, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Science Daily Curio #3100
Well, at least you can’t accuse these legislators of being boring. Several U.S. states have introduced legislation to ban fabled “chemtrails,” a name used by those who believe that the white, cloud-like lines left by airplanes contain deadly chemicals. The true nature of these streaks in the sky is not nearly that insidious, but they’re not completely harmless either.
The latest state to hop on the anti-chemtrail bandwagon is Louisiana, where a state legislator introduced a bill to outlaw the lines left in the wake of airplanes. A decades-old conspiracy theory holds that these are actually the result of a shadowy effort to disperse harmful chemicals over the general populace, but the proper term for them is “contrails,” short for condensation trails. Contrails generally form at altitudes between 32,000 and 42,000 feet due to the water vapor released by jet engines. At those altitudes, the hot water vapor cools rapidly after exiting the engine and condenses, leaving visible streaks in the sky. Of course, the conditions have to be just right, or else the sky would be covered in an endless criss-crossing of airplane flight paths. Aside from the altitude of the plane, the air has to be cold and humid enough for contrails to form.
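As a rough illustration of that “just right” requirement, here is a toy check in Python. The altitude band comes from the paragraph above, but the temperature and humidity cutoffs are simplified assumptions chosen for illustration only; real forecasting relies on a more involved criterion from atmospheric science.

```python
# Toy illustration of the contrail-formation conditions described above.
# The altitude band is from the text; the temperature and humidity cutoffs
# are simplified assumptions, not an actual meteorological model.
def contrail_likely(altitude_ft: float, air_temp_c: float, relative_humidity_pct: float) -> bool:
    in_altitude_band = 32_000 <= altitude_ft <= 42_000  # typical formation altitudes
    cold_enough = air_temp_c <= -40.0                   # assumed: cold enough for rapid condensation
    humid_enough = relative_humidity_pct >= 60.0        # assumed: enough moisture for a visible, lasting trail
    return in_altitude_band and cold_enough and humid_enough

print(contrail_likely(38_000, -55.0, 70.0))  # True: cruise altitude, cold and humid air
print(contrail_likely(38_000, -55.0, 30.0))  # False: air too dry, no visible trail
```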
While contrails aren’t the product of nefarious intentions, that doesn’t mean they’re beyond reproach. According to some studies, contrails might actually be contributing to global warming in an unexpected way by trapping excess heat in the atmosphere, especially if they form at night or last until nighttime. Ironically, efforts to reduce carbon emissions and save money might be making the problem worse. Modern airliners are designed to cruise at around 38,000 feet, where thinner air reduces drag and saves fuel, but that altitude also makes contrails much more likely to form. The warming effect of contrails is so pronounced that they may be contributing more to atmospheric warming than the carbon emissions from the engines themselves. It’s a matter worth looking into for legislators, but they might want to familiarize themselves with the science around contrails first.
[Image description: A white plane with four contrails against a blue sky.] Credit & copyright: Adrian Pingstone (Arpingstone), Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide.
-
STEM Daily Curio #3099
It’s the comeback of the century. The European beaver has made its triumphant return to Portugal for the first time in centuries, and it’s not the only place where the endangered builder is reclaiming its old range. Like its counterpart in North America, the European beaver is a keystone species, reshaping its environment by building dams in waterways and digging channels to direct the flow of water. Yet for all its mighty endeavors, the beaver was, for centuries, at the mercy of a greater power: human hunters. Beavers were hunted to near extinction in most of Europe for their meat, fur, and castoreum, an aromatic substance that comes from internal sacs near the base of beavers’ tails. Beavers were even eaten during Lent after the Catholic Church classified the mammals as fish.
Beavers had a lot going against them when it came to their relationship with humans. Their fur happens to be ideal for making felt and has been considered a valuable commodity for centuries. Although castoreum has a distinctly unpleasant odor in its raw form (beavers mix it with their urine to mark their territory), it can be processed into an effective fixative agent for use in perfumes. Thus, beavers were trapped and hunted until they became extinct in many European locales, including Portugal, where they were last seen around the end of the 15th century. Then, over 500 years later, in 2023, a beaver was sighted less than 500 feet from the Portuguese border, giving hope to conservationists. Indeed, the presence of European beavers in Portugal has now been confirmed through telltale signs like dams and gnaw marks on trees. It’s a long-overdue arrival for Portugal, and it’s also the result of a century and a half of conservation efforts around Europe. While their numbers are still much lower than they once were, beavers are now found in most countries on the continent, and conservation efforts continue to help bring back the creature that once helped shape the very land. It’s time for these endangered engineers to get back to being busy beavers.
[Image description: A close-up photo of a beaver with wet fur.] Credit & copyright: National Park Service photo, Asset ID: d34648d9-10ec-44bb-bfce-9d4c2b20ee18. Public domain: Full Granting Rights.
-
Mind + Body Daily Curio
Some cheeses are a transcendent taste experience. You could even say that this one is…holy. Swiss cheese is famous for its mild flavor and its unusual consistency, which famously includes holes. How these holes came to be, and how Swiss cheese got so popular across the pond from Switzerland, are just two parts of this cheese’s intriguing backstory.
Swiss cheese is a light-yellow-to-white cheese with a nutty, slightly-sweet flavor. Like many cheeses, Swiss cheese is made by heating milk and then treating it with bacterial cultures to help it form curds before it is pressed and aged. Swiss cheese is usually aged for a few months, but different varieties can be aged for two years or more. The longer the cheese ages, the more intense its flavor. Swiss is one of the world’s most popular sandwich cheeses, and is commonly found at delis in the U.S. and throughout Europe.
The secret to how it got so popular lies in its origins. As its name suggests, Swiss cheese is from Switzerland, which is famous for its dairy industry to this day. Nowhere is this more true than in west-central Switzerland, in a valley commonly called Emmental. This area has been used for dairy farming for centuries, as its grassy, rolling hills make for perfect grazing land. The Swiss cheese we know and love today was invented in Emmental sometime in the 1300s, and to this day it’s known as “Emmental cheese” in Switzerland. In the mid-1800s, Swiss immigrants in Wisconsin made Swiss cheese an American favorite too, and cemented Wisconsin’s modern reputation as a dairy hotspot.
Yet, how this cheese got its famous holes was a mystery until fairly recently. For years, some farmers believed that the holes formed due to the specific cultures used to make the cheese, or due to a certain amount of humidity in the barns where it was aged. Later, scientists posited that the holes could be due to carbon dioxide released by bacteria in the cheese. It wasn’t until 2015 that Agroscope, a Swiss government agricultural research facility, discovered the actual secret: hay. Because Swiss cheese is often made in a traditional dairy farm setting, microscopic pieces of hay naturally fall into buckets of milk used to make the cheese. Holes then expand around these tiny impurities as the cheese ages. This also explains why fewer holes appear in factory-made Swiss, since hay is less likely to fall into milk in a factory setting. Hay, how’s that for solving a dairy mystery?
[Image description: A wedge of Swiss cheese with four holes.] Credit & copyright: National Cancer Institute Visuals Online, Renee Comet (Photographer). Public Domain.
-
STEM Daily Curio #3098
These dinosaurs might have been impressive to look at, but their table manners were awful. While most animals have to chew their food thoroughly, it seems that wasn’t the case for sauropods, some of the largest dinosaurs ever to walk the Earth. Based on a recently discovered fossil, scientists now believe that sauropods hardly chewed their food at all.
Sauropods were members of Sauropoda, a clade of enormous, long-necked, vegetarian dinosaurs. Yet, for a long time, scientists didn’t know many specifics about sauropod diets. Paleontologists assumed that they ate plants based on two factors: their flat teeth, which are good for processing plant matter, and their sheer size, which meant there was no feasible way for them to have depended on anything other than plants, much like large herbivores today. Besides, their gigantic bodies, long necks, and long tails would have made them clumsy hunters. Now, not only do we have confirmation that sauropods ate plants, we know quite a bit about how they did it.
Researchers discovered a cololite—fossilized intestinal contents—that belonged to Diamantinasaurus matildae, a species of sauropod that lived around 100 million years ago. By performing a CT scan on the cololite, they found that the remains were composed entirely of plant matter. The leaves of the fern-like plant were largely intact, suggesting that the sauropod barely chewed them before swallowing. This means that sauropods were probably bulk feeders, ingesting as much plant matter as possible and relying on the natural fermentation process inside their digestive systems to break down their food. It’s a more extreme version of what many herbivores do today. Cows and other ruminants rely on fermentation to digest their food, and they also spend much of their time ruminating, which means they regurgitate their food to chew it again. You really needed a strong stomach to live in the Cretaceous period.
[Image description: A black-and-white illustration of a long-necked sauropod dinosaur.] Credit & copyright: Pearson Scott Foresman, Wikimedia Commons. This work has been released into the public domain by its author, Pearson Scott Foresman. This applies worldwide.
-
Music Appreciation Daily Curio #3097
You’ll probably never hear someone sing it at a karaoke bar, but it’s still the most frequently-sung song in English. Happy Birthday is an indispensable part of birthday celebrations around the world, and the composer of its melody, Mildred J. Hill, was born this month in 1859 in Louisville, Kentucky. Hill came up with the now-famous tune in 1893, and the lyrics were written by her sister Patty, but the song they wrote wasn’t actually Happy Birthday. Instead, it was called Good Morning to All, and it was meant to be sung by a teacher and their classroom. Patty was a pioneer in early childhood education. In fact, she is often credited with shaping the modern concept of kindergarten, and she sang Good Morning to All in her own classroom as a daily greeting.
The Hill sisters published Good Morning to All and other compositions in 1893’s Song Stories for the Kindergarten. Soon, the melody took on a life of its own. No one knows exactly how it happened, but the tune began to be used to wish people a happy birthday. One credible account credits the Hill sisters themselves, who are said to have changed the lyrics during a birthday get-together they were attending. Regardless of how it happened, Happy Birthday began to spread. By the early 20th century, the song appeared in movies, plays, and even other songbooks without crediting the Hill sisters. Mildred passed away in 1916 and Patty in 1946, neither having been credited as an originator of Happy Birthday. Their youngest sister, Jessica Hill, took it upon herself to copyright the song and have the publisher of Song Stories for the Kindergarten re-release it in 1935. The rights eventually went to another publishing company, and for decades they remained privately held, which is why movies had to pay royalties to use the song, and why restaurants wishing their patrons a happy birthday had to sing a proprietary or royalty-free tune instead. Then, in 2013, the publishing company was taken to court over claims that the copyright to Happy Birthday had expired years earlier. Finally, in 2016, the song entered the public domain. It’s a short and simple ditty, but its story is anything but.
[Image description: A birthday cake with lit candles in a dark setting.] Credit & copyright: Fancibaer, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Engineering Daily Curio #3096
When it comes to engineering, there are always new uses for old standbys. Putting ice in your drink is a pretty rudimentary way to keep cool when it’s hot out, but Manhattan is putting a new twist on it by using ice to cool an entire building. Most modern air conditioners are a double-edged sword because, while they keep people comfortable and safe from extreme heat, they also consume a lot of electricity. As average global temperatures continue to rise, that puts more and more strain on cities’ power grids, especially during peak daytime hours. The cooling system at New York City’s iconic Eleven Madison building is different. It does most of its work at night, when the city’s energy grid isn’t nearly as taxed.
Created by Trane Technologies, the system is called an ice battery. Every night, it uses electricity to freeze water into around 500,000 pounds of ice. During the day, the ice is used to cool the air being pushed through the building’s vents. Since electricity costs more to produce during peak hours, the system can lower energy bills by as much as 40 percent. The ice battery also drastically reduces the overall amount of energy used to cool the building, which is good news for the grid and the environment as a whole. If more buildings adopt ice batteries in the near future, it could reduce the need for more power plants to be built, even as the climate continues to warm. That’s less land and fewer resources that will have to be devoted to cooling buildings.
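To see how that peak-shifting translates into savings, here is a back-of-the-envelope sketch. The cooling load, electricity rates, and freezing overhead are made-up example figures, not numbers from Trane Technologies or Eleven Madison; they are chosen only to show how a reduction on the order of the 40 percent mentioned above can fall out of cheaper overnight power.

```python
# Back-of-the-envelope illustration of the peak-shifting savings described above.
# All figures below are assumed example values, not real building or utility data.
daily_cooling_kwh = 10_000   # assumed daytime cooling demand (kWh)
peak_rate = 0.30             # assumed peak-hour price ($/kWh)
off_peak_rate = 0.15         # assumed overnight price ($/kWh)
freezing_overhead = 1.15     # assume ~15% extra energy to freeze ice vs. cooling directly

conventional_cost = daily_cooling_kwh * peak_rate
ice_battery_cost = daily_cooling_kwh * freezing_overhead * off_peak_rate
savings = 1 - ice_battery_cost / conventional_cost
print(f"Conventional: ${conventional_cost:,.0f}, ice battery: ${ice_battery_cost:,.0f}, savings: {savings:.0%}")
# With these assumptions, the daily cooling bill drops by roughly 40 percent.
```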
Of course, it still takes quite a bit of electricity to freeze ice, even at night. Research is already underway to see if chilled but unfrozen water might be a viable alternative. If enough buildings and homes adopted such thermal energy storage systems in place of traditional HVAC systems, the environmental impact could be enormous, even though the new systems aren’t entirely carbon neutral. A step in the right direction is always better than a step back.
[Image description: A piece of clear ice with a jagged edge on top.] Credit & copyright: Dāvis Mosāns from Salaspils, Latvia. Flickr, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Biology Daily Curio #3095
What’s the matter, cat got your head? Burmese pythons and other invasive species have been wreaking havoc in the Florida Everglades for years, but it seems the local wildlife is starting to fight back. Burmese pythons are a particularly big problem in Florida. The snakes have no natural predators once fully grown, and they multiply prolifically. State officials have tried everything to get rid of the reptilian invaders, including declaring open season on the snakes and rewarding hunters for every one they bring in, but it seems that nothing can wipe them out completely. Meanwhile, pythons are capable of eating anything that can fit inside their surprisingly stretchy jaws, including other native predators like alligators. For years, scientists have been keeping a keen eye on the state’s python population, partly by strapping radio trackers onto male pythons during breeding season. The males lead researchers to nests, so that eggs and female pythons can be removed.
Yet, when scientists recently rolled up to the location of one of these radio-tracked pythons, they didn’t find a cozy love nest. Instead, they found the snake’s decapitated body, which weighed a whopping 52 pounds. After setting up a trail camera near the corpse, they found the culprit—a common bobcat happily munching away on the remains. This marks the first time that a bobcat has been known to take down a python, and it’s all the more shocking considering the python’s size. Before this, bobcats had never been known to hunt and eat pythons, though the snakes have been found with bobcat claws still inside them, which led scientists to believe that bobcats were unable to defend themselves against the snakes. On paper, it’s obvious why—adult bobcats weigh around 30 to 40 pounds, while Burmese pythons can weigh around 200 pounds. Maybe nature has simply had enough, or maybe this cat was just particularly skilled at punching (or clawing) above its weight.
[Image description: A bobcat in tall grass from the chest up.] Credit & copyright: National Park Service, Asset ID: 8859334f-c426-41db-9049-96e7d5dd5779. Public domain: Full Granting Rights.
-
Mind + Body Daily Curio
Would you like some sandwich with those fries? For anyone enjoying a horseshoe sandwich, it’s a fair question. Invented in Springfield, Illinois, horseshoe sandwiches are a spectacle to behold and a point of Midwestern pride. These open-faced, oversized sandwiches have been around since the 1920s, yet they haven’t spread far beyond the city where they were first concocted.
A horseshoe sandwich is an open-faced sandwich on thick toast, also known as Texas toast. It most commonly features a beef burger patty, though a slice of thick ham is sometimes used instead. On top of the meat is a tall pile of french fries drenched in cheese sauce. Though some modern horseshoe sandwiches use nacho cheese, traditionally the cheese sauce is inspired by Welsh rarebit, a dish of sharp cheddar cheese mixed with mustard, ale, or Worcestershire sauce served on toast.
Welsh rarebit played an important role in the creation of the horseshoe sandwich. Supposedly, in 1928, the swanky Leland Hotel in downtown Springfield, Illinois, was trying to attract new customers. Management asked hotel chef Joe Schweska to come up with a new, intriguing menu item. Schweska asked his wife, who had Welsh heritage, what she thought he should put on the menu. She suggested a spin on Welsh rarebit, so Schweska added french fries and a slice of thick-cut ham to the dish. The rest is history.
Except it’s difficult to know if Schweska was truly the first to make the sandwich. Some say that it was a different Leland chef, Steve Tomko, who actually invented the sandwich, since he later went on to serve it at the Red Coach Inn. Other Springfield restaurants soon had their own versions too, with several crediting themselves as the originators. No need to argue—there’s plenty of credit (and fries) to go around.
[Image description: A white plate with a hamburger patty covered in fries and white cheese sauce.] Credit & copyright: Dirtmound, Wikimedia Commons. This work has been released into the public domain by its author, Dirtmound at English Wikipedia. This applies worldwide. -
FREEWorld History Daily Curio #3094Free1 CQ
What's smooth and shiny enough for jewelry but dangerous enough for battle? Obsidian, of course. The Aztecs used obsidian for everything from necklaces to weapons of war. Now, archaeologists know where and how they sourced much of the volcanic rock. Obsidian is formed in the scorching crucible of volcanoes. A naturally-occurring glass, it is hard and brittle, and it comes in a variety of colors depending on its particular mineral composition, though it's usually black. Its most striking quality, though, is that it forms extremely sharp edges when chipped. The Aztecs and other Mesoamerican cultures took advantage of this and created intricate weapons using the glassy rock.
While stone weapons might sound primitive, their production and distribution was anything but. A recent study that looked at almost 800 obsidian pieces from the Aztec capital of Tenochtitlán has revealed that the versatile rock was brought there through an intricate trade network from far away. The researchers behind the study used portable X-ray fluorescence, which can identify the unique chemical composition of a given piece of obsidian, to figure out where each piece came from. Most of the obsidian used by the Aztecs appears to have been sourced from Sierra de Pachuca, a mountain range around 60 miles from their capital and beyond their borders. This implies that the Aztecs were willing to engage in long-distance trade to obtain the precious resource. For the Aztecs and other Mesoamerican cultures, obsidian wasn't just a material to be made into weapons; it was also prized for jewelry. Obsidian with green and gold coloration was particularly valued and was known as "obsidian of the masters." In the hands of expert craftsmen, the dangerous rocks could be transformed into delicate pieces worn by high-ranking individuals to show off their status. Obsidian was also used as inlays in sculptures and ceremonial weapons, with some pieces left as offerings for the dead to be buried with. At least the dead won't have to worry about accidentally cutting themselves.
[Image description: A piece of black obsidian on a wooden surface.] Credit & copyright: Ziongarage, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEScience Daily Curio #3093Free1 CQ
You know things are bad when one natural disaster is just the beginning. A village in Switzerland has been left devastated after a landslide, and it could be the first of many to come. On May 19, the small village of Blatten was evacuated after geologists warned of impending danger. Blatten, home to 300 residents, is located in an alpine valley overlooked by glaciers. According to the geologists, one of those glaciers was coming apart rapidly. Indeed, in just a matter of days, the Birch glacier completely disintegrated, sending chunks of ice and rock down the valley. Most of the village was destroyed directly by the landslide, and the rest was flooded soon after.
Landslides can happen for all sorts of reasons, like heavy rain, snowmelt, and erosion, but this one was caused entirely by the glacier's collapse. In turn, the glacier’s destruction was brought on by climate change, and similar catastrophes may await other alpine communities. In fact, another village, Brienz, was evacuated in 2023 as a precaution, and residents have only been allowed to return on a limited basis. Back in 2017, another village called Bondo was devastated by a similar landslide which claimed eight lives. While most of the residents of Blatten were able to make their way to safety with just one individual unaccounted for, it may be too soon to breathe a sigh of relief. The debris from the landslide could still cause flooding, further devastating the area. Scientists estimate that all of Switzerland's glaciers will disappear by the end of the century, but they're unlikely to go quietly—and that's the optimistic outlook. More and more climate experts are beginning to believe that the glacial thaw will only accelerate in coming years. The term "glacial pace" might need to be redefined.
[Image description: A red train traveling between mountains in Switzerland.] Credit & copyright: Wikimedia Commons, Sikander Iqbal (Siqbal). This work has been released into the public domain by its author, Siqbal, at the English Wikipedia project. This applies worldwide. -
FREEScience Daily Curio #3092Free1 CQ
Its communications are regular, but its location is awfully unusual. A newly-discovered cosmic object has astronomers puzzled, but finding out its identity might reveal new insights about the universe. Back in 2022, scientists coined the term long-period transient (LPT) for cosmic objects that emit light pulses on a regular basis. Since then, 10 more LPTs have been discovered, including ASKAP J1832-0911, perhaps the most unusual among them. Recently discovered by a team of astronomers from Curtin University working at the Australian Square Kilometre Array Pathfinder (ASKAP), ASKAP J1832-0911 appears to emit both radio waves and X-rays every 44 minutes, for two minutes at a time. The team studying the cosmic object discovered this phenomenon by happenstance while using NASA's Chandra X-ray telescope. Unlike ASKAP, which surveys a large swath of the sky at a time, Chandra only looks at a small portion. As luck would have it, Chandra just happened to be pointed at ASKAP J1832-0911 at the right time, when it was emitting its X-rays. For now, astronomers aren't sure just what this oddity is. According to the team of researchers, the object might be a magnetar, which is the core of a dead star known for its powerful magnetic fields. Another possibility is that it's a white dwarf, or a white dwarf and another type of object paired as a binary star system. Yet, the team admits that even these possibilities don't account for the unusual behavior of ASKAP J1832-0911. As the lead researcher, Andy Wang, put it, "This discovery could indicate a new type of physics or new models of stellar evolution." In space, not knowing something is sometimes more exciting than having all the answers.
[Image description: A starry night sky with a line of dark trees below.] Credit & copyright: tommy haugsveen, Pexels -
FREEWork Daily Curio #3091Free1 CQ
Sharpen your pencils and loosen your wrists—the blue book is back in school. With AI-based apps like ChatGPT allowing less-than-scrupulous students to prompt their way through exams and assignments, old-fashioned blue books (blue notebooks with lined paper that were once popular at colleges) are making a comeback. Most students today have never taken a hand-written exam, in which answers are meticulously jotted down as the clock ticks away. With the advent of word processors and affordable laptops, many institutions have moved their exams to the digital space, allowing students to type their answers much faster than they could scribble on paper. That would have been that, but in recent years AI has become equally accessible, and some educators fear that it’s impacting students’ ability to think for themselves. Now, those educators are going back to the old ways. For the last hundred years or so before the advent of laptops, hand-written exams were largely done on lined, bound paper booklets known as "blue books." Sales of blue books were actually declining until recently, but are now seeing an uptick.
Blue books are thought to have originated at Indiana’s Butler University in the 1920s, and were colored blue after the school’s color. Since then, the blue book format has been replicated by several manufacturers. However, the origins of standardized booklets in exams might date back even further. In the 1800s, Harvard University reportedly had its own booklets, though they weren't blue. Of course, not everyone is a fan of the modern blue book renaissance. Some educators believe that hurriedly-scribbled answers made under time constraints don't necessarily represent a student's understanding of a subject. Regardless of their pedagogical value, blue books may be here to stay, at least for a while. Pencils down!
[Image description: A dark blue pencil against a light blue background.] Credit & copyright: Author’s own illustration. Public domain. -
FREEMind + Body Daily CurioFree1 CQ
That’s a lot of zip for raw fish! Ceviche is one of the world’s best warm-weather dishes, and the perfect food to examine as summer approaches. Made with raw fish, ceviche hails from Peru, where it is considered the national dish and was even mentioned in the country’s first national anthem.
Ceviche is made from raw, chilled fish and shellfish marinated in lemon, lime, or sour orange juice. The juice also contains seasonings like chili, cilantro, and sliced onions. Ceviche is often served on a large lettuce leaf and topped with tomato slices or seaweed. It may be surrounded by boiled potatoes, yucca, chickpeas, or corn. Unlike sushi or sashimi, the fish and shellfish in ceviche taste as if they have been cooked, since the citrus marinade breaks down proteins in the meat.
Ceviche has ancient Peruvian roots. Evidence suggests that the Caral civilization, which existed around 5,000 years ago and is the oldest known civilization in the Americas, ate raw anchovies with various seasonings. Around 2,000 years ago, a group of coastal Peruvians called the Moche used fermented banana passionfruit juice to marinate raw fish. The famed Incan Empire also served raw fish marinated in fermented juices. Modern ceviche didn’t develop until at least the sixteenth century, when Spanish and Portuguese traders brought onions, lemons, and limes to the region—all of which are used in the modern version of the dish. For some time, ceviche was found mostly in coastal Peruvian towns and cities. As faster means of travel and better refrigeration techniques were developed, however, the dish's popularity surged throughout the entire country. By 1820, ceviche had become so common that it was even mentioned in La chica, a song considered to be Peru’s first national anthem.
In 2004, ceviche was declared a Cultural Heritage of Peru. Just four years later, the country’s Ministry of Production designated June 28th as Ceviche Day. It’s celebrated the day before Peru’s annual Fisherman’s Day, honoring those who make the nation’s thriving seafood culture possible. They’re sourcing national pride while being a source of it themselves.
[Image description: A white plate of ceviche surrounded by corn and other veggies.] Credit & copyright: Dtarazona, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide. -
FREEHumanities Daily Curio #3090Free1 CQ
Be careful calling someone a Neanderthal as an insult—you might actually be complimenting them. A team of Spanish archaeologists has announced the discovery of a fingerprint that suggests that Neanderthals were more artistically inclined than previously thought. At around 43,000 years old, the fingerprint in question was left by a Neanderthal on an unassuming granite pebble. The rock was originally discovered in 2022 at the San Lázaro rock shelter near Segovia, and at first, it wasn't clear just what the small, red dot on it was. After consulting geologists, the team found that the red color on the rock came from a pigment made of iron oxide and clay, while police forensics experts confirmed that the mark itself came from the tip of someone's finger. Although it doesn't look like much at a glance, the fingerprinted rock caught the team's attention for a number of reasons. Firstly, there was nothing else at the site that also had the red pigment on it, suggesting it was placed there deliberately after being sourced from another location. Secondly, the rock vaguely resembles a human face, and the dot just so happens to be where the nose should be. Thus, the archaeologists believe that whoever marked the rock did so to complete the face. It may sound far-fetched that a Neanderthal could make such a deliberate artistic statement, but more and more evidence suggests that they were capable of more artistic and symbolic expression than they used to be given credit for. As much as it may hurt the pride of their successors (Homo sapiens, also known as human beings), the Neanderthals may have beaten us to the punch when it comes to developing culture. Regardless of whether or not the red dot was an intentional creation, it is now officially the oldest human fingerprint ever found. How about a round of applause for the Paleolithic Picasso?
[Image description: A painting of a Neanderthal family by a cave, with a man holding a spear out front.] Credit & copyright: Neanderthal Flintworkers, Le Moustier Cavern, Dordogne, France, Charles Robert Knight (1874–1953). American Museum of Natural History, Public Domain. -
FREEUS History Daily Curio #3089Free1 CQ
What happens when you take the "mutually" out of "mutually assured destruction"? The answer, surprisingly, is a problem. The newly announced missile defense system dubbed the "Golden Dome" is drawing comparisons to President Ronald Reagan's Strategic Defense Initiative (SDI). While SDI was similar to the Golden Dome in many ways, the circumstances of its conception gave rise to a distinctly different set of issues.
As far as most Americans in the 1980s were concerned, the Cold War was a conflict without end. The U.S. and the Soviet Union were engaged in a morbid and seemingly inescapable mandate—that of mutually assured destruction (MAD). Both sides were armed with thousands of nuclear weapons ready to strike, set to launch in kind should either party decide to use them. In 1983, President Reagan proposed a way for the U.S. to finally gain the elusive upper hand. The plan was called the Strategic Defense Initiative (SDI), and it would have used satellites in space equipped with laser weaponry to shoot down any intercontinental ballistic missiles (ICBMs) launched by the Soviet Union.
Critics judged the plan to be infeasible and unrealistic, calling it "Star Wars" after the movie franchise of the same name. Indeed, the technology to make such a defense system didn’t exist yet. Even today, laser weaponry is mostly experimental in nature. Reagan’s plan also had the potential to be a foreign policy disaster. Whereas MAD had made the use of nuclear weapons forbidden by default, by announcing the SDI, the U.S. was announcing that it was essentially ready to take the "mutually" out of MAD. Thus, the very existence of the plan was seen as a sign of aggression, though the infeasible nature of the technology soon eased those concerns. There were also fears that successfully rendering nuclear weapons useless for one side would simply encourage an arms race of another kind. Ultimately, the SDI was scrapped by the 1990s, as the end of the Cold War reduced the incentive to develop such a system. We did end up getting more Star Wars movies though, so that's something.
[Image description: A blue sky with a single, white cloud.] Credit & copyright: Dinkum, Wikimedia Commons. Creative Commons Zero, Public Domain Dedication.