Curio Cabinet / Daily Curio
-
Mind + Body Daily Curio
You don’t have to like seafood to love these fish. Japan has plenty of beautiful foods, but none has gone quite as viral in recent years as taiyaki. Invented in the early 20th century, these fish-shaped cakes aren’t quite as ancient as some other Japanese dishes. The traditions surrounding them, however, go back a long way.
Taiyaki are fish-shaped cakes made from a batter similar to the kind used to make waffles. They are molded and cooked the same way waffles are, in specially-shaped irons. The cakes can be stuffed with all sorts of fillings. Anko, a sweet red bean paste, is traditional, but they can also be filled with jam, chocolate sauce, custard, or even ice cream.
Bean paste is central to taiyaki’s story. The cakes evolved from a street food called imagawayaki, a tall, round cake made from batter and stuffed with anko. Imagawayaki have been around since the An'ei era (1772 to 1781), in the middle of the Edo period. They were first sold in Tokyo’s Chiyoda ward, in a northeastern area called Kanda, near the Imagawabashi Bridge, from which they take their name. At first, the cakes were mainly sold at festivals, but they eventually became a popular street food.
It took quite a while for imagawayaki to morph into the modern taiyaki. The transformation was all thanks to one man: restaurant owner Seijirō Kobe. In 1909, his restaurant, Naniwaya Sōhonten (which still exists today), was having trouble selling enough imagawayaki. Kobe tried molding his cakes into different shapes to make them more appealing. When he finally decided to make fish-shaped imagawayaki, they sold like—well—hot cakes. The reason that fish took off when other shapes didn’t has a lot to do with Japanese cultural traditions. For one thing, they didn’t resemble just any fish: they were specifically made to look like red sea bream, which were very expensive for average people to buy, at the time. Buying one of Kobe’s cakes was an easy, tongue-in-cheek way to have a little taste of luxury. Then there’s the fact that red sea bream, or tai, are symbols of good fortune in Japan. It’s from them that taiyaki got its name.
Taiyaki soon became a hit in Tokyo, with other restaurants and vendors following Kobe’s lead. The treat had a pre-internet viral moment in 1979, when a catchy children’s song called Oyoge! Taiyaki-kun spread knowledge about the snack far beyond Tokyo. Taiyaki spread to Korea and Taiwan during periods of Japanese rule, and the snacks are a part of both countries’ culinary landscapes to this day. With the advent of the internet, taiyaki rose to international fame the same way that mochi and bubble tea did. Today, you can find taiyaki just about anywhere. In the U.S., it’s often sold at Japanese restaurants, international grocery stores, and convenience stores. Suffice it to say, it’s as popular as a humble fish could ever hope to be.
[Image description: Two fish-shaped waffle cakes, taiyaki, side by side against a white background.] Credit & copyright: Ocdp, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
World History Daily Curio #3122
It’s a wonder it took these wonders so long! The picturesque Neuschwanstein Castle, built by King Ludwig II of Bavaria in the 19th century, has been added to the UNESCO World Heritage List along with three other castles built by the architecture-obsessed monarch. Ludwig II was something of an odd duck, by his own admission. He is known to have told his governess, "I want to remain an eternal mystery to myself and others," and he truly has. Though he took the throne at 18, Ludwig II wasn’t a typical monarch of his time. While he didn’t shirk his royal duties, he was greatly interested in the arts, more so than anything else. He was also prone to isolating himself and indulging in a fantasy world of his own creation. For the last 11 years of his life, he slept during the day and worked and played at night. At times, he would dress up in historical costumes and ride around in a carriage. These eccentricities, however, were mild compared to his visions of fantastical castles that he wished to build.
During his reign (1864 to 1886), he commissioned the construction of four extravagant estates: Herrenchiemsee, King's House on Schachen, Linderhof Palace, and Neuschwanstein Castle. The first three were completed in his lifetime, but he died before seeing Neuschwanstein Castle finished. The designs of these estates were inspired by fairy tales and fantasy, and the king used them as personal retreats. Ludwig II was also a very private man, and he didn’t allow anyone inside his retreats while he was alive. Today, Neuschwanstein Castle alone receives 1.4 million visitors every year. If the shining white castle in the mountains looks familiar, that’s because it served as inspiration for another man obsessed with fairy tales and fantasies. Walt Disney apparently based the design of the castle in Sleeping Beauty on Neuschwanstein, making the animated castle an homage to an homage to something that never actually existed.
[Image description: A black-and-white illustration of a castle on elevated rocks.] Credit & copyright: Castle Neuschwanstein, George Percival Gaskell (British, 1868–1934), The Cleveland Museum of Art. Gift in memory of Paul H. Oppman Sr. from his family 1979.97. Public Domain, (CC0) designation. -
Science Daily Curio #3121
Is there anything that climate change can’t ruin? The library of a 1,000-year-old monastery in Hungary is fighting to save its books against the ravages of drugstore beetles, and the changing climate is partly to blame. The Pannonhalma Archabbey, a Benedictine monastery located in western Hungary, has a bit of a pest problem. Its 400,000-book library is under attack from a horde of Stegobium paniceum, more commonly known as drugstore beetles or bread beetles. The library doesn’t hold ordinary books; the monastery is a UNESCO World Heritage site and contains irreplaceable treasures that have been entrusted to its care since it was founded in 996 C.E. Unfortunately, the beetles aren’t interested in the historical value of the tomes they’re munching on. They’re actually attracted to the centuries-old books for the starch and gelatin used in their construction.
Library staff first noticed the presence of the beetles when they found a strange dust on some of the shelves. Upon closer inspection, they found several books with holes bored into the spines. It’s no coincidence that the beetles are only a problem for the books now, after a millennium of safe storage. Drugstore beetles favor dark, warm, undisturbed places while in their larval stage. A library housed in a 1,000-year-old building is perfect for them during the summer. As temperatures all over the world continue to rise each year, the likelihood of beetle infestations grows. Even at the best of times, beetles are hard to get rid of. Since they’re experts at burrowing and hiding, it’s easy to miss them, and any that remain can produce up to two generations in a year if the weather’s warm. For now, around 100,000 books have been removed from the monastery’s shelves, and they’re set to be placed in an all-nitrogen environment for six weeks, where the beetles will hopefully die from lack of oxygen. In the meantime, the library will be thoroughly inspected and cleaned, and restorers will do what they can to save any books that have already been damaged. At least we know they’ll do things by the book.
[Image description: A close-up photo of Stegobium paniceum, a golden-brown beetle with black eyes.] Credit & copyright: Francisco Welter-Schultes, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
Swimming Daily Curio #3120
When visiting Paris, there are a few must-dos: visiting the Louvre, going to the top of the Eiffel Tower, and now, going for a dip in the Seine. Once too polluted to swim in, the river Seine is finally ready to make a splash with the public—with a few restrictions. Beginning in 1923, the city of Paris banned swimming in the Seine…not that anyone was all that eager to break the law in the first place. While beautiful to look at, the river wasn’t the most sanitary body of water. Tens of thousands of homes in Paris, as well as the houseboats that floated along its surface, dumped their sewage directly into the river. Any toxic pollutants or waste that built up on the nearby streets washed into it every time it rained.
Still, for some, the dream of going for a leisurely dip in the Seine remained something worth pursuing at any cost. That cost, incidentally, was around 1.4 billion euros, or $1.5 billion. For the last several years, the massive cleanup project has been diverting and rearranging ancient sewer lines for homes and installing sewer hookup lines for houseboats to prevent further contamination of the Seine’s waters. A storage basin has even been constructed to catch runoff from the streets. While the cleanup effort was met with support from the city’s residents, some critics were skeptical about the project’s effectiveness. When the Olympics took place in Paris last year, several athletes fell ill after swimming in the Seine during the triathlon and open water races. President Emmanuel Macron also rescinded his promise to take a swim in the river before the Olympics as a show of trust, further spreading doubt about the cleanliness of the water.
Today, with cleanup seemingly complete, around 1,000 people a day will be allowed to swim at three designated areas. Authorities will also be performing daily water tests to monitor the water’s safety. While the limit on the number of swimmers remains low for a city of millions, there are plans to create more swimming sites along the Seine in the near future. Getting your toes wet is always an option…but we might wait for the first round of water test results first.
[Image description: The Seine river with two bridges and a cathedral visible.] Credit & copyright: Syced, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
Science Daily Curio #3119
Even in the world of paleontology, some things are only obvious in hindsight. A paleontologist recently discovered a new reptile species that lived around 145 million years ago, and he did it just by visiting a couple of museums. In the 1930s, an enterprising—albeit not particularly ethical—individual sold a fossil of an ancient reptile. That would have been fine, except for the fact that they’d cut the original fossil in two and sold the other half to another, unwitting buyer.
Decades later, along came paleontology student Victor Beccari, who visited the Senckenberg Natural History Museum in Frankfurt, Germany, and the Natural History Museum in London, U.K., each of which was displaying its half of the fossil. Thanks to Beccari’s extremely keen eye and good memory, the fossil halves have now been reunited. Beccari and his colleagues named the “newly” discovered reptile Sphenodraco scandentis, and published their findings in a scientific journal. According to Beccari, S. scandentis belonged to a group of reptiles called rhynchocephalians, of which only one extant species remains: the tuatara. Although the fossil clearly shows a reptile from the Late Jurassic period, it's not a dinosaur. Based on its skeletal structure, with a short body, long limbs, and long digits, Beccari and his colleagues believe that it was a tree-dwelling lizard.
As for why it took nearly a century for anyone to notice that the two fossil halves were related, the one at the Senckenberg Natural History Museum had been misidentified as belonging to Homoeosaurus maximiliani, another rhynchocephalian found in the same region in southern Germany. It’s also rare for a fossil to be split in two and for each half to retain parts of the skeleton. Usually, one half gets the skeleton and the other only gets an impression of it. No wonder the original buyers had no bones to pick—they thought they were getting it all.
[Image description: A fossil of Homoeosaurus maximiliani, a species related to the newly discovered Sphenodraco scandentis.] Credit & copyright: Daderot, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
Mind + Body Daily Curio
Don’t panic, these eggs aren’t satanic! From summer barbeques to holiday feasts, deviled eggs are widely beloved despite their odd name. These savory morsels have a surprisingly long history, popping up in cultures from ancient Rome to medieval Europe.
Deviled eggs are specially-prepared, hard-boiled eggs in which the yolk is scooped out, mixed with other ingredients, and then piped back into the egg white. Recipes vary, but the yolks are usually mixed with mayo and mustard, then topped with other spices and herbs like paprika or parsley. Deviled eggs can be prepared simply or elaborately since they can be topped with practically anything, from bacon to salsa to shrimp.
Eggs have been eaten as appetizers and side dishes for centuries. In ancient Rome, boiled eggs were eaten as finger food and dipped in spicy sauces. Spiciness was a hallmark of many early deviled-egg-like recipes from medieval Europe too. One recipe from 13th-century Spain called for mixing egg yolks with pepper and onion juice, among other ingredients, then piping it back into boiled egg halves before skewering the halves together with a pepper-topped stick.
As for the name “deviled,” it’s a culinary term that applies to more than just eggs. Deviled ham still exists today, as does deviled crab. The term came about in the 1700s as a way of describing heavily-spiced foods. Some food historians believe that the term had to do with the heat of the spices (the devil is known to like heat, after all). Others believe that “deviled” refers to the sinful or decadent nature of the dish, since spices and herbs were expensive and hard to obtain in the 1700s, especially in the American colonies. Either way, the name stuck, though some still prefer to call them stuffed eggs, dressed eggs, or even angel eggs. Hey, an elaborately-prepared egg by any other name still tastes just as good.
[Image description: Six deviled eggs with green garnishes on a wooden serving board.] Credit & copyright: Büşra Yaman, Pexels -
World History Daily Curio #3118
It’s time to go on the Thames. An annual event called Swan Upping takes place around this time in England each year, and as whimsical as it sounds, it’s really serious business. King Charles III has had many titles bestowed on him in his life, including Prince of Wales, Earl of Chester, Duke of Cornwall, Lord of the Isles, and Prince and Great Steward of Scotland. As the king of the U.K., he has yet another title: Seigneur of the Swans, or the Lord of the Swans. Of course, the king doesn’t dive into the River Thames himself. Instead, the King’s Swan Marker, wearing a red jacket and a white swan-feathered hat, leads a team of swan uppers, who row along the river in skiffs in search of swans and cygnets. The tradition dates back to the 12th century, when swans were considered a delicacy, primarily served at royal banquets and feasts. In order to ensure a sustainable population of swans to feast on, it was the crown’s duty to keep track of their numbers.
Swans aren’t really considered “fair game” nowadays, and it’s no longer legal to hunt them. However, they still face threats in the form of human intervention and environmental hazards, and the Thames just wouldn’t be the same without them. So, the practice of Swan Upping has transformed into a ceremonial activity mainly focused on conservation. When swan uppers spot a swan or cygnet, they yell, “All up!” They gather the cygnets, weigh them, determine their parentage, and mark them with a ring that carries an identification number unique to that individual. The birds are also given a quick examination for injuries or diseases before they’re released. Despite rumors, the king doesn’t actually own all the swans on the Thames, or in England for that matter. Only the unmarked swans on certain parts of the Thames technically belong to the king, while the rest are claimed by two livery companies and the Ilchester family, who operate a breeding colony of the birds. The swans’ owners don’t eat them, but instead use their ownership for conservation efforts. England’s swans might no longer be served at feasts, but they do get to have a taste of the good life.
[Image description: A swan floating on blue water.] Credit & copyright: Michael, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
World History Daily Curio #3117
Whether you win or lose this race, you’ll feel the pain in your feet. As part of Pride Week in Madrid, revelers have, for decades, been participating in the “Carrera de Tacones,” or the race of heels. Racers, most of them men, don high-heeled shoes and run through the city’s streets. The premise of the race is predicated on the footwear’s notoriously impractical and uncomfortable nature, but high heels were once considered much more than fashion accessories. In fact, they were worn by soldiers.
High heels were originally developed for horseback riding in Persia, which owed much of its military success to its mounted soldiers. The pronounced heels helped riders stabilize themselves on stirrups, allowing for greater control over their steeds. Although the earliest depiction of high heels dates back to the 10th century, it’s possible that they were used before then. Regardless, high heels were largely seen as military gear, and for centuries, they were associated with masculinity. Since horseback riding was usually an activity only available to those wealthy enough to own horses, high heels were also a status symbol, and they remained that way until around the first half of the 17th century. As horseback riding became more accessible to commoners, high heels lost their distinguishing appeal, at least for a while. Then, aristocrats in Europe began wearing shoes with increasingly higher heels as a display of wealth, since such footwear would be impractical for manual labor.
Around the same time, red dye was gaining favor as a sign of conspicuous consumption, and so red heels became popular. In the late 17th century, King Louis XIV of France was so enamored of the shoes as a status symbol, and so protective of them, that he only allowed members of his court to wear them. While high heels gradually fell out of favor with men, they became more and more popular with women in the 19th century as they, too, sought to wear impractical shoes that denoted their high status, distancing themselves from laborers. Today, some riding shoes still have more pronounced heels than most shoes, though not nearly to the degree they did in the past. Mainly, though, high heels are a fashion item regardless of social status, and they’ve earned such a reputation for being impractical that it’s considered novel to race in them. By the way, the race of heels takes place on cobblestone. Oh, those poor ankles!
[Image description: A pair of white, historical high-heeled shoes with pointy toes and yellow-and-green floral embroidery.] Credit & copyright: The Metropolitan Museum of Art, 1690-1700. Rogers Fund, 1906. Public Domain. -
Science Daily Curio #3116
We might need to redefine what qualifies as hardwood! Fig trees are known for their delicious fruit, but they may soon be useful as a means of carbon sequestration after scientists discovered that they can turn themselves into stone. It’s not exactly news that trees, like all living things, are made out of carbon. Compared to most organisms, though, trees are great at sequestering carbon. They turn carbon dioxide into organic carbon, which they then use to form everything from roots to leaves. Since trees live so long, they can store that carbon for a long time. That’s why, to combat climate change, it’s a good idea to plant as many trees as possible. It’s a win-win, since trees can also provide food and lumber to people and form habitats for other organisms. One tree, however, seems to be a little ahead of the curve. The Ficus wakefieldii is a species of fig tree native to Kenya, and scientists have found that it can turn carbon dioxide into calcium carbonate, which happens to be what makes up much of limestone. Apparently, other fig trees can do this to some extent, but F. wakefieldii was the best at it out of the three species studied.
The process is fairly simple. First, the trees convert carbon dioxide into calcium oxalate crystals, and when parts of the tree begin to naturally decay from age, bacteria and fungi convert the crystals into calcium carbonate. Much of the calcium carbonate is released into the surrounding soil, making it less acidic for the tree, but much of it is also stored in the tissue of the tree itself. In fact, scientists found that some specimens’ roots had been completely converted to calcium carbonate. Surprisingly, F. wakefieldii isn’t the only tree capable of doing this. The iroko tree (Milicia excelsa), also native to Africa, can do the same thing, though it’s only used for lumber. Fig trees, on the other hand, can produce food. Either way, carbon minerals can stay sequestered for much longer than organic carbon, so both species could one day be cultivated for that purpose. The real question is, if you wanted to make something from these trees’ wood, would you call a carpenter or a mason?
[Image description: A brown, slightly-split fig on a bonsai fig tree.] Credit & copyright: Tangopaso, Wikimedia Commons. -
Mind + Body Daily Curio #3115
It was a lot of work, but it had to get done. Once devastated by a water crisis, the city of Flint, Michigan, has now completely replaced all of its lead pipes. In 2014, the city switched its municipal water source to the Flint River, which was cheaper than piping in water from Lake Huron and should have been easy enough to do. The change was part of an ongoing effort to lower the city’s spending after it was placed under state control due to a $25 million deficit. An emergency manager had been assigned by the governor to cut costs wherever possible, and so city officials and residents had no say in the change. Problems quickly arose when those overseeing Flint failed to treat the river water, which was more acidic than the lake water. The water gradually corroded the protective coating that had formed inside the lead pipes during years of hard water use. Eventually, the coating disappeared completely and the acidic water began leaching lead from the pipes. The water was tested periodically by city officials…but not adequately. Water samples were taken after letting the tap run for a little while, allowing any built-up lead in the pipes to be washed out. By 2016, however, the effects of lead contamination were obvious. Residents were showing symptoms of lead poisoning, including behavioral changes, increased anxiety and depression, and cognitive decline. Overall, some 100,000 residents and 28,000 homes in and around the city were affected. Following a court decision later that year, residents were provided with faucet filters or water delivery services for drinking water, though these were only temporary solutions. The next year, a court decision forced the city to replace its 11,000 lead pipes. Now, almost 10 years later, the project is finally complete. Time to make a toast with tap water.
[Image description: The surface of water with slight ripples.] Credit & copyright: MartinThoma, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
Mind + Body Daily Curio
What does a fruit salad have to do with one of the world’s most famous hotels? More than you’d think! Waldorf salad is more than just a great choice for cooling down during summer; it’s an integral part of American culinary history. Developed at New York City’s famous Waldorf-Astoria hotel during the establishment’s golden age, this humble salad is a superstar…albeit a misunderstood one.
Modern Waldorf salad is usually made with chopped apples, mayonnaise, sliced grapes, chopped celery, and walnuts. Raisins are also sometimes added. Juice from the chopped apples melds with the mayonnaise during mixing, giving the salad a tangy, sweet flavor. Often, green apples and grapes are used, though some suggest using pink lady apples for a less pucker-inducing dish. Though Waldorf salad is fairly simple to make, it used to be even more so. The original recipe called for just three ingredients: apples, celery, and mayonnaise.
Unlike many other iconic foods, Waldorf salad’s history is well-documented. It was first served on March 13, 1896, at New York City’s Waldorf-Astoria by famed maître d'hôtel Oscar Tschirky. At the time, the Waldorf-Astoria was known as a hotel of the elite. Diplomats, movie stars, and other international celebrities frequently stayed there, and as such the hotel’s menus had to meet high standards and change frequently enough to keep guests interested. Tschirky was a master at coming up with simple yet creative dishes. He first served his three-ingredient Waldorf salad at a charity ball for St. Mary's Hospital, where it was an instant hit. It soon gained a permanent place on the hotel’s menu, and spread beyond its walls when Tschirky published The Cook Book, by "Oscar" of the Waldorf later that same year. Soon, Waldorf salad made its way onto other restaurant menus in New York City, and remained a regional dish for a time before spreading to the rest of the country. Naturally, the further from its birthplace the salad traveled, the more it changed. Regional variations that included grapes and walnuts eventually became the standard, though no one is quite sure how. What’s wrong with teaching an old salad new tricks?
[Image description: A pile of green apples with some red coloring in a cardboard box.] Credit & copyright: Daderot, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
World History Daily Curio #3114
These are some not-so-fresh kicks. Archaeologists in England have unearthed 2,000-year-old pairs of Roman shoes, and they’re some of the best preserved footwear from the era. The researchers were working at the Magna Roman Fort in Northumberland, located near another ancient Roman fort called Vindolanda, when they made the discovery. Many famous artifacts have been unearthed at Vindolanda, including wooden writing tablets and around 5,000 pairs of ancient Roman shoes. The Magna site, it seems, is literally following in those footsteps, with 32 shoes found so far preserved in the fort’s “ankle-breaker” trenches. Originally designed to trip and injure attackers, the trenches ended up being a perfect, anaerobic environment to preserve the shoes.
Roman shoes were made with hand-stitched leather, and many were closed-toed as opposed to the sandals often portrayed in popular media (in fact, sandals were only worn indoors). The ancient Romans were actually expert shoemakers, and their footwear contributed greatly to their military success. Most Roman soldiers wore caligae, leather boots consisting of an outer shell cut into many strips that allowed them to be laced up tightly. Replaceable iron hobnails on the soles helped the boots last longer and provided traction on soft surfaces. These boots were eventually replaced with completely enclosed ones called calcei, but the caligae have left a greater impression on the perception of Roman culture. That’s probably thanks to Caligula, the infamous Roman emperor whose real name was Gaius. When Gaius was a child, he accompanied his father on campaign in a set of kid-sized legionary gear, including the caligae. The soldiers then started calling him “Caligula,” which means “little boots.” Unfortunate, since he had some big shoes to fill as the third emperor of Rome.
[Image description: A detailed, black-and-white illustration of two elaborately-dressed ancient Roman soldiers looking at one another.] Credit & copyright: The Metropolitan Museum of Art, Two Roman Soldiers, Giovanni Francesco Venturini, 17th century. Bequest of Phyllis Massar, 2011. Public Domain. -
Mind + Body Daily Curio #3113
It’s not always good to go out with a bang. Heart attacks were once the number one cause of death in the world, but a recent study shows that the tides are changing. In the last half-century or so, the number of heart attacks has been in sharp decline. Consider the following statistic from Stanford Medicine researchers: a person over the age of 65 admitted to a hospital in 1970 had just a 60 percent chance of leaving alive, and the most likely cause of death would have been an acute myocardial infarction, otherwise known as a heart attack. Since then, the numbers have shifted drastically. Heart disease used to account for 41 percent of all deaths in the U.S., but that number is now down to 24 percent. Deaths from heart attacks, specifically, have fallen by an astonishing 90 percent. There are a few reasons for this change, the first being that medical technology has simply advanced, giving doctors better tools with which to help their patients, including better drugs. Another reason is that more people have become health-conscious, eating better, exercising more, and smoking less. Younger Americans are also drinking less alcohol, which might continue to improve the nation’s overall heart health. More people know how to perform CPR now too, and those who don’t can easily look it up within seconds thanks to smartphones. This makes cardiac arrest itself less deadly than it once was. Nowadays, instead of heart attacks, more people are dying from chronic heart conditions. That might not sound like a good thing, but it’s ultimately a positive sign. As the lead author of the study, Sara King, said in a statement, “People now are surviving these acute events, so they have the opportunity to develop these other heart conditions.” Is it really a trade-off if the cost of not dying younger is dying older?
[Image description: A digital illustration of a cartoon heart with a break down the center. The heart is maroon, the background is red.] Credit & copyright: Author-created image. Public domain. -
FREEBiology Daily Curio #3112Free1 CQ
The Earth is teeming with life and, apparently, with “not-life” as well. Scientists have discovered a new type of organism that appears to defy the standard definition of “life.” All living things are organisms, but not all organisms are living. Take viruses, for instance. While viruses are capable of reproducing, they can’t do so on their own. They require a host organism to perform the biological functions necessary to reproduce. Viruses also can’t produce energy on their own or grow, unlike even simple living things like bacteria. Now, there’s the matter of Sukunaarchaeum mirabile. The organism was discovered by accident by a team of Canadian and Japanese researchers who were looking into the DNA of Citharistes regius, a species of plankton. When they noticed a loop of DNA that didn’t belong to the plankton, they took a closer look and found Sukunaarchaeum. In some ways, this new organism resembles a virus. It can’t grow, produce energy, or reproduce on its own, but it has one distinct feature that sets it apart: it can produce its own ribosomes, messenger RNA, and transfer RNA. That ability makes it more like a bacterium than a virus.
Then there’s the matter of its genetics. Sukunaarchaeum, it seems, is a genetic lightweight with only 238,000 base pairs of DNA. Compare that to a typical virus, which can range from 735,000 to 2.5 million base pairs, and the low number really stands out. Nearly all of Sukunaarchaeum’s genes are made to work toward the singular goal of replicating the organism. In a way, Sukunaarchaeum appears to be somewhere between a virus and a bacterium in terms of how “alive” it is, indicating that life itself exists on a spectrum. In science, nothing is as simple as it first appears.
-
FREEAstronomy Daily Curio #3111Free1 CQ
Don’t hold your breath for moon dust. Long thought to be toxic, moon dust may actually be relatively harmless compared to what’s already here on Earth, according to new research. While the dusty surface of the moon looks beautiful and its name sounds like a whimsical ingredient in a fairy tale potion, it was a thorn in the side of lunar explorers during the Apollo missions. NASA astronauts who traversed the moon’s dusty surface reported symptoms like nasal congestion and sneezing, which they began calling “lunar hay fever.” They also reported that moon dust smelled like burnt gunpowder, and while an unpleasant smell isn’t necessarily bad for one’s health, it couldn’t have been comforting. These symptoms were likely caused by the abrasive nature of moon dust particles, which are never smoothed out by wind or water the way they would be on Earth. The particles are also small, so they’re very hard to keep out of spacesuits and away from equipment. Then there’s the matter of the moon’s low gravity, which allows moon dust to float around for longer than it would on Earth, making it more likely to penetrate spacesuits’ seals and be inhaled into the lungs. There, like asbestos, the dust can cause tiny cuts that can lead to respiratory problems and even cancer…at least, that’s what everyone thought until recently. Researchers at the University of Technology Sydney (UTS) just published a paper claiming that moon dust might not be so dangerous after all. They believe that the dust will likely cause short-term symptoms without leading to long-term damage. Using simulated moon dust and human lung cells, they found that moon dust was less dangerous than many air pollutants found on Earth. For instance, silica (typically found on construction sites) is much more dangerous, as it can cause silicosis by lingering in the lungs, leading to scarring and lesions. Astronauts headed to the moon in the future can breathe a sigh of relief—but it may be safer to wait until they get there.
[Image description: A moon surrounded by orange-ish hazy clouds against a black sky.] Credit & copyright: Cbaile19, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEMind + Body Daily CurioFree1 CQ
Happy Fourth of July! This year, we’re highlighting a food that’s as American as apple pie…actually, much more so. Chicken and waffles is a U.S.-born soul food staple, but exactly where, when, and how it developed is a source of heated debate.
Chicken and waffles is exactly what its name implies: a dish of waffles, usually served with butter and maple syrup, alongside fried chicken. The chicken is dredged in seasoned flour before cooking, and the exact spices used in the dredge vary from recipe to recipe. Black pepper, paprika, garlic powder, and onion powder are all common choices. The exact pieces of chicken served, whether breast meat, wings, or thighs, also vary. Sometimes, honey is substituted for syrup.
The early history of chicken and waffles is shrouded in mystery. Though there’s no doubt that it’s an American dish, there are different stories about exactly how it developed. Some say that it came about in Jazz Age Harlem, when partiers and theater-goers stayed out so late that they craved a combination of breakfast and dinner foods. This story fits with chicken and waffles’ modern designation as soul food, since Harlem was largely segregated during the Jazz Age, and soul food comes from the culinary traditions of Black Americans. Still, others say that the dish was actually made famous by founding father Thomas Jefferson, who popularized waffles after he purchased waffle irons (which were fairly expensive at the time) from Amsterdam in the 1780s. Another story holds that the Pennsylvania Dutch created chicken and waffles based on German traditions.
Though we’ll never know for certain, it’s likely that all three tales are simply parts of a larger story. Dutch colonists brought waffles to the U.S. as early as the 1600s, where they made their way into the new culinary traditions of different groups of European settlers. This included the “Pennsylvania Dutch,” who were actually from Germany, where it was common to eat meat with bread or biscuits to sop up juices. They served waffles with different types of meat, including chicken with a creamy sauce. Thomas Jefferson did, indeed, help to popularize waffles, but it was the enslaved people who cooked for him and other colonists who changed the dish into what it is today. They standardized the use of seasoned, sometimes even spicy, fried chicken served with waffles, pancakes, or biscuits. After the Civil War, chicken and waffles fell out of favor with white Americans, but was still frequently served in Black-owned restaurants, including well-known establishments in Harlem and in Black communities throughout the South. For centuries, the dish was categorized as Southern soul food. Then, in the 1990s, chicken and waffles had a sudden surge in nationwide popularity, possibly due to the rise of food-centric TV and “foodie” culture. Today, it can be found everywhere from Southern soul food restaurants to swanky brunch cafes in northern states. Its origins were humble, but its delicious reach is undeniable.
[Image description: Chicken wings and a waffle on a white plate with an orange slice.] Credit & copyright: Joost.janssens, Wikimedia Commons. This work has been released into the public domain by its author, Joost.janssens at English Wikipedia. This applies worldwide. -
FREESTEM Daily Curio #3110Free1 CQ
When the fungi kicked ash, the ash started fighting back. For over a decade, ash trees in the U.K. have been under threat from a deadly fungus. Now, the trees appear to be developing a resistance. No matter where they grow, ash trees just can’t seem to catch a break. Invasive emerald ash borers started devastating ash trees in North America in the 1990s. Then, around 30 years ago, the fungus Hymenoscyphus fraxineus arrived in Europe, making its way through the continent one forest at a time. It finally reached the U.K. in 2012. H. fraxineus is native to East Asia and is the cause of chalara, also called ash dieback. It’s particularly devastating to Fraxinus excelsior, better known as European ash, and it has already reshaped much of the U.K.’s landscape. While the fungus only directly kills ash trees, it presents a wider threat to the overall ecology of the affected areas. H. fraxineus also poses an economic threat, since ash lumber is used for everything from hand tools to furniture.
When not being felled by fungus or bugs, ash trees are capable of growing in a wide range of conditions, creating a loose canopy that allows sunlight to reach the forest floor. That, in turn, encourages the growth of other vegetation. A variety of insect species and lichen also depend on ash trees for survival. Luckily, for the past few years, researchers have been seeing a light at the end of the fungus-infested tunnel. Some ash trees have started showing signs of fungal resistance, and a genetic analysis has now revealed that the trees are adapting at a faster rate than previously thought. If even a small percentage of ash trees become fully immune to the fungus, it may be just a matter of time before their population is replenished. Ash trees are great at reproducing, as they’re each capable of producing around 10,000 seeds that are genetically distinct from each other. That also means that ash trees may be able to avoid creating a genetic bottleneck, even though their population has sharply declined due to dieback. Still, scientists estimate around 85 percent of the remaining non-immune ash trees will be gone by the time all is said and done. It’s darkest before the dawn, especially in an ash forest.
[Image description: An upward shot of ash tree limbs affected with dieback disease against a blue sky. Some limbs still have green leaves, others are bare.] Credit & copyright: Sarang, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide. -
FREEEngineering Daily Curio #3109Free1 CQ
They’re turning greenhouse gases into rocky masses. A London-based startup has developed a device that can not only reduce emissions from cargo ships, but turn them into something useful. Cargo ships, as efficient as they are in some ways, still produce an enormous amount of emissions. In fact, they account for roughly three percent of all greenhouse gas emissions globally. Reducing their emissions even a little could have a big environmental impact, and there have been efforts to develop wind-based technology to reduce fuel consumption as well as alternative fuels. In the case of the startup Seabound, the approach is to scrub as much carbon from cargo ship exhaust as possible. Their device is the shape and size of a standard shipping container and can be retrofitted onto existing ships. Once in place, it’s filled with quicklime pellets, which soak up carbon from the ship’s exhaust. By the time the exhaust makes it out to the atmosphere, 78 percent of the carbon and 90 percent of the sulfur have been removed from it. The process also converts the quicklime back into limestone, sequestering the carbon.
Similar carbon scrubbing technology is already in use in some factories, so the concept is sound, but there are some downsides. The most common method of quicklime production involves heating limestone to high temperatures, which releases carbon from the limestone and creates emissions from the energy required to heat it. There are greener methods to produce quicklime, but supply is highly limited for the time being. In addition, the process requires an enormous quantity of quicklime, reducing the overall cargo capacity of the ships. Meanwhile, some critics believe that such devices might delay the development and adoption of alternatives that could lead to net zero emissions for the shipping industry. It’s not easy charting a course for a greener future.
[Image description: A gray limestone formation in grass photographed from above.] Credit & copyright: Northernhenge, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREERunning Daily Curio #3108Free1 CQ
They’re more than sneakers—they’re a tribute. In commemoration of the 45th anniversary of Terry Fox’s “Marathon of Hope,” Adidas will soon bring back the very shoes he wore during his run across Canada. The blue-and-white-striped shoes were worn by Fox in 1980 when he embarked on a journey that would go on to inspire millions. At the time, though, no one was looking at his shoes. Born on July 28, 1958, in Winnipeg, Manitoba, Fox was diagnosed with osteogenic sarcoma in 1977 at the age of 18. The disease didn’t claim his life then, but Fox lost his right leg just above the knee. By 1979, Fox had mastered the use of his artificial limb and completed a marathon, but he was determined to do more. Fox was driven by his personal experiences with cancer, including his time in the cancer ward. He believed that cancer research needed more funding, and he came up with the idea to run across Canada to raise awareness.
Fox started his marathon on April 12, 1980, by dipping his prosthetic leg in the Atlantic Ocean, and in the first days of his journey, he attracted little attention. For months, Fox ran at a pace averaging 30 miles a day, and his persistence paid off. Over time, more and more people rallied behind Fox and began to stand along his route to cheer him on. Then, after over 3,300 miles, Fox started suffering from chest pains. The culprit was his cancer, which had spread to his lungs and forced him to stop his marathon prematurely. Fox passed away the following year on June 28, and though he never managed to reach the Pacific side of Canada, he accomplished something more. He surpassed his goal of $24 million CAD, the equivalent of $1 from every single Canadian at the time. Fox also became a national hero for his dedication, and is the youngest Canadian ever to be made a Companion of the Order of Canada, the country’s highest civilian honor. Since his passing, the Terry Fox Foundation has raised a further $850 million CAD, and a statue in his honor stands in Ottawa, Ontario. A true hero of the Great White North.
[Image description: A statue of Terry Fox running, with another wall-like memorial behind it. In the background is a building and trees.] Credit & copyright: Raysonho, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEScience Daily Curio #3107Free1 CQ
Beware the pharaoh’s… cure? A deadly fungus that once “cursed” those who entered the tomb of King Tutankhamun has been engineered into a treatment for cancer by researchers at the University of Pennsylvania. When a team of archaeologists opened up King Tutankhamun’s fabled tomb back in 1922, they couldn’t have known about the terrible fate they had been dealt. One by one, those who entered the tomb died from an unknown illness. Then, in the 1970s, a similar string of tragedies befell those who entered the 15th-century tomb of King Casimir IV in Poland. One such incident might have been dismissed as an unfortunate accident, but two meant that there was something else at play. Despite speculation about ancient curses, the likely culprit was found to be a fungus called Aspergillus flavus. It’s capable of producing spores that can stay alive seemingly indefinitely, and the spores contain toxins that are deadly when inhaled by humans. As they say, though, it’s the dose that makes the poison. In this case, the proper dose can instead be a cure. Researchers studying the deadly toxins within the fungal spores found a class of compounds called RiPPs (ribosomally synthesized and post-translationally modified peptides), which are capable of killing cancer cells. Moreover, the compounds seem to be able to target only cancer cells without affecting healthy ones. That’s a huge improvement over conventional treatments like chemotherapy, which can harm healthy cells as much as they harm cancerous ones. The compounds can also be enhanced by combining them with lipid molecules like those found in royal jelly (the nutrient-rich secretion that worker bees feed to developing queens), making it easier for them to pass through cell membranes. Fungus and royal jelly coming together to cure cancer? Sounds like a sweet (and savory) deal.
[Image description: A petri dish containing a culture of the fungus Aspergillus flavus against a black background. The fungus appears as a white-ish circle.] Credit & copyright: CDC Public Health Image Library, Dr. Hardin. This image is in the public domain and thus free of any copyright restrictions.