Curio Cabinet
March 12, 2025
-
Biology Nerdy Curio
Turns out, unicorns are real—they’ve been hanging out in the ocean this whole time. Narwhals, sometimes called the “unicorns of the sea,” are some of the most unusual animals on Earth, but they’re also extremely elusive. In fact, until recently, there was little consensus on what narwhals used their long, horn-like tusks for. Now, drones have finally captured footage of narwhals using their tusks for hunting and play. The footage was captured thanks to researchers at Florida Atlantic University’s Harbor Branch Oceanographic Institute and Canada’s Department of Fisheries and Oceans, in partnership with Inuit communities in Nunavut in Canada’s High Arctic. The narwhals used their tusks to “steer” prey fish, like Arctic char, in favorable directions and even to hit and stun the fish. They also used their tusks to prod and shake various things in their environment, behavior that researchers described as “exploratory play.”
Narwhals’ “horns” aren’t horns at all, but tusks. A narwhal’s tusk begins as a canine tooth (usually the upper left) that eventually grows through the upper lip. However, not all narwhals end up with tusks. Some males never grow them for unknown reasons, and only about 15 percent of female narwhals do. Narwhal tusks can reach lengths of up to 10 feet. That’s more than half the length of an adult male’s body, which can reach 15.7 feet and weigh more than 3,500 pounds. Narwhals come by their large size naturally, as they’re members of the Monodontidae family, which also includes belugas. They’re also relatives of larger whales like right whales, sperm whales, and blue whales, the latter of which are the largest animals that have ever lived on Earth.
Like most whales, narwhals live in pods, or groups, of up to 10 individuals. Females, calves, and young males form pods together, while sexually mature males have pods of their own. Narwhals are also migratory, meaning that they spend different parts of the year in different places. In the summer, they spend their time in Arctic bays and fjords, but as thick sea ice forms in the fall, they migrate to deeper Arctic waters. Most narwhals spend the winter between Canada and Greenland, in areas like Baffin Bay. When narwhals return to shallower, coastal waters in the spring, they also begin searching for mates. While male narwhals have never been observed fighting for mates, they do display behavior called “tusking,” in which two males raise their tusks out of the water and lay them against each other, probably to determine which male is larger. Whichever male “wins” the contest will go on to mate with nearby females. Narwhals give birth to just one calf at a time.
Unfortunately, narwhals' low birth rate makes it difficult for their numbers to recover after disasters like ocean storms or oil spills. Luckily, narwhals are not currently considered endangered, but as climate change continues to affect the Arctic waters they call home, they may have difficulty adapting to a warming world. That’s not very cool for these unicorns of the sea.
[Image description: A black-and-white illustration of a narwhal diving through water.] Credit & copyright: Archives of Pearson Scott Foresman, donated to the Wikimedia Foundation. This work has been released into the public domain by its author, Pearson Scott Foresman. This applies worldwide.
-
US History Daily Curio #3045
This was one march that could turn on a dime. March of Dimes recently appointed a new CEO, making this the perfect time to look back on the nonprofit’s impressive history of more than 85 years. Not only did the organization play a major role in helping to eradicate polio, it has also pivoted and widened its scope several times throughout its history. Initially named the National Foundation for Infantile Paralysis, March of Dimes was founded by President Franklin D. Roosevelt to combat polio. Infantile paralysis was another name for polio at the time, and the president himself relied on a wheelchair for much of his life due to his own bout with polio in 1921. The catchy name for the organization, March of Dimes, came from a campaign asking the public to send money to the White House in pursuit of a cure. Popular comedian Eddie Cantor, who helped come up with the idea for the campaign, suggested that they call it “The March of Dimes,” a play on the name of a newsreel at the time, “The March of Time.” Much of the money sent to the White House was indeed in the form of dimes and was used to fund research headed by Jonas Salk, the developer of one of the first successful polio vaccines. The campaign was heavily promoted on the radio, and its success helped develop the fundraising model used by other medical nonprofits today.
By the mid-1950s, Salk had successfully developed a polio vaccine, and by the end of the 1970s, polio was all but eradicated in the United States. Sadly, Roosevelt passed away in 1945, before he could see the creation of the vaccine. In 1979, the organization officially changed its name to the March of Dimes Foundation. After fulfilling its original mission, March of Dimes also diversified its efforts. The organization continues to fund research into various childhood diseases while lobbying for better access to prenatal care. As part of its mission statement, March of Dimes acknowledges that the U.S. has some of the highest infant and maternal mortality rates among developed nations. Clearly, the march is far from over.
[Image description: A jar of coins with some coins sitting beside it.] Credit & copyright: Miguel Á. Padriñán, Pexels
March 11, 2025
-
Engineering Daily Curio #3044
It’s a good thing your eye comes with a spare. Researchers at the Dana-Farber Cancer Institute, Massachusetts Eye and Ear, and Boston Children’s Hospital have found a way to repair previously irreversible corneal damage in one eye using stem cells from a person’s remaining healthy eye.
Along with the lens, an eye’s cornea plays a critical role in focusing light. Damage to the cornea, whether from injury or disease, can permanently impair vision or even lead to blindness. The cornea also protects the eye behind it by keeping out debris and germs. Since the cornea is literally at the front and center of the eye, however, it is particularly vulnerable to damage from the very things it’s designed to protect against. Unfortunately, damage to the cornea is notoriously difficult to treat.
Now, researchers have managed to extract what they call cultivated autologous limbal epithelial cells (CALEC) from a healthy eye to restore function to a damaged cornea. The process works like this: CALEC is extracted via a biopsy from the healthy eye and then placed in a cellular tissue graft, where it takes up to three weeks to grow. Once ready, the graft is transplanted into the damaged eye to replace the damaged cornea. One of the researchers, Ula Jurkunas, MD, said in a press release, “Now we have this new data supporting that CALEC is more than 90% effective at restoring the cornea’s surface, which makes a meaningful difference in individuals with cornea damage that was considered untreatable.” Still, as they say, an ounce of prevention is worth a pound of cure. Common causes of corneal injury include damage from foreign objects entering the eye during yard work or while working with tools or chemicals. Too much exposure to UV rays can also damage the cornea, as can physical trauma during sports. The best way to prevent these injuries, of course, is to use eye protection. Even if they can fix your cornea, having someone poke around inside your one good eye should probably remain a last resort.
[Image description: An illustrated diagram of the human eye from the side, with labels.] Credit & copyright: Archives of Pearson Scott Foresman, donated to the Wikimedia Foundation. This work has been released into the public domain by its author, Pearson Scott Foresman. This applies worldwide.
March 10, 2025
-
Art Appreciation Art Curio
Well, there’s something you don’t see every day. Figure of a Monkey on a Dog is a sculpture that depicts exactly what its title implies: a monkey, dressed in a full outfit of pants, shirt, vest, and hat, riding atop a dog as if the latter animal were a horse. The dog wears two large saddlebags. While one might assume that this sculpture is simply the imaginative work of one whimsical artist, its history actually runs a lot deeper. It was part of a satirical art genre called “singeries,” or “monkey tricks” in French. After French artist Claude III Audran painted a picture of monkeys dressed in human clothes and seated at a table in 1709, other artists took up the same motif. In a movement that lasted through most of the 18th century, French artists painted and sculpted monkeys dressed in finery, engaging in all sorts of human activities, from drinking wine to dancing to playing cards. Too bad for the dog in this sculpture that canines weren’t afforded the same honor as monkeys in singeries. He ended up a beast of burden rather than a dapper dog!
Figure of a Monkey on a Dog, Manufactured by Villeroy Factory, c. 1745, soft-paste porcelain with enamel decoration, 6.25 in. (15.9 cm.), The Cleveland Museum of Art, Cleveland, Ohio
[Image credit & copyright: Manufactured by Villeroy Factory, c. 1745. The Cleveland Museum of Art, Gift of Rosenberg & Stiebel, Inc. 1953.269. Public Domain, Creative Commons Zero (CC0) designation.]
-
Literature Daily Curio #3043
She might be gone, but her work lives on! Pulitzer Prize-winning American author Harper Lee published only two books before passing away in 2016: 1960’s To Kill A Mockingbird and 2015’s Go Set a Watchman. Now, in a great surprise to fans, a collection of short stories that Lee wrote prior to 1960 is set to be published by Harper, an imprint of Harper Collins, this October.
The stories will be part of The Land of Sweet Forever: Stories and Essays, which will also include eight of Lee’s nonfiction pieces printed in various publications throughout her life. Some of the collection’s short stories draw upon themes that are also present in To Kill A Mockingbird and include elements inspired by her own life in Alabama, where she grew up, and New York City, where she moved in 1949 and lived part-time for around 40 years. Ailah Ahmed, publishing director of the new book’s UK publisher, Hutchinson Heinemann, told The Guardian that the stories “...will prove an invaluable resource for anyone interested in Lee’s development as a writer.”
A famously private author, Lee wrote unflinchingly about the racism that plagued the Deep South during the 1930s in To Kill A Mockingbird. The empathetic voice of Scout Finch, the book’s child narrator, offers some hope for a better future throughout an otherwise somber tale. Lee’s willingness to portray Atticus Finch, a white lawyer, fighting for the rights of Tom Robinson, a Black man unjustly accused of rape, showcases the idea that bravery and empathy are the ultimate antidotes to prejudice, even if injustice ultimately wins the day, as it does in the story. No doubt Lee’s fans will relish the chance to glimpse into the author’s past, to a time before To Kill A Mockingbird forever changed America’s literary landscape. Short stories like these just don’t come along every day.
[Image description: A stack of books without titles visible.] Credit & copyright: Jess Bailey Designs, Pexels
March 9, 2025
-
US History PP&T Curio
Gangway! This Civil War battle didn’t take place on horseback, but on ships. While naval battles usually come to mind in relation to the World Wars, they were also part of the Revolutionary War and the American Civil War. In fact, the Battle of Hampton Roads, which ended on this day in 1862, was the first American battle involving ironclad warships.
Just a few days after the outbreak of the Civil War on April 12, 1861, President Lincoln ordered a blockade of all major ports in states that had seceded from the Union, including those around Norfolk, Virginia. Union leaders at the Gosport Navy Yard in Portsmouth, Virginia, soon got word that a large Confederate force was on its way to claim control of the area. The Union thus burned parts of the naval yard and several of their own warships to prevent them from falling into Confederate hands. Among them was the USS Merrimack, a type of steam-powered warship known as a steam frigate. The ship was also a screw frigate, as it was powered by screw propellers, making it quite agile for its time. When the ship was set ablaze, it only burned to the waterline. The bottom half of the Merrimack, which included its intact steam engines, sank beneath the surface at the navy yard. Union troops then retreated, and the Confederacy took over the area.
The Confederacy now controlled the south side of an area called Hampton Roads. This was a roadstead, or place where boats could be safely anchored, positioned where the Elizabeth, Nansemond, and James rivers met before flowing into Chesapeake Bay. Determined to destroy the Union blockade that had cut them off from trade, the Confederates began pulling up remnants of recently burned Union ships, including the Merrimack. Since the blockade included some of the Union’s most powerful ships, the Confederacy rebuilt the Merrimack as an ironclad warship, fitting an iron ram onto her prow and rebuilding her formerly wooden upper deck with an iron-covered citadel that could mount ten guns. This new ship was named the CSS Virginia.
Word of the CSS Virginia caused something of a panic amongst Union officers, and they quickly got permission from Congress to begin construction of their own ironclad warship. The vessel was the brainchild of Swedish engineer John Ericsson, and included novel elements like a rotating turret with two large guns, rather than many small ones. They named their ship the USS Monitor.
The Battle of Hampton Roads began on the morning of March 8, 1862, when the CSS Virginia made a run for the Union’s blockade. Although several Union ships fired on the advancing Virginia, most of their gunfire bounced off her armor. The Virginia quickly rammed and sank the Cumberland, one of the five main ships in the blockade, though doing so broke off Virginia’s iron ram. Virginia then forced the surrender of another Union ship, the Congress, before firing upon it with red-hot cannonballs, setting it ablaze. Already, more than 200 Union troops had been killed, while the Virginia had lost only two crewmen. As night fell and visibility waned, the ship retreated to wait for daylight.
The Union quickly dispatched the Monitor to meet Virginia the next day. When the Confederates headed for the Minnesota, a grounded Union ship, Monitor rushed in to block her path. The two ironclads fired at one another, and continued to do so for most of the day, each finding it difficult to pierce the other’s armor. At one point, Virginia ran aground, but was able to free herself just in time to avoid being destroyed. At another point, Monitor’s captain, Lieutenant John L. Worden, was temporarily blinded when his ship’s pilot house was struck by a shell. Monitor was thus forced to retreat, but neither ship was damaged badly enough to be rendered incapable of fighting, so the battle ended inconclusively. Both sides claimed victory, but with the Union blockade still intact, the Confederacy hadn’t gained much ground. Eventually, the Confederacy was forced to destroy their own ship when they abandoned Norfolk, to prevent Virginia from falling into enemy hands. The Monitor sank in late 1862 when she encountered high waves while attempting to make her way to North Carolina. A pretty unimpressive end for such inventive ships.
[Image description: A painting depicting the Battle of Hampton Roads. Soldiers on horses look down a hill over a naval battle with ships on fire.] Credit & copyright: Kurz & Allison Art Publishers, 1889. Library of Congress Prints and Photographs Division Washington, D.C. 20540 USA. Public Domain.
-
Work Business Curio (7 min)
Hundreds of billions of dollars have flowed into cryptocurrency markets in the past 24 hours or so after President Donald Trump named five digital tokens to ...
March 8, 2025
-
Work Business Curio (6 min)
It’s the mother of all economic numbers: GDP, or gross domestic product. But U.S. Commerce Secretary Howard Lutnick says he wants to take government spending...
-
Sports Sporty Curio
Just keep jumping…and jumping…and jumping! That seems to be the motto of Swedish pole-vaulter Armand Duplantis, who recently broke his 11th world record. Duplantis first broke the men’s pole vaulting world record in February 2020, when he jumped 6.17 meters (20.24 feet)—one centimeter higher than the previous record, which was set in 2014. After that, Duplantis broke the record by one centimeter ten more times, most recently at the All Star Pole Vault in Clermont-Ferrand, France on February 28. His newest jump was an astonishing 6.27 meters (20.57 feet). Duplantis is also a two-time Olympic gold medalist, but that’s not his most unusual achievement. This pole-vaulter is in the midst of launching a music career. In fact, his debut single, Bop, a Latin-inspired pop song, was released on the same day as his latest jump. It was played in the stadium as he broke the world record for the 11th time. Talk about multi-talented!
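For readers who want to see the record math spelled out, here is a minimal sketch in Python. The values come straight from this article, not from an official athletics database, so treat it as an arithmetic check rather than a record log.

```python
# A quick check of the record progression described above.
first_record_m = 6.17      # Duplantis's first world record, February 2020
one_cm = 0.01              # each subsequent record added one centimeter
further_breaks = 10        # ten more record-breaking jumps after the first

latest_record_m = first_record_m + further_breaks * one_cm
print(f"Latest record: {latest_record_m:.2f} m")  # 6.27 m, matching the Clermont-Ferrand jump
```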
March 7, 2025
-
Mind + Body Daily Curio
Let’s have an entire tray of these crispy confections, s'il vous plaît! Macarons are some of the best-known cookies in the world, and they come in so many bright colors that it’s no wonder they’re native to the fashion capital of the world: Paris, France. Or are they? Although macarons are famous symbols of France that were undoubtedly popularized in Paris, they might not have actually been invented there.
Macarons shouldn’t be confused with macaroons, which are drop cookies made from a paste of shredded coconut. Macarons are sandwich cookies made from a unique list of ingredients. Instead of being made from dough, like most cookies, macarons’ outer shells are made from almond flour, egg whites, and powdered sugar whipped into a crispy meringue. The chewy filling is traditionally made from buttercream, which itself is made by beating butter and sugar together. All sorts of fillings are used in modern macarons, though, including jams, lemon curd, or chocolate ganache. Food coloring gives macarons their color, which means that they can come in just about any shade, and they’re often made purposefully bright to draw attention in window displays.
Macarons likely owe their invention to the long history of almond-based confections, like marzipan, and meringue-based desserts that have been popular in both Europe and the Middle East for centuries. Some believe that an early version of the macaron was invented in the Middle East and brought to Europe by traders. Others say the cookies were invented in al-Andalus, an area that is now part of Spain, in the early 11th century, then brought to Morocco, where they were eaten during Ramadan. Still others claim that they were invented in Italy and brought to France by an Italian chef, though there are no written records of this. An early version of macarons, made from meringue but not yet sandwich cookies, was sold in Nancy, France, in the 1790s. The sellers were two nuns who had sought asylum there during the French Revolution, though it’s unclear where their recipe came from. The cookies were so popular in Nancy that the nuns were nicknamed the “Macaron Sisters.”
While we’ll never know for certain where early macarons were first made, the type of macarons we know and love today undoubtedly comes from Paris. Just which Parisian chef invented them is a topic of some debate, however. Some claim that Pierre Desfontaines, a pastry chef at Paris’s famous Ladurée pâtisserie, created the macaron sandwich cookie in the 1930s to match the colorful decor that the pâtisserie was famous for. Around the same time, another chef, Claude Gerbet, was also making macarons at his own Parisian bakery. Some believe that Gerbet invented modern macarons, since the cookies were known in Paris, for a time, as “Gerbets.” What we know for sure is that macarons quickly became an extremely popular, fast-selling snack at pâtisseries and coffee shops throughout the city, and by the mid-1940s they were heavily associated with France, as they continue to be today. Delicate, colorful, and sweet? Très magnifique!
[Image description: A plate of light purple macarons with a matching teacup and tablecloth.] Credit & copyright: Jill Wellington, Pexels
March 6, 2025
-
Physics Nerdy Curio
It seems that time can be reflected—but don’t worry, you won’t have to re-live any embarrassing teenage moments! Some reflections are fairly easy to understand: when lightwaves bounce off a reflective surface, like glass, or soundwaves bounce off a non-absorbent surface, like a concrete wall, we see a reflection or hear an echo. Since the 1970s, however, scientists have theorized that there is another, stranger way for waves like sound or light to be reflected, in which they actually move backwards in time. Now, researchers have finally managed to recreate the phenomenon, known as a time reflection, in a lab. Time reflections happen when a wave, such as a soundwave, becomes stretched and changes frequency while, at the same time, the properties of the medium through which the wave is traveling also change abruptly. If you’ve ever heard a police siren seemingly change frequency as it whizzes by, then you’re familiar with at least the first part of this phenomenon. But imagine if you counted from one to ten out loud, and then both the frequency of the soundwave you created and the structure of the air it was traveling through changed all at once. One end of the soundwave might “curl” back at you, so that you would hear yourself counting backwards, from ten to one, in a much higher pitch than you originally spoke. This is a time reflection.
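To make the “counting backwards at a higher pitch” image concrete, here is a minimal toy sketch in Python, assuming NumPy is available. It is purely illustrative: it just reverses a waveform and compresses its time axis, and does not model the researchers’ actual electromagnetic experiment.

```python
import numpy as np

# Toy model: a simple tone stands in for the counting voice. Mimic a time
# reflection by reversing the waveform and compressing its time axis so the
# pitch rises.
sample_rate = 8_000                      # samples per second (arbitrary choice)
t = np.linspace(0, 1, sample_rate, endpoint=False)
voice = np.sin(2 * np.pi * 220 * t)      # a 220 Hz tone as the stand-in signal

reflected = voice[::-1]                  # the signal now runs backwards in time
pitched_up = reflected[::2]              # keep every other sample: same waveform
                                         # in half the time, so ~440 Hz instead of 220 Hz

print(len(voice), len(pitched_up))       # 8000 samples -> 4000 samples
```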
Until now, researchers assumed that it would take too much energy to recreate a time reflection in a lab, since they’d have to suddenly and drastically change whatever material their experimental waves were traveling through. They were able to solve the problem by creating a special material specifically designed to interact with electromagnetic radiation, so that its structure could be changed very quickly. They then sent different lightwaves through the material via a metal strip wired with switches. When the switches were triggered, the material’s impedance, or opposition to electrical flow, changed all at once, shifting the frequencies of the lightwaves. This caused some of the lightwaves to reflect back in an altered way, thus proving that time reflections exist and can, to some degree, be controlled. It’s probably too early to get excited for mass-produced time machines, though.
[Image description: A small round mirror sits outside in the snow, reflecting back snowflakes and green vegetation.] Credit & copyright: Lisa from Pexels, Pexels.
-
Physics Daily Curio #3042
Here’s some hot news that’s worth reflecting on: volcanic eruptions can turn human brains into glass. In the 1960s, archaeologists unearthed many artifacts and preserved human bodies from the ancient Roman city of Pompeii and the town of Herculaneum, both of which were destroyed by the eruption of Mount Vesuvius in 79 C.E. The bodies from these sites are famous for being incredibly well-preserved by layers of volcanic ash, showing the exact poses and sometimes even expressions of the volcano’s victims in their dying moments. In 2018, however, one researcher discovered something even more interesting about one particular body, which had belonged to a 20-year-old man killed in the eruption. Italian anthropologist Pier Paolo Petrone noticed that there were shiny areas inside the body’s skull, and upon further investigation discovered that part of the victim’s brain and spine had turned into glass. Now, scientists believe they’ve uncovered the process behind this extremely rare phenomenon.
Glass does sometimes form in nature without human intervention, but the process, known as vitrification, requires extreme conditions. It can happen when lightning strikes sand: a bolt can heat its path to over 50,000 degrees Fahrenheit, hotter than the surface of the sun, instantly melting the grains. As soon as the lightning is done striking, the sand can rapidly cool, forming tubes or crusts of glass known as fulgurites. Glass can form after volcanic eruptions, too. Obsidian is known as volcanic glass because it’s created when lava rapidly cools. However, 2018 was the first time that a vitrified human organ had ever been discovered. Researchers now believe that they know how it happened. First, a superheated ash cloud from the eruption of Vesuvius swept through Herculaneum, instantly killing those in its wake with temperatures of around 1,000 degrees Fahrenheit. Instead of incinerating victims’ bodies, the cloud left them covered in layers of ash. The cloud then dissipated quickly, allowing the bodies to cool. The brain in question was somewhat protected by the skull surrounding it, allowing it to cool rapidly and form into glass rather than being completely destroyed. It seems this ancient, cranial mystery is no longer a head-scratcher.
[Image description: A gray model of a human brain against a black background.] Credit & copyright: KATRIN BOLOVTSOVA, Pexels