Curio Cabinet / Daily Curio
-
Humanities Daily Curio #3141
Ah, an archaeological dig examining the bygone era of… 1978? Archaeologists at the University of Glasgow in Scotland have invited the public and aging skaters to help in the excavation of a buried skatepark, shedding light on a bit of old European skater lore. The 1970s were a time of change, not just in terms of music and (questionable) fashion, but also in the world of sports. In Scotland, skateboarding exploded in popularity just as it did in the U.S., and in 1978, the city of Glasgow invested £100,000 to build the country’s first skatepark, the Kelvin Wheelies. The skatepark featured a freestyle area, a slalom run, and a halfpipe, among other ambitious features that would have made any skater at the time drool with delight. The very year it opened, the facility hosted the first Scottish Skateboard Championships. Skaters from all around the U.K. gathered to compete, and for a few years, Glasgow skaters were among the best in the country. Unfortunately, the sport declined sharply just a few years after the skatepark opened, and the park began to see fewer visitors. Over time, it fell into disrepair, and the city decided to bulldoze the park due to safety concerns. It was then buried, with a few features remaining visible on the surface. Even without the concrete remnants jutting through the ground, Glasgow skaters from those days never forgot the park. Now, they may get to help resurrect the glories of yesteryear alongside archaeologists, who are seeking their help in identifying the skatepark’s features and layout as they excavate the site. In addition to getting down and dirty themselves, the skaters hope that the site will be marked in such a way that its historic significance can be remembered properly. While skateboarding may have dipped in popularity for a time in Scotland, it’s now more popular than ever around the world and has even made it into the Olympics, so it’s understandable that skating enthusiasts hold the site in such high regard. Also, by all accounts, those bowls were absolutely sick!
[Image description: A green sign on a chainlink fence. White letters read: “No Skateboarding Allowed: Police Take Notice.”] Credit & copyright: Khrystinasnell, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
US History Daily Curio #3140
This wasn’t your average tea party…in fact, it wasn’t even like the other famous revolutionary tea party. With so much political upheaval going on today, it’s worth looking back on the different ways Americans have protested over the centuries, including subtler ones. The Edenton Tea Party of 1774 was quite civil, but it made a powerful statement all the same.
The English love their tea, and so did early American colonists. It’s no wonder, then, that when it came to unfair taxation, a tax on tea was a particularly contentious issue. When the British Parliament passed the Tea Act in 1773 and gave the British East India Company a monopoly on the commodity, they probably knew that it would ruffle feathers across the pond. They might not have been prepared for just how ruffled those feathers got, though. That same year, the famous Boston Tea Party took place, during which protesters dumped 90,000 pounds of tea into Boston Harbor. At the same time, women were encouraged to eschew British imports to participate in politics in their own way.
One woman, Penelope Barker, took this idea a step further. On October 25, 1774, after the First Continental Congress had passed several non-importation resolutions, Barker gathered 50 women together in what would become the first political protest held by women in America. On the surface, it appeared to be like any large tea party, but there were some key differences. Instead of tea made from tea leaves, Barker served herbal tea made from local plants like mulberry leaves and lavender. Furthermore, the attendees signed the 51 Ladies’ Resolution, which expressed their political will as women “who are essentially interested in their welfare, to do everything as far as lies in our power to testify our sincere adherence to the same.” Unlike the men who disguised themselves to hide their identities at the Boston Tea Party, the women specifically rejected the idea of hiding, exposing themselves to potential public backlash aimed at them personally. As expected, they were mocked heavily by British newspapers, but they also inspired other women in the colonies to hold tea parties of their own, bringing more women into the political landscape for the first time. Nothing like a good cup of tea to kick off a revolution.
[Image description: An American flag with a wooden flagpole.] Credit & copyright: Crefollet, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
Science Daily Curio #3139
Who knew that weeds could be so helpful? Increasingly powerful storms and rising sea levels are quickly eroding Scotland’s coastline, but the solution to slowing the progress might lie in a humble seaweed. Coastal erosion is a growing concern around the world, and the issue is especially dire in Scotland. In some communities, buildings sit a literal stone’s throw from the water, and erosion is encroaching on homes, businesses, and historic sites. However, researchers at Heriot-Watt University have found a simple and plentiful resource that could slow the encroachment to a crawl: kelp. Using computer modeling, researchers tested the effectiveness of kelp and other natural barriers like seagrass, oyster reefs, and mussel beds in dampening the devastating energy carried by ocean waves. They found that these barriers could reduce the impact and height of incoming waves to a surprising degree. Kelp was the most effective, capable of reducing wave height by up to 70 percent depending on the exact location. The problem with natural barriers is that they, too, are in decline in many areas due to climate change. Kelp forests are already struggling to survive rising temperatures in some areas, and they could easily be wiped out during a devastating storm, leaving nearby communities more vulnerable until the kelp can recover. Researchers say that legislation may be the next crucial step in stopping the erosion of Scotland’s coasts. If natural barriers are protected via legislation, they could not only contribute to a diverse marine habitat but also act as a natural defense against erosion and flooding. Kelp forests also constitute much of the ecosystem that fisheries rely on, so protecting them would directly benefit the economy, too. It’s a green solution in more ways than one.
[Image description: A kelp forest underwater.] Credit & copyright: U.S. National Park Service, Asset ID: 0DB56032-0224-DD48-5D902DA5B1D6C3F5. Public domain: Full Granting Rights. -
Mind + Body Daily Curio
They’re a staple across the pond, but the most likely place to find these eggs in the U.S. is at a Renaissance fair! Scotch eggs have been around for centuries, but have fallen out of favor in many of the places where they were once popular. Associated with Britain yet seemingly named after Scotland, these sausage-covered snacks might actually have roots in India or Africa.
A Scotch egg is a hard-boiled or soft-boiled egg covered in sausage, then coated in breadcrumbs and deep-fried. Home cooks sometimes choose to bake their Scotch eggs for convenience, and they can be served whole or cut into slices.
Scotch eggs are a popular pub food in the U.K., and in the U.S. they’re associated with historic Britain, making them popular at Renaissance fairs and other European-themed events, but few other places. In truth, no one actually knows where Scotch eggs came from, though they almost certainly didn’t originate in Scotland, despite their name. One story claims that Scotch eggs were named after 19th-century restaurateurs William J. Scott & Sons of Whitby in Yorkshire, England. Supposedly, they served eggs called “Scotties,” coated in fish paste rather than sausage. There are plenty of other theories, of course. London department store Fortnum & Mason has long held that it invented Scotch eggs in the 18th century as a snack for wealthy customers. It’s also plausible that Scotch eggs aren’t European at all, but originated in Africa or India. African recipes for foods similar to Scotch eggs have been found, and they might have been brought to England via trade or exploration during the reign of Queen Elizabeth I, from 1558 to 1603. It’s also possible that Scotch eggs were based on an Indian dish called nargisi kofta, in which an egg is coated in spiced meat, and that the dish made its way to England during the British colonization of India. However they got there, Scotch eggs are right at home in British pubs. Ye olde snacks are sometimes the best.
[Image description: Four slices of Scotch egg on a white plate.] Credit & copyright: Alvis, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide. -
Political Science Daily Curio #3138
Washington, D.C. doesn’t always get to be its own city, despite its status as the nation’s capital. With the federal government’s recent controversial takeover of law enforcement duties from the Metropolitan Police Department of the District of Columbia (MPDC), it might be worth looking back at the history of the District of Columbia Home Rule Act, which lies at the center of the debate.
Washington, D.C. has been the capital of the U.S. since 1800, yet for most of its history it didn’t have much autonomy as a city. Even though it’s situated in the continental U.S., it’s not technically located in any of the 50 states. This was by design, as the Founding Fathers didn’t want any one state to have too much power over the capital. That power was instead given to the federal government, and that had some unusual repercussions for D.C. residents. For one, since the city wasn’t located in a state, residents didn’t have a say in presidential elections until the 23rd Amendment granted the district electoral votes in 1961. Washington’s residents had spent most of the city’s history trying to gain voting rights, and that was just one small victory in the struggle for representation.
The next big development for Washington was the District of Columbia Home Rule Act of 1973, which allowed residents to vote for a mayor and a council of 12 members. Still, all legislation passed by the council has to be approved by Congress. Not only that, the city’s budget is set by Congress, and its judges are appointed by the president. Finally, while Washington has representatives in Congress, they aren’t allowed to vote, effectively leaving the city without a voice in federal legislation. Recent events are a stark reminder that the city is ultimately at the mercy of federal authority for even the most basic municipal functions. With the White House invoking Section 740 of the Home Rule Act to declare an emergency, the federal government has taken over law enforcement duties, and it has the power to do so for up to 30 days by notifying Congress. It might be the capital, but its rights are somewhat lowercase.
[Image description: An American flag with a wooden flagpole.] Credit & copyright: Crefollet, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
Mind + Body Daily Curio #3137
Fatigue isn’t always a symptom; sometimes, it’s the disease. In the last few decades, more and more people have been affected by chronic fatigue syndrome. Now, researchers may finally have found out what causes the mysterious illness. Chronic fatigue syndrome (CFS), also known as myalgic encephalomyelitis, causes such profound fatigue that no amount of rest is enough to alleviate it. The disease began attracting attention in the medical community in the late 1980s, when it was widely confused with mononucleosis, which can cause similar symptoms. In addition to being easily fatigued, those who suffer from CFS are likely to experience severe dizziness, muscle and joint pain, cognitive issues, and unrefreshing sleep. In some cases, CFS can also cause tender lymph nodes and sensitivity to various stimuli. The disease is difficult to diagnose, and some patients have reported difficulty in having their condition taken seriously, even by the doctors they turn to for help.
That might change now that CFS has been linked to changes in the gut microbiome as well as certain genetic signals in patients. One study analyzed the gut microbiomes of 153 individuals who had been diagnosed with CFS and compared them to those of 96 healthy individuals. Researchers found that the composition of the gut microbiome could reliably predict CFS symptoms. The link between the gut and CFS isn’t too surprising, since the disease often manifests after the patient fights off another infection that might have affected their gut microbiome. Another study, which analyzed data on over 15,000 CFS patients and compared it to data from healthy individuals, identified eight genetic signals linked to the immune and nervous systems. While a patient’s gut microbiome can be used to predict the type of symptoms they will have, it appears that these genetic signals can predict the severity of those symptoms. While there is still no cure for CFS, deeper research could be the key to convincing sufferers’ bodies to finally wake up and smell the coffee.
[Image description: A black-and-white illustration of a girl sleeping while sitting up in a chair with sewing in her lap.] Credit & copyright: Sleeping Girl with Needlework in her Lap, Gerard Valck after Michiel van Musscher. The Metropolitan Museum of Art, A. Hyatt Mayor Purchase Fund, Marjorie Phelps Starr Bequest, 1988. Public Domain. -
Travel Daily Curio #3136
Even speed demons have to follow the rules on the autobahn. After a driver was recently fined over $1,000 for speeding on the German highway, some people are learning that the near-mythical autobahn does, in fact, have speed limits (sometimes). The autobahn is the name for Germany’s expansive highway system, and it has a unique reputation. Germany first began constructing it in 1913. At the time, a road with designated entry and exit points was a fairly new idea, having previously been tried only in New York State. Contrary to popular belief, the autobahn was created long before the Nazi Party came around; the party later expanded the infrastructure to show off Germany’s economy and took credit for the idea through propaganda. The autobahn continued to be expanded after the fall of the Nazi regime and was named the Bundesautobahn, which means “federal highway.” Ironically, the Nazis’ expansion of the autobahn made it much easier for Allied forces to move through Germany, and the highway proved invaluable during the country’s reconstruction following the war.
Today, there are over 8,000 miles of autobahn in Germany, and most of them have no speed limit. While that makes the autobahn a speed demon’s paradise, it has led to the misconception that there are no speed limits anywhere on it. There are actually areas where the speed limit can range from 50 to 80 miles per hour, comparable to what can be found in the U.S. Recently, one F1 wannabe was caught speeding along at an eye-watering 199 miles per hour on a stretch where the speed limit was just 74.5 miles per hour. In addition to incurring a few points on their license, they were also banned from driving for three months—an “auto ban,” if you will.
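Those oddly precise figures make more sense in metric: German speed limits are posted in kilometers per hour, so the mile-per-hour numbers above are almost certainly conversions of round metric values. Here’s a quick sketch of the arithmetic in Python, using the exact definition of the mile:

MPH_TO_KMH = 1.609344  # exact: 1 mile = 1.609344 km

# Convert the article's mph figures back to km/h.
for mph in (50, 74.5, 80, 199):
    print(f"{mph} mph = {mph * MPH_TO_KMH:.1f} km/h")

# Prints:
# 50 mph = 80.5 km/h
# 74.5 mph = 119.9 km/h
# 80 mph = 128.7 km/h
# 199 mph = 320.3 km/h

In other words, the cited limits line up with common posted limits of roughly 80, 120, and 130 kilometers per hour, and the speeder was doing about 320 kilometers per hour.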
[Image description: A portion of the Autobahn with three lanes, photographed from above with green trees on either side.] Credit & copyright: AlanyaSeeburg, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
Mind + Body Daily Curio #3135
Ever thought of putting on goggles to treat your back pain? Researchers have found that people living with chronic pain might benefit from the use of virtual reality (VR), and while the view is fake, the relief is for real. Researchers have long been aware of the pain-relieving benefits of simply being out in nature. Whether it’s the views or the fresh air (more on that later), it’s one of the simplest ways to manage pain. Dr. Sam Hughes at the University of Exeter in the U.K. explained in a recent statement to the press, “We’ve seen a growing body of evidence show that exposure to nature can help reduce short term, everyday pain, but there has been less research into how this might work for people living with chronic or longer-term pain.”
The problem is that many people with chronic pain have a difficult time getting out into nature. Now, there’s some evidence that just viewing nature scenes through a VR headset might be enough to induce some pain-relieving benefits. In a new study, Hughes’ team had 29 healthy participants experience painful electric shocks (in the name of science) for 50 minutes during one session to simulate nerve pain. Then, in a follow-up session, they added a 360-degree VR experience of waterfalls in Oregon. In a third session, participants were shown the same scene, but on a 2D screen. After each session, the participants filled out questionnaires to assess their experiences, and researchers found that the VR session was surprisingly effective in reducing perceived pain. The researchers aren’t quite sure why it had such a potent pain-fighting effect, but no matter the reason, the findings could lead to some form of therapeutic VR pain management. Maybe it’s mind over matter…or just a matter of perspective.
[Image description: A pine forest under a blue sky with mountains in the distance.] Credit & copyright: Kurtkaiser, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
Mind + Body Daily Curio
If the heat of late summer is getting you down, it might be time to chill out with some noodles. Naengmyeon, a Korean dish of cold noodles, is a great way to beat the heat without having to eat something sweet, like ice cream. This simple dish has a complex history that stretches back centuries.
Naengmyeon is made with long buckwheat noodles in a cold beef, chicken, or dongchimi broth. The latter is a clear, tangy, acidic broth made by fermenting a type of white radish. The dish is often topped with sliced vegetables and a boiled egg. Traditionally, naengmyeon’s long noodles were eaten without biting or cutting, as their length signified prosperity and long life. Today, though, the dish is often served with special scissors specifically for cutting the noodles.
The earliest evidence of naengmyeon dates back to the Joseon era, between 1392 and 1897, though the dish could be even older. While several different varieties of naengmyeon exist today, the first ones originated in North Korea, specifically in the southern city of Hamhung and the nation’s capital, Pyongyang. Bibim naengmyeon, a variety that’s still eaten in North Korea today, is topped with a spicy, red chili paste.
The 1940s saw major tensions arise in Korea, as the communist North and non-communist South split ideologically. When North Korea invaded South Korea on June 25, 1950, the Korean War officially broke out, with the Soviet Union supporting North Korea and the U.S. supporting South Korea. During and immediately following the war, which left the Korean Peninsula permanently divided, thousands of refugees poured into the South, bringing their regional foods with them. This included naengmyeon. Today, the dish is popular throughout South Korea, with regional varieties utilizing different broths, vegetables, and meats, including seafood. War might divide, but food always unites.
[Image description: A bowl of naengmyeon, noodles with broth, topped with vegetables, in a silver bowl surrounded by utensils and side dishes.] Credit & copyright: Suohros, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
Games Daily Curio #3134
There are few places where sports and archaeology can find common ground, but a pok-ta-pok court is one. Descendants of the Maya are bringing the ancient sport of pok-ta-pok to Belize, where it’s already considered the national sport. Next month, the national team will be competing at the International Pok-ta-Pok Tournament, where they’ve been crowned champions three times in the past. The pok-ta-pok revival in Central America began in the early 1900s, but in the last few decades it has truly entered the spotlight. For those who play it, it’s not just a sport—it’s also a matter of heritage. Pok-ta-pok is so ancient that even archaeologists aren’t quite sure how old it is. Evidence shows that it was played as early as 2,000 years ago by the Maya, who likely invented the game, and it was only after the Spanish began colonizing the region in the 16th century that the sport fell out of favor.
The game itself plays like a combination of basketball, tennis, and volleyball, on a court that is divided in two and set between two walls, each with a stone ring about 20 feet up at the center line. Players can use their elbows, knees, and hips to touch the ball, but not their head, hands, or feet. The ball itself is made of latex, and points are awarded to a team under the following conditions: when the opposing side fails to return the ball before two bounces, when the ball reaches the opposing side’s end zone, or when the team manages to get the ball through one of the stone rings. While the game is fairly straightforward, it’s been the subject of a long-enduring myth—that the winners were sacrificed to honor the gods. In truth, sacrifices were not a normal part of the game for winners or losers. Sometimes, cities would compete in a match of pok-ta-pok instead of going to war, and in those cases, the losing team could technically be sacrificed. More often than not, though, the losing team’s city would simply pay tribute in the form of jade and other valuables. Even for a “war game,” killing the losers seems a bit harsh.
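Just for fun, the three scoring conditions above are simple enough to sketch as a tiny program. This is purely illustrative: the point values are hypothetical, since the article doesn’t say how the conditions are weighted.

# Toy model of the three pok-ta-pok scoring conditions described above.
# Point values are invented for illustration; the article specifies none.
POINTS = {
    "no_return_before_two_bounces": 1,  # opponents fail to return the ball in time
    "ball_reaches_end_zone": 1,         # ball reaches the opposing side's end zone
    "ball_through_stone_ring": 1,       # ball passes through a stone ring
}

def award(scores: dict, team: str, condition: str) -> None:
    """Add the points for a recognized scoring condition to a team's total."""
    scores[team] = scores.get(team, 0) + POINTS[condition]

scores = {}
award(scores, "Belize", "ball_through_stone_ring")
print(scores)  # {'Belize': 1}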
[Image description: A Mayan carving depicting a man in a feathered headdress playing pok-ta-pok.] Credit & copyright: Yoke-form vessel, Maya artist(s), 350–450 CE. The Metropolitan Museum of Art. Purchase, Mrs. Charles S. Payson Gift, 1970. Public Domain. -
Biology Daily Curio #3133
Get tongue-tied easily? Blame your genes. As common as stuttering is, its cause has never been fully understood. Now, scientists might have found some important clues after discovering dozens of genes associated with the issue. Stuttering is a speech disorder where the speaker repeats words, prolongs certain sounds, or even pauses unexpectedly in the middle of talking. Stuttering can have a severe impact on someone’s quality of life. Children who stutter are more likely to be bullied, while adults may have fewer job prospects. Over 400 million people around the world, across all languages, struggle with stuttering, yet there has never been any solid proof as to what causes it. Scientists and laypeople alike have conjectured for hundreds of years, blaming everything from childhood trauma to left-handedness. There does seem to be a connection involving early childhood development, however.
Most people who stutter begin doing so soon after they begin speaking for the first time as children, but not all of them continue. For those who do, stuttering can become ingrained as a habit, meaning that the longer one stutters, the more difficult it is to stop. Often, speech therapy at an early age is enough to correct stuttering. For everyone else, it can become lifelong. It turns out that there is also a genetic component to stuttering. Recently, scientists used data from an online genealogy service and managed to identify 57 distinct genomic regions containing 48 genes that appear to be correlated with stuttering. This means that for many stutterers, especially older ones, fixing a stutter isn’t as simple as going to speech therapy. Researchers found that musicality, speech, and language are deeply genetically related and share the same neurological pathway. What’s more, they found that there may be causal relationships between stuttering and impaired musical rhythm, autism, and depression. When it comes to genetics, connections pop up in the most unexpected places.
[Image description: A digital illustration of a double-helix DNA strand. The strand is dark blue.] Credit & copyright: PantheraLeo1359531, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
STEM Daily Curio #3132
Do wet wipes make you dry heave? They probably should. The Port of London Authority has started to dismantle and dispose of the city’s infamous “Wet Wipe Island,” a giant mass of waste consisting largely of flushable wet wipes in the Thames. Technically, wet wipes that are advertised as “flushable” can be flushed, but what happens afterward is a real problem for everyone. Despite the marketing surrounding them, flushable wet wipes cause nothing but problems for sewer systems and for the environment in general. Wet wipes are designed to stay intact in a wet environment, so they can take months or even years to break down. During that time, they can clump up into large masses, often mixing with fat to create “fatbergs” that can block sewer lines, forcing municipalities to break them apart by hand so that pipes don’t burst. Then there’s the matter of the chemicals used in wet wipes. Such wipes are often made with plastic, which breaks down into microplastics when released into the environment. Other chemicals make the wipes more durable, but they also make them even less able to break down quickly. Some wipes are also antibacterial, which can be harmful to fish, wildlife, and overall water health. Many brands of wet wipes are made from cellulose, cotton, or other biodegradable materials, but even those don’t solve the issue. In the case of London, the wet wipes have accumulated in the River Thames, creating an 820-foot-long mass of congealed garbage that’s been dubbed Wet Wipe Island. The predicament is made worse by the fact that the Thames has an average yearly temperature of around 54 degrees Fahrenheit, slowing down the already glacial rate of biodegradation and forcing excavators to roll in and bust things up manually. This is one island getaway that really stinks.
[Image description: Ripples on the surface of water.] Credit & copyright: MartinThoma, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
Mind + Body Daily Curio #3131
Not only does it feel real, it can cause real headaches. As sophisticated AI chatbots were becoming mainstream a few years ago, one psychiatrist predicted that a form of psychosis would arise from them. Now, it seems, that grim prediction has come true. Soon after ChatGPT was made available to the masses in late 2022, Danish psychiatrist Søren Dinesen Østergaard warned that prolonged interaction with such chatbots could lead to mental health crises in predisposed individuals. More specifically, he believed that chatbots would trigger and reinforce delusional beliefs in users, leading to a variety of issues. Many users simply become addicted to conversing with the chatbot, while others believe that the chatbot understands them better than any human companion. Then there are those who come to believe, at the chatbot’s suggestion, that they are being spied upon, becoming steeped in a sense of paranoia. Chatbots can also trigger grandiose delusions, in which the user comes to believe that they are the “chosen one” or that they have some special spiritual or cosmic role.
While it may sound farfetched, these kinds of mental health crises can prove very serious. Of course, AI isn’t designed to create such problems. The reason chatbots sometimes trigger such behavior in those who are predisposed to mental illness is that they are often trained to be encouraging and to tell users what they want to hear. Especially in cases of grandiose delusions, chatbots have been observed giving spiritual advice and bombarding users with flattery. According to Østergaard and like-minded critics of AI chatbots, a recent update to ChatGPT made it even more people-pleasing, and thus more likely to trigger delusions. Since AI psychosis is a new phenomenon, experts like Østergaard are calling for a systematic review of cases to better understand the causes and formulate treatments. In the meantime, they warn that it can affect both those who are already diagnosed with conditions like schizophrenia and those who have undiagnosed or latent risk factors. Blue light glasses probably aren’t going to cut it when it comes to reducing harm from excessive screen time. -
FREECooking Daily CurioFree1 CQ
This dessert may be cobbled together…but that’s sort of the point! As summer begins to wind down, there’s no better time to explore the origins of one of the most undeniably summery desserts: peach cobbler. This gooey treat has been around for a surprisingly long time and, while it has some roots in Europe, its strongest historical ties are to Black communities in the American South.
Peach cobbler is a dessert made from peaches baked with sugar and topped with a crumbly, biscuit-like batter. Today, it’s often served alongside a scoop of vanilla ice cream. Cobbler gets its name from its cobbled-together appearance, but its invention was very deliberate. It was first made in the American colonies and was based on recipes for British suet puddings. These steamed British desserts combined fruit with a dough made from a type of animal fat called suet in place of butter. Early colonists had access to different ingredients and cooking equipment, and adjusted their pudding recipes accordingly. The result was a dessert that was baked rather than steamed and was topped with a biscuit batter made from flour, salt, butter, and milk. These early American cobblers mostly used berries and wild fruit.
Peach trees, native to Asia and first brought to the Americas by the Spanish, thrived in the southern U.S. thanks to its warm weather. There, enslaved people made all sorts of desserts utilizing peaches, including peach cobblers. Using proper ovens (unlike the hearths and hot coals used by early European settlers), they created deep, gooey cobblers in baking pans. After slavery ended in the U.S., Black Americans kept and built upon their recipes, making peach cobbler a staple dish in Black communities. One of the earliest written cobbler recipes was published by Abby Fisher, a formerly enslaved woman who went on to have a thriving career as a pickle manufacturer and cookbook author. The recipe appears in her 1881 cookbook, What Mrs. Fisher Knows About Old Southern Cooking. Today, peach cobbler is still a staple across the southern U.S. and can be found in plenty of other states too, especially in the Midwest. You just can’t keep something this delicious confined to one region.
[Image description: A peach that has been cut in half, surrounded by whole peaches.] Credit & copyright: Photo by Jack Dykinga, USDA Agricultural Research Service. ID K6084-1. Public Domain.
-
FREEMind + Body Daily Curio #3130Free1 CQ
There’s a fine line between getting a tan and cooking yourself in the summer sun. Sunburns can happen even when precautions like sunscreen are taken, especially if one forgets to reapply. Luckily, there’s a widely accepted sunburn remedy: aloe. But does aloe gel (or cuttings taken directly from the plant itself) actually help soothe or heal sunburns? The answer isn’t as straightforward as it seems.
Using aloe for minor burns seems like a sensible thing to do. After all, aloe is used in a variety of cosmetic products and is touted as being good for the skin. Aloe is also chock full of antioxidants and has anti-inflammatory properties, so it stands to reason that it would help. Yet, in controlled studies, aloe products have been found to be no better than a placebo when it comes to treating sunburns. The only thing aloe has been proven to do is provide temporary relief by acting as a cooling agent—and only for a short while. That might be the best anyone can hope for, though. Burns of all kinds are notoriously difficult to heal via medical intervention. When it comes to sunburns, the two best remedies are time and preventing further damage. Aloe does act as a gentle moisturizer in cases where heavier, petroleum-based products should be avoided, and that moisture can help protect the skin’s surface.
If a sunburn is severe and aloe just isn’t cutting it, experts recommend nonsteroidal anti-inflammatory drugs (NSAIDs) like ibuprofen to help with the pain and inflammation, while hydrocortisone cream might be called for in more extreme cases. Ultimately, though, preventative measures like strong sunscreen (reapplied every two hours) and protective clothing are everything when it comes to sunburns. There’s no remedy under the sun that can completely fix a sunburn once the damage is done.
[Image description: Several aloe plants, Aloe castanea, growing outdoors in a botanical garden.] Credit & copyright: Tangopaso, Wikimedia Commons, Jardin Exotique de Monaco. The copyright holder of this work has released it into the public domain. This applies worldwide.
-
FREENutrition Daily Curio #3129Free1 CQ
A camel can take you on all sorts of desert adventures, including the culinary kind. Somalia is currently embracing camel milk on a massive scale, and while the country is taking the lead in modernizing the dairy camel industry, it’s not the only one interested.
Few countries in the world rely on camels as much as Somalia does. Agriculture makes up the lion’s share of its economy, and the beast that bears much of that burden is the camel. However, the nutritional and economic potential of camel milk was mostly overlooked until 2006, when the first commercial camel dairy operation was established in the country. Since then, camel milk has been rising in popularity. One of its main benefits is that it’s much lower in lactose than cow’s milk, making it ideal for those with lactose intolerance. Camel milk also lacks β-lactoglobulin, an allergen present in cow’s milk that makes the latter unsuitable for many allergy sufferers. It contains more vitamin C, iron, and zinc than cow’s milk as well, and yogurt made from camel milk still has plenty of probiotics.
Perhaps the most surprising benefit of camel milk is that it might help manage type-1 diabetes. Communities that consume milk from dromedary camels (camels with one hump) apparently have fewer cases of diabetes, and dromedary milk has been shown to lower blood sugar levels in diabetic lab rats. In humans with type-1 diabetes, camel milk has been shown to promote endogenous insulin secretion, though it’s far from a cure. The only downside? Those who are used to the taste of cow’s milk might have a little trouble with camel milk’s slightly salty flavor, which results from its higher sodium levels. Sounds like a small hump to get over for so many benefits.
[Image description: A camel walking in a desert with mountains in the background.] Credit & copyright: Bernard Gagnon, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
FREEMind + Body Daily Curio #3128Free1 CQ
Isn’t it amazing how fast babies grow? Nash Keen, who set the world record for the most premature baby ever born, recently turned one. Nicknamed “Nash Potato” by parents Mollie and Randall Keen, Nash was born in Iowa on July 5, 2024, 19 weeks early, or about 133 days short of the typical full term of 280 days. At birth, he weighed just 10 ounces and measured 9.5 inches long. To put that into perspective, deliveries are considered premature if they occur before 37 weeks, and around one out of every 10 births in the U.S. is premature. Premature births are more likely to occur if the mother is 35 or older or has chronic health issues like diabetes or heart disease, but many occur for no known medical reason. Although there have been previous cases with similarly short gestational periods (the previous record was 132 days premature), there was no guarantee that Nash would survive even with the best care.
Although premature births are still inherently dangerous, they are much more survivable now than they were before the invention of the first baby incubators in the late 1800s. The incubators were created by French doctor Stéphane Tarnier, who was inspired by egg incubators he had seen at a zoo in Paris. Later, German-American doctor Martin A. Couney popularized the incubators by displaying them to the public at Coney Island. The machines help regulate babies’ body temperature and keep them in a germ-free environment as their organs and immune systems continue to develop. Even so, premature babies require constant care and monitoring in the hospital, and are at greater risk of complications like respiratory distress, apnea of prematurity (paused breathing), anemia, and other issues that arise from not getting the chance to develop longer in the womb. While most premature babies go on to live healthy lives, the risk of chronic health complications goes up the shorter a gestational period is. People who were born prematurely are more likely to suffer from delayed development, depression, anxiety, ADHD, neurological disorders, dental problems, asthma, and hearing loss. While Nash currently requires supplemental oxygen and hearing aids, he seems to be a fairly healthy one-year-old otherwise. Not too bad for this remarkably small bundle of joy.
[Image description: A white figurine of a baby in a cradle.] Credit & copyright: Figure of a Baby in a Cradle Holding a Kitten, c.1830–70. The Metropolitan Museum of Art, Gift of Dr. Charles W. Green, 1947. Public Domain.
-
FREEUS History Daily Curio #3127Free1 CQ
Even in times of harsh oppression, some people risk everything for freedom. A statue of Robert Smalls, a formerly enslaved man who famously sailed to freedom, will soon be placed in the South Carolina state capitol, making him the first Black American to receive the honor. Robert Smalls was born into slavery on April 5, 1839, in Beaufort, South Carolina. He worked various jobs in town as an enslaved laborer and was working on a ship called the Planter when the Civil War broke out. When the ship was contracted as a transport ship for the Confederate army, Smalls began piloting it, gaining skills and experience that soon proved invaluable.
On May 13, 1862, under cover of night, Smalls and his fellow enslaved crewmen commandeered the Planter and escaped the South with the steamer, his family, and several others. After he handed the ship over to the U.S. Navy, Smalls was officially made its captain and used the vessel he had stolen from the Confederates to navigate around Charleston Harbor, aiding the Union. After the war, Smalls entered politics, earning a seat in the state House of Representatives and then the state Senate. Eventually, he was elected to the U.S. House of Representatives, where he served five terms as a Congressman between 1875 and 1887. Beyond politics, Smalls also became a successful businessman, and even came to own the property of his former enslaver, Henry McKee. Although much of Smalls’s legacy was obscured for decades due to the increasingly discriminatory politics of the South in the early 20th century, his story could never truly be forgotten, and it continued to inspire civil rights activists after his lifetime. Even as Jim Crow laws overtook the South, Smalls kept fighting for the rights of Black Americans until he passed away on February 22, 1915. His statue is set to be 12 feet tall, but even that might not quite do justice to such a larger-than-life figure.
[Image description: The front gate of a large, white house. A plaque on the gate reads “Robert Smalls House” with a description beneath.] Credit & copyright: NPS Image Gallery, asset ID:a78b0caf-517d-4369-8919-81e7228dbfaf. Constraints Information: Public domain:Full Granting Rights.
-
FREEMind + Body Daily CurioFree1 CQ
It’s a sauce, it’s a dip, it’s a spread, and most importantly, it’s delicious. There’s not a whole lot that pesto can’t do, and it’s been doing it for a long, long time. In fact, some form of this Italian staple has been delighting palates since the rise of ancient Rome.
Pesto is a green paste traditionally made by mixing and grinding seven ingredients together with a mortar and pestle: basil leaves, parmesan cheese, pecorino cheese, extra virgin olive oil, garlic, pine nuts, and salt. It has a light, vegetable-y flavor and can be used as a pasta or pizza sauce, a dip for bread, or a spread on sandwiches.
There is little doubt that pesto originated in what is now Italy. The ancient Roman version of pesto didn’t call for basil and didn’t always include nuts, but it had most of modern pesto’s other ingredients, plus vinegar. The paste, which was also made with a mortar and pestle, was called moretum, and a detailed description of it appears in the Appendix Vergiliana, a collection of poems traditionally attributed to Virgil, who lived from 70 to 19 B.C.E.
In the Italian region of Liguria, in the city of Genoa, moretum developed into a similar sauce called agliata in the Middle Ages. This version called for walnuts, solidifying nuts as a core component of pesto. Agliata became a staple of Genoan cuisine, and over time herbs like parsley or sage were added to variations of it. Surprisingly, basil didn’t surface as pesto’s main ingredient until the mid-19th century. Once it did, though, basil outperformed the other green herbs and stuck around. Genoa has been celebrated as the birthplace of modern pesto ever since. You could say that their pesto is the best-o.
[Image description: A plate of pasta with spaghetti noodles and pesto sauce.] Credit & copyright: Benoît Prieur (1975–), Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
FREEWorld History Daily Curio #3126Free1 CQ
Henry VIII was famous for making heads roll, and this head’s never stopped. English statesman Thomas More was beheaded nearly 500 years ago, and his skull could now be exhumed for the second time. An accomplished statesman, author of Utopia, and a once-close friend of King Henry VIII, More was beheaded in 1535 for refusing to recognize the king as the head of the Church of England. Following his execution, More’s head was placed on “display” on London Bridge, as was the protocol for traitors. The head was meant to be thrown into the River Thames afterward, but it was rescued and preserved by More’s daughter, Margaret. When she passed away, More’s skull was interred next to her while his body remained at the Tower of London. Later, Margaret’s son exhumed the skull and moved it to St. Dunstan’s Church in Canterbury. There it might have remained for the rest of time, but it is now on the cusp of being exhumed again, this time for a more honorable purpose.
For defying the king and refusing to recognize the Church of England, More was canonized as a Catholic saint on the 400th anniversary of his death. Now, ahead of the 500th anniversary, the council in charge of St. Dunstan’s Church, where More’s skull is currently buried, is asking for permission to conserve it as a relic. Relics are pieces of the remains of a Catholic saint, and they are often placed on display, usually inside a reliquary. The 500th anniversary isn’t until 2035, but the skull would need time to dry out in order to prevent further decay. In fact, the skull has been examined before in the vault where it resides, and it was already showing signs of advanced decay. Even if the skull is exhumed, though, it’s unlikely that curious observers will be able to see it in person. After conservation efforts take place, St. Dunstan’s officials say they will not publicly display the skull, and may even return it to the vault where it was originally interred, albeit better preserved. It might also be placed in a reliquary where visitors can pay their respects, even if they don’t get to catch a glimpse of the skull itself. Sometimes, less is More.
[Image description: A black-and-white illustration of Thomas More wearing historical clothing, including a hat.] Credit & copyright: Rijksmuseum. Studio of Wierix after a print by Antonie Wierix (II). 1550 - 1600.