Curio Cabinet / Daily Curio
Engineering Daily Curio #2941
Fungus-powered robots might not sound that cool at first, but they’ll grow on you. Scientists have found a way to use live fungi to drive small robotic machines called “biohybrid” robots. This type of machine could represent a big leap forward for robotic engineering. After all, biology has already figured out how to detect a variety of stimuli and send signals through self-replicating cellular structures, whereas today’s robots still rely on things as crude as circuit boards and copper wires. Scientists at Cornell University decided to switch things up by building a biohybrid robot that uses mycelium (the root-like parts of fungus) to control its movements.
Previously, biohybrid robots have been made using cells and parts from vertebrates, insects, and even sea slugs, but the Cornell team is the first to use fungus. For their robot, they grew mycelium directly on an electrical interface that converts electrophysiological activity from the fungus into signals that are sent to actuators. That’s not too far off from what fungus already does in nature, where expansive mycelium networks intertwined with root systems can transport nutrients between plants and communicate using chemical signals. So far, they’ve made a robot with wheels and another with legs. Both versions are quite capable of moving around based on the mycelium’s responses to stimuli. For now, the researchers are experimenting with using light to control the movement of the robots, but they are planning to use chemical signals in the future. Since mycelium is already capable of responding to different stimuli, there’s no need to have different sensors for light, chemicals, and pressure; the fungus acts as an all-in-one sensor unit. In the future, scientists believe that this technology could be used to create mostly autonomous robots, with applications in fields like agriculture, where they could freely roam about to dispense fertilizer or pesticides based on the chemical composition of the soil. Fungal farmers? Funny, but feasible!
[Image description: A cluster of white oyster mushrooms surrounded by grass.] Credit & copyright: Teknad, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
Science Daily Curio #2940
Oof, that must have hurt! Our solar system’s many moons and planets have, throughout their long history, dealt with plenty of asteroid impacts. These destructive collisions left many scars still visible today, but scientists at Kobe University in Japan have found one case that stands out from the rest. Jupiter is no stranger to asteroid impacts, as its immense gravity attracts drifting space rocks. But when Jupiter casts its gravitational net, not all of the asteroids reach the planet itself. Instead, they often collide with one of Jupiter’s many moons, like Ganymede. The largest of Jupiter’s moons, Ganymede bears innumerable craters on its surface from eons of such asteroid impacts, but there’s one that left more than just a scar. The largest crater visible on its surface features a circular furrow that seems to spread out from a single point—the site of impact. This four-billion-year-old crater is so large that much of its visible detail has been obscured by a smattering of smaller craters from other asteroids. Naoyuki Hirata, a planetologist at Kobe University, knew that whatever left the crater must have been massive.
Figuring out exactly how massive was tricky. One of the clues to the size of the asteroid was the location of the crater from which the furrow system emanates: almost directly on the far side of the tidally-locked moon from Jupiter. That, along with the angle of impact, indicates that the asteroid was large enough to shift the rotational axis of the moon significantly, enough that the impact site, which once faced Jupiter, now faces away. It’s not an event without precedent, as discoveries made through the New Horizons space probe indicate that Pluto once experienced something similar. Hirata estimates that the Ganymede asteroid must have been around 186 miles wide, or 20 times larger than the asteroid that created the Chicxulub crater 66 million years ago on Earth. That asteroid famously caused the extinction event that wiped out the dinosaurs. If the Ganymede asteroid had hit the Earth, it’s safe to say we’d be missing a lot more than dinosaurs.
[Image description: A dark night sky with stars visible.] Credit & copyright: Kai Pilger, Pexels
Engineering Daily Curio #2939
Tuscany is known for its fine wines and the vineyards that produce them. Soon, tourists in Tuscany won’t even have to travel anywhere to get to a vineyard—they’ll be under one as soon as they get off their flight. Like many other airports around the world, the Aeroporto Amerigo Vespucci in Florence has been embracing sustainability measures to reduce energy consumption. However, they’re doing so with unmatched Tuscan flair. The airport has plans to add a 19-acre green roof, which will use living plants to form an insulating barrier, but it won’t just be a standard garden or lawn that covers the terminals. Instead, the airport’s green roof will be a sloping vineyard visible from beneath.
The ambitious construction project is set to be completed in two phases taking place in 2026 and 2035. Of course, there are plans to cultivate the vineyard’s grapes to produce wine, possibly on site. The project isn’t just an elaborate ploy to attract tourists, but an effective way to reduce the busy airport’s energy consumption. Buried in the vineyard will be heat exchanger coils, which will be used to warm the building in winter and cool it in summer. Heat exchangers are an efficient method of climate control because, instead of creating heat, they simply move heat from one place to another. So, in the winter, the heat exchanger will move residual warmth in the vineyard’s soil to the interior of the airport, while in the summer, it will move heat from the interior outside, to the soil. Even without the heat exchange system, the green roof alone helps maintain a stable temperature year-round. On top of all that, translucent photovoltaic panels will be placed between the vines. Along with producing energy, the panels will act as windows for those below, allowing visitors plenty of natural lighting and a peek at the vineyard. That’s a view anyone can raise a toast to.
[Image description: A close-up photo of green grapes growing in a vineyard.] Credit & copyright: W.carter, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
Mind + Body Daily Curio
It’s creamy, it’s spicy, it’s British, it’s Indian, it’s fusion…it’s delicious! While everyone agrees that chicken tikka masala is one of the best-loved foods in England today, its origin is much-debated. What’s not up for debate is the fact that this dish spawned an entire culinary movement, helping to popularize Indian cuisine throughout Britain.
Chicken tikka masala is a dish of boneless chicken chunks in a creamy, tomato-coriander sauce. The chicken used in the dish is chicken tikka, which is made by marinating chicken in a mixture of yogurt and garam masala, a ground spice blend that often includes cinnamon, cumin, cardamom, and peppercorns. Other spices, like turmeric and paprika, are added to the sauce, which is orange thanks to its mixture of red tomatoes and yellowish coriander. Chicken tikka masala can be extremely spicy or very mild, depending on which spices are added. In many restaurants, it can even be ordered to taste.
While chicken tikka masala undoubtedly has Indian origins (many food historians believe it evolved from butter chicken, a similar Indian dish), it wasn’t actually created in India, but in Europe. The question is where exactly in Europe. Some believe that, in the early 1970s, an unnamed chef in London created the dish, which quickly spread amongst the city’s many Indian restaurants until no one could say for certain who had first created it. The most popular origin story, though, involves British Pakistani chef Ali Ahmed Aslam and his restaurant in Glasgow, Scotland. Supposedly, Aslam used spices and condensed tomato soup to make a sauce for a customer’s chicken tikka after they’d complained that it was too dry. However, when Glasgow petitioned for the city to be named the dish’s official home, the request was denied, since many different places in the U.K. claimed to have invented it.
Today, chicken tikka masala is considered an unofficial national dish of the U.K., even if no one can agree exactly where in the U.K. it came from. Many also consider it to be one of the first examples of fusion cuisine. In 2001, U.K. foreign secretary Robin Cook even said that the dish was proof of “multiculturalism as a positive force for our economy and society.” There’s no arguing with that.
[Image description: A white plate of chicken tikka masala, rice, and a triangular piece of naan on a yellow-and-white table. The dish’s sauce is a brownish-orange color.] Credit & copyright: Andy Li, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
Art Appreciation Daily Curio #2938
There’s no use crying over spilt milk… or broken artifacts. After a 4-year-old boy shattered a millennia-old jar at the Hecht Museum in Haifa, Israel, in late August, the museum invited the boy and his family back to see the relic restored. You might expect such a mistake, honest though it was, to lead to a museum ban or even a fine. But the Hecht Museum has good reasons for letting bygones be bygones. First of all, the child who broke the jar clearly did so by accident. According to the family, the boy thought there might be something inside the jar, so he tried to tilt it and look inside, causing it to fall on the floor. Immediately following the incident, the parents waited for museum workers to arrive, hoping to hear that the jar was a replica. To their dismay, it was a genuine, 3,500-year-old relic. Made during the Bronze Age, the jar survived for millennia without any major damage, making it a rare specimen. However, instead of punishing the boy and his family for destroying the ancient artifact, the museum invited them back to learn about the restoration process. That’s because of the museum’s philosophy of sharing its contents with the world in an open and approachable manner, even if that brings some risks. As Inbal Rivlin, the Hecht Museum’s general director, said in a statement, “The museum is not a mausoleum but a living place, open to families [and] accessible.” They’re also hoping to use this incident as a way to educate the public on the restoration process, which involves meticulously reassembling the jar shard by shard. It also won’t change the museum’s policy on displaying some artifacts openly, without protective cases, which they believe adds a “special charm” to visitors’ experience. Of course, they are much less forgiving when it comes to intentional damage, and have worked with the police when such incidents arose. If you decide to visit, it’s probably best to keep your hands to yourself, just in case.
Engineering Daily Curio #2937
They say that plenty of fiber is good for you…well, here’s a whole bunch! Researchers at Northumbria University in England and National Textile University in Pakistan have found a way to create textiles using waste materials from the banana industry. As a bonus, the process would produce energy at the same time. The method is set to roll out in Pakistan, which has large agricultural and textile industries. Pakistan also produces around 154,800 tons of bananas a year, and the fruits are one of the country’s largest agricultural exports. But banana production comes with a lot of waste. On average, every 2.5 acres of banana plantation produces around 220 tons of waste in the form of peels and the inedible “bodies” of each banana plant. Pakistan’s textile industry requires a lot of raw materials and energy, yet the country’s electricity isn’t always reliable in rural areas. To solve all these problems at once, Northumbria University and the National Textile University are teaming up with Eco Research Ltd in England, and Prime Eurotech in Pakistan. While the universities develop textiles from banana waste, whatever can’t be used for textiles will be used in the production of synthetic gas (syngas) and nitrogen fertilizers. That way, they’ll address the problem of waste while providing clean energy to the very communities that grow the bananas in the first place. As banana waste is converted into textile fibers, it could become a new source of income for these communities too, while reducing the amount of waste generated by the textile industry. In all, the process could make use of nearly 88 million tons of agricultural waste produced by the banana industry to produce over two billion cubic feet of syngas and 33 million tons of fertilizer. Cleaning up two industries at once—who knew bananas had such wide a-peel?
[Image description: Several bananas arranged on a white background. One is halfway unpeeled.] Credit & copyright: alleksana, Pexels
Astronomy Daily Curio #2936
If you’re stuck in outer space, the only thing more unsettling than the deafening silence is a sound that you don’t recognize. After being stranded in orbit for months, the astronauts aboard the Boeing Starliner have had to contend with changing plans, changing rides, and a strange, pulsing noise. Barry “Butch” Wilmore and Suni Williams launched from Cape Canaveral on June 5 for what was meant to be an 8-day mission to assess the capabilities of the Starliner. Built by Boeing, the craft was designed for regular missions to the International Space Station (ISS). But test flights often don’t go according to plan. First, there was a helium leak in the propulsion system followed by reaction control system thruster failures. Although the astronauts were able to fix the helium leak, NASA decided to take the cautious approach and delay the return trip until the cause of the failures could be better assessed. To that end, they tested Starliner thrusters back on Earth. Meanwhile, the astronauts were told that their mission would be extended for another six months.
Now, the plan is for the duo to abandon the Starliner altogether, staying on the ISS until they can hitch a ride back to Earth aboard SpaceX’s Dragon Freedom, which will launch on September 24 and return with the stranded astronauts in February of next year, should everything go according to plan. With all that going on, Wilmore and Williams recently had to deal with yet another issue: a strange pulsing noise coming from the Starliner capsule. Fortunately, compared to everything else that has gone wrong, the mysterious sound was a relatively minor issue and was quickly resolved. Though it made headlines around the world when it was first discovered, it turned out to be feedback from a speaker on the capsule caused by its connection to the ISS. As for the astronauts, if they’re going to be stranded somewhere, at least it’s someplace with a view that can’t be beat.
[Image description: A dark night sky with stars visible.] Credit & copyright: Kai Pilger, Pexels
Engineering Daily Curio #2935
How do you turn devastation into hope? Turns out, the ingredients were there all along. A non-profit called Mobile Crisis Construction (MCC) is helping communities in Ukraine rebuild using rubble from the very war that has flattened so many Ukrainian buildings. MCC can take rubble and turn it into building blocks, which can be used to make new buildings. It sounds like recycling taken to extremes, but it’s simpler than it sounds. MCC has several mobile block factories that they can—as their name implies—deploy wherever they’re needed. The factories are easy to transport because they’re modified to fit into standard 20-foot shipping containers. They take about 12 weeks to arrive at their destinations, but once they do, they can process enough rubble to produce around 33 to 44 tons of blocks every eight hours, which the MCC claims is enough for one school, five large houses, or ten small houses every week. What makes this possible is the unusual shape of the blocks, which look less like conventional bricks or cinder blocks and more like interlocking toy building blocks. Just like the toy version, they require no mortar to assemble because most of the blocks are molded into interlocking shapes that are simple to stack together. They also come with holes that can accommodate rebar for additional structural strength. According to the MCC, buildings constructed using their blocks can withstand earthquakes and cyclones. Of course, construction still requires the assistance of structural engineers and experienced workers on site, but the mobile factories at least address Ukraine’s current building-supply issue. Currently, MCC is focused on sending their factories to Ukraine, where they’re setting up near Kyiv. Once it becomes safe to do so, they plan to expand to other areas affected by the war. If all goes well, they’ll turn razed buildings into raised ones.
Mind + Body Daily Curio
This rice dish is no side dish. Korean bibimbap is a colorful, savory main course featuring vegetables, meat, eggs, and, of course, rice. It’s popular enough to be found at Korean restaurants all over the world, yet the origins of this famous staple are fairly mysterious.
Bibimbap gets its name from the Korean words “bibim,” which means “to mix,” and “bap,” which means “cooked rice” or “meal.” It’s often served in a stone bowl, on a foundation of fluffy white rice. Atop the rice is a colorful assortment of vegetables, which vary by region but can include beansprouts, shiitake mushrooms, cucumbers, carrots, bell peppers, spicy radish salad (called mu saengchae), sautéed onions, fiddlehead ferns (called gosari), seaweed, and, of course, kimchi (made from fermented cabbage). Thinly sliced beef is often included too, and the whole thing is topped with an egg yolk in the middle (either fried or raw), a spicy sauce made from Korean chili paste (gochujang), and a sprinkling of sesame seeds. Since bibimbap is traditionally served in a heated stone bowl (or dolsot), the ingredients (including the egg) cook further when the dish is stirred up just before eating.
No one knows exactly when bibimbap was invented, but there are a few theories. In traditional Korean ancestral rites (jesa), people often leave food at the memorial shrines of loved ones. The food is usually mixed together before it is offered, so some food historians believe that bibimbap might have originated from such rites. Others think that bibimbap grew out of goldongban, a mixed rice and vegetable dish that comes from a ritual of the same name. In Korea, people often clean out their homes in the days before Lunar New Year, in order to rid themselves of bad luck and staleness from the previous year. For many people, that includes clearing out pantries of various condiments, vegetables, and other ingredients, usually by mixing them into rice. Thus, the dish goldongban was born, and bibimbap might be a more modern, curated version of it.
While bibimbap has been eaten in Korea for over a century, it only got popular in the U.S. in the late 20th century, when Asian food in general became more mainstream. Flights to and from South Korea began to serve it, as did American Korean restaurants, and it spread like wildfire due to its simple, inexpensive ingredients. You just can’t go wrong with rice, veggies, and meat!
[Image description: A black bowl of bibimbap, featuring colorful vegetables, beef, and an egg yolk in the center.] Credit & copyright: Andy Li, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
Science Daily Curio
Farmers once bailed on this crop, but now they’re back to baling it up. Salt hay was once a popular crop in the U.S. before it stopped being widely cultivated in the 20th century. Now, changing environments might see more farmers bringing this crop back. A crop that can be grown in salt water and used for everything from insulation to animal feed sounds almost too good to be true. Yet, before the advent of mechanized farming, salt hay (any of a variety of saltwater-tolerant grasses, like Spartina patens) was one of the most reliable cash crops in the United States. Due to its ability to thrive in coastal marshland inundated by saltwater, salt hay was easy to grow without irrigation. It was used as insulation for ice houses and even turned into pulp for manufacturing paper products. Because it grows in an environment that’s inhospitable to most other plant life, mulch made from salt hay is free of weed seeds, making it ideal for farmers and gardeners who want to minimize herbicide usage. It’s also a fantastic feed for livestock, and in Northern France, farmers raise sheep fed on the salty grass. The end product is agneau de pré-salé, or “pre-salted lamb,” renowned for its unique, almost seasoned flavor that results from the sheep’s high-salt diet.
Despite all its virtues, salt hay was largely abandoned by U.S. farmers due to how difficult it was to harvest. Tractors and combines can’t manage the soft marshy ground it grows on, and saltwater is hostile to machinery. Typically, salt hay has to be harvested by hand, except for when the marsh freezes hard enough to support machinery. With the warming climate, though, ice is getting rarer. But farmers on the mid-Atlantic coast of the U.S. are eyeing the once-viable crop again, this time because of increasing saltwater intrusion on farmland. Every year, there’s less arable land for cash crops like corn or soy, and more salty marshland perfect for salt hay. Beyond agriculture, salt hay is beginning to be used for erosion prevention in coastal areas, since it easily forms strong root systems that hold soil in place. Maybe salt hay will have another hayday.
Mind + Body Daily Curio
Would you eat a goose neck? Would you eat a barnacle? What about a gooseneck barnacle? Gooseneck barnacles are one of Lisbon’s most treasured culinary delicacies, and the unusual sea creatures command a premium price. Gooseneck barnacles, also called percebes, get their name from their elongated bodies, which resemble the necks of geese. At least, that’s one version of the story. Another story goes that in the 12th century, Europeans couldn’t figure out where geese flew off to lay their eggs. Seeing these barnacles that were reminiscent of a goose appendage, they decided that the barnacles must actually be goose eggs. Regardless of how they got their name, today, they’re a popular delicacy in Portugal, where they’re often steamed or boiled and served with butter for dipping. To eat a barnacle, the tough, leathery outer skin must be removed, and those who’ve tried it say that the meat is sweet with a chewy texture.
While the barnacles’ meat goes down easy, the bill might not. At around 112 U.S. dollars, it’s a pricey dish to order. There are a few reasons for this, the first being that gooseneck barnacles don’t exactly grow on trees. No, they grow on rocks in the ocean, and to reach them, harvesters must traverse perilously sharp rocks or dive underwater. Despite their tough exterior, the barnacles also need to be removed with care and precision, lest their prized meat be damaged. Those who harvest barnacles must be licensed, and they’re only allowed to take around 44 pounds a day from the ocean, so there is a limited supply. And yes, harvesting is extremely dangerous. Between the crashing waves and the jagged rocks that the barnacles favor as their home, it’s not uncommon for barnacle harvesters to die on the job. In neighboring Spain, where gooseneck barnacles are also popular, some fishermen risk their lives in the Costa de la Muerte, which translates to the “Coast of Death,” to gather them. That price on the menu is looking a bit more reasonable, isn’t it?
[Image description: Gooseneck barnacles covering a log on a beach. Each barnacle has a black “stem,” a white “shell,” and orange “lips.”] Credit & copyright: Tom Page, Aurevilly, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide.
Mind + Body Daily Curio #2931
The cure for baldness might be sugar! But don’t go stocking up on candy bars just yet. Millions of men around the world experience male pattern baldness, the treatments for which can have mixed results. Now, though, a group of researchers may have unlocked an actual cure for baldness by utilizing a common, naturally occurring sugar. While hair loss has never killed anyone, it’s certainly a bummer to deal with. In men, a balding head is most commonly a hormonal issue and completely natural. Products like minoxidil can help grow back some of the lost hair, but they can cause side effects like scalp irritation (which can worsen hair loss) and are very toxic to cats (so cat lovers beware). Drugs like finasteride can also halt or reverse male pattern baldness to some degree, but they can cause undesirable side effects like gynecomastia (the development of breast tissue in men). Other remedies like hair plugs or transplants can be costly, and not everyone wants to have surgery just to have a brushable coif. But scientists at the University of Sheffield and COMSATS University Islamabad have discovered a simpler and surprisingly effective way to grow back lost hair using 2-deoxy-D-ribose (2dDR). 2dDR is a sugar that occurs naturally in the human body and is inexpensive to produce. Researchers were originally conducting a study on wound healing when they found that sites with the sugar present also had increased hair growth. When they followed up with an experiment using a gel containing the sugar, they found it to be 80 to 90 percent as effective as minoxidil. The sugar seems to promote hair growth by increasing blood flow to the affected area, bringing back dormant hair follicles and even encouraging new growth. Not only does this apparently work on male pattern baldness, but researchers also believe it may be able to help cancer patients experiencing chemotherapy-induced hair loss. It seems like it may be all over for the comb-over.
[Image description: Sugar cubes against a pink background.] Credit & copyright: Polina Tankilevitch, Pexels
Mind + Body Daily Curio
This is one succulent sandwich. In fact, the Vietnamese bánh mì is positively loaded with savory meat and crunchy veggies, all on a thick-yet-crispy baguette. Eating a bánh mì is practically a prerequisite for anyone visiting Vietnam, and the country takes great pride in all the sandwich’s regional variations. Yet, the bánh mì has a surprisingly tragic and violent history.
A bánh mì is a fusion of Vietnamese and French cuisine. The baguette is distinctly French, as are some commonly used condiments, like pâté. Yet, the star ingredients are Vietnamese meats, such as xíu mại, a type of minced pork, and chả lụa, a kind of sausage. Pork belly is also popular, as is a combination of all three. A good bánh mì practically overflows with veggies, which can include pickled daikon, pickled carrots, and grilled onions, along with herbs like cilantro.
The story of the bánh mì is, unfortunately, a story of war and conquest. The Cochinchina Campaign, also known as the French Invasion of Vietnam, took place between 1858 and 1862. France was eager to spread Catholicism via missionaries, expand its empire in order to seize control of Vietnamese resources, and create secure trade routes to China. Stating that Vietnamese emperor Gia Long should submit to French rule since France had provided him with military aid years prior, the French invaded Vietnam in 1858 and eventually seized control of the country. Until 1954, Vietnam remained under French rule as part of a colony called French Indochina. In this system, the people of Vietnam and their resources were exploited and poverty was widespread. After a costly war against Vietnamese revolutionaries, which ended in a four-month siege of a French garrison in the northwestern city of Điện Biên Phủ, the French finally withdrew from the country.
What does this have to do with bánh mì? Well, when the French abandoned Vietnam, they left their culinary footprint behind. Bread had never been a large part of Vietnamese cuisine before French occupation, while during the occupation it was only considered acceptable for Vietnamese people to eat bread (when they could afford the expensive wheat to make it) in French ways, mainly with butter or cold cuts. But when the French left, it was suddenly safe to combine baguettes with all sorts of Vietnamese ingredients. In Saigon, as migrants fled south during the 1954 Partition of Vietnam, people began selling street food and opening restaurants. While bánh mì can be found all over the world today, the first place to sell it was a small bakery in Saigon called Hòa Mã, run by migrants Lê Minh Ngọc and Nguyễn Thị Tịnh. The rest is cross-cultural culinary history.
[Image description: A close-up photo of a Bánh mì sandwich with meat and cooked onions.] Credit & copyright: Phương Huy, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
FREEWorld History Daily Curio #2930Free1 CQ
5,000 years old and still full of surprises. Stonehenge, the famous, ancient British monument, was long thought to have been built of stones sourced from modern-day Wales. However, researchers have now found that the monument’s 6-ton centerpiece might have come from much farther away. With only Stone Age tools at hand, no one would have blamed Stonehenge’s ancient builders for sourcing their stones nearby. And for the most part, that’s just what they did with the iconic structure and many other stone circles that dot the British Isles. There wasn’t any reason to suspect that the massive altar stone, which lies at the center of Stonehenge, was from somewhere else. At least, not until researchers discovered last year that the stone wasn’t exactly local. More specifically, it wasn’t the right kind of sandstone to have come from Wales; sandstone has a unique composition depending on the region and conditions under which it formed. Once the discrepancy was discovered, archaeologists from the University of Exeter set out to solve the mystery. While they couldn’t take a sample from the altar stone directly, they examined previously excavated pieces of the slab and found that they didn’t match the rest of the stone used to build the site. Instead, the altar stone seems to have come from the Orcadian Basin, an area that covers the northeastern tip of Scotland and extends out into the North Sea. That means the 16-foot slab was carried around 460 miles from its place of origin to where it stands today—almost the entire length of the island. Interestingly, the Orcadian Basin contains the Orkney Islands, an archipelago that extends out from the north coast of Scotland. In past digs, archaeologists found remnants of pottery from that region, showing that people brought more than just sandstone down with them. Researchers say that the presence of the altar stone shows that some ancient European cultures were more developed and culturally connected than previously thought. After all, 460 miles isn’t exactly a stone’s throw away.
[Image description: A photo of Stonehenge, an ancient monument of large, ovular, gray stones.] Credit & copyright: Perituss, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREENutrition Daily Curio #2929Free1 CQ
This is some sweet health news. It sounds almost too good to be true, but researchers at the University of Illinois Urbana-Champaign have found that adding a tablespoon of honey to yogurt actually makes it healthier. That’s because the probiotics in the yogurt survive better when honey is present, which means they can do their jobs better once they reach the intestines. It’s long been known that the probiotics in yogurt have health benefits. Some probiotics are necessary for the production of yogurt itself, but others, like Bifidobacterium animalis, are added simply because they aid digestion. Probiotics have a harrowing journey ahead of them once they’re consumed, though. Between the mouth, the stomach, and the intestines, probiotics have to survive a veritable gauntlet of enzymes, acids, and other digestive bombardments before they can settle down somewhere safe and get to work. Many don’t survive long enough to reach the intestines. That’s why researchers were looking for ways to increase the viability of probiotics in yogurt, and the answer, it turns out, has been there all along. The combination of honey and yogurt—common in the Mediterranean diet—delivers a much more potent probiotic punch than yogurt alone. For their experiment, researchers took a commonly available brand of yogurt and had one group of participants add a little honey to it while a control group ate the yogurt plain. After a few weeks, the groups switched diets so researchers could compare their gut microbiomes, and they found that honey really did help. In particular, the researchers found that B. animalis was much more viable with honey than without. And it has to be honey, too, not just sugar—when they repeated the experiment with plain sugar, it didn’t have the same benefits. You could say this golden sweet treat is the gold standard when it comes to gut health.
[Image description: Yogurt with honey and nuts in a black bowl.] Credit & copyright: Amnah Mohammad, Pexels -
FREEParenting Daily Curio #2928Free1 CQ
Say adieu to baby blues. The state of Oregon just rolled out a new program called Family Connects, and its purpose is simple: help parents with the specialized care that newborns often need. While everyone knows that babies are a handful at the best of times, it’s even harder with brand new babies. In the U.S., new parents have little in the way of medical support once a baby is delivered. Some experts believe that this lack of accessible care after birth has contributed to the country’s high infant and maternal mortality rates compared to other wealthy nations. Oregon’s new, statewide program aims to address the problem. It allows parents to receive at-home visits from nurses who can help them better care for their newborns—and even themselves. For example, a visiting nurse from the program can show parents how to properly hold their infants while feeding or how to deal with issues like colic. Aside from practical advice, the nurses can also bring basic supplies if necessary, and even connect parents with local resources if they’re experiencing food insecurity or financial hardship in general. Finally, the nurses can check on the parents’ well-being, particularly when it comes to things like postpartum depression. This is especially important since many forms of depression can be worsened by the uncertainties and anxieties that come with parenthood. Of course, sending nurses to help new parents in their homes isn’t a new idea. Dr. Elizabeth Steiner, the state senator who advocated for the program, actually got the idea from a similar one already helping parents in Durham, North Carolina. Unlike the one in Durham, though, Oregon’s version covers the entire state. It sounds expensive, but it’s likely to pay dividends in the future. Parents who receive these visits are less likely to suffer from postpartum depression, and Child Protective Services is less likely to have to intervene due to neglect or abuse. They say it takes a village—in this case, the village is an entire state.
[Image description: A white crib decorated with an elephant-patterned blanket and a stuffed elephant.] Credit & copyright: Karolina Kaboompics, Pexels -
FREEUS History Daily Curio #2927Free1 CQ
Some people want to tame the wilderness, but others want the wilderness to tame them. This sentiment was the driving force behind the creation of the Appalachian Trail, which stretches more than 2,190 miles across fourteen states and was completed this month in 1937. The Appalachian Trail was conceived by American forest scientist Benton MacKaye in the 1920s as an extensive footpath that would take travelers through idyllic wilderness, with self-sustaining agrarian communities along the way to support them. In 1925, he created the Appalachian Trail Conference (ATC) to connect existing trails and build new ones, forming one continuous trail that spanned from New England to Georgia.
Progress was slow at first, with work on the trail beginning in earnest in the 1930s. By then, MacKaye was frequently arguing with Myron Avery, a lawyer who joined the ATC with a different vision than his own. Avery imagined the Appalachian Trail as a more straightforward, scenic trail than the rugged route MacKaye proposed, and eventually, MacKaye distanced himself from the project. When the trail was completed in 1937, connecting Maine to Georgia, the result was in line with Avery’s conception of what the Appalachian Trail should be. Unfortunately, the trail was heavily damaged by a hurricane the following year, and the construction of the Blue Ridge Parkway also forced the ATC to rework the route. Then, with the onset of WWII, work on the trail halted due to a lack of volunteers. In time, the trail was repaired and found two new champions: Earl V. Shaffer and Emma “Grandma” Gatewood. Shaffer became the first person to hike the entirety of the trail (2,050 miles at the time) in one journey in 1948, a feat known as “thru-hiking.” A veteran of WWII, Shaffer claimed in a book detailing the thru-hike that he was “walking off the army.” Later, in 1955, Gatewood became the first woman to do the same (by then the trail was 2,168 miles) at the age of 67. Today, the ATC is known as the Appalachian Trail Conservancy and continues to maintain the trail. It also keeps count of successful thru-hikes, and though thousands attempt the feat each year, only about 20,000 people have traversed the entire length of the trail. That’s a lot of people, but you could still call it the road less traveled.
[Image description: A portion of the Appalachian Trail atop Mount Hight, in New Hampshire. The trail is rocky and surrounded by green pine trees with green mountains in the distance.] Credit & copyright: Ken Gallager at en.wikipedia. This work has been released into the public domain by its author, Ken Gallager, at the English Wikipedia project. This applies worldwide. -
FREEUS History Daily CurioFree1 CQ
It was about time! The women’s suffrage movement in the U.S. took the better part of a century to achieve its goal. The entire process ended up coming right down to the wire…and a note from a lawmaker’s mother. After the 19th amendment was passed by the U.S. Congress in 1919, it needed to be ratified by at least 36 states to be fully adopted. The deciding vote was cast on this day in 1920 by Harry Burn, a young representative in Tennessee who had previously opposed women’s suffrage.
The road to voting rights for American women began in 1848 at the Seneca Falls Convention in the state of New York. There, prominent women’s rights advocates gathered from around the country and adopted the Declaration of Sentiments, a document of grievances pertaining to women’s lack of agency in marriage, business, and education. Among the listed injustices was a lack of voting rights for women, which forced them to submit to laws that they had no voice in forming. Through the document, the delegates of the convention argued that, being unable to exercise their inalienable right to the elective franchise, they were left without representation…yet single women with property were still subjected to the same taxes as men. This line of reasoning and the composition style of the document were meant to echo those of the Declaration of Independence. The Declaration of Sentiments was controversial, to say the least. Opponents of women’s rights ridiculed the convention-goers, and even some advocates withdrew support in its wake, deeming the sentiments too extreme.
Meanwhile, proponents of suffrage proposed a constitutional amendment in Congress to grant women the right to vote (later called the Susan B. Anthony Amendment). It was formally proposed in 1866, 1868, 1878, 1887, 1914, 1918, and 1919 before finally passing on a second vote that final year. In between, suffragists like Susan B. Anthony and Carrie Chapman Catt of the National American Woman Suffrage Association continued to petition, protest, and even testify before Congress in pursuit of the right to vote.
Even with the approval of Congress, there was still vehement opposition to women’s suffrage nationwide. And there was also the daunting prospect of having at least 36 states ratify the amendment before it could be adopted. By March of 1920, 35 states had ratified the amendment, and the Tennessee General Assembly began voting on the issue in August. While the State Senate voted to ratify, the House was split 48 to 48 twice. Eventually, a third vote was held on August 18, and among the representatives was 24-year-old Harry Burn. He had voted “nay” twice already, as telegrams from his constituents and colleagues had urged him to. But, during the third vote, he was given a letter from his mother, Febb Ensminger Burn. In the letter, which arrived shortly before the vote, she wrote, “Hurrah and vote for Suffrage and don’t keep them in doubt. I noticed Chandlers' speech, it was very bitter. I’ve been watching to see how you stood but have not seen anything yet...Don't forget to be a good boy and help Mrs. ‘Thomas Catt’ with her "Rats." Is she the one that put rat in ratification, Ha! No more from mama this time.” Swayed by his mother, Burn voted in favor of ratification, to the surprise of the chamber. Adoption of the 19th amendment was certified soon after.
Although it was a major victory for universal voting rights, the 19th amendment didn’t guarantee an equal voice to all in practice. Black Americans had been subject to voter suppression tactics and intimidation for decades, and Black women didn’t fare much better when attempting to reach the polls. It wouldn’t be until the Voting Rights Act was passed in 1965 that another major step forward would take place. Even so, some Americans still struggle to cast a vote, disenfranchised by legislation that makes it difficult to register or by barriers like inaccessible polling places. If only we still had Febb Ensminger Burn to help the country along!
[Image description: A black-and-white photo of a suffrage worker poking her head through a wall of newspaper clippings about the passage of the 19th Amendment.] Credit & copyright: Suffrage worker with newspaper clippings on the passage of the Nineteenth Amendment granting women the right to vote by the United States Senate, 1919. Missouri History Museum. Wikimedia Commons. This work is in the public domain in the United States because it was published (or registered with the U.S. Copyright Office) before January 1, 1929. -
FREEMind + Body Daily CurioFree1 CQ
This dish is so nice, it’s only fitting that Nice is where it’s from! Ratatouille, a bright, seasonal dish of stewed vegetables, originated in the French city of Nice, on the country’s southeastern coast. Despite its prominence as one of France’s most famous foods (thanks in part to the Disney/Pixar movie named after it), ratatouille hasn’t been around all that long. However, it draws on a long history of vegetable dishes native to the Mediterranean coast.
Ratatouille gets its unusual name from Occitan, a Romance language spoken in southern France, among other places. The Occitan word ratatolha means “chunky stew,” while the French word tatouiller means “to stir up.” Simply put, it’s a dish of stewed vegetables with ingredients that can vary depending on who’s making it. Ratatouille almost always contains eggplants and tomatoes, though. Other common vegetables used in the dish include zucchini, bell peppers, and onions. Garlic and chives are often used for seasoning.
Ratatouille can look very different depending on who’s serving it. Sometimes, it looks like a chunky stew and is served in a bowl. Sometimes the vegetables are cooked together, though there are some who insist that each type of vegetable be cooked separately, in olive oil, before being combined. Others serve ratatouille’s cooked vegetables sliced, usually on top of a flavorful sauce, in order to show off their color.
Although various kinds of vegetable stew have always existed in France, ratatouille as we know it today was invented in the 18th century. French farmers, unable to afford meat, chopped and stewed vegetables instead. For a long time, ratatouille was thought of as a peasant dish due to its origins, but it didn’t take long for the simple dish to spread throughout France. In 1903, one of the first written references to ratatouille appeared in La Cuisine à la Nice by French chef Henri Heyraud. A few decades later, in 1950, ratatouille grew popular in England thanks to A Book of Mediterranean Food by British cookbook author Elizabeth David. The dish’s momentum couldn’t be stopped, and by the 1980s it was popular everywhere that French food was served, including American French restaurants. You could say these veggies have a certain je ne sais quoi.
[Image description: Ratatouille, a dish made of sliced vegetables, with orange sauce on a white plate.] Credit & copyright: El Nuevo Doge, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEScience Daily Curio #2926Free1 CQ
How long has it been since you cleaned your microwave? You might want to give it a scrub after reading a recent study. Despite the commonly held belief that microbes can’t survive the harsh radiation inside, scientists have found thriving communities of germs living in microwaves. These ubiquitous cooking machines work on a simple principle: they use electromagnetic waves to excite water molecules in food, heating it from within rather than applying external heat. Since all living things contain water, even microorganisms, it’s tempting to believe that no living creature could survive the 1,000-watt assault. Unfortunately for germaphobes, microorganisms can—and definitely do—survive. That’s according to microbiologists at the University of Valencia in Spain, who found the same microbes that commonly live on kitchen surfaces like counters also living inside microwaves. They discovered this by taking samples from 30 microwaves in home kitchens, laboratories, offices, and other shared kitchens or dining areas, then growing the samples in culture. In all, they found bacteria representing 747 different genera in varying levels of diversity. Microwaves in single households held the lowest diversity of bacteria, while those in scientific labs understandably had the highest. Some of the bacteria, especially those in lab microwaves, seemed to have developed a higher-than-average resistance to radiation. While some of the microbes were the same, mostly harmless kind found on human skin (which makes sense, considering who’s using the microwaves), others were food-borne bacteria that could be dangerous if ingested. So, the bad news is that microwaves harbor more germs than previously thought, but the good news is that, since the germs present in microwaves are the same as those on other kitchen surfaces, microwaves don't actually pose a unique threat to public health. These appliances are also pretty easy to clean; researchers recommend using a diluted bleach solution or any common disinfectant spray. Time to sanitize some surfaces.
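To get a rough sense of the "1,000-watt assault," here is a back-of-envelope estimate, not drawn from the study itself, assuming the oven delivers its full rated power into 250 milliliters of water and that nearly all of that energy is absorbed. Heating the water from about 20°C to 80°C would take roughly a minute:

Q = m c \Delta T \approx 0.25\,\mathrm{kg} \times 4186\,\tfrac{\mathrm{J}}{\mathrm{kg\cdot K}} \times 60\,\mathrm{K} \approx 63\,\mathrm{kJ}, \qquad t = \frac{Q}{P} \approx \frac{63{,}000\,\mathrm{J}}{1{,}000\,\mathrm{W}} \approx 63\,\mathrm{s}.

That kind of rapid heating is why few would expect microbes to survive; the surprise of the study is that bacteria living on the mostly dry, unevenly heated surfaces of the cavity largely escape it.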
[Image description: A white microwave.] Credit & copyright: Hedwig von Ebbel, Wikimedia Commons. This work has been released into the public domain by its author, Hedwig von Ebbel. This applies worldwide.