Curio Cabinet / Daily Curio
-
Mind + Body Daily Curio
Have a hoppy breakfast! Don’t worry, though—there’s not actually any toad in the famed British dish called toad in the hole. This cheekily named breakfast food is actually made with sausage, and it’s been popular in England for centuries.
Toad in the hole is made by baking sausages in a Yorkshire pudding batter made from eggs, flour, and milk. The sausages are normally arranged in a line or other pattern on top of the batter so that they’re half-submerged during baking. This allows them to crisp on top while their flavor sinks into the batter below. The result is a warm, savory, meaty breakfast dish that’s usually served with onion gravy.
The first written record of toad in the hole comes from 18th-century England, though dishes that combined meat and pastry, such as meat pies, existed long beforehand. Unlike meat pies, though, which were considered an upper-class dish due to how much meat they contained, toad in the hole was created as a way for poorer families to make use of whatever bits of meat they had, usually as leftovers. Beef and pork weren’t always available to peasants. A 1747 recipe for the dish called for pigeon meat, while others called for organ meats, such as lamb kidney. As years went by and England’s lower classes had more opportunities for economic advancement, toad in the hole became a heartier, meatier dish. Today, it’s sometimes served in British schools at lunchtime, but it’s most popular as a breakfast food.
As for the dish’s unusual name, no one really knows where it came from, though we do know that it was never made with actual toad (or frog) meat. The “hole” part of the name might come from the fact that sausages leave behind holes if they’re picked out of cooked batter, while “toad” might be a somewhat derisive reference to the cheap kinds of meat originally used in the dish. The name might also refer to the fact that toads sometimes hide in holes with the tops of their heads poking out to wait for prey, just like the sausages in toad in the hole poke halfway out of the batter. Either way, don’t let its name dissuade you from trying this meaty marvel the next time you find yourself across the pond. It won’t croak, and neither will you!
[Image description: A glass pan full of toad-in-the-hole, five sausages cooked in a pastry batter.] Credit & copyright: Robert Gibert, Wikimedia Commons. This work has been released into the public domain by its author, Robert Gibert. This applies worldwide.
-
Mind + Body Daily Curio #3038
Delicious things shouldn’t be hazardous. Yet, just as delicious cheeseburgers can lead to high cholesterol if consumed in abundance, tuna can lead to mercury poisoning. This dangerous condition can damage the central nervous system and is particularly harmful to children. As for how mercury ends up in fish, the metal is naturally present in the ocean, where bacteria turn it into toxic methylmercury. Plankton absorb this toxic compound, then pass it along to the small fish that eat them, which pass it along to the larger fish that eat them. The larger a fish is, the more mercury it is exposed to, and since tuna reach average weights of around 40 pounds (with some massive ones weighing as much as 2,000 pounds), mercury in tuna meat is bound to be an issue. This is why pregnant women, nursing mothers, and people with certain medical conditions are told to steer clear of tuna, and even healthy people are advised not to eat too much. However, a recent discovery might make mercury-laden tuna a thing of the past, at least when it comes to canned meat.
Researchers from Chalmers University of Technology in Sweden found that when tuna was packaged in a water solution containing cysteine, an amino acid, up to 35 percent of the mercury was removed from the meat. While this is a welcome breakthrough for tuna-lovers everywhere, there may be no need to wait until this new packaging becomes available. For most people, two to three servings of tuna per week are already deemed safe, and different kinds of tuna contain different levels of mercury, making it safe to eat some kinds (such as canned light tuna) more often than others, like albacore. No need to throw out that tuna sandwich; just be mindful of how many you’re eating per week!
[Image description: A can of tuna from above, with some green leaves visible beside the can.] Credit & copyright: Towfiqu barbhuiya, Pexels
-
Biology Daily Curio #3037
You could say that these researchers discovered something fishy. Recently, in the Mediterranean Sea near the island of Corsica, scientists went diving to study fish behavior. There was a problem, though. Every time the researchers went in the water, they took food with them to give to the fish as rewards for following certain commands. However, the seabream in the area always seemed to know who was carrying the food and would swarm that person immediately. The researchers even used other divers as decoys to no avail, and it seemed that their research progress was more or less halted by the hungry, keen-eyed fish.
Instead of giving up, the team pivoted and took their research in a new direction. Katinka Soller, one of the researchers, spent 12 days training two different types of seabream to follow her around by enticing them with food. She started out wearing a red vest, but gradually shed the bright color over the course of the experiment. Then she had another diver join her wearing different colors. At first, they were both swarmed by fish, but once it was clear that only Soller was giving out food, the fish ignored the other diver. It seems that humans have been largely underestimating fish cognition, as the seabream were able to differentiate between people based on what those people were wearing. To confirm this, the researchers went down again, this time wearing identical gear. They found that the fish weren’t interested in either of them, since they couldn’t tell which person might have food. Yet even small clothing differences, like variations in the color of the divers’ flippers, were enough for the fish to distinguish between each person. These brainy fish must be breaming with pride.
[Image description: Two seabream fish against a black background.] Credit & copyright: Beyza Kaplan, Pexels
-
US History Daily Curio #3036
Every delivery has to start somewhere. This month in 1792, President George Washington signed the Post Office Act, creating a nationwide mail service that persists to this day. Before the Post Office Act, the thirteen American colonies had their first taste of comprehensive mail delivery thanks to Benjamin Franklin, who developed the colonial mail service. As the American Revolution approached, the value of expanding such a service was clear: should conflict break out against the British, communication between the colonies would be paramount. Naturally, Franklin was chosen as the first postmaster general. This early version of the postal service used couriers to relay messages from the battlefield to the Continental Congress, and it was essential to the success of the Revolution. Still, there was no permanent form of nationwide mail delivery until 1792, when the Post Office Act established the Post Office Department.
Besides delivering packages and war correspondence, the early Post Office made newspapers much more widely available by making their delivery cheaper. The Post Office Act also established some important rules that helped people trust the new system. Firstly, the act strictly forbade the government from opening the mail of private citizens for the purposes of surveillance. Secondly, it gave the power of establishing new mail routes to Congress, not the executive branch. Throughout the 19th century, the Post Office was critical for the U.S. as it expanded its territories and its citizens became ever more spread out. During this time, the Post Office Department came up with several innovations, like postage stamps and standardized rates. Nearly two centuries later, in 1970, the Post Office was reorganized under the Postal Reorganization Act, which changed its name to the United States Postal Service (USPS). While private parcel delivery companies also exist today, many of them rely on the USPS to make their business models feasible. Remember to thank your mail carrier, especially when they’re beset by snow or rain or heat or gloom of night!
[Image description: Three shipping boxes on the ground, with one on a dolly. Two stickers on the boxes read “FRAGILE.”] Credit & copyright: Tima Miroshnichenko, Pexels
-
World History Daily Curio #3035
This gives new meaning to “too much of a good thing.” This month in 1478, George Plantagenet, Duke of Clarence, was supposedly drowned in a barrel of wine. We’ll never know for certain whether this unusual method of execution was actually used, since stories in the 15th century were commonly passed around until they became exaggerated. We do know that, one way or another, the duke was executed for treason. We also know some details about the events, both sad and violent, leading up to his death.
George was the younger brother of Edward IV of England. The brothers’ father led the House of York in the Wars of the Roses against the House of Lancaster over the right to the English throne. Though their father died in battle, Edward eventually took the throne. Given a dukedom by his brother, George could have lived the rest of his life in ease and prosperity. Instead, he and others in the House of York were grievously insulted when Edward married Elizabeth Woodville, a widow from the House of Lancaster. The relationship between the two brothers only soured further when George married the cousin of a man who was a vocal critic of Edward. At one point, George even helped lead a rebellion against his own brother, captured him, and held him prisoner for a time, though wartime trouble with the Scots eventually led to Edward’s release.
Edward suspected that George was plotting to overthrow him, and George was forced to flee to France for a time to avoid Edward’s wrath. The brothers had a brief reconciliation after Henry VI was restored to the throne via political machinations; George helped Edward defeat him and retake the throne. Then, two things happened that finally ended the feud between the brothers once and for all. First, George’s wife passed away a few months after giving birth, and he accused one of her female servants, or “ladies,” of poisoning her. Without the proper authority to do so, he had the servant arrested and executed, which angered Edward. Around the same time, someone in George’s household was accused of “imagining the king’s death by necromancy.” When George publicly protested the charge, he was charged with treason himself. Although historical accounts show that he was executed in private, rumors began circulating shortly after his death that he was drowned in Malmsey wine, an expensive fortified wine from Portugal. Some accounts even claim that he requested the method himself to mock his brother’s drinking habits. With his own brother constantly trying to overthrow him, could you blame the guy for having a drink now and then?
[Image description: A black-and-white illustration of the Duke of Clarence being drowned in a barrel of wine. The Duke is being held upside down by his feet as a guard pushes on his head.] Credit & copyright: John Cassell's illustrated history of England vol. II (London, 1858). Internet Archive. Public Domain, Mark 1.0
-
Mind + Body Daily Curio
In many parts of the U.S., temperatures are currently plunging…making it a perfect time to cozy up with some warm, sugary beignets. These doughy treats are heavily associated with New Orleans today, but, like much of The Big Easy’s cuisine, beignets originated in France.
Beignets are sometimes referred to as donuts since they’re also made from deep-fried dough; in fact, the beignet is Louisiana’s official state donut, one of only two official state donuts in the U.S. Unlike most donuts, though, beignets are rectangular and traditionally made from pâte à choux, a French dough made from flour, butter, eggs, and a substantial amount of water. During cooking, the excess water turns to steam, making the dough puff up and become airy. Not all beignets are made this way, as some do use leavened dough, making for a thicker pastry. French beignets were often served with hot chocolate for dunking, while Louisiana beignets are traditionally served with butter and powdered sugar.
People have been frying dough for centuries, all over the world, and France is no exception. Beignets began as a 16th-century French Mardi Gras food, served during and after the yearly celebration. Then, during the French and Indian War in the 18th century, the British forced large swaths of French people from their homes in Acadia, a territory spanning parts of modern Nova Scotia, New Brunswick, and Prince Edward Island. Seeking a new French home far from the British, these displaced people came to Louisiana, which was a French colony at the time. The term “French Acadian” was soon shortened to “Cajun,” and the culinary traditions they brought with them from France changed Louisiana’s food landscape forever.
At first, Louisiana beignets remained mostly a Mardi Gras tradition. But their sugary goodness couldn’t be contained, and, in 1862, a coffee stand called Café du Monde opened in New Orleans, selling beignets alongside their drinks. Soon, many other New Orleans restaurants and food stands were selling beignets outside of Mardi Gras season. To this day, Café du Monde sells beignets 24/7. Hey, there’s never a bad time for something this delicious.
[Image description: A tray of beignets covered in powdered sugar on a table with two coffee drinks.] Credit & copyright: Hamalya Comeau, Pexels
-
Mind + Body Daily Curio #3034
Do you ever feel like the world’s a little brighter in the morning, and not just because of the sun? Turns out, feeling better in the morning is more common than previously thought, according to new research coming out of University College London.
“Sleep on it” is age-old advice for anyone fretting over a major decision or dealing with bad news. Now, science seems to confirm that it’s worth listening to. Researchers surveyed almost 50,000 people over a period of two years, asking them to keep track of their moods throughout the day, and the data shows that people do, in fact, feel better in the morning. What’s more, people’s moods fluctuate on a relatively predictable schedule throughout the day and even throughout the week. Generally, people feel their best in the morning, with mood peaking in the late morning. By mid-afternoon, decision fatigue begins to set in, and mood declines. People’s moods continue to decline throughout the evening, reaching their lowest point at midnight. All this is despite the fact that the stress hormone cortisol is at its highest levels in the morning and at its lowest at night.
Surprisingly, despite Monday’s lackluster reputation, people tend to feel better on Mondays and Fridays than they do on Sundays. Specifically, they tend to feel more satisfied with their lives on those days, while happiness peaks on Tuesdays. Does that mean that the best time of the week is Tuesday morning? More research is needed before we’ll know for sure. While the researchers accounted for age, health, and employment status in their study, they didn’t gather data on sleep cycles, weather, and other factors that might contribute to fluctuating moods. They also didn’t differentiate between physical and mental well-being this time around. Nevertheless, their research might lead to improvements in mental health care. Mainly, they believe that more mental health care should be available later in the day, when people are feeling their lowest. Late-night therapy sessions don’t sound like the worst idea.
[Image description: A grassy field behind a wooden gate at dawn, with sunbeams shining through clouds.] Credit & copyright: Matthias Groeneveld, Pexels
-
Literature Daily Curio #3033
Women face a lot of pressure. That might seem like an obvious statement, but someone had to be the first to write about it. On this day in 1963, American activist Betty Friedan’s The Feminine Mystique was published. The book was a frank indictment of the prevailing myths surrounding women’s lives in post-World War II America.
Friedan defined the “feminine mystique” as the prevailing cultural idea that all women should feel fulfilled by dedicating themselves to domesticity. According to this idea, homemaking, raising children, and being dutiful wives to their husbands was all it took for women to be fully-realized, completely satisfied individuals. This notion was not just a matter of sociology, but psychology. Friedan herself drew inspiration from psychologists who believed that clinging to the ideals of the feminine mystique denied women the chance to grow and develop properly as adults, a particularly salient point in the post-World War II era. During the war, women had been enlisted into the workforce while many working-age men were deployed. Upon the men’s return, the vast majority of women were ousted from the workforce, and were thus suddenly denied financial and social independence. The ideal of the nuclear family was heavily promoted in the U.S. during the Cold War, making the role of housewife a matter of patriotism.
In reality, plenty of women during this time were highly dissatisfied with their lives. Friedan cited a number of different statistics in her book showing that the 1950s and 1960s were a period of social regression for women. Fewer women went to college, fewer women stayed in college, and in interviews with housewives, Friedan found that many of them didn’t find their limited roles fulfilling. Meanwhile, popular culture often blamed this dissatisfaction on the women themselves, attributing their lack of fulfillment to their higher education or professional ambitions. In the decades following the publication of her book, Friedan faced criticism from other feminists as well, largely for statistical inconsistencies and for the increasingly dated scope of her book as more women returned to the workforce. Regardless, The Feminine Mystique is remembered as a landmark book that helped jumpstart a national conversation about women’s rights. After all, there are some problems that should become dated.
[Image description: The feminine gender symbol in black against a pink background.] Credit & copyright: Author-created photo. Public Domain.
-
Literature Daily Curio #3032
Have you ever bought a book because a quote from another author told you to? Such endorsements, printed on the backs of books or on the inside of their dust jackets, are called blurbs, and one publisher is doing away with them. Simon & Schuster is one of the largest publishing houses in the U.S., and like every other publishing house, they’ve implicitly required their writers to solicit blurbs from other writers for their books. The most common argument in favor of blurbs is that they give potential buyers more confidence in their purchase, especially if the blurb is from an author they’re already familiar with. Others believe that bookstores and other large-scale buyers place a similar level of trust in blurbs, which helps sales.
The practice of acquiring blurbs, however, can be very taxing. Writers, especially those with little name recognition, must ask more established writers to read and endorse their books. Writers’ agents devote much of their time to finding blurb-writers for their clients, and editors, too, have to run through their contact lists to reach out for blurbs. Then there’s the fact that blurbs aren’t always sincere. Many writers exchange blurbs as favors, though some end up writing many more than they ever receive. The amount of time it takes to read a book means that many writers don’t actually finish the entire thing before writing a blurb. Sean Manning, the current publisher of Simon & Schuster’s flagship imprint, considers blurbs a waste of time and believes that they don’t always reflect the artistic merit of a book. After all, many of the world’s greatest novels were published without blurbs, and books with blurb-covered jackets don’t always do well commercially or critically. Still, writers are divided. Some believe that blurbs are meaningless, but others believe they’re an important part of marketing. Only time will tell how blurbless books will do.
[Image description: A stack of books.] Credit & copyright: Poppy Thomas Hill, Pexels
-
Music Appreciation Daily Curio #3031
There are instruments that play music, and there are instruments that help define it. A Stradivarius violin, known as the “Joachim-Ma” Stradivarius, recently sold at auction for $10 million. After fees, the final price came out to be $11.5 million for the anonymous buyer, but even that didn’t break the record for the most expensive Stradivarius ever. That honor goes to the “Lady Blunt” Stradivarius, which sold in 2011 for $15.9 million. As eye-popping as the prices seem, there are plenty of reasons why these famed historical violins continue to be so highly prized centuries after they were made.
Handcrafted by Antonio Stradivari in the 17th and 18th centuries, Stradivarius instruments are renowned for the quality of their sound. Stradivari made around 1,200 instruments during his career, of which around 500 survive today, but he is known mostly for his violins. Some speculate that his violins are unique due to the wood from which they’re made. Stradivari used spruce, oak, and willow for his instruments, and the specific trees supposedly grew during the Little Ice Age, between 1300 C.E. and 1850 C.E., when slower growth made the wood denser than modern timber.
Others believe that there isn’t anything inherently superior about Stradivarius violins, and indeed, in blind tests, high-end violins made today often outperform the legendary instruments. In the case of the Joachim-Ma Stradivarius, however, it’s the instrument’s history that gives it value. It was formerly owned by one of its namesakes, the legendary violinist Joseph Joachim, in the 1800s. Then, it was purchased by another legend and namesake, Si-Hon Ma, a violinist and the inventor of the Sihon mute, a device that can stay attached to a violin to dampen its sound as needed. Ma’s estate donated the Stradivarius to the New England Conservatory (NEC) in 2009, and proceeds from the sale will be used to fund a scholarship program at the conservatory. The conservatory was originally hoping for a final sale price between $12 and $18 million, but hey, $10 million is nothing to sneeze at!
[Image description: A wooden Stradivarius violin against a gray background.] Credit & copyright: "Gould" Violin, Antonio Stradivari, Italian, 1693. The Metropolitan Museum of Art, Gift of George Gould, 1955. Public Domain, Creative Commons Zero (CC0).
-
Mind + Body Daily Curio
Happy Valentine’s Day! If you’re dining out with someone special this evening, chances are good that you’ll see lobster on the menu. After all, what could be more romantic and fancy than a succulent, butter-dipped seafood meal in the perfect shade of Valentine’s red? Well, it might be hard to believe, but for much of American history, people would have turned up their noses at such a meal. In fact, in the early 1700s, lobsters were considered the “poor man’s chicken” and were even referred to as the “cockroaches of the sea.”
Lobsters are large, ocean-dwelling crustaceans that come in a variety of colors and sizes; in fact, there are 800 known lobster species. The most commonly-eaten species is the American lobster (also called the Maine lobster) which lives off the North Atlantic coast of the U.S. They’re the largest lobsters in the world, reaching lengths of around 24 inches and weighing up to 9 pounds, though some wild American lobsters have reached gargantuan weights of over 40 pounds. Lobsters are usually cooked whole and, though it’s controversial, they are often killed just before they’re served, usually by being dropped into boiling water. This isn’t just done for freshness or flavor, but because of health concerns. Lobsters’ bodies harbor bacteria that begin to spread quickly upon their death, so there isn’t much time to safely eat a lobster after it dies. Killing it just before serving is the preferred method to avoid food poisoning. Though lobster can be served in many ways and added to plenty of dishes like soup or pasta, lobsters are most commonly eaten whole. Diners use special tools called lobster crackers to break through the shell and reach the light, sweet meat within. Some choose to dip the meat in melted butter, which is meant to enhance its subtle flavors.
It’s safe to say that early American colonists didn’t know what to make of lobsters. Though they’d been eaten in various European countries and used for fertilizer and fish bait by some Native Americans for centuries, English settlers near the Atlantic Ocean were perplexed by the crustaceans. Lobsters were so numerous at the time that their bodies would sometimes wash ashore in enormous piles. Eventually, the colonists began making use of the plentiful dead lobsters by feeding them to prisoners, slaves, and indentured servants…but since they didn’t understand the health concerns surrounding less-than-fresh lobster meat, this resulted in some serious food poisoning. Supposedly, some servants and workers even demanded clauses in their contracts stating that they wouldn’t eat lobster more than twice per week. Even after it was discovered that serving them fresh prevented food poisoning, lobsters’ reputation remained poor for some time in the U.S.
Strangely, it was the rise of long-distance passenger trains that changed things. Train managers were able to serve fresh lobster to passengers who were unfamiliar with Atlantic seafood and had no inkling of its bad reputation. Passengers, including wealthy travelers, soon became obsessed with the crustaceans, which they saw as a delicacy. By the 1870s, Maine was famous for lobster, and the first lobster pound, or large storage area for live lobsters, was established in Vinalhaven, Maine. By the 1880s, lobster had officially become expensive, and chefs in big American cities began including it on their menus in various forms. Today, lobster maintains a fancy reputation, though it’s also served in cheaper forms, especially on lobster rolls and in soups. You can put rubber bands on their claws, but these pinchy critters can’t be constrained by social class.
[Image description: Lobster dish with head and tail visible in a white bowl with a wine bottle and glasses in the background.] Credit & copyright: ROMAN ODINTSOV, Pexels
-
Nutrition Daily Curio #3030
Throw out those boxed dyes—vegetables are the key to keeping your youthful hair color. At least, that’s what one group of researchers from Nagoya University in Japan seems to have concluded in a recent study. While hair-graying is a natural and harmless process caused by the breakdown of pigment-delivering cells, not everyone is happy to lose their original hair color as they age. Now, it seems they may have a choice in the matter, as scientists have identified a common antioxidant that can prevent gray hair in mice. For their study, researchers focused on three antioxidants: luteolin, hesperetin, and diosmetin. They took mice that were bred to go gray just as humans do, and exposed them to all three antioxidants both orally and topically. Luteolin turned out to be the most effective; it prevented the mice from going gray regardless of how it was given. This could be good news, since luteolin is fairly common and inexpensive. It can be found in a variety of vegetables, like celery, broccoli, onions, and peppers, and it’s even available as a supplement. The antioxidant prevents gray hair by supporting the health of melanocytes, specialized cells that help distribute melanin, or biological pigment, in hair, skin, and eyes. Usually, these cells have a fairly short lifespan, and they die off around the time that an average person reaches middle age. Luteolin seems to extend the cells’ lifespan, but it still isn’t a cure-all for every hair-related issue. It isn’t known to improve hair’s shine or texture, and it isn’t believed to prevent hair loss. Hey, at least you can keep your hair color as you age, if not your hair count!
[Image description: Sliced broccoli in a metal bowl.] Credit & copyright: Cats Coming, Pexels
-
Games Daily Curio #3029
What’s the latest youth craze? Ask your grandma! In major cities across the U.S., the ancient Chinese game of mahjong is gaining traction with young people. Until recently, it was largely seen as a game for older, usually Chinese or Chinese-American, players. Mahjong means “sparrows” in Mandarin, and it has its roots in the 19th century, though its exact origins are murky. Early on, mahjong had regional variations unique to different provinces in China, and though around 40 different versions of the game still exist today, the vast majority of modern mahjong is the version that gained popularity in the early 1900s. The game is played with 144 tiles called pais, traditionally made of bamboo, cow bone, or ivory, though today they’re usually plastic. Each piece features an image of a sparrow, a Chinese character, or other symbols. Players take turns drawing and discarding tiles, trying to form matching groups. The first player to complete a hand of 14 tiles, made up of four sets and a pair, wins.
While the game was largely unknown outside of China for much of its early history, it gained popularity in the West thanks to American businessman Joseph Babcock. Babcock learned of mahjong while working for Standard Oil and living in Shanghai prior to WWI. He started importing mahjong sets to the U.S. in the 1920s, where the game became popular with wealthy women who could afford the expensive, hand-carved sets and had the time to play. In addition to the Chinese Americans who had enjoyed mahjong for years, Jewish families in the U.S. embraced the game starting in the 1950s, and they form a significant proportion of American mahjong players to this day. In recent years, mahjong clubs have been popping up in cities like Los Angeles and New York, where people from all walks of life gather together, united by their passion for the game. Indeed, it has always been a game that transcended social and cultural barriers, having even been played by the Empress Dowager of China in the late 1800s. From the royal court to social clubs, all you need are some friends and a mahjong set to pick up this ancient game.
[Image description: Rows of mahjong tiles with numbers and Chinese characters.] Credit & copyright: HandigeHarry at Dutch Wikipedia. This work has been released into the public domain by its author, HandigeHarry at Dutch Wikipedia. This applies worldwide.
-
FREEArt Appreciation Daily Curio #3028Free1 CQ
Properly maintained, a canvas painting can last for centuries. In times of war, however, it takes no time at all for it to be lost. Such was the fate of countless works of art in Ukraine, where the 2022 Russian invasion destroyed some museums and left others badly damaged or looted. Fortunately, many surviving pieces have been taken to a museum in Germany, where they are now featured in a moving exhibition. When Russia’s invasion started, the dedicated workers of the Odesa Museum of Western and Eastern Art evacuated the building’s canvas tenants, taking them to a storage facility in Lviv. The museum’s collection included paintings from the likes of Andreas Achenbach, Cornelis de Heem, and Frits Thaulow, as well as other artifacts of cultural and historical significance.
However, poor storage conditions led to concerns that the artifacts were being damaged by moisture and mold. In 2023, 74 of the paintings from Odesa were sent to Germany for restoration, some of them already removed from their frames. In Germany, the paintings were cleaned up and, last month, most of them were placed on display at the Gemäldegalerie, an art museum in Berlin. They form a new exhibition called From Odesa to Berlin: European Painting From the 16th to the 19th Century, which is divided into nine “chapters.” Sixty of the paintings are refugees from Odesa, while an additional 25 come from the Gemäldegalerie’s own collection. The exhibition is a showcase of both Ukrainian artists and the Odesa museum’s collection, which had largely been overlooked by the rest of Europe. As the museum’s press release stated, the exhibition shows the “multifaceted nature of the Ukrainian collection, which has hitherto been little known in Western Europe.” What better time for art to shine its uniquely hopeful light?
[Image description: The Ukrainian flag flying at the Hall of Warsaw in Poland.] Credit & copyright: Cybularny, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
FREEScience Daily Curio #3027Free1 CQ
Who knew that you could use the same stuff to clean dirt and water? Scientists at the University of Waterloo have found a way to remove a harmful form of chromium from the environment using a form of charcoal called biochar. Biochar is a popular soil amendment made by burning organic waste in a low-oxygen environment. Like regular charcoal, biochar’s particular properties can vary depending on what types of plants are used to make it, but no matter what, it’s a carbon-rich substance that can improve soil health. One of the ways it does this is by absorbing toxic pollutants in the soil, but the recent discovery of its effects on chromium has researchers particularly excited.
Chromium is the element that makes stainless steel stainless. The heavy metal exists in two very different forms. One is chromium(III), which is not only harmless, but essential to the proper function of the human body as a micronutrient. Chromium(VI), on the other hand, is a dangerous carcinogen created during industrial processes like leather tanning and the creation of stainless steel. Symptoms of chromium(VI) exposure can be as mild as skin rashes or upset stomachs, but it can also lead to stomach ulcers, respiratory problems, reproductive issues, damage to the kidneys or liver, and ovarian cancer. Chromium(VI) can be difficult to remove from groundwater, but that’s where biochar comes in. Researchers found that when biochar was added to contaminated water, it was highly effective at absorbing chromium(VI). In fact, biochar actually converted the toxic chromium(VI) to the non-toxic chromium(III). Apparently, in the presence of biochar, the chromium isotopes fractionated, with lighter isotopes being removed faster than heavier ones. Looking at this ratio of different isotopes also makes it much easier for researchers to monitor the progress of groundwater cleanup. It’s a dirty job, but biochar can do it.
[Image description: A pile of biochar on a white tarp.] Credit & copyright: USDA Forest Service photo by Deborah Page-Dumroese. Public Domain.
-
FREEDaily CurioFree1 CQ
This just might be the coolest dessert there is. As we kick off Black History Month, we're paying homage to Alfred L. Cralle, the Black American inventor who, on February 2, 1897, patented the ice cream scoop. Cralle was interested in mechanics from an early age and, after studying at Wayland Seminary in Washington, D.C., took jobs as a porter at a hotel and pharmacy in Pittsburgh, Pennsylvania. It was these jobs that inspired his most famous invention. Cralle saw firsthand how difficult it was to scoop ice cream one-handed, since it stuck to regular serving utensils. So, he developed a scoop with a built-in scraper to dislodge the ice cream easily, even with one hand. Such scoops are the standard in ice cream shops to this day. Cralle’s invention allowed for ice cream to be served faster, which undoubtedly helped it become one of the most important aspects of soda fountain culture in the early 20th century. It was from that culture that today’s featured food was born: the iconic sundae known as the banana split.
A banana split is an ice cream sundae traditionally made with three scoops of ice cream arranged between two halves of a banana. Toppings can vary, but usually include whipped cream, chopped walnuts or peanuts, chocolate syrup, caramel sauce, and diced strawberries or pineapple. The whipped cream is often topped with maraschino cherries.
Soda fountains played a big role in the banana split’s creation. Sometimes called candy kitchens or ice cream parlors, soda fountains evolved from early American pharmacies where pharmacists would compound, or mix, different medicinal ingredients together. Soda water was originally used in stomach-soothing tonics, with flavors added to cover up the taste of medicine. But soda soon became popular all on its own, especially when people started adding scoops of ice cream to carbonated drinks to create floats. From floats, loaded with toppings like whipped cream and chopped nuts, came sundaes—creations made entirely from ice cream in different configurations.
As for where exactly the banana split was invented and by whom, that’s a topic of heated debate. In fact, the towns of Latrobe, Pennsylvania, and Wilmington, Ohio, are both so sure that the banana split was created in their respective communities that they both hold festivals celebrating its supposedly local invention. Those in Latrobe claim that the split was invented by pharmacist David Strickler in 1904, after a young customer demanded “something different.” Strickler’s quick thinking in splitting a banana and making it into a decorative sundae served him well, as his invention became such a hit that he soon had to order custom-made glass serving bowls that could more easily accommodate bananas. Wilmington, on the other hand, maintains that the banana split was invented by E.R. Hazard, a restaurateur who came up with the eye-catching, topping-heavy sundae in 1907 to attract college students to his restaurant. While most food historians feel that the evidence favors Latrobe and Strickler, we’ll never know for certain. There’s no doubt that banana splits would be a whole lot harder to scoop without Cralle’s invention, though!
[Image description: A banana split with three mounds of whipped cream topped with cherries against a pink background.] Credit & copyright: David Disponett, Pexels
-
FREEHumanities Daily Curio #3026Free1 CQ
Who said court documents had to be boring? Researchers recently translated writings on a 1,900-year-old piece of papyrus, and it has proven to be one of the oldest true crime documents in existence. The papyrus was originally found in the Judean Desert along with the Dead Sea Scrolls, and was recently rediscovered by researchers in a storeroom after going overlooked for quite some time. It was originally marked as being written in Nabataean, an ancient Aramaic language, but when sifting through the piles of papyrus, researcher Hannah Cotton immediately recognized the labeling as erroneous. Indeed, the papyrus actually contained 133 lines of ancient Greek text, the language Romans used to document court cases in the region. One of the researchers said in an accompanying statement, "This is the best-documented Roman court case from Judaea, apart from the trial of Jesus." However, the document isn’t anything as grandiose as a religious text or a murder trial. Instead, it describes a Roman court case against Gadalias and Saulos, two would-be tax dodgers who allegedly tried to circumvent Roman laws. The case took place around 130 C.E., after the two allegedly forged documentation regarding the buying and selling of slaves. Tax laws aren’t fun regardless of what century you live in, but in ancient Rome, they could be a matter of life or death. The Roman Empire punished tax cheats with years of hard labor and even executed them on occasion. Interestingly, the papyrus was written by the prosecuting lawyer in the case, and the detailed document is giving historians insight into what legal proceedings looked like in ancient Rome and Roman-occupied territories. Unfortunately, much of the document is incomplete since portions of the papyrus were degraded over time. This case seems to have gone from solved to unsolved. If only true-crime podcasters had existed in ancient Rome.
-
FREEAstronomy Daily Curio #3025Free1 CQ
Water those? Signs of water, of course! Using data and imagery from orbiters, scientists at the Natural History Museum in London have discovered clay mounds on Mars, and they’re proving to be some of the most convincing evidence yet that water once covered the Red Planet’s surface. The search for water on Mars has been going on for decades, but evidence for large bodies of it has been sparse. While Mars certainly had some water at one point, it’s been difficult to discern exactly how much. Now, scientists are pointing to mounds of Martian clay as evidence that, at one time, water was not only present but plentiful on the Red Planet. In all, around 15,000 clay mounds have been found covering an area roughly the size of Texas, and some of them are 1,600 feet tall. They contain clay minerals that could only have formed in the presence of running water.
Researcher Joe McNeil and his colleagues at the Natural History Museum in London used images collected by three separate orbiters: NASA's Mars Reconnaissance Orbiter, the European Space Agency's Mars Express, and the ExoMars Trace Gas Orbiter, which together offered spectral composition data in addition to high-resolution images of the Martian surface. The data not only showed the presence of clay, but how much of it there was. Based on this, scientists estimate that the clay minerals were deposited between 3.7 and 4.2 billion years ago, when there were possibly oceans of water on the Red Planet. While the clay mounds indicate the presence of abundant water in the distant past, exactly how much of it there was and what parts of the planet it covered are still matters of debate. As McNeil said in a statement through the museum, "It's possible that this might have come from an ancient northern ocean on Mars, but this is an idea that's still controversial." Hopefully the clay mounds have more watery secrets to share.
[Image description: A starry sky with some purple light visible.] Credit & copyright: Felix Mittermeier, Pexels
-
FREEWorld History Daily Curio #3024Free1 CQ
Throughout the centuries, history reminds us of a sobering fact: prejudice can make ordinary people do terrible things. Last week marked the 80th anniversary of the liberation of Auschwitz. But even before that particular concentration camp was built, the Nazi regime carried out a series of escalating, violent attacks against minorities in Germany and other occupied territories, often with the help of ordinary citizens. These attacks were spurred on by violent Nazi rhetoric, and were meant to terrorize Jewish people, LGBTQ people, political dissidents, and anyone else whom the regime considered an “enemy.” Perhaps the most infamous of these attacks came to be known as Kristallnacht, or the Night of Broken Glass.
On November 7, 1938, 17-year-old Polish-German Jew Herschel Grynszpan shot German diplomat Ernst vom Rath in Paris, France. Grynszpan’s attack on the previously obscure diplomat was motivated by the deportation of Polish Jews into “relocation camps” just weeks prior. Grynszpan’s family was among those taken to the camps. Vom Rath died a few days after he was shot, on November 9, when a Nazi gathering in Munich happened to be celebrating the anniversary of the Beer Hall Putsch, Hitler’s 1923 failed coup attempt. Chief propagandist Joseph Goebbels seized the opportunity to rile up outrage, using the news of vom Rath’s death as a rallying cry for Nazi supporters. Goebbels called for violent action against the Jewish population of Germany and Austria, and Nazi officials spread the orders across their territories. Over the course of two nights, countless homes and businesses owned by Jewish families were vandalized or destroyed, leaving the streets littered with shards of broken glass and rubble. Some 30,000 Jewish men were arrested and sent to concentration camps at Sachsenhausen, Dachau, and Buchenwald, while around 100 Jews were murdered in the outbreak of violence. When news of the events reached the U.S., President Franklin D. Roosevelt condemned the violence and recalled the U.S. ambassador to Germany, though the move wasn’t widely supported, as many in the U.S. favored a policy of appeasement. Kristallnacht was one of the Nazi Party’s first major steps toward the planned eradication of Jews and other minorities in Europe, and further legislative measures restricting their rights were passed in the following days and months. For the Nazis, propaganda was just another weapon to be put to violent use.
-
FREETravel Daily Curio #3023Free1 CQ
Roughing it in nature can be fun...but there's nothing wrong with a few creature comforts along the way. Exploring the Alaskan wilderness is a daunting task, but in the Chugach and Tongass National Forests, there are cabins, or “huts”, to provide shelter for eager adventurers. Now, thanks to the Alaska Cabins Project, there will soon be even more stops for hut-to-hut hikers.
Currently, the U.S. Forest Service maintains around 200 cabins that provide much-needed shelter. Some were built in the 1920s to make the outdoors more accessible to less-experienced hikers and families, but the majority were built during President Franklin D. Roosevelt’s administration by the Civilian Conservation Corps (CCC) as part of an initiative to give people more work opportunities during the Great Depression. The cabins are much like those of the famous Alpine hut system in Europe, where hikers, skiers, and other outdoor enthusiasts can book a hut for a day in France, Italy, or Switzerland. Many of the Alps huts offer food, lodging, and other amenities, though some are sparser, providing only limited provisions and a safe place to rest. Because of the huts, visitors to the Alps can carry lighter packs since they don’t need to worry about tents or carry as much food.
The cabins in Alaska are less hostel-like, but they are popular nonetheless, and many of them are reserved months in advance. They can be booked for an overnight stay, although other travelers are still allowed to stop in during the day. Some of Alaska’s cabins also offer canoes, boats, and other equipment that would be impractical for hikers to carry with them, allowing visitors to explore remote areas that would normally be out of reach. As part of the Alaska Cabins Project, the U.S. Forest Service is teaming up with the nonprofit National Forest Foundation (NFF) to repair 10 cabins and add 25 new ones to expand on the current trail system. The endeavor is the largest of its kind in 50 years. With new cabins, officials hope to alleviate some of the long wait times for reservations and make Alaska even more accessible. Exploring the outdoors doesn’t need to be so in-tents.
[Image description: A photo of Bullard Mountain in Alaska, across a lake with some ice floes.] Credit & copyright: Thomson200, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.