Curio Cabinet / Person, Place, or Thing
-
Mind + Body
If southern hospitality had a flavor, this would probably be it. Chicken and dumplings, a dish famous in the American South, is renowned as a top-tier comfort food. Yet it’s also a source of debate. Some claim that the dish’s “dumplings” aren’t really dumplings, and that its Depression-era backstory is dubious at best.
Chicken and dumplings is a simple soup made with simmered chicken meat and the thick broth created by the simmering process. The dish’s dumplings are balls of biscuit dough, usually made from flour, shortening, and milk, though buttermilk, water, or chicken broth can stand in for the milk. The soup is seasoned sparingly with salt and pepper.
Chicken and dumplings is a simple dish that requires few ingredients and can feed many people at once. Thus, for a time the dish was rumored to have been invented during the Great Depression, when resources were scarce. However, modern food historians have a different theory, which begins not in the American South but in Germany. German cuisine includes many dishes that are similar to chicken and dumplings, such as potato dumplings in broth. Many German dishes became popular throughout the U.S. due to a wave of German immigrants in the 1820s, and the first written record of chicken and dumplings appears not long after, in the 1879 cookbook Housekeeping in Old Virginia.
Of course, that doesn’t settle the debate about whether the dumplings in chicken and dumplings are really dumplings. Some foodies only consider something a dumpling if the food in question is stuffed with something, such as Japanese gyoza, which are stuffed with meat and veggies, or European pierogies, which are filled with potatoes and cheese. However, by that definition even gnocchi, the world’s most famous type of potato dumpling, wouldn’t fit the bill. One thing’s for certain, though: chicken and dumplings is a savory, chewy, comforting dish—no matter where it came from or what you call it.
[Image description: A rooster and several chickens pecking at grass.] Credit & copyright: Helge Klaus Rieder, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. The person who associated a work with this deed has dedicated the work to the public domain by waiving all of their rights to the work worldwide.
-
Biology
Would you like the ability to regrow your limbs while staying young forever? Sounds like you want to be an axolotl. These amphibians have become such pop-culture darlings in the past few years that they’re one of the few non-fluffy creatures commonly found in stuffed-animal form. While there’s no doubt that axolotls are cute, they also happen to be some of the strangest (and most threatened) creatures on earth.
Axolotls are amphibians (aquatic salamanders, to be exact), but their life cycle is very different from that of other amphibians. The vast majority of amphibians, such as frogs, which begin life as tadpoles, undergo metamorphosis in order to reach adulthood. Even most other salamander species begin life in the water with feathery gills similar to axolotls’, but eventually lose them when they mature and move onto land. Axolotls, by contrast, retain their gills and aquatic, juvenile form for life, a trait known as neoteny. Researchers have found that, while axolotls can be forced to change into an “adult” form if they are exposed to large amounts of iodine (a chemical element that triggers metamorphosis in some other amphibians), they do not survive long after the forced metamorphosis. One trait that axolotls do share with some other salamander species is the ability to regenerate body parts. They’re extremely good at it, in fact. Not only can axolotls grow new tails or legs should they lose one, they can even regrow internal organs and bones, including the heart, brain, and spine.
Social media could easily convince someone that axolotls are common. In a way, it’s true: they are common in the pet trade. In the wild, though, they’re practically extinct—and their range was never very big to begin with. In fact, wild axolotls have only ever been found in two freshwater lakes in the Valley of Mexico: Lake Xochimilco and Lake Chalco. These lakes offered unique habitats for axolotls that some pet owners find difficult to emulate. The waters are dark and, most importantly, cold. Axolotls thrive at temperatures of around 55 to 68 degrees Fahrenheit, which would be much too cold for many other amphibians. Unfortunately, people have never been content to leave axolotls alone in their cool, dark homes. The salamanders’ first bout of bad luck came when the Spanish conquered the Aztec Empire and partially drained the lakes, killing many axolotls. Lake Chalco was completely drained in the 1970s, relegating all remaining wild axolotls to Lake Xochimilco. Their problems weren’t over, though: in the 1980s, the lake became polluted with wastewater, and in the early 2000s, tilapia were introduced to the lake. These fish compete with axolotls for food and eat their eggs. On top of all that, people living near the lake had no qualms about eating axolotls if the chance arose. Today, there are only around 50 to 1,000 wild axolotls left on Earth, all of them confined to a single, polluted lake.
While the pet trade can lead to ecological disaster for some animal species, it may actually help save axolotls. Plenty of people from all over the world breed captive axolotls, which means that the species has managed to maintain a large gene pool. This could bode well for efforts to reintroduce axolotls to the wild…assuming that their natural habitat is made fit for them again. In order for any such effort to succeed, Lake Xochimilco would have to be cleaned of pollution, rules about waste dumping would need to be passed and enforced, and large numbers of tilapia would need to be removed from the lake. Were all those things to happen, there’s a good chance that wild axolotls would take to the lake like fish…or, rather, like salamanders to water.
[Image description: A gray axolotl in an aquarium.] Credit & copyright: Vassil, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
College Prep
It was a pandemic disappearing act that just couldn’t last. During the height of COVID-19, some colleges in the U.S. decided to stop requiring SAT scores as part of their admissions processes. Now, though, Harvard University and several other institutions have started asking for the scores once again. This has led to renewed discussions about the SAT’s relevance. Some people feel that the test is a fair, objective way to measure students’ knowledge, while others argue that standardized test scores are a poor measure of academic comprehension. It’s a debate that’s been going on for decades.
Originally called the Scholastic Aptitude Test, the SAT has changed many times since its inception, and it wasn’t even the first test of its kind. Around the turn of the 20th century, 12 university presidents came together to form the College Entrance Examination Board (shortened to College Board). The board’s main purpose was to create a standardized entrance exam for their schools. The resulting test was nothing less than daunting. It took five days to complete and challenged the taker’s knowledge of Latin, Greek, and physics. However, it began to lose relevance within a couple of decades, due in part to its limited scope. At that time, IQ tests were fairly new, having been developed in 1905, and they were all the rage. To update their exams, the College Board hired Carl Brigham, who administered IQ tests for the U.S. military, to develop a similar test for would-be university students. Eventually, the test Brigham created evolved into the SAT, which was officially released in 1926. Over the years, the test underwent many changes. The foreign language portions were dropped, for one thing, and the test was pared down to a three-hour affair. However, its main purpose, to assess math and verbal skills, has remained largely the same. Along with alterations to its content, the test’s name changed multiple times, from the Scholastic Assessment Test to the SAT Reasoning Test. In the end, most institutions simply ended up calling it the SAT.
Unlike its predecessors, the SAT was touted for its supposed ability to evaluate a student’s critical thinking skills instead of rewarding rote memorization. It still had plenty of detractors, though. Some universities believed that the test was an unnecessary barrier for students facing socioeconomic hardships, who had limited access to resources like private tutoring and less time to study, since they often had to work outside of school. It’s a viewpoint still shared by plenty of people today. Recently, Harvard professor David J. Deming and his colleagues conducted research into the impact of socioeconomic status on standardized tests. After Harvard announced the SAT’s return, Deming acknowledged that the tests weren’t perfect, but also defended them, saying in a statement to The Harvard Gazette, “The virtue of standardized tests is their universality. Not everyone can hire an expensive college coach to help them craft a personal essay. But everyone has the chance to ace the SAT or the ACT. While some barriers do exist, the widespread availability of the test provides, in my view, the fairest admissions policy for disadvantaged applicants.” Nowadays, there are even free test-preparation resources from nonprofit organizations, though even those require an internet connection that some applicants might not have. Love it or hate it, it seems that the SAT and its counterpart, the ACT (originally the American College Testing program), probably won’t be going anywhere any time soon. At least test-takers can be assured that the tests’ rigorousness isn’t personal…it’s just standard procedure.
[Image description: A mathematical problem written on a chalkboard.] Credit & copyright: Monstera Production, Pexels
-
Humanities
Happy belated birthday, Dame Jane Goodall! This famed British primatologist and anthropologist turned 90 on April 3. Never one to rest on her laurels, she celebrated the auspicious occasion by releasing a short film in which she and other conservationists urged humanity to care for animals and the environment. It’s the same message that Goodall has been spreading since 1960 when, at 26 years old, she journeyed from England to Tanzania to study chimpanzees in the wild.
Goodall was born in London, England, in 1934, and showed great interest in animal behavior from an early age. She spent her free time observing bugs, mice, and other small, native creatures while taking notes and sketching them. Her ultimate aspiration as a child was to study animals in Africa in their native habitats. At 18, Goodall began working at Oxford University as a secretary and at a documentary film company while saving up funds for her eventual trip to Africa. In the late 1950s, Goodall traveled to Kenya for the first time, and soon started working for anthropologist Louis Leakey. Leakey specialized in early human ancestors, and he believed that the key to understanding their evolutionary path was a better understanding of extant great apes, such as chimpanzees. After all, chimpanzees are humans’ closest living relatives. Leakey also contended that previous chimp studies had been too limited in scope, and that a long-term study with field observation was necessary. Unable to commit to such a study himself, he sent Goodall to Gombe Stream Reserve in Tanzania to observe the chimpanzees.
In July of 1960, Goodall arrived at a camp in the reserve, situated on the shores of Lake Tanganyika. Once there, she developed a dedicated routine to observe the chimpanzees. She would appear at the same time every day near their feeding area, allowing the apes to get acclimated to her presence. Over time, she was able to get closer and closer as they stopped seeing her as a threat. Eventually, she was able to earn their trust by bringing them bananas, forming what she called the “Banana Club.” During the time she spent practically living with the chimpanzees, she learned new information that disproved previous studies. For example, the apes weren’t herbivores as previously thought, but omnivores that ate insects and other mammals like baboons and antelope when the opportunity presented itself. They also appeared to communicate using around 20 “words” and had a complex social system.
Perhaps most importantly, Goodall discovered that humans weren’t the only animals that use tools. She observed chimps using blades of grass, which they modified to fish termites out of their mounds. This discovery, along with her other observations, was documented in the film Miss Goodall and the Wild Chimpanzees (1965) and in her book In the Shadow of Man (1971), both of which changed the scientific community’s understanding of great apes. Goodall’s book, in particular, helped prove that chimpanzees shared many behaviors with humans. One example of wordless communication between humans and chimps from In the Shadow of Man reads, “When I moved my hand closer, he looked at it, and then at me, and then he took the fruit, and at the same time held my hand firmly and gently with his own. As I sat motionless, he released my hand, looked down at the nut, and dropped it to the ground. At that moment, there was no need of any scientific knowledge to understand his communication of reassurance… the barrier of untold centuries which has grown up during the separate evolution of man and chimpanzee was, for those few seconds, broken down.”
Today, Goodall’s discovery of chimpanzees’ capacity for tool-use is considered one of the greatest ethological achievements of the 20th century. It has even opened the door to studies of other animals, like parrots, that are capable of similar feats. Since her groundbreaking fieldwork, Goodall has dedicated her life to conservation. In 1977, she co-founded the Jane Goodall Institute for Wildlife Research, Education and Conservation. For her contributions to science, she was named Dame Commander of the Order of the British Empire (DBE) in 2003. It’s an impressive title for sure, but it doesn’t quite beat President of the Banana Club.
[Image description: A photo of Jane Goodall wearing a brown, turtleneck sweater and jacket in front of a floral background.] Credit & copyright: U.S. Department of State, Wikimedia Commons. This image is a work of a United States Department of State employee, taken or made as part of that person's official duties. As a work of the U.S. federal government, the image is in the public domain per 17 U.S.C. § 101 and § 105 and the Department Copyright Information.
-
US History
If you’ve been online this week, then you’ve likely seen footage of the Francis Scott Key Bridge collapsing after it was struck by a large container ship. The tragedy claimed several lives and is still under investigation. Yet, it’s far from the worst bridge collapse in modern U.S. history. That unfortunate title is still held by 1967’s Silver Bridge collapse in Point Pleasant, West Virginia—an incident that changed U.S. bridge safety forever.
Built in 1928 over the Ohio River, the Silver Bridge carried U.S. Highway 35 between Point Pleasant, West Virginia, and Kanauga, Ohio. Named for the aluminum paint that covered its surface, the Silver Bridge was a suspension bridge, though with an unusual design. Unlike most suspension bridges, the Silver Bridge was supported by eyebars. These were flat, steel beams with round, holed ends connected via huge pins to form chains ranging between 45 and 55 feet in length. While unconventional, the design held up to decades of heavy traffic…until it suddenly didn’t. On December 15, 1967, a chain on one end of the bridge snapped, causing it to tilt to one side. This happened at 5 p.m., when rush-hour traffic was backed up on the bridge. The tilt immediately sent unfortunate commuters sliding into the water. In the span of around one minute, dozens of cars fell 80 feet to the surface of the Ohio River, killing 46 people and injuring nine.
Following the tragic incident, the National Transportation Safety Board (NTSB) investigated the cause of the bridge collapse and released a comprehensive report. However, the bridge had already been a source of concern to state officials for some time. From the very beginning, it was known that the eyebar chains could not be adjusted after the bridge was completed, meaning that any issues arising from the condition of the chains couldn’t be easily corrected. Still, it seemed that the bridge was well looked after. Inspections were carried out regularly by the private owner of the bridge until 1941, when it was purchased by the state of West Virginia. In 1965, state inspectors recommended $30,000 in repairs, which were completed by the summer of 1967. Just over a week before the bridge collapsed, the State Road Commission sent a maintenance engineer to check the bridge again. Yet despite these measures, a growing threat went overlooked. According to the NTSB report, the main source of the failure was an improperly cast eyebar that had been used in the construction of the bridge despite having a small crack. Over the years, corrosion caused the crack to grow (rust expands as it forms) until it failed catastrophically on December 15. The bridge also hadn’t been designed to handle the heavy traffic of the 1960s. By then, not only were there more cars on the road, but they were also much heavier than the Ford Model Ts that the bridge was originally made to accommodate. The flaw in the eyebar had been visually inaccessible, so it went undiscovered despite all the inspections.
As tragic as it was, the Silver Bridge collapse did have something of a silver lining. It made headlines around the country, forcing President Lyndon B. Johnson to address aging infrastructure. A nationwide assessment found that many bridges were, much like Silver Bridge, designed and built in the 1920s, but with even fewer inspections to keep tabs on their condition. Some bridges had never even been inspected. Today, there are federal standards for bridge design and maintenance, but the sheer number of bridges still makes thorough, regular inspections difficult. Efforts to improve maintenance procedures and intervals are often met with political resistance due to funding, and many pieces of road infrastructure today could be ticking time bombs. When it comes to infrastructure safety, there shouldn’t be such a thing as a bridge too far.If you’ve been online this week, then you’ve likely seen footage of the Francis Scott Key Bridge collapsing after it was struck by a large container ship. The tragedy claimed several lives and is still under investigation. Yet, it’s far from the worst bridge collapse in modern U.S. history. That unfortunate title is still held by 1967’s Silver Bridge collapse in Point Pleasant, West Virginia—an incident that changed U.S. bridge safety forever.
Built in 1928 over the Ohio River, the Silver Bridge carried U.S. Highway 35 between Point Pleasant, West Virginia, and Kanauga, Ohio. Named for the aluminum paint that covered its surface, the Silver Bridge was a suspension bridge, though with an unusual design. Unlike most suspension bridges, the Silver Bridge was supported by eyebars. These were flat steel beams with round, holed ends connected via huge pins to form chains ranging between 45 and 55 feet in length. While unconventional, the design held up to decades of heavy traffic…until it suddenly didn’t. On December 15, 1967, a chain at one end of the bridge snapped, causing it to tilt to one side. This happened at 5 p.m., when rush-hour traffic was backed up on the bridge. The tilt immediately sent unfortunate commuters sliding into the water. In the span of around one minute, dozens of cars fell 80 feet to the surface of the Ohio River, killing 46 people and injuring nine.
Following the tragic incident, the National Transportation Safety Board (NTSB) investigated the cause of the bridge collapse and released a comprehensive report. However, the bridge had already been a source of concern to state officials for some time. From the very beginning, it was known that the eyebar chains could not be adjusted after the bridge was completed, meaning that any issues arising from the condition of the chains couldn’t be easily corrected. Still, it seemed that the bridge was well looked after. Inspections were carried out regularly by the private owner of the bridge until 1941, when it was purchased by the state of West Virginia. In 1965, state inspectors recommended $30,000 in repairs, which were completed by the summer of 1967. Just over a week before the bridge collapsed, the State Road Commission sent a maintenance engineer to check the bridge again. Yet despite these measures, a growing threat went overlooked. According to the NTSB report, the main source of the failure was an improperly cast eyebar that had been used in the construction of the bridge despite having a small crack. Over the years, corrosion caused the crack to grow (rust expands as it forms) until it failed catastrophically on December 15. The bridge also hadn’t been designed to handle the heavy traffic of the 1960s. By then, not only were there more cars on the road, but they were much heavier than the Ford Model Ts that the bridge was originally made to accommodate. The flaw in the eyebar had been visually inaccessible, so it went undiscovered despite all the inspections.
As tragic as it was, the Silver Bridge collapse did have something of a silver lining. It made headlines around the country, forcing President Lyndon B. Johnson to address aging infrastructure. A nationwide assessment found that many bridges were, much like Silver Bridge, designed and built in the 1920s, but with even fewer inspections to keep tabs on their condition. Some bridges had never even been inspected. Today, there are federal standards for bridge design and maintenance, but the sheer number of bridges still makes thorough, regular inspections difficult. Efforts to improve maintenance procedures and intervals are often met with political resistance due to funding, and many pieces of road infrastructure today could be ticking time bombs. When it comes to infrastructure safety, there shouldn’t be such a thing as a bridge too far. -
Outdoors
Spring has sprung! Unfortunately, though, the season can bring more than nice weather, colorful flowers, and chirping birds. It also means the re-emergence of disease-spreading pests like mosquitoes and ticks. Ticks, in particular, are responsible for spreading a disease that all outdoor adventurers fear: Lyme disease. This painful condition can wreak havoc on the body and, frighteningly, in some people the symptoms seem to persist for years. However, there is heated debate around the causes of “chronic Lyme” and whether that name should even be used by medical professionals.
Lyme disease is caused by Borrelia bacteria, but it’s almost always transmitted to people via tick bites. The ticks that carry Lyme can be found in the American Midwest, Northeast, the Pacific Northwest, parts of Canada, and Europe, though their range seems to be spreading. Known as blacklegged ticks (Ixodes scapularis) or western blacklegged ticks (Ixodes pacificus), they are usually found in wooded areas, though they can easily make their way into people’s yards. When a tick bites a person, it often leaves behind a distinctive, ring-shaped rash. While this can help someone know that they’ve been bitten, the only way to know if the tick was carrying Lyme disease is to wait. Symptoms can start anytime between three and thirty days after infection. Even then, the symptoms aren’t always obvious, since they can feel like the flu and can include fever, muscle aches, stiff joints, fatigue, swollen lymph nodes, and headaches. Without treatment, the disease progresses to stage 2, which can cause irregular heartbeat, swelling in or around the eyes, and muscle weakness. Stage 3 can cause arthritis, and without medical intervention symptoms can continue to get worse. European ticks sometimes carry variants of Lyme that can cause a condition called acrodermatitis chronica atrophicans, which causes swelling and discoloration of the skin near the joints. Fortunately, Lyme disease can be treated via a simple course of antibiotics…usually. In some people, Lyme disease seems to last far longer than it should, even after treatment.
This seemingly lingering Lyme disease is sometimes called “chronic Lyme.” However, most medical professionals prefer the term “post-treatment Lyme disease” (PTLD), which more accurately describes the condition. After all, people who have PTLD aren’t infected with Borrelia bacteria anymore. Still, months or even years after they are “cured,” they continue to experience fatigue, aches, and palpitations, in addition to numbness, dizziness, and brain fog. PTLD is difficult to treat because its cause has yet to be identified. In fact, some medical professionals don’t even believe that it’s possible to have “chronic Lyme.” Since the bacteria that cause Lyme disease can’t be detected in PTLD, antibiotics aren’t always effective, though there are stories of some cases where a second, prolonged course of antibiotics has worked. Still, since reliable treatments for PTLD are limited, prevention is the best route to take.
Ticks can carry more diseases than just Lyme, so it’s essential to watch out for them. Since ticks prefer areas with heavy vegetation, like tall grass, it’s best to avoid those areas or to only venture into them while wearing pants and sleeves that leave little skin exposed. Bug sprays that use DEET, picaridin, or Oil of Lemon Eucalyptus (OLE) can help deter the arachnids, but people who may have been in tick-infested areas should do a thorough examination of their body and gear. If a tick is clinging to the skin, a tick-removal device, like the kind available at outdoor shops, should be used. Experts have urged people not to rely on home remedies like burning ticks with a lighter or squeezing them off, since that can actually force a tick to eject its inner contents into the bite wound. It’s also important to remember that dogs can get Lyme disease, so they should be given tick preventatives on a regular basis. Dogs can also be vaccinated against Lyme disease, and a human vaccine is under development too. Just remember, a tick’s bite is worse than your (or your dog’s) bark.
[Image description: A brown-and-black tick on a blade of green grass.] Credit & copyright: Erik Karits, Pexels -
World History
Happy Saint Patrick’s Day! Just who was this Saint Patrick guy, anyway? Like all saints who went on to become holiday mascots (think Saint Valentine and Saint Nicholas), the real Saint Patrick’s life is steeped in legend. In fact, almost everything we know about his life comes from two works that Patrick wrote himself: his autobiography, Confessio, and a letter condemning what he saw as Britain’s mistreatment of Christians in Ireland. While some of Patrick’s stories might best be taken with a grain of salt, there’s no doubt that he became an extremely successful priest and missionary in his lifetime, and that he faced plenty of tribulations along the way.
The story of Saint Patrick gets strange right off the bat since, despite his fame as the patron saint of Ireland, he wasn’t actually Irish. Rather, he was born in Britain sometime in the late fourth century C.E. to a family of Roman descent. His father was a wealthy deacon and local politician, but even his status wasn’t enough to protect a 16-year-old Patrick from being kidnapped by Irish raiders who broke into his family’s estate. The teen was carried off into slavery in Ireland, where he was forced to work for six years herding sheep. During his time in captivity, Patrick sought solace in his religion and became more devout as a result. According to Patrick’s own writings, he had a dream one night in which the Christian god told him that it was time to leave, so he fled his captors and returned to his family in Britain. After his return, another dream told him that he would one day return to Ireland as a missionary. Whatever his reasoning, Patrick did begin 15 years of religious training, at the end of which he was ordained a priest. Amazingly, he did indeed choose to return to the land where he had been enslaved to do the bulk of his religious work.
Although some legends claim that Saint Patrick introduced Christianity to Ireland, that’s almost certainly not true, since part of his job as a missionary and priest was working with Ireland’s already-Christian population. Unlike most foreign priests, Patrick was familiar with Irish traditions and rituals due to the time he’d spent there, which endeared him to Irish Christians. It also allowed him to better relate to the non-Christians he was trying to convert. Patrick put a Christian spin on Irish pagan rituals, such as lighting bonfires at Easter rather than in worship of the Celtic gods. He is also credited with redesigning the typical Christian cross by adding a circle that represents the sun—a prominent Celtic symbol—to make the reverence of the symbol feel more familiar. This design came to be known as the Celtic cross, and it’s still in use today in regions with Celtic heritage. His influence and reputation in Ireland only grew after his death, and he was hailed as a saint by acclaim alone before the Catholic Church had a formal canonization process.
As with any Catholic saint, Patrick was credited with performing a number of epic feats and miracles. The most famous of these is his eradication of snakes from the island, though historically this seems unlikely, since scientific evidence suggests that snakes never inhabited the island in the first place. Patrick is also credited with using a three-leafed clover, or shamrock, to explain the concept of the Holy Trinity to the Irish, though this was never mentioned in his own writings. Another story tells of Patrick fasting on a mountain for 40 days, until an angel came down to speak with him on behalf of God. The story goes that Patrick then made several demands of God, like allowing him to save more damned souls than any other saint, preventing the English from ever ruling over the Irish, and giving him the privilege of judging Irish souls during the Last Judgment.
While St. Patrick is still heavily associated with Irish culture, his feast day on March 17 is celebrated in many countries today. For many, St. Patrick’s Day is a fairly secular holiday in which revelers don green clothes and drink plenty of beer. This is particularly true in the U.S., where the holiday was first promoted by Irish immigrants in Boston in the 18th century. One of America’s first St. Patrick’s Day celebrations was held in Boston in 1737, and the tradition has since spread to cities across the country. No need to be green with envy for the Emerald Isle—everyone has the luck of the Irish on St. Patrick’s Day.
[Image description: A black-and-white engraving of Saint Patrick reading a bible and holding a staff while wearing a robe and tall hat.] Credit & copyright: Mattheus Borrekens, 1625-1670. Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
Fitness
We’re three months into 2024! Have you stuck to the fitness goals you set back in January? If so, you’re probably intimately familiar with one of the world’s most popular fitness machines: the treadmill. While they’re touted for their health benefits today, treadmills have a surprisingly dark history. In fact, they weren’t invented for fitness at all, but for punishment.
Treadmills may have been invented in ancient Asia, though historians aren’t entirely sure. The earliest iterations of treadmills in the West were used to pump water or grind grain. The version that would become infamous, called a treadwheel, was the brainchild of William Cubitt, an English civil engineer from a family of millwrights. The design was fairly simple, with two wheels connected by cogs. Users would climb on top of it and then walk, as if ascending a never-ending flight of stairs, while holding onto a bar for support. It was a useful industrial machine, but in 1818, not long after it was first introduced, prisons noticed that it had potential as an instrument of punishment.
Thus, treadwheels began to pop up at large correctional facilities, where they were given a new, dystopian name: atonement machines. The grueling labor was touted by prison officials as a way for prisoners to “work off their sins.” Of course, in reality the devices were less about atonement and more about keeping prisoners too occupied and exhausted to stir up trouble. These correctional contraptions were modified for prison use as well, with partitions separating inmates so that they couldn’t pass the time by socializing. This would obviously be considered torture by modern standards, as inmates were sometimes made to work on atonement machines for up to 10 hours a day. Unlike the first treadwheels, most atonement machines weren’t even made to do anything useful, like pumping water or grinding grain. Thankfully, the Sisyphean punishment fell out of favor in the late 1800s as its efficacy as a rehabilitation tool was shown to be questionable at best and lethal at worst. By the turn of the century, there were a little more than a dozen functioning atonement machines left in English prisons.
The treadmill saw a similar rise and decline in popularity as a correctional tool in the U.S., but some enterprising Americans also thought to re-purpose the torture device as a fitness machine. In 1913, Claude Lauraine Hagen filed a U.S. patent for a “training-machine,” amid growing concern that lack of exercise contributed to heart disease. In a similar vein, a cardiologist named Robert Bruce came up with the “Bruce Protocol” in the early 1960s, in which he would evaluate a patient’s cardiac health by having them walk on a treadmill while connected to an electrocardiogram. It wasn’t until later that decade, when William Staub invented the “Pacemaster 600,” that the treadmill really caught on as a machine for fitness and recreation. Staub’s iteration of the treadmill came at a time when Americans were becoming more health-conscious and concerned with maintaining their physiques. With the Pacemaster 600, the average person could run in any weather and sweat off extra pounds. Staub was seemingly on to something, as he reportedly used a treadmill every day until he died at the ripe old age of 96. Nowadays, treadmills are a ubiquitous fixture in home gyms and fitness centers around the world…though some may still consider them a bit torturous.
[Image description: A Victorian-era illustration of prisoners walking on a treadmill while other people, wearing hats and coats, stand near a basket of food in the foreground.] Credit & copyright: British Library c. 1817. Wikimedia Commons, Public Domain. -
US History
There are jailbreaks, and then there are jailbreaks. On this day in 1934, one of the most notorious criminals in U.S. history broke out of jail using only a self-made, fake gun. John Dillinger was a troublemaker long before this infamous feat, but it helped cement his legend. In his time, he was even considered something of a folk hero, albeit a violent one.
John Herbert Dillinger was born in Indianapolis, Indiana, in 1903. He had a difficult family life that resulted in a tumultuous childhood. His mother died when he was only three, and he did not get along with his stepmother. In school, he frequently got into trouble and eventually dropped out at the age of 16. Hoping to reform their son’s ways by moving to a more rural area, his family relocated to a farm outside of Indianapolis, but to no avail. Despite his father’s efforts to distance him from city life, Dillinger still ventured into Indianapolis to work at a machine shop during the day, and drink in bars until all hours of the night. At age 20, Dillinger did take one serious crack at the “straight and narrow” life by joining the Navy…but after just a few months at his first station, he went AWOL and married 16-year-old Beryl Hovious.
Dillinger began his criminal career not long after getting married, though it got off to a rough start. In 1924, Dillinger and his friend, Edgar Singleton, assaulted and robbed a local grocer. Dillinger used an iron bolt wrapped in cloth as his main weapon, but the grocer suffered only minor injuries. Dillinger was promptly identified, arrested, and sentenced to a whopping 10 to 20 years in prison. His wife divorced him while he was serving his sentence, and it seemed that Dillinger was a man whose violent actions had cost him everything. Far from learning his lesson, however, Dillinger made friends with other criminals while incarcerated and learned more about pulling successful heists. After making parole in 1933, Dillinger picked up right where he’d left off. Using what he’d learned from ex-military inmates who robbed banks with tactical precision, he began a crime spree for the ages. His first step was to break out some of his incarcerated friends: Harry Pierpont, Charles Makley, John Hamilton, Walter Dietrich, and Russell Clark. They, along with Homer Van Meter, formed the Dillinger Gang, and stole weapons by breaking into police stations. Their modus operandi was unusually meticulous; members of the gang performed reconnaissance before each robbery. They also posed as government officials to get an inside look at the daily operations of their targets, and even “rehearsed” by driving over their escape routes several times in advance.
During one of the gang’s robberies, Dillinger killed a police officer named O’Malley. The additional attention from the killing eventually caught up with him, and Dillinger was captured in January of 1934. While awaiting trial in Crown Point, Indiana, Dillinger carved a fake gun out of a wooden washboard and colored it with bootblack. Using the “gun,” he successfully escaped on March 3 of that year by taking a guard hostage (although his lawyer supposedly bribed some guards to aid in the escape) and fled to Chicago, where he underwent plastic surgery to hide his identity. In June, Dillinger was declared Public Enemy Number One by the federal government. Less than a month later, he met a fittingly violent end when he died in a hail of gunfire in a shootout with the FBI.
Oddly, in life Dillinger was considered a folk hero by some Americans, who celebrated him for holding up the “crooked banks” that they blamed for the country’s economic woes. Today, Dillinger is among the most famous of the bank robbers who plagued the U.S. in the early 1900s. Whether he was just a violent criminal or a champion of the people, one thing’s for sure: Dillinger knew how to whittle a convincing gun.
[Image description: Black-and-white mugshots of John Dillinger wearing a suit.] Credit & copyright: John Dillinger's 1924 mugshot from the Indiana State Penitentiary. Indiana State Penitentiary photographic records. Wikimedia Commons. This work is in the public domain in the United States because it was published (or registered with the U.S. Copyright Office) before January 1, 1929. -
FREEWorld History PP&T CurioFree1 CQ
It may look tame, but trust us—you don’t want to take the plunge. The Bolton Strid, a narrow portion of the River Wharfe running through the Bolton Woods in Yorkshire, England, is considered by many to be the world’s deadliest stream. Despite looking relatively calm and easy to jump over, the Strid has claimed many lives, giving it an almost otherworldly reputation.
The Bolton Strid gets the first part of its name from the nearby 12th-century monastery, Bolton Priory. “Strid,” on the other hand, means “turmoil” in Old English. The stream’s waters are not as gentle as they appear at first glance, and the Bolton Strid’s dangerous reputation has been known to locals for centuries. In the early 1800s, British poet William Wordsworth even wrote a poem called The Force of Prayer about the Bolton Strid and an unlucky boy who fell in while trying to jump across it. The poem reads, in part, “The Boy is in the arms of Wharf, / And strangled by a merciless force; / For never more was young Romilly seen / Till he rose a lifeless corpse.” Over the centuries, countless people have similarly fallen victim to the Bolton Strid, many while trying to jump across it. More recently, in 1998, a couple drowned there on the second day of their honeymoon after heavy rains caused the water level to rise by five feet in less than a minute.
So, what exactly makes the Bolton Strid so dangerous? It's a combination of physics and psychology. The River Wharfe, which is around 30 feet across for most of its length, narrows significantly at the Bolton Strid, until it’s only a few feet across. Below the seemingly calm surface, the sudden narrowness causes water to be “squeezed” along much faster than it would normally flow. Over centuries, the turbulent water has allowed the Bolton Strid to gouge deep into the surrounding stone—perhaps as deep as 30 feet, though the exact measurement is still unknown. Some researchers have described the Bolton Strid as a full-sized river that is simply “turned on its side,” so that all of its danger is hidden below the surface. These dangers include not only the rushing current but also sharp rocks that aren’t visible from above. There are only a few small clues that the stream isn’t what it seems. These include swirls of water on its surface (from hidden undercurrents), bubbles rising from the bottom (a sign of low water density that makes it easier for objects to sink), and an inability to see the bottom of the stream, even on a clear, sunny day. Ultimately, the fact that most people miss these subtle signs is what truly makes the Bolton Strid so dangerous. People are simply more willing to approach a calm-looking stream than they would be an obviously raging river. Hence the many stories of people trying to jump across the waterway. In fact, people still do jump across all the time, due in part to the stream’s now-famous reputation. For teens, leaping over the Bolton Strid is a surefire, albeit dangerous, way to “prove their love” to a romantic partner.
The fact remains that no one who has ever fallen into the Bolton Strid has made it out alive, and many don’t make it out at all. No one is sure where their bodies go when they don’t emerge, but some experts believe that objects in the Bolton Strid are sucked into underwater caverns in the stony riverbed. Sure, you should never judge a book by its cover, but it also pays not to judge a river by its width.
[Image description: A black-and-white photo of The Strid taken in 1898.] Credit & copyright: Bolton Woods; The Strid. 1898. Rijksmuseum, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEWorld History PP&T CurioFree1 CQ
If you were paying attention this past Valentine’s Day, chances are good that you saw at least one depiction of Cupid. He’s everywhere this time of year—on boxes of chocolates, bouquet wrappings, and, of course, cards. But Cupid wasn’t always the friendly little winged baby that he’s often depicted as today; there was a time when his heart-tipped arrows inspired terror and were part of dark tales involving madness and death. That’s because Cupid was originally Eros, the Greek god of carnal love.
Cupid’s mythological backstory went through plenty of changes over the centuries. One of the earliest mentions of Eros describes him as a primordial cosmological entity who emerged from the world egg. Another version of his birth describes him as the son of Nyx (the goddess of night) and Erebus (the god of darkness). He is sometimes portrayed as a sibling of Deimos, Phobos, and Harmonia (gods of fear, panic, and harmony, respectively). In other portrayals, he is the elder brother of Anteros, the god of requited love and avenger of unrequited love. However, the most popular tales of Eros portray Aphrodite, the goddess of sexual love and beauty, as his mother. In these stories, Eros is usually portrayed as a powerful, handsome young man with wings. His enchanted bow and arrows could impart an everlasting and irresistible carnal desire for another being, and they could affect both gods and mortals. His power wasn’t always used for good. Aphrodite, being a jealous goddess, often ordered her son to inflict his arrows on mortal women she grew jealous of, sometimes making them fall in love with animals. Yet, for a god who could so easily make others fall in love, Eros himself had quite the time trying to form a romantic connection of his own.
The tale of Eros and Psyche tells the story of Eros’s own quest for love. Most versions of the tale state that Aphrodite grew jealous of Psyche, a beautiful, mortal princess, and sent Eros to punish her with his arrows. Meanwhile, concerned that suitors never came to call on his daughter, Psyche’s father sought advice from the Oracle of Delphi, who told him to leave her at the top of a mountain, where she would meet her husband. This husband was said to be a monster, but Psyche went anyway to make her father happy. When she reached the summit, she was taken to a grand palace where she lived alone, only meeting her husband under the dark cover of night. When her sisters came to visit her, they—in their jealousy—convinced her to shine a light on her husband when he came to visit. One night, Psyche did just that, and revealed a most beautiful face—that of Eros himself. Eros, as it happened, had nicked himself with his own arrow while taking aim at Psyche. But the god was angered by the betrayal and flew away, leaving Psyche alone once more. Some versions say that Psyche remained alone forever, but in other versions she and Eros reconciled and Psyche went to live with Eros on Olympus.
Things changed for Eros around 146 BCE, when the Romans took over Greece’s city-states. Although they kept much of Greece’s mythology intact, they renamed the Greek gods and, in some cases, altered their stories and appearances. The Romans changed Eros’s name to Cupid, a word that resembles the Latin verb “cupere,” meaning “to desire.” Unlike Eros, Cupid was portrayed as a mischievous young boy. This is probably because many of Eros’s stories saw him carrying out the commands of his mother, which the Romans viewed as childlike. As Christian art grew popular in Italy, Cupid came to be depicted in an even younger form, taking on the baby-like appearance of winged cherubs that were popular in Christian art. Eventually, the only things that set Cupid apart from other cherubs were his quiver and arrows. Don’t be fooled by their modern, cartoonish appearance, though. They pack a wallop.
[Image description: A round plate with Eros depicted on it, standing nude with wings and arms outstretched.] Credit & copyright: Louvre Museum, Wikimedia Commons. Ca. 470–450 BCE. This picture was shot by Marie-Lan Nguyen (user:Jastrow) and placed in the Public Domain. -
FREEUS History PP&T CurioFree1 CQ
In honor of Black History Month, we’re celebrating one of the greatest American writers of all time: Langston Hughes. A central figure of the Harlem Renaissance, Hughes is best remembered for his succinct, gripping poetry, but he also wrote novels, plays, and essays. His unique ability to combine themes of beauty and hope with truths about poverty and prejudice made his work an inspiration to millions.
James Mercer Langston Hughes was born in Joplin, Missouri, on February 1, 1901. His childhood was complicated and impacted by racism from the start. His father, seeking to escape the violent, anti-Black racism of the U.S., divorced Hughes’ mother and left for Mexico when Hughes was an infant. His mother, unable to find steady work due to persistent prejudice, left him to be raised by his maternal grandmother until he was 13. An activist in her youth, Hughes’ grandmother instilled in him a sense of racial pride and responsibility toward the Black community. By the time he was a teenager, Hughes had already developed a passion for literature and had begun writing poetry, thanks in large part to his grandmother.
After high school, he briefly went to live with his father in Mexico and tried, unsuccessfully, to convince him to fund his education at Columbia University, where he wanted to study writing. It was during his time in Mexico that the young Hughes wrote one of his most famous poems: The Negro Speaks of Rivers. A part of the poem reads, “I’ve known rivers ancient as the world and older than the flow of human blood in human veins/My soul has grown deep like the rivers.” The free-verse poem, focused on the strength and beauty of Black heritage, was published in NAACP’s The Crisis Magazine, and caught the attention of critics. Although Hughes did attend Columbia University for a year, he paused his studies to travel around Europe and Africa to escape the racism he encountered at school and in the U.S. in general. After returning to the U.S., he finished his education at Lincoln University in Chester County, Pennsylvania, before making what was possibly the most impactful move of his life: he settled down in New York, in the historically Black neighborhood of Harlem.
Despite his first poem’s success, most of Hughes’ early work was poorly received by Black critics and almost universally ignored by white critics. Black critics disliked Hughes because he wrote about the lives of Black people in Harlem in a stark, forthcoming light, which they found unflattering. For example, the title of his second book of poetry, Fine Clothes to the Jew, referred to the at-the-time common practice of Black people pawning their expensive clothes to mostly Jewish-owned businesses during times of financial hardship. As Hughes once wrote of other Black writers in his autobiography, “In anything that white people were likely to read, they wanted to put their best foot forward, their politely polished and cultural foot—and only that foot.” Hughes’ lack of critical acclaim didn’t seem to bother him, though, as he remained most interested in the lives of everyday Black people. Hughes was well-regarded by the working class, who found his writing relatable. Within a few years, he grew popular enough to become the first Black American to make a living from writing alone. However, this success also made him a target of the U.S. government. His interest in communism, open criticisms of capitalism, and his unwavering support of civil rights for Black Americans earned him the ire of J. Edgar Hoover, and the government kept an extensive file on Hughes for many years.
Hughes did eventually win over the literary critics. In fact, he became one of the most significant figures in the Harlem Renaissance, an explosion of Black art and literature based in Harlem, which began in the 1920s. For his part, Hughes relentlessly portrayed the struggles of Black Americans and relayed their experiences to the world with dignity and solemnity. He became friends with other leading Black literary figures, like author Gwendolyn Brooks. Throughout the rest of his life, Hughes continued to travel the world as much as he could, and was as beloved on the international stage as he was scorned by the establishment in his home country. He may have lived in Harlem, but all the world was his neighborhood.
[Image description: A black-and-white photo of Langston Hughes smiling while wearing a suit.] Credit & copyright: Jack Delano (1914–1997), Wikimedia Commons, This image is a work of an employee of the United States Farm Security Administration or Office of War Information domestic photographic units, taken as part of that person's official duties. As a work of the U.S. federal government, the image is in the public domain in the United States. -
FREEUS History PP&T CurioFree1 CQ
Revolutions don’t come cheap, you know! On this day in 1787, a little-known American rebellion was defeated less than a year after it began. Its cause? Economic uncertainty following the Revolutionary War. With credit hard to come by and creditors making difficult demands, many people struggled to pay their bills once the war was over. Some lost their land, while others were thrown behind bars, and all the while, social unrest began to spread. When discontent turned to rebellion, the dissidents joined forces and were led by one man: Daniel Shays. Shays’ Rebellion, though brief, tested the stability of a newly-independent America.
During the war, businesses in America and Europe lent massive sums of money to the Continental Army. Once the war ended, they were hesitant to lend more money when they were still owed so much, and most demanded cash up front for goods and services rendered. Meanwhile, many farmers who had fought in the war struggled to make ends meet, as they had borrowed large amounts to start their farms and provide for their families. Ironically, the reason many people were in debt was because they themselves were owed money for their service in state militias and the Continental Army, which had often issued IOUs in lieu of pay. To help alleviate the situation, state governments began to pass pro-debtor laws that forgave loans or printed more money. One exception was Massachusetts, whose Governor, James Bowdoin, refused to pass such laws. This allowed businesses in Boston to seize land from farmers and throw debtors into prison. With so many people unable to pay their debts, the situation quickly became untenable. Beginning in 1786, disgruntled farmers and other debtors began to hold special meetings, creating an atmosphere of rebellion akin to that of the early days of the American Revolution. Among the dissidents was Daniel Shays, a farmer who had served as a captain in the Continental Army.
Shays had fought at Bunker Hill and had a decorated military career, but in the rebellion that would eventually bear his name, he was a reluctant leader. He feared that an uprising could threaten the new democracy he’d recently fought to form. Still, he was an active participant in debtors' meetings and protests from the rebellion’s start. Shays was present during a protest in Northampton when rebels blocked judges from entering a courthouse where debtors’ trials were to be held. Not long after, he led a group of 600 men to a courthouse in Springfield for a similar purpose. There, Shays calmly negotiated with General William Shepard, agreeing to let the trials proceed as long as he and his followers were allowed to protest outside; his composure helped make him the rebellion’s de facto leader. The arrangement worked in Shays’ favor: even though judges could freely enter the courthouse, the court couldn’t find enough people willing to serve as jurors due to the public’s growing sympathies. With Shays seemingly at the helm, the dissidents began to be referred to as “Shaysites,” though their growing rebellion wouldn’t last long.
The Shaysites had divided a fledgling nation. Some viewed their cause as a continuation of the American Revolution, while others, like Sam Adams, considered them traitors and called for their executions. Perhaps taking note of the parallels, George Washington once wrote to a friend, “commotions of this sort, like snow-balls, gather strength as they roll, if there is no opposition in the way to divide and crumble them.” The snow-ball met its match in January of 1787 at the Springfield Arsenal, a federal armory where Shays and his men were headed to procure weapons. Waiting for them was General Shepard, who fired a volley of warning shots, followed by artillery. Two Shaysites died and another twenty were wounded. The rest scattered. On February 4th, the rebellion officially ended after troops led by former Continental Army General Benjamin Lincoln ambushed Shays and his remaining rebels as they made camp. Shays, along with most of the rebellion’s other leaders, fled to Vermont.
Luckily for Shays, John Hancock soon took office as Governor of Massachusetts, and issued pardons for him and his rebels. Still, Shays’ actions proved to many that a more decisive, centralized government was needed in the U.S. Following the rebellion, George Washington, who had led the nation through the war, was elected its first president. The U.S. also abandoned the Articles of Confederation and adopted the U.S. Constitution, written by Federalists like Alexander Hamilton, who favored a stronger federal government and weaker state governments. The ups and downs of the American Revolution are enough to make your head spin.
[Image description: A stone monument in a field with two small American flags, commemorating the last day of Shays’ Rebellion.] Credit & copyright: John Bessa, Wikimedia Commons. The author of this work has released it into the public domain. -
FREEAerobics PP&T CurioFree1 CQ
The world’s full of fitness gurus, but most are known for abs of steel, not hearts of gold! That’s not the case for Richard Simmons, who made a career out of encouraging people to exercise in a compassionate way. The retired aerobics instructor, who is the subject of an upcoming biopic starring Pauly Shore, was one of the first well-known fitness personalities to encourage playful—rather than painful—at-home workouts. He also made a point to include plus-sized people in his nearly 60 straight-to-video workout tapes and DVDs.
Richard Simmons was born on July 12, 1948, in New Orleans, Louisiana, where he spent his early life. He struggled with compulsive eating from a young age, and was overweight by the time he was just four years old. At school, he was bullied by other students for his weight, which led him to comfort himself with food, furthering the problem. By the time he graduated high school, he weighed 268 pounds. A few years later, while Simmons was studying and working in Italy as a fashion illustrator, someone left an anonymous note on his car expressing concern for his health. It was this note that prompted him to move to California, where he was determined to lose weight.
Unfortunately, weight loss can bring on problems of its own. After settling down in Los Angeles, Simmons developed an eating disorder, turning to unhealthy methods to lose weight. He lost 137 pounds within a short span of time and ended up in the hospital. Determined to find a more sustainable, healthy way to maintain his weight, Simmons sought help from local fitness experts. However, he found that most trainers were too tough on their customers and focused on helping people who were already in shape. With the goal of creating a space for the “average” person, Simmons saved up $25,000 over two years and opened his own fitness studio, the Anatomy Asylum, which he eventually renamed Slimmons.
Simmons quickly found success, first as a fitness guru giving seminars and then as a TV personality. The Richard Simmons Show, which ran from 1980 to 1984, featured fitness advice and healthy cooking segments, but it wasn’t until he started releasing video tapes of aerobic exercise routines that he became a household name. Sweatin’ to the Oldies, released in 1988, was an instant success. It featured Simmons accompanied by plus-sized participants, and was designed to be non-judgmental and fun. Over the years, he released 65 videos that sold 20 million copies, forming the foundation of his multi-million dollar fitness empire. But fitness and weight loss weren’t his only concerns. To address the kind of trauma and confidence issues he struggled with himself, Simmons also released videos like Love Yourself and Win, which focused on building self-confidence and finding motivation.
Simmons never shied away from the spotlight throughout his career, which is why many people were surprised when he disappeared from public life in 2014. His sudden withdrawal has been the subject of much speculation, much of which borders on conspiracy theories, but Simmons maintains that he’s not in hiding, just retired. Said retirement may be related to a botched knee replacement which made it difficult for him to exercise vigorously. Regardless, he remains engaged with his fans on social media, even commenting on the fact that he never authorized Pauly Shore’s recent biopic. Simmons is still beloved for his positive attitude and for being one of the first fitness gurus to lead with positivity and inclusiveness. He may have been “sweatin’ to the oldies,” but he was ahead of his time.
[Image description: Colorful aerobics equipment, including yellow yoga balls and green exercise mats.] Credit & copyright: ArsAdAstra, Pixabay -
FREEWorld History PP&T CurioFree1 CQ
Don’t lose your head! During the French Revolution, that advice was particularly prudent for France’s royalty and aristocracy, many of whom were executed by guillotine during the upheaval. Even the French king, Louis XVI, was killed on this day in 1793. The revolution is still remembered as one of the most violent in history, yet at its heart it was a matter of economics: specifically, economic inequality between social classes.
During the latter half of the 18th century, France was struggling financially for several reasons. Chief among these was the country’s involvement in another revolution—the American Revolution—which had proven more expensive than they’d bargained for. Compounding the issue were years of bad weather and failed harvests that led food prices to rise. Even bread was unaffordable for many people. Yet, French royals and aristocrats continued to publicly flaunt their wealth even as lower classes starved.
The King of France, Louis XVI, wasn’t completely deaf to growing discontent among the masses, though. In 1787, he tasked his Controller General, Charles Alexandre de Calonne, with carrying out financial reforms that would remove land tax exemptions from the aristocrats. The hope was that, if aristocrats paid their fair share, more wealth would be available to everyone. When the Estates General (an assembly of three bodies representing the clergy, nobles, and the middle class) assembled at Versailles, the proposal was met with universal support from the middle class, who made up 98 percent of the country’s population, but most nobles and clergy were against it. Thus, the representatives of the middle class and sympathetic nobles held a protest until, on July 9, 1789, the King formed a National Constituent Assembly, a single legislative body with more proportional representation.
However, rumors of an impending military coup had already begun to spread, and the resulting panic led people to storm the Bastille fortress on July 14, 1789, to acquire gunpowder and other supplies. This marked the beginning of the French Revolution in earnest. What followed were years of political instability and upheavals. The king’s authority was greatly lessened by the National Constituent Assembly, and he became a de facto prisoner of the revolutionaries. Louis XVI even tried to flee the country in 1791, hoping to secure allies from abroad, but he was caught and brought back to Paris. Meanwhile, the feudal system was abolished and the Catholic Church’s landholdings were nationalized. The land was distributed to farmers and the middle class, creating tensions across the board.
To restore his authority, Louis XVI declared war on Austria, which was soon joined by Prussia. Together, the two armies crushed French forces and began marching toward Paris. When the French people saw the approaching forces, they believed the invaders had come to act as counterrevolutionaries. French forces were eventually able to turn away the invasion, but the damage was done. Believing they had been betrayed by the aristocrats and their own King, the people began executing anyone they deemed to be an aristocrat or a member of the royal family. Louis XVI was tried and found guilty of high treason, then sentenced to death. The Reign of Terror had begun.
The Reign of Terror, which lasted 10 months, saw a number of violent changes to the regime. Maximilien de Robespierre, the radical revolutionary who led the Committee of Public Safety, ordered nearly 17,000 executions without trials before being executed himself in 1794. That, in turn, marked the beginning of the Thermidorian Reaction, which sought to counter the violence of the preceding year with more moderate measures. But public discontent arising from political corruption and inefficiency eventually led the new era to end as well, as Napoleon Bonaparte staged a bloodless coup against the unpopular government. In the end, the French had revolted against a king, started a representative government, then ended up under the rule of Napoleon, who would crown himself emperor.
It wasn’t all for nothing, though. The French Revolution gave birth to radical ideas of governance and led to a rise in French nationalism. Once divided by regional identities and cultures, the French people began to identify as one unified group, a trend that would be followed by other European nations. France is, of course, a democracy today, and the storming of the Bastille is celebrated as a national holiday. It’s nice to live in a time when political reforms don’t tend to lead to decapitation.
[Image description: An engraving portraying Louis XVI in white clothing standing on an execution stage surrounded by mounted soldiers. Executioners and a priest, wearing a black robe, stand near him.] Credit & copyright: 19th century engraving, Wikimedia Commons. This work is in the public domain in its country of origin and other countries and areas where the copyright term is the author's life plus 100 years or fewer. -
FREEUS History PP&T CurioFree1 CQ
Who said that small towns weren’t interesting? New Harmony, a southern Indiana town of fewer than 700 residents, might seem like a low-profile place. Yet, it has one of the strangest histories of any town in the U.S. That’s because it was founded by the Harmony Society, a religious group that sought to make the town into an egalitarian utopia. They even built the country’s first public library and two labyrinths in the process.
The story of New Harmony began in faraway Germany, where Johann Georg Rapp and his followers broke away from the German Lutheran Church in the late 1700s to form the Harmony Society. Rapp, who publicly declared, “I am a prophet, and I am called to be one,” was forbidden from gathering with his followers, called Rappites. After convincing his congregation to emigrate to the U.S., Rapp took a handful of his 10,000 or so followers and settled down in Butler County, Pennsylvania, to fulfill their own vision of utopia.
They named their town Harmony and generated income through farming and manufacturing, but decided to relocate to a climate more favorable for growing grapes for wine. So, in 1814, they left Pennsylvania for Indiana and founded a new town they named Harmonie (sometimes called Neu Harmony) along the banks of the Wabash River. By this point, there were around 700 members in the Harmony Society, but 120 of them died of malaria while settling into their new home. Nevertheless, the second settlement to be called Harmony became a thriving economy, producing dry goods, wine, whiskey, and beer to trade with surrounding communities. At their peak, the Harmonists had 150 log homes, 20,000 acres of land, and a variety of retail buildings. However, the Harmony Society decided to relocate once again in 1824—this time, in search of land more suitable for manufacturing and commercial purposes. So they sold the settlement to a pair of business partners named Robert Owen and William Maclure for $150,000.
Owen and Maclure renamed the town New Harmony, and they had plans to start a utopia of their own. Despite their own immense wealth, they envisioned a society without social classes. They supported a variety of social causes and promoted free education, which led to the creation of America’s first public library as well as a public education system that accepted both men and women. In just a few years, Owen and Maclure attracted some of the most highly regarded academics, feminists, and naturalists, turning the small town into a haven of progressive thought and scientific research.
Unfortunately for them, the dream was not to last. By 1827, the town had become economically unsustainable under Owen and Maclure’s leadership, and the utopian society dissolved into a more conventional town. Meanwhile, the Harmony Society would face problems of their own. Returning east to Pennsylvania, they founded the town of Economy. They prospered for decades, but their community had one critical flaw: they were celibate. Without a new generation to carry on their beliefs or new converts, the Society began to dwindle. Eventually, infighting and schisms broke apart what little was left of them, and the community officially dissolved in 1905.
While the idealistic visions that shaped New Harmony’s origins may have faded, the small town is still an outlier of sorts. Despite its tiny population, New Harmony still attracts artists from around the country, and the town is dotted with sculptures and unique Harmonist architecture. Points of interest include the Roofless Church (a non-denominational, open-air church), the Atheneum (a visitors’ center designed by abstract artist Richard Meier) and not one but two mysterious hedge labyrinths built by the original Rappites. Appropriate, considering the town’s meandering history.
[Image description: A painting of a Harmonist imagining of what New Harmony could look like. It shows a walled, gated city near a river, with a family in the foreground.] Credit & copyright: F. Bate, London 1838. Wikimedia Commons. This work is in the public domain in the United States because it was published (or registered with the U.S. Copyright Office) before January 1, 1929.Who said that small towns weren’t interesting? New Harmony, a small town of less than 700 in southern Indiana, might seem like a low-profile place. Yet, it has one of the strangest histories of any town in the U.S. That’s because it was founded by the Harmony Society, a strange religious group that sought to make the town into an egalitarian utopia. They even built the country’s first public library and two labyrinths in the process.
The story of New Harmony began in faraway Germany, where Johann Georg Rapp and his followers broke away from the German Lutheran Church in the late 1700s to form the Harmony Society. Rapp, who declared publicly that, “I am a prophet, and I am called to be one,” was forbidden from gathering with his followers, called Rappites. After convincing his congregation to emigrate to the U.S., Rapp took a handful of his 10,000 or so followers and settled down in Butler County, Pennsylvania, to fulfill their own vision of utopia.
They named their town Harmony and generated income through farming and manufacturing, but decided to relocate to a climate more favorable for growing grapes for wine. So, in 1814, they left Pennsylvania for Indiana, and founded a new town they named Harmonie (sometimes called Neu Harmony) along the banks of the Wabash River. By this point, there were around 700 members in the Harmony Society, but 120 of them died while they were settling in their new home from malaria. Nevertheless, the second settlement to be called Harmony became a thriving economy, producing dry goods, wine, whiskey, and beer to trade with surrounding communities. At their peak, the Harmonists had 150 log homes, 20,000 acres of land, and a variety of retail buildings. However, the Harmony Society decided to relocate once again in 1824—this time, in search of land more suitable for manufacturing and commercial purposes. So they sold the settlement to a pair of business partners named Robert Owen and William Maclure for $150,000.
Owen and Maclure renamed the town to New Harmony, and they had plans of starting a utopia of their own. Despite their own immense wealth, they envisioned a society without social classes. They supported a variety of social causes, promoting free education, which led to the creation of America’s first public library as well as a public education system that accepted both men and women. In just a few years, Owen and Maclure attracted some of the most highly regarded academics, feminists, and naturalists, turning the small town into a haven of progressive thought and scientific research.
Unfortunately for them, the dream was not to last. By 1827, just a few years later, the town had become economically unsustainable under Owen and Maclure’s leadership, and the utopian society dissolved into a more conventional town. Meanwhile, the Harmony Society would face problems of their own. Returning east to Pennsylvania, they founded the town of Economy. They prospered for decades, but their community had one critical flaw: they were celibate. Without a new generation to carry on their beliefs or new converts, the Society began to dwindle. Eventually, infighting and schisms broke apart what little was left of them, and the community officially dissolved in 1905.
While the idealistic visions that shaped New Harmony’s origins may have faded, the small town is still an outlier of sorts. Despite its tiny population, New Harmony still attracts artists from around the country, and the town is dotted with sculptures and unique Harmonist architecture. Points of interest include the Roofless Church (a non-denominational, open-air church), the Atheneum (a visitors’ center designed by architect Richard Meier), and not one but two mysterious hedge labyrinths built by the original Rappites. Appropriate, considering the town’s meandering history.
[Image description: A painting of a Harmonist imagining of what New Harmony could look like. It shows a walled, gated city near a river, with a family in the foreground.] Credit & copyright: F. Bate, London 1838. Wikimedia Commons. This work is in the public domain in the United States because it was published (or registered with the U.S. Copyright Office) before January 1, 1929. -
FREEBiology PP&T CurioFree1 CQ
Sometimes, you just have to dig around a little to find what you’re looking for. That couldn’t be truer in the case of the De Winton’s golden mole, which was thought to be extinct for almost 90 years. It took a dog with a keen nose, DNA testing, and three years of searching, but researchers from the Endangered Wildlife Trust (EWT) managed to locate the long-lost subterranean mammal last year. Golden moles are some of the strangest animals on the planet…and technically they’re not actually moles at all.
One of 21 species of golden moles that make up the Chrysochloridae family, De Winton’s golden moles are found only in South Africa. Other species of golden moles are commonly found throughout Sub-Saharan Africa, and despite their common name, they come in a variety of colors. In fact, “Chrysochloridae” means “green-gold,” a reference to the sheen of their fur in the light. An oil secreted by golden moles gives their fur an iridescent copper sheen and lubricates it as they move around underground. These little animals are only about the size of a mouse, but they’re tough as nails. With their powerful claws, they can swim through sand dunes, protected by their thick skin. Since they live underground, they have little use for their eyes, which are vestigial and covered in skin. Oddly enough, though, Chrysochloridae aren’t true moles, which belong to the Talpidae family. The similarities between the two families are an example of convergent evolution, a phenomenon in which two or more species evolve to fill the same environmental niche (like digging underground) and thus develop similar adaptations. Despite these incredible adaptations, though, De Winton’s golden moles were nearly wiped out. Prior to their rediscovery, the last time anyone saw one was in 1936, and they were thought to have gone extinct.
Their decades-long disappearing act was caused by alluvial diamond mining in South Africa, which destroyed much of their already limited habitat. However, not everyone was convinced that the creatures were gone, though it was difficult to verify their status. The different species of golden moles look very similar to each other, so researchers couldn’t rely on sightings alone. To search for the elusive digger, EWT researchers used a two-pronged approach, with the goal of acquiring photographic and DNA evidence of De Winton’s golden moles. First, they trained a border collie named Jessie to recognize the smells of the 20 other members of Chrysochloridae, and also trained her to lie down if she smelled one. Then they took her to different sites where golden moles were known to live. When researchers found golden mole tracks and burrows but Jessie didn’t lie down, they took soil samples to test for the second piece of their plan: environmental DNA, or eDNA (skin, feces, mucus, and other genetic material left behind in the soil). The samples were collected in 2021, but it wasn’t until late 2023 that researchers were able to confirm the presence of De Winton’s golden moles in the areas they checked. Since then, two De Winton’s golden moles have been photographed, and eDNA analysis has confirmed the presence of four additional golden mole species previously unknown to science.
Unfortunately, De Winton’s golden moles are still at risk of being wiped out completely. Their habitats are still threatened by diamond mining, and they’re listed as critically endangered. The next step, EWT says, is to create protected areas where the golden moles are found and raise awareness of their plight. A senior field officer of EWT, Esther Matthew, said in a statement, “A lot of the conservation focus is on the more charismatic and big animals that people see often, while the rare ones that probably need more help are the ones that need more publicity.” If moles are good at anything, though, it’s avoiding getting trampled underfoot.
[Image description: A drawing of a brown-furred golden mole with no visible eyes.] Credit & copyright: Cornelis van Noorde (1731–1795), Wikimedia Commons. This work is in the public domain in the United States because it was published (or registered with the U.S. Copyright Office) before January 1, 1928. -
FREEWorld History PP&T CurioFree1 CQ
Happy New Year! While plenty of us will be swigging champagne as the ball drops tonight, in Spain people will be popping a dozen grapes into their mouths as fast as they can. This tradition, called “las doce uvas de la suerte,” or “the twelve grapes of luck,” has been practiced for at least a century. Everything about it, from the type of grapes used to the exact timing of their consumption, has a special meaning and connection to Spanish history.
The tradition itself is pretty simple. On the last day of the year, called “Nochevieja,” revelers watch the clock count down to midnight on the Real Casa de Correos, a famous eighteenth-century building in Madrid, either in person or on TV. When the clock hits midnight, bells start ringing to mark the New Year. That’s when the grapes come in. The bell chimes 12 times in all, and participants must scarf down a grape with each chime. The bell doesn’t wait for them to finish chewing, so they’ve got to be quick in order to eat the full dozen by the final chime. Since the whole thing is so time-sensitive, grapes have to be counted out for each person ahead of time, so they’re ready to go. To that end, people who are watching the countdown usually carry their grapes in individual bowls. If a person manages to eat all 12 grapes before the final chime, they will supposedly enjoy good luck for the rest of the year (each grape represents one month of the upcoming year). If not…well, the grapes are at least a delicious variety of green grapes (also known as white grapes) called Aledo. Aledo grapes mature late in the year (they’re harvested around November and December), making them perfect for New Year’s Eve festivities. They also have a particularly decadent flavor due to the unique way they’re grown. While still on the vine, the grapes are wrapped in paper bags during the summer months to protect them from pests and disease. Since the fruits aren’t exposed to the elements, their skin remains tender, and their slow maturation allows them to develop a rich, sweet flavor and aroma.
Despite being a much-loved and much-practiced tradition, the history of the twelve grapes of luck is somewhat murky. There are many supposed origin stories, though. One of the best-known says that farmers in Alicante, Spain, where most of the Aledo grapes are grown, had a bumper crop one year in the early 1900s. Supposedly, they came up with the twelve grapes tradition to sell off the excess, and the practice stuck around through the years. Another story claims that the tradition started even earlier. In the 1880s, the Spanish middle class supposedly started emulating their French counterparts who were known for welcoming the New Year with champagne and grapes, and that somehow evolved into the Spanish practice as it exists today.
Did we mention that the twelve grapes of luck tradition has one more, slightly more intimate component? Those who are truly serious about securing luck in the new year claim that the tradition works best if you don red undergarments (socks, briefs, panties, etc.). Some people also insist that one must wash down the grapes with a glass of Cava (a Spanish sparkling wine) with a gold ring resting at the bottom. In the weeks leading up to Nochevieja, stores around the country are filled with Aledo grapes and red undergarments. The grapes are available fresh, of course, but they’re also sold seeded and skinless in cans (twelve to a can, of course) for convenience. What a grape way to ring in the New Year!
[Image description: A small bowl of white grapes.] Credit & copyright: manfredrichter, Pixabay -
FREEMusic Appreciation PP&T CurioFree1 CQ
Merry Christmas Eve! We’ve written before about mythical (yet commercialized) Christmas figures like Santa Claus and Rudolph, but what about their chilliest wintertime pal, Frosty the Snowman? Unlike Santa, who is loosely based on a real person, and Rudolph, who was created to sell toys, for Frosty, everything started with a song. And with West Virginia…or New York, depending on who you ask.
The catchy song Frosty the Snowman was written by Steve Nelson and lyricist Walter “Jack” Rollins. Rollins originally wrote the lyrics, about a snowman who comes to life and leads a group of children on an adventure, when he was just a child himself. The words were meant to be a poem, and Rollins wrote them while he and his family were living in Keyser, West Virginia. He didn’t even get into songwriting as a career until he was 40, when he met Nelson while working as a baggage handler in New York City. The two teamed up and went on to write Here Comes Peter Cottontail, a song about the Easter bunny, which became a hit upon its release in 1949. They followed up their success in 1950 by setting Rollins’s childhood poem to music, adapting Frosty the Snowman into a song. Their decision to write a Christmas song was largely based on the success of Rudolph the Red-Nosed Reindeer, which also began as a poem. Rudolph… showed that there was a market for secular Christmas songs. The first recording of Frosty the Snowman was performed by Gene Autry and the Cass County Boys, but there have been countless covers in every conceivable genre in the years since.
It may not sound like a controversial song, but there is a pretty passionate debate regarding where the story of Frosty the Snowman takes place. In what city or town did Frosty first put on his magic top hat? While Rollins wrote the original lyrics in West Virginia, the towns of Armonk and White Plains in New York each claim that the story takes place in their respective locales. Armonk seems to have the stronger argument: Nelson lived there for many years and probably tweaked some of the lyrics based on memories of his time there. The town is so proud of their connection to Frosty that they even hold a snowman-themed parade every year.
Regardless of where it was really set, most modern listeners likely associate the song with the 1969 Frosty the Snowman Rankin/Bass television special featuring Jimmy Durante as narrator and singer. In the special, Frosty is a regular, inanimate snowman, but comes to life after the children place a magical hat on his head. When they realize that Frosty will melt as the weather gets warmer, they try to take him further north on a train stocked with ice cream cakes. Despite a sad scene in which Frosty melts while trying to warm up his human friend in a greenhouse, Santa Claus is able to bring him back to life and take him to the North Pole. Durante’s unique rendition of the titular song helped turn the special into a Christmas classic. In fact, the most widely played recording of the song nowadays is Durante’s version from the special.
Fans of Frosty will be happy to know that the titular snowman has so far kept his promise to come “back again someday.” There have been several sequels to the 1969 special, though none of them included Durante and none reached the same level of popularity. It’s been seven decades since the original song was first released, yet Frosty remains a quintessential holiday mascot. That’s a lot of staying power for someone made of snow!
[Image description: An AI-generated digital image of a snowman wearing a scarf and top hat against a blue background.] Credit & copyright: geralt, Pixabay -
FREEUS History PP&T CurioFree1 CQ
She wasn’t trying to start a revolution, but she wasn’t afraid to join one. Deborah Sampson was the first woman in U.S. history to receive a military pension—not as a spouse, but as a veteran. Born on this day in 1760, Sampson disguised herself as a man and adopted a new identity to fight in the Continental Army. Later, she toured the newly formed nation as a lecturer.
Born in Plympton, Massachusetts, Sampson had a difficult childhood. Her father was lost at sea when she was just five years old, and her family struggled financially as a result. From the age of ten, she worked as an indentured servant on a farm until she turned 18. Afterward, she found work as a schoolteacher in the summer and as a weaver in the winter while the American Revolutionary War raged on. In the early 1780s, as the war continued, Sampson tried to enlist in the Continental Army in disguise. Her first attempt ended in failure, leading to her immediate discovery and a scandal in town. That didn’t deter her, though, and her second attempt in 1782 was successful. Taking on the name Robert Shurtleff, Sampson joined the 4th Massachusetts Regiment. Her fellow soldiers didn’t catch on to her ruse and her true gender went unnoticed, although she was given the nickname “Molly” due to her lack of facial hair.
For 17 months, “Shurtleff” served in the Continental Army. Just months after joining, Sampson participated in a skirmish against Tory forces that saw her fighting one-on-one against enemy soldiers. She also served as a scout, entering Manhattan and reporting on the British troops that were mobilizing and gathering supplies there. Sampson’s cover was almost blown several times, but she was so determined to keep her secret that she even dug a bullet out of her own leg after she was shot, to avoid a doctor’s examination. As a result, she lived the rest of her life with lead still lodged in her leg. Unfortunately, she was found out after she came down with a serious illness. While in Philadelphia, she was sent to a hospital with a severe fever. She fell unconscious after arriving, and medical staff discovered her true gender while treating her. After being discovered, Sampson received an honorable discharge and returned to Massachusetts. In 1785, she married Benjamin Gannet, with whom she had three children. During this time, she did not receive a pension for her service, and she lived a quiet life. However, things changed as stories of her deeds spread due to the publication of The Female Review: or, Memoirs of an American Young Lady by Herman Mann in 1797. The book was a detailed account of Sampson’s time in the army. To promote the book, Sampson herself went on a year-long lecture tour in 1802. She regaled listeners with war stories, often in uniform, though she may have embellished things a bit. For instance, she claimed to have dug trenches and faced cannons during the Battle of Yorktown, but that battle took place a year before she enlisted. Nevertheless, her accomplishments were largely corroborated, and even Paul Revere came to her aid to help her secure a military pension from the state of Massachusetts.
Today, Sampson is remembered as a folk hero of the Revolutionary War. After she passed away in 1827 in Sharon, Massachusetts, the town erected statues in her honor. There’s even one standing outside the town’s public library. It shows her dressed as a woman, but holding her musket, with her uniform jacket draped over her shoulder. In 1982, Massachusetts declared May 23 “Deborah Sampson Day” and made her the official state heroine. That seems well-deserved, given that she was the first woman to bayonet-charge her way through the gender barrier.
[Image description: An engraving of Deborah Sampson wearing a dress with a frilled collar.] Credit & copyright: Engraving by George Graham. From a drawing by William Beastall, which was based on a painting by Joseph Stone. Wikimedia Commons, Public Domain