Curio Cabinet / Person, Place, or Thing
-
FREEUS History PP&T CurioFree1 CQ
Was it the “trial of the century” or just a bunch of monkey business? The Scopes Monkey Trial, one of the most widely publicized court cases in U.S. history, concluded on this day in 1925. Ostensibly, the case was a legal battle between the state of Tennessee and John T. Scopes, a teacher from the town of Dayton who defied the law and taught Charles Darwin’s theory of evolution in his public school classroom. In reality, the trial was about America’s views on science and religion, not to mention public education.
At its core, Scopes v. State centered on the violation of the Butler Act. The act was passed in March of 1925 by Tennessee’s state legislature, and it prohibited schools from teaching Darwin’s theory of evolution. At the time (as is still the case today), the theory of evolution was rejected by fundamentalist Christians who favored a biblical interpretation of natural history. Oddly enough, Tennessee’s public schools were at the same time required to use A Civic Biology (1914) by George W. Hunter in their classrooms, even though the textbook supported the theory. Nevertheless, soon after the act was passed, the American Civil Liberties Union (ACLU) placed ads in the state’s newspapers offering to fund the criminal defense of any teacher willing to break the new law. The idea was to test the law in court and have it found to be unconstitutional. It wasn’t until a Dayton businessman named George W. Rappleyea saw economic potential in the case that anyone challenged the Butler Act. Rappleyea believed that such a controversial case would increase Dayton’s visibility, revitalizing the town. With Rappleyea’s support, several prominent residents of the town encouraged 24-year-old high school football coach and teacher John T. Scopes to place himself within the legal crosshairs of the state.
When Scopes was charged with violating the Butler Act soon thereafter, he was represented by famed criminal defense lawyer Clarence Darrow. On the prosecution’s side was prominent politician and attorney William Jennings Bryan, who also served as a Bible expert during the trial. The trial certainly did bring the sleepy town of Dayton into the national spotlight. The Scopes Trial was the first to be broadcast nationally, and was heard as far away as London and Hong Kong. Residents of Dayton were roused by the controversy and sensationalism on display, gathering at the courthouse in such numbers that the judge moved the trial out to the lawn for fear of the courthouse collapsing under the weight of the crowd. Regardless of where it took place, it was clear early on that Scopes and Darrow were fighting an uphill battle. The judge forbade any discussions regarding the scientific validity of evolution or the constitutionality of the Butler Act, stating that the court was only concerned with whether or not Scopes had violated the law. Still, Darrow took the opportunity to attack Bryan’s credibility as a Bible expert. A famous proponent of anticlericalism, Darrow was used to criticizing fundamentalist interpretations of the Bible. When Darrow cross-examined Bryan, a self-proclaimed expert on scripture, he ridiculed Bryan for his inability to reconcile the contradictions in a literal reading of the Bible. Then, on the last day of the trial, the unthinkable happened: in his closing statement, Darrow, the defense counsel, asked the jury to find Scopes guilty so that the case could be appealed to a higher court. Under Tennessee state law, this move denied Bryan the right to give his own closing statement.
In the end, Scopes was found guilty and fined $100. However, due to a procedural error in the way the fine was determined, the verdict was overturned by the Tennessee Supreme Court. In 1955, the sensational story of the case was adapted into a play, Inherit the Wind, which was itself adapted into a film. As for the Butler Act, it wasn’t repealed until 1967. Today, the theory of evolution still invites controversy in certain places. Maybe we’ll see a federal case about it someday.
[Image description: Description] Credit & copyright: Tree of Life by Ernst Haeckel (1834–1919), 1879. Wikimedia Commons, this media file is in the public domain in the United States.
-
FREEHumanities PP&T CurioFree1 CQ
You’ve heard of Zeus, but have you ever wondered how the king of the gods came to power? The pantheon of ancient Greek gods is full of familiar names, from Aphrodite to Poseidon. But in Greek mythology, these deities weren’t the first to rule the heavens and earth. That distinction belongs to the Titans.
According to the ancient Greeks, the creation of the universe involved something coming forth from nothing…with a lot of family drama following after. At first, there was only Chaos, a cosmic void from which the first beings emerged. Then came the three primordial deities: Gaia (the Earth itself), Tartarus (the underworld), and Eros (desire). Gaia’s son, Uranus, was the sky, and also the father of her other 18 children. Twelve of the children were the Titans (the first gods), three were the one-eyed Cyclopes, and the final three were the Hecatoncheires, each of whom had 50 heads and 100 arms. Appalled by their monstrous appearance, Uranus imprisoned the Cyclopes and the Hecatoncheires in Tartarus. Naturally, this made their mother very angry. In retaliation for Uranus’s cruelty, Gaia gave her son, the Titan Cronus, a sickle with which he castrated his father and ultimately overthrew him. Unfortunately for Cronus, history tends to repeat itself, even for gods.
After imprisoning his father in Tartarus, Cronus ruled over the Titans and married his sister, Rhea. Rhea gave birth to six children: Hestia, Demeter, Hera, Hades, Poseidon, and Zeus. Worried that one of these children would depose him as he had deposed his own father, Cronus decided on a violent plan of action. He swallowed his children one by one, but Rhea managed to save Zeus by giving her husband a rock disguised as her son. Zeus was raised in secret, and once he came of age, he returned to his father to exact his revenge. First, he poisoned Cronus to make him vomit, freeing his brothers and sisters. With the help of his siblings, Zeus then set in motion the Titanomachy, a ten-year conflict between the Olympian gods and the Titans. Zeus and his cohort allied with the Cyclopes and the Hecatoncheires, with the former creating the iconic weapons of the Olympians: Zeus’s thunderbolts, Poseidon’s trident, and Hades’s helmet of darkness. The Olympians, of course, emerged victorious, thanks in no small part to these powerful weapons. After his defeat, Cronus was exiled, cursed to count the passing of time and age, earning him the moniker “Old Father Time.” Atlas, the Titan who led his kin into battle, was punished by having to hold up the heavens for all eternity. Meanwhile, Zeus and the others settled on the summit of Olympus in a palace built by the Cyclopes. Not all the Titans were cast out by the Olympians, though, and there was still more conflict to come.
One of the most famous Titans was Prometheus. Not only did he escape imprisonment in Tartarus, but he and his twin brother, Epimetheus, were also tasked by Zeus with creating mankind. However, Prometheus was angry that his creations were left in the cold, without any reliable way to keep warm or reach their true potential. Feeling pity for them, Prometheus stole fire from Olympus and brought it to the humans even though doing so was forbidden by Zeus. Along with this powerful gift, Prometheus taught them mathematics, astronomy, sailing, and architecture. Thanks to these divine boons, the humans thrived, building mighty kingdoms of their own. In time, they even came to question the power and authority of the gods, which deeply angered Zeus. Discovering Prometheus’s betrayal, Zeus punished the Titan by chaining him to a cliff. There, a giant vulture came each day to eat Prometheus’s liver, which grew back overnight. It wasn’t until Heracles came around centuries later and killed the vulture that the Titan was freed. What’s a few eons of torment if it means you can take credit for mankind's greatest achievements?
[Image description: A painting of the ancient Greek Titans, depicted as large male figures, falling into the darkness of Tartarus.] Credit & copyright: The Fall of the Titans (c. 1596–1598), Cornelis van Haarlem (1562–1638). National Gallery of Denmark, Copenhagen. Wikimedia Commons. The author died in 1638, so this work is in the public domain in its country of origin and other countries and areas where the copyright term is the author's life plus 100 years or fewer.
-
FREEPP&T CurioFree1 CQ
In honor of the holiday weekend, enjoy this curio from the archives about one of the Revolutionary War's most unlikely figures.
She wasn’t trying to start a revolution, but she wasn’t afraid to join one. Deborah Sampson was the first woman in U.S. history to receive a military pension—not as a spouse, but as a veteran. Born on this day in 1760, Sampson disguised herself as a man and adopted a new identity to fight in the Continental Army. Later, she toured the newly formed nation as a lecturer.
Born in Plympton, Massachusetts, Sampson had a difficult childhood. Her father was lost at sea when she was just five years old, and her family struggled financially as a result. From the age of ten, she worked as an indentured servant on a farm until she turned 18. Afterward, she found work as a schoolteacher in the summer and as a weaver in the winter while the American Revolutionary War raged on. In the early 1780s, as the war continued, Sampson tried to enlist in the Continental Army in disguise. Her first attempt ended in failure, leading to her immediate discovery and a scandal in town. That didn’t deter her, though, and her second attempt in 1782 was successful. Taking on the name Robert Shurtleff, Sampson joined the 4th Massachusetts Regiment. Her fellow soldiers never caught on to her ruse, though she was given the nickname “Molly” due to her lack of facial hair.
For 17 months, “Shurtleff” served in the Continental Army. Just months after joining, Sampson participated in a skirmish against Tory forces that saw her fighting one-on-one against enemy soldiers. She also served as a scout, entering Manhattan and reporting on the British troops that were mobilizing and gathering supplies there. Sampson’s cover was almost blown several times, but she was so determined to keep her secret that she even dug a bullet out of her own leg after she was shot, to avoid a doctor’s examination. As a result, she lived the rest of her life with fragments of the musket ball still in her leg. Unfortunately, she was found out after she came down with a serious illness. While in Philadelphia, she was sent to a hospital with a severe fever. She fell unconscious after arriving, and medical staff discovered her true gender while treating her. After being discovered, Sampson received an honorable discharge and returned to Massachusetts. In 1785, she married Benjamin Gannett, with whom she had three children. During this time, she did not receive a pension for her service, and she lived a quiet life. However, things changed as stories of her deeds spread due to the publication of The Female Review: or, Memoirs of an American Young Lady by Herman Mann in 1797. The book was a detailed account of Sampson’s time in the army. To promote the book, Sampson herself went on a year-long lecture tour in 1802. She regaled listeners with war stories, often in uniform, though she may have embellished things a bit. For instance, she claimed to have dug trenches and faced cannons during the Battle of Yorktown, but that battle took place a year before she enlisted. Nevertheless, her accomplishments were largely corroborated, and even Paul Revere came to her aid to help her secure a military pension from the state of Massachusetts.
Today, Sampson is remembered as a folk hero of the Revolutionary War. After she passed away in 1827 in Sharon, Massachusetts, the town erected statues in her honor. There’s even one standing outside the town’s public library. It shows her dressed as a woman, but holding her musket, with her uniform jacket draped over her shoulder. In 1982, Massachusetts declared May 23 “Deborah Sampson Day” and made her the official state heroine. That seems well-deserved, given that she was the first woman to bayonet-charge her way through the gender barrier.
[Image description: An engraving of Deborah Sampson wearing a dress with a frilled collar.] Credit & copyright: Engraving by George Graham. From a drawing by William Beastall, which was based on a painting by Joseph Stone. Wikimedia Commons, Public Domain.
-
FREESports PP&T CurioFree1 CQ
As a rule, humans aren’t the world's best swimmers…but rules were made to be broken. While most members of our terrestrial species are much faster on land than in the water, Olympian Michael Phelps is a notable exception. This record-breaking athlete, born on this day in 1985, has a unique physiology that makes him perfectly suited for the pool, and an aquatic nickname to match.
Phelps began swimming at the age of seven, following in his sisters’ footsteps after they joined a local swim team. Long before he boasted nicknames like “Flying Fish” and “Baltimore Bullet,” he swam competitively for his high school team and even made it onto the U.S. Swim Team at the 2000 Summer Olympics in Sydney. Though he didn’t win any medals that year, he still made history as the youngest male swimmer on a U.S. Olympic team in 68 years. He began setting world records while still in high school, a trend that continued when he attended the University of Michigan in Ann Arbor. It was during Phelps’s second Olympics appearance in 2004, in Athens, that he became a household name after winning eight medals, including six golds. After not winning a single medal at his first Olympics, Phelps was suddenly just one gold away from Mark Spitz's record of seven. He went on to break the record during the 2008 Summer Olympics in Beijing by winning eight gold medals, which also set the record for the most golds at a single Olympics. By the time he retired in 2016 after the Summer Olympics in Rio de Janeiro, he had 28 medals to his name, with 23 golds, including 13 individual golds.
While hard work and perseverance surely played a role in Phelps’s dominance in the water, he also benefited from having what may be the ideal swimmer’s body. Most of the best swimmers in the world have a similar body shape that gives them an advantage over the average person, beyond their training. Firstly, it pays for a swimmer to be tall, and indeed, most of the top Olympic swimmers hover around six feet tall. But proportions matter too, with long, flexible torsos allowing for more power behind strokes and a center of mass closer to the lungs (the center of flotation) allowing for less energy wasted in trying to stay level in the water. It also helps to have large hands and feet, which act like paddles or flippers in the water, while large lungs help swimmers stay afloat and take in more oxygen. Many swimmers have these traits, but Phelps’s physique seems to take some of them to an extreme. His lung capacity sits at 12 liters, twice that of the average person, and he has double-jointed elbows. He’s also hyper-jointed at the chest, allowing him to leverage more of his body to power each stroke. Even for a swimmer, he has a massive “wingspan,” the distance from fingertip to fingertip when the arms are held out horizontally from the body. While most people have wingspans that are about the same as their height, Phelps’s wingspan of six feet, seven inches is three inches longer than he is tall. Finally, his body was found to produce half as much lactic acid as even other trained athletes, which allows him to recover faster between training sessions.
All that isn’t to discount his talent. While Phelps may have been gifted with natural advantages, his drive and willingness to train hard are even more important. Those who’ve worked with Phelps have often expressed that the true secret behind the swimmer’s success is his immaculate technique, which can only come from extensive training. Swimming is extremely inefficient for human beings, so every movement of every stroke counts, especially at elite levels where a fraction of a second can make all the difference. It wouldn’t matter if you had shark skin and flippers for feet if you didn’t know how to use them!
[Image description: A large, empty swimming pool with blue-and-white lane dividers.] Credit & copyright: Jan van der Wolf, Pexels
-
FREESports PP&T CurioFree1 CQ
Some people think he was a great baseball player. Everyone else knows he was the greatest. Willie Mays passed away on June 18 at the age of 93, and even though he had been retired for decades, no one else in the league ever managed to best his numbers. At bat or in center field, there still isn’t anyone quite like the “Say Hey Kid.”
Willie Howard Mays Jr. was born on May 6, 1931, in Westfield, Alabama, to Annie Satterwhite and Willie Mays Sr., a semi-professional baseball player. Though he was raised by relatives after his parents separated, Mays seemed to follow in his father’s footsteps, showing an interest in baseball from an early age. As a teenager, he moved to Fairfield, where he played sporadically for the Fairfield Stars in the Birmingham Industrial League. In 1948, Mays was just 16 years old and still attending high school when he signed with the Birmingham Black Barons of the Negro Leagues. Mays played for the Black Barons until he graduated high school, after which he signed with the Giants, then based in New York.
The baseball wunderkind proved in his rookie season in 1951 that his early success wasn’t just a fluke. Moreover, he showed that he was an exceptional all-around player. Though the Giants lost the World Series to the New York Yankees that year, Mays was named the National League Rookie of the Year for his superb defensive performance. Just a few years later, the Giants would have a historic season when they went on to win the 1954 World Series. It was in Game 1 against the Cleveland Indians that Mays pulled off “The Catch,” an over-the-shoulder grab that seemed like a magic trick at the time. “The Catch” happened after Vic Wertz hit a fly ball deep into center field, and Mays took off after it with his back to the plate at a dead sprint. Catching the ball meant he kept the players on base from scoring, and all but secured a Giants win for the series opener—all with undeniable flair. Mays followed the Giants when the team moved to San Francisco, where he played until he was traded to the New York Mets in 1972. Throughout his career, Mays played in 24 All-Star Games, was awarded the Gold Glove 12 times, and hit 660 home runs, all while stealing bases left and right and keeping center field a perilous place for opposing hitters. But it wasn’t just his presence on the field that gained him a following. He was a beloved personality off the field. He was given the nickname “Say Hey Kid,” and while accounts of its origins vary, Mays himself said at one point that it was due to his habit of addressing people with “Say hey” when he couldn’t remember someone’s name during his rookie year.
Even after retiring in 1973, Mays remained an inspiration to many Black Americans. After Jackie Robinson broke down racial barriers in 1947, Mays further pushed against the racist barriers that Black athletes faced. He was a player who could not be ignored, whose dramatic plays and charisma won games as well as hearts. For many in his time, Mays was the face of baseball, a superstar of a sport that had only recently—and begrudgingly—integrated. When he was awarded the Presidential Medal of Freedom in 2015, President Obama said of him, “It's because of Giants like Willie that someone like me could even think about running for president.” To this day, Mays is cited as an inspiration by Black baseball players, who continue to be underrepresented in the sport. It seems that this legendary giant had plenty of room on his shoulders.
[Image description: A red baseball glove, a baseball bat, and four baseballs on a wooden bench.] Credit & copyright: Tima Miroshnichenko, Pexels
-
FREECooking PP&T CurioFree1 CQ
As Pride Month continues, so does our celebration of extraordinary LGBTQ+ figures. This week, we’re taking a closer look at the late, great American chef James Beard. Even outside the culinary world, the James Beard Award is well known as one of the most coveted prizes that a chef or restaurant can receive. Read on to learn how this award’s namesake became one of America’s first culinary superstars, and the unconventional way he chose to come out later in life.
Born May 5, 1903, Beard grew up in Oregon, where his parents taught him to fish and forage for food in the bountiful waters and forests of the Pacific Northwest. He was also exposed to fine dining as a child, as his mother ran a boarding house and was known for her cooking. While a passion for cooking with locally sourced ingredients was thus imparted to him at a young age, Beard’s first career choice had nothing to do with the kitchen. Instead, he traveled abroad and trained for the theater as a young man, but he never found much success as an actor and struggled to make ends meet. Beard returned to the U.S. in 1927, but had no better luck in the entertainment industry stateside. In 1937, Beard started a catering business called Hors d’Oeuvre Inc. to supplement his income. Not too long afterward, though, this enterprise born out of necessity became a financial success and reignited his childhood passion for cooking.
In 1940, Beard published his first cookbook, Hors d’Oeuvre & Canapés, and in 1942, he published Cook It Outdoors. Then, in 1946, he achieved his former ambition of making it onto the screen in a roundabout way, when he began hosting a cooking segment on I Love to Eat on NBC. His books and TV appearances made him a household name in post-WWII America. What set him apart from other culinary personalities emerging around the same time was his focus on identifying and creating distinctly American dishes. As much as Italian and French cuisine were beginning to capture home cooks’ imaginations, Beard defined American cuisine as a worthy contender with its own unique traditions and merits. That’s not to say that he was one to snub other culinary traditions, of course. He himself was well-traveled and wrote extensively about everything he tried in the U.S. and abroad, particularly in Europe. Beard was also close friends with Julia Child, another American food personality who was responsible for making French cuisine accessible to the average home cook. The two met in 1961 and remained close until Beard passed away in 1985. She once said of him, “People just adored him. He was so jolly, so nice, and so generous… He was so open, he had such a general love of food, and I think he encouraged everybody.” Child was instrumental in the creation of the James Beard Foundation after his death, which awards exceptional contributions to American culinary arts and related fields.
Sadly, as successful as he was professionally, Beard’s fame made him feel pressured to keep his sexuality hidden from the public for most of his life. He only came out in 1981 in the revised version of his autobiography, Delights & Prejudices: A Memoir with Recipes, where he wrote about his relationship with his partner, Gino Cofacci. The couple spent 30 years together, and when Beard passed away in 1985, the late chef left Cofacci an apartment in his townhouse. Even so late in life, coming out was a risky thing for a celebrity like Beard to do, especially during the AIDS epidemic and anti-LGBTQ atmosphere of the 1980s. Then, as now, being a celebrity can be a double-edged chef’s knife.
-
FREEUS History PP&T CurioFree1 CQ
In honor of Pride Month, we’re taking a look at a hero of the Revolutionary War: Prussian nobleman Baron von Steuben. His work teaching European military techniques to struggling American troops helped turn the tide of the war. Von Steuben was also a gay man at a time when being so was a crime, and he was persecuted for it, despite his military expertise.
Born in 1730 to a military family, von Steuben enlisted in the Prussian army at 16 or 17. After 17 years of service, von Steuben left the army as an experienced captain and a veteran of the Seven Years’ War. Despite having distinguished himself during service, von Steuben was dismissed from the military in 1763. The dismissal came at a time when the Prussian military was downsizing during an extended period of peace, but some historians believe that his sexuality might have played a role in his ouster. Afterward, von Steuben found work as a court chamberlain for 11 years, but yearned for military service once more. Yet with Europe in a state of relative peace, military positions were few and far between. After the American Revolutionary War broke out, von Steuben was offered a job in the Continental Army by Benjamin Franklin, but von Steuben balked at the idea, as he wished to remain in Europe. Not long after, he was offered a military position in Baden, Germany, but the offer fell through when an anonymous letter accused von Steuben of having “taken familiarities” with other men in a previous job. Without any other options and unwilling to risk criminal charges, von Steuben took Franklin’s still-standing offer and sailed for America in 1777.
When von Steuben arrived in America, an inflated reputation preceded him. At the time, American officers were growing resentful of the influx of European officers, and Franklin had embellished von Steuben’s rank and accomplishments to placate them. After getting acquainted with key political figures, von Steuben was sent to Valley Forge to serve under George Washington, with Alexander Hamilton and John Laurens as his aides. There, he found the camp in shambles, desperately in need of order and discipline. The army was plagued by low morale and poor discipline, both exacerbated by the brutal winter. Faced with the daunting task of getting the men into fighting shape by spring, von Steuben got to work as a drillmaster, teaching the troops how to march in formation and reorganizing the chain of command, giving officers more responsibilities.
All the while, von Steuben made little effort to hide his sexuality, which both Franklin and Washington knew of but considered irrelevant for his role. Though the men of the camp found von Steuben to be a strange figure (he couldn’t speak English save for a few curse words), they respected him nonetheless. In fact, von Steuben’s comparatively outlandish mannerisms seemed to command the men’s attention, or at the least, their curiosity. He also threw extravagant parties at camp for the officers, who returned the favor in kind by donating their rations for feasts. By the next year, the once ragtag band of soldiers marched, shot, and charged like veterans. The men of Valley Forge, who were driven to fight by patriotic passion, had been tempered by Prussian military prowess.
After the war, von Steuben was granted American citizenship and a large estate in New York. He lived out the rest of his life there with William North and Benjamin Walker, who had served him as aides-de-camp. North and Walker were both legally adopted by von Steuben, a common practice at the time for homosexual men to ensure their partners could inherit their property. Today, von Steuben’s contributions to the American Revolution have been largely forgotten, though that is beginning to change. In recent years, LGBTQ members of the military have made efforts to shed light on von Steuben’s role in the war, and he is recognized by many as the father of America’s professional Army. You could say that his work should be the pride of the nation.
[Image description: A painted portrait of Baron von Steuben outdoors in a military uniform.] Credit & copyright: Ralph Earl (1751–1801), Friedrich Wilhelm von Steuben, 1786. Fenimore Art Museum, N0198.1961, Wikimedia Commons. This work is in the public domain in its country of origin and other countries and areas where the copyright term is the author's life plus 100 years or fewer.
-
FREEBiology PP&T CurioFree1 CQ
Can you keep up with the changes? As summer approaches, countless critters from tree frogs to butterflies are busy going through the process of metamorphosis. One of the strangest and most amazing processes in the biological world, metamorphosis occurs very differently in different species. From insects to frogs to jellyfish, the process of transforming comes in many forms.
Metamorphosis is a process in which an animal goes through drastic physical changes in distinct stages as it matures. Though only a few species (like butterflies) are famous for metamorphosis, the fact is that 60 percent of all animals—both vertebrates and invertebrates—go through metamorphosis at some point in their life cycles. All beetles, for example, begin their lives as larvae, which eventually become pupae before emerging as adult insects. Yet, for such a widespread biological process, little is known about how metamorphosis first evolved. Since so many different kinds of animals metamorphose, it’s likely that a host of different evolutionary pressures led the process to develop, and it might have even done so more than once, in different lineages. One hypothesis specific to insects suggests that the development of wings may have had something to do with it. Insects that go through a larval stage, for instance, still molt as they grow bigger, periodically shedding their outer layer of skin. Molting with wings, however, is so difficult that only mayflies (of which there are 3,000 extant species) bother with it. Other insects only develop wings in their adult forms, having spent their youths concerned only with eating and growing. Some non-insects might have developed metamorphosis as a sort of early-stage defense mechanism. Tadpoles, for example, can spend their early lives relatively safe in shallow, stagnant water where there aren’t many predators. There, they can feed and grow until they’re big enough to emerge as frogs or toads.
While some animals simply grow new limbs during metamorphosis, others take a much more extreme approach. Moon jellyfish, for example, begin life as polyps attached to the seafloor and eventually develop a segmented, stalk-like body. The segments then break off, and each individual segment becomes a separate, adult jellyfish. Butterflies and moths also go through a famously extreme change. After hatching, their larvae feed and grow until they’re ready to become pupae. Inside their cocoons or chrysalises, their bodies completely liquefy, becoming a “living soup” of cells and proteins. Their adult bodies form slowly, over the course of days to weeks, practically from scratch. Despite extensive study, scientists still aren’t entirely sure how every step of this process works.
When it comes to metamorphosis-related mysteries, moths and butterflies once again provide a strange example, as these adult insects can somehow remember things they learned as caterpillars. Scientists at Georgetown University in Washington, D.C., trained tobacco hornworm caterpillars to associate the scent of ethyl acetate with mild electric shocks. Then, they allowed the caterpillars to metamorphose as usual and waited until they emerged as adults. When these adults were exposed to ethyl acetate again, an astounding 78 percent of them still avoided the chemical. An earlier experiment involving fruit flies showed that insects can retain information learned as larvae into adulthood, but this was the first time the effect had been tested in caterpillars. The experiment shows that, despite their bodies completely liquefying during metamorphosis, some part of the insects’ brains remains intact enough to retain information through the transformation. From soup to nuts, that’s got to be a strange way to grow up.
[Image description: A butterfly with a white body and wings with a black, white, yellow, and orange pattern perches on a yellow flower.] Credit & copyright: Jeevan Jose, Kerala, India. Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREELiterature PP&T CurioFree1 CQ
Quality and quantity aren’t always mutually exclusive. It’s a lesson that French novelist and playwright Honoré de Balzac had to learn for himself, but once he did, he became one of the most renowned and prolific writers of his time. Born this month in 1799, Balzac is largely credited with setting the standard for the modern-day novel.
Balzac was born May 20, 1799, in Tours, France. His surname was originally Balssa, but the author changed it later in life because he felt that “Balzac” sounded more auspicious. After he was born, Balzac was raised by a wet nurse until he was weaned, a common practice at the time. Yet nearly as soon as he returned to his parents, he was sent away to school. At the age of 16, he began working as a lawyer’s clerk, but just three years later, he left the profession to become a writer. The young author found little success in his early literary endeavors and had little support from his family. Along with writing several novels he didn’t even publish under his own name, Balzac suffered crippling financial blows from a series of unsuccessful business ventures that left him deep in debt. Motivated by his need to pay off his creditors (including his own mother), he dove headfirst into his writing. It was an unconventional start to what became a distinguished career.
It’s an understatement to say that Balzac was not a man of moderation. When he wrote, he did so ceaselessly, for hours or sometimes days. Fueled by unchecked quantities of black coffee (some sources say as many as 50 cups a day), he churned out page after page of handwritten work, barely stopping to eat or sleep. When he wasn’t writing, Balzac made himself known in Parisian society through scandalous affairs and affectations of grandeur. Aside from changing his name to blend into high society, Balzac also indulged in luxuries beyond his means and used the coat of arms of an unrelated family to represent himself. These efforts were actually fairly successful, and Balzac earned notoriety for being a gregarious braggart as much as for being a writer. As for his body of work, it was informed by his intimate understanding of Parisian society. His characters are known for their complexity and distinctly French idiosyncrasies, which made them seem very real in their time. Balzac was also known for portraying objects and locations in such vivid detail that they almost became characters of their own. Thus, his stories had a depth and wealth of description not commonly found in other novels of his era. That’s especially true of his magnum opus, La Comédie humaine, or The Human Comedy in English. Written between 1829 and 1848 and consisting of 91 novels and novellas, La Comédie humaine is a collection of interconnected stories that showcase every level of Parisian society in the years between the French Revolution and the Revolution of 1848. Through this series, Balzac explores the moral and philosophical ideas at the heart of the clashes between France’s social classes, covering everything from economics to romance. Unfortunately, Balzac died relatively young, at the age of 51, following a brief period of illness, just a few months after his marriage to his longtime correspondent and romantic interest Ewelina Hańska. Some believe that his heart failure was the result of his lifelong, excessive coffee consumption.
Today, Balzac is remembered for popularizing the modern format of the novel. Unlike many writers of his time, he favored an omniscient narrator who presented the story with a logical flow, and he portrayed interesting, flawed, relatable characters. Some have even called him the “Shakespeare of the Novel” for his witty dialogue and for his part in shaping the literary format. Drink a cup of coffee in his memory if you’d like…but maybe just the one.
[Image description: An artistic depiction of a young Honore de Balzac in sepia tones.] Credit & copyright: Achille Devéria (1800–1857), Wikimedia Commons. The Museums of the City of Paris, Balzac’s House. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEWorld History PP&T CurioFree1 CQ
It’s one of the most tragic tales in all of history: a massive loss of knowledge that set humanity back by decades…right? Maybe not. The burning of the Library of Alexandria is certainly a dramatic tale, but in recent years many scholars have begun to question its validity. Not only do most accounts of the library’s burning come from many years after the supposed event itself, but no one can even agree on who did the actual burning.
The Library of Alexandria, built sometime around 331 B.C.E. in Alexandria, Egypt, was one of the largest and most comprehensive libraries of its day. Part of a research institution called the Mouseion (which later came to include another, smaller library), the Library of Alexandria was likely the brainchild of Ptolemy I Soter, pharaoh of Ptolemaic Egypt, who began collecting papyrus scrolls for it long before a building was created to house them. His son, Ptolemy II Philadelphus, more likely oversaw the actual construction of the library itself during his own, subsequent reign. It was an ambitious project. The idea was for a true, universal library where all knowledge from around the world could be stored. To that end, Ptolemy II Philadelphus collected scrolls from wherever and from whomever he could (scrolls, not bound books, were how written works were distributed at the time). The pharaoh might have been considered a hoarder of knowledge if not for one important detail—he made high-quality copies of almost every scroll he received and gave the copies back to the people who had provided the originals, usually historians or other scholars. After all, it would have caused bad blood and alienated Philadelphus from the scholarly world if he had simply taken these works for himself without permission…and permission usually hinged on a copy being provided. This means that, though the Library of Alexandria housed an impressive collection of knowledge that made Alexandria itself famous as a city of learning, much of that knowledge also still existed outside of the library’s walls.
That’s lucky, since the library did eventually come to ruin. How, exactly, that happened is still a source of debate, despite the longstanding myth that the library was purposefully burned in a single day. At the height of its popularity, the library housed somewhere between 40,000 and 400,000 scrolls, but that popularity eventually waned. In 145 B.C.E., Ptolemy VIII Physcon came to power, and he had very different ideas about knowledge than his predecessors. His reign was a violent one and included several massacres that saw many Alexandrian intellectuals killed or exiled. Scholars had been the lifeblood of Alexandria’s library, and without them it fell into decline. Then, there was a fire. The two most common stories about the library’s burning implicate either Julius Caesar or Caliph Umar, who led the Arab conquest of Alexandria in 642 C.E. The second story is easily dismissed, since other sources point to the library already being gone by the time of that particular invasion. As for Caesar, he may have burned the library…but it was probably an accident. According to the ancient Greek philosopher and historian Plutarch, in 48 B.C.E., during Caesar’s civil war, Caesar set fire to a fleet of Egyptian ships in Alexandria’s harbor. Due to windy weather, the flames spread to the library, which burned with all the scrolls inside. However, many historians now believe that the library survived this accidental burning, and may even have been rebuilt afterward, since there are records of other historical figures visiting the library after Caesar’s war was over.
Ultimately, the library likely died due to a problem that still plagues libraries today: a lack of funding. During the Roman period, those in power simply stopped prioritizing the library’s upkeep, and it fell into disrepair. The Palmyrene Invasion of 270 C.E. likely destroyed the rest of the already-unkempt structure. Still, it’s unlikely that the loss of the library set humanity’s overall progress back, despite stories to the contrary. After all, much of the knowledge inside had already been copied. Then, as now, it pays to back up your work!
[Image description: A black-and-white illustration depicting the burning of the Library of Alexandria with a crowd of people rushing toward the flames.] Credit & copyright: Ambrose Dudley (1867–1951), The Burning of the Library at Alexandria in 391 AD. Bridgeman Art Library: Object 357910, Wikimedia Commons. This work is in the public domain in the United States because it was published (or registered with the U.S. Copyright Office) before January 1, 1929. -
FREESports PP&T CurioFree1 CQ
Can you hold on? That simple question is the heart of bull riding, the most popular event in modern rodeos. There are plenty of other events too, though, from barrel racing to calf roping, and all of them grew out of what were once daily chores for ranch workers. Some rodeo fans might be surprised to learn that, despite the sport’s all-American image, rodeos were originally shaped by Mexican and Spanish traditions. In fact, the very word “rodeo” comes from the Spanish word “rodear,” which means “to surround” or “to go around,” a reference to rounding up cattle.
While humans have surely been trying to ride unruly animals since time immemorial, rodeos as they are today have their origins in 19th-century Mexico. The pioneers of the sport were vaqueros—Mexican cowboys—who traveled constantly in search of work. Vaqueros, like their American counterparts, were a rowdy, diverse bunch hailing from all ethnic backgrounds. The work of driving cattle was difficult, unglamorous, and basically available to anyone willing to take the job. Due to their lifestyle, the vaqueros had little money or property to their names. What they could own were bragging rights by showing off the skills of their trade. Vaqueros would come together between busy seasons to participate in competitions that tested their abilities. The most prestigious event was, unsurprisingly, riding broncos—wild horses who did not take kindly to strangers on their backs. These mighty beasts would buck and jump in an effort to throw off a rider, and whoever could hold on longest was the winner. It was a simple format for a sport, but it’s mostly the same today.
Rodeos began gaining popularity in the American West with the annexation of Texas in 1845 and the acquisition of much of northern Mexico a few years later. Along with the land came vaquero culture, which mingled with existing cowboy culture in U.S. territories. The first American to really bring the sport into the limelight was William F. Cody, better known as Buffalo Bill. Cody wasn’t just another cowboy in the fading Wild West of America, but an enterprising showman. Until then, rodeos had been small, loosely organized events used to pass the downtime in rural areas. Cody began marketing rodeos to large audiences as a sporting extravaganza. In 1883, Cody launched Buffalo Bill’s Wild West Show, which featured a variety of acts like stunt shows and sharpshooters. But the most enduring events were bronc riding and bull riding, the latter of which had roots in bullfighting and bulldogging, in which a cowboy would attempt to wrestle a bull to the ground. If these activities sound absurdly dangerous, that’s because they are, but that was also their appeal. Buffalo Bill’s show drew in 3 million attendees in 1893 during the World’s Columbian Exposition in Chicago, and the show ran for decades.
Today, the rules are much the same: for either broncs or bulls, hold on for dear life for eight seconds. There are more rules, of course, such as using only one hand and being scored based on the difficulty of the mount. There have also been some changes over the years, like helmets and puncture-proof vests for bull riders. When those aren’t enough, though, rodeo clowns come in to distract the animal and get a fallen rider to safety. The biggest bull riding league today is Professional Bull Riders (PBR), which draws talent from around the world to events in the U.S., Canada, Brazil, and Australia. They may also be implementing a major change: instead of eight seconds, PBR is shortening the ride to just six seconds to address the increasingly stronger animals being bred for the sport. You just can’t take those bulls by the horns…seriously, it’s not advised to do so.
[Image description: A black-and-white photo of a man riding a bull with one hand in the air.] Credit & copyright: Published by Southwest Georgia Regional Library, Bainbridge, Georgia. 1944. Wikimedia Commons. This media file is in the public domain in the United States. -
FREEMind + Body PP&T CurioFree1 CQ
The illnesses just keep coming! First it was COVID-19, then a bird flu scare. Now, people are concerned about another disorder that might be making the leap from animals to humans: chronic wasting disease (CWD). For years, this fatal illness has only affected cervids (members of the deer family), but a recent case involving two hunters has some people (and government agencies) concerned that it could impact people as well…assuming that those people eat contaminated venison.
Unlike COVID-19 or bird flu, CWD isn’t caused by a virus. Rather, it’s a prion disease, like mad cow disease. Prions aren’t alive like bacteria and other microbes, nor do they contain genetic material like viruses. Rather, they’re misfolded proteins that cause other proteins to become similarly misfolded. As a result, prions can cause a cascade effect, bumping into proteins and creating copies of themselves, destroying the ability of infected tissue (usually in the brain) to function properly. In short, a prion is like an immortal bull in a china shop, except that every time it breaks a plate, that plate becomes another bull. Compounding their danger is the fact that prions are resistant to treatments that are effective on most pathogens, and they can last a long time—even years—if left undisturbed. Prions can develop spontaneously in otherwise healthy organisms, but most well-known cases involve transmissions of existing ones.
CWD was first discovered in 1967, but was thought to only impact deer, until recently. Among cervids like white-tailed deer, mule deer, elk, and moose, CWD spreads via saliva, urine, and feces. As its name implies, CWD causes an infected animal to lose a significant amount of weight. Over time, they begin to exhibit cognitive issues, rendering them unable to socialize properly with other deer, and making them lose awareness of their surroundings and their natural fear of humans.
It has recently been reported that, in 2022, two American hunters ate venison infected with CWD and subsequently became ill with Creutzfeldt-Jakob disease (CJD), a rare neurodegenerative disorder with symptoms very similar to Alzheimer’s disease. CJD and CWD are both spongiform encephalopathies, which means that they cause degradation of brain tissue. Symptoms may include depression, confusion, a change in gait, and hallucinations. Both disorders are fatal, and decline in health can occur rapidly. One of the hunters died less than a month after his symptoms began. Up until now, acquired cases of CJD in humans have only been traced to medical sources, such as transplants of cornea tissue from infected donors. But this recent case could end up proving that, just as the prion disorder known as mad cow disease can jump from livestock to humans, CWD can make the same leap from deer.
That’s not to say that there’s likely to be a sudden pandemic of prion infections. Although both hunters contracted the fatal disease after eating infected deer meat, the population they were eating from was known to be infected with CWD. The disease doesn’t affect a high proportion of the American deer population, either, though it can spread rapidly through herds once it takes hold. Human intervention in the lives of wild deer, such as feeding, baiting, or using urine-based lures, can quicken the spread. Limiting or banning such practices is usually step number one when it comes to CWD mitigation. If CWD were ever to get out of hand in American deer populations, hunters might then be required to submit tissue samples from harvested deer, or to report any carcasses found in the wild. In the meantime, wildlife officials advise against eating meat from deer that look obviously sick or emaciated, just in case. You want venison to be lean, but not that lean.
[Image description: Three white tailed deer graze on grass. A male deer with antlers stands at the front of the group.] Credit & copyright: Wikimedia Commons, Richard Lydekker (1849–1915). This work is in the public domain in the United States because it was published (or registered with the U.S. Copyright Office) before January 1, 1929. -
FREEMind + Body PP&T CurioFree1 CQ
If southern hospitality had a flavor, this would probably be it. Chicken and dumplings, a dish famous in the American South, is renowned as a top-tier comfort food. Yet it’s also a source of debate. There are those who claim that the dish’s “dumplings” aren’t really dumplings, and that its Depression-era backstory is dubious at best.
Chicken and dumplings is a simple soup made with simmered chicken meat and a thick broth created during the simmering process. The dish’s dumplings are balls of biscuit dough, usually made from flour, shortening, and milk, though the milk can be replaced with buttermilk, water, or chicken broth. The soup is seasoned sparingly with salt and pepper.
Chicken and dumplings requires few ingredients and can feed many people at once. Thus, for a time the dish was rumored to have been invented during the Great Depression, when resources were scarce. However, modern food historians have a different theory, one that begins not in the American South but in Germany. German cuisine includes many dishes that are similar to chicken and dumplings, such as potato dumplings in broth. Many German dishes became popular throughout the U.S. due to a wave of German immigrants in the 1820s, and the first written record of chicken and dumplings appears not long after, in the 1879 cookbook Housekeeping in Old Virginia.
Of course, that doesn’t solve the debate about whether the dumplings in chicken and dumplings are really dumplings. Some foodies only consider something a dumpling if the food in question is stuffed with something, such as Japanese gyoza, which are stuffed with meat and veggies, or European pierogies, which are filled with potatoes and cheese. However, by that definition even gnocchi, the world’s most famous type of potato dumpling, wouldn’t fit the bill. One thing’s for certain, though: chicken and dumplings is a savory, chewy, comforting dish—no matter where it came from or what you call it.
[Image description: A rooster and several chickens pecking at grass.] Credit & copyright: Helge Klaus Rieder, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. The person who associated a work with this deed has dedicated the work to the public domain by waiving all of their rights to the work worldwide. -
FREEBiology PP&T CurioFree1 CQ
Would you like the ability to regrow your limbs while staying young forever? Sounds like you want to be an axolotl. These amphibians have become such pop-culture darlings in the past few years that they’re one of the few non-fluffy creatures commonly found in stuffed-animal form. While there’s no doubt that axolotls are cute, they also happen to be some of the strangest (and most threatened) creatures on earth.
Axolotls are amphibians (aquatic salamanders, to be exact), but their life cycle is much different from other amphibians’. The vast majority of amphibians, such as frogs, which begin life as tadpoles, undergo metamorphosis in order to reach adulthood. Even most other salamander species begin life in the water with feathery gills similar to axolotls’, but eventually lose them when they mature and move onto land. Researchers have found that, while axolotls can be forced to change into an “adult” form if they are exposed to large amounts of iodine (a chemical element that triggers metamorphosis in some other amphibians), they do not survive long after the forced metamorphosis. One trait that axolotls do share with some other salamander species is the ability to regenerate body parts. They’re extremely good at it, in fact. Not only can axolotls grow new tails or legs should they lose one, they can even regrow internal organs and bones, including parts of the heart, brain, and spine.
Social media could easily convince someone that axolotls are common. In a way, it’s true: they are common in the pet trade. In the wild, though, they’re practically extinct—and their range was never very big to begin with. In fact, wild axolotls have only ever been found in two freshwater lakes in the Valley of Mexico: Lake Xochimilco and Lake Chalco. These lakes offered unique habitats for axolotls that some pet owners find difficult to emulate. The waters are dark and, most importantly, cold. Axolotls thrive at temperatures of around 55 to 68 degrees Fahrenheit, which would be much too cold for many other amphibians. Unfortunately, people have never been content to leave axolotls alone in their cool, dark homes. The salamanders’ first bout of bad luck came when the Spanish conquered the Aztec Empire and partially drained the lakes, killing many axolotls. Lake Chalco was completely drained in the 1970s, relegating all remaining wild axolotls to Lake Xochimilco. Their problems weren’t over, though: in the 1980s, the lake became polluted with wastewater and in the early 2000s, tilapia were introduced to the lake. These fish compete with axolotls for food and eat their eggs. On top of all that, people living near the lake had no qualms about eating axolotls, if the chance arose. Today, there are only around 50 to 1,000 wild axolotls left on earth, all of them relegated to a single, polluted lake.
While the pet trade can lead to ecological disaster for some animal species, it may actually help save axolotls. Plenty of people from all over the world breed captive axolotls, which means that the species has managed to maintain a large gene pool. This could bode well for efforts to re-introduce axolotls to the wild…assuming that their natural habitat is made fit for them again. In order for any such effort to succeed, Lake Xochimilco would have to be cleaned of pollution, rules about waste dumping would need to be passed and enforced, and large numbers of tilapia would need to be removed from the lake. Were all those things to happen, there’s a good chance that wild axolotls would take to the lake like fish…or, rather, like salamanders to water.
[Image description: A gray axolotl in an aquarium.] Credit & copyright: Vassil, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREECollege Prep PP&T CurioFree1 CQ
It was a pandemic disappearing act that just couldn’t last. During the height of COVID-19, some colleges in the U.S. decided to stop requiring SAT scores as part of their admissions processes. Now, though, Harvard University and several other institutions have started asking for the scores once again. This has led to renewed discussions about the SAT’s relevance. Some people feel that the test is a fair, objective way to measure students’ knowledge, while others argue that standardized test scores are a poor measure of academic comprehension. It’s a debate that’s been going on for decades.
Originally called the Scholastic Aptitude Test, the SAT has changed many times since its inception, and it wasn’t even the first test of its kind. Around the turn of the 20th century, 12 university presidents came together to form the College Entrance Examination Board (shortened to College Board). The board’s main purpose was to create a standardized entrance exam for their schools. The resulting test was nothing less than daunting. It took five days to complete and challenged the taker’s knowledge of Latin, Greek, and physics. However, it began to lose relevance in the early 1900s, due in part to its limited scope. At that time, IQ tests were fairly new, having been developed in 1905, and they were all the rage. To update their exams, the College Board hired Carl Brigham, who had administered IQ tests for the U.S. military, to develop a similar test for would-be university students. Eventually, the test Brigham created evolved into the SAT, which was officially released in 1926. Over the years, the test underwent many changes. The foreign language portions were dropped, for one thing, and the test was pared down to a three-hour affair. However, its main purpose, to assess math and verbal skills, has remained largely the same. Along with alterations to its content, the test’s name changed multiple times, from the Scholastic Assessment Test to the SAT Reasoning Test. In the end, most institutions simply ended up calling it the SAT.
Unlike its predecessors, the SAT was touted for its supposed ability to evaluate a student’s critical thinking skills instead of rewarding rote memorization. It still had plenty of detractors, though. Some universities believed that the test was an unnecessary barrier for students who faced socioeconomic hardships, since such students had limited access to resources like private tutoring and less time to study, as they often had to work outside of school. It’s a viewpoint still shared by plenty of people today. Recently, Harvard professor David J. Deming and his colleagues conducted research into the impact of socioeconomic status on standardized tests. After Harvard announced the SAT’s return, Deming acknowledged that the tests weren’t perfect, but also defended them, saying in a statement to The Harvard Gazette, “The virtue of standardized tests is their universality. Not everyone can hire an expensive college coach to help them craft a personal essay. But everyone has the chance to ace the SAT or the ACT. While some barriers do exist, the widespread availability of the test provides, in my view, the fairest admissions policy for disadvantaged applicants.” Nowadays, there are even free test-preparation resources from nonprofit organizations, though using them requires an internet connection that some applicants might not have. Love it or hate it, it seems that the SAT and its counterpart, the ACT (American College Test), probably won’t be going anywhere any time soon. At least test-takers can be assured that the tests’ rigorousness isn’t personal…it’s just standard procedure.
[Image description: A mathematical problem written on a chalkboard.] Credit & copyright: Monstera Production, Pexels -
FREEHumanities PP&T CurioFree1 CQ
Happy belated birthday, Dame Jane Goodall! This famed British primatologist and anthropologist turned 90 on April 3. Never one to rest on her laurels, she celebrated the auspicious occasion by releasing a short film in which she and other conservationists urged humanity to care for animals and the environment. It’s the same message that Goodall has been spreading since 1960 when, at 26 years old, she journeyed from England to Tanzania to study chimpanzees in the wild.
Goodall was born in London, England, in 1934, and showed great interest in animal behavior from an early age. She spent her free time observing bugs, mice, and other small native creatures, taking notes and sketching them. Her ultimate aspiration as a child was to study animals in Africa in their native habitats. At 18, Goodall began working as a secretary at Oxford University and, later, at a documentary film company while saving up funds for her eventual trip to Africa. In the late 1950s, Goodall traveled to Kenya for the first time, and soon started working for anthropologist Louis Leakey. Leakey specialized in early human ancestors, and he believed that the key to understanding their evolutionary path was a better understanding of extant great apes, such as chimpanzees. After all, chimpanzees are humans’ closest living relatives. Leakey also contended that previous chimp studies had been too limited in scope, and that a long-term study with field observation was necessary. Unable to commit to such a study himself, he sent Goodall to Gombe Stream Reserve in Tanzania to observe the chimpanzees.
In July of 1960, Goodall arrived at a camp in the reserve, situated on the shores of Lake Tanganyika. Once there, she developed a dedicated routine to observe the chimpanzees. She would appear at the same time every day near their feeding area, allowing the apes to get acclimated to her presence. Over time, she was able to get closer and closer as they stopped seeing her as a threat. Eventually, she was able to earn their trust by bringing them bananas, forming what she called the “Banana Club.” During the time she spent practically living with the chimpanzees, she learned new information that disproved previous studies. For example, the apes weren’t herbivores as previously thought, but omnivores that ate insects, as well as mammals like baboons and antelope, when the opportunity presented itself. They also appeared to communicate using around 20 “words” and had a complex social system.
Perhaps most importantly, Goodall discovered that humans weren’t the only animals that use tools. She observed chimps using blades of grass that they modified to catch termites out of their mounds. This discovery, along with her other observations, is shown in the documentary Miss Goodall and the Wild Chimpanzees (1965) and in her book, In the Shadow of Man (1971), both of which changed the scientific community’s understanding of great apes. Goodall’s book, in particular, helped prove that chimpanzees shared many behaviors with humans. One example of wordless communication between humans and chimps from In the Shadow of Man reads, “When I moved my hand closer, he looked at it, and then at me, and then he took the fruit, and at the same time held my hand firmly and gently with his own. As I sat motionless, he released my hand, looked down at the nut, and dropped it to the ground. At that moment, there was no need of any scientific knowledge to understand his communication of reassurance… the barrier of untold centuries which has grown up during the separate evolution of man and chimpanzee was, for those few seconds, broken down.”
Today, Goodall’s discovery of chimpanzees’ capacity for tool-use is considered one of the greatest ethological achievements of the 20th century. It has even opened the door to studies of other animals, like parrots, that are capable of similar feats. Since her groundbreaking fieldwork, Goodall has dedicated her life to conservation. In 1977, she co-founded the Jane Goodall Institute for Wildlife Research, Education and Conservation. For her contributions to science, she was named Dame Commander of the Order of the British Empire (DBE) in 2003. It’s an impressive title for sure, but it doesn’t quite beat President of the Banana Club.
[Image description: A photo of Jane Goodall wearing a brown, turtleneck sweater and jacket in front of a floral background.] Credit & copyright: U.S. Department of State, Wikimedia Commons. This image is a work of a United States Department of State employee, taken or made as part of that person's official duties. As a work of the U.S. federal government, the image is in the public domain per 17 U.S.C. § 101 and § 105 and the Department Copyright Information. -
FREEUS History PP&T CurioFree1 CQ
If you’ve been online this week, then you’ve likely seen footage of the Francis Scott Key Bridge collapsing after it was struck by a large container ship. The tragedy claimed several lives and is still under investigation. Yet, it’s far from the worst bridge collapse in modern U.S. history. That unfortunate title is still held by 1967’s Silver Bridge collapse in Point Pleasant, West Virginia—an incident that changed U.S. bridge safety forever.
Built in 1928 over the Ohio River, the Silver Bridge carried U.S. Highway 35 between Point Pleasant, West Virginia, and Kanauga, Ohio. Named for the aluminum paint that covered its surface, the Silver Bridge was a suspension bridge, though with an unusual design. Unlike most suspension bridges, the Silver Bridge was supported by eyebars: flat steel bars with round, holed ends connected via huge pins to form chains, the individual eyebars ranging from 45 to 55 feet in length. While unconventional, the design held up to decades of heavy traffic…until it suddenly didn’t. On December 15, 1967, a chain on one end of the bridge snapped, causing the bridge to tilt to one side. This happened at 5 p.m., when rush-hour traffic was backed up on the bridge. The tilt immediately sent unfortunate commuters sliding into the water. In the span of around one minute, dozens of cars fell 80 feet to the surface of the Ohio River, killing 46 people and injuring nine.
Following the tragic incident, the National Transportation Safety Board (NTSB) investigated the cause of the bridge collapse and released a comprehensive report. However, the bridge had already been a source of concern to state officials for some time. From the very beginning, it was known that the eyebar chains could not be adjusted after the bridge was completed, meaning that any issues arising from the condition of the chains couldn’t be easily corrected. Still, it seemed that the bridge was well looked after. Inspections were carried out regularly by the private owner of the bridge until 1941, when it was purchased by the state of West Virginia. In 1965, state inspectors recommended $30,000 in repairs, which were completed by the summer of 1967. Just over a week before the bridge collapsed, the State Road Commission sent a maintenance engineer to check the bridge again. Yet despite these measures, a growing threat went overlooked. According to the NTSB report, the main source of the failure was an improperly cast eyebar that had been used in the construction of the bridge despite having a small crack. Over the years, corrosion caused the crack to grow (rust expands as it forms) until it failed catastrophically on December 15. The bridge also hadn’t been designed to handle the heavy traffic of the 1960s. By then, not only were there more cars on the road, but the cars were much heavier than the Ford Model Ts that the bridge was originally made to accommodate. The flaw in the eyebar had been visually inaccessible, so it went undiscovered despite all the inspections.
As tragic as it was, the Silver Bridge collapse did have something of a silver lining. It made headlines around the country, forcing President Lyndon B. Johnson to address aging infrastructure. A nationwide assessment found that many bridges were, much like Silver Bridge, designed and built in the 1920s, but with even fewer inspections to keep tabs on their condition. Some bridges had never even been inspected. Today, there are federal standards for bridge design and maintenance, but the sheer number of bridges still makes thorough, regular inspections difficult. Efforts to improve maintenance procedures and intervals are often met with political resistance due to funding, and many pieces of road infrastructure today could be ticking time bombs. When it comes to infrastructure safety, there shouldn’t be such a thing as a bridge too far. -
FREEOutdoors PP&T CurioFree1 CQ
Spring has sprung! Unfortunately, though, the season can bring more than nice weather, colorful flowers, and chirping birds. It also means the re-emergence of disease-spreading pests like mosquitoes and ticks. Ticks, in particular, are responsible for spreading a disease that all outdoor adventurers fear: Lyme disease. This painful condition can wreak havoc on the body and, frighteningly, in some people the symptoms seem to persist for years. However, there is heated debate around the causes of “chronic Lyme” and whether that name should even be used by medical professionals.
Lyme disease is caused by Borrelia bacteria, and it’s almost always transmitted to people via tick bites. The ticks that carry Lyme can be found in the American Midwest, Northeast, and Pacific Northwest, as well as parts of Canada and Europe, though their range seems to be spreading. Known as blacklegged ticks (Ixodes scapularis) or western blacklegged ticks (Ixodes pacificus), they are usually found in wooded areas, though they can easily make their way into people’s yards. When a tick bites a person, it often leaves behind a distinctive, ring-shaped rash. While this can help someone know that they’ve been bitten, the only way to know if the tick was carrying Lyme disease is to wait. Symptoms can start anywhere from three to thirty days after infection. Even then, the symptoms aren’t always obvious, since they can feel like the flu and can include fever, muscle aches, stiff joints, fatigue, swollen lymph nodes, and headaches. Without treatment, the disease progresses to stage 2, which can cause irregular heartbeat, swelling in or around the eyes, and muscle weakness. Stage 3 can cause arthritis, and without medical intervention symptoms can continue to get worse. European ticks sometimes carry variants of Lyme that can cause a condition called acrodermatitis chronica atrophicans, which causes swelling and discoloration of the skin near the joints. Fortunately, Lyme disease can be treated via a simple course of antibiotics…usually. In some people, Lyme disease seems to last far longer than it should, even after treatment.
This seemingly lingering Lyme disease is sometimes called “chronic Lyme.” However, most medical professionals prefer the term “post-treatment Lyme disease” (PTLD), which more accurately describes the condition. After all, people who have PTLD aren’t infected with Borrelia bacteria anymore. Still, months or even years after they are “cured,” they continue to experience fatigue, aches, and palpitations, in addition to numbness, dizziness, and brain fog. PTLD is difficult to treat because its cause has yet to be identified. In fact, some medical professionals don’t even believe that it’s possible to have “chronic Lyme.” Since the bacteria that cause Lyme disease can’t be detected in PTLD, antibiotics aren’t always effective, though there are stories of cases where a second, prolonged course of antibiotics has worked. Still, since reliable treatments for PTLD are limited, prevention is the best route to take.
Ticks can carry more diseases than just Lyme, so it’s essential to watch out for them. Since ticks prefer areas with heavy vegetation, like tall grass, it’s best to avoid those areas or to only venture into them while wearing pants and sleeves that leave little skin exposed. Bug sprays that use DEET, picaridin, or oil of lemon eucalyptus (OLE) can help deter the arachnids, but people who may have been in tick-infested areas should do a thorough examination of their body and gear. If a tick is found clinging to the skin, a tick-removal device, like the kind available at outdoor shops, should be used. Experts have urged people not to rely on home remedies like burning ticks with a lighter or squeezing them off, since that can actually force the tick to eject its inner contents into the bite wound. It’s also important to remember that dogs can get Lyme disease, so they should be given tick preventatives on a regular basis. Dogs can also be vaccinated against Lyme disease, and a human vaccine is under development too. Just remember, a tick’s bite is worse than your (or your dog’s) bark.
[Image description: A brown-and-black tick on a blade of green grass.] Credit & copyright: Erik Karits, Pexels -
FREEWorld History PP&T CurioFree1 CQ
Happy Saint Patrick’s Day! Just who was this Saint Patrick guy, anyway? Like all saints who went on to become holiday mascots (think Saint Valentine and Saint Nicholas), the real Saint Patrick’s life is steeped in legend. In fact, almost everything we know about his life comes from two works that Patrick wrote himself: his autobiography, Confessio, and a letter condemning what he saw as Britain’s mistreatment of Christians in Ireland. While some of Patrick’s stories might best be taken with a grain of salt, there’s no doubt that he became an extremely successful priest and missionary in his lifetime, and that he faced plenty of tribulations along the way.
The story of Saint Patrick gets strange right off the bat since, despite his fame as the patron saint of Ireland, he wasn’t actually Irish. Rather, he was born in Britain, likely sometime in the late fourth or early fifth century C.E., to a family of Roman descent. His father was a wealthy deacon and local politician, but even his status wasn’t enough to protect a 16-year-old Patrick from being kidnapped by Irish raiders who broke into his family’s estate. The teen was carried off into slavery in Ireland, where he was forced to work for six years herding sheep. During his time in captivity, Patrick sought solace in his religion and became more devout as a result. According to Patrick’s own writings, he had a dream one night in which the Christian god told him that it was time to leave, so he fled his captors and returned to his family in Britain. After his return, another dream told him that he would one day return to Ireland as a missionary. Whatever the case, Patrick did begin 15 years of religious training, at the end of which he was ordained a priest. Amazingly, he did indeed choose to return to the land where he had been enslaved to do the bulk of his religious work.
Although some legends claim that Saint Patrick introduced Christianity to Ireland, that’s almost certainly not true, since part of his job as a missionary and priest was working with Ireland’s already-Christian population. Unlike most foreign priests, Patrick was familiar with Irish traditions and rituals due to the time he’d spent there, which endeared him to Irish Christians. It also allowed him to better relate to the non-Christians he was trying to convert. Patrick put a Christian spin on Irish pagan rituals, such as lighting bonfires at Easter rather than in worship of the Celtic gods. He is also credited with redesigning the typical Christian cross by adding a circle representing the sun—a prominent Celtic symbol—to make veneration of the cross feel more familiar. This design came to be known as the Celtic cross, and it’s still in use today in regions with Celtic heritage. His influence and reputation in Ireland only grew after his death, and he was heralded as a saint by acclaim alone, before the Catholic Church had a formal canonization process.
As with any Catholic saint, Patrick was credited with performing a number of epic feats and miracles. The most famous of these is his eradication of snakes from the island, though historically this seems unlikely, since scientific evidence suggests that snakes never lived in Ireland in the first place. Patrick is also credited with using a three-leafed clover, or shamrock, to explain the concept of the Holy Trinity to the Irish, though this was never mentioned in his own writings. Another story tells of Patrick fasting on a mountain for 40 days, until an angel came down to speak with him on behalf of God. The story goes that Patrick then made several demands of God, like allowing him to save more damned souls than any other saint, preventing the English from ever ruling over the Irish, and giving him the privilege of judging Irish souls during the Last Judgment.
While St. Patrick is still heavily associated with Irish culture, his feast day on March 17 is celebrated in many countries today. For many, St. Patrick’s Day is a fairly secular holiday in which revelers don green clothes and drink plenty of beer. This is particularly true in the U.S., where the holiday was first promoted by Irish immigrants in Boston in the 18th century. One of the first American St. Patrick’s Day celebrations was held in Boston in 1737, and parades and festivities have since spread to cities across the country. No need to be green with envy for the Emerald Isle—everyone has the luck of the Irish on St. Patrick’s Day.
[Image description: A black-and-white engraving of Saint Patrick reading a bible and holding a staff while wearing a robe and tall hat.] Credit & copyright: Mattheus Borrekens, 1625-1670. Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEFitness PP&T CurioFree1 CQ
We’re three months into 2024! Have you stuck to the fitness goals you set back in January? If so, you’re probably intimately familiar with one of the world’s most popular fitness machines: the treadmill. While they’re touted for their health benefits today, treadmills have a surprisingly dark history. In fact, they weren’t invented for fitness at all, but for punishment.
Treadmills may have been invented in ancient Asia, though historians aren’t entirely sure. In the West, early human-powered versions, called treadwheels, were used to pump water or grind grain. The treadwheel that would become infamous was the brainchild of William Cubitt, an English civil engineer from a family of millwrights. The design was fairly simple, with two wheels connected by cogs. Users would climb on top and walk, as if ascending a never-ending flight of stairs, while holding onto a bar for support. Cubitt introduced his machine in 1818, and prisons quickly saw its potential as an instrument of punishment.
Thus, treadwheels began to pop up at large correctional facilities, where they were given a new, dystopian name: atonement machines. The grueling labor was touted by prison officials as a way for prisoners to “work off their sins.” Of course, in reality the devices were less about atonement and more about keeping prisoners too occupied and exhausted to stir up trouble. These correctional contraptions were modified for prison use as well, with partitions separating inmates so that they couldn’t pass the time by socializing. This would obviously be considered torture by modern standards, as inmates were sometimes made to work on atonement machines for up to 10 hours a day. Unlike the first treadwheels, most atonement machines weren’t even made to do anything useful, like pumping water or grinding grain. Thankfully, the Sisyphean punishment fell out of favor in the late 1800s, as it proved questionable at best as a rehabilitation tool and lethal at worst. By the turn of the century, there were a little more than a dozen functioning atonement machines left in English prisons.
The treadmill saw a similar rise and decline in popularity as a correctional tool in the U.S., but some enterprising Americans also thought to re-purpose the torture device as a fitness machine. In 1913, Claude Lauraine Hagen filed a U.S. patent for a “training-machine,” amid growing concerns that a lack of exercise was contributing to heart disease. In a similar vein, a cardiologist named Robert Bruce came up with the “Bruce Protocol” in the early 1960s, in which he evaluated a patient’s cardiac health by having them walk on a treadmill while connected to an electrocardiogram. It wasn’t until later that decade, when William Staub invented the “Pacemaster 600,” that the treadmill really caught on as a machine for fitness and recreation. Staub’s iteration of the treadmill came at a time when Americans were becoming more health-conscious and concerned with maintaining their physiques. With the Pacemaster 600, the average person could run in any weather and sweat off extra pounds. Staub was seemingly on to something, as he reportedly used a treadmill every day until he died at the ripe old age of 96. Nowadays, treadmills are a ubiquitous fixture in home gyms and fitness centers around the world…though some may still consider them a bit torturous.
[Image description: A Victorian-era illustration of prisoners walking on a treadmill while other people, wearing hats and coats, stand near a basket of food in the foreground.] Credit & copyright: British Library c. 1817. Wikimedia Commons, Public Domain.