Curio Cabinet / Person, Place, or Thing
-
US History
What happens when you have to abandon ship, but doing so isn't a viable option? The USS Indianapolis was sunk this month in 1945 after completing a crucial mission, and the aftermath of the attack was arguably worse than the attack itself. Of the 1,195 men on board, only 316 survived what became one of the most harrowing events in U.S. naval history.
The USS Indianapolis was a Portland-class heavy cruiser so impressive that it once carried President Franklin D. Roosevelt during his visit to South America in 1936. During the tail end of WWII, however, it carried other high-stakes cargo in the form of critical internal components for the nuclear bombs that would be dropped on Japan. After completing a top-secret delivery mission, the ship, under the command of Captain Charles B. McVay, was on its way to Leyte Gulf in the Philippines on July 30, 1945, when it was intercepted by a Japanese submarine. The USS Indianapolis was immediately hit by two torpedoes and began to sink. Around 330 crew members perished in the initial explosions. The rest ended up in the ocean with life jackets and a few life rafts. The Japanese didn’t target the men in the water, but they didn’t have to. Stranded in shark-infested waters with no supplies for five days, most of the men succumbed to the elements or to shark attacks. By the time help arrived, only 316 remained alive, and the sinking became the greatest single loss of life at sea from a single ship in U.S. naval history.
Captain McVay was among the few who survived, and for his alleged failure to take proper action, he was court-martialed and found guilty of negligence later that same year. The prosecution argued that he had failed to order a zigzagging maneuver to evade enemy torpedoes, which supposedly would have saved the ship and the lives of the men on board. There was doubt about the veracity of that claim, however, even from many of the survivors. Over the years, McVay’s defenders have pointed out that he had requested a destroyer escort, but that his request was denied. Then there was the fact that U.S. naval intelligence was aware of Japanese submarines in the area the cruiser was sailing through but purposely didn’t warn the ship in advance, presumably to hide the fact that the Japanese codes had already been broken. Perhaps the most significant piece of evidence in McVay’s favor was the testimony of the commanding officer of the Japanese submarine that sank the cruiser. According to Commander Mochitsura Hashimoto, who appeared at the court-martial to testify in person, the cruiser would not have been saved even if McVay had ordered the zigzag maneuver. Unfortunately, the damage to McVay’s reputation had already been done. He was largely blamed by the public and the families of the deceased for the loss of life, and he died by suicide in 1968.
Over the decades, McVay’s name and reputation have been largely cleared, thanks to the efforts of the survivors. In 2000, the U.S. Congress passed a joint resolution officially exonerating McVay. Today, only a single survivor of the sinking of the USS Indianapolis remains, but the story of the ship, the horrors endured by its men, and the injustice committed against its commanding officer make for one of the most tragic stories to come out of WWII. In a conflict as large and deadly as a World War, that’s saying a lot.
[Image description: The surface of water with some ripples.] Credit & copyright: MartinThoma, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
US History
Going to the airport just got a lot less annoying. For decades, travelers have had to endure the inconvenience of taking their shoes on and off at security checkpoints at U.S. airports. To the relief of many, the Transportation Security Administration (TSA) has now announced that it will be scrapping its much-criticized policy and will allow travelers to keep their shoes on through security. Still, it might behoove us not to forget the criminal plot that triggered the shoes-off rule in the first place.
While many equate the TSA’s shoes-off rule with the September 11th terrorist attacks, it's more closely related to a completely different crime. The man responsible was Richard Reid, a British national who became radicalized and left the U.K. in 1998 to receive training from al-Qaeda in Afghanistan. In December of 2001, he arrived in Brussels, Belgium, then traveled to Paris. While in Brussels, Reid purchased a pair of black sneakers, which he plotted to use in an attempted bombing. In Paris, he purchased round-trip tickets to Antigua and Barbuda that included a stop in Miami. By then, he had already rigged his sneakers with homemade explosives by cutting a space for them in the soles. Reid even hid fuses in the shoes’ tongues. Despite his efforts to disguise the device, airport security in Paris grew suspicious of Reid for several reasons. First, he used cash to purchase his tickets, which was unusual given their high price. Second, he carried no luggage with him despite his supposed intention to travel overseas. Delayed by airport security, Reid missed his flight and booked another for Miami, which he managed to board on December 22. During the flight, Reid made several attempts to light the fuse of his homemade “shoe bomb,” but he was caught by a passenger who complained about the smell of sulfur emanating from his seat. On the second attempt, he got into an altercation with the same passenger, after which other passengers and flight attendants jumped in to subdue him. Unable to carry out his plan, Reid was tied down and injected with sedatives until authorities could detain him.
Although the attempted plot by the so-called “Shoe Bomber” took place in 2001, it wasn’t until 2006 that the TSA instituted the shoes-off policy. The rule required travelers to take off their shoes, place them in a bin, and pass them through an X-ray machine for screening before they were allowed to put them back on. It was a hassle at the best of times and could lead to slow lines and delays at the worst of times. Now, the Department of Homeland Security (DHS), which oversees the TSA, claims that such screenings are no longer necessary due to more advanced scanners and an increase in the number of officers at security checkpoints.
As annoying as the policy may have been, it still had its supporters. After all, Reid’s homemade device contained just ten ounces of explosive material, yet according to the FBI, the explosion would have torn a hole in the fuselage and caused the plane to crash had it been successfully detonated. For most of the world, the consequences of Reid’s actions will be largely forgotten with the repeal of the shoes-off policy. The perpetrator himself, on the other hand, is still serving a life sentence at a maximum-security prison after pleading guilty to eight terrorism-related charges in 2002. Reid also wasn’t the last person to attempt an airline bombing. In 2009, another would-be terrorist failed to detonate an explosive hidden in his underwear, prompting the TSA’s adoption of full-body scanners shortly thereafter. At least they didn’t make everyone take off their underwear while in line.
[Image description: A pair of men’s vintage black dress shoes.] Credit & copyright: The Metropolitan Museum of Art. Public Domain.
-
Humanities
They say that a dog is man’s best friend, but there’s one thing that can get in the way of that friendship like nothing else. For thousands of years, rabies has claimed countless lives, often transmitted to humans via dogs, raccoons, foxes, and other mammals. For most of that time, there was no way to stop the disease once a person had been bitten, until French scientist Louis Pasteur managed to successfully inoculate someone against it on this day in 1885.
Rabies has always been a disease without a cure. Even the ancient Sumerians knew about the deadly disease and how it could be transmitted through a bite from an infected animal. It was a common enough problem that the Babylonians had specific regulations on how the owner of a rabid dog was to compensate a victim’s family in the event of a bite. The disease itself is caused by a virus that is shed in the saliva and causes the infected animal to behave in an agitated or aggressive manner. Symptoms are similar across species; when humans are infected, they show signs of agitation, hyperactivity, fever, nausea, confusion, and the same excessive salivation seen in other animals. In advanced stages, victims begin hallucinating and having difficulty swallowing. The latter symptom also leads to a fear of water, known as hydrophobia. Rabies is almost always fatal without intervention. Fortunately, post-exposure prophylaxis against rabies now exists, thanks to the efforts of one scientist.
By the time 9-year-old Joseph Meister was bitten 14 times by a rabid dog in Alsace, French chemist Louis Pasteur was already working with rabid animals. Pasteur had developed a rabies vaccine, which he was administering to dogs and rabbits. Though it showed promise, it had never been tested on human subjects. It had also never been used on a subject who had already been infected. When Joseph’s mother brought the child to Paris to seek treatment from Pasteur, he and his colleagues were reluctant to administer the vaccine due to its untested nature. That might have been the end for young Joseph but for the intervention of Dr. Jacques-Joseph Grancher. Grancher offered to administer the vaccine to the boy himself, and over the course of 10 days, Joseph received 12 doses. Remarkably, Joseph never developed rabies, proving that the vaccine could protect even after exposure. While credit for developing the vaccine goes to Pasteur, Grancher was also recognized for his part in ending the era of rabies as an automatic death sentence. In 1888, Grancher was given the rank of Grand Officer of the Legion of Honor, the highest French honor given at the time to civilians or military personnel.
The rabies vaccine and post-exposure prophylaxis have greatly improved since Pasteur’s time, and they’re no longer as grueling to receive as they once were. Still, rabies remains a dangerous disease. Luckily, cases are few and far between nowadays, with only around ten fatalities a year in North America thanks to decades of wildlife vaccination efforts. Most cases are spread by infected raccoons, foxes, bats, or skunks, as most pet dogs are vaccinated against rabies. In the rare instance that someone is infected and unable to receive post-exposure prophylaxis quickly, the disease is still almost always fatal. Once symptoms start showing, it’s already too late. In a way, rabies still hasn’t been put to Pasteur.
[Image description: A raccoon poking its head out from underneath a large wooden beam.] Credit & copyright: Poivrier, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide.
-
World History
Small islands aren’t immune from big problems—just ask Hong Kong. Though it's now a special administrative region of China and one of the world’s most celebrated big cities, for most of its existence Hong Kong was a quiet fishing village. In the 19th century, it fell under British control after two wars. Then, this month in 1997, it was returned to China after a century and a half of (mostly) British rule and lots of hectic changes.
Located on the southern coast of China, Hong Kong comprises Hong Kong Island, a portion of the adjacent mainland, and many smaller islands, and it has been under the control of Chinese regimes for much of its history. The area has been inhabited by humans since the Stone Age, and it was first incorporated into the Chinese Empire in the third century B.C.E., during the Qin Dynasty. Like most coastal communities in the region, Hong Kong’s early economy was based around fishing, pearl farming, and salt production. For nearly 2,000 years, Hong Kong was a fishing community with a slowly but steadily growing population.
Things changed drastically with the arrival of British traders in search of tea. The China trade was enormously lucrative, but it also sowed the seeds of conflict. British traders, dissatisfied with China’s demand for silver in exchange for tea, began trafficking opium into the country. In response to the addiction epidemic caused by the drug, the Chinese government began confiscating and destroying opium shipments. The rising tensions between British traders and the local population eventually devolved into violence when the British government sent military forces to support its traders. The First Opium War, as it came to be known, began in 1839. Then, in 1841, Hong Kong Island fell under British occupation, and the new colony quickly developed into a major free port. Over a decade later, the British instigated further hostilities, leading to the Second Opium War in 1856. In 1860, following a British victory, the Chinese government was forced to surrender the Kowloon Peninsula, expanding Hong Kong into the mainland.
In 1898, the British negotiated a further expansion of their territory in the Second Convention of Peking. The New Territories, as they came to be called, stretched from Hong Kong’s border on the Kowloon Peninsula to the Shenzhen River and came with a 99-year lease set to end on July 1, 1997. Unfortunately, the peaceful transfer was preceded by yet another military occupation, this time by the Japanese. From 1941 to the end of WWII, Hong Kong remained under Japanese control. When the war ended, Hong Kong was returned to the British for the remainder of the lease. As the end of the lease approached, the British and Chinese governments began planning a peaceful handover. At first, there were talks of the British holding on to Hong Kong Island and the Kowloon Peninsula, since the lease technically pertained only to the New Territories. This idea was scrapped, however, as it was considered impractical to split the region in two and sever economic and social ties that had become so intertwined. Instead, the two governments signed the Sino-British Joint Declaration in 1984, establishing the “one country, two systems” arrangement. Per the declaration, Hong Kong would remain a largely autonomous territory for 50 years, retaining control over its economic system and civil liberties such as freedom of speech, of the press, and of assembly.
Today, Hong Kong boasts a population of over 7.5 million and is a center of commerce, manufacturing, and culture in Asia. Remnants of British rule can be found all over in the architecture and the names of locations like Victoria Harbour. English also remains an official language alongside Chinese, and many residents are fluent in both. Old habits (and cultural practices) sometimes just stick.
[Image description: Part of the Hong Kong skyline and harbor on a slightly hazy day.] Credit & copyright: Syced, Wikimedia Commons.
-
Humanities
What does the fall of Napoleon have to do with dentures? More than you might think. Napoleon Bonaparte was defeated at the Battle of Waterloo this month in 1815 by a military alliance that included Great Britain, the Netherlands, and Prussia; the fighting ended with a whopping 50,000 casualties. The historic battle was a terrible time to be a soldier, but it was a red-letter day for looters in search of teeth. Before the invention of synthetic materials, most dentures and other dental prostheses were made from actual human teeth and other natural materials.
As incredible as it might seem, the history of dentures and dental prostheses dates back all the way to the ancient Egyptians. Archaeological finds supporting their advanced dental techniques include gold-filled teeth and false teeth found buried with the deceased. Much of what is known about Egyptian dentistry was actually preserved by the ancient Greeks, who learned from them. Greeks, too, used gold to fill cavities as well as gold wire and wooden teeth to create bridges. Even the Etruscans, an ancient civilization that flourished in northern Italy before the rise of Rome, were capable of creating dental prostheses. These include some of the earliest known examples of dental bridges, made of animal teeth held together with gold. The Romans were no slouches in the dentistry department, either. There is written evidence that ancient Roman dentists were able to replace missing teeth with artificial ones made of bone or ivory, using methods similar to the Etruscans’. Archaeological evidence also shows that they were capable of creating a complete set of dentures in this manner.
Unfortunately for those suffering from missing teeth throughout history, there were few significant advancements in the field of dental prosthesis for centuries after the fall of the Roman Empire. False teeth continued to be made from animal teeth, bones, or ivory, with precious metals as the base to hold them together. Dentistry as a whole wasn’t particularly respected as a profession, and so its associated duties often fell to barbers and blacksmiths as supplementary work. Things began to slowly improve starting in the 1700s. In 1737, Pierre Fauchard, the “father of modern dentistry,” created a set of complete dentures held together with springs. Fauchard was also the first to suggest making false teeth out of porcelain, though he never got around to it himself. Of course, there’s a popular myth that George Washington, who lived around the same time as Fauchard, had wooden dentures, but that’s entirely false. Washington wore dentures made from both human and animal teeth, which used ivory and lead for the base. Some believe that the origin of the wooden teeth myth comes from Washington’s affinity for Madeira wine, which stained hairline fractures in the false teeth, giving them the appearance of wood grain.
Until the mid-1800s, human teeth continued to be the standard for dentures, often bought and extracted from those desperate for money or looted from graves or battlefields. When Napoleon’s army was defeated after a bloody battle at Waterloo, survivors, locals, and professional scavengers descended on the piles of corpses and pulled as many teeth as they could to be sold to denture makers (though they often skipped the molars since they were hard to pull and would likely need reshaping).
Luckily, the practice of using human teeth eventually fell out of fashion, partly because of legislation regulating the commercial use of human bodies and partly because of the advent of porcelain and celluloid teeth. Also in the 1800s, a newly developed rubber compound called vulcanite replaced the metals and ivory that formed the base of most dentures. Today, dentures are made from advanced materials like acrylic resin that closely mimic the look and function of real teeth. Modern dentures make even those from a few decades ago seem primitive by comparison. One thing’s for sure: high-tech teeth are a lot better than looted ones.
[Image description: A set of dentures partially visible against a blue background.] Credit & copyright: User: Thirunavukkarasye-Raveendran, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Play
Looks like we’re in for one mild ride! The carousel, also called a merry-go-round or galloper, isn’t exactly a thrill ride. Yet, as family-friendly and inviting as they are, carousels have a surprisingly violent history. As summer begins and carousels begin popping up at carnivals all over the world, it’s the perfect time to learn a bit about this ubiquitous attraction.
The idea of an amusement ride has been around for millennia in some form or another. An early predecessor of the carousel even existed in the Byzantine Empire. In Constantinople, now Istanbul, there existed a ride that spun riders in baskets attached by poles to a rotating center. Later on, in medieval Europe, a similar concept was used to train knights for mounted battle. “Mounted” riders would sit atop a rotating seat, from which they would use a practice weapon to hit targets. In Turkey, riders would instead throw clay balls filled with perfume at their human opponents, but both versions of this “ride” were less about amusement, and more about training. These contraptions were eventually replaced with real horses and jousting tournaments, which tended to be violent and dangerous. When such tournaments fell out of fashion around the 17th century, the real horses were once again replaced with wooden facsimiles, with knights lancing rings and ribbons instead of other knights to show off their martial prowess. This, in turn, developed into a more accessible form of entertainment, allowing even commoners to enjoy the thrill of simulated combat. Evidence of the carousel’s roots in war games and jousting remains in its name. The word itself possibly comes from the French word “carrousel,” which means “tilting match,” or the Spanish word “carosella,” which means “little match.”
By the 18th century, the carousel had begun to evolve into something that more closely resembled the versions that exist today. The combat-oriented elements of the carousel were abandoned, with riders solely focused on enjoying themselves. In place of horses, seats hanging from chains on poles spun riders around at increasingly dizzying speeds, sometimes flinging the hapless amusement seekers outward. This version of the carousel was often called the “flying-horses,” despite its lack of horses. Despite its risks, it was also a popular ride at fairgrounds in England and parts of Europe. Meanwhile, in the U.S. and other parts of the world, rotating rides featuring wooden horses as seats came and went in various forms.
Finally, in 1861, the first iteration of the modern carousel arrived when the inventor Thomas Bradshaw created the first steam-powered carousel. Throughout the 1800s, steam-powered carousels used their waste steam to power an automatic organ to play music, which is why many modern iterations play organ music to this day. Another innovator in the esteemed field of carousel design was English inventor Frederick Savage, who came up with the idea of having the horses move up and down as they rotated, further simulating the feeling of riding a horse. He also toyed around with other, less equestrian themes, including boats and velocipedes instead of horses.
Today, carousels are nearly unrecognizable when compared to their medieval counterparts. They feature elaborate ornamentation and whimsical themes, and are powered by electric motors. While carousels evolved from war games, they’re largely considered a gentle ride for children and their horses (or other animals) are made of fiberglass or other materials, not wood. Though they may have lost their dangerous edge over the centuries and frequently stray from their equestrian theming, carousels aren’t going anywhere. With so many traveling carnivals, these rides really get around as they spin around.
[Image description: A carousel featuring horses and dragons under the words “Welsh Galloping Horses.”] Credit & copyright: Jongleur100, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide.
-
Architecture
These buildings are certainly imposing…perhaps even brutally so! Brutalism is undoubtedly one of the most divisive architectural styles ever created. Most people either love it or hate it. Regardless of aesthetic opinion, though, the style has an interesting history, and its name doesn’t actually mean what one might assume.
Brutalism is an architectural style that focuses on plainness, showcasing bare building materials like concrete, steel, and glass without paint or other ornamentation. Brutalist buildings often feature large blocks of concrete and simple, geometric shapes that give them something of a “building-block” look. It’s a common misconception that the term “brutalism” derives from the word “brutal,” as in cruel, due to the style’s imposing look. Rather, the term comes from the French phrase béton brut, meaning “raw concrete.” In the 1950s and 60s, when brutalism first became popular, raw concrete was usually hidden rather than showcased in architecture, which made the new style stand out.
Brutalism’s popularity began in Europe, not long after the end of World War II. It was then that Swiss-French architectural designer Charles-Édouard Jeanneret, better known as Le Corbusier, designed the 18-story Unité d'Habitation in Marseille, France. The structure is now thought of as one of the first examples of brutalism, with its exposed concrete and geometric design. Le Corbusier didn’t actually label any of his work as brutalism, but he was a painter and great lover of modernist art, and translated many elements of the style into his architectural designs. Far from the grim reputation that brutalism is sometimes associated with today, Le Corbusier saw his architecture as part of a utopian future, in which simple form and minimalism would be parts of everyday, modern living. These ideas were particularly attractive in Europe after the devastation of World War II, and architects in Britain began to emulate the style.
There is some debate around who first coined the term “brutalism.” Many historians believe that it was Swedish architect Hans Asplund, who used the word in 1949 when describing a square, brick house in Uppsala, Sweden. Reyner Banham, a British architectural critic, undoubtedly popularized the name when he penned his 1955 essay, The New Brutalism. Once the term took off, a modernist philosophy similar to Le Corbusier’s began to be associated with brutalist design, and suddenly brutalism was an architectural movement, rather than just a style. Brutalist architects sought to move away from ornate, nostalgic, pre-war designs and into a new, modernized European age in which technology would help people live more equitable lives. Brutalist buildings began popping up in office complexes, on college campuses, and even in neighborhoods across Europe, Canada, Australia, and the U.S.
As ambitious as the brutalist philosophy was, the style was not to last. By the 1970s, brutalism had declined dramatically in popularity. Some complained about the aesthetics of the style, since brutalist buildings can be seen as imposing and, at worst, intimidating. Raw concrete is also prone to weathering and staining, so many brutalist buildings from the 50s were showing plenty of wear and tear by the 70s. Because brutalism was a style used for many public buildings, most of which were in cities, some people came to associate the style with crime in densely-populated areas, especially in the U.S. and Britain. Though plenty of brutalist architecture still exists today, much of it has been demolished, and new brutalist works are rarely made. Still, it’s remembered as one of the most unique architectural styles of the modern world. It took a lot of work for architecture to look so simple!
[Image description: A concrete, brutalist building. It is the Natural Resources Canada CanmetENERGY's building in the Bells Corners Complex in Haanel Drive, Ottawa.] Credit & copyright: CanmetCoop, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide.
-
World History
It seems we’re living in the “era” of the Regency era. With the recent announcement of another season of Bridgerton and the show’s elegant fashion inspiring modern trends, the Regency era (or at least a fictionalized version of it) is on a lot of people’s minds. The real Regency era took place at the tail end of the Georgian era and was a brief but impactful period of British history. As romanticized as it is, though, the era had a dark side too.
The Regency era took place roughly from the late 1700s to the 1830s. However, the regency for which it is named actually took place between 1811 and 1820, during which the future King George IV ruled as regent in place of his father, George III. George III suffered from poor health and frequent bouts of mental illness during his 60 years on the throne. Though the precise nature of his illness remains a mystery, we know that by 1810 he was unable to function in his official capacity as king. Still, George III was never deposed. Instead, Parliament passed the Regency Act in 1811, giving his son all the authority of the king on paper while the elder George lived out the last decade of his life under the care of his family. As Prince Regent, George IV was the king in all but name until he officially acceded in 1820 after the death of his father. Politically, the Regency era was a relatively peaceful time, though neither George III nor George IV was particularly popular as head of state.
Beyond politics, the Regency era was defined by changing cultural and artistic sensibilities. While the Georgian era was characterized by elaborate fashions and ornamental artwork designed to convey a sense of opulence and excess, the Regency was influenced by the more democratic, egalitarian ideals espoused by the French Revolution. Thus, luxurious fabrics such as silks were replaced by more modest muslins, and the fashionable outfits of the time did less to distinguish between the social classes. Wide skirts and small waistlines gave way to more natural silhouettes, and clothes became more practical in general. Aesthetically, Regency styles were greatly inspired by the Greek, Roman, and Egyptian art of antiquity. Neoclassicism had already been popular during the Georgian era, but the aesthetic sensibilities of the Regency era pursued a more authentic interpretation of ancient styles than ever before. The architecture of the time was similarly influenced by cultures of the past, favoring elegance, symmetry, and open designs. Europeans began to import Japanese and Chinese goods and art styles, and furniture was crafted to be less ornate in shape but more lavishly decorated with veneers of exotic woods. Indeed, as the Regency era progressed, elaborate designs in general became popular again, leading into the extravagance of the Victorian era.
Today, the Regency era is often overshadowed by the much longer-lasting Victorian era, but it nevertheless has its devotees. The aesthetics of the time have been greatly romanticized, and its influences can be seen in popular culture. Notably, Regency romance is a popular genre on paper and on the screen, as can be seen in shows like Bridgerton. The somewhat unusual political arrangement that the period is named after remains largely forgotten, and few remember the regent himself fondly. Ironically, George IV's shortcomings as a ruler may have helped spur on the cultural changes that the era is known for. As both regent and king, George IV preferred to spend his time patronizing artists and architects, shaping the nation's art and culture while eschewing politics for the most part. He might not have been a great head of state, but you could say he was the king of taste.
[Image description: A painting of a Regency-era woman in a white dress, smiling with her arms crossed as her dog, a beagle, looks up at her.] Credit & copyright: Lady Maria Conyngham (died 1843) by Sir Thomas Lawrence, ca. 1824–25. The Metropolitan Museum of Art, Gift of Jessie Woolworth Donahue, 1955. Public Domain. -
FREEUS History PP&T CurioFree1 CQ
Jumping jackalopes! Of all the hoaxes that anyone ever tried to pull off, the myth of the jackalope might be the most harmless. In fact, it proved surprisingly helpful. More than just a jackrabbit with a pair of antlers, the jackalope is the continuation of a surprisingly old myth and has even played a part in the development of a life-saving medical advancement.
On the surface, the jackalope is a fairly simple mythical beast. Most accounts describe it as looking like a black-tailed jackrabbit with a pair of antlers like a deer. Jackrabbits, despite their name, are actually hares, not rabbits. They have long, wide ears that stretch out from their heads and longer, leaner looking bodies than their rabbit cousins. None of them have antlers, though, and none of them have the same kinds of legends attached to them that jackalopes do.
Jackalopes aren’t exactly grand mythical beasts. They don’t guard treasure, as dragons do. They aren’t immortal, like phoenixes. They don’t lure humans to their deaths, like sirens, or perplex them with riddles, like sphinxes. Mostly, jackalopes just like to bother people for fun. Some claim that jackalopes will harmonize with cowboys singing by a campfire, and will only mate during lightning storms. It might be bad luck to hunt jackalopes, but some stories posit that the beasts can be easily tricked. Someone wishing to trap a jackalope can supposedly lure one by setting out a bowl of whiskey. Once a jackalope is drunk, it will be filled with bravado and believe that it can catch bullets with its teeth. But jackalopes are also able to catch hunters unawares thanks to their extraordinary vocal talents. Aside from their singing abilities, jackalopes can supposedly throw their voices and mimic different sounds, even the ringtone of a hunter's phone.
Jackalopes are far from the only rabbit or hare tricksters in folklore, nor are they the only ones to have horns. In fact, stories of horned rabbits date back centuries. Europeans even officially recognized the supposed Lepus cornutus as a real species of horned hare, though it never really existed. The jackalope, however, is a purely American invention, cooked up by brothers Douglas and Ralph Herrick in 1932. According to their story, first revealed in Ralph's obituary in 2003, the brothers taxidermied a jackrabbit and attached horns to it themselves. They sold the taxidermied piece to a local bar for $10, and eventually started producing them en masse. While the Herrick brothers might have given rise to the popularity of the modern jackalope myth, the preexisting accounts of horned rabbits and hares might have been inspired by something less playful. Rabbits and hares around the world are vulnerable to a virus called Shope papillomavirus, named after Richard Shope, who discovered it in—coincidentally—1932. The virus is similar to the human papillomavirus (HPV), but unlike HPV, which causes cancer, Shope papillomavirus causes keratinized growths, which can resemble horns, to form on the skin. These growths can eventually get large enough to hinder the animal's health, and if they grow around the mouth, they can affect the animal's ability to eat.
One animal's tragedy, it seems, can be another's treasure. The Shope papillomavirus was the first virus found to lead to cancer in a mammal, and this discovery led to advancements in human cancer research. In the 1970s, German virologist Harald zur Hausen proved that HPV was the main culprit for cervical cancer. Later, in the 1980s, Isabelle Giri published the complete genomic sequence of the Shope papillomavirus, which turned out to be similar to HPV. All these findings, of course, eventually led to the development of the HPV vaccine, which immunizes people against most strains of HPV responsible for causing cancer. Those are some leaps and bounds that even a jackalope would struggle to make.
[Image description: An illustration showing a squirrel, two rabbits, and a jackalope inside an oval. The jackalope sits in the center.] Credit & copyright: National Gallery of Art, Joris Hoefnagel (Flemish, 1542 - 1600). Gift of Mrs. Lessing J. Rosenwald. Public Domain. -
FREEUS History PP&T CurioFree1 CQ
It was mayhem on the Mississippi. The Siege of Vicksburg, which began on this day in 1863, was one of the most significant battles of the American Civil War. Ending just a day after the Battle of Gettysburg, the Union victory at Vicksburg secured Northern control over the Mississippi River, a critical lifeline for the South. Moreover, the battle played a major role in turning the tide against the Confederacy by eroding morale.
The battle of Vicksburg was all about control of the Mississippi River. Led by General Ulysses S. Grant, Union forces set their sights on the town of Vicksburg on the river’s east bank, which lay about halfway between Memphis and New Orleans. Taking control of Vicksburg would separate the Southern states on each side of the river. Conquering the Confederate stronghold was easier said than done, however. Following the Confederates' loss of key forts in neighboring Tennessee, Vicksburg was the last fortified position from which the South could maintain control over the Mississippi. Knowing this, Confederate Lieutenant General John C. Pemberton, who was in charge of a garrison of around 33,000 men in Vicksburg, began preparing for an impending attack. A Union assault using ironclad ships on the river failed to yield results, while Union General William Tecumseh Sherman's approach by land was repelled by Confederate bombardments. At one point, Grant even tried to dig a canal to circumvent the city's defenses, to no avail.
Eventually, Grant's persistence prevailed. Union forces were able to find footing at Bruinsburg, and after stepping ashore from the Mississippi, they marched toward Jackson, the state capital. Grant took Jackson by May 14 before continuing toward Vicksburg, fighting Confederate forces along the way. On May 18, Grant and his troops arrived at a heavily fortified Vicksburg, but finding that the garrison was poorly prepared, he hoped to take the city quickly.
To Grant’s chagrin, a quick and sound victory was not to be. Pemberton was able to establish a stubborn defense, forcing Grant to lay siege to the city after several days of fighting. But Pemberton was at a severe disadvantage; though he was able to thwart an attempt by Union sappers (also known as combat engineers) to breach the fortifications with explosives, his garrison was low on rations and cut off from reinforcements. Despite this, when Grant demanded an unconditional surrender from Pemberton, the latter refused. With neither willing to back away, the siege continued with day after day of contentious but fruitless fighting. Still, it was clear to Pemberton that his garrison could not last. Grant controlled all roads to Vicksburg, and the garrison was on the verge of starvation. After more than a month and a half of fighting, Grant offered parole for any remaining defenders, allowing them to go home rather than be imprisoned. Thus, the battle ended in a Union victory on July 4. Of the 77,000 Union soldiers and 33,000 Confederate soldiers who fought at Vicksburg, over 1,600 died and thousands more were wounded.
Today, the Siege of Vicksburg is considered one of the death knells of the Confederacy, though it is often overshadowed by the Battle of Gettysburg. While the war continued for another two years, these two battles were a turning point in the trajectory of the conflict which had, until then, favored the Confederacy. After the Union took Vicksburg, Southern forces were unable to maintain their already-waning strength. Morale plummeted, hopes of aid from England were all but gone, and Grant had distinguished himself as a Union commander. Before the Siege of Vicksburg, Grant had been a relatively unknown figure, but his triumph there gave him political momentum that would later place him in the White House. Which would be more frightening, leading a siege or running the country?
[Image description: A black-and-white illustration of Confederate soldiers ready to fire a cannon at the Battle of Vicksburg.] Credit & copyright: A Popular History of the United States, Volume 5, George W. Peters, 1876. Public Domain. -
FREEMusic Appreciation PP&T CurioFree1 CQ
What would life be without a little music? It’s one of the great cornerstones of culture, yet music only exploded as an industry with the advent of mass media in the 20th century. This month in 1959, the National Academy of Recording Arts & Sciences (NARAS), also known as "The Recording Academy," began celebrating musicians, singers, songwriters, and other music industry professionals with the Grammy Awards.
Originally called The Gramophone Awards, the Grammys got their start as black-tie dinners held at the same time in Los Angeles and New York City. The award ceremonies were established to recognize those in the music industry in the same way that the Oscars and the Emmys did for film and television. Compared to the other events, however, The Gramophone Awards were much more formal, and compared to today, they covered relatively few categories: only 28 in total. Modern Grammys cover a whopping 94 categories. Still, many different musical styles were covered by the awards. In fact, the first-ever Record of the Year and Song of the Year awards went to Italian singer-songwriter Domenico Modugno for Nel Blu Dipinto Di Blu (Volare). Meanwhile, Henry Mancini won Album of the Year for The Music from Peter Gunn, and Ella Fitzgerald won awards for Best Vocal Performance, Female and Best Jazz Performance, Individual. Though Frank Sinatra led the pack with the most nominations at six, he only received an award as the art director for the cover of his album, Only the Lonely. With such esteemed musicians and performers recognized during the first Gramophone Awards, the event quickly earned a prestigious status in the entertainment industry.
Over the years, the Grammys have grown in scope, covering more genres and roles within the music industry. In 1980, the Recording Academy began recognizing Rock as a genre, followed by Rap in 1989. Not all categories are shown during the Grammys’ yearly broadcast due to time constraints, which leads to some awards being fairly overlooked. Some lesser-known Grammys are those concerning musical theater and children's music. At one point, there were 109 categories, but the Academy managed to pare things down to 79 after 2011. This was partly achieved by eliminating gendered categories and getting rid of the differentiation between solo and group acts. Of course, the categories have since slowly increased again to 94 in total. In 1997, NARAS established the Latin Academy of Recording Arts & Sciences (LARAS), which started holding its own awards ceremony in 2000 for records released in Spanish or Portuguese.
Today, the Grammys is as much known for providing a televised spectacle for fans of popular music as it is for its prestige. In contrast to the much more formal gatherings of its early years, the ceremonies and the red carpet leading up to the modern Grammys have become stages for fashion and political statements. Some modern Grammy winners have been recognized multiple times, setting impressive records. These include performers like Beyoncé and Quincy Jones, who have been awarded 35 and 28 Grammys respectively, but there are other, lesser-known record holders too. Hungarian-British classical conductor Georg Solti received 31 Grammys in his lifetime. Then there's Jimmy Sturr, who has won 18 of the 25 Grammys ever awarded for Polka, and Yo-Yo Ma, who has won 19 awards for Classical and World Music. The Grammys might have started off as a small dinner, but it's now a veritable feast for the ears. -
FREEWorld History PP&T CurioFree1 CQ
This week, as the weather continues to warm, we're looking back on some of our favorite springtime curios from years past.
Bilbies, not bunnies! That’s the slogan of those in Australia who support the Easter Bilby, an Aussie alternative to the traditional Easter Bunny. Bilbies are endangered Australian marsupials with some rabbit-like features, such as long ears and strong back legs that make them prolific jumpers. This time of year, Australian shops sell chocolate bilbies and picture books featuring the Easter-themed marsupial. But the Easter Bilby isn’t just a way to Aussie-fy Easter. It helps bring awareness to two related environmental problems down under.
Bilbies are unique creatures, and some of the world’s oldest living mammals. They thrive in arid environments where many other animals have trouble surviving. Unlike rabbits, bilbies are omnivores who survive by eating a combination of plants, seeds, fungi, and insects. It’s no wonder that Australians are proud enough of this native animal to use it as a holiday mascot. As is fitting of such a whimsical character, the Easter Bilby was invented by a child. In 1968, 9-year-old Australian Rose-Marie Dusting wrote a short story called Billy The Aussie Easter Bilby, which she later published as a book. The book was popular enough to raise the general public’s interest in bilbies, and the Easter Bilby began appearing on Easter cards and decorations. The Easter Bilby really took off, though, when chocolate companies got on board and began selling chocolate bilbies right alongside the usual Easter Bunnies. Seeing that the Easter Bilby was quite popular, Australian environmentalists seized the opportunity to educate Australians about the bilby’s endangered status and the environmental problems posed by the nation's feral rabbits.
Bilbies were once found across 70 percent of Australia, but today that percentage has shriveled to 20 percent. Besides simple habitat encroachment, human life harmed bilbies in another big way: by introducing non-native species. Europeans introduced both foxes and domesticated cats to Australia in the 19th century. Today, foxes kill around 300 million native Australian animals every year, while cats kill a whopping 2 billion annually. While it’s obvious how predators like foxes and cats can hunt and kill bilbies, cute, fluffy bunnies pose just as much of a threat. On Christmas Day in 1859, European settler Thomas Austin released 24 rabbits into the Australian wilderness, believing that hunting them would provide good sport for his fellow colonists. He couldn’t have foreseen the devastating consequences of his decision. From his original 24 rabbits, an entire population of non-native, feral rabbits was born, and they’ve been decimating native Australian wildlife ever since. These rabbits gobble up millions of native plants. This not only kills species that directly depend on the plants for food, it also causes soil erosion since the plants’ roots normally help keep soil compacted. Erosion can change entire landscapes, making them uninhabitable to native species. Unfortunately, rabbits helped drive one of Australia’s two bilby species, the Lesser Bilby, to extinction in the 1950s. Now, fewer than 10,000 Greater Bilbies remain in the wild.
When the conservation group Foundation for Rabbit-Free Australia caught wind of the Easter Bilby, they took the opportunity to promote it as an environmentally friendly alternative to the bunny-centric holiday. Their efforts led to more chocolate companies producing chocolate bilbies. Some even began donating their proceeds to help save real bilbies. Companies like Pink Lady and Haigh’s Chocolates have donated tens of thousands of dollars to Australia’s Save the Bilby Fund. Other Easter Bilby products include mugs, keychains, and stuffed toys. Some Australian artists create work featuring the Easter Bilby. Just like the Easter Bunny, the Easter Bilby is usually pictured bringing colorful eggs to children, and frolicking in springtime flowers. If he’s anything like his real-life counterparts, he’d sooner eat troublesome termites than cause any environmental damage. Win-win!
[Image description: A vintage drawing of a bilby with its long ears laid back.] Credit & copyright: John Gould, Mammals of Australia Vol. I Plate 7, Wikimedia Commons, Public Domain. -
FREEUS History PP&T CurioFree1 CQ
".... .- .--. .--. -.-- / -... .. .-. - .... -.. .- -.--" That's Morse code for happy birthday! The inventor of the electric telegraph and Morse code, Samuel Morse, was born on this day in 1791. Tinkering and inventing were just two of Morse’s varied interests, but the story of how he invented the telegram involved both genius and tragedy.
Morse was born in Charlestown, Massachusetts, and attended Yale as a young man. As a student, he had a passing interest in electricity, but his real passion was for painting. He especially enjoyed painting miniature portraits, much to the chagrin of his parents, who wanted him to start working as a bookseller's apprentice. Despite the pressure, he remained steadfast in his pursuit of the arts, and eventually made his way to London to train properly as a painter. His painting The Dying Hercules became a critical success, and by 1815 he returned to the U.S. to open his own studio. A decade later, though, in 1825, Morse was struck by a sudden and unexpected tragedy. While he was away from home, working on a portrait of the Marquis de Lafayette, a pivotal figure in the American Revolution, Morse received word that his wife had fallen critically ill. He rushed home, but he was too late—his wife had passed away days before his arrival. Morse was understandably distraught, and the tragedy marked the beginning of his renewed interest in electricity. Specifically, he believed that the key to instant communication lay with electromagnetism.
Although Morse’s name would come to be forever associated with the telegraph, he wasn't the first to invent one by any means. In 1833, the Germans Carl Friedrich Gauss and Wilhelm Weber built one of the first working electromagnetic telegraphs. Meanwhile, William Cooke and Charles Wheatstone in England were working on a telegraph system of their own, but their version was limited in range. Morse himself only managed to create a prototype by 1834, yet by 1838—and with the help of machinist Alfred Vail—he was able to create a telegraph system that could relay a message up to two miles. The message used in this demonstration, "A patient waiter is no loser," was sent in the newly developed Morse code, which Morse devised with the help of Vail. Morse applied for a patent for his telegraph in 1840, and in 1844, a line connecting Baltimore, Maryland, to Washington, D.C. was established. Famously, the first message sent on this line was "What hath God wrought!" Although Morse code as created by Morse was adequate for communicating in English, it wasn't particularly accommodating of other languages. So, in 1851, a number of European countries worked together to develop a simpler variant called International Morse Code. This version was eventually adopted in the U.S. as well and remains the more widely used of the two.
Both the telegraph and Morse code remained the quickest means of long-distance communication for many years, until the advent of the radio and other mass communication devices rendered them obsolete during the 20th century. The death blow to the telegraph—and subsequently, Morse code—as the dominant form of long-distance communication came after WWII, when aging telegraph lines became too great an expense to justify in the age of radio. Morse code still had its uses with radio as the medium instead of the telegraph, but its heyday was long over. Today, telegraphs and Morse code have been relegated to niche uses, but it's undeniable that they helped shape the age of instant communication that we currently occupy. Word travels fast, but Morse made it faster.
[Image description: A photo of Samuel Morse, a man with white hair and a beard, wearing a uniform with many medals.] Credit & copyright: The Metropolitan Museum of Art, Samuel F. B. Morse, Attributed to Mathew B. Brady, ca. 1870. Gilman Collection, Gift of The Howard Gilman Foundation, 2005. Public Domain. -
FREEPolitical Science PP&T CurioFree1 CQ
Tariffs, duties, customs—no matter what you call them, they can be a volatile tool capable of protecting weak industries or bleeding an economy dry. As much as tariffs have been in the news lately, they can be difficult to understand. Luckily, one can always turn to history to see how they’ve been used in the past. In the early days of the U.S., tariffs helped domestic industries stay competitive. However, they can easily turn harmful if they’re implemented without due consideration.
A tariff is a tax applied to goods that are imported or exported, though the latter is rarely used nowadays. Tariffs on exports were sometimes used to safeguard limited resources from leaving the country, but when the word "tariff" is used, it almost always means a tax applied to imports. Tariffs are paid by the entity importing the goods, and they can be either “ad valorem” or "specific" tariffs. Ad valorem tariffs are based on a percentage of the value of the goods being taxed, while specific tariffs are fixed amounts, regardless of the total value. It's easy to think that tariffs are categorically detrimental, but there’s sometimes a good reason for them. Making certain types of imported goods more expensive might help domestic producers of those goods stay more competitive by allowing them to sell their goods at lower prices. On the other hand, poorly-conceived tariffs can end up raising the prices of goods across the board, putting economic pressure on consumers without helping domestic industries. Tariffs used to be much more common around the world, but as international trade grew throughout the 20th century, they became less and less so. In the U.S., at least, many factors led to tariffs’ decline.
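Since the two tariff types come down to simple arithmetic, here is a minimal, purely illustrative Python sketch of the calculations described above; the rate, per-unit amount, shipment value, and quantity are invented for the example and are not drawn from any actual tariff schedule.

# Hypothetical illustration of the two tariff types; all figures are invented.
def ad_valorem_tariff(shipment_value, rate):
    # Tax owed as a percentage of the shipment's declared value.
    return shipment_value * rate

def specific_tariff(quantity, amount_per_unit):
    # Tax owed as a fixed amount per unit, regardless of the shipment's value.
    return quantity * amount_per_unit

# A hypothetical $20,000 shipment of 500 widgets:
print(ad_valorem_tariff(20000, 0.10))  # 10% of value -> 2000.0
print(specific_tariff(500, 3.00))      # $3.00 per unit -> 1500.0

In this made-up case, the ad valorem tariff costs the importer more; if the same goods were cheaper or the shipment larger, the specific tariff could easily be the more expensive of the two.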
In 1789, the Tariff Act was one of the first major pieces of legislation passed by Congress, and it created a massive source of revenue for the fledgling nation. Tariffs helped domestic industries gain their footing by leveling the playing field against their better-established foreign competitors, particularly in Britain. By the beginning of the Civil War, tariffs accounted for around 90 percent of the U.S. government’s revenue. As Americans took up arms against each other, however, there was a sudden, dire need for other sources of government funding. Other taxes were introduced, leading to tariffs becoming less significant. Still, even immediately after the war, tariffs accounted for around 50 percent of the nation's revenue. During the Great Depression, tariffs caused more problems than they solved. The Smoot-Hawley Tariff Act of 1930 was intended to bolster domestic industries, but it also made it less feasible for those industries to export goods, hindering their overall business. By the start of World War II, the U.S. government simply could not rely on tariffs as a significant source of revenue any longer. Social Security, the New Deal, and exponentially growing military expenditures, among other things, created mountains of expenses far too large for tariffs to cover. Thus, tariffs became less popular and less relevant over the decades.
Today, tariffs are typically used as negotiation tools between countries engaged in trade. Generally, tariffs are applied on specific industries or goods. For example, tariffs on steel have been used a number of times in recent history to aid American producers. However, the tariffs making the news as of late are unusual. Instead of targeting specific industries, tariffs are being applied across the board against entire countries, even on goods from established trade partners like Canada and Mexico. Only time will tell how this will impact U.S. consumers and U.S. industries. It’ll be historic no matter what…but that doesn’t always mean it will be smooth sailing.
[Image description: A U.S. flag with a wooden pole.] Credit & copyright: Crefollet, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEUS History PP&T CurioFree1 CQ
You could say that he was a man of many words. While England is the motherland of English, every Anglophone country has its own unique twist on the language. American English wasn't always held in the highest esteem, but Noah Webster helped formalize the American vernacular and helped it stand on its own with his American Dictionary of the English Language, published this month in 1828.
Webster was born on October 16, 1758, in West Hartford, Connecticut, to a family of modest means. His father was a farmer and weaver, while his mother was a homemaker. Though most working-class people didn't attend college at the time, Webster's parents encouraged his studies as a young man, and he began attending Yale at the age of 16. During this time, he briefly served in the local militia and even met George Washington on one occasion as the Revolutionary War raged on. Unfortunately, Webster's financial hardships kept him from pursuing law, his original passion. Instead, he chose to become a teacher. In this role, he began to see the shortcomings of existing textbooks on the English language, all of which came from Britain. These books not only failed to reflect the language as spoken by Americans, but also included a pledge of allegiance to King George. As a grammarian, educator, and proud American, Webster believed that American students should be taught American English, which he dubbed "Federal English."
Thus, Webster set out to formalize the English spoken by Americans. He published the first of his seminal works, A Grammatical Institute of the English Language, in 1783. Also called the “American Spelling Book” or the “Blue-Backed Speller” for the color of its binding, the book codified the spelling of English words as written by Americans. When writing the book, Webster followed three rules: he divided each word into syllables, described how each word was pronounced, and wrote the proper way to spell each word. Webster also simplified the spelling of many words, but not all of his spellings caught on. For instance, he wanted Americans to spell "tongue" as "tung." Still, he continued his efforts to simplify spelling in A Compendious Dictionary of the English Language, published in 1806. The book contained around 37,000 words, and many of the spellings within are still used today. For example, "colour" was simplified to "color," "musick" became "music," and many words that ended in "-re" were changed to end in "-er.” The book even added words not included in British textbooks or dictionaries. Webster’s magnum opus, American Dictionary of the English Language, greatly expanded on his first dictionary by including over 65,000 words. The dictionary was a comprehensive reflection of Webster's own views on American English and its usage, and was largely defined by its "Americanisms," which included nonliterary words and technical words from the arts and sciences. It reflected Webster's belief that spoken language should shape the English language in both the definition of words and their pronunciation.
Today, Webster is remembered through the continuously revised editions of the dictionary that bears his name. His views on language continue to influence lexicographers and linguists. In a way, Webster was his own sort of revolutionary rebel. Instead of muskets on the battlefield, he fought for his country's identity with books in classrooms by going against the grain culturally and academically. Who knew grammar and spelling could be part of a war effort?
[Image description: A portrait of a white-haired man with a white shirt and black jacket sitting in a green chair.] Credit & copyright: National Portrait Gallery, Smithsonian Institution; gift of William A. Ellis. Portrait of Noah Webster , James Herring, 12 Jan 1794 - 8 Oct 1867. Public Domain, CC0.You could say that he was a man of many words. While England is the motherland of English, every Anglophone country has its own unique twist on the language. American English wasn't always held in the highest esteem, but Noah Webster helped formalize the American vernacular and help it stand on its own with his American Dictionary of the English Language, published this month in 1828.
-
FREEUS History PP&T CurioFree1 CQ
Sometimes, you’re in the right place at the wrong time. The Pony Express was a mail delivery service that defied the perils of the wilderness to connect the Eastern and Western sides of the U.S. The riders who traversed the dangerous trail earned themselves a lasting reputation, but the famed service wasn’t destined to last.
Prior to the establishment of the Pony Express on April 3, 1860, there was only one reliable way for someone on the East Coast to send letters or parcels to the West: steamships. These ships traveled by sea from the East Coast down to Panama, where their cargo was unloaded and carried overland to the Pacific side of the isthmus. There, the mail was loaded onto yet another ship and taken up to San Francisco, where it could finally be split up and sent off to various addresses. The only other option was for a ship to travel around the southern tip of South America, which could be treacherous. In addition to the inherent risks of going by sea, these routes were costly and time-consuming. Mail delivery by ship took months, costing the U.S. government more money than it earned in postage. Going directly on land from East to West was also prohibitively dangerous due to a lack of established trails and challenging terrain. Various officials proposed some type of overland mail delivery system using horses, but for years none came to fruition.
Although the early history and conception of the Pony Express are disputed, most historians credit William H. Russell with the concept. He was one of the owners of Russell, Majors and Waddell, a freight, mail, and passenger transportation company. Russell and his partners later founded the Central Overland California & Pike’s Peak Express Company to serve as the parent company to the Pony Express. Simply put, the Pony Express was an express mail delivery service that used a system of relay stations to switch out riders and horses as needed. This wasn’t a unique concept by itself, as similar systems were already in use, but the Pony Express was set apart by its speed and the distance it covered. Operating out of St. Joseph, Missouri, the company guaranteed delivery of mail to and from San Francisco in 10 days. To accomplish this, riders carried up to 20 pounds of mail on horseback and rode California mustangs (feral horses trained to accept riders) 10 to 15 miles at a time between relay stations. Using this system, riders were able to cover over 1,900 miles in the promised 10 days. Riders traveled in any weather through all types of terrain on poorly established trails, both day and night.
Bridging the nearly 2,000-mile gap between U.S. coasts was no easy feat, and the Pony Express quickly established itself as a reliable service. However, just 18 months after operations began, the Pony Express became largely obsolete thanks to the establishment of a telegraph line connecting New York City and San Francisco. In October of 1861, the company stopped accepting new mail, and its last shipment was delivered in November. Despite its short-lived success, the Pony Express holds a near-mythical place in American popular history. Its riders were seen as adventurers who braved the elements through untamed wilderness, and they are considered daring symbols of the Old West. It may not have been as reliable as the modern postal service, but it was a lot easier to romanticize.
[Image description: A stone and concrete pillar-style monument near the site of Rockwell's Station along the Pony Express route in Utah.] Credit & copyright: Beneathtimp, Wikimedia Commons.
-
FREEUS History PP&T CurioFree1 CQ
Rags to riches is an understatement. Madam C.J. Walker, the daughter of two former slaves, worked her way up the ladder of a prejudiced society to earn enormous riches as an entrepreneur. Today we're celebrating her birthday with a look back at her remarkable career.
As a young black woman living in St. Louis in the 1890s, Walker didn't start out looking for the "next big idea." She was eking out a living for herself and her daughter as a washerwoman. It wasn't until she found a job as a sales agent with a haircare company that things started taking off. The role was personal for her, as she suffered from scalp rashes and balding. Plus, her brothers worked in the hair business as barbers.
Walker was successful selling other people's hair products, but employment was getting in the way of her dream. Literally: a man who visited her in a dream inspired her to start her own company, selling hair and beauty products geared towards black women. The Madam C.J. Walker Manufacturing Company, of which Walker was the sole shareholder, made its fortune on sales of Madam Walker’s Wonderful Hair Grower. Nineteenth-century hygiene called for only infrequent hair washing, which led to scalp infections, bacteria, lice, and—most commonly—balding. Walker's Hair Grower combated balding and was backed by Walker's own testimony that she had used it to fix her hair issues. A marketing strategy focused on black women, a neglected but growing portion of consumers, was a key ingredient for success.
As the business grew, Walker revealed bigger ambitions. “I am not merely satisfied in making money for myself," she said. "I am endeavoring to provide employment for hundreds of women of my race." Her company employed some 40,000 “Walker Agents” to teach women about proper hair care. Walker stepped beyond the boundaries of her business as a social activist and philanthropist. She donated thousands to the NAACP and put her voice behind causes like preserving Frederick Douglass's home and fighting for the rights of black World War I veterans.
It's often claimed that Walker was America's first black female self-made millionaire. But when she passed away in 1919, assessors found that her estate totaled around $600,000. Not that the number matters at all, really; Walker's legacy is priceless. We're guessing the businessmen and women she inspired could more than make up the difference.
-
FREEUS History PP&T CurioFree1 CQ
This governor has the gift of gab. On this day in 1775, about a month before the American Revolution began in earnest, orator Patrick Henry uttered one of the most famous sentences in American history: “Give me liberty or give me death.” Or did he? There’s still some scholarly debate as to whether Henry actually said those iconic words, but there’s no doubt that his speeches stirred American imaginations.
Henry was born on May 29, 1736, in Hanover County in the Colony of Virginia, and his childhood put him on a good path toward becoming an orator. His father, a Scottish immigrant, had been educated at King’s College, while his mother came from a wealthy local family. Since his family’s wealth would pass to Henry’s older brother rather than him, he couldn’t afford a life of leisure. He was educated at home by his father, and in his late teens tried to open and run a store with his brother, though it quickly failed. For a time he helped his father-in-law run a tavern in Hanover, before beginning at-home studies to become a lawyer. Henry already understood the power of words and the persuasive force of good oration. He’d grown up watching passionate preachers during the religious revival known as the Great Awakening, which helped drive him toward his new profession.
After Henry earned his law license in 1760, his wit and speaking ability made him a quick success. His most important legal victory came in 1763, in the damages phase of a case known as the Parson’s Cause. Since tobacco was a major cash crop in Virginia, many Virginian officials received their annual pay in tobacco. When a series of droughts in the 1750s caused tobacco prices to rise from two cents per pound to three times that much, the Virginia legislature stepped in to stabilize things. They passed the Two-Penny Act, which set the price of tobacco used to pay contracts at the usual two cents per pound. Clergy in the Anglican Church, which was sponsored by the British government, didn’t want their revenue limited by the Two-Penny Act. So, they appealed to authorities in England, who sided with them, overruling the Two-Penny Act. With the power of England behind him, Reverend James Maury of Hanover County, Virginia, sued his own parish for backpay and won. All that was left was to decide exactly how much backpay Maury was owed. That’s where Patrick Henry came in. Arguing on behalf of the parish vestry, Henry gave a passionate speech about what he saw as the greed of church officials and the overreach of Britain. By overturning the Two-Penny Act, he argued, the British government was exerting tyrannical power over the people of Virginia. Though some in the courtroom accused Henry of treason, the jury sided with him, awarding Maury just one penny of backpay. The case made Henry so popular that it gained him 164 new clients within a year.
Now famous for his fiery speeches and resistance to British power, Henry was elected to the Virginia legislature’s lower chamber, the House of Burgesses, in 1765. In 1774, Henry became a delegate to the First Continental Congress. His most famous speech came the following year, at the Second Virginia Convention, where members debated whether to add language to Virginian governing documents stating that the British king could veto colonial legislation. Henry instead proposed amendments about raising an independent militia, since he believed that war with England was imminent. On March 23, he delivered a fiery address saying, “Gentlemen may cry, Peace, Peace but there is no peace. The war is actually begun!” After arguing in favor of his amendments in more detail, he ended with the famous line, “I know not what course others may take; but as for me, give me liberty or give me death!”
In truth, we’ll never know if Henry actually uttered that famous sentence. His speech was never transcribed during his lifetime, but was pieced together from recollections of those present more than 10 years after his death. Regardless, we do know that Henry went on to become Virginia’s first governor in 1776, after the United States declared independence from England, and that he served until 1779. He was elected again in 1785, and served for two years. Though Henry is best remembered for a single speech, he made plenty of others, won plenty of legal cases, and served his newly formed state and country in both peace and wartime. No one can say that he was all talk!
[Image description: A black-and-white illustration of Patrick Henry delivering his famous speech to other men at the Virginia Assembly. He has one hand raised, as do many of the audience members. On the ground is a paper reading “Proceedings of the Virginia Assembly.”] Credit & copyright: Published by Currier & Ives, c. 1876. Library of Congress. Public Domain.
-
FREEScience PP&T CurioFree1 CQ
If a comet is named after you, does that make you a star among the stars? German astronomer Caroline Herschel, born on this day in 1750, would probably say so. The 35P/Herschel–Rigollet comet bears her name, and she discovered plenty of other comets throughout her long career, which was mostly spent working alongside her brother, William Herschel. That’s not to say that she toiled in her sibling’s shadow, though. Herschel made a name for herself as the first woman in England to hold a government position and the first known woman in the world to receive a salary as a scientist.
Herschel was born on March 16, 1750, in Hanover, Germany, and her childhood got off to a rough start. She was the eighth child and fourth daughter in her family, but two of her sisters died in childhood and the eldest married and left home when Herschel was just five years old, leaving her as the family’s main housekeeper. At just 10 years old, Herschel contracted typhus and nearly died. The infection blinded her in her left eye and severely stunted her growth, leaving her with an adult height of four feet, three inches. Though her father wanted her to be educated, her mother insisted that, since she likely wouldn’t marry due to her disabilities, she should be trained as a housekeeping servant. Her father educated her as best he could, but Herschel ultimately learned little more than basic reading, arithmetic, and some sewing throughout her teenage years.
Things changed in Herschel’s early 20s, when she received an invitation from her two brothers, William and Alexander, to join them in Bath, England. William was becoming a fairly successful singer, and the brothers proposed that Herschel sing with him during some performances. While learning to sing in Bath, Herschel was finally able to become educated in other subjects too. After a few years of running William’s household and participating in his music career, she offered him support when his interests turned from music to astronomy.
Soon, William was making his own telescope lenses, which proved to be more powerful than conventional ones. In 1781, William discovered the planet Uranus, though he mistook it for a comet. As the siblings worked together, Herschel began scanning the sky each night for interesting objects and meticulously recording their positions in a record book, along with any discoveries that she and William made. She also compared their observations with the Messier Catalog, a book of astronomy by French astronomer Charles Messier, which at the time was considered the most comprehensive catalog of astronomical objects. On February 26, 1783, Herschel made her first two independent discoveries when she noticed a nebula that didn’t appear in the Messier Catalog and a small galaxy that later came to be known as Messier 110, a satellite of the Andromeda Galaxy.
In 1798, Herschel presented her astronomical catalog to the Royal Society in England, to be used as an update to English astronomer John Flamsteed’s observations. Her catalog was meticulously detailed, and was organized by north polar distance rather than by constellation. Using telescopes built by William, Herschel went on to discover eight comets. As she and William published papers with the Royal Society, both of them began earning a wage for their work. Herschel was paid £50 a year, making her the first known woman to earn a wage as a scientist.
By 1799, Herschel’s work was so well known that she was independently invited to spend a week with the royal family. Three years later, the Royal Society published the most detailed version of her work yet, though they did so under William’s name. After her brother’s death, Herschel created yet another astronomical catalog, this one for William’s son, John, who had also shown a great interest in astronomy. This catalog eventually became the New General Catalogue, which gave us the NGC numbers by which many astronomical objects are still identified.
Despite her childhood hardships and growing up during a time when women weren't encouraged to practice science, Caroline Herschel made some of the 18th and 19th centuries’ most important contributions to astronomy. Her determination to "mind the heavens," as she put it, has impacted centuries of astronomical study. Happy Women’s History Month!
[Image description: A black-and-white portrait of Caroline Herschel wearing a bonnet and high-collared, lacy blouse.] Credit & copyright: Portrait by M. F. Tielemann, 1829. From pages 114-115 of Agnes Clerke's The Herschels and Modern Astronomy (1895).
-
FREEUS History PP&T CurioFree1 CQ
Gangway! This Civil War battle didn’t take place on horseback, but on ships. While naval battles usually come to mind in relation to the World Wars, they were also part of the Revolutionary War and the American Civil War. In fact, the Battle of Hampton Roads, which ended on this day in 1862, was the first American battle involving ironclad warships.
Just a few days after the outbreak of the Civil War on April 12, 1861, President Lincoln ordered a blockade of all major ports in states that had seceded from the Union, including those around Norfolk, Virginia. While in charge of maintaining the blockade at the Gosport Navy Yard in Portsmouth, Virginia, Union leaders got word that a large Confederate force was on its way to claim control of the area. The Union thus burned parts of the naval yard and several of its own warships to prevent them from falling into Confederate hands. Among them was the USS Merrimack, a type of steam-powered warship known as a steam frigate. The ship was also a screw frigate, as it was powered by screw propellers, making it quite agile for its time. When the ship was set ablaze, it burned only to the waterline. The bottom half of the Merrimack, which included its intact steam engines, sank beneath the surface of the Norfolk Navy Yard. Union troops then retreated, and the Confederacy took over the area.
The Confederacy now controlled the south side of an area called Hampton Roads. This was a roadstead, or place where boats could be safely anchored, positioned in an area where the Elizabeth, Nansemond, and James rivers met before flowing into Chesapeake Bay. Determined to destroy the Union blockade that had cut them off from trade, the Confederates began pulling up remnants of recently burned Union ships, including the Merrimack. Since the blockade included some of the Union’s most powerful ships, the Confederacy rebuilt the Merrimack as an ironclad warship, fitting an iron ram onto her prow and rebuilding her formerly wooden upper deck with an iron-covered citadel that could mount ten guns. This new ship was named the CSS Virginia.
Word of the CSS Virginia caused something of a panic amongst Union officers, and they quickly got permission from Congress to begin construction of their own ironclad warship. The vessel was the brainchild of Swedish engineer John Ericsson, and included novel elements like a rotating turret with two large guns, rather than many small ones. They named their ship the USS Monitor.
The Battle of Hampton Roads began on the morning of March 8, 1862, when the CSS Virginia made a run for the Union’s blockade. Although several Union ships fired on the advancing Virginia, most of their gunfire bounced off her armor. The Virginia quickly rammed and sank the Cumberland, one of the five main ships in the blockade, though doing so broke off Virginia’s iron ram. Virginia then forced the surrender of another Union ship, the Congress, before firing upon it with red-hot cannonballs, lighting it on fire. Already, more than 200 Union troops had been killed, while the Virginia had lost only two crewmen. As night fell and visibility waned, the ship retreated to wait for daylight.
The Union quickly dispatched the Monitor to meet Virginia the next day. When the Confederates headed for the Minnesota, a grounded Union ship, Monitor rushed in to block her path. The two ironclads fired at one another, and continued to do so for most of the day, each finding it difficult to pierce the other’s armor. At one point, Virginia ran aground, but was able to work herself free just in time to avoid being destroyed. At another point, Monitor’s captain, Lieutenant John L. Worden, was temporarily blinded when his ship’s pilot house was hit with a charge. Monitor was thus forced to retreat, but neither ship was damaged badly enough to be rendered incapable of fighting, so the battle ended inconclusively. Both sides claimed victory, but with the Union blockade still intact, the Confederacy hadn’t gained much ground. Eventually, the Confederacy was forced to destroy its own ship when it abandoned Norfolk, to prevent Virginia from falling into enemy hands. The Monitor sank in late 1862 when she encountered high waves while attempting to make her way to North Carolina. A pretty unimpressive end for such inventive ships.
[Image description: A painting depicting the Battle of Hampton Roads. Soldiers on horses look down a hill over a naval battle with ships on fire.] Credit & copyright: Kurz & Allison Art Publishers, 1889. Library of Congress Prints and Photographs Division Washington, D.C. 20540 USA. Public Domain.