Curio Cabinet / Person, Place, or Thing
-
FREEBiology PP&T CurioFree1 CQ
You really shouldn’t spray paint at church—especially not on the grave of the world’s most famous biologist. Two climate activists recently made headlines for spray painting a message on Charles Darwin’s grave, in London’s Westminster Abbey. They hoped to draw attention to the fact that Earth’s global temperatures were 34.7 degrees higher than pre-industrial levels for the first time in 2024. While there’s no way to know how Darwin would feel about our modern climate crisis, during his lifetime he wasn’t focused on global temperatures. Rather, he wanted to learn how living things adapted to their environments. His theory of natural selection was groundbreaking…though, contrary to popular belief, Darwin was far from the first scientist to notice that organisms changed over time.
Born on February 12, 1809, in Shrewsbury, England, Charles Darwin was already interested in nature and an avid collector of plants and insects by the time he was teen. Still, he didn’t set out to study the natural world, at first. Instead, he apprenticed with his father, a doctor, then enrolled at the University of Edinburgh’s medical school in 1825. Alas, Darwin wasn’t cut out to be a doctor. Not only was he bored by medical lectures, he was deeply (and understandably) upset by medical practices of the time. This was especially true of a surgery he witnessed in which doctors operated on a child without anesthetics—because they hadn’t been invented yet. After leaving medical school, Darwin didn’t have a clear direction in life. He studied taxidermy for a time and later enrolled at Cambridge University to study theology. Yet again, Darwin found himself drawn away from his schooling, finally spurning theology to join the five-year voyage of the HMS Beagle to serve as its naturalist. The Beagle was set to circumnavigate the globe and survey the coastline of South America, among other things, allowing Darwin to travel to remote locations rarely visited by anyone.
During the voyage, Darwin did just what he’d done as a child, collecting specimens of insects, plants, animals, and fossils. He didn’t quite have the same “leave only footprints” mantra as modern scientists, though. In fact, Darwin not only documented the various lifeforms he encountered on his journey, he dined on them too. This was actually a habit dating back to his days at Cambridge, where he was the founding member of the Gourmet Club (also known as the Glutton Club). The goal of the club had been to feast on “birds and beasts which were before unknown to human palate,” and Darwin certainly made good on that motto during his time aboard the Beagle. According to his notes, Darwin ate iguanas, giant tortoises, armadillos, and even a puma, which he said was "remarkably like veal in taste." His most important contribution as a naturalist, though, was his theory of natural selection.
Darwin came up with his most famous idea after observing 13 different species of finches on the Galápagos Islands. Examining their behavior in the wild and studying their anatomy from captured specimens, Darwin found that the finches all had differently shaped beaks for different purposes. Some were better suited for eating seeds, while others ate insects. Despite these differences, Darwin concluded that they were all descended from the same bird, having many common characteristics, with specializations arising over time. Darwin wasn’t the first person to posit the possibility of evolution, though. 18th-century naturalist Jean Baptiste Lamarck believed that animals changed their bodies throughout their lives based on their environment, while Darwin’s contemporary Alfred Russel Wallace came up with the same theory of natural selection as he did. In fact, the two published a joint statement and gave a presentation at the Linnean Society in London in 1858. Darwin didn’t actually coin the phrase “survival of the fittest,” either. English philosopher Herbert Spencer came up with it in 1864 while comparing his economic and sociological theories to Darwin’s theory of evolution.
Despite Darwin’s confidence in his theory and praise from his peers in the scientific world, he actually waited 20 years to publish his findings. He was fearful of how his theory would be received by the religious community in England, since it contradicted much of what was written in the Bible. However, despite some public criticism, Darwin was mostly celebrated upon his theory’s publication. When he died in 1882, he was laid to rest in London’s Westminster Abbey, alongside England’s greatest heroes. It seems he didn’t have much to fear if his countrymen were willing to bury him in a church!
[Image description: A black-and-white photograph of Charles Darwin with a white beard.] Credit & copyright: Library of Congress, Prints & Photographs Division, LC-DIG-ggbain-03485, George Grantham Bain Collection. No known restrictions on publication.You really shouldn’t spray paint at church—especially not on the grave of the world’s most famous biologist. Two climate activists recently made headlines for spray painting a message on Charles Darwin’s grave, in London’s Westminster Abbey. They hoped to draw attention to the fact that Earth’s global temperatures were 34.7 degrees higher than pre-industrial levels for the first time in 2024. While there’s no way to know how Darwin would feel about our modern climate crisis, during his lifetime he wasn’t focused on global temperatures. Rather, he wanted to learn how living things adapted to their environments. His theory of natural selection was groundbreaking…though, contrary to popular belief, Darwin was far from the first scientist to notice that organisms changed over time.
Born on February 12, 1809, in Shrewsbury, England, Charles Darwin was already interested in nature and an avid collector of plants and insects by the time he was teen. Still, he didn’t set out to study the natural world, at first. Instead, he apprenticed with his father, a doctor, then enrolled at the University of Edinburgh’s medical school in 1825. Alas, Darwin wasn’t cut out to be a doctor. Not only was he bored by medical lectures, he was deeply (and understandably) upset by medical practices of the time. This was especially true of a surgery he witnessed in which doctors operated on a child without anesthetics—because they hadn’t been invented yet. After leaving medical school, Darwin didn’t have a clear direction in life. He studied taxidermy for a time and later enrolled at Cambridge University to study theology. Yet again, Darwin found himself drawn away from his schooling, finally spurning theology to join the five-year voyage of the HMS Beagle to serve as its naturalist. The Beagle was set to circumnavigate the globe and survey the coastline of South America, among other things, allowing Darwin to travel to remote locations rarely visited by anyone.
During the voyage, Darwin did just what he’d done as a child, collecting specimens of insects, plants, animals, and fossils. He didn’t quite have the same “leave only footprints” mantra as modern scientists, though. In fact, Darwin not only documented the various lifeforms he encountered on his journey, he dined on them too. This was actually a habit dating back to his days at Cambridge, where he was the founding member of the Gourmet Club (also known as the Glutton Club). The goal of the club had been to feast on “birds and beasts which were before unknown to human palate,” and Darwin certainly made good on that motto during his time aboard the Beagle. According to his notes, Darwin ate iguanas, giant tortoises, armadillos, and even a puma, which he said was "remarkably like veal in taste." His most important contribution as a naturalist, though, was his theory of natural selection.
Darwin came up with his most famous idea after observing 13 different species of finches on the Galápagos Islands. Examining their behavior in the wild and studying their anatomy from captured specimens, Darwin found that the finches all had differently shaped beaks for different purposes. Some were better suited for eating seeds, while others ate insects. Despite these differences, Darwin concluded that they were all descended from the same bird, having many common characteristics, with specializations arising over time. Darwin wasn’t the first person to posit the possibility of evolution, though. 18th-century naturalist Jean Baptiste Lamarck believed that animals changed their bodies throughout their lives based on their environment, while Darwin’s contemporary Alfred Russel Wallace came up with the same theory of natural selection as he did. In fact, the two published a joint statement and gave a presentation at the Linnean Society in London in 1858. Darwin didn’t actually coin the phrase “survival of the fittest,” either. English philosopher Herbert Spencer came up with it in 1864 while comparing his economic and sociological theories to Darwin’s theory of evolution.
Despite Darwin’s confidence in his theory and praise from his peers in the scientific world, he actually waited 20 years to publish his findings. He was fearful of how his theory would be received by the religious community in England, since it contradicted much of what was written in the Bible. However, despite some public criticism, Darwin was mostly celebrated upon his theory’s publication. When he died in 1882, he was laid to rest in London’s Westminster Abbey, alongside England’s greatest heroes. It seems he didn’t have much to fear if his countrymen were willing to bury him in a church!
[Image description: A black-and-white photograph of Charles Darwin with a white beard.] Credit & copyright: Library of Congress, Prints & Photographs Division, LC-DIG-ggbain-03485, George Grantham Bain Collection. No known restrictions on publication. -
FREEUS History PP&T CurioFree1 CQ
For most people today, winter is either a time for fun activities like sledding, ice skating, and skiing, or a time of inconvenience, when streets are slippery, commutes are longer, and windshields need scraping. But not so long ago, winter was a truly dangerous time for average people, especially if they were traveling. No story illustrates that point quite as well as the tragic tale of the Donner Party, a group of pioneers migrating from the Midwest to California in 1846. Their attempt to survive a brutal winter in the Sierra Nevada is considered one of the darkest chapters from the time of westward expansion in America.
Before the completion of the first Transcontinental Railroad in 1869, traversing the U.S. was a dangerous, harrowing task. Journeys were made largely on foot with provisions and other supplies carried on wagons. There weren’t always well-established roads or reliable maps, making long-distance travel a particularly haphazard endeavor. Nevertheless, the allure of fertile farmland drew thousands to the West Coast, including brothers George and Jacob Donner, as well as James Reed, a successful businessman from Springfield, Illinois. The Donner brothers and Reed formed a party of around 31 people and set off for Independence, Missouri, on April 14, 1846. On May 12, they joined a wagon train (a group of individual parties that traveled together for mutual protection) and headed west toward Fort Laramie 650 miles away. For the first portion of the trip, they stayed on the Oregon Trail, which ended near present-day Portland, Oregon. The Donners and Reeds, however, were traveling to California, and intended to take the California Trail, which diverged from the Oregon Trail at two points between Fort Bridger and Fort Hall. However, instead of waiting for either of those better-established routes, the Donners and Reed opted to take what they believed was a shortcut on the advice of a guide they were traveling with named Lansford Hastings. This supposed shortcut, called Hastings Cutoff, was purported to cut 300 miles from the trip, which would have gotten the travelers to their destination months earlier than anticipated. Hastings Cutoff was heavily promoted by Hastings in his book, The Emigrants' Guide to Oregon and California, which contained advice and trail maps to the West Coast.
What the Donners and Reed didn’t know was that Hastings had never actually traveled his namesake shortcut himself. Contrary to his assertions, the shortcut actually added 125 miles to the trip. Hastings also didn’t join the Donners and Reeds, who parted ways from him at Fort Bridger. Electing George Donner as their leader, the Donners, the Reeds, and dozens more joined together to tackle Hastings Cutoff. The Donner party reached it on July 31, and initially made good time. But since Hastings Cutoff took them through largely untraveled wilderness, they faced severe delays, preventing them from crossing the Sierra Nevada before winter. On October 31, the party established a camp to survive the winter in the area now known as Donner Pass. By then, Reed and his family had set off on their own after he killed another man in the party. As winter set in, the Donner party built cabins for shelter, but they had little in the way of supplies, having lost most of their food during their previous delays. By December, they were trapped by heavy snow, and on the 16th of that month, 15 members of the party set out to find help. Most of the remaining survivors at camp were children.
The aftermath of the disastrous venture made headlines around the country. Only seven of the party members who set out for help survived, and of the original 89 members of the Donner party, 42 starved or froze to death. Sensational claims of cannibalism became the focus of the story after it was discovered that about half of the survivors had consumed the flesh of the dead after depleting their meager supply of food, livestock, dogs, and whatever leather they could boil. Among the dead were the Donner brothers and most of their immediate family. Today, the doomed expedition is memorialized through museum exhibits and the area where the Donner party spent their harrowing winter, which is now called Donner Pass. The next time you curse yourself for taking the wrong exit on a road trip, thank your lucky stars for GPS.
[Image description: Snow falling against a black background.] Credit & copyright: Dillon Kydd, PexelsFor most people today, winter is either a time for fun activities like sledding, ice skating, and skiing, or a time of inconvenience, when streets are slippery, commutes are longer, and windshields need scraping. But not so long ago, winter was a truly dangerous time for average people, especially if they were traveling. No story illustrates that point quite as well as the tragic tale of the Donner Party, a group of pioneers migrating from the Midwest to California in 1846. Their attempt to survive a brutal winter in the Sierra Nevada is considered one of the darkest chapters from the time of westward expansion in America.
Before the completion of the first Transcontinental Railroad in 1869, traversing the U.S. was a dangerous, harrowing task. Journeys were made largely on foot with provisions and other supplies carried on wagons. There weren’t always well-established roads or reliable maps, making long-distance travel a particularly haphazard endeavor. Nevertheless, the allure of fertile farmland drew thousands to the West Coast, including brothers George and Jacob Donner, as well as James Reed, a successful businessman from Springfield, Illinois. The Donner brothers and Reed formed a party of around 31 people and set off for Independence, Missouri, on April 14, 1846. On May 12, they joined a wagon train (a group of individual parties that traveled together for mutual protection) and headed west toward Fort Laramie 650 miles away. For the first portion of the trip, they stayed on the Oregon Trail, which ended near present-day Portland, Oregon. The Donners and Reeds, however, were traveling to California, and intended to take the California Trail, which diverged from the Oregon Trail at two points between Fort Bridger and Fort Hall. However, instead of waiting for either of those better-established routes, the Donners and Reed opted to take what they believed was a shortcut on the advice of a guide they were traveling with named Lansford Hastings. This supposed shortcut, called Hastings Cutoff, was purported to cut 300 miles from the trip, which would have gotten the travelers to their destination months earlier than anticipated. Hastings Cutoff was heavily promoted by Hastings in his book, The Emigrants' Guide to Oregon and California, which contained advice and trail maps to the West Coast.
What the Donners and Reed didn’t know was that Hastings had never actually traveled his namesake shortcut himself. Contrary to his assertions, the shortcut actually added 125 miles to the trip. Hastings also didn’t join the Donners and Reeds, who parted ways from him at Fort Bridger. Electing George Donner as their leader, the Donners, the Reeds, and dozens more joined together to tackle Hastings Cutoff. The Donner party reached it on July 31, and initially made good time. But since Hastings Cutoff took them through largely untraveled wilderness, they faced severe delays, preventing them from crossing the Sierra Nevada before winter. On October 31, the party established a camp to survive the winter in the area now known as Donner Pass. By then, Reed and his family had set off on their own after he killed another man in the party. As winter set in, the Donner party built cabins for shelter, but they had little in the way of supplies, having lost most of their food during their previous delays. By December, they were trapped by heavy snow, and on the 16th of that month, 15 members of the party set out to find help. Most of the remaining survivors at camp were children.
The aftermath of the disastrous venture made headlines around the country. Only seven of the party members who set out for help survived, and of the original 89 members of the Donner party, 42 starved or froze to death. Sensational claims of cannibalism became the focus of the story after it was discovered that about half of the survivors had consumed the flesh of the dead after depleting their meager supply of food, livestock, dogs, and whatever leather they could boil. Among the dead were the Donner brothers and most of their immediate family. Today, the doomed expedition is memorialized through museum exhibits and the area where the Donner party spent their harrowing winter, which is now called Donner Pass. The next time you curse yourself for taking the wrong exit on a road trip, thank your lucky stars for GPS.
[Image description: Snow falling against a black background.] Credit & copyright: Dillon Kydd, Pexels -
FREEPlay PP&T CurioFree1 CQ
This family business really took off around the globe…by making globes! Snow globes are popular souvenirs and holiday decorations the world over. While these whimsical decorations seem like a simple concept—a diorama inside a glass globe with some water and fake snow thrown in—they have a surprisingly scientific origin.
Erwin Perzy I, an Austrian trademan and tinkerer, didn’t set out to invent the snowglobe. Rather, he was in the business of selling medical instruments to local surgeons. In 1900, many physicians were looking to improve the lighting in their operating rooms, which at the time were often small, dim, and hard to work in. So, Perzy went to work, experimenting with a lightbulb placed near a water-filled glass globe. In order to amplify the brightness, Perzy tried adding different materials in the water to reflect the light. His invention never caught on with surgeons, but it did give Perzy an idea. He was already making miniature pewter replicas of the nearby Mariazell Basilica to sell to tourists and pilgrims who visited the site in droves. The souvenir was already popular, so he decided to bump it up a notch by taking some of the tiny buildings and placing them inside the globes. Filled with water and a proprietary blend of wax to mimic snow, the souvenir was sold as a diorama of the Mariazell Basilica in winter, and it was an instant success.
Some historians have pointed out that snow globes may have existed, at least in some form, before Perzy's invention. During the 1878 Paris Universelle Exposition, a French glassware company sold domed paperweights containing a model of a man holding an umbrella. The dome was also filled with water and imitation snow, but this version never caught on. Either way, Perzy’s patent for his snow globe was the first of its kind, and by 1905, business was booming.
At first, snow globes remained a regional craze. In 1908, Emperor Franz Joseph of Austria awarded Perzy for his novel contributions to toymaking, helping to boost snow globe’s popularity. For the first decades of the 20th century, snow globe’s spread steadily across Europe, but sales fell during World War I, World War II, and the intervening period of economic depression. After World War II, business took off again and began to spread to the U.S. By then, Perzy’s son, Erwin Perzy II, was in charge of the family business and made the decision to market snow globes as a Christmas item. The first Christmas snow globe featured a Christmas tree inside, and proved to be a great success. With the post-war baby boom and a rising economy, snow globe sales skyrocketed. Beginning in the 1970s, Erwin Perzy III took over the family business and started selling snow globes to Japan, but by the end of the 1980s, there was a problem. The patent filed by the first Perzy expired, forcing the family to pivot and market their products as the real deal, naming themselves the Original Viennese Snow Globes.
Today, the company is still owned and operated by the Perzy family, and while plenty of other companies sell snow globes, they’re still recognized as the original. In the years since their rebranding, they’ve been commissioned to make custom snow globes for a number of U.S. presidents, and in 2020, they even made one with a model toilet paper roll inside to poke fun at the shortages during the COVID pandemic. In addition to being the original, the company still uses a proprietary blend of wax and plastic for their snow, which they claim floats longer than their competitors’. That’s one way to keep shaking up the industry after all these years.
[Image description: A snowglobe with two figures inside.] Credit & copyright: Merve Sultan, PexelsThis family business really took off around the globe…by making globes! Snow globes are popular souvenirs and holiday decorations the world over. While these whimsical decorations seem like a simple concept—a diorama inside a glass globe with some water and fake snow thrown in—they have a surprisingly scientific origin.
Erwin Perzy I, an Austrian trademan and tinkerer, didn’t set out to invent the snowglobe. Rather, he was in the business of selling medical instruments to local surgeons. In 1900, many physicians were looking to improve the lighting in their operating rooms, which at the time were often small, dim, and hard to work in. So, Perzy went to work, experimenting with a lightbulb placed near a water-filled glass globe. In order to amplify the brightness, Perzy tried adding different materials in the water to reflect the light. His invention never caught on with surgeons, but it did give Perzy an idea. He was already making miniature pewter replicas of the nearby Mariazell Basilica to sell to tourists and pilgrims who visited the site in droves. The souvenir was already popular, so he decided to bump it up a notch by taking some of the tiny buildings and placing them inside the globes. Filled with water and a proprietary blend of wax to mimic snow, the souvenir was sold as a diorama of the Mariazell Basilica in winter, and it was an instant success.
Some historians have pointed out that snow globes may have existed, at least in some form, before Perzy's invention. During the 1878 Paris Universelle Exposition, a French glassware company sold domed paperweights containing a model of a man holding an umbrella. The dome was also filled with water and imitation snow, but this version never caught on. Either way, Perzy’s patent for his snow globe was the first of its kind, and by 1905, business was booming.
At first, snow globes remained a regional craze. In 1908, Emperor Franz Joseph of Austria awarded Perzy for his novel contributions to toymaking, helping to boost snow globe’s popularity. For the first decades of the 20th century, snow globe’s spread steadily across Europe, but sales fell during World War I, World War II, and the intervening period of economic depression. After World War II, business took off again and began to spread to the U.S. By then, Perzy’s son, Erwin Perzy II, was in charge of the family business and made the decision to market snow globes as a Christmas item. The first Christmas snow globe featured a Christmas tree inside, and proved to be a great success. With the post-war baby boom and a rising economy, snow globe sales skyrocketed. Beginning in the 1970s, Erwin Perzy III took over the family business and started selling snow globes to Japan, but by the end of the 1980s, there was a problem. The patent filed by the first Perzy expired, forcing the family to pivot and market their products as the real deal, naming themselves the Original Viennese Snow Globes.
Today, the company is still owned and operated by the Perzy family, and while plenty of other companies sell snow globes, they’re still recognized as the original. In the years since their rebranding, they’ve been commissioned to make custom snow globes for a number of U.S. presidents, and in 2020, they even made one with a model toilet paper roll inside to poke fun at the shortages during the COVID pandemic. In addition to being the original, the company still uses a proprietary blend of wax and plastic for their snow, which they claim floats longer than their competitors’. That’s one way to keep shaking up the industry after all these years.
[Image description: A snowglobe with two figures inside.] Credit & copyright: Merve Sultan, Pexels -
FREEPP&T CurioFree1 CQ
This is one dispute between neighbors that got way out of hand. On this day in 1845, the U.S. Congress approved the annexation of the Republic of Texas, leading to the Mexican-American War. The conflict lasted for two brutal years and claimed the lives of nearly 40,000 soldiers.
Contrary of popular belief, Texas was not actually part of Mexico at the time of its annexation. Rather, it was a breakaway state—a Republic of its own that had gained independence from Mexico during the fittingly-named Texas Revolution. When the U.S. decided to annex it, the Republic had existed for around 10 years. For most of its existence, the U.S. recognized the Republic of Texas as an independent nation, while Mexico did not. Mexico considered it a rebellious state, and was eager to quash the Republic’s independent economic dealings with other nations. At the same time, they threatened war if the U.S. ever tried to annex the Republic of Texas.
Mexico had plenty of reasons to worry since the Republic of Texas itself was in favor of being annexed. In 1836, the Republic voted to become part of the U.S., as they were eager to procure the protection of the U.S. military and gain a stronger economic standing. However, it wasn’t until 1845 that President John Tyler, with the help of President-elect James K. Polk, passed a joint resolution in both houses of Congress and officially made Texas part of the United States. This increase in U.S. territory followed a trend of westward expansion at the time.
Mexico wasn’t happy, but they didn’t make good on their threat to declare war over the annexation. Rather, they took issue with Texas’ new borders. Mexico believed that the border should only extend as far as the Nueces River, but Texas claimed that their border extended all the way to the Rio Grande River and included portions of modern-day New Mexico and Colorado. In November, 1845, The U.S. sent Congressman John Slidell to negotiate a purchase agreement with Mexico for the disputed areas of land. At the same time, The U.S. Army began to take up stations within the disputed territory, infuriating Mexican military leaders and leading to open skirmishes between Mexican and U.S. troops. President Polk had run on a platform of westward U.S. expansion, so he wasn’t about to cede any land to Mexico, and Mexico wouldn’t allow it to be purchased. So, Polk urged Congress to declare war on Mexico, which they did on May 13, 1846.
From the start, Mexico faced serious disadvantages. Their armaments were outdated compared to those of U.S. troops, as most Mexican soldiers used surplus British muskets while U.S. soldiers had access to rifles and revolvers. Most difficult for Mexico to overcome were its own, severe political divisions. Centralistas, who supported a centralized Mexican government, were bitter rivals with federalists, who wanted a decentralized government structure. These two groups often failed to work together within military ranks, and sometimes even turned their weapons on one another. Even Mexican General Antonio López de Santa Anna, Mexico’s most famous military leader, struggled to get his nation’s divided political factions to fight together.
These obstacles quickly proved insurmountable for the Mexican military. After a three-day battle, the U.S. handily captured the major city of Monterrey, Mexico, on September 24, 1846. Not long after, the U.S. advanced into central Mexico and the bloody Battle of Buena Vista ended ambiguously, with both sides claiming victory. However, Mexico never decisively won a single battle in the war, and on September 14, 1847, the U.S. Army captured Mexico City, ending the fighting.
It wasn’t exactly smooth sailing from that point on. The Mexican government had to reform enough to be able to negotiate the war’s ending. This took time, since most of the Mexican government had fled Mexico City in advance of its downfall. It wasn’t until February 2, 1848, that the Treaty of Guadalupe Hidalgo was signed, and the war officially ended. The treaty granted the U.S. all of the formerly-contested territory, which eventually became the states of New Mexico, Utah, Arizona, Nevada, Colorado, California, and, of course, Texas. In return, Mexico got $15 million—far less than the U.S. originally offered to purchase the territory for. It might not have been a great deal to begin with—but Mexico likely ended up wishing they'd taken it.
[Image description: An illustration of soldiers in blue uniforms on horseback, one holding a sword aloft. Other soldiers are on the ground in disarray as others march up a distant hill amid clouds of smoke.] Credit & copyright: Storming of Independence Hill at the Battle of Monterey Kelloggs & Thayer, c. 1850-1900. Library of Congress Prints and Photographs Division Washington, D.C. 20540 USA. Control number: 93507890. Public Domain.This is one dispute between neighbors that got way out of hand. On this day in 1845, the U.S. Congress approved the annexation of the Republic of Texas, leading to the Mexican-American War. The conflict lasted for two brutal years and claimed the lives of nearly 40,000 soldiers.
Contrary of popular belief, Texas was not actually part of Mexico at the time of its annexation. Rather, it was a breakaway state—a Republic of its own that had gained independence from Mexico during the fittingly-named Texas Revolution. When the U.S. decided to annex it, the Republic had existed for around 10 years. For most of its existence, the U.S. recognized the Republic of Texas as an independent nation, while Mexico did not. Mexico considered it a rebellious state, and was eager to quash the Republic’s independent economic dealings with other nations. At the same time, they threatened war if the U.S. ever tried to annex the Republic of Texas.
Mexico had plenty of reasons to worry since the Republic of Texas itself was in favor of being annexed. In 1836, the Republic voted to become part of the U.S., as they were eager to procure the protection of the U.S. military and gain a stronger economic standing. However, it wasn’t until 1845 that President John Tyler, with the help of President-elect James K. Polk, passed a joint resolution in both houses of Congress and officially made Texas part of the United States. This increase in U.S. territory followed a trend of westward expansion at the time.
Mexico wasn’t happy, but they didn’t make good on their threat to declare war over the annexation. Rather, they took issue with Texas’ new borders. Mexico believed that the border should only extend as far as the Nueces River, but Texas claimed that its border extended all the way to the Rio Grande and included portions of modern-day New Mexico and Colorado. In November 1845, the U.S. sent Congressman John Slidell to negotiate a purchase agreement with Mexico for the disputed areas of land. At the same time, the U.S. Army began to take up stations within the disputed territory, infuriating Mexican military leaders and leading to open skirmishes between Mexican and U.S. troops. President Polk had run on a platform of westward U.S. expansion, so he wasn’t about to cede any land to Mexico, and Mexico wouldn’t allow it to be purchased. So, Polk urged Congress to declare war on Mexico, which it did on May 13, 1846.
From the start, Mexico faced serious disadvantages. Their armaments were outdated compared to those of U.S. troops, as most Mexican soldiers used surplus British muskets while U.S. soldiers had access to rifles and revolvers. Most difficult for Mexico to overcome were its own severe political divisions. Centralistas, who supported a centralized Mexican government, were bitter rivals of federalists, who wanted a decentralized government structure. These two groups often failed to work together within military ranks, and sometimes even turned their weapons on one another. Even General Antonio López de Santa Anna, Mexico’s most famous military leader, struggled to get his nation’s divided political factions to fight together.
These obstacles quickly proved insurmountable for the Mexican military. After a three-day battle, the U.S. handily captured the major city of Monterrey, Mexico, on September 24, 1846. Not long after, the U.S. advanced into central Mexico and the bloody Battle of Buena Vista ended ambiguously, with both sides claiming victory. However, Mexico never decisively won a single battle in the war, and on September 14, 1847, the U.S. Army captured Mexico City, ending the fighting.
It wasn’t exactly smooth sailing from that point on. The Mexican government had to reform enough to be able to negotiate the war’s ending. This took time, since most of the Mexican government had fled Mexico City in advance of its downfall. It wasn’t until February 2, 1848, that the Treaty of Guadalupe Hidalgo was signed, and the war officially ended. The treaty granted the U.S. all of the formerly-contested territory, which eventually became the states of New Mexico, Utah, Arizona, Nevada, Colorado, California, and, of course, Texas. In return, Mexico got $15 million—far less than the U.S. originally offered to purchase the territory for. It might not have been a great deal to begin with—but Mexico likely ended up wishing they'd taken it.
[Image description: An illustration of soldiers in blue uniforms on horseback, one holding a sword aloft. Other soldiers are on the ground in disarray as others march up a distant hill amid clouds of smoke.] Credit & copyright: Storming of Independence Hill at the Battle of Monterey Kelloggs & Thayer, c. 1850-1900. Library of Congress Prints and Photographs Division Washington, D.C. 20540 USA. Control number: 93507890. Public Domain. -
World History PP&T Curio
Guys, I don’t think that’s Santa! In recent years, a monster-like figure known as Krampus has taken the modern world by storm, popping up in memes and even starring in his own movie. But this folkloric figure is far from a modern invention. In fact, his fame as a Christmas figure began in the 17th century (though his origins stretch back even further, to the 12th century) and he was actually portrayed as Santa’s helper.
The name Krampus is thought to come from the German word for claw, “Krampen.” Krampus certainly does have fearsome claws, along with exaggerated, goat-like features (horns, legs, hooves, and a tail) on a mostly humanoid body with a long tongue and shaggy, black fur. Krampus is also associated with Norse mythology, and one of his earliest iterations was thought to be as the son of Hel, the goddess of the underworld. Regardless of exactly where he came from, Krampus came to have just one job during Christmas, according to folklore in many European countries: punishing children who misbehaved during the year. Unlike Santa, who merely rewards good children, the Krampus takes punitive measures like beating children with sticks and sometimes even kidnapping them. Santa isn’t unaware of Krampus’s deeds, either. According to folklore, since Santa is a saint, he can’t punish children…which is why Krampus does it for him. Both St. Nicholas and Krampus are said to arrive on Krampusnacht, or Krampus Night (December 5), to dole out each child’s reward or punishment, respectively. The next morning, children are supposed to be either basking in their presents or crying over their injuries from the night before. Compared to that, some coal in the stocking might be preferable.
This bizarre goat-monster probably came to be associated with Christmas because he was already associated with the Winter Solstice and the pagan traditions surrounding it. As Christianity spread through formerly pagan regions, the two traditions mingled, creating an unlikely crossover of a Turkish saint and a Norse demon. However, Krampusnacht might have taken more from the pagans than the Christians. Krampusnacht usually involves revelers handing out alcohol and a parade where people dressed like the Krampus run around chasing children. No surprise, then, that once the Krampus became intertwined with Christmas, the Catholic Church attempted to abolish the figure several times, to no avail. One particularly large, long-running festival takes place in Lienz, Austria, with a parade called Perchtenlauf, where cowbells ring to signal the arrival of Krampus.
Krampus’s popularity really began to take off in the early 20th century, when the figure was featured on holiday cards that ranged from comical to spooky. At first, Krampus cards were confined mostly to Germany and Austria, but the figure’s popularity began to spread around Europe and even across the Atlantic. In the U.S., the Krampus has become the go-to figure for those who wish to forgo the typical Christmas sentimentality and embrace a more horror-centric and ironic approach to the holidays.
Today, many of the older traditions around the Krampus are still practiced, but the figure is also something of a pop-culture icon. 2015 saw the debut of Krampus, a horror movie that casts the monster as the main antagonist. Other films have followed suit, often incorporating elements from real folklore. Krampus might have also gained traction in the U.S. partly as a novel way to protest the increasing commercialization of Christmas. But that might have been in vain, since merchandise featuring Krampus is becoming ever more popular. How long until we get a Christmas carol about the guy?
[Image description: Krampus, a furry, black monster with horns and a long tongue, puts a child in a sack while another child kneels by a bowl of fruit.] Credit & copyright: c. 1900, Wikimedia Commons. This work is in the public domain in its country of origin and other countries and areas where the copyright term is the author's life plus 100 years or fewer. -
World History PP&T Curio
It’s really not as scary as it sounds. The Black Forest region of Germany is known for its picturesque landscape and traditional crafts. During the holiday season, German Christmas markets (or Christkindlmarkts) around the world are filled with hand-carved wooden toys and figurines from the region, and Black Forest ham is a beloved culinary delight throughout the year. However, there’s more to this historic, wooded area than just toys and food. The people living there have proudly retained distinct cultural practices that make the region unique.
Located in the southwestern state of Baden-Württemberg, the Black Forest is called Schwarzwald in German, though it went by other names in the past. The ancient Romans once associated the area with Abnoba Mons, a mountain range named after a Celtic deity. The earliest written record of the Black Forest also comes from the Romans, in the form of the Tabula Peutingeriana, a medieval copy of a Roman map that detailed the empire’s public road system. In it, the Black Forest is called Silva Marciana, which means “border forest,” in reference to the Marcomanni ("border people") who lived near Roman settlements in the area. The Black Forest today consists of 2,320 square miles of heavily forested land that stretches around 100 miles long and up to 25 miles wide. It contains the sources of both the Danube and Neckar rivers, and the area was historically known for its rich pastureland. Of course, the true stars of the Black Forest are the trees that define the region. The forests of Schwarzwald are mainly known for their oak, beech, and fir trees, the latter of which gives the region its name. Unsurprisingly, lumber production was historically a large part of the Black Forest’s economy, along with mining.
The Black Forest’s history of woodworking and woodcraft goes back centuries. Arguably the most famous craft to come out of the forest is the cuckoo clock, which was invented sometime in the 17th century. As their name implies, cuckoo clocks typically feature a small, carved bird that emerges from above the clock face to mark the arrival of each hour with a call or song. More elaborate clocks sometimes have a set of dancers that circle in and out of a balcony in time to the sound. Most cuckoo clocks are carved out of wood to resemble houses, cabins, beer halls, or other traditional structures, with a scene of domestic or village life around them. While many modern cuckoo clocks use an electronic movement to keep time, mechanical versions using weights and pendulums are still being made. The weights that power the movement are often made to resemble pine cones, and users need only pull down on them periodically to keep the clock ticking. There is a limitless variety of cuckoo clock designs, and there are still traditional craftsmen making them by hand. The Black Forest is also known for wood-carved figurines and sculptures, many of which served as children’s toys. Wood carving as an industry first gained traction in the 19th century, when drought and famine forced locals to seek alternative sources of income, but it is now a cherished part of the region’s culture.
Today, the Black Forest is still home to many woodworkers. The region is also a popular destination for outdoor enthusiasts, thanks to its many hiking trails and immense natural beauty. Towns in and around the Black Forest feature traditional, pastoral architecture and growing art scenes, where artists take inspiration from local traditions and landscapes. All those clocks, and they still manage to stay timeless.
[Image description: A section of the northern Black Forest with thin pine trees.] Credit & copyright: Leonhard Lenz, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
Algebra PP&T Curio
Math and logic are an inseparable pair, right? Well, they weren't always. Mathematics and logic existed separately for thousands of years before the two disciplines ever merged together, but their eventual marriage was possible thanks in part to a man named George Boole. Boole, who died on this day in 1864, is known as the father of binary logic, and by extension, a key figure in the field of modern computing.
Despite his later career as a revolutionary academic and educator, Boole never received much formal education. Instead, his early life was enriched by his father’s personal interest in math and science. Born on November 2, 1815, in Lincoln, Lincolnshire, England, Boole was largely educated by his father, a shoemaker. As a child, he also attended local schools, but most of his knowledge in mathematics was self-taught. When his father’s business began to slow down, Boole started teaching at the young age of 16. By 20, he had opened his own school, and remained a dedicated educator throughout his life. He worked as the headmaster of his school for 15 years, during which time he took it upon himself to continue his own education. Beginning in the 1840s, Boole began to publish papers in the Cambridge Mathematical Journal. In 1849, he began his tenure as a professor of mathematics at Queen’s College in Cork, Ireland.
Before Boole, logic was considered part of philosophy. He published a pamphlet in 1847 titled The Mathematical Analysis of Logic, being an Essay towards a Calculus of Deductive Reasoning in which he argued that logic was not a matter of philosophy, but shared a domain with mathematics. He expounded on this idea in An Investigation into the Laws of Thought, on Which Are Founded the Mathematical Theories of Logic and Probabilities, which he released in 1854.
With these two works, Boolean algebra was established, wherein math and algebraic symbols could be used to express a binary system of logic. Essentially, Boolean algebra is the mathematical representation of logic using boolean values: the values of true or false, often represented today as 1 and 0 in computer science. Boolean algebra also plays an important role in the theory of probabilities, information theory, and circuit design in digital computers. Boole’s integration of math and logic was a revolution millennia in the making, with much of his work based on Aristotle’s system of logic. Even Boole’s book, The Laws of Thought, was titled after existing fundamental laws of logic used by ancient philosophers.
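Boole’s system is easy to sketch in modern code. The snippet below is an illustration, not Boole’s own notation: it encodes true and false as 1 and 0, defines the basic operations arithmetically (the function names are arbitrary), and checks De Morgan’s laws by testing every combination of inputs.

```python
from itertools import product

# Boolean algebra over {0, 1}: AND as multiplication,
# OR by inclusion-exclusion, NOT as complement.
def AND(x, y):
    return x * y

def OR(x, y):
    return x + y - x * y

def NOT(x):
    return 1 - x

# Verify De Morgan's laws for every pair of Boolean values.
for x, y in product((0, 1), repeat=2):
    assert NOT(AND(x, y)) == OR(NOT(x), NOT(y))
    assert NOT(OR(x, y)) == AND(NOT(x), NOT(y))
print("De Morgan's laws hold for all 0/1 inputs")
```

Because the four input combinations are exhausted, the loop amounts to a truth-table proof—the same style of reasoning that underlies digital circuit verification.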
Thanks to the recognition and acclaim he earned from his works, Boole was given an honorary membership to the Cambridge Philosophical Society and an honorary degree from Oxford University in 1858 and 1859, respectively. Sadly, Boole’s extreme dedication to his profession ultimately led to his death. One day in November of 1864, Boole walked through a cold and torrential downpour to reach his class at Queen’s College. Once in his classroom, he conducted an entire lecture in drenched clothes. In the following days, Boole contracted pneumonia and passed away at the age of 49, survived by his wife and children.
Even if Boole had lived a long and healthy life, he wouldn’t have seen the advent of digital computing, which put his principles to practical use. While many programming languages exist today, digital computing is fundamentally based on circuits formed using the Boolean values of true and false. Boole left his mark on everything from algebra textbooks to the entire field of digital computing. All that, despite spending much of his life with little formal education. Who said a shoemaker’s son couldn’t accomplish great feats?
[Image description: Rows of white 1s and 0s against a black background.] Credit & copyright: Author’s own photo. The author releases this image into the Public Domain. -
World History PP&T Curio
It’s not exactly floating on cloud nine, but it might feel pretty close. Since they first took to the skies, airships have held the popular imagination captive. Some of the world’s first airships (a term that includes blimps and dirigibles) used hydrogen to become lighter than air. Hydrogen was eventually replaced by helium, which was much less explosive. The very first airship to use helium took its maiden voyage on this day in 1921, and things seemed to be looking up for the future of airships. To the disappointment of many enthusiasts, however, they never really took off as a popular form of transportation.
Airships were, unsurprisingly, inspired by hot air balloons, which were invented in 1783. French engineer Jean Baptiste Meusnier was the first to build on the concept of a lighter-than-air vessel with a design that included steering by way of three propellers and a fully sealed balloon filled with gas, not hot air. Unfortunately for Meusnier, his design was never built, though it went on to inspire others. In 1785, French inventor Jean-Pierre Blanchard and American Dr. John Jeffries made history by crossing the English Channel in a hydrogen-filled balloon. Their success helped launch a new industry in which improvements and innovations developed fast. One major improvement was steam power, first used in 1852 by yet another French engineer, Henri Giffard. The most famous contribution to airship technology came in 1895, from German inventor Count Ferdinand von Zeppelin. The count designed an entirely new type of airship, named after himself: the Zeppelin, which was much more rigid than its predecessors. Drawing on the rigid, all-metal airship of Hungarian inventor David Schwarz, the Zeppelin was shaped like a long cigar that was wider at the front, with fins at the rear. Its rigid frame made the Zeppelin faster than other airships of the time, capable of reaching speeds of up to 25 miles per hour. Zeppelins were also more resilient to adverse weather conditions.
Other airships soon adopted more rigid frames. While they were largely used for scenic passenger flights, Zeppelins were also used as military aircraft to bomb Britain during WWI due to their impressive cargo capacity. The U.S. military also adopted the use of airships, though it mostly used non-rigid dirigibles. The most prevalent among them were the Goodyear Pilgrims, introduced in 1925. Though these were only capable of carrying two passengers and two crew members, and were originally made for scenic passenger flights, during the war they were utilized for surveillance by the U.S. Army and Navy. In fact, the first helium airship was the U.S. Navy’s C-7 blimp, which could carry a crew of four. Goodyear also made other nonrigid airships, or blimps, and they were a common sight during large events, where they served as advertisements. Some of these even remain in service today.
With varied uses and designs, airships seemed to be on the rise during the early 20th century. One tragic event, however, changed the course of the airship industry forever: the Hindenburg disaster. The Hindenburg was the first airship to provide regularly-scheduled service between Europe and North America, carrying passengers across the Atlantic faster than any ship of the time. But in 1937, the Hindenburg crashed during its landing approach in Lakehurst, New Jersey. After a hydrogen leak caught fire from a static discharge, flames consumed the fabric covering containing the gas. In almost no time at all, the Hindenburg fell to the ground in a smoky blaze. Of the 97 passengers and crew on board, 35 lost their lives. Once the terrifying images of the conflagration spread around the world, the golden age of airships was essentially over.
With modern airplanes that can ferry hundreds of passengers across continents in hours, it might seem like airships are irrelevant today. Yet, these unusual aircraft do manage to find a place in modern times. Airships are still used to deliver aid relief to remote, undeveloped areas with no landing strips, since airships can safely drop cargo without having to land. They’re also widely used in scientific research and military surveillance, though in a reversal of past trends, there is a growing interest in airships for scenic flights. Then there are the enthusiasts who still fly dirigibles just for the fun of it. Don’t worry though; airships nowadays are filled with helium, making tragedies like the Hindenburg much less likely to occur. Who’s up for a leisurely blimp ride?
[Image description: A black-and-white image of the airship Captain Ferber in its hangar with people in uniform standing about.] Credit & copyright: Epinal Municipal Library, Limedia galleries. Etalab Open License, Public Domain.It’s not exactly floating on cloud nine, but it might feel pretty close. Since they first took to the skies, airships have held the popular imagination captive. Some of the world’s first airships (a term that includes blimps and dirigibles) used hydrogen to become lighter than air. Hydrogen was eventually replaced by helium, which was much less explosive. The very first airship to use helium took its maiden voyage on this day in 1921, and things seemed to be looking up for the future of airships. To the disappointment of many enthusiasts, however, they never really took off as a popular form of transportation.
Airships were, unsurprisingly, inspired by hot air balloons, which were invented in 1783. French engineer Jean Baptiste Meusnier was the first to build on the concept of a lighter-than-air vessel with a design that included steering by way of three propellers and a fully sealed balloon filled with gas, not hot air. Unfortunately for Meusnier, his design was never built, though it went on to inspire others. In 1785, French inventor Jean-Pierre Blanchard and American Dr. John Jeffries made history by crossing the English Channel in a hydrogen-filled airship. Their success launched a new airship industry in which improvements and innovations developed quickly. One major improvement was steam power, first used in 1852 by yet another French engineer, Henri Giffard. The most famous contribution to airship technology came in 1895, from German inventor Count Ferdinand von Zeppelin. The count designed an entirely new type of airship, named after himself: the Zeppelin, which was much more rigid than its predecessors. The first Zeppelin was built by Hungarian inventor David Schwarz and was shaped like a long cigar, wider at the front, with fins at the rear. Its rigid frame made the Zeppelin faster than other airships of the time, capable of reaching speeds of up to 25 miles per hour. Zeppelins were also more resilient in adverse weather conditions.
Other airships soon adopted more rigid frames. While they were largely used for scenic passenger flights, Zeppelins also served as military aircraft, bombing Britain during WWI thanks to their impressive cargo capacity. The U.S. military also adopted airships, though it mostly used non-rigid dirigibles. The most prevalent among them were the Goodyear Pilgrims, introduced in 1925. Though these could only carry two passengers and two crew members, and were originally made for scenic passenger flights, they were used for surveillance by the U.S. Army and Navy during wartime. In fact, the first helium airship was the U.S. Navy’s C-7 blimp, which could carry a crew of four. Goodyear also made other non-rigid airships, or blimps, which were a common sight at large events, where they served as advertisements. Some of these even remain in service today. With varied uses and designs, airships seemed to be on the rise during the early 20th century. One tragic event, however, changed the course of the airship industry forever: the Hindenburg disaster. The Hindenburg was the first airship to provide regularly scheduled service between Europe and North America, carrying passengers across the Atlantic faster than any ship of the time. But in 1937, the Hindenburg crashed during its landing approach in Lakehurst, New Jersey. After a hydrogen leak ignited from a static discharge, flames consumed the fabric covering that contained the gas. In almost no time at all, the Hindenburg fell to the ground in a smoky blaze. Of the 97 passengers and crew on board, 35 lost their lives. Once the terrifying images of the conflagration spread around the world, the golden age of airships was essentially over.
With modern airplanes that can ferry hundreds of passengers across continents in hours, it might seem like airships are irrelevant today. Yet these unusual aircraft still manage to find a place in modern times. Airships are used to deliver relief aid to remote, undeveloped areas with no landing strips, since they can safely drop cargo without having to land. They’re also widely used in scientific research and military surveillance, though in a reversal of past trends, there is growing interest in airships for scenic flights. Then there are the enthusiasts who still fly dirigibles just for the fun of it. Don’t worry, though; airships nowadays are filled with helium, making tragedies like the Hindenburg far less likely. Who’s up for a leisurely blimp ride?
[Image description: A black-and-white image of the airship Captain Ferber in its hangar with people in uniform standing about.] Credit & copyright: Epinal Municipal Library, Limedia galleries. Etalab Open License, Public Domain. -
FREELiterature PP&T CurioFree1 CQ
If you’re only going to write one book, make it count. That’s exactly what 19th-century British author Anna Sewell did with her one and only novel, Black Beauty. Published on this day in 1877, the book was a critical and commercial success. Written from the perspective of a horse, the story follows the titular character as he experiences increasing hardship under different owners. The book features vivid descriptions of inhumane treatment of horses, which was sadly common at the time of its publication. Yet the novel actually helped bring an end to at least one cruel practice, in addition to changing children’s literature forever.
Born on March 30, 1820, in Norfolk, England, Anna Sewell had a difficult early life. Her family lived in poverty and moved frequently, and the Sewell children (Anna and her brother) sometimes stayed with relatives. When she was 12 (or possibly 14), Anna broke both of her ankles after slipping and falling. Her medical treatment was inadequate, leaving her with lifelong mobility issues. Anna’s mother was a prolific author of religious children’s books, as well as books on social issues like abolition and temperance. In her adolescence, Sewell began helping her mother edit her manuscripts. However, it wasn’t until her fifties that Sewell began work on a book of her own. The story was inspired by the very animals that her injury forced her to rely upon: horses. Unable to walk without pain, and with her condition worsening over her lifetime, she was more dependent on horses than most people. Perhaps owing to her own injury and chronic pain, she developed a deep empathy for the animals. By the time Sewell published her book, she was 57 and in failing health. Just five months after Black Beauty was released, Sewell passed away from what was likely tuberculosis.
Sewell’s novel follows Black Beauty—a highbred male horse—throughout his life, narrated from his own perspective. As a foal, he lives on a farm owned by kind masters who treat him well. He lives with his mother, Duchess, and half-brother, Rob Roy. After he is trained to be ridden and to pull carts, Black Beauty is sold to another master, who also treats him well. During his time with his second master, Black Beauty makes friends with the other horses there. However, his circumstances change for the worse when his owner’s family moves out of England and he is sold yet again. Black Beauty is separated from his friends, and his new owner is not as kind to him. One day, the new owner rides him while drunk, injuring him in the process. The injury leaves a disfiguring scar that renders him unfashionable to ride, and he is sold once again, this time as a workhorse in industrialized London. In the city, Black Beauty experiences increasing hardship as he is forced to perform grueling labor. Eventually, he is purchased by a kindly cabdriver, but is sold again after three years. During that time, he encounters one of his old friends, whose health and body have been ruined by years of hard labor and neglect. Later, Black Beauty himself collapses while attempting to pull a crowded cab. He is then purchased by a farmer who restores him to health and later sells him to a pair of old ladies who treat him well. After a long and difficult life, Black Beauty is able to live in peace and quiet once more.
Sewell’s novel was not only a hit, it contributed greatly to the banning of bearing-reins, a piece of horse harness that forced the animal’s neck back to create a more upright posture. The use of bearing-reins (also called checkreins or overchecks) was common before the book was published and often caused debilitating injuries to horses. Black Beauty was heavily promoted by the Royal Society for the Prevention of Cruelty to Animals for its sympathetic portrayal of horses, and their combined efforts helped end the use of bearing-reins in England. In the literary world, Black Beauty ushered in a new type of novel, in which animals could tell their own stories. Children’s classics like Charlotte's Web might not exist if not for Black Beauty. Young readers (and horses) would do well to thank Anna Sewell!
[Image description: The cover of the 1877 first edition of Black Beauty. The cover is green with gold flowers and the black head and neck of a horse.] Credit & copyright: London: Jarrold and Sons, Wikimedia Commons. This image (or other media file) is in the public domain because its copyright has expired. This applies to the European Union and those countries with a copyright term of 70 years after the work was made available to the public.
-
FREEEngineering PP&T CurioFree1 CQ
Where there’s a will, there’s a way…even if it takes a lot of digging. Connecting the Red Sea and the Mediterranean Sea seems like an impossible feat, but it actually happened several times throughout history. From the ancient Egyptians to the Byzantines, various rulers attempted and failed to maintain a maritime passage between the two seas. The latest—and possibly the greatest—iteration yet is the Suez Canal. Located on the Isthmus of Suez, the canal opened on this day in 1869, and it continues to be crucial to global commerce as it connects Asia and Europe without the need to navigate around the southern tip of Africa.
Historians believe that the notion of connecting the Red and Mediterranean seas was first conceived by Pharaoh Senausert III of the Twelfth Dynasty in the 19th century B.C.E. The pharaoh envisioned a canal that would lead ships to the Nile River and through the Bitter Lakes, creating a lucrative trade route to Asia. A canal was created, but it became impassable by 610 B.C.E. due to sand deposition. Later attempts to connect the seas were limited in scope, capacity, and permanence. Various canal systems connecting the seas through the Nile and the Bitter Lakes came and went, and in at least one instance, the destruction of the passage was deliberate. Abu Jafar El-Mansur of the Abbasid Caliphate ordered the canal filled with sand in 760 C.E. to quell a rebellion in Mecca and Medina, and no passage between the seas existed again for over a thousand years. It wasn’t until the 19th century that anyone made earnest efforts to reconnect the seas. Instead of a system of small canals that used the Nile for the majority of their length, this new passage was designed to run straight through the Isthmus of Suez, making it the longest sea-level canal in the world at the time.
The Suez Canal was commissioned in 1854 by Mohamed Sa'id Pasha, the Ottoman governor of Egypt. That year, he tasked French diplomat Ferdinand de Lesseps with constructing the canal, and in 1856, the Suez Canal Company was given the right to manage it for 99 years from the date of completion. Construction was initially expected to take around six years but was delayed by various setbacks. At first, the work was performed by forced laborers equipped only with hand tools and baskets. Many of the laborers died in 1865 when a cholera epidemic swept through the area, and the project eventually switched over to dredgers and steam shovels, which greatly accelerated the pace of construction. Finally, the Suez Canal opened on November 17, 1869, to great fanfare, with the inaugural voyage attended by the wife of Napoleon III, Empress Eugénie. The canal was originally only 25 feet deep, 72 feet wide at the bottom, and up to 300 feet wide at the surface, but it was expanded in 1876 to accommodate larger ships.
During its first full year of operation, the canal saw an average of just two ships pass through it each day. Today, an average of 58 ships a day, carrying 437,000 tons of cargo, sail its waters. The canal remains significant to global commerce; when a cargo ship got stuck and caused a blockage in 2021, it held up 369 ships at a cost of $9.6 billion in trade a day. Lesseps, however, didn’t fare as well as his creation. Following the success of the Suez, he was hired to construct the Panama Canal. Unfortunately, Lesseps wasn’t an engineer; his previous feat was largely one of organizing financing and creating political will. His attempt to dig another sea-level canal through the isthmus nation proved disastrous. Between disease and the much more difficult terrain, Lesseps failed to make meaningful progress using the same techniques he had employed before. The Panama Canal was eventually completed by the U.S., which opted for a system of locks that allowed ships to change elevations, eliminating the need to dig straight through at sea level. Lesseps was one man who really should have rested on his laurels.
[Image description: A photo of a navy ship on the Suez Canal, from above.] Credit & copyright: W. M. Welch/US Navy, Wikimedia Commons. This file is a work of a sailor or employee of the U.S. Navy, taken or made as part of that person's official duties. As a work of the U.S. federal government, it is in the public domain in the United States.
-
FREEEngineering PP&T CurioFree1 CQ
What’s a little rain while you’re driving? Terrifying. At least, it was at the beginning of the 20th century. American inventor Mary Anderson filed the first-ever patent for a windshield wiper on this day in 1903. Before then, people just had to make do with wet or muddy windshields. However, Anderson never got to reap the rewards for her world-changing invention.
Born in Alabama in 1866, Anderson wasn’t a career inventor. Little is known about her early life, but as an adult, she was a winemaker, rancher, and real estate developer. By all available accounts, her invention of the first windshield wiper was her one and only foray into the world of engineering or design. But her varied job titles imply that she had a keen eye for spotting opportunities, and the inspiration for her invention was no exception. The story goes that Anderson was visiting New York City during the winter and boarded a streetcar on one particularly wet and blustery day. Because of the inclement weather, the windshield of the streetcar kept getting splattered with water and debris, forcing the driver to open a window to manually wipe the windshield clean. Every time he did so, cold wind would blast through the opening, and this didn’t sit well with Anderson, who was used to the balmy Southern weather of her home state. Streetcar drivers weren’t the only ones who had to contend with this problem, of course. As automobiles became more common, their drivers resorted to similar measures or simply drove with their heads sticking out car windows. Inspired by the streetcar driver’s struggle, and perhaps frustrated by the cold ride, Anderson set out to come up with a better solution. In 1903, the U.S. Patent and Trademark Office awarded Anderson U.S. Patent No. 743,801 for her “Window-Cleaning Device.”
Anderson’s invention, though groundbreaking for its time, didn’t much resemble the modern iteration. Her version was still operated by hand (albeit from the inside) and consisted of a single rubber blade to clear the windshield. The device also included a counterweight to keep the blade firmly in contact with the glass, and though it was relatively primitive, it was still quite effective. Unfortunately for Anderson, automakers were hesitant to embrace her invention early on. Despite several attempts, Anderson was never able to attract investors or have her wipers manufactured for sale due to lack of interest. She may simply have been too far ahead of her time. Automakers didn’t start making windshield wipers standard equipment until 1916. By then, Anderson’s patent had expired, keeping her from making any profit from her invention through licensing. Then again, automakers may have deliberately avoided adopting her windshield wipers so as not to pay her any fees, though the actual reason is unclear.
Though her invention may not have earned her any money, Anderson has since been recognized for her contribution. In 2011, over 60 years after her death, she was inducted into the National Inventors Hall of Fame. These days, many improvements have been made to her original windshield wiper. In 1917, Charlotte Bridgewood invented the Electric Storm Windshield Cleaner (U.S. Patent No. 1,274,983), the first to be powered by electricity. A few years later, in 1922, brothers William M. and Fred Folberth invented the simply named Windshield Cleaner (U.S. Patent No. 1,420,538), which was powered by redirected engine exhaust. However, the version that most windshield wipers are based on today was invented by Robert Kearns in the 1960s. Called the Windshield Wiper System With Intermittent Operation (U.S. Patent No. 3,351,836), it was motorized and capable of variable speeds. Who knew there were so many ways to clean a windshield?
[Image description: Raindrops on a windshield which has been partially wiped clean.] Credit & copyright: Valeriia Miller, Pexels
-
FREEArt Appreciation PP&T CurioFree1 CQ
There are movements that shape artists, and there are artists that shape movements. Henri Matisse was decidedly the latter. The multidisciplinary French artist passed away on this day in 1954, and during his illustrious career, he became one of the most prolific and influential artists of all time, engaging in friendships and rivalries with other masters of modern art, most notably Pablo Picasso.
Henri Émile Benoît Matisse was born on December 31, 1869, in Le Cateau-Cambrésis, Nord, France. Unlike many of his artistic contemporaries, Matisse wasn’t trained in art, nor did he show any significant interest in it until he was already a young man. Before picking up his first paintbrush, Matisse moved to Paris in 1887 to study law and went on to find work as a court administrator in northern France. It wasn’t until 1889, when he became ill with appendicitis, that he began painting, after his mother gifted him some art supplies to stave off boredom during his recovery. The young Matisse quickly became completely enamored with painting, later describing it as “a kind of paradise.” Much to the chagrin of his father, Matisse abandoned his legal ambitions and moved back to Paris to study art, training under the likes of William-Adolphe Bouguereau and Gustave Moreau. However, the work he produced in his early years, mostly still lifes in earth-toned palettes, was quite unlike the work that would eventually make him famous. His true artistic awakening didn’t occur until 1896, when he met Australian painter John Russell. A friend of Vincent van Gogh, Russell showed the struggling artist a collection of Van Gogh’s paintings, introducing Matisse to Impressionism.
In the following years, Matisse began collecting and studying the work of his contemporaries, particularly the Neo-Impressionists. Inspired by their bright colors and bold brushstrokes, he and other like-minded artists coalesced into a relatively short-lived but influential movement called Fauvism. The works of the “Fauves” (“wild beasts” in French) like Matisse were defined by unconventional, intense color palettes laid down with striking brushstrokes. Despite being a founding member of a movement, Matisse was never one to settle on just one style or medium. Throughout his life, he dabbled in pointillism, printmaking, sculpting, and paper cutting. At times, he even returned to his more traditional style, which he pursued to considerable praise in the post-WWI period. Among his contemporaries, there was only one who seemed to match him: Pablo Picasso. Matisse’s rivalry with this fellow master of modern art is well documented, and the two seemed to study each other’s works carefully. Matisse and Picasso often painted the same scenes and subjects, including the same models. At times, they even gave their pieces the same titles, not for lack of creativity, but to serve as a riposte on canvas. Matisse once likened their rivalry to a boxing match, and though the two didn’t initially care for each other’s work, they eventually developed a mutual admiration.
Today, the name Matisse is practically synonymous with modern art, and his influence goes beyond the canvas. In his later years, Matisse’s failing health forced him to rely on assistants for much of his work. During the 1940s, Matisse worked with paper, creating colorful collages called gouaches découpés that he described as “painting with scissors.” His final masterpiece, however, was his design for a stained-glass window for the Union Church of Pocantico Hills in New York City. No matter what medium he touched, Matisse always left an impression, leaving behind a body of work that is wildly eclectic yet always recognizably his. Surely his father had to admit that Matisse did the right thing by leaving law school.
There are movements that shape artists, and there are artists that shape movements. Henri Matisse was decidedly the latter. The multidisciplinary French artist passed away on this day in 1954, and during his illustrious career, he became one of the most prolific and influential artists of all time, engaging in friendships and rivalries with other masters of modern art, most notably Pablo Picasso.
Henri Émile Benoît Matisse was born on December 31, 1869, in Le Cateau-Cambrésis, Nord, France. Unlike many of his artistic contemporaries, Matisse wasn’t trained in the discipline, nor did he show any significant interest in it until he was already a young man. Before picking up his first paintbrush, Matisse moved to Paris in 1887 to study law and went on to find work as a court administrator in northern France. It wasn’t until 1889, when he became ill with appendicitis, that he began painting, after his mother gifted him some art supplies to stave off boredom during his recovery. The young Matisse quickly became enamored with painting, later describing it as “a kind of paradise.” Much to the chagrin of his father, Matisse abandoned his legal ambitions and moved back to Paris to study art under the likes of William-Adolphe Bouguereau and Gustave Moreau. However, the work he produced in his early years, mostly still lifes in earth-toned palettes, was quite unlike the work that would eventually make him famous. His true artistic awakening didn’t occur until 1896, when he met Australian painter John Russell. A friend of Vincent van Gogh, Russell showed the struggling artist a collection of Van Gogh’s paintings, introducing Matisse to Impressionism.
In the following years, Matisse began collecting and studying the work of his contemporaries, particularly the Neo-Impressionists. Inspired by their bright colors and bold brushstrokes, his own vision of the world coalesced, along with those of other like-minded artists, into a relatively short-lived but influential movement called Fauvism. The works of the “Fauves” (French for “wild beasts”), Matisse among them, were defined by unconventional, intense color palettes laid down with striking brushstrokes. Despite being a founding member of the movement, Matisse was never one to settle for just one style or medium. Throughout his life, he dabbled in pointillism, printmaking, sculpting, and paper cutting. At times, he even returned to, and was praised for, more traditional works, which he pursued in the post-WWI period. Among his contemporaries, there was only one who seemed to match him: Pablo Picasso. Matisse’s rivalry with this fellow master of modern art is well documented, and the two seemed to study each other’s works carefully. Matisse and Picasso often painted the same scenes and subjects, including the same models. At times, they even gave their pieces identical titles, not for lack of creativity, but to serve as a riposte on canvas. Matisse once likened their rivalry to a boxing match, and though the two didn’t initially care for each other’s work, they eventually developed a mutual admiration.
Today, the name Matisse is practically synonymous with modern art, and his influence goes beyond the canvas. In his later years, Matisse’s failing health forced him to rely on assistants for much of his work. During the 1940s, he worked with paper, creating colorful cut-paper collages called gouaches découpées that he described as “painting with scissors.” His final masterpiece, however, was his design for a stained-glass window for the Union Church of Pocantico Hills in Westchester County, New York. No matter what medium he touched, Matisse always left an impression, leaving behind a body of work that is wildly eclectic yet always recognizably his. Surely his father had to admit that Matisse did the right thing by leaving law school.
[Image description: A fanned-out group of paint brushes smattered with paint.] Credit & copyright: Steve Johnson, Pexels -
FREEUS History PP&T CurioFree1 CQ
New York is full of engineering wonders, from skyscrapers to suspension bridges, but one of the most impressive isn’t even visible above ground. The New York City subway system transports over a billion riders through the urban jungle every year. The city’s first subway system opened on this day in 1904, and since then it has continued to expand and serve an exponentially growing population.
By the late 1800s, New York City was already the most populated city in the United States. Known as a center of commerce and culture, the city was growing quickly…and quickly running out of room. Roads were congested with horse-drawn carriages, and the island borough of Manhattan was serviced by elevated railways that took up precious real estate. City planners needed a solution that would address residents’ transportation needs without taking up what little room was left. A subway system seemed like a logical answer. After all, the world’s first underground transit system was already a proven success, having operated in London since 1863. In nearby Boston, America’s first subway was finished in 1897, though it was more limited in scope and used streetcars. There had even been a short-lived subway line in New York City between 1870 and 1873. During those few years, a pneumatically powered, 18-passenger car traversed under Broadway, pushed along by a 100-horsepower fan. There had been talk of expanding the line, but the technology was made obsolete by improvements in electric traction motors, and the line was soon abandoned. Indeed, the future of transit in New York City was electric, and after much lobbying from the city’s Board of Rapid Transit and financing from prominent financier August Belmont, Jr., construction on the permanent subway system began in 1900.
As construction crews dug underground, they built temporary wooden bridges over the subway tunnels to allow traffic to continue unimpeded. Not everything went so smoothly, though. Because the tunnel ran close to the surface in many places, construction often involved moving existing infrastructure like gas and water lines. Some things weren’t so easy to move out of the way, such as the Columbus Monument at Columbus Circle. One section of the tunnels had to pass through the east side of the 700-ton monument’s foundation, and simply digging through could have led to its collapse. To avoid damaging it, workers had to build a new support under the monument, slowing progress on the subway. Another major obstacle was the New York Times building, which had a pressroom below where the tunnel was to be built. So, the subway was simply built through the building, with steel channels added to reinforce its structure. Despite these and other engineering challenges, construction was completed just four years after it started, and the inaugural run of the city’s new transit system took place on October 27, 1904, at 2:35 PM, with Mayor George McClellan at the controls. The subway system was operated by the Interborough Rapid Transit Company (IRT) and consisted of just 9.1 miles of track passing through 28 stations. That may seem limited compared to today, but it was an astounding leap for commuters at the time, with the IRT claiming to take passengers from “City Hall to Harlem in 15 minutes.” At 7 PM, just hours after the inaugural run, the subway was opened to the public for just a nickel per ride. On opening day, around 100,000 passengers tried out the newly minted subway, and that number has only grown since.
Today, New York City’s subway system has 472 stations and 665 miles of track. It’s operated by the Metropolitan Transportation Authority (MTA) and serves over three million riders a day. The city’s subway system wasn’t the first, nor is it currently the largest, but it remains one of the few in the world to operate 24 hours a day, 7 days a week—a feature that many New Yorkers have come to rely on. The extensive and convenient transit system allowed the city to grow throughout the 20th century, and the Big Apple might have ended up as Small Potatoes without it.
[Image description: A subway train near a sign reading “W 8 Street.”] Credit & copyright: Tim Gouw, Pexels -
FREELiterature PP&T CurioFree1 CQ
Halloween approaches, and with it a host of familiar, spooky tales, many of which have their basis in classic novels. Oscar Wilde’s The Picture of Dorian Gray isn’t quite as famous as Dracula or Frankenstein, but it’s just as spooky, and it’s had its fair share of pop culture appearances and film adaptations too. It’s not exactly a story about a monster…but about the monstrous faults that lurk in all of us.
The Picture of Dorian Gray was first published in 1890 in Lippincott’s Monthly Magazine as a novella, which was common for new stories at the time. It follows the titular character through his descent into moral decay. Dorian Gray is a handsome, rich young man who enjoys a relatively carefree life. Gray’s friend, Basil Hallward, paints his portrait and discusses Gray’s extraordinary beauty with Lord Henry Wotton, a hedonistic socialite. When Gray arrives to see the finished piece, Wotton describes his personal philosophy: that one should live to indulge one’s impulses and appetites. He goes on to tell Gray, “…you have the most marvelous youth, and youth is the one thing worth having.” As Hallward places the finishing touches on the painting, Gray declares, “But this picture will remain always young. It will never be older than this particular day of June…If it were only the other way! If it were I who was to be always young, and the picture that was to grow old! For that—for that—I would give everything! Yes, there is nothing in the whole world I would not give! I would give my soul for that!” From that point on, Gray begins to commit cruel and even violent transgressions, the first of which leads to the death of his lover, Sibyl Vane. Yet he remains ageless and beautiful while his portrait warps into an increasingly grotesque reflection of his inner self. Ultimately, even his attempt to redeem himself through a kind act is revealed to be self-serving, as the portrait changes to reflect his cunning. Eventually, Gray murders the portrait’s creator after Hallward discovers how hideous it has become. When a crazed Gray stabs the portrait in frustration, a servant hears him scream and comes to his aid, only to find the body of an ugly old man with a knife in his chest. The portrait, meanwhile, has reverted to its original, beautiful form.
Wilde’s novel didn’t have quite the reception he’d hoped for. When it was unleashed upon the Victorian readership, it set off a storm of controversy with Wilde at the center. This was despite the fact that Lippincott’s editor, J. M. Stoddart, had heavily edited the novella to censor portions he believed were too obscene for Victorian sensibilities. The cuts were made without Wilde’s input or consent, and largely targeted the homosexual undertones present in the interactions between some of the male characters. In particular, Hallward was originally characterized as having much more overt homosexual inclinations toward Gray. Stoddart also removed some of the more salacious details surrounding the novel’s heterosexual relationships. When the book was engulfed in scandal, Wilde made further edits of his own accord, but to no avail. Years later, when Lord Alfred Douglas’s father accused Wilde of having engaged in a homosexual relationship with Douglas, the author sued him for libel. The suit fell apart in court after the homosexual themes in The Picture of Dorian Gray were used as evidence against Wilde, and its failure left him open to criminal prosecution for homosexuality under British law. After two trials, Wilde was sentenced to two years of hard labor in 1895. After his release, he was plagued by poor health while commercial success eluded him. Wilde passed away in Paris, France, in 1900 of acute meningitis.
Today, The Picture of Dorian Gray is seen in a much different light. The work is considered one of the best examples of Wilde’s wit and eye for characterization. It’s also the most representative of Wilde’s Aestheticism, a worldview espoused by several characters in the novella. Nowadays, a version true to the author’s original intent is available as The Picture of Dorian Gray: An Annotated, Uncensored Edition (2011), which restores material cut from the text by Stoddart and Wilde. It may not be so controversial for modern sensibilities, but just in case, make sure you’re wearing some pearls so you have something to clutch if you buy a copy.
[Image description: A 1908 illustration from Oscar Wilde's The Picture of Dorian Gray] Credit & copyright:
Eugène Dété (1848–1922) after Paul Thiriat (1868–1943), 1908. Mississippi State University, College of Architecture Art and Design, Wikimedia Commons. This work is in the public domain in its source country and the United States because it was published (or registered with the U.S. Copyright Office) before January 1, 1929.
FREEPolitical Science PP&T CurioFree1 CQ
For better or worse, modern American politics are a bombastic affair involving celebrity endorsements and plenty of talking heads. Former President Jimmy Carter, who recently became the first U.S. President to celebrate his 100th birthday, has lived a different sort of life than many modern politicians. His first home lacked electricity and indoor plumbing, and his career involved more quiet service than political bravado.
Born on October 1, 1924, in Plains, Georgia, James Earl “Jimmy” Carter Jr. was the first U.S. President to be born in a hospital, as home births were more common at the time. His early childhood was fairly humble. His father, Earl, was a peanut farmer and businessman who enlisted young Jimmy’s help in packing goods to be sold in town, while his mother was a trained nurse who provided healthcare services to impoverished Black families. As a student, Carter excelled at school, encouraged by his parents to be hardworking and enterprising. Aside from helping his father, he also sought work with the Sumter County Library Board, where he helped set up the bookmobile, a traveling library that serviced the rural areas of the county. After graduating high school in 1941, Carter attended the Georgia Institute of Technology for a year before entering the U.S. Naval Academy. He met his future wife, Rosalynn Smith, during his last year at the Academy, and the two were married in 1946. After graduating from the Academy that same year, Carter joined the U.S. Navy’s submarine service, a dangerous job at the time. He even worked with Captain Hyman Rickover, the “father of the nuclear Navy,” and studied nuclear engineering as part of the Navy’s efforts to build its first nuclear submarines. Carter would have served aboard the U.S.S. Seawolf, one of the first two such vessels, but the death of his father in 1953 prompted him to resign so that he could return to Georgia and take over the struggling family farm.
On returning to his home state, Carter and his family moved into a public housing project in Plains due to a post-war housing shortage. The experience inspired him to work with Habitat for Humanity decades later, and it also made him the first president to have lived in public housing. While turning around the fortunes of the family’s peanut farm, Carter became involved in politics, earning a seat on the Sumter County Board of Education in 1955. In 1962, he won a seat in the Georgia State Senate, where he made a name for himself by targeting wasteful spending and laws meant to disenfranchise Black voters. Although he failed to win the Democratic primary in 1966 for a seat in the U.S. Congress (largely due to his support of the civil rights movement), he refocused his efforts on the 1970 gubernatorial election. After a successful campaign, he surprised many in Georgia by advocating for integration and appointing more Black staff members than previous administrations had. Though his idealism attracted criticism, Carter was largely popular in the state for his work reducing government bureaucracy and increasing funding for schools.
Jimmy Carter’s political ambitions eventually led him to the White House when he took office in 1977. His presidency took place during a chaotic time, in which the Iranian hostage crisis, a war in Afghanistan, and economic worries were just some of the problems he was tasked with helping to solve. After losing the 1980 presidential race to Ronald Reagan, Carter and his wife moved back into their modest, ranch-style home in Georgia, where they lived for more than 60 years, making him one of just a few presidents to return to their pre-presidential residences. Today, Carter is almost as well known for his work after his presidency as for his time in office, since he dedicated much of his life to charity work, especially building homes with Habitat for Humanity. He also wrote over 30 books, including three that he recorded as audiobooks, which won him three Grammy Awards in the Spoken Word Album category. Not too shabby for a humble peanut farmer.
[Image description: Jimmy Carter’s official Presidential portrait; he wears a dark blue suit with a light blue shirt and striped tie.] Credit & copyright: Department of Defense. Department of the Navy. Naval Photographic Center. Wikimedia Commons. This work is in the public domain in the United States because it is a work prepared by an officer or employee of the United States Government as part of that person’s official duties under the terms of Title 17, Chapter 1, Section 105 of the US Code.
FREEPolitical Science PP&T CurioFree1 CQ
With nationwide relief efforts underway following the devastation of Hurricane Helene, you’ve likely been hearing a lot about one federal agency: FEMA (Federal Emergency Management Agency). With a workforce of more than 20,000 people, FEMA is uniquely equipped to respond to all sorts of emergencies. Before its founding, though, Americans dealing with disasters were largely left on their own.
In December of 1802, Portsmouth, New Hampshire, was practically destroyed by a fire. At the time, Portsmouth was among the U.S.’s busiest ports, and its destruction spelled disaster for the economy. The federal government didn’t directly help rebuild the city, but the U.S. Congress suspended bond payments for local merchants to allow them to continue operations in Portsmouth. Similar measures were taken after other major fires, such as one in New York City in 1835 and the Great Chicago Fire of 1871. Still, there wasn’t much interest in creating a proactive federal response system for disasters until the early 20th century, when two tragic events led to calls for action. First, there was the Galveston Hurricane in 1900, which killed thousands of people. Then, the San Francisco Earthquake in 1906 leveled much of the city. In both cases, very little federal action was taken to address displaced citizens or to rebuild critical infrastructure, with the onus falling entirely on local governments. Those local governments, in turn, began asking the federal government to create some kind of task force to help when future disasters arose. Eventually, in 1950, Congress created the Federal Disaster Assistance Program, giving the federal government powers to act directly in the case of disasters. A series of devastating hurricanes and earthquakes in the 1960s provided further impetus to expand these powers, resulting in the Disaster Relief Act of 1970, which allowed affected individuals to receive federal loans and tax assistance. Finally, in 1979, President Jimmy Carter issued an executive order combining a number of agencies responsible for disaster response to create FEMA.
Since FEMA was created, it has helped in the face of everything from volcanoes to hurricanes, but it hasn’t always been beyond criticism. For example, the federal responses to the Loma Prieta Earthquake in 1989 and Hurricane Andrew in 1992 were considered inadequate. Major reforms in the 1990s and an increasing emphasis on being proactive, not simply reactive, allowed the agency to respond to disasters more effectively. Some of the proactive measures included purchasing property in areas at higher risk of natural disasters and encouraging more stringent building codes. While FEMA was improving its response to natural disasters, there were also unnatural disasters to contend with. In 1995, FEMA responded to the Oklahoma City Bombing. Six years later, the terrorist attacks of September 11, 2001, led to the most significant change to the agency since its creation. When the Department of Homeland Security (DHS) was created to handle federal responses to terrorist attacks, FEMA was absorbed into it, expanding its scope to terrorism preparedness.
Today, FEMA continues in its original mission of disaster relief, and it’s been getting busier by the year. With climate change creating storms of greater frequency and power, FEMA has been kept on its toes recently. When such storms approach, it’s up to governors of affected states to request assistance through the FEMA Regional Office. Since they can do this before storms actually strike, FEMA can begin providing financial aid and moving people and supplies into position before any actual damage has occurred. Aside from providing practical necessities like food, water, and shelter to affected people, part of FEMA’s purpose is to ensure that allocated funds are handled appropriately. After all, when things go sideways, you want to make sure everything else is on the up and up.
[Image description: An American flag with a wooden flagpole flying against a blue sky.] Credit & copyright: Crefollet, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEStyle PP&T CurioFree1 CQ
Ooo la la! This timeless headpiece is as French as escargot, yet the beret has managed to maintain incredible worldwide appeal throughout the centuries. This simple, unisex hat has shown up on the heads of everyone from European royals to uniformed soldiers and is still going strong despite a history that stretches back at least as far as the 14th century.
Although modern berets are heavily associated with French fashion and largely gained popularity in the 20th century, flat-cap-style hats have been worn since the time of ancient Greece. The true ancestor of the beret comes from Europe in the 1300s, when felted or fulled wool hats were a durable, warm choice for many people working outdoors. The simple design of these hats gave them a timeless quality that endured through the centuries, and they were eventually adopted by the people of the Basque region, which straddles the border between France and Spain. The Basque people were renowned fishermen and whalers who sailed long distances in search of their quarry. Basque berets were perfect for these hardy sailors, who needed water-resistant hats to keep them warm while sailing the cold, northern seas. Their version of the beret became so emblematic of their culture that receiving one at the age of 10 was a rite of passage for boys in the region of Béarn, where the hat is said to have originated. Other European cultures recognized the sailing prowess of Basque fishermen as well, and many came to Basque country to learn from the best. That, along with the far-reaching travels of the Basque sailors, spread the Basque beret around Europe. It wasn’t until 1835, though, that the hat began to be called “beret,” short for its French name, “béret basque.” Throughout the 1800s, the hat gained increasing popularity outside of maritime professions, though for less peaceful purposes.
The beret came to the forefront of fashion and history when Spanish-Basque military officer Tomás de Zumalacárregui wore a large, red iteration of the hat during the First Carlist War. From then on, the beret was inextricably linked to military aesthetics and was adopted by various European armies thereafter. Another famous example was the Chasseurs Alpins, an elite group of French soldiers trained to fight in the mountains, who wore blue berets to distinguish themselves and keep warm. Then came the brutal conflicts of WWI and WWII, when the widespread adoption of radios and telephones gave the beret a novel advantage: its compact design allowed it to fit in the cramped spaces inside tanks and other vehicles, while also allowing for the wearing of headphones. Soon, berets became associated with elite forces like the Green Berets of the U.S. Army.
Around the same time, though, the beret once again found itself being worn for fashion. They were embraced by artists and writers like Ernest Hemingway, who considered their roots in European peasantry a means of rebelling against mainstream fashion. As Paris distinguished itself as the world’s fashion center, the hats became most heavily associated with France. Today, the beret remains largely a fashion statement, but it’s also been worn by political revolutionaries such as Che Guevara and the Black Panthers as a means to identify themselves. No matter who you are, though, when you put on a beret, you’re not just wearing a fashionable headpiece. You’re wearing a piece of history.
[Image description: A maroon-colored beret hat with a puffed decoration on top, sitting on a blank mannequin head.] Credit & copyright:
Metropolitan Museum of Art, Wikimedia Commons. Brooklyn Museum Costume Collection at The Metropolitan Museum of Art, Gift of the Brooklyn Museum, 2009; Gift of E. F. Schermerhorn, 1953. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEWorld History PP&T CurioFree1 CQ
Did you think you were in France? Au contraire, mon ami! Québec City may look like Paris, but it’s one of the oldest cities in Canada. Its distinct Old World architecture also makes it one of the most unique cities in North America. Québec City’s culture emanates throughout the rest of the province of Québec, where French is still spoken as the primary language and locals are quite proud of their French heritage.
Québec City was founded in 1608 by French explorer Samuel de Champlain, but he wasn’t exactly the first to reach it. That distinction goes to another French explorer, Jacques Cartier, who is credited with “discovering” Canada in 1534. Cartier was the first European to encounter many of the indigenous communities that lived along the St. Lawrence River, and he named the new land “Kanata,” a Huron-Iroquois word for “village” or “settlement.” Cartier traveled and mapped the area around the river, eventually reaching the site where Québec City stands today. However, the French were unable to send further expeditions, let alone establish colonies, due to religious upheavals and wars back in Europe. Once France was in a position to resume its exploration of Canada, or “New France,” it sent Champlain, who established Québec City as a trading post. The city was also of strategic importance, as its location on a narrow portion of the St. Lawrence River allowed the French to control travel farther into the continent for the fur trade. Unfortunately for the French, the British also had their eyes on the North American fur trade, and the two countries came into military conflict over control of New France. The British managed to capture and hold Québec City between 1629 and 1632, and in 1759, they once again defeated the French. This time, though, the French were forced to give up most of their territory in North America, and Québec City was never returned to France.
Despite British rule, though, Québec City managed to hold on to its French culture. Much of this is due to the 1774 passage of the Québec Act, which allowed Francophone residents to maintain their language and cultural institutions. Then, the Constitutional Act of 1791 split Canada into Upper Canada and Lower Canada (with Québec City as the provincial capital), which would become the modern-day provinces of Ontario and Québec, respectively. The Constitutional Act helped draw a clear cultural boundary, contributing to Québec and its capital remaining ardently French in culture. French was declared the sole official language of Québec after the province passed the Official Language Act in 1974, followed by the Charter of the French Language, which made French mandatory in schools, businesses, government administration, and signage. Much of France’s Old World influence can also be seen in Québec City’s historic buildings, some of which date back to French rule in the 1600s.
In a strange way, the survival of the city’s architectural heritage is owed, at least in part, to its economic struggles in the late 19th century. The historic district of Old Québec contains some of the oldest buildings in the city, but it didn’t remain largely untouched just for cultural reasons. Rather, the economic hardships of the late 19th century made it too expensive to redevelop. That’s not to say that there isn’t a longstanding spirit of historic preservation in the city. In the 1870s, demolitions began on the then-obsolete fortifications that surrounded the city, but not everyone was eager to erase the city’s architectural heritage. Eventually, then-Governor General of Canada Lord Dufferin ordered that parts of the fortifications be saved for posterity, including St. Louis Gate and St. John Gate. In addition, Dufferin ordered the construction of new gates in the Romantic style, wide enough to accommodate the increasingly large volume of traffic. In 1985, Old Québec was declared a UNESCO World Heritage site, thanks in large part to people like Dufferin. Today, French is still the main language spoken in Québec City, which boasts some of the world’s most photographed buildings and a thriving French culinary movement. Sometimes it pays to look to the past when building a city’s future.
[Image description: Buildings and a courtyard lit up with multicolored lights at night in Quebec City.] Credit & copyright: Wilfredo Rafael Rodriguez Hernandez, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEPolitical Science PP&T CurioFree1 CQ
For as much as we hear about voter fraud today (especially during election years), it’s pretty rare in the modern United States; when it happens, it’s usually on a small scale. That wasn’t always the case, though. There was a time when lax regulations made it much easier for large groups to “fix” elections, especially local ones. Yet, it wasn’t Congress or even a state-level lawmaker who took the first step toward stopping such fraud. It was actually a suffragist from small-town Indiana named Stella Courtright Stimson.
Even before she could legally vote, Stimson was heavily involved in local politics in her hometown of Terre Haute. In 1909, she was elected to serve on the town’s school board and was part of the Indiana Federation of Clubs, which promoted women’s suffrage. But aside from the greater national issue of women’s right to vote, Stimson was also concerned with the economic and social development of Terre Haute, where laws were laxly enforced. At the time, Terre Haute had a reputation for being a “wide open city,” meaning that it was unregulated when it came to laws about drinking, gambling, and prostitution. Enabling and profiting from the city’s illicit industries were politicians like city-engineer-turned-mayor Donn Roberts. Roberts first made a name for himself in Terre Haute’s political circles by stuffing ballots and casting illegal votes with packs of men hired for the purpose. On election days, he would go from polling station to polling station to have his men cast fraudulent ballots under pseudonyms. In 1913, Roberts ran for the office of mayor and ensured his own victory using the same tactics. As mayor, Roberts turned a blind eye to the city’s illegal businesses in exchange for bribes.
Stimson was well aware of Roberts’s operation and tried to inform the governor to no avail. Nevertheless, she and other local women gathered at polling stations to hinder Roberts by calling out those who were casting multiple ballots in various disguises or under false identities. They eventually found an ally in Joseph Roach Jr., a special prosecutor appointed to serve in a trial against Roberts in 1914. Although the women had gathered plenty of evidence, Roberts was ultimately acquitted by the jury. Undeterred by the defeat, Roach turned to federal laws and found one based on the Enforcement Act of 1870, which forbade two or more people from conspiring to “injure, oppress, threaten, or intimidate any citizen in the free exercise or enjoyment of any right or privilege secured to him by the Constitution or laws of the United States.” He then took the issue to U.S. District Attorney Frank C. Dailey, who convinced a federal judge to accept the case.
The trouble was, Dailey couldn’t use any of the evidence that had been used by Roach a second time, so Stimson and the other poll-watchers once again got to work. They found that thousands of fraudulent registrations had been made by Roberts using names of people from other parts of the state which he had tied to random addresses in Terre Haute. In December of 1914, using evidence gathered by Stimson’s volunteers, U.S. Marshals arrested 116 individuals, including Roberts. In United States v. Aczel, all of the defendants were charged with four counts of conspiracy, and 88 of them pled guilty. On March 8, 1915, Roberts and the remaining defendants were found guilty on all charges.
Roberts was sentenced to six years in prison and a fine of $2,000, though he was released early on parole. Although he retained control of the city by proxy via a loyal ally, his greater political ambition of becoming governor was never realized. Meanwhile, his successful prosecution set an important precedent at the federal level in enforcing election laws, helping to pave the way for the Voting Rights Act of 1965. Just a few years after helping to take down Roberts, Stimson and her fellow suffragists won the right to vote with the ratification of the 19th Amendment. She and Roach proved that participation in politics and elections wasn’t just a right but a matter of dedication and civic duty.
[Image description: The Indiana state flag, which is dark blue with stars surrounding a torch and the word “INDIANA.”] Credit & copyright: HoosierMan1816, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.For as much as we hear about voter fraud today (especially during election years) it’s pretty rare in the modern United States; when it happens, it’s usually on a small scale. That wasn’t always the case, though. There was a time when lax regulations made it much easier for large groups to “fix” elections, especially local ones. Yet, it wasn’t Congress or even a state-level lawmaker who took the first step toward stopping such fraud. It was actually a suffragist from small-town Indiana named Stella Courtright Stimson.
Even before she could legally vote, Stimson was heavily involved in local politics in her home town of Terre Haute. In 1909, she was elected to serve on the school board in her town and was part of the Indiana Federation of Clubs, which promoted women’s suffrage. But aside from the greater national issue of the women’s right to vote, Stimson was also concerned with the economic and social development of Terre Haute, where laws were laxly enforced. At the time, Terre Haute had a reputation for being a “wide open city,” meaning that it was unregulated when it came to laws about drinking, gambling, and prostitution. Enabling and profiting from the city’s illicit industries were politicians like city-engineer-turned-mayor Donn Roberts. Roberts first made a name for himself in Terre Haute’s political circles by stuffing ballots and casting illegal votes with packs of men hired for the purpose. On election days, he would go from polling station to polling station to have his men cast fraudulent ballots under pseudonyms. In 1913, Roberts ran for the office of mayor and ensured his own victory using the same tactics. As mayor, Roberts turned a blind eye to the city’s illegal businesses in exchange for bribes.
Stimson was well aware of Roberts’s operation and tried to inform the governor to no avail. Nevertheless, she and other local women gathered at polling stations to hinder Roberts by calling out those who were casting multiple ballots in various disguises or under false identities. They eventually found an ally in Joseph Roach Jr., a special prosecutor appointed to serve in a trial against Roberts in 1914. Although the women had gathered plenty of evidence, Roberts was ultimately acquitted by the jury. Undeterred by the defeat, Roach turned to federal laws and found one based on the Enforcement Act of 1870, which forbade two or more people from conspiring to “injure, oppress, threaten, or intimidate any citizen in the free exercise or enjoyment of any right or privilege secured to him by the Constitution or laws of the United States.” He then took the issue to U.S. District Attorney Frank C. Dailey, who convinced a federal judge to accept the case.
The trouble was, Dailey couldn’t reuse any of the evidence from Roach’s earlier trial, so Stimson and the other poll-watchers once again got to work. They found that Roberts had made thousands of fraudulent registrations using the names of people from other parts of the state, tied to random addresses in Terre Haute. In December of 1914, using evidence gathered by Stimson’s volunteers, U.S. Marshals arrested 116 individuals, including Roberts. In United States v. Aczel, all of the defendants were charged with four counts of conspiracy, and 88 of them pled guilty. On March 8, 1915, Roberts and the remaining defendants were found guilty on all charges.
Roberts was sentenced to six years in prison and a fine of $2,000, though he was released early on parole. Although he retained control of the city by proxy through a loyal ally, his greater political ambition of becoming governor was never realized. Meanwhile, Roberts’s successful prosecution set an important precedent for federal enforcement of election laws, helping to pave the way for the Voting Rights Act of 1965. Just a few years after helping to take down Roberts, Stimson and her fellow suffragists won the right to vote with the ratification of the 19th Amendment. She and Roach proved that participation in politics and elections wasn’t just a right; it was a matter of dedication and civic duty.
[Image description: The Indiana state flag, which is dark blue with stars surrounding a torch and the word “INDIANA.”] Credit & copyright: HoosierMan1816, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEWorld History PP&T CurioFree1 CQ
It may not seem exciting, but we wouldn’t have much without it! Cement is used in the construction of… pretty much everything. It’s also been around for millennia. Yet as ubiquitous and essential for modern life as it is, cement remains mostly misunderstood. Many people have no idea what’s even in it. Well, get your dust mask ready to explore the history of cement through the ages.
First things first: cement and concrete are not the same thing. They may both be dusty, gray stuff that hardens when mixed with water, but concrete is actually a combination of several different materials, one of which is cement itself. To make concrete, cement and aggregates like gravel are mixed together with a variety of other ingredients (depending on the application) to form a strong, porous mass. Cement has been produced since antiquity; the ancient Greeks and Romans made their version by mixing lime with volcanic ash. They created lime by calcining limestone, a process of heating it in a low-oxygen environment to remove impurities like carbon dioxide. When the lime and ash are mixed with water, they undergo a chemical reaction called hydration, in which the calcium in lime and the silica in ash combine to form calcium silicate hydrates. Romans in particular are renowned for their use of cement to build massive structures that have lasted thousands of years with little maintenance. They used cement as mortar to hold bricks together, and they used it to make concrete. In fact, their word for concrete, “opus caementicium,” is where the modern word “cement” comes from. Their most famous innovation was using cement to build structures in or near water. Since cement—and by extension, concrete—cures instead of drying like mud or clay, they could use both materials to build the bases of bridges, dams, and aqueducts. And unlike wood, cement and concrete don’t get weaker with time or when exposed to water. In fact, water makes the materials more durable, because small cracks that let in water trigger a secondary curing process that helps maintain structural integrity.
Most cement used today is Portland cement, and its development started in the 1800s. In 1824, British bricklayer Joseph Aspdin created the first iteration of Portland cement by heating a mixture of lime and clay together until they calcined. Aspdin took the resulting product and ground it to a fine powder. When mixed with water, it became exceptionally strong, so he named it after the stones from the Isle of Portland in Dorset, U.K., which were known for their strength. Portland cement was then improved upon by his son, William Aspdin, who added tricalcium silicate. Then, in 1850, cement manufacturer Isaac Johnson created Portland cement as it is today. Johnson heated his ingredients at a higher temperature than the Aspdins had, going up to 2,732 degrees Fahrenheit (1,500 degrees Celsius), resulting in a product called clinker, a fusion of lime and the silicates. In addition to being strong, Portland cement sets much more quickly than its predecessors, and it remains the primary ingredient of the concrete used in modern construction.
The modern world would certainly be different without cement—in more ways than one. While the material has allowed for the construction of everything from majestic skyscrapers to monumental hydroelectric dams, its production is also a major source of greenhouse gas emissions. Aside from the massive amount of fuel required to heat clinker kilns and to transport the heavy material by fossil fuel-powered means, the very process of heating limestone releases carbon dioxide into the atmosphere. Still, cement has a lot of qualities that make it worthwhile. Concrete buildings are often very energy efficient, and since they last so long, less material is needed to maintain or rebuild them. Also, scientists are currently working on cements that can absorb carbon dioxide from the atmosphere, further offsetting the emissions released during production. So, when it comes to cement, what’s old is new and what’s gray is (hopefully) green.
[Image description: A portion of a building made from unpainted cement blocks.] Credit & copyright: Tobit Nazar Nieto Hernandez, Pexels