Curio Cabinet / Person, Place, or Thing
Political Science PP&T Curio
With nationwide relief efforts underway following the devastation of Hurricane Helene, you’ve likely been hearing a lot about one federal agency: FEMA (Federal Emergency Management Agency). With a workforce of more than 20,000 people, FEMA is uniquely equipped to respond to all sorts of emergencies. Before its founding, though, Americans dealing with disasters were largely left on their own.
In December of 1802, Portsmouth, New Hampshire, was practically destroyed by a fire. At the time, Portsmouth was among the U.S.’s busiest ports, and its destruction spelled disaster for the economy. The federal government didn’t directly help rebuild the city, but the U.S. Congress suspended bond payments for local merchants to allow them to continue operations in Portsmouth. Similar measures were taken after other major fires, such as one in New York City in 1835 and the Great Chicago Fire of 1871. Still, there wasn’t much interest in creating a proactive federal response system for disasters until the early 20th century, when two tragic events led to calls for action. First, there was the Galveston Hurricane in 1900, which killed thousands of people. Then, the San Francisco Earthquake in 1906 leveled much of the city. In both cases, very little federal action was taken to address displaced citizens or to rebuild critical infrastructure, with the onus falling entirely on local governments. Those local governments, in turn, began asking the federal government to create some kind of task force to help when future disasters arose. At last, in 1950, Congress created the Federal Disaster Assistance Program, giving the federal government powers to act directly in the case of disasters. A series of devastating hurricanes and earthquakes in the 1960s provided further impetus to expand these powers, resulting in the Disaster Relief Act of 1970, which allowed affected individuals to receive federal loans and tax assistance. Finally, in 1979, President Jimmy Carter issued an executive order combining a number of agencies responsible for disaster response to create FEMA.
Since FEMA was created, it has helped in the face of everything from volcanoes to hurricanes, but it hasn’t always been beyond criticism. For example, the federal responses to the Loma Prieta Earthquake in 1989 and Hurricane Andrew in 1992 were considered inadequate. Major reforms in the 1990s and an increasing emphasis on being proactive, not simply reactive, allowed the agency to respond to disasters more effectively. Some of the proactive measures included purchasing property in areas at higher risk of natural disasters and encouraging more stringent building codes. While FEMA was improving its response to natural disasters, there were also unnatural disasters to contend with. In 1995, FEMA responded to the Oklahoma City Bombing. Six years later, the terrorist attacks of September 11, 2001, led to the most significant change to the agency since its creation. When the Department of Homeland Security (DHS) was created to handle federal responses to terrorist attacks, FEMA was absorbed into it, expanding its scope to terrorism preparedness.
Today, FEMA continues in its original mission of disaster relief, and it’s been getting busier by the year. With climate change creating storms of greater frequency and power, FEMA has been kept on its toes recently. When such storms approach, it’s up to governors of affected states to request assistance through the FEMA Regional Office. Since they can do this before storms actually strike, FEMA can begin providing financial aid and moving people and supplies into position before any actual damage has occurred. Aside from providing practical necessities like food, water, and shelter to affected people, part of FEMA’s purpose is to ensure that allocated funds are handled appropriately. After all, when things go sideways, you want to make sure everything else is on the up and up.
[Image description: An American flag with a wooden flagpole flying against a blue sky.] Credit & copyright: Crefollet, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
Style PP&T Curio
Ooo la la! This timeless headpiece is as French as escargot, yet the beret has managed to maintain incredible worldwide appeal throughout the centuries. This simple, unisex hat has shown up on the heads of everyone from European royals to uniformed soldiers and is still going strong despite a history that stretches back at least as far as the 14th century.
Although modern berets are heavily associated with French fashion and largely gained popularity in the 20th century, flat-cap-style hats have been worn since the time of ancient Greece. The true ancestor of the beret comes from Europe in the 1300s, when felted or fulled wool hats were a durable, warm choice for many people working outdoors. The simple design of these hats gave them a timeless quality that endured through the centuries, and they were eventually adopted by the people of the Basque region, nestled along the border between France and Spain. The Basque people were renowned fishermen and whalers who sailed long distances in search of their quarry. Basque berets were perfect for these hardy sailors, who needed water-resistant hats to keep them warm while sailing the cold, northern seas. Their version of the beret became so emblematic of their culture that receiving one at the age of 10 was a rite of passage for boys in Béarn, the neighboring region where the hat is said to have originated. Other European cultures recognized the sailing prowess of Basque fishermen as well, and many came to Basque country to learn from the best. That, along with the long-reaching travels of the Basque sailors, spread the Basque beret around Europe. It wasn’t until 1835, though, that the hat began to be called “beret,” short for the French name for it, “béret basque.” Throughout the 1800s, the hat gained increasing popularity outside of maritime professions, though for less peaceful purposes.
The beret came into the forefront of fashion and history when Spanish-Basque military officer Tomás de Zumalacárregui wore a large, red iteration of the hat during the First Carlist War. From then on, the beret was inextricably linked to military aesthetics, and it was adopted by various European armies thereafter. Another famous example was the Chasseurs Alpins, an elite group of French soldiers trained to fight in the mountains, who wore blue berets to distinguish themselves and keep warm. Then came the brutal conflicts of WWI and WWII, when the advent of wireless communication and the widespread adoption of radios and telephones gave the beret a novel advantage: its compact design allowed it to fit in the cramped spaces inside tanks and other vehicles, while also allowing for the wearing of headphones. Soon, berets became associated with elite forces like the Green Berets of the U.S. Army.
Around the same time, though, the beret once again found itself being worn for fashion. Berets were embraced by artists and writers like Ernest Hemingway, who considered the hat’s roots in European peasantry a means of rebelling against mainstream fashion. As Paris distinguished itself as the world’s fashion center, the hats became most heavily associated with France. Today, the beret remains largely a fashion statement, but it’s also been worn by political revolutionaries such as Che Guevara and the Black Panthers as a means to identify themselves. No matter who you are, though, when you put on a beret, you’re not just wearing a fashionable headpiece. You’re wearing a piece of history.
[Image description: A maroon-colored beret hat with a puffed decoration on top, sitting on a blank mannequin head.] Credit & copyright: Metropolitan Museum of Art, Wikimedia Commons. Brooklyn Museum Costume Collection at The Metropolitan Museum of Art, Gift of the Brooklyn Museum, 2009; Gift of E. F. Schermerhorn, 1953. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
World History PP&T Curio
Did you think you were in France? Au contraire, mon ami! Québec City may look like Paris, but it’s one of the oldest cities in Canada. Its Old World architecture also makes it one of the most distinctive cities in North America. Québec City’s unique culture emanates throughout the rest of the province of Québec, where French is still spoken as the primary language and locals are quite proud of their French heritage.
Québec City was founded in 1608 by French explorer Samuel de Champlain, but he wasn’t exactly the first to reach it. That distinction goes to another French explorer, Jacques Cartier, who is credited as “discovering” Canada in 1534. Cartier was the first European to encounter many of the indigenous communities that lived along the St. Lawrence River, and he named the new land “Kanata,” a Huron-Iroquois word for “village” or “settlement.” Cartier traveled and mapped the area around the river, eventually reaching the site where Québec City stands today. However, the French were unable to send further expeditions, let alone establish colonies, due to religious upheavals and wars back in Europe. Once France was in a position to resume its exploration of Canada, or “New France,” it sent Champlain, who established Québec City as a trading post. The city was also of strategic importance, as its location on a narrow portion of the St. Lawrence River allowed the French to control travel farther into the continent for the fur trade. Unfortunately for the French, the British also had their eyes on the North American fur trade, and the two countries came into military conflict over control of New France. The British managed to capture and hold Québec City between 1629 and 1632, and in 1759, they once again defeated the French. This time, though, the French were forced to give up most of their territory in North America, and Québec City was never returned to France.
Despite British rule, though, Québec City managed to hold on to its French culture. Much of this is due to the 1774 passage of the Québec Act, which allowed the Francophone residents to maintain their language and cultural institutions. Then, the Constitutional Act of 1791 split Canada into Upper Canada and Lower Canada (with Québec City as the provincial capital), which would become the modern-day provinces of Ontario and Québec, respectively. The Constitutional Act helped draw a clear cultural boundary, contributing to Québec and its capital remaining ardently French in culture. French was declared the sole official language of Québec after the province passed the Official Languages Act in 1974, followed by the Charter of the French Language, which made French mandatory in schools, businesses, government administration, and signage. Much of France’s Old World influence can also be seen in Québec City’s historic buildings, some of which date back to French rule in the 1600s.
In a strange way, the survival of the city’s architectural heritage is owed, at least in part, to its economic struggles in the late 19th century. The historic district of Old Québec contains some of the oldest buildings in the city, but it didn’t remain largely untouched just for cultural reasons. Rather, the economic hardships of the late 19th century made it too expensive to redevelop. That’s not to say that there isn’t a longstanding spirit of historic preservation in the city. In the 1870s, demolitions began on the then-obsolete fortifications that surrounded the city, but not everyone was eager to erase the city’s architectural heritage. Eventually, the then-Governor General of Canada, Lord Dufferin, ordered that parts of the fortifications be saved for posterity, including St. Louis Gate and St. John Gate. In addition, Dufferin ordered the construction of new gates in the Romantic style, but wide enough to accommodate the increasingly large volume of traffic. In 1985, Old Québec was declared a UNESCO World Heritage site, thanks in large part to people like Dufferin. Today, French is still the main language spoken in Québec City, which boasts some of the world’s most photographed buildings and a thriving French culinary movement. Sometimes it pays to look to the past when building a city’s future.
[Image description: Buildings and a courtyard lit up with multicolored lights at night in Quebec City.] Credit & copyright: Wilfredo Rafael Rodriguez Hernandez, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
Political Science PP&T Curio
For as much as we hear about voter fraud today (especially during election years), it’s pretty rare in the modern United States; when it happens, it’s usually on a small scale. That wasn’t always the case, though. There was a time when lax regulations made it much easier for large groups to “fix” elections, especially local ones. Yet, it wasn’t Congress or even a state-level lawmaker who took the first step toward stopping such fraud. It was actually a suffragist from small-town Indiana named Stella Courtright Stimson.
Even before she could legally vote, Stimson was heavily involved in local politics in her hometown of Terre Haute. In 1909, she was elected to serve on the town’s school board and was part of the Indiana Federation of Clubs, which promoted women’s suffrage. But aside from the greater national issue of women’s right to vote, Stimson was also concerned with the economic and social development of Terre Haute, where laws were laxly enforced. At the time, Terre Haute had a reputation for being a “wide open city,” meaning that it was unregulated when it came to laws about drinking, gambling, and prostitution. Enabling and profiting from the city’s illicit industries were politicians like city-engineer-turned-mayor Donn Roberts. Roberts first made a name for himself in Terre Haute’s political circles by stuffing ballots and casting illegal votes with packs of men hired for the purpose. On election days, he would go from polling station to polling station to have his men cast fraudulent ballots under pseudonyms. In 1913, Roberts ran for the office of mayor and ensured his own victory using the same tactics. As mayor, Roberts turned a blind eye to the city’s illegal businesses in exchange for bribes.
Stimson was well aware of Roberts’s operation and tried to inform the governor to no avail. Nevertheless, she and other local women gathered at polling stations to hinder Roberts by calling out those who were casting multiple ballots in various disguises or under false identities. They eventually found an ally in Joseph Roach Jr., a special prosecutor appointed to serve in a trial against Roberts in 1914. Although the women had gathered plenty of evidence, Roberts was ultimately acquitted by the jury. Undeterred by the defeat, Roach turned to federal laws and found one based on the Enforcement Act of 1870, which forbade two or more people from conspiring to “injure, oppress, threaten, or intimidate any citizen in the free exercise or enjoyment of any right or privilege secured to him by the Constitution or laws of the United States.” He then took the issue to U.S. District Attorney Frank C. Dailey, who convinced a federal judge to accept the case.
The trouble was, Dailey couldn’t use any of the evidence that had been used by Roach a second time, so Stimson and the other poll-watchers once again got to work. They found that thousands of fraudulent registrations had been made by Roberts using names of people from other parts of the state which he had tied to random addresses in Terre Haute. In December of 1914, using evidence gathered by Stimson’s volunteers, U.S. Marshals arrested 116 individuals, including Roberts. In United States v. Aczel, all of the defendants were charged with four counts of conspiracy, and 88 of them pled guilty. On March 8, 1915, Roberts and the remaining defendants were found guilty on all charges.
Roberts was sentenced to six years in prison and a fine of $2,000, though he was released early on parole. Although he retained control of the city by proxy via a loyal ally, his greater political ambitions of becoming governor were never realized. Meanwhile, his successful prosecution set an important precedent for federal enforcement of election laws, helping to pave the way for the Voting Rights Act of 1965. Just a few years after helping to take down Roberts, Stimson and her fellow suffragists won the right to vote with the ratification of the 19th Amendment. She and Roach proved that participation in politics and elections wasn’t just a right—it was a matter of dedication and civic duty.
[Image description: The Indiana state flag, which is dark blue with stars surrounding a torch and the word “INDIANA.”] Credit & copyright: HoosierMan1816, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
World History PP&T Curio
It may not seem exciting, but we wouldn’t have much without it! Cement is used in the construction of… pretty much everything. It’s also been around for millennia. Yet as ubiquitous and essential for modern life as it is, cement remains mostly misunderstood. Many people have no idea what’s even in it. Well, get your dust mask ready to explore the history of cement through the ages.
First things first: cement and concrete are not the same thing. They may both be dusty, gray stuff that hardens when mixed with water, but concrete is actually a combination of several different materials, one of which is cement itself. To make concrete, cement and aggregates like gravel are mixed together with a variety of ingredients (depending on the application) to form a strong, porous mass. Cement itself has been produced since antiquity by the ancient Greeks and Romans, with their version consisting of lime and volcanic ash mixed together. They created lime by calcining limestone, a process of heating it in a low-oxygen environment to drive off carbon dioxide and other impurities. When the lime and ash are mixed with water, they undergo a chemical reaction called hydration, combining the calcium in the lime and the silica in the ash to form calcium silicate hydrates. Romans in particular are renowned for their use of cement to build massive structures that have lasted thousands of years with little maintenance. They used cement as mortar to hold bricks together, and used it to make concrete. In fact, their word for concrete, “opus caementicium,” is where the modern word “cement” comes from. Their most famous innovation was using cement to build structures in or near water. Since cement—and by extension, concrete—cures instead of drying like mud or clay, they could use both materials to build the bases of bridges, dams, and aqueducts. And unlike wood, cement and concrete don’t get weaker with time or when exposed to water. In fact, water makes the materials more durable, because small cracks that let in water trigger a secondary curing process that helps maintain structural integrity.
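In very simplified terms, the chemistry described above can be sketched in three steps. This is only a rough outline: the final reaction is left unbalanced, since real calcium silicate hydrates (abbreviated C-S-H) have variable compositions.
\[
\mathrm{CaCO_3} \xrightarrow{\text{heat}} \mathrm{CaO} + \mathrm{CO_2} \quad \text{(calcining limestone into lime)}
\]
\[
\mathrm{CaO} + \mathrm{H_2O} \rightarrow \mathrm{Ca(OH)_2} \quad \text{(lime reacting with water)}
\]
\[
\mathrm{Ca(OH)_2} + \mathrm{SiO_2} + \mathrm{H_2O} \rightarrow \text{C-S-H} \quad \text{(lime plus the silica in volcanic ash forming calcium silicate hydrates)}
\]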
Most cement used today is Portland cement, and its development started in the 1800s. In 1824, British bricklayer Joseph Aspdin created the first iteration of Portland cement by heating a mixture of lime and clay together until they calcined. Aspdin took the resulting product and ground it to a fine powder. When mixed with water, it became exceptionally strong, so he named it after the stones from the Isle of Portland in Dorset, U.K., which were known for their strength. Portland cement was then improved upon by his son, William Aspdin, who added tricalcium silicate. Then, in 1850, cement manufacturer Isaac Johnson created Portland cement as it is today. Johnson heated his ingredients at a higher temperature than the Aspdins had, going up to 2,732 degrees Fahrenheit (1,500 degrees Celsius), resulting in a product called clinker, a fusion of lime and the silicates. In addition to being strong, Portland cement also sets much more quickly than its predecessors, and it remains the primary ingredient of the concrete used in modern construction.
The modern world would certainly be different without cement—in more ways than one. While the material has allowed for the construction of everything from majestic skyscrapers to monumental hydroelectric dams, its production is also a major source of greenhouse gas emissions. Aside from the massive amount of fuel required to heat kilns for clinker and the fossil fuel-powered transportation of the heavy material, the very process of heating limestone releases carbon dioxide into the atmosphere. Still, cement has a lot of qualities that make it worthwhile. Concrete buildings are often very energy efficient, and since they can last so long, less material is needed over time to maintain or rebuild structures. Also, scientists are currently working on cements that can absorb carbon dioxide from the atmosphere, further offsetting the emissions released during production. So, when it comes to cement, what’s old is new and what’s gray is (hopefully) green.
[Image description: A portion of a building made from unpainted cement blocks.] Credit & copyright: Tobit Nazar Nieto Hernandez, Pexels.
US History PP&T Curio
Oh fudge, get that car out of here! Located between Michigan’s Upper and Lower Peninsulas on Lake Huron, the second-biggest of the U.S. Great Lakes, Mackinac Island is a quirky yet popular vacation spot. The 3.8-square-mile island is famous for its fudge shops, its military past, its beautiful architecture, and for being the only place in the U.S. where cars are completely banned.
In modern times, Mackinac Island is a popular vacation spot, hosting an estimated 1.2 million tourists each year. That’s especially impressive considering that only around 600 people live on the island year-round. Since no cars are allowed, visitors and residents alike must get around by walking, biking, or by horse. Horse-drawn carriages, buggies, and individual saddle horses are a common sight on the island, including on the 8-mile-long M-185, the only carless highway in America. Yet, for all of the island’s unusual and tourist-friendly features, and despite its small size, Mackinac Island’s past involves a surprising amount of military activity.
Mackinac Island was originally home to several tribes of Native peoples, including the Anishinaabe, who spoke the Ojibwe language. Since they thought that the island’s shape resembled a turtle, they called it “Mitchimakinak,” or “big turtle.” When the French first arrived on the island in the 17th century, they wrote and pronounced the name as “Michilimackinac.” When the British came to the island in 1761, during the Seven Years’ War, they shortened it to “Mackinac.” The first Europeans to settle on Mackinac Island were French Jesuit missionaries, the most famous of whom was Jacques Marquette. In 1671, he began preaching on Mackinac Island in small chapels, some of which were made of bark, in an attempt to convert both Native peoples and French fur traders. Though he later moved his mission to the nearby mainland city of St. Ignace, a statue of Marquette and a reconstruction of one of his bark chapels still stand on the island today.
In 1781, aiming to control the waters between Lake Huron and Lake Michigan as well as the region’s profitable fur trade, British forces constructed Fort Mackinac on a bluff overlooking much of the island. Their plans hit a snag after the U.S. won the Revolutionary War, and the fort was handed over to American control in 1796. Yet even that victory wasn’t enough to secure the island forever. When the War of 1812 broke out between the U.S. and Britain, British forces once again demanded control of the fort. When British Captain Charles Roberts landed on the island on July 17, 1812, with a force of British troops, Native American warriors, and Canadian militiamen, he sent an urgent message to American fort commander Porter Hanks, demanding an immediate surrender. Hanks was baffled, as he had no idea that war had broken out at all. Still, since the 24-year-old commander was badly outnumbered with a force of just 57 men, he surrendered the fort to the British. Just two years later, U.S. troops tried to take the fort back, and the bloody Battle of Mackinac Island ensued. Thirteen Americans were killed in the only battle ever fought on the island, and the fort remained in British hands until the War of 1812 ended, after which it was returned to U.S. control. It was officially decommissioned in 1895, but it still stands on the island today as an educational tourist site.
After the war, wealthy tourists eager to cheer themselves up and escape crowded cities began flocking to the island each summer. Catering to the tastes of these tourists, architects built elaborate, Victorian-style homes and buildings all over the island, and planted a wide array of colorful flowers. The Grand Hotel, built in 1887, is still a famed landmark on the island and boasts the world’s longest porch at 660 feet. Also in 1887, the Murdick family opened a candy kitchen on Mackinac specializing in fudge. The island’s temperate climate made fudge-making easy (since fudge requires warm, moist air to keep from drying out) and soon many other fudge shops opened. Not long after, in 1898, cars were officially banned in order to preserve the island’s historic atmosphere and the thriving horse-based businesses there. The ban was certainly effective, since Mackinac Island is still full of history, original buildings, beautiful flowers, and plenty of horses to this day. Just be prepared for a slow-moving vacation if you decide to visit.
[Image description: A photo of a portion of Mackinac Island from above. A statue can be seen in a grassy field, as well as several white buildings and a harbor with boats. A green land mass can be seen across Lake Huron.] Credit & copyright: Author’s own photo. The author has dedicated this work to the public domain.
Political Science PP&T Curio
A national park without the National Park Service? That’s like mashed potatoes without gravy! It might surprise modern park-goers, but the first national park in the U.S. was created in 1872, long before there was a government agency to specifically manage it. Instead, national parks were variably managed by the Department of the Interior, the War Department, and the Department of Agriculture’s Forest Service before 1916. It wasn’t until August 25 of that year that the National Park Service (NPS) was established with the passage of the Organic Act, placing the management of all national parks and monuments under a single agency.
When Yellowstone National Park (shared by Wyoming, Idaho, and Montana) was created in 1872, you could say it was a pretty big deal. Never before had a piece of wilderness been placed under federal protection solely for the purpose of preserving its natural beauty. Once Yellowstone paved the way, others followed suit. Some of the most well-known national parks were created before 1916, including Yosemite, Crater Lake, and Glacier in 1890, 1902, and 1910, respectively. In all, 35 national parks and monuments (smaller areas and objects of historic, prehistoric, or scientific interest) were created by 1916, but there was a problem. The purpose of national parks was to protect the designated lands, but that didn’t always work out in practice. Environmental advocates were becoming concerned by the lack of effort put into actually preserving and maintaining the pristine conditions of national parks. Due to a lack of funding and authority inside the parks, hunting, logging, and livestock grazing continued to harm the supposedly protected land. One concerned citizen was Stephen Mather, a businessman who found himself appalled by the state of the national parks he visited. With the help of other environmental advocates, Mather pushed Congress to pass the Organic Act in 1916, which created the NPS, placing 35 national parks and monuments under its purview with Mather as the agency’s first director. Then, in 1933, the management of another 56 national monuments and military sites was transferred to the NPS, further expanding the national park system.
There are now over 400 national park sites, and they truly reflect the varied geography and biomes of the U.S. and its territories. For example, Denali’s peak in Alaska is the highest point in North America at 20,320 feet, while Death Valley in California is the lowest at 282 feet below sea level (and also the hottest place in the world). Some parks, like Yellowstone and Yosemite, are large enough that visitors could spend days or weeks traversing them. Wrangell-St. Elias National Park and Preserve is larger than some countries at 13.2 million acres. Yet others represent natural oddities, like Devils Tower National Monument in Wyoming. Also called Bear Lodge, it’s a butte whose summit rises to over 5,100 feet in elevation. Surrounded by a forest, Devils Tower itself looks like a giant tree stump made of crumbling stone.
Today, over 20,000 employees and hundreds of volunteers work throughout the year to maintain more than 400 parks, monuments, and other sites while also managing the millions of visitors who pass through. The most visible of these workers are the park rangers, who act as law enforcement, guides, and interpreters within park boundaries. The NPS is technically part of the Department of the Interior and has an annual budget of $2.6 billion. While that might sound like an eye-watering amount of money to spend just to preserve some natural beauty, the park system also generates hundreds of thousands of jobs in the towns and cities that surround the parks, as well as $27 billion a year for the U.S. economy. Who says conservation doesn’t pay?
[Image description: A grassy field at Yellowstone National Park with mountains and trees on either side.] Credit & copyright: Mike Cline, Wikimedia Commons, This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.A national park without the National Park Service? That’s like mashed potatoes without gravy! It might surprise modern park-goers, but the first national park in the U.S. was created in 1872, long before there was a government agency to specifically manage it. Instead, national parks were variably managed by the Department of the Interior, the War Department, and the Department of Agriculture’s Forest Service before 1916. It wasn’t until this day that year that the National Park Service (NPS) was established with the passing of the Organic Act, placing the management of all national parks and monuments under a single agency.
When Yellowstone National Park (shared by Wyoming, Idaho, and Montana) was created in 1872, you could say it was a pretty big deal. Never before had a piece of wilderness been placed under federal protection solely for the purpose of preserving its natural beauty. Once Yellowstone paved the way, others followed suit. Some of the most well known national parks were created before 1916, including Yosemite, Crater Lake, and Glacier in 1890, 1902, and 1910, respectively. In all, 35 national parks and monuments (smaller areas and objects of historic, prehistoric, or scientific interest) were created by 1916, but there was a problem. The object of national parks was to protect the designated lands, but that didn’t always work out in practice. Environmental advocates were becoming concerned by the lack of effort put in to actually preserve and maintain the pristine conditions of national parks. Due to lack of funding and a lack of authority inside the parks, hunting, logging, and livestock grazing continued to harm the supposedly protected land. One concerned citizen was Stephen Mather, a businessman who found himself appalled by the state of the national parks he visited. With the help of other environmental advocates, Mather pushed Congress to pass the Organic Act in 1916, which created the NPS, placing 35 national parks and monuments under its purview with Mather as the agency’s first director. Then, in 1933, the management of another 56 national monuments and military sites were transferred to the NPS, further expanding the national park system.
[Image description: A grassy field at Yellowstone National Park with mountains and trees on either side.] Credit & copyright: Mike Cline, Wikimedia Commons, This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEWorld History PP&T CurioFree1 CQ
What is that beguiling scent? It could be intestinal waste from a whale. For centuries, people made perfumes, incense, and even medicines from a mysterious substance known as ambergris. Clumps of the stuff would wash up on beaches around the world from time to time and, though ancient peoples had no idea what it was, they knew that it smelled very interesting. Today, we know that ambergris originates in the intestines of sperm whales—and it’s been made illegal in many places to prevent whaling. Still, the fragrance industry as we know it probably wouldn’t exist if it wasn’t for ambergris.
Ambergris gets its modern name from the French words for “gray amber,” but it’s been called many things over the centuries. It was prized by the Egyptians, who considered it so valuable that they gave it as an offering to the gods. Outside of religious rituals, ambergris was used primarily as an ingredient in perfume, but also as an aphrodisiac and even as a spice. King Charles II of England took it a step further, with his favorite dish reportedly being a plate of eggs and ambergris. That might come as a shock to anyone who’s actually been in the vicinity of fresh ambergris. Its smell is what one would expect from something that came out of a whale’s intestine—very musky and pungent. However, when ambergris is left in the sun, a chemical reaction occurs that lends it a pleasant fragrance and changes its color from black to a yellowish-gray. This latter form is much more useful for perfume-making and cooking…assuming you’re brave enough to eat it, of course.
It wasn’t just ambergris’s smell that made it a popular perfume ingredient, though. It contains a myriad of chemical compounds, one of which is ambreine. Ambreine is a triterpene alcohol that can make up as much as 80 percent of ambergris, and it’s used as a fixative in fragrances. Essentially, it holds together the other chemical compounds that define a fragrance and keeps them from evaporating too quickly. Most modern perfumers use a synthesized version of ambreine, but some companies still use real ambergris. It’s still legal to use in Canada, the U.K., and the European Union, and although it’s technically illegal in the U.S., the law often goes unenforced.
As popular as ambergris has been for millennia, its origin was unknown until fairly recently. Some believed that ambergris was produced by swarming sea animals or underwater volcanoes. It wasn’t until the 18th century, when commercial whaling in North America took off and people started dissecting sperm whales, that they discovered the true source of ambergris. Even then, no one was sure why whales produced it until 2006, when marine biologist Robert Clarke proposed that sperm whales did so to protect their intestinal lining from the sharp beaks of their main prey: squids. Indeed, one of the ways to determine the authenticity of an ambergris chunk is to search it for squid beaks.
Today, even in most countries where ambergris is legal, it’s still illegal to hunt whales to obtain it. Therefore, there are only two ways to come by ambergris: be lucky enough to discover a chunk washed up on shore, or be rich enough to buy a chunk when it goes up for auction. Even these aren’t options everywhere, though. The U.S. and Australia have banned commercial sales of ambergris to discourage illegal whaling, while in India all ambergris that washes ashore is technically considered property of the government. Surely we can all agree that whales are more important than what’s in their guts, no matter how good it smells.
[Image description: A chunk of ambergris, which looks like a lumpy rock, against a reddish background.] Credit & copyright: Wmpearl, Wikimedia Commons. Skagway Museum. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREETravel PP&T CurioFree1 CQ
Corn, soybeans, tenderloins… and sand dunes? You might not think of the latter when someone mentions Indiana, but Indiana Dunes National Park really does feature towering sand dunes, the tallest of which stands at an impressive 192 feet. Until recently, this unique park was known as Indiana Dunes National Lakeshore, but it finally became an official national park in 2019. The designation came long after natural forces shaped the park’s unusual landscape, but not all that long after several lengthy legal battles to save it from man-made destruction.
Though rolling sand dunes might seem out of place in the Midwest, their presence is as natural as the lake they border. The history of the Indiana Dunes begins with Lake Michigan, which formed around 20,000 years ago as massive glaciers receded after carving through the area for millennia. The glaciers’ meltwater created the Great Lake, and the battering of waves on the shore deposited layer after layer of sediment carried there by rivers. Starting around 10,000 years ago, that worn-down sediment became the dunes, which continued to expand along the coastline. Further inland, there are bogs and ponds between the ever-shifting dunes that move with the wind. Though the dunes still exist today and form the idyllic beaches along the park's shoreline, many of them were destroyed over the past century by development and erosion. Still, the 15,000 acres that remain are among the most biodiverse in North America.
In 1905, the United States Steel Corporation (U.S. Steel) purchased thousands of acres on the Lake Michigan coast, leveling the dunes there to build a steel mill. Near the mill, U.S. Steel developed the city of Gary to house its employees, who at their peak numbered around 30,000. That was the first major man-made blow to the Indiana Dunes, but not the last. The following year, the namesake of the town of Ogden Dunes, Francis A. Ogden, purchased 2.5 miles of land along the coastline to sell its sand for use in construction. Starting in the 1950s, interstates and a flood of new residents began filling in the once rural region, chipping away at the ever-dwindling dunes. Meanwhile, the first effort to give the Indiana Dunes federal protection began in 1917 using the straightforward slogan, “Save the Dunes.”
Unfortunately for conservationists and local advocates, their efforts were interrupted by WWI, which diverted potential federal attention and funding from the dunes to the growing war effort. By the time the war ended and conservationists resumed their lobbying, there was little interest from the federal government, and the growing presence of industry in the area made preserving the dunes economically unappealing to politicians. It wasn’t until 1952 that the sentiment of “Save the Dunes” would be championed again, this time by Ogden Dunes resident Dorothy Buell. Buell formed the Save the Dunes Council along with 21 other local women, and the conservationists organized fundraising events to pay for the purchase of unused land that was part of the dunes. Soon, their efforts caught the interest of Senator Paul H. Douglas of Illinois, who had a summer home near the Indiana Dunes. Beginning in 1958, Douglas continually introduced bills in the U.S. Senate to have the dunes placed under the authority of the National Park Service (NPS). He was ultimately successful in 1966, when President Lyndon B. Johnson signed a bill to form the Indiana Dunes National Lakeshore.
Indiana Dunes finally became Indiana Dunes National Park in 2019, after years of petitioning for the status upgrade. Today, it includes many more acres than it did in 1966 due to the NPS purchasing more land over the decades. Save the Dunes still exists as well, and its mission isn’t over yet. Industrial presence, the resulting pollution, and development continue to be issues. Additionally, erosion is a growing concern in the face of climate change and the accompanying rising water levels that threaten to wash away the sand faster than it can accumulate. If only there were a way to bring back some of that sand that got sold off!
[Image description: A photo of the Lake Michigan shoreline from atop a sandy dune at Indiana Dunes National Park.] Credit & copyright: Wikimedia Commons, National Park Service. This image or media file contains material based on a work of a National Park Service employee, created as part of that person's official duties. As a work of the U.S. federal government, such work is in the public domain in the United States. -
FREEMind + Body PP&T CurioFree1 CQ
It’s bad enough to be diagnosed with something that has no known cure…but what about something that also has no known cause? Until now, people dealing with lupus, an autoimmune disease, had no way of knowing what caused their illness. Worse, they had few effective treatment options. Recently, though, researchers discovered that lupus is caused by a problem with a specific immune system pathway. This vital information might even lead to a cure in the near future.
Lupus is an autoimmune disorder that can affect everything from a person’s skin to their organs. It’s currently incurable, and it’s difficult to diagnose accurately due to the variability of its symptoms. As an autoimmune disorder, lupus causes the body’s immune system to attack its own healthy cells as if they were foreign bodies. This causes symptoms like dry eyes, fatigue, and fever, but even more severe symptoms can develop over time. These include joint pain and swelling, photosensitivity, lesions on the skin, and Raynaud's syndrome, which causes fingers to turn blue during times of stress or when exposed to cold. The most distinct symptom of lupus is a red, butterfly-shaped rash that appears on the face, covering the cheeks and nose. Eventually, a patient with lupus can experience organ failure, particularly of the kidneys and liver. Although lupus has been referenced in medical writings for thousands of years, the modern understanding of lupus only began in 1948 with the discovery of lupus erythematosus cells (LE cells) by Malcolm McCallum Hargraves. Hargraves found the cells in the bone marrow of a patient suffering from lupus, and they are sometimes called Hargraves cells.
Currently, lupus is treated mostly using immunosuppressants, nonsteroidal anti-inflammatory drugs (NSAIDs), and corticosteroids, with limited success. Even when they help, immunosuppressants come with their own problems. They leave patients more vulnerable to infections and more likely to develop cancer. Other drugs like NSAIDs only address individual symptoms without improving the overall condition. In fact, NSAIDs can be dangerous in some cases since they’re processed by the kidneys, making them potentially toxic to some lupus patients with decreased kidney function. Finally, corticosteroids can help with inflammation caused by a lupus patient’s overzealous immune system, but long-term use can lead to high blood pressure and put users at risk for developing diabetes.
Now, a better treatment that addresses the underlying issues of lupus might be on the way, following the first major breakthrough in lupus research in 76 years. Researchers at Northwestern University Feinberg School of Medicine and Brigham and Women’s Hospital have identified a root cause of lupus: a malfunctioning immune pathway. The pathway in question is responsible for regulating immune cells’ response to pathogens, toxins, and pollutants. In lupus patients, the pathway isn’t properly activated due to insufficient activation of the aryl hydrocarbon receptor (AHR). The resulting lack of regulation leads to the immune system going haywire, producing more immune cells than necessary, which then go on to attack the body. With this in mind, researchers are looking to AHR-activating drugs as potential treatments or cures. The good news is that these drugs already exist, but they’ll still have to be tested for safety and efficacy.
Lupus is an often misunderstood disorder, especially since many of the most debilitating symptoms are invisible. The difficulty in diagnosing it also means that many patients already have advanced disease by the time they’re finally diagnosed. Those who suspect they may have lupus can seek a diagnosis through blood tests and urinalysis, and those who are diagnosed would benefit from avoiding direct sunlight and maintaining a healthy lifestyle. Some studies also show that there may be a benefit to taking vitamin D and calcium supplements. Here’s hoping that new lupus treatments with fewer side effects are soon added to patients’ arsenals.
[Image description: Three medical needles against a yellow background.] Credit & copyright: Karolina Kaboompics, Pexels -
FREEUS History PP&T CurioFree1 CQ
Was it the “trial of the century” or just a bunch of monkey business? The Scopes Monkey Trial, one of the most widely-publicized court cases in U.S. history, concluded on this day in 1925. Ostensibly, the case was a legal battle between the state of Tennessee and John T. Scopes, a teacher from the town of Dayton who defied the law and taught Charles Darwin’s theory of evolution in his public school classroom. In reality, the trial was about America’s views on science and religion, not to mention public education.
At its core, Scopes v. State centered on the violation of the Butler Act. The act was passed in March of 1925 by Tennessee’s state legislature, and it prohibited schools from teaching Darwin’s theory of evolution. At the time (as is still true today), the theory of evolution was rejected by fundamentalist Christians who favored a biblical interpretation of natural history. Oddly enough, Tennessee’s public schools were at the same time required to use A Civic Biology (1914) by George W. Hunter in their classrooms, even though the textbook supported the theory. Nevertheless, soon after the act was passed, the American Civil Liberties Union (ACLU) placed ads in the state’s newspapers offering to fund the criminal defense of any teacher willing to break the new law. The idea was to test the law in court and have it found to be unconstitutional. It wasn’t until a Dayton businessman named George W. Rappleyea saw economic potential in the case that anyone challenged the Butler Act. Rappleyea believed that such a controversial case would increase Dayton’s visibility, revitalizing the town. With Rappleyea’s support, several prominent residents of the town encouraged 24-year-old high school football coach and teacher John T. Scopes to place himself within the legal crosshairs of the state.
When Scopes was charged with violating the Butler Act soon thereafter, he was represented by famed criminal defense lawyer Clarence Darrow. On the prosecution’s side was prominent politician and attorney William Jennings Bryan, who also served as a Bible expert during the trial. The trial certainly did bring the sleepy town of Dayton into the national spotlight. The Scopes Trial was the first to be broadcast nationally, and it was heard as far away as London and Hong Kong. Residents of Dayton were roused by the controversy and sensationalism on display, gathering at the courthouse in such numbers that the judge moved the trial out to the lawn for fear of the courthouse collapsing under the weight of the crowd. Regardless of where it took place, it was clear early on that Scopes and Darrow were fighting an uphill battle. The judge forbade any discussions regarding the scientific validity of evolution or the constitutionality of the Butler Act, stating that the court was only concerned with whether or not Scopes had violated the law. Still, Darrow took the opportunity to challenge Bryan’s credibility as a Bible expert. A famous proponent of anticlericalism, Darrow was used to criticizing fundamentalist interpretations of the Bible. When Darrow cross-examined Bryan, a self-proclaimed expert on scripture, Bryan was widely ridiculed for his inability to reconcile the contradictions in a literal reading of the Bible. Then, on the last day of the trial, the unthinkable happened: in his closing statement, Darrow, the defense counsel, asked the jury to find Scopes guilty so that the case could be appealed to a higher court. In doing so, Darrow ensured that, under Tennessee state law, Bryan would be denied the right to give his own closing statement.
In the end, Scopes was found guilty and fined $100. However, due to a procedural error in the way the fine was determined, the case was overturned by the Tennessee Supreme Court. In 1955, the sensational story of the case was adapted into a play, Inherit the Wind, which was itself adapted into a film. As for the Butler Act, it wasn’t repealed until 1967. Today, the theory of evolution still invites controversy in certain places. Maybe we’ll see a federal case about it someday.
[Image description: Description ] Credit & copyright: Tree of Life by Ernst Haeckel (1834–1919), 1879. Wikimedia Commons, this media file is in the public domain in the United States. -
FREEHumanities PP&T CurioFree1 CQ
You’ve heard of Zeus, but have you ever wondered how the king of the gods came to power? The pantheon of ancient Greek gods is full of familiar names, from Aphrodite to Poseidon. But in Greek mythology, these deities weren’t the first to rule the heavens and earth. That distinction belongs to the Titans.
According to the ancient Greeks, the creation of the universe involved something coming forth from nothing…with a lot of family drama following after. At first, there was only Chaos, a cosmic void from which the first beings emerged. Then came the three primordial deities: Gaia (the Earth itself), Tartarus (the underworld), and Eros (desire). Gaia’s son, Uranus, was the sky, and also the father of her other 18 children. Twelve of the children were the Titans (the first gods), three were the one-eyed Cyclopes, and the final three were the Hecatoncheires, each of whom had 50 heads and a hundred arms. Appalled by their monstrous appearance, Uranus imprisoned the Cyclopes and the Hecatoncheires in Tartarus. Unfortunately, this made their mother very angry. In retaliation for Uranus’s cruelty, Gaia gave her son, the Titan Cronus, a sickle with which he castrated his father and ultimately overthrew him. Unfortunately for Cronus, history tends to repeat itself, even for gods.
After imprisoning his father in Tartarus, Cronus ruled over the Titans and married his sister, Rhea. Rhea gave birth to six children: Hestia, Demeter, Hera, Hades, Poseidon, and Zeus. Worried that one of these children would depose him as he had deposed his own father, Cronus decided on a violent plan of action. He swallowed his children one by one, but Rhea managed to save Zeus by giving her husband a rock disguised as her son. Once Zeus came of age in secret, he returned to his father to exact his revenge. First, he poisoned Cronus to make him vomit, freeing his brothers and sisters. With the help of his siblings, Zeus then set in motion the Titanomachy, a ten-year conflict between the Olympian gods and the Titans. Zeus and his cohort allied with the Cyclopes and the Hecatoncheires, with the former creating the iconic weapons of the Olympians: Zeus’s thunderbolts, Poseidon’s trident, and Hades’s helmet of darkness. The Olympians, of course, emerged victorious, thanks in no small part to these powerful weapons. After his defeat, Cronus was exiled, cursed to count the passing of time and age, earning him the moniker “Old Father Time.” Atlas, the Titan who led his kin into battle, was punished by having to hold up the heavens for all eternity. Meanwhile, Zeus and the others settled on the summit of Olympus in a palace built by the Cyclopes. Not all the Titans were cast out by the Olympians, though, and there was still more conflict to come.
One of the most famous Titans was Prometheus. Not only did he escape imprisonment in Tartarus, but he and his brother, Epimetheus, were also tasked by Zeus with creating mankind. However, Prometheus was angry that his creations were left in the cold, without any reliable way to keep warm or reach their true potential. Feeling pity for them, Prometheus stole fire from Olympus and brought it to the humans even though doing so was forbidden by Zeus. Along with this powerful gift, Prometheus taught them mathematics, astronomy, sailing, and architecture. Thanks to these divine boons, the humans thrived, building mighty kingdoms of their own. In time, they even came to question the power and authority of the gods, which deeply angered Zeus. Discovering Prometheus’s betrayal, Zeus punished the Titan by chaining him to a cliff. There, a giant vulture came each day to eat Prometheus’s liver, which grew back overnight. It wasn’t until Heracles came around centuries later and killed the vulture that the Titan was freed. What’s a few eons of torment if it means you can take credit for mankind's greatest achievements?
[Image description: A painting of the ancient Greek Titans, depicted as large male figures, falling into the darkness of Tartarus.] Credit & copyright: The Fall of the Titans (c. 1596–1598), Cornelis van Haarlem (1562–1638). National Gallery of Denmark, Copenhagen. Wikimedia Commons. The author died in 1638, so this work is in the public domain in its country of origin and other countries and areas where the copyright term is the author's life plus 100 years or fewer. -
FREEPP&T CurioFree1 CQ
In honor of the holiday weekend, enjoy this curio from the archives about one of the Revolutionary War's most unlikely figures.
She wasn’t trying to start a revolution, but she wasn’t afraid to join one. Deborah Sampson was the first woman in U.S. history to receive a military pension—not as a spouse, but as a veteran. Born on this day in 1760, Sampson disguised herself as a man and adopted a new identity to fight in the Continental Army. Later, she toured the newly formed nation as a lecturer.
Born in Plympton, Massachusetts, Sampson had a difficult childhood. Her father was lost at sea when she was just five years old, and her family struggled financially as a result. Starting from the age of ten, she worked as an indentured servant on a farm until she turned 18. Afterward, she found work as a schoolteacher in the summer and as a weaver in the winter while the American Revolutionary War raged on. But starting in the 1780s, as the war continued, Sampson tried to enlist in the Continental Army in disguise. Her first attempt ended in failure, leading to her immediate discovery and a scandal in town. That didn’t deter her, though, and her second attempt in 1782 was successful. Taking on the name Robert Shurtleff, Sampson joined the 4th Massachusetts Regiment. Her fellow soldiers didn’t catch on to her ruse and her true gender went unnoticed, although she was given the nickname “Molly” due to her lack of facial hair.
For 17 months, “Shurtleff” served in the Continental Army. Just months after joining, Sampson participated in a skirmish against Tory forces that saw her fighting one-on-one against enemy soldiers. She also served as a scout, entering Manhattan and reporting on the British troops that were mobilizing and gathering supplies there. Sampson’s cover was almost blown several times, but she was so determined to keep her secret that she even dug a bullet out of her own leg after she was shot, to avoid a doctor’s examination. Even so, she lived the rest of her life with some of the lead still in her leg. Unfortunately, she was found out after she came down with a serious illness. While in Philadelphia, she was sent to a hospital with a severe fever. She fell unconscious after arriving, and medical staff discovered her true gender while treating her. After being discovered, Sampson received an honorable discharge and returned to Massachusetts. In 1785, she married Benjamin Gannett, with whom she had three children. During this time, she did not receive a pension for her service, and she lived a quiet life. However, things changed as stories of her deeds spread due to the publication of The Female Review: or, Memoirs of an American Young Lady by Herman Mann in 1797. The book was a detailed account of Sampson’s time in the army. To promote the book, Sampson herself went on a year-long lecture tour in 1802. She regaled listeners with war stories, often in uniform, though she may have embellished things a bit. For instance, she claimed to have dug trenches and faced cannons during the Battle of Yorktown, but that battle took place a year before she enlisted. Nevertheless, her accomplishments were largely corroborated, and even Paul Revere came to her aid, helping her secure a military pension from the state of Massachusetts.
Today, Sampson is remembered as a folk hero of the Revolutionary War. After she passed away in 1827 in Sharon, Massachusetts, the town erected statues in her honor. There’s even one standing outside the town’s public library. It shows her dressed as a woman, but holding her musket, with her uniform jacket draped over her shoulder. In 1982, Massachusetts declared May 23 “Deborah Sampson Day” and made her the official state heroine. That seems well-deserved, given that she was the first woman to bayonet-charge her way through the gender barrier.
[Image description: An engraving of Deborah Sampson wearing a dress with a frilled collar.] Credit & copyright: Engraving by George Graham. From a drawing by William Beastall, which was based on a painting by Joseph Stone. Wikimedia Commons, Public Domain -
FREESports PP&T CurioFree1 CQ
As a rule, humans aren’t the world's best swimmers…but rules were made to be broken. While most members of our terrestrial species are much faster on land than in the water, Olympian Michael Phelps is a notable exception. This record-breaking athlete, born on this day in 1985, has a unique physiology that makes him perfectly suited for the pool, and an aquatic nickname to match.
Phelps began swimming at the age of seven, following in his sisters’ footsteps after they joined a local swim team. Long before he boasted nicknames like “Flying Fish” and “Baltimore Bullet,” he swam competitively for his high school team and even made it onto the U.S. Swim Team at the 2000 Summer Olympics in Sydney. Though he didn’t win any medals that year, he still made history by being the youngest male Olympic swimmer in 68 years. He began setting world records while still in high school, a trend that continued when he attended the University of Michigan in Ann Arbor. It was during Phelps’s second Olympics appearance in 2004, in Athens, that he became a household name after winning eight medals, including six golds. After not winning a single medal at his first Olympics, Phelps was suddenly just one gold away from Mark Spitz's record of seven. He went on to break the record during the 2008 Summer Olympics in Beijing by winning eight gold medals, which also set the record for the most golds at a single Olympics. By the time he retired in 2016 after the Summer Olympics in Rio de Janeiro, he had 28 medals to his name, with 23 golds, including 13 individual golds.
While hard work and perseverance surely played a role in Phelps’s dominance in the water, he also benefited from having what may be the ideal swimmer’s body. Most of the best swimmers in the world have a similar body shape that gives them an advantage over the average person, beyond their training. Firstly, it pays for a swimmer to be tall, and indeed, most of the top Olympic swimmers hover around six feet tall. But proportions matter too, with long, flexible torsos allowing for more power behind strokes and a center of mass closer to the lungs (the center of flotation) allowing for less energy wasted in trying to stay level in the water. It also helps to have large hands and feet, which act like paddles or flippers in the water, while large lungs help swimmers stay afloat and take in more oxygen. Many swimmers have these traits, but Phelps’s physique seems to take some of them to an extreme. His lung capacity sits at 12 liters, twice that of the average person, and he has double-jointed elbows. He’s also hyper-jointed at the chest, allowing him to leverage more of his body to power each stroke. Even for a swimmer, he has a massive “wingspan,” the distance from fingertip to fingertip when the arms are held out horizontally from the body. While most people have wingspans that are about the same as their height, Phelps’s wingspan of six feet, seven inches is three inches longer than he is tall. Finally, his body was found to produce half as much lactic acid as even other trained athletes, which allows him to recover faster between training sessions.
All that isn’t to discount his talent. While Phelps may have been gifted with natural advantages, his drive and willingness to train hard are even more important. Those who’ve worked with Phelps have often expressed that the true secret behind the swimmer’s success is his immaculate technique, which can only come from extensive training. Swimming is extremely inefficient for human beings, so every movement of every stroke counts, especially at elite levels where a fraction of a second can make all the difference. It wouldn’t matter if you had shark skin and flippers for feet if you didn’t know how to use them!
[Image description: A large, empty swimming pool with blue-and-white lane dividers.] Credit & copyright: Jan van der Wolf, PexelsAs a rule, humans aren’t the world's best swimmers…but rules were made to be broken. While most members of our terrestrial species are much faster on land than in the water, Olympian Michael Phelps is a notable exception. This record-breaking athlete, born on this day in 1985, has a unique physiology that makes him perfectly suited for the pool, and an aquatic nickname to match.
Phelps began swimming at the age of seven, following in his sisters’ footsteps after they joined a local swim team. Long before he boasted nicknames like “Flying Fish” and “Baltimore Bullet,” swam competitively for his high school team and even made it onto the U.S. Swim Team at the 2000 Summer Olympics in Sydney. Though he didn’t win any medals that year, he still made history by being the youngest male Olympic swimmer in 68 years. He began setting world records while still in high school, a trend that continued when he attended the University of Michigan in Ann Arbor. It was during Phelps’ second Olympics appearance in 2004, in Athens, that he became a household name after winning eight medals, including six golds. After not winning a single medal at his first Olympics, Phelps was suddenly just one gold away from Mark Spitz's record of seven. He went on to break the record during the 2008 Summer Olympics in Beijing by winning eight gold medals, which was also the record for the most gold during a single Olympics. By the time he retired in 2016 after the Summer Olympics in Rio de Janeiro, he had 28 medals to his name, with 23 golds including 13 individual golds.
[Image description: A large, empty swimming pool with blue-and-white lane dividers.] Credit & copyright: Jan van der Wolf, Pexels -
FREESports PP&T CurioFree1 CQ
Some people think he was a great baseball player. Everyone else knows he was the greatest. Willie Mays passed away on June 18 at the age of 93, and even though he had been retired for decades, no one else in the league ever managed to best his numbers. At bat or in center field, there still isn’t anyone quite like the “Say Hey Kid.”
Willie Howard Mays Jr. was born on May 6, 1931, in Westfield, Alabama, to Annie Satterwhite and Willie Mays Sr., a semi-professional baseball player. Though he was raised by relatives after his parents separated, Mays seemed to follow in his father’s footsteps, showing an interest in baseball from an early age. As a teenager, he moved to Fairfield, where he played sporadically for the Fairfield Stars in the Birmingham Industrial League. In 1948, Mays was just 16 years old and still attending high school when he signed with the Birmingham Black Barons of the Negro Leagues. Mays played for the Black Barons until he graduated from high school, after which he signed with the Giants, then based in New York.
The baseball wunderkind proved in his rookie season in 1951 that his early success wasn’t just a fluke. Moreover, he showed that he was an exceptional all-around player. Though the Giants lost the World Series to the New York Yankees that year, Mays was named the National League Rookie of the Year for his superb defensive performance. Just a few years later, the Giants had a historic season when they won the 1954 World Series. It was in Game 1 against the Cleveland Indians that Mays pulled off “The Catch,” an over-the-shoulder grab that seemed like a magic trick at the time. “The Catch” happened after Vic Wertz hit a fly ball deep into center field, and Mays took off after it at a dead sprint with his back to the plate. Catching the ball kept the runners on base from scoring and all but secured a Giants win in the series opener, all with undeniable flair. Mays followed the Giants when the team moved to San Francisco, where he played until he was traded to the New York Mets in 1972. Throughout his career, Mays played in 24 All-Star games, was awarded the Gold Glove 12 times, and hit 660 home runs, all while stealing bases left and right and keeping center field a precarious place for hitters to aim. But it wasn’t just his presence on the field that gained him a following; he was a beloved personality off the field as well. He was given the nickname “Say Hey Kid,” and while accounts of its origin vary, Mays himself said at one point that it came from his habit of greeting people with “Say hey” when he couldn’t remember their names during his rookie year.
Even after retiring in 1973, Mays remained an inspiration to many Black Americans. After Jackie Robinson broke down racial barriers in 1947, Mays further pushed against the racist barriers that Black athletes faced. He was a player who could not be ignored, whose dramatic plays and charisma won games as well as hearts. For many in his time, Mays was the face of baseball, a superstar of a sport that had only recently—and begrudgingly—integrated. When he was awarded the Presidential Medal of Freedom in 2015, President Obama said of him, “It's because of Giants like Willie that someone like me could even think about running for president.” To this day, Mays is cited as an inspiration by Black baseball players, who continue to be underrepresented in the sport. It seems that this legendary giant had plenty of room on his shoulders.
[Image description: A red baseball glove, a baseball bat, and four baseballs on a wooden bench.] Credit & copyright: Tima Miroshnichenko, Pexels -
FREECooking PP&T CurioFree1 CQ
As Pride Month continues, so does our celebration of extraordinary LGBTQ+ figures. This week, we’re taking a closer look at the late, great American chef James Beard. Even outside the culinary world, the James Beard Award is well known as one of the most coveted prizes a chef or restaurant can receive. Read on to learn how the award’s namesake became one of America’s first culinary superstars, and the unconventional way he chose to come out later in life.
Born May 5, 1903, Beard grew up in Oregon, where his parents taught him to fish and forage for food in the bountiful waters and forests of the Pacific Northwest. He was also exposed to fine dining as a child, as his mother ran a boarding house and was known for her cooking. While a passion for cooking with locally sourced ingredients was thus instilled in him at a young age, Beard’s first career choice had nothing to do with the kitchen. Instead, he traveled abroad and trained for the theater as a young man, but he never found much success as an actor and struggled to make ends meet. Beard returned to the U.S. in 1927 but had no better luck in the entertainment industry stateside. In 1937, Beard started a catering business called Hors d’Oeuvre Inc. to supplement his income. Not long afterward, though, this enterprise born out of necessity became a financial success and reignited his childhood passion for cooking.
In 1940, Beard published his first cookbook, Hors d’Oeuvre & Canapés, and in 1942, he published Cook It Outdoors. Then, in 1946, he achieved his former ambition of making it onto the screen in a roundabout way when he began hosting I Love to Eat, a cooking program on NBC. His books and TV appearances made him a household name in post-WWII America. What set him apart from other culinary personalities emerging around the same time was his focus on identifying and creating distinctly American dishes. As much as Italian and French cuisine were beginning to capture home cooks’ imaginations, Beard championed American cuisine as a worthy contender with its own unique traditions and merits. That’s not to say that he was one to snub other culinary traditions, of course. He was well-traveled himself and wrote extensively about everything he tried in the U.S. and abroad, particularly in Europe. Beard was also close friends with Julia Child, another American food personality, who was responsible for making French cuisine accessible to the average home cook. The two met in 1961 and remained close until Beard passed away in 1985. She once said of him, “People just adored him. He was so jolly, so nice, and so generous… He was so open, he had such a general love of food, and I think he encouraged everybody.” Child was instrumental in the creation of the James Beard Foundation after his death; the foundation honors exceptional contributions to American culinary arts and related fields.
Sadly, as successful as he was professionally, Beard’s fame made him feel pressured to keep his sexuality hidden from the public for most of his life. He only came out in 1981 in the revised version of his autobiography, Delights & Prejudices: A Memoir with Recipes, where he wrote about his relationship with his partner, Gino Cofacci. The couple spent 30 years together, and when Beard passed away in 1985, the late chef left Cofacci an apartment in his townhouse. Even so late in life, coming out was a risky thing for a celebrity like Beard to do, especially during the AIDS epidemic and anti-LGBTQ atmosphere of the 1980s. Then, as now, being a celebrity can be a double-edged chef’s knife.
-
FREEUS History PP&T CurioFree1 CQ
In honor of Pride Month, we’re taking a look at a hero of the Revolutionary War: Prussian nobleman Baron von Steuben. His work teaching European military techniques to struggling American troops helped turn the tide of the war. Von Steuben was also a gay man at a time when being so was a crime, and he was persecuted for it despite his military expertise.
Born in 1730 to a military family, von Steuben enlisted in the Prussian army at 16 or 17. After 17 years of service, he left the army as an experienced captain and a veteran of the Seven Years’ War. Despite having distinguished himself in service, von Steuben was dismissed from the military in 1763. The dismissal came at a time when the Prussian military was downsizing during an extended period of peace, but some historians believe that his sexuality might have played a role in his ouster. Afterward, von Steuben found work as a court chamberlain for 11 years, but he yearned for military service once more. Yet with Europe in a state of relative peace, military positions were few and far between. After the American Revolutionary War broke out, von Steuben was offered a job in the Continental Army by Benjamin Franklin, but he balked at the idea, as he wished to remain in Europe. Not long after, he was offered a military position in Baden, Germany, but the offer fell through when an anonymous letter accused von Steuben of having “taken familiarities” with other men in a previous job. Without any other options and unwilling to risk criminal charges, von Steuben took Franklin up on his still-standing offer and sailed for America in 1777.
When von Steuben arrived in America, an inflated reputation preceded him. At the time, American officers were growing resentful of the influx of European officers, and Franklin had embellished von Steuben’s rank and accomplishments to placate them. After getting acquainted with key political figures, von Steuben was sent to Valley Forge to serve under George Washington, with Alexander Hamilton and John Laurens as his aides. There, he found the camp in shambles, desperately in need of order. The army was plagued by low morale and poor discipline, both made worse by the brutal winter. Faced with the daunting task of getting the men into fighting shape by spring, von Steuben got to work as a drillmaster, teaching the troops how to march in formation and reorganizing the chain of command to give officers more responsibilities.
All the while, von Steuben made little effort to hide his sexuality, which both Franklin and Washington knew of but considered irrelevant for his role. Though the men of the camp found von Steuben to be a strange figure (he couldn’t speak English save for a few curse words), they respected him nonetheless. In fact, von Steuben’s comparatively outlandish mannerisms seemed to command the men’s attention, or at the least, their curiosity. He also threw extravagant parties at camp for the officers, who returned the favor in kind by donating their rations for feasts. By the next year, the once ragtag band of soldiers marched, shot, and charged like veterans. The men of Valley Forge, who were driven to fight by patriotic passion, had been tempered by Prussian military prowess.
After the war, von Steuben was granted American citizenship and a large estate in New York. He lived out the rest of his life there with William North and Benjamin Walker, who had served him as aides-de-camp. North and Walker were both legally adopted by von Steuben, a common practice at the time for homosexual men to ensure their partners could inherit their property. Today, von Steuben’s contributions to the American Revolution have been largely forgotten, though that is beginning to change. In recent years, LGBTQ members of the military have made efforts to shed light on von Steuben’s role in the war, and he is recognized by many as the father of America’s professional Army. You could say that his work should be the pride of the nation.
[Image description: A painted portrait of Baron von Steuben outdoors in a military uniform.] Credit & copyright: Ralph Earl (1751–1801), Friedrich Wilhelm von Steuben, 1786. Fenimore Art Museum, N0198.1961, Wikimedia Commons. This work is in the public domain in its country of origin and other countries and areas where the copyright term is the author's life plus 100 years or fewer. -
FREEBiology PP&T CurioFree1 CQ
Can you keep up with the changes? As summer approaches, countless critters from tree frogs to butterflies are busy going through the process of metamorphosis. One of the strangest and most amazing processes in the biological world, metamorphosis occurs very differently in different species. From insects to frogs to jellyfish, the process of transforming comes in many forms.
Metamorphosis is a process in which an animal goes through drastic physical changes in distinct stages as it matures. Though only a few species (like butterflies) are famous for metamorphosis, roughly 60 percent of all animals—both vertebrates and invertebrates—go through metamorphosis at some point in their life cycles. All beetles, for example, begin their lives as larvae, which eventually become pupae before emerging as adult insects. Yet, for such a widespread biological process, little is known about how metamorphosis first evolved. Since so many different kinds of animals metamorphose, it’s likely that a host of different evolutionary pressures led the process to develop, and it might even have done so more than once, in different species. One hypothesis specific to insects suggests that the development of wings may have had something to do with it. Insects that go through a larval stage, for instance, still molt as they grow bigger, periodically shedding their outer layer of skin. But molting with wings is difficult, so much so that only mayflies (of which there are 3,000 extant species) bother with it. Other insects only develop wings in their adult forms, having spent their youths concerned only with eating and growing. Some non-insects might have developed metamorphosis as a sort of early-stage defense mechanism. Tadpoles, for example, can spend their early lives relatively safe in shallow, stagnant water where there aren’t many predators. There, they can feed and grow until they’re big enough to emerge as frogs or toads.
While some animals simply grow new limbs during metamorphosis, others take a much more extreme approach. Moon jellyfish, for example, begin life as polyps attached to the seafloor and eventually develop a segmented stalk body. The segments then break off, and each individual segment becomes a separate, adult jellyfish. Butterflies and moths also go through a famously extreme change. After hatching, their larvae feed and grow until they’re ready to become pupae. Inside their cocoons or chrysalises, their bodies completely liquify, becoming a “living soup” of cells and proteins. Their adult bodies form slowly, over the course of days to weeks, practically from scratch. Despite extensive study, scientists still aren’t entirely sure how every step of this process works.
When it comes to metamorphosis-related mysteries, moths and butterflies once again provide a strange example, as these adult insects can somehow remember things they learned as caterpillars. Scientists at Georgetown University in Washington, D.C., trained tobacco hornworm caterpillars to associate the scent of ethyl acetate with mild electric shocks. Then, they allowed the caterpillars to metamorphose as usual and waited until they emerged as adults. When these adults were exposed to ethyl acetate again, an astounding 78 percent of them still avoided the chemical. A previous experiment involving fruit flies had shown that insects can retain information learned as larvae into adulthood, but this was the first time the effect was tested in caterpillars. The experiment shows that, despite their bodies completely liquifying during metamorphosis, some part of their brain remains intact enough to retain information through the transformation. From soup to nuts, that’s got to be a strange way to grow up.
[Image description: A butterfly with a white body and wings with a black, white, yellow, and orange pattern perches on a yellow flower. ] Credit & copyright: Jeevan Jose, Kerala, India. Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREELiterature PP&T CurioFree1 CQ
Quality and quantity aren’t always mutually exclusive. It’s a lesson that French novelist and playwright Honoré de Balzac had to learn for himself, but once he did, he became one of the most renowned and prolific writers of his time. Born this month in 1799, Balzac is largely credited with setting the standard for the modern novel.
Balzac was born May 20, 1799, in Tours, France. His surname was originally Balssa, but the author changed it later in life because he felt that “Balzac” sounded more auspicious. After he was born, Balzac was raised by a wet nurse until he was weaned, a common practice at the time. Yet nearly as soon as he returned to his parents, he was sent away to school. At the age of 16, he began working as a lawyer’s clerk, but just three years later, he left the profession to become a writer. The young author found little success in his literary endeavors, though, and had little support from his family. Along with writing several early novels that he didn’t even publish under his own name, Balzac suffered crippling financial blows due to a series of unsuccessful business ventures that left him deep in debt. Motivated by the need to pay off his creditors (including his own mother), he dove headfirst into his writing. It was an unconventional start to what became a distinguished career.
It’s an understatement to say that Balzac was not a man of moderation. When he wrote, he did so ceaselessly, for hours or sometimes days. Fueled by unchecked quantities of black coffee (some sources say as many as 50 cups a day), he would churn out page after page of handwritten work, barely stopping to eat or sleep. When he wasn’t writing, Balzac made himself known in Parisian society through scandalous affairs and affectations of grandeur. Aside from changing his name to blend into high society, Balzac also indulged in luxuries beyond his means and used the coat of arms of an unrelated family to represent himself. These efforts were actually pretty successful, and Balzac earned notoriety for being a gregarious braggart as much as for being a writer. As for his body of work, it was informed by his intimate understanding of Parisian society. His characters are known for their complexity and distinctly French idiosyncrasies, which made them seem very real in their time. Balzac was known for portraying objects and locations in such vivid detail that they almost became characters of their own. Thus, his stories had a depth and wealth of description not commonly found in other novels of his era. That’s especially true of his magnum opus, La Comédie humaine, or The Human Comedy in English. Written between 1829 and 1848 and consisting of 91 novels and novellas, La Comédie humaine is a collection of interconnected stories that showcases every level of Parisian society in the years between the French Revolution and the Revolution of 1848. Through this series, Balzac explores the moral and philosophical ideas at the heart of the clashes between France’s social classes, covering everything from economics to romance. Unfortunately, Balzac died relatively young, at the age of 51, following a brief illness, just a few months after his marriage to his longtime correspondent and romantic interest Ewelina Hańska. Some believe that his heart failure was the result of his lifelong, excessive coffee consumption.
Today, Balzac is remembered for helping to popularize the novel in its modern form. Unlike many writers of his time, he favored an omniscient narrator who presented the story with a logical flow, and he portrayed interesting, flawed, relatable characters. Some have even called him the “Shakespeare of the Novel” for his witty dialogue and for his part in shaping the literary format. Drink a cup of coffee in his memory if you’d like…but maybe just the one.
[Image description: An artistic depiction of a young Honore de Balzac in sepia tones.] Credit & copyright: Achille Devéria (1800–1857), Wikimedia Commons. The Museums of the City of Paris, Balzac’s House. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEWorld History PP&T CurioFree1 CQ
It’s one of the most tragic tales in all of history: a massive loss of knowledge that set humanity back by decades…right? Maybe not. The burning of the Library of Alexandria is certainly a dramatic tale, but in recent years many scholars have begun to question its validity. Not only do most accounts of the library’s burning come from many years after the supposed event itself, but no one can even agree on who did the actual burning.
The Library of Alexandria, founded in the early third century B.C.E. in Alexandria, Egypt, was one of the most massive and comprehensive libraries of its day. Part of a research institution called the Mouseion (which later came to include another, smaller library), the Library of Alexandria was likely the brainchild of Ptolemy I Soter, pharaoh of Ptolemaic Egypt, who began collecting papyrus scrolls for it long before a building was created to house them. His son, Ptolemy II Philadelphus, is more likely to have overseen the actual construction of the library itself during his own, subsequent reign. It was an ambitious project. The idea was for a true, universal library where all knowledge from around the world could be stored. To that end, Ptolemy II Philadelphus collected scrolls from wherever and whomever he could (scrolls, not bound books, were how written works were distributed at the time). The pharaoh might have been considered a hoarder of knowledge if not for one important detail: he made high-quality copies of almost every scroll he received and gave them back to the people who had provided the originals, usually historians or other scholars. After all, it would have caused bad blood and alienated Philadelphus from the scholarly world if he had simply taken these works for himself without permission…and permission usually hinged on a copy being provided. This means that, though the Library of Alexandria housed an impressive collection of knowledge that made Alexandria itself famous as a city of learning, much of that knowledge also still existed outside of the library’s walls.
That’s lucky, since the library did eventually come to ruin. How, exactly, that happened is still a source of debate, despite the longstanding myth that the library was purposefully burned in a single day. At the height of its popularity, the library housed somewhere between 40,000 and 400,000 scrolls, but that popularity eventually waned. In 145 B.C.E., Ptolemy VIII Physcon came to power, and he had very different ideas about knowledge than his predecessors. His reign was a violent one and included several massacres that saw many Alexandrian intellectuals killed or exiled. Scholars had been the lifeblood of Alexandria’s library, and without them it fell into decline. Then, there was a fire. The two most common stories about the library’s burning implicate either Julius Caesar or Caliph Umar, who led the Arab conquest of Alexandria in 642 C.E. The second story is easily dismissed, since other sources point to the library already being gone by the time of that particular invasion. As for Caesar, he may have burned the library…but it was probably an accident. According to the ancient Greek philosopher and historian Plutarch, in 48 B.C.E., during Caesar’s civil war, Caesar set fire to a fleet of Egyptian ships in Alexandria’s harbor. Due to windy weather, the flames spread to the library, which burned with all the scrolls inside. However, many historians now believe that the library survived this accidental burning and may even have been rebuilt afterward, since there are records of other historical figures visiting the library after Caesar’s war was over.
Ultimately, the library likely died due to a problem that still plagues libraries today: a lack of funding. During the Roman period, those in power simply stopped prioritizing the library’s upkeep, and it fell into disrepair. The Palmyrene Invasion of 270 C.E. likely destroyed the rest of the already-unkempt structure. Still, it’s unlikely that the loss of the library set humanity’s overall progress back, despite stories to the contrary. After all, much of the knowledge inside had already been copied. Then, as now, it pays to back up your work!
[Image description: A black-and-white illustration depicting the burning of the Library of Alexandria with a crowd of people rushing toward the flames.] Credit & copyright:
Ambrose Dudley (1867–1951), The Burning of the Library at Alexandria in 391 AD. Bridgeman Art Library: Object 357910, Wikimedia Commons. This work is in the public domain in the United States because it was published (or registered with the U.S. Copyright Office) before January 1, 1929.