Curio Cabinet / Person, Place, or Thing
-
Engineering PP&T Curio
What’s a little rain while you’re driving? Terrifying. At least, it was at the beginning of the 20th century. American inventor Mary Anderson filed the first-ever patent for a windshield wiper on this day in 1903. Before then, people just had to make do with wet or muddy windshields. However, Anderson never got to reap the rewards for her world-changing invention.
Born in Alabama in 1866, Anderson wasn’t a career inventor. Little is known about her early life, but as an adult, she was a winemaker, rancher, and real estate developer. By all available accounts, her invention of the first windshield wiper was her one and only foray into the world of engineering or design. But her varied job titles imply that she likely had a keen eye for spotting opportunities, and the inspiration for her invention was no exception. The story goes that Anderson was visiting New York City during the winter and boarded a streetcar on one particularly wet and blustery day. Because of the inclement weather, the windshield of the streetcar kept getting splattered with water and debris, forcing the driver to open a window to manually wipe the windshield clean. Every time he did so, cold wind would blast through the opening, and this didn’t sit well with Anderson, who was used to the balmy Southern weather of her home state. Streetcar drivers weren’t the only ones who had to contend with this problem, of course. As automobiles became more common, the drivers of those vehicles resorted to similar measures or simply drove with their heads sticking out car windows. Inspired by the streetcar driver’s struggle, and perhaps frustrated by the cold ride, Anderson set out to come up with a better solution. In 1903, the U.S. Patent and Trademark Office awarded Anderson U.S. Patent No. 743,801 for her “Window-Cleaning Device.”
Anderson’s invention, though groundbreaking for its time, doesn’t resemble the modern iteration much. Her version was still operated by hand (albeit from the inside) and consisted of a single rubber blade to clear the windshield. The device also included a counterweight to keep the blade firmly in contact with the glass, and though it was relatively primitive, it was still pretty effective. Unfortunately for Anderson, automakers were hesitant to embrace her invention early on. Despite several attempts, Anderson was never able to attract investors or have the device manufactured for sale due to lack of interest. She may simply have been too far ahead of her time. Automakers didn’t start making windshield wipers standard equipment in their vehicles until 1916. By then, Anderson’s patent had expired, keeping her from making any profit from her invention through licensing. Then again, automakers may have deliberately waited until her patent expired so as not to pay her any licensing fees, though the actual reason is unclear.
Though her invention may not have earned her any money, Anderson has since been recognized for her contribution. In 2011, nearly 60 years after her death, she was inducted into the National Inventors Hall of Fame. These days, many improvements have been made to her original windshield wiper. In 1917, Charlotte Bridgewood invented the Electric Storm Windshield Cleaner (U.S. Patent No. 1,274,983), the first to be powered by electricity. A few years later, in 1922, brothers William M. and Fred Folberth invented the simply named Windshield Cleaner (U.S. Patent No. 1,420,538), which ran on engine vacuum. However, the version that most windshield wipers are based on today was invented by Robert Kearns in the 1960s. Called the Windshield Wiper System With Intermittent Operation (U.S. Patent No. 3,351,836), it was motorized and capable of variable speeds. Who knew there were so many ways to clean a windshield?
[Image description: raindrops on a windshield which has been partially wiped clean.] Credit & copyright: Valeriia Miller, Pexels -
Art Appreciation PP&T Curio
There are movements that shape artists, and there are artists that shape movements. Henri Matisse was decidedly the latter of the two. The multidisciplinary French artist passed away on this day in 1954, and during his illustrious career, he became one of the most prolific and influential artists of all time, engaging in friendships and rivalries with other masters of modern art, most notably Pablo Picasso.
Henri Émile Benoît Matisse was born on December 31, 1869, in Le Cateau-Cambrésis, Nord, France. Unlike many of his artistic contemporaries, Matisse wasn’t trained in the discipline, nor did he show any significant interest in it until he was already a young man. Before picking up his first paintbrush, Matisse moved to Paris in 1887 to study law and went on to find work as a court administrator in northern France. It wasn’t until 1889, when he became ill with appendicitis, that he began painting, after his mother gifted him some art supplies to stave off boredom during his recovery. The young Matisse quickly became completely enamored with painting, later describing it as “a kind of paradise.” Much to the chagrin of his father, Matisse abandoned his legal ambitions and moved back to Paris to learn art, studying under the likes of William-Adolphe Bouguereau and Gustave Moreau. However, the work produced in his early years, mostly consisting of still lifes in earth-toned palettes, was quite unlike the work that would eventually make him famous. His true artistic awakening didn’t occur until 1896, when he met Australian painter John Russell. A friend of Vincent van Gogh, Russell showed the struggling artist a collection of Van Gogh’s paintings, introducing Matisse to Impressionism.
In the following years, Matisse began collecting and studying the work of his contemporaries, particularly the Neo-Impressionists. Inspired by their bright colors and bold brushstrokes, his own vision of the world began coalescing along with that of other, like-minded artists into a relatively short-lived but influential movement called Fauvism. The works of the “Fauves” (“wild beasts” in French) like Matisse were defined by unconventional and intense color palettes laid down with striking brushstrokes. Despite being a founding member of a movement, Matisse was never one to settle for just one style or medium. Throughout his life, he dabbled in pointillism, printmaking, sculpting, and paper cutting. At times, he even returned to and was praised for his more traditional works, which he pursued in the post-WWI period. Among his contemporaries, there was only one who seemed to match him: Pablo Picasso. Matisse’s rivalry with this fellow master of modern art is well documented, and the two seemed to study each other’s works carefully. Matisse and Picasso often painted the same scenes and subjects, including the same models. At times, they even titled their pieces the same, not for lack of creativity, but to serve as a riposte on canvas. Matisse once likened their rivalry to a boxing match, and though the two didn’t initially care for each other’s work, they eventually developed a mutual admiration.
Today, the name Matisse is practically synonymous with modern art, and his influence goes beyond the canvas. In his later years, Matisse’s failing health forced him to rely on assistants for much of his work. During the 1940s, Matisse worked with paper, creating colorful collages called gouaches découpés that he described as “painting with scissors.” His final masterpiece, however, was his design for a stained-glass window for the Union Church of Pocantico Hills in New York. No matter what medium he touched, Matisse always left an impression, leaving behind a body of work that is wildly eclectic yet always recognizably his. Surely his father had to admit that Matisse did the right thing by leaving law school.
[Image description: A fanned-out group of paint brushes smattered with paint.] Credit & copyright: Steve Johnson, Pexels -
US History PP&T Curio
New York is full of engineering wonders, from skyscrapers to suspension bridges, but one of the most impressive isn’t even visible above ground. The New York City subway system transports over a billion riders through the urban jungle every year. The city’s first subway system opened on this day in 1904, and since then it has continued to expand and serve an exponentially growing population.
By the late 1800s, New York City was already the most populated city in the United States. Already known as a center of commerce and culture, the city was growing quickly…and quickly running out of room. Roads were congested with horse-drawn carriages, and the island borough of Manhattan was served by elevated railways that took up precious real estate. City planners needed a solution that would address the transportation needs of residents without taking up what little room was left. A subway system seemed like a logical answer. After all, the world’s first underground transit system was already a proven success, as it had been operating in London since 1863. In nearby Boston, America’s first subway was finished in 1897, though it was more limited in scope and used streetcars. There had even been a limited subway line in New York City between 1870 and 1873. During those short few years, a pneumatic-powered, 18-passenger car traversed under Broadway, propelled by a 100-horsepower fan. There had been talk of expanding the line, but the technology was made obsolete by improvements in electric traction motors, and the line was soon abandoned. Indeed, the future of transit in New York City was electric, and after much lobbying from the city’s Board of Rapid Transit and financing from prominent financier August Belmont, Jr., construction on the permanent subway system began in 1900.
As construction crews dug underground, they built temporary wooden bridges over the subway tunnels to allow traffic to continue unimpeded. Not everything went so smoothly, though. Because the tunnel was close to the surface in many places, construction often involved moving existing infrastructure like gas and water lines. Some things weren’t so easy to move out of the way, such as the Columbus Monument in Central Park. One section of the tunnels had to pass through the east side of the 700-ton monument’s foundation, and simply digging through could have led to its collapse. To avoid damaging it, workers had to build a new support under the monument, slowing progress on the subway. Another major obstacle was the New York Times building, which had a pressroom below where the tunnel was to be built. So, the subway was simply built through the building, with steel channels added to reinforce its structure. Despite these and other engineering challenges, construction was completed just four years after it started, and the inaugural run of the city’s new transit system took place on October 27, 1904, at 2:35 PM, with Mayor George B. McClellan Jr. at the controls. The subway system was operated by the Interborough Rapid Transit Company (IRT) and consisted of just 9.1 miles of track passing through 28 stations. That may seem limited compared to today, but it was an astounding leap for commuters at the time, with the IRT claiming to take passengers from “City Hall to Harlem in 15 minutes.” At 7 PM, just hours after the inaugural run, the subway was opened to the public for just a nickel per passenger. On opening day, around 100,000 passengers tried out the newly minted subway, and that number has only grown since.
Today, New York City’s subway system has 472 stations and 665 miles of track. It’s operated by the Metropolitan Transportation Authority (MTA) and serves over three million riders a day. The city’s subway system wasn’t the first, nor is it currently the largest, but it remains one of the few to operate 24 hours a day, 7 days a week—a feature that many New Yorkers have come to rely on. The extensive and convenient transit system allowed the city to grow throughout the 20th century, and the Big Apple might have ended up as Small Potatoes without it.
[Image description: A subway train near a sign reading “W 8 Street.”] Credit & copyright: Tim Gouw, Pexels -
Literature PP&T Curio
Halloween approaches, and with it a host of familiar, spooky tales, many of which have their basis in classic novels. Oscar Wilde’s The Picture of Dorian Gray isn’t quite as famous as Dracula or Frankenstein, but this novel is just as spooky, and it’s had its fair share of pop culture appearances and film adaptations too. It’s not exactly a story about a monster…but about the monstrous faults that lurk in all of us.
The Picture of Dorian Gray was first published in 1890 in Lippincott’s Monthly Magazine as a novella, which was common for new stories at the time. It follows the titular character through his descent into moral decay. Dorian Gray is a handsome, rich young man who enjoys a relatively carefree life. Gray’s friend, Basil Hallward, paints his portrait and discusses Gray’s extraordinary beauty with Lord Henry Wotton, a hedonistic socialite. When Gray arrives to see the finished piece, Wotton describes his personal life philosophy: that one should live to indulge their impulses and appetites. He goes on to say to Gray, “…you have the most marvelous youth, and youth is the one thing worth having.” As Hallward places the finishing touches on the painting, Gray declares, “But this picture will remain always young. It will never be older than this particular day of June…If it were only the other way! If it were I who was to be always young, and the picture that was to grow old! For that—for that—I would give everything! Yes, there is nothing in the whole world I would not give! I would give my soul for that!” From that point on, Gray begins to commit cruel and even violent transgressions, the first of which leads to the death of his lover, Sibyl Vane. Yet, he remains ageless and beautiful while his portrait warps into an increasingly grotesque reflection of his inner self. Ultimately, even his attempt to redeem himself with a kind act is revealed to be self-serving, as the portrait changes to reflect his cunning. Eventually, Gray murders the portrait’s creator after Hallward discovers how hideous it has become. When a crazed Gray stabs the portrait in frustration, a servant hears him scream and comes to his aid, only to find the body of an ugly old man with a knife in his chest. The portrait, meanwhile, has reverted to its original, beautiful form.
Wilde’s novel didn’t have quite the reception he’d hoped for. When it was unleashed upon the Victorian readership, it set off a storm of controversy with Wilde at the center. This was despite the fact that Lippincott’s editor, J. M. Stoddart, had heavily edited the novella to censor portions that he believed were too obscene for Victorian sensibilities. The cuts to the text were made without Wilde’s input or consent, and largely targeted the homosexual undertones present in the interactions between some of the male characters. In particular, Hallward was originally characterized as having much more overt homosexual inclinations toward Gray. Stoddart also removed some of the more salacious details surrounding the novel’s heterosexual relationships. When the book was engulfed in scandal, Wilde himself made further edits of his own accord, but to no avail. When Lord Alfred Douglas’s father accused Wilde of having engaged in a homosexual relationship with Douglas, the author sued him for libel. The suit fell apart in court after the homosexual themes in The Picture of Dorian Gray were used as evidence against Wilde, and its failure left him open to criminal prosecution for gross indecency under British law. After two trials, Wilde was sentenced to two years of hard labor in 1895. After his release, he was plagued by poor health, while commercial success eluded him. Wilde passed away in Paris, France, in 1900 of acute meningitis.
Today, The Picture of Dorian Gray is seen in a much different light. The work is considered one of the best examples of Wilde’s wit and eye for characterization. It’s also the most representative of Wilde’s Aestheticism, a worldview espoused by several characters in the novella. Nowadays, a version true to the author’s original intent is available as The Picture of Dorian Gray: An Annotated, Uncensored Edition (2011), which restores material cut from the text by Stoddart and Wilde. It may not be so controversial for modern sensibilities, but just in case, make sure you’re wearing some pearls so you have something to clutch if you buy a copy.
[Image description: A 1908 illustration from Oscar Wilde's The Picture of Dorian Gray] Credit & copyright:
Eugène Dété (1848–1922) after Paul Thiriat (1868–1943), 1908. Mississippi State University, College of Architecture Art and Design, Wikimedia Commons. This work is in the public domain in its source country and the United States because it was published (or registered with the U.S. Copyright Office) before January 1, 1929.Halloween approaches, and with it a host of familiar, spooky tales, many of which have their basis in classic novels. Oscar Wilde’s The Picture Dorian Gray isn’t quite as famous as Dracula or Frankenstein, but this novel is just as spooky, and it’s had its fair share of pop culture appearances and film adaptations too. It’s not exactly a story about a monster… but about the monstrous faults that lurk in all of us.
The Picture Dorian Gray was first published in 1890 in Lippincott’s Monthly Magazine as a novella, which was common for new stories at the time. It follows the titular character through his descent into moral decay. Dorian Gray is a handsome, rich young man who enjoys a relatively carefree life. Gray’s friend, Basil Hallward, paints his portrait and discusses Gray’s extraordinary beauty with Lord Henry Wotton, a hedonistic socialite. When Gray arrives to see the finished piece, Wotton describes his personal life philosophy: that one should live to indulge their impulses and appetites. He goes on to say to Gray, “…you have the most marvelous youth, and youth is the one thing worth having.” As Hallward places the finishing touches on the painting, Gray declares, “But this picture will remain always young. It will never be older than this particular day of June…If it were only the other way! If it were I who was to be always young, and the picture that was to grow old! For that—for that—I would give everything! Yes, there is nothing in the whole world I would not give! I would give my soul for that!” From that point on, Gray begins to commit cruel and even violent transgressions, the first of which leads to the death of his lover, Sibyl Vane. Yet, he remains ageless and beautiful while his portrait warps into an increasingly grotesque reflection of his inner self. Ultimately, even his attempt to redeem himself by a kind act is revealed to be self-serving, as the portrait changes to reflect his cunning. Eventually, Gray murders the portrait's creator after Hallword discovers how hideous it has become. When a crazed Gray stabs the portrait in frustration, a servant hears him scream and comes to his aid, only to find the body of an ugly, old man with a knife in his chest. The portrait, meanwhile, has reverted back to its original, beautiful form.
Wilde’s novel didn’t have quite the reception he’d hoped for. When it was unleashed upon the Victorian readership, it set off a storm of controversy with Wilde at the center. This was despite the fact that Lippincott’s editor, J. M. Stoddart, had heavily edited the novella to censor portions that he believed were too obscene for Victorian sensibilities. The cuts to the text were made without Wilde’s input or consent, and largely targeted the homosexual undertones present in the interactions between some of the male characters. In particular, Hallward was originally characterized as having much more overt homosexual inclinations toward Gray. Stoddart also removed some of the more salacious details surrounding the novel’s heterosexual relationships. When the book was engulfed in scandal, Wilde himself made further edits of his own accord, but to no avail. When Wilde was accused of having engaged in a homosexual relationship with Lord Alfred Douglas by Douglas’s father, the author sued the latter for libel. The suit fell apart in court after the homosexual themes in The Picture of Dorian Gray were used as evidence against Wilde, and the failure of the suit left him open to criminal prosecution for homosexuality under British law. After two trials, Wilde was sentenced to two years of hard labor in 1895. After his release, he was plagued by poor health while commercial success eluded him. Wilde passed away in Paris, France, in 1900 of acute meningitis.
Today, The Picture of Dorian Gray is seen in a much different light. The work is considered one of the best examples of Wilde’s wit and eye for characterization. It’s also the most representative of Wilde’s Aestheticism, a worldview espoused by several characters in the novella. Nowadays, a version true to the author’s original intent is available as The Picture of Dorian Gray: An Annotated, Uncensored Edition (2011), which restores material cut from the text by Stoddart and Wilde. It may not be so controversial for modern sensibilities, but just in case, make sure you’re wearing some pearls so you have something to clutch if you buy a copy.
[Image description: A 1908 illustration from Oscar Wilde's The Picture of Dorian Gray] Credit & copyright:
Eugène Dété (1848–1922) after Paul Thiriat (1868–1943), 1908. Mississippi State University, College of Architecture, Art and Design, Wikimedia Commons. This work is in the public domain in its source country and the United States because it was published (or registered with the U.S. Copyright Office) before January 1, 1929. -
FREEPolitical Science PP&T CurioFree1 CQ
For better or worse, modern American politics are a bombastic affair involving celebrity endorsements and plenty of talking heads. Former President Jimmy Carter, who recently became the first U.S. President to celebrate his 100th birthday, has lived a different sort of life than many modern politicians. His first home lacked electricity and indoor plumbing, and his career involved more quiet service than political bravado.
Born on October 1, 1924, in Plains, Georgia, James Earl “Jimmy” Carter Jr. was the first U.S. President to be born in a hospital, as home births were more common at the time. His early childhood was fairly humble. His father, Earl, was a peanut farmer and businessman who enlisted young Jimmy’s help in packing goods to be sold in town, while his mother was a trained nurse who provided healthcare services to impoverished Black families. As a student, Carter excelled at school, encouraged by his parents to be hardworking and enterprising. Aside from helping his father, he also sought work with the Sumter County Library Board, where he helped set up the bookmobile, a traveling library to service the rural areas of the county. After graduating high school in 1941, Carter attended the Georgia Institute of Technology for a year before entering the U.S. Naval Academy. He met his future wife, Rosalynn Smith, during his last year at the Academy, and the two were married in 1946. After graduating from the Academy the same year, Carter joined the U.S. Navy’s submarine service, a dangerous job at the time. He even worked with Captain Hyman Rickover, the “father of the nuclear Navy,” and studied nuclear engineering as part of the Navy’s efforts to build its first nuclear submarines. Carter would have served aboard the U.S.S. Seawolf, one of the first two such vessels, but the death of his father in 1953 prompted him to resign so that he could return to Georgia and take over the struggling family farm.
On returning to his home state, Carter and his family moved into a public housing project in Plains due to a post-war housing shortage. This experience inspired him to work with Habitat for Humanity decades later, and it also made him the first president to have lived in public housing. While turning around the fortunes of the family’s peanut farm, Carter became involved in politics, earning a seat on the Sumter County Board of Education in 1955. In 1962, he won a seat in the Georgia State Senate, where he made a name for himself by targeting wasteful spending and laws meant to disenfranchise Black voters. Although he failed to win the Democratic primary in 1966 for a seat in the U.S. Congress (largely due to his support of the civil rights movement), he refocused his efforts toward the 1970 gubernatorial election. After a successful campaign, he surprised many in Georgia by advocating for integration and appointing more Black staff members than previous administrations. Though his idealism attracted criticism, Carter was largely popular in the state for his work in reducing government bureaucracy and increasing funding for schools.
Jimmy Carter’s political ambitions eventually led him to the White House when he took office in 1977. His Presidency took place during a chaotic time, in which the Iranian hostage crisis, a war in Afghanistan, and economic worries were just some of the problems he was tasked with helping to solve. After losing the 1980 Presidential race to Ronald Reagan, Carter and his wife moved back into their modest, ranch-style home in Georgia where they lived for more than 60 years, making him one of just a few presidents to return to their pre-presidential residences. Today, Carter is almost as well-known for his work after his presidency as during it, since he dedicated much of his life to charity work, especially building homes with Habitat for Humanity. He also wrote over 30 books, including three that he recorded as audiobooks, which won him three Grammy Awards in the Spoken Word Album category. Not too shabby for a humble peanut farmer.
[Image description: Jimmy Carter’s official Presidential portrait; he wears a dark blue suit with a light blue shirt and striped tie.] Credit & copyright: Department of Defense. Department of the Navy. Naval Photographic Center. Wikimedia Commons. This work is in the public domain in the United States because it is a work prepared by an officer or employee of the United States Government as part of that person’s official duties under the terms of Title 17, Chapter 1, Section 105 of the US Code. -
FREEPolitical Science PP&T CurioFree1 CQ
With nationwide relief efforts underway following the devastation of Hurricane Helene, you’ve likely been hearing a lot about one federal agency: FEMA (Federal Emergency Management Agency). With a workforce of more than 20,000 people, FEMA is uniquely equipped to respond to all sorts of emergencies. Before its founding, though, Americans dealing with disasters were largely left on their own.
In December of 1802, Portsmouth, New Hampshire, was practically destroyed by a fire. At the time, Portsmouth was among the U.S.’s busiest ports, and its destruction spelled disaster for the economy. The federal government didn’t directly help rebuild the city, but the U.S. Congress suspended bond payments for local merchants to allow them to continue operations in Portsmouth. Similar measures were taken after other major fires, such as one in New York City in 1835 and the Great Chicago Fire of 1871. Still, there wasn’t much interest in creating a proactive federal response system for disasters until the early 20th century, when two tragic events led to calls for action. First, there was the Galveston Hurricane in 1900, which killed thousands of people. Then, the San Francisco Earthquake in 1906 leveled much of the city. In both cases, very little federal action was taken to address displaced citizens or to rebuild critical infrastructure, with the onus falling entirely on local governments. Those local governments, in turn, began asking the federal government to create some kind of task force to help when future disasters arose. In 1950, Congress created the Federal Disaster Assistance Program, giving the federal government powers to act directly in the case of disasters. A series of devastating hurricanes and earthquakes in the 1960s provided further impetus to expand these powers, resulting in the Disaster Relief Act of 1970, which allowed affected individuals to receive federal loans and tax assistance. Finally, in 1979, President Jimmy Carter issued an executive order to combine a number of agencies responsible for disaster response to create FEMA.
Since FEMA was created, it has helped in the face of everything from volcanoes to hurricanes, but it hasn’t always been beyond criticism. For example, the federal responses to the Loma Prieta Earthquake in 1989 and Hurricane Andrew in 1992 were considered inadequate. Major reforms in the 1990s and the increasing emphasis on being proactive, not simply reactive, allowed the agency to respond to disasters more effectively. Some of the proactive measures included purchasing property in areas at higher risk of natural disasters and encouraging more stringent building codes. While FEMA was improving its response to natural disasters, there were also unnatural disasters to contend with. In 1995, FEMA responded to the Oklahoma City Bombing. Six years later, the terrorist attacks of September 11, 2001, led to the most significant change to the agency since its creation. When the Department of Homeland Security (DHS) was created to handle federal responses to terrorist attacks, FEMA was absorbed into it, expanding its scope to terrorism preparedness.
Today, FEMA continues in its original mission of disaster relief, and it’s been getting busier by the year. With climate change creating storms of greater frequency and power, FEMA has been kept on its toes recently. When such storms approach, it’s up to governors of affected states to request assistance through the FEMA Regional Office. Since they can do this before storms actually strike, FEMA can begin providing financial aid and moving people and supplies into position before any actual damage has occurred. Aside from providing practical necessities like food, water, and shelter to affected people, part of FEMA’s purpose is to ensure that allocated funds are handled appropriately. After all, when things go sideways, you want to make sure everything else is on the up and up.
[Image description: An American flag with a wooden flagpole flying against a blue sky.] Credit & copyright: Crefollet, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEStyle PP&T CurioFree1 CQ
Ooo la la! This timeless headpiece is as French as escargot, yet the beret has managed to maintain incredible worldwide appeal throughout the centuries. This simple, unisex hat has shown up on the heads of everyone from European royals to uniformed soldiers and is still going strong despite a history that stretches back at least as far as the 14th century.
Although modern berets are heavily associated with French fashion and largely gained popularity in the 20th century, flat-cap style hats have been worn since the time of ancient Greece. The true ancestor of the beret comes from Europe in the 1300s, when felted or fulled wool hats were a durable, warm choice for many people working outdoors. The simple design of these hats gave them a timeless quality that endured through the centuries, and they were eventually adopted by the people of the Basque region, which straddles the border between France and Spain. The Basque people were renowned fishermen and whalers who sailed long distances in search of their quarry. Basque berets were perfect for these hardy sailors, who needed water-resistant hats to keep them warm while sailing the cold, northern seas. Their version of the beret became so emblematic of their culture that receiving one at the age of 10 was a rite of passage for boys in Béarn, the region where the hat is said to have originated. Other European cultures recognized the sailing prowess of Basque fishermen as well, and many came to Basque country to learn from the best. That, along with the long-reaching travels of the Basque sailors, spread the Basque beret around Europe. It wasn’t until 1835, though, that the hat began to be called “beret,” short for its French name, “béret basque.” Throughout the 1800s, the hat gained increasing popularity outside of maritime professions, though for less peaceful purposes.
The beret came into the forefront of fashion and history when Spanish-Basque military officer Tomás de Zumalacárregui wore a large, red iteration of the hat during the First and Second Carlist Wars. From then on, the beret was inextricably linked to military aesthetics, and was adopted by various European armies thereafter. Another famous example was the Chasseurs Alpins, an elite group of French soldiers trained to fight in the mountains. They wore blue berets to distinguish themselves and keep warm. Then came the brutal conflicts of WWI and WWII, when the advent of wireless communication and the widespread adoption of radios and telephones gave the beret a novel advantage: its compact design allowed it to fit in the cramped spaces inside tanks and other vehicles, while also allowing for the wearing of headphones. Soon, berets became associated with elite forces like the Green Berets of the U.S. Army.
Around the same time, though, the beret once again found itself being worn for fashion. Berets were embraced by artists and writers like Ernest Hemingway, who considered their roots in European peasantry a means of rebelling against mainstream fashion. As Paris distinguished itself as the world’s fashion center, the hats became most heavily associated with France. Today, the beret remains largely a fashion statement, but it’s also been worn by political revolutionaries such as Che Guevara and the Black Panthers as a means to identify themselves. No matter who you are, though, when you put on a beret, you’re not just wearing a fashionable headpiece. You’re wearing a piece of history.
[Image description: A maroon-colored beret hat with a puffed decoration on top, sitting on a blank mannequin head.] Credit & copyright:
Metropolitan Museum of Art, Wikimedia Commons. Brooklyn Museum Costume Collection at The Metropolitan Museum of Art, Gift of the Brooklyn Museum, 2009; Gift of E. F. Schermerhorn, 1953. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEWorld History PP&T CurioFree1 CQ
Did you think you were in France? Au contraire, mon ami! Québec City may look like Paris, but it’s one of the oldest cities in Canada. Its Old World architecture also makes it one of the most distinctive cities in North America. Québec City’s unique culture emanates throughout the rest of the province of Québec, where French is still spoken as the primary language and locals are quite proud of their French heritage.
Québec City was founded in 1608 by French explorer Samuel de Champlain, but he wasn’t exactly the first to reach it. That distinction goes to another French explorer, Jacques Cartier, who is credited as “discovering” Canada in 1534. Cartier was the first European to encounter many of the indigenous communities that lived along the St. Lawrence River, and he named the new land “Kanata,” a Huron-Iroquois word for “village” or “settlement.” Cartier traveled and mapped the area around the river, eventually reaching the site where Québec City stands today. However, the French were unable to send further expeditions, let alone establish colonies, due to religious upheavals and wars back in Europe. Once France was in a position to resume its exploration of Canada, or “New France,” it sent Champlain, who established Québec City as a trading post. The city was also of strategic importance, as its position on a narrow portion of the St. Lawrence River allowed the French to control travel farther into the continent for the fur trade. Unfortunately for the French, the British also had their eyes on the North American fur trade, and the two countries came into military conflict over control of New France. The British managed to capture and hold Québec City between 1629 and 1632, and in 1759, they once again defeated the French. This time, though, the French were forced to give up most of their territory in North America, and Québec City was never returned to France.
Despite British rule, though, Québec City managed to hold on to its French culture. Much of this is due to the 1774 passage of the Québec Act, which allowed the Francophone residents to maintain their language and cultural institutions. Then, the Constitutional Act of 1791 split Canada into Upper Canada and Lower Canada (with Québec City as the provincial capital), which would become the modern-day provinces of Ontario and Québec, respectively. The Constitutional Act helped draw a clear cultural boundary, contributing to Québec and its capital remaining ardently French in culture. French was declared the sole official language of Québec after the province passed the Official Language Act in 1974, followed by the 1977 Charter of the French Language, which made French mandatory in schools, businesses, government administration, and signage. Much of France’s Old World influence can also be seen in Québec City’s historic buildings, some of which date back to French rule in the 1600s.
In a strange way, the survival of the city’s architectural heritage is owed, at least in part, to its economic struggles in the late 19th century. The historic district of Old Québec contains some of the oldest buildings in the city, but it didn’t remain largely untouched just for cultural reasons. Rather, the economic hardships of the late 19th century made it too expensive to redevelop. That’s not to say that there isn’t a longstanding spirit of historic preservation in the city. In the 1870s, demolitions began on the then-obsolete fortifications that surrounded the city, but not everyone was eager to erase the city’s architectural heritage. Eventually, then-Governor General of Canada, Lord Dufferin, ordered that parts of the fortifications be saved for posterity, including St. Louis Gate and St. John Gate. In addition, Dufferin ordered the construction of new gates in the Romantic style but wide enough to accommodate the increasingly large volume of traffic. In 1985, Old Québec was declared a UNESCO World Heritage site, thanks in large part to people like Dufferin. Today, French is still the main language spoken in Québec City, which boasts some of the world’s most photographed buildings and a thriving French culinary movement. Sometimes it pays to look to the past when building a city’s future.
[Image description: Buildings and a courtyard lit up with multicolored lights at night in Quebec City.] Credit & copyright: Wilfredo Rafael Rodriguez Hernandez, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.Did you think you were in France? Au contraire, mon ami! Québec City
May look like Paris, but it’s one of the oldest cities in Canada. Its distinct Old World architecture also makes it one of the most unique cities in North America. Québec City’s unique culture emanates throughout the rest of the province of Québec, where French is still spoken as the primary language and locals are quite proud of their French heritage.
Québec City was founded in 1608 by French explorer Samuel de Champlain, but he wasn’t exactly the first to reach it. That distinction goes to another French explorer, Jacques Cartier, who is credited as “discovering” Canada in 1534. Cartier was the first European to encounter many of the indigenous communities that lived along the St. Lawrence River, and he named the new land “Kanata,” a Huron-Iroquois word for “village” or “settlement.” Cartier traveled and mapped the area around the river, eventually reaching the site where Québec City stands today. However, the French were unable to send further expeditions let alone establish colonies due to religious upheavals and wars back in Europe. Once France was in a position to resume their exploration of Canada, or “New France”, they sent Champlain, who established Québec City as a trading post. The location of the city was also of strategic importance, as its location on the narrow portion of the St. Lawrence River allowed the French to control travel farther into the continent for the fur trade. Unfortunately for the French, the British also had their eyes on the North American fur trade, and the two countries came into military conflict over control of New France. The British managed to capture and hold Québec City between 1629 and 1632, and in 1759, they once again defeated the French. This time, though, the French were forced to give up most of their territory in North America, and Québec City was never returned to France.
Despite British rule, though, Québec City managed to hold on to its French culture. Much of this is due to the 1774 passage of the Québec Act, which allowed Francophone residents to maintain their language and cultural institutions. Then, the Constitutional Act of 1791 split Canada into Upper Canada and Lower Canada (with Québec City as the provincial capital of the latter), which would become the modern-day provinces of Ontario and Québec, respectively. The Constitutional Act helped draw a clear cultural boundary, contributing to Québec and its capital remaining ardently French in culture. French was declared the sole official language of Québec when the province passed the Official Language Act in 1974, followed by the Charter of the French Language in 1977, which made French mandatory in schools, businesses, government administration, and signage. Much of France's Old World influence can also be seen in Québec City's historic buildings, some of which date back to French rule in the 1600s.
In a strange way, the survival of the city's architectural heritage is owed, at least in part, to its economic struggles in the late 19th century. The historic district of Old Québec contains some of the oldest buildings in the city, but it didn't remain largely untouched for cultural reasons alone. Rather, the economic hardships of the era made redevelopment too expensive. That's not to say that there isn't a longstanding spirit of historic preservation in the city. In the 1870s, demolitions began on the then-obsolete fortifications that surrounded the city, but not everyone was eager to erase the city's architectural heritage. Eventually, Lord Dufferin, then Governor General of Canada, ordered that parts of the fortifications be saved for posterity, including St. Louis Gate and St. John Gate. In addition, Dufferin ordered the construction of new gates in the Romantic style, wide enough to accommodate the increasingly large volume of traffic. In 1985, Old Québec was declared a UNESCO World Heritage site, thanks in large part to people like Dufferin. Today, French is still the main language spoken in Québec City, which boasts some of the world's most photographed buildings and a thriving French culinary movement. Sometimes it pays to look to the past when building a city's future.
[Image description: Buildings and a courtyard lit up with multicolored lights at night in Quebec City.] Credit & copyright: Wilfredo Rafael Rodriguez Hernandez, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEPolitical Science PP&T CurioFree1 CQ
For as much as we hear about voter fraud today (especially during election years), it's pretty rare in the modern United States; when it happens, it's usually on a small scale. That wasn't always the case, though. There was a time when lax regulations made it much easier for large groups to "fix" elections, especially local ones. Yet, it wasn't Congress or even a state-level lawmaker who took the first step toward stopping such fraud. It was actually a suffragist from small-town Indiana named Stella Courtright Stimson.
Even before she could legally vote, Stimson was heavily involved in local politics in her hometown of Terre Haute. In 1909, she was elected to the school board and was part of the Indiana Federation of Clubs, which promoted women's suffrage. But aside from the greater national issue of women's right to vote, Stimson was also concerned with the economic and social development of Terre Haute, where laws were laxly enforced. At the time, Terre Haute had a reputation for being a "wide open city," meaning that drinking, gambling, and prostitution went largely unregulated. Enabling and profiting from the city's illicit industries were politicians like city-engineer-turned-mayor Donn Roberts. Roberts first made a name for himself in Terre Haute's political circles by stuffing ballots and casting illegal votes with packs of men hired for the purpose. On election days, he would go from polling station to polling station to have his men cast fraudulent ballots under pseudonyms. In 1913, Roberts ran for mayor and ensured his own victory using the same tactics. As mayor, he turned a blind eye to the city's illegal businesses in exchange for bribes.
Stimson was well aware of Roberts’s operation and tried to inform the governor to no avail. Nevertheless, she and other local women gathered at polling stations to hinder Roberts by calling out those who were casting multiple ballots in various disguises or under false identities. They eventually found an ally in Joseph Roach Jr., a special prosecutor appointed to serve in a trial against Roberts in 1914. Although the women had gathered plenty of evidence, Roberts was ultimately acquitted by the jury. Undeterred by the defeat, Roach turned to federal laws and found one based on the Enforcement Act of 1870, which forbade two or more people from conspiring to “injure, oppress, threaten, or intimidate any citizen in the free exercise or enjoyment of any right or privilege secured to him by the Constitution or laws of the United States.” He then took the issue to U.S. District Attorney Frank C. Dailey, who convinced a federal judge to accept the case.
The trouble was that Dailey couldn't reuse any of the evidence Roach had presented at the first trial, so Stimson and the other poll-watchers once again got to work. They found that Roberts had made thousands of fraudulent registrations using the names of people from other parts of the state, tied to random addresses in Terre Haute. In December of 1914, using evidence gathered by Stimson's volunteers, U.S. Marshals arrested 116 individuals, including Roberts. In United States v. Aczel, all of the defendants were charged with four counts of conspiracy, and 88 of them pled guilty. On March 8, 1915, Roberts and the remaining defendants were found guilty on all charges.
Roberts was sentenced to six years in prison and a $2,000 fine, though he was released early on parole. Although he retained control of the city by proxy via a loyal ally, his greater political ambition of becoming governor was never realized. Meanwhile, the successful prosecution set an important precedent for enforcing election laws at the federal level, helping to pave the way for the Voting Rights Act of 1965. Just a few years after helping to take down Roberts, Stimson and her fellow suffragists won the right to vote with the ratification of the 19th Amendment. She and Roach proved that participation in politics and elections wasn't just a right—it was a matter of dedication and civic duty.
[Image description: The Indiana state flag, which is dark blue with stars surrounding a torch and the word “INDIANA.”] Credit & copyright: HoosierMan1816, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication. -
FREEWorld History PP&T CurioFree1 CQ
It may not seem exciting, but we wouldn’t have much without it! Cement is used in the construction of… pretty much everything. It’s also been around for millennia. Yet as ubiquitous and essential for modern life as it is, cement remains mostly misunderstood. Many people have no idea what’s even in it. Well, get your dust mask ready to explore the history of cement through the ages.
First things first: cement and concrete are not the same thing. They may both be dusty, gray stuff that hardens when mixed with water, but concrete is actually a combination of several different materials, one of which is cement itself. To make concrete, cement and aggregates like gravel are mixed together with a variety of ingredients (depending on the application) to form a strong, porous mass. Cement itself has been produced since antiquity; the ancient Greeks and Romans made their version by mixing lime and volcanic ash. They created lime by calcining limestone, heating it to high temperatures to drive off carbon dioxide. When the lime and ash are mixed with water, they undergo a chemical reaction called hydration, in which the calcium in lime and the silica in ash combine to form calcium silicate hydrates. Romans in particular are renowned for their use of cement to build massive structures that have lasted thousands of years with little maintenance. They used cement as mortar to hold bricks together, and used it to make concrete. In fact, their term for concrete, “opus caementicium,” is where the modern word “cement” comes from. Their most famous innovation was using cement to build structures in or near water. Since cement—and by extension, concrete—cures instead of drying like mud or clay, they could use both materials to build the bases of bridges, dams, and aqueducts. And unlike wood, cement and concrete don’t get weaker with time or when exposed to water. In fact, water can make the materials more durable, because small cracks that let in water trigger a secondary curing process that helps maintain structural integrity.
Most cement used today is Portland cement, and its development started in the 1800s. In 1824, British bricklayer Joseph Aspdin created the first iteration of Portland cement by heating a mixture of lime and clay together until they calcined. Aspdin took the resulting product and ground it to a fine powder. When mixed with water, it became exceptionally strong, so he named it after the stones from the Isle of Portland in Dorset, U.K., which were known for their strength. Portland cement was then improved upon by his son, William Aspdin, who added tricalcium silicate. Then, in 1850, cement manufacturer Isaac Johnson created Portland cement as it is known today. Johnson heated his ingredients at a higher temperature than the Aspdins had, going up to 2,732 degrees Fahrenheit (1,500 degrees Celsius), resulting in a product called clinker, a fusion of lime and silicates. In addition to being strong, Portland cement sets much more quickly than its predecessors, and it remains the primary ingredient of the concrete used in modern construction.
The modern world would certainly be different without cement—in more ways than one. While the material has allowed for the construction of everything from majestic skyscrapers to monumental hydroelectric dams, its production is also a major source of greenhouse gas emissions. Aside from the massive amount of fuel required to heat clinker kilns and to transport the heavy material, the very process of heating limestone releases carbon dioxide into the atmosphere. Still, cement has a lot of qualities that make it worthwhile. Concrete buildings are often very energy efficient, and since they can last so long, less material is needed to maintain or rebuild them. Also, scientists are currently working on cements that can absorb carbon dioxide from the atmosphere, further offsetting the emissions released during production. So, when it comes to cement, what’s old is new and what’s gray is (hopefully) green.
[Image description: A portion of a building made from unpainted cement blocks.] Credit & copyright: Tobit Nazar Nieto Hernandez, Pexels -
FREEUS History PP&T CurioFree1 CQ
Oh fudge, get that car out of here! Located between Michigan’s Upper and Lower Peninsulas on Lake Huron, the second-largest of the U.S. Great Lakes, Mackinac Island is a quirky yet popular vacation spot. The 3.8-square-mile island is famous for its fudge shops, its military past, its beautiful architecture, and for being the only place in the U.S. where cars are completely banned.
In modern times, Mackinac Island is a popular vacation spot, hosting an estimated 1.2 million tourists each year. That’s especially impressive considering that only around 600 people live on the island year-round. Since no cars are allowed, visitors and residents alike must get around on foot, by bike, or by horse. Horse-drawn carriages, buggies, and individual saddle horses are a common sight on the island, including on the 8-mile-long M-185, the only carless highway in America. Yet, for all of the island’s unusual and tourist-friendly features, and despite its small size, Mackinac Island’s past involves a surprising amount of military activity.
Mackinac Island was originally home to several tribes of Native peoples, including the Anishinaabe, who spoke the Ojibwe language. Since they thought that the island’s shape resembled a turtle, they called it “Mitchimakinak,” or “big turtle.” When the French first arrived on the island in the 17th century, they wrote and pronounced the name as “Michilimackinac.” When the British came to the island in 1761 following the Seven Years’ War, they shortened it to “Mackinac.” The first Europeans to settle on Mackinac Island were French Jesuit missionaries, the most famous of whom was Jacques Marquette. In 1671, he began preaching on Mackinac Island in small chapels, some of which were made of bark, in an attempt to convert both Native peoples and French fur traders. Though he later moved his mission to the nearby mainland city of St. Ignace, a statue of Marquette and a reconstruction of one of his bark chapels still stand on the island today.
In 1781, aiming to control the waters between Lake Huron and Lake Michigan as well as the region’s profitable fur trade, British forces constructed Fort Mackinac on a bluff overlooking much of the island. Their plans hit a snag in 1796, when the U.S., having won the Revolutionary War, took control of the fort. Yet even that victory wasn’t enough to secure the island forever. When the War of 1812 broke out between the U.S. and Britain, British forces once again demanded control of the fort. When British Captain Charles Roberts landed on the island on July 17, 1812, with a force of British troops, Native American warriors, and Canadian militiamen, he sent an urgent message to the American fort commander, Porter Hanks, demanding an immediate surrender. Hanks was baffled, as he had no idea that war had broken out at all. Still, since the 24-year-old commander was badly outnumbered with a force of just 57 men, he surrendered the fort to the British. Just two years later, U.S. troops tried to take the fort back, and the bloody Battle of Mackinac Island ensued. Thirteen Americans were killed in the only battle ever fought on the island, and the fort remained in British hands until the War of 1812 ended and it was returned to U.S. control. The fort was officially decommissioned in 1895, but it still stands on the island today as an educational tourist site.
After the war, wealthy tourists eager to cheer themselves up and escape crowded cities began flocking to the island each summer. Catering to the tastes of these tourists, architects built elaborate, Victorian-style homes and buildings all over the island, and planted a wide array of colorful flowers. The Grand Hotel, built in 1887, is still a famed landmark on the island and boasts the world’s longest porch at 660 feet. Also in 1887, the Murdick family opened a candy kitchen on Mackinac specializing in fudge. The island’s temperate climate made fudge-making easy (since fudge requires warm, moist air to keep from drying out) and soon many other fudge shops opened. Not long after, in 1898, cars were officially banned in order to preserve the island’s historic atmosphere and the thriving horse-based businesses there. The ban was certainly effective, since Mackinac Island is still full of history, original buildings, beautiful flowers, and plenty of horses to this day. Just be prepared for a slow-moving vacation if you decide to visit.
[Image description: A photo of a portion of Mackinac Island from above. A statue can be seen in a grassy field, as well as several white buildings and a harbor with boats. A green land mass can be seen across Lake Huron.] Credit & copyright: Author’s own photo. The author has dedicated this work to the public domain. -
FREEPolitical Science PP&T CurioFree1 CQ
A national park without the National Park Service? That’s like mashed potatoes without gravy! It might surprise modern park-goers, but the first national park in the U.S. was created in 1872, long before there was a government agency to specifically manage it. Instead, before 1916, national parks were variously managed by the Department of the Interior, the War Department, and the Department of Agriculture’s Forest Service. It wasn’t until this day that year that the National Park Service (NPS) was established with the passage of the Organic Act, placing the management of all national parks and monuments under a single agency.
When Yellowstone National Park (shared by Wyoming, Idaho, and Montana) was created in 1872, you could say it was a pretty big deal. Never before had a piece of wilderness been placed under federal protection solely for the purpose of preserving its natural beauty. Once Yellowstone paved the way, others followed suit. Some of the most well-known national parks were created before 1916, including Yosemite, Crater Lake, and Glacier in 1890, 1902, and 1910, respectively. In all, 35 national parks and monuments (smaller areas and objects of historic, prehistoric, or scientific interest) were created by 1916, but there was a problem. The purpose of national parks was to protect the designated lands, but that didn’t always work out in practice. Environmental advocates were becoming concerned by the lack of effort put into actually preserving and maintaining the pristine conditions of national parks. Due to a lack of funding and a lack of authority inside the parks, hunting, logging, and livestock grazing continued to harm the supposedly protected land. One concerned citizen was Stephen Mather, a businessman who found himself appalled by the state of the national parks he visited. With the help of other environmental advocates, Mather pushed Congress to pass the Organic Act in 1916, which created the NPS, placing 35 national parks and monuments under its purview with Mather as the agency’s first director. Then, in 1933, the management of another 56 national monuments and military sites was transferred to the NPS, further expanding the national park system.
There are now over 400 national park sites, and they truly reflect the varied geography and biomes of the U.S. and its territories. For example, Denali’s peak in Alaska is the highest point in North America at 20,320 feet, while Death Valley in California is the lowest at 282 feet below sea level (and also the hottest place in the world). Some parks, like Yellowstone and Yosemite, are large enough that visitors could spend days or weeks traversing them. Wrangell-St. Elias National Park and Preserve is larger than some countries at 13.2 million acres. Yet others represent natural oddities, like Devils Tower National Monument in Wyoming. Also called Bear Lodge, it’s a butte whose summit reaches an elevation of over 5,100 feet. Surrounded by a forest, Devils Tower itself looks like a giant tree stump made of crumbling stone.
Today, over 20,000 employees and hundreds of volunteers work throughout the year to maintain the system’s more than 400 parks and monuments while also managing the millions of visitors who pass through. The most visible of these workers are the park rangers, who act as law enforcement, guides, and interpreters within park boundaries. The NPS is part of the Department of the Interior and has an annual budget of $2.6 billion. While that might sound like an eye-watering amount of money to spend just to preserve some natural beauty, the park system also generates hundreds of thousands of jobs in the towns and cities that surround the parks, as well as $27 billion a year for the U.S. economy. Who says conservation doesn’t pay?
[Image description: A grassy field at Yellowstone National Park with mountains and trees on either side.] Credit & copyright: Mike Cline, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
World History PP&T Curio
What is that beguiling scent? It could be intestinal waste from a whale. For centuries, people made perfumes, incense, and even medicines from a mysterious substance known as ambergris. Clumps of the stuff would wash up on beaches around the world from time to time and, though ancient peoples had no idea what it was, they knew that it smelled very interesting. Today, we know that ambergris originates in the intestines of sperm whales—and it’s been made illegal in many places to prevent whaling. Still, the fragrance industry as we know it probably wouldn’t exist if it wasn’t for ambergris.
Ambergris gets its modern name from the French words for “gray amber,” but it’s been called many things over the centuries. It was prized by the ancient Egyptians, who considered it so valuable that they gave it as an offering to the gods. Outside of religious rituals, ambergris was used primarily as an ingredient in perfume, but also as an aphrodisiac and even as a spice. King Charles II of England took it a step further, with his favorite dish reportedly being a plate of eggs and ambergris. That might come as a shock to anyone who’s actually been in the vicinity of fresh ambergris. Its smell is what one would expect from something that came out of a whale’s intestine—very musky and pungent. However, when ambergris is left in the sun, a chemical reaction occurs that lends it a pleasant fragrance and changes its color from black to a yellowish-gray. This latter form is much more useful for perfume-making and cooking…assuming you’re brave enough to eat it, of course.
It wasn’t just ambergris’s smell that made it a popular perfume ingredient, though. It contains a myriad of chemical compounds, one of which is ambreine. Ambreine is a triterpene alcohol that can make up as much as 80 percent of ambergris, and it’s used as a fixative in fragrances. Essentially, it holds together the other chemical compounds that define a fragrance and keeps them from evaporating too quickly. Most modern perfumers use a synthesized version of ambreine, but some companies still use real ambergris. It’s still legal to use in Canada, the U.K., and the European Union, and although it’s technically illegal in the U.S., the law often goes unenforced.
As popular as ambergris has been for millennia, its origin was unknown until fairly recently. Some believed that ambergris was produced by swarming sea animals or underwater volcanoes. It wasn’t until the 18th century, when commercial whaling in North America took off and people started dissecting sperm whales, that they discovered the true source of ambergris. Even then, no one was sure why whales produced it until 2006, when marine biologist Robert Clarke proposed that sperm whales did so to protect their intestinal lining from the sharp beaks of their main prey: squids. Indeed, one of the ways to determine the authenticity of an ambergris chunk is to search it for squid beaks.
Today, even in most countries where ambergris is legal, it’s still illegal to hunt whales to obtain it. Therefore, there are only two ways to come by ambergris: be lucky enough to discover a chunk washed up on shore, or be rich enough to buy a chunk when it goes up for auction. Even these aren’t options everywhere, though. The U.S. and Australia have banned commercial sales of ambergris to discourage illegal whaling, while in India all ambergris that washes ashore is technically considered property of the government. Surely we can all agree that whales are more important than what’s in their guts, no matter how good it smells.
[Image description: A chunk of ambergris, which looks like a lumpy rock, against a reddish background.] Credit & copyright: Wmpearl, Wikimedia Commons. Skagway Museum. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
-
Travel PP&T Curio
Corn, soybeans, tenderloins… and sand dunes? You might not think of the latter when someone mentions Indiana, but Indiana Dunes National Park really does feature towering sand dunes, the tallest of which stands at an impressive 192 feet. Until recently, this unique park was known as Indiana Dunes National Lakeshore; it finally, officially became a national park in 2019. The designation came long after natural forces shaped the park’s unusual landscape, but not all that long after several lengthy legal battles to save it from man-made destruction.
Though rolling sand dunes might seem out of place in the Midwest, their presence is as natural as the lake they border. The history of the Indiana Dunes begins with Lake Michigan, which formed some 20,000 years ago as the massive glaciers that had carved through the area for millennia began receding. The glaciers’ meltwater created the Great Lake, and the battering of waves on the shore deposited layer after layer of sediment carried there by rivers. Starting around 10,000 years ago, that worn-down sediment became the dunes, which continued to expand along the coastline. Farther inland, bogs and ponds lie between the ever-shifting dunes, which move with the wind. Though the dunes still exist today and form the idyllic beaches along the park’s shoreline, many were destroyed over the past century by development and erosion. Still, the 15,000 acres that remain are among the most biodiverse areas in North America.
In 1905, the United States Steel Corporation (U.S. Steel) purchased thousands of acres on the Lake Michigan coast, leveling the dunes there to build a steel mill. Near the mill, U.S. Steel developed the city of Gary to house its employees, whose numbers at their peak reached around 30,000 workers. That was the first major man-made blow to the Indiana Dunes, but not the last. The following year, the namesake of the town of Ogden Dunes, Francis A. Ogden, purchased 2.5 miles of land along the coastline to sell its sand for use in construction. Starting in the 1950s, interstates and a flood of new residents began filling in the once-rural region, chipping away at the ever-dwindling dunes. Meanwhile, the first effort to give the Indiana Dunes federal protection began in 1917 under the straightforward slogan, “Save the Dunes.”
Unfortunately for conservationists and local advocates, their efforts were interrupted by WWI, which sent potential federal attention and funding from the dunes to the growing war effort. By the time the war ended and conservationists resumed their lobbying, there was little interest from the federal government, and the growing presence of industry in the area made preserving the dunes economically unappealing to politicians. It wasn’t until 1952 that the sentiment of “Save the Dunes” would be championed again, this time by Ogden Dunes resident Dorothy Buell. Buell formed the Save the Dunes Council along with 21 other local women, and the conservationists organized fundraising events to fund the purchase of unused land that was part of the dunes. Soon, their efforts caught the interest of Senator Paul H. Douglas of Illinois, who had a summer home near the Indiana Dunes. Beginning in 1958, Douglas continually introduced bills to the U.S. Senate to have the dunes placed under the authority of the National Park Service (NPS). He was ultimately successful in 1966, when President Lyndon B. Johnson signed a bill to form the Indiana Dunes National Lakeshore.
Indiana Dunes finally became Indiana Dunes National Park in 2019, after years of petitioning for the status upgrade. Today, it includes many more acres than it did in 1966, the NPS having purchased more land over the decades. Save the Dunes still exists as well, and its mission isn’t over yet. Industrial presence, the resulting pollution, and development continue to be issues. Additionally, erosion is a growing concern in the face of climate change and the accompanying rising water levels, which threaten to wash away the sand faster than it can accumulate. If only there were a way to bring back some of that sand that got sold off!
[Image description: A photo of the Lake Michigan shoreline from atop a sandy dune at Indiana Dunes National Park.] Credit & copyright: Wikimedia Commons, National Park Service. This image or media file contains material based on a work of a National Park Service employee, created as part of that person's official duties. As a work of the U.S. federal government, such work is in the public domain in the United States.
-
Mind + Body PP&T Curio
It’s bad enough to be diagnosed with something that has no known cure…but what about something that also has no known cause? Until recently, people dealing with lupus, an autoimmune disease, had no way of knowing what caused their illness. Worse, they had few effective treatment options. Now, though, researchers have discovered that lupus is caused by a problem with a specific immune system pathway. This vital information might even lead to a cure in the near future.
Lupus is an autoimmune disorder that can affect everything from a person’s skin to their organs. It’s currently incurable, and it’s difficult to diagnose accurately due to the variability of its symptoms. As an autoimmune disorder, lupus causes the body’s immune system to attack its own healthy cells as if they were foreign bodies. This causes symptoms like dry eyes, fatigue, and fever, but even more severe symptoms can develop over time. These include joint pain and swelling, photosensitivity, lesions on the skin, and Raynaud's syndrome, which causes fingers to turn blue during times of stress or when exposed to cold. The most distinct symptom of lupus is a red, butterfly-shaped rash that appears on the face, covering the cheeks and nose. Eventually, a patient with lupus can experience organ failure, particularly of the kidneys and liver. Although lupus has been referenced in medical writings for thousands of years, the modern understanding of lupus only began in 1948 with the discovery of lupus erythematosus cells (LE cells) by Malcolm McCallum Hargraves. Hargraves found the cells in the bone marrow of a patient suffering from lupus, and they are sometimes called Hargraves cells.
Currently, lupus is treated mostly with immunosuppressants, nonsteroidal anti-inflammatory drugs (NSAIDs), and corticosteroids, with limited success. Even when they help, immunosuppressants come with their own problems: they leave patients more vulnerable to infections and more likely to develop cancer. Other drugs like NSAIDs only address individual symptoms without improving the overall condition. In fact, NSAIDs can be dangerous in some cases since they’re processed by the kidneys, making them potentially toxic to lupus patients with decreased kidney function. Finally, corticosteroids can help with the inflammation caused by a lupus patient’s overzealous immune system, but long-term use can lead to high blood pressure and put users at risk of developing diabetes.
Now, a better treatment that addresses the underlying issues of lupus might be on the way, thanks to the first major breakthrough in lupus research in 76 years. Researchers at Northwestern University Feinberg School of Medicine and Brigham and Women’s Hospital have discovered the cause of lupus: a malfunctioning immune pathway. The pathway in question is responsible for regulating immune cells’ response to pathogens, toxins, and pollutants. In lupus patients, the pathway isn’t properly activated due to insufficient activation of the aryl hydrocarbon receptor (AHR). The resulting lack of regulation leads to the immune system going haywire, producing more immune cells than necessary, which then go on to attack the body. With this in mind, researchers are looking to AHR-activating drugs as potential treatments or cures. The good news is that these drugs already exist, but they’ll still have to be tested for safety and efficacy.
Lupus is an often misunderstood disorder, especially since many of the most debilitating symptoms are invisible. The difficulty in diagnosing it also means that many patients already have advanced disease by the time they’re finally diagnosed. Those who suspect they may have lupus could find out for sure through blood tests and urinalysis, and those who are diagnosed would benefit from avoiding direct sunlight and maintaining a healthy lifestyle. Some studies also show that there may be a benefit to taking vitamin D and calcium supplements. Here’s hoping that new lupus treatments with fewer side effects are soon added to patients’ arsenals.
[Image description: Three medical needles against a yellow background.] Credit & copyright: Karolina Kaboompics, Pexels
-
US History PP&T Curio
Was it the “trial of the century” or just a bunch of monkey business? The Scopes Monkey Trial, one of the most widely publicized court cases in U.S. history, concluded on this day in 1925. Ostensibly, the case was a legal battle between the state of Tennessee and John T. Scopes, a teacher from the town of Dayton who defied the law and taught Charles Darwin’s theory of evolution in his public school classroom. In reality, the trial was about America’s views on science and religion, not to mention public education.
At its core, Scopes v. State centered on the violation of the Butler Act. The act was passed in March of 1925 by Tennessee’s state legislature, and it prohibited schools from teaching Darwin’s theory of evolution. At the time (as it is today), the theory of evolution was rejected by fundamentalist Christians who favored a biblical interpretation of natural history. Oddly enough, even after the act was passed, Tennessee’s public schools were still required to use George W. Hunter’s A Civic Biology (1914) in their classrooms, even though the textbook supported the theory. Nevertheless, soon after the act was passed, the American Civil Liberties Union (ACLU) placed ads in the state’s newspapers offering to fund the criminal defense of any teacher willing to break the new law. The idea was to test the law in court and have it found unconstitutional. It wasn’t until a Dayton businessman named George W. Rappleyea saw economic potential in the case that anyone challenged the Butler Act. Rappleyea believed that such a controversial case would increase Dayton’s visibility, revitalizing the town. With Rappleyea’s support, several prominent residents of the town encouraged 24-year-old high school football coach and teacher John T. Scopes to place himself within the legal crosshairs of the state.
When Scopes was charged with violating the Butler Act soon thereafter, he was represented by famed criminal defense lawyer Clarence Darrow. On the prosecution’s side was prominent politician and attorney William Jennings Bryan, who also served as a Bible expert during the trial. The trial certainly did bring the sleepy town of Dayton into the national spotlight. The Scopes Trial was the first to be broadcast nationally, and it was heard as far away as London and Hong Kong. Residents of Dayton were roused by the controversy and sensationalism on display, gathering at the courthouse in such numbers that the judge moved the trial out to the lawn for fear of the courthouse collapsing under the weight of the crowd. Regardless of where it took place, it was clear early on that Scopes and Darrow were fighting an uphill battle. The judge forbade any discussion of the scientific validity of evolution or the constitutionality of the Butler Act, stating that the court was only concerned with whether or not Scopes had violated the law. Still, Darrow took the opportunity to attack Bryan’s credibility as a Bible expert. A famous proponent of anticlericalism, Darrow was used to criticizing fundamentalist interpretations of the Bible. When Darrow cross-examined Bryan, the self-proclaimed expert on scripture was widely ridiculed for his inability to reconcile the contradictions in a literal reading of the Bible. Then, on the last day of the trial, the unthinkable happened: in his closing statement, Darrow, the defense counsel, asked the jury to find Scopes guilty so that the case could be appealed to a higher court. Under Tennessee state law, this denied Bryan the chance to give his own closing statement.
In the end, Scopes was found guilty and fined $100. However, due to a procedural error in the way the fine was determined, the verdict was overturned by the Tennessee Supreme Court. In 1955, the sensational story of the case was adapted into a play, Inherit the Wind, which was itself adapted into a film. As for the Butler Act, it wasn’t repealed until 1967. Today, the theory of evolution still invites controversy in certain places. Maybe we’ll see a federal case about it someday.
[Image description: Description ] Credit & copyright: Tree of Life by Ernst Haeckel (1834–1919), 1879. Wikimedia Commons. This media file is in the public domain in the United States. -
FREEHumanities PP&T CurioFree1 CQ
You’ve heard of Zeus, but have you ever wondered how the king of the gods came to power? The pantheon of ancient Greek gods is full of familiar names, from Aphrodite to Poseidon. But in Greek mythology, these deities weren’t the first to rule the heavens and earth. That distinction belongs to the Titans.
According to the ancient Greeks, the creation of the universe involved something coming forth from nothing…with a lot of family drama to follow. At first, there was only Chaos, a cosmic void from which the first beings emerged. Then came the three primordial deities: Gaia (the Earth itself), Tartarus (the underworld), and Eros (desire). Gaia’s son, Uranus, was the sky, and also the father of her other 18 children. Twelve of the children were the Titans (the first gods), three were the one-eyed Cyclopes, and the final three were the Hecatoncheires, each of whom had 50 heads and a hundred arms. Appalled by their monstrous appearance, Uranus imprisoned the Cyclopes and the Hecatoncheires in Tartarus. Unsurprisingly, this made their mother very angry. In retaliation for Uranus’s cruelty, Gaia gave her son, the Titan Cronus, a sickle with which he castrated his father and ultimately overthrew him. Unfortunately for Cronus, history tends to repeat itself, even for gods.
After imprisoning his father in Tartarus, Cronus ruled over the Titans and married his sister, Rhea. Rhea gave birth to six children: Hestia, Demeter, Hera, Hades, Poseidon, and Zeus. Worried that one of these children would depose him as he had deposed his own father, Cronus decided on a violent plan of action. He swallowed his children one by one, but Rhea managed to save Zeus by giving her husband a rock disguised as her son. Once Zeus, who had been raised in secret, came of age, he returned to his father to exact his revenge. First, he poisoned Cronus to make him vomit, freeing his brothers and sisters. With the help of his siblings, Zeus then set in motion the Titanomachy, a ten-year conflict between the Olympian gods and the Titans. Zeus and his cohort allied with the Cyclopes and the Hecatoncheires, with the former creating the iconic weapons of the Olympians: Zeus’s thunderbolts, Poseidon’s trident, and Hades’s helmet of darkness. The Olympians, of course, emerged victorious, thanks in no small part to these powerful weapons. After his defeat, Cronus was exiled, cursed to count the passing of time and age, earning him the moniker “Old Father Time.” Atlas, the Titan who led his kin into battle, was punished by having to hold up the heavens for all eternity. Meanwhile, Zeus and the others settled on the summit of Olympus in a palace built by the Cyclopes. Not all the Titans were cast out by the Olympians, though, and there was still more conflict to come.
One of the most famous Titans was Prometheus. Not only did he escape imprisonment in Tartarus; he and his twin brother, Epimetheus, were also tasked by Zeus with creating mankind. However, Prometheus was angry that his creations were left in the cold, without any reliable way to keep warm or reach their true potential. Feeling pity for them, Prometheus stole fire from Olympus and brought it to the humans, even though doing so was forbidden by Zeus. Along with this powerful gift, Prometheus taught them mathematics, astronomy, sailing, and architecture. Thanks to these divine boons, the humans thrived, building mighty kingdoms of their own. In time, they even came to question the power and authority of the gods, which deeply angered Zeus. Discovering Prometheus’s betrayal, Zeus punished the Titan by chaining him to a cliff. There, a giant vulture came each day to eat Prometheus’s liver, which grew back overnight. It wasn’t until Heracles came along centuries later and killed the vulture that the Titan was freed. What’s a few eons of torment if it means you can take credit for mankind’s greatest achievements?
[Image description: A painting of the ancient Greek Titans, depicted as large male figures, falling into the darkness of Tartarus.] Credit & copyright: The Fall of the Titans (c. 1596–1598), Cornelis van Haarlem (1562–1638). National Gallery of Denmark, Copenhagen. Wikimedia Commons. The author died in 1638, so this work is in the public domain in its country of origin and other countries and areas where the copyright term is the author's life plus 100 years or fewer. -
FREEPP&T CurioFree1 CQ
In honor of the holiday weekend, enjoy this curio from the archives about one of the Revolutionary War's most unlikely figures.
She wasn’t trying to start a revolution, but she wasn’t afraid to join one. Deborah Sampson was the first woman in U.S. history to receive a military pension—not as a spouse, but as a veteran. Born on this day in 1760, Sampson disguised herself as a man and adopted a new identity to fight in the Continental Army. Later, she toured the newly formed nation as a lecturer.
Born in Plympton, Massachusetts, Sampson had a difficult childhood. Her father was lost at sea when she was just five years old, and her family struggled financially as a result. From the age of ten, she worked as an indentured servant on a farm until she turned 18. Afterward, she found work as a schoolteacher in the summer and as a weaver in the winter while the American Revolutionary War raged on. In the early 1780s, as the war continued, Sampson tried to enlist in the Continental Army in disguise. Her first attempt ended in failure, leading to her immediate discovery and a scandal in town. That didn’t deter her, though, and her second attempt in 1782 was successful. Taking on the name Robert Shurtleff, Sampson joined the 4th Massachusetts Regiment. Her fellow soldiers never caught on to her ruse, though she was given the nickname “Molly” due to her lack of facial hair.
For 17 months, “Shurtleff” served in the Continental Army. Just months after joining, Sampson participated in a skirmish against Tory forces that saw her fighting one-on-one against enemy soldiers. She also served as a scout, entering Manhattan and reporting on the British troops that were mobilizing and gathering supplies there. Sampson’s cover was almost blown several times, but she was so determined to keep her secret that, after she was shot, she dug a bullet out of her own leg to avoid a doctor’s examination; she carried lead in that leg for the rest of her life. Unfortunately, she was found out after she came down with a serious illness. While in Philadelphia, she was sent to a hospital with a severe fever. She fell unconscious after arriving, and medical staff discovered her true gender while treating her. After being discovered, Sampson received an honorable discharge and returned to Massachusetts. In 1785, she married Benjamin Gannett, with whom she had three children. During this time, she did not receive a pension for her service, and she lived a quiet life. However, things changed as stories of her deeds spread with the 1797 publication of The Female Review: or, Memoirs of an American Young Lady by Herman Mann, a detailed account of Sampson’s time in the army. To promote the book, Sampson herself went on a year-long lecture tour in 1802. She regaled listeners with war stories, often in uniform, though she may have embellished things a bit. For instance, she claimed to have dug trenches and faced cannons during the Battle of Yorktown, but that battle took place a year before she enlisted. Nevertheless, her accomplishments were largely corroborated, and even Paul Revere came to her aid, helping her secure a military pension from the state of Massachusetts.
Today, Sampson is remembered as a folk hero of the Revolutionary War. After she passed away in 1827 in Sharon, Massachusetts, the town erected statues in her honor. There’s even one standing outside the town’s public library. It shows her dressed as a woman, but holding her musket, with her uniform jacket draped over her shoulder. In 1982, Massachusetts declared May 23 “Deborah Sampson Day” and made her the official state heroine. That seems well-deserved, given that she was the first woman to bayonet-charge her way through the gender barrier.
[Image description: An engraving of Deborah Sampson wearing a dress with a frilled collar.] Credit & copyright: Engraving by George Graham. From a drawing by William Beastall, which was based on a painting by Joseph Stone. Wikimedia Commons, Public Domain -
FREESports PP&T CurioFree1 CQ
As a rule, humans aren’t the world's best swimmers…but rules were made to be broken. While most members of our terrestrial species are much faster on land than in the water, Olympian Michael Phelps is a notable exception. This record-breaking athlete, born on this day in 1985, has a unique physiology that makes him perfectly suited for the pool, and an aquatic nickname to match.
Phelps began swimming at the age of seven, following in his sisters’ footsteps after they joined a local swim team. Long before he boasted nicknames like “Flying Fish” and “Baltimore Bullet,” he swam competitively for his high school team and even made it onto the U.S. Swim Team at the 2000 Summer Olympics in Sydney. Though he didn’t win any medals that year, he still made history as the youngest male Olympic swimmer in 68 years. He began setting world records while still in high school, a trend that continued when he attended the University of Michigan in Ann Arbor. It was during Phelps’s second Olympics appearance in 2004, in Athens, that he became a household name after winning eight medals, including six golds. After not winning a single medal at his first Olympics, Phelps was suddenly just one gold away from Mark Spitz’s record of seven. He went on to break the record during the 2008 Summer Olympics in Beijing by winning eight gold medals, also a record for the most golds at a single Olympics. By the time he retired in 2016 after the Summer Olympics in Rio de Janeiro, he had 28 medals to his name, including 23 golds, 13 of them individual.
While hard work and perseverance surely played a role in Phelps’s dominance in the water, he also benefited from having what may be the ideal swimmer’s body. Most of the best swimmers in the world have a similar body shape that gives them an advantage over the average person, beyond their training. Firstly, it pays for a swimmer to be tall, and indeed, most of the top Olympic swimmers hover around six feet tall. But proportions matter too, with long, flexible torsos allowing for more power behind strokes and a center of mass closer to the lungs (the center of flotation) allowing for less energy wasted in trying to stay level in the water. It also helps to have large hands and feet, which act like paddles or flippers in the water, while large lungs help swimmers stay afloat and take in more oxygen. Many swimmers have these traits, but Phelps’s physique seems to take some of them to an extreme. His lung capacity sits at 12 liters, twice that of the average person, and he has double-jointed elbows. He’s also hyper-jointed at the chest, allowing him to leverage more of his body to power each stroke. Even for a swimmer, he has a massive “wingspan,” the distance from fingertip to fingertip when the arms are held out horizontally from the body. While most people have wingspans that are about the same as their height, Phelps’s wingspan of six feet, seven inches is three inches longer than he is tall. Finally, his body was found to produce half as much lactic acid as even other trained athletes, which allows him to recover faster between training sessions.
All that isn’t to discount his talent. While Phelps may have been gifted with natural advantages, his drive and willingness to train hard are even more important. Those who’ve worked with Phelps have often expressed that the true secret behind the swimmer’s success is his immaculate technique, which can only come from extensive training. Swimming is extremely inefficient for human beings, so every movement of every stroke counts, especially at elite levels where a fraction of a second can make all the difference. It wouldn’t matter if you had shark skin and flippers for feet if you didn’t know how to use them!
[Image description: A large, empty swimming pool with blue-and-white lane dividers.] Credit & copyright: Jan van der Wolf, Pexels -
FREESports PP&T CurioFree1 CQ
Some people think he was a great baseball player. Everyone else knows he was the greatest. Willie Mays passed away on June 18 at the age of 93, and even though he had been retired for decades, no one else in the league ever managed to best his numbers. At bat or in center field, there still isn’t anyone quite like the “Say Hey Kid.”
Willie Howard Mays Jr. was born on May 6, 1931, in Westfield, Alabama, to Annie Satterwhite and Willie Mays Sr., a semi-professional baseball player. Though he was raised by relatives after his parents separated, Mays seemed to follow in his father’s footsteps, showing an interest in baseball from an early age. As a teenager, he moved to Fairfield, where he played sporadically for the Fairfield Stars in the Birmingham Industrial League. In 1948, Mays was just 16 years old and still attending high school when he signed with the Birmingham Black Barons of what were then known as the Negro Leagues. Mays played for the Black Barons until he graduated high school, after which he signed with the Giants, then based in New York.
The baseball wunderkind proved in his rookie season in 1951 that his early success wasn’t just a fluke. Moreover, he showed that he was an exceptional all-around player. Though the Giants lost the World Series to the New York Yankees that year, Mays was named the National League Rookie of the Year for his superb defensive performance. Just a few years later, the Giants had a historic season when they went on to win the 1954 World Series. It was in Game 1 against the Cleveland Indians that Mays pulled off “The Catch,” an over-the-shoulder grab that seemed like a magic trick at the time. “The Catch” happened after Vic Wertz hit a fly ball deep into center field, and Mays took off after it at a dead sprint with his back to the plate. Catching the ball kept the runners on base from scoring and all but secured a Giants win in the series opener—all with undeniable flair. Mays followed the Giants when the team moved to San Francisco, where he played until he was traded to the New York Mets in 1972. Throughout his career, Mays played in 24 All-Star games, was awarded the Gold Glove 12 times, and hit 660 home runs, all while stealing bases left and right and making center field a perilous target for opposing hitters. But it wasn’t just his presence on the field that gained him a following; he was a beloved personality off the field as well. While accounts of the origins of his nickname, the “Say Hey Kid,” vary, Mays himself said at one point that it came from his habit of addressing people with “Say hey” when he couldn’t remember someone’s name during his rookie year.
Even after retiring in 1973, Mays remained an inspiration to many Black Americans. After Jackie Robinson broke down racial barriers in 1947, Mays further pushed against the racist barriers that Black athletes faced. He was a player who could not be ignored, whose dramatic plays and charisma won games as well as hearts. For many in his time, Mays was the face of baseball, a superstar of a sport that had only recently—and begrudgingly—integrated. When he was awarded the Presidential Medal of Freedom in 2015, President Obama said of him, “It's because of Giants like Willie that someone like me could even think about running for president.” To this day, Mays is cited as an inspiration by Black baseball players, who continue to be underrepresented in the sport. It seems that this legendary giant had plenty of room on his shoulders.
[Image description: A red baseball glove, a baseball bat, and four baseballs on a wooden bench.] Credit & copyright: Tima Miroshnichenko, Pexels