Curio Cabinet
September 11, 2025
Science Daily Curio #3150
If you start feeling queasy in the car, try blasting the stereo. Scientists have found that happy music can alleviate the symptoms of motion sickness, but the specific tunes one chooses to listen to matter a lot. Motion sickness happens when a person’s brain receives conflicting information from the different senses regarding motion. If the eyes see that the surrounding environment is moving but the inner ears and muscles don’t detect any movement, that conflict can lead to nausea, dizziness, and cold sweats. This means that riding in cars, boats, and amusement park rides, and even using VR headsets, can cause motion sickness, which can really cut down on the enjoyment of a car trip or vacation.
To test their theory that music can affect motion sickness, researchers at Southwest University in China used a driving simulator to induce the condition in participants, who were also fitted with electroencephalogram (EEG) caps to measure brain signals associated with motion sickness. When participants started feeling queasy, researchers played different types of music. They found that “joyful” music reduced symptoms of motion sickness by 57.3 percent, while “soft” music reduced them by 56.7 percent. “Passionate” music only alleviated symptoms by 48.3 percent, while “sad” music was as good as nothing, or maybe worse. In fact, the researchers found that sad music might slightly worsen symptoms by triggering negative emotions. Aside from music, there are other, more conventional remedies for motion sickness, like sweet treats, fresh air, and taking a break from whatever is causing the sickness. In cars and other moving vehicles, reading can induce motion sickness, so it might be a good idea to take your eyes off the page or the phone. On your next road trip, maybe an MD should be the DJ.
[Image description: A reflection of trees, clouds, and the sun in a car window.] Credit & copyright: Tomwsulcer, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
Biology Nerdy Curio
When it comes to the severity of flu infections, it’s often a matter of what you have, not what you lack. It’s common knowledge that older people are more vulnerable to the flu, and according to a paper published in PNAS by an international team of researchers, the culprit isn’t just weak immune systems. According to Kin-Chow Chang, a co-author of the paper from the School of Veterinary Medicine and Science at the University of Nottingham, understanding the mechanism behind severe flu symptoms in the elderly is a matter that requires urgent attention. As he explains, “Aging is a leading risk factor in influenza-related deaths. Furthermore, the global population is aging at an unprecedented rate in human history, posing major issues for health care and the economy.” To get to the bottom of the issue, Chang and his colleagues used an aging mouse model and samples of human tissue to observe how the immune system of an older population responds to the influenza virus. They discovered that older individuals produce more apolipoprotein D (ApoD), a glycosylated protein associated with lipid metabolism and inflammation. Higher levels of ApoD lead to more mitophagy, the destruction of mitochondria. In the context of a viral infection, this means two things. First, since mitochondria produce cellular energy, higher rates of mitophagy mean a weakened immune response. Second, mitochondria help induce protective interferons, which inhibit viral replication, so losing them weakens that defense as well. These two factors combine to make the elderly much more vulnerable to a variety of viral infections, including the flu. People who are 65 or older account for 90 percent of flu deaths and up to 70 percent of flu-related hospitalizations, making them by far the most vulnerable population. With flu season just around the corner, this new knowledge won’t exactly have anyone breathing easy. Here’s hoping that this insight can lead to some solutions for older folks facing the flu.
September 10, 2025
Biology Nerdy Curio
Fuzzy and fossorial…what’s not to love? With their torpedo-shaped bodies, enormous, stout feet, and eyes that are practically invisible, moles have to be some of the strangest-looking animals on Earth. Yet, unlike other famously odd animals like platypuses and echidnas, which live on only one continent, moles are common on every continent except South America and Antarctica. It isn’t just their looks that are strange, either—most female moles have intersex qualities, meaning that they have characteristics usually found only in males.
Moles come in a variety of shapes and sizes, from bizarre star-nosed moles to fairly plain Eastern moles. They all share one important trait, though: they’re fossorial, meaning that they spend most of their lives underground. A single molehill (the part of a mole’s tunnel system that can be seen aboveground) can signal a vast underground network of tunnels stretching up to 230 feet. A single Eastern mole can dig an 18-foot-long tunnel in an hour. That’s pretty impressive for an animal that only grows to be around eight inches long.
Despite living in cozy homemade tunnels, moles don’t spend a lot of time relaxing. Their constant digging requires a high metabolism, which in turn requires them to eat constantly. Moles spend nearly all their time hunting for worms, grubs, and other small invertebrates that live underground. While moles are formidable predators in their chosen environment, things are very different when they’re forced aboveground by tunnel cave-ins, flooding, or a lack of food. Moles are extremely vulnerable aboveground due to their small size and poor eyesight. Because their large feet are made for digging rather than running, moles are slow aboveground and can easily be killed by foxes, snakes, or birds of prey.
Despite these vulnerabilities, moles have managed to thrive throughout most of the world for over 30 million years—and it’s not just because they stay underground. Moles have a unique approach to reproduction that helps both female moles and their offspring survive. In many mole species, females have both ovarian and testicular tissue, meaning that they produce egg cells along with large quantities of testosterone. This added testosterone makes female moles far more aggressive than the females of most other mammalian species. After giving birth to two to seven pups in a specially made underground nesting chamber, mother moles will viciously fight off any threat to their young, including large, aboveground predators like foxes, even when there’s little chance of success. Don’t poke the mama mole!
[Image description: A black-and-white illustration of a mole.] Credit & copyright: Dead Mole, Wenceslaus Hollar, 1646. Harris Brisbane Dick Fund, 1917, The Metropolitan Museum of Art. Public Domain.
Biology Daily Curio #3149
Did you know there’s more to being a redhead than hair? The city of Tilburg, Netherlands, is hosting its annual Redhead Days festival, where thousands of fiery manes gather to celebrate what makes them unique. The festival began 20 years ago, after Dutch artist Bart Rouwenhorst took out a newspaper ad asking for 15 redheads to participate in an art project. He was met with ten times the requested number, which inspired him to make the gathering an annual event that has grown in size ever since. Still, its size will always be somewhat limited considering the rarity of red hair. Even in Scotland and Ireland, where they’re most common, redheads make up only around ten percent of the population, and most redheads in the world have northwestern European ancestry.
Redheads are rare because the variants of the MC1R gene that cause red hair are recessive. That also means redheads can be born to non-redheaded parents. If both parents carry the recessive allele without being redheads themselves, each child has a 25 percent chance of being a redhead. If one parent is a carrier and the other is a redhead, that chance goes up to 50 percent. If both parents are redheads, it goes up to 100 percent, as the simple sketch below illustrates.
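For those who like to see the arithmetic, here is a minimal Python sketch of the single-gene model described above. It treats red hair as a simple recessive trait at MC1R, which is a simplification of real-world inheritance; the genotype labels and function name are illustrative, not from the article.

```python
from itertools import product

def red_hair_probability(parent1: str, parent2: str) -> float:
    """Chance a child is red-haired under a simple one-gene recessive model.
    Genotypes: 'RR' = non-carrier, 'Rr' = carrier, 'rr' = redhead."""
    # Each parent passes on one of their two alleles with equal probability.
    offspring = ["".join(sorted(a + b)) for a, b in product(parent1, parent2)]
    return offspring.count("rr") / len(offspring)

print(red_hair_probability("Rr", "Rr"))  # two carriers        -> 0.25
print(red_hair_probability("Rr", "rr"))  # carrier and redhead -> 0.5
print(red_hair_probability("rr", "rr"))  # two redheads        -> 1.0
```

The three printed values match the 25, 50, and 100 percent figures quoted above.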
Both hair and skin are affected by two types of melanin, the pigments responsible for coloration. One is eumelanin, which provides black and brown pigments and causes darker skin. Pheomelanin, on the other hand, is responsible for the pink hues found in skin. Both eumelanin and pheomelanin are found in hair, and the more pheomelanin there is, the redder the hair appears. Those with mostly eumelanin have dark brown or black hair, while only a small amount of eumelanin produces blonde hair. A little pheomelanin in otherwise blonde hair creates strawberry blondes, and those who have much more pheomelanin than eumelanin have red hair.
Red hair might look great, but it does come with a few downsides. Those who have red hair are more likely to develop skin cancer, endometriosis, and Parkinson’s disease. They also tend to have different levels of pain tolerance than the general population, and they may respond differently to anesthetic and pain relievers. You could say they’re more prone to medical red alerts.
[Image description: A painting of a woman with red hair inspecting her hair in a hand mirror.] Credit & copyright: Jo, La Belle Irlandaise, Gustave Courbet. H. O. Havemeyer Collection, Bequest of Mrs. H. O. Havemeyer, 1929. The Metropolitan Museum of Art, Public Domain.
September 9, 2025
Music Song Curio
Indie, folk, rock…there’s no need to choose. That was seemingly the ethos of famed Canadian-American band Buffalo Springfield, which incorporated sounds from all three genres into their music, including their biggest 1960s hits. Bassist Bruce Palmer, born on this day in 1946, played a key role in the band’s unique sound on tracks like 1967’s Rock & Roll Woman. The song begins with folksy harmonies before Palmer’s groovy bassline turns it into something almost psychedelic, and jagged guitar riffs finally carry the track into pure rock territory. Though the song only reached number 44 on the Billboard Hot 100 the year it was released, it’s still one of the band’s best-remembered hits, and that’s saying a lot considering that they were inducted into the Rock and Roll Hall of Fame in 1997. Rock & Roll Woman was inspirational to many, including Stevie Nicks, who reportedly heard the song when she was 19 and felt almost as if it were describing her future life as a musician, with its lyrics about a beautiful, mysterious rockstar. Buffalo Springfield might not have been purposefully prophetic, but that doesn’t mean that Nicks was wrong!
Geography Daily Curio #3148
Maps can tell you how to get somewhere, but they’re not always great at telling you where you’re going. This especially applies to common world maps known as Mercator projection maps, which distort the real size of various places. Now, an international campaign in Africa is hoping to change the way people view the world.
The “Correct The Map” campaign, endorsed by the African Union, is using one of the oldest criticisms of the Mercator projection against it in the hopes of changing people’s perceptions about the continent. Though Africa is home to over 1.4 billion people, those looking at a world map tend to underestimate the continent’s size, population, and global significance due to the undersized portrayal that results from the Mercator projection. Developed by Flemish cartographer Gerardus Mercator in the 16th century, the Mercator projection is a cylindrical map projection that flattens the spherical world onto a rectangular grid, with meridians spaced evenly and lines of latitude growing further apart the closer they get to the poles. The Mercator projection was ideal for nautical navigation in the era before computer-assisted navigation, as it depicts rhumb lines (lines of constant course) as straight lines, making charting easier.
Beyond navigation, however, the Mercator projection has some significant flaws. Because the projection’s scale grows without bound as the lines of latitude approach the poles, landmasses closer to the poles become more and more distorted. As a result, Greenland and other landmasses near the poles appear much larger than they actually are; on a Mercator map, Greenland appears to be larger than Africa, when it’s actually a fraction of its size, as the rough sketch below shows. This might be just a curious shortcoming of an otherwise practical map projection, but supporters of Correct The Map claim that such visual distortions minimize the cultural and economic influence of Africa. Therefore, they hope to come up with an alternative that more accurately depicts the relative size of landmasses on a map to encourage a more balanced portrayal. It’s a big, wide world, and it just might call for a big, wide map.
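For the mathematically curious, here is a minimal Python sketch of why that distortion happens. On a sphere of radius R, the standard Mercator projection maps latitude to a northing of R·ln(tan(π/4 + latitude/2)), and both the east-west and north-south map scales grow as sec(latitude), so apparent areas are inflated by roughly sec²(latitude). The latitudes used below are rough, assumed values chosen only to illustrate the Greenland-versus-Africa comparison.

```python
import math

def mercator_y(lat_deg: float, radius: float = 1.0) -> float:
    """Mercator northing for a given latitude on a spherical Earth model."""
    phi = math.radians(lat_deg)
    return radius * math.log(math.tan(math.pi / 4 + phi / 2))

def area_inflation(lat_deg: float) -> float:
    """Factor by which the Mercator projection inflates apparent area at a
    given latitude: both map scales grow as sec(latitude), so area grows
    as sec^2(latitude)."""
    return 1.0 / math.cos(math.radians(lat_deg)) ** 2

# Illustrative (assumed) latitudes: central Greenland vs. equatorial Africa.
print(f"~72 N (Greenland):        areas inflated about {area_inflation(72):.0f}x")
print(f"~5 N (equatorial Africa): areas inflated about {area_inflation(5):.2f}x")
```

With Greenland inflated roughly tenfold and equatorial Africa barely inflated at all, an island about one-fourteenth the area of Africa can end up looking comparable to the whole continent.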
[Image description: A map of the world with green continents and blue oceans, without words or labels.] Credit & copyright: Noleander, Wikimedia Commons. This file is made available under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
September 8, 2025
Art Appreciation Art Curio
It was a fashion item that was shawl to please! Though mostly forgotten now, a fichu was once an indispensable part of a woman’s wardrobe throughout much of Europe in the 18th century. The picture above shows a black, triangular garment adorned with floral lace on transparent silk. A fichu was a shawl with a specific purpose. It could be either square and folded in half or made in a triangular shape, but either way, it was meant to go around the neck and come together at the front to cover the neckline. This was partly for warmth and partly for modesty, as the dresses of the 18th century and the Regency era had lower necklines than those of the periods directly before or after. Triangular fichus were sometimes made with a rounded curve on the inside to better conform to the shape of the neck. Shawls and scarves are still worn today, of course, but what set the fichu apart was the distinct purpose it served. When fashions changed and necklines rose, the fichu fell out of favor. Spare a tissue for the fichu.
Fichu, 19th century, Silk, The Metropolitan Museum of Art, New York City, New York
[Image credit & copyright: The Metropolitan Museum of Art, Bequest of Emma T. Gary, 1934. Public Domain.]
Science Daily Curio #3147
If three bodies are a problem, how bad are four? Astronomers recently discovered a quadruple star system with some unusual inhabitants. It’s one of the first times a system with four stars orbiting each other has been found. While our own solar system has just one star, out in the vast cosmos there are many systems with multiple stars. Most of them are binary systems, where two stars are locked in orbit with each other, engaged in a cosmic dance that will last eons until one of them explodes into a brilliant supernova. More rarely, the two stars combine into a single, more massive star. Lesser known are triple-star systems, also known as ternary or trinary systems. These consist of three stars locked in orbit, usually with two stars orbiting each other and a third orbiting around them both. Just a cosmic stone’s throw away, the Alpha Centauri system is known to contain three stars. To date, multiple-star systems containing up to seven stars have been discovered, but the recently discovered quadruple star system has something a little extra that makes it even rarer. Right here in the Milky Way, of all places, astronomers found a quadruple-star system containing two brown dwarfs. A brown dwarf is a rare object, also known as a “failed star.” As its nickname implies, a brown dwarf is a sub-stellar object that is too massive to be considered a planet, yet not massive enough to start and sustain nuclear fusion. Since there is no nuclear fusion taking place, brown dwarfs are cold and emit very little energy, making them difficult to find, let alone study, in contrast to their brighter, full-fledged stellar counterparts. Sure, you could argue that having two sub-stellar objects disqualifies it from being a quadruple-star system, but who’s going to complain?
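To make the planet/brown dwarf/star distinction concrete, here is a minimal Python sketch using the commonly cited mass cutoffs of roughly 13 Jupiter masses (above which brief deuterium fusion becomes possible) and roughly 80 Jupiter masses (above which sustained hydrogen fusion begins). These thresholds are general rules of thumb, not figures from the discovery described above.

```python
def classify_by_mass(mass_in_jupiter_masses: float) -> str:
    """Rough classification of an object by mass, using commonly cited
    (approximate) fusion thresholds in Jupiter masses."""
    if mass_in_jupiter_masses < 13:
        return "planet (no sustained fusion)"
    elif mass_in_jupiter_masses < 80:
        return "brown dwarf (can briefly fuse deuterium, but not hydrogen)"
    else:
        return "star (sustains hydrogen fusion)"

for mass in (1, 30, 100):
    print(f"{mass} Jupiter masses -> {classify_by_mass(mass)}")
```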
[Image description: A digital illustration of stars in a dark blue sky with a large, brown star in the center.] Credit & copyright: Author-created image. Public domain.
September 7, 2025
Style PP&T Curio
If you think bras are uncomfortable, try wearing corsets in the middle of summer. Brassieres have been around in some form for millennia, but the modern bra has only been popular for around a century. Some call them indispensable, some call them oppressive. Whether you love them or hate them, one thing’s for sure: they’ve got quite the history.
Before the advent of underwires, nylon, and adjustable straps, women used a variety of undergarments to support and cover their breasts. As far back as the 4th and 5th centuries B.C.E., the ancient Greeks and Romans used long pieces of fabric to make bandeau-like garments. The Romans called theirs a strophium or mamillare, and these early predecessors of the bra lifted, separated, and supported the breasts. In the following eras, various other cultures developed breast-supporting garments of their own, though there isn’t much surviving documentation describing them. One medieval-era bra discovered in Austria resembles a modern bra or bikini top, consisting of two fabric “cups” meant to provide support, held up by straps made from the same material. During the 14th century, French surgeon Henri de Mondeville described a garment worn by many women of that time, which he likened to two bags of fabric fitted tightly on the breasts and fastened into place with a band. Starting in the 16th century, however, corsets were introduced in Europe and became favored by middle- and upper-class women. Corsets were made to measure and wrapped around the wearer’s waist. Sewn into the layers of fabric was boning, often made of whale baleen (so-called whalebone) or, later, steel. Unlike earlier iterations of the bra, corsets provided support from below by lacing around the torso. They also had a thinning effect on the waist. Contrary to popular belief, corsets weren’t necessarily uncomfortable, provided they were properly fitted. While tightlacing trends sometimes took the idea of thinning women’s waists to extremes, these were exceptions, not the rule.
Even if corsets were perfectly comfortable, their days were numbered. Their downfall in popularity ultimately came down to their need for boning. The first iteration of the modern bra was invented in 1893 by Marie Tucek, whose patent shows a bra with shoulder straps and a pair of connected cups that support the breasts from underneath. Tucek’s bra didn’t catch on widely, but a few decades later in 1913, the modern bra as most people know it was invented by Mary Phelps Jacob. Jacob was a New York socialite who was sick of corset boning showing through the thinner fabrics of the fashionable evening dresses of the time. Her version was originally made of two silk handkerchiefs fastened onto her with ribbons, and she later created an improved version which she patented in 1914. Jacob also gave the garment its name, with “brassiere” being based on the French word for “bodice” or “arm guard.” Jacob first gave out her bras to friends and family by request, and later sold them commercially under the name “Caresse Crosby.” She later sold her patent to the Warner Brothers Corset Company, and the bras grew in popularity over the years.
The killing blow to the corset was actually World War I. During the war, steel rationing meant that most women were encouraged to give up corsets, which in the U.S. used up enough steel to build two battleships. With a better alternative, there was no reason not to switch to bras.
Over the last century, bras have undergone many changes, usually reflecting the fashions of the time. New materials like Lycra (better known as Spandex) and nylon allowed for more comfortable bras, while “bullet bras” or “cone bras,” with their exaggerated shapes, were designed purely for aesthetic reasons. Today, the bra has evolved from undergarment to essential sportswear for women with the advent of sports bras, which prioritize functionality and comfort, and are often worn without an additional layer of clothing over them. And not a sliver of whale bone in sight!
[Image description: A modern black bra against a white background.] Credit & copyright: FlorenceFlowerRaqs, Wikimedia Commons.
September 6, 2025
Sports Sporty Curio
Even elbow grease has its limits. The San Francisco Giants have announced that their relief pitcher, Randy Rodriguez, will have to sit out the rest of the season due to an elbow surgery that’s getting more and more common among baseball players. Known as Tommy John surgery, the operation repairs a type of injury that could easily have ended an athlete’s career just decades ago. Named after the first baseball player to undergo the procedure in 1974, it involves taking a tendon from a donor site (usually the wrist or forearm) and using it to reconstruct the ulnar collateral ligament (UCL). As of 2023, around 35 percent of pitchers in the MLB have received the surgery. More surprising, however, is the number of young baseball players who have needed it. In fact, over half of all people who have had their UCL repaired are athletes between the ages of 15 and 19, mostly baseball players. Increased specialization in the sport, combined with year-round training and increasing competition, has made injuries more common. Those who pitch more often and at faster speeds are more likely to injure their UCL, and young athletes aiming for scholarships are no exception. The trend is so pronounced among young athletes that they are the fastest-growing segment of the population to receive the surgery. They really do grow up so fast.
September 5, 2025
Mind + Body Daily Curio
It’s small, but there’s no doubt that it’s mighty! Jamaican cuisine is known for its bold, spicy flavors, and nothing embodies that more than one of the island’s most common foods: Jamaican beef patties. These hand pies have been around since the 17th century, and though they’re one of Jamaica’s best loved dishes today, the roots of their flavor stretch to many different places.
Jamaican beef patties, also known simply as Jamaican patties depending on their filling, are a kind of hand pie or turnover with a thick crust on the outside and a spicy, meaty mixture on the inside. The sturdy-yet-flaky crust is usually made with flour, fat, salt, and baking powder, and gets its signature yellow color from either egg yolks or turmeric. The filling is traditionally made with ground beef, aromatic vegetables like onions, and spices like garlic, ginger, cayenne pepper powder, curry powder, thyme, and Scotch bonnet pepper powder. Some patties use pulled chicken in place of beef, and some are vegetarian, utilizing vegetables like carrots, peas, potatoes, and corn.
It’s no coincidence that Jamaican beef patties bear a resemblance to European meat pies. Similar foods first came to the island around 1509, when the Spanish began to colonize Jamaica, bringing turnovers with them. In 1655, the British took control of the island from Spain, and brought along their own Cornish pasties. These meat pies, with their hard crust, were usually served with gravy. It didn’t take long, however, for Jamaicans and others living in the Caribbean to make the dish their own. Scotch bonnet peppers, commonly used in Jamaican cuisine, were added to the beef filling, while Indian traders and workers added curry powder and enslaved Africans made their patties with cayenne pepper. The patties were made smaller and thinner than Cornish pasties and were served without gravy or sauce, making them easier to carry around and eat while working. Today, the patties are eaten throughout the Caribbean, and regional variations are common. From Europe to Asia to the Caribbean, these seemingly simple patties are actually a flavorful international affair!