Curio Cabinet / Nerdy Curio
-
Science / Nerdy Curio
Some things are made to last, but that’s not always a good thing. Microplastics, tiny, sometimes microscopic bits of plastic that have shown up in everything from snowfall to the human bloodstream, have captured public attention in recent years. Now, a study published by a group of researchers in Vienna, Austria, in the journal Chemosphere states that microplastics may be linked to rising rates of colorectal cancer in young people. Every week, the average person breathes in or ingests around 0.176 ounces of plastic (about the weight of a credit card), most of which ends up in the gastrointestinal tract. The good news is that it doesn’t all stick around. The bad news is that the stuff that does is still dangerous. According to the new research, nanoplastics (particles that are one micrometer or smaller) can stay inside a person’s body longer than previously thought and can even be passed on to new cells during cell division. These micro- and nanoplastic particles (MNPs) are difficult to get rid of because, unlike other foreign materials, they aren’t broken down by a cell’s lysosomes. This is particularly dangerous when they end up in a cancer cell, because MNPs were found to increase cell migration, which, for cancer cells, means metastasis, or malignant growths that spread from the original cancer site. The researchers therefore believe that MNPs could be at least partially responsible for the recent worldwide rise in colorectal cancer rates, especially in those under 50 years old. Indeed, they found that colorectal cancer rates have been on the rise since the 1960s, when inexpensive plastics started to become ubiquitous. Since then, practically everyone has been consuming plastic to some degree. It seems that this material was never cheap after all…the bill was just overdue.
[Image description: A plastic cup half-covered by sand on a beach.] Credit & copyright: Hamsterfreund, Pixabay
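The weekly figure above is easy to sanity-check. A quick back-of-the-envelope conversion (not from the study itself) shows why 0.176 ounces is compared to a credit card, and what it adds up to over a year:

```python
# Sanity check of the cited weekly plastic-intake figure: 0.176 oz/week.
OZ_TO_GRAMS = 28.3495

weekly_oz = 0.176
weekly_g = weekly_oz * OZ_TO_GRAMS  # ~5 g, roughly the weight of a credit card
yearly_g = weekly_g * 52            # cumulative intake over a year

print(f"Weekly intake: {weekly_g:.1f} g")
print(f"Yearly intake: {yearly_g:.0f} g (~{yearly_g / 1000:.2f} kg)")
```

That works out to roughly a quarter-kilogram of plastic per person, per year.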
-
Science / Nerdy Curio
Will they or won’t they? That’s the question astronomers from Stanford University are asking in a paper published in The Astrophysical Journal regarding a black hole binary with some unique features. A black hole binary is a system of two black holes that orbit one another. While stellar-mass black holes are known to sometimes merge together, supermassive black holes (which are many times the mass of even the largest stars) have never been observed doing so. Whether or not they can merge has been a subject of debate among astronomers for decades. The matter may soon be settled, though, thanks to data collected by the Gemini Observatory regarding a black hole binary system called B2 0402+379. This binary is unusual in a number of ways: firstly, it’s the only instance to be observed in enough detail to see each black hole separately, despite there being just 24 light-years between them (yes, that distance is considered close for black holes). Meanwhile, the binary is the heaviest of its kind at around 28 billion times the mass of the sun. For astronomers studying B2 0402+379, this last bit of information was key. They concluded that the unusually large mass of the two objects likely allowed them to completely obliterate any stars and other matter from their respective galaxies that would have slowed down their orbit. Without any matter remaining, their orbit effectively stalled, and they’ve stayed where they are for the last 3 billion years. Typically, stellar-mass black holes at this stage emit gravitational waves that sap their orbital momentum, causing them to merge. But it appears that the sheer mass of this binary has allowed it to remain eternally stalled. More time and data are still needed to know what will happen for sure, but until then, the debate, much like the binary’s orbit, has finally wound down.
[Image description: A sky full of stars that appear to be “swirling.”] Credit & copyright: Faik Akmd, Pexels
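The two reported figures, 28 billion solar masses and a 24-light-year separation, are enough for a rough orbital-period estimate via Kepler's third law. This is a simplified sketch, not a result from the paper: it assumes a circular orbit with the quoted separation as the semi-major axis.

```python
import math

# Rough orbital-period estimate for the B2 0402+379 binary via Kepler's
# third law: T = 2*pi * sqrt(a^3 / (G*M)). Assumes (simplistically) a
# circular orbit, the quoted 24-light-year separation as semi-major axis,
# and the quoted 28 billion solar masses as total mass.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
LY = 9.461e15       # light-year, m
YEAR = 3.156e7      # year, s

total_mass = 28e9 * M_SUN
separation = 24 * LY

period_s = 2 * math.pi * math.sqrt(separation**3 / (G * total_mass))
print(f"Orbital period: ~{period_s / YEAR:,.0f} years")
```

Even under these crude assumptions, a single orbit takes on the order of ten thousand years, which is why astronomers infer the stall from the system's geometry rather than watching it move.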
-
Science / Nerdy Curio
Archaeology can be a sticky business. Neanderthals are usually thought of as less intelligent than our Homo sapiens ancestors, but mounting evidence suggests that they were more like us than we realize. In fact, a paper recently published in the journal Science Advances by archaeologists from New York University, the University of Tübingen, and the National Museums in Berlin details how Neanderthals created their own specialized adhesive. As ancient species go, Neanderthals get a pretty bad rap. Discovered in the mid-1800s, these cousins of modern humans were long thought to have been much less intelligent than us. But more recent discoveries have revealed that Neanderthals had distinct, developed cultures, and genetic testing has revealed that they likely interbred with early Homo sapiens. While reexamining some stone tools made by Neanderthals that were unearthed in the early 1900s, researchers recently found traces of bitumen (a naturally occurring petroleum-based substance) and ochre (a deep yellow mineral) mixed together. The purpose of these substances wasn’t clear at first. Bitumen, while sticky, is difficult to work with. Ochre, on the other hand, would inhibit the adhesive property of bitumen. However, when they mixed the two substances together using fresh samples of both, researchers made an easily workable adhesive that is just sticky enough to hold a stone tool together, but not sticky enough to bind skin. The material was easily moldable, so it could be fitted to tools in order to improve grip. You’ve gotta hand it to them: Neanderthals were crafty folks.
[Image description: A painting of a family of six Neanderthals at the mouth of a cave. One of them carries a spear.] Credit & copyright: Charles Robert Knight (1874–1953). Neanderthal Flintworkers, Le Moustier Cavern, Dordogne, France. 1920. Wikimedia Commons. The author died in 1953, so this work is in the public domain in its country of origin and other countries and areas where the copyright term is the author's life plus 70 years or fewer. This work is in the public domain in the United States because it was published (or registered with the U.S. Copyright Office) before January 1, 1929.
-
Work / Nerdy Curio
A can of worms seems to have opened in the Big Apple. In 2019, New York City’s MTA announced that the city would soon implement a congestion pricing program, which would charge a fee for any non-commercial vehicle entering Manhattan. While environmentalists cheered the idea, which is meant to dissuade car use and limit air pollution, plenty of New York drivers weren’t fond of it. Several lawsuits sprang up, with the most recent being brought by a group of around 50 New York City small business owners. On February 27, the plaintiffs held a rally at City Hall where they spoke about their concerns. Mainly, their fear is that congestion pricing will drive them out of business by keeping too many potential customers out of Manhattan. The MTA’s current plan, which will go into effect this summer, is to charge non-commercial vehicles entering Manhattan south of 60th Street a fee of $15 if they use E-ZPass and $22.50 if they don’t. It’s easy to see why business owners might be bothered by the plan, but the MTA has pointed out that much of the funding for their capital plan for 2020 through 2024 depends on the revenue that congestion pricing will generate. If lawsuits delay the program’s implementation, some major city renovation projects may have to be put on hold. For now, it seems that the city’s business owners, drivers, and agencies are caught between a rock and an economic hardship.
[Image description: An AI-generated illustration of a car made from money.] Credit & copyright: adamlapunik, Pixabay
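The quoted fees add up quickly for anyone driving in every workday. A quick illustration, with the caveat that the 250-workday figure is an assumption for this sketch, not part of the MTA's plan:

```python
# Back-of-the-envelope annual cost of the quoted congestion fees for a
# hypothetical daily commuter. The 250-workday count is an assumption
# for illustration only.
EZPASS_FEE = 15.00
NON_EZPASS_FEE = 22.50
WORKDAYS_PER_YEAR = 250

ezpass_annual = EZPASS_FEE * WORKDAYS_PER_YEAR
non_ezpass_annual = NON_EZPASS_FEE * WORKDAYS_PER_YEAR

print(f"With E-ZPass:    ${ezpass_annual:,.2f} per year")
print(f"Without E-ZPass: ${non_ezpass_annual:,.2f} per year")
```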
-
Science / Nerdy Curio
In the future, you might find yourself picking your nose…from a shelf. According to a recently published paper in the journal ACS Sensors, a team of researchers at the Hefei Institutes of Physical Science of the Chinese Academy of Sciences has discovered a novel way to make a reliable, portable “e-nose.” It might seem like everything’s getting an unnecessary e-prefix these days, but an e-nose could actually be important for detecting volatile organic compounds (VOCs) in the air. There are sensors that predate this latest innovation, but they’ve been cumbersome and unreliable. Even the ubiquitous breathalyzer is a far cry from a true olfactory sensor, as it can only detect alcohol and its accuracy is sometimes questionable. Currently, the best way to detect dangerous gases is by taking an air sample to a lab, and that might not be practical when, for example, there’s an impending explosion from a natural gas leak. But the new e-nose is different; it’s smaller, more portable, and more sensitive than detectors of the past, which could make it usable in the field and in emergencies. Researchers managed to create the sensor by using a chemiresistor (a material with electrical resistance that varies in the presence of different oxidizing gases) consisting of a tungsten trioxide (WO3) nanorod film and taking advantage of the material’s extremely fast thermal relaxation time. The film, which acts as both a sensing layer and a self-heating layer, reacts to 12 different types of gas molecules in a second or less. If the device works as intended, it could be used for everything from detecting food spoilage to hazardous waste cleanups. Finally, you won’t have to rely on your own sense of smell to know if that month-old jug of milk is okay to drink.
[Image description: A French bulldog sniffs purple flowers.] Credit & copyright: Mylene2401, Pixabay
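For a sense of how a chemiresistor reading becomes a detection, here is a minimal sketch of the convention commonly used for metal-oxide gas sensors: the response is the ratio of the sensor's baseline resistance in clean air to its resistance in the target gas. All the numbers here are hypothetical, chosen only to illustrate the calculation, and are not from the paper.

```python
# Minimal sketch of how a chemiresistor reading is commonly quantified.
# For an n-type oxide film exposed to a reducing gas, resistance drops,
# so response = R_air / R_gas; values well above 1 indicate detection.
# All values below are hypothetical, for illustration only.
def sensor_response(r_air_ohms: float, r_gas_ohms: float) -> float:
    """Response ratio of baseline (clean-air) resistance to in-gas resistance."""
    return r_air_ohms / r_gas_ohms

baseline = 1.2e6  # hypothetical resistance in clean air (ohms)
in_gas = 3.0e5    # hypothetical resistance in the target gas (ohms)

print(f"Response: {sensor_response(baseline, in_gas):.1f}")
```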
-
Biology / Nerdy Curio
Roses are red, blueberries too, if it’s true what we said, then what’s with the hue? Blueberries might look blue, but they’re actually red. Now, scientists at the University of Bristol have revealed how this perennial favorite of the produce aisle creates its deceptive coloration in a paper published in the journal Science Advances. The secret lies in the beloved berries’ waxy skin. As anyone who has gotten blueberry juice on their clothing can tell you, blueberries don’t turn things blue, but rather a reddish purple. That’s because blueberries don’t actually have any blue pigment in their skin. But they have plenty of red pigment, despite appearances. The cause of the dark blue coloring, it seems, is the crystal structure of the wax on blueberries’ skin, which scatters blue and UV light. This is a similar mechanism to what makes some birds appear to have blue feathers, despite birds being incapable of actually producing blue pigment. Incredibly, the waxy layer responsible for this phenomenon is only two microns thick (a single micron is one-millionth of a meter). After discovering the waxy layer, scientists went a step further and removed the wax covering, allowing it to re-crystallize on a card. The result was the same blue coloration, and they believe this could one day be used to make environmentally friendly—and possibly even edible—blue reflective paint. Maybe one day we’ll look back at this discovery and remember how it all blue up.
[Image description: A close-up photo of a pile of blueberries.] Credit & copyright: borislagosbarrera, Pixabay
-
Political Science / Nerdy Curio
Outer space isn’t NASA’s only concern; they care about earthly business too. NASA recently named Dwight Deneal as a new assistant administrator for their Office of Small Business Programs (OSBP). The move has brought some publicity to the little-known office. One doesn’t usually think of small businesses in relation to NASA. Yet, small businesses are actually vital to helping NASA function. In fact, the organization has worked with hundreds of small businesses over the years. Besides providing proprietary technologies for things like the James Webb Space Telescope, small businesses that specialize in logistics have helped NASA manage, track, and document various projects. Budget management is another area where small businesses have stepped in to lend the government agency a hand. Of course, NASA isn’t the only government agency or office that relies on small business contracts. Before his recent appointment, Deneal worked as director for the Defense Logistics Agency’s Office of Small Business Programs, where he contracted small businesses to work with the agency and promoted programs to incentivize small businesses to work with the U.S. military. Even when thinking big, it can behoove U.S. agencies to think small.
-
Physics / Nerdy Curio
Going green doesn’t have to be more expensive. Just ask the team of researchers at the University of Oregon who are working on a cheaper, more eco-friendly way to produce metallic iron. Their findings, recently published in Joule, could profoundly change industries that rely on steel, an alloy of iron and carbon. It’s hard to overstate the importance of steel in the modern world, yet producing the ubiquitous metal causes eight percent of all annual carbon emissions. However, the researchers in Oregon have been working on an electrochemical method that only uses saltwater, iron oxide, and some electricity. The process involves submerging an iron oxide cathode at one end of a saltwater bath and a positively charged electrode (an anode) at the other. When a current runs through the setup, oxygen atoms are released from the cathode and bind with the sodium in the saltwater. The end products are pure iron, chlorine, and sodium hydroxide. Researchers say that chlorine, which has many industrial uses, could potentially be sold to offset the cost of the process, while sodium hydroxide can bind with CO2, which means the process can also be carbon-negative. Unlike a furnace, which needs a steady supply of fuel, this process could run entirely on renewable energy. Before this can be scaled up for industrial use, though, two major hurdles must be overcome. The first is that the process only works with pure iron oxide, and iron ore is rarely so pure. The second issue is that it would potentially create much more chlorine than would ever be needed, and there needs to be a way to store it safely. It would be pretty ironic to reduce carbon emissions only to release a ton of deadly chlorine.
[Image description: A close-up photo of metal chain links.] Credit & copyright: analogicus, Pixabay
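The chlorine-surplus problem follows directly from charge balance, and a bit of stoichiometry makes it concrete. This is a simplified sketch under textbook assumptions (each iron atom gains three electrons, each chlorine molecule comes from two chloride ions losing one each), ignoring side reactions like oxygen evolution at the anode:

```python
# Stoichiometric sketch of the chlorine surplus in the electrochemical
# iron process. Assuming Fe(III) -> Fe takes 3 electrons per iron atom
# and 2Cl- -> Cl2 gives up 2, charge balance pairs 2 mol of Fe at the
# cathode with 3 mol of Cl2 at the anode (6 mol of electrons each).
M_FE = 55.845   # molar mass of iron, g/mol
M_CL2 = 70.90   # molar mass of Cl2, g/mol

cl2_per_kg_fe = (3 * M_CL2) / (2 * M_FE)
print(f"~{cl2_per_kg_fe:.1f} kg of Cl2 per kg of iron produced")
```

Under these assumptions, every kilogram of iron comes with nearly two kilograms of chlorine, which is why storage and offtake are serious hurdles for scaling up.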
-
Entrepreneurship / Nerdy Curio
Shark Tank is a show that has had its fair share of odd moments. The program features scrappy entrepreneurs pitching their products and services to a panel of investors, or “sharks.” As is often the case with reality TV, wacky situations sometimes ensue…yet some turn out to be less wacky than they initially seem. Take the HummViewer, a product pitched on Shark Tank that, at first glance, seemed fairly off-the-wall. Essentially, HummViewer is a plastic face mask with flower-shaped hummingbird feeders attached to it, allowing its wearer to get up close and personal with the elusive little creatures. In a recent segment, HummViewer co-founders Joan and John Creed spoke about the impact of their 2022 appearance on Shark Tank. Far from seeing their product as a joke, viewers immediately flocked to the small company’s website, buying over $102,000 worth of product in a single day. Revenue has only increased since then, so much so that the founders were able to quit their full-time jobs to focus solely on their small business. While only around 29 percent of businesses that appear on the show end up finalizing a business deal, it’s clear that the “television effect” can be a real boon to businesses…even quirky ones.
[Image description: A green-and-white hummingbird, mid-flight.] Credit & copyright: JillWellington, Pixabay
-
Science / Nerdy Curio
Cold weather is nothing to sneeze at. The decline of the Roman Empire has been studied by many historians through the centuries, but a group of international researchers has become the first to explore climate change as a potential factor, according to a paper published in the journal Science Advances. The Roman state was at its peak between 200 BCE and 100 CE, after which it was beset by plagues and declining agricultural production, eventually leading to the fall of the Western Roman Empire in 476 CE, when its last emperor was deposed. Then, during the sixth century, in the early days of the Eastern Roman Empire, the Plague of Justinian (a pandemic caused by the bubonic plague) killed around half of all Romans and millions more in the surrounding regions. Now, an analysis of the changing climate between 200 BCE and 600 CE has revealed that the outbreaks of disease and poor agricultural yields were caused by periods of colder, drier weather when temperatures dipped by as much as 37 degrees Fahrenheit. Researchers were able to determine the climate of the past by taking samples of different layers of marine sediment in the Gulf of Taranto off the southern coast of Italy. The sediment contained the fossilized remains of dinoflagellates, microorganisms that are sensitive to changes in sea temperature. Different temperatures led to the rise and fall of different species, and scientists were able to figure out what the temperature was during a given time by examining which species were more prevalent. Given that such small differences in temperature had such an impact on history, the researchers hope that this discovery might shed some light on the relationship between pandemics and climate change in the near future. Hopefully history doesn’t repeat with heat.
[Image description: A portion of the painting Saint Sebastian Interceding for the Plague Stricken, showing Roman priests praying and onlookers crying as workers place wrapped bodies into graves.] Credit & copyright: Saint Sebastian Interceding for the Plague Stricken, Josse Lieferinxe (–1508), Wikimedia Commons. This work is in the public domain in its country of origin and other countries and areas where the copyright term is the author's life plus 100 years or fewer.
-
FREEManagement Nerdy CurioFree1 CQ
For some workers, 2024 has gotten off to a rough start. Massive layoffs in the tech industry and stagnant wages are just two of the challenges many working Americans are facing at the moment. So it stands to reason that many people were surprised when Walmart bucked the trend and announced significant new perks for store managers. In mid-January, the retail giant stated that it was boosting managers’ average pay to $128,000 per year and making them eligible for salary bonuses. Shortly thereafter, the company announced that store managers will get up to $20,000 in Walmart stock grants every year. The exact amount of the grants will depend on store size, with Hometown store managers getting $10,000, Neighborhood Market and Division store managers getting $15,000, and Supercenter managers getting $20,000. Altogether, this means that some Walmart managers could end up making more than $525,000 a year—a remarkable salary for a position that doesn’t require a four-year degree. In a statement on LinkedIn, John Furner, president and CEO of Walmart U.S., explained, “A Walmart store manager is running a multi-million dollar business and managing hundreds of people, and it's a far more complex job today than when I managed a store…We ask our managers to own their roles and act like owners. Now, they’ll literally be owners.” Here’s hoping this kicks off a new kind of trend in the retail world.
[Image description: A digital illustration of a pile of dollar bills.] Credit & copyright: geralt, Pixabay
-
FREENerdy CurioFree1 CQ
Stars might fade, but they’re not always forgotten. Astronomers at the University of Chicago and the Sloan Digital Sky Survey (SDSS) have found evidence of a very old, very strange star that was unlike anything they’d seen. In fact, it seems to defy existing models of star formation and death. Around 13 billion years ago, a star formed that was about 50 times the mass of Earth’s sun. It was so big that the astronomers who discovered it are calling it a “Blockbuster.” Today, only elemental traces are left from its supernova, and those traces are what researchers discovered. However, based on their data, the star shouldn’t have existed or gone supernova as it did. It’s so strange that they named it Barbenheimer, after last year’s unlikely box office pairing of Barbie and Oppenheimer that dominated the cultural landscape over the summer. The unusual nature of the star can be seen in the elements it left behind. In a statement by the SDSS, Alex Ji of the University of Chicago and SDSS explained that the star produced large amounts of elements found near iron on the periodic table, including nickel and zinc. At the same time, it also produced small amounts of odd-numbered elements and large amounts of heavier elements like strontium and palladium. These features aren’t rare on their own, but they’ve never been seen together in a single star before. Furthermore, the star’s immense mass should have caused it to collapse into a black hole when it died. Instead, it went supernova, ejecting the strange mix of elements it produced during its life. The discovery shows that current computer models and simulations of star deaths may be inaccurate, and that a better understanding of the conditions present in the early days of the universe may be needed to explain how such stars existed. It might involve a mixture of elements, but this case is far from elementary.
[Image description: A starry sky with a purple hue.] Credit & copyright: StockSnap, Pixabay
-
FREEScience Nerdy CurioFree1 CQ
Norman Vincent Peale once said, "Shoot for the Moon. Even if you miss, you'll land among the stars." In real life, though, you just crash back to Earth, as with the recent failure of Astrobotic’s unmanned Peregrine moon lander. Launched via the United Launch Alliance (ULA) Vulcan Centaur rocket, the Peregrine was supposed to be the first commercial moon lander, and the first U.S. craft to land on the moon in over 50 years. It’s the first mission in the Commercial Lunar Payload Services (CLPS) initiative, in which NASA partners with private companies to conduct lunar lander missions. However, the historic mission ran into problems soon after the probe separated from its booster. After making it into deep space, the probe started leaking propellant, sending it tumbling off course until engineers managed to get it back under control. Unfortunately, the leak is forcing the thrusters to work harder than they were designed to, which will cause them to fail prematurely. The thrusters aren’t just responsible for maneuvering the probe; they were also meant to keep the Peregrine pointed at the sun so that it could power itself with solar energy. Once the thrusters fail, the probe won’t be able to orient itself, leading to a loss of power. Astrobotic initially estimated that the Peregrine might last for 40 hours or so, but it has outlasted that prognosis thanks to the propellant leak slowing somewhat in recent days. Originally, the Peregrine was supposed to land on the moon on February 23, but now the hope is to get it as close to the moon as possible in the time it has left. Once the probe’s thrusters fail, it will likely crash back down to Earth along with the five NASA experiments it was carrying and 15 other payloads from paying customers. Among the payloads were human remains that were supposed to be buried on the moon’s surface. They’ll likely have to settle for a burial at sea instead.
[Image description: A detailed photo of the moon, partially in shadow.] Credit & copyright: Ponciano, Pixabay
-
FREEAstronomy Nerdy CurioFree1 CQ
This astronomical event is going to be a de-light. Astronomers and stargazers are buzzing about the total solar eclipse that will occur on April 8 this year. Such eclipses take place when the sun, moon, and Earth line up in such a way that the moon blocks out most of the sun. For this to happen, the moon has to be at a lunar node, one of the points where its orbit around the Earth crosses the plane of the Earth’s orbit around the sun. Secondly, the moon has to be at or near perigee, the point in its elliptical orbit where it comes closest to Earth. Finally, it has to be a new moon, meaning the moon is between the sun and the Earth. It might sound like a rare occurrence, but it actually happens regularly—every 18 months or so, in fact. While total solar eclipses are common, they’re rarely seen by many people because the total phase of the eclipse is only visible within the path of totality—a 90-mile-wide band. But the eclipse that will occur this April is special because the path of totality will cross North America, rendering the total eclipse visible to around 43 million people (about 0.5 percent of the world’s entire population). It’ll be 20 years before an eclipse like it happens again. So grab your eclipse glasses and keep your eyes peeled.
[Image description: A solar eclipse in space.] Credit & copyright: Caldwbr, Wikimedia Commons. Creative Commons CC0 1.0 Universal Public Domain Dedication. The person who associated a work with this deed has dedicated the work to the public domain by waiving all of their rights to the work worldwide under copyright law, including all related and neighboring rights, to the extent allowed by law.
-
FREEFinance Nerdy CurioFree1 CQ
The new year is off to a rough start for aerospace giant Boeing. Recently, a frightening incident occurred involving a Boeing plane flying for Alaska Airlines. On its way from Oregon to California, a Boeing 737 Max 9 aircraft was left with a large hole in one side of the fuselage after a door plug suddenly blew out. The plane made an emergency landing, and thankfully no one was seriously injured, since the seats closest to the hole happened to be unoccupied. Still, the incident has left travelers and investors shaken. After the Federal Aviation Administration ordered all similar aircraft to be grounded, Boeing’s shares dropped by more than 8.5 percent. Not only that, but shares of Spirit AeroSystems, a company that manufactures and installs certain parts for Boeing, fell by around 11.5 percent. It’s the latest in a years-long string of issues for Boeing. Some of its aircraft were similarly grounded in 2019, following two crashes that killed 346 people. Soon after, the pandemic made air travel nearly impossible, causing more losses for the company. Who knows whether 2024 will be the year things turn around, or whether we’ll simply be seeing more aircraft turn around mid-flight.
[Image description: A sky at sunset filled with contrails, or airplane tracks.] Credit & copyright: rpdesignpro, Pixabay
-
FREEComputer Science Nerdy CurioFree1 CQ
How do you keep a bank robber out of the vault when they can walk through the walls? That question could be important in the near future thanks to quantum computers. According to a report in the journal Science by computer scientist Oded Regev at New York University (NYU), quantum computers may soon be able to break through the digital barriers meant to keep data safe. Currently, computers store data securely via encryption, which, in simple terms, is a barrier made of astronomically complex math problems. In principle, encryption makes it infeasible for another computer to perform all the calculations necessary to access the data. However, scientists have been hypothesizing about the potential of quantum computers for decades, mainly their ability to outperform conventional computers, which are limited by classical physics. Long story short, classical computing is based on binary bits, which are always either on or off, one or zero. Quantum computers, by contrast, process information using qubits, units of information that can exist in a superposition of one and zero at the same time. Essentially, this means that certain calculations can be performed exponentially faster.
For instance, with current RSA encryption, data-access keys are derived from the product of two large prime numbers. Without the private key, the encrypted data is inaccessible. To crack the encryption, a computer would have to test an enormous number of candidate factors to find the right primes. That could take years for a conventional computer, and even the latest quantum computer would need around 104 days to crack 2048-bit RSA encryption. But Regev claims to have created a new algorithm that reduces the number of calculations required by several orders of magnitude, allowing quantum computers to find a number’s prime factors much more quickly by performing multiple small calculations concurrently instead of one large calculation. For better or worse, it could prove to be a quantum leap for computing.
[Image description: An AI-generated image of purple-tinted internal computing hardware.] Credit & copyright: SuttleMedia, Pixabay
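The prime-factor idea above can be sketched in a few lines of Python. This is a toy illustration only: the primes, exponent, and message below are made-up example values (real RSA uses primes hundreds of digits long), not anything from Regev's paper.

```python
# Toy RSA sketch: keys come from the product of two primes,
# and "cracking" the key means recovering those primes from n.
p, q = 61, 53
n = p * q                   # public modulus: the product of two primes
phi = (p - 1) * (q - 1)     # Euler's totient of n
e = 17                      # public exponent, chosen coprime to phi
d = pow(e, -1, phi)         # private exponent: modular inverse (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # encrypt: message^e mod n
recovered = pow(ciphertext, d, n)  # decrypt: ciphertext^d mod n
assert recovered == message

# Brute-force factoring is instant for n = 3233, but infeasible for a
# 2048-bit modulus on classical hardware -- that gap is the security.
small_factor = next(f for f in range(2, n) if n % f == 0)
print(small_factor, n // small_factor)  # recovers p and q: 53 61
```

With the factors in hand, an attacker can recompute `phi` and therefore the private exponent `d`, which is why a fast factoring algorithm, quantum or otherwise, would break the scheme.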
-
FREEEntrepreneurship Nerdy CurioFree1 CQ
Tenacity is at the heart of what all entrepreneurs do, but some need even more of it than usual. Ali Elreda, owner of Fatima’s Grill in Downey, California, had an unconventional start to his business career. He learned to cook while serving a prison sentence, beginning in 2009, for intent to distribute cocaine. While incarcerated, Elreda signed up to work in the prison kitchen and took culinary classes. He hoped to get a job in a professional kitchen once he was freed, but finding employment is notoriously difficult for those who have served time. Even though he loved cooking and had spent years honing his skills, he struggled to get a restaurant job while living at a halfway house in 2011. Luckily, Elreda’s family saw the value in what he’d learned. His cousin, who owned a stall at a meat market in their shared hometown of Bell, California, offered him a cooking job. Elreda did such an impressive job that, when the cousin moved overseas, he sold Elreda the stall for $80,000. It was a big investment, but Elreda immediately made the most of it, combining Mediterranean and Mexican-American cuisines to create colorful, innovative dishes. Soon, Elreda’s food was going viral on social media. Since his food stall was small and had low overhead costs, he was able to save aggressively, and with quite a bit of social media hype behind him, he opened his own restaurant in 2016. Named after his 19-year-old daughter, Fatima’s Grill is a popular spot both in person and online. Six new locations have opened since 2021, all in Southern California. The restaurant’s official social media profiles have around 1.4 million followers on TikTok and Instagram combined. The business now brings in over $1 million per year, proving that no matter where you start, tenacity can ensure that you end up somewhere amazing.
-
FREESTEM Nerdy CurioFree1 CQ
Infections are bad enough, but drug-resistant ones are just plain scary. Methicillin-resistant Staphylococcus aureus (MRSA) affects around 80,000 people a year in the U.S. alone, and the infection is notoriously difficult to treat, as it is resistant to many drugs. However, there may be a new treatment on the horizon thanks to researchers at MIT. Cases of MRSA, a type of staph infection, start out looking innocent enough, but it’s one of the most dangerous infections someone can get. At first, a person may develop a small rash or a red bump that feels warm to the touch, but these quickly grow into pus-filled boils that create large abscesses just under the skin. From there, the infection can spread to the rest of the body via the bloodstream, leading to sepsis, pneumonia, and other potentially deadly conditions. Not only is MRSA resistant to antibiotics, it’s often contracted at the very places people go to seek treatment for other medical conditions—hospitals, long-term care facilities, and dialysis centers.
Treatment-resistant infections like MRSA develop because, just like larger animals, bacteria reproduce and can evolve over time. Just as an animal can adapt to outrun specific predators, the bacteria that cause MRSA have evolved to withstand many common antibiotics, leaving doctors scrambling to find drugs that can actually treat these infections. Since it was first described in 1961, drug-resistant MRSA has continued to infect more people each year. In a bid to help with this growing problem, researchers at MIT trained an AI model to pinpoint drugs that might be able to help. They did this by feeding it data on around 39,000 drug compounds. The AI then identified which of the compounds had antibiotic properties based on their chemical structure, and this list was further whittled down by having the model predict each compound’s potential toxicity to various human cells (liver, skeletal muscle, and lung cells). In the end, the researchers identified 280 promising compounds, some of which proved effective in treating MRSA in mice. Finding an effective treatment might still be like finding a needle in a haystack, but with AI, we’ve never been able to sort through the hay quite so fast.
[Image description: A tipped-over bottle with many different colored pills spilling out.] Credit & copyright: kravaivan11, Pixabay
-
FREEMath Nerdy CurioFree1 CQ
Fortune-telling isn’t a bunch of hooey—it’s a bunch of math and programming. Researchers at the Technical University of Denmark have created an AI that uses so-called “fortune-telling algorithms” to predict when someone is likely to die, according to a paper published in the journal Nature Computational Science. There are probably very few people who would like to know when they will die, but there’s still a whole industry behind figuring out that information; and no, it’s not the magic industry. Insurance companies hire actuaries to figure out which people are likely to die in which ways so that the companies can maximize their profits. But humans can only consider so many variables and perform so many calculations. That’s where AI comes in. Sune Lehmann and his colleagues in Denmark created life2vec, an AI that can predict when a particular person might die by looking at that person’s job history, income, place of residence, age, medical history, and other factors. To test the accuracy of life2vec, the researchers fed it the data of 6 million people in Denmark between 2008 and 2016. Then, they asked life2vec to predict which people would still be alive by 2020, and compared the results to a government registry. The results were 78 percent accurate, outperforming the actuarial tables used by insurance companies. Researchers hope that life2vec might one day help people stay healthy by pinpointing which life-threatening issues are most likely to affect them. For now, the model still needs to be improved before it can become more widely applicable outside Denmark. Until then, you’ll need to consult math experts or psychics.
[Image description: A digital illustration of a blue human brain surrounded by wires and other computer components.] Credit & copyright: geralt, Pixabay
-
FREENerdy CurioFree1 CQ
Even tech giants like Apple can end up in hot water now and then. Recently, Apple announced that it will stop selling its Apple Watch Series 9 and Apple Watch Ultra 2 in the U.S. due to a patent dispute. California-based company Masimo, which makes medical technology equipment, claims that Apple violated Masimo’s patent for a specific kind of blood-oxygen sensor. Back in October, a federal trade agency agreed. The government has until December 25 to complete a formal review, and Apple has asked it to reverse the ruling, but the company has also pulled the watches in the meantime. Apple Watches that have already been purchased will not be affected, nor will watches without the disputed technology. Still, an import ban on the watches will go into effect starting December 26 unless the government steps in. So, how likely is that to happen? There’s no way to be sure, but things don’t exactly look great for Apple. According to court documents, Apple held talks with Masimo in 2013 about licensing the smaller company’s technology, but no agreement was reached. Apple then hired several executives and engineers who originally worked at Masimo, which Masimo claims was a deliberate attempt to poach its technology. Still, this dispute isn’t really over until the White House sings.