Curio Cabinet / Nerdy Curio
-
Biology Nerdy Curio
It’s the ultimate two-for-one deal. An international team of scientists has managed to see one of the world’s rarest natural phenomena—primary endosymbiosis—in action after years of observation. As one of the authors of their recent study noted, the last time this happened, over a billion years ago, the first plants appeared on Earth. Primary endosymbiosis occurs when two separate lifeforms join together as one, with the smaller of the two becoming an organelle. The first time the phenomenon occurred, a single-celled organism absorbed a bacterium whole, and that bacterium became the mitochondrion. Without it, complex life that requires more energy than a single-celled organism can produce could not have come into existence. Then came plants, the result of one of these complex organisms swallowing a cyanobacterium. Cyanobacteria are capable of turning sunlight into energy, and inside their new hosts, they became the chloroplasts that make photosynthesis possible in plants. The latest example of this phenomenon was only recently discovered, but it has actually been happening for around 100 million years. It involves an alga called Braarudosphaera bigelowii that absorbs a cyanobacterium called UCYN-A. Since endosymbiosis first began between these two, UCYN-A has been losing parts of its genome while becoming increasingly reliant on B. bigelowii to provide necessary nutrients, indicating that it has fully accepted its role as an organelle. In exchange, UCYN-A fixes nitrogen from the air, something that algae and plants can’t do on their own. Even legumes—which are often referred to as nitrogen fixers—rely on bacteria living in their roots for the vital element. With further study, scientists say, it may be possible to use the algae to fix nitrogen in crops, lessening dependence on fertilizers. Now that’s a green solution.
[Image description: A digital diagram of an oval animal mitochondrion with labeled parts.] Credit & copyright: Mariana Ruiz Villarreal LadyofHats, Wikimedia Commons
-
Nerdy Curio
There may not be little green men on their way to invade planet Earth, but if there is alien life, it could be purple. At least, that’s what microbiologists from the U.S. are saying in a paper published in the International Journal of Astrobiology. Earth is full of lush, green forests and verdant plains of grass. However, there was a time when the color of life on Earth was purple, not green. Plants and algae use chlorophyll (which is green) to get energy from the sun via photosynthesis, but they weren’t the first to use solar energy to thrive. Rather, bacteria and single-celled organisms relied on retinal light-harvesting, which uses a molecule called retinal to process sunlight into energy. This process has some benefits, such as absorbing energy-rich green light rather than reflecting it, as plants do. Absorbing green light, incidentally, makes living things that use retinal light-harvesting appear purple. In fact, these purple microorganisms are still common on Earth. It could be, then, that similar life would evolve on other planets, and that purple life forms would become the most visible in the absence of plant-like life. This could be a game-changer for astrobiologists looking for life on other planets through surface biosignatures, because their efforts are currently based on looking for the color green. Figuring out a way to detect purple could increase their chances, especially since, unlike plants, purple life can thrive in environments with low light and low oxygen. Our own Earthly plants must be green with envy.
[Image description: A digital illustration of purple stars against a purple background.] Credit & copyright: Curious team member’s own work.
-
Economics Nerdy Curio
World leaders aren’t the only ones reeling after Iran’s recent attack on Israel—the economy is feeling the shock too. Oil prices, in particular, spiked directly following the attack, though they dropped just as quickly once word got out that around 99 percent of the missiles fired in the attack had been intercepted. Still, fears of escalating violence persist, along with concerns that a mounting conflict could significantly drive up oil prices. After all, Iran exports around 1.5 percent of the world’s oil supply. Of course, the Middle East isn’t the only region engulfed in a conflict with the potential to affect the global economy. Russia’s ongoing invasion of Ukraine has already resulted in rising prices for commodities like food and energy. This is not only due to sanctions imposed on Russia by the U.S. and European nations but also to a decrease in exports from both Russia and Ukraine. Recently, new sanctions were announced, which will prevent Russian aluminum, nickel, and copper from being sold on the LME and CME exchanges. While this isn’t expected to raise global prices on industrial metals in the short term, the longer the invasion drags on, the more economic impact the world is likely to feel. It’s just one more reason to hope for a peaceful and swift end to these conflicts.
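To put that 1.5 percent figure in perspective, here is a back-of-the-envelope calculation in Python. The roughly 100-million-barrels-per-day global supply figure is an assumption for illustration, not a number from the article.

```python
# Rough scale of Iran's oil exports, given the article's ~1.5 percent
# share of world supply. GLOBAL_SUPPLY_BPD is an assumed round figure.
GLOBAL_SUPPLY_BPD = 100_000_000  # barrels per day (assumption)
iran_share = 0.015               # from the article

iran_bpd = GLOBAL_SUPPLY_BPD * iran_share
print(f"~{iran_bpd:,.0f} barrels per day")  # → ~1,500,000 barrels per day
```

Even at a small percentage, the absolute volume is large enough that supply fears can move prices.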
-
Science Nerdy Curio
Dating is difficult enough as it is, but it’s about to get even harder for these moths. Researchers at the French National Research Institute for Agriculture, Food, and Environment (INRAE) have discovered that the female sex pheromones of the African cotton moth (Spodoptera littoralis) can be used to disrupt the circadian rhythms of the species’ males, according to a paper published in the journal Current Biology. The discovery, while limited to one moth species for now, is a potential stepping stone to a highly effective form of biological control, or biocontrol. The moths, also known as Egyptian cotton leafworms during their larval stage, are considered crop pests. Conventional mitigation measures involve pesticides that can also kill beneficial insects like bees and can even be dangerous to human health. That’s why organizations like INRAE are looking into biocontrol as an option. Biocontrol refers to methods of pest suppression that use natural predators or competitors, making them generally cheap, safe, and target-specific. In this case, it seems that the insects’ own pheromones could be used against them. The sex pheromones of S. littoralis contain a chemical compound called (Z,E)-9,11-tetradecadienyl acetate that is capable of disrupting the males’ circadian rhythm even in the presence of daylight. By exposing males to the pheromones, researchers were able to get the females and males on opposite sleep cycles, preventing their meeting and mating, which must happen during a narrow, eight-day window. Best of all, male moths are extremely sensitive to the pheromones and can be attracted to them from great distances. You snooze, you lose.
[Image description: A large rice field under a blue sky.] Credit & copyright: DESPIERRES Cécile, Pexels
-
STEM Nerdy Curio
Can smartphones get even smarter? Possibly. Soon, our phones may even boast holographic displays thanks to researchers from the University of Tokyo, according to a paper published in the journal Optics Letters. The recent release of Apple’s Vision Pro introduced a wide audience to the concept of so-called “spatial computing.” The term was defined over 20 years ago by MIT researcher Simon Greenwold, who described it as “Human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces.” In other words, such displays allow users to interact with virtual objects that are tied to a physical space. However, while the Vision Pro is worn like a VR headset, the researchers from Japan are implementing the technology in smartphones, allowing their screens to display holographic images, otherwise known as computer-generated holography (CGH). They achieved this with an iPhone 14 Pro and a spatial light modulator, an optical device that controls the wavefronts of light. Whereas conventional CGH requires large and relatively expensive hardware that utilizes lasers with two layers of spatial light modulators, the new version uses an algorithm that coordinates light from the phone’s screen with just one spatial light modulator. This creates the same effect as bigger holographic technology, but on a much smaller scale. Researchers also believe that this technology could be used to improve images on VR headsets. The future is virtually here.
-
Science Nerdy Curio
This type of science is sure to bear fruit. Researchers at Sandford Orchards and the University of Bristol are getting ready to collect genetic samples of heritage apple trees from the Royal Horticultural Society’s (RHS) garden at Rosemoor in a nationwide effort to preserve heritage cultivars. Apples are one of the most important fruits in the UK, featuring in many traditional British dishes (including apple pie, which Americans didn’t actually invent). Over the centuries, Brits have grown an impressive variety of the fruit, many of which were created by cider makers who were trying to get an edge on flavor in a very competitive industry. Somewhat ironically, that may have been the downfall of several apple cultivars, as cider makers were notoriously secretive about their growing practices. While many of these rarer varieties still remain in orchards throughout Britain, there isn’t a comprehensive list of apple cultivars. Meanwhile, the number of orchards in Britain is in decline, meaning time is running out for countless apple varieties. That’s why Sandford Orchards and the University of Bristol partnered to collect DNA samples of apple trees from across Britain. First, they asked members of the public to send in samples of their apple trees, and they were met with an enthusiastic response. Now, they’re trying to preserve the genotypes of apple trees in the possession of the RHS. The researchers hope to identify varieties that are resilient to the changing climate so that Britain will always have orchards, both for cider making and to feed the pollinators that many crops depend on. An apple a day might keep the doctor away, but hopefully that doesn’t apply to researchers.
[Image description: Red apples growing on a tree branch with green leaves.] Credit & copyright: Elizabeth Tr. Armstrong, Pexels
-
Work Nerdy Curio
Will real estate brokers soon be going broke? Probably not, but the American real estate market is definitely in for some changes. On March 15, the National Association of Realtors (NAR) agreed to a class-action settlement over agent commissions. The plaintiffs were home sellers who argued that certain NAR policies were artificially inflating the real estate market. As part of the settlement, the NAR has agreed to change certain guidelines concerning how real estate brokers and agents are paid. While no one knows exactly how this will play out, there are plenty of theories. Some people believe that the new guidelines will make it cheaper to buy and sell homes, since sellers will no longer pay the five to six percent commission that used to pay a buyer’s agent. Instead, it would be up to buyers to pay their own agents, and no one knows how much they’ll choose to pay. New guidelines also state that homebuyers have to sign explicit agreements with brokers if they choose to work with them, a move that industry experts believe may cause buyers to work with brokers less frequently. While some who work in the real estate industry are understandably wary of the new rules, plenty of potential homebuyers are celebrating…perhaps prematurely. While the new rules could lower home costs by thousands of dollars by lowering brokerage fees, it’s really too soon to tell. After all, that assumption relies on the logic that sellers will lower their home prices just because they don’t have to pay as many fees…but they don’t necessarily have to. For now, it’s just one more thing (alongside record-high home prices and rising interest rates) to keep an eye on in this volatile market.
-
Science Nerdy Curio
Geologically speaking, Iceland has been awfully busy lately. On March 16, the Nordic nation endured yet another volcanic eruption on the Reykjanes peninsula. This marks the fourth such eruption on the southern peninsula since December 2023. The nearby town of Grindavík was evacuated as lava flow came dangerously close, though luckily not much damage was done. Iceland is a famously volcanic island, so while eruptions aren’t exactly rare, they don’t usually occur with such frequency. In fact, until 2021, it had been 800 years since the last eruption on the Reykjanes peninsula. Experts aren’t sure why the area has suddenly become so volcanically active, but they do know how lava reaches the surface, where the flows may be headed, and the specific dangers they may still pose, even to those who don’t live right next door. While many people think of volcanoes as tall, fiery mountains, that isn’t always the case. This volcanic system lies entirely underground, making it invisible until it erupts. When that happens, lava flows through a nine-mile-long underground magma tunnel and, when enough pressure has built up, breaks through to the surface, causing deep fissures. This is why earthquakes and tremors often accompany eruptions. While magma only bursts out of the ground for a short while, it can continue flowing for much longer—sometimes longer than a week, hence the recent evacuation of Grindavík. The latest eruption caused two fissures with a possible combined length of up to 2.5 miles. While the flowing lava isn’t currently endangering anyone, experts have warned that it could still cause big problems if it flows all the way to the sea. That’s because lava cools rapidly when it touches water, which can release hydrochloric acid gas. This gas can be fatal at close range, but can cause severe injuries even at some distance.
Representatives from the Icelandic Met Office (IMO) explained in a statement, “In a radius of about 500m [1,640 feet] from the point where lava would come into contact with the sea, conditions would be life-threatening.” Luckily, the gas is much less deadly beyond a two-mile radius, and the only populated area in harm’s way is Grindavík. Evacuated citizens won’t be returning home until they’re given the go-ahead. Lava just doesn’t make for the warmest homecoming.
[Image description: A photo of the Icelandic town of Grindavík under a blue sky.] Credit & copyright: Roman Zacharij, Wikimedia Commons. The copyright holder of this work has released it into the public domain. This applies worldwide.
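The unit conversions in the figures above are easy to double-check. A quick sketch in Python:

```python
# Verify the IMO's 500 m danger radius in feet, and express the
# ~2-mile "much less deadly" radius in meters for comparison.
FEET_PER_METER = 3.28084
METERS_PER_MILE = 1609.34

danger_radius_ft = 500 * FEET_PER_METER   # the life-threatening zone
safer_radius_m = 2 * METERS_PER_MILE      # beyond this, far less danger

print(f"500 m = {danger_radius_ft:,.0f} ft")  # matches the quoted 1,640 ft
print(f"2 mi  = {safer_radius_m:,.0f} m")     # → 3,219 m
```

So the life-threatening zone (500 m) is only about a sixth of the two-mile radius beyond which the gas dissipates to safer levels.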
-
Nerdy Curio
Post your trendy dance videos while you still can! Many Americans, especially the 150 million of them who use TikTok, are in an uproar about a bill that recently passed the U.S. House. If passed by the Senate and signed into law, it would force TikTok’s parent company, ByteDance, to either sell TikTok or face a nationwide U.S. ban. ByteDance will almost certainly refuse to sell TikTok, despite the fact that the U.S. is its biggest international market (in China, users rely on a different version of the app, called Douyin). The reasons for ByteDance’s expected refusal have to do with the company’s ties to the Chinese government—the very ties that U.S. lawmakers fear may put American users’ data at risk. Unlike a private American company, ByteDance would need its government’s approval in order to sell, and the Chinese government is unlikely to comply. This is at least partially due to the fact that TikTok has a famously good algorithm, which recommends individualized content to users. Selling TikTok would mean exporting the code for its algorithm, and that would require ByteDance to go through administrative licensing procedures. So, even if the Chinese government did approve the sale of the app and the divulging of its algorithm’s code, the procedures around a sale would take months—likely more time than the six-month deadline that the current U.S. bill allows. This makes a sale extremely unlikely. So, if the bill passes the Senate, a TikTok ban could be the ultimate outcome. Of course, that’s a big “if.” Regardless of how one feels about TikTok, there’s no denying that ByteDance is one of the world’s most successful social media companies, with around 1.22 billion monthly active users worldwide. With those numbers, ByteDance will probably be just fine even if it does face a U.S. ban. The idea of losing 150 million users still has to sting a bit, though.
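The two user counts in the article make for a quick comparison. A back-of-the-envelope check in Python:

```python
# What share of TikTok's global monthly active users the ~150 million
# U.S. users represent, using the figures quoted above.
us_users = 150_000_000
global_users = 1_220_000_000  # ~1.22 billion monthly active users

share = us_users / global_users
print(f"U.S. share of monthly active users: {share:.1%}")  # → 12.3%
```

A ban would cost TikTok roughly an eighth of its global user base, large enough to sting but far from fatal.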
-
FREEScience Nerdy CurioFree1 CQ
Some things are made to last, but that’s not always a good thing. Microplastics, tiny, sometimes microscopic bits of plastic that have shown up in everything from snowfall to the human bloodstream, have captured public attention in recent years. Now, a study published by a group of researchers in Vienna, Austria, in the journal Chemosphere states that microplastics may be linked to rising rates of colorectal cancer in young people. Every week, the average person breathes in or ingests around 0.176 ounces of plastic (about the weight of a credit card), most of which ends up in the gastrointestinal tract. The good news is that it doesn’t all stick around. The bad news is that the stuff that does is still dangerous. According to the new research, nanoplastics (particles that are one micrometer or smaller) can stay inside a person’s body longer than previously thought and can even be passed on to new cells during cell division. These micro- and nanoplastic particles (MNPs) are difficult to get rid of because, unlike other foreign materials, they aren’t broken down by a cell’s lysosomes. This is particularly dangerous when they end up in a cancer cell, because MNPs were found to increase cell migration, which, for cancer cells, means metastasis: malignant growths that spread from the original cancer site. The researchers therefore believe that MNPs could be at least partially responsible for the recent worldwide rise in colorectal cancer rates, especially in those under 50 years old. Indeed, they found that colorectal cancer rates have been on the rise since the 1960s, when inexpensive plastics started to become ubiquitous. Since then, practically everyone has been consuming plastic to some degree. It seems that this material was never cheap after all…the bill was just overdue.
[Image description: A plastic cup half-covered by sand on a beach.] Credit & copyright: Hamsterfreund, Pixabay
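The credit-card comparison above is easy to sanity-check with a quick unit conversion. This is just back-of-envelope arithmetic based on the article's 0.176-ounce weekly figure, not numbers from the study itself:

```python
# Convert the reported weekly plastic intake to grams, then scale up.
OZ_TO_G = 28.3495  # grams per ounce

weekly_oz = 0.176                 # reported weekly intake, ounces
weekly_g = weekly_oz * OZ_TO_G    # ~5 g, roughly a credit card's weight
yearly_g = weekly_g * 52          # grams per year
decade_kg = yearly_g * 10 / 1000  # kilograms per decade

print(f"{weekly_g:.1f} g/week, {yearly_g:.0f} g/year, {decade_kg:.1f} kg/decade")
```

At that rate, a person would take in roughly a quarter kilogram of plastic per year, which puts the study's concern about long-term accumulation in perspective.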
-
FREEScience Nerdy CurioFree1 CQ
Will they or won’t they? That’s the question astronomers from Stanford University are asking in a paper published in The Astrophysical Journal regarding a black hole binary with some unique features. A black hole binary is a system of two black holes that orbit one another. While stellar-mass black holes are known to sometimes merge, supermassive black holes (which are many times the mass of even the largest stars) have never been observed doing so. Whether or not they can merge has been a subject of debate among astronomers for decades. The matter may soon be settled, though, thanks to data collected by the Gemini Observatory on a black hole binary system called B2 0402+379. This binary is unusual in a number of ways. Firstly, it’s the only one observed in enough detail to see each black hole separately, despite there being just 24 light-years between them (yes, that distance is considered close for black holes). It’s also the heaviest of its kind, at around 28 billion times the mass of the sun. For astronomers studying B2 0402+379, this last bit of information was key. They concluded that the unusually large mass of the two objects likely allowed them to completely obliterate any stars and other matter from their respective galaxies that would have slowed down their orbit. Without any matter remaining, their orbit effectively stalled, and they’ve stayed where they are for the last 3 billion years. Typically, black hole binaries at this stage emit gravitational waves that sap their orbital momentum, causing them to merge. But it appears that the sheer mass of this binary has allowed it to remain eternally stalled. More time and data are still needed to know what will happen for sure, but until then, the debate, much like the binary’s orbit, has finally wound down.
[Image description: A sky full of stars that appear to be “swirling.”] Credit & copyright: Faik Akmd, Pexels
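To get a feel for the scales involved, Kepler's third law gives a rough orbital period from the two figures quoted above (28 billion solar masses, 24 light-years apart). This assumes a circular orbit and is purely an illustrative estimate, not a value from the paper:

```python
import math

# Rough Keplerian orbital period for a binary with the article's figures.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
LIGHT_YEAR = 9.461e15  # meters per light-year
YEAR = 3.156e7         # seconds per year

total_mass = 28e9 * M_SUN     # combined mass of the binary
separation = 24 * LIGHT_YEAR  # distance between the black holes

# Kepler's third law: T = 2 * pi * sqrt(a^3 / (G * M))
period_s = 2 * math.pi * math.sqrt(separation**3 / (G * total_mass))
period_yr = period_s / YEAR

print(f"Orbital period: roughly {period_yr:,.0f} years")
```

The estimate comes out to on the order of ten thousand years per orbit, which helps explain why astronomers can only infer, rather than watch, the binary's long-term fate.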
-
FREEScience Nerdy CurioFree1 CQ
Archaeology can be a sticky business. Neanderthals are usually thought of as less intelligent than our Homo sapiens ancestors, but mounting evidence suggests that they were more like us than we realize. In fact, a paper recently published in the journal Science Advances by archaeologists from New York University, the University of Tübingen, and the National Museums in Berlin details how Neanderthals created their own specialized adhesive. As ancient species go, Neanderthals get a pretty bad rap. Discovered in the mid-1800s, these cousins of modern humans were long thought to have been much less intelligent than us. But more recent discoveries have revealed that Neanderthals had distinct, developed cultures, and genetic testing has shown that they likely interbred with early Homo sapiens. While reexamining some Neanderthal stone tools unearthed in the early 1900s, researchers recently found traces of bitumen (a naturally occurring petroleum-based substance) and ochre (a deep yellow mineral) mixed together. The purpose of these substances wasn’t clear at first. Bitumen, while sticky, is difficult to work with. Ochre, on the other hand, would inhibit bitumen’s adhesive properties. However, when researchers mixed fresh samples of the two substances together, they produced an easily workable adhesive that is just sticky enough to hold a stone tool together, but not sticky enough to bind to skin. The material was easily moldable, so it could be fitted to tools to improve grip. You’ve gotta hand it to them: Neanderthals were crafty folks.
[Image description: A painting of a family of six Neanderthals at the mouth of a cave. One of them carries a spear.] Credit & copyright: Charles Robert Knight (1874–1953). Neanderthal Flintworkers, Le Moustier Cavern, Dordogne, France. 1920. Wikimedia Commons. The author died in 1953, so this work is in the public domain in its country of origin and other countries and areas where the copyright term is the author's life plus 70 years or fewer. This work is in the public domain in the United States because it was published (or registered with the U.S. Copyright Office) before January 1, 1929.
-
FREEWork Nerdy CurioFree1 CQ
A can of worms seems to have opened in the Big Apple. In 2019, New York City’s MTA announced that the city would soon implement a congestion pricing program, which would charge a fee for any non-commercial vehicle entering Manhattan. While environmentalists cheered the idea, which is meant to dissuade car use and limit air pollution, plenty of New York drivers weren’t pleased. Several lawsuits sprang up, with the most recent being brought by a group of around 50 New York City small business owners. On February 27, the plaintiffs held a rally at City Hall where they spoke about their concerns. Mainly, they fear that congestion pricing will drive them out of business by keeping too many potential customers out of Manhattan. The MTA’s current plan, which will go into effect this summer, is to charge non-commercial vehicles entering Manhattan south of 60th Street a fee of $15 if they use E-ZPass and $22.50 if they don’t. It’s easy to see why business owners might be bothered by the plan, but the MTA has pointed out that much of the funding for its 2020 through 2024 capital plan depends on the revenue that congestion pricing will generate. If lawsuits delay the program’s implementation, some major city renovation projects may have to be put on hold. For now, it seems that the city’s business owners, drivers, and agencies are caught between a rock and an economic hardship.
[Image description: An AI-generated illustration of a car made from money.] Credit & copyright: adamlapunik, Pixabay
-
FREEScience Nerdy CurioFree1 CQ
In the future, you might find yourself picking your nose…from a shelf. According to a recently published paper in the journal ACS Sensors, a team of researchers at the Hefei Institutes of Physical Science of the Chinese Academy of Sciences has discovered a novel way to make a reliable, portable “e-nose.” It might seem like everything’s getting an unnecessary e-prefix these days, but an e-nose could actually be important for detecting volatile organic compounds (VOCs) in the air. There are sensors that predate this latest innovation, but they’ve been cumbersome and unreliable. Even the ubiquitous breathalyzer is a far cry from a true olfactory sensor, as it can only detect alcohol, and its accuracy is sometimes questionable. Currently, the best way to detect dangerous gases is by taking an air sample to a lab, and that might not be practical when, for example, there’s an impending explosion from a natural gas leak. But the new e-nose is different; it’s smaller, more portable, and more sensitive than detectors of the past, which could make it usable in the field and in emergencies. Researchers created the sensor using a chemiresistor (a material whose electrical resistance varies in the presence of different oxidizing gases) consisting of a tungsten trioxide (WO3) nanorod film, taking advantage of the material’s extremely fast thermal relaxation time. The film, which acts as both a sensing layer and a self-heating layer, reacts to 12 different types of gas molecules in a second or less. If the device works as intended, it could be used for everything from detecting food spoilage to hazardous waste cleanup. Finally, you won’t have to rely on your own sense of smell to know if that month-old jug of milk is okay to drink.
[Image description: A French bulldog sniffs purple flowers.] Credit & copyright: Mylene2401, Pixabay
-
FREEBiology Nerdy CurioFree1 CQ
Roses are red, blueberries too, if it’s true what we said, then what’s with the hue? Blueberries might look blue, but they’re actually red. Now, scientists at the University of Bristol have revealed how this perennial favorite of the produce aisle creates its deceptive coloration in a paper published in the journal Science Advances. The secret lies in the beloved berries’ waxy skin. As anyone who has gotten blueberry juice on their clothing can tell you, blueberries don’t turn things blue, but rather a reddish purple. That’s because blueberries don’t actually have any blue pigment in their skin. But they have plenty of red pigment, despite appearances. The cause of the dark blue coloring, it seems, is the crystal structure of the wax on blueberries’ skin, which scatters blue and UV light. This is a similar mechanism to what makes some birds appear to have blue feathers, despite birds being incapable of actually producing blue pigment. Incredibly, the waxy layer responsible for this phenomenon is only two microns thick (a single micron is one-millionth of a meter). After discovering the waxy layer, scientists went a step further and removed the wax covering, allowing it to re-crystallize on a card. The result was the same blue coloration, and they believe this could one day be used to make environmentally friendly—and possibly even edible—blue reflective paint. Maybe one day we’ll look back at this discovery and remember how it all blue up.
[Image description: A close-up photo of a pile of blueberries.] Credit & copyright: borislagosbarrera, Pixabay
-
FREEPolitical Science Nerdy CurioFree1 CQ
Outer space isn’t NASA’s only concern; they care about earthly business too. NASA recently named Dwight Deneal as the new assistant administrator of its Office of Small Business Programs (OSBP). The move has brought some publicity to the little-known office. One doesn’t usually think of small businesses in relation to NASA, yet small businesses are actually vital to helping NASA function. In fact, the agency has worked with hundreds of small businesses over the years. Besides providing proprietary technologies for things like the James Webb Space Telescope, small businesses that specialize in logistics have helped NASA manage, track, and document various projects. Budget management is another area where small businesses have stepped in to lend the government agency a hand. Of course, NASA isn’t the only government agency or office that relies on small business contracts. Before his recent appointment, Deneal served as director of the Defense Logistics Agency’s Office of Small Business Programs, where he contracted small businesses to work with the agency and promoted programs to incentivize small businesses to work with the U.S. military. Even when thinking big, it can behoove U.S. agencies to think small.
-
FREEPhysics Nerdy CurioFree1 CQ
Going green doesn’t have to be more expensive. Just ask the team of researchers at the University of Oregon who are working on a cheaper, more eco-friendly way to produce metallic iron. Their findings, recently published in Joule, could profoundly change industries that rely on steel, an alloy of iron and carbon. It’s hard to overstate the importance of steel in the modern world, yet producing the ubiquitous metal causes eight percent of all annual carbon emissions. The researchers in Oregon, however, have been working on an electrochemical method that uses only saltwater, iron oxide, and some electricity. The process involves submerging an iron oxide cathode at one end of a saltwater bath and a positively charged electrode (an anode) at the other end. When a current runs through the setup, oxygen atoms are released from the cathode and bind with the sodium in the saltwater. The end products are pure iron, chlorine, and sodium hydroxide. Researchers say that the chlorine, which has many industrial uses, could potentially be sold to offset the cost of the process, while the sodium hydroxide can bind with CO2, which means the process could also be carbon-negative. Unlike a furnace, which needs a steady supply of fuel, this process could run entirely on renewable energy. Before it can be scaled up for industrial use, though, two major hurdles must be overcome. The first is that the process only works with pure iron oxide, and iron ore is rarely so pure. The second is that it would produce far more chlorine than would ever be needed, and there needs to be a way to store it safely. It would be pretty ironic to reduce carbon emissions only to release a ton of deadly chlorine.
[Image description: A close-up photo of metal chain links.] Credit & copyright: analogicus, Pixabay
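The chlorine problem can be quantified with simple stoichiometry. Assuming the overall cell reaction implied by the listed products, Fe2O3 + 6 NaCl + 3 H2O → 2 Fe + 3 Cl2 + 6 NaOH (a textbook mass balance, not a reaction taken from the paper), each ton of iron comes with nearly two tons of chlorine:

```python
# Mass balance for the assumed overall reaction:
#   Fe2O3 + 6 NaCl + 3 H2O -> 2 Fe + 3 Cl2 + 6 NaOH
# i.e. 1.5 mol of Cl2 per mol of Fe. Molar masses in g/mol.
M_FE = 55.845    # iron
M_CL2 = 70.906   # chlorine gas

mol_fe_per_ton = 1e6 / M_FE        # moles of Fe in one metric ton
mol_cl2 = 1.5 * mol_fe_per_ton     # stoichiometric chlorine output
cl2_tons = mol_cl2 * M_CL2 / 1e6   # tons of Cl2 per ton of iron

print(f"~{cl2_tons:.2f} t of Cl2 per ton of iron")
```

At roughly 1.9 tons of chlorine per ton of iron, world-scale steelmaking would dwarf the entire chlorine market, which is why storage is such a serious hurdle.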
-
FREEEntrepreneurship Nerdy CurioFree1 CQ
Shark Tank is a show that has had its fair share of odd moments. The program features scrappy entrepreneurs pitching their products and services to a panel of investors, or “sharks.” As is often the case with reality TV, wacky situations sometimes ensue…yet some turn out to be less wacky than they initially seem. Take the HummViewer, a product pitched on Shark Tank that, at first glance, seemed fairly off-the-wall. Essentially, HummViewer is a plastic face mask with flower-shaped hummingbird feeders attached to it, allowing its wearer to get up close and personal with the elusive little creatures. In a recent segment, HummViewer co-founders Joan and John Creed spoke about the impact of their 2022 appearance on Shark Tank. Far from seeing their product as a joke, viewers immediately flocked to the small company’s website, buying over $102,000 worth of product in a single day. Revenue has only increased since then, so much so that the founders were able to quit their full-time jobs to focus solely on their small business. While only around 29 percent of businesses that appear on the show end up finalizing a deal, it’s clear that the “television effect” can be a real boon to businesses…even quirky ones.
[Image description: A green-and-white hummingbird, mid-flight.] Credit & copyright: JillWellington, Pixabay
-
FREEScience Nerdy CurioFree1 CQ
Cold weather is nothing to sneeze at. The decline of the Roman Empire has been studied by many historians through the centuries, but a group of international researchers has become the first to explore climate change as a potential factor, according to a paper published in the journal Science Advances. The Roman state was at its peak between 200 BCE and 100 CE, after which it was beset by plagues and declining agricultural production, eventually leading to the fall of the Western Roman Empire in 476 CE, when its last emperor was deposed. Then, during the sixth century, in the early days of the Eastern Roman Empire, the Plague of Justinian (a pandemic caused by the bubonic plague) killed around half of all Romans and millions more in the surrounding regions. Now, an analysis of the changing climate between 200 BCE and 600 CE has revealed that the outbreaks of disease and poor agricultural yields coincided with periods of colder, drier weather, when temperatures dipped by as much as 3 degrees Celsius (about 5.4 degrees Fahrenheit). Researchers were able to determine the climate of the past by taking samples of different layers of marine sediment in the Gulf of Taranto off the southern coast of Italy. The sediment contained the fossilized remains of dinoflagellates, microorganisms that are sensitive to changes in sea temperature. Different temperatures led to the rise and fall of different species, and scientists were able to figure out what the temperature was during a given time by examining which species were more prevalent. Given that such small differences in temperature had such an impact on history, the researchers hope that this discovery might shed some light on the relationship between pandemics and climate change in the near future. Hopefully history doesn’t repeat with heat.
[Image description: A portion of the painting Saint Sebastian Interceding for the Plague Stricken, showing Roman priests praying and onlookers crying as workers place wrapped bodies into graves.] Credit & copyright: Saint Sebastian Interceding for the Plague Stricken, Josse Lieferinxe (–1508), Wikimedia Commons. This work is in the public domain in its country of origin and other countries and areas where the copyright term is the author's life plus 100 years or fewer.
-
FREEManagement Nerdy CurioFree1 CQ
For some workers, 2024 has gotten off to a rough start. Massive layoffs in the tech industry and stagnant wages are just two of the challenges many working Americans are facing at the moment. So it stands to reason that many people were surprised when Walmart bucked the trend and announced significant new perks for store managers. In mid-January, the retail giant stated that it was boosting managers’ average pay to $128,000 per year and making them eligible for salary bonuses. Shortly thereafter, the company announced that store managers will get up to $20,000 in Walmart stock grants every year. The exact amount of the grants will depend on store size, with Hometown store managers getting $10,000, Neighborhood Market and Division store managers getting $15,000, and Supercenter managers getting $20,000. Altogether, this means that some Walmart managers could end up making more than $525,000 a year—a remarkable salary for a position that doesn’t require a four-year degree. In a statement on LinkedIn, John Furner, President and CEO of Walmart U.S., explained, “A Walmart store manager is running a multi-million dollar business and managing hundreds of people, and it's a far more complex job today than when I managed a store…We ask our managers to own their roles and act like owners. Now, they’ll literally be owners.” Here’s hoping this kicks off a new kind of trend in the retail world.
[Image description: A digital illustration of a pile of dollar bills.] Credit & copyright: geralt, Pixabay