How protecting the polar bear, an ‘umbrella species’, helps scientists with Arctic conservation

On the frozen edge of Hudson Bay, polar bears are doing more than hunting seals. They are helping scientists map the future of Arctic conservation.

A new study led by researchers at the University of Alberta and the San Diego Zoo Wildlife Alliance (SDZWA) suggests that protecting polar bear habitats could also shield a wide network of Arctic species. The research draws on nearly 20 years of tracking data from 355 bears to identify areas where conservation efforts may deliver the greatest impact.

The findings, published in the journal Arctic Science, focus on western Hudson Bay, a region already under pressure from warming temperatures and shifting ice conditions.

Hudson Bay Polar Bear Tracking Study Identifies High-Use Conservation Zone

The research pinpoints a “high-use” area near Cape Churchill in Manitoba as a priority zone for protection. Scientists analyzed long-term movement patterns to determine where polar bears consistently spend time, particularly during critical periods such as feeding and migration.

Establishing marine protected areas, or MPAs, in Arctic waters has long been complicated by limited data on where marine life concentrates. The study proposes that polar bears can serve as a proxy for broader ecosystem activity, offering a data-rich foundation for decision-making.

“By leveraging the extensive data we have on polar bears, we can help design MPAs that safeguard both the bears and the vast network of Arctic species that rely on them,” said Dr. Nicholas Pilfold, a conservation scientist at SDZWA.

Researchers argue that the approach addresses a central challenge in marine conservation. Instead of attempting to track multiple species across vast and remote regions, policymakers can use one well-studied species to guide protection efforts.

Umbrella Species Concept Gains Ground In Arctic Conservation Strategy

The concept of an “umbrella species” refers to a single species whose protection indirectly benefits others that share its habitat. According to the study, polar bears meet nearly all criteria for this role.

They have large home ranges, well-documented biological data, and high sensitivity to environmental disturbance. These characteristics make them effective indicators of ecosystem health.

The research highlights how polar bears influence their surroundings beyond their own survival. When bears hunt, leftover carcasses provide food for scavengers such as Arctic foxes, wolves, ravens, and gulls. This behavior creates a chain of ecological benefits that extends across species.

Dr. Andrew Derocher, a professor of biological sciences at the University of Alberta, said the data offers a practical path forward for conservation planning. “In the rapidly warming Arctic, marine ecosystems will be stressed by the additive effects of industrial activity and polar bear location data provide a path to designing marine protected areas,” he said. [2]

Policy Momentum Builds Around Hudson Bay Marine Protection Plans

The study arrives as policymakers in Canada consider expanding protections in the region. In February 2026, Manitoba Premier Wab Kinew announced funding to explore the creation of a national marine conservation area in western Hudson Bay.

While details of the proposal remain under development, the research provides scientific backing for where boundaries could be drawn. Conservation areas designed around polar bear activity may capture critical habitats for multiple species without requiring extensive new data collection.

Scientists involved in the study also emphasize the need for flexibility. The Arctic environment is changing rapidly, with sea ice loss altering migration routes and feeding patterns.

Pilfold noted that dynamic MPAs, which can adapt to shifting ecological conditions, may be particularly effective in this context. “Well-designed dynamic MPAs have the potential to preserve biodiversity in a constantly changing Arctic landscape,” he said.

The researchers acknowledge that climate change could eventually reduce the effectiveness of polar bears as an umbrella species if their habitat continues to shrink. Still, they describe the approach as a practical starting point for immediate conservation action.

For now, the polar bear’s movements offer something rare in the Arctic: a clear, data-driven map for protecting life in one of the planet’s most fragile ecosystems.


Thousands Of Pico-Satellites Could Redefine Direct Smartphone Connectivity From Space

A new approach to satellite communications could significantly reshape how smartphones connect to space, with researchers proposing the use of thousands of tiny satellites working in unison rather than relying on a single, complex spacecraft.

Scientists in Japan have demonstrated that swarms of pico-satellites—each carrying a small portion of a larger antenna system—can collectively function as a single, powerful phased-array antenna. The early-stage experiment showed that such a distributed system can deliver stable, high-quality data transmission, offering a potential pathway to cheaper and more resilient global connectivity.

The concept builds on the growing interest in direct-to-device (D2D) satellite communications, which aim to allow ordinary smartphones to connect directly to satellites without the need for ground infrastructure. The technology is particularly attractive for extending coverage to remote regions such as oceans, deserts, and disaster-hit areas where terrestrial networks are either weak or nonexistent.

Traditionally, achieving this requires large satellites equipped with sophisticated phased-array antennas. These systems rely on tightly coordinated antenna elements that can steer signals electronically. However, they are expensive to build and launch, and their centralized design creates a single point of failure—any major malfunction can render the entire satellite ineffective.

The Japanese research team, led by Associate Professor Atsushi Shirane, has proposed a fundamentally different architecture. Instead of concentrating antenna elements on one satellite, the system distributes them across thousands of pico-satellites flying in formation. These miniature units are synchronized wirelessly, eliminating the need for physical connections.

At the heart of the innovation is what the researchers describe as “spatial wireless combining and distributing technology.” In this setup, a central gateway satellite broadcasts a reference signal that allows all participating pico-satellites to remain precisely synchronized. This removes the need for energy-intensive components such as local oscillators on each unit, enabling further miniaturization and reducing power consumption.

The team developed a compact transceiver chip using standard silicon CMOS technology, making it suitable for large-scale, low-cost manufacturing. In laboratory simulations replicating satellite formations, the system demonstrated accurate beam steering and reliable data transmission using communication protocols similar to those found in modern smartphones.
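The beam steering at the heart of such a system can be illustrated with a short numerical sketch. This is a generic phased-array calculation under assumed values — the element count, half-wavelength spacing, and 2 GHz carrier are hypothetical, not the team's actual design:

```python
import numpy as np

# Illustrative sketch: how a distributed phased array steers a beam.
# Each "pico-satellite" acts as one antenna element on a line; all
# numerical values here are assumptions for demonstration only.

C = 3e8            # speed of light, m/s
FREQ = 2.0e9       # assumed 2 GHz carrier
WAVELEN = C / FREQ

def steering_phases(positions_m, theta_rad):
    """Per-element phase shifts that align emissions toward angle theta
    (measured from broadside) for elements at positions_m on a line."""
    # A wavefront toward theta accumulates path difference x*sin(theta);
    # each element pre-compensates with the opposite phase.
    return -2 * np.pi * positions_m * np.sin(theta_rad) / WAVELEN

def array_gain(positions_m, phases, theta_rad):
    """Coherent power gain of the swarm in direction theta, relative to
    one element (maximum value is N**2 for N elements)."""
    path_phase = 2 * np.pi * positions_m * np.sin(theta_rad) / WAVELEN
    field = np.exp(1j * (path_phase + phases)).sum()
    return abs(field) ** 2

# 100 elements spaced half a wavelength apart, steered 10 degrees off axis
pos = np.arange(100) * WAVELEN / 2
target = np.deg2rad(10)
ph = steering_phases(pos, target)

print(array_gain(pos, ph, target))          # 100**2: full coherent gain on target
print(array_gain(pos, ph, np.deg2rad(40)))  # far smaller off target
```

The point of the reference-signal synchronization described in the article is precisely to keep those per-element phases aligned: if the units drift out of phase, the coherent sum — and with it the N² gain — collapses.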

Beyond lowering costs, the distributed nature of the system offers a major reliability advantage. Because the network is made up of numerous independent satellites, the failure of individual units does not compromise the entire system—unlike traditional monolithic satellites.

The findings suggest that formation-flying pico-satellites could become a viable foundation for next-generation satellite networks. If scaled successfully, the approach could expand global connectivity while reducing both financial and operational risks, bringing direct satellite communication closer to everyday mobile users.


XRISM finally solves famous star’s 50-year space mystery

A star visible to the naked eye has held a secret for more than half a century.

Gamma Cassiopeiae, a bright star in the constellation Cassiopeia, has puzzled astronomers since the 1970s with its unusually intense X-ray emissions. [1]

Now, researchers using the X-ray Imaging and Spectroscopy Mission (XRISM), a joint space observatory developed by Japan, the United States and Europe, say they have identified the source. [1]

The emissions come from an unseen white dwarf companion that pulls in material from the larger star and releases X-rays as it does so. [1]

Gamma Cas X-ray origin explained by white dwarf companion

The findings are based on high-resolution observations from XRISM’s Resolve spectrometer, which can track subtle changes in X-ray signals.

Researchers found that the hot plasma responsible for the X-rays moves in sync with the orbit of the hidden companion star. [1]

This motion provided direct evidence that the emissions are linked to accretion, a process in which matter falls onto a dense object such as a white dwarf.

Lead author Yaël Nazé, an astronomer at the University of Liège in Belgium, said the result concludes decades of investigation.

“There has been an intense effort to solve the mystery of gamma Cas across many research groups for many decades. And now, thanks to the high-precision observations of XRISM, we have finally done it,” Nazé said. [1]

For years, scientists had narrowed the explanation to two possibilities. One involved magnetic interactions between the star and its surrounding disc. The other suggested that a companion object was drawing in material and generating X-rays.

The XRISM data supports the second explanation. [1]

Be stars gamma Cas history and unusual emission features

Gamma Cassiopeiae belongs to a class known as Be stars, a type of hot, rapidly rotating star surrounded by a disc of material.

The star’s unusual behavior was first noted in 1866 by Italian astronomer Angelo Secchi, who observed unexpected emission lines in its light spectrum. [1]

Those observations led to the classification of Be stars, which are known for ejecting material that forms a rotating disc around them.

By the mid-20th century, astronomers had detected that gamma Cas also had a low-mass companion, though it remained invisible to direct observation. [1]

The discovery of strong X-ray emissions in the 1970s added another layer to the mystery. The radiation was traced to extremely hot plasma, reaching temperatures of about 150 million degrees, far exceeding typical levels for such stars. [1]

Subsequent observations with space telescopes such as XMM-Newton, the European Space Agency’s X-ray observatory, NASA’s Chandra X-ray Observatory, and the eROSITA telescope identified similar behavior in a small group of stars now known as gamma Cas-type objects. [1]

XRISM discovery impact on binary star evolution research

The identification of a white dwarf companion resolves the origin of the X-rays and provides a clearer picture of how these systems function.

In this model, material from the Be star’s disc spirals toward the white dwarf, heating up and emitting high-energy radiation in the process.

Researchers say the findings also raise new questions about how such binary systems form.

White dwarf companions were expected to be common in systems with lower mass stars. The new results suggest they may instead occur more frequently with high mass Be stars. [1]

Alice Borghese, a research fellow at the European Space Agency specializing in high-energy astrophysics, said earlier missions helped narrow the possibilities.

“XMM-Newton did so much of the groundwork in ruling out various theories about gamma Cas. And now with the next generation of advanced instrumentation, XRISM has brought us over the finish line,” she said. [1]

The study highlights the role of international collaboration in space science. XRISM combines contributions from Japanese, European and American teams.

Matteo Guainazzi, the European Space Agency’s XRISM project scientist, said the result demonstrates the value of that cooperation.

“This wonderful result underlines the strong collaboration between XRISM’s Japanese, European and American teams,” he said. [1]

For astronomers, the long-running puzzle of gamma Cas has shifted from speculation to measurement.

A mystery that began with unusual light signatures in the 19th century now has a defined mechanism grounded in observation.


Wired for water: How electrification is transforming desalination

Pressure on the world’s water resources is rising steadily — and in many places, it is reaching critical levels. Growing populations, expanding cities, and increasing demand from agriculture and industry are all putting fresh water supplies under strain, particularly in regions that are already struggling.

To cope with this, many countries have turned to desalination — the process of converting seawater into usable fresh water. While this has helped ease shortages in some of the hardest-hit areas, it comes at a cost. Desalination can be energy-intensive, accounting for anything from a negligible share to as much as 15 per cent of a country’s total energy use, depending on how heavily it relies on the technology. Now, a shift is underway. Older, heat-based systems are gradually being replaced by electricity-driven methods, reflecting a broader transition in how energy is produced and used.

The scale of global water use highlights the challenge. Each year, more than 4,000 billion cubic metres of freshwater are withdrawn worldwide. Of this, nearly 1,500 billion cubic metres are consumed — meaning the water is not returned to its source. To put that into perspective, humanity uses roughly the equivalent of the entire volume of Lake Michigan every year.
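The Lake Michigan comparison holds up to back-of-the-envelope arithmetic. A commonly cited volume for Lake Michigan of roughly 4,900 cubic kilometres is assumed here (an outside figure, not from the article); since one billion cubic metres equals one cubic kilometre, the units compare directly:

```python
# Rough check of the article's scale comparison (assumed figures).
LAKE_MICHIGAN_KM3 = 4_900   # commonly cited volume of Lake Michigan, km^3
withdrawals_bcm = 4_000     # >4,000 billion m^3 withdrawn per year
consumed_bcm = 1_500        # ~1,500 billion m^3 consumed per year

# 1 billion cubic metres = 1 cubic kilometre, so the ratio is direct.
print(withdrawals_bcm / LAKE_MICHIGAN_KM3)  # ~0.8: withdrawals ≈ one Lake Michigan
print(consumed_bcm / withdrawals_bcm)       # ~0.38: share of withdrawals consumed
```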

Agriculture remains by far the largest consumer, accounting for around 70 per cent of total withdrawals and close to 90 per cent of actual consumption. As the global population has grown by about 30 per cent since 2000, water demand from cities has risen at a similar pace. A slight decline in industrial water use has done little to offset this broader increase.

The result is mounting water stress. In many regions, water is being extracted faster than it can be replenished, particularly from underground sources. Over time, this kind of overuse can permanently damage ecosystems and lead to what experts describe as “water bankruptcy” — a point at which natural reserves can no longer recover.

Over the past two decades, nearly one billion more people have come to live in areas facing high water stress, pushing the global total to over three billion. Much of this increase has occurred in regions already under severe strain. Today, about 30 per cent of the world’s population lives in areas classified as extremely water-stressed, with around 85 per cent of those affected residing in emerging and developing economies.

The situation is especially stark in fast-growing countries. In India, for instance, more than 70 per cent of the population lives in water-stressed regions. The scale of the problem is such that the number of people currently affected is roughly equal to the country’s entire population in the early 2000s.

The Middle East and North Africa face an even harsher reality. Home to around 490 million people as of 2024, the region has long grappled with limited water resources. About three-quarters of its population lived under water stress at the turn of the century, and despite some population shifts toward relatively less affected areas, more than 70 per cent still live in conditions of high or extreme water scarcity today.

Taken together, the trends point to a deepening global challenge. As demand continues to rise and climate pressures intensify, managing water resources — and the energy needed to sustain them — is becoming one of the defining issues of our time.


NASA’s Artemis II Rocket Reaches Launch Pad 39B, Final Countdown Begins

Cape Canaveral, March 22, 2026: NASA’s Artemis II mission has reached a critical milestone, with the Space Launch System (SLS) rocket and Orion spacecraft now standing at Launch Pad 39B at the agency’s Kennedy Space Center in Florida, setting the stage for the first crewed lunar mission in more than five decades.

The towering 322-foot-tall Moon rocket arrived at the pad at 11:21 a.m. EDT on Friday, March 20, completing an 11-hour journey from the Vehicle Assembly Building. The slow and steady trek began at 12:20 a.m. EDT, as NASA’s crawler-transporter 2 carried the integrated SLS and Orion, secured atop the mobile launcher, along the 4-mile path at a maximum speed of just 0.82 mph.

With the rocket now in place at Pad 39B, the historic launch site of Apollo missions and numerous space shuttle flights, NASA teams are entering the final phase of prelaunch preparations. The mission is targeting liftoff as soon as Wednesday, April 1, with the early April launch window extending through Monday, April 6.

Artemis II will mark the first crewed test flight of the SLS rocket and Orion spacecraft, carrying a four-member astronaut team on a 10-day journey around the Moon and back. The crew includes NASA astronauts Reid Wiseman as Commander, Victor Glover as Pilot, and Christina Koch as Mission Specialist, alongside Canadian Space Agency (CSA) astronaut Jeremy Hansen as Mission Specialist.

The mission represents a pivotal step in what NASA describes as a “Golden Age of innovation and exploration.” Artemis II will pave the way for subsequent U.S.-crewed missions to the lunar surface, with the goal of establishing a sustained presence on the Moon that will ultimately enable the agency to prepare for human exploration of Mars.

As the world watches, the final countdown has begun for humanity’s return to deep space.


NASA to stream launch and docking of ‘Progress 94 cargo spacecraft’ to ISS

NASA is set to broadcast the launch and arrival of a Russian cargo spacecraft carrying essential supplies to astronauts aboard the International Space Station, as part of routine resupply operations that keep the orbital lab running.

The uncrewed Progress 94 spacecraft, operated by Russia’s space agency Roscosmos, is scheduled to lift off on Sunday, March 22, at 7:59 a.m. EDT from the Baikonur Cosmodrome in Kazakhstan. The mission will ride aboard a Soyuz rocket and is loaded with nearly three tonnes of food, fuel, and other critical materials for the station’s crew.

NASA will begin live coverage of the launch at 7:30 a.m. EDT. The broadcast will be available on NASA+, Amazon Prime, and the agency’s official YouTube channel, alongside other digital platforms.

Following a two-day journey in orbit, the spacecraft is expected to dock automatically with the space-facing port of the Poisk module at around 9:34 a.m. EDT on Tuesday, March 24. Live coverage of the rendezvous and docking is scheduled to start at 8:45 a.m.

Once attached, Progress 94 will remain at the station for roughly six months. During that time, it will serve both as a supply vessel and a storage unit for waste. At the end of its mission, it will detach and burn up upon re-entry into Earth’s atmosphere, safely disposing of onboard trash.

The mission follows the departure of Progress 92, which undocked from the station on March 16 and disintegrated over the Pacific Ocean without incident.

The International Space Station has been continuously inhabited for over 25 years, serving as a hub for scientific research in microgravity. The platform continues to support studies that cannot be conducted on Earth, while also helping space agencies prepare for longer missions beyond low Earth orbit, including NASA’s Artemis programme aimed at returning humans to the Moon, and eventual crewed missions to Mars.


NASA to Brief Media on X-59 Supersonic Aircraft Flight Tests After 2nd California Mission

NASA is scheduled to host a media teleconference Friday at 6 p.m. EDT to outline the next phase of flight testing for its X-59 quiet supersonic aircraft, with the briefing set to follow the plane’s second test flight over California the same day.

The call will include NASA leadership, representatives from the agency’s Quesst mission, and officials from primary contractor Lockheed Martin Skunk Works. The X-59’s test pilots are also expected to participate, addressing questions about flight conditions and pre-flight preparation protocols.

The Quesst mission, short for Quiet SuperSonic Technology, is designed to gather data on how communities on the ground perceive sonic disturbances from supersonic flight, with the goal of informing potential regulatory changes to current restrictions on overland supersonic commercial travel in the United States. The X-59 is engineered to reduce the sonic boom typically associated with supersonic aircraft to what NASA describes as a quieter “sonic thump.”

Lockheed Martin Skunk Works, the advanced development division behind the aircraft’s construction, has been working alongside NASA on the program since the agency awarded the contract in 2018. The X-59 completed its first flight in March 2024 at Lockheed’s facility in Palmdale, California.

Full teleconference details and dial-in credentials are expected to be made available through NASA’s media channels ahead of the Friday briefing, which will be streamed on NASA’s YouTube channel. An instant replay will be available online.

Participants include:

  • Bob Pearce, associate administrator, NASA Aeronautics Research Mission Directorate, Washington
  • Cathy Bahm, project manager, Low Boom Flight Demonstrator, NASA’s Armstrong Flight Research Center, Edwards, California
  • Peter Coen, Quesst mission integration manager, NASA’s Langley Research Center, Hampton, Virginia
  • Jim “Clue” Less, X-59 test pilot, NASA Armstrong
  • Pat LeBeau, Lockheed Martin X-59 project manager

To participate in the virtual call, members of the media must RSVP no later than two hours before the start of the event to: kristen.m.hatfield@nasa.gov. NASA’s media accreditation policy is available online.

 


Python blood points to a new way of making safer weight-loss treatments

According to scientists at the University of Colorado Boulder, the next generation of weight-loss therapies may be shaped with the help of an unexpected source: python blood.

In findings published in Nature Metabolism on March 19, researchers identified a compound in pythons that appears to naturally suppress appetite while preserving muscle and overall metabolic health. The discovery may open the way to weight-loss medications free of some of the side effects seen with existing drugs.

The study, carried out in partnership with researchers at Stanford and Baylor universities, focuses on how pythons cope with their extreme feeding patterns. The snakes can consume enormous meals and then go months or longer without eating, with no ill effect on their organs or muscle tissue.

Senior author Leslie Leinwand said the work reflects a scientific approach of learning from nature’s extremes. Animals such as pythons, she noted, can do things biologically that mammals cannot, offering clues for medical innovation.

Pythons also show remarkable metabolic plasticity. After a meal their bodies change dramatically: the heart can grow by roughly 25 percent, and metabolism surges to digest the food efficiently.

To pin down what drives these changes, the scientists analyzed blood samples from ball pythons and Burmese pythons across feeding cycles. They found more than 200 metabolites that rose sharply after a meal.

One compound, para-tyramine-O-sulfate (pTOS), stood out: its levels rose nearly 1,000-fold after feeding.

Follow-up experiments, conducted with the Baylor researchers, showed that pTOS given to mice acted on the brain’s appetite-control centre, the hypothalamus, producing weight loss. Notably, this occurred without causing gastrointestinal distress, muscle wasting, or energy deficiency.

The compound is synthesized by gut bacteria in snakes and is not native to mice. It occurs in humans at low concentrations, especially after meals, but has gone largely unnoticed because most metabolic research is done in rodents.

The results arrive at a time when drugs targeting the GLP-1 hormone, such as Ozempic and Wegovy, dominate weight-loss management, despite side effects that lead many patients to stop treatment within the first year.

According to Leinwand, the newly discovered compound might offer a different route. She pointed out that even the available GLP-1 medicines were nature-inspired, derived from a hormone found in Gila monster venom.

Building on the finding, the research group has launched a start-up, Arkana Therapeutics, to examine how python-derived metabolic compounds might be turned into medicines.

Beyond weight loss, the scientists are also exploring wider applications. Sarcopenia, or age-related muscle loss, is a significant unresolved medical problem with no effective treatments. The python’s ability to maintain muscle mass even through long periods of starvation could prove highly relevant.

The researchers plan to probe the pTOS mechanism in humans and to examine other metabolites identified in the study, some of which rose by 500 to 800 percent following a meal.

The present findings, the researchers said, are just the tip of the iceberg, with much more to discover about nature-inspired metabolic therapies.


Huge Craters on Asteroid Psyche Could Provide Clues to Early Planets

A new investigation into how massive craters form on asteroid 16 Psyche is offering fresh perspective on one of the Solar System’s most persistent mysteries: whether the metallic object is the exposed core of a failed planet or a jumble of debris built up through repeated collisions.

The study, conducted by scientists at the University of Arizona’s Lunar and Planetary Laboratory and published in JGR Planets, examines whether a large impact basin near the asteroid’s north pole could reveal Psyche’s interior composition. The results are likely to inform interpretation of data from NASA’s Psyche spacecraft, which is due to reach the asteroid in 2029.

Psyche is the largest known metal-rich asteroid, orbiting in the main asteroid belt between Mars and Jupiter, and is among the most massive bodies in the region. Its unusual makeup has long puzzled scientists, with rival theories proposing that it is either the exposed metallic remnant of an early planet or the product of violent impacts that mixed metal and rock over time.

To test these scenarios, the researchers simulated high-speed impacts on a 3-D model of Psyche, reproducing a crater roughly 30 miles across and three miles deep. By varying impact conditions and internal structures, the team could predict how different compositions would shape the resulting crater and the surrounding debris.

The simulations show that porosity, the amount of empty space inside the asteroid, strongly affects crater formation. Unlike solid planetary bodies, most asteroids are loosely bound or fractured and therefore absorb impact energy differently. Impacts into more porous structures produce deeper, steeper craters and eject less material onto the surface.

Layered metallic core or mixed interior?

The study tested two main models of Psyche’s interior: one in which the asteroid is layered, with a dense metallic core beneath a thin rocky mantle, and another in which metal and silicate materials are evenly mixed throughout. Although both scenarios could produce craters of the observed size, each generated a distinct pattern of ejecta and internal compression.

These differences, the researchers say, could serve as important clues once direct observations begin. Instruments aboard the Psyche spacecraft will measure the asteroid’s surface composition, gravity, and magnetic field, allowing an assessment of density variations that past impacts may have left behind.

The scientists compare the work to reconstructing a long-finished event from what it left behind. By studying surface craters and debris patterns, they hope to determine the internal composition of a body that could illuminate the earliest phases of planetary formation.

Origin of Psyche

The question of Psyche’s origin has far-reaching consequences for planetary science. If the asteroid turns out to be an exposed core, it would offer a chance to study the processes that formed rocky planets such as Earth, processes otherwise out of reach because planetary cores lie buried beneath thick mantles.

The study also underscores the growing role of advanced simulations in preparing for space missions. By making testable predictions before the spacecraft arrives, researchers hope to speed up analysis once real data begins streaming in.

The Psyche mission, led by Arizona State University and supported by NASA’s Jet Propulsion Laboratory and other organizations, is part of NASA’s Discovery Program. When the spacecraft reaches its destination toward the end of this decade, scientists hope it will provide the first close-up view of a metallic world – and possibly settle a debate more than a century old.


DNA Gaps: Did Neanderthal Men Mostly Mate With Modern Human Women?

A new genetic study indicates that early interbreeding between Neanderthals and modern humans was lopsided: most pairings appear to have involved Neanderthal men and modern human women, which could explain long-standing gaps in the human genome.

The study, led by researcher Alexander Platt and colleagues, investigates how Neanderthal genetic material is distributed in present-day humans. Although most people outside Africa carry traces of Neanderthal ancestry, it is spread unevenly across the human genome.

A particularly interesting feature is the existence of so-called Neanderthal deserts: large stretches of DNA in which Neanderthal genetic material is virtually absent. These gaps are especially pronounced on the X chromosome, raising questions about how the ancient interbreeding unfolded.

Scientists have long speculated about whether these gaps arose from natural selection, with harmful Neanderthal genes gradually being purged, or whether the pattern of interbreeding itself was the cause.

The researchers reversed the question to investigate. Instead of studying Neanderthal DNA in contemporary humans, they looked for remnants of early modern human DNA in Neanderthal genomes. Comparing these with genetic data from sub-Saharan African populations, most of whom carry no Neanderthal ancestry, the team reconstructed ancient patterns of gene flow between the two groups.

Great Imbalances in DNA

Their results revealed a striking imbalance: the proportion of modern human DNA on Neanderthal X chromosomes was far higher than expected, approximately 62 percent higher. The researchers believe this asymmetry is best explained by most of the interbreeding having occurred between male Neanderthals and female modern humans.

Because males pass their X chromosome only to daughters, this pattern would limit how much Neanderthal X-linked DNA survived into subsequent human generations, leading over time to the low concentration of Neanderthal genetic material on the human X chromosome seen today.

The paper also notes that social or behavioural factors, such as mate preferences, could have contributed to these patterns, though demographic influences, such as differences in population size or migration, cannot be ruled out.

Natural Selection Behind Imbalance?

Natural selection probably reinforced this imbalance as well. Harmful or incompatible Neanderthal genes, especially those tied to important biological functions, may have been gradually purged from the human gene pool over generations.

The results offer new insight into the complicated relationships between early human groups and their closest evolutionary relatives, not only in terms of genetic inheritance but also the social processes that may have shaped human evolution.

By combining genomic evidence with evolutionary modelling, the study brings scientists closer to understanding how ancient interbreeding events still shape the genetic landscape of modern humans.

Also Read:

Neanderthal gene in modern women helps give birth to more children, says study

Research establishes traces of Neandertal DNA present in genome of modern humans


Organ Donations After Cardiac Death Soar in US, Expand Transplant Lifeline 

Organ donation in the United States is undergoing a significant shift: nearly half of all donors are now patients whose hearts have stopped, greatly expanding the availability of transplant organs, according to new research.

According to a study by scientists at NYU Langone Health, donation after circulatory death (DCD) has increased significantly over the last 25 years, rising from 2 percent of all donors in 2000 to 49 percent in 2025. The findings, published in the Journal of the American Medical Association, show how advances in medical technology are transforming transplant medicine.

The growth comes at a time of acute demand. According to the United Network for Organ Sharing, more than 100,000 people are already on transplant waiting lists in the U.S., making new sources of viable organs urgently needed.

Conventionally, donated organs have come from patients declared brain dead, whose organs remain oxygenated because the heart is still beating. DCD, by contrast, involves patients who are on life support but not brain dead. If life-sustaining treatment is withdrawn, with prior consent, and the patient dies within a set period, organs can be recovered for transplant.

Drawbacks Overcome With Tech 

In the past, organs from such donors were less viable because of the brief period without oxygen after the heart stops. Recent technological advances, however, have largely overcome these drawbacks.

Organ preservation has improved with techniques such as normothermic regional perfusion, in which blood flow to the organs is restored after cardiac death, and machine perfusion systems, which pump oxygenated fluid through organs outside the body. These innovations allow surgeons to safely use organs once considered unsuitable.

According to the researchers, this has expanded the donor pool. They found that today's DCD donors are older and more likely to have underlying conditions such as diabetes or hypertension than earlier donors, reflecting more inclusive donor selection.

Syed Ali Husain, the lead author, said the rise in circulatory-death donation is already having a tangible impact, with thousands of patients receiving transplants who might otherwise not have survived the wait.

Regional Disparity Persists

The national transplant data also revealed regional disparities. DCD donors accounted for up to 73 percent of all donations in some parts of the country but only 11 percent in others, indicating uneven adoption of the practice.

The researchers stressed the need for uniform national standards and continued public engagement to safeguard ethics and maintain trust in the donation process.

More research is needed, they say, to understand long-term outcomes and refine protocols as DCD becomes more common. Future work will aim to improve donor identification and compare how organs from circulatory-death donors perform against those from traditional brain-death donors.

The findings mark an important development in transplant medicine, one that may help narrow the gap between organ supply and demand while raising new questions for clinical practice, ethics and public opinion.

Also Read:

New Chip Helps Diagnose Heart Attacks Based on Blood Test in Minutes

Heart attack prevention lags for people with stroke, peripheral artery disease: Study

Bull Sharks Form Social Bonds, Finds Study; Changes Age-Old Perception of Predators

A recently published long-term study carried out in the Shark Reef Marine Reserve in Fiji has found that bull sharks form stable social connections, showing preferences for particular companions rather than associating at random, challenging the old view of sharks as highly individualistic creatures.

Conducted by scientists at the University of Exeter, Lancaster University, Fiji Shark Lab and Beqa Adventure Divers, the study monitored the behaviour of 184 bull sharks over six years. Individuals were analysed at three life stages, sub-adults, adults and older post-reproductive sharks, providing one of the most comprehensive pictures of shark social structure to date.

The researchers say the sharks showed what they call active social preferences, repeatedly associating with specific individuals and avoiding others. Relationships were assessed by proximity, counting sharks that swam within one body length of each other, as well as more complex behaviours such as parallel swimming and leader-follower movement patterns.
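The proximity-based approach can be illustrated with a toy association index. This is a hedged sketch, not the study's actual method (which the article does not detail); the sighting data are invented, and the half-weight index is a standard illustrative choice for scoring how often two animals are seen together.

```python
# Simplified sketch of scoring pairwise social association from proximity
# sightings. Hypothetical data; the study's real network analysis is not
# described in the article.
from itertools import combinations

# Each set lists shark IDs observed within one body length of each other
# during one sampling period (invented for illustration).
sightings = [
    {"A", "B", "C"},
    {"A", "B"},
    {"B", "C"},
    {"A", "D"},
]

def half_weight_index(sightings, x, y):
    """Half-weight index: joint sightings over joint plus half the solo sightings."""
    joint = sum(1 for s in sightings if x in s and y in s)
    x_only = sum(1 for s in sightings if x in s and y not in s)
    y_only = sum(1 for s in sightings if y in s and x not in s)
    denom = joint + 0.5 * (x_only + y_only)
    return joint / denom if denom else 0.0

ids = sorted(set().union(*sightings))
network = {
    (x, y): half_weight_index(sightings, x, y)
    for x, y in combinations(ids, 2)
}
```

Pairs with a high index (here A and B) would count as preferred companions; pairs never seen together score zero.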

According to lead researcher Natasha D. Marosi, the results parallel social behaviour in humans and other animals, where individuals do not interact at random but maintain a range of relationships.

In the study, adult sharks formed the core of these social networks, and the most frequent and closest interactions occurred between sharks of similar size. Younger sub-adults and older sharks, by contrast, were less socially connected, suggesting that social activity may differ across life stages.

Males Prefer Larger Number of Social Contacts

The researchers also found that both male and female sharks preferred to associate with females, though males had more social contacts overall. One possible explanation, the study suggests, is that larger male sharks reduce the threat of aggression from other large sharks through greater social integration.

Professor Darren Croft of the University of Exeter said the evidence points to a degree of behavioural sophistication not usually associated with sharks, suggesting that sociality can bring benefits such as foraging success, learning, mating opportunities and conflict avoidance.

The Shark Reef Marine Reserve, a protected zone where sharks gather year-round, allowed the same individuals to be tracked over time. This consistency let the researchers examine how social associations changed as the sharks aged.

The paper also noted that younger sharks are more often found in other habitats, including nearshore waters, rivers and estuaries, where avoiding predators, including adult bull sharks, is their main survival strategy. The few sub-adults seen entering the reserve appeared able to form relationships with older sharks, which may have helped them integrate and learn.

Older sharks, however, were found to be less socially active, which the researchers theorize may reflect accumulated experience in hunting and survival that reduces the need for social interaction.

The findings may inform conservation efforts, the researchers say. A better understanding of shark sociality could shape management policy, particularly in protected zones where human activity and ecotourism overlap with marine ecosystems.

Fiji Shark Lab is now collaborating with the Fiji Ministry of Fisheries to integrate the study's behavioural insights into conservation planning, as scientists continue working to understand the social lives of a group of species long misunderstood.


Also Read:

Scared of spiders? A world without them is true nightmare tale: Study

Wolves kill, and ravens recall where: What is the scavenging strategy?

Low smoke does not equal low risk: All solid fuels found to produce ultrafine particles

University of Galway-led research has found that burning low-smoke manufactured fuels releases minute ultrafine particles that may be even more harmful to human health.

Since 2022, the Ryan Institute at the university has run a series of controlled burn experiments with peat, wood, "low-smoke" manufactured products such as the low-smoke coal promoted since smoky coal was banned from domestic stoves, and several other domestic heating fuels, to determine precisely what various domestic fuels emit into the air.

The scientists analysed the smoke with sophisticated instruments, measuring the number of particles produced, their size and their composition.

The team also took real-world air measurements in Dublin and Birr, Co Offaly over several years, allowing them to compare laboratory results with what people actually breathe during periods of winter pollution.

Using these measurements, established statistical fingerprinting methods and proven lung-deposition models, the researchers determined how much each fuel contributed to harmful emissions and how deeply its particles can penetrate the respiratory system.

The findings, observed in a low-smoke zone in Ireland but applicable across the rest of Europe, and with major implications for regions undergoing rapid energy transitions such as China and India, indicate that EU, international and national regulatory frameworks must respond more quickly to the accumulating body of scientific evidence.

The study was published in Nature Geoscience.

The research was conducted by the Centre for Climate and Air Pollution Studies at the Ryan Institute, University of Galway, in conjunction with Irish, Chinese, Australian and U.S. partners.

Professor Jurgita Ovadnevaite, Director of the Centre for Climate and Air Pollution Studies at the Ryan Institute, University of Galway, stated: “In the attempt to reduce particulate mass, our research indicates, emissions of the smallest particles have been inadvertently increased, and these could be even more detrimental to human health than the larger ones. The ultrafine particles from low-smoke fuels reach the deepest parts of the lungs, pass into the cardiovascular system and can even reach the brain.

On this basis, we highlight why moving away from residential solid fuel burning must be part of the broader societal goal of decarbonising the economy by 2050.”

The research also points to a pressing need to revise EU and international air quality standards to include ultrafine particles among regulated pollutants, so that particulate mass can be managed without increasing the number of ultrafine particles.

The study shows that substituting smoky fuels with their low-smoke counterparts can double or even triple ultrafine particle emissions.

Because smaller ultrafine particles penetrate deeper into the lungs and settle there, this newly documented trend can offset some of the benefits of reduced smoke emissions. Rather than lowering total human exposure by cutting particulate matter (PM) mass, the switch increases the number of ultrafine particles, and potentially their health effects.


Published research indicates that airborne particle number concentrations are greatly underestimated, by as much as tenfold, in existing air quality models.

Air pollution causes several million premature deaths every year around the world. One of the biggest contributors to this frightening statistic is exposure to airborne fine particulate matter (PM2.5; particles less than 2.5 µm in diameter). Even in Ireland, commonly viewed as having clean air, PM2.5 pollution is associated with over 1,700 premature deaths per year.

Ultrafine particles (smaller than 100 nm in diameter) cause more severe pulmonary inflammation and longer lung retention than PM2.5 because they can penetrate deep into the respiratory tract and even cross the blood-brain barrier. They become more toxic as their size decreases, owing to greater specific surface area, constituents bound to the surface and their intrinsic physical characteristics.

The health impact of ultrafine particles is increasingly recognised in European policy: the recent amendment of the Ambient Air Quality Directive (EU 2024/2881) for the first time obliges Member States to monitor them. This research adds to the evidence that the directive should go further and establish binding regulatory limit values for ultrafine particles.

The Centre for Climate and Air Pollution Studies at the University of Galway provides evidence to national and EU policymakers, supporting the development of air-quality standards, emission-reduction policies and climate action planning. Its work underpins Ireland's ability to comply with new regulatory requirements, including the EU's rules on ultrafine particle monitoring.

Read More:

Belém COP30 delivers climate finance boost and a pledge to plan fossil fuel transition

The Oil Shock Lesson: Why Energy Diversification Is Back On The Global Agenda

Newer groundwater linked with increased risk of Parkinson's disease

A new study has found that people whose drinking water came from newer groundwater were at greater risk of developing Parkinson's disease than those whose drinking water came from older groundwater.

  • The study does not prove that newer groundwater causes Parkinson's; it shows only an association.
  • Older groundwater usually carries fewer contaminants, since it is mostly deeper and better protected.
  • Drinking water drawn from carbonate aquifers was associated with a 24 percent greater risk of Parkinson's disease compared with other types of aquifers.
  • It was also linked to a 62 percent higher risk compared with water from glacial aquifers.
  • Newer groundwater (less than 75 years old) in carbonate systems was linked to an 11 percent higher risk of Parkinson's than ice-age groundwater more than 12,000 years old.

People whose drinking water came from more recent groundwater were at greater risk of developing Parkinson's disease than those whose drinking water came from older groundwater, according to a preliminary study released March 2, 2026, to be presented at the 78th Annual Meeting of the American Academy of Neurology, held April 18-22, 2026, in Chicago and online. The research does not demonstrate that newer groundwater causes Parkinson's disease; it only indicates a correlation.

The paper examined the age of groundwater as well as the aquifers from which it was extracted. An aquifer is a layer of porous rock, sand or silt in the ground that holds and transmits groundwater.

The study was carried out by Brittany Krzyzanowski, PhD, a member of the American Academy of Neurology who conducted the research in Phoenix, Arizona, and is now at the Atria Research Institute in New York City, as part of her work on exposure to modern pollution through drinking water. Newer groundwater, formed from precipitation that fell within the last 70 to 75 years, has been exposed to more pollutants. Older groundwater tends to carry fewer contaminants, since it is usually deeper and better shielded from surface contamination. The research identified groundwater age and location as possible environmental risk factors for Parkinson's disease.

The researchers compared 12,370 people with Parkinson's disease with more than 1.2 million people without the disease, matched on variables such as age, sex, race and ethnicity. All participants lived within three miles of one of 1,279 groundwater sampling sites across 21 large U.S. aquifers.

They examined groundwater age, aquifer type and drinking water source (municipal groundwater systems or private wells) as possible indicators of exposure to neurotoxic contaminants.

Carbonate aquifers, the most common type in the United States, are composed mainly of limestone, with water held in fissures and cracks. Because groundwater can move rapidly through fractures, they are especially vulnerable to surface contamination.

Glacial aquifers are made of sand and gravel that hold water in their pore spaces, deposited as glaciers advanced and retreated more than 12,000 years ago. These aquifers tend to support more diffuse flow and natural filtration.

Carbonate aquifers are prevalent in parts of the Midwest, the South and Florida, while glacial aquifers are common in the Upper Midwest and Northeast.

Of those with Parkinson's, 3,463 got their drinking water from carbonate aquifers, 515 from glacial aquifers and 8,392 from other aquifers. Of those without Parkinson's, 300,264 drew their water from carbonate aquifers, 62,917 from glacial aquifers and 860,993 from other aquifers.
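These raw counts are enough to sketch how a crude, unadjusted odds ratio is computed. The study's published figures are adjusted for age, sex, income and air pollution, so this back-of-the-envelope estimate differs from them; it only illustrates the arithmetic.

```python
# Crude odds ratios from the raw case/control counts reported in the article.
# Unadjusted, so they differ from the study's published adjusted estimates.

cases = {"carbonate": 3463, "glacial": 515, "other": 8392}
controls = {"carbonate": 300264, "glacial": 62917, "other": 860993}

def odds_ratio(exposed, reference):
    """Odds of disease in the exposed group divided by odds in the reference group."""
    odds_exposed = cases[exposed] / controls[exposed]
    odds_reference = cases[reference] / controls[reference]
    return odds_exposed / odds_reference

or_carbonate_vs_other = odds_ratio("carbonate", "other")      # roughly 1.18
or_carbonate_vs_glacial = odds_ratio("carbonate", "glacial")  # roughly 1.41
```

Even unadjusted, the direction matches the study's finding: water from carbonate aquifers is associated with elevated risk relative to both other and glacial aquifers.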

After adjusting for factors such as age, sex, income and air pollution, people whose drinking water, whether from municipal groundwater systems or private wells, came from carbonate aquifers had a 24 percent higher risk of developing Parkinson's disease than those served by all other aquifers, and a 62 percent higher risk than those served by glacial aquifers.

The protective effect of older groundwater appeared only where water came from carbonate aquifers: the risk of Parkinson's disease decreased by about 6.5 percent per one-standard-deviation increase in groundwater age. Newer groundwater (less than 75 years old) in carbonate systems was likewise linked to an 11 percent higher risk of Parkinson's disease than ice-age groundwater more than 12,000 years old.

Carbonate systems 

Krzyzanowski suggested that the apparent protective effect of older groundwater shows up mainly in carbonate systems because they present a sharper contrast between newer and older water. Newly recharged groundwater in these aquifers is more susceptible to surface contamination, while older groundwater can be cleaner if it is isolated by a confining layer.

In glacial aquifers, by contrast, groundwater flows more slowly and contaminants are naturally filtered as the water moves through the ground, Krzyzanowski said. As a result, contamination levels may differ little between new and old groundwater in these aquifers, making any effect hard to detect.

Krzyzanowski noted that people can usually find out where their drinking water comes from through their local water utility or, for a private well, through state or county groundwater resources.

According to Krzyzanowski, the study highlights how the origin of our water, groundwater age and the type of water source may influence long-term neurological health. Although further research is needed, combining knowledge of groundwater and brain health could help communities assess and mitigate environmental risks.

One limitation of the study is its assumption that everyone within a three-mile radius of a sampling point shared the aquifer characteristics and groundwater age measured at that point.

Also Read:

World enters era of ‘global water bankruptcy’

WORLD WATER DAY: The ‘cold hard truth’

Scared of spiders? A world without them is true nightmare tale: Study

Spiders, scorpions and harvestmen (daddy longlegs) are frequent objects of revulsion, disgust and fear. Yet these arachnids are essential to thriving ecosystems.

Against the backdrop of plummeting global biodiversity, which some call the insect apocalypse, two ecologists at the University of Massachusetts Amherst set out to assess how insects and arachnids are faring in the United States, only to find huge gaps in the data. Their study, recently published in PNAS, points to an urgent need to assess, conserve and appreciate insects and arachnids, a major pillar of planetary health.

Laura Figueroa, an assistant professor of environmental conservation at UMass Amherst and senior author of the paper, notes that insects and arachnids are fundamental to human society. They pollinate crops and provide biological pest control; they serve as environmental indicators for monitoring air and water quality; and they are woven into cultures around the world (think of Aragog in the Harry Potter books). Charismatic animals such as lions and pandas rightly attract international conservation attention, but because insects and arachnids rarely receive the same focus, the researchers wanted to know how they were doing.

To gauge the health of our creepier, crawlier neighbors, Figueroa and her graduate student Wes Walsh, the paper's lead author, compiled conservation assessments for the 99,312 known insect and arachnid species in North America north of Mexico.

Findings mind boggling

As Figueroa says, almost 90 percent of insect and arachnid species, 88.5 percent to be exact, have no conservation status at all. "We do not even know how they are doing. Little is known about the conservation needs of the majority of insects and arachnids in North America."

What little data exists is skewed toward aquatic species crucial for water-quality monitoring (mayflies, stoneflies and caddisflies), while popular insects such as butterflies and dragonflies receive a disproportionate share of protection.

Arachnids fare even worse: most states do not protect even one spider species. More information, and more protection, are needed for insects and arachnids alike, says Walsh.

The team also found that states most dependent on extractive industries, such as mining, quarrying, and oil and gas extraction, were less likely to protect either insects or arachnids, while states whose residents held more eco-centric views protected more species.

By comparison, Figueroa points to bird conservation, which has been far more successful at protecting and preserving species. The research suggests that conservation works best when a broad, diverse coalition gets behind it: for birds, hunters, birdwatchers, nonprofit organizations and many other constituencies joined forces toward a unified goal.

Insects and arachnids are not things to be feared, insists Walsh, who sports a gorgeous spider tattoo on his arm. It is time to value them and recognize their ecological significance, starting with gathering more information and acknowledging that they deserve conservation.

Also Read:

Wolves kill, and ravens recall where: What is the scavenging strategy?

At-risk mountain vipers and iguanas, in rare company at key wildlife talks


What are "blue tears"? New AI algorithm allows scientific monitoring of this unique phenomenon

Blue tears is a stunning natural bioluminescent phenomenon in which coastal waters glow with an ethereal blue light, caused primarily by massive blooms of microscopic marine plankton, such as Noctiluca scintillans or ostracod crustaceans, disturbed by wave motion. Often seen in China, Taiwan and the Maldives, this glowing, sometimes toxic "sea of stars" usually occurs in warmer months.

Chasing blue tears has become a popular coastal tourism activity in recent years. But algal blooms are unpredictable in timing and movement, which undermines the tourist experience while introducing safety risks and ecological pressures.

In a study published in Ecological Informatics, a team led by Professor Li Jianping of the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, together with partners at the Ministry of Natural Resources, developed a novel real-time video monitoring algorithm called BT-YOLO.

BT-YOLO segments the glowing regions of video footage pixel by pixel, localizing blooms and quantifying their intensity and distribution. Unlike traditional approaches that merely detect the presence of blue tears, the algorithm provides a scientific basis for grading bloom severity and a step toward a forecasting system.
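BT-YOLO itself is a deep-learning model whose details live in the paper and are not reproduced here. As a loose illustration of the underlying idea, pixel-wise segmentation followed by quantification, the toy sketch below flags bright, blue-dominant pixels in a frame and reports bloom coverage; the thresholds and the tiny synthetic frame are invented for the example.

```python
# Toy illustration of pixel-wise bloom segmentation and quantification.
# Not BT-YOLO: a simple color threshold standing in for a learned model.
import numpy as np

def segment_glow(frame, blue_threshold=180, dominance=1.5):
    """Mark pixels whose blue channel is bright and dominates red and green."""
    r = frame[..., 0].astype(float)
    g = frame[..., 1].astype(float)
    b = frame[..., 2].astype(float)
    return (b > blue_threshold) & (b > dominance * r) & (b > dominance * g)

def bloom_fraction(mask):
    """Fraction of the frame covered by glowing water: a crude severity score."""
    return mask.mean()

# Synthetic 4x4 RGB frame: two 'glowing' pixels on a dark background.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[0, 0] = (20, 30, 220)   # bright blue pixel
frame[1, 2] = (10, 10, 200)   # bright blue pixel

mask = segment_glow(frame)
coverage = bloom_fraction(mask)
```

A real system would replace the threshold with the trained segmentation network and aggregate coverage over time to grade bloom severity.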


We have made scales and rulers to measure blue tears, Prof. Li explained; once a coastal surveillance camera network is implemented, the algorithm will enable rapid quantification and bring the team closer to an operational forecasting system. The algorithm can also be used to track other ocean features, such as red tides and marine debris, offering a route to smart coastal management.

The research lays a foundation for predicting when, where, how large and how severe blue tears events will be. The forecasting system will be refined with data from coastal camera networks, bringing it closer to real-world deployment and helping balance ecological protection with sustainable tourism.

Also Read:

End of the World champion Meade surfaces again, says April 23 the Doomsday

‘Aliens Real But Not in Form We Imagine’: Reiterates Alien Hunter Bill Diamond of SETI

What makes lithium-ion batteries fail? Microscopic metal thorns give scientists leads

For the first time, scientists have observed tiny metal thorns known as dendrites growing inside lithium-ion batteries and causing them to short-circuit. The results, published March 12 in the journal Science, illuminate previously unrecognized mechanical aspects of lithium dendrites during their development.

Scientists have studied lithium dendrites for a long time, yet their behavior inside batteries has remained poorly understood. Dendrites develop at the nanoscale, making their growth difficult to monitor in a closed system such as a working battery, but they have long been associated with battery degradation and failure.

The new work, by an international team of researchers at U.S. and Singapore universities, combined simulation and experiment to produce the first view of how dendrites crystallize, according to co-lead author Xing Liu, an assistant professor of mechanical and industrial engineering at the New Jersey Institute of Technology and head of the NJIT Computational Mechanics and Physics Lab.

He says the result stems from close collaboration between experimental and computational mechanics and could help make batteries safer.

Co-author Qing Ai, a former research scientist at Rice University, says: “The basic nanomechanical behavior of lithium dendrites has been a riddle of decades.”

Customized platforms
Lithium dendrites, named for the Greek word for tree, are about 100 times thinner than a human hair and sprout from anodes, the negative terminals of lithium-ion batteries. Dendrite branches can extend into a lithium cell's electrolyte; if they grow from the negatively charged anode all the way to the positively charged cathode, they can short out the battery.

Lithium dendrites are widely regarded as one of the biggest obstacles to commercializing lithium-metal batteries, Liu says. During battery operation, dendrites can form, break off and become electrically isolated from the lithium metal anode, producing so-called dead lithium. This progressively depletes battery capacity over time. Dendrites can also tunnel through the separator and create an internal short between the anode and cathode. Both the capacity loss and the short-circuit risk from dendrites are commonly seen in laboratory experiments.

Worse still, lithium dendrites are almost impossible to eliminate from a battery once they develop.

At present, says Liu, “there is no practical way to remove dendrites from a working battery cell.”

In the new study, scientists at Rice University, together with colleagues at the Georgia Institute of Technology, the University of Houston and Nanyang Technological University in Singapore, extracted dendrites from working batteries to test their mechanical strength.

“To make a quantitative study of lithium dendrites possible, we built specialized sample preparation and mechanical characterization stations for this delicate work,” says Boyu Zhang, a Rice doctoral graduate and a co-lead author on the work.

Co-corresponding author Jun Lou, Rice's Karl F. Hasselmann Professor of Materials Science and Nanoengineering, led a team at the Nanomaterials, Nanomechanics and Nanodevices lab in directly probing the mechanical behavior of dendrites as they grew in real batteries. The extremely delicate experiments were performed by Ai and Zhang, former members of Lou's lab, with help from study co-corresponding author Hua Guo and co-author Wenhua Guo of the Rice University Shared Equipment Authority.

To carry out the experiments, they built air-tight platforms for preparing and studying the samples, since lithium is a highly reactive element that changes chemically and structurally upon even brief exposure to air. They then used high-resolution electron microscopy to observe how individual dendrites deformed under controlled stresses.

‘Like dry spaghetti’

Bulk lithium is soft, so lithium dendrites were expected to be soft as well. The experiments indicated otherwise. Real-time observations of dendrites failing inside operating batteries, made by the University of Houston team led by co-corresponding author Yan Yao, a professor in the Department of Electrical and Computer Engineering, confirmed that dendrites are brittle in both liquid and solid electrolyte systems.

Liu says it has long been assumed that lithium dendrites are soft and ductile, like Play-Doh. But it turns out they can also be stiff and brittle, snapping like dry spaghetti.

The observations were then modeled and theoretically analyzed by teams at NJIT and Georgia Tech.

Liu says the team ran scale-bridging simulations to understand why lithium dendrites behave contrary to expectations.

They discovered that as dendrites grow in a battery cell, they become covered by a thin coating of solid electrolyte interphase, known as SEI. The SEI coating makes the dendrites rigid and needle-like, able to pierce battery separators and electrolytes, and prone to breaking under stress; the broken fragments accumulate in the cell as dead lithium and lead to battery failure.
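As a rough illustration of how a thin, stiff shell can turn a soft filament into a rigid needle, the coated dendrite can be sketched as a composite beam whose bending stiffness is the sum of each layer's contribution. The geometry and moduli below are illustrative assumptions, not values from the paper:

```python
import math

def flexural_rigidity(r_core, t_shell, E_core, E_shell):
    """Bending stiffness (E*I) of a cylindrical filament with a stiff shell.

    Models the dendrite as a soft lithium core of radius r_core wrapped in
    an SEI shell of thickness t_shell; each layer contributes its modulus
    times the second moment of area of its cross-section.
    """
    R = r_core + t_shell
    I_core = math.pi * r_core**4 / 4            # solid circular core
    I_shell = math.pi * (R**4 - r_core**4) / 4  # annular shell
    return E_core * I_core + E_shell * I_shell

# Illustrative numbers (assumptions, not measurements from the study):
# ~300 nm lithium core, ~20 nm SEI shell, E_Li ~ 5 GPa, E_SEI ~ 50 GPa.
bare = flexural_rigidity(300e-9, 0.0, 5e9, 50e9)
coated = flexural_rigidity(300e-9, 20e-9, 5e9, 50e9)
print(coated / bare)  # ~4x: a thin stiff shell dominates the bending stiffness
```

Because the second moment of area grows as the fourth power of radius, even a shell only a few percent of the core's thickness multiplies the stiffness several-fold, which is consistent with the stiffening role the study attributes to the SEI.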

Liu explains that understanding the underlying physics could soon allow researchers to develop ways of making dendrites less susceptible to brittle fracture, such as using lithium alloy anodes. For researchers in computational mechanics, the mechanisms revealed by such experiments, how structures deform and why they break down, are like musical notes that can be composed into a symphony of high-performance materials and high-energy storage systems.

“The strengthening mechanism we identified in lithium dendrites adds a new note to this composition,” Liu says.


How moss helped solve a grave-robbing mystery

In 2009, a scandal emerged at a cemetery just outside Chicago. Employees at Burr Oak Cemetery in Alsip, Illinois were accused of digging up old graves, moving the remains elsewhere within the cemetery and reselling the burial plots. When the case went to trial in 2015, one key piece of evidence was a small clump of moss.

Researchers have now published the first full scientific account of the case in a new article in the journal Forensic Sciences Research, describing exactly how moss was used to show that a crime had been committed.

The paper's lead author, Matt von Konrat, head of the botany collections at the Field Museum in Chicago, is a fan of TV detective shows (the new paper is named for the BBC series Silent Witness), but he never expected his work to draw him into a criminal case. Around 2009, von Konrat received a phone call that turned out to be from the FBI, asking whether he could help identify some plants. Agents came to the Field Museum and handed von Konrat a piece of moss that had been discovered eight inches underground, alongside human remains recovered at the cemetery.

They wanted to know what sort of moss it was, and how long it had been in the soil.

First, von Konrat and his colleagues examined the moss under a microscope and compared it with dried specimens in museum collections, concluding that it was Fissidens taxifolius, also known as common pocket moss. According to von Konrat, they then surveyed the mosses growing in various locations around the crime scene and found that this species was absent from the immediate area. But on examining the rest of the cemetery, they discovered a large colony of the same moss growing in the very spot where investigators believed the bones had been disturbed.

Investigators needed more than the species of the moss, however; they also wanted to know its age. The defendants argued that someone else must have exhumed and reburied the bones before they began working at the cemetery. Because the moss was buried along with the reburied bones, establishing how long it had been underground could help pin down when the bones were reburied.

“Moss,” says von Konrat, “is a bit of a freak. Mosses have an intriguing physiology: even when they appear dry, lifeless and preserved, they can retain an active metabolism and some living cells. That metabolic activity decays over time, and that can tell us when a moss sample was collected.”

A plant's metabolic activity can be gauged from its chlorophyll, the green pigment plants use to photosynthesize. As plant cells die, their chlorophyll degrades and more cells lose the ability to function. The researchers measured the amount of light captured by chlorophyll in moss specimens of known ages, from fresh samples to ones stored in museum collections for up to 14 years. They then ran the same test on the moss collected at the crime scene. They concluded that the evidence moss was no more than a year or two old, which helped the case against the cemetery employees, who in 2015 were found guilty of desecrating human remains.
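The dating logic can be sketched as a simple decay model: if the chlorophyll signal fades roughly exponentially after collection, the fraction remaining yields an age estimate. The half-life used here is a hypothetical calibration constant, not a figure from the study:

```python
import math

def estimate_age_years(fluorescence_fraction, half_life_years=3.0):
    """Estimate how long ago a moss sample was collected.

    Assumes the chlorophyll fluorescence signal decays exponentially with a
    hypothetical calibration half-life (in practice the curve would be fit
    to specimens of known age, like the museum samples in the study).
    """
    if not 0.0 < fluorescence_fraction <= 1.0:
        raise ValueError("fraction must be in (0, 1]")
    return -half_life_years * math.log(fluorescence_fraction) / math.log(2)

print(estimate_age_years(1.0))  # fresh sample -> 0 years
print(estimate_age_years(0.8))  # modest decay -> roughly a year underground
```

A real calibration would account for burial conditions and species differences, but the principle, comparing the evidence sample's signal against a known-age decay curve, is the same.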


“Occasionally the FBI must call on outside experts to help gather evidence, conduct analyses, submit findings to prosecutors and testify about their work if a conviction is pursued. The Burr Oak Cemetery case was one of those, and the Field Museum's botany program proved invaluable: plant material within the cemetery provided the key to charging and convicting four individuals,” says Doug Seccombe, a former FBI agent who worked on the case and a co-author of the new paper.

Von Konrat has been consulted on a number of moss cases since the Burr Oak Cemetery case. Such cases remain rare in forensic science: in 2025, he and several co-authors published another article examining the use of mosses and other bryophytes as forensic evidence, finding only about a dozen examples over the past century.

“Mosses are usually underrated, and we hope our research raises awareness that groups of plants beyond flowering plants play a crucial role in society and all around us. But most to the point, we want to highlight this microscopic group of plants as a tool for law enforcement. If we can find ways of elevating mosses as potential evidence, perhaps that might be of service to some families in the future,” says von Konrat.

What conditions could support life on distant moons?

Liquid water is considered a necessity for life. Remarkably, though, conditions suitable for life could exist far from any sun. A team of researchers in the Excellence Cluster ORIGINS at LMU and the Max Planck Institute for Extraterrestrial Physics (MPE) has shown that moons of free-floating planets can keep their water oceans liquid for up to 4.3 billion years thanks to dense hydrogen atmospheres and tidal heating, roughly as long as Earth has existed, and long enough for complex life to evolve.

Planetary systems usually form under unsteady conditions. When young planets pass close to one another, they can eject each other from orbit. This produces free-floating planets (FFPs) that drift through the galaxy without a parent star. An earlier paper by LMU physicist Dr. Giulia Roccetti had shown that gas giants ejected in this way do not always lose their moons in the process.

Oceans remain in their liquid state because of tidal heating

The ejection does, however, change the moons' orbits, stretching them into highly elongated paths along which each moon's distance from its planet constantly varies. The varying tidal forces rhythmically deform the lunar body, compressing its interior and generating heat through friction. This tidal heating can be enough to sustain oceans of liquid water on the surface, without the power of a star, in the cold of interstellar space.
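The mechanism can be illustrated with the standard equilibrium tidal-heating estimate for a synchronously rotating moon on an eccentric orbit. The formula is textbook celestial mechanics (a Peale-style estimate), and the Io-like numbers plugged in below are for a sanity check only; they are not values from the study:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def tidal_heating(M_planet, a, e, R_moon, k2_over_Q):
    """Equilibrium tidal heating rate (W) of a moon on an eccentric orbit.

    P = (21/2) * (k2/Q) * G * M_p^2 * n * R^5 * e^2 / a^6,
    where n = sqrt(G * M_p / a^3) is the orbital mean motion, e the
    eccentricity, R the moon's radius, and k2/Q its tidal response.
    """
    n = math.sqrt(G * M_planet / a**3)
    return 10.5 * k2_over_Q * G * M_planet**2 * n * R_moon**5 * e**2 / a**6

# Sanity check with rough Io-around-Jupiter values (illustrative assumptions):
P = tidal_heating(M_planet=1.898e27, a=4.217e8, e=0.0041,
                  R_moon=1.822e6, k2_over_Q=0.015)
print(f"{P:.2e} W")  # on the order of 1e14 W, comparable to Io's heat flow
```

The e^2 dependence is the key point for ejected moons: the elongated post-ejection orbits keep the eccentricity, and hence the frictional heating, high.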

Hydrogen as a stable heat trap

It is the atmosphere that determines whether this heat stays at the surface. On Earth, carbon dioxide is an effective greenhouse gas, and prior research had shown that carbon dioxide could stabilize life-supporting conditions on exomoons for up to 1.6 billion years. At the very low temperatures of free-floating systems, however, carbon dioxide would condense out of the atmosphere, losing its protective effect and letting the heat escape.

The researchers, spanning astrophysics, biophysics and astrochemistry, therefore investigated whether hydrogen-rich atmospheres could serve as alternative heat traps. Although molecular hydrogen is mostly transparent to infrared radiation, an important physical phenomenon occurs at high pressures: collision-induced absorption. In this process, colliding hydrogen molecules form transient complexes that can absorb thermal radiation and keep it in the atmosphere. At the same time, hydrogen does not condense even at extremely low temperatures.
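A toy gray-atmosphere energy balance shows why such an infrared-absorbing blanket matters for an internally heated moon: the opaquer the atmosphere, the warmer the surface must be to push the same heat flux out to space. The heat flux and optical depth below are illustrative assumptions, not values from the study:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temperature(flux, tau):
    """Surface temperature under a gray atmosphere in radiative equilibrium.

    flux: internal (e.g. tidal) heat flux through the surface, W/m^2.
    tau:  gray infrared optical depth of the overlying atmosphere.
    Uses the standard gray-atmosphere result T_s^4 = T_eff^4 * (1 + 3*tau/4),
    where T_eff is the bare-surface (no atmosphere) temperature.
    """
    T_eff = (flux / SIGMA) ** 0.25
    return T_eff * (1 + 0.75 * tau) ** 0.25

# With ~1 W/m^2 of tidal heating, a bare surface sits near 65 K; a thick
# hydrogen atmosphere (large tau via collision-induced absorption) can lift
# the surface above water's freezing point.
print(surface_temperature(1.0, 0.0))    # ~65 K, no atmosphere
print(surface_temperature(1.0, 420.0))  # ~273 K, thick hydrogen blanket
```

The fourth-root scaling means it takes a very large optical depth to bridge the gap from interstellar cold to liquid water, which is why a dense, non-condensing gas like hydrogen is the natural candidate.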

Parallels to early Earth

The results also provide new insights into the origin of life. “The collaboration with the team of Professor Dieter Braun helped us understand that the cradle of life does not always need a sun,” says David Dahlbudding, a doctoral researcher at LMU and lead author of the study. There is a striking parallel between these distant moons and the early Earth, whose atmosphere was enriched in hydrogen by asteroid impacts, creating conditions that favored the emergence of life.

Tidal forces could even drive chemical evolution as well as supply heat. The periodic deformation produces local wet-dry cycles in which water evaporates and condenses. Such cycles are regarded as an important process in the formation of complex molecules and may enable essential steps toward the emergence of life.

Life-friendly moons in interstellar space

Free-floating planets are believed to be common; estimates suggest these so-called nomad planets may be as numerous in the Milky Way as stars. Their moons could offer stable habitats over long timescales. The new findings thus considerably expand the range of potential habitats where life might exist, and suggest that life could survive even in the darkest corners of the galaxy.


A smaller biological age gap is linked with better brain health

The younger your biological age relative to your actual age, the lower your risk of stroke and the better your brain health.
The study involved about 250,000 people. Scientists measured 18 biomarkers in their blood to estimate biological age; a subset of participants also underwent brain scans.
Participants who improved their biological age gap over the course of the study were 23% less likely than the rest to experience a stroke later on.
The research does not demonstrate that improving the age gap causes the reduced stroke risk and better brain health; it only shows an association.
According to researchers, a healthy diet, regular exercise, good sleep and blood pressure management may help improve the biological age gap, although this study did not assess any lifestyle program.
The preliminary study, released in March 2026, will be presented at the American Academy of Neurology's 78th Annual Meeting, April 18-22, 2026, in Chicago. It found that a younger biological age relative to chronological age was associated with lower stroke risk and fewer signs of damage in the brain.

Improving the age gap

The research does not demonstrate that improving the age gap causes better brain health; it only shows a correlation.

Researcher Cyprien Rivier of Yale University, a member of the American Academy of Neurology, said that efforts to “change our biological age may be one of the ways to help our brains stay healthy. Lifestyle habits, such as a healthy diet, physical activity, sleep and good blood pressure management, which can help prevent cardiovascular and metabolic disease, might help reduce the biological age difference, but we did not assess lifestyle interventions in the study.”

In the study, researchers analyzed the biological age of 258,169 individuals from a health care research database. They measured 18 blood biomarkers, including cholesterol, average red blood cell volume and white blood cell count, to assess biological age at the start of the study and, for a subgroup of participants, again six years later. Researchers then identified the participants who had a stroke over an average follow-up of 10 years. A subset of participants also took memory and thinking tests and underwent brain scans to look for signs of brain damage.

At the start of the study, participants' average biological age was 54 while their average chronological age was 56. Six years on, their chronological age averaged 62 but their biological age averaged only 58.
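Those averages can be turned into the "biological age gap" with simple arithmetic; negative values mean participants were biologically younger than their actual age, and a more negative gap over time counts as improvement:

```python
def age_gap(biological, chronological):
    """Biological age gap: negative means biologically younger than actual age."""
    return biological - chronological

# Average values reported in the study:
baseline = age_gap(54, 56)   # -2 years at the start
followup = age_gap(58, 62)   # -4 years six years later
improvement = baseline - followup
print(baseline, followup, improvement)  # -2 -4 2: the gap improved by 2 years
```

On average, then, participants aged six calendar years but only four biological years between the two measurements.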

Individuals whose biological age exceeded their chronological age at the end of the study showed worse brain scans and lower scores on the cognitive tests. They also had a 41 percent higher risk of stroke.

Those whose biological age gap improved between the start of the study and the repeat measurement, becoming biologically younger relative to their actual age, had a 23% lower risk of stroke during the follow-up period.

Participants whose gap improved also had fewer white matter hyperintensities, a marker of white matter tissue damage, by the end of the study than those whose biological age gap showed no improvement: 13 percent less damage for each standard deviation of improvement.

These results accounted for other factors that can influence stroke risk and brain damage, including high blood pressure, other blood vessel conditions and socioeconomic status.

Study limitations

One weakness of the research is that it identified correlations, not causation. In addition, only a smaller number of participants underwent repeat blood tests, which limits conclusions about change over time, especially for the cognitive tests.