Tuesday, 31 March 2026

Scientists create “smart” DNA drug that targets cancer cells with extreme precision

 How can doctors destroy cancer cells without harming healthy tissue? That question remains one of the biggest challenges in modern oncology. Researchers at the University of Geneva (UNIGE) have now developed a "smart" system built from synthetic DNA strands that can identify cancer cells with remarkable accuracy and release powerful drugs only where they are needed. In addition to cancer treatment, this approach points toward a future of programmable, responsive medicines. The findings appear in Nature Biotechnology.

Targeted therapies have already reshaped cancer care by directing drugs straight to tumors, helping reduce damage to healthy cells and easing harsh side effects linked to chemotherapy. One of the most successful strategies involves antibody-drug conjugates (ADCs), which use monoclonal antibodies to carry treatments directly to cancer cells.

However, ADCs still have drawbacks. Their relatively large size can limit how well they penetrate tumors, and they can only carry a limited amount of drug. These challenges have pushed scientists to explore new ways to deliver therapies more effectively.

DNA-Based Drug Delivery Offers New Advantages

To overcome these limitations, the UNIGE team designed a system based on short DNA strands. Because these molecules are much smaller than antibodies, they can move more easily through tumor tissue. They can also be engineered to carry multiple components, increasing their potential effectiveness.

A "Two-Key" System for Precision Drug Activation

The new method relies on several separate DNA strands, each carrying a specific function. Some strands include binders that recognize cancer markers, while another carries a toxic drug.

When two distinct cancer markers are present on a cell, the DNA components attach to them and assemble at that exact location. This triggers a chain reaction that builds up more DNA structures at the site, boosting the amount of drug delivered. The process works much like two-factor authentication on a banking website. Both markers must be detected before activation occurs. If one is missing, the reaction does not begin, and the drug remains inactive.
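The two-factor analogy can be made concrete with a few lines of code. The sketch below models only the AND-gate decision the article describes; the marker names and the simple set check are illustrative assumptions, not details of the UNIGE chemistry.

```python
# Minimal sketch of the "two-key" AND-gate logic described above.
# Marker names are placeholders, not the actual cancer markers used by UNIGE.

REQUIRED_MARKERS = frozenset({"marker_A", "marker_B"})  # hypothetical surface markers

def drug_released(detected_markers: set) -> bool:
    """Release the payload only if every required marker is present (logical AND)."""
    return REQUIRED_MARKERS <= detected_markers

print(drug_released({"marker_A", "marker_B"}))  # True  -> assembly and drug release
print(drug_released({"marker_A"}))              # False -> reaction never starts
```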

Lab Results Show High Selectivity and Power

In laboratory experiments, the system successfully identified cancer cells with specific combinations of surface proteins and delivered potent drugs directly to them. Nearby healthy cells were not affected.

The researchers also showed that multiple drugs can be delivered together using this approach. This could be important for preventing or overcoming resistance, a common problem in cancer treatment.

"This could mark an important step forward in the evolution of medicine, with the introduction of a self-operating drug system. Until now, computers and AI have helped us design new drugs. What's new here is that the drug itself can, in a simple way, 'compute' and respond intelligently to biological signals," explains Nicolas Winssinger, full professor in the Department of Organic Chemistry of the School of Chemistry and Biochemistry, Faculty of science, UNIGE, and last author of the study.

Source: ScienceDaily

Monday, 30 March 2026

Stroke triggers a hidden brain change that looks like rejuvenation

 A new study in The Lancet Digital Health suggests the brain can respond to stroke in a surprising way. Researchers at the USC Mark and Mary Stevens Neuroimaging and Informatics Institute (Stevens INI) found that people with severe physical impairments after a stroke may show signs of a "younger" brain structure in areas that were not damaged. This appears to reflect how the brain adapts and reorganizes itself after injury.

The research was conducted as part of the Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Stroke Recovery Working Group. Scientists analyzed brain scans from more than 500 stroke survivors collected across 34 research centers in eight countries. By applying deep learning models trained on tens of thousands of MRI scans, the team estimated the "brain age" of different regions in each hemisphere and examined how stroke affects both structure and recovery.

"We found that larger strokes accelerate aging in the damaged hemisphere but paradoxically make the opposite side of the brain appear younger," said Hosung Kim, PhD, associate professor of research neurology at the Keck School of Medicine of USC and co-senior author of the study. "This pattern suggests the brain may be reorganizing itself, essentially rejuvenating undamaged networks to compensate for lost function."

AI Reveals Brain Rewiring After Stroke

To carry out the analysis, researchers used a type of artificial intelligence called a graph convolutional network. This system estimated the biological age of 18 brain regions based on MRI data. They then compared this predicted age with each person's actual age, a measure known as the brain-predicted age difference (brain-PAD), which serves as an indicator of brain health.

When these brain age measurements were compared with motor function scores, a clear pattern emerged. Stroke survivors with severe movement impairments, even after more than 6 months of rehabilitation, showed younger-than-expected brain age in regions opposite the site of injury. This effect was especially strong in the frontoparietal network, which plays an important role in movement planning, attention, and coordination.

"These findings suggest that when stroke damage leads to greater movement loss, undamaged regions on the opposite side of the brain may adapt to help compensate," Kim explained. "We saw this in the contralesional frontoparietal network, which showed a more 'youthful' pattern and is known to support motor planning, attention, and coordination. Rather than indicating full recovery of movement, this pattern may reflect the brain's attempt to adjust when the damaged motor system can no longer function normally. This gives us a new way to see neuroplasticity that traditional imaging could not capture."
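The brain-PAD measure mentioned above is simply the gap between a model's predicted brain age and the person's chronological age. The snippet below is a minimal illustration of that subtraction with invented numbers; the study itself derived the predicted ages from a graph convolutional network, not from this toy function.

```python
# Illustrative only: brain-PAD = model-predicted brain age minus chronological age.
# A negative value means a region looks "younger" than the person's actual age.
# The ages below are invented for demonstration; the study's predictions came
# from a graph convolutional network applied to MRI data.

def brain_pad(predicted_age: float, chronological_age: float) -> float:
    return predicted_age - chronological_age

# e.g. a 68-year-old whose undamaged-hemisphere regions are predicted at 61 years:
print(brain_pad(predicted_age=61.0, chronological_age=68.0))  # -7.0 (younger-appearing)
```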

Large-Scale Data Reveals Hidden Patterns

The study relied on ENIGMA, a global collaboration that combines data from more than 50 countries to better understand the brain across different conditions. By standardizing MRI data and clinical information from many research groups, the team created the largest stroke neuroimaging dataset of its kind.

"By pooling data from hundreds of stroke survivors worldwide and applying cutting-edge AI, we can detect subtle patterns of brain reorganization that would be invisible in smaller studies. These findings of regionally differential brain aging in chronic stroke could eventually guide personalized rehabilitation strategies," said Arthur W. Toga, PhD, director of the Stevens INI and Provost Professor at USC.

Toward Personalized Stroke Recovery

The researchers plan to continue this work by following patients over time, from the early stages after a stroke through long-term recovery. Tracking how brain aging patterns and structural changes evolve could help doctors tailor treatments to each person's unique recovery process, with the goal of improving outcomes and quality of life.

The study, "Deep learning prediction of MRI-based regional brain age reveals contralesional neuroplasticity associated with severe motor impairment in chronic stroke: A worldwide ENIGMA study," was funded by the National Institutes of Health (NIH) grant R01 NS115845 and supported by international collaborators from institutions including the University of British Columbia, Monash University, Emory University, and the University of Oslo.

Source: ScienceDaily

Sunday, 29 March 2026

Scientists solved the mystery of missing ocean plastic—and the answer is alarming

 Scientists have uncovered something surprising in the Atlantic Ocean. The majority of plastic pollution may no longer be visible at all. Instead, it exists as nanoplastics, particles so small they are measured in billionths of a meter.

"This estimate shows that there is more plastic in the form of nanoparticles floating in this part of the ocean than there is in larger micro- or macroplastics floating in the Atlantic or even all the world's oceans!" said Helge Niemann, researcher at NIOZ and professor of geochemistry at Utrecht University. In mid-June, he received a 3.5 million euro grant to further investigate nanoplastics and what ultimately happens to them.

Ocean Expedition Reveals Tiny Plastic Particles

To gather data, Utrecht master's student Sophie ten Hietbrink spent four weeks aboard the research vessel RV Pelagia. The ship traveled from the Azores to the European continental shelf, where she collected water samples at 12 different locations.

Each sample was carefully filtered to remove anything larger than one micrometer. What remained contained the smallest particles. "By drying and heating the remaining material, we were able to measure the characteristic molecules of different types of plastics in the Utrecht laboratory, using mass spectrometry," Ten Hietbrink explains.

First Real Estimate of Ocean Nanoplastics

Previous studies had confirmed that nanoplastics existed in ocean water, but no one had been able to calculate how much was actually there. This research marks the first time scientists have produced a meaningful estimate.

Niemann notes that this breakthrough was made possible by combining ocean research with expertise from atmospheric science, including contributions from Utrecht University scientist Dušan Materić.

27 Million Tons of Invisible Plastic

When the team scaled their measurements across the North Atlantic, the results were striking. They estimate that about 27 million tons of nanoplastics are floating in this region alone.

"A shocking amount," Ten Hietbrink says. The finding may finally explain a long-standing mystery. Scientists have struggled to account for all the plastic ever produced. Much of it appeared to be missing. This study suggests that a large share has broken down into tiny particles that are now suspended throughout the ocean.

How Nanoplastics Enter the Ocean

These microscopic plastics come from multiple sources. Larger plastic debris can fragment over time due to sunlight. Rivers also carry plastic particles from land into the sea.

Another pathway comes from the atmosphere. Nanoplastics can travel through the air and fall into the ocean with rain or settle directly onto the water's surface through a process known as dry deposition.

Potential Risks to Ecosystems and Human Health

The widespread presence of nanoplastics raises serious concerns. Niemann points out that these particles are small enough to enter living organisms.

"It is already known that nanoplastics can penetrate deep into our bodies. They are even found in brain tissue," he says. Because they are now known to be present throughout the ocean, they likely move through entire food webs, from microorganisms to fish and ultimately to humans. The full impact on ecosystems and health is still unclear and requires further study.

What Scientists Still Don't Know

There are still important gaps in knowledge. Researchers did not detect certain common plastics, such as polyethylene or polypropylene, in the smallest particle range.

"It may well be that those were masked by other molecules in the study," Niemann says. The team also wants to determine whether similar levels of nanoplastics exist in other oceans. Early indications suggest this could be the case, but more research is needed.

Prevention May Be the Only Solution

While this discovery fills a critical gap in understanding ocean pollution, it also presents a difficult reality. These particles are too small and too widespread to remove.

"The nanoplastics that are there can never be cleaned up," Niemann emphasizes. The findings highlight the urgency of preventing further plastic pollution before it breaks down into an even more persistent and invisible problem.

Source: ScienceDaily

Saturday, 28 March 2026

This quantum computing breakthrough may not be what it seemed

 A team of researchers led by Sergey Frolov, a physics professor at the University of Pittsburgh, along with collaborators from Minnesota and Grenoble, carried out a series of replication studies focused on topological effects in nanoscale superconducting and semiconducting devices. This area of research is considered crucial because it could enable topological quantum computing, a proposed approach to storing and processing quantum information in a way that naturally resists errors.

Across multiple experiments, the researchers consistently identified other ways to interpret the same data. Earlier studies had presented these results as major steps forward in quantum computing and were published in leading scientific journals. However, the follow-up replication studies struggled to gain acceptance from those same journals. Editors often rejected them on the grounds that replication work lacks novelty or that the field had already moved on after a few years. In reality, replication studies require significant time, resources, and careful experimentation, and meaningful scientific questions do not become outdated so quickly.

Combining Evidence and Calling for Reform

To strengthen their case, the researchers brought together several replication efforts into a single, comprehensive paper focused on topological quantum computing. Their goal was twofold: to show that even striking experimental signals that appear to confirm major breakthroughs can sometimes be explained in other ways, especially when more complete datasets are analyzed, and to suggest improvements to how research is conducted and reviewed. These proposed changes include greater data sharing and more open discussion of alternative interpretations to improve the reliability of experimental findings.

A Lengthy Path to Publication

Gaining acceptance for these conclusions took time. The broader scientific community needed extensive discussion and debate before considering the possibility that earlier interpretations might be incomplete. The paper underwent a record two years of peer and editorial review after being submitted in September 2023. It was ultimately published in the journal Science on January 8, 2026.

Source: ScienceDaily

Friday, 27 March 2026

This hidden state of water could explain why life exists

 Researchers at Stockholm University have used advanced x-ray lasers to uncover a long-suspected feature of water: a critical point that appears when water is deeply supercooled. This occurs at about -63 °C and 1,000 atmospheres. Even under everyday conditions, this hidden point influences how water behaves, helping explain many of its unusual properties. The results were published in the journal Science.

Water is everywhere and essential for life, yet it does not act like most other liquids. Properties such as density, heat capacity, viscosity, and compressibility respond to temperature and pressure in ways that are opposite to what scientists see in typical substances.

In most materials, cooling causes them to contract and become denser. Based on this pattern, water should reach its highest density when it freezes. Instead, ice floats, and liquid water is actually most dense at 4 degrees C. That is why colder water remains below warmer water in lakes and oceans.

When water is cooled below 4 degrees, it begins expanding again. If pure water is cooled below 0 degrees (where crystallization happens slowly), this expansion continues and even accelerates as the temperature drops further. Other properties, including compressibility and heat capacity, also behave in increasingly unusual ways as the temperature decreases.

Capturing Water's Hidden State With X-Ray Lasers

To investigate these strange behaviors, scientists used extremely fast x-ray pulses generated by powerful lasers in South Korea. These pulses allowed them to observe water in a supercooled state just before it turned into ice.

"What was special was that we were able to X-ray unimaginably fast before the ice froze and could observe how the liquid-liquid transition vanishes and a new critical state emerges," says Anders Nilsson, Professor of Chemical Physics at the Department of Physics at Stockholm University. "For decades there has been speculations and different theories to explain these remarkable properties and one theory has been the existence of a critical point. Now we have found that such a point exists."

Two Liquid Forms of Water and a Critical Transition

Under low temperatures and high pressure, water can exist as two distinct liquid phases with different molecular bonding structures. As conditions change, these two forms merge into a single phase at the critical point.

Near this point, the system becomes highly unstable, and water rapidly shifts between the two liquid states or mixtures of them. These fluctuations extend across a wide range of temperatures and pressures, even reaching normal environmental conditions. Scientists believe these constant shifts are what give water its unusual characteristics.

Beyond the critical point, water enters a supercritical state, and under everyday conditions, it already exists in this regime.

A "Black Hole-Like" Effect in Water Dynamics

The researchers also found that molecular motion slows dramatically as water approaches the critical point.

"It looks almost that you cannot escape the critical point if you entered it, almost like a Black Hole," says Robin Tyburski, researcher in Chemical Physics at Stockholm University.

A Breakthrough Decades in the Making

"It's amazing how amorphous ices, such an extensively studied state of water, happened to become our entrance to the critical region. It's a great inspiration for my further studies and a reminder of the possibilities of making discoveries in much-studied topics such as water," says Aigerim Karina, Postdoc in Chemical Physics at Stockholm University.

"It was a dream come true to be able to measure water under such low temperature condition without freezing," says Iason Andronis, PhD student in Chemical Physics at Stockholm University. "Many have dreamt about finding this critical point but the means have not been available before the development of the x-ray lasers."

"I find it very exciting that water is the only supercritical liquid at ambient conditions where life exists and we also know there is no life without water. Is this a pure coincidence or is there some essential knowledge for us to gain in the future?" says Fivos Perakis, an associate professor in Chemical Physics at Stockholm University.

Source: ScienceDaily

Thursday, 26 March 2026

Lost in space: Microgravity makes sperm lose their sense of direction

 Starting a family beyond Earth could be more challenging than expected. New research from Adelaide University shows that sperm struggle to navigate in low gravity, suggesting that gravity plays a key role in helping them reach an egg.

Scientists from the Robinson Research Institute, the School of Biomedicine, and the Freemasons Centre for Male Health and Wellbeing studied how space-like conditions affect sperm navigation, fertilization, and early embryo development.

To simulate microgravity, researchers used a 3D clinostat machine developed by Dr. Giles Kirby at Firefly Biotech. This device continuously rotates cells to mimic the disorienting effects of zero gravity. Sperm from three different mammals, including humans, were tested by sending them through a maze designed to resemble the female reproductive tract.

"This is the first time we have been able to show that gravity is an important factor in sperm's ability to navigate through a channel like the reproductive tract," said senior author Dr. Nicole McPherson from Adelaide University's Robinson Research Institute.

"We observed a significant reduction in the number of sperm that were able to successfully find their way through the chamber maze in microgravity conditions compared to normal gravity.

"This was experienced right across all models, despite no changes to the way sperm physically move. This indicates that their loss of direction was not due to a change in motility but other elements."

Progesterone May Help Guide Sperm

The researchers also found that adding the sex hormone progesterone improved how well human sperm navigated under simulated microgravity conditions.

"We believe this is because progesterone is also released from the egg and can help guide sperm to the site of fertilization, but this warrants further exploration as a potential solution," said Dr. McPherson.

Fertilisation and Embryo Development Affected

The team examined how exposure to microgravity during fertilisation influences early embryo development in animal models.

After four hours in simulated zero gravity, the number of successfully fertilized mouse eggs dropped by 30 per cent compared to normal Earth conditions.

"We observed reduced fertilization rates during four-to-six hours of exposure to microgravity. Prolonged exposure appeared to be even more detrimental, resulting in development delays and, in some cases, reduced cells that go on to form the fetus in the earliest stages of embryo formation," said Dr. McPherson."These insights show how complex reproductive success in space is and the critical need for more research across all early stages of development."

Why Gravity Matters for Reproduction

Earlier research has explored how sperm move in space, but no study had tested their ability to navigate through a reproductive channel under controlled conditions like these.

The findings were published in Communications Biology.

This study was conducted in collaboration with Adelaide University's Andy Thomas Centre for Space Resources, which focuses on the challenges of long-term space exploration and living beyond Earth.

"As we progress toward becoming a spacefaring or multi-planetary species, understanding how microgravity affects the earliest stages of reproduction is critical," said Associate Professor John Culton, Director of the Andy Thomas Centre for Space Resources.

Future Research on Reproduction in Space

The next phase of the research will explore how different gravity environments, including those on the Moon, Mars, and in artificial gravity systems, affect sperm navigation and early embryo development.

A key question is whether these effects change gradually as gravity decreases or if there is a threshold where changes occur suddenly, creating an "all or nothing" response.

Answering this will be essential for planning human reproduction in future Moon and Mars settlements and for designing artificial gravity systems that support healthy development.

"In our most recent study, many healthy embryos were still able to form even when fertilized under these conditions. This gives us hope that reproducing in space may one day be possible," said Dr. McPherson.

Source: ScienceDaily

Wednesday, 25 March 2026

One of Earth’s most explosive supervolcanoes is recharging

 Scientists have discovered that the magma reservoir tied to the largest volcanic eruption of the Holocene is filling again. The finding, led by Kobe University researchers studying Japan's Kikai caldera, offers new insight into how massive caldera systems such as Yellowstone and Toba evolve over time and may help improve future eruption forecasting.

Some volcanic eruptions are so extreme that they release enough magma to bury all of Central Park under 12 kilometers of material. After such an event, the landscape collapses into a broad, relatively shallow crater known as a caldera. Famous examples include Yellowstone in the United States, Toba in Indonesia, and the largely submerged Kikai caldera in Japan. Kikai last erupted 7,300 years ago in the most powerful eruption of the current geological epoch, the Holocene. While scientists know these systems can erupt again, the buildup to such events remains poorly understood. "We must understand how such large quantities of magma can accumulate to understand how giant caldera eruptions occur," says Kobe University geophysicist SEAMA Nobukazu.

Underwater Seismic Imaging Reveals Magma System

Kikai's underwater setting provides a unique research advantage. Seama explains, "The underwater location allows us to implement systematic, large-scale surveys." Working with the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), the team used airgun arrays to generate controlled seismic pulses and ocean bottom seismometers to track how those waves move through the Earth's crust. This approach allowed them to build a detailed picture of the structures beneath the caldera.

The results, published in Communications Earth & Environment, confirm a large magma-rich zone directly beneath the site of the ancient eruption. The researchers were able to map the reservoir's size and shape and determine its connection to past activity. Seama says, "Due to its extent and location it is clear that this is in fact the same magma reservoir as in the previous eruption."

Fresh Magma Injection Drives Recharging Process

The magma currently present does not appear to be left over from the earlier eruption. Scientists had already observed a lava dome forming at the center of the caldera over the past 3,900 years. Chemical analysis shows that this newer material differs from what was released during the previous eruption. "This means that the magma that is now present in the magma reservoir under the lava dome is likely newly injected magma," summarizes Seama. These findings support a broader model explaining how magma reservoirs beneath caldera volcanoes refill over time.

Implications for Yellowstone and Future Eruptions

The proposed magma re-injection model aligns with observations of large, shallow magma systems beneath other major calderas such as Yellowstone and Toba. Seama suggests this work could help scientists better understand how magma supply cycles develop after massive eruptions. He concludes, saying: "We want to refine the methods that have proved to be so useful in this study to more deeply understand the re-injection processes. Our ultimate goal is to become better able to monitor the crucial indicators of future giant eruptions."

Source: ScienceDaily

Tuesday, 24 March 2026

Scientists shocked to find lab gloves may be skewing microplastics data

 A University of Michigan study suggests that the nitrile and latex gloves scientists commonly use could be causing microplastics levels to appear higher than they actually are.

Researchers found that these gloves can unintentionally transfer particles onto lab tools used to analyze air, water, and other environmental samples. The contamination comes from stearates, which are not plastics but can closely resemble them during testing. Because of this, scientists may be detecting particles that are not true microplastics. To reduce this issue, U-M researchers Madeline Clough and Anne McNeil recommend using cleanroom gloves, which release far fewer particles.

Stearates are salt-based, soap-like substances added to disposable gloves to help them separate easily from molds during manufacturing. However, their chemical similarity to certain plastics makes them difficult to distinguish in lab analyses, increasing the risk of false positives when studying microplastic pollution.

The researchers emphasize that this does not mean microplastics are not a real problem.

"We may be overestimating microplastics, but there should be none," said McNeil, senior author of the study and U-M professor of chemistry, macromolecular science and engineering, and the Program in the Environment. "There's still a lot out there, and that's the problem."

Clough added, "As microplastic researchers looking for microplastics in the environment, we're searching for the needle in the haystack, but there really shouldn't be a needle to begin with."

The research, led by Clough, a recent doctoral graduate, was published in RSC Analytical Methods and supported by the U-M College of Literature, Science, and the Arts' Meet the Moment Research Initiative.

Unexpected Source Behind Inflated Results

The discovery began during a collaborative project examining airborne microplastics in Michigan. The effort involved researchers from multiple U-M departments, including Chemistry, Statistics, and Climate and Space Sciences Engineering. Clough and McNeil worked with collaborators such as chemistry professor Andy Ault and graduate students Rebecca Parham and Abbygail Ayala to collect air samples.

To capture particles, the team used air samplers equipped with metal surfaces that collect material from the atmosphere. These samples were then analyzed using light-based spectroscopy to identify the types of particles present.

While preparing the sampling surfaces, Clough followed standard practice and wore nitrile gloves. However, when she reviewed the results, the number of detected microplastics was thousands of times higher than expected.

Source: ScienceDaily

Monday, 23 March 2026

A hidden breathing problem may be behind chronic fatigue’s crushing exhaustion

 Chronic fatigue syndrome leaves many people completely drained of energy and struggling to think clearly, and their symptoms often worsen after mental or physical exertion -- a reaction known as post-exertional malaise. Researchers studying shortness of breath in people with chronic fatigue have now found that these patients are much more likely to experience dysfunctional breathing. This irregular breathing pattern may be linked to dysautonomia, a disorder involving abnormal nerve control of blood vessels and muscles. By focusing treatment on these breathing irregularities, scientists believe it may be possible to ease some of the debilitating symptoms.

"Nearly half of our chronic fatigue subjects had some disorder of breathing -- a totally unappreciated issue, probably involved in making symptoms worse," said Dr. Benjamin Natelson of the Icahn School of Medicine, senior author of the study published in Frontiers in Medicine. "Identifying these abnormalities will lead researchers to new strategies to treat them, with the ultimate goal of reducing symptoms."

Breathe easy

The study included 57 people diagnosed with chronic fatigue syndrome and 25 healthy individuals of similar age and activity level. All participants completed two days of cardiopulmonary exercise tests. During these sessions, the researchers monitored heart rate, blood pressure, oxygen uptake efficiency, blood oxygen saturation, and how much effort participants used to breathe. They also analyzed breathing rate and patterns to detect signs of hyperventilation and dysfunctional breathing.

Dysfunctional breathing is often seen in asthma patients, but it can develop for many different reasons. Typical features include frequent deep sighs, rapid breathing, forceful exhalation from the abdomen, or chest breathing without proper diaphragm use, which prevents the lungs from fully expanding. It can also involve a lack of coordination between chest and abdominal movements, meaning the muscles that support breathing are no longer working smoothly together.

"While we know the symptoms generated by hyperventilation, we remain unsure what symptoms may be worse with dysfunctional breathing," said Dr. Donna Mancini of the Icahn School of Medicine, first author of the study. "But we are sure patients can have dysfunctional breathing without being aware of it. Dysfunctional breathing can occur in a resting state."

Catching your breath

Results showed that people with chronic fatigue syndrome took in roughly the same amount of oxygen as the control group -- their peak VO2 was similar. However, 71% of the chronic fatigue group showed breathing abnormalities, such as hyperventilation, dysfunctional breathing, or both.

Almost half of the chronic fatigue participants breathed irregularly during the tests, compared to only four people in the control group. About one-third of the fatigue patients hyperventilated, while just one person in the control group did. Nine patients had both hyperventilation and dysfunctional breathing, a combination not seen in any of the controls.

Both of these breathing disorders can produce symptoms similar to those of chronic fatigue, including dizziness, difficulty concentrating, shortness of breath, and exhaustion. When both occur together, they can also cause chest pain, palpitations, fatigue, and (unsurprisingly) anxiety. The researchers believe that these breathing problems may worsen the effects of chronic fatigue or even play a direct role in post-exertional malaise.

"Possibly dysautonomia could trigger more rapid and irregular breathing," said Mancini. "It is well known that chronic fatigue syndrome patients often have dysautonomia in the form of orthostatic intolerance, which means you feel worse when upright and not moving. This raises the heart rate and leads to hyperventilation."

Pulmonary physiotherapy?

These findings suggest that addressing dysfunctional breathing could help relieve some symptoms of chronic fatigue. The researchers plan to continue investigating how dysfunctional breathing and hyperventilation interact. Although more studies are needed before any official treatments are recommended, they already have several promising ideas.

"Breathing exercises via yoga could potentially help, or gentle physical conditioning where breath control is important, as with swimming," suggested Natelson. "Or biofeedback, with assessment of breathing while encouraging gentle continuous breath use. If a patient is hyperventilating, this can be seen by a device that measures exhaled CO2. If this value is low, then the patient can try to reduce the depth of breathing to raise it to more normal values."

Source: ScienceDaily



Sunday, 22 March 2026

Scientists discover surprising brain trigger behind high blood pressure

 Researchers have identified a specific part of the brain that may play a key role in high blood pressure.

This area, called the lateral parafacial region, is located in the brainstem, the oldest part of the brain responsible for automatic functions like breathing, digestion, and heart rate.

"The lateral parafacial region is recruited into action causing us to exhale during a laugh, exercise or coughing," says lead researcher Professor Julian Paton, director of Manaaki Manawa, Centre for Heart Research at Waipapa Taumata Rau, University of Auckland.

"These exhalations are what we call 'forced' and driven by our powerful abdominal muscles.

"In contrast, a normal exhalation does not need these muscles to contract, it happens because the lungs are elastic."

How Breathing and Blood Pressure Are Connected

The team found that this brain region is also linked to nerves that constrict blood vessels, which increases blood pressure.

"We've unearthed a new region of the brain that is causing high blood pressure. Yes, the brain is to blame for hypertension!" says Paton.

"We discovered that, in conditions of high blood pressure, the lateral parafacial region is activated and, when our team inactivated this region, blood pressure fell to normal levels."

These findings suggest that certain breathing patterns, particularly those involving strong abdominal muscle use, can contribute to elevated blood pressure. Identifying abdominal breathing in people with hypertension may help pinpoint the cause and guide more targeted treatment.

The study was recently published in the journal Circulation Research.

A Potential New Treatment Target


The researchers then explored whether this part of the brain could be treated with medication.

"Targeting the brain with drugs is tricky because they act on the entire brain and not a selected region such as the parafacial nucleus," says Paton.

A key breakthrough came when the team discovered that this region is activated by signals originating outside the brain. These signals come from the carotid bodies, small clusters of cells in the neck near the carotid artery that monitor oxygen levels in the blood.

Because the carotid bodies can be safely targeted with medication, they offer a promising alternative approach.

"Our goal is to target the carotid bodies, and we are importing a new drug that is being repurposed by us to quench carotid body activity and inactivate 'remotely' the lateral parafacial region safely, i.e., without needing to use a drug that penetrates the brain."

This discovery could lead to new ways to treat high blood pressure, especially in people with sleep apnoea, where carotid body activity increases when breathing stops during sleep.

Source: ScienceDaily

Saturday, 21 March 2026

How to use AI for discovery -- without leading science astray

 Over the past decade, AI has permeated nearly every corner of science: Machine learning models have been used to predict protein structures, estimate the fraction of the Amazon rainforest that has been lost to deforestation and even classify faraway galaxies that might be home to exoplanets.

But while AI can be used to speed scientific discovery -- helping researchers make predictions about phenomena that may be difficult or costly to study in the real world -- it can also lead scientists astray. In the same way that chatbots sometimes "hallucinate," or make things up, machine learning models can sometimes present misleading or downright false results.

In a paper published in Science, researchers at the University of California, Berkeley, present a new statistical technique for safely using the predictions obtained from machine learning models to test scientific hypotheses.

The technique, called prediction-powered inference (PPI), uses a small amount of real-world data to correct the output of large, general models -- such as AlphaFold, which predicts protein structures -- in the context of specific scientific questions.

"These models are meant to be general: They can answer many questions, but we don't know which questions they answer well and which questions they answer badly -- and if you use them naively, without knowing which case you're in, you can get bad answers," said study author Michael Jordan, the Pehong Chen Distinguished Professor of electrical engineering and computer science and of statistics at UC Berkeley. "With PPI, you're able to use the model, but correct for possible errors, even when you don't know the nature of those errors at the outset."

The risk of hidden biases

When scientists conduct experiments, they're not just looking for a single answer -- they want to obtain a range of plausible answers. This is done by calculating a "confidence interval," which, in the simplest case, can be found by repeating an experiment many times and seeing how the results vary.

In most science studies, a confidence interval usually refers to a summary or combined statistic, not individual data points. Unfortunately, machine learning systems focus on individual data points, and thus do not provide scientists with the kinds of uncertainty assessments that they care about. For instance, AlphaFold predicts the structure of a single protein, but it doesn't provide a notion of confidence for that structure, nor a way to obtain confidence intervals that refer to general properties of proteins.

Scientists may be tempted to use the predictions from AlphaFold as if they were data to compute classical confidence intervals, ignoring the fact that these predictions are not data. The problem with this approach is that machine learning systems have many hidden biases that can skew the results. These biases arise, in part, from the data on which they are trained, which are generally existing scientific research that may not have had the same focus as the current study.
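To make the idea of PPI more concrete, here is a minimal sketch for the simplest case, estimating a mean. It follows the general recipe described above -- average the model's predictions over a large unlabeled set, then correct that average with the prediction error measured on a small gold-standard sample -- but the data are synthetic and the variable names are illustrative; the Science paper defines the full method.

```python
import numpy as np

# Prediction-powered inference (PPI) for a mean, in its simplest form:
# use model predictions on many unlabeled points, then correct them with the
# average prediction error (the "rectifier") measured on a small labeled sample.
# Synthetic data; variable names are illustrative, not taken from the paper.

rng = np.random.default_rng(0)

n_labeled, n_unlabeled = 200, 20_000
truth = rng.normal(loc=5.0, scale=2.0, size=n_labeled)          # gold-standard measurements
bias = 0.7                                                       # unknown systematic model error
pred_labeled = truth + bias + rng.normal(0, 0.5, n_labeled)      # model predictions on labeled data
pred_unlabeled = rng.normal(5.0 + bias, 2.0, n_unlabeled)        # model predictions on unlabeled data

rectifier = truth - pred_labeled                                 # measured prediction error
theta_pp = pred_unlabeled.mean() + rectifier.mean()              # bias-corrected estimate

# 95% confidence interval: combine the uncertainty of both averages.
se = np.sqrt(pred_unlabeled.var(ddof=1) / n_unlabeled + rectifier.var(ddof=1) / n_labeled)
print(f"naive (predictions only): {pred_unlabeled.mean():.2f}")
print(f"PPI estimate: {theta_pp:.2f} +/- {1.96 * se:.2f}")
```

In this toy run the naive average inherits the model's systematic bias, while the PPI estimate lands near the true mean with an honest interval, which is the behavior the Berkeley authors describe.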

Source: ScienceDaily

Friday, 20 March 2026

Google and ChatGPT have mixed results in medical information queries

 When you need accurate information about a serious illness, should you go to Google or ChatGPT?

 An interdisciplinary study led by University of California, Riverside, computer scientists found that both internet information gathering services have strengths and weaknesses for people seeking information about Alzheimer's disease and other forms of dementia. The team included clinical scientists from the University of Alabama and Florida International University.

Google provides the most current information, but query results are skewed by service and product providers seeking customers, the researchers found. ChatGPT, meanwhile, provides more objective information, but it can be outdated and lacks the sources of its information in its narrative responses.

"If you pick the best features of both, you can build a better system, and I think that this is what will happen in the next couple of years," said Vagelis Hristidis, a professor of computer science and engineering in UCR's Bourns College of Engineering.

In their study, Hristidis and his co-authors submitted 60 queries to both Google and ChatGPT that would be typical submissions from people living with dementia and their families.

The researchers focused on dementia because more than 6 million Americans are impacted by Alzheimer's disease or a related condition, said study co-author Nicole Ruggiano, a professor of social work at the University of Alabama.

"Research also shows that caregivers of people living with dementia are among the most engaged stakeholders in pursuing health information, since they often are tasked with making decisions for their loved one's care," Ruggiano said.

Half of the queries submitted by the researchers sought information about the disease processes, while the other half sought information on services that could assist patients and their families.

The results were mixed.

"Google has more up-to-date information, and covers everything," Hristidis said. "Whereas ChatGPT is trained every few months. So, it is behind. Let's say there's some new medicine that just came out last week, you will not find it on ChatGPT."

While dated, ChatGPT provided more reliable and accurate information than Google. This is because the ChatGPT creators at OpenAI choose the most reliable websites when they train ChatGPT through computationally intensive machine learning. Yet, users are left in the dark about specific sources of information because the resulting narratives are void of references.

Google, however, has a reliability problem because it essentially "covers everything from the reliable sources to advertisements," Hristidis said.

In fact, advertisers pay Google for their website links to appear at the top of search result pages. So, users often first see links to websites of for-profit companies trying to sell them care-related services and products. Finding reliable information from Google searches thus requires a level of user skill and experience, Hristidis said.

Co-author Ellen Brown, an associate professor of nursing at Florida International University, pointed out that families need timely information about Alzheimer's.

"Although there is no cure for the disease, many clinical trials are underway and recently a promising treatment for early stage Alzheimer's disease was approved by the FDA," Brown said. "Therefore, up-to-date information is important for families looking to learn about recent discoveries and available treatments."

The authors of the study write that "the addition of both the source and the date of health-related information and availability in other languages may increase the value of these platforms for both non-medical and medical professionals." It was published in the Journal of Medical Internet Research under the title "ChatGPT vs Google for Queries Related to Dementia and Other Cognitive Decline: Comparison of Results."

Google and ChatGPT both scored low on readability, which makes their results difficult to use for people with lower levels of education and limited health literacy skills.

"My prediction is that the readability is the easier thing to improve because there are already some tools, some AI methods, that can read and paraphrase text," Hristidis said. "In terms of improving reliability, accuracy, and so on, that's much harder. Don't forget that it took scientists many decades of AI research to build ChatGPT. It is going to be slow improvements from where we are now."

Source: ScienceDaily

Thursday, 19 March 2026

AIs are irrational, but not in the same way that humans are

 Large Language Models behind popular generative AI platforms like ChatGPT gave different answers when asked to respond to the same reasoning test and didn't improve when given additional context, finds a new study from researchers at UCL.

The study, published in Royal Society Open Science, tested the most advanced Large Language Models (LLMs) using cognitive psychology tests to gauge their capacity for reasoning. The results highlight the importance of understanding how these AIs 'think' before entrusting them with tasks, particularly those involving decision-making.

In recent years, the LLMs that power generative AI apps like ChatGPT have become increasingly sophisticated. Their ability to produce realistic text, images, audio and video has prompted concern about their capacity to steal jobs, influence elections and commit crime.

Yet these AIs have also been shown to routinely fabricate information, respond inconsistently and even get simple maths sums wrong.

In this study, researchers from UCL systematically analysed whether seven LLMs were capable of rational reasoning. A common definition of a rational agent (human or artificial), which the authors adopted, is one that reasons according to the rules of logic and probability. An irrational agent is one that does not reason according to these rules.

The LLMs were given a battery of 12 common tests from cognitive psychology to evaluate reasoning, including the Wason task, the Linda problem and the Monty Hall problem. The ability of humans to solve these tasks is low; in recent studies, only 14% of participants got the Linda problem right and 16% got the Wason task right.
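For readers unfamiliar with one of these benchmarks, the short simulation below checks the well-known answer to the Monty Hall problem: switching doors wins about two-thirds of the time. It is a standard textbook simulation, not code from the UCL study.

```python
import random

# Standard Monty Hall simulation (illustrative, not from the study): the player
# picks a door, the host opens a different door hiding a goat, and we compare
# the "stay" and "switch" strategies over many trials.
def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
print("stay:  ", sum(play(False) for _ in range(trials)) / trials)  # ~1/3
print("switch:", sum(play(True) for _ in range(trials)) / trials)   # ~2/3
```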

The models exhibited irrationality in many of their answers, such as providing varying responses when asked the same question 10 times. They were prone to making simple mistakes, including basic addition errors and mistaking consonants for vowels, which led them to provide incorrect answers.

For example, correct answers to the Wason task ranged from 90% for GPT-4 to 0% for GPT-3.5 and Google Bard. Llama 2 70b, which answered correctly 10% of the time, mistook the letter K for a vowel and so answered incorrectly.

While most humans would also fail to answer the Wason task correctly, it is unlikely that this would be because they didn't know what a vowel was.

Olivia Macmillan-Scott, first author of the study from UCL Computer Science, said: "Based on the results of our study and other research on Large Language Models, it's safe to say that these models do not 'think' like humans yet.

"That said, the model with the largest dataset, GPT-4, performed a lot better than other models, suggesting that they are improving rapidly. However, it is difficult to say how this particular model reasons because it is a closed system. I suspect there are other tools in use that you wouldn't have found in its predecessor GPT-3.5."

Source: ScienceDaily

Wednesday, 18 March 2026

Thinking AI models emit 50x more CO2—and often for nothing

 No matter which questions we ask an AI, the model will come up with an answer. To produce this information -- regardless of whether the answer is correct or not -- the model uses tokens. Tokens are words or parts of words that are converted into a string of numbers that can be processed by the LLM.

This conversion, as well as other computing processes, produces CO2 emissions. Many users, however, are unaware of the substantial carbon footprint associated with these technologies. Now, researchers in Germany measured and compared CO2 emissions of different, already trained, LLMs using a set of standardized questions.

"The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions," said Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences and first author of the Frontiers in Communication study. "We found that reasoning-enabled models produced up to 50 times more CO2 emissions than concise response models."

'Thinking' AI causes most emissions

The researchers evaluated 14 LLMs ranging from seven to 72 billion parameters on 1,000 benchmark questions across diverse subjects. Parameters determine how LLMs learn and process information.

Reasoning models, on average, created 543.5 'thinking' tokens per question, whereas concise models required just 37.7 tokens per question. Thinking tokens are additional tokens that reasoning LLMs generate before producing an answer. A higher token footprint always means higher CO2 emissions. It doesn't, however, necessarily mean the resulting answers are more correct, as elaborate detail is not always essential for correctness.
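A quick back-of-the-envelope calculation, using only the two averages quoted above, shows the scale of the gap in token counts. It is illustrative only; actual emissions also depend on model size, hardware, and the local energy mix.

```python
# Ratio of average "thinking" tokens to concise tokens, from the figures above.
# Illustrative arithmetic only; real emissions also depend on model size,
# hardware, and the local energy grid, which is why the article's worst case
# (up to 50 times more CO2) is larger than this token ratio alone.
reasoning_tokens_per_question = 543.5
concise_tokens_per_question = 37.7

ratio = reasoning_tokens_per_question / concise_tokens_per_question
print(f"Reasoning models generate about {ratio:.1f}x more tokens per question")  # ~14.4x
```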

The most accurate model was the reasoning-enabled Cogito model with 70 billion parameters, reaching 84.9% accuracy. The model produced three times more CO2 emissions than similarly sized models that generated concise answers. "Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies," said Dauner. "None of the models that kept emissions below 500 grams of CO2 equivalent achieved higher than 80% accuracy on answering the 1,000 questions correctly." CO2 equivalent is the unit used to measure the climate impact of various greenhouse gases.

Subject matter also resulted in significantly different levels of CO2 emissions. Questions that required lengthy reasoning processes, for example abstract algebra or philosophy, led to up to six times higher emissions than more straightforward subjects, like high school history.

Practicing thoughtful use

The researchers said they hope their work will cause people to make more informed decisions about their own AI use. "Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power," Dauner pointed out.

Choice of model, for instance, can make a significant difference in CO2 emissions. For example, having DeepSeek R1 (70 billion parameters) answer 600,000 questions would create CO2 emissions equal to a round-trip flight from London to New York. Meanwhile, Qwen 2.5 (72 billion parameters) can answer more than three times as many questions (about 1.9 million) with similar accuracy rates while generating the same emissions.

The researchers said that their results may be impacted by the choice of hardware used in the study, an emission factor that may vary regionally depending on local energy grid mixes, and the examined models. These factors may limit the generalizability of the results.

"If users know the exact CO2 cost of their AI-generated outputs, such as casually turning themselves into an action figure, they might be more selective and thoughtful about when and how they use these technologies," Dauner concluded.

Source: ScienceDaily

Tuesday, 17 March 2026

Study finds ChatGPT gets science wrong more often than you think

 Washington State University professor Mesut Cicek and his research team repeatedly tested ChatGPT by giving it hypotheses taken from scientific papers. The goal was to see if the AI could correctly determine whether each claim was supported by research or not -- in other words, whether it was true or false.

In total, the team evaluated more than 700 hypotheses and asked the same question 10 times for each one to measure consistency.

Accuracy Results and Limits of AI Performance

When the experiment was first conducted in 2024, ChatGPT answered correctly 76.5% of the time. In a follow-up test in 2025, accuracy rose slightly to 80%. However, once the researchers adjusted for random guessing, the results looked far less impressive. The AI performed only about 60% better than chance, a level closer to a low D than to strong reliability.
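One plausible reading of the "adjusted for random guessing" figure is the standard correction for a two-option true/false task, where chance is 50%. The sketch below shows that, under that assumption, an 80% raw score corresponds to 60% above chance; this is a reconstruction of the arithmetic, not the authors' stated formula.

```python
# Assumed correction for guessing on a binary (true/false) task, chance = 0.5:
#   corrected = (observed - chance) / (1 - chance)
# This reproduces the ~60%-above-chance figure from the 80% raw accuracy,
# but it is a reconstruction, not necessarily the exact adjustment the authors used.

def chance_corrected(observed: float, chance: float = 0.5) -> float:
    return (observed - chance) / (1 - chance)

print(f"{chance_corrected(0.80):.0%}")   # 60% (2025 run)
print(f"{chance_corrected(0.765):.0%}")  # 53% (2024 run, illustrative)
```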

The system had the most difficulty identifying false statements, correctly labeling them only 16.4% of the time. It also showed notable inconsistency. Even when given the exact same prompt 10 times, ChatGPT produced consistent answers only about 73% of the time.

Inconsistent Answers Raise Concerns

"We're not just talking about accuracy, we're talking about inconsistency, because if you ask the same question again and again, you come up with different answers," said Cicek, an associate professor in the Department of Marketing and International Business in WSU's Carson College of Business and lead author of the new publication.

"We used 10 prompts with the same exact question. Everything was identical. It would answer true. Next, it says it's false. It's true, it's false, false, true. There were several cases where there were five true, five false."

AI Fluency vs. Real Understanding

The findings, published in the Rutgers Business Review, highlight the importance of using caution when relying on AI for important decisions, especially those that require nuanced or complex reasoning. While generative AI can produce smooth, convincing language, it does not yet demonstrate the same level of conceptual understanding.

According to Cicek, these results suggest that artificial general intelligence capable of truly "thinking" may still be further away than many expect.

"Current AI tools don't understand the world the way we do -- they don't have a 'brain,'" Cicek said. "They just memorize, and they can give you some insight, but they don't understand what they're talking about."

Source: ScienceDaily