Monday, 23 March 2026

A hidden breathing problem may be behind chronic fatigue’s crushing exhaustion

 Chronic fatigue syndrome leaves many people completely drained of energy and struggling to think clearly, and their symptoms often worsen after mental or physical exertion -- a reaction known as post-exertional malaise. Researchers studying shortness of breath in people with chronic fatigue have now found that these patients are much more likely to experience dysfunctional breathing. This irregular breathing pattern may be linked to dysautonomia, a disorder involving abnormal nerve control of blood vessels and muscles. By focusing treatment on these breathing irregularities, scientists believe it may be possible to ease some of the debilitating symptoms.

"Nearly half of our chronic fatigue subjects had some disorder of breathing -- a totally unappreciated issue, probably involved in making symptoms worse," said Dr. Benjamin Natelson of the Icahn School of Medicine, senior author of the study published in Frontiers in Medicine. "Identifying these abnormalities will lead researchers to new strategies to treat them, with the ultimate goal of reducing symptoms."

Breathe easy

The study included 57 people diagnosed with chronic fatigue syndrome and 25 healthy individuals of similar age and activity level. All participants completed two days of cardiopulmonary exercise tests. During these sessions, the researchers monitored heart rate, blood pressure, oxygen uptake efficiency, blood oxygen saturation, and how much effort participants used to breathe. They also analyzed breathing rate and patterns to detect signs of hyperventilation and dysfunctional breathing.

Dysfunctional breathing is often seen in asthma patients, but it can develop for many different reasons. Typical features include frequent deep sighs, rapid breathing, forceful exhalation from the abdomen, or chest breathing without proper diaphragm use, which prevents the lungs from fully expanding. It can also involve a lack of coordination between chest and abdominal movements, meaning the muscles that support breathing are no longer working smoothly together.

"While we know the symptoms generated by hyperventilation, we remain unsure what symptoms may be worse with dysfunctional breathing," said Dr. Donna Mancini of the Icahn School of Medicine, first author of the study. "But we are sure patients can have dysfunctional breathing without being aware of it. Dysfunctional breathing can occur in a resting state."

Catching your breath

Results showed that people with chronic fatigue syndrome took in roughly the same amount of oxygen as the control group -- their peak oxygen uptake (VO2) was similar. However, 71% of the chronic fatigue group showed breathing abnormalities, such as hyperventilation, dysfunctional breathing, or both.

Almost half of the chronic fatigue participants breathed irregularly during the tests, compared to only four people in the control group. About one-third of the fatigue patients hyperventilated, while just one person in the control group did. Nine patients had both hyperventilation and dysfunctional breathing, a combination not seen in any of the controls.

Both of these breathing disorders can produce symptoms similar to those of chronic fatigue, including dizziness, difficulty concentrating, shortness of breath, and exhaustion. When both occur together, they can also cause chest pain, palpitations, fatigue, and (unsurprisingly) anxiety. The researchers believe that these breathing problems may worsen the effects of chronic fatigue or even play a direct role in post-exertional malaise.

"Possibly dysautonomia could trigger more rapid and irregular breathing," said Mancini. "It is well known that chronic fatigue syndrome patients often have dysautonomia in the form of orthostatic intolerance, which means you feel worse when upright and not moving. This raises the heart rate and leads to hyperventilation."

Pulmonary physiotherapy?

These findings suggest that addressing dysfunctional breathing could help relieve some symptoms of chronic fatigue. The researchers plan to continue investigating how dysfunctional breathing and hyperventilation interact. Although more studies are needed before any official treatments are recommended, they already have several promising ideas.

"Breathing exercises via yoga could potentially help, or gentle physical conditioning where breath control is important, as with swimming," suggested Natelson. "Or biofeedback, with assessment of breathing while encouraging gentle continuous breath use. If a patient is hyperventilating, this can be seen by a device that measures exhaled CO2. If this value is low, then the patient can try to reduce the depth of breathing to raise it to more normal values."

Source: ScienceDaily



Sunday, 22 March 2026

Scientists discover surprising brain trigger behind high blood pressure

 Researchers have identified a specific part of the brain that may play a key role in high blood pressure.

This area, called the lateral parafacial region, is located in the brainstem, the oldest part of the brain responsible for automatic functions like breathing, digestion, and heart rate.

"The lateral parafacial region is recruited into action causing us to exhale during a laugh, exercise or coughing," says lead researcher Professor Julian Paton, director of Manaaki Manawa, Centre for Heart Research at Waipapa Taumata Rau, University of Auckland.

"These exhalations are what we call 'forced' and driven by our powerful abdominal muscles.

"In contrast, a normal exhalation does not need these muscles to contract, it happens because the lungs are elastic."

How Breathing and Blood Pressure Are Connected

The team found that this brain region is also linked to nerves that constrict blood vessels, which increases blood pressure.

"We've unearthed a new region of the brain that is causing high blood pressure. Yes, the brain is to blame for hypertension!" says Paton.

"We discovered that, in conditions of high blood pressure, the lateral parafacial region is activated and, when our team inactivated this region, blood pressure fell to normal levels."

These findings suggest that certain breathing patterns, particularly those involving strong abdominal muscle use, can contribute to elevated blood pressure. Identifying abdominal breathing in people with hypertension may help pinpoint the cause and guide more targeted treatment.

The study was recently published in the journal Circulation Research.

A Potential New Treatment Target

'Can we target this brainstem region?'

The researchers then explored whether this part of the brain could be treated with medication.

"Targeting the brain with drugs is tricky because they act on the entire brain and not a selected region such as the parafacial nucleus," says Paton.

A key breakthrough came when the team discovered that this region is activated by signals originating outside the brain. These signals come from the carotid bodies, small clusters of cells in the neck near the carotid artery that monitor oxygen levels in the blood.

Because the carotid bodies can be safely targeted with medication, they offer a promising alternative approach.

"Our goal is to target the carotid bodies, and we are importing a new drug that is being repurposed by us to quench carotid body activity and inactivate 'remotely' the lateral parafacial region safely, i.e., without needing to use a drug that penetrates the brain."

This discovery could lead to new ways to treat high blood pressure, especially in people with sleep apnoea, where carotid body activity increases when breathing stops during sleep.

Source: ScienceDaily

Saturday, 21 March 2026

How to use AI for discovery -- without leading science astray

 Over the past decade, AI has permeated nearly every corner of science: Machine learning models have been used to predict protein structures, estimate the fraction of the Amazon rainforest that has been lost to deforestation and even classify faraway galaxies that might be home to exoplanets.

But while AI can be used to speed scientific discovery -- helping researchers make predictions about phenomena that may be difficult or costly to study in the real world -- it can also lead scientists astray. In the same way that chatbots sometimes "hallucinate," or make things up, machine learning models can sometimes present misleading or downright false results.

In a paper published online today (Thursday, Nov. 9) in Science, researchers at the University of California, Berkeley, present a new statistical technique for safely using the predictions obtained from machine learning models to test scientific hypotheses.

The technique, called prediction-powered inference (PPI), uses a small amount of real-world data to correct the output of large, general models -- such as AlphaFold, which predicts protein structures -- in the context of specific scientific questions.

"These models are meant to be general: They can answer many questions, but we don't know which questions they answer well and which questions they answer badly -- and if you use them naively, without knowing which case you're in, you can get bad answers," said study author Michael Jordan, the Pehong Chen Distinguished Professor of electrical engineering and computer science and of statistics at UC Berkeley. "With PPI, you're able to use the model, but correct for possible errors, even when you don't know the nature of those errors at the outset."

The risk of hidden biases

When scientists conduct experiments, they're not just looking for a single answer -- they want to obtain a range of plausible answers. This is done by calculating a "confidence interval," which, in the simplest case, can be found by repeating an experiment many times and seeing how the results vary.

In most science studies, a confidence interval usually refers to a summary or combined statistic, not individual data points. Unfortunately, machine learning systems focus on individual data points, and thus do not provide scientists with the kinds of uncertainty assessments that they care about. For instance, AlphaFold predicts the structure of a single protein, but it doesn't provide a notion of confidence for that structure, nor a way to obtain confidence intervals that refer to general properties of proteins.

Scientists may be tempted to use the predictions from AlphaFold as if they were data to compute classical confidence intervals, ignoring the fact that these predictions are not data. The problem with this approach is that machine learning systems have many hidden biases that can skew the results. These biases arise, in part, from the data on which they are trained, which are generally existing scientific research that may not have had the same focus as the current study.
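To make the idea concrete, here is a minimal sketch of prediction-powered inference for estimating a population mean, written in Python with purely synthetic numbers rather than anything from the paper: preds_unlabeled stands in for model predictions (say, an AlphaFold-derived property) on a large unlabeled set, while the small labeled sample (preds_labeled, labels) is used to estimate and subtract the model's bias. The estimator in the actual paper is more general; this shows only the simplest case.

    import numpy as np

    rng = np.random.default_rng(0)

    true_mean = 2.0
    bias = 0.4  # the synthetic "model" systematically over-predicts by this amount

    # Large set of model predictions with no ground truth available.
    preds_unlabeled = true_mean + bias + rng.normal(0, 1, size=10_000)

    # Small gold-standard sample: predictions paired with real measurements.
    labels = true_mean + rng.normal(0, 1, size=200)
    preds_labeled = labels + bias + rng.normal(0, 0.3, size=200)

    # PPI point estimate: mean prediction, corrected by the measured bias.
    rectifier = preds_labeled - labels
    theta_pp = preds_unlabeled.mean() - rectifier.mean()

    # 95% confidence interval combining both sources of uncertainty.
    se = np.sqrt(preds_unlabeled.var(ddof=1) / preds_unlabeled.size
                 + rectifier.var(ddof=1) / rectifier.size)
    ci = (theta_pp - 1.96 * se, theta_pp + 1.96 * se)

    print(f"naive mean of predictions: {preds_unlabeled.mean():.3f}")  # biased high
    print(f"PPI estimate: {theta_pp:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")

The resulting interval is slightly wider than one computed naively from the predictions alone, but it is centered on the true value rather than on the model's biased one.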

Source: ScienceDaily

Friday, 20 March 2026

Google and ChatGPT have mixed results in medical information queries

 When you need accurate information about a serious illness, should you go to Google or ChatGPT?

An interdisciplinary study led by University of California, Riverside, computer scientists found that both internet information gathering services have strengths and weaknesses for people seeking information about Alzheimer's disease and other forms of dementia. The team included clinical scientists from the University of Alabama and Florida International University.

Google provides the most current information, but query results are skewed by service and product providers seeking customers, the researchers found. ChatGPT, meanwhile, provides more objective information, but it can be outdated and does not cite the sources of its information in its narrative responses.

"If you pick the best features of both, you can build a better system, and I think that this is what will happen in the next couple of years," said Vagelis Hristidis, a professor of computer science and engineering in UCR's Bourns College of Engineering.

In their study, Hristidis and his co-authors submitted 60 queries to both Google and ChatGPT that would be typical submissions from people living with dementia and their families.

The researchers focused on dementia because more than 6 million Americans are impacted by Alzheimer's disease or a related condition, said study co-author Nicole Ruggiano, a professor of social work at the University of Alabama.

"Research also shows that caregivers of people living with dementia are among the most engaged stakeholders in pursuing health information, since they often are tasked with making decisions for their loved one's care," Ruggiano said.

Half of the queries submitted by the researchers sought information about the disease processes, while the other half sought information on services that could assist patients and their families.

The results were mixed.

"Google has more up-to-date information, and covers everything," Hristidis said. "Whereas ChatGPT is trained every few months. So, it is behind. Let's say there's some new medicine that just came out last week, you will not find it on ChatGPT."

While its information can be dated, ChatGPT provided more reliable and accurate information than Google. This is because the ChatGPT creators at OpenAI choose the most reliable websites when they train ChatGPT through computationally intensive machine learning. Yet users are left in the dark about the specific sources of information because the resulting narratives are devoid of references.

Google, however, has a reliability problem because it essentially "covers everything from the reliable sources to advertisements," Hristidis said.

In fact, advertisers pay Google for their website links to appear at the top of search result pages. So, users often first see links to websites of for-profit companies trying to sell them care-related services and products. Finding reliable information from Google searches thus requires a level of user skill and experience, Hristidis said.

Co-author Ellen Brown, an associate professor of nursing at Florida International University, pointed out that families need timely information about Alzheimer's.

"Although there is no cure for the disease, many clinical trials are underway and recently a promising treatment for early stage Alzheimer's disease was approved by the FDA," Brown said. "Therefore, up-to-date information is important for families looking to learn about recent discoveries and available treatments."

The authors of the study write that "the addition of both the source and the date of health-related information and availability in other languages may increase the value of these platforms for both non-medical and medical professionals." It was published in the Journal of Medical Internet Research under the title "ChatGPT vs Google for Queries Related to Dementia and Other Cognitive Decline: Comparison of Results."

Both Google and ChatGPT scored low on readability, which makes their responses difficult to use for people with lower levels of education or limited health literacy.
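The article does not say which readability index was used; the Flesch Reading Ease score is a common choice, and a rough, self-contained version of it (with a naive vowel-group syllable heuristic -- both assumptions here, not details from the study) looks like this, with lower scores meaning harder text.

    import re

    def naive_syllables(word: str) -> int:
        # Count runs of consecutive vowels as syllables (rough heuristic).
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_reading_ease(text: str) -> float:
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        n_words = max(1, len(words))
        syllables = sum(naive_syllables(w) for w in words)
        return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

    # Dense clinical phrasing scores far lower (harder) than plain language.
    print(flesch_reading_ease("The patient exhibits orthostatic intolerance and dysautonomia."))
    print(flesch_reading_ease("You may feel dizzy when you stand up."))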

"My prediction is that the readability is the easier thing to improve because there are already some tools, some AI methods, that can read and paraphrase text," Hristidis said. "In terms of improving reliability, accuracy, and so on, that's much harder. Don't forget that it took scientists many decades of AI research to build ChatGPT. It is going to be slow improvements from where we are now."

Source: ScienceDaily

Thursday, 19 March 2026

AIs are irrational, but not in the same way that humans are

 Large Language Models behind popular generative AI platforms like ChatGPT gave different answers when asked to respond to the same reasoning test and didn't improve when given additional context, finds a new study from researchers at UCL.

The study, published in Royal Society Open Science, tested the most advanced Large Language Models (LLMs) using cognitive psychology tests to gauge their capacity for reasoning. The results highlight the importance of understanding how these AIs 'think' before entrusting them with tasks, particularly those involving decision-making.

In recent years, the LLMs that power generative AI apps like ChatGPT have become increasingly sophisticated. Their ability to produce realistic text, images, audio and video has prompted concern about their capacity to steal jobs, influence elections and commit crime.

Yet these AIs have also been shown to routinely fabricate information, respond inconsistently, and even get simple maths sums wrong.

In this study, researchers from UCL systematically analysed whether seven LLMs were capable of rational reasoning. A common definition of a rational agent (human or artificial), which the authors adopted, is one that reasons according to the rules of logic and probability. An irrational agent is one that does not reason according to these rules.

The LLMs were given a battery of 12 common tests from cognitive psychology to evaluate reasoning, including the Wason task, the Linda problem and the Monty Hall problem. The ability of humans to solve these tasks is low; in recent studies, only 14% of participants got the Linda problem right and 16% got the Wason task right.
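As a reminder of what these puzzles test, here is a quick simulation of the Monty Hall problem in Python (an illustration only, not part of the UCL study): the rational strategy is to switch doors, which wins about two-thirds of the time, a result both humans and some LLMs find counterintuitive.

    import random

    def monty_hall(switch: bool, trials: int = 100_000) -> float:
        wins = 0
        for _ in range(trials):
            car = random.randrange(3)
            pick = random.randrange(3)
            # The host opens a door that hides no car and is not the contestant's pick.
            opened = next(d for d in range(3) if d != pick and d != car)
            if switch:
                pick = next(d for d in range(3) if d != pick and d != opened)
            wins += (pick == car)
        return wins / trials

    print(f"stay:   {monty_hall(switch=False):.3f}")  # ~0.333
    print(f"switch: {monty_hall(switch=True):.3f}")   # ~0.667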

The models exhibited irrationality in many of their answers, such as providing varying responses when asked the same question 10 times. They were prone to making simple mistakes, including basic addition errors and mistaking consonants for vowels, which led them to provide incorrect answers.

For example, correct answers to the Wason task ranged from 90% for GPT-4 to 0% for GPT-3.5 and Google Bard. Llama 2 70b, which answered correctly 10% of the time, mistook the letter K for a vowel and so answered incorrectly.

While most humans would also fail to answer the Wason task correctly, it is unlikely that this would be because they didn't know what a vowel was.

Olivia Macmillan-Scott, first author of the study from UCL Computer Science, said: "Based on the results of our study and other research on Large Language Models, it's safe to say that these models do not 'think' like humans yet.

"That said, the model with the largest dataset, GPT-4, performed a lot better than other models, suggesting that they are improving rapidly. However, it is difficult to say how this particular model reasons because it is a closed system. I suspect there are other tools in use that you wouldn't have found in its predecessor GPT-3.5."

Source: ScienceDaily

Wednesday, 18 March 2026

Thinking AI models emit 50x more CO2 -- and often for nothing

 No matter which questions we ask an AI, the model will come up with an answer. To produce this information -- regardless of whether the answer is correct or not -- the model uses tokens. Tokens are words or parts of words that are converted into a string of numbers that can be processed by the LLM.

This conversion, as well as other computing processes, produces CO2 emissions. Many users, however, are unaware of the substantial carbon footprint associated with these technologies. Now, researchers in Germany have measured and compared the CO2 emissions of different, already trained LLMs using a set of standardized questions.

"The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions," said Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences and first author of the Frontiers in Communication study. "We found that reasoning-enabled models produced up to 50 times more CO2 emissions than concise response models."

'Thinking' AI causes most emissions

The researchers evaluated 14 LLMs ranging from seven to 72 billion parameters on 1,000 benchmark questions across diverse subjects. Parameters determine how LLMs learn and process information.

Reasoning models, on average, created 543.5 'thinking' tokens per question, whereas concise models required just 37.7 tokens per question. Thinking tokens are additional tokens that reasoning LLMs generate before producing an answer. A higher token footprint always means higher CO2 emissions. It doesn't, however, necessarily mean the resulting answers are more correct, as the extra tokens often add elaborate detail that is not essential for a correct answer.
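As a back-of-the-envelope illustration of why the token footprint matters, the sketch below scales the study's average token counts by a per-token emission factor. That factor is a made-up placeholder -- the study measured emissions directly, and real values vary with hardware and grid mix -- so only the two token counts come from the paper.

    REASONING_TOKENS_PER_Q = 543.5   # avg 'thinking' tokens per question (from the study)
    CONCISE_TOKENS_PER_Q = 37.7      # avg tokens per question, concise models (from the study)
    GRAMS_CO2_PER_TOKEN = 0.005      # hypothetical placeholder, not a measured value

    questions = 1_000
    for name, tokens_per_q in (("reasoning", REASONING_TOKENS_PER_Q),
                               ("concise", CONCISE_TOKENS_PER_Q)):
        grams = tokens_per_q * questions * GRAMS_CO2_PER_TOKEN
        print(f"{name:9s}: {grams:8.1f} g CO2e for {questions} questions")

    print(f"token-footprint ratio: {REASONING_TOKENS_PER_Q / CONCISE_TOKENS_PER_Q:.1f}x")

Whatever the true per-token factor is, the roughly 14-fold difference in token counts carries straight through to the emissions gap.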

The most accurate model was the reasoning-enabled Cogito model with 70 billion parameters, reaching 84.9% accuracy. The model produced three times more CO2 emissions than similarly sized models that generated concise answers. "Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies," said Dauner. "None of the models that kept emissions below 500 grams of CO2 equivalent achieved higher than 80% accuracy on answering the 1,000 questions correctly." CO2 equivalent is the unit used to measure the climate impact of various greenhouse gases.

Subject matter also resulted in significantly different levels of CO2 emissions. Questions that required lengthy reasoning processes, for example abstract algebra or philosophy, led to up to six times higher emissions than more straightforward subjects, like high school history.

Practicing thoughtful use

The researchers said they hope their work will cause people to make more informed decisions about their own AI use. "Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power," Dauner pointed out.

Choice of model, for instance, can make a significant difference in CO2 emissions. For example, having DeepSeek R1 (70 billion parameters) answer 600,000 questions would create CO2 emissions equal to a round-trip flight from London to New York. Meanwhile, Qwen 2.5 (72 billion parameters) can answer more than three times as many questions (about 1.9 million) with similar accuracy rates while generating the same emissions.

The researchers said that their results may be impacted by the choice of hardware used in the study, an emission factor that may vary regionally depending on local energy grid mixes, and the examined models. These factors may limit the generalizability of the results.

"If users know the exact CO2 cost of their AI-generated outputs, such as casually turning themselves into an action figure, they might be more selective and thoughtful about when and how they use these technologies," Dauner concluded.

Source: ScienceDaily

Tuesday, 17 March 2026

Study finds ChatGPT gets science wrong more often than you think

 Washington State University professor Mesut Cicek and his research team repeatedly tested ChatGPT by giving it hypotheses taken from scientific papers. The goal was to see if the AI could correctly determine whether each claim was supported by research or not -- in other words, whether it was true or false.

In total, the team evaluated more than 700 hypotheses and asked the same question 10 times for each one to measure consistency.

Accuracy Results and Limits of AI Performance

When the experiment was first conducted in 2024, ChatGPT answered correctly 76.5% of the time. In a follow-up test in 2025, accuracy rose slightly to 80%. However, once the researchers adjusted for random guessing, the results looked far less impressive. The AI performed only about 60% better than chance, a level closer to a low D than to strong reliability.

The system had the most difficulty identifying false statements, correctly labeling them only 16.4% of the time. It also showed notable inconsistency. Even when given the exact same prompt 10 times, ChatGPT produced consistent answers only about 73% of the time.
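The summary does not spell out how the adjustment for guessing was made; for a binary true/false task with a 50% chance baseline, a common correction looks like the sketch below, which reproduces the roughly 60%-above-chance figure.

    def chance_adjusted(raw_accuracy: float, chance: float = 0.5) -> float:
        # Fraction of the above-chance range actually achieved (assumed correction,
        # not necessarily the exact method used in the paper).
        return (raw_accuracy - chance) / (1.0 - chance)

    print(chance_adjusted(0.765))  # 2024 run: 76.5% raw -> ~0.53 above chance
    print(chance_adjusted(0.80))   # 2025 run: 80.0% raw -> 0.60 above chance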

Inconsistent Answers Raise Concerns

"We're not just talking about accuracy, we're talking about inconsistency, because if you ask the same question again and again, you come up with different answers," said Cicek, an associate professor in the Department of Marketing and International Business in WSU's Carson College of Business and lead author of the new publication.

"We used 10 prompts with the same exact question. Everything was identical. It would answer true. Next, it says it's false. It's true, it's false, false, true. There were several cases where there were five true, five false."

AI Fluency vs. Real Understanding

The findings, published in the Rutgers Business Review, highlight the importance of using caution when relying on AI for important decisions, especially those that require nuanced or complex reasoning. While generative AI can produce smooth, convincing language, that fluency is not yet matched by a comparable depth of conceptual understanding.

According to Cicek, these results suggest that artificial general intelligence capable of truly "thinking" may still be further away than many expect.

"Current AI tools don't understand the world the way we do -- they don't have a 'brain,'" Cicek said. "They just memorize, and they can give you some insight, but they don't understand what they're talking about."

Source: ScienceDaily