The findings should encourage greater data sharing and collaboration among researchers.
As brain scans have become more detailed and informative in recent decades, neuroimaging has promised doctors and scientists a way to “see” what is going wrong inside the brains of people with mental illnesses or neurological conditions. Such imaging has revealed correlations between brain anatomy or function and disease, suggesting potential new ways of diagnosing and treating psychiatric and neurological conditions. But the promise hasn’t turned into reality yet, and a new study explains why: the results of most such studies are unreliable because they include too few participants.
Scientists rely on brain-wide association studies, which use MRI brain scans to measure brain structure and function and link them to complex traits such as personality, behavior, cognition, neurological conditions and mental illness. But a study published March 16, 2022, in Nature by researchers at Washington University School of Medicine in St. Louis and the University of Minnesota shows that most published brain-wide association studies enroll too few participants to yield reliable conclusions.
Using publicly available data sets involving a total of approximately 50,000 participants, the researchers analyzed a range of sample sizes and found that brain-wide association studies need thousands of individuals to achieve high reproducibility. The typical brain-wide association study enrolls just a couple dozen people.
Such underpowered studies are prone to uncovering strong but spurious associations by chance while missing real but weaker associations. Routinely underpowered brain-wide association studies have produced a stream of strikingly strong yet irreproducible findings, slowing progress toward understanding how the brain works, the researchers said.
“Our findings reflect a systematic, structural problem with studies that are designed to find correlations between two complex things, like the brain and behavior,” said senior author Nico Dosenbach, MD, PhD, an associate professor of neurology at Washington University. “This is not a problem with any individual researcher or study. It is also not unique to neuroimaging. The field of genomics discovered a similar problem with genomic data about a decade ago and took steps to address it. The NIH (National Institutes of Health) began funding large data-collection efforts and mandated that the data be shared publicly, which minimized bias, and as a result genomic science has gotten much better. Sometimes you just have to change the research paradigm. Genomics has shown us the way.”
First author Scott Marek, PhD, an instructor in psychiatry at Washington University, and co-first author Brenden Tervo-Clemens, PhD, a postdoctoral researcher at Massachusetts General Hospital/Harvard Medical School, realized that something was wrong with how brain-wide association studies are usually conducted when they could not replicate the results of their own study.
“We were interested in knowing how cognitive ability is represented in the brain,” Marek said. “We ran our analysis on a sample of 1,000 kids and found a significant correlation and were like, ‘Great!’ But then we thought, ‘Can we reproduce this in a thousand more children?’ And it turns out we couldn’t. It just blew me away because a sample of a thousand should have been big enough. We were scratching our heads, wondering what was going on.”
To identify the problem with brain-wide association studies, the research team, including Dosenbach, Marek, Tervo-Clemens, co-senior author Damien A. Fair, PhD, director of the Masonic Institute for the Developing Brain at the University of Minnesota, and others, started by accessing the three largest neuroimaging datasets: the Adolescent Brain Cognitive Development Study (11,874 participants), the Human Connectome Project (1,200 participants) and the UK Biobank (35,375 participants). They then analyzed the datasets for correlations between brain features and demographic, cognitive, mental-health and behavioral measures, using subsets of varying sizes, and attempted to replicate any identified correlations in separate subsets. In all, they ran billions of analyses, backed by the powerful computing resources of Fair’s Masonic Institute for the Developing Brain.
The researchers found that brain-behavior correlations identified using a sample size of 25 — the average sample size in published papers — usually failed to replicate in a different sample. As the sample size increased into the thousands, the correlations became more likely to be reproducible.
Furthermore, the estimated strength of a correlation, a measure known as the effect size, tended to be greatest in the smallest samples. Effect size is scaled from 0 to 1, with 0 meaning no correlation and 1 a perfect correlation; an effect size of 0.2 is considered fairly strong. As sample size increased and correlations became more reproducible, effect sizes shrank. The mean reproducible effect size was 0.01, yet published papers on brain-wide association studies routinely report effect sizes of 0.2 or more.
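This sampling effect is easy to demonstrate. The sketch below is purely illustrative (it is not the study’s analysis pipeline, and the true effect size of 0.06 is an assumed value): it simulates pairs of variables with a small, known correlation and shows that samples of 25 routinely yield inflated effect-size estimates, while samples in the thousands converge near the true value.

```python
import random
import statistics

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def simulate(n, true_r, trials=200, seed=0):
    """Collect `trials` correlation estimates from samples of size n
    drawn from a population whose true correlation is true_r."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        xs, ys = [], []
        for _ in range(n):
            x = rng.gauss(0, 1)
            # y is constructed to share a true_r correlation with x
            y = true_r * x + (1 - true_r ** 2) ** 0.5 * rng.gauss(0, 1)
            xs.append(x)
            ys.append(y)
        estimates.append(pearson_r(xs, ys))
    return estimates

TRUE_R = 0.06  # an assumed, plausibly small brain-behavior effect
for n in (25, 1000, 4000):
    est = simulate(n, TRUE_R)
    inflated = sum(abs(r) >= 0.2 for r in est) / len(est)
    print(f"n={n:5d}  mean r={statistics.fmean(est):+.3f}  "
          f"P(|r| >= 0.2)={inflated:.2f}")
```

With n = 25, a large share of runs report |r| ≥ 0.2 even though the true effect is 0.06, mirroring the inflated effect sizes described above; with n in the thousands, the estimates settle tightly around the true value.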
In retrospect, it should have been clear that the reported effect sizes were enormous, Marek said.
“You can find effect sizes of 0.8 in the literature, but nothing in nature has an effect size of 0.8,” Marek said. “The relationship between height and weight is 0.4. The relationship between height and daily temperature is 0.3. Those are strong, clear, easily measured correlations, and they’re not anywhere close to 0.8. So why would we ever think that the relationship between two complex things, like brain function and depression, would be 0.8? That doesn’t pass the sniff test.”
Neuroimaging studies are expensive and time-consuming. An hour on an MRI machine can cost $1,000, and no individual investigator has the time or money to scan thousands of participants for each study. But if the data from multiple smaller studies were pooled and analyzed together, including statistically insignificant results and modest effect sizes, the combined results would come much closer to the correct answer, Dosenbach said.
“The future of the field is now bright and rests in open science, data sharing and resource sharing across institutions to make large datasets available to any scientist,” Fair said. “This very paper is a wonderful example of that.”
Dosenbach, an associate professor of biomedical engineering, of occupational therapy, of pediatrics and of radiology, said: “There is a lot of promise in this kind of work for finding treatments for mental illnesses and for understanding how the mind works. The good news is that we have identified one of the main reasons why brain imaging has yet to deliver on its promise to revolutionize mental health care. By clearly defining the impediments that have held the field back, and the new paths ahead, this work represents a major turning point for linking brain activity and behavior.”
Reference: “Reproducible brain-wide association studies require thousands of individuals” by Scott Marek, Brenden Tervo-Clemens, Finnegan J. Calabro, David F. Montez, Benjamin P. Kay, Alexander S. Hatoum, Meghan Rose Donohue, William Foran, Ryland L. Miller, Timothy J. Hendrickson, Stephen M. Malone, Sridhar Kandala, Eric Feczko, Oscar Miranda-Dominguez, Alice M. Graham, Eric A. Earl, Anders J. Perrone, Michaela Cordova, Olivia Doyle, Lucille A. Moore, Gregory M. Conan, Johnny Uriarte, Kathy Snider, Benjamin J. Lynch, James C. Wilgenbusch, Thomas Pengo, Angela Tam, Jianzhong Chen, Dillan J. Newbold, Annie Zheng, Nicole A. Seider, Andrew N. Van, Athanasia Metoki, Roselyne J. Chauvin, Timothy O. Laumann, Deanna J. Greene, Steven E. Petersen, Hugh Garavan, Wesley K. Thompson, Thomas E. Nichols, B. T. Thomas Yeo, Deanna M. Barch, Beatriz Luna, Damien A. Fair and Nico U. F. Dosenbach, 16 March 2022, Nature.