A strong, black coffee to wake you up after a bad night’s sleep could impair control of blood sugar levels, according to a new study.
Research from the Centre for Nutrition, Exercise & Metabolism at the University of Bath (UK) looked at the effect of broken sleep and morning coffee across a range of different metabolic markers.
Writing in the British Journal of Nutrition, the scientists show that whilst one night of poor sleep has limited impact on our metabolism, drinking coffee as a way to perk you up from a slumber can have a negative effect on blood glucose (sugar) control.
Given the importance of keeping our blood sugar levels within a safe range to reduce the risk of conditions such as diabetes and heart disease, they say these results could have ‘far-reaching’ health implications, especially considering the global popularity of coffee.
For their study, the physiologists at the University of Bath asked 29 healthy men and women to undergo three different overnight experiments in a random order:
In one condition, participants had a normal night’s sleep and were asked to consume a sugary drink on waking in the morning.
On another occasion, participants experienced a disrupted night’s sleep (where the researchers woke them every hour for five minutes) and then upon waking were given the same sugary drink.
On another, participants experienced the same sleep disruption (i.e. being woken throughout the night) but this time were first given a strong black coffee 30 minutes before consuming the sugary drink.
In each of these tests, blood samples from participants were taken following the glucose drink, which in energy content (calories) mirrored what might typically be consumed for breakfast.
Their findings highlight that one night of disrupted sleep did not worsen participants’ blood glucose/insulin responses at breakfast, when compared to a normal night’s sleep. Past research suggests that losing many hours of sleep over one and/or multiple nights can have negative metabolic effects, so it is reassuring to learn that a single night of fragmented sleep (e.g. due to insomnia, noise disturbance or a new baby) does not have the same effect.
However, strong black coffee consumed before breakfast substantially increased the blood glucose response to breakfast by around 50%. Although population-level surveys indicate that coffee may be linked to good health, past research has demonstrated that caffeine has the potential to cause insulin resistance. This new study therefore reveals that the common remedy of drinking coffee after a bad night’s sleep may solve the problem of feeling sleepy but could create another by limiting your body’s ability to tolerate the sugar in your breakfast.
Professor James Betts, Co-Director of the Centre for Nutrition, Exercise and Metabolism at the University of Bath, who oversaw the work, explains: “We know that nearly half of us will wake in the morning and, before doing anything else, drink coffee – intuitively the more tired we feel, the stronger the coffee. This study is important and has far-reaching health implications as up until now we have had limited knowledge about what this is doing to our bodies, in particular for our metabolic and blood sugar control.
“Put simply, our blood sugar control is impaired when the first thing our bodies come into contact with is coffee, especially after a night of disrupted sleep. We might improve this by eating first and then drinking coffee later if we feel we still need it. Knowing this can have important health benefits for us all.”
Lead researcher Harry Smith, from the Department for Health at Bath, added: “These results show that one night of disrupted sleep alone did not worsen participants’ blood glucose/insulin response to the sugary drink compared to a normal night of sleep, which will be reassuring to many of us. However, starting a day after a poor night’s sleep with a strong coffee did have a negative effect on glucose metabolism, raising the blood glucose response by around 50%. As such, individuals should try to balance the potential stimulating benefits of caffeinated coffee in the morning with the potential for higher blood glucose levels, and it may be better to consume coffee following breakfast rather than before.
“There is a lot more we need to learn about the effects of sleep on our metabolism, such as how much sleep disruption is necessary to impair our metabolism and what some of the longer-term implications of this are, as well as how exercise, for instance, could help to counter some of this.”
This week marks International Coffee Day (1 October) in celebration of the widespread appeal of coffee around the world. Coffee is now the world’s most popular drink, with around two billion cups consumed every day. Half of all people in the United States aged 18 and over drink coffee every day, whilst in the UK, according to the British Coffee Association, 80% of households buy instant coffee for in-home consumption.
The full study ‘Glucose control upon waking is unaffected by hourly sleep fragmentation during the night, but is impaired by morning caffeinated coffee’ is published in the British Journal of Nutrition: https://doi.org/10.1017/S0007114520001865.
A new study in mice identifies a gene that is critical for short-term memory but functions in a part of the brain not traditionally associated with memory.
Classical studies of short-term memory have concentrated on the prefrontal cortex area of the brain. Recent studies, however, have suggested other brain regions may also play a role.
To discover new genes and brain circuits that are important for short-term memory, the researchers turned to studying genetically diverse mice, rather than inbred mice commonly used in research.
“We needed a population that is diverse enough to be able to answer the question of what genetic differences might account for variation in short-term memory,” said Praveen Sethupathy ’03, associate professor of biomedical sciences in the College of Veterinary Medicine, director of the Cornell Center for Vertebrate Genomics, and a senior author of the study.
Priya Rajasethupathy ’04, the Jonathan M. Nelson Family Assistant Professor and head of the Laboratory of Neural Dynamics and Cognition at Rockefeller University, is the other senior author of the paper. Sethupathy and Rajasethupathy are siblings; they conceived of this study over family dinners. Kuangfu Hsiao, a research associate at Rockefeller University, is the lead author of the study.
The researchers began with about 200 genetically diverse mice to identify regions of the DNA that contribute to the observed variation in short-term memory among the mice. They screened the mice on a short-term memory task and used genetic mapping techniques to identify a region of the genome, harboring 26 genes, that is associated with working memory. With further genome-scale analyses, they whittled the list of genes down to four of special interest. By disabling each of these four genes one at a time, they found that one in particular, Gpr12, coded for a protein that is required for and promotes working memory.
“I expected the prefrontal cortex would be the region most globally changed by the activity of Gpr12,” Rajasethupathy said. “Strikingly, it was actually the thalamus, by far.”
They also found that when they took low-performing mice and increased the amount of the Gpr12 protein in the thalamus, their accuracy in the memory task increased from 50% to 80%, similar to the level of high-performers.
To understand the neural circuits involved, the researchers compared low performers against low performers with artificially increased Gpr12 protein in the thalamus. These mice were also engineered with fluorescent calcium sensors that light up when a neuron is active. They recorded neurons firing in multiple brain regions while the mice performed the memory task. During many phases of the task, when short-term memory was required, the researchers observed synchronous activity between the prefrontal cortex and thalamus.
“When the thalamus activity went down, prefrontal went down; when the thalamus went up, prefrontal went up,” Rajasethupathy said. “We found that these two brain regions are very highly correlated with each other in high-performers but not in low-performers. This finding implies a directionality [where one area influences the other], but we don’t yet know the direction.”
Often, when scientists identify that a specific gene is linked to a certain behavior, it takes time and more research to understand how that gene is driving the behavior, Rajasethupathy said.
“We were inspired in this study to link genetics to neural circuits to behavior,” Sethupathy added. “Future work will investigate what mechanisms regulate the Gpr12 gene and what signaling pathways downstream of the Gpr12 protein mediate its effects.”
Interestingly, the Gpr12 gene is highly conserved among mammals, including humans. The work therefore offers the possibility of a novel therapeutic angle for reversing deficits in short-term memory. More immediately, it adds a new dimension to classical models by emphasizing the importance of a two-way neural dialogue between the prefrontal cortex and the thalamus in support of short-term memory.
The study was funded by the National Institutes of Health.
When we think about hackers, we might imagine someone stealing data and selling it on the dark web for financial gain. But new research from the University of Delaware’s John D’Arcy suggests that some hackers may have a different motivation: disappointment in a company’s attempts to fake social responsibility.
“There is emerging evidence that the hacking community is not homogenous, and at least some hackers appear to be motivated by what they dislike, as opposed to solely financial gain,” said D’Arcy, who is a professor of management information systems (MIS) at UD’s Alfred Lerner College of Business and Economics. “Recent hacks against the World Health Organization, due to its actions (or supposed inactions) related to the COVID-19 pandemic, are a case in point.”
D’Arcy and his coauthors, interested in exploring whether a firm’s corporate social performance (CSP) impacts its likelihood of being breached, studied a unique dataset that included information on data breach incidents, external assessments of firms’ CSP and other factors. The results, published on Sept. 18 in the Information Systems Research paper “Too Good to Be True: Firm Social Performance and the Risk of Data Breach,” were intriguing.
The key to these results, D’Arcy explained, lies in understanding the difference between two different types of corporate social responsibility efforts: those that are more minor and peripheral (like recycling programs or charitable donations) versus those that involve social responsibility being embedded throughout the firm’s core business and processes (like diversity initiatives and producing eco-friendly products).
Companies that participate only in peripheral efforts, and not more deeply embedded ones, are sometimes accused of “greenwashing”: attempting to give the appearance of social responsibility without infusing such practices throughout their entire organization. According to D’Arcy’s research, firms that do this are more likely to face problems from hackers.
“An example of a firm that has been accused of greenwashing is Walmart,” D’Arcy said. “This is because Walmart has touted its investments in charitable causes and environmental programs, but at the same time has been criticized for providing low wages and neglecting investments in employees’ physical and psychological working environment.”
The study found that hackers of all kinds — from internal disgruntled employees to external hacktivist groups — can “sniff out” these actions that only give the appearance of social responsibility. Companies are especially likely to be breached when they use such actions not only to improve their image but also to mask poor overall CSP.
“Consequently, these firms are more likely to be victimized by a malicious data breach for these reasons,” D’Arcy said. “Firms may be placing a proverbial target on their back, in an information security sense, by engaging in greenwashing efforts.”
Conversely, the study found that firms that engage in more embedded and meaningful forms of corporate responsibility are more likely to see solely positive outcomes. In this case, that means fewer hacks and data breaches.
“These same internal and external hackers are likely to see such embedded CSP efforts as genuine attempts at social responsibility (in other words, the company is ‘walking its talk’ when it comes to social responsibility) and thus they will be less likely to target these firms for a computer attack that results in a breach,” D’Arcy said.
What lessons should companies take from this research? D’Arcy warned that companies should be cautious about promoting peripheral CSP efforts if they have otherwise poor records on corporate social issues.
“What was once accepted as meaningful CSP activity may no longer appease certain stakeholders,” he said. “And in this era of increased information transparency and greater expectations of the firm’s role in society, engaging in only peripheral actions may result in stakeholder backlash. Firms need to be cautious about promoting their CSP activities unless they can defend their actions as embedded in core practices and as authentically motivated.”
The potential acceleration of job automation spurred by COVID-19 will disproportionately affect Latinos in U.S. service sector jobs, according to a new UCLA report, which also urges state and local officials to start planning now to implement programs to support and retrain these workers.
The report, by the UCLA Latino Policy and Politics Initiative, looked at occupational data from the six states with the largest Latino populations and found an overrepresentation of Latinos in industries where jobs are more susceptible to automation, like construction, leisure and hospitality, agriculture, and wholesale or retail trade.
More than 7.1 million Latinos, representing almost 40% of the Latino workforce in those six states — Arizona, California, Florida, Illinois, New York and Texas — are at high risk of being displaced by automation, the report shows.
“As Latinos take a disproportionate financial hit from the COVID-19 crisis, now is a good time to focus on increasing training opportunities and to strengthen the social safety net to catch workers who are left behind,” said Rodrigo Dominguez-Villegas, the report’s author and director of research at the policy initiative.
“Millions of people have lost their jobs amid the pandemic, and the future of those jobs is uncertain as employers look to reduce costs by accelerating automation,” Dominguez-Villegas added. “For many Latinos, the economic recovery will not bring back their jobs.”
A failure to prepare Latinos for jobs in the digital economy and other growing sectors will come with economic repercussions for the U.S. by creating a shortage of skilled workers in an aging and shrinking labor force. As the nation’s youngest demographic group, with a median age of 30 years — compared with 44 for whites, 38 for Asians and 35 for African Americans — Latino workers can fill increasing workforce demands in health care and tech-focused jobs if enough resources are focused on retraining the Latino workforce, according to the report.
“In the face of COVID-19, global warming and economic chaos, Latinos are critical to America’s recovery,” said Sonja Diaz, founding director of the policy initiative. “Policymakers need to strengthen pathways to opportunity that are centered on workers of color or risk further financial ruin.”
With the exception of Florida, where Latinos are almost twice as likely to have a college degree and access to higher-skilled jobs, Latino workers, particularly those in California and Texas, could see heavy job loss in construction and hospitality from automation. Some figures estimate that up to 70 percent of jobs in hospitality and 49 percent of construction jobs could soon become completely automated.
The report makes the following policy recommendations to begin preparing the Latino workforce for a digitalized future:
Modernize unemployment insurance programs to expand eligibility and provide worker retraining assistance.
Create apprenticeship programs that provide career pathways for digitally oriented jobs and create a pipeline to employers.
Invest in broadband access and programs that connect Latinos with digital technologies.
Increase Latino enrollment in and graduation from higher education institutions and increase access to social-safety services such as housing, food and health care.
The report will be used as a baseline for discussion at a convening of policymakers, industry leaders, higher education administrators and training organizations organized by the Aspen Institute’s Latinos and Society Program in October 2020.
“Data is critical as policymakers work with the private sector to ensure that Latinos have access to the training and education opportunities necessary to drive our economy in the digital age,” said Domenika Lynch, executive director of the Aspen Institute program.
Researchers at Berkeley Lab have transformed lignin, a waste product of the paper industry, into a precursor for a useful chemical with a wide range of potential applications.
Lignin is a complex material found in plant cell walls that is notoriously difficult to break down and turn into something useful. Typically, lignin is burned for energy, but scientists are focusing on ways to repurpose it.
In a recent study, researchers demonstrated their ability to convert lignin into a chemical compound that is a building block of bio-based ionic liquids. The research was a collaboration between the Advanced Biofuels and Bioproducts Process Development Unit, the Joint BioEnergy Institute (both established by the Department of Energy and based at Berkeley Lab), and Queens University of Charlotte.
Ionic liquids are powerful solvents/catalysts used in many important industrial processes, including the production of sustainable biofuels and biopolymers. However, traditional ionic liquids are petroleum-based and costly. Bio-based ionic liquids made with lignin, an inexpensive organic waste product, would be cheaper and more environmentally friendly.
“This research brings us one step closer to creating bio-based ionic liquids,” said Ning Sun, the study’s co-corresponding author. “Now we just need to optimize and scale up the technology.”
According to Sun, bio-based ionic liquids also have a broad range of potential uses outside of industry. “We now have the platform to synthesize bio-based ionic liquids with different structures that have different applications, such as antivirals,” Sun said.
This research was funded by DOE’s Bioenergy Technologies Office through the Technology Commercialization Fund.
Plants can produce a wide range of molecules, many of which help them fight off harmful pests and pathogens. Biologists have harnessed this ability to produce many molecules important for human health — aspirin and the antimalarial drug artemisinin, for example, are derived from plants.
Now, scientists at the Joint BioEnergy Institute (JBEI) are using synthetic biology to give plants the ability to create molecules never seen before in nature. New research led by Patrick Shih, director of Plant Biosystems Design at JBEI, and Beth Sattely of Stanford University describes success in swapping enzymes between plants to engineer new synthetic metabolic pathways. These pathways gave plants the ability to create new classes of chemical compounds, some of which have enhanced properties.
“This is a demonstration of how we can begin to start rewiring and redesigning plant metabolism to make molecules of interest for a range of applications,” Shih said.
Engineering plants to make new molecules themselves provides a sustainable platform to produce a wide range of compounds. One of the compounds the researchers were able to create is comparable to commercially used pesticides in its effectiveness, while others may have anti-cancer properties. The long-term goal is to engineer plants to be biofactories of molecules such as these, bypassing the need to externally spray pesticides or synthesize therapeutic molecules in a lab.
“That’s the motivation for where we could go,” Shih said. “We want to push the boundaries of plant metabolism to make compounds we’ve never seen before.”
JBEI is a DOE Bioenergy Research Center supported by DOE’s Office of Science.
A study of more than a half-million people in India who were exposed to the novel coronavirus SARS-CoV-2 suggests that the virus’ continued spread is driven by only a small percentage of those who become infected.
Furthermore, children and young adults were found to be potentially much more important to transmitting the virus — especially within households — than previous studies have identified, according to a paper by researchers from the United States and India published Sept. 30 in the journal Science.
Researchers from the Princeton Environmental Institute (PEI), Johns Hopkins University and the University of California, Berkeley, worked with public health officials in the southeast Indian states of Tamil Nadu and Andhra Pradesh to track the infection pathways and mortality rate of 575,071 individuals who were exposed to 84,965 confirmed cases of COVID-19, the disease caused by SARS-CoV-2. It is the largest contact tracing study — contact tracing being the process of identifying people who came into contact with an infected person — conducted in the world for any disease.
Lead researcher Ramanan Laxminarayan, a senior research scholar in PEI, said that the paper is the first large study to capture the extraordinary extent to which SARS-CoV-2 hinges on “superspreading,” in which a small percentage of the infected population passes the virus on to more people. The researchers found that 71% of infected individuals did not infect any of their contacts, while a mere 8% of infected individuals accounted for 60% of new infections.
“Our study presents the largest empirical demonstration of superspreading that we are aware of in any infectious disease,” Laxminarayan said. “Superspreading events are the rule rather than the exception when one is looking at the spread of COVID-19, both in India and likely in all affected places.”
The findings provide extensive insight into the spread and deadliness of COVID-19 in countries such as India — which has experienced more than 96,000 deaths from the disease — that have a high incidence of resource-limited populations, the researchers reported. They found that coronavirus-related deaths in India occurred, on average, six days after hospitalization compared to an average of 13 days in the United States. Also, deaths from coronavirus in India have been concentrated among people aged 50-64, which is slightly younger than the 60-plus at-risk population in the United States.
The researchers also reported, however, the first large-scale evidence that the implementation of a countrywide shutdown in India led to substantial reductions in coronavirus transmission.
The researchers found that the chances of a person with coronavirus, regardless of their age, passing it on to a close contact ranged from 2.6% in the community to 9% in the household. The researchers found that children and young adults — who made up one-third of COVID cases — were especially key to transmitting the virus in the studied populations.
“Kids are very efficient transmitters in this setting, which is something that hasn’t been firmly established in previous studies,” Laxminarayan said. “We found that reported cases and deaths have been more concentrated in younger cohorts than we expected based on observations in higher-income countries.”
Children and young adults were much more likely to contract coronavirus from people their own age, the study found. Across all age groups, people had a greater chance of catching the coronavirus from someone their own age. The overall probability of catching coronavirus ranged from 4.7% for low-risk contacts up to 10.7% for high-risk contacts.
The study, “Epidemiology and transmission dynamics of COVID-19 in two Indian states,” was published Sept. 30 by the journal Science. The work was supported by the National Science Foundation and the Centers for Disease Control and Prevention.
Two and a half years ago, MIT entered into a research agreement with startup company Commonwealth Fusion Systems to develop a next-generation fusion research experiment, called SPARC, as a precursor to a practical, emissions-free power plant.
Now, after many months of intensive research and engineering work, the researchers charged with defining and refining the physics behind the ambitious tokamak design have published a series of papers summarizing the progress they have made and outlining the key research questions SPARC will enable.
Overall, says Martin Greenwald, deputy director of MIT’s Plasma Science and Fusion Center and one of the project’s lead scientists, the work is progressing smoothly and on track. This series of papers provides a high level of confidence in the plasma physics and the performance predictions for SPARC, he says. No unexpected impediments or surprises have shown up, and the remaining challenges appear to be manageable. This sets a solid basis for the device’s operation once constructed, according to Greenwald.
Greenwald wrote the introduction for a set of seven research papers authored by 47 researchers from 12 institutions and published today in a special issue of the Journal of Plasma Physics. Together, the papers outline the theoretical and empirical physics basis for the new fusion system, which the consortium expects to start building next year.
SPARC is planned to be the first experimental device ever to achieve a “burning plasma” — that is, a self-sustaining fusion reaction in which different isotopes of the element hydrogen fuse together to form helium, without the need for any further input of energy. Studying the behavior of this burning plasma — something never before seen on Earth in a controlled fashion — is seen as crucial information for developing the next step, a working prototype of a practical, power-generating plant.
Such fusion power plants might significantly reduce greenhouse gas emissions from the power-generation sector, one of the major sources of these emissions globally. The MIT and CFS project is one of the largest privately funded research and development projects ever undertaken in the fusion field.
“The MIT group is pursuing a very compelling approach to fusion energy,” says Chris Hegna, a professor of engineering physics at the University of Wisconsin at Madison, who was not connected to this work. “They realized the emergence of high-temperature superconducting technology enables a high magnetic field approach to producing net energy gain from a magnetic confinement system. This work is a potential game-changer for the international fusion program.”
The SPARC design, though about twice the size of MIT’s now-retired Alcator C-Mod experiment and similar to several other research fusion machines currently in operation, would be far more powerful, achieving fusion performance comparable to that expected in the much larger ITER tokamak being built in France by an international consortium. The high power in a small size is made possible by advances in superconducting magnets that allow for a much stronger magnetic field to confine the hot plasma.
The SPARC project was launched in early 2018, and work on its first stage, the development of the superconducting magnets that would allow smaller fusion systems to be built, has been proceeding apace. The new set of papers represents the first time that the underlying physics basis for the SPARC machine has been outlined in detail in peer-reviewed publications. The seven papers explore the specific areas of the physics that had to be further refined, and that still require ongoing research to pin down the final elements of the machine design and the operating procedures and tests that will be involved as work progresses toward the power plant.
The papers also describe the use of calculations and simulation tools for the design of SPARC, which have been tested against many experiments around the world. The authors used cutting-edge simulations, run on powerful supercomputers, that have been developed to aid the design of ITER. The large multi-institutional team of researchers represented in the new set of papers aimed to bring the best consensus tools to the SPARC machine design to increase confidence it will achieve its mission.
The analysis done so far shows that the planned fusion energy output of the SPARC tokamak should be able to meet the design specifications with a comfortable margin to spare. It is designed to achieve a Q factor — a key parameter denoting the efficiency of a fusion plasma — of at least 2, essentially meaning that twice as much fusion energy is produced as the amount of energy pumped in to generate the reaction. That would be the first time a fusion plasma of any kind has produced more energy than it consumed.
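For readers unfamiliar with the fusion-gain terminology, the Q factor quoted above is simply the ratio of fusion power produced to the external heating power supplied. The short sketch below illustrates the arithmetic with hypothetical power values (they are not SPARC design figures):

```python
# Illustrative only: the fusion gain Q is the ratio of fusion power out
# to external heating power in. The power values below are hypothetical.

def fusion_gain(fusion_power_mw: float, heating_power_mw: float) -> float:
    """Q = fusion power produced / external heating power supplied."""
    return fusion_power_mw / heating_power_mw

print(fusion_gain(50.0, 25.0))    # 2.0  -> the minimum SPARC target described above
print(fusion_gain(250.0, 25.0))   # 10.0 -> the more optimistic projection in the papers
```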
The calculations at this point show that SPARC could actually achieve a Q ratio of 10 or more, according to the new papers. While Greenwald cautions that the team wants to be careful not to overpromise, and much work remains, the results so far indicate that the project will at least achieve its goals, and specifically will meet its key objective of producing a burning plasma, wherein the self-heating dominates the energy balance.
Limitations imposed by the Covid-19 pandemic slowed progress a bit, but not much, he says, and the researchers are back in the labs under new operating guidelines.
Overall, “we’re still aiming for a start of construction in roughly June of ’21,” Greenwald says. “The physics effort is well-integrated with the engineering design. What we’re trying to do is put the project on the firmest possible physics basis, so that we’re confident about how it’s going to perform, and then to provide guidance and answer questions for the engineering design as it proceeds.”
Many of the fine details are still being worked out on the machine design, covering the best ways of getting energy and fuel into the device, getting the power out, dealing with any sudden thermal or power transients, and how and where to measure key parameters in order to monitor the machine’s operation.
So far, there have been only minor changes to the overall design. The diameter of the tokamak has been increased by about 12 percent, but little else has changed, Greenwald says. “There’s always the question of a little more of this, a little less of that, and there’s lots of things that weigh into that, engineering issues, mechanical stresses, thermal stresses, and there’s also the physics — how do you affect the performance of the machine?”
The publication of this special issue of the journal, he says, “represents a summary, a snapshot of the physics basis as it stands today.” Though members of the team have discussed many aspects of it at physics meetings, “this is our first opportunity to tell our story, get it reviewed, get the stamp of approval, and put it out into the community.”
Greenwald says there is still much to be learned about the physics of burning plasmas, and once this machine is up and running, key information can be gained that will help pave the way to commercial, power-producing fusion devices, whose fuel — the hydrogen isotopes deuterium and tritium — can be made available in virtually limitless supplies.
The details of the burning plasma “are really novel and important,” he says. “The big mountain we have to get over is to understand this self-heated state of a plasma.”
“The analysis presented in these papers will provide the world-wide fusion community with an opportunity to better understand the physics basis of the SPARC device and gauge for itself the remaining challenges that need to be resolved,” says George Tynan, professor of mechanical and aerospace engineering at the University of California at San Diego, who was not connected to this work. “Their publication marks an important milestone on the road to the study of burning plasmas and the first demonstration of net energy production from controlled fusion, and I applaud the authors for putting this work out for all to see.”
Overall, Greenwald says, the work that has gone into the analysis presented in this package of papers “helps to validate our confidence that we will achieve the mission. We haven’t run into anything where we say, ‘oh, this is predicting that we won’t get to where we want.’” In short, he says, “one of the conclusions is that things are still looking on-track. We believe it’s going to work.”
A new study from researchers at the UC Berkeley School of Public Health and Stanford University School of Medicine has determined that the higher severe maternal morbidity rates experienced by Black, American Indian/Alaska Native, and mixed-race women might have been reduced had these women delivered in the same hospitals as non-Hispanic White women.
Researchers specifically targeted severe maternal morbidity (SMM), an umbrella term for a set of 21 adverse health complications including eclampsia and heart failure that can occur during childbirth. SMM has emerged as a growing public health crisis. The CDC reports that “the overall rate of SMM increased almost 200%” between 1993 and 2014, with Black women experiencing these outcomes at 2-3 times the rate of White women. The factors explaining the sharp increase in SMM nationally, as well as the persistent disparities by race and ethnicity, are inadequately understood, leaving few options to prevent short- and long-term health consequences for women and their newborns.
The researchers reviewed more than 3 million California birth records from 2007-2012 to see if hospital-level factors (such as teaching affiliation and proportion of SMM deliveries) could explain racial disparities in maternal outcomes related to giving birth.
“We found that the prevalence of SMM in California was highest in Black women and double that of White women (2.1% vs. 1.1%), a disparity that we know is increasing over time based on prior research by our team. We hypothesized that birth hospital might be an important underlying contributor to these disparities, given that national data suggests that Black women tend to deliver at hospitals that have worse outcomes,” said Mahasin Mujahid, lead author of a paper published in August in the American Journal of Obstetrics and Gynecology and associate professor of epidemiology and Chancellor’s Professor of Public Health at UC Berkeley’s School of Public Health.
Mujahid’s research found 33% of White women delivered in hospitals with the highest tertile of SMM rates compared to 53% of Black women. “Our model found that if Black women gave birth at the same distribution of hospitals as White women, this would have resulted in 156 fewer cases of SMM in Black women, representing a 7.8% reduction in the Black-White disparity,” Mujahid said.
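The logic behind a counterfactual figure like that can be sketched simply: keep each hospital tier’s SMM rate fixed, but reweight Black mothers’ deliveries using the hospital distribution observed for White mothers, then compare expected case counts. The sketch below is a hypothetical illustration of that reweighting, with invented rates and a made-up cohort size; the published analysis is risk-adjusted and based on the actual California birth records.

```python
# Hypothetical sketch of a counterfactual "same hospital distribution" estimate.
# Rates and cohort size are invented for illustration; the published model is risk-adjusted.

# Share of deliveries by hospital SMM-rate tertile (low, middle, high).
black_hospital_shares = [0.20, 0.27, 0.53]   # 53% deliver in the highest-SMM tertile (as reported)
white_hospital_shares = [0.33, 0.34, 0.33]   # 33% deliver in the highest-SMM tertile (as reported)

# Hypothetical SMM rates for Black mothers within each tertile of hospitals.
smm_rates = [0.015, 0.020, 0.026]

n_black_deliveries = 100_000  # hypothetical cohort size

observed = sum(s * r for s, r in zip(black_hospital_shares, smm_rates)) * n_black_deliveries
counterfactual = sum(s * r for s, r in zip(white_hospital_shares, smm_rates)) * n_black_deliveries

print(f"Observed SMM cases:       {observed:.0f}")
print(f"Counterfactual SMM cases: {counterfactual:.0f}")
print(f"Cases averted by matching the White hospital distribution: {observed - counterfactual:.0f}")
```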
The findings highlight the critical need for more research on the potential role of structural racism in shaping differential access to high-quality hospitals based on race and ethnicity, and in determining the within-hospital experiences of minoritized women, such as experiences of discrimination that may disproportionately affect their birth experiences.
“We are in the midst of a national reckoning on the impacts of structural racism on Black Americans,” said Mujahid. “It is imperative that we include the health disparities experienced by historically marginalized women. SMM is one of a number of health problems that disproportionately affect racially and ethnically minoritized women, with particularly devastating consequences for Black and Native American women. More work is urgently needed to uncover the systemic factors that produce these disparities and to develop targeted interventions that promote health equity.”
As the United States prepares for November’s general election, almost every step of the voting process is being revamped and reevaluated to ensure that COVID-19 will not spread in local communities when millions of Americans cast their ballot in the fall.
While some states are expanding their vote-by-mail programs, many precincts are still expecting a high turnout for in-person voting.
Helping election administrators and poll workers prepare for safe in-person voting is a team at the Stanford Hasso Plattner Institute of Design, also known as the d.school. In May 2020, they partnered with the Healthy Elections Project, a joint effort between Stanford and Massachusetts Institute of Technology (MIT), to develop and promote best practices for a safe and secure election this November.
“The United States is making the most fundamental transformation to its election infrastructure in the shortest period of time in recent memory,” said Nathaniel Persily, the James B. McClatchy Professor of Law and former Senior Research Director of the Presidential Commission on Election Administration.
“When it became clear that we needed to redesign our polling places, going to the d.school – world experts in design – was the natural place to look,” added Persily, who co-founded the Healthy Elections Project with Charles Stewart III from MIT.
The d.school’s task was to figure out how to apply human-centered design – an approach to finding and solving problems that puts people’s mindsets and behaviors at the center of the process – to designing safe polling places during a pandemic. “Elections are a series of experiences,” said project collaborator Nadia Roumani, a senior designer with the d.school’s Designing for Social Systems Program. “One of the things that human-centered design brings to the voting process is the ability to understand and acknowledge the complexity of that experience and, when appropriate, make it more accessible.”
Toward this end, the group created the 2020 Healthy Polling Places Guidebook, a 51-page document that offers practical examples for how to prepare a safe environment for in-person voting.
The guidebook draws its inspiration from some of the several dozen statewide primary and run-off elections that have been held across the U.S. since COVID-19 was declared a pandemic by the World Health Organization on March 11, 2020. As local election administrators rethink their own voting procedures to incorporate public health recommendations like social distancing to reduce the spread of COVID-19, they are turning to earlier elections to learn what worked, Roumani said.
Early on, Roumani and her colleagues at the d.school partnered with several organizations that have extensive experience working with elections officials. As Roumani learned, state and county regulations for running elections are both highly technical and incredibly decentralized. Every state, county and town administers its own elections differently.
“Part of our work has been to serve as a design coach for some of these organizations and help them take what they already have, which is robust, thorough and very detail-oriented, and make it more visual, digestible, action-oriented and experience-centered,” Roumani said.
As election officials prepare for safe and clean environments for both workers and voters, the guidebook highlights dozens of examples that show what every step of the voting process has looked like so far in the pandemic. Included are photos of the signage voters saw when they entered their local polling place; the clear, plexiglass barriers they encountered when checking in; and the floor markings they followed when exiting.
Accompanying each of these images are brief but thorough descriptions of what election administrators might consider if they were to pursue one of these options, including step-by-step guidance and checklists.
The 2020 Healthy Polling Places Guidebook also features examples of what outdoor voting could look like. For example, included is an image of a tent outside the town hall from a primary election held in April in Dunn, Wisconsin, that offered people an alternative to indoor voting.
The guidebook even shows alternative examples for collecting ballots – such as curbside voting and drive-through voting – which allowed people to vote without leaving their vehicle. It also offers suggested language, links to resources administrators can use to lay out their worksites and reminders for how to promote and maintain safety throughout the day. Included as well are practical tips and a training module for how to manage stressful situations that may arise, such as how to deal with a voter who forgets their face covering or refuses to wear one at all.
“The other part is understanding that there are potentially some emotional moments and anxiety-provoking moments that poll workers may face that we need to design for,” Roumani said.
Preparing election officials for challenges ahead
In addition to partnering with people like Nadia Roumani and her team at the d.school, the Healthy Elections Project has collaborated with dozens of academics, civic organizations, election administrators and election administration experts to address other challenges the pandemic poses to officials and local jurisdictions, including how to expand mail-in and absentee voting programs.
While there are some states that have spent years rolling out efforts for their mail-in voting programs, other states are having to do it in a matter of months. Some jurisdictions do not have the expertise to make these changes so quickly – which is where the Healthy Election Project steps in.
“We really need the best available research to try to educate election officials, voters and NGOs on how to pull off this election in a safe and secure way,” Persily said. “The goal of the Healthy Elections Project is to really turn that research into action.”
Since the Healthy Elections Project launched in April, students from Stanford and MIT have been researching and drafting relevant memos that include specific recommendations and resources to election officials making critical changes to their infrastructure. One report, for example, goes into granular detail of what supplies jurisdictions might consider purchasing to make their polling places pandemic-proof, what they might need to expand vote-by-mail programs, as well as timelines to avoid bottlenecks in the supply chain.
As the election draws closer, the Healthy Elections Project will continue to prepare and provide election administrators with additional tools and resources to manage issues that may arise, such as managing mail ballots, analyzing election data and communicating with voters. There is also a growing amount of litigation regarding election rules during the pandemic, and the Healthy Elections Project is tracking these cases as well to keep election officials and voters up to date on issues in their jurisdictions.
“It’s incredibly difficult during the pandemic to try to effectuate changes in election administration across the country, but we are trying to do our best,” said Persily. “We hope that we’re making at least a small contribution to make it a smoother election.”
Poll worker recruitment
Another key issue to emerge from research conducted by Stanford and MIT students involved in the Healthy Elections Project was the need to recruit poll workers. In a detailed memo analyzing some of the recent primary and run-off elections held during the pandemic, students reported how some states had to rapidly respond to staffing shortages because of the pandemic. Typically, more than half of poll workers have been over the age of 60 – the demographic most at risk of experiencing health complications due to COVID-19.
“When we have a poll worker recruitment shortage, election officials have no choice but to consolidate or combine polling places, which in some cases can make it more difficult for voters to get to the polls. It can also lead to longer lines, more crowding and more processing delays at single polling locations,” said Stanford law student Chelsey Davidson, who has been working full time on the Healthy Elections Project.
To have a successful and healthy election in November, new poll workers are needed. The Healthy Elections Project has rolled out a robust poll worker recruitment effort and has partnered with Power the Polls to recruit new poll workers to staff in-person voting locations on Election Day. Stanford’s d.school has been addressing poll worker recruitment as well: it created the Pollworker Screening Tool, an easily adaptable application form that local election officials can use to evaluate volunteers.
In the September issue of the journal Nature, scientists from Texas A&M University, Hewlett Packard Labs and Stanford University have described a new nanodevice that acts almost identically to a brain cell. They have shown that these synthetic brain cells can be joined together to form intricate networks that can then solve problems in a brain-like manner.
“This is the first study where we have been able to emulate a neuron with just a single nanoscale device, which would otherwise need hundreds of transistors,” said R. Stanley Williams, senior author on the study and professor in the Department of Electrical and Computer Engineering. “We have also been able to successfully use networks of our artificial neurons to solve toy versions of a real-world problem that is computationally intense even for the most sophisticated digital technologies.”
In particular, the researchers have demonstrated proof of concept that their brain-inspired system can identify possible mutations in a virus, which is highly relevant for ensuring the efficacy of vaccines and medications for strains exhibiting genetic diversity.
[Image: An electron micrograph of the artificial neuron. The niobium dioxide layer (in yellow) endows the device with neuron-like behavior. Credit: R. Stanley Williams]
Over the past decades, digital technologies have become smaller and faster largely because of the advancements in transistor technology. However, these critical circuit components are fast approaching their limit of how small they can be built, initiating a global effort to find a new type of technology that can supplement, if not replace, transistors.
In addition to this “scaling-down” problem, transistor-based digital technologies have other well-known challenges. For example, they struggle at finding optimal solutions when presented with large sets of data.
“Let’s take a familiar example of finding the shortest route from your office to your home. If you have to make a single stop, it’s a fairly easy problem to solve. But if for some reason you need to make 15 stops in between, you have 43 billion routes to choose from,” said Suhas Kumar, lead author on the study and researcher at Hewlett Packard Labs. “This is now an optimization problem, and current computers are rather inept at solving it.”
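The arithmetic behind that 43 billion figure is the classic traveling-salesperson count: a closed tour through n stops has (n − 1)!/2 distinct orderings, and for n = 15 that comes to roughly 43.6 billion. A quick illustrative check (not code from the study):

```python
# Illustrative only: counting distinct closed tours through n stops,
# the combinatorial explosion referenced in the quote above.
from math import factorial

def distinct_tours(n_stops: int) -> int:
    """Number of distinct closed tours through n stops: (n - 1)! / 2."""
    return factorial(n_stops - 1) // 2

for n in (5, 10, 15):
    print(f"{n:2d} stops -> {distinct_tours(n):,} possible routes")
# 15 stops -> 43,589,145,600 possible routes (~43 billion, as in the quote)
```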
Kumar added that another arduous task for digital machines is pattern recognition, such as identifying a face as the same regardless of viewpoint or recognizing a familiar voice buried within a din of sounds.
But tasks that can send digital machines into a computational tizzy are ones at which the brain excels. In fact, brains are not just quick at recognition and optimization problems, but they also consume far less energy than digital systems. By mimicking how the brain solves these types of tasks, Williams said brain-inspired or neuromorphic systems could potentially overcome some of the computational hurdles faced by current digital technologies.
To build the fundamental building block of the brain or a neuron, the researchers assembled a synthetic nanoscale device consisting of layers of different inorganic materials, each with a unique function. However, they said the real magic happens in the thin layer made of the compound niobium dioxide.
When a small voltage is applied to this region, its temperature begins to increase. When the temperature reaches a critical value, however, niobium dioxide undergoes a quick change in personality, turning from an insulator into a conductor. As it begins to conduct electric current, its temperature drops and niobium dioxide switches back to being an insulator.
These back-and-forth transitions enable the synthetic devices to generate a pulse of electrical current that closely resembles the profile of electrical spikes, or action potentials, produced by biological neurons. Further, by changing the voltage across their synthetic neurons, the researchers reproduced a rich range of neuronal behaviors observed in the brain, such as sustained, burst and chaotic firing of electrical spikes.
“Capturing the dynamical behavior of neurons is a key goal for brain-inspired computers,” Kumar said. “Altogether, we were able to recreate around 15 types of neuronal firing profiles, all using a single electrical component and at much lower energies compared to transistor-based circuits.”
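The insulator-to-conductor switching described above behaves like a relaxation oscillator, and its spiking can be mimicked with a toy two-state circuit model: a constant input current charges a capacitor across the device, which flips to a conducting state above one threshold voltage and back to an insulating state below a lower one. The sketch below uses arbitrary, hypothetical parameters and is only a caricature of the niobium dioxide device physics reported in the paper:

```python
# Toy relaxation-oscillator caricature of the insulator/conductor switching
# described above. All parameters are arbitrary illustrative values,
# not the niobium dioxide device parameters from the study.

C = 1e-9                 # capacitance (F), hypothetical
R_OFF, R_ON = 1e6, 1e3   # device resistance: insulating vs. conducting state (ohm)
V_ON, V_OFF = 1.0, 0.4   # switching thresholds (V), hypothetical
I_IN = 1.2e-6            # constant input current (A)
DT = 1e-7                # simulation time step (s)
STEPS = 50_000           # 5 ms of simulated time

v, conducting, spikes = 0.0, False, 0
for _ in range(STEPS):
    r = R_ON if conducting else R_OFF
    # Current balance on the capacitor: input current minus current through the device.
    v += (I_IN - v / r) * DT / C
    if not conducting and v >= V_ON:      # insulator -> conductor: voltage collapses
        conducting, spikes = True, spikes + 1
    elif conducting and v <= V_OFF:       # conductor -> insulator: voltage ramps up again
        conducting = False

print(f"Spike-like switching events in {STEPS * DT * 1e3:.0f} ms: {spikes}")
```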
To evaluate if their synthetic neurons can solve real-world problems, the researchers first wired 24 such nanoscale devices together in a network inspired by the connections between the brain’s cortex and thalamus, a well-known neural pathway involved in pattern recognition. Next, they used this system to solve a toy version of the viral quasispecies reconstruction problem, where mutant variations of a virus are identified without a reference genome.
As data inputs, the researchers introduced short gene fragments to the network. Then, by programming the strength of connections between the artificial neurons within the network, they established basic rules about joining these genetic fragments. The jigsaw puzzle-like task for the network was to list mutations in the virus’ genome based on these short genetic segments.
[Image: Networks of artificial neurons connected together can solve toy versions of the viral quasispecies reconstruction problem. Credit: Rachel Barton/Texas A&M Engineering]
The researchers found that within a few microseconds, their network of artificial neurons settled down in a state that was indicative of the genome for a mutant strain.
Williams and Kumar noted this result is proof of principle that their neuromorphic systems can quickly perform tasks in an energy-efficient way.
The researchers said the next steps in their research will be to expand the repertoire of the problems that their brain-like networks can solve by incorporating other firing patterns and some hallmark properties of the human brain like learning and memory. They also plan to address hardware challenges for implementing their technology on a commercial scale.
“Calculating the national debt or solving some large-scale simulation is not the type of task the human brain is good at and that’s why we have digital computers. Alternatively, we can leverage our knowledge of neuronal connections for solving problems that the brain is exceptionally good at,” said Williams. “We have demonstrated that depending on the type of problem, there are different and more efficient ways of doing computations other than the conventional methods using digital computers with transistors.”
Ziwen Wang from Stanford University also contributed to this research.
This research was funded by the National Science Foundation, the Department of Energy and the Texas A&M X-Grants program.
A team of researchers in the United States and Japan reports that spinal cord stimulation (SCS) measurably decreased pain and reduced motor symptoms of Parkinson’s disease, both as a singular therapy and as a “salvage therapy” after deep brain stimulation (DBS) therapies were ineffective.
Writing in the September 28, 2020 issue of Bioelectronic Medicine, first author Krishnan Chakravarthy, MD, PhD, assistant professor of anesthesiology at University of California San Diego School of Medicine, and colleagues recruited 15 patients with Parkinson’s disease, a neurodegenerative disorder that is commonly characterized by physical symptoms, such as tremors and progressive difficulty walking and talking, and non-motor symptoms, such as pain and mental or behavioral changes.
The mean age of the patients was 74, with an average disease duration of 17 years. All of the patients were experiencing pain not alleviated by previous treatments. Eight had undergone earlier DBS, a therapy in which surgically implanted electrodes deliver electrical currents to stimulate specific parts of the brain. Seven patients had received only drug treatments previously.
Researchers implanted percutaneous (through the skin) electrodes near the spines of the patients, who then chose one of three types of electrical stimulation: continuous, on-off bursts or continuous bursts of varying intensity.
Following continuous programmed treatment post-implantation, the researchers said all patients reported significant improvement, based on the Visual Analogue Scale, a measurement of pain intensity, with a mean reduction of 59 percent across all patients and stimulation modes.
Seventy-three percent of patients showed improvement in the 10-meter walk, a test that measures walking speed to assess functional mobility and gait, with an average improvement of 12 percent.
And 64 percent of patients experienced improvements in the Timed Up and Go (TUG) test, which measures how long it takes a person to rise from a chair, walk three meters, turn around, walk back to the chair and sit down. TUG assesses physical balance and stability, both standing and in motion. Average TUG improvement was 21 percent.
The authors said the findings suggest SCS may have therapeutic benefit for patients with Parkinson’s in terms of treatment for pain and motor symptoms, though they noted further studies are needed to determine whether improved motor function is due to neurological changes caused by SCS or simply decreased pain.
“We are seeing growing data on novel uses of spinal cord stimulation and specific waveforms on applications outside of chronic pain management, specifically Parkinson’s disease,” said Chakravarthy, pain management specialist at UC San Diego Health. “The potential ease of access and implantation of stimulators in the spinal cord compared to the brain suggests that this is a very exciting area for future exploration.”
Co-authors include: Rahul Chaturvedi and Rajiv Reddy, UC San Diego; Takashi Agari, Tokyo Metropolitan Neurological Hospital; Hirokazu Iwamuro, Juntendo University, Tokyo; and Ayano Matsui, National Center Hospital of Neurology and Psychiatry, Tokyo.
Temperatures at Earth’s highest latitudes were nearly as warm after Antarctica’s polar ice sheets developed as they were prior to glaciation, according to a new study led by Yale University. The finding upends most scientists’ basic understanding of how ice and climate develop over long stretches of time.
The study, based on a reconstruction of global surface temperatures, gives researchers a better understanding of a key moment in Earth’s climate history — when it transitioned from a “greenhouse” state to an “icehouse” state. The study appears in the journal Proceedings of the National Academy of Sciences the week of Sept. 28.
“This work fills in an important, largely unwritten chapter in Earth’s surface temperature history,” said Pincelli Hull, assistant professor of earth and planetary studies at Yale, and senior author of the study.
Charlotte O’Brien, a former Yale Institute for Biospheric Studies (YIBS) Donnelley Postdoctoral Fellow who is now a postdoctoral research associate at University College London, is the study’s lead author.
During the Eocene epoch (from 56 to 34 million years ago), temperatures at Earth’s higher latitudes were much higher than they are today. The formation of polar ice sheets began near the end of the Eocene — and has been linked by many scientists to the onset of global cooling during the Oligocene epoch (33.9 to 23 million years ago).
Although there has been much scientific focus on the development of Antarctic glaciation, there have been relatively few sea surface temperature records for the Oligocene.
The researchers generated new sea surface temperature models for the Oligocene at two ocean sites in the western tropical Atlantic and the southwestern Atlantic. They combined the new data with other existing sea surface temperature estimates for the Oligocene and Eocene epochs, plus data from climate modeling.
The result was a reconstruction of how surface temperatures evolved at a key moment in Earth’s climate history, as it transitioned from a greenhouse state to an icehouse state with Antarctic glaciation.
“Our analysis revealed that Oligocene ‘icehouse’ surface temperatures were almost as warm as those of the late Eocene ‘greenhouse’ climate,” O’Brien said.
The study estimated that global mean surface temperatures (GMSTs) during the Oligocene were roughly 71 to 75 degrees Fahrenheit, similar to late Eocene GMSTs of about 73 degrees Fahrenheit. For context, in 2019 the GMST average was 58.7 degrees Fahrenheit, according to the National Oceanic and Atmospheric Administration.
“This challenges our basic understanding of how the climate works, as well as the relationship between climate and ice volume through time,” O’Brien said.
The late Yale professor Mark Pagani was a co-author of the study. Additional co-authors were Yale senior research scientist Ellen Thomas, former Yale researchers James Super and Leanne Elder, and Purdue University professor Matthew Huber.
The National Science Foundation and the YIBS Donnelley Environmental Fellowship program funded the research.
A new Duke University-led analysis shows that during the early months of the COVID pandemic, the average number of new infections caused by each infected individual (i.e. the basic reproduction number, R0) was 4.5, more than double the initial estimate of 2.2 made by the World Health Organization at the time.
At that higher rate of infectious spread, governments had just 20 days from the first reported cases to implement non-pharmaceutical interventions stringent enough to reduce the transmission rate to below 1.1 and prevent widespread infections and deaths, the analysis shows.
If delays in implementing these interventions allowed the reproduction rate to remain above 2.7 for at least 44 days – as was the case in many of the 57 countries studied – any subsequent interventions were unlikely to be effective.
“These numbers confirm that we only had a small window of time to act, and unfortunately that’s not what happened in most countries,” said Gabriel Katul, the Theodore S. Coile Distinguished Professor of Hydrology and Micrometeorology at Duke, who led the study.
We can’t undo the consequences of that inaction, but we can use the insights from the new study to prepare for a second wave of COVID or for future pandemics, he said. Katul and his colleagues published their peer-reviewed study Sept. 24 in the open-access journal PLOS ONE.
“Being able to estimate transmission rates at different phases of a disease’s spread and under different conditions helps identify the timing and type of interventions that may work best, the hospital capacity we’ll need, and other critical considerations,” Katul said.
For instance, the new analysis estimates that achieving herd immunity from COVID requires 78% of a population to no longer be susceptible to it. That can help inform decisions about how many vaccines are needed.
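The 78% figure is consistent with the standard herd-immunity threshold formula, 1 - 1/R0, applied to the study’s R0 estimate of 4.5. The quick check below is a back-of-the-envelope illustration of that relationship, not code from the paper.

```python
# Back-of-the-envelope check: the classic herd-immunity threshold is 1 - 1/R0,
# the fraction of a population that must no longer be susceptible for each
# infection to cause, on average, fewer than one new infection.
def herd_immunity_threshold(r0: float) -> float:
    return 1.0 - 1.0 / r0

print(f"R0 = 4.5 -> threshold ~ {herd_immunity_threshold(4.5):.0%}")  # about 78%
print(f"R0 = 2.2 -> threshold ~ {herd_immunity_threshold(2.2):.0%}")  # about 55%
```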
To arrive at their estimates, the researchers used a conventional “susceptible-infectious-removed” (SIR) mathematical model to analyze confirmed new COVID cases reported daily from January to March 2020 in 57 countries. They also used the model to analyze mortalities based on the so-called Infection Fatality Rate, which accounts for both symptomatic and asymptomatic cases. The SIR model is widely used by epidemiologists to track and project changes in disease status among populations who are susceptible to a disease, infected with it, or recovered from it (and thus “removed” from the general pool).
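For readers unfamiliar with the framework, the sketch below shows a minimal SIR-style calculation of the kind described here. The R0 value matches the study’s estimate, but the infectious period, population size and time step are illustrative assumptions; this is not the authors’ code.

```python
# Minimal SIR sketch (daily time steps). R0 = beta / gamma, where beta is the
# transmission rate and gamma the removal (recovery) rate. All parameters
# other than R0 are illustrative assumptions.
def simulate_sir(r0=4.5, infectious_days=10, population=1_000_000,
                 initial_infected=10, days=365):
    gamma = 1.0 / infectious_days
    beta = r0 * gamma
    s, i, r = population - initial_infected, float(initial_infected), 0.0
    history = []
    for day in range(days):
        new_infections = beta * s * i / population
        new_removals = gamma * i
        s -= new_infections
        i += new_infections - new_removals
        r += new_removals
        history.append((day, s, i, r))
    return history

# With R0 = 4.5 and no intervention, nearly the whole population is eventually infected.
final_susceptible = simulate_sir()[-1][1]
print(f"Fraction ever infected: {1 - final_susceptible / 1_000_000:.0%}")
```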
Using the model allowed Katul and his team to chart the disease’s early-phase transmission rate under different conditions and intervention scenarios; identify changes in those rates over time; and project how many cases and deaths ultimately might occur under different intervention scenarios until herd immunity is achieved. It also allowed them to determine, in hindsight, how soon intervention strategies should have been put into place to slow or stop the virus’ spread.
To explore whether transmission rates differed at regional versus national scales, the scientists also used the SIR model to analyze data on new cases and deaths in individual provinces, counties or cities in Italy and the United Kingdom. Initial rates of transmission differed in some of the locations, but over time the differences evened out.
The impact of super-spreaders — infected people who infect a large number of others — was also found to even out over time.
Despite some short-term spikes caused by super-spreaders, or other factors such as ramp-ups in testing, inferred local rates of transmission all converged over time to a global average of about 4.5 new cases per infected individual where early-phase intervention was insufficient or nonexistent, Katul noted.
“In the end, it all comes down to timely, effective intervention,” he said. “The best defense against uncontrolled future outbreaks is to put stringent safety protocols in place at the first sign of an outbreak and make use of the tools science has provided us.”
The case and mortality data used in the study came from the European Centre for Disease Prevention and Control.
Katul conducted the analysis with Assaad Mrad, a doctoral student at Duke’s Nicholas School; Sara Bonetti of ETH Zurich and University College London; Gabriele Manoli of University College London; and Anthony Parolari of Marquette University. Bonetti, Manoli and Parolari all are doctoral graduates or former post-doctoral researchers at Duke University.
CITATION: “Global Convergence of COVID-19 Basic Reproduction Number and Estimation from Early-Time SIR Dynamics,” Gabriel G. Katul, Assaad Mrad, Sara Bonetti, Gabriele Manoli and Anthony J. Parolari; Sept. 24, 2020, PLOS ONE. DOI: 10.1371/journal.pone.0239800
Note: Gabriel Katul is available for additional comment at gaby@duke.edu.
Dementia is a growing problem for people as they age, but it often goes undiagnosed. Now investigators at Harvard-affiliated Massachusetts General Hospital (MGH) and Beth Israel Deaconess Medical Center have discovered and validated a marker of dementia that may help clinicians identify patients who have the condition or are at risk of developing it. The findings are published in JAMA Network Open.
The team recently created the Brain Age Index (BAI), a model that relies on artificial intelligence and a large set of sleep data to estimate the difference between a person’s chronological age and the biological age of their brain, as inferred from electrical measurements (an electroencephalogram, or EEG) taken during sleep. A higher BAI signifies deviation from normal brain aging, which could reflect the presence and severity of dementia.
“The model computes the difference between a person’s chronological age and how old their brain activity during sleep ‘looks,’ to provide an indication of whether a person’s brain is aging faster than is normal,” said senior author M. Brandon Westover, investigator in the Department of Neurology at MGH and director of Data Science at the MGH McCance Center for Brain Health. “This is an important advance, because before now it has only been possible to measure brain age using brain imaging with magnetic resonance imaging, which is much more expensive, not easy to repeat, and impossible to measure at home,” added Elissa Ye, the first author of the study and a member of Westover’s laboratory. She noted that sleep EEG tests are increasingly accessible in non-sleep laboratory environments, using inexpensive technologies such as headbands and dry EEG electrodes.
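The published model is considerably more elaborate, but the basic recipe (predict age from sleep-EEG features, then subtract chronological age) can be sketched roughly as follows. The band-power features and the regressor chosen here are illustrative assumptions, not the authors’ pipeline.

```python
# Rough sketch of the brain-age idea: train a regressor to predict chronological
# age from sleep-EEG features, then define BAI = predicted ("brain") age minus
# chronological age. Features and model here are illustrative assumptions only.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import GradientBoostingRegressor

def bandpower_features(eeg, fs=200):
    """Relative power in classic EEG bands; eeg has shape (n_channels, n_samples)."""
    bands = [(0.5, 4), (4, 8), (8, 12), (12, 30)]   # delta, theta, alpha, beta
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 4, axis=-1)
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].sum(axis=-1) / psd.sum(axis=-1))
    return np.concatenate(feats)                    # one feature vector per night

def fit_brain_age_model(X, ages):
    """X: (n_recordings, n_features); ages: chronological age at each recording."""
    return GradientBoostingRegressor().fit(X, ages)

def brain_age_index(model, features, chronological_age):
    """Positive BAI means the brain 'looks' older than the person's actual age."""
    return float(model.predict(features[None, :])[0]) - chronological_age
```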
To test whether high BAI values obtained through EEG measurements may be indicative of dementia, the researchers computed values for 5,144 sleep tests in 88 individuals with dementia, 44 with mild cognitive impairment, 1,075 with cognitive symptoms but no diagnosis of impairment, and 2,336 without dementia. BAI values rose across the groups as cognitive impairment increased, and patients with dementia had BAI values averaging about four years higher than those without dementia. BAI values also correlated with neuropsychiatric scores from standard cognitive assessments conducted by clinicians before or after the sleep study.
“Because it is quite feasible to obtain multiple nights of EEG, even at home, we expect that measuring BAI will one day become a routine part of primary care, as important as measuring blood pressure,” said co-senior author Alice D. Lam, an investigator in the Department of Neurology at MGH. “BAI has potential as a screening tool for the presence of underlying neurodegenerative disease and for monitoring of disease progression.”
As the globe warms, the atmosphere is becoming more unstable while the oceans are becoming more stable, according to an international team of climate scientists. They say the increase in ocean stability is greater than predicted, and that a more stable ocean will absorb less carbon and be less productive.
Stable conditions in the atmosphere favor fair weather. However, when the ocean is stable, the layers of the ocean do not mix. Cooler, oxygenated water from beneath does not rise up and deliver oxygen and nutrients to waters near the surface, and warm surface water does not absorb carbon dioxide and bury it at depth.
“The same process, global warming, is both making the atmosphere less stable and the oceans more stable,” said Michael Mann, distinguished professor of atmospheric sciences and director of the Earth System Science Center at Penn State. “Water near the ocean’s surface is warming faster than the water below. That makes the oceans become more stable.”
Just as hot air rises, as seen in the formation of towering clouds, hot water rises as well because it is less dense than cold water. If the hottest water sits on top, vertical mixing in the oceans slows. In addition, meltwater from glaciers introduces fresh water into the upper layers of the ocean; fresh water is less dense than salt water, so it too tends to remain at the surface. Both a warmer surface and a fresher, less salty surface therefore increase ocean stratification and reduce ocean mixing.
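One common way to quantify stratification (not necessarily the exact metric used in this study) is the squared buoyancy frequency, N^2 = -(g/rho0) * d(rho)/dz: the larger N^2 is near the surface, the harder it is for layers to mix. The sketch below uses a simple linearized equation of state in which warming and freshening both make surface water lighter; the coefficients and profiles are textbook-style assumptions, not values from the paper.

```python
# Illustrative stratification calculation: squared buoyancy frequency
# N^2 = -(g / rho0) * d(rho)/dz, with a linearized equation of state.
# Coefficients and profiles are textbook-style assumptions, not study values.
import numpy as np

G = 9.81          # gravitational acceleration, m s^-2
RHO0 = 1025.0     # reference seawater density, kg m^-3
ALPHA = 2.0e-4    # thermal expansion coefficient, 1/K
BETA = 7.6e-4     # haline contraction coefficient, 1/(g/kg)

def density(temp_c, salinity):
    """Linearized density: warmer or fresher water is lighter."""
    return RHO0 * (1.0 - ALPHA * (temp_c - 10.0) + BETA * (salinity - 35.0))

def buoyancy_frequency_sq(depth_m, temp_c, salinity):
    """N^2 profile; larger values mean stronger stratification and less mixing."""
    rho = density(temp_c, salinity)
    # z points upward while depth increases downward, so d(rho)/dz = -d(rho)/d(depth),
    # and N^2 = -(g/rho0) * d(rho)/dz = (g/rho0) * d(rho)/d(depth).
    return (G / RHO0) * np.gradient(rho, depth_m)

# Example: a warm, slightly fresher surface layer over colder, saltier deep water.
depth = np.linspace(0.0, 500.0, 51)
temp = 12.0 - 8.0 * depth / 500.0              # cools with depth
salt = np.where(depth < 100.0, 34.5, 35.0)     # fresher near the surface
print(buoyancy_frequency_sq(depth, temp, salt)[:5])
```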
“The ability of the oceans to bury heat from the atmosphere and mitigate global warming is made more difficult when the ocean becomes more stratified and there is less mixing,” said Mann. “Less downward mixing of warming waters means the ocean surface warms even faster, leading, for example, to more powerful hurricanes. Global climate models underestimate these trends.”
Mann and his team are not the first to investigate the impact of a warming climate on ocean stratification, but they are looking at the problem in a different way. The team has gone deeper into the ocean than previous research and they have a more sophisticated method of dealing with gaps in the data. They report their results today (Sept. 29) in Nature Climate Change.
“Other researchers filled in gaps in the data with long-term averages,” said Mann. “That tends to suppress any trends that are present. We used an ocean model to fill in the gaps, allowing the physics of the model to determine the most likely values of the missing data points.”
According to Mann, this is a more dynamic approach.
“Using the more sophisticated physics-based method, we find that ocean stability is increasing faster than we thought before and faster than models predict, with worrying potential consequences,” he said.
Other researchers on this project were Guancheng Li, Lijing Cheng and Jiang Zhu of the International Center for Climate and Environment Sciences, Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing, the Center for Ocean Mega-Science, Qingdao, and the University of Chinese Academy of Sciences, Beijing; Kevin E. Trenberth, National Center for Atmospheric Research; and John P. Abraham, School of Engineering, University of St. Thomas, St. Paul, Minnesota.
The Chinese Academy of Sciences, National Key R&D Program of China, National Center for Atmospheric Research and the U.S. National Science Foundation supported this research.
It saved lives in past epidemics of lung-damaging viruses. Now, the life-support option known as ECMO appears to be doing the same for many of the critically ill COVID-19 patients who receive it, according to an international study led by a University of Michigan researcher.
The 1,035 patients in the study faced a staggeringly high risk of death, as ventilators and other care failed to support their lungs. But after they were placed on ECMO—extracorporeal membrane oxygenation—their actual death rate was less than 40%. That’s similar to the rate for patients treated with ECMO in past outbreaks of lung-damaging viruses and other severe forms of viral pneumonia.
The new study published in The Lancet provides strong support for the use of ECMO in appropriate patients as the pandemic rages worldwide. It may help more hospitals that have ECMO capability understand which of their COVID-19 patients might benefit from the technique, which channels blood out of the body and into a circuit of equipment that adds oxygen directly to the blood before pumping it back into regular circulation.
Still, the international team of authors cautions that patients who show signs of needing advanced life support should receive it at hospitals with experienced ECMO teams, and that hospitals shouldn’t try to add ECMO capability mid-pandemic.
Global cooperation to achieve results
The study was made possible by a rapidly created international registry that has given critical care experts near real-time data on the use of ECMO in COVID-19 patients since early in the year.
Hosted by the Extracorporeal Life Support Organization (ELSO), the registry includes data submitted by the 213 hospitals on four continents whose patients were included in the new analysis. The study includes data on patients age 16 or older who were started on ECMO between January 16 and May 1, and follows them until death, discharge from the hospital, or August 5, whichever occurred first.
“These results from hospitals experienced in providing ECMO are similar to past reports of ECMO-supported patients with other forms of acute respiratory distress syndrome or viral pneumonia,” says co-lead author Ryan Barbaro of Michigan Medicine, U-M’s academic medical center. “These results support recommendations to consider ECMO in COVID-19 if the ventilator is failing. We hope these findings help hospitals make decisions about this resource-intensive option.”
Co-lead author Graeme MacLaren of the National University Health System in Singapore said most centers in the study did not need to use ECMO for COVID-19 very often.
“By bringing data from over 200 international centers together into the same study, ELSO has deepened our knowledge about the use of ECMO for COVID-19 in a way that would be impossible for individual centers to learn on their own,” he said.
Insights into patient outcomes
Seventy percent of the patients in the study were transferred to the hospital where they received ECMO. Half of these were actually started on ECMO—likely by the receiving hospital’s team—before they were transferred. This reinforces the importance of communication between ECMO-capable hospitals and non-ECMO hospitals that might have COVID-19 patients who could benefit from ECMO.
The new study could also help identify which patients will benefit most if they are placed on ECMO.
“Our findings also show that mortality risk rises significantly with patient age, and that those who are immunocompromised, have acute kidney injuries, worse ventilator outcomes or COVID-19-related cardiac arrests are less likely to survive,” said Barbaro, who chairs ELSO’s COVID-19 registry committee and provides ECMO care as a pediatric intensive care physician at U-M’s C.S. Mott Children’s Hospital.
“Those who need ECMO to replace cardiac function as well as lung function also did worse. All of this knowledge can help centers and families understand what patients might face if they are placed on ECMO.”
Co-senior author Daniel Brodie of New York Presbyterian Hospital said the lack of reliable information early in the pandemic hampered the research team’s ability to understand the role of ECMO for COVID-19.
“The results of this large-scale international registry study, while hardly definitive evidence, provide a real-world understanding of the potential for ECMO to save lives in a highly selected population of COVID-19 patients,” said Brodie, who shares senior authorship with Roberto Lorusso of the Maastricht University Medical Center in the Netherlands and Alain Combes of Sorbonne University in Paris.
A robust statistical approach
Because the ELSO database does not track what happens to patients once they are discharged to home, other hospitals and long-term acute care or rehabilitation facilities, the study used a statistical approach based on in-hospital mortality up to 90 days after the patient was put on ECMO. This also allowed the team to account for the 67 patients who were still in the hospital as of August 5, whether they were still on ECMO, in the ICU or in step-down units.
The study tracked the outcomes for more than 1,000 patients for 90 days after they were placed on ECMO life support.
Philip Boonstra of the U-M School of Public Health helped design the study using a “competing risk” approach, based on his experience handling the statistical design and analysis of long-term data from clinical trials for cancer.
“We used 90-day in-hospital mortality because this is the highest-risk period and because it allows us to use the information we have to the fullest, even if we don’t know the final outcome for every patient,” he said.
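As a rough illustration of the competing-risks idea (in-hospital death competing with discharge alive, and still-hospitalized patients treated as censored), a minimal cumulative-incidence estimator might look like the sketch below. The function and toy data are hypothetical and are not the study’s analysis code.

```python
# Minimal Aalen-Johansen-style cumulative incidence sketch for competing risks:
# event 1 = in-hospital death, event 2 = discharged alive, 0 = censored
# (still hospitalized). Illustrative only; not the study's analysis code.
import numpy as np

def cumulative_incidence(days, events, event_of_interest=1, horizon=90):
    """days: days from ECMO start to event/censoring; events: codes 0, 1 or 2."""
    days = np.asarray(days, dtype=float)
    events = np.asarray(events)
    order = np.argsort(days)
    days, events = days[order], events[order]

    n_at_risk = len(days)
    event_free = 1.0   # probability of no event of any kind so far
    cif = 0.0          # cumulative incidence of the event of interest
    for t in np.unique(days[days <= horizon]):
        at_t = days == t
        d_interest = np.sum(at_t & (events == event_of_interest))
        d_any = np.sum(at_t & (events != 0))
        cif += event_free * d_interest / n_at_risk
        event_free *= 1.0 - d_any / n_at_risk
        n_at_risk -= np.sum(at_t)   # drop events and censorings from the risk set
    return cif

# Toy example: five hypothetical patients (days on study, outcome codes).
print(cumulative_incidence([12, 30, 45, 60, 80], [1, 2, 1, 0, 2]))  # ~0.40
```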
Having data through August, when only a small number of the patients in the study remained in the hospital, was important, though follow-up data are missing for a few patients. And even though patients who were discharged to their homes or a rehabilitation facility will likely have a long recovery ahead after the intensive level of care involved in ECMO, past data suggest they are likely to survive. However, the fate of those who went to LTAC facilities, which provide long-term care at a near-ICU level, is less certain.
More about the study and next steps
More than half of the patients in the study were treated in hospitals in the United States and Canada, including Michigan Medicine’s own hospitals. U-M’s Robert Bartlett, emeritus professor of surgery and a co-author of the new paper, is considered a key figure in the development of ECMO, including the first use in adults in the 1980s. He led the development of the initial guidance for the use of ECMO in COVID-19.
“ECMO is the final step in the algorithm for managing life-threatening lung failure in advanced ICUs,” Bartlett said. “Now we know it is effective in COVID-19.”
As of Aug. 5, 380 of the patients in the study had died in the hospital, more than 80% of them within 24 hours of a proactive decision to discontinue ECMO care because of a poor prognosis. Another 57% of the patients had gone home or to a rehabilitation center (311 patients) or had been discharged to another hospital or a long-term acute care center (277 patients). The rest were still in the hospital but had reached 90 days after the start of ECMO.
The new study adds to the information used to create the ECMO COVID-19 guidelines published by ELSO, which are based in part on past randomized controlled trials of ECMO’s use in ARDS.
Barbaro and others are studying the longer-term effects of ECMO care for any patient; he leads a team that has recently received a National Institutes of Health grant for a long-term study of children who have survived after treatment with ECMO.
Meanwhile, the ELSO registry continues to track the care of patients placed on ECMO because of COVID-19. Christine Stead, chief executive officer of ELSO, credits the rapid pivot and intense teamwork among ECMO centers and their staff for the strength of the new paper.
“We started with a WeChat dialogue with teams in China, who were able to share knowledge and help their counterparts in Japan be ready for the spread to their country,” she said. “We asked all the centers that take part in ELSO to change their practice, and begin entering data about patients as soon as they were placed on ECMO, rather than waiting until they were discharged from the hospital. This has allowed us to achieve something that will help hospitals make more informed decisions, based on meaningful data, as the pandemic continues.”