Babies react to taste and smell in the womb, study finds: carrot prompts a “laughter-face” response, kale a “cry-face” response

A study led by Durham University’s Fetal and Neonatal Research Lab, UK, took 4D ultrasound scans of 100 pregnant women to see how their unborn babies responded after being exposed to flavours from foods eaten by their mothers.

Researchers looked at how the fetuses reacted to either carrot or kale flavours just a short time after the flavours had been ingested by the mothers.

Fetuses exposed to carrot showed more “laughter-face” responses while those exposed to kale showed more “cry-face” responses.

Their findings could further our understanding of the development of human taste and smell receptors.

The researchers also believe that what pregnant women eat might influence babies’ taste preferences after birth and potentially have implications for establishing healthy eating habits.

The study is published in the journal Psychological Science.


Humans experience flavour through a combination of taste and smell. In fetuses it is thought that this might happen through inhaling and swallowing the amniotic fluid in the womb.

Lead researcher Beyza Ustun, a postgraduate researcher in the Fetal and Neonatal Research Lab, Department of Psychology, Durham University, said: “A number of studies have suggested that babies can taste and smell in the womb, but they are based on post-birth outcomes while our study is the first to see these reactions prior to birth.

“As a result, we think that this repeated exposure to flavours before birth could help to establish food preferences post-birth, which could be important when thinking about messaging around healthy eating and the potential for avoiding ‘food-fussiness’ when weaning.

“It was really amazing to see unborn babies’ reaction to kale or carrot flavours during the scans and share those moments with their parents.”

The research team, which also included scientists from Aston University, Birmingham, UK, and the National Centre for Scientific Research-University of Burgundy, France, scanned the mothers, aged 18 to 40, at both 32 weeks and 36 weeks of pregnancy to see fetal facial reactions to the kale and carrot flavours.

Mothers were given a single capsule containing approximately 400mg of carrot powder or 400mg of kale powder around 20 minutes before each scan. They were asked not to consume any food or flavoured drinks in the hour before their scans.

[Image: A 4D scan image of a fetus showing a neutral face. Credit: FETAP (Fetal Taste Preferences) Study, Fetal and Neonatal Research Lab, Durham University.]

The mothers also did not eat or drink anything containing carrot or kale on the day of their scans to control for factors that could affect fetal reactions.

Facial reactions seen in both flavour groups, compared with fetuses in a control group who were not exposed to either flavour, showed that exposure to just a small amount of carrot or kale flavour was enough to stimulate a reaction.

Co-author Professor Nadja Reissland, head of the Fetal and Neonatal Research Lab, Department of Psychology, Durham University, supervised Beyza Ustun’s research. She said: “Previous research conducted in my lab has suggested that 4D ultrasound scans are a way of monitoring fetal reactions to understand how they respond to maternal health behaviours such as smoking, and their mental health including stress, depression, and anxiety.

“This latest study could have important implications for understanding the earliest evidence for fetal abilities to sense and discriminate different flavours and smells from the foods ingested by their mothers.”

Co-author Professor Benoist Schaal, of the National Centre for Scientific Research-University of Burgundy, France, said: “Looking at fetuses’ facial reactions, we can assume that a range of chemical stimuli pass through the maternal diet into the fetal environment.

“This could have important implications for our understanding of the development of our taste and smell receptors, and related perception and memory.”

The researchers say their findings might also help with information given to mothers about the importance of taste and healthy diets during pregnancy.

They have now begun a follow-up study with the same babies post-birth to see if the influence of flavours they experienced in the womb affects their acceptance of different foods.

Research co-author Professor Jackie Blissett, of Aston University, said: “It could be argued that repeated prenatal flavour exposures may lead to preferences for those flavours experienced postnatally. In other words, exposing the fetus to less ‘liked’ flavours, such as kale, might mean they get used to those flavours in utero.

“The next step is to examine whether fetuses show less ‘negative’ responses to these flavours over time, resulting in greater acceptance of those flavours when babies first taste them outside of the womb.”

Related: http://dx.doi.org/10.1177/09567976221105460


Using machine learning to improve patient care

Doctors are often deluged by signals from charts, test results, and other metrics they must keep track of. It can be difficult to integrate and monitor all of these data for multiple patients while making real-time treatment decisions, especially when the data are documented inconsistently across hospitals.

In a new pair of papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) explore ways for computers to help doctors make better medical decisions.

One team created a machine-learning approach called “ICU Intervene” that takes large amounts of intensive-care-unit (ICU) data, from vitals and labs to notes and demographics, to determine what kinds of treatments are needed for different symptoms. The system uses “deep learning” to make real-time predictions, learning from past ICU cases to make suggestions for critical care, while also explaining the reasoning behind these decisions.

“The system could potentially be an aid for doctors in the ICU, which is a high-stress, high-demand environment,” says PhD student Harini Suresh, lead author on the paper about ICU Intervene. “The goal is to leverage data from medical records to improve health care and predict actionable interventions.”

Another team developed an approach called “EHR Model Transfer” that allows predictive models trained on data from one electronic health record (EHR) system to be applied to a different EHR system. Specifically, using this approach the team showed that predictive models for mortality and prolonged length of stay can be trained on one EHR system and used to make predictions in another.

ICU Intervene was co-developed by Suresh, undergraduate student Nathan Hunt, postdoc Alistair Johnson, researcher Leo Anthony Celi, MIT Professor Peter Szolovits, and PhD student Marzyeh Ghassemi. It was presented this month at the Machine Learning for Healthcare Conference in Boston.

EHR Model Transfer was co-developed by lead authors Jen Gong and Tristan Naumann, both PhD students at CSAIL, as well as Szolovits and John Guttag, who is the Dugald C. Jackson Professor in Electrical Engineering. It was presented at the ACM SIGKDD Conference on Knowledge Discovery and Data Mining in Halifax, Canada.

Both models were trained using data from the critical care database MIMIC, which includes de-identified data from roughly 40,000 critical care patients and was developed by the MIT Lab for Computational Physiology.

ICU Intervene

Integrated ICU data is vital to automating the process of predicting patients’ health outcomes.

“Much of the previous work in clinical decision-making has focused on outcomes such as mortality (likelihood of death), while this work predicts actionable treatments,” Suresh says. “In addition, the system is able to use a single model to predict many outcomes.”

ICU Intervene focuses on hourly prediction of five different interventions that cover a wide variety of critical care needs, such as breathing assistance, improving cardiovascular function, lowering blood pressure, and fluid therapy.

At each hour, the system extracts values from the data that represent vital signs, as well as clinical notes and other data points. All of the data are represented with values that indicate how far off a patient is from the average (to then evaluate further treatment).
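As a rough illustration of this kind of representation (the feature names and population statistics below are hypothetical, not the authors’ actual pipeline or MIMIC values), each hourly measurement can be expressed as its deviation from a population average in units of standard deviations:

```python
import numpy as np

# Hypothetical population statistics for three vital signs:
# heart rate (bpm), systolic blood pressure (mmHg), SpO2 (%).
# Illustrative values only, not taken from the MIMIC database.
POP_MEAN = np.array([80.0, 120.0, 98.0])
POP_STD = np.array([15.0, 20.0, 2.0])

def hourly_features(raw_vitals):
    """Represent each measurement as how far it sits from the
    population average, measured in standard deviations."""
    raw = np.asarray(raw_vitals, dtype=float)
    return (raw - POP_MEAN) / POP_STD

# A tachycardic, hypotensive, hypoxic patient stands out clearly:
z = hourly_features([125.0, 85.0, 91.0])
# z = [3.0, -1.75, -3.5] — every value is several standard
# deviations from normal, which the model can pick up on.
```

Standardizing this way puts heterogeneous signals (beats per minute, mmHg, percentages) on one common scale, so a single model can weigh them against each other.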

Importantly, ICU Intervene can make predictions far into the future. For example, the model can predict whether a patient will need a ventilator six hours later rather than just 30 minutes or an hour later. The team also focused on providing reasoning for the model’s predictions, giving physicians more insight.
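Predicting hours ahead amounts to pairing each hour’s features with a label drawn from a later hour. A minimal sketch of that fixed-horizon setup (toy data and function names of my own, not the paper’s code):

```python
def make_training_pairs(vitals_by_hour, ventilator_by_hour, horizon=6):
    """Pair each hour's features with whether a ventilator was
    needed `horizon` hours later — a simplified sketch of
    fixed-horizon intervention prediction."""
    pairs = []
    for t in range(len(vitals_by_hour) - horizon):
        x = vitals_by_hour[t]          # features observed at hour t
        y = ventilator_by_hour[t + horizon]  # outcome at hour t+6
        pairs.append((x, y))
    return pairs

vitals = [[80, 120, 98]] * 10               # ten hours of toy vitals
vent = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]       # ventilator needed from hour 6
pairs = make_training_pairs(vitals, vent)
# The very first training example already carries a positive label,
# because the patient needed a ventilator six hours after hour 0.
```

A model trained on pairs like these learns to raise a flag six hours before the intervention is needed, rather than at the last minute.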

“Deep neural-network-based predictive models in medicine are often criticized for their black-box nature,” says Nigam Shah, an associate professor of medicine at Stanford University who was not involved in the paper. “However, these authors predict the start and end of medical interventions with high accuracy, and are able to demonstrate interpretability for the predictions they make.”

The team found that the system outperformed previous work in predicting interventions, and was especially good at predicting the need for vasopressors, medications that tighten blood vessels and raise blood pressure.

In the future, the researchers will be trying to improve ICU Intervene to be able to give more individualized care and provide more advanced reasoning for decisions, such as why one patient might be able to taper off steroids, or why another might need a procedure like an endoscopy.

EHR Model Transfer

Another important consideration for leveraging ICU data is how it’s stored and what happens when that storage method gets changed. Existing machine-learning models need data to be encoded in a consistent way, so the fact that hospitals often change their EHR systems can create major problems for data analysis and prediction.

That’s where EHR Model Transfer comes in. The approach works across different versions of EHR platforms, using natural language processing to identify clinical concepts that are encoded differently across systems and then mapping them to a common set of clinical concepts (such as “blood pressure” and “heart rate”).
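The core idea — re-keying differently encoded records onto one shared vocabulary — can be sketched in a few lines. The system codes and mappings below are invented for illustration; the actual approach uses natural language processing and medical ontologies rather than hand-written tables:

```python
# Hypothetical raw field names from two different EHR systems,
# each mapped to a shared vocabulary of clinical concepts.
SYSTEM_A = {"HR_bpm": "heart rate", "SysBP": "blood pressure"}
SYSTEM_B = {"pulse_rate": "heart rate", "BP_systolic": "blood pressure"}

def to_common_concepts(record, mapping):
    """Re-key a patient record onto the shared concept vocabulary,
    dropping fields the mapping does not cover."""
    return {mapping[k]: v for k, v in record.items() if k in mapping}

# The same patient, encoded two different ways:
a = to_common_concepts({"HR_bpm": 72, "SysBP": 118}, SYSTEM_A)
b = to_common_concepts({"pulse_rate": 72, "BP_systolic": 118}, SYSTEM_B)
# Both records now live in the same feature space, so a model
# trained on one system's data can score the other's.
```

Once both systems express their data over the same concepts, a predictive model becomes portable: it never needs to know which raw encoding the record originally used.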

For example, a patient switching hospitals might need their data transferred to a different type of EHR platform. EHR Model Transfer aims to ensure that a model could still predict aspects of that patient’s ICU visit, such as their likelihood of a prolonged stay or even of dying in the unit.

“Machine-learning models in health care often suffer from low external validity, and poor portability across sites,” says Shah. “The authors devise a nifty strategy for using prior knowledge in medical ontologies to derive a shared representation across two sites that allows models trained at one site to perform well at another site. I am excited to see such creative use of codified medical knowledge in improving portability of predictive models.”

With EHR Model Transfer, the team tested their model’s ability to predict two outcomes: mortality and the need for a prolonged stay. They trained it on one EHR platform and then tested its predictions on a different platform. EHR Model Transfer was found to outperform baseline approaches and demonstrated better transfer of predictive models across EHR versions compared to using EHR-specific events alone.

In the future, the EHR Model Transfer team plans to evaluate the system on data and EHR systems from other hospitals and care settings.

Both papers were supported, in part, by the Intel Science and Technology Center for Big Data and the National Library of Medicine. The paper detailing EHR Model Transfer was additionally supported by the National Science Foundation and Quanta Computer, Inc.