How the mother’s mood influences her baby’s ability to speak

Up to 70 percent of mothers develop a postnatal depressive mood, also known as the baby blues, after their baby is born. Analyses show that this can also affect the children’s own development, including their speech. Until now, however, it was unclear exactly how this impairment manifests itself in infants’ early language development.

In a study, scientists at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig have now investigated how well babies can distinguish speech sounds from one another depending on their mother’s mood. This ability is considered an important prerequisite for the further steps toward well-developed language: if sounds can be distinguished from one another, so can individual words. It became clear that if mothers report a more negative mood two months after birth, their children show, on average, less mature processing of speech sounds at the age of six months.

The infants found it particularly difficult to distinguish between syllable pitches. Specifically, the development of their so-called Mismatch Response was delayed compared with that of infants whose mothers were in a more positive mood. This Mismatch Response in turn serves as a measure of how well someone can separate sounds from one another. If the development toward a pronounced Mismatch Response is delayed, this is considered an indication of an increased risk of suffering from a speech disorder later in life.

“We suspect that the affected mothers use less infant-directed speech,” explains Gesa Schaadt, postdoc at MPI CBS, professor of development in childhood and adolescence at FU Berlin and first author of the study, which has now appeared in the journal JAMA Network Open. “They probably use less pitch variation when directing speech to their infants.” This also leads to a more limited perception of different pitches in the children, she said. This perception, in turn, is considered a prerequisite for further language development.

The results show how important infant-directed speech is for children’s further language development. Speech that varies greatly in pitch and emphasizes certain parts of words more clearly – thus focusing the little ones’ attention on what is being said – is considered appropriate for infants. Mothers who suffer from depressive mood, in turn, often use more monotonous, less infant-directed speech. “To ensure the proper development of young children, appropriate support is also needed for mothers who suffer from mild upsets that often do not yet require treatment,” Schaadt says. That doesn’t necessarily mean organized intervention measures. “Sometimes it just takes the fathers to be more involved.”

The researchers investigated these relationships with the help of 46 mothers who reported different moods after giving birth. Their moods were measured using a standardized questionnaire typically used to diagnose postnatal upset. The team also used electroencephalography (EEG), which helps to measure how well babies can distinguish speech sounds from one another. For this purpose they used the so-called Mismatch Response, a specific EEG signal that shows how well the brain processes and distinguishes between different speech sounds. The researchers recorded this response in the babies at the ages of two and six months while they were presented with various syllables such as “ba,” “ga” and “bu.”
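To illustrate how a Mismatch Response is typically quantified, here is a minimal sketch with synthetic data (our illustration, not the study’s actual analysis pipeline): the averaged EEG response to a frequent “standard” syllable is subtracted from the averaged response to a rare “deviant” syllable, and the resulting difference wave is taken as the Mismatch Response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-trial EEG epochs (trials x time samples), in microvolts.
# Standards are frequent, deviants are rare, as in a typical oddball paradigm.
n_standard, n_deviant, n_samples = 400, 100, 50
standard_epochs = rng.normal(0.0, 5.0, (n_standard, n_samples))
deviant_epochs = rng.normal(0.0, 5.0, (n_deviant, n_samples))

# Simulate a deflection in the deviant response where a mismatch
# response would appear (the time window is arbitrary here).
deviant_epochs[:, 20:30] += 3.0

# Average across trials, then subtract: the difference wave is the
# mismatch response; its amplitude indexes how strongly the brain
# distinguishes the two sounds.
difference_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
peak_amplitude = difference_wave.max()
```

In real infant EEG studies the epochs would come from recorded data after filtering and artifact rejection, but the deviant-minus-standard subtraction shown here is the core of the measure.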


How ‘Digital mask’ protects patients’ privacy

Scientists have created a ‘digital mask’ that will allow facial images to be stored in medical records while preventing potentially sensitive personal biometric information from being extracted and shared.

In research published today in Nature Medicine, a team led by scientists from the University of Cambridge and Sun Yat-sen University in Guangzhou, China, used three-dimensional (3D) reconstruction and deep learning algorithms to erase identifiable features from facial images while retaining disease-relevant features needed for diagnosis.

Facial images can be useful for identifying signs of disease. For example, features such as deep forehead wrinkles and wrinkles around the eyes are significantly associated with coronary heart disease, while abnormal changes in eye movement can indicate poor visual function and visual cognitive developmental problems. However, facial images also inevitably record other biometric information about the patient, including their race, sex, age and mood.

Graphic showing the digital masking process. Photo: Professor Haotian Lin’s research group

With the increasing digitalisation of medical records comes the risk of data breaches. While most patient data can be anonymised, facial data is more difficult to anonymise while retaining essential information. Common methods, including blurring and cropping identifiable areas, may lose important disease-relevant information, yet even so cannot fully evade face recognition systems.

Due to privacy concerns, people often hesitate to share their medical data for public medical research or electronic health records, hindering the development of digital medical care.

Professor Haotian Lin from Sun Yat-sen University said: “During the COVID-19 pandemic, we had to turn to consultations over the phone or by video link rather than in person. Remote healthcare for eye diseases requires patients to share a large amount of digital facial information. Patients want to know that their potentially sensitive information is secure and that their privacy is protected.”

Professor Lin and colleagues developed a ‘digital mask’, which takes an original video of a patient’s face and, using a deep learning algorithm and 3D reconstruction, outputs a video that discards as much of the patient’s personal biometric information as possible – and from which it was not possible to identify the individual.

Deep learning extracts features from different facial parts, while 3D reconstruction automatically digitises the shapes and movement of 3D faces, eyelids, and eyeballs based on the extracted facial features. Converting the digital mask videos back to the original videos is extremely difficult because most of the necessary information is no longer retained in the mask.

Next, the researchers tested how useful the masks were in clinical practice and found that diagnosis using the digital masks was consistent with that carried out using the original videos. This suggests that the reconstruction was precise enough for use in clinical practice.

Compared to the traditional method used to ‘de-identify’ patients – cropping the image – the risk of being identified was significantly lower for the digitally-masked patients. The researchers tested this by showing 12 ophthalmologists digitally-masked or cropped images and asking them to identify the original from five other images. They correctly identified the original from the digitally-masked image in just over a quarter (27%) of cases; for the cropped image, they were able to do so in the overwhelming majority of cases (91%). Even these rates are likely over-estimates, however: in real situations, one would likely have to identify the original image from a much larger set.
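For context, since each rater picked the original from six candidate images, random guessing would succeed about one time in six. A back-of-the-envelope comparison (our illustration, not an analysis from the paper) shows how far above chance each method leaves patients identifiable:

```python
# Each ophthalmologist chose the original from 6 candidates (the original
# plus five others), so guessing at random succeeds 1/6 of the time.
chance_rate = 1 / 6  # ~0.167

masked_rate = 0.27   # reported identification rate for digitally-masked images
cropped_rate = 0.91  # reported identification rate for cropped images

# Excess identifiability above chance for each method:
masked_excess = masked_rate - chance_rate    # ~0.10
cropped_excess = cropped_rate - chance_rate  # ~0.74
```

On this rough reading, the digital mask leaves raters only about ten percentage points above pure guessing, while cropping leaves them far more able to identify the patient.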

The team surveyed randomly selected patients attending clinics to test their attitudes towards digital masks. Over 80% of patients believed the digital mask would alleviate their privacy concerns and they expressed an increased willingness to share their personal information if such a measure was implemented.


Finally, the team confirmed that the digital masks can also evade artificial intelligence-powered facial recognition algorithms.

Professor Patrick Yu-Wai-Man from the University of Cambridge said: “Digital masking offers a pragmatic approach to safeguarding patient privacy while still allowing the information to be useful to clinicians. At the moment, the only options available are crude, but our digital mask is a much more sophisticated tool for anonymising facial images.

“This could make telemedicine – phone and video consultations – much more feasible, making healthcare delivery more efficient. If telemedicine is to be widely adopted, then we need to overcome the barriers and concerns related to privacy protection. Our digital mask is an important step in this direction.”

Meal timing may influence mood vulnerability; Daytime eating benefits mental health

“Our findings provide evidence for the timing of food intake as a novel strategy to potentially minimize mood vulnerability in individuals experiencing circadian misalignment, such as people engaged in shift work, experiencing jet lag, or suffering from circadian rhythm disorders,” said co-corresponding author Frank A. J. L. Scheer, PhD, Director of the Medical Chronobiology Program in the Brigham’s Division of Sleep and Circadian Disorders. “Future studies in shift workers and clinical populations are required to firmly establish if changes in meal timing can prevent their increased mood vulnerability. Until then, our study brings a new ‘player’ to the table: the timing of food intake matters for our mood.”

Shift workers account for up to 20 percent of the workforce in industrial societies and are directly responsible for many hospital services, factory work, and other essential services. Shift workers often experience a misalignment between their central circadian clock in the brain and daily behaviors, such as sleep/wake and fasting/eating cycles. Importantly, they also have a 25 to 40 percent higher risk of depression and anxiety.


“Shift workers — as well as individuals experiencing circadian disruption, including jet lag — may benefit from our meal timing intervention,” said co-corresponding author Sarah L. Chellappa, MD, PhD, who completed work on this project while at the Brigham. “Our findings open the door for a novel sleep/circadian behavioral strategy that might also benefit individuals experiencing mental health disorders. Our study adds to a growing body of evidence finding that strategies that optimize sleep and circadian rhythms may help promote mental health.”

To conduct the study, Scheer, Chellappa, and colleagues enrolled 19 participants (12 men and 7 women) for a randomized controlled study. Participants underwent a Forced Desynchrony protocol in dim light for four 28-hour “days,” such that by the fourth “day” their behavioral cycles were inverted by 12 hours, simulating night work and causing circadian misalignment. Participants were randomly assigned to one of two meal timing groups: the Daytime and Nighttime Meal Control Group, which had meals according to a 28-hour cycle (resulting in eating both during the night and day, which is typical among night workers), and the Daytime-Only Meal Intervention Group, which had meals on a 24-hour cycle (resulting in eating only during the day). The team assessed depression- and anxiety-like mood levels every hour.
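The drift arithmetic behind the Forced Desynchrony protocol can be sketched as follows (a minimal illustration, assuming the internal circadian period is approximately 24 hours): each 28-hour behavioral “day” runs 4 hours longer than the circadian clock, so the behavioral cycle drifts 4 hours per “day” and reaches a 12-hour inversion by the fourth.

```python
# Each behavioral "day" is 28 hours; the circadian clock runs on ~24 hours.
behavioral_day_h = 28
circadian_day_h = 24
drift_per_day_h = behavioral_day_h - circadian_day_h  # 4 hours of drift per "day"

# Cumulative misalignment at the start of each of the four "days":
shifts = [day * drift_per_day_h for day in range(4)]  # [0, 4, 8, 12]

# By the fourth "day", behavior is inverted by 12 hours relative to the
# circadian clock, simulating night work.
inversion_on_day4_h = shifts[3]  # 12
```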


The team found that meal timing significantly affected the participants’ mood levels. During the simulated night shift (day 4), those in the Daytime and Nighttime Meal Control Group had increased depression-like and anxiety-like mood levels compared to baseline (day 1). In contrast, there were no changes in mood in the Daytime-Only Meal Intervention Group during the simulated night shift. Participants with a greater degree of circadian misalignment experienced more depression- and anxiety-like mood.

“Meal timing is emerging as an important aspect of nutrition that may influence physical health,” said Chellappa. “But the causal role of the timing of food intake on mental health remains to be tested. Future studies are required to establish if changes in meal timing can help individuals experiencing depressive and anxiety-related disorders.”