How the mother’s mood influences her baby’s ability to speak

Up to 70 percent of mothers develop a postnatal depressive mood, also known as the baby blues, after their baby is born. Analyses show that this can also affect the children’s own development, including their speech. Until now, however, it was unclear exactly how this impairment manifests itself in infants’ early language development.

In a study, scientists at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig have now investigated how well babies can distinguish speech sounds from one another depending on their mother’s mood. This ability is considered an important prerequisite for the further steps toward fully developed language: if sounds can be distinguished from one another, so can individual words. The study showed that when mothers reported a more negative mood two months after birth, their children showed, on average, less mature processing of speech sounds at the age of six months.

The infants found it particularly difficult to distinguish between syllable pitches. Specifically, the development of their so-called mismatch response was delayed compared with that of infants whose mothers were in a more positive mood. This mismatch response serves as a measure of how well someone can separate speech sounds from one another. If the development toward a pronounced mismatch response is delayed, this is considered an indication of an increased risk of suffering from a speech disorder later in life.

“We suspect that the affected mothers use less infant-directed speech,” explains Gesa Schaadt, a postdoc at MPI CBS, professor of development in childhood and adolescence at FU Berlin, and first author of the study, which has now appeared in the journal JAMA Network Open. “They probably use less pitch variation when directing speech to their infants.” This also leads to a more limited perception of different pitches in the children, she said. This perception, in turn, is considered a prerequisite for further language development.

The results show how important it is for children’s further language development that parents use infant-directed speech. Infant-directed speech that varies greatly in pitch and emphasizes certain parts of words more clearly, thus focusing the little ones’ attention on what is being said, is considered appropriate for children. Mothers who suffer from depressive mood, in turn, often use more monotonous, less infant-directed speech. “To ensure the proper development of young children, appropriate support is also needed for mothers who suffer from mild upsets that often do not yet require treatment,” Schaadt says. That does not necessarily mean organized intervention measures. “Sometimes it just takes the fathers to be more involved.”

The researchers investigated these relationships with the help of 46 mothers who reported differing moods after giving birth. Their moods were measured using a standardized questionnaire typically used to diagnose postnatal mood disturbances. The researchers also used electroencephalography (EEG), which helps to measure how well babies can distinguish speech sounds from one another. The so-called mismatch response is used for this purpose: a specific EEG signal that shows how well the brain processes and distinguishes between different speech sounds. The researchers recorded this response in the babies at the ages of two and six months while they were presented with various syllables such as “ba,” “ga” and “bu.”
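For a concrete picture of what a mismatch response is, the following sketch illustrates the general idea: the averaged brain response to rare “deviant” syllables is compared with the response to a frequently repeated “standard” syllable, and the resulting difference wave indexes how strongly the two sounds are distinguished. The arrays, trial counts and analysis window below are hypothetical placeholders for real EEG data, not the authors’ analysis pipeline.

```python
# Illustrative sketch (not the study's actual pipeline): a mismatch response
# is computed as the difference between the averaged EEG response to rare
# "deviant" syllables and the response to frequent "standard" syllables.
import numpy as np

# Hypothetical baseline-corrected epochs: trials x time samples.
standard_epochs = np.random.randn(400, 300)   # e.g. frequent "ba" trials
deviant_epochs = np.random.randn(80, 300)     # e.g. rare "ga" or "bu" trials

# Event-related potentials: average across trials for each condition.
erp_standard = standard_epochs.mean(axis=0)
erp_deviant = deviant_epochs.mean(axis=0)

# The mismatch response is the deviant-minus-standard difference wave; its
# amplitude in a chosen time window indexes how well the brain separates
# the two sounds.
mismatch_wave = erp_deviant - erp_standard
window = slice(100, 200)                      # hypothetical window of interest
print(f"Mean mismatch amplitude: {mismatch_wave[window].mean():.3f}")
```

In this scheme, a difference wave that emerges later or remains less pronounced corresponds to the delayed mismatch-response development described above.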

Religious affiliation impacts language use on Facebook

Are you more likely to use words like “happy” and “family” in your social media posts? Or do you use emotional and cognitive words like “angry” and “thinking”? The words you use may be a clue to your religious affiliation. A study of 12,815 U.S. and U.K. Facebook users finds that the use of positive emotion and social words is associated with religious affiliation, whereas the use of negative emotion words and words about cognitive processes is more common among those who are not religious than among those who are.

The work replicates Ritter et al.’s 2013 results on religious and nonreligious language use on Twitter and appears in the journal Social Psychological and Personality Science. Researchers from the U.S., U.K., and Australia conducted the work.

Just as Ritter and colleagues discovered in 2013, “We also found that positive emotion and social words are associated with religious affiliation whereas negative emotion and cognitive processes are more associated with non-religious affiliation,” says David Yaden (University of Pennsylvania), lead author of the study.

And they found an additional insight: “non-religious individuals make more frequent mention of the body and of death” than religious people, says Yaden.

The researchers collected data from the MyPersonality application, which asked Facebook users to report their religious affiliation (among other things) and asked them for consent to allow researchers to analyze their written online posts and other self-reported information (Kosinski, Stillwell, & Graepel, 2013). They ran two analyses to see which words each group (religious vs. non-religious) used more than the other.

The team conducted both a “top-down” and a “bottom-up” analysis. The top-down approach, Linguistic Inquiry and Word Count (LIWC), uses word groupings chosen by researchers and is useful for making sense of the data in terms of theory. The bottom-up approach, Differential Language Analysis (DLA), lets an algorithm group the words and can provide a more “transparent view” into the language.
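As a rough illustration of the “top-down” dictionary idea, the sketch below counts how often words from researcher-defined categories appear in two small sets of posts and compares their relative frequencies. The category lists and example posts are invented for illustration; they are not the real LIWC dictionary or the study’s data.

```python
# Minimal sketch of a "top-down" dictionary analysis in the spirit of LIWC.
# The categories and posts below are toy examples, not the real LIWC lexicon.
import re
from collections import Counter

categories = {
    "positive_emotion": {"love", "happy", "blessing"},
    "negative_emotion": {"hate", "angry"},
    "social": {"family", "we", "mothers"},
    "cognitive": {"thinking", "reasons", "because"},
}

def category_rates(posts):
    """Return each category's share of all tokens across the given posts."""
    tokens = [t for post in posts for t in re.findall(r"[a-z']+", post.lower())]
    counts = Counter(tokens)
    total = sum(counts.values()) or 1
    return {
        name: sum(counts[w] for w in words) / total
        for name, words in categories.items()
    }

religious_posts = ["We love our family, what a blessing"]
nonreligious_posts = ["Thinking about the reasons I was angry"]

print("religious:", category_rates(religious_posts))
print("non-religious:", category_rates(nonreligious_posts))
```

A bottom-up analysis would instead derive the word groupings from the data itself rather than from predefined category lists.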

Unsurprisingly, religious people used more religious words, like “devil,” “blessing,” and “praying,” than non-religious people did. They also showed higher use of positive words like “love,” and of family and social words such as “mothers” and “we.” The non-religious individuals used words from the anger category, like “hate,” more than religious people did. They also showed a higher use of words associated with negative emotion and cognitive processes, such as “reasons.” Other areas where the non-religious dominated: swear words (you can figure those out), words about the body, including “heads” and “neck,” and words related to death, including “dead.”

The Role of Religion

While secularism is increasing in the West, “over 80% of the world’s population identifies with some type of religion – a trend that appears to be on the rise,” write the authors. “Religion is associated with longer lives and well-being, but can also be associated with higher rates of obesity and racism.” For the researchers, understanding language use is part of the bigger picture of understanding how religious affiliation relates to these life outcomes.

Yaden and his colleagues do not know whether the different linguistic behaviors of religious and non-religious people reflect the psychological states of those in each group, the social norms of belonging to that group, or some combination of the two. They hope further research will offer more insights.

Originally, Yaden and colleagues hoped to “compare different religious affiliations with one another. That is, how do Buddhists differ from Hindus? Christians from Muslims? Atheists from Agnostics?” But they did not have enough specific data to conduct these analyses. “We hope to do so once a larger dataset becomes available to us,” says Yaden.

Bilingual babies listen to language

Are two languages at a time too much for the mind? Caregivers and teachers should know that infants growing up bilingual have the learning capacities to make sense of the complexities of two languages just by listening. In a new study, an international team of researchers, including those from Princeton University, report that bilingual infants as young as 20 months of age efficiently and accurately process two languages.

The study, published Aug. 7 in the journal Proceedings of the National Academy of Sciences, found that infants can differentiate between words in different languages. “By 20 months, bilingual babies already know something about the differences between words in their two languages,” said Casey Lew-Williams, an assistant professor of psychology and co-director of the Princeton Baby Lab, where researchers study how babies and young children learn to see, talk and understand the world. He is also a co-author of the paper.

“They do not think that ‘dog’ and ‘chien’ [French] are just two versions of the same thing,” Lew-Williams said. “They implicitly know that these words belong to different languages.”

To determine infants’ ability to monitor and control language, the researchers showed 24 French-English bilingual infants and 24 adults in Montreal pairs of photographs of familiar objects. Participants heard simple sentences in either a single language (“Look! Find the dog!”) or a mix of two languages (“Look! Find the chien!”). In another experiment, they heard a language switch that crossed sentences (“That one looks fun! Le chien!”). These types of language switches, called code switches, are regularly heard by children in bilingual communities.

The researchers then used eye-tracking measures, such as how long an infant’s or an adult’s eyes remained fixed on a photograph after hearing a sentence, as well as pupil dilation. Pupil diameter is an involuntary response to how hard the brain is “working” and is used as an indirect measure of cognitive effort.
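As a rough sketch of how such a pupil-based “cost” can be summarized, the example below compares average pupil dilation after the target word in switched-language trials versus single-language trials; the difference serves as the switch cost. The traces, window and numbers are hypothetical, not the study’s actual data or analysis.

```python
# Illustrative sketch (not the study's actual analysis): a language-switch
# "cost" summarized as extra pupil dilation on switched-language trials
# relative to single-language trials, in a window after the target word.
import numpy as np

# Hypothetical baseline-corrected pupil traces: trials x time samples
# (e.g. sampled after the onset of "dog" / "chien").
same_language_trials = np.random.randn(60, 120) * 0.02
switched_trials = np.random.randn(60, 120) * 0.02 + 0.05  # toy data with a larger response

window = slice(30, 90)  # hypothetical analysis window after word onset
mean_same = same_language_trials[:, window].mean()
mean_switch = switched_trials[:, window].mean()

switch_cost = mean_switch - mean_same  # positive values suggest extra effort
print(f"Pupil-dilation switch cost: {switch_cost:.3f} (arbitrary units)")
```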

The researchers tested bilingual adults as a control group, using the same photographs and eye-tracking procedure as with the bilingual infants, to examine whether these language-control mechanisms are the same across a bilingual speaker’s life.

They found that bilingual infants and adults incurred a processing “cost” when hearing switched-language sentences and, at the moment of the language switch, their pupils dilated. However, this switch cost was reduced or eliminated when the switch was from the non-dominant to the dominant language, and when the language switch crossed sentences.

“We identified convergent behavioral and physiological markers of there being a ‘cost’ associated with language switching,” Lew-Williams said. Rather than indicating barriers to comprehension, the study “shows an efficient processing strategy where there is an activation and prioritization of the currently heard language,” Lew-Williams said.

The similar results in both the infant and adult subjects also imply that “bilinguals across the lifespan have important similarities in how they process their languages,” Lew-Williams said.

“We have known for a long time that the language currently being spoken between two bilingual interlocutors — the base language — is more active than the language not being spoken, even when mixed speech is possible,” said François Grosjean, professor emeritus of psycholinguistics at Neuchâtel University in Switzerland, who is familiar with the research but was not involved with the study.

“This creates a preference for the base language when listening, and hence processing a code-switch can take a bit more time, but momentarily,” added Grosjean. “When language switches occur frequently, or are situated at [sentence] boundaries, or listeners expect them, then no extra processing time is needed. The current study shows that many of these aspects are true in young bilingual infants, and this is quite remarkable.”

“These findings advance our understanding of bilingual language use in exciting ways — both in toddlers in the initial stages of acquisition and in the proficient bilingual adult,” said Janet Werker, a professor of psychology at the University of British Columbia, who was not involved with the research. She noted that the findings may have implications for optimal teaching in bilingual settings. “One of the most obvious implications of these results is that we needn’t be concerned that children growing up bilingual will confuse their two languages. Indeed, rather than being confused as to which language to expect, the results indicate that even toddlers naturally activate the vocabulary of the language that is being used in any particular setting.”

A bilingual advantage?

Lew-Williams suggests that this study not only confirms that bilingual infants monitor and control their languages while listening to the simplest of sentences, but also provides a likely explanation of why bilinguals show cognitive advantages across the lifespan. Children and adults who have dual-language proficiency have been observed to perform better in “tasks that require switching or the inhibiting of a previously learned response,” Lew-Williams said.

“Researchers used to think this ‘bilingual advantage’ was from bilinguals’ practice dealing with their two languages while speaking,” Lew-Williams said. “We believe that everyday listening experience in infancy — this back-and-forth processing of two languages — is likely to give rise to the cognitive advantages that have been documented in both bilingual children and adults.”