RBI Governor Warns of ‘Systemic Risks’ from AI in Banking Sector

The increasing use of artificial intelligence (AI) and machine learning (ML) in the global financial sector could pose significant risks to financial stability if not properly managed, according to Shaktikanta Das, the Governor of the Reserve Bank of India (RBI). Speaking at an event in New Delhi on Monday, Das emphasized the need for banks to adopt strong risk mitigation practices as they integrate AI into their operations.

Das highlighted that the financial sector’s growing reliance on AI could lead to concentration risks, particularly if a few technology providers dominate the market. “The heavy dependence on AI by financial institutions can amplify systemic risks. Failures or disruptions in these AI systems could ripple through the entire financial sector,” he cautioned.

In India, banks and financial service providers are increasingly using AI to enhance customer experience, reduce operational costs, manage risks, and boost growth through applications like chatbots and personalized banking services. However, this growing reliance on AI also introduces new vulnerabilities, including a heightened risk of cyberattacks and data breaches.

Das pointed out another key concern—the “opacity” of AI algorithms. The complexity and lack of transparency in AI systems make it difficult to audit or interpret the decision-making processes behind lending and other financial services. This could lead to unpredictable market outcomes, with potentially severe consequences.

In addition to AI-related risks, Das also raised concerns about the rapid expansion of private credit markets globally. These markets, he noted, are lightly regulated and have not undergone stress testing during a significant economic downturn. The unchecked growth of private credit could pose further risks to financial stability.

As the adoption of AI continues to reshape the financial landscape, Das urged banks and regulators to stay vigilant and ensure that adequate safeguards are in place to prevent systemic disruptions.

Voice control smart devices might hinder children’s social, emotional development: Study

Voice control smart devices, such as Alexa, Siri, and Google Home, might hinder children’s social and emotional development, argues an expert in the use of artificial intelligence and machine learning in healthcare, in a viewpoint published online in the Archives of Disease in Childhood.

These devices might have long-term effects by impeding children’s critical thinking, capacity for empathy and compassion, and their learning skills, says Anmol Arora of the University of Cambridge.

While voice control devices may act as ‘friends’ and help to improve children’s reading and communication skills, their advanced AI and ‘human’-sounding voices have prompted concerns about the potential long-term effects on children’s brains at a crucial stage of development.

There are three broad areas of concern, explains the author. These comprise inappropriate responses; impeding social development; and hindering learning.

He cites some well-publicised examples of inappropriate responses, including a device suggesting that a 10-year-old should try touching a live plug with a coin.

“It is difficult to enforce robust parental controls on such devices without severely affecting their functionality,” he suggests, adding that privacy issues have also arisen in respect of the recording of private conversations.

These devices can’t teach children how to behave politely, because there’s no expectation of a “please” or “thank you”, and no need to consider the tone of voice, he points out.

“The lack of ability to engage in non-verbal communication makes use of the devices a poor method of learning social interaction,” he writes. “While in normal human interactions, a child would usually receive constructive feedback if they were to behave inappropriately, this is beyond the scope of a smart device.”

Preliminary research on the use of voice assistants as social companions for lonely adults is encouraging. But it’s not at all clear if this also applies to children, he notes.

“This is particularly important at a time when children might already have had social development impaired as a result of COVID-19 restrictions and when [they] might have been spending more time isolated with smart devices at home,” he emphasises.

Devices are designed to search for requested information and provide a concise, specific answer, but this may hinder traditional processes by which children learn and absorb information, the author suggests.

When children ask adults questions, the adult can request contextual information, explain the limitations of their knowledge and probe the child’s reasoning—a process that these devices can’t replicate, he says.

Searching for information is also an important learning experience, which teaches critical thinking and logical reasoning, he explains.

“The rise of voice devices has provided great benefit to the population. Their abilities to provide information rapidly, assist with daily activities, and act as a social companion to lonely adults are both important and useful,” the author acknowledges.

“However, urgent research is required into the long-term consequences for children interacting with such devices,” he insists.

“Interacting with the devices at a crucial stage in social and emotional development might have long-term consequences on empathy, compassion, and critical thinking,” he concludes.

Using machine learning to design odors and fragrances

Tokyo Institute of Technology researchers have invented a new method that predicts the molecular features of a smell from its odor impression, instead of predicting the impression from molecular features.

The sense of smell is one of the basic senses of animal species: it is critical for finding food, attracting mates, and sensing danger. Humans detect smells, or odorants, with olfactory receptors expressed in olfactory nerve cells.

The olfactory impression an odorant produces is associated with its molecular features and physicochemical properties, which in principle makes it possible to tailor odors that create an intended impression. Current methods, however, only predict olfactory impressions from the physicochemical features of odorants; they cannot run in reverse to predict the physicochemical data indispensable for creating smells.

To tackle this issue, scientists from Tokyo Institute of Technology (Tokyo Tech) have employed the innovative strategy of solving the inverse problem. Instead of predicting the smell from molecular data, this method predicts molecular features based on the odor impression.

The new method works from standard mass spectrum data and machine learning (ML) models. “We used a machine-learning-based odor predictive model that we had previously developed to obtain the odor impression. Then we predicted the mass spectrum from odor impression inversely based on the previously developed forward model,” explains Professor Takamichi Nakamoto, who led the research effort at Tokyo Tech. The findings have been published in PLoS One.
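The inverse step is easiest to picture in miniature. The sketch below is an illustration of the general idea, not the team’s code: a random linear map stands in for the pretrained forward model (spectrum to impression), and a bounded optimizer searches for a non-negative spectrum whose predicted impression matches a target. All names and dimensions are invented.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N_BINS, N_DESCRIPTORS = 50, 10     # mass-spectrum bins, odor descriptors

# Stand-in for the pretrained forward model (spectrum -> impression).
# Here it is a random linear map; the study uses a trained ML model.
W = rng.normal(size=(N_DESCRIPTORS, N_BINS))

def forward_model(spectrum):
    return W @ spectrum

def invert_impression(target):
    """Search for a non-negative spectrum whose predicted impression
    is as close as possible to the target impression."""
    loss = lambda x: np.sum((forward_model(x) - target) ** 2)
    x0 = np.full(N_BINS, 1.0 / N_BINS)               # flat initial guess
    res = minimize(loss, x0, bounds=[(0.0, None)] * N_BINS,
                   method="L-BFGS-B")
    return res.x

# e.g. a target impression with some descriptors boosted
target = rng.normal(size=N_DESCRIPTORS)
predicted_spectrum = invert_impression(target)
```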

This simple method allows for the quick preparation of the predicted spectra of odor mixtures and can also predict the required mixing ratio, an important part of the recipe for new odor preparation.

“For example, we show which molecules give the mass spectrum of apple flavor with enhanced ‘fruit’ and ‘sweet’ impressions. Our analysis method shows that combinations of either 59 or 60 molecules give the same mass spectrum as the one obtained from the specified odor impression. With this information, and the correct mixing ratio needed for a certain impression, we could theoretically prepare the desired scent,” highlights Prof. Nakamoto.
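Recovering a mixing ratio from spectra can be framed as a small linear-algebra problem. The toy sketch below illustrates that step under stated assumptions, not the paper’s actual algorithm: given a library of single-molecule spectra, non-negative least squares finds proportions that reproduce a target mixture spectrum. All data here are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_bins, n_molecules = 200, 60

library = rng.random((n_bins, n_molecules))       # spectrum of each candidate molecule
true_ratio = rng.dirichlet(np.ones(n_molecules))  # hidden mixing ratio
target_spectrum = library @ true_ratio            # spectrum of the mixture

# Non-negative least squares recovers proportions that reproduce the target.
ratio, residual = nnls(library, target_spectrum)
ratio /= ratio.sum()                              # normalize to a mixing ratio
```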

This novel method can provide highly accurate predictions of the physicochemical properties of odor mixtures, as well as the mixing ratios required to prepare them, thereby opening the door to endless tailor-made fragrances, said the team.

It looks like the future of odor mixtures smells good!

Super-fast electric car charging is here with the Midas touch

Despite the growing popularity of electric vehicles, many consumers still hesitate as it may take longer to power up an electric car than it does to gas up a conventional one.

Another concern is that frequent charging, or speeding up the charging process, can damage the battery and reduce its lifespan. Now, scientists have developed superfast charging methods tailored to power different types of electric vehicle batteries in 10 minutes or less without harm.

The researchers will present their results Monday at the American Chemical Society (ACS) Fall 2022 meeting, a hybrid event being held virtually and in person on Aug. 21-25, with nearly 11,000 presentations on a wide range of science topics.

“Fast charging is the key to increasing consumer confidence and overall adoption of electric vehicles,” says Eric Dufek, who is presenting this work at the meeting. “It would allow vehicle charging to be very similar to filling up at a gas station.” Such an advance could help the US reach President Biden’s goal that by 2030, half of all vehicles sold should be electric or hybrid.

When a lithium-ion battery is being charged, lithium ions migrate from one side of the device, the cathode, to the other, the anode. Making the lithium ions migrate faster charges the battery more quickly, but sometimes the ions don’t fully move into the anode. In that case, lithium metal can build up, triggering early battery failure and reducing the battery’s lifetime.

To address these challenges, Dufek and his research team at Idaho National Laboratory used machine learning to create unique charging protocols. By inputting information about the condition of many lithium-ion batteries during their charging and discharging cycles, the scientists trained the machine learning analysis to predict battery lifetimes. The team then used the trained model to identify and optimize new charging protocols.
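In outline, that workflow pairs a learned lifetime predictor with a search over candidate protocols. The following sketch is a minimal illustration of the idea, not the Idaho National Laboratory code: the protocol features and lifetimes are invented for demonstration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)

# Hypothetical protocol features: [C-rate step 1, C-rate step 2,
# switch point (state of charge), cutoff voltage]. All invented.
low, high = [1.0, 1.0, 0.2, 4.0], [6.0, 6.0, 0.8, 4.4]
X = rng.uniform(low, high, size=(500, 4))
# Synthetic "observed" cycle life: harsher currents shorten lifetime.
y = 2000 - 150 * X[:, 0] - 100 * X[:, 1] + rng.normal(0, 50, 500)

# Train a lifetime predictor from protocol features ...
model = GradientBoostingRegressor().fit(X, y)

# ... then search candidate fast-charge protocols for the longest
# predicted lifetime.
candidates = rng.uniform(low, high, size=(10_000, 4))
best = candidates[np.argmax(model.predict(candidates))]
```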

“We’ve significantly increased the amount of energy that can go into a battery cell in a short amount of time,” says Dufek. “Currently, we’re seeing batteries charge to over 90% in 10 minutes without lithium plating or cathode cracking.”

Going from a nearly dead battery to one at 90% power in only 10 minutes is a far cry from current methods, which, at best, can get an electric vehicle to full charge in about half an hour. While many researchers are looking for methods to achieve this sort of super-fast charging, Dufek says that one advantage of their machine learning model is that it ties the protocols to the physics of what is actually happening in a battery.

The researchers plan to use their model to develop and design new lithium-ion batteries that are optimized to undergo fast charging.

Improving clinical trials with machine learning

Machine learning could improve our ability to determine whether a new drug works in the brain, potentially enabling researchers to detect drug effects that would be missed entirely by conventional statistical tests, finds a new UCL study published in Brain.

“Current statistical models are too simple. They fail to capture complex biological variations across people, discarding them as mere noise. We suspected this could partly explain why so many drug trials work in simple animals but fail in the complex brains of humans. If so, machine learning capable of modelling the human brain in its full complexity may uncover treatment effects that would otherwise be missed,” said the study’s lead author, Dr Parashkev Nachev (UCL Institute of Neurology).

To test the concept, the research team looked at large-scale data from patients with stroke, extracting the complex anatomical pattern of brain damage caused by the stroke in each patient, creating in the process the largest collection of anatomically registered images of stroke ever assembled. As an index of the impact of stroke, they used gaze direction, objectively measured from the eyes as seen on head CT scans upon hospital admission, and from MRI scans typically done 1-3 days later.

They then simulated a large-scale meta-analysis of a set of hypothetical drugs, to see if treatment effects of different magnitudes that would have been missed by conventional statistical analysis could be identified with machine learning. For example, given a drug treatment that shrinks a brain lesion by 70%, they tested for a significant effect using conventional (low-dimensional) statistical tests as well as by using high-dimensional machine learning methods.
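The gain in statistical power can be pictured with a toy simulation. The sketch below is an illustration of the general principle rather than the study’s actual analysis: simulated outcomes depend on which voxels are damaged, a treatment shrinks the lesion, and each trial is analysed once with a plain t-test on outcomes and once after adjusting for anatomy with a high-dimensional ridge model. Every number here is invented.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import Ridge

rng = np.random.default_rng(7)
V, N = 400, 100                      # voxels, patients per trial arm
w = np.zeros(V)
w[:40] = 0.5                         # only a small critical region drives outcome

def simulate_arm(n, shrink=0.0):
    base = (rng.random((n, V)) < 0.1).astype(float)   # baseline lesion maps
    outcome = (base * (1 - shrink)) @ w + rng.normal(0, 1.0, n)
    return base, outcome

# High-dimensional anatomical model, fitted on a historical untreated cohort.
hist_base, hist_outcome = simulate_arm(300)
anat_model = Ridge(alpha=10.0).fit(hist_base, hist_outcome)

def trial_detects(shrink, alpha=0.05):
    base_c, y_c = simulate_arm(N)               # control arm
    base_t, y_t = simulate_arm(N, shrink)       # treated arm
    p_low = ttest_ind(y_c, y_t).pvalue          # conventional test on raw outcomes
    # High-dimensional route: subtract the anatomy-predicted outcome first.
    r_c = y_c - anat_model.predict(base_c)
    r_t = y_t - anat_model.predict(base_t)
    p_high = ttest_ind(r_c, r_t).pvalue
    return p_low < alpha, p_high < alpha

power = np.mean([trial_detects(shrink=0.3) for _ in range(200)], axis=0)
print(f"detection rate, raw t-test: {power[0]:.2f}; anatomy-adjusted: {power[1]:.2f}")
```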

The machine learning technique took into account the presence or absence of damage across the entire brain, treating the stroke as a complex “fingerprint”, described by a multitude of variables.

“Stroke trials tend to use relatively few, crude variables, such as the size of the lesion, ignoring whether the lesion is centred on a critical area or at the edge of it. Our algorithm learned the entire pattern of damage across the brain instead, employing thousands of variables at high anatomical resolution. By illuminating the complex relationship between anatomy and clinical outcome, it enabled us to detect therapeutic effects with far greater sensitivity than conventional techniques,” explained the study’s first author, Tianbo Xu (UCL Institute of Neurology).

The advantage of the machine learning approach was particularly strong when looking at interventions that reduce the volume of the lesion itself. With conventional low-dimensional models, the intervention would need to shrink the lesion by 78.4% of its volume for the effect to be detected in a trial more often than not, while the high-dimensional model would more than likely detect an effect when the lesion was shrunk by only 55%.

“Conventional statistical models will miss an effect even if the drug typically reduces the size of the lesion by half, or more, simply because the complexity of the brain’s functional anatomy, when left unaccounted for, introduces so much individual variability in measured clinical outcomes. Yet saving 50% of the affected brain area is meaningful even if it doesn’t have a clear impact on behaviour. There’s no such thing as redundant brain,” said Dr Nachev.

The researchers say their findings demonstrate that machine learning could be invaluable to medical science, especially when the system under study, such as the brain, is highly complex.

“The real value of machine learning lies not so much in automating things we find easy to do naturally, but in formalising very complex decisions. Machine learning can combine the intuitive flexibility of a clinician with the formality of the statistics that drive evidence-based medicine. Models that pull together thousands of variables can still be rigorous and mathematically sound. We can now capture the complex relationship between anatomy and outcome with high precision,” said Dr Nachev.

“We hope that researchers and clinicians begin using our methods the next time they need to run a clinical trial,” said co-author Professor Geraint Rees (Dean, UCL Faculty of Life Sciences).

Using machine learning to improve patient care

Doctors are often deluged by signals from charts, test results, and other metrics to keep track of. It can be difficult to integrate and monitor all of these data for multiple patients while making real-time treatment decisions, especially when data is documented inconsistently across hospitals.

In a new pair of papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) explore ways for computers to help doctors make better medical decisions.

One team created a machine-learning approach called “ICU Intervene” that takes large amounts of intensive-care-unit (ICU) data, from vitals and labs to notes and demographics, to determine what kinds of treatments are needed for different symptoms. The system uses “deep learning” to make real-time predictions, learning from past ICU cases to make suggestions for critical care, while also explaining the reasoning behind these decisions.

“The system could potentially be an aid for doctors in the ICU, which is a high-stress, high-demand environment,” says PhD student Harini Suresh, lead author on the paper about ICU Intervene. “The goal is to leverage data from medical records to improve health care and predict actionable interventions.”

Another team developed an approach called “EHR Model Transfer” that can facilitate the application of predictive models on an electronic health record (EHR) system, despite being trained on data from a different EHR system. Specifically, using this approach the team showed that predictive models for mortality and prolonged length of stay can be trained on one EHR system and used to make predictions in another.

ICU Intervene was co-developed by Suresh, undergraduate student Nathan Hunt, postdoc Alistair Johnson, researcher Leo Anthony Celi, MIT Professor Peter Szolovits, and PhD student Marzyeh Ghassemi. It was presented this month at the Machine Learning for Healthcare Conference in Boston.

EHR Model Transfer was co-developed by lead authors Jen Gong and Tristan Naumann, both PhD students at CSAIL, as well as Szolovits and John Guttag, who is the Dugald C. Jackson Professor in Electrical Engineering. It was presented at the ACM’s Special Interest Group on Knowledge Discovery and Data Mining in Halifax, Canada.

Both models were trained using data from the critical care database MIMIC, which includes de-identified data from roughly 40,000 critical care patients and was developed by the MIT Lab for Computational Physiology.

ICU Intervene

Integrated ICU data is vital to automating the process of predicting patients’ health outcomes.

“Much of the previous work in clinical decision-making has focused on outcomes such as mortality (likelihood of death), while this work predicts actionable treatments,” Suresh says. “In addition, the system is able to use a single model to predict many outcomes.”

ICU Intervene focuses on hourly prediction of five different interventions that cover a wide variety of critical care needs, such as breathing assistance, improving cardiovascular function, lowering blood pressure, and fluid therapy.

At each hour, the system extracts values from the data that represent vital signs, as well as clinical notes and other data points. All of the data are represented with values that indicate how far off a patient is from the average (to then evaluate further treatment).

Importantly, ICU Intervene can make predictions far into the future. For example, the model can predict whether a patient will need a ventilator six hours later rather than just 30 minutes or an hour later. The team also focused on providing reasoning for the model’s predictions, giving physicians more insight.
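Concretely, the setup can be pictured as hourly, z-scored feature windows paired with labels a fixed gap ahead. The sketch below shows that data layout with synthetic numbers; it is not the ICU Intervene model, which is a deep recurrent network, so a plain logistic regression stands in here purely to show the windowing and the six-hour prediction gap.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_patients, n_hours, n_feats = 500, 48, 10
WINDOW, GAP = 6, 6            # look back 6 hours, predict 6 hours ahead

vitals = rng.normal(size=(n_patients, n_hours, n_feats))
# Represent each value as distance from the average patient (z-score).
z = (vitals - vitals.mean(axis=(0, 1))) / vitals.std(axis=(0, 1))

# Synthetic label: ventilation need driven by one deteriorating feature.
vent = (vitals[:, :, 0].cumsum(axis=1) < -5).astype(int)

X, y = [], []
for p in range(n_patients):
    for t in range(WINDOW, n_hours - GAP):
        X.append(z[p, t - WINDOW:t].ravel())   # last 6 hours of features
        y.append(vent[p, t + GAP])             # ventilator status 6 h later

clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
```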

“Deep neural-network-based predictive models in medicine are often criticized for their black-box nature,” says Nigam Shah, an associate professor of medicine at Stanford University who was not involved in the paper. “However, these authors predict the start and end of medical interventions with high accuracy, and are able to demonstrate interpretability for the predictions they make.”

The team found that the system outperformed previous work in predicting interventions, and was especially good at predicting the need for vasopressors, a medication that tightens blood vessels and raises blood pressure.

In the future, the researchers will be trying to improve ICU Intervene to be able to give more individualized care and provide more advanced reasoning for decisions, such as why one patient might be able to taper off steroids, or why another might need a procedure like an endoscopy.

EHR Model Transfer

Another important consideration for leveraging ICU data is how it’s stored and what happens when that storage method gets changed. Existing machine-learning models need data to be encoded in a consistent way, so the fact that hospitals often change their EHR systems can create major problems for data analysis and prediction.

That’s where EHR Model Transfer comes in. The approach works across different versions of EHR platforms, using natural language processing to identify clinical concepts that are encoded differently across systems and then mapping them to a common set of clinical concepts (such as “blood pressure” and “heart rate”).
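In miniature, the mapping step amounts to translating each site’s local codes into a shared vocabulary before any model sees the data. The sketch below illustrates this with invented code names; the paper’s approach relies on natural language processing and medical ontologies rather than a hand-written dictionary.

```python
from typing import Dict, List

COMMON_CONCEPTS = ["heart_rate", "blood_pressure", "ventilation"]

# Each site's local event codes mapped to the shared vocabulary (invented).
SITE_A: Dict[str, str] = {"HR": "heart_rate", "NBP": "blood_pressure",
                          "VENT_START": "ventilation"}
SITE_B: Dict[str, str] = {"PULSE": "heart_rate", "ART_BP": "blood_pressure",
                          "MECH_VENT": "ventilation"}

def to_common(events: Dict[str, float], mapping: Dict[str, str]) -> List[float]:
    """Re-express a patient's local events as a fixed common-concept vector."""
    merged = {mapping[code]: value for code, value in events.items()
              if code in mapping}
    return [merged.get(concept, 0.0) for concept in COMMON_CONCEPTS]

# A model trained on Site A vectors can now score a Site B patient directly:
site_b_patient = {"PULSE": 92.0, "ART_BP": 118.0}
features = to_common(site_b_patient, SITE_B)   # -> [92.0, 118.0, 0.0]
```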

For example, a patient in one EHR platform could be switching hospitals and would need their data transferred to a different type of platform. EHR Model Transfer aims to ensure that the model could still predict aspects of that patient’s ICU visit, such as their likelihood of a prolonged stay or even of dying in the unit.

“Machine-learning models in health care often suffer from low external validity, and poor portability across sites,” says Shah. “The authors devise a nifty strategy for using prior knowledge in medical ontologies to derive a shared representation across two sites that allows models trained at one site to perform well at another site. I am excited to see such creative use of codified medical knowledge in improving portability of predictive models.”

With EHR Model Transfer, the team tested their model’s ability to predict two outcomes: mortality and the need for a prolonged stay. They trained it on one EHR platform and then tested its predictions on a different platform. EHR Model Transfer was found to outperform baseline approaches and demonstrated better transfer of predictive models across EHR versions compared to using EHR-specific events alone.

In the future, the EHR Model Transfer team plans to evaluate the system on data and EHR systems from other hospitals and care settings.

Both papers were supported, in part, by the Intel Science and Technology Center for Big Data and the National Library of Medicine. The paper detailing EHR Model Transfer was additionally supported by the National Science Foundation and Quanta Computer, Inc.