AI sheds light on ancient board game mystery

In a first for the field, an international research team has used artificial intelligence (AI) to crack the rules of an ancient board game, opening a new way of unlocking secrets from the distant past.

By studying an engraved limestone object from the Roman Netherlands, the team was able to identify the game’s probable rules based on its distinctive markings.

The new study, published in the journal Antiquity, was led by Maastricht University and Leiden University (The Netherlands), with contributions from Flinders University (South Australia), the Universite Catholique de Louvain (Belgium), and the Roman Museum and restoration studio Restaura in Heerlen.

The item, found in what is now Heerlen in the Netherlands, bears a design of crossing lines that had bewildered archaeologists for decades.

Since most game boards in the Roman world were scratched into dust or cut into wood (and so rarely survived), this well-hewn limestone fragment offered a rare opportunity to study ancient rules.

The stone exhibits a geometric pattern and visible wear consistent with game pieces being slid across its surface, strongly suggesting repeated play rather than some other use, according to lead archaeologist Dr Walter Crist, an ancient games expert.

To identify what kind of game board the stone was and how it was used, the research team applied AI to run hundreds of potential rule sets and determine which would generate the same patterns of wear seen on the object.

Can AI Recreate Simulated Play?

The uneven wear on the carved lines raises a key question: can AI-driven simulated play reproduce the same pattern?

The researchers used the AI-based general game system Ludii to pit two AI agents against each other on the object’s layout, applying rule sets from board games recorded across European history, including Scandinavia’s haretavl and Italy’s gioco dell’orso.

Flinders University computer scientist Dr Matthew Stephenson says that modern AI techniques make it possible to bring the historical and computational studies of games together.

The simulations were repeated with the rules varied each time, to determine which movements would produce the same concentrated friction seen on the original stone surface, according to Dr Stephenson, of the Flinders College of Science and Engineering.
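As an illustration only, the wear-matching idea described above can be sketched as follows: play many simulated games under a candidate rule set, count how often pieces traverse each line of the board, and compare that simulated wear profile with the wear measured on the artefact. The board graph, random playouts, and counts below are invented stand-ins, not the study’s actual Ludii rule sets.

```python
import random
from collections import Counter

# Toy board: four nodes joined by six lines (a stand-in for the stone's pattern)
BOARD_EDGES = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]

def random_playout(edges, max_moves=50, rng=None):
    """Return the sequence of board lines a piece slides along in one random game."""
    rng = rng or random.Random()
    traversed = []
    pos = 0
    for _ in range(max_moves):
        options = [e for e in edges if pos in e]   # lines reachable from here
        edge = rng.choice(options)
        traversed.append(tuple(sorted(edge)))
        pos = edge[1] if edge[0] == pos else edge[0]
    return traversed

def simulated_wear(edges, n_games=1000, seed=42):
    """Fraction of all traversals that fall on each line: a simulated wear profile."""
    rng = random.Random(seed)
    wear = Counter()
    for _ in range(n_games):
        wear.update(random_playout(edges, rng=rng))
    total = sum(wear.values())
    return {tuple(sorted(e)): wear[tuple(sorted(e))] / total for e in edges}

profile = simulated_wear(BOARD_EDGES)
# Candidate rule sets whose simulated profiles best match the wear measured on
# the artefact would be the most plausible reconstructions.
```

In the actual study the playouts come from real rule sets run by AI agents rather than random moves, but the comparison step works on the same principle.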

The simulations pointed strongly towards a type of strategy game known as a blocking game, in which a player wins by denying the opponent any legal moves rather than by capturing pieces.

Since there is very little written evidence of blocking games before the Middle Ages, the results suggest that they may have a longer history than previously documented, while the work also demonstrates the transformative potential of AI in archaeology.

Archaeological Approach

This is the first attempt to combine AI-based simulated play with archaeological analysis to identify a board game, says Dr Crist.

It gives archaeologists a way forward in studying ancient games that do not resemble any described in surviving texts or art.

The work was carried out at Maastricht University as part of Europe’s Digital Ludeme Project, which applies artificial intelligence to produce reconstructions of ancient games that are both historically and mathematically plausible.

By combining archaeology, digital modelling and cultural history, the team was able to explain something that had previously seemed inexplicable.

The success of this approach suggests that many other puzzling artefacts may hold hidden stories that modern technology can uncover, according to Dr Stephenson.

It demonstrates how AI can expand our knowledge of materials that could otherwise not be analysed.

 


Sarvam AI Powering a Made-in-India Tech Revolution

India’s emergence as a global digital power now hinges on its ability to build artificial intelligence systems that are indigenous, inclusive, and aligned with national priorities.

As AI increasingly shapes governance, public services, industry, and citizen engagement, the need for homegrown foundational models has become important. These models must be trained on Indian languages, local data, and real-world contexts to ensure relevance and effectiveness.

Built with the vision of creating AI systems specifically for India, Sarvam AI is developing artificial intelligence tailored to the country’s needs, building foundational components and applying them to India’s unique linguistic, enterprise, and governance requirements. The company has built a full-stack AI platform, with everything developed, deployed, and governed entirely in India. These enterprise-grade platforms reflect the country’s linguistic diversity and are designed to support public service delivery. Its work directly addresses long-standing barriers in accessibility, multilingual communication, and dependence on foreign AI infrastructure.

At the India AI Impact Summit 2026, Union Home Minister Amit Shah stated that Sarvam AI exemplifies why the future belongs to India. He noted that the company “is ensuring technology reaches every citizen, advancing the vision of Viksit Bharat, where innovation serves as a trusted ally in empowering people and strengthening the nation.”

Driving Digital Self-Reliance through Indigenous AI Models

Strengthening indigenous AI infrastructure is central to India’s vision of technological sovereignty, digital self-reliance, and inclusive growth. In an era where artificial intelligence shapes governance, economic competitiveness, and citizen services, building AI systems rooted in local languages, datasets, and regulatory frameworks ensures that innovation aligns with national priorities and societal needs. Indigenous AI development not only safeguards strategic autonomy but also fosters economic resilience and equitable access to emerging technologies.

In this context, Sarvam AI stands out as one of the 12 organisations selected under the Innovation Centre pillar of the IndiaAI Mission to develop indigenous foundational models, with financial and compute support amounting to Rs.246.72 crore.

The company is building large language models (LLMs) and speech models tailored for Indian languages and public service delivery, with capabilities such as voice-based interfaces, document processing, and citizen-centric applications that enhance accessibility and ease of use. By developing homegrown AI models aligned with national objectives, Sarvam AI is reducing reliance on foreign AI systems while strengthening the open-source ecosystem and enabling innovation across startups, academia, research institutions, and industry.

An AI model is a computer program trained on vast amounts of data to recognize patterns, make predictions, or generate new content, acting like a digital brain.

Sarvam AI’s models include:

  • Bulbul (Text-to-Speech): Available in 11 Indian languages with 39 distinct speaker voices.
  • Saaras (Speech-to-Text): Supports all 22 scheduled languages, 8kHz telephony audio, and code-mixed speech.
  • Vision (Document Understanding): Tailored for 22+ Indian languages, mixed scripts, and handwritten text.

Through these foundational capabilities, Sarvam AI demonstrates how India-centric AI can evolve into scalable, resilient, and population-scale digital infrastructure, enhancing public service delivery, improving linguistic accessibility, and reinforcing India’s journey toward a globally competitive AI ecosystem.

Full-Stack Sovereign AI Ecosystem of Sarvam AI

Sarvam AI has built a comprehensive, full-stack sovereign AI ecosystem designed to serve enterprises, governments, developers, and creators across India. Developed end-to-end within the country, spanning compute infrastructure, foundational models, platforms, and real-world applications, the ecosystem reflects a commitment to technological self-reliance in artificial intelligence.

An AI stack is the complete set of tools and systems that work together to build and run AI applications. These applications range from everyday tools such as Siri and Alexa, to advanced systems used in healthcare diagnostics, financial fraud detection, and transportation.

What does the Sarvam AI ecosystem consist of?

  • Sarvam for Conversations: Enterprise-grade (high capacity) conversational AI delivering human-like, culturally fluent voices in 11 Indian languages. Handles over 100 million interactions with 500ms latency, deploys within 24 hours, and achieves up to 10x ROI.
  • Sarvam for Work: A unified enterprise AI platform that accelerates value creation through an AI-assisted build-debug-optimize cycle. Open and modular, it integrates seamlessly with any model, data source, or infrastructure.
  • Sarvam AI for Content: Enables multilingual video dubbing with voice cloning and precise audio-visual sync, along with document translation that preserves layout and tone, supported by built-in quality review and editing tools.
  • Sarvam AI for Edge Intelligence: Delivers compact, low-latency multimodal AI for real-world deployment, combining edge and cloud inference to power real-time assistants, on-device NLP, and high-speed translation and summarisation.

Through this integrated architecture, Sarvam AI is not merely building applications but establishing a scalable digital backbone for India’s AI future. By converging infrastructure, language intelligence, enterprise capability, and edge deployment into one sovereign ecosystem, it positions India to innovate independently, deploy responsibly, and compete globally, while ensuring that advanced AI remains accessible, secure, and aligned with national development priorities.

Strategic Partnerships For Public Service Delivery

Sarvam AI’s institutional collaborations are transforming indigenous innovation into measurable public value across India. By working closely with national and state governments, the company is embedding advanced AI capabilities into critical service delivery systems.

UIDAI (Unique Identification Authority of India) partnered with Sarvam AI to enhance Aadhaar services using AI-driven voice interaction, real-time fraud detection, and multilingual support. A custom GenAI stack will operate within UIDAI’s secure, on-premise infrastructure, supporting 10 Indian languages with real-time enrolment feedback and fraud alerts.

The Government of Odisha, in collaboration with Sarvam AI, is establishing a 50MW AI-optimized Sovereign AI Capacity Hub to serve as a national compute backbone. It will support AI use cases in mining, industrial safety, and Odia-language skilling, contributing to the sovereign compute grid.

The Government of Tamil Nadu and IIT Madras, in collaboration with Sarvam, are developing Digital Sangam, India’s first Sovereign AI Research Park, anchored by a 20MW AI data center to integrate advanced compute, research, and startup incubation for large-scale AI applications. Collectively, these initiatives demonstrate how coordinated public partnerships can deploy homegrown AI infrastructure at massive scale.


AMD Poised to Launch New AI Chips, Intensifies Market Rivalry With Nvidia

In a strategic move that underscores the intensifying competition in the artificial intelligence (AI) chip sector, Advanced Micro Devices (AMD) is set to unveil a new lineup of AI processors during an upcoming data center event in San Francisco. This announcement aims to strengthen AMD’s position as a formidable supplier of AI chips in a market that has been predominantly led by Nvidia. The event, scheduled for Thursday, is anticipated to feature details on AMD’s MI325X chip and the next-generation MI350 chip.

The MI350 series is designed to directly compete with Nvidia’s Blackwell architecture, promising enhanced computing power and memory capabilities. This development marks a significant effort by AMD to disrupt Nvidia’s market dominance in the AI chip landscape. AMD first introduced these chips at the Computex trade show in Taiwan in June, with plans for a release in the latter half of this year and into next year.

In addition to the AI chips, AMD is expected to unveil new server central processing units (CPUs) and PC chips that incorporate enhanced AI computing capabilities. This initiative illustrates AMD’s dedication to advancing AI technology and responding to the increasing demand for AI-driven solutions across various sectors.

AMD’s current MI300X AI chip, launched late last year, has experienced a swift uptick in production to meet growing market needs. In July, the company raised its AI chip revenue forecast for the year to $4.5 billion, up from a previous estimate of $4 billion, driven by substantial demand for the MI300X, especially in the realm of generative AI product development.

Market Competition

Despite AMD’s aggressive strategy, analysts suggest that its new product launches are unlikely to significantly impact Nvidia’s data center revenue, given that the demand for AI chips far outstrips supply. AMD is projected to report data center revenue of $12.83 billion this year, according to LSEG estimates, while Nvidia is expected to achieve a staggering $110.36 billion in the same segment. Data center revenue serves as a critical indicator of the demand for AI chips essential for developing and running AI applications.

The competitive landscape for AI chips has been evolving rapidly. Intel, another key player, recently announced its next-generation AI data center chips, the Gaudi 3 accelerator kit, which is priced around $125,000—substantially cheaper than Nvidia’s comparable HGX server system. Meanwhile, Nvidia continues to innovate with its next-generation AI platform, the Rubin platform, slated for release in 2026. This platform will succeed the Blackwell architecture, which has been highly sought after and is expected to remain sold out well into 2025 due to robust demand.

AMD’s Move Toward AI

AMD’s CEO, Lisa Su, has expressed a clear vision for the company’s future, emphasizing that AMD is not seeking to be a niche player in the AI chip market. This statement reflects the company’s ambition to solidify its presence as a major contender alongside established leaders like Nvidia and Intel.

As the AI chip market becomes increasingly competitive, AMD’s upcoming announcement is likely to further fuel this rivalry. With AI technology continuing to evolve and the demand for AI-powered solutions expanding, the market is poised for more innovations and strategic initiatives from industry giants. This dynamic landscape highlights the relentless pursuit of technological advancement in the AI chip arena.

Voice control smart devices might hinder children’s social, emotional development: Study

Voice control smart devices, such as Alexa, Siri, and Google Home, might hinder children’s social and emotional development, argues an expert in the use of artificial intelligence and machine learning in healthcare, in a viewpoint published online in the Archives of Disease in Childhood.

These devices might have long term effects by impeding children’s critical thinking, capacity for empathy and compassion, and their learning skills, says Anmol Arora of the University of Cambridge.

While voice control devices may act as ‘friends’ and help to improve children’s reading and communication skills, their advanced AI and ‘human’ sounding voices have prompted concerns about the potential long term effects on children’s brains at a crucial stage of development.

There are three broad areas of concern, explains the author. These comprise inappropriate responses; impeding social development; and hindering learning.

He cites some well publicised examples of inappropriate responses, including a device suggesting that a 10-year old should try touching a live plug with a coin.


“It is difficult to enforce robust parental controls on such devices without severely affecting their functionality,” he suggests, adding that privacy issues have also arisen in respect of the recording of private conversations.

These devices can’t teach children how to behave politely, because there’s no expectation of a “please” or “thank you”, and no need to consider the tone of voice, he points out.

“The lack of ability to engage in non-verbal communication makes use of the devices a poor method of learning social interaction,” he writes. “While in normal human interactions, a child would usually receive constructive feedback if they were to behave inappropriately, this is beyond the scope of a smart device.”

Preliminary research on the use of voice assistants as social companions for lonely adults is encouraging. But it’s not at all clear if this also applies to children, he notes.

“This is particularly important at a time when children might already have had social development impaired as a result of COVID-19 restrictions and when [they] might have been spending more time isolated with smart devices at home,” he emphasises.

Devices are designed to search for requested information and provide a concise, specific answer, but this may hinder traditional processes by which children learn and absorb information, the author suggests.

When children ask adults questions, the adult can request contextual information, explain the limitations of their knowledge and probe the child’s reasoning—a process that these devices can’t replicate, he says.

Searching for information is also an important learning experience, which teaches critical thinking and logical reasoning, he explains.

“The rise of voice devices has provided great benefit to the population. Their abilities to provide information rapidly, assist with daily activities, and act as a social companion to lonely adults are both important and useful,” the author acknowledges.

“However, urgent research is required into the long-term consequences for children interacting with such devices,” he insists.

“Interacting with the devices at a crucial stage in social and emotional development might have long-term consequences on empathy, compassion, and critical thinking,” he concludes.

 

Mobile phone app accurately detects COVID-19 infection in people’s voices

Artificial intelligence (AI) can be used to detect COVID-19 infection in people’s voices by means of a mobile phone app, according to research to be presented on Monday at the European Respiratory Society International Congress in Barcelona, Spain [1].

The AI model used in this research is more accurate than lateral flow/rapid antigen tests and is cheap, quick and easy to use, which means it can be used in low-income countries where PCR tests are expensive and/or difficult to distribute.

Ms Wafaa Aljbawi, a researcher at the Institute of Data Science, Maastricht University, The Netherlands, told the congress that the AI model was accurate 89% of the time, whereas the accuracy of lateral flow tests varied widely depending on the brand. Also, lateral flow tests were considerably less accurate at detecting COVID infection in people who showed no symptoms.

COVID-19 infection usually affects the upper respiratory tract and vocal cords, leading to changes in a person’s voice.


“These promising results suggest that simple voice recordings and fine-tuned AI algorithms can potentially achieve high precision in determining which patients have COVID-19 infection,” she said. “Moreover, they enable remote, virtual testing and have a turnaround time of less than a minute. They could be used, for example, at the entry points for large gatherings, enabling rapid screening of the population.”

Once the app is installed on a user’s mobile phone, participants report some basic information about demographics, medical history and smoking status, and are then asked to record some respiratory sounds. These include coughing three times, breathing deeply through their mouth three to five times, and reading a short sentence on the screen three times.

The researchers used a voice analysis technique called Mel-spectrogram analysis, which identifies different voice features such as loudness, power and variation over time.
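As a simplified illustration of the idea (not the study’s actual pipeline), a basic short-time Fourier transform already captures how loudness and frequency content vary over time; a Mel-spectrogram adds a perceptual frequency warping on top of this. The sketch below computes a plain magnitude spectrogram with NumPy on a synthetic tone.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Split a 1-D signal into overlapping windowed frames and take |FFT| of each."""
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames).T   # shape: (frequency bins, time frames)

# Toy input: one second of a 440 Hz tone sampled at 8 kHz (telephony rate)
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
# Loudness corresponds to overall magnitude; "variation over time" shows up as
# changes in the magnitudes from one frame (column) to the next.
```

The features named in the article (loudness, power, variation over time) are all derived from representations of this kind.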

“In this way we can decompose the many properties of the participants’ voices,” said Ms Aljbawi. “In order to distinguish the voice of COVID-19 patients from those who did not have the disease, we built different artificial intelligence models and evaluated which one worked best at classifying the COVID-19 cases.”

Its overall accuracy was 89%, its ability to correctly detect positive cases (the true positive rate or “sensitivity”) was 89%, and its ability to correctly identify negative cases (the true negative rate or “specificity”) was 83%.
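For readers unfamiliar with these terms, sensitivity, specificity and accuracy follow directly from the counts in a confusion matrix. The counts below are an invented, balanced sample chosen to reproduce the reported sensitivity and specificity; note that overall accuracy then depends on how many positives and negatives are actually tested, which is why it need not equal the reported 89%.

```python
# Hypothetical counts (not the study's raw data): 100 true positives tested,
# 100 true negatives tested.
tp, fn = 89, 11   # infected people: correctly detected vs missed
tn, fp = 83, 17   # healthy people: correctly cleared vs false alarms

sensitivity = tp / (tp + fn)               # true positive rate
specificity = tn / (tn + fp)               # true negative rate
accuracy = (tp + tn) / (tp + fn + tn + fp)

print(f"sensitivity={sensitivity:.2f}, "
      f"specificity={specificity:.2f}, accuracy={accuracy:.2f}")
```

With this balanced toy sample the accuracy works out to 86%, illustrating that the three reported figures jointly constrain, but do not fix, the mix of cases in the real dataset.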

“These results show a significant improvement in the accuracy of diagnosing COVID-19 compared to state-of-the-art tests such as the lateral flow test,” said Ms Aljbawi.


Emotional AI and gen Z: The attitude towards new technology and its concerns

Artificial intelligence (AI) underpins everything that comes under “smart technology” today. From self-driving cars to voice assistants on our smartphones, AI has a ubiquitous presence in our daily lives. Yet it has long lacked a crucial feature: the ability to engage with human emotions.

The scenario is quickly changing, however. Algorithms that can sense human emotions and interact with them are quickly becoming mainstream as they come embedded in existing systems. Known as “emotional AI,” the new technology achieves this feat through a process called “non-conscious data collection”(NCDC), in which the algorithm collects data on the user’s heart and respiration rate, voice tones, micro-facial expressions, gestures, etc. to analyze their moods and personalize its response accordingly.

However, the unregulated nature of this technology has raised many ethical and privacy concerns. In particular, it is important to know the attitude of the current largest demographic towards NCDC, namely Generation Z (Gen Z). Making up 36% of the global workforce, Gen Z is likely to be the most vulnerable to emotional AI. Moreover, AI algorithms are rarely calibrated for socio-cultural differences, making their implementation all the more concerning.

“We found that being male and having a high income were both correlated with having positive attitudes towards accepting NCDC. In addition, business majors were more likely to be tolerant towards NCDC,” highlights Prof. Ghotbi. Cultural factors, such as region and religion, were also found to have an impact, with people from Southeast Asia, Muslims, and Christians reporting concern over NCDC.

Research by the Team:

“Our study clearly demonstrates that sociocultural factors deeply impact the acceptance of new technology. This means that theories based on the traditional technology acceptance model by Davis, which does not account for these factors, need to be modified,” explains Prof. Mantello.

The study addressed this issue by proposing a “mind-sponge” model-based approach that accounts for socio-cultural factors in assessing the acceptance of AI technology. Additionally, it also suggested a thorough understanding of the potential risks of the technology to enable effective governance and ethical design. “Public outreach initiatives are needed to sensitize the population about the ethical implications of NCDC. These initiatives need to consider the demographic and cultural differences to be successful,” says Dr. Nguyen.

Overall, the study highlights the extent to which emotional AI and NCDC technologies are already present in our lives and the privacy trade-offs they imply for the younger generation. Thus, there is an urgent need to make sure that these technologies serve both individuals and societies well.

Italian team develops superior AI model for stock trading

Using the science of convolutional neural networks (CNNs) with deep learning – a discipline within artificial intelligence, Italian researchers have developed a system of market forecasting with the potential for greater gains and fewer losses than previous attempts to use AI methods to manage stock portfolios.

The team, led by Prof. Silvio Barra at the University of Cagliari, published its findings in the IEEE/CAA Journal of Automatica Sinica. The researchers set out to create an AI-managed “buy and hold” (B&H) strategy: a system that decides each day among three possible actions – a long action (buying a stock and selling it before the market closes), a short action (selling a stock, then buying it back before the market closes), or a hold (deciding not to invest in a stock that day).

At the heart of their proposed system is an automated cycle that analyses layered images generated from current and past market data. This sets it apart from older B&H systems based on machine learning, a discipline that leans heavily on predictions drawn from past performance alone.

Just like a seasoned investor

By letting their proposed network analyze current data layered over past data, they are able to take market forecasting a step further, allowing for a type of learning that more closely mirrors the intuition of a seasoned investor rather than a robot. Their proposed network can adjust its buy/sell thresholds based on what is happening both in real time and the past. Taking into account present-day factors increases the yield over both random guessing and trading algorithms not capable of real-time learning, they said.

To train their CNN for the experiment, the research team used S&P 500 data from 2009 to 2016. The S&P 500 is widely regarded as a litmus test for the health of the overall market.

At first, their proposed trading system predicted the market with about 50 percent accuracy, roughly enough to break even in a real-world situation. They discovered that short-term outliers, which unexpectedly over- or underperformed, generated a factor they called “randomness”. Realizing this, they added threshold controls, which ended up greatly stabilizing their method.
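A minimal sketch of the threshold idea, with made-up signal values and a made-up cutoff (the paper’s actual thresholds and network outputs are not reported here): act only when the model’s prediction is confidently away from zero, otherwise hold.

```python
def decide(predicted_return, threshold=0.01):
    """Map a model's predicted daily return to a long/short/hold action.

    The 1% cutoff is an illustrative assumption, not the paper's value.
    """
    if predicted_return > threshold:
        return "long"    # buy, then sell before the market closes
    if predicted_return < -threshold:
        return "short"   # sell, then buy back before the market closes
    return "hold"        # signal too weak: sit out rather than trade on noise

# Hypothetical daily prediction signals from a CNN
signals = [0.03, 0.002, -0.04, -0.005]
actions = [decide(s) for s in signals]
# Weak signals (|r| <= threshold) become holds, filtering out the small,
# unreliable predictions that the authors attributed to "randomness".
```

This is how thresholding can produce the asymmetry Prof. Barra describes: the system simply declines the marginal bets that drive small, frequent losses.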

“The mitigation of randomness yields two simple, but significant consequences,” Prof. Barra said. “When we lose, we tend to lose very little, and when we win, we tend to win considerably.” However, further enhancements will be needed, said Prof. Barra, as other methods of automated trading already in use make markets more and more difficult to predict.

Artificial intelligence: Is this the future of early cancer detection?

A new endoscopic system powered by artificial intelligence (AI) has today been shown to automatically identify colorectal adenomas during colonoscopy. The system, developed in Japan, has recently been tested in one of the first prospective trials of AI-assisted endoscopy in a clinical setting, with the results presented today at the 25th UEG Week in Barcelona, Spain.

AI-assisted endocytoscopy – how it works:

The new computer-aided diagnostic system uses an endocytoscopic image – a 500-fold magnified view of a colorectal polyp – to analyse approximately 300 features of the polyp after applying narrow-band imaging (NBI) mode or staining with methylene blue. The system compares the features of each polyp against more than 30,000 endocytoscopic images that were used for machine learning, allowing it to predict the lesion pathology in less than a second. Preliminary studies demonstrated the feasibility of using such a system to classify colorectal polyps; however, until now, no prospective studies had been reported.
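The article does not specify the comparison algorithm, but a nearest-neighbour vote over labelled feature vectors is one simple way such a feature-library comparison can work; everything below (library size, feature values, labels) is synthetic, and the real system’s classifier may differ entirely.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for a labelled reference library of ~300-feature vectors
library = rng.normal(size=(1000, 300))      # one row per reference image
labels = rng.integers(0, 2, size=1000)      # 0 = non-neoplastic, 1 = neoplastic

def predict(features, k=15):
    """Classify a 300-feature vector by majority vote of its k nearest neighbours."""
    dists = np.linalg.norm(library - features, axis=1)
    nearest = labels[np.argsort(dists)[:k]]
    return int(nearest.sum() * 2 > k)       # 1 if most neighbours are neoplastic

# A query resembling a known reference vector, with a little noise added
query = library[0] + rng.normal(scale=0.1, size=300)
pred = predict(query)
```

Because the lookup is just distance computation and sorting, a sub-second prediction time for each polyp is entirely plausible even against a 30,000-image library.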

Prospective study in routine practice:

The prospective study, led by Dr Yuichi Mori from Showa University in Yokohama, Japan, involved 250 men and women in whom colorectal polyps had been detected using endocytoscopy. The AI-assisted system was used to predict the pathology of each polyp, and those predictions were compared with the pathological report obtained from the final resected specimens. Overall, 306 polyps were assessed in real time using the AI-assisted system, providing a sensitivity of 94%, specificity of 79%, accuracy of 86%, and positive and negative predictive values of 79% and 93% respectively, in identifying neoplastic changes.

Speaking at the Opening Plenary at UEG Week, Dr Mori explained: “The most remarkable breakthrough with this system is that artificial intelligence enables real-time optical biopsy of colorectal polyps during colonoscopy, regardless of the endoscopists’ skill. This allows the complete resection of adenomatous polyps and prevents unnecessary polypectomy of non-neoplastic polyps.”

“We believe these results are acceptable for clinical application and our immediate goal is to obtain regulatory approval for the diagnostic system” added Dr Mori.

Moving forwards, the research team is now undertaking a multicentre study for this purpose and the team are also working on developing an automatic polyp detection system. “Precise on-site identification of adenomas during colonoscopy contributes to the complete resection of neoplastic lesions” said Dr Mori. “This is thought to decrease the risk of colorectal cancer and, ultimately, cancer-related death.”

AI implications: Engineer’s model lays groundwork for machine-learning device

In what could be a small step for science potentially leading to a breakthrough, an engineer at Washington University in St. Louis has taken steps toward using nanocrystal networks for artificial intelligence applications.

Elijah Thimsen, assistant professor of energy, environmental & chemical engineering in the School of Engineering & Applied Science, and his collaborators have developed a model in which to test existing theories about how electrons move through nanomaterials. This model may lay the foundation for using nanomaterials in a machine learning device.

“When one builds devices out of nanomaterials, they don’t always behave like they would for a bulk material,” Thimsen said. “One of the things that changes dramatically is the way in which these electrons move through material, called the electron transport mechanism, but it’s not well understood how that happens.”

Thimsen and his team based the model on an unusual theory that every nanoparticle in a network is a node that is connected to every other node, not only its immediate neighbors. Equally unusual is that the current flowing through the nodes doesn’t necessarily occupy the spaces between the nodes — it needs only to pass through the nodes themselves. This behavior, which is predicted by the model, produces experimentally observable current hotspots at the nanoscale, the researcher said.

In addition, the team looked at another model called a neural network, based on the human brain and nervous system. Scientists have been working to build new computer chips to emulate these networks, but these chips fall far short of the human brain, which contains up to 100 billion nodes and 10,000 connections per node.

“If we have a huge number of nodes — much larger than anything that exists — and a huge number of connections, how do we train it?” Thimsen asks. “We want to get this large network to perform something useful, such as a pattern-recognition task.”

Based on those network theories, Thimsen has proposed an initial project to design a simple chip, give it particular inputs and study the outputs.

“If we treat it as a neural network, we want to see if the output from the device will depend on the input,” Thimsen said. “Once we can prove that, we’ll take the next step and propose a new device that allows us to train this system to perform a simple pattern-recognition task.”
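The "give it inputs, study the outputs, then train it" loop Thimsen describes is the standard supervised-learning recipe. As a minimal, generic sketch (nothing to do with the nanocrystal device itself), a single perceptron can learn a simple pattern-recognition task of the kind he mentions; all names here are illustrative.

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge weights toward each misclassified example."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Two easily separable "patterns": points near (0, 0) labelled 0, near (1, 1) labelled 1.
samples = [(0.0, 0.1), (0.1, 0.0), (0.9, 1.0), (1.0, 0.9)]
labels = [0, 0, 1, 1]
w, b = train_perceptron(samples, labels)

def classify(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print(classify(0.05, 0.05), classify(0.95, 0.95))  # -> 0 1
```

The proposed device would replace these explicit weight updates with whatever physical training mechanism the nanocrystal network supports, but the input-output-adjust cycle is the same.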

Empowering robots for ethical behavior

Scientists at the University of Hertfordshire in the UK have developed a concept called Empowerment to help robots to protect and serve humans, while keeping themselves safe.

Robots are becoming more common in our homes and workplaces and this looks set to continue. Many robots will have to interact with humans in unpredictable situations. For example, self-driving cars need to keep their occupants safe, while protecting the car from damage. Robots caring for the elderly will need to adapt to complex situations and respond to their owners’ needs.

Recently, thinkers such as Stephen Hawking have warned about the potential dangers of artificial intelligence, and this has sparked public discussion. “Public opinion seems to swing between enthusiasm for progress and downplaying any risks, to outright fear,” says Daniel Polani, a scientist involved in the research, which was recently published in Frontiers in Robotics and AI.

However, the concept of “intelligent” machines running amok and turning on their human creators is not new. In 1942, science fiction writer Isaac Asimov proposed his three laws of robotics, which govern how robots should interact with humans. Put simply, these laws state that a robot should not harm a human, or allow a human to be harmed. The laws also aim to ensure that robots obey orders from humans, and protect their own existence, as long as this doesn’t cause harm to a human.

The laws are well-intentioned, but they are open to misinterpretation, especially as robots don’t understand nuanced and ambiguous human language. In fact, Asimov’s stories are full of examples where robots misinterpreted the spirit of the laws, with tragic consequences.

One problem is that the concept of “harm” is complex, context-specific and difficult to explain clearly to a robot. If a robot doesn’t understand “harm”, how can it avoid causing it? “We realized that we could use different perspectives to create ‘good’ robot behavior, broadly in keeping with Asimov’s laws,” says Christoph Salge, another scientist involved in the study.

The concept the team developed is called Empowerment. Rather than trying to make a machine understand complex ethical questions, it is based on robots always seeking to keep their options open. “Empowerment means being in a state where you have the greatest potential influence on the world you can perceive,” explains Salge. “So, for a simple robot, this might be getting safely back to its power station, and not getting stuck, which would limit its options for movement. For a more futuristic, human-like robot this would not just include movement, but could incorporate a variety of parameters, resulting in more human-like drives.”
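Salge's "not getting stuck, which would limit its options for movement" can be made concrete with a deliberately simplified proxy (the team's actual formalism is information-theoretic, not shown here): count how many distinct states an agent can reach within a few moves. An agent in open floor keeps more options than one boxed into a corner. The `reachable_states` helper below is a hypothetical illustration.

```python
def reachable_states(start, n_steps, size=5, blocked=frozenset()):
    """All cells reachable from `start` in at most n_steps up/down/left/right moves."""
    frontier = {start}
    seen = {start}
    for _ in range(n_steps):
        nxt = set()
        for (x, y) in frontier:
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                cell = (x + dx, y + dy)
                if 0 <= cell[0] < size and 0 <= cell[1] < size and cell not in blocked:
                    nxt.add(cell)
        frontier = nxt - seen
        seen |= nxt
    return seen

# Crude "empowerment" proxy: how many cells are within two moves?
open_floor = len(reachable_states((2, 2), 2))  # centre of a 5x5 grid
corner = len(reachable_states((0, 0), 2))      # boxed into a corner
print(open_floor, corner)  # -> 13 6
```

Under this proxy, an empowerment-seeking robot would prefer the centre cell, and a robot maintaining a *human's* empowerment would avoid actions that shrink the human's reachable set, which is the extension described next.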

The team mathematically coded the Empowerment concept, so that it can be adopted by a robot. While the researchers originally developed the Empowerment concept in 2005, in a recent key development, they expanded the concept so that the robot also seeks to maintain a human’s Empowerment. “We wanted the robot to see the world through the eyes of the human with which it interacts,” explains Polani. “Keeping the human safe consists of the robot acting to increase the human’s own Empowerment.”

“In a dangerous situation, the robot would try to keep the human alive and free from injury,” says Salge. “We don’t want to be oppressively protected by robots to minimize any chance of harm, we want to live in a world where robots maintain our Empowerment.”

This altruistic Empowerment concept could power robots that adhere to the spirit of Asimov’s three laws, from self-driving cars, to robot butlers. “Ultimately, I think that Empowerment might form an important part of the overall ethical behaviour of robots,” says Salge.