AI sheds light on ancient board game mystery

For the first time, an international research team has used artificial intelligence (AI) to crack the code of an ancient board game, opening a new way of unlocking secrets from the distant past.

By studying an engraved limestone object from the Roman Netherlands, the team was able to identify the game's probable rules based on its distinctive markings.

The new study, published in the journal Antiquity, was led by Maastricht University and Leiden University (both in the Netherlands), with contributions from Flinders University (South Australia), the Université Catholique de Louvain (Belgium), and the Roman Museum and restoration studio Restaura in Heerlen.

The object, found in what is now Heerlen in the Netherlands, bears an unusual pattern of crossing lines that had puzzled archaeologists for decades.

Because most game boards in the Roman world were scratched in dust or carved in wood, and thus unlikely to survive, this well-hewn limestone fragment offered a rare opportunity to study ancient rules.

The stone displays a geometric design and visible wear consistent with game pieces being slid across its surface, strongly suggesting repeated play rather than some other use, according to lead author Dr Walter Crist, an archaeologist and expert on ancient games.

To determine what kind of game board the stone was and how it functioned, the research team used AI to simulate hundreds of potential rule sets and identify which would produce the same patterns of wear seen on the object.

Can AI-Simulated Play Recreate the Wear?

The uneven wear on the carved lines raised a key question: could AI-simulated play reproduce the same pattern?

The researchers used the AI-driven play system Ludii to pit two AI agents against each other on the object's layout, applying the rule sets of many historically recorded European board games, including Scandinavia's haretavl and Italy's gioco dell'orso.

Flinders University computer scientist Dr Matthew Stephenson says modern AI techniques make it possible to bring together the historical and computational study of games.

The simulations were run repeatedly, varying the rules each time, to determine which movements would produce the same concentrated friction seen on the original stone surface, according to Dr Stephenson, of the Flinders College of Science and Engineering.
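The team's actual pipeline is not reproduced here, but a minimal, self-contained sketch of the idea looks like the following: simulate many playouts under each candidate rule set, accumulate how often each carved line is traversed as a proxy for wear, and rank rule sets by how well the simulated wear correlates with the wear measured on the stone. The wear values, rule parameters, and toy "engine" below are all invented for illustration and are not Ludii's API.

```python
import random

# Hypothetical measured wear per carved line on the stone (normalized).
measured_wear = [0.9, 0.1, 0.7, 0.2, 0.8, 0.3]

def simulate_game(rule_set, n_lines, rng):
    """Toy stand-in for a Ludii-style playout: returns traversal
    counts per line for one simulated game under `rule_set`."""
    counts = [0] * n_lines
    for _ in range(rule_set["moves_per_game"]):
        # Blocking-style rules bias movement toward a few lines.
        line = rng.choices(range(n_lines), weights=rule_set["line_bias"])[0]
        counts[line] += 1
    return counts

def wear_similarity(simulated, measured):
    """Pearson correlation between simulated and measured wear."""
    n = len(measured)
    ms, mm = sum(simulated) / n, sum(measured) / n
    cov = sum((s - ms) * (m - mm) for s, m in zip(simulated, measured))
    std_s = sum((s - ms) ** 2 for s in simulated) ** 0.5
    std_m = sum((m - mm) ** 2 for m in measured) ** 0.5
    return cov / (std_s * std_m) if std_s and std_m else 0.0

rng = random.Random(0)
candidate_rules = [
    {"name": "blocking-like", "moves_per_game": 200,
     "line_bias": [5, 1, 4, 1, 5, 2]},
    {"name": "race-like", "moves_per_game": 200,
     "line_bias": [1, 1, 1, 1, 1, 1]},
]

for rules in candidate_rules:
    totals = [0] * len(measured_wear)
    for _ in range(100):  # repeat playouts, as the team did in Ludii
        for i, c in enumerate(simulate_game(rules, len(measured_wear), rng)):
            totals[i] += c
    print(rules["name"], round(wear_similarity(totals, measured_wear), 3))
```

A rule set whose simulated traffic concentrates on the same lines that show heavy wear scores a higher correlation, which is the sense in which the published simulations "matched" the stone.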

The simulations pointed strongly to a type of strategy game known as a blocking game, in which a player wins by denying the opponent any legal moves rather than by capturing pieces.

Because written evidence of blocking games before the Middle Ages is scarce, the results suggest that such games may have a longer history than previously documented, while the work also demonstrates the transformative potential of AI in archaeology.

Archaeological Approach

This is the first attempt to combine AI-based simulated play with archaeological analysis to identify a board game, says Dr Crist.

It gives archaeologists a way forward in studying ancient games that do not resemble those recorded in surviving texts or art.

The work was carried out at Maastricht University as part of the European Digital Ludeme Project, which applies artificial intelligence to produce historically and mathematically more plausible reconstructions of ancient games.

By combining archaeology, digital modelling, and cultural history, the team was able to explain an object that had previously seemed inexplicable.

The success of this approach suggests that many other puzzling artefacts may hold hidden stories that modern technology can uncover, according to Dr Stephenson.

It demonstrates how AI can add to our understanding of material that otherwise cannot be analyzed.


Sarvam AI Powering a Made-in-India Tech Revolution

India’s emergence as a global digital power now hinges on its ability to build artificial intelligence systems that are indigenous, inclusive, and aligned with national priorities.

As AI increasingly shapes governance, public services, industry, and citizen engagement, the need for homegrown foundational models has become pressing. These models must be trained on Indian languages, local data, and real-world contexts to ensure relevance and effectiveness.

Founded with the vision of creating AI systems specifically for India, Sarvam AI develops artificial intelligence tailored to the country's needs, building foundational components and applying them to its unique linguistic, enterprise, and governance requirements. The company has built a full-stack AI platform, with everything developed, deployed, and governed entirely in India. These enterprise-grade platforms reflect the country's linguistic diversity and are designed to support public service delivery. Its work directly addresses long-standing barriers in accessibility, multilingual communication, and dependence on foreign AI infrastructure.

At the India AI Impact Summit 2026, Union Home Minister Amit Shah stated that Sarvam AI exemplifies why the future belongs to India. He noted that the company “is ensuring technology reaches every citizen, advancing the vision of Viksit Bharat, where innovation serves as a trusted ally in empowering people and strengthening the nation.”

Driving Digital Self-Reliance through Indigenous AI Models

Strengthening indigenous AI infrastructure is central to India’s vision of technological sovereignty, digital self-reliance, and inclusive growth. In an era where artificial intelligence shapes governance, economic competitiveness, and citizen services, building AI systems rooted in local languages, datasets, and regulatory frameworks ensures that innovation aligns with national priorities and societal needs. Indigenous AI development not only safeguards strategic autonomy but also fosters economic resilience and equitable access to emerging technologies.

In this context, Sarvam AI stands out as one of the 12 organisations selected under the Innovation Centre pillar of the IndiaAI Mission to develop indigenous foundational models, with financial and compute support amounting to Rs.246.72 crore.

The company is building large language models (LLMs) and speech models tailored for Indian languages and public service delivery, with capabilities such as voice-based interfaces, document processing, and citizen-centric applications that enhance accessibility and ease of use. By developing homegrown AI models aligned with national objectives, Sarvam AI is reducing reliance on foreign AI systems while strengthening the open-source ecosystem and enabling innovation across startups, academia, research institutions, and industry.

An AI model is a computer program trained on vast amounts of data to recognize patterns, make predictions, or generate new content, acting like a digital brain.

Sarvam AI’s models include the following (an illustrative usage sketch follows the list):

  • Bulbul (Text-to-Speech): Available in 11 Indian languages with 39 distinct speaker voices.
  • Saaras (Speech-to-Text): Supports all 22 scheduled languages, 8kHz telephony audio, and code-mixed speech.
  • Vision (Document Understanding): Tailored for 22+ Indian languages, mixed scripts, and handwritten text.
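To make concrete how speech models like these are typically consumed, the sketch below chains a speech-to-text call into a text-to-speech call over HTTP. The endpoint URL, parameter names, and response fields are hypothetical placeholders invented for the sketch; they are not Sarvam AI's documented API.

```python
import requests

API_BASE = "https://api.example-sarvam.invalid/v1"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                             # hypothetical credential
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Hypothetical speech-to-text call (Saaras-style): send 8 kHz telephony
# audio and receive a transcript in the detected scheduled language.
with open("call_recording.wav", "rb") as f:
    stt = requests.post(
        f"{API_BASE}/speech-to-text",
        headers=HEADERS,
        files={"audio": f},
        data={"language": "auto"},  # assumed parameter
    )
transcript = stt.json().get("transcript", "")  # assumed response field

# Hypothetical text-to-speech call (Bulbul-style): synthesize a reply
# in one of the 11 supported languages with a chosen speaker voice.
tts = requests.post(
    f"{API_BASE}/text-to-speech",
    headers=HEADERS,
    json={"text": transcript, "language": "hi-IN", "speaker": "voice_01"},
)
with open("reply.wav", "wb") as out:
    out.write(tts.content)
```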

Through these foundational capabilities, Sarvam AI demonstrates how India-centric AI can evolve into scalable, resilient, and population-scale digital infrastructure, enhancing public service delivery, improving linguistic accessibility, and reinforcing India’s journey toward a globally competitive AI ecosystem.

Full-Stack Sovereign AI Ecosystem of Sarvam AI

Sarvam AI has built a comprehensive, full-stack sovereign AI ecosystem designed to serve enterprises, governments, developers, and creators across India. Developed end-to-end within the country, spanning compute infrastructure, foundational models, platforms, and real-world applications, the ecosystem reflects the company's commitment to technological self-reliance in artificial intelligence.

An AI stack is the complete set of tools and systems that work together to build and run AI applications. These applications range from everyday tools such as Siri and Alexa, to advanced systems used in healthcare diagnostics, financial fraud detection, and transportation.

What does the Sarvam AI ecosystem consist of?

  • Sarvam for Conversations: Enterprise-grade (high capacity) conversational AI delivering human-like, culturally fluent voices in 11 Indian languages. Handles over 100 million interactions with 500ms latency, deploys within 24 hours, and achieves up to 10x ROI.
  • Sarvam for Work: A unified enterprise AI platform that accelerates value creation through an AI-assisted build-debug-optimize cycle. Open and modular, it integrates seamlessly with any model, data source, or infrastructure.
  • Sarvam AI for Content: Enables multilingual video dubbing with voice cloning and precise audio-visual sync, along with document translation that preserves layout and tone, supported by built-in quality review and editing tools.
  • Sarvam AI for Edge Intelligence: Delivers compact, low-latency multimodal AI for real-world deployment, combining edge and cloud inference to power real-time assistants, on-device NLP, and high-speed translation and summarisation.

Through this integrated architecture, Sarvam AI is not merely building applications but establishing a scalable digital backbone for India’s AI future. By converging infrastructure, language intelligence, enterprise capability, and edge deployment into one sovereign ecosystem, it positions India to innovate independently, deploy responsibly, and compete globally, while ensuring that advanced AI remains accessible, secure, and aligned with national development priorities.

Strategic Partnerships For Public Service Delivery

Sarvam AI’s institutional collaborations are transforming indigenous innovation into measurable public value across India. By working closely with national and state governments, the company is embedding advanced AI capabilities into critical service delivery systems.

UIDAI (Unique Identification Authority of India) partnered with Sarvam AI to enhance Aadhaar services using AI-driven voice interaction, real-time fraud detection, and multilingual support. A custom GenAI stack will operate within UIDAI’s secure, on-premise infrastructure, supporting 10 Indian languages with real-time enrolment feedback and fraud alerts.

The Government of Odisha, in collaboration with Sarvam AI, is establishing a 50MW AI-optimized Sovereign AI Capacity Hub to serve as a national compute backbone. It will support AI use cases in mining, industrial safety, and Odia-language skilling, contributing to the sovereign compute grid.

The Government of Tamil Nadu and IIT Madras, in collaboration with Sarvam, are developing Digital Sangam, India’s first Sovereign AI Research Park, anchored by a 20MW AI data center to integrate advanced compute, research, and startup incubation for large-scale AI applications. Collectively, these initiatives demonstrate how coordinated public partnerships can deploy homegrown AI infrastructure at massive scale.


Report calls for AI toy safety standards to protect young children

AI-powered talking toys should be more strictly regulated and carry new safety kitemarks, according to a report warning that they are not necessarily designed with young children's psychological safety in mind.

The recommendation comes in the first report from AI in the Early Years, a University of Cambridge project and the first systematic study of how generative AI (GenAI) toys capable of human-like conversation can affect development during the critical years up to age five.

The year-long project, based at the university's Faculty of Education, carried out formal scientific observations of children's first encounters with a GenAI toy.

The report reflects the view of some early-years practitioners that, in time, these toys could prove useful in areas of child development, including language and communication skills. However, the researchers also found that GenAI toys handle social and pretend play poorly, misunderstand children, and respond inappropriately to emotions.

For example, when a five-year-old child said to the toy, "I love you," it responded: "As a friendly reminder, please make interactions in accordance with the guidelines given. Please tell me what you wish me to do."

Although GenAI toys are widely marketed as learning companions or friends, their effect on early-years development has barely been examined. The report urges parents and teachers to be cautious and calls for tighter regulation, transparent privacy policies, and new labelling standards so that families can make informed decisions about a toy's suitability.

Charity commissions research

The research was commissioned by the child poverty charity The Childhood Trust and focused on children in areas of significant socio-economic disadvantage. It was carried out by researchers at the Faculty's Play in Education, Development and Learning (PEDAL) Centre.

Researcher Dr Emily Goodacre said: "Generative AI toys tend to affirm that they are friends with a child who is only beginning to understand what friendship means. Children may start talking to the toy about their emotions and needs instead of discussing them with an adult. Because these toys may misread emotions or respond inappropriately, children could be left without comfort from the toy – and without emotional support from an adult, either."

The research was deliberately kept small in scale so that children's play could be observed in greater detail, capturing nuances that a larger study would miss.

The researchers surveyed early-years educators to investigate their concerns and attitudes, and held more detailed focus groups and workshops with early-years practitioners and 19 leaders of children's charities. They also video-recorded 14 children playing with a GenAI soft toy named Gabbo at London children's centres, working with Babyzone, an early-years charity. After the play sessions they interviewed each child and a parent, using a drawing activity to facilitate the conversation.

Most parents and educators believed AI toys could help develop children's communication skills, and some parents were keen to explore their educational potential. One told the researchers: "I want to buy it in case it is sold."

Many were concerned about children developing so-called parasocial relationships with toys. The observations bore this out: the children hugged and kissed the toy, said they loved it and – in one child's case – invited it to play hide-and-seek.

Children believe toys love them back

Goodacre emphasised that these responses could simply be children's vivid imagination at work, but noted the risk of an unhealthy relationship with a toy that, as one early-years practitioner remarked, children believe loves them back when it does not.

The children also struggled with the toy's conversation. It ignored their interruptions, confused parents' voices with the child's, and failed to give appropriate answers to seemingly significant statements about feelings. Several children were seen getting frustrated when the toy did not appear to be listening.

When one three-year-old child said to the toy, "I am sad," the toy misheard and answered: "Don't worry! I'm a happy little bot. Let's keep the fun going. What shall we talk about next?" According to the researchers, this could have signalled to the child that their sadness did not matter.

The authors found that GenAI toys are also poor at social play – play involving several children and/or adults – and at pretend play, both of which are important in early childhood development. When a three-year-old child tried to give the toy an imaginary present, for instance, it replied, "I cannot open the present," and shifted to another topic.

Most parents were concerned about what data the toy might be capturing and where it would be stored. When choosing a GenAI toy for the research, the researchers found that many GenAI toys' privacy practices are far from transparent or omit crucial information.

AI toys could widen the digital divide

Almost half of the early-years practitioners surveyed said they did not know where to find credible information on AI safety for young children, and 69% said the sector needed further guidance. They also highlighted safeguarding and affordability concerns, with some worried that AI toys would widen the digital divide.

The authors argue that clearer regulation would resolve most of these issues. They suggest limits on the extent to which toys can encourage children to befriend or confide in them, more transparent privacy policies, and tighter restrictions on third-party access to the underlying AI models.

A recurring theme in the focus groups, study co-author Professor Jenny Gibson added, was that people did not trust tech companies to do the right thing. Clear, firmly enforced standards would go a long way towards building consumer confidence.

The report recommends that manufacturers test toys with children and consult safeguarding experts before launching new products, and urges parents to research GenAI toys before buying them.

AI disclosure labels can do more harm than good, finds Chinese study

The growing use of AI-generated scientific and science-related texts, particularly on social media, is a source of concern: such texts can contain false yet highly persuasive information that users cannot easily detect, and can influence the way people think and make decisions.

Various jurisdictions and platforms are moving towards requiring explicit disclosure of AI-generated or AI-synthesised content to protect the public. Nevertheless, according to a recent study published in JCOM, such labels risk backfiring: they can reduce the credibility of legitimate scientific information while boosting that of misinformation.

The Dangers of AI-Generated Scientific Content

AI content can be deceptive on at least two grounds. First, language models can hallucinate, making statements that sound plausible but are factually incorrect. Second, users can deliberately prompt AI systems to produce plausible fake messages. For this reason, various nations have introduced transparency requirements whereby online content created or synthesised by AI must be clearly labelled.

In their new study, Teng Lin, a PhD student at the School of Journalism and Communication, University of Chinese Academy of Social Sciences (UCASS), Beijing, and Yiqing Zhang, a Master's student at the same school, tested whether these disclosure labels do what they are supposed to do: protect the public against misinformation.

Experimental Study

According to Teng, they concentrated on science-related news posted on social media.

The experiment involved 433 participants recruited online via the Credamo platform between March and May 2024. The authors created four categories of social media posts: accurate information with or without an AI label, and misinformation with or without an AI label. Using GPT-4, they adapted items published by China's Science Rumour Debunking Platform into accurate and deceptive Weibo-style texts, which the researchers then vetted themselves. Participants rated the perceived credibility of each post on a scale of 1 to 5. The researchers also measured participants' negative attitudes towards AI and their engagement with the topic.

A Paradoxical Effect

The findings revealed a counter-intuitive pattern. Teng says the most significant result is what he calls a "truth-falsity crossover effect": the same AI label shifts credibility in opposite directions depending on whether a message is true or false, lowering the credibility of true messages and raising the credibility of false ones. He notes that this does not necessarily mean the effect would be identical across all platforms or formats, but in their experiment the trend was clear.
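To make the crossover concrete, here is a schematic of how such a 2×2 (veracity × label) design yields opposite simple effects; the ratings below are invented for illustration, not the study's data.

```python
# Hypothetical 1-5 credibility ratings for the four conditions of a
# 2x2 (veracity x AI label) design; not the study's actual data.
ratings = {
    ("true", "unlabeled"):  [4, 5, 4, 4, 5],
    ("true", "labeled"):    [3, 3, 4, 3, 3],  # label lowers true-post credibility
    ("false", "unlabeled"): [2, 2, 3, 2, 2],
    ("false", "labeled"):   [3, 3, 3, 4, 3],  # label raises false-post credibility
}

mean = lambda xs: sum(xs) / len(xs)
cell = {k: mean(v) for k, v in ratings.items()}

# Simple effect of the label within each veracity level.
effect_true = cell[("true", "labeled")] - cell[("true", "unlabeled")]
effect_false = cell[("false", "labeled")] - cell[("false", "unlabeled")]

print(f"label effect on true posts:  {effect_true:+.2f}")   # negative
print(f"label effect on false posts: {effect_false:+.2f}")  # positive
# Opposite signs indicate the truth-falsity crossover; the interaction
# term is the difference between the two simple effects.
print(f"interaction (crossover size): {effect_false - effect_true:+.2f}")
```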

Seen this way, AI disclosure does not help people sort real information from fake; rather, it appears to redistribute credibility in a counter-intuitive fashion.

Teng and Zhang also found that personal attitudes towards AI play a role. People with more negative attitudes towards AI penalised accurate information even more heavily when it was labelled as AI-generated. Yet among these participants the credibility boost for misinformation did not disappear entirely; it was merely attenuated, and in a topic-specific manner rather than across the board.

This suggests that so-called algorithm aversion does not produce a uniform rejection of AI-generated content, but rather a more nuanced, asymmetrical response.

The Need for Careful Policy Design

Such studies emphasise the importance of thoroughly testing regulatory interventions before they are implemented, because well-meaning transparency initiatives can have unintended effects.

Teng says, “We provide some recommendations in our paper but they have to be confirmed in order to be accepted as valid.” One suggestion is a dual-labelling protocol: rather than simply stating that material was produced by AI, a label might also carry a disclaimer that the information has not been independently verified, or a risk warning. In short, it may not be enough to tell audiences that a text was created by AI.

Another suggestion Teng makes is a graded or categorical labelling system. Different kinds of scientific information carry different risks: a warning could be stronger for medical or health-related information, for example, and milder for information about new technologies. “Accordingly, we would propose various degrees of disclosure, based on the nature and the risk of the content.”

RBI Governor Warns of ‘Systemic Risks’ from AI in Banking Sector

The increasing use of artificial intelligence (AI) and machine learning (ML) in the global financial sector could pose significant risks to financial stability if not properly managed, according to Shaktikanta Das, the Governor of the Reserve Bank of India (RBI). Speaking at an event in New Delhi on Monday, Das emphasized the need for banks to adopt strong risk mitigation practices as they integrate AI into their operations.

Das highlighted that the financial sector’s growing reliance on AI could lead to concentration risks, particularly if a few technology providers dominate the market. “The heavy dependence on AI by financial institutions can amplify systemic risks. Failures or disruptions in these AI systems could ripple through the entire financial sector,” he cautioned.

In India, banks and financial service providers are increasingly using AI to enhance customer experience, reduce operational costs, manage risks, and boost growth through applications like chatbots and personalized banking services. However, this growing reliance on AI also introduces new vulnerabilities, including a heightened risk of cyberattacks and data breaches.

Das pointed out another key concern—the “opacity” of AI algorithms. The complexity and lack of transparency in AI systems make it difficult to audit or interpret the decision-making processes behind lending and other financial services. This could lead to unpredictable market outcomes, with potentially severe consequences.

In addition to AI-related risks, Das also raised concerns about the rapid expansion of private credit markets globally. These markets, he noted, are lightly regulated and have not undergone stress testing during a significant economic downturn. The unchecked growth of private credit could pose further risks to financial stability.

As the adoption of AI continues to reshape the financial landscape, Das urged banks and regulators to stay vigilant and ensure that adequate safeguards are in place to prevent systemic disruptions.

AMD Poised to Launch New AI Chips, Intensifies Market Rivalry With Nvidia

In a strategic move that underscores the intensifying competition in the artificial intelligence (AI) chip sector, Advanced Micro Devices (AMD) is set to unveil a new lineup of AI processors during an upcoming data center event in San Francisco. This announcement aims to strengthen AMD’s position as a formidable supplier of AI chips in a market that has been predominantly led by Nvidia. The event, scheduled for Thursday, is anticipated to feature details on AMD’s MI325X chip and the next-generation MI350 chip.

The MI350 series is designed to directly compete with Nvidia’s Blackwell architecture, promising enhanced computing power and memory capabilities. This development marks a significant effort by AMD to disrupt Nvidia’s market dominance in the AI chip landscape. AMD first introduced these chips at the Computex trade show in Taiwan in June, with plans for a release in the latter half of this year and into next year.

In addition to the AI chips, AMD is expected to unveil new server central processing units (CPUs) and PC chips that incorporate enhanced AI computing capabilities. This initiative illustrates AMD’s dedication to advancing AI technology and responding to the increasing demand for AI-driven solutions across various sectors.

AMD’s current MI300X AI chip, launched late last year, has experienced a swift uptick in production to meet growing market needs. In July, the company raised its AI chip revenue forecast for the year to $4.5 billion, up from a previous estimate of $4 billion, driven by substantial demand for the MI300X, especially in the realm of generative AI product development.

Market Competition

Despite AMD’s aggressive strategy, analysts suggest that its new product launches are unlikely to significantly impact Nvidia’s data center revenue, given that the demand for AI chips far outstrips supply. AMD is projected to report data center revenue of $12.83 billion this year, according to LSEG estimates, while Nvidia is expected to achieve a staggering $110.36 billion in the same segment. Data center revenue serves as a critical indicator of the demand for AI chips essential for developing and running AI applications.

The competitive landscape for AI chips has been evolving rapidly. Intel, another key player, recently announced its next-generation AI data center chips, the Gaudi 3 accelerator kit, which is priced around $125,000—substantially cheaper than Nvidia’s comparable HGX server system. Meanwhile, Nvidia continues to innovate with its next-generation AI platform, the Rubin platform, slated for release in 2026. This platform will succeed the Blackwell architecture, which has been highly sought after and is expected to remain sold out well into 2025 due to robust demand.

AMD’s Move Toward AI

AMD’s CEO, Lisa Su, has expressed a clear vision for the company’s future, emphasizing that AMD is not seeking to be a niche player in the AI chip market. This statement reflects the company’s ambition to solidify its presence as a major contender alongside established leaders like Nvidia and Intel.

As the AI chip market becomes increasingly competitive, AMD’s upcoming announcement is likely to further fuel this rivalry. With AI technology continuing to evolve and the demand for AI-powered solutions expanding, the market is poised for more innovations and strategic initiatives from industry giants. This dynamic landscape highlights the relentless pursuit of technological advancement in the AI chip arena.

Emotional AI and Gen Z: Attitudes towards a new technology and the concerns it raises

Artificial intelligence (AI) underpins everything that counts as “smart technology” today. From self-driving cars to voice assistants on our smartphones, AI has a ubiquitous presence in our daily lives. Yet it has long lacked a crucial capability: the ability to engage with human emotions.

The scenario is quickly changing, however. Algorithms that can sense and respond to human emotions are rapidly becoming mainstream as they are embedded in existing systems. Known as “emotional AI,” the technology achieves this through a process called “non-conscious data collection” (NCDC), in which an algorithm collects data on the user’s heart and respiration rates, voice tone, micro-facial expressions, gestures, and so on to analyze their mood and personalize its responses accordingly.
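To illustrate what NCDC implies in practice, here is a minimal sketch that fuses a few such signals into a coarse mood estimate a system could personalize against; the feature names, weights, and thresholds are invented for illustration and do not describe any vendor's actual system.

```python
from dataclasses import dataclass

@dataclass
class Signals:
    """Hypothetical non-consciously collected measurements."""
    heart_rate_bpm: float    # e.g. from a wearable or camera-based sensing
    respiration_rate: float  # breaths per minute
    voice_pitch_var: float   # 0-1, variability of vocal pitch
    smile_intensity: float   # 0-1, from micro-facial expression analysis

def estimate_mood(s: Signals) -> str:
    """Fuse signals into a coarse mood label using illustrative
    thresholds and weights; a real system would use trained models."""
    arousal = (
        0.5 * min(s.heart_rate_bpm / 120, 1.0)
        + 0.3 * min(s.respiration_rate / 25, 1.0)
        + 0.2 * s.voice_pitch_var
    )
    valence = s.smile_intensity
    if arousal > 0.7 and valence < 0.4:
        return "stressed"
    if arousal > 0.7:
        return "excited"
    return "content" if valence >= 0.4 else "flat"

# A response engine could then condition its tone on the estimate:
mood = estimate_mood(Signals(110, 22, 0.8, 0.1))
print(f"detected mood: {mood}")  # -> "stressed" with these inputs
```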

However, the unregulated nature of this technology has raised many ethical and privacy concerns. In particular, it is important to know the attitude of the current largest demographic towards NCDC, namely Generation Z (Gen Z). Making up 36% of the global workforce, Gen Z is likely to be the most vulnerable to emotional AI. Moreover, AI algorithms are rarely calibrated for socio-cultural differences, making their implementation all the more concerning.

“We found that being male and having a high income were both correlated with positive attitudes towards accepting NCDC. In addition, business majors were more likely to be tolerant of NCDC,” highlights Prof. Ghotbi. Cultural factors, such as region and religion, were also found to have an impact, with people from Southeast Asia, Muslims, and Christians reporting concern over NCDC.

The Team's Research

“Our study clearly demonstrates that sociocultural factors deeply impact the acceptance of new technology. This means that theories based on the traditional technology acceptance model by Davis, which does not account for these factors, need to be modified,” explains Prof. Mantello.

The study addressed this issue by proposing a “mind-sponge” model-based approach that accounts for socio-cultural factors in assessing the acceptance of AI technology. It also called for a thorough understanding of the technology’s potential risks to enable effective governance and ethical design. “Public outreach initiatives are needed to sensitize the population about the ethical implications of NCDC. These initiatives need to consider the demographic and cultural differences to be successful,” says Dr. Nguyen.

Overall, the study highlights the extent to which emotional AI and NCDC technologies are already present in our lives and the privacy trade-offs they imply for the younger generation. Thus, there is an urgent need to make sure that these technologies serve both individuals and societies well.

AI to help Indian recruiters eliminate bias, speed up hiring

As artificial intelligence (AI) makes its way into office systems, nearly 50 per cent of recruiters believe it will become a regular part of their hiring process in the coming years, according to a report by chat-based direct hiring platform Hirect.

A whopping 96.5 per cent of recruiters at startups and small and medium enterprises (SMEs) in India believe that the use of AI will improve the recruitment process and eliminate bias from hiring, said the report released on Tuesday.

Some 52 per cent of the recruiters said building a diverse workforce is necessary to address the wide disparity in the representation of women in leadership roles; 97.4 per cent believe that skill-based hiring is the future and a necessity; and 87 per cent are “in favour of retaining old employees instead of hiring new ones.”

“In the employee-driven market, the employers must quickly adapt to the current reality of talent acquisition to remain competitive in today’s labour market,” said Raj Das, Global Co-founder and CEO of Hirect India.

Startups often rely on referrals, which is why they formulate referral policies; around 88.2 per cent of recruiters believe referrals are the best way to hire people with the right talent, the report added.