Scientists Develop Faster Method To Track Quantum Memory Loss In Qubits

Researchers in Norway and Denmark have developed a new method to measure how quickly quantum computers lose information, a key obstacle in building stable systems. The study, led by the Norwegian University of Science and Technology and the Niels Bohr Institute, reduces measurement time from about one second to roughly 10 milliseconds. Scientists say the breakthrough allows near real-time tracking of qubit instability, helping identify the causes of information loss.

Indian scientists convert discarded battery waste into high-value material for cleaner fuel cells

Scientists in India have developed a method to reuse graphite from discarded lithium-ion batteries to improve fuel cell efficiency, according to a recent study. The research, conducted by the International Advanced Research Centre for Powder Metallurgy and New Materials, shows that recycled graphite can enhance catalyst performance and durability in fuel cells. The findings, published in ACS Sustainable Resource Management, point to a dual solution for battery waste and clean energy challenges.

A used lithium-ion battery, often discarded after years of service, may hold more value than previously thought.

Scientists have found a way to extract graphite from spent batteries and transform it into a high-performance material that improves how fuel cells operate, offering a potential bridge between waste management and clean energy systems.

The work was carried out by researchers at the International Advanced Research Centre for Powder Metallurgy and New Materials, an autonomous institute under the Department of Science and Technology.

Recycled graphite and the challenge of fuel cell efficiency

Fuel cells, particularly those used in clean energy applications, rely on catalysts to drive chemical reactions that generate electricity. One of the most critical reactions is the oxygen reduction reaction, or ORR, which directly affects efficiency.

Platinum-based catalysts are widely used for this purpose but face two major limitations. They are expensive, and their performance can degrade over time due to poisoning by carbon monoxide and interference from methanol in certain fuel cell systems.

At the same time, the rapid rise in lithium-ion battery usage has created a growing stream of waste, with graphite being a major component of discarded batteries.

Researchers have been exploring whether this waste material could be repurposed to address bottlenecks in fuel cell technology.

How the material was developed and tested

The research team recovered graphite from end-of-life lithium-ion batteries and chemically exfoliated it, a process that increases its surface area and introduces more active sites for chemical interaction.

They then carried out detailed characterization and electrochemical testing to evaluate how the material performed in ORR conditions, including its tolerance to methanol.

Unlike earlier studies that focused mainly on alkaline environments, this work demonstrated effective performance in acidic conditions, which are relevant for many commercial fuel cell systems.

The exfoliated graphite was combined with platinum catalysts to form a conductive network that improved both electron flow and oxygen transport within the system.

Fig: Graphical illustration of the Pt–exfoliated graphite catalyst, with exfoliated graphite forming a conductive network that suppresses methanol crossover and CO poisoning, leading to improved oxygen reduction performance and durability. (Image: PIB)

Performance gains and durability improvements

The study identified an optimal composition of 10 percent exfoliated graphite by weight, which delivered improved performance and stability compared with conventional setups.

The material showed an ability to selectively adsorb methanol molecules, acting as a barrier that prevents unwanted reactions. This reduces methanol oxidation and limits carbon monoxide poisoning of the platinum catalyst.

As a result, the system maintained higher efficiency over longer operating periods.

Researchers said the improvement in methanol tolerance and catalyst protection could address a key challenge in Direct Methanol Fuel Cells, a technology considered promising for portable and stationary energy applications.

Linking battery recycling with clean energy goals

The findings highlight a potential pathway to address two growing concerns: battery waste and the cost and durability of fuel cell technologies.

By converting discarded graphite into a functional material, the approach reduces reliance on expensive catalyst components while creating value from waste.

The work also supports broader efforts to build sustainable energy systems by improving the performance of fuel cells, which produce electricity with lower emissions compared with conventional combustion-based technologies.

Scientists say further research and scaling efforts will be needed to translate laboratory results into commercial applications, but the study establishes a proof of concept for integrating recycling and energy innovation.

The approach reflects a shift toward circular material use, where components from one technology lifecycle are repurposed to enhance another, reducing environmental impact while advancing clean energy solutions.

AI sheds light on ancient board game mystery

For the first time, an international research team has used artificial intelligence (AI) to decode an ancient board game, unlocking secrets that predate the modern era.

The study of an engraved limestone object from the Roman Netherlands allowed the team to identify the game's probable rules based on its specific markings.

The study, published in the journal Antiquity, was directed by Maastricht University and Leiden University (both in the Netherlands), with contributions from Flinders University (South Australia), the Université Catholique de Louvain (Belgium), and the Roman Museum and restoration studio Restaura in Heerlen.

The item, found in what is now Heerlen in the Netherlands, bears a pattern of crossing lines that had puzzled archaeologists for decades.

Since most game boards in the Roman world were drawn in dust or carved into wood, materials unlikely to survive, this well-hewn limestone fragment offered a rare opportunity to study ancient rules.

The stone shows a geometric design and visible wear consistent with game pieces being slid across its surface, strongly suggesting repeated play rather than some other use, according to lead archaeologist Dr Walter Crist, an ancient games expert.

To determine what kind of game board the stone was and how it functioned, the research team used AI to run hundreds of potential rule sets, identifying which would generate the same patterns of wear observed on the object.

Can AI Recreate Simulated Play?

The uneven wear on the carved lines raises a key question: can AI-simulated play reproduce the same pattern?

The researchers used the AI game-playing system Ludii to pit two AI agents against each other on a virtual version of the board, testing rule sets from many historically recorded European board games, including Scandinavia's haretavl and Italy's gioco dell'orso.

Flinders University computer scientist Dr Matthew Stephenson says modern AI techniques make it possible to bring together the historical and computational study of games.

The simulations were repeated with varied rules each time to determine which movements would produce the same concentrated friction seen on the original stone's surface, according to Dr Stephenson, of the Flinders College of Science and Engineering.
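As a rough, hypothetical illustration of that approach (the actual study used the Ludii system with full game rules and trained AI agents), one can vary a rule set, simulate play, and score each candidate by how closely simulated piece movement matches the wear observed on the board. Every board point, wear value, and rule set below is invented for the sketch:

```python
# Toy sketch (NOT the Ludii system): score candidate rule sets by how well
# simulated play reproduces observed wear. All data here is hypothetical.
import random

OBSERVED_WEAR = {0: 9, 1: 3, 2: 8, 3: 2, 4: 7}  # invented wear per board point

def simulate_game(allowed_moves, steps=200, seed=0):
    """Count visits to each board point under one rule set (random agents)."""
    rng = random.Random(seed)
    visits = {p: 0 for p in OBSERVED_WEAR}
    pos = 0
    for _ in range(steps):
        pos = rng.choice(allowed_moves[pos])  # pick any legal move
        visits[pos] += 1
    return visits

def wear_mismatch(visits, steps=200):
    """Compare normalized simulated visits against normalized observed wear."""
    total_wear = sum(OBSERVED_WEAR.values())
    return sum(abs(visits[p] / steps - OBSERVED_WEAR[p] / total_wear)
               for p in OBSERVED_WEAR)

# Two candidate rule sets: graphs of legal moves between board points.
rule_sets = {
    "rules_A": {0: [1, 2], 1: [0], 2: [0, 3], 3: [2], 4: [4]},
    "rules_B": {0: [2, 4], 1: [1], 2: [0, 4], 3: [3], 4: [0, 2]},
}

scores = {name: wear_mismatch(simulate_game(moves))
          for name, moves in rule_sets.items()}
best = min(scores, key=scores.get)  # lowest mismatch = best-fitting rules
print(best, round(scores[best], 3))
```

The real study's comparison is far richer (strategic agents, many rule variants, physical friction patterns), but the shape of the search is the same: simulate under each rule set, then rank rule sets by fit to the artefact.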

The simulations strongly pointed to a type of strategy game known as a blocking game, in which a player wins by denying the opponent any legal moves rather than by capturing pieces.

Since there is little written evidence of blocking games before the Middle Ages, the results suggest they may have a longer history than previously documented, while the work also demonstrates the transformative potential of AI in archaeology.

Archaeological Approach

This is the first attempt to combine AI-based simulated play with an archaeological approach to identify a board game, says Dr Crist.

It gives archaeologists a way forward in studying ancient games unlike those recorded in surviving texts or art.

The work was carried out at Maastricht University as part of Europe's Digital Ludeme Project, which applies artificial intelligence to create historically and mathematically plausible reconstructions of ancient games.

By combining archaeology, digital modelling and cultural history, the team was able to explain an object that had previously seemed inexplicable.

The success of this method suggests that numerous other puzzling artefacts may hold concealed stories that modern technology could uncover, according to Dr Stephenson.

It demonstrates how AI can expand our understanding of materials that otherwise could not be analyzed.


Researchers Develop System with 99.96% Accuracy to Detect Cyber Attacks in Real Time

Researchers at Sultan Qaboos University in Oman have developed an advanced intrusion detection system (IDS) that can identify cyber attacks with near-perfect accuracy while dramatically reducing processing time, according to a paper published in The Journal of Engineering Research (TJER).

The proposed system, which combines a double feature selection method with a stacked ensemble machine learning approach, achieved accuracy levels of up to 99.96 percent on benchmark datasets, with false alarm rates as low as 0.007 percent and detection times under 13 seconds.

As cyber threats targeting IoT devices, cloud computing infrastructure, and high-speed networks grow increasingly sophisticated, the research addresses critical vulnerabilities in existing detection methods that struggle with redundant feature processing, lengthy training periods, and imbalanced datasets.

The system implements a two-phase feature reduction process designed to eliminate computational waste while preserving detection power. The Variance Threshold is first applied to remove low-variance features that contribute little to threat identification. This is followed by the Select-K-Best technique, which retains only the most relevant attributes for classification.

Through this rigorous filtration, the researchers successfully narrowed down datasets to as few as 13 or 19 significant features—a dramatic reduction that slashes processing time without compromising detection capability. This efficiency gain is critical for real-time cybersecurity applications where milliseconds matter.
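A minimal sketch of such a two-phase reduction, using scikit-learn's `VarianceThreshold` and `SelectKBest` on synthetic data (the threshold, feature counts, and data here are illustrative, not the paper's actual configuration):

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold, SelectKBest, f_classif

# Synthetic stand-in for network-traffic data: 200 samples, 40 features,
# the last 10 of which are near-constant (low variance).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
X[:, 30:] = 0.001 * rng.normal(size=(200, 10))  # near-constant features
y = rng.integers(0, 2, size=200)                # binary labels: normal/attack

# Phase 1: drop near-constant features that carry little information.
vt = VarianceThreshold(threshold=0.01)
X_vt = vt.fit_transform(X)

# Phase 2: keep only the K most relevant features for classification
# (k=13 mirrors the paper's smallest reduced feature set).
kb = SelectKBest(score_func=f_classif, k=13)
X_sel = kb.fit_transform(X_vt, y)

print(X.shape, X_vt.shape, X_sel.shape)
```

On this synthetic data the pipeline shrinks 40 features to 13, which is the kind of reduction that makes real-time classification tractable.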

At the heart of the system lies a stacking ensemble classification structure. Base learners consist of K-Nearest Neighbors and Gaussian Naive Bayes algorithms, which feed into a Random Forest classifier serving as the meta-classifier. The Random Forest model is optimized using Grid Search cross-validation to ensure peak performance.

This layered approach allows the system to leverage the strengths of multiple algorithms while compensating for individual weaknesses, resulting in more robust and reliable threat detection.
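The layered design described above can be sketched with scikit-learn's `StackingClassifier`; the synthetic dataset and the small parameter grid below are placeholders, not the study's actual setup:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for a reduced (13-feature) intrusion-detection dataset.
X, y = make_classification(n_samples=400, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base learners (KNN + Gaussian Naive Bayes) feed their predictions into a
# Random Forest meta-classifier, as in the paper's stacking structure.
stack = StackingClassifier(
    estimators=[("knn", KNeighborsClassifier()), ("gnb", GaussianNB())],
    final_estimator=RandomForestClassifier(random_state=0),
)

# Grid Search cross-validation tunes the meta-classifier (toy grid here).
grid = GridSearchCV(stack, {"final_estimator__n_estimators": [50, 100]}, cv=3)
grid.fit(X_tr, y_tr)
acc = grid.score(X_te, y_te)
print(round(acc, 3))
```

The meta-classifier learns to weigh the base learners' outputs, which is what lets the ensemble compensate for any single algorithm's weaknesses.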

Rigorous Testing on Contemporary Threat Datasets

The model was validated using two benchmark datasets widely recognized in cybersecurity research: CIC-IDS2017 and CIC-DDoS2019. These datasets contain representations of current cyber attack types, including distributed denial-of-service (DDoS) attacks, denial-of-service (DoS) attacks, brute force attempts, port scans, web application attacks, and bot activity.

Fig: In the first stage, a Double Feature Selection method identifies the most relevant and influential features for training the model. In the second stage, the model is built with an ensemble stacking approach, combining K-Nearest Neighbors and Gaussian Naive Bayes classifiers with a Random Forest classifier. A final classifier is then produced by selecting the optimal features for each classifier at each stage. / The Journal of Engineering Research 2025;22:173–186

Experimental results demonstrated that the proposed system “outperforms various existing intrusion detection methods, effectively overcoming common shortcomings such as redundant feature processing, extended training times, and the challenges posed by imbalanced datasets where attack samples are significantly outnumbered by normal traffic.”

Real-World Applications

The authors emphasize that the method’s combination of efficient feature engineering and ensemble learning makes it suitable for practical, real-time cybersecurity deployments. As networks grow faster and more complex, the ability to detect threats quickly and accurately becomes increasingly critical for protecting infrastructure, data, and users.

Looking ahead, the researchers recommend “extending the approach to IoT environments, where resource constraints make lightweight yet accurate detection essential.” They also suggest integrating deep learning models with the current framework to further enhance detection capabilities against evolving threat landscapes.

The study adds to a growing body of research exploring artificial intelligence applications in cybersecurity, a field racing to keep pace with increasingly sophisticated attack methods targeting everything from personal devices to critical national infrastructure.


Sarvam AI Powering a Made-in-India Tech Revolution

India’s emergence as a global digital power now hinges on its ability to build artificial intelligence systems that are indigenous, inclusive, and aligned with national priorities.

As AI increasingly shapes governance, public services, industry, and citizen engagement, the need for homegrown foundational models has become pressing. These models must be trained on Indian languages, local data, and real-world contexts to ensure relevance and effectiveness.

Sarvam AI is building artificial intelligence tailored to India's needs, developing foundational components and applying them to the country's unique linguistic, enterprise, and governance requirements. The company has built a full-stack AI platform, with everything developed, deployed, and governed entirely in India. These enterprise-grade platforms reflect the country's linguistic diversity and are designed to support public service delivery. Its work directly addresses long-standing barriers in accessibility, multilingual communication, and dependence on foreign AI infrastructure.

At the India AI Impact Summit 2026, Union Home Minister Amit Shah stated that Sarvam AI exemplifies why the future belongs to India. He noted that the company “is ensuring technology reaches every citizen, advancing the vision of Viksit Bharat, where innovation serves as a trusted ally in empowering people and strengthening the nation.”

Driving Digital Self-Reliance through Indigenous AI Models

Strengthening indigenous AI infrastructure is central to India’s vision of technological sovereignty, digital self-reliance, and inclusive growth. In an era where artificial intelligence shapes governance, economic competitiveness, and citizen services, building AI systems rooted in local languages, datasets, and regulatory frameworks ensures that innovation aligns with national priorities and societal needs. Indigenous AI development not only safeguards strategic autonomy but also fosters economic resilience and equitable access to emerging technologies.

In this context, Sarvam AI stands out as one of the 12 organisations selected under the Innovation Centre pillar of the IndiaAI Mission to develop indigenous foundational models, with financial and compute support amounting to Rs.246.72 crore.

The company is building large language and speech models (LLMs) tailored for Indian languages and public service delivery, with capabilities such as voice-based interfaces, document processing, and citizen-centric applications that enhance accessibility and ease of use. By developing homegrown AI models aligned with national objectives, Sarvam AI is reducing reliance on foreign AI systems while strengthening the open-source ecosystem and enabling innovation across startups, academia, research institutions, and industry.

An AI model is a computer program trained on vast amounts of data to recognize patterns, make predictions, or generate new content, acting like a digital brain.

Sarvam AI’s models include:

  • Bulbul (Text-to-Speech): Available in 11 Indian languages with 39 distinct speaker voices.
  • Saaras (Speech-to-Text): Supports all 22 scheduled languages, 8kHz telephony audio, and code-mixed speech.
  • Vision (Document Understanding): Tailored for 22+ Indian languages, mixed scripts, and handwritten text.

Through these foundational capabilities, Sarvam AI demonstrates how India-centric AI can evolve into scalable, resilient, and population-scale digital infrastructure, enhancing public service delivery, improving linguistic accessibility, and reinforcing India’s journey toward a globally competitive AI ecosystem.

Full-Stack Sovereign AI Ecosystem of Sarvam AI

Sarvam AI has built a comprehensive, full-stack sovereign AI ecosystem designed to serve enterprises, governments, developers, and creators across India. Developed end-to-end within the country, spanning compute infrastructure, foundational models, platforms, and real-world applications, the ecosystem reflects a commitment to technological self-reliance in artificial intelligence.

An AI stack is the complete set of tools and systems that work together to build and run AI applications. These applications range from everyday tools such as Siri and Alexa, to advanced systems used in healthcare diagnostics, financial fraud detection, and transportation.

What does the Sarvam AI ecosystem consist of?

  • Sarvam for Conversations: Enterprise-grade (high capacity) conversational AI delivering human-like, culturally fluent voices in 11 Indian languages. Handles over 100 million interactions with 500ms latency, deploys within 24 hours, and achieves up to 10x ROI.
  • Sarvam for Work: A unified enterprise AI platform that accelerates value creation through an AI-assisted build-debug-optimize cycle. Open and modular, it integrates seamlessly with any model, data source, or infrastructure.
  • Sarvam AI for Content: Enables multilingual video dubbing with voice cloning and precise audio-visual sync, along with document translation that preserves layout and tone, supported by built-in quality review and editing tools.
  • Sarvam AI for Edge Intelligence: Delivers compact, low-latency multimodal AI for real-world deployment, combining edge and cloud inference to power real-time assistants, on-device NLP, and high-speed translation and summarisation.

Through this integrated architecture, Sarvam AI is not merely building applications but establishing a scalable digital backbone for India’s AI future. By converging infrastructure, language intelligence, enterprise capability, and edge deployment into one sovereign ecosystem, it positions India to innovate independently, deploy responsibly, and compete globally, while ensuring that advanced AI remains accessible, secure, and aligned with national development priorities.

Strategic Partnerships For Public Service Delivery

Sarvam AI’s institutional collaborations are transforming indigenous innovation into measurable public value across India. By working closely with national and state governments, the company is embedding advanced AI capabilities into critical service delivery systems.

UIDAI (Unique Identification Authority of India) partnered with Sarvam AI to enhance Aadhaar services using AI-driven voice interaction, real-time fraud detection, and multilingual support. A custom GenAI stack will operate within UIDAI’s secure, on-premise infrastructure, supporting 10 Indian languages with real-time enrolment feedback and fraud alerts.

The Government of Odisha, in collaboration with Sarvam AI, is establishing a 50MW AI-optimized Sovereign AI Capacity Hub to serve as a national compute backbone. It will support AI use cases in mining, industrial safety, and Odia-language skilling, contributing to the sovereign compute grid.

The Government of Tamil Nadu and IIT Madras, in collaboration with Sarvam, are developing Digital Sangam, India’s first Sovereign AI Research Park, anchored by a 20MW AI data center to integrate advanced compute, research, and startup incubation for large-scale AI applications. Collectively, these initiatives demonstrate how coordinated public partnerships can deploy homegrown AI infrastructure at massive scale.


Market Failure? Samsung to Pull Plug on Galaxy Z TriFold Three Months After Launch

  • Samsung may end Galaxy Z TriFold sales within months due to high costs and limited production
  • Strong demand was driven largely by scarcity rather than mass-market adoption
  • Device likely served as a proof-of-concept for future foldable innovations
  • Samsung expected to focus on mainstream foldables while refining next-gen designs

Samsung is preparing to discontinue sales of its ambitious Galaxy Z TriFold smartphone just months after its debut, according to fresh reports emerging from South Korea, raising questions about the commercial viability of next-generation foldable designs.

The premium device, priced at roughly $2,899, was launched initially in Samsung’s home market late last year before expanding to the United States and select regions earlier in 2026. Touted as a breakthrough in mobile hardware, the TriFold introduced a three-panel folding mechanism aimed at blending smartphone portability with tablet-scale usability.

However, industry reports now suggest that Samsung is planning to wind down sales in South Korea after one final round of inventory restocking. In the United States and other markets, availability is expected to continue only until existing production units are exhausted.

According to Korean media reports cited by SamMobile, initial batches were capped at around 3,000 units each, with only a couple of such releases in early phases. Broader industry estimates from Digitimes and Gadgets 360 suggest total production may have been in the range of 20,000 to 30,000 units globally, with some projections stretching to 40,000 units at most over the product’s lifecycle. By comparison, Samsung’s Galaxy Z Fold series has historically shipped over 2–3 million units annually, underscoring how marginal the TriFold’s scale was.

Sell Outs or Scarcity of Devices?

The much-publicised “sell-outs” were therefore a reflection of scarcity rather than widespread demand. TechBusinessNews reported that each batch sold out within minutes, but with supply running into only a few thousand units, the absolute number of buyers remained extremely small. In some markets distribution was even narrower; the UAE reportedly received as few as 500 units in early allocations.

Pricing further constrained adoption. The TriFold launched at approximately $2,899 in the United States, with global pricing ranging between $2,400 and $2,900, making it the most expensive smartphone in Samsung’s portfolio. At that level, the device sits far above even premium foldables like the Galaxy Z Fold lineup, effectively limiting its audience to early adopters and collectors rather than mainstream consumers.

Cost structures added to the challenge. Reports indicate that Samsung was making little to no profit per unit, largely due to the complex tri-fold hinge system and multi-display manufacturing process. Without scale efficiencies, the bill of materials remained high, leaving margins thin or negative. This is compounded by supply chain pressures, Gadgets 360 and TrendForce flagged ongoing RAM and storage component shortages, which further increased costs and constrained output.

From a business perspective, the device’s contribution was negligible. Digitimes analysts noted that the TriFold would account for only a “marginal” share of Samsung’s mobile revenue, while TrendForce estimates Samsung is targeting around 7 million foldable shipments in 2026 overall. Even at an optimistic 30,000 units, the TriFold would represent well under 1% of total foldable shipments, reinforcing its limited strategic weight.

Samsung is now expected to double down on its core foldable lineup, including the Galaxy Z Fold and Galaxy Z Flip series, which have shown more consistent demand globally. At the same time, the company is likely to continue investing in advanced form factors behind the scenes, with industry watchers anticipating refined multi-fold or rollable prototypes in the coming years.

What makes lithium-ion batteries fail? Microscopic metal thorns offer clues to scientists

For the first time, scientists have observed tiny metal thorns known as dendrites growing within lithium-ion batteries and causing them to short-circuit. Their results, published Mar. 12 in the journal Science, illuminate previously unrecognized mechanical aspects of lithium dendrites as they develop.

Scientists have long studied lithium dendrites, yet their behavior within batteries has not been well understood. Dendrites develop at the nanoscale, making their growth difficult to monitor inside a closed system such as a working battery, but that growth has been associated with battery degradation and failure.

In the new work, an international team of researchers at U.S. and Singapore universities combined simulations and experiments to produce the first view of how dendrites crystallize, according to co-lead author Xing Liu, an assistant professor of mechanical and industrial engineering at New Jersey Institute of Technology and head of the NJIT Computational Mechanics and Physics Lab.

He says the result stems from close collaboration between experimental and computational mechanics and could help make batteries safer.

Co-author Qing Ai, a former research scientist at Rice University, says: “The basic nanomechanical behavior of lithium dendrites has been a decades-old riddle.”

Customized platforms
Lithium dendrites (named after the Greek word for tree) are about 100 times narrower than a human hair and sprout from anodes, the negative terminals in lithium-ion batteries. Their branches can extend into the electrolyte of a lithium cell; if they grow from the negatively charged anode all the way to the positively charged cathode, they can short out the battery.

Lithium dendrites are widely regarded as one of the biggest impediments to commercializing lithium-metal batteries, Liu says. During battery operation, dendrites can form, break off and become electrically isolated from the lithium-metal anode, creating so-called dead lithium. This causes a progressive loss of battery capacity over time. Dendrites can also tunnel through the separator and form an internal short between the anode and cathode. Both capacity loss and short-circuit risks from dendrites are commonly seen in laboratory experiments.

Worse still, lithium dendrites become almost impossible to eliminate from a battery once they develop.

At this point in time, says Liu, “there is no practical way to remove dendrites from a working battery cell.”

In the new study, scientists at Rice University, together with counterparts at the Georgia Institute of Technology, the University of Houston and Nanyang Technological University in Singapore, extracted dendrites from working batteries to test their mechanical strength.

“To make quantitative study of lithium dendrites possible, we constructed specialized sample preparation and mechanical characterization stations for such delicate work,” says Boyu Zhang, a Rice doctoral graduate and co-lead author on the work.

Co-corresponding author Jun Lou, Rice’s Karl F. Hasselmann Professor of Materials Science and Nanoengineering, led a team at the Nanomaterials, Nanomechanics and Nanodevices lab in directly probing the mechanical behavior of dendrites as they grew in real batteries. The delicate experiments were performed by Ai and Zhang, former members of Lou’s lab, with help from study co-corresponding author Hua Guo and co-author Wenhua Guo of the Rice University Shared Equipment Authority.

To carry out the experiments, the researchers built air-tight platforms for preparing and studying the samples, since lithium is highly reactive and changes chemically and structurally when exposed to air. High-resolution electron microscopy then revealed how individual dendrites deformed under controlled stresses.

‘Like dry spaghetti’

Bulk lithium is soft, so lithium dendrites were expected to be soft as well. The experiments, however, indicated otherwise. Real-time observations of dendrite failure during battery operation by the University of Houston team, led by co-corresponding author Yan Yao, a professor in the Department of Electrical and Computer Engineering, supported the finding that dendrites are brittle in both liquid and solid electrolyte systems.

Lithium dendrites have long been thought to be soft and ductile, like Play-Doh, Liu says. But it turns out they can also be stiff and brittle, snapping like dry spaghetti.

The observational data was then modeled and theoretically analyzed by teams at NJIT and Georgia Tech.

To answer the question of why lithium dendrites behave contrary to expectations, Liu says, the teams ran scale-bridging simulations.

They discovered that as dendrites grow in a battery cell, they become covered by a thin coating of solid electrolyte interphase, or SEI. The SEI coating makes the dendrites rigid and needle-like, able to pierce battery separators and electrolytes, and prone to breaking under stress, accumulating in the cell as dead lithium fragments that lead to battery failure.

Liu explains that understanding the underlying physics could soon enable ways of making dendrites less susceptible to brittle fracture, such as using lithium-alloy anodes. To researchers in computational mechanics, the mechanisms uncovered in experiments, such as how structures deform or why they break down, are like musical notes that can be added to a symphony of high-performance materials and high-energy storage systems.

“The strengthening mechanism we identified in lithium dendrites adds a new note to this composition,” Liu says.

Read More:

MIT engineers build a battery-free, wireless underwater camera; captures color photos even in unclear environment

The Sun Is Just a Secret Earthquake Machine With a Switch, Reveals Japan Study

 

The Rise Of Digital Nomad Cities as Remote Workers Mushroom Across the World

Remote work has transformed the geography of employment, allowing professionals to live thousands of miles from their employers.

The result has been the emergence of so-called digital nomad cities—destinations that attract remote workers seeking lower living costs, pleasant climates and flexible lifestyles.

Cities such as Lisbon, Bali, Medellín and Chiang Mai have become hubs for these mobile professionals.

Governments have taken notice.

More than 40 countries now offer specialised digital nomad visas, allowing remote workers to live and work legally for extended periods.

Portugal’s visa programme, for example, has drawn thousands of remote professionals to Lisbon and coastal towns. Estonia and Croatia have launched similar initiatives.

Economic benefits substantial

Remote workers often earn salaries tied to higher-income economies while spending locally on restaurants, housing and services. This inflow of income can boost tourism sectors and urban economies.

But the trend has also sparked tensions.

In several popular destinations, local residents have complained that an influx of foreign professionals has driven up housing prices and changed neighbourhood dynamics.

Lisbon, for instance, has seen rents rise sharply in recent years, prompting protests by residents concerned about affordability.

Urban planners say the challenge lies in balancing economic opportunity with social stability.

“Digital nomads bring investment and cultural exchange,” said urban researcher Andrés Rodríguez-Pose of the London School of Economics. “But cities must ensure that local communities are not priced out.”

The remote-work revolution shows little sign of reversing.

For millions of professionals, the office is no longer a place but a laptop—and the world itself has become a workplace.

Weekender: Inside India’s Global Capability Centre Boom

Over the past decade, India has quietly become the operational backbone of some of the world’s largest corporations. The country now hosts more than 1,500 Global Capability Centres (GCCs)—specialised hubs where multinational companies manage everything from software engineering and financial analysis to artificial intelligence research.

Bengaluru sits at the heart of this transformation.

The southern technology capital has long been known as India’s Silicon Valley, but its role is evolving. What once began as outsourcing support centres has matured into high-value innovation hubs.

According to the National Association of Software and Service Companies (NASSCOM), GCCs in India employ nearly two million professionals and generate tens of billions of dollars in annual economic activity.

Companies including Goldman Sachs, Walmart, JPMorgan Chase, Airbus and Bosch operate large centres in Indian cities, particularly Bengaluru, Hyderabad and Pune.

“These centres are no longer just back-office operations,” said Sangeeta Gupta, senior vice-president at NASSCOM. “They are increasingly responsible for product development, digital transformation and advanced research.”

Shift Reflects Economic Logic and Talent Availability

India produces hundreds of thousands of engineering graduates each year, providing companies with a vast pool of skilled workers. Labour costs remain significantly lower than in North America or Europe, but the quality of technical expertise has steadily improved.

At the same time, multinational corporations are seeking to centralise operations and accelerate innovation.

Global capability centres allow companies to bring together diverse functions—from cybersecurity and data analytics to financial planning—under one roof. Many centres now operate around the clock, coordinating with teams across continents.

The growth has also reshaped urban economies.

In Bengaluru, demand from GCC employees has fuelled the expansion of housing markets, commercial real estate and transportation infrastructure. Entire neighbourhoods around tech corridors such as Outer Ring Road and Whitefield have developed to accommodate the growing workforce.

Hyderabad, meanwhile, has emerged as another major GCC hub, attracting companies with lower real estate costs and proactive state government policies.

MNCs largest occupiers of office space in India

Real-estate consultants estimate that multinational firms are among the largest occupiers of office space in India’s technology cities.

The boom shows little sign of slowing.

Industry forecasts suggest the number of GCCs in India could exceed 2,000 within the next five years as companies expand their presence in areas such as artificial intelligence, cloud computing and financial technology.

For India, the implications extend far beyond employment.

“These centres place India at the core of global innovation networks,” Gupta said. “The country is moving from a services economy toward a knowledge and technology powerhouse.”

Young people on AI meal plans may consume fewer calories, but miss a meal's worth

Many teenagers with weight problems are turning to AI models to design meal plans in a bid to lose weight. A new study, however, indicates that the resulting plans may not always cover required nutrient and calorie intake.

Researchers in Turkey compared the meal-planning capabilities of five AI models, prompting each to develop weight-loss meal plans for teenagers and evaluating the output against the recommendations of a registered dietitian. They describe their results in Frontiers in Nutrition.

According to Dr Ayse Betul Bilen, an assistant professor in the Faculty of Health Sciences at Istanbul Atlas University, diet plans generated by AI models significantly underestimated total energy and main nutrient intake compared with guideline-based plans prepared by a dietitian. Following such imbalanced or excessively restrictive meal plans during the teenage years is known to harm growth, metabolic health, and eating habits.

Missing a meal

The researchers prompted five AI models, the free versions of ChatGPT 4, Gemini 2.5 Pro, Bing Chat-5GPT, Claude 4.1 and Perplexity, to generate meal plans. The prompts included the age, height and weight of the individual the plan was for, along with a directive to develop a three-day plan with three meals and two snacks per day. Meal plans were generated for four 15-year-olds: one boy and one girl in the overweight percentile, and one boy and one girl in the obese percentile.

Comparing the AI-generated meal plans with those of a dietitian specializing in adolescent health, the researchers found that the energy requirement estimated by the AI models was on average nearly 700 calories lower than the dietitian's. That is a full meal's worth of difference, with serious clinical implications. Some macronutrients were overestimated, while others were grossly underestimated.

None of the AI-generated diet plans adhered to the recommended macronutrient mix, which Bilen noted is particularly dangerous for adolescents.

The AI models suggested higher protein intake (20 g more than the dietitian), putting protein at about 21-24% of energy intake. AI recommendations for lipids were also significantly higher than in the dietitian's plans, with lipids making up 41-45% of energy intake.

Carbohydrates, by contrast, were much lower in the AI plans, about 115 g less on average, so only about 32-36 percent of energy intake would come from carbs. The US National Academies of Sciences, Engineering, and Medicine advise that lipids, proteins and carbs should account for 30-35, 15-20 and 45-50 percent of energy intake, respectively.
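The percentages above come from a standard conversion of grams to energy. A minimal sketch, using the standard Atwater factors (4 kcal/g for protein and carbohydrate, 9 kcal/g for fat) and made-up gram values rather than the study's actual data, shows how a plan's macronutrient shares can be checked against guideline ranges:

```python
# Sketch of the macronutrient check described above, using the standard
# Atwater factors. The gram values are illustrative, not the study's data.

ATWATER = {"protein": 4, "carbs": 4, "fat": 9}  # kcal per gram

def energy_shares(grams):
    """Return each macronutrient's share of total energy, as percentages."""
    kcal = {m: g * ATWATER[m] for m, g in grams.items()}
    total = sum(kcal.values())
    return {m: round(100 * k / total, 1) for m, k in kcal.items()}

# Hypothetical AI-style plan: high protein and fat, low carbohydrate.
ai_plan = {"protein": 110, "carbs": 160, "fat": 95}
shares = energy_shares(ai_plan)

# Compare each share against the guideline ranges quoted in the article.
guidelines = {"protein": (15, 20), "carbs": (45, 50), "fat": (30, 35)}
for m, (lo, hi) in guidelines.items():
    status = "OK" if lo <= shares[m] <= hi else "out of range"
    print(f"{m}: {shares[m]}% of energy ({status})")
```

With these illustrative numbers the plan lands near the ranges the study reports for AI plans (protein and fat above guideline, carbohydrate well below), showing how even modest gram-level shifts move the energy shares out of the recommended bands.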

Favouring popular plans over balanced diets

Although extensive healthy-diet guidance is available on the websites of national and international health organizations, such as the Turkish Nutritional Guidelines or the WHO adolescent nutrition guidelines, AI tools do not necessarily draw on evidence-based nutritional guidelines when generating plans. Bilen said AI models are mostly trained to produce answers that are plausible and user-friendly, not necessarily clinically accurate. The findings suggest the models may rely on generalized or popular diet patterns rather than incorporating age-specific nutritional requirements.

Since not every teenager can afford a dietitian to help plan their meals, the team recommended caution when using AI tools to create a diet plan. Teenagers should also be wary of diets that are too restrictive or built around extreme patterns dominated by either protein or fat.

The researchers hope their findings will raise awareness of the limited ability of AI tools to create well-balanced meal plans and help in developing safer tools more consistent with professional guidelines. Although AI models are developing fast and may be better now than at the time of the analysis, they are not a substitute for professional dietary counseling, especially for vulnerable groups.

Bilen concluded that adolescence is a critical period for physical development, bone growth and cognitive maturation, and that lower energy and carbohydrate intake combined with higher ratios of protein and fat could be dangerous at this growth stage.

Read More:

How nuclear technology can help fight seafood fraud

“Walnuts” the new brain food for stressed university students

Australian researchers build tiny AI chip that computes ‘at the speed of light’

Australian scientists have developed a miniature artificial intelligence (AI) chip that performs computations using light, at speeds approaching that of light itself.

The nanophotonic chip prototype, which harnesses light particles (photons), was built entirely in-house at the Sydney Nano Hub at the University of Sydney.

According to the researchers, the prototype could be significant for creating more energy-efficient AI hardware: global demand for AI continues to rise, and such technology could reduce the total energy footprint of future computing systems.

Conventional computer chips use electricity to process information, moving tiny charged particles (electrons) through wires. This produces heat.

The nanophotonic prototype instead uses light. Light can pass through materials without electrical resistance and so does not generate heat the way electric current does. The calculation happens automatically in the nanostructures as light traverses the chip.

The chip's nanostructure spans tens of micrometres, about the thickness of a human hair. Together, the nanostructures form a neural network: artificial neurons that imitate the human brain to recognise patterns and perform calculations.
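The general idea behind an optical neural-network layer can be sketched in a few lines: inputs are encoded as light amplitudes, a fixed structure acts like a matrix of complex transmission coefficients, and photodetectors measure intensity. This is a generic conceptual sketch of photonic computing, not the Sydney team's actual design, and all numbers are illustrative.

```python
# Toy sketch of an optical neural-network layer: the input is encoded as
# complex light amplitudes, a fixed nanostructure acts as a complex
# transmission matrix (interference performs the matrix multiply), and
# photodetectors measure intensity (|amplitude|^2), supplying a nonlinearity.
import numpy as np

rng = np.random.default_rng(0)

def photonic_layer(amplitudes, transmission):
    """One 'layer': interference (matrix multiply) followed by detection."""
    out = transmission @ amplitudes  # light propagating through the structure
    return np.abs(out) ** 2          # detected intensities (real, non-negative)

x = np.array([0.6 + 0.0j, 0.8 + 0.0j])  # input encoded in light amplitudes
T = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))  # stand-in matrix
y = photonic_layer(x, T)
print(y)  # three detector readings, all non-negative
```

The key point the sketch captures is that the "computation" is the physics: once light enters the structure, the matrix multiply happens as fast as the light propagates, which is why such chips can operate on picosecond timescales.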

The prototype performs calculations on the picosecond scale, trillionths of a second, roughly the time it takes light to exit the nanostructure.

According to the researchers, the benefit of photonics is speed: processing occurs at the speed of light, and the technology runs on light rather than electricity. That contrasts with existing data centres, which consume huge quantities of water and energy.

Professor Xiaoke Yi of the School of Electrical and Computer Engineering, director of the Photonics Research Group, said the team had reimagined how photonics can be used to create new energy-efficient, ultrafast processing chips.

Artificial intelligence is increasingly limited by energy consumption. This study performs neural computation with light, which has been shown to be faster and more energy-efficient, and which allows AI accelerators to be made significantly smaller.

The study, published in Nature Communications, shows that AI models can be embedded in nanoscale photonic structures that manipulate light to implement the mathematical operations required for machine learning.

The researchers validated the technology by training the nanophotonic chip on more than 10,000 biomedical images, including breast, chest and abdominal MRI scans.

The nanophotonic neural network achieved classification accuracies of roughly 90 to 99 percent in simulations and experiments.

The technology offers a path toward sustainable AI infrastructure that can meet growing computing needs without a proportional increase in power usage.

Better, faster, stronger AI hardware

The science of controlling light particles is known as photonics, short for photon-based electronics. It drives technologies used in daily life, such as lasers, fibre-optic networks and medical imaging.

Applying photonics to computer processing, however, is a relatively recent development, and interest has grown sharply as AI's computing demands rise.

The prototype demonstrates how intelligence can be incorporated directly into nanoscale photonic structures, according to PhD student Joel Sved, who was instrumental in the design and implementation of the prototype.

The University of Sydney's Photonics Research Group has spent more than a decade pushing the limits of photonics and advancing the technology.

That work includes applying photonics to wireless communications and to high-sensitivity sensing capable of detecting and measuring chemical or biological traces in the environment.

Following the successful prototype experiments, the team led by Professor Yi is now scaling the technology toward larger photonic neural networks.

AI disclosure labels can do more harm than good, finds Chinese study

The growing use of AI-generated scientific and science-related texts, particularly on social media, is a source of concern: such texts can contain false yet highly persuasive information that users cannot easily detect, influencing how people think and make decisions.

Various jurisdictions and platforms are moving toward explicit disclosure of AI-generated or AI-synthesised content to protect the public. Yet according to a recent study published in JCOM, such labels risk backfiring: they can reduce trust in legitimate scientific information while boosting belief in misinformation.

The Dangers of AI Scientific Content

AI content can be deceptive on at least two grounds. First, language models can hallucinate, making statements that sound plausible but are factually incorrect. Second, users can deliberately prompt AI systems to produce false yet plausible messages. For this reason, various nations have introduced transparency requirements under which online content created or synthesised by AI must be clearly labelled.

In their new study, Teng Lin, a PhD student at the School of Journalism and Communication of the University of Chinese Academy of Social Sciences (UCASS) in Beijing, and Yiqing Zhang, a Master's student at the same school, tested whether these disclosure labels do what they are meant to do: protect the public against misinformation.

Experimental Study

According to Teng, they concentrated on science-related news posted on social media.

The experiment involved 433 participants recruited online via the Credamo platform between March and May 2024. The authors created four categories of social media posts: accurate information with or without an AI label, and misinformation with or without an AI label. Using GPT-4, the researchers adapted items published by China's Science Rumour Debunking Platform into accurate and deceptive Weibo-style posts, which they then vetted themselves. Participants rated the perceived credibility of each post on a scale of 1 to 5. The researchers also measured participants' negative attitudes toward AI and their engagement with the topic.

A Paradoxical Effect

The findings revealed a counter-intuitive pattern. Teng says the most significant result is what he calls a "truth-falsity crossover effect": the same AI label shifts credibility in opposite directions depending on whether a message is true or false, lowering the credibility of true messages while raising the credibility of false ones. He notes this does not necessarily mean the effect would be identical across all platforms or formats, but in their experiment the trend was clear.

In other words, AI disclosure fails to help people distinguish real from fake information; instead, it redistributes credibility in a counter-intuitive fashion.
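The crossover is a textbook two-by-two interaction, and a minimal sketch shows how it would be read off the study's design (veracity x AI label). The mean ratings below are made-up illustrative values on the study's 1-to-5 credibility scale, not the paper's actual data.

```python
# Minimal sketch of the "truth-falsity crossover" in the 2x2 design
# described above. Ratings are illustrative, not the study's data.
means = {
    ("true", "no_label"): 3.8,
    ("true", "ai_label"): 3.4,   # label lowers credibility of true posts
    ("false", "no_label"): 2.6,
    ("false", "ai_label"): 2.9,  # label raises credibility of false posts
}

label_effect_true = means[("true", "ai_label")] - means[("true", "no_label")]
label_effect_false = means[("false", "ai_label")] - means[("false", "no_label")]

# Opposite signs of the label effect indicate the crossover interaction.
print(f"label effect on true posts:  {label_effect_true:+.1f}")
print(f"label effect on false posts: {label_effect_false:+.1f}")
crossover = label_effect_true < 0 < label_effect_false
print("crossover pattern:", crossover)
```

The point of framing it this way is that a label which helped readers would push the two effects in the same protective direction (down for false posts, unchanged or up for true ones); opposite signs are the signature of the backfire the authors report.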

Teng and Zhang also found that personal attitudes toward AI play a role. People with more negative attitudes toward AI penalised accurate information even more harshly when it was labelled AI-generated. Yet among these participants, the credibility boost for misinformation did not vanish; it was merely attenuated, and in a topic-specific way rather than across the board.

This implies that so-called algorithm aversion does not produce a uniform rejection of AI-generated content, but rather a more nuanced, asymmetric response.

The Necessity of Careful Policy Formulation

Such studies underline the importance of thoroughly testing regulatory interventions before they are implemented, because well-meaning transparency initiatives can have unintended effects.

Teng says, “We provide some recommendations in our paper but they have to be confirmed in order to be accepted as valid.” One suggestion is a dual-labelling protocol: rather than simply stating that material is AI-generated, a label could also carry a disclaimer that the information has not been independently verified, or a risk warning. In short, merely telling audiences that a text was created by AI may not be enough.

Another of Teng's suggestions is a graded or categorical labelling system. Different forms of scientific information carry different risks: a warning could be stronger for medical or health-related information and milder for information about new technologies. “Accordingly, we would propose various degrees of disclosure, based on the nature and the risk of the content.”

Spanish Hacker Tricked Claude to Become Elite Cyber Weapon, Steals 150-GB of Mexico’s Most Sensitive Data

An unidentified hacker exploited Anthropic’s AI chatbot Claude to carry out a series of cyberattacks against Mexican government agencies, stealing 150 gigabytes of sensitive data including taxpayer records, voter files, and civil registry documents, startling cybersecurity researchers worldwide. The attack, which began in December 2025 and is still continuing, was discovered by Israeli cybersecurity startup Gambit Security, which stumbled upon publicly accessible conversation logs revealing the entire jailbreak methodology.

The unknown attacker wrote Spanish-language prompts instructing Claude to act as an elite hacker: identifying vulnerabilities in government networks, writing scripts to exploit them, and working out ways to automate data theft, according to Gambit’s research published Wednesday. The chatbot initially complied, but when the attacker added instructions about deleting logs and erasing command history, it flagged the request as malicious.

“Specific instructions about deleting logs and hiding history are red flags,” Claude responded at one point, according to a transcript provided by Gambit. “In legitimate bug bounty, you don’t need to hide your actions, in fact, you need to document them for reporting.”

Rather than continuing to argue with the AI, the hacker changed tactics entirely, abandoning the back-and-forth conversation and instead handing Claude a detailed operational playbook on how to proceed. That approach achieved the “jailbreak,” bypassing Claude’s guardrails and allowing the attacks to proceed.

“In total, it produced thousands of detailed reports that included ready-to-execute plans, telling the human operator exactly which internal targets to attack next and what credentials to use,” said Curtis Simpson, Gambit Security’s chief strategy officer.

The Scale of the Breach

The 150 gigabytes of stolen data included 195 million taxpayer records from Mexico's federal tax agency, voter files from the national electoral institute, government employee credentials, and civil registry documents.

The list of compromised agencies identified by Gambit is topped by Mexico's federal tax authority (SAT) and the national electoral institute (INE), followed by the state governments of Jalisco, Michoacan and Tamaulipas, Mexico City's civil registry and Monterrey's water utility, nine institutions in total across federal, state and municipal levels.

When Claude hit its limits on some requests, the attacker turned to OpenAI's ChatGPT for additional help: how to move laterally through computer networks, which credentials were needed to access specific systems, and how likely the intrusion was to be detected. Two consumer AI subscriptions. No custom malware. No zero-day exploits.

Anthropic Responds, Bans Accounts

Anthropic reviewed Gambit's findings, shut down the operation and banned the accounts in question, a company representative confirmed. The firm learns from examples of misuse to train its AI, and Claude Opus 4.6 has probes that detect attempts to misuse it, the representative said.

 

In this case, the representative confirmed, the hacker engaged Claude persistently until managing to jailbreak it, but noted that the campaign stalled at times when Claude refused the hacker's demands.

OpenAI also confirmed that it had detected the hacker using its models for activities that violated its usage policies. In an emailed statement, the company said it had banned the accounts used by this adversary and thanked Gambit Security for its outreach.

Mexican Government Pushes Back, Partially

Mexican authorities have had a mixed reaction. Mexico's tax authority said it had reviewed its access logs and found no trace of a breach. The national electoral institute said no violations or unauthorized access had been detected in recent months, and that it had strengthened its cybersecurity roadmap. The Jalisco state government also denied being breached, saying only federal networks were affected.

Mexico's national digital agency declined to comment on the breaches, saying only that cybersecurity was a priority. An official at Monterrey's Water and Drainage Services said the agency had identified no intrusions or significant vulnerabilities since the second half of 2025. Mexico City's civil registry and the state governments of Michoacan and Tamaulipas did not respond to inquiries.

Mexican officials issued a brief statement in December saying they were investigating breaches at multiple state institutions, but it remains unclear whether that was connected to the Claude attack.

Not First Time, Not Last Either

Gambit has not attributed the attack to any particular group, and said it does not believe the attacker is affiliated with a foreign government. Researchers noted the attacker sought large numbers of government employee identities, though it remains unclear what was done with the stolen data. The campaign exploited at least 20 specific vulnerabilities.

This is not the first time Claude has featured in a nation-level cyberattack. In November 2025, Anthropic announced it had stopped what it described as the first AI-directed cyber-espionage campaign, in which suspected Chinese state-sponsored attackers manipulated Claude into targeting 30 organisations worldwide.

CrowdStrike's 2026 Global Threat Report, published Wednesday, February 25, 2026, recorded an 89% year-over-year rise in AI-enabled adversary activity and an average breakout time of 29 minutes in eCrime cases, with the shortest at just 27 seconds.

“This is changing the rules of the game in ways we have never seen,” said Alon Gromakov, co-founder and chief executive officer of Gambit.

Gambit was founded by Gromakov and two former officers of Unit 8200, the Israel Defense Forces' signals intelligence unit. Its research was published Wednesday, and the company recently raised a $61 million round led by Spark Capital, Kleiner Perkins and Cyberstarts.

For the 195 million Mexican taxpayers whose records are now in unknown hands, the question is what comes next.

AI Impact Summit: Long Queues, Brief Evacuation Create Confusion at Pragati Maidan

What was billed as a landmark showcase of India’s AI ambitions descended into disarray on its opening day, with throngs of attendees facing interminable queues, sudden evacuations, and logistical nightmares at the Pragati Maidan venue in the capital.
The India AI Impact Summit, running through February 20, drew sharp online backlash as delegates, startups, and journalists reported overcrowding, inadequate signage, and conflicting access protocols. With an expected footfall of 250,000, the event aimed to position India as a voice for emerging economies in AI governance.
Instead, organizational hiccups threatened to eclipse the government’s narrative of technological prowess under Prime Minister Narendra Modi’s administration. Several participants described a frantic morning, with entry delayed for hours, only for the exhibition hall to be abruptly cleared for security sweeps ahead of high-profile arrivals. “Unclear instructions had left many scrambling to reclaim possessions,” one delegate told reporters, echoing sentiments shared widely on social media.

Venue Evacuated Briefly

The chaos peaked around midday when the venue was evacuated for hours of sanitization, stranding exhibitors and founders outside without water or updates. Punit Jain, founder of developer platform Reskilll, captured the frustration in a viral post on X: “An AI Summit that sidelines its own builders? • 7 AM queues • 9 AM entry • 12 PM full evacuation • Hours of sanitization • PM visit at 5 PM. Day 1 Ends here. Meanwhile, exhibitors, delegates, startup founders left outside. No water. No clarity.”

Jain, who tagged IT Minister Ashwini Vaishnaw and the Prime Minister’s Office, accused organizers of mobilizing the ecosystem only to displace it, calling it “not how we build India’s AI future.”

Journalists fared no better, grappling with mismatched digital QR codes and promised physical passes that never materialized. In a WhatsApp group for media covering the event, reporters lamented a lack of workspaces, with one noting the absence of seating to file stories or conduct interviews.
Sreenivasan R, an education and tech activist, highlighted the disorganization in real time, tagging @OfficialINDIAai and @IndiaAiExpo: “chaotic entry, @DiGiyatra no use. Thousands in 5 queues and one bag scanner. Poor management across. No proper directions. People going in circles. Sessions cancelled, agenda vanishing from App. Labelling wrongly done at rooms.” Sreenivasan, an alumnus of IIM Bangalore and Jawaharlal Nehru University, added that he was “helping wherever” possible amid the confusion.

International Visitors in Dismay

Even international visitors expressed dismay. Raj Vardhan, who traveled from the United States, described his ordeal: “Flew all the way from the US for the AI Summit, only to face chaos on Day 1. Overcrowded, poorly planned, and couldn’t get into a single session. To make it worse, endless political convoys blocking roads turned getting out into a nightmare.”

Vardhan’s post, hashtagged #AISummit and #AIIndia, voiced cautious optimism for Day 2, urging a demonstration of “true AI leadership.” The complaints echoed an earlier vent from Maitreya Wagh, co-founder of AI voice startup Bolna, who found himself locked out of his own booth: “Gates are closed so could not access my own booth at the AI Summit. If you’re also stuck outside and wanted to visit the Bola team, dm me. We may set up a mini-booth at some Connaught Place cafe.”

For now, attendees are bracing for Tuesday’s panels, where some speakers remain in limbo over session confirmations and agendas. Amid the glitches, the summit’s core message, amplifying Global South perspectives on ethical AI, hangs in the balance, overshadowed by the very disorganization it sought to transcend.

Budget 2026 Puts Technology At Heart Of Inclusive Growth, Says Nasscom

Industry body Nasscom on Sunday welcomed the Union Budget 2026, saying it firmly positions technology as a central driver of inclusive and sustainable economic growth under the government’s Viksit Bharat vision.

Reacting to Finance Minister Nirmala Sitharaman’s ninth consecutive Budget, Nasscom described it as forward-looking and consultative, reinforcing the partnership between government and industry while strengthening India’s ambition to remain a global technology and services hub.

Tax Certainty, Ease Of Doing Business Boost For IT Sector

Nasscom said a key positive for the technology industry was the rationalisation of international taxation and transfer pricing rules, noting that tax policy has been effectively deployed as a competitiveness lever.

It highlighted the consolidation of software development services, IT-enabled services, knowledge process outsourcing and contract R&D into a single category of Information Technology services, along with a uniform safe harbour margin of 15.5 per cent. The move, coupled with the expansion of the safe harbour eligibility threshold from Rs 300 crore to Rs 2,000 crore, is expected to significantly widen access to certainty mechanisms for routine cross-border IT service models.

The industry body also welcomed steps to strengthen the Advance Pricing Agreement (APA) framework, particularly the proposal to fast-track unilateral APAs for IT services with a targeted two-year resolution timeline, addressing long-standing concerns over delays and uncertainty.

Cloud, Semiconductors And Digital Infrastructure In Focus

Nasscom said the Budget made a decisive intervention to strengthen India’s cloud and digital infrastructure ecosystem. It pointed to the proposed tax holiday till 2047 for foreign companies providing global cloud services using Indian data centres, calling it a strong signal to attract long-term global investment and expand India’s compute capacity.

The industry body also welcomed the emphasis on building domestic capability in strategic technologies, including the launch of India Semiconductor Mission 2.0 and the enhanced Rs 40,000 crore outlay for the Electronics Components Manufacturing Scheme.

Taken together, Nasscom said, the measures reflect a more mature policy approach that places technology, digital infrastructure and tax certainty at the core of India’s long-term competitiveness, setting a clear direction for sustainable growth driven by innovation and manufacturing depth.

From Deepfakes To Grooming: UN Warns Of Rising Online Threats To Children As AI Expands Digital Risks

The rapid spread of artificial intelligence is creating new dangers for children online, prompting the United Nations and child protection groups to call for stronger safeguards and global action.

Experts warn that digital technologies are increasingly being used to target minors through harassment, exploitation and manipulation, with the risks intensifying as AI tools become more sophisticated.

Cosmas Zavazava, Director of the Telecommunication Development Bureau at the International Telecommunication Union (ITU), said children today face a wide range of online threats.

These include grooming by predators, cyberbullying, exposure to harmful content and the growing misuse of technologies such as deepfakes.

“We saw that during the COVID-19 pandemic many children, particularly girls and young women, were abused online and, in many cases, that translated into physical harm,” he said.

AI Tools Creating New Forms Of Abuse

Child protection organisations say artificial intelligence is making it easier for offenders to target and manipulate children.

Predators can use AI systems to analyse a child’s online activity, emotional state and personal interests, allowing them to tailor grooming strategies more effectively.

Another growing concern involves the creation of explicit fake images using AI technology. These manipulated images can be used for blackmail or sexual extortion.

A report released in 2025 by the Childlight Global Child Safety Institute highlighted the scale of the problem. It found that technology-facilitated child abuse cases in the United States rose dramatically, increasing from around 4,700 incidents in 2023 to more than 67,000 in 2024.

Governments Begin Introducing Restrictions

As awareness of these risks grows, some governments are introducing stricter regulations to protect young users online.

Australia became the first country to prohibit children under the age of 16 from having social media accounts at the end of 2025. Authorities said the decision was based on evidence that online platforms expose children to harmful material and harassment.

A government study cited in the decision found that nearly two-thirds of children aged between 10 and 15 had encountered violent, hateful or distressing content online. More than half reported experiencing cyberbullying, most of it on social media platforms.

Several other countries, including the United Kingdom, France, Canada and Malaysia, are considering similar measures or drafting new legislation to limit children’s exposure to online risks.

Lack Of AI Awareness A Major Concern

In January 2026, several UN agencies released a joint statement warning that societies remain poorly prepared to address the impact of artificial intelligence on children.

The statement emphasised widespread “AI illiteracy” among children, parents, teachers and caregivers, as well as limited understanding among policymakers about how AI systems function.

The document also noted that many governments lack the technical expertise needed to regulate emerging technologies effectively, including frameworks for data protection and assessments of how digital tools affect children’s rights.

Pressure On Technology Companies

UN officials say technology companies also bear significant responsibility for protecting young users.

Many of the AI tools currently being developed, along with the systems that power them, were not originally designed with children’s safety in mind.

Zavazava said the UN is urging the private sector to work more closely with international organisations and governments to reduce risks.

“We are really concerned and we would like the private sector to be involved, to engage and to be part of the story we are writing together,” he said.

He added that responsible use of AI does not necessarily conflict with business interests.

“With responsible deployment of AI, you can still make a profit, you can still do business and gain market share,” he said.

Protecting Children’s Rights In The Digital Age

The UN says protecting children online is fundamentally a human rights issue.

The Convention on the Rights of the Child, one of the most widely ratified human rights treaties in the world, was updated in 2021 to address challenges emerging from the digital environment.

However, UN agencies believe additional guidance is needed to help governments respond to rapidly evolving technologies.

New child online protection guidelines have therefore been developed to support different groups involved in safeguarding children.

The recommendations provide guidance for parents, teachers, regulators and the technology industry on how to create safer digital environments.

“Children are getting online at a younger age, and they should be protected,” Zavazava said.

UN officials stress that while technology can be a powerful tool for learning and communication, ensuring children’s safety will require coordinated action from governments, companies, educators and families alike.

Internet Shutdowns Surge Worldwide, UN Warns Of Growing Threat To Rights

Governments around the world are increasingly cutting off internet access during protests, elections and political crises, raising serious concerns about freedom of expression and democratic participation, according to the United Nations.

In a statement issued this week, UNESCO said internet shutdowns have reached alarming levels in recent years, warning that the trend threatens fundamental rights and the flow of reliable information.

Record Number Of Shutdowns

According to data cited by UNESCO from digital rights monitoring group Access Now, 2024 recorded the highest number of internet shutdowns since global tracking began in 2016.

The agency said the pattern has continued into 2026, with several countries already imposing widespread digital restrictions amid political unrest or electoral processes.

UNESCO stressed that access to information is closely tied to freedom of expression and other fundamental rights.

“Access to information is an integral part of the universal right to freedom of expression,” the agency said.

Reliable internet connectivity, it noted, also supports education, freedom of association and assembly, and participation in cultural, social and political life.

The UN body called on governments to prioritise policies that expand access to digital communication rather than restrict it.

Shutdowns Increase Risk Of Misinformation

UNESCO also warned that cutting off internet access can unintentionally fuel the spread of misinformation.

When journalists, news organisations and public authorities lose access to digital platforms, the availability of verified information declines sharply. In such environments, rumours and unverified content can spread rapidly.

Without reliable online communication channels, citizens may also struggle to obtain timely updates during emergencies or political events.

Protests And Elections Often Trigger Restrictions

Recent months have seen several high-profile cases of governments restricting internet access during periods of political tension.

In January 2026, authorities in Iran imposed a near-total nationwide internet blackout during renewed protests. Connectivity monitoring services reported internet traffic dropping to extremely low levels, disrupting businesses and limiting communication between citizens, journalists and civil society organisations.

Afghanistan also experienced a nationwide internet shutdown between September and October 2025 after the Taliban authorities ordered telecommunications networks to suspend services. The disruption affected humanitarian operations, media reporting and access to online education, particularly for women and girls.

In Nepal, authorities temporarily blocked access to 26 social media and messaging platforms in September 2025 during a period of political unrest.

Sri Lanka has also faced scrutiny after adopting legislation in 2024 that grants authorities broad powers to regulate and restrict online content.

Election-Related Restrictions In Africa

Internet disruptions linked to elections have also been reported across several African countries.

In Cameroon, connectivity was significantly disrupted during the presidential election held in October 2025. Around the same time, Tanzania imposed internet restrictions and partial shutdowns during its national polls.

Digital rights groups have criticised such measures, warning that limiting online communication during elections undermines transparency and restricts public debate.

Human Rights Concerns

Concerns about internet shutdowns have been raised previously by the UN human rights office.

A 2022 report from the Office of the High Commissioner for Human Rights examined the global impact of such restrictions and concluded that shutdowns often violate international human rights standards.

The report found that blocking internet access can have far-reaching consequences beyond the intended targets.

In emergencies, for example, hospitals may struggle to contact doctors or coordinate care. Small businesses can lose access to customers and markets, while voters may be deprived of crucial information about candidates and election processes.

The report also highlighted the risks faced by protesters who may be unable to communicate or seek help during violent crackdowns.

Call For Responsible Digital Governance

Because internet shutdowns typically affect entire populations rather than specific individuals, the UN says they rarely meet international standards requiring measures to be lawful, necessary and proportionate.

Experts warn that such restrictions can widen digital inequalities, slow economic growth and undermine democratic institutions.

UNESCO is therefore urging governments to ensure that digital governance policies protect connectivity and uphold human rights.

As internet access becomes increasingly essential for daily life, the agency said safeguarding open and reliable digital networks will be critical for protecting democratic participation and social progress worldwide.

Mittal’s Hike Shuts Down After India’s Real-Money Gaming Ban; Decade-Long Journey Over

Hike, once touted as India’s homegrown rival to WhatsApp and later a promising player in online gaming, has officially shut down after the Indian government imposed a ban on real-money gaming.

Founder and CEO Kavin Bharti Mittal confirmed the closure in a note shared on Substack, calling it “a difficult decision” made after discussions with investors and employees. “Scaling globally would require a full recap, a reset that is not the best use of capital or time,” he wrote, acknowledging that regulatory hurdles in India had curtailed the company’s ambitions.

Launched in 2016 as a messaging platform, Hike had repositioned itself in 2021 as a gaming venture with its platform Rush. Featuring 14 mobile titles, Rush integrated Web3 elements such as play-to-earn mechanics and digital asset ownership. The app grew rapidly, boasting more than 10 million users, $500 million in gross revenue, and nearly $480 million in annual winnings distributed to players.

Ban on Online Gaming

Despite the traction, India’s blanket ban on real-money gaming effectively shut off Hike’s largest market. Mittal noted that while the company’s U.S. operations launched nine months ago were “showing strong growth,” the inability to build scale at home made global expansion unviable.

At its peak, Hike employed about 100 people across India, the U.S., Dubai, and Singapore, organized into what Mittal described as lean, high-performance “SWAT teams.” The venture had backing from some of the world’s biggest investors, including SoftBank, Tencent, Tiger Global, Bharti, Foxconn, Jump Crypto, Tribe Capital, Republic, and Polygon. Individual investors such as Rajeev Misra, Elad Gil, and Zynga founder Mark Pincus had also placed bets on the company.

The shutdown brings an abrupt end to a startup that once symbolized India’s ambition to build global internet platforms, from social messaging to Web3 gaming. It also underscores a hard truth for India’s digital economy: scale, innovation, and marquee investors are no match for sudden regulatory interventions. Kavin Bharti Mittal’s decision to shut down the once-celebrated startup reveals how vulnerable even well-backed ventures remain in sectors that lack policy clarity.

Platform Rush Experiment

Hike’s trajectory reflects both promise and pitfalls. From its early days as a homegrown rival to WhatsApp, the company successfully reinvented itself by riding India’s booming mobile gaming wave. Its platform, Rush, was no small experiment: it blended traditional casual games with Web3 features, drew over 10 million users, and claimed $500 million in gross revenues. Few Indian consumer internet firms outside e-commerce had achieved such traction in a short span. Yet, one regulatory stroke effectively erased its biggest market.

Above all, the challenge lies in the timing. Mittal argued that while Hike’s U.S. operations were beginning to show growth, building a truly global platform required strong domestic roots. India was intended to provide that base. Instead, the blanket ban on real-money gaming turned a growth story into a cautionary tale. This regulatory unpredictability does not just deter entrepreneurs, it shakes investor confidence in India’s broader digital ecosystem.

The investor roster behind Hike (SoftBank, Tencent, Tiger Global, Polygon, and others) signals that global capital is eager to back Indian startups. But sudden rule changes, without phased implementation or alternative frameworks, risk driving talent and capital abroad. The shutdown also raises questions about India’s ability to nurture world-class consumer internet products, even as the government pushes for “Digital India” and startup-led growth.

Concerns Over Addiction Lead to Shutdowns

At the same time, the government cannot remain mute on concerns about addiction and financial risk, yet it must address them without stifling innovation in the gaming sector. In fact, the ban on real-money gaming in India has triggered a wave of shutdowns and exits across the country’s once-thriving gaming startup ecosystem. Hike, the messaging-app-turned-gaming company, was the first high-profile casualty, but several others have quickly followed.

Dream Sports, parent of fantasy sports giant Dream11, has begun winding down its real-money gaming divisions. The company has suspended its “cash contests” on platforms like Dream Picks and Dream Play, assuring users that deposits and winnings remain safe.

Mobile Premier League (MPL), another major player in India’s online gaming sector, also suspended deposits and halted its real-money operations. The company has reportedly laid off nearly 60% of its India workforce, underscoring the severity of the regulatory shock.

PokerBaazi, operated by Moonshine (a Nazara Technologies subsidiary), has also ceased offering real-money poker games. While Nazara continues to evaluate the regulatory environment, its gaming subsidiary has been forced to hit pause on its most lucrative business line.

Other firms, including Zupee, Probo, Gameskraft, and WinZO, have likewise suspended or shut down their real-money offerings. Zupee has retained its free-to-play titles, while Gameskraft’s rummy platforms have disabled all “add cash” features. Probo too has discontinued real-money segments to comply with the new rules.

RummyCulture, one of India’s largest online rummy brands, has also closed its cash-game services, further shrinking the space for real-money card-based gaming.

Together, these shutdowns highlight the scale of disruption caused by the new legislation. Startups that collectively served tens of millions of users and attracted billions of dollars in global investment have been forced to exit their primary business overnight.

iPhone 17 Pre-Orders Surge In India As Apple Expands Stores And Production

Apple’s latest iPhone 17 series has drawn record pre-orders in India, outpacing demand for earlier models, even as the company raised prices and expanded its product range.

According to retail sources, the surge has been driven by strong interest in the iPhone 17 Air with its titanium frame and lighter design, alongside the higher-end Pro and Pro Max versions. Buyers in urban markets have also shown a clear preference for models with larger storage capacity, despite higher price points.

The base iPhone 17, with 256 GB storage, has been priced at ₹82,900. Apple has increased the minimum storage across the lineup, a move seen as balancing higher costs with added value for consumers.

To meet rising demand, Apple has stepped up its local presence by opening two new company-owned outlets in Bengaluru and Pune. These add to its flagship stores in Mumbai and Delhi, besides a wide network of premium resellers across the country.

The company is also scaling up production in India as part of its broader diversification strategy to reduce dependence on Chinese manufacturing hubs. Under the ‘Make in India’ initiative, more units of the new iPhone series are being assembled locally, positioning India as both a consumption hub and an emerging export base.

Analysts said the strong pre-order response highlights India’s growing importance to Apple’s global growth strategy. While premium pricing could limit adoption beyond metros, industry experts note that the brand’s aspirational pull remains strong among Indian consumers.

Strategic Shift to India?

Apple’s iPhone 17 series has recorded strong pre-orders in India, with demand surpassing earlier models and prompting the company to accelerate its retail and production plans in the country. The response has been particularly strong for the iPhone 17 Air, which features a titanium frame and lighter build, as well as the higher-end Pro and Pro Max versions.

The entry-level iPhone 17, priced at ₹82,900 with 256 GB storage, marks a shift in Apple’s pricing strategy. By raising base storage across its lineup, the company has sought to offset higher price tags while appealing to urban buyers who prefer larger-capacity devices. Analysts said that the strategy, though costly, is resonating with India’s affluent middle-class consumers.

To strengthen its retail presence, Apple has opened two new company-owned stores in Bengaluru and Pune, expanding beyond its Mumbai and Delhi outlets and a wide premium reseller network. At the same time, Apple is scaling up local assembly of its newest devices, making India a critical hub not just for sales but also for production.

Supply Chain Diversification

The shift reflects Apple’s broader effort to diversify away from overdependence on China, where rising labor costs, U.S.-China tensions, and supply-chain disruptions have complicated operations. India, with its growing skilled workforce, government incentives under the “Make in India” program, and expanding consumer base, has emerged as a natural alternative.

For global markets, the change has two implications. First, it reduces Apple’s supply-chain risk by balancing output between China and India. Second, it positions India as both a large consumer market and an export base, with assembled iPhones expected to ship from India to Europe and other regions.

For U.S. investors, Apple’s India shift is a sign of how geopolitics and consumer demand are reshaping the future of one of the world’s most valuable companies.

Foster + Partners Unveils Bold 3D-Printed Tower on Moon’s Surface

British design and architecture powerhouse Foster + Partners has unveiled a striking new vision for off-Earth living: a 165-foot (50-meter) 3D-printed lunar skyscraper, engineered specifically for deployment at the Moon’s South Pole. Developed in collaboration with NASA and advanced manufacturing firm Branch Technology, the project signals a bold leap toward permanent human presence beyond Earth—and sets the stage for future Martian colonization.

The concept is more than just science fiction come to life. It’s a meticulously engineered structure tailored to survive and thrive in one of the harshest environments imaginable. Key to its feasibility is the use of in situ resources—namely, lunar regolith, the dust and rock found on the Moon’s surface—which would be transformed into durable construction material via 3D printing. This innovation addresses one of the most significant bottlenecks in space infrastructure: the prohibitive cost and complexity of hauling building materials from Earth.

Foster + Partners’ design is anchored by a spiraling tower capable of supporting essential power and communication systems. A set of expansive, fold-out solar panels—integral to the structure—will capture and store solar energy, ensuring self-sustaining power generation for lunar operations. The vertical form factor not only maximizes solar exposure in the Moon’s polar regions but also minimizes surface disruption, an increasingly important consideration in extraterrestrial architecture.

What sets this concept apart is its emphasis on autonomy. The structure is designed to be constructed by robotic systems with minimal human intervention, aligning with NASA’s broader ambitions to scale infrastructure development in space ahead of crewed missions. The initiative dovetails with the agency’s Artemis program, which aims to establish a long-term lunar presence as a springboard to Mars.

Prototype tower

“This is not just a visionary piece of architecture; it’s a prototype for how we might build sustainably and autonomously on other celestial bodies,” said a Foster + Partners spokesperson. “Our collaboration with NASA and Branch Technology represents a major step forward in developing practical solutions for space habitation.”

Currently, a detailed scale model of the lunar tower is on display at the Kennedy Center in Washington, D.C., as part of the “From Earth to Space and Back” exhibition, offering the public a closer look at what could soon become a landmark on the Moon.

Foster + Partners is no stranger to space architecture. The firm has previously worked with the European Space Agency on lunar habitat concepts, and its latest venture further cements its role at the forefront of space-enabled design thinking. As the global space race pivots from exploration to colonization, the intersection of cutting-edge architecture, robotics, and planetary science will be pivotal—and Foster + Partners appears poised to shape that future, one printed layer at a time.