AMD Poised to Launch New AI Chips, Intensifying Market Rivalry With Nvidia

In a strategic move that underscores the intensifying competition in the artificial intelligence (AI) chip sector, Advanced Micro Devices (AMD) is set to unveil a new lineup of AI processors during an upcoming data center event in San Francisco. This announcement aims to strengthen AMD’s position as a formidable supplier of AI chips in a market that has been predominantly led by Nvidia. The event, scheduled for Thursday, is anticipated to feature details on AMD’s MI325X chip and the next-generation MI350 chip.

The MI350 series is designed to directly compete with Nvidia’s Blackwell architecture, promising enhanced computing power and memory capabilities. This development marks a significant effort by AMD to disrupt Nvidia’s market dominance in the AI chip landscape. AMD first introduced these chips at the Computex trade show in Taiwan in June, with plans for a release in the latter half of this year and into next year.

In addition to the AI chips, AMD is expected to unveil new server central processing units (CPUs) and PC chips that incorporate enhanced AI computing capabilities. This initiative illustrates AMD’s dedication to advancing AI technology and responding to the increasing demand for AI-driven solutions across various sectors.

AMD’s current MI300X AI chip, launched late last year, has experienced a swift uptick in production to meet growing market needs. In July, the company raised its AI chip revenue forecast for the year to $4.5 billion, up from a previous estimate of $4 billion, driven by substantial demand for the MI300X, especially in the realm of generative AI product development.

Market Competition

Despite AMD’s aggressive strategy, analysts suggest that its new product launches are unlikely to significantly impact Nvidia’s data center revenue, given that the demand for AI chips far outstrips supply. AMD is projected to report data center revenue of $12.83 billion this year, according to LSEG estimates, while Nvidia is expected to achieve a staggering $110.36 billion in the same segment. Data center revenue serves as a critical indicator of the demand for AI chips essential for developing and running AI applications.

The competitive landscape for AI chips has been evolving rapidly. Intel, another key player, recently announced its next-generation AI data center chips, the Gaudi 3 accelerator kit, which is priced around $125,000—substantially cheaper than Nvidia’s comparable HGX server system. Meanwhile, Nvidia continues to innovate with its next-generation AI platform, the Rubin platform, slated for release in 2026. This platform will succeed the Blackwell architecture, which has been highly sought after and is expected to remain sold out well into 2025 due to robust demand.

AMD’s Move Toward AI

AMD’s CEO, Lisa Su, has expressed a clear vision for the company’s future, emphasizing that AMD is not seeking to be a niche player in the AI chip market. This statement reflects the company’s ambition to solidify its presence as a major contender alongside established leaders like Nvidia and Intel.

As the AI chip market becomes increasingly competitive, AMD’s upcoming announcement is likely to further fuel this rivalry. With AI technology continuing to evolve and the demand for AI-powered solutions expanding, the market is poised for more innovations and strategic initiatives from industry giants. This dynamic landscape highlights the relentless pursuit of technological advancement in the AI chip arena.

Voice control smart devices might hinder children’s social, emotional development: Study

Voice control smart devices, such as Alexa, Siri, and Google Home, might hinder children’s social and emotional development, argues an expert in the use of artificial intelligence and machine learning in healthcare, in a viewpoint published online in the Archives of Disease in Childhood.

These devices might have long-term effects by impeding children’s critical thinking, capacity for empathy and compassion, and learning skills, says Anmol Arora of the University of Cambridge.

While voice control devices may act as ‘friends’ and help to improve children’s reading and communication skills, their advanced AI and human-sounding voices have prompted concerns about the potential long-term effects on children’s brains at a crucial stage of development.

There are three broad areas of concern, explains the author. These comprise inappropriate responses; impeding social development; and hindering learning.

He cites some well-publicised examples of inappropriate responses, including a device suggesting that a 10-year-old should try touching a live plug with a coin.

“It is difficult to enforce robust parental controls on such devices without severely affecting their functionality,” he suggests, adding that privacy issues have also arisen in respect of the recording of private conversations.

These devices can’t teach children how to behave politely, because there’s no expectation of a “please” or “thank you”, and no need to consider the tone of voice, he points out.

“The lack of ability to engage in non-verbal communication makes use of the devices a poor method of learning social interaction,” he writes. “While in normal human interactions, a child would usually receive constructive feedback if they were to behave inappropriately, this is beyond the scope of a smart device.”

Preliminary research on the use of voice assistants as social companions for lonely adults is encouraging. But it’s not at all clear if this also applies to children, he notes.

“This is particularly important at a time when children might already have had social development impaired as a result of COVID-19 restrictions and when [they] might have been spending more time isolated with smart devices at home,” he emphasises.

Devices are designed to search for requested information and provide a concise, specific answer, but this may hinder traditional processes by which children learn and absorb information, the author suggests.

When children ask adults questions, the adult can request contextual information, explain the limitations of their knowledge and probe the child’s reasoning—a process that these devices can’t replicate, he says.

Searching for information is also an important learning experience, which teaches critical thinking and logical reasoning, he explains.

“The rise of voice devices has provided great benefit to the population. Their abilities to provide information rapidly, assist with daily activities, and act as a social companion to lonely adults are important and useful,” the author acknowledges.

“However, urgent research is required into the long-term consequences for children interacting with such devices,” he insists.

“Interacting with the devices at a crucial stage in social and emotional development might have long-term consequences on empathy, compassion, and critical thinking,” he concludes.

Mobile phone app accurately detects COVID-19 infection in people’s voices

Artificial intelligence (AI) can be used to detect COVID-19 infection in people’s voices by means of a mobile phone app, according to research to be presented on Monday at the European Respiratory Society International Congress in Barcelona, Spain [1].

The AI model used in this research is more accurate than lateral flow/rapid antigen tests and is cheap, quick and easy to use, which means it can be used in low-income countries where PCR tests are expensive and/or difficult to distribute.

Ms Wafaa Aljbawi, a researcher at the Institute of Data Science, Maastricht University, The Netherlands, told the congress that the AI model was accurate 89% of the time, whereas the accuracy of lateral flow tests varied widely depending on the brand. Also, lateral flow tests were considerably less accurate at detecting COVID infection in people who showed no symptoms.

COVID-19 infection usually affects the upper respiratory tract and vocal cords, leading to changes in a person’s voice.

“These promising results suggest that simple voice recordings and fine-tuned AI algorithms can potentially achieve high precision in determining which patients have COVID-19 infection,” she said. “Moreover, they enable remote, virtual testing and have a turnaround time of less than a minute. They could be used, for example, at the entry points of large gatherings, enabling rapid screening of the population.”

Once the app is installed on a user’s mobile phone, participants report some basic information about demographics, medical history and smoking status, and are then asked to record some respiratory sounds. These include coughing three times, breathing deeply through their mouth three to five times, and reading a short sentence on the screen three times.

The researchers used a voice analysis technique called Mel-spectrogram analysis, which identifies different voice features such as loudness, power and variation over time.

“In this way we can decompose the many properties of the participants’ voices,” said Ms Aljbawi. “In order to distinguish the voice of COVID-19 patients from those who did not have the disease, we built different artificial intelligence models and evaluated which one worked best at classifying the COVID-19 cases.”
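
The congress presentation does not spell out the preprocessing pipeline, but Mel-spectrogram extraction of this kind is a standard step. A minimal sketch, assuming the open-source librosa library; the sampling rate, band count, and file name are illustrative, not the study's:

```python
# A minimal sketch, not the study's pipeline: extract a log-Mel spectrogram
# from a voice recording with librosa. Sampling rate, band count, and the
# file name are illustrative assumptions.
import librosa
import numpy as np

def mel_features(path, sr=16000, n_mels=64):
    """Return a log-Mel spectrogram (mel bands x time frames) for one recording."""
    y, _ = librosa.load(path, sr=sr)                       # load and resample
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)            # log scale: loudness/power over time

# Hypothetical usage: features for one recording, fed to a classifier.
# features = mel_features("voice_sample.wav")              # shape: (64, n_frames)
```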

The best-performing model’s overall accuracy was 89%, its ability to correctly detect positive cases (the true positive rate, or “sensitivity”) was 89%, and its ability to correctly identify negative cases (the true negative rate, or “specificity”) was 83%.
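
For readers unfamiliar with these terms, the metrics follow directly from confusion-matrix counts; a short illustration with made-up counts, not the study’s data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                  # share of true positives caught
    specificity = tn / (tn + fp)                  # share of true negatives caught
    accuracy = (tp + tn) / (tp + fp + tn + fn)    # share of all calls that are correct
    return sensitivity, specificity, accuracy

# Hypothetical counts, not the study's data:
print(diagnostic_metrics(tp=89, fp=17, tn=83, fn=11))  # -> (0.89, 0.83, 0.86)
```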

“These results show a significant improvement in the accuracy of diagnosing COVID-19 compared to state-of-the-art tests such as the lateral flow test,” said Ms Aljbawi.

The patients were “high engagers”, who had been using the app weekly over months or even years to record their symptoms and other health information, record medication, set reminders, and have access to up-to-date health and lifestyle information. Doctors can assess the data via a clinician dashboard, enabling them to provide oversight, co-management and remote monitoring.

Emotional AI and gen Z: The attitude towards new technology and its concerns

Artificial intelligence (AI) governs nearly everything that comes under “smart technology” today. From self-driving cars to voice assistants on our smartphones, AI has a ubiquitous presence in our daily lives. Yet it has long lacked a crucial feature: the ability to engage with human emotions.

The scenario is quickly changing, however. Algorithms that can sense human emotions and interact with them are becoming mainstream as they come embedded in existing systems. Known as “emotional AI,” the new technology achieves this feat through a process called “non-conscious data collection” (NCDC), in which the algorithm collects data on the user’s heart and respiration rates, voice tone, micro-facial expressions, gestures, and so on, to analyze their mood and personalize its responses accordingly.

However, the unregulated nature of this technology has raised many ethical and privacy concerns. In particular, it is important to know the attitude of Generation Z (Gen Z), now the largest demographic, towards NCDC. Making up 36% of the global workforce, Gen Z is likely to be the most vulnerable to emotional AI. Moreover, AI algorithms are rarely calibrated for socio-cultural differences, making their implementation all the more concerning.

“We found that being male and having a high income were both correlated with having positive attitudes towards accepting NCDC. In addition, business majors were more likely to be tolerant of NCDC,” highlights Prof. Ghotbi. Cultural factors, such as region and religion, were also found to have an impact, with people from Southeast Asia, Muslims, and Christians reporting concern over NCDC.

“Our study clearly demonstrates that sociocultural factors deeply impact the acceptance of new technology. This means that theories based on the traditional technology acceptance model by Davis, which does not account for these factors, need to be modified,” explains Prof. Mantello.

The study addressed this issue by proposing a “mind-sponge” model-based approach that accounts for socio-cultural factors in assessing the acceptance of AI technology. It also called for a thorough understanding of the technology’s potential risks to enable effective governance and ethical design. “Public outreach initiatives are needed to sensitize the population to the ethical implications of NCDC. These initiatives need to consider demographic and cultural differences to be successful,” says Dr. Nguyen.

Overall, the study highlights the extent to which emotional AI and NCDC technologies are already present in our lives and the privacy trade-offs they imply for the younger generation. Thus, there is an urgent need to make sure that these technologies serve both individuals and societies well.

Italian team develops superior AI model for stock trading

Using convolutional neural networks (CNNs) with deep learning, a discipline within artificial intelligence, Italian researchers have developed a system of market forecasting with the potential for greater gains and fewer losses than previous attempts to use AI methods to manage stock portfolios.

The team, led by Prof. Silvio Barra at the University of Cagliari, published its findings in IEEE/CAA Journal of Automatica Sinica. The researchers set out to create an AI-managed “buy and hold” (B&H) strategy, a system for deciding among three possible actions each trading day: a long action (buying a stock and selling it before the market closes), a short action (selling a stock, then buying it back before the market closes), or a hold (deciding not to invest in a stock that day).

At the heart of their proposed system is an automated cycle that analyzes layered images generated from current and past market data. This distinguishes it from older B&H systems based on machine learning, a discipline that leans heavily on predictions drawn from past performance alone.

Just like a seasoned investor

By letting their proposed network analyze current data layered over past data, the researchers take market forecasting a step further, allowing for a type of learning that more closely mirrors the intuition of a seasoned investor rather than a robot. The network can adjust its buy/sell thresholds based on what is happening both in real time and in the past. Taking present-day factors into account increases the yield over both random guessing and trading algorithms that are incapable of real-time learning, they said.

To train their CNN for the experiment, the research team used S&P 500 data from 2009 to 2016. The S&P 500 is widely regarded as a litmus test for the health of the overall market.

At first, their proposed trading system predicted the market with about 50 percent accuracy, roughly the level needed to break even in a real-world situation. They discovered that short-term outliers, which unexpectedly over- or underperformed, generated a factor they called “randomness.” Realizing this, they added threshold controls, which greatly stabilized their method.

“The mitigation of randomness yields two simple, but significant consequences,” Prof. Barra said. “When we lose, we tend to lose very little, and when we win, we tend to win considerably.” However, further enhancements will be needed, said Prof. Barra, as other methods of automated trading already in use make markets more and more difficult to predict.
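
The announcement gives no implementation details, but the threshold idea can be sketched: a small CNN scores an image-encoded trading day as long, short, or hold, and a confidence threshold forces a hold whenever the network is uncertain. A minimal sketch assuming PyTorch; the architecture, image shape, and threshold value are all illustrative, not the authors’:

```python
# Illustrative only, not the authors' network: a small CNN classifies a
# layered market "image" into long/short/hold, and a confidence threshold
# suppresses low-certainty trades, stabilising results against the
# short-term "randomness" described above.
import torch
import torch.nn as nn

class TradingCNN(nn.Module):
    def __init__(self, n_actions=3):                    # 0=long, 1=short, 2=hold
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, n_actions)

    def forward(self, x):                               # x: (batch, 1, H, W)
        return self.head(self.features(x).flatten(1))

def decide(model, image, threshold=0.6):
    """Trade only when the network is confident; otherwise hold."""
    probs = torch.softmax(model(image), dim=1).squeeze(0)
    action = int(probs.argmax())
    return action if float(probs[action]) >= threshold else 2   # 2 = hold

model = TradingCNN()
day = torch.randn(1, 1, 40, 40)     # stand-in for an image built from market data
print(decide(model, day))
```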

Artificial intelligence: Is this the future of early cancer detection?

A new endoscopic system powered by artificial intelligence (AI) has today been shown to automatically identify colorectal adenomas during colonoscopy. The system, developed in Japan, has recently been tested in one of the first prospective trials of AI-assisted endoscopy in a clinical setting, with the results presented today at the 25th UEG Week in Barcelona, Spain.

AI-assisted endocytoscopy – how it works:

The new computer-aided diagnostic system uses an endocytoscopic image – a 500-fold magnified view of a colorectal polyp – to analyse approximately 300 features of the polyp after applying narrow-band imaging (NBI) mode or staining with methylene blue. The system compares the features of each polyp against more than 30,000 endocytoscopic images that were used for machine learning, allowing it to predict the lesion pathology in less than a second. Preliminary studies demonstrated the feasibility of using such a system to classify colorectal polyps; however, until today, no prospective studies had been reported.
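
The announcement does not name the learning algorithm; a minimal sketch of the general setup, a classifier fitted to fixed-length feature vectors (one per polyp image), assuming scikit-learn, an SVM, and synthetic stand-in data:

```python
# A minimal sketch of the general setup, not the authors' system: a classifier
# fitted to ~300-dimensional feature vectors, one per polyp image. The SVM
# choice, synthetic data, and reduced sample count (for speed) are all
# assumptions; the real system learned from more than 30,000 images.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(2000, 300))     # ~300 features per training image
y_train = rng.integers(0, 2, size=2000)    # 0 = non-neoplastic, 1 = neoplastic

clf = SVC(probability=True).fit(X_train, y_train)

new_polyp = rng.normal(size=(1, 300))      # features from one new endocytoscopic image
print(clf.predict(new_polyp), clf.predict_proba(new_polyp))
```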

Prospective study in routine practice:

The prospective study, led by Dr Yuichi Mori from Showa University in Yokohama, Japan, involved 250 men and women in whom colorectal polyps had been detected using endocytoscopy. The AI-assisted system was used to predict the pathology of each polyp, and those predictions were compared with the pathological reports obtained from the final resected specimens. Overall, 306 polyps were assessed in real time using the AI-assisted system, providing a sensitivity of 94%, specificity of 79%, accuracy of 86%, and positive and negative predictive values of 79% and 93%, respectively, in identifying neoplastic changes.
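
Predictive values are linked to sensitivity, specificity, and the prevalence of neoplastic polyps in the sample by Bayes’ rule; a short check, with the roughly 46% prevalence inferred here rather than taken from the report:

```python
def predictive_values(sens, spec, prevalence):
    """Bayes' rule: predictive values from sensitivity, specificity, prevalence."""
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

# With the reported sensitivity and specificity, an assumed ~46% prevalence of
# neoplastic polyps approximately reproduces the reported PPV and NPV:
print(predictive_values(sens=0.94, spec=0.79, prevalence=0.46))  # ~(0.79, 0.94)
```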

Speaking at the Opening Plenary at UEG Week, Dr Mori explained: “The most remarkable breakthrough with this system is that artificial intelligence enables real-time optical biopsy of colorectal polyps during colonoscopy, regardless of the endoscopists’ skill. This allows the complete resection of adenomatous polyps and prevents unnecessary polypectomy of non-neoplastic polyps.”

“We believe these results are acceptable for clinical application, and our immediate goal is to obtain regulatory approval for the diagnostic system,” added Dr Mori.

Moving forwards, the research team is now undertaking a multicentre study for this purpose, and it is also working on developing an automatic polyp detection system. “Precise on-site identification of adenomas during colonoscopy contributes to the complete resection of neoplastic lesions,” said Dr Mori. “This is thought to decrease the risk of colorectal cancer and, ultimately, cancer-related death.”

AI implications: Engineer’s model lays groundwork for machine-learning device

In what could be a small step for science potentially leading to a breakthrough, an engineer at Washington University in St. Louis has taken steps toward using nanocrystal networks for artificial intelligence applications.

Elijah Thimsen, assistant professor of energy, environmental & chemical engineering in the School of Engineering & Applied Science, and his collaborators have developed a model with which to test existing theories about how electrons move through nanomaterials. This model may lay the foundation for using nanomaterials in a machine learning device.

“When one builds devices out of nanomaterials, they don’t always behave like they would for a bulk material,” Thimsen said. “One of the things that changes dramatically is the way in which these electrons move through material, called the electron transport mechanism, but it’s not well understood how that happens.”

Thimsen and his team based the model on an unusual theory that every nanoparticle in a network is a node that is connected to every other node, not only its immediate neighbors. Equally unusual is that the current flowing through the nodes doesn’t necessarily occupy the spaces between the nodes — it needs only to pass through the nodes themselves. This behavior, which is predicted by the model, produces experimentally observable current hotspots at the nanoscale, the researcher said.

In addition, the team looked at another model called a neural network, based on the human brain and nervous system. Scientists have been working to build new computer chips to emulate these networks, but such chips fall far short of the human brain, which contains up to 100 billion nodes and 10,000 connections per node.

“If we have a huge number of nodes — much larger than anything that exists — and a huge number of connections, how do we train it?” Thimsen asks. “We want to get this large network to perform something useful, such as a pattern-recognition task.”

Based on those network theories, Thimsen has proposed an initial project to design a simple chip, give it particular inputs and study the outputs.

“If we treat it as a neural network, we want to see if the output from the device will depend on the input,” Thimsen said. “Once we can prove that, we’ll take the next step and propose a new device that allows us to train this system to perform a simple pattern-recognition task.”

Empowering robots for ethical behavior

Scientists at the University of Hertfordshire in the UK have developed a concept called Empowerment to help robots to protect and serve humans, while keeping themselves safe.

Robots are becoming more common in our homes and workplaces and this looks set to continue. Many robots will have to interact with humans in unpredictable situations. For example, self-driving cars need to keep their occupants safe, while protecting the car from damage. Robots caring for the elderly will need to adapt to complex situations and respond to their owners’ needs.

Recently, thinkers such as Stephen Hawking have warned about the potential dangers of artificial intelligence, and this has sparked public discussion. “Public opinion seems to swing from enthusiasm for progress and downplaying any risks to outright fear,” says Daniel Polani, a scientist involved in the research, which was recently published in Frontiers in Robotics and AI.

However, the concept of “intelligent” machines running amok and turning on their human creators is not new. In 1942, science fiction writer Isaac Asimov proposed his three laws of robotics, which govern how robots should interact with humans. Put simply, these laws state that a robot should not harm a human, or allow a human to be harmed. The laws also aim to ensure that robots obey orders from humans, and protect their own existence, as long as this doesn’t cause harm to a human.

The laws are well-intentioned, but they are open to misinterpretation, especially as robots don’t understand nuanced and ambiguous human language. In fact, Asimov’s stories are full of examples where robots misinterpreted the spirit of the laws, with tragic consequences.

One problem is that the concept of “harm” is complex, context-specific and difficult to explain clearly to a robot. If a robot doesn’t understand “harm”, how can it avoid causing it? “We realized that we could use different perspectives to create ‘good’ robot behavior, broadly in keeping with Asimov’s laws,” says Christoph Salge, another scientist involved in the study.

The concept the team developed is called Empowerment. Rather than trying to make a machine understand complex ethical questions, it is based on robots always seeking to keep their options open. “Empowerment means being in a state where you have the greatest potential influence on the world you can perceive,” explains Salge. “So, for a simple robot, this might be getting safely back to its power station, and not getting stuck, which would limit its options for movement. For a more futuristic, human-like robot this would not just include movement, but could incorporate a variety of parameters, resulting in more human-like drives.”
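
In the underlying formalism, empowerment is the channel capacity between an agent’s actions and its subsequent sensor states; for deterministic dynamics this reduces to the logarithm of the number of distinct states the agent can reach. A minimal sketch for a toy gridworld (an illustration, not the team’s code):

```python
# Toy illustration of empowerment: with deterministic dynamics, the n-step
# empowerment of a state is log2 of how many distinct states are reachable
# in n actions. The gridworld, horizon, and size are illustrative assumptions.
from itertools import product
from math import log2

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]   # up, down, right, left, stay

def step(state, action, size=5):
    """Deterministic gridworld transition; walls clamp movement."""
    x, y = state
    dx, dy = action
    return (min(max(x + dx, 0), size - 1), min(max(y + dy, 0), size - 1))

def empowerment(state, horizon=3, size=5):
    """log2 of the number of distinct states reachable within `horizon` steps."""
    reachable = set()
    for seq in product(ACTIONS, repeat=horizon):
        s = state
        for a in seq:
            s = step(s, a, size)
        reachable.add(s)
    return log2(len(reachable))

print(empowerment((0, 0)), empowerment((2, 2)))        # corner < centre
```

A robot that maximises this quantity naturally avoids getting stuck or cornered, which is the “keeping options open” behaviour described above.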

The team mathematically coded the Empowerment concept, so that it can be adopted by a robot. While the researchers originally developed the Empowerment concept in 2005, in a recent key development, they expanded the concept so that the robot also seeks to maintain a human’s Empowerment. “We wanted the robot to see the world through the eyes of the human with which it interacts,” explains Polani. “Keeping the human safe consists of the robot acting to increase the human’s own Empowerment.”

“In a dangerous situation, the robot would try to keep the human alive and free from injury,” says Salge. “We don’t want to be oppressively protected by robots to minimize any chance of harm, we want to live in a world where robots maintain our Empowerment.”

This altruistic Empowerment concept could power robots that adhere to the spirit of Asimov’s three laws, from self-driving cars, to robot butlers. “Ultimately, I think that Empowerment might form an important part of the overall ethical behaviour of robots,” says Salge.