Amazon Makes Big, Bold Push Into AI — Leaves Experts Amazed

Amazon launches full speed into AI with several generative AI models, a new AI computer chip, plans for a new supercomputer and big partnership deals.

Amazon’s stock continues to edge up as investors digest the potential impact of the barrage of AI-related announcements the company made on Wednesday, announcements that touch nearly every area of the AI landscape and nearly every competitor in it.

The company that disrupted bookselling, retail, grocery stores, pharmacies, video streaming, home delivery, cloud-based infrastructure, movie production and beyond is now focused on AI, seemingly with a vengeance.

Following the recent news of its plans to double its investment in Anthropic to $8 billion, Amazon also unveiled a series of blockbuster announcements including the launch of its Trainium2 chips — with Trainium3 waiting in the wings — specifically designed for the heavy compute demands of AI. The chip line will be a direct competitor to Nvidia and AMD, leaders in the graphics processing unit space.

Amazon also unveiled plans to develop a massive supercomputer, dubbed Project Rainier, to meet its AI training and computational needs, one that will rival Elon Musk’s Cortex and Colossus AI supercomputer complexes.

Additionally, it launched six foundational large language models under its Nova umbrella to compete with ChatGPT and Gemini. The new LLM offerings will be priced as much as 75% lower than current models on the market, will work in more than 200 languages and, in some cases, will be multimodal, meaning they can produce text, images or video from a single AI platform.

“We are launching these models now because they are ready and we are excited to share them with customers. We’re building the world’s most useful AI and excited to tell the world about it,” wrote an Amazon spokesperson in an email exchange.

“Training foundational models is costly, but we are fortunate to be able to be in a position to train and deploy the Nova models at scale to benefit our customers. Our goal is to provide a diverse range of options, recognizing that different customers have different needs. We’ll continue investing in both our own models and partnerships, ensuring developers have access to a variety of tools for building and deploying AI applications,” the spokesperson added.

How Amazon Fights Hallucinations In Its AI Models

When asked what steps Amazon has taken to mitigate some of the risks associated with generative AI, specifically hallucinations, in which a model confidently invents information when it lacks actual facts, the spokesperson noted that hallucinations are not unique to Amazon’s AI, but that Amazon is taking a multi-pronged approach to addressing the risk.

“Amazon Science recently launched RefChecker, a new tool to detect hallucinations, and a benchmark dataset for assessing them in various contexts. AWS services like Amazon Kendra can help reduce hallucinations by augmenting LLMs to provide more accurate and verifiable information to the end user. For Amazon Q, we help customers understand how a specific answer was derived by providing citations and links to source material, so customers can make an informed assessment of its outputs,” the spokesperson explained.
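
As a rough illustration of the retrieval-and-citation approach the spokesperson describes, the sketch below grounds a model’s answer in retrieved passages and returns source links alongside it. It is not Amazon’s implementation: the toy corpus, the keyword retriever and the call_llm stub are hypothetical stand-ins for whatever search index (such as an enterprise retrieval service) and model API a team actually uses.

# Illustrative sketch only: a toy retrieval-augmented answer with citations.
# The corpus, retriever and call_llm stub are hypothetical stand-ins,
# not Amazon Kendra, Amazon Q or any real AWS API.

from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str  # identifier of the trusted source document
    url: str     # link surfaced to the user as a citation
    text: str    # snippet the model is told to rely on

# Tiny in-memory "index" standing in for a real enterprise search service.
CORPUS = [
    Passage("hr-001", "https://example.com/hr/pto",
            "Employees accrue 1.5 vacation days per month."),
    Passage("it-042", "https://example.com/it/vpn",
            "The VPN client must be updated every 90 days."),
]

def retrieve_passages(question: str, top_k: int = 2) -> list[Passage]:
    """Rank passages by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(CORPUS,
                    key=lambda p: -len(q_words & set(p.text.lower().split())))
    return scored[:top_k]

def call_llm(prompt: str) -> str:
    """Stub for a model call; swap in a real LLM API here."""
    return "(model answer constrained to the numbered sources above)"

def answer_with_citations(question: str) -> dict:
    passages = retrieve_passages(question)
    # Ground the model in retrieved text and require numbered citations,
    # so the user can check the answer against the sources.
    sources = "\n".join(f"[{i + 1}] {p.text}" for i, p in enumerate(passages))
    prompt = (
        "Answer using ONLY the numbered sources. Cite them as [n]. "
        "If they do not contain the answer, say you don't know.\n\n"
        f"{sources}\n\nQuestion: {question}\nAnswer:"
    )
    return {
        "answer": call_llm(prompt),
        "citations": [{"doc_id": p.doc_id, "url": p.url} for p in passages],
    }

print(answer_with_citations("How many vacation days do employees accrue?"))

Returning the citations alongside the answer is what allows a user to make the kind of informed assessment the spokesperson mentions, rather than taking the model’s output on faith.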

Amazon’s AI Ambitions Impress The Experts

Ben Torben-Nielsen, Ph.D., MBA, is an internationally recognized AI consultant for businesses. He holds two machine learning patents and has published more than 40 peer-reviewed articles on AI. He wrote in an email that he was most impressed with the number and quality of Amazon’s Nova generative AI models.

“So far, AWS has largely been a broker between its cloud infrastructure and other [LLM] providers. Now they are releasing an entire line of text, image, video and multi-modal models. Is this a Netflix moment? Does AWS think the content providers — in their case the LLM providers — have too much power and eat their margin, and therefore they start making ‘their own content’ in the form of foundational models?” he wrote.

Conor Grennan is chief AI architect at NYU Stern School of Business as well as CEO and founder of the consultancy AI Mindset. He noted that he was surprised that Amazon was able to keep its foundational models under wraps for so long but believes its chips will have the most lasting impact.

“Amazon’s Trainium2 chips showing 30-40% better price performance than current GPU-based instances represents a serious challenge to Nvidia’s dominance. With both Apple and Anthropic committing to use these chips for AI training, AWS is positioning itself as a viable alternative in the AI infrastructure space. This space can use good competition,” he wrote in an email exchange.

Ahmed Banafa, Ph.D., is a technology expert and engineering professor at San Jose State University. He noted in an email that the Rainier supercomputer announcement was what stunned him most.

“The standout surprise was Amazon’s decision to unveil an AI supercomputer built to rival Nvidia even though AWS is also working on Project Ceiba in partnership with Nvidia. While AWS has long been a dominant force in cloud computing, this marks a strategic shift toward deeper vertical integration. By moving into specialized hardware, Amazon is signaling its intent to not just host the future of AI but to shape it,” he explained.

He added that Amazon’s AWS division is sending a clear message to the entire industry.

“AWS is no longer satisfied being a cloud platform. It’s aiming to be an end-to-end AI powerhouse. This move will likely force competitors to accelerate their own strategies, setting the stage for a new era of innovation. At the same time, it may prompt regulators to scrutinize the growing concentration of AI capabilities within a few dominant firms,” Banafa concluded.

Grennan added that Amazon’s strategy is a blend of bold AI ambition and cautious fast-following.

“They’re looking to be a one-stop shop for AI, building out complete solutions where they can produce and own the best of all worlds and not rely on competitors. Amazon has also invested $8 billion in Anthropic, and with good reason: Claude is arguably the best model out there, and Amazon has a need for a great model — especially with Alexa in every home. Anthropic will also use AWS cloud services and Amazon chips. But we’re seeing that with Nova, Amazon is hedging against overreliance on Anthropic to be their LLM of choice,” Grennan concluded.

Amazon Says Its AI Engines Are Just Getting Started

Even after Amazon’s avalanche of AI news, the spokesperson said this is just the beginning for the company.

“Amazon is in this for the long term. We believe Amazon is in the best possible position to bring the benefits of gen AI to every household and every business in the world,” the spokesperson noted.

“In 2025, we will bring even more capable models while driving cost down, through algorithmic and computing innovations. And we will continue to invest in the talent, technology, and infrastructure necessary to offer world class models to our customers for years to come,” they concluded.
