
After the release of DeepSeek’s R1, a reasoning LLM that matches the performance of OpenAI’s latest o1 model, trade media is abuzz with speculation about the future of artificial intelligence (AI). Has the AI bubble burst? Is it the end of Nvidia’s spectacular AI ride?
EE Times’ Sally Ward-Foxton takes a closer look at the engineering-centric aspects of this talk of the town, explaining how DeepSeek tinkered with AI models as well as interconnect bandwidth and memory footprint. She also provides a detailed account of the Nvidia chips used in this AI head-turner and what it means for Nvidia’s future.
Read the full story at EDN’s sister publication, EE Times.
Related Content
- Transform Your Business With AI
- AI at the edge: It’s just getting started
- Nvidia GTC 2024: Why Nvidia Dominates AI
- Is China Poised to Surpass the U.S. in Innovative AI Models?
- Project Stargate: Trump Announces $500 Billion of U.S. AI Infrastructure