
NVIDIA's Huang expected to reveal details of new AI chips


NVIDIA CEO Jensen Huang is expected to reveal new details about the company's upcoming AI chips at its annual software developer conference on Tuesday.

NVIDIA is aiming to introduce a new flagship chip every year, but has so far run into both internal and external obstacles.

In the second half of last year, Huang hinted that the company's next flagship product would be named Rubin and would consist of a family of chips – a graphics processing unit, a central processing unit and networking chips.

See also: China's Baidu launches two free AI models to compete with DeepSeek

These chips, designed to work in the large data centers that train AI systems, are expected to go into production this year and to ship in large volumes in 2026.

The company's current flagship chip, called Blackwell, rolled out more slowly than expected after design flaws caused manufacturing problems.

Over the past year, the broader AI industry has also grappled with delays, as the earlier approach of feeding ever more data into ever larger data centers filled with NVIDIA chips began to show diminishing returns.

Data center chips brought in $130 billion in sales in 2024

NVIDIA's shares have more than tripled in value over the past three years as the company's chips power advanced AI systems such as ChatGPT, Claude and many others.

Much of that success stems from the decade the Santa Clara, California-based company has spent building software tools to attract AI researchers and developers. But it is NVIDIA's data center chips, which sell for tens of thousands of dollars each, that accounted for the majority of its $130.5 billion in sales last year.

NVIDIA's shares have struggled this year after Chinese startup DeepSeek claimed it could produce competitive AI chatbots with less computing power – and therefore fewer NVIDIA chips – than earlier models.

Huang has fired back, arguing that new AI models that spend more time thinking about their answers will make NVIDIA's chips even more important, because they are the fastest at generating “tokens”, the basic units of AI output.

“When ChatGPT first came out, the token generation rate only needed to be as fast as you could read,” Huang told Reuters last month.

“But now the token generation rate is how fast the AI can read to itself, because it is thinking to itself, and AI can think faster than you because it has to generate many possible futures before giving you the right answer.”

  • Reuters, with additional editing by Jim Pollard

See also:

Server fraud cases in Singapore may be related to Chinese AI chips

New CEO could block Intel Split, TSMC plans to run Fabs

Everyone who doubts AI outlays still needs NVIDIA chips

DeepSeek Breakthrough or Theft? US investigates “AI data breach”

Technology sell-off expands to Japan as DeepSeek focuses on AI costs

China's DeepSeek rocks US tech giants with “AI breakthrough”

SoftBank, UAE join OpenAI's $50 billion data center agreement

Biden limits access to AI chips to US companies and their allies

US rules restrict investment in Chinese chips, quantum and AI

Jim Pollard

Jim Pollard is an Australian journalist who has been based in Thailand since 1999. He worked for News Ltd in Sydney, Perth, London and Melbourne before travelling through SE Asia in the late 1990s, and has been a senior editor for 17 years.
