When will robots with human reasoning arrive?
Nvidia has promised that humanoid robots will soon be able to perform a multitude of automated tasks in controlled environments with little human intervention.


Barcelona. The infrastructure needed for the future development of artificial intelligence (AI) systems was the focus of Nvidia's GTC 2025 developer conference last week. The company, the main driver of the global enthusiasm around AI, argues for a data processing infrastructure far more powerful than the current one. Critics of this high-tech vision, on the other hand, consider it preferable to manufacture less ambitious, cheaper chips and inference models that would create demand for AI services that are useful, reliable, and profitable for both users and providers.
Jensen Huang, co-founder and CEO of Nvidia, states that new types of AI models that reason, in addition to inferring, will generate much more complex responses and will only increase the need for computing power, which will be "easily 100 times greater" than what was previously thought necessary. For Huang, the reading of the R1 model from the Chinese company DeepSeek which suggests that fewer chips and servers will be needed to run the AI software of the future "is completely wrong."
This time, the robots really are coming
According to Huang, global demand for chips specialized in training and running AI models is in a phase of "hyper-accelerated growth," thanks to the creation of "AI agents" capable not only of responding quickly and accurately but also of making reasoned proposals to users. And, he asserts, humanoid robots able to perform a multitude of automated tasks in controlled environments with little human intervention are closer than expected. Installing these humanoids in factories will require a relatively modest investment of around $100,000, which will be easily amortized, according to the Nvidia executive.
This view is not shared by many investors, who are concerned about the high and rising sums that large technology companies allocate to building an increasingly sophisticated infrastructure. In their opinion, that investment will be difficult to amortize, because there will not be enough demand at the prices that would have to be charged to make the numbers work. In fact, Nvidia shares fell 3.4% last Tuesday, during the opening session of the GTC, a sign that Huang's words were not entirely convincing. So far this year, the company's shares have lost 17% of their value.
Investors' doubts began last June, when Nvidia's share price had tripled in less than a year and the company had become the most valuable semiconductor manufacturer in the world, with a market capitalization 30 times that of Intel. Nvidia's shares continued to rise through the fall, until they suffered a sharp drop in January (losing nearly $600 billion in market value in a single day) when the Chinese company DeepSeek unveiled its R1 model, which convinced many that AI could be built without so many chips and data servers, a view that Huang considers completely wrong.
According to a Bloomberg Intelligence report published last Monday, major tech companies and data center operators plan to invest $371 billion in AI infrastructure in 2025, 44% more than last year. By 2032, the report estimates that investment will reach $525 billion, even more than was expected before the emergence of DeepSeek. Huang mentioned in his keynote that the four major cloud computing operators have increased their orders for graphics units this year. It should also be noted that Nvidia, in addition to supplying the chips and much of the software that many AI models run on (with its proprietary CUDA architecture), holds equity stakes in many of the sector's emerging companies. For example, one of the protagonists of the recent AI summit in Paris, the French company Mistral, counts Nvidia among its investors.
More powerful chips: Blackwell Ultra, Rubin, and Rubin Ultra
To meet this growing need for advanced graphics capability, Huang unveiled the successor to the Blackwell chip, which was announced at last year's conference but is only now shipping due to manufacturing issues the company encountered. The Blackwell Ultra is scheduled to ship in the second half of 2025 and will have more memory than the Blackwell. It will be followed in the second half of 2026 by the Rubin chip, featuring a new architecture capable of linking 576 individual graphics units that act as a single chip (the Blackwell links up to 72 units), and by the Rubin Ultra in the second half of 2027, with a further generation to follow in 2028 on the same annual cadence; all of them use Nvidia's proprietary CUDA software architecture.
Each of these new chips will have significantly more memory and run much faster than its predecessor, allowing AI systems to support larger models, respond more intelligently to a larger number of users, and deliver faster responses. For Huang, Nvidia chips are the only ones capable of doing all three. Rapid response, he believes, is essential because people don't want to wait, just as with web and mobile applications.
In addition to this annual sequence of chips, Nvidia announced "personal AI supercomputers" that will include the most advanced Blackwell chips typically found in data centers, allowing developers to work from desktop machines. Another new development is Dynamo, an operating system for AI data centers (which the industry now calls "AI factories," one of which is being built at the BSC in Barcelona), designed to boost the performance of Nvidia chips. Huang's presentation, which lasted more than two hours in front of a packed auditorium of nearly 25,000 spectators, featured a robot inspired by the Star Wars films, the result of an alliance between Google DeepMind and Disney Research to develop an open-source robotics simulation engine.