The new supercomputers will be operated by Cerebras and used for G42 projects. Any excess capacity will be offered commercially as a service.
For Cerebras, based in Silicon Valley, the new systems provide a showcase that it hopes will lead to wider adoption. The company’s offerings rely on massive chips that are made out of whole silicon wafers — disks that are normally sliced up to create multiple components.
Feldman argues that his processors have the advantage of being able to deal with large data sets in one go, rather than only working on portions of the information at a time. Compared with Nvidia’s processors, they also require less of the complicated software needed to make chips work in concert, he said.
This year, cloud computing providers such as Microsoft Corp. and Amazon.com Inc.’s AWS have been stocking up on Nvidia processors to keep up with runaway demand for OpenAI’s ChatGPT and other generative AI tools. Nvidia has about 80% of the market for the so-called accelerators that help handle these workloads.
With his computing rollout, Feldman aims to demonstrate that the AI explosion won’t just benefit the giant tech companies that can afford big-budget equipment.
Here is the other story, focusing on Tesla's ambition to solve the autonomous driving problem and also to develop humanoid robots. Limited supply and rising demand will inevitably result in substitution. Meanwhile, efforts to contain China's expansion have inflated demand for Nvidia's chips, and that will last only as long as the window of opportunity remains open.