AMD said that its new MI300X chip and its CDNA architecture were designed for large language models and other cutting-edge AI models. "GPUs are enabling generative AI," Su said.

The MI300X can use up to 192GB of memory, which means it can fit even bigger AI models than other chips; Nvidia's rival H100 supports only 120GB of memory, for example. Large language models for generative AI applications need lots of memory because they run an increasing number of calculations. AMD demoed the MI300X running Falcon, a 40 billion parameter model; OpenAI's GPT-3 model, by comparison, has 175 billion parameters (a rough sizing sketch appears below).

"Model sizes are getting much larger, and you actually need multiple GPUs to run the latest large language models," Su said, noting that with the added memory on AMD chips, developers wouldn't need as many GPUs.

AMD also said it would offer an Infinity Architecture that combines eight of its MI300X accelerators in one system. Nvidia and Google have developed similar systems that combine eight or more GPUs in a single box for AI applications.

One reason AI developers have historically preferred Nvidia chips is its well-developed software package, CUDA, which lets them access the chip's core hardware features. AMD said on Tuesday that it has its own software for its AI chips, which it calls ROCm.
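The article does not name a specific framework, but one common way the CUDA-versus-ROCm question shows up in practice is at the PyTorch level, where ROCm builds expose AMD GPUs through the same `torch.cuda` interface. The sketch below is illustrative only and assumes a ROCm-enabled PyTorch install; it is not drawn from AMD's announcement.

```python
import torch

# On ROCm builds of PyTorch, AMD GPUs are reported through torch.cuda,
# so device-agnostic code like this runs unchanged on either vendor's parts.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(4096, 4096, device=device, dtype=torch.float16)
w = torch.randn(4096, 4096, device=device, dtype=torch.float16)
y = x @ w  # matmul dispatched to rocBLAS on ROCm, cuBLAS on CUDA

if device == "cuda":
    print(torch.cuda.get_device_name(0))
```

Kernel-level code written directly against CUDA is a different story, which is why the maturity of ROCm's tooling matters to developers weighing the two ecosystems.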
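On the memory figures above, a back-of-the-envelope calculation shows why 192GB is the headline number: at 16-bit precision each parameter takes two bytes, so the weights alone of a 40 billion parameter model occupy roughly 80GB. This is a rough sketch that ignores activations and other runtime state, using only the parameter counts quoted in the article.

```python
# Rough sizing of LLM weight memory (illustrative only; weights, not activations).
def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Gigabytes (1e9 bytes) needed just to hold the model weights."""
    return num_params * bytes_per_param / 1e9

# Parameter counts quoted in the article, assuming 16-bit (2-byte) weights.
for name, params in [("Falcon (40B)", 40e9), ("GPT-3 (175B)", 175e9)]:
    print(f"{name}: ~{weight_memory_gb(params):.0f} GB of weights")
# Falcon (40B):  ~80 GB  -> fits within a single 192GB accelerator
# GPT-3 (175B): ~350 GB  -> still needs multiple accelerators, even at 192GB each
```

This is consistent with Su's point: bigger memory per chip reduces, but does not eliminate, the need to spread the largest models across multiple GPUs.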