New Delhi, 5 April (IANS). Amid the growing wave of Artificial Intelligence (AI), Elon Musk's 'Grok' and China's 'DeepSeek' models now lead in AI capability, according to a report released on Saturday. Where one model stands out for accessibility and efficiency, the other stands out for sheer scale. However, the training resources behind the two models differ enormously.
According to Counterpoint Research, Grok-3 makes no compromise on scale: it is backed by 200,000 Nvidia H100 Tensor Core GPUs. At the same time, DeepSeek-R1 is challenging it using only a small fraction of that compute, showing that innovative architectures and careful curation can compete with massive infrastructure.
In February, DeepSeek began making headlines globally by open-sourcing its flagship reasoning model, DeepSeek-R1. The model delivers performance on par with the world's frontier reasoning models.
Wei Sun, chief AI analyst at Counterpoint, said, "Its specialty is not only its excellent capabilities, but also the fact that it was trained using only two thousand Nvidia H800 GPUs, a smaller, export-compliant version of the H100, which makes its achievement a masterclass in efficiency."
Musk's xAI introduced Grok-3, its most advanced model yet. The model delivers better performance than DeepSeek-R1, OpenAI's GPT-o1 and Google's Gemini 2.
Sun said, "Unlike DeepSeek-R1, Grok-3 was trained using 200,000 H100 GPUs on xAI's Colossus supercomputer."
Grok-3 symbolizes the large-scale strategy, in which massive compute (representing billions of dollars in GPU costs) drives incremental performance gains.
This is a route that only the richest technology giants or governments can realistically pursue.
Sun said, "Conversely, DeepSeek-R1, by leveraging techniques like mixture-of-experts (MoE) and reinforcement learning for reasoning, together with curated, high-quality data, shows the power of algorithmic efficiency in achieving comparable results with a fraction of the compute."
Grok-3 proves that applying 100 times more GPUs can deliver quick performance gains, but it also highlights rapidly diminishing returns on investment (ROI).
The report stated that DeepSeek-R1 stands for achieving excellent performance with minimal hardware overhead, while Grok-3 stands for breaking through by whatever computational means are necessary.
-IANS
SKT/Ekde