Rising costs of AI: is it actually a profitable business?
AI is currently a significant trend, with millions of users accessing ChatGPT daily. Given this demand, it is unsurprising that ChatGPT incurs substantial operational costs: running the service is estimated to cost OpenAI about $700,000 per day, or roughly 36 cents per query. Despite skyrocketing interest from businesses and individuals, a largely free-to-use business model makes it challenging for OpenAI to turn that demand into profit. These expenses are compounded by the need for AI companies, including Microsoft, to purchase GPUs in large quantities from manufacturers such as NVIDIA. To support its commercial ambitions, OpenAI is estimated to need an additional 30,000 NVIDIA GPUs this year, on top of the 10,000-15,000 GPUs currently in use.
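As a rough sanity check on those numbers, a short back-of-envelope calculation shows what they imply. This is only a sketch: both inputs are third-party estimates cited above, not figures disclosed by OpenAI.

```python
# Back-of-envelope check on the cost estimates cited above.
# Both inputs are third-party estimates, not disclosed figures.
DAILY_COST_USD = 700_000   # estimated daily cost of running ChatGPT
COST_PER_QUERY_USD = 0.36  # estimated cost per query

implied_queries_per_day = DAILY_COST_USD / COST_PER_QUERY_USD
annualized_cost_usd = DAILY_COST_USD * 365

print(f"Implied queries per day: {implied_queries_per_day:,.0f}")  # ~1.9 million
print(f"Annualized running cost: ${annualized_cost_usd:,.0f}")     # ~$255 million
```

Taken at face value, the estimates imply on the order of two million queries a day and an annual compute bill in the hundreds of millions of dollars, which is why the hardware costs discussed below matter so much.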
Nvidia's upcoming cutting-edge technology!
Yesterday, Nvidia announced its GH200 super chip, designed to meet the growing demand for running large AI models and to help contain their substantial costs. The GH200 pairs the same GPU as Nvidia's current top-tier AI chip, the H100, with triple the memory capacity. This enhancement matters for generative AI applications such as ChatGPT, which require substantial computational power and memory: even with Nvidia's H100 chips, some models must be split across multiple GPUs. The GH200 aims to address this challenge, with Nvidia's CEO, Jensen Huang, emphasizing its design for scaling out the world's data centers. The chip is set to be available in the second quarter of 2024, and while its price remains undisclosed, the current H100 line sells for around $40,000 per chip.
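To see why memory capacity is the headline feature, consider a rough sizing sketch. The numbers here are illustrative assumptions, not vendor specifications: a hypothetical large model stored in 16-bit precision, an 80 GB card standing in for the H100 class, and a triple-capacity card per the article's description of the GH200. Real deployments also need memory for activations and inference caches, so actual GPU counts would be higher.

```python
import math

# Rough sketch: how many GPUs are needed just to hold a model's weights?
# All figures are illustrative assumptions, not vendor specifications.
BYTES_PER_PARAM = 2  # 16-bit (fp16/bf16) weights

def gpus_needed(params_billion: float, gpu_memory_gb: float) -> int:
    """Minimum GPUs required to fit the raw weights alone."""
    weight_gb = params_billion * BYTES_PER_PARAM  # 1e9 params * 2 B = 2 GB per billion
    return math.ceil(weight_gb / gpu_memory_gb)

model_size_b = 175  # hypothetical 175B-parameter model (~350 GB of weights)
print(gpus_needed(model_size_b, 80))      # 80 GB card: 5 GPUs
print(gpus_needed(model_size_b, 3 * 80))  # triple the memory: 2 GPUs
```

Under these assumptions, tripling per-GPU memory cuts the number of chips a model must be sharded across by more than half, which is exactly the cost pressure the GH200 is pitched at.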
Nvidia market share
Microsoft and Nvidia have collaborated on building new supercomputers, even as Microsoft reportedly explores manufacturing its own AI chips. However, Nvidia's near-monopoly in the AI-capable GPU market, with an estimated 80% share, may face challenges. Cloud providers including AWS, Azure, and Google mostly rely on Nvidia's H100 Tensor Core GPUs today, but they are also building out in-house alternatives. Nvidia's dominance is further threatened by competitors such as AMD, which plans to ramp up production of its AI GPU later this year, while tech giants like Google and Amazon are designing custom AI chips of their own. The competitive landscape suggests that although Nvidia remains the dominant player, the AI hardware space is evolving rapidly as new entrants crowd in.