OpenAI’s ChatGPT has dominated the AI chatbot conversation since its 2022 debut. But if you follow the world of AI, you’ve likely seen the name DeepSeek thrown around over the last few weeks. The Chinese large language model claims to trade blows with ChatGPT on speed and accuracy, and, most importantly, it is open source. What’s truly astonishing, though, is R1’s training efficiency. Relying on pure reinforcement learning rather than the supervised fine-tuning behind GPT-4, the model reportedly cost just $12 million to train, versus the $500 million reportedly required for the upcoming GPT-5.
Of course, none of that really matters to the end consumer. What matters is whether it’s any good at its intended purpose. I’ve spent the last couple of days testing DeepSeek R1 as part of my workflow: ideating, coding, running grammar checks, and more. My takeaway? OpenAI needs to be seriously worried.