
DeepSeek: Five things business and technology leaders need to know

DeepSeek has sent shockwaves across the business and technology worlds. Amid headlines about geopolitical tensions and collapsing share prices, knowing precisely what DeepSeek’s new AI models mean for businesses — from long-term AI strategy to day-to-day technology experimentation — is challenging. 


So, to give technology and business leaders some grounding, we’ve answered five key questions that can help you move forward, however the news and hype cycle evolves in the weeks and months to come.


Why is DeepSeek in the news?


DeepSeek is a Chinese startup that released two new AI models — DeepSeek-R1 and DeepSeek-R1-Zero — on January 20, 2025. It’s in the news because the models’ performance appears to match that of rivals such as Llama, Gemini, Claude and OpenAI’s o1 “reasoning” model. This is despite the models reportedly being trained on NVIDIA chips that are less advanced than the top-tier chips used by established vendors. (NVIDIA designed those chips to comply with US government regulations on which chips can be exported to China — they have reduced interconnect speed, a limitation DeepSeek’s engineers mitigated through considerable ingenuity in their code.)


There have been significant consequences. NVIDIA’s market value has dropped by almost $600 billion, and the US tech industry more broadly has been left reeling at a Chinese player apparently beating them at their own game — despite only having access to ostensibly inferior hardware.


Is DeepSeek going to lower the cost of using AI for businesses?


DeepSeek-R1 comes in several smaller 'distilled' sizes and can be run on commodity hardware. That matters: the ability to run a model that matches the performance of OpenAI’s o1 on your own infrastructure — instead of being beholden to third-party API costs — is a big deal. It’s especially important if you’re trying to do advanced things like agentic AI, where the AI may need many cycles to get a job done successfully.


Although precisely how much cheaper it is remains hard to determine, DeepSeek’s models are believed to be 20 to 50 times cheaper to train and run than OpenAI’s (some industry voices have disputed this, but there are strong indications the claims are true). In theory, then, this should make AI much cheaper for businesses: not only is the foundation model itself cheaper to train, but using and running the model — in, say, an application — is cheaper too.


However, this is just an assumption; plenty of questions remain. For instance, DeepSeek’s cheaper infrastructure may come with tradeoffs yet to be identified. Even more importantly, it’s worth bearing in mind what’s known as the Jevons paradox: efficiency gains lower the cost of each use, but rather than reducing overall spend as you might expect, they tend to increase demand, which in turn offsets the drop in price.

What DeepSeek appears to have achieved will likely encourage greater focus on efficiency — doing more with less.
Mike Mason
Chief AI Officer, Thoughtworks

Will DeepSeek reduce energy consumption?


DeepSeek’s models suggest an incredibly high standard of performance can be unlocked without the scale of electricity other established models require. Consequently, many power companies took a hit to their share price. One potential benefit, though, is that it could help drive the adoption of green computing, in which the environmental impacts of computing are addressed through greater efficiency. (That said, this may lead us back to the Jevons paradox, where energy consumption rises as efficiency gains are realized.)


What this means for the likes of OpenAI and Google remains to be seen. These companies have planned huge investments in data centers and resources in the years to come: if DeepSeek really has proven you can do more with less, perhaps we will see them pivot. While that’s currently just speculation, there’s no doubt DeepSeek is forcing the industry to rethink how much energy is required to build and then run an effective AI system.


Will this spur another wave of AI innovation?


What DeepSeek appears to have achieved will likely encourage greater focus on efficiency — doing more with less. The challenges in the field have typically been framed in terms of scale: more computing power, more intensive model training, bigger models. Perhaps one of the biggest lessons of DeepSeek is that there are ways of innovating in AI that require not greater scale but ingenuity and optimization.


It’s also worth noting that DeepSeek-R1 is what we like to call “open-ish” — open, but not quite fulfilling the strict requirements to be called open source. This means it can be adapted and used in ways that proprietary systems cannot, arguably challenging the current dominance of proprietary models. Combined with decreased costs, this potentially opens the door for a whole new set of companies to consider building their own models.


What should my next steps be?


The AI landscape moves so fast that advancements like this will keep happening. That’s why it’s vital to make sure your experimentation pipeline and your process for evaluating tools are agile enough to adapt to change. You never know: we could get yet another new model from a different vendor next week.


Right now, though, there are many potential use cases worth exploring, from building a simple chat application to leveraging it for coding. It’s undeniably powerful, so see what you can do with it. Andrew Ng noted on LinkedIn that “the foundation model layer being hyper-competitive is great for people building applications” — this is certainly an exciting time for organizations seeking to bring generative AI into production environments. 


That all being said, it’s nevertheless important to be mindful of the privacy risks. While that’s true whatever AI model you use, some security and privacy experts have urged particular caution with DeepSeek, expressing concern about how the Chinese government may be able to leverage DeepSeek data.


At Thoughtworks, we’re excited to experiment with DeepSeek and encourage the wider industry to continually evaluate and share the value they get from it. That’s ultimately how we learn and innovate. Most importantly, it will help us continue to deliver more value for customers.
