
Open source LLMs are on a fast track toward enterprise viability

by Lorikeet News Desk | published February 16, 2025

Credit: Google DeepMind

Key Points

  • DeepSeek is challenging the AI industry by prioritizing cost-effective models over sheer size and power.
  • Sanjay Basu, Sr. Director of GPU & Gen AI Solutions & Services at a Fortune 100 cloud provider, says enterprises will accelerate adoption of safe, compliant, and cost-effective open-source models.


DeepSeek is showing that leading in AI is no longer just about computational power and efficiency; the burning question now is who can create the best AI at the lowest price. This shift is turning the AI world upside down and restructuring how AI leaders build their latest tools.

To understand how this shift will shape AI model releases in the U.S. in 2025, we spoke with Sanjay Basu, Senior Director of GPU & Gen AI Solutions & Services at a Fortune 100 cloud provider.

Price Battles: "The AI arms race isn’t just about who builds the biggest, smartest model anymore—it’s also about who can deliver the best bang for the buck. As we head into 2025, we’re going to see AI model releases in the U.S. shaped by two competing forces: performance and cost," Basu explains.

While tech giants like OpenAI and Google DeepMind push the boundaries with frontier models such as GPT-5, emerging players like DeepSeek and Mistral are proving that smaller models can still deliver impressive reasoning capabilities without the hefty infrastructure demands. This price-performance dynamic is set to redefine priorities for AI development.


Open Source Momentum: Open-source is having its own moment thanks to DeepSeek, prompting businesses to ask which models best fit their operations and budgets.

"With open-source models getting more sophisticated and inference costs becoming a bigger pain point, enterprises will increasingly demand AI that is not just powerful, but also economically viable. That means the days of paying a premium for proprietary models might be numbered—especially if smaller, open models can be fine-tuned to match performance in specialized tasks," Basu notes.
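That cost calculus can be made concrete with a back-of-the-envelope break-even comparison between a metered hosted API and a self-hosted open model. The sketch below is purely illustrative: every rate and throughput figure is an assumed placeholder, not a real vendor price.

```python
# Illustrative break-even sketch: metered hosted API vs. self-hosted open model.
# All numbers below are hypothetical placeholders, NOT real vendor rates.

API_PRICE_PER_1K_TOKENS = 0.01           # assumed hosted-API rate (USD per 1K tokens)
GPU_COST_PER_HOUR = 2.50                 # assumed cloud GPU rental rate (USD)
TOKENS_PER_HOUR_SELF_HOSTED = 1_500_000  # assumed sustained serving throughput

def monthly_cost_api(tokens_per_month: float) -> float:
    """Cost of serving a monthly token volume via a metered hosted API."""
    return tokens_per_month / 1000 * API_PRICE_PER_1K_TOKENS

def monthly_cost_self_hosted(tokens_per_month: float) -> float:
    """Cost of renting enough GPU-hours to serve the same monthly volume."""
    gpu_hours = tokens_per_month / TOKENS_PER_HOUR_SELF_HOSTED
    return gpu_hours * GPU_COST_PER_HOUR

if __name__ == "__main__":
    for tokens in (10_000_000, 100_000_000, 1_000_000_000):
        api = monthly_cost_api(tokens)
        self_hosted = monthly_cost_self_hosted(tokens)
        print(f"{tokens:>13,} tokens/mo: API ${api:,.0f} vs self-hosted ${self_hosted:,.0f}")
```

Under these assumed numbers, self-hosting pulls ahead as volume grows, which is the dynamic Basu describes: at enterprise scale, metered pricing becomes the pain point that makes fine-tuned open models attractive.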

Market Shifts: "But that doesn't mean the AI giants are in trouble," Basu clarifies. "Instead, we're going to see a shift where differentiation happens not just on raw intelligence, but on things like customization, multimodal capabilities, and regulatory alignment. Expect more efficient, cost-conscious AI releases and a heightened focus on on-premise, edge, and hybrid-cloud solutions that minimize expensive cloud inference costs."

U.S. Regulations: Regulatory changes will also significantly influence model development. "U.S. regulations will play their part. With increasing scrutiny on AI safety, expect model releases to be packed with alignment safeguards, content filtering, and transparency features—not just because it’s the right thing to do, but because failing to do so will mean legal headaches and trust issues," Basu emphasizes.

One more thing

We know AI chatbots that read your help center and summarize answers back to your customers are a dime a dozen. The underlying technology is a commodity.

In fact, we believe this so strongly, we'll handle 100,000 FAQ lookup tickets for free.

Join free waitlist