
New features and savings woo devs, but OpenAI isn’t the LLM victor quite yet

Lorikeet News Desk

April 10, 2025

Image credit: openai.com

Key Points

  • OpenAI's DevDay 2024 introduced Vision Fine-Tuning, the Realtime API, Model Distillation, and Prompt Caching
  • Prompt Caching cuts the cost of recently processed input tokens by 50%
  • OpenAI has reduced API access costs by 99% over two years, with over 3 million developers using its AI models
  • But engineers are still using other LLMs to keep their options open

While OpenAI is a key player, I wouldn’t say they’ve secured the top spot.

Matthew Grohotolski

Lead Data Scientist | Nearly Human AI

Event recap: OpenAI's DevDay 2024 unveiled four key innovations—Vision Fine-Tuning, Realtime API, Model Distillation, and Prompt Caching—designed to make AI tools more accessible and cost-effective for developers.

Key innovations:

  • Prompt Caching: Offers a 50% discount on input tokens recently processed by the model, reducing costs and improving latency.
  • Vision Fine-Tuning: Enabled Grab to achieve a 20% improvement in lane count accuracy and a 13% boost in speed limit sign localization using just 100 examples.
  • Realtime API: Priced at $0.06 per minute of audio input and $0.24 per minute of audio output, it offers six distinct voices but avoids third-party voices to sidestep copyright issues (a rough cost sketch follows this list).
  • Model Distillation: Allows developers to use outputs from advanced models to enhance the performance of more efficient models, making sophisticated AI capabilities more accessible.
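
For a sense of what the quoted prices mean in practice, here is a minimal back-of-envelope sketch in Python. It uses only the figures above ($0.06 and $0.24 per minute for the Realtime API, and the 50% discount on cached input tokens); the session length, token counts, and the per-million-token input price are made-up illustration values, not OpenAI benchmarks.

    # Rough cost math using the prices quoted above.
    # The per-minute Realtime prices and the 50% cache discount come from the article;
    # the session length, token counts, and per-million-token price are illustrative only.

    REALTIME_INPUT_PER_MIN = 0.06   # $ per minute of audio input
    REALTIME_OUTPUT_PER_MIN = 0.24  # $ per minute of audio output
    CACHE_DISCOUNT = 0.50           # 50% off recently cached input tokens

    def realtime_session_cost(input_minutes: float, output_minutes: float) -> float:
        """Estimated cost of one Realtime API voice session."""
        return (input_minutes * REALTIME_INPUT_PER_MIN
                + output_minutes * REALTIME_OUTPUT_PER_MIN)

    def cached_prompt_cost(input_tokens: int, cached_tokens: int, price_per_1m: float) -> float:
        """Input-token cost when part of a prompt hits the cache.

        Cached tokens are billed at half price; the remainder at full price.
        price_per_1m is the model's normal input price per 1M tokens (assumed value).
        """
        uncached = input_tokens - cached_tokens
        return (uncached * price_per_1m
                + cached_tokens * price_per_1m * (1 - CACHE_DISCOUNT)) / 1_000_000

    # Example: a 10-minute voice session split evenly between caller and agent,
    # plus a 4,000-token system prompt of which 3,000 tokens were recently cached.
    print(f"Realtime session: ${realtime_session_cost(5, 5):.2f}")             # $1.50
    print(f"Cached prompt:    ${cached_prompt_cost(4_000, 3_000, 2.50):.4f}")  # $0.0063 vs $0.0100 uncached

On these made-up numbers the audio minutes dominate the bill, which is why the caching discount matters most for long, repeated text prompts rather than short voice exchanges.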

Focusing on devs: The event marks a strategic pivot towards supporting the developer community rather than focusing on end-user applications. "The pace at which new models and tools are introduced allows us to experiment with solutions that were previously out of reach," said Matthew Grohotolski, Lead Data Scientist at Nearly Human AI.

Lowered cost: OpenAI claims over 3 million developers are building with its AI models. The company has slashed API access costs by 99% over the past two years, with GPT-3 costs reduced by nearly 1000x.

The big picture: The AI landscape is getting increasingly competitive, and OpenAI's continued dominance depends on refining its tools to empower developers and keep them hooked. Brian Tate, CTO at an AI video analytics startup, said, "Headwinds are pretty severe for solely focusing on model development. It is expensive to train models. The tail required to pay back the training is proportional to how much you can get away with charging people to use it once it is production-ready."

"Another key consideration is not having to host a foundational model ourselves," Grohotolski added. But many engineers are open to using other LLMs. "Other players, like Anthropic and Google DeepMind, are still pushing the boundaries," he added.

Picking a winner: "While OpenAI is a key player, I wouldn’t say they’ve secured the top spot," commented Grohotolski. "Contention for dominance in this space involves multiple factors that are critical for us: speed, reliability, answer factuality, and the effectiveness of prompt engineering." Perhaps no real winner will be crowned until AI tools hit cloud computing-level adoption across all companies.

One more thing

We know AI chatbots that read your help center and summarize answers back to your customers are a dime a dozen. The underlying technology is a commodity.

In fact, we believe this so strongly that we'll handle 100,000 FAQ lookup tickets for free.
