Latest
|
Macro Economy
Latest
|
Consumer Finance
AI
|
Latest LLMs
CX/CS
|
Fintech
Latest
|
AI Infrastructure
Enterprise
|
ROI of AI
AI
|
Ethics & Safety
Latest
|
Politics & Policy
AI
|
Enterprise AI
AI
|
Big AI
Latest
|
Consumer Banking
Latest
|
Fintech Funding
AI
|
AI in Fintech
CX/CS
|
Fintech
AI
|
Health Tech
AI
|
AI Governance
Latest
|
LLMs
Latest
|
Fintech
AI
|
Open Source
AI
|
AI Security
Enterprise
|
Cloud Security
Latest
|
Macro Economy
Enterprise
|
Enterprise Solutions
AI
|
GRC
AI
|
AII Adoption
AI
|
AI Ethics
AI
|
Healthtech
CX/CS
|
AI in CX
AI
|
Quantum Computing
AI
|
Cybersecurity
Latest
|
Healthtech
CX/CS
|
AI Adoption
AI
|
AI
AI
|
Safety and Compliance
Latest
|
Big Tech
AI
|
Consumer Tech
AI
|
AI Ethics and Risks
CX/CS
|
AI
Enterprise
|
Data and Privacy
Latest
|
LLMs
Latest
|
Banking and Blockchain
AI
|
Healthtech
Enterprise
|
AI in the Enterprise
AI
|
AI Risk and Compliance
AI
|
AI Arms Race
Enterprise
|
AI
Latest
|
LLMs
CX/CS
|
Compliance
CX/CS
|
Great CX
CX/CS
|
CS in Blockchain
AI
|
AI News
Enterprise
|
AI
|
CX/CS
|
CX/CS
|
AI
|
CX/CS
|
AI
|
AI
|
Enterprise
|
AI
|
CX/CS
|
CX/CS
|
Enterprise
|
Enterprise
|
The price of easily accessible AI could be your data, experts warn
.jpg)
⋅
April 10, 2025

Key Points
- DeepSeek offers cost efficiency and energy savings compared to larger AI models like GPT-4, but raises more pressing security concerns.
- Expert Jun Seki, CTO and Entrepreneur-in-Residence at Antler, warns that AI can be weaponized for attacks, potentially exposing sensitive data without user awareness.
But here’s the thing—who’s auditing this? Open-source is cool, but what about data governance and security risks—especially with a China-backed model?
Jun Seki
CTO and EiR | Antler
DeepSeek is making waves in the AI world, boasting impressive cost efficiency and energy savings compared to larger models like OpenAI's GPT-4. While its (alleged) training cost of $5.6 million is a fraction of GPT-4's $80 million, and its ability to run locally seems like a win for privacy, experts are raising red flags about its potential security risks.
We spoke with Jun Seki, CTO and Entrepreneur-in-Residence at Antler about the inherent risks of DeepSeek and the information AI prompts often expose.
Questioning DeepSeek: "It's proving you can build powerful AI without blowing millions on GPUs," says Seki. "But here's the thing—who's auditing this? Open-source is cool, but what about data governance and security risks—especially with a China-backed model?"
Seki actively hosts conversations around AI security on his LinkedIn newsletter "AI Prompt Security" and on Ragas. A leader in AI-driven cybersecurity, he specializes in building scalable, secure platforms, protecting business-critical data, and driving innovation in AI governance.
Prompt attacks: Seki warns that the security implications of AI tools like DeepSeek go beyond cost-cutting. AI can be weaponized to automate attacks such as vulnerability scanning or prompt injections. This could lead to sensitive data being exposed without users even realizing it. "Imagine your AI accidentally leaking sensitive info just because someone knew how to game the system," Seki says.
Imagine your AI accidentally leaking sensitive info just because someone knew how to game the system.
Jun Seki
CTO and EiR | Antler
Business exposure: For heavily regulated industries such as finance or healthcare, leaking business information to AI models can have serious consequences. Personally identifiable information (PII) or sensitive business data could be exposed and used for malicious purposes. According to Seki, once sensitive data enters a large language model, it becomes nearly impossible to remove.
He highlights the importance of anonymizing and sanitizing client data before sharing it with AI systems. "Imagine a wealth management professional asking an AI tool for portfolio advice. If they casually share their client's name, address, and investment details, they're exposing themselves to regulatory fines—and that information could potentially be used in a social engineering attack," Seki says. He is currently working on a solution that focuses on anonymizing sensitive information while retaining the essence of messages, ensuring that businesses can leverage AI safely.
Behind the curtain: Looking ahead, Seki predicts that U.S. startups will race to replicate DeepSeek's cost efficiency in 2025. But he cautions against sacrificing security for speed. "The real winners will be companies that balance efficiency with trust," he says.
He encourages businesses to carefully evaluate AI models and understand where their data is hosted and who has access to it. "You need to dig deep and look behind the curtain," Seki advises. "If you're building a recipe app, DeepSeek might be perfect for you. But if you're handling sensitive business data, you could be opening Pandora's box."