
When AI watches everything, who watches AI?

Lorikeet News Desk

April 10, 2025

Credit: Outlever

Key Points

  • AI startup CEO Jun Seki discusses the security risks and challenges AI systems can pose, as well as AI's strong potential for improving security practices.

It's funny how in the 21st century a large part of security engineering or IT security roles is just manually clicking buttons, typing usernames, creating passwords, assigning groups.

Jun Seki

Co-Founder and CEO | Stealth Startup

With AI encroaching on so many aspects of our lives, it's natural to be nervous about the implications. Our jobs face potential automation, governments are struggling to build regulatory frameworks, and our security and privacy depend on AI being properly guardrailed. What's being done to ensure we can protect ourselves amid this uncertainty?

Jun Seki is an Entrepreneur in Residence at Antler and CEO of a forthcoming stealth AI startup. He sat down with us to discuss the challenges of establishing true safety within AI systems, and the emerging strategies being developed to counter threats.

Fear of the unknown: At the heart of AI security concerns is a level of mystery surrounding its development. "AI is a black box. It's pre-trained. So how do you evaluate the model, the safety and the response?" asks Seki. "There are so many models being launched, how do you know which models are poisoned, which are not? Let's assume the majority of them are poisoned. How do you know which ones are safe?"
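One partial answer to "which models are safe?" is supply-chain verification: checking that the weights you deploy are byte-for-byte the weights the vendor published. The sketch below is a minimal illustration of that idea, not anything Seki's startup uses; the model name and checksum registry are hypothetical.

```python
import hashlib

# Hypothetical registry of published checksums; in practice these would come
# from a vendor's signed release notes or a model registry.
TRUSTED_HASHES = {
    "summarizer-v2": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_trusted(model_name: str, weights: bytes) -> bool:
    """Pre-deployment check: only load weights whose SHA-256 digest matches
    a published checksum. This doesn't prove a model is unpoisoned, but it
    does rule out tampering between release and deployment."""
    digest = hashlib.sha256(weights).hexdigest()
    return TRUSTED_HASHES.get(model_name) == digest
```

Checksum verification addresses only the integrity half of the problem; evaluating whether the original training data was poisoned remains the open "black box" question Seki describes.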

Zero trust is not enough: The cybersecurity industry has long advocated for zero trust policies, but even that approach may not protect against the risks posed by AI. 

"Zero trust means you need to verify everything—every transaction, every action," Seki explains. "Looking at recent cases, despite having multiple parties approve transactions, organizations are still vulnerable. Attackers have initiated highly sophisticated social engineering or deepfake UI/UX scams that look like legitimate transactions."

Security professionals find themselves in a tough situation, and Seki suggests several important questions they need to ask if they are to ensure their organizations are safeguarded. "No matter how much zero trust you enforce, you still need additional security frameworks on top of it. How do you identify deepfake interfaces? How do you verify underlying transaction links? How do you examine the source code to ensure you're not approving something malicious? These are the challenges we face even with zero trust principles in place."
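One concrete way to "verify underlying transaction links" is to inspect the parsed URL rather than trusting its display text, since a deepfake UI can show a legitimate-looking label over a malicious href. The sketch below is a minimal illustration under assumed policy; the allowlisted domains are hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical allowlist; a real deployment would pull this from policy config.
APPROVED_DOMAINS = {"payments.example.com", "treasury.example.com"}

def verify_transaction_link(url: str) -> bool:
    """Check the *actual* destination of a link, not its display text.

    Deepfake interfaces can render a legitimate-looking label while the
    underlying link points elsewhere, so verification must parse the URL.
    """
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False  # reject plaintext, javascript:, and other schemes outright
    return parsed.hostname in APPROVED_DOMAINS

verify_transaction_link("https://payments.example.com/approve/123")  # True
verify_transaction_link("https://payments.example.co/approve/123")   # False: lookalike domain
```

This is exactly the kind of rote verification step that is easy to skip under social-engineering pressure but trivial for automated tooling to enforce on every transaction.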

I see agentic AI as a way of automating security engineering. It’s one step closer to replacing the IT help desk and enhancing organizational-level security. In that way, one of the initial internal threats can be eliminated.

Jun Seki

Co-Founder and CEO | Stealth Startup

AI-powered defense systems: Despite these challenges, Seki believes in the potential of using AI itself as part of the solution, particularly in threat detection and incident response.

"If you spin up an AI bot to actively monitor and defend the system, it can definitely react faster when an incident happens," says Seki. "If a hacker is attacking a specific network port, AI can see that and shut down that port much faster than humans can."
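The reaction-speed advantage Seki describes can be sketched as a simple detection loop: watch connection events, and when failures against a port cross a threshold, decide to shut it down. This is an illustrative toy, not a production responder; the threshold and event format are assumptions, and a real agent would issue the block through a firewall or cloud security-group API.

```python
from collections import defaultdict

# Hypothetical threshold; real systems tune this per environment.
MAX_FAILURES = 5

def plan_port_blocks(events):
    """Given (source_ip, port, success) connection events, decide which
    ports to shut down. A real agent would apply these blocks immediately
    via the firewall, reacting faster than a human analyst could."""
    failures = defaultdict(int)
    blocked = set()
    for _src, port, ok in events:
        if not ok:
            failures[port] += 1
            if failures[port] >= MAX_FAILURES:
                blocked.add(port)
    return sorted(blocked)

events = [("10.0.0.9", 22, False)] * 6 + [("10.0.0.3", 443, True)]
plan_port_blocks(events)  # → [22]
```

The logic itself is trivial; the point is latency. A loop like this fires within milliseconds of the threshold being crossed, where a human-driven process involves paging, triage, and approval.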

The industry is moving toward AI-enhanced DevOps infrastructure, where AI assists in diagnosing and responding to threats. Seki mentions he's watching developments like Microsoft Defender to see how AI enhances security capabilities, though he notes that widespread working examples of such systems are still emerging.

Eliminating internal threats: One of the most promising applications for AI in security might be addressing human-enabled risks. Seki points out the surprisingly manual nature of current security practices in most organizations. 

"It's funny how in the 21st century a large part of security engineering or IT security roles is just manually clicking buttons, typing usernames, creating passwords, assigning groups," Seki observes.

The solution may lie in agentic AI, he says. "I see agentic AI as a way of automating security engineering. It’s one step closer to replacing the IT help desk and enhancing organizational-level security. In that way, one of the initial internal threats can be eliminated."
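The manual steps Seki lists, creating usernames, generating passwords, assigning groups, are exactly the kind of work an agent can execute from a declarative policy. The sketch below is a hypothetical illustration of that automation; the role-to-groups policy and naming convention are assumptions, and a real system would call an identity provider's API rather than return a dict.

```python
import secrets

# Hypothetical role-to-groups policy; a real agent would read this from
# the organization's identity-provider configuration.
ROLE_GROUPS = {
    "engineer": ["vpn-users", "git-read"],
    "analyst": ["vpn-users", "bi-read"],
}

def provision_account(name: str, role: str) -> dict:
    """Automate the manual steps: derive a username, generate a one-time
    password, and assign groups strictly according to policy."""
    if role not in ROLE_GROUPS:
        raise ValueError(f"no provisioning policy for role {role!r}")
    username = name.lower().replace(" ", ".")
    return {
        "username": username,
        "password": secrets.token_urlsafe(16),  # one-time; force reset on first login
        "groups": list(ROLE_GROUPS[role]),
    }

account = provision_account("Ada Lovelace", "engineer")
# account["username"] is "ada.lovelace"; groups come from policy, not habit
```

Driving provisioning from policy rather than clicks removes the internal threat Seki flags: no fat-fingered group assignments, no over-privileged accounts created by convention.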

One more thing

We know AI chatbots that read your help center and summarize answers back to your customers are a dime a dozen. The underlying technology is a commodity.

In fact, we believe this so strongly that we'll handle 100,000 FAQ lookup tickets for free.
