AI’s black box problem has CIOs begging for reliable guardrails – before it’s too late
April 10, 2025

Key Points
CISOs and CIOs are increasingly concerned about AI's lack of transparency and control, fearing job loss due to potential security issues.
The black box nature of AI systems complicates explainability, raising risks of cognitive offloading and over-reliance on AI.
The rise of "citizen developers" using foundation models without oversight poses significant security risks, akin to shadow AI.
Some people talk about ethics, but in general, it’s explainable AI that’s key.
Doug Shannon
AI expert and mentor | Insight AI, Brewing Insights!, and the Forbes Technology Council
CISOs and CIOs are quietly panicking about AI—not just because of what it can do, but because of how little they can see or control. Without governance and frameworks to support them, security leaders stand to lose the most among company executives.
Living in fear: "CISOs and CIOs are so scared at the moment with AI and the new security pressures emerging," says Doug Shannon, AI expert and mentor at Insight AI, Brewing Insights!, and the Forbes Technology Council. "They’re thinking, ‘I’m going to be fired. Something’s going to happen, I have no visibility in this and I don’t know what to do.' They fear being the scapegoat and getting fired on the first issue."
One of the biggest problems, he says, is the black box nature of many AI systems. "You don’t know what the weights are. You don’t know where things are going. You don’t know how you're getting the answers out of it."
Cognitive offloading: That lack of transparency feeds into a broader issue: explainability. "Some people talk about ethics, but in general, it’s explainable AI that’s key," Shannon says. "And we still very much need it, but we’re not seeing it yet." Without it, he warns of a growing risk of cognitive offloading—the idea that people will stop thinking for themselves once they begin to over-trust AI systems. "Once we build more trust into them, and we say, ‘Oh, we just trust it,’ kind of like with Wikipedia—we shouldn’t, but we do—then cognitive offloading happens. We’re at risk of not doing things anymore."
Once we build more trust into AI and we say, ‘Oh, we just trust it,’ kind of like with Wikipedia—we shouldn’t, but we do—then cognitive offloading happens.
Doug Shannon
AI expert and mentor | Insight AI, Brewing Insights!, and the Forbes Technology Council
Losing skills: He points to early signals already visible in society. "There was an article that said in the next three years we may see kindergarteners not reading. We lost calligraphy. We’ve lost other skills we don’t practice anymore because we didn’t really need to. As an example, we don’t give directions anymore. We use GPS. So what else are we going to offload? The risk I see is generally where AI is heading."
Citizen developers: Shannon also flags a second, related concern: the rise of "citizen developers." He explains, "I still see a lot of major change controls not happening—and that’s pushing people to look at citizen developers again, saying, ‘Let’s just let people build their own stuff using foundational models.’ But then you step back and go, wait a minute—isn’t that shadow AI? You’re building code with no version control, no oversight, and no idea what’s up or down. But because it works, people assume it’s fine. That’s massively dangerous." The likeliest winners in this space will be those that relentlessly define guardrails for AI and can reliably promise predictability to businesses and consumers alike.