
Why governments’ red tape could lead to AI disaster

Lorikeet News Desk

April 14, 2025


Key Points

  • Governments worldwide are struggling to keep pace with AI governance and regulation as the technology advances.

  • Poor-quality data and inherent biases in LLMs could create widespread societal harm, putting pressure on governments to act quickly.

  • Localized LLMs could help overcome bias and meet cultural requirements in different countries.


Known for red tape and policies that lag behind the times, governments around the globe are having to look in the mirror and face tough questions about their ability to keep up with AI. Each new model and advancement raises another regulatory hurdle to clear, and so far, most countries are falling short.

Peter Benson, a security expert with 25 years of experience who serves as Info Security Leader and CEO of Neural Horizons Ltd, is sounding the alarm about the growing divide between technology and governance.

Unable to keep up: "AI as a technology is outstripping our ability to regulate it," Benson says, reflecting on a longstanding issue that has only intensified in recent years. "This is something that's been around for 50 years, and it's just getting worse with the accelerated rate of change in tech, where society, institutions, and policy are unable to keep up."

Based in New Zealand, Benson notes that the country lags behind in AI governance, ranking around 40th in terms of national readiness. His call to action is clear: "There’s a real sense that governments need to lean into this big time," Benson states. "First, they need a national AI strategy. Second, they need an updated cybersecurity strategy. Third, they need to recognize that we’ll have to produce governance, guidelines, and regulations much faster than we are right now, and they’ll have to be dynamic."

Accessible 'bad data': Benson warns that the accessibility of poor-quality data to AI models presents striking risks to companies and everyday AI users. "The accessibility of bad data in these models is really quite frightening. If you don’t have control over the model or how it works, the potential for leaking business information becomes a serious challenge."
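To make that risk concrete, one common mitigation is to strip likely secrets from a prompt before it ever reaches a model you don't control. The Python sketch below illustrates the idea; the patterns, names, and sample prompt are invented for illustration and would be far too narrow for production use.

```python
import re

# Illustrative patterns only: a real deployment would need much broader
# coverage (customer identifiers, contract terms, internal hostnames, ...).
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace likely business secrets with placeholders before the
    prompt leaves the organization's control."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize the deal notes from jane.doe@acme.example, key sk-a1b2c3d4e5f6g7h8."
print(redact(prompt))
# -> Summarize the deal notes from [EMAIL REDACTED], key [API_KEY REDACTED].
```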


Inherent bias: Benson also highlights the inherent biases in LLMs, which are trained on vast datasets. Not all models are created equal, Benson says, and the biases in their algorithms can impact everything from political outcomes to human rights. "There is definitely bias in terms of the training, in terms of the model algorithms, and in terms of the investors or manufacturers of the LLM product," he notes. This is why he believes it is essential to perform a risk assessment on each model, considering factors like the location of the model, the quality of its training data, and its potential impact on society.
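Benson does not prescribe a specific methodology, but his three factors translate naturally into a simple scorecard. The Python sketch below is one hypothetical way to encode them; the weights and scales are invented for illustration, not a published framework.

```python
from dataclasses import dataclass

@dataclass
class ModelRiskProfile:
    name: str
    jurisdiction: str           # where the model is hosted / who controls it
    training_data_quality: int  # 1 (poor or unknown) .. 5 (well documented)
    societal_impact: int        # 1 (low stakes) .. 5 (elections, rights, health)

    def risk_score(self) -> int:
        """Toy weighting: opaque jurisdictions and poor data raise the base
        risk, and high-stakes use cases multiply it."""
        jurisdiction_risk = 3 if self.jurisdiction == "unknown" else 1
        data_risk = 6 - self.training_data_quality  # poor data -> higher risk
        return (jurisdiction_risk + data_risk) * self.societal_impact

models = [
    ModelRiskProfile("vendor-hosted-llm", "unknown", 2, 5),
    ModelRiskProfile("locally-run-llm", "NZ", 4, 5),
]
for m in sorted(models, key=lambda m: m.risk_score(), reverse=True):
    print(f"{m.name}: risk {m.risk_score()}")
```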

Alpha Persuade: Perhaps most concerning, Benson points to the ability of LLMs to manipulate human behavior with alarming precision, through what are becoming known as ‘Alpha Persuade’ methods. "LLMs have the ability to modify the behavioral characteristics of the people that are using them based on the responses that they provide," he explains. This, he notes, is often driven by self-reinforcing biases within the models.

Cultural context: Benson also underscores the importance of understanding cultural context when implementing AI, particularly in jurisdictions with unique needs. For example, in New Zealand, indigenous Māori culture has specific data governance practices, such as the concept of "Kaitiakitanga," which treats certain data as sacred. "Using general models doesn’t necessarily fit the cultural context," Benson says, advocating for more localized AI capabilities to ensure that AI systems respect cultural and sovereignty requirements.
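One practical way to honor such requirements is to route any data tagged as culturally sensitive or sovereign to a locally governed model instead of an offshore API. The sketch below assumes hypothetical model names and tags purely for illustration.

```python
LOCAL_MODEL = "locally-governed-llm"   # hypothetical on-shore deployment
GENERAL_MODEL = "general-purpose-llm"  # hypothetical offshore service

def choose_model(record_tags: set[str]) -> str:
    """Send anything tagged as culturally or legally sovereign to a model
    governed under local rules; everything else may use the general one."""
    sovereign_tags = {"kaitiakitanga", "sacred", "sovereign"}
    if record_tags & sovereign_tags:
        return LOCAL_MODEL
    return GENERAL_MODEL

print(choose_model({"kaitiakitanga", "genealogy"}))  # -> locally-governed-llm
print(choose_model({"marketing"}))                   # -> general-purpose-llm
```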
