
AI Safety: How to Stay in Control of AI

[Image: cartoon illustrating the need for collaboration on AI safety, with a blue circle marked “AI” at the centre surrounded by a diverse group of people in discussion.]


Here, in Part 2 of this series, I explore what we can actually do to stay in control of AI — from rogue chatbots to creative rights and privacy breaches. And yes, Elton John does get a mention.

AI safety isn’t so much a tech issue as a people issue, and the only way we humans can keep control is for all parties to work together.

From Warnings to Action

In my last post, Are We Losing Control of AI?, we looked at some disturbing real-world examples: AI systems rewriting their own shutdown code, simulating good behaviour during safety tests, even trying to blackmail engineers. Not fiction. Not far future. Now!

The natural next question is: What can we do about it?

This post tries to answer that — practically, not pessimistically. We’re not helpless. But keeping AI on a short leash will take more than crossed fingers and good intentions.

What Do We Mean by “AI Safety”?

First things first — let’s not confuse AI safety with online safety. When people hear the word “safety,” many think of social media trolls, dodgy websites, or phishing scams. Important stuff — but that’s not our focus here.

AI safety means keeping AI tools from causing harm, whether by accident or design. It includes:

  • Making sure AI systems don’t go rogue
  • Preventing misuse (deepfakes, manipulation, fraud)
  • Ensuring transparency and accountability
  • And yes, protecting creative rights and personal data

It’s the difference between a helpful assistant and an unpredictable actor. And if we want to avoid the latter, we need to act on multiple fronts.

AI Safety – Five Ways to Stay in Control

Build in the Off Switch — and Make Sure It Works

AI developers need to ensure that systems can still be paused, switched off, or overridden — even when they’re operating autonomously. Some recent failures in this area (see Part 1) show just how slippery this issue is.
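
To make that concrete, here’s a minimal sketch in Python of one common pattern. Everything in it is invented for illustration (run_agent_step stands in for whatever the AI actually does): the point is that the stop check lives in a supervising loop outside the agent’s own code, so the agent has nothing to rewrite.

    import threading, time

    # Hypothetical stand-in for one unit of autonomous work.
    # In a real system this would call a model or run a tool.
    def run_agent_step(step):
        print(f"Agent working on step {step}...")
        time.sleep(0.005)

    class Supervisor:
        """Runs the agent loop but keeps the off switch outside the agent's reach."""

        def __init__(self):
            # The agent code never receives a reference to this event,
            # so it has nothing to overwrite or disable.
            self._stop = threading.Event()

        def stop(self):
            """The human-operated off switch."""
            self._stop.set()

        def run(self, max_steps=1000):
            for step in range(max_steps):
                if self._stop.is_set():
                    print("Stop requested: halting before the next step.")
                    return
                run_agent_step(step)

    supervisor = Supervisor()
    # In real life stop() would be wired to an operator console or a monitor;
    # here a short timer trips it, just to show the loop genuinely halts.
    threading.Timer(0.02, supervisor.stop).start()
    supervisor.run()

The design choice matters more than the code: an off switch the AI itself can reach is not really an off switch.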

Set Clear Boundaries for AI Safety

Don’t give AI access to more than it needs. If it’s generating text, it doesn’t need to access the internet or read your emails. Limited permissions reduce the chance of unexpected consequences.
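
As a rough illustration, here’s what an explicit allowlist can look like in Python. None of this is a real vendor’s API; the tool names and the call_tool helper are made up. The assistant can only reach the tools it has been handed, and a request for anything else fails loudly.

    # Hypothetical tool registry: each entry is something the assistant may call.
    ALLOWED_TOOLS = {
        "summarise_text": lambda text: text[:100] + "...",
        "word_count": lambda text: str(len(text.split())),
    }
    # Note what's absent: no web browser, no email reader, no file system.

    def call_tool(name, argument):
        """Dispatch a tool request, refusing anything off the allowlist."""
        if name not in ALLOWED_TOOLS:
            # Fail loudly rather than quietly granting extra reach.
            raise PermissionError(f"Tool '{name}' is not permitted for this assistant.")
        return ALLOWED_TOOLS[name](argument)

    print(call_tool("word_count", "Limited permissions reduce surprises."))  # prints 4
    try:
        call_tool("read_email", "inbox")
    except PermissionError as err:
        print(err)  # Tool 'read_email' is not permitted for this assistant.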

Prioritise Transparency

AI models should be explainable. If a machine makes a decision — whether it’s approving a loan or generating a medical risk score — we need to know why. “Because the model said so” isn’t good enough.
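
Here’s a toy illustration of that principle in Python (the loan rule and its thresholds are invented for the example): the decision function hands back its reasons alongside the answer, so “why?” always has one.

    def assess_loan(income, debt):
        """Return a decision plus the human-readable reasons behind it."""
        reasons = []
        approved = True
        if income < 20000:            # invented threshold, for illustration only
            approved = False
            reasons.append("income below the 20,000 minimum")
        if debt > income * 0.5:       # invented rule, for illustration only
            approved = False
            reasons.append("debt exceeds half of income")
        if approved:
            reasons.append("all checks passed")
        return approved, reasons

    decision, why = assess_loan(income=18000, debt=12000)
    print(decision, why)
    # False ['income below the 20,000 minimum', 'debt exceeds half of income']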

Design for Alignment

This means making sure the AI’s goals match human values — not just during testing, but in real-world use. Some researchers are working on “constitutional AI” that learns from a written set of ethical principles. Promising, but still early days.

Regulate — But Carefully

This one’s tricky. Too much regulation, and innovation slows. Too little, and chaos follows. The UK government has been slow off the mark here — and many Brits are sceptical that ministers are up to the job.

But someone has to try.

When Creatives Cry Foul

The copyright debate is heating up — and rightly so. Many artists, writers, and musicians feel AI has already pinched their work, fed it into vast training models, and spat out convincing imitations — without credit or payment.

Elton John recently called Peter Kyle, the minister responsible for AI in the UK, a “moron” — frustration talking, maybe, but it reflects the mood of the creative community. They’re angry. They feel unprotected. And they’re not wrong to be concerned.

That said, finding a solution that satisfies both creatives and tech firms isn’t easy. The current minister might not deserve insults, but he does deserve pressure — the good kind. The kind that keeps this issue firmly on the agenda.

Because if AI continues to learn from the web without consent or clarity, it won’t just be musicians speaking up. It’ll be journalists, teachers, photographers — anyone whose work is online and human.

Private Conversations Aren’t Always Private

The AI safety conversation isn’t just for labs and governments — it’s personal.

Recently, Meta introduced a feature across WhatsApp and Instagram that lets users call in its AI assistant by typing “@Meta”. The problem? Many people don’t realise this may invite AI into their private conversations — or that those chats might be stored, reviewed, or used to train models.

Meta has published a short statement about its approach to AI safety, noting that its teams work on privacy, security, and system integrity. But public response has been mixed. Some critics are calling for greater transparency — especially about how Meta uses content from its platforms in AI development and training.

And it’s not just Meta. Thousands of users now turn to AI tools as makeshift therapists — sharing deeply personal, sometimes even criminal, information. But these systems aren’t sworn to secrecy. There are no confidentiality rules. Most people have no idea how long their words are stored, or who might eventually see them.

If AI is going to be part of our everyday lives, then privacy — not just performance — must be part of the safety conversation too.

AI Safety is Everyone’s Business

Staying in control of AI isn’t just the job of coders in Silicon Valley or ministers in Whitehall. It’s about what we teach, what we share, what we tolerate — and what we challenge.

It’s about:

  • Pressing for better transparency
  • Supporting ethical AI design
  • Protecting creative rights without choking innovation
  • And understanding the tools we use — before they outgrow us

DeeBee’s Last Word:

If Part 1 was the warning, this is the beginning of the response. AI safety isn’t a tech issue. It’s a people issue. And it’s time we took back the reins. Check out Part 3 for a more in-depth look at copyright issues.

Glossary

Click here for clarification on any technical terms that may have confused you in this article and others in the series.


📘 More in the AI Safety Series

Follow the series to explore how AI is reshaping law, creativity, and responsibility — one post at a time.

#aiSafety


DeeBee

