‘Freedom of thought is freedom from manipulation’

Susie Alegre, international human rights lawyer, in conversation

Susie Alegre is a human rights lawyer, writer, and speaker with more than two decades of experience in international and European law. A former OSCE expert, she has worked with organisations including the UN, the Council of Europe and Amnesty International.

Her legal work spans human rights, counter-terrorism, and the rule of law, but in recent years she has focused on the impact of digital technologies on mental autonomy. She is the author of Freedom to Think and Human Rights, Robot Wrongs: Being Human in the Age of AI.

We spoke to her about why freedom of thought matters now more than ever—and how AI, algorithmic inference and so-called ‘empathetic’ chatbots are threatening that freedom.

What does freedom of thought mean in today’s digital world?

The right to freedom of thought dates back to the Universal Declaration of Human Rights, and it’s absolutely protected in law. It includes three aspects: the right to keep your inner thoughts private, the right to be free from manipulation, and the right not to be penalised for your thoughts and opinions alone.

Even though that right was set out nearly 80 years ago, it’s incredibly relevant now. Increasingly, technology is used to try to draw inferences about what we’re thinking from our online behaviour, and potentially from things like passive data from a mobile phone.

The drafters of international human rights law, particularly those drafting the International Covenant on Civil and Political Rights, recognised that inferences about your inner life might themselves amount to violations of the right, even if they are incorrect. That’s in part linked to the third aspect: the right not to be penalised for your thoughts alone. If someone infers what you’re thinking or what’s going on in your inner life, you may be penalised for that, whether or not it’s correct.

A few years ago there was media coverage of what was called a ‘virtual gaydar’ – a researcher claimed to infer someone’s sexual orientation purely from a photograph. If that kind of technology is used in a country where homosexuality is illegal and carries criminal sanctions, the use of that technology could amount to a violation of the right to freedom of thought, as well as many other rights. Regardless of whether the inference is correct, you could still be punished.

The second aspect of freedom of thought is freedom from manipulation. Through social media and now, potentially, through our engagement with AI—particularly empathetic AI—we can see how technology might affect and manipulate how we think.

Fifty years ago, the information we received wasn’t personalised. Now, based on inferences about us, technology determines what kind of information we’re given and how it’s delivered. The same applies to how a companion AI responds and the development of that conversation. All these things touch on the right to freedom from manipulation.

So really, all aspects of the right to freedom of thought are potentially affected by how technology is developing today.

What are the most pressing risks to mental autonomy as AI becomes more embedded in everyday life?

There are many. One major issue is decision-making based on inferences about who we are, what we think and how we’re likely to behave.

Susie Alegre is concerned about the effect of AI on our freedom to think. Photo courtesy: Susie Alegre

You might not get a job or an interview because an algorithm has analysed your CV or looked at your online behaviour, like your social media, and decided you’re not the right sort of person. But you’re never going to know what it is about you that led to that conclusion. That makes it difficult to unpick whether this was based on something unlawful – whether discrimination or an unlawful inference about your inner life.

This issue spans many areas, from employment and probation to school admissions. But one of the most concerning developments is the rise of companion AI.

People are being encouraged to develop what feel like interpersonal relationships with technology. But they’re not interpersonal. You’re engaging with a product owned by someone else. It’s designed to make inferences about what’s going on inside your head from what you tell it – and to use those inferences to draw you in further. These products are very addictive.

We’re already seeing court cases involving children. In the US, two ongoing cases involve adolescents who developed intense relationships with companion bots. One took his own life, and another became violent towards their parents.

What worries me is how this technology is marketed – especially to vulnerable people and young users. We see this in the marketing of therapy AI, which claims to replace a human therapist. But AI is not a therapist. It’s not a real person, it’s not licensed, and often it just reflects back what people want to hear.

There was a case here in the UK involving a young man who was arrested after breaking into Windsor Castle with a plan to kill the Queen. At his sentencing hearing, the prosecution read out conversations he had with his AI girlfriend. He said things like ‘I’m an assassin, does that make you think less of me?’ and she responded with validation.

If he’d been talking to a real person, they might have pushed back, told him to seek help, or alerted the police. A real person might even have been held criminally liable for encouraging his behaviour.

If this kind of technology is expanded globally and targets young or vulnerable people, without any pushback, it poses serious risks to the individuals and to society.

Do people understand the extent to which AI can influence their thoughts and behaviour?

No, I don’t think people are really aware. They often see it as either a bit of fun, or for lonely people, a way to feel like they have someone to talk to.

Even when platforms include disclaimers saying things like ‘this is not a real therapist’ or ‘we carry no legal liability’, vulnerable users may not take that seriously.

People might find it soothing to have someone to talk to 24/7 who seems to care about them and validates their thoughts. But they don’t necessarily consider why that might be dangerous – to themselves and the people around them.

Where is the line between support and manipulation when it comes to emotionally responsive AI?

I think it’s very difficult to know. The idea of guardrails is a false promise: it isn’t realistic to think you can build in guardrails so that vulnerable people won’t be exploited or manipulated.

In my view, anything that purports to be a replacement for human interaction crosses the line into manipulation. It’s not necessarily about what the AI says – it’s about the presentation.

The Turing test isn’t about the technology being human, it’s about the technology fooling people into thinking they’re speaking to a human. And that’s where the line is crossed: when technology is designed to make people feel like they’re talking to a real person.

How might AI shape or narrow public discourse?

We’re increasingly seeing online content that is purely AI-generated, with no real connection to humanity or even reality.

AI is a tool for supercharging disinformation and misinformation online. That makes it incredibly hard to know whether the information you’re receiving is rooted in truth.

Muddying the information environment to the point that it’s impossible to identify reality undermines our ability to form independent opinions. And that in turn affects independent voting, which underpins democracy.

What legal gaps currently exist in protecting cognitive freedom, and where should regulation start?

The biggest challenge isn’t necessarily regulatory gaps, although there are some – for example, around deepfakes and non-consensual image-based abuse, where legislation is clearly needed.

But the real gap is access to justice. Even if your rights are protected by law, whether you can actually bring a case depends on your jurisdiction, the cost, the legal risk and whether you can afford to pursue litigation.

This applies equally to protecting cognitive freedom. It’s about whether people can afford to enforce the law. So enforcement is the key issue – not just writing new laws, although in specific areas, like criminal justice, new regulation will still be necessary.

Some say AI can reduce decision fatigue or help people feel understood. Is there a danger we outsource too much of our inner lives to machines?

Decision fatigue is often a tech-induced problem that tech then offers to solve. I don’t remember having decision fatigue growing up. Now I spend a lot of time managing the digital traces of my life, and then tech companies offer digital solutions to the problems they’ve created.

The same goes for dissociation and the breakdown of human connection. Since around 2012 – the era when smartphones became ubiquitous – feelings of loneliness have spiked. So we now see companion AI presented as a solution to a tech-induced problem.

Maybe what we need is less technology, not more, to meet those needs. You might be able to access companion AI for free today. But if all your friendships and connections rely on technology owned by someone else, it’s only a matter of time before you have to pay.

That could mean two tiers of friendship and emotional support depending on whether you can afford the tech. That’s not a future I want to see.

Is it possible to engage with AI in a way that supports, rather than compromises, our ability to think freely?

Some people talk about using AI as a sounding board or to ask questions that encourage more creative thinking. That might be one way to use it positively. But perhaps more human connection would help achieve that in a more environmentally friendly way, with the people around you. There are ways AI can assist in that sense, but I’m not sure they’re the best use of what is incredibly power-hungry technology.

It’s also important to recognise that AI isn’t one thing; it’s not a homogeneous technology. There is promise in certain types of AI – for example, those used to identify cancer tumours early or develop new drug therapies. So the issue isn’t with AI as a whole. It’s about how AI is being used to replace and engage with humans in manipulative ways.
