As artificial intelligence takes on tasks once exclusive to humans—from driving to designing—the law is struggling to keep up. Professor Ryan Abbott of the University of Surrey has been at the forefront of this debate. Through his Artificial Inventor Project, Abbott argues for “AI legal neutrality”: the idea that legal outcomes should be based on behaviour, not whether the actor is human or machine.
In this Q&A, he discusses AI inventorship, originality, and why current legal frameworks are being quietly but profoundly disrupted by generative technologies. Drawing on his book The Reasonable Robot, Abbott warns that if we fail to adapt, the law could lag dangerously behind innovation.
Professor Abbott, your book argues for AI legal neutrality. Could you briefly explain this idea and why it’s so important?
Effectively, it’s analysing this phenomenon where you increasingly have AI doing human sorts of things. So, for example, I have a self-driving car, which is mediocre at best, but in principle does the same thing that an Uber driver will do. It takes me from point A to point B. Or you have generative AI systems now that can make creative works the same way you could commission a human artist to make something. So, I could go and pay someone to make a cover image for my book, or I can go on Google Gemini and say make a cover image for my book, and the same sort of thing happens. But the law will treat these behaviours very differently depending on the nature of the actor. For copyright-protected works in the US, you can’t get protection at all if you have a generative AI make them. Even if my self-driving car hit someone in exactly the same way I would, a negligence framework comes into play for me, while a strict-liability framework applies to the AI supplier. So the book looks at how it probably doesn’t work well to have two entirely different sets of legal standards for the same sort of behaviour, and it argues that people would be better off if we just looked at the behaviour and not the nature of the actor.
When you launched the Artificial Inventor Project, what sort of legal or philosophical gap were you trying to expose?
It was the disparate treatment again of AI and human behaviour. You could have a person or an AI functionally do something inventive or creative, and the law would treat that entirely differently based on whether you had an AI or a person doing it. It had not yet been widely acknowledged or appreciated how problematic or disruptive this would be. When I started writing about it and launched the Artificial Inventor Project, AI was being used for these things, but largely by computer scientists or academics in niche industrial areas; whereas now, at least on the creative side, anyone can use ChatGPT and have it make a creative image.
There are now significant commercial implications to how copyright is handled very differently for AI-generated versus human-generated content. So it was raising awareness that these issues existed; encouraging dialogue amongst stakeholders about what the law says and what the law should say; generating guidance; and also advancing the normative argument that we’d really be better off not treating AI and human behaviour differently.
What’s the strongest argument against naming AI as an inventor and why doesn’t this convince you?
It’s a question of what the law says. Inventorship is very poorly harmonised: some jurisdictions don’t require an inventor to be named, others allow corporations to be inventors, some define inventors as natural persons, and some are a bit vague about it. One set of arguments is that some laws simply say an inventor is a natural person. But laws around the world are surprisingly ambiguous about this, and just because natural persons have historically been the ones who invent doesn’t mean they always will be, or that laws written decades and decades ago without inventive AI in mind shouldn’t accommodate a fundamentally beneficial activity. For me, the question is what patent law is attempting to do and whether we would be better off allowing AI inventorship. The reason for not acknowledging it would be if you were concerned that this was going to lead to industry consolidation: say, Google will invent the world’s best AI, it will outperform human beings at inventing things, and therefore Google will be able to patent everything. And wouldn’t that be awful? I still think that would be a pretty beneficial outcome, because it means that Google could patent the cure for cancer, which means we as a society would still get the cure for cancer. In any case, a patent is a time-limited monopoly.
The other argument that people make is more of a moral one, which is that inventorship should be restricted to humans because we want to promote traditional human sorts of activities. I think that’s simply not right. Particularly with patents, they aren’t there to benefit inventors. They’re there to benefit society by providing incentives to innovate, commercialise, and disclose. And those same incentives exist just as much in having someone set up an AI to solve a problem.
Looking at your project and how it has tested many different patent systems in multiple countries, did any particular legal system surprise you?
Well, the whole thing was a bit surprising. It surprised me how poorly harmonised inventorship law is around the world. South Africa granted a patent to the AI’s owner with the AI listed as the inventor. Australia had a judge who gave an extensive decision about why, in his view, an AI could be an inventor under the Australian Patents Act. And then you had other jurisdictions, like the UK Supreme Court, that took a much more textualist approach and said: this clearly isn’t what the law says, and if you want to change it, that’s up to Parliament. The level of public interest has shifted: legislators are debating these issues, and some have introduced bills to accommodate AI. One of the things that was surprising was the level of public interest in the cases, which simply came about as a result of vast improvements in AI capabilities. It also surprised me how textualist some courts were willing to be. In the US they said, “Well, an inventor means a natural person. Full stop. That’s it.” It doesn’t matter what else happens. Whereas I think if they had taken a step back and seen that this is going to produce some bad outcomes, clearly in view of what patent law is trying to accomplish, they might have taken a different approach.
How do you see the legal definition of authorship and originality evolving over time as AI becomes more autonomous?
There’s a lot of law right now on what it means for a work to be original, which is an exceptionally low bar. And there is some debate right now, particularly in the UK, about whether an AI makes something that reflects an author’s own intellectual creation. To me, the courts that have been looking at this really developed the doctrine without thinking about AI. There’s no reason why AI, which is functionally creative, isn’t generating works that meet the threshold of originality, unless you go into the realm of a one-sided philosophical view relying on dicta from old cases. But this is something that courts, and ultimately legislators and the public, are going to have to grapple with. My view is that because copyright is really designed to encourage more works being made and disseminated, we want AI-generated works protected, and there’s no reason that sort of output should not qualify as original. As for authorship, I think there are good reasons to name an AI as an author. Not, of course, because it would own anything; it can’t. It’s not a legal person and it wouldn’t make sense. But to make clear that you aren’t taking credit for work you haven’t done, and that the work is being accurately identified as AI-generated.
Looking at how machine authorship blurs this idea of human creativity, do you think originality is still a useful concept in this space?
AI challenges traditional concepts of human exceptionality, the idea that we are the only creative thing that exists, but in fact we aren’t. Animals, among other things, can be creative. It’s nothing like human art, but there are commercial markets for animal art. You know, gorillas and elephants can paint things. So creativity is not a solely human-centric activity. What we see now is that machines are perfectly capable of the same creative behaviour. So is originality a useful concept? Yes. But I think only as long as you are using it objectively rather than subjectively, basing it simply on an output rather than on some sort of armchair philosophy about what goes into the concept of creativity for a human being versus a machine.
Do you feel AI needs a new type of legal framework, or do you think existing laws could be reinterpreted to address these issues?
One of the arguments the book makes is that you can interpret existing laws, which are already built to solve the challenges we’re facing here, by not distinguishing between AI and human behaviour. So, for example, if you were to say AI-generated works are protectable, you basically get all the benefits of the current framework. I don’t think there’s a need to build a whole new system, which has all sorts of risks and unintended consequences associated with it. We’ve had a system that’s worked very well for a long time. I think it’s the appropriate system for this sort of behaviour.
Moving into criminal law, you’ve considered whether AI could be held directly responsible. What scenario might make that relevant, and how could we deal with an AI that became harmful?
The book argues that we really ought to focus on behaviour; that’s the thing that matters. And generally the law does focus on behaviour rather than things like subjective mental states, for a few reasons: it’s very hard to know what someone’s subjective mental state was, and there’s a lot of opportunity for gamesmanship in it. Criminal law is a bit different. In criminal law, we do tend to care about someone’s mental state. The book was considering whether a concept like that could ever make sense with AI, because really, all AI does at some level is something it was told to do by someone, even if it can act autonomously. But you might imagine in the future an AI that is behaving in a way that is completely irreducible to anything a person has instructed it to do, and behaving in very socially harmful ways. So the chapter was looking at whether it could conceptually make sense to hold an AI liable for a crime. The book basically argues that it could, and that we’ve already figured out a framework for doing something very similar with corporations, where you’re imputing mental states to them, because we have a bunch of reasons under US law for holding corporations criminally liable for certain things. But the chapter also points out that it doesn’t seem like the thing to do at this point in time. We don’t have this yet as a social problem, and there are also a lot of costs associated with broadening the concept of criminal liability.
If you put your future hat on and we move into the realm of AI superintelligence, do you think the legal idea of a reasonable person may need to change? And how could this come about?
So, I think if you have artificial superintelligence, which means you could have effectively an unlimited amount of it doing every socially productive activity we could want something to do, then it would probably replace us as the legal standard for most sorts of activities. For example, with self-driving cars: if I have a button I could press on my car that has a superintelligent AI drive it, which meant it never, ever got in an accident, but I choose to drive instead and then I hurt someone, that probably means choosing to drive was a negligent decision and I should be responsible for the cost of hurting someone. Elon Musk once said we should ban human driving when AI gets to be safer than humans.
But, on the other hand, we don’t want to restrict human freedom and autonomy. So, I think the solution may be that we do indeed keep the existing framework where you are allowed to drive, but if you choose to drive and hurt someone, then you’re simply liable for what follows. Because as much as we don’t want to keep you from driving, it’s also not fair to the people you’ve injured that you chose to do something like that and harmed them.