After the Dabus decisions of the EPO, the US Federal Circuit, the German Federal Patent Court and the Australian High Court, it is now the turn of the UK Supreme Court to decide whether an AI can be named as Inventor in a Patent Application.
The EPO, the US Federal Circuit and the German Federal Patent Court were clear in holding that the law only provides for a person to be named Inventor, not a machine or an algorithm (but note that a parallel patent in South Africa was granted with the AI as Inventor).
On March 2, 2023 the UK Supreme Court heard the appeal by Dr Thaler (creator of “Dabus”) against the Court of Appeal’s dismissal of his appeal from the Controller of Patents. As in the parallel cases, the courts below grounded their decisions, on the one hand, on the fact that rights can only be transferred or assigned by a legal person and, on the other hand, on the fact that assessing the non-obviousness of an AI-generated invention is a completely different process from that for a human inventor.
Beyond the specific case before the UK courts, the issue is very interesting and potentially disruptive to the way we deal with IP.
In fact, the operation of an AI that comes up with a new device or product is, for the time being, human-driven, and it therefore does not seem fair to assign rights to an AI, which is, in turn, owned by legal persons.
But even assuming that the requirement of legal personality were removed from the law, the consequences would extend beyond the name of the Inventor. Since non-obviousness would still have to be assessed, what would the skilled person look like? It seems natural to compare like with like, so we might be inclined to replace the skilled person with another AI.
This “skilled AI” would be one of ever-growing ability, and Patent Offices around the world would then have to run programs for inventive-step assessment whose quality and scope would change along with real, improving AI. Would there then be two separate tracks for human and non-human inventions before the Offices? A fictitious classical skilled person alongside a real AI-driven skilled agent?
This would already be very complicated, but what about equivalents? Would an analytical algorithm be deemed equivalent to a specific AI, and vice versa, given that one can always take data from common data sources, or even from the same analytical algorithm, and train an AI to give similar outputs? Or would the two be kept separate before a Court?
AI Inventorship is therefore not a secondary problem: it is a gateway to larger, disruptive questions that can have a great impact on how we deal with human and non-human activities. We need to make thoughtful decisions as a society and as a species.