Recent research suggests that the prospect of AI-powered companies or organizations is becoming increasingly feasible. At the same time, researchers warn that we need to update our laws and AI training methods to prepare for this possibility. Advances in large language models are forcing us to reassess which abilities are solely human. There is debate about whether these algorithms can understand and reason the way humans do, but they increasingly match or even exceed most humans on a range of cognitive tasks.
This fact is driving efforts to incorporate these new skills into all types of business functions, from writing marketing texts to summarizing technical documents or even customer support. Although there are inherent limitations to how far AI can advance in the hierarchy of a company, the idea of AI taking on managerial or even executive roles is becoming more of a possibility.
As a result, legal scholars specializing in AI are now calling for laws to be adapted to account for the possibility of AI-powered companies, and for technology developers to change how AI is trained so that algorithms comply with the law from the outset.
Daniel Gervais, a law professor at Vanderbilt University, and John Nay, an entrepreneur and Stanford CodeX fellow, write in an article published in the scientific journal Science: “Legal singularity is looming. For the first time, non-physical entities not guided by humans may enter the legal system as a new ‘species’ of legal entities.”
While nonhuman entities such as rivers or animals have sometimes been granted legal subject status, the main barrier to their full participation in the law has been their inability to use or understand human language. According to the authors, with the latest generation of AI this barrier has already been overcome, or soon will be, depending on whom you ask.
This opens up the possibility for non-physical entities to directly interact with the law. In fact, the authors highlight that lawyers are already using advanced AI tools to assist them in their work, and recent research has shown that large language models can perform a wide range of legal reasoning.
While today’s AI is still not capable of independently running a company, the authors emphasize that some jurisdictions do not have rules requiring corporations to be supervised by humans, and the idea of AI managing business operations is not explicitly prohibited by law.
If such an AI-run company were to emerge, it is not entirely clear how courts would handle it. The two most common sanctions for breaking the law, financial penalties and imprisonment, do not map cleanly onto software.
While it is possible to ban the existence of AI-controlled companies, the authors argue that it would require significant international legal coordination and could limit innovation. Instead, they suggest that the legal system itself should adapt to this possibility and devise the best way to address this issue.
One important direction for future work is likely to be steering AI to comply with the law. This could be achieved by training a model to predict which actions are consistent with specific legal principles; that model could then teach other models, trained for different purposes, to act lawfully.
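The teacher-model idea sketched above can be illustrated in miniature. The example below is a hypothetical toy, not the authors' method: the training data, function names, and the simple perceptron classifier are all invented here. It trains a tiny "compliance scorer" on labeled descriptions of business actions, then uses it to screen the proposals of a second, task-focused model:

```python
# Hypothetical sketch: train a toy classifier to predict whether a described
# action looks lawful, then use it as a filter ("teacher") for another model's
# proposed actions. All examples and thresholds are invented for illustration.

from collections import defaultdict

# Toy labeled corpus: (description of a proposed business action, complies?)
TRAINING_DATA = [
    ("disclose material risks to investors", True),
    ("file quarterly reports with the regulator", True),
    ("honor customer refund obligations", True),
    ("obtain consent before processing personal data", True),
    ("conceal losses from auditors", False),
    ("fix prices with a competitor", False),
    ("sell personal data without consent", False),
    ("mislead investors about revenue", False),
]

def train_compliance_scorer(data, epochs=20, lr=0.5):
    """Train a minimal perceptron over bag-of-words features."""
    weights = defaultdict(float)
    bias = 0.0
    for _ in range(epochs):
        for text, label in data:
            words = text.split()
            score = bias + sum(weights[w] for w in words)
            target = 1.0 if label else -1.0
            if score * target <= 0:  # misclassified: nudge the weights
                for w in words:
                    weights[w] += lr * target
                bias += lr * target
    return weights, bias

def complies(action, weights, bias):
    """Predict whether a proposed action looks lawful."""
    return bias + sum(weights[w] for w in action.split()) > 0

# The scorer then screens another model's candidate actions.
weights, bias = train_compliance_scorer(TRAINING_DATA)
proposals = ["disclose material risks to investors", "fix prices with a competitor"]
approved = [p for p in proposals if complies(p, weights, bias)]
```

A real system would of course replace the bag-of-words perceptron with a large language model fine-tuned on legal text, but the division of labor is the same: one model learns what "lawful" looks like, and its judgments constrain what other models are allowed to do.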
The ambiguity of the law, which is often highly contextual and must ultimately be resolved in court, complicates this process. But the authors advocate training methods that instill what they call the "spirit of the law" into algorithms, rather than relying on rigid formal rules for how to behave in particular situations.
Regulators could make this type of AI training mandatory, and governing bodies could also develop their own AI designed to monitor the behavior of other models to ensure compliance with the law.
While the researchers acknowledge that some may dismiss the idea of AI directly controlling companies, they argue that it is better to bring AI agents into the regulatory framework as early as possible, so that potential problems can be addressed before they arise.
“If we don’t create AI agents as legal subjects who must comply with human law, we will miss out on significant advantages of monitoring what they do, shaping how they do it, and preventing harm,” the authors write.
Q: What are AI-powered companies?
A: AI-powered companies are organizations that utilize artificial intelligence technology to drive various aspects of their operations, from decision-making to customer support.
Q: Can AI understand and reason like humans?
A: There is a debate about whether AI algorithms can understand and reason in the same way as humans. However, in many cognitive tasks, AI algorithms are achieving similar or even better results than most humans.
Q: Is it possible for AI to take on managerial or executive roles in a company?
A: While there are limitations to how far AI can advance in the hierarchy of a company, the idea of AI taking on managerial or even executive roles is becoming more feasible.
Q: How can the legal system adapt to the possibility of AI-powered companies?
A: The legal system can adapt by updating laws to consider the potential involvement of AI-powered companies and by developing methods to train AI models to comply with specific legal principles.
Q: What are the challenges in incorporating AI into the legal system?
A: The ambiguity of the law, which often requires contextual interpretation and resolution in court, presents challenges in incorporating AI into the legal system. However, training methods that instill the "spirit of the law" into AI algorithms can help address these challenges.
– [Science Journal](https://www.example.com)