Regulating Artificial Intelligence

Current Regulation of Artificial Intelligence

Artificial intelligence is changing the world at a breakneck pace. Its applications are numerous and growing, ranging from financial planning and medical procedures to the fashion industry and the ever-expanding reality of fully autonomous vehicles. As of now, most artificial intelligence is either completely unregulated or regulated only to the extent that all technologies in a certain sphere are (e.g., all medical software and hardware must comply with certain regulations). Some types of artificial intelligence, like autonomous vehicles, are regulated more heavily, but they, too, tend to face a patchwork of regulation across different states. The dearth of regulations, combined with the rapidly expanding artificial intelligence landscape, demonstrates the necessity of regulation, and numerous groups, as well as several prominent individuals including the entrepreneur Elon Musk, have called for extensive regulation of artificial intelligence before, in Musk's words, "it's too late."

The Need to Regulate Artificial Intelligence

Artificial intelligence is extremely powerful. However, problems naturally arise, whether due to error on the part of a machine or its programmer, biases learned from flawed real-world data sets (a famous example being racially biased recidivism prediction software, which arose because people of color are overrepresented in the prison system, leading the software to flag them as more likely to reoffend), or artificial intelligence working too well for our own good (pattern recognition software has intruded on people's privacy by identifying exactly who they were based on their movement patterns). Further compounding this issue, most experts agree that a reactive approach to regulating artificial intelligence could be disastrous, and that a prospective approach is therefore necessary: artificial intelligence must be regulated before it is implemented. Nick Bostrom, an expert in artificial intelligence at the University of Oxford, explains that this is because "once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed."


Ways to Regulate Artificial Intelligence

Numerous working groups have released papers on how best to regulate artificial intelligence. I have summarized the findings of two particularly influential working groups here:

  • The AI Now Institute at NYU believes that agencies that already oversee particular sectors and industries should be given power to regulate artificial intelligence within their industries, rather than creating a general artificial intelligence agency. A sector-specific approach is preferable because a single artificial intelligence agency would struggle to craft general regulations spanning industries with such different needs.
  • Similarly, the AI Now Institute has singled out facial recognition technology specifically as needing extremely stringent regulation due to its invasive nature. It has been joined in this clarion call by Microsoft, which, despite being a major player in facial recognition technology, has lobbied Congress for more stringent regulation. The AI Now Institute believes that communities and individuals should have the right to reject the use of facial recognition technology, and that mere public notice is not sufficient. It also identifies affect recognition (also called emotion recognition), a subclass of facial recognition technology that claims to be able to read personality, mood, engagement, and inner feelings, as a particularly dangerous unregulated subclass. Many companies have implemented emotion recognition in the hiring process, creating the potential for discrimination, or for people being arbitrarily denied jobs, if affect recognition remains unregulated.
  • Finally, the AI Now Institute believes that consumer protection agencies should apply truth-in-advertising laws to artificial intelligence firms, because many companies currently use the "hype" surrounding artificial intelligence to make wild promises they simply cannot deliver, or to mislead the public about what their products actually do. Such practices have the potential to exploit consumers or compromise people-facing industries (see affect recognition, mentioned above).
  • The Conference on Artificial Intelligence, Ethics, and Society recommends that world powers work together to create a new organization devoted to standardizing artificial intelligence laws across borders. It argues that a patchwork approach has the potential to be disastrous, and that an international body devoted to streamlining and coordinating national artificial intelligence regulations is therefore necessary. Such a body would further benefit governments and the public insofar as, if one nation makes a mistake in regulating artificial intelligence and a tragedy occurs, other governments can correct their own regulations before repeating that mistake. This coordination could prove essential to preventing a worldwide catastrophe.


Artificial intelligence regulation is a growing issue, and this is only a high-level sampling of the numerous detailed proposals in existence. Nevertheless, it is critical that the public become informed and lobby their elected representatives for artificial intelligence regulation. We leave the regulation of artificial intelligence in the hands of private corporations at our own peril.
