On Oct. 30, President Joe Biden issued a far-reaching executive order seeking to limit the dangers of artificial intelligence (AI).
The order comes in response to recent manifestations of harmful AI use, such as deepfakes impersonating political figures and fabricated images of global crises circulating on social media platforms.
To that end, the executive order seeks to establish standards for AI safety and trustworthiness that protect both ordinary citizens and innovation, create a cybersecurity program to develop safeguards and require companies developing AI systems that pose risks to national security to share safety test results with the federal government.
Although such action is warranted, the administration’s swift response may fail to reckon with the concrete harms of AI. Reactionary regulation can produce myopic solutions that fixate on the wrong objectives.
The Biden administration has taken major strides on AI after the woeful shortcomings of social media regulation during the 2010s, a failure that allowed social media platforms to become facilitators of misinformation and polarization.
The executive order is especially critical given the pervasiveness of AI in every aspect of our lives. As students, we see generative AI tools like ChatGPT disrupting traditional education and AI-driven algorithms carefully curating our personalized social media feeds.
The Center for AI Safety has already sounded the alarm to bring AI regulation to the forefront of societal risks. Yet many of the foreshadowed catastrophes, like AI wiping out humanity, are purely speculative risks that distract lawmakers from addressing the actual harms of AI, namely human impersonation and faulty facial recognition technology, which have already led to theft and wrongful arrests.
The preemptive safety measures the government has imposed on AI systems can also bar smaller companies from revolutionizing the industry for the better. Well-established companies like Google and Apple already have exorbitant amounts of money at their fingertips to pay for lawyers and advisers who can rework their approach to comply with new regulations; it’s no surprise that these dominant corporations are among the firmest proponents of AI regulation.
For smaller companies without access to those resources, however, innovation stops at the idea stage and never becomes reality.
Additionally, the executive order does not fully address the onslaught of misinformation disseminated through generative AI, especially content surrounding the 2024 election. With rapid advances in generative AI technology, it is increasingly difficult for ordinary citizens to distinguish real information from fake. Although the order requires government agencies to authenticate their content through watermarking, it does not regulate AI content published outside the government or the country. That broader regulation is a crucial component of any legislative response, and it can only come from Congress.
Biden’s executive order is sensible in that it sets a viable plan for future action on pressing AI issues. However, tangible change must come from legislation passed through Congress. Legislators should remain aware of speculative problems while taking action against the issues we currently face: human impersonation, algorithmic manipulation, the dissemination of online misinformation and the continued absence of online privacy laws.
If AI regulation is taken in the right direction, it will protect vulnerable citizens and stimulate more inclusive innovation. The U.S. must balance its response, remaining wary of imagined risks while taking decisive action where the harm is substantial.