
AI regulation needs a balanced approach

Although President Biden’s new executive order on AI regulation advocates for new safety standards, tangible changes must come from congressional legislation.
Photo source: Unsplash

On Oct. 30, President Joe Biden issued a far-reaching executive order seeking to limit the dangers of artificial intelligence (AI). 

The order responds to recent instances of harmful AI use, such as impersonating political representatives through deepfakes and generating fake images of global crises that circulate on social media platforms.

To that end, the executive order seeks to establish standards for AI safety and trustworthiness that protect both ordinary citizens and innovation, create a cybersecurity program to develop safeguards and require companies developing AI systems that pose risks to national security to disclose their safety test results to the federal government.

Although such action is warranted, the administration's swift response may fail to reckon with AI's concrete harms. Reactionary regulation can produce myopic solutions that fixate on the wrong objectives.


The Biden administration has taken major strides toward regulating AI after the woeful absence of substantive social media regulation during the 2010s allowed those platforms to become facilitators of misinformation and polarization.

The executive order is especially critical given the pervasiveness of AI in every aspect of our lives. As students, we see how generative AI tools like ChatGPT have disrupted traditional education and how AI-driven algorithms carefully curate our personalized social media feeds.

The Center for AI Safety has already sounded the alarm to bring AI regulation to the forefront of societal risks. But many of the foreshadowed catastrophes, like AI wiping out humanity, are purely speculative and distract lawmakers from AI's actual harms: human impersonation and faulty facial recognition technology, which have already led to theft and wrongful arrests.

The government's preemptive measures to restrict AI systems can also bar smaller companies from revolutionizing the industry for the better. Well-established companies like Google or Apple already have exorbitant resources at their fingertips to pay for the lawyers and advisers who can rework their approach to comply with new regulations; it's no surprise that these dominant corporations are the firmest proponents of AI regulation.

For smaller companies that don't have access to those resources, however, innovation stops at the idea stage and never becomes reality.

Additionally, the executive order did not fully address the onslaught of misinformation, especially content surrounding the 2024 election, that has been disseminated through generative AI. With the rapid advances in generative AI technology, it's difficult for ordinary citizens to distinguish real information from fake. Although the order does require governmental agencies to authenticate content through watermarking, it does not regulate AI content published outside the government or the country. Closing that gap requires legislation that only Congress can pass.

Biden's executive order is sensible in that it sets a viable plan for future action on pressing AI issues. However, tangible changes must come from legislation passed through Congress. Legislators should remain aware of speculative risks while acting against the current harms we face: human impersonation, algorithmic manipulation and the spread of online misinformation, and while advocating for online privacy laws that have yet to be passed.

If AI regulation moves in the right direction, it will protect vulnerable citizens and stimulate more inclusive innovation. The U.S. must balance its response: staying wary of imagined risks while taking decisive action where substantial harm exists.

About the Contributor
Kaelin David
Kaelin David, Opinion editor
Hi! My name is Kaelin David and I am in the 12th grade, serving as the Opinion editor for The Hoofprint. In my free time, I love playing around with website design and reading literary magazines.