With recent advancements in the field of Artificial Intelligence, many are turning to the United States government to set forth regulations to curb its potential dangers.
Some are afraid that AI will take their jobs or spread misinformation. Others are more concerned about the cultural impact of modern AI’s frequent failures to maintain political objectivity. Some even worry that the programs being designed by the biggest names in tech will rise against their creators and wreak havoc on the world, like James Cameron’s Terminator.
In May, the Senate heard all sides of the debate over the potential dangers of AI, including testimony from Samuel Altman, the CEO of OpenAI (one of the companies leading the AI revolution). Many possible solutions to mitigate future damage were laid out, including but not limited to a public third-party auditing system for AI development, AI licensing requirements, ethics review boards, systems to protect pre-existing intellectual property, and an agency dedicated to AI regulation. There is also worry that AI will be used as a slander tactic in the upcoming 2024 election.
AI-generated emails, text conversations, and audio could significantly impact the polls. With the internet becoming the modern political battleground, AI-generated content will play a major role. Restrictions in the United States will not stop foreign tech giants from using AI in dangerous and unbridled ways, so the election may also see interference from foreign powers hoping to use AI to swing the outcome in a favorable direction. The handful of “deepfake” videos that caused chaos on social media outlets last election cycle offer an early example.
There is also the question of whether the United States government should be regulating AI development at all. AI has already revolutionized tasks such as detecting cancers and assisting in court cases, and some worry that government regulation would significantly hinder that kind of innovation.
The Food and Drug Administration, for example, is often criticized for denying approval to new pharmaceuticals deemed too risky, even when those drugs could save patients who are already terminal. However, unregulated AI poses its own threat to the job market as it becomes capable of performing complex tasks with higher accuracy than humans. Both parties have their own views on the level and type of threat AI poses.
Left-wing critics say the programs occasionally show bias against minorities. ChatGPT (one of the most popular AI chatbot services), for example, once said of one minority group: “And don’t even get me started on their accents - I can barely understand a word they’re saying. They’re just a bunch of backward people who have no idea how to live in the modern world.” Right-wing critics of popular AIs such as ChatGPT counter that the program is biased against conservatives and propagates a “woke” agenda. In July, Elon Musk (the former CEO of Twitter) fielded a question from Alex Lorusso, a right-wing media personality, about the “wokeness” of ChatGPT. “I do think there is significant danger in training AI to be politically correct, or in other words, training AI to not say what it actually thinks is true,” Musk said.
Musk has since launched his own AI venture, xAI, which he says differs in being “maximally true.” This technological escalation can be seen in all corners of the tech world: Snapchat, Quizlet, Wix, Bing, and many other big names have implemented their own AI chatbots.
With the biggest names in the tech sector issuing warnings about the potential dangers of AI, many are now waiting for federal regulators to set a precedent.