The Challenge in Regulating AI

Technology Policy Brief #85 | By: Steve Piazza | April 28, 2023


Policy Issue Summary

Because artificial intelligence (AI) technology is relatively new, policy governing its appropriate use is still evolving. Domestically and abroad, actions have been taken to minimize the harm that could ensue from possible misuse.

For example, last year 17 states introduced legislation, and three more passed laws, in response to growing concerns over the use of AI. Studies are also underway in a handful of other states to explore the issue. To date, though, no sweeping federal legislation exists.

Meanwhile, in September 2022, the United States government barred two U.S. companies, Nvidia and AMD, from exporting select AI chips to China. The stated reasoning was to prevent China from getting ahead in the AI race and from using the technology for military purposes.

Despite this minimal legal action to ward off threats, the numerous ethical dilemmas being raised remain in the shadows. Such questions are crucial to the effort to capture the technology's extremely valuable advantages while keeping severe consequences at bay.

Policy Analysis

Each time a new technology is introduced, whether nuclear energy or cloning, arguments for or against it become harder to formulate. The arguments themselves, which frequently emanate from unverified news, often fail to reflect the underlying ethical dilemmas the breakthrough presents. And when it comes to passing laws or crafting public policy, the advantages and consequences of the technology are becoming less and less tangible.

The use of AI is rapidly influencing nearly every industry, including health, education, transportation, and finance. The benefits of this new technology are many: it can help spot diseases early, deliver customized learning opportunities and immediate feedback to students, detect hazardous driving conditions and regulate emissions in cars, and alert commercial institutions and their clients when fraud is occurring. Such benefits justify AI's existence; it is hard to argue against technologies that save lives and secure property.

But the benefits can come with a price. When AI devices replace humans who traditionally perform tasks requiring judgment, questions about integrity and objectivity arise. AI can easily create a false sense of security. 

Michael Sandel, the Anne T. and Robert M. Bass Professor of Government at Harvard, says AI "not only replicates human biases, it confers on these biases a kind of scientific credibility. It makes it seem that these predictions and judgments have an objective status." Said another way, AI may replace human tasks but not human judgment.

The technology is advancing so quickly, and is so poorly understood, that protections against machines making existential decisions are left in the hands of an industry too in awe of its own creations to act. Underprepared public officials, under tremendous and growing pressure, are unable to do anything about it.

It’s one thing to agree that safeguards are necessary, and another to put them into place and enforce them. Even laws that have served well can end up working against their own intent. For example, when artificially developed but extremely enhanced methods are used to generate selection criteria in lending, housing, and insurance, the protections of the Americans with Disabilities Act of 1990 (ADA) may be undermined, discriminating against those traditionally underserved and effectively denying them rights and services.

Government agencies like the Federal Trade Commission (FTC) have expressed a commitment to preventing businesses from using AI to mislead and take advantage of the unsuspecting. But like those in the AI world who do not understand what their inventions might do in the long run, the FTC has taken to using phrases such as "advised companies," "emphasize the use of," and "offer important lessons," suggesting that it is relying on the industries themselves to do the right thing.

At the moment, standards developed by the National Institute of Standards and Technology (NIST) do exist. NIST created the Artificial Intelligence Risk Management Framework (AI RMF), which, in effect, provides a "voluntary" framework to "help foster the responsible design, development, deployment, and use of AI systems over time."

That may be something, but it still leaves us with more effective AI laws waiting to be written and practical methods by which to enforce them. Meanwhile, ethicists will be busy addressing AI dilemmas as they emerge, and they will find themselves revising both questions and answers frequently just to keep up.

The list of benefits and concerns will continue to develop, as will AI. And like any tool, AI technologies must be used by those who are adept and judicious. It might even help to employ AI in developing answers to our legal and moral questions, as long as we are the ones asking them and judging the answers.

Engagement Resources

For an example of efforts made towards establishing worldwide principles for the ethical use of AI, read UNESCO’s Recommendation on the Ethics of Artificial Intelligence: https://unesdoc.unesco.org/ark:/48223/pf0000381137

Here are two extensive lists of organizations promoting the responsible use of AI (limited overlap): 

https://alltechishuman.org/responsible-ai-knowledge-hub/#ri-orgs

https://www.aiethicist.org/ai-organizations

The Center for AI and Digital Policy (CAIDP) examines the policies of countries around the world for their commitment to using AI responsibly and reports on their progress. This map is a good place to see how countries compare, based on metrics from the Artificial Intelligence and Democratic Values 2022 Index developed by CAIDP: https://www.caidp.org/reports/aidv-2022/aidv-maps/
