Researchers disagree on when artificial intelligence that displays something like human understanding might arrive. But the Obama administration isn't waiting to find out. The White House says the government needs to start thinking about how to regulate and use the powerful technology while it is still dependent on humans.
"The public should have an accurate mental model of what we mean when we say artificial intelligence," says Ryan Calo, who teaches law at the University of Washington. Calo spoke last week at the first of four workshops the White House is hosting this summer to examine how to address an increasingly AI-powered world.
Although scholars and policymakers agree that Washington has a role to play here, it isn't clear what the path to that policy looks like, even as pressing topics accumulate. They include deciding when and how self-driving cars take to American freeways and examining how bias permeates algorithms.
"One thing we know for sure is that AI is creating policy challenges already, such as how to make sure the technology remains safe, controllable, and predictable, even as it gets much more complex and smarter," said Ed Felten, the deputy US chief of science and technology policy leading the White House's summer of AI research. "Some of these issues will become more challenging over time as the technology progresses, so we'll need to keep upping our game."
AI, Still Puppeteered By People
Although artificial intelligence already exceeds human abilities in some areas (Google's AlphaGo repeatedly beat the world's best Go player), each system's applications remain narrow, and reliant upon humans. "Intelligence and independence are two very different things," says Oren Etzioni, the director of the nonprofit Allen Institute for Artificial Intelligence and a speaker at Tuesday's workshop. "In people, intelligence and freedom go hand in hand, but in computers that's not at all the case," he said.
Entire teams of people who have spent years examining the technology painstakingly build and manage the smartest AI systems. As Etzioni notes, AlphaGo can't play its next round until a person pushes a button. But it's human fallibility at the level of input and design that makes scholars and policy experts anxious. For machines to learn, they must be fed massive sets of data. And it's humans, with all their inherent flaws, who are doing the feeding.
Feeding the Machines
A recent White House report outlined the discriminatory potential of big data. To make sense of data, someone must categorize and profile it. Technologists and designers could be feeding existing racism and structural unfairness into how the AI thinks.
This is not an academic issue. Google's ad-delivery algorithm sent more ads for higher-paying jobs to men than to women. And ProPublica recently reported that judges who made sentencing and parole decisions relied upon AI systems shown to be racially biased in making risk assessments.
"The journalists found that there was this real disparity between African Americans who were being labeled as potential recidivists versus white people," said Microsoft researcher Kate Crawford. "This was a system that was building bias into its very design, but we can't see how it works. The system is proprietary. They haven't shared the data. We don't know why the system was getting these results."
If AI will determine things like who gets a mortgage, a job, or parole, Crawford says, it will be increasingly important to apply some level of accountability to the data fed into these systems to ensure it is accurate.
How The Government Can Step In
Artificial intelligence is used for more than life choices and judicial outcomes. It's also applied to make split-second decisions about how, say, an autonomous car avoids a collision. The problem with trying to regulate these technologies is that they're still being developed, says Bryant Walker Smith, a law professor at the University of South Carolina and one of the nation's leading experts on self-driving cars.
Any kind of design requirements this early on could hinder the development of a safer, more responsible machine, Smith says. That puts the onus on creators of autonomous vehicles to make the public safety case themselves.
Meanwhile, the government is already wrestling with how to regulate and oversee other forms of AI already in use, from drones to cancer-detection analytics. The White House's Office of Science and Technology Policy is bringing several agencies together to craft an approach based on evidence, not anxiety. The issues the government will have to consider range from what it will be able to buy and under what terms, to funding research into making AI safer.
Still, even as the government plays catch-up with technology already at work in the world, it's worth remembering that AI remains nascent. To regulate AI in the future, it makes sense to lay the foundation while humans are still at the wheel.