AI is getting cheaper, making it easier to misuse
Published 12:00 am Wednesday, February 21, 2018
SAN FRANCISCO — A Silicon Valley startup recently unveiled a drone that can set a course entirely on its own. A smartphone app allows the user to tell the drone to follow someone. Once the drone starts tracking, its subject will find it hard to shake.
The drone is meant to be a fun gadget, but it is not unreasonable to find this automated bloodhound a little unnerving.
On Tuesday, a group of artificial intelligence researchers and policymakers released a report that described how rapidly evolving and more affordable AI technologies could be used for malicious purposes.
Made by a company called Skydio, the tracking drone costs $2,499. It uses technological building blocks available to anyone: ordinary cameras, open-source software and low-cost computer chips.
In time, putting these pieces together will become increasingly easy and inexpensive.
“This stuff is getting more available in every sense,” said one of Skydio’s founders, Adam Bry. These same technologies are bringing a new level of autonomy to cars, warehouse robots, security cameras and a wide range of internet services.
At times, new AI systems exhibit strange and unexpected behavior because the way they learn from large amounts of data is not entirely understood. That makes them vulnerable to manipulation; today’s computer-vision algorithms, for example, can be fooled into seeing things that are not there.
Miscreants could exploit these blind spots to circumvent security cameras or compromise a driverless car.
Researchers are developing AI systems that can find and exploit security holes in other systems, said Paul Scharre, an author of the report. Such systems cut both ways: they can be used for defense or for offense.
Automated techniques will make it easier to carry out attacks that now require extensive human labor, including “spear phishing,” which involves gathering and exploiting personal data of victims. In coming years, the report said, machines will be more adept at collecting and deploying this data on their own.
AI systems are increasingly adept at generating believable audio and video on their own. This will make it easier for bad actors to spread misinformation, the report said.