SAN FRANCISCO — Google pledged Thursday that it will not use artificial intelligence in applications related to weapons, in surveillance that violates international norms, or in ways that go against human rights. It planted its ethical flag on the use of AI just days after confirming it would not renew a contract with the U.S. military to use its AI technology to analyze drone footage.
The principles, spelled out by Google CEO Sundar Pichai in a blog post, commit the company to building AI applications that are "socially beneficial," that avoid creating or reinforcing bias and that are accountable to people.
The search giant had been formulating a patchwork of policies around these ethical questions for years, but finally put them in writing. Aside from making the principles public, Pichai didn't specify how Google or its parent Alphabet would be accountable for conforming to them. He also said Google would continue working with governments and the military on noncombat applications involving such things as veterans' health care and search and rescue.
"This approach is consistent with the values laid out in our original founders' letter back in 2004," Pichai wrote, citing the document in which Larry Page and Sergey Brin set out their vision for the company to "organize the world's information and make it universally accessible and useful."
Pichai said the latest principles help it take a long-term perspective "even if it means making short-term trade-offs."
The document, which also enshrines "relevant explanations" of how AI systems work, lays the groundwork for the rollout of Duplex, a human-sounding digital concierge that was shown off booking appointments with human receptionists at a Google developers conference in May.
Some ethicists were concerned that call recipients could be duped into thinking the robot was human. Google has said Duplex will identify itself so that wouldn't happen.
Other companies leading the race to develop AI are also grappling with ethical issues, including Apple, Amazon, Facebook, IBM and Microsoft, which have formed a group with Google called the Partnership on AI.
Making sure the public is involved in the conversations is important, said Terah Lyons, director of the partnership.
At an MIT technology conference on Tuesday, Microsoft President Brad Smith even welcomed government regulation, saying something "as fundamentally impactful" as AI shouldn't be left to developers or the private sector on its own.
Google's Project Maven with the U.S. Defense Department came under fire from company employees concerned about the direction it was taking the company.
A company executive told employees this week the program would not be renewed after it expires at the end of 2019. Google expects to have talks with the Pentagon over how it can fulfill its contract obligations without violating the principles outlined Thursday.
Peter Asaro, vice chairman of the International Committee for Robot Arms Control, said this week that Google's backing off from the project was good news because it slows down a potential AI arms race over autonomous weapons systems. What's more, maintaining user trust is fundamental to Google's business model, which relies on gathering vast amounts of user data, he said.
"They're a company that's very much aware of their image in the public consciousness," he said. "They want people to trust them and trust them with their data."