Google’s Principles for AI Development in a Nutshell


I keep preaching that AI is desirable but requires good rules and regulations to follow. Having such laws in place would allow researchers and developers to account for adherence by implementing them very early in the system, as a core function, with less risk of their being bypassed or misinterpreted.

As a pioneer of technology, Google is also making great efforts in AI research and in applying AI to solve existing problems. Their principles are nothing like Asimov's Laws for robotics, but I believe they also make a lot of sense in this particular context.

The 7 AI principles of Google

You could argue about their applicability, but the most important thing is that a company has defined principles for its AI development, in order to make sure that nothing gets out of hand. Here's what they came up with, as per a blog post by Google CEO Sundar Pichai:

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles.

Some might argue that segments of these AI principles are only there to account for political correctness, but we are talking about machines interacting with living beings, so it is certainly worth designing AI to dismiss any kind of racism, sexism, and all other kinds of “isms”.


What’s out of scope here?

Google has also clearly defined what they are not going to consider working on by establishing an “out of scope” list. Here are the items that will not be pursued during R&D:

  • Tech that could cause harm to living beings (leaving open the option of causing harm to “material” if the benefits outweigh the risks)
  • Weapons or tech with the principal purpose to cause injury (or worse) to people
  • Tech that mines data and processes information for surveillance in a way that would violate internationally accepted norms
  • Any type of gizmo, gadget, device, or service that violates or could violate international law or human rights

Now that this has been said and written down, we can only hope that Google’s parent company, Alphabet, will also sign on to these principles, or they won’t be of any value whatsoever. If a valuable technology is “sold” to another legal entity just to bypass these principles, it would all be for naught.

Photo credit: The feature image “good game” was taken by Erik Lucatero. The image “future horse” was taken by Emile Guillemot.
Source: AI at Google: our principles / Kristen P. Jones, Isaac E. Sabat, Eden B. King, Afra Ahmad, Tracy C. McCausland, Tiffani Chen (Wiley Online Library)

Christopher Isak
Hi there and thanks for reading my article! I'm Chris the founder of TechAcute. I write about technology news and share experiences from my life in the enterprise world. Drop by on Twitter and say 'hi' sometime. ;)