Last month, Google rebranded Google Research as Google AI to reflect its commitment to artificial intelligence. Today, the company published a blog post in which CEO Sundar Pichai writes that AI "can't solve every problem, but its potential to improve our lives is profound." The post also lists seven objectives for AI applications and their use cases.
AI is already being used to help people tackle urgent problems: a pair of high school students is building AI-powered sensors to predict wildfire risk, farmers are using it to monitor the health of their herds, and doctors are starting to use AI to help diagnose cancer and prevent blindness. These clear benefits are why Google invests heavily in AI research and development and makes AI technologies widely available to others via its tools and open-source code. At the same time, Google recognizes that such powerful technology raises equally powerful questions about its use. "How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right. So today, we're announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions," Pichai added.
Objectives for AI applications
- Be socially beneficial
- Avoid creating or reinforcing unfair bias
- Be built and tested for safety
- Be accountable to people
- Incorporate privacy design principles
- Uphold high standards of scientific excellence
- Be made available for uses that accord with these principles, based on the following factors:
  - Primary purpose and use
  - Nature and uniqueness
  - Nature of Google's involvement
In addition to the above objectives, Google will not design or deploy AI in the following application areas:
- Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
- Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
- Technologies that gather or use information for surveillance violating internationally accepted norms.
- Technologies whose purpose contravenes widely accepted principles of international law and human rights.