Fri. Apr 26th, 2024

A new Artificial Intelligence manifesto was laid down by Google CEO Sundar Pichai earlier today. The manifesto comes at a time when the company has faced questions over its military AI contracts. The new set of principles also answers concerns raised last year by Sergey Brin, who discussed the impact of Machine Learning and AI in Alphabet's 2017 Founders' Letter, along with Google's expectations in developing the new technology. Now, a year later, the tech giant has revealed its objectives and limits in the field of AI.

Google’s rules for Artificial Intelligence:

Nowadays, Machine Learning and Artificial Intelligence present companies with a difficult moral situation. Google knows that AI can solve some serious problems more easily than other approaches. However, it also recognizes the specific questions and concerns that many people have about the technology. To make its position easy for users to understand, the company has drawn up a seven-point set of principles. These principles will guide the company as it progresses in the field of AI, and Google says they are practical guidelines it can apply to both engineering and business decisions.

Now, let's take a look at Google's seven-point AI principles.

  1. The technology must be socially beneficial.
  2. The company will avoid creating or reinforcing unfair bias.
  3. It will be built and tested for safety.
  4. The technology will be accountable to people.
  5. It will incorporate privacy design principles.
  6. It will uphold high standards of scientific excellence.
  7. The technology will be made available for uses that accord with these principles. The following factors will also be considered:
    1. What is the primary use?
    2. Is it unique?
    3. Will it have a significant impact?
    4. What will Google's involvement be?

Apart from these new rules, the company has also laid out a set of AI use cases it will not pursue. According to Google, it will not develop technologies that cause overall harm to people, nor will it apply AI to technologies that enable surveillance or otherwise violate the principles of international law or human rights. There is, however, an apparent contradiction here: the company will not develop AI for weapons, but it will continue to work with governments and the military in other fields, mainly cybersecurity, healthcare, and training.
