
Google’s AI principles have been published for a year, and Jeff Dean reports the results.

Wormworm 2019/07/23


Summary

How can a technology company unite its mission, its values around technology, and technological progress?


Following Jeff Dean's "To Be Helpful" keynote at the Google I/O developer conference in May, the head of Google AI came to Tokyo in July to deliver a "results report" themed "Solve with AI" to media and developers from across the Asia-Pacific region.

The June interval between the two talks marked the first anniversary of Google's publication of "AI at Google: Our Principles". Released in the early summer of 2018, this set of principles addressed the ethical concerns that have surrounded artificial intelligence in recent years, including:


· Be socially beneficial


· Avoid creating or reinforcing unfair bias


· Be built and tested for safety


· Be accountable to people


· Incorporate privacy design principles


· Uphold high standards of scientific excellence


· Be made available for uses that accord with these principles


A year later, the representative applications embodying these principles were grouped under a Google program called AI for Social Good. The logic behind it is how a technology company can make its mission, its values around technology, and technological progress one and the same.


Federated learning

Federated learning, announced some time ago, was re-emphasized by Jeff Dean in this talk, with its focus shifting from efficiency to data security.


This approach differs from the traditional model of centralized learning over pooled data. Google proposed it in 2016 and open-sourced TensorFlow Federated this year. Its advantages are efficient collaborative learning across many devices and compute nodes, and protection of on-device data that would otherwise be exposed in large-scale data transfers. In federated learning, raw data never needs to be collected from the device: users download a ready-made model to their phones, train it locally, and upload only encrypted model updates after each iteration. The cycle then repeats, maximizing both efficiency and security.
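The cycle described above can be sketched in a few lines. This is a toy simulation of the federated-averaging idea in plain NumPy, not Google's TensorFlow Federated implementation: two simulated "clients" each train a linear model on private data, and only their weight updates reach the "server", which averages them.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Simulate one client: train a linear model locally by gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w  # only the updated weights leave the "device", never the data

def federated_average(client_weights, client_sizes):
    """Server step: average client updates, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two simulated clients, each holding private samples of y = 3 * x.
rng = np.random.default_rng(0)
clients = []
for _ in range(2):
    X = rng.uniform(0.0, 1.0, size=(20, 1))
    clients.append((X, X @ np.array([3.0])))

global_w = np.zeros(1)
for _ in range(50):  # 50 communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])

print(round(float(global_w[0]), 2))  # converges toward the true weight 3.0
```

The design point is that `federated_average` sees only weights, never `X` or `y`; in a real deployment the uploads would additionally be encrypted and aggregated securely.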


Medical and health

In health care, Google AI's representative applications include lung cancer screening, breast cancer detection, and diabetic retinopathy screening.


Lung cancer has consistently had the highest death rate among all cancers, reaching 3% worldwide. Under traditional medical practice, 80% of lung cancer cases are not detected at an early stage, so the most urgent need is early screening. In current clinical application, the artificial intelligence solution has increased initial detection by 5% while reducing false-positive misdiagnoses by 11%.


Traditional breast cancer screening means searching a gigapixel pathology slide for traces of cancer cells spreading into lymph tissue. AI models in this field can reach a 22% detection rate, but unlike in lung cancer screening, they also raise the rate of false-positive misdiagnoses. The direction now encouraged is therefore to combine artificial intelligence with doctors' manual examination so that the two complement each other.


At present, more than 415 million diabetes patients worldwide are at risk of diabetic retinopathy, which can lead directly to blindness, and medically underserved regions often lack even the personnel for initial screening. Working with external partners, Google has built a visual recognition system for diabetic retinopathy that this year reached detection accuracy on par with ophthalmologists. The system has entered clinical trials in India and Thailand.


Environmental protection

Through sound recognition and visual recognition, Google AI has reached the practical application stage in protecting endangered marine species, monitoring illegal rainforest logging, recognizing waste for recycling, and identifying agricultural pests.


The National Oceanic and Atmospheric Administration (NOAA) has accumulated 19 years of underwater audio recordings. In cooperation with Google, NOAA can now pick out the calls of endangered humpback whales from the complex, ever-changing soundscape of the ocean. A neural network that automatically recognizes whale calls has been used to plot dynamic maps of humpback whale activity, making tracking and targeted protection of specific marine species possible.
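Google's actual system is a neural network trained on spectrograms, but the underlying idea of picking a call's frequency signature out of background noise can be illustrated with a toy detector. The sketch below (NumPy only; all names and thresholds are illustrative, not NOAA's) computes a magnitude spectrogram and flags frames with strong energy in a chosen frequency band:

```python
import numpy as np

def spectrogram(signal, frame=256, hop=128):
    """Magnitude STFT: the standard input representation for audio classifiers."""
    frames = [signal[i:i + frame] * np.hanning(frame)
              for i in range(0, len(signal) - frame, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))

def band_energy_detector(signal, rate, lo, hi, threshold, frame=256):
    """Toy 'call' detector: any frame with strong energy in [lo, hi] Hz fires."""
    spec = spectrogram(signal, frame=frame)
    freqs = np.fft.rfftfreq(frame, d=1.0 / rate)
    band = (freqs >= lo) & (freqs <= hi)
    return bool((spec[:, band].sum(axis=1) > threshold).any())

rate = 4000
t = np.arange(rate) / rate                       # one second of audio
call = 0.8 * np.sin(2 * np.pi * 300 * t)         # synthetic 300 Hz "call"
noise = 0.05 * np.random.default_rng(1).standard_normal(rate)

print(band_energy_detector(noise + call, rate, 250, 350, 10.0))  # True
print(band_energy_detector(noise, rate, 250, 350, 10.0))         # False
```

A neural network replaces the fixed band-and-threshold rule with learned filters over the same spectrogram input, which is what lets it cope with the variable, overlapping sounds of a real ocean recording.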


On land, Rainforest Connection mounts Android phones in the treetops of rainforests in South America and Southeast Asia to build a sound collection and monitoring network, using TensorFlow to recognize the sounds of chainsaws and logging trucks in real time. The area of rainforest protected by this program now exceeds 2,000 square kilometers.


Echoing the waste-sorting and recycling rules recently introduced in cities such as Shanghai and Beijing, Indonesia, the world's second-largest source of plastic waste pollution, has begun using phone cameras backed by Google AI to identify types of plastic waste. Beyond the category itself, the system can also show the recycling and reuse value of each type of plastic waste.


Helping people with disabilities

People with hearing or speech impairments account for a significant share of the world's disabled population. For the hearing-impaired, speech recognition can transcribe interpersonal conversation into text in real time, letting them take part in everyday communication. It can also transcribe the sounds of daily life, such as cheers at a sports match, car horns on the road, or fireworks bursting overhead, into real-time text, giving this group as unimpeded a perception of and interaction with the real world as possible. The app currently supports more than 70 languages.


In contrast, for people whose speech is impaired by neurological conditions such as stroke, ALS, or Parkinson's disease, Google AI has built sound and vision models that can recognize slurred pronunciation, gestures, and even blinks, and turn them into real-time text or even synthesized speech. This is a more efficient and convenient solution than the interaction system Dr. Stephen Hawking used, and it may eventually make easy communication possible for everyone with a speech disability.


These AI applications, already in practical use, are the best answer to the global tech industry's wavering over what technological progress means: genuine technological progress can only express itself through values about technology. Most disputes over technical paths and commercialization arise from separating, or even opposing, the two.


As Jeff Dean put it: in this era, machines can already see, hear, speak, and understand. But how to look, what to listen to, for whom, and understanding whom are the ultimate questions that must be answered again and again.