Article: Responsible AI Practices

The development of AI is creating new opportunities to improve the lives of people around the world, from business to healthcare to education. It is also raising new questions about the best way to build fairness, interpretability, privacy, and security into these systems. These questions are far from solved, and in fact are active areas of research and development. Google is committed to making progress in the responsible development of AI and to sharing knowledge, research, tools, datasets, and other resources with the larger community. Below we share some of our current work and recommended practices. As with all of our research, we will take our latest findings into account, work to incorporate them as appropriate, and adapt as we learn more over time.


Article: When Automation Bites Back

‘The pilots fought continuously until the end of the flight,’ said Capt. Nurcahyo Utomo, the head of the investigation into Lion Air Flight 610, which crashed on October 29, 2018, killing all 189 people aboard. The analysis of the black boxes revealed that the Boeing 737’s nose was repeatedly forced down, apparently by an automatic system receiving incorrect sensor readings. During the 10 minutes preceding the tragedy, the pilots tried 24 times to manually pull up the nose of the plane, struggling against a malfunctioning anti-stall system that they did not know how to disengage on that specific version of the plane. Such dramatic scenes of humans struggling with a stubborn automated system are familiar from pop culture. In the famous scene of the 1968 science-fiction film ‘2001: A Space Odyssey’, the astronaut Dave asks HAL (Heuristically programmed ALgorithmic computer) to open a pod bay door on the spacecraft, to which HAL responds repeatedly, ‘I’m sorry, Dave, I’m afraid I can’t do that’.


Article: Chinese Interests Take a Big Seat at the AI Governance Table

Last summer the Chinese government released its ambitious New Generation Artificial Intelligence Development Plan (AIDP), which set the eye-catching target of national leadership in a variety of AI fields by 2030. The plan matters not only because of what it says about China’s technological ambitions, but also for its plans to shape AI governance and policy. Part of the plan’s approach is to devote considerable effort to standards-setting processes in AI-driven sectors. This means writing guidelines not only for key technologies and interoperability, but also for the ethical and security issues that arise across an AI-enabled ecosystem, from algorithmic transparency to liability, bias, and privacy. This year Chinese organizations took a major step toward putting these aspirations into action by releasing an in-depth white paper on AI standards in January and hosting a major international AI standards meeting in Beijing in April. These developments mark Beijing’s first stake in the ground as a leader in developing AI policy and in working with international bodies, even as many governments and companies around the world grapple with uncharted territory in writing the rules on AI. China is eager to participate in international standards-setting bodies on the question of whether and how to set standards around controversial aspects of AI, such as algorithmic bias and transparency in algorithmic decision making.


Article: National Strategy for Artificial Intelligence – Discussion Paper

Artificial Intelligence (AI) is poised to disrupt our world. With intelligent machines enabling high-level cognitive processes like thinking, perceiving, learning, problem solving and decision making, coupled with advances in data collection and aggregation, analytics and computer processing power, AI presents opportunities to complement and supplement human intelligence and enrich the way people live and work.
This strategy document is premised on the proposition that India, given its strengths and characteristics, has the potential to position itself among the leaders on the global AI map – with a unique brand of #AIforAll. The approach in this paper focuses on how India can leverage these transformative technologies to ensure social and inclusive growth, in line with the development philosophy of the government. In addition, India should strive to replicate these solutions in other similarly placed developing countries.


Article: Ethics Commission Automated and Connected Driving

Throughout the world, mobility is becoming increasingly shaped by the digital revolution. The ‘automation’ of private transport operating on public roads is taken to mean technological driving aids that relieve the pressure on drivers and assist or even replace them in part or in whole. The partial automation of driving is already standard equipment in new vehicles. Conditionally and highly automated systems which, without human intervention, can autonomously change lanes, brake and steer are available or about to go into mass production. In both Germany and the US, there are test tracks on which conditionally automated vehicles can operate. For local public transport, driverless robot taxis or buses are being developed and trialled.
Today, processors are already available or under development that are able, by means of appropriate sensors, to detect in real time the traffic situation in the immediate surroundings of a car, determine the car’s own position on appropriate mapping material, and dynamically plan and modify the car’s route in line with the traffic conditions. As the ‘perception’ of the vehicle’s surroundings becomes increasingly refined, road users, obstacles and hazardous situations are likely to be distinguished ever more reliably, making it likely that road safety can be significantly enhanced. Indeed, it cannot be ruled out that, at the end of this development, there will be motor vehicles that are inherently safe, in other words that will never be involved in an accident under any circumstances. Nevertheless, at the level of what is technologically possible today, and given the realities of heterogeneous and non-connected road traffic, it will not be possible to prevent accidents completely. This makes it essential that decisions be taken when programming the software of conditionally and highly automated driving systems.
These technological developments are forcing government and society to reflect on the emerging changes. The decision that has to be taken is whether the licensing of automated driving systems is ethically justifiable or possibly even imperative. If these systems are licensed – and it is already apparent that this is happening at international level – everything hinges on the conditions in which they are used and the way in which they are designed. At the fundamental level, it all comes down to the following questions. How much dependence on technologically complex systems – which in the future will be based on artificial intelligence, possibly with machine-learning capabilities – are we willing to accept in order to achieve, in return, more safety, mobility and convenience? What precautions need to be taken to ensure controllability, transparency and data autonomy? And what technological development guidelines are required to ensure that we do not blur the contours of a human society that places individuals, their freedom of development, their physical and intellectual integrity and their entitlement to social respect at the heart of its legal regime?


Article: AI Policy 101: An Introduction to the 10 Key Aspects of AI Policy

What in the world is AI policy? First, a definition: AI policy comprises the public policies that maximize the benefits of AI while minimizing its potential costs and risks. From this perspective, the purpose of AI policy is two-fold. On the one hand, governments should invest in the development and adoption of AI to secure its many benefits for the economy and society. They can do this by investing in fundamental and applied research, the development of specialized AI and ‘AI + X’ talent, digital infrastructure and related technologies, and programs to help the private and public sectors adopt and apply new AI technologies. On the other hand, governments also need to respond to the economic and societal challenges brought on by advances in AI. Automation, algorithmic bias, data exploitation, and income inequality are just a few of the many challenges for which governments around the world need to develop policy solutions. These policies include investments in skills development, the creation of new regulations and standards, and targeted efforts to remove bias from AI algorithms and data sets.


Article: Data Science and Ethics – Why Companies Need a new CEO (Chief Ethics Officer)

We live in a time when individuals and organizations can store and analyze massive amounts of information, with over 2.5 quintillion records of data created every day. People are being defined by what they search for on the internet, what they eat, where they travel, where they hold membership accounts, and so on. Organizations and individuals can leverage this data to uncover powerful findings. Yet data scientists and analysts often get lost in the techniques and methods of the trade. In doing so, they can forget to ask important questions such as: Who will be affected by the work? How are we ensuring that, by doing ‘good’ for one group, we are not inadvertently harming another? Who actually owns the data that we are working with? What is the protocol for using that data to make business decisions?


Article: An Open Standard for Ethical Enterprise-Grade AI

Just like the invention of the wheel, the printing press, or the computer, Artificial Intelligence (AI) will radically reshape the way enterprises work. The revolution is so sweeping that every industry, without exception, will be affected: transportation, e-commerce, education, healthcare, energy, insurance, … The standard has seven sections. The full details of each are explained on GitHub, but here is a summary (an illustrative sketch follows the list):
1. General Information
2. Initial Data
3. Data Preparation
4. Feature Engineering
5. Training Data Audit
6. Model Description
7. Model Audit
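To make the structure concrete, here is a minimal Python sketch of how the seven-section standard could be represented as a machine-readable checklist. Only the section names come from the standard itself; the DisclosureReport class, its fields, and the sample entries are hypothetical illustrations, not part of the published specification.

```python
from dataclasses import dataclass, field

# The seven section names are taken from the standard; everything
# below them is an illustrative assumption, not the spec itself.
SECTIONS = [
    "General Information",
    "Initial Data",
    "Data Preparation",
    "Feature Engineering",
    "Training Data Audit",
    "Model Description",
    "Model Audit",
]

@dataclass
class DisclosureReport:
    """Hypothetical disclosure document covering all seven sections."""
    entries: dict = field(default_factory=dict)  # section name -> free-text answer

    def missing_sections(self) -> list:
        """Return the sections that have not yet been documented."""
        return [s for s in SECTIONS if not self.entries.get(s, "").strip()]

    def is_complete(self) -> bool:
        """A report is complete only when every section is filled in."""
        return not self.missing_sections()

# Usage sketch: fill in sections as the model is built, then check completeness.
report = DisclosureReport()
report.entries["General Information"] = "Churn model v2, owned by the data team."
report.entries["Initial Data"] = "CRM exports, Jan-Dec 2017, 1.2M rows."
print(report.is_complete())       # False: five sections remain undocumented
print(report.missing_sections())  # lists the sections still to be written
```

The point of such a representation is that completeness becomes checkable by tooling rather than by convention: a CI step or review gate could refuse to ship a model whose report still has empty sections.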