Article: Global Data Ethics Project

The Global Data Ethics Project (GDEP) is an ethical framework and set of principles for data practitioners. The framework is built on the five FORTS values: Fairness, Openness, Reliability, Trust, and Social Benefit. Its principles address privacy, transparency, consent, bias, diversity, and ethical imagination:
1. Consider (if not collect) informed and purposeful consent of data subjects for all projects, and discard resulting data when that consent expires.
2. Make best effort to guarantee the security of data, subjects, and algorithms to prevent unauthorized access, policy violations, tampering, or other harm or actions outside the data subjects’ consent.
3. Make best effort to protect anonymous data subjects, and any associated data, against any attempts to reverse-engineer, de-anonymize, or otherwise expose confidential information.
4. Practice responsible transparency as the default where possible, throughout the entire data lifecycle.
5. Foster diversity by making efforts to ensure inclusion of participants, representation of viewpoints and communities, and openness. The data community should be open to, welcoming of, and inclusive of people from diverse backgrounds.
6. Acknowledge and mitigate unfair bias throughout all aspects of data work.
7. Hold up datasets with clearly established provenance as the expected norm, rather than the exception (a minimal provenance-record sketch follows this list).
8. Respect the relevant tensions among all stakeholders as they relate to privacy and data ownership.
9. Take great care to communicate responsibly and accessibly.
10. Ensure that all data practitioners take responsibility for exercising ethical imagination in their work, including considering the implications of what came before and what may come after, and actively working to increase benefit and prevent harm to others.
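As one way to put principles 1 and 7 into practice, a team might attach a machine-readable provenance and consent record to each dataset and stop using the data once consent lapses. The sketch below is illustrative only, not part of the GDEP text; the field names, source, and dates are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ProvenanceRecord:
    """Hypothetical provenance-plus-consent metadata for one dataset."""
    source: str            # where the data originated (principle 7)
    collected_on: date     # when it was gathered
    consent_basis: str     # how consent was obtained (principle 1)
    consent_expires: date  # data should be discarded after this date

    def consent_valid(self, today: date) -> bool:
        """True while the data subjects' consent is still in force."""
        return today <= self.consent_expires

# Illustrative record; the source and dates are made up.
record = ProvenanceRecord(
    source="2023 community survey (opt-in)",
    collected_on=date(2023, 5, 1),
    consent_basis="informed, purpose-specific opt-in",
    consent_expires=date(2025, 5, 1),
)

if not record.consent_valid(date.today()):
    print("Consent expired: discard this data per principle 1.")
```

Keeping this record alongside the data, rather than in a separate document, makes the consent-expiry check something a pipeline can enforce automatically.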


Article: Data for Democracy

With good data, we can do great things. Data for Democracy is a worldwide community of passionate volunteers working together to promote trust and understanding in data and technology.


Article: Partnership on AI

The Partnership on AI to Benefit People and Society brings together diverse, global voices to realize the promise of artificial intelligence. It was established to study and formulate best practices on AI technologies, to advance the public's understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.


Article: Manifesto for Data Practices

As data teams, we aim to…
• Use data to improve life for our users, customers, organizations, and communities.
• Create reproducible and extensible work.
• Build teams with diverse ideas, backgrounds, and strengths.
• Prioritize the continuous collection and availability of discussions and metadata.
• Clearly identify the questions and objectives that drive each project, and use them to guide both planning and refinement.
• Be open to changing our methods and conclusions in response to new knowledge.
• Recognize and mitigate bias in ourselves and in the data we use (see the bias-check sketch after this list).
• Present our work in ways that empower others to make better-informed decisions.
• Consider carefully the ethical implications of choices we make when using data, and the impacts of our work on individuals and society.
• Respect and invite fair criticism while promoting the identification and open discussion of errors, risks, and unintended consequences of our work.
• Protect the privacy and security of individuals represented in our data.
• Help others to understand the most useful and appropriate applications of data to solve real-world problems.
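To make the bias bullet concrete, one simple diagnostic a team might run is the demographic parity gap: the spread in positive-outcome rates across groups in a dataset or a model's decisions. This is a rough sketch, not part of the manifesto; the function name, groups, and outcomes below are made up for illustration.

```python
from collections import defaultdict

def demographic_parity_gap(records):
    """Spread in positive-outcome rates across groups.

    `records` is an iterable of (group, outcome) pairs, where outcome
    is 1 for a positive decision and 0 otherwise. Returns the gap
    between the best- and worst-treated groups plus per-group rates.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy data: group A gets positive outcomes twice as often as group B.
gap, rates = demographic_parity_gap(
    [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
)
print(rates)                     # per-group rates: A = 0.67, B = 0.33
print(f"parity gap: {gap:.2f}")  # 0.33
```

A large gap does not by itself prove unfairness, since base rates can differ legitimately, but it flags where the data or the decision process deserves closer review.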


Article: The AI Initiative – Civic Debate on the Governance of AI

The AI Initiative is an initiative of The Future Society, incubated at Harvard Kennedy School and dedicated to the rise of Artificial Intelligence. Created in 2015, it gathers students, researchers, alumni, faculty, and experts from Harvard and beyond who are interested in understanding the consequences of AI's rise. Its mission is to help shape the global AI policy framework.


Article: Improving Social Responsibility of Artificial Intelligence by Using ISO 26000

The vigorous development of artificial intelligence has had a profound, long-term impact on human production and life. It is a double-edged sword: while people enjoy the good life created by new technology, they also feel its negative effects, such as infringements on privacy and new forms of inequality. The social responsibility of artificial intelligence has become a hot topic in academic circles over the past two years. This article adopts the research framework of ISO 26000, comprehensively analyzes the problems of artificial intelligence's social responsibility in theory and practice, and puts forward the authors' own thinking. It concludes that, in the age of artificial intelligence, the seven core subjects of this standard can guide efforts to enhance the social responsibility of artificial intelligence, and that adopting the international social responsibility standard ISO 26000 can ultimately help achieve the sustainable development of artificial intelligence.


Article: Built By Humans. Ruled By Computers.

In the world of humans, Brian Russell is a regular blue-collar guy. Stocky with a shaved head, black-rimmed glasses and a tightly trimmed Van Dyke, he pulls down steady hours at his job installing security systems. Every night, he drives his old green Jeep home to a freshly planted subdivision of modest ranch houses outside the squeaky-clean West Michigan town of Zeeland. Trucks moan past on the freeway out back and the dewy-sweet smell of cut grass follows him to the door. His dog, Mischief, his fiancée and their two boys greet him. All seems right with the world.

But this world – the one we can see and touch and smell – is no longer the only one that matters. Another domain, built by humans but ruled by computers, has taken shape in the past few decades: that of algorithmic decision-making. This new world is often invisible but never idle. It likely determines whether you’ll get a mortgage and how much you’ll pay for it, whether you’re considered for job opportunities, how much you pay for car insurance, how likely you are to commit a crime or mistreat your children, how often the police patrol your neighborhood. It even influences the level of prestige conferred by a U-M degree, thanks to the now-ubiquitous, algorithm-based U.S. News & World Report college rankings.

Generally, these algorithms keep a low profile. But occasionally, they collide spectacularly with humans. That’s what happened to Russell.


Article: AI researchers debate the ethics of sharing potentially harmful programs

A recent decision by research lab OpenAI to limit the release of a new algorithm has caused controversy in the AI community.
The nonprofit said it decided not to share the full version of the program, a text-generation algorithm named GPT-2, due to concerns over ‘malicious applications.’ But many AI researchers have criticized the decision, accusing the lab of exaggerating the danger posed by the work and inadvertently stoking ‘mass hysteria’ about AI in the process.
The debate has been wide-ranging and at times contentious. It even turned into a bit of a meme among AI researchers, who joked that they'd made an amazing breakthrough in the lab but that the results were too dangerous to share. More importantly, it has highlighted a number of challenges for the community as a whole, including the difficulty of communicating about new technologies with the press and the problem of balancing openness with responsible disclosure.