A recent poll found that most Americans think algorithms are unfair. Unfortunately, the poll was itself biased and an example of the very phenomenon it decries. All around us, algorithms are invisibly at work. They’re recommending music and surfacing news, finding cancerous tumors, and making self-driving cars a reality. But do people trust them? Not really, according to a Pew Research Center survey taken last year. When asked whether computer programs will always reflect the biases of their designers, 58 percent of respondents thought they would. This finding illustrates a serious tension between computing technology, whose influence on people’s lives is only expected to grow, and the people affected by it.
Better information on workers’ skills attainment, employers’ skills needs, and educational institutions’ skills-building programs is an essential element across all of these focus areas. The AWPAB’s Data Transparency working group has identified interoperable learning records (ILRs) as a novel and technically achievable way to communicate skills among workers, employers, and education and training institutions.
Five billion dollars. That’s the apparent size of Facebook’s latest fine for violating data privacy. While many believe the sum is simply a slap on the wrist for a behemoth like Facebook, it’s still the largest amount the Federal Trade Commission has ever levied against a technology company. Facebook is clearly still reeling from Cambridge Analytica, after which trust in the company dropped 51%, searches for ‘delete Facebook’ reached five-year highs, and Facebook’s stock dropped 20%. While incumbents like Facebook are struggling with their data, startups in highly regulated, ‘Third Wave’ industries can take advantage by using a data strategy one would least expect: ethics. Beyond complying with regulations, startups that embrace ethics look out for their customers’ best interests, cultivate long-term trust – and avoid billion-dollar fines. To weave ethics into the very fabric of their business strategies and tech systems, startups should adopt ‘agile’ data governance systems. Often combining law and technology, these systems will become a key weapon for data-centric Third Wave startups seeking to beat incumbents in their fields.
AI agents are increasingly deployed to make automated decisions that affect our lives on a daily basis. It is imperative to ensure that these systems embed ethical principles and respect human values. We focus on how we can attest to whether AI agents treat users fairly, without discriminating against particular individuals or groups through biases in language. In particular, we discuss human unconscious biases, how they are embedded in language, and how AI systems inherit those biases by learning from and processing human language. We then outline a roadmap for future research to better understand and attest to problematic AI biases derived from language.
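One concrete way researchers quantify bias inherited from language is an association score in the style of the Word Embedding Association Test (WEAT): measure whether a word’s learned vector sits closer to one attribute set (e.g. male terms) than another (e.g. female terms). Below is a minimal NumPy sketch of that score; the 3-dimensional vectors are made up for illustration, whereas real audits use pretrained embeddings such as word2vec or GloVe:

```python
import numpy as np

def cosine(u, v):
    # cosine similarity between two word vectors
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word_vec, attrs_a, attrs_b):
    # WEAT-style association: mean similarity to attribute set A
    # minus mean similarity to attribute set B
    return (np.mean([cosine(word_vec, a) for a in attrs_a])
            - np.mean([cosine(word_vec, b) for b in attrs_b]))

# Toy vectors standing in for learned embeddings (illustrative only).
male = np.array([1.0, 0.0, 0.0])
female = np.array([0.0, 1.0, 0.0])
engineer = np.array([0.9, 0.1, 0.0])  # hypothetically skewed toward "male"

score = association(engineer, [male], [female])
```

A positive score indicates the word associates more strongly with the first attribute set; a system that attests fairness would flag large scores on occupation words as evidence of inherited bias.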
Paper: Raiders of the Lost Art
Neural style transfer, first proposed by Gatys et al. (2015), can be used to create novel artistic work by rendering a content image in the manner of a style image. We present a novel method of reconstructing lost artwork by applying neural style transfer to x-radiographs of paintings that conceal a secondary interior artwork beneath the primary exterior one. Finally, we reflect on AI art exhibitions and discuss the social, cultural, ethical, and philosophical impact of these technical innovations.
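For readers unfamiliar with the mechanics: Gatys et al. optimize a generated image so that its CNN feature maps match the content image’s features directly, while matching the style image only through Gram matrices (feature correlations). A minimal NumPy sketch of that combined objective, with random arrays standing in for VGG feature maps and the weights `alpha`/`beta` chosen purely for illustration:

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, height*width) feature map from one CNN layer;
    # the Gram matrix captures correlations between channels (texture/style)
    c, n = features.shape
    return features @ features.T / (c * n)

def style_transfer_loss(gen, content, style, alpha=1.0, beta=1e3):
    # content term: match generated features to the content image's features
    content_loss = np.mean((gen - content) ** 2)
    # style term: match Gram matrices, ignoring spatial arrangement
    style_loss = np.mean((gram_matrix(gen) - gram_matrix(style)) ** 2)
    return alpha * content_loss + beta * style_loss

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 16))  # stand-in for one layer's feature map
loss = style_transfer_loss(f, f, f)
```

In the full method this loss is summed over several layers and minimized by gradient descent on the generated image’s pixels; here the loss is exactly zero when the generated features already match both targets.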
On September 17, Seth Vargo – a former employee of Chef, the software deployment automation company – found out via a tweet that Chef licenses had been sold to the Immigration and Customs Enforcement Agency (ICE) under a $95,500, one-year contract through the approved contractor C&C International Computers & Consultants. In protest, Vargo decided to ‘archive’ the GitHub repository for two open source Chef add-ons he had developed in the Ruby programming language. On his GitHub repository page, Vargo wrote, ‘I have a moral and ethical obligation to prevent my source from being used for evil.’
Technologist Seth Vargo had a moral dilemma. He had just found out that Immigration and Customs Enforcement (ICE), which has faced widespread condemnation for separating children from their parents at the U.S. border and other abuses, was using a product that contained code he had personally written. ‘I was having trouble sleeping at night knowing that software – code that I personally authored – was being sold to and used by such a vile organization,’ he told Motherboard in an online chat. ‘I could not be complicit in enabling what I consider to be acts of evil and violations of our most basic human rights.’
This is the first part of our special feature series on deepfakes, exploring the latest developments and implications in this nascent field of AI. We will be covering detailed implementations of generation and countering strategies in future articles, so please stay tuned to GradientCrescent to learn more.