Artificial Intelligence principles define social and ethical considerations for developing future AI. They come from research institutes, government organizations, and industry. Each version of AI principles reflects different considerations, covers different perspectives, and places different emphases. None of them can be considered complete or able to subsume the other AI principle proposals. Here we introduce LAIP, an effort and platform for linking and analyzing different Artificial Intelligence Principles. We aim to explicitly establish the common topics and links among AI principles proposed by different organizations and to investigate their uniqueness. Based on these efforts, for the long-term future of AI, instead of directly adopting any single set of AI principles, we argue for the necessity of incorporating the various AI principles into a comprehensive framework and focusing on how they can interact with and complement each other.
This paper studies the question of whether machines can be rational. It observes the existing reasons why humans are not rational: imperfect and limited information, limited and inconsistent processing power in the brain, and the inability to optimize decisions and achieve maximum utility. It examines whether these human limitations carry over to machines. The conclusion reached is that even though machines are not rational, advances in technology are making them more rational. It also concludes that machines can be more rational than humans.
I investigate causal machine learning (CML) methods to estimate effect heterogeneity by means of conditional average treatment effects (CATEs). In particular, I study whether the estimated effect heterogeneity can provide evidence for the theoretical labour supply predictions of Connecticut’s Jobs First welfare experiment. For this application, Bitler, Gelbach, and Hoynes (2017) show that standard CATE estimators fail to provide evidence for theoretical labour supply predictions. Therefore, this is an interesting benchmark to showcase the value added by using CML methods. I report evidence that the CML estimates of CATEs provide support for the theoretical labour supply predictions. Furthermore, I document some reasons why standard CATE estimators fail to provide evidence for the theoretical predictions. However, I show the limitations of CML methods that prevent them from identifying all the effect heterogeneity of Jobs First.
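The CATE estimation at the heart of this abstract can be illustrated with a simple T-learner (two separate outcome regressions, one per treatment arm), one of the basic CML-style estimators. This is a minimal sketch on simulated data; the variable names and data-generating process are illustrative assumptions, not the Jobs First experiment.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000

# Simulated covariates X and a randomly assigned binary treatment D
X = rng.normal(size=(n, 2))
D = rng.integers(0, 2, size=n)

# Heterogeneous treatment effect: tau varies with the first covariate
tau = 1.0 + X[:, 0]
Y = X[:, 1] + D * tau + rng.normal(scale=0.5, size=n)

# T-learner: fit separate outcome regressions on treated and control units
m1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[D == 1], Y[D == 1])
m0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[D == 0], Y[D == 0])

# Estimated CATE: difference in predicted outcomes under treatment vs. control
cate_hat = m1.predict(X) - m0.predict(X)
print(round(cate_hat.mean(), 2))  # average effect should be close to 1.0
```

Methods such as causal forests add honest sample splitting and orthogonalization on top of this idea to obtain valid inference on the estimated heterogeneity.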
Article: Towards Ethical Machine Learning
I quit my job to enter an intensive data science bootcamp. I understand the value of the vast amount of available data that enables us to create predictive machine learning algorithms. In addition to recognizing its value on a professional level, I benefit from these technologies as a consumer. Whenever I find myself in a musical rut, I rely on Spotify’s Discover Weekly. I’m often amazed by how Spotify’s algorithms and other machine learning models so accurately predict my behavior. In fact, when I first sat down to write this post, I took a break to watch one YouTube video. Twenty minutes later, I realized just how well YouTube’s recommendation algorithm works. Although I so clearly see the benefits of machine learning, it is also essential to recognize and mitigate its potential dangers.
Artificial intelligence evokes a mythical, objective omnipotence, but it is backed by real-world forces of money, power, and data. In service of these forces, we are being spun potent stories that drive toward widespread reliance on regressive, surveillance-based classification systems that enlist us all in an unprecedented societal experiment from which it is difficult to return. Now, more than ever, we need a robust, bold, imaginative response.
Ethical reasoning is an essential skill for today’s computer scientists. The Embedded EthiCS distributed pedagogy embeds philosophers directly into computer science courses to teach students how to think through the ethical and social implications of their work. Why Embedded EthiCS? The aim of Embedded EthiCS is to teach students to consider not merely what technologies they could create, but whether they should create them.
Article: It’s time for a Bill of Data Rights
As the US Senate debates a new bill, a data-governance expert presents a plan to protect liberty and freedom in the digital age.
From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons. Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g., only facial recognition, only internet searches, or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI, or strong AI). While narrow AI may outperform humans at whatever its specific task is, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.
The first industrial revolution, powered by steam, launched mass production. The second revolution added electricity to everything. The third added computing power. This new revolution, powered by artificial intelligence (AI), is adding cognitive capabilities to everything – and it’s a game changer. Code that learns is both powerful and dangerous. It threatens the basic rules of markets and civic life. AI requires a new technical and civic infrastructure, a new way to conduct business, a new way to be together in community. AI and enabling technologies like robotics and autonomous vehicles will change lives and livelihoods. Great benefits and unprecedented wealth will be created. But with them will come waves of disruption. Compared to prior revolutions, this one is occurring at exponential speed, and while its impacts are ubiquitous, control is concentrated. AI is a centralizing force. It plows through monster data sets in seconds, aggregating benefits and wealth at an unprecedented speed.