
AnalytiXon

~ Broaden your Horizon

Category Archives: Ethics

AI-related Ethics

Let’s get it right

Tuesday, 17 Sep 2019

Posted by Michael Laux in Ethics


Paper: A Legal Definition of AI

When policy makers want to regulate AI, they must first define what AI is. However, legal definitions differ significantly from definitions of other disciplines. They are working definitions. Courts must be able to determine precisely whether or not a concrete system is considered AI by the law. In this paper we examine how policy makers should define the material scope of AI regulations. We argue that they should not use the term ‘artificial intelligence’ for regulatory purposes because there is no definition of AI which meets the requirements for legal definitions. Instead, they should define certain designs, use cases or capabilities following a risk-based approach. The goal of this paper is to help policy makers who work on AI regulations.


Paper: Valuating User Data in a Human-Centric Data Economy

The idea of paying people for their data is increasingly seen as a promising direction for resolving privacy debates, improving the quality of online data, and even offering an alternative to labor-based compensation in a future dominated by automation and self-operating machines. In this paper we demonstrate how a Human-Centric Data Economy would compensate the users of an online streaming service. We borrow the notion of the Shapley value from cooperative game theory to define what a fair compensation for each user should be for movie scores offered to the recommender system of the service. Since determining the Shapley value exactly is computationally inefficient in the general case, we derive faster alternatives using clustering, dimensionality reduction, and partial information. We apply our algorithms to a movie recommendation data set and demonstrate that different users may have a vastly different value for the service. We also analyze the reasons that some movie ratings may be more valuable than others and discuss the consequences for compensating users fairly.
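
Computing the exact Shapley value means averaging a user's marginal contribution over every possible ordering of users, which is infeasible at scale; permutation sampling is the standard shortcut. Below is a minimal Python sketch of that sampling idea (not the paper's faster clustering or dimensionality-reduction variants); the utility function and rating data are invented for illustration.

```python
import random

def shapley_monte_carlo(users, utility, n_permutations=2000):
    """Estimate each user's Shapley value by averaging marginal
    contributions over randomly sampled orderings of users."""
    values = {u: 0.0 for u in users}
    for _ in range(n_permutations):
        order = random.sample(users, len(users))
        coalition, prev = set(), utility(set())
        for u in order:
            coalition.add(u)
            curr = utility(coalition)
            values[u] += curr - prev  # marginal contribution of u
            prev = curr
    return {u: v / n_permutations for u, v in values.items()}

# Toy utility: a coalition's value is the number of distinct movies its
# members have rated (invented data standing in for recommender accuracy).
ratings = {"ann": {"m1", "m2"}, "bob": {"m2"}, "cee": {"m3", "m4", "m5"}}
def coverage(coalition):
    return len(set().union(*(ratings[u] for u in coalition))) if coalition else 0

print(shapley_monte_carlo(list(ratings), coverage))
# cee rates movies nobody else covers, so her estimated value is largest
```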


Article: Why Accessibility Is the Future of Tech

Designing solutions for people with disabilities offers a peephole into the future. ‘It’s just the right thing to do.’ Very few people think that those of us who are blind should be exiled from the web altogether, or that people with hearing loss shouldn’t have iPhones. That’s as it should be. But all too often, the importance of accessibility – the catch-all term for designing technology that people with disabilities can use – is framed in terms of charity alone. And that’s a shame because it makes accessibility seem grudging and boring, when the reality is that it’s the most exciting school of design on the planet.


Article: The Anthropologist of Artificial Intelligence

How do new scientific disciplines get started? For Iyad Rahwan, a computational social scientist with self-described ‘maverick’ tendencies, it happened on a sunny afternoon in Cambridge, Massachusetts, in October 2017. Rahwan and Manuel Cebrian, a colleague from the MIT Media Lab, were sitting in Harvard Yard discussing how to best describe their preferred brand of multidisciplinary research. The rapid rise of artificial intelligence technology had generated new questions about the relationship between people and machines, which they had set out to explore. Rahwan, for example, had been exploring the question of ethical behavior for a self-driving car – should it swerve to avoid an oncoming SUV, even if it means hitting a cyclist? – in his Moral Machine experiment.


Paper: Avoiding Resentment Via Monotonic Fairness

Classifiers that achieve demographic balance by explicitly using protected attributes such as race or gender are often politically or culturally controversial due to their lack of individual fairness, i.e. individuals with similar qualifications will receive different outcomes. Individually and group fair decision criteria can produce counter-intuitive results, e.g. that the optimal constrained boundary may reject intuitively better candidates due to demographic imbalance in similar candidates. Both approaches can be seen as introducing individual resentment, where some individuals would have received a better outcome if they either belonged to a different demographic class and had the same qualifications, or if they remained in the same class but had objectively worse qualifications (e.g. lower test scores). We show that both forms of resentment can be avoided by using monotonically constrained machine learning models to create individually fair, demographically balanced classifiers.
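
To make the monotonicity idea concrete, here is a small sketch using scikit-learn's built-in monotonic constraints, one readily available way to fit such models; the synthetic data and feature choices are mine, not the paper's.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))  # column 0: test score, column 1: other feature
y = (X[:, 0] + 0.3 * rng.normal(size=1000) > 0).astype(int)

# monotonic_cst=[1, 0]: predictions must be non-decreasing in the test score.
clf = HistGradientBoostingClassifier(monotonic_cst=[1, 0]).fit(X, y)

# A higher score, all else fixed, can never lower the predicted probability,
# so no rejected candidate can point to an accepted one with a worse score.
lo = clf.predict_proba([[0.0, 0.5]])[0, 1]
hi = clf.predict_proba([[1.0, 0.5]])[0, 1]
assert hi >= lo
```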


Article: Developing AI responsibly

Sarah Bird discusses the major challenges of responsible AI development and examines promising new tools and technologies to help enable it in practice.


Article: Open-endedness: The last grand challenge you’ve never heard of

Artificial intelligence (AI) is a grand challenge for computer science. Lifetimes of effort and billions of dollars have powered its pursuit. Yet, today its most ambitious vision remains unmet: though progress continues, no human-competitive general digital intelligence is within our reach. However, such an elusive goal is exactly what we expect from a ‘grand challenge’ – it’s something that will take astronomical effort over expansive time to achieve – and is likely worth the wait. There are other grand challenges, like curing cancer, achieving 100% renewable energy, or unifying physics. Some fields have entire sets of grand challenges, such as David Hilbert’s 23 unsolved problems in mathematics, which threw down the gauntlet for the entire 20th century. What’s unusual, though, is for there to be a problem whose solution could radically alter our civilization and our understanding of ourselves while being known only to the smallest sliver of researchers. Despite how strangely implausible that sounds, it is precisely the scenario today with the challenge of open-endedness. Almost no one has even heard of this problem, let alone cares about its solution, even though it is among the most fascinating and profound challenges that might actually someday be solved. With this article, we hope to help fix this surprising disconnect. We’ll explain just what this challenge is, its amazing implications if solved, and how to join the quest if we’ve inspired your interest.


Article: Regulation and Ethics in Data Science and Machine Learning

Statistical inference, reinforcement learning, deep neural networks, and other such terms have recently attracted much attention, and indeed for a fundamental reason. Statistical inference extends the basis of our decisions and changes the deliberative process in making decisions. This change constitutes the essential differentiator between what I call the pre-data-science era and the subsequent data science era. In the data science era, decisions are made based on data and algorithms. Often, decisions are made solely by algorithms, and humans are important actors only in the process of gathering, cleaning, and structuring the data and setting up the framework for algorithm selection (often, the algorithm itself is chosen by a metric). Given this fundamental change, it is important to take a closer look at both the extended basis of decisions and the changes in the thought processes used when deliberating over this extended basis in the data science era.

Let’s get it right

Thursday, 05 Sep 2019

Posted by Michael Laux in Ethics


Article: Alternative Influence: Broadcasting the Reactionary Right on YouTube

This report identifies and names the Alternative Influence Network (AIN): an assortment of scholars, media pundits, and internet celebrities who use YouTube to promote a range of political positions, from mainstream versions of libertarianism and conservatism, all the way to overt white nationalism. Content creators in the AIN claim to provide an alternative media source for news and political commentary. They function as political influencers who adopt the techniques of brand influencers to build audiences and ‘sell’ them on far-right ideology. This report presents data from approximately 65 political influencers across 81 channels. This network is connected through a dense system of guest appearances, mixing content from a variety of ideologies. This cross-promotion of ideas forms a broader ‘reactionary’ position: a general opposition to feminism, social justice, or left-wing politics.


Paper: AI and Accessibility: A Discussion of Ethical Considerations

According to the World Health Organization, more than one billion people worldwide have disabilities. The field of disability studies defines disability through a social lens; people are disabled to the extent that society creates accessibility barriers. AI technologies offer the possibility of removing many accessibility barriers; for example, computer vision might help people who are blind better sense the visual world, speech recognition and translation technologies might offer real time captioning for people who are hard of hearing, and new robotic systems might augment the capabilities of people with limited mobility. Considering the needs of users with disabilities can help technologists identify high-impact challenges whose solutions can advance the state of AI for all users; however, ethical challenges such as inclusivity, bias, privacy, error, expectation setting, simulated data, and social acceptability must be considered.


Article: AI Schools – The Schools of the Future

Technology is becoming a bigger part of our lives every day. Imagine driving to a new place without your sat-nav, ordering a takeaway without an app, or finding a new place to eat without a quick search on Google. Now think about the things you don’t see: how does Google order the billions of results it finds when you ask it ‘Why is the sky blue?’ or ‘Where is Dubai?’ (searched 165,000 and 60,500 times a month on average)?


Paper: Towards Ethical Content-Based Detection of Online Influence Campaigns

The detection of clandestine efforts to influence users in online communities is a challenging problem with significant active development. We demonstrate that features derived from the text of user comments are useful for identifying suspect activity, but lead to increased erroneous identifications when keywords over-represented in past influence campaigns are present. Drawing on research in native language identification (NLI), we use ‘named entity masking’ (NEM) to create sentence features robust to this shortcoming, while maintaining comparable classification accuracy. We demonstrate that while NEM consistently reduces false positives when key named entities are mentioned, both masked and unmasked models exhibit increased false positive rates on English sentences by Russian native speakers, raising ethical considerations that should be addressed in future research.
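
As a rough illustration of what named entity masking looks like in practice, here is a sketch built on spaCy's off-the-shelf entity recognizer; the paper's exact masking scheme may differ.

```python
import spacy

# Assumes the small English model is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def mask_entities(text: str) -> str:
    """Replace each named entity with its type label so a classifier
    cannot latch onto campaign-specific names."""
    doc = nlp(text)
    out, last = [], 0
    for ent in doc.ents:
        out.append(text[last:ent.start_char])
        out.append(f"[{ent.label_}]")  # e.g. [PERSON], [ORG], [GPE]
        last = ent.end_char
    out.append(text[last:])
    return "".join(out)

print(mask_entities("The senator praised Moscow's position on Syria."))
# e.g. "The senator praised [GPE]'s position on [GPE]."
```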


Article: Artificial Intelligence and Nonprofits

Could part of AI safety be ensuring distribution, or working towards equality? I have written before about fairness in AI, the importance of data quality, and equality relating to gender. Yet the most challenging article to write was Inequalities and AI. Is artificial intelligence truly safe if it worsens or exacerbates inequality? And what is one of the greatest inequalities? It has been important for nonprofits to connect with the makers of new technology to see whether any part of the revenue can be funnelled towards a humanitarian purpose or programs. As much as we can question these technologies, because they are of course not faultless, it is arguably important that nonprofits are able to raise funds and address issues. The question for these organisations is often a large, looming ‘how’. In an ideal world their operations would not be needed, yet in the current situation there is a place for the charity sector, and how it operates is certainly changing. With these services moving to apps or social media involving a variety of actors, it does seem a challenge to keep up. In many instances technologies such as AI or ML are integrated into existing products or services. Is it necessary to collaborate? We proceed with the assumption that revenue can be generated in conjunction with machine learning projects and that part of the money should go to charity. Let us explore a few options, but first a quick look at AI for Good.


Article: Can an ethical and algorithmically transparent cloud kitchen prevent future Amazon fires?

We often view AI with suspicion – but AI can be used to solve complex problems currently facing society where innovative approaches are needed. For many of us, the Amazon fires are disturbing and a serious problem, because the Amazon cannot be recovered once it’s gone. It seems that there is nothing we can do to mitigate this man-made (and economically driven) disaster. However, I believe that in the very near future we can. And the solution may be to create a spirit of activism through transparent algorithms to bring about social change. Two technologies could be key – and they are both currently viewed with some suspicion.


Article: Artificial Intelligence Without the Utopian Promise-land and Dystopian Armageddon

Before you start reading, think of three possible scenarios for the future of artificial intelligence (AI). If I asked you to do that, I am guessing you’d think of the bad first. The takeover scenario, Terminator-style: computers and robots dominate the human species, take over our planet, and eventually wipe us off the face of the Earth. Or the power of AI being held and used by a handful of tyrants whose sole purpose is to enslave the rest of us. You might have also thought of a hybrid scenario, where we lose some of our humanity to gain far superior computational and physical power. And finally, you might even have thought of brighter days where robots work for the human species, who now enjoy their Universal Basic Income (UBI), follow their ‘passions’ or their ‘useless’ creative endeavors, and live without a single worry in the world.


Article: What is Machine Behavior?

Understanding the behavior of artificial intelligence (AI) agents is one of the pivotal challenges of the next decade of AI. Interpretability and explainability are some of the terms often used to describe methods that provide insights into the behavior of AI programs. Until today, most interpretability techniques have focused on exploring the internal structure of deep neural networks. Recently, a group of AI researchers from the Massachusetts Institute of Technology (MIT) has been exploring a radical approach that attempts to explain the behavior of AI agents by observing them in the same way we study human or animal behavior. They group the ideas in this area under the catchy name of machine behavior, which promises to be one of the most exciting fields in the next few years of AI.

Let’s get it right

Saturday, 24 Aug 2019

Posted by Michael Laux in Ethics


Article: Europe will be left behind if it focuses on ethics and not keeping pace in AI development

President-elect of the European Commission Ursula von der Leyen made clear in her recently unveiled policy agenda that not only will artificial intelligence (AI) be a key component of European digital strategy, but the cornerstone of the European AI plan will be to develop ‘AI made in Europe’ that is more ethical than AI made anywhere else in the world. What this means is not always clear, since there is no universal consensus on ethics. However, most European policymakers are less concerned about the ‘what’ and more about the ‘why.’ As explained by former Vice-President for the Digital Single Market Andrus Ansip, ‘Ethical AI is a win-win proposition that can become a competitive advantage for Europe.’ This idea that Europe can become the global leader in AI simply by creating the most ethical AI systems, rather than by competing to build the best-performing ones, has become the conventional wisdom in Brussels, repeated ad nauseam by those tasked with charting a course for Europe’s AI future. But it is a delusion built on three fallacies: that there is a market for AI that is ethical-by-design, that other countries are not interested in AI ethics, and that Europeans have a competitive advantage in producing AI systems that are more ethical than those produced elsewhere.


Article: How Much Can We Afford to Forget, If We Train Machines to Remember?

Civilizations evolve through strategic forgetting of once-vital life skills. But can machines do all our remembering? When I was a student, in the distant past when most computers were still huge mainframes, I had a friend whose PhD advisor insisted that he carry out a long and difficult atomic theory calculation by hand. This led to page after page of pencil scratches, full of mistakes, so my friend finally gave in to his frustration. He snuck into the computer lab one night and wrote a short code to perform the calculation. Then he laboriously copied the output by hand, and gave it to his professor. Perfect, his advisor said – this shows you are a real physicist. The professor was never any the wiser about what had happened. While I’ve lost touch with my friend, I know many others who’ve gone on to forge successful careers in science without mastering the pencil-and-paper heroics of past generations.


Article: AI and Collective Action

Towards a more responsible development of artificial intelligence, with a research paper from OpenAI. On the 10th of July, team members of OpenAI released a paper on arXiv called ‘The Role of Cooperation in Responsible AI Development’, by Amanda Askell, Miles Brundage and Gillian Hadfield. One of the main statements in the article goes as follows: ‘Competition between AI companies could decrease the incentives of each company to develop responsibly by increasing their incentives to develop faster. As a result, if AI companies would prefer to develop AI systems with risk levels that are closer to what is socially optimal – as we believe many do – responsible AI development can be seen as a collective action problem.’ How, then, is it proposed we approach this problem?


Article: AI is transforming politics – for both good and bad

Big Data powering Big Money, the return of direct democracy, and the tyranny of the minority. Nowadays, artificial intelligence (AI) is one of the most widely discussed phenomena. AI is poised to fundamentally alter almost every dimension of human life, from healthcare and social interactions to military and international relations. It is therefore worth considering the effects of the advent of AI in politics: politics is one of the fundamental pillars of today’s societal system, and understanding the dangers that AI poses for it is crucial if we are to combat AI’s negative implications while maximizing the benefits of the new opportunities in order to strengthen democracy.


Paper: Fairness Issues in AI Systems that Augment Sensory Abilities

Systems that augment sensory abilities are increasingly employing AI and machine learning (ML) approaches, with applications ranging from object recognition and scene description tools for blind users to sound awareness tools for d/Deaf users. However, unlike many other AI-enabled technologies, these systems provide information that is already available to non-disabled people. In this paper, we discuss unique AI fairness challenges that arise in this context, including accessibility issues with data and models, ethical implications in deciding what sensory information to convey to the user, and privacy concerns both for the primary user and for others.


Paper: Bayesian leveraging of historical control data for a clinical trial with time-to-event endpoint

The recent 21st Century Cures Act propagates innovations to accelerate the discovery, development, and delivery of 21st century cures. It includes the broader application of Bayesian statistics and the use of evidence from clinical expertise. An example of the latter is the use of trial-external (or historical) data, which promises more efficient or ethical trial designs. We propose a Bayesian meta-analytic approach to leveraging historical data for time-to-event endpoints, which are common in oncology and cardiovascular diseases. The approach is based on a robust hierarchical model for piecewise exponential data. It allows for various degrees of between-trial heterogeneity and for leveraging individual as well as aggregate data. An ovarian carcinoma trial and a non-small-cell lung cancer trial illustrate methodological and practical aspects of leveraging historical data for the analysis and design of time-to-event trials.
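
For intuition, here is a toy sketch of the meta-analytic idea in PyMC. It relies on the standard equivalence between a piecewise-exponential likelihood and Poisson counts with exposure offsets; the single-interval structure and all numbers are simplified stand-ins, not the paper's robust model.

```python
import numpy as np
import pymc as pm

events = np.array([12, 15, 9])             # events in three historical trials (invented)
exposure = np.array([100.0, 130.0, 80.0])  # person-time at risk (invented)

with pm.Model():
    mu = pm.Normal("mu", 0.0, 2.0)                 # population mean log-hazard
    tau = pm.HalfNormal("tau", 0.5)                # between-trial heterogeneity
    theta = pm.Normal("theta", mu, tau, shape=3)   # trial-specific log-hazards
    pm.Poisson("y", mu=exposure * pm.math.exp(theta), observed=events)
    # Meta-analytic-predictive prior for a new trial's control log-hazard:
    pm.Normal("theta_new", mu, tau)
    idata = pm.sample(1000, tune=1000)
```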


Article: How New A.I. Is Making the Law’s Definition of Hacking Obsolete

Using adversarial machine learning, researchers can trick machines – potentially with fatal consequences. But the legal system hasn’t caught up. Imagine you’re cruising in your new Tesla, autopilot engaged. Suddenly you feel yourself veer into the other lane, and you grab the wheel just in time to avoid an oncoming car. When you pull over, pulse still racing, and look over the scene, it all seems normal. But upon closer inspection, you notice a series of translucent stickers leading away from the dotted lane divider. And to your Tesla, these stickers represent a non-existent bend in the road that could have killed you. In April this year, a research team at the Chinese tech giant Tencent showed that a Tesla Model S in autopilot mode could be tricked into following a bend in the road that didn’t exist simply by adding stickers to the road in a particular pattern. Earlier research in the U.S. had shown that small changes to a stop sign could cause a driverless car to mistakenly perceive it as a speed limit sign. Another study found that by playing tones indecipherable to a person, a malicious attacker could cause an Amazon Echo to order unwanted items.
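
The stop-sign and lane-marking attacks are physical cousins of a simple digital recipe: nudge the input in the direction that most increases the model's loss. A minimal sketch of the fast gradient sign method (FGSM) in PyTorch, where `model` stands for any differentiable classifier; nothing here is specific to Tesla or Echo.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=0.03):
    """Return an adversarial copy of input x for the given true label."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # One small step in the sign of the gradient: a visually near-identical
    # input that is often enough to flip the model's prediction.
    return (x + eps * x.grad.sign()).detach()
```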


Article: A.I. Is the Cause Of – And Solution To – the End of the World

The development of artificial general intelligence offers tremendous benefits and terrible risks. There is no easy definition for artificial intelligence, or A.I. Scientists can’t agree on what constitutes ‘true A.I.’ versus what might simply be a very effective and fast computer program. But here’s a shot: intelligence is the ability to perceive one’s environment accurately and take actions that maximize the probability of achieving given objectives. It doesn’t mean being smart, in the sense of having a great store of knowledge, or the ability to do complex mathematics.

Let’s get it right

Wednesday, 21 Aug 2019

Posted by Michael Laux in Ethics


Article: Machinery And Ethics

No one can escape the fact that trying to legislate on the existing connection between man and machine will lead us to the serious problem of shifting our customs and morality into the field of ethics. In fact, it would be foolish to think that any social pact made between humans and humanoids would not be reflected through human rights, especially when the humanoids themselves are exclusively artificial. The starting point is to analyze which inherent human rights are acquired by the machines, so as to be able to consider which aspects concern morality. One way to simplify the essay, then, is to postulate that the 1948 Charter of Human Rights could form part of the ethical principles that define not only human beings but also the very concept of humanity.


Article: Robots and AI Threaten to Mediate Disputes Better Than Lawyers

Algorithms and big data are entering the often shrouded world of alternative dispute resolution. Robots and artificial intelligence seem worlds away from the sensitive and nuanced area of international mediation, where battles are largely settled behind closed doors and skilled mediators pick their way through sticky negotiations. Algorithms and big data, however, are fast entering this often mystery-shrouded world, largely as a result of the rapidly increasing demand for the kind of data analytics being harnessed in US litigation to predict trial outcomes. The incursion of robots into mediation hit a new milestone in February, when Canadian electronic negotiation specialists iCan Systems reputedly became the first company to resolve a dispute in a public court in England and Wales using a ‘robot mediator’.


Paper: Connected Fair Allocation of Indivisible Goods

We study the fair allocation of indivisible goods under the assumption that the goods form an undirected graph and each agent must receive a connected subgraph. Our focus is on well-studied fairness notions including envy-freeness and maximin share fairness. We establish graph-specific maximin share guarantees, which are tight for large classes of graphs in the case of two agents and for paths and stars in the general case. Unlike in previous work, our guarantees are with respect to the complete-graph maximin share, which allows us to compare possible guarantees for different graphs. For instance, we show that for biconnected graphs it is possible to obtain at least 3/4 of the maximin share, while for the remaining graphs the guarantee is at most 1/2. In addition, we determine the optimal relaxation of envy-freeness that can be obtained with each graph for two agents, and characterize the set of trees and complete bipartite graphs that always admit an allocation satisfying envy-freeness up to one good (EF1) for three agents. Our work demonstrates several applications of graph-theoretical tools and concepts to fair division problems.
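
To unpack the key notion for readers new to it: an agent's maximin share is the value she can guarantee herself by partitioning the goods into bundles and receiving the worst one. A brute-force sketch for two agents with additive valuations, against the complete graph (i.e. ignoring the paper's connectivity constraints):

```python
from itertools import combinations

def maximin_share_two_agents(values):
    """Best worst-bundle value over all 2-way splits of the goods."""
    goods = list(values)
    total = sum(values.values())
    best = 0
    for r in range(len(goods) + 1):
        for bundle in combinations(goods, r):
            v = sum(values[g] for g in bundle)
            best = max(best, min(v, total - v))
    return best

# Toy valuation: the balanced split {a, d} vs {b, c} guarantees value 5.
print(maximin_share_two_agents({"a": 4, "b": 3, "c": 2, "d": 1}))  # -> 5
```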


Paper: Tackling Online Abuse: A Survey of Automated Abuse Detection Methods

Abuse on the Internet represents an important societal problem of our time. Millions of Internet users face harassment, racism, personal attacks, and other types of abuse on online platforms. The psychological effects of such abuse on individuals can be profound and lasting. Consequently, over the past few years, there has been a substantial research effort towards automated abuse detection in the field of natural language processing (NLP). In this paper, we present a comprehensive survey of the methods that have been proposed to date, thus providing a platform for further development of this area. We describe the existing datasets and review the computational approaches to abuse detection, analyzing their strengths and limitations. We discuss the main trends that emerge, highlight the challenges that remain, outline possible solutions, and propose guidelines for ethics and explainability.
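
For a concrete starting point, the simplest systems such surveys cover are linear classifiers over lexical features, which also exhibit exactly the keyword-sensitivity limitations surveys like this one discuss. A minimal sketch with toy data, not a production moderation system:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy comments; real systems train on large annotated corpora.
texts = ["have a great day", "you are an idiot", "thanks for sharing",
         "nobody wants you here", "interesting point", "shut up, loser"]
labels = [0, 1, 0, 1, 0, 1]  # 1 = abusive

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["what an idiot"]))  # likely [1]
```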


Paper: A Survey on Computational Politics

Computational politics is the study of computational methods to analyze and moderate users’ behaviors related to political activities such as election campaign persuasion, political affiliation, and opinion mining. With the rapid development of, and ease of access to, the Internet, information communication technologies (ICT) have given rise to a massive number of users joining online communities and to the digitization of analogous data such as political debates. These communities and digitized data contain both explicit and latent information about users and their behaviors related to politics. For researchers, it is essential to utilize data from these sources to develop and design systems that not only provide solutions to computational politics but also help other businesses, such as marketers, to increase users’ participation and interaction. In this survey, we attempt to categorize the main areas of computational politics and summarize the prominent studies in one place, to better understand computational politics across different and multidimensional platforms, e.g., online social networks, online forums, and political debates. We then conclude this study by highlighting future research directions, opportunities, and challenges.


Article: Discriminating Systems – Gender, Race, and Power in AI

Research Findings:
• There is a diversity crisis in the AI sector across gender and race.
• The AI sector needs a profound shift in how it addresses the current diversity crisis.
• The overwhelming focus on ‘women in tech’ is too narrow and likely to privilege white women over others.
• Fixing the ‘pipeline’ won’t fix AI’s diversity problems.
• The use of AI systems for the classification, detection, and prediction of race and gender is in urgent need of re-evaluation.


Paper: Oxford Handbook on AI Ethics Book Chapter on Race and Gender

From massive face-recognition-based surveillance and machine-learning-based decision systems predicting crime recidivism rates, to the move towards automated health diagnostic systems, artificial intelligence (AI) is being used in scenarios that have serious consequences in people’s lives. However, this rapid permeation of AI into society has not been accompanied by a thorough investigation of the sociopolitical issues that cause certain groups of people to be harmed rather than advantaged by it. For instance, recent studies have shown that commercial face recognition systems have much higher error rates for dark-skinned women while having minimal errors on light-skinned men. A 2016 ProPublica investigation uncovered that machine-learning-based tools that assess crime recidivism rates in the US are biased against African Americans. Other studies show that natural language processing tools trained on newspapers exhibit societal biases (e.g. finishing the analogy ‘Man is to computer programmer as woman is to X’ with ‘homemaker’). At the same time, books such as Weapons of Math Destruction and Automating Inequality detail how people in lower socioeconomic classes in the US are subjected to more automated decision making tools than those who are in the upper class. Thus, these tools are most often used on people towards whom they exhibit the most bias. While many technical solutions have been proposed to alleviate bias in machine learning systems, we have to take a holistic and multifaceted approach. This includes standardization bodies determining what types of systems can be used in which scenarios, making sure that automated decision tools are created by people from diverse backgrounds, and understanding the historical and political factors that disadvantage certain groups who are subjected to these tools.
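
The analogy probe mentioned above is easy to try with pretrained embeddings. A sketch using gensim's downloader; the exact completions depend on which vectors you load, and have been reported to reflect corpus stereotypes:

```python
import gensim.downloader as api

# Downloads pretrained GloVe vectors on first use (roughly 130 MB).
vecs = api.load("glove-wiki-gigaword-100")

# "Man is to computer programmer as woman is to X": vector arithmetic
# programmer - man + woman, then nearest neighbours.
print(vecs.most_similar(positive=["woman", "programmer"],
                        negative=["man"], topn=3))
```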


Paper: A Mulching Proposal

The ethical implications of algorithmic systems have been much discussed in both HCI and the broader community of those interested in technology design, development and policy. In this paper, we explore the application of one prominent ethical framework – Fairness, Accountability, and Transparency – to a proposed algorithm that resolves various societal issues around food security and population ageing. Using various standardised forms of algorithmic audit and evaluation, we drastically increase the algorithm’s adherence to the FAT framework, resulting in a more ethical and beneficent system. We discuss how this might serve as a guide to other researchers or practitioners looking to ensure better ethical outcomes from algorithmic systems in their line of work.

Let’s get it right

Tuesday, 13 Aug 2019

Posted by Michael Laux in Ethics


Article: AI Ethics Guidelines Every CIO Should Read

You don’t need to come up with an AI ethics framework out of thin air. Here are five of the best resources to get technology and ethics leaders started.
• Future of Life Institute
• IAPP
• IEEE
• The Public Voice
• EU Council of Europe


Article: Artificial Intelligence – Ethics vs. World Domination?

I was contacted by a friend who is helping to host an event at a large business conference in Norway where industry and politicians meet. The name of the event is ‘Artificial Intelligence – Ethics vs. World Domination?’. In this context I was asked a few questions, and I will do my best to answer them. First, however, I will discuss a series of questions I was sent relating to the topic; these concern competitiveness, human-centric AI, Norwegian interests, and socially responsible AI. Let us begin with the description of the event.


Article: You Can’t Fix Unethical Design by Yourself

Nearly every tech conference right now has at least one, if not many, sessions about ethics: Ethics in artificial intelligence, introductions to data ethics, why letting the internet go to sleep is the ethical thing to do, or just plain integrating the basics of ethics into your design. We as a community are doing a great job raising questions about the implications of technology and spreading awareness to our communities about the potential for harm.


Paper: A 20-Year Community Roadmap for Artificial Intelligence Research in the US

Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, handling complex tasks and responsibilities effectively and ethically, engaging in meaningful communication, and improving their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.


Article: AI Justice: When AI Principles Are Not Enough

Fluxus Landscape is an art and research project mapping about 500 stakeholders and actors in AI ethics and governance. It casts a broad net and each included stakeholder defines artificial intelligence and ethics in their own terms. Together, they create a snapshot of the organic structure of social change – showing us that development at speed can create vortices of thought and intellectual dead zones open to exploitation.


Article: Fluxus Landscape

Fluxus Landscape is an art and research project created in partnership with the Center for the Advanced Study in the Behavioral Sciences (CASBS) at Stanford University with support from the Stanford Institute for Human-Centered Artificial Intelligence.


Paper: ‘Conservatives Overfit, Liberals Underfit’: The Social-Psychological Control of Affect and Uncertainty

The presence of artificial agents in human social networks is growing. From chatbots to robots, human experience in the developed world is moving towards a socio-technical system in which agents can be technological or biological, with increasingly blurred distinctions between the two. Given that emotion is a key element of human interaction, enabling artificial agents with the ability to reason about affect is a key stepping stone towards a future in which technological agents and humans can work together. This paper presents work on building intelligent computational agents that integrate both emotion and cognition. These agents are grounded in the well-established social-psychological Bayesian Affect Control Theory (BayesAct). The core idea of BayesAct is that humans are motivated in their social interactions by affective alignment: they strive for their social experiences to be coherent at a deep, emotional level with their sense of identity and general world views as constructed through culturally shared symbols. This affective alignment creates cohesive bonds between group members, and is instrumental for collaborations to solidify as relational group commitments. BayesAct agents are motivated in their social interactions by a combination of affective alignment and decision-theoretic reasoning, trading the two off as a function of the uncertainty or unpredictability of the situation. This paper provides a high-level view of dual process theories and advances BayesAct as a plausible, computationally tractable model based in social-psychological and sociological theory. We introduce a revised BayesAct model that more deeply integrates social-psychological theorising, and we demonstrate a key component of the model as being sufficient to account for cognitive biases about fairness, dissonance and conformity. We close with ethical and philosophical discussion.


Article: Safe Artificial General Intelligence

The Future of Life Institute (FLI) has appeared across various articles and areas within the field of artificial intelligence, at least where I have looked. They seem to be concerned with the unknown future and how it affects us. Since I have been exploring the topic of AI safety, it now makes sense, seeing as FLI has funded a series of different projects throughout the last five years, particularly in two rounds, both funded by Elon Musk together with different research institutes. The first round, in 2015, focused on AI safety researchers, and the second round, in 2018, focused on artificial general intelligence (AGI) safety researchers. Since the project summaries are all available online, I decided to have a think about each in turn.

Let’s get it right

Wednesday, 07 Aug 2019

Posted by Michael Laux in Ethics


Paper: Machinic Surrogates: Human-Machine Relationships in Computational Creativity

Recent advancements in artificial intelligence (AI) and its sub-branch machine learning (ML) promise machines that go beyond the boundaries of automation and behave autonomously. Applications of these machines in creative practices such as art and design entail relationships between users and machines that have been described as a form of collaboration or co-creation between computational and human agents. This paper uses examples from art and design to argue that this frame is incomplete as it fails to acknowledge the socio-technical nature of AI systems, and the different human agencies involved in their design, implementation, and operation. Situating applications of AI-enabled tools in creative practices in a spectrum between automation and autonomy, this paper distinguishes different kinds of human engagement elicited by systems deemed automated or autonomous. Reviewing models of artistic collaboration during the late 20th century, it suggests that collaboration is at the core of these artistic practices. We build upon the growing literature of machine learning and art to look for the human agencies inscribed in works of computational creativity, and expand the co-creation frame to incorporate emerging forms of human-human collaboration mediated through technical artifacts such as algorithms and data.


Paper: Incorporating Structural Stigma into Network Analysis

A rich literature has explored the modeling of homophily and other forms of nonuniform mixing associated with individual-level covariates within the exponential-family random graph model (ERGM) framework. Such differential mixing does not fully explain phenomena such as stigma, however, which involve the active maintenance of social boundaries by ostracism of persons with out-group ties. Here, we introduce a new statistic that allows for such effects to be captured, making it possible to probe for the potential presence of boundary maintenance above and beyond simple differences in nomination rates. We demonstrate this statistic in the context of gender segregation in a school classroom.


Paper: What do the founders of online communities owe to their users?

We discuss the organisation of internet communities, focusing on what we call the principle of ‘bait and switch’: founders of internet communities often find it advantageous to recruit members by promising inducements which are later not honoured. We look at some of the dilemmas and ways of attempting to resolve them through two paradigmatic examples, Wikispaces and WordPress. Our analysis is to a large extent motivated by the demands of CALLector, a university-centred social network we are in the process of establishing. We consider the question of what ethical standards are imposed on universities engaged in this type of activity.


Paper: Adapting SQuaRE for Quality Assessment of Artificial Intelligence Systems

More and more software practitioners are working towards industrial applications of artificial intelligence (AI) systems, especially those based on machine learning (ML). However, many existing principles and approaches for traditional systems do not work effectively for system behavior obtained by training rather than by logical design. In addition, unique kinds of requirements are emerging, such as fairness and explainability. To provide clear guidance for understanding and tackling these difficulties, we present an analysis of what quality concepts we should evaluate for AI systems. We base our discussion on the ISO/IEC 25000 series, known as SQuaRE, and identify how it should be adapted for the unique nature of ML and for the ‘Ethics Guidelines for Trustworthy AI’ from the European Commission. We thus provide holistic insights into the quality of AI systems by incorporating the nature of ML and AI ethics into traditional software quality concepts.


Paper: Robby is Not a Robber (anymore): On the Use of Institutions for Learning Normative Behavior

Future robots should follow human social norms in order to be useful and accepted in human society. In this paper, we leverage already existing social knowledge in human societies by capturing it in our framework through the notion of social norms. We show how norms can be used to guide a reinforcement learning agent towards achieving normative behavior, and we apply the same set of norms over different domains. Thus, we are able to: (1) provide a way to intuitively encode social knowledge (through norms); (2) guide learning towards normative behaviors (through an automatic norm reward system); and (3) achieve a transfer of learning by abstracting policies. Finally, (4) the method is not dependent on a particular RL algorithm. We show how our approach can be seen as a means to achieve abstract representation and learn procedural knowledge based on the declarative semantics of norms, and we discuss possible implications of this in some areas of cognitive science.
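
My reading of the core mechanism, sketched below as a norm-based reward shaper: norms are predicates over state-action pairs, and violations automatically subtract a penalty from the environment reward. This illustrates the idea and is not the paper's implementation.

```python
def make_normative_reward(env_reward, norms, penalty=1.0):
    """Wrap a reward function so that norm violations are penalized."""
    def reward(state, action, next_state):
        r = env_reward(state, action, next_state)
        violations = sum(1 for norm in norms if not norm(state, action))
        return r - penalty * violations
    return reward

# Hypothetical norm: never take the "grab" action in a "shop" state.
no_stealing = lambda s, a: not (s == "shop" and a == "grab")
shaped = make_normative_reward(lambda s, a, s2: 1.0, [no_stealing])
print(shaped("shop", "grab", "street"))  # 1.0 - 1.0 = 0.0
```

Because the norms are plain predicates, the same list can be dropped into a different environment's reward function, which mirrors the transfer-across-domains claim.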


Paper: Knowledge Query Network: How Knowledge Interacts with Skills

Knowledge Tracing (KT) is the task of tracing the knowledge of students as they solve a sequence of problems represented by their related skills. This involves abstract concepts of students’ states of knowledge and the interactions between those states and skills. Therefore, a KT model is designed to predict whether students will give correct answers and to describe such abstract concepts. However, existing methods either give relatively low prediction accuracy or fail to explain those concepts intuitively. In this paper, we propose a new model called Knowledge Query Network (KQN) to solve these problems. KQN uses neural networks to encode student learning activities into knowledge state and skill vectors, and models the interactions between the two types of vectors with the dot product. Through this, we introduce a novel concept called ‘probabilistic skill similarity’ that relates the pairwise cosine and Euclidean distances between skill vectors to the odds ratios of the corresponding skills, which makes KQN interpretable and intuitive. On four public datasets, we have carried out experiments to show the following: 1. KQN outperforms all the existing KT models based on prediction accuracy. 2. The interaction between the knowledge state and skills can be visualized for interpretation. 3. Based on probabilistic skill similarity, a skill domain can be analyzed with clustering using the distances between the skill vectors of KQN. 4. For different values of the vector space dimensionality, KQN consistently exhibits high prediction accuracy and a strong positive correlation between the distance matrices of the skill vectors.
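
The dot-product read-out described above fits in a few lines of NumPy; the vectors here are invented placeholders for what KQN's encoders would learn.

```python
import numpy as np

def p_correct(knowledge_state, skill_vec):
    """Sigmoid of the dot product between knowledge state and skill."""
    return 1.0 / (1.0 + np.exp(-knowledge_state @ skill_vec))

knowledge = np.array([0.5, -0.2, 1.0])
skill_a = np.array([0.9, 0.1, 0.30])
skill_b = np.array([0.8, 0.2, 0.35])  # geometrically close to skill_a

# Probabilistic skill similarity: nearby skill vectors yield similar
# predicted odds for any given knowledge state.
print(p_correct(knowledge, skill_a), p_correct(knowledge, skill_b))
```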


Paper: Seeding the Singularity for A.I.

The singularity refers to the idea that once a machine with artificial intelligence surpassing human intelligence capacity is created, it will trigger explosive technological and intelligence growth. I propose to test the hypothesis that machine intelligence capacity can grow autonomously, starting with an intelligence comparable to that of bacteria – microbial intelligence. The goal will be to demonstrate that rapid growth in intelligence capacity can be realized at all in artificial computing systems. I propose the following three properties that may allow an artificial intelligence to exhibit a steady growth in its intelligence capacity: (i) learning, with the ability to modify itself when exposed to more data; (ii) acquiring new functionalities (skills); and (iii) expanding or replicating itself. The algorithms must demonstrate a rapid growth in skills of data processing and analysis and gain qualitatively different functionalities, at least insofar as current computing technology supports their scalable development. The existing algorithms that already encompass some of these or similar properties, as well as missing abilities that must yet be implemented, will be reviewed in this work. Future computational tests could support or oppose the hypothesis that artificial intelligence can potentially grow to the level of superintelligence which overcomes the limitations in hardware by producing necessary processing resources or by changing the physical realization of computation from using chip circuits to using quantum computing principles.


Article: Microsoft looks to ‘do for data sharing what open source did for code’

As Microsoft seeks to make data-sharing across companies easier and more pervasive, company officials have seen areas where roadblocks can occur. Prevalent among these is the lack of consistent, standardized data-sharing terms and licensing agreements. On July 23, the company took a first potential step toward remedying this gap. Microsoft is making publicly available today the first drafts of three proposed data-sharing agreements. It is looking for community feedback and input on them over the next few months. Each of the three is designed for particular data-sharing scenarios between companies – not individuals – and is covered by the Creative Commons license. Some of these agreements will be published on Microsoft’s GitHub code-sharing site. Microsoft officials said they believe these kinds of agreements could alleviate the need for companies to spend months or years negotiating and creating data-sharing governance agreements.

Let’s get it right

Wednesday, 07 Aug 2019

Posted by Michael Laux in Ethics


Article: AI Often Adds To Bias In Recruiting – But There’s A New Approach That Could Change The Game

Most people aren’t trying to be biased, but bias is inherent – it influences how we view any situation, often unconsciously. When you think of bias, characteristics like race, gender, and religion likely come to mind. But there’s a much broader context of what bias can actually be. Bias comes in many forms. For example, the halo effect occurs when we assume that our initial impression of someone means something about his or her character. The halo effect can lead us to believe, without evidence, that someone who is warm and likable when you meet them is also intelligent and capable. Similarity bias is our implicit affinity toward those similar to us. In our flawed minds, relatable traits are positive traits – even when they really aren’t. Is someone who grew up 15 minutes from you, or someone who is also a soccer fan, really more likely to be a better team member? These types of biases present a big problem in recruiting and hiring. And not just in human recruiters. When you consider that recruiting software – both AI and traditional – mirrors human tendencies, you realize that bias affects every part of the recruiting process, from in-person interviews to resume-scanning software.


Article: AI Safety and Social Data Science

Social Data Science is an exciting new area of study. This month there was no Wikipedia page on the topic, and indeed very few articles on Medium (only one specifically mentioning it). I wrote an article on the topic called Towards Social Data Science. A few institutions have begun educating students at master’s level in Social Data Science, including Oxford, LSE, and the University of Copenhagen. For this article I will focus on the University of Copenhagen, considering the possibility of contributing to the field of AI safety by combining the academic staff at the University of Copenhagen who focus on security issues with the students attending the master’s course in Social Data Science.


Article: The Legal and Ethical Challenges of Using Commercial Genealogical Data in Court

‘Meet your genes’ reads the big white print on the 23andMe website, contrasted by vibrant colors making it difficult to scroll away. The webpage is aesthetic and engaging, inviting potential customers to explore the company’s service options. For just $199 and a swab of saliva, one can learn almost everything about their ancestry and genetic wellbeing, thereby facilitating family reunions and preventative health treatments. The website exudes simplicity, ease, and innovation – but at what cost?


Article: The Metamorphosis

AI will bring many wonders. It may also destabilize everything from nuclear détente to human friendships. We need to think much harder about how to adapt.


Article: Humans versus technology

For the past decade, advancements in technology have disrupted what seems like most facets of the human experience. And with what feels like monthly advances in technology, it’s no surprise the retail industry is accelerating globally in innovation and experimentation. According to Gartner, worldwide retail-tech spending will increase by 3.6 per cent to $203.6 billion in 2019, surpassing the technology spending of most other industries.


Article: Ethically Aligned Design – A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems

‘As someone involved in the study and deployment of systems using machine intelligence in society, Ethically Aligned Design, First Edition, provides an essential contribution to the discussion and practice of designing systems that more coherently integrate with society and in ethical ways. This work aligns with my work at the MIT Media Lab and the Harvard Law School, and I look forward to continuing our collaboration.’ – Joichi Ito, MIT Media Lab. We released Ethically Aligned Design, Version 1 (EADv1) as a Request for Input in December of 2016 and received over two hundred pages of in-depth feedback about the draft. We subsequently released Ethically Aligned Design, Version 2 (EADv2) in December 2017 and received over three hundred pages of in-depth feedback about the draft. Since that time, over one thousand people from the realms of business, academia and policy helped write, edit, and review Ethically Aligned Design, First Edition.


Article: How To Use Data Science For Social Impact

Data science is truly an interdisciplinary field. Mathematicians, statisticians, computer scientists, social scientists, database administrators, data engineers, graphic designers, UI experts, journalists, storytellers, researchers, and business administrators are among many individuals involved in the process. As machine learning and AI continue to advance, philosophers and ethicists will hopefully be included in the process as well.


Paper: What is the Point of Fairness? Disability, AI and The Complexity of Justice

Work integrating conversations around AI and Disability is vital and valued, particularly when done through a lens of fairness. Yet at the same time, analyzing the ethical implications of AI for disabled people solely through the lens of a singular idea of ‘fairness’ risks reinforcing existing power dynamics, either through reinforcing the position of existing medical gatekeepers, or promoting tools and techniques that benefit otherwise-privileged disabled people while harming those who are rendered outliers in multiple ways. In this paper we present two case studies from within computer vision – a subdiscipline of AI focused on training algorithms that can ‘see’ – of technologies putatively intended to help disabled people but, through failures to consider structural injustices in their design, are likely to result in harms not addressed by a ‘fairness’ framing of ethics. Drawing on disability studies and critical data science, we call on researchers into AI ethics and disability to move beyond simplistic notions of fairness, and towards notions of justice.

Let’s get it right

Monday, 05 Aug 2019

Posted by Michael Laux in Ethics


Paper: Modelling the Safety and Surveillance of the AI Race

Innovation, creativity, and competition are some of the fundamental underlying forces driving the advances in Artificial Intelligence (AI). This race for technological supremacy creates a complex ecology of choices that may lead to negative consequences, in particular when ethical and safety procedures are underestimated or even ignored. Here we resort to a novel game-theoretical framework to describe the ongoing AI bidding war, also allowing for the identification of procedures on how to influence this race to achieve desirable outcomes. By exploring the similarities between the ongoing competition in AI and evolutionary systems, we show that the timelines in which AI supremacy can be achieved play a crucial role in the evolution of safety-prone behaviour and in whether influencing procedures are required. When this supremacy can be achieved in the short term (near AI), the significant advantage gained from winning the race leads to the dominance of those who completely ignore safety precautions to gain extra speed, rendering the presence of reciprocal behaviour irrelevant. On the other hand, when such supremacy lies in the distant future, reciprocating on others’ safety behaviour provides in itself an efficient solution, even when monitoring of unsafe development is hard. Our results suggest under what conditions AI safety behaviour requires additional supporting procedures, and they provide a basic framework for modelling them.
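
A toy flavor of the setup can be captured with replicator dynamics over two strategies, safe and unsafe development. The payoff numbers below are invented for illustration and far simpler than the paper's model.

```python
import numpy as np

b, c = 2.0, 3.0  # b: speed benefit of skipping safety; c: expected disaster cost
A = np.array([[1.0,         0.0],           # SAFE   vs (SAFE, UNSAFE)
              [1.0 + b - c, (b - c) / 2]])  # UNSAFE vs (SAFE, UNSAFE)

x = 0.5  # initial share of developers following safety precautions
for _ in range(200):  # Euler steps of the replicator equation
    f = A @ np.array([x, 1 - x])  # fitness of SAFE and of UNSAFE
    x += 0.1 * x * (1 - x) * (f[0] - f[1])
print(round(x, 3))  # with c > b, as here, safety comes to dominate
```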


Paper: Artificial Intelligence and the Future of Psychiatry: Insights from a Global Physician Survey

Futurists have predicted that new technologies, embedded with artificial intelligence (AI) and machine learning (ML), will lead to substantial job loss in many sectors disrupting many aspects of healthcare. Mental health appears ripe for such disruption given the global illness burden, stigma, and shortage of care providers. Using Sermo, a global networking platform open to verified and licensed physicians, we measured the opinions of psychiatrists about the likelihood that future autonomous technology (referred to as AI/ML) would be able to fully replace the average psychiatrist in performing 10 key tasks (e.g. mental status exam, suicidality assessment, treatment planning) carried out in mental health care. Survey respondents were 791 psychiatrists from 22 countries. Only 3.8% of respondents felt that AI/ML was likely to replace a human clinician for providing empathetic care. Documenting (e.g. updating medical records) and synthesizing information to reach a diagnosis were the two tasks where a majority predicted that future AI/ML would replace human doctors. About 1 in 2 doctors believed their jobs could be changed substantially by future AI/ML. However, female and US-based doctors were more uncertain that the possible benefits of AI would outweigh potential risks, versus their male and global counterparts. To our knowledge, this is the first global survey to seek the opinions of physicians on the impact of autonomous AI/ML on the future of psychiatry. Our findings provide compelling insights into how physicians think about intelligent technologies which may better help us integrate such tools and reskill doctors, as needed, to enhance mental health care.


Article: The AI Who Was Born on a Farm

In a recent post I looked at some ideas about how consciousness develops, and then proposed a sequence of stages that might allow an intelligent, learning machine to build a conscious self. Herein I let that AI tell its own story, explaining each of the four stages of training.


Article: Why Genuine Human Intelligence Is Key for the Development of AI

The developments have been fast and furious in recent months. Microsoft announced that it will invest $1 billion in a partnership with research lab OpenAI to create artificial general intelligence (AGI), the holy grail of artificial intelligence. OpenAI's CEO Sam Altman has boasted that 'the creation of AGI will be the most important technological development in human history'. Computers can do many very specific tasks much better than humans, but they do not have anything remotely resembling the wisdom, common sense, and critical thinking that humans use to deal with ill-defined situations, vague rules, and ambiguous, even contradictory, goals. The development of computers that can do everything the human brain does would be astonishing, but Microsoft's record is not encouraging.


Paper: Coercion, Consent, and Participation in Citizen Science

Throughout history, everyday people have contributed to science through a myriad of volunteer activities. This early participation required training and often involved mentorship from scientists or senior citizen scientists (or, as they were often called, gentleman scientists). During this learning process, participants learned how they and their data would be used to advance science and, in some cases, to advance the careers of professional collaborators. Modern, online citizen science allows participation with just a few clicks, and people may participate without understanding what they are contributing to. Too often, they happily see what they are doing as the privilege of painting Tom Sawyer's fence without realizing they are actually being used as merely a means to a scientific end. This paper discusses the ethical dilemmas that plague modern citizen science, including: issues of informed consent, such as not requiring logins; issues of coercion inherent in mandatory classroom assignments requiring data submission; and the issue of using people merely as a means to an end, which is inherent in technonationalism and in projects that provide no utility to their users beyond the knowledge they helped to create. These issues are examined within the context of astronomy citizen science.


Paper: Hateful People or Hateful Bots? Detection and Characterization of Bots Spreading Religious Hatred in Arabic Social Media

Arabic Twitter space is crawling with bots that fuel political feuds, spread misinformation, and proliferate sectarian rhetoric. While efforts have long existed to analyze and detect English-language bots, Arabic bot detection and characterization remain largely understudied. In this work, we contribute new insights into the role of bots in spreading religious hatred on Arabic Twitter and introduce a novel regression model that can accurately identify Arabic-language bots. Our assessment shows that existing tools that are highly accurate in detecting English bots do not perform as well on Arabic bots. We identify possible reasons for this poor performance, perform a thorough analysis of linguistic, content, behavioral, and network features, and report on the most informative features that distinguish Arabic bots from humans, as well as the differences between Arabic and English bots. Our results mark an important step toward understanding the behavior of malicious bots on Arabic Twitter and pave the way for more effective Arabic bot detection tools.
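
For readers unfamiliar with the general approach, here is a hedged sketch of framing bot detection as a supervised model over behavioral and content features. The features, toy data, and logistic model below are illustrative placeholders, not the paper's regression model or feature set.

```python
# A minimal sketch of bot detection as supervised learning over
# hand-crafted account features. All features and labels are synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 1000

# Toy behavioral/content features: posting volume, retweet share,
# follower-to-friend ratio, lexical diversity of recent tweets.
X = np.column_stack([
    rng.exponential(20, n),    # tweets_per_day
    rng.uniform(0, 1, n),      # retweet_fraction
    rng.lognormal(0, 1, n),    # follower_friend_ratio
    rng.uniform(0, 1, n),      # lexical_diversity
])
# Synthetic labels: very heavy, lexically repetitive posting counts as 'bot'.
y = ((X[:, 0] > 30) & (X[:, 3] < 0.4)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```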


Article: The Evolutionary Roots of Human Decision Making

Humans exhibit a suite of biases when making economic decisions. We review recent research on the origins of human decision making by examining whether similar choice biases are seen in nonhuman primates, our closest phylogenetic relatives. We propose that comparative studies can provide insight into four major questions about the nature of human choice biases that cannot be addressed by studies of our species alone. First, research with other primates can address the evolution of human choice biases and identify shared versus human-unique tendencies in decision making. Second, primate studies can constrain hypotheses about the psychological mechanisms underlying such biases. Third, comparisons of closely related species can identify when distinct mechanisms underlie related biases by examining evolutionary dissociations in choice strategies. Finally, comparative work can provide insight into the biological rationality of economically irrational preferences.


Article: AI & Ethics – Where Do We Go From Here?

The topic of ethics comes up a lot when we talk about Artificial Intelligence. ‘How do we teach an AI to make ethical decisions?’, ‘Who decides what’s ethical for an AI to do?’ and a big one: ‘Who is responsible if an AI does something considered unethical?’ Surely we can’t hold the AI accountable? It’s only a machine. Is it the programmer? They were only creating something to the specification their manager gave them. So, the manager then? But they were just creating the product ordered by the client. Is it the client? But they didn’t fully understand how the AI would make decisions. So… no one? That doesn’t seem quite right. Does that mean that we just have to trust that AI won’t be used unethically?

Let’s get it right

31 Wednesday Jul 2019

Posted by Michael Laux in Ethics

≈ Leave a comment

Paper: Conscientious Classification: A Data Scientist’s Guide to Discrimination-Aware Classification

Recent research has helped to cultivate growing awareness that machine learning systems fueled by big data can create or exacerbate troubling disparities in society. Much of this research comes from outside the practicing data science community, leaving its members with little concrete guidance for proactively addressing these concerns. This article introduces issues of discrimination to the data science community on its own terms. In it, we tour the familiar data mining process while providing a taxonomy of common practices that have the potential to produce unintended discrimination. We also survey how discrimination is commonly measured, and suggest how familiar development processes can be augmented to mitigate systems' discriminatory potential. We advocate that data scientists should be intentional about modeling and reducing discriminatory outcomes. Without this intentionality, their efforts will perpetuate any systemic discrimination that already exists, under a misleading veil of data-driven objectivity.
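
One concrete example of the discrimination measures such work surveys is demographic parity. The sketch below computes the parity difference and disparate-impact ratio between a protected and a reference group; the decisions and group labels are toy data, not drawn from the article.

```python
# A minimal sketch of two common group-fairness measures:
# demographic parity difference and the disparate-impact ratio.

import numpy as np

def demographic_parity(decisions, groups, protected, reference):
    """Return (rate difference, disparate-impact ratio) of positive decisions."""
    rate_p = decisions[groups == protected].mean()
    rate_r = decisions[groups == reference].mean()
    return rate_p - rate_r, rate_p / rate_r

decisions = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])  # 1 = favourable outcome
groups    = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

diff, ratio = demographic_parity(decisions, groups, protected="B", reference="A")
print(f"parity difference: {diff:+.2f}, disparate-impact ratio: {ratio:.2f}")
# A ratio below 0.8 would fail the 'four-fifths rule' used in US employment law.
```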


Paper: Green AI

The computations required for deep learning research have been doubling every few months, resulting in an estimated 300,000x increase from 2012 to 2018 [2]. These computations have a surprisingly large carbon footprint [38]. Ironically, deep learning was inspired by the human brain, which is remarkably energy efficient. Moreover, the financial cost of the computations can make it difficult for academics, students, and researchers from emerging economies to engage in deep learning research. This position paper advocates a practical solution by making efficiency an evaluation criterion for research alongside accuracy and related measures. In addition, we propose reporting the financial cost or ‘price tag’ of developing, training, and running models to provide baselines for the investigation of increasingly efficient methods. Our goal is to make AI both greener and more inclusive—enabling any inspired undergraduate with a laptop to write high-quality research papers. Green AI is an emerging focus at the Allen Institute for AI.
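
To give a sense of what such a 'price tag' might contain, here is a minimal sketch under simplifying assumptions: a dense network, the common rule of thumb that a backward pass costs roughly twice a forward pass, and an illustrative accuracy figure. It is not the paper's methodology.

```python
# A rough sketch of reporting training compute and an efficiency score.
# The network shape, data sizes, and accuracy below are illustrative.

def dense_forward_flops(layer_sizes):
    """Approximate multiply-accumulate FLOPs for one forward pass."""
    return sum(2 * a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

layers = [784, 512, 256, 10]           # e.g. an MNIST-scale MLP
examples, epochs = 60_000, 10
# Backward pass ~2x forward, so ~3x forward cost per training example.
train_flops = 3 * dense_forward_flops(layers) * examples * epochs
accuracy = 0.97                        # hypothetical result

print(f"training compute : {train_flops:.2e} FLOPs")
print(f"efficiency       : {accuracy / train_flops:.2e} accuracy per FLOP")
```

Reporting a number like this alongside accuracy is exactly the kind of baseline the authors argue would let efficiency compete as a research criterion.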


Article: Underwriting by Prediction Machines

Credit decisioning has always been at the forefront of adopting innovative tools and technology. As a result, error rates and the overall cost of prediction have fallen significantly since the industry adopted scorecards and machine-based prediction. From credit scoring to analytics and now to machine learning models, the fundamental problem statement in a credit decisioning model is one of prediction. Prediction is the process of filling in missing information: it takes the information (data) one has and uses it to generate information one does not have.
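
A toy illustration of 'prediction as filling in missing information' in this setting: estimate an applicant's unknown default outcome from attributes the lender does have. Every feature, number, and model choice below is a hypothetical placeholder, not an underwriting model.

```python
# A minimal sketch: the 'missing information' is whether a new applicant
# will default; known attributes and past outcomes fill it in.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
income = rng.normal(50, 15, n)          # known: income in k$/year
debt_ratio = rng.uniform(0, 1, n)       # known: debt-to-income ratio
# Known history: whether similar past borrowers defaulted.
defaulted = (debt_ratio + rng.normal(0, 0.2, n) > 0.7).astype(int)

X = np.column_stack([income, debt_ratio])
model = LogisticRegression(max_iter=1000).fit(X, defaulted)

applicant = np.array([[45.0, 0.65]])    # new applicant, outcome unknown
print(f"predicted default probability: {model.predict_proba(applicant)[0, 1]:.2f}")
```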


Article: AI & Global Governance: Human Rights and AI Ethics – Why Ethics Cannot be Replaced by the UDHR

In the increasingly popular quest of trying to make the tech world ethical, a new idea has emerged: just replace ‘ethics’ with ‘human rights’. Since no one seems to know what ‘ethics’ means, it is only natural that everyone is searching for a framework that is clearly defined, and all the better if it fits onto a single, one-page document: the Universal Declaration of Human Rights (UDHR). Unfortunately, like many shortcuts, this one also simply does not solve the problem. Let me start by summarizing the argument for using the UDHR to solve questions surrounding AI ethics. To spell out this argument, I will make use of this blog post and the report that it is based on from Harvard Law School: ‘Artificial Intelligence & Human Rights: Opportunities & Risks.’ Here is the basic argument: The UDHR provides us with a (1) guiding framework that is (2) universally agreed upon and that results in (3) legally binding rules – in contradistinction to ‘ethics’, which is (1) a matter of subjective preference (‘moral compass,’ if you will), (2) widely disputed, and (3) only as strong as the goodwill that supports it. Therefore, while appealing to ethics to solve normative questions in AI gets us into a tunnel of unending discussion, human rights is the light that we need to follow to get out on the ‘right’ side. Or so the argument goes.


Article: The Economic and Business Impacts of Artificial Intelligence: Reality, not Hype

The debate on Artificial Intelligence (AI) is characterized by hyperbole and hysteria. The hyperbole is due to two effects: first, the promotion of AI by self-interested investors. It can be termed the ‘Google-effect’, after its CEO Sundar Pichai, who declared AI to be ‘probably the most important thing humanity has ever worked on’. He would say that. Second, the promotion of AI by tech-evangelists as a solution to humanity’s fundamental problems, even death. It can be termed the ‘Singularity-effect’, after Ray Kurzweil, who believes AI will cause a ‘Singularity’ by 2045.


Article: Why Machine Learning won’t cut it

Current machine learning approaches will not get us to real AI: the kind that can truly understand you and learn new knowledge and skills by itself, the way humans do.


Article: The Limitations of Machine Learning

Machine learning is now seen as a silver bullet for solving all problems, but sometimes it is not the answer.
Limitation 1 – Ethics
Limitation 2 – Deterministic Problems
Limitation 3 – Data
Limitation 4 – Misapplication
Limitation 5 – Interpretability


Article: Avoiding Side Effects and Reward Hacking in Artificial Intelligence

I decided to take a step back, again. This time to the paper about AI safety published by OpenAI in June 2016, called Concrete Problems in AI Safety. It is now July the 26th at the time of writing, and I have great doubts as to whether I understand any more now than that collection of thinkers did then. Yet I will try my best to examine this paper.

Let’s get it right

27 Saturday Jul 2019

Posted by Michael Laux in Ethics

≈ Leave a comment

Article: AI Will Take Center Stage in Human Evolution

There’s a point in your life when you realize that the people you once looked up to just aren’t as perfect as you once imagined them to be. We’re now struggling to handle that realization with our intelligence. In a world where a moon launch, once the pinnacle of human achievement, can now be handled by a fraction of the computing power found in your smartphone (a transformation that happened in less than a century), we can’t help but wonder about where we’re headed in the future – a future where humans may not even be relevant anymore. But don’t take my word for it, Elon Musk even made a documentary on it, and it’s one of the reasons why he got involved with Neuralink. The future isn’t here yet, though, and there’s still time for us as a civilization to find ways to keep ourselves relevant – particularly in regards to our cognitive relevance, which is what got us here in the first place.


Article: Why Big Data And Machine Learning Are Important In Our Society

The singularity is near, or maybe we’re already in it. Whatever the case is, machine learning and big data will have a tremendous influence on our society. The machine minds are coming online, and you had better learn to adapt if you want to succeed. But what are big data and machine learning? Keep reading to find out.


Article: Neuralink’s Technology Is Impressive. Is It Ethical?

Imagine being able to walk into a strip mall and have thousands of microscopically fine electrodes inserted into your brain, all implanted as quickly and as efficiently as if you were having LASIK eye surgery, and designed to boost your brain from a simple smartphone app.


Article: Interested in AI Policy? Start writing

Recently, OpenAI’s Amanda Askell, Miles Brundage, and Jack Clark joined Rob Wiblin on the 80,000 hours podcast to discuss a wide range of topics related to AI philosophy. policy, and publication norms. During the conversation, they also discussed where to start if you’re trying to understand AI and AI policy. It was a topic that spoke to me directly, since I’m interested in the field but totally overwhelmed by the resources (or lack thereof) that are available.


Article: All Hail the Algorithm

A five-part series exploring the impact of algorithms on our everyday lives


Article: Data Science Ethics: Without Conscience It Is But The Ruin of the Soul

Data Science is on the agenda, but what about Data Science Ethics? The twin motors of data and information technology are driving innovation forward in almost every aspect of human enterprise. In similar fashion, Data Science today profoundly influences how business is done in fields as diverse as the life sciences, smart cities, and transportation. As cogent as these directions have become, the dangers of data science without ethical considerations are equally apparent – whether it be the protection of personally identifiable data, implicit bias in automated decision-making, the illusion of free choice in psychographics, the social impacts of automation, or the apparent divorce of truth and trust in virtual communication. Justifying the need for a focus on Data Science Ethics goes beyond a balance sheet of these opportunities and challenges, for the practice of data science challenges our perceptions of what it means to be human.


Article: Estimating the success of re-identifications in incomplete datasets using generative models

While rich medical, behavioral, and socio-demographic data are key to modern data-driven research, their collection and use raise legitimate privacy concerns. Anonymizing datasets through de-identification and sampling before sharing them has been the main tool used to address those concerns. We here propose a generative copula-based method that can accurately estimate the likelihood that a specific person will be correctly re-identified, even in a heavily incomplete dataset. On 210 populations, our method obtains AUC scores for predicting individual uniqueness ranging from 0.84 to 0.97, with a low false-discovery rate. Using our model, we find that 99.98% of Americans would be correctly re-identified in any dataset using 15 demographic attributes. Our results suggest that even heavily sampled anonymized datasets are unlikely to satisfy the modern standards for anonymization set forth by GDPR, and they seriously challenge the technical and legal adequacy of the de-identification release-and-forget model.
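
The intuition behind such re-identification results can be illustrated without the paper's copula machinery: simply count how often a record is unique on a few quasi-identifiers. The synthetic demographics below are illustrative assumptions, not real data.

```python
# A minimal sketch (not the paper's estimator): empirical uniqueness of
# records on a handful of quasi-identifiers in a synthetic population.

import numpy as np
from collections import Counter

rng = np.random.default_rng(2)
n = 100_000
records = list(zip(
    rng.integers(18, 90, n),      # age
    rng.integers(0, 2, n),        # sex
    rng.integers(0, 1000, n),     # coarse region code
    rng.integers(1, 32, n),       # birth day-of-month
))

counts = Counter(records)
unique_fraction = sum(1 for r in records if counts[r] == 1) / n
print(f"records unique on 4 coarse attributes: {unique_fraction:.1%}")
```

Even these four coarse attributes leave most synthetic records unique; richer attribute sets, as the paper shows for 15 demographic attributes, push re-identifiability toward certainty.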


Article: The Future Of Work Is An Adaptive Workforce

The future of work isn’t something that happens to you – it’s something you create for your company and your own career. Unfortunately, C-level technology and business leaders are often uncertain on how to do it. We’ve just released a major new report, ‘The Adaptive Workforce Will Drive The Future Of Work,’ to establish a North Star for your aspirations – and a blueprint for how to get there.