AI technologies have the potential to dramatically impact the lives of people with disabilities (PWD). Indeed, improving the lives of PWD is a motivator for many state-of-the-art AI systems, such as automated speech recognition tools that can caption videos for people who are deaf and hard of hearing, or language prediction algorithms that can augment communication for people with speech or cognitive disabilities. However, widely deployed AI systems may not work properly for PWD, or worse, may actively discriminate against them. These considerations regarding fairness in AI for PWD have thus far received little attention. In this position paper, we identify potential areas of concern regarding how several AI technology categories may impact particular disability constituencies if care is not taken in their design, development, and testing. We intend for this risk assessment of how various classes of AI might interact with various classes of disability to provide a roadmap for future research that is needed to gather data, test these hypotheses, and build more inclusive algorithms.
Article: A.I. and Humanity’s Self-Alienation
‘Who are we?’ is a timeless question that cannot be answered with singular specificity, for we are not any one thing. As we know it, we are many things, many cultures, many societies, many systems, many norms, many relations. We are good and evil, nurturing and threatening, smart and stupid, wise and foolish. We are, simply, human, and we come with intelligence. So, what is intelligence? Intelligence is many things. What is considered intelligent changes over time and differs across context and culture. Contrary to the way it is treated in American popular culture, intelligence is fluid, not fixed. Its evaluation is context dependent.
In ever more areas of life, algorithms are coming to substitute for judgment exercised by identifiable human beings who can be held to account. The rationale offered is that automated decision-making will be more reliable. But a further attraction is that it serves to insulate various forms of power from popular pressures. Our readiness to acquiesce in the conceit of authorless control is surely due in part to our ideal of procedural fairness, which demands that individual discretion exercised by those in power should be replaced with rules whenever possible, because authority will inevitably be abused. This is the original core of liberalism, dating from the English Revolution. Mechanized judgment resembles liberal proceduralism. It relies on our habit of deference to rules, and our suspicion of visible, personified authority. But its effect is to erode precisely those procedural liberties that are the great accomplishment of the liberal tradition, and to place authority beyond scrutiny. I mean ‘authority’ in the broadest sense, including our interactions with outsized commercial entities that play a quasi-governmental role in our lives. That is the first problem.

A second problem is that decisions made by algorithm are often not explainable, even by those who wrote the algorithm, and for that reason cannot win rational assent. This is the more fundamental problem posed by mechanized decision-making, as it touches on the basis of political legitimacy in any liberal regime.
Artificial intelligence is making a quick transition from a technology of the future to one that surrounds us in our daily lives. From taking perfect pictures to predicting what we might say next in an email, artificial intelligence is being incorporated into the products and services we use every day, transforming our lives for the better. But how might this emerging technology affect the future of our work? Of all the technologies driving digital transformation in the enterprise, people often single out AI as the most disruptive. There is no question that AI is in the process of disrupting people's day-to-day jobs through sophisticated automation.
Ever heard of anything like it before? Me neither. The robot was created as part of Futurice’s project with Yle, the national broadcast company of Finland. Yle produces content for TV, radio, and the web. It has a broad reach of older audiences, but has had trouble reaching younger ones. The goal of this project was to use new technology to reach young audiences – specifically teenagers.
More data means better models, but we may be crossing a line into what the public can tolerate, both in the types of data collected and in how we use it. The public seems divided: targeted advertising is welcome, but the increased invasion of privacy that comes with it is not.
Article: The circle of fairness
We shouldn’t ask our AI tools to be fair; instead, we should ask them to be less unfair and be willing to iterate until we see improvement. Fairness isn’t so much about ‘being fair’ as it is about ‘becoming less unfair.’ Fairness isn’t an absolute; we all have our own (and highly biased) notions of fairness. On some level, our inner child is always saying: ‘But that’s not fair.’ We know humans are biased, and it’s only in our wildest fantasies that we believe judges and other officials who administer justice somehow manage to escape the human condition. Given that, what role does software have to play in improving our lot? Can a bad algorithm be better than a flawed human? And if so, where does that lead us in our quest for justice and fairness?
Artificial intelligence will bring more of the human touch to each interaction. AI and machine learning have become unavoidable trends in customer relations. AI is unlocking and redefining the possibilities for appealing to today's most demanding consumers: meeting their ever-growing expectations and developing emotional connections to deliver a fulfilling customer experience. A report published by Juniper Research predicts that retail industry spending on AI will reach $7.3 billion per year by 2022. Notable applications like Uber and Lyft have changed consumers' expectations with regard to taxis; the experience of traditional taxis now seems outdated and ineffective. Yet the arrival of artificial intelligence is raising alarm over the loss of human contact.