Paper: Towards Effective Human-AI Teams: The Case of Human-Robot Packing

We focus on the problem of designing an artificial agent capable of assisting a human user in completing a task. Our goal is to guide human users towards optimal task performance while keeping their cognitive load as low as possible. Our insight is that to do so, we should develop an understanding of human decision making in the task domain. In this work, we consider the domain of collaborative packing, and as a first step, we explore the mechanisms underlying human packing strategies. We conduct a user study in which human participants complete a series of packing tasks in a virtual environment. We analyze their packing strategies and discover that they exhibit specific spatial and temporal patterns (e.g., humans tend to place larger items into corners first). We expect that imbuing an artificial agent with an understanding of this spatiotemporal structure will enable improved assistance, which will be reflected in both task performance and human perception of the AI agent. Ongoing work involves the development of a framework that incorporates the extracted insights to predict and steer human decision making towards an efficient course of action with low cognitive load. A follow-up study will evaluate our framework against a set of baselines featuring distinct assistance strategies. Our eventual goal is the deployment and evaluation of our framework on an autonomous robotic manipulator that actively assists users in a packing task.
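The spatiotemporal pattern reported above (larger items first, corners preferred) can be sketched as a simple placement heuristic. The grid representation, function names, and scoring rule below are illustrative assumptions for a 2D container, not the authors' implementation:

```python
# Sketch of the observed human packing heuristic:
# temporal pattern  -> handle larger items first;
# spatial pattern   -> prefer positions nearest a container corner.

def corner_score(pos, width, height):
    """Distance of a position from the nearest container corner (lower is better)."""
    x, y = pos
    return min(x, width - 1 - x) + min(y, height - 1 - y)

def pack(items, width, height):
    """items: list of (name, w, h) rectangles; returns {name: (x, y)} placements."""
    grid = [[False] * width for _ in range(height)]
    placements = {}
    # Larger items are placed first (sorted by area, descending).
    for name, w, h in sorted(items, key=lambda it: it[1] * it[2], reverse=True):
        candidates = [
            (x, y)
            for y in range(height - h + 1)
            for x in range(width - w + 1)
            if all(not grid[y + dy][x + dx] for dy in range(h) for dx in range(w))
        ]
        if not candidates:
            continue  # item does not fit; a real planner would backtrack
        # Among feasible positions, prefer the one closest to a corner.
        x, y = min(candidates, key=lambda p: corner_score(p, width, height))
        for dy in range(h):
            for dx in range(w):
                grid[y + dy][x + dx] = True
        placements[name] = (x, y)
    return placements
```

With a 3x3 container, a 2x2 item is placed into the (0, 0) corner before the smaller item is considered, mirroring the corners-first, largest-first behavior observed in the study.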

Article: We can’t trust AI systems built on deep learning alone

Gary Marcus is not impressed by the hype around deep learning. While the NYU professor believes that the technique has played an important role in advancing AI, he also thinks the field’s current overemphasis on it may well lead to its demise. Marcus, a neuroscientist by training who has spent his career at the forefront of AI research, cites both technical and ethical concerns. From a technical perspective, deep learning may be good at mimicking the perceptual tasks of the human brain, like image or speech recognition. But it falls short on other tasks, like understanding conversations or causal relationships. To create more capable and broadly intelligent machines, often referred to colloquially as artificial general intelligence, deep learning must be combined with other methods.

Article: Why the ‘why way’ is the right way to restoring trust in AI

As more and more organizations rely on AI to deliver services and consumer experiences, establishing public trust in AI is crucial as these systems begin to make harder decisions that impact customers.

Article: Don’t blame the AI, it’s the humans who are biased.

Bias in AI programming, both conscious and unconscious, is an issue of concern raised by scholars, the public, and the media alike. Given the implications of its use in hiring, credit, social benefits, policing, and legal decisions, they have good reason to be concerned. AI bias occurs when a computer algorithm makes prejudiced decisions based on data and/or programming rules. The problem of bias lies not only in the coding (or programming), but also in the datasets used to train AI algorithms, in what some call the ‘discrimination feedback loop.’

Article: Saving democracy from fakes and AI misuse

Today was another rainy Friday afternoon in Berlin. At 4 pm sharp, a dear colleague of mine came to my desk and took me out for a quick coffee break. While we walked, avoiding the small puddles on the ground, she said: ‘Yesterday I saw a movie about this couple. After nine years together, and still loving each other, they broke up. The girl had an amazing job opportunity somewhere else and the guy did not want a long-distance relationship. Seriously mate, heartbreaking.’ One thing led to another, and suddenly I said: ‘Look, no matter what everybody says, I am convinced that when two people love, truly love, each other, everything can be solved. They can overcome anything. I understand and respect other people’s opinions, but love is at the core of everything that I am and do.’ ‘But, dude, really, why can’t you just learn from the human experiences around you? Can’t you see that life is harsh and realise that love is not enough?’, she replied.

Article: Ethics and Security in Data Science

Given the benefits of data science, you might guess that it has played a role in your daily life. After all, it affects not only what you do online, but what you do offline. Companies are using massive amounts of data to create better ads, produce tailored recommendations, and, in the case of retail stores, stock shelves. It’s also shaping how, and whom, we love. Here’s how data impacts us daily.

Article: Bias and Algorithmic Fairness

The modern business leader’s new responsibility in a brave new world ruled by data. As Data Science moves along the hype cycle and matures as a business function, so do the challenges that face the discipline. The problem statement for data science went from ‘we waste 80% of our time preparing data’ via ‘production deployment is the most difficult part of data science’ to ‘lack of measurable business impact’ in the last few years.

Paper: Data management for platform-mediated public services: Challenges and best practices

Services mediated by ICT platforms have shaped the landscape of the digital markets and produced immense economic opportunities. Unfortunately, the users of platforms not only surrender the value of their digital traces but also subject themselves to the power and control that data brokers exert for prediction and manipulation. As the platform revolution takes hold in public services, it is critically important to protect the public interest against the risks of mass surveillance and human rights abuses. We propose a set of design constraints that should underlie data systems in public services and which can serve as a guideline or benchmark in the assessment and deployment of platform-mediated services. The principles include, among others, minimizing control points and non-consensual trust relationships, empowering individuals to manage the linkages between their activities, and empowering local communities to create their own trust relations. We further propose a set of generic and generative design primitives that fulfil the proposed constraints and exemplify best practices in the deployment of platforms that deliver services in the public interest. For example, blind tokens and attribute-based authorization may prevent the undue linking of data records on individuals. We suggest that policymakers could adopt these design primitives and best practices as standards by which the appropriateness of candidate technology platforms can be measured in the context of their suitability for delivering public services.
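The two primitives named in the abstract can be illustrated with a simplified sketch: access decisions are made on attributes rather than identity, and each service derives its own unlinkable pseudonym for a user. The keyed-pseudonym scheme below is a stand-in for proper blind tokens (which would use blind signatures); all names are illustrative assumptions, not the paper's protocol:

```python
# Sketch: attribute-based authorization plus per-service unlinkable tokens.
# This is NOT a cryptographic blind-token construction; it only illustrates
# the design goal that records about one person cannot be joined across services.

import hashlib
import hmac
import secrets

# Each service holds its own secret key, never shared with the others.
SERVICE_KEYS = {
    "housing": secrets.token_bytes(32),
    "transport": secrets.token_bytes(32),
}

def authorize(attributes, required):
    """Attribute-based check: grant access iff every required attribute holds,
    without ever inspecting the user's identity."""
    return required.issubset(attributes)

def service_token(service, user_secret):
    """Derive a service-scoped pseudonym from the user's secret. Tokens issued
    by different services cannot be correlated without the services' keys."""
    return hmac.new(SERVICE_KEYS[service], user_secret, hashlib.sha256).hexdigest()

# Example flow: a resident accesses two public services.
user_secret = secrets.token_bytes(32)
attrs = {"resident", "adult"}

if authorize(attrs, {"resident"}):
    housing_id = service_token("housing", user_secret)
    transport_id = service_token("transport", user_secret)
    # Same person, two pseudonyms: the records remain unlinkable.
    assert housing_id != transport_id
```

The design choice this illustrates is the abstract's "minimizing control points": no single party holds a global identifier that would let data brokers join records across services.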