Article: A Human Centered approach to AI

Tools and approaches to help CX designers and product owners tackle emerging-tech projects.

Article: Artificial Intelligence: Do stupid things faster with more energy!

If you think this is a no-brainer and reliable option (A) is the obvious answer, think again. It really depends on the skills of whoever’s giving the workers their instructions. Reliable workers will efficiently scale up the intelligent decision-making of a good leader, but they will unfortunately also amplify a foolish decision-maker. Remember those classic café posters? ‘Coffee: Do stupid things faster with more energy!’ When a leader is incompetent (or depraved), unreliable workers are a blessing. Can’t drag single-minded determination out of them? How wonderful! Things can get scary when zealots wholeheartedly pursue objectives set by a bad decision-maker.

Article: The Weaponization of Artificial Intelligence

In 2019, we live in a world where we are augmenting our soldiers and military units with AR. We are increasingly living in a world of deepfakes, and the weaponization of AI appears to have no limit. AI technology has for years led military leaders to ponder a future of warfare that needs little human involvement, yet in 2019, strikingly, it is consumers who are under the dark threat of a repressive, authoritarian internet. In China, the internet can bar millions from travel for 'social credit' offences. Meanwhile, new apps weaponize submissiveness to the state. We already know Facebook and other American tech companies practice privacy invasion and third-party data harvesting on a nightmarish scale, exploiting user data.

Article: Debating the AI Safety Debate

As I move into the area of AI Safety within the field of artificial intelligence (AI), I find myself both confused and perplexed. Where do you even start? I covered the financial developments at OpenAI yesterday, and they are one of the foremost authorities on AI Safety. As such, I thought it would be interesting to look at one of their papers. The paper that I will be looking at is called 'AI Safety via Debate', published in October 2018. You can of course read the paper yourself on arXiv, and critique my article in turn; that would be the ideal situation. This debate about AI debates is of course ongoing.

Article: If Software is Eating the World

With the advent of Alexa, Google Assistant, Siri, and Alibaba and Baidu killing it in smart speaker adoption in China, consumer voice AI is eating the world, but to what end? Case in point: Amazon's Echo devices don't make much of a profit on hardware sales. It's always been about the software ecosystem and the add-on value it could create. The longer-term goal could be to make money off an app marketplace through skills. 'Skills' are, in effect, what Alexa calls the apps in its store.

Article: AI equal with human experts in medical diagnosis, study finds

Artificial intelligence is on a par with human experts when it comes to making medical diagnoses based on images, a review has found. The potential for artificial intelligence in healthcare has caused excitement, with advocates saying it will ease the strain on resources, free up time for doctor-patient interactions and even aid the development of tailored treatment. Last month the government announced £250m of funding for a new NHS artificial intelligence laboratory.

Paper: Minimizing Margin of Victory for Fair Political and Educational Districting

In many practical scenarios, a population is divided into disjoint groups for better administration, e.g., electorates into political districts, employees into departments, students into school districts, and so on. However, grouping people arbitrarily may lead to biased partitions, raising concerns of gerrymandering in political districting, racial segregation in schools, etc. To counter such issues, in this paper, we conceptualize such problems in a voting scenario, and propose the FAIR DISTRICTING problem: divide a given set of people, each having a preference over candidates, into k groups such that the maximum margin of victory of any group is minimized. We also propose the FAIR CONNECTED DISTRICTING problem, which additionally requires each group to be connected. We show that the FAIR DISTRICTING problem is NP-complete for plurality voting even if we have only 3 candidates, but admits polynomial-time algorithms if we assume k to be some constant or allow everyone to be moved to any group. In contrast, we show that the FAIR CONNECTED DISTRICTING problem is NP-complete for plurality voting even if we have only 2 candidates and k = 2. Finally, we propose heuristic algorithms for both problems and show their effectiveness in UK political districting and in lowering racial segregation in public schools in the US.
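The paper's objective is easy to illustrate in code. The sketch below is a hypothetical simplification, not the paper's formal definition: it treats the plurality margin of victory of a group as the winner's tally minus the runner-up's, and the FAIR DISTRICTING objective as the largest such margin over all groups (the quantity the partition should minimize).

```python
from collections import Counter

def plurality_margin(ballots):
    """Simplified margin of victory under plurality voting:
    winner's tally minus the runner-up's tally.
    Each ballot is just the voter's top-choice candidate."""
    tally = Counter(ballots)
    counts = sorted(tally.values(), reverse=True)
    if len(counts) < 2:
        return counts[0]  # unopposed winner: margin is the full tally
    return counts[0] - counts[1]

def max_group_margin(groups):
    """The FAIR DISTRICTING objective (sketch): the maximum
    margin of victory over all groups in the partition."""
    return max(plurality_margin(g) for g in groups)

# Two hypothetical districts of top-choice ballots
districts = [
    ["A", "A", "A", "B"],       # A wins 3-1, margin 2
    ["A", "B", "B", "C", "B"],  # B wins 3-1, margin 2
]
print(max_group_margin(districts))  # → 2
```

A fair-districting algorithm would search over partitions of the voters (subject to group-size or connectivity constraints) to make this maximum margin as small as possible; the hardness results above show that search is NP-complete in general.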

Paper: Machine learning in healthcare — a system’s perspective

A consequence of the fragmented and siloed healthcare landscape is that patient care (and data) is split across a multitude of different facilities and computer systems, and enabling interoperability between these systems is hard. The lack of interoperability not only hinders continuity of care and burdens providers, but also hinders effective application of Machine Learning (ML) algorithms. Thus, most current ML algorithms, designed to understand patient care and facilitate clinical decision support, are trained on limited datasets. This approach is analogous to the Newtonian paradigm of Reductionism, in which a system is broken down into elementary components and a description of the whole is formed by understanding those components individually. A key limitation of the reductionist approach is that it ignores the component-component interactions and dynamics within the system, which are often of prime significance in understanding the overall behaviour of complex adaptive systems (CAS). Healthcare is a CAS. Though the application of ML to health data has shown incremental improvements for clinical decision support, ML has a much broader potential to restructure care delivery as a whole and maximize care value. However, this ML potential remains largely untapped, primarily due to functional limitations of Electronic Health Records (EHR) and the inability to see the healthcare system as a whole. This viewpoint (i) articulates healthcare as a complex system with both a biological and an organizational perspective, (ii) motivates, with examples, the need for a systems approach when addressing healthcare challenges via ML, and (iii) emphasizes the need to unleash EHR functionality — while duly respecting all ethical and legal concerns — to reap the full benefits of ML.