Paper: Modelling the Safety and Surveillance of the AI Race
Innovation, creativity, and competition are some of the fundamental underlying forces driving the advances in Artificial Intelligence (AI). This race for technological supremacy creates a complex ecology of choices that may lead to negative consequences, in particular when ethical and safety procedures are underestimated or even ignored. Here we resort to a novel game-theoretical framework to describe the ongoing AI bidding war, which also allows for the identification of procedures to influence this race towards desirable outcomes. By exploring the similarities between the ongoing competition in AI and evolutionary systems, we show that the timelines in which AI supremacy can be achieved play a crucial role in the evolution of safety-prone behaviour and in whether influencing procedures are required. When this supremacy can be achieved in the short term (near AI), the significant advantage gained from winning the race leads to the dominance of those who completely ignore safety precautions to gain extra speed, rendering the presence of reciprocal behaviour irrelevant. On the other hand, when such supremacy lies in the distant future, reciprocating on others’ safety behaviour provides in itself an efficient solution, even when monitoring of unsafe development is hard. Our results suggest under what conditions AI safety behaviour requires additional supporting procedures and provide a basic framework to model them.
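The evolutionary framing above can be illustrated with replicator dynamics over a two-strategy race game (SAFE developers who follow precautions vs. UNSAFE developers who skip them for speed). The payoff matrices below are purely illustrative assumptions, not the paper's actual model or parameters; they merely sketch how the value of winning soon can flip which strategy dominates.

```python
import numpy as np

# Illustrative replicator-dynamics sketch of a two-strategy "AI race" game.
# Strategy 0 = SAFE (follows precautions), strategy 1 = UNSAFE (skips them).
# All payoff values are hypothetical placeholders, not taken from the paper.

def replicator_step(x, payoff, dt=0.01):
    """One Euler step of the replicator equation for strategy frequencies x."""
    fitness = payoff @ x          # expected payoff of each strategy
    avg = x @ fitness             # population-average payoff
    return x + dt * x * (fitness - avg)

def run(payoff, x0, steps=20000):
    """Iterate the dynamics from initial frequencies x0 until (near) fixation."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = replicator_step(x, payoff)
    return x

# "Near AI": winning soon is highly valuable, so skipping safety pays off
# against every opponent (row = own strategy, column = opponent's strategy).
near_ai = np.array([[2.0, 0.5],
                    [4.0, 1.0]])

# "Distant AI": the race is long, the speed advantage matters less, and
# mutual safety yields the best long-run payoff.
distant_ai = np.array([[3.0, 2.5],
                       [2.0, 1.0]])

print(run(near_ai, [0.5, 0.5]))      # UNSAFE takes over the population
print(run(distant_ai, [0.5, 0.5]))   # SAFE takes over the population
```

Under these assumed payoffs, UNSAFE strictly dominates in the near-AI regime and SAFE dominates in the distant-AI regime, mirroring the abstract's qualitative claim about timelines.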
Paper: Artificial Intelligence and the Future of Psychiatry: Insights from a Global Physician Survey
Futurists have predicted that new technologies, embedded with artificial intelligence (AI) and machine learning (ML), will lead to substantial job loss in many sectors, disrupting many aspects of healthcare. Mental health appears ripe for such disruption given the global illness burden, stigma, and shortage of care providers. Using Sermo, a global networking platform open to verified and licensed physicians, we measured the opinions of psychiatrists about the likelihood that future autonomous technology (referred to as AI/ML) would be able to fully replace the average psychiatrist in performing 10 key tasks (e.g. mental status exam, suicidality assessment, treatment planning) carried out in mental health care. Survey respondents were 791 psychiatrists from 22 countries. Only 3.8% of respondents felt that AI/ML was likely to replace a human clinician for providing empathetic care. Documenting (e.g. updating medical records) and synthesizing information to reach a diagnosis were the two tasks where a majority predicted that future AI/ML would replace human doctors. About 1 in 2 doctors believed their jobs could be changed substantially by future AI/ML. However, female and US-based doctors were more uncertain that the possible benefits of AI would outweigh potential risks than their male and non-US counterparts. To our knowledge, this is the first global survey to seek the opinions of physicians on the impact of autonomous AI/ML on the future of psychiatry. Our findings provide compelling insights into how physicians think about intelligent technologies, which may help us better integrate such tools and reskill doctors, as needed, to enhance mental health care.
Article: The AI Who Was Born on a Farm
In a recent post I looked at some ideas about how consciousness develops, and then proposed a sequence of stages that might allow an intelligent, learning machine to build a conscious self. Herein I let that AI tell its own story, explaining each of the 4 stages of training.
Article: Why Genuine Human Intelligence Is Key for the Development of AI
The developments have been fast and furious in recent months. Microsoft announced that it will invest $1 billion in a partnership with research lab OpenAI to create artificial general intelligence (AGI), the holy grail of artificial intelligence. OpenAI’s CEO Sam Altman has boasted that ‘the creation of AGI will be the most important technological development in human history’. Computers can do many very specific tasks much better than humans, but they do not have anything remotely resembling the wisdom, common sense, and critical thinking that humans use to deal with ill-defined situations, vague rules, and ambiguous, even contradictory, goals. The development of computers that can do everything the human brain does would be astonishing, but Microsoft’s record is not encouraging.
Paper: Coercion, Consent, and Participation in Citizen Science
Throughout history, everyday people have contributed to science through a myriad of volunteer activities. This early participation required training and often involved mentorship from scientists or senior citizen scientists (or, as they were often called, gentleman scientists). During this learning process, participants learned how they and their data would be used both to advance science and, in some cases, to advance the careers of professional collaborators. Modern, online citizen science allows participation with just a few clicks, and people may participate without understanding what they are contributing to. Too often, they happily see what they are doing as the privilege of painting Tom Sawyer’s fence without realizing they are actually being used as merely a means to a scientific end. This paper discusses the ethical dilemmas that plague modern citizen science, including: the issues of informed consent, such as not requiring logins; the issues of coercion inherent in mandatory classroom assignments requiring data submission; and the issues, inherent in technonationalism and in projects that provide users no utility beyond the knowledge they helped create, of using people merely as a means to an end. These dilemmas are examined within the context of astronomy citizen science.
Paper: Hateful People or Hateful Bots? Detection and Characterization of Bots Spreading Religious Hatred in Arabic Social Media
Arabic Twitter space is crawling with bots that fuel political feuds, spread misinformation, and proliferate sectarian rhetoric. While efforts have long existed to analyze and detect English bots, Arabic bot detection and characterization remain largely understudied. In this work, we contribute new insights into the role of bots in spreading religious hatred on Arabic Twitter and introduce a novel regression model that can accurately identify Arabic-language bots. Our assessment shows that existing tools that are highly accurate in detecting English bots do not perform as well on Arabic bots. We identify the possible reasons for this poor performance, perform a thorough analysis of linguistic, content, behavioral, and network features, and report on the most informative features that distinguish Arabic bots from humans, as well as the differences between Arabic and English bots. Our results mark an important step toward understanding the behavior of malicious bots on Arabic Twitter and pave the way for more effective Arabic bot detection tools.
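The feature-based classification approach described above can be sketched as a small logistic regression over per-account features. The feature names and the synthetic training rows below are hypothetical placeholders for illustration only; they are not the paper's actual model, feature set, or data.

```python
import math

# Minimal sketch of a feature-based bot classifier, in the spirit of the
# regression approach described above. Features and data are synthetic
# placeholders, not the paper's model or dataset.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    """Fit logistic-regression weights and bias by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                      # gradient of log-loss w.r.t. logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return 1 (bot) if the predicted probability is at least 0.5, else 0 (human)."""
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5 else 0

# Hypothetical per-account features, each normalised to [0, 1]:
# [tweet_rate, fraction_retweets, duplicate_content_score]
X = [
    [0.90, 0.95, 0.80],   # bot-like: high volume, mostly retweets, repetitive
    [0.80, 0.90, 0.70],
    [0.95, 0.85, 0.90],
    [0.10, 0.20, 0.05],   # human-like: low volume, varied content
    [0.20, 0.10, 0.10],
    [0.05, 0.30, 0.00],
]
y = [1, 1, 1, 0, 0, 0]    # 1 = bot, 0 = human

w, b = train(X, y)
print(predict(w, b, [0.85, 0.90, 0.75]))   # score an unseen bot-like account
print(predict(w, b, [0.10, 0.15, 0.05]))   # score an unseen human-like account
```

A real pipeline would of course add the linguistic, behavioral, and network features the paper analyzes, plus proper normalisation and held-out evaluation; this sketch only shows the classification core.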
Article: The Evolutionary Roots of Human Decision Making
Humans exhibit a suite of biases when making economic decisions. We review recent research on the origins of human decision making by examining whether similar choice biases are seen in nonhuman primates, our closest phylogenetic relatives. We propose that comparative studies can provide insight into four major questions about the nature of human choice biases that cannot be addressed by studies of our species alone. First, research with other primates can address the evolution of human choice biases and identify shared versus human-unique tendencies in decision making. Second, primate studies can constrain hypotheses about the psychological mechanisms underlying such biases. Third, comparisons of closely related species can identify when distinct mechanisms underlie related biases by examining evolutionary dissociations in choice strategies. Finally, comparative work can provide insight into the biological rationality of economically irrational preferences.
Article: AI & Ethics – Where Do We Go From Here?
The topic of ethics comes up a lot when we talk about Artificial Intelligence. ‘How do we teach an AI to make ethical decisions?’, ‘Who decides what’s ethical for an AI to do?’ and a big one: ‘Who is responsible if an AI does something considered unethical?’ Surely we can’t hold the AI accountable? It’s only a machine. Is it the programmer? They were only creating something to the specification their manager gave them. So, the manager then? But they were just creating the product ordered by the client. Is it the client? But they didn’t fully understand how the AI would make decisions. So… no one? That doesn’t seem quite right. Does that mean that we just have to trust that AI won’t be used unethically?