Paper: Model fitting in Multiple Systems Analysis for the quantification of Modern Slavery: Classical and Bayesian approaches

Multiple Systems Estimation is a key approach for estimating the size of hidden populations, such as the number of victims of Modern Slavery. The UK Government estimate of 10,000 to 13,000 victims was obtained by a multiple systems estimate based on six lists. A stepwise method was used to choose the terms in the model. Further investigation shows that a small proportion of models give rather different answers, and that other model fitting approaches may choose one of these. Three data sets collected in the Modern Slavery context, together with a data set about the death toll in the Kosovo conflict, are used to investigate the stability and robustness of various Multiple Systems Estimation approaches. The crucial aspect is the way that interactions between lists are modelled, because these can substantially affect the results. Model selection and Bayesian approaches are considered in detail, in particular to assess their stability and robustness when applied to real data sets in the Modern Slavery context. A new Markov Chain Monte Carlo Bayesian approach is developed; overall, this gives robust and stable results, at least for the examples considered. The software and datasets are freely and publicly available to facilitate wider implementation and further research.
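A minimal sketch of the log-linear machinery behind such estimates (not the paper's six-list model): fit a main-effects Poisson model to the overlap counts of three lists and project the unobserved "on no list" cell. All counts and list names here are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical counts for three lists A, B, C: each row is a capture
# pattern (1 = appears on that list); the all-zero cell is unobserved.
data = pd.DataFrame({
    "A":     [1,  1,  1,  1,  0,  0,  0],
    "B":     [1,  1,  0,  0,  1,  1,  0],
    "C":     [1,  0,  1,  0,  1,  0,  1],
    "count": [3, 15, 10, 62,  8, 54, 70],
})

# Independence (main-effects) log-linear model fitted as a Poisson GLM.
fit = smf.glm("count ~ A + B + C", data=data,
              family=sm.families.Poisson()).fit()

# Under this model the unobserved (0,0,0) cell is exp(intercept).
dark_figure = np.exp(fit.params["Intercept"])
print(f"Estimated uncounted: {dark_figure:.0f}")
print(f"Estimated total population: {data['count'].sum() + dark_figure:.0f}")
```

Interaction terms (e.g. A:B in the formula) would model dependence between lists; as the abstract notes, which interactions are included is exactly what can swing the estimate.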


Paper: Using Machine Learning to Guide Cognitive Modeling: A Case Study in Moral Reasoning

Large-scale behavioral datasets enable researchers to use complex machine learning algorithms to better predict human behavior, yet this increased predictive power does not always lead to a better understanding of the behavior in question. In this paper, we outline a data-driven, iterative procedure that allows cognitive scientists to use machine learning to generate models that are both interpretable and accurate. We demonstrate this method in the domain of moral decision-making, where standard experimental approaches often identify relevant principles that influence human judgments, but fail to generalize these findings to ‘real world’ situations that place these principles in conflict. The recently released Moral Machine dataset allows us to build a powerful model that can predict the outcomes of these conflicts while remaining simple enough to explain the basis behind human decisions.
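A hedged sketch of the kind of interpretable choice model the abstract points to (not the authors' actual pipeline): a logistic model over feature differences between the two outcomes of a dilemma, so each coefficient reads directly as the weight of one moral principle. Features, weights, and data below are simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical featurization of a Moral Machine-style dilemma: each row is
# the feature difference between the two outcomes (e.g. lives spared,
# children spared, animals spared); y = 1 if the first side was chosen.
rng = np.random.default_rng(0)
X = rng.integers(-3, 4, size=(500, 3)).astype(float)
true_w = np.array([1.0, 1.5, 0.2])  # assumed "principle" weights
y = (X @ true_w + rng.logistic(size=500) > 0).astype(int)

# A linear choice model keeps the learned trade-offs interpretable:
# each fitted coefficient estimates the weight of one principle.
model = LogisticRegression().fit(X, y)
print(dict(zip(["lives", "children", "animals"], model.coef_[0].round(2))))
```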


Article: AI Safety Needs Social Scientists

Properly aligning advanced AI systems with human values will require resolving many uncertainties related to the psychology of human rationality, emotion, and biases. These can only be resolved empirically through experimentation – if we want to train AI to do what humans want, we need to study humans.
Definitions of alignment: reasoning and reflective equilibrium. Several factors can limit the reliability of direct human judgments:
1. Cognitive and ethical biases
2. Lack of domain knowledge
3. Limited cognitive capacity
4. ‘Correctness’ may be local


Article: Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice

Law enforcement agencies are increasingly using algorithmic predictive policing systems to forecast criminal activity and allocate police resources. Yet in numerous jurisdictions, these systems are built on data produced within the context of flawed, racially fraught and sometimes unlawful practices (‘dirty policing’). This can include systemic data manipulation, falsifying police reports, unlawful use of force, planted evidence, and unconstitutional searches. These policing practices shape the environment and the methodology by which data is created, which leads to inaccuracies, skews, and forms of systemic bias embedded in the data (‘dirty data’). Predictive policing systems informed by such data cannot escape the legacy of unlawful or biased policing practices that they are built on. Nor do claims by predictive policing vendors that these systems provide greater objectivity, transparency, or accountability hold up. While some systems offer the ability to see the algorithms used and even occasionally access to the data itself, there is no evidence to suggest that vendors independently or adequately assess the impact that unlawful and biased policing practices have on their systems, or otherwise assess how broader societal biases may affect their systems.


Article: Should We Treat Data as Labor? Moving Beyond ‘Free’

In the digital economy, user data is typically treated as capital created by corporations observing willing individuals. This neglects users’ role in creating data, reducing incentives for users, distributing the gains from the data economy unequally, and stoking fears of automation. Instead, treating data (at least partially) as labor could help resolve these issues and restore a functioning market for user contributions, but may run against the near-term interests of dominant data monopsonists who have benefited from data being treated as ‘free’. Countervailing power, in the form of competition, a data labor movement, and/or thoughtful regulation, could help restore balance.


Article: Teaching AI Human Values

OpenAI believes that the path to safe AI requires the social sciences. Ensuring fairness and safety in artificial intelligence (AI) applications is considered by many the biggest challenge in the space. As AI systems match or surpass human intelligence in many areas, it is essential that we establish guidelines to align this new form of intelligence with human values. The challenge is that, as humans, we understand very little about how our values are represented in the brain, and we often can’t even formulate specific rules to describe a given value. While AI operates in a data universe, human values are a byproduct of our evolution as social beings. We don’t describe human values like fairness or justice in neuroscientific terms, but with arguments from social sciences like psychology, ethics, or sociology.

Recently, researchers from OpenAI published a paper describing the importance of the social sciences for improving the safety and fairness of AI algorithms in processes that require human intervention. We often hear that we need to avoid bias in AI algorithms by using fair and balanced training datasets. While that’s true in many scenarios, there are many instances in which fairness can’t be described using simple data rules. A simple question such as ‘do you prefer A to B?’ can have many answers depending on the specific context, human rationality, or emotion. Imagine the task of inferring a pattern of ‘happiness’, ‘responsibility’, or ‘loyalty’ from a specific dataset. Can we describe those values simply using data? Extrapolating that lesson to AI systems tells us that, in order to align them with human values, we need help from the disciplines that best understand human behavior.
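To make the ‘do you prefer A to B’ point concrete, here is a minimal sketch, not OpenAI’s method, of recovering latent values from pairwise preference judgments with a Bradley-Terry model; the tallies below are hypothetical.

```python
import numpy as np

# Hypothetical 'do you prefer A to B?' tallies among four candidate
# behaviours: wins[i, j] = number of times i was preferred over j.
wins = np.array([[0, 8, 6, 9],
                 [2, 0, 5, 7],
                 [4, 5, 0, 6],
                 [1, 3, 4, 0]], dtype=float)
total = wins + wins.T  # comparisons made between each pair

# Zermelo/MM fixed-point iteration for Bradley-Terry strengths.
scores = np.ones(4)
for _ in range(200):
    for i in range(4):
        denom = (total[i] / (scores[i] + scores)).sum()
        scores[i] = wins[i].sum() / denom
    scores /= scores.sum()  # fix the scale: strengths sum to 1

print(scores.round(3))  # inferred preference strength of each behaviour
```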


Article: The BAI Fall Session on the Ethics of Data Science

The Business Analytics Institute and SDMIMD will be offering a 10-day Fall Session, September 6th to 15th, in Mysore, India, on the ethical implications of data science. This year’s session will highlight the ethical challenges associated with the practice of data science. Our professors and experts will help established managers and management students focus on data protection, consumer privacy, consumer trust, implicit bias, automated decision-making, and prescriptive analytics. Each course discussion will be facilitated by an industry-recognized expert in the field. Company visits and talks from industry leaders will round out the session, and a dedicated study tour/track will be organized for our business delegates.


Paper: Emulating Human Developmental Stages with Bayesian Neural Networks

We compare the acquisition of knowledge in humans and machines. Research from the field of developmental psychology indicates that human hypotheses are initially guided by simple rules before evolving into more complex theories. This observation holds across many tasks and domains. We investigate whether stages of development in artificial learning systems are based on the same characteristics. We operationalize developmental stage as the size of the dataset on which the artificial system is trained. For our analysis we examine the developmental progress of Bayesian Neural Networks on three different datasets, covering occlusion, support, and quantity-comparison tasks. We compare the results with prior research from developmental psychology and find agreement between the family of optimized models and the patterns of development observed in infants and children on all three tasks, indicating common principles for the acquisition of knowledge.
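A rough sketch of this operationalization (not the authors’ setup): the snippet below uses MC dropout as a cheap approximation to a Bayesian Neural Network and trains it on growing subsets of a toy quantity-comparison task, reporting predictive mean and uncertainty at each ‘stage’. Architecture, task, and sizes are all assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy quantity-comparison task: which of two counts is larger?
X = torch.randint(1, 10, (1000, 2)).float()
y = (X[:, 0] > X[:, 1]).float().unsqueeze(1)

def stage(n):
    """'Developmental stage' = train on the first n examples."""
    net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(),
                        nn.Dropout(0.2), nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(300):
        opt.zero_grad()
        loss = nn.functional.binary_cross_entropy_with_logits(net(X[:n]), y[:n])
        loss.backward()
        opt.step()
    net.train()  # keep dropout active: each forward pass samples the posterior
    probe = torch.tensor([[7.0, 3.0]])
    with torch.no_grad():
        samples = torch.stack([torch.sigmoid(net(probe)) for _ in range(100)])
    return samples.mean().item(), samples.std().item()

for n in (10, 100, 1000):
    mean, std = stage(n)
    print(f"n={n:4d}  P(left > right) = {mean:.2f} +/- {std:.2f}")
```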