Article: Programming Best Practices For Data Science

Ethical considerations are also an important area that should be reported on. It is important to think critically about them and to assess the impact of the system's capabilities; ethics needs to be part of the product design and planning process. Data science is an emerging discipline and will most likely evolve over time, so follow interesting problems, people, and technologies into whatever data science becomes. The data science life cycle generally comprises the following components:
• data retrieval
• data cleaning
• data exploration and visualization
• statistical or predictive modeling
While these components are helpful for understanding the different phases, they don’t help us think about our programming workflow.
Often, the entire data science life cycle ends up as an arbitrary mess of cells in a Jupyter Notebook or as a single unstructured script. In addition, most data science problems require us to switch back and forth between data retrieval, data cleaning, data exploration, data visualization, and statistical/predictive modeling.
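One common remedy, which the article's advice points toward, is to give each phase of the life cycle its own function so the whole pipeline can be re-run from top to bottom. The following is a minimal Python sketch, not taken from the article; the file name data.csv and the column name target are hypothetical placeholders.

    import pandas as pd
    from sklearn.linear_model import LinearRegression

    def retrieve_data(path):
        # Data retrieval: keep all I/O in one place so the source can be swapped out.
        return pd.read_csv(path)

    def clean_data(df):
        # Data cleaning: drop duplicates and rows missing the hypothetical target column.
        return df.drop_duplicates().dropna(subset=["target"])

    def explore_data(df):
        # Data exploration: summary statistics instead of scattered ad hoc cells.
        print(df.describe())

    def fit_model(df):
        # Statistical/predictive modeling: a simple linear baseline
        # (assumes the remaining columns are numeric features).
        X, y = df.drop(columns=["target"]), df["target"]
        return LinearRegression().fit(X, y)

    if __name__ == "__main__":
        df = clean_data(retrieve_data("data.csv"))
        explore_data(df)
        model = fit_model(df)

Because each phase is a named function, switching between phases means re-running one function with known inputs rather than hunting through a notebook's execution history.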


Paper: Human-Misinformation interaction: Understanding the interdisciplinary approach needed to computationally combat false information

The prevalence of new technologies and social media has amplified the effects of misinformation on our societies. Thus, it is necessary to create computational tools that mitigate those effects effectively. This study aims to provide a critical overview of computational approaches to combating misinformation. To this aim, I offer an overview of scholarly definitions of misinformation. I adopt a framework for studying misinformation that pays attention to the source, the content, and the consumers as the three main elements involved in the process of misinformation, and I provide an overview of literature from psychology, media studies, and the cognitive sciences that deals with each of these elements. Using the framework, I survey the existing computational methods that deal with (1) misinformation detection and fact-checking based on content, (2) identification of untrustworthy sources and social bots, and (3) consumer-facing tools and methods that aim to make humans resilient to misinformation. I find that the vast majority of work in computer science and information technology is concerned with the crucial tasks of detecting and verifying the content and sources of misinformation. By contrast, computational research focusing on the consumers of misinformation in Human-Computer Interaction (HCI) and related fields is very sparse and often does not deal with the subtleties of this process. The majority of existing interfaces and systems are more concerned with the robustness and accuracy of the detection methods than with the usability of the tools. Based on this survey, I call for an interdisciplinary approach to human-misinformation interaction that focuses on building methods and tools that deal robustly with such complex psychological and social phenomena.


Article: Trained neural nets perform much like humans on classic psychological tests

In the early part of the 20th century, a group of German experimental psychologists began to question how the brain acquires meaningful perceptions of a world that is otherwise chaotic and unpredictable. To answer this question, they developed the notion of the ‘gestalt effect’ – the idea that when it comes to perception, the whole is something other than the sum of its parts. Since then, psychologists have discovered that the human brain is remarkably good at perceiving complete pictures on the basis of fragmentary information. A good example is the figure shown here. The brain perceives two-dimensional shapes such as a triangle and a square, and even a three-dimensional sphere, yet none of these shapes is explicitly drawn. Instead, the brain fills in the gaps. A natural extension of this work is to ask whether gestalt effects occur in neural networks, which are inspired by the human brain. Indeed, researchers studying machine vision say the deep neural networks they have developed turn out to be remarkably similar to the visual system in primate brains and to parts of the human cortex.


Article: Artificial Intelligence v/s Humans: Why the Slave could soon become the Master

Artificial Intelligence is gradually wiping out the human touch. Not only labor-intensive jobs but also creative jobs such as journalism are under threat, owing to its reach. The vicious circle of humans programming robots to act like humans, only to be disillusioned by them, is gradually mechanizing humans. Given the degree of autonomy being granted to robots, robots procreating robots may not be a distant phenomenon. As their numbers increase, so will their impact, paving the way for the Slave to soon overtake its Master.


Article: Automation: How Can We Reskill the Workforce?

The growing interest in automation in the enterprise highlights the need to place humans in jobs that are best suited for humans rather than machines, and a need for continuous training.


Article: Constructivist Machine Learning

A vision for bringing machine learning closer to humans. Is there a way to re-interpret machine learning in a constructivist way? And more importantly, why should we do it? The answers to both questions are quite straightforward: yes, we can, and the motivation is that doing so may address one of the crucial flaws of modern machine learning by bringing it closer to human interpretation of reality. The key component of cognitive functionality is a model. Humans are able to build very complex models thanks to the way our minds work. Functionalist psychology has shown that mental models continuously build hypothetical constructs to predict the environment, and continuously modify them.


Article: Society Desperately Needs An Alternative Web

I see a society that is crumbling. Rampant technology is simultaneously capsizing industries that were previously the bread and butter of economic growth. Working men and women have felt its effects as wages stagnate and employment opportunities dwindle amid a progressively automated economy. Increasing wage inequality and financial vulnerability have given rise to populism, and the domino effects are spreading. People are angry. They demand fairness and feel threatened by policies and outsiders that may endanger their livelihoods. This has widened cultural and racial divides within and between nations. Technology has enabled this anger to spread, influence, and manipulate far faster than ever before, resulting in increasing polarization and a sweeping anxiety epidemic. Globally, we are much more connected – and this, to our detriment. We have witnessed both governments and businesses leverage technology to spread disinformation for their own gain. While regulators struggle to keep pace with these harms, the tech giants continue, unabated, to wield their influence and power to establish footprints that make both consumers and businesses increasingly dependent on their platforms and technology stacks. We cannot escape them, nor do we want to. Therein lies the concern…


Article: Automation, Risk and Robust Artificial Intelligence

The ways in which artificial intelligence (AI) is woven into our everyday lives can hardly be overstated. Powerful deep machine-learning algorithms increasingly predict which movies we want to watch, which ads we’ll respond to, how eligible we are for a loan, and how likely we are to commit a crime or perform well on the job.¹ AI is also revolutionizing the automation of physical systems such as factories, power plants, and self-driving cars, and the pace of deployment is rapidly increasing. However, recent fatal failures of autopilot systems built by Tesla, Uber, and Boeing highlight the risks of relying on opaque and highly complex software in dangerous situations.² Mitigating the dangers posed by such systems is an area of active research known as resilience engineering, but the rapid adoption of AI, coupled with its notorious lack of algorithmic transparency, makes it difficult for that research to keep pace. One of the pioneers of machine learning, Professor Thomas Dietterich, has spent the last several years investigating how artificial intelligence can be made more robust, especially when it is embedded in a complex socio-technical system where the combination of human and software errors can propagate into untested regimes. Paul Scharre’s recent book, ‘Army of None’, inspired Prof. Dietterich to look deeper into the literature on high-reliability organizations for potential solutions. ‘It drove home to me the sense that it’s not enough to make the technology reliable; you need to make the entire human organization that surrounds it reliable also. Understanding what that means has been taking up a lot of my time.’ Prof. Dietterich spared some of that time to discuss with me his thoughts on AI ethics, AI safety, and his recent article Robust artificial intelligence and robust human organizations.