This report identifies and names the Alternative Influence Network (AIN): an assortment of scholars, media pundits, and internet celebrities who use YouTube to promote a range of political positions, from mainstream versions of libertarianism and conservatism, all the way to overt white nationalism. Content creators in the AIN claim to provide an alternative media source for news and political commentary. They function as political influencers who adopt the techniques of brand influencers to build audiences and ‘sell’ them on far-right ideology. This report presents data from approximately 65 political influencers across 81 channels. This network is connected through a dense system of guest appearances, mixing content from a variety of ideologies. This cross-promotion of ideas forms a broader ‘reactionary’ position: a general opposition to feminism, social justice, or left-wing politics.
According to the World Health Organization, more than one billion people worldwide have disabilities. The field of disability studies defines disability through a social lens; people are disabled to the extent that society creates accessibility barriers. AI technologies offer the possibility of removing many accessibility barriers; for example, computer vision might help people who are blind better sense the visual world, speech recognition and translation technologies might offer real-time captioning for people who are hard of hearing, and new robotic systems might augment the capabilities of people with limited mobility. Considering the needs of users with disabilities can help technologists identify high-impact challenges whose solutions can advance the state of AI for all users; however, ethical challenges such as inclusivity, bias, privacy, error, expectation setting, simulated data, and social acceptability must be considered.
Technology is becoming a bigger part of our lives every day. Imagine driving to a new place without your sat-nav, ordering a takeaway without an app, or finding a new place to eat it without a quick search on Google. Now think about the things you don’t see: how Google orders the billions of results it finds when you ask it ‘Why is the sky blue?’ or ‘Where is Dubai?’ (searched 165,000 and 60,500 times a month on average).
The detection of clandestine efforts to influence users in online communities is a challenging problem and an area of significant active development. We demonstrate that features derived from the text of user comments are useful for identifying suspect activity, but lead to increased erroneous identifications when keywords over-represented in past influence campaigns are present. Drawing on research in native language identification (NLI), we use ‘named entity masking’ (NEM) to create sentence features robust to this shortcoming, while maintaining comparable classification accuracy. We demonstrate that while NEM consistently reduces false positives when key named entities are mentioned, both masked and unmasked models exhibit increased false positive rates on English sentences by Russian native speakers, raising ethical considerations that should be addressed in future research.
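The core idea of named entity masking is to replace specific entity mentions with generic type placeholders before feature extraction, so a classifier cannot latch onto campaign-specific keywords. The abstract does not specify the entity recognizer or tag set used; the sketch below is a minimal illustration using a hypothetical hand-built entity lexicon in place of a real NER system:

```python
import re

# Toy entity lexicon standing in for a real NER component (an assumption:
# the entities and type labels here are illustrative, not from the paper).
ENTITY_TYPES = {
    "Russia": "GPE",
    "Ukraine": "GPE",
    "NATO": "ORG",
    "Putin": "PERSON",
}

def mask_entities(sentence: str) -> str:
    """Replace known named entities with type placeholders, so downstream
    text features reflect sentence structure rather than which specific
    entities were mentioned."""
    for entity, etype in ENTITY_TYPES.items():
        sentence = re.sub(rf"\b{re.escape(entity)}\b", f"[{etype}]", sentence)
    return sentence

masked = mask_entities("NATO expansion worries Putin, says Russia.")
print(masked)  # [ORG] expansion worries [PERSON], says [GPE].
```

In practice the masking step would sit between a trained NER model and the sentence feature extractor, so that two sentences differing only in which country or person they name map to the same masked form.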
Could part of AI safety be ensuring equitable distribution, or working towards equality? I have written before about fairness in AI, the importance of data quality, and equality relating to gender; yet the most challenging article to write was Inequalities and AI. Is artificial intelligence truly safe if it worsens or exacerbates inequality? What is one of the greatest inequalities? It has been important for nonprofits to connect with the makers of new technology to see whether any part of the revenue can be funnelled towards humanitarian purposes or programmes. As much as we can question these technologies, which are of course not faultless, it is arguably important that nonprofits are able to raise funds and address issues. The question for these organisations is often a large, looming ‘how’. In an ideal world their operations would not be needed, yet in the current situation there is a place for the charity sector, and how it operates is certainly changing. With these services moving to apps or social media involving a variety of actors, it is a challenge to keep up. In many instances, technologies such as AI or ML are integrated into existing products or services. Is collaboration necessary? We proceed with the assumption that revenue can be generated in conjunction with machine learning projects and that part of the money should go to charity. Let us explore a few options, but first a quick look at AI for Good.
We often view AI with suspicion – but AI can be used to solve complex problems currently facing society where innovative approaches are needed. For many of us, the Amazon fires are disturbing and a serious problem because the Amazon cannot be recovered once it’s gone. It seems that there is nothing we can do to mitigate this man-made (and economically driven) disaster. However, I believe that in the very near future we can. And the solution may be to create a spirit of activism through transparent algorithms to bring about social change. Two technologies could be key – and they are both currently viewed with some suspicion.
Before you start reading, think of 3 possible scenarios for the future of Artificial Intelligence (AI). If I asked you to think of 3 possible scenarios for the future of AI, I am guessing you’d think of the bad first: a takeover scenario, Terminator-style. Computers and robots dominate the human species, take over our planet, and eventually wipe us off the face of the Earth. Or, that the power of AI will be held and used by a handful of tyrants whose sole purpose is to enslave the rest of us. You might’ve also thought of a hybrid scenario, where we lose some of our humanity to gain far superior computational and physical power. And finally, you might’ve even thought of brighter days where robots work for the human species, who now enjoy a Universal Basic Income (UBI), follow their ‘passions’ or their ‘useless’ creative endeavors, and live without a single worry in the world.
Article: What is Machine Behavior?
Understanding the behavior of artificial intelligence (AI) agents is one of the pivotal challenges of the next decade of AI. Interpretability and explainability are terms often used to describe methods that provide insights into the behavior of AI programs. To date, most interpretability techniques have focused on exploring the internal structure of deep neural networks. Recently, a group of AI researchers from the Massachusetts Institute of Technology (MIT) has been exploring a radical approach that attempts to explain the behavior of AI agents by observing them in the same way we study human or animal behavior. They group the ideas in this area under the catchy name of machine behavior, which promises to be one of the most exciting fields in AI over the next few years.