Autonomous intelligent agents are playing increasingly important roles in our lives. They contain information about us and are starting to perform tasks on our behalf. Chatbots are an example of such agents, as they must engage in complex conversations with humans. We therefore need to ensure that they behave ethically. In this work we propose a hybrid logic-based approach for ethical chatbots.
Article: Artificial Inhumanity
A few months ago, Fr Philip Larrey published his book ‘Artificial Humanity’, which discusses the need to develop humane Artificial Intelligence (AI). In this article, we explain what could happen if we were to build inhumane AI.
The security of the Internet of Things (IoT) has attracted much attention due to the growing number of IoT-oriented security incidents. IoT hardware and software security vulnerabilities are being exploited, affecting many companies and individuals. Since the causes of vulnerabilities go beyond purely technical measures, there is a pressing demand to demystify the IoT ‘security complex’ and develop practical guidelines for companies, consumers, and regulators alike. In this paper, we present an initial study targeting an unexplored sphere in IoT by illuminating the potential of crowdsourced ethical hacking approaches for enhancing IoT vulnerability management. We focus on Bug Bounty Programs (BBP) and Responsible Disclosure (RD), which stimulate hackers to report vulnerabilities in exchange for monetary rewards. We carried out a qualitative investigation, supported by a literature survey and expert interviews, to explore how BBP and RD can facilitate the practice of identifying, classifying, prioritizing, remediating, and mitigating IoT vulnerabilities in an effective and cost-efficient manner. Besides deriving tangible guidelines for IoT stakeholders, our study also sheds light on a systematic integration path to combine BBP and RD with existing security practices (e.g., penetration testing) to further boost overall IoT security.
We have seen multiple cities, especially in Asia, announce intentions to launch government-issued reward tokens as part of their smart city initiatives (e.g. Seoul's S Coin and Municipal Tokens). The apparent goal is to encourage citizens to participate in the use of public services, increase the tax base, foster economic activity, and respond to government-sponsored questionnaires. Providing feedback on social services entitles citizens to receive tokens as a reward. Once collected, the tokens can be spent on goods and services at different merchant outlets, often via mobile phone. There are a few things to consider with these announcements. First, not all such tokens are pure cryptocurrencies issued on a blockchain (e.g. Belfast); there are thousands of private loyalty and rewards schemes in both physical and digital form. Second, these experiments are not new in the context of government initiatives, especially among municipalities, which have long tried to nudge citizens into adopting certain behavioural patterns.
Like any powerful technology, artificial intelligence needs boundaries and frameworks. Whether we have realized it or not, AI is changing the way we live. It is present in the way social media feeds are organised, in the predictive searches that show up on Google, and in the song suggestions made by music services such as Spotify. The technology is also helping transform the way enterprises do business: it will make the world of work more efficient and render many professions superfluous. From algorithms that detect Parkinson's disease and cancer, to AI-enabled counseling sessions that improve mental health, to systems that reduce road accidents, AI promises huge benefits for humanity. Humanity desperately needs it. AI can be critical in solving dilemmas in healthcare, for instance, where expenditure is growing at unsustainable rates. AI can be the crucial technology that helps virtually every sector of our society.
AI warfare is beginning to dominate military strategy in the US and China, but is the technology ready?
Artificial intelligence (AI) holds great promise to empower us with knowledge and augment our effectiveness. We can — and must — ensure that we keep humans safe and in control, particularly with regard to government and public sector applications that affect broad populations. How can AI development teams harness the power of AI systems and design them to be valuable to humans? Diverse teams are needed to build trustworthy artificially intelligent systems, and those teams need to coalesce around a shared set of ethics. There are many discussions in the AI field about ethics and trust, but few frameworks are available to guide people creating these systems. The Human-Machine Teaming (HMT) Framework for Designing Ethical AI Experiences described in this paper, when used with a set of technical ethics, will guide AI development teams to create AI systems that are accountable, de-risked, respectful, secure, honest, and usable. To support the team's efforts, we introduce activities for understanding people's needs and concerns, along with supporting themes. For example, usability testing can help determine whether the audience understands how the AI system works and whether it complies with the HMT Framework. The HMT Framework is based on reviews of existing ethical codes and best practices in human-computer interaction and software development. Human-machine teams are strongest when human users can trust AI systems to behave as expected — safely, securely, and understandably. Using the HMT Framework to design trustworthy AI systems will help teams identify potential issues ahead of time and create great experiences for humans.
Our society is experiencing profound changes brought about by digitalisation. Innovative data-based technologies may benefit us at both the individual and the wider societal levels, as well as potentially boosting economic productivity, promoting sustainability and catalysing huge strides forward in scientific progress. At the same time, however, digitalisation poses risks to our fundamental rights and freedoms. It raises a wide range of ethical and legal questions centring on two broader issues: the role we want these new technologies to play, and their design. If we want to ensure that digital transformation serves the good of society as a whole, both society itself and its elected political representatives must engage in a debate on how to use and shape data-based technologies, including artificial intelligence (AI). Germany's Federal Government set up the Data Ethics Commission (Datenethikkommission) on 18 July 2018. It was given a one-year mandate to develop ethical benchmarks and guidelines, as well as specific recommendations for action, aimed at protecting the individual, preserving social cohesion, and safeguarding and promoting prosperity in the information age. As a starting point, the Federal Government presented the Data Ethics Commission with a number of key questions clustered around three main topics: algorithm-based decision-making (ADM), AI and data. In the opinion of the Data Ethics Commission, however, AI is merely one among many possible variants of an algorithmic system, and has much in common with other such systems in terms of the ethical and legal questions it raises. With this in mind, the Data Ethics Commission has structured its work under two headings: data, and algorithmic systems (in the broader sense).