Opportunity, accuracy, exploration, medical progress, risk reduction, support, disruptive innovation, quality-of-life improvement, equality – these are just a few of the many possibilities opened up by the development of Artificial Intelligence (AI). What would once have been the script for a sci-fi movie is now simply today's reality. The fiction element is fading: space travel, quantum computing, mind-reading, uploading human memory to the cloud and brain-wave communication are increasingly treated as development projects for AI integration.
While the advantages gained by AI and Machine Learning integration are undeniably beneficial to humanity and a core facilitator of its evolution, introducing such powerful and impactful technology to society can just as easily turn against us and become a colossal threat. The challenge, then, lies in identifying the fine ethical line that separates the good from the unjust, the honest from the unfair, the moral from the corrupt.
A highly controversial topic, the ethical dilemmas of AI have been raised and discussed by key world leaders over recent decades, but little to no action has been taken. Meanwhile, AI is advancing faster than we can track: from the currently researched Artificial Narrow Intelligence (ANI), where systems focus on single tasks, towards Artificial General Intelligence (AGI), where AI is as intelligent as humans, and eventually Artificial Super Intelligence (ASI), where AI becomes incomprehensibly smarter than humans. If you thought the transition through these phases was the most exciting challenge for AI supporters, keeping the rules and regulations up to date is an even bigger one.
Somewhat counter-intuitively, ethics can help and support the future of AI rather than slow it down. 'We are like children playing with a bomb,' warns Nick Bostrom, a Swedish philosopher at the University of Oxford renowned for his work on the ethics of human enhancement and the risks of superintelligence. Not only should ethics and AI not be seen as separate concerns, but AI must be modelled on ethics, social norms and moral values throughout the whole process.
Ethics here works both ways. On one side is the moral behaviour of the humans who design, develop and treat robots – a field known as 'roboethics'. On the other is 'machine ethics', which governs the moral behaviour of artificial moral agents (AMAs) themselves. Each of these involves multiple ethical layers: employment and the evolution of jobs, inequality introduced by differences in technology accuracy and by AI bias, humanity, behaviour and human-robot interaction – and these are just part of the first layer.
The early integration of ethics into AI seems a sensible approach to adopt moving forward, but who is in charge of making it happen? One may argue that technology and regulation should evolve together, yet the speed of technological progress outpaces everything else. How can we possibly keep regulation aligned without slowing AI's development? One approach would be for tech companies to set up their own ethics committees, but getting the right people involved in the right process is a challenge. A recent example is Google's AI ethics board, which was dissolved soon after its launch – underlining how difficult it is to get ethics right even within a globally leading organization.
So how can we ensure the ethical evolution of AI? What are the best ways of embedding human values and principles into advanced technological entities? Whose responsibility is it to ensure that humanity evolves safely alongside AI? The answers are not obvious, but exploring the reasons behind our thirst for AI could build a safer and more prosperous path towards a tale of robots and humans living together happily ever after. We take a closer look at the intersection of humanity and robots here.