Artificial Intelligence
The rise of technology we have witnessed over the last few years has brought with it a series of questions, left to be answered either by ingenious design or by the voice (and artificial intelligence) of a smart speaker. How will the versatility of AI change human interaction, and what does that imply for emotional wellbeing? Digital wellness is certainly a topic of growing interest, as people search for new ways to slow down and live more mindfully.
In 2018, Britain’s government launched the world’s first loneliness strategy, having recognised loneliness as ‘one of the greatest public health challenges of our time’ and acknowledged its billion-pound toll on the UK’s economy. Because loneliness is closely linked to depression, heart disease and, by extension, lost productivity, dozens of start-ups are promoting robots as home companions. In other countries, such as Japan, social robots are increasingly popular, blurring the line between ‘tools and pals’.

Image credit: Stock
However, unlike cinematic representations of AI robots as cold, imposing and conscious, real robots cannot feel – whereas the entire array of human interaction depends fully on emotion. As Astrid Weiss, a human-robot interaction researcher, noted, ‘we tend to treat media technology like we treat other humans’, highlighting that ‘social interaction is a deeply-rooted human trait’. This is where what we need from technology, and what defines us as humans, comes into question.
In a time when digital interaction makes us feel more disconnected than ever, a growing number of AI functions and gadgets are programmed to meet humans’ fundamental need to interact with others. The question left to be answered is whether the sociability of a robot lies in its human-like appearance or in the ability of machine learning and AI to ‘fake it till they make it’. Bearing in mind that people cannot help but attribute human features to non-human objects, it might be a bit of both.
During her TEDxTUWien talk, Astrid Weiss referred to the movie ‘Robot and Frank’, in which the protagonists (an elderly man suffering from dementia and a robot butler) bond over their own imperfections. Frank comes to see the robot not merely as a servant but as a friend, and refuses to accept any abusive behaviour towards it from members of his family. Taking this a step further – and perhaps a cliché by now – in the movie Bicentennial Man, Robin Williams, as Andrew the robot, pleads for human rights, which are granted only once he makes himself mortal.

Image credit: IMDb
‘Decades of robotics research and product development show a clear distinction between predictable machines, like robot vacuum cleaners, and smart interactive devices that can’t quite be understood, yet appear to understand a bit of us. The former are quickly seen as dull and lifeless, while with the latter we feel a sense of connection. In fact, as soon as a life-like robot is understood, it becomes lifeless and its faked understanding a deception,’ notes Stefan Taal, Principal – Engineering at PDD.
Imagine having access to the software ‘rulebook’ of Frank’s robot or of Andrew the human: we would quickly lose any sense of bonding. With increasing awareness of technology, people can spot the rulebook more easily than ever. Following the recent spurt of start-ups developing robotic friends, many have already disappeared, having come up short on both the ‘friend’ front and the ‘useful helper’ front. On some occasions social robots are readily accepted – by toddlers, autistic children and dementia patients, for example. The rest of us, ‘normal grown-ups’, can see through the make-believe set up by the robot – but when will we be (or have we already been) tricked?
Will we let ourselves be surprised by our own emotions each time a new smart device enters our life, or can we anticipate the unknown and create successful products that are both useful and trusted as companions? We might see this as a technological paradox: what brings us closer to the future is not the constant appraisal of strengths and advances, but the recognition of weaknesses. Perhaps the key is to bond with machines over their imperfections as they help us overcome ours. Either way, it’s fair to say that now, more than ever, we need a human-centred approach – one where we admit we know very little, dive deep, and (as we say) ‘create, test, repeat, succeed’.
Opportunity, accuracy, exploration, medical progress, risk reduction, support, disruptive innovation, quality-of-life improvement, equality: these are just a few of the many opportunities that can be explored through the development of Artificial Intelligence (AI). What would once have been the script of a sci-fi movie is now simply today’s reality. The fiction element is gone, and space travel, quantum computing, mind-reading, uploading human memory to the cloud and brain-wave communication have turned into straightforward development projects of AI integration.
While the advantages of AI and Machine Learning integration are undeniably beneficial to humanity and a core facilitator of evolution, introducing such powerful and impactful technology to society could just as easily backfire and become a colossal threat. The challenge, then, lies in identifying the fine ethical line that separates the good from the unjust, the honest from the unfair and the moral from the corrupt.
A highly controversial topic, the ethical dilemmas of AI have been raised and discussed by key world leaders for decades, but little to no action has been taken. Meanwhile, AI is advancing faster than we can track: from today’s Artificial Narrow Intelligence (ANI), where systems focus on single tasks, towards Artificial General Intelligence (AGI), where AI is as intelligent as humans, and eventually Artificial Super Intelligence (ASI), where AI becomes incomprehensibly smarter than humans. If the transition through these phases sounds like the most exciting challenge for AI’s supporters, keeping the rules and regulations up to date is an even bigger one.
Counter-intuitively, perhaps, ethics can help and support the future of AI rather than slow it down. ‘We are like children playing with a bomb,’ argues Nick Bostrom, a Swedish philosopher at the University of Oxford renowned for his work on the ethics of human enhancement and the risks of superintelligence. Not only should ethics and AI not be seen as separate constituents; AI must be modelled with ethics, social norms and moral values in mind throughout the whole process.
Ethics here works both ways. ‘Roboethics’ concerns the moral behaviour of humans as they design, build and treat robots (AI), while ‘machine ethics’ governs the moral behaviour of artificial moral agents (AMAs) themselves. Each carries multiple ethical layers, involving employment and the evolution of jobs, inequality introduced by varying technological accuracy and AI bias, humanity, behaviour and human-robot interaction – and these are just part of the first layer.
The early integration of ethics into AI seems like a sensible approach to adopt moving forward, but who is in charge of making it happen? One may argue that technology and regulation should evolve together, but technological progress outpaces everything else. How can we keep regulation aligned without slowing AI’s development? One approach would be for tech companies to set up their own ethics committees, but getting the right people involved in the right process is a challenge. A recent example is Google’s newly appointed AI ethics board, dissolved soon after its launch – highlighting how hard it is to get ethics right even within a world-leading organisation.
How, then, can we ensure the ethical evolution of AI? What are the optimum ways of integrating human values and principles into advanced technological entities? Whose responsibility is it to ensure that humanity evolves safely alongside AI? The answers are not obvious, but exploring the reasons behind our thirst for AI could build a safer, more prosperous path towards a tale of robots and humans living together happily ever after. We take a closer look at the intersection of humanity and robots here.
Techniques such as AI and Machine Learning are creating new opportunities for healthcare and changing the way that people interact with healthcare services. There are many benefits associated with new technology – for example, supporting a move away from hospital-based care towards the home, increasing patients’ independence, helping us to make better decisions and reducing the burden on healthcare providers. This type of technology can be used to increase efficiency by speeding up the clinical trials process, reducing paperwork, limiting unnecessary procedures and getting more done with less.
Nevertheless, there are also some concerns about introducing modern technologies. For instance, which aspects of healthcare do not lend themselves to automation and AI? This is not a simple issue – consider the questions around machines making life-and-death decisions. The cost of change in healthcare is high: it is a regulated environment and we need to consider the feasibility of implementation. We want to get things right, but we may not always know how certain types of algorithm arrive at the solution that is presented. It follows that there may be seemingly random, emergent and unpredictable side effects. We have a duty to understand these concerns before pressing ahead.
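To make the opacity concern concrete, here is a minimal sketch of one common probing technique, permutation importance. The dataset and model are illustrative stand-ins (a public, non-clinical dataset), not a real medical system – and even this technique only hints at which inputs drive a model’s output, not how the model combines them.

```python
# A minimal, hypothetical sketch: probing an opaque model with permutation
# importance. A public dataset and model stand in for a real clinical system.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A 'black box': accurate, but its internal reasoning is hard to read directly.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance asks: how much does performance drop when one input
# feature is shuffled? It hints at *which* inputs matter, not *why* they do.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: mean accuracy drop {score:.3f}")
```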
How can modern medical technologies be trusted? How can they be governed and regulated? We all get annoyed when software fails to work the way we need it to or gives the wrong results. In healthcare, the consequences can be serious: patients may receive the wrong diagnosis and, in extreme circumstances, they may be harmed. Some of the repercussions may not be immediately obvious, or may only become apparent when considered across a population. Although no one wants to delay the introduction of new technologies, we also need to explore what can be done to get the most out of them, act in the public interest and include appropriate safeguards.
Who is to be held responsible?
We need to define clearly the boundaries of a system and who is responsible for it. ‘Intelligence’ may sit on a server and be enabled through data-sharing and connectivity. This is inherently multi-regional, and it can be hard to understand exactly how such technology functions, how its safety is assured and who takes responsibility for it. Such systems are not static; they have many interdependencies and interactions. They may have been tested, but we don’t know how, by whom or which parts.
As users of network-enabled technology, we might find ourselves asking questions about provenance. We will never have seen the software code, we won’t have met the people who wrote it and, in some cases, we will have little way of knowing whether the code has been written correctly (i.e. whether it is doing the job as intended). We don’t know when the software was written or why, when it was last updated, what input it is receiving or how it arrived at a given output or calculation. We potentially put ourselves in this situation every day: we have no way of knowing how most of the systems we use actually function, but we take it on trust that they will not cause harm. Where is the evidence to show that this is the case? Do we know how to test and evaluate such systems, and how do we provide an appropriate level of assurance?
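None of these questions has a complete technical answer, but modest safeguards do exist. As a sketch – with an invented artefact name, purely for illustration – verifying a published checksum at least confirms that the software being run is byte-for-byte the software that was tested, even if it says nothing about how that software was written:

```python
# A minimal, hypothetical sketch: confirming that a deployed software artefact
# matches the one that was tested, by comparing SHA-256 checksums.
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file in chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# For illustration only: write a stand-in 'artefact' and record its digest,
# as a vendor might do at release time.
with open("artefact.bin", "wb") as f:
    f.write(b"model weights v1.0")
published_digest = sha256_of("artefact.bin")

# Later, before trusting the artefact, recompute the digest and compare.
if sha256_of("artefact.bin") == published_digest:
    print("Artefact matches the published checksum.")
else:
    print("Checksum mismatch: the artefact cannot be trusted.")
```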
Although there have been great advances over recent decades, there has been a lag in public understanding and appreciation of the broader implications of mass digitisation and these new forms of technology (especially in healthcare). The time has come to take stock and consider the profound impact that such technology can have. How informed is society, and how aware are we of the positive and negative aspects of this type of medical technology? For instance, technologies may not be applied in our best interests; they may substitute for human contact; and, in rare circumstances, they may be unsafe.

Image credit: Stock
Shared information – from GPS tracking tools, step counters, social media and internet browsing – makes it possible to build personal profiles and track individuals in a way that wasn’t previously possible. AI, machine learning and big data allow these sources to be amalgamated into a much richer picture of who we are and what we do. Effectively, anonymisation can be reversed using these techniques (even where doing so is prohibited). From a healthcare perspective, we don’t know how this information is being used. For example, it could be used to cherry-pick low-risk individuals or to deny treatment to high-risk individuals. Perhaps the most concerning aspect is that we have little to no way of knowing what is going on, or to what extent. Healthcare is changing, but we have little understanding of why, how and who is responsible. It is also the case that virtual technologies may be substituted for face-to-face human contact when it is not appropriate to do so.
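To illustrate how easily ‘anonymised’ data can be reversed, here is a toy sketch of a linkage attack; every record, name and field is invented for the example. Joining a de-identified health table to a public register on shared quasi-identifiers (postcode, birth year, sex) is enough to re-attach names to diagnoses:

```python
# A toy, entirely fabricated example of a linkage attack: re-identifying
# 'anonymised' health records by joining them to a public dataset.
import pandas as pd

# 'Anonymised' health records: names removed, but quasi-identifiers remain.
health = pd.DataFrame({
    "postcode": ["SW1A 1AA", "M1 1AE", "EH1 1YZ"],
    "birth_year": [1956, 1984, 1990],
    "sex": ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],
})

# A separate public dataset (e.g. an electoral roll) that includes names.
public = pd.DataFrame({
    "name": ["A. Smith", "B. Jones", "C. Patel"],
    "postcode": ["SW1A 1AA", "M1 1AE", "EH1 1YZ"],
    "birth_year": [1956, 1984, 1990],
    "sex": ["F", "M", "F"],
})

# Joining on the shared quasi-identifiers re-attaches names to diagnoses.
linked = health.merge(public, on=["postcode", "birth_year", "sex"])
print(linked[["name", "diagnosis"]])
```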
In a recently publicised case, a doctor told a patient he was going to die via a robot. The clinician, mediated through the robot, told the patient that ‘he has no lungs left only option is comfort care, remove the mask helping him breathe and put him on a morphine drip until he dies.’ The case raises many obvious questions. Technology cannot substitute for human contact, and we need to be sensitive to when this is the case. It follows that, as computational systems become progressively more capable, they may help predict when we will die – and an even more contentious scenario would involve this type of robot relaying such information with no clinician involved at all. Do we feel comfortable with this scenario?

Image credit: BBC
A degree of transparency
Another concern relates to the feasibility of assuring the safety of AI systems. Although we have an established track record for doing this in the field of automation, newer forms of technology challenge our approach. For example, AI systems are highly sensitive to their environment: their behaviour depends on the properties of the input and can vary over time. Such a system could be sub-optimal in the way it is programmed or implemented; it may rely on limited, incomplete or poorly mapped data; and the situation could change between the time at which it is tested and the time at which it is used. Computational systems can be deliberately misled for profit; they can be hacked and hijacked. As the scale of the system grows, so does the potential for harm. We don’t know whether this type of technology allows for effective monitoring (do we have the means to know when things go wrong?) – traditional vigilance mechanisms may not work (for instance, an individual may not be aware of a loss of personal data, but they may still be affected by it).
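On the monitoring point, one modest and widely used approach is to check whether live inputs still resemble the data a system was validated on. The sketch below is illustrative only – synthetic data and an arbitrary alert threshold – using a two-sample Kolmogorov–Smirnov test to flag distribution drift:

```python
# A minimal, illustrative sketch of drift monitoring: does live input data
# still look like the data the system was validated on?
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
validated_inputs = rng.normal(loc=0.0, scale=1.0, size=5000)  # data at test time
live_inputs = rng.normal(loc=0.6, scale=1.0, size=5000)       # data in deployment

# A two-sample Kolmogorov-Smirnov test compares the two distributions.
stat, p_value = ks_2samp(validated_inputs, live_inputs)
if p_value < 0.01:  # an arbitrary threshold, chosen purely for illustration
    print(f"Drift detected (KS statistic {stat:.3f}): inputs no longer match validation data.")
else:
    print("No significant drift detected.")
```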
In raising these questions we do not offer an opinion on the suitability of these forms of technology; we simply make the point that there needs to be a degree of transparency. A growing dialogue is occurring around these topics, but how involved are manufacturers and pharmaceutical and technology companies? Increasing numbers of resources are becoming available in this field (see the list below), but how much is industry engaging with them? We encourage an open and informed dialogue between the public, the medical community and industry in moving things forward.
- The emergence of artificial intelligence and machine learning algorithms in healthcare: Recommendations to support governance and regulation
- Ethically Aligned Design
- Digital maturity in an age of digital excitement
- Governing artificial intelligence: ethical, legal and technical opportunities and challenges
- Artificial intelligence and health
- The impact of artificial intelligence on work
- AI: Artificial intelligence and the legal profession