I was privileged and honoured to be the keynote speaker at a major global summit on artificial intelligence (AI) hosted by India, ‘RAISE 2020’, which was inaugurated by Indian Prime Minister Narendra Modi.
I addressed a session on ‘Ethics by design structures for Responsible AI’.
This is a contested and ambiguous debate, and it raises many questions. Is AI ethics as constructed in the West actually applicable to the Middle East and Asia? Do we all share a similar ethical point of view on AI? Does AI ethics slow down the progress of AI? Is it a branding exercise or commercial competition? Are branches of AI and machine learning, such as facial recognition, really an intrusion on privacy, or a protection of health and safety?
According to the Institute of Electrical and Electronics Engineers (IEEE), much of the existing research on the social and ethical impact of AI has focused on defining ethical principles and guidelines surrounding machine learning and other AI algorithms. While this is extremely useful for helping define the appropriate social norms of AI, the IEEE believes it is equally important to discuss both the potential and the risks of machine learning, and to inspire the community to apply the technology to beneficial objectives.
An MIT Technology Review article noted that dozens of organisations have produced AI ethics guidelines, and companies have rushed to establish responsible AI teams and parade them in front of the media. It’s hard to attend an AI-related conference anymore without part of the programme being dedicated to an ethics-related message: How do we protect people’s privacy when AI needs so much data? How do we empower marginalised communities instead of exploiting them? How do we continue to trust media in the face of algorithmically created and distributed disinformation?
But talk is just that; it’s not enough. For all the lip service paid to these issues, many organisations’ AI ethics guidelines remain vague and hard to implement. Few companies can show tangible changes to the way AI products and services get evaluated and approved. We’re falling into a trap of ethics-washing, where genuine action gets replaced by superficial promises. In the most acute example, Google formed a nominal AI ethics board with no actual veto power over questionable projects, and with a couple of members whose inclusion provoked controversy. A backlash immediately led to its dissolution.
Meanwhile, the need for greater ethical responsibility has only grown more urgent. The same advances made some time back have led to the proliferation of hyper-realistic deepfakes, which are now being used to target women and erode people’s belief in documentation and evidence. New findings have shed light on the massive climate impact of deep learning, yet organisations have continued to train ever larger and more energy-guzzling models. Scholars and journalists have also revealed just how many humans are behind the algorithmic curtain.
The AI industry is creating an entirely new class of hidden labourers — content moderators, data labellers, transcribers — who toil away in often brutal conditions.