By Dr Jaspal Kaur Sadhu Singh
For a data ethicist, the last few months have been a confounding mixed bag.
At the international level, on a positive note, the hyperbole around data ethics reached new academic heights with the headline-grabbing establishment of the ‘Stephen A. Schwarzman Centre for the Humanities’ at the University of Oxford, backed by a staggering donation of £150 million (US$188 million). It was not the generosity of the donation that piqued my interest; rather, it was Mr Schwarzman himself and the purpose of the Centre.
In 2018, Stephen Schwarzman had similarly donated a large sum (US$350 million) to fund research into artificial intelligence at the ‘MIT Stephen A. Schwarzman College of Computing’. Mr Schwarzman, a billionaire, is the co-founder and CEO of the private equity giant Blackstone. Whatever critics may say in accusing him of merely purchasing naming and bragging rights at these established and renowned centres of research and academia, his philanthropy has categorically underscored the essential need to research data ethics from a multidisciplinary perspective.
However, the ambivalent position taken by tech giants like Amazon and Google is simply confounding. When the people at the forefront of tech hegemony shrug off responsibility for how their Artificial Intelligence (AI) inventions are used, it is enough to drive any technophobe amongst us to the fringes of sanity. This was the position taken by Amazon’s Chief Technology Officer, Werner Vogels. In an interview with the BBC in June, Vogels, commenting on the sale of Amazon’s facial recognition software, said that Amazon should not be responsible for how its AI is used. And that is just one of the concerns surrounding facial recognition AI. In contrast, Google‘s Ethical Artificial Intelligence team has raised concerns about the biases and large error rates that exist within facial recognition software.
At the national level, I was invited by MDEC (the Malaysia Digital Economy Corporation) to participate in its panel discussion on Data Governance during Malaysia Tech Week, held in June. The morning began with two presentations addressing the question “What is Data Governance (and why do you need it)?”: the first talk was titled ‘Introduction to Data Management: Governance Frameworks and Principles’ and the second ‘Data Governance for companies of tomorrow’, followed by the panel discussion.
What follows are the thoughts I shared with the audience, including my responses to several questions from the floor and from the very able moderator, Dr Karl Ng of MDEC. I have also included some afterthoughts that came to me after Malaysia Tech Week.
I want to begin by briefly describing “data governance”: essentially, the development of a framework, incorporating legal and ethical considerations, for assessing data analytics and AI projects. Whilst descriptive analytics supports decisions based on the past events from which the data is derived, predictive analytics allows further questions to be asked of the data set, from which inferences are drawn. AI, through machine learning, takes predictive analytics to a whole different level: the technology works autonomously in evaluating the data, testing the algorithm and making predictions. The sketch below makes this distinction concrete; after that, before we drown in tech jargon, allow me to revert to the legal and ethical discussion, and then to the interaction between legal and ethical considerations and AI.
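The sketch, in Python, is a hypothetical illustration of my own: the loan-application records, column names and choice of model are invented, not drawn from any real project or from the panel discussion.

```python
# A minimal, hypothetical sketch contrasting descriptive and
# predictive analytics. All data and column names are invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy records of past loan applications (hypothetical).
past = pd.DataFrame({
    "income":   [3000, 5200, 4100, 2500, 6100, 3800],
    "age":      [25, 41, 33, 22, 50, 29],
    "approved": [0, 1, 1, 0, 1, 0],
})

# Descriptive analytics: summarise what has already happened.
print(past.groupby("approved")[["income", "age"]].mean())

# Predictive analytics via machine learning: fit a model to the past,
# then ask a further question of it - what about a new applicant?
model = LogisticRegression(max_iter=1000).fit(
    past[["income", "age"]], past["approved"]
)
new_applicant = pd.DataFrame({"income": [4500], "age": [31]})
print(model.predict(new_applicant))        # the predicted decision
print(model.predict_proba(new_applicant))  # the model's confidence
```

The shift in posture between the two halves is the point: the descriptive step merely reports the past, whereas the predictive step lets the machine generalise from it – and every decision it then makes inherits whatever biases that past carries.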
In terms of legal considerations, the different aspects of collection, storage, processing, security and integrity, and the decisions arising from all of these, must comply with national laws. In Malaysia, there are industry-specific laws that regulate banking and finance, healthcare, insurance and other industries, but the lex specialis is really the Personal Data Protection Act 2010. So when I was asked whether our privacy is protected under Malaysian law, my reply was naturally in the negative: there is no Act of Parliament that expressly enumerates such protection. Nevertheless, our courts have taken steps to interpret our Constitution progressively, through a prismatic method; in a 2010 judgment, the Federal Court stated that “‘personal liberty’ in Article 5(1) of the Federal Constitution includes within its compass other rights such as the right to privacy”. Hence, there is some hope for privacy to be protected as a fundamental liberty.
In spite of our having the 2010 legislation on data protection, its scope is limited to commercial transactions; data collected and processed in non-commercial contexts is not required to meet the standards prescribed by the Act. Whilst law is perceived as a minimum standard, albeit limited and restricted in its application as identified above, the assessment of data analytics must therefore be measured against a yardstick of a higher standard: ethics. Ethical considerations, or the lack thereof, in processing data against a set of algorithms and in using those algorithms to make decisions about individuals, communities and societies are among the most pressing issues of our times. Fairness, trust, transparency, equality and accuracy are several of the ethical considerations that should be integrated into data analytics and AI.
Algorithms are not sentient and are therefore ethically and morally neutral. For data scientists and data engineers, the framework by which algorithms are designed must be agile and capable of application – and, more importantly, free from human bias. For any ethical reflection to inform the criteria applied in the development and design of AI, these professionals must be able to pose questions and seek answers. To possess that ability and capability, the reflection must be done in a meaningful way, and this can only come through a process of education and nurturing – an inculcation of a set of values. Ethical and legal risks within data analytics and AI must be part of the design: the design of our courses in schools and universities, and most definitely of the algorithm itself, as the sketch below suggests.
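As a hint of what building ethical risk into the design can look like in practice, here is a minimal sketch of one widely used fairness test, demographic parity, applied to a model’s decisions. The predictions, group labels and the 0.8 threshold (the so-called four-fifths rule) are illustrative assumptions on my part, not a prescription.

```python
# A hypothetical fairness check: demographic parity of model decisions.
# Predictions, group labels and the 0.8 threshold are illustrative only.
def demographic_parity_ratio(predictions, groups):
    """Return the ratio of lowest to highest approval rate across groups."""
    counts = {}
    for pred, group in zip(predictions, groups):
        n, approved = counts.get(group, (0, 0))
        counts[group] = (n + 1, approved + pred)
    rates = {g: approved / n for g, (n, approved) in counts.items()}
    return min(rates.values()) / max(rates.values()), rates

# Toy model decisions (1 = approve) for applicants from two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio, per_group = demographic_parity_ratio(preds, groups)
print(per_group)  # approval rate per group
print(ratio)      # flag the model if the ratio falls below 0.8
if ratio < 0.8:
    print("Potential disparate impact: review the model and its data.")
```

A check like this does not make a model ethical; it merely makes one kind of ethical risk visible early enough to act on, which is what building the risk into the design means.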
The consequences that follow from the use of rote machine learning in decision-making undoubtedly impact our liberties, our autonomy as human beings and how we are treated. Should the approval of an application for a job, an insurance policy, a financial loan and the like be determined by an AI that is devoid of human values? On any scale, large or small, this is worrying. To me, what is even more worrying is when governments and legislatures start to use AI and machine learning to make decisions, creating policies and legislation premised on the findings of a machine. Stirring the boiling cauldron of worries further is the use of these technologies in pre-crime decisions and law enforcement: from surveillance technologies, in particular facial recognition software, to targeting individuals by pre-determining their criminality or, in the case of convicted criminals, their degree of recidivism. It veers me towards the edge of mild dysphoria.
An approach with a degree of probity must be imbued into the algorithm itself. The wider world takes the view that we cannot halt the use of AI in our lives, in decision-making, in mechanisation – but we can certainly design AI that is woven into our constitution as human beings.
5 August 2019
Dr Jaspal Kaur is a member of the Executive Committee of the International Movement for a Just World (JUST). She is also a Senior Law Lecturer at a Malaysian university.