Shirley Genga
LLB LLM (Nairobi) PhD (Witwatersrand)
Postdoctoral Research Fellow, Free State Centre for Human Rights, University of the Free State, Bloemfontein, South Africa
Edition: AJPDP Volume 2 2025
Pages: 41-60
Citation: S Genga ‘A review of the adequacy of Kenya’s and South Africa’s data protection legal frameworks in protecting persons with disabilities from artificial intelligence algorithm discrimination’ (2025) 2 African Journal on Privacy & Data Protection 41-60
Abstract
As artificial intelligence (AI) increasingly shapes decision making in sectors such as employment, insurance, finance, health care and social services through automated decision making, bringing efficiency, the likelihood of AI algorithm discrimination also grows. This discrimination is often perpetuated against vulnerable groups such as persons with disabilities (PWDs), who already face significant societal barriers. This article examines whether Kenya’s and South Africa’s data protection laws adequately protect PWDs from AI algorithmic discrimination. The first part of the article explores how AI algorithms, when applied through automated decision making, can unintentionally discriminate against PWDs, drawing on specific examples from various sectors to demonstrate how AI discrimination affects PWDs. The second part critically reviews the data protection legal frameworks of Kenya and South Africa and, through a comparative analysis, assesses their adequacy in protecting PWDs from AI discrimination, in order to identify the strengths and limitations of each state’s laws. It concludes with recommendations for legal and policy reforms aimed at enhancing transparency, accountability and inclusivity in AI systems in both states with respect to regulating algorithmic discrimination against PWDs.
Key words: artificial intelligence; discrimination; disability; data protection
1 Introduction
It is estimated that 1.3 billion people experience disability, representing approximately one in six of the world’s population.1 These span a wide variety of disabilities, including visual, hearing, speech, mobility, cognitive and psychosocial disabilities.2 Notably, persons with disabilities (PWDs) experience widespread stigma and discrimination.3 They are often prevented from fully participating in society by environmental and attitudinal barriers.4 As a result, PWDs experience exclusion from education and employment, face barriers in health systems and are at higher risk of poverty.5
Further, while technology generally makes life more convenient for most people, for PWDs it provides independence.6 Technology helps to remove barriers to participating in society. As a result of technology, PWDs can access education, health, transport, employment, leisure and culture, and participate in other areas of life never previously imagined.7
Importantly, when it comes to technology, no other development has impacted the lives of PWDs like the internet of things (IoT).8 IoT refers to a ‘network of physical devices, vehicles, appliances, and other physical objects that are embedded with sensors, software, and network connectivity, allowing them to collect and share data. IoT devices are also known as smart objects.’9 These include everything from assistive devices, to wearables such as smart watches, to industrial machinery and transportation systems.10 IoT-connected assistive technologies are intentionally designed to assist PWDs in the different facets of their daily lives.11 Indeed, many of today’s IoT devices and services are specifically designed for PWDs, whereas others are repurposed by them.12 For PWDs, the IoT can be transformational because it can enhance communication, socialising, safety and mobility in both physical and virtual environments.13
In addition, computers today can learn, and artificial intelligence (AI) is integrated into the products we use every day.14
AI has the potential to revolutionise not only the industrial sector but also the quality of people’s lives,15 and this is what has attracted the participation of both private and public actors.16 There is thus no aspect of life today that has not been impacted by AI, including assistive devices for persons with disabilities.17
Significantly, while there is no agreement on the definition of AI, an essential element that has been identified is that it covers systems that think or act like human beings.18 AI technologies are ‘typically based on algorithms that make predictions to support or even fully automate decision making’.19 An algorithm is ‘a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer’.20 Moreover, algorithms are used to automate a wide range of everyday tasks on a scale far beyond what humans can achieve.21 They can analyse, infer, predict, label and recommend and, as a result, have opened up new horizons and can support decision making across many domains.22 AI algorithms are the backbone of AI, enabling machines to replicate human-like intelligence and execute multifaceted tasks such as automated decision making (ADM).23 ADM refers to using AI algorithms to make decisions without human involvement.24
Nowadays, ADM systems are used extensively across African countries in sectors including finance, education, health care, business and public administration, and Kenya and South Africa have not been left behind.25 In order to bolster accurate and efficient service delivery, these sectors are increasingly using ADM.26 For example, Kenya has Felisa, a money-lending product; Tala, a credit service;27 the Angaza Elimu, M-shule and iMlango systems in the education sector;28 and the Boma Yangu portal, a government system that operationalises its affordable housing project.29 In South Africa, examples include ‘First National Bank’s Manila platform using AI to flag fraud, money laundering, and tax evasion risks’; Daptio, an education platform;30 and the Gauteng Department of Education’s fully automated system for the fair placement of students at schools, which relies on proximity to schools and other relevant factors.31
2 Use of AI algorithms in decision making and the risk of bias
AI systems are changing the lives of persons with disabilities at a rate never previously imagined.32 Nevertheless, the application of AI systems is not unproblematic and comes with its share of challenges.33 Indeed, the trend that poses a series of risks for PWDs is the everyday use of AI algorithms for automated decision making online.34 PWDs are exposed to ‘pervasive surveillance, persistent evaluation, insistent influence, possible manipulation and discrimination’.35 This article focuses specifically on the ability of ADM to discriminate against persons with disabilities.
The use of ADM is often depicted as rational and neutral, but this is not the case, because ADM systems are developed and used by humans. As a result, if bias is present in human decision making, it can be replicated by machines.36 Indeed, it is now accepted that AI systems, including ADM, can discriminate against certain categories of the population.37 This is especially so when privacy and other ethical standards are not embedded in algorithms: their use can then result in the discrimination of PWDs.38 AI applications ‘process personal data in two ways’. First, personal data is the source material used to teach machine learning systems in order to build their algorithmic models.39 Once built, the same models can be used to analyse and interpret personal data to make inferences concerning particular individuals.40
One of the reasons discrimination occurs is that algorithms are ‘fuelled’, or trained, by personal data that is biased.41 Algorithms become biased when they learn from, or are trained on, biased data.42 If the data employed in training machine learning models contains any bias, the analysis conducted by the algorithm will follow the same pattern and, in some instances, introduce new biases.43 Bias refers to ‘the systematic errors that occur in decision-making processes, leading to unfair outcomes’.44 Hence, it can lead to AI discrimination based on disability if the bias operates against persons with disabilities. Significantly, apart from the data used for training AI, other potential sources of bias include algorithm design and human interpretation.45 AI discrimination is of crucial concern for persons with disabilities because the industries where ADM use is on the rise are the same sectors where PWDs have historically encountered and continue to encounter barriers and exclusion. These include welfare benefits, employment opportunities and health care decisions.46 If not adequately regulated, the use of ADM can perpetuate and even magnify already-existing inequalities.
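The mechanism described above can be illustrated with a minimal, purely hypothetical sketch in Python (the records, field names and decision rule are invented for illustration and are not drawn from any system discussed in this article). A naive model that simply ‘learns’ historical hiring rates from biased past decisions reproduces that bias for equally qualified applicants:

```python
# Minimal illustrative sketch only: hypothetical data and a deliberately naive
# "model" that learns historical hiring rates per feature combination.
from collections import defaultdict

# Hypothetical training records reflecting past human decisions that
# systematically rejected applicants who disclosed a disability.
history = [
    {"qualified": True,  "disability": False, "hired": True},
    {"qualified": True,  "disability": False, "hired": True},
    {"qualified": True,  "disability": True,  "hired": False},  # biased outcome
    {"qualified": True,  "disability": True,  "hired": False},  # biased outcome
    {"qualified": False, "disability": False, "hired": False},
]

# "Training": record the historical hiring rate for each feature combination.
counts = defaultdict(lambda: [0, 0])  # (qualified, disability) -> [hired, total]
for record in history:
    key = (record["qualified"], record["disability"])
    counts[key][0] += int(record["hired"])
    counts[key][1] += 1

def predict_hire(qualified: bool, disability: bool) -> bool:
    """Automated decision: hire if the historical hiring rate for this
    combination of features exceeds 50 per cent."""
    hired, total = counts[(qualified, disability)]
    return total > 0 and hired / total > 0.5

# Two equally qualified applicants receive different automated outcomes,
# because the model has absorbed the bias present in its training data.
print(predict_hire(qualified=True, disability=False))  # True
print(predict_hire(qualified=True, disability=True))   # False
```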
2.1 AI discrimination based on disability
Kenya and South Africa have ratified the United Nations (UN) Convention on the Rights of Persons with Disabilities (CRPD).47 CRPD adopts a social understanding of disability when defining disability. This is important because it marks a shift from the medical model of disability, the historically-dominant model whose focus is on correcting or curing the individual to fit society.48 By contrast, a social understanding of disability highlights the fact that disability is created when the social environment fails to change to meet the needs of individuals with impairments.49 Further, because a social model of disability implies that a comprehensive approach be adopted in disability anti-discrimination law, CRPD recognises all the different types of discrimination, including ‘direct and indirect discrimination, harassment and the denial of reasonable accommodation’,50 as well as discrimination by association and multiple and intersectional discrimination.51
To discriminate on an elemental level means to differentiate.52 CRPD defines discrimination on the basis of disability as
any distinction, exclusion or restriction on the basis of disability which has the purpose or effect of impairing or nullifying the recognition, enjoyment or exercise, on an equal basis with others, of all human rights and fundamental freedoms in the political, economic, social, cultural, civil or any other field. It includes all forms of discrimination, including denial of reasonable accommodation.53
Accordingly, an AI algorithm theoretically discriminates against a person with a disability whenever it makes an automated decision based on their disability that excludes or restricts them and leads to a disparate impact or unjustifiable disadvantage.54 Lastly, the differentiation need not be intentional.55 Kenyan and South African law both recognise all forms of discrimination recognised by CRPD.56 This widens the reach and scope of anti-discrimination protection in both countries and, hence, for example, sets the ground for claims of intersectional AI discrimination based on disability.
2.2 Examples of AI discrimination based on disability
There are a number of ways in which AI algorithm discrimination (AI discrimination) can occur. To begin with, an image search for an ‘athlete’ or even a ‘beautiful girl’ on today’s AI-enabled internet search engines is unlikely to yield images of athletes with disabilities or of a girl with a physical disability. This is because the search engines rely on data sets or algorithms that hold to the outdated belief that persons with disabilities cannot be athletes,57 or even beautiful.
Second, AI discrimination can also occur through targeted online advertising. For example, companies such as Meta and Google rely on targeted advertising.58 Targeted online advertising relying on ADM can lead to the discrimination of PWDs.
Consider a person with an eating disorder such as bulimia or anorexia (which falls within the category of psychiatric or psychosocial disability). Discrimination can occur where such a consumer is profiled based on their data and, as a result, is served customised advertisements selling weight-loss products.59 This type of marketing is exploitative and is called ‘vulnerability-based marketing’.60 Another example of targeted advertising is when algorithms infer one’s disability from one’s personal data. For example, an AI algorithm can identify from a person’s digital footprint that they have a visual disability, through their use of a screen reader or a braille keyboard, even when they have not publicly disclosed that disability. This information can be used to push advertisements for assistive devices used by persons with visual disabilities and other products.61 It can also be used to deny insurance coverage or increase its cost, or to exclude a person with a disability from receiving advertisements for employment, education, housing and other resources, thereby excluding them from fully participating in society.62
AI-based discrimination can also occur if an organisation uses an AI recruitment system that has been trained on data from past human decisions that discriminated against persons with disabilities. A real-life example is when Amazon was forced to stop using an automated recruitment tool that was found to be biased against women.63 The recruitment algorithm was trained on curricula vitae sent to Amazon over a period of ten years. A majority of these came from men, and hence the algorithm showed a preference for applications by men and rejected applications by women.64
Another example is AI proxy discrimination. This occurs when an outwardly neutral feature or variable (a proxy attribute) that is associated or correlated with a specific protected ground is used as the basis for making a decision, leading to a disparate impact.65 At first glance it may seem that a person was denied an opportunity on the basis of a facially-neutral feature, and so no discrimination occurred; upon closer inspection, however, the connection between the facially-neutral feature (the proxy attribute) and the protected ground can be made, revealing that discrimination did occur.66 For example, in a state whose provinces are predominantly inhabited by certain ethnic groups, postal codes may indirectly indicate a person’s ethnicity. Here the ‘postal code can be a proxy for ethnicity’, and an ADM system that accepts or rejects a job application based on one’s postal code could be held liable for engaging in ethnic proxy discrimination if the result leads to a disparate impact.67 AI systems may thus have discriminatory effects even unintentionally.68
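The proxy mechanism can likewise be shown in a short hypothetical sketch (the postal codes, group labels and selection rule are invented for illustration). The decision rule never consults ethnicity, yet it produces a disparate impact because the facially-neutral postal code correlates with ethnicity:

```python
# Illustrative sketch only: hypothetical applicants and a facially-neutral rule.
applicants = [
    {"postal_code": "00100", "ethnicity": "group_a", "accepted": None},
    {"postal_code": "00100", "ethnicity": "group_a", "accepted": None},
    {"postal_code": "00200", "ethnicity": "group_b", "accepted": None},
    {"postal_code": "00200", "ethnicity": "group_b", "accepted": None},
]

PREFERRED_CODES = {"00100"}  # the outwardly neutral criterion (proxy attribute)

for applicant in applicants:
    # Ethnicity is never consulted; only the postal code is used.
    applicant["accepted"] = applicant["postal_code"] in PREFERRED_CODES

def acceptance_rate(group: str) -> float:
    """Outcome measured by protected ground, revealing the disparate impact."""
    members = [a for a in applicants if a["ethnicity"] == group]
    return sum(a["accepted"] for a in members) / len(members)

print(acceptance_rate("group_a"))  # 1.0
print(acceptance_rate("group_b"))  # 0.0 -> proxy discrimination in effect
```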
A recent example of proxy discrimination based on disability is the American case of Mobley v Workday, Inc.69 Derek Mobley brought an action for employment discrimination against Workday, which provides employment screening services.70 Mobley claimed that Workday’s ADM application-screening tool discriminated against him based on race, age and disability.71 According to Mobley, he had been overlooked for numerous job opportunities at other companies that contracted ‘with Workday because he is black, over 40 and has anxiety and depression’.72 Further, he claimed that Workday’s algorithms could infer personal details about him, such as his age, race and background, from other information, including when he graduated and the schools he attended (including his degree from a historically black college). Also, the numerous positions for which he applied ‘required him to take a Workday-branded assessment and/or personality test, and to provide other personal information from which his disability could be inferred’.73 He argued that the use of the ADM tool infringed anti-discrimination law.74 On 15 July 2024 a bid to dismiss the class action was rejected.75
As yet, there are no comparable cases in either state but, as has been highlighted, both states are using ADM in different industries,76 and so such cases may be only a matter of time. Other American cases include Louis & Others v SafeRent Solutions & Others;77 Equal Employment Opportunity Commission v iTutorGroup, Inc;78 and KW ex rel DW v Armstrong.79
3 Data protection legal framework and AI discrimination regulation
As it stands, in both Kenya and South Africa anti-discrimination law and data protection law are the main tools for protecting persons with disabilities against AI discrimination. Notably, Kenya has drafted an AI Bill, but it has not yet been passed into law by Parliament.80 This article will mainly focus on data protection laws in both states as AI algorithm anti-discrimination tools. Nevertheless, it is worth noting that, as an anti-discrimination tool, data protection law remains largely untested generally,81 and the legal frameworks in both states are no different.
This article focuses on Kenya’s and South Africa’s legal frameworks because both are developing African states that have passed comprehensive data protection laws whose provisions are currently in force. Further, both states have an operational data protection authority established by law to supervise and enforce data protection law.82 Data protection law safeguards the rights of data subjects and establishes corresponding responsibilities for the data processors and controllers who collect their data.83 Although the purpose of data protection law is to protect personal information, it can also be used to protect other standards and rights, in this instance, anti-discrimination rights in the use of AI. Correspondingly, Van Bekkum and Borgesius state that, apart from data privacy, data protection law can also be used for anti-discrimination purposes and to protect other rights.84
However, it is crucial to note that a tension exists between AI and traditional data protection principles.85 Nevertheless, data protection principles can be translated and applied in a way that aligns with the advantageous application and use of AI.86 The principles and provisions can be interpreted and understood in a way that is consistent with and beneficial to the application of AI, as will be highlighted.87
Further, in order to assess the adequacy of the legal frameworks of both states in protecting PWDs from AI discrimination, this article advances two arguments.
To begin with, Roberts and Schwarcz argue that protecting privacy can limit discrimination. This is done when data protection law limits access to the very information discriminators use to discriminate.88 Limiting access acts as a barricade against detrimental differentiation.89
Roberts argues that unlawful discrimination frequently requires discriminators to be informed about protected status.90 For instance, in the context of employment, an employer cannot discriminate against an employee based on disability or any other protected characteristic if they do not have access to that information.91 In practice, it would be difficult, even impossible, for an employer to consciously or unconsciously ground a decision about an employee on their disability or another protected ground if the employer does not know about it.92 Hence, restricting potential discriminators’ access to information about one’s protected status can significantly reduce the chances of subsequent discrimination.93 In line with this, this article argues that when data protection law limits the processing of disability data, it also protects persons with disabilities from AI discrimination. The article builds on Roberts’s argument by adding that disability data should not merely be protected as personal data in general, but should be protected as a sensitive class of data requiring a greater level of protection. Generally, special or sensitive data may not be processed except in exceptional circumstances. Data that falls under this category requires more protection because of its sensitive nature.94
This article argues that there are a number of reasons why disability data should automatically fall in the category of special or sensitive data.
First, PWDs are often vulnerable and are heavily discriminated against generally.95 The very knowledge of a person’s disability is sensitive, as it can expose that person to discrimination; that is why protecting the privacy of a person’s disability status can often protect them from discrimination.
Additionally, although emerging technologies, especially assistive devices, are key to elevating the quality of life of PWDs by facilitating their participation in society, the same technology puts their privacy at risk.96 This is because assistive devices also collect and process sensitive data.97 Nor is it only assistive devices: persons with disabilities are also exposed to the collection of personal information in the workplace, for example, when a PWD requests reasonable accommodation, or when they seek social services or health care. Indeed, to access and participate in society, persons with disabilities are constantly put in positions where they must share detailed sensitive information, and in public spaces they are constantly attempting to balance the need for accessibility with the desire to protect their privacy.98 Privacy is therefore a key concern for persons with disabilities, because a large part of their lives involves managing privacy in order to participate in society.99 Consequently, for data protection to be effective in protecting PWDs from AI discrimination, disability data should be protected as special or sensitive data.
The second argument the article makes is that, while restricting the processing of disability data as special or sensitive data is key to providing protection against AI discrimination, it is not sufficient; the same law should also allow specific circumstances in which disability data may be processed for anti-discrimination purposes.100 Therefore, the law that limits the processing of data falling into the category of special or sensitive data, in this case disability data, additionally needs to provide specific and limited exceptions for processing the said data for auditing or debiasing purposes.101 For instance, if a company utilising an ADM system to select the best candidate wants to determine whether its AI system discriminates against individuals with disabilities or any other protected characteristic, such as ethnicity, it must conduct an audit. In order to conduct such an audit, the company requires access to data on applicants’ disabilities or ethnicities.102
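A brief hypothetical sketch shows why such an audit needs access to protected-attribute data (the records, the selection outcomes and the ‘four-fifths’ benchmark used here are illustrative assumptions, not requirements of either statute): without a disability field, the disparity in selection rates simply could not be measured.

```python
# Illustrative audit sketch only: hypothetical ADM outcomes with disability status.
decisions = [
    {"disability": False, "selected": True},
    {"disability": False, "selected": True},
    {"disability": False, "selected": False},
    {"disability": True,  "selected": True},
    {"disability": True,  "selected": False},
    {"disability": True,  "selected": False},
]

def selection_rate(has_disability: bool) -> float:
    """Share of applicants in the group selected by the ADM tool."""
    rows = [d for d in decisions if d["disability"] == has_disability]
    return sum(d["selected"] for d in rows) / len(rows)

rate_disabled = selection_rate(True)        # 1/3
rate_non_disabled = selection_rate(False)   # 2/3

# One common benchmark (the US 'four-fifths' rule) flags a selection-rate
# ratio below 0.8 as a potential adverse impact; it is used here purely as
# an example of what an auditor might compute.
ratio = rate_disabled / rate_non_disabled
print(f"selection-rate ratio = {ratio:.2f}",
      "-> review for bias" if ratio < 0.8 else "-> no flag")
```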
Consequently, although strict rules on special or sensitive categories of data limit discrimination on one end, a strict regime also acts as a barrier when it comes to assessing and mitigating discrimination.103 Furthermore, the allowance to process disability data for auditing and debiasing purposes is particularly crucial for PWDs because, although the use of ADM tools is growing in popularity globally, a 2024 report by the Centre for Democracy and Technology found that there is a lack of adequate, high-quality data about persons with disabilities.104
This allowance to process disability data for debiasing or auditing purposes, in my view, captures the spirit of the principles of transparency and explainability which, according to a report by the UN Special Rapporteur on the Right to Privacy, are significant for the reliable use of AI.105 AI systems suffer from being opaque, in that it is difficult for users to understand how they work.106 This opacity magnifies the inability to recognise and ‘prove possible breaches of laws, including legal provisions that protect fundamental rights, attribute liability and meet the conditions to claim compensation’.107 This is why transparency and explainability are key principles: they require that the use of AI and ADM be accompanied by information that explains how the decision was made.108
According to the Special Rapporteur’s report, the potential opacity of AI may be alleviated by mandating adherence to minimum transparency standards.109 The principle of transparency requires that ‘when interacting with an AI system and not a human being, users should be clearly informed in an objective, concise and easily understandable way’.110 Explainability, on the other hand, requires that an in-depth explanation accompany every decision, especially when the decision ‘impacts the end user in a way that is not temporary, easily reversible or otherwise low risk’.111 Additionally, a data subject should be informed about the reasoning behind the decision and the specific data that was utilised. This information is crucial as it allows the data subject to determine whether the decision was correct and, if not, provides them with relevant evidence to defend themselves or make a claim in a court of law in the case of inaccuracies or an injustice such as AI discrimination.112 Transparency and explainability are key to building trust in the use of AI.113 Hence, this article reviews the data protection laws of Kenya and South Africa to identify whether both entrench the principles of transparency and explainability as an AI anti-discrimination tool.
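In the spirit of these principles, the following hypothetical sketch (an invented scoring rule and field names, not a model of any deployed system or of either statute’s requirements) shows what pairing an automated decision with the data relied on and the reasons for the outcome could look like in practice:

```python
# Illustrative sketch only: a hypothetical automated loan decision that returns
# the decision together with the data used and a plain-language explanation.
def automated_loan_decision(applicant: dict) -> dict:
    reasons = []
    score = 0
    if applicant["monthly_income"] >= 50_000:
        score += 1
        reasons.append("monthly income meets the 50,000 threshold")
    else:
        reasons.append("monthly income is below the 50,000 threshold")
    if applicant["missed_repayments"] == 0:
        score += 1
        reasons.append("no missed repayments on record")
    else:
        reasons.append(f"{applicant['missed_repayments']} missed repayment(s) on record")

    return {
        "decision": "approved" if score == 2 else "declined",
        "data_used": {k: applicant[k] for k in ("monthly_income", "missed_repayments")},
        "explanation": reasons,  # disclosed to the data subject with the decision
    }

print(automated_loan_decision({"monthly_income": 42_000, "missed_repayments": 1}))
```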
3.1 Kenya’s data protection law and AI discrimination
The Kenya Data Protection Act 2019 (KDPA) was adopted by the National Assembly and assented to by the President of Kenya on 8 November 2019.114 The law ‘came into force on 25 November 2019 and gives effect to articles 31(c) and (d) of the Constitution of Kenya, 2010’.115
It provides guidance on the collection, storage, processing, dissemination and transfer of personal data in Kenya, and provides legal recourse where personal data is misused or abused.116 The first Data Protection Commissioner, Ms Immaculate Kassait, assumed office on 16 November 2020117 and to date remains in office.118
To start with, the Act fails to specifically define disability data as special or sensitive data. The KDPA defines personal data as ‘any information relating to an identified or identifiable natural person’,119 a definition that covers disability data. The Act further outlines a category of personal data that requires greater protection under the banner of sensitive data. Section 2 of the Act defines sensitive personal data but does not mention disability data as belonging to that category.120 However, one could make the case that disability data falls under the category of health data, which is listed as sensitive personal data. The Act defines health data as
data related to the state of physical or mental health of the data subject and includes records regarding the past, present or future state of the health, data collected in the course of registration for, or provision of health services, or data which associates the data subject to the provision of specific health services.121
Looking at this definition, it can be argued that there may be instances where disability data qualifies as health data, for example, disability data captured when a person with a disability seeks health care services. Nonetheless, this is limiting because it does not cover, for example, disability data collected for social services or employment, or while using assistive devices, or for other purposes.
Further, although health conditions sometimes cause disability, health and disability are two distinct categories.122 A person can have a disability and be healthy. At the same time, studies consistently report substantial health disparities experienced by persons with disabilities.123 For example, some individuals with disabilities are born with conditions such as blindness or show signs of a disabling condition early in life; others acquire a disability later, such as through a spinal cord injury; and others develop disabilities later in life, such as dementia or age-related mobility challenges.124 As a result, health needs vary depending on the type and the cause of one’s disability.125 Thus, for some, the nature of their disability can easily be distinguished from their health status, for example, a person who is born blind; for others, their health status may directly lead to their disability, for example, the loss of a limb as a result of diabetes.126
The Act’s definition of health data does not adequately capture these complexities of disability data. As a result, some disability data, for example, disability data collected for social services or government services or by assistive tools, remains open to collection and processing and will not be protected as special or sensitive data. Significantly, the American Data Privacy and Protection Act goes a step further than the KDPA and explicitly provides that sensitive data includes disability data.127 This provides clarity and, importantly, recognises that disability data also requires heightened protection, unlike the KDPA. This position limits the protection of PWDs from AI discrimination. Notably, though, the Act gives the Data Protection Commissioner the authority to recommend additional types of personal data to be classified as sensitive personal data.128 The Commissioner has not yet exercised these powers.
The Act does, however, allow exceptions for the processing of disability data for auditing or debiasing purposes. Under the KDPA, data controllers and processors may process disability data that does not qualify as sensitive personal data, in line with the principles and requirements found in sections 25 to 43 of the Act. This includes disability data that does not qualify as data processed for health purposes as provided in section 30 of the Act. The Act also outlines exceptions in section 45 that allow for the processing of disability data that may qualify as sensitive personal data. In fact, it can be argued that section 45(c) of the Act provides an avenue through which data controllers and processors can seek permission to process sensitive personal data for debiasing and auditing purposes with the aim of fighting AI discrimination, but this is not a given, as it is not included as a specific exception.
Additionally, although the KDPA does not specifically mention AI, it does refer to ADM in section 35 of the Act. It provides that ‘where a data controller or data processor takes a decision which produces legal effects or significantly affects the data subject based solely on automated processing, the data controller or data processor must, as soon as reasonably practicable, notify the data subject’.129
An organisation must inform a data subject when it uses ADM, but this does not specifically obligate the organisation to disclose information about the underlying reasoning of that decision-making process. The data processor or controller is not obligated to provide a clear and precise explanation of the solely automated decision. In fact, the only recourse for a data subject who experiences a legal effect as a result of ADM processes, in this case AI discrimination, is found in section 35(3)(b) of the KDPA. It states that, after a reasonable period has passed, the data subject may require the data controller or data processor to reconsider the ADM decision.130 Another option is to request the data controller or data processor to take a new decision that is not based solely on ADM.131 In response, a data controller or data processor is obligated to consider the request within a reasonable period132 and to comply.133 Further, the data subject should be informed in writing of compliance with the request.134 Importantly, the Act is also silent on how a reasonable period is to be determined under section 35. Here again, the Act fails to capture the transparency and explainability principles and limits the possibility of debiasing or auditing a potentially discriminatory process affecting PWDs.
It is worth noting that, according to section 35, ‘every data subject has a right not to be subjected to a decision based “solely” on automated processing, including profiling’.135 The word ‘solely’ differs from the approach under article 22 of the European Union (EU) General Data Protection Regulation (GDPR), which, referring to ADM, has been read as applying to decisions that are ‘largely’, rather than ‘solely’, automated, as under section 35 above. This leaves the section open to different interpretations. In fact, it could be argued that article 22 does not apply if a university denies a student admission based on a recommendation by an ADM system.136 Looking at Kenyan law, on the other hand, the question that arises is whether the law applies where decisions are partly automated, that is, where humans make decisions assisted by algorithms, for example, where an employer decides to hire an employee with a disability after an algorithmic system assesses the potential employee’s qualifications.137 Whether the Kenyan or the EU approach is limited or effective remains to be seen. Nevertheless, a more effective approach would be to provide that the principles of transparency and explainability apply whether the decision is ‘largely’, ‘partly’ or ‘solely’ automated. As long as AI processes are implemented, transparency and explainability should apply.
In addition, the data subject will not be alerted of an ADM process involving their data in a number of situations, namely, if the ADM
is necessary for entering into, or performing, a contract between the data subject and a data controller or it is authorised by a law to which the data controller is subject, and which lays down suitable measures to safeguard the data subject’s rights, freedoms and legitimate interests; or is based on the data subject’s consent.138
The Act then grants the Cabinet Secretary the power to create regulations and make further provision to enhance the protection of the rights of the data subject where decisions are made solely by an ADM process.139 Notably, the adequacy of the relevant provisions of the KDPA, sections 2, 30, 35 and 45, has not yet been tested in a court of law and, hence, is difficult to determine but, from this review, it is clearly limited.
3.2 South Africa’s data protection law and AI discrimination
The right to privacy is a fundamental right protected in the Constitution of South Africa.140 Markedly, ‘the Protection of Personal Information Act 4 of 2013 (POPIA) came into effect on 1 July 2020’. It was, however, ‘subject to a one-year grace period, which ended on 30 June 2021’.141 POPIA adopts important features from global privacy laws and is considered to meet the protection standards outlined by the EU Directive.142 Also, POPIA protects not only natural persons but extends protection to legal persons as well.143
POPIA regulates the handling of personal data in South Africa, including the collection, recording, storage, retrieval, organisation, alteration, use, updating and distribution of personal information.144
Similar to Kenya, POPIA in section 26 prohibits ‘the processing of special personal information’.145 However, unlike the Kenyan Act, POPIA in section 1 specifically lists disability data as falling within the category of ordinary personal information.146 Consequently, although POPIA provides that personal information relating to health falls into the category of special personal information, whose processing is prohibited by section 26,147 the position differs from Kenya, where disability data connected to health information could in some instances be treated as sensitive personal data: because the Act specifically defines disability data as ordinary personal information,148 the same heightened treatment may not apply. As a consequence, the Act limits the protection of persons with disabilities from AI discrimination.
Additionally, section 71 of POPIA deals with ADM. Section 71(1) states that a data subject, in this case a person with a disability,
may not be subject to a decision which results in legal consequences for him, her or it, or which affects him, her or it to a substantial degree, which is based solely on the basis of the automated processing of personal information intended to provide a profile of such person including his or her performance at work, or his, her or its creditworthiness, reliability, location, health, personal preferences or conduct.149
Interestingly, unlike the Kenyan Act, although section 71 prohibits the making of decisions concerning data subjects based entirely on an ADM process, it does not place on data processors or controllers an obligation to notify the affected party when such a process or decision is undertaken. The principles of transparency and explainability demand that an affected party always be notified, and this limits the application of the section. On the other hand, although POPIA contains no notification obligation, it is more progressive than Kenyan law in one respect: where a data subject, in this case a PWD, raises a decision with legal consequences that was made solely on the basis of ADM, the data controller or processor is obligated to give the data subject information explaining the logic behind the decision or process.150 Hence, an organisation can be obligated to explain that it used ADM and must provide relevant information on the foundational logic of that decision-making process.
However, as in Kenya, section 71(2) of POPIA provides a number of situations in which data processors and controllers are not obligated to provide the data subject with adequate information on the foundational logic behind the ADM process.
As with Kenya, the relevant provisions on AI discrimination, sections 26 and 71, have not yet been tested in a court of law. Nonetheless, based on this review, POPIA is limited in protecting persons with disabilities from AI discrimination. First, as has been highlighted, POPIA does not categorise disability data as special data and, hence, denies this data a greater level of protection. Second, although it may be argued that section 71 of POPIA is more progressive than section 35 of Kenya’s KDPA in that it obligates the data processor and controller to provide relevant information in the case of solely automated processing, it fails to accurately capture the principles of transparency and explainability in that it does not obligate the same data processors and controllers to notify data subjects of its use. One is therefore left wondering how a data subject, in this case a PWD, will be able to identify when a solely automated process has been used with regard to their data.
4 Conclusion
In summary, while emerging technologies such as AI bring great benefits that increase the inclusion and participation of PWDs, they also raise legitimate concerns about AI disability discrimination.151 Further, as has been highlighted, AI has brought many benefits, including independence, which is key for PWDs to fully benefit from and participate in society. The solution therefore lies in finding a balance between, on the one hand, the use of and access to the benefits of AI by PWDs and, on the other, anti-discrimination protection from AI processes. As has been highlighted, both Kenyan and South African law have made progress in providing law that can be used to regulate AI discrimination. Nevertheless, more needs to be done to ensure the adequate protection of PWDs from AI discrimination. Both laws need to define disability data as falling within the category of special or sensitive data. Moreover, both laws need to capture, explicitly or at least implicitly, the principles of transparency and explainability in the sections that regulate AI processes. This will enable PWDs to be adequately protected from AI discrimination.152

Nonetheless, transparency and explainability are not always practical or even attainable, because the opaqueness associated with algorithmic decisions makes them difficult to explain.153 It is not always easy to clearly explain the logic behind a decision and, in some circumstances, an explanation might not be helpful.154 Thus, while data protection laws play a crucial role in safeguarding persons with disabilities against AI discrimination, a significant part of the solution lies in technical advancements. This involves redesigning algorithms, or developing alternative versions, that align with ethical standards and regulatory requirements, wherever feasible.155 There is a need to advance algorithmic systems that facilitate transparency and explainability.156 Lastly, it may be too early to assess the effects that data protection law can have on AI discrimination in both states, as more legal research and jurisprudential development is needed.
-
1 WHO ‘Disability: Key facts’ 7 March 2023, https://www.who.int/news-room/fact-sheets/detail/disability-and-health (accessed 21 July 2024).
-
2 DS Raja (World Bank Group) ‘Bridging the disability divide through digital technologies, world development report’ (2016) 5, http://pubdocs.worldbank.org/en/123481461249337484/WDR16-BP-Bridging-the-Disability-Divide-through-Digital-Technology-RAJA.pdf, https://giwps.georgetown.edu/dei-resources/bridging-the-disability-divide-through-digital-technologies/ (accessed 21 July 2024).
-
3 WHO (n 1); C Marzin ‘Plug and pray? A disability perspective on artificial intelligence, automated decision-making and emerging technologies’ (2018) 5, https://www.edf-feph.org/content/uploads/2020/12/edf-emerging-tech-report-accessible.pdf, https://www.edf-feph.org/publications/plug-and-pray-2018/ (accessed 21 July 2024).
-
4 Marzin (n 3) 5.
-
5 WHO (n 1); Marzin (n 3) 5.
-
6 Marzin (n 3) 5.
-
7 As above.
-
8 As above.
-
9 IBM ‘What is the IoT?’, https://www.ibm.com/topics/internet-of-things (accessed 30 July 2024).
-
10 As above.
-
11 A Habbal and others ‘Privacy as a lifestyle: Empowering assistive technologies for people with disabilities, challenges and future directions’ (2024) 36 Journal of King Saud University – Computer and Information Sciences 2.
-
12 Future of Privacy Forum ‘The internet of things and people with disabilities: Exploring the benefits, challenges and privacy tensions’ January 2019 1, https://fpf.org/wp-content/uploads/2019/01/2019_01_29-The_Internet_of_Things_and_Persons_with_Disabilities_For_Print_FINAL.pdf (accessed 21 July 2024).
-
13 As above; M Marks ‘Algorithmic disability discrimination’ in G Cohen & C Shachar (eds) Disability, health, law, and bioethics (2020) 243.
-
14 Marzin (n 3) 5.
-
15 E Ferrara ‘Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies’ (2024) 6 Sci 2.
-
16 M Buyl and others ‘Tackling algorithmic disability discrimination in the hiring process: An ethical, legal and technical analysis’ 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22) 21-24 June 2022, Seoul 1.
-
17 As above.
-
18 AB Nougrères ‘Report of the Special Rapporteur on the Right to Privacy: Principles of transparency and explainability in the processing of personal data in artificial intelligence’ (30 August 2023) A/78/310 para 7; T Krupiy & M Scheinin ‘Disability discrimination in the digital realm: How the ICRPD applies to artificial intelligence decision-making processes and helps in determining the state of international human rights law’ (2023) 23 Human Rights Law Review 1, 2.
-
19 EU Agency for Fundamental Rights (FRA) ‘Bias in algorithms – Artificial intelligence and discrimination’ (2022) 7, https://fra.europa.eu/sites/default/files/fra_uploads/fra-2022-bias-in-algorithms_en.pdf (accessed 21 July 2024).
-
20 Marzin (n 3) 6.
-
21 FRA (n 19) 7.
-
22 As above.
-
23 M Viola de Azevedo Cunha ‘Child privacy in the age of web 2.0 and 3.0: Challenges and opportunities for policy’ UNICEF Innocenti Discussion Paper March 2017 10, https://cadmus.eui.eu/handle/1814/49884 (accessed 23 December 2024); Artificial Intelligence (AI) Algorithms (10 April 2024), https://www.geeksforgeeks.org/ai-algorithms (accessed 23 December 2024).
-
24 Centre of Intellectual Property and Technology Law (CIPIT) co-authored with LO Orero & J Kaaniru (Strathmore University) ‘Policy brief – Automated decision-making policies in Africa’ (2023) 3, https://cipit.strathmore.edu/category/publications/policy-briefs/ (accessed 23 December 2024).
-
25 Centre for Intellectual Property and Information Technology Law (CIPIT) ‘The applications, challenges and regulation of automated decision-making (ADM) in Africa’ 8 November 2024 7, https://cipit.strathmore.edu/the-applications-challenges-and-regulation-of-automated-decision-making-adm-in-africa/ (accessed 30 December 2024).
-
26 As above.
-
27 CIPIT (n 24) 8.
-
28 CIPIT (n 24) 9.
-
29 As above.
-
30 As above.
-
31 As above.
-
32 Buyl and others (n 16) 1.
-
33 As above.
-
34 Viola de Azevedo Cunha (n 23) 10.
-
35 G Sartor (STOA) ‘The impact of the General Data Protection Regulation (GDPR) on artificial intelligence’ (2020) ii, https://www.europarl.europa.eu/RegData/etudes/STUD/2020/641530/EPRS_STU(2020)641530_EN.pdf (accessed 21 July 2024).
-
36 FRA (n 19).
-
37 Marzin (n 3) 25.
-
38 Viola de Azevedo Cunha (n 23) 10.
-
39 Sartor (n 35).
-
40 As above.
-
41 Marks (n 13) 243.
-
42 Marzin (n 3) 26.
-
43 Sartor (n 35) i; Marks (n 13) 243.
-
44 Ferrara (n 15) 2.
-
45 Ferrara (n 15) 4.
-
46 G Alexiou ‘Disability data alarmingly absent from AI algorithmic tools, report suggests’ 6 August 2024, https://www.forbes.com/sites/gusalexiou/2024/08/06/disability-data-alarmingly-absent-from-ai-algorithmic-tools-report-suggests/ (accessed 25 December 2024).
-
47 United Nations Human Rights Treaty Bodies, Ratification Status for CRPD – Convention on the Rights of Persons with Disabilities, https://tbinternet.ohchr.org/_layouts/15/TreatyBodyExternal/Treaty.aspx?Treaty=CRPD (accessed 21 July 2024).
-
48 SA Genga ‘Legal responses to employment discrimination on the basis of psychosocial disabilities: Kenya’s and South Africa’s compliance with the Convention on the Rights of Persons with Disabilities’ unpublished PhD thesis, University of the Witwatersrand, 2021 71, 25.
-
49 Genga (n 48) 71.
-
50 Genga (n 48) 51, 184-193.
-
51 Genga (n 48) 71, 184-193.
-
52 JL Roberts ‘Protecting privacy to prevent discrimination’ (2015) 56 William & Mary Law Review 2109.
-
53 Art 2 Convention on the Rights of Persons with Disabilities.
-
54 H Weerts and others ‘Unlawful proxy discrimination: A framework for challenging inherently discriminatory algorithms’ (2024) ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24) 3-6 June 2024, Rio de Janeiro, Brazil, ACM, New York, NY, USA, https://doi.org/10.1145/3630106. 3659010, 1850; Roberts (n 52) 2109.
-
55 Roberts (n 52) 2109.
-
56 Genga (n 48) 111-118, 140-152.
-
57 Rights of persons with disabilities, Report of the Special Rapporteur on the Rights of Persons with Disabilities (28 December 2021) UN DOC A/HRC/49/52 para 61.
-
58 FJZ Borgesius ‘Strengthening legal protection against discrimination by algorithms and artificial intelligence’ (2020) 24 International Journal of Human rights 1575; Marks (n 13) 244.
-
59 Marks (n 13) 244.
-
60 As above.
-
61 Marks (n 13) 243.
-
62 As above; Marzin (n 3) 25.
-
63 Marzin (n 3) 26.
-
64 As above.
-
65 Weerts and others (n 54) 1850.
-
66 M van Bekkum & FZ Borgesius ‘Using sensitive data to prevent discrimination by artificial intelligence: Does the GDPR need a new exception?’ (2023) 48 Computer Law and Security Review 3; Weerts and others (n 54) 1851-1852.
-
67 Weerts and others (n 54) 1852.
-
68 Van Bekkum & Borgesius (n 66).
-
69 Case 23-cv-00770-RFL, FindLaw, https://caselaw.findlaw.com/court/us-dis-crt-n-d-cal/116378658.html?utm_source=chatgpt.com; D Wiessner ‘Workday must face novel bias lawsuit over AI screening software’ 16 July 2024, https://www.reuters.com/legal/litigation/workday-must-face-novel-bias-lawsuit-over-ai-screening-software-2024-07-15/ (accessed 29 December 2024).
-
70 As above.
-
71 As above.
-
72 Wiessner (n 69).
-
73 FindLaw, https://caselaw.findlaw.com/court/us-dis-crt-n-d-cal/116378658.html?utm_source=chatgpt.com (accessed 21 July 2024); Wiessner (n 69).
-
74 As above.
-
75 Wiessner (n 69).
-
76 CIPIT (n 25) 7-9.
-
77 1:22-cv-10800, C Milstein, https://www.cohenmilstein.com/case-study/louis-et-al-v-saferent-solutions-et-al/ (accessed 29 December 2024).
-
78 1:22-cv-02565 (EDNY), https://www.workforcebulletin.com/assets/htmldocuments/blog/8/2023/08/2023.08.09-EEOC-v.-iTutorGroup-Joint-Notice-of-Settlement-22-cv-02565-PKC-PK.pdf; Court Listener, https://www.courtlistener.com/docket/63288748/equal-employment-opportunity-commission-v-itutorgroup-inc/ (accessed 23 December 2024).
-
79 789 F.3d 962 (9th Cir 2015); G van Toorn (ARC Centre of Excellence for Automated Decision-Making and Society and Data Justice Lab) ‘United against algorithms: A primer on disability-led struggles against algorithmic injustice’ 15 April 2024, https://apo.org.au/node/326312 19 (accessed 21 July 2024); E McCormick ‘What happened when a “wildly irrational” algorithm made crucial healthcare decisions’ 2 July 2021, https://www.theguardian.com/us-news/2021/jul/02/algorithm-crucial-healthcare-decisions (accessed 21 July 2024).
-
80 The Kenya Robotics and Artificial Intelligence Society Bill, 2023, https://www.dataguidance.com/sites/default/files/the_kenya_robotics_and_artificial_intelligence_society_bill_2023.docx.pdf (accessed 21 July 2024).
-
81 Borgesius (n 58) 1582.
-
82 Office of the Data Protection Commissioner (ODPC) ‘Data commissioner inaugurates training for data protection officers on data protection impact assessment’ 24 April 2024, https://www.odpc.go.ke/data-commissioner-inaugurates-training-for-data-protection/ (accessed 30 December 2024); The Information Regulator (South Africa) ‘Members of the Information Regulator’, https://inforegulator.org.za/members-2/ (accessed 30 December 2024).
-
83 Borgesius (n 58) 1576.
-
84 Van Bekkum & Borgesius (n 66) 5; A Calvi ‘Exploring the synergies between non-discrimination and data protection: What role for EU data protection law to address intersectional discrimination?’ (2023) 14 European Journal of Law and Technology; D le Métayer & J le Clainche ‘From the protection of data to the protection of individuals: Extending the application of non-discrimination principles’ in S Gutwirth and others (eds) European data protection: In good health? (2012) 315-316.
-
85 Sartor (n 35) ii.
-
86 Sartor (n 35) i.
-
87 Sartor (n 35) ii.
-
88 Roberts (n 52) 2097; D Schwarcz ‘Health-based proxy discrimination, artificial intelligence, and big data’ (2021) Houston Journal of Health Law and Policy 4; MC Tschantz ‘What is proxy discrimination?’ (2022) ACM Conference on Fairness, Accountability, and Transparency (FAccT ’22) 1, 21-24 June 2022, Seoul, Republic of Korea. ACM, New York, NY, USA, https://doi.org/10.1145/3531146.3533242 (accessed 21 July 2024).
-
89 Roberts (n 52) 2101.
-
90 Roberts (n 52) 2097.
-
91 Roberts (n 52) 2099.
-
92 As above.
-
93 Roberts (n 52) 2099-2100.
-
94 UK Information Commissioner’s Office ‘Special category data’, https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/lawful-basis/a-guide-to-lawful-basis/lawful-basis-for-processing/special-category-data (accessed 29 December 2024).
-
95 WHO (n 1).
-
96 Habbal and others (n 11) 2.
-
97 As above.
-
98 L McRae and others ‘Privacy and the ethics of disability research: Changing perceptions of privacy and smartphone use’ in J Hunsinger and others (eds) Second international handbook of internet research (2020) 413.
-
99 As above.
-
100 T Marwala ‘The dual faces of algorithmic bias – Avoidable and unavoidable discrimination’ 30 January 2024, https://www.dailymaverick.co.za/opinionista/2024-01-30-the-dual-faces-of-algorithmic-bias-avoidable-and-unavoidable-discrimination/ (accessed 30 January 2024); CIPIT (n 25) 14; RJ Chen and others ‘Algorithmic fairness in artificial intelligence for medicine and healthcare’ (2023) 7 Nature Biomedical Engineering 719-742, 6 and 47; Weerts and others (n 54) 1852.
-
101 As above; Van Bekkum & Borgesius (n 66) 5; Rights of persons with disabilities, Report of the Special Rapporteur on the Rights of Persons with Disabilities (28 December 2021) UN DOC A/HRC/49/52 para 62; Tschantz (n 88) 1.
-
102 Borgesius (n 58) 1579.
-
103 Borgesius (n 58) 1581.
-
104 A Aboulafia and others (Centre for Democracy and Technology) Report – To reduce disability bias in technology, start with disability data 25 July 2024 6-7, https://cdt.org/wp-content/uploads/2024/07/2024-07-23-Data-Disability-report-final.pdf (accessed 21 July 2024); Alexiou (n 46).
-
105 Nougrères (n 18) para 1.
-
106 A Facchini & A Termine ‘Towards a taxonomy for the opacity of AI systems’ in VC Muller (ed) Philosophy and theory of artificial intelligence (2021) 73.
-
107 Para 27.
-
108 AM Laibuta ‘Adequacy of data protection Regulation in Kenya’ unpublished PhD thesis, University of the Witwatersrand, 2023 171.
-
109 Nougrères (n 18) para 28.
-
110 As above.
-
111 Nougrères (n 18) para 31.
-
112 Nougrères (n 18) para 50.
-
113 Nougrères (n 18) para 63(a).
-
114 Amnesty International Kenya ‘Comparative study on data protection regimes’ (2021) 11, https://restoredatarights.africa/wp-content/uploads/2021/12/Amnesty-International-Kenya-Data-Protection-Report-Pages-1.pdf (accessed 21 July 2024).
-
115 As above.
-
116 Kenya Data Protection Act 24 of 2019.
-
117 Amnesty International Kenya (n 114) 11; Laibuta (n 108) 172.
-
118 Office of the Data Protection Commissioner (ODPC) ‘Data commissioner inaugurates training for data protection officers on data protection impact assessment’ 24 April 2024, https://www.odpc.go.ke/data-commissioner-inaugurates-training-for-data-protection/ (accessed 30 July 2024).
-
119 Kenya Data Protection Act 24 of 2019 sec 2.
-
120 As above.
-
121 As above.
-
122 S Yee & M Lou Breslin ‘Disability rights education and Defence Fund: This data, not that data: Big data, privacy, and the impact on people with disabilities’ (March 2023) 1, https://healthlaw.org/wp-content/uploads/2023/03/This-Data-Not-That-Data_Disability-Rights-Education-and-Defense-Fund_FINAL.pdf (accessed 21 July 2024).
-
123 As above.
-
124 GL Krahn and others ‘Persons with disabilities as an unrecognized health disparity population’ (2015) American Journal of Public Health 198.
-
125 As above.
-
126 As above.
-
127 H.R.8152 – American Data Privacy and Protection Act, 117th Congress (2021-2022) sec 28(i), https://www.congress.gov/bill/117th-congress/house-bill/8152/text#toc-H0299B60817D742978DC3C447CD110A88 (accessed 29 July 2024).
-
128 KDPA sec 47; Amnesty International Kenya (n 114) 26.
-
129 Sec 35(3)(a).
-
130 Sec 35(3)(b)(i).
-
131 Sec 35(3)(b)(ii).
-
132 Sec 35(4)(a).
-
133 Sec 35(4)(b).
-
134 Secs 35(b) & (c).
-
135 Sec 35(1).
-
136 Borgesius (n 58) 1580.
-
137 Borgesius (n 58) 1573.
-
138 Secs 35(2)(a), (b) & (c).
-
139 Sec 35(5).
-
140 Sec 14 of the Constitution; DLA Piper ‘Data protection laws of the world: South Africa vs United Kingdom’ (12 June 2024) 2, https://www.dlapiperdataprotection.com/system/modules/za.co.heliosdesign.dla.lotw.data_protection/functions/handbook.pdf?country-1=ZA&country-2=GB (accessed 23 July 2024).
-
141 Constitution of the Republic of South Africa, 1996 sec 14; PJ de Waal ‘The Protection of Personal Information Act (POPIA) and the Promotion of Access to Information Act (PAIA): It is time to take note’ (2022) 35 Current Allergy and Clinical Immunology 232.
-
142 N Baloyi & P Kotzé ‘Are organisations in South Africa ready to comply with personal data protection or privacy legislation and regulations?’ IST-Africa 2017 Conference Proceedings, P Cunningham & M Cunningham (eds) IIMC International Information Management Corporation (2017) 2; A da Veiga & J Ophoff ‘Concern for information privacy: A cross-nation study of the United Kingdom and South Africa’ 14th International Symposium on Human Aspects of Information Security and Assurance (HAISA), July 2020, Mytilene, Lesbos, Greece 5.
-
143 Baloyi and Kotzé (n 142) 2.
-
144 Protection of Personal Information Act 4 of 2013 sec 1; S Mahomed and others ‘The role of data transfer agreements in ethically managing data sharing for research in South Africa’ (2022) 15 South African Journal of Bioethics Law 27.
-
145 Sec 26 POPI.
-
146 Sec 1 POPI, definition of special personal information.
-
147 Sec 26(a)(1) POPIA.
-
148 Sec 1(c) POPIA, definition of special personal information.
-
149 Sec 71(1) POPIA.
-
150 Sec 71(3)(b).
-
151 V Cobigo & K Czechowski ‘Protecting the privacy of technology users who have cognitive disabilities: Identifying areas for improvement and targets for change’ (2020) 7 Journal of Rehabilitation and Assistive Technologies Engineering 1; Marzin (n 4) 4.
-
152 Nougrères (n 18) para 64(b).
-
153 MF Nkonge ‘Legal challenges facing algorithmic decision-making in Kenya’ (2022) University of Nairobi Law Review 18; Nougrères (n 18) para 57.
-
154 Borgesius (n 58) 1581.
-
155 Nougrères (n 18) para 57.
-
156 Nkonge (n 153) 18, 19.