AI bias in sexual and reproductive health: an African human rights perspective
In this brief, Phenyo Sekati explores the impact of biased training data in artificial intelligence (AI) healthcare models, emphasising the limited research on sexual and reproductive health (SRH) in low- and middle-income countries on the African continent. The brief highlights the risks of bias in the data used to train AI healthcare models and examines the human rights implications for African women and girls.
The role of AI in sexual and reproductive health
Society has seen a surge in the use of artificial intelligence (AI) across many fields, including healthcare and, in particular, sexual and reproductive health (SRH) research and data collection. A recent journal article found that AI is being used to work towards the sustainable development goals for universal SRH within five core components, namely: improvement of antenatal, perinatal, postpartum, and newborn care; provision of high-quality contraception and infertility services; elimination of unsafe abortions; prevention and treatment of sexually transmitted infections and reproductive tract infections; and prevention and treatment of cervical cancer. These advancements have been praised for their benefits to human rights, specifically with respect to sexual rights and rights to sexual autonomy.
In spite of this, a downside exists. AI-driven tools rely on training data sourced from existing information on a particular issue or topic. As it stands, SRH research remains limited, especially for women and girls in low- and middle-income countries. As such, AI-driven tools rely on fragmented training data, giving rise to potential bias. This brief considers the contribution of AI to SRH in Africa, explains how SRH research is typically gathered and sourced, and shows how these sources can generate skewed perspectives. It then turns to a discussion of bias, particularly how the limited research on SRH for African women and girls influences the data used for AI learning.
Existing projects and institutions on AI in SRH
AI has significant potential to improve the efficiency, accuracy, and accessibility of sexual and reproductive health services. A technical brief by the World Health Organization highlights AI's ability to enhance diagnostics, personalise treatments, and predict health trends, thus contributing to global SRH goals.
There are emerging institutional projects aiming to use AI to enhance the delivery of SRH information and services within the African region. A notable project is the Innovation Hub on Artificial Intelligence for Sexual, Reproductive and Maternal Health in Africa, based at the Infectious Diseases Institute in Uganda. The Institute has been engaging in a variety of projects that leverage AI and machine learning to address SRH challenges across several African countries. These projects include the development of AI-enabled chatbots by mDoc Healthcare in Nigeria and the University of Embu in Kenya, which provide personalised health guidance on sexually transmitted infections. Additionally, institutions such as Addis Ababa Science and Technology University in Ethiopia and Makerere University in Uganda use mathematical modelling and machine learning algorithms to predict and monitor sexually transmitted diseases and miscarriages. The information gained from AI diagnostic tools has reportedly also enabled institutions such as the Pan African Information Communication Technology in Namibia and Makerere University to develop machine learning models to predict risks such as gestational hypertension and contraceptive side effects.
AI has also been used for several projects aimed at promoting SRH rights in South Africa. For example, BroadReach Healthcare developed a machine learning system to predict clinic performance in KwaZulu-Natal and improve HIV treatment outcomes.
Overall, these projects have contributed significantly to protecting and promoting SRH rights by providing personalised care, improving early detection, and enhancing health education and awareness by using training data and algorithms sourced from existing medical research.
Bias emanating from research and implementation
When considering the issue of bias in AI, it is useful to adopt an intersectional lens to gauge the extent to which certain groups of people may be excluded or prejudiced. As it stands, most research on adolescent SRH concentrates on a small number of African countries, such as South Africa and Kenya. This research disparity perpetuates global imbalances in knowledge production. Local African researchers are often underrepresented as lead authors, particularly in collaborations with researchers from the United States and Europe, who have the financial resources to fund research. This dynamic reinforces power imbalances, where African researchers contribute significantly but receive less recognition and less influence over research priorities. Consequently, these inequities have the potential to skew global datasets and narratives, which in turn influence AI models trained on incomplete or biased data.
AI healthcare tools analyse and synthesise data from various sources, including electronic health records; medical images, which assist in early disease detection and improve diagnostic accuracy; genomic data derived from individual genetic markers; and evidence from patient surveys that shows how treatments perform outside of controlled clinical settings. These systems rely heavily on existing data, and the underrepresentation of many African countries means that algorithms risk embedding and amplifying existing biases.
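The way skewed training data translates into unequal outcomes can be illustrated with a deliberately simplified, hypothetical sketch (all groups, markers, and thresholds below are invented for illustration and are not drawn from any real SRH dataset). A single decision threshold is fitted to pooled records in which a majority group outnumbers a minority group nine to one, and the condition being screened for presents differently in each group. The fitted model then performs markedly worse for the minority group:

```python
def fit_threshold(xs, ys):
    """Pick the threshold that minimises training error on the pooled data."""
    best_t, best_err = None, float("inf")
    for t in range(0, 11):
        err = sum((x >= t) != bool(y) for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def accuracy(xs, ys, t):
    """Fraction of records where the threshold rule matches the true label."""
    return sum((x >= t) == bool(y) for x, y in zip(xs, ys)) / len(xs)

# Hypothetical group A (majority, 90 records):
# the condition is present when the marker value is 5 or above.
xs_a = [x for x in range(10) for _ in range(9)]
ys_a = [int(x >= 5) for x in xs_a]

# Hypothetical group B (minority, 10 records):
# the condition is present only when the marker value is 8 or above.
xs_b = list(range(10))
ys_b = [int(x >= 8) for x in xs_b]

# Fit one rule on the pooled data, then evaluate it per group.
t = fit_threshold(xs_a + xs_b, ys_a + ys_b)
acc_a = accuracy(xs_a, ys_a, t)  # 1.0 — perfect for the majority group
acc_b = accuracy(xs_b, ys_b, t)  # 0.7 — three false positives for the minority
```

Because the minority group contributes only a tenth of the records, the pooled threshold is effectively the majority group's threshold, and the minority group absorbs the resulting misclassifications. This is, in miniature, the dynamic that underrepresentation in SRH datasets can produce at scale.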
A primary challenge is the lack of standardisation needed to make datasets interoperable across different clinical settings and diverse patient populations. Moreover, bias in AI tools can arise from the use of retrospective data, which may fail to account for changes in patient demographics, such as age, ethnicity, or gender. The Organisation for Economic Co-operation and Development (OECD) has indicated a deficiency in South Africa’s medical diagnostic resources. As a result, a significant amount of research on the application of AI to medical diagnosis is sourced from developed countries. These issues not only constrain the development of AI tools but also make it difficult to ensure that AI models are generalisable across different patient populations. The exclusion of certain patient populations from AI datasets has profound human rights implications: systemic inequities are entrenched and equitable access to SRH is denied.
Conclusion
The issues outlined above demonstrate a need not only to improve training data but also to address the broader systemic issues at play. Specifically, efforts to integrate AI into health services should ensure that AI systems developed and deployed in SRH in Africa adopt a more intersectional lens. This includes accounting for the diverse social, cultural, and economic contexts across the continent, beyond the current focus on countries like South Africa and Kenya. Ultimately, however, many questions remain to be explored and answered before these complex issues can be fully addressed. Strengthening local researchers’ leadership and ensuring equitable collaboration can serve as a meaningful start to developing and implementing representative data and, consequently, more equitable AI systems that better address the diverse SRH needs of different patients.
Phenyo Sekati is a candidate legal practitioner at Power & Associates.