
Equitable AI Guanajuato Use Case

Preventing and mitigating gender bias in Guanajuato’s AI-based early alert systems designed to prevent school dropout.

Submitted on 1 September 2023

Case Author

Cristina Martínez Pinto

PIT Policy Lab

Key takeaways

  • Role of champions and leadership: The Secretariat of Education of Guanajuato (SEG) was an ideal partner, as it not only had a vision to innovate but also had the technical capacity and team in place to advance an AI initiative.
  • Importance of data: SEG had over 20 years of data and had worked on harmonising its different databases; it is intent on using data for evidence-based policymaking.
  • Ease of use of open-source tools (AI Fairness 360 Toolkit): We were able to test the tool’s applicability in a developing-country context, and it helped us identify and mitigate gender bias.
  • Importance of multi-disciplinary and multi-sector collaboration: This was a key point in the development of our socio-technical approach. We formed an international consortium of four organisations to carry out the project, with the support of USAID.
  • Policy outcome: Evidence-based policymaking that accounts for gender bias – everybody counts.

Key issues

As we moved forward with the different milestones within the project timeline, the Secretariat of Education of Guanajuato (SEG) was working in parallel with the World Bank to develop the algorithm central to the initiative. Therefore, we were not able to test the AIF360 tool on a final model and instead had to use training databases. During our technical training on the use of the tool, we emphasised the opportunity for the SEG team to own the tool and experiment with it, so that they could continue the analysis internally.

Case target / aim / intention

The case aimed to develop an Equitable AI use case in the education sector: improving school retention and graduation rates by identifying at-risk students, and providing insights into the role that gender plays in project design and implementation. The Equitable AI: Guanajuato Use Case had four components:

  1. Local capacity building
  2. Use of the AI Fairness 360 (AIF360) tool
  3. A training course for key SEG staff
  4. Development of an AI Ethics Guide and Checklist for decision makers in the public sector, together with policy recommendations.

Case Study

Part One

Initial Situation

About 40,000 students drop out of the education system in Guanajuato every year.

In this context, the Government of Guanajuato responded to UNESCO's urgent call to innovate in the education system and support girls, boys and young people at risk of dropping out of school. The aim of these efforts is to find innovative solutions by using ICTs and improving data-driven decision-making.

Through the Learning Analytics project, the Secretariat of Education of Guanajuato (SEG) has been harmonising its different databases, developing use cases and identifying patterns and variables, with the objective of having reliable, up-to-date data on the current state of the education system. The products of the Learning Analytics project enabled predictive analytics on individual students’ trajectories across the state, facilitating joint monitoring by educational institutions and parents. These predictive analytics make it possible to identify multiple aspects of school dropout and to develop effective support strategies.

In partnership with the World Bank via the Educational Trajectories initiative, the SEG created an AI-based early alert system, the Early Action System for School Permanence (SATPE), aimed at improving school retention and graduation rates by identifying and supporting at-risk students. While SATPE serves as a meaningful use case for how government agencies can use AI to address pressing social challenges, such AI-based tools are vulnerable to entrenching existing biases and producing discriminatory outcomes.

Identifying the need to mitigate potential gender bias in the SATPE system, PIT Policy Lab, Itad, Women in Digital Transformation, and Athena Infonomics worked together to stress the need to incorporate frameworks that guide SEG’s actions beyond their initial focus on privacy and personal data protection.

[Image: Implementation - capacity building]

Part Two

Strategy and implementation

To mitigate potential gender bias in the SATPE system, the international consortium stressed the need to incorporate frameworks guiding the SEG’s actions beyond their initial focus on privacy and personal data protection. The project had four components:

  1. Capacity building

In this component, the different actors involved in the development and implementation of SATPE (strategy and innovation teams, evaluation, school control, operations, etc.) took a first look at the ethical, gender and human rights implications of a public policy that, like this one, uses artificial intelligence. A series of workshops strengthened the SEG team's perspectives, prompting reflection and helping them apply this knowledge to the design of SATPE and its interventions. The workshops also provided a space for the SEG to raise its own questions, and the answers became important lines of action in the design of interventions, particularly the idea that ethical considerations should be expressed as clear criteria and set precedents for future innovations that involve data and AI and therefore entail risks for the beneficiary population. While the workshops provided valuable tools for avoiding biases built into public policy by design, the careless application of AI systems can still reproduce and amplify biases present in the data: because datasets reflect the society they describe, the datasets used to train and test the SATPE AI system may embed biases. For this reason, a second component of the project was developed.

  2. Use of the AI Fairness 360 (AIF360) tool

This tool, developed by the IBM Research team and currently hosted by The Linux Foundation, was created to identify and mitigate biases. The preliminary results revealed some bias with respect to the gender variable; for example:

It was found that up to 4 girls and adolescents out of every 100 at secondary level may not be correctly identified as being at risk of dropping out if the biases present in the data are not mitigated.

This implies that the preliminary SATPE results should be treated with caution. However, by using the AIF360 tool, it was possible to transform the data to correct for such biases, achieving improved results when the corrected data were run through a generic model. In this sense, the AIF360 tool proved useful in identifying and mitigating biases in the data.
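To make this workflow concrete, below is a minimal sketch of the kind of detect-and-mitigate pass AIF360 supports, written against entirely synthetic data. The column names, the 0/1 gender coding, the choice of privileged group and the use of the Reweighing pre-processing algorithm are illustrative assumptions for this sketch; they do not reflect the SEG's actual schema, model or results.

import numpy as np
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

rng = np.random.default_rng(0)
n = 1000

# Synthetic stand-in for a training table: 'gender' is the protected
# attribute (0 = female, 1 = male is an assumption made for this sketch),
# 'at_risk' is the label the early alert system would predict.
df = pd.DataFrame({
    "gender": rng.integers(0, 2, n).astype(float),
    "attendance_rate": rng.uniform(0.5, 1.0, n),
})
# Build in the kind of skew the case describes: girls are flagged at
# risk less often than boys with otherwise similar profiles.
df["at_risk"] = (rng.random(n) < np.where(df["gender"] == 1.0, 0.35, 0.25)).astype(float)

dataset = BinaryLabelDataset(
    df=df,
    label_names=["at_risk"],
    protected_attribute_names=["gender"],
    favorable_label=1.0,  # being flagged is 'favorable' here: it triggers support
    unfavorable_label=0.0,
)
privileged = [{"gender": 1.0}]
unprivileged = [{"gender": 0.0}]

before = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact before mitigation:", before.disparate_impact())

# Reweighing assigns instance weights that make the label statistically
# independent of the protected attribute in the training data.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(dataset)

after = BinaryLabelDatasetMetric(
    transformed, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact after mitigation:", after.disparate_impact())

With data skewed as above, the first print typically reports a disparate impact noticeably below 1, while the second reports a value close to 1 once the instance weights are applied; the reweighted dataset can then be used to train the downstream model.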

  3. Training course for key SEG staff

In addition to leading the technical aspects of the intervention with the AIF360 tool, our data science specialist provided a training course to key SEG staff. The course explained the tool, its features and practical examples in depth. Representatives from IBM Mexico's Legal and Regulatory Affairs team also took part, providing an introduction. Beyond the technical training, periodic meetings were held with the SEG team to present results and the reflections arising from them. It is expected that the SEG technical team will take ownership of the AIF360 tool, use it to test new bias hypotheses, and incorporate it as a routine tool for decision-making when working with data and AI. To facilitate this, a recorded tutorial and materials were provided in addition to the face-to-face workshops, so that SEG can train new team members and periodically refresh their knowledge of the tool.

  4. Ethical AI Guide and Checklist for Decision Makers in the Public Sector

In addition to these workshops and interventions, the team worked under the leadership of Women in Digital Transformation (WinDT) to deliver two products of the project to SEG: the Ethical AI Guide and the Checklist. As its name implies, the Ethical AI Guide aims to assist in the development of ethical and responsible artificial intelligence in the education sector. Both products were designed within the framework of the project for local governments exploring the design and implementation of data and AI solutions. The Guide provides an introductory conceptual overview, as well as discussions of the risks and ethical considerations of any AI deployment.

The Checklist consists of a six-step self-assessment exercise that covers the entire lifecycle of the solution, through to its possible end of life, and considers aspects such as the regulatory framework, decision-making, risk assessment and management, public consultations and procurement.

SEG's role was crucial in providing feedback on and validating both instruments, given the context of a subnational government in the Latin American region, where these products will be pioneering tools.

[Image: Implementation - checklist]

Part Three

Outcome and reflection

Reflecting on SATPE’s implementation, the consortium offered actionable policy recommendations for decision-makers looking to mitigate biases and incorporate gender perspectives into AI systems. These recommendations can be adopted by a diverse range of organisations looking to explore AI and data policies in sectors like education, health and financial services.

Phase 1 | Self-Assessment and Reflection: Starting from the design stage of AI-based interventions, teams should ask themselves what could go wrong. Both policymakers and design teams should consult stakeholders involved in the AI system’s development and implementation to consider their concerns, develop a risk analysis methodology, document processes and lessons learned and weigh up the institutional capacities to execute such projects.

Phase 2 | Paving The Way: To standardise best practices when working with data and AI, organisations must establish decision-making criteria that align with ethics and human rights. For example, government agencies can adopt decision-making criteria in line with their country’s existing principles. Organisations can also create a working group in charge of systematising and designing the ethical criteria for interventions related to data and AI.

Phase 3 | Involvement and Transparency: Populations affected by AI-based systems must be consulted and included in the design and implementation of AI projects. By creating consultation mechanisms open to civil society and other specific interest groups – including large companies, small- and medium-sized enterprises and professional organisations – diverse stakeholders can understand and provide feedback on AI systems that will impact them.

Phase 4 | Strengthening Existing Instruments: Organisations should coordinate existing efforts and incorporate an ethical perspective on AI and the protection of personal data as necessary to build a solid foundation for ethical, responsible and trustworthy policies powered by AI.

Phase 5 | Broader Sensitivity: It will only be possible to increase diagnostic precision and intervention effectiveness by measuring and evaluating the results. For example, organisations should consider data fairness metrics, such as those provided by the AI Fairness 360 toolkit and other tools, to identify when data biases are present and correct them before feeding data into AI models. Finally, organisations should consider verification mechanisms for AI-based tools beyond data and algorithms, ensuring that a human remains the ultimate decision-maker.
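To illustrate the "measure before you model" part of this recommendation, below is a minimal fairness-gate sketch using AIF360. The function name and the 0.1 tolerance are hypothetical choices for this sketch, not thresholds drawn from the case; the dataset argument is an AIF360 BinaryLabelDataset like the one constructed in the earlier example.

from aif360.metrics import BinaryLabelDatasetMetric

def passes_fairness_gate(dataset, unprivileged_groups, privileged_groups,
                         tolerance=0.1):
    # Statistical parity difference is 0 when both groups are flagged at the
    # same rate; negative values mean the unprivileged group is flagged less.
    metric = BinaryLabelDatasetMetric(
        dataset,
        unprivileged_groups=unprivileged_groups,
        privileged_groups=privileged_groups,
    )
    return abs(metric.statistical_parity_difference()) <= tolerance

If the gate fails, a pre-processing mitigation such as Reweighing can be applied and the measurement repeated before any model training proceeds, keeping a human reviewer in charge of the final decision.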

To implement many of these recommendations, the AI Checklist and AI Ethics Guide can be valuable tools for identifying risks and areas of opportunity, and can supplement work with stakeholders.

With the Secretariat of Education of Guanajuato (SEG) set to adopt SATPE as an essential component of Guanajuato’s educational policy, local officials and similar government bodies must consider the implications of using AI systems to address social issues. Building on its engagement in Mexico, Itad and its partners communicated these implications to other government representatives in Latin America. The international consortium then presented critical findings to state government leaders from Uttar Pradesh, Andhra Pradesh, Telangana and Tamil Nadu in India, encouraging the replication of AI approaches in other regions while sharing lessons learned from previous iterations.

Subsequently, the Preventing and Mitigating Gender Bias in AI-based Early Alert Systems in Education grant, made possible through USAID’s Equitable AI Challenge, produced crucial resources that will allow government bodies to weigh up the benefits and risks of using AI when improving the delivery of future public services. By using IBM’s AI Fairness 360 toolkit, the AI Ethics Guide and the AI Checklist in a development context, government bodies, organisations and technical teams can better mitigate bias in low- and middle-income country datasets while ensuring their AI projects become more equitable, inclusive and transparent.

As SEG and other government bodies consider expanding this work by drawing on higher-risk data, including personal information and security data, open and trusting relationships with the diverse groups who impact and are impacted by AI systems will remain critical to keeping these influential partners on an equitable path.

[Image: Outcomes - guide]

Author's notes

The products described in this case can be publicly consulted:

  1. Artificial Intelligence Ethics Guide (Link)

The Artificial Intelligence (AI) Ethics Guide presents a broad overview of what AI is, the ethical concerns it raises and how they can be addressed at the national, sub-national and municipal levels. To illustrate these concerns, the guide presents several case studies and provocative questions that allow decision-makers to reflect on the responsible use of AI in government systems.

  2. Checklist for Artificial Intelligence Deployment (Link)

This is a tool for policymakers and technical teams preparing to deploy or already deploying AI systems. The document seeks to inform policymakers on starting points for building ethical AI systems, as well as prompting technical experts to reflect on whether the right ethical guardrails are in place for an AI-based approach.

  3. AI Checklist: Self-Assessment Tool (Link)

This spreadsheet allows users to generate a spider diagram that illustrates the most and least developed areas of ethical AI deployment in different situations and contexts (a minimal illustrative sketch of such a diagram follows this list).

  4. Policy Recommendations: Preventing and mitigating gender bias in AI-based early alert systems for education (Link)

To promote a more equitable and responsible use of AI, this report presents a series of actionable policy recommendations to mitigate biases and to mainstream gender perspectives within AI projects.
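As noted under the self-assessment tool above, a spider diagram of this kind is straightforward to reproduce outside the spreadsheet. Below is a minimal matplotlib sketch; the six axis names and the scores are invented placeholders, as the real spreadsheet defines its own steps and scale.

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical self-assessment axes and scores (scale of 0-5).
steps = ["Regulatory framework", "Decisions", "Risk assessment",
         "Risk management", "Public consultations", "Procurement"]
scores = [3, 4, 2, 3, 1, 4]

angles = np.linspace(0, 2 * np.pi, len(steps), endpoint=False).tolist()
angles += angles[:1]  # repeat the first angle to close the polygon
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(steps)
ax.set_yticks(range(0, 6))
ax.set_title("Ethical AI deployment self-assessment")
plt.show()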

 

PIT Policy Lab was part of an international consortium of organisations, together with Itad, Women in Digital Transformation and Athena Infonomics, supported by a grant from USAID–DAI within the Equitable AI Challenge global initiative.

References

Riabov, A. (2023). “How USAID, Local Government, and the Private Sector Mitigated AI Gender Bias in One of Mexico’s Leading Education Pilots”. MarketLinks: USAID. Retrieved from https://www.marketlinks.org/blogs/how-usaid-local-government-and-private-sector-mitigated-ai-gender-bias-one-mexicos-leading

“Educational Trajectories: Foundations for an Early Warning System against school dropout” (2022). Observatory for Public Sector Innovation, Case Study Library. Retrieved from https://oecd-opsi.org/innovations/educational-trajectories/

PIT Policy Lab. (2023). Equitable AI. Retrieved from https://www.policylab.tech/equitable-ai

Aparicio, E. (2023). “Promoviendo una IA equitativa en el sector educativo” [Promoting equitable AI in the education sector]. PIT Policy Lab Blog Series. Retrieved from https://www.policylab.tech/post/promoviendo-una-ia-equitativa-en-el-sector-educativo-el-caso-de-la-secretar%C3%ADa-de-educaci%C3%B3n-de-guana