Data Invisible Groups and Data Minimization in the Deployment of AI Solutions

The swift development of Artificial Intelligence (AI) has transformed every walk of life. It is a wide-ranging tool that enables us to rethink how we integrate information, analyze data, and use the resulting insights to improve decision-making. While AI’s deployment and uptake undoubtedly offer humanity numerous opportunities to address global challenges, the data used to build AI systems can create risks that must be addressed to avoid undesirable outcomes. A core difficulty is that AI seeks to mimic humans, who are inherently flawed, and it inherits those flaws through the data it learns from. UNESCO commissioned Tambourine Innovation Ventures (TIV) to author a policy brief that gives its member states a better understanding of the need for greater transparency in data usage for AI solutions.


The policy brief builds a case for data sharing and data minimization that is sensitive to those historically excluded, so that governments can fulfill their commitments to Our Common Future. It argues that these individuals and communities will likely remain invisible unless inclusive data systems guided by data minimization and data sharing principles are created. "Data invisibility" is a corollary of the digital divide across many countries of the Global South and is likely to affect traditionally underserved and marginalized communities such as women, girls, indigenous peoples, religious and linguistic minorities, the elderly, refugees, and migrant workers. TIV’s brief identifies discrimination in three main areas: punishment and policing, essential services and support, and movement and border control.


The brief explores the critical challenges that the contemporary deployment of AI for good poses to Our Common Future, exposing the many inequalities and exclusionary practices that affect data invisible groups. It shows that regulators and policymakers are at a critical juncture in regulating AI: without proper regulations, AI may be used to harden lines of difference, because present-day data overlooks the impact of AI on marginalized groups. To curb the negative effects of inadequate data and AI, the brief advocates iterative and adaptable governance and regulatory frameworks for AI and big data that keep pace with AI development. To that end, it presents the data minimization principle, which requires organizations to ensure that the personal data they use is adequate, relevant, and limited to what is essential for the purposes for which it is processed. TIV’s experts supplement this with privacy-preserving methods such as differential privacy, federated learning, and anonymization, which help ensure a minimum set of principles and standards for data governance. The brief further advocates reshaping the data landscape to close the gap in the data discourse by promoting data collaboratives, data stewards, and regulatory data sandboxes. It aims to improve decision-making and policymaking and to build capacity in countries with emerging AI and data capabilities.
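
The brief itself stays at the level of principles, but a minimal sketch may make two of the ideas above more concrete. The Python snippet below is illustrative only and is not drawn from the TIV brief: it first applies data minimization by keeping only the single attribute an analysis needs, then releases an aggregate under differential privacy using the Laplace mechanism. The record fields, function names, and the epsilon value are assumptions made for the example.

    import numpy as np

    # Hypothetical records; the field names and values are illustrative only.
    records = [
        {"name": "A", "age": 34, "region": "north", "uses_service": True},
        {"name": "B", "age": 27, "region": "south", "uses_service": False},
        {"name": "C", "age": 61, "region": "north", "uses_service": True},
    ]

    # Data minimization: retain only the one attribute the analysis needs,
    # dropping direct identifiers and every other field before processing.
    minimized = [int(r["uses_service"]) for r in records]

    def dp_count(values, epsilon=1.0):
        """Release a count under epsilon-differential privacy (Laplace mechanism).

        Adding or removing one record changes a count by at most 1
        (sensitivity = 1), so Laplace noise with scale 1 / epsilon is
        enough for this single release.
        """
        return sum(values) + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    # The noisy aggregate, not the raw records, is what gets shared.
    print(dp_count(minimized, epsilon=0.5))

A smaller epsilon adds more noise and therefore stronger privacy; in a real deployment such releases would be tracked against an overall privacy budget and combined with the governance measures the brief describes.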
