Disclaimer:
Please be aware that the content herein has not been peer reviewed. It consists of personal reflections, insights, and learnings of the contributor(s). It may not be exhaustive, nor does it aim to be authoritative knowledge.
Learnings on your challenge
What are the top key insights you generated about your learning challenge during this Action Learning Plan? (Please list a maximum of 5 key insights)
We started our learning cycle by making sense of the wide variety of opportunities and risks of generative Artificial Intelligence for promoting sustainable development. The results of our initial sensemaking led us to select one area to explore in depth: the provision of and access to public services with generative AI. We share key insights from both stages.
Generative AI for sustainable development:
1. FROM AI GONE WRONG TO AI FOR GOOD: When exploring the uses and applications of generative AI, there is a dichotomy between their positive and negative aspects. The positive aspect, “AI for good”, involves seizing opportunities to benefit society, while the negative aspect, “AI gone wrong”, concerns the potential risks associated with these endeavors. Examples of “AI gone wrong” include: making students susceptible to misinformation or weakening their reading and writing abilities when their adoption of generative AI tools is not properly guided; increasing violence against women in the digital realm through the production of content such as fake videos; and increasing misinformation among citizens. Examples of “AI for good” include: improving the academic performance of students and aiding educators with repetitive tasks; and supporting the processing of large amounts of information in judicial systems and other public services.
2. FROM AI DONE POORLY TO AI DONE WELL: Regarding the design and development of generative AI, there is a dichotomy between opportunities and risks. The positive aspect, “AI done well”, concerns the benefits that the development of AI systems brings along its value chain when it is done responsibly. The negative aspect, “AI done poorly”, encompasses instances where AI development falls short of societal expectations, raises environmental concerns, or exacerbates existing issues or biases. Under “AI done poorly” fall the precarious conditions of the workers who tag text, images and video, especially in the Global South, who often work under unregulated and opaque labor schemes. Conversely, under “AI done well” there are examples of efforts to improve the conditions of the gig workers in charge of tagging.
Generative AI for the provision and access of public services:
3. DONE RIGHT, GENERATIVE AI CAN EXPEDITE PEOPLE’S ACCESS TO THEIR RIGHTS; DONE WRONG, IT CAN STIFLE IT. Generative AI can perform advisory, assistive, cooperative and augmentative functions to aid public officials throughout the cycle of providing public services (planning, production, provision). When applying these solutions in the public sector, caution must be taken not to replicate biases in the resulting products, since they can affect people's access to their rights. At the same time, generative AI's capacity to synthesize large amounts of information and produce new data can streamline processes that bring rights closer to people.
4. GENERATIVE AI CAN HELP LEVEL THE FIELD FOR UNDERSTAFFED AND UNDERFUNDED GOVERNMENT ENTITIES. HOWEVER, A CODE OF PRACTICE SHOULD BE OBSERVED. Public officials are already using generative AI in their daily work to perform their tasks more efficiently, from tasks that require the generation of text (e.g. writing briefs and responses to information requests from citizens) to tasks that require creativity (e.g. brainstorming names for new public services). In government entities that lack resources such as computers and connectivity, officials are using these tools on their own devices. In government entities that are understaffed, generative AI can help them get things done more quickly, potentially allowing them to focus their energy and time on higher-value tasks, such as face-to-face contact with citizens. However, a code of practice needs to be in place to guide the adequate and ethical use of off-the-shelf generative AI solutions in order to mitigate risks in the public sector.
5. EXPERIMENTING WITH REGULATIONS TO PROMOTE THE RESPONSIBLE USE OF AI FOR PUBLIC SERVICES. It is necessary to test different regulatory approaches and develop laws that adapt to the needs and realities of the country. Regulatory sandboxes and policy prototypes can help countries measure the impact of these proposals. A policy prototyping project would allow experimentation with innovative regulatory proposals that have not yet been officially adopted by governments (e.g. white papers, draft laws), thus improving the application of AI for sustainable development. Regulatory sandboxes can be particularly beneficial for countries like Mexico that need to catch up on the regulation of emerging technologies.
Considering the outcomes of this learning challenge, which of the following best describe the handover process? (Please select all that apply)
Our work has not yet scaled
Can you provide more detail on your handover process?
Generative AI for development (and specifically for public services) is a new topic for our CO. Our aim is to contribute to the CO’s portfolio for digitalization and help UNDP Mexico become a trusted partner for digital matters.
Please paste any link(s) to blog(s) or publication(s) that articulate the learnings on your frontier challenge.
Data and Methods
Relating to your types of data, why did you choose these? What gaps in available data were these addressing?
We chose academic literature, literature review and grey document analysis to conduct an overall sensemaking of the uses and applications of generative AI that are emerging across the various fields of sustainable development. Once we narrowed down our focus to the application of generative AI for the provision of and access to public services, we wanted to identify emerging examples around the world and in Mexico; grey document analysis helped us address existing data gaps there. Additionally, data obtained through in-depth interviews helped us identify risks and opportunities from the point of view of experts working with AI in Mexico. Finally, interviews with public officials and citizens allowed us to identify pain points and potential opportunities for harnessing the potential of generative AI in the provision of and access to public services. Interviews helped us tap into the findings of the people closest to the problem.
Why was it necessary to apply the above innovation method on your frontier challenge? How did these help you to unpack the system?
Visual thinking and mapping helped us make sense of the wide variety of opportunities and risks that we identified regarding the use and development of AI for sustainable development. The mapping allowed us to better understand risks and opportunities, and to visualize how that variety relates to UNDP’s Strategic Plan, which helped us narrow down the area in which we wanted to work (public services). Systems thinking was helpful for mapping the various stages and elements of the provision of and access to public services. Collective intelligence and ethnography allowed us to delve deeper into the pain points, opportunities, ideas, and risks of generative AI for public services by incorporating the knowledge, voice and expertise of a variety of people (technologists, regulation experts, public officials, users).
Partners
Please indicate what partners you have actually worked with for this learning challenge.
Please state the name of the partner:
PIT Policy Lab
What sector does your partner belong to?
Civil Society
Please provide a brief description of the partnership.
A knowledge partnership in which we collaborated on the identification and analysis of risks and opportunities for the application of generative AI in public services, as well as on an ecosystem mapping.
Is this a new and unusual partner for UNDP?
Yes
End
Bonus question: How did the interplay of innovation methods, new forms of data and unusual partners enable you to learn & generate insights, that otherwise you would have not been able to achieve?
It helped us cast a wide net to learn about uses of an emerging technology such as generative AI around the world, particularly its application in the public sector. Moreover, the interplay of methods, data and partners supported our process of sensemaking and the identification and ideation of potential applications of generative AI for improving the provision of services in Mexico.
Please upload any further supporting evidence / documents / data you have produced on your frontier challenge that showcase your learnings.