Disclaimer:
Please be aware that the content herein has not been peer reviewed. It consists of personal reflections, insights, and learnings of the contributor(s). It may not be exhaustive, nor does it aim to be authoritative knowledge.
Learnings on your challenge
What are the top key insights you generated about your learning challenge during this Action Learning Plan? (Please list a maximum of 5 key insights)
AI-assisted tool enables scalable yet context-sensitive media literacy interventions.
The AI-assisted tool demonstrated strong potential for scalability because it adapts its outputs to the context of participants, making learning relevant across different groups. However, this scalability is conditional on the availability of basic technical infrastructure, such as internet connectivity and access to digital devices.
AI-supported workshops create safer spaces for emotional expression than analog formats.
Compared to the analog version, the AI-assisted workshop significantly improved participants’ ability to openly express emotions related to news about violence. This effect was particularly notable among men (100% versus 14%), suggesting that AI-mediated interaction can lower social barriers to emotional reflection while revealing important gender-differentiated responses.
Gender and context shape how news about violence is emotionally processed.
While all men in the AI-assisted workshop reported being able to express emotions, women consistently experienced a wider range and intensity of emotional reactions to news content. This highlights the need to account for differentiated impacts of media narratives based on gender, sociocultural context, and lived experience when designing media literacy interventions.
AI-generated content and analysis strengthen the ability to recognize narrative framing and misinformation.
Participants moved from having no prior familiarity with the concept of narrative framing to being able to identify it in messages at higher rates in the AI-assisted workshop (63%) than in the analog version (46%). This suggests that AI can effectively accelerate conceptual understanding through interactive and adaptive learning, though further testing is recommended.
Perceived media literacy does not always align with actual critical capacity, underscoring the role of ethical AI design.
Although participants’ self-confidence in distinguishing reliable from misleading information increased in both formats, workshop exercises revealed that more than half still considered misleading narrative frames to be trustworthy. This gap between perceived and actual skill reinforces the importance of ethically designed AI—guided by clear constraints and safeguards—to surface blind spots without amplifying harm, especially in sensitive contexts such as security and violence.
Considering the outcomes of this learning challenge, which of the following best describe the handover process? (Please select all that apply)
Our work has not yet scaled
Can you provide more detail on your handover process?
The pilot will be handed over to the Governance team within UNDP Mexico and includes transferring the practical assets and the learning generated through the pilot. The assets include the AI tool's Python code and documentation (GitHub), the workshop methodology, an ethical protocol for implementation, communication support materials, workshop evaluation surveys, and a non-technological version of the workshop materials and facilitation guide. Learnings include key data insights and documented lessons learned. This will be complemented by a knowledge exchange session to align on how the approach could be adapted, sustained, or integrated into ongoing governance and information integrity initiatives within the Country Office.
Please paste any link(s) to blog(s) or publication(s) that articulate the learnings on your frontier challenge.
Data and Methods
Relating to your types of data, why did you choose these? What gaps in available data were these addressing?
These data types responded to a key constraint of the challenge: the limited availability of contextual, human-centered data on how young people interpret information related to violence and misinformation, and how AI-mediated interactions influence that process. Together, these data types compensated for the lack of integrated datasets that combine behavioral, cognitive, emotional, and technological dimensions of misinformation.
User feedback from workshops and pilot testing was essential to address the lack of real-time, practice-based evidence on how media literacy tools function in live, participatory settings. Existing data tends to focus on online behavior or content analysis, but rarely captures how young people actively reason, question, and reflect when exposed to different narratives. Direct feedback helped surface usability issues, moments of confusion, and learning triggers that would not be visible through quantitative data alone.
Artificial intelligence data (such as interaction logs, prompt-response patterns, and thematic clustering of outputs) was selected to fill the gap in understanding how participants engage with AI as a sense-making partner. While much discourse exists on the risks and potential of AI, there is limited empirical data on how AI influences critical thinking in controlled, educational environments. These data helped reveal patterns in questioning, framing recognition, and bias awareness that emerge through AI-supported reflection.
Surveys were used to address the absence of baseline and comparative data on emotions, trust, and narrative components that could inform media literacy skills. By capturing responses during the workshop, surveys enabled the team to detect shifts over time and to compare outcomes between the AI-assisted and analog (A/B) versions. This helped bridge the gap between anecdotal insights and measurable change.
Participant experiences, captured through reflections, group discussions, and qualitative narratives, responded to a critical blind spot in existing data: the emotional and social dimensions of misinformation. Perception of violence is not shaped solely by facts, but by fear, identity, peer influence, and lived experience. Documenting these experiences made visible the deeper drivers behind information interpretation and resistance to certain narratives—elements that are often missing from traditional media or security datasets.
Why was it necessary to apply the above innovation method on your frontier challenge? How did these help you to unpack the system?
These methods helped address misinformation as a complex, adaptive challenge shaped by emotions, narratives, technology, and trust. The AI-assisted pilot intentionally used complementary methods toward systems learning—revealing how misinformation operates at the intersection of technology, cognition, emotion, and context, and how AI can be responsibly positioned as a learning catalyst rather than a standalone solution.
Prototyping was necessary to translate abstract concepts such as narrative framing, bias, and information integrity into tangible experiences. By rapidly creating and iterating low-fidelity and AI-assisted versions of the tool, the team could test assumptions in a safe, contained environment and adapt the design based on how participants actually engaged with the content, rather than how it was expected they would engage.
A/B testing enabled the pilot to isolate the added value of generative AI by comparing it with an analog, non-AI version of the same learning experience. This method helped distinguish what outcomes were driven by facilitation and participatory dynamics versus what was uniquely enabled by AI (e.g. personalization, adaptive prompts, narrative variation). In doing so, it clarified where AI meaningfully strengthened critical thinking—and where simpler approaches were equally effective.
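To illustrate how such an A/B comparison can be checked for statistical significance, the sketch below applies a standard two-proportion z-test to the framing-recognition rates reported above (63% AI-assisted vs. 46% analog). The group sizes (n=30 per arm) are hypothetical placeholders, not figures from the pilot; actual sample sizes would be needed for a real analysis.

```python
from math import sqrt
from statistics import NormalDist


def two_proportion_z(successes_a: int, n_a: int,
                     successes_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test.

    Returns (z statistic, p-value) for the difference between
    the success rates of groups A and B, using a pooled
    estimate of the common proportion under the null hypothesis.
    """
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    p_pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value


# Hypothetical arms of 30 participants each:
# 19/30 ≈ 63% (AI-assisted) vs 14/30 ≈ 46% (analog)
z, p = two_proportion_z(19, 30, 14, 30)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With groups this small the difference would not reach conventional significance, which is consistent with the report's caution that further testing is recommended before attributing the gain to the AI component.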
Empathy mapping was critical to understanding how young people emotionally experience news related to violence, including fear, desensitization, skepticism, or mistrust. This method surfaced how emotions, prior beliefs, and social contexts influence interpretation of information—factors often invisible in purely cognitive approaches to media literacy. By making these dynamics explicit, the pilot could design AI prompts and activities that acknowledged lived experience rather than treating misinformation as a purely technical problem.
Data visualization helped participants and facilitators make sense of complex patterns emerging during the workshop, such as shifts in trust, recognition of bias, or differences in interpretation across narratives. Visualizing these patterns supported collective reflection, turning individual reactions into shared insights about how information circulates and is interpreted within the group. At a system level, this made visible the feedback loops between narratives, perception of violence, and social meaning-making.
Partners
Please indicate what partners you have actually worked with for this learning challenge.
Please state the name of the partner:
State of Zacatecas
What sector does your partner belong to?
Government (&related)
Please provide a brief description of the partnership.
The State of Zacatecas, through its Department of State (which coordinates the state's Security Strategy), served as the liaison for coordinating workshop testing via JUCPAZ, a government youth program, and local schools. Looking into scaling opportunities, the Department of State is a possible candidate to take the pilot further and explore its potential for strengthening media literacy across Zacatecas.
Is this a new and unusual partner for UNDP?
No
Please indicate what partners you have actually worked with for this learning challenge.
Please state the name of the partner:
Mottum
What sector does your partner belong to?
Private Sector
Please provide a brief description of the partnership.
Mottum is a private technology consultancy that supported the technological development of the pilot. They assisted in implementing the open-source codebase and building ethical guardrails into the AI prompts.
Is this a new and unusual partner for UNDP?
No
End
Bonus question: How did the interplay of innovation methods, new forms of data and unusual partners enable you to learn & generate insights, that otherwise you would have not been able to achieve?
The combination of innovation methods, novel data sources, and a diverse set of partners enabled learning that would not have been possible through a single institution or traditional research approach. The Department of State of Zacatecas grounded the pilot in real policy and community contexts, Mottum translated emerging insights into viable technical and design adaptations, and the support of Japan's Cabinet Office created the space to test and compare approaches without pressure for immediate scale. The collaboration, both direct and indirect, made it possible to generate actionable insights on how young people engage and learn with an AI-supported media literacy tool, revealing cognitive and emotional dynamics that are challenging to observe using conventional tools or data alone.
Please upload any further supporting evidence / documents / data you have produced on your frontier challenge that showcase your learnings.