The Disinformation Games

Your safe space for games and challenges related to misinformation and disinformation!


Case study: Artificial Intelligence - Opportunity, Danger or Both?

AI: Opportunity, Danger or Both?

Recommended for: students aged 16 or older

Available building blocks: 4

Tags: artificial intelligence, technology, robots, fake news

Tips for educators

Building block 3. How is artificial intelligence used?

Look at and discuss some of the ways that AI is causing harm to us, directly or indirectly. Investigate the underlying causes to understand why.

Suggested resources

1. First killer robot was around back in 1979 [Open from webarchive if link broken/inactive]

2. Machine learning AI [Open from webarchive if link broken/inactive]

3. Fake influencer follower fraud [Open from webarchive if link broken/inactive]

4. Google algorithm for detecting hate speech looks racially biased [Open from webarchive if link broken/inactive]
Argues that AI can be coded in racist and biased ways.

5. 4 misconceptions about ethics and bias in AI [Open from webarchive if link broken/inactive]
Research overview of bias in AI.

6. 4 misconceptions about ethics and bias in AI [Open from webarchive if link broken/inactive]
Argues that the middle class is most at risk from AI-driven job losses.

7. Why do algorithms replicate inequalities in gender and race? [Open from webarchive if link broken/inactive]
Argues that AI is biased against women and minorities.

8. Automating artificial intelligence for medical decision-making? [Open from webarchive if link broken/inactive]

9. Japanese robot will care for the elderly and children [Open from webarchive if link broken/inactive]

10. Monsanto will use AI for crop protection [Open from webarchive if link broken/inactive]


Learning outcomes

The learning outcomes represent the competences which learners are expected to develop as a result of the training intervention:

1. Reflect critically on the material provided and make informed links to the pillars.
2. Analyse judgements about the good and bad influences of AI.

Suggested teaching methods

We plan to use some standard classroom practices:
> Instructor-led sessions to introduce ideas and guide discussions.
> Small-group work to consolidate and support learning.
> Short quizzes for fun and consolidation.
> Plenary sessions to check understanding.


Suggested learning activities

1.5 to 2 hours per lesson over 2 sessions. The structure of the class work related to the case study will be roughly as follows (times in minutes):

> Lesson 1: Class-based discussion on the pillars, especially reference frames (15). Group work on good vs bad framing and whether it matters (30). Class-based Q & A on psychological drivers and their possible importance (30). Group-based review of the game and the impact it had on their thinking, with discussion of how they would design the game themselves (30), followed by feedback to peers (15).

> Lesson 2: Supported group work for learners to devise their own set of questions for learners outside their age range (60). Class-based discussion for learners to share their thoughts and observations (30). Plenary session on the use of "good" and "bad" for framing AI (15).

De Facto pillars

Motivated Cognition: Question and answer session with learners to determine whether they feel AI and fake information are good, bad or both. Establish their understanding of the purpose.

Systemic Causality: Investigate in groups how psychological pre-dispositions enable the creation and transmission of fake material.

Frames and Framing: Use of board game and/or app to facilitate deeper discussion of the framing of fake information and how the mechanisms work and affect us due to framing.

Equivalence and Emphasis Frames: Explore in groups if equivalence exists in order to make the material more acceptable.

Additional tools

De Facto app
Analogue board game and cards


You have selected a topic from the Disinformation Games area. Please be advised that this area hosts, or links to, resources that contain misinformation or disinformation. These materials are included to help develop and sustain skills for navigating and detecting disinformation. To achieve this goal, and by deliberate design, none of the materials are explicitly marked as true or false.