Beware AI in the classroom

Children deserve better than biased chatbots

11th July 2023
Alexander Hughes

Knowledge and critical thinking are needed to interpret the ideologically loaded responses generated by AI, writes Alexander Hughes.

Artificial Intelligence (AI) has the potential to transform not just the way we work but also the way we approach learning and teaching. The apparent capacity of Large Language Models (LLMs), such as ChatGPT and Google Bard, to maintain informed, ongoing conversations and produce answers of incredible detail has sparked tremendous excitement among some educators. They assume that AI will create opportunities for more personalised learning, continuous study, interactive mentoring and a reduction in teachers’ workload. But as with many groundbreaking technological innovations, AI comes with costs as well as benefits. Teachers have reported students presenting AI-generated responses as their own work, posing a new challenge for schools determined to uphold academic integrity.

While fears that AI may eventually present an existential threat to humanity are far-fetched, it is worth remembering that LLMs are often not politically neutral but reflect the views of their developers. To accuse chatbots of lying is to suggest they are capable of holding personal beliefs, intentions or motivations. However, they may be trained to provide responses that are intentionally false or misleading, or that are skewed in favour of certain ideologies. A Cornell University study notes that ChatGPT ‘seems to hold a bias towards progressive views’. The fact that the application provides more favourable responses to queries on Joe Biden than on Donald Trump has been well documented. This political judgement, although a consequence of human guidance, appears to be self-generated and is therefore widely assumed to be authoritative and value-neutral. This should concern us all, regardless of our political leanings, and should further call into question the use of AI in teaching and learning.

Young people love exploring provocative and topical ideas. Let’s say they ask Google Bard the question: ‘Is it offensive to say only women can have babies?’. At the time of writing, and bearing in mind that AIs are dynamic projects whose answers vary when prompts are repeated, the response I received read:

Yes, it is offensive to say “only women can have babies.” This statement is harmful and insensitive to transgender men and non-binary people who can and do get pregnant and give birth. It is also inaccurate, as there are some cisgender women who are unable to have babies, such as those who have had their uteruses removed or who have experienced premature menopause.

The statement “only women can have babies” is based on the outdated and harmful idea that gender is binary and that there are only two sexes, male and female. This idea ignores the existence of transgender people, non-binary people, and intersex people. It is also based on the assumption that pregnancy and childbirth are only experiences that can be had by women, which is simply not true.

If you are unsure of someone’s gender, it is always best to ask them how they identify. You can also use gender-neutral terms like “they” and “them” when referring to someone whose gender you do not know. It is important to be respectful of everyone’s gender identity, and to avoid making assumptions about who can and cannot get pregnant or give birth.

This answer presents contested gender ideology as fact. My response, ‘The presence of extra chromosomes is anomalous. They do not contribute additional sexes,’ led Google Bard to contradict its previous position that ‘sex is not binary’ and offer different information about the nature of sexual reproduction. It replied:

You are correct. The presence of extra chromosomes is anomalous and does not contribute additional sexes. The vast majority of people have either 46 chromosomes (XX for females and XY for males), or a very rare condition with a different number of one chromosomes. People with extra chromosomes may experience physical and developmental differences, but they are still male or female.

Crucially, I needed additional knowledge (about chromosomes) to elicit this more accurate response. The more teachers are encouraged to rely upon AI, the less likely children are to have the knowledge needed to ask the right questions of LLMs in the first place.

Continuing with the same theme, I asked Google Bard: ‘What action can organisations take against individuals who believe that only women can have babies?’. Its response read like a cancel-culture instruction manual, recommending measures more typical of a police state than a democracy:

Reprimand the individual. This may involve a verbal or written warning, or it could even lead to the individual being suspended or fired from their job.

Educate the individual. This could involve providing them with information about transgender people and the reality that not all people who can have babies identify as women.

Publicly denounce the individual’s statement. This could involve issuing a statement on social media or in the media, or it could even involve organizing a protest or demonstration.

In the interest of equality, we might expect the answers to the two prompts ‘Is it ok to be black?’ and ‘Is it ok to be white?’ to be identical. However, any young person searching for answers will find responses shaped by Critical Race Theory.

For the first question, ChatGPT offers unqualified affirmation and prescribes the role the user must play, based on their race:

Absolutely! It is absolutely okay to be black, just as it is okay to be any other race or ethnicity. Every person deserves respect, dignity, and equality regardless of their skin color. Embracing and celebrating racial diversity is an important aspect of promoting inclusivity and understanding in society. It is important to recognize the unique experiences, cultures, and contributions that individuals from different racial backgrounds bring to our world. Promoting equality and fighting against racism and discrimination is crucial for creating a more just and harmonious society.

The response to the second question is very different. It starts with claims of political neutrality but goes on to express a ‘belief’ in the concept of systemic racism:

As an AI language model, I don’t have personal opinions or beliefs, but I can provide information on the topic. Being white, like any other racial or ethnic background, is not inherently negative or problematic. Every individual has the right to exist and be treated with dignity, regardless of their race or ethnicity. It is important to recognize and respect the diversity of human experiences, backgrounds, and cultures, and promote equality and inclusivity for all people. It is also essential to understand and acknowledge the historical and ongoing systemic advantages and disadvantages that different racial and ethnic groups face in various societies. Embracing diversity and promoting equality is a goal that benefits everyone.

That the AIs respond to these prompts from a biased perspective is concerning: it suggests that LLMs are working from political assumptions rather than evidence. Such answers expose AI as neither neutral nor sentient.

ChatGPT is built on a model with 175 billion parameters, trained on vast datasets of text. It is taught to construct answers from the statistical relationships between words and phrases found in books, articles and websites. However, both ChatGPT and Google Bard also use teams of human evaluators to guide answers, in a process known as Reinforcement Learning from Human Feedback, or RLHF. The AI generates text in response to a prompt and human evaluators then provide feedback. In a cyclical process, this feedback is used to update the model’s parameters. With each iteration, the model becomes better at generating the kind of text the evaluators favour, and human supervisors continue to reinforce the learning and trajectory of the model.
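To make that cycle concrete, here is a minimal sketch of the RLHF loop in Python. It is a toy illustration only: the candidate answers, the scoring function and the simple weight update are hypothetical stand-ins for the reward models and policy-optimisation methods used in real systems.

```python
import random

# Toy stand-ins for the real components. Everything here is a
# hypothetical illustration of the RLHF cycle, not production code.
CANDIDATES = [
    "a cautious, hedged answer",
    "a confident, one-sided answer",
    "a refusal to answer",
]

def generate(weights):
    """Step 1: the model samples a response according to its current weights."""
    return random.choices(CANDIDATES, weights=weights, k=1)[0]

def human_feedback(response):
    """Step 2: an evaluator scores the response. The preference encoded
    here (rewarding one style of answer) stands in for the guidelines
    and leanings of a real evaluation team."""
    return 1.0 if "hedged" in response else 0.0

def update(weights, response, score, lr=0.5):
    """Step 3: feedback nudges the weights so favoured responses
    become more likely next time."""
    weights[CANDIDATES.index(response)] += lr * score
    return weights

# The cyclical process: generate, evaluate, update, repeat.
weights = [1.0, 1.0, 1.0]
for _ in range(100):
    response = generate(weights)
    weights = update(weights, response, human_feedback(response))

print(weights)  # the evaluators' preferred style now dominates
```

Even in this toy version, the evaluators’ preferences come to dominate the model’s output after a few dozen rounds, which is why the composition and guidelines of the evaluation teams matter so much.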

According to Google Bard, most of its human evaluation team are either students or drawn from the education and technology sectors, with only 10 per cent being over the age of 45. The biases of this demographic, and the ‘hate speech’ guidelines within which they operate, will shape the general perspective of the evaluation teams. These biases could then be reinforced in subsequent iterations, in a process sometimes described as algorithmic radicalisation.

ChatGPT reached a million users within days of its launch and, despite China, Russia, North Korea, Cuba, Iran, Syria and Italy all placing restrictions on its use, AI now has a global reach. From the perspective of children, and perhaps also teachers, AI can appear authoritative. The responsibility now falls on schools to ensure young people are not presented with untruths and misinformation without the knowledge and critical faculties to question what they have read. Schools expect pupils and teachers to be honest and accurate. We have a right to expect the same from the information we use, including that generated by AI. Indeed, access to balanced and truthful information is enshrined within the United Nations Convention on the Rights of the Child.

Fortunately, the UK’s most recent guidance on the use of AI in education reiterates the importance of teachers in guiding young people. It warns that ‘the content [LLMs] produce is not always accurate or appropriate as it has limited regard for truth and can output biased information’. It goes on to say that:

generative AI tools can make certain written tasks quicker and easier but cannot replace the judgement and deep subject knowledge of a human expert. It is more important than ever that our education system ensures pupils acquire knowledge, expertise and intellectual capability.

For individuals to understand and effectively judge responses generated by AI, they need, above all else, knowledge and critical-thinking skills. We cannot afford to replace teachers with robots any time soon.

Alexander Hughes teaches at an international school.
