The original version, in French, was posted on the CHAIRE UNESCO RELIA website.
Let’s face it: for a few weeks now, we have been talking about nothing else. ChatGPT is in all the media and all the meetings. The amused reflex of “I’ll ask ChatGPT” has become unavoidable, a permanent guest at evenings with friends and at researcher panels.
Professionals quickly became worried. Illustrators, whose work involves creating visual material and who were already very unhappy about Dall-E, Midjourney, etc., expressed even more dismay at the ease with which anyone could obtain, after a simple dialogue (and the installation of some additional software), an image immediately described as superb, stunning, or simply good enough for its intended use. It is worth recalling that when digital cameras appeared, other image-making professions expressed much the same concerns.
But the purpose of this article is to focus on another professional, represented by a much larger number. The teacher.
Very quickly, teachers saw the risks and benefits of this type of technology. But just as quickly, only the worried voices were heard. Let’s try to see where we stand.
The difficulty of writing something definitive in February 2023
Above all, let us be clear: we do not have enough hindsight for an in-depth scientific analysis based on repeatedly observed facts, longitudinal studies, or detailed case studies. We can draw on the impact of other artificial intelligences on education (Google Translate, DeepL, Photomath), but there too the documentation is sparse. We can also work from the dozens of articles published in recent weeks and a dizzying number of meetings (dizzying given the novelty of the question).
This analysis can therefore only be what it is: the position of a researcher who works today on these issues with the members of the Unesco RELIA Chair, in contact with other researchers. It may be reinforced or, on the contrary, challenged in the months or weeks to come.
AI in education is not dated December 2022
A first thing to say is that artificial intelligence did not establish itself as an actor in the world of education on November 30, 2022 with the public introduction of ChatGPT. On the one hand, ChatGPT was just one iteration of a suite of generative tools that we had been testing for several months; on the other hand, tools using artificial intelligence had already been impacting education, in deep media silence, for quite a while. We are partners of the European AI4T project, whose objective is to train teachers in 5 European countries in artificial intelligence and, above all, in its understanding and reasoned use. In the context of AI4T, we analysed the opportunities offered by educational software containing artificial intelligence, but reached the conclusion that the real urgency was perhaps to analyse the effect of non-educational software that can be repurposed for learning. This is where we noted the importance of software for translation (Google Translate, DeepL) or for solving mathematical problems (such as Photomath). Language teachers have already told us of their difficulty in setting homework, of their frustration at grading “copy-paste abilities” (their words), and, fundamentally, of their concern in the face of pupils’ lack of motivation for a subject whose raison d’être seriously needs dusting off. To force the point, at the risk of being provocative: once the calculator was available to everyone, was it still necessary to spend so many years, at a rate of 3 hours a week, learning to calculate?
What are we talking about?
ChatGPT is a tool available for free (for the moment) online, after a fairly simple registration with OpenAI that requires an email address and a telephone number. The tool lets us interact with an artificial intelligence on almost any subject and in a large number of languages. Its engine is called GPT-3 and is the 3rd iteration of a Large Language Model (LLM). Built from texts found on the Internet (estimated at 570 GB, or some 300 billion words), the result is a model called a neural network, which can be seen as a gigantic system of equations defined by 175 billion parameters. Hours of computation on powerful machines allowed these parameters to be adjusted (we say “trained”).
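To make the “gigantic system of equations” image concrete, here is a purely illustrative sketch (a toy network, in no way GPT-3’s actual architecture) showing how a model’s size in parameters is counted: each layer contributes its weights and biases, and “training” means adjusting all of these numbers.

```python
# Toy illustration: a neural network is a function with adjustable numbers
# (parameters). Counting them for a miniature network shows what "size"
# means; GPT-3 does the same thing at a scale of 175 billion.

def linear_layer_params(n_in, n_out):
    """A linear layer y = Wx + b has n_in * n_out weights plus n_out biases."""
    return n_in * n_out + n_out

# A miniature network: 10 inputs -> 32 hidden units -> 2 outputs.
layers = [(10, 32), (32, 2)]
total = sum(linear_layer_params(n_in, n_out) for n_in, n_out in layers)
print(total)  # 418 parameters for this toy; GPT-3 has 175_000_000_000
```

Training adjusts every one of these parameters so that the overall function reproduces patterns seen in the training texts, which is why it requires hours of computation on powerful machines.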
Note also that the name of the company (OpenAI) producing ChatGPT, GPT-3 and soon GPT-4 (expected to be much more powerful) is misleading. While GPT-3 was largely open, this is absolutely not the case with ChatGPT.
For several months, alternative solutions have been under construction by all the major digital players (Meta, Alphabet, etc.), and a truly open “competitor” to GPT-3 has been made available by an international consortium (BigScience) including, in particular, French research centres (CNRS, Inria): BLOOM.
Don’t be in denial
The discussions we have had for more than a year with various actors in education about AI tools impacting the teaching of modern languages have shown that the most frequent stance is denial: “Google Translate does not work.” As evidence, an example dating from 10 years ago, the question of algorithmic biases (try translating “the mechanic” or “the nurse” into a language that marks gender), or some convoluted example. Automatic translation does not work. Period.
Now let’s put ourselves in the place of a 12-year-old child (fifth grade). Let’s call her Aicha. Aicha has some creative homework to do. If she is hardworking, she will try to do the requested work on her own. But she will probably check what the machine translator would give. And she is bound to feel a bit frustrated, because for a tool “that doesn’t work”, the result is far beyond what she can imagine. If she has travelled abroad with her parents, she may also find that AI does better than them, who studied languages for 7 years. She may deduce that she herself will have a hard time doing better than AI in 6 or 7 years’ time. And Aicha is a “good student”: she made the effort of trying.
Today, like many of my colleagues who have interacted with our students (admittedly older than high-school students), I know that many have tried ChatGPT. In some cases out of curiosity, for fun; in others, as part of a project, a job to be done.
It seems to me that the first thing for a teacher to do is to try it, to play with it. Then to talk about it with the students, so that they understand that we (teachers) are in the process of integrating this tool, that we have analysed it. Doing so achieves at least three things:
- Do not let students believe that they have a head start in access to a technology the teacher does not know. Whatever the outcome of the debate on cheating, or the recommendations that will come from the ministry, it is important not to suggest that we do not know, or that we have remained in denial.
- Try to understand the strengths and weaknesses of these tools. Examples of “fooling” ChatGPT abound and are instructive: problems with logic, the tendency to try to explain anything, cultural differences… But this is a young technology, and it improves over time and across iterations.
- We can freely admit to curiosity (a quality) and to imperfect knowledge: it is an opportunity to learn from our students and pupils, to ask them to explain, and to suggest going further together than simple use.
Did you say “cheat”?
One of the most frequently expressed anxieties is that of cheating. EdTech companies already offer us solutions (built, of course, using artificial intelligence) to detect cheating and thus lead us to a semblance of justice. In the earlier context of automatic translation tools, Anglo-Saxon sites were already suggesting that the good solution was to prohibit their use because it amounted to cheating. With ChatGPT (and related technologies), cheating would have become easy. The debate must take place. Here are some questions to get it started.
- This situation has already occurred in education: with the arrival of the calculator, and when Wikipedia began to have texts on almost every subject. In both cases, they were practical tools that could lead to pupil behaviour that was, or was not, qualified as cheating. In the case of Wikipedia in particular, some students learned to cite their sources, while others felt they were being congratulated for a simple copy and paste.
- The most important thing is to know who is being cheated. Is it the system itself? Or the other students, because there would not be equality? Or the student who is not learning while obtaining misleading results? These different cases must be considered carefully.
- We have to realize that we adults/teachers are going to ask ChatGPT to do part of our job: to prepare exams, to develop lesson plans, to find references, to build images to illustrate our slideshows, to write the outlines of our lessons… And we are probably not going to call that “cheating”. We will not voluntarily return part of our salary because our preparation work was simplified. It will then be difficult to explain why, when we use it, it is good, but when a student uses it, it is bad.
- If it is about cheating on an exam, the question really matters, at least for now. If the software writes the answer, we have a problem: in Spain, ChatGPT recently showed its ability to pass (albeit with a fairly average mark) the History test of the Selectividad, the equivalent of the French baccalauréat.
Let’s separate the issues and the debates
In reality, we are witnessing two debates that overlap and contradict each other:
- Training the student for tomorrow’s world necessarily involves training in the tools he or she will encounter. We can think of digital tools, first among them the Internet (when will it be possible, or even compulsory, to use the Internet during exams, as in Denmark?). In 2023, knowing how to use a spelling and grammar checker correctly seems more important than mastering rare tense conjugations or scoring well on an elaborate dictation. The question is therefore whether school can adapt to tools like ChatGPT or those that will follow. Note that proposals are beginning to appear in this direction (Andrew Heft’s guide for teachers, Canopé).
- The issue of summative assessment is a concern, and in that setting the word “cheating” is on everyone’s lips. While we must first think carefully about what we mean by “cheating” (see the previous paragraph), it should be noted that part of the knowledge assessment methods of 2022 are now obsolete. This is particularly the case for all those based on remote tests or on personal work. For some, this question is a priority and all other questions become secondary. This can then lead to banning, talk of plagiarism, and threats of exclusion for offending students.
Taken together, these two positions cancel each other out: “training to use AI” becomes “training future cheaters”, and conversely, prohibiting the use of these technologies means failing to train for the world of tomorrow.
The answer? We have to separate the debates, rethink the question of the skills to be acquired, then the question of the evaluation: many people think that it is time to do it anyway.
Let’s look at the opportunities
Unless we believe that ChatGPT and the other large language models are only ephemeral technologies, we must admit that they are tools that will find a place of choice in our immediate environment, alongside the calculator, the smartphone, the Internet… a place in both our professional and social environments. Based on this gamble (or this evidence), and retaining the argument of separating the questions, the question becomes: how can ChatGPT be used to educate? How can students use it to learn better? From the field to the didactics labs, we will have to make proposals, test them rigorously in class, and document them. But we might already risk a few guesses at situations where these tools could help:
- The need for reformulation. This need is expressed by the students. It can be the reformulation of an exam question, a course element, an entire course. These technologies offer real prospects.
- The development of critical thinking. Where one might worry that the tool serves to generate majority opinions, it can on the contrary give the teacher a means of generating “wrong” yet depersonalised opinions (not claimed by any individual or by an ethnic or religious group) that can be used to analyse, deconstruct, and understand.
- The fear of the blank page. This difficulty that students of all ages have in getting started, when they do not immediately have the solution or the method, could be overcome using these technologies. Of course, it will be up to teachers to explain that the result of a first request is not, despite appearances, a finished work but raw material for producing something meaningful.
- The search for reliable information (and the detection of fakes). Here again, we read at the moment that ChatGPT is above all capable of inventing quotations and mixing up sources. But we can trust human ingenuity to remedy this: tools like you.com and perplexity.ai already offer versions that combine the generative capacity of these models with the power of search engines.
In conclusion, ChatGPT should be seen as a great opportunity for our teaching work and our interactions. Prohibiting it, or erecting moral barriers (talk of cheating) or legal barriers to its use, would only lead to justified incomprehension. It should therefore be added, along with other tools, to the arsenal of what students should know how to use. To achieve this, two different questions need to be solved, requiring very different answers: how to integrate it into our teaching to address important questions, and how to change our evaluation strategies.
The author is solely responsible for the ideas expressed here. But these ideas are the result of many discussions, within the UNESCO RELIA Chair, within the framework of the European AI4T project, with IRCAI or during the closing day of the GTnum #IA_EO. The article also benefited from careful proofreading by friends of the Chair.
Unless otherwise specified, all content on this site https://chaireunescorelia.univ-nantes.fr/ is made available under the terms of the Creative Commons Attribution 4.0 International License.