‘AI in education can be compared with the rise of the fatbike’
What role will AI play in legal education? And how do we prepare for it? We asked Francien Dechesne, associate professor of Ethics and Digital Technologies.
On 24 October, a symposium on artificial intelligence (AI) in legal education will be hosted at the KOG Building. The symposium is designed for all teaching and non-teaching staff who are interested in this subject. During the symposium, we will explore AI in education from various perspectives. Someone who has lots of experience with this is Francien Dechesne. As an Associate Professor of Ethics and Digital Technologies, Dr Dechesne is affiliated with eLaw, the Center for Law and Digital Technologies, where she is head of teaching. She is also the initiator and coordinator of the university-wide AI & Society minor and is a member of the university’s AI in Education working group. Reason enough to hear her views on AI in education and ask her some questions.
AI in education has received a lot of attention lately. Why?
Francien: ‘The biggest misconception is that AI is a complete revolution or a new innovation. AI technology has been around for years – it’s simply access to it that has suddenly increased due to systems such as ChatGPT. The hype surrounding it is causing unnecessary panic, while the technology has just become more broadly available. You can compare this with the rise of the fatbike. Bicycles – even e-bikes – have been around for much longer, but now a version with a powerful electric motor and thick tyres has come out, which makes you go really fast. Faster than the existing infrastructure and traffic standards are designed for. Similarly, you can use ChatGPT to generate texts very quickly. As with the fatbike, this can result in accidents and discussions about how to use AI in a way that is both responsible and safe. Users need an “internal helmet” – i.e. a better understanding of what it does and doesn’t do – as well as, perhaps, new standards for whether and how to travel so fast in a way that’s still responsible.’
Do you think AI can be deployed effectively in legal degree programmes?
Francien: ‘In any case, AI can be a very useful tool for making students reflect on the texts that are generated. Rather than viewing the text as an end product, students can critically analyse AI-generated texts. What is and isn’t correct in the reasoning? This enhances their analytical skills and improves their understanding of sound legal reasoning. As a result, students learn how AI tools can contribute to education, as well as the limitations of these kinds of technologies.’
You’re an Associate Professor of Ethics and Digital Technologies. What are the ethical challenges associated with AI in education?
Francien: ‘One major ethical challenge is the level of reflection that AI requires from us – both on the core of our field of expertise and on the skills we aim to teach. AI tools can seamlessly produce 500-word texts that are grammatically correct and sound plausible. However, that doesn’t mean they offer valuable input and ideas for students in their learning process, or usable text for us as legal experts. And so we need to move away from purely focusing on the form and let students engage with the content and meaning of what they’re writing and analysing.’
How can lecturers do that?
Francien: ‘Human interaction remains essential – and especially when it comes to assessing students’ reasoning and the extent to which they demonstrate their understanding of the material. In my opinion, AI will never be able to replace that. Hence the importance of students being able to explain in their own words how they arrived at their thoughts and ideas, in addition to completing written assignments. In practice, that’s not always easy with the large number of students, of course.’
Can lawyers also be replaced by AI?
Francien: ‘Absolutely not. AI cannot replace the essential skills that are necessary in fields such as the legal profession. Although AI is able to generate plausible texts based on previous descriptions, it doesn’t offer meaning or interpretation in the way that a legal expert should be able to. AI can help with repetitive tasks such as drawing up standard contracts (which you then have to go through in full) and summarising files, but it will never be able to replicate the complexity of legal reasoning and interpretation.’
And lastly, what’s the future of AI in education?
Francien: ‘AI will inevitably remain part of education, but it won’t replace basic human skills. Even for the simple reason that it’s those skills that we want to cultivate through education. The main challenge is effectively integrating AI in education in a way that allows us to deploy the efficiency of technology while continuing to focus on the deeper learning and thought processes that are essential for students’ professional and academic development. Universities across the world are taking action to create a balance – take a look, for example, at the decision tree developed by Oregon State University on the use of AI tools in education and research.’
To find out more, register for the symposium on AI in legal education taking place on 24 October 2024. Some sessions at this event will be held in English.