Project Name
AI and Multimodal Learning
Project Description
This project aims to explore and evaluate the opportunities that GenAI tools bring to education in supporting disciplinary knowledge creation through multimodal learning and the development of digital capabilities by students and staff.
Our interest in multimodality follows Kress’ (2010) social semiotic approach, which explores the ways in which different semiotic modes (text, speech, sound, image, moving image, touch, gesture etc.) are present and combined within one communication to produce a multimodal artefact. For instance, an infographic combines text and image in ways that produce extra meaning. Multimodal learning is concerned with designing learning that utilises multimodal texts across media, forms and formats appropriate to the given context (van Leeuwen, 2017). Other examples of multimodal texts include posters, video, dance and virtual simulation. Multimodal learning also involves the use of ‘semiotic technologies’ such as collaborative tools, visualisation apps and online quizzes to support teaching practice. The ascent of GenAI further expands educators’ repertoire of semiotic technologies.
GenAI is a form of technology that uses deep learning techniques, trained on huge swathes of data, to produce artefacts based on the prompt(s) provided by a human user. It is increasingly capable of producing multimodal content, including (but not limited to) text, speech, audio, image, video and even three-dimensional models (Fui-Hoon Nah et al., 2023). Multimodal GenAI can be, and is being, used to create, manipulate and adapt content, and to combine different semiotic forms to produce multimodal artefacts.
The proposed guide will offer practical learning designs for integrating present and future freely available GenAI tools for multimodal learning, paying attention to equality, diversity and inclusivity, in three strands:
- Teaching, i.e. how GenAI can be used to represent subject knowledge multimodally,
- Learning, i.e. how students could encounter, explore, evaluate and express ideas via multimodal GenAI,
- Assessment, i.e. how students can use GenAI to create, critique or reflect on multimodal artefacts.
The guide will support educational developers (and educators) in understanding the potential and challenges of integrating GenAI into teaching and learning practices.
Project Members
Project Leaders: Varga-Atkins, Tünde (Centre for Innovation in Education, University of Liverpool, UK) and Saunders, Samuel (Generative Artificial Intelligence Network, Centre for Innovation in Education, University of Liverpool, UK)
Project Members:
Dr Na Li (XJTLU, China), Dr Wen Run (XJTLU, China), Sue Beckingham (Sheffield Hallam), Dr Sam Elkington (Teesside), Professor Peter Hartley (Edge Hill), Nayiri Keshishi (Surrey), Dr Nataša Lackovic (Lancaster), Dr Isabelle Winder (Bangor)
Project Outcome
1. Research grant: 2024 XJTLU SURF, award number: 000000000339, project title: A Guide to Utilising Multimodal Learning and Generative Artificial Intelligence for Enhancing Student-Centred Active Learning and Teaching in Higher Education
2. Award: SEDA Research and Evaluation Small Grants 2024, Project title: An Educational Developer’s Guide to Multimodal Learning and Generative AI (artificial intelligence). Principal investigators: Dr Tunde Varga-Atkins & Dr Sam Saunders, University of Liverpool, Sum awarded: £1,000.00
Recruitment
- 2 Research Assistants to assist with the project’s literature review
- Student advisory members
- Focus group participants for the overall project