AI and Multimodal Learning

Project Name

AI and Multimodal Learning

Project Description

This project aims to explore and evaluate the opportunities that GenAI tools offer education by supporting disciplinary knowledge creation in the form of multimodal learning and the development of digital capabilities by students and staff.

Our interest in multimodality follows Kress’ (2010) social semiotic approach, which explores the ways in which different semiotic modes (text, speech, sound, image, moving image, touch, gesture etc.) are present and combined within one communication to produce a multimodal artefact. For instance, an infographic combines text and image in ways that produce extra meaning. Multimodal learning is concerned with designing learning that utilises multimodal texts across media, forms and formats appropriate to the given context (van Leeuwen, 2017). Other examples of multimodal texts include posters, video, dance and virtual simulation. Multimodal learning also involves the use of ‘semiotic technologies’, such as collaborative tools, visualisation apps and online quizzes, to support teaching practice. The ascent of GenAI further expands educators’ repertoire of semiotic technologies.

GenAI, a form of technology that uses deep learning techniques to draw on vast swathes of data and produce artefacts based on the prompt(s) provided by a human user, is increasingly capable of producing multimodal content, including (but not limited to) text, speech, audio, image, video and even three-dimensional models (Fui-Hoon Nah et al., 2023). Multimodal GenAI can be, and is being, used to create, manipulate and adapt content, combining different semiotic forms to produce multimodal artefacts.
In this project, we focus on three strands:

  • Teaching: teachers can potentially use GenAI to represent subject knowledge multimodally.
  • Learning: students may be able to encounter, explore, evaluate and express ideas via multimodal GenAI, where the technology can help manipulate, change, adapt or create artefacts that incorporate multiple semiotic forms.
  • Assessment: students could use GenAI to create multimodal artefacts, or critique/reflect on existing ones for assessment.

Project Members

Project Leaders: Varga-Atkins, Tünde (Centre for Innovation in Education, University of Liverpool, UK) and Saunders, Samuel (Generative Artificial Intelligence Network, Centre for Innovation in Education, University of Liverpool, UK)

Project Members: 

Dr Na Li (XJTLU, China), Dr Wen Run (XJTLU, China), Sue Beckingham (Sheffield Hallam University), Dr Sam Elkington (Teesside University), Professor Peter Hartley (Edge Hill University), Nayiri Keshishi (University of Surrey), Dr Nataša Lackovic (Lancaster University), Dr Isabelle Winder (Bangor University)

Project Outcome

1. Research grant: 2024 XJTLU SURF, award number: 000000000339, project title: A Guide to Utilising Multimodal Learning and Generative Artificial Intelligence for Enhancing Student-Centred Active Learning and Teaching in Higher Education

2. Award: SEDA Research and Evaluation Small Grants 2024, Project title: An Educational Developer’s Guide to Multimodal Learning and Generative AI (artificial intelligence). Principal investigators: Dr Tunde Varga-Atkins & Dr Sam Saunders, University of Liverpool, Sum awarded: £1,000.00  

Recruitment

  • Two research assistants to support the project’s literature review
  • Student advisory members
  • Focus group participants for the overall project
