Fourth Multimodal Learning and Analytics
Grand Challenge

Multimodality is an integral part of teaching and learning. Over the past few decades, researchers have been designing, creating and analyzing novel environments that enable students to experience and demonstrate learning through a variety of modalities. The recent availability of low-cost multimodal sensors, advances in artificial intelligence and improved techniques for large-scale data analysis have enabled researchers and practitioners to push the boundaries of multimodal learning and multimodal learning analytics. To continue these developments, the 2015 Multimodal Learning and Analytics Grand Challenge combines a focus on the development of rich, multimodal learning environments with techniques for capturing and analyzing multimodal learning data. Whereas previous workshops were centrally concerned with predicting a variety of metrics related to expertise, social dominance and answer correctness, this year we are broadening the scope of the Grand Challenge to promote innovations in the development of multimodal learning environments that can potentially be used to collect rich multimodal learning data and expand multimodal learning opportunities.

Long before researchers are able to conduct multimodal learning analytics, they must have access to tools that can capture multimodal data. Furthermore, as the field endeavors to foster multimodal learning, there is a growing need to develop teaching and learning environments that truly enable students to have authentic, naturalistic multimodal educational experiences. While the educational technology space has increasingly embraced individual, video-based instruction, technology-enhanced multimodal teaching and learning experiences remain in short supply. With the availability of various low-cost multimodal sensors, and their respective software development kits, there is an incredible opportunity to design and create collaborative, multimodal learning experiences that engage students in hands-on interactions. Accordingly, this year's Multimodal Learning and Analytics Grand Challenge solicits participation in two design challenges, both situated in the domain of multimodal teaching and learning.

Important Dates

  • April 1, 2015: Official website launch
  • July 1, 2015: Deadline to submit preliminary abstract
  • July 8, 2015: Feedback provided on preliminary abstracts
  • August 15, 2015: Deadline for submitting grand challenge papers
  • August 30, 2015: Notification of acceptance
  • September 15, 2015: Camera-ready papers due
  • November 9, 2015: Grand challenge event @ ICMI in Seattle, WA (See schedule)


Participation is being solicited in two sub-challenges. Authors can contribute to either one (or both):

Multimodal Capture of Learning Environments

The Multimodal Capture of Learning Environments challenge addresses the need for multimodal tools that can effectively gather data from unstructured environments. While multimodal data capture can reasonably be completed in laboratory settings with a small number of students, classroom-wide multimodal data capture and analysis in authentic, everyday learning settings remains quite challenging. However, we believe that the current availability of high-quality multimodal sensor technology can significantly improve the state of the art in this domain. Furthermore, advances in this specific scenario should have applications across a variety of domains and scenarios outside of education. Submissions in this category will be evaluated on the quality, quantity and diversity of the data captured, as well as on their ability to generalize to a variety of classroom learning contexts. Finally, authors will be asked to focus on one or more areas for optimization: data quality, cost, scalability, flexibility and intrusiveness. More specific details concerning submission evaluation can be found in the Multimodal Capture Evaluation Guide.
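
A minimal sketch of the synchronization concern at the heart of this challenge, assuming only the Python standard library: each sensor stream is polled in its own thread, and every sample is stamped against a single shared clock before being written to a common log. The read_video_frame and read_audio_chunk functions are hypothetical placeholders for real sensor SDK calls, not part of any vendor API.

    import csv
    import queue
    import threading
    import time

    # Hypothetical stand-ins for real sensor SDK calls (e.g., a camera or
    # microphone-array driver); a real deployment would call the vendor API.
    def read_video_frame():
        return {"modality": "video"}

    def read_audio_chunk():
        return {"modality": "audio"}

    def capture_stream(reader, hz, out_queue, stop_event):
        """Poll one sensor at a fixed rate, stamping every sample with a
        shared monotonic clock so streams can be aligned during analysis."""
        period = 1.0 / hz
        while not stop_event.is_set():
            sample = reader()
            sample["t"] = time.monotonic()  # one clock for all modalities
            out_queue.put(sample)
            time.sleep(period)

    def main():
        out_queue = queue.Queue()
        stop_event = threading.Event()
        workers = [
            threading.Thread(target=capture_stream,
                             args=(read_video_frame, 30, out_queue, stop_event)),
            threading.Thread(target=capture_stream,
                             args=(read_audio_chunk, 100, out_queue, stop_event)),
        ]
        for w in workers:
            w.start()

        # Drain all streams into one timestamped log for ~2 seconds.
        deadline = time.monotonic() + 2.0
        with open("capture_log.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["t", "modality"])
            while time.monotonic() < deadline:
                try:
                    s = out_queue.get(timeout=0.1)
                    writer.writerow([f"{s['t']:.4f}", s["modality"]])
                except queue.Empty:
                    pass
        stop_event.set()
        for w in workers:
            w.join()

    if __name__ == "__main__":
        main()

The single monotonic clock is the essential design choice: cross-modal alignment at analysis time is only possible if every modality is stamped against the same time source.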

Multimodal Learning Applications: Incorporating Human Movement

The Multimodal Learning Applications challenge seeks submissions of software and/or hardware solutions that enable multimodal teaching and learning for one or more users. We are particularly interested in submissions that incorporate recent low-cost motion sensors (e.g., Microsoft Kinect, Leap Motion, Myo), which can easily be coupled with sensors from other modalities (e.g., Eye Tribe, Q-sensor/Empatica E4). Additionally, we encourage participants to leverage existing software applications that ease the process of development (e.g., the Institute for Creative Technologies' Virtual Human Toolkit and the LITE Lab's Generalized Intelligent Framework for Tutoring). Submissions in this design challenge will be evaluated based on:

  • the naturalistic quality of the interactions;
  • the extent to which the affordances of the platform align with the learning goals;
  • the quality of the learning taking place (learning will be considered in the broad sense to include cognitive, socio-emotional, intuitive, etc.); and
  • how well the platform can record and leverage meaningful multimodal data in real time.

Submissions in this sub-challenge should include quantitative and/or qualitative results from a preliminary user study. A specific set of questions to ask pilot-test participants can be found in the Multimodal Learning Applications Evaluation and Design Guide. Additionally, solutions will be rated on the scalability of the platform (how easily it can be deployed across multiple geographies and for multiple students) and on its ability to offer real-time feedback. Finally, submission authors should be prepared to demonstrate their applications at the conference in order to gather participant feedback on each application. Note: the focus for this category is not on the underlying intelligence used to drive the interaction, but on the ways the application affords multimodal interactions.
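
To make the real-time feedback criterion concrete, here is a minimal sketch in Python using only the standard library; poll_gesture is a hypothetical placeholder for a motion-sensor SDK query (e.g., a Kinect or Leap Motion recognizer), and the gesture names and feedback messages are illustrative assumptions, not part of any vendor API.

    import random
    import time

    # Hypothetical stand-in for a motion-sensor SDK poll; here it simply
    # returns a random gesture name (or None) so the sketch is runnable.
    def poll_gesture():
        return random.choice([None, None, "raise_hand", "swipe_left", "point"])

    # Illustrative mapping from recognized gestures to immediate feedback.
    FEEDBACK = {
        "raise_hand": "Question noted -- pausing the activity.",
        "swipe_left": "Returning to the previous step.",
        "point": "Highlighting the selected object.",
    }

    def feedback_loop(duration_s=3.0, hz=10.0):
        """Poll the sensor at a fixed rate, react immediately with feedback,
        and keep a timestamped event log for later learning analytics."""
        log = []
        deadline = time.monotonic() + duration_s
        while time.monotonic() < deadline:
            gesture = poll_gesture()
            if gesture is not None:
                log.append((time.monotonic(), gesture))  # record for analytics
                print(FEEDBACK[gesture])                 # real-time feedback
            time.sleep(1.0 / hz)
        print(f"Captured {len(log)} gesture events for offline analysis.")

    if __name__ == "__main__":
        feedback_loop()

The same loop structure supports the two evaluation criteria jointly: immediate feedback to the learner and a timestamped record of the multimodal interaction for later analysis.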

Challenge Guidelines

To provide a common ground on which the different submissions can be compared and evaluated, authors are required to follow the provided guidelines when preparing their submissions. While it is not always possible to comply with every guideline, following them as closely as possible will increase the impact of the publication.

Author Guidelines

To contribute to the grand challenges, authors will be required to submit a design paper that describes the proposed solution in detail. In the case of the Multimodal Classroom Capture challenge, the design should cover the sensor types, arrangement, configuration, recording means and the data expected to be available for analysis; it may also present examples of implementing the design in real classrooms. For the Multimodal Learning Application challenge, the design should explain the goal of the tool, how it can be applied to improve learning and, optionally, some evaluation of its learning impact. Papers in both challenges should address the issues raised in their respective guidelines.

The maximum length of the paper is 10 pages. The final version of the paper should follow the ACM conference guidelines and the ICMI 2015 guidelines for full papers. Submissions will be reviewed double-blind, so refrain from including identifying information in the version submitted for review.

Each submission will be evaluated by at least three reviewers drawn from a variety of disciplines. As described above, there are specific guidelines for each of the sub-challenges. In the case of the Multimodal Classroom Capture category, submissions will be evaluated on the quality and quantity of the data captured, and on the ease with which that data can be interpreted and analyzed. For the Multimodal Learning Application challenge, submissions will be evaluated on how naturalistic the learning experience is, the quality of the learning gained from the interaction, and the platform's ability to capture and analyze rich multimodal data in real time.

All accepted papers will be published in the ICMI proceedings. The proceedings of ICMI 2015 will be published by ACM as part of its International Conference Proceedings Series.

Papers should be submitted through the ICMI 2015 online submission system.

Grand Challenge Chairs

  • Katherine Chiluiza, ESPOL, Ecuador
  • Joseph Grafsgaard, North Carolina State University, USA
  • Xavier Ochoa, ESPOL, Ecuador
  • Marcelo Worsley, University of Southern California, USA


Program Committee

  • Michael Johnston, Interactions
  • Alejandro Andrade, Indiana University
  • Kate Thompson, University of Sydney
  • Engin Bumbacher, Stanford University
  • Richard Davis, Stanford University
  • Bertrand Schneider, Stanford University
  • Mirko Raça, École polytechnique fédérale de Lausanne
  • Shuchi Grover, SRI International
  • Saad Khan, Educational Testing Service
  • Lei Chen, Educational Testing Service

This event is organized in cooperation with the Society for Learning Analytics Research (SoLAR).