Publication date: 13 January 2025

Show and Guide: Instructional-Plan Grounded Vision and Language Model
Guiding users through complex procedural plans is an inherently multimodal task, in which visually illustrated plan steps are crucial for delivering effective guidance. However, existing plan-following language models (LMs) are often not capable of multimodal input and output. In this work, we present MM-PlanLLM, the first multimodal LLM designed to assist users in executing instructional tasks by leveraging both textual plans and visual information. Specifically, we introduce cross-modality through two key tasks: Conversational Video Moment Retrieval, where the model retrieves relevant step-video segments based on user queries, and Visually-Informed Step Generation, where the model generates the next step in a plan conditioned on an image of the user’s current progress. MM-PlanLLM is trained using a novel multitask-multistage approach designed to gradually expose the model to the semantic layers of multimodal instructional plans, achieving strong performance on both multimodal and textual dialogue in a plan-grounded setting. Furthermore, we show that the model produces cross-modal temporal and plan-structure representations that are aligned between textual plan steps and instructional video moments.
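To make the two cross-modal tasks concrete, the sketch below outlines hypothetical interfaces for Conversational Video Moment Retrieval and Visually-Informed Step Generation. All names (`PlanGroundedAssistant`, `VideoMoment`, `retrieve_step_moment`, `generate_next_step`) and the placeholder logic are assumptions for illustration only and do not reflect the actual MM-PlanLLM implementation.

```python
# Hypothetical sketch of the two cross-modal task interfaces described in the abstract.
# Names and logic are illustrative placeholders, not the paper's implementation.
from dataclasses import dataclass
from typing import List


@dataclass
class VideoMoment:
    """A video segment aligned with one step of the instructional plan."""
    step_index: int
    start_sec: float
    end_sec: float


class PlanGroundedAssistant:
    """Illustrative wrapper holding a textual plan and its step-aligned video moments."""

    def __init__(self, plan_steps: List[str], moments: List[VideoMoment]):
        self.plan_steps = plan_steps
        self.moments = moments

    def retrieve_step_moment(self, user_query: str) -> VideoMoment:
        """Conversational Video Moment Retrieval: map a user query to the video
        segment of the most relevant plan step. Naive keyword overlap stands in
        for the model's learned cross-modal retrieval."""
        query_tokens = set(user_query.lower().split())
        scores = [len(query_tokens & set(step.lower().split())) for step in self.plan_steps]
        best_step = max(range(len(scores)), key=scores.__getitem__)
        return next(m for m in self.moments if m.step_index == best_step)

    def generate_next_step(self, current_step_index: int) -> str:
        """Visually-Informed Step Generation: in MM-PlanLLM the next step is
        conditioned on an image of the user's progress; here progress is
        approximated by the index of the last completed step."""
        return self.plan_steps[min(current_step_index + 1, len(self.plan_steps) - 1)]


if __name__ == "__main__":
    steps = ["Preheat the oven to 180C", "Mix flour and eggs", "Fold the dough", "Bake for 30 minutes"]
    moments = [VideoMoment(i, i * 30.0, (i + 1) * 30.0) for i in range(len(steps))]
    assistant = PlanGroundedAssistant(steps, moments)
    print(assistant.retrieve_step_moment("show me how to fold the dough"))
    print(assistant.generate_next_step(current_step_index=1))
```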
| URL | https://videoconf-colibri.zoom.us/j/92950889155?pwd=YXN6MFNwaDVxbGh4RHQ5d3N0VWhLUT09 |
| --- | --- |
| Date | 15/01/2025, 2:00 pm |
| Location | DI Seminars Room and Zoom |
| Host Bio | Diogo Silva is a 4th-year Ph.D. student at NOVA School of Science and Technology with a CMU Portugal Affiliated scholarship. His research focuses on language generation in conversational systems, with a particular interest in incorporating vision into plan guidance systems. Diogo was a member of TWIZ, the winning team of the 2023 Alexa Prize TaskBot Challenge. |