Overview

ICMC2018: International Conference on Multimodal Communication

Full Conference Program | Abstracts

Find us on WeChat for the latest conference news!



Hunan Normal University, 1-3 November 2018
Conference Directors: ZENG Yanyu and Mark Turner
Assistant Directors: Prof. CHEN Minzhe and Dr. LIU Bai

See the Call for Papers for full instructions.

Click here for the Chinese announcement of the conference, from the College of Foreign Studies, Hunan Normal University.

The Annual Hunan Normal University International Conference on Languages and Cultures has as its 2018 theme MULTIMODAL COMMUNICATION.

ICMC2018 is hosted by Hunan Normal University and its Center for Cognitive Science. It is organized by the College of Foreign Studies. It builds on the tradition established by ICMC2017.

We encourage presentations on any aspect of multimodal communication, including topics that deal with word and image, language and multimodal constructions, paralinguistics, facial expressions, gestures, cinema, theater, role-playing games, and more. Research domains can be drawn from literature and the arts, personal interaction, social media, mass media, group communication, and beyond. We invite conceptual papers, observational studies, experiments, and computational, technical, and statistical approaches.

Dates:
Thursday, 1 November: Methods Training Classes
Thursday, 1 November, evening: participants are invited to a lecture by Peter Knox, Director of the Baker-Nord Center for the Humanities at Case Western Reserve University. This talk is hosted by the College of Foreign Studies.
Friday and Saturday, 2-3 November: Conference

Methods Training Classes. Thursday, 1 November
4 Methods Training Classes: 9-10:30am, 11am-12:30pm, 1-2:30pm, 3-4:30pm

Plenary speakers

  1. Sandra Blakely, Emory University
  2. Thomas Hoffmann, Professor and Chair of English Language and Linguistics, Katholische Universität Eichstätt-Ingolstadt
  3. Yuzhi Shi, Professor, Hunan Normal University
  4. Francis Steen, Associate Professor of Communication, University of California, Los Angeles
  5. Xu Wen, Professor and Dean, School of Foreign Languages, Southwest University in Chongqing

Plenary workshops

In addition to plenary talks, parallel sessions, and a conference dinner, the conference will feature on 1 November plenary workshops on methods by leading methodological experts, each presenting a specific workflow, showing how specific methods can be applied to transform a research question into finished, publishable research products.

Detailed Descriptions of Plenary Talks and Plenary Workshops on Methods

Plenary Speakers

  1. Francis Steen
    Title: Modulating meaning: How to convey the literally unspeakable
    Abstract: Multimodal constructions employing tone of voice, hand gestures, gaze direction, facial expressions, and pose perform a uniquely efficient job of modulating the literal verbal meaning of utterances. In this talk, I will examine strategies of signaling epistemic stance, positive or negative deviance from expectation, and emotional coloration.
    Bio: Associate Professor, Department of Communication, University of California, Los Angeles.
  2. Sandra Blakely
    Title:
    Abstract:
    Bio: Emory University.
  3. Thomas Hoffmann
    Title: Multimodal Construction Grammar – Cognitive and Psychological Aspects of a Cognitive Semiotic Theory of Verbal and Nonverbal Communication
    Abstract: Over the past 30 years, evidence from cognitive linguistics, psycholinguistics and neurolinguistics, and research into language acquisition, variation, and change has provided ample support for Construction Grammar, the theory that holds that arbitrary and conventionalized pairings of form and meaning are the central units of human language. Recently, several scholars have started to explore the idea of a Multimodal Construction Grammar, i.e. the notion that not only language, but multimodal communication in general, might be based on multimodal constructions. In this talk, I will take a closer look at the evidence for and against multimodal constructions. In particular, I will focus on the cognitive processes that produce multimodal utterances (i.e. the interaction of working memory and long-term memory) as well as the role of inter-individual psychological differences (especially the Big Five personality traits: openness, conscientiousness, extraversion, agreeableness, and neuroticism). All data for this talk will be drawn from the Distributed Little Red Hen Lab (http://redhenlab.org).
    Bio: Professor and Chair of English Language and Linguistics, Katholische Universität Eichstätt-Ingolstadt.
  4. Xu Wen
    Title:
    Abstract:
    Bio: Professor and Dean, School of Foreign Languages, Southwest University in Chongqing.
  5. Yuzhi Shi
    Title:
    Abstract:
    Bio: Professor, Hunan Normal University.

Plenary Workshops on Methods

  1. Mark Turner, 9-10:30am, 1 November.
    Title: Overview of Red Hen Lab tools for the study of multimodal communication
    Abstract: The Distributed Little Red Hen Lab (http://redhenlab.org) has been developing new tools for several years, with support from various agencies, including Google, which has provided four awards for Google Summer of Code, in 2015, 2016, 2017, and 2018. These tools concern search, tagging, data capture and analysis, language, audio, video, gesture, frames, and multimodal constructions. Red Hen now has several hundred thousand hours of recordings, more than 4 billion words, in a variety of languages and from a variety of countries, including China. She ingests and processes about an additional 150 hours per day and is expanding the number of languages held in the archive. The largest component of the Red Hen archive is called “Newsscape,” but Red Hen has several other components with a variety of content and in a variety of media. Red Hen is entirely open-source; her new tools are free to the world; they are built to apply to almost any kind of recording, from digitized text to cinema to news broadcasts to experimental data to surveillance video to painting and sculpture to illuminated manuscripts, and more. This interactive workshop will present in technical detail some topical examples, taking a theoretical research question and showing how Red Hen tools can be applied to achieve final research results.
    Bio: Institute Professor and Professor of Cognitive Science, Case Western Reserve University. Co-director, the Distributed Little Red Hen Lab (http://redhenlab.org); Distinguished Visiting Professor and Director of the Center for Cognitive Science (http://202.197.122.82), Hunan Normal University. http://markturner.org
  2. Dr. Weixin Li, Beihang University. 11am-12:30pm, 1 November.
    Title: Automated Methods and Tools for Facial Expression Recognition in Communication
    Abstract: Facial expression, as a form of non-verbal communication, plays a vital role in conveying social information in human interactions. Over the past decades, much progress has been made on automatic recognition of facial expressions in the computer vision and machine learning research communities. Recent years have also seen many successful attempts based on deep learning techniques. These automated methods and related tools for facial expression recognition are especially powerful when dealing with large-scale data corpora. This workshop will first provide an introduction to these methods and tools, and then present a campaign-communication study that applies fully automated coding to a massive collection of news and Twitter data.
    Bio: Beihang University.
  3. Shuwei Xui, postgraduate student, SJTU, China. 1pm-2:30pm, 1 November.
    Title: Automatic Speech Recognition for Speech to Text, with a focus on Chinese
    Abstract:
    Bio:
  4. Professor Zhiming Yang, 3pm-4:30pm, 1 November.
    Title: Multimodal Communication and Its Performance Assessment
    Abstract: Multimodal Communication (MC) means communicating through more than one “mode”: verbal, visual, gestural, sign language, and so on. Communication can be more efficient and effective if we use more than one mode in our schools and workplaces. It is therefore worthwhile to discuss ways of developing multimodal communication skills for students, teachers, and employees. Developing a Likert-scale Multimodal Communication Appraisal (MCA) could be helpful for developing people’s MC skills. This workshop will introduce the general procedures for developing an MCA: writing a test specification, defining the construct of MC, designing the test blueprint, conducting item development, pilot studies, and field tests, performing psychometric analysis with classical test theory and item response theory, completing the scaling and norming, and providing score reporting. Some initial results from pilot studies on college and high school students will be presented at the meeting.
    Bio: Professor and Director, Center for Assessment Research, Hunan Normal University. Executive Director of Psychometrics, Educational Records Bureau (2013-2016); Psychometrician, Educational Testing Service (2009-2013), Pearson (2008), and Harcourt Assessment (2003-2007).