Overview

International Conference on Multimodal Communication: Developing New Theories and Methods


Full Conference Program | Abstracts

Find us on Twitter for the most recent conference news!

 
Osnabrück University, 9-11 June 2017

Conference organizers: Alexander Bergs and Mark Turner


Registration opens at 09:00 on Friday, 9 June. The conference itself opens at 13:00 and ends at 17:00 on Sunday, 11 June.
For their generous support of the conference, we thank Osnabrück University, Case Western Reserve University, the Anneliese-Maier Research Award program of the Alexander von Humboldt Foundation, and the Deutsche Forschungsgemeinschaft program for international conferences. 

Plenary speakers

  1. Harald Baayen. Alexander von Humboldt Professor, University of Tübingen. "Integrating acoustic and visual information with discriminative learning."
  2. Mehul Bhatt. Professor of Human-Centered Cognitive Assistance, University of Bremen, Germany | Director of The DesignSpace Group. “Minds, Media, Mediated Interaction: On AI and Spatial Cognition for Human Behavioral Research." 
  3. Thomas Hoffmann. Professor of Linguistics, Katholische Universität Eichstätt-Ingolstadt. “Multimodal Communication – Multimodal Constructions?”
  4. Irene Mittelberg. Professor of Linguistics and Cognitive Semiotics at the Human Technology Centre (HumTec) at RWTH Aachen University. Mittelberg directs the Natural Media Lab and the Center for Sign Language and Gesture (SignGes). "Scenes – frames – constructions: Some ways of correlating embodied patterns of experience in language and gesture."
  5. Francis Steen. Professor of Communication Studies, UCLA. Co-director of the Distributed Little Red Hen Lab. "On establishing an integrated research agenda and workflow for multimodal communication."
  6. Eve Sweetser. Professor of Linguistics, UC-Berkeley. Coordinator, UC-Berkeley Gesture and Multimodality Group. Coordinator, UC-Berkeley Matrix Metaphor research group. Co-PI, MetaNet IARPA research project. "Viewpoint, creativity and convention in multimodal constructions."

Plenary workshops

In addition to plenary talks, parallel sessions, a poster session, and a conference dinner, the conference will feature several plenary workshops on methods on Saturday and Sunday mornings, led by leading methodological experts. Each workshop presents a complete workflow, showing how particular methods can be applied to transform a research question into a finished, publishable research product.

  1. Mark Turner. Institute Professor and Professor of Cognitive Science, Case Western Reserve University. Co-director, the Distributed Little Red Hen Lab. markturner.org. “Red Hen tools for the study of multimodal constructions.”
  2. Jakob Suchan. Ph.D. student, Human-Centered Cognitive Assistance Lab (HCC Lab), University of Bremen, Germany. http://hcc.uni-bremen.de | Cognitive Vision: http://www.cognitive-vision.org. "Computational Cognitive Vision for Human-Behaviour Interpretation"
  3. Silva Ladewig, Assistant Professor, currently replacing Cornelia Müller at the European University Viadrina, Frankfurt (Oder), and Jana Bressem, academic assistant to the Chair of German Linguistics, Semiotics, and Multimodal Communication at the Technische Universität Chemnitz. "Methods of Gesture Analysis—analyzing multimodality from a cognitive-linguistic perspective."
  4. Peter Uhrig. Post-Doctoral researcher at the chair of English Linguistics, FAU Erlangen-Nürnberg. "Researching co-speech gesture in NewsScape – an integrated workflow for retrieval, annotation, and analysis."

Detailed Descriptions of Plenary Talks and Plenary Workshops on Methods

Plenary Speakers

1. Francis Steen
Title: On establishing an integrated research agenda and workflow for multimodal communication.
Abstract: Human communication is a core research area, intersecting with every other human endeavor. An improved understanding of the actual and effective complexities of human communication will have consequences for a broad range of fields, from politics and religion to education and business. Historically, the study of human communication has long been recognized as a central discipline, reaching back to antiquity, when the study of rhetoric was the core element in education. Theoretical advances in the understanding of human communication date back millennia, to the early grammarians. The focus was mostly on written language, since the data record lent itself to systematic study.
Human communication, however, has always been multimodal, and modern communication technologies have been developed to allow the full visual and auditory channels of face-to-face communication to be broadcast globally. These broadcasts can in turn be electronically captured and stored, making vast datasets of real-world multimodal communication available for systematic scientific study for the first time. These new datasets present a radical new challenge: to develop a new and integrative model of the full complexity of human communication, building on existing advances in linguistics.
To advance research into human multimodal communication and its role in human endeavors, Mark Turner and I founded The Distributed Little Red Hen Lab. Red Hen is designed to function as a global research commons, a practical and theoretical platform for research into multimodal communication. It provides core datasets, maintains a wide and rapidly growing network of researchers, develops an expanding suite of new computational tools, and facilitates the exchange of skills and the identification of the complementary forms of expertise required to make rapid progress. It aims to create an efficacious multilevel integrated research infrastructure and workflow along the following lines.
  1. Introduction to multimodal communication research
  2. Red Hen Primer: selected Unix shell commands, applied Python, how to deploy NLP engines, and the statistical package R
  3. Multimodal analysis with ELAN -- hands-on, best coding practices, annotations integrated into the Red Hen dataset
  4. Research question development in the student's area of interest -- e.g., crossmodal constructions, complex blends, multimodal disambiguation.
  5. Apply machine learning tools to annotations to create feature-specific classifiers on Red Hen servers, semi-supervised through feedback in ELAN (see the sketch after this list)
  6. Run the classifier for feature extraction and annotation on high-performance computing clusters at CWRU, UCLA, and Erlangen (cf. Audio processing pipeline)
  7. Search for complex correlations between linguistic, auditory, and visual annotations in the Red Hen dataset
  8. Interpretation, qualitative and quantitative/statistical analysis of the search results
  9. Experimental testing of communicative effects
  10. Visualization, write-up, presentations
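A minimal illustrative sketch of steps 3 and 5, assuming gesture annotations have been exported from ELAN (.eaf) and that per-interval feature vectors are available in a CSV file; the tier name, file names, and features are hypothetical, and the pympi-ling and scikit-learn packages stand in for whatever tooling a given Red Hen project would actually use:

```python
# Hypothetical sketch of workflow steps 3 and 5: read manual gesture labels from an
# ELAN file and train a feature-specific classifier. Tier name, file names, and
# features are invented; pympi-ling and scikit-learn stand in for project tooling.
import csv

import numpy as np
from pympi import Elan
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Step 3: manual annotations from an ELAN tier -> (start_ms, end_ms) -> label
eaf = Elan.Eaf("example_clip.eaf")
labels = {(start, end): value
          for start, end, value in eaf.get_annotation_data_for_tier("GesturePhase")}

# Pre-computed per-interval feature vectors (e.g. motion energy, pitch statistics)
X, y = [], []
with open("example_features.csv") as fh:          # columns: start,end,feat1,feat2,...
    for row in csv.DictReader(fh):
        key = (int(row["start"]), int(row["end"]))
        if key in labels:
            X.append([float(v) for k, v in row.items() if k not in ("start", "end")])
            y.append(labels[key])

# Step 5: train and evaluate a feature-specific classifier.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```

In such a workflow, the classifier's output would be written back as new annotation tiers so that corrections made in ELAN can feed the next, semi-supervised training round.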
What kind of knowledge can we expect and aspire to generate? We can generate new knowledge on the range of expression achieved and possible in multimodal communication, expanding the scope of which aspects of human experience can be effectively communicated. We can generate new knowledge on the generative processes of multimodal communication, tracking patterns of repetition and innovation. We can also generate new knowledge on the epidemiology of representations, determining which forms of expression catch on and spread most widely. We can generate new knowledge about persuasiveness and influence, examining how multimodal communication can be used effectively for truth as well as maliciously for propaganda and misinformation.
Bio: Professor, Department of Communication, UCLA. Co-director of the Distributed Little Red Hen Lab, which is dedicated to the study of multimodal communication. Specialist on the development of multimodal constructions in mediated environments and on computational techniques for big data capture, transmission, and analysis. http://commstudies.ucla.edu/content/francis-steen-ph
2. Mehul Bhatt
Title: Minds, Media, Mediated Interaction: On AI and Spatial Cognition for Human Behavioral Research
Abstract: Our research at the interface of Artificial Intelligence, Spatial Cognition, and Human-Computer Interaction focusses on the design and implementation of computational cognitive systems, and mediated interaction technologies. Here, we are driven by application areas where human-centred perceptual sensemaking and interaction with cognitively founded conceptualisations of space, events, actions, and change are crucial. A recent emphasis has been on the processing and semantic interpretation of multi-modal human behaviour data, with a principal focus on highly dynamic visuo-spatial imagery.
In this talk, I will:
  • make a case for the foundational significance of artificial intelligence and visuo-spatial cognition and computation research for the development of integrated analytical–empirical methods suited for the (multi-modal) study of human behaviour in diverse contexts of socio-cultural, and socio-technological significance;
  • present foundational methods and assistive technologies for systematic formalisation and empirical analysis aimed at, for instance, the generation of evidence, and establishing & characterising behavioural correlates for the synthesis of embodied cognitive experiences in select contexts.
The presentation particularly emphasises integrating analytics and empiricism for the multimodal study of human behaviour in (select) contexts: communications and media design, architecture design, and interaction design. With the support of an additional plenary tutorial, I will highlight core results and open opportunities for the semantic interpretation of human behaviour against the backdrop of “indoor wayfinding studies” and “cognitive film studies” encompassing:
  • computational models of narrative:  computational cognitive vision for moving image analysis from the viewpoint of machine coding of narrative scene structure of the analysed content
  • image schemas:  embodied grounding and simulation for generalised / domain-independent image-schematic analysis of narrative media
  • behaviour and learning:  computational learning of human behavioural patterns vis-a-vis visuo-auditory computational narrative forms; emphasis is on acquiring qualitative or high-level knowledge from large-scale experiments / datasets
  • semantic question-answering:  system-driven reasoning and query answering about digital narrative forms, and their embodied reception, e.g., for high-level attention and visual fixation analysis vis-a-vis embodied scene semantics and learnt behavioural patterns
I will showcase the manner in which semantic interpretation of human behaviour, founded on AI-based methods such as in (1–4), serves as basis to externalise explicit and inferred knowledge about embodied cognitive experiences, e.g., using modalities such as diagrammatic representations, natural language, complex (dynamic) data visualisations.
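As a toy illustration of the kind of analysis named in point (4) above, the sketch below maps eye-tracking fixations onto semantically labelled scene regions and totals dwell time per region. The regions, coordinates, and fixations are invented, and the snippet is far simpler than the declarative reasoning methods the talk actually presents.

```python
# Toy sketch: relate visual fixations to semantically labelled scene regions.
# All regions and fixation data are invented for illustration.
from collections import defaultdict

# Scene regions as axis-aligned boxes: label -> (x_min, y_min, x_max, y_max)
regions = {
    "protagonist_face": (400, 80, 560, 240),
    "door": (700, 60, 860, 420),
    "subtitle_band": (0, 600, 1280, 680),
}

# Fixations: (x, y, duration_ms)
fixations = [(470, 150, 310), (720, 300, 180), (640, 640, 90), (500, 200, 260)]

def region_of(x, y):
    for label, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return label
    return "background"

dwell = defaultdict(int)
for x, y, dur in fixations:
    dwell[region_of(x, y)] += dur

print(dict(dwell))   # e.g. {'protagonist_face': 570, 'door': 180, 'subtitle_band': 90}
```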

Bio: Mehul Bhatt is Professor within the Faculty of Mathematics and Informatics at the University of Bremen, Germany, and Stiftungs Professor at the German Research Center for Artificial Intelligence (DFKI Bremen). He leads the Human-Centred Cognitive Assistance Lab at the University of Bremen, Germany (HCC Lab, http://hcc.uni-bremen.de/), and is Director and co-founder of the research and consulting group DesignSpace (www.design-space.org). Mehul's research encompasses the areas of artificial intelligence (AI), cognitive science, and human-computer interaction. Of particular focus are basic topics in spatial cognition and computation, visual perception, knowledge representation and reasoning, multimodality and interaction studies, design cognition and computation, and communications & media studies (with a focus on visuo-auditory narrativity). Mehul's ongoing research initiatives reach out with methods and technologies from "cognition and artificial intelligence" for integrated analytical & empirical behavioural research in psychology, social sciences, and arts & humanities. Mehul obtained a bachelor's in economics (India), a master's in information technology (Australia), and a PhD in computer science (Australia). He has been a recipient of an Alexander von Humboldt Fellowship, a German Academic Exchange Service award (DAAD), and an Australian Post-graduate Award (APA). Recently steered initiatives include: 
CoDesign 2017: The Bremen Summer of Cognition and Design; and HCC 2016: The International School on Human-Centred Computing. http://www.mehulbhatt.org

3. Irene Mittelberg
Title: "Scenes – frames – constructions: Some ways of correlating embodied patterns of experience in language and gesture."
Abstract: Starting from the assumption that basic scenes of experience tend to underpin entrenched patterns in both language and gesture, this talk sketches the foundations of a frame-based account of discourse-integrated, communicative action. It is argued that gestures recruiting frame structures tend to metonymically profile deeply embodied, habituated aspects of scenes, e.g., the motivating context of (certain) semantic frames and grammatical constructions (e.g., Fillmore 1982; Goldberg 1998; Dancygier & Sweetser 2014). To begin with, two kinds of embodied frame structures situated at different levels of abstraction and schematicity are introduced and illustrated with multimodal discourse data (Mittelberg in press; Mittelberg & Joue 2017): i.) basic physical action frames, understood as directly experientially grounded and involving physical action and/or interaction with the material and social world; and ii.) complex, highly abstract frame structures that are more detached from the respective motivating context of experience. Shifting the focus to syntactic frames, I then turn to recent research from the Natural Media Lab aiming to contribute to the ongoing discussion on the topic of multimodal constructions (e.g., Steen & Turner 2013; Zima & Bergs 2017). Crucial questions pertain to whether, how, and under what conditions, spontaneous co-speech gestures may be said to partake in constructions or may themselves be considered constructions. First, I report on a case study showing how manual actions and interactions, such as holding or giving, may not only be said to function as blueprints for prototypical transitive or ditransitive constructions in language (Goldberg 1995), but also to underpin multimodal instantiations of impersonal existential constructions in German discourse: Spoken uses of the es gibt ‘it gives’ (there is) construction have been found to co-occur with metonymically reduced gestural enactments of holding something, thus evoking, e.g., a scene of existence, or presence, rather than a scene of object transfer (Mittelberg 2017). Finally, two pilot studies employing motion-capture data as a means to identify and analyze multimodal instantiations of frames/constructions are presented, highlighting some of the merits and challenges that come with using this technology to a) visualize movement traces; b) run similarity searches based on specific movement types; and c) derive crossmodal patterns of specific linguistic target structures and the correlating gestural practices. The overall goal is to instigate a discussion of how to combine theory-inspired, qualitative and quantitative methods to establish and better understand (and eventually predict) discourse-shaped tendencies in the crossmodal clustering of particular frames, constructions, and gestural patterns. An ensuing, more general question, being explored from different angles at this conference, is how we can derive and account for emergent regularities (Hopper 1998) or “the conventionalization of commonly used discourse patterns” (Bybee 2013: 51) within and across the manifold manifestations and genres of multimodal communication.
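A minimal sketch of one way the similarity searches mentioned above could be run over motion-capture data: dynamic time warping over 3D wrist-marker trajectories. The trajectories below are invented toy data, and this is not the Natural Media Lab's actual pipeline.

```python
# Minimal sketch: dynamic-time-warping similarity between motion-capture wrist
# trajectories, as one possible basis for similarity searches over movement types.
# Trajectories are invented toy data (sequences of 3D marker positions).
import numpy as np

def dtw_distance(a, b):
    """Plain O(len(a)*len(b)) dynamic time warping over point sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Query: a short "holding" gesture; candidates: other recorded traces.
query = np.array([[0.0, 1.0, 0.5], [0.0, 1.1, 0.5], [0.0, 1.1, 0.5]])
candidates = {
    "holding_variant": np.array([[0.0, 1.0, 0.5], [0.0, 1.05, 0.5], [0.0, 1.1, 0.5], [0.0, 1.1, 0.5]]),
    "sweeping_motion": np.array([[0.0, 1.0, 0.5], [0.3, 1.0, 0.5], [0.6, 1.0, 0.5]]),
}

ranked = sorted(candidates, key=lambda k: dtw_distance(query, candidates[k]))
print(ranked)   # the holding variant should rank closer to the query
```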
References:
Bybee, J. L. 2013. Usage-based theory and exemplar representations of constructions. In T. Hoffmann & G. Trousdale (eds.), The Oxford handbook of Construction Grammar. Oxford UP, 49-69.
Dancygier, B. & Sweetser, E. 2014. Figurative language. Cambridge UP.
Fillmore, C. J. 1982. Frame semantics. In Linguistic Society of Korea (ed.), Linguistics in the morning calm. Seoul: Hanshin, 111-137.
Goldberg, A. E. 1995. Constructions: A Construction Grammar approach to argument structure. U of Chicago Press.
Goldberg, A. E. 1998. Patterns of experience in patterns of language. In M. Tomasello (ed.), The new psychology of language. Mahwah, N.J.: L. Erlbaum Assoc., 203-219.
Hopper, P. 1998. Emergent grammar. In M. Tomasello (ed.), The new psychology of language, 155-175.
Mittelberg, I. in press. Embodied frames and scenes: Metonymy and pragmatic inferencing in gesture. Gesture 16/2 (Special issue on 'gesture pragmatics', eds. E. Wehling & E. Sweetser).
Mittelberg, I. 2017. Multimodal existential constructions in German. Linguistics Vanguard, 3(s1) (Special issue ‘Towards a multimodal construction grammar’, eds. E. Zima & A. Bergs).
Mittelberg, I. & Joue, G. 2017. Source actions ground metaphor via metonymy: Toward a frame-based account of gestural action in multimodal discourse. In B. Hampe (ed.), Metaphor: Embodied cognition and discourse. Cambridge UP, 119-137.
Steen, F. & Turner, M. 2013. Multimodal construction grammar. In M. Borkent, B. Dancygier & J. Hinnell (eds.), Language and the creative mind. Stanford: CSLI Publications, 255-274.
Zima, E. & Bergs, A. (guest eds.) 2017. Special issue, Toward a multimodal construction grammar, Linguistics Vanguard, 3(s1).
Bio: Professor of Linguistics and Cognitive Semiotics at the Institute of English, American and Romance Studies at RWTH Aachen University. Mittelberg directs the Natural Media Lab (HumTec) and the Center for Sign Language and Gesture (SignGes) http://www.humtec.rwth-aachen.de/mittelberg  

4. Harald Baayen
Title: "Integrating acoustic and visual information with discriminative learning." (Harald Baayen, Fabian Tomaschek and Denis Arnold)
Abstract: We present a computational model based on naive discriminative learning. Trained on 20 hours of conversational speech, it recognizes word meanings with human-like accuracy, without making use of phones or word form representations. Our model also successfully generates predictions about the speed and accuracy of human auditory comprehension. At the heart of this model is a 'wide' yet sparse two-layer artificial neural network with some hundred thousand input units representing summaries of changes in acoustic frequency bands, and pointers to semantic vectors as output units. Auditory comprehension, however, involves not only acoustic input, but also visual input. In this presentation, we concentrate on visual information about lip movements, as measured with electromagnetic articulography, and we show that when this visual information is merged with the acoustic information, our network correctly predicts the McGurk illusion (e.g., "da" is heard when presented with the acoustics of "ba" and the lip movements of "ga"). This result suggests that discriminative learning may provide a more general framework for integrating multimodal information in computational models of auditory comprehension.
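Schematically, the 'wide' two-layer network described above amounts to a single weight matrix from a very large set of binary acoustic cues to semantic output vectors, trained with an error-driven (Widrow-Hoff-style) update. The sketch below illustrates that idea with invented dimensions and toy data; it is not the authors' model or code.

```python
# Schematic sketch: a "wide" two-layer discriminative network mapping binary acoustic
# cues to semantic vectors, trained with a Widrow-Hoff-style error-driven update.
# Dimensions, cues, and semantic vectors are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_cues, n_dims = 5000, 50                 # real models use on the order of 10^5 cues
W = np.zeros((n_cues, n_dims))            # single weight matrix, no hidden layer
rate = 0.01

def train_step(active_cues, target):
    """Update only the rows of the currently active cues toward the target vector."""
    error = target - W[active_cues].sum(axis=0)
    W[active_cues] += rate * error

# Toy lexicon: each word type has a fixed semantic vector and a fixed core cue set.
semantics = {w: rng.normal(size=n_dims) for w in ("da", "ba", "ga")}
core_cues = {w: rng.integers(0, n_cues, size=30) for w in ("da", "ba", "ga")}

for _ in range(2000):                      # token-by-token incremental learning
    word = str(rng.choice(list(semantics)))
    noise = rng.integers(0, n_cues, size=5)        # token-to-token variability
    train_step(np.concatenate([core_cues[word], noise]), semantics[word])

# "Comprehension": the cues of "da" should now point to the semantics of "da".
pred = W[core_cues["da"]].sum(axis=0)
print(max(semantics, key=lambda w: pred @ semantics[w]))   # expected: da
```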
Bio:  In 2011, Baayen received an Alexander von Humboldt research award from Germany, which brought him to the University of Tübingen, where he is now heading a large research group investigating the role of learning in lexical representation and processing. Harald Baayen has published widely in international journals, including Psychological Review, Language, Journal of Memory and Language, Cognition, PLoS ONE, and the Journal of the Acoustical Society of America. He published a monograph on word frequency distributions with Kluwer, and an introductory textbook on statistical analysis (with R) for the language sciences with Cambridge University Press. (http://www.sfs.uni-tuebingen.de/~hbaayen/)
5. Thomas Hoffmann
Title: “Multimodal Communication – Multimodal Constructions?”
Abstract: Language is a symbolic system whose basic units are arbitrary and conventionalized pairings of form and meaning. In fact, in light of substantive empirical evidence, Construction Grammar advocates the view that not only words but all levels of grammatical description – from morphemes, words, and idioms to abstract phrasal patterns as well as larger discourse patterns – comprise form-meaning pairings, which are collectively referred to as constructions. Besides this, usage-based Construction Grammar approaches claim that the storage, i.e. mental entrenchment, of constructions depends on repeated exposure to specific utterances, as well as the interaction of domain-general cognitive processes such as categorization, chunking and cross-modal association. Finally, due to the fact that utterances are complex usage-events, the information that is considered to be stored in a construction comprises not only purely linguistic (i.e. phonetic, syntactic and semantic) properties but also inferences based on the social, physical and linguistic context of an utterance (Bybee 2010). Since authentic spoken utterances are very often accompanied by gesture, it seems logical to assume that cross-modal association and chunking should also result in multimodal constructions: constructions that also contain information on gesture, facial expressions, body posture, etc. The ontological status of such multimodal constructions and their implications for cognitive theories of language have indeed recently become the focus of constructionist research (cf., e.g., Steen and Turner 2013; Zima 2014; Cienki 2015; Schoonjans, Brône and Feyaerts 2015; Pagán Cánovas and Antovic 2016). In this talk, I will discuss the status of multimodal usage-events (constructs) for the potential entrenchment of multimodal constructions and their implications for human cognition in general. After a short overview of the various types of unimodal gesture constructions (including, inter alia, emblems, iconics, deictics, and metaphorics), I will take a closer look at the prototypical differences between language and gesture as related, yet different types of semiosis (McNeill 2005, 2015). As I will show, there is some evidence for the existence of multimodal constructions (which, however, I argue mainly arise in close-knit sociolinguistic groups as complex linguistic acts of identity). Yet, due to the different modes of semiosis of language and gesture, the majority of multimodal constructs must be considered parallel realisations of independent unimodal language and gesture constructions. Far from being theoretically uninteresting, however, these latter types of complex multimodal constructs raise important issues concerning the duality of constructs in general (as fluid, emergent patterns that are partly licensed by stored, prefab templates). In the final part of my talk, I will explore this duality of constructs and outline a cognitive network account that licenses such multimodal usage-events.
Bio: Thomas Hoffmann is Professor and Chair of English Language and Linguistics at the Catholic University of Eichstätt-Ingolstadt. His main research interests are usage-based Construction Grammar, synchronic syntactic variation and World Englishes. He has published widely in international journals such as Cognitive Linguistics, Journal of English Linguistics, English World-Wide and Corpus Linguistics and Linguistic Theory.
His 2011 monograph Preposition Placement in English was published by Cambridge University Press and he is currently writing a textbook on Construction Grammar: The Structure of English for the Cambridge Textbooks in Linguistics series. He is also Area Editor for Syntax of the Linguistics Vanguard and Editor-in-Chief of the Open Access journal Constructions. https://ku-eichstaett.academia.edu/ThomasHoffmann.
6. Eve Sweetser
Title: "Viewpoint, creativity and convention in multimodal constructions."
Abstract: There is a large body of research on creative construction and manipulation of viewpoint in written narrative, but a relative lack of work on parallel phenomena in spoken multimodal narrative. And yet spoken narrative too, analyzing language and gesture together, manifests very complex construction of embedded and multiple viewpoints. Normally competent readers have no trouble deciphering "mixed" viewpoint cues in Free Indirect Style, e.g. the PAST + Now construction in a sentence such as "They now saw the car at the door," where the now is the characters' now and the past tense is the narrator's past. But members of the American speech community also have no trouble deciphering gestures which accompany a spoken narrative, even when some aspects of the speaker's gesture enact a character and others the narrator, or two different characters are enacted. There is clearly convention involved at the gestural level of viewpoint-enaction, as well as at the linguistic level. There is also creativity: a bravura storyteller may combine aspects of gestural performance which draw on convention but are not limited to it. As we move towards a more general, multimodal understanding of viewpoint in communication, we need to delineate the factors involved in multimodal viewpoint, and assess the extent to which conventions play a role. Recent broader work on narrative, as well as on convention and innovation in gesture, can be brought to bear on these questions.
Bio:
Professor of Linguistics, UC-Berkeley. Coordinator, UC-Berkeley Gesture and Multimodality Group. Coordinator, UC-Berkeley Matrix Metaphor research group. Co-PI, MetaNet IARPA research project. Co-editor, Viewpoint in language: a multimodal perspective (Cambridge UP, 2012). Co-author, “Maintaining multiple viewpoints with gaze” in Viewpoint and the fabric of meaning: viewpoint tools across languages and modalities, eds. Barbara Dancygier, Wei-lun Lu, and Arie Verhagen (Mouton de Gruyter, in press), and “Space-time mappings beyond language” in the Cambridge Handbook of Cognitive Linguistics, ed. Barbara Dancygier (Cambridge UP, in press). Sweetser has been a Fellow of the Center for Advanced Study in the Behavioral Sciences and a Senior Fellow at the Cinepoetics Institute, Freie Universität Berlin. http://linguistics.berkeley.edu/person/30

Plenary Workshops on Methods

State-of-the-art methods for studying multimodal communication, each workshop showing one or more complete workflow arcs, from an initial question to a final research product.

1. Mark Turner

Title: “Red Hen tools for the study of multimodal constructions.”
Abstract: The Distributed Little Red Hen Lab (http://redhenlab.org) has been developing new tools for several years, with support from various agencies, including Google, which has provided two Google Summer of Code awards, in 2015 and 2016. These tools concern search, tagging, data capture and analysis, language, audio, video, gesture, frames, and multimodal constructions. Red Hen now has several hundred thousand hours of recordings, more than 3 billion words, in a variety of languages and from a variety of countries. She ingests and processes about an additional 150 hours per day, and is expanding the number of languages held in the archive. The largest component of the Red Hen archive is called “NewsScape,” but Red Hen has several other components with a variety of content and in a variety of media. Red Hen is entirely open-source; her new tools are free to the world; they are built to apply to almost any kind of recording, from digitized text to cinema to news broadcasts to experimental data to surveillance video and more. This interactive workshop will present in technical detail some topical examples, taking a theoretical research question and showing how Red Hen tools can be applied to achieve final research results.
Bio: Institute Professor and Professor of Cognitive Science, Case Western Reserve University. Co-director, the Distributed Little Red Hen Lab (http://redhenlab.org). http://markturner.org
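To give a flavour of the kind of retrieval such tools make possible, the fragment below scans a hypothetical time-stamped transcript for a lexical anchor of a construction and collects time-codes for later inspection of the corresponding video. The file format and search pattern are invented and do not reflect Red Hen's actual storage formats or search interface.

```python
# Hypothetical sketch: grep-like retrieval of a construction's lexical anchor from a
# time-stamped transcript, collecting time-codes for later multimodal inspection.
# The "start|end|text" line format and the search pattern are invented.
import re

PATTERN = re.compile(r"\bwhat\b.{0,40}\bis doing\b", re.IGNORECASE)

hits = []
with open("example_transcript.txt") as fh:       # e.g. "12.40|15.80|... what he is doing ..."
    for line in fh:
        start, end, text = line.rstrip("\n").split("|", 2)
        if PATTERN.search(text):
            hits.append((float(start), float(end), text))

for start, end, text in hits:
    print(f"{start:8.2f}-{end:8.2f}  {text}")
```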

2. Jakob Suchan
Title: "Computational Cognitive Vision for Human-Behaviour Interpretation"
Abstract: This plenary tutorial focuses on application areas where the processing and semantic interpretation of (potentially large volumes of) highly dynamic visuo-spatial imagery is central: dynamic imagery & narrativity from the viewpoint of visual perception and embodiment research; embodied cognitive vision for robotics; commonsense scene understanding, etc. Against the backdrop of areas as diverse as (evidence-based) architecture design, cognitive film studies, cognitive robotics, and eye-tracking, the tutorial will pursue a twofold objective encompassing applications and basic methods, with a particular emphasis on the visual interpretation aspect of multimodality studies. The tutorial will address AI researchers in knowledge representation and computer vision, developers of computational cognitive systems where processing of dynamic visuo-spatial imagery is involved, and educators wanting to learn about general tools for high-level, logic-based reasoning about images, video, and point-clouds, and about using such tools in their teaching activities.
Bio: Jakob Suchan is a doctoral researcher within the Human-Centred Cognitive Assistance Lab (HCC) at the Faculty of Mathematics and Informatics, University of Bremen, Germany. His research is in the area of cognitive vision (www.cognitive-vision.org), particularly focussing on the integration of vision and AI (specifically knowledge representation) from the viewpoint of computational cognitive systems where integrated (embodied) perception and interaction are involved. Jakob is also a member of the DesignSpace Group (www.design-space.org).
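As a toy stand-in for high-level reasoning about video, the fragment below turns two invented object tracks (per-frame bounding boxes) into a qualitative "approaching" judgement; the tools discussed in the tutorial work declaratively and at a far higher level than this sketch.

```python
# Toy sketch: derive a qualitative "approaching" relation from two object tracks.
# Tracks are invented per-frame bounding boxes (x_min, y_min, x_max, y_max).
def centre(box):
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def distance(a, b):
    (ax, ay), (bx, by) = centre(a), centre(b)
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

track_person = [(100, 200, 160, 400), (130, 200, 190, 400), (170, 200, 230, 400)]
track_door   = [(500, 150, 560, 420)] * 3

dists = [distance(p, d) for p, d in zip(track_person, track_door)]
relation = "approaching" if dists[-1] < dists[0] else "not_approaching"
print(relation, [round(d) for d in dists])
```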
3. Peter Uhrig
Title: "Researching co-speech gesture in NewsScape – an integrated workflow for retrieval, annotation, and analysis."
Abstract: Finding co-speech gesture in a corpus is a time-consuming process that still involves a lot of manual work. Finding such gestures aligned with abstract grammatical constructions is even more difficult in standard corpora. This plenary workshop is designed to demonstrate a full research workflow within the NewsScape/Red Hen framework, from the retrieval of the grammatical structures to thinning, classification, and the analysis of the manually annotated data, using state-of-the-art tools developed within the Red Hen community.
Bio: Peter Uhrig is a post-doctoral researcher at the Chair of English Linguistics, FAU Erlangen-Nürnberg. He studied English and French at FAU Erlangen-Nürnberg and Lancaster University. He defended his PhD on clausal subjects in English in 2013 and started to collaborate with the Distributed Little Red Hen Lab in 2014. His main research interests are syntax, corpus linguistics, collocation, and lexicography. His recent interest in co-speech gesture stems from his work in Construction Grammar, since he hopes to be able to use co-speech gesture to find out more about the storage of grammatical constructions (i.e. which ones are more likely to be stored together and which ones are more likely to be stored separately). https://www.anglistik.phil.fau.de/staff/uhrig/
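The "thinning" step of such a workflow can be pictured as random sampling from the retrieved hits followed by export to a spreadsheet for manual annotation. The sketch below assumes invented CSV file names and columns and is not the actual NewsScape tooling.

```python
# Hypothetical sketch of the "thinning" step: randomly sample retrieved hits and
# write them to a CSV that annotators can work through (or import into ELAN).
# Input/output file names and columns are invented for illustration.
import csv
import random

SAMPLE_SIZE = 200

with open("retrieved_hits.csv") as fh:               # columns: file,start,end,text
    hits = list(csv.DictReader(fh))

random.seed(42)                                       # reproducible thinning
sample = random.sample(hits, min(SAMPLE_SIZE, len(hits)))

with open("to_annotate.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["file", "start", "end", "text", "gesture"])
    writer.writeheader()
    for hit in sample:
        writer.writerow({**hit, "gesture": ""})       # empty column for manual coding
```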

4. Silva Ladewig and Jana Bressem
Title: Methods of Gesture Analysis—analyzing multimodality from a cognitive-linguistic perspective.
Abstract: The workshop provides a theoretical and practical introduction to the analysis and annotation of gestures from a (cognitive) linguistic perspective (Bressem, Ladewig and Müller 2013; Müller 2010; Müller, Bressem and Ladewig 2013; Müller, Ladewig and Bressem 2013). The first part of the workshop introduces the Methods of Gesture Analysis (MGA), a form-based method for systematically reconstructing the meaning of gestures. It allows for the reconstruction of fundamental properties of gestural meaning creation and determines basic principles of gestural meaning construction by distinguishing four main building blocks: 1) form, 2) sequential structure of gestures in relation to speech and other gestures, 3) local context of use, i.e., gestures' relation to syntactic, semantic, and pragmatic aspects of speech, and 4) distribution of gestures over different contexts of use. The second part of the workshop presents its implementation in an ELAN-based annotation system, the Linguistic Annotation System for Gestures (LASG), providing guidelines for the annotation of gestures on a range of levels of linguistic description. As such, the workshop familiarizes participants with a cognitive-linguistic perspective on gestures that focuses on a description of the structural and functional properties of gestures (“grammar of gesture”, Müller, Bressem and Ladewig 2013; Müller 2010) and an investigation of the relation of speech and gestures in conjunction from the perspective of a “multimodal grammar” (Bressem 2014; Fricke 2012; Ladewig 2014).
References:
Bressem, J. (2014). "Repetitions in gesture". In: C. Müller, A. Cienki, E. Fricke, S.H. Ladewig, D. McNeill & J. Bressem (Eds.), Body-Language-Communication: An International Handbook on Multimodality in Human Interaction. Berlin, Boston: De Gruyter Mouton, 1641-1649.
Bressem, J., Ladewig, S. H., & Müller, C. (2013). Linguistic Annotation System for Gestures (LASG). In C. Müller, A. Cienki, E. Fricke, S. H. Ladewig, D. McNeill, & S. Teßendorf (Eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. pp. 1098-1125. Berlin/ Boston: De Gruyter Mouton.
Fricke, E. (2012). Grammatik multimodal: Wie Wörter und Gesten zusammenwirken. Berlin: Mouton de Gruyter.
Ladewig, S. H. (2014). Creating multimodal utterances. The linear integration of gestures into speech. In: C. Müller, A. Cienki, E. Fricke, S.H. Ladewig, D. McNeill & J. Bressem (Eds.), Body-Language-Communication: An International Handbook on Multimodality in Human Interaction. Berlin, Boston: De Gruyter Mouton, 1662-1677.
Müller, C. (2010). Wie Gesten bedeuten. Eine kognitiv-linguistische und sequenzanalytische Perspektive. Sprache und Literatur, 41(1), 37-68.
Müller, C., Bressem, J., & Ladewig, S. H. (2013). Towards a grammar of gesture: A form-based view. In C. Müller, A. Cienki, E. Fricke, S. H. Ladewig, D. McNeill, & S. Teßendorf (Eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction. pp. 707-733. Berlin/ Boston: De Gruyter Mouton.
Müller, C., Ladewig, S. H., & Bressem, J. (2013). Gesture and speech from a linguistic point of view. In C. Müller, A. Cienki, E. Fricke, S. H. Ladewig, D. McNeill, & S. Teßendorf (Eds.), Body – Language – Communication. An International Handbook on Multimodality in Human Interaction, pp. 55-81. Berlin, Boston: De Gruyter Mouton.
Bios: Jana Bressem is academic assistant to the Chair of German Linguistics, Semiotics, and Multimodal Communication at the Technische Universität Chemnitz. Her main research interests are multimodality (speech/gesture and text/image), language and cognition, and the pragmatics of gesture use. Together with Cornelia Müller and the ToGoG team, she has developed a linguistic, form-based approach to gestures. https://www.tu-chemnitz.de/phil/ifgk/germanistik/sprachwissenschaft/mitarbeiter.html#bressem, www.janabressem.de. Silva Ladewig is an Assistant Professor, currently replacing Cornelia Müller at the European University Viadrina, Frankfurt (Oder). Her main research interests are multimodality, the gesture-sign interface, cognitive grammar, and embodiment. Together with Cornelia Müller and the ToGoG team, she has developed a linguistic, form-based approach to gestures. https://www.kuwi.europa-uni.de/en/lehrstuhl/sw/sw0/mitarbeiter/index.html, www.silvaladewig.de, http://www.berlingesturecenter.de/corneliamueller.html