The social lives of words
Our ability to form rich and diverse semantic representations of words is a defining feature of human cognition. A large body of research in the field of lexical semantics has aimed to quantify these representations by asking large numbers of people to rate thousands of words in relation to theoretically meaningful semantic dimensions, for example whether ‘avocado’ is abstract/concrete or positive/negative. However, relatively little attention has been given to quantifying dimensions of social semantics, for instance whether a word is perceived as more strongly associated with specific age groups, genders, locations, or political ideologies. This presentation will introduce the first large-scale investigation into how Czech speakers form socio-semantic representations of Czech words, presenting the methods used, the data collected, and the initial findings that have emerged from a range of analyses.
Language and Theory of Mind Development
In a series of four sessions we will look at interactions between language acquisition and children’s developing understanding of others’ and their own desires, beliefs, intentions, perspectives, etc. – commonly referred to as Theory of Mind. We will start by discussing what Theory of Mind is and how we can test infants’ and young children’s Theory of Mind understanding. Then we will look at a specific subset of Theory of Mind, namely the understanding of false belief. We will discuss and compare various studies that have found that different linguistic skills support children’s understanding of false belief. Finally, we will also look at differences between monolingual and bilingual children and at cross-linguistic differences in this relationship between various linguistic skills and children’s developing understanding of false belief.
(1) Introduction to Theory of Mind
(2) False Belief Understanding and Language
(3) Cross-linguistic Perspectives on Theory of Mind and Language Development
(4) Differences between Mono- and Bilingual Children’s Understanding of Theory of Mind
Language, Health and Illness
In this block we will explore the language which constitutes health and illness and which characterises communication within a range of healthcare contexts. We’ll consider the relationship between language and health and come to better understand some of the ways in which health and illness are profoundly connected to language use and discourse. The block is broadly divided into two parts. First, we’ll consider what is widely referred to as healthcare communication, which involves the study of language and discourse produced in healthcare contexts, focusing specifically on doctor-patient interaction, before moving on to look more broadly at discourses around health and illness in a wide range of private and professional contexts (e.g. advertising, health promotion campaigns, and the media).
Language in the news: Critical and corpus perspectives
In this block we explore the language of news media. We will consider the discursive practices involved in the creation of news and reflect on the broader socio-cultural implications of news discourse. We will then consider two methodological approaches that are particularly popular in discourse studies of the news – Critical Discourse Studies and Corpus Linguistics – and discuss how these can be synthesised to produce comprehensive yet critical accounts of language and representation in the news. This will be demonstrated through reference to research on news representations of migrants in the UK.
What are events? – Four perspectives
Events are fundamental units of human cognition. They guide perception and they can be referred to by language. I will illustrate four theoretical perspectives on events – each one highlighting a different aspect of event cognition – and briefly discuss what is typically presented as evidence in favor of each view. Finally, I will try to connect these perspectives and discuss what each view might contribute to a unified theory of event cognition.
Motion event encoding across different languages
Motion events constitute an extensively researched field in the domains of language typology and linguistic relativity. I will briefly sketch the theoretical background underlying previous empirical studies and present some recent work from the Heidelberg research group.
Odd! How predictability violations impact learning and memory
Prediction, the anticipation of upcoming linguistic material, is thought to enable efficient language comprehension. In contrast, having strong predictions disconfirmed (i.e., prediction error) is currently discussed as the primary cognitive principle that drives long-term memory and (language) learning. In my talk, I will introduce key concepts that inform proposals on error-driven learning (e.g., Bayesian learning, surprisal), and I will present studies that investigate the role of prediction error in learning. I arrive at the conclusion that the evidence supporting a strong role of prediction error in language learning is relatively thin, and I will offer recent insights from research in our lab that can help inform future research on this topic.
Basics of R and data visualization for linguists (hands-on workshop)
This course is a hands-on introduction to R, an easy-to-learn, versatile programming language widely used across data science and academia. The focus will be on exploratory data analysis, and data visualization in particular. By the end of the course, you will have learned how to represent your data in an attractive visual manner that is both informative and easy to grasp. We will go over just enough programming and data-wrangling basics to be able to work with example datasets and corpora, and then dive straight into the visualization part. The course will focus on ggplot2 and touch upon some other useful tidyverse packages, as well as some packages useful for managing corpus data. We will also go over some ways graphs are often used in a misleading manner, and how to avoid such pitfalls. The software we will be using – R and the RStudio IDE – is free and open source. The course does not require any prior experience in programming or data science.
Demography and social structure in language change (block 1)
In this block, we will look at the relevance of social structure for language change. Building on the work of Peter Trudgill, Salikoko Mufwene and Henning Andersen, we will cover cases as diverse as the Continental vs. Insular Scandinavian languages, English dialect change during the Industrial Revolution, new dialect formation in the English New Town of Milton Keynes, and dialect levelling in post-WWII London. The key to understanding these changes lies in the degree of language and dialect contact as well as where a community lies on a closed–open continuum.
300 years of migration and language: Colonialism and its aftermath (block 2)
This block takes us through the beginnings of West European colonialism, dealing with the effects on language change of the forced translocation of Africans across the North Atlantic and their subsequent enslavement. We will briefly look at the characteristics of Caribbean creoles in relation to their West African antecedents, such as Akan. Returning to the mother continent, we look at pidgins and creoles in West Africa. Finally, we touch on more recent language and migration in Europe and Africa, in preparation for the topics of Block 3.
The rise of urban contact dialects: Europe and Africa (block 3)
The third block considers how ‘The Empire Strikes Back’ in northern Europe, with the rise of ethnolects and, more particularly, multiethnolects in countries such as Denmark, the Netherlands, Germany and the UK. These new varieties result from large-scale migration from countries in the Global South, many of them former colonies of northern European countries, giving rise to distinctive innovations in the majority European language. At the same time, we look at migrations within Africa south of the Sahara, where massive urbanisation leads to the emergence of new, variable language varieties among the youth. We find that these European and African ‘urban contact dialects’ have similar origins and similar indexical and identity functions, although the linguistic outcomes are strongly affected by the monolingual habitus of the former countries and a multilingual ethos in the latter.
Introduction to sign languages
Sign languages are natural, hierarchically structured language systems that are as complex as spoken languages. Although sign and spoken languages differ in the modality in which they are expressed (visual-gestural vs. auditory-oral), many similarities in linguistic structure can be observed. However, language modality also shapes linguistic structure in particular ways: for instance, sign languages show a higher degree of simultaneity and a higher degree of iconicity. In this talk, general information about sign languages, sign language linguistics, Deaf culture and the sign language community will be provided. Participants will be introduced to the grammatical system of sign languages, considering typological as well as sign-language-specific aspects.
Neural processing of sign languages
The investigation of sign language processing in the brain gives insight into which aspects of neural language processing are independent of language modality (i.e. universal processing mechanisms) and how language modality (auditory-oral vs. visual-gestural) influences neural language processing. In this talk, an overview of research investigating the neural processing of sign languages will be provided, and similarities and differences between sign and spoken languages will be discussed. Participants will learn how sign languages are processed in the brain, which processing mechanisms are involved in the comprehension and production of sign language, how sign language disorders such as sign language aphasia manifest, and how sign language is acquired.
Look who’s calling: Individual identity in animal vocalizations
The ability to recognize and discriminate among individuals based on individually distinctive traits underlies many decisions animals make – choosing a mate, feeding offspring, distinguishing friends from foes, etc. Many studies have revealed that animal vocalizations are individually distinctive. Species vary in the degree to which their signals are individually distinctive, which may depend on their biology and life-history traits. In this presentation, I will review how individual identity can be expressed in animal vocalizations and how vocal individuality might be affected by species traits, and I will try to highlight similarities and differences in individual identity signaling in human and animal vocalizations.
Eva Maria Luef
Phonological networks in first and second languages
Lexical knowledge is a crucial pillar of linguistic competence, upon which all other linguistic functions depend. Decades of psycholinguistic research have explored the cognitive representations of words in our minds and their internal organization, the so-called mental lexicon. The primary method of investigation into the inner workings of the mental lexicon has been the reconstruction of linguistic processes through psycholinguistic tasks, such as word recognition, or the charting of developmental trajectories of word learning by focussing on sub-groups of words and their formal and functional neighborhoods (i.e., semantic or phonological neighborhoods). This “bottom-up” approach to word processing has significantly shaped our current knowledge in the field and built a solid theoretical foundation to capture phenomena observed in experimental tests over the decades. Recent advances in the mathematical domain of network science now offer promising new avenues for a “top-down” approach to the mental lexicon. By studying words as belonging to a large and highly interconnected network, a new understanding of word processing can be gained from a more holistic perspective on word memory and learning. This bird’s-eye view of the mental lexicon as a complex system can facilitate the study of the grander structure of word connections, from which new patterns of hierarchical relationships and lexical access dynamics can be inferred, leading to more predictive models of the factors influencing lexical processing.
The research field of lexical network science is relatively new, and researchers working within this theoretical framework have so far investigated a small number of languages, focusing on first-language users. Second and foreign languages are underrepresented, with no study charting the whole lexicon of language learners from the network perspective. I will attempt to fill this gap by providing an overview of lexical word form networks of learners of English as a second language. For contrast, a lexical word form network of British English first-language users will be presented, making it possible to draw comparisons between first and second languages. A specific focus will be on the mathematical modelling of network-theoretical concepts in relation to their psycholinguistic realities, and on the question of what network science can contribute to theories of word learning and lexical access.
The goal of the first talk (Saturday) is to give a network-theoretical description of lexical knowledge and its growth in learners of English as a second language. In the second talk (i.e., workshop on Sunday), participants will be shown how to construct lexical networks using the software “Gephi”. Participants of the workshop are asked to download Gephi (https://gephi.org/) and bring their own laptops.
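For readers curious what a lexical word form network looks like in practice, here is a minimal, purely illustrative sketch (in Python rather than Gephi; the toy word list and the edit-distance-1 linking criterion are illustrative assumptions, not the networks analyzed in the talks):

```python
from itertools import combinations

def edit_distance_is_one(a: str, b: str) -> bool:
    """True if a and b differ by exactly one substitution, insertion, or deletion."""
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if len(a) > len(b):
        a, b = b, a          # make a the shorter string
    # Does a match b after deleting exactly one character from b?
    return any(a == b[:i] + b[i + 1:] for i in range(len(b)))

def build_network(words):
    """Adjacency list linking each word form to its edit-distance-1 neighbors."""
    graph = {w: set() for w in words}
    for u, v in combinations(words, 2):
        if edit_distance_is_one(u, v):
            graph[u].add(v)
            graph[v].add(u)
    return graph

# Toy lexicon (orthographic forms standing in for phonological transcriptions)
lexicon = ["cat", "bat", "cot", "cast", "dog", "dot"]
net = build_network(lexicon)
degree = {w: len(neighbors) for w, neighbors in net.items()}
```

Network measures such as degree (computed above), clustering coefficient, or average path length can then be read off the graph, and an edge list of this kind can equally be exported for visualization and layout in Gephi.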
Collecting and analyzing data from the bidirectional self-paced reading paradigm
Bidirectional self-paced reading (BSPR) is an extension of the classic word-by-word self-paced reading paradigm that adds the option to move backward instead of only forward through the sentence. In this workshop, participants will be familiarized with the theoretical assumptions behind the BSPR paradigm, and learn how to set up a BSPR experiment on the Ibex farm. Participants will learn how to compute common reading measures from the eye-tracking literature for BSPR data (first-pass reading times, rereading times, regression probabilities etc.), and how to conduct scanpath analyses of BSPR data.
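To make the reading measures concrete, here is a hypothetical single-trial sketch (the function name, the input format of (word index, reading time) pairs, and the single-trial treatment of regressions are my own illustrative assumptions; actual analyses aggregate over trials and may operationalize the measures differently):

```python
def reading_measures(scanpath):
    """Per-word reading measures from one bidirectional self-paced reading trial.

    scanpath: list of (word_index, reading_time_ms) pairs in the order the
    words were displayed. Returns, for each word index:
      first_pass     - summed time from first entering the word until leaving it
      rereading      - total time on the word minus first-pass time
      regression_out - whether the reader moved leftward out of the word
                       during its first pass
    """
    measures = {}
    prev = None
    for w, rt in scanpath:
        m = measures.setdefault(
            w, {"first_pass": 0, "total": 0, "regression_out": False, "done": False}
        )
        m["total"] += rt
        if not m["done"]:
            m["first_pass"] += rt
        # Moving to a different word ends the previous word's first pass;
        # a leftward move counts as a regression out of it.
        if prev is not None and w != prev:
            pm = measures[prev]
            if not pm["done"]:
                pm["done"] = True
                if w < prev:
                    pm["regression_out"] = True
        prev = w
    for m in measures.values():
        m["rereading"] = m["total"] - m["first_pass"]
        del m["done"]
    return measures

# Example trial: read words 0-1, regress back to 0, return to 1, move on to 2
measures = reading_measures([(0, 100), (1, 200), (0, 150), (1, 80), (2, 300)])
```

A region's regression probability would then be estimated across trials as the proportion of trials in which `regression_out` is true for that region.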
Native and non-native speech perception
Our speech perception system is optimized for processing all aspects of the sound structure of our native language(s): segments and suprasegments, phonemic vs. allophonic contrasts, syllable structure, phonotactics, and alternations. How does language-specific speech perception develop during infancy and early childhood? How does dual language input influence this process? And to what extent can speech perception be retuned later in life with the acquisition of a second language? In the first three lectures we will address these questions; in the last one we will consider consequences for phonological theory.
Syntactic and Semantic Agreement
Agreement has been described as “the systematic covariance between a semantic or formal property of one element and a formal property of another” (Steele, 1978). This means that we can use agreement to understand how syntactic and semantic information are represented, and how they are linked during language processing. In British English, semantic number agreement can arise when a morphologically singular collective noun like “committee” forms a dependency with a morphologically plural agreement target, because of the semantic property that a single committee may be composed of multiple members (e.g. “The committee were mentioned in the report”; “The committee awarded themselves a pay rise”). In this talk, I will describe experiments that examine this phenomenon using eye-tracking and acceptability judgements. I argue that any successful account of semantic number agreement will need to take incremental processing into account.
Integration of information in reading
Current models of eye-movement control in reading suggest that the relationship between reading behavior and linguistic knowledge is fairly indirect; for example, in the E-Z Reader model of eye-movement control, syntactic information can only affect eye-movement behavior via a post-lexical integration stage, or through lexical pre-activation of the current word. However, recent studies have shown that some degree of syntactic and semantic integration of a target word N can occur while a reader is fixating on word N-1. In the absence of lexical pre-activation, this places constraints on the timing of the onset of the post-lexical integration stage. Based on studies conducted in our lab, I propose that grammatical constraints may sometimes bypass the integration stage, influencing the early stages of lexical identification via an interaction with visual or orthographic cues.
Variability in speech production in the light of language usage and an emergent lexicon
There are two aspects of speech production that seem to be well established. The first aspect concerns the effects of language usage, which is typically measured as predictability and frequency of occurrence: the consensus is that greater predictability and higher frequency are associated with phonetic reduction – i.e. shorter phone durations and more centralized vowels. These findings are typically explained as arising from the need to balance the amount of acoustic signal against the amount of information conveyed by the signal.
The second aspect concerns the cognitive processes that take place during speech production and the involvement of the mental lexicon. Speech production is typically conceived of as a modular, sequential process in which higher-level representations are transformed into the lower-level representations necessary for articulation. Once lower-level representations such as phonology and articulatory gestures are obtained, higher-level representations such as morphology or semantics are discarded. As a result, differences in morphological information, such as inflectional functions, should not be reflected in acoustic and articulatory characteristics.
In two sessions, I will present and discuss studies that challenge these well-established views regarding the interaction between the lexicon and language usage on the one hand, and speech production on the other. The first session will focus on the effects of language usage. I will present acoustic and articulatory evidence demonstrating that language usage is not necessarily reflected in reduction but can also be reflected in enhanced articulation.
The second session will focus on the interaction between the mental lexicon and speech production. I will present studies that employ computational learning models to assess the structure of the mental lexicon and how this structure co-varies with acoustic characteristics of the speech signal.