Below are the abstracts for the 2022 Annual Meeting's Five-Minute Linguist event on Friday, January 7th, from 7:00 to 8:00 PM. Read more about the presenters here: link

How (people react) to talk about COVID-19: Linguistics and public health communication

Elsi Kaiser

COVID-19 has increased the urgency of effectively communicating health information to the general public. How do you persuade people to wear masks, get vaccinated or practice social distancing? We tested whether research on language processing can be applied to boost the persuasiveness of COVID-19 health messages, and whether reactions to COVID-related messaging depend on people's political views and COVID anxiety.

People are known to pay closer attention to information mentioned early in a sentence. However, the role of order-of-mention is less clear in real-world contexts. Do readers react differently to "The coronavirus has afflicted over 42 million Americans" (virus-first) vs. "Over 42 million Americans have been afflicted by the coronavirus" (victims-first)? Furthermore, does order-of-mention interact with perspective information from pronouns, e.g. "You/we/people should wear a face mask in public to protect everyone around you/us/them"? What happens if agent-first sentences are rewritten as action-first: "Wearing a face mask can help you/us/people protect everyone around you/us/them"?

We tested factors including order-of-mention and perspective, as well as presence/absence of numbers, to see if they influence the persuasiveness and level-of-concern triggered by health messages about COVID.

Our results show there is no one-size-fits-all solution: Individual differences shape people's reactions. For example, anxiety impacts interpretation of COVID-related facts: Unlike people with low COVID-related anxiety, higher-anxiety people rate messages with approximations (e.g. 'millions') as more frightening than those with specific numbers (e.g. 'over 25 million') – perhaps because vagueness allows for over-estimation. Furthermore, political views appear to impact reactions to the perspective indicated by "you", "we" and "people," modulated by order-of-mention: When reading about social distancing and masks, Democrats favored messages with the direct "you" or general "people" perspective, depending on whether action or agent came first, but non-Democrats showed no clear sensitivity to perspective.

Though there is no simple recipe for universally-effective COVID messages, our findings demonstrate that linguistic packaging impacts people's reactions in systematic ways. This highlights the importance of taking linguistic information into account when conveying health recommendations and facts to the general public.

Does a man need to make a man-made lake?

Kristin Denlinger

Say we’re on a camping trip. If I say the lake at our campsite is man-made, does it matter if it was made by a man or a woman? Chances are it doesn’t! But if I say the lake was made by a man, it would seem odd if the crew was all ladies. So what’s the difference between choosing to call the lake man-made as opposed to made by a man? In the first example, I’m specifying that the lake is a certain type, man-made as opposed to natural, whereas in the second there was a lake-making event involving a specific man. In both cases we may be referring to lake-making, but when made is used in an adjective-y way, as in man-made lake, our interpretation of man changes. For decades, linguists have noticed that funny things like this happen to the meaning of verbs when they’re brought into ‘adjective-land’ (Embick 2004, Kratzer 2000). Specifically, the modifiers of adjective-y verbs, like man in man-made, are more generic and can be used to create new meanings, like ‘artificial’ (Gehrke 2015, Maienborn 2009).

In this talk I will explain why ‘adjective-land’ is such a fertile ground for this type of meaning shift, and why certain meanings seem to thrive in this terrain. This explanation arises in part from the way we use verbs to talk about the world around us. While we typically use verbs to talk about events, verbs-turned-adjectives instead provide speakers with a means to creatively talk about properties associated with the events. If that property is important enough, in other words if speakers want to talk about it enough, the adjective-y verb can gain a meaning of its own. This is one way that speakers become more efficient at communicating to each other about the world.

Reexamining Negative Concord and Definiteness in African American English

Taylor Jones and Christopher Hall

Linguists have long known that “double negatives” in sentences like “I don’t know nothing about no carrots” are not bad grammar, but rather an instance of what we call negative agreement, and further, that negative agreement is a regular part of the grammar of many other languages, such as French, Russian, and Arabic. In English, multiple negation is traditionally thought of as consisting of a negative word like not or no, and a negative polarity item like any or even another no.

These negative words are thought to combine only with indefinites that are bare nouns like dog or bare plurals like carrots in the example above, presumably indicating there is no such referent to the speaker’s knowledge. A speaker who says I don’t have any children cannot have unique, identifiable children in mind when saying so (if they’re not simply lying).

So what about sentences in African American English like You don't know nothing about no Kendrick Lamar? While a proper name is a kind of bare noun, Kendrick Lamar certainly exists! Here, the word no modifies a definite noun phrase, and intensifies the assertion that you don’t know anything at all about (the one and only) Kendrick Lamar. No isn’t informing us that there are multiple Kendricks Lamar and you don’t know about any of them; rather, it’s reaffirming your lack of knowledge about him. We propose that in AAE no after a negation can sometimes introduce a topic of discussion (as in Kids these days don’t know nothing about no Gatorade in a glass bottle), reported speech, or an echo reading. This explanation accounts for no in these kinds of sentences, and relates it to other phenomena in AAE, especially around quotation (Jones 2016, Spears 1980), and intensifying, indignant (Spears 1980) or exasperated (Smith 2019) utterances.

Misunderstanding or incorrectly transcribing utterances like the above, as has been shown to happen in court transcriptions and other places, can contribute to discrimination and systemic inequality. Focusing on cases like these not only helps destigmatize AAE, but also challenges long-held assumptions about how negation works not only in AAE but across languages.

Perceptions of Ethnolectal Variation in Montreal: What we learn from second-generation speakers

Tracey Adams

Montréal, the largest city in Québec, represents a unique tension between a diverse and growing second-generation immigrant population and systematic government initiatives directed at speaker assimilation. One might expect two very different outcomes that arise from this situation. On the one hand, these speakers might jettison accents that reflect their heritage and conform to a ‘mainstream’ French accent (as Boberg (2006) argues). On the other, these speakers might retain their accents as a means of preserving their ethnic heritage and shaping their self-presentation amongst a variety of second-generation speakers (Blondeau & Friesner, 2011; Blondeau, 2016).

This topic has been explored in diverse cities in France, showing variation in pronunciation by speakers of immigrant backgrounds. However, since Francophone immigrants and their descendants in Québec have largely been neglected in sociolinguistic studies up until now, we lack data to adjudicate between the hypotheses above in this context. The current work fills that gap and demonstrates, in a series of extensive sociolinguistic interviews with women from the three largest ethnic groups in Montréal (Quebecker, North African, and Haitian), that second-generation speakers self-report speaking differently from, and are consistently perceived as speaking differently from, Montréalers of non-immigrant backgrounds. Thus, contrary to linguistic assimilation, second-generation speakers preserve their ethnic identity in their speech, using language as a means to represent their heritage. This study not only further elucidates the linguistic landscape of Montréal and the close relation between language and identity, but it also demonstrates the value of firsthand ethnographic studies of community members from diverse backgrounds to remove potential bias from linguistic analysis and conclusions.

I’m Tawkin’ Here: Why don’t New Yorkers sound like Noo Yawkas anymore?

Jennifer Kaplan, Cecelia Cutler

Why aren’t most young New Yorkers tawkin’ like dis anymore? We know how their speech is changing: Some young New Yorkers are pronouncing the long “U” in GOOSE and the long “O” in GOAT with their tongues further forward in their mouths—what linguists call Back Vowel Fronting. (Think a less extreme example of a stereotypical Californian saying “totally, dude” like tewtally, dewd). Which got us asking: Why are some young New Yorkers fronting their back vowels, while others are not?

While BVF is widespread in North American English, the degree to which individuals pronounce their “O’s” and “U’s” in the front of their mouths differs regionally (e.g. Southerners do this differently than Californians). New Yorkers of the past used to only front their back vowels in some very specific cases—and when they did, their tongues were only a little bit more forward than we would expect for those vowels. However, new data collected as part of the New York City English Corpus (CoNYCE) project indicates that young New Yorkers today are more likely to pronounce these vowels with their tongues even more dramatically forward and increase their fronting in more words than their older counterparts, in order to distance themselves from New York City stereotypes.

It turns out that speakers’ gender doesn’t matter, and ethnolinguistic identity matters more than ethnicity (i.e., we found an effect for some Spanish speakers, but only those who link their Hispanic identity with sounding like a Spanish speaker). However, age matters most: Younger people (born after 1979—a cutoff we established by analyzing phonological [pronunciation] data from over 300 interviews) are more likely to front their “U’s” and “O’s.” Speakers’ attitudes also matter: We find that New Yorkers’ stances (attitudes) toward New York City influence the way they speak. Ultimately, it is a combination of the especially stigmatized quality of the New York City accent in the popular psyche and the prevalence of negative NYC stereotypes, such as those of the ‘ghetto’ New Yorker, the ready-to-fight New Yorker, and the rude New Yorker, that is causing some young New Yorkers to fuggedabout retaining the back “U’s” and “O’s” of their parents’ generation.

Are you asking me or telling me? Learning to identify questions in early speech to children

Yu’an Yang, Daniel Goodhue, Valentine Hacquard, Jeffrey Lidz

Imagine having to figure out what “DIboQnISʼaʼ” means when you don’t speak Klingon and hear the sentence out of the blue. Do you think it's an assertion or a question? How could you tell? Now imagine you hear it in a specific context: You are the captain of the Enterprise, and a Klingon warrior walks up to you and says this sentence. Suppose also that after they say it, they stop speaking, and look at you. Clearly they are waiting for your reply. Perhaps now you might guess that they are asking you a question. Babies find themselves in a situation not completely unlike this. They have to figure out what counts as a question in their language as well. Because their grammar is still developing, they can’t rely on knowing the clause types of their language (that is, whether the Klingon sentence is an interrogative like Should we help them? or a declarative like We should help them). Luckily, infants don’t hear sentences out of the blue. So what kinds of cues are available to infants?

We turned to videos of English-speaking parents’ interactions with their infants between 11 and 18 months, and carefully examined the social conversational cues associated with speech acts like questions and assertions. We found that when directing their infants’ attention to new objects, parents were more likely to produce questions than assertions. And when asking a question, parents paused longer afterwards and looked longer at the infant than when uttering assertions, presumably to elicit a response and facilitate conversational turn-taking. Taken together, these social conversational cues reveal that parents — probably without even knowing it — are providing their children with important information to help them learn the difference between questions and assertions. In turn, this information about speech acts can provide a foothold for the discovery of the basic clause types in their language early, before they have acquired much of their grammar.

ST Homesign: The Story of Natural Language Emergence in a Rural Area

Seyyed Hatam Tamimi Sa'd, Ronnie Wilbur

How does language emerge and evolve? Is language emergence driven by the conscious efforts of the community that speaks that language, or does it happen subconsciously? Questions like these have baffled linguists for a long time, but the answers may come from homesigns—gestural communication systems that emerge in families with deaf members with no access to sign language. We report on one particular homesign, ST Homesign, that emerged in a small village in southwestern Iran around six decades ago. After a man in his early twenties completely lost his hearing, his family’s lack of access to deaf education and his own lack of literacy prompted the family and close friends, all hearing and speaking Arabic, to communicate with him by gesturing. Over time, this ‘gesture system’ has evolved into a homesign that exhibits many features characteristic of language. ST Homesign enables users to form various types of simple and complex sentences: declarative statements, questions, and conditional sentences, amongst others. Even more surprising is that ST Homesign—whose users have never known any sign language nor been in contact with any other deaf individuals—shares many features with established sign languages, including word order, topicalization, use of facial expressions, and differential functions of dominant vs. non-dominant hands. Grammatical differences between Arabic and ST Homesign provide further evidence that this homesign is not merely speech accompanied by gestures; rather, it is an independent language in its own right. Thus the story of ST Homesign provides novel insight into the natural emergence and development of language without any explicit educational intervention from the outside world.

The temporal texture of events: The connection between language and cognition

Yue Ji, Anna Papafragou

The world is a continuous flow of activity — running, laughing, meeting (so many meetings!). But humans intuitively segment this continuous experience into concrete chunks with beginnings and ends, which we call events. These events might have an inherent endpoint, like running to the bus stop, fixing a car, or making a decision. Or they might lack such an endpoint, like running, driving a car, or droning on and on in a meeting. In language, we can describe bounded events that have an endpoint as telic, and those without an endpoint (unbounded events) as atelic. Thus, we represent and talk about these events based on this temporal structure, but how do we perceive events as they unfold? Is this telic/atelic contrast something we deploy on the fly when observing the world around us?

One might think that endpoints are especially salient and privileged in event structure, as opposed to other points in time. In fact, we know from previous research that this is the case for bounded events. But what about unbounded events? We investigated this question experimentally by showing participants videos of events with very brief interruptions, paying close attention to whether observers noticed the interruptions in the middle of the video or towards its end, where the event either reached its inherent endpoint (bounded events) or simply stopped (unbounded events). We hypothesized that observers of bounded events would be more likely to neglect an interruption close to the end of the video, since the developments near the event endpoint would naturally attract more attention than those at other time points, while the middle-end difference would diminish in observers of unbounded events. And this is just what we found.

This research shows that humans are naturally and spontaneously inclined to attend to event temporal structure and further highlights a profound and intimate connection between language and cognition.