Sarah Brown-Schmidt organized a special session entitled “How Language Science is Building Bridges in Society” at the Annual Meeting of the American Association for the Advancement of Science (AAAS), held in Phoenix, AZ, on Feb 12-14, 2026. This panel, moderated by HSP President Elsi Kaiser, included presentations by Charlotte Vaughn (UMD), Sarah Phillips (UArizona), and Deanna Gagne (Gallaudet).
Synopsis: This session explores the intersection of language science and societal impact, emphasizing the importance of inclusivity and engagement in connecting science to educational and public settings. It highlights the biases present in language-based assessments used in clinical practice, particularly for individuals with diverse linguistic backgrounds, and advocates for a more equitable approach through translational research. The session showcases a science outreach initiative designed to engage deaf students in cognitive science, aiming to broaden participation and foster curiosity by connecting scientific concepts with students' everyday experiences. This initiative includes accessible resources, such as lesson plans and American Sign Language (ASL) videos, and has been piloted in local deaf programs, encouraging students to see themselves as future contributors to the field. Lastly, the session offers insights from a public-facing research lab at the Planet Word museum, illustrating the value of prioritizing participant experience in language science research. This approach not only enhances public engagement but also transforms traditional research methodologies, presenting challenges and opportunities for researchers to communicate effectively with participants.
In Spring 2025, the Society for Human Sentence Processing organized a series of online seminars presenting research that uses large-scale computational language models for human psycholinguistics. By computational language models, we refer both to large-scale (many-parameter) machine learning systems, such as generative pretrained transformer networks, and to other computational approaches to modeling human language learning and knowledge. We invited scholars using such systems to conduct research in human psycholinguistics to present on a range of topics within this area. The goal of these seminars was to present important findings, to share methodological and technical knowledge, and to inform our community about both the strengths and limitations of using computational language models in psycholinguistic research. The seminar series was initiated by Yuhan Zhang (Stanford University), then a junior member of the HSP Executive Committee.
Presented by Kanishka Misra (UT Austin)
Abstract: The success of language models (LMs), especially in producing impressively fluent and grammatical text, has spurred multiple research programs that attempt to incorporate them into the cognitive scientist’s toolkit. These research programs often focus on two major themes: 1) using LMs as objects of study, in attempts to characterize what aspects of acquiring linguistic form and function are the result of statistical learning mechanisms; and 2) using LMs as a source of measures that represent predictive processing during comprehension and production of language form, which are then used to build specific cognitive models. Regardless of the focus of the program, both approaches come with the technical requirements of accessing and using LMs. In this talk I will present minicons, a Python library that aims to reduce the technical burden of extracting scientifically useful metrics from publicly available LMs—e.g., log-probabilities and surprisals of words in context, entropies of predictive distributions, sentence/phrasal likelihoods, etc. After describing and demonstrating the basic functionality of the package, I will do a quick semi-live coding analysis. The analysis will test the extent to which LMs demonstrate behavior that is compatible with the results of Federmeier and Kutas (1999), which revealed the effects of category structure on contextual expectations and neural processing of language (in particular the N400). The tutorial will go from loading the stimuli to generating LM predictions to finally analyzing and comparing them with the results in Federmeier and Kutas (1999). I will end by describing future plans for the library.
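The metrics the abstract lists—log-probabilities, surprisals, and entropies of predictive distributions—have simple definitions that are easy to sketch without any LM at all. The toy next-word distribution below is invented for illustration (it does not come from minicons or a real model), but it shows the quantities a library like minicons extracts, including the kind of within-category contrast Federmeier and Kutas (1999) studied:

```python
import math

def surprisal(p):
    """Surprisal in bits: -log2 P(word | context)."""
    return -math.log2(p)

def entropy(dist):
    """Shannon entropy in bits of a predictive distribution."""
    return sum(-p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical next-word distribution after a context like
# "They wanted to make the hotel look more like a tropical resort,
#  so they planted rows of ..." (probabilities invented for illustration)
next_word = {"palms": 0.6, "pines": 0.2, "tulips": 0.1, "rocks": 0.1}

print(surprisal(next_word["palms"]))   # expected word: low surprisal
print(surprisal(next_word["pines"]))   # same-category violation: higher surprisal
print(surprisal(next_word["tulips"]))  # different-category violation: higher still
print(entropy(next_word))              # uncertainty of the whole distribution
```

With a real model, the same numbers would come from the LM's softmax over its vocabulary at each position rather than a hand-written dictionary.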
Presented by Kyle Mahowald (UT Austin)
Abstract: Language models (LMs) have become remarkably adept at generating fluent and grammatically coherent English, prompting fundamental questions about whether their performance indicates genuine linguistic generalization ("the real thing") or mere memorization from extensive training data ("stochastic parrothood"). To investigate these questions, I focus on two specific grammatical constructions: the Article+Adjective+Numeral+Noun (AANN) construction (e.g., "a lovely 3 days in Austin") and the English dative alternation (Double Object [DO]: "gave Y the X" vs. Prepositional Object [PO]: "gave the X to Y"). Through systematic experimentation, I report new results from small LMs trained from scratch on human-scale corpora, explicitly manipulating their exposure to targeted phenomena. We find effects of both the presence of these constructions in the input and sophisticated generalization from indirect evidence. Drawing on these findings and broader theoretical arguments in my recent position piece (Futrell and Mahowald, "How Linguistics Learned to Stop Worrying and Love the Language Models"), I argue that these kinds of experiments are linguistically informative and are powerful tools in the psycholinguistic toolkit.
Presented by Grusha Prasad (Colgate)
Abstract: There has been a growing body of work measuring the extent to which predictability estimates from (Large) Language Models can capture psycholinguistic effects. What can this comparison tell us about human sentence processing? The answer depends on why and how we are measuring the fit to human behavior, brain responses, or acceptability judgments. In the first part of this talk, I will survey existing work to articulate the different reasons people cite for comparing LM predictability estimates to various empirical effects in humans, and the different approaches they use to measure the alignment between the two. In the second part of the talk, I will motivate a particular method of measuring alignment that my collaborators and I have used (Huang et al., 2024), and present an interactive tutorial on implementing the pipeline for this method from scratch for a targeted question. Then I will open the floor to discuss how we might extend this particular method, or the more general arguments, to other questions or phenomena the audience is interested in exploring.
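One elementary way to quantify the alignment the abstract describes—much simpler than the full pipeline discussed in the talk, and not a reconstruction of the Huang et al. (2024) method—is to correlate per-word LM surprisal with a behavioral measure such as self-paced reading time. A minimal stdlib sketch on invented data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-word LM surprisals (bits) and reading times (ms);
# values are invented for illustration, not real experimental data
surprisals    = [2.1, 7.8, 3.0, 9.5, 4.2, 6.1]
reading_times = [310, 455, 330, 490, 350, 400]

r = pearson_r(surprisals, reading_times)
print(f"surprisal-RT correlation: r = {r:.3f}")
```

In practice, researchers typically go further—for example, asking whether adding surprisal as a predictor improves a regression model of reading times over a baseline with control predictors—which is one reason the choice of alignment measure matters.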
Science of Human Language at the Annual Meeting of the American Association for the Advancement of Science (AAAS) February 15, 2025
Sarah Brown-Schmidt and Tom McCoy organized a special session entitled “The Science of Human Language: Insights For and From AI” at the Annual Meeting of the American Association for the Advancement of Science (AAAS), held in Boston, MA, on Feb 13-15, 2025.
Synopsis: The scientific study of human language provides valuable insight into the structure, promise, and pitfalls of the artificial intelligence systems known as large language models (LLMs). The science of language shows how and why some forms of language are hard to understand, such as legal contracts and waivers, and identifies ways to leverage artificial intelligence to improve readability and support informed decision-making. Advances in the cognitive and neurobiological study of language demonstrate that knowledge of linguistic rules and the ability to use language in the real world are distinct human competencies that rely on distinct neural mechanisms. These distinctions can be leveraged to identify potential new capabilities in LLMs, improve evaluation of their performance, and in turn support the development of even better neurobiologically grounded models of human language. The scientific study of the sociocultural context of language explains how language reflects and evokes stereotypes and biases in the world, and how these manifest in LLMs. Leveraging tools from the scientific study of social language use, researchers can measure stereotypes and biases in LLMs, a crucial first step in creating the artificial intelligence systems of tomorrow. The speakers bring expertise from law, neuroscience, and computer science to provide insight into the mechanisms of human language and LLMs, and into how their differences and similarities can advance the value of these emerging technologies.
Academic Career Paths for Psycholinguists April 23, 2024
Are you considering a career in academia? Whether you're driven by a passion for research, teaching, or both, understanding the landscape of the current academic job market is crucial.
Dan Parker, Associate Professor of Linguistics at The Ohio State University, led this workshop designed to guide aspiring academics through the complexities of the application process. Topics included the various positions available in academia, guidance on crafting a compelling application, tips on how best to prepare for interviews, and strategies for effectively navigating the job search process. The workshop concluded with a Q&A session.
Video recording available below.
To view in Full Screen, start play, then click the ⚙ Gear and select "External Player."
Industry Career Paths with a Cognitive Science or Linguistics Degree March 7, 2024
Rachel Ostrand, staff research scientist at IBM Research, and Brendan Tomoschuk, senior data scientist at Cruise, hosted a session on industry job roles that can be a good fit for cognitive scientists and linguists. The session included a discussion of skills that are useful to develop and highlight in a job application, tips for crafting an industry-appropriate resume, and advice on how to search for industry jobs in the first place.
Video recording available below.
Online Workshop on the Peer Review Process January 10, 2024
Matt Goldrick, of Reviewer Zero and Northwestern University, hosted a workshop introducing researchers to the peer review process. The workshop gave an overview of peer review, including its goals, and discussed a range of practical tools for navigating it.
Video recording available: click the thumbnail below
Rebecca Holt and Maayan Keshev, current and previous HSP Executive Committee junior representatives, hosted abstract-writing workshops to provide students and early-career researchers from a variety of backgrounds with concrete tips for taking their abstracts to the next level.
Video recording available below.
NSF Program Officers Jorge Valdes Kroff (Linguistics), Leher Singh & Anna Fisher (Developmental Sciences), Dwight Kravitz (Cognitive Neuroscience), and Betty Tuller (Perception, Action and Cognition) presented and answered questions about submitting grants, getting feedback on project ideas or resubmissions, and other issues related to funding and reporting at the NSF.
Video recording available below.