
Speaker Preview (3) | Chinese and Western Scholars' Forum on Frontiers in the Cognitive Science of Language (Session 3): Interdisciplinary Forum on Language Acquisition and Cognitive Neuroscience

Posted: 2024-10-15

The Interdisciplinary Forum on Language Acquisition and Cognitive Neuroscience is only three days away. Are you ready?

Whether you are a linguistics enthusiast or simply curious about cognitive neuroscience, this forum will offer you fresh insights and food for thought!

With a rich program and lively discussions, it is not to be missed.

Let us continue introducing some of the experts attending the forum and the talks they will give:


Professor Anna Papafragou

Professor, Department of Linguistics, University of Pennsylvania; Director of the interdisciplinary graduate program in Language and Communication Sciences; Associate Director for Research at the MindCORE center and member of its Psychology research group

Research interests:

Word learning, the development of communicative competence, language processing in children and adults, and cross-linguistic perspectives on the language-cognition interface

Talk title:

Events in Language and Mind

Abstract:

Understanding how humans represent, recognize, remember and talk about events is important for several disciplines that focus on the human mind and brain. In this talk I will bring together an interdisciplinary set of tools to address the nature of event representation. I will present experimental evidence showing that abstract properties of event structure underlie both the conceptual and the linguistic encoding of dynamic events; furthermore, in several crucial respects, the representation of events resembles that of objects. These findings have implications for both cognitive and linguistic event theories, as well as the relation between language and thought.


Professor Gasper Begus

Associate Professor, Department of Linguistics, University of California, Berkeley; Director of the Berkeley Speech and Computation Lab

Research interests:

Computational linguistics, interpretable machine learning, deep learning, generative artificial intelligence, and general linguistics

Talk title:

New Ways of Modeling Language

Abstract:

Large language models have recently captured the attention of the linguistic community, but their architecture features several cognitively and linguistically implausible aspects. In this talk, I argue for building more realistic models that learn language from raw speech, incorporating articulators, the production-perception loop, and communicative intent. I propose the ciwaGAN model, which models language as informative imitation and features several desired properties such as communicative intent, learning from raw unsupervised audio, and embodied representations. I also present an interpretability technique, which allows us to perform linguistic experiments on the models. This technique enables us to model language in a new way--not with rules, Bayesian approaches, or exemplars, but as a dependency between latent space and generated data in generative deep models. I will argue that such modeling has implications for phonology, morphosyntax, historical linguistics, and neurolinguistics.
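The abstract's closing idea, treating linguistic structure as a dependency between a generative model's latent space and its generated output, can be illustrated with a toy latent-space intervention. Everything below is a hypothetical sketch (a linear stand-in "generator" with made-up dimensions), not code from ciwaGAN or the speaker's actual interpretability technique:

```python
import numpy as np

# Hypothetical toy "generator": maps a latent vector z to an output
# feature vector, standing in for a trained GAN's audio generator.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))  # 4 latent dims -> 8 output features

def generate(z):
    return z @ W

def probe_dimension(dim, eps=1.0, n=100):
    """Nudge one latent dimension and measure the mean absolute change
    in each output feature -- a crude analogue of intervening on a
    latent variable to test what it encodes."""
    z = rng.normal(size=(n, 4))
    z_pert = z.copy()
    z_pert[:, dim] += eps
    return np.abs(generate(z_pert) - generate(z)).mean(axis=0)

effect = probe_dimension(0)
# With a linear generator, the effect of dimension 0 is exactly |W[0]|.
```

In a real generative model the generator is nonlinear, so such probes are run empirically over many samples; the point here is only the shape of the experiment: intervene in latent space, observe the generated data.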


Dr. Bingjiang Lyu (吕柄江)

Investigator, Changping Laboratory

Research interests:

Neural decoding with multimodal brain-imaging techniques

Talk title:

Structure Representation and Computation in the Human Brain

Abstract:

A remarkable aspect of human cognition is the ability to organize information and knowledge in a structured way, which is crucial for various cognitive processes, including natural language understanding and managing complex relationships. However, the neural mechanisms underlying the representation and computation of structure in the human brain are not yet fully understood. In this talk, I will present findings from two MEG studies that explore this topic. The first study examines how the brain incrementally integrates consecutive words into a coherent interpretation during speech comprehension, aligning with the speaker's intended meaning. We used BERT, a deep language model, to extract sentential structure representations and compared them with listeners' brain activity as they processed the same sentences. This approach provides a detailed view of how the brain constructs structured interpretations by dynamically balancing multiple constraints. In the second study, participants were tasked with generating a linear sequence from a multi-depth symbolic structure using recursive syntax. MEG results revealed sequential reactivation of objects according to the underlying structure, marked by repeatable syntactic operations and increased ripple band power in the inferior frontal gyrus and hippocampus. These findings offer insights into the neurocomputational mechanisms that support the recursive generation of nested structures in the human brain.
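The model-to-brain comparison described in the first study is typically done with an encoding model: model-derived features are regressed against recorded brain activity and evaluated by prediction accuracy. The sketch below uses synthetic stand-in data and plain ridge regression; it is an assumed, generic version of such an analysis, not the study's actual pipeline:

```python
import numpy as np

# Synthetic stand-ins: "model features" (e.g. BERT-derived sentence
# representations) and "brain activity" (e.g. MEG sensor time courses).
rng = np.random.default_rng(1)
n_timepoints, n_features, n_sensors = 200, 10, 5
X = rng.normal(size=(n_timepoints, n_features))
true_B = rng.normal(size=(n_features, n_sensors))
Y = X @ true_B + 0.1 * rng.normal(size=(n_timepoints, n_sensors))

# Ridge regression: B_hat = (X'X + lam*I)^-1 X'Y maps features to sensors.
lam = 1.0
B_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Evaluate the encoding model: correlation between predicted and
# observed activity, one value per simulated sensor.
Y_hat = X @ B_hat
r = [np.corrcoef(Y_hat[:, s], Y[:, s])[0, 1] for s in range(n_sensors)]
```

Real analyses add cross-validation, time-resolved fitting, and source localization, but the core logic, predicting brain signals from model representations, is the same.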


More speaker previews will be posted tomorrow. Stay tuned for on-site coverage from the talent introduction base (引智基地)!


"100 Questions, 100 Answers" Column

Do you have something you would like to say to the experts, or a question you would like to discuss with them?

Any questions about the forum itself?

Or an idea that just flashed through your mind?

The "100 Questions, 100 Answers" column will be open for a limited time to collect your questions and ideas. Submissions are warmly welcome; we look forward to the exchange!

Email: clcb@blcu.edu.cn