Transformer Networks of Human Conceptual Knowledge

19 citations · 51 Mendeley readers

Abstract

We present a computational model capable of simulating aspects of human knowledge for thousands of real-world concepts. Our approach involves a pretrained transformer network that is further fine-tuned on large data sets of participant-generated feature norms. We show that such a model can successfully extrapolate from its training data, and predict human knowledge for new concepts and features. We apply our model to stimuli from 25 previous experiments in semantic cognition research and show that it reproduces many findings on semantic verification, concept typicality, feature distribution, and semantic similarity. We also compare our model against several variants, and by doing so, establish the model properties that are necessary for good prediction. The success of our approach shows how a combination of language data and (laboratory-based) psychological data can be used to build models with rich world knowledge. Such models can be used in the service of new psychological applications, such as the modeling of naturalistic semantic verification and knowledge retrieval, as well as the modeling of real-world categorization, decision-making, and reasoning.
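
To make the modeling recipe in the abstract concrete, the following is a minimal sketch of what fine-tuning a pretrained transformer on feature norms could look like. The model name, data format, and training loop are illustrative assumptions, not the authors' implementation; the idea is simply to encode a concept–feature pair and regress onto a human applicability judgment.

```python
# Hypothetical sketch: fine-tune a pretrained transformer to predict human
# concept-feature judgments from (concept, feature, rating) norm triples.
# Model choice, data, and hyperparameters are assumptions for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=1,  # single regression output: predicted feature applicability
)

# Toy feature-norm triples (real norm sets contain thousands of such pairs).
norms = [
    ("apple", "is a fruit", 1.0),
    ("apple", "has wings", 0.0),
    ("sparrow", "has wings", 1.0),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for concept, feature, rating in norms:
    # Encode the pair as "[CLS] concept [SEP] feature [SEP]".
    enc = tokenizer(concept, feature, return_tensors="pt")
    target = torch.tensor([[rating]])
    loss = model(**enc, labels=target).loss  # MSE loss for the regression head
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# After fine-tuning on many norms, the model can be queried for unseen pairs.
model.eval()
with torch.no_grad():
    enc = tokenizer("penguin", "can fly", return_tensors="pt")
    print(model(**enc).logits.item())  # predicted applicability score
```

Because the transformer already carries distributional knowledge from language pretraining, a setup along these lines is what would allow extrapolation to concept–feature pairs that never appear in the norm data, as the abstract describes.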

Citation (APA)

Bhatia, S., & Richie, R. (2022). Transformer Networks of Human Conceptual Knowledge. Psychological Review, 131(1), 271–306. https://doi.org/10.1037/rev0000319

Readers over time: annual reader counts, 2020–2025.

Readers' Seniority

PhD / Post grad / Masters / Doc: 26 (68%)
Researcher: 5 (13%)
Professor / Associate Prof.: 4 (11%)
Lecturer / Post doc: 3 (8%)

Readers' Discipline

Psychology: 19 (59%)
Computer Science: 8 (25%)
Neuroscience: 3 (9%)
Social Sciences: 2 (6%)

Article Metrics

News Mentions: 1
