Concreteness and specificity: Two variables involved in abstraction

Dr Marianna Bolognesi

Words, language’s building blocks, are labels that define different types of categories. Some words define categories of concrete entities (cats, tables) while others define categories of abstract entities (legacy, empathy). Some words define generic categories that encompass many different entities (vehicles, art) while others define more specific ones (sports cars, Impressionism).

To unlock meaning from experience, to think, and to communicate with other humans, we construct different types of categories through mechanisms of abstraction. Concreteness and specificity are the two variables that enable us to abstract from the here and now of sensory experience. The ability to abstract is a hallmark of human cognition that underpins every form of scientific, cultural, technological, and artistic endeavor.

Concreteness indicates the degree to which a word defines a category extracted from groups of entities that can be perceived through the senses. The concrete word dog defines a category of tangible entities, while the abstract word doubt does not. Specificity indicates the degree of precision of a word’s meaning: generic categories like vehicle have low specificity and include many different items (cars, trucks, etc.), while more precise categories like tractor have higher specificity and include fewer items.
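To make the generic-to-specific gradient tangible, here is a minimal sketch of one simple proxy for specificity: the depth of a noun’s synset in the WordNet hypernym hierarchy, queried through NLTK’s WordNet interface. This is an illustrative assumption, not the exact method used in the studies discussed below; deeper synsets lie further from the root node entity and are in that sense more specific.

    # Minimal sketch: approximate the specificity of a noun by the depth
    # of its first sense in WordNet's hypernym hierarchy (via NLTK).
    # Requires: pip install nltk, then nltk.download('wordnet').
    from nltk.corpus import wordnet as wn

    def wordnet_depth(word):
        """Depth of the word's first noun synset; deeper = more specific."""
        synsets = wn.synsets(word, pos=wn.NOUN)
        if not synsets:
            raise ValueError(f"no noun sense found for {word!r}")
        return synsets[0].min_depth()

    for word in ["entity", "vehicle", "tractor"]:
        print(word, wordnet_depth(word))
    # Expected ordering: entity < vehicle < tractor,
    # mirroring the generic-to-specific gradient described above.

Depth is only a rough proxy, however: WordNet’s hierarchy is unevenly deep across domains, one of the problems with the resource taken up later in this abstract.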

When investigating the mechanisms and effects of abstraction, scholars from different fields sometimes focus only on specificity or only on concreteness. Cognitive scientists, in particular, tend to focus on concreteness, investigating the mechanisms that might explain how abstract concepts are grounded in perception and action, and comparing them with the mechanisms that ground concrete concepts. Computer scientists tend to focus on the extraction of categories at different levels of abstraction, and therefore on the variable called specificity: this has been a goal of the semantic web endeavor, of topic models, and more recently of deep learning algorithms.

Because different communities focus on different aspects of the complex notion of abstraction, the debate across scientific fields is impaired and theoretical development is jeopardized. This is also due to the fact that human-generated resources for operationalizing the concreteness of word meanings exist and are commonly used across disciplines (e.g., Brysbaert et al., 2014), while human-generated resources for measuring specificity do not. In addition, some scholars intuitively consider concreteness and specificity to be highly correlated, or even conflate the two. This would imply that concrete concepts (e.g., banana) should on average be more specific than abstract concepts (e.g., freedom).
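This correlation claim becomes directly testable once both variables are expressed as per-word scores. The following is a minimal sketch with invented toy numbers (hypothetical data, not real norms), assuming concreteness ratings on a 1–5 Brysbaert-style scale and specificity scores normalized to 0–1:

    # Minimal sketch with invented toy data (not real norms): testing
    # whether concreteness and specificity correlate across words.
    from scipy.stats import spearmanr

    # Hypothetical scores for five words, aligned by position.
    concreteness = [4.9, 4.8, 1.6, 2.1, 4.5]      # 1-5 rating scale
    specificity = [0.80, 0.70, 0.20, 0.60, 0.30]  # normalized 0-1 scores

    rho, p = spearmanr(concreteness, specificity)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")

On real data, as described below, the two variables turn out to be positively correlated yet capture different phenomena.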

In this talk I will focus on the difference and the relation between concreteness and specificity. I will present results from a large-scale empirical investigation in which we operationalized specificity as numeric scores (Bolognesi et al., 2020) extracted automatically from a lexical resource (namely WordNet; Miller, 1995) and correlated them with human-generated concreteness ratings (Brysbaert et al., 2014). The results showed that, although positively correlated, the two variables capture different phenomena: words can be concrete and specific (e.g., Aspirin, muffler, pulpit), concrete and generic (substance, tool, construction), abstract and specific (ratification, Buddhism, forensics), or abstract and generic (law, religion, beauty).

I will then explain some problems associated with WordNet, the lexical resource used to collect the specificity data, and show preliminary data consisting of human-generated specificity ratings collected for 1,000 Italian words in a crowdsourcing task. The scores were collected using a ranking task (the Best-Worst scaling method; Louviere et al., 2015) instead of the classic rating paradigm, which appears poorly suited to capturing specificity. In each trial of a Best-Worst scaling task, participants see a list of four words and must select the most specific and the least specific word (see the scoring sketch below). This method has recently been shown to be particularly effective for collecting norms, yielding higher predictive validity than other response formats such as rating tasks (Hollis & Westbury, 2018; Kahneman et al., 2021).

I will show to what extent the human-generated ratings of word specificity correlate with the specificity scores extracted automatically from WordNet, and with the human-generated concreteness ratings. Finally, I will discuss the theoretical implications of these relations and propose new avenues of research toward systematically collecting specificity scores, in order to elaborate a comprehensive theory of abstraction.
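As an illustration of how such rankings become per-word scores, here is a minimal sketch of the standard count-based Best-Worst scoring scheme (an assumption of this example; the exact scoring model used for the Italian norms is not specified here). Each word’s score is the number of times it was judged most specific, minus the times it was judged least specific, divided by the number of trials in which it appeared.

    # Minimal sketch of count-based Best-Worst scaling scoring.
    from collections import Counter

    def bws_scores(trials):
        """trials: iterable of (words_shown, best, worst) tuples."""
        best, worst, shown = Counter(), Counter(), Counter()
        for words, chosen_best, chosen_worst in trials:
            shown.update(words)
            best[chosen_best] += 1
            worst[chosen_worst] += 1
        # Score in [-1, 1]: +1 = always judged most specific, -1 = always least.
        return {w: (best[w] - worst[w]) / shown[w] for w in shown}

    # Toy example with two four-word trials:
    trials = [
        (["vehicle", "tractor", "law", "forensics"], "tractor", "law"),
        (["tractor", "religion", "Aspirin", "tool"], "Aspirin", "religion"),
    ]
    print(bws_scores(trials))

The resulting scores can then be correlated with the WordNet-derived specificity scores and with the concreteness ratings, as described above.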


References

Bolognesi, M., Burgers, C., & Caselli, T. (2020). On abstraction: Decoupling conceptual concreteness and categorical specificity. Cognitive Processing, 21, 365–381.

Brysbaert, M., Warriner, A. B., & Kuperman, V. (2014). Concreteness ratings for 40 thousand generally known English word lemmas. Behavior Research Methods, 46, 904–911.

Hollis, G., & Westbury, C. (2018). When is best-worst best? A comparison of best-worst scaling, numeric estimation, and rating scales for collection of semantic norms. Behavior Research Methods, 50, 115–133.

Kahneman, D., Sibony, O., & Sunstein, C. R. (2021). Noise: A flaw in human judgment. Little, Brown Spark.

Louviere, J., Flynn, T. N., & Marley, A. A. J. (2015). Best-worst scaling: Theory, methods, and applications. Cambridge University Press.

Miller, G. (1995). WordNet: A lexical database for English. Communications of the ACM, 38(11), 39–41.
