
A RE:ENLIGHTENMENT STATEMENT

Paul Nulty

The virtual meeting of Re:Enlightenment on July 7th was my first direct involvement in the Re:Enlightenment project, although I have had very enjoyable collaborations and discussions with many of the participants over the past four years through my work as part of the Cambridge Concept Lab. One of the most immediately noticeable aspects of the project is its healthy lack of attention to academic discipline, or even to the inter/multidisciplinarity of the project itself; this is simply taken for granted. That said, with my primary training in computer science and linguistics, I am, let's say, unencumbered by a great deal of prior training in the history and philosophy of the Enlightenment. My understanding of the Re:Enlightenment project comes primarily from the statements, status updates, and background information on the website, and from many interesting discussions with participants during my time as a member of the Cambridge Concept Lab.

In the context of the Re:Enlightenment project, I hope that my work can contribute by examining how artificial and natural languages work structurally to instantiate and communicate knowledge. Chomskyan generative linguistics, following Saussure's dichotomy of langue and parole, assumed a continuum between the particular surface expressions of ideas in language and an underlying universal 'Logical Form'. Although many aspects of generative linguistics have been critically re-examined in recent years, most people share the intuition that the content of an idea is not determined by the particular language in which it is expressed. Rather, there are component structures and functions (phonemic, syntactic, semantic, and conceptual) that facilitate communication between minds in ways that are not specific to any one language or medium. Ideas can be instantiated in many surface forms. This is also true of artificial languages: object-oriented programming languages, for example, offer the programmer the conceptual tools of abstraction, composition, and polymorphism. Luciano Floridi's Informational Realism (2005) draws attention to object-oriented programming as “a flexible and powerful methodology with which to clarify and make precise the concept of 'informational object'”, but the idea does not seem to have gained much traction since then. In fact, software development involves re-making the events, entities, and interactions of a real-world system (such as a bank, a university library, or a factory floor) using the abstractions and behaviours that a particular programming language offers. It is not so much a kind of recipe-writing as an exercise in applied metaphysics.
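To make the 'applied metaphysics' point concrete, here is a minimal sketch of modelling a slice of a university library with the three tools named above: abstraction (a shared interface), composition (a library built from items), and polymorphism (each item answering the same question in its own way). All class and method names are invented for illustration, not drawn from any real system.

```python
from abc import ABC, abstractmethod

class Item(ABC):
    """Abstraction: every loanable item shares this interface."""
    def __init__(self, title):
        self.title = title

    @abstractmethod
    def loan_period_days(self):
        ...

class Book(Item):
    def loan_period_days(self):
        return 28

class Journal(Item):
    def loan_period_days(self):
        return 7

class Library:
    """Composition: a library is built from the items it holds."""
    def __init__(self, items):
        self.items = list(items)

    def loan_periods(self):
        # Polymorphism: each item type answers in its own way.
        return {item.title: item.loan_period_days() for item in self.items}

lib = Library([Book("Ulysses"), Journal("Mind")])
print(lib.loan_periods())  # {'Ulysses': 28, 'Mind': 7}
```

The choice of which entities become classes and which behaviours become methods is exactly the metaphysical commitment the paragraph describes: the programmer decides what the world's objects and interactions are.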

The distributional concept analysis methods developed in the Concept Lab give us some insight into the forms of human concepts in particular communities at particular times, using statistical abstractions over their written records. Computer science textbooks are full of data structures and algorithms: forms designed for the efficient storage and propagation of information. Using the structures of computer science to instantiate concepts built from linguistic data is not a prescriptive pursuit; it is not a mission to discover the 'correct' data structure or functional specification to assign to a particular concept. Rather, as in descriptive linguistics, we can use the abstractions of computer science and statistics to understand the commonalities and variation in knowledge forms among communities.
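A toy sketch may help convey the distributional idea: a word's 'concept' is approximated by the statistics of the words that co-occur with it, so two words that never appear together can still resemble one another through shared contexts. The corpus, co-occurrence window, and similarity measure here are illustrative assumptions, not the Concept Lab's actual method.

```python
from collections import Counter
from itertools import combinations
import math

# Invented four-sentence corpus; each sentence is one co-occurrence window.
corpus = [
    "liberty requires law",
    "liberty requires reason",
    "tyranny fears law",
    "tyranny fears reason",
]

# Build a co-occurrence vector (a Counter) for every word.
cooc = {}
for sentence in corpus:
    words = sentence.split()
    for w in words:
        cooc.setdefault(w, Counter())
    for a, b in combinations(words, 2):
        cooc[a][b] += 1
        cooc[b][a] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in set(u) & set(v))
    norm = lambda c: math.sqrt(sum(x * x for x in c.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

# 'liberty' and 'tyranny' never co-occur, yet share the contexts
# 'law' and 'reason', so their similarity is non-zero.
print(round(cosine(cooc["liberty"], cooc["tyranny"]), 3))  # 0.333
```

The same counting-and-comparison pattern, scaled up to large historical corpora and refined measures of association, is the statistical abstraction the paragraph refers to.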

I hope to give a possible framing of our investigation into knowledge in these terms, using distributional semantics and the ontologies of computer science to present, compare, contrast, and visualise the elements of knowledge. Siskin (2020) draws our attention to how various thinkers have characterised knowledge as the ability to re-make the world in our own words, or to manipulate the world into a form that causes information to persist: “Knowledge is ultimately the re-making of what nature itself makes or has made” (Morrison); “What I cannot create, I do not understand” (Feynman); “knowledge is information which, when it is physically embodied in a suitable environment, tends to cause itself to remain so” (Deutsch). If we want to understand how particular ideas take hold, spread, and adapt, the data structures of computer science (networks, trees, lattices, objects) are forms designed not only for describing such knowledge but for instantiating it.
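One of the data structures named above can be sketched directly: a small network in which a breadth-first traversal traces the order in which an idea might reach connected communities. The nodes and edges are invented for the example; any real analysis would of course build the network from data.

```python
from collections import deque

# Hypothetical network of communities: node -> neighbours it can reach.
network = {
    "Paris": ["London", "Geneva"],
    "London": ["Edinburgh"],
    "Geneva": ["Naples"],
    "Edinburgh": [],
    "Naples": [],
}

def reach_order(start):
    """Breadth-first traversal: the order in which nodes are first reached."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbour in network.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return order

print(reach_order("Paris"))  # ['Paris', 'London', 'Geneva', 'Edinburgh', 'Naples']
```

The point of the sketch is that the network is not merely a picture of diffusion: traversal, reachability, and similar operations defined over the structure are themselves a way of instantiating claims about how ideas spread.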

My previous work relevant to these ideas began in distributional semantics before developing more of a focus on conceptual structures after I joined the Concept Lab. My PhD thesis, “Lexical Expressions of Semantic Relations Between Nouns”, investigates how surface-level expressions of relations between entities in English vary in their specificity, and proposes a new method for selecting, from given word co-occurrence counts, a relational expression at a desired level of granularity. The key emphasis is on moving easily between surface forms and abstract relations in a taxonomy. In the Concept Lab, I worked on interactive visualisations of word-association statistics, in particular network visualisations. In a solo-authored paper from this time, I show how these visualisations are particularly well suited to examining contrasting conceptions in rival political communities, following the political-philosophy framework of ‘essentially contested concepts’.

//

De Bolla, Peter, et al. “Distributional Concept Analysis.” Contributions to the History of Concepts 14.1 (2019): 66-92.

Nulty, Paul. “Network Visualisations for Exploring Political Concepts.” IWCS 2017 — 12th International Conference on Computational Semantics — Short Papers. ACL (2017).

Nulty, Paul, and Fintan J. Costello. “General and specific paraphrases of semantic relations between nouns.” Natural Language Engineering 19.3 (2013).

The Concept Lab Viewer.

Siskin, Clifford. “Enlightenment, Information, and the Copernican Delay: A Venture into the History of Knowledge.” History and Theory 59.4 (2020): 168-183.
