ISSAI - New paths for Intelligence
- Lectures

Matteo Valleriani

(Max Planck Institute for the History of Science, Berlin
Technische Universität Berlin
Tel Aviv University)

Short bio:

Matteo Valleriani is Research Group Leader in Dept. I at the Max Planck Institute for the History of Science, Honorary Professor at the Technische Universität Berlin, Professor for Special Appointments at the Faculty of Humanities at Tel Aviv University, and Principal Investigator of the Project “Images and Configurations in Corpora of University Textbooks” at the Berlin Center for Machine Learning.

In his research, he investigates processes of 1) emergence of scientific knowledge in relation to its practical, social, and institutional dimensions, and 2) homogenization of scientific knowledge in the framework of Cultural Heritage Studies.

Centering on cosmological knowledge, Matteo Valleriani’s current major research project is concerned with the evolution of the scientific knowledge system and with the establishment of a shared scientific identity in Europe in the period between the thirteenth and the seventeenth centuries. In the frame of this project he also co-develops and implements multi-layered network models.

A further focus of his research is on the epistemic function of visual material in scientific research and in the framework of processes of knowledge transformation. Within this context he co-develops and applies machine learning technologies.

As leader of the working group “The Structures of Practical Knowledge” he investigated the epistemic mechanisms that integrated practical knowledge and its codification into abstract structures of scientific knowledge during the early modern period (The Structures of Practical Knowledge, Springer Nature, 2017).


Early Modern Mathematically Hardcoded Historical Reasoning:
Is It Relevant for the Development of Artificial Intelligence?


This lecture presents the ongoing development of a method to structure and model data in order to detect the diffusion of innovations and processes of homogenization of scientific knowledge in history. The data are extracted from a collection of over 350 textbooks, produced between 1472 and 1650, and used in the introductory astronomy course at all European universities. Three conceptually distinct kinds of data are modeled together: data concerning the scientific content of the treatises, data concerning their materiality, and data concerning the social actors involved in the production and marketing of the books.

First, the historical project will be briefly introduced. Its aim is to explain the mechanisms that transformed natural philosophy into modern science and gave the latter the role of paradigm for a shared scientific identity in Europe at the turn of the seventeenth century. An overview will be given of the methods of data extraction, the ontology of the dataset, and the structure of the data. Finally, after showing how the dataset can be seen as a complex system modeled as a multilayer network, it will be discussed how the model can be interpreted as a reasoning scheme in the frame of artificial intelligence.
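The three kinds of data described in the abstract can be pictured as layers of a multilayer network: each node belongs to one layer (content, materiality, or actors), and edges either link entities within a layer or couple entities across layers. The following is a minimal Python sketch of that structure; the class, the entity names, and the example edges are illustrative placeholders, not the project's actual data model.

```python
# Hypothetical sketch of a multilayer network: nodes are (layer, entity)
# pairs, and edges either stay within one layer or couple two layers.
# All entity names below are invented for illustration.

class MultilayerNetwork:
    def __init__(self):
        self.nodes = set()   # (layer, entity) pairs
        self.edges = set()   # frozensets of two nodes

    def add_node(self, layer, entity):
        self.nodes.add((layer, entity))

    def add_edge(self, u, v):
        self.edges.add(frozenset((u, v)))

    def layer(self, name):
        # All nodes belonging to one layer.
        return {n for n in self.nodes if n[0] == name}

net = MultilayerNetwork()

# Layer 1: scientific content (e.g. text parts shared across treatises)
net.add_node("content", "sphaera_commentary")
net.add_node("content", "computational_tables")
# Layer 2: materiality (e.g. individual printed editions)
net.add_node("materiality", "edition_1482")
net.add_node("materiality", "edition_1538")
# Layer 3: social actors (e.g. printers and publishers)
net.add_node("actors", "printer_A")
net.add_node("actors", "publisher_B")

# Intra-layer edge: two editions reuse the same text part layout
net.add_edge(("materiality", "edition_1482"),
             ("materiality", "edition_1538"))
# Inter-layer edges: an edition carries a content part;
# an actor produced an edition
net.add_edge(("materiality", "edition_1482"),
             ("content", "sphaera_commentary"))
net.add_edge(("actors", "printer_A"),
             ("materiality", "edition_1482"))
```

Diffusion of an innovation can then be traced as a path through such a network: a content node spreading across materiality nodes, mediated by actor nodes.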

Other lectures:

Keynote lecturer:

(University of Lisbon)

Other lectures (confirmed):

(Catalan Institute for Advanced Studies (ICREA)) | (University of Granada) | (Max Planck Institute for the History of Science) | (University of Minho) | (Critical Software) | (University College Dublin)