- World Logic Day | Talks at NOVA LINCS
- Feb 2023
On November 18, 2019, UNESCO proclaimed January 14 as World Logic Day: "The ability to think is one of the most defining features of humankind. ... Logic, as the investigation on the principles of reasoning, has been studied by many civilizations throughout history and, since its earliest formulations, logic has played an important role in the development of philosophy and the sciences."
Audrey Azoulay, Director-General of UNESCO, says:
"At the dawn of this new decade – indeed, now more than ever – the discipline of logic is utterly vital to our societies and economies. Computer science and digital technology, which provide the structure for today’s ways of life, are rooted in logical and algorithmic reasoning. Artificial intelligence (AI), the unprecedented progress of which constitutes a technological and even anthropological revolution, is itself founded on logical reasoning. Through (...) the first global standard-setting instrument concerning the ethics of AI, UNESCO has undertaken to establish an ethical framework for this innovative product of logic."
To celebrate the occasion at NOVA LINCS, in line with the statements above, we will host two talks: one on Logic, Computation, and Programming Languages, and another on Logic-based Explanations for Neural Networks. As it happens, the talks will be given, respectively, by the Director of NOVA LINCS and by the Head of the Department of Computer Science.
February 22: Logic, Computation, and Programming Languages
We will briefly survey how the interweaving threads of prolific research on logic, computation, and programming languages have developed over the last 100 years or so, and how they remain very active today, leading to the design of programming languages for practical concurrent systems in which programs never crash, deadlock, or livelock, and are themselves the proofs of their own correctness. This will be a high-level, dissemination-style talk, accessible to a broad audience.
March 8: Logic-based Explanations for Neural Networks
Neural networks have been key to solving a variety of problems. However, neural network models are still regarded as black boxes, since they provide no human-interpretable evidence for why they output a certain result. In this talk, we will explore a procedure for inducing human-understandable, logic-based theories that attempt to represent the classification process of a given neural network model. The procedure is based on establishing mappings from the values of the activations produced by the neurons of that model to human-defined concepts, which are then used in the induced logic-based theory. Through a series of experiments, we discuss how to map the internal state of a neural network to human-defined concepts, examine whether the results obtained through the established mappings match our understanding of the mapped concepts, and analyse the fidelity of the resulting theory and how it can be used to generate symbolic justifications for the output of neural network models.
This work was carried out in collaboration with Manuel de Sousa Ribeiro, João Ferreira, and Ricardo Gonçalves.
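The core idea of the abstract above, mapping neuron activations to human-defined concepts and then phrasing justifications over those concepts, can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not the authors' actual method or code: the toy activations, the concept names (`has_stripes`, `has_spots`), and the simple threshold "probes" are all assumptions made for the example.

```python
# Hypothetical sketch: map hidden-layer activations to human-defined concepts
# via per-concept threshold probes, then report which concepts hold for an
# input as a symbolic justification. All names and data here are illustrative.

# Toy activations of two hidden neurons for four inputs,
# with human-annotated concept labels for each input.
activations = [
    [0.9, 0.1],  # input 0
    [0.8, 0.2],  # input 1
    [0.1, 0.7],  # input 2
    [0.2, 0.9],  # input 3
]
concept_labels = {            # concept -> truth value per input
    "has_stripes": [1, 1, 0, 0],
    "has_spots":   [0, 0, 1, 1],
}

def fit_threshold_probe(acts, labels):
    """Pick the (neuron, threshold) pair whose sign best predicts the concept."""
    best = None
    for neuron in range(len(acts[0])):
        values = sorted(a[neuron] for a in acts)
        for i in range(len(values) - 1):
            thr = (values[i] + values[i + 1]) / 2
            preds = [1 if a[neuron] > thr else 0 for a in acts]
            acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
            if best is None or acc > best[0]:
                best = (acc, neuron, thr)
    return best  # (accuracy, neuron index, threshold)

probes = {c: fit_threshold_probe(activations, y) for c, y in concept_labels.items()}

def explain(x):
    """Symbolic justification: the concepts that hold for activation vector x."""
    return [c for c, (_, n, t) in probes.items() if x[n] > t]
```

Under these toy data, `explain([0.85, 0.15])` would report `["has_stripes"]`: the justification is stated in human-defined concepts rather than raw neuron values. A real instantiation would replace the threshold probes with learned mappings and combine the concepts into an induced logic-based theory, as discussed in the talk.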