Keynote Speakers – ONTOBRAS 2025

Giancarlo Guizzardi
Full Professor / University of Twente

Full Professor at the University of Twente (Netherlands), working in the areas of Formal and Applied Ontology, Conceptual Modeling, and Information Systems Engineering. He was a professor at the Free University of Bozen-Bolzano (Italy), where he led the CORE research group, and is a co-founder of the NEMO group in Brazil. Author of around 400 publications, he frequently serves as keynote speaker, chair, and editor in major events and journals in the fields of Ontology and Computing. He is currently also a visiting professor at Stockholm University, researching value-based modeling and ethical aspects of information systems.

Talk: Explanation, Semantics and Ontology

Cyber-human systems are formed by the coordinated interaction of human and computational components. The latter are justified to the extent that they are meaningful to humans – in both senses of ‘meaning’, i.e., semantics and significance. Data manipulated by these components only acquire meaning when mapped to shared human conceptualizations. They can also only be justified if ethically designed. In this talk, I present a notion of explanation called “Ontological Unpacking,” which aims to explain symbolic domain descriptions (conceptual models, knowledge graphs, logical specifications). I argue that this explanatory nature is essential for semantic interoperability and, consequently, trustworthiness. I conclude by stating that symbolic artifacts in XAI are not inherently interpretable and should be seen as the beginning, not the end, of explanation.

John Beverley
Assistant Professor / University at Buffalo

Assistant Professor at the University at Buffalo and Co-Director of the National Center for Ontological Research. He works at the intersection of ontology, formal logic, machine learning, and knowledge graph engineering, with applications in infectious diseases, ethics in health, and applied epistemology. He holds a Ph.D. in Philosophy from Northwestern University and worked as a senior ontologist at the Johns Hopkins Applied Physics Laboratory. He has contributed to projects such as the Infectious Disease Ontology and works on the Basic Formal Ontology (BFO, ISO/IEC 21838-2).

Talk: Ontology Engineering Tradecraft

This talk concerns what I take to be the central tradecraft of ontology engineering: systematic disambiguation, a process of methodically justifying and explicitly representing the implicit structures found within and across sources. I defend systematic disambiguation as the key differentiator of ontology engineering as a discipline, distinguishing it from related fields such as data science, formal verification, and software engineering. I set this work against a backdrop of historical cycles in ontology engineering, comparing past booms (such as those involving expert systems and the Semantic Web) with the current surge of interest driven by advances in AI. I then demonstrate the efficacy of our tradecraft using Basic Formal Ontology, drawing on the long tradition of best practices developed within its ecosystem.

Jérémy Ravenel
Senior Advisor Data & AI Services / naas

Founder and CEO of naas.ai, where he leads the development of a universal data and AI platform focused on accessibility, interoperability, and applied ontologies. He also serves as Senior Advisor in Data & AI at Forvis Mazars Group and as a researcher in Applied Ontology at the University at Buffalo. He has extensive experience integrating ontologies with artificial intelligence, data engineering, and business processes. With executive training from Stanford University and a master’s in Corporate Finance, he has founded startups and contributed to projects connecting philosophy, technology, and ethics in AI systems.

Talk: Building Trust in AI: Semantic Interoperability and KGs to the Rescue of LLMs

As Large Language Models (LLMs) continue to evolve, their lack of grounding and transparency poses significant risks to trust, auditability, and enterprise adoption. Drawing on real-world insights from the development of the ABI (Agentic Brain Infrastructure) system and the Naas Universal Data & AI Platform, we present a multi-layered approach to trust: from personal assistants that capture individual intent and context, to domain-specific agents operating on structured ontologies like BFO. We show how semantic interoperability, enabled by shared vocabularies and logical consistency, acts as a communication layer between data, agents, and human workflows. The talk covers:

  • Why ontologies should serve as the configuration layer for AI systems.
  • How to architect a network of AI systems (Personal, Business, Institutional) using semantic flywheels.
  • The automation pipeline to construct personalized knowledge graphs from real-world data (e.g., LinkedIn profiles).
  • Why “trust” in AI is not a property, but an outcome of structured alignment.

We close by arguing for the adoption of semantic hypergraphs as the future architecture for scalable, auditable, and human-aligned AI systems interconnecting intent, identity, and impact across domains.
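The pipeline mentioned above, constructing personalized knowledge graphs from real-world data, can be illustrated with a minimal sketch. This is not the ABI system itself; all names and the profile shape below are hypothetical, chosen only to show the basic move from a record to subject-predicate-object triples:

```python
# Illustrative sketch (not the ABI system): flattening a profile-like
# record into (subject, predicate, object) triples, the basic unit of a
# personalized knowledge graph. All identifiers here are hypothetical.

def profile_to_triples(profile: dict) -> list[tuple[str, str, str]]:
    """Turn a flat profile record into a list of triples."""
    subject = profile["id"]
    triples = []
    for key, value in profile.items():
        if key == "id":
            continue
        if isinstance(value, list):
            # Multi-valued fields (e.g., skills) become one triple each.
            triples.extend((subject, key, v) for v in value)
        else:
            triples.append((subject, key, value))
    return triples

profile = {
    "id": "person:jane",
    "name": "Jane Doe",
    "worksFor": "org:acme",
    "skill": ["ontology engineering", "data modeling"],
}
triples = profile_to_triples(profile)
```

In a real pipeline, each predicate would then be aligned with a shared ontology (for instance, a BFO-based relation), which is what makes the resulting graphs semantically interoperable rather than just structured.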

Renata Wassermann
Associate Professor / University of São Paulo

She holds a B.Sc. in Computer Science from the University of São Paulo (1991), an M.Sc. in Applied Mathematics from the University of São Paulo (1995), a Ph.D. in Computer Science from the University of Amsterdam (1999), and a habilitation from the University of São Paulo (2005). She is currently an Associate Professor at the Department of Computer Science, Institute of Mathematics and Statistics, University of São Paulo. Her research area is Artificial Intelligence, with emphasis on Logic and Knowledge Representation. She is a member of the Logic, Artificial Intelligence and Formal Methods (LIAMF) group, a researcher at the Center for Artificial Intelligence (C4AI), and a member of the steering committee of Lawgorithm.

Talk: Repairing ontologies – the story so far

As we all know by now, crafting an ontology can be an extremely expensive endeavour in terms of domain experts' time. Several methods and tools have been designed to support the building process; the maintenance cycle of ontologies, however, has received less attention in the literature. In this talk, we will focus on the process of repairing an ontology, which can be triggered either by unwanted behavior or by a change in the domain. We will define specific goals of ontology repair and advocate the use of logic as a tool for verifying ontologies. We will also discuss how logical methods for repairing databases can be applied to ontologies.
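The core idea of logic-based repair can be sketched in a toy form: given a set of axioms and a consistency check, remove a minimum-size subset of axioms so that consistency is restored. The encoding below is entirely illustrative (real systems use description-logic reasoners, not string-encoded axioms), but the brute-force search mirrors the shape of the repair problem:

```python
# Toy illustration of ontology repair: axioms are simple strings encoding
# disjointness ("disjoint:A,B") and subsumption ("subclass:C,A") facts.
# This is a hypothetical encoding for demonstration, not a real reasoner.

from itertools import combinations

def consistent(axioms: set[str]) -> bool:
    """Inconsistent iff some class is a subclass of two disjoint classes."""
    disjoint = {tuple(sorted(a.split("disjoint:")[1].split(",")))
                for a in axioms if a.startswith("disjoint:")}
    parents_of: dict[str, set[str]] = {}
    for a in axioms:
        if a.startswith("subclass:"):
            child, parent = a.split("subclass:")[1].split(",")
            parents_of.setdefault(child, set()).add(parent)
    for parents in parents_of.values():
        for p, q in disjoint:
            if p in parents and q in parents:
                return False
    return True

def repair(axioms: set[str]) -> set[str]:
    """Remove a minimum-size set of axioms to restore consistency
    (brute force over removal candidates; fine for small examples)."""
    if consistent(axioms):
        return axioms
    for k in range(1, len(axioms) + 1):
        for removal in combinations(axioms, k):
            candidate = axioms - set(removal)
            if consistent(candidate):
                return candidate
    return set()

onto = {
    "disjoint:Dog,Cat",
    "subclass:Rex,Dog",
    "subclass:Rex,Cat",  # unwanted behavior: Rex cannot be both
}
fixed = repair(onto)  # drops one axiom, restoring consistency
```

Which axiom to drop is exactly where the interesting questions arise: several minimal repairs usually exist, and choosing among them requires the kind of goals and logical criteria the talk addresses.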