Keynote Speakers – ONTOBRAS 2025

Barry Smith
Director of NCOR – Distinguished Professor of Philosophy, Computer Science and Biomedical Informatics / SUNY

Barry Smith contributes to both theoretical and applied research in ontology. He has authored over 700 publications with more than 50,000 citations and an h-index of 108. His research has been funded by the National Institutes of Health, the US, Swiss, and Austrian National Science Foundations, the US Departments of Defense (DoD) and Homeland Security (DHS), the Humboldt and Volkswagen Foundations, and the European Union. He is Distinguished Julian Park Professor of Philosophy and also Professor of Biomedical Informatics and Computer Science at the University at Buffalo. He is also Director of the National Center for Ontological Research (NCOR).

Smith is the lead developer of the Basic Formal Ontology (BFO), a top-level ontology used in more than 600 initiatives worldwide and published as the international standard ISO/IEC 21838-2:2021. BFO is also part of the mandated ontology baseline of the US Department of Defense and the Office of the Director of National Intelligence, and is a DoD Joint Enterprise Standards Committee (JESC) standard.

His work led to the creation of the Open Biomedical Ontologies (OBO) Foundry, a set of widely used resources designed to support information-driven research in biology and biomedicine. He is one of the founders of the Industrial Ontologies Foundry (IOF) and also contributes to the ontology work of DHS, DoD, the DoD/Intelligence Community Ontology Working Group (DIOWG), and the Five Eyes Ontology Working Group (FOWG). Most recently, Smith co-authored, with the German AI entrepreneur and mathematician Jobst Landgrebe, the book Why Machines Will Never Rule the World, published by Routledge in 2023. An expanded second edition appeared in April 2025.

Online Talk: The History of Ontology from Aristotle to ISO/IEC 21838

Aristotle’s table of categories can justly claim to mark the beginnings of ontology in the modern (ontological engineering) sense, and his compilation of the constitutions of 158 Greek city states can justly claim to be the world’s first database. For 2000 years or so, nothing much happened until, in the 17th and 18th centuries, French and German philosophers dislodged ontology from its position as Queen of the Sciences in favour of epistemology. The next phase begins in the early 20th century with the coinage by Husserl of the term ‘formal ontology’, followed by important work on the ontologies of law, language, and psychology by Husserl’s students. From there I move on to Stanford in the 1970s, where, in the hands of Patrick Hayes and others, work on ‘naive ontology’ played an important role in the development of (good old-fashioned, logic-based) AI. I shall conclude with more recent developments in biomedicine, industry, and national security, spurred in no small part by the publication of BFO as an international standard top-level ontology in 2021.

Giancarlo Guizzardi
Full Professor / University of Twente

Full Professor at the University of Twente (Netherlands), working in the areas of Formal and Applied Ontology, Conceptual Modeling, and Information Systems Engineering. He was a professor at the Free University of Bozen-Bolzano (Italy), where he led the CORE research group, and is a co-founder of the NEMO group in Brazil. Author of around 400 publications, he frequently serves as keynote speaker, chair, and editor at major events and journals in the fields of Ontology and Computing. He is currently also a visiting professor at Stockholm University, researching value-based modeling and ethical aspects of information systems.

Talk: Explanation, Semantics and Ontology

Cyber-human systems are formed by the coordinated interaction of human and computational components. The latter are justified to the extent that they are meaningful to humans – in both senses of ‘meaning’, i.e., semantics and significance. Data manipulated by these components acquire meaning only when mapped to shared human conceptualizations, and the components themselves can be justified only if they are ethically designed. In this talk, I present a notion of explanation called “Ontological Unpacking,” which aims to explain symbolic domain descriptions (conceptual models, knowledge graphs, logical specifications). I argue that this explanatory nature is essential for semantic interoperability and, consequently, trustworthiness. I conclude by stating that symbolic artifacts in XAI are not inherently interpretable and should be seen as the beginning, not the end, of explanation.

John Beverley
Assistant Professor / University at Buffalo

Assistant Professor at the University at Buffalo and Co-Director of the National Center for Ontological Research. He works at the intersection of ontology, formal logic, machine learning, and knowledge graph engineering, with applications in infectious diseases, ethics in health, and applied epistemology. He holds a Ph.D. in Philosophy from Northwestern University and worked as a senior ontologist at the Johns Hopkins Applied Physics Laboratory. He has contributed to projects such as the Infectious Disease Ontology and works on the Basic Formal Ontology (BFO, ISO/IEC 21838-2).

Talk: Ontology Engineering Tradecraft

This talk concerns what I take to be the central tradecraft of ontology engineering: systematic disambiguation—a process of methodically justifying and explicitly representing the implicit structures found within and across sources. I defend systematic disambiguation as the key differentiator of ontology engineering as a discipline, distinguishing it from related fields such as data science, formal verification, and software engineering. I present this work against a backdrop of historical cycles of ontology engineering, comparing past booms (such as those involving expert systems and the semantic web) with the current surge in interest driven by advances in AI. I then demonstrate the efficacy of this tradecraft using Basic Formal Ontology, drawing on the long tradition of best practices developed within its ecosystem.

Jérémy Ravenel
Senior Advisor Data & AI Services / naas

Founder and CEO of naas.ai, where he leads the development of a universal data and AI platform focused on accessibility, interoperability, and applied ontologies. He also serves as Senior Advisor in Data & AI at Forvis Mazars Group and as a researcher in Applied Ontology at the University at Buffalo. He has extensive experience integrating ontologies with artificial intelligence, data engineering, and business processes. With executive training from Stanford University and a master’s in Corporate Finance, he has founded startups and contributed to projects connecting philosophy, technology, and ethics in AI systems.

Talk: Building Trust in AI: Semantic Interoperability and KGs to the Rescue of LLMs

As Large Language Models (LLMs) continue to evolve, their lack of grounding and transparency poses significant risks to trust, auditability, and enterprise adoption. Drawing on real-world insights from the development of the ABI (Agentic Brain Infrastructure) system and the Naas Universal Data & AI Platform, we present a multi-layered approach to trust: from personal assistants that capture individual intent and context, to domain-specific agents operating on structured ontologies like BFO. We show how semantic interoperability, enabled by shared vocabularies and logical consistency, acts as a communication layer between data, agents, and human workflows. The talk addresses:

  • Why ontologies should serve as the configuration layer for AI systems.
  • How to architect a network of AI systems (Personal, Business, Institutional) using semantic flywheels.
  • The automation pipeline for constructing personalized knowledge graphs from real-world data (e.g., LinkedIn profiles), as sketched below.
  • Why “trust” in AI is not a property, but an outcome of structured alignment.

We close by arguing for the adoption of semantic hypergraphs as the future architecture for scalable, auditable, and human-aligned AI systems interconnecting intent, identity, and impact across domains.
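To make the knowledge-graph construction step above concrete, the sketch below (an illustration only, not code from the ABI system or the Naas platform) shows how profile-like records can be expressed against shared identifiers from BFO and the Relation Ontology using the Python rdflib library; the example namespace, the individuals, and the specific class and relation choices are assumptions made for this example.

    # Minimal, hypothetical sketch: a tiny BFO-aligned knowledge graph built
    # from profile-like data. Class/relation IRIs follow the OBO releases of
    # BFO and RO; the ex: namespace and individuals are invented for illustration.
    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    OBO = Namespace("http://purl.obolibrary.org/obo/")
    EX = Namespace("https://example.org/kg/")   # hypothetical application namespace

    g = Graph()
    g.bind("obo", OBO)
    g.bind("ex", EX)

    person = EX["person/jane-doe"]
    career_event = EX["process/jane-doe-joined-acme"]

    g.add((person, RDF.type, OBO.BFO_0000040))        # BFO: material entity
    g.add((person, RDFS.label, Literal("Jane Doe")))
    g.add((career_event, RDF.type, OBO.BFO_0000015))  # BFO: process
    g.add((career_event, RDFS.label, Literal("Jane Doe joined ACME as data engineer")))
    g.add((career_event, OBO.RO_0000057, person))     # RO: has participant

    print(g.serialize(format="turtle"))

Because the types and relations resolve to shared, publicly defined IRIs, graphs produced by different agents can be merged and queried together, which is the interoperability point made above.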

Renata Wassermann
Associate Professor / University of São Paulo

She holds a B.Sc. in Computer Science from the University of São Paulo (1991), an M.Sc. in Applied Mathematics from the University of São Paulo (1995), a Ph.D. in Computer Science from the University of Amsterdam (1999), and a habilitation from the University of São Paulo (2005). She is currently an Associate Professor at the Department of Computer Science, Institute of Mathematics and Statistics, University of São Paulo. Her research area is Artificial Intelligence, with emphasis on Logic and Knowledge Representation. She is a member of the Logic, Artificial Intelligence and Formal Methods (LIAMF) group, a researcher at the Center for Artificial Intelligence (C4AI), and a member of the steering committee of Lawgorithm.

Talk: Repairing Ontologies – The Story So Far

As we all know by now, crafting an ontology can be an extremely expensive endeavour in terms of domain experts’ time. Several methods and tools have been designed to support the building process. However, the maintenance cycle of ontologies has received less attention in the literature. In this talk, we will focus on the process of repairing an ontology, which can be triggered either by unwanted behavior or by a change in the domain. We will define specific goals of ontology repair and advocate for the use of logic as a tool to verify ontologies. We will also discuss how logical methods developed for repairing databases can be applied to ontologies.
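As a toy illustration of the repair process described above (not Prof. Wassermann’s own methods or tooling), the sketch below searches for minimal sets of axioms whose removal restores consistency; the axiom strings and the stand-in consistency check are invented for the example, and a real setting would delegate that check to a description-logic reasoner.

    # Brute-force ontology repair sketch: find the smallest sets of axioms whose
    # removal makes the ontology consistent. 'is_consistent' stands in for a reasoner.
    from itertools import combinations

    def repairs(axioms, is_consistent, max_size=2):
        """Return minimal removal sets (up to max_size) that restore consistency."""
        if is_consistent(axioms):
            return []                              # nothing to repair
        found = []
        for k in range(1, max_size + 1):
            for removal in combinations(axioms, k):
                remaining = [a for a in axioms if a not in removal]
                if is_consistent(remaining):
                    found.append(set(removal))
            if found:                              # stop at the smallest size that works
                break
        return found

    # Toy ontology: the classic "penguins don't fly" clash.
    axioms = ["Bird ⊑ Flies", "Penguin ⊑ Bird", "Penguin ⊑ ¬Flies", "Penguin(tweety)"]

    def is_consistent(axs):
        # Stand-in for a reasoner: the clash requires all four axioms to be present.
        clash = {"Bird ⊑ Flies", "Penguin ⊑ Bird", "Penguin ⊑ ¬Flies", "Penguin(tweety)"}
        return not clash.issubset(set(axs))

    print(repairs(axioms, is_consistent))          # removing any one axiom resolves the clash

Each removal set returned corresponds to one candidate repair; choosing among them is exactly where domain expertise, and the goals of repair discussed in the talk, come in.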

Aaron Damiano
Ontologist – Systems Analyst / U.S. Customs and Border Protection

With over twenty years of experience in application development, Aaron Damiano specializes in creating enterprise-grade systems that bridge traditional software engineering with next-generation semantic technologies. He currently serves as an Ontologist and Systems Analyst with U.S. Customs and Border Protection (CBP), where he designs and implements knowledge graphs, ontologies, and semantic models to improve interoperability, search, and analytics across large-scale environments. Aaron also founded Fandaws.com, a platform that applies semantic technologies to help organizations structure, connect, and enrich their data for real-world impact.

Ontology Demo

This demo will introduce participants to the use of Fandaws (Fact and Answer Web Service), an interactive ontology-building platform powered by Basic Formal Ontology (BFO). It provides a natural language interface that enables users to create, explore, and refine ontologies simply by answering guided questions. Leveraging BFO’s ontological principles, Fandaws automatically determines how new terms and concepts fit into its knowledge structure. The resulting ontology can be visualized through an intuitive graph view, exported for reuse, or integrated directly via API. Fandaws has also been linked to multiple online dictionaries and has successfully processed more than 100,000 terms, demonstrating its scalability and adaptability. Participants will gain an understanding of how to incorporate Fandaws into their own ontology-construction workflows.