Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: practice: industry

Topic: Category theorists in AI


view this post on Zulip John Baez (Feb 17 2025 at 07:47):

Here's a draft of a blog article I want to write. What are the main other organizations using category theory for AI?

Category Theorists in AI

I avoid working on AI and machine learning, both because it's a hot topic, with all the usual problems that brings, and because these tools are mainly being used to make rich and powerful people richer and more powerful---with all the usual problems that brings.

However, I like to pay attention to how category theorists are getting jobs in the AI industry, and what they're doing---in part because I know and like many of these people, and in part because I wonder what they will do for AI, and what work on AI will do to them.

Here are some stories I've been following intermittently:

Symbolica

When Musk and his AI head Andrej Karpathy didn't listen to their engineer George Morgan's worry that current techniques in deep learning couldn't "scale to infinity and solve all problems," Morgan left Tesla and started a company called Symbolica to work on symbolic reasoning. The billionaire Vinod Khosla gave him $2 million in seed money. He began with an approach based on hypergraphs, but then these researchers wrote a position paper that pushed the company in a different direction:

Bruno Gavranović, Paul Lessard, Andrew Dudzik, Tamara von Glehn, João G. M. Araújo and Petar Veličković, Position: Categorical Deep Learning is an Algebraic Theory of All Architectures

Khosla liked the new direction and invested $30 million more in Symbolica. At Gavranović and Lessard's suggestion, Morgan hired category theorists including Dominic Verity and Neil Ghani.

But Morgan was never fully sold on category theory: he still wanted to pursue his hypergraph approach. After a while, the continued disagreements between Morgan and the category theorists took their toll. He fired some, even having one summarily removed from his office. Another resigned voluntarily. Due to nondisclosure agreements, these people no longer talk publicly about what went down.

So one moral for category theorists, or indeed anyone with a good idea: after your idea helps someone get a lot of money, they may be able to fire you.

Topos UK

David Dalrymple is running a program on "safeguarded AI" at the UK agency ARIA (the Advanced Research and Invention Agency), and he is very interested in using category theory for this purpose.

This program is funding a new branch of the Topos Institute at Oxford, called Topos UK. The original Topos Institute is a math research institute in Berkeley with a focus on category theory. Three young category theorists there---Sophie Libkind, David Jaz Myers and Owen Lynch---wrote a proposal called "Double categorical systems theory for safeguarded AI", which got funded by ARIA.

It looks like David Jaz Myers, Owen Lynch, Tim Hosgood, José Siquiera and Xiaoyan Li will be working at Topos UK.

Glaive

The Glasgow Lab for AI Verification or 'Glaive' is a non-profit limited company focusing on applying category theory to AI verification. The people working for it include Dylan Braithwaite, Jade Master, Zans Mihejevs, André Videla, Neil Ghani and Jules Hedges.

Like Topos UK, Glaive is supported by ARIA. I don't know much yet about what they're doing.

view this post on Zulip Kevin Carlson (Feb 17 2025 at 08:50):

There is also Jason Brown at Topos UK, but I don’t think he’s on this server.

view this post on Zulip David Corfield (Feb 17 2025 at 09:14):

The Safeguarded AI program is employing many more category theorists than that, or at least category-theoretic minded computer scientists. Even a category-theoretic philosopher!

view this post on Zulip David Corfield (Feb 17 2025 at 09:19):

You can look through the 'creators' on this page.

By the way, I gave a talk on the possible interest of philosophy and category-theoretic AI to each other: slides.

view this post on Zulip John Baez (Feb 17 2025 at 17:57):

Thanks, @David Corfield - I figured the Safeguarded AI program was bigger than what I've seen, but I hadn't figured out how to find out its full extent. Let me list some of the category theorists involved, from that 'creators' list:

view this post on Zulip John Baez (Feb 17 2025 at 17:57):

Axiomatic Theories of String Diagrams for Categories of Probabilistic Processes

Fabio Zanasi, University College London

This project aims to provide complete axiomatic theories of string diagrams for significant categories of probabilistic processes. Fabio will then use these theories to develop compositional methods of analysis for different kinds of probabilistic graphical models.
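To give a rough sense of what "categories of probabilistic processes" means here (this is a generic illustration, not code from Fabio's project): finite probabilistic processes form a category whose morphisms are row-stochastic matrices (Markov kernels), and composing processes is just matrix multiplication. String-diagram axiomatisations make this compositional structure the primary object of study.

```python
# Generic sketch: finite stochastic maps compose like matrices.
# This is the composition that a categorical treatment axiomatises;
# the names and example kernels below are made up for illustration.

def compose(f, g):
    """Compose kernels: (g after f)[i][k] = sum_j f[i][j] * g[j][k]."""
    return [[sum(f_ij * g[j][k] for j, f_ij in enumerate(row))
             for k in range(len(g[0]))]
            for row in f]

def is_stochastic(m, tol=1e-9):
    """Every row of a Markov kernel sums to 1."""
    return all(abs(sum(row) - 1.0) < tol for row in m)

f = [[0.7, 0.3], [0.2, 0.8]]   # kernel: 2 states -> 2 states
g = [[0.5, 0.5], [0.1, 0.9]]
h = compose(f, g)
assert is_stochastic(h)        # composition preserves stochasticity
```

The point of the diagrammatic axiomatics is to reason about such composites (and copying, discarding, conditioning) equationally, without computing the matrices.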

view this post on Zulip John Baez (Feb 17 2025 at 17:59):

Safety: Core Representation Underlying Safeguarded AI

Ohad Kammar, University of Edinburgh

Team: Justus Matthiesen; Jesse Sigal, University of Edinburgh

This project looks to design a calculus that utilises the semantic structure of quasi-Borel spaces, introduced in ‘A convenient category for higher-order probability theory‘. Ohad and his team will develop the internal language of quasi-Borel spaces as a “semantic universe” for stochastic processes, define syntax that is amenable to type-checking and versioning, and interface with other teams in the programme—either to embed other formalisms as sub-calculi in quasi-Borel spaces, or vice versa (e.g. for imprecise probability).

view this post on Zulip John Baez (Feb 17 2025 at 18:04):

Philosophical Applied Category Theory

David Corfield, Independent Researcher

This project plans to overcome the limitations of traditional philosophical formalisms by integrating interdisciplinary knowledge through applied category theory. In collaboration with other TA1.1 Creators, David will explore graded modal logic, type theory, and causality, and look to develop the conceptual tools to support the broader Safeguarded AI programme.

view this post on Zulip John Baez (Feb 17 2025 at 18:07):

Double Categorical Systems Theory for Safeguarded AI

David Jaz Myers, Topos Research UK

Team: Owen Lynch; Sophie Libkind; David Spivak, Topos Research UK; James Fairbanks, University of Florida

This project aims to utilise Double Categorical Systems Theory (DCST) as a mathematical framework to facilitate collaboration between stakeholders, domain experts, and computer aides in co-designing an explainable and auditable model of an autonomous AI system’s deployment environment. David + team will expand this modelling framework to incorporate formal verification of the system’s safety and reliability, study the verification of model-surrogacy of hybrid discrete-continuous systems, and develop serialisable data formats for representing and operating on models, all with the goal of preparing the DCST framework for broader adoption across the Safeguarded AI Programme.

view this post on Zulip John Baez (Feb 17 2025 at 18:17):

Monoidal Coalgebraic Metrics

Filippo Bonchi, University of Pisa

Filippo and team intend to establish a robust mathematical framework that extends beyond the metrics expressible in quantitative algebraic theories and coalgebras over metric spaces. By shifting from a Cartesian to a monoidal setting, this group will examine these metrics using algebraic contexts (to enhance syntax foundations) and coalgebraic contexts (to provide robust quantitative semantics and effective techniques for establishing quantitative bounds on black-box behaviours), ultimately advancing the scope of quantitative reasoning in these domains.

view this post on Zulip John Baez (Feb 17 2025 at 18:19):

Doubly Categorical Systems Logic: A Theory of Specification Languages

Matteo Capucci, Independent Researcher

This project aims to develop a logical framework to classify and interoperate the various logical systems created to reason about complex systems and their behaviours, guided by Doubly Categorical Systems Theory (DCST). Matteo's goal is to study the link between the compositional and morphological structure of systems and their behaviour, specifically in the way the logic pertaining to these two components works, accounting for both dynamic and temporal features. Such a path will combine categorical approaches to both logic and systems theory.

True Categorical Programming for Composable Systems

Jade Master + Zans Mihejevs, Glasgow Lab for AI Verification (GLAIVE)

Team: Andre Videla; Dylan Braithwaite, GLAIVE

This project intends to develop a type theory for categorical programming that enables encoding of key mathematical structures not currently supported by existing languages. These structures include functors, universal properties, Kan extensions, lax (co)limits, and Grothendieck constructions. Jade + team are aiming to create a type theory that accurately translates categorical concepts into code without compromise, and then deploy this framework to develop critical theorems related to the mathematical foundations of Safeguarded AI.
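To illustrate why "encoding key mathematical structures" is the hard part (a hypothetical sketch, not GLAIVE's type theory): in an ordinary language a functor is just an object-map plus a morphism-map, and the functor laws can only be checked on examples at runtime. A dedicated type theory would make such laws part of the definition and enforce them statically.

```python
# Hypothetical illustration: a functor encoded as a morphism-map
# (fmap), with the two functor laws checked pointwise on examples.
# A categorical programming language would enforce these laws by
# construction rather than by runtime assertion.

class ListFunctor:
    """The list functor: X maps to lists of X, f maps to map(f)."""
    @staticmethod
    def fmap(f):
        return lambda xs: [f(x) for x in xs]

fmap = ListFunctor.fmap
identity = lambda x: x

xs = [1, 2, 3]
# Law 1: fmap(id) == id, checked on an example
assert fmap(identity)(xs) == xs
# Law 2: fmap(g . f) == fmap(g) . fmap(f), checked on an example
f = lambda x: x + 1
g = lambda x: x * 2
assert fmap(lambda x: g(f(x)))(xs) == fmap(g)(fmap(f)(xs))
```

Structures like Kan extensions and Grothendieck constructions are harder still to encode faithfully, which is presumably why the project calls for a new type theory rather than a library.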

Employing Categorical Probability Towards Safe AI

Sam Staton, University of Oxford

Team: Pedro Amorim; Elena Di Lavore; Paolo Perrone; Mario Román; Ruben Van Belle; Younesse Kaddar; Jack Liell-Cock; Owen Lynch, University of Oxford

Sam + team will look to employ categorical probability toward key elements essential for world modelling in the Safeguarded AI Programme. They will investigate imprecise probability (which provides bounds on probabilities of unsafe behaviour), and stochastic dynamical systems for world modelling, and then look to create a robust foundation of semantic version control to support the above elements.
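As a rough illustration of the "imprecise probability" idea mentioned above (a generic sketch, not the Oxford team's code): instead of a single number, one tracks an interval bounding an unknown probability, and combines intervals so that the bounds remain sound. All names and numbers below are made up.

```python
# Generic sketch of interval-valued ("imprecise") probability:
# an event's probability is only known to lie in [lo, hi], and
# operations propagate sound bounds.

def union_disjoint(a, b):
    """Bounds on P(A or B) for disjoint events A, B."""
    return (min(a[0] + b[0], 1.0), min(a[1] + b[1], 1.0))

def and_independent(a, b):
    """Bounds on P(A and B) assuming independence."""
    return (a[0] * b[0], a[1] * b[1])

# Hypothetical example: per-step probability of unsafe behaviour is
# only known to lie between 0.1% and 1%.
unsafe_step = (0.001, 0.01)
two_steps = and_independent(unsafe_step, unsafe_step)
assert two_steps[1] <= 0.0001 + 1e-12   # upper bound on both steps unsafe
```

The safety-relevant point is that the upper bound is what one certifies against, even when the true probability is unknown.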

Syntax and Semantics for Multimodal Petri Nets

Amar Hadzihasanovic, Tallinn University of Technology

Team: Diana Kessler, Tallinn University of Technology

Amar + team will develop a combinatorial and diagrammatic syntax, along with categorical semantics, for multimodal Petri Nets. These Nets will model dynamical systems that undergo mode or phase transitions, altering their possible places, events, and interactions. Their goal is to create a category-theoretic framework for mathematical modelling and safe-by-design specification of dynamical systems and process theories which exhibit multiple modes of operation.
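For readers unfamiliar with Petri nets, here is a minimal toy version of the idea (a hypothetical illustration, not the project's syntax or semantics): places hold tokens, transitions consume and produce them, and a "mode" gates which transitions are enabled, loosely echoing the multimodal picture of systems whose possible events change with the current phase.

```python
# Hypothetical sketch: a Petri net as token counts on places, plus
# transitions tagged with the modes in which they may fire. All
# place, transition, and mode names below are invented.

marking = {"idle": 1, "busy": 0, "done": 0}

# transition name -> (tokens consumed, tokens produced, allowed modes)
transitions = {
    "start":  ({"idle": 1}, {"busy": 1}, {"running"}),
    "finish": ({"busy": 1}, {"done": 1}, {"running"}),
    "reset":  ({"done": 1}, {"idle": 1}, {"maintenance"}),
}

def enabled(name, marking, mode):
    pre, _, modes = transitions[name]
    return mode in modes and all(marking[p] >= n for p, n in pre.items())

def fire(name, marking, mode):
    assert enabled(name, marking, mode), f"{name} not enabled"
    pre, post, _ = transitions[name]
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] += n
    return m

m = fire("start", marking, "running")
m = fire("finish", m, "running")
assert not enabled("reset", m, "running")   # reset only fires in maintenance mode
```

A categorical treatment would replace this ad-hoc mode tagging with a principled syntax and semantics for nets whose structure itself changes across modes.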

view this post on Zulip Ryan Wisnesky (Feb 17 2025 at 18:42):

Is the list restricted to the statistical, machine learning kind of AI? Or can it be good old fashioned symbolic AI too? We claim to be an AI company, inasmuch as data integration can be reduced to computer algebra. "Generative symbolic AI" we call it: https://conexus.com/symbolic-generative-ai/. And sometimes we verify LLM outputs too!

view this post on Zulip John Baez (Feb 17 2025 at 18:44):

It's not restricted at all: I included Symbolica, which does symbolic AI.

view this post on Zulip Ryan Wisnesky (Feb 17 2025 at 18:47):

touché; despite their name I always think of them as an LLM company. I'm really asking for us to be included on the list; happy to provide a blurb.

view this post on Zulip John Baez (Feb 17 2025 at 19:01):

I'm going to publish the article in about 10 minutes, but I can keep expanding it so give me your blurb.