You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Someone has probably announced this funded PhD position somewhere already, but since I just saw it on Mathstodon I'll pass it on:
PhD opportunity: Coq Formalisation of Differentiable Logics
Hello, we are looking for a PhD student to work on a Coq formalisation of Differentiable Logics; the full details are below.
Vacancy: PhD in Computer Science
Title: Formal Verification of AI Interfaces
Advisors: Ekaterina Komendantskaya (Southampton University, UK), Alessandro Bruni (IT University of Copenhagen, Denmark), Reynald Affeldt (AIST, Japan)
Start Date: As soon as the right candidate is found
Location: Southampton University, UK, with collaborative visits to researchers at AIST, Japan, and the IT University of Copenhagen, Denmark.
Description: The field of computing faces a conundrum caused by a clash of two opposing trends: on the one hand, the growth and proliferation of machine learning (ML) in software; on the other, ever-growing concerns that, because ML models are a black-box technology, the safety, security, and explainability of software that uses ML diminish.

To address these concerns, we need tools and languages that can serve as safe interfaces to ML components. Such safe ML interfaces will allow users to specify the desired properties of ML models, train ML models to satisfy those properties, and verify that the properties do in fact hold in the final artifact. For example, one language that supports the safe-ML-interfaces approach is the Haskell DSL Vehicle [1], and one iconic application of safe ML interfaces is verifying autonomous car controllers [1].

At the moment, we use the Coq proof assistant and the recent MathComp-Analysis library to study the formalisation of Differentiable Logics, which allow one to specify certain safety properties of ML models and then compile them down into loss functions for training. We are looking for a PhD applicant with a keen interest in mathematics, logic, and/or Coq programming to join this team and extend the initial study of [2] to a richer language such as [1].

The conditions of this PhD funding come with no restrictions on nationality but assume that a successful PhD candidate will have a competitive CV.
Please forward this advertisement to any interested individuals, and address any questions to: e.komendantskaya@soton.ac.uk
[1] Matthew L. Daggitt, Wen Kokke, Robert Atkey, Natalia Slusarz, Luca Arnaboldi, Ekaterina Komendantskaya. Vehicle: Bridging the Embedding Gap in the Verification of Neuro-Symbolic Programs. CoRR abs/2401.06379, 2024.
[2] R. Affeldt, A. Bruni, E. Komendantskaya, N. Slusarz, and K. Stark. Taming Differentiable Logics with Coq Formalisation. In Interactive Theorem Proving (ITP) 2024.
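For readers unfamiliar with the idea: a differentiable logic translates a logical specification about a model's outputs into a real-valued loss that is zero exactly when the specification holds, so standard gradient-based training can push the model toward satisfying it. Here is a minimal Python sketch in the style of DL2, one family of differentiable logics; the function names and the clipping-based translation are illustrative assumptions, not the project's actual Coq development.

```python
# DL2-style translation of logical constraints into loss terms.
# A loss of 0.0 means the constraint is satisfied; positive values
# measure "how far" the constraint is from holding.

def loss_leq(a: float, b: float) -> float:
    # Atomic constraint "a <= b": penalise by the violation gap.
    return max(a - b, 0.0)

def loss_and(l1: float, l2: float) -> float:
    # Conjunction: both constraints must hold, so losses add.
    return l1 + l2

def loss_or(l1: float, l2: float) -> float:
    # Disjunction (DL2 uses a product): zero as soon as either side holds.
    return l1 * l2

def safety_loss(y: float) -> float:
    # Example property for a controller output y: -1 <= y <= 1.
    return loss_and(loss_leq(-1.0, y), loss_leq(y, 1.0))

print(safety_loss(0.5))   # in range: loss 0.0
print(safety_loss(1.5))   # out of range by 0.5: loss 0.5
```

Training would then minimise such a loss alongside the usual task loss; the formalisation work in [2] studies logics of this kind and their properties in Coq.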
Ekaterina Komendantskaya has some really interesting papers on machine learning in Coq.
I'm not familiar with Reynald Affeldt's publications, but he is one of the most prominent contributors to the Coq MathComp library, and you can probably talk to him on the Coq Zulip chat if you want to learn more about the post.
Some of Prof. Komendantskaya's papers that I happened to have read or bookmarked to read later:
"Recycling Proof Patterns in Coq; Case Studies"
"Proof-pattern recognition and lemma discovery in ACL2"
"ML4PG (Machine Learning for Proof General)"
"Machine Learning in Proof General (ML4PG): Interfacing Interfaces"
"Proof-relevant horn clauses for dependent type inference and term synthesis"
"ML4PG in Computer Algebra Verification"
(Proof General is an Emacs interface for proof assistants, including Coq.)