You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all matters related to this archive, contact the same person.
A few news items:
The second part of the UK's Safeguarded AI programme has apparently been canceled. This is the TA2 or "implementation" part of the project, which involves writing software. This is newsworthy because the TA1 or "research" part of the project employs most of the applied category theorists I know in the UK (see the list here). I don't know why the TA2 part has been canceled.
There's a meeting of the Safeguarded AI project members in Edinburgh from November 13th to 19th. I'll be meeting with Sophie Libkind and David Corfield, who are coming to this meeting.
Bob Coecke has left the quantum computing company Quantinuum. Again, I don't know why. He had headed the Oxford branch, which focuses on quantum natural language processing. (The Cambridge headquarters, led by Ross Duncan, focuses on more traditional aspects of quantum computing, some connected to the ZX calculus, a string-diagram language of interest to applied category theorists.)
TA2 was specifically targeted at producing ML software that the programme leads now believe will be available off the shelf sooner than expected. So, despite what it sounds like, this could be good news for the applied category theorists on the project: the (large!) resources that were allocated for hiring ML specialists (who, at least anecdotally, expect much larger paychecks than academic mathematicians) may be diverted to other parts of the programme.
Okay, that's very interesting. Thanks!
I know someone who was hoping to be involved in TA2. They're an academic, but universities were explicitly disallowed from involvement in TA2: only people working at non-university nonprofits were allowed to apply. So this person was hoping to work on TA2 as part of a nonprofit organization.
Do you know if TA3 is going forward? I see now that applications for this were due August 7, 2024.
Yes, TA3 is going forward. I've heard that there are some slight changes there too, because of the TA2 cancellation, but I know it has not been canceled. (I'm not involved in it though, so I don't know the details.)
Here's the program's official announcement about the pivot away from TA2, which just reinforces Amar's description:
Important Update: TA2 Funding Call
As a Programme team, our responsibility is to ensure our funding has the highest possible impact. This requires constantly re-evaluating our strategy against the rapidly changing technological landscape. In that spirit, we’re announcing a significant pivot for our programme. We will be redirecting our efforts away from Technical Area 2 (TA2) as originally planned and doubling down on expanding the ambition and scope of Technical Area 1 (TA1). Our conviction in Safeguarded AI's vision is unchanged. We continue to believe it's both critically important and possible.
When we designed this programme, the world looked different. Today, the pace of progress in frontier AI models has fundamentally altered the path to our goals. We now expect that the intended technical objectives of TA2 will be attainable as a side effect of this progress, without requiring a dedicated R&D organisation. Instead of investing in creating specialised AI systems that can use our tools, it will be more catalytic to broaden the scope and power of the TA1 toolkit itself, making it a foundational component for the next generation of AI.
Making a pivot like this is a difficult decision, especially given the hard work many have invested in positioning for TA2. However, our job is to steer the Programme towards the greatest possible long-term impact. We believe this change allows us to do exactly that.
Thank you for your interest in our work as we embark on this exciting new chapter.
The Safeguarded AI Programme team
And you can see the TA3 funded use cases here: https://www.aria.org.uk/opportunity-spaces/mathematics-for-safe-ai/safeguarded-ai/meet-the-creators?tabId=real-world-use-cases
Very interesting! When they (presumably David Dalrymple) say they'll be "doubling down on expanding the ambition and scope of Technical Area 1 (TA1)", does that mean they're injecting more money into existing TA1-funded organizations like Topos?
Those details remain up in the air, but I'll let him know about your suggestion. :innocent: