Category Theory Zulip Server Archive



Stream: community: positions

Topic: Infrabayesianism


Morgan Rogers (he/him) (Mar 24 2022 at 08:08):

[Not category theory]
Needless to say, AI has a lot of hype around it. However, the research community in the subfield of AI Safety is rather fragmented, and as a result a few very advanced mathematical frameworks in the field are currently accessible only to a small number of people. I heard yesterday that one such program is seeking to remedy this by creating pedagogical resources (a textbook, exercises...) for the wider community working on or thinking about AI Safety. Here's the job advertisement, but it doesn't include much detail about what the topic actually is, so here's the authors' introductory post on the Alignment Forum. To put it briefly, infrabayesianism is an extension of Bayesian decision theory.
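Very roughly, and with notation that's mine rather than theirs: where a Bayesian agent picks a policy $\pi$ maximizing expected utility under a single prior $P$, an infra-Bayesian agent works with a convex set $\mathcal{P}$ of (sub)probability measures over environments and maximizes the worst case over that set,

$$\pi^{\ast} \in \operatorname*{arg\,max}_{\pi} \; \inf_{P \in \mathcal{P}} \mathbb{E}_{P,\pi}[u],$$

which is how it tries to cope with environments a single prior can't represent. The linked post has the actual definitions.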

I am not personally invested in this, but I am interested in getting to know the theory better eventually, so I wanted to spread the word about the job just in case. I also think that categorically-minded people might have enough insight to identify any flaws in the abstractions the framework's authors use along the way.