I think this is the right stream for this. Does anyone know of any work being done in applied category theory and dynamic programming? There are a few older works I have read: one by Clement and one by De Moor.
Bertsekas has a project and text called "Abstract Dynamic Programming" which to my naive eye looks like it is one step from being categorized (categorified? categorialized? ...). http://www.athenasc.com/abstractdp.html
But I have not come across anything explicit in the ACT arena.
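To make the Bertsekas reference a bit more concrete: "abstract DP" studies the Bellman operator as a self-map on a space of value functions, with the optimal value function as its fixed point. Here is a minimal, made-up finite example (the state space, costs, and all names are invented for illustration, not taken from the book) showing value iteration as repeated application of that operator:

```haskell
-- A made-up finite instance (not from the book): the Bellman operator as a
-- map on value functions, with value iteration as repeated application.
type State = Int

states :: [State]
states = [0 .. 4]

actions :: [Int]
actions = [-1, 1]                       -- move left or right on the line

step :: State -> Int -> State
step s a = max 0 (min 4 (s + a))        -- stay inside 0..4

cost :: State -> Int -> Double
cost s _ = fromIntegral s               -- made-up stage cost: being far right is expensive

discount :: Double
discount = 0.9

-- One application of the Bellman operator T to a value function,
-- represented as a list indexed by state.
bellman :: [Double] -> [Double]
bellman v =
  [ minimum [ cost s a + discount * (v !! step s a) | a <- actions ]
  | s <- states ]

-- Value iteration: iterate T from the zero function; the optimal value
-- function is the fixed point of T.
main :: IO ()
main = print (iterate bellman (replicate (length states) 0) !! 100)
```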
Kyle Wilkinson said:
I think this is the right stream for this. Does anyone know of any work being done in applied category theory and dynamic programming? There are a few older works I have read: one by Clement and one by De Moor.
Bertsekas has a project and text called "Abstract Dynamic Programming" which to my naive eye looks like it is one step from being categorized (categorified? categorialized? ...). http://www.athenasc.com/abstractdp.html
But I have not come across anything explicit in the ACT arena.
https://arxiv.org/abs/2302.05575
JR said:
Kyle Wilkinson said:
I think this is the right stream for this. Does anyone know of any work being done in applied category theory and dynamic programming? [...]
https://arxiv.org/abs/2207.06091 also this paper :)
Here's a paper by @Jules Hedges and Riu Rodríguez Sakamoto:
https://arxiv.org/abs/2206.04547
Thanks for the papers. I'll admit the first two are a little outside my "casual perusal" range.
The third, by @Jules Hedges, I understand better. However, it looks like, although the title says dynamic programming, the real thrust is a generalization toward reinforcement learning. In modern practice that is of course fully justified, but I'm working with an old variant of deterministic dynamic programming.
But if I understand it correctly, "ordinary" dynamic programming can be expressed as a lens, while introducing stochasticity requires leveling up to the more general optic. Lenses are a lot more intuitive to me, so I would like that to be correct.
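As a rough illustration of that "lens shape" (my own simplification, not the construction from the Hedges and Rodríguez Sakamoto paper): a deterministic DP stage can be presented as a forward, get-like map on states together with a backward, put-like map that pulls a value function back one stage, and backward induction is then just composition of the backward parts. All names (`Stage`, `mkStage`, `backwardInduction`) and the toy problem are invented for this sketch:

```haskell
-- A rough sketch of the "lens shape" of deterministic DP (a simplification,
-- not the construction from the paper).  A stage has a forward (get-like)
-- map on states and a backward (put-like) map that pulls a value function
-- on next states back to current states.  Backward induction is then just
-- composition of the backward maps.
type Value s = s -> Double

data Stage s a = Stage
  { forwardStep  :: s -> a -> s          -- play an action, move the state forward
  , backwardStep :: Value s -> Value s   -- propagate values one stage backward
  }

-- Build a stage from dynamics and a one-step cost, optimising over actions.
mkStage :: [a] -> (s -> a -> s) -> (s -> a -> Double) -> Stage s a
mkStage acts next cost = Stage
  { forwardStep  = next
  , backwardStep = \v s -> minimum [ cost s a + v (next s a) | a <- acts ]
  }

-- Finite-horizon backward induction: fold the backward maps over the stages.
-- (No memoisation, so only sensible for a tiny horizon; it is a sketch.)
backwardInduction :: [Stage s a] -> Value s -> Value s
backwardInduction stages terminal = foldr backwardStep terminal stages

-- Toy usage: three identical deterministic stages on states 0..4.
main :: IO ()
main =
  let stage = mkStage [-1, 1]
                      (\s a -> max 0 (min 4 (s + a)))
                      (\s _ -> fromIntegral (s :: Int))
  in print (backwardInduction [stage, stage, stage] (const 0) 2)
```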
In addition to the suggestions above, I can add two papers:
Recursion Schemes for Dynamic Programming and
Graph Neural Networks are Dynamic Programmers
Even though a lot of the theorems/definitions are left implicit in the latter paper, I find it quite interesting, as it models the Bellman-Ford algorithm using indexed containers.
There are probably many connections waiting to be established to dependent lenses and to a lot of the above-mentioned literature.
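For reference, here is the plain Bellman-Ford recurrence that the indexed-container modelling is about, as a standalone sketch: d_{k+1}(v) = min(d_k(v), min over edges (u, v, w) of d_k(u) + w), iterated for |V| - 1 relaxation rounds. This is just the classical algorithm, not the paper's categorical formulation, and all names here are made up:

```haskell
-- Plain Bellman-Ford (the classical recurrence, not the paper's
-- indexed-container formulation): distance estimates are improved by
-- repeated relaxation of every edge; |V| - 1 rounds suffice when there
-- are no negative cycles.
import qualified Data.Map as M

type Vertex = Int
type Edge   = (Vertex, Vertex, Double)   -- (from, to, weight)
type Dist   = M.Map Vertex Double        -- missing key = "infinite" distance

-- One synchronous relaxation round:
-- d'(v) = min(d(v), min over edges (u, v, w) of d(u) + w).
relax :: [Edge] -> Dist -> Dist
relax edges d = foldr relaxEdge d edges
  where
    relaxEdge (u, v, w) acc =
      case M.lookup u d of
        Nothing -> acc                              -- u not yet reachable
        Just du -> M.insertWith min v (du + w) acc  -- keep the smaller estimate

bellmanFord :: Int -> [Edge] -> Vertex -> Dist
bellmanFord numVertices edges source =
  iterate (relax edges) (M.singleton source 0) !! (numVertices - 1)

main :: IO ()
main = print (bellmanFord 4 [(0,1,1), (1,2,2), (0,2,5), (2,3,1)] 0)
```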