Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: deprecated: neuroscience

Topic: general


view this post on Zulip Johannes Drever (Apr 14 2020 at 17:32):

Hello @Rongmin Lu @Duncan Mortimer @Toby Smithe and @Daniel Geisler. I'll move the discussion to a new stream; hope that works out fine. I thought about moving this to #general, but people can more easily choose to subscribe to or unsubscribe from streams than topics.

There actually is some work on category theory and cognitive neuroscience. Phillips and Wilson use category theory to explain the systematicity of human cognition. Andrée Ehresmann built a bio-inspired model for higher-level cognitive systems, and Piaget worked on category theory in his later years. There is also some work on autopoiesis by Varela and Nomura. I haven't worked through all these papers and would be happy for some discussion as motivation to do so. The Ehresmann model is what I'm interested in right now.

view this post on Zulip Daniel Geisler (Apr 15 2020 at 05:04):

One line of attack is to consider natural transformations as related to mental abstraction. Maybe the beauty of CT comes from the brain going to a low energy level when programmed with CT.

view this post on Zulip Daniel Geisler (Apr 15 2020 at 05:26):

I'm suggesting that CT might naively be modeling cognitive/neural structures creating a feeling of harmony and beauty. I study this from both directions. I have a background in psychology as well as computer science. I've been involved in brain wave experiments. I also got into yoga meditation fifty years ago to study meta-cognition.

view this post on Zulip Johannes Drever (Apr 15 2020 at 07:52):

@Rongmin Lu , great survey paper on neuroscience! My favorite sections (from skimming) are "Active Learning" and "Cost Functions That Approximate Properties of the World". In "Active Learning" they discuss how a gradient may be optimally chosen in an unknown environment. One idea is to "choose actions that will lead to the largest improvement in its prediction or data compression performance". Additionally, they suggest priors or heuristics such as novelty, other people's attention, or the organism's developmental state.
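A minimal toy sketch of that quoted heuristic, with an invented environment (all names and numbers here are illustrative, not from the survey): the agent probes whichever noisy source it has sampled least, since the variance of its running-mean prediction shrinks like 1/n, so that is where the expected improvement in prediction is largest.

```python
import numpy as np

# Toy active-learning loop: "choose actions that will lead to the largest
# improvement in its prediction ... performance". Expected improvement is
# proxied by 1/(n+1), the rate at which the running-mean variance shrinks.

rng = np.random.default_rng(0)
true_means = np.array([1.0, 3.0, -2.0])   # hidden environment (invented)
estimates = np.zeros(3)                    # agent's current predictions
counts = np.zeros(3)

for _ in range(300):
    a = int(np.argmax(1.0 / (counts + 1.0)))          # largest expected gain
    obs = true_means[a] + rng.normal(scale=0.1)       # noisy observation
    counts[a] += 1
    estimates[a] += (obs - estimates[a]) / counts[a]  # update running mean

print(np.allclose(estimates, true_means, atol=0.2))   # True
```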

The "Cost Functions That Approximate Properties of the World" section explores how statistical properties of the world may lead to specific properties of the nervous system. Olshausen and Field developed a model which employs a sparseness prior on the internal representation. When applied to natural images, the learned feature detectors look a lot like receptive fields in the visual cortex.

@Brendan Fong's Backprop as a functor is surely an exciting paper. I heard that @Bruno Gavranovic is working in a similar direction. The "lens pattern" in general seems to pop up in a lot of situations.

view this post on Zulip Johannes Drever (Apr 15 2020 at 07:59):

Daniel Geisler said:

I'm suggesting that CT might naively be modeling cognitive/neural structures creating a feeling of harmony and beauty. I study this from both directions. I have a background in psychology as well as computer science. I've been involved in brain wave experiments. I also got into yoga meditation fifty years ago to study meta-cognition.

Schmidhuber developed a theory of subjective beauty and coupled it with curiosity. I would be very interested to connect these ideas with neurophenomenology à la Varela. "Sleeping, Dreaming and Dying" was an extraordinary attempt to relate subjective notions to hard science.

view this post on Zulip Bruno Gavranović (Apr 15 2020 at 09:25):

Johannes Drever said:

Brendan Fong's Backprop as a functor is surely an exciting paper. I heard that Bruno Gavranovic is working in a similar direction. The "lens pattern" in general seems to pop up in a lot of situations.

Yes - I'm working in a direction relying heavily on a generalization of lenses called optics to model these bidirectional data transformers, which could be said to have agency. The idea is, yeah, to use this "lens pattern" to talk about bidirectional data transformation with "one extra leg" which accounts for the internal state of these learners. See http://events.cs.bham.ac.uk/syco/strings3-syco5/papers/dalrymple.pdf for a good intro
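A minimal sketch of this "lens pattern" (a toy in plain Python, not the optics formalism of the linked paper): a forward map paired with a backward map, where sequential composition runs the forward passes left-to-right and the backward passes right-to-left. The example lenses and numbers are invented.

```python
from dataclasses import dataclass
from typing import Any, Callable

# A lens pairs a forward pass (`get`) with a backward pass (`put`) that
# sends feedback upstream, re-using the original input.

@dataclass
class Lens:
    get: Callable[[Any], Any]         # forward: input -> output
    put: Callable[[Any, Any], Any]    # backward: (input, feedback) -> result

    def __rshift__(self, other: "Lens") -> "Lens":
        """Sequential composition: forward chains left-to-right, backward
        chains right-to-left through the stored input."""
        return Lens(
            get=lambda a: other.get(self.get(a)),
            put=lambda a, c: self.put(a, other.put(self.get(a), c)),
        )

double = Lens(get=lambda x: 2 * x, put=lambda x, fb: x + fb)
shift = Lens(get=lambda y: y + 1, put=lambda y, fb: fb / 2)

pipeline = double >> shift
print(pipeline.get(3))        # forward: (2*3) + 1 = 7
print(pipeline.put(3, 4.0))   # backward: 4.0/2 -> 2.0, then 3 + 2.0 = 5.0
```

The "one extra leg" for a learner's internal state would make `get` and `put` additionally depend on a parameter object, which is roughly how Backprop as a functor treats trainable weights.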

view this post on Zulip Toby Smithe (Apr 15 2020 at 09:25):

So I'm currently working in a similar direction to Bruno, formalising the "free energy" framework of Karl Friston in terms of optics / open games. In December, Jules Hedges and I proved that Bayesian updates compose optically (I'm working on a note on that today), and that's at the heart of Friston's work (a.k.a. "active inference"). That note should be on arXiv soon, and my active inference work not long after that.
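The elementary fact underlying that compositionality can be checked numerically: two Bayesian updates in sequence give the same posterior as one update on the joint likelihood. This toy check (my own invented numbers, for discrete hypotheses and independent observations) is only the concrete shadow of the categorical statement in the Smithe–Hedges note.

```python
import numpy as np

# Sequential Bayes updates vs. one joint update: both are proportional to
# prior * lik1 * lik2, so they agree after normalization.

prior = np.array([0.5, 0.3, 0.2])   # beliefs over 3 hypotheses (invented)
lik1 = np.array([0.9, 0.5, 0.1])    # P(obs1 | hypothesis)
lik2 = np.array([0.2, 0.6, 0.7])    # P(obs2 | hypothesis)

def update(p, lik):
    post = p * lik
    return post / post.sum()        # normalize to a probability vector

sequential = update(update(prior, lik1), lik2)
joint = update(prior, lik1 * lik2)  # independent observations factorize
print(np.allclose(sequential, joint))   # True
```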

But active inference is not so much about neuroscience itself, although Friston's model has some nice parallels with neurological structures. I think CT also has plenty to contribute in "hard" neuroscience, further away from where it intersects with machine learning.

view this post on Zulip Toby Smithe (Apr 15 2020 at 09:25):

Hi Bruno! :)

view this post on Zulip Bruno Gavranović (Apr 15 2020 at 09:25):

Hi Toby :)

view this post on Zulip Toby Smithe (Apr 15 2020 at 09:46):

Schmidhuber developed a theory of subjective beauty and coupled it with curiosity. I would be very interested to connect these ideas with neurophenomenology à la Varela.

@Johannes Drever Do you know Camilo Signorelli? He's a friend of mine in Oxford and is also interested in such things, I think.

These works remind me of Robert Rosen's limited success in applying category theory to biology. It's a start, but not by much. That's a good thing if you want to come up with a better theory, of course.

I'm with Rongmin Lu on these works [eg MENS], which I think are too far removed from the domain-specific ideas that have been developed by people actually modelling brains (in as much detail as possible). I'm sure it might be possible to reverse-engineer those structures from abstract ones, but such an approach seems rather 'dual' to the usual "rising sea"; call it "falling sea", perhaps...

view this post on Zulip Johannes Drever (Apr 15 2020 at 10:16):

Hi Toby, good to hear that you are working on something similar and that there is quite some activity in this direction. Active inference is something I've wanted to look into for a long time. Friston relates it to artificial curiosity quite explicitly. I don't know Camilo, but the whole Oxford group looks pretty amazing.

I see the point that specific principles lend themselves more towards being cast into an algorithm, which may then be related to a neural substrate. But I also think the dual view of thinking about abstract patterns is very interesting. In the end that is what our brains are really good at, and we want to know the difference which makes a difference, so to say.

view this post on Zulip Nivedita (Jul 06 2020 at 22:12):

Toby Smithe

Could you direct me to a few groups or individuals that may be actively researching this aspect (hard neuroscience)? I wish to get initiated into cognitive neuroscience and I really hope to use CT as a tool. I've noted the works of Brown and Porter, A. Ehresmann, and Ramirez.