Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.


Stream: theory: applied category theory

Topic: emergent behaviour as laxity


view this post on Zulip Jules Hedges (Apr 24 2021 at 14:07):

It is time for me to reiterate my grand challenge to applied category theory, stretching the purpose of the off-topic thread to breaking point. Typically when you have some category $C$ whose objects are "boundaries", morphisms are "open systems" and composition is coupling two open systems along a boundary, "behaviours" form a lax pseudofunctor $F : C \to \mathbf{Rel}$. This says that every boundary $x$ has some set $F(x)$ of possible behaviours, and every open system $f : x \to y$ has a set $F(f) \subseteq F(x) \times F(y)$ of boundary behaviours that it can perform. The laxator says that if you have behaviours $(a, b) \in F(f)$ and $(b, c) \in F(g)$ of two composable open systems that agree on the behaviour of the common boundary, then they compose to a behaviour $(a, c) \in F(fg)$ of the complex system; but the converse typically isn't true: $fg$ exhibits emergent behaviours that don't arise from individual behaviours of $f$ and $g$. Right now the typical next step is to shrug and then go and do something else. I would like to associate some mathematical object that "describes" or "measures" the failure of the laxator to be an isomorphism, aka describes or measures the emergent effects. Several people have mumbled something about cohomology after I talked about this problem.
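
To make the laxity concrete, here is a minimal Python sketch (not from the thread; the behaviour sets and names are made up): relations are sets of pairs, composition in $\mathbf{Rel}$ is relational composition, and the elements of $F(fg)$ not hit by $F(f);F(g)$ are the emergent behaviours.

```python
# Minimal sketch (made-up data): behaviours as relations, laxity as strict inclusion.

def compose(r, s):
    """Relational composition in Rel: (a, c) whenever (a, b) in r and (b, c) in s."""
    return {(a, c) for (a, b1) in r for (b2, c) in s if b1 == b2}

# Toy behaviour relations for two composable open systems f : x -> y and g : y -> z.
F_f = {("in", "mid1")}     # f can only route its behaviour to mid1 on the shared boundary
F_g = {("mid2", "out")}    # g only accepts mid2 on the shared boundary
F_fg = {("in", "out")}     # but the composite does exhibit this behaviour (e.g. via a zigzag)

lax = compose(F_f, F_g)    # behaviours explained by composing behaviours: empty here
assert lax <= F_fg         # the laxator inclusion
print(F_fg - lax)          # {('in', 'out')} -- the emergent behaviours
```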

view this post on Zulip Jade Master (Apr 24 2021 at 15:10):

Jules Hedges said:

It is time for me to reiterate my grand challenge to applied category theory, stretching the purpose of the off-topic thread to breaking point. Typically when you have some category $C$ whose objects are "boundaries", morphisms are "open systems" and composition is coupling two open systems along a boundary, "behaviours" form a lax pseudofunctor $F : C \to \mathbf{Rel}$. This says that every boundary $x$ has some set $F(x)$ of possible behaviours, and every open system $f : x \to y$ has a set $F(f) \subseteq F(x) \times F(y)$ of boundary behaviours that it can perform. The laxator says that if you have behaviours $(a, b) \in F(f)$ and $(b, c) \in F(g)$ of two composable open systems that agree on the behaviour of the common boundary, then they compose to a behaviour $(a, c) \in F(fg)$ of the complex system; but the converse typically isn't true: $fg$ exhibits emergent behaviours that don't arise from individual behaviours of $f$ and $g$. Right now the typical next step is to shrug and then go and do something else. I would like to associate some mathematical object that "describes" or "measures" the failure of the laxator to be an isomorphism, aka describes or measures the emergent effects. Several people have mumbled something about cohomology after I talked about this problem.

This is a cool question. I think that a first step is to classify when there is no emergent behavior. That's something you can do for the algebraic path problem by defining functional open matrices (see section 4 of this), and in my forthcoming thesis I show how to generalize this to open Petri nets, Q-nets, and graphs. Basically these are systems such that, when you compose them, there are no channels which can produce additional feedback. I don't know how yet, but it would definitely be cool to find some class of networks for which there is some emergent behavior, but it is still relatively manageable.

view this post on Zulip Notification Bot (Apr 24 2021 at 15:34):

This topic was moved here from #general: off-topic > memes by Matteo Capucci (he/him)

view this post on Zulip Jules Hedges (Apr 24 2021 at 15:37):

Luckily I got a screengrab of my post being under the "memes" thread before it was moved

view this post on Zulip Jules Hedges (Apr 24 2021 at 15:39):

Right, one of the motivating examples for me was open graph reachability, where the laxness happens because a path can zigzag across a boundary, but it should be possible to say something much stronger than just "it's lax" - if you know something about what the boundary looks like then it should be possible to control "how lax" it is

view this post on Zulip Jules Hedges (Apr 24 2021 at 15:39):

This motivation came straight out of one of your papers

view this post on Zulip Jade Master (Apr 24 2021 at 16:03):

Jules Hedges said:

Luckily I got a screengrab of my post being under the "memes" thread before it was moved

Lol, it is all a meme. And yeah, the thing is that as soon as you get any sort of loop the emergent behavior is infinite because you can go around the loop as many times as you want. I'm curious about when the emergent behavior is finite...so there's a zig-zag but not a loop.

view this post on Zulip Eigil Rischel (Apr 24 2021 at 16:10):

I've put a lot of thought into this problem. It's very hard to say something about the general case because you need some sort of structure on the categories involved to work with.
One family of examples is where your bicategories are (co)span categories, and your (op)lax functor comes from a functor which doesn't preserve pushout/pullback. If you further assume that the target category is abelian, this is the kind of thing you can hit with (co)homology.
The higher homology of an object $X$ then measures somehow the failure of $F$ to preserve those pushouts where the pushout is $X$ (in a vague sense).
I tried applying this to the graph reachability problem (since open graphs obviously are a structured cospan category). The problem is that as soon as you replace the reachability relation with an additive relation, you can also express relations between vertices on the same side of the graph (by saying that $x_1 - x_2$ is related to zero). So somehow the step of "passing to the additive situation" already does all the work.

view this post on Zulip Jules Hedges (Apr 24 2021 at 16:12):

Cool!

view this post on Zulip Jules Hedges (Apr 24 2021 at 16:13):

I was vaguely thinking that the target category would most likely have to be something like LinRel or AbRel to make progress

view this post on Zulip Jules Hedges (Apr 24 2021 at 16:15):

Although I do vaguely wonder if you can make use of the fact that Rel itself is enriched in a bunch of things that includes Boolean algebras

view this post on Zulip Eigil Rischel (Apr 24 2021 at 16:16):

Yeah, I think we've discussed this before: the reachability functor from open graphs to linear relations (taking a set to the vector space with that basis) can be made functorial by encoding the "same-side" connections using the trick I wrote above.

Hmm, now I'm wondering what the relation is between these lax functors:

Obviously $F \subseteq \tilde{F}$. Is $\tilde{F}$ something like "the minimal functor containing $F$"?

view this post on Zulip Jules Hedges (Apr 24 2021 at 16:17):

I'm trying to think what it would mean that your semantic functor goes into AbRel. The set of boundary behaviours is now an abelian group, so every boundary has a "zero behaviour" and a commutative "addition of behaviours"

view this post on Zulip Jules Hedges (Apr 24 2021 at 16:18):

By the way, to make the jump between reachability and my "behaviours" intuition, I think about a behaviour of the boundary of an open graph being a token entering or leaving

view this post on Zulip Eigil Rischel (Apr 24 2021 at 16:19):

In the linear/additive version, the behaviour becomes something like "a number of tokens entering/leaving, according to the sign"

view this post on Zulip Jules Hedges (Apr 24 2021 at 16:20):

Yeah. So going to AbRel isn't a totally crazy thing to do. It would be like viewing your graph as a relatively simple kind of integer petri net, where all of the transitions are trivially "one in one out"

view this post on Zulip Jules Hedges (Apr 24 2021 at 16:21):

Which would also be a hint that actual integer petri nets are possibly the right setting to think about this sort of reachability problem...

view this post on Zulip Jade Master (Apr 24 2021 at 16:24):

Eigil Rischel said:

I've put a lot of thought into this problem. It's very hard to say something about the general case because you need some sort of structure on the categories involved to work with.
One family of examples is where your bicategories are (co)span categories, and your (op)lax functor comes from a functor which doesn't preserve pushout/pullback. If you further assume that the target category is abelian, this is the kind of thing you can hit with (co)homology.
The higher homology of an object $X$ then measures somehow the failure of $F$ to preserve those pushouts where the pushout is $X$ (in a vague sense).
I tried applying this to the graph reachability problem (since open graphs obviously are a structured cospan category). The problem is that as soon as you replace the reachability relation with an additive relation, you can also express relations between vertices on the same side of the graph (by saying that $x_1 - x_2$ is related to zero). So somehow the step of "passing to the additive situation" already does all the work.

What makes this a problem?

view this post on Zulip Eigil Rischel (Apr 24 2021 at 16:26):

Uh, not sure it's a problem - it just made me think that the technology for studying failure-of-functoriality wouldn't do anything, because once you rephrase the problem so that it makes sense to apply something like homology, you've already solved the problem. Although I just realized you can also study the lax functor which doesn't "know about" same-side connections, so maybe something interesting happens there.

view this post on Zulip Jules Hedges (Apr 24 2021 at 16:26):

It could be that this is actually the right way to do it, and my whole "cohomology of the laxator" is going the wrong way...... possibly it comes down to an application-driven question for "what do you want a solution to the reachability problem to actually be?"

view this post on Zulip Eigil Rischel (Apr 24 2021 at 16:27):

Right - this is good news if your goal is to understand reachability! But it means you can't use it as an example to think about abstract tools for measuring non-compositionality

view this post on Zulip Jules Hedges (Apr 24 2021 at 16:30):

Heh, my brain just caught up and realised what this is saying intuitively. If you have an open graph, you can detect a connection between two nodes on the same boundary by injecting a particle and an antiparticle, and observing nothing coming out the other side

view this post on Zulip Jade Master (Apr 24 2021 at 16:32):

Yeah that's really interesting how you lose directionality when you freely include negatives.

view this post on Zulip Jade Master (Apr 24 2021 at 16:40):

Reachability and behavior are very related. I would say that reachability is the decategorification of behavior. For a network $P$, you can take a category $F(P)$ representing the operational semantics of $P$, so that objects are states of $P$ and morphisms are execution sequences in $P$. Then the hom-sets $FP(x,y)$ represent all the possible behaviors which can occur starting in $x$ and ending in $y$. Reachability decategorifies this by making the change of enrichment $\mathsf{Set} \to \{0,1\}$ where a set is sent to 0 if it is empty and sent to 1 otherwise. So reachability takes the behavior and just cares about whether it exists or not, and does that by going from 1-dimensional categories to 0-dimensional categories.
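
A tiny Python sketch of this change of enrichment, with made-up execution data: each hom-set of behaviours is collapsed to $0$ or $1$ according to whether it is empty.

```python
# Sketch of decategorification Set -> {0,1} (made-up data): hom-sets of execution
# sequences between states collapse to a boolean reachability matrix.

behaviours = {
    ("s0", "s1"): ["fire t1", "fire t2 then t3"],  # two distinct executions
    ("s0", "s2"): [],                              # no execution: s2 not reachable from s0
    ("s1", "s1"): ["do nothing"],
}

reachability = {pair: (1 if execs else 0) for pair, execs in behaviours.items()}
print(reachability)   # {('s0', 's1'): 1, ('s0', 's2'): 0, ('s1', 's1'): 1}
```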

view this post on Zulip Jules Hedges (Apr 24 2021 at 16:41):

That's neat

view this post on Zulip Jules Hedges (Apr 24 2021 at 16:42):

So, my grand challenge problem is probably stuck until I come up with a new example that can't be solved in this easier way... which may be a slight problem, because coming up with decent examples is surprisingly hard (which is itself a hint that my starting abstraction may not be a good one, no matter how great it intuitively seems to me)

view this post on Zulip Jules Hedges (Apr 24 2021 at 16:49):

I had another motivating example that came from something Sharwin Rezagholi and I worked on and then abandoned: topological entropy of open dynamical systems (aka open directed graphs, for our purposes). That turned out to be a lax pseudofunctor from open directed graphs to the poset of reals. If I remember correctly (which is not guaranteed, I'm pretty fuzzy on this), we worked out that coupling two open directed graphs means you can say that the topological entropy is $\geq$ the maximum of the individual topological entropies. So there was an obvious question of trying to measure/control how much it could increase in terms of the boundary you couple along, but we never came up with anything. And that's basically all I can say about it, because without Sharwin around I don't even remember how to define topological entropy...

view this post on Zulip Dan Marsden (Apr 24 2021 at 16:49):

Jules Hedges said:

Although I do vaguely wonder if you can make use of the fact that Rel itself is enriched in a bunch of things that includes Boolean algebras

I don't think this is true. Composition doesn't preserve intersections for example. Rel is cJSLat-enriched though.

view this post on Zulip Jade Master (Apr 24 2021 at 16:50):

So if you want to not decategorify and keep the full behavior you can instead construct a double functor

$\blacksquare : Open(Network) \to \mathsf{Prof}$
which sends an open network $X \xrightarrow{i} G \xleftarrow{o} Y$ to the profunctor $\blacksquare(G) : FX \times FY \to \mathsf{Set}$ given by
$\blacksquare(G)(x,y) = FG(ix, oy)$
So maybe this would be a better double functor for measuring non-compositionality than the one with codomain Rel or AddRel.
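
To see what the components of such a profunctor look like in a toy case, here is a small Python sketch (the graph, the boundary inclusions, and the no-repeated-vertex restriction are all made up for illustration): $FG$ is taken to be the free category on $G$, so $\blacksquare(G)(x,y)$ is a set of directed paths from $i(x)$ to $o(y)$.

```python
from itertools import product

edges = {("a", "b"), ("b", "c"), ("b", "b2"), ("b2", "c")}   # a made-up internal graph G
i = {"x": "a"}          # left boundary X = {x}, included at vertex a
o = {"y": "c"}          # right boundary Y = {y}, included at vertex c

def paths(src, tgt, seen=()):
    """Directed paths src -> tgt with no repeated vertices (a finite stand-in for FG(src, tgt))."""
    if src == tgt:
        yield (src,)
    for (u, v) in edges:
        if u == src and v not in seen:
            for rest in paths(v, tgt, seen + (src,)):
                yield (src,) + rest

profunctor = {(x, y): list(paths(i[x], o[y])) for x, y in product(i, o)}
print(profunctor[("x", "y")])   # two parallel behaviours: a-b-c and a-b-b2-c
```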

view this post on Zulip Jade Master (Apr 24 2021 at 16:52):

And maybe you don't want a double functor and would prefer the ordinary functor...that exists too!

view this post on Zulip Jade Master (Apr 24 2021 at 16:54):

Oh Jules I was about to ask you what topological entropy was. Regardless that seems like a really nice result, it's the old adage "a thing is greater than the sum of its parts"

view this post on Zulip Jules Hedges (Apr 24 2021 at 16:55):

Yeah, that's a problem I'd like to go back to after having a framework for how to even think about this sort of problem

view this post on Zulip Jules Hedges (Apr 24 2021 at 16:56):

Hopefully we can seriously think about this stuff once you're in Glasgow!

view this post on Zulip John Baez (Apr 24 2021 at 17:23):

Jade Master said:

And yeah, the thing is that as soon as you get any sort of loop the emergent behavior is infinite because you can go around the loop as many times as you want. I'm curious about when the emergent behavior is finite...so there's a zig-zag but not a loop.

One thing you can do both when there are zig-zags and when there are loops is keep track of the number of times you have to go back into the first graph. So, you get this:

Say you have an open graph $LX \to G \leftarrow LY$, where $L: \mathsf{Set} \to \mathsf{Grph}$ is the "discrete graph on a set" functor. Let $P(G)_{x,y}$ be the set of paths from $x \in X$ to $y \in Y$ through the graph $G$. So, we have

$P(G) : X \times Y \to \mathsf{Set}$

Here I'm really using $G$ as a nickname for the whole cospan $LX \to G \leftarrow LY$.

A matrix of sets is the same as a span of sets, so this $P$ thing is trying to be a functor from open graphs to spans of sets, but really both these things are bicategories and $P$ is just laxly functorial.

Namely, if you have two open graphs

$LX \to G \leftarrow LY$ and $LY \to H \leftarrow LZ$,

when we compose them you get an open graph $LX \to GH \leftarrow LZ$, but alas

$\sum_{y\in Y} P(G)_{x,y} P(H)_{y,z} \subseteq P(GH)_{x,z}$

However, as Jade has shown, the left-hand side is just the first term in a sum (a sum of sums!) that converges to $P(GH)$. The first term contains paths that go through $G$ and then $H$. The second term contains paths that go through $G$ and then $H$ and then go back into $G$ and then back into $H$. And so on.

So $P(GH)$ is much better than a mere span of sets: it has a bunch of subspans $P_n(GH)$ where $n$ keeps track of the number of zigzags:

$P(GH)_{x,z} = \bigcup_{n = 0}^\infty P_n(GH)_{x,z}$

In other words $P(GH)_{x,z}$ is a filtered set. (It has an exhaustive increasing filtration.)
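
Here is a small Python sketch of that filtration (the two open graphs are made up): it enumerates the paths of the composite $GH$ from a left-boundary vertex to a right-boundary vertex and groups them by how many times they cross back from $H$ into $G$.

```python
# Sketch (made-up open graphs): paths in the composite GH, filtered by zigzag count.

from collections import defaultdict

G_edges = {("x", "y1"), ("y2", "y3"), ("x", "y3")}   # left open graph, shared boundary {y1, y2, y3}
H_edges = {("y1", "y2"), ("y3", "z")}                # right open graph, same boundary on its left

edges = {e: "G" for e in G_edges} | {e: "H" for e in H_edges}   # the glued graph GH

def simple_paths(src, tgt, seen=()):
    """Directed paths src -> tgt in GH with no repeated vertices (keeps things finite)."""
    if src == tgt:
        yield ()
        return
    for (u, v) in edges:
        if u == src and v not in seen:
            for rest in simple_paths(v, tgt, seen + (src,)):
                yield ((u, v),) + rest

filtration = defaultdict(list)
for path in simple_paths("x", "z"):
    labels = [edges[e] for e in path]
    zigzags = sum(1 for a, b in zip(labels, labels[1:]) if (a, b) == ("H", "G"))
    filtration[zigzags].append(path)

print(dict(filtration))
# Level 0 contains x -> y3 -> z; the path x -> y1 -> y2 -> y3 -> z only appears at level 1.
```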

view this post on Zulip John Baez (Apr 24 2021 at 17:27):

Now all of this is obvious, but working with filtered objects is already taking us into a realm reminiscent of cohomology: all sorts of constructions in cohomology give us filtrations like this: "approximations" to the final answer, that are simpler than the final answer.

view this post on Zulip John Baez (Apr 24 2021 at 17:27):

Things get even more interesting when we have a composite $GHK$ of three open graphs.

view this post on Zulip John Baez (Apr 24 2021 at 17:29):

Now there are different ways for a path to "bend back": it can go back from $H$ to $G$, or go back from $K$ to $H$, and it can do either of these things repeatedly.

view this post on Zulip John Baez (Apr 24 2021 at 17:30):

This makes it a more interesting challenge to keep track of all the things a path can do. For example, it can go like $GHKHGHGHK$.

view this post on Zulip John Baez (Apr 24 2021 at 17:33):

So $P(GHK)$ has an interesting structure where we keep track of paths with various kinds of zigzags. And this structure is related to the filtrations on $P(GH)$ and $P(HK)$.

view this post on Zulip John Baez (Apr 24 2021 at 17:33):

It's a bit complicated... and it gets even more complicated when we have a composite of even more open graphs.

view this post on Zulip John Baez (Apr 24 2021 at 17:35):

But this complexity, which may at first seem like just an annoying quagmire, is just the sort of thing you expect when doing calculations in cohomology - or for that matter combinatorics. It's something you can really get into, and understand completely.

view this post on Zulip Jade Master (Apr 24 2021 at 17:37):

I hope so :) maybe combinatoricists already have results in this flavor.

view this post on Zulip Jules Hedges (Apr 24 2021 at 17:40):

This sounds pretty promising

view this post on Zulip John Baez (Apr 24 2021 at 17:42):

Great! I should add that if anyone feels revolted by the complexity that shows up here, they shouldn't. "Emergence of more complex behavior" is hardly ever going to be simple.

view this post on Zulip Jules Hedges (Apr 24 2021 at 17:43):

I guess I can record what few thoughts I've had towards my own problem. If you have a "behaviour" lax pseudofunctor $F : C \to \mathbf{Rel}$, then you can define an "emergent behaviour" completely formally to be an element of $F(fg) \setminus F(f) F(g)$, where the right-hand thing is composition in $\mathbf{Rel}$. In words, an emergent effect is not just associated to a system, but to a system together with a choice of decomposition. So it seems obvious to store this data somewhere that involves the nerve of $C$... cells in the nerve correspond to systems together with a choice of decomposition. You could just associate the set of emergent behaviours to every cell. Then the laxator is an isomorphism iff all of those sets are the empty set.

view this post on Zulip Jules Hedges (Apr 24 2021 at 17:43):

... and that's all I got

view this post on Zulip Morgan Rogers (he/him) (Apr 25 2021 at 09:46):

Jules Hedges said:

So, my grand challenge problem is probably stuck until I come up with a new example that can't be solved in this easier way... which may be a slight problem, because coming up with decent examples is surprisingly hard (which is itself a hint that my starting abstraction may not be a good one, no matter how great it intuitively seems to me)

The example you proposed in your original statement sounded like it was directly applicable to cobordism categories, so there's probably an interesting higher-dimensional (in a topological sense) analogue of the 1D graph example.

view this post on Zulip Jules Hedges (Apr 25 2021 at 09:50):

Path-connectedness in cobordisms should do it. But it likely "suffers" from the same "problem"

view this post on Zulip Morgan Rogers (he/him) (Apr 25 2021 at 09:57):

Right, but perhaps there's a more interesting notion of connectedness that one could examine. For example, you can view path-connectedness as the existence of a homotopy between the end-points; maybe one could examine homotopy between higher-dimensional sub-spaces? Or indeed directly asking about the effect of composition on homology classes.

view this post on Zulip Morgan Rogers (he/him) (Apr 25 2021 at 09:59):

Take e.g. a torus with two disks cut out of it (viewed as a cobordism between two circles). This has a pair of loops on it that are not deformable into one another, but become so when we re-adjoin one of the disks, and that's a direct description of emergent behaviour

view this post on Zulip Jules Hedges (Apr 25 2021 at 10:00):

Hm, interesting, yeah

view this post on Zulip Jules Hedges (Apr 25 2021 at 10:01):

Now the equivalent of what we were talking about for graphs would be to pass to a situation where you have multiple parallel paths, and also negative paths - something that is to integer petri nets as cobordisms are to graphs...

view this post on Zulip Morgan Rogers (he/him) (Apr 25 2021 at 10:04):

Homology does that pretty well tbh, as long as you're working in nice spaces :grinning_face_with_smiling_eyes: You can reinterpret what I said about "torus minus two disks $X$" in homology as there being two non-equivalent homology classes $f, g \in H_1(X)$ which cancel when one of the disks is glued in, in the sense that $f - g$, a non-trivial element of $H_1(X)$, maps to $0$ in the homology of the resulting space.

view this post on Zulip Morgan Rogers (he/him) (Apr 25 2021 at 10:08):

Turning this data into some kind of "homology net" to be treated like integer Petri nets would be great, I wonder if this is a perspective algebraic topologists have taken?

view this post on Zulip Morgan Rogers (he/him) (Apr 25 2021 at 10:13):

I also want to make the observation that "the emergent behaviour is controlled by Mayer-Vietoris", in the sense that that result gives a precise statement of how gluing along a boundary (again, assuming we're working in "effectively nice" cobordism categories) affects the homology groups.

view this post on Zulip Jules Hedges (Apr 25 2021 at 10:20):

I may have just got a small step closer to understanding what the heck homology is

view this post on Zulip Morgan Rogers (he/him) (Apr 25 2021 at 10:27):

To be honest, the challenge is applying this stuff. What emergent behaviours do these represent in systems we care about? What is not covered, geometrical or otherwise?

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 10:40):

Morgan Rogers (he/him) said:

Turning this data into some kind of "homology net" to be treated like integer Petri nets would be great, I wonder if this is a perspective algebraic topologists have taken?

I feel summoned! What can I do for you?

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 10:44):

Jules Hedges said:

Now the equivalent of what we were talking about for graphs would be to pass to a situation where you have multiple parallel paths, and also negative paths - something that is to integer petri nets as cobordisms are to graphs...

In the tradition of graph rewriting, this approach usually pays off

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 10:44):

That is, you have a graph, you want to do stuff with it, and you decorate it with all sorts of things that will make your life easier at a later stage when you do rewriting and other things

view this post on Zulip Morgan Rogers (he/him) (Apr 25 2021 at 10:44):

I'll let you guys hash it out a while and check back in later :grinning_face_with_smiling_eyes:

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 10:45):

Morgan Rogers (he/him) said:

I'll let you guys hash it out a while and check back in later :grinning_face_with_smiling_eyes:

Actually I'm off for an extreme sunbathing session, but I'll think about it as well, and hopefully I'll be back with something interesting later this evening

view this post on Zulip Jules Hedges (Apr 25 2021 at 10:50):

Meanwhile @Bryce Clarke has written some very interesting replies to this question over on twitter:
https://twitter.com/8ryceclarke/status/1386233995965337607?s=21
https://twitter.com/8ryceclarke/status/1386234910965338116?s=21
https://twitter.com/8ryceclarke/status/1386237192486277122?s=21

@_julesh_ This is an interesting question! Just checking: by "lax pseudofunctor", do you mean "lax functor" or "normal lax functor"? Lax functors C -> Rel are equivalent to faithful functors into C; adding "normal" yields "faithful functors with discrete fibres" 1 / 2

- Bryce Clarke (@8ryceClarke)

@_julesh_ Pseudofunctors C -> Rel are the same as faithful discrete Conduche fibrations into C. You may find the papers by Susan Niefield on lax functors C -> Rel to be helpful: http://www.tac.mta.ca/tac/volumes/12/7/12-07abs.html http://www.tac.mta.ca/tac/volumes/24/12/24-12abs.html 2 / 2

- Bryce Clarke (@8ryceClarke)

@_julesh_ So in a sense, what you would like to find is a measure of how much a faithful functor fails to be exponentiable. A paper where I have seen both "cohomology" and "exponentiable / powerful morphisms" in the same place is here: http://www.tac.mta.ca/tac/volumes/23/3/23-03.pdf (but I haven't read it).

- Bryce Clarke (@8ryceClarke)

view this post on Zulip Jules Hedges (Apr 25 2021 at 10:51):

After looking up what "normal" means (strict for identities, possibly lax for composition) and eyeballing some examples a bit, I think it is indeed reasonable to assume normal - identity open systems always exhibit exactly the identity behaviour and can never fail

view this post on Zulip Jules Hedges (Apr 25 2021 at 10:52):

I had never heard of Conduché fibrations before this morning, so I'm currently looking at the nLab page and trying to figure out what it all means

view this post on Zulip Jules Hedges (Apr 25 2021 at 10:52):

But as a whole new perspective on exactly the same situations, it seems extremely likely that this is useful...

view this post on Zulip Morgan Rogers (he/him) (Apr 25 2021 at 11:05):

Oh hey that last reference is, completely coincidentally, pretty useful for me. Thanks for sharing :joy:
Personally I would love your problem to be solved in a grounded way @Jules Hedges; that is, I would love to see some more concrete examples of emergent behaviours besides connectivity that can serve to lend insight into the result of the categorical abstraction that's already happening.

view this post on Zulip Jules Hedges (Apr 25 2021 at 11:12):

Right... I'd like more examples too, but it seems quite difficult to come up with them. I believe, without real evidence, that this framework is a good one, based on a lot of thinking in generalities about what "open systems" and "behaviours" can be. I suspect that each "serious" example looks like a research problem in its own right

view this post on Zulip Jules Hedges (Apr 25 2021 at 11:12):

This isn't a great situation to be in, of course

view this post on Zulip Jules Hedges (Apr 25 2021 at 13:40):

So after looking a bit at what Bryce said, it seems that by turning the problem inside out, the failure of a lax functor $\to \mathbf{Rel}$ to be a functor can equivalently be seen as the failure of a certain morphism to be exponentiable, as an object in a slice category of $\mathbf{Cat}$, or, abstracting a whole bunch, as the failure of a certain functor to have an adjoint. So... is there any standard way to "measure" the failure of a functor to have an adjoint?

view this post on Zulip Morgan Rogers (he/him) (Apr 25 2021 at 13:50):

My instinct is yes, but I can't put my finger on what form of such measurements I've seen in the past...

view this post on Zulip David Egolf (Apr 25 2021 at 14:06):

I'm currently doing engineering research (and only know introductory category theory). In engineering research we often care not only about the underlying physics, but also about traditional processing schemes. In my experience, a lot of engineering research is focused on improving existing processing schemes. Even when the physics is compositional, the traditional processing might not be.

For an ultrasound imaging example, say we fire a pressure beam $A$ at a target and observe the resulting echoes $E(A)$. This echo operator is compositional, in the sense that $E(A + B) = E(A) + E(B)$ where $A$ and $B$ are different fired pressure beams. However, making an image from the echoes involves a reconstruction operation $R$, and a traditional reconstruction approach can be viewed in terms of dot products. In this case, we would have $R(E(A)) = \langle E(A), E(A) \rangle$ and $R(E(B)) = \langle E(B), E(B) \rangle$ but $R(E(A) + E(B)) = \langle E(A), E(A+B) \rangle + \langle E(B), E(A+B) \rangle = \langle E(A), E(A) \rangle + 2\langle E(A), E(B) \rangle + \langle E(B), E(B) \rangle = R(E(A)) + R(E(B)) + 2\langle E(A), E(B) \rangle$. So, we no longer have compositionality. The failure of compositionality in this case is referred to as the generation of "cross-talk artifacts", and there are a number of ways to try to reconstruct better so that they don't show up.

If we're interested in using category theory to help with engineering, it may be interesting to consider analysis of traditional processing schemes - not just analysis of the underlying physics. (And these may provide additional examples of emergent behaviour).
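
A numerical sketch of the cross-talk calculation above (made-up beams; the echo operator is an arbitrary linear map, and the reconstruction is stylised down to a single number): the combined reconstruction differs from the sum of the individual ones by exactly the term $2\langle E(A), E(B)\rangle$.

```python
# Numerical sketch (made-up data) of the cross-talk term in the calculation above.

import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(5, 3))              # "echo operator": beams in R^3 -> echoes in R^5
A = rng.normal(size=3)                   # two fired pressure beams
B = rng.normal(size=3)

def R(echo):                             # dot-product style reconstruction
    return float(echo @ echo)

lhs = R(E @ (A + B))                                        # reconstruct the combined echo
rhs = R(E @ A) + R(E @ B) + 2 * float((E @ A) @ (E @ B))    # parts plus the cross-talk term
assert np.isclose(lhs, rhs)              # compositionality fails by exactly 2<E(A), E(B)>
print(lhs - R(E @ A) - R(E @ B))         # the cross-talk artifact itself
```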

view this post on Zulip Nathaniel Virgo (Apr 25 2021 at 14:31):

Jules Hedges said:

So after looking a bit at what Bryce said, it seems that by turning the problem inside out, the failure of a lax functor $\to \mathbf{Rel}$ to be a functor can equivalently be seen as the failure of a certain morphism to be exponentiable, as an object in a slice category of $\mathbf{Cat}$, or, abstracting a whole bunch, as the failure of a certain functor to have an adjoint. So... is there any standard way to "measure" the failure of a functor to have an adjoint?

I wonder if this might be relevant, from nlab:

[image: excerpt from the nLab]

The "inclusion of the full subcategory determined by those dsd\text{s} for which Lˉ(d)\bar L(d) is representable" seems like kind of a measure of adjointness, in that it will map into all of DD if LL does have a right adjoint. So that might turn the problem into "measuring" the failure of an inclusion functor to be an isomorphism, maybe.

view this post on Zulip Amar Hadzihasanovic (Apr 25 2021 at 15:19):

@Jules Hedges I think similar ideas to the ones I was writing about on "failure of a morphism to be iso" may apply, just moved up a notch to morphisms in the 2-category of small categories.

A functor $F: C \to D$ is a right adjoint iff

- the right Kan extension of the identity $1_C$ along $F$ exists, and
- $F$ preserves it.

(The first is like a lax version of the condition of being a split mono)
I bet you can phrase them both as certain (2?)-functors being equivalences.

So for example any functor that does not have a right Kan extension along $F$ is a witness of its failure to be a right adjoint. So is any right Kan extension that is not preserved by $F$. Again, the challenge is finding the appropriate structure that such witnesses form.

view this post on Zulip Jules Hedges (Apr 25 2021 at 15:21):

David Egolf said:

For an ultrasound imaging example, say we fire a pressure beam $A$ at a target and observe the resulting echoes $E(A)$. This echo operator is compositional, in the sense that $E(A + B) = E(A) + E(B)$ where $A$ and $B$ are different fired pressure beams. However, making an image from the echoes involves a reconstruction operation $R$, and a traditional reconstruction approach can be viewed in terms of dot products. In this case, we would have $R(E(A)) = \langle E(A), E(A) \rangle$ and $R(E(B)) = \langle E(B), E(B) \rangle$ but $R(E(A) + E(B)) = \langle E(A), E(A+B) \rangle + \langle E(B), E(A+B) \rangle = \langle E(A), E(A) \rangle + 2\langle E(A), E(B) \rangle + \langle E(B), E(B) \rangle = R(E(A)) + R(E(B)) + 2\langle E(A), E(B) \rangle$. So, we no longer have compositionality. The failure of compositionality in this case is referred to as the generation of "cross-talk artifacts", and there are a number of ways to try to reconstruct better so that they don't show up.

This looks not dissimilar to the common situation when you're trying to answer questions about composite systems algorithmically. Graph reachability is an obvious example of that. The whole of compositional game theory is an example too - being able to compose games is still a big upgrade on what was previously possible, but even so, when doing equilibrium analysis you generally have to start completely from scratch every time you change something. (It's not totally inconceivable that it's possible to do better than that, but you'd have to walk a thin line in order to not have a proof of $P=NP$)

view this post on Zulip Jules Hedges (Apr 25 2021 at 15:35):

There's a bunch of ideas in this thread now, but it still looks pretty intimidating to join the dots up to talk about any concrete examples. For example plugging it into the "reachability functor" on open graphs ought to turn into a bunch of combinatorics about loops around the boundaries

view this post on Zulip Jules Hedges (Apr 25 2021 at 15:36):

By which I mean, in any specific situation it ought to be feasible to prove that the abstract notion of "witnesses of barrier to functoriality" are equivalent to something concrete that you can do calculations with...

view this post on Zulip John Baez (Apr 25 2021 at 16:35):

The example of reachability for open graphs is nice because it's pretty easy to visualize and understand. So, I should take the pseudofunctor $\mathsf{Open}(\mathsf{Graph}) \to \mathbf{Rel}$ and convert it into a Conduché fibration and see what it looks like.

view this post on Zulip John Baez (Apr 25 2021 at 16:38):

Any traditional ideas on homology of graphs will have limited success in understanding the way a path in a composite of two open graphs can be more complicated than a composite of two paths, one in each open graph. The reason is that the homology of a graph doesn't change if you replace an edge of the graph by an edge going in the other direction!

view this post on Zulip John Baez (Apr 25 2021 at 16:40):

So, homology of graphs is okay if you're interested in paths where you're allowed to walk either along or against the direction of each edge in your path.

view this post on Zulip John Baez (Apr 25 2021 at 16:41):

But this limitation is why people are trying to invent "directed" algebraic topology, where the reverse of an allowed path may not be an allowed path. Roughly speaking, directed algebraic topology is to categories as ordinary algebraic topology is to groupoids.

view this post on Zulip Jules Hedges (Apr 25 2021 at 16:45):

You can certainly ask the reachability question for both directed and undirected graphs. Is it understood that undirected graph reachability is easier in some way than directed graph reachability?

view this post on Zulip Robin Piedeleu (Apr 25 2021 at 16:51):

One thing that bothers me about this approach to emergent effects is that, at least for the particular case (taken here to be the paradigmatic example) of reachability in graphs, a different choice of codomain for a similar functor could make it compositional/strictly functorial: you can send an open graph $X \rightarrow Y$ to its reachability relation seen as a morphism of Int(Rel), i.e., a relation $X+Y \rightarrow X+Y$. Composition in this category works by computing a transitive closure---I haven't checked the details, but it seems plausible that this gives a strict functor capturing the problem of reachability in some precise sense.

More broadly, it seems conceivable that for any given problem (e.g., graph reachability), starting with some syntactic category (e.g., open graphs) one could always find different semantics (e.g., Rel vs. Int(Rel)) that captures the problem more or less compositionally according to how much expressiveness the target category packages in its composition. But perhaps it is to be expected that emergent effects are also an artefact of the formalism you use to model the problem, just another expression for leaky abstractions.
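
A small Python sketch of this point (the graphs are made up): if each open graph is sent to its full reachability relation on its boundary vertices, including same-side pairs, then the composite can be recovered from the parts by gluing the two relations and taking a transitive closure over the shared boundary, with no emergent behaviour left over in this example.

```python
# Sketch (made-up graphs): remembering the *whole* reachability relation on the boundary,
# including same-side pairs, makes composition exact -- glue and take a transitive closure.

def closure(rel):
    """Transitive closure of a relation given as a set of pairs."""
    r = set(rel)
    while True:
        extra = {(a, c) for (a, b) in r for (b2, c) in r if b == b2} - r
        if not extra:
            return r
        r |= extra

def restrict(rel, verts):
    return {(a, b) for (a, b) in rel if a in verts and b in verts}

X, Y, Z = {"x"}, {"y1", "y2"}, {"z"}
G_edges = {("x", "y1"), ("y2", "g"), ("g", "x")}     # open graph with boundaries X, Y
H_edges = {("y1", "h"), ("h", "y2"), ("y2", "z")}    # open graph with boundaries Y, Z

# Semantics that keeps same-side connections: full reachability among boundary vertices.
sem_G = restrict(closure(G_edges), X | Y)
sem_H = restrict(closure(H_edges), Y | Z)

composed = restrict(closure(sem_G | sem_H), X | Z)   # computed from the parts alone
direct = restrict(closure(G_edges | H_edges), X | Z) # computed from the glued graph
assert composed == direct                            # no emergent behaviour left over here
print(composed)                                      # {('x', 'z'), ('x', 'x')}
```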

view this post on Zulip Jules Hedges (Apr 25 2021 at 17:06):

Yeah, it's conceivable that you can always avoid it by being smart... my ACT talk last year was about one possible way to get around the problem and recover a behaviour functor but into a different category

view this post on Zulip Jules Hedges (Apr 25 2021 at 17:07):

But the way I see it, emergent effects are something that arguably does exist out there in the world, and it would be a huge win for ACT to have a really powerful way to talk about that

view this post on Zulip John Baez (Apr 25 2021 at 17:14):

Jules Hedges said:

You can certainly ask the reachability question for both directed and undirected graphs. Is it understood that undirected graph reachability is easier in some way than directed graph reachability?

That's what I'm claiming; I don't know if this is "understood", and I should make good on my claim that the undirected case can be handled using traditional ideas on homology. In other words, I should prove a theorem.

view this post on Zulip John Baez (Apr 25 2021 at 17:17):

Robin Piedeleu said:

One thing that bothers me about this approach to emergent effects is that, at least for the particular case (taken here to be the paradigmatic example) of reachability in graphs, a different choice of codomain for a similar functor could make it compositional/strictly functorial: you can send an open graph $X \rightarrow Y$ to its reachability relation seen as a morphism of Int(Rel), i.e., a relation $X+Y \rightarrow X+Y$. Composition in this category works by computing a transitive closure---I haven't checked the details, but it seems plausible that this gives a strict functor capturing the problem of reachability in some precise sense.

That sounds right. It's an important observation! I don't think it should "bother" you... though I can see why it bothered you that nobody said it yet.

view this post on Zulip Jules Hedges (Apr 25 2021 at 17:18):

Right, in a sense it shouldn't be surprising that any particular example can be handled by being smart... the question is whether every reasonable example can be handled by being smart... or more accurately, whether being smart is the best it's possible to do, and nothing can be said in general

view this post on Zulip John Baez (Apr 25 2021 at 17:19):

@Robin Piedeleu - By the way, the reachability relation $X + Y \to X + Y$ meaning "there's a path from one point to another" makes $X + Y$ into a poset, and this poset is the collage of a profunctor from the poset $X$ to the poset $Y$ (both of which have their own reachability relation).

view this post on Zulip John Baez (Apr 25 2021 at 17:21):

Oh, no, that's not quite right, the collage I'm thinking about would only take into account paths from points in $X$ to points in $Y$, not the other way around.

view this post on Zulip John Baez (Apr 25 2021 at 17:21):

(My remarks may be cryptic because I'm alluding to a profunctor that @Jade Master has written about.)

view this post on Zulip Robin Piedeleu (Apr 25 2021 at 17:30):

Jules Hedges said:

Right, in a sense it shouldn't be surprising that any particular example can be handled by being smart... the question is whether every reasonable example can be handled by being smart... or more accurately, whether being smart is the best it's possible to do, and nothing can be said in general

There's maybe something more general you can say about how emergent effects appear, based on how expressive your form of composition is and how difficult it is to compute it. Then it would not be surprising that the less computational power you give yourself, the more emergent effects you're going to see popping up. Conversely, it makes sense that you should be able to suppress any emergent behaviour, given enough computational power. This is all a bit vague, but characterising the precise tradeoff for specific examples would be nice.

view this post on Zulip Jules Hedges (Apr 25 2021 at 17:32):

It's also not at all clear that every example of emergent effects comes from not having enough computational power. (Although some of them do... for example I think it's possible to upgrade compositional game theory to be even more compositional, as long as you don't mind computing a Brouwer fixpoint every time you compose anything)

view this post on Zulip Robin Piedeleu (Apr 25 2021 at 17:45):

Sure, I'm not claiming that this is always the case but it does seem to happen a lot in relational settings where composition requires solving some constraint satisfaction problem and can be badly global. If we map the problem to a category where it can be solved by composing some functions/anything with a neat causal flow of information, it seems likely that we're going to miss some part of the solution.

view this post on Zulip John Baez (Apr 25 2021 at 17:45):

Btw, @Jules Hedges, who started this rumor that homology could be good for understanding emergent effects? Was it you, or was it Abramsky talking about sheaves and contextuality, or someone else?

view this post on Zulip Jules Hedges (Apr 25 2021 at 17:48):

The story is, I thought about the general problem enough to get to the point where I could ask it in the form "how to say something finer-grained than that a lax pseudofunctor to Rel is either a functor or it isn't". For a while around 2019 I asked this question to anyone who would listen, and several different people said it made them think of cohomology... but I don't remember who

view this post on Zulip Jules Hedges (Apr 25 2021 at 17:49):

But it probably does have some relation to what Samson works on

view this post on Zulip John Baez (Apr 25 2021 at 17:50):

If we map the problem to a category where it can be solved by composing some functions/anything with a neat causal flow of information, it seems likely that we're going to miss some part of the solution.

Right. In general we'll miss some part of what happens when we combine two systems if we don't keep track of enough information about their internal structure. Compositionality happens when we keep track of just enough information about the two systems that we can recover the same sort of information about the system we get by combining them. There could in theory be many kinds of "just enough" here - i.e., many ways of achieving compositionality. It reminds me of a fixed-point problem.

But it's also very interesting to study what happens when we don't keep track of enough information to recover the same sort of information about the composite system. It's not just a bad thing to be shunned and ignored.

view this post on Zulip Jules Hedges (Apr 25 2021 at 17:58):

Morgan Rogers (he/him) said:

Oh hey that last reference is, completely coincidentally, pretty useful for me. Thanks for sharing :joy:
Personally I would love your problem to be solved in a grounded way Jules Hedges; that is, I would love to see some more concrete examples of emergent behaviours besides connectivity that can serve to lend insight into the result of the categorical abstraction that's already happening.

As it happens, I spontaneously remembered another example! This takes several pages to set up properly, but a couple of years ago I built a category of decorated corelations for smooth optimisation problems. The objects were natural numbers, and the morphisms $m \to n$ were [insert your favourite optimisation theory assumptions here - I used smooth + convex] functions $\mathbb{R}^{m + n} \to \mathbb{R}$, i.e. single-objective optimisation problems on $m + n$ variables. (I'm skipping a bunch of details, there's some corelations stuff to make it work out.) Composition was to pointwise add the functions, and then maximise over the variables on the middle boundary. (There were two reasons for that choice - one is a general feeling that max/+ play nice together, and the other is that it lets you do Lagrange multipliers in a nicely compositional way.)

Then I tried to prove that $\arg\max$ defines a functor from this category to Rel. But it's not true. It's a lax functor, but the laxator fails extremely badly to be an iso

view this post on Zulip Jules Hedges (Apr 25 2021 at 18:01):

Why? Because if you have an objective function that splits as the sum of two parts and those parts share some variables in common, then the choices for those variables that maximise the sum are in general totally different from the choices that maximise the individual terms
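
A small numerical sketch of this failure (the objectives are made up): the value of the shared variable that maximises the sum is not one that maximises either summand, so optimal behaviours of the parts don't glue to an optimal behaviour of the whole.

```python
# Numeric sketch (made-up objectives): the shared variable y that maximises f + g is not
# one that maximises either term, so argmax behaviours of the parts don't glue.

import numpy as np

ys = np.linspace(-2.0, 2.0, 401)       # grid of candidate values for the shared variable y
x, z = 1.0, -1.0                       # fixed values on the outer boundaries

f = -(ys - x) ** 2                     # part 1 wants y close to x = 1
g = -(ys - z) ** 2                     # part 2 wants y close to z = -1

y_f = ys[np.argmax(f)]                 # 1.0
y_g = ys[np.argmax(g)]                 # -1.0
y_sum = ys[np.argmax(f + g)]           # 0.0: the composite system compromises

print(y_f, y_g, y_sum)
# Composing the two argmax relations over the middle boundary yields nothing (the parts
# disagree about y), yet the composite problem has a perfectly good optimum at y = 0.
```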

view this post on Zulip Jules Hedges (Apr 25 2021 at 18:02):

That was the thing that really pushed me into thinking about the general question, because after spending a couple of months getting that far, I wished there was something better to do than just giving up

view this post on Zulip Jules Hedges (Apr 25 2021 at 18:02):

(It wasn't a completely wasted effort, since I learned all about decorated corelations in the process)

view this post on Zulip Jules Hedges (Apr 25 2021 at 18:12):

This happened in quick succession after my topological entropy example. That's what left me with the general feeling that functors are rare in applied situations, and lax pseudofunctors are the norm

view this post on Zulip Morgan Rogers (he/him) (Apr 25 2021 at 18:31):

John Baez said:

If we map the problem to a category where it can be solved by composing some functions/anything with a neat causal flow of information, it seems likely that we're going to miss some part of the solution.

But it's also very interesting to study what happens when we don't keep track of enough information to recover the same sort of information about the composite system. It's not just a bad thing to be shunned and ignored.

Having seen a few examples, I'm very in favour of this last comment. So much of CT is finding the right level of abstraction to bring the details you care about to the fore; applied CT should definitely have access to tools to measure how much of an idealisation different models are.

view this post on Zulip Jules Hedges (Apr 25 2021 at 20:47):

Sketch of an idea to turn a lax pseudofunctor $F: C \to \mathbf{Rel}$ into a simplicial set:
[image: sketch of the construction]

view this post on Zulip Jules Hedges (Apr 25 2021 at 20:49):

I'm very very sketchy on simplicial sets, I'm basically figuring this out as I go along, but I'm kind of excited about this idea, because laxness results in the middle cell being "not big enough", which kinda looks like a "hole" if you squint

view this post on Zulip Jules Hedges (Apr 25 2021 at 20:51):

(Normally I throw ideas out in public without a second thought, but maybe this one might be one I'll regret not keeping to myself until thinking about it a bit more...)

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 20:52):

I don't understand how you are building this simplicial set

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 20:53):

A simplex is just a set; a simplicial complex is a bunch of simplices that are closed under the face relationship

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 20:53):

When you write $F(x)$ as a vertex what do you mean?

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 20:54):

$F(x)$ is a set of points; are you taking every point to be a vertex of the simplex?

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 20:54):

Or, on the contrary, do you add just one point for each $F(x)$? In this case you may as well just write $x$

view this post on Zulip Jules Hedges (Apr 25 2021 at 20:55):

It's not a simplicial set. It's just a picture that might contain the germ of an idea

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 20:55):

Similarly, an edge between $F(x)$ and $F(y)$ is denoted as $F(f)$. Are you adding an edge for each element of the relation $F(f)$?

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 20:56):

If yes, it seems that the simplicial complex will end up being very (very) connected

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 20:56):

If not, it will have a lot of disconnected components? (and will be much much bigger)

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 20:58):

I'd suggest taking your functor $C \to \mathbf{Rel}$, turning it into a functor $D \to C$ for some $D$, and seeing how all the stuff in $D$ sits on $C$

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 20:59):

Then maybe you want to apply some sort of homological reasoning to the whole $D \to C$

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 20:59):

So you have all the $F(x)$ you were considering, but you are still indexing them using $x$.

view this post on Zulip Jules Hedges (Apr 25 2021 at 21:17):

As far as I can tell this is pretty much a variant of the nerve of a category. As long as your category is small (as many categories of decorated cospanish things are, so for once that doesn't worry me), it's nothing that a few dependent sums can't handle. Instead of the set of 0-cells being the set of objects of $C$, it's the set $\sum_{x \in C} F(x)$

view this post on Zulip Jules Hedges (Apr 25 2021 at 21:18):

It's probably fibred over the nerve of $C$, or something like that [waves hands wildly]

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:13):

By this you mean the picture?

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:14):

Or what I suggested?

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:15):

Ok, I get what you say

view this post on Zulip Graham Manuell (Apr 25 2021 at 22:16):

Jules Hedges said:

You can certainly ask the reachability question for both directed and undirected graphs. Is it understood that undirected graph reachability is easier in some way than directed graph reachability?

This probably isn't in the sense you are meaning, but for what it's worth here is a complexity-theoretic answer. Reachability for undirected graphs can be determined in logarithmic space, while reachability for directed graphs is NL-complete. So the question of whether the directed case is as 'easy' as the undirected case is precisely the L = NL problem. Like most things in complexity theory this is an open problem, but it is widely believed to be false.

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:16):

The problem is that it seems to me that you are building something akin to the nerve of $C$, which says little about your functor to $\mathbf{Rel}$

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:17):

You are basically using sets to index the faces of your complex, yes. But the sets themselves aren't "opened up", or interfaced with the complex itself

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:18):

My previous observation meant exactly this: as of now, if you replace, say, any $F(x)$ with $x$, you'd get a very similar complex

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:19):

Oh, ok, now I'm seeing how you are building the 2-d face

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:19):

It's still very coarse, but I'd rephrase your condition in another way

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:21):

Suppose you have $x \xrightarrow{f} y \xrightarrow{g} z$ in $C$. You add vertices $x, y, z$ to the complex. Similarly, you add $f, g, fg$ as 1-dimensional simplices.

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:21):

Now, you add the center face only if $F(f;g) = F(f);F(g)$ as sets

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:22):

This would give you a very coarse way of saying how strict functoriality fails

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:22):

You have a hole in the middle every time laxity is true laxity, as in non-strict, basically

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:24):

Now, you do this for every pair of composable morphisms, and you can already start saying interesting things, e.g. that if your functor is strict, your complex should be contractible
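
A toy Python sketch of this construction (the category, the lax functor, and the behaviour sets are all made up): a 2-face is attached to the triangle $(f, g, fg)$ only when $F(f;g) = F(f);F(g)$, so genuine laxity shows up as an unfilled triangle.

```python
# Toy sketch (made-up data): build the complex described above, attaching the 2-face of a
# composable pair (f, g) only when F(f;g) = F(f);F(g), so laxity = a hole.

def compose_rel(r, s):
    return {(a, c) for (a, b1) in r for (b2, c) in s if b1 == b2}

objects = ["x", "y", "z"]
morphisms = {"f": ("x", "y"), "g": ("y", "z"), "fg": ("x", "z")}
composites = {("f", "g"): "fg"}                   # the only composable pair: f;g = fg

F = {                                             # a made-up lax functor to Rel
    "f": {("a", "b")},
    "g": {("b", "c")},
    "fg": {("a", "c"), ("a", "c2")},              # ('a', 'c2') is an emergent behaviour
}

vertices = set(objects)
edges = {(src, tgt, m) for m, (src, tgt) in morphisms.items()}
faces = {
    (f, g, h)
    for (f, g), h in composites.items()
    if compose_rel(F[f], F[g]) == F[h]            # fill the triangle only where F is strict
}
print(vertices, edges, faces)                     # faces is empty: an unfilled triangle
```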

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:27):

So you could build something like this from your functor (this should also work for other interesting categories that aren't $\mathbf{Rel}$, btw), and obtain a complex that is homotopy equivalent to the one you got, via elementary collapses

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:27):

Then you can study the homology of this complex to detect cycles and other stuff

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:27):

So I recant what I said before: it's not similar at all to the nerve at this stage

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:28):

There are probably a lot of details that don't check out at this stage btw.

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:29):

Moreover, as of now we are building very very simple complexes, with no faces of dimension more than 2. This is "expected", since we are focusing on establishing whether $F(f;g) = Ff;Fg$, which just involves two things, $f$ and $g$

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:30):

My feeling is that we should aim to encode more information about our problem by leveraging on all the higher-dimensions that we aren't using, but I'm really too tired to properly think about this now

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:33):

One thing I seem to like about this approach is that we are completely forgetting about the elements in $F(x)$ etc. We are just recording which morphisms, when composed, fail to be strictly carried over by the functor. We are not looking into the reason why a given composition fails (e.g. the elements in the relevant sets that we aren't hitting)

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:34):

This may make the technique less insightful, but definitely more manageable

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:36):

I'm also curious to see how we can generalize this to tensor products; it doesn't seem obvious. I thought about considering a monoidal category as a bicategory with just one object, but then we have just one vertex around for our simplicial complex, so things fail

view this post on Zulip Fabrizio Genovese (Apr 25 2021 at 22:38):

Ok, I'm off to sleep. I apologize to anyone knowing properly this stuff for the huge amount of BS I probably said, I just hope that the intuition I was trying to formalize is more or less understandable. :smile:

view this post on Zulip Mike Shulman (Apr 25 2021 at 23:19):

Fabrizio Genovese said:

I'd suggest taking your functor $C \to \mathbf{Rel}$, turning it into a functor $D \to C$ for some $D$, and seeing how all the stuff in $D$ sits on $C$

I haven't really been following this thread, but it's known that for any category $C$, the category of lax functors $C \to \mathrm{Span}$ (a.k.a. displayed categories) is equivalent to the slice $\mathrm{Cat}/C$, by a generalized sort of Grothendieck construction. A lax functor to $\mathrm{Rel}$ is a special sort of lax functor to $\mathrm{Span}$, so this seems like the canonical way to turn it into a functor $D \to C$.

view this post on Zulip Mike Shulman (Apr 25 2021 at 23:20):

Also this sort of homological stuff reminds me of things like persistent homology and magnitude homology.

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 09:18):

Mike Shulman said:

Fabrizio Genovese said:

I'd suggest to take your functor CRelC \to \mathbf{Rel}, turn it into a functor DCD \to C for some DD, and see how all the stuff in DD sits on CC

I haven't really been following this thread, but it's known that for any category CC, the category of lax functors CSpanC\to \rm Span (a.k.a. displayed categories) is equivalent to the slice Cat/C{\rm Cat}/C, by a generalized sort of Grothendieck construction. A lax functor to Rel\rm Rel is a special sort of lax functor to Span\rm Span, so this seems like the canonical way to turn it into a functor DCD\to C.

Yes, this was precisely the equivalence I was alluding to, I've been using it quite a lot in my work on Petri nets!

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 09:19):

Mike Shulman said:

Also this sort of homological stuff reminds me of things like persistent homology and magnitude homology.

Again, we are on the same page. It would be very cool to obtain something akin to barcodes for functors, that classify qualitatively how and why a given functor fails to be strict

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 09:20):

The way I intuitively think about persistent homology is "A way to translate geometric information into combinatoric information in a way that filters out 'noise' ". This is conceptually very similar to what we want here

view this post on Zulip Matteo Capucci (he/him) (Apr 26 2021 at 09:32):

Robin Piedeleu said:

But perhaps it is to be expected that emergent effects are also an artefact of the formalism you use to model the problem, just another expression for leaky abstractions.

This is the whole point of Adam's thesis about generative effects: they're a byproduct of leaky abstractions. This is actually a good observation, because 1. it makes the thing relative and 2. it makes you reason in terms of 'hidden information'.
I strongly recommend anyone interested in emergent behaviour to read at least the introduction, the concept of 'veil' is one I really like.

view this post on Zulip Matteo Capucci (he/him) (Apr 26 2021 at 10:03):

So I think it's now my turn to dump my thoughts about this problem here.
First of all, let me mention that there's a quantitative decategorified version of 'emergent behaviour', namely non-linearity. Saying that BB is a lax (monoidal) functor amounts to say it's not linear B(X+Y)B(X)+B(Y)B(X+Y) \neq B(X) + B(Y). This observation leads to the path of studying 'Taylor expansions' of functors. In fact, higher derivatives of a function measure lack of linearity (sort of: you need to recenter at x=0x=0 or talk about 'affinity') at increasing degrees of precision.
Fortunately, there's already a theory of Taylor series for functors, it's called Goodwillie calculus. It's a fine piece of abstract \infty-machinery, but it's quite elegant and beautiful. It works best between \infty-topoi (suffices: 'Goodwillie differentiable \infty-cats), but I believe it can be bent a little to work with less structured objects. It starts with the observation that linearity is akin to a sheaf condition, as @Eigil Rischel mentioned above: in Goodwillie calculus, a linear functor (or 0-excisive) is a functor that preserves homotopy pushouts in a suitable sense. You can extend this definition to higher degress, defining nn-excisive functors. These functors are lex reflective inside the category of all functors, so you can localize a given functor BB to this subcategory.
The result is a (pre)tower BJnBJ1BJ0BB \to \cdots \to J_n B \to \cdots \to J_1 B \to J_0 B (this is a Postnikov tower in the generalized sense) of successive approximations of BB, looking at its linearity in increasing dimension. This is the Taylor tower of BB! Notice you can also form the Whithead tower of BB, which is sort of the dual of the Postnikov tower, thus mimicking the tower of 'rests' of the Taylor series (instead of growing an increasingly less linear picture of BB, we sculpt away an increasingly more linear version of BB)
Under suitable assumptions this pretower converges, i.e. its colimit is exactly BB, as for Taylor series.

What I like about this approach is that it's gradual, so I can have different degrees of 'compositional approximation' of a behaviour. It's still very abstract though... The 1-dimensional shadow of this process is plain old sheafification, and gives us a way to interpret it as a process of linear approximation of a functor (presheaves are non-linear/non-compositional, sheaves are linear). This makes it more manageable and we can still say some things.
For example, if you're lucky enough to have abelian behaviour, you can distill information about its extensionality ('can I distinguish behaviours of the whole by looking at their restrictions to the parts?') and generativity ('is every behaviour of the whole given by behaviours of the parts?') by looking at the first two cohomology groups of its augmented Cech complex (see here).
What's not clear to me at this point is how to represent systems as sites over which behaviours are presheaves. This is the major obstacle @Fabrizio Genovese and I encountered when we tried to attack consensus with sheaf theory... I still don't have a guiding principle for that. Actually there's one thing I didn't try yet: take your structure (a network, say), define an algebra of observable quantities over it, use the dual of that (+ an apparently non-canonically definable topology) as a site. This works for stochastic systems (sort of) or quantum systems (pretty literally).

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 10:07):

There are many words that I don't know here, but it seems an interesting approach. And yes, I remember the pain of trying to describe systems as sites

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 10:08):

In any case, given the multiple directions we already established in this thread, the obvious course of action in better times would be to organize a workshop about it T_T

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 10:10):

Do you reckon there's any hope of having that, at some point? :frown:

view this post on Zulip Matteo Capucci (he/him) (Apr 26 2021 at 10:12):

Fabrizio Genovese said:

In any case, given the multiple directions we already established in this thread, the obvious course of action in better times would be to organize a workshop about it T_T

Alas, totally agree... I feel like (without any experience about it, though) this is exactly the stage at which it would maximally benefit to have a bunch of people together in the same place for some days. I'd attend that!

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 10:17):

At this point I'd attend anything actually, I'm just longing for math-based human contact.

view this post on Zulip Matteo Capucci (he/him) (Apr 26 2021 at 10:19):

I'm really stoked to attend ACT21 and CT21 in person... I'm very lucky they'll be in the countries I already plan to be (and allowed to be) in when they happen. What will the universe cook up this time to disrupt my plans?

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 10:25):

Am I wrong or is Britain's border closed atm? If yes there's no hope for ACT2021 for me

view this post on Zulip Matteo Capucci (he/him) (Apr 26 2021 at 10:26):

Yeah, basically closed

view this post on Zulip Jules Hedges (Apr 26 2021 at 10:27):

I'm not even very optimistic about being able to get from Glasgow to Cambridge in July

view this post on Zulip Jules Hedges (Apr 26 2021 at 10:28):

The England-Scotland border just reopened recently, I think

view this post on Zulip Nathaniel Virgo (Apr 26 2021 at 10:29):

By the way, sorry for the really basic question, but could someone post or link to the definition of "lax pseudofunctor" in this context? I couldn't find much online - does it have another name?

view this post on Zulip Cole Comfort (Apr 26 2021 at 10:29):

Jules Hedges said:

The England-Scotland border just reopened recently, I think

imagine this being uttered in 2019

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 10:30):

I think we are more or less meaning a pseudofunctor whose comparison 2-cells are not required to be invertible

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 10:30):

And we are wildly handwaving the boatload of coherence conditions that have to hold

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 10:31):

There is for sure some formal definition somewhere, even if they are usually difficult to find completely spelled out

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 10:31):

(So we want something for which F(f;g) \to Ff ; Fg or the other way around, and same for the monoidal product if you have one)

view this post on Zulip Jules Hedges (Apr 26 2021 at 10:31):

Nathaniel Virgo said:

By the way, sorry for the really basic question, but could someone post or link to the definition of "lax pseudofunctor" in this context? I couldn't find much online - does it have another name?

A pseudofunctor is a morphism between 2-categories, and "lax pseudofunctor" means that it only laxly preserves horizontal composition. So the idea is that you view your category C as a 2-category with only identity 2-cells, and you view Rel as a (locally posetal) 2-category whose 2-cells are relational inclusions, i.e. you have a 2-cell from the relation R_1 \subseteq X \times Y to R_2 \subseteq X \times Y just when R_1 \subseteq R_2

view this post on Zulip Jules Hedges (Apr 26 2021 at 10:32):

Then a "lax pseudofunctor" C \to \mathbf{Rel} is something satisfying F(g) \circ F(f) \subseteq F(g \circ f)

view this post on Zulip Jules Hedges (Apr 26 2021 at 10:32):

Where the composition on the left is horizontal composition in Rel

view this post on Zulip Jules Hedges (Apr 26 2021 at 10:33):

In other words, if you have (a, b) \in F(f) and (b, c) \in F(g) for some b, then it must be the case that (a, c) \in F(fg). But the converse doesn't hold. (The converse would have to hold for this to be a functor)
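For concreteness, here is a toy sketch in Python of the reachability example (my own made-up instance, not taken from any of the papers mentioned here): two open graphs whose boundary reachability relations compose to the empty relation, while the glued graph has a path from the leftmost to the rightmost boundary.

```python
# Toy sketch: reachability of open graphs as a lax functor to Rel.
# F(f) is the relation "input boundary node u can reach output boundary node v inside f".

def reachable(edges, nodes):
    """Reflexive-transitive closure: all pairs (u, v) with a directed path from u to v."""
    reach = {(u, u) for u in nodes} | set(edges)
    while True:
        new = {(a, c) for (a, b) in reach for (b2, c) in reach if b == b2}
        if new <= reach:
            return reach
        reach |= new

def boundary_relation(edges, nodes, inputs, outputs):
    """Restrict reachability to (input node, output node) pairs: this is F(f)."""
    return {(u, v) for (u, v) in reachable(edges, nodes) if u in inputs and v in outputs}

# open graph f: input {1}, output {'A', 'B', 'C'}, edges 1 -> A and B -> C
f_nodes, f_edges = {1, 'A', 'B', 'C'}, {(1, 'A'), ('B', 'C')}
# open graph g: input {'A', 'B', 'C'}, output {5}, edges A -> B and C -> 5
g_nodes, g_edges = {'A', 'B', 'C', 5}, {('A', 'B'), ('C', 5)}

Ff = boundary_relation(f_edges, f_nodes, {1}, {'A', 'B', 'C'})   # {(1, 'A')}
Fg = boundary_relation(g_edges, g_nodes, {'A', 'B', 'C'}, {5})   # {('C', 5)}
predicted = {(a, c) for (a, b) in Ff for (b2, c) in Fg if b == b2}

# glue along the shared boundary {'A', 'B', 'C'}
Ffg = boundary_relation(f_edges | g_edges, f_nodes | g_nodes, {1}, {5})

print(predicted)         # set(): the parts predict no behaviour
print(Ffg)               # {(1, 5)}: the emergent path 1 -> A -> B -> C -> 5
assert predicted <= Ffg  # laxity: F(g) . F(f) is contained in F(fg)
```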

view this post on Zulip Nathaniel Virgo (Apr 26 2021 at 10:35):

So monoidal products don't come into it at all? (That's what I thought but I wanted to check.)

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 10:35):

No

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 10:35):

That would be a lax-monoidal-lax functor

view this post on Zulip Matteo Capucci (he/him) (Apr 26 2021 at 10:35):

or: lax monoidal functors are lax functors between deloopings

view this post on Zulip Jules Hedges (Apr 26 2021 at 10:35):

Fabrizio Genovese said:

(So we want something for which F(f;g) \to Ff ; Fg or the other way around, and same for the monoidal product if you have one)

Actually my experience in practice is that things are usually strict in the tensor direction, and only lax in the composition direction. Because the tensor of 2 morphisms is generally a "disjoint" or "non-interacting" composition, so there is no emergent behaviour. (But I'd expect things to get gnarly again once you upgrade from monoidal to compact closed)

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 10:36):

Jules Hedges said:

Fabrizio Genovese said:

(So we want something for which F(f;g) \to Ff ; Fg or the other way around, and same for the monoidal product if you have one)

Actually my experience in practice is that things are usually strict in the tensor direction, and only lax in the composition direction. Because the tensor of 2 morphisms is generally a "disjoint" or "non-interacting" composition, so there is no emergent behaviour. (But I'd expect things to get gnarly again once you upgrade from monoidal to compact closed)

Lucky you! In my application domain everything is lax T_T

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 10:37):

(That would be Petri nets with a non-local semantics on them)

view this post on Zulip Jules Hedges (Apr 26 2021 at 10:38):

Ah yeah, you'd have even more headaches if you were also lax in the other direction. For me, I was optimistic that once you've dealt with mere categories, the rest should be straightforward

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 10:39):

In any case, as I was stressing, the problem arises when you want to figure out precisely what the coherence conditions for these functors are. Proving things manually is basically not possible, so to even state that you have a lax-monoidal-lax functor between categories you need to invoke some higher-level result

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 10:40):

Jules Hedges said:

Ah yeah, you'd have even more headaches if you were also lax in the other direction. For me, I was optimistic that once you've dealt with mere categories, the rest should be straightforward

Yeah, it can cause headaches. My go-to strategy has been working closely with people that are much better than me at thinking about these things, such as @fosco, while trying to pick up some insights/techniques along the way

view this post on Zulip Jules Hedges (Apr 26 2021 at 10:41):

This is also one reason I've stuck with talking about semantic functors into Rel, even though going into Span(Set) would be probably better in principle. Far fewer coherence headaches

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 10:41):

Span is incomparably better tho

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 10:41):

Every functor C \to D corresponds to a (lax) functor D \to \textbf{Span}

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 10:42):

I dropped working with \textbf{Rel} very quickly because the \textbf{Span} case is much better behaved, for instance you can prove that in many cases D free symmetric monoidal implies C free symmetric monoidal (which is crucial in my research)

view this post on Zulip Jules Hedges (Apr 26 2021 at 10:42):

I have a fairly intuitive way to think about the difference. Behaviours into Rel are black-box behaviours: the only things you can observe behaving are the boundaries, so all you know about a system is what combinations of boundary behaviours you can simultaneously observe. If you go into Span(Set) then you also have a set of observations you can make about the system itself, and each system behaviour determines its projections onto boundary behaviour

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 10:43):

Precisely. Another way to say this is that in Span you keep track of everything, while in Rel you collapse things. The obvious consequence is that, if you care about freeness, often Rel introduces many new equations you didn't have before, breaking things down

view this post on Zulip Jules Hedges (Apr 26 2021 at 10:43):

Makes sense

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 10:44):

Rel just tells you if things are connected, while Span keeps track of all the different paths connecting them :smile:
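A tiny sketch of that difference in Python (a made-up toy encoding, nobody's official definition): present a span by the multiset of boundary pairs its apex witnesses, and a relation by the underlying set of pairs.

```python
from collections import Counter

def compose_span(R, S):
    """Span composition: one composite witness per compatible pair of witnesses."""
    out = Counter()
    for (a, b1), m in R.items():
        for (b2, c), n in S.items():
            if b1 == b2:
                out[(a, c)] += m * n
    return out

def compose_rel(R, S):
    """Relational composition only remembers whether some witness exists."""
    return set(compose_span(Counter(R), Counter(S)))

R = Counter({('x', 'y'): 2})   # two parallel paths from x to y
S = Counter({('y', 'z'): 1})   # one path from y to z

print(compose_span(R, S))              # Counter({('x', 'z'): 2}) -- both paths survive
print(compose_rel(set(R), set(S)))     # {('x', 'z')} -- Rel only records connectivity
```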

view this post on Zulip Jules Hedges (Apr 26 2021 at 10:44):

On the other hand, coherence in 2 dimensions generally makes me want to quit category theory and run away to become a sailor instead

view this post on Zulip Fabrizio Genovese (Apr 26 2021 at 10:45):

This is also a very well justified opinion. Also because boats are great!

view this post on Zulip Mike Shulman (Apr 26 2021 at 14:59):

Jules Hedges said:

A pseudofunctor is a morphism between 2-categories, and "lax pseudofunctor" means that it only laxly preserves horizontal composition.

Normally people just say "lax functor". A pseudofunctor preserves composition up to isomorphism; a lax functor preserves it up to a noninvertible transformation; a strict functor preserves it up to equality. Once you're lax, there's no room left for pseudo-ness.

view this post on Zulip Mike Shulman (Apr 26 2021 at 15:01):

Jules Hedges said:

On the other hand, coherence in 2 dimensions generally makes me want to quit category theory and run away to become a sailor instead

When I was in grad school, a friend of mine suggested to define an "n-person" as a person who gets a headache when thinking about (n+1)-categories.

view this post on Zulip Mike Shulman (Apr 26 2021 at 15:04):

Fortunately, examples like this are usually not really 2-dimensional. Most of the time it's better to think about double categories than about bicategories, and then the laxness and so on are at the same categorical dimension as for ordinary monoidal categories (although there are more of them). In particular, a lax monoidal lax functor between double categories is pretty easy to define, where the laxness of the functor is with respect to the "loose" arrows, with composition of "tight" arrows being preserved strictly, while the lax constraints for the monoidal structure are tight arrows.

view this post on Zulip Mike Shulman (Apr 26 2021 at 15:05):

BTW, there's also a Grothendieck construction equivalence for double categories.

view this post on Zulip Eigil Rischel (Apr 26 2021 at 15:13):

I had an idea inspired by Goodwillie calculus, which didn't quite work out:
Goodwillie calculus is about approximating your functor with versions that become progressively less "excisive".
So first you have a functor which carries pushouts to pullbacks (a "linear" functor).
Then you have one that carries a cube where every face is a pushout to a "pullback cube".
(Note that here we don't require every face to be a pullback square, just that the whole cube is a limit diagram).
And so on.

So I thought maybe you could resolve a lax functor C \to Rel similarly in less and less compositional terms.
Like, the first step is an actual functor, then we have a functor so that F(fgh) = F(fg)F(h) \cup F(f)F(gh), then one with a similar condition for composites of four morphisms, and so on.
The problem is that if you put g = 1, you recover the normal notion of compositionality.
(If you take f or h the identity, the condition automatically holds for any lax functor).
In the Goodwillie calculus case, if you take any leg of your cube to be an isomorphism, similarly, any functor carries it to a limit diagram.

(It's also not really clear what the n-compositional components of eg the reachability functor should look like)
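To spell out the collapse mentioned above (just re-deriving the remark, nothing new):

```latex
% Setting g = 1 in the proposed degree-2 condition:
F(f \cdot 1 \cdot h) \;=\; F(f \cdot 1)\, F(h) \;\cup\; F(f)\, F(1 \cdot h)
                     \;=\; F(f)\, F(h) \;\cup\; F(f)\, F(h)
                     \;=\; F(f)\, F(h),
% so the condition already forces F(fh) = F(f)F(h) for every composable pair,
% i.e. ordinary strong functoriality -- which is exactly the problem noted above.
```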

view this post on Zulip John Baez (Apr 26 2021 at 15:42):

Jules Hedges said:

Nathaniel Virgo said:

By the way, sorry for the really basic question, but could someone post or link to the definition of "lax pseudofunctor" in this context? I couldn't find much online - does it have another name?

A pseudofunctor is a morphism between 2-categories, and "lax pseudofunctor" means that it only laxly preserves horizontal composition.

view this post on Zulip John Baez (Apr 26 2021 at 15:46):

"Lax pseudofunctor" is not the name for this: please just say "lax functor".

A functor between 2-categories is

- strict if it preserves composition and identities on the nose,
- pseudo (or strong, or weak) if it preserves them up to coherent isomorphism,
- lax if it preserves them only up to a coherent comparison 2-cell that need not be invertible.

So, every pseudofunctor is lax, but not vice versa.

Here's the precise definition of pseudofunctor between 2-categories:

and after that you can see the definition of lax functor between 2-categories.

view this post on Zulip Jules Hedges (Apr 26 2021 at 15:48):

Oops, that's news to me, sorry for the confusion

view this post on Zulip Jules Hedges (Apr 26 2021 at 15:49):

This corresponds in the world of monoidal functors to strict, strong and lax, right?

view this post on Zulip John Baez (Apr 26 2021 at 15:52):

Yes. Mac Lane said "strong", while some say "weak"; it's a bit odd that strong is weaker than strict, but "strong" seems to have won - even I have caved in.

view this post on Zulip Reid Barton (Apr 26 2021 at 15:53):

Some people also say "" for strong/weak.

view this post on Zulip John Baez (Apr 26 2021 at 15:53):

I guess someone working on the Grothendieck construction, maybe him, started talking about "pseudofunctors" from a 1-category to Cat, and then I guess the Australians adopted this much more generally.

view this post on Zulip John Baez (Apr 26 2021 at 15:55):

Reid Barton said:

Some people also say "" for strong/weak.

Yes, the default notion is usually strong when you're talking about monoidal functors, and once you make that clear you can stop saying it. But there are some people for whom the default notion is lax! So it's always good to put in a little extra work to be clear.

view this post on Zulip Spencer Breiner (Apr 26 2021 at 16:21):

I'm late to the party, but I think it would probably be helpful to have a larger list of emergent phenomena to consider. It's not clear to me that these will all fall under the same rubric.

view this post on Zulip Spencer Breiner (Apr 26 2021 at 16:23):

Paths in open graphs is certainly a good example, but lacks the micro/macro aspect that (seems to be?) an important feature of many classes of emergence; e.g., snowflakes or even temperature as a macroscopic quantity. Similarly, prices in markets are usually viewed as emergent, since market actors generally don't know "true" prices.

view this post on Zulip Spencer Breiner (Apr 26 2021 at 16:25):

Other examples involve some sort of balancing that, again, is not obviously present for paths. Examples include ocean waves, sand dunes and zebra stripes.

view this post on Zulip Mike Shulman (Apr 26 2021 at 17:00):

John Baez said:

Yes, the default notion is usually strong when you're talking about monoidal functors, and once you make that clear you can stop saying it. But there are some people for whom the default notion is lax! So it's always good to put in a little extra work to be clear.

However, when you're talking about just plain functors (no monoidalness) between (weak) n-categories, the default should absolutely (pace Benabou) be strong/weak/pseudo. So a "functor of bicategories" means a pseudofunctor. (But a "functor of (strict) 2-categories" probably means a strict functor, although for clarity that should be emphasized.)

view this post on Zulip fosco (Apr 26 2021 at 20:09):

Amar Hadzihasanovic said:

Jules Hedges I think similar ideas to the ones I was writing about on "failure of a morphism to be iso" may apply, just moved up a notch to morphisms in the 2-category of small categories.

A functor F: C \to D is a right adjoint iff

- every functor out of C admits a right Kan extension along F, and
- F preserves all of these right Kan extensions.

(The first is like a lax version of the condition of being a split mono)
I bet you can phrase them both as certain (2?)-functors being equivalences.

So for example any functor that does not have a right Kan extension along F is a witness of its failure to be a right adjoint. So is any right Kan extension that is not preserved by F. Again, the challenge is finding the appropriate structure that such witnesses form.

I see I have been summoned as well, but there's no way I can catch up with the conversation, unless someone wants to propose a summary of it

My eye was caught by this; the same condition can be more efficiently packaged into a "lax orthogonality" request: a functor F has a left adjoint if and only if the right Kan extension of the identity along F exists and is absolute. Explaining why this is a form of laxified orthogonality requires a piece of blackboard, but there's a crew of people who tried to explain the similarities between the two concepts, while at the same time trying to re-enact the most powerful tool of category theory, i.e. the small object argument.

More, when I read the rest of the conversation (possibly, in 2030)

view this post on Zulip Amar Hadzihasanovic (Apr 26 2021 at 20:12):

@fosco Ah, yes, I am aware of the fact that that condition is (necessary and) sufficient for a functor to be right adjoint -- although I do not know why that's an “orthogonality” condition and you should explain it to me :)

view this post on Zulip Jade Master (Apr 26 2021 at 20:14):

@Matteo Capucci (he/him) what is the title of Adam's thesis about the veil? I wanted to find it the other day but I couldn't remember.

view this post on Zulip Amar Hadzihasanovic (Apr 26 2021 at 20:17):

In the message above I focussed on the "less simple" but "unbiased" condition because I was trying to think of a way to give some structure to the failure of a functor to be right adjoint. Supposing that F is not a right adjoint, the identity will not have an absolute right Kan extension along F; that's an obstruction. What about other functors? Perhaps there are cases in which the identity is essentially the only obstruction, in the sense that any functor that's not an equivalence does have an (absolute) right Kan extension along F, and others in which there are more?

view this post on Zulip Jade Master (Apr 26 2021 at 20:19):

@Matteo Capucci (he/him) I'm also curious about the Taylor expansion of a functor that you mentioned. How are those terms in the Postnikov tower defined?

view this post on Zulip Amar Hadzihasanovic (Apr 26 2021 at 20:19):

I may be completely off track, and also it may be that this idea can be expressed more neatly in the “laxified orthogonality” setup that you mention :)

view this post on Zulip Amar Hadzihasanovic (Apr 26 2021 at 20:20):

(Sorry @Jade Master for the 'crossing' conversation :sweat_smile: )

view this post on Zulip Jade Master (Apr 26 2021 at 20:22):

@Amar Hadzihasanovic no worries :) it's a huge thread

view this post on Zulip Amar Hadzihasanovic (Apr 26 2021 at 20:39):

I mentioned the condition of “being a split mono” as a decategorified, simpler thing on which this idea could be tested. If f: x \to y is a morphism, then morphisms g: x \to z that factor through f form a cosieve, so the identity on x does iff all of them do, which is why both conditions are equivalent to “f is a split mono”. But looking at all morphisms is more interesting than looking at just the identity if we want to measure the failure of f to be a split mono.

For example, for a function f of sets, I imagine there could be a sense in which functions g that are identical to f except that f(x) = f(y) and g(x) \neq g(y) for a single pair of elements (x,y) are “minimal obstructions” to f being a (split) monomorphism...

view this post on Zulip Amar Hadzihasanovic (Apr 26 2021 at 20:42):

So e.g. a function which is injective except on a pair of elements would have “a single minimal obstruction” and so be “close to being injective”.
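As a throwaway illustration (my own toy code, nothing more): counting the unordered pairs that a function merges gives one crude version of "number of minimal obstructions" to injectivity.

```python
from itertools import combinations

def collisions(f, domain):
    """Unordered pairs of distinct elements that f fails to separate."""
    return {frozenset({x, y}) for x, y in combinations(domain, 2) if f(x) == f(y)}

dom = range(4)
almost_injective = {0: 'a', 1: 'b', 2: 'c', 3: 'c'}.get   # merges only 2 and 3
constant = lambda x: 'a'

print(len(collisions(almost_injective, dom)))   # 1 -- "close to being injective"
print(len(collisions(constant, dom)))           # 6 -- as far from injective as possible
```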

view this post on Zulip Amar Hadzihasanovic (Apr 26 2021 at 20:46):

So I'm thinking that perhaps there's a way to give structure to these obstructions in a way that quantifies how far a morphism is from being split mono, epi, invertible; and, up one dimension in the n-categorical ladder, how far a morphism is from being a left/right adjoint, an equivalence...

view this post on Zulip fosco (Apr 26 2021 at 21:07):

Jade Master said:

Matteo Capucci (he/him) I'm also curious about the Taylor expansion of a functor that you mentioned. How are those terms in the Postnikov tower defined?

A Postnikov tower is just(TM) the result of a factorization with respect to an infinitary system of torsion theories: https://arxiv.org/abs/1501.04658

view this post on Zulip Matteo Capucci (he/him) (Apr 26 2021 at 21:44):

Jade Master said:

Matteo Capucci (he/him) what is the title of Adam's thesis about the veil? I wanted to find it the other day but I couldn't remember.

Here's a link

view this post on Zulip Matteo Capucci (he/him) (Apr 26 2021 at 21:46):

Jade Master said:

Matteo Capucci (he/him) I'm also curious about the Taylor expansion of a functor that you mentioned. How are those terms in the Postnikov tower defined?

By successive localization at the subcategory of n-excisive functors. Since n-excisive functors are sheaves for a particular Grothendieck topology (the n-th Weiss topology), you can see this as sheafification at increasingly less stringent topologies.

view this post on Zulip Matteo Capucci (he/him) (Apr 26 2021 at 21:46):

Or, as Fosco pointed out, it corresponds to a certain infinitary factorization system.

view this post on Zulip Matteo Capucci (he/him) (Apr 26 2021 at 21:47):

Here's maybe a useful intuition

view this post on Zulip Jade Master (Apr 26 2021 at 21:57):

Thanks @fosco and @Matteo Capucci (he/him). It would be helpful to me to see an example. Do you know what the Taylor expansion of the free category functor
F : Grph \to Cat would be?

view this post on Zulip fosco (Apr 26 2021 at 21:59):

At this time of the night, I know nothing :smile:

view this post on Zulip Matteo Capucci (he/him) (Apr 26 2021 at 22:09):

I think it's a non-trivial task to compute that... Maybe when you're here in Glasgow we could have a go at it

view this post on Zulip Jade Master (Apr 26 2021 at 22:38):

That's fair lol. I will ask again some other time maybe :)

view this post on Zulip John Baez (Apr 26 2021 at 23:54):

Spencer Breiner said:

Paths in open graphs is certainly a good example, but lacks the micro/macro aspect that (seems to be?) an important feature of many classes of emergence; e.g., snowflakes or even temperature as a macroscopic quantity. Similarly, prices in markets are usually viewed as emergent, since market actors generally don't know "true" prices.

I think most of these are a lot harder to understand using category theory (or even at all) than paths in open graphs. I think our category-theoretic understanding of emergent behavior is so weak that it's good to think about a really easy example, and paths in open graphs should be one.

view this post on Zulip Bryce Clarke (Apr 27 2021 at 01:18):

Fabrizio Genovese said:

I dropped working with \textbf{Rel} very quickly because the \textbf{Span} case is much better behaved, for instance you can prove that in many cases D free symmetric monoidal implies C free symmetric monoidal (which is crucial in my research)

You probably already know, but every faithful functor A -> B corresponds to a lax functor B -> Rel, and conversely.
So I agree that it's likely better to work with Span, but it is not that much different.

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 08:03):

If you care about freeness, it really is :smile:

view this post on Zulip Jules Hedges (Apr 27 2021 at 10:19):

Spencer Breiner said:

Paths in open graphs is certainly a good example, but lacks the micro/macro aspect that (seems to be?) an important feature of many classes of emergence; e.g., snowflakes or even temperature as a macroscopic quantity. Similarly, prices in markets are usually viewed as emergent, since market actors generally don't know "true" prices.

Yees. So, this framework (lax functors to Rel) I think (again - without real evidence!) is useful for talking about behaviours of "circuit-like" open systems, which I think are more or less the things that come under the umbrella of systems theory or systems engineering. I would call these "emergent behaviours" rather than "emergent effects", since your model is making a totally explicit separation between systems and behaviours. So I think this won't be able to talk about for example emergent effects in statistical mechanics (eg. economic models inspired by statistical physics), continuum mechanics, cellular automata, etc

view this post on Zulip Jules Hedges (Apr 27 2021 at 10:21):

The setting here is that you have a widget - which is discrete and has a set of behaviours it can do, and in your other hand you have a gizmo, which also has a set of behaviours it can do, and then you plug the widget and the gizmo together and suddenly it has new behaviours that it refused to do before you plugged it together

view this post on Zulip Jules Hedges (Apr 27 2021 at 10:22):

The discreteness comes from the fact that your class of open systems forms a category, and composition in categories is inherently discrete - getting around that is a different one of my grand problems (which is probably less difficult than this one)

view this post on Zulip Jules Hedges (Apr 27 2021 at 10:24):

For other kinds of emergent effects outside of systems theory, like the ones you mentioned, I don't even know how to ask the question in category-theoretic terms

view this post on Zulip Morgan Rogers (he/him) (Apr 27 2021 at 10:26):

Indeed, working out what kind of limit or completion operation is appropriate and realistic is always a challenging problem

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:09):

One reason why I am more inclined to think about functors to Span is that there is an equivalence of categories
\textbf{Cat}/C \simeq [C,\textbf{Span}], where the functors in the right hand side are lax

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:09):

Crucially, I'm not seeing \textbf{Cat} as a 2-category here. If so, you need to replace \textbf{Span} with \textbf{Prof} and do some other tricks

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:10):

So, in studying compositionality issues for functors to Span, we can characterize compositionality issues of any functor D \to C, if you wish

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:11):

I concede that the failure of strictness of a functor C \to \textbf{Span} may translate into some other property that the functor D \to C lacks, tho

view this post on Zulip Nathanael Arkor (Apr 27 2021 at 11:13):

Fabrizio Genovese said:

One reason why I am more inclined to think about functors to Span is that there is an equivalence of categories
\textbf{Cat}/C \simeq [C,\textbf{Span}], where the functors in the right hand side are lax

Slightly tangential, but is there a corresponding equivalence for coslices over Cat, instead of slices?

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:14):

That I don't know. I stumbled into this equivalence almost by chance while talking with @fosco (meaning that he made me aware of its existence) and we realized it's central to study Petri nets extensions. That's where my knowledge stops :stuck_out_tongue:

view this post on Zulip Jules Hedges (Apr 27 2021 at 11:15):

Yeah, it's indeed extremely neat

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:15):

(My feeling is that there will be still a lot of bunnies coming out of a lot of hats in Petri land using this thing, so I'm focusing on it at the moment without looking much farther)

view this post on Zulip Jules Hedges (Apr 27 2021 at 11:16):

As I learned the other day, the equivalent question to "detect when a lax functor is strong" on the other side is "detect whether a functor is exponentiable". I don't know whether that's an easier or harder problem, but at least now there's 2 different attack surfaces

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:16):

But hey, the bottom line is that if someone wants to figure out what "doing the (co)homology of this thing" means, I won't complain at all

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:17):

Jules Hedges said:

As I learned the other day, the equivalent question to "detect when a lax functor is strong" on the other side is "detect whether a functor is exponentiable". I don't know whether that's an easier or harder problem, but at least now there's 2 different attack surfaces

Did you see @Amar Hadzihasanovic's tweetstorm about it? It seems related

view this post on Zulip Jules Hedges (Apr 27 2021 at 11:17):

I am intending to do exactly that. Since I need to learn basic AlgTop as I go along, this is a bit of a long shot

view this post on Zulip Jules Hedges (Apr 27 2021 at 11:17):

Yeah

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:18):

Jules Hedges said:

I am intending to do exactly that. Since I need to learn basic AlgTop as I go along, this is a bit of a long shot

I actually have a study group about TDA at the moment. It's in Italian but we can switch to English if you want to join

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:18):

We are using this: http://people.maths.ox.ac.uk/nanda/cat/TDANotes.pdf
And we got to chapter 3 (due next week)

view this post on Zulip Jules Hedges (Apr 27 2021 at 11:19):

I'm in fact following Vidit's video lectures, I didn't look at the lecture notes so far. I won't join you, I prefer to do it at my own speed

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:20):

Makes sense!

view this post on Zulip Jules Hedges (Apr 27 2021 at 11:21):

For me it actually makes sense to try to attack a problem where I have to learn the basics as I go along, because that's in fact the only way I can ever motivate myself to learn anything new

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:22):

I have something like a ton of problems where "maybe homology can say something clever about this", so it was about time for me to learn it properly and see what happens. I guess we are in similar boats!

view this post on Zulip fosco (Apr 27 2021 at 11:22):

Nathanael Arkor said:

Fabrizio Genovese said:

One reason why I am more inclined to think about functors to Span is that there is an equivalence of categories
\textbf{Cat}/C \simeq [C,\textbf{Span}], where the functors in the right hand side are lax

Slightly tangential, but is there a corresponding equivalence for coslices over Cat, instead of slices?

I'd say yes: \mathsf{Cat}/C \cong Lax[C, {\sf Prof}]_n where the n stands for "normal" and the functors are lax; dually, C/\mathsf{Cat} \cong Oplax[C, {\sf Prof}]_n, same convention: normal and oplax functors

view this post on Zulip fosco (Apr 27 2021 at 11:22):

In the end, \mathcal C/C is just(TM) (C/\mathcal C^{op})^{op}

view this post on Zulip fosco (Apr 27 2021 at 11:23):

(I will be grateful for life to anyone who will summarise me the content of this thread)

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:23):

The content of this thread is the following: we want a neat way to measure how lax a lax functor is

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:23):

That is, how much it fails to be strict/strong

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:24):

Where "how much" can be understood quantitatively or qualitatively; at the moment we'd be happy with either.

view this post on Zulip fosco (Apr 27 2021 at 11:24):

ok, now I can make sense of some of the keywords I saw

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:25):

If you want it to be extra spicy, you can throw in also monoidality and suppose that the functor is lax-monoidal-lax. In that case we'd like to have a measure for both composition and monoidal product

view this post on Zulip fosco (Apr 27 2021 at 11:25):

cohomology was invented for this; as well as resolutions; as well as the filtration/decomposition of something in a "tower" of simple pieces; as well as...

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:26):

I tried to have a look at the tower paper by you, Fiorenza and someone else that you referenced

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:26):

It is quite rare that I don't understand even a single sentence in something I read, but that was indeed the case (and entirely my fault, obviously)

view this post on Zulip fosco (Apr 27 2021 at 11:27):

:mind-blown:

view this post on Zulip fosco (Apr 27 2021 at 11:28):

the idea is pretty simple though: factoring wrt a countable family of nested factorisation systems is what geometers call something something derived categories

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:29):

Unfortunately I am not familiar at all with factorization systems, I just have a rough understanding of the general idea

view this post on Zulip fosco (Apr 27 2021 at 11:32):

is it the thing we're after? I don't know. In a sense yes, the failure of a map to be an isomorphism (this is the case we're in) is measured by an homo{topy,logy} group

view this post on Zulip fosco (Apr 27 2021 at 11:34):

the problem is that in a sense there seems to be no such thing as "non additive homology objects" measuring the failure of a functor to do something or be something, other than very general, i.e. useless, results

view this post on Zulip fosco (Apr 27 2021 at 11:34):

I can't find the place where I first read this

view this post on Zulip Jules Hedges (Apr 27 2021 at 11:35):

fosco said:

is it the thing we're after? I don't know. In a sense yes, the failure of a map to be an isomorphism (this is the case we're in) is measured by an homo{topy,logy} group

You mean, you can make this claim precise but only if your map lives in an abelian (?) category?

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:35):

fosco said:

the problem is that in a sense there seems to be no such thing as "non additive homology objects" measuring the failure of a functor to do something or be something, other than very general, i.e. useless, results

So, somewhere in this thread and on twitter, it has been pointed out that measuring the failure of strictness can be reduced to measuring how much a functor fails to have an adjoint

view this post on Zulip fosco (Apr 27 2021 at 11:37):

Jules Hedges said:

fosco said:

is it the thing we're after? I don't know. In a sense yes, the failure of a map to be an isomorphism (this is the case we're in) is measured by an homo{topy,logy} group

You mean, you can make this claim precise but only if your map lives in an abelian (?) category?

Yes and no; I don't know what I'm saying. What I mean is: you have a functor between two 2-categories of some kind

view this post on Zulip fosco (Apr 27 2021 at 11:37):

we know it is a lax functor; there are maps witnessing that

view this post on Zulip fosco (Apr 27 2021 at 11:37):

are they invertible? Well, in full generality what more can you say apart "let's use Yoneda lemma and check"?

view this post on Zulip fosco (Apr 27 2021 at 11:37):

It's not a rhetorical question

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:39):

fosco said:

are they invertible? Well, in full generality what more can you say apart "let's use Yoneda lemma and check"?

Indeed. We were restricting to Rel or Span as codomains

view this post on Zulip fosco (Apr 27 2021 at 11:43):

What I mean is, in general when you want to measure how far is X from being Y, you encode "being Y" in the vanishing of some functor over the class of all X's

view this post on Zulip fosco (Apr 27 2021 at 11:44):

to do this you have to translate the property of being Y into something relying on additional structure; for homotopy theorists, a model structure of some kind

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:46):

Hm. I don't know what the model structure would be here tho. More often than not we don't have a precise idea of why things fail

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:47):

We just say that "composition is not preserved strictly because of emerging behavior" but characterizing this precisely depends on a specific application domain

view this post on Zulip fosco (Apr 27 2021 at 11:47):

What is "emerging behaviour"?

view this post on Zulip fosco (Apr 27 2021 at 11:48):

Mathematically, I mean. I lost all hopes to understand the gibberish about emergent stuff outside of maths

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:48):

It's when something is more than its parts. For example, we can associate to a Petri net an SMC describing its executions. Now, you can glue Petri nets together using a formalism like decorated cospans, but if you do this, the functor to SMCs is just lax

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:49):

The idea is that if you think about the SMC representing reachability, that is whether I can use transitions to move tokens around from configuration A to some configuration B, when you glue nets together you may get loops

view this post on Zulip Fabrizio Genovese (Apr 27 2021 at 11:50):

And so the resulting SMC is strictly bigger than the SMCs of the components

view this post on Zulip Jules Hedges (Apr 27 2021 at 11:50):

The whole reason I ask this question since years is that it's pinning down one aspect of the mysterious "emergent effects" that people talk all kinds of shit about, and makes it totally precise in a way that you can ask to any mathematician

view this post on Zulip fosco (Apr 27 2021 at 11:51):

Another thing: sometimes lax functors with nice codomains are equivalent to "good objects" in another category; for example this happens with Cat/C being just normal lax functors C -> Prof

view this post on Zulip fosco (Apr 27 2021 at 11:52):

provided it is normal, a lax functor C -> Prof is just(TM) a 1-functor with codomain C

view this post on Zulip fosco (Apr 27 2021 at 11:52):

now now, F is a pseudofunctor IFF the associated 1-functor is a fibration

view this post on Zulip fosco (Apr 27 2021 at 11:53):

(am I remembering the Grothendieck construction right, or am I just ridiculing myself? https://ncatlab.org/nlab/show/Grothendieck+fibration#fibrations_versus_pseudofunctors )

view this post on Zulip fosco (Apr 27 2021 at 11:55):

now, are fibrations the fibrations of a model structure on Cat? Do you really need the iso? https://ncatlab.org/nlab/show/isofibration

view this post on Zulip fosco (Apr 27 2021 at 11:56):

let's pretend they are

view this post on Zulip fosco (Apr 27 2021 at 11:59):

now for the category theory gibberish: let \mathcal K be a concrete 2-category with a Yoneda structure on it, with presheaf construction P.

Conjecture: there is an equivalence between \mathcal K/X and normal lax functors UX \to Kl(P), where UX is the underlying category of X \in \mathcal K and Kl(P) is the Kleisli bicategory of P

view this post on Zulip fosco (Apr 27 2021 at 12:00):

In a sense, I cannot state something more general; in a sense, @Nathanael Arkor will see where this is going

view this post on Zulip fosco (Apr 27 2021 at 12:00):

and probably prove it for just a KZ doctrine?

view this post on Zulip fosco (Apr 27 2021 at 12:01):

(...I'm worried this doesn't have any chance to be plausible, but I wanted to offer the 2-category theorist POV -if I can ever claim myself to be one, which I can't-)

view this post on Zulip Nathanael Arkor (Apr 27 2021 at 12:05):

I think I see how one proves this statement abstractly. A monad in a bicategory K is a lax functor from the terminal category into K, and a monad in Span(Cat) is a small category. Therefore, a lax functor from the terminal category into Span(Cat) is a small category. In a similar way, taking a lax functor from a non-terminal category yields a slice category. Then, using Prof(Cat) = Mod(Span(Cat)), and the (double categorical) universal property of Mod, we get that lax functors into Span(Cat) are normal lax functors into Prof(Cat). This should work for arbitrary choices of K (assuming one is willing to work with virtual double categories to avoid concerns about composition of spans).

view this post on Zulip Nathanael Arkor (Apr 27 2021 at 12:07):

The corresponding statement @fosco mentions for Kleisli bicategories of KZ doctrines holds when Kl(P) coincides with Mod(Span(K)). (I would hope this is usually the case, but I don't know.)

view this post on Zulip John Baez (Apr 27 2021 at 16:17):

@fosco - if you're having trouble seeing how laxity represents "emergent behavior", you could look at page 23 of Open Petri nets, which shows a picture of an explicit example. Namely, there's a path in a composite of open graphs that's not a composite of paths in those graphs.

view this post on Zulip fosco (Apr 27 2021 at 16:18):

The problem might be at an even more elementary level! But yes, thanks, I'll take a look

view this post on Zulip John Baez (Apr 27 2021 at 16:19):

No, it's a very simple picture which even a 2-category theorist can understand.

view this post on Zulip John Baez (Apr 27 2021 at 16:19):

This example is the sort of thing Jules was talking about here:

Jules Hedges said:

It is time for me to reiterate my grand challenge to applied category theory, stretching the purpose of the off-topic thread to breaking point. Typically when you have some category CC whose objects are "boundaries", morphisms are "open systems" and composition is coupling 2 open systems along a boundary, typically "behaviours" form a lax pseudofunctor F:CRelF : C \to \mathbf{Rel}. This says that every boundary xx has some set F(x)F(x) of possible behaviours, and every open system f:xyf : x \to y has a set F(f)F(x)×F(y)F(f) \subseteq F(x) \times F(y) of boundary behaviours that it can perform. The laxator says that if you behaviours (a,b)F(f)(a, b) \in F(f) and (b,c)F(g)(b, c) \in F(g) of two composable open systems that agree on the behaviour of the common boundary then they compose to a behaviour (a,c)F(fg)(a, c) \in F(fg) of the complex system; but the converse typically isn't true: fgfg exhibits emergent behaviours that don't arise from individual behaviours of ff and gg. Right now the typical next step is to shrug and then go and do something else. I would like to associate some mathematical object that "describes" or "measures" the failure of the laxator to be an isomorphism, aka describes or measures the emergent effects. Several people have mumbled something about cohomology after I talked about this problem

view this post on Zulip Joachim Kock (Apr 28 2021 at 00:32):

This is a very interesting thread, full of interesting questions. I wish I could understand it better.

But here is some input anyway. Maybe it is completely misguided. I sometimes feel like somebody with a hammer, seeing nails everywhere.

view this post on Zulip Joachim Kock (Apr 28 2021 at 00:33):

I will eventually say something about Jules' grand challenge, but first some preparations.

view this post on Zulip Joachim Kock (Apr 28 2021 at 00:35):

It has already been mentioned several times that lax functors C \to \mathbf{Span} correspond to functors D \to C, and also that pseudo-functors C \to \mathbf{Span} correspond to discrete Conduché fibrations D \to C. It also seems clear that for the envisioned applications, pseudo-functors are not quite enough.

I just wanted to point you to another possible direction of generalisation from pseudo-functors: namely instead of looking at discrete Conduché fibrations from a category D, allow D to be more generally a decomposition space. A decomposition space is very much like a category, except that composition of arrows is not always well defined, but decomposition of arrows is well defined in a certain sense.

view this post on Zulip Joachim Kock (Apr 28 2021 at 00:37):

I think this should resonate with the general ideas of quantifying lack of compositionality, because in many situations it is easier to take things apart than it is to put things together, and it seems plausible that this ability should enter the analysis of compositionality.

view this post on Zulip Joachim Kock (Apr 28 2021 at 00:37):

I am trying to advocate the idea of decompositionality!

view this post on Zulip Joachim Kock (Apr 28 2021 at 00:38):

Unfortunately I now have ten things I would like to say at the same time. It is a rather long story I try to make short:

view this post on Zulip Joachim Kock (Apr 28 2021 at 00:42):

To say what a decomposition space is, it is necessary to take a simplicial viewpoint. Jules already started defining simplicial objects, and there were already a few mentions of nerves of categories. In the nerve of a category we have X_2 \simeq X_1 \times_{X_0} X_1 (the Segal condition) expressing the fact that the 2-simplices are precisely the composable pairs of arrows. A decomposition space is a simplicial set where this condition does not necessarily hold, but where there is another condition to help tame the anarchy of general simplicial sets. The Segal condition says that certain pushouts in \Delta are sent to pullbacks. The decomposition-space condition says that certain other pushouts in \Delta are sent to pullbacks. These pushouts are the active-inert pushouts in \Delta. I don't want to explain what these are here. But active maps correspond to the inner parts of a simplicial set, the inert maps correspond to the outer parts. In the end the decomposition-space axiom can be interpreted as saying that the possibilities of decomposing an edge do not depend on anything outside this edge. It's a locality condition.

view this post on Zulip Joachim Kock (Apr 28 2021 at 00:46):

Rather than making all this precise, let me give the motivation: incidence coalgebras. This is an important tool in combinatorics, devised to express how structures decompose into smaller structures. Classically it is defined for posets, but a lot of the theory works the same for categories: any category D has an incidence coalgebra: its underlying vector space is spanned by the arrows of D, denoted \mathbb{Q}_{D_1}. The comultiplication \Delta: \mathbb{Q}_{D_1} \to \mathbb{Q}_{D_1} \otimes \mathbb{Q}_{D_1} is defined by

\displaystyle \Delta(f) = \sum_{b\circ a = f} a \otimes b

That is, sum over all possible two-step factorisations of f and return the two components. (If finiteness of sums is important, some conditions should be imposed on the category D to ensure the sum is finite. This is important at the level of vector spaces, but not important at the objective level of slice categories, but that's a longer story.)
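A throwaway computation of this comultiplication for a tiny poset (my own toy code, only to make the formula concrete):

```python
from itertools import product
from collections import Counter

def arrows(elements, leq):
    """All arrows x <= y of the poset, as pairs (x, y)."""
    return [(x, y) for x, y in product(elements, repeat=2) if leq(x, y)]

def comultiply(f, elements, leq):
    """Delta(f): sum over factorisations of x <= y through an intermediate z."""
    x, y = f
    return Counter({((x, z), (z, y)): 1 for z in elements if leq(x, z) and leq(z, y)})

elements = [0, 1, 2]
leq = lambda a, b: a <= b

print(arrows(elements, leq))       # the six arrows of the linear order 0 <= 1 <= 2
print(comultiply((0, 2), elements, leq))
# Counter({((0, 0), (0, 2)): 1, ((0, 1), (1, 2)): 1, ((0, 2), (2, 2)): 1})
```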

view this post on Zulip Joachim Kock (Apr 28 2021 at 00:49):

Now let us write the formula in terms of the nerve of D:

\displaystyle \Delta(f) = \sum_{\sigma \in X_2 \mid d_1(\sigma) = f} d_2 (\sigma) \otimes d_0(\sigma)

The 2-simplex \sigma is just a commutative triangle in D, and its long edge d_1(\sigma) is required to be f. Now d_2(\sigma) and d_0(\sigma) are the two short edges, so it is precisely the same formula as before.

The point is that now the formula no longer refers to composition of arrows, so now it makes sense for any simplicial set. But it is no longer guaranteed that the comultiplication is coassociative then. The decomposition-space axioms are designed precisely to ensure that the comultiplication is coassociative.

view this post on Zulip Joachim Kock (Apr 28 2021 at 00:54):

(So why incidence coalgebras? Well, to decompose objects is clearly a nice thing to do, but more importantly incidence coalgebras are the setting for the theory of Möbius functions. Sorry for throwing in yet another word without explaining it properly. Let me just say that Möbius functions count nondegenerate simplices in all dimensions, with alternating signs depending on their dimension. They encode various kinds of overcounting/undercounting principles, such as the inclusion/exclusion principle, antipodes of Hopf algebras, inversion of power series -- and Euler characteristics! Indeed the Euler characteristic of the nerve of a poset D is the Möbius function evaluated on the interval \bot * D * \top (the poset with bottom and top element added).)

view this post on Zulip Joachim Kock (Apr 28 2021 at 00:54):

But now I think I should give an example of a decomposition space which is not a category (and an example of a CULF functor).

view this post on Zulip Joachim Kock (Apr 28 2021 at 00:58):

Discrete Conduché fibrations are also called CULF functors: that stands for Conservative and Unique Lifting of Factorisations. Conservative here means that if an arrow is sent to an identity arrow in C, then it is already an identity arrow in D, and unique lifting of factorisations says that for an arrow f in D, for each factorisation of its image in C there is also a unique factorisation of f itself lying over it. The CULF condition has a nice simplicial interpretation: a functor is CULF iff the corresponding simplicial map (on the nerves) is cartesian on active maps. The first active maps are s_0 : X_0 \to X_1 (which for nerves of categories pick out identity arrows), and the second active map is d_1 : X_2 \to X_1 (which for nerves of categories is composition of arrows). Cartesian means cartesian in the sense of natural transformations (recall that a simplicial map is a natural transformation between functors \Delta^{\mathrm{op}} \to \mathbf{Set}), which in turn means the naturality squares are pullbacks. The condition ensures that there is induced a coalgebra homomorphism on the incidence coalgebras.
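Drawn out (just restating the condition; labelling the vertical maps as the components of the nerve map is my own choice of notation):

```latex
% CULF: the naturality squares on active maps are pullbacks, e.g. for d_1
\begin{array}{ccc}
D_2 & \xrightarrow{\;d_1\;} & D_1 \\
\downarrow & & \downarrow \\
C_2 & \xrightarrow{\;d_1\;} & C_1
\end{array}
% This square being a pullback says: an arrow of D together with a factorisation of its
% image in C determines a unique 2-simplex (factorisation) upstairs -- unique lifting of
% factorisations.  The analogous square on s_0 : X_0 \to X_1 gives conservativity.
```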

view this post on Zulip Joachim Kock (Apr 28 2021 at 01:00):

Consider the following coalgebra of graphs, which is called the chromatic Hopf algebra, and which plays a role in graph colouring and in the theory of symmetric functions. The underlying vector space is spanned by isoclasses of graphs (say simple, undirected graphs). A graph G with vertex set V is comultiplied by summing over all splittings of V into two disjoint subsets and then inducing a graph structure on each subset:

\displaystyle \Delta(G) = \sum_{A+B=V} G|A \otimes G|B
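Again just to make the formula concrete, a toy computation (my own code; it keeps the restricted graphs with their vertex labels, whereas the Hopf algebra proper takes isoclasses):

```python
from itertools import combinations
from collections import Counter

def restrict(graph, part):
    """Induced subgraph on a subset of the vertices."""
    V, E = graph
    return (frozenset(part), frozenset(e for e in E if e <= part))

def comultiply(graph):
    """Delta(G): sum over ordered splittings (A, B) of the vertex set."""
    V, E = graph
    out = Counter()
    for r in range(len(V) + 1):
        for A in map(frozenset, combinations(sorted(V), r)):
            out[(restrict(graph, A), restrict(graph, V - A))] += 1
    return out

# the path graph a - b - c
G = (frozenset('abc'), frozenset({frozenset('ab'), frozenset('bc')}))
for (left, right), mult in comultiply(G).items():
    print(sorted(left[0]), [sorted(e) for e in left[1]], '|', sorted(right[0]), mult)
```

Note that the terms only remember the two induced subgraphs, never the edges running between A and B: that is exactly why the splitting cannot be reconstructed from its parts, as explained below.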

view this post on Zulip Joachim Kock (Apr 28 2021 at 01:04):

This is a very typical comultiplication formula for combinatorial Hopf algebras, but it is not the incidence coalgebra of a poset or a category. But it is the incidence coalgebra of a decomposition space X:

X_0 is a singleton

X_1 is the set of (isoclasses of) graphs (as required if we want the incidence coalgebra to be spanned by graphs)

X_2 is the set of (isoclasses of) graphs with a splitting of the vertex set (as required since the sum in the comultiplication formula is over the set of such splittings)

X_k is the set of (isoclasses of) graphs with a splitting of the vertex set into k parts (allowed to be empty).

The inner face maps join two adjacent parts. The outer face maps delete outer parts. The degeneracy maps insert empty parts.

view this post on Zulip Joachim Kock (Apr 28 2021 at 01:06):

This simplicial set is not the nerve of a category. This is easy to see: a graph with a vertex splitting cannot be reconstructed from knowing the two parts (because all info about the connecting edges is lost in the splitting process). In other words, graphs cannot be composed, but they can be decomposed: a graph is more than its parts.
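
To make the non-reconstructibility concrete, here is a tiny check (my own sketch, same graph encoding as in the snippet above): two different graphs on {1,2,3,4}, one with a bridge between the parts and one without, restrict to exactly the same pair of induced subgraphs under the splitting {1,2} + {3,4}, so a 2-simplex is not determined by its two short edges.

```python
def induce(edges, subset):
    # Induced edge set on a subset of vertices (edges are 2-element frozensets).
    return {e for e in edges if e <= subset}

A, B = frozenset({1, 2}), frozenset({3, 4})

without_bridge = {frozenset({1, 2}), frozenset({3, 4})}
with_bridge    = {frozenset({1, 2}), frozenset({3, 4}), frozenset({2, 3})}

# The two graphs differ, but their restrictions to the two parts agree:
assert induce(without_bridge, A) == induce(with_bridge, A)
assert induce(without_bridge, B) == induce(with_bridge, B)
print("two distinct 2-simplices with the same pair of short edges")
```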

The simplicial set XX is a decomposition space: intuitively (since I didn't give the detailed pullback conditions) it is because the ways to split a certain part in a kk-split graph does not depend on the content of the other parts.

view this post on Zulip Joachim Kock (Apr 28 2021 at 01:08):

This decomposition space XX has a CULF map to the decomposition space of discrete graphs (meaning graphs without edges): simply return the underlying vertex set. This is CULF because the shape of the comultiplication only depends on the underlying vertex set. (The incidence coalgebra of the discrete case is the so-called binomial Hopf algebra, since its structure constants are the binomial coefficients. This decomposition space is actually Segal: a set with a splitting can be reconstructed from knowing the two parts, simply by disjoint union.)

view this post on Zulip Joachim Kock (Apr 28 2021 at 01:09):

Coming back to CULF maps: their purpose is to preserve decomposition of objects. They were studied by Lawvere in his approach to dynamics in his 1986 preprint State categories and response functors -- a classic in applied category theory and compositionality! He used it in the situation where the codomain CC encodes a notion of time, DD encodes a notion of processes, and the CULF functor is a notion of duration.

view this post on Zulip Joachim Kock (Apr 28 2021 at 01:12):

Lamarche suggested that the category of CULF functors over CC should be a topos. Johnstone found a counter-example, and then came an analysis of which CC the conjecture is true for, with results by Johnstone, Niefield, Bunge, Fiore. The funny thing is that Lamarche's conjecture turns out actually to be true --- if you just allow the domain DD to be a decomposition space instead of merely a category! This is a theorem by David Spivak and myself. (David's motivation for this was aerospace security, but I doubt the theorem was ever helpful in that context.)

view this post on Zulip Joachim Kock (Apr 28 2021 at 01:13):

Now for the grand challenge. I don't have a solution, but possibly a framework for thinking about it. Possibly just a misunderstanding.

view this post on Zulip Joachim Kock (Apr 28 2021 at 01:18):

Let CC be the base category of 'open systems'. Define a simplicial set DD over CC, which hopefully will be a decomposition space with a CULF map to CC. It is similar to a Grothendieck construction. Let D0D_0 be the set of objects of CC equipped with a behaviour. Let D1D_1 be the set of arrows of CC equipped with a behaviour relation (or a behaviour span, or whatever is appropriate -- I don't even know what the leading example is). And now the punch line (which I think is already in Jules' drawing, although I don't quite understand it): let D2D_2 be the set of composable pairs of arrows equipped with three behaviour relations: one for each of the three faces of the triangle. And so on, defining higher simplices with behaviour relations on all edges, not just the principal edges. Now the emergent feature of the situation, whatever it is, tells us that DD is not Segal: it is not possible to construct a 22-simplex by knowing the two short edges. That is, two composable arrows and choices of behaviour on them are not enough to reconstruct behaviour on the composite.

view this post on Zulip Joachim Kock (Apr 28 2021 at 01:19):

GUESS: DD is a decomposition space.

view this post on Zulip Joachim Kock (Apr 28 2021 at 01:19):

Of course it is quite a shot in the dark, since I don't know what the motivating examples are, but here are two arguments one could try to follow.

view this post on Zulip Joachim Kock (Apr 28 2021 at 01:21):

The first is a direct translation of the decomposition-space axiom: it would say that the behaviour of a 33-simplex 0123 can be reconstructed by knowing the behaviour on 013 and on 123. The point is that the difficult part is always the long edge, and this info is already available from 013 (since both 0123 and 013 have the same long edge). The finer information available in 0123 which is not available in 013 is precisely available in 123. This argument is easy to carry out in the graph example.

view this post on Zulip Joachim Kock (Apr 28 2021 at 01:24):

The second argument strategy is just to prove that the projection map is CULF, because it is a theorem that any simplicial set CULF over a decomposition space is again a decomposition space, and the base CC is a category and therefore in particular a decomposition space. This check now says (unique lifting of factorisations): given a composable pair in CC and a behaviour on the composite, there are unique behaviours on the two arrows compatible with the behaviour on the composite. Basically this should follow from some principle saying that one can always restrict behaviours to 'shorter' arrows. In the case of graphs, this is the fact that a graph on a set of vertices induces a graph structure on any subset of vertices.

view this post on Zulip Joachim Kock (Apr 28 2021 at 01:25):

As you can see, my intuition comes from the very simple example where the base category is finite sets, composed by disjoint union, and where 'behaviour' is graph structure. Of course there is a high risk that this intuition is misguided.

view this post on Zulip Joachim Kock (Apr 28 2021 at 01:26):

I look forward to hearing your thoughts.

view this post on Zulip Joachim Kock (Apr 28 2021 at 01:27):

(Of course your thoughts on this may require actually knowing the definition of decomposition space. I am happy to explain more.)

view this post on Zulip John Baez (Apr 28 2021 at 01:48):

Hi! Can you get your ideas to apply to the leading example that we've been discussing here? That's this:

view this post on Zulip John Baez (Apr 28 2021 at 01:49):

The laxness arises from the fact that there can be paths in a composite of open graphs, say GHG \circ H, that do not consist of a path in GG composed with a path in HH.

view this post on Zulip John Baez (Apr 28 2021 at 01:53):

Of course your ideas could be good even if they don't apply to this example, but we've been talking a lot about this example because it's a simple example of "emergent behavior as laxity".

view this post on Zulip Morgan Rogers (he/him) (Apr 28 2021 at 08:07):

That tour of decomposition spaces and related topics was very enlightening @Joachim Kock! If you ever put together more extensive notes or a presentation on this stuff, I would love to see it :grinning_face_with_smiling_eyes:

view this post on Zulip Matteo Capucci (he/him) (Apr 28 2021 at 08:46):

This thread is becoming a journal

view this post on Zulip Matteo Capucci (he/him) (Apr 28 2021 at 08:46):

Amazing

view this post on Zulip Fawzi Hreiki (Apr 28 2021 at 09:24):

It’s good the archive exists then

view this post on Zulip Joachim Kock (Apr 28 2021 at 09:26):

John Baez said:

The laxness arises from the fact that there can be paths in a composite of open graphs, say GHG \circ H, that do not consist of a path in GG composed with a path in HH.

That's a tricky example!

view this post on Zulip Joachim Kock (Apr 28 2021 at 09:28):

In its most natural version it does not fit into the decomposition-space setting, unfortunately (or maybe that's fortunate, who knows?): CC is the category of open graphs. The simplicial set DD should then be the following: D0D_0 is a finite set with a chosen point. D1D_1 is a graph with a chosen point on each boundary component, and the existence of a path between them. D2D_2 should be pairs of composable graphs XYZX \Rightarrow Y \Rightarrow Z with chosen points xXx\in X, yYy\in Y, zZz\in Z, and existence of paths xyx \to y in XYX \Rightarrow Y, yzy\to z in YZY\Rightarrow Z, and xzx\to z in XZX\Rightarrow Z. And so on.

view this post on Zulip Joachim Kock (Apr 28 2021 at 09:29):

Unfortunately it is not a decomposition space. The trouble is that it is not possible to restrict a path to a shorter graph. More precisely, given XYZX \Rightarrow Y \Rightarrow Z and a path xzx \to z in XZX \Rightarrow Z, it is not always possible to split it into paths xyx\to y and yzy\to z.
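
Here is a minimal instance of that failure, assuming directed edges written as pairs (my own encoding, not anything from the thread): in the composite below there is a path from the chosen in-point x to the chosen out-point z, but no path from x to the chosen middle point y, so the path cannot be split at y.

```python
from collections import defaultdict, deque

def reachable(edges, source, target):
    """Breadth-first search: is there a directed path from source to target?"""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    seen, queue = {source}, deque([source])
    while queue:
        u = queue.popleft()
        if u == target:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

# Composite X => Y => Z with X = {x}, Y = {y, y2}, Z = {z}; chosen points x, y, z.
edges = [("x", "y2"), ("y2", "z")]
print(reachable(edges, "x", "z"))  # True:  a path x -> z exists in X => Z
print(reachable(edges, "x", "y"))  # False: it cannot be split at the chosen point y
```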

view this post on Zulip Morgan Rogers (he/him) (Apr 28 2021 at 09:30):

Perhaps you can modify the set-up a little bit; after all, any path is decomposable into shorter paths!

view this post on Zulip Joachim Kock (Apr 28 2021 at 09:30):

Too bad. I should have studied this example before shooting in the dark.

view this post on Zulip Joachim Kock (Apr 28 2021 at 09:30):

Morgan Rogers (he/him) said:

Perhaps you can modify the set-up a little bit; after all, any path is decomposable into shorter paths!

Yes, I am coming to it now...

view this post on Zulip Joachim Kock (Apr 28 2021 at 09:32):

However, the following variation (which I admit is a bit contrived) does work: instead of letting D0D_0 be boundaries with a chosen point, let D0D_0 be 'boundaries with the property of being nonempty' and let now D1D_1 be 'graphs with the existence of a path from some point in the in-boundary to some point in the out-boundary'. Now D2D_2 consists of composable pairs of graphs, with existence of paths between the three 'boundaries'. Now it seems to work: if there exists a path from an XX-point to a ZZ-point then that path must go through one or more YY-points.

view this post on Zulip Joachim Kock (Apr 28 2021 at 09:34):

This variant is a decomposition space that is not Segal: it is easy to cook up two composable graphs that individually admit a path from an in-point to an out-point, but whose composite graph does not have this property: in the simplest example XX and ZZ are singletons, YY has two points, and the graphs XYX\Rightarrow Y and YZY \Rightarrow Z both have a single edge, and these two edges do not connect to the same point in YY.
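
In code, with the same directed-edge encoding as in the sketch above (again just my illustration): each of the two open graphs has a path from its in-boundary to its out-boundary, but the composite does not, so there is no 2-simplex filling this pair of short edges.

```python
from collections import defaultdict, deque

def has_path(edges, sources, targets):
    """Is some target reachable from some source along directed edges?"""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    seen, queue = set(sources), deque(sources)
    while queue:
        u = queue.popleft()
        if u in targets:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

X, Y, Z = {"x"}, {"y1", "y2"}, {"z"}
g1 = [("x", "y1")]       # open graph X => Y
g2 = [("y2", "z")]       # open graph Y => Z
composite = g1 + g2      # glue along the shared boundary Y

print(has_path(g1, X, Y))          # True
print(has_path(g2, Y, Z))          # True
print(has_path(composite, X, Z))   # False: no filler for this pair of short edges
```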

view this post on Zulip Joachim Kock (Apr 28 2021 at 09:35):

It seems that in this example, the emerging behaviour is not reachability but rather non-reachability! (Where 'reachability' is understood in an artificially simple-minded sense.)

view this post on Zulip Morgan Rogers (he/him) (Apr 28 2021 at 09:35):

Joachim Kock said:

However, the following variation (which I admit is a bit contrived) does work: instead of letting D0D_0 be boundaries with a chosen point, let D0D_0 be 'boundaries with the property of being nonempty' and let now D1D_1 be 'graphs with the existence of a path from some point in the in-boundary to some point in the out-boundary'. Now D2D_2 consists of composable pairs of graphs, with existence of paths between the three 'boundaries'. Now it seems to work: if there exists a path from an XX-point to a ZZ-point then that path must go through one or more YY-points.

It still doesn't seem to cover the case where gluing the two graphs together produces a path with multiple disjoint components in each side.

view this post on Zulip Joachim Kock (Apr 28 2021 at 09:36):

But in any case, the first example shows that my idea is not nearly as useful as I had hoped.

view this post on Zulip Joachim Kock (Apr 28 2021 at 09:40):

Morgan Rogers (he/him) said:

It still doesn't seem to cover the case where gluing the two graphs together produces a path with multiple disjoint components in each side.

Not sure I understand this :-(

view this post on Zulip Joachim Kock (Apr 28 2021 at 09:42):

It is not necessarily a question of splitting the path into two parts (which, as the first example shows, is not always possible), but only about the existence of paths in each of the two graphs.

view this post on Zulip Jules Hedges (Apr 28 2021 at 10:27):

Joachim Kock said:

Morgan Rogers (he/him) said:

It still doesn't seem to cover the case where gluing the two graphs together produces a path with multiple disjoint components in each side.

Not sure I understand this :-(

Here's one I made earlier (taken from an extended abstract of mine, https://obsoletewallstreet.files.wordpress.com/2020/05/categorical-systems-theory-3.pdf)
Screenshot-2021-04-28-at-11.27.01.png

view this post on Zulip Jules Hedges (Apr 28 2021 at 10:28):

I'm pretty sure this example is minimal as a witness of laxity

view this post on Zulip Fabrizio Genovese (Apr 28 2021 at 10:29):

Question: Is it a necessary requirement for graphs that, for emergent behavior to manifest, the graphs being composed are disconnected?

view this post on Zulip Fabrizio Genovese (Apr 28 2021 at 10:30):

The whole point here seems to be that we are composing two graphs made of two disconnected parts that become one connected part when composed

view this post on Zulip Fabrizio Genovese (Apr 28 2021 at 10:31):

"A space (whatever that means) is disconnected" is the prototypical thing that topological data analysis can detect, so if this is a necessary condition, it makes me hopeful in some way

view this post on Zulip Fabrizio Genovese (Apr 28 2021 at 10:33):

Fabrizio Genovese said:

Question: Is it a necessary requirement for graphs that, for emergent behavior to manifest, the graphs being composed are disconnected?

If your graphs are oriented, you can allow the underlying non-oriented graph to be connected and still have emergent behavior. But let's avoid undirected stuff just for now

view this post on Zulip Fabrizio Genovese (Apr 28 2021 at 10:33):

It seems to me that if I have two graphs where each point is reachable from any other point, reachability doesn't really change when composing them

view this post on Zulip Jules Hedges (Apr 28 2021 at 10:39):

I guess so, yeah. If your open graphs are all individually connected then any path that zigzags back across the boundary can be "shortcutted" to only cross it once

view this post on Zulip Jules Hedges (Apr 28 2021 at 10:39):

But connected makes the reachability question boring anyway, no? In a connected graph, all pairs of boundary vertices are connected

view this post on Zulip Jules Hedges (Apr 28 2021 at 10:40):

Oh yeah, directed... forget what I just said

view this post on Zulip Fabrizio Genovese (Apr 28 2021 at 10:44):

Yeah, so for directed graphs "connected" translates to something different

view this post on Zulip Fabrizio Genovese (Apr 28 2021 at 10:44):

But I know nothing about what TDA can say for directed graphs

view this post on Zulip Joachim Kock (Apr 28 2021 at 11:10):

Thinking about it a little bit more, I now think my modified graph example is completely dull. It is an example of a general construction: start with any category, and decide that certain arrows should be excluded. Then write down the sub-simplicial set of the nerve consisting only of those composable strings that do not involve any of the excluded arrows. This will be a decomposition space, provided the remaining 'good' arrows have the property that if an arrow is good, then in any factorisation of it, the two components are good again. Silly example: in the free category on a directed graph, decide to exclude paths of length 10 or more. Clearly if a path has length <10, then the same is true for any of its subpaths. So paths of length <10 form a decomposition space.
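
A toy version of this partial composition (my own sketch; paths represented simply as lists of edges): composition is only defined when the result stays below the bound, and the closure property (any subpath of a short path is short) is what makes the construction work.

```python
def compose(p, q, bound=10):
    """Partial composition of paths (lists of edges): defined only if the result stays short."""
    if len(p) + len(q) < bound:
        return p + q
    return None   # the composite is 'too big', so it is excluded from the sub-simplicial set

short = ["e"] * 6
print(compose(short[:3], short[:3]))  # defined: total length 6 < 10
print(compose(short, short))          # None: two small things compose to something big
```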

view this post on Zulip Joachim Kock (Apr 28 2021 at 11:11):

This says that bigness is an emergent phenomenon: two small things put together could create something big! Wow, I am really onto something deep here :-)

view this post on Zulip Joachim Kock (Apr 28 2021 at 11:13):

(But silly as it seems in this context, it is actually an interesting construction. If the category is just a monoid, it gives Segal's notion of partial monoid, which he famously used to prove that for a path-connected space XX, the space of configurations of finitely many points in Rn\mathbb{R}^n labelled by XX is homotopy equivalent to the nn-fold loop space ΩnΣnX\Omega^n \Sigma^n X of its nn-fold suspension.)

view this post on Zulip John Baez (Apr 28 2021 at 16:44):

Joachim Kock said:

But in any case, the first example shows that my idea is not nearly as useful as I had hoped.

It might be useful for something else. Build the general theory: if it's beautiful enough, the examples will come. :upside_down:

view this post on Zulip Spencer Breiner (Apr 29 2021 at 15:54):

Hi all. Joe Moeller just pointed me to this paper on a sheaf-theoretic interpretation of Dijkstra's algorithm, which seems at least tangentially relevant to the discussion here.

This made me wonder why this thread hasn't included more talk about sheaves and colimits. In particular, how much of this laxity is just the expected behavior of colimits on algebraic structures. Isn't that the core of what's going on, at least in the main example considered so far:
F(G)+F(H)F(G+H)F(G)+F(H)\lneq F(G+H)

I suppose it's implicit in the mentions of cohomology, but I still have trouble keeping those ideas from leaking out of my brain.

view this post on Zulip Spencer Breiner (Apr 29 2021 at 15:55):

Jules Hedges said:

I'm pretty sure this example is minimal as a witness of laxity

Note that in the directed case (I shouldn't even have to specify that here!) there is an even more minimal example: G={xy}G=\{x\to y\}, H={yx}H=\{y\to x\}.
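
A quick check of that example (my own sketch, reading the composite as gluing the two graphs along both boundary vertices x and y, as in the cospan picture clarified further down): each piece is acyclic, but the composite contains a directed cycle.

```python
from collections import defaultdict

def has_cycle(edges):
    """Depth-first search with three colours: does the directed graph contain a cycle?"""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    WHITE, GREY, BLACK = 0, 1, 2
    colour = defaultdict(int)

    def visit(u):
        colour[u] = GREY
        for v in adj[u]:
            if colour[v] == GREY or (colour[v] == WHITE and visit(v)):
                return True
        colour[u] = BLACK
        return False

    return any(colour[u] == WHITE and visit(u) for u in list(adj))

G = [("x", "y")]
H = [("y", "x")]
print(has_cycle(G), has_cycle(H))  # False False: each open graph is acyclic
print(has_cycle(G + H))            # True: gluing along {x, y} creates a 2-cycle
```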

view this post on Zulip Spencer Breiner (Apr 29 2021 at 16:06):

It's worth noting that reachability isn't the only emergent phenomenon in this example. Another that is very important in practice is the presence of cycles, since these generate infinities, feedbacks and all sorts of other trouble :-). Of course, these are both just reflections of the path category, which seems like the "core emergent".

view this post on Zulip Morgan Rogers (he/him) (Apr 29 2021 at 16:07):

Spencer Breiner said:

Jules Hedges said:

I'm pretty sure this example is minimal as a witness of laxity

Note that in the directed case (I shouldn't even have to specify that here!) there is an even more minimal example: G={xy}G=\{x\to y\}, H={yx}H=\{y\to x\}.

I know you've named the output vertex of HH as xx, but surely composing GG and HH by gluing along yy would not identify that xx with the xx in GG?

view this post on Zulip Morgan Rogers (he/him) (Apr 29 2021 at 16:08):

But if you just mean taking the pushout of the inclusions of the objects in the category of categories, I see what you're getting at

view this post on Zulip Morgan Rogers (he/him) (Apr 29 2021 at 16:13):

Spencer Breiner said:

Hi all. Joe Moeller just pointed me to this paper on a sheaf-theoretic interpretation of Dijkstra's algorithm, which seems at least tangentially relevant to the discussion here.

for anyone who scrolls past links, a "sheaf" on an undirected graph G=(V,E)G = (V,E) in this paper means a presheaf on the category whose objects are VEV \coprod E with a morphism eve \to v whenever vev \in e, for edges ee and vertices vv.
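
To make that concrete (this is just my illustration, not taken from the paper, and it uses finite sets where the paper would use vector spaces): such a presheaf assigns a set to each vertex and each edge, plus a map F(v) → F(e) for every incidence, and its global sections are the vertex assignments whose restrictions agree over every edge.

```python
from itertools import product

# A cellular "sheaf" (really a presheaf) on the path graph u -- v -- w:
# a set for each vertex and edge, and a map F(vertex) -> F(edge) for each incidence.
F_vertex = {"u": [0, 1], "v": [0, 1], "w": [0, 1]}
F_edge   = {("u", "v"): [0, 1], ("v", "w"): [0, 1]}
rho = {   # restriction maps, indexed by (vertex, edge)
    ("u", ("u", "v")): lambda x: x,
    ("v", ("u", "v")): lambda x: x,
    ("v", ("v", "w")): lambda x: x,
    ("w", ("v", "w")): lambda x: 1 - x,   # a twist over the second edge
}

def global_sections():
    """Families (x_u, x_v, x_w) whose restrictions agree over every edge."""
    verts = sorted(F_vertex)
    sections = []
    for values in product(*(F_vertex[v] for v in verts)):
        x = dict(zip(verts, values))
        if all(rho[(a, e)](x[a]) == rho[(b, e)](x[b]) for e in F_edge for (a, b) in [e]):
            sections.append(x)
    return sections

print(global_sections())  # [{'u': 0, 'v': 0, 'w': 1}, {'u': 1, 'v': 1, 'w': 0}]
```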

view this post on Zulip Spencer Breiner (Apr 29 2021 at 16:22):

Joachim Kock said:

Coming back to CULF maps: their purpose is to preserve decomposition of objects. They were studied by Lawvere in his approach to dynamics in his 1986 preprint State categories and response functors -- a classic in applied category theory and compositionality! He used it in the situation where the codomain CC encodes a notion of time, DD encodes a notion of processes, and the CULF functor is a notion of duration.

Thanks for the reference! Here's a link for those too lazy to look it up (thanks to @Matt Earnshaw's awesome Lawvere bibliography!)
https://github.com/mattearnshaw/lawvere/blob/master/pdfs/1986-state-categories-and-response-functors.pdf

This is actually my very favorite example of laxity, and I can't believe I forgot to mention it above. Given some symmetric monoidal category of processes PPP\in\mathcal{P}, we would like to assign a duration d(P)R+d(P)\in\mathbb{R}^+. Naively, this looks like a nice monoidal functor with d(P;Q)=d(P)+d(Q)d(P;Q)=d(P)+d(Q) and d(PQ)=max{d(P),d(Q)}d(P\otimes Q)=\max\{d(P),d(Q)\}.

However, there is a problem with interchange, resulting in potential time savings (laxity) by grouping serial compositions. If d(P)=d(Q′)=10mind(P)=d(Q')=10\min and d(P′)=d(Q)=5mind(P')=d(Q)=5\min, then
15min=max{d(P)+d(Q),d(P′)+d(Q′)}⪇max{d(P),d(P′)}+max{d(Q),d(Q′)}=20min15\min=\max\{d(P)+d(Q),d(P')+d(Q')\}\lneq\max\{d(P),d(P')\}+\max\{d(Q),d(Q')\}=20\min

With regards to Jules' original question regarding measuring the amount of laxity, this might be a good example to think about, since there is a visceral quantitative measure backed by strong intuition.
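
For what it's worth, the 15-vs-20 computation above fits in a few lines (a sketch with the durations hard-coded):

```python
d = {"P": 10, "P'": 5, "Q": 5, "Q'": 10}   # durations in minutes

# Serial composition adds durations; parallel composition takes the max.
serial_then_parallel = max(d["P"] + d["Q"], d["P'"] + d["Q'"])      # (P;Q) (x) (P';Q')
parallel_then_serial = max(d["P"], d["P'"]) + max(d["Q"], d["Q'"])  # (P (x) P');(Q (x) Q')

print(serial_then_parallel)                         # 15
print(parallel_then_serial)                         # 20
print(parallel_then_serial - serial_then_parallel)  # 5 minutes of laxity across interchange
```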

view this post on Zulip Spencer Breiner (Apr 29 2021 at 16:24):

Morgan Rogers (he/him) said:

But if you just mean taking the pushout of the inclusions of the objects in the category of categories, I see what you're getting at

That's right, I'm imagining these graphs as morphisms {x,y}\emptyset\to\{x,y\} and {x,y}\{x,y\}\to\emptyset in some sort of cospan category.

view this post on Zulip Jules Hedges (Apr 29 2021 at 17:38):

Spencer Breiner said:

Hi all. Joe Moeller just pointed me to this paper on a sheaf-theoretic interpretation of Dijkstra's algorithm, which seems at least tangentially relevant to the discussion here.

This made me wonder why this thread hasn't included more talk about sheaves and colimits. In particular, how much of this laxity is just the expected behavior of colimits on algebraic structures. Isn't that the core of what's going on, at least in the main example considered so far:
F(G)+F(H)F(G+H)F(G)+F(H)\lneq F(G+H)

I suppose it's implicit in the mentions of cohomology, but I still have trouble keeping those ideas from leaking out of my brain.

Personally I forgot that perspective also existed, since I pretty much don't understand sheaves at all. In this formula what you've done is to take the definition of a lax functor, and switch out composition of morphisms for something written "+". So.... (how) are lax functors and sheaves known to be related?

view this post on Zulip Spencer Breiner (Apr 29 2021 at 19:12):

I'm not even thinking of sheaves yet, just colimits. Sheaves are (in some sense) about preservation of colimits, so that's how they come in.

Here I'm thinking of something like the free category on a graph, but you see the same sort of behavior in the free monoid on a set. Colimits of free algebras are badly behaved, and this often shows up as some kind of laxity because we can mix elements from the two component algebras.
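
A small illustration of that mixing in the free monoid case (my own sketch): the free monoid on A + B contains words that live in neither of the two component free monoids.

```python
from itertools import product

def words(alphabet, max_len):
    """All words of length <= max_len over an alphabet, as strings."""
    out = {""}
    for n in range(1, max_len + 1):
        out |= {"".join(w) for w in product(sorted(alphabet), repeat=n)}
    return out

A, B = {"a"}, {"b"}
emergent = words(A | B, 2) - (words(A, 2) | words(B, 2))
print(sorted(emergent))  # ['ab', 'ba']: mixed words appear only after taking the colimit
```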

view this post on Zulip Spencer Breiner (Apr 29 2021 at 19:13):

Since composition in cospan categories is defined by pushouts, maybe we should expect laxity to emerge in this case.

view this post on Zulip Fabrizio Genovese (Apr 29 2021 at 22:51):

My gut feeling is that sheaves are unbelievably cool, but they can become overkill fast in many circumstances

view this post on Zulip Fabrizio Genovese (Apr 29 2021 at 22:52):

For this kind of problem it seems to me that just some simplicial stuff may take us quite far, so I wouldn't move to sheaves unless there's a real reason to do so

view this post on Zulip Fabrizio Genovese (Apr 29 2021 at 22:53):

(I've tried to almost force myself to use sheaves to solve a particular problem once and I've been stuck for two years, so I'm slightly biased lol)

view this post on Zulip John Baez (Apr 29 2021 at 22:58):

I think sheaves are overkill in many applications. However, I think for understanding this theoretical issue of how "the whole is greater than the sum of its parts" they are very nice.

For example, a circle has a hole in it, even though any small contractible portion of the circle does not. "The hole is greater than the sum of the parts". Sheaf cohomology lets us see how the hole---the nontrivial first cohomology of the circle---is composed from what's happening in the parts.

Perhaps a better way to put it is that "the whole is assembled in a more subtle way from the parts than you might naively expect". You can imagine a little ant crawling around a circle, trying to figure out where exactly is the hole. It's everywhere and nowhere.

view this post on Zulip John Baez (Apr 29 2021 at 23:02):

There's still a big challenge, which is to use ideas like sheaf cohomology to describe new kinds of emergent behavior.

view this post on Zulip Spencer Breiner (Apr 29 2021 at 23:37):

Morgan Rogers (he/him) said:

Spencer Breiner said:

Hi all. Joe Moeller just pointed me to this paper on a sheaf-theoretic interpretation of Dijkstra's algorithm, which seems at least tangentially relevant to the discussion here.

for anyone who scrolls past links, a "sheaf" on an undirected graph G=(V,E)G = (V,E) in this paper means a presheaf on the category whose objects are VEV \coprod E with a morphism eve \to v whenever vev \in e, for edges ee and vertices vv.

Hmm. It looks like you're right! So this is really just a "presheaf-theoretic" interpretation anyway. Does anyone know why that community refers to these as "cellular sheaves" rather than "cellular presheaves"? Is it possible that these give a purely combinatorial/presheafy presentation for traditional sheaf categories over nice spaces (similar to what @Joachim Kock described in a recent thread about categories/planar/operads)?

view this post on Zulip John Baez (Apr 29 2021 at 23:51):

Interesting question!

Bob Ghrist and Jakob Hansen note that any cell complex gives a poset where one cell is \le another if it's a face of that other cell. An example would be a graph, where a vertex is \le an edge if it's one of the endpoints of that edge.

They call presheaves on such a poset "cellular sheaves", and here is their excuse:

For experts, this definition at first seems only reminiscent of the notion of sheaves familiar to topologists. The depth of the relationship is explained in detail in [Cur14], but the essence is this: the data of a cellular sheaf on X specifies spaces of local sections on a cover of X given by open stars of cells. This translates in two different ways into a genuine sheaf on a topological space. One may either take the Alexandrov topology on the face incidence poset of the complex, or one may view the open stars of cells and their natural refinements as a basis for the topology of X. There then exists a natural completion of the data specified by the cellular sheaf to a constructible sheaf on X.

view this post on Zulip John Baez (Apr 30 2021 at 00:03):

So: from such a presheaf you can build a sheaf on some space.

I think it's better to call it a presheaf.

view this post on Zulip Morgan Rogers (he/him) (Apr 30 2021 at 07:52):

Fabrizio Genovese said:

For this kind of problem it seems to me that just some simplicial stuff may take us quite far, so I wouldn't move to sheaves unless there's a real reason to do so

Given that, as I mentioned above, the "sheaves" here are just a particular case of presheaves, how is this any more complicated than simplicial sets? :thinking:

view this post on Zulip Fabrizio Genovese (Apr 30 2021 at 10:31):

I wasn't referring to simplicial sets, but more to stuff like complexes and all the standard toolkit of topological data analysis :smile:

view this post on Zulip Morgan Rogers (he/him) (Apr 30 2021 at 10:44):

Again, I don't think the presheaf data they use in the paper above is any more complicated than the kind of constructions one does in TDA, but it probably depends on your experience with the different techniques.

view this post on Zulip Fabrizio Genovese (Apr 30 2021 at 10:58):

Indeed, complexes seem way more manageable to me :smile:

view this post on Zulip Spencer Breiner (Apr 30 2021 at 15:53):

I think the situation is somewhat akin to the relationship between finite decimals and real numbers. If I understand correctly, each cellular "sheaf" F0F_0 over C\mathcal{C} can be converted to a classical sheaf FF over XX (the underlying space of the cellular decomposition) through something like a geometric realization. I think this works because cells are contractible.

However, for different decompositions, the maps PSh(C)Sh(X)\mathrm{PSh}(\mathcal{C})\to\mathrm{Sh}(X) will have different images. Presumably, (nice?) common refinements of the decompositions correspond to some kind of colimits in the sheaf category. In that sense, it gives a single space for comparing different finite approximations.

view this post on Zulip Jade Master (May 01 2021 at 03:25):

In my opinion the most minimal example of laxity is when you have two open graphs with a single arrow and they are joined together backwards...so that they turn into a circle in the composite.

view this post on Zulip Jade Master (May 01 2021 at 03:26):

It's not minimal in the sense of least amount of emergence but I like it because you get a ton of emergence out of super small graphs.

view this post on Zulip Jules Hedges (May 01 2021 at 09:34):

Jade Master said:

It's not minimal in the sense of least amount of emergence but I like it because you get a ton of emergence out of super small graphs.

Just using words like this hints that you believe it's possible to quantify this stuff, which is pretty much the original motivation for this thread

view this post on Zulip Fabrizio Genovese (May 01 2021 at 10:02):

Just very tangentially, with a definition of dependent graphs you can flatten out cycles. This is something I was thinking about yesterday and will write down precisely at some point, but in a nutshell:

view this post on Zulip Fabrizio Genovese (May 01 2021 at 10:03):

In practice, you are tagging every vertex of a graph with a property that is "all the possible ways you got here". Then the outbound arcs from this vertex check with which property you landed there. If they like it, they let you pass to whatever is their target; otherwise they don't.

view this post on Zulip Fabrizio Genovese (May 01 2021 at 10:04):

The most permissive case maps every AfBA \xrightarrow{f} B to just XHom(X,f)\sqcup_X Hom(X,f), and is basically the Yoneda embedding evaluated on any set.

view this post on Zulip Joachim Kock (May 01 2021 at 10:05):

Jules Hedges said:

Jade Master said:

It's not minimal in the sense of least amount of emergence but I like it because you get a ton of emergence out of super small graphs.

Just using words like this hints that you believe it's possible to quantify this stuff, which is pretty much the original motivation for this thread

One interpretation is: "tons of emergence" = "all of homotopy theory" --- that's a lot!
Because it is the construction of the circle from two contractible spaces.

view this post on Zulip Fabrizio Genovese (May 01 2021 at 10:06):

The Grothendieck construction of this functor is again a free category, so it spits out another graph. By defining the functor correctly, you can "unroll" cycles. For instance, you may have a graph ABCAA \to B \to C \to A together with an edge CDC \to D

view this post on Zulip Fabrizio Genovese (May 01 2021 at 10:07):

so you can cycle around A,B,CA, B, C as many times as you want, and in CC you can also decide to go to DD. By defining the functor in the right way you can state that "you can go to DD, and only to DD, after you have completed exactly 3 cycles around A,B,CA,B,C", and other stuff like this. In this example, the Grothendieck construction would give you a graph that goes like
CABCABCABCDC \to A \to B \to C \to A \to B \to C \to A \to B \to C \to D, with no cycles.
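
Here's a rough sketch of that unrolling (my own encoding; the counter bound and helper names are made up): vertices of the total graph are pairs (base vertex, number of completed loops), the back edge C → A increments the counter, and the exit C → D is only enabled at counter 3.

```python
MAX_LOOPS = 3
base_edges = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]

def unrolled_edges():
    """Grothendieck-style total graph over the base graph, tagged by a loop counter."""
    edges = []
    for n in range(MAX_LOOPS + 1):
        for u, v in base_edges:
            if (u, v) == ("C", "A") and n < MAX_LOOPS:
                edges.append(((u, n), (v, n + 1)))   # going around the cycle bumps the counter
            elif (u, v) == ("C", "D") and n == MAX_LOOPS:
                edges.append(((u, n), (v, n)))       # exit to D only after exactly 3 loops
            elif (u, v) not in {("C", "A"), ("C", "D")}:
                edges.append(((u, n), (v, n)))
    return edges

for e in unrolled_edges():
    print(e)
# Following the edges from (C, 0) gives the acyclic path
# C -> A -> B -> C -> A -> B -> C -> A -> B -> C -> D, with the counter recording the loops.
```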

view this post on Zulip Fabrizio Genovese (May 01 2021 at 10:07):

All this also works for Petri nets, with the caveat that your functor is now lax monoidal, or even lax-monoidal-lax

view this post on Zulip Fabrizio Genovese (May 01 2021 at 10:11):

This kind of stuff is useful in practice, because it allows you to tag a thing with a potentially very complex reachability relation and simplify it radically. It is used in practice when you do modelling with automata and nets. I also like that there's Yoneda around (always a good sign) and that probably one could say something very sievey and sheafy about it

view this post on Zulip Spencer Breiner (May 01 2021 at 15:59):

Fabrizio Genovese said:

Just very tangentially, with a definition of dependent graphs you can flatten out cycles...

What's a dependent graph?

view this post on Zulip Fabrizio Genovese (May 01 2021 at 16:12):

I defined it just afterwards

view this post on Zulip Eigil Rischel (May 06 2021 at 14:10):

@Spencer Breiner I've also thought that this might be explained as "failure-to-preserve-pushouts/colimits" (leading to sheaves/cohomology) rather than failure to preserve composition.
The problem in applying this approach to the reachability example is that you can't generally restrict a path to a subgraph, nor extend a path from a subgraph to the whole graph in a canonical way.
Morally speaking, instead of having restriction maps P(U+V)P(U),P(V)P(U + V) \to P(U),P(V) so that P(U+V)P(U)×P(V)P(U + V) \to P(U) \times_{\partial} P(V) is an equivalence, we only have a map in the other direction - we can only glue compatible paths in the subgraphs.
(The notation there isn't supposed to be rigorous)

view this post on Zulip Eigil Rischel (May 06 2021 at 14:20):

Also, thinking about decomposition spaces, I came up with the following construction:

Let X0X_0 consist of finite sets, X1X_1 consist of relations between two finite sets, and let X2X_2 consist of data like (A,B,C,R,R,)(A,B,C,R,R',\to), where R:AB,R:BC,:BBR: A \leftrightarrow B, R': B \leftrightarrow C, \to : B \leftrightarrow B are relations, \to should probably be transitive(?), and the three boundary 1-simplices are (A,B,R),(B,C,R)(A,B,R), (B,C,R') and (A,C,RR)(A,C, R' \circ \to \circ R).

I don't think I quite understand decomposition spaces well enough to say whether this is one - @Joachim Kock can maybe help. But my idea here would be that this carries exactly the extra information about reachability necessary to work out the composite reachability relation - and this can't be inferred from the component reachability relations, which is why this is not a category.

(Of course, you can take a category where morphisms aren't just relations, but "corelations", i.e. they remember reachability between vertices on the same side of the graph. Then you can compose these and get the right thing, but the problem with this is that composition requires computing the transitive closure of a relation, which is hard. So this decomposition space sort of isolates the hard part, and puts it exactly in the failure to be a category).
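
A small sketch of that composite (relations as sets of pairs; the names are made up): the component reachability relations alone miss the path from a to c, and the extra relation → on B, closed up transitively, supplies the missing glue.

```python
def compose(R, S):
    """Composite of relations R <= A x B and S <= B x C, as sets of pairs."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

def transitive_closure(T):
    closure = set(T)
    while True:
        bigger = closure | compose(closure, closure)
        if bigger == closure:
            return closure
        closure = bigger

R        = {("a", "b1")}    # boundary-to-boundary paths of the first open graph
R_prime  = {("b2", "c")}    # boundary-to-boundary paths of the second open graph
internal = {("b1", "b2")}   # paths between B-points created inside the composite

print(compose(R, R_prime))                                         # set()
print(compose(compose(R, transitive_closure(internal)), R_prime))  # {('a', 'c')}
```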

view this post on Zulip Spencer Breiner (May 06 2021 at 16:11):

@Eigil Rischel
Lots of ideas in here!

"failure-to-preserve-pushouts/colimits"

My point, at least, is that it matters where we take the colimits. Ordinary people do not care whether the diagram they draw is a graph or a free category, and if we want to talk about reachability then the latter is better behaved.

Now, maybe this just sweeps the problem under the rug--colimits in Cat are hard--but I think it can give us a handle on how sheaves might fit into the question. @Joachim Kock recently described a general strategy for representing compositional structures as (iirc) equivalent categories of either sheaves or presheaves. I think it was in the thread on shapes and algebraic structures. He talks about how you can get data about arbitrary-length paths based on knowledge of length-2 paths (i.e., composition).

I think this is also related to the path (pre)sheaves paper that I mentioned earlier. Though I hadn't put it together yet at the time, they also rely on length-2 paths in their explication of Dijkstra's algorithm.

The problem in applying this approach to the reachability example is that you can't generally restrict a path to a subgraph

I suspect you could play some tricks with pointing in order to improve this. Fundamentally, though, I see this as the observation that pullback projections in Cat need not be full.

extend a path from a subgraph to the whole graph in a canonical way.

Not sure what this means. For me, a path in a subgraph is still a path in the supergraph.

P(U)×P(V)P(U) \times_{\partial} P(V) ... (The notation there isn't supposed to be rigorous)

Not sure why, exactly, but all this makes me think of (groupoidal) van Kampen theorems. Somehow, the problem is that the overlap between two graphs, taken as the shared nodes, is too coarse to push out well; what's really needed in the overlap of the pushout are the length-2 paths shared by the two graphs. We can't do this directly, since the whole problem is that one edge is missing from each graph.

Wild speculation: maybe we could canonically extend the original graphs by adding an "outside" node and some special "outside" edges, and then use length-2 paths to glue along those?

view this post on Zulip Spencer Breiner (May 06 2021 at 16:15):

I'm going to have to think some more about the decomposition spaces. I probably won't understand your construction unless you draw a picture, or fill in A,B,C,R with with some real-world examples that are relevant.

view this post on Zulip Jade Master (Jun 22 2021 at 17:59):

I hope that my thesis:

is relevant to this conversation. First of all I think that it sets an appropriate level of abstraction to discuss this sort of emergent behavior. The generality that I chose was "network", and to me that means either an enriched or an internal graph. There are a surprising number of examples that can be thought of this way. Petri nets and their variants including integer nets, k-safe nets, pre-nets etc. can all be regarded as internal graphs. Also Markov networks, distance networks, capacity networks, NFAs and more can all be regarded as enriched graphs. And of course there are ordinary graphs, from which these are all generalized. The operational semantics of these networks are all generalized from the free category construction. In the internal case this gives the category of executions for Petri nets and their variants. In the enriched case this gives solutions of the algebraic path problem. But regardless, in all of these cases, for every category of networks NN there is

Anyway there are likely some holes or misunderstandings so let me know if you have any feedback, comments, or questions. It is all very appreciated :)

view this post on Zulip John Baez (Jun 22 2021 at 19:29):

Yes, these are really good concrete examples of "emergent behavior as laxity".