It is time for me to reiterate my grand challenge to applied category theory, stretching the purpose of the off-topic thread to breaking point. Typically when you have some category whose objects are "boundaries", morphisms are "open systems" and composition is coupling 2 open systems along a boundary, "behaviours" form a lax pseudofunctor into $\mathbf{Rel}$. This says that every boundary has some set of possible behaviours, and every open system has a set of boundary behaviours that it can perform. The laxator says that if you have behaviours of two composable open systems that agree on the behaviour of the common boundary, then they compose to a behaviour of the composite system; but the converse typically isn't true: the composite exhibits emergent behaviours that don't arise from individual behaviours of the two parts. Right now the typical next step is to shrug and then go and do something else. I would like to associate some mathematical object that "describes" or "measures" the failure of the laxator to be an isomorphism, aka describes or measures the emergent effects. Several people have mumbled something about cohomology after I talked about this problem
Jules Hedges said:
It is time for me to reiterate my grand challenge to applied category theory, stretching the purpose of the off-topic thread to breaking point. Typically when you have some category whose objects are "boundaries", morphisms are "open systems" and composition is coupling 2 open systems along a boundary, "behaviours" form a lax pseudofunctor into $\mathbf{Rel}$. This says that every boundary has some set of possible behaviours, and every open system has a set of boundary behaviours that it can perform. The laxator says that if you have behaviours of two composable open systems that agree on the behaviour of the common boundary, then they compose to a behaviour of the composite system; but the converse typically isn't true: the composite exhibits emergent behaviours that don't arise from individual behaviours of the two parts. Right now the typical next step is to shrug and then go and do something else. I would like to associate some mathematical object that "describes" or "measures" the failure of the laxator to be an isomorphism, aka describes or measures the emergent effects. Several people have mumbled something about cohomology after I talked about this problem
This is a cool question. I think that a first step is to classify when there is no emergent behavior. That's something you can do for the algebraic path problem by defining functional open matrices (See section 4 of this) and in my forthcoming thesis I show how to generalize this to open Petri nets, Q-nets, and graphs . Basically these are systems such that when you compose them, there are no channels which can produce additional feedback. I don't know how yet, but definitely it would be cool to find some class of networks for which there is some emergent behavior, but it is still relatively manageable.
This topic was moved here from #general: off-topic > memes by Matteo Capucci (he/him)
Luckily I got a screengrab of my post being under the "memes" thread before it was moved
Right, one of the motivating examples for me was open graph reachability, where the laxness happens because a path can zigzag across a boundary. But it should be possible to say something much stronger than just "it's lax" - if you know something about what the boundary looks like then it should be possible to control "how lax" it is
This motivation came straight out of one of your papers
Jules Hedges said:
Luckily I got a screengrab of my post being under the "memes" thread before it was moved
Lol, it is all a meme. And yeah, the thing is that as soon as you get any sort of loop the emergent behavior is infinite because you can go around the loop as many times as you want. I'm curious about when the emergent behavior is finite...so there's a zig-zag but not a loop.
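To make the zigzag point concrete, here is a minimal sketch (my own toy example and naming, not code from any of the papers mentioned) of two open graphs whose boundary reachability relations compose to the empty relation, while the glued graph has an emergent path that crosses the shared boundary twice:

```python
# Open graphs G1 : A -> B and G2 : B -> C, with A = {a}, B = {b1,b2,b3}, C = {c}.
# The only path from a to c in the glued graph zigzags: it leaves G1, re-enters it,
# and leaves again, so it is invisible to the composite of the boundary relations.
from itertools import product

def reachable(edges, nodes):
    """All pairs (x, y) such that there is a directed path from x to y."""
    reach = {(x, x) for x in nodes} | set(edges)
    changed = True
    while changed:                      # naive transitive closure, fine for tiny examples
        changed = False
        for (x, y), (y2, z) in product(list(reach), list(reach)):
            if y == y2 and (x, z) not in reach:
                reach.add((x, z))
                changed = True
    return reach

A, B, C = {'a'}, {'b1', 'b2', 'b3'}, {'c'}
G1 = [('a', 'b1'), ('b2', 'b3')]        # edges of the first open graph
G2 = [('b1', 'b2'), ('b3', 'c')]        # edges of the second open graph

# Boundary behaviours: reachability restricted to the chosen boundaries.
B_G1 = {(x, y) for (x, y) in reachable(G1, A | B) if x in A and y in B}
B_G2 = {(x, y) for (x, y) in reachable(G2, B | C) if x in B and y in C}

# Relational composite of the two behaviours (what the laxator can produce).
composed = {(x, z) for (x, y) in B_G1 for (y2, z) in B_G2 if y == y2}

# Behaviour of the glued open graph (union of the two edge sets).
B_glued = {(x, y) for (x, y) in reachable(G1 + G2, A | B | C) if x in A and y in C}

print("B(G2) o B(G1) =", composed)            # set()
print("B(G2 . G1)    =", B_glued)             # {('a', 'c')}
print("emergent      =", B_glued - composed)  # {('a', 'c')}: the path a->b1->b2->b3->c
```

At the level of relations there is just one emergent pair here; counting paths instead of mere reachability (the Span-valued picture that comes up later in the thread) is where a loop makes the emergent part infinite.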
I've put a lot of thought into this problem. It's very hard to say something about the general case because you need some sort of structure on the categories involved to work with.
One family of examples is where your bicategories are (co)span categories, and your (op)lax functor comes from a functor which doesn't preserve pushout/pullback. If you further assume that the target category is abelian, this is the kind of thing you can hit with (co)homology.
The higher homology of an object then measures somehow the failure of the functor to preserve those pushouts where the pushout is degenerate (in a vague sense).
I tried applying this to the graph reachability problem (since open graphs obviously are a structured cospan category). The problem is that as soon as you replace the reachability relation with an additive relation, you can also express relations between vertices on the same side of the graph (by saying that their difference is related to zero). So somehow the step of "passing to the additive situation" already does all the work.
Cool!
I was vaguely thinking that the target category would most likely have to be something like LinRel or AbRel to make progress
Although I do vaguely wonder if you can make use of the fact that Rel itself is enriched in a bunch of things that includes Boolean algebras
Yeah, I think we've discussed this before: the reachability functor from open graphs to linear relations (taking a set to the vector space with that basis) can be made functorial by encoding the "same-side" connections using the trick I wrote above.
Hmm, now I'm wondering what the relation is between these two lax functors - the one into Rel and the one into linear relations.
Obviously the first is contained in the second. Is the second something like "the minimal functor containing" the first?
I'm trying to think what it would mean that your semantic functor goes into AbRel. The set of boundary behaviours is now an abelian group, so every boundary has a "zero behaviour" and a commutative "addition of behaviours"
By the way, to make the jump between reachability and my "behaviours" intuition, I think about a behaviour of the boundary of an open graph being a token entering or leaving
In the linear/additive version, the behaviour becomes something like "a number of tokens entering/leaving, according to the sign"
Yeah. So going to AbRel isn't a totally crazy thing to do. It would be like viewing your graph as a relatively simple kind of integer Petri net, where all of the transitions are trivially "one in one out"
Which would also be a hint that actual integer Petri nets are possibly the right setting to think about this sort of reachability problem...
Eigil Rischel said:
I've put a lot of thought into this problem. It's very hard to say something about the general case because you need some sort of structure on the categories involved to work with.
One family of examples is where your bicategories are (co)span categories, and your (op)lax functor comes from a functor which doesn't preserve pushout/pullback. If you further assume that the target category is abelian, this is the kind of thing you can hit with (co)homology.
The higher homology of an object then measures somehow the failure of the functor to preserve those pushouts where the pushout is degenerate (in a vague sense).
I tried applying this to the graph reachability problem (since open graphs obviously are a structured cospan category). The problem is that as soon as you replace the reachability relation with an additive relation, you can also express relations between vertices on the same side of the graph (by saying that their difference is related to zero). So somehow the step of "passing to the additive situation" already does all the work.
What makes this a problem?
Uh, not sure it's a problem - it just made me think that the technology for studying failure-of-functoriality wouldn't do anything, because once you rephrase the problem so that it makes sense to apply something like homology, you've already solved the problem. Although I just realized you can also study the lax functor which doesn't "know about" same-side connections, so maybe something interesting happens there.
It could be that this is actually the right way to do it, and my whole "cohomology of the laxator" is going the wrong way...... possibly it comes down to an application-driven question of "what do you want a solution to the reachability problem to actually be?"
Right - this is good news if your goal is to understand reachability! But it means you can't use it as an example to think about abstract tools for measuring non-compositionality
Heh, my brain just caught up and realised what this is saying intuitively. If you have an open graph, you can detect a connection between two nodes on the same boundary by injecting a particle and an antiparticle, and observing nothing coming out the other side
Yeah that's really interesting how you lose directionality when you freely include negatives.
Reachability and behavior are very related. I would say that reachability is the decategorification of behavior. For a network P, you can take a category representing the operational semantics of P, so that objects are states of P and morphisms are execution sequences in P. Then the hom-sets represent all the possible behaviors which can occur starting in x and ending in y. Reachability decategorifies this by making the change of enrichment where a set is sent to 0 if it is empty and sent to 1 otherwise. So reachability takes the behavior and just cares about whether it exists or not, and does that by going from 1-dimensional categories to 0-dimensional categories.
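Concretely, the change of enrichment described above can be written as base change along the functor (I'm introducing the names $\mathrm{Beh}$ and $\mathrm{Reach}$ just for this note)

$$|{-}| \colon (\mathbf{Set}, \times, 1) \to (\mathbf{2}, \wedge, \top), \qquad |S| = \begin{cases} \bot & \text{if } S = \emptyset, \\ \top & \text{otherwise,} \end{cases}$$

so that $\mathrm{Reach}(P)(x, y) = \big|\mathrm{Beh}(P)(x, y)\big|$, where $\mathrm{Beh}(P)(x, y)$ is the set of execution sequences of $P$ from $x$ to $y$: the Set-enriched operational semantics becomes a preorder (a category enriched in truth values) recording only whether $y$ is reachable from $x$.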
That's neat
So, my grand challenge problem is probably stuck until I come up with a new example that can't be solved in this easier way.... which may be a slight problem because coming up with decent examples is really surprisingly hard (which itself is a hint that my starting abstraction may not be a good one, no matter how great it intuitively seems to me)
I had another motivating example that came from something me and Sharwin Rezagholi worked on and then abandoned: topological entropy of open dynamical systems (aka open directed graphs, for our purposes). That turned out to be a lax pseudofunctor from open directed graphs to the poset of reals. If I remember correctly (which is not guaranteed, I'm pretty fuzzy on this), we worked out that when you couple 2 open directed graphs you can say that the topological entropy of the composite is at least the maximum of the individual topological entropies. So there was an obvious question of trying to measure/control how much it could increase in terms of the boundary you couple along, but we never came up with anything. And that's basically all I can say about it, because without Sharwin around I don't even remember how to define topological entropy...
Jules Hedges said:
Although I do vaguely wonder if you can make use of the fact that Rel itself is enriched in a bunch of things that includes Boolean algebras
I don't think this is true. Composition doesn't preserve intersections for example. Rel is cJSLat-enriched though.
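A quick counterexample (my own choice of relations) to composition preserving binary intersections: take $R_1 = \{(x, y_1)\}$, $R_2 = \{(x, y_2)\}$ and $S = \{(y_1, z), (y_2, z)\}$. Then

$$S \circ (R_1 \cap R_2) = \emptyset \qquad\text{but}\qquad (S \circ R_1) \cap (S \circ R_2) = \{(x, z)\},$$

so the hom-posets of Rel are complete lattices but composition only distributes over the joins, which is the cJSLat-enrichment mentioned above.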
So if you want to not decategorify and keep the full behavior you can instead construct a double functor
which sends an open network to the profunctor given by the hom-sets of its operational semantics
So maybe this would be a better double functor for measuring non-compositionality than the one with codomain Rel or AddRel.
And maybe you don't want a double functor and would prefer the ordinary functor...that exists too!
Oh Jules I was about to ask you what topological entropy was. Regardless, that seems like a really nice result - it's the old adage "the whole is greater than the sum of its parts"
Yeah, that's a problem I'd like to go back to after having a framework for how to even think about this sort of problem
Hopefully we can seriously think about this stuff once you're in Glasgow!
Jade Master said:
And yeah, the thing is that as soon as you get any sort of loop the emergent behavior is infinite because you can go around the loop as many times as you want. I'm curious about when the emergent behavior is finite...so there's a zig-zag but not a loop.
One thing you can do both when there are zig-zags and when there are loops is keep track of the number of times you have to go back into the first graph. So, you get this:
Say you have an open graph $D(A) \to G \leftarrow D(B)$, where $D$ is the "discrete graph on a set" functor. Let $P(G)_{a,b}$ be the set of paths from $a \in A$ to $b \in B$ through the graph $G$. So, we have a matrix of sets $\big(P(G)_{a,b}\big)_{a \in A,\, b \in B}$.
Here I'm really using $G$ as a nickname for the whole cospan $D(A) \to G \leftarrow D(B)$.
A matrix of sets is the same as a span of sets, so this thing $P$ is trying to be a functor from open graphs to spans of sets, but really both these things are bicategories and $P$ is just laxly functorial.
Namely, if you have two open graphs
$D(A) \to G \leftarrow D(B)$ and $D(B) \to H \leftarrow D(C)$,
when we compose them we get an open graph $D(A) \to G +_{D(B)} H \leftarrow D(C)$, but alas
$$\sum_{b \in B} P(G)_{a,b} \times P(H)_{b,c} \;\ne\; P(G +_{D(B)} H)_{a,c}.$$
However, as Jade has shown, the left-hand side is just the first term in a sum (a sum of sums!) that converges to $P(G +_{D(B)} H)_{a,c}$. The first term contains paths that go through $G$ and then $H$. The second term contains paths that go through $G$ and then $H$ and then go back into $G$ and then back into $H$. And so on.
So $P(G +_{D(B)} H)$ is much better than a mere span of sets: it has a bunch of subspans $P_n$ where $n$ keeps track of the number of zigzags.
In other words $P(G +_{D(B)} H)$ is a filtered set. (It has an exhaustive increasing filtration.)
Now all of this is obvious, but working with filtered objects is already taking us into a realm reminiscent of cohomology: all sorts of constructions in cohomology give us filtrations like this: "approximations" to the final answer, that are simpler than the final answer.
Things get even more interesting when we have a composite of three open graphs, say $G$, $H$ and a third one $D(C) \to K \leftarrow D(E)$.
Now there are different ways for a path to "bend back": it can go back from $H$ to $G$, or go back from $K$ to $H$, and it can do either of these things repeatedly.
This makes it a more interesting real challenge to keep track of all the things a path can do. For example, it can go like $G, H, G, H, K, H, K$.
So $P(G +_{D(B)} H +_{D(C)} K)$ has an interesting structure where we keep track of paths with various kinds of zigzags. And this structure is related to the filtrations on $P(G +_{D(B)} H)$ and $P(H +_{D(C)} K)$.
It's a bit complicated... and it gets even more complicated when we have a composite of even more open graphs.
But this complexity, which may at first seem like just an annoying quagmire, is just the sort of thing you expect when doing calculations in cohomology - or for that matter combinatorics. It's something you can really get into, and understand completely.
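Here is a small sketch of the filtration just described, for the two-open-graph case (my own toy code, reusing the little example from earlier in the thread): enumerate the vertex-simple paths through the glued graph and grade each one by how many times it crosses back from $H$-edges to $G$-edges.

```python
# Grade each a->c path in the glued open graph by its number of zigzags,
# i.e. how many times it re-enters G after having used an edge of H.
from collections import defaultdict

A, C = {'a'}, {'c'}
G = {('a', 'b1'), ('b2', 'b3')}      # edges of the first open graph
H = {('b1', 'b2'), ('b3', 'c')}      # edges of the second open graph

succ = defaultdict(list)
for x, y in G | H:
    succ[x].append(y)

def simple_paths(x, targets, seen=()):
    """All vertex-simple paths from x to a vertex in `targets`, as lists of edges."""
    if x in targets and seen:
        yield []
    for y in succ[x]:
        if y not in seen:
            for rest in simple_paths(y, targets, seen + (x,)):
                yield [(x, y)] + rest

def zigzags(path):
    """Number of H-edge -> G-edge transitions along the path."""
    labels = ['G' if e in G else 'H' for e in path]
    return sum(labels[i - 1] == 'H' and labels[i] == 'G' for i in range(1, len(labels)))

filtration = defaultdict(list)
for a in A:
    for path in simple_paths(a, C):
        filtration[zigzags(path)].append(path)

for n in sorted(filtration):
    print(f"level {n}:", filtration[n])
# Only level 1 is inhabited here: the single a->c path a->b1->b2->b3->c has one zigzag,
# so the laxator's image (level 0) is empty but the filtration catches the path at level 1.
```

Note this sketch only enumerates vertex-simple paths; with genuine loops, where the filtration never stabilises, one would enumerate paths up to a length bound instead.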
I hope so :) maybe combinatoricists already have results in this flavor.
This sounds pretty promising
Great! I should add that if anyone feels revolted by the complexity that shows up here, they shouldn't. "Emergence of more complex behavior" is hardly ever going to be simple.
I guess I can record what few thoughts I've had towards my own problem. If you have a "behaviour" lax pseudofunctor $B \colon \mathcal{C} \to \mathbf{Rel}$, then for a composable pair $f, g$ you can define an "emergent behaviour" completely formally to be an element of $B(g \circ f) \setminus \big(B(g) \circ B(f)\big)$, where the right hand thing is composition in $\mathbf{Rel}$. In words, an emergent effect is not just associated to a system, but to a system together with a choice of composition. So it seems obvious to store this data somewhere that involves the nerve of $\mathcal{C}$... cells in the nerve correspond to systems together with a choice of decomposition. You could just associate the set of emergent behaviours to every cell. Then the laxator is an isomorphism iff all of those sets are the empty set
... and that's all I got
Jules Hedges said:
So, my grand challenge problem is probably stuck until I come up with a new example that can't be solved in this easier way.... which may be a slight problem because coming up with decent examples is really surprisingly hard (which itself is a hint that my starting abstraction may not be a good one, no matter how great it intuitively seems to me)
The example you proposed in your original statement sounded like it was directly applicable to cobordism categories, so there's probably an interesting higher-dimensional (in a topological sense) analogue of the 1D graph example.
Path-connectedness in cobordisms should do it. But it likely "suffers" from the same "problem"
Right, but perhaps there's a more interesting notion of connectedness that one could examine. For example, you can view path-connectedness as the existence of a homotopy between the end-points; maybe one could examine homotopy between higher-dimensional sub-spaces? Or indeed directly asking about the effect of composition on homology classes.
Take e.g. a torus with two disks cut out of it (viewed as a cobordism between two circles). This has a pair of loops on it that are not deformable into one another, but become so when we re-adjoin one of the disks, and that's a direct description of emergent behaviour
Hm, interesting, yeah
Now the equivalent of what we were talking about for graphs would be to pass to a situation where you have multiple parallel paths, and also negative paths - something that is to integer Petri nets as cobordisms are to graphs...
Homology does that pretty well tbh, as long as you're working in nice spaces :grinning_face_with_smiling_eyes: You can reinterpret what I said about "torus minus two disks" in homology as there being two non-equivalent homology classes which cancel when one of the disks is glued in, in the sense that their difference, a non-trivial element of $H_1$, maps to $0$ in the homology of the resulting space.
Turning this data into some kind of "homology net" to be treated like integer Petri nets would be great, I wonder if this is a perspective algebraic topologists have taken?
I also want to make the observation that "the emergent behaviour is controlled by Mayer-Vietoris", in the sense that that result gives a precise statement of how gluing along a boundary (again, assuming we're working in "effectively nice" cobordism categories) affects the homology groups.
I may have just got a small step closer to understanding what the heck homology is
To be honest, the challenge is applying this stuff. What emergent behaviours do these represent in systems we care about? What is not covered, geometrical or otherwise?
Morgan Rogers (he/him) said:
Turning this data into some kind of "homology net" to be treated like integer Petri nets would be great, I wonder if this is a perspective algebraic topologists have taken?
I feel summoned! What can I do for you?
Jules Hedges said:
Now the equivalent of what we were talking about for graphs would be to pass to a situation where you have multiple parallel paths, and also negative paths - something that is to integer Petri nets as cobordisms are to graphs...
In the tradition of graph rewriting, this approach usually pays off
That is, you have a graph, you want to do stuff with it, and you decorate it with all sorts of things that will make your life easier at a later stage when you do rewriting and other things
I'll let you guys hash it out a while and check back in later :grinning_face_with_smiling_eyes:
Morgan Rogers (he/him) said:
I'll let you guys hash it out a while and check back in later :grinning_face_with_smiling_eyes:
Actually I'm off for an extreme sunbathing session, but I'll think about it as well, and hopefully I'll be back with something interesting later this evening
Meanwhile @Bryce Clarke has written some very interesting replies to this question over on twitter:
https://twitter.com/8ryceclarke/status/1386233995965337607?s=21
https://twitter.com/8ryceclarke/status/1386234910965338116?s=21
https://twitter.com/8ryceclarke/status/1386237192486277122?s=21
@_julesh_ This is an interesting question! Just checking: by "lax pseudofunctor", do you mean "lax functor" or "normal lax functor"? Lax functors C -> Rel are equivalent to faithful functors into C; adding "normal" yields "faithful functors with discrete fibres" 1 / 2
- Bryce Clarke (@8ryceClarke)
@_julesh_ Pseudofunctors C -> Rel are the same as faithful discrete Conduche fibrations into C. You may find the papers by Susan Niefield on lax functors C -> Rel to be helpful: http://www.tac.mta.ca/tac/volumes/12/7/12-07abs.html http://www.tac.mta.ca/tac/volumes/24/12/24-12abs.html 2 / 2
- Bryce Clarke (@8ryceClarke)
@_julesh_ So in a sense, what you would like to find is a measure of how much a faithful functor fails to be exponentiable. A paper where I have seen both "cohomology" and "exponentiable / powerful morphisms" in the same place is here: http://www.tac.mta.ca/tac/volumes/23/3/23-03.pdf (but I haven't read it).
- Bryce Clarke (@8ryceClarke)
After looking up what "normal" means (strict for identities, possibly lax for composition) and eyeballing some examples a bit, I think it is indeed reasonable to assume normal - identity open systems always exhibit exactly the identity behaviour and can never fail
I had never heard of Conduché fibrations before this morning, so I'm currently looking at the nLab page and trying to figure out what it all means
But as a whole new perspective on exactly the same situations, it seems extremely likely that this is useful...
Oh hey that last reference is, completely coincidentally, pretty useful for me. Thanks for sharing :joy:
Personally I would love your problem to be solved in a grounded way @Jules Hedges; that is, I would love to see some more concrete examples of emergent behaviours besides connectivity that can serve to lend insight into the result of the categorical abstraction that's already happening.
Right... I'd like more examples too, but it seems quite difficult to come up with examples. I believe without real evidence that this framework is a good one, based on thinking a lot about what "open systems" and "behaviours" can be in general. I suspect that each "serious" example looks like a research problem in its own right
This isn't a great situation to be in, of course
So after looking a bit at what Bryce said, it seems that by turning the problem inside out, the failure of a lax functor to be a functor can be seen equivalently as the failure of a certain morphism to be exponentiable, as an object in a slice category of $\mathbf{Cat}$, or, abstracting a whole bunch, as the failure of a certain functor to have an adjoint. So.... is there any standard way to "measure" the failure of a functor to have an adjoint?
My instinct is yes, but I can't put my finger on what form of such measurements I've seen in the past...
I'm currently doing engineering research (and only know introductory category theory). In engineering research we often care not only about the underlying physics, but also about traditional processing schemes. In my experience, a lot of engineering research is focused on improving existing processing schemes. Even when the physics is compositional, the traditional processing might not be.
For an ultrasound imaging example, say we fire a pressure beam at a target and observe the resulting echoes. This echo operator is compositional, in the sense that the echo of a sum of fired pressure beams is the sum of their individual echoes. However, to make an image from the echoes involves a reconstruction operation, and a traditional reconstruction approach can be viewed in terms of dot products. In this case the reconstruction of the combined echoes is not the combination of the individual reconstructions, so we no longer have compositionality. The failure of compositionality in this case is referred to as the generation of "cross-talk artifacts" and there are a number of ways to try and reconstruct better so that they don't show up.
If we're interested in using category theory to help with engineering, it may be interesting to consider analysis of traditional processing schemes - not just analysis of the underlying physics. (And these may provide additional examples of emergent behaviour).
Jules Hedges said:
So after looking a bit at what Bryce said, it seems that by turning the problem inside out, the failure of a lax functor to be a functor can be seen equivalently as the failure of a certain morphism to be exponentiable, as an object in a slice category of $\mathbf{Cat}$, or, abstracting a whole bunch, as the failure of a certain functor to have an adjoint. So.... is there any standard way to "measure" the failure of a functor to have an adjoint?
I wonder if this might be relevant, from nlab:
The "inclusion of the full subcategory determined by those for which is representable" seems like kind of a measure of adjointness, in that it will map into all of if does have a right adjoint. So that might turn the problem into "measuring" the failure of an inclusion functor to be an isomorphism, maybe.
@Jules Hedges I think similar ideas to the ones I was writing about on "failure of a morphism to be iso" may apply, just moved up a notch to morphisms in the 2-category of small categories.
A functor is a right adjoint iff
(The first is like a lax version of the condition of being a split mono)
I bet you can phrase them both as certain (2?)-functors being equivalences.
So for example any functor that does not have a right Kan extension along it is a witness of its failure to be a right adjoint. So is any right Kan extension that is not preserved by it. Again, the challenge is finding the appropriate structure that such witnesses form.
David Egolf said:
For an ultrasound imaging example, say we fire a pressure beam at a target and observe the resulting echoes. This echo operator is compositional, in the sense that the echo of a sum of fired pressure beams is the sum of their individual echoes. However, to make an image from the echoes involves a reconstruction operation, and a traditional reconstruction approach can be viewed in terms of dot products. In this case the reconstruction of the combined echoes is not the combination of the individual reconstructions, so we no longer have compositionality. The failure of compositionality in this case is referred to as the generation of "cross-talk artifacts" and there are a number of ways to try and reconstruct better so that they don't show up.
This looks not dissimilar to the common situation when you're trying to answer questions about composite systems algorithmically. Graph reachability is an obvious example of that. The whole of compositional game theory is an example too - being able to compose games is still a big upgrade on what was previously possible, but even so, when doing equilibrium analysis you generally have to start completely from scratch every time you change something. (It's not totally inconceivable that it's possible to do better than that, but you'd have to walk a thin line in order to not have a proof of P = NP)
There's a bunch of ideas in this thread now, but it still looks pretty intimidating to join the dots up to talk about any concrete examples. For example plugging it into the "reachability functor" on open graphs ought to turn into a bunch of combinatorics about loops around the boundaries
By which I mean, in any specific situation it ought to be feasible to prove that the abstract notion of "witnesses of barrier to functoriality" are equivalent to something concrete that you can do calculations with...
The example of reachability for open graphs is nice because it's pretty easy to visualize and understand. So, I should take the pseudofunctor Open(Graph) → Rel and convert it into a Conduché fibration and see what it looks like.
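For reference, here is one way to unwind the correspondence from Bryce's tweets concretely (my own notation; a sketch, details unchecked): given a lax functor $B \colon \mathcal{C} \to \mathbf{Rel}$, form a category $\int B$ with

$$\mathrm{ob}\,{\textstyle\int} B = \{(X, x) \mid X \in \mathcal{C},\ x \in B(X)\}, \qquad {\textstyle\int} B\big((X, x), (Y, y)\big) = \{f \in \mathcal{C}(X, Y) \mid (x, y) \in B(f)\}.$$

Composition is well defined because $B(g) \circ B(f) \subseteq B(g \circ f)$, identities exist because $\mathrm{id}_{B(X)} \subseteq B(\mathrm{id}_X)$, and the projection $\int B \to \mathcal{C}$ is faithful; per the tweets, $B$ being a pseudofunctor corresponds to this projection being a (faithful, discrete) Conduché fibration.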
Any traditional ideas on homology of graphs will have limited success in understanding the way a path in a composite of two open graphs can be more complicated than a composite of two paths, one in each open graph. The reason is that the homology of a graph doesn't change if you replace an edge of the graph by an edge going in the other direction!
So, homology of graphs is okay if you're interested in paths where you're allowed to walk either along or against the direction of each edge in your path.
But this limitation is why people are trying to invent "directed" algebraic topology, where the reverse of an allowed path may not be an allowed path. Roughly speaking, directed algebraic topology is to categories as ordinary algebraic topology is to groupoids.
You can certainly ask the reachability question for both directed and undirected graphs. Is it understood that undirected graph reachability is easier in some way than directed graph reachability?
One thing that bothers me about this approach to emergent effects is that, at least for the particular case (taken here to be the paradigmatic example) of reachability in graphs, a different choice of codomain for a similar functor could make it compositional/strictly functorial: you can send an open graph to its reachability relation seen as a morphism of Int(Rel), i.e., a relation from the disjoint union of its two boundaries to itself. Composition in this category works by computing a transitive closure---I haven't checked the details but it seems plausible that this gives a strict functor capturing the problem of reachability in some precise sense.
More broadly, it seems conceivable that for any given problem (e.g., graph reachability), starting with some syntactic category (e.g., open graphs) one could always find different semantics (e.g., Rel vs. Int(Rel)) that captures the problem more or less compositionally according to how much expressiveness the target category packages in its composition. But perhaps it is to be expected that emergent effects are also an artefact of the formalism you use to model the problem, just another expression for leaky abstractions.
Yeah, it's conceivable that you can always avoid it by being smart... my ACT talk last year was about one possible way to get around the problem and recover a behaviour functor but into a different category
But the way I see it, emergent effects are something that arguably does exist out there in the world, and it would be a huge win for ACT to have a really powerful way to talk about that
Jules Hedges said:
You can certainly ask the reachability question for both directed and undirected graphs. Is it understood that undirected graph reachability is easier in some way than directed graph reachability?
That's what I'm claiming; I don't know if this is "understood", and I should make good on my claim that the undirected case can be handled using traditional ideas on homology. In other words, I should prove a theorem.
Robin Piedeleu said:
One thing that bothers me about this approach to emergent effects is that, at least for the particular case (taken here to be the paradigmatic example) of reachability in graphs, a different choice of codomain for a similar functor could make it compositional/strictly functorial: you can send an open graph to its reachability relation seen as a morphism of Int(Rel), i.e., a relation from the disjoint union of its two boundaries to itself. Composition in this category works by computing a transitive closure---I haven't checked the details but it seems plausible that this gives a strict functor capturing the problem of reachability in some precise sense.
That sounds right. It's an important observation! I don't think it should "bother" you... though I can see why it bothered you that nobody said it yet.
Right, in a sense it shouldn't be surprising that any particular example can be handled by being smart... the question is whether every reasonable example can be handled by being smart... or more accurately, whether being smart is the best it's possible to do, and nothing can be said in general
@Robin Piedeleu - By the way, the reachability relation meaning "there's a path from one point to another" makes the set of vertices into a poset, and this poset is the collage of a profunctor from the poset of input boundary vertices to the poset of output boundary vertices (both of which have their own reachability relation).
Oh, no, that's not quite right, the collage I'm thinking about would only take into account paths from input boundary points to output boundary points, not the other way around.
(My remarks may be cryptic because I'm alluding to a profunctor that @Jade Master has written about.)
Jules Hedges said:
Right, in a sense it shouldn't be surprising that any particular example can be handled by being smart... the question is whether every reasonable example can be handled by being smart... or more accurately, whether being smart is the best it's possible to do, and nothing can be said in general
There's maybe something more general you can say about how emergent effects appear based on how expressive your form of composition is and how difficult it is to compute it. Then it would not be surprising that the less computational power you give yourself, the more emergent effects you're going to see popping up. Conversely, it makes sense that you should be able to suppress any emergent behaviour, given enough computational power. This is all a bit vague but characterising the precise tradeoff for specific examples would be nice.
It's also not at all clear that every example of emergent effects comes from not having enough computational power. (Although some of them do... for example I think it's possible to upgrade compositional game theory to be even more compositional, as long as you don't mind computing a Brouwer fixpoint every time you compose anything)
Sure, I'm not claiming that this is always the case but it does seem to happen a lot in relational settings where composition requires solving some constraint satisfaction problem and can be badly global. If we map the problem to a category where it can be solved by composing some functions/anything with a neat causal flow of information, it seems likely that we're going to miss some part of the solution.
Btw, @Jules Hedges, who started this rumor that homology could be good for understanding emergent effects? Was it you, or was it Abramsky talking about sheaves and contextuality, or someone else?
The story is, I thought about the general problem enough to get to the point where I could ask it in the form "how to say something finer-grained than that a lax pseudofunctor to Rel is either a functor or it isn't". For a while around 2019 I asked this question to anyone who would listen, and several different people said it made them think of cohomology... but I don't remember who
But it probably does have some relation to what Samson works on
If we map the problem to a category where it can be solved by composing some functions/anything with a neat causal flow of information, it seems likely that we're going to miss some part of the solution.
Right. In general we'll miss some part of what happens when we combine two systems if we don't keep track of enough information about their internal structure. Compositionality happens when we keep track of just enough information about the two systems that we can recover the same sort of information about the system we get by combining them. There could in theory be many kinds of "just enough" here - i.e., many ways of achieving compositionality. It reminds me of a fixed-point problem.
But it's also very interesting to study what happens when we don't keep track of enough information to recover the same sort of information about the composite system. It's not just a bad thing to be shunned and ignored.
Morgan Rogers (he/him) said:
Oh hey that last reference is, completely coincidentally, pretty useful for me. Thanks for sharing :joy:
Personally I would love your problem to be solved in a grounded way Jules Hedges; that is, I would love to see some more concrete examples of emergent behaviours besides connectivity that can serve to lend insight into the result of the categorical abstraction that's already happening.
As it happens, I spontaneously remembered another example! This takes several pages to set up properly, but a couple of years ago I built a category of decorated corelations for smooth optimisation problems. The objects were natural numbers, and the morphisms $m \to n$ were [insert your favourite optimisation theory assumptions here - I used smooth + convex] functions $\mathbb{R}^{m+n} \to \mathbb{R}$, ie. single-objective optimisation problems on $m + n$ variables. (I'm skipping a bunch of details, there's some corelations stuff to make it work out.) Composition was to pointwise add the functions, and then maximise over the variables on the middle boundary. (There were two reasons for that choice - one is a general feeling that max/+ play nice together, and the other is that it lets you do Lagrange multipliers in a nicely compositional way.)
Then I tried to prove that taking argmaxes defines a functor from this category to Rel. But it's not true. It's a lax functor, but the laxator fails extremely badly to be an iso
Why? Because if you have an objective function that splits as the sum of 2 parts and those parts share some variables in common, then the choices for those variables that maximise the sum are in general totally different to the choices for those variables that maximise the individual terms
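Here is a throwaway numerical illustration of that last point (my own toy objective, nothing to do with the actual corelations construction): two smooth concave parts sharing a variable, where the maximiser of the sum is not a maximiser of either part.

```python
# Two parts sharing the variable y: part 1 wants y = 1, part 2 wants y = -1,
# but the sum is maximised at y = 0, which neither part would ever choose.
import numpy as np

ys = np.linspace(-3.0, 3.0, 13)            # grid of candidate values for the shared variable

f1 = lambda y: -(y - 1.0) ** 2             # first summand of the objective
f2 = lambda y: -(y + 1.0) ** 2             # second summand of the objective

best_1 = ys[np.argmax(f1(ys))]             #  1.0
best_2 = ys[np.argmax(f2(ys))]             # -1.0
best_sum = ys[np.argmax(f1(ys) + f2(ys))]  #  0.0

print(best_1, best_2, best_sum)
```

So the relation "this boundary value extends to a maximiser" for the composite is not the composite of the corresponding relations for the parts, which is the laxness described above.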
That was the thing that really pushed me into thinking about the general question, because after spending a couple of months getting that far, I wished there was something better to do than just giving up
(It wasn't a completely wasted effort, since I learned all about decorated corelations in the process)
This happened in quick succession after my topological entropy example. That's what left me with the general feeling that functors are rare in applied situations, and lax pseudofunctors are the norm
John Baez said:
If we map the problem to a category where it can be solved by composing some functions/anything with a neat causal flow of information, it seems likely that we're going to miss some part of the solution.
But it's also very interesting to study what happens when we don't keep track of enough information to recover the same sort of information about the composite system. It's not just a bad thing to be shunned and ignored.
Having seen a few examples, I'm very in favour of this last comment. So much of CT is finding the right level of abstraction to bring the details you care about to the fore; applied CT should definitely have access to tools to measure how much of an idealisation different models are.
Sketch of an idea to turn a lax pseudofunctor into a simplicial set:
[attached image: Ez2OZQCX0AY_iT0.jpg]
I'm very very sketchy on simplicial sets, I'm basically figuring this out as I go along, but I'm kind of excited about this idea, because laxness results in the middle cell being "not big enough", which kinda looks like a "hole" if you squint
(Normally I throw ideas out in public without a second thought, but maybe this one might be one I'll regret not keeping to myself until thinking about it a bit more...)
I don't understand how you are building this simplicial set
A simplex is just a set, and a simplicial complex is a bunch of simplices that are closed under the face relation
When you write one of your sets as a vertex, what do you mean?
It is a set of points - are you taking every point to be a vertex of the simplex?
Or, on the contrary, do you add just one vertex for each such set? In this case you may just write the object itself
It's not a simplicial set. It's just a picture that might contain the germ of an idea
Similarly, an edge between two vertices is labelled by a relation. Are you adding an edge for each element of that relation?
If yes, it seems that the simplicial complex will end up being very (very) connected
If not, it will have a lot of disconnected components? (and will be much much bigger)
I'd suggest to take your functor $B \colon \mathcal{C} \to \mathbf{Rel}$, turn it into a functor $E \to \mathcal{C}$ for some category $E$, and see how all the stuff in $B$ sits on $E$
Then maybe you want to apply some sort of homological reasoning to the whole thing
So you have all the sets you were considering, but you are still indexing them using $\mathcal{C}$.
As far as I can tell this is pretty much a variant of the nerve of a category. As long as your category is small (as many categories of decorated cospanish things are, so for once that doesn't worry me), it's nothing that a few dependent sums can't handle. Instead of the set of 0-cells being the set of objects of $\mathcal{C}$, it's the set $\sum_{X \in \mathrm{ob}(\mathcal{C})} B(X)$
It's probably fibred over the nerve of $\mathcal{C}$, or something like that [waves hands wildly]
By this you mean the picture?
Or what I suggested?
Ok, I get what you say
Jules Hedges said:
You can certainly ask the reachability question for both directed and undirected graphs. Is it understood that undirected graph reachability is easier in some way than directed graph reachability?
This probably isn't in the sense you are meaning, but for what it's worth here is a complexity-theoretic answer. Reachability for undirected graphs can be determined in logarithmic space, while reachability for directed graphs is NL-complete. So the question of whether the directed case is as 'easy' as the undirected case is precisely the L = NL problem. Like most things in complexity theory this is an open problem, but it is widely believed to be false.
The problem is that it seems to me that you are building something akin to the nerve of $\mathcal{C}$, which says little about your functor to $\mathbf{Rel}$
You are basically using sets to index the faces of your complex, yes. But the sets themselves aren't "opened up", or interfaced with the complex itself
My previous observation meant exactly this: as of now, if you replace, say, any of the sets with a different one, you'd get a very similar complex
Oh, ok, now I'm seeing how you are building the 2-d face
It's still very coarse, but I'd rephrase your condition in another way
Suppose you have composable morphisms $f \colon X \to Y$ and $g \colon Y \to Z$ in $\mathcal{C}$. You add the vertices $B(X)$, $B(Y)$, $B(Z)$ to the complex. Similarly, you add $B(f)$, $B(g)$, $B(g \circ f)$ as 1-dimensional simplices.
Now, you add the center face only if $B(g) \circ B(f) = B(g \circ f)$ as sets
This would give you a very coarse way of saying how strict functoriality fails
You have a hole in the middle every time laxity is true laxity, as in non-strict, basically
Now, you do this for every composable pair of morphisms, and you can already start saying interesting things, e.g. that if your functor is strict, your simplicial complex should be contractible
So you could build something like this from your functor (this should work also for other interesting target categories that aren't Rel, btw), and obtain a complex that is homotopy equivalent to the one you got, using elementary collapses
Then you can study the homology of this complex to detect cycles and other stuff
So I recant what I said before, It's not similar at all to the nerve at this stage
There are probably a lot of details that don't check out at this stage btw.
Moreover, as of now we are building very very simple complexes, with no faces of dimension more than 2. This is "expected" since we are focusing on establishing whether $B(g) \circ B(f) = B(g \circ f)$, which just involves two things, $f$ and $g$
My feeling is that we should aim to encode more information about our problem by leveraging on all the higher-dimensions that we aren't using, but I'm really too tired to properly think about this now
One thing I seem to like about this approach is that we are completely forgetting about the elements in the sets $B(X)$, $B(f)$ etc. We are just recording which morphisms, when composed, fail to be strictly carried over by the functor. We are not looking into the reason why a given composition fails (e.g. the elements in the relevant sets that we aren't hitting)
This may make the technique less insightful, but definitely more manageable
I'm also curious to see how we can generalize this to tensor products, it doesn't seem obvious. I thought about considering a monoidal category as a bicategory with just one object, but then we have just one vertex around for our simplicial complex, so things fail
Ok, I'm off to sleep. I apologize to anyone who properly knows this stuff for the huge amount of BS I probably said, I just hope that the intuition I was trying to formalize is more or less understandable. :smile:
Fabrizio Genovese said:
I'd suggest to take your functor $B \colon \mathcal{C} \to \mathbf{Rel}$, turn it into a functor $E \to \mathcal{C}$ for some category $E$, and see how all the stuff in $B$ sits on $E$
I haven't really been following this thread, but it's known that for any category $\mathcal{C}$, the category of lax functors $\mathcal{C} \to \mathbf{Span}$ (a.k.a. displayed categories) is equivalent to the slice $\mathbf{Cat}/\mathcal{C}$, by a generalized sort of Grothendieck construction. A lax functor to $\mathbf{Rel}$ is a special sort of lax functor to $\mathbf{Span}$, so this seems like the canonical way to turn it into a functor into $\mathcal{C}$.
Also this sort of homological stuff reminds me of things like persistent homology and magnitude homology.
Mike Shulman said:
Fabrizio Genovese said:
I'd suggest to take your functor $B \colon \mathcal{C} \to \mathbf{Rel}$, turn it into a functor $E \to \mathcal{C}$ for some category $E$, and see how all the stuff in $B$ sits on $E$
I haven't really been following this thread, but it's known that for any category $\mathcal{C}$, the category of lax functors $\mathcal{C} \to \mathbf{Span}$ (a.k.a. displayed categories) is equivalent to the slice $\mathbf{Cat}/\mathcal{C}$, by a generalized sort of Grothendieck construction. A lax functor to $\mathbf{Rel}$ is a special sort of lax functor to $\mathbf{Span}$, so this seems like the canonical way to turn it into a functor into $\mathcal{C}$.
Yes, this was precisely the equivalence I was alluding at, I've been using it quite a lot in my work on Petri nets!
Mike Shulman said:
Also this sort of homological stuff reminds me of things like persistent homology and magnitude homology.
Again, we are on the same page. It would be very cool to obtain something akin to barcodes for functors, that classify qualitatively how and why a given functor fails to be strict
The way I intuitively think about persistent homology is "A way to translate geometric information into combinatoric information in a way that filters out 'noise' ". This is conceptually very similar to what we want here
Robin Piedeleu said:
But perhaps it is to be expected that emergent effects are also an artefact of the formalism you use to model the problem, just another expression for leaky abstractions.
This is the whole point of Adam's thesis about generative effects: they're a byproduct of leaky abstractions. This is actually a good observation, because 1. it makes the thing relative and 2. it makes you reason in terms of 'hidden information'.
I strongly recommend anyone interested in emergent behaviour to read at least the introduction, the concept of 'veil' is one I really like.
So I think it's now my turn to dump my thoughts about this problem here.
First of all, let me mention that there's a quantitative, decategorified version of 'emergent behaviour', namely non-linearity. Saying that your behaviour functor is merely a lax (monoidal) functor amounts to saying it's not linear. This observation leads to the path of studying 'Taylor expansions' of functors. In fact, higher derivatives of a function measure lack of linearity (sort of: you need to recenter at zero, or talk about 'affinity') at increasing degrees of precision.
Fortunately, there's already a theory of Taylor series for functors, it's called Goodwillie calculus. It's a fine piece of abstract $\infty$-machinery, but it's quite elegant and beautiful. It works best between $\infty$-topoi (it suffices to have 'Goodwillie differentiable' $\infty$-categories), but I believe it can be bent a little to work with less structured objects. It starts with the observation that linearity is akin to a sheaf condition, as @Eigil Rischel mentioned above: in Goodwillie calculus, a linear functor (or 1-excisive functor) is one that preserves homotopy pushouts in a suitable sense. You can extend this definition to higher degrees, defining $n$-excisive functors. These functors are lex reflective inside the category of all functors, so you can localize a given functor to this subcategory.
The result is a (pre)tower (this is a Postnikov tower in the generalized sense) of successive approximations of your functor, looking at its linearity in increasing dimension. This is the Taylor tower of the functor! Notice you can also form the Whitehead tower of the functor, which is sort of the dual of the Postnikov tower, thus mimicking the tower of remainders of the Taylor series (instead of growing an increasingly less linear picture of the functor, we sculpt away an increasingly more linear version of it)
Under suitable assumptions this pretower converges, i.e. its colimit is exactly the original functor, as for Taylor series.
What I like about this approach is that it's gradual, so I can have different degrees of 'compositional approximation' of a behaviour. It's still very abstract though... The 1-dimensional shadow of this process is plain old sheafification, and gives us a way to interpret it as a process of linear approximation of a functor (presheaves are non-linear/non-compositional, sheaves are linear). This makes it more manageable and we can still say some things.
For example, if you're lucky enough to have abelian behaviour, you can distill information about its extensionality ('can I distinguish behaviours of the whole by looking at their restrictions to the parts?') and generativity ('is every behaviour of the whole given by behaviours of the parts?') by looking at the first two cohomology groups of its augmented Cech complex (see here).
What's not clear to me at this point is how to represent systems as sites over which behaviours are presheaves. This is the major obstacle me and @Fabrizio Genovese encountered when we tried to attack consensus with sheaf theory... I still don't have a guiding principle for that. Actually there's one thing I didn't try yet: take your structure (a network, say), define an algebra of observable quantities over it, and use the dual of that (+ an apparently non-canonically definable topology) as a site. This works for stochastic systems (sort of) or quantum systems (pretty literally).
There are many words that I don't know here, but it seems an interesting approach. And yes, I remember the pain of trying to describe systems as sites
In any case, given the multiple directions we already established in this thread, the obvious course of action in better times would be to organize a workshop about it T_T
Do you reckon there's any hope of having that, at some point? :frown:
Fabrizio Genovese said:
In any case, given the multiple directions we already established in this thread, the obvious course of action in better times would be to organize a workshop about it T_T
Alas, totally agree... I feel like (without any experience about it, though) this is exactly the stage at which it would be maximally beneficial to have a bunch of people together in the same place for some days. I'd attend that!
At this point I'd attend anything actually, I'm just longing for math-based human contact.
I'm really stoked to attend ACT21 and CT21 in person.. I'm very lucky they'll be in the countries I already plan to be (and allowed to be) in when they happen. What will the universe cook up this time to disrupt my plans?
Am I wrong or is Britain's border closed atm? If yes there's no hope for ACT2021 for me
Yeah, basically closed
I'm not even very optimistic about being able to get from Glasgow to Cambridge in July
The England-Scotland border just reopened recently, I think
By the way, sorry for the really basic question, but could someone post or link to the definition of "lax pseudofunctor" in this context? I couldn't find much online - does it have another name?
Jules Hedges said:
The England-Scotland border just reopened recently, I think
imagine this being uttered in 2019
I think we are more or less meaning a pseudofunctor whose structure 2-cells are not required to be invertible
And we are wildly handwaving the boatload of coherence conditions that have to hold
There is for sure some formal definition somewhere, even if they are usually difficult to find completely spelled out
(So we want something for which $B(g) \circ B(f) \subseteq B(g \circ f)$ or the other way around, and the same for the monoidal product if you have one)
Nathaniel Virgo said:
By the way, sorry for the really basic question, but could someone post or link to the definition of "lax pseudofunctor" in this context? I couldn't find much online - does it have another name?
A pseudofunctor is a morphism between 2-categories, and "lax pseudofunctor" means that it only laxly preserves horizontal composition. So the idea is that you view your category as a 2-category with only identity 2-cells, and you view Rel as a (locally posetal) 2-category whose 2-cells are relational inclusions, ie. you have a 2-cell from the relation $R$ to the relation $S$ just when $R \subseteq S$
Then a "lax pseudofunctor" is something satisfying
Where the composition on the left is horizontal composition in Rel
In other words, if you have and for some , then it must be the case that . But the converse doesn't hold. (The converse would have to hold for this to be a functor)
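For completeness (my summary, not something anyone wrote above): because Rel is locally posetal, the full data of such a lax functor boils down to two containments,

$$B(g) \circ B(f) \subseteq B(g \circ f) \qquad\text{and}\qquad \mathrm{id}_{B(X)} \subseteq B(\mathrm{id}_X),$$

and there are no coherence diagrams left to check, since any two parallel inclusions of relations are equal.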
So monoidal products don't come into it at all? (That's what I thought but I wanted to check.)
No
That would be a lax-monoidal-lax functor
or: lax monoidal functors are lax functors between deloopings
Fabrizio Genovese said:
(So we want something for which $B(g) \circ B(f) \subseteq B(g \circ f)$ or the other way around, and the same for the monoidal product if you have one)
Actually my experience in practice is that things are usually strict in the tensor direction, and only lax in the composition direction. Because the tensor of 2 morphisms is generally a "disjoint" or "non-interacting" composition, so there is no emergent behaviour. (But I'd expect things to get gnarly again once you upgrade from monoidal to compact closed)
Jules Hedges said:
Fabrizio Genovese said:
(So we want something for which $B(g) \circ B(f) \subseteq B(g \circ f)$ or the other way around, and the same for the monoidal product if you have one)
Actually my experience in practice is that things are usually strict in the tensor direction, and only lax in the composition direction. Because the tensor of 2 morphisms is generally a "disjoint" or "non-interacting" composition, so there is no emergent behaviour. (But I'd expect things to get gnarly again once you upgrade from monoidal to compact closed)
Lucky you! In my application domain everything is lax T_T
(That would be Petri nets with a non-local semantics on them)
Ah yeah, you'd have even more headaches if you were also lax in the other direction. For me, I was optimistic that once you've dealt with mere categories, the rest should be straightforward
In any case, as I was stressing, the problem arises when you want to figure out precisely what are the coherence conditions for these functors. Proving things manually is basically not possible, so to even state that you have a lax-monoidal-lax functor between categories you need to invoke some higher-level result
Jules Hedges said:
Ah yeah, you'd have even more headaches if you were also lax in the other direction. For me, I was optimistic that once you've dealt with mere categories, the rest should be straightforward
Yeah, it can cause headaches. My go-to strategy has been working closely with people that are much better than me at thinking about these things, such as @fosco, while trying to pick up some insights/techniques along the way
This is also one reason I've stuck with talking about semantic functors into Rel, even though going into Span(Set) would be probably better in principle. Much less coherence headaches
Span is incomparably better tho
Every functor into $\mathbf{Span}$ corresponds to a functor into your base category
I've dropped working with $\mathbf{Rel}$ very quickly because the $\mathbf{Span}$ case is much better behaved, for instance you can prove that in many cases the base being free symmetric monoidal implies that the corresponding total category is free symmetric monoidal (which is crucial in my research)
I have a fairly intuitive way to think about the difference. Behaviours into Rel are black-box behaviours: the only things you can observe behaving are the boundaries, so all you know about a system is what combinations of boundary behaviours you can simultaneously observe. If you go into Span(Set) then you also have a set of observations you can make about the system itself, and each system behaviour determines its projections onto boundary behaviour
Precisely. Another way to say this is that in Span you keep track of everything, while in Rel you collapse things. The obvious consequence is that, if you care about freeness, often Rel introduces many new equations you didn't have before, breaking things down
Makes sense
Rel just tells you if things are connected, while Span keeps track of all the different paths connecting them :smile:
On the other hand, coherence in 2 dimensions generally makes me want to quit category theory and run away to become a sailor instead
This is also a very well justified opinion. Also because boats are great!
Jules Hedges said:
A pseudofunctor is a morphism between 2-categories, and "lax pseudofunctor" means that it only laxly preserves horizontal composition.
Normally people just say "lax functor". A pseudofunctor preserves composition up to isomorphism; a lax functor preserves it up to a noninvertible transformation; a strict functor preserves it up to equality. Once you're lax, there's no room left for pseudo-ness.
Jules Hedges said:
On the other hand, coherence in 2 dimensions generally makes me want to quit category theory and run away to become a sailor instead
When I was in grad school, a friend of mine suggested to define an "$n$-person" as a person who gets a headache when thinking about $n$-categories.
Fortunately, examples like this are usually not really 2-dimensional. Most of the time it's better to think about double categories than about bicategories, and then the laxness and so on are at the same categorical dimension as for ordinary monoidal categories (although there are more of them). In particular, a lax monoidal lax functor between double categories is pretty easy to define, where the laxness of the functor is with respect to the "loose" arrows, with composition of "tight" arrows being preserved strictly, while the lax constraints for the monoidal structure are tight arrows.
BTW, there's also a Grothendieck construction equivalence for double categories.
I had an idea inspired by Goodwillie calculus, which didn't quite work out:
Goodwillie calculus is about approximating your functor with versions that become progressively less "excisive".
So first you have a functor which carries pushouts to pullbacks (a "linear" functor).
Then you have one that carries a cube where every face is a pushout to a "pullback cube".
(Note that here we don't require every face to be a pullback square, just that the whole cube is a limit diagram).
And so on.
So I thought maybe you could resolve a lax functor similarly in less and less compositional terms.
Like, the first step is an actual functor, then we have a functor so that , then one with a similar condition for composites of four morphisms, and so on.
The problem is that if you put , you recover the normal notion of compositionality.
(If you take one of the morphisms to be the identity, the condition automatically holds for any lax functor).
In the Goodwillie calculus case, if you take any leg of your cube to be an isomorphism, similarly, any functor carries it to a limit diagram.
(It's also not really clear what the n-compositional components of e.g. the reachability functor should look like)
Jules Hedges said:
Nathaniel Virgo said:
By the way, sorry for the really basic question, but could someone post or link to the definition of "lax pseudofunctor" in this context? I couldn't find much online - does it have another name?
A pseudofunctor is a morphism between 2-categories, and "lax pseudofunctor" means that it only laxly preserves horizontal composition.
"Lax pseudofunctor" is not the name for this: please just say "lax functor".
A functor between 2-categories is strict if it preserves composition up to equality, a pseudofunctor if it preserves it up to coherent isomorphism, and lax if it preserves it only up to a coherent, not necessarily invertible, 2-cell.
So, every pseudofunctor is lax, but not vice versa.
Here's the precise definition of pseudofunctor between 2-categories:
and after that you can see the definition of lax functor between 2-categories.
Oops, that's news to me, sorry for the confusion
This corresponds in the world of monoidal functors to strict, strong and lax, right?
Yes. Mac Lane said "strong", while some say "weak"; it's a bit odd that strong is weaker than strict, but "strong" seems to have won - even I have caved in.
Some people also say "pseudo" for strong/weak.
I guess someone working on the Grothendieck construction, maybe Grothendieck himself, started talking about "pseudofunctors" from a 1-category to Cat, and then I guess the Australians adopted this much more generally.
Reid Barton said:
Some people also say "pseudo" for strong/weak.
Yes, the default notion is usually strong when you're talking about monoidal functors, and once you make that clear you can stop saying it. But there are some people for whom the default notion is lax! So it's always good to put in a little extra work to be clear.
I'm late to the party, but I think it would probably be helpful to have a larger list of emergent phenomena to consider. It's not clear to me that these will all fall under the same rubric.
Paths in open graphs is certainly a good example, but lacks the micro/macro aspect that (seems to be?) an important feature of many classes of emergence; e.g., snowflakes or even temperature as a macroscopic quantity. Similarly, prices in markets are usually viewed as emergent, since market actors generally don't know "true" prices.
Other examples involve some sort of balancing that, again, is not obviously present for paths. Examples include ocean waves, sand dunes and zebra stripes.
John Baez said:
Yes, the default notion is usually strong when you're talking about monoidal functors, and once you make that clear you can stop saying it. But there are some people for whom the default notion is lax! So it's always good to put in a little extra work to be clear.
However, when you're talking about just plain functors (no monoidalness) between (weak) -categories, the default should absolutely (pace Benabou) be strong/weak/pseudo. So a "functor of bicategories" means a pseudofunctor. (But a "functor of (strict) 2-categories" probably means a strict functor, although for clarity that should be emphasized.)
Amar Hadzihasanovic said:
Jules Hedges I think similar ideas to the ones I was writing about on "failure of a morphism to be iso" may apply, just moved up a notch to morphisms in the 2-category of small categories.
A functor F is a right adjoint iff
- every functor out of the domain of F has a right Kan extension along F, and
- every right Kan extension of a functor into the domain of F is preserved by F.
(The first is like a lax version of the condition of being a split mono)
I bet you can phrase them both as certain (2?)-functors being equivalences. So for example any functor that does not have a right Kan extension along F is a witness of its failure to be a right adjoint. So is any right Kan extension that is not preserved by F. Again, the challenge is finding the appropriate structure that such witnesses form.
I see I have been summoned as well, but there's no way I can catch up with the conversation, if no one wants to propose a summary of it
My eye was caught by this; the same condition can be more efficiently packaged into a "lax orthogonality" request: a functor F has a left adjoint if and only if the right Kan extension of the identity along F exists and is absolute. Explaining why this is a form of laxified orthogonality requires a piece of blackboard, but there's a crew of people who tried to explain the similarities between the two concepts, while at the same time trying to re-enact the most powerful tool of category theory, i.e. the small object argument.
More, when I read the rest of the conversation (possibly, in 2030)
@fosco Ah, yes, I am aware of the fact that that condition is (necessary and) sufficient for a functor to be right adjoint -- although I do not know why that's an “orthogonality” condition and you should explain it to me :)
@Matteo Capucci (he/him) what is the title of Adam's thesis about the veil? I wanted to find it the other day but I couldn't remember.
In the message above I focussed on the “less simple” but “unbiased” condition because I was trying to think of a way to give some structure to the failure of a functor to be right adjoint. Supposing that F is not a right adjoint, the identity will not have an absolute right Kan extension along F; that's an obstruction. What about other functors? Perhaps there are cases in which the identity is essentially the only obstruction, in the sense that any functor that's not an equivalence does have an (absolute) right Kan extension along F, and others in which there are more?
@Matteo Capucci (he/him) I'm also curious about the taylor expansion of a functor that you mentioned. How are those terms in the postnikov tower defined?
I may be completely off track, and also it may be that this idea can be expressed more neatly in the “laxified orthogonality” setup that you mention :)
(Sorry @Jade Master for the 'crossing' conversation :sweat_smile: )
@Amar Hadzihasanovic no worries :) it's a huge thread
I mentioned the condition of “being a split mono” as a decategorified, simpler thing on which this idea could be tested. If f is a morphism, then the morphisms that factor through f form a cosieve, so the identity on the domain does iff all of them do, which is why both conditions are equivalent to “f is a split mono”. But looking at all morphisms is more interesting than looking at just the identity if we want to measure the failure of f to be a split mono.
For example, for a function of sets, I imagine there could be a sense in which functions that are identical to except that and for a single pair of elements are “minimal obstructions” to being a (split) monomorphism...
So e.g. a function which is injective except on a pair of elements would have “a single minimal obstruction” and so be “close to being injective”.
So I'm thinking that perhaps there's a way to give structure to these obstructions in a way that quantifies how far a morphism is from being split mono, epi, invertible; and, up one dimension in the n-categorical ladder, how far a morphism is from being a left/right adjoint, an equivalence...
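A toy sketch of that idea for functions of finite sets (the function f and the reading of an "obstruction" as a collision pair are assumptions of the example, not standard terminology):

```python
# Toy sketch (mine): count the "minimal obstructions" to a function of finite
# sets being injective, i.e. unordered pairs x != y with f(x) == f(y).
# Zero obstructions means f is injective; a single obstruction means f is
# "as close as possible" to being injective.

from itertools import combinations

def injectivity_obstructions(f, domain):
    return [(x, y) for x, y in combinations(domain, 2) if f[x] == f[y]]

f = {1: "a", 2: "a", 3: "b", 4: "c"}                 # injective except on the pair (1, 2)
print(injectivity_obstructions(f, [1, 2, 3, 4]))     # [(1, 2)]
```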
Jade Master said:
Matteo Capucci (he/him) I'm also curious about the taylor expansion of a functor that you mentioned. How are those terms in the postnikov tower defined?
A Postnikov tower is just(TM) the result of a factorization with respect to an infinitary system of torsion theories: https://arxiv.org/abs/1501.04658
Jade Master said:
Matteo Capucci (he/him) what is the title lf Adam's thesis about the veil? I wanted to find it the other day but I couldn't remember.
Jade Master said:
Matteo Capucci (he/him) I'm also curious about the taylor expansion of a functor that you mentioned. How are those terms in the postnikov tower defined?
By successive localization at the subcategory of n-excisive functors. Since n-excisive functors are sheaves for a particular Grothendieck topology (the n-th Weiss topology), you can see this as sheafification at increasingly less stringent topologies.
Or, as Fosco pointed out, it corresponds to certain infinitary factorization systems.
Here's maybe a useful intuition
Thanks @fosco and @Matteo Capucci (he/him). It would be helpful to me to see an example. Do you know what the Taylor expansion of the free category functor
would be?
At this time of the night, I know nothing :smile:
I think it's a non-trivial task to compute that... Maybe when you're here in Glasgow we could have a go at it
That's fair lol. I will ask again some other time maybe :)
Spencer Breiner said:
Paths in open graphs is certainly a good example, but lacks the micro/macro aspect that (seems to be?) an important feature of many classes of emergence; e.g., snowflakes or even temperature as a macroscopic quantity. Similarly, prices in markets are usually viewed as emergent, since market actors generally don't know "true" prices.
I think most of these are a lot harder to understand using category theory (or even at all) than paths in open graphs. I think our category-theoretic understanding of emergent behavior is so weak that it's good to think about a really easy example, and paths in open graphs should be one.
Fabrizio Genovese said:
I've been dropping working with Rel very quickly because the Span case is much better behaved, for instance you can prove that in many cases free symmetric monoidal implies free symmetric monoidal (which is crucial in my research)
You probably already know, but every faithful functor A -> B corresponds to a lax functor B -> Rel, and conversely.
So I agree that it's likely better to work with Span, but it is not that much different.
If you care about freeness, it really is :smile:
Spencer Breiner said:
Paths in open graphs is certainly a good example, but lacks the micro/macro aspect that (seems to be?) an important feature of many classes of emergence; e.g., snowflakes or even temperature as a macroscopic quantity. Similarly, prices in markets are usually viewed as emergent, since market actors generally don't know "true" prices.
Yees. So, this framework (lax functors to Rel) I think (again - without real evidence!) is useful for talking about behaviours of "circuit-like" open systems, which I think are more or less the things that come under the umbrella of systems theory or systems engineering. I would call these "emergent behaviours" rather than "emergent effects", since your model is making a totally explicit separation between systems and behaviours. So I think this won't be able to talk about for example emergent effects in statistical mechanics (eg. economic models inspired by statistical physics), continuum mechanics, cellular automata, etc
The setting here is that you have a widget - which is discrete and has a set of behaviours it can do, and in your other hand you have a gizmo, which also has a set of behaviours it can do, and then you plug the widget and the gizmo together and suddenly it has new behaviours that it refused to do before you plugged it together
The discreteness comes from the fact that your class of open systems forms a category, and composition in categories is inherently discrete - getting around that is a different one of my grand problems (which is probably less difficult than this one)
For other kinds of emergent effects outside of systems theory, like the ones you mentioned, I don't even know how to ask the question in category-theoretic terms
Indeed, working out what kind of limit or completion operation is appropriate and realistic is always a challenging problem
One reason why I am more inclined to think about functors to Span is that there is an equivalence of categories
Cat/C ≃ Lax(C, Span), where the functors on the right hand side are lax
Crucially, I'm not seeing Span as a 2-category here. If so, you need to replace things accordingly and do some other tricks
So, in studying compositionality issues for functors to Span, we can characterize compositionality issues of any functor into C, if you wish
I concede that the failure of strictness of a functor may translate into some other property that the functor lacks, tho
Fabrizio Genovese said:
One reason why I am more inclined to think about functors to Span is that there is an equivalence of categories
, where the functors in the right hand side are lax
Slightly tangential, but is there a corresponding equivalence for coslices over Cat, instead of slices?
That I don't know. I stumbled into this equivalence almost by chance while talking with @fosco (meaning that he made me aware of its existence) and we realized it's central to study Petri nets extensions. That's where my knowledge stops :stuck_out_tongue:
Yeah, it's indeed extremely neat
(My feeling is that there will be still a lot of bunnies coming out of a lot of hats in Petri land using this thing, so I'm focusing on it at the moment without looking much farther)
As I learned the other day, the equivalent question to "detect when a lax functor is strong" on the other side is "detect whether a functor is exponentiable". I don't know whether that's an easier or harder problem, but at least now there's 2 different attack surfaces
But hey, the bottom line is that if someone wants to figure out what "doing the (co)homology of this thing" means, I won't complain at all
Jules Hedges said:
As I learned the other day, the equivalent question to "detect when a lax functor is strong" on the other side is "detect whether a functor is exponentiable". I don't know whether that's an easier or harder problem, but at least now there's 2 different attack surfaces
Did you see @Amar Hadzihasanovic's tweetstorm about it? It seems related
I am intending to do exactly that. Since I need to learn basic AlgTop as I go along, this is a bit of a long shot
Yeah
Jules Hedges said:
I am intending to do exactly that. Since I need to learn basic AlgTop as I go along, this is a bit of a long shot
I actually have a study group about TDA at the moment. It's in Italian but we can switch to English if you want to join
We are using this: http://people.maths.ox.ac.uk/nanda/cat/TDANotes.pdf
And we got to chapter 3 (due next week)
I'm in fact following Vidit's video lectures, I didn't look at the lecture notes so far. I won't join you, I prefer to do it at my own speed
Makes sense!
For me it actually makes sense to try to attack a problem where I have to learn the basics as I go along, because that's in fact the only way I can ever motivate myself to learn anything new
I have something like a ton of problems where "maybe homology can say something clever about this", so it was about time for me to learn it properly and see what happens. I guess we are in similar boats!
Nathanael Arkor said:
Fabrizio Genovese said:
One reason why I am more inclined to think about functors to Span is that there is an equivalence of categories
Cat/C ≃ Lax(C, Span), where the functors on the right hand side are lax
Slightly tangential, but is there a corresponding equivalence for coslices over Cat, instead of slices?
I'd say yes: where the n stands for "normal" and the functors are lax; dually, , same convention: normal and oplax functors
In the end, is just(TM)
(I will be grateful for life to anyone who will summarise the content of this thread for me)
The content of this thread is the following: We want a neat way to measure how lax a lax functor is
That is, how much it fails to be strict/strong
Where "how much" can be understood quantitatively or qualitatively, at the moment we are satisfied with everything.
ok, now I can make sense of some of the keywords I saw
If you want it to be extra spicy, you can also throw in monoidality and suppose that the functor is lax-monoidal-lax. In that case we'd like to have a measure for both composition and the monoidal product
cohomology was invented for this; as well as resolutions; as well as the filtration/decomposition of something in a "tower" of simple pieces; as well as...
I tried to have a look at the tower paper by you, Fiorenza and someone else that you referenced
It is quite rare that I don't understand even a single sentence in something I read, but that was indeed the case (and entirely my fault, obviously)
the idea is pretty simple though: factoring wrt a countable family of nested factorisation systems is what geometers call something something derived categories
Unfortunately I am not familiar at all with factorization systems, I just have a rough understanding of the general idea
is it the thing we're after? I don't know. In a sense yes, the failure of a map to be an isomorphism (this is the case we're in) is measured by an homo{topy,logy} group
the problem is that in a sense there seems to be no such thing as "non additive homology objects" measuring the failure of a functor to do something or be something, other than very general, i.e. useless, results
I can't find the place where I first read this
fosco said:
is it the thing we're after? I don't know. In a sense yes, the failure of a map to be an isomorphism (this is the case we're in) is measured by an homo{topy,logy} group
You mean, you can make this claim precise but only if your map lives in an abelian (?) category?
fosco said:
the problem is that in a sense there seems to be no such thing as "non additive homology objects" measuring the failure of a functor to do something or be something, other than very general, i.e. useless, results
So, somewhere in this thread and on twitter, it has been pointed out that measuring the failure of strictness can be reduced to measuring how much a functor fails to have an adjoint
Jules Hedges said:
fosco said:
is it the thing we're after? I don't know. In a sense yes, the failure of a map to be an isomorphism (this is the case we're in) is measured by an homo{topy,logy} group
You mean, you can make this claim precise but only if your map lives in an abelian (?) category?
Yes and no; I don't know what I'm saying. What I mean is: you have a functor between two 2-categories of some kind
we know it is a lax functor; there are maps witnessing that
are they invertible? Well, in full generality what more can you say apart "let's use Yoneda lemma and check"?
It's not a rhetorical question
fosco said:
are they invertible? Well, in full generality what more can you say apart "let's use Yoneda lemma and check"?
Indeed. We were restricting to Rel or Span as codomains
What I mean is, in general when you want to measure how far X is from being Y, you encode "being Y" in the vanishing of some functor over the class of all X's
to do this you have to translate the property of being Y into something relying on additional structure; for homotopy theorists, a model structure of some kind
Hm. I don't know what the model structure would be here tho. More often than not we don't have a precise idea of why things fail
We just say that "composition is not preserved strictly because of emerging behavior" but characterizing this precisely depends on a specific application domain
What is "emerging behaviour"?
Mathematically, I mean. I lost all hopes to understand the gibberish about emergent stuff outside of maths
It's when something is more than its parts. For example, we can associate to a Petri net an SMC describing its executions. Now, you can glue Petri nets together using a formalism like decorated cospans, but if you do this, the functor to SMCs is just lax
The idea is that if you think about the SMC representing reachability, that is if I can use transitions to move tokens around from configuration A to some configuration B, when you glue nets together you may get loops
And so the resulting SMC is strictly bigger than the SMCs of the components
The whole reason I've been asking this question for years is that it's pinning down one aspect of the mysterious "emergent effects" that people talk all kinds of shit about, and makes it totally precise in a way that you can ask to any mathematician
Another thing: sometimes lax functors with nice codomains are equivalent to "good objects" in another category; for example this happens with Cat/C being just normal lax functors C -> Prof
provided it is normal, a lax functor C -> Prof is just(TM) a 1-functor with codomain C
now now, F is a pseudofunctor IFF the associated 1-functor is a fibration
(am I remembering the Grothendieck construction right, or am I just ridiculing myself? https://ncatlab.org/nlab/show/Grothendieck+fibration#fibrations_versus_pseudofunctors )
now, are fibrations the fibrations of a model structure on Cat? Do you really need the iso? https://ncatlab.org/nlab/show/isofibration
let's pretend they are
now for the category theory gibberish: let K be a concrete 2-category with a Yoneda structure on it, with presheaf construction P.
Conjecture: there is an equivalence between and normal lax functors , where is the underlying category of and is the Kleisli bicategory of
In a sense, I cannot state something more general; in a sense, @Nathanael Arkor will see where this is going
and probably prove it for just a KZ doctrine?
(...I'm worried this doesn't have any chance to be plausible, but I wanted to offer the 2-category theorist POV -if I can ever claim myself to be one, which I can't-)
I think I see how one proves this statement abstractly. A monad in a bicategory K is a lax functor from the terminal category into K, and a monad in Span(Cat) is a small category. Therefore, a lax functor from the terminal category into Span(Cat) is a small category. In a similar way, taking a lax functor from a non-terminal category yields a slice category. Then, using Prof(Cat) = Mod(Span(Cat)), and the (double categorical) universal property of Mod, we get that lax functors into Span(Cat) are normal lax functors into Prof(Cat). This should work for arbitrary choices of K (assuming one is willing to work with virtual double categories to avoid concerns about composition of spans).
The corresponding statement @fosco mentions for Kleisli bicategories of KZ doctrines holds when Kl(P) coincides with Mod(Span(K)). (I would hope this is usually the case, but I don't know.)
@fosco - if you're having trouble seeing how laxity represents "emergent behavior", you could look at page 23 of Open Petri nets, which shows a picture of an explicit example. Namely, there's a path in a composite of open graphs that's not a composite of paths in those graphs.
The problem might be at an even more elementary level! But yes, thanks, I'll take a look
No, it's a very simple picture which even a 2-category theorist can understand.
This example is the sort of thing Jules was talking about here:
Jules Hedges said:
It is time for me to reiterate my grand challenge to applied category theory, stretching the purpose of the off-topic thread to breaking point. Typically when you have some category whose objects are "boundaries", morphisms are "open systems" and composition is coupling 2 open systems along a boundary, typically "behaviours" form a lax pseudofunctor . This says that every boundary has some set of possible behaviours, and every open system has a set of boundary behaviours that it can perform. The laxator says that if you behaviours and of two composable open systems that agree on the behaviour of the common boundary then they compose to a behaviour of the complex system; but the converse typically isn't true: exhibits emergent behaviours that don't arise from individual behaviours of and . Right now the typical next step is to shrug and then go and do something else. I would like to associate some mathematical object that "describes" or "measures" the failure of the laxator to be an isomorphism, aka describes or measures the emergent effects. Several people have mumbled something about cohomology after I talked about this problem
This is a very interesting thread, full of interesting questions. I wish I could understand it better.
But here is some input anyway. Maybe it is completely misguided. I sometimes feel like somebody with a hammer, seeing nails everywhere.
I will eventually say something about Jules' grand challenge, but first some preparations.
It has already been mentioned several times that lax functors into Span correspond to functors into the base category, and also that pseudofunctors correspond to discrete Conduché fibrations. It also seems clear that for the envisioned applications, pseudofunctors are not quite enough.
I just wanted to point you to another possible direction of generalisation from pseudofunctors: namely, instead of looking at discrete Conduché fibrations whose domain is a category, allow the domain to be more generally a decomposition space. A decomposition space is very much like a category, except that composition of arrows is not always well defined, but decomposition of arrows is well defined in a certain sense.
I think this should resonate with the general ideas of quantifying lack of compositionality, because in many situations it is easier to take things apart than it is to put things together, and it seems plausible that this ability should enter the analysis of compositionality.
I am trying to advocate the idea of decompositionality!
Unfortunately I now have ten things I would like to say at the same time. It is a rather long story I try to make short:
To say what a decomposition space is, it is necessary to take a simplicial viewpoint. Jules already started defining simplicial objects, and there were already a few mentions of nerves of categories. In the nerve of a category we have X_2 ≅ X_1 ×_{X_0} X_1 (the Segal condition), expressing the fact that the 2-simplices are precisely the composable pairs of arrows. A decomposition space is a simplicial set where this condition does not necessarily hold, but where there is another condition to help tame the anarchy of general simplicial sets. The Segal condition says that certain pushouts in Δ are sent to pullbacks. The decomposition-space condition says that certain other pushouts in Δ are sent to pullbacks. These pushouts are the active-inert pushouts in Δ. I don't want to explain what these are here. But active maps correspond to the inner parts of a simplicial set, the inert maps correspond to the outer parts. In the end the decomposition-space axiom can be interpreted as saying that the possibilities of decomposing an edge do not depend on anything outside this edge. It's a locality condition.
Rather than making all this precise, let me give the motivation: incidence coalgebras. This is an important tool in combinatorics, devised to express how structures decompose into smaller structures. Classically it is defined for posets, but a lot of the theory works the same for categories: any category C has an incidence coalgebra: its underlying vector space is spanned by the arrows of C. The comultiplication is defined by Δ(f) = Σ_{b∘a = f} a ⊗ b.
That is, sum over all possible two-step factorisations of f and return the two components. (If finiteness of sums is important, some conditions should be imposed on the category to ensure the sum is finite. This is important at the level of vector spaces, but not important at the objective level of slice categories, but that's a longer story.)
Now let us write the formula in terms of the nerve X of C:
Δ(f) = Σ_{σ ∈ X_2, d_1σ = f} d_2σ ⊗ d_0σ.
The 2-simplex σ is just a commutative triangle in C, and its long edge d_1σ is required to be f. Now d_2σ and d_0σ are the two short edges, so it is precisely the same formula as before.
The point is that now the formula no longer refers to composition of arrows, so now it makes sense for any simplicial set. But it is no longer guaranteed that the comultiplication is coassociative then. The decomposition-space axioms are designed precisely to ensure that the comultiplication is coassociative.
(So why incidence coalgebras? Well, to decompose objects is clearly a nice thing to do, but more importantly incidence coalgebras are the setting for the theory of Möbius functions. Sorry for throwing in yet another word without explaining it properly. Let me just say that Möbius functions count nondegenerate simplices in all dimensions, with alternating signs depending on their dimension. They encode various kinds of overcounting/undercounting principles, such as the inclusion/exclusion principle, antipodes of Hopf algebras, inversion of power series -- and Euler characteristics! Indeed the Euler characteristic of the nerve of a poset is the Möbius function evaluated on the interval (the poset with bottom and top element added).)
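In case a computable example of the comultiplication helps, here is a small sketch (not from the thread) for a finite poset viewed as a category, using the divisors of 12 under divisibility as an arbitrary test case:

```python
# Small sketch (mine): the incidence coalgebra of a finite poset, viewed as a
# category whose arrows are the pairs x <= y.  The comultiplication sums over
# all two-step factorisations:  Δ(x<=y) = Σ over x<=z<=y of (x<=z) ⊗ (z<=y).

divisors_of_12 = [1, 2, 3, 4, 6, 12]

def leq(a, b):
    return b % a == 0          # the divisibility order on the divisors of 12

def comultiply(x, y):
    assert leq(x, y)
    return [((x, z), (z, y)) for z in divisors_of_12 if leq(x, z) and leq(z, y)]

for left, right in comultiply(1, 12):
    print(left, "⊗", right)
# (1,1)⊗(1,12), (1,2)⊗(2,12), (1,3)⊗(3,12), (1,4)⊗(4,12), (1,6)⊗(6,12), (1,12)⊗(12,12)
```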
But now I think I should give an example of a decomposition space which is not a category (and an example of a CULF functor).
Discrete Conduché fibrations are also called CULF functors: that stands for Conservative and Unique Lifting of Factorisations. Conservative here means that if an arrow is sent to an identity arrow in the base, then it is already an identity arrow, and unique lifting of factorisations says that for an arrow upstairs, for each factorisation of its image in the base there is also a unique factorisation of the arrow itself lying over it. The CULF condition has a nice simplicial interpretation: a functor is CULF iff the corresponding simplicial map (on the nerves) is cartesian on active maps. The first active map is [1] → [0] (which for nerves of categories picks out identity arrows), and the second active map is [1] → [2] (which for nerves of categories is composition of arrows). Cartesian means cartesian in the sense of natural transformations (recall that a simplicial map is a natural transformation between functors Δ^op → Set), which in turn means the naturality squares are pullbacks. The condition ensures that there is induced a coalgebra homomorphism on the incidence coalgebras.
Consider the following coalgebra of graphs, which is called the chromatic Hopf algebra, and which plays a role in graph colouring and in the theory of symmetric functions. The underlying vector space is spanned by isoclasses of graphs (say simple, undirected graphs). A graph G with vertex set V is comultiplied by summing over all splittings of V into two disjoint subsets and then inducing a graph structure on each subset: Δ(G) = Σ_{V = V_1 ⊔ V_2} G|_{V_1} ⊗ G|_{V_2}.
This is a very typical comultiplication formula for combinatorial Hopf algebras, but it is not the incidence coalgebra of a poset or a category. But it is the incidence coalgebra of a decomposition space X:
X_0 is a singleton
X_1 is the set of (isoclasses of) graphs (as required if we want the incidence coalgebra to be spanned by graphs)
X_2 is the set of (isoclasses of) graphs with a splitting of the vertex set into two parts (as required since the sum in the comultiplication formula is over the set of such splittings)
X_k is the set of (isoclasses of) graphs with a splitting of the vertex set into k parts (allowed to be empty).
The inner face maps join two adjacent parts. The outer face maps delete outer parts. The degeneracy maps insert empty parts.
This simplicial set is not the nerve of a category. This is easy to see: a graph with a vertex splitting cannot be reconstructed from knowing the two parts (because all info about the connecting edges is lost in the splitting process). In other words, graphs cannot be composed, but they can be decomposed. In other words, a graph is more than its parts.
The simplicial set X is a decomposition space: intuitively (since I didn't give the detailed pullback conditions) it is because the ways to split a given part in a k-split graph do not depend on the content of the other parts.
This decomposition space has a CULF map to the decomposition space of discrete graphs (meaning graphs without edges): simply return the underlying vertex set. This is CULF because the shape of the comultiplication only depends on the underlying vertex set. (The incidence coalgebra of the discrete case is the so-called binomial Hopf algebra, since its structure constants are the binomial coefficients. This decomposition space is actually Segal: a set with a splitting can be reconstructed from knowing the two parts, simply by disjoint union.)
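Here is a quick sketch (not from the thread) of the chromatic comultiplication on tiny graphs; the encoding of graphs as vertex/edge sets is an assumption of the example. The final comparison is the "a graph is more than its parts" point in code: the one-edge graph and the discrete graph on the same two vertices have identical induced parts under the splitting {1} | {2}.

```python
# Sketch (mine): the chromatic comultiplication, summing over splittings of the
# vertex set and inducing a graph structure on each part.  It also shows the
# non-Segal point: the two induced parts do not determine the graph, because
# the edges running across the splitting are forgotten.

from itertools import combinations

def induced(vertices, edges, part):
    part = set(part)
    return (frozenset(part), frozenset(e for e in edges if set(e) <= part))

def comultiply(vertices, edges):
    vs = list(vertices)
    terms = []
    for r in range(len(vs) + 1):
        for A in combinations(vs, r):
            B = [v for v in vs if v not in A]
            terms.append((induced(vertices, edges, A), induced(vertices, edges, B)))
    return terms

edge_graph = ({1, 2}, {frozenset({1, 2})})   # one edge
discrete   = ({1, 2}, set())                 # two isolated vertices

print(len(comultiply(*edge_graph)), "splittings of the one-edge graph")   # 4
# Splitting either graph as {1} | {2} yields the same pair of parts:
print(induced(*edge_graph, [1]) == induced(*discrete, [1]),
      induced(*edge_graph, [2]) == induced(*discrete, [2]))               # True True
```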
Coming back to CULF maps: their purpose is to preserve decomposition of objects. They were studied by Lawvere in his approach to dynamics in his 1986 preprint State categories and response functors -- a classic in applied category theory and compositionality! He used it in the situation where the codomain encodes a notion of time, the domain encodes a notion of processes, and the CULF functor is a notion of duration.
Lamarche suggested that the category of CULF functors over a fixed base category should be a topos. Johnstone found a counter-example, and then came an analysis of which base categories the conjecture is true for, with results by Johnstone, Niefield, Bunge, Fiore. The funny thing is that Lamarche's conjecture turns out actually to be true --- if you just allow the domain to be a decomposition space instead of merely a category! This is a theorem by David Spivak and myself. (David's motivation for this was aerospace security, but I doubt the theorem was ever helpful in that context.)
Now for the grand challenge. I don't have a solution, but possibly a framework for thinking about it. Possibly just a misunderstanding.
Let C be the base category of 'open systems'. Define a simplicial set X over C, which hopefully will be a decomposition space with a CULF map to C. It is similar to a Grothendieck construction. Let X_0 be the set of objects of C equipped with a behaviour. Let X_1 be the set of arrows of C equipped with a behaviour relation (or a behaviour span, or whatever is appropriate -- I don't even know what the leading example is). And now the punch line (which I think is already in Jules' drawing, although I don't quite understand it): let X_2 be the set of composable pairs of arrows equipped with three behaviour relations: one for each of the three faces of the triangle. And so on, defining higher simplices with behaviour relations on all edges, not just the principal edges. Now the emergent feature of the situation, whatever it is, tells us that X is not Segal: it is not possible to construct a 2-simplex by knowing the two short edges. That is, two composable arrows and choices of behaviour on them are not enough to reconstruct behaviour on the composite.
GUESS: X is a decomposition space.
Of course it is quite a shot in the dark, since I don't know what the motivating examples are, but here are two arguments one could try to follow.
The first is a direct translation of the decomposition-space axiom: it would say that the behaviour of a 3-simplex 0123 can be reconstructed by knowing the behaviour on 013 and on 123. The point is that the difficult part is always the long edge, and this info is already available from 013 (since both 0123 and 013 have the same long edge). The finer information available in 0123 which is not available in 013 is precisely available in 123. This argument is easy to carry out in the graph example.
The second argument strategy is just to prove that the projection map X → C is CULF, because it is a theorem that any simplicial set CULF over a decomposition space is again a decomposition space, and the base is a category and therefore in particular a decomposition space. This check now says (unique lifting of factorisations): given a composable pair in C and a behaviour on the composite, there are unique behaviours on the two arrows compatible with the behaviour on the composite. Basically this should follow from some principle saying that one can always restrict behaviours to 'shorter' arrows. In the case of graphs, this is the fact that a graph on a set of vertices induces a graph structure on any subset of vertices.
As you can see, my intuition comes from the very simple example where the base category is finite sets, composed by disjoint union, and where 'behaviour' is graph structure. Of course there is a high risk that this intuition is misguided.
I look forward to hearing your thoughts.
(Of course your thoughts on this may require actually knowing the definition of decomposition space. I am happy to explain more.)
Hi! Can you get your ideas to apply to the leading example that we've been discussing here? That's this:
The laxness arises from the fact that there can be paths in a composite of two open graphs that do not consist of a path in the first composed with a path in the second.
Of course your ideas could be good even if they don't apply to this example, but we've been talking a lot about this example because it's a simple example of "emergent behavior as laxity".
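For what it's worth, here is a concrete (and deliberately not minimal) instance of that phenomenon in code; the particular graphs, boundary names and encoding are assumptions of the example, chosen only to make the inequality visible:

```python
# Sketch (mine, not minimal): reachability of open graphs is lax.
# An open graph from boundary A to boundary B is modelled crudely as a set of
# directed edges on A ∪ B ∪ (internal vertices); its "behaviour" is the
# relation of A-to-B reachability.

from itertools import product

def reachable(edges, source, target):
    """Is there a directed path from source to target using the given edges?"""
    frontier, seen = {source}, {source}
    while frontier:
        nxt = {v for (u, v) in edges if u in frontier} - seen
        seen |= nxt
        frontier = nxt
    return target in seen

def behaviour(edges, A, B):
    return {(a, b) for a, b in product(A, B) if reachable(edges, a, b)}

A, B, C = {"a"}, {"b1", "b2", "b3"}, {"c"}
G = {("a", "b1"), ("b2", "b3")}          # open graph A -> B
H = {("b1", "b2"), ("b3", "c")}          # open graph B -> C

composed_behaviours = {(a, c) for a, c in product(A, C)
                       if any((a, b) in behaviour(G, A, B)
                              and (b, c) in behaviour(H, B, C) for b in B)}
behaviour_of_composite = behaviour(G | H, A, C)   # glue along B, then look at paths

print(composed_behaviours)        # set()        -- nothing, compositionally
print(behaviour_of_composite)     # {('a', 'c')} -- emergent path a->b1->b2->b3->c
```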
That tour of decomposition spaces and related topics was very enlightening @Joachim Kock! If you ever put together more extensive notes or a presentation on this stuff, I would love to see it :grinning_face_with_smiling_eyes:
This thread is becoming a journal
Amazing
It’s good the archive exists then
John Baez said:
The laxness arises from the fact that there can be paths in a composite of two open graphs that do not consist of a path in the first composed with a path in the second.
That's a tricky example!
In its most natural version it does not fit into the decomposition-space setting, unfortunately (or maybe that's fortunate, who knows?): the base is the category of open graphs. The simplicial set X should then be the following: X_0 is a finite set with a chosen point. X_1 is a graph with a chosen point on each boundary component, and the existence of a path between them. X_2 should be pairs of composable graphs with chosen points on the three boundaries, and existence of paths in the first graph, in the second graph, and in the composite.
Unfortunately it is not a decomposition space. The trouble is that it is not possible to restrict a path to a shorter graph. More precisely, given a composable pair of graphs and a path in the composite, it is not always possible to split it into a path in the first and a path in the second.
Perhaps you can modify the set-up a little bit; after all, any path is decomposable into shorter paths!
Too bad. I should have studied this example before shooting in the dark.
Morgan Rogers (he/him) said:
Perhaps you can modify the set-up a little bit; after all, any path is decomposable into shorter paths!
Yes, I am coming to it now...
However, the following variation (which I admit is a bit contrived) does work: instead of letting X_0 be boundaries with a chosen point, let X_0 be 'boundaries with the property of being nonempty' and let X_1 now be 'graphs with the existence of a path from some point in the in-boundary to some point in the out-boundary'. Now X_2 consists of composable pairs of graphs, with existence of paths between the three 'boundaries'. Now it seems to work: if there exists a path from an in-boundary point to an out-boundary point of the composite, then that path must go through one or more points of the middle boundary.
This variant is a decomposition space that is not Segal: it is easy to cook up two composable graphs that individually admit a path from an in-point to an out-point, but whose composite does not have this property: in the simplest example the two outer boundaries are singletons, the middle boundary has two points, and the two graphs each have a single edge, and these two edges do not connect to the same point in the middle boundary.
It seems that in this example, the emerging behaviour is not reachability but rather non-reachability! (Where 'reachability' is understood in an artificially simple-minded sense.)
Joachim Kock said:
However, the following variation (which I admit is a bit contrived) does work: instead of letting X_0 be boundaries with a chosen point, let X_0 be 'boundaries with the property of being nonempty' and let X_1 now be 'graphs with the existence of a path from some point in the in-boundary to some point in the out-boundary'. Now X_2 consists of composable pairs of graphs, with existence of paths between the three 'boundaries'. Now it seems to work: if there exists a path from an in-boundary point to an out-boundary point of the composite, then that path must go through one or more points of the middle boundary.
It still doesn't seem to cover the case where gluing the two graphs together produces a path with multiple disjoint components in each side.
But in any case, the first example shows that my idea is not nearly as useful as I had hoped.
Morgan Rogers (he/him) said:
It still doesn't seem to cover the case where gluing the two graphs together produces a path with multiple disjoint components in each side.
Not sure I understand this :-(
It is not necessarily a question of splitting the path into two parts (as the full example shows is not possible), but only about the existence of paths in each of the two graphs.
Joachim Kock said:
Morgan Rogers (he/him) said:
It still doesn't seem to cover the case where gluing the two graphs together produces a path with multiple disjoint components in each side.
Not sure I understand this :-(
Here's one I made earlier (taken from an extended abstract of mine, https://obsoletewallstreet.files.wordpress.com/2020/05/categorical-systems-theory-3.pdf)
Screenshot-2021-04-28-at-11.27.01.png
I'm pretty sure this example is minimal as a witness of laxity
Question: Is it a necessary requirement for graphs that, for emergent behavior to manifest, the graphs being composed are disconnected?
The whole point here seems that we are composing two graphs made of two disconnected parts, that become one connected part when composed
"A space (whatever that means) is disconnected" is the prototypical thing that topological data analysis can detect, so if this is a necessary condition, it makes me hopeful in some way
Fabrizio Genovese said:
Question: Is it a necessary requirement for graphs that, for emergent behavior to manifest, the graphs being composed are disconnected?
If your graphs are oriented, you can allow for the underlying non-oriented graph to be connected and still have emerging behavior. But let's avoid undirected stuff just for now
It seems to me that if I have two graphs where each point is reachable from any other point, reachability doesn't really change when composing them
I guess so, yeah. If your open graphs are all individually connected then any path that zigzags back across the boundary can be "shortcutted" to only cross it once
But connected makes the reachability question boring anyway, no? In a connected graph, all pairs of boundary vertices are connected
Oh yeah, directed... forget what I just said
Yeah, so for directed graphs "connected" translates to something different
But I know nothing about what TDA can say for directed graphs
Thinking about it a little bit more, I now think my modified graph example is completely dull. It is an example of a general construction: start with any category, and decide that certain arrows should be excluded. Then write down the sub-simplicial set of the nerve consisting only of those composable strings that do not involve any of the excluded arrows. This will be a decomposition space, provided the remaining 'good' arrows have the property that if an arrow is good, then in any factorisation of it, the two components are good again. Silly example: in the free category on a directed graph, decide to exclude paths of length 10 or more. Clearly if a path has length <10, then the same is true for any of its subpaths. So paths of length <10 form a decomposition space.
This says that bigness is an emergent phenomenon: two small things put together could create something big! Wow, I am really onto something deep here :-)
(But silly as it seems in this context, it is actually an interesting construction. If the category is just a monoid, it gives Segal's notion of partial monoid, which he famously used to prove that for a path-connected space, the configuration space of points on it is homotopy equivalent to the n-fold loop space of its n-fold suspension.)
Joachim Kock said:
But in any way, the first example shows that my idea is not nearly as useful as I had hoped.
It might be useful for something else. Build the general theory: if it's beautiful enough, the examples will come. :upside_down:
Hi all. Joe Moeller just pointed me to this paper on a sheaf-theoretic interpretation of Dijkstra's algorithm, which seems at least tangentially relevant to the discussion here.
This made me wonder why this thread hasn't included more talk about sheaves and colimits. In particular, how much of this laxity is just the expected behavior of colimits on algebraic structures. Isn't that the core of what's going on, at least in the main example considered so far:
I suppose it's implicit in the mentions of cohomology, but I still have trouble keeping those ideas from leaking out of my brain.
Jules Hedges said:
I'm pretty sure this example is minimal as a witness of laxity
Note that in the directed case (I shouldn't even have to specify that here!) there is an even more minimal example: , .
It's worth noting that reachability isn't the only emergent phenomenon in this example. Another that is very important in practice is the presence of cycles, since these generate infinities, feedbacks and all sorts of other trouble :-). Of course, these are both just reflections of the path category, which seems like the "core emergent".
Spencer Breiner said:
Jules Hedges said:
I'm pretty sure this example is minimal as a witness of laxity
Note that in the directed case (I shouldn't even have to specify that here!) there is an even more minimal example: , .
I know you've named the output vertex of as , but surely composing and by gluing along would not identify that with the in ?
But if you just mean taking the pushout of the inclusions of the objects in the category of categories, I see what you're getting at
Spencer Breiner said:
Hi all. Joe Moeller just pointed me to this paper on a sheaf-theoretic interpretation of Dijkstra's algorithm, which seems at least tangentially relevant to the discussion here.
for anyone who scrolls past links, a "sheaf" on an undirected graph in this paper means a presheaf on the category whose objects are the vertices and edges of the graph, with a morphism between a vertex and an edge whenever the vertex is an endpoint of the edge.
Joachim Kock said:
Coming back to CULF maps: their purpose is to preserve decomposition of objects. They were studied by Lawvere in his approach to dynamics in his 1986 preprint State categories and response functors -- a classic in applied category theory and compositionality! He used it in the situation where the codomain encodes a notion of time, encodes a notion of processes, and and the CULF functor is a notion of duration.
Thanks for the reference! Here's a link for those too lazy to look it up (thanks to @Matt Earnshaw's awesome Lawvere bibliography!)
https://github.com/mattearnshaw/lawvere/blob/master/pdfs/1986-state-categories-and-response-functors.pdf
This is actually my very favorite example of laxity, and I can't believe I forgot to mention it above. Given some symmetric monoidal category of processes, we would like to assign a duration to each process. Naively, this looks like a nice monoidal functor with d(g∘f) = d(g) + d(f) and d(f⊗g) = max(d(f), d(g)).
However, there is a problem with interchange, resulting in potential time savings (laxity) by grouping serial compositions. Grouping the serial compositions first, (g∘f)⊗(k∘h) takes max(d(f)+d(g), d(h)+d(k)), which can be strictly smaller than the estimate max(d(f), d(h)) + max(d(g), d(k)) obtained by interchanging and composing (f⊗h) with (g⊗k).
With regards to Jules' original question regarding measuring the amount of laxity, this might be a good example to think about, since there is a visceral quantitative measure backed by strong intuition.
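A tiny numerical sketch of that time saving (the specific durations are made up): series adds durations, parallel takes a max, and the naive "compose the summaries" estimate overshoots the true duration.

```python
# Tiny sketch (mine; numbers made up): durations compose by + in series and by
# max in parallel.  For (g∘f) ⊗ (k∘h), running the two strands independently
# can beat first parallelising and then sequencing -- that slack is the laxity.

d = {"f": 1, "g": 2, "h": 2, "k": 1}

true_duration  = max(d["f"] + d["g"], d["h"] + d["k"])          # 3: each strand runs on its own
naive_estimate = max(d["f"], d["h"]) + max(d["g"], d["k"])      # 4: synchronise between the stages

print(true_duration, "<=", naive_estimate)                      # 3 <= 4
```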
Morgan Rogers (he/him) said:
But if you just mean taking the pushout of the inclusions of the objects in the category of categories, I see what you're getting at
That's right, I'm imagining these graphs as morphisms in some sort of cospan category.
Spencer Breiner said:
Hi all. Joe Moeller just pointed me to this paper on a sheaf-theoretic interpretation of Dijkstra's algorithm, which seems at least tangentially relevant to the discussion here.
This made me wonder why this thread hasn't included more talk about sheaves and colimits. In particular, how much of this laxity is just the expected behavior of colimits on algebraic structures. Isn't that the core of what's going on, at least in the main example considered so far:
I suppose it's implicit in the mentions of cohomology, but I still have trouble keeping those ideas from leaking out of my brain.
Personally I forgot that perspective also existed, since I pretty much don't understand sheaves at all. In this formula what you've done is to take the definition of a lax functor, and switch out composition of morphisms for something written "+". So.... (how) are lax functors and sheaves known to be related?
I'm not even thinking of sheaves yet, just colimits. Sheaves are (in some sense) about preservation of colimits, so that's how they come in.
Here I'm thinking of something like the free category on a graph, but you see the same sort of behavior in the free monoid on a set. Colimits of free algebras are badly behaved, and this often shows up as some kind of laxity because we can mix elements from the two component algebras.
Since composition in cospan categories is defined by pushouts, maybe we should expect laxity to emerge in this case.
My gut feeling is that sheaves are unbelievably cool, but they can become overkill fast in many circumstances
For this kind of problem it seems to me that just some simplicial stuff may take us quite far, so I wouldn't move to sheaves unless there's a real reason to do so
(I've tried to almost force myself to use sheaves to solve a particular problem once and I've been stuck for two years, so I'm slightly biased lol)
I think sheaves are overkill in many applications. However, I think for understanding this theoretical issue of how "the whole is greater than the sum of its parts" they are very nice.
For example, a circle has a hole in it, even though any small contractible portion of the circle does not. "The hole is greater than the sum of the parts". Sheaf cohomology lets us see how the hole---the nontrivial first cohomology of the circle---is composed from what's happening in the parts.
Perhaps a better way to put it is that "the whole is assembled in a more subtle way from the parts than you might naively expect". You can imagine a little ant crawling around a circle, trying to figure out where exactly is the hole. It's everywhere and nowhere.
There's still a big challenge, which is to use ideas like sheaf cohomology to describe new kinds of emergent behavior.
Morgan Rogers (he/him) said:
Spencer Breiner said:
Hi all. Joe Moeller just pointed me to this paper on a sheaf-theoretic interpretation of Dijkstra's algorithm, which seems at least tangentially relevant to the discussion here.
for anyone who scrolls past links, a "sheaf" on an undirected graph in this paper means a presheaf on the category whose objects are the vertices and edges of the graph, with a morphism between a vertex and an edge whenever the vertex is an endpoint of the edge.
Hmm. It looks like you're right! So this is really just a "presheaf-theoretic" interpretation anyway. Does anyone know why that community refers to these as "cellular sheaves" rather than "cellular presheaves"? Is it possible that these give a purely combinatorial/presheafy presentation for traditional sheaf categories over nice spaces (similar to what @Joachim Kock described in a recent thread about categories/planar/operads)?
Interesting question!
Bob Ghrist and Jakob Hansen note that any cell complex gives a poset where one cell is below another if it's a face of that other cell. An example would be a graph, where a vertex is below an edge if it's one of the endpoints of that edge.
They call presheaves on such a poset "cellular sheaves", and here is their excuse:
For experts, this definition at first seems only reminiscent of the notion of sheaves familiar to topologists. The depth of the relationship is explained in detail in [Cur14], but the essence is this: the data of a cellular sheaf on X specifies spaces of local sections on a cover of X given by open stars of cells. This translates in two different ways into a genuine sheaf on a topological space. One may either take the Alexandrov topology on the face incidence poset of the complex, or one may view the open stars of cells and their natural refinements as a basis for the topology of X. There then exists a natural completion of the data specified by the cellular sheaf to a constructible sheaf on X.
So: from such a presheaf you can build a sheaf on some space.
I think it's better to call it a presheaf.
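For concreteness, a small sketch (not Ghrist-Hansen's code; the graph and the stalk dimensions are made up) of the face incidence poset of a graph, which is all a cellular "sheaf" needs as its indexing category:

```python
# Small sketch (mine): the face incidence poset of a graph.  A vertex sits
# below an edge exactly when it is one of its endpoints; a cellular "sheaf"
# is then an assignment of data to cells, functorial along these incidences.

vertices = ["u", "v", "w"]
edges = {"e": ("u", "v"), "f": ("v", "w")}

incidences = [(x, name) for name, (a, b) in edges.items() for x in (a, b)]
print(incidences)   # [('u', 'e'), ('v', 'e'), ('v', 'f'), ('w', 'f')]

# A toy assignment of "stalk" dimensions to cells (the numbers are made up):
stalks = {c: 1 for c in vertices} | {c: 2 for c in edges}
print(stalks)       # {'u': 1, 'v': 1, 'w': 1, 'e': 2, 'f': 2}
```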
Fabrizio Genovese said:
For this kind of problem it seems to me that just some symplicial stuff may take us quite fare, so I wouldn't move to sheaves unless there's a real reason to do so
Given that, as I mentioned above, the "sheaves" here are just a particular case of presheaves, how is this any more complicated than simplicial sets? :thinking:
I wasn't referring to simplicial sets, but more to stuff like complexes and all the standard toolkit of topological data analysis :smile:
Again, I don't think the presheaf data they use in the paper above is any more complicated than the kind of constructions one does in TDA, but it probably depends on your experience with the different techniques.
Indeed, complexes seem way more manageable to me :smile:
I think the situation is somewhat akin to the relationship between finite decimals and real numbers. If I understand correctly, each cellular "sheaf" over a cell complex can be converted to a classical sheaf over the underlying space of the cellular decomposition through something like a geometric realization. I think this works because cells are contractible.
However, for different decompositions, the maps will have different images. Presumably, (nice?) common refinements of the decompositions correspond to some kind of colimits in the sheaf category. In that sense, it gives a single space for comparing different finite approximations.
In my opinion the most minimal example of laxity is when you have two open graphs with a single arrow and they are joined together backwards...so that they turn into a circle in the composite.
It's not minimal in the sense of least amount of emergence but I like it because you get a ton of emergence out of super small graphs.
Jade Master said:
It's not minimal in the sense of least amount of emergence but I like it because you get a ton of emergence out of super small graphs.
Just using words like this hints that you believe it's possible to quantify this stuff, which is pretty much the original motivation for this thread
Just very tangentially, with a definition of dependent graphs you can flatten out cycles. This is something I was thinking about yesterday and I'll write down precisely at some point, but in a nutshell:
In practice, you are tagging every vertex of a graph with a property that is "all the possible ways you got here". Then the outbound arcs from this vertex are checking with which property you landed in A. If they like it, they let you pass to whatever is their target, otherwise they don't.
The most permissive case maps every to just , and is basically the Yoneda embedding evaluated on any set.
Jules Hedges said:
Jade Master said:
It's not minimal in the sense of least amount of emergence but I like it because you get a ton of emergence out of super small graphs.
Just using words like this hints that you believe it's possible to quantify this stuff, which is pretty much the original motivation for this thread
One interpretation is: "tons of emergence" = "all of homotopy theory" --- that's a lot!
Because it is the construction of the circle from two contractible spaces.
The Grothendieck construction of this functor is again a free category, so it spits out another graph. By defining the functor correctly, you can "unroll" cycles. For instance, you may have a graph with a loop on a vertex A and an edge from A to B,
so you can cycle around A as many times as you want, and from A you can also decide to go to B. By defining the functor in the right way you can state that "you can go to B, and only to B, after you completed exactly 3 cycles around A", and other stuff like this. In this example, the Grothendieck construction would give you a graph that goes like a chain A → A → A → A → B
, with no cycles.
All this also works for Petri nets, with the caveat that your functor is now lax monoidal, or even lax-monoidal-lax
This kind of stuff is useful in practice, because it allows you to tag a thing with a potentially very complex reachability relation and simplify it radically. It is used in practice when you do modelling with automata and nets. I also like that there's Yoneda around (always a good sign) and that probably one could say something very sievey and sheafy about it
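Here is a crude sketch of that unrolling for one concrete policy (the vertex names and the cap of three loops are assumptions of the example), standing in for the Grothendieck construction being described:

```python
# Crude sketch (mine): unrolling the graph with a loop at A and an edge A -> B,
# under the policy "you may go to B only after exactly 3 trips around the loop".
# Vertices of the unrolled graph are pairs (vertex, how-you-got-here), here just
# a loop counter -- a poor man's Grothendieck construction of the tagging functor.

def unroll(max_loops=3):
    vertices = [("A", n) for n in range(max_loops + 1)] + [("B", max_loops)]
    edges = [(("A", n), ("A", n + 1)) for n in range(max_loops)]   # going around the loop
    edges.append((("A", max_loops), ("B", max_loops)))             # allowed only after 3 loops
    return vertices, edges

vertices, edges = unroll()
print(edges)
# [(('A',0),('A',1)), (('A',1),('A',2)), (('A',2),('A',3)), (('A',3),('B',3))]
# -- no cycles left: the loop has been flattened into a finite chain.
```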
Fabrizio Genovese said:
Just very tangentially, with a definition of dependent graphs you can flatten out cycles...
What's a dependent graph?
I defined it just afterwards
@Spencer Breiner I've also thought that this might be explained as "failure-to-preserve-pushouts/colimits" (leading to sheaves/cohomology) rather than failure to preserve composition.
The problem in applying this approach to the reachability example is that you can't generally restrict a path to a subgraph, nor extend a path from a subgraph to the whole graph in a canonical way.
Morally speaking, instead of having restriction maps so that is an equivalence, we only have the inverse of that equivalence - we can only glue compatible paths in the subgraphs.
(The notation there isn't supposed to be rigorous)
Also, thinking about decomposition spaces, I came up with the following construction:
Let consist of finite sets, consist of relations between two finite sets, and let consist of data like , where are relations, should probably be transitive(?), and the three boundary 1-simplices are and .
I don't think I quite understand decomposition spaces well enough to say whether this is one - @Joachim Kock can maybe help. But my idea here would be that this carries exactly the extra information about reachability necessary to work out the composite reachability relation - and this can't be inferred from the component reachability relations, which is why this is not a category.
(Of course, you can take a category where morphisms aren't just relations, but "corelations", i.e. they remember reachability between vertices on the same side of the graph. Then you can compose these and get the right thing, but the problem with this is that composition requires computing the transitive closure of a relation, which is hard. So this decomposition space sort of isolates the hard part, and puts it exactly in the failure to be a category).
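A hedged Python sketch of that last point (the graphs, names, and helper functions are my own, chosen so the zigzag is visible): composing plain input-to-output reachability relations misses a path that crosses the common boundary more than once, while keeping same-side reachability and then taking a transitive closure, roughly the corelation composite, recovers it.

```python
# My own example graphs, not from the thread: why composing plain
# input-to-output reachability relations is lax, and why the "corelation"
# fix needs a transitive closure.

def closure(pairs):
    """Reflexive-transitive closure of a set of pairs (naive fixpoint)."""
    verts = {v for p in pairs for v in p}
    r = {(v, v) for v in verts} | set(pairs)
    done = False
    while not done:
        new = {(x, z) for (x, y) in r for (y2, z) in r if y == y2}
        done = new <= r
        r |= new
    return r

# Open graph G1 : {x} -> {y1, y2, y3} and open graph G2 : {y1, y2, y3} -> {z}.
G1 = {("x", "y1"), ("y2", "y3")}
G2 = {("y1", "y2"), ("y3", "z")}

# Plain behaviours: reachability from inputs to outputs only.
R1 = {(a, b) for (a, b) in closure(G1) if a == "x" and b.startswith("y")}
R2 = {(a, b) for (a, b) in closure(G2) if a.startswith("y") and b == "z"}
composed = {(a, c) for (a, b) in R1 for (b2, c) in R2 if b == b2}
print(composed)                      # set() -- no x ~> z predicted

# Corelation-style behaviours keep same-side reachability (y2 ~> y3 in G1,
# y1 ~> y2 in G2); composing them means closing up the union -- the hard part.
glued = closure(closure(G1) | closure(G2))
print(("x", "z") in glued)           # True -- the emergent behaviour
```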
@Eigil Rischel
Lots of ideas in here!
"failure-to-preserve-pushouts/colimits"
My point, at least, is that it matters where we take the colimits. Ordinary people do not care whether the diagram they draw is a graph or a free category, and if we want to talk about reachability then the latter is better behaved.
Now, maybe this just sweeps the problem under the rug--colimits in Cat are hard--but I think it can give us a handle on how sheaves might fit into the question. @Joachim Kock recently described a general strategy for representing compositional structures as (iirc) equivalent categories of either sheaves or presheaves. I think it was in the thread on shapes and algebraic structures. He talks about how you can get data about arbitrary-length paths based on knowledge of length-2 paths (i.e., composition).
I think this is also related to the path (pre)sheaves paper that I mentioned earlier. Though I hadn't put it together yet at the time, they also rely on length-2 paths in their explication of Dijkstra's algorithm.
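As a toy illustration of "arbitrary-length paths from length-2 data" (my own sketch, not the construction from either of the works mentioned): relational composition is exactly "concatenate two known paths", so repeatedly squaring the length-at-most-1 reachability data converges to full reachability.

```python
# My own toy sketch: composition (length-2 data) generates arbitrary-length
# path data by repeated squaring of the reachability relation.

def compose(R, S):
    """Relational composition: pairs (x, z) admitting a two-step path x -> y -> z."""
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

def reach_by_squaring(edges, vertices):
    R = {(v, v) for v in vertices} | set(edges)   # paths of length <= 1
    while True:
        R2 = compose(R, R)                        # doubles the path-length bound
        if R2 <= R:
            return R
        R |= R2

square = {("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")}
print(sorted(reach_by_squaring(square, "abcd")))  # every vertex reaches every vertex
```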
The problem in applying this approach to the reachability example is that you can't generally restrict a path to a subgraph
I suspect you could play some tricks with pointing in order to improve this. Fundamentally, though, I see this as the observation that pullback projections in Cat need not be full.
extend a path from a subgraph to the whole graph in a canonical way.
Not sure what this means. For me, a path in a subgraph is still a path in the supergraph.
... (The notation there isn't supposed to be rigorous)
Not sure why, exactly, but all this makes me think of (groupoidal) van Kampen theorems. Somehow, the problem is that taking the overlap between two graphs to be the shared nodes is too coarse to push out well; what's really needed in the overlap of the pushout is the length-2 paths shared by the two graphs. We can't do this directly, since the whole problem is that one edge is missing from each graph.
Wild speculation: maybe we could canonically extend the original graphs by adding an "outside" node and some special "outside" edges to the original graphs and use length-2 paths to glue along those?
I'm going to have to think some more about the decomposition spaces. I probably won't understand your construction unless you draw a picture, or fill in A, B, C, R with some relevant real-world examples.
I hope that my thesis:
is relevant to this conversation. First of all, I think that it sets an appropriate level of abstraction to discuss this sort of emergent behavior. The generality that I chose was "network", and to me that means either an enriched or an internal graph. There are a surprising number of examples that can be thought of this way. Petri nets and their variants, including integer nets, k-safe nets, pre-nets etc., can all be regarded as internal graphs. Also Markov networks, distance networks, capacity networks, NFAs and more can all be regarded as enriched graphs. And of course there are ordinary graphs, from which these are all generalized. The operational semantics of these networks are all generalized from the free category construction. In the internal case this gives the category of executions for Petri nets and their variants. In the enriched case this gives solutions of the algebraic path problem. But regardless, in all of these cases, for every category of networks there is
Anyway there are likely some holes or misunderstandings so let me know if you have any feedback, comments, or questions. It is all very appreciated :)
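Since the algebraic path problem keeps coming up, here is a generic sketch of it (a standard Floyd-Warshall/Kleene-style closure parametrised by a semiring; my own code, not the construction from the thesis): instantiating the semiring at Booleans gives reachability, and at (min, +) gives shortest path lengths.

```python
# Standard Floyd-Warshall/Kleene-style closure, generic in a semiring; my own
# code, not the construction from the thesis.  It assumes the star of any
# diagonal entry is the unit, which holds for the two instances below
# (Booleans, and nonnegative (min, +) weights).

def path_closure(edges, n, plus, times, zero, one):
    """edges: dict (i, j) -> weight.  Returns the matrix whose (i, j) entry
    `plus`-sums the `times`-weights of all paths i ~> j."""
    A = [[edges.get((i, j), zero) for j in range(n)] for i in range(n)]
    for i in range(n):
        A[i][i] = plus(A[i][i], one)               # include the empty path
    for k in range(n):                             # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                A[i][j] = plus(A[i][j], times(A[i][k], A[k][j]))
    return A

# Boolean semiring: reachability (Warshall's algorithm).
edges_b = {(0, 1): True, (1, 2): True, (2, 0): True}
reach = path_closure(edges_b, 3, lambda x, y: x or y, lambda x, y: x and y, False, True)

# Tropical (min, +) semiring: shortest path lengths (Floyd-Warshall).
INF = float("inf")
edges_t = {(0, 1): 3.0, (1, 2): 4.0, (2, 0): 5.0}
dist = path_closure(edges_t, 3, min, lambda x, y: x + y, INF, 0.0)

print(reach[0][2], dist[0][2])   # True 7.0
```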
Yes, these are really good concrete examples of "emergent behavior as laxity".