Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.


Stream: theory: applied category theory

Topic: measuring non-compositionality


view this post on Zulip Jules Hedges (Mar 24 2020 at 12:16):

Alright folks, in case anybody's bored and in need of a hard question in ACT, here's one that I keep asking to anyone who'll listen. (it's the one I wrote about here: https://julesh.com/2019/12/02/lax-functors-describe-emergent-effects/)

view this post on Zulip Jules Hedges (Mar 24 2020 at 12:18):

Let $\mathcal{C}$ be a category, which we think of as a category whose morphisms are open systems and objects are interaction boundaries. (Morally it's symmetric monoidal, but the question can be asked just for a category.)

view this post on Zulip Jules Hedges (Mar 24 2020 at 12:21):

For any boundary $X$ we have a set $F(X)$ of possible behaviours that can be observed on that boundary. And for any open system $f : X \to Y$, we have a relation $F(f) \subseteq F(X) \times F(Y)$ that describes what boundary behaviours can be observed together when looking at $f$

view this post on Zulip Jules Hedges (Mar 24 2020 at 12:23):

Composition in $\mathcal{C}$ is coupling systems along a common boundary. For any reasonable system (aka just ignore anything that doesn't satisfy this), it is the case that if $(x, y) \in F(f)$ and $(y, z) \in F(g)$ for some $y$, then $(x, z) \in F(f;g)$. That is, any pair of possible behaviours of the parts that agree on their common boundary yields a possible behaviour of the composite

view this post on Zulip Jules Hedges (Mar 24 2020 at 12:24):

But the converse typically fails in many real examples. In summary, $F : \mathcal{C} \to \mathbf{Rel}$ is a lax pseudofunctor (viewing $\mathcal{C}$ as a 2-category with only identity 2-cells, and $\mathbf{Rel}$ as a 2-category with relational inclusions).
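For concreteness, here is a minimal sketch (mine, not from the thread) of the lax condition, with relations modelled as Python sets of pairs; F_f, F_g and F_fg are hypothetical stand-ins for $F(f)$, $F(g)$ and $F(f;g)$.

```python
def compose(R, S):
    """Relational composition R;S = {(x, z) : exists y with (x, y) in R and (y, z) in S}."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def emergent(F_f, F_g, F_fg):
    """Laxity says compose(F_f, F_g) is contained in F_fg; the returned set is the gap,
    i.e. behaviours of the composite not explained by behaviours of the parts."""
    assert compose(F_f, F_g) <= F_fg, "F is not even lax on this pair"
    return F_fg - compose(F_f, F_g)

# Hypothetical toy data: the composite exhibits one behaviour, (0, 'b'),
# that no pair of compatible part-behaviours accounts for.
F_f = {(0, 'p')}
F_g = {('p', 'a')}
F_fg = {(0, 'a'), (0, 'b')}
print(emergent(F_f, F_g, F_fg))   # {(0, 'b')}
```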

view this post on Zulip Jules Hedges (Mar 24 2020 at 12:28):

Hard question: Devise general machinery for "measuring" or "describing" how a lax pseudofunctor fails to be a strong pseudofunctor (in which case you could forget the 2-cells and just call it a functor $\mathcal{C} \to \mathbf{Rel}$)

view this post on Zulip Jules Hedges (Mar 24 2020 at 12:29):

Several people have commented that cohomology sounds like the right tool for the job, but I know nothing about cohomology

view this post on Zulip Nathanael Arkor (Mar 24 2020 at 12:40):

breaking down the question a bit, by just looking at the components of the lax functor, are you essentially looking for a way to describe how a given morphism in a category fails to be an isomorphism?

view this post on Zulip Nathanael Arkor (Mar 24 2020 at 12:40):

or do you expect (the hom categories of) $\mathbf{Rel}$ to have properties that allow you to measure this in ways that you can't in an arbitrary category?

view this post on Zulip James Fairbanks (Mar 24 2020 at 13:04):

In lots of applications, there is structure in the relations that you could use. Like for passive circuits, the behavior on the boundary is the flow of current into or out of the circuit, and the relations have to satisfy Kirchhoff's law. Or in chemical reaction networks the relations are flows of chemical concentration into and out of a reaction network, and so the morphisms satisfy a chemical analog of Kirchhoff's law. I think Jules wants an answer that is general across all categories.

view this post on Zulip James Fairbanks (Mar 24 2020 at 13:07):

If anyone is thinking about "why is it important that the codomain of F is Rel?" I think the fact that the behaviors are relations is important to the question because the differences between Rel and Set are what make ACT as a paradigm for scientific modeling different from CT as a paradigm for PL/CS theory.

view this post on Zulip James Fairbanks (Mar 24 2020 at 13:11):

I don't have any insight into answering the CT question that Jules posed, but I have been thinking a lot about the practical side of scientific modeling with this approach.

view this post on Zulip Jules Hedges (Mar 24 2020 at 13:35):

Yes, the question can be rephrased as "measure how the laxator fails to be an isomorphism". It may be possible to be more specific there than asking how an arbitrary morphism fails to be an iso

view this post on Zulip Jules Hedges (Mar 24 2020 at 13:36):

And yes, $\mathbf{Rel}$ is the weakest setting in which I can ask the question, but it's totally reasonable to replace it with something similar or more specific to make progress

view this post on Zulip Fabrizio Genovese (Mar 24 2020 at 13:46):

This is already progress. The first time we started thinking about this was 2 years ago in Leiden, and the idea was "we want to define categories where the composition of morphisms is not perfect and can fail up to some degree". Here you are restricting this very general question to an observational paradigm, which all things considered seems reasonable to me.

view this post on Zulip Nathanael Arkor (Mar 24 2020 at 13:50):

it would be helpful to pick a concrete setting (e.g. passive circuits), and determine exactly what measure you want to get, rather than trying to do it entirely abstractly right from the beginning

view this post on Zulip Fabrizio Genovese (Mar 24 2020 at 13:51):

I agree with you, but Grothendieck wouldn't! :P

view this post on Zulip Fabrizio Genovese (Mar 24 2020 at 13:57):

Anyway, I am not even sure this would help. What does it mean that a morphism "fails to compose"? Importantly, from an engineering point of view I'd most often like to know "why it fails" more than "how much it fails". So I am not even sure that "measuring how much composition fails" is what we want.

view this post on Zulip Fabrizio Genovese (Mar 24 2020 at 13:58):

For instance, when you compose Petri nets place-wise, the behavior of the composition is lax with respect to the components. Here asking how much the laxator fails to be strong is not what matters; what matters is to know why

view this post on Zulip Fabrizio Genovese (Mar 24 2020 at 13:59):

And, in this particular case, the answer is "because when you glue things together you can create loops, which make everything difficult". It would be nice to have a framework that captures this kind of reasoning, and that maybe quantifies over it.

view this post on Zulip Jules Hedges (Mar 24 2020 at 14:13):

Indeed. There are heaps and heaps of examples; I believe reachability in open Petri nets is the only one that has appeared in published literature. (If it is true that some standard-ish machinery like cohomology applies, it may be that it does what it does, and it needs to be explored rather than engineered)

view this post on Zulip Fabrizio Genovese (Mar 24 2020 at 14:14):

In very simple/wrong words, cohomology individuates and measures what it is that blocks you from doing something. So yes, it seems to me more of a qualitative analysis tool than a quantitative one, and may be what we want.

view this post on Zulip Fabrizio Genovese (Mar 24 2020 at 14:14):

For what it's worth, I'm happy to pair with a cohomologist to look together at the Petri net stuff, if someone can find me one :D

view this post on Zulip Jules Hedges (Mar 24 2020 at 14:18):

Yes, I was abusing the word "measure" from the start; I don't know a good word for whatever it is that cohomology does

view this post on Zulip Nathaniel Virgo (Mar 24 2020 at 14:38):

Sorry for the following basic question - I probably can't help with measuring the laxness of a pseudofunctor, but I'd like to properly understand the question at least. I'm happy to move to #Basic Questions if this belongs there.

When you say "the converse fails," do you mean that given (x,z)F(f;g)(x,z)\in F(f;g) it is not necessarily the case that there exists some yYy\in Y such that (x,y)F(f)(x,y)\in F(f) and (y,z)F(g)(y,z)\in F(g)? If so, can you give an example of a system where this can fail? I tried to think about cellular automata as an example, but in that case, at least the way I tried to put it together, it can't fail.

Specifically: let the objects be the cells in a 1D binary cellular automaton (possibly with a neighbourhood size bigger than 3), so that F(X)F(X) is a set of binary sequences representing the state of cell XX over time, starting from the initial time 0.

Then I guess the morphisms don't need any special structure, we just say there's a morphism f:XYf:X\to Y if cell YY is somewhere to the right of XX. But given xF(X)x\in F(X) and yF(Y)y\in F(Y) we can say that (x,y)F(f)(x,y)\in F(f) if there exists some initial state of all the cells in the cellular automaton, such that F(x)F(x) is indeed the sequence of cell XX's states over time, and similarly for cell YY. This seemed a reasonable interpretation of "F(f)F(f) describes what boundary behaviours can be observed together when looking at ff."

But now if we look at a member (x,z)(x,z) of F(f;g)F(f;g), that corresponds to trajectories of xx and zz that, together, are compatible with the rules. Being a member of (x,z)(x,z) means there is a joint trajectory for all cells such that the rules are obeyed. If we take such a trajectory and find the trajectory of cell YY, then we've found yy such that (x,y)F(f)(x,y)\in F(f) and (y,z)F(g)(y,z)\in F(g).

It seems like dynamical examples of this kind will always work this way - whenever we jointly observe a behaviour on the boundaries of f;gf;g, the internal boundary must be behaving in some way, and if we just observe that too then we have joint behaviours on the boundaries of ff and gg as well.

So what's going on here? Is it because I'm composing things in a direct way instead of defining a symmetric monoidal category? Or is "behaviour" here meant to mean something other than just dynamics? I'm interested in understanding the things that people broadly call "emergent" in cellular automata and other dynamical systems, and I guess I'm trying to work out if these "emergent effects" are related to those or just something different.

view this post on Zulip Jules Hedges (Mar 24 2020 at 14:39):

(minor comment: double dollar signs for mathmode)

view this post on Zulip Nathaniel Virgo (Mar 24 2020 at 14:41):

dollar signs fixed

view this post on Zulip Jules Hedges (Mar 24 2020 at 14:49):

You might have picked an example where you have a functor, so you can be happy. (I hypothesise that any sorts of emergent effects can be "detected" by something failing to be a functor, but I could definitely still be wrong on that.)

view this post on Zulip Jules Hedges (Mar 24 2020 at 14:52):

An example where functoriality fails is reachability in open graphs, where $F(f)$ is the set of pairs of boundary vertices $(x, y)$ such that $y$ is reachable from $x$ in $f$ (say, directed graphs for now). Composition is by gluing boundary vertices. Then you can make an example where $z$ is reachable from $x$ in $f;g$, but there is no boundary vertex $y$ with $y$ reachable from $x$ and $z$ reachable from $y$. Clue: you need to loop back and cross the boundary 3 times
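To make that clue concrete, here is a small sketch (my own construction, not from Jules's notes) of two open directed graphs glued along a three-vertex boundary, where the only path from x to z crosses the boundary three times:

```python
def reachable_pairs(edges, sources, targets):
    """All (s, t) with t reachable from s along the directed edges."""
    nodes = {v for e in edges for v in e} | set(sources) | set(targets)
    reach = {v: {v} for v in nodes}
    changed = True
    while changed:                       # naive transitive closure
        changed = False
        for (u, v) in edges:
            new = reach[v] - reach[u]
            if new:
                reach[u] |= new
                changed = True
    return {(s, t) for s in sources for t in targets if t in reach[s]}

# f : {x} -> {a, b, c} and g : {a, b, c} -> {z}; f;g glues them along {a, b, c}.
f_edges = [('x', 'a'), ('b', 'c')]
g_edges = [('a', 'b'), ('c', 'z')]

F_f  = reachable_pairs(f_edges, ['x'], ['a', 'b', 'c'])     # {('x', 'a')}
F_g  = reachable_pairs(g_edges, ['a', 'b', 'c'], ['z'])     # {('c', 'z')}
F_fg = reachable_pairs(f_edges + g_edges, ['x'], ['z'])     # {('x', 'z')}

composed = {(x, z) for (x, y) in F_f for (y2, z) in F_g if y == y2}
print(F_fg)      # {('x', 'z')}: z is reachable from x in the glued graph
print(composed)  # set(): but no single boundary vertex witnesses the path
```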

view this post on Zulip Jules Hedges (Mar 24 2020 at 14:55):

I first came across this idea in a totally unrelated example, where I was trying to build a compositional theory of single-objective smooth optimisation, and I found that my $\arg\max$ failed to be a functor into $\mathbf{Rel}$ but was a lax pseudofunctor

view this post on Zulip Jules Hedges (Mar 24 2020 at 14:57):

Because if you have a composite objective function (I was composing by addition), the point where the objective is optimised typically doesn't optimise any of the parts individually - this is exactly saying that optimisation theory is interesting!
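A one-line numerical illustration (my numbers, not from the thread): with $f(x) = -(x-1)^2$ and $g(x) = -(x+1)^2$,

$$\arg\max_x f(x) = \{1\}, \qquad \arg\max_x g(x) = \{-1\}, \qquad \arg\max_x \big(f(x) + g(x)\big) = \{0\},$$

so the optimiser of the sum optimises neither summand.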

view this post on Zulip Nathanael Arkor (Mar 24 2020 at 15:01):

that's quite a nice way to look at it: taking the perspective of a compiler, if $f$ and $g$ are functions in the programming language sense, and $F$ is a compiler pass that optimises functions; $g \circ f$ corresponds to inlining $f$ in $g$, but you would not expect inlining an optimised function into an optimised function to give you the same thing as first inlining and then optimising

view this post on Zulip Georgios Bakirtzis (Mar 24 2020 at 15:04):

Jules Hedges said:

Because if you have a composite objective function (I was composing by addition), the point where the objective is optimised typically doesn't optimise any of the parts individually - this is exactly saying that optimisation theory is interesting!

Is there work at the intersection of optimization and category theory? (interested in optimal control recently so it might be a good overlap for me)

view this post on Zulip James Fairbanks (Mar 24 2020 at 16:47):

Jules, what about this? There may be many formulas for the same morphism in C. If two formulas encode the same morphism, call them equivalent formulas. When you apply a functor to any equivalent formulas in C you get the same relation in Rel. But when you apply a lax functor, do you have to get the same subrelation? In other words, can the laxity of F(f) depend on which formula for f we used?

view this post on Zulip Nathaniel Virgo (Mar 24 2020 at 17:25):

Jules, thanks, the open graph example is really helpful. For an optimisation type of example, what would be the mapping to Rel that we might expect to be a functor but isn't?

view this post on Zulip Philip Zucker (Mar 24 2020 at 17:59):

Is this related to the fact that overapproximation can be lossy? Like say I have polynomial relations f and g. I can take their convex hulls, Conv(f) and Conv(g). I can relationally compose f and g exactly, or I can relationally compose the hulls, Conv(f)Conv(g), and Conv(fg) <= Conv(f)Conv(g). In other words, the more you can do exactly before turning to approximations, the tighter the answer you get.

view this post on Zulip John Baez (Mar 24 2020 at 18:10):

Cohomology is a very precise tool for studying certain kinds of laxness. For example, suppose you have a 2-group: a monoidal category where all objects and morphisms are invertible. You can precisely measure how much it fails to be equivalent to a strict skeletal one using group cohomology.

view this post on Zulip John Baez (Mar 24 2020 at 18:10):

In fact this way of thinking, generalized in the right direction, explains the whole point of group cohomology (in my opinion).

view this post on Zulip John Baez (Mar 24 2020 at 18:11):

People have also used cohomology to study non-strictness of monoidal functors.

view this post on Zulip John Baez (Mar 24 2020 at 18:12):

But laxness, as opposed to just weakness, is about noninvertible morphisms, and I think this goes beyond what traditional forms of cohomology are good at handling.

view this post on Zulip John Baez (Mar 24 2020 at 18:13):

Perhaps cohomology (which is already very general) could be generalized a bit more to study things like "the category of all lax monoidal functors from a monoidal category X to a monoidal category Y" or "the category of all lax 2-functors from a bicategory X to a bicategory Y".

view this post on Zulip John Baez (Mar 24 2020 at 18:14):

Maybe it already has been...

view this post on Zulip Jules Hedges (Mar 24 2020 at 18:30):

Nice! Do you have any gut feeling for how easy/hard/impossible that might be? (For someone that knows what they're doing with cohomology... which definitely counts me out)

view this post on Zulip Fabrizio Genovese (Mar 24 2020 at 18:32):

On top of that, assuming that this is doable and that someone does it, is there a way for us common mortals to understand it? :D I tried to approach cohomology many times, but I never found a source that explains it to me very clearly.

In particular, I did not find any source that does this via examples which are easy to understand for someone who is not geometrically oriented like me :(

view this post on Zulip Evan Patterson (Mar 24 2020 at 18:38):

Are we interested in a qualitative or quantitative understanding of non-compositionality? I can't speak to cohomology, but it may be useful to think about metric-enriched categories, in which one can quantify the failure of diagrams to commute. People (including me) have worked on such things. As a quick sketch, in the case of reachability, one could take a metric on binary relations, of which many have been proposed, and then get a quantitative measure of the laxness of a functor into Rel at a given morphism.
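One concrete (and deliberately crude) choice of base metric, as an illustration rather than a claim about Evan's work: the Jaccard distance between $F(f);F(g)$ and $F(f;g)$, viewed as finite sets of pairs.

```python
def jaccard_distance(R, S):
    """1 - |R & S| / |R | S| on finite relations; 0 exactly when the relations coincide."""
    if not R and not S:
        return 0.0
    return 1.0 - len(R & S) / len(R | S)

# Hypothetical relations standing in for F(f);F(g) and F(f;g):
print(jaccard_distance({(0, 'a')}, {(0, 'a'), (0, 'b')}))   # 0.5
```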

view this post on Zulip Evan Patterson (Mar 24 2020 at 18:38):

Of course, the effectiveness of this approach depends on the ability to get useful and relevant base metrics.

view this post on Zulip Evan Patterson (Mar 24 2020 at 18:39):

To be sure, this is much less sophisticated than cohomology :)

view this post on Zulip Fabrizio Genovese (Mar 24 2020 at 18:41):

As I said above, I'd prioritize quality over quantity, but this is highly dependent on the applications you have in mind. In PDE-related stuff, quantity could end up being more important.

view this post on Zulip Christian Williams (Mar 24 2020 at 18:42):

Jules Hedges said:

Clue: you need to loop back and cross the boundary 3 times

I'm pretty sure @Jade Master has been thinking about this issue.

view this post on Zulip John Baez (Mar 24 2020 at 18:46):

I think explaining cohomology and extending it to do something new are pretty different jobs.

view this post on Zulip Christian Williams (Mar 24 2020 at 18:46):

thinking about open systems as spans, for any endospan you can form a free category which creates all composites across boundaries... and the idea generalizes to any composite of spans. I don't know it as well; she can talk about it.

view this post on Zulip John Baez (Mar 24 2020 at 18:47):

Anyway, I'll think about how easy/impossible it is to do this job, @Jules Hedges.

view this post on Zulip John Baez (Mar 24 2020 at 18:47):

The basic idea is not at all black magic.

view this post on Zulip John Baez (Mar 24 2020 at 18:48):

Let me just consider this case: X and Y are monoidal categories, and we form the category hom(X,Y) of lax monoidal functors from X to Y, and monoidal natural transformations between these.

view this post on Zulip John Baez (Mar 24 2020 at 18:49):

This is the simplest situation I know where "laxness" raises its ugly head.

view this post on Zulip John Baez (Mar 24 2020 at 18:49):

The relevant "cohomology" here is the set of lax monoidal functors mod monoidal natural isomorphism.

view this post on Zulip John Baez (Mar 24 2020 at 18:50):

So we're saying: what are the different lax monoidal functors, but where we count two as the same if they're monoidally naturally isomorphic?

view this post on Zulip John Baez (Mar 24 2020 at 18:51):

If you want to feel like you're doing cohomology, you call the lax monoidal functors cocycles, and say two of them are cohomologous if they're monoidally naturally isomorphic, and you call our set of equivalence classes the "cohomology".

view this post on Zulip John Baez (Mar 24 2020 at 18:52):

But the fun only starts if you develop tools for computing this cohomology.

view this post on Zulip John Baez (Mar 24 2020 at 18:52):

I guess I'll stop here for now. The "grown-up" example might use 2-categories or bicategories instead of monoidal categories.

view this post on Zulip Amar Hadzihasanovic (Mar 24 2020 at 19:07):

John Baez said:

So we're saying: what are the different lax monoidal functors, but where we count two as the same if they're monoidally naturally isomorphic?

This sounds homotopy-like to me, "identifying if there is a path" -- what do you think makes it "cohomology" instead?

view this post on Zulip Eigil Rischel (Mar 24 2020 at 19:20):

Here is something: We can build two simplicial sets out of Rel: the nerve of the 1-categorical version, call it $N(t_1Rel)$, and the "geometric nerve" of the 2-category Rel (which puts a 2-simplex in any "subcommutative" triangle) - see https://ncatlab.org/nlab/show/geometric+nerve+of+a+bicategory. Call this $N(Rel)$. Observe that there's an inclusion $N(t_1Rel) \hookrightarrow N(Rel)$. Now our lax functor $\mathcal{C} \to Rel$ gives a map of simplicial sets $N(\mathcal{C}) \to N(Rel)$, and (it seems to me) our question is whether or not this lifts to a map $N(\mathcal{C}) \to N(t_1Rel)$. This question may be amenable to some sort of obstruction theory.

view this post on Zulip John Baez (Mar 24 2020 at 19:21):

Jules wondered if we could use cohomology, so I'm calling it "cohomology". :upside_down:

view this post on Zulip Amar Hadzihasanovic (Mar 24 2020 at 19:22):

Ah, it's cohomology in the opposite category :)

view this post on Zulip John Baez (Mar 24 2020 at 19:24):

You can classify 2-groups (=monoidal categories where everything is invertible) where the group of isomorphism classes of objects is G and the group of automorphisms of the unit object is A.

view this post on Zulip John Baez (Mar 24 2020 at 19:24):

To classify them up to equivalence, you need to choose an action of G on A and then an element of $H^3(G,A)$, the third cohomology of the group G with coefficients in the module A.

view this post on Zulip John Baez (Mar 24 2020 at 19:25):

But this element of $H^3(G,A)$ is really just the associator, modulo ways of changing the associator that produce an equivalent monoidal category!

view this post on Zulip John Baez (Mar 24 2020 at 19:25):

It just happens that this set of equivalence classes is called "third cohomology".

view this post on Zulip John Baez (Mar 24 2020 at 19:26):

The main advantage of noticing that it's cohomology is that people have lots of ways of computing it.

view this post on Zulip John Baez (Mar 24 2020 at 19:27):

But the "lax" situation Jules cares about now is more novel, and I don't know if work people have already done will make it easier to compute things.

view this post on Zulip Fabrizio Genovese (Mar 24 2020 at 19:39):

Can you elaborate more on this? I'm trying to follow but I don't understand this step!

view this post on Zulip Amar Hadzihasanovic (Mar 24 2020 at 19:41):

One idea which may or may not make sense (as to “quantifying non-compositionality” as in “how non-pseudo a lax functor is”). This could be translated to “quantifying non-invertibility” of the structural natural transformations of the lax functor.

Now, one way of characterising invertibility of a morphism f : a -> b is: “all well-formed equations of morphisms f;x = g and x;f = h in the indeterminate x have a solution”. Not all such equations are independent, though: for example if f;x = g has a solution, so does f;x = g;g' for any g' (in particular, of course, it suffices that f;x = id(a) and x;f = id(b) have solutions for all the equations to have one).

I wonder if “the [set with some structure] of independent equations lacking a solution” may be interesting as a measure of noninvertibility? Each such equation could be seen as an obstruction to invertibility, in a certain sense.

view this post on Zulip John Baez (Mar 24 2020 at 19:59):

I'm trying to follow but I don't understand this step!

view this post on Zulip John Baez (Mar 24 2020 at 19:59):

What step?

view this post on Zulip John Baez (Mar 24 2020 at 20:00):

The stuff about 2-groups is a nontrivial theorem due to Joyal and Street...

view this post on Zulip John Baez (Mar 24 2020 at 20:01):

I explained it in my paper HDA5: 2-Groups with Aaron Lauda.

view this post on Zulip John Baez (Mar 24 2020 at 20:01):

I don't know if that's the "step" you mean.

view this post on Zulip Fabrizio Genovese (Mar 24 2020 at 20:43):

John Baez said:

I don't know if that's the "step" you mean.

Sorry, I thought I quoted something but I didn't. I was referring to this:
"To classify them up to equivalence, you need to choose an action of GG on AA and then an element of H3(G,A)H^3(G,A), the third cohomology of the group GG with coefficients in the module AA."

view this post on Zulip John Baez (Mar 24 2020 at 20:46):

Yes, that's a nontrivial theorem - try HDA5: 2-Groups, section 8.3.

view this post on Zulip John Baez (Mar 24 2020 at 20:47):

The basic idea is that the associator gives a map from $G^3$ to $A$ (I defined these groups earlier).

view this post on Zulip John Baez (Mar 24 2020 at 20:48):

The pentagon identity says that this map is a "3-cocycle" - think of that as just jargon if you wish, but it's actually cool.

view this post on Zulip John Baez (Mar 24 2020 at 20:48):

Changing the associator to another one that gives an equivalent 2-group is "adding a coboundary" to your 3-cocycle.

view this post on Zulip John Baez (Mar 24 2020 at 20:49):

So you get "3-cocycles mod coboundaries", which forms the "3rd cohomology".

view this post on Zulip John Baez (Mar 24 2020 at 20:49):

That's a rapid sketch of the plan....

view this post on Zulip Fabrizio Genovese (Mar 24 2020 at 21:15):

So elements in $H(G,A)$ are maps $G \times G \times G \to A$? I guess my main problem is not knowing how $H(G,A)$ is defined

view this post on Zulip John Baez (Mar 24 2020 at 21:18):

$H^n(G,A)$ is the set of maps $G^n \to A$ obeying some equations, modulo some equivalence relation.

view this post on Zulip John Baez (Mar 24 2020 at 21:19):

This is what cohomology of groups is all about. The equations are called "cocycle conditions" and anything obeying them is called a "cocycle".

view this post on Zulip John Baez (Mar 24 2020 at 21:19):

The equivalence relation is being called "cohomologousness".

view this post on Zulip John Baez (Mar 24 2020 at 21:20):

They're not mind-breakingly complicated; I'm just too lazy to write them down here, especially since I wrote a paper explaining all this stuff in the case $n = 3$, which is what I've been talking about just now.

view this post on Zulip John Baez (Mar 24 2020 at 21:21):

As I mentioned, the equations in the case $n = 3$ correspond precisely to the fact that an associator must obey the pentagon identity.

view this post on Zulip John Baez (Mar 24 2020 at 21:21):

And the equivalence relations correspond precisely to the fact that you can sometimes change the associator while still getting an equivalent monoidal category.
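For reference, the standard formulas being alluded to (in one common convention, with the $G$-action written $g \cdot a$ and $A$ written additively): a 3-cochain $a : G^3 \to A$ is a 3-cocycle when

$$g_1 \cdot a(g_2, g_3, g_4) - a(g_1 g_2, g_3, g_4) + a(g_1, g_2 g_3, g_4) - a(g_1, g_2, g_3 g_4) + a(g_1, g_2, g_3) = 0,$$

and changing it by a 2-cochain $b : G^2 \to A$ means adding the coboundary

$$(\mathrm{d}b)(g_1, g_2, g_3) = g_1 \cdot b(g_2, g_3) - b(g_1 g_2, g_3) + b(g_1, g_2 g_3) - b(g_1, g_2),$$

so $H^3(G,A)$ is 3-cocycles modulo such coboundaries.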

view this post on Zulip Fabrizio Genovese (Mar 24 2020 at 21:21):

That's nice, thanks. I'll take a look then! The paper is the HDA5: 2-Groups you referenced above, right?

view this post on Zulip John Baez (Mar 24 2020 at 21:21):

Yes, I think I said section 8.3.

view this post on Zulip John Baez (Mar 24 2020 at 21:22):

There are lots of fun string diagrams in this paper, but not in this section. :-(

view this post on Zulip Fabrizio Genovese (Mar 24 2020 at 21:23):

Thanks! I'll try to understand things as much as I can. Is this also a good introduction to cohomology in general? In other words, can someone not having a clue about what cohomology is understand it? Asking for a friend.

view this post on Zulip John Baez (Mar 24 2020 at 21:34):

I don't know what counts as a "good" introduction.

First I learned homology and cohomology of spaces in a topology course as an undergrad, using Greenberg and Harper's Algebraic Topology: a First Course. Then I learned them again in a topology course as a grad student. At that point I could compute them but didn't understand them very deeply. Then I learned homology and cohomology for groups and for Lie algebras - you see, homology and cohomology come in many kinds. At that stage I found Rotman's Introduction to Homological Algebra very useful. Then I learned how to understand all these versions of cohomology much better using n-categories, and I wrote Lectures on n-categories and cohomology.

view this post on Zulip John Baez (Mar 24 2020 at 21:35):

So I've been thinking about this stuff for too long to know what to say, especially since you kinda said you're not into topology.

view this post on Zulip John Baez (Mar 24 2020 at 21:37):

Maybe someone else could answer your question better!

view this post on Zulip Fabrizio Genovese (Mar 24 2020 at 21:39):

It's not that I'm not into topology, it's that my intuition is mainly algebraic/logical, and zero geometric. So I have a very hard time understanding things from a geometric/topological point of view.

view this post on Zulip Fabrizio Genovese (Mar 24 2020 at 21:40):

Sheaf theory helped me a great deal with topology, for instance, since it's very algebraic in flavour

view this post on Zulip John Baez (Mar 24 2020 at 21:58):

Well, then, maybe you should start by learning sheaf cohomology.

view this post on Zulip John Baez (Mar 24 2020 at 21:58):

You've got a sheaf of abelian groups...

view this post on Zulip John Baez (Mar 24 2020 at 21:58):

Its 0th cohomology is the set (actually abelian group) of global sections.

view this post on Zulip John Baez (Mar 24 2020 at 22:00):

Its 1st cohomology keeps track of things that are defined on intersections of 2 open sets and obey the cocycle condition they would obey if they did come from a global section, but don't actually come from a global section.

view this post on Zulip John Baez (Mar 24 2020 at 22:01):

Abramsky has been talking about the 1st cohomology of sheaves in his work on quantum contextuality, though the idea of sheaf cohomology goes back to ~1950.

view this post on Zulip John Baez (Mar 24 2020 at 22:01):

The 2nd cohomology is something about intersections of 3 open sets, and so on.
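In Čech terms (a standard sketch, in one common sign convention): given a cover $\{U_i\}$ and a sheaf of abelian groups $\mathcal{F}$, a 1-cochain assigns $s_{ij} \in \mathcal{F}(U_i \cap U_j)$ to each pair of open sets; it is a 1-cocycle when $s_{ik} = s_{ij} + s_{jk}$ on each triple overlap $U_i \cap U_j \cap U_k$, and a coboundary when $s_{ij} = t_j - t_i$ for some family $t_i \in \mathcal{F}(U_i)$. The 1st cohomology is 1-cocycles modulo coboundaries, and the 0th is exactly the families $t_i$ that agree on overlaps, i.e. the global sections.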

view this post on Zulip John Baez (Mar 24 2020 at 22:03):

I've never met someone who studied sheaf cohomology before studying cohomology of topological spaces... but it's theoretically possible.

view this post on Zulip Fabrizio Genovese (Mar 24 2020 at 22:05):

This seems way more understandable to me

view this post on Zulip Fabrizio Genovese (Mar 24 2020 at 22:05):

I guess I'll start from there then, even if I always found Samson's work on non-contextuality hard to understand.

view this post on Zulip Fabrizio Genovese (Mar 24 2020 at 22:06):

An approach to sheaves that I really liked is Michael Robinson's. Very example oriented, "connected to real things" if you wish. Maybe he wrote something about cohomology as well; I'll have to check.

view this post on Zulip Jules Hedges (Mar 24 2020 at 22:09):

On my todo list is to try to follow the computational algebraic topology course that Vidit Nanda live-tweeted from Oxford last term (but hopefully from a better source than his handwritten notes). I suspect it's very close to the Michael Robinson way

view this post on Zulip Fabrizio Genovese (Mar 24 2020 at 22:13):

I remember when I met Michael at NIST 2 years ago or so. His presentation was what made sheaves click for me. Best moment of the year math-wise for me, or close to.

view this post on Zulip Joshua Tan (Mar 25 2020 at 00:49):

Ahhhhh all my favorite things are in this stream.

view this post on Zulip Joshua Tan (Mar 25 2020 at 00:50):

Somewhere up there Fabrizio made a connection between this subject and an old discussion we had at ACT 2018. Here's a write-up of that discussion: http://www.joshuatan.com/the-failures-of-category-theory/

view this post on Zulip Matteo Capucci (he/him) (Mar 25 2020 at 14:57):

I come here on tiptoe to give another instance of this phenomenon, in probability theory. The functor $L^1$, going from $\mathbf{Prob}$ to $\mathbb{R}\mathbf{Mod}$, associating to a space its algebra of integrable random variables, is oplax monoidal, and basically we call 'covariance' the failure of such a functor to be strong monoidal. At TICT2020, there was a poster outlining a similarity between this situation and Künneth's theorem in homological algebra, i.e. the device you use to compute the homology of a product from the homologies of the factors.

view this post on Zulip Fabrizio Genovese (Mar 25 2020 at 15:30):

This is interesting. I never thought about covariance in this way but it makes a lot of sense!

view this post on Zulip Robin Piedeleu (Mar 26 2020 at 14:55):

(I haven't read this whole thread yet so apologies if someone has already proposed a similar approach.) If measuring the laxness of a functor in qualitative terms is what you're looking for, why not replace the target category with that of spans instead of relations? Then, laxness is precisely witnessed by a 2-cell between spans (i.e. just a map) $h : F(f;g) \rightarrow F(f);F(g)$ for any two composable $f$ and $g$.

I also wanted to mention a result that came to mind, but may not be useful here: by the Grothendieck construction, the category of lax functors $\mathcal{C} \rightarrow \mathbf{Rel}$ is equivalent to the subcategory of the slice category $\mathbf{Cat}/\mathcal{C}$ consisting only of faithful functors. If you replace $\mathbf{Rel}$ with $\mathbf{Span}$ you get the whole slice category (I believe this was first noticed by Dusko Pavlovic and I could try to find a reference if pushed). Not sure what to make of this but it might come in handy at some point!

view this post on Zulip Blake Pollard (Mar 26 2020 at 16:11):

Can we get like 50 examples at a similar level to Jules' non-compositionality of reachability in open graphs?

view this post on Zulip Nathaniel Virgo (Mar 26 2020 at 17:07):

I'm still hoping for a worked example along these lines
Jules Hedges said:

I first came across this idea in a totally unrelated example, where I was trying to build a compositional theory of single-objective smooth optimisation, and I found that my $\arg\max$ failed to be a functor into $\mathbf{Rel}$ but was a lax pseudofunctor

maybe a simplified version of the original problem, but where I can see what the mapping to Rel is, and why it's not a functor. If that's possible of course.

view this post on Zulip David Egolf (Mar 26 2020 at 17:34):

Here's an example of where we care quantitatively about the failure of compositionality in a (medical) imaging engineering context. I don't know much category theory, so this example can probably be expressed better (and more concisely!) using some more sophisticated concepts.

Consider a (source) category $\mathbf{ImgSys}$ of imaging systems. There is a single object, and each morphism corresponds to some "target interrogation". For example, in an ultrasound imaging context, one such morphism would be "fire a focused beam using the full aperture to (0, 38 mm)", provided we specify a coordinate system. Composition in this category corresponds to "interrogate simultaneously", and the identity morphism corresponds to "interrogate with nothing". For example, in ultrasound imaging, composition of two focused beams would correspond to firing the two focused beams simultaneously, and the identity morphism corresponds to firing no ultrasound beam.

Next consider a second (target) category $\mathbf{Imgs}$ of images. Again we have a single object. The morphisms are images, which can be thought of as arrays of real numbers of some fixed size. Composition of morphisms corresponds to adding the corresponding images, and the identity morphism is the zero image.

Now consider a mapping $\phi_{X,R} : \mathbf{ImgSys} \to \mathbf{Imgs}$. We index the mapping on $X$, where $X$ is some physical object we want to image. We also index the mapping on $R$, which is some "image reconstruction scheme", which produces an image given the results of applying a target interrogation to some specific target. We define $\phi_{X,R}(f)$, where $f$ is some target interrogation, to be the image obtained when we interrogate target $X$ using $f$ and make an image from the results using $R$. So $\phi_{X,R}(f) = R(f(X))$.

The "makes images" mapping $\phi_{X,R}$ is not quite a functor. It does send the identity morphism (interrogate with nothing) to the identity morphism (the zero image), under reasonable image reconstruction schemes in a noiseless context. However, in general $\phi_{X,R}(f \circ g) \neq \phi_{X,R}(f) + \phi_{X,R}(g)$. In an ultrasound imaging context we describe this non-compositionality by saying there is "cross-talk" between the interrogations $f$ and $g$. We care about the size of this difference and give it names like "cross-talk artifact level". Predicting and quantifying this degree of non-compositionality is of relevance for designing high-quality imaging schemes that can make images rapidly.

view this post on Zulip Jules Hedges (Mar 27 2020 at 15:42):

I'll try to describe my example of laxness in optimisation, as @Nathaniel Virgo asked. I was planning to write a paper on this, which is now infinitely on hold because the laxness just stops it from being very interesting - which is what got me interested in the topic of saying something more interesting about laxness

view this post on Zulip Jules Hedges (Mar 27 2020 at 15:44):

I built a symmetric monoidal category in which objects are natural numbers (so, a prop) and a morphism $m \to n$ is given by a function $\mathbb{R}^{m+n} \to \mathbb{R}$ satisfying some choice of conditions that make optimisation theory work; smooth and convex would be one choice

view this post on Zulip Jules Hedges (Mar 27 2020 at 15:46):

In fact that's not quite right. A morphism $m \to n$ is given by: (1) an equivalence relation $\sim$ on $m+n$ elements, say with $k$ equivalence classes, and (2) a nice function $\mathbb{R}^k \to \mathbb{R}$

view this post on Zulip Jules Hedges (Mar 27 2020 at 15:48):

The idea of composition is (1) add the functions pointwise, (2) maximise over the variables that are hidden

view this post on Zulip Jules Hedges (Mar 27 2020 at 15:48):

It's constructed as a category of decorated cospans, and takes a few pages to set up the details

view this post on Zulip Jules Hedges (Mar 27 2020 at 15:50):

The choice of $\max$ and $+$ in the composition is the main design choice. That's a pair that are generally considered to play nice together, and it has the nice feature that you can add constraints compositionally using a Lagrange multiplier

view this post on Zulip Jules Hedges (Mar 27 2020 at 15:51):

Anyway, I optimistically hoped to get a monoidal functor $\arg\max : \mathcal{C} \to \mathbf{Rel}$. I think it does turn out to be strong in the monoidal direction, but in any case it's lax in the forwards direction, so it's a monoidal lax pseudofunctor

view this post on Zulip Jules Hedges (Mar 27 2020 at 15:53):

On objects, it takes the object $n$ to the set $\mathbb{R}^n$. It takes a morphism $f : m \to n$ to a subset of $\mathbb{R}^m \times \mathbb{R}^n = \mathbb{R}^{m+n}$, namely the actual argmax of $f$ as a function $\mathbb{R}^{m+n} \to \mathbb{R}$

view this post on Zulip Jules Hedges (Mar 27 2020 at 15:53):

But it's not a functor... not even close

view this post on Zulip Nathaniel Virgo (Mar 27 2020 at 17:27):

Cool, thanks, that helps a lot.

On the off chance it ends up helping somehow, here's a radically simplified version of that, which looks to my physicist-ish eyes like a one-dimensional spin glass model.

Let $\mathcal{C}$ have one object, and let morphisms be functions $\{0,1\}^2 \to \mathbb{R}$. Let composition be given by $(f;g)(x,z) = \min_y\big(f(x,y)+g(y,z)\big)$, which turns out to be associative. I'm thinking of $f(x,y)$ as the coupling energy between two spins, which is the only reason for using min instead of max.

We can define a mapping $F : \mathcal{C} \to \mathbf{Rel}$ that maps the object to $\{0,1\}$ and maps morphisms as $F(f : x \to y) = \{(x,y) : f(x,y) = \min_{x,y} f(x,y)\}$. We might hope this is a functor, but it's not too hard to come up with counterexamples. (Including cases where $F(f);F(g) = \emptyset$, which is a bit weird.)

If we like the physics analogy, this seems to be because when we calculate $F(f)$ we're finding the ground state of a one-dimensional spin glass model of a finite size, where the two end points aren't coupled to anything. But when we do $f;g$ we're coupling the end points of $f$ and $g$ to each other. So it's a slightly different physical situation, and the fact that $F$ isn't a functor has to do with how that coupling enables non-local interactions. (If I can make this thought clearer I will.)

This also makes me think we won't always be looking for a functor to $\mathbf{Rel}$. In this analogy the spin glass has zero temperature. If we had a finite temperature instead then the min would turn into a softmin type thing, and we'd probably be looking for a map to joint probability distributions or conditional distributions instead of relations. I imagine that would fail to be a functor in a similar way, for similar reasons.
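A brute-force check of that setup (my sketch of Nathaniel's construction, with coupling energies restricted to a small integer range so the search is finite):

```python
from itertools import product

SPINS = (0, 1)
VALUES = (-1, 0, 1)                       # small integer couplings keep the search finite

def compose(f, g):
    """(f;g)(x,z) = min over y of f(x,y) + g(y,z); morphisms are dicts on {0,1}^2."""
    return {(x, z): min(f[x, y] + g[y, z] for y in SPINS)
            for x in SPINS for z in SPINS}

def F(f):
    """Ground-state relation: the pairs achieving the global minimum of f."""
    m = min(f.values())
    return {xz for xz, v in f.items() if v == m}

def rel_compose(R, S):
    return {(x, z) for (x, y) in R for (y2, z) in S if y == y2}

morphisms = [dict(zip(product(SPINS, SPINS), vals))
             for vals in product(VALUES, repeat=4)]

for f, g in product(morphisms, repeat=2):
    lhs, rhs = rel_compose(F(f), F(g)), F(compose(f, g))
    assert lhs <= rhs                     # laxity holds for every pair...
    if lhs != rhs:                        # ...but strictly, for many of them
        print(f, g, lhs, rhs)
        break
```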

view this post on Zulip Blake Pollard (Mar 27 2020 at 18:22):

@Jules Hedges I'm having trouble understanding this optimization category in the decorated cospan framework. The key ingredient is a lax monoidal functor $F : (\mathbf{FinSet}, +) \to (\mathbf{Set}, \times)$. In your optimization category, are the objects finite sets $m, n$ or are they $\mathbb{R}^m$? What is a decoration $F(m)$? Is it the set of all (smooth, insert conditions here) functions $f : \mathbb{R}^m \to \mathbb{R}$?

view this post on Zulip Jules Hedges (Mar 27 2020 at 18:35):

Without checking my notes, I think that sounds right yes

view this post on Zulip Jules Hedges (Mar 27 2020 at 18:36):

I described objects as natural numbers, they could equivalently be $\mathbb{R}^n$

view this post on Zulip Jules Hedges (Mar 27 2020 at 18:40):

Wait, no, I think I had the base category as $\mathbf{FinSet}$

view this post on Zulip Blake Pollard (Mar 27 2020 at 19:05):

So you have cospans in $\mathbf{FinSet}$
$m \to s \leftarrow n \to s' \leftarrow o$
and decorations $\{f : \mathbb{R}^s \to \mathbb{R}\}$ and $\{g : \mathbb{R}^{s'} \to \mathbb{R}\}$. For composition, we want a way to cook up a function $\{f;g : \mathbb{R}^{s +_n s'} \to \mathbb{R}\}$ where $s +_n s'$ is the pushout of the finite sets?

view this post on Zulip Joe Moeller (Mar 27 2020 at 22:08):

This is a neat example. I've been getting into tropical stuff lately too. Have you tried thinking about what the Grothendieck construction of it ought to be?

view this post on Zulip John Baez (Mar 27 2020 at 22:32):

I should help Blake finish off building this decorated cospan category.

view this post on Zulip Jules Hedges (Mar 27 2020 at 22:40):

Beh, now I think about it I think I was lying and it's actually a decorated corelations category. I'm struggling a bit without checking my notes here

view this post on Zulip Jules Hedges (Mar 27 2020 at 22:41):

But I'm about to go to sleep, sorry

view this post on Zulip Faez Shakil (Mar 28 2020 at 01:35):

Blake Pollard said:

So you have cospans in $\mathbf{FinSet}$
$m \to s \leftarrow n \to s' \leftarrow o$
and decorations $\{f : \mathbb{R}^s \to \mathbb{R}\}$ and $\{g : \mathbb{R}^{s'} \to \mathbb{R}\}$. For composition, we want a way to cook up a function $\{f;g : \mathbb{R}^{s +_n s'} \to \mathbb{R}\}$ where $s +_n s'$ is the pushout of the finite sets?

@Blake Pollard slightly off topic and probably dumb but how are you factoring cospans through s and s'?
I've been defining them as m -> n <- o and you're making that look silly.

view this post on Zulip John Baez (Mar 28 2020 at 01:55):

Blake is not factoring anything. He's drawing two composable cospans, $m \rightarrow s \leftarrow n$ and $n \rightarrow s' \leftarrow o$, and he's considering composing them.

view this post on Zulip Faez Shakil (Mar 28 2020 at 02:02):

Ah! Thanks I see the pushout square emanating from n now.

view this post on Zulip Faez Shakil (Mar 28 2020 at 02:03):

And thank you for your series on networks! Everything I’ve found fascinating over the last year has resulted from stumbling on to it!

view this post on Zulip John Baez (Mar 28 2020 at 02:04):

Thanks - that's great to hear!