You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Hi,
I was working on a little "Intro to Span(Set) for Engineers" for my own education as well as to help motivate some other things I'm working on.
I noticed that if you have a span $1 \leftarrow X \rightarrow 1$, where $1$ is a set with one element, you can compose it with itself to get the span $1 \leftarrow X \times X \rightarrow 1$ and (unless I made a mistake) it turns out that the apex is just the normal Cartesian product, i.e. $X \times X$.
I also noticed that if you have a span $X \leftarrow X \rightarrow X$ where the source and target are identity functions, you can compose it with itself to get the span $X \leftarrow X \times_X X \rightarrow X$. The set $X \times_X X$ consists of pairs $(x, y)$ such that $\mathrm{id}(x) = \mathrm{id}(y)$, but these are just identity functions, so elements of $X \times_X X$ are pairs $(x, x)$ and (unless I made a mistake) $X \times_X X \cong X$,
so that, considering isomorphism classes of spans, $X \leftarrow X \rightarrow X$ is an identity morphism.
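These two observations can be checked directly with a small Python sketch (the encoding of spans as tuples of sets and dicts is mine, not from the thread): composition of spans is computed as a pullback.

```python
# A span of sets X <-f- S -g-> Y, encoded as (X, S, Y, f, g) with f, g dicts.
# Composing spans uses the pullback: the apex of the composite is
# {(s1, s2) : g1(s1) = f2(s2)}.

def compose(span1, span2):
    (X1, S1, Y1, f1, g1) = span1
    (Y2, S2, Z2, f2, g2) = span2
    assert Y1 == Y2, "spans must share a foot to compose"
    apex = {(s1, s2) for s1 in S1 for s2 in S2 if g1[s1] == f2[s2]}
    left = {p: f1[p[0]] for p in apex}
    right = {p: g2[p[1]] for p in apex}
    return (X1, apex, Z2, left, right)

X = {"a", "b"}
one = {"*"}
to_point = {x: "*" for x in X}

# 1 <- X -> 1 composed with itself: the apex is the Cartesian product X x X.
sp = (one, X, one, to_point, to_point)
apex = compose(sp, sp)[1]
print(apex == {(x, y) for x in X for y in X})  # True

# X <-id- X -id-> X composed with itself: the apex is the diagonal, iso to X.
ident = {x: x for x in X}
idsp = (X, X, X, ident, ident)
apex2 = compose(idsp, idsp)[1]
print(apex2 == {(x, x) for x in X})  # True
```

The second check shows why the identity span is an identity only up to isomorphism: the composite's apex is the diagonal $\{(x, x)\}$, not $X$ itself.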
I thought this could be a cute way to define a product of elements of a set by forming a commuting diagram of spans
I want to get something morally like
Considering isomorphism classes of spans, I want to replace that :point_of_information: diagram with
If $x = y$, I think the diagram is fine, but if $x \neq y$, it doesn't parse because the result is not necessarily an element of $X$.
In desperation, I tried augmenting $X$ with the empty set so I have a diagram
but this is starting to feel desperate and I'm about to accept that multiplying elements of a set via maps between spans is just a bad idea and I should drop it.
Is there any way to salvage this?
The motivation is that I am going to send my sets to algebras via a functor, i.e. $X \mapsto \mathbb{k}^X$ (functions from $X$ to a field $\mathbb{k}$). In this algebra, for every element $x \in X$ we have a basis function $f_x$ such that
$f_x(y) = \delta_{x,y},$
with product
$f_x f_y = \delta_{x,y} f_x.$
So I think it would be interesting if we could define this product in $\mathbf{Span}(\mathbf{Set})$ in terms of commuting diagrams. Since functors preserve commuting diagrams, if I have a rule like that in $\mathbf{Span}(\mathbf{Set})$ it gets preserved by any functor out of $\mathbf{Span}(\mathbf{Set})$.
Any ideas? Thank you :pray:
Edit: If it matters, the first diagram above :point_of_information: is a piece of a natural transformation between functors
so I was hoping to say something morally like, "Multiplication of elements of a set is a natural transformation."
Eric Forgy said:
Hi,
I was working on a little "Intro to Span(Set) for Engineers" for my own education as well as to help motivate some other things I'm working on.
I noticed that if you have a span $1 \leftarrow X \rightarrow 1$, where $1$ is a set with one element, you can compose it with itself to get the span $1 \leftarrow X \times X \rightarrow 1$ and (unless I made a mistake) it turns out that the apex is just the normal Cartesian product, i.e. $X \times X$.
That's true! This fact is famous and can be generalized a lot.
Eric Forgy said:
I also noticed that if you have a span where the source and target are identity functions...
That's called an identity span, it's an identity morphism in the category (or bicategory) of spans. This fact explains some of your observations.
Eric Forgy said:
I thought this could be a cute way to define a product of elements of a set by forming a commuting diagram of spans [...]
I want to get something morally like
That's not something you should want. First, there are only two god-given ways to give a set $X$ a multiplication if we don't have any other structure to help us. One rule is this:
$x \cdot y = x$
and the other is this:
$x \cdot y = y.$
Both rules are associative, so every set naturally becomes a semigroup in two different ways.
It's obvious that there are no other ways, since (by assumption) we don't have anything else to work with.
In particular, $\emptyset$ is not even an element of $X$.
On the other hand, if you want a partially defined multiplication on $X$ you could decree that $x \cdot y$ equals $x$ when $x = y$ and decree that it's undefined otherwise. This makes any set into a semigroup object in the category of sets and partially defined functions. (That's an important category.) You seem to be trying to use $\emptyset$ as a stand-in for "undefined".
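This partially defined multiplication can be sketched in a few lines of Python (using None as the stand-in for "undefined" is my own choice), including a brute-force check that it really is associative:

```python
# The partially defined multiplication: x . y = x when x == y, undefined
# otherwise, with None standing in for "undefined".

def mult(x, y):
    return x if x == y else None

def m(a, b):
    # Extend mult so "undefined" absorbs, to check associativity on triples.
    return None if a is None or b is None else mult(a, b)

X = [1, 2, 3]
assoc = all(m(m(x, y), z) == m(x, m(y, z)) for x in X for y in X for z in X)
print(assoc)  # True: every set becomes a semigroup object this way
```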
Note:
1) composition of morphisms in a category is partially defined, so the set of morphisms of any category becomes a semigroup object in the category of sets and partially defined maps.
2) given a semigroup object $S$ in the category of sets and partially defined maps, we can always give the free vector space on $S$ a product, where we define the product of two elements of $S$ to be zero if it's undefined. This is an associative algebra without unit.
Combining tricks 1) and 2), any category gives an associative algebra without unit. (If the category is finite, this algebra actually has a unit.)
You, Eric, sometimes do this when your category is the category of paths in a graph.
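Combining tricks 1) and 2) for the category of paths in a graph gives the path algebra: basis elements are paths, and the product is concatenation when the endpoints match and zero otherwise. A minimal Python sketch (the encoding is mine, not John's, and it handles only nonempty paths):

```python
# Path algebra of a directed graph: elements are formal linear combinations
# of paths (dicts from path tuples to coefficients); the product concatenates
# composable paths and drops non-composable products (i.e. sends them to 0).

edges = {"e": ("a", "b"), "f": ("b", "c")}  # edge name -> (source, target)

def src(path):
    return edges[path[0]][0]

def tgt(path):
    return edges[path[-1]][1]

def mul(u, v):
    """Bilinear product: concatenate when tgt(p) == src(q), else zero."""
    out = {}
    for p, a in u.items():
        for q, b in v.items():
            if tgt(p) == src(q):
                out[p + q] = out.get(p + q, 0) + a * b
    return out

e, f = {("e",): 1}, {("f",): 1}
print(mul(e, f))  # the length-2 path "e then f"
print(mul(f, e))  # {}: the undefined composite becomes zero
```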
John Baez said:
You seem to be trying to use $\emptyset$ as a stand-in for "undefined".
Exactly. Yeah. I didn't know how to handle "undefined" and "empty" was the closest thing I could think of (and I see your next comment on "partial stuff") :+1:
There's a lot of fun stuff to say about the category of sets and partially defined functions:
https://ncatlab.org/nlab/show/partial+function#idea
John Baez said:
You, Eric, sometimes do this when your category is the category of paths in a graph.
Yeah. That is exactly what I am trying to do. I'll think about it :raised_hands:
The usual definition of the product of paths in a graph as concatenation (if defined) felt a little contrived, so I was trying to come up with some arrow-theoretic way to show that concatenation is the only thing that makes sense.
There's nothing contrived about that.
There's a forgetful functor from Cat to Graph, and its left adjoint sends each graph to the category where morphisms are paths of edges!
That's the way to make precise this intuition: any graph gives a category where you can "freely compose" edges and get morphisms in the category.
John Baez said:
There's a forgetful functor from Cat to Graph, and its left adjoint sends each graph to the category where morphisms are paths of edges!
This sounds like the arrow theoretic justification I was looking for :blush:
Yes!
I was telling you about this a while back, right before you overloaded and told me you had to feed your family:
- Start with a graph $G$.
- Apply the "free category on a graph" functor to get a category whose morphisms are paths in the graph $G$.
- Apply the "nerve" functor, which turns categories into simplicial sets. An $n$-simplex in the nerve will be a length-$n$ path in the graph $G$.
- Apply the "cochains" functor to create the dga of cochains on your simplicial set.
Step 2 is the left adjoint we're talking about right now.
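The "morphisms are paths" part of step 2 can be made concrete with a short sketch (my own encoding, listing only paths of length at least 1):

```python
# "Paths of length n" in a directed graph: these are what the free category
# on the graph produces as (non-identity) morphisms.

def paths(edges, n):
    """All length-n edge sequences where consecutive edges are composable."""
    if n == 1:
        return [[e] for e in edges]
    return [p + [e] for p in paths(edges, n - 1)
            for e in edges if p[-1][1] == e[0]]

G = [("a", "b"), ("b", "c"), ("b", "a")]
print(len(paths(G, 1)))  # 3: the edges themselves
print(len(paths(G, 2)))  # 3: a->b->c, a->b->a, b->a->b
```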
Btw, it's completely normal to overload and "shut down" when learning category theory. I do it all the time. There's just a limit to how much abstraction anyone can swallow in one sitting!
It becomes downright painful... that means it's time to take a break.
Thank you for your help and thank you for your understanding :raised_hands:
I've "hit the wall" so many times learning category theory, I know this is just part of the process. I actually think it involves growing new neurons - there's a kind of physical pain involved. (But then when you understand things it's lots of fun.)
That is very helpful for me too. I have slowly been growing the neurons for the concept of Simplicial Set by following work on higher dimensional networks and tying it to RDF. (I still need to learn about this "nerve" functor, which I noticed turns up a lot). Step 3 now looks very intriguing.
I wonder if those 4 steps are an answer to @Jacques Carette 's question in the underlying graphs thread on Zulip which arose from his experience building a proof relevant Category Theory library in Agda. (see StackExchange question).
John Baez said:
There's a lot of fun stuff to say about the category of sets and partially defined functions:
I actually had this page open :+1:
(but obviously hadn't absorbed it yet :sweat_smile: )
In particular:
The category of sets and partial functions between them is important for understanding computation.
I'm also looking at: https://ncatlab.org/nlab/show/partial+map+classifier
This looks like what I was trying to do (I was on the fence whether to use or ):
A partial function between two sets can be turned into a total function by adding a new element to and sending all that are not in the domain of definition of to this new value = “undefined” and the rest just to their value under .
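The quoted trick is easy to sketch in Python (the names BOTTOM and totalize are mine): adjoin one fresh value to the codomain and send everything outside the domain of definition to it.

```python
# Partial-map-classifier idea: make a partial function total by adjoining
# a single fresh "undefined" value to the codomain.

BOTTOM = object()  # the adjoined value playing the role of "undefined"

def totalize(partial):
    """partial: a dict encoding a partial function. Returns a total function."""
    def total(x):
        return partial.get(x, BOTTOM)
    return total

f = totalize({"a": 1, "b": 2})
print(f("a"))            # 1
print(f("c") is BOTTOM)  # True: "c" is outside the domain of definition
```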
I don't think it matters what you use for "undefined", but some symbol is probably better than :+1:
This also looks interesting: https://ncatlab.org/nlab/show/maybe+monad
I'm not yet ready to give up on this idea of defining a product of elements of a set like
but I need some nice diagram to express it :thinking:
John Baez said:
- Start with a graph $G$.
- Apply the "free category on a graph" functor to get a category whose morphisms are paths in the graph $G$.
- Apply the "nerve" functor, which turns categories into simplicial sets. An $n$-simplex in the nerve will be a length-$n$ path in the graph $G$.
- Apply the "cochains" functor to create the dga of cochains on your simplicial set.
I know I am being stubborn, but I am trying to do something like this, but with steps 1. and 2. replaced by a functor
I like this because each maps to directed paths of length ( is a directed graph as an endomorphism) directly and is more closely related to the next category I'm looking at, which is
The whole picture (still ironing it out) is
where the last step is cochain complexes. This is very intuitive to me.
It isn't necessary to get a product
in , but the product in is of that form, i.e.
It just seems like there should be a clean way to do this with commuting diagrams with
Btw, vertices are thought of as paths of length 0. The corresponding product of edges (spans) is
with the obvious generalization to directed paths.
Btw^2, the second functor maps
What that thread says about the underlying graph functor is that it doesn't respect categorical equivalence. Which does make it a rather bad functor indeed, as it can see fine structure that isn't visible in context (i.e. there are equivalent categories, in the 2-category of categories, which the underlying graph functor sends to inequivalent graphs).
But all that means for its adjoint is that it isn't faithful. Not a big deal.
I can't help but wonder if the nerve functor is well-behaved though.
@Eric Forgy I'm only partially following this discussion. Is this what you mean by a functor ?
-It maps from the category freely generated by starting with a single object and a single non-identity morphism on it
-It sends the single object to a set of vertices , and the generating morphism to a graph (endospan) on that set
-Composition of morphisms in corresponds to pullbacks of the apex (arrow) sets with respect to the source and target functions of the corresponding graphs. This involves taking a subset of the cartesian product of the apex sets, requiring each ordered pair in the result to satisfy the condition that the target of the first element matches the source of the second element. So, corresponds to the paths of length 2 in the graph .
-In general, composing the span with itself $n$ times gives the paths of length $n$ on the graph.
That is exactly right David :+1::blush:
Intuitively, that makes sense. I understand functors as often generating a "best interpretation of the source in the target context". The source in this case is a category that is all about relating a thing to itself in ways generated by one non-trivial way. The functor then picks a single set to relate to itself, and then picks a single generating non-trivial self-relationship on that set - namely the graph . Given a way of composing this self-relationship (in this case we chose to do this by concatenating compatible arrows into paths), this then generates a sequence of additional self-relationships on the set (the paths of higher order on ).
I don't (yet) know enough algebra to follow what you are saying about modules. I'd appreciate it if you could describe the equivalent to "graphs" you are trying to generate in some simpler terms. Maybe in terms of vector spaces?
Eric Forgy said:
Btw^2, the second functor maps
- (which is an algebra)
- (which is a -bimodule)
I see. Where before we had a set of vertices, now we have the functions on that set to some field (assuming we start with a set and a field), which forms an "algebra": "scalar" multiplication is multiplication of functions by field elements, and multiplication of functions with functions presumably corresponds to element-wise multiplication, so $(fg)(x) = f(x)g(x)$.
Before, to make a graph we introduced another set, and associated it in two ways with the set of interest.
Now, we are introducing another algebra of functions and associating it in two ways with our algebra of functions of interest.
David Egolf said:
I don't (yet) know enough algebra to follow what you are saying about modules. I'd appreciate it if you could describe the equivalent to "graphs" you are trying to generate in some simpler terms. Maybe in terms of vector spaces?
Oh dear, I am not the best person (especially among this crowd) to explain modules, but as one engineer to another, modules are a lot like vector spaces. You can add modules together. You can tensor them (if they meet the "apex" condition), but instead of scalar multiplication, you have "ring action". You usually don't talk about "modules", you talk about "(left or right) -modules" or, like in my case, "-bimodules". In my case, I have more than just rings, I have commutative associative algebras (which are also rings) so I deal with -bimodules (which is short for -bimodule). In my case, I have an -bimodule denoted . This means with two elements and , and can act on (which I think of as multiplying on the left and right) giving a new module element
The "apex" condition for tensor product of modules means that if you have an $(A,B)$-bimodule $M$ and a $(B,C)$-bimodule $N$, you can tensor them over $B$, resulting in a new $(A,C)$-bimodule $M \otimes_B N$.
Or, in my case, I have an $(A,A)$-bimodule $M$ so I can tensor it with itself to get a new $(A,A)$-bimodule $M \otimes_A M$. I can repeat this, getting $(A,A)$-bimodules $M \otimes_A \cdots \otimes_A M$.
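For the edge bimodule of a graph, tensoring over the vertex algebra kills exactly the non-composable pairs of edges, so the basis of $M \otimes_A M$ is the set of length-2 paths. A sketch (the encoding is mine; I only track the basis, not the full module structure):

```python
# When M has a basis of edges, the relations of the tensor product over the
# vertex algebra A identify (e . f) (x) e' with e (x) (f . e'), which kills
# every pair of edges whose endpoints don't match.  So a basis of M (x)_A M
# is the set of composable edge pairs, i.e. the length-2 paths.

edges = [("a", "b"), ("b", "c"), ("b", "a")]

def tensor_over_A(M1, M2):
    """Basis of M1 (x)_A M2: pairs of basis edges whose endpoints match."""
    return [(e1, e2) for e1 in M1 for e2 in M2 if e1[1] == e2[0]]

basis = tensor_over_A(edges, edges)
print(len(basis))  # 3: a->b->c, a->b->a, b->a->b
```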
This is VERY reminiscent of what happens with a functor . A directed graph as an endomorphism in can be composed with itself times resulting in
An -bimodule can be thought of as an endomorphism in , i.e. a span
This is what my functor
does. It takes a span (directed graph)
and sends it to the span (-bimodule)
Since a functor
is completely determined by what it does to the one object in and what it does to the morphism in , i.e.
and
the functor is basically a directed graph (as an endomorphism in
So writing
tells us that starting with a directed graph , we can construct an -bimodule with
Thanks for explaining! I'll have to meditate on how "paths" on bimodules relate to something more concrete.
Coming full circle...
As you correctly noted, the algebra is the algebra of functions from $X$ to some field $\mathbb{k}$ with product defined pointwise, i.e.
$(fg)(x) = f(x) g(x).$
But, since the algebra is constructed from a set $X$, the elements of the set correspond to basis elements of the algebra, which I denote $f_x$, such that
$f_x(y) = \delta_{x,y},$
which gives us the product of basis elements as
$(f_x f_y)(z) = f_x(z) f_y(z) = \delta_{x,z} \delta_{y,z},$
which can be written as
$f_x f_y = \delta_{x,y} f_x,$
or in other words: the product of two distinct basis elements is zero, and each basis element is idempotent.
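This Kronecker-delta product is easy to verify concretely (a small sketch in my own encoding, with functions on a finite set as dicts):

```python
# Pointwise algebra of functions on a finite set: the basis function f_x
# satisfies f_x(y) = 1 if x == y else 0, and the pointwise product gives
# f_x f_x = f_x while f_x f_y = 0 for x != y.

X = ["a", "b", "c"]

def basis(x):
    return {y: (1 if y == x else 0) for y in X}

def pointwise(f, g):
    return {y: f[y] * g[y] for y in X}

print(pointwise(basis("a"), basis("a")) == basis("a"))                  # True
print(all(v == 0 for v in pointwise(basis("a"), basis("b")).values())) # True
```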
However, since I have this nice functor , I'm wondering if I can move this product back to
but define it as a commuting diagram so that when I hit it with my functor, we get the same product I know I want in the algebra. This step isn't absolutely necessary because there is a natural way to get the product directly in the algebra as described above, BUT it would be nice, I think, so I'm still trying to figure out a nice way to do it.
David Egolf said:
Thanks for explaining! I'll have to meditate on how "paths" on bimodules relate to something more concrete.
Once you understand that, you are well on your way to understanding discrete differential geometry (which is the point of all this) :+1: :blush:
A general element can be written as
These are "discrete 0-forms".
A general element can be written as
These are "discrete 1-forms".
A general element can be written as
These are NOT "discrete 2-forms".
To get discrete 2-forms, we need the last functor
which is super cool and I haven't talked much about yet (except there are comments sprinkled around here the last few weeks). I am pretty excited about all this. It is a beautiful story to tell and I've almost got everything tidied up (thanks to a lot of help from John and others here :pray: ).
It sounds cool! I don't know enough differential geometry to say much more, sadly.
One more question: If we have a functor from to , acting on the object:
,
what span does this become in ? You mentioned already how acts on the sets, but how does it act on the functions?
David Egolf said:
It sounds cool! I don't know enough differential geometry to say much more, sadly.
One more question: If we have a functor from to , acting on the object:
,
what span does this become in ? You mentioned already how acts on the sets, but how does it act on the functions?
You are now caught up to where I am currently writing up in my article :blush:
I should try to finish writing this up first. Of course when my paper is finished, I will share with anyone interested :blush:
Almost everything I've said so far (except the ACT lingo) is contained in an old paper of mine (with Urs - creator of the nLab and uber genius):
I think there is still large potential for DDG in applied scientific computation and one of the purposes for resurrecting the old paper is to add some exposition to make it more widely accessible, simplify / formalize the presentation in the language of CT, and extend it a bit with some new recent results I'm excited to share :+1:
Since we wrote that paper, I've demonstrated several uses spanning mathematical finance, fluid dynamics for climate science, and general stochastic partial differential equations, but everything I did was on blogs and discussion forums. Nothing was formally published. The original motivation was for computational electromagnetics although I haven't written about that yet (something I hope to change soon).
David Egolf said:
You mentioned already how acts on the sets, but how does it act on the functions?
One thing I can say real quick is that there are no "functions" in :blush:
consists of sets and spans so the functor just needs to know how to send a set to an algebra, i.e. , and how to send a span to a span, i.e.
This is one of the beautiful things about working with instead of just :+1:
Yes, that is what I meant. A span in Set contains functions and I was wondering how those ended up being reflected in the resulting span of algebras.
One more thing, any function in can be sent to a span in so there is also a functor
There are actually two ways to send a function to a span. That one :point_of_information: is usually denoted You can also send to a span which is denoted
Here is one way that one could send a span of sets to a span of vector spaces (I don't know enough about algebras!):
-View a span as a diagram in Set
-Apply the free functor from Set to Vect to each part of that diagram
I think the resulting vector spaces will be isomorphic to the vector spaces generated by considering all linear maps from the original set to the field we're using.
I would probably try thinking of to get a handle on it.
An algebra can be thought of as a vector space together with a product . So if you forget about the product, is still a vector space constructed from a set.
Another way to get a vector space from a set is by taking formal linear combinations of elements of the set. This is denoted (I think - John recently explained this to me). is dual to
You can actually do this all with rather than the dual space . I chose the latter because, ultimately, the algebra will be 0-cochains. If you start with , you end up with chains. In our paper, we show both formulations.
I believe the "formal linear combinations" way of making a vector space from a set corresponds to applying the free functor I was mentioning.
David Egolf said:
I believe the "formal linear combinations" way of making a vector space from a set corresponds to applying the free functor I was mentioning.
There are TWO free functors :blush:
is also "free", i.e. obtained from [Edit: a the] forgetful functor :blush:
Every has a dual
I think I get the flavor at least! Thanks for taking the time to explain.
As I've said in your stream on imaging systems, the more I learn about spans, the more I love them and think they should be taught early in science / engineering. I'm trying to write up some notes "for Scientists and Engineers" to make this more accessible :+1:
Another way to say "span" is "multi-valued partial function"
So spans handle things in computation like "undefined".
Also, spans , unlike functions, can be reversed
So given a directed graph, as a span, , we also have the directed graph with edges reversed. These reverse graphs get sent to reversed -bimodules allowing us to define inner products, etc.
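Reversing a span just swaps its two legs, and for a graph-as-span this reverses every edge. A two-line sketch (encoding mine):

```python
# The "dagger" of a graph-as-span swaps the two legs of the span, which
# reverses every edge (graph encoded as a set of (source, target) pairs).

def reverse(edges):
    return {(t, s) for (s, t) in edges}

G = {("a", "b"), ("b", "c")}
print(reverse(G) == {("b", "a"), ("c", "b")})  # True
print(reverse(reverse(G)) == G)                # True: reversing is an involution
```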
So for every functor
,
we have another functor
with
and
which means we have a natural transformation
And now I am explaining my entire paper, so I should just finish writing it :joy:
I've been trying to figure out what the "nerve" of a category is (mentioned above by John Baez). I think it is relevant to what you are doing here. Here is an explanation from "A Leisurely Introduction to Simplicial Sets" that I thought was helpful:
Nerve
The nerve appears to spit out paths of various lengths, which is what the functor does above.
Thanks for sharing that :+1: I hadn't looked at that definition (at least not recently). It definitely seems related. It appears to be a different (less elegant - imo) way to say "Functor " :thinking:
As usual, the nLab is also pretty helpful :blush:
https://ncatlab.org/nlab/show/nerve
There, it has
It doesn't say so, but consider with . This is just a directed graph (as an "endospan") composed with itself twice.
Expressing this as a functor
is much cleaner :thinking:
So reviewing John's treasure map:
John Baez said:
- Start with a graph $G$.
- Apply the "free category on a graph" functor to get a category whose morphisms are paths in the graph $G$.
- Apply the "nerve" functor, which turns categories into simplicial sets. An $n$-simplex in the nerve will be a length-$n$ path in the graph $G$.
- Apply the "cochains" functor to create the dga of cochains on your simplicial set.
The functor
actually takes care of steps 1-3, i.e. we obtain a nerve, in one shot :+1:
That is what I call engineering efficiency :nerd:
The spans are secretly $n$-simplices.
It's also worth noting that article also later gives another definition of the nerve:
Nerve V2
in this case it's the right adjoint of a functor that embeds into the category of small categories.
To get $n$ composable arrows in a category (which is like a path of length $n$) we can take a functor from the category $[n] = 0 \to 1 \to \cdots \to n$. So, if we consider all functors from $[n]$ to the category, this is like all the paths of length $n$ in it.
I assume that "cochain" functor involves that degeneracy map and this is where our stuff differs. The s you obtain from that are not associative on cochains (which is something I've been blabbering about for decades but haven't been able to enunciate clearly).
Cool stuff :+1:
I wouldn't be surprised if there was some kind of adjunction relating what I'm doing to this nerve stuff :thinking:
I also have face and degeneracy maps, but I want these to be -bimodule morphisms so they do not touch the first or last vertex. They only touch internal vertices.
Eric Forgy said:
This also looks interesting: https://ncatlab.org/nlab/show/maybe+monad
I'm not yet ready to give up on this idea of defining a product of elements of a set like
but I need some nice diagram to express it :thinking:
As long as you admit that the multiplication map given by this formula is a partial function, there's nothing wrong with it.
But if you're mainly using this to study composition of paths in a graph, and how that operation is partially defined, it's probably better to just admit that paths in a graph are morphisms in a category. Composition of morphisms in a category is always partially defined, but in a very well-behaved way, so that you don't really need to bring in partial functions to study it.
Eric Forgy said:
is also "free", i.e. obtained from [Edit:
athe] forgetful functor :blush:
I don't know what "obtained from" means, but this functor is not the left adjoint of the forgetful functor , so we don't call it "free".
The "free" functor from sets to vector spaces is , since that's the left adjoint of the forgetful functor .
is just followed by taking duals.
Jacques Carette said:
I can't help but wonder if the nerve functor is well-behaved though.
The nerve of a category, you mean? (There are lots of "nerves".) Yes, I'd say that's well-behaved: equivalent categories have homotopy equivalent nerves.
John Baez said:
The "free" functor from sets to vector spaces is , since that's the left adjoint of the forgetful functor .
is just followed by taking duals.
Thanks for clarifying.
John Baez said:
But if you're mainly using this to study composition of paths in a graph, and how that operation is partially defined, it's probably better to just admit that paths in a graph are morphisms in a category. Composition of morphisms in a category is always partially defined, but in a very well-behaved way, so that you don't really need to bring in partial functions to study it.
Thanks. This does seem like the right way to go :+1:
Thinking of functors , I tend to focus on and and for directed edges and directed paths, it is natural to think of the "partial"ness as just rules for composition of morphisms, but I can't neglect the poor identity span . The rule I'm thinking about for product of vertices is probably just stemming from the composition of the identity span with itself. I'll think about it. Thanks for the tip :pray:
The only way to compose two vertices (0-paths) is if they are the same vertex.
Note: This looks relevant: https://golem.ph.utexas.edu/category/2014/05/categories_vs_algebras.html
I'm trying to understand the multiplication you want to put on graphs.
You gave this diagram:
multiplication
As we discussed, a span induces a graph, and so we can view the apex as corresponding to edges and the feet as vertices in some graph.
The top span then corresponds to some collection of edges that share common sources and targets. This is a graph where every edge has the same source as target. Further, every edge has the same source and target as every other edge. So, this is a graph on a single vertex with arrows going from and to that vertex.
The bottom span was formed by composing the identity span with itself, which is . John Baez notes above that this is an identity morphism in the category of spans (where we are viewing spans as morphisms). So, up to isomorphism, the bottom span is just the identity span:
multiplication V2
Viewing the bottom span as a graph, it is the graph on vertices having exactly one edge for each vertex, going from that vertex to itself.
So, a map from the top span to the bottom span must satisfy and . Since and send everything to , this means we need .
This multiplication function is then totally determined by a map , which selects a single vertex. The multiplication then takes any two edges and sends them to the unique edge in the bottom graph associated with the vertex .
So, we do have a kind of multiplication . For a given choice of , we get a multiplication rule that sends every pair of elements in to a single fixed element of .
This seems different than the rule you were proposing though, so maybe I'm thinking about this wrong.
Hi David. Thanks for thinking of this problem and your note :pray:
I drew those diagrams knowing they were incorrect and I was looking for the correct way to do it. Sorry I didn't make that more clear :pray:
I'm currently thinking about the identity span: . This can be thought of as "paths of length 0". Composing paths of length 0 results in paths of length 0 and the only way to compose a vertex with a vertex is if it is the same vertex
I'm also looking at the article I just posted on category algebras. There they define exactly the multiplication I am thinking about.
Eric Forgy said:
I'm currently thinking about the identity span: . This can be thought of as "paths of length 0". Composing paths of length 0 results in paths of length 0 and the only way to compose a vertex with a vertex is if it is the same vertex
Doesn't this specify a graph where there is one arrow starting and ending at each vertex? These would be paths of length 1.
For sure, corresponds to paths of length 0, but I see your point and don't have a good explanation :sweat_smile:
Given a directed graph (as a span), composing it with itself $n$ times gives the paths of length $n$.
What happens when $n = 0$?
When $n = 0$, we have the identity span, and I think of the identity span as giving the paths of length 0.
I don't know how to make this precise, but pretty sure it is true.
I think I get the idea though: You have parts of a graph that can't be stuck together (say the components of it). You want to make an analogy between this and the "perpendicularity" of basis vectors: in each case we have a collection of things that "go in different directions"?
For example, consider this graph from Wikipedia:
graph
It has three components. If we consider all the paths of length n, this is made up of three kinds of paths: Ones that start in the first component, ones that start in the second component, and ones that start in the third component. A vector space has a basis. All of its elements can be formed as linear combinations of the basis elements. There is no way to make elements generated by basis vector 1 from elements generated by basis vector 2.
Also, the number of paths that start in one component and end in a different component is zero. An orthogonal basis has inner product of zero between two different basis vectors. Maybe we can define an inner product between subgraphs of a graph based on the number of paths starting in one and ending in the other.
Maybe some analogy like this?
Maybe :blush:
On a related note, we can take the coproduct of two graphs to get the disjoint union of them.
The coproduct of two subspaces (for example those generated by two basis vectors) is their direct sum.
There are so many different ways to look at this. Conceptually, it is very simple and intuitive to me, but the hard part is exposition. A graph has stuff that can be multiplied, added together and tensored together. This stuff can be combined in certain ways to make it into a DGA (with inner product). A DGA with inner product is enough to write code and do physics simulations.
So starting with a directed graph, you can build a framework that allows you to do scientific computation. The rest is exposition.
Oh!
https://ncatlab.org/nlab/show/category+algebra#InTermsOfCompositionOfSpans
I was thinking a bit more about trying to put a sort of inner product on graphs. Have you considered using the intersection of graphs? If you are using a functor from graphs to vector spaces that maps intersections to intersections, that might be helpful. That is because the dimension of the intersection of the vector spaces spanned by two vectors and is if and are part of an orthonormal basis. That is, the dimension is one if and the dimension is zero otherwise, as long as and are part of an orthonormal basis.
(feel free to ignore this should you already be past this point)
I've spent many years trying to put physically meaningful inner products on graphs :blush:
The best way, imo, is what we did in our 2004 paper that I'm currently trying to re-express in CT language. It involves reversing edges in a directed graph. That is another reason why I am happy to be learning about spans: for every span, there is a related span with all edges reversed. There is a naive way to define the inner product that doesn't give anything physically meaningful, but once you've defined it the naive way, there is a cool way to obtain more physically meaningful inner products as a deformation of the naive one. The most obvious nontrivial inner product on a certain simple graph gives the Lorentzian inner product, and your edges all become lightlike in a way that relates to Sorkin's causal sets. It is all a beautiful story that I've been failing to tell the world :sweat_smile:
I'm trying to take things one step at a time, but I hope to eventually end up talking about "dagger compact categories from graphs". The 1-category is a dagger category and we know endomorphisms in correspond to graphs. I believe (need to confirm) that the 1-category (algebras and isomorphism classes of bimodules) is a dagger compact category, so the third morphism in my chain, i.e. gives dagger compact categories from graphs. Then I should be able to relate our old stuff to cool things like this:
I've been stuck the last several days though on the right way to express this "algebra of paths". nLab sends me to category algebra, which seems like the right direction, but I might need to "explode" because I want to peer into the set of spans and consider the product of individual directed edges as composition. So now my head is spinning thinking about "pointed spans" etc
Pointed spans bring me to displayed categories
Interesting! I had not heard of dagger categories before!
I found a cool way of thinking about inner products in dagger categories here:
inner product in dagger category
which refers to this paper.
Thanks for the reference :+1:
I feel like part of what I am trying to do is reinvent this :point_of_information: No wonder I am struggling :sweat_smile:
Ever since @Bob Coecke and Samson Abramsky's famous 2004 paper A categorical semantics of quantum protocols, dagger categories have become the standard framework for generalizing quantum mechanics using category theory. I gave an intro to some aspects of this stuff in 2004 here:
which was a paper written for philosophers of science.
It's all very cool. I'm still catching up after being away since around that time.
Now Bob is involved in a company, Cambridge Quantum Computing, that uses this stuff. They've hired some people who work on categorical quantum mechanics, like Ross Duncan. So this stuff is becoming quite real.
Nice slide from this:
Ah, Simon Willerton!
what’s the category Var in that picture?
This is a list of monoidal bicategories, not categories. The chart says what the objects, morphisms and 2-morphisms are.
Willerton sketches the description of Var here:
The 2-category Var has spaces as its objects, has objects of the derived category D(X×Y) — considered as integral kernels — as its 1-morphisms from X to Y, and has morphisms in the derived category as its 2-morphisms.
Var = varieties, or ...?
For details try that paper by Willerton; I think he said "spaces" in the short description because the idea of "the derived category of coherent sheaves" works for a variety of different kinds of spaces. (Pun intended)
Out of curiosity, is it possible that some of those can be expressed as for some sharing the same objects listed there, but simpler morphisms?
For example, I think
Do
have any kind of meaning?
Looking on the nlab, it sounds like we can make a 2-category $$\mathsf{Span}(C)$$ when $$C$$ has pullbacks.
Eric Forgy said:
For example, I think
I don't think so: a span of algebras involves three algebras, while a bimodule involves two algebras and a bimodule.
I think there should be a nice juicy relation between and $$\mathsf{Span(Alg)}$$, but I'd be kinda amazed (and delighted) if it were an equivalence of categories, or bicategories.
David Egolf said:
Looking on the nlab, it sounds like we can make a 2-category $$\mathsf{Span}(C)$$ when $$C$$ has pullbacks.
Yes. A bicategory, actually. (Sometimes the nLab uses 2-category to mean bicategory, but in $$\mathsf{Span}(C)$$ composition of 1-morphisms is not strictly associative, so it's a bicategory.)
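To make the pullback composition concrete, here is a minimal Python sketch (all names are illustrative, not from the thread): two spans of finite sets compose by taking the pairs of apex elements that agree in the middle, which is exactly the pullback. Composing through a one-element set reproduces the Cartesian product, as noted at the start of the thread.

```python
# Minimal sketch of span composition in Span(Set) via pullback.
# A span X <- S -> Y is stored as (S, f, g) with f: S -> X and g: S -> Y,
# both given as Python dicts. All names here are illustrative.

def compose_spans(span1, span2):
    """Compose X <-f- S -g-> Y with Y <-h- T -k-> Z.

    The apex of the composite is the pullback
    S x_Y T = {(s, t) | g(s) = h(t)}.
    """
    S, f, g = span1
    T, h, k = span2
    apex = [(s, t) for s in S for t in T if g[s] == h[t]]
    left = {(s, t): f[s] for (s, t) in apex}   # composite source map
    right = {(s, t): k[t] for (s, t) in apex}  # composite target map
    return apex, left, right

# Example: composing through a one-element set {*} gives the
# Cartesian product X x Z as the apex.
X = ['x1', 'x2']
Z = ['z1', 'z2', 'z3']
span1 = (X, {x: x for x in X}, {x: '*' for x in X})   # X <- X -> {*}
span2 = (Z, {z: '*' for z in Z}, {z: z for z in Z})   # {*} <- Z -> Z
apex, left, right = compose_spans(span1, span2)
print(len(apex))  # 6 = |X| * |Z|
```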
I was playing around with a bit and in my usual cowboy fashion, convinced myself, but we know the chance I'm wrong is high :blush:
For sure, involves three algebras, but I think those three -algebras could be simply , and i.e. an -bimodule is a span
with the third algebra product in given by
It made intuitive sense since , so I didn't question it too deeply. I'm pretty sure this is the case for the -bimodules I'm working with, i.e.
Then composition is
So now it sounds like you're not saying is equivalent to , but something weaker and more easy to believe: that there's a functor from to .
This makes sense to me because we can build it up like:
The problem is you aren't doing anything with the -bimodule.
Here's what I'd do.
I want to describe a functor from to .
So say $$M$$ is an $$(A,B)$$-bimodule, where $$A$$ and $$B$$ are algebras over some field and $$M$$ is a vector space over that field.
Saying that $$M$$ is a left $$A$$-module says exactly that we have an algebra homomorphism $$A \to \mathrm{End}(M)$$.
Saying that $$M$$ is a right $$B$$-module says exactly that we have an algebra homomorphism $$B^{\mathrm{op}} \to \mathrm{End}(M)$$.
So an $$(A,B)$$-bimodule gives a special sort of cospan of algebras: $$A \to \mathrm{End}(M) \leftarrow B^{\mathrm{op}}$$
Oh, I'm getting a cospan. That's what I should have said all along, not span.
I don't love this, but at least I'm using the bimodule , whereas you seemed to completely throw that out and only work with two algebras!
Maybe you were actually doing something very different than turning a bimodule into a span of algebras.
is an -bimodule :blush:
Okay, but it's a very special bimodule; you're not giving a recipe to turn bimodules into spans of algebras, nor are you giving a recipe to turn spans of algebras into bimodules.
You are giving a recipe, right now, to turn a pair of algebras into a bimodule of that pair of algebras.
When you said
Eric Forgy said:
For example, I think
I was hoping you'd at least have a functor going one way or the other between these two categories, but that's not what you're doing. Why? Because you're not giving a recipe to turn bimodules into spans of algebras, nor are you giving a recipe to turn spans of algebras into bimodules.
Yeah. That is what I'm working on now (and stuck). I have scribbled notes involving a functor
A natural way to try to get one of those is to use a functor from $$\mathsf{Set}$$ to $$\mathsf{Alg}$$. But the nicest functor like that is actually contravariant, so you wind up getting a functor from Span(Set) to Cospan(Alg).
For example, given a span of sets I want to send elements and to elements and defined by
such that are given by
and become like face maps in a cochain complex.
The idea is that I will have (I hope) a functor sending a span
to a span
This span being a bimodule once I sort out the actions (which is part of the motivation of this stream).
I know what I want the actions to be, i.e.
These are the actions I mentioned way above. The most natural way to think of this, i.e. what you suggested, is in terms of composition.
So I am chasing nLab definitions around related to category algebras.
I'm actually thinking I might need to "explode" my spans, i.e. consider so that each individual directed edge becomes a morphism and form a category algebra from that.
I think the elegant way to do what you're suggesting is this. Note that given a field $$K$$, the functor $$K^{(-)} \colon \mathsf{Set} \to \mathsf{Alg}$$
sending a set $$S$$ to the algebra $$K^S$$ is contravariant, as I mentioned. That is, a function $$f \colon S \to T$$ gives an algebra homomorphism $$f^* \colon K^T \to K^S$$, defined by $$(f^* \psi)(s) = \psi(f(s))$$
for $$s \in S$$. Thus, this functor maps a span of sets $$X \xleftarrow{f} S \xrightarrow{g} Y$$
to a cospan of algebras $$K^X \xrightarrow{f^*} K^S \xleftarrow{g^*} K^Y$$
In particular, any graph is an endospan of sets $$V \xleftarrow{s} E \xrightarrow{t} V$$,
so it gets sent to an endo-cospan of algebras $$K^V \xrightarrow{s^*} K^E \xleftarrow{t^*} K^V$$.
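Here is a small Python sketch of this functor on finite sets (illustrative code, not from the thread; the field is modeled by Python floats): an element of the function algebra is a dict of scalars, a function between sets is sent to precomposition, and precomposition preserves the pointwise product, so it really is an algebra homomorphism.

```python
# Sketch: the contravariant functor sending a finite set S to the
# function algebra K^S (pointwise operations), and a function f: S -> T
# to the precomposition homomorphism f*: K^T -> K^S.
# Illustrative code; K is modeled by Python floats.

def pullback_hom(f):
    """Given f: S -> T (a dict), return f*: K^T -> K^S."""
    def f_star(phi):           # phi: T -> K, a dict
        return {s: phi[t] for s, t in f.items()}
    return f_star

def pointwise_mul(phi, psi):
    """The product in a function algebra K^S."""
    return {s: phi[s] * psi[s] for s in phi}

# A graph as an endospan V <- E -> V: two edges a -> b and a loop at b.
V = ['a', 'b']
E = ['e1', 'e2', 'e3']
src = {'e1': 'a', 'e2': 'a', 'e3': 'b'}
tgt = {'e1': 'b', 'e2': 'b', 'e3': 'b'}

# The functor turns the span V <- E -> V into the cospan
# K^V -> K^E <- K^V given by src* and tgt*.
src_star = pullback_hom(src)
tgt_star = pullback_hom(tgt)

phi = {'a': 2.0, 'b': 3.0}   # an element of K^V
psi = {'a': 5.0, 'b': 7.0}

# src* is an algebra homomorphism: it preserves the pointwise product.
lhs = src_star(pointwise_mul(phi, psi))
rhs = pointwise_mul(src_star(phi), src_star(psi))
print(lhs == rhs)  # True
```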
Cool. Thanks. I'll think about this :+1:
This procedure gives a functor from $$\mathsf{Span(Set)}$$ to $$\mathsf{Cospan(Alg)}$$.
Trying to get a span of algebras from a span of sets is going against the tao of mathematics - I advise against that.
That definitely makes more sense than what I was trying to do with spans of algebras.
This is kind of cool and makes me feel I'm on the right track (thanks to you :pray: )
So with a directed graph (as a span) , let
Pulling back to gives
How to express in terms of basis elements ?
This should consist of the sum of all where and there could be more than one of them.
So let me introduce a notation
Then we have
where
if is empty.
Similarly,
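A small Python sketch of this basis bookkeeping (my own notation, not from the thread): pulling a vertex basis function back along the source or target map of a graph yields the sum of the basis functions of all edges hitting that vertex, and zero when there are none.

```python
# Sketch: expand the pullback of a vertex basis function along the
# source (or target) map of a graph in terms of edge basis functions:
#   src*(e_v) = sum of e_edge over all edges with source v,
#             = 0 if no edge has source v.
# Elements of the edge algebra are modeled as Counters of basis edges.

from collections import Counter

def pull_back_basis(point_map, v):
    """Pull the basis function of vertex v back along point_map: E -> V.

    Returns a Counter giving the coefficient of each edge basis
    function; the result is empty (i.e. zero) if no edge maps to v.
    """
    return Counter(e for e, w in point_map.items() if w == v)

# Example graph: two parallel edges a -> b and a loop at b.
src = {'e1': 'a', 'e2': 'a', 'e3': 'b'}
tgt = {'e1': 'b', 'e2': 'b', 'e3': 'b'}

print(pull_back_basis(src, 'a'))  # e1 + e2
print(pull_back_basis(tgt, 'b'))  # e1 + e2 + e3
print(pull_back_basis(tgt, 'a'))  # zero: no edge has target a
```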
This is interesting because I introduced this notation before when I was following your first course in network theory and converting everything to DDG. Specifically, I needed this to follow what you were doing with electrical networks.
But this just raises more questions in my mind, because I always looked at as a tensor product , but if it is related to a cospan, wouldn't it be closer to a coproduct? :thinking: I was just reading about biproducts :thinking:
Is it possible that ?
(I was just starting to get a handle on spans - now I need to think about cospans and starting with little intuition again)
Intuitively, it would make more sense to me if were equivalent to , but looking at that diagram :point_of_information: , if the apex wasn't labelled , I'd be tempted to call that :thinking:
This makes me think of biproducts again :thinking:
Finite Products are Biproducts in a Compact Closed Category
https://arxiv.org/abs/math/0604542
Eric Forgy said:
Is it possible that ?
I believe so: that's why I guessed that the functor gives a functor from $$\mathsf{Span(Set)}$$ to $$\mathsf{Cospan(Alg)}$$. For this idea to work, needs to map pullbacks to pushouts.
Here $$K^{(-)}$$ is my nickname for the contravariant functor from Set to Alg sending $$S$$ to $$K^S$$.
If I were better at category theory I'd just instantly know if maps pullbacks to pushouts.
A related fact is this: if you have any category C, and any object c∈C, the contravariant functor from C to Set sending x to the set hom(x,c) maps pullbacks to pushouts.
In fact it maps any sort of limit to the corresponding sort of colimit.
That's why I made my guess. But we're doing something a bit different, since although $$K^S$$ has hom(S,K) as its underlying set, we're treating $$K^S$$ as an algebra, not a mere set!!!
https://ncatlab.org/nlab/show/hom-functor#preservation_of_limits
Very cool. That feels like progress :+1:
So is it the case that sends a span (pullback) to the cospan (pushout) as sets? :thinking:
Then if I could show as sets, we can construct the category algebra from that, maybe, since they are morphisms?
I keep coming back to the - maybe misguided - idea that we want to consider pointed sets so that spans become spans i.e. each individual directed edge becomes a morphism. Then the category algebra of that seems like what I want.
Eric Forgy said:
So is it the case that sends a span (pullback) to the cospan (pushout) as sets? :thinking:
Yes, that's what I just said. But since I'm a mathematician, I said it more generally:
if you have any category C, and any object c∈C, the contravariant functor from C to Set sending x to the set hom(x,c) maps pullbacks to pushouts.
(was checking to make sure I understood :raised_hands: )
By the way, the categories we're dealing with - like Set and Alg - are not compact closed. The most famous compact closed category is the category of finite-dimensional vector spaces made monoidal with the usual tensor product $$\otimes$$.
The categories Set and Alg also do not have biproducts. CommAlg, the category of commutative algebras, has biproducts: is the biproduct of commutative algebras. Note that your algebras are commutative, and you might want to take advantage of that.
Yeah. I am always (so far) dealing with commutative (associative, unital) algebras :+1:
Actually scratch that - it's bullshit. The category of commutative algebras does NOT have biproducts.
It has products and coproducts, but they are different: in a category with biproducts (like Vect) they are the same.
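A quick dimension count makes the difference concrete for the function algebras in this thread (a sketch under my own naming): for finite sets, the product of $$K^S$$ and $$K^T$$ in commutative algebras is $$K^{S \sqcup T}$$ while their coproduct is $$K^{S \times T}$$, and $$|S| + |T|$$ and $$|S| \cdot |T|$$ generally disagree, so the two constructions cannot coincide.

```python
# Sketch: for finite sets S and T,
#   dim(K^S x K^T)   = |S| + |T|   (product of commutative algebras)
#   dim(K^S (x) K^T) = |S| * |T|   (their coproduct, a tensor product)
# Since these generally disagree, product and coproduct differ,
# which is one way to see CommAlg has no biproducts.

def dim_product(S, T):
    """Dimension of K^S x K^T, i.e. of K^(S disjoint-union T)."""
    return len(S) + len(T)

def dim_coproduct(S, T):
    """Dimension of K^S tensor K^T, i.e. of K^(S x T)."""
    return len(S) * len(T)

S = {'a', 'b', 'c'}
T = {'x', 'y'}
print(dim_product(S, T), dim_coproduct(S, T))  # 5 6
```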
Is it possible to consider an -bimodule an algebra?
What I'm thinking is that if we have , could we consider a product
? Not sure if that makes any sense.
I'm still wanting to consider a morphism as a bimodule.
(Maybe .)
If and , it's tempting to work with spans (as sets?) instead of spans (as sets?) and then consider category algebras.
If we do this, maybe we don't even need . If I understand,
would consist of objects for and (isomorphism classes of) spans
i.e. directed edges are morphisms. If and are understood, directed edges could be written as spans
Composition of directed edges and follows
The reason I consider this idea is that if directed edges are morphisms, then the category algebra gives the product I wanted (I think).
Similarly, vertices are also morphisms, i.e.
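This composition rule is easy to sketch in code (an illustrative path-algebra product on basis elements, in the spirit of quiver path algebras; my own code, not from the thread): basis paths multiply by concatenation when composable and give zero otherwise, with vertices acting as identities.

```python
# Sketch of the product on basis elements of a path algebra (the
# category algebra of the free category on a graph). A basis path is a
# tuple of vertices (v0, v1, ..., vn); a vertex v is the empty path
# (v,), which acts as an identity; a directed edge from a to b is (a, b).
# Illustrative code assuming at most one edge per ordered vertex pair.

def multiply_paths(p, q):
    """Concatenate basis paths p, q if end(p) == start(q), else zero (None)."""
    if p[-1] != q[0]:
        return None      # non-composable basis paths multiply to zero
    return p + q[1:]

# Edges compose like morphisms:
print(multiply_paths(('a', 'b'), ('b', 'c')))  # ('a', 'b', 'c')
print(multiply_paths(('a', 'b'), ('c', 'd')))  # None, i.e. zero

# Vertices are identity morphisms:
print(multiply_paths(('a',), ('a', 'b')))      # ('a', 'b')
print(multiply_paths(('a', 'b'), ('b',)))      # ('a', 'b')
```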
Eric Forgy said:
Is it possible to consider an -bimodule an algebra?
What I'm thinking is that if we have , could we consider a product
? Not sure if that makes any sense.
Not really: one problem is that you're only telling us how to multiply and in the special case where , so it's not really a well-defined product!
The other problem is that everything about your formula for multiplication is just multiplication in and in , nothing about the bimodule.
So you practically might as well just leave out the bimodule and admit you're making into an algebra - that is something you can really do.
If you grab a book like Rings and Categories of Modules you can learn what's possible (and thus what's not).
Quivers and Path Algebras
https://sites.math.washington.edu/~julia/teaching/Sem_Fall2010/Jim_notes.pdf
https://ncatlab.org/nlab/show/category+algebra#for_bare_categories_discrete_geometry