You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Does anyone know where the distinction "Structure vs. Properties" appears, and is explained, in print? I read something about it ages ago in some text from which I didn't understand much - I don't remember where it was, sorry - and I invented my own meaning for it... in my meaning, when we express a functor F in Type Theory it becomes (F_0, F_1, respid, respcomp), where F_0 and F_1 - its actions on objects and morphisms - are "structure", and respid and respcomp, which ensure that F respects identities and compositions, are "properties". Roughly, the last two components are "properties" because they mention equalities of morphisms, and the first two components are "structure" because they don't...
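Eduardo's tuple reading of a functor can be made runnable. The sketch below (encoding entirely mine, not from the thread) packages a functor on the walking-arrow category as "structure" (F0, F1) plus two checkable "properties" (resp_id, resp_comp), which are exactly the components that mention equalities of morphisms.

```python
# The walking-arrow category: objects A, B and one non-identity morphism f: A -> B.
objects = {"A", "B"}
morphisms = {"idA": ("A", "A"), "idB": ("B", "B"), "f": ("A", "B")}
identity = {"A": "idA", "B": "idB"}

def compose(g, h):
    """The composite g . h (first h, then g), when it exists."""
    (hs, ht), (gs, gt) = morphisms[h], morphisms[g]
    assert ht == gs, "not composable"
    if h == identity[hs]:
        return g
    if g == identity[gt]:
        return h
    raise ValueError("no such composite in the walking arrow")

# The "structure" part of a functor: where objects and morphisms go.
F0 = {"A": "A", "B": "B"}
F1 = {"idA": "idA", "idB": "idB", "f": "f"}

# The "property" parts: equations between morphisms, checked rather than given.
def resp_id(F0, F1):
    return all(F1[identity[x]] == identity[F0[x]] for x in objects)

def resp_comp(F1):
    pairs = [(g, h) for g in morphisms for h in morphisms
             if morphisms[h][1] == morphisms[g][0]]
    return all(F1[compose(g, h)] == compose(F1[g], F1[h]) for g, h in pairs)

assert resp_id(F0, F1) and resp_comp(F1)
```

The split mirrors the message above: F0 and F1 are data you must supply, while resp_id and resp_comp are equations you can only verify.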
I don't know a reference, but the general idea behind this distinction is that a property of a mathematical object is something that exists uniquely, when it does exist, and a structure is something extra added to the object, not generally unique. So a monoidal product is an extra structure on a category, because a category can generally have many different monoidal products, whereas the categorical product is a property, because they are unique up to isomorphism whenever they exist.
https://ncatlab.org/nlab/show/stuff,+structure,+property ?
I learned the yoga of "stuff, structure and properties" from James Dolan, and explained it in detail here:
We also explain how this hierarchy continues further.
This topic should be in #learning: recommendations or #learning: basic questions.
I don't hear “stuff” used much. Does it just mean h-level 3 or above, or does it also imply something about size, or something else entirely?
Ah, yeah, looks like in your thing it's about h-levels.
Yes, but all this was written before "h-levels" were cool
People in the know use "stuff" as appropriate: for example a graph of the sort category theorists like (a quiver) is a set equipped with extra stuff, while a graph of the sort graph theorists like (a simple graph) is a set equipped with extra structure.
Thanks!!!
Btw, I just found where I heard of "Structure vs. Properties" the first time...
It was on "On property-like structures", by G. M. Kelly and Stephen Lack:
http://www.tac.mta.ca/tac/volumes/1997/n9/3-09abs.html
Yes, I know that one - it explores a nice issue.
Eduardo Ochs said:
Does anyone know where the distinction "Structure vs. Properties" appears, and is explained, in print? I read something about it ages ago in some text from which I didn't understand much - I don't remember where it was, sorry - and I invented my own meaning for it... in my meaning, when we express a functor F in Type Theory it becomes (F_0, F_1, respid, respcomp), where F_0 and F_1 - its actions on objects and morphisms - are "structure", and respid and respcomp, which ensure that F respects identities and compositions, are "properties". Roughly, the last two components are "properties" because they mention equalities of morphisms, and the first two components are "structure" because they don't...
So we had a paper written a few years ago where we asked ourselves whether compact closure is a structure or a property. The difference for us was that a structure is something that you can impose (e.g. Set can have different monoidal structures on itself), while a property is somewhat canonical. I don't know if this is the distinction you want. In any case, the paper is this one: https://arxiv.org/abs/1803.00708
John Baez said:
I learned the yoga of "stuff, structure and properties" from James Dolan, and explained it in detail here:
- John Baez and Mike Shulman, Lectures on n-categories and cohomology, Section 2.4: stuff, structure, and properties, in Towards Higher Categories, eds. John Baez and Peter May, Springer, Berlin, 2010, pp. 1-68.
I find this discussion in terms of forgetful functors difficult to get a grip on, but I think there is another viewpoint using the left adjoint functor, the one that freely constructs a new category that has the specified stuff/structure/properties. In this case, stuff corresponds to freely adjoining objects, structure freely adjoins morphisms, and properties correspond to adjoining equations between these morphisms. I.e. stuff is 0-cells, structure is 1-cells, properties are 2-cells. This seems much more intuitive to me than using forgetful functors, because that already assumes you know what it is that you are about to forget. Also, it makes clear how this should generalize to higher dimensions. (I hope I am not too confused here...)
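This free-construction reading can be sketched one level down, for monoids (example and names mine): the generators a, b play the role of freely adjoined structure, and an imposed equation ab = ba plays the role of a property. Below, the monoid presented by ⟨a, b | ab = ba⟩ is computed via a normal form that sorts letters.

```python
def normal_form(word):
    """Normal form in the commutative monoid <a, b | ab = ba>: sort the letters."""
    return "".join(sorted(word))

def mult(u, v):
    """Multiplication in the presented monoid: concatenate, then normalize."""
    return normal_form(u + v)

assert mult("a", "b") == mult("b", "a")   # the imposed relation (a "property") holds
assert mult("ab", "a") == "aab"           # a concrete composite
assert normal_form("") == ""              # the empty word is the unit
```

Adjoining the relation is what collapses the two words ab and ba; without it (the free monoid) they would stay distinct.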
Fabrizio Genovese said:
Eduardo Ochs said:
Does anyone know where the distinction "Structure vs. Properties" appears, and is explained, in print? I read something about it ages ago in some text from which I didn't understand much - I don't remember where it was, sorry - and I invented my own meaning for it... in my meaning, when we express a functor F in Type Theory it becomes (F_0, F_1, respid, respcomp), where F_0 and F_1 - its actions on objects and morphisms - are "structure", and respid and respcomp, which ensure that F respects identities and compositions, are "properties". Roughly, the last two components are "properties" because they mention equalities of morphisms, and the first two components are "structure" because they don't...
So we had a paper written a few years ago where we asked ourselves whether compact closure is a structure or a property. The difference for us was that a structure is something that you can impose (e.g. Set can have different monoidal structures on itself), while a property is somewhat canonical. I don't know if this is the distinction you want. In any case, the paper is this one: https://arxiv.org/abs/1803.00708
The way I think of it is that properties are unique structures.
The key to distinguishing "stuff," "structure" and "property" in a mathematically rigorous way is to examine properties of the forgetful functor from the (n-)category of things with more stuff, structure or properties to the (n-)category of things with less.
For example at the 1-categorical level one is forgetting "at most properties" (not structure or stuff) if the forgetful functor is full and faithful.
All this is explained a lot more carefully in Section 2.4 of Lectures on n-Categories and Cohomology. But there's even more to say than what's in there!
(I'll take the opportunity to say that I read the Lectures on n-Categories and Cohomology for the first time a few weeks ago and that I should have done so much earlier! Really enlightening stuff -- thanks @John Baez!)
Is there a preferred notation for higher stuff? Like n-stuff, with either 0-, 1-, 2-, or 3-stuff being ordinary stuff
Thanks, Amar! I've always felt this paper was a lot of fun.
Numbering conventions are often annoying in n-category theory. There's not a standard convention for "n-stuff".
Joe Moeller said:
Fabrizio Genovese said:
Eduardo Ochs said:
Does anyone know where the distinction "Structure vs. Properties" appears, and is explained, in print? I read something about it ages ago in some text from which I didn't understand much - I don't remember where it was, sorry - and I invented my own meaning for it... in my meaning, when we express a functor F in Type Theory it becomes (F_0, F_1, respid, respcomp), where F_0 and F_1 - its actions on objects and morphisms - are "structure", and respid and respcomp, which ensure that F respects identities and compositions, are "properties". Roughly, the last two components are "properties" because they mention equalities of morphisms, and the first two components are "structure" because they don't...
So we had a paper written a few years ago where we asked ourselves whether compact closure is a structure or a property. The difference for us was that a structure is something that you can impose (e.g. Set can have different monoidal structures on itself), while a property is somewhat canonical. I don't know if this is the distinction you want. In any case, the paper is this one: https://arxiv.org/abs/1803.00708
The way I think of it is that properties are unique structures.
This is precisely our point of view in the paper :slight_smile:
Does that mean that this notion is different from the 'stuff, structure, properties' classification, because the latter has "property-like structure"?
E.g. the unit of a monoid is property-like structure, I think, because it is necessarily unique.
Yes, so "property" here really means that it follows canonically. For instance, the cartesian product is a property of the category of sets and functions, but a monoidal structure on sets and functions is a structure because it's not unique
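A brute-force illustration of this non-uniqueness, one categorical level down (example mine): a single small set carries two different monoid structures, so "being a monoid" is genuinely extra data on the carrier rather than a property of it.

```python
carrier = range(4)

def is_monoid(op, unit):
    """Check associativity and the two-sided unit law on the finite carrier."""
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a in carrier for b in carrier for c in carrier)
    unital = all(op(unit, a) == a == op(a, unit) for a in carrier)
    return assoc and unital

add = lambda a, b: (a + b) % 4   # addition mod 4, unit 0
mul = lambda a, b: (a * b) % 4   # multiplication mod 4, unit 1

assert is_monoid(add, 0)
assert is_monoid(mul, 1)
assert add(1, 1) != mul(1, 1)    # genuinely different structures on the same set
```

The analogous statement for categories is the one in the message above: Set carries both the cartesian and the cocartesian monoidal structure, among others.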
I'll again argue that concepts like "property, structure, and stuff" only make rigorous sense relative to a choice of functor U: X → Y, where X is the category of things with more property, structure, and stuff and Y is the category of things with less.
So for example if we're talking about "cartesian products", we can choose to study the forgetful functor from the 2-category of categories with chosen finite products to the 2-category of categories - if that's what you're interested in.
I think that what we are calling property here corresponds to that functor being injective on objects
Here we see that for a given category there may be no category with chosen finite products lying over it, or there may be many.
However, if there are many, they are all equivalent.
So yes, the forgetful functor is 'essentially injective on objects'.
Yes, this seems to me the right way to generally capture this idea. We were interested in a very restricted class of examples, so we never got to this level of generality :slight_smile:
But Jim Dolan and I tended to focus on whether the forgetful functor is faithful, i.e. whether there can be truly different ways to make a functor into a finite-product-preserving functor, or whether all such ways (if any exist) are naturally isomorphic.
And the answer is: yes, it's faithful.
Also, the question of whether it's full, i.e. whether every functor between categories with finite products is finite-product-preserving.
And here the answer is: no.
That makes the forgetful functor from categories-with-finite-products to Cat a bit like the forgetful functor from Grp to Set - namely, not every function between the underlying sets of groups is automatically a homomorphism.
But the forgetful functor from Grp to Set is not essentially injective: we can often make a set into a group in many ways.
This "conflict" - being essentially injective but not full - probably accounts for a lot of everyone's queasiness about whether "having finite products" is a structure or a property of categories. It's a combination that doesn't show up often in set-based algebraic gadgets like groups or rings.
But the unit of a monoid is another nice case to think about. Think about the forgetful functor from Mon to SemiGrp, where you forget the unit.
Answering these questions is a way to confront the slippery nature of the question "is the unit of a monoid property or structure?"
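The slipperiness of this case can be probed concretely (encoding mine): a two-sided unit in a semigroup is unique whenever it exists - which is what makes it property-like - yet some semigroups have none, so it is not a mere property either.

```python
def units(carrier, op):
    """All two-sided units of a finite semigroup. At most one can exist,
    since e = e*e' = e' for any two units e, e'."""
    return [e for e in carrier
            if all(op(e, a) == a == op(a, e) for a in carrier)]

# (Z/4, + mod 4) is a monoid: exactly one unit, namely 0.
assert units(range(4), lambda a, b: (a + b) % 4) == [0]

# The left-projection semigroup (a * b = a) is associative but has no unit.
assert units(range(2), lambda a, b: a) == []
```

So equipping a semigroup with a unit never involves a choice, but it can fail to be possible at all.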
Fabrizio Genovese said:
Eduardo Ochs said:
Does anyone know where the distinction "Structure vs. Properties" appears, and is explained, in print? I read something about it ages ago in some text from which I didn't understand much - I don't remember where it was, sorry - and I invented my own meaning for it... in my meaning, when we express a functor F in Type Theory it becomes (F_0, F_1, respid, respcomp), where F_0 and F_1 - its actions on objects and morphisms - are "structure", and respid and respcomp, which ensure that F respects identities and compositions, are "properties". Roughly, the last two components are "properties" because they mention equalities of morphisms, and the first two components are "structure" because they don't...
So we had a paper written a few years ago where we asked ourselves whether compact closure is a structure or a property. The difference for us was that a structure is something that you can impose (e.g. Set can have different monoidal structures on itself), while a property is somewhat canonical. I don't know if this is the distinction you want. In any case, the paper is this one: https://arxiv.org/abs/1803.00708
Nifty! Thanks!!! =)
Is the forgetful functor from Mon to SemiGrp considered essentially injective? It seems to have a different nature from the forgetful functors from categories-with-finite-products to Cat and from Grp to Set, in that the induced functor is still not full.
So, in something like HoTT, you have something like Mon ≃ Σ (S : SemiGrp), isUnital(S), where the first projection forgets the unit structure that is equipped to an underlying semigroup to form a given monoid. Then you ask how "connected" this first projection is?
Or is that insufficient, because it isn't considering homomorphisms?
n-connectedness seems like it's probably related to n-essentially surjective stuff talked about in the nlab article. I guess the question is whether it's good enough to talk about the groupoids of equivalent things instead of the categories of things.
Khoi Nguyen said:
Is the forgetful functor from Mon to SemiGrp considered essentially injective?
What's ? If you'd written I'd know you were talking about the category of pointed sets.
By the way, a functor is either essentially injective or not; we don't need to decide whether to "consider" it essentially injective.
The forgetful functor from the category of pointed sets to the category of sets is essentially injective, since two pointed sets are isomorphic if their underlying sets are isomorphic.
(However, not every isomorphism between the underlying sets comes from an isomorphism of the pointed sets - since not all bijections map the chosen point to the chosen point.)
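John's two observations about pointed sets can be counted out on a three-element example (code mine): the two pointed sets are isomorphic, but most bijections of the underlying sets fail to preserve the point.

```python
from itertools import permutations

def bijections(n):
    """All bijections {0..n-1} -> {0..n-1}, as tuples with perm[i] = image of i."""
    return list(permutations(range(n)))

n, p, q = 3, 0, 2                 # two pointed sets: ({0,1,2}, 0) and ({0,1,2}, 2)
all_bijections = bijections(n)
pointed = [b for b in all_bijections if b[p] == q]   # those sending point to point

assert len(all_bijections) == 6   # 3! bijections of the underlying sets...
assert len(pointed) == 2          # ...but only 2! are pointed-set isomorphisms
assert pointed                    # still, at least one exists: the pointed sets ARE isomorphic
```

Essential injectivity only needs *some* isomorphism to lift, which is why it coexists with most bijections failing to lift.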
John Baez said:
What's ? If you'd written I'd know you were talking about the category of pointed sets.
Yes, that's what I meant; I was following the notation on the nLab.
John Baez said:
By the way, a functor is either essentially injective or not; we don't need to decide whether to "consider" it essentially injective.
I wanted to make sure if I understood the definition correctly.
John Baez said:
(However, not every isomorphism between the underlying sets comes from an isomorphism of the pointed sets - since not all bijections map the chosen point to the chosen point.)
Thanks for confirming what I found out!
I just want to mention that the forgetful functors from monoids to semigroups and from pointed sets to sets are both "full on isomorphisms".
Oh, maybe that means the connectedness version with groupoids doesn't work.
Because it can't distinguish monoids from semigroups in the right way?
Khoi Nguyen said:
I just want to mention that the forgetful functors from monoids to semigroups and from pointed sets to sets are both "full on isomorphisms".
Yes, that's another nice similarity!
So, is property-like structure necessarily preserved by isomorphisms of the underlying category? Or are these just accidental cases?
I suppose the idea of property-like structure is that the possible structure is determined by the underlying things, so if the underlying things are equivalent, they must determine the same structure?
So you may need something laxer than equivalence to observe the difference between a property and property-like structure.
Right now all I want to say is that the Lack-Kelly theory of property-like structures is very much 2-categorical in spirit. It's worth reading their paper.
I don't think it really addresses the case of the forgetful functor from monoids to semigroups.
There are a lot of subtleties left to explore, I think!
Hmm.
There are scenarios where I think the functor formulation of stuff/structure/property maybe doesn't apply (yet?). For instance, fractals. I think most people who work on fractals agree there's clearly no definition that captures everything we think of as fractals and excludes everything we agree isn't a fractal. So being a fractal appears to definitely be a property, and probably a mathematical one, but we can't say that with a functor.
it's hard to say that we can't describe something with a functor when it has no precise definition
That's precisely my point.
It has no precise definition despite both being a property and being mathematical in nature.
If you make a choice of some particular sort of fractal, like iterated function systems, then it really changes the situation. For one, picking an IFS which generates a given fractal should, I think, be a structure.
I suppose what I mean is that it's not reasonable to call something a property if we can't say objectively whether something has that property or not
I don't yet agree with that.
I do agree that this is a point worth debating.
We need vague words in math because you can't be precise all the time, like when you're starting to come up with new ideas. So it's good that "fractal" is vague. There are lots of precise concepts like "metric space with nonintegral Hausdorff dimension", which capture some aspects of fractality, but hovering over all of them is the vague concept of "fractal".
I don't think this has much to do with the discussion of "property vs. structure", though.
I'm vaguely uncomfortable about "forgetting stuff". In my little plan to organize types by canonical embedding, for example, Integer is canonically embedded in Rational. I make the rule that lower types inherit all structure and properties from above, so that Integer acquires the denominator structure in the obvious way. Going up seems like a sort of forgetting: the integer forgets that it had denominator 1. And as you go up, forgetting more and more properties and structure, you need more and more data. So, in my brain, forgetting is associated with having more "stuff". I guess that means I'm in the wrong section, but it seems very close.
There is a 1-categorical conclusion to be drawn from the Kelly--Lack paper on property-like structures (see Section 7 therein). Namely, a functor U : A → B can be said to witness the objects of A as being objects of B equipped with a property-like structure iff U is faithful and full on isomorphisms. (The reader may enjoy finding a better way to phrase this.) Such functors are very nice: they are precisely the pseudomonic functors, i.e. the bicategorical/homotopy monomorphisms in the 2-category of categories.
discusses property-like structure = pseudomonic forgetful functor
That's very nice; I'd forgotten or never known they said that. But that's just what we were stumbling into here in the case of the forgetful functor from the category of monoids to the category of semigroups: a faithful functor that's full on isomorphisms... but not full. If it were faithful and full we'd say it "forgets at most properties". If it were merely faithful we'd say it "forgets at most structure". But this is perched halfway between.
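A minimal instance of this halfway-perched behaviour, on the two-element multiplicative monoid (example mine): there is a semigroup endomorphism that is not a monoid map - so the forgetful functor is not full - yet every semigroup isomorphism automatically preserves the unit, witnessing fullness on isomorphisms.

```python
from itertools import permutations

carrier = [0, 1]
op = lambda a, b: a * b          # multiplication on {0, 1}; the unit is 1

def is_sgp_hom(f):
    """Is the map f (a dict) a semigroup homomorphism?"""
    return all(f[op(a, b)] == op(f[a], f[b]) for a in carrier for b in carrier)

const0 = {0: 0, 1: 0}
assert is_sgp_hom(const0)        # a perfectly good semigroup map...
assert const0[1] != 1            # ...that fails to preserve the unit: not full

# Every bijective semigroup endomorphism preserves the unit: full on isos.
for perm in permutations(carrier):
    f = dict(zip(carrier, perm))
    if is_sgp_hom(f):
        assert f[1] == 1
```

So already on two elements the functor from monoids to semigroups is neither "forgets at most properties" nor merely "forgets structure" in the naive sense.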
Robert Smart said:
I'm vaguely uncomfortable about "forgetting stuff". In my little plan to organize types by canonical embedding, for example, Integer is canonically embedded in Rational. I make the rule that lower types inherit all structure and properties from above, so that Integer acquires the denominator structure in the obvious way. Going up seems like a sort of forgetting: the integer forgets that it had denominator 1. And as you go up, forgetting more and more properties and structure, you need more and more data. So, in my brain, forgetting is associated with having more "stuff". I guess that means I'm in the wrong section, but it seems very close.
The map from Integer to Rational is not a functor between categories (as far as I can tell), so the current discussion of forgetful functors and whether they forget "properties, structure or stuff" is not directly relevant.
Here's a functor that "forgets stuff". There's a category Graph where objects are graphs - a graph being a set of vertices, a set of edges, and two maps assigning to each edge its "source" and "target" vertex. There's a category Set where the objects are sets. There's a forgetful functor U: Graph → Set that sends each graph to its set of vertices. This "forgets stuff" in the technical sense: it's not faithful. And it "forgets stuff" in the intuitive sense: it forgets the set of edges of a graph.
@John Baez @Robert Smart
I think we can understand the map from integers to rationals (viewed as posets) as a functor E. It happens to have both a left adjoint C (ceiling) and a right adjoint F (floor). It seems to me that C and F, being faithful and surjective on objects, forget purely structure, and that E, being full and faithful but not surjective on objects, forgets only properties (like the property of being integral).
Does this seem right? Unlike typical "forgetful" functors, not all of these are right adjoints and not all of these are injective on objects. So maybe the analogy isn't useful.
Also, the functors are not monoidal (wrt addition), so that's another flaw.
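The adjoint triple ceiling ⊣ inclusion ⊣ floor for ℤ ↪ ℚ can be spot-checked numerically (samples mine): for posets the hom-set conditions are just the inequalities ceil(q) ≤ n ⟺ q ≤ n and n ≤ floor(q) ⟺ n ≤ q.

```python
import math
from fractions import Fraction

samples_q = [Fraction(3, 2), Fraction(-7, 3), Fraction(4, 1)]
samples_n = range(-4, 5)

for q in samples_q:
    for n in samples_n:
        # ceil is left adjoint to the inclusion Z -> Q:
        assert (math.ceil(q) <= n) == (q <= n)
        # floor is right adjoint to the inclusion:
        assert (n <= math.floor(q)) == (n <= q)
```

Note the asymmetry: putting floor on the left would fail already at q = 3/2, n = 1, since floor(3/2) ≤ 1 but 3/2 ≰ 1.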
Okay, that's actually a pretty nice way to think of the map from integers to rationals as a functor that forgets only properties. Thanks!
Thanks folks. I'm less confused. One thing I looked at when trying to understand all this was Toby Bartels' polynomials.pdf. For some reason Chrome's PDF reader can't handle it and the characters and spacing come out slightly off; if you read it in Adobe Reader it's fine - in case anyone else has this problem.
I'm kind of curious what a nice HoTT characterization of this would be, if there is one. I looked more at essential k-surjectivity and came to the conclusion that it's not the same as (k-2)-connectedness. It seems more like (-1)-connectedness (surjectivity) of the k-th path level. But I'm not sure if focusing on the surjectivity version is ideal in that setting.
I guess maybe the core there is that the fibers of the map have all the information you want to know about what was thrown away, and all the characterizations are just different ways of probing pieces to talk about. So just knowing the (smallest) h-level of the fibers tells you the 'most' forgotten. And that might also give you a way of talking in terms of equipping instead of forgetting.
Maybe that's the content of that other article linked here. :)
Anyhow, if you have a map f : A → B, then A ≃ Σ (b : B), fib_f(b), so you can view A values as B values equipped with whatever is in those fibers. If the fibers are contractible, then f is an equivalence. If they're hprops, then the most you can hope for is to 'filter' based on a property. Etc.
This can only classify property-like structure as a property, though, I think.
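This fiberwise reading has a direct set-level shadow (encoding mine): a map f : A → B exhibits A as the disjoint union, over b in B, of the fibers f⁻¹(b).

```python
A = ["x", "y", "z"]
B = [0, 1]
f = {"x": 0, "y": 0, "z": 1}

# Compute the fibers and reassemble A as Sigma_{b : B} fib_f(b).
fibers = {b: [a for a in A if f[a] == b] for b in B}
total = [(b, a) for b in B for a in fibers[b]]

assert fibers == {0: ["x", "y"], 1: ["z"]}
assert len(total) == len(A)
assert sorted(a for _, a in total) == sorted(A)   # the equivalence A ~ Sigma ...
```

Here the fiber over 0 is a two-element set ("stuff" was forgotten), whereas a map with subsingleton fibers would only be filtering by a property.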
It still seems to me that it makes some sense to apply SSP to unique things. For example the natural numbers:
We have some stuff (words of booleans are popular).
We have some structure: Zero picks out one entity in our stuff; Every entity has a single Succ "morphism" going to another element, and each of these has a Pred morphism that is an inverse.
We have some properties: (NoLoop) Zero is not the target of any Succ; (Induction) for any dependent predicate P(n), P(Zero) and forall(n)(P(n) implies P(Succ(n))) together imply forall(n)(P(n)).
I guess it makes a boring category since it only has one object up to isomorphism. Still, one could think about forgetting properties. If you forget NoLoop then finite loops also come in. If you forget Induction then you can have extra loops on the side of the main chain of numbers.
@Dan Doel Right: plain HoTT is about higher groupoids, so it doesn't see anything that distinguishes between invertible and noninvertible morphisms. But in that context, a map that forgets only $n$-stuff is equivalently an $n$-truncated map, i.e. one whose fibers are $n$-types.
Okay.
I guess in a directed version, it would be clear that when you equip a semigroup to get a monoid, the equipment looks higher-dimensional than it does in HoTT?
Or, if it had some other uniform structure on types than paths. I was thinking, based on the recent paper on parametricity/logical relations in wider mathematics, that perhaps you could characterize property/structure/stuff appropriately with that, and then perhaps one of the theories with both paths and relations could talk about it.
Yes, I guess, in that any directed type theory should have a way of talking about directed morphisms...
I guess the problem with the Cavallo-Harper cubical parametricity, for a mathematician, is that it refutes excluded middle. So there'd need to be some mechanism in the theory for not everything to be parametric, like multiple modes, or the cohesion-inspired stuff from Nuyts-Vezzosi-Devriese.
I didn't quite follow what parametricity has to do with stuff-structure-properties.
This paper suggests that parametricity and associated machinery can serve as the laxer notion of a correspondence between two structures without being directional (so, there would be no need to figure out how to do variance like you are).
So 'property like structure' would be detectable as higher "bridge" dimension than "path" dimension, or something, maybe.
I'm probably too much of a category theorist to think that that will capture something useful. (-:
A bridge between semigroups would not be required to relate unital values if they exist, I think.
Well actually, I kind of wonder if it's related to the new paper you worked on. :) The paper I just linked suggests that the intuitive descriptions initially given for naturality and parametricity were the same. Naturality developed using (directed) homomorphisms, and parametricity using something more like structure-respecting relations.
So maybe an appropriate signature in a theory with parametricity automatically gains (something much closer to) an appropriate notion of 'indiscernibility'? Because the parts of the signature are specified in such a way that they must be natural, and there is no need to write all the conditions that specify what naturality means. I don't know.
The category-theorist in me is doubtful. (-: The extra freedom in choosing relations that gives you parametricity also seems to make them too general to be useful for this sort of thing.
oh man i need to catch up on this topic
Hmm, rereading the parametricity paper, I guess one of its arguments is that logical relations would actually not always agree with homomorphisms on properties vs. structure (I think). For instance, being a group is a property of a monoid, because every monoid homomorphism must preserve inverses if they exist, right? But logical relations between monoids needn't relate inverses (says the paper), so the inverse would appear as property-like structure.
I think that's more or less what I meant: relations are so flabby that they don't see category-theoretic structure in the correct way.
When rels compose, they really wish they could compose as spans, naturally and happily and without constraints. But they are not allowed by their rel-igion :-(
much as a colimit in Set wishes it could be [op?]lax :eyes:
@Joachim Kock I don't think the flabbiness of the relations used in parametricity depends on whether they are proof-relevant or not.
Mike Shulman said:
Joachim Kock I don't think the flabbiness of the relations used in parametricity depends on whether they are proof-relevant or not.
Fair enough. I confess I don't quite know what parametricity is, so probably I should not muddle the discussion with random middle-of-the-night remarks. I can still pretend it was only a joke, though :-)
No, they don't need to be proof irrelevant. I'm not sure they need to be 'relations' either, in the h-sense.