You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
In an earlier question I tried to ask whether categories I've attempted to define were "useful" https://categorytheory.zulipchat.com/#narrow/stream/229199-learning.3A-questions/topic/How.20to.20make.20up.20categories.20outside.20of.20math.3F
I didn't like how vague that question was, so this is my attempt to do somewhat better.
If you think you're looking at something that satisfies the definition of a Functor, how do you work out its input category and output category?
You could also ask a similar question about functions.
It can be very easy to just note that two things seem analogous, just like it's very easy to note that some sequence looks like the even numbers. But it can take a lot more work to really say for sure you have a function.
The motivation (this might deserve its own thread) is I often want to make physical gadgets, or "toys", that simulate some mathematical system. A smart friend of mine told me "I like functors", which I think is essentially correct. I'm trying to use these toys as "Functors" from some physics thing to some math thing.
But while I like using "toys" as analogies for "math", I'm almost never sure how to see the "toy" as a "syntactic category" or the math I'm trying to understand with it as a "semantics category". I'm not sure that's the right terminology, but hopefully the idea is clear.
Other questions I asked earlier connect to this basic activity, like if a sphere can be in multiple categories, and I have a spherical toy in the real world that I want to use as a model for something else (like a color sphere). Does that mean I need to figure out which sphere is the correct one for my analogy (or my functor)? Etc.
Any help with this (or help tightening my question further) is welcome.
The source and target categories of a functor must be given prior to the definition of that functor, like how the domain and codomain of a function must be given before the definition of a function. That is, a functor's definition includes its source and target, not just its "action". Similarly, there is no number 7; there is the real number 7, the natural number 7, the floating point number 7, and so on: you must first say which collection of numbers you are talking about, then pick out 7 (or a sphere, etc). Programmers have to deal with such "type casting" everywhere!
Of course, what I'm noting is that when I'm describing something with a Functor, it's often easier to tell what the rule should be, but not the input and output categories. I'm asking for advice on how to approach specifying the categories. Because I know the Functor isn't specified until this is done.
For example, in this video, the presenter is illustrating a surprising group homomorphism (https://youtu.be/dYj0rPQeRkA?si=2orJ94PHWW7f6Urq).
But they don't explain the way in which the Rubik's Cube forms a group, or a category. Here the problem is definite enough I think I could do it, but I'm asking for advice on how to go about this more generally.
The way to think of the Rubik's cube puzzle in terms of a group is discussed here:
Briefly, each way you can move the cubes around in the Rubik's cube is an element of this group, and doing one of these motions and then another is multiplication in this group. They say this group has 43,252,003,274,489,856,000 elements. By understanding the structure of this group well enough, you can figure out things you might want to know about the Rubik's cube puzzle. I've never thought about it myself.
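That element count can be sanity-checked from the standard counting argument: permute 8 corners and 12 edges, orient them, then divide by the three constraints (total corner twist, total edge flip, and matching permutation parity). A quick sketch in Python:

```python
from math import factorial

# Order of the Rubik's cube group:
# 8 corner positions with 3 orientations each, 12 edge positions with
# 2 orientations each, divided by 12 for the three constraints
# (total corner twist, total edge flip, matching permutation parity).
order = (factorial(8) * 3**8 * factorial(12) * 2**12) // 12
print(order)  # 43252003274489856000
```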
If you think you're looking at something that satisfies the definition of a Functor, how do you work out its input category and output category?
Can you give an example problem? I don't know if there's a general strategy, or if it depends completely on the details of the example.
To be clear, this is a bit embarrassing because I'm confused, and I'm interested in making sense of "toys" or "tools".
I've been thinking hard about hexagon lattices as inputs for instruments. For example, Euler's Tonnetz was used as a layout for a harmonica, called the "Harmonetta" (https://en.m.wikipedia.org/wiki/Harmonetta)
Chromatic button accordions, in the left hand, use various standards that almost all amount to reflections and rotations of the same system, shared here: Semitone layout numbers.jpg
The numbers refer to the twelve tones of equal temperament.
You can visualize a song on this lattice (because how else would you play it?)
And, generally speaking, this is called the "tablature" of the song.
In principle, you could convert this tablature to sheet music, sounds played on a music system, etc.
I think there's a ton of possible categories and functors here, but I'm struggling to sort out what I should be focusing my energy on.
But like, my basic intuition for why it's some sort of functor is because of the following:
Then we should have F(first half · second half) = F(first half) · F(second half) (If I play the first half, and then the second half, that's just the same as playing from the beginning to the end, therefore I can practice music "one measure at a time")
But like, my basic intuition for why it's some sort of functor is...
Wait a minute - what's "it"? You didn't say.
The rule that takes positions on a lattice to notes, as specified by the image
Or maybe I can take paths on the lattice to sequences of notes
Okay, so I was supposed to figure that out? Whew, you must think I'm telepathic. :upside_down:
Well, I think there's more than one possible functor here for a given accordion, but I'm not sure.
Those are the most obvious rules I can think of
Like I'm not sure what I should mean by "notes" etc. Especially if I want to map tablature to sheet music, or vice versa
I'd say you just have a function from positions on the lattice to notes, say

f: L → N

where L is the set of lattice positions and N is the set of notes. From this you automatically get a function sending lists of lattice positions to lists of notes, called

f × f × ⋯ × f: L × L × ⋯ × L → N × N × ⋯ × N

where I'm taking a cartesian product of sets (well-known I hope) or a cartesian product of functions (slightly less well known, but still used a lot).

We can abbreviate this as

fⁿ: Lⁿ → Nⁿ

where the notation fⁿ is rarely used this way, but makes sense because (−)ⁿ is actually a functor from Set to Set, which works not only for objects (sets) but morphisms (functions).
Now there's no serious category theory here yet - I'm just mentioning categories to make you happier. But then you seemed to want to create a monoid consisting of lists of arbitrary length, either of lattice positions or notes. In the first case this is

L⁰ + L¹ + L² + ⋯

and in the second it's

N⁰ + N¹ + N² + ⋯

Here the sum means [[coproduct]], or 'disjoint union'. These things get to be monoids like this:
We just concatenate two lists to get a new longer list.
We then get a function

f*: L⁰ + L¹ + L² + ⋯ → N⁰ + N¹ + N² + ⋯

which does the obvious thing: converts a sequence of lattice positions to the corresponding sequence of notes. And this map is a monoid homomorphism, meaning

f*(xy) = f*(x) f*(y)

where x and y are two sequences of lattice positions, each of arbitrary length.
This is just the fact you said here:
Then we should have F(first half · second half) = F(first half) · F(second half) (If I play the first half, and then the second half, that's just the same as playing from the beginning to the end, therefore I can practice music "one measure at a time")
What's really going on is that we've got a functor

List: Set → Mon

sending each set, say L, to the monoid of arbitrary-length lists of elements of that set. And this is a very famous functor from the category of sets to the category of monoids! Computer scientists might call it the "list" functor, but mathematicians would call it the "free monoid" functor, since it's left adjoint to the forgetful functor from monoids to sets.
I'm not claiming any of this is actually useful for anything in music. It's just the math of converting lists of elements in some set L to lists of elements in some other set N.
So that's what immediately pops into mind when I try to take what you're saying and say it more mathematically. Notice it has nothing to do with the details of your situation: I really just started with any function between sets, f: L → N, and did stuff with it.
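The free-monoid construction described above is easy to try out concretely. Here's a minimal sketch in Python, with a made-up toy assignment of notes to lattice positions (the layout is invented for illustration, not a real accordion):

```python
# A toy function f: L -> N from lattice positions to note names
# (this particular layout is invented for illustration).
f = {(0, 0): "C", (1, 0): "D", (0, 1): "E", (1, 1): "G"}

def list_f(positions):
    """List(f): send a list of lattice positions to the list of notes."""
    return [f[p] for p in positions]

# list_f is a monoid homomorphism: mapping the concatenation of two
# sequences equals concatenating the mapped sequences.
xs = [(0, 0), (1, 0)]
ys = [(1, 1), (0, 1)]
assert list_f(xs + ys) == list_f(xs) + list_f(ys)
print(list_f(xs + ys))  # ['C', 'D', 'G', 'E']
```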
Everything you're saying sounds right. And makes a ton of sense. And what you're saying I think is useful for music. But there's one more big fact that makes me interested in lattices (that I see now in retrospect I didn't make clear enough)
If you shift your hands on the numbered grid, but play a song with the same shape, it'll be the same song in a different key:
I got this image from a respected accordion text, I think the arrows could be morphisms in some category (That I don't know how to articulate)
485ce2e7-8da1-4c21-af1e-e3fcc870608a.jpg
Your analysis is noting we don't need to know this about an accordion to play it successfully (Or you could use a piano layout instead, where you don't have this big fact, but you can still play a song provided you play the right keys in sequence)
One thing you're noting is that "transposing" gives a function t: L → L, or at least it would if your accordion were infinitely long. Then we can throw this into the machinery I've discussed and get a monoid homomorphism t*: List(L) → List(L).
And, I'm noting, that transposing here in the musical sense, is literally the same as translating in the physical sense, provided you're using a lattice (and the notes are placed on the lattice correctly)
Right. By the way, there are a lot of people who work on the math of music who would have already written about very similar things, though not necessarily accordions. Some of them use category theory, and others don't.
One of my favorite mathematical music theorists, Dmitri Tymoczko, has been using groupoids lately, and he's been talking to me about that.
His book A Geometry of Music is really good, but it doesn't get into groupoids.
There's also this:
but I've only read their previous paper, which is probably a prerequisite.
I am interested in music theory so I'll check out that book.
I'm more interested in analogies, how to work with them exactly, especially if I can use them to encode math facts in physical toys.
I think because I'm interested in physical aids for thinking about abstract stuff.
So here, rather than thinking about music composition as such, I'm trying to understand "How does an accordion convert my fingers' motions through space into sounds?" So I got really excited when a physical (well, geometric) transformation "translation" could correspond to a musical transformation "transposition"
And I'm very interested in converting physical space (usually lines) into many other things. I was drawn to category theory in part because string diagrams seem to do that - for example. And because analogies seem to play a big role in category theory as functors.
(Your lazy lines discussion from the paper you linked a while back was very exciting by the way)
I appreciate the time you took with this question!
Lazy paths, you mean.
A lot of mathematical music theory is about transformations acting on notes, chords, melodies, etc. For example they talk a lot about the transposition-inversion group and the PLR group, which acts on triads.
I explained these in week234 of my series This Week's Finds, and there are some references there.
Yes, you can partially blame this question on me reading your post! It's convenient to carry out PLR group transforms on Euler's Tonnetz, and therefore on the Harmonetta!
For those listening in who know group theory, the T/I and PLR groups can be summarized very quickly. The group of transformations of the form x ↦ ±x + c for c ∈ ℤ/12 is the dihedral group with 24 elements, and in music is known as the transposition/inversion group or T/I group. The set of major and minor triads is a torsor for this group, so the bijections of this set that commute with the action of the T/I group form an isomorphic group called the PLR group.
The fun part is seeing how all this connects to music theory.
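The group theory in that summary can be verified by brute force. A sketch in Python, using the usual encoding of triads as pitch-class sets (an assumption, but a standard one):

```python
from itertools import product

# The T/I group: all maps x -> s*x + c (mod 12) with s = ±1, c in Z/12,
# each represented as the tuple of its values on 0..11.
ti_group = {tuple((s * x + c) % 12 for x in range(12))
            for s, c in product([1, -1], range(12))}
assert len(ti_group) == 24  # dihedral group with 24 elements

# Major and minor triads as pitch-class sets: root, third, fifth.
majors = [frozenset({r, (r + 4) % 12, (r + 7) % 12}) for r in range(12)]
minors = [frozenset({r, (r + 3) % 12, (r + 7) % 12}) for r in range(12)]
triads = set(majors + minors)
assert len(triads) == 24

# The T/I group acts on triads. The orbit of C major is all 24 triads,
# and since |group| = |orbit| = 24, the action is simply transitive:
# the triads form a torsor for the T/I group.
c_major = frozenset({0, 4, 7})
orbit = {frozenset(g[x] for x in c_major) for g in ti_group}
assert orbit == triads
```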
The bass notes on an accordion's left hand are almost Euler's Tonnetz (it's one shear away), so technically you can still play the PLR group transformations conveniently.
Euler's Tonnetz:
Neo-Riemannian_Tonnetz.svg.png
My illustration of Accordion's left hand bass rows (French system, which adds one more low bass row than is typical):
French Layout.jpg
Nice!!! I've been listening to a lot of accordion music on Sundays in the local pub when I'm in Edinburgh, and it uses this kind of hexagonal arrangement of buttons, or some similar kind. It reminded me of the Tonnetz but for some reason I didn't look into the connection.
When you require that each move to an adjacent position on the lattice changes the note by a definite number of half steps, and you treat layouts as the same if you can rotate, reflect, and translate between them, then there are only 12 possibilities, and only 8 of them hit all notes.
https://guichaoua.gitlab.io/web-hexachord/
One of these possibilities is Euler's Tonnetz.
These are also related to square lattices.
And 5 of the above 8 are possible to tune on a slanted guitar. That is, five of them have one-semitone steps along one of the axes. And I say "slanted" because guitars are normally square lattices, but the link is for hex lattices, which are sheared from squares.
The accordion's bass rows, on Stradella systems, are the closest "guitar-possible tuning" to the Tonnetz (with the qualifier that guitars are square lattices, not hex lattices, so I'm stretching the language a bit).
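The counts claimed above (12 layouts up to isometry, 8 that hit all notes, 5 guitar-friendly) can be checked by brute force, under the reading that a hex layout is a triple (a, b, c) of semitone steps along the three axis directions with a + b + c ≡ 0 (mod 12), taken up to permuting the axes and negating all three steps. That model of "isometry" is my own reading of the discussion, so treat this as a sketch:

```python
from itertools import combinations_with_replacement
from math import gcd

# A hex layout: semitone steps (a, b, c) along the three axis directions,
# with a + b + c ≡ 0 (mod 12). Permuting the axes and negating all three
# steps (which is what reflection does) give the "same" layout.
layouts = set()
for a, b, c in combinations_with_replacement(range(1, 12), 3):
    if (a + b + c) % 12 == 0:
        reflected = tuple(sorted((12 - a, 12 - b, 12 - c)))
        layouts.add(min((a, b, c), reflected))  # canonical representative

print(len(layouts))  # 12 layouts up to isometry

# "Complete" layouts reach all 12 notes: the steps generate Z/12,
# i.e. gcd(a, b, 12) = 1 (c is redundant since c ≡ -(a+b) mod 12).
complete = {t for t in layouts if gcd(gcd(t[0], t[1]), 12) == 1}
print(len(complete))  # 8 hit all notes

# Guitar-friendly: one axis moves by a single semitone.
print(len({t for t in complete if 1 in t}))  # 5
```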
I'll have to think about this more - I can't really absorb that fully right away. I got interested in these hexagonal grids when I wrote these:
• Part 1: The history of just intonation. Just intonation versus Pythagorean tuning. The syntonic comma.
• Part 2: Just intonation from the Tonnetz. The four possible tritones in just intonation. The small and large just whole tones. Ptolemy’s intense diatonic scale, and its major triads.
• Part 3: Curling up a parallelogram in the Tonnetz to get just intonation. The frequency ratios of the four possible tritones: the syntonic comma, the lesser and greater diesis, and the diaschisma.
You'll see some hexagonal grids in parts 2 and 3.
The math behind my basic claims about the lattices, like that there are only 12 up to isometry, is relatively elementary number theory... but I find it's very easy to get extremely confused.
Like I once conflated a reflection with reversing the rows of an accordion - but those aren't the same :sweat_smile:
If I remember correctly, using the notation from the website I linked to in my previous message, (1,2,9) and (1, 3, 8) are related by reversing the rows of an accordion.
Accordion right hands, when they aren't pianos, are almost always (1, 2, 9), that's the lattice I shared in my original question.
Btw, do you play the accordion?
I do! This is a big way I do math - through my widgets or "toys"!
I'm slightly embarrassed to say I started accordion in part to explore these questions (but I swear I can play "normal" music)
Good! It would be sort of a waste to think so much about accordion math without actually playing an accordion.
I asked my piano teacher years ago "why don't we make a piano where all the keys have the same width?"
Instead of saying "there's this neat invention called the Janko Keyboard"
1280px-MIM_Janko_Piano.jpg
She said "You'll never find a better layout, learn the fingering!" So, I was like "Okay..."
Years later when I saw a chromatic accordion I was like "THATS THE THING!!!"
(Immediately followed by, "Oh two dimensions makes it smaller! That's a good idea!")
Probably the biggest practical advantage of the lattices over the piano is that it's trivial to span multiple octaves with one hand on most of them. That's part of what makes guitars sound different from pianos.
I only play the piano and I don't really want to learn these other instruments, but I'm very glad they exist. Do you know about the "lumatone"?
https://www.youtube.com/results?search_query=lumatone
I do! What's crazy about the Lumatone is that it uses something called a "Bosanquet-Wilson" layout. The hex lattice is rotated in such a way that, for various equal temperament layouts, your hand doesn't fly off the top or bottom of the keyboard as you play up the scale.
So it's a good example of why it's dangerous to treat these layouts the same up to isometry.
I almost bought it, but I realized I couldn't use my accordion fingerings on it, because of this rotation.
If I'm ever interested in equal temperament tunings with more than 12 notes, I'll get it.
Here's a few Lumatone examples, note the diatonic scale is horizontal over the various equal temperament tunings, but the hexagons are "tilted":
9b05af08-b2a5-4efa-8630-c17a5db4000d.jpg
0ca27fa1-d463-4e4a-a34e-51f290eb8f16.jpg
e68a474d-5d63-465d-a30a-4ed284d11738.jpg
Whereas if you drew in the hexagons of an accordion lattice, they aren't tilted, like on this accordion midi controller:
358225f4-7777-4891-b6b6-e7c147de39d2.jpg
(the white and black buttons of this accordion are the white and black keys of piano)
@Alex Kreitzberg Your question reminded me of something I read somewhere: Rudolf Carnap originally coined the word "functor" to mean something that is like a function, but is not one.
For sure! A generic Functor's Domain often feels way weirder than a set to me.
Another thought I had. Hex lattices like the above, with the property that hand shape determines quality, are sometimes called "Isomorphic keyboards" (https://en.m.wikipedia.org/wiki/Isomorphic_keyboard).
This invites me to wonder whether there's a category with a notion of isomorphism that coincides with this terminology. I think its objects would be "diagrams" in the hex lattice.
Though I suspect they're called "isomorphic" because somebody thought that was a good word to summarize "same shape, same sound"
The notion of isomorphism is not only a CT notion, or at least the word is often used outside of CT.
I wonder if all isomorphisms can be reduced to CT isomorphisms.
But even if they can, there surely is also the informal notion of isomorphism which CT (or any other mathematics) cannot pinpoint, as in "I know how to convert one to the other", as in "I know how to convert notes to music" (although the relation between notes and music is actually more like a functor, as there are many different ways to play the same notes).
I was curious about the term isomorphic keyboard when I first heard it, but the Wikipedia definition connects to the etymology of isomorphic, meaning "same shape":
An isomorphic keyboard is a musical input device consisting of a two-dimensional grid of note-controlling elements (such as buttons or keys) on which any given sequence and/or combination of musical intervals has the "same shape" on the keyboard wherever it occurs – within a key, across keys, across octaves, and across tunings.
So, it's not that the whole keyboard is isomorphic to something.
If you think you're looking at something that satisfies the definition of a Functor, how do you work out its input category and output category?
You could also ask a similar question about functions.
Mathematicians did! The concept of a function has evolved a lot over time.
https://en.wikipedia.org/wiki/History_of_the_function_concept
If you think of a function as a special kind of relation, then your question is a good one when encountering category theory. If the codomain is not exactly the range, how do you pick it? Like, if f(x) = x² and the domain is the reals, what right do we have to say that the codomain is the reals and not the nonnegative reals?
If you think of a function as a rule for acting on inputs to produce an output, you might consider x ↦ 2x and x ↦ x + x to be different functions. Certainly bubblesort and quicksort both have the same input/output relation on lists of integers, but one runs in O(n²) time and the other in O(n log n) time on average. Shouldn't we consider them different?
In 1951, Bourbaki defined a function as a triple (A, B, R) where A and B are sets and R is a relation from A to B such that each element of A is related to a single element of B. In this definition, the domain and codomain are given as part of the data.
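Bourbaki's definition is easy to mimic in code: a function is the data (domain, codomain, graph), and composition is defined only when codomain and domain match. A small sketch on finite sets (the class and names are mine, purely illustrative):

```python
class Func:
    """A Bourbaki-style function: the triple (domain, codomain, graph)."""
    def __init__(self, domain, codomain, graph):
        assert set(graph) == set(domain)             # defined on all of A
        assert set(graph.values()) <= set(codomain)  # lands inside B
        self.domain, self.codomain, self.graph = domain, codomain, graph

    def __call__(self, x):
        return self.graph[x]

    def after(self, other):
        """self ∘ other, defined only when other's codomain is self's domain."""
        assert other.codomain == self.domain
        return Func(other.domain, self.codomain,
                    {x: self(other(x)) for x in other.domain})

# x ↦ x² on a finite stand-in for the reals, with two codomain choices:
dom = {-1, 0, 1}
sq_big = Func(dom, {0, 1, 2}, {x: x * x for x in dom})  # bigger codomain
sq_img = Func(dom, {0, 1}, {x: x * x for x in dom})     # codomain = range

# Same graph, different codomain: different functions in Bourbaki's sense.
assert sq_big.graph == sq_img.graph
assert sq_big.codomain != sq_img.codomain
```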
Category theory follows Bourbaki in specifying the domain and codomain as part of the data of a morphism. Whether bubblesort and quicksort count as different functions depends on the context. In Lambek and Scott's construction of the internal language of a cartesian closed category, they would be considered different morphisms because they are not beta equivalent in the lambda calculus. But as morphisms in Set, once we choose the domain and codomain, they're the same because they have the same input/output relation.
If you like highfalutin' language, quicksort and bubblesort are extensionally equal but intensionally distinct.
Edited later to add:
In Lambek and Scott's construction of the internal language of a cartesian closed category, they would be considered different morphisms because they are not beta equivalent in the lambda calculus.
I would say that in the construction of the syntactic cartesian closed category of lambda-calculus, they are different morphisms. Normally when we say "the internal language of a cartesian closed category" we mean to start with a particular semantic category and build a (very large) theory whose constants and equations are all the morphisms of that category and the equations that hold between them, and in that case whether or not bubblesort and quicksort are equal would depend on the category you start with. For instance, that category might be Set, in which case they would be equal in the resulting theory.
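The extensional side of the quicksort/bubblesort discussion (the part that is uncontroversial in Set) is easy to demonstrate; the intensional difference is precisely what input/output testing can't see. A sketch with two textbook implementations:

```python
import random

def bubblesort(xs):
    """O(n^2): repeatedly swap adjacent out-of-order elements."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def quicksort(xs):
    """O(n log n) on average: partition around a pivot and recurse."""
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

# Extensionally equal: identical input/output relation on these samples.
for _ in range(100):
    xs = [random.randrange(100) for _ in range(random.randrange(20))]
    assert bubblesort(xs) == quicksort(xs) == sorted(xs)
```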
The process of adding specified codomains can be formalized as the process of passing from a [[protocategory]] to a category. Perhaps one could define some kind of "protofunctor" that wouldn't know its domain and codomain yet, but which gives rise to a functor between categories in a similar way.
I'm very impressed with your ability to directly answer my question Shulman.
Here's my rough takeaway from what people are saying.
It's like if a student had a solution of a differential equation, and the differential equation, but then asked "what were the initial conditions of my solution?"
The question is a bit unnatural, because initial conditions are typically data. You can maybe fix it by asking "What are all the possible initial conditions this solution could've had?" But that's a lot of work to go through, when you're suspicious the student simply hasn't shared the full context.
I think a practical issue I'm struggling with, is I'm still getting used to picking categories. It's like when a programmer, used to Python, decides it's important to annotate with types. It's initially stressful, but eventually the activity becomes convenient and clarifying.
And maybe I'm a bit embarrassed at how long it's taking me to figure out how to do this for the weirder examples I'm interested in.
Here's my attempt to give a bit more context for what I'm doing.
It's like there's a "target category" which is rich enough to hold "the computations I want to do" and an "input category" that's almost like a programming language, but is a physical thingy. And I'm trying to figure out how to articulate the way the physical thingy is making it easier to do the computations I want to do.
Like I want to think of an abacus, or a slide rule, as a "proto programming language." How would I then approach giving these physical things semantics? Category theory seems like a really compelling general way to think about this. Constructive mathematicians are drawn to it, etc. But maybe that's the wrong approach?
Like categorical semantics of programming languages are very cool, why can't I use topological spaces as programming languages instead? Isn't that what string diagrams are? How weird, or physical, can the "syntax category" get?
I want to develop the lattice example further, to stay closer to the ground and to build up some more context. There's a song I'm learning right now "Lamento Nordestino", maybe it'll help if I show how this thinking helps me learn this song.
You seem to be switching rather rapidly between different questions:
Each of these questions is potentially interesting, but it would take work to make the question precise (there are many ways) and then figure out a good answer.
Instead of throwing out several hard questions in rapid succession, it might be more productive to pretend you have one question you are very interested in, and sit still with it, and talk about it for a long time, in order to bring it to the point of mathematical precision where it admits a precise and useful answer.
That at least is how most mathematicians proceed. We tend to work patiently and thoroughly. I can easily imagine writing a whole paper about question 1.
I say "pretend" because you don't actually need to be interested in just one question to act like that's all you care about, and put your all into making it very precise, and trying to answer it.
Well my hope was that I was trying to "pretend to worry about one question" by focusing on the accordion a little bit longer. I was going to illustrate a diagram, show chords on it, show how this helps me memorize a song, and really try to define a category carefully.
I think Shulman answered the base question I asked as precisely as one could hope. I am actually pretty happy with his answer. So maybe I should start a new thread about the accordion category.
But I'm worried people would still ask "What exactly are you trying to solve with an accordion category?" And I don't know how else to explain it other than "An accordion is a physical tool, that I think can act as a physical analog (or a functor?) for musical concepts. Whenever a physical tool is a useful analog (or a functor?) for a computation, I get curious. Because my personal working memory is chaotic and error prone.
Hrmmmm, but if this physical analogy is a functor - then in what sense is the input of an accordion a category?")
I'm really trying to get precise, I know I'm not there yet. I hope with the above I've helped people avoid needlessly guessing at what's going on in my head.
If I'm too all over the place, or asking the wrong questions, that's useful for others to know:sweat_smile:
It's perfectly fine to stick to accordions! When you start talking about "how slide rules can be programming languages", it looks (to me) like you've lost interest in the original topic.
I explained how any function that translates something (like a button press on an accordion) to something else (like a note) can be turned, via the "free monoid" functor, into a monoid homomorphism that translates lists of something into lists of something else. Then we discussed how "transposition" fits naturally into that mathematical framework. We could keep going and develop all the math you might be interested in when it comes to accordion music. I felt like we were just getting going.
Yes I want to say more about the accordions! I was just trying to explain
Are in my head examples of
I don't have a lot of hope getting an answer for what exactly that third bullet point is. But I was worried something about how I was writing my questions was leaving that implicitly in the air.
I wanted to give people the ability to ignore the point if it seemed too loose to make sense of, to them.
But yes, accordions! I want to say more definite things about those!
I think
is too diffuse an idea to make real progress on before fully working out, say, 5 examples. By the way, people often think of the "syntax to semantics" functors as going the other way: mapping an abstract description to the real thing it describes. But never mind! I shouldn't have even mentioned that. Right now I'm more interested in the categorical semantics of accordions.
I've got a lot more than five examples in mind, so I guess I should get to work :laughing:
I appreciate your point about syntax, because maybe that's part of what's making some of my questions on this confusing. I'll avoid getting distracted by it and get to accordions.
Yes, I find it very good to work examples out in detail while letting the grand philosophical issues hover silently in the background. That's why I went into mathematical physics rather than philosophy (which I actually considered at some point in my foolish youth).
Mike Shulman said:
If you like highfalutin' language, quicksort and bubblesort are extensionally equal but intensionally distinct... they would be considered different morphisms...
I know this was a digression, and sorry to jump in, but: I understood that different sorting algorithms will always be equated as morphisms whenever lists are interpreted categorically using initial algebras, as would be usual in categorical semantics, since their equality can be shown by initial algebra induction. I'm not aware of a ccc that would distinguish them or how that would be set up, and I would be interested to hear about it if there is one. I've seen some work (e.g. by Robin Cockett and collaborators) but I'm not sure what the state of the art is for categorical analysis of algorithms and complexity.
John Baez said:
By the way, people often think of the "syntax to semantics" functors as going the other way: mapping an abstract description to the real thing it describes.
That's an apt remark: when I had tried to graph such functors, I stumbled on the following complication: the real world is much richer than any abstract system, and there are aspects of it that cannot be mapped into the abstraction.
e.g. when I play an accordion, not all movements of my hand trigger notes, and not all aspects of the sound are described by them.
When I think about this, I think of a functor from a subcategory of the real world, to a category of abstractions, or a functor from an abstraction to the real world.
Sam Staton said:
I understood that different sorting algorithms will always be equated as morphisms whenever lists are interpreted categorically using initial algebras, as would be usual in categorical semantics, since their equality can be shown by initial algebra induction. I'm not aware of a ccc that would distinguish them or how that would be set up, and I would be interested to hear about it if there is one.
Yes, I thought about this a bit before posting that. I agree that two functions on a list that can be proven equal by list-induction will always be equal in a CCC if lists are interpreted by initial algebras, and similarly for functions between other inductive types. And if you go up one step to dependent type theory, then the equality of such functions can be proven inside the syntax, so they will therefore be equal in all models. But my understanding is that in non-dependent lambda-calculus, the rules for a "list type" only require it to be interpreted semantically by a weakly initial algebra, which doesn't in general admit "list induction".
I don't know of a specific semantic CCC with weak list objects in which quicksort and bubblesort actually turn out different. @Mike Stay, since you brought them up, do you? As soon as there is any such CCC, though, the syntactic CCC built from lambda-calculus will also have that property, by initiality (hence my edit).
Jencel Panic said:
When I think about this, I think of a functor from a subcategory of the real world, to a category of abstractions, or a functor from an abstraction to the real world.
Do you mean that a functor from a subcategory of X to a category Y is necessarily a functor from X to Y? Or that it should probably be interpreted as a functor from Y to X?
Mike Shulman said:
I don't know of a specific semantic CCC with weak list objects in which quicksort and bubblesort actually turn out different. @Mike Stay, since you brought them up, do you?
Not specifically, but this paper looks relevant:
https://sartemov.ws.gc.cuny.edu/files/2012/10/Bonelli2007.pdf
Since I can't provide proof, let me back off from saying they're different to saying I would be gobsmacked if they were equal.
This isn't really the distinction you were getting at, which I'm understanding is something like, "the syntax can be different even though the functions are the same".
But this reminded me of one fun little category I made. Its arrows are pairs $(f, c_f)$ of a function $f$ together with a "cost function" $c_f$. The idea was that $c_f(x)$ was the cost of running $f$ on $x$.
Then composing $(f, c_f)$, which I define to be an arrow from $X$ to $Y$, with $(g, c_g)$, an arrow from $Y$ to $Z$, gives the pair $(g \circ f,\; x \mapsto c_f(x) + c_g(f(x)))$, an arrow from $X$ to $Z$.
If I define identities to be identity functions with zero cost functions, then this composition is associative, has left and right identities, etc.
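A minimal executable sketch of this category (my own encoding, with numeric costs assumed for illustration): arrows are pairs of a function and a cost function, composition adds the costs accumulated along the run, and identities cost nothing.

```python
def arrow(f, cost):
    """An arrow is a pair: the function f and its cost function."""
    return (f, cost)

def compose(g_arrow, f_arrow):
    """Compose (g, c_g) after (f, c_f): run f, then g; the total cost
    at x is the cost of f at x plus the cost of g at f(x)."""
    g, c_g = g_arrow
    f, c_f = f_arrow
    return arrow(lambda x: g(f(x)),
                 lambda x: c_f(x) + c_g(f(x)))

def identity():
    """Identity arrow: the identity function with zero cost."""
    return arrow(lambda x: x, lambda x: 0)

# Example (invented costs): doubling costs 1 per call, squaring costs 2.
double = arrow(lambda x: 2 * x, lambda x: 1)
square = arrow(lambda x: x * x, lambda x: 2)

f, c = compose(square, double)     # square after double
assert f(3) == 36 and c(3) == 3    # runs double (cost 1), then square (cost 2)
```

Composing with `identity()` on either side leaves both the function and the cost unchanged, matching the unit laws described above.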
I cooked this up trying to think of a category with an obvious functor into $\mathbf{Set}$ that wasn't faithful. Because I was told if I did that, it'd look like morphisms that lost structure, which it does! (Of course, I built it so that this was obviously true.)
Alex Kreitzberg said:
Do you mean that a functor from a subcategory of $X$ to a category $Y$ is necessarily a functor from $X$ to $Y$? Or that it should probably be interpreted as a functor from $Y$ to $X$?
I hope he didn't mean those - they're wrong. Think about it with functions (which are a special case of functors). A function from a subset of a set X to a set Y doesn't always give a function from Y to X.
Oh sure, that makes sense :face_palm:
If you want a highfalutin' description of your "category with cost functions" I think it is the [[Kleisli category]] of the [[monad]] ... (-:O
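Reading the composition rule as "costs accumulate in a monoid", the category matches Kleisli composition for a writer-style monad such as $(-) \times \mathbb{N}$; the monad named in the original message didn't survive in the archive, so the sketch below is my reconstruction under that assumption.

```python
# Writer monad over the monoid (N, +, 0): values carry an accumulated cost.
def unit(x):
    """Kleisli identity: return the value with zero accumulated cost."""
    return (x, 0)

def kleisli_compose(g, f):
    """Compose Kleisli arrows f : X -> Y x N and g : Y -> Z x N,
    adding the costs produced by each stage."""
    def h(x):
        y, m = f(x)
        z, n = g(y)
        return (z, m + n)
    return h

# A cost-category arrow (f, c_f) corresponds to x |-> (f(x), c_f(x)):
double = lambda x: (2 * x, 1)   # doubling, charging cost 1
square = lambda x: (x * x, 2)   # squaring, charging cost 2

h = kleisli_compose(square, double)
assert h(3) == (36, 3)          # value 36, total cost 1 + 2 = 3
```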
You probably want to use big-O for your cost functions. Otherwise you may not even have products: if it costs even some constant amount to project out of a product, then the triangles in the universal property diagram won't commute.
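The failing triangle can be checked concretely. Below is a hypothetical product structure on the cost category (pairing charges both component costs, each projection charges a constant 1; the names and numbers are mine): the composite of the projection after the pairing agrees with `f` on values but not on costs, so the universal-property triangle doesn't commute.

```python
# Arrows are pairs (function, cost function); composition adds costs.
def compose(g_arrow, f_arrow):
    g, c_g = g_arrow
    f, c_f = f_arrow
    return (lambda x: g(f(x)), lambda x: c_f(x) + c_g(f(x)))

def pairing(f_arrow, g_arrow):
    """<f, g>: run both components and charge both costs."""
    f, c_f = f_arrow
    g, c_g = g_arrow
    return (lambda x: (f(x), g(x)), lambda x: c_f(x) + c_g(x))

PROJ_COST = 1  # assume even a projection costs some constant amount
pi1 = (lambda p: p[0], lambda p: PROJ_COST)

f = (lambda x: x + 1, lambda x: 2)   # f charges cost 2
g = (lambda x: x * x, lambda x: 3)   # g charges cost 3

lhs = compose(pi1, pairing(f, g))    # pi1 after <f, g>
assert lhs[0](5) == f[0](5) == 6     # values agree with f ...
assert lhs[1](5) == 2 + 3 + 1        # ... but the cost is 6, not f's cost 2
```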
Or you could keep the strict costs but just be content with a [[Freyd category]]. This is in line with recent work on cost as a computational effect.
But it sounded to me like Alex wasn't specifically interested in "cost functions" in their own right, but just as an example of a category with a natural "underlying-set" functor that isn't faithful.
Alex Kreitzberg said:
Jencel Panic said:
When I think about this, I think of a functor from a subcategory of the real world, to a category of abstractions, or a functor from an abstraction to the real world.
Do you mean that a functor from a subcategory of $X$ to a category $Y$ is necessarily a functor from $X$ to $Y$? Or that it should probably be interpreted as a functor from $Y$ to $X$?
I compare the two because they are both alternatives to the "real-world system → abstract system" model, which wouldn't have the issue that I tried to outline above.
Jencel Panic said:
- "abstract system → real world system" is one, but that would entail that all the structure of the abstract system would have to be manifested in the real world.
It's pretty off topic from category theory, but you might be interested in the philosophical discussion about physical implementation of computations, which asks essentially "given some mathematical gadget which models a computation, say a DFA or Turing machine, when does a real-world physical system implement that computation?" There are a number of related questions that are actively discussed in certain parts of analytic philosophy. (In undergrad a professor and I spent some time trying to give an answer in terms of PL-theoretic models of computation; we never got anything satisfactory.)
In particular, the historically first answer (explicitly written down by Putnam, but probably older) was what we now call the "simple mapping account," which says a system implements a computation, modelled as some kind of automaton, if there is a mapping from states of the DFA to states of the physical system that preserves transitions. That reminds me quite strongly of your abstract -> real world functor. This account pretty quickly runs into a lot of issues, some technical, some physical, and some philosophical, of the kind it seems you're running into here.
(There's a reasonable-looking SEP article, but if you're interested I would recommend this paper of Chalmers, which argues that these kinds of mappings are actually too plentiful to be useful tools.)
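The simple mapping account is easy to state executably: given an automaton and a physical system presented as its own transition table, ask for a map from automaton states to physical states that commutes with transitions. A toy Python sketch (both systems here are invented for illustration):

```python
# A toy DFA over inputs {0, 1}: parity of the number of 1s seen so far.
dfa_step = {("even", 1): "odd", ("odd", 1): "even",
            ("even", 0): "even", ("odd", 0): "odd"}

# A "physical system": a lamp that toggles on input 1 and holds on input 0.
phys_step = {("off", 1): "on", ("on", 1): "off",
             ("off", 0): "off", ("on", 0): "on"}

def implements(mapping, dfa_step, phys_step, inputs):
    """Simple mapping account: the physical system implements the DFA
    if the state mapping commutes with every transition."""
    return all(
        mapping[dfa_step[(s, i)]] == phys_step[(mapping[s], i)]
        for s in mapping for i in inputs)

# This mapping preserves transitions; collapsing both states does not.
assert implements({"even": "off", "odd": "on"}, dfa_step, phys_step, {0, 1})
assert not implements({"even": "on", "odd": "on"}, dfa_step, phys_step, {0, 1})
```

The Chalmers-style worry is that for small automata such transition-preserving mappings are cheap to find in almost any physical system, which is why extra conditions (counterfactual, causal, mechanistic) get proposed.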
I'm trying to make a nice, easy-to-read post on algebraic aspects of accordions, which takes time - but it's so painful not to engage with all the topics people are bringing up.
I'll say when trying to find help on making "math toys" in the past, I hit that philosophical crowd and felt discouraged :laughing:
I read the intro of "Physical Computation: A Mechanistic Account"; the book is about the philosophical issues you mentioned.
I want to finish it, because it seems relevant.
But I really like analogies, which I think Functors model well, so I don't really want to give up on "mapping accounts" too soon.
If my mindset is "Analogies are neat!" rather than "they're the perfect tool for modeling computations of physical systems!", I think (or hope) I'll be fine :sweat_smile:
Alex Kreitzberg said:
But I really like analogies, which I think Functors model well, so I don't really want to give up on "mapping accounts" too soon.
I don't think you have to give them up! There are a lot of proposals in the literature for extra restrictions to put on the map from the mathematical model to the physical world, e.g. that it respects counterfactual/causal structure, locality principles, or mechanistic information as in the Piccinini book you mention. It seems to me that you are wrestling with related questions; the discussion in that literature might give you insight into the kind of information you want to somehow encode into the real-world category that forms the codomain of your functor.
It's pretty off topic from category theory, but you might be interested in the philosophical discussion about physical implementation of computations, which asks essentially "given some mathematical gadget which models a computation, say a DFA or Turing machine, when does a real-world physical system implement that computation?"
Haven't read about this, but I have thought about the question, and the answer for me is pretty simple --- intent. A computer is a mechanism that was made by people in order to compute. Even if it doesn't compute correctly it is still a computer, just a defunct computer.
Interesting to hear your opinion, but we should probably open another topic :)
Jencel Panic said:
It's pretty off topic from category theory, but you might be interested in the philosophical discussion about physical implementation of computations, which asks essentially "given some mathematical gadget which models a computation, say a DFA or Turing machine, when does a real-world physical system implement that computation?"
Haven't read about this, but I have thought about the question, and the answer for me is pretty simple --- intent. A computer is a mechanism that was made by people in order to compute. Even if it doesn't compute correctly it is still a computer, just a defunct computer.
Interesting to hear your opinion, but we should probably open another topic :)
I don't know if you'll automatically get a notification; I made a new thread in #theory: philosophy.
Mike Stay said:
Mike Shulman said:
I don't know of a specific semantic CCC with weak list objects in which quicksort and bubblesort actually turn out different. @Mike Stay, since you brought them up, do you?
Not specifically, but this paper looks relevant:
https://sartemov.ws.gc.cuny.edu/files/2012/10/Bonelli2007.pdf
Since I can't provide proof, let me back off from saying they're different to saying I would be gobsmacked if they were equal.
Thanks both. This is an interesting question. I would usually use ordinary strong initial algebras, so it is interesting to consider the weak case.
Ordinarily I would think "cost" would count the beta reductions, but in a ccc you can't do that. (You sort-of can in the Kleisli category that you also mentioned, which is one way I've seen cost treated.)
So suppose we're in a ccc with a weak natural numbers object. Although I think in general you must still have equations like $2 + 2 = 4$ between closed terms, because of the beta equalities. So you can't just count "reduction steps" in a naive machine. I suppose it might work for some machine that does optimal sharing and memoization -- in $f(x) + f(x)$, don't calculate $f(x)$ twice, remember that you have already done it.
It is quite possible that different algorithms are still separated here; perhaps the complexity distinctions are invariant under this kind of sharing and memoization. But I wonder.
Sam Staton said:
I guess you could try addition, defined by induction on the first argument, versus addition, defined by induction on the second argument. These probably won't be different in terms of time complexity, but running them on certain inputs ought to give a different number of steps.
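That suggestion is easy to make concrete: define addition by recursion on the first argument and, separately, on the second, and count the recursive unfoldings. The two programs agree on every input but take different numbers of steps on asymmetric ones. (A Python sketch with an explicit step counter; the names are mine.)

```python
def add_fst(m, n, steps=0):
    """Addition by recursion on the first argument: (m+1) + n = (m+n) + 1."""
    if m == 0:
        return n, steps
    r, s = add_fst(m - 1, n, steps + 1)
    return r + 1, s

def add_snd(m, n, steps=0):
    """Addition by recursion on the second argument: m + (n+1) = (m+n) + 1."""
    if n == 0:
        return m, steps
    r, s = add_snd(m, n - 1, steps + 1)
    return r + 1, s

# Same function on every input, but different step counts:
assert add_fst(2, 5) == (7, 2)   # 2 unfoldings
assert add_snd(2, 5) == (7, 5)   # 5 unfoldings
```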
Personally I'm convinced that in order to talk about this we really want to be in a weakly closed setting, in which the many "names" of a given morphism are like different programs implementing it.
The first step towards talking about complexity theory is to be able to talk about the time a given program takes to terminate on a given input. It seems to me that this is a feature of the names/codes more than it is a feature of the morphism/function they happen to name/implement. So, it's important that we can have multiple names!
As far as I know nobody has seriously tried to do this. There's some encouraging work on timing morphisms in Turing categories (which happen to be weakly closed), but they don't work with the names...
Let me mention again the recent work of Bob Harper's group on cost as a computational effect.