This is the second of five examples I'm formalizing, to distill a common abstraction.
The example I'm formalizing is drawn lines on paper. This could be thought of as extending the discussion on melody line diagrams from my accordion post. The upshot is I want to think of a single drawn line as a piecewise analytic path. This is a common thread through a lot of my examples. If you don't think you need my context, you're welcome to go to the end of the post to see my questions.
I think about my line drawings as concatenated curves that individually don't have inflections. I was encouraged to think this way by my art teacher Jacobo via the "how to draw book" by Fialetti:
[image: odoardo_fialetti_limbs_and_the_human_body-4.jpg]
I wanted to think about drawing segments as morphisms, with concatenation as the composition operation. You can concatenate equivalence classes of paths in a topological space; but I was more excited to learn about [[Moore path categories]], because the paths themselves, not equivalence classes of them, define a category.
When I shared that I was excited about this feature in a previous thread, Baez gave an example of a groupoid of paths, called lazy paths. I want to pull out a specific observation Baez makes in his paper An Invitation to Higher Gauge Theory:
Since we are studying paths in a smooth manifold, we want them to be smooth. But the composite path may not be smooth: there could be a 'kink' at the point $y$. There are different ways to get around this problem. One is to work with piecewise smooth paths. But here is another approach...
After this point Baez goes on to discuss lazy paths.
But it seems my intuition is fixated on piecewise smooth, or even piecewise analytic, paths. This fixation was validated after reading Trefethen, a numerical analyst, argue that the most common functions are piecewise analytic (note that when Trefethen says "functions" they are following Euler's usage).
If I look at mathematics from Trefethen's point of view, it's clear that many of the images I had in mind superficially match.
A square is four straight lines connected at points; a regular $n$-gon is $n$ lines connected at $n$ points.
If I have a lattice of squares, and I only travel vertically or horizontally, then the smallest number of straight lines between two points is the taxi cab distance. I could consider more exotic movement if I restricted movements on the lattice to knight moves. In this case the movement is simple enough I can classify the sort of line segments I want to use. For example, a knight move could add $(1, 2)$ to its current position. Then a translation invariant description of a knight's movement might look like $(x, y) \mapsto (x + 1, y + 2)$.
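Here's a minimal Python sketch of that lattice setup (all names here are just illustrative):

```python
# Sketch of the lattice-movement idea (all names are illustrative).
# Taxicab distance: the least number of unit horizontal/vertical
# segments needed to join two lattice points.
def taxicab(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

# A translation-invariant description of knight movement: the set of
# translations a knight may add to its current position.
KNIGHT_MOVES = {(dx, dy) for dx in (-2, -1, 1, 2)
                         for dy in (-2, -1, 1, 2)
                         if abs(dx) != abs(dy)}

def apply_move(p, move):
    return (p[0] + move[0], p[1] + move[1])

assert taxicab((0, 0), (2, 3)) == 5
assert apply_move((0, 0), (1, 2)) == (1, 2)
assert len(KNIGHT_MOVES) == 8
```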
A lot of the algebraic examples I've gotten excited about, including the ones in the accordion post, are interconvertible with piecewise linear functions. I bet with just a bit of care this could be made into a functor from monoids to piecewise linear paths. I was particularly excited about the possibility of these algebraic structures being "generated" from a base set of moves in a certain sense, but really I think I just find these drawings really compelling, and "piecewise analytic" covers a ton of ground.
I could play a melody on my accordion with one finger. If you took a long exposure photo of its movement over time, it would look as though it moved discretely between the buttons of the melody. But if you watched a video of my finger moving, it would patiently rest on each note, so really the movement is described by lazy paths.
And again, as a drawing, although a drawing of a box can have sharp corners, if I were to draw it in the real world, I would take a break at each of the corners, so lazy paths actually model drawing well.
I think Trefethen would argue that piecewise analytic functions are still critical, however, because he is interested in modeling differential equations of real world systems with sharp corners.
When I tried to organize chess games into a sort of category, @Mike Shulman pointed out that individual chess moves couldn't compose into larger moves, and that therefore I was really interested in a quiver instead of a category. I've been thinking a lot about this idea, that a category is really what you get when the composition of two things is another example of whatever they are.
Questions start here
Straight lines are deeply special; unfortunately, concatenating them together does not guarantee a straight line. Similarly, analytic paths are special; again, unfortunately, concatenating them doesn't guarantee an analytic path. At the same time, allowing concatenation without preserving properties seems to be immensely practical and common. But maybe concatenation in this sense isn't modeled well by categories. In which case do I want quivers instead? Are quivers appropriate for modeling piecewise analytic functions? If not those then what?
And further, are quivers (or whatever is appropriate here) "worse" than categories, which are also "worse" than groupoids? What are the reasons to be disappointed when I realize I'm working with piecewise functions?
If you start with a directed graph, you can always make a "free" category from it! So I could imagine a graph where the vertices are points and the edges are straight lines between points.
Then the free category induced by this will have those points as objects, and finite composable sequences of edges as morphisms.
In this case, the morphisms are not just straight lines, but rather sequences of straight lines - where each line in the sequence ends where the next one starts. And composing two such compatible sequences indeed gives us another such sequence.
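Here's a rough Python sketch of that free category (hypothetical names, just to make the idea concrete) - a morphism is a composable sequence of edges, and composition is concatenation:

```python
# A minimal sketch of the free category on a graph (names illustrative):
# a morphism is a composable tuple of edges, composition is
# concatenation, and nothing is ever identified.
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    source: str
    target: str
    label: str

def compose(path1, path2):
    """Compose two edge sequences when the first ends where the second starts."""
    if path1 and path2 and path1[-1].target != path2[0].source:
        raise ValueError("paths are not composable")
    return path1 + path2

ab = (Edge("A", "B", "arc AB"),)
bc = (Edge("B", "C", "arc BC"),)
ac = (Edge("A", "C", "arc AC"),)

# The free composite is the two-edge sequence, never the single edge,
# even if both draw the same shape on paper:
assert compose(ab, bc) != ac
```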
I could also imagine a category where the objects are the different possible states of a piece of paper (in terms of what is drawn on it), and a morphism is a sequence of "valid" drawing steps that converts state $A$ to state $B$. To connect this to the discussion above, you could define a "valid drawing step" to involve drawing a straight line between two points.
Yeah, and if "lines" here can be either straight lines, or analytic paths between the ends, I think this free category covers all of these examples.
So before we get the free category, it starts as a graph. We could also probably make that into a free groupoid by imagining each path having a formal "inverse" when we swap the objects.
I guess my confusion here is that I have a definition of graph in my head that allows at most one edge between two vertices, but quivers are more general than that. Also, calling an analytic curve between two points an edge of a graph feels like a ton of information is being thrown away, but it must not be, right?
From the free category article you linked "...whence free categories are also called path categories." I see sentences like this and wonder if I've read them before, but only now have an intuition for what they mean XD.
When I say "graph" I mean a quiver. The nLab has this to say about the term "directed graph":
In graph theory, directed graph (often abbreviated to the contraction digraph) nowadays usually means a digraph, while in category theory, directed graph generally means a quiver. The basic difference is: quivers may have multiple arrows in the same direction (often called “parallel”), and also loops, while digraphs may not have any of those.
So this terminology can be rather confusing! But yes, for the graphs I have in mind you are allowed to have multiple parallel directed edges between two vertices, as well as loops.
Right, I interpreted what you said as a quiver, I'm just debugging my own confusion with the definitions.
I feel like I'm "getting" it, but am consistently surprised at the level of abstraction these definitions are working at. Does the free category of piecewise analytic paths, when viewed as a category, keep track of the fact that it consists of analytic paths? Does that make sense?
I think the question of how much information is lost in this way is interesting. In general, I think a major component of the philosophy of category theory is this: we can understand a thing by how it relates to other things, and how those relationships relate to one another. So the question that occurs to me is: how do piecewise analytic paths relate to one another, and to what extent are those relationships modelled in this free category?
One could imagine trying to introduce "homotopies" that gradually deform one piecewise analytic/linear path to another (subject to some rules probably), to incorporate more information about how such paths relate to one another.
In general, I'm interested in this question: how can we measure the extent to which a category models the thing we started out considering?
Someone who knows more about piecewise analytic/linear paths than me would probably be able to contribute more specific insight to this particular case!
David Egolf said:
One could imagine trying to introduce "homotopies" that gradually deform one piecewise analytic/linear path to another (subject to some rules probably), to incorporate more information about how such paths relate to one another.
For higher Moore-Path categories, you can define 2D surfaces that concatenate strictly on common boundary paths, and then you can concatenate 3D volumes on boundary surfaces. The higher compositions then make your category into a 2-category and 3-category respectively. So the 2-cells in this case aren't natural transformations, I guess they're like homotopies but I imagine the whole surface as existing, rather than a transformation identifying the boundaries.
I'm sure there is a way to extend the free category of piecewise analytic paths into a 2-category or a 3-category in this manner. And these levels all feel like various nested free categories to my intuition.
I also get this from art. When I'm drawing, I think of the surfaces as implied by key structure lines of the models.
Like a surface of revolution defined by a single path.
(I don't normally think of chess moves this way though, maybe I should? Food for thought...)
I was thinking about this, and I noticed an issue with trying to abstract this in terms of quivers the way we did.
If there is a path $h$ from $A$ to $C$ via $B$, then it's composed of two paths $f : A \to B$ and $g : B \to C$, that is $h = g \circ f$.
If we generate a free category on a quiver with paths as edges, then we have $g \circ f \neq h$.
So I think the way to formalize piecewise analytic paths as a category is to use the Moore path category construction, while restricting the morphisms in the homsets to piecewise analytic paths, rather than all continuous maps.
There is still a useful property preserved: the paths are analytic everywhere except at a finite number of points, and composing two such paths only adds one point of non-differentiability.
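Here's a rough Python sketch of that composition (illustrative code, not a real library) - each path carries its duration and its potential non-analytic points, and composing adds at most the seam:

```python
# A sketch of Moore-style composition: a path is a function on
# [0, duration]; composing runs one after the other, and the only new
# potential non-smooth point is the seam.
import math

class MoorePath:
    def __init__(self, fn, duration, breakpoints=()):
        self.fn = fn                          # fn: [0, duration] -> R^2
        self.duration = duration
        self.breakpoints = set(breakpoints)   # possible non-analytic points

    def __call__(self, t):
        return self.fn(t)

    def then(self, other):
        """Strictly associative composition: do self, then other."""
        d = self.duration
        def glued(t):
            return self.fn(t) if t <= d else other.fn(t - d)
        seam = {d}  # composing adds at most this one breakpoint
        shifted = {d + b for b in other.breakpoints}
        return MoorePath(glued, d + other.duration,
                         self.breakpoints | seam | shifted)

# Two analytic pieces: a horizontal segment, then a quarter circle.
seg = MoorePath(lambda t: (t, 0.0), 1.0)
arc = MoorePath(lambda t: (1.0 + math.sin(t), 1.0 - math.cos(t)), math.pi / 2)
drawn = seg.then(arc)
assert drawn.duration == 1.0 + math.pi / 2
assert drawn.breakpoints == {1.0}
```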
If you're careful you can get free categories to coincide with some subcategory of this piecewise analytic path category, but in general I think piecewise analytic paths are different from free constructions.
I'm unsure how much it matters that we compose two paths to a sequence of two paths, as compared to a single path that does one path and then the other. It seems to me that the basic idea - at least with respect to composition - is still being respected: you make this morphism by composing these two.
Well, my point is that the free category makes more paths. There will be paths that coincide but aren't equal.
I don't think this happens for the special case of lattices, with a generating basis.
What would be an example of two paths that coincide but are not equal?
Imagine the top half of a circle, with points (objects) at $(-1, 0)$, $(0, 1)$, and $(1, 0)$; call them $A$, $B$, and $C$.
We have a few analytic paths: one from $A$ to $B$, one from $B$ to $C$, and one from $A$ to $C$. If you generate a category freely with each of these analytic paths as part of our quiver, then the arc from $A$ to $C$ will not be equal to the free composition of the arc from $A$ to $B$ with the arc from $B$ to $C$.
Ah, that makes sense! Thanks for sharing that example.
This also occurs even in the piecewise-linear case. If you have a line that goes from A to C through B, this will happen.
I guess you are saying that the Moore path category avoids this problem. Composition in that category isn't "free" but rather involves doing one path after the other, while keeping track of path length.
To reflect on why the free category didn't do what we wanted: in that category we kept track of how we built up curves. But we don't care about how we built up two curves; we just care about the final resulting shape drawn on paper. In a way, the free category was keeping track of the drawing process involved for each curve, while we wanted to just model the resulting shapes on paper.
I think the Moore path category will also have nonequal morphisms that correspond to the same resulting shape drawn on paper. That's because we keep track of "how long" it took us to draw a curve in that category. (You can draw the same shape at two different speeds, and those will count as different in this category).
So that might be a criticism of modelling the situation with the Moore path category, in a similar spirit to the one you provided above relating to the free category.
I think all of the coinciding stuff gets identified if you note that equality is determined by their equality as functions, and if you parameterize by arc length instead of time.
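Here's a quick numerical sketch of that identification (assuming numpy; the names are made up) - resampling by approximate arc length sends two different-speed traversals of the same segment to the same function:

```python
# Sketch: two traversals of the same segment at different speeds differ
# as functions, but agree after (approximate) arc-length reparametrization.
import numpy as np

def by_arc_length(path, n=2001):
    """Resample path: [0,1] -> R^2 so the parameter tracks arc length."""
    t = np.linspace(0.0, 1.0, n)
    pts = np.array([path(s) for s in t])
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    s /= s[-1]  # normalize total length to 1
    return lambda u: np.array([np.interp(u, s, pts[:, 0]),
                               np.interp(u, s, pts[:, 1])])

fast_then_slow = lambda t: (t ** 2, 0.0)   # same image, different timing
steady = lambda t: (t, 0.0)

p = by_arc_length(fast_then_slow)
q = by_arc_length(steady)
assert all(np.allclose(p(u), q(u), atol=1e-3) for u in (0.0, 0.25, 0.5, 1.0))
```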
Otherwise I agree. That said I'm way less "uncomfortable" with two paths being distinct because they're different as functions, even though they have the same image (where image here isn't the target object, but the set of the path)
I might want to model someone playing slowly vs someone playing quickly etc.
You could apply the same intuition to the free category. Imagine that each "basic" morphism (not built up from others) corresponds to drawing that shape in one time unit. Then various composites corresponding to the same shape correspond to drawing different parts of that shape at different speeds.
But that wouldn't have analytic functions as a basis graph though, right? That would have analytic functions that take one unit of time to travel between their ends as a basis graph (which actually is really interesting to think about, but I think it's something else).
I think what you just said is correct.
That's a really interesting category to think about though.
Because you'll have increasing tiers of morphisms using up more integral units of time
One other idea that comes to mind: you could try to consider a category where objects are points and morphisms are equivalence classes of paths. We consider two paths to be equivalent iff one is a reparametrisation of the other.
A reparametrisation of the unit interval is a continuous function $\gamma : [0, 1] \to [0, 1]$ with $\gamma(0) = 0$ and $\gamma(1) = 1$. Then given a path $p : [0, 1] \to X$ and a reparametrisation $\gamma$, we get a reparametrised path $p \circ \gamma$.
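In Python this definition might look like (names illustrative):

```python
def is_reparametrisation(gamma):
    """Endpoint conditions from the definition; continuity is assumed."""
    return gamma(0.0) == 0.0 and gamma(1.0) == 1.0

def reparametrise(path, gamma):
    """Precompose a path with a reparametrisation."""
    return lambda t: path(gamma(t))

gamma = lambda t: t * t           # continuous, fixes 0 and 1
path = lambda t: (t, 1.0 - t)     # a straight path in the plane

assert is_reparametrisation(gamma)
slow_start = reparametrise(path, gamma)
assert slow_start(0.5) == path(0.25)  # same image, traversed more slowly
```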
Anyways, I should probably stop throwing more ideas out there if you're already happy with the category you've got :sweat_smile:. But I find it really fun to think about how to model stuff, so sometimes I can't help myself!
Have you read up on Baez's lazy paths? He encouraged me to look into those. What you just said reminds me of them. The fact that Baez pointed me that way means the reparameterizing is probably a really good idea.
So I guess that circles us back to my last question, why are lazy paths better than piecewise smooth paths? Or are they?
How do you pick between two categories like that?
I have not read about lazy paths before! Taking a look at "An invitation to higher gauge theory" I see we have this:
It's not exactly what you said, but if I remember correctly it uses reparameterization, and more, to identify paths
Specifically any "area preserving change" identifies the paths, which includes reparameterizations.
Ah, I see! They talk about "thin" homotopies: a homotopy is thin iff it sweeps out a surface of zero area. So intuitively it doesn't "move around" the path we care about. Then that paper considers a category where morphisms are thin homotopy classes of lazy paths.
They mention that smooth reparametrizations provide examples of thin homotopies, but there are other examples too.
Right, one neat thing that extra flexibility buys you is that you can make the category into a groupoid
Because if you concatenate a path with its reverse, you can shrink the full length along itself back to the starting object, making it into an identity path.
So it's a groupoid like homotopy categories, but it's also a bit more structured, like a piecewise smooth category.
That's a notable difference compared to the free category or Moore path category: in those settings, going along a path and back will, I think, generally be different from just sitting at the start/end point.
Do you really want a groupoid like this for modelling drawing though? If you draw with your pencil from A to B and then back to A along the same line, that's not the same as leaving the pencil at A.
I don't know XD, why would I want a groupoid in general?
Maybe that'd be a useful thing to add to a digital painting program.
So yeah, are groupoids better than categories? Are groups better than monoids?
I guess they are, but I always struggle with questions of this sort.
Maybe "Ctrl-Z" is a poor man's groupoid in digital painting programs XD
Oh, reading more carefully there is some subtlety to how the groupoid stuff works for lazy paths. I might have misinterpreted it.
Well now you got me nervous lol, how you used the inverse is how I understood it.
Now I see what was confusing me! The thin homotopies are allowed to move around a path - my intuition above was wrong. It's just that the way in which the path moves needs to sweep out a region of area zero.
So we are indeed allowed to shrink a "out and back" path along itself - that's a valid thin homotopy.
Yes exactly! Your previous explanation was consistent with this I think.
But if you have a path that's a loop from some origin point, you're stuck with the loop
Which is actually sort of weird, because if your path goes around the loop, and back, then it can be undone. But if it just loops around and stops, then I think you're stuck.
We can go along any path in reverse - even if it's a loop. So I think every (equivalence class) of path should indeed have an inverse.
So the "out and back" in this case would be - "go along our loop" and then "go along that loop backwards".
I'm kind of imagining that you have a magical pencil that "undraws" when you trace back over the curve you just drew.
Maybe the way I said that was confusing. Here's how the paper puts it:
...the composites $\gamma \gamma^{-1}$ and $\gamma^{-1} \gamma$ are not constant paths, but they are thinly homotopic to constant paths
It has an inverse, but it itself isn't thinly homotopic to the identity.
So $\gamma$ isn't equal to the identity. You have to concatenate it with $\gamma^{-1}$
David Egolf said:
I'm kind of imagining that you have a magical pencil that "undraws" when you trace back over the curve you just drew.
That's how I'm thinking of it as well.
Ah, I think we are on the same page then.
Regarding "is a groupoid 'better' than a category?" - I don't know. In general I would assume it can be good to add in more structure to a model, provided that structure reflects something about the thing you want to model.
So if you think of "drawing" as a setting where you can undraw anything that you draw, then maybe using a groupoid is good!
But it can also be good to not add in structure, I think, if it makes things simpler to study. Or sometimes this can be good for studying things in greater generality.
At least, that's the rough impression I get!
I suspect making it a groupoid makes it "a lot" better somehow. I'd love some intuition for why.
It's not uncommon to paint over previous parts of a work, and if it were possible, drawing backwards over something you already drew would be the most efficient way mechanically. Think of an eraser.
If you stop with an eraser halfway it'll even describe an image of something thin homotopic to the identity.
Generality wise, lazy paths feel like a lateral move relative to piecewise smooth paths.
Generality is an advantage of a theory, not a model. We want to prove as much as possible in a category, but sometimes we need a groupoid.
But if we're making an example to describe something, we want as much structure as possible. So we want it to be a groupoid, but sometimes the best we can get away with is a category.
At least that's how I think about it.
I like the idea that having a groupoid corresponds to being able to erase things!
Regarding piecewise smooth/linear/analytic: if you have in mind some software for drawing, you might want something different. Piecewise smooth or analytic sounds really hard to implement, and piecewise linear sounds too restrictive. Maybe you could consider things built out of Bezier curves for example?
"splines" also come to mind
A bezier spline is a sort of piecewise analytic curve. In software I just wouldn't be afraid of locking curves end to end, even if they weren't the same sort of curve.
You have to work really hard to make something that isn't differentiable in uncountably many places
Alex Kreitzberg said:
A bezier spline is a sort of piecewise analytic curve. In software I just wouldn't be afraid of locking curves end to end, even if they weren't the same sort of curve.
The idea of connecting curves of different kinds is interesting. For example, we could imagine snapping the end of a curve to a vertex of a polygon. This makes me start to wonder if each curve should come with a set of "attachment points". But perhaps this is getting a bit tangential to your current focus.
Alex Kreitzberg said:
You have to work really hard to make something that isn't differentiable in uncountably many places
I guess what I'm trying to say: you might get enough flexibility while restricting attention to the simpler case of combinations of linked Bezier curves.
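Here's a small sketch of what I mean (illustrative Python, not any particular graphics API) - cubic Bezier segments locked end to end wherever one ends and the next starts:

```python
# A sketch of "locking curves end to end" with cubic Bezier segments.
def bezier(p0, p1, p2, p3):
    """Cubic Bezier curve as a function of t in [0, 1]."""
    def point(t):
        u = 1.0 - t
        x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
        y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
        return (x, y)
    return point

def link(c1, c2):
    """Concatenate two curves; they must share an endpoint (C0 continuity)."""
    assert c1(1.0) == c2(0.0), "curves must meet end to start"
    return lambda t: c1(2 * t) if t <= 0.5 else c2(2 * t - 1.0)

a = bezier((0, 0), (1, 2), (2, 2), (3, 0))
b = bezier((3, 0), (4, -2), (5, -2), (6, 0))  # starts where a ends
spline = link(a, b)
assert spline(0.0) == (0.0, 0.0) and spline(1.0) == (6.0, 0.0)
```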
That's a tangent related to what I'm exploring here XD. Where your head is going is exactly how I think about stuff.
I was really impressed with Trefethen, so I'm planning on looking through his stuff on numerics, regarding modeling this stuff on a computer. I'm following his lead by thinking of these things as piecewise analytic.
What that amounts to is I'm open to using complex analysis to understand these curves eventually.
Specifically, I was really impressed with Trefethen's takedown of "myths in numerical analysis" https://people.maths.ox.ac.uk/trefethen/mythspaper.pdf
It's funny, to build an intuition for why categories are worse than groupoids, I wanted to look up whether we had classification theorems for monoids (understanding that groups are classified)
And found this helpful question by Baez:
Alex Kreitzberg said:
I guess my confusion here is that I have a definition of graph in my head that allows at most one edge between two vertices,
When a graph theorist says 'graph' they often mean a 'simple graph', and I bet you're thinking about those. A simple graph has at most one edge between two vertices, no edges from a vertex to itself, and no direction on the edges. A better way to say this is that a simple graph is a set with a symmetric reflexive relation.
It's funny, to build an intuition for why categories are worse than groupoids, I wanted to look up whether we had classification theorems for monoids (understanding that groups are classified).
:shock: Groups are very much not classified. If you could classify all finite groups you'd instantly get a job in the math department at any university you want.
Right, finite simple groups are classified :man_facepalming:, good catch.
The process of building up more general finite groups from finite simple groups uses cohomology. It's a fascinating business, but classifying finite groups is provably a 'wild problem', meaning at least as hard as a bunch of problems that are considered intractable.
Would you be up for expanding on why Lazy paths are nicer than piecewise smooth paths? Or why you chose lazy paths in "An Invitation to Higher Gauge Theory"?
For starters, it's nicer for a path to be smooth everywhere than just piecewise smooth: its derivative, second derivative, etc. make sense everywhere, and you don't have to fuss about places where they're not defined.
The problem is that if you compose two smooth paths you don't get a smooth path, just a piecewise smooth path.
But lazy paths are smooth paths with this nice property: if you compose two lazy paths, you get another lazy (and thus smooth) path.
Finally, you can take any smooth path and reparametrize it in a way that turns it into a lazy path. So if you're mainly interested in paths up to reparametrization (as one is, in gauge theory), you don't lose anything by restricting to lazy paths.
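Here's one way to sketch that last trick in Python, using the standard smooth step built from exp(-1/t) (the code is just an illustration, not from the paper):

```python
import math

def smooth_bump(t):
    """exp(-1/t) for t > 0, extended by 0: smooth, flat to all orders at 0."""
    return math.exp(-1.0 / t) if t > 0 else 0.0

def smooth_step(t):
    """Smooth [0,1] -> [0,1], flat to all orders at 0 and 1."""
    a, b = smooth_bump(t), smooth_bump(1.0 - t)
    return a / (a + b)

def lazy_step(t):
    """Smooth, equal to 0 on [0, 1/4] and 1 on [3/4, 1]."""
    return smooth_step(min(max(2.0 * (t - 0.25), 0.0), 1.0))

def lazify(path):
    """Reparametrize a smooth path so it sits still near both endpoints."""
    return lambda t: path(lazy_step(t))

semicircle = lambda t: (math.cos(math.pi * t), math.sin(math.pi * t))
lazy = lazify(semicircle)
assert lazy(0.1) == semicircle(0.0)   # still resting at the start
assert lazy(0.9) == semicircle(1.0)   # already resting at the end
```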
Okay cool, that makes sense. In all the examples I mentioned (moving chess pieces, playing accordion, drawing) I think reparameterization doesn't lose too much. So I'll try to think more about lazy paths.
I guess analytic curves are too rigid for there to be a lazy analog for them: because of analytic continuation, any interval will determine the rest of the curve. So I think you're stuck with piecewise analytic if you want to concatenate arbitrary analytic curves. Maybe that's why Trefethen defends them, because they want to use complex analysis. But maybe this is getting into "nice morphism, bad category" territory.
What you just posted was interesting Baez, you can leave it there!
I didn't like it; it was too vague. Anyway, you're right, there aren't any "analytic lazy paths" except constant paths (paths that don't move at all).
Maybe, as a way to engage with your deleted thought, I might try and come up with an application that uses lazy paths in a natural way, and an application that uses piecewise analytic paths in a natural way.
Maybe bonus points if it's the same physical system but in different contexts XD
Because I've noticed I keep wanting to identify these real world systems with one abstract presentation, and I think that's not quite right, or it at least isn't always right.
Okay, the main point of my deleted comment was that in applied mathematics or mathematical physics it's usually good to postpone technical decisions like what kind of paths you want to work with until you have a really good idea. The challenge of getting the idea to work will often tell you how to make the technical decisions.
For example, I introduced lazy paths because I was studying gauge theory and I wanted to show that a connection can be seen as a functor sending reparametrization equivalence classes of paths to elements of a group or morphisms in a groupoid. This is an old idea but previous attempts to do it were awkward in various ways. Lazy paths make things work nicely, I think. Whether or not I did the best possible job, I was trying to let the goal dictate my tools rather than the other way around.
I wanted to optimize the technical decisions because my real goal was to categorify the existing theory and describe '2-connections' using paths in the space of paths, and things can get really ugly if you try to build a fancy structure using awkward tools.
Thank you for this heuristic, I'm thinking about it and my goals.
This is an aside, but honestly the way you explained this is making me wonder how hard it would be to "reprove" Stokes' theorem by working backwards from the theorem statement.
If we can't reprove something we don't fully understand it. It's okay to use math we don't fully understand, because there's too much to fully understand all of it (especially all at once). But for Stokes' theorem, for example, I've had to prove it a few times, so I remember some of the key insights, so I wouldn't be scared if someone would lock me in a cell and only let me out after I gave them a proof of Stokes' theorem. Some other results would be more scary.
So, while thinking about "What is my goal?", I happened to read a bit of MacLane's "Mathematics Form and Function." His argument, that there are multiple formalisms corresponding to elementary ideas, has startled me.
To overstate my mistake for clarity: while trying to formalize drawing, I dreamed of finding a unique formalism for the act of drawing; but there isn't one. There isn't even a unique formalism for paths.
I'm making a painting program, and I loathe telling the artist how to draw. So Bezier splines and line stabilizers, which are required because you're interpolating sampled analog data, felt like an overreach. I wanted to give control of lines back to the artist.
But, unless I'm misunderstanding MacLane's argument, I stepped on a philosophical land mine. I can't give artists the idea of a line, I can only give them a formal definition. So because formal definitions aren't unique - I'm forced to make a choice for the artist.
Any of the above paths I mentioned, and many others, could be useful for art.
Good art relies on limitations. I understand this most for music since I spend most of my time on that. Digital audio workstations open a vast range of options for musicians, but musicians still like instruments, in part because they have fewer options and by practice you work those options into your muscle memory so you can eventually transmit what you're feeling without having to consciously think about all the details of sound production. There are infinitely many possible instruments, but some have emerged, by a kind of process of mutation and natural selection, as providing particularly nice sets of options and limitations. People who use DAWs a lot tend to make up their own systems of limitations.
The same happens in math: there's an infinite landscape of possible formalisms, each of which makes some things quick and easy to say, and makes other things long and hard to say, but some are more attractive than others.
Right, if I programmed a proof assistant for mathematics, I would have to settle on some collection of formal definitions that captured some aspects of what it's like to do mathematics, I couldn't capture all aspects, or all formalisms.
Therefore, there would still be a reason to do math without a proof assistant, if the mathematician needed maximum flexibility for whatever reason.
Some folks are upset with "Lean" becoming the default proof assistant. I think a big part of the frustration is because you can't change the definitions Lean, or any language, settles on.
What's throwing me for a loop is the possibility this is a "philosophical limitation" and not a technical one. Whether this annoys me or not, this isn't something I can fix.
It's like the xkcd joke about standards (https://xkcd.com/927/): you'll never have a winning standard, because no standard will be able to express all programs easily at the same time.
Welcome to reality. We were born with a nose: we didn't choose that, it's just standard equipment.
Even if we could have infinitely flexible tools that don't shape our actions in any particular way (which is impossible), our brains would not be infinitely flexible.
In math you can change your definitions or formalism, but if you don't settle down and work within some framework you'll never do anything interesting. So choose a good framework for what you want to do, do lots of cool stuff, and only change the framework when it becomes evident that a different framework would help you a lot.
For what it's worth, what I said above is my interpretation of the consequences implied by the following passage of MacLane's "Mathematics Form and Function":
This use of axioms... allows for the view that the formal systems studied in Mathematics come in a great variety and are intended primarily to help organize and understand selected aspects of the "real world" without being necessarily exact descriptions of a part of that unique world. For example, our presentation allows that the first step in the formalization of space could be the description of figures and chunks of space as models of metric space and not as subsets of Euclidean space. This is by no means the conventional view.
I was looking for descriptions of physical features of the world, in this case of a path, that behaved the way mathematicians historically treated Euclidean Geometry - as a definite "description of the world". In a way, a historical answer to my attempted formalization would be to settle on Euclidean curves. The Greeks were good at art!
MacLane convinced me this "conventional view" is false, and I'm trying to interpret his argument in ways that make this as obvious as possible.
Our position is rather that congruence and geometric ratios on the one hand and Dedekind cuts on the other are just two different careful formalizations of the same underlying idea of magnitude-and that this point exemplifies the unity of idea behind the inevitably varied form.
So you could do physics with a bunch of geometric axioms, or you could do it with differential equations of formulas. But they both depend on the same idea of "magnitude".
It's not that I want a maximally flexible brain; it's that my intuition of "line", "circle", "triangle", "path" isn't something I can put into a computer. I can only put a formalism of such into a computer. And I'll admit, I'm a bit embarrassed I didn't understand this point much sooner.
I'm reflecting on a lot of my thoughts on mathematics and computing, and noticing many errors that amount to confusing an idea with a formalization of the idea.
This seems related to the discussion a bit ago with Onstead about whether you could have a definite definition for "space" - you can't, because "space" is an "idea", according to my understanding of MacLane's argument.
All that said, I believe there are many definite things I'm trying to do, and I'll refocus my energy on that.
It's a bit embarrassing to me how easy it is to get confused when trying to apply mathematics to things in the world.
I'm reflecting on a lot of my thoughts on mathematics and computing, and noticing many errors that amount to confusing an idea with a formalization of the idea.
That's interesting.
The reason we can't capture the concept of 'space' in a definition feels a bit different than why we can't completely capture the concept of "triangle". The word 'space' is used in an extremely flexible way, and people are constantly making up entirely new kinds of space. If we limit ourselves to triangles on the Euclidean plane, the concept of 'triangle' is comparatively much more limited.
I can still list many different formalizations of 'the Euclidean plane': axioms for Euclidean geometry in terms of points and lines, of which there are at least 3 really famous ones, coordinate-based Cartesian geometry, and coordinate-free Cartesian geometry using algebra. And within each one of these, there are multiple formal concepts of what counts as a 'triangle': a triple of points, an ordered triple of points, etc. However, all of this seems quite tame and typical of how math works: we can easily prove theorems relating all these different concepts of triangle, and any good mathematician is expected to know some of this stuff, or at least not get into a tizzy about it.
On the other hand, you mentioned your 'intuition' of a triangle! THAT is a completely mysterious thing.
I find it hard to apply math for the purpose of thinking about things in the real world. I really want to do so (in order to think about medical imaging more deeply!), but it's much easier to just keep learning more math instead :sweat_smile:.
So I find the thoughts in this thread quite interesting - and I feel somewhat consoled: it's not just me who finds this process challenging!
I'll admit I have trouble seeing why it's hard to apply math to the real world! But I'm probably just used to doing it. I've been doing it for a long time, so when I think about the real world, it's almost automatic to filter out aspects that I don't know how to formalize - and there's still plenty left.
Years ago I was exhausted with trying to keep track of how various software libraries defined the "same idea" in dramatically different ways. To understand the choices I analogized this business logic situation in terms of various definitions of circles.
And after doing so I could easily list over five different definitions of circles with varying degrees of superficiality. And I got really exasperated, because those definitions implied a need to convert between all the representations. And then if you included small errors, you could get even more definitions, with even harder conversions to track. Or if the definition is in an external library, I might not be allowed to convert from it in a simple way.
And then if it became important to shift categories, say when you want to project a circle in perspective for computer graphics, you now have even more "circles" you have to track conversions between.
I had a moment where I just felt defeated keeping all these definitions under control. I remember thinking "It's just circles, why is this so hard".
Part of what I find compelling about category theory is that it claims to organize all the various mathematical subjects. People like to pick on the subject as just being about "stamp collecting", but even if that false assessment were true - I'd still be impressed! Organizing knowledge is about one of the hardest things I've ever tried to do (at least in a software context).
I agree that it's annoying that there are lots of software packages for dealing with circles, and it's difficult to convert between the different representations. This is why I don't program! It's much easier for me to control what's going on in my own brain, and convert between my own personal ways of thinking about circles. That's why I like math.
I'm very glad there are people who like to program. Luckily there are so many of them that the world doesn't need yet another one.
So when you mentioned different concepts of circle, I automatically thought of different mathematical definitions of circle, not different software approaches. I'm completely ignorant of those, and I don't feel sad about that.
I think Mathematicians are at risk of the same problems, but are somehow much better at keeping it at bay.
If all mathematicians were forced to do everything in Lean, then everything would grind to a halt, I think for reasons that involve some of the aspects of what I said above.
Well I don't know if things would grind to a halt, but I suspect the formalization burden could increase the number of things that have to be tracked. Including things mathematicians don't generally care about.
If all mathematicians were forced to do everything in Lean, I would become.... not a mathematician, but a guy who does math without using Lean!
I would talk to the official "mathematicians" and tell them cool ideas, which they could implement in Lean.
I already feel like that in our project for using category theory to design better epidemiological software. I'm the guy who doesn't know how to program... but I still have cool ideas, which the team members who do program find useful.
I'm worried artists are accidentally signing themselves up for this hypothetical world, by using digital painting programs. But they don't realize they're doing that.
Because they have to track, how does RGB work, what are vector graphics, how do digital pens and stabilizers work, blah blah blah.
But it's a hard argument to make because, like you said, they learn how brushes and paper works, which inherently have limits, how's that different than a computer?
It makes me feel like a Luddite, but that's not true, I love computers
Brushes and paper are definitely different than a computer, because they are more limited in what they can do, they take more advantage of your motor skills, and when you are using them you don't have a bunch of symbolic representations floating around in your mind - just like playing a flute is really different from creating flute sounds on a computer. The more traditional technologies are more about your body.
Ideally we will get good at blending the more traditional technologies with the computer technologies and create art that's better than either could do alone.
I actually think paper is more flexible in a way.
In the real world, I can do projective geometry by folding the paper in fun ways, and then use water colors to paint on the paper - the paper doesn't care. I could use French curves, to play up "bezier curves" or I could use a ruler and make "piecewise linear paths".
But when I formalize the paper on a computer, I have to make choices about what aspects of the paper I want to model.
You only get more flexibility in principle if you can program. But even then you only get back what you are able to formalize.
In my program I was trying to figure out how to let the artist define programs to define whatever curves they want; part of what I'm grokking right now is that if I'm not really careful about what I mean by that, it's a fool's errand
Is "crocking" like "grokking"?
Typo sorry
Artists can already program whatever curves they want.... if they are good enough at programming. But if you're doing the programming, you can try to figure out what curves artists would like to draw.
So then you make a tool that does something people like, not a tool that does "everything".
And now you know why I've been studying drawing for the past three years XD
But that's why I brought up Lean: I don't think it's possible to make a program that can do everything anybody might want to do; you can only make programs that help most people do most of what they want to do.
But in a way, I think analog paper, although it can't do everything, really can be used "for whatever you want".
That's why both mathematicians and artists use papers and pencils.
But then on a computer, most mathematicians spend their time in LaTeX, and most artists spend their time in Photoshop.
They're separate formalisms that capture two different aspects of how paper is used.
But that's why I brought up Lean: I don't think it's possible to make a program that can do everything anybody might want to do; you can only make programs that help most people do most of what they want to do.
I agree, except I wouldn't even try for "most people" unless I was trying to make a lot of money. I would try for the cool people - the people who do things I like.
It's like when I'm trying to explain math, the very first thing I do is figure out who my audience is. It's never "most people". It's always some more specific audience, and that lets me know what to do.
I guess, to bring this back to the ground then, I should pick something about drawing I find cool, and maybe try to imagine a program that would make that easier.
And then maybe use that as an opportunity to define a useful theorem that would make that program easier to write or understand.
I'm reading Durer's book on measuring and painting, it's very interesting. I'm sure there's something in it I could say something about
Alex Kreitzberg said:
But in a way, I think analog paper, although it can't do everything, really can be used "for whatever you want".
What if you want to make videos or 3D images?
You could make a zoetrope for simple animations; I make solids out of paper with polyhedral nets to practice drawing. Origami has been developed to an extremely high level, so you can even have programs generate folds that fit nearly arbitrary models, within a given error.
But if I were to solve those problems without computers, I wouldn't necessarily use paper. I might use clay for 3D models for example.
But my goal isn't to kill computers.
And, I would never suggest you can do literally anything with paper. But a real piece of paper can do more than any one formal model of paper.
The situation with computers is the reverse.
I fully believe in a real sense you can do literally anything with a computer. But on a computer:
"If you wish to make an apple pie from scratch, you must first invent the universe"
Anything you want to do needs to be formalized, and these formalizations will be limited to what they were built for.
Rebelle is a paint program that pretends to mix paint, but paint is more complicated than RGB, so it makes a bunch of subtle compromises and assumptions.
I don't bring up this example to suggest computers are worse than paper for modeling water colors on paper, that isn't fair. I'm trying to communicate the tension that occurs when one tries to pretend an array of RGB values is a model of paper. I'm sensitive to this tension.
I have all these math toys in the real world I enjoy playing with. They reduce the burden on my imagination.
What I'm learning is I can't formally model "the toy", I can only formally model aspects of the toys. I can't have a formal model of "paper" I have to formalize the specific quirk of paper I'm trying to prove things about.
Alex Kreitzberg said:
It's not that I want a maximally flexible brain; it's that my intuition of "line", "circle", "triangle", "path" isn't something I can put into a computer. I can only put a formalism of such into a computer. And I'll admit, I'm a bit embarrassed I didn't understand this point much sooner.
Plato would be proud.
I don't think of myself as a platonist XD.
I want to use my real world toys and widgets to do math. I hope to use toys as "formal mechanisms" to track rules for me.
I think the world is more basic than my intuitions, I think my intuitions are impressions of the world.
Here's a diagram of stick figures from Desargues' book on drawing:
I've already linked to Fialetti's "How to draw an eye":
And here are examples of "gesture drawings" by Glenn Vilpuu, a famous animator in art education circles:
I could call all of these "drawings", that are built up out of "lines" and nobody would be confused - and nobody should be.
But if I want to formalize these in order to make mathematical conclusions, or if I want to make drawing programs that are convenient to use for each of these artists, then each drawing suggests a completely different family of approaches.
Sometimes we're in the category of topological spaces, and sometimes we're in the category of vector spaces, even though all of these spaces can be "visualized". Being "visualizable" doesn't imply the formal descriptions, or tools, should all be the same.
Alex Kreitzberg said:
I guess, to bring this back to the ground then, I should pick something about drawing I find cool, and maybe try to imagine a program that would make that easier.
And then maybe use that as an opportunity to define a useful theorem that would make that program easier to write or understand.
Good, good! There's nothing like a specific goal to focus one's thoughts.
I had an idea I was going to fully explain, but I think I can give the motivation intuitively and then clarify with, I think, a rigorous statement about categories.
If I imagine a curve as being at least twice differentiable, then I can define its curvature. Wherever its curvature is zero, I could imagine "cutting" the curve at that point. I might do this to break up the curve into C curves that are easy to chain together.
More generally, if I'm working in a category of paths, then if I look under the hood, every point along the path could be the start and end of two smaller paths.
That is, if $f$ is a path between points $A$ and $B$, then depending on the category, we could say $f = f_2 \circ f_1$, where $f_1$ is the first half of the journey, and $f_2$ is the second half. But moreover, we can break up $f$ into shorter paths that are somehow simpler, or easier to draw.
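Here's a numerical sketch of the cutting idea (assuming numpy; rough finite differences, so treat it as a heuristic) - estimate the signed curvature of a sampled curve and cut where it changes sign:

```python
# A numerical sketch of "cut where the curvature changes sign" to split
# a sampled curve into inflection-free (C-shaped) pieces. Illustrative
# code; it assumes a reasonably smooth, uniform sampling.
import numpy as np

def signed_curvature(x, y):
    """kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2), via finite differences."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

def cut_indices(x, y):
    """Sample indices where curvature changes sign (candidate cut points)."""
    k = signed_curvature(x, y)
    return np.where(np.diff(np.sign(k)) != 0)[0]

t = np.linspace(0.0, 1.0, 1001)
x, y = t, np.sin(2 * np.pi * t)     # one full sine wave: an inflection inside
cuts = cut_indices(x, y)
# Expect a cut near t = 0.5, where the sine wave inflects.
assert any(abs(t[i] - 0.5) < 0.01 for i in cuts)
```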
I was going to carefully try to motivate how this might work by explaining aspects of Raph Levien's thesis. But really, for here I just want to mention that it's possible to have a spline where any curve segment is determined by data just at the ends of curve segments, and under very general conditions you can always find a section of a "french curve" (my metaphor) that coincides with the curve segment of the spline. Levien provides an algorithm that figures everything out.
So here's what I want to say: given any two points $A$ and $B$ you get a rich homset of paths. And furthermore, given any $f : A \to B$, it's possible to decompose $f$ into a list of "easy to draw" paths $e_1, \ldots, e_n$, that is we have $f = e_n \circ \cdots \circ e_1$, where the $e_i$ are "easy to draw" or computable.
I think it's notable here that if the distance between $A$ and $B$ is the same as the distance between $C$ and $D$, then somehow the homsets $\mathrm{Hom}(A, B)$ and $\mathrm{Hom}(C, D)$ should be "the same". But I don't think that's the most important abstract point.
It seems to me that I'm trying to articulate what it means for a category to have a basis. I'm thinking about categories where, given any $f$, there exist $e_1, \ldots, e_n$ where each $e_i$ has a special property within its homset (the property could be "is a C curve" or "is a segment of a french curve"), and collectively they satisfy $f = e_n \circ \cdots \circ e_1$.
Is there a categorical definition that tries to capture something like this?
If every arrow is uniquely expressible in terms of the basis (which is the best analogy to a vector space basis), then you have a free category generated by the graph of basis elements. If not, you can still consider the category presentation which quotients that free category by whatever relations you like.
Alex Kreitzberg said:
If I imagine a curve as being at least twice differentiable, then I can define its curvature.
Minor technical point: that's not true. There's an infinitely differentiable function that comes in from $-\infty$ along the $x$ axis, takes a sharp right turn at the origin, and then goes up to $+\infty$ along the $y$ axis. So its image is $\{(x, 0) : x \le 0\} \cup \{(0, y) : y \ge 0\}$.
Its curvature is zero everywhere except at the origin, and undefined at the origin!
I will leave it as a puzzle to fix your statement. :smiling_devil:
I once did a lot of work on smooth curves and realized how nasty they can be.
Kevin Carlson said:
If every arrow is uniquely expressible in terms of the basis (which is the best analogy to a vector space basis), then you have a free category generated by the graph of basis elements. If not, you can still consider the category presentation which quotients that free category by whatever relations you like.
@David Egolf mentioned this, it's tricky to get it working right so I was stressed I was missing something.
One big C curve can also be written as a composite of smaller C curves, which undermines uniqueness. I guess uniqueness is the crucial detail broken here.
So this has to be fixed with a quotient. I guess if we're freely gluing paths, we can quotient by whether they're equal as functions after they're composed?
I think that works.
John Baez said:
I will leave it as a puzzle to fix your statement. :smiling_devil:
The tangent vector needs to exist and be differentiable everywhere?
No, that does not fix your statement. Your statement already included those conditions.
I said my function was infinitely differentiable. This means the tangent vector of $f$ exists everywhere, it's differentiable everywhere, the derivative of this tangent vector is differentiable, and so on ad infinitum.
There is an infinitely differentiable function that comes in from $-\infty$ along the $x$ axis, takes a sharp right turn at the origin, and then goes up to $+\infty$ along the $y$ axis. So its image is $\{(x, 0) : x \le 0\} \cup \{(0, y) : y \ge 0\}$.
Before fixing your statement it may help to figure out an example of a function with these properties.
Of course this is only if you want to become an expert on what curves can do!
I'll think about that puzzle, but I guess I had one more question in view of Carlson's answer.
Can you think of a simple programming language as freely generated by its standard library, and evaluating expressions as quotienting?
So a group presentation of $S_3$ is like a really simple programming language for the symmetries of a triangle?
Do you mean the 3-sphere or the symmetric group on 3 letters, $S_3$?
(These are both super-standard abbreviations so I don't feel too bad doing what I just did, though I probably should.)
The symmetric group
I knew it, of course.
I think the answer to my question is yes?
[[presentation of a category by generators and relations]] has the following comment:
A useful way to think of the relations is as being 2-cells between parallel pairs of arrows, thus if $x, y$ are objects, and $f, g : x \to y$, we think of a relation $f = g$ as a 2-cell (initially from $f$ to $g$). In this way, one can encode [[rewriting]] systems of a certain kind in terms of the embryonic data for a 2-category. This is discussed more in the entry on [[computad]], which are also called [[polygraphs]].
Rewriting to reduce expressions definitely counts, and I think resolves what I was trying to clarify with my question.
I'm going to think about the curvature puzzle, and let the generality of freely generated categories settle in my brain.
There's a lot of fun to be had with presentations of groups and monoids! From week70, back in 1995:
Yves Lafont also gave a talk with strong connections to n-category theory. Recall that a monoid is a set with an associative product having a unit element. One way to describe a monoid is by giving a presentation with "generators", say
a, b, c, d,
and "relations", say
ab = a, da = ac.
We get a monoid out of this in an obvious sort of way, namely by taking all strings built from the generators a,b,c, and d, and then identifying two strings if you can get from one to the other by repeated use of the relations. In math jargon, we form the free monoid on the generators and then mod out by the relations.
Suppose our monoid is finitely presented, that is, there are finitely many generators and finitely many relations. How can we tell whether two elements of it are equal? For example, does
dacb = acc
in the above monoid? Well, if the two are equal, we will always eventually find that out by an exhaustive search, applying the relations mechanically in all possible ways. But if they are not, we may never find out! (For the above example, the answer appears at the end of this article in case anyone wants to puzzle over it. Personally, I can't stand this sort of puzzle.) In fact, there is no general algorithm for solving this "word problem for monoids", and in fact one can even write down a specific finitely presented monoid for which no algorithm works.
However, sometimes things are nice. Suppose you write the relations as "rewrite rules", that go only one way:
ab → a
da → ac
Then if you have an equation you are trying to check, you can try to repeatedly apply the rewrite rules to each side, reducing it to "normal form", and see if the normal forms are equal. This will only work, however, if some good things happen! First of all, your rewrite rules had better terminate: it had better be that you can only apply them finitely many times to a given string. This happens to be true for the above pair of rewrite rules, because both rules decrease the number of b's and c's. Second of all, your rewrite rules had better be confluent: it had better be that if I use the rules one way until I can't go any further, and you use them some other way, that we both wind up with the same thing! If both these hold, then we can reduce any string to a unique normal form by applying the rules until we can't do it any more.
Unfortunately, the rules above aren't confluent; if we start with the word dab , you can apply the rules like this
dab → acb
while I apply them like this
dab → da → ac
and we both terminate, but at different answers. We could try to cure this by adding a new rule to our list,
acb → ac.
This is certainly a valid rule, which cures the problem at hand... but if we foolishly keep adding new rules to our list this way we may only succeed in getting confluence and termination when we have an infinite list of rules:
ab → a
da → ac
acb → ac
accb → acc
acccb → accc
accccb → acccc...
and so on. I leave you to check that this is really terminating and confluent. Because it is, and because it's a very predictable list of rules, we can use it to write a computer program in this case to solve the word problem for the monoid at hand. But in fact, if we had been cleverer, we could have invented a finite list of rules that was terminating and confluent:
ab → a
ac → da
Lafont went on to describe some work by Squier:
6) Craig C. Squier, Word problems and a homological finiteness condition for monoids, Jour. Pure Appl. Algebra 49 (1987), 201-217.
Craig C. Squier, A finiteness condition for rewriting systems, revision by F. Otto and Y. Kobayashi, to appear in Theoretical Computer Science.
Craig C. Squier and F. Otto, The word problem for finitely presented monoids and finite canonical rewriting systems, in Rewriting Techniques and Applications, ed. J. P. Jouannuad, Lecture Notes in Computer Science 256, Springer, Berlin, 1987, 74-82.
which gave general conditions which must hold for there to be a finite terminating and confluent set of rewrite rules for a monoid. The nice thing is that this relies heavily on ideas from n-category theory. Note: we started with a monoid in which the relations are equations, but we then started thinking of the relations as rewrite rules or morphisms, so what we really have is a monoidal category. We then started worrying about "confluences", or equations between these morphisms. This is typical of "categorification", in which equations are replaced by morphisms, which we then want to satisfy new equations (see "week38").
For the experts, let me say exactly how it all goes. Given any monoid M, we can cook up a topological space called its "classifying space" KM, as follows. We can think of KM as a simplicial complex. We start by sticking in one 0-simplex, which we can visualize as a dot like this:
O
Then we stick in one 1-simplex for each element of the monoid, which we can visualize as an arrow going from the dot to itself.
Unrolled a bit, it looks like this:
O---a---O
Really we should draw an arrow going from left to right, but soon things will get too messy if I do that, so I won't. Then, whenever we have ab = c in the monoid, we stick in a 2-simplex, which we can visualize as a triangle like this:
O
/ \
a b
/ \
O---c---O
Then, whenever we have abc = d in the monoid, we stick in a 3-simplex, which we can visualize as a tetrahedron like this
O
/|\
/ | \
/ b \
a | bc
/ _O_ \
/ / \_ \
/ _ab c_ \
/_/ \_\
O--------d--------O
And so on... This is a wonderful space whose homology groups depend only on the monoid, so we can call them $H_i(M)$. If we have a presentation of M with only finitely many generators, we can build KM using 1-simplices only for those generators, and it follows that $H_1(M)$ is finitely generated. (More precisely, we can build a space with the same homotopy type as KM using only the generators in our presentation.) Similarly, if we have a presentation with only finitely many relations, we can build KM using only finitely many 2-simplices, so $H_2(M)$ is finitely generated. What Squier showed is that if we can find a finite list of rewrite rules for M which is terminating and confluent, then we can build KM using only finitely many 3-simplices, so $H_3(M)$ is finitely generated! What's nice about this is that homological algebra gives an easy way to compute $H_i(M)$ given a presentation of M, so in some cases we can prove that a monoid has no finite list of rewrite rules which is terminating and confluent, just by showing that $H_3(M)$ is too big. Examples of this, and many further details, appear in Lafont's work:
7) Yves Lafont and Alain Proute, Church-Rosser property and homology of monoids, Mathematical Structures in Computer Science 1 (1991), 297-326. Also available at http://iml.univ-mrs.fr/~lafont/publications.html
Yves Lafont, A new finiteness condition for monoids presented by complete rewriting systems (after Craig C. Squier), Journal of Pure and Applied Algebra 98 (1995), 229-244. Also available at http://iml.univ-mrs.fr/~lafont/publications.html
(Hopefully this isn't too off-topic!)
If we are trying to define a monoid by using a presentation, how can we tell if the relations we picked will actually produce a monoid? I'd worry about some relations being non-obviously inconsistent with each other, or somehow leading to a non-associative or non-unital multiplication.
I don't see that as off topic fwiw (I'll omit the machinations in my brain that are somehow trying to relate this to drawings)
Maybe I don't need to worry... if we had generators and relations and I guess we'd end up with a monoid where , but maybe that's okay?
It’s not possible to specify relations that produce a non-monoid, happily! It’s a cool question, though. The reason you can’t have inconsistent relations is essentially that every possible relation is true in the trivial monoid. So when you think of a presentation as saying “give me the minimum element of the poset of quotients of the free monoid on these generators satisfying these relations”, you know you’re in a nonempty poset!
So, yes, relations will sometimes combine to imply stronger ones, as you say.
David Egolf said:
If we are trying to define a monoid by using a presentation, how can we tell if the relations we picked will actually produce a monoid?
They always do. In addition to the relations we write down, we impose associativity and the existence of a unit. That's how "presenting a monoid by generators and relations" works.
I'd worry about some relations being non-obviously inconsistent with each other, or somehow leading to a non-associative or non-unital multiplication.
That's not possible since we impose those laws by fiat. The worst that can happen is that our monoid reduces to the terminal monoid, with just one element. That can happen when you have "too many relations".
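As a tiny concrete instance of "too many relations" (my example, not from the thread): in the presentation $\langle a \mid a^2 = a,\; a^2 = 1 \rangle$ the relations force $a = a^2 = 1$, so the presented monoid collapses to the terminal monoid with just one element.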
And since Kevin beat me to saying all this stuff, I'll say something more: this general philosophy works not just for monoids, but for any sort of "purely algebraic gadget", meaning one with a bunch of operations obeying equational laws. This is what the formalism of [[Lawvere theories]] handles. Examples include monoids, groups, rings, rigs, etc.
Any Lawvere theory gives a category of algebraic gadgets (called "models" or "algebras" of the theory), and this category always has a terminal object, and that's what your presentation gives if you throw in "too many relations".
If you throw in "not enough generators" (meaning none at all), you get the initial object.
So the initial object is like a white page on which you have not begun to write, while the terminal object is like a black page on which you have scribbled too much.
I'm still absorbing a lot of the above information, but I wanted to share a rewriting system I pulled out of a turtle program generator; hopefully it's self-explanatory:
(I started redrawing over the image after two iterations to save time)
third iteration
In the above, straight lines were always rewritten over; if I wanted more control I might give the lines different colors, or give them different shapes, and specify the rewrite rule per shape & color.
This specific system feels like the opposite of the monoids: rather than rewriting into hopefully converging smaller expressions, this just makes more complicated images. Completely evaluating it amounts to defining the limit of the rigmarole.
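If it helps to see the general shape, here's a minimal sketch in Python (with a made-up rule standing in for the one in my image) of this kind of parallel rewriting, where every straight line F is rewritten at once and the word only grows:

# An L-system-style rewriter: unlike the monoid rules above, every symbol
# is rewritten in parallel on each pass, so iterating only makes the
# description of the drawing more complicated.
RULE = {"F": "F+F-F-F+F"}  # hypothetical rule; F = straight line, +/- = turns

def iterate(word, steps):
    for _ in range(steps):
        word = "".join(RULE.get(ch, ch) for ch in word)
    return word

print(iterate("F", 3))  # a "third iteration", ready to hand to a turtle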
I'm still processing the above examples and theory, but I had a thought
I feel tempted to outline how my brain is bouncing around from the above answers and examples to the following question, but I'll restrain myself and just ask
Let $p$ be a path from $x$ to $y$; then a translation $p + v$ is an arrow whose objects were shifted. So we have objects dependent on values. Similarly I could imagine surfaces whose objects are paths, and simplifying those could be carried out by picking out paths that are swept over by the surface.
I hope I can make $+ v$ into a functor, and then prove $+$ is a natural transformation over $E$; I'm going to think about whether that makes sense.
In the meantime, do folks have references that talk about dependent objects like the above? Or dependent objects in general? But not dependent types specifically.
I guess $E$ here is supposed to be a category of paths? I can't remember if you had settled on a specific such category.
Oh sorry, that's the Euclidean plane
The $+$ on the points viewed as objects is different but analogous to the $+$ on the path.
Pieces are coming together.
I'm going to think more about finitely presented groups. I've been clumsy in this discussion, but I think I can bring all my thoughts together.
If you don't mind elaborating a bit - what are the objects and morphisms of $E$, and how does composition work? (Also, what is $+ v$?)
Oh gosh I guess I confused myself, maybe it's fine that $E$ instead be, say, the Moore path category over the Euclidean plane, and then for a given $v \in V$, where $V$ is the vector space associated with the plane, then $+ v$, still using the definition on values above, is an endofunctor.
Yeah that makes sense, because translating the composition of paths is the same as translating each path first and then composing
Ah, thanks for clarifying! So intuitively the vector space $V$ is "acting" on our category of points and paths by translation.
Yes that's right
I guess a next question could be as you sketched above: can we "bundle together" all these translation endofunctors to get a functor $V \to \mathrm{End}(E)$?
(where I guess we'd make the vector space a category by taking its underlying additive group, and then viewing that as a category with a single object?)
Maybe I got ahead of myself there then, and I don't need "dependent objects" I guess
Viewing $V$ as a one-object category is fine
hah, you had the same thought XD, thanks for untangling my ideas
Yeah
It works I think
Assuming $V$ is a single-object category with addition as the composition operator, per your suggestion, $E \times V$ makes a category with a somewhat odd composition rule
But I think that's what the math "wants" to do
Gosh I confused myself again I think
inhale
What you're saying matches with what I was thinking! I'm a bit worried that we might not get a functor though.
If we had a translation functor $T : E \times V \to E$, we would need $T((q, w) \circ (p, v)) = T(q, w) \circ T(p, v)$.
And intuitively we might not even be able to compose $T(p, v)$ and $T(q, w)$ if $v$ and $w$ are different. (Intuitively, the two paths we started with are getting translated in different ways!)
The would-be functor $T$ acts on objects and morphisms.
The objects of $E \times V$ are points of the Euclidean space crossed with the single object $\ast$; one example is $((0, 0), \ast)$.
The arrows are pairs $(p, v)$, the first a path and the second a vector
So we want to define $T(p, v) = p + v$
But this doesn't work, because the point is that the ends of the path are supposed to change based on $v$.
I want to somehow "lift" the action on the underlying space of points, to an action on the paths.
That's why I wrote the expression $p + v$
So maybe I need a category whose objects are vectors in $V$? But then what should the arrows be, and how do I relate that back?
I don't know if you saw, but I noted that with paths specifically, they are made up out of the objects that glue them together
So I could break up a path into two smaller paths at a point at some time $s$ inside its domain.
The path $p : [0, \ell] \to \mathbb{R}^2$ can be broken up so: $p = p_2 p_1$,
where $p_1(t) = p(t)$ for $t \in [0, s]$ and $p_2(t) = p(s + t)$ for $t \in [0, \ell - s]$,
I think "lifting the action" might be a similar trick.
The idea of making a category where the objects are vectors in $V$ sounds interesting. If we call such a category $C$, then a functor $C \to \mathrm{End}(E)$ (where $\mathrm{End}(E)$ is the category of endofunctors of $E$) would pick out one endofunctor of $E$ for each object of $C$ - and hence for each vector of $V$.
One way to make such a category comes to mind, although I don't know if it will do what we want. We could form a category where the objects are vectors in $V$, and a morphism from $v$ to $w$ is a vector $u$ with $v + u = w$.
That feels right to me, or at least useful, I'm going to try and see what that would mean.
Upon further reflection, I don't love this category $C$. Every vector ends up being isomorphic to every other in $C$!
Before we decide why it's bad, let me finish this thought; I think it works (or at least works in a sense)
I have thoughts that almost work, but I'm confusing myself; I'm going to sleep on this and come back to it later. I'm convinced the idea is close to something that works
(but if you want to share other thoughts in the meantime go ahead)
Rather than work with the endofunctor category, what if we look at the monoid of endofunctors from $E$ to $E$ instead?
Then our base category can just be the usual vector space $V$ considered as a category with one object.
So call our translation action $T : V \to \mathrm{End}(E)$
And define it as $T_v(p)(t) = p(t) + v$ for $0 \le t \le \ell$,
where $\ell$ is the length of the path. We already established this was a functor previously.
Now $T_{v+w}(p)(t) = p(t) + (v + w) = (p(t) + w) + v = T_v(T_w(p))(t)$, so $T_{v+w} = T_v \circ T_w$.
Does that all look correct to you?
This is literally just a monoid action I think.
That sounds right to me. I'm not following every detail you wrote, but I'm intuitively thinking of $T$ as sending a vector $v$ to the endofunctor of $E$ which translates all the paths and points of $E$ by adding $v$.
It makes sense for $T$ to respect composition because we should get the same results when doing these two things: translating everything by $w$ and then translating the result by $v$, or translating everything by $v + w$ all at once.
Also, $T_0$ will just be the identity endofunctor of $E$, which leaves every point and path untranslated, so $T$ is preserving identity elements.
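If a sanity check is useful, here's a small numerical sketch in Python (my own modelling choices: a Moore path as a length together with a map $[0, \ell] \to \mathbb{R}^2$) that spot-checks $T_{v+w} = T_v \circ T_w$, $T_0 = \mathrm{id}$, and the fact that translation commutes with concatenation:

# Model a Moore path in the plane as (length, map [0, length] -> R^2),
# with the translation action T_v shifting every value pointwise.
from typing import Callable, NamedTuple, Tuple

Point = Tuple[float, float]

class MoorePath(NamedTuple):
    length: float
    at: Callable[[float], Point]  # defined for 0 <= t <= length

def translate(v: Point, p: MoorePath) -> MoorePath:
    """T_v on arrows: shift every point of the path by v."""
    return MoorePath(p.length, lambda t: (p.at(t)[0] + v[0], p.at(t)[1] + v[1]))

def concat(p: MoorePath, q: MoorePath) -> MoorePath:
    """Moore composition: run p, then q (p's endpoint must be q's start)."""
    return MoorePath(p.length + q.length,
                     lambda t: p.at(t) if t <= p.length else q.at(t - p.length))

def segment(a: Point, b: Point) -> MoorePath:
    """A length-1 straight-line path from a to b."""
    return MoorePath(1.0, lambda t: (a[0] + t*(b[0]-a[0]), a[1] + t*(b[1]-a[1])))

p = segment((0.0, 0.0), (1.0, 0.0))
q = segment((1.0, 0.0), (1.0, 1.0))
v, w = (2.0, 3.0), (-1.0, 5.0)
vw = (v[0] + w[0], v[1] + w[1])

# T_{v+w} = T_v . T_w, checked at a sample time:
assert translate(vw, p).at(0.5) == translate(v, translate(w, p)).at(0.5)
# T_0 is the identity:
assert translate((0.0, 0.0), p).at(0.5) == p.at(0.5)
# Translation commutes with concatenation:
assert translate(v, concat(p, q)).at(1.5) == concat(translate(v, p), translate(v, q)).at(1.5)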
Yeah I think my notation can be made slicker: if instead of writing $T_v(p)$, I write $p + v$, and similarly for the objects/points
Then the last line of my proof just looks like normal algebra
$(p + w) + v = p + (w + v)$ and $p + 0 = p$
Which is what I want; it's just easy to confuse oneself
Sometimes I write an action like $T_v(x)$ as $v \cdot x$. So then the equations you just wrote would look like this: $v \cdot (w \cdot x) = (v + w) \cdot x$, and $0 \cdot x = x$. In some cases this makes it easier for me to not confuse myself. :sweat_smile:
Anyways, it sounds like you got the functor/action you were looking for! That's great!
Yeah I think so. Now I'm going to go back to thinking about finitely presented groups, and hopefully relate that to a finite presentation of paths between two points $x$ and $y$, which I can translate anywhere with the action.
(Maybe we can add rotation/scaling etc.)
I wanted to add just a slight reflection on what you did to get things working.
The category of endofunctors of $E$, namely $\mathrm{End}(E)$, is not just a category - it's a monoidal category! We have a way to compose not just the morphisms, but also the objects (because we can compose endofunctors of $E$). So then you "went down a level" by taking the underlying monoid on the objects of this monoidal category.
I think the reason why I ended up doing that is that I think there are no non-trivial natural transformations between paths in Moore path categories (and because I'm thinking of this stuff so strictly, there won't be any in any of the other categories I might make in this way).
If I did an approach involving lazy paths, maybe it would suggest something different? But I'm not sure.
What does it mean to talk about a natural transformation between morphisms in a category? (Maybe I'm misreading you.)
What I meant by that is actually false. A better way to say what I meant is that there are no non-trivial commuting squares in Moore path categories, and therefore no non-trivial natural transformations between functors into Moore path categories.
But that's not right either: you could have two paths overlapping, and then extend the end of one, and the beginning of the other, to get a commuting square.
I'd still be surprised if two different translation functors of the category $E$ could be related by a natural transformation.
I had a different idea to recontextualize the functors here though, again working in a Moore path category to keep us grounded, but I think this is very general if you're careful:
Because arrows are implemented as continuous maps like $p_1 : [0, \ell_1] \to \mathbb{R}^2$ and $p_2 : [0, \ell_2] \to \mathbb{R}^2$, if $p_1(\ell_1) = p_2(0)$ then any continuous endomap $f$ of $\mathbb{R}^2$ has the property $f_\ast(p_2 p_1) = f_\ast(p_2)\, f_\ast(p_1)$ and $f_\ast(\mathrm{id}_x) = \mathrm{id}_{f(x)}$
Where adjacency is path concatenation and $f_\ast$ is post composition as functions by $f$.
So we have lots of functors! Including those corresponding to all the isometries and affine transformations of $\mathbb{R}^2$.
Wow, that's pretty cool! I didn't carefully check that $f_\ast$ really obeys the equations you list, but it sounds intuitively right.
I think the main technical points are $f \circ (p_2 p_1) = (f \circ p_2)(f \circ p_1)$ and $f \circ \mathrm{id}_x = \mathrm{id}_{f(x)}$
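Reusing the little MoorePath sketch from above (so p, q, Point, MoorePath, and concat are the same hypothetical definitions), the same kind of spot-check works for any continuous $f$, e.g. a rotation:

def postcompose(f, p: MoorePath) -> MoorePath:
    """The functor f_*: post-compose a Moore path with f, keeping its length."""
    return MoorePath(p.length, lambda t: f(p.at(t)))

def rot90(pt: Point) -> Point:
    """Rotate a point 90 degrees counterclockwise about the origin."""
    return (-pt[1], pt[0])

# f_*(q p) = f_*(q) f_*(p), checked at a sample time:
assert postcompose(rot90, concat(p, q)).at(1.5) == \
       concat(postcompose(rot90, p), postcompose(rot90, q)).at(1.5)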
With an eye towards possibly generalizing this situation, I think I was able to show that $f_\ast$ preserves composition as desired using the universal property of pushouts.
Here is the diagram I've been working from:
diagram
The top square/diamond is a pushout diagram. It says that we can glue together $[0, \ell_1]$ and $[0, \ell_2]$. Intuitively we glue together the point $\ell_1$ of the first interval with the point $0$ of the second interval.
(See here for details on this construction, which I think is called a "wedge sum".)
Next, we have a cocone given by $p_1$ and $p_2$. This really forms a cocone because $p_1(\ell_1)$ and $p_2(0)$ are equal.
Using the universal property of pushouts, we then get an induced morphism $[0, \ell_1 + \ell_2] \to \mathbb{R}^2$. I think this is our concatenated path $p_2 p_1$.
We then set up another cocone, using $f \circ p_1$ and $f \circ p_2$. This is a cocone because $f(p_1(\ell_1)) = f(p_2(0))$.
Using the universal property of pushouts, we know there is a unique continuous map $[0, \ell_1 + \ell_2] \to \mathbb{R}^2$ that gives a morphism of cocones. But we can see that $f \circ (p_2 p_1)$ will work for this purpose.
Thus, $f_\ast(p_2 p_1) = f_\ast(p_2)\, f_\ast(p_1)$.
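In other words (my paraphrase of the argument): the pushout gives $[0, \ell_1] \sqcup_{\{\ast\}} [0, \ell_2] \cong [0, \ell_1 + \ell_2]$, and any two maps out of the intervals that agree on the glued point factor uniquely through $[0, \ell_1 + \ell_2]$. Both $f \circ (p_2 p_1)$ and $(f \circ p_2)(f \circ p_1)$ restrict to $f \circ p_1$ and $f \circ p_2$ on the two intervals, so by uniqueness they coincide.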
Yeah, your diagram makes sense to me; I'm always impressed with how much information those diagrams encode.
I need to practice them more