You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Tom Leinster has a blog article in which he gives a category-theoretic description of the space and integration of functions:
This is really interesting! Do you know of any other places where analysis is given a structural bent?
I have always found it hard to digest analysis, though I have tried many times. Leinster's paper is exactly the kind of stuff I was looking for.
There's Scholze and Clausen's condensed approach (https://www.math.uni-bonn.de/people/scholze/Analytic.pdf) to analysis (among other topics), which there was just a series of lectures about: the last 8 sessions were on analysis (using background material from the first several sessions).
Chetan Vuppulury said:
This is really interesting! Do you know of any other places where analysis is given a structural bent?
None come to mind.
There's the article Tom cites in the post (universality of $L^1$), there's Freyd's article "Algebraic real analysis", which proves universality of the closed interval $[0,1]$, and there's this awesome talk by @fosco in which he compares coend calculus and real analysis
Ah! Thanks for the mention :grinning:
Freyd's paper is way more inspiring than my talk
Forgot to mention: Kock, Commutative monads as a theory of distributions. Really wonderful paper. This is the bridge bringing this discussion into the territory of categorical probability. The next arch of the bridge is the Giry monad and its lookalikes. There's mist on this bridge, but on the other side you can barely distinguish Markov categories and differential categories
the idea of Yoneda as Dirac delta in @fosco's talk is a really cool analogy!
I think of the category of presheaves on C as a "linearization" of C, analogous to the free vector space on a set S. Colimits act like linear combinations. Every presheaf is canonically a colimit of representables just as every element of the free vector space on S is canonically a linear combination of elements of S. The category of presheaves on C is the free colimit completion of C.
In this analogy to linear algebra, the analogue of the ground field is the category Set. This is a 2-rig, which is a kind of categorified ring (or really rig).
Elements of the free vector space on S can also be seen as measures on S (with a couple different choices of sigma-algebra allowed). Then the elements of S give Dirac deltas.
I guess this is what @fosco must have been talking about, in part.
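The "free vector space on a set" side of this analogy is easy to make concrete. Here is a minimal sketch (all names are illustrative, not from any library): finitely supported functions stand in for vectors, and every vector is canonically a linear combination of Dirac deltas, just as every presheaf is canonically a colimit of representables.

```python
# Free vector space on a set, modeled as finitely supported functions
# (dicts mapping elements to coefficients). Names here are illustrative.

def delta(s):
    """Dirac delta at s: the basis vector supported on s alone."""
    return {s: 1.0}

def add(v, w):
    """Pointwise sum of two finitely supported functions."""
    out = dict(v)
    for k, c in w.items():
        out[k] = out.get(k, 0.0) + c
    return out

def scale(a, v):
    """Scalar multiple of a finitely supported function."""
    return {k: a * c for k, c in v.items()}

# Every vector is canonically a linear combination of deltas,
# mirroring "every presheaf is canonically a colimit of representables":
v = {"x": 2.0, "y": -1.0}
recombined = {}
for s, c in v.items():
    recombined = add(recombined, scale(c, delta(s)))
assert recombined == v
```

The same dicts can be read as (signed) measures on the set, which is the reading under which the basis elements literally become Dirac deltas.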
John Baez said:
Chetan Vuppulury said:
This is really interesting! Do you know of any other places where analysis is given a structural bent?
None come to mind.
Martin Escardo and I had a paper called "Calculus in coinductive form" based on the idea that analysis is (and has tacitly been through its history) the practice of coinduction, in the same sense in which arithmetic has been the practice of induction. We reconstructed the Laplace transform as a simple anamorphism. Fourier follows the same pattern, but a much bigger field of research opens up, so that has not been properly worked out. And then Bertfried Fauser and I worked out basic vector analysis also in coalgebraic form: tangent and cotangent functors as a monad and comonad... It could be the other way around. Many people in that community apparently use functors, but with different arrow parts, so it took some work to convince them about the cotangent bundle and the monadicity, because they had convinced themselves that something was not functorial.
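A toy rendering of the coinductive view (streams standing in for the paper's coalgebraic machinery; the encoding and all names below are mine, not from the paper): represent an analytic function by the stream of its Taylor coefficients at 0, so that "tail" becomes differentiation.

```python
from itertools import islice

# An analytic function as the infinite stream of Taylor coefficients
# a_n = f^(n)(0)/n!. Differentiation shifts and rescales the stream.
# Illustrative sketch only.

def taylor_exp():
    """Taylor coefficients of exp: a_n = 1/n!."""
    n, fact = 0, 1
    while True:
        yield 1.0 / fact
        n += 1
        fact *= n

def derivative(coeffs):
    """d/dx of sum a_n x^n  =  sum (n+1) a_{n+1} x^n."""
    def gen():
        for n, a in enumerate(coeffs):
            if n >= 1:
                yield n * a
    return gen()

# exp is a fixed point of differentiation, visible coefficient by coefficient:
lhs = list(islice(derivative(taylor_exp()), 5))
rhs = list(islice(taylor_exp(), 5))
assert lhs == rhs
```

The point of the coinductive phrasing is that such identities are stated on the streams themselves, with no limits taken anywhere.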
But IMHO, none of this should be viewed as a structural bent of anything. The categorical message is i think that the structures are already present in all mathematical and logical practices, just like gravity is in space, whether we talk about it like this or like that.
the paper on cotangent bundles is called "Smooth coalgebra: testing vector analysis". it is in MSCS but also on arxiv. the "calculus in coinductive form" was in some LICS proceedings, so it may be behind IEEE's wonderful paywall. let me know if anyone wants to read it and cannot get through. (i never wanted to rewrite the "journal versions" and with the publishers like IEEE it comes back to bite you)
There appears to be an open access copy in the interweb: https://www.semanticscholar.org/paper/Calculus-in-coinductive-form-Pavlovic-Escard%C3%B3/ba71257b1a145a0d10b3ae274abd32127b3b0c77
John Baez said:
I think of the category of presheaves on C as a "linearization" of C, analogous to the free vector space on a set S. Colimits act like linear combinations. Every presheaf is canonically a colimit of representables just as every element of the free vector space on S is canonically a linear combination of elements of S. The category of presheaves on C is the free colimit completion of C.
In this analogy to linear algebra, the analogue of the ground field is the category Set. This is a 2-rig, which is a kind of categorified ring (or really rig).
Elements of the free vector space on S can also be seen as measures on S (with a couple different choices of sigma-algebra allowed). Then the elements of S give Dirac deltas.
I guess this is what fosco must have been talking about, in part.
What I like the most about this topic is that everyone has their own way of fleshing out the analogy into something concrete, and no one is ever wrong! Surely @John Baez's intuition is motivated by the fact that both the "vector space on a set" and the presheaf construction build free objects of some kind; my intuition during that talk was way more pedestrian: I was motivated by the fact that you can write the integral $Fx \cong \int^{c \in \mathcal{C}} \mathcal{C}(c, x) \times Fc$
as if it was the integral $f(x) = \int \delta(c - x)\, f(c)\, \mathrm{d}c$
...and funnily enough, coend formulas or Kan extensions can be seen from the same angle (see also the section on Fourier transforms over *-autonomous categories).
Re distributions, however, there's a much longer story: see https://www.sciencedirect.com/science/article/pii/002240499400157X and https://www.springer.com/gp/book/9783540363590
Indeed the Yoneda integral can also be understood as expressing the linear combination defining $F$ as an element of the free vector space over $\mathcal{C}$
I've often likened the Yoneda reduction to eta reduction (in fact, AFAIK the term "Yoneda reduction" is due to me, guided by this rhyme).
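For readers new to the rules being discussed, the two forms of the Yoneda reduction (often called the "ninja Yoneda lemma") can be written as a coend and an end identity; the delta-function reading is only the analogy from earlier in the thread, not a literal integral:

```latex
% Coend ("sum") form, read as  \int \delta(c - x)\, f(c)\, \mathrm{d}c = f(x):
F x \;\cong\; \int^{c \in \mathcal{C}} \mathcal{C}(c, x) \times F c
% End ("product") form, the dual pairing version:
F x \;\cong\; \int_{c \in \mathcal{C}} \mathrm{Set}\big(\mathcal{C}(x, c),\, F c\big)
```

Eta reduction erases an abstraction applied to a fresh variable; the Yoneda reduction likewise erases a hom-functor integrated against, which is the "rhyme" mentioned above.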
The Fourier transform defines a map $L^1(\mathbb{R}) \to C_0(\mathbb{R})$, but as far as I know, there is no good description of the image of this map. So you don't get a "Fourier duality" between two well-understood spaces here.
Does anyone know a (categorical) workaround for this? Maybe the Fourier transform of a continuous function should be a measure and vice versa, but then it is not clear what kind of measures appear as Fourier transforms of $L^1$-functions.
Jens Hemelaer said:
The Fourier transform defines a map $L^1(\mathbb{R}) \to C_0(\mathbb{R})$, but as far as I know, there is no good description of the image of this map. So you don't get a "Fourier duality" between two well-understood spaces here. Does anyone know a (categorical) workaround for this? Maybe the Fourier transform of a continuous function should be a measure and vice versa, but then it is not clear what kind of measures appear as Fourier transforms of $L^1$-functions.
The Fourier transform of the $L^1$-functions packs two different concepts together. One is the change to a complementary (unbiased, maximally mixed) basis. That is the concept of the Fourier transform, as applied across math, from abelian groups and boolean functions to Banach spaces of measurable functions. The other concept is the idea of adding structure to infinite-dimensional vector spaces to choose bases in such a way that some sort of duality is recovered (which is so useful in finite-dimensional spaces). These two concepts pose different problems. They are not independent (as eg the uncertainty principle shows), but studying them together does not make either of them easier. (The fact that they usually teach us the Fourier transform in the calculus classes is an accident of history.)
Even if you look at the Fourier transform on $L^2$, you will shed a part of the second problem, of the bases for the spaces. The reason why the transform looks the way it does there is that the powers of $e^{2i\pi}$ form the complementary basis. The Taylor expansions of the powers simplify things in the spaces of sequences. But really any area where the bases alone are less of a problem than in the $L^p$-spaces will clarify what the Fourier transform alone is doing, and how it is an isomorphism.
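The finite case makes the "complementary basis" point vivid. Here is a small sketch (plain stdlib, illustrative names): on $\mathbb{Z}/n$ the characters $\chi_k(m) = e^{2\pi i mk/n}$ form the complementary basis, and the transform in that basis is, up to scaling, an isomorphism with no image-characterization problem at all.

```python
import cmath

# Discrete Fourier transform on Z/n: expansion in the complementary
# basis of characters chi_k(m) = e^{2 pi i m k / n}. Illustrative sketch.

def dft(v):
    n = len(v)
    return [sum(v[m] * cmath.exp(-2j * cmath.pi * m * k / n) for m in range(n))
            for k in range(n)]

def idft(w):
    n = len(w)
    return [sum(w[k] * cmath.exp(2j * cmath.pi * m * k / n) for k in range(n)) / n
            for m in range(n)]

v = [1.0, 2.0, 0.0, -1.0]
w = dft(v)

# Plancherel: the transform preserves the (suitably normalized) norm...
assert abs(sum(abs(z) ** 2 for z in w) / len(v)
           - sum(abs(x) ** 2 for x in v)) < 1e-9
# ...and inverting recovers v exactly: in finite dimensions the two
# problems dusko distinguishes do not interfere.
assert all(abs(a - b) < 1e-9 for a, b in zip(idft(w), v))
```

Everything that makes the $L^1$ image question hard lives in the second, infinite-dimensional problem of choosing bases, which this finite model deliberately avoids.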
George Mackey had a very nice paper in the Bulletin of the AMS at some point about the conceptual content of the Fourier transforms crossing paths with the problem of duality of the infinite-dimensional vector spaces (on which he worked since his thesis in the 1940s, which funnily enough got systematized in Grothendieck's thesis in the 1950s)...
Incidentally, the categorical approach to things is i think not intended for workarounds, but to understand why things are the way they are. Workarounds are of course the heart of solving concrete problems, where we want to avoid any obstacles that can be avoided. But when a big structure behaves in an unexpected way, then there is little reason to expect that there is a passable workaround around the structure.
PPS i suppose you accidentally wrote $\mathbb{R}$ in
Jens Hemelaer said:
The Fourier transform defines a map
the duality does require the complex-valued functions. use the riesz-fischer to characterize the image.
I understand the $\mathbb{R}$ here to refer to the domain of complex-valued functions.
Yes, that's standard in analysis: we've got $L^1(\mathbb{R})$, $L^2(\mathbb{R})$, and $L^p(X)$ for any measure space $X$.
Todd Trimble said:
I've often likened the Yoneda reduction to the eta reduction (in fact, AFAIK the term "Yoneda reduction" is due to me, guided by this rhyme).
personally, i look at the coend calculus / yoneda reduction rules and i see a process calculus of some kind
or even better / intersecting the two: perhaps something linear / proof net / sequent calculus -y
my knowledge of enriched category theory is pretty limited, but—for a more general non-cartesian enriching category, say a benabou cosmos, isn't there some kind of linearity restriction on variable occurrences in the body of coend? like you're supposed to use a variable exactly once in each variance?
...or am i completely misremembering?
Wouldn't that just be part of being functorial?
ummm, quite possibly :sweat_smile:
"pretty limited" might be an understatement
...anyway, though, i'm thinking about the manifestation of functions, their application to arguments, and the reduction of all of that stuff, in a linear-logic-y, sequent-calculus-y, proof-net-y setting; i've written up some stuff about this before (attached: image.png)
I don't know if eta reduction is the ideal analogy. It might be closer with something from nominal logic/type theory, where left and right adjoints to weakening are the same (if I'm not mistaken). So you have two equivalent representations of abstracting over a free variable.
One where you pair with the abstracted variable, and one where you have a function that accepts the abstracted variable.
Not that analogies need to be ideal.
hmm, isn't there some similar kind of "ambidexterity" in like, linear cats or sth
where like certain left n right things coincide
If you have a homomorphism $f \colon G \to H$ between finite groups, you get a map $f^* \colon \mathrm{Rep}(H) \to \mathrm{Rep}(G)$
between their categories of finite-dimensional complex representations, and this has an ambidextrous adjoint: a functor that's both left and right adjoint.
This is called Frobenius reciprocity: $f^*$ is called restriction and its adjoint is called induction.
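Spelled out for the classical special case of a subgroup inclusion $G \le H$ (using the traditional $\mathrm{Ind}$/$\mathrm{Res}$ notation), ambidexterity says induction is simultaneously left and right adjoint to restriction:

```latex
% Induction as left adjoint to restriction:
\mathrm{Hom}_{H}\!\big(\mathrm{Ind}_G^H V,\; W\big)
  \;\cong\; \mathrm{Hom}_{G}\!\big(V,\; \mathrm{Res}^H_G W\big)
% ...and, by ambidexterity, also as right adjoint:
\mathrm{Hom}_{H}\!\big(W,\; \mathrm{Ind}_G^H V\big)
  \;\cong\; \mathrm{Hom}_{G}\!\big(\mathrm{Res}^H_G W,\; V\big)
```

Both isomorphisms are natural in $V \in \mathrm{Rep}(G)$ and $W \in \mathrm{Rep}(H)$; the first is the textbook statement of Frobenius reciprocity, the second is the extra adjunction that makes the adjoint ambidextrous.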
Digging a bit deeper into this, it turns out that all that matters is that $\mathrm{Rep}(G)$ and $\mathrm{Rep}(H)$ are 2-vector spaces: categories equivalent to $\mathrm{Vect}^n$ for some finite $n$.
Any cocontinuous $\mathrm{Vect}$-enriched functor between 2-vector spaces has an ambidextrous adjoint. (I'm probably not stating the strongest result like this.)
But these categories are also dagger-categories (or can be made into them), and any adjoint functor between dagger-categories is an ambidextrous adjoint.
So I think it makes sense to say - at least for this sort of example - that it's the dagger-ness that's the key to ambidexterity, not anything about linearity.
dusko said:
Incidentally, the categorical approach to things is i think not intended for workarounds, but to understand why things are the way they are. Workarounds are of course the heart of solving concrete problems, where we want to avoid any obstackes that can be avoided. But when a big structure behaves in an unexpected way, then there is little reason to expect that there is a passable workaround around the structure.
The workaround typically used by analysts is to look at the Schwartz space, I think. But the Schwartz space depends on the smooth structure on $\mathbb{R}^n$ instead of only the measure space structure. So I thought that maybe category theorists had a different way of looking at it. In Leinster's paper the restriction is made to finite measure spaces, possibly to avoid similar problems.
John Baez said:
But these categories are also dagger-categories (or can be made into them), and any adjoint functor between dagger-categories is an ambidextrous adjoint.
Provided the adjoint functors between the underlying categories end up being promoted to dagger functors when you promote the cats to dagger cats.
Todd Trimble said:
I understand the here to refer to the domain of complex-valued functions.
doh. (i was 10 and had to have 2 years of piano school behind me before i could confidently tell left from right.)