You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Hi, this is my first main post here. Thanks to all who can provide suggestions and answers! Oh also I'll try to make my posts more concise next time (sorry about the length)!
So, when you are in middle or high school, you are enrolled in a class titled "algebra" where you learn about the so-called rules of algebra. When you don't follow these rules, you are marked incorrect; it's as if these rules are THE rules of THE algebra, and there's no other way it could be. But then you get to college and learn the field of abstract algebra. In the process, you learn that THE algebra from school is just a tiny drop in an infinite sea of differing algebraic universes, each with its own "rules" of algebra. These universes are known as "algebraic structures" and include everything from magmas to groups and monoids to rings and fields to vector spaces and algebras over fields. So THE algebra from school is just AN algebra(ic structure) among many others. And from a categorical POV, algebraic structures are not hard to define: they are just algebraic objects in Set, where an algebraic object in a category with finite products (or a tensor product) is an object A equipped with one or more "operation morphisms" A x A -> A participating in diagrams encoding certain "properties" such as associativity, commutativity, identity, inversion, etc. This is a classic example of a formal definition in "stuff, structure, property" format, i.e., "An algebraic structure is a set A equipped with morphism(s) A x A -> A such that..."
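(Editor's aside: the "stuff, structure, property" recipe above can be made concrete. Here is a minimal, purely illustrative Python sketch, not from this thread, that treats a finite algebraic structure as an operation table plus properties we can actually check.)

```python
# A finite "algebraic structure" is a set of elements plus an operation;
# which structure it is (magma, monoid, group, ...) depends on which
# properties the operation satisfies.
from itertools import product

def is_associative(elems, op):
    """Check (a*b)*c == a*(b*c) for all triples."""
    return all(op(op(a, b), c) == op(a, op(b, c))
               for a, b, c in product(elems, repeat=3))

def identity_element(elems, op):
    """Return a two-sided identity if one exists, else None."""
    for e in elems:
        if all(op(e, a) == a and op(a, e) == a for a in elems):
            return e
    return None

# Z/4 under addition mod 4: associative with identity 0, so it is a monoid
# (in fact a group).
z4 = range(4)
add = lambda a, b: (a + b) % 4
assert is_associative(z4, add)
assert identity_element(z4, add) == 0

# Subtraction mod 4 is only a magma: an operation exists, but associativity fails.
sub = lambda a, b: (a - b) % 4
assert not is_associative(z4, sub)
```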
After they learn algebra in school, many students will move on to the dreaded course of calculus! There, they learn about limits, derivatives, integrals, and the fundamental theorem of calculus. Again, there are rules to follow, and again students learn about alternate rules when they get to college. There, they learn THE calculus they learned in high school is a type of "real analysis", but other "analyses" exist, such as complex analysis and vector calculus. So here's my question: is there such a thing as AN "analytic structure" that generalizes THE calculus learned in high school, directly analogous to how AN "algebraic structure" generalizes THE algebra learned in high school, such that THE calculus is just one of many in an infinite sea of different calculus/analysis universes? If so, is there a subject of "abstract analysis" that studies it, directly analogous to abstract algebra, and then what is the corresponding formal/internal definition of an analytic object such that an analytic structure is defined as an analytic object internal to Set? Preferably, in "stuff, structure, property" format: "An analytic structure is a set A equipped with structure S such that..."
Here are my thoughts so far. I believe whatever it is, an analytic structure needs to have an underlying algebraic structure. This is because analysis needs to be built on a concept of "linearity", with these linear operations being identified with the underlying algebraic operations. For instance, take some 1d continuous function and zoom into a point down to the infinitesimal scale, then apply the derivative operator. Applying it will look like applying some linear transformation in that neighborhood, which is usually some sort of scaling, which is just multiplication, an algebraic operation. Likewise, when defining an integral, one imagines dividing up the area under a 1d function into tiny infinitesimal rectangles and adding them up, where rectangles are linear objects (whose areas are b*h, again multiplication). So a common theme in THE calculus we learned, that we can approximate continuous things with tiny infinitesimal linear things, is likely something that needs to carry over to analytic structures in general, in which case we need the definition of linearity that an algebraic structure will be able to provide.
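(Editor's aside: the "zoom in until it looks linear" intuition above is easy to check numerically. This is a toy sketch of my own, not from the thread: near a point, a differentiable function acts like multiplication by the single number f'(x0).)

```python
# Near x0, f(x0 + dx) - f(x0) is approximately f'(x0) * dx: a linear
# (scaling) operation. The relative error of the linear approximation
# shrinks as we zoom in.
def f(x):
    return x ** 3

x0 = 2.0
h = 1e-6
# Central difference approximation of f'(x0); exact value is 3 * x0**2 = 12.
slope = (f(x0 + h) - f(x0 - h)) / (2 * h)

for dx in (1e-3, 1e-4, 1e-5):
    linear = slope * dx                # what the "linear zoom" predicts
    actual = f(x0 + dx) - f(x0)        # what the function actually does
    assert abs(actual - linear) / abs(actual) < 1e-2
```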
My guess is that the underlying algebraic structure is some sort of algebra over a field, though maybe generalizations are possible to other algebraic structures. Under this, it seems that "THE calculus" is just an analytic structure defined over the algebraic structure of the real numbers. In a similar vein, "complex analysis" is just an analytic structure defined over the algebraic structure of the complex numbers. "Vector calculus" could be an analytic structure defined over the algebraic structure of linear algebra. There also always seems to be some analogue of the fundamental theorem of calculus associated to each analytic structure here (like Stokes' theorem for vector calculus, which reduces to things like Green's theorem or the divergence theorem). I'm wondering if that's always true for any analytic structure. Related to this is whether it is always possible to define both an integral and a derivative operator in any analytic structure, or whether there are some analytic structures that only contain one or the other. In vector calculus, we actually have multiple of each (such as divergence, gradient, and curl for derivative operators and line, surface, and volume integrals for integral operators).
Let me know if I'm on the right track, or any track at all, with any of this. Thanks!
Synthetic differential geometry has the kind of axiomatic vibe that I'm getting from your question, but it's not presented as being quite as fundamental as what you're aiming for.
Also vaguely related are the (sheaves of) differentials used to construct de Rham cohomology. They're a formal algebraic version of the differentials in analysis. I took a cohomology course where a version of Stokes' theorem was proved using these. (Sorry that that's vague, I can dig up more details if it sounds relevant.)
Maybe you can figure it out from first principles: what are the operations featuring in an "abstract calculus", and by extension what features do you need an object to have in order to count as one?
Maybe you're thinking more about something like [[differential categories]] and the like?
I think the concept of "limits" and of "continuity" are important to calculus and real analysis. The particular way in which those concepts are handled I assume would depend on how one is modelling "topological aspects". We could imagine having different "topological structures" - different ways of doing topology, analogous to how "algebraic structures" are different ways of doing algebra. And indeed, there are things called "topological constructs", which I think are like different settings in which one can do topology-like things.
Here is a big list of topological constructs (from the book "Beyond Topology"):
big list
One could imagine trying to do analysis with an arbitrary topological construct, or trying to figure out which topological constructs are "nice enough" to support at least one "analytic structure".
A lot of calculus can be done over any [[Banach space]].
Interesting, thanks for the answers! What I gather from all the above is that it is not possible to do analysis without some notion of underlying "space", whether that be from differential geometry, cohomology, a topological construct, a Banach space, etc. So you can't just have some set and an underlying algebraic structure; you would also need some sort of a space structure to define an abstract analysis.
I also gather that what will be the most general "underlying space" to define these analytic structures will depend on the set of concepts I (maybe arbitrarily) define to be sufficient for analysis (tying into Morgan and David's comments). If convergence is all I need, then a convergence space is the most general space, but if I need both convergence and continuity, then a topological space will be. If I also want to be able to define uniformity, then I will need a uniform space. Integrals, as far as I know, require measure theory to properly define, so if I want integrals in addition to the above I'll need a measure structure on the space too (i.e., it will need to be a measurable topological space).
Following from Dr. Shulman's response, once I have a reasonable space, I can combine the space structure with an algebraic structure by analyzing vector spaces (or more generally modules) internal to the categories of those spaces. For instance, topological vector spaces are vector spaces internal to Top, and a Banach space seems to have an underlying TVS with even more extra analytic structure added on top. However, at this point I will need to learn more about Banach spaces and related spaces since admittedly I haven't taken any extremely rigorous real analysis courses that would likely cover such things. For instance, I am unsure how "vector measures" that allow for integration in Banach spaces come into being. I know that if a topological group's underlying topology satisfies certain properties (local compactness), it somehow gains a natural "Haar measure"; is something similar happening here? How does this measure work, and would it make specifying the underlying topological space to be a measurable topological space redundant? Lastly, do you think it would make sense to define an analytic structure in the most general case as a vector space internal to a category of spaces?
John Onstead said:
I also gather that what will be the most general "underlying space" to define these analytic structures will depend on the set of concepts I (maybe arbitrarily) define to be sufficient for analysis (tying into Morgan and David's comment). If convergence is all I need, then a convergent space is the most general space, but if I need both convergence and continuity, then a topological space will be. If I also want to be able to define uniformity, then I will need a uniform space. Integrals, as far as I know, require measure theory to properly define, so if I want integrals in addition to the above I'll need a measure structure on the space too (IE, it will need to be a measurable topological space).
While there is some dependence, I was suggesting that you try to extract the "type" of these various operations. In the same way that you have a general concept of an algebra in terms of finitary operations, if you can find some commonality of type then you don't have to be as specific. For example, to define convergence spaces you need a notion of distinguished "net", and the structure of convergence is a map from the space of such nets back to the space satisfying certain constraints. It might be possible to generalize "net" to other shapes and still prove analogous results.
There is a reverse mathematics flavour to this pursuit, where a tangible goal would be to isolate the features of integration etc that enable one to conclude certain properties or results.
You might also be interested in algebraic real analysis.
I took a course called "algebraic analysis" in 2019. I blogged my notes for the first 11 lectures: https://joe-moeller.com/2019/01/07/algebraic-analysis-notes-7-jan-2019/
I say in the first one that the outline of the course is: some theorems about smooth manifolds and smooth complex algebraic varieties, sheaves & derived categories, intersection cohomology, perverse sheaves.
I'll check out algebraic analysis, thanks for the links! In the meantime, I've been struggling to reconcile two ideas. As Dr. Shulman noted, analysis can be done in a Banach space. However, in differential geometry, one defines calculus on a differentiable manifold. How do these two seemingly entirely separate concepts connect? Does differentiable structure added onto a topological manifold also instantly grant Banach structure in some way?
Calculus on Banach spaces and differentiable manifolds are not the same, but they have both an "intersection" and a "union". A differentiable manifold looks "locally" like a finite-dimensional vector space $\mathbb{R}^n$, which is a particular kind of Banach space, so calculus on $\mathbb{R}^n$ is their "intersection". Their "union" is calculus on Banach manifolds, which are like ordinary ones but look locally like a Banach space.
You can do calculus on Banach manifolds and this is quite useful when studying nonlinear differential equations. As the Wikipedia article points out, we can go further and work with Fréchet manifolds (or even more general kinds of infinite-dimensional manifolds). But for hard analysis, Banach manifolds are more practical. I used to use them a fair amount. For "soft" work on differential geometry it often pays to use the still more general diffeological spaces, and people who like category theory tend to enjoy these.
Thanks for the help! There are still two things that confuse me. First, where do vector bundles and tangent bundles come in when discussing calculus on a differentiable manifold? Does the ability to do calculus come from the tangent/vector bundle defined on the manifold, or does it come from the manifold looking locally like a Euclidean vector space? The second question is about how derivatives are even taken using Banach spaces. As vector spaces, the canonical morphisms between Banach spaces are continuous linear maps, unless I am mistaken. However, what if I wanted to take the derivative of a nonlinear function? How am I supposed to represent that operation? Thanks!
As for your first question, the ability to define the tangent space of a manifold comes from the fact that a manifold 'looks locally like a vector space'. The intuition is that when you zoom in to a point on a manifold, its infinitesimal neighborhood looks like a vector space, and that vector space is the tangent space of that point. Of course what I'm saying is rather vague. To make it precise you have to fill in a bunch of details - or read a book that does. Actually it can't hurt to read all of this:
I don't think general vector bundles are so important here: right now you're wondering about the basics of calculus on a manifold, and that mainly requires that you understand the tangent bundle.
As for your second question, derivatives of differentiable maps between Banach spaces are taken the same way as for differentiable maps between Euclidean vector spaces. This subject is explained reasonably well here:
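(Editor's aside: here is a hedged numerical illustration of that point, my own example rather than anything from the linked article. The Fréchet derivative of a nonlinear map $f: \mathbb{R}^2 \to \mathbb{R}^2$ at a point is the *linear* map given by its Jacobian matrix, which we can approximate by finite differences even though $f$ itself is not linear.)

```python
# The nonlinear map f(x, y) = (x*y, sin x) is differentiable; its derivative
# at each point is a linear map, represented by the 2x2 Jacobian matrix.
import math

def f(x, y):
    return (x * y, math.sin(x))

def jacobian(x, y, h=1e-6):
    """Central-difference approximation of the Jacobian of f at (x, y)."""
    fx = [(a - b) / (2 * h) for a, b in zip(f(x + h, y), f(x - h, y))]
    fy = [(a - b) / (2 * h) for a, b in zip(f(x, y + h), f(x, y - h))]
    return [[fx[0], fy[0]], [fx[1], fy[1]]]

J = jacobian(1.0, 2.0)
# Exact Jacobian at (1, 2): [[y, x], [cos x, 0]] = [[2, 1], [cos 1, 0]]
assert abs(J[0][0] - 2.0) < 1e-4
assert abs(J[0][1] - 1.0) < 1e-4
assert abs(J[1][0] - math.cos(1.0)) < 1e-4
assert abs(J[1][1]) < 1e-4
```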
Thanks, the manifold stuff makes sense now. I looked at that page before, where it said a chart is an assignment of an open subset of a manifold to an open subset of Euclidean space that is a homeomorphism. But the page left it extremely vague as to whether "Euclidean space" in this context is topological Euclidean space or topological vector space Euclidean space (only the latter of which would make sense in getting the manifold to look locally like a vector space), which was probably what was confusing me. An atlas is then a way of "gluing" the charts so they are compatible. This reminds me of sheaves in a way, since they also correspond to gluing and also have to do with open subsets. Any connection?
As for the second point (and using the notation given in the Wikipedia article), my concern is with a possible typing error in the expression f: U -> W. A morphism between topological vector spaces (TVS), Euclidean space included, is of the "continuous linear map" type. The article on the Fréchet derivative acknowledges this by stating the derivative as a bounded linear operator A: V -> W, which makes sense as bounded linear operators are a case of continuous linear map. However, this means trying to define a morphism of "nonlinear continuous function" type between TVS is impossible since this results in a typing error; the morphism and the object are incompatible (i.e., we are using a morphism from one category and objects from another). In this case, we are using objects from some category of TVS, and morphisms from Top (or at least some subcategory of Top). Perhaps this is where manifolds come in: instead of V and W being TVS, they are TVS manifolds (like Banach manifolds or differentiable manifolds), which do have a notion of nonlinear map between them. Any thoughts?
Regarding the expression $f\colon U \to W$ in the Wikipedia article, I am guessing that this is shorthand for $f\colon U \to F(W)$, where $F$ is some functor that sends each normed vector space to its underlying set.
I've never really thought about it this way before, but I suppose the process of differentiation can involve "measuring" a morphism of one category and then obtaining data which involves a collection of morphisms from another category. For example, we might start out with a differentiable function $f$ and then through the process of differentiation obtain a "measurement" of $f$ in terms of data which involves various linear maps in $\mathbf{Vect}_{\mathbb{R}}$. (By $\mathbf{Vect}_k$ I mean the category of vector spaces over the field $k$.)
I use the word "measuring" because differentiation is a process that loses information, but the derivative of a function tells us quite a bit about the original function. For example, if we know that $f'(x) = 2x$, we then know that $f(x) = x^2 + c$, for some constant $c$.
(This also reminds me a little bit of how one can work with a convex function in terms of its supporting hyperplanes.)
John Onstead said:
Thanks, the manifold stuff makes sense now. I looked at that page before where it said a chart is an assignment of an open subset of a manifold to an open subset of Euclidean space that is a homeomorphism. But the page left it extremely vague as to if "Euclidean space" in this context is topological Euclidean space or topological vector space Euclidean space (only the latter of which would make sense in getting the manifold to look locally like a vector space), which was probably what was confusing me.
In the Wikipedia page Differentiable manifold, "Euclidean space" is used simply to mean the set $\mathbb{R}^n$ for some arbitrary natural number $n$. This set can be given many structures: there's a standard way to make it into a topological space, a vector space, and even a differentiable manifold in its own right. This flexibility is part of what's going on here: we can treat the $\mathbb{R}^n$'s as living in many different categories.
However, this means trying to define a morphism of "nonlinear continuous function" type between TVS is impossible since this results in a typing error; the morphism and the object are incompatible (IE, we are using a morphism from one category and objects from another).
Ordinary mathematicians use a lot of type conversion, and category theorists can fill in the details if they want. Type conversion is what lets someone add a real number and an integer without us wagging our finger at them and saying "no, no, that makes no sense". It's also what's going on here.
A nonlinear continuous function between topological vector spaces is a perfectly reasonable and extremely important concept. What we're doing is this: we're taking the full image of the forgetful functor from
[topological vector spaces and continuous linear maps]
to
[topological spaces and continuous maps]
This gives the category of
[topological vector spaces and continuous maps]
David Egolf said:
Regarding the expression $f\colon U \to W$ in the Wikipedia article, I am guessing that this is shorthand for $f\colon U \to F(W)$, where $F$ is some functor that sends each normed vector space to its underlying set.
Yes. That implicit forgetful functor is doing the type conversion.
This makes sense, thanks. I do understand why the Wikipedia article is written the way it is: most mathematics is considered (implicitly in most cases) from a set-theoretic perspective where everything is of type Set, and so defining a function between anything is possible and makes sense. I wanted to think of this from the category-theoretic perspective, which is where the issue arose. It's likely there are other ways of going about this that are more "friendly" to category theory (such as synthetic differential geometry), which I plan to investigate further in the future!
Synthetic differential geometry is friendly to category theorists, and so are diffeological spaces. You might like my paper on the latter, since it takes a quite category-theoretic outlook:
I did have another question about manifolds, following up on what was said above about how R^n can have different structure depending on the category. If R^n is a topological space only and used in a chart for an atlas, one gets a topological manifold. If R^n is given topological vector space structure and used in a differentiable atlas, one can assign a tangent space to points on the generated manifold, and so the resulting manifold defined using that atlas is differentiable. However, I am confused when it comes to defining metrics on a manifold (like in Riemannian geometry with metric tensors and the like). There are certain explicit ways of starting with some smooth manifold (a type of differentiable manifold) and then giving it a metric structure. On the other hand, one can give a metric structure instead to the topological vector space R^n, turning it into a normed topological vector space R^n. One can then use this in an atlas to define a manifold. How do these two ways of giving a metric connect? Are they essentially equivalent?
John Onstead said:
Related to this is if it is always possible to define both an integral and derivative operator in any analytic structure, or if there are some analytic structures that only contain one or the other. In vector calculus, we actually have multiple (such as divergence, gradient, and curl for derivative operators and line, surface, volume integrals for integral operators)
Let me know if I'm on the right track, or any track at all, with any of this. Thanks!
In the field of differential categories, we have differential categories which axiomatize differentiation and integral categories which axiomatize integration. A category which is equipped with the two structures at the same time in a compatible way is named a calculus category. I think that some differential categories can't be made into a calculus category, and in the same way some integral categories can't be made into a calculus category.
I don't know about examples because I only know about differential categories, not about integral and calculus categories, which are more recent. But for sure @JS PL (he/him) will know, as he introduced integral and calculus categories in his master's thesis. If people want to learn about differential categories and the like, we will always be happy to help. Both him and me... It's definitely an interesting field IMHO.
I don't know if you're thinking about analytic in the sense of analytic functions. The Taylor expansion is something important in theoretical computer science, in linear logic and lambda calculus. There has been work by JS also to categorify the Taylor expansion. As for myself, I've been working a lot on polynomials in differential categories and I'd like one day to define a notion of analytic differential categories, which would be the ones which are in some way a filtered (co)limit of their chain of subcategories of polynomial functions of degree less than $n$ as $n$ tends to infinity (in some way which makes sense, because of course there is a problem of stability under composition here, but this is the idea. I don't say anything mathematically precise here, just want to share a vague idea that I'd like to explore in the long term. What I say is maybe even completely nonsensical or absurd).
@Jean-Baptiste Vienney This seems interesting! I'll look more into it. But I'm not exactly sure how differential and calculus categories relate to the definition of analytic structure I've ended up landing upon- as spaces equipped with some extra vector space structure (such as topological vector spaces, Banach spaces, differentiable manifolds, vector bundle, etc.) Or with synthetic differential geometry which I am also considering. Any help would be great!
They are very related to spaces with extra vector space structure!
Part of the definition of a differential category is that it is a category enriched over the category of commutative monoids. So each hom-set $\mathcal{C}(A,B)$ is a commutative monoid. Moreover, the definition even says "symmetric monoidal category enriched over commutative monoids". It implies that $\mathcal{C}(I,I)$ is a commutative rig and every hom-set is a $\mathcal{C}(I,I)$-module (or semimodule if you prefer this terminology).
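(Editor's aside: a small toy illustration of that enrichment, my own sketch. In the category of vector spaces, parallel linear maps $\mathbb{R}^2 \to \mathbb{R}^2$, represented here as 2x2 matrices, can be added pointwise, and this addition is a commutative monoid with the zero map as unit.)

```python
# Hom-sets as commutative monoids: linear maps add pointwise, the zero
# map is the unit, and addition is commutative and associative.
def add_maps(f, g):
    """Pointwise sum of two 2x2 matrices (i.e. of two linear maps)."""
    return [[f[i][j] + g[i][j] for j in range(2)] for i in range(2)]

zero = [[0, 0], [0, 0]]
f = [[1, 2], [3, 4]]
g = [[5, 6], [7, 8]]

assert add_maps(f, g) == add_maps(g, f)   # commutativity
assert add_maps(f, zero) == f             # the zero map is the unit
```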
I'm pretty sure that each of these: the category of topological vector spaces over $\mathbb{R}$ or $\mathbb{C}$, the category of Banach spaces, and the category of vector bundles over a manifold, are differential categories.
I'm not sure of the exact conditions but every symmetric monoidal category enriched over the category of commutative monoids which can be equipped with a free commutative algebra modality which is not too bad is a codifferential category (ie. its opposite is a differential category) and this is a very weak condition.
Each category like this is a codifferential category (see p. 8 of the paper "Derivations in Codifferential Categories").
I'm pretty sure we can even weaken a bit these conditions.
This is all interesting! It sounds a little like the concept of an algebraic category or a topological category. Various algebraic objects like groups, monoids, etc. form an algebraic category, while various space like objects form a topological category. Maybe various analytic objects form differential or integral categories? So as algebraic categories are settings for algebra, differential categories are the various settings for analysis? Hope I'm on the right track... I wonder what other categories are differential, for instance diffeological spaces, smooth spaces, orbifolds, etc.
Good question! Differential categories are not the categories of differentiable maps. They are categories of linear maps, and the differentiable maps are seen as the linear maps of type $!A \to B$, where $!$ is a comonad (the exponential modality of the differential category) which is one of the main elements of the definition of differential categories.
If you want categories of differentiable maps, you must use the notion of cartesian differential categories, which are categories with finite products and additional structure which also allows one to define differentiation, but starting from the differentiable maps and not the linear maps this time.
By the way, for every differential category $\mathcal{C}$ with some exponential modality $!$, defining $\mathcal{C}_!(A,B) = \mathcal{C}(!A, B)$, you obtain that the coKleisli category $\mathcal{C}_!$ is a cartesian differential category. You can also extract the linear maps from a cartesian differential category, and I believe you should obtain a differential category.
Smooth spaces and diffeological spaces could very well be cartesian differential categories, but I don't know. What I know is that the category of Euclidean spaces and smooth maps is a cartesian differential category.
Would you happen to have any concrete examples of how to do things inside a differential category to illustrate? For instance, in the category of smooth manifolds, differentiation is an endofunctor that sends a manifold to its tangent bundle and a function to its derivative. Does this tie in to the concept of a differential category and monads (which are a type of endofunctor) on it? If so then how does differentiation on a differentiable manifold "look like" within the context of the differential category they form?
Ahah the category of smooth manifolds is yet something else. It is what we call a tangent category: a category with some endofunctor which is like the tangent bundle functor in the category of smooth manifolds.
I can explain what is a differential category with an example.
That's a bit long but I can do it.
I will explain what is a codifferential category. Codifferential categories are the opposites of differential categories and vice-versa. But the simplest example is a codifferential category.
A codifferential category is a category $\mathcal{C}$ such that:

First part: $\mathcal{C}$ is a symmetric monoidal category enriched over commutative monoids (so we can take tensor products of objects and maps, add parallel maps, and we have zero maps).

The category of vector spaces over any field, or of (semi)modules over any commutative (semi)ring, is such a category, with the usual tensor product, sum of maps, and zero maps.

This part of the definition says that we have some category of "linear maps".
Now we want to add differentiation and this is the second part.
Second part: we ask for an endofunctor $S$ on our category.

The idea is that an object $SA$ is to be interpreted as the differentiable maps from $A$ to $k$, for example.
In our category of vector spaces or modules, we can choose $SV$ to be the symmetric algebra on $V$.
The symmetric algebra is the coordinate-free version of the notion of polynomial algebra.
If $V$ is a vector space with basis $x_1, \dots, x_n$ then $SV \cong k[x_1, \dots, x_n]$.
Now we are going to ask in the rest of the definition for everything we need to multiply, compose and differentiate our polynomials.
So we ask that: each $SA$ is a commutative monoid in $\mathcal{C}$, i.e. comes equipped with an associative, commutative multiplication $m_A\colon SA \otimes SA \to SA$ and a unit $u_A\colon I \to SA$.
Now we can multiply our polynomials, and we also have constant polynomials.
We also ask that: $S$ is a monad, with multiplication $\mu_A\colon S(SA) \to SA$ and unit $\eta_A\colon A \to SA$.

The idea is that the multiplication $\mu$ of the monad axiomatizes the composition of smooth functions. In our example it is the composition of polynomials.
I'm not sure how to write it using polynomials and coordinates but with the symmetric algebra, it is the natural way to compose: you have two levels of symmetric tensors, and then you flatten everything to the same level.
Like this: $\mu_V\colon S(SV) \to SV$ sends a formal product $p_1 \cdot \ldots \cdot p_k$ of polynomials $p_i \in SV$ to their actual product $p_1 \ldots p_k$ in $SV$.
And $\eta_V\colon V \to SV$ transforms a vector $v$ into the corresponding monomial of degree $1$.
Ok, now we can multiply and compose our polynomials. The rest of the definition is made to differentiate them.
Third part (the most important one if you want to differentiate!): we ask for a "deriving transformation", a natural transformation $d_A\colon SA \to SA \otimes A$.
With the example of modules and polynomials, the deriving transformation is like this:
Suppose that a basis of $V$ is $x_1, \dots, x_n$, so that $SV \cong k[x_1, \dots, x_n]$.

Then $d_V\colon SV \to SV \otimes V$ is defined by: $d_V(p) = \sum_{i=1}^{n} \frac{\partial p}{\partial x_i} \otimes x_i$.

Without coordinates it's almost the same: $d_V(v_1 \ldots v_m) = \sum_{i=1}^{m} (v_1 \ldots \widehat{v_i} \ldots v_m) \otimes v_i$, where $\widehat{v_i}$ means that $v_i$ is omitted.
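(Editor's aside: that coordinate formula is easy to experiment with. Here is my own small sketch, not from the thread, using sympy's polynomial differentiation and representing an element of $SV \otimes V$ as a list of (coefficient, generator) pairs.)

```python
# The deriving transformation d : SV -> SV (x) V on polynomials:
# d(p) = sum_i (dp/dx_i) (x) x_i, with the tensor represented as pairs.
import sympy

x, y = sympy.symbols("x y")

def deriving(p, gens):
    """Apply the deriving transformation to a polynomial p in the generators gens."""
    return [(sympy.diff(p, v), v) for v in gens]

p = x**2 * y + 3 * y
d_p = deriving(p, [x, y])
# d(x^2 y + 3y) = (2xy) (x) x  +  (x^2 + 3) (x) y
assert d_p[0] == (2 * x * y, x)
assert d_p[1] == (x**2 + 3, y)
```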
We want our deriving transformation to verify all the usual properties of calculus.
So we require four commutative diagrams, expressing the constant rule, the Leibniz (product) rule, the linear rule, and the chain rule.
Now you have the complete definition! (At least the idea of each part of the definition; look at the original paper "Differential categories" or the paper "Differential Categories Revisited" if you want all the details.) Or there is better:
These videos: The Theory of Differential Categories. (4 tutorial lectures)
You can ask me if you have some question right now also
I did my best to try to explain what is a (co)differential category together with the example of the symmetric algebra (the polynomials) in the category of modules.
Thanks for the information and for all your help! It was very well explained. I'll check out the resources and everything; it's a very interesting topic!
This may sound like a goofy question and it's probably silly. But what if you were asked to find the derivative of the function x^2 or e^x evaluated at x = 1, but the catch is you have to completely use the machinery of some differential category to solve the problem. How do you go about approaching that problem?
A second question is how a tangent category relates to a differential category.
If you want to find the derivative of a function evaluated at a point, you need to use differential categories and no longer codifferential categories.
In a differential category, a smooth map from $A$ to $B$ is a map of type $!A \to B$.

To differentiate, you precompose by the deriving transformation, which is now of type $d_A\colon !A \otimes A \to !A$,

so that you obtain a map $!A \otimes A \to B$.

The idea is that this map is smooth in the first variable, linear in the second variable, and with values in $B$.
Now if you wanted to differentiate at a point, you would have to use some map $u$ which is the unit of some algebra structure on $!A$,

which is a linear map from the monoidal unit $I$ to $!A$. (By the way, there isn't any map like this in the definition of a differential category, because we only ask for $!A$ to be a coalgebra. But it is in the definition of a model of differential linear logic, where we ask every $!A$ to be a commutative bialgebra. A model of differential linear logic is something a bit more restrictive than a differential category, but most interesting differential categories are models of differential linear logic.)
Now if you have a smooth function from $\mathbb{R}$ to $\mathbb{R}$ as you are thinking of, then $A = B = \mathbb{R}$, and you obtain a linear map $\mathbb{R} \to \mathbb{R}$, i.e. a vector in $\mathbb{R}$, which is your linear approximation at the chosen point.
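(Editor's aside: to make that concrete with the $x^2$ example from the question, here is my own toy sketch, using sympy as a stand-in for the abstract machinery. The recipe above produces $D[f](a, v) = f'(a) \cdot v$, smooth in $a$ and linear in $v$; plugging the point $a = 1$ into the smooth slot leaves the linear map $v \mapsto 2v$, i.e. the number $2$.)

```python
# Differentiating f(x) = x^2 "categorically": first form D[f](a, v), a map
# smooth in a and linear in v, then evaluate the smooth slot at a = 1.
import sympy

x, a, v = sympy.symbols("x a v")
f = x**2

# Precomposing with the deriving transformation yields D[f](a, v) = f'(a) * v.
D_f = sympy.diff(f, x).subs(x, a) * v

# Plug a = 1 into the smooth slot: the remaining linear map is v -> 2v.
at_one = D_f.subs(a, 1)
assert at_one == 2 * v
assert at_one.subs(v, 1) == 2   # the derivative of x^2 at x = 1 is 2
```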
The examples of differential categories are more complicated... I don't really know any of them well, but you will probably find out in the references. I believe most of the time $!A$ is some kind of space of distributions.
The coEilenberg-Moore category of a differential category is always a tangent category.
(In the same way, the coKleisli category of a differential category is always a cartesian differential category.)
I have no idea how to obtain a differential category from a tangent category. But maybe you can use some category of the vector bundles of your tangent category and, for instance, apply the symmetric algebra on it to obtain your modality. It would be quite weird, but there is a paper doing something like this, not starting from an arbitrary tangent category but from the category of smooth manifolds:
Jets and differential linear logic
It is quite plausible that a general process could be adapted from this paper to obtain a differential category from any tangent category.
Thanks! Very helpful :)
John Onstead said:
However, I am confused when it comes to defining metrics on a manifold (like in Riemannian geometry with metric tensors and the like). There are certain explicit ways of starting with some smooth manifold (a type of differentiable manifold) and then giving it a metric structure. On the other hand, one can give a metric structure instead to the topological vector space R^n, turning it into a normed topological vector space R^n. One can then use this in an atlas to define a manifold. How do these two ways of giving a metric connect? Are they essentially equivalent?
The second way sounds much more restrictive, since every chart would be a copy of $\mathbb{R}^n$ with its flat metric, so you'd only get flat Riemannian manifolds. I'm assuming that in the first way (which you didn't really explain in detail, you just mentioned "certain explicit ways") you are talking about the usual definition of Riemannian manifolds.
The second way is much more rarely used; I might have seen it used somewhere, but people are vastly more likely to just talk about flat Riemannian manifolds.
Jean-Baptiste Vienney said:
But for sure JS PL (he/him) will know, as he introduced integral and calculus categories in his master's thesis. If people want to learn about differential categories and the like, we will always be happy to help. Both him and me... It's definitely an interesting field IMHO.
Late to the discussion, but yes, absolutely always happy to discuss differential categories, as they might be what you're looking for. @Jean-Baptiste Vienney did a good job discussing differential categories, but another kind of category might better fit what you are looking for. They fit more in the direction @John Baez and @Tom Hirschowitz and @Morgan Rogers (he/him) were going towards, which are Tangent Categories in the sense of Cockett and Cruttwell. (I don't know if the nLab page is complete, so here are some slides.)
Essentially a tangent category is a category where every object $M$ has an associated object $TM$ which behaves like an abstract tangent bundle over $M$. The canonical example is the category of smooth manifolds, but there are also lots of other examples, in particular synthetic differential geometry and (affine) schemes. They are also very much linked to the differential categories @Jean-Baptiste Vienney was talking about above. Tangent categories have been particularly successful in formalizing numerous concepts from differential geometry. So they might be a suitable setting for what you wish to achieve.