Let $f\colon\mathbb{R}\to\mathbb{R}$ be a function. Consider the statement that $f$ is continuous at $x_0\in\mathbb{R}$. We can define a "modulus of continuity" $\tilde f\colon[0,\infty)\to[0,\infty]$ by $\tilde f(\delta)=\sup\{|f(x)-f(x_0)| : |x-x_0|\le\delta\}$. Then $f$ is continuous at $x_0$ iff $\tilde f$ is continuous at $0$. Also note that $\tilde f$ is a weakly monotonic function (i.e. it preserves the relation $\le$). So we have reduced the notion of continuity of a real function at a point to that of continuity of a monotonic function at $0$.
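A minimal numerical sketch of this reduction (the grid approximation and the helper name `modulus` are mine, not from the thread):

```python
import numpy as np

# Hedged sketch: approximate tilde_f(delta) = sup{|f(x) - f(x0)| : |x - x0| <= delta}
# by taking the max over a fine grid.  The modulus is weakly monotonic in delta,
# and it tends to 0 as delta -> 0 exactly when f is continuous at x0.

def modulus(f, x0, delta, n=20001):
    xs = np.linspace(x0 - delta, x0 + delta, n)   # sample of the closed delta-ball
    return float(np.max(np.abs(f(xs) - f(x0))))

f_cont = lambda x: x**3 - x                      # continuous at 0
f_jump = lambda x: np.where(x < 0, 0.0, 1.0)     # jump at 0

for delta in [1.0, 0.1, 0.01, 0.001]:
    print(delta, modulus(f_cont, 0.0, delta), modulus(f_jump, 0.0, delta))
# the first modulus shrinks to 0 with delta; the second stays at 1
```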
Let's rename $\tilde f$ by $g$, and consider the proposition that $g$ is continuous at $0$. This is to say (using that $g$ is monotonic and $g(0)=0$): for all $\epsilon>0$ there exists $\delta>0$ such that $g(\delta)\le\epsilon$.
This can be rewritten: there exists a function $h$ with $h\colon(0,\infty)\to(0,\infty)$ such that $g(h(\epsilon))\le\epsilon$ for all $\epsilon>0$.
Well this looks a lot like an adjunction doesn't it! We can make it look a bit more like an adjunction: for all $\epsilon,\delta>0$, $g(\delta)\le\epsilon \Leftarrow \delta\le h(\epsilon)$.
This would be an adjunction between $g$ and $h$ (regarded as monotone maps of posets), which would read $g(\delta)\le\epsilon \Leftrightarrow \delta\le h(\epsilon)$, except that we have $\Leftarrow$ instead of $\Leftrightarrow$. If there was $\Leftrightarrow$, that would force $g$ not to just be continuous at $0$, but to be right-continuous throughout its domain, as discussed in this related post.
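A small numerical illustration of the gap between the one-way implication and a genuine adjunction; the functions `g1`, `h1`, `g2`, `h2` and the grid check below are my own choices, not from the thread:

```python
import numpy as np

# Check "delta <= h(eps)  =>  g(delta) <= eps" (the one-way version) and its
# converse on a grid of sample points.  Both directions together would be a
# Galois connection g(delta) <= eps  <=>  delta <= h(eps).

def check(g, h, deltas, epsilons, tol=1e-9):
    one_way  = all(g(d) <= e + tol for e in epsilons for d in deltas if d <= h(e))
    converse = all(d <= h(e) + tol for e in epsilons for d in deltas if g(d) <= e)
    return one_way, converse

deltas   = np.linspace(0.01, 3, 300)
epsilons = np.linspace(0.01, 3, 300)

# g1 is continuous and strictly increasing, h1 is its inverse:
# both directions hold, a genuine adjunction.
g1 = lambda d: d**2
h1 = lambda e: np.sqrt(e)
print(check(g1, h1, deltas, epsilons))   # (True, True)

# g2 is monotone and continuous at 0 but jumps at delta = 1.  The witness h2
# still certifies continuity at 0 (one-way direction holds), but the converse
# fails; and no choice of h can give the biconditional, since e.g.
# {delta : g2(delta) <= 1.5} = [0, 1) has no maximum.
g2 = lambda d: d if d < 1 else d + 1
h2 = lambda e: min(e, 0.5)
print(check(g2, h2, deltas, epsilons))   # (True, False)
```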
What is going on here?
I replied in the other thread, but this is known as a "weak adjunction". A good modern reference is Lack–Rosický's Enriched weakness.
A semiadjunction is something different :)
Joshua Meyers said:
Let $f\colon\mathbb{R}\to\mathbb{R}$ be a function. Consider the statement that $f$ is continuous at $x_0\in\mathbb{R}$. We can define a "modulus of continuity" $\tilde f\colon[0,\infty)\to[0,\infty]$ by $\tilde f(\delta)=\sup\{|f(x)-f(x_0)| : |x-x_0|\le\delta\}$.
Oh hey I spent a summer thinking about this in 2016 (informally, mind you, it was while I was on holiday with my family). I had called it the -transform, because I was less good at coming up with names then. I'll have to dig up what I can remember about it
So, we can let $x$ vary, and let $X$ and $Y$ be arbitrary metric spaces. Then we have a transformation from the collection of functions $X\to Y$ to the collection of functions $X\times(0,\infty)\to[0,\infty]$, sending $f$ to the mapping $(x,\delta)\mapsto\sup\{d_Y(f(x),f(x')) : d_X(x,x')\le\delta\}$.
We could also define $(x,\epsilon)\mapsto\sup\{d_X(x,x') : d_Y(f(x),f(x'))\le\epsilon\}$.
In both cases we have to allow $\infty$ as a value, to account for singularities in the first case and boundedness of $Y$ coupled with unboundedness of $X$ in the latter. The latter is actually the one I spent more time thinking about; it seems to turn failure of convexity beyond a certain threshold into failure of continuity (when $Y$ is the reals, anyway), which is interesting. It also detects points at which functions are locally constant as failures of continuity in the limit $\epsilon\to 0$. I never got much further in examining the formal properties of these things or how they relate to one another, but I bet there's some good stuff in there.
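A hedged grid sketch of these two transforms for a real function, assuming the formulas above; the helper names and the discretisation are mine:

```python
import numpy as np

# Grid approximations of the two transforms for f : R -> R with d(a,b) = |a - b|.
# On a bounded grid the value infinity cannot actually appear; for the second
# transform it would correspond to an unbounded sublevel set {x' : |f(x')-f(x)| <= eps}.

XS = np.linspace(-3, 3, 600001)   # bounded sample of the domain

def delta_transform(f, x, delta):
    # (x, delta) |-> sup{ |f(x') - f(x)| : |x' - x| <= delta }
    ball = XS[np.abs(XS - x) <= delta]
    return float(np.max(np.abs(f(ball) - f(x))))

def eps_transform(f, x, eps):
    # (x, eps) |-> sup{ |x' - x| : |f(x') - f(x)| <= eps }   (the formula as written)
    sublevel = XS[np.abs(f(XS) - f(x)) <= eps]
    return float(np.max(np.abs(sublevel - x)))

f = lambda x: x**3 - x
print(delta_transform(f, 0.0, 0.1))   # ~0.099
print(eps_transform(f, 0.0, 0.01))    # already ~1.0 for tiny eps -- this matters below
```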
Interesting, can you say more about "seems to turn failure of convexity beyond a certain threshold into failure of continuity"?
Maybe convexity isn't the right description, it's more like it detects the presence of more than one turning point. For example, a function with a single turning point need not be convex, but I don't think anything weird happens there. But if you consider $f(x)=x^3-x$ with its nice curvy shape, and consider the value of the transformed function at a point near $0$, it increases continuously up to about $0.577$ as $\epsilon$ approaches about $0.385$ (where the turning points happen), and then for larger $\epsilon$ it jumps up to... about 1.155.
More interestingly, if I'm sketching this correctly, there is a critical value such that if you look at for fixed in the range , then there are two points of discontinuity (and similarly below ), at there is just one point of discontinuity, and outside the range there are none. So if you plot the transform, you get an -symbol-shaped curve of points of discontinuity.
You're talking about ? In my mind, for , , this will descend continuously to as from above.
Yes, like I said, discontinuity of that thing as $\epsilon\to 0$ detects if a function is locally constant at $x$, which doesn't happen for any non-constant polynomial. What's interesting is the discontinuity behaviour at the threshold that I was describing.
I also bet that there's some nice interaction between these transforms, so if anyone wants to take these ideas and run with them, I'd love to (eventually) participate in writing a paper proving some of the basic results about them.
Morgan Rogers (he/him) said:
Yes, like I said, discontinuity of that thing as $\epsilon\to 0$ detects if a function is locally constant at $x$, which doesn't happen for any non-constant polynomial. What's interesting is the discontinuity behaviour at the threshold that I was describing.
I don't see the discontinuity you are describing, can you explain? It seems to me that the function you mentioned that takes and as arguments is always continuous in for and has a discontinuity in otherwise
It turns out my hand-drawn sketch of the function was bad. There are fewer discontinuities than I thought, but there's definitely at least one for small $x$.
epsilon.png
Beyond the turning points, the function will always be growing faster in the direction of increasing , so the location of the maximum value doesn't switch; it's that switching that allows discontinuities to happen. I thought there was one discontinuity for each turning point when is small, but only the more distant turning point actually creates a discontinuity, because once again the location of the maximum value is towards/through 0.
Here there is no discontinuity corresponding to the nearer turning point, but there is one corresponding to the more distant turning point.
Sorry, I am confused, can you please state your claim more precisely? What function are you saying has a discontinuity where?
Sure, sorry. I'm saying that for $f(x)=x^3-x$, the transformed function has a discontinuity in $\epsilon$ for $x$ at or near $0$. I checked the value this time. Specifically, the value jumps from about $0.577$ to about $1.155$.
I see now. Let's take $x=0$ for simplicity. When $\epsilon$ is small, the set $\{x' : |f(x')-f(0)|\le\epsilon\}$ has 3 connected components --- I think you might be just looking at the connected component that includes $0$. For example, if $\epsilon\to 0$, this set converges to $\{-1,0,1\}$, so the "transformed function" converges to $1$ --- does that align with your intuition?
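A quick numerical check of this computation (my own grid approximation of the formula as written):

```python
import numpy as np

# With the transform literally as written, i.e. sup{ |x'| : |f(x') - f(0)| <= eps }
# for f(x) = x^3 - x, the value tends to 1 as eps -> 0, because the sublevel set
# always contains points near the other two roots -1 and 1.

XS = np.linspace(-3, 3, 600001)
f = lambda x: x**3 - x

def as_written(x, eps):
    sublevel = XS[np.abs(f(XS) - f(x)) <= eps]
    return float(np.max(np.abs(sublevel - x)))

for eps in [0.5, 0.1, 0.01, 0.001]:
    print(eps, as_written(0.0, eps))   # decreases toward 1, not toward 0
```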
Oh you're right! :relieved: What I wrote wasn't what I was actually using!
I guess what I meant was the transform $(x,\epsilon)\mapsto\sup\{\delta>0 : \text{for all } x',\ d_X(x,x')\le\delta \Rightarrow d_Y(f(x),f(x'))\le\epsilon\}$
... which behaves somewhat differently, but looks more like what we would be interested in with continuity as a jumping-off point.
and then the more natural dual transform would be $(x,\delta)\mapsto\inf\{\epsilon>0 : \text{for all } x',\ d_X(x,x')\le\delta \Rightarrow d_Y(f(x),f(x'))\le\epsilon\}$
That makes more sense. So that dual transform is identical to my function $\tilde f$ (the sup of $d_Y(f(x),f(x'))$ over the closed $\delta$-ball around $x$). I'll have to think more about the other one
Right, they do coincide, which is how I got confused about the definition of the other one :wink: Thanks for having the patience to work out what I was doing wrong!
So let's think about the other one: $(x,\epsilon)\mapsto\sup\{\delta>0 : \text{for all } x',\ d_X(x,x')\le\delta \Rightarrow d_Y(f(x),f(x'))\le\epsilon\}$
This won't be defined if that set is empty, but I guess we can define it to be $0$ then, because in $[0,\infty]$, $0$ is the supremum of the empty set.
Then we can say that $f$ is continuous at $x$ iff this function is nonzero for all values of $\epsilon$
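A hedged numerical sketch of this function (my own discretisation and helper name), which also exhibits the jump for the cubic discussed above:

```python
import numpy as np

# Approximate C(x, eps) = sup{ delta > 0 : |x'-x| <= delta  implies  |f(x')-f(x)| <= eps }
# as the distance from x to the nearest sampled point that leaves the eps-ball.
# For f(x) = x^3 - x at x = 0 it is nonzero for every eps > 0 (continuity at 0),
# and it jumps at eps = 2/(3*sqrt(3)) ~ 0.385 from about 1/sqrt(3) ~ 0.577 to
# about 2/sqrt(3) ~ 1.155, matching the values mentioned earlier.

XS = np.linspace(-3, 3, 600001)

def other_transform(f, x, eps):
    bad = np.abs(f(XS) - f(x)) > eps              # sampled points outside the eps-ball
    dists = np.abs(XS[bad] - x)
    return float(dists.min()) if dists.size else np.inf   # sup of good radii = inf of bad distances

f = lambda x: x**3 - x
for eps in [0.01, 0.1, 0.38, 0.385, 0.39, 1.0]:
    print(eps, other_transform(f, 0.0, eps))
```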
I guess the dual transform is a kind of measure of "efficiency": how much $\epsilon$-boundedness do you get for a given $\delta$-boundedness?
And the other transform answers a similar question: how much $\delta$-boundedness do you need to get a given $\epsilon$-boundedness?
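A toy example of this reading, with my own helper names and the hypothetical choice $f(x)=2x$ at the base point $0$:

```python
import numpy as np

# For f(x) = 2x at x = 0, a delta-ball in the domain is stretched to a
# 2*delta-ball in the codomain, and to stay eps-bounded you can only afford a
# ball of radius eps/2.

XS = np.linspace(-10, 10, 400001)
f = lambda x: 2 * x

def delta_transform(x, delta):      # eps-boundedness you get from delta-boundedness
    ball = XS[np.abs(XS - x) <= delta]
    return float(np.max(np.abs(f(ball) - f(x))))

def other_transform(x, eps):        # delta-boundedness that guarantees eps-boundedness
    bad = np.abs(f(XS) - f(x)) > eps
    return float(np.min(np.abs(XS[bad] - x)))

print(delta_transform(0.0, 0.5))    # ~1.0  (= 2 * delta)
print(other_transform(0.0, 0.5))    # ~0.25 (= eps / 2)
```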
I could see these being useful for thinking about complicated estimations
Here's an interesting way of thinking about it: fixing , define a partial order on by setting if , if , and if for all , .
Then the transforms can be written and
We can also write this more cleanly by considering the injections defined by and . Then we get for the transforms, and
And we could generalize this to any cospan of posets!
This might be useful for answering questions about resource convertibility, as in resource theories
Joshua Meyers said:
Here's an interesting way of thinking about it: fixing , define a partial order on by setting if , if , and if for all , .
Then the transforms can be written and
If you just consider the two copies of $(0,\infty)$ separately, then that last line looks like an adjunction (cf the title of the topic)!
It does! We can get it to look even more like an adjunction:
Let be the function and be the function .
Then !
I think your two transformations form a literal adjunction!
Now this makes me wonder, what does this say about continuity? The first thing is that $f$ is continuous at $x$ iff the other transform is always non-zero, i.e. for every $\epsilon$ we can pick a non-zero $\delta$ such that etc.
This adjunction expresses what one can "see" about $f$ using only distance information, which includes local stuff like continuity and local constant-ness, but also more subtle distant behaviour like what I worked out re the cubic. Tracing out the transforms as their second argument varies for (eg) piecewise linear functions can have interesting results. I think it can also do smoothness properties like derivatives, at least for real functions.
How could you show given by
is not smooth using only distance information?
where
is smooth, so if you want to detect the non-smoothness at you'd need to look at some point other than . But actually I think
for all . If this is true I guess it's hopeless, right?
I don't think that's true: let .
I should really avoid making imprecise concluding statements here on Zulip, someone always ends up picking me up on it. The best that can do is compute the limsup of the modulus of the derivative at as . If I have for and for with , then we have as when and as when .
In particular, this has the nice property of having a value even at points where the function is not differentiable, and it can detect any discontinuities in $f'$ (in the real-valued case), but it can't detect points at which the sign of $f'$ flips without changing in modulus.
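A hedged numerical probe of this point: the ratio of the modulus to $\delta$ for small $\delta$, for a slope jump versus a pure sign flip (the helper names and example functions are mine):

```python
import numpy as np

# ratio(f, x, delta) approximates sup{|f(x') - f(x)| : |x' - x| <= delta} / delta.
# For a piecewise-linear function with slopes a (left of 0) and b (right of 0),
# this tends to max(|a|, |b|) as delta -> 0, so a jump in |slope| is visible,
# but f(x) = |x| is indistinguishable from f(x) = x at 0.

def ratio(f, x, delta, n=200001):
    xs = np.linspace(x - delta, x + delta, n)
    return float(np.max(np.abs(f(xs) - f(x))) / delta)

pw   = lambda x: np.where(x < 0, -3.0 * x, 2.0 * x)   # slopes a = -3, b = 2
vee  = lambda x: np.abs(x)                            # slopes -1 and +1
line = lambda x: x                                    # slope +1 everywhere

for delta in [0.1, 0.01, 0.001]:
    print(delta, ratio(pw, 0.0, delta), ratio(vee, 0.0, delta), ratio(line, 0.0, delta))
# pw gives 3 = max(|-3|, |2|); vee and line both give 1, so the sign flip
# of the slope at 0 is invisible to this quantity.
```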
But more to the point, I think that for all .
Morgan Rogers (he/him) said:
In particular, this has the nice property of having a value even at points where the function is not differentiable, and it can detect any discontinuities in $f'$ (in the real-valued case), but it can't detect points at which the sign of $f'$ flips without changing in modulus.
This can be adjusted for with some perturbations, though: add on a function with strictly positive (small) derivative everywhere; then applying the transform will detect the points where the sign suddenly changed before as discontinuities.
I should really avoid making imprecise concluding statements here on Zulip, someone always ends up picking me up on it.
This is how math progresses! :upside_down:
There's really nothing more fun than reading a conjecture and trying to quickly disprove it with a counterexample.
Interesting points about the derivative. A small correction: I think you mean the difference quotient, not the derivative. So we can say $\limsup_{\delta\to 0^+}\tilde f(x,\delta)/\delta = \limsup_{x'\to x}\frac{|f(x')-f(x)|}{|x'-x|}$.
So it is kind of like a derivative but it always exists
Probably related to the Dini derivatives
My bad, I guess it should have been the difference quotient.
Yes, it does indeed seem like the Dini derivative, except symmetric. Perhaps the other transform gives the lower one?
I think that .
Shows that this transform really loses a lot of information in a "real numbers" context (rather than a general metric space context), since we're coming from both sides at once and taking the absolute value... There could be more sensitive moduli, for example:
Conjecture: these correspond to the Dini derivatives.
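One possible comparison along these lines; the particular one-sided quantities below are my own choice for illustration and not necessarily the moduli proposed above:

```python
import numpy as np

# The symmetric ratio sup{|f(x+t)-f(x)| : |t| <= delta} / delta is 1 at 0 for
# both f(x) = |x| and f(x) = x, but one-sided signed difference quotients
# (finite-delta versions of the Dini difference quotients) tell the two apart.

def symmetric_ratio(f, x, delta, n=100001):
    t = np.linspace(-delta, delta, n)
    return float(np.max(np.abs(f(x + t) - f(x))) / delta)

def one_sided_quotients(f, x, delta, n=100001):
    t = np.linspace(0, delta, n)[1:]                     # strictly positive offsets
    q_right = (f(x + t) - f(x)) / t
    q_left  = (f(x - t) - f(x)) / (-t)
    return (float(q_right.max()), float(q_right.min()),
            float(q_left.max()),  float(q_left.min()))

for f, name in [(np.abs, "|x|"), (lambda x: x, "x")]:
    print(name, symmetric_ratio(f, 0.0, 1e-3), one_sided_quotients(f, 0.0, 1e-3))
# |x|: symmetric ratio 1, one-sided quotients (1, 1, -1, -1)
# x  : symmetric ratio 1, one-sided quotients (1, 1,  1,  1)
```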
For your information, I observed that continuity via the epsilon-delta formula can be interpreted as an adjunction at the Bowdoin College Category Theory Advanced Science Seminar, June 24 to August 14, 1969. Everyone was there: Eilenberg, Mac Lane, and so on. Mac Lane gave a series of lectures, and kindly presented my idea to the (large) audience of categorists. Eilenberg wrote for me a letter of recommendation to attend. I had the good fortune to have been seated next to the analyst William F. Donoghue, Jr., and I had told him my idea, but was stuck on a detail. He told me about the modulus of continuity, and so began (and ended) my fame as a categorist.