Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: learning: questions

Topic: ε-δ is "semi-adjunction"?


view this post on Zulip Joshua Meyers (Feb 21 2021 at 22:45):

Let $f:\mathbb{R}\to\mathbb{R}$ be a function. Consider the statement that $f$ is continuous at $x_0\in\mathbb{R}$. We can define a "modulus of continuity" $\omega:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$ by $\omega(\delta)\coloneqq \sup\{|f(x)-f(x_0)| : |x-x_0|<\delta\}$. Then $f$ is continuous at $x_0$ iff $\omega$ is continuous at $0$. Also note that $\omega$ is a weakly monotonic function (i.e. it preserves the relation $\leq$). So we have reduced the notion of continuity of a real function at a point to that of continuity of a monotonic function $\omega:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$ at $0$.
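A quick numerical sketch of this modulus (my code, not from the thread; the grid maximum only approximates the true sup), for the illustrative choice $f(x)=x^2$ at $x_0=0$:

```python
# Hypothetical sketch: approximate
#   omega(delta) = sup{ |f(x) - f(x0)| : |x - x0| < delta }
# by a maximum over a finite grid, here for f(x) = x^2 at x0 = 0.

def modulus(f, x0, delta, samples=10_001):
    """Grid approximation of the modulus of continuity of f at x0."""
    if delta <= 0:
        return 0.0
    xs = [x0 - delta + 2 * delta * i / (samples - 1) for i in range(samples)]
    return max((abs(f(x) - f(x0)) for x in xs if abs(x - x0) < delta),
               default=0.0)

f = lambda x: x * x
omega = lambda d: modulus(f, 0.0, d)

# omega is weakly monotone and omega(delta) -> 0 as delta -> 0,
# witnessing continuity of f at x0 = 0.
print(omega(1.0), omega(0.1), omega(0.01))
```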

Let's rename $\omega$ to $f$, and consider the proposition that $f:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$ is continuous at $0$. This is to say,

$$\forall \epsilon > 0\ \exists \delta > 0\ \forall x\geq 0\ (x<\delta \Rightarrow f(x)<\epsilon)$$

This can be rewritten: there exists a function $\delta:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$ with $\epsilon>0\Rightarrow \delta(\epsilon)>0$ such that

$$\forall \epsilon > 0\ \forall x\geq 0\ (x<\delta(\epsilon) \Rightarrow f(x)<\epsilon)$$

Well, this looks a lot like an adjunction, doesn't it! We can make it look a bit more like one:

$$\forall \epsilon > 0\ \forall x>0\ (\epsilon\leq f(x)\Rightarrow \delta(\epsilon)\leq x)$$

This would be an adjunction $f\vdash\delta$, where $f,\delta:\mathbb{R}_+\to\mathbb{R}_+$, except that we have $\Rightarrow$ instead of $\Leftrightarrow$. If there were $\Leftrightarrow$, that would force $f$ not just to be continuous at $0$, but to be right-continuous throughout its domain, as discussed in this related post.
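Spelled out, the passage between the last two displays is just a contrapositive:

```latex
% From  x < \delta(\epsilon) \Rightarrow f(x) < \epsilon,
% the contrapositive gives
\neg\big(f(x) < \epsilon\big) \Rightarrow \neg\big(x < \delta(\epsilon)\big),
\qquad\text{i.e.}\qquad
\epsilon \leq f(x) \;\Rightarrow\; \delta(\epsilon) \leq x .
```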

What is going on here?

view this post on Zulip Nathanael Arkor (Feb 21 2021 at 23:39):

I replied in the other thread, but this is known as a "weak adjunction". A good modern reference is Lack–Rosický's Enriched weakness.

view this post on Zulip Nathanael Arkor (Feb 21 2021 at 23:40):

A semiadjunction is something different :)

view this post on Zulip Morgan Rogers (he/him) (Feb 22 2021 at 08:41):

Joshua Meyers said:

Let $f:\mathbb{R}\to\mathbb{R}$ be a function. Consider the statement that $f$ is continuous at $x_0\in\mathbb{R}$. We can define a "modulus of continuity" $\omega:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$ by $\omega(\delta)\coloneqq \sup\{|f(x)-f(x_0)| : |x-x_0|<\delta\}$.

Oh hey, I spent a summer thinking about this in 2016 (informally, mind you; it was while I was on holiday with my family). I had called it the $\Delta$-transform, because I was less good at coming up with names then. I'll have to dig up what I can remember about it.

view this post on Zulip Morgan Rogers (he/him) (Feb 22 2021 at 09:10):

So, we can let $x_0$ vary, and let $(X,d)$ and $(Y,e)$ be arbitrary metric spaces. Then we have a transformation from the collection of functions $(X,d)\to(Y,e)$ to the collection of functions $(X,d)\times\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}$ sending $f$ to the mapping $(x,\delta)\mapsto\sup\{e(f(y),f(x)) : d(x,y)<\delta\}$.
We could also define $(x,\epsilon)\mapsto\sup\{d(x,y) : e(f(x),f(y))<\epsilon\}$.
In both cases we have to allow $\infty$ as a value, to account for singularities in the first case, and for boundedness of $(Y,e)$ coupled with unboundedness of $(X,d)$ in the latter. The latter is actually the one I spent more time thinking about; it seems to turn failure of convexity beyond a certain threshold into failure of continuity (when $X$ is the reals, anyway), which is interesting. It also detects points at which functions are locally constant as failures of continuity in the limit $\epsilon\to 0$. I never got much further in examining the formal properties of these things or how they relate to one another, but I bet there's some good stuff in there.
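A finite-sample sketch of these two transforms (my code and names; the sups are approximated over a grid, with the empty-set case returned as $0$ rather than $\infty$):

```python
def delta_to_eps(f, points, d, e, x, delta):
    """(x, delta) |-> sup{ e(f(y), f(x)) : d(x, y) < delta }, over a sample."""
    vals = [e(f(y), f(x)) for y in points if d(x, y) < delta]
    return max(vals, default=0.0)

def eps_to_delta(f, points, d, e, x, eps):
    """(x, eps) |-> sup{ d(x, y) : e(f(x), f(y)) < eps }, over a sample.
    In general this can be infinite, hence the need to allow infinity."""
    vals = [d(x, y) for y in points if e(f(x), f(y)) < eps]
    return max(vals, default=0.0)

d = e = lambda a, b: abs(a - b)                 # the usual metric on R
grid = [i / 1000 for i in range(-2000, 2001)]   # sample of [-2, 2]
f = lambda t: t**3 - t

# At x = 0, even a tiny eps already "sees" the distant points near
# y = 1 and y = -1 where f returns close to f(0) = 0:
print(eps_to_delta(f, grid, d, e, 0.0, 0.01))   # about 1, not about 0
```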

view this post on Zulip Joshua Meyers (Feb 22 2021 at 15:55):

Interesting, can you say more about "seems to turn failure of convexity beyond a certain threshold into failure of continuity"?

view this post on Zulip Morgan Rogers (he/him) (Feb 22 2021 at 16:22):

Maybe convexity isn't the right description; it's more like it detects the presence of more than one turning point. For example, $x^3$ is not convex, but I don't think anything weird happens. But if you consider $x^3-x$ with its nice curvy shape, and consider the value of $(0,\epsilon)$, it increases continuously up to $1/\sqrt{3}$ as $\epsilon\to 2/3\sqrt{3}$, where the turning points happen, and then for $\epsilon>1/\sqrt{3}$ it jumps up to... about 1.155.
More interestingly, if I'm sketching this correctly, there is a critical value $0<\alpha<1/\sqrt{3}$ such that if you look at $(x,\epsilon)$ for fixed $x$ in the range $0<x<\alpha$, then there are two points of discontinuity (and similarly below $0$); at $\pm\alpha$ there is just one point of discontinuity, and outside the range $-\alpha\leq x\leq\alpha$ there are none. So if you plot the transform, you get an $\infty$-symbol-shaped curve of points of discontinuity.

view this post on Zulip Joshua Meyers (Feb 22 2021 at 16:59):

You're talking about $(x,\epsilon)\mapsto\sup\{d(x,y) : e(f(x),f(y))<\epsilon\}$? In my mind, for $f(x)=x^3-x$, $x=0$, this will descend continuously to $1$ as $\epsilon\to 0$ from above.

view this post on Zulip Morgan Rogers (he/him) (Feb 22 2021 at 17:39):

Yes, like I said, discontinuity of that thing as $\epsilon\to 0$ detects if a function is locally constant at $x$, which doesn't happen for any non-constant polynomial. What's interesting is the discontinuity behaviour at $\epsilon>0$ that I was describing.

view this post on Zulip Morgan Rogers (he/him) (Feb 24 2021 at 13:48):

I also bet that there's some nice interaction between these transforms, so if anyone wants to take these ideas and run with them, I'd love to (eventually) participate in writing a paper proving some of the basic results about them.

view this post on Zulip Joshua Meyers (Feb 24 2021 at 14:29):

Morgan Rogers (he/him) said:

Yes, like I said, discontinuity of that thing as $\epsilon\to 0$ detects if a function is locally constant at $x$, which doesn't happen for any non-constant polynomial. What's interesting is the discontinuity behaviour at $\epsilon>0$ that I was describing.

I don't see the discontinuity you are describing; can you explain? It seems to me that the function you mentioned that takes $x$ and $\epsilon$ as arguments is always continuous in $\epsilon$ for $x\in[-\frac{2}{\sqrt{3}},\frac{2}{\sqrt{3}}]$ and has a discontinuity in $\epsilon$ otherwise.

view this post on Zulip Morgan Rogers (he/him) (Feb 24 2021 at 15:06):

It turns out my hand-drawn sketch of the function was bad. There are fewer discontinuities than I thought, but there's definitely at least one for small $x$.
epsilon.png
Beyond the turning points, the function will always be growing faster in the direction of increasing $|x|$, so the location of the maximum value doesn't switch; it's that switching that allows discontinuities to happen. I thought there was one discontinuity for each turning point when $x$ is small, but only the more distant turning point actually creates a discontinuity, because once again the location of the maximum value is towards/through $0$.

Here there is no discontinuity at $\epsilon = e_1$, but there is one at $\epsilon = e_2$.

view this post on Zulip Joshua Meyers (Feb 24 2021 at 15:23):

Sorry, I am confused, can you please state your claim more precisely? What function are you saying has a discontinuity where?

view this post on Zulip Morgan Rogers (he/him) (Feb 24 2021 at 16:39):

Sure, sorry. I'm saying that for $f(x)=x^3-x$, the transformed function $(x,\epsilon)\mapsto\sup\{d(x,y) : e(f(x),f(y))<\epsilon\}$ has a discontinuity at $\epsilon = 2/3\sqrt{3} - |f(x)|$ for $|x|<1/\sqrt{3}$. I checked the value this time. Specifically, the value jumps from $|x|+1/\sqrt{3}$ to $2/\sqrt{3}-|x|$.

view this post on Zulip Joshua Meyers (Feb 24 2021 at 17:05):

I see now. Let's take $x=0$ for simplicity. When $\epsilon < 2/3\sqrt{3}-|f(x)| = 2/3\sqrt{3}$, the set $\{d(x,y) : e(f(x),f(y))<\epsilon\}$ has 3 connected components --- I think you might be just looking at the connected component that includes $0$. For example, if $\epsilon\to 0$, this set converges to $\{0,1\}$, so the "transformed function" converges to $1$ --- does that align with your intuition?

view this post on Zulip Morgan Rogers (he/him) (Feb 24 2021 at 18:01):

Oh you're right! :relieved: What I wrote wasn't what I was actually using!
I guess what I meant was the transform $(x,\epsilon)\mapsto\sup\{\delta \mid d(x,y)<\delta \implies e(f(x),f(y))<\epsilon\}$
... which behaves somewhat differently, but looks more like what we would be interested in with continuity as a jumping-off point.

view this post on Zulip Morgan Rogers (he/him) (Feb 24 2021 at 18:03):

and then the more natural dual transform would be
$(x,\delta)\mapsto\inf\{\epsilon \mid d(x,y)<\delta \implies e(f(x),f(y))<\epsilon\}$
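On a finite grid both transforms can be sketched directly; this is my approximation (names mine), with the sup over an empty set taken to be $0$, and infinity standing in when no sampled point breaks the implication:

```python
def Omega(f, points, d, e, x, eps):
    """sup{ delta | forall sampled y: d(x,y) < delta => e(f(x),f(y)) < eps }.
    The implication holds exactly for delta up to the distance of the
    nearest sampled point where e(f(x),f(y)) >= eps."""
    bad = [d(x, y) for y in points if e(f(x), f(y)) >= eps]
    return min(bad, default=float("inf"))

def omega(f, points, d, e, x, delta):
    """inf{ eps | forall sampled y: d(x,y) < delta => e(f(x),f(y)) < eps },
    i.e. just the sup of the observed e-values within distance delta."""
    vals = [e(f(x), f(y)) for y in points if d(x, y) < delta]
    return max(vals, default=0.0)

d = e = lambda a, b: abs(a - b)
grid = [i / 1000 for i in range(-2000, 2001)]
f = lambda t: t**3 - t

# f is continuous at 0, so Omega(0, eps) is strictly positive:
print(Omega(f, grid, d, e, 0.0, 0.01) > 0)   # True
```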

view this post on Zulip Joshua Meyers (Feb 24 2021 at 18:06):

That makes more sense. So that dual transform is identical to my function $\omega$. I'll have to think more about the other one.

view this post on Zulip Morgan Rogers (he/him) (Feb 24 2021 at 18:08):

Right, they do coincide, which is how I got confused about the definition of the other one :wink: Thanks for having the patience to work out what I was doing wrong!

view this post on Zulip Joshua Meyers (Feb 24 2021 at 21:02):

So let's think about the other one: $(x,\epsilon)\mapsto\sup\{\delta \mid d(x,y)<\delta \implies e(f(x),f(y))<\epsilon\}$

This won't be defined if that set is empty, but I guess we can define it to be $0$ then, because in $\mathbb{R}_+$, $0$ is the supremum of the empty set.

view this post on Zulip Joshua Meyers (Feb 24 2021 at 21:02):

Then we can say that $f$ is continuous at $x$ iff this function is nonzero for all values of $\epsilon$.

view this post on Zulip Joshua Meyers (Feb 24 2021 at 21:05):

I guess the dual transform is a kind of measure of "efficiency": how much $\epsilon$-boundedness do you get for a given $\delta$-boundedness?

And the transform answers a similar question: how much $\delta$-boundedness do you need to get a given $\epsilon$-boundedness?

view this post on Zulip Joshua Meyers (Feb 24 2021 at 21:05):

I could see these being useful for thinking about complicated estimations

view this post on Zulip Joshua Meyers (Feb 24 2021 at 21:19):

Here's an interesting way of thinking about it: fixing $x$, define a partial order on $\mathbb{R}_+\times\{0,1\}$ by setting $(\delta,0)\leq(\delta',0)$ if $\delta\leq\delta'$, $(\epsilon,1)\leq(\epsilon',1)$ if $\epsilon\leq\epsilon'$, and $(\delta,0)\leq(\epsilon,1)$ if for all $y$, $d(x,y)<\delta \implies e(f(x),f(y))<\epsilon$.

Then the transforms can be written $\epsilon\mapsto\sup\{\delta \mid (\delta,0)\leq(\epsilon,1)\}$ and $\delta\mapsto\inf\{\epsilon \mid (\delta,0)\leq(\epsilon,1)\}$.

view this post on Zulip Joshua Meyers (Feb 24 2021 at 21:22):

We can also write this more cleanly by considering the injections $i_0,i_1:\mathbb{R}_+\to\mathbb{R}_+\times\{0,1\}$ defined by $i_0(\delta)=(\delta,0)$ and $i_1(\epsilon)=(\epsilon,1)$. Then we get, for the transforms, $\epsilon\mapsto\sup\{\delta \mid i_0(\delta)\leq i_1(\epsilon)\}$ and $\delta\mapsto\inf\{\epsilon \mid i_0(\delta)\leq i_1(\epsilon)\}$.

view this post on Zulip Joshua Meyers (Feb 24 2021 at 21:23):

And we could generalize this to any cospan of posets $A\xrightarrow{i_0}P\xleftarrow{i_1}B$!
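A toy finite version of that generalization (my construction: everything here, including the "conversion rate" example, is made up for illustration). Given a cospan of posets $A\xrightarrow{i_0}P\xleftarrow{i_1}B$, the two transforms send $b$ to the largest $a$ with $i_0(a)\leq i_1(b)$, and $a$ to the smallest $b$ with $i_0(a)\leq i_1(b)$:

```python
def up_transform(A, B, leq_P, i0, i1, b):
    """max{ a in A | i0(a) <= i1(b) } -- assumes A is a chain."""
    good = [a for a in A if leq_P(i0(a), i1(b))]
    return max(good) if good else None

def down_transform(A, B, leq_P, i0, i1, a):
    """min{ b in B | i0(a) <= i1(b) } -- assumes B is a chain."""
    good = [b for b in B if leq_P(i0(a), i1(b))]
    return min(good) if good else None

# Toy example: A = B = {0,...,5} as chains, glued into P by the cross
# relation (a,0) <= (b,1) iff 2*a <= b  (a "conversion rate" of 2).
A = B = range(6)
i0 = lambda a: (a, 0)
i1 = lambda b: (b, 1)

def leq_P(p, q):
    (u, s), (v, t) = p, q
    if s == t:
        return u <= v
    return s == 0 and t == 1 and 2 * u <= v

print(down_transform(A, B, leq_P, i0, i1, 2))  # 4: smallest b with 2*2 <= b
print(up_transform(A, B, leq_P, i0, i1, 5))    # 2: largest a with 2*a <= 5
```

This is the resource-theoretic reading mentioned below: "how much of resource $b$ do I need to cover $a$", and dually.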

view this post on Zulip Joshua Meyers (Feb 24 2021 at 21:24):

This might be useful for answering questions about resource convertibility, as in resource theories

view this post on Zulip Morgan Rogers (he/him) (Feb 24 2021 at 22:10):

Joshua Meyers said:

Here's an interesting way of thinking about it: fixing $x$, define a partial order on $\mathbb{R}_+\times\{0,1\}$ by setting $(\delta,0)\leq(\delta',0)$ if $\delta\leq\delta'$, $(\epsilon,1)\leq(\epsilon',1)$ if $\epsilon\leq\epsilon'$, and $(\delta,0)\leq(\epsilon,1)$ if for all $y$, $d(x,y)<\delta \implies e(f(x),f(y))<\epsilon$.

Then the transforms can be written $\epsilon\mapsto\sup\{\delta \mid (\delta,0)\leq(\epsilon,1)\}$ and $\delta\mapsto\inf\{\epsilon \mid (\delta,0)\leq(\epsilon,1)\}$.

If you just consider the two copies of $\mathbb{R}_+$ separately, then that last line looks like an adjunction (cf. the title of the topic)!

view this post on Zulip Joshua Meyers (Feb 24 2021 at 22:42):

It does! We can get it to look even more like an adjunction:

Let $\Omega$ be the function $\epsilon\mapsto\sup\{\delta \mid (\delta,0)\leq(\epsilon,1)\}$ and $\omega$ be the function $\delta\mapsto\inf\{\epsilon \mid (\delta,0)\leq(\epsilon,1)\}$.

Then $\delta\leq\Omega(\epsilon)\Longleftrightarrow(\delta,0)\leq(\epsilon,1)\Longleftrightarrow\omega(\delta)\leq\epsilon$ !
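A finite spot-check of the Galois-connection reading of these two maps, $\delta\leq\Omega(\epsilon)\iff\omega(\delta)\leq\epsilon$ (my code; grid-approximated, with sample values chosen away from the boundary cases where the strict inequalities in the definitions bite):

```python
def Omega(f, grid, x, eps):
    """Grid version of eps |-> sup{ delta | (delta,0) <= (eps,1) }."""
    bad = [abs(y - x) for y in grid if abs(f(y) - f(x)) >= eps]
    return min(bad, default=float("inf"))

def omega(f, grid, x, delta):
    """Grid version of delta |-> inf{ eps | (delta,0) <= (eps,1) }."""
    vals = [abs(f(y) - f(x)) for y in grid if abs(y - x) < delta]
    return max(vals, default=0.0)

grid = [i / 1000 for i in range(-2000, 2001)]
f = lambda t: t**3 - t
x = 0.0

# The two inequalities should hold or fail together:
checks = [
    (delta <= Omega(f, grid, x, eps)) == (omega(f, grid, x, delta) <= eps)
    for eps in (0.0031, 0.0173, 0.21)
    for delta in (0.0044, 0.052, 0.7)
]
print(all(checks))  # True
```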

view this post on Zulip Joshua Meyers (Feb 24 2021 at 22:45):

I think your two transformations form a literal adjunction!

view this post on Zulip Joshua Meyers (Feb 24 2021 at 22:51):

Now this makes me wonder: what does this say about $f$? The first thing is that $f$ is continuous at $x$ iff $\Omega$ is always non-zero, i.e. for every $\epsilon$ we can pick a non-zero $\delta$ such that etc.

view this post on Zulip Morgan Rogers (he/him) (Feb 25 2021 at 09:19):

This adjunction expresses what $x$ can "see" about $f$ using only distance information, which includes local stuff like continuity and local constant-ness, but also more subtle distant behaviour like what I worked out re the cubic. Tracing $\Omega$ out as $x$ varies for (e.g.) piecewise linear functions can have interesting results. I think it can also do smoothness properties like derivatives, at least for real functions.

view this post on Zulip John Baez (Feb 25 2021 at 16:09):

How could you show $f:\mathbb{R}\to\mathbb{R}$ given by

$$f(x) = |x|$$

is not smooth using only distance information?

$$d(f(0),f(x)) = d(g(0),g(x))$$

where

$$g(x) = x$$

is smooth, so if you want to detect the non-smoothness at $0$ you'd need to look at some point other than $0$. But actually I think

$$d(f(y),f(x)) = d(g(y),g(x))$$

for all $x,y\in\mathbb{R}$. If this is true I guess it's hopeless, right?

view this post on Zulip Joshua Meyers (Feb 25 2021 at 16:48):

I don't think that's true: let $x=1$, $y=-1$.

view this post on Zulip Morgan Rogers (he/him) (Feb 25 2021 at 16:49):

I should really avoid making imprecise concluding statements here on Zulip; someone always ends up picking me up on it. The best that $\Omega$ can do is compute the limsup of the modulus of the derivative at $y$ as $y\to x$. If I have $f(x)=x$ for $x\geq 0$ and $f(x)=kx$ for $x<0$ with $|k|\leq 1$, then we have $\Omega_x(\epsilon)/\epsilon\to 1$ as $\epsilon\to 0$ when $x\geq 0$ and $\Omega_x(\epsilon)/\epsilon\to|k|$ as $\epsilon\to 0$ when $x<0$.

In particular, this has the nice property of having a value even at $0$ where the function is not differentiable, and it can detect any discontinuities in $|df/dx|$ (in the case $|k|<1$), but it can't detect points at which the sign of $df/dx$ flips without changing in modulus.

view this post on Zulip Joshua Meyers (Feb 25 2021 at 16:50):

But more to the point, I think that $\omega_f(x,\delta)=\omega_g(x,\delta)$ for all $x\in\mathbb{R}$, $\delta\geq 0$.

view this post on Zulip Morgan Rogers (he/him) (Feb 25 2021 at 16:55):

Morgan Rogers (he/him) said:

In particular, this has the nice property of having a value even at $0$ where the function is not differentiable, and it can detect any discontinuities in $|df/dx|$ (in the case $|k|<1$), but it can't detect points at which the sign of $df/dx$ flips without changing in modulus.

This can be adjusted for with some perturbations, though: add on a function with strictly positive (small) derivative everywhere; then applying $\Omega$ will detect the points where the sign suddenly changed before as discontinuities.

view this post on Zulip John Baez (Feb 25 2021 at 17:10):

I should really avoid making imprecise concluding statements here on Zulip, someone always ends up picking me up on it.

This is how math progresses! :upside_down:

There's really nothing more fun than reading a conjecture and trying to quickly disprove it with a counterexample.

view this post on Zulip Joshua Meyers (Feb 25 2021 at 17:55):

Interesting points about $\Omega_x(\epsilon)/\epsilon$. A small correction: I think you mean $\Omega_x(\epsilon)/\epsilon\to 1/|k|$, not $|k|$. So we can say $(\liminf_{\epsilon\to 0}\Omega_x(\epsilon)/\epsilon)^{-1}=\limsup_{\delta\to 0}\omega_x(\delta)/\delta=\limsup_{y\to x}|f(y)-f(x)|/|y-x|$
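A numerical illustration of the right-hand equality (my code), for Morgan's piecewise-linear example with $k=0.5$, at $x=-1$ where the limit should be the local slope modulus $0.5$:

```python
# Grid sketch: omega_x(delta)/delta should approach
# limsup_{y->x} |f(y)-f(x)|/|y-x| as delta -> 0, here for
# f(t) = t (t >= 0), k*t (t < 0) with k = 0.5, at x = -1.

k = 0.5
f = lambda t: t if t >= 0 else k * t

def omega(x, delta, samples=4001):
    """Grid approximation of sup{ |f(y)-f(x)| : |y-x| < delta }."""
    step = 2 * delta / (samples - 1)
    return max(
        abs(f(x - delta + i * step) - f(x))
        for i in range(samples)
        if abs((x - delta + i * step) - x) < delta
    )

# The ratio settles near |k| = 0.5, the local slope modulus at x = -1:
for delta in (0.1, 0.01, 0.001):
    print(round(omega(-1.0, delta) / delta, 3))
```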

view this post on Zulip Joshua Meyers (Feb 25 2021 at 17:55):

So it is kind of like a derivative but it always exists

view this post on Zulip Joshua Meyers (Feb 25 2021 at 18:29):

Probably related to the Dini derivatives

view this post on Zulip Morgan Rogers (he/him) (Feb 26 2021 at 09:54):

My bad, I guess it should have been $\epsilon/\Omega_x(\epsilon)$.

view this post on Zulip Morgan Rogers (he/him) (Feb 26 2021 at 09:55):

Yes, it does indeed seem like the Dini derivative, except symmetric. Perhaps $\omega$ gives the lower one?

view this post on Zulip Joshua Meyers (Feb 27 2021 at 21:43):

I think that $\limsup_{\delta\to 0}\omega_x(\delta)/\delta = \max\{D^+f(x),\,-D_+f(x),\,D^-f(x),\,-D_-f(x)\}$.

view this post on Zulip Joshua Meyers (Feb 27 2021 at 21:47):

This shows that $\omega$ really loses a lot of information in a "real numbers" context (rather than a general metric space context), since we're coming from both sides at once and taking the absolute value... There could be more sensitive moduli, for example:

$$\omega^+(x,\delta)\coloneqq \inf\{\epsilon \mid (\forall y)\ 0<y-x<\delta \Rightarrow f(y)-f(x) < \epsilon\}$$
$$\omega_+(x,\delta)\coloneqq \inf\{\epsilon \mid (\forall y)\ 0<y-x<\delta \Rightarrow f(x)-f(y) < \epsilon\}$$
$$\omega^-(x,\delta)\coloneqq \inf\{\epsilon \mid (\forall y)\ 0<x-y<\delta \Rightarrow f(y)-f(x) < \epsilon\}$$
$$\omega_-(x,\delta)\coloneqq \inf\{\epsilon \mid (\forall y)\ 0<x-y<\delta \Rightarrow f(x)-f(y) < \epsilon\}$$

Conjecture: these correspond to the Dini derivatives.
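A grid sanity check of the four one-sided moduli (my code) on the earlier piecewise-linear example $f(t)=t$ for $t\geq 0$, $kt$ for $t<0$ with $k=0.5$, at $x=0$; this only illustrates that they separate the four directional behaviours, and is not evidence for the conjecture's exact form. The value is floored at $0$, which agrees with the inf-of-$\epsilon$ definitions when $f$ is continuous at $x$:

```python
k = 0.5
f = lambda t: t if t >= 0 else k * t

def modulus(sign_y, sign_v, x, delta, n=4000):
    """Grid analogue of the inf-of-epsilon definitions: the sup of
    sign_v * (f(y) - f(x)) over sampled y with 0 < sign_y*(y - x) < delta,
    floored at 0 (valid for f continuous at x)."""
    ys = (x + sign_y * delta * i / n for i in range(1, n))
    return max(max(sign_v * (f(y) - f(x)) for y in ys), 0.0)

x, delta = 0.0, 0.001
for name, sy, sv in [("omega^+", +1, +1), ("omega_+", +1, -1),
                     ("omega^-", -1, +1), ("omega_-", -1, -1)]:
    # Right slope is 1, left slope is k = 0.5, and the function only
    # increases to the right / decreases to the left of 0, so the four
    # rates come out as 1.0, 0.0, 0.0, 0.5.
    print(name, round(modulus(sy, sv, x, delta) / delta, 3))
```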

view this post on Zulip Ellis D. Cooper (Mar 16 2021 at 17:23):

For your information: I observed that continuity via the epsilon-delta formula can be interpreted as an adjunction at the Bowdoin College Category Theory Advanced Science Seminar, June 24 to August 14, 1969. Everyone was there: Eilenberg, Mac Lane, and so on. Mac Lane gave a series of lectures, and kindly presented my idea to the (large) audience of categorists. Eilenberg wrote for me a letter of recommendation to attend. I had the good fortune to be seated next to the analyst William F. Donoghue, Jr., and I had told him my idea, but was stuck on a detail. He told me about the modulus of continuity, and so began (and ended) my fame as a categorist.