Category Theory
Zulip Server
Archive

You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.


Stream: learning: questions

Topic: Kastler, Cyclic cohomology within the differential envelope


view this post on Zulip Eric Forgy (Dec 05 2020 at 23:13):

I would love to get my hands on this book, but it seems to be out of print. Is there an electronic version floating around somewhere? Any ideas how to get it?

Edit: I miss the UIUC library :blush:

view this post on Zulip Eric Forgy (Dec 05 2020 at 23:19):

PS: Btw, I am reading this

https://projecteuclid.org/euclid.pjm/1102650385

and could really use some help if this stuff is of interest to anyone here :blush:

view this post on Zulip John Baez (Dec 05 2020 at 23:29):

In case it helps someone be interested: there's a forgetful functor from differential graded algebras to algebras, and the paper is studying things similar to the left adjoint of this functor.

view this post on Zulip Eric Forgy (Dec 05 2020 at 23:30):

I'm looking for more information about what they refer to as "Karoubi differential envelope"

view this post on Zulip John Baez (Dec 05 2020 at 23:31):

For example at the top of page 249 they're looking at the left adjoint of the forgetful functor from \mathbb{Z}/2-graded differential algebras to \mathbb{Z}/2-graded algebras. They don't say "left adjoint", but buzzwords like "universal property", and the commutative triangle, give it away.

view this post on Zulip Eric Forgy (Dec 05 2020 at 23:31):

John, that sounds cool :blush:

Yeah, one way to say it is that I am trying to understand the functor from algebras to differential graded algebras.

view this post on Zulip John Baez (Dec 05 2020 at 23:32):

The "Karoubi differential envelope", if I remember it correctly, is the left adjoint of the forgetful functor from differential graded algebras (and I mean ordinary \mathbb{N}-graded algebras) to algebras.

view this post on Zulip John Baez (Dec 05 2020 at 23:33):

I used to be really into this stuff in grad school, when noncommutative geometry was sorta new.

view this post on Zulip Eric Forgy (Dec 05 2020 at 23:33):

There is also (I think) a related functor (or something) from directed graphs to DGAs.

view this post on Zulip John Baez (Dec 05 2020 at 23:36):

There are well-known functors from

to

to

to

so we can compose those.

view this post on Zulip Eric Forgy (Dec 05 2020 at 23:36):

A universal first-order differential calculus corresponds to a complete graph.

view this post on Zulip Eric Forgy (Dec 05 2020 at 23:37):

(I easily get myself in trouble throwing around words like "universal" especially here :sweat_smile: )

view this post on Zulip Eric Forgy (Dec 05 2020 at 23:40):

I'm also looking at:

Differential Calculi on Commutative Algebras

view this post on Zulip Eric Forgy (Dec 05 2020 at 23:45):

I call a differential calculus arising from a directed graph a "discrete calculus".

So I'd say something like, "A discrete calculus is obtained from the universal discrete calculus by removing edges from the complete graph."

view this post on Zulip Eric Forgy (Dec 05 2020 at 23:50):

This is explained in Introduction to Noncommutative Geometry of Commutative Algebras and Applications in Physics

view this post on Zulip Eric Forgy (Dec 05 2020 at 23:53):

So I am basically trying to link up Müller-Hoissen's paper with the Coquereaux and Kastler paper, but there seems to be some inconsistency between the two that I'm trying to sort out, and I'm hoping this reference will help :pray:

view this post on Zulip Eric Forgy (Dec 05 2020 at 23:56):

John Baez said:

The "Karoubi differential envelope", if I remember it correctly, is the left adjoint of the forgetful functor from differential graded algebras (and I mean ordinary \mathbb{N}-graded algebras) to algebras.

Any idea where I can read about U: DGAlg -> Alg or is it trivial enough to state here? I hope it isn't just "forgetting the grade" :blush:

view this post on Zulip Eric Forgy (Dec 05 2020 at 23:58):

Would it be simply projecting to grade 0?

view this post on Zulip John Baez (Dec 06 2020 at 06:29):

I guess there are a couple functors from DGAlg to Alg, as you mention: forgetting the grading and differential, or taking the degree-zero part.

view this post on Zulip John Baez (Dec 06 2020 at 06:31):

Let me think about which one is right adjoint to the following functor F: Alg -> DGAlg:

the functor that takes an algebra A, decrees everything in it is of degree 0, and freely throws in a differential d that increases the degree by one and obeys all the differential graded algebra rules.

view this post on Zulip John Baez (Dec 06 2020 at 06:33):

So say we want a functor U: DGAlg -> Alg that's right adjoint to this F.

view this post on Zulip John Baez (Dec 06 2020 at 06:35):

I.e. given an algebra A and a differential graded algebra B we want dga homomorphisms

f: FA -> B

to correspond naturally to algebra homomorphisms

g: A -> UB

view this post on Zulip John Baez (Dec 06 2020 at 06:39):

Note that f: FA -> B is completely determined by what it does to guys of degree zero, since we can write everything in FA as a linear combination of guys of this form:

a da' da'' da''' ...

i.e. a product of a guy in A and a finite number of differentials of guys in A,

view this post on Zulip John Baez (Dec 06 2020 at 06:41):

and f of such a product must be

f(a) d(f(a')) d(f(a'')) d(f(a''')) ...

view this post on Zulip John Baez (Dec 06 2020 at 06:45):

Since f preserves the degree, it's determined by some algebra homomorphism g: A -> UB where UB is the degree zero part of B.

view this post on Zulip John Baez (Dec 06 2020 at 06:46):

Furthermore I claim any algebra homomorphism g: A -> UB extends to some dga homomorphism f: FA -> B by the formula above, or more precisely

f( a da' da'' ... ) = g(a) d(g(a')) d(g(a'')) ...

view this post on Zulip John Baez (Dec 06 2020 at 06:48):

So, if all this is right, the right adjoint U: DGAlg -> Alg takes the degree-zero part of a dga, and the left adjoint F: Alg -> DGAlg freely creates a dga whose degree-zero part is the given algebra.

view this post on Zulip John Baez (Dec 06 2020 at 06:49):

All this is standard stuff... easier to figure out than to look up!

view this post on Zulip John Baez (Dec 06 2020 at 06:50):

The discipline of adjoint functors is a way to make sure one isn't just randomly making stuff up.

view this post on Zulip Eric Forgy (Dec 06 2020 at 07:25):

:heart: :raised_hands:

That is so beautiful :heart_eyes:

I wish I could just see things as clearly as that.

The discipline of adjoint functors is a way to make sure one isn't just randomly making stuff up.

Yes. Totally. That is one reason I "try" to bring CT into the picture: I can build models and even write code to verify that it works, but it sometimes feels shaky without some nice maths to bring it together.

view this post on Zulip Eric Forgy (Dec 06 2020 at 07:26):

Thank you :raised_hands:

view this post on Zulip Eric Forgy (Dec 06 2020 at 07:34):

If I understand, the stuff in Kastler's papers is explicitly constructing the functor F: \text{Alg} \to \text{DGAlg}, but the details include actually constructing d from A as well.

view this post on Zulip Eric Forgy (Dec 06 2020 at 07:56):

Referring back to the article that references this book (that I'd like to get my hands on), I think I understand how d: A\to J is constructed (when A is unital), but then it says

Since d uniquely extends to a differential d of \Omega A (i.e. a \partial-graded derivation of vanishing square), moreover of \mathbb{N}-grade 1, \Omega A becomes a \mathbb{Z}/2-graded (in fact a bigraded) differential algebra.

Unfortunately, it isn't obvious to me how to explicitly construct

d: J^n \to J^{n+1}

and I need that explicit construction if I am going to turn that into some code and generate numerical results.

Since U: \text{DGAlg}\to \text{Alg} is so simple, is there a way to make F: \text{Alg}\to\text{DGAlg} 100% explicit (as in I can write code and compute stuff) at least when A is finite dimensional and commutative?

Now, I "think" this construction produces a universal DGA, i.e.

A \overset{F}{\mapsto}\tilde\Omega A

where \tilde\Omega A is universal, so I'd still need some explicit way to obtain other DGAs once we have the universal DGA, but I'd be super happy to get to the universal DGA explicitly as a start.

view this post on Zulip Eric Forgy (Dec 06 2020 at 08:18):

Btw, if A is finite-dimensional with basis elements e^i and product e^i e^j = \delta^{i,j}e^i, then the unit is just

1 = \sum_i e^i

and

a = \sum_i a(i) e^i.

The map

d: A\to J

is given explicitly by C&K as

da = 1\otimes a - (-1)^{|a|} a\otimes 1.

However, this can be written as a graded commutator

\begin{aligned}da &= 1\otimes a - (-1)^{|a|} a\otimes 1\\&= (1\otimes 1)a - (-1)^{|a|} a(1\otimes 1) \\ &= [G,a],\end{aligned}

where

\begin{aligned} G &= 1\otimes 1 \\ &= \sum_{i,j} e^i\otimes e^j.\end{aligned}

Now, if we interpret e^i as (dual to) vertices and e^i\otimes e^j as (dual to) directed edges from i to j of a directed graph, then G is related to the adjacency matrix of a complete directed graph.

So we should have some natural way to go from

\text{DiGraph}\to \text{DGAlg}

that is closely related to the above.
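Since the stated goal is code, here is a minimal numpy sketch (my own, purely illustrative, not from C&K) of the degree-0 case above: elements of A are vectors with pointwise product, elements of A\otimes A are matrices, G is the all-ones matrix, and da = [G, a].

```python
import numpy as np

n = 4                                   # dimension of A; the value is arbitrary
rng = np.random.default_rng(0)
a = rng.standard_normal(n)              # a = sum_i a(i) e^i, product is pointwise
b = rng.standard_normal(n)

# A (x) A is represented by n x n matrices: M[i, j] is the coefficient of e^i (x) e^j.
# Bimodule actions: x(b (x) c) = (xb) (x) c scales rows, (b (x) c)x = b (x) (cx) scales columns.
def left(v, M):  return v[:, None] * M
def right(M, v): return M * v[None, :]

G = np.ones((n, n))                     # G = 1 (x) 1: the complete directed graph

def d(v):                               # da = 1 (x) a - a (x) 1 = [G, a]
    return right(G, v) - left(v, G)

assert np.allclose(d(a), a[None, :] - a[:, None])              # (da)[i, j] = a(j) - a(i)
assert np.allclose(d(a * b), right(d(a), b) + left(a, d(b)))   # Leibniz rule holds
```

The last assertion checks the Leibniz rule d(ab) = (da)b + a(db) numerically, which is the sanity check that this matrix model really is a first-order calculus.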

view this post on Zulip John Baez (Dec 06 2020 at 16:15):

Eric Forgy said:

Referring back to the article that references this book (that I'd like to get my hands on), I think I understand how d: A\to J is constructed (when A is unital), but then it says

Since d uniquely extends to a differential d of \Omega A (i.e. a \partial-graded derivation of vanishing square), moreover of \mathbb{N}-grade 1, \Omega A becomes a \mathbb{Z}/2-graded (in fact a bigraded) differential algebra.

Unfortunately, it isn't obvious to me how to explicitly construct

d: J^n \to J^{n+1}

and I need that explicit construction if I am going to turn that into some code and generate numerical results.

Is J^n your notation for the things of grade n in \Omega A?

I don't want to worry about the bigraded stuff now; I'd rather talk about an associative algebra A and the free dga on that algebra, which I'd probably call \Omega A - but in case Kastler means something really different by that I'll call it FA for now.

Since U: \text{DGAlg}\to \text{Alg} is so simple, is there a way to make F: \text{Alg}\to\text{DGAlg} 100% explicit (as in I can write code and compute stuff) at least when A is finite dimensional and commutative?

FA is pretty explicit. First, note that every grade-n element of FA is a linear combination of guys like

a_0 da_1 \cdots da_n

It takes a bit of work to see this because if you freely start multiplying things you get more complicated expressions like

a_0 (da_1) b_1 (da_2) b_2 \cdots (da_n) b_n

So how can we get rid of all these b's? The trick is to remember that we're requiring

d(ab) = (da)b + a(db)

so

(da)b = d(ab) - a(db)

gives a recipe for taking da multiplied on the right by b and turning it into a difference of two terms of the form dx multiplied by something on the left:

(da)b = 1 d(ab) - a(db)

Repeatedly using this rule we can take

a_0 (da_1) b_1 (da_2) b_2 \cdots (da_n) b_n

and show it's equal to a linear combination of terms of the form

x_0 dx_1 \cdots dx_n

where the x's are elements of A depending on the a's and b's in some way.

view this post on Zulip John Baez (Dec 06 2020 at 16:20):

Anyway, once you know this it's easy to say what dd does:

d(x_0 dx_1 \cdots dx_n) = dx_0 dx_1 \cdots dx_n

view this post on Zulip John Baez (Dec 06 2020 at 16:21):

But this is not magic, or cleverness! It just follows from the rules of a dga: repeatedly use d^2 = 0 and d(xy) = (dx)y + x dy.

view this post on Zulip John Baez (Dec 06 2020 at 16:22):

Indeed, everything about the free dga FA follows from the rules of a dga, the rules for adding and multiplying elements of A, and nothing more. That's what "free" means, in practice.

view this post on Zulip John Baez (Dec 06 2020 at 16:23):

So if you know how to compute in A, you know - after some thought - how to compute in FA.
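The rewriting step can be mechanized. Below is a toy sketch (my own, not from any reference) of right multiplication in FA for the commutative algebra with basis e^0, ..., e^{n-1} and product e^i e^j = \delta_{ij} e^i, keeping everything in the normal form x_0 dx_1 \cdots dx_k via the rule (dx)y = d(xy) - x(dy).

```python
from collections import defaultdict

# An element of FA is a dict mapping words (i0, i1, ..., ik) -- meaning
# e^{i0} d e^{i1} ... d e^{ik} -- to coefficients, i.e. the normal form above.

def rmul_word(word, b):
    """Right-multiply the word e^{i0} de^{i1} ... de^{ik} by the basis
    element e^b, normalizing with (dx)y = d(xy) - x(dy)."""
    out = defaultdict(float)
    if len(word) == 1:
        if word[0] == b:                     # e^i e^b = delta_{ib} e^i
            out[word] += 1.0
    else:
        u, xk = word[:-1], word[-1]
        # (u de^{xk}) e^b = u d(e^{xk} e^b) - (u e^{xk}) de^b
        if xk == b:                          # e^{xk} e^b = delta_{xk,b} e^{xk}
            out[word] += 1.0
        for v, c in rmul_word(u, xk).items():
            out[v + (b,)] -= c
    return {w: c for w, c in out.items() if c != 0.0}
```

For example, `rmul_word((0, 0), 1)` returns `{(0, 1): -1.0}`, i.e. e^0 de^0 e^1 = -e^0 de^1, and `rmul_word((1, 1), 1)` returns `{}`, i.e. e^1 de^1 e^1 = 0.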

view this post on Zulip Eric Forgy (Dec 06 2020 at 21:42):

Thank you John :heart: :raised_hands:

I totally get what you mean about moving everything to the left (or right) etc using the product rule :+1:

It is not a mathematical problem, but I prefer to not leave a differential exposed directly on either the left or right side like that (but I get what you mean), so I'd say write linear combinations of elements like

x_0 dx_1 \cdots dx_n x_n.

To see why, consider a finite (commutative) algebra with basis e^i, product e^i e^j = \delta^{i,j} e^i and unit 1 = \sum_i e^i. We have

\begin{aligned} de^j &= 1\otimes e^j - e^j\otimes 1 \\ &= \sum_k \left(e^k\otimes e^j - e^j\otimes e^k\right)\end{aligned}

and

\begin{aligned} e^i de^j &= e^i\otimes e^j - \delta^{i,j} e^i\otimes 1 \\ &= e^i\otimes e^j - \delta^{i,j} \sum_l e^i\otimes e^l\end{aligned}

so unless you restrict to i\ne j for some reason, e^i de^j is a combination of e^m\otimes e^n. However, if you "close off" the dangling differential on the right, you get

e^i de^j e^k = (\delta^{j,k} - \delta^{i,j}) e^i\otimes e^k.

We can read off a bunch of info from this, e.g. if i=j=k, we have

e^i de^i e^i = 0

so all three indices cannot be the same. Similarly, if none of the indices are the same, i.e. i\ne j, j\ne k and k\ne i, then

e^i de^j e^k = 0

vanishes again. However, if two of the three indices are the same we have

e^i de^j e^i = 0

and

e^i de^j e^j = -e^i de^i e^j = (1-\delta^{i,j}) e^i\otimes e^j

so any degree 1 element is a linear combination of e^i\otimes e^k where i\ne k, i.e. linear combinations of e^i de^j e^k. In fact

e^i de^j = \sum_k e^i de^j e^k.

(I have more to say - including actual code - but this comment is already too long :sweat_smile:)
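For what it's worth, the identity e^i de^j e^k = (\delta^{j,k} - \delta^{i,j}) e^i\otimes e^k is easy to check by brute force. A small numpy sketch (mine, not Eric's actual code), with 1-forms as matrices:

```python
import numpy as np

n = 3
E = np.eye(n)                                    # E[i] is the basis vector e^i

def left(v, M):  return v[:, None] * M           # a(b (x) c) = ab (x) c: scale rows
def right(M, v): return M * v[None, :]           # (b (x) c)a = b (x) ca: scale columns
def d(v):        return v[None, :] - v[:, None]  # da = 1 (x) a - a (x) 1

# Check e^i de^j e^k = (delta_{jk} - delta_{ij}) e^i (x) e^k for every index triple.
for i in range(n):
    for j in range(n):
        for k in range(n):
            lhs = right(left(E[i], d(E[j])), E[k])            # e^i de^j e^k
            rhs = ((j == k) - (i == j)) * np.outer(E[i], E[k])
            assert np.allclose(lhs, rhs)
```

In particular the loop confirms the special cases quoted above: the triple vanishes when all three indices agree and when all three differ.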

view this post on Zulip John Baez (Dec 06 2020 at 21:51):

Eric Forgy said:

It is not a mathematical problem, but I prefer to not leave a differential exposed directly on either the left or right side like that though (but I get what you mean) so I'd say write linear combinations of elements like

x_0 dx_1 \cdots dx_n x_{n+1} [typo fixed]

That's okay if you want it; just beware that two linear combinations of elements like this can be equal in highly nonobvious ways!

view this post on Zulip John Baez (Dec 06 2020 at 21:52):

The great thing about linear combinations of elements like

x_0 dx_1 \cdots dx_n

is that it's really easy to tell when they're equal. The only relations that hold are 'obvious' ones saying this expression is linear in each variable. Namely:

(c x_0 + c' x_0') dx_1 \cdots dx_n \quad =\quad c x_0 dx_1 \cdots dx_n \quad + \quad c' x_0' dx_1 \cdots dx_n

where c,c' are numbers (elements of our field), and similarly

x_0 dx_1 \cdots d(c x_i + c' x'_i) \cdots dx_n \quad = \quad c x_0 dx_1 \cdots dx_i \cdots dx_n \quad +\quad c' x_0 dx_1 \cdots dx'_i \cdots dx_n

and of course the consequences of these. But these are quite manageable.

view this post on Zulip John Baez (Dec 06 2020 at 21:54):

So: if you have a basis of your algebra A, you get a basis of the free dga on A by taking elements like

x_0 dx_1 \cdots dx_n

where each of the x_i is chosen to be a basis element!

view this post on Zulip Eric Forgy (Dec 06 2020 at 21:55):

John Baez said:

That's okay if you want it; just beware that two linear combinations of elements like this can be equal in nonobvious ways!

Yes :blush:

When I am talking about generating the space, I think it is a little more intuitive to have that element on the right, but yeah, totally, when I compute stuff, I write all coefficients on one side or the other (and the choice can be interesting because degree 0 and degree 1 elements do not commute) :blush:

view this post on Zulip Eric Forgy (Dec 06 2020 at 22:04):

Btw, with f = \sum_i f(i) e^i, g = \sum_j g(j) e^j and h = \sum_k h(k) e^k, we have

\begin{aligned} f(dg)h &= \sum_{i,j,k} f(i) g(j) h(k) e^i de^j e^k \\ &= \sum_{i,j} f(i)\left[g(j)-g(i)\right] h(j) e^i\otimes e^j\end{aligned}

which explains why we think of e^i\otimes e^j (which I usually write e^{i,j}) as (dual to) a directed edge.
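A quick numerical cross-check of this formula (a sketch of my own, under the same diagonal-product assumptions): compute f(dg)h with the row/column bimodule actions and compare entry by entry against the coefficient f(i)[g(j)-g(i)]h(j).

```python
import numpy as np

n = 5
rng = np.random.default_rng(1)
f, g, h = rng.standard_normal((3, n))

# f (dg) h via the bimodule actions: dg[i, j] = g(j) - g(i),
# left action scales row i by f(i), right action scales column j by h(j).
dg = g[None, :] - g[:, None]
fdgh = f[:, None] * dg * h[None, :]

# The claimed coefficient formula, computed independently with explicit loops.
coeff = np.array([[f[i] * (g[j] - g[i]) * h[j] for j in range(n)]
                  for i in range(n)])

assert np.allclose(fdgh, coeff)
assert np.allclose(np.diag(fdgh), 0)   # no loops: only directed edges i != j survive
```

The vanishing diagonal is the point of the directed-edge interpretation: a 1-form has components only along edges i \to j with i \ne j.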

view this post on Zulip John Baez (Dec 06 2020 at 22:07):

Btw, I don't know what you're doing with these tensor products here, since you didn't explain it. One trick people commonly do is write

a_1 d a_2 \cdots d a_n

as

a_1 \otimes a_2 \otimes \cdots \otimes a_n

but it doesn't look like you're doing that.

view this post on Zulip Eric Forgy (Dec 06 2020 at 22:13):

since you didn't explain it

Yeah :sweat_smile: The tensor product is just the usual tensor product of the respective vectors. 1 is the vector with all "components" equal to 1, i.e.

1 = \sum_i e^i.

So if a\in A, then a is a vector

a = \sum_i a(i) e^i

so

1\otimes a = \left(\sum_i e^i\right) \otimes \left(\sum_j a(j) e^j\right) = \sum_{i,j} a(j) e^i\otimes e^j

is just the usual tensor product.

We do need to define a(b\otimes c) and (a\otimes b)c, but those are straightforward:

a(b\otimes c) := (ab)\otimes c

and

(a\otimes b)c := a\otimes (bc).

view this post on Zulip John Baez (Dec 06 2020 at 22:18):

Okay - but you're not telling me the thing I really need to know, which is the differential in terms of these tensor products. Like if I write a\, db \, dc, what's that for you?

view this post on Zulip John Baez (Dec 06 2020 at 22:19):

There's one standard convention where it's called a \otimes b \otimes c. You'll see this in lots of work on Hochschild and cyclic homology.

view this post on Zulip John Baez (Dec 06 2020 at 22:19):

Here a,b,c are elements of our algebra A, not numbers.

view this post on Zulip Eric Forgy (Dec 06 2020 at 22:20):

The differential is

da = 1\otimes a - a\otimes 1.

See for example (1.24) of C&K.

view this post on Zulip John Baez (Dec 06 2020 at 22:20):

Okay, great. That's a different convention. I guess that's pretty standard too!

view this post on Zulip John Baez (Dec 06 2020 at 22:21):

I saw you write that but it just confused me.

view this post on Zulip John Baez (Dec 06 2020 at 22:21):

So with this convention a \, db \, dc turns into some big mess, but that's okay.

view this post on Zulip Eric Forgy (Dec 06 2020 at 22:23):

So C&K give you an explicit way to construct d: A^0\to A^1, and it says this. Unfortunately, the extension is not obvious to me :sweat_smile:

view this post on Zulip John Baez (Dec 06 2020 at 22:25):

You extend it using the dga law d(xy) = (dx)y + x(dy).

view this post on Zulip John Baez (Dec 06 2020 at 22:25):

This holds for any elements x,y in your dga.

view this post on Zulip Eric Forgy (Dec 06 2020 at 22:26):

I'd like to find a natural / minimal nontrivial definition of d: A^1\to A^2 such that

d(1\otimes a) = d(a\otimes 1)

so that

d^2a = 0.

view this post on Zulip John Baez (Dec 06 2020 at 22:27):

You can just calculate it.

view this post on Zulip Eric Forgy (Dec 06 2020 at 22:28):

I have a notebook with pages of scribbles. No luck so far :sweat_smile:

But you inspired me. Let me try some things.

view this post on Zulip John Baez (Dec 06 2020 at 22:28):

It's a calculation - no inspiration required, just follow the rules.

view this post on Zulip Eric Forgy (Dec 06 2020 at 22:28):

Yes sensei. I will try :blush:

view this post on Zulip John Baez (Dec 06 2020 at 22:29):

Take anybody in A^1 and write it as a linear combination of products of guys like a and db.

view this post on Zulip Eric Forgy (Dec 06 2020 at 22:29):

Thank you :heart: :raised_hands:

view this post on Zulip Eric Forgy (Dec 06 2020 at 22:29):

Yes. It seems so obvious now that you say it :sweat_smile:

view this post on Zulip John Baez (Dec 06 2020 at 22:29):

Then hit it with d and use dga rules like d(a\, db) = da\, db

view this post on Zulip Eric Forgy (Dec 06 2020 at 22:29):

Yes

view this post on Zulip Eric Forgy (Dec 06 2020 at 22:30):

In fact, every reference on this stuff says it, but for some reason it didn't sink in until YOU said it, so thank you so much :pray:

view this post on Zulip John Baez (Dec 06 2020 at 22:31):

I think when you're done you may have reinvented "the differential in the Hochschild cochain complex", but don't let that intimidate you... it's not really a matter of creativity, I think there's basically no free choice involved.

view this post on Zulip Eric Forgy (Dec 06 2020 at 22:31):

Yes

view this post on Zulip Eric Forgy (Dec 06 2020 at 22:31):

I don't understand something until I've reinvented it.

view this post on Zulip John Baez (Dec 06 2020 at 22:31):

Good.

view this post on Zulip Eric Forgy (Dec 06 2020 at 22:33):

Thank you again so much :pray:

view this post on Zulip John Baez (Dec 06 2020 at 22:34):

Sure! It's funny, I was working on this stuff all the time when I was a postdoc right out of grad school. I was convinced that noncommutative geometry held the keys to quantum gravity, and I was working on it in my own slow way when Connes started putting out papers on it, and then I had to understand those.

view this post on Zulip John Baez (Dec 06 2020 at 22:35):

I eventually gave up trying to do quantum gravity this way, when loop quantum gravity came along.

view this post on Zulip Eric Forgy (Dec 06 2020 at 22:37):

I remember :blush: :+1:

view this post on Zulip Eric Forgy (Dec 06 2020 at 22:39):

To be honest, I'm still a little surprised there is not more work along these lines. Urs made the link between our paper and NCG pretty clear and the cool thing is that it is not only cool maths, it can be used directly to generate code for engineering applications.

view this post on Zulip Eric Forgy (Dec 06 2020 at 22:45):

So my homework is to try to work out what d(1\otimes a) is.

view this post on Zulip Eric Forgy (Dec 06 2020 at 22:47):

Or better, d(e^i\otimes e^j).

view this post on Zulip Eric Forgy (Dec 06 2020 at 22:50):

I got it. I knew the answer already, but was never able to reinvent it until now :+1:

view this post on Zulip Eric Forgy (Dec 06 2020 at 22:51):

So after 20+ years, I am still making slow but sure progress :nerd:

view this post on Zulip Eric Forgy (Dec 06 2020 at 22:53):

d(e^i\otimes e^j) = 1\otimes e^i\otimes e^j - e^i\otimes 1\otimes e^j + e^i\otimes e^j\otimes 1.

view this post on Zulip Eric Forgy (Dec 06 2020 at 23:00):

d(1\otimes a) = d(a\otimes 1) = 1\otimes a\otimes 1 \implies d^2a = 0. :check:
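Extending d(e^i\otimes e^j) linearly, both d^2 = 0 and d(1\otimes a) = 1\otimes a\otimes 1 can be checked mechanically. A numpy sketch (mine), with degree-2 elements stored as 3-index tensors:

```python
import numpy as np

n = 4
rng = np.random.default_rng(2)
a = rng.standard_normal(n)

def d0(v):                          # (da)[i, j] = a(j) - a(i)
    return v[None, :] - v[:, None]

def d1(M):
    # Linear extension of
    # d(e^i (x) e^j) = 1 (x) e^i (x) e^j - e^i (x) 1 (x) e^j + e^i (x) e^j (x) 1,
    # i.e. (dM)[p, q, r] = M[q, r] - M[p, r] + M[p, q].
    return M[None, :, :] - M[:, None, :] + M[:, :, None]

assert np.allclose(d1(d0(a)), 0)    # d^2 = 0 on degree-0 elements

# d(1 (x) a) should be 1 (x) a (x) 1, whose tensor is a[q] at entry (p, q, r).
one_tensor_a = np.outer(np.ones(n), a)
assert np.allclose(d1(one_tensor_a), np.broadcast_to(a[None, :, None], (n, n, n)))
```

This d1 is the alternating-sum pattern John mentions reappearing as the Hochschild differential.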

view this post on Zulip Eric Forgy (Dec 06 2020 at 23:18):

So I think it is safe to say I finally understand the functor Alg\to DGAlg :tada:

Or, at least, if you give me a finite dimensional commutative algebra, I can construct a DGA, BUT, if I understand correctly, that DGA is "universal" for A. My next task is to understand an explicit way to construct a DGA that is not universal, i.e. the above gives

A\to \tilde\Omega A

so now I need to explicitly construct a map

\tilde\Omega A \to \Omega A.

The answer should morally be something like "forget some edges".

view this post on Zulip Eric Forgy (Dec 07 2020 at 00:35):

I think I got it!

view this post on Zulip Eric Forgy (Dec 17 2020 at 07:46):

I spent some more time on this, including working through

It is very pretty :heart_eyes:

The idea is:

Given an algebra A with product m: A\otimes A\to A, let \tilde\Omega^1 denote \mathsf{ker}(m).

The derivation \tilde d: A\to\tilde\Omega^1 given by

\tilde d: a\mapsto 1\otimes a - a\otimes 1

generates \tilde\Omega^1 as a left A-module (despite the fact that \tilde\Omega^1 is an A-bimodule).

Now, if a\otimes b\in\tilde\Omega^1, then

\begin{aligned} a \, db &= a\otimes b - ab\otimes 1 \\ &= a\otimes b\end{aligned}

since ab = 0.

John Baez said:

So with this convention a \, db \, dc turns into some big mess, but that's okay.

Actually, it isn't a big mess. It turns out

a \, db \, dc = a\otimes b\otimes c

if a \otimes b, b\otimes c\in\tilde\Omega^1. The point is that we don't take just any a,b\in A and form a \, db. a\otimes b needs to be in \tilde\Omega^1 = \mathsf{ker}(m).

Bourbaki helped me understand why

\tilde d: A\to\tilde\Omega^1

is a "universal derivation".

If \phi: \tilde\Omega^1\to\Omega^1 is an A-bimodule morphism, then

d = \phi\circ \tilde d: A\to\Omega^1

is also a derivation.

Conversely, Bourbaki shows that given any derivation d: A\to\Omega^1, you get a unique A-bimodule morphism

\phi: \tilde\Omega^1\to \Omega^1

such that d = \phi\circ\tilde d, given by

\phi: a\otimes b \mapsto a \, db.

In this way, we see

\mathsf{Hom}_{(A,A)}(\tilde\Omega^1,\Omega^1) \cong \mathsf{Der}_K(A,\Omega^1).

It was painful to work through that, but worth it :blush: :muscle:

Now, obviously (even to me), all of that :point_of_information: involves CT with some nice diagrams.

I was able to work it out to first order

image.png

Since all the morphisms are A-bimodule morphisms, I think we should be sitting in a category of A-bimodules, but I am not 100% sure about that.

Now, I am struggling to extend this to higher order, i.e. \tilde d: \tilde\Omega^1\to \tilde\Omega^2.

According to this, we should have

\tilde\Omega^2 = \tilde\Omega^1\otimes_A\tilde\Omega^1

and I think the proof is in Kastler's book (this topic). I'm having trouble understanding this and how \tilde\Omega^2 is universal, and whether there are any new morphisms we need to consider for it to work.

Any help would be appreciated :pray:

view this post on Zulip Matteo Capucci (he/him) (Dec 17 2020 at 08:28):

What if you try running the machinery you described with A = \tilde\Omega^1? In particular, I guess your multiplication \mu: \tilde\Omega^1 \otimes \tilde\Omega^1 \to \tilde \Omega^1 is going to be something like \mu(a\, db \otimes c\, de) \overset{?}= ac\, db\, de. But ac = 0 since \tilde\Omega^1 is the kernel of m, so the kernel of \mu is the whole \tilde \Omega^1 \otimes \tilde\Omega^1. Therefore you get \tilde\Omega^2 = \tilde \Omega^1 \otimes \tilde \Omega^1.

view this post on Zulip Matteo Capucci (he/him) (Dec 17 2020 at 08:30):

I'm not very sure about the expression for \mu... it's probably wrong as I wrote it, but I'm fairly confident that whatever the right expression is, it will involve multiplying (with m, aka A's product) a,b,c,d and thus trivializing to 0, as above.

view this post on Zulip Matteo Capucci (he/him) (Dec 17 2020 at 08:32):

Another candidate for \mu is \wedge:

a \, db \wedge c \, de = \frac12 (ac \, db \, de - ca \, db \, de)

view this post on Zulip Eric Forgy (Dec 17 2020 at 10:15):

Thanks Matteo :pray:

I think you have the right idea, except the right side of your product is not in \tilde\Omega^1. However, we have a similar product

m: \tilde\Omega^1\otimes\tilde\Omega^1\to\tilde\Omega^1\tilde\Omega^1

(which is what I think you meant) given by

\begin{aligned}(w\, dx)\otimes (y\, dz) \mapsto (w\, dx)(y\, dz) &= w\, (dx\, y)\, dz \\ &= w\left[d(xy) - x\, dy\right] \, dz \\ &= w\, d(xy)\, dz.\end{aligned}

This is zero if x\otimes y\in\tilde\Omega^1. I think that is what they mean by \tilde\Omega^1\otimes_A\tilde\Omega^1, so

\tilde\Omega^1\otimes_A\tilde\Omega^1 = \mathsf{ker}(m).

The undecorated \otimes is actually \otimes_K, where A is a K-algebra (K a field).

For sure, I think this is an important piece to the puzzle, but I'm not sure it is the full story :thinking:

view this post on Zulip Eric Forgy (Dec 17 2020 at 10:32):

My first guess here is along those lines:

image.png

This requires that we have a map \tilde\Omega^1\otimes\tilde\Omega^1\to\tilde\Omega^1. I have an idea what such a map might need to look like from some other calculations, but it smells a little fishy.

My second guess was here:

image.png

This requires a map \tilde\Omega^1\tilde\Omega^1\to\tilde\Omega^1, which feels a little better dimensionally, but it means we need a new third space \tilde\Omega^2.

My most recent guess looks like:

20201217_022737.jpg

This says that A = \mathsf{ker}(\tilde d^2).

I think I am getting warm, but not quite there yet :sweat_smile:

view this post on Zulip Eric Forgy (Dec 17 2020 at 10:43):

Of the 3 diagrams above, I think my favorite is the second one. However, we might augment it with a morphism A\to 0\to\tilde\Omega^2 (like in the third) so it is clear that \tilde d^2 = 0.

view this post on Zulip John Baez (Dec 17 2020 at 20:34):

Eric Forgy said:

The derivation \tilde d: A\to\tilde\Omega^1 given by

\tilde d: a\mapsto 1\otimes a - a\otimes 1

generates \tilde\Omega^1 as a left A-module (despite the fact that \tilde\Omega^1 is an A-bimodule).

Now, if a\otimes b\in\tilde\Omega^1, then

\begin{aligned} a \, db &= a\otimes b - ab\otimes 1 \\ &= a\otimes b\end{aligned}

since ab = 0.

Why is ab = 0? It sounds like you're saying the product of any two elements of A is zero! That would be a very boring algebra.

view this post on Zulip Eric Forgy (Dec 17 2020 at 20:45):

ab = 0 if a\otimes b\in\tilde\Omega^1. That is the definition of \tilde\Omega^1. :thinking:

ab \ne 0 for arbitrary a,b\in A though.

view this post on Zulip Eric Forgy (Dec 17 2020 at 20:52):

Btw, there is a projection

\phi: A\otimes A\to\tilde\Omega^1

given by

\phi: a\otimes b \mapsto a\otimes b - m(a\otimes b)\otimes 1.

If a\otimes b\in\tilde\Omega^1 (:= \mathsf{ker}(m)), then

\phi(a\otimes b) = a\, db.
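In the finite-dimensional diagonal algebra used earlier, m reads off the diagonal of a matrix, so \mathsf{ker}(m) is the zero-diagonal matrices and \phi just subtracts the diagonal entry from each row. A sketch (mine) checking that \phi really is a projection onto \mathsf{ker}(m):

```python
import numpy as np

n = 4
rng = np.random.default_rng(3)
M = rng.standard_normal((n, n))          # an arbitrary element of A (x) A

def m(M):                                # m(e^i (x) e^j) = delta_{ij} e^i:
    return np.diag(M).copy()             # multiplication reads off the diagonal

def phi(M):                              # phi(a (x) b) = a (x) b - m(a (x) b) (x) 1
    return M - np.diag(M)[:, None]       # (v (x) 1)[i, j] = v[i]: subtract per row

assert np.allclose(m(phi(M)), 0)         # phi lands in ker(m)
assert np.allclose(phi(phi(M)), phi(M))  # phi is idempotent, i.e. a projection
```

And on anything already in \mathsf{ker}(m) (zero diagonal), \phi acts as the identity, matching \phi(a\otimes b) = a\, db = a\otimes b there.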

view this post on Zulip John Baez (Dec 17 2020 at 20:56):

Eric Forgy said:

ab = 0 if a\otimes b\in\tilde\Omega^1. That is the definition of \tilde\Omega^1.

Okay, sorry. You mean more precisely that \tilde\Omega^1 is the kernel of the map

m : A \otimes A \to A

coming from multiplication.

So \tilde\Omega^1 contains everything of the form

a \otimes 1 - 1 \otimes a

and indeed everything of the form

a \otimes bc - ab \otimes c

Is everything in \tilde\Omega^1 of the form

a \otimes bc - ab \otimes c ?

It's nice to know exactly what it contains.

view this post on Zulip John Baez (Dec 17 2020 at 20:59):

I meant to say: is everything in \tilde\Omega^1 a linear combination of terms of the form

a \otimes bc - ab \otimes c ?

view this post on Zulip Eric Forgy (Dec 17 2020 at 21:00):

Here is a (poorly edited :sweat_smile: ) snippet from Bourbaki III 10.10:

image.png

image.png

view this post on Zulip Eric Forgy (Dec 17 2020 at 21:00):

Lemma 1 demonstrates that all elements of $\tilde\Omega^1$ are of that form :+1:

view this post on Zulip Eric Forgy (Dec 17 2020 at 21:01):

Note: Bourbaki has a sign difference relative to more recent articles and I am using this other convention.

view this post on Zulip John Baez (Dec 17 2020 at 21:06):

Okay, I won't look at it... I used to know this stuff, I think I could prove that.

view this post on Zulip John Baez (Dec 17 2020 at 21:07):

One part of the idea is this: if $ab = 0$, why is $a \otimes b$ of the form $AB\otimes C - A \otimes BC$? Easy: just take $A = a$, $B = b$, $C = -1$.
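A tiny numerical check of this trick, using the concrete model $A = K^2$ with pointwise multiplication (my choice for illustration), where $e^1 e^2 = 0$ even though neither factor is zero:

```python
import numpy as np

# Model A = K^2 with pointwise product, so that e1·e2 = 0 while e1, e2 ≠ 0.
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
one = np.ones(2)

a, b = e1, e2
assert np.allclose(a * b, 0)                # ab = 0

# The choice A = a, B = b, C = −1 (meaning −1 times the unit) in AB ⊗ C − A ⊗ BC:
A_, B_, C_ = a, b, -one
result = np.outer(A_ * B_, C_) - np.outer(A_, B_ * C_)
assert np.allclose(result, np.outer(a, b))  # recovers a ⊗ b
```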

view this post on Zulip Eric Forgy (Dec 17 2020 at 21:11):

Btw, I just found this (can't believe it isn't bookmarked :face_palm: )

https://ncatlab.org/nlab/show/algebraic+approaches+to+differential+calculus

I got there from here:

https://ncatlab.org/nlab/show/differential+monad

and chasing references. In particular, I'm reading this one (about Grothendieck differential calculus):

http://www.mpim-bonn.mpg.de/preblob/3894

view this post on Zulip Eric Forgy (Dec 17 2020 at 22:56):

I'm starting to feel not too bad that I'm struggling to reinvent this stuff. It is not easy! :sweat_smile:

Btw, I've said this before, but I think I can say it a little more clearly now that I'm slowly gaining some mental muscle mass :muscle:

If $A$ has basis elements $e^i$ with product $e^i e^j = \delta^{i,j} e^i$ and unit element $1 = \sum_i e^i$, then

$$\begin{aligned} df &= 1\otimes f - f\otimes 1 \\ &= \sum_{i,j} \left[f(j) - f(i)\right] e^i\otimes e^j \end{aligned}$$

which looks like the finite difference along a directed edge $i\to j$ (think "fundamental theorem of calculus"). So we take this seriously and interpret $e^i\otimes e^j: \mathcal{G}^1\to K$ as "dual" edges

$$e^i\otimes e^j(k,l) = \delta^i_k \delta^j_l.$$

Also, any $\alpha\in A\otimes A$ can be written in terms of basis elements as

$$\alpha = \sum_{i,j} \alpha(i,j)\, e^i\otimes e^j$$

and we have

$$m(\alpha) = \sum_i \alpha(i,i)\, e^i$$

so $\tilde\Omega^1 := \mathsf{ker}(m)$ is the sub-bimodule with all "diagonal" elements $\alpha(i,i)$ set to zero. This can be interpreted as a directed graph with no "loops".

Then let

$$\tilde G = 1\otimes 1 = \sum_{i,j} e^i\otimes e^j$$

i.e. the sum of all dual edges of a complete graph, and we have

$$\begin{aligned} \tilde d f &= 1\otimes f - f\otimes 1 \\ &= (1\otimes 1) f - f (1\otimes 1) \\ &= \tilde G f - f\tilde G \\ &= [\tilde G, f]. \end{aligned}$$

If we combine this with the results above from Bourbaki, then we can define

$$G = \sum_{(i,j)\in\mathcal{G}^1} e^i\otimes e^j$$

where $\mathcal{G}^1$ is a subset of $\mathcal{G}^0\times\mathcal{G}^0$, i.e. $(\mathcal{G}^0,\mathcal{G}^1,s,t:\mathcal{G}^1\to\mathcal{G}^0)$ is a directed graph with $s(e^i\otimes e^j) = e^i$ and $t(e^i\otimes e^j) = e^j$, and we have a new derivation $d: A\to\Omega^1$ given by

$$df = [G,f]$$

and a unique bimodule morphism $\phi: \tilde\Omega^1\to\Omega^1$ given by

$$\phi(a\otimes b) = a\, db$$

satisfying

$$d = \phi\circ\tilde d.$$

The graph operator $G\in A\otimes A$ can be obtained from the "universal" (or "complete") graph operator $\tilde G = 1\otimes 1$ by setting some of the dual edges $e^i\otimes e^j$ to zero.
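The first-order story above can be sketched numerically by modeling $A = K^n$ with pointwise multiplication, so that $A\otimes A$ is the space of $n\times n$ matrices, $\tilde G = 1\otimes 1$ is the all-ones matrix, and $f$ acts via the diagonal matrix $\mathrm{diag}(f)$. This concrete matrix model and all names below are my assumptions for illustration:

```python
import numpy as np

n = 4
rng = np.random.default_rng(1)
f = rng.standard_normal(n)       # f = sum_i f(i) e^i in A = K^n

F = np.diag(f)                   # f acting on A⊗A ≅ n×n matrices
G_tilde = np.ones((n, n))        # G̃ = 1 ⊗ 1, the complete ("universal") graph operator

# d̃f = [G̃, f] has entries f(j) − f(i): finite differences along every edge i → j
df_univ = G_tilde @ F - F @ G_tilde
i, j = np.indices((n, n))
assert np.allclose(df_univ, f[j] - f[i])
assert np.allclose(np.diag(df_univ), 0)   # d̃f lies in Ω̃¹ = ker(m): diagonal vanishes

# Choose a subgraph: keep only some directed edges (i, j), no loops
adj = np.array([[0, 1, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1],
                [1, 0, 0, 0]], dtype=float)
G = adj                          # G = sum over chosen edges of e^i ⊗ e^j

# df = [G, f]: finite differences supported only on the chosen edges
df = G @ F - F @ G
assert np.allclose(df, adj * (f[j] - f[i]))
```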

view this post on Zulip Eric Forgy (Dec 17 2020 at 23:17):

This is still all "first order", though, so I think I have a pretty good handle on how things work to first order. I still need to get a similar grip on $\tilde\Omega^2$.

It is interesting to see that Grothendieck also defined his derivation as an adjoint action, i.e. a commutator, but it should be the graded commutator in higher degrees.

view this post on Zulip Eric Forgy (Dec 18 2020 at 00:33):

So now we have a way to tie a directed graph $\mathcal{G}: X\to\mathsf{Set}$ to a first-order differential calculus.

In particular, there is a "universal" graph $\tilde{\mathcal{G}}$ with

$$\tilde{\mathcal{G}}^1 = \tilde{\mathcal{G}}^0\times\tilde{\mathcal{G}}^0$$

giving the "complete" graph on the set $\tilde{\mathcal{G}}^0$.

This is the "universal endospan in $\mathsf{Set}$", or better, the "universal quiver in $\mathsf{Set}$". This extends (via a free construction, I believe) to a "universal quiver in $\mathsf{Alg}_K$", where

$$\mathsf{Alg}_K := \mathsf{Mon}(\mathsf{Vect}_K)$$

so I think the category I need to work in (at least up to this point) is actually the bicategory

$$\mathsf{Span}(\mathsf{Alg}).$$