You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
If given a set of numbers: R or Q or N or whatever, does assigning a 'physical' unit to them structure them in some quantitative manner? Does a real number assigned a unit of Pascals (N/m^2, or kg/(m s^2)) become structured based on the derivation of that particular unit?
I don't know any literature related to this question, but I'll offer a preliminary stab at a definition which would capture numbers with physical units. I'll assume, for simplicity, that we're just working with real numbers.
There are two essential facts about arithmetic with units. First, you can only add numbers if their units are the same. Second, when you multiply two numbers together, their units multiply.
Let's think for a moment about multiplication of units. We have some number of "basic units" (seconds, meters, grams, etc.) and we can combine them by multiplying them and taking reciprocals. So the structure of all the possible units can be seen as the free Abelian group generated by the basic units. (We'll write its elements multiplicatively.) This group contains, for example, g / (m sec^2). (I'm glossing over the fact that you can multiply units by scalars, e.g. kg / (m sec^2) = 1000 g / (m sec^2).) Somewhat atrociously, we could call this the "group of units," but that phrase is better avoided, since it has a well-established meaning elsewhere!
For each element u of this free Abelian group, we have a copy of the real line R_u containing all the numbers with unit u. Note that the identity element 1 of the group corresponds to the unitless quantities, which inhabit the copy R_1 of the real line.
Each R_u has an additive structure since we can add two numbers if their units are the same. Furthermore, if x \in R_u and y \in R_v, then we can form their product xy \in R_{uv}. But note that we cannot add an element of R_u to an element of R_v when u and v are different, because we can't add numbers with different units.
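A minimal Python sketch of this unit-indexed structure (the class name and the dict-of-exponents encoding of the free Abelian group are my own choices, not anything standard):

```python
class Quantity:
    """A real number tagged with an element of the free Abelian group
    generated by basic units (exponents stored in a dict)."""
    def __init__(self, value, units=None):
        self.value = value
        # drop zero exponents so that equal units compare equal
        self.units = {u: e for u, e in (units or {}).items() if e != 0}

    def __add__(self, other):
        # addition is only defined within a single R_u
        if self.units != other.units:
            raise TypeError("cannot add quantities with different units")
        return Quantity(self.value + other.value, self.units)

    def __mul__(self, other):
        # multiplication lands in R_{uv}: unit exponents add
        units = dict(self.units)
        for u, e in other.units.items():
            units[u] = units.get(u, 0) + e
        return Quantity(self.value * other.value, units)

mass = Quantity(3.0, {"g": 1})
accel = Quantity(2.0, {"m": 1, "sec": -2})
force = mass * accel  # lands in the copy R_{g m / sec^2}
```

Multiplying a quantity by one with the reciprocal unit lands in R_1 (the empty exponent dict), while attempting `mass + accel` raises a TypeError, mirroring the two essential facts above.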
Note that this is different from something like a graded algebra, because in a graded algebra you are allowed to add quantities with different "grades."
I'm curious to hear what others say about this. But my guess is that the structure coming from the units will only reveal itself while looking at all the different possible numbers of all units together, rather than just focusing on all the numbers of one unit.
It's a kind of graded ring: you have objects (R_u)_{u \in G}, additions +_u : R_u × R_u → R_u, multiplications ·_{u,v} : R_u × R_v → R_{uv}, additive units 0_u : {*} → R_u (which send * to 0) and a multiplicative unit 1 : {*} → R_1 (which sends * to 1) in the category of sets, where G is your "group of physical units". And these operations verify the same commutative diagrams as for a ring, modulo the indexes that you must add. Note that one of the multiplications gives you an element of type 1 when you multiply e.g. an element of type u with an element of type u^{-1}.
It's very practical to work with such "disconnected" (in the sense that you consider the family of the R_u's rather than a coproduct, which doesn't make sense here) graded algebraic structures, and you can code a lot of operations like this in a flexible way, with the gradings allowing one to constrain the operations as you want.
And I really like your idea because I've been wondering for a long time how we could formalize mathematically the idea of units in physics. I work with such graded things but I never did the connection.
You could maybe add a kind of differentiation map in the sense of the (graded) differential linear categories/logic that we cooked up with @JS PL (he/him), but I must think more about it.
@Jean-Baptiste Vienney if it were a ring you could add arbitrary elements. One way to structure this is as a monoidal groupoid in which every component is isomorphic to (R, +), where the monoidal product represents taking products of units. Or a ring in which addition is only partially defined.
This thread is great... I was wondering about this topic a while ago and didn't come to any good conclusions. It was vexing and interesting to realize I didn't really know what "units" were in a formal mathematical sense despite using them very often!
Morgan Rogers (he/him) said:
Jean-Baptiste Vienney if it were a ring you could add arbitrary elements. One way to structure this is as a monoidal groupoid in which every component is isomorphic to (R, +), where the monoidal product represents taking products of units. Or a ring in which addition is only partially defined.
It's not a ring, it's "a kind of graded ring" as I said, but maybe it doesn't sound the way I want in English.
When I was a graduate student, my advisor taught me that when algebraists say a "graded ring" they mean a ring with a suitable decomposition R = ⊕_n R_n, but when topologists say a "graded ring" they mean a collection of abelian groups R_n with multiplications such as R_m ⊗ R_n → R_{m+n}. I don't know how true that characterization of "algebraists" is, but I can certainly speak to the fact that the second meaning -- which seems to be what Jean-Baptiste meant -- is almost universal in algebraic topology, due for instance to the fact that the cohomology groups of a space are a "graded ring" in this sense, and it doesn't really make sense to add classes in different degrees.
Terry Tao has also blogged about this
In addition to what Mike said, in topology if you want to make the cohomology groups of a space into a single ring for some reason, it can make sense to form either the direct sum or the direct product of the abelian groups, depending on the context.
Whoa, I didn't know that! (Or if I did, I forgot it.) When do you want to take the product?
And how do you give the product a ring structure?
For example in the theory of formal group laws, if E is a suitable (complex oriented) cohomology theory, you want to think of E^*(CP^∞) as a power series ring E^*[[x]], and the H-space structure map CP^∞ × CP^∞ → CP^∞ as inducing a map E^*[[x]] → E^*[[x, y]] which sends x to a formal group law F(x, y), which is usually not just a polynomial.
I guess at least for ordinary cohomology there is no problem with the ring structure, because the grading is in nonnegative degrees only. So to compute the degree n part of the product, we only have to add finitely many products coming from degrees p and q with p + q = n.
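Concretely, that degree-n coefficient is just a finite Cauchy-product sum over p + q = n; a quick Python sketch with plain integer sequences standing in for the graded pieces:

```python
def product_degree_n(a, b, n):
    """Degree-n part of the product of two nonnegatively graded
    sequences (a[p] plays the role of the degree-p component)."""
    return sum(a[p] * b[n - p] for p in range(n + 1))

# For (1 + x + x^2 + ...)^2 the degree-n coefficient is n + 1,
# computed from only the finitely many products with p + q = n.
ones = [1] * 8
```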
Ah, right. I knew about that, but my mind didn't immediately connect the "direct product" with formal power series. (-:
I'm actually not quite sure whether thinking about the cohomology ring as the product of all the individual cohomology groups is the "correct" way to think about it, but it gives the right answer at least.
Jim Dolan and I studied how physical units 'structure' sets of numbers here:
The basic idea is here:
Another name for the study of categories of line objects is “dimensional analysis”. In dimensional analysis, a physical theory is described by specifying an abelian group of “dimensions” together with a commutative algebra of “quantities” (these are the sections of the line objects) which is graded by the dimension group. We’ll call a physical theory described in this way a dimensional algebra, but the fundamental fact about a dimensional algebra is that it’s equivalent to a dimensional category, which is a symmetric monoidal category where all objects are line objects.
Example. Let G be the abelian group freely generated by the dimensions “mass” and “velocity”. Let R be the G-graded commutative algebra generated by the six quantities “mass of particle #1”, “mass of particle #2”, “initial velocity of particle #1”, “initial velocity of particle #2”, “final velocity of particle #1”, and “final velocity of particle #2”, subject to the two relations “conservation of momentum” and “conservation of energy”.
(The names given to the dimensions, quantities, and relations here are informally meant to suggest both a physical situation and the precise dimensional algebra used to describe it. The physical context suggests taking the base field of the dimensional algebra to be the field of real numbers, but there are advantages and no disadvantages to leaving the field unspecified for now; even requiring it to be a field may be over-definite.)
For example, there are seven linearly independent quantities that live in the dimension “momentum” (defined as mass times velocity) in this dimensional algebra: multiplying the mass of either particle by the initial or final velocity of either particle gives eight quantities, but conservation of momentum cuts it down to seven. Thus in the corresponding dimensional category, the hom-space from the “dimensionless” dimension to the “momentum” dimension is a 7-dimensional vector space.
As a rough first approximation, we’ll construe algebraic geometry as the study of dimensional algebras, but from the perspective that it’s better to think of them as dimensional categories.
https://arxiv.org/abs/1304.6659
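The count of seven momentum quantities in the example above can be checked mechanically: list the eight monomials as basis vectors and subtract the rank of the momentum relation (a stdlib-only sketch; the ordering of the monomials is my own choice):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a rational matrix via Gaussian elimination."""
    rows = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

# Monomials of dimension "momentum", ordered as
# [m1*v1i, m1*v2i, m1*v1f, m1*v2f, m2*v1i, m2*v2i, m2*v1f, m2*v2f].
# Conservation of momentum, m1*v1i + m2*v2i = m1*v1f + m2*v2f, is the
# single linear relation among them (conservation of energy lives in
# the "energy" dimension, so it doesn't cut this piece down further):
momentum_relations = [[1, 0, -1, 0, 0, 1, 0, -1]]

dim_momentum = 8 - rank(momentum_relations)
```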
See also my post here:
https://www.reddit.com/r/math/comments/riw7qb/dimensional_analysis_via_group_actions/
It is probably much shallower than the links above. But some fun examples
Something I've been looking for that is related to units is a "theory of quantities": some objects naturally have attributes described by certain units (have mass, have velocity, have conductance, etc), while other objects should not have such attributes. A 'word' (as in a piece of text in a natural language) should not have a 'mass' associated to it (other than perhaps as an analogy, but let's ignore that).
This is probably more of an ontological question, I would guess.
But still: it does make sense to say that a gas that occupies a well-delimited region of space has a mass -- but does it make sense to say that it has a velocity? It doesn't seem so (but I could be wrong). So there might be some kind of finer classification of 'objects' along the lines of what unit-bearing attributes may or may not be attached to it.
Basic question: does such a theory exist?
Ordinarily we describe the motion of a gas in a region using a velocity vector field in that region.
At least that's the macroscopic description - not the microscopic description, where each molecule in the gas has a velocity.
Interesting. But this is 'internal movement', not external, right? Though I guess this vector field, when integrated, would also give how the gas moves into a region larger than the one it occupies?
Indeed, just because the velocity is constrained by the confinement of the gas doesn't make that value meaningless; the container could be moving relative to the observer, say.
I wonder if you can come up with a more clear-cut example of an object which can have some physical dimensions associated to it and not others.
@Morgan Rogers (he/him) I can't come up with such an example - but figured that might be a lack of imagination rather than a truism. Or perhaps all 'physical' objects can have all attributes with units, which is what separates them from the concepts that don't have a physical realisation?
This isn't a philosophy question: I'm writing software for modelling many different things, and I'm trying to create a 'type system' beyond the usual ones (i.e. units and what's usually the topic of 'type theory', both of which are already implemented) that uses a fine ontology to decide whether it is meaningful to say that a 'specification has conductance' or that a 'glass of water is a topic from literature' or that 'a gas has a volume'.
Jacques Carette said:
Morgan Rogers (he/him) I can't come up with such an example - but figured that might be a lack of imagination rather than a truism. Or perhaps all 'physical' objects can have all attributes with units, which is what separates them from the concepts that don't have a physical realisation?
I asked chatGPT about this, and it came up with an interesting example. A metal cube has a naturally defined length, width and height. But a metal sphere has a radius instead... at least it doesn't seem as natural to assign it a length, width or height. It also suggested, in a similar line of thought, that a cotton ball has a clearly defined mass, but measurements of its shape that use units would be less simple.
I guess part of the question is, do we care if it is (1) simple and natural that an object can be measured using units of a certain kind? Or do we care if (2) there is any way at all that an object can be measured using units of a certain kind, however complex?
In addition, are we making a distinction between attributes that use the same unit to measure different things (e.g. a sphere has a radius, a cube has a height, but both of these quantities are in units of length)?
I think it also matters what we consider to be an "object" (or a "physical object"). A recording of a piece of music has a duration, but no obvious height or width.
That music example is nice! And you're also right to point out the subtle difference between assigning values in certain units to an object and assigning directly comparable values to objects.
I agree - very nice! Yes, that's exactly the topic I'm thinking about.
From the point of view of a "type system", all I care about is removing obvious 'category errors'. So 'not natural' examples would be allowed, as a first step.
And yes, I am very much interested in later not allowing length and radius to be added (unless there's a physical configuration that makes it meaningful). I can come up with an ad hoc way of describing that (using coordinate systems), but it's unsatisfying as it indeed feels 'ad hoc'.
What is puzzling to me is that such a system of 'quantity types' does not already exist.
Jacques Carette said:
What is puzzling to me is that such a system of 'quantity types' does not already exist.
Has this paper been mentioned?
What is puzzling to me is that such a system of 'quantity types' does not already exist.
I haven't been completely following the conversation, so I may be missing the point, but type systems that incorporate units of measure do exist. For instance, F# has built in support for units of measure.
Sam Speight said:
Jacques Carette said:
What is puzzling to me is that such a system of 'quantity types' does not already exist.
Has this paper been mentioned?
There's also this related paper: Type systems for programs respecting dimensions
@Sam Speight @Nathanael Arkor @Dylan Braithwaite what I mean is that units are only part of the story. Units have been done in various programming languages for 30 years (none mainstream until the quasi-mainstream F#). Once you actually use your units in real situations, you discover that it is far from sufficient.
The wikipedia page on SI has the following good examples:
For example, the joule per kelvin (symbol J/K) is the coherent SI unit for two distinct quantities: heat capacity and entropy; another example is the ampere, which is the coherent SI unit for both electric current and magnetomotive force. This is why it is important not to use the unit alone to specify the quantity.
Merely having units will not save you from adding electric current to magnetomotive force. There is a more fundamental set of 'physical quantities'.
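A hypothetical sketch of that finer layer in Python: each value carries both its coherent SI unit and a separate quantity kind, and addition demands that both agree (the kind names here are illustrative, not from any standard):

```python
class PhysQuantity:
    def __init__(self, value, dimension, kind):
        self.value = value
        self.dimension = dimension  # coherent SI unit, e.g. "J/K"
        self.kind = kind            # finer quantity kind, e.g. "entropy"

    def __add__(self, other):
        if self.dimension != other.dimension:
            raise TypeError("dimension mismatch")
        if self.kind != other.kind:
            # same coherent unit, but e.g. heat capacity + entropy
            # is a category error the units alone cannot catch
            raise TypeError(f"kind mismatch: {self.kind} vs {other.kind}")
        return PhysQuantity(self.value + other.value,
                            self.dimension, self.kind)

heat_capacity = PhysQuantity(10.0, "J/K", "heat capacity")
entropy = PhysQuantity(2.5, "J/K", "entropy")
```

Adding two heat capacities works; `heat_capacity + entropy` is rejected even though both are measured in J/K.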
Also: I know the authors of several of those papers quite well; they did pretty math there. Unfortunately, they didn't use the work "for real", else they would have realized that what they did was just step 1. I could add a good dozen more papers on units in CS, none of which cite each other!
However, this did help me find the International System of Quantities and a [List of physical quantities], which both seem to be related.
I might be missing the point, but isn’t the distinction you’re making just a case of adding more “units”? i.e. in the example you give, if you really want heat capacity and entropy to be considered incompatible then you could just add new ‘generating units’ for them. It might be that there are quantities derivable from both of heat capacity and entropy that coincide, but then this would just be a case of adding equations to allow this, to the presentation of the group generating your unit system.
I'm definitely not arguing that this is a definitive solution to the problem. Physics seems complicated enough that there could be no complete equational theory of quantities, but I think methods in the above papers could facilitate more than just naive unit checking.
My understanding is that the MSP guys at least, were talking to people from the NPL a lot when working on this, who certainly do 'for real' things. So I'm sure they would have been considering some real world context on top of the pretty maths
Side-note: I spent August-December 2022 at MSP, and will be spending parts of the coming April there as well. I'll discuss it with them in a couple of weeks!
Jacques Carette said:
Something I've been looking for that is related to units is a "theory of quantities": some objects naturally have attributes described by certain units (have mass, have velocity, have conductance, etc), while other objects should not have such attributes. A 'word' (as in a piece of text in a natural language) should not have a 'mass' associated to it (other than perhaps as an analogy, but let's ignore that).
This is probably more of an ontological question, I would guess.
But still: it does make sense to say that a gas that occupies a well-delimited region of space has a mass -- but does it make sense to say that it has a velocity? It doesn't seem so (but I could be wrong). So there might be some kind of finer classification of 'objects' along the lines of what unit-bearing attributes may or may not be attached to it.
Basic question: does such a theory exist?
You seem to hint at the difference between intensive and extensive quantities. Lawvere wrote a beautiful paper about their abstract properties, in terms of which ones get integrated and which ones do the integrating.
Jacques Carette said:
The wikipedia page on SI has the following good examples:
For example, the joule per kelvin (symbol J/K) is the coherent SI unit for two distinct quantities: heat capacity and entropy; another example is the ampere, which is the coherent SI unit for both electric current and magnetomotive force. This is why it is important not to use the unit alone to specify the quantity.
Merely having units will not save you from adding electric current to magnetomotive force. There is a more fundamental set of 'physical quantities'.
Why wouldn't it be ok to add numbers with identical units? Don't equal units mean equal kinds of quantities?
The only explanation I can give myself is they are the same quantity but measured at different scales (hence one might be an average over a large amount of particles, sort of)...
Our choice of units is a decision, based on context. For example, in theoretical work on special relativity we treat length and time as having the same units, and in quantum field theory we also treat energy as having the same units as 1/time.
The choice of 3 fundamental units - mass, length and time - is based on convenience, not a law laid down by god. The SI system of units started by taking 7 units as fundamental.
The more knowledge of physics we use, the more we can reduce the number of independent units. For example, by using our knowledge of how entropy, energy and temperature are related, we can eliminate temperature as an independent unit, expressing it in terms of the others. When we reach work that combines general relativity and quantum mechanics, there are no independent units at all!
But it's often convenient to have more independent units, in situations where we know that our more fancy theories of physics are unlikely to be relevant.
John Baez said:
The more knowledge of physics we use, the more we can reduce the number of independent units. For example, by using our knowledge of how entropy, energy and temperature are related, we can eliminate temperature as an independent unit, expressing it in terms of the others. When we reach work that combines general relativity and quantum mechanics, there are no independent units at all!
Could you say more about units of temperature in terms of the others? I had thought a little about this recently, but bumbling around Wikipedia, found myself getting confused.
Sure!
We reduce the number of units each time we find a new fundamental constant that lets us translate between units in a good way: for example the speed of light lets us translate between length and time in relativity, and Planck's constant lets us translate between 1/length and momentum in quantum mechanics.
In statistical mechanics, Boltzmann's constant lets us translate between energy and temperature.
The basic law is that in thermal equilibrium at temperature T, the probability that a system will be in a state of energy E is proportional to
exp(-E/kT)
where k is Boltzmann's constant. This is called Boltzmann's law.
Once we believe this law and realize the deep connection between temperature and energy that it provides, we can set k = 1 (just as we do with the speed of light and Planck's constant in other contexts), and use units where we measure temperature in units of energy.
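As a quick sanity check in Python, the two conventions give the same Boltzmann factor once temperature is converted to energy units via T ↦ kT (the function name here is my own):

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant in J/K (exact since the 2019 SI)

def boltzmann_factor(E, T, k=K_B):
    """Relative probability weight exp(-E / kT) at temperature T."""
    return math.exp(-E / (k * T))

T_kelvin = 300.0   # room temperature
E = 4.0e-21        # an energy in joules, of order kT at 300 K

# measuring temperature in joules amounts to setting k = 1:
T_joules = K_B * T_kelvin
```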
Note, this is COMPLETELY DIFFERENT from the widely made sophomore mistake of thinking every system has an energy proportional to its temperature. That's only true for certain very special systems like ideal gases and collections of harmonic oscillators.
Sorry to yell there - I doubt you are making this mistake! - but in case anyone in the room is making this mistake, I had to yell, to let them know that the connection between energy and temperature is NOT a simple linear relationship. It's the Boltzmann law.
Anyway, in the most modern approach to SI units of temperature, temperature is not treated as an independent unit, because Boltzmann's constant, originally a measured constant, is now defined to be exactly 1.380649 × 10^-23 joules/kelvin.
Thanks, that makes sense.
Since joules are already defined in some other way, this is now a definition of our unit of temperature.
Great.
So if we keep making reductions in this way, using physical constants to "identify" different units (like using c to identify length and time), how many independent units are there in the end (as far as we know)?
(Or is "identify" the wrong word to use?)
I answered that somewhere in my wall of text:
John Baez said:
The more knowledge of physics we use, the more we can reduce the number of independent units. For example, by using our knowledge of how entropy, energy and temperature are related, we can eliminate temperature as an independent unit, expressing it in terms of the others. When we reach work that combines general relativity and quantum mechanics, there are no independent units at all!
Oops! :grinning_face_with_smiling_eyes:
Matteo Capucci (he/him) said:
Why wouldn't it be ok to add numbers with identical units? Don't equal units mean equal kinds of quantities?
Because it seems that our system of units is incomplete in some sense. I like to think of it as a Galois connection that abstracts out just a bit too much.
Very simple analogy: say you have R red apples and G green apples. You have A=R+G apples. Say I tell you A - do you now know if you can bake your pie, which asks for only green apples?
A much better explanation is at Difference between heat capacity and entropy.
Simpler still: torque and energy have the same dimension (Joules) but are quite different beasts. In particular, torque is a vector and energy is a scalar. Even in 1 dimension they shouldn't be added!
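A small plain-Python illustration of that type difference, with the wedge r ∧ F packaged as the usual cross product via the right-hand rule:

```python
def cross(a, b):
    # the wedge r ∧ F, identified with a vector via the right-hand rule
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

r = (1.0, 0.0, 0.0)  # lever arm / displacement, in metres
F = (0.0, 2.0, 0.0)  # force, in newtons

torque = cross(r, F)  # a (bi)vector, with components in N·m
work = dot(F, r)      # a scalar, also in N·m = J
```

Both carry the dimension N·m, but one is a triple and the other a single number: the units alone never see the difference.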
I like the torque example, though to be a bit pedantic I think torque is a bivector (and doesn't really make sense in 1d).
Being pedantic about these things is actually useful for discovering good conceptual ways to think about them.
The CS 'disease' is to consider X and Y to be 'the same' when they can both be stored on a computer identically. Though mathematicians will weave in and out of considering row and column vectors the same, 1x1 matrices and scalars the same, etc., etc. When teaching math to a computer, all those implicit moves are a massive pain, and lead to nonsense if not dealt with carefully. I wish mathematicians weren't so sloppy with these things. [Sloppy does not equal wrong! It's just more of that 'invisible math' that Andrej Bauer talked about recently.]
Jacques Carette said:
Simpler still: torque and energy have the same dimension (Joules) but are quite different beasts. In particular, torque is a vector and energy is a scalar. Even in 1 dimension they shouldn't be added!
That's a brilliant example. As Graham pointed out, this is a case of confounding $$k$$-forms with $$(n-k)$$-forms.
John Baez said:
The more knowledge of physics we use, the more we can reduce the number of independent units. For example, by using our knowledge of how entropy, energy and temperature are related, we can eliminate temperature as an independent unit, expressing it in terms of the others. When we reach work that combines general relativity and quantum mechanics, there are no independent units at all!
I also think this partially addresses my doubts :thumbs_up:
Matteo Capucci (he/him) said:
Jacques Carette said:
Simpler still: torque and energy have the same dimension (Joules) but are quite different beasts. In particular, torque is a vector and energy is a scalar. Even in 1 dimension they shouldn't be added!

That's a brilliant example. As Graham pointed out, this is a case of confounding $$k$$-forms with $$(n-k)$$-forms.
oh! i am confused! could you say a bit more about what geometric thing torque is? For instance, if we can get a torque by wedging a spatial displacement vector with a force vector, then we'd get a 2-vector rather than a 1-vector, right? Are we identifying the two because (we are in 3 space dimensions and) we have a god-given orientation and metric? Is there a fancy symplectic geometry view of this where we can speak of general coordinates and momenta (so perhaps torques would have to do with the 2-sphere as a symplectic manifold)?
(Alas, I have been telling friends and strangers that torque and energy really do have the same units --- i claimed we can regard a torque as the amount of energy needed to oppose it for a full revolution, which btw seems practical when designing mechanical linkages for steam engines etc --- but now I see that I was talking about torque magnitudes instead of torques, so I misled them!)
Thanks!
John Baez said:
I answered that somewhere in my wall of text:
John Baez said:
The more knowledge of physics we use, the more we can reduce the number of independent units. For example, by using our knowledge of how entropy, energy and temperature are related, we can eliminate temperature as an independent unit, expressing it in terms of the others. When we reach work that combines general relativity and quantum mechanics, there are no independent units at all!
I like this. I think it is a nice phrasing of the introductory blurb adorning "hbar=c=1" that seems to be legally mandated in all quantum field theory textbooks.
Something I've been thinking vaguely about is this: that thermodynamics is its own "synthetic" theory that just happens to have a possible (and in our universe actual) basis in the "analytic" theory of statistical mechanics. In classical thermodynamics, entropy is not dimensionless (it's not a log of some phase space volume divided by hbar^d); it's its own thing. So 1/temperature = coldness = d(entropy)/d(energy) does not have the same units as 1/energy. I think it's uncontroversial to say that it's fun and fruitful to study this "pure" thermodynamics, a bit like how we study chemistry despite knowing about QED, and a lot like how we study Euclid despite knowing about complete contractible Riemannian manifolds with vanishing curvature.
Thus, what I want to find more clarity on is the relation between systems of units as we pass from one "layer" of theory to another. It seems that in taking a thermodynamic limit (a "large-system" limit), we introduce new units that intuitively parameterize the ambiguity "42 * infinity = infinity". For instance, "1 mole" is in stat mech a synonym for a certain large (unitless) real number. But in thermodynamics, it is its own thing, because we have zoomed out so much that the "real number line" on which "1 mole" lies no longer has a god-given "1.0 many particles" with respect to which we may compare "1 mole". That example is secretly the same as the temperature vs energy thing, because extensive quantities including entropy scale with that particular real number line.
@Sam Tenka wrote:
For instance, if we can get a torque by wedging a spatial displacement vector with a force vector, then we'd get a 2-vector rather than a 1-vector, right? Are we identifying the two because (we are in 3 space dimensions and) we have a god-given orientation and metric?
There's not a "god-given" orientation in 3d Euclidean space unless you think god is responsible for the right-hand rule. But ordinary lowbrow physicists doing classical mechanics are happy to equip 3d Euclidean space with an orientation along with its metric; for better or worse this lets them identify bivectors, 1-forms and 2-forms with vectors.
(This is a bit like how we set c = 1 to reduce the number of scalar quantities that have distinct units!)
For instance, "1 mole" is in stat mech a synonym for a certain large (unitless) real number. But in thermodynamics, it is its own thing, because we have zoomed out so much that the "real number line" on which "1 mole" lies no longer has a god-given "1.0 many particles" with respect to which we may compare "1 mole".
This sounds right to me. As evidence, note it's no coincidence that Avogadro's number is about 6 × 10^23 and Boltzmann's constant is about 1.4 × 10^-23 joules per kelvin.
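Concretely, since the 2019 SI redefinition both constants are exact, and their product is the molar gas constant: the 10^23 and the 10^-23 cancel.

```python
N_A = 6.02214076e23   # Avogadro's number, 1/mol (exact)
k_B = 1.380649e-23    # Boltzmann's constant, J/K (exact)

R = N_A * k_B         # molar gas constant, J/(mol K), about 8.314
```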
John Baez said:
Sam Tenka wrote:
For instance, if we can get a torque by wedging a spatial displacement vector with a force vector, then we'd get a 2-vector rather than a 1-vector, right? Are we identifying the two because (we are in 3 space dimensions and) we have a god-given orientation and metric?
There's not a "god-given" orientation in 3d Euclidean space unless you think god is responsible for the right-hand rule. But ordinary lowbrow physicists doing classical mechanics are happy to equip 3d Euclidean space with an orientation along with its metric; for better or worse this lets them identify bivectors, 1-forms and 2-forms with vectors.
(This is a bit like how we set c = 1 to reduce the number of scalar quantities that have distinct units!)
Ah good point! I suppose one could say something about the weak interaction (together with a chosen arrow of time and a chosen preference for electrons over positrons) picking out an orientation... but that's not what I meant. I was just being dumb
Edit: I just more fully appreciated how interesting your parenthetical is. Very cool! We can formalize it by talking about group representations. Then the analogy you draw is SO(3) : scaling symmetry.
I suppose that units appear whenever we have a symmetry, namely as irreps of groups acting on sets of conceivable physical systems. When we "zoom out" from stat mech to thermo, then a new symmetry appears --- scaling symmetry, since approximate extensivity (and, more to the point, divisibility) in the thermo limit becomes exact. I wonder where else besides thermo this kind of "birth of new units" occurs?
One too-similar-to-feel-satisfying answer is fluid mechanics, where we can find lots of insight just from dimensional considerations (e.g. the concept of Reynolds number) because we regard water as a continuous fluid. I heard about "dimensional transmutation" for the strong force, but I don't understand QCD at all, so I don't know whether this is an example of a new unit coming from a new symmetry (fixed point of RG flow); I kinda doubt it.
(( I also note that sometimes more symmetry leads to fewer units, e.g. nanoseconds and feet become the same in special relativity. This is because the Lorentz group ain't a product of old symmetries with a new factor; the new group nontrivially extends the old group and happens to merge irreps. ))
((( edit: here is a whimsical image of infinity as having different units than finity: the axes with which we construct our mental plots "fray" near the ends into new dimensions. )))
Yes, I think 'units' in the conventional sense give 1-dimensional representations of the multiplicative group of nonzero reals R^*, or the multiplicative group of positive reals R_{>0}, or products of such groups, which describe how quantities transform under various kinds of 'rescaling'.
Similarly other kinds of physical quantities are often elements of irreducible representations of other groups.
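To make the 1-dimensional-representation picture concrete, here's a small sketch one could run (the encoding of dimensions as exponent tuples is my own toy convention, nothing standard): a quantity of dimension L^a M^b T^c transforms under a rescaling of the length unit by lam via multiplication by lam^a, and rescaling twice composes multiplicatively, which is exactly what being a representation of R_{>0} means.

```python
# Toy model: a dimensioned quantity transforms under unit rescaling
# via a 1-dimensional representation of the group (R_{>0}, *).
# Dimensions live in the free abelian group on (L, M, T), stored as
# exponent tuples; rescaling the length unit by lam multiplies a
# quantity of dimension L^a M^b T^c by lam**a.

def rescale_length(value, dims, lam):
    """Numerical value of the same quantity after shrinking the length unit by lam."""
    a, _, _ = dims  # exponent of L in the dimension L^a M^b T^c
    return value * lam**a

# A speed (dimension L^1 M^0 T^-1) of 3 m/s, re-expressed in cm/s:
assert rescale_length(3.0, (1, 0, -1), 100.0) == 300.0

# The action is a group homomorphism: rescaling by lam then by mu
# equals rescaling by lam*mu.
v = rescale_length(rescale_length(3.0, (1, 0, -1), 2.0), (1, 0, -1), 5.0)
assert v == rescale_length(3.0, (1, 0, -1), 10.0)
```

The same pattern with a different exponent tuple gives a different 1-dimensional rep, which is the sense in which distinct dimensions are distinct irreps of the scaling group.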
It's fun to understand all the 3d irreducible representations of GL(3,R): they include vectors and 1-forms (or 'covectors') and bivectors and 2-forms, but also others, because you can take any of these representations and tweak it by throwing in a factor of |det g|^a for any real number a (if you're doing real representations).
The simplest description is that there's the obvious representation of GL(3,R) on R^3, and all the other (continuous) 3d real reps are obtained from this by multiplying by |det g|^a for some real a, and/or multiplying by sign(det g).
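A quick numerical sanity check of the twisting construction (the random matrices and the exponent are arbitrary choices of mine): multiplying the obvious rep by a power of |det| still gives a group homomorphism, since |det(gh)|^a = |det g|^a |det h|^a.

```python
# Check that rho(g) = |det g|**alpha * g defines a homomorphism
# GL(3,R) -> GL(3,R), i.e. a twisted version of the obvious 3d
# representation. Any real alpha works.
import numpy as np

def rho(g, alpha):
    return abs(np.linalg.det(g)) ** alpha * g

rng = np.random.default_rng(0)
g = rng.normal(size=(3, 3))   # almost surely invertible
h = rng.normal(size=(3, 3))
alpha = 0.7

lhs = rho(g @ h, alpha)
rhs = rho(g, alpha) @ rho(h, alpha)
assert np.allclose(lhs, rhs)  # rho(gh) = rho(g) rho(h)
```

Throwing in an extra factor of sign(det g) is also multiplicative, which is where the second family of twists comes from.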
When we switch from a group to a subgroup, two different representations may become isomorphic, and also new ones can appear.
It's fun to see how this plays out for GL(3,R) and SL(3,R) and SO(3) - all these groups are important in physics!
Thanks for your post, John. I do like this viewpoint a lot. I suppose that those weird "nonpolynomial" reps of GL involving noninteger powers of det can be useful for things like wavefunctions, whose units ought to be square roots of densities. (This just corresponds to the "diagonal" GL(1) subgroup of GL(n), which is scaling, as you say.)
I'm curious how SL(3) appears in physics!!
Well, it's probably the least important for physics of the groups I mentioned, but if you have an incompressible fluid flow, at any time t you get a smooth map phi_t : R^3 -> R^3 whose differential at any point x, say (d phi_t)(x), is volume-preserving, so if you trivialize the tangent bundle of R^3 in the usual way you get (d phi_t)(x) in SL(3,R).
In this way you get for each time t an SL(3,R)-valued function on R^3.
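Here's a sketch of this in the linear case (the particular trace-free matrix is just an example I picked): for a divergence-free linear velocity field u(x) = A x, the differential of the time-t flow map is the matrix exponential exp(tA), whose determinant is exp(t tr A) = 1, so it lies in SL(3,R).

```python
# For an incompressible flow, the spatial differential of the flow
# map lands in SL(3,R). Linear case: velocity field u(x) = A x with
# tr A = 0 (so div u = 0); the time-t flow differential is exp(t A),
# with det exp(t A) = exp(t * tr A) = 1.
import numpy as np

def expm(A, terms=40):
    """Matrix exponential via Taylor series (fine for small matrices)."""
    out, term = np.eye(3), np.eye(3)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

A = np.array([[0.3, 1.0, 0.0],
              [0.0, -0.5, 2.0],
              [1.0, 0.0, 0.2]])   # trace = 0, so the field is divergence-free

for t in (0.5, 1.0, 2.0):
    J = expm(t * A)               # differential of the time-t flow map
    assert abs(np.linalg.det(J) - 1.0) < 1e-8   # J is in SL(3,R)
```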
Sam Tenka said:
Thanks for your post, John. I do like this viewpoint a lot. I suppose that those weird "nonpolynomial" reps of GL involving noninteger powers of det can be useful for things like wavefunctions, whose units ought to be square roots of densities. (This just corresponds to the "diagonal" GL(1) subgroup of GL(n), which is scaling, as you say).
Yes, I'm impressed that you know this stuff. In geometric quantization people talk about "half-forms", which is a confusing name for things that when squared give densities. See the confused question here and the corrective answer.
Thanks, John. I usually insecurely beat myself up about stuff that your compliment counters, so that's useful. I skimmed your geometric quantization blog posts a while ago --- I didn't get far, but I was pleased at least to get the gist of the S^2 example for spins (especially pleasing because it's not topologically just some cotangent bundle; I wonder whether other compact surfaces arise in physics). Maybe now I am better equipped to appreciate those posts.
Another thing --- kinda basic physics but i think still relevant to a thread about physical units --- is whether there is a space-time symmetric formulation of hamiltonian mechanics (even nonrelativistic)? In the lagrangian formulation, we can just think about a blob in space time, and study consistency conditions for field values on that blob's boundary (e.g. if the fields and their derivatives started out this way on this timeslice, then how would they look on this other timeslice). So it's very symmetrical. But hamilton singles out the time direction. I'd love to see some equation like A_mu^nu v_{nu} = 0, where v is tangent to our state's worldline (within some ambient space such as phase space x time) and A is some symmetrical looking object gotten from a hamiltonian defined on that ambient space together with something like a symplectic form. (Wikipedia says there is something called a contact structure, but I don't actually see whether this leads to a picture of physics, not to mention a beautiful-rather-than-duct-taped-together picture)
For incompressible fluid flows --- let's think in 2D space because then we can relate SL(2,R)'s reps to SU(2)'s reps --- it seems we get more "units" than I expected, analogous to different spins! Maybe these can be useful when studying vortices in incompressible flows. I wonder whether we can make sense of a "spin 1/2" vortex and, if so, what to make of the singlet and triplet combinations for two spin 1/2 vortices near each other. This sounds like it rhymes with Poincaré-Hopf but I haven't visualized my way through the situation yet.
Sam Tenka said:
I skimmed your geometric quantization blog posts a while ago --- I didn't get far, but I was pleased at least to get the gist of the S^2 example for spins (especially pleasing because it's not topologically just some cotangent bundle; I wonder whether other compact surfaces arise in physics).
Every irreducible representation of every compact simple Lie group arises from geometric quantization of some compact symplectic manifold called a 'partial flag manifold'; using S^2 to get the representations of SU(2) is the easiest example of this theory, but the whole theory is incredibly beautiful. If you want to learn more, try week181 of This Week's Finds.
For example S^2 really shows up when you're studying SU(2) because S^2 = SU(2)/U(1). If you were interested in representations of SU(3), one symplectic manifold you'd get interested in geometrically quantizing - not the only one - is CP^2 = SU(3)/U(2).
None of this seems extremely important to the working physicist: physicists are quite happy using representations of SU(3) to describe the internal states of gluons and quarks without asking "what is this the quantization of?"
But I think it's really cool that we can talk about the phase space of a classical spinning particle (which is S^2), or the phase space of color states of a classical quark (which is CP^2).
Another thing --- kinda basic physics but i think still relevant to a thread about physical units --- is whether there is a space-time symmetric formulation of hamiltonian mechanics (even nonrelativistic)?
Are you talking about classical field theory or classical particle mechanics? From further on in your comment I think the former. But in fact both have interesting manifestly invariant Hamiltonian formulations, which don't require you to single out a time direction.
I'm the kind of guy who likes this stuff - just like the "classical quark", less because it's of urgent practical importance than because it fills an annoying hole in the usual curriculum!
Re particles or fields:
I was thinking either, whichever leads to a more beautiful theory! I phrased the lagrangian picture in terms of fields rather than finite dimensional phase space because then the notion of boundary constraints becomes more psychologically natural to me: we don't have to single out time by talking about start and end conditions.
Where should I look if i want to learn more about the invariant formulations you hinted at?
John Baez said:
I'm the kind of guy who likes this stuff - just like the "classical quark", less because it's of urgent practical importance than because it fills an annoying hole in the usual curriculum!
I feel likewise, with two important differences.
One difference is that I've never heard of the classical quark. (I'm intrigued; I know how we can set up matterless gauge theories classically (tho the classical vs quantum physics likely looks quite different except for U(1)), but I sense that that's not what you're talking about.) I think you're zooming in to the fiber of the quark field to ask: what is it, classically? As a first answer we could say: well, we can choose whatever matter representations we want for our gauge group; some might appear in nature and others might not. I think what you're saying is that when we focus on a particular representation (here, perhaps the "defining" one for SU(3)), then even the individual fibers have rich classical meaning.
Another difference is that I don't care about physics curricula except for myself! (And for students I teach in computer stuff, which is separate.) I do have weirdly strong preferences for what stories I like and don't like; I think I'm allowed to have strong preferences since I don't plan to be a physicist, so none of this is stuff I have to learn. E.g. I often find Feynman's lectures inspiring but too squishy, e.g. his explanation of spin-statistics in the Dirac memorial lecture.
I don't care about the physics curriculum either except I took lots of physics courses so I know where the holes in that curriculum are, and then I had to read stuff and think in order to fill them.
@John Baez
I gave it a bit of thought and I think a "simplest possible attempt" works, and it also cries out for extension.
Let's think about particles --- actually, just one particle, since we're talking particles, not fields, and if we model instantaneous interactions-at-a-distance then we've already spoiled space-time symmetry. (An intriguing option might be to allow spacetime points at different times to talk to each other: maybe this is like the Wheeler-Feynman formulation of electric interactions; I never learned that stuff.) For notational convenience, we'll work in 1+d = 1+1 dimensional spacetime.
Each point of a (2+2d) dimensional manifold M tells us an energy, time, momentum, and place. We furnish M with a symplectic form $\omega = dE \wedge dt - dp \wedge dx$, where we guessed that minus sign based on relativity. A hamiltonian H determines a codimension-1 submanifold S of "on-shell" points in M, where H = 0. A worldline W is a dimension-1 submanifold of S. We want a constraint on W's tangent vectors v. Well, let's try the simplest equation that uses all these parts:
$$ \omega(v, s) = 0 \text{ for all } s \text{ tangent to } S $$
Let's see what this says concretely by parameterizing W (arbitrarily) by a pocketwatch w (it is good that world and watch start with the same letter!), and writing H = E - h(t, p, x). Each tangent s to S with components (A, a, B, b) (along E, t, p, x) has dH(s) = 0, i.e. A = a h_t + B h_p + b h_x, so our generalized hamilton says:
$$ \dot{E} a - \dot{t} A - \dot{p} b + \dot{x} B = 0 $$
or, substituting for A,
$$ a (\dot{E} - \dot{t} h_t) + B (\dot{x} - \dot{t} h_p) - b (\dot{p} + \dot{t} h_x) = 0 $$
But since this holds for all (a, B, b), we have
$$ \dot{E} = \dot{t} h_t, \quad \dot{x} = \dot{t} h_p, \quad \dot{p} = -\dot{t} h_x $$
Choosing the parameterization with $\dot{t} = 1$: These are hamilton's equations!
In short: the worldline's velocity vector in spacetime transforms, via the symplectic form, to a covector that under Poincaré duality looks like the on-shell constraint. I am very pleased with this :-)
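One can also check this equation numerically. Here's a sketch (the harmonic-oscillator h and the specific point are my choices for concreteness): with omega = dE^dt - dp^dx in coordinates (E, t, p, x) and H = E - h, the direction v given by Hamilton's equations is omega-orthogonal to every vector tangent to the on-shell surface.

```python
# Check: v = (h_t, 1, -h_x, h_p) (Hamilton's equations, parameterized
# by t) satisfies omega(v, s) = 0 for every s tangent to {H = 0},
# where H = E - h(t, p, x) on extended phase space.
import numpy as np

# omega(u, w) = u @ Omega @ w in coordinates (E, t, p, x),
# for omega = dE^dt - dp^dx
Omega = np.array([[0, 1, 0, 0],
                  [-1, 0, 0, 0],
                  [0, 0, 0, -1],
                  [0, 0, 1, 0]], dtype=float)

# Example: h(t, p, x) = p^2/2 + x^2/2 (harmonic oscillator), at a point
t, p, x = 0.0, 0.7, -1.3
h_t, h_p, h_x = 0.0, p, x                    # partials of h
grad_H = np.array([1.0, -h_t, -h_p, -h_x])   # gradient of H = E - h

# Hamilton's-equations direction: (E', t', p', x') = (h_t, 1, -h_x, h_p)
v = np.array([h_t, 1.0, -h_x, h_p])

# Basis of the tangent space of {H = 0}: kernel of grad_H, via SVD
_, _, Vt = np.linalg.svd(grad_H.reshape(1, -1))
tangent_basis = Vt[1:]                       # 3 vectors spanning ker(grad_H)

for s in tangent_basis:
    assert abs(v @ Omega @ s) < 1e-9         # omega(v, s) = 0
```

Equivalently, v @ Omega is proportional to grad_H, which is the "covector looks like the on-shell constraint" statement.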
Fascinating challenge: how might we extend this to interacting particles? Along the lines of the introductory paragraph, we could try defining a hamiltonian on the space M^n, so that every particle gets a personal (but still extrinsic rather than pocketwatch) time coordinate. Instead of a world-line in M, we might want an n-dimensional "world-sheet" in M^n (not in the sense of string theory, where the sheet resides in spacetime!) that obeys consistency conditions (closed under "swapping diagonals"). ...fun to think about but I still lack clarity as to how to proceed...
Oh, actually the same equation works with multiple particles with instantaneous interactions. My complaint about that spoiling space time symmetry doesn't mess up the math; it just makes the math less beautiful. But the formalism doesn't know which degrees of freedom are "internal" vs "spatial", so it doesn't care.
Okay, I'm glad I kept putting off answering your questions because you answered this one yourself! Congratulations!
I'll just say that the trick you're using - taking the cotangent bundle of a manifold describing spacetime and using points of this symplectic manifold to describe the position, momentum, time and energy of a particle - is called an extended phase space.
You are imposing one constraint H = 0 and getting a codimension-one submanifold S. Then you can mod out by the flow generated by the function H and, in good cases, get a new symplectic manifold which is the "true" phase space of your particle, described in an invariant way.
This two-step process of imposing constraints and then modding out by the transformations they generate is called symplectic reduction.
@John Baez ooh what are some physical applications of symplectic reduction? For example, maybe in a problem with spatial rotational symmetry one can mod out by spatial rotation and then work with just 1 (radial) space dimension? (as when solving Kepler or hydrogen)
I wonder what hamiltonian mechanics looks like when we have a map from some topologically interesting extended phase space (say, a product of compact surfaces each equipped with a volume 2-form, one surface for "time x energy" and another surface for "space x momentum") to a real line, interpreted as time, that is a Morse function. Specifically, when the hamiltonian does not depend on time, what happens to trajectories near critical points?
Sam Tenka said:
John Baez ooh what are some physical applications of symplectic reduction? For example, maybe in a problem with spatial rotational symmetry one can mod out by spatial rotation and then work with just 1 (radial) space dimension? (as when solving kepler or hydrogen)
There are tons of applications but I've never tried that one. What I've mainly thought about is a bit fancier: in gauge theories we have an infinite-dimensional symplectic manifold on which a group of "gauge transformations" acts, and we can do symplectic reduction to get the "physical phase space", a symplectic manifold whose points describe physically distinguishable states.
If you want a much simpler example, you can start with two particles in R^n and then admit (or decide) that there's no way to tell the position of one particle except relative to the other! Then you can mod out by translations and impose the constraint that the total momentum is zero (since the total momentum generates spatial translations).
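A sketch of that two-particle reduction, with everything on a line and V(r) = r^2/2 as an illustrative potential (my choices for concreteness): on the constraint surface p1 + p2 = 0, the full dynamics projects exactly onto the reduced one-particle dynamics in the relative coordinate.

```python
# Two unit-mass particles on a line with potential V(x1 - x2).
# Constraint: total momentum p1 + p2 = 0 (the generator of
# translations); then mod out by translations. The reduced system is
# one particle with coordinate r = x1 - x2, momentum pr = (p1 - p2)/2,
# and reduced mass 1/2.
import numpy as np

def Vprime(r):
    return r          # V(r) = r**2 / 2, so V'(r) = r

def full_field(x1, x2, p1, p2):
    """(x1', x2', p1', p2') for H = p1^2/2 + p2^2/2 + V(x1 - x2)."""
    f = Vprime(x1 - x2)
    return p1, p2, -f, f

def reduced_field(r, pr):
    """(r', pr') for H_red = pr^2 / (2 * (1/2)) + V(r)."""
    return 2 * pr, -Vprime(r)

# On the constraint surface, the full flow projects onto the reduced flow:
x1, x2, p1 = 0.4, -1.1, 0.9
p2 = -p1                                   # total momentum zero
dx1, dx2, dp1, dp2 = full_field(x1, x2, p1, p2)
dr, dpr = reduced_field(x1 - x2, (p1 - p2) / 2)
assert np.isclose(dx1 - dx2, dr)           # d/dt (x1 - x2) matches
assert np.isclose((dp1 - dp2) / 2, dpr)    # d/dt ((p1 - p2)/2) matches
```

The center of mass (x1 + x2)/2, which the quotient by translations discards, decouples completely, which is why the reduced system is an honest one-particle problem.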
John Baez said:
There's not a "god-given" orientation in 3d Euclidean space unless you think god is responsible for the right-hand rule.
By means of creating our hands, I guess God did :laughing:
Sam Tenka said:
I suppose that units appear whenever we have a symmetry, namely as irreps of groups acting on sets of conceivable physical systems. When we "zoom out" from stat mech to thermo, then a new symmetry appears --- scaling symmetry, since approximate extensivity (and, more to the point, divisibility) in the thermo limit becomes exact. I wonder where else besides thermo this kind of "birth of new units" occurs?
One too-similar-to-feel-satisfying answer is fluid mechanics, where we can find lots of insight just from dimensional considerations (e.g. the concept of Reynolds number) because we regard water as a continuous fluid. I heard about "dimensional transmutation" for the strong force, but I don't understand QCD at all, so I don't know whether this is an example of a new unit coming from a new symmetry (fixed point of RG flow); I kinda doubt it.
(( I also note that sometimes more symmetry leads to fewer units, e.g. nanoseconds and feet become the same in special relativity. This is because the Lorentz group ain't a product of the old symmetries with a new factor; the new group nontrivially extends the old group and happens to merge irreps. ))
((( edit: here is a whimsical image of infinity as having different units than finity: the axes with which we construct our mental plots "fray" near the ends into new dimensions. )))
This sounds reaaally cool but I don't really grok the idea of symmetries inducing units? Does it go through Noether's theorem, so that we can measure what is somehow physically preserved? But this doesn't feel right because there's plenty of quantities of interest that are not conserved in a given physical system. So you must mean something else!
@Matteo Capucci (he/him) i mean something described here!
https://www.reddit.com/r/math/comments/riw7qb/dimensional_analysis_via_group_actions/
(in that note, I should have written "torque magnitude" rather than "torque")
Thanks! I'll read it
Matteo Capucci (he/him) said:
John Baez said:
There's not a "god-given" orientation in 3d Euclidean space unless you think god is responsible for the right-hand rule.
By means of creating our hands, I guess God did :laughing:
God made left hands too. So maybe you're claiming he made our right hands more dextrous? If so, yeah, then he's ultimately to blame for the right-hand rule.
(Of course I don't take the "God" stuff seriously. There's actually been a bunch of interesting work on biology trying to track left-right asymmetry in organisms back to its origins. I had studied this decades ago, and when I revisited it later I was pleased to find a lot of new research has been done!)