You're reading the public-facing archive of the Category Theory Zulip server.
To join the server you need an invite. Anybody can get an invite by contacting Matteo Capucci at name dot surname at gmail dot com.
For all things related to this archive refer to the same person.
Can a module have two bases that are finite, but with different sizes? If so what's a basic example?
Are you perhaps looking for the invariant basis number property of a ring?
Yes, that hits the nail on the head. Thanks!
To give a specific example of a module with bases of different finite size:
Let $V$ be a vector space over a field $F$ with a countably infinite basis $\{b_1, b_2, \dots\}$. Let $B = \mathcal{L}(V)$ be the ring of linear operators on $V$. Observe that $B$ is not commutative, as composition of functions is not commutative.
The ring $B$ is a $B$-module and as such, the identity map forms a basis for $B$. However, it is also possible to construct a basis for $B$ of any desired finite size $n$.
These quotes are from page 129 of "Advanced Linear Algebra" by Roman.
The author goes on to construct a basis for $B$ that has two elements. They do this by defining two linear operators $\beta_1$ and $\beta_2$. They are defined by: $\beta_1(b_{2k}) = b_k$ and $\beta_1(b_{2k-1}) = 0$; and $\beta_2(b_{2k-1}) = b_k$ and $\beta_2(b_{2k}) = 0$.
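A quick computational sanity check that $\{\beta_1, \beta_2\}$ spans (a sketch; the encoding of an operator as a function from a basis index to a dict of image coefficients is my own, and we only check finitely many basis vectors):

```python
# Sketch: verify that an arbitrary operator f decomposes as
# f = g1 . beta1 + g2 . beta2, where "." is composition (the ring product).
# An operator sends a basis index n >= 1 to a dict {index: coefficient}
# describing the image of b_n.

def beta1(n):
    # beta1(b_{2k}) = b_k, beta1(b_{2k-1}) = 0
    return {n // 2: 1} if n % 2 == 0 else {}

def beta2(n):
    # beta2(b_{2k-1}) = b_k, beta2(b_{2k}) = 0
    return {(n + 1) // 2: 1} if n % 2 == 1 else {}

def compose(g, f):
    # (g . f)(b_n) = g(f(b_n)), extended linearly
    def h(n):
        out = {}
        for m, c in f(n).items():
            for p, d in g(m).items():
                out[p] = out.get(p, 0) + c * d
        return {k: v for k, v in out.items() if v != 0}
    return h

def add(f, g):
    def h(n):
        out = dict(f(n))
        for m, c in g(n).items():
            out[m] = out.get(m, 0) + c
        return {k: v for k, v in out.items() if v != 0}
    return h

# An arbitrary test operator f, and its coordinates g1, g2 given by
# g1(b_n) = f(b_{2n}) and g2(b_n) = f(b_{2n-1}).
def f(n):
    return {n + 3: 2, 1: 5}

g1 = lambda n: f(2 * n)
g2 = lambda n: f(2 * n - 1)

# f agrees with g1 . beta1 + g2 . beta2 on the first 20 basis vectors
recombined = add(compose(g1, beta1), compose(g2, beta2))
assert all(recombined(n) == f(n) for n in range(1, 21))
```

The even/odd index split is what makes the coordinates unique: $\beta_1$ only sees even-indexed basis vectors and $\beta_2$ only odd-indexed ones.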
Nice, thanks
David Tanzer said:
Can a module have two bases that are finite, but with different sizes? If so what's a basic example?
I believe the trivial ring $0$, regarded as a module over itself, has a basis of cardinality $n$ for every natural number $n$, since $0$ is isomorphic to $0^n$ for all natural numbers $n$.
I don't see how you're getting a basis of cardinality $n$, given that $0^n$ has cardinality 1
I would think every module over 0 has a unique basis, the empty set
But I'm not confident in that assertion. It's such a degenerate case
I guess the issue is that over the zero ring indexed bases behave very differently than over other rings, because $\{0\}$ is linearly independent. What I said isn't right though: there are two (non-indexed) bases for any module, the empty basis and the one-element basis
Nope, what you said before was right @Brendan Murphy . The definition of linear dependence is the existence of non-zero coefficients weighting a sum to zero, so $\{0\}$ is always linearly dependent (and hence not a basis)
But there are no nonzero coefficients!
Oh good point!
Ha so if you allow bases to be multisets then you do get arbitrarily large cardinality after all
The invariant basis number property is stated using the categorically-correct notion of "basis" for a module $M$, namely a set $X$ and an isomorphism from $M$ to the free module generated by $X$. In this sense, the zero ring does not have the IBN, since the free module on any set is isomorphic to the free module on any other set. But it's confusing because over any nonzero ring, any set injects into the free module it generates, and thus a basis for a module can be identified with a particular subset of that module.
The Wikipedia article on the IBN property gives a nice example of a non-IBN ring. For a ring $R$, let $S$ be the ring of countably infinite matrices with entries in $R$ and columns with finite support. View $S$ as a left module over itself.
So $\{I\}$, where $I$ is the identity matrix, is a basis for $S$.
Then define $f(m) = (\text{even columns of } m, \text{odd columns of } m)$; $f$ is an isomorphism from $S$ to $S \oplus S$. Turning this into a basis construction...
Let $E$ be the subspace of matrices with zeros in the even columns, and $O$ be the matrices with zeroes in the odd columns. Let $u: S \to E$ be the injection inserting zeros into the even columns: $u(m)_{i,2k-1} = m_{i,k}$, and $v: S \to O$ be the injection inserting zeros into the odd columns: $v(m)_{i,2k} = m_{i,k}$.
Let $A = v(I)$ and $B = u(I)$. Then $\{A, B\}$ is also a basis for $S$.
For $m \in S$, its coordinates with respect to $\{A, B\}$ are $f(m) = (\text{even columns of } m, \text{odd columns of } m)$: that is, $m = (\text{even columns of } m) \cdot A + (\text{odd columns of } m) \cdot B$.
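A finite truncation of this decomposition can be checked numerically (a sketch with hypothetical stand-ins: $n \times 2n$ blocks play the role of the infinite column-finite matrices, and "even/odd columns" are 1-indexed as in the discussion):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Truncated stand-ins for the basis {A, B}: right-multiplication by A
# places the columns of an n x n matrix into the even positions
# (1-indexed: 2, 4, ...) of an n x 2n matrix; B places them into the
# odd positions (1, 3, ...).
A = np.zeros((n, 2 * n), dtype=int)
B = np.zeros((n, 2 * n), dtype=int)
for k in range(n):
    A[k, 2 * k + 1] = 1  # 0-indexed column 2k+1 is the 1-indexed even column 2k+2
    B[k, 2 * k] = 1      # 0-indexed column 2k is the 1-indexed odd column 2k+1

m = rng.integers(-5, 5, size=(n, 2 * n))
m_even = m[:, 1::2]  # 1-indexed even columns of m, compressed to n x n
m_odd = m[:, 0::2]   # 1-indexed odd columns of m, compressed to n x n

# m decomposes as (even columns) * A + (odd columns) * B
assert np.array_equal(m_even @ A + m_odd @ B, m)
```

The coordinates are unique because $A$ and $B$ have disjoint column supports, mirroring the even/odd split in the infinite case.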
Nice! Wikipedia points out that in this case we get $S \cong S^2$ as left $S$-modules and thus
$S^m \cong S^n$ as left $S$-modules for all $m, n \geq 1$.
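The step from $S \cong S^2$ to all pairs of finite ranks is a short induction (a sketch):

```latex
S^{n} \;=\; S^{n-1} \oplus S \;\cong\; S^{n-1} \oplus S^{2} \;=\; S^{n+1}
\qquad (n \geq 1),
```

so $S \cong S^2 \cong S^3 \cong \cdots$, and hence $S^m \cong S^n$ for all $m, n \geq 1$.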
But intriguingly they mention there are examples of rings $R$ where
$R^m \cong R^n$ as left $R$-modules for some $m \neq n$, but not all!
This sounds like a much more exotic and subtle phenomenon. They say that "Leavitt algebras" give examples, but I have no idea what a Leavitt algebra is, and they only point to this:
I wonder which partitions of the natural numbers are obtainable, for some ring, as the partition grouping together those ranks whose free modules are isomorphic.
Re: Leavitt algebras, here's a reference on Leavitt path algebras, which generalize Leavitt algebras:
a Leavitt path algebra is a universal algebra constructed from a directed graph. Leavitt path algebras generalize Leavitt algebras and may be considered as algebraic analogues of graph C*-algebras.
For a fixed field $K$ and directed graph $E$, the article describes some construction of $L_K(E)$, the Leavitt path algebra for $E$, which is a $K$-algebra. There is a table showing certain cases which have been worked out.
There are some correspondences stated between graph properties and algebraic properties.
The construction itself looks intricate, haven't grokked it yet. Also not clear how this could be brought to bear on the point John quoted from the article on the IBN property.
David Tanzer said:
Re: Leavitt algebras, here's a reference on Leavitt path algebras, which generalize Leavitt algebras:
For a fixed field $K$ and directed graph $E$, the article describes some construction of $L_K(E)$, the Leavitt path algebra for $E$, which is a $K$-algebra. There is a table showing certain cases which have been worked out.
- For the graph with one vertex and zero edges, the $K$-algebra is $K$ itself.
- For the graph with one vertex and one self-edge, you get the Laurent polynomials $K[x, x^{-1}]$.
- For a linear graph with $n - 1$ edges on $n$ vertices, you get the $n \times n$ matrices with entries in $K$.
In these three examples what we're getting is just the usual path algebra, so I'm wondering if "Leavitt path algebra" is a synonym for "path algebra". The idea is that given a directed graph you form an algebra whose basis consists of all directed paths in that graph. Given directed paths $p$ and $q$, if $q$ starts where $p$ ends we define the product $pq$ to be the usual composite path: the path where you first go along $p$ and then $q$. If $q$ doesn't start where $p$ ends we set $pq = 0$ because... what else can we do?
This completely describes multiplication in the path algebra, since to describe an algebra you just need to say how to multiply basis elements and check associativity.
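This multiplication rule is easy to sketch in code (all names here are my own invention; a path is stored as a base vertex together with a tuple of edge names, where the base vertex only matters for length-0 paths):

```python
# Sketch of multiplying basis elements of a path algebra: the product
# of directed paths p and q is their concatenation when q starts where
# p ends, and 0 (here: None) otherwise.

# Edges of a sample graph: name -> (source vertex, target vertex)
edges = {"e": (1, 2), "f": (2, 3), "g": (2, 2)}

def src(path):
    v, steps = path
    return v if not steps else edges[steps[0]][0]

def tgt(path):
    v, steps = path
    return v if not steps else edges[steps[-1]][1]

def mul_basis(p, q):
    # pq = "first go along p, then q" when q starts where p ends, else 0
    if tgt(p) != src(q):
        return None
    return (src(p), p[1] + q[1])

e = (1, ("e",))
f = (2, ("f",))
g = (2, ("g",))

assert mul_basis(e, f) == (1, ("e", "f"))  # e ends at 2, f starts at 2
assert mul_basis(e, g) == (1, ("e", "g"))
assert mul_basis(f, e) is None             # f ends at 3, e starts at 1
```

Extending `mul_basis` bilinearly to formal linear combinations of paths gives the full algebra product, and associativity is inherited from concatenation of paths.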
Okay, now I looked at the Wikipedia article and see that yes, the description there is indeed truly horrible!
I can't even bear to understand it.
However, my guess is that it's a bit like the graph algebra I described, but it's a $\ast$-algebra where for each path $p$ we get another basis element $p^\ast$ which we think of as the "reverse" path.... and they seem to include some extra relations as well.
There's an idempotent for each vertex, the product of the idempotents for two different vertices is 0, the product of going "backward then forward" is the idempotent for the vertex you started on, but if you go "forward then backward" you have to add up all the possible ways of doing that to get the same thing ... but the relation only holds when it "makes sense" (the sum is finite and nonempty).
(all the possible ways of doing it in one step anyways)
As a result, the basis elements are paths that go forward then backward, and where you "turn around" there has to be another choice besides going straight back where you came, even if you didn't take it. That's why you get things like the Laurent algebra and the matrix algebra as examples--the graphs for those examples are thin so you can only trivially go forward or back and never turn around in the middle. The proper Leavitt algebras are generally not thin though.
The ordinary Leavitt algebra $L_n$ has generators $x_1, \dots, x_n$, each of which has a left-sided inverse $x_i^\ast$: $x_i^\ast x_i = 1$ and $\sum_{i=1}^n x_i x_i^\ast = 1$. As for $x_i^\ast x_j$ for $i \neq j$, it's $0$.
The products $x_i x_i^\ast$ form an orthogonal partition of unity, which is probably what makes these algebras useful for describing modules with finite bases of different sizes.
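For what it's worth, the $n = 2$ relations can be realized concretely by operators on a space with countable basis $b_1, b_2, \dots$, reusing the even/odd splitting trick from earlier in the thread (a sketch; the representation of $x_1, x_2$ as the "double the index" and "double minus one" operators is an assumption on my part, checked only on finitely many basis vectors):

```python
# Sketch: realize the Leavitt relations for n = 2 by operators on a
# vector space with basis b_1, b_2, ...; an operator sends a basis
# index to a dict {index: coefficient}.

def x1(k):   # x1: b_k -> b_{2k}
    return {2 * k: 1}

def x2(k):   # x2: b_k -> b_{2k-1}
    return {2 * k - 1: 1}

def x1s(n):  # x1*: b_{2k} -> b_k, b_{2k-1} -> 0
    return {n // 2: 1} if n % 2 == 0 else {}

def x2s(n):  # x2*: b_{2k-1} -> b_k, b_{2k} -> 0
    return {(n + 1) // 2: 1} if n % 2 == 1 else {}

def compose(g, f):
    # (g . f)(b_n) = g(f(b_n)), extended linearly
    def h(n):
        out = {}
        for m, c in f(n).items():
            for p, d in g(m).items():
                out[p] = out.get(p, 0) + c * d
        return {k: v for k, v in out.items() if v != 0}
    return h

ident = lambda n: {n: 1}
N = 30  # check the relations on the first N basis vectors

# x_i* x_j = delta_ij: left-sided inverses, orthogonality
assert all(compose(x1s, x1)(n) == ident(n) for n in range(1, N))
assert all(compose(x2s, x2)(n) == ident(n) for n in range(1, N))
assert all(compose(x1s, x2)(n) == {} for n in range(1, N))
assert all(compose(x2s, x1)(n) == {} for n in range(1, N))

# x1 x1* + x2 x2* = 1; the two terms have disjoint supports (even vs.
# odd indices), so a dict merge computes their sum here
p1, p2 = compose(x1, x1s), compose(x2, x2s)
assert all({**p1(n), **p2(n)} == ident(n) for n in range(1, N))
```

Here $p_1 = x_1 x_1^\ast$ projects onto the even-indexed basis vectors and $p_2 = x_2 x_2^\ast$ onto the odd-indexed ones, which is exactly the orthogonal partition of unity.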
That's making more sense - thanks. I see that after the user-unfriendly definition, there is a nice redefinition:
...one can show that
Note, however, that all commutative rings have the IBN property, so, say, in commutative algebra people just take it for granted.
Xuanrui Qi said:
Note, however, that all commutative rings have the IBN property, so, say, in commutative algebra people just take it for granted.
All non-trivial commutative rings have the IBN property. Earlier in this thread there is a discussion of how the trivial ring doesn't have the IBN property.
Yeah, my bad.