Idempotent elements in Rings

For a commutative ring R , an idempotent is an element e \in R such that e^2 = e . The collection Re is an ideal of R and, moreover, a ring in its own right with identity e (why?). A good example of an idempotent to have in mind is the element (1,0) \in R \times R' , where R' is another commutative ring, and in some sense these are the only idempotents.

For an idempotent e \in R the element e' = 1 - e \in R is also an idempotent and ee' = 0 . These should remind you of the elements e = (1,0), e' = (0,1) \in R\times R'. We call a pair \{e, e'\} with e, e' \in R , e' = 1 - e and ee' = 0 a pair of complementary idempotents.

Complementary idempotents give a formulation of an internal direct product. What do I mean by this? If \{e,e'\} is a pair of complementary idempotents in R, then defining R' = Re, R'' = Re' (remember these are rings in their own right) we have that \phi \colon R \to R' \times R'' where r \mapsto (re,re') is a ring isomorphism. This is an instructive check to make and I leave it to the reader.
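
Here is a quick sanity check of this isomorphism in Python (a minimal sketch; the modulus 6 and the idempotent 3 are just illustrative choices), verifying that r \mapsto (re, re') is a bijection from \mathbb{Z}/6\mathbb{Z} onto Re \times Re' :

# Complementary idempotents e = 3, e' = 4 in Z/6Z and the map r -> (r*e, r*e').
n = 6
e, e_prime = 3, 4

assert (e * e) % n == e                     # e is idempotent
assert (e_prime * e_prime) % n == e_prime   # e' is idempotent
assert (e + e_prime) % n == 1               # e' = 1 - e
assert (e * e_prime) % n == 0               # ee' = 0

Re = {(r * e) % n for r in range(n)}              # the ideal Re = {0, 3}
Re_prime = {(r * e_prime) % n for r in range(n)}  # the ideal Re' = {0, 2, 4}

images = {((r * e) % n, (r * e_prime) % n) for r in range(n)}
# phi hits n distinct pairs and |Re| * |Re'| = n, so phi is a bijection
assert len(images) == n == len(Re) * len(Re_prime)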

What do idempotents of some rings look like? 0 and 1 are always idempotent, making \{1,0\} a pair of complementary idempotents. In any field, the only non-zero idempotent element is the multiplicative unit 1 .

What about the rings \mathbb{Z}/n\mathbb{Z} ? If n = p^k for some prime number p , suppose that a + p^k\mathbb{Z} is an idempotent, so that p^k | a(1-a) . The prime p cannot divide both a and 1-a , for then it would divide their sum a + (1-a) = 1 , which is a contradiction; hence p^k | a or p^k | 1-a . The only idempotents of \mathbb{Z}/p^k\mathbb{Z} are therefore 0 and 1, and the only pair of complementary idempotents is \{1,0\}.

We can now look at the general case. Let n = {p_1}^{r_1} \ldots {p_m}^{r_m} be the prime factorisation of n . The Chinese remainder theorem tells us \mathbb{Z}/n\mathbb{Z} \cong \mathbb{Z}/{p_1}^{r_1}\mathbb{Z} \times \cdots \times \mathbb{Z}/{p_m}^{r_m}\mathbb{Z} , so there are 2^m idempotents (a choice of 0 or 1 in each factor) and 2^{m-1} pairs of complementary idempotents.
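
This count is easy to check by brute force in Python (a minimal sketch; the modulus 360 is just an illustrative choice):

# Enumerate the idempotents of Z/nZ and compare their number with 2^m,
# where m is the number of distinct prime factors of n.
def idempotents(n):
    return [a for a in range(n) if (a * a) % n == a]

def num_distinct_prime_factors(n):
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    return count + (1 if n > 1 else 0)

n = 360  # 360 = 2^3 * 3^2 * 5, so m = 3 and we expect 2^3 = 8 idempotents
assert len(idempotents(n)) == 2 ** num_distinct_prime_factors(n)
print(idempotents(n))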


Lagrange Interpolation, a unique polynomial

Let z_1, \ldots, z_k be k distinct complex numbers and let w_1, \ldots, w_k be k complex numbers which need not be distinct. A natural question to ask is, can we find a polynomial P(z) such that P(z_m) = w_m for m = 1,\ldots, k ?

We can, and in fact we can say more: there is a unique polynomial of degree \leq k-1 satisfying the above! This polynomial is called the Lagrange interpolation polynomial and can be constructed explicitly.

We start by setting A(z) = (z - z_1)\cdots(z-z_k) and let A_m(z) = \dfrac{A(z)}{z-z_m} .

What have we just constructed? A_m(z) is a polynomial of degree k-1 with A_m(z_m) \neq 0 and A_m(z_i) = 0 if i \neq m.

So \dfrac{A_m(z)}{A_m(z_m)} is another polynomial of degree k-1 which vanishes at z_j for j \neq m and takes the value 1 at z_m. This polynomial picks out an individual point and ignores all the rest, which is precisely what we need.

Let us define P(z) = \sum_{m=1}^{k} w_m \dfrac{A_m(z)}{A_m(z_m)} which is a polynomial of degree \leq k-1 with the properties we want.

What about uniqueness? Suppose we have another such polynomial Q(z). The difference P(z) - Q(z) has degree \leq k-1 but vanishes at the k distinct points z_1, \ldots, z_k ; a non-zero polynomial of degree \leq k-1 has at most k-1 roots, so the difference must be 0 and hence P(z) = Q(z).
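
The construction translates directly into Python (a minimal sketch; the sample points below are just an illustrative choice):

# Evaluate the Lagrange interpolation polynomial through (z_m, w_m) at z.
def lagrange_interpolate(zs, ws, z):
    total = 0
    for m, (zm, wm) in enumerate(zip(zs, ws)):
        # A_m(z) / A_m(z_m): a product over all the other nodes
        basis = 1
        for j, zj in enumerate(zs):
            if j != m:
                basis *= (z - zj) / (zm - zj)
        total += wm * basis
    return total

# The unique quadratic through (1, 2), (2, 5), (3, 10) is z^2 + 1.
zs, ws = [1, 2, 3], [2, 5, 10]
for zm, wm in zip(zs, ws):
    assert abs(lagrange_interpolate(zs, ws, zm) - wm) < 1e-12
assert abs(lagrange_interpolate(zs, ws, 4) - 17) < 1e-12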

 

Algebras and why modules are a thing

A k-algebra A, where k is a field, is a ring with the added structure of being a vector space over k (with respect to the same addition) where scalar multiplication behaves nicely with the ring multiplication: \mu(ab) = (\mu a)b = a(\mu b) for \mu \in k and a, b \in A. The archetypal example is the algebra of n \times n matrices with entries in k, denoted M_{n}(k). There is a notion of algebra morphism which is not too hard to figure out with the above or a quick google, so we have a notion of isomorphism.

Matrix algebras are very special: they have vectors on which they can act. For M_{n}(k) these objects are the tuples (x_1, \cdots, x_n) where x_i \in k. As a collection, they are denoted k^n. These form a vector space and as algebras M_{n}(k) \cong \text{End}(k^n) , where the algebra operation in M_n(k) is matrix multiplication and \text{End}(k^n) is the algebra of linear maps k^n \longrightarrow k^n with the algebra operation being composition.
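
To make the correspondence concrete, here is a quick numerical check in Python (a minimal sketch over k = \mathbb{R} using NumPy; the particular matrices are arbitrary) that matrix multiplication matches composition of the induced linear maps:

import numpy as np

A = np.array([[1.0, 2.0], [0.0, 1.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
x = np.array([3.0, 4.0])

# The matrix AB acts on x exactly as "apply B, then apply A",
# i.e. multiplication in M_n(k) corresponds to composition in End(k^n).
assert np.allclose((A @ B) @ x, A @ (B @ x))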

The definition of an A-module can be found elsewhere, but to motivate its origins: it is precisely the articulation of assigning to a general k-algebra A some ‘vectors’ in a vector space V on which A can act, just as matrices act on tuples. If you struggle to remember the definition of an A-module, just think like this and you should be able to reconstruct it.

Modules can even be defined for a ring R, relaxing the requirement that the ‘vectors’ form a vector space. And so we can assign ‘vectors’ to rings by looking at R-modules. So we can ask the question: what are \mathbb{Z}-modules? These are in fact exactly the abelian groups (why?), and so the collection of possible ‘vectors’ for \mathbb{Z} can be thought of as just abelian groups.
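
A hint at the ‘why?’: in an abelian group the only possible action of n \in \mathbb{Z} on an element x is repeated addition, and this automatically satisfies the module axioms. A small Python sketch (the group \mathbb{Z}/5\mathbb{Z}, written additively, is just an illustrative choice):

# The only possible Z-action on an abelian group: n.x is x added to itself n times.
def scalar_mul(n, x, add, zero, neg):
    if n < 0:
        return scalar_mul(-n, neg(x), add, zero, neg)
    result = zero
    for _ in range(n):
        result = add(result, x)
    return result

# The abelian group Z/5Z under addition, with its operations made explicit.
add = lambda a, b: (a + b) % 5
neg = lambda a: (-a) % 5

# Check a module axiom, (m + n).x = m.x + n.x, on a range of values.
for m in range(-3, 4):
    for n in range(-3, 4):
        for x in range(5):
            lhs = scalar_mul(m + n, x, add, 0, neg)
            rhs = add(scalar_mul(m, x, add, 0, neg), scalar_mul(n, x, add, 0, neg))
            assert lhs == rhs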

I think that’s pretty nice.

It’s all local for topological vector spaces

For a vector space V and a topology \tau on the space, the pair is called a topological vector space when \{v\} is closed for all v \in V and both vector space operations, addition and scalar multiplication, are continuous.

Defining maps T_\alpha : V \longrightarrow V by T_\alpha(v) = \alpha + v where \alpha \in V , it is not hard to show that T_\alpha is in fact a homeomorphism. This gives us that the topology is uniquely determined locally, around a single point, say 0 \in V , because E \subset V is open if and only if \alpha + E is open for all \alpha \in V.

I think this is pretty neat and possibly the archetypal situation where local information determines everything globally, illustrating a guiding philosophy in modern mathematics.

Cauchy’s inequality and the appeal to symmetry

Working over the reals \mathbb{R} , a young mathematician will learn the inequality (a_1b_1 + \cdots +  a_nb_n)^2 \leq (a_1^2 + \cdots + a_n^2)(b_1^2 + \cdots + b_n^2) , which bears the names of Cauchy and Schwarz and is called the Cauchy-Schwarz inequality.

After proving this result the question of when this inequality is an equality is brought up, and the young mathematician will learn this occurs when a_i = \lambda b_i for some \lambda \in \mathbb{R} and for all i \in \{1,\cdots,n \} . Here is a neat naive proof of the inequality following the mathematicians’ philosophy of ‘look for symmetry’.

Let us look at the ‘error’ E_n = (a_1^2 + \cdots + a_n^2)(b_1^2 + \cdots + b_n^2) - (a_1b_1 + \cdots + a_nb_n)^2.

Expanding out gives us E_n = \sum_{i,j}a_i^2b_j^2 - \sum_{i,j}a_ib_ia_jb_j

And since \sum_{i,j}a_i^2b_j^2 = \frac{1}{2}\sum_{i,j}(a_i^2b_j^2 + a_j^2b_i^2). Yes I did just write this and no, I’m not insulting your intelligence. This is my appeal to symmetry.

We get E_n = \frac{1}{2}\sum_{i,j}(a_i^2b_j^2 + a_j^2b_i^2) - \sum_{i,j}a_ib_ia_jb_j = \frac{1}{2}\sum_{i,j}(a_i^2b_j^2 + a_j^2b_i^2 - 2a_ib_ia_jb_j )

And so E_n = \frac{1}{2}\sum_{i,j}(a_ib_j - a_jb_i)^2. I’ll let you finish the rest off but we’ve done all the hard work here.
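
The identity is easy to verify numerically in Python (a minimal sketch with randomly chosen vectors):

import random

n = 6
a = [random.uniform(-5, 5) for _ in range(n)]
b = [random.uniform(-5, 5) for _ in range(n)]

# E_n from the definition ...
E = sum(x * x for x in a) * sum(y * y for y in b) - sum(x * y for x, y in zip(a, b)) ** 2
# ... and from the symmetric form (1/2) * sum_{i,j} (a_i b_j - a_j b_i)^2.
E_sym = 0.5 * sum((a[i] * b[j] - a[j] * b[i]) ** 2 for i in range(n) for j in range(n))

assert abs(E - E_sym) < 1e-8
print(E, E_sym)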

A theorem by Cauchy on ‘thinning’ sequences

Just to clarify, all the numbers in this post live in \mathbb{R} . Given a sequence \{a_n\} where n \in \mathbb{N} , we have a collection of partial sums \{s_n\} indexed by \mathbb{N} and defined by s_n = a_1 + a_2 + ... + a_n . If the sequence \{s_n\} converges to s (in the usual \epsilon\text{-}N way) we say the series converges and write \sum_{n = 1}^{\infty} a_n =s. For completeness, if the sequence \{s_n\} diverges, the series is said to diverge.

If you know sequences really well, you know series really well, as every theorem about sequences can be stated in terms of series (putting a_1 = s_1 and a_n = s_n - s_{n-1} for n > 1 ). In particular, the monotone convergence theorem has an immediate counterpart for series.

Theorem: A series of non-negative real terms converges if and only if its partial sums form a bounded sequence.

I’m going to omit the proof here but it is a quick application of the monotone convergence theorem to the partial sums. So why bring this up? Well, if we impose that the terms in our series are monotonically decreasing (which can appear in applications) we can apply the following theorem of Cauchy. What is interesting about this theorem is that a ‘thin’ subsequence of \{a_n\} determines the convergence or divergence of the series.

Theorem: Suppose a_1 \geq a_2 \geq ... \geq 0 are real numbers. Then the series \sum_{n=1}^{\infty} a_n converges if and only if the series \sum_{k=0}^{\infty}2^k a_{2^k} converges.

Proof: By the previous theorem it suffices to consider only the boundedness of the partial sums. Let us write s_n = a_1 + ... + a_n and t_k = a_1 + 2a_2 + ... + 2^ka_{2^k} . We will look at two cases, when n < 2^k and when n > 2^k .

For n < 2^k we have s_n \leq a_1 + (a_2 + a_3) + ... + (a_{2^k} + ... +a_{2^{k+1} - 1}) \leq a_1 + 2a_2 + ... + 2^ka_{2^k} = t_k where the first inequality followed from n < 2^k and the second inequality from the hypothesis.

When n > 2^k we have s_n \geq a_1 + a_2 + (a_3 + a_4) + ... +(a_{2^{k-1} +1} + ... + a_{2^k}) \geq \frac{1}{2}a_1 + a_2 +2a_4 + ... + 2^{k-1}a_{2^k} = \frac{1}{2}t_k where the first inequality follows from n > 2^k and the second (you guessed it) follows from our hypothesis.

Bringing these together we conclude that the sequences \{s_n\} and \{t_k\} are either BOTH bounded or BOTH unbounded, which completes the proof.

When I came across this I thought it was pretty astounding (hence why it has made it onto the blog) so let’s see it in action. We will use it to deduce for p \in \mathbb{Z} that \sum_{n=2}^{\infty} \frac{1}{n(\log n)^p} converges if p >1 and diverges if p \leq 1 .

If p \leq 0 the terms are eventually at least \frac{1}{n} , so the series diverges by comparison with the harmonic series. For p > 0 the monotonicity of the logarithm implies that \frac{1}{n(\log n)^p} decreases, which puts us in a good position to apply our theorem. This leads us to the following, which is enough as a proof.

\sum_{k=1}^{\infty}2^k\frac{1}{2^k(\log 2^k)^p} = \sum_{k=1}^{\infty} \frac{1}{(k\log 2)^p} =\frac{1}{(\log 2)^p} \sum_{k=1}^{\infty} \frac{1}{k^p} .
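
To see the theorem doing its job numerically, here is a small Python sketch (the cut-offs 10000 and 13 are arbitrary choices) comparing the partial sums of the original series with those of the condensed series for p = 2 and p = 1 :

import math

def partial_sum(a, N):
    # Partial sums of sum_{n >= 2} a(n); we start at n = 2 since log 1 = 0.
    return sum(a(n) for n in range(2, N + 1))

def condensed_partial_sum(a, K):
    # Partial sums of the condensed series sum_k 2^k a(2^k), starting at k = 1.
    return sum((2 ** k) * a(2 ** k) for k in range(1, K + 1))

for p in (2.0, 1.0):
    a = lambda n, p=p: 1.0 / (n * math.log(n) ** p)
    # The two values need not agree; the theorem only says they are
    # bounded together (p = 2) or unbounded together (p = 1).
    print(p, partial_sum(a, 10000), condensed_partial_sum(a, 13))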

p-Sylow subgroups and why they exist

Let G be a finite group and p a prime such that the order of G is |G| = p^nm where p and m are coprime. Cauchy’s Theorem for finite groups tells us that there exists x \in G whose order is p. We also know by Lagrange’s Theorem for finite groups that given a subgroup H \leq G we have the order of H dividing the order of G.

Putting the two together we can say that the ‘most simple’ subgroup of our finite group G is the subgroup \langle x \rangle , a cyclic group isomorphic to \mathbb{Z}_p. As our goal (sorry I didn’t tell you this earlier) is to look at the subgroups of G it seems clear that the next ‘most simple’ subgroups will have order some power of our prime, i.e. p^k where 1 \leq k \leq n.

By ‘simple’ I mean that in some sense we are looking for subgroups of G whose existence and size are determined only by the knowledge that our prime in question p divides the order of G.

Enough philosophy, let us define some notions. A p-group is a finite group whose order is a power of a prime p. H is a p-subgroup of a group G if it is a subgroup and a p-group (for non-trivial H, p necessarily has to divide the order of G). What is the largest p-subgroup of G? Well, Lagrange gives us some constraints and motivates the definition of a p-Sylow subgroup. H is a p-Sylow subgroup if it is a subgroup of G of order p^n , where p^n is the highest power of p dividing the order of G.

So I hope I’ve described some motives for looking at/for p-Sylow subgroups but can we even guarantee the existence of such a subgroup of our finite group G? Also, would it count as a surprise if I said the answer was yes?

To prove such a result we will use induction on the order of G. If the order of G is prime the result follows from Cauchy’s Theorem. Now suppose we can always find a p-Sylow subgroup in every finite group whose order is divisible by p and is strictly less than that of G.

Lagrange tells us for a subgroup H \leq G we have |G| = |H|[G:H] , so if H is proper and p doesn’t divide [G:H] then p^n divides |H| and we can apply the inductive hypothesis to H ; the p-Sylow subgroup we find for H will be the one we are looking for in G.

We rule this case out and suppose that for all proper subgroups H of G the index [G:H] is divisible by p . Let G act on itself by conjugation; the action is g * h := g^{-1}hg where g, h \in G. Applying the orbit stabiliser theorem tells us that for each orbit G * x we have |G * x| = [G:G_x] where x \in G and G_x is its stabiliser. Orbits partition G , and unravelling the meaning of orbits in this action gives us the class equation |G| = |Z(G)| + \sum [G:G_x] , where the sum runs over representatives x of the orbits with more than one element and Z(G) is the abelian subgroup Z(G) = \{g \in G : hgh^{-1} =g \mbox{ for all } h \in G \} of G which is often called the center of G. I hope by now when reading these posts you have pen and paper as this is a moment when you should verify what I am claiming. Each stabiliser G_x appearing in the sum is a proper subgroup, so by the case we ruled out p divides every [G:G_x] ; since p also divides |G| , it divides |Z(G)| . Applying Cauchy’s Theorem we can find an element x \in Z(G) of order p , and since x is central, \langle x \rangle \trianglelefteq G .
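
The class equation is easy to verify by brute force for a small group; here is a Python sketch (the choice of S_3 is just illustrative) computing the conjugation orbits of the symmetric group on three letters:

from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p(q(i)); permutations stored as tuples.
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

G = list(permutations(range(3)))  # the symmetric group S_3, of order 6

def conjugacy_class(h):
    return frozenset(compose(compose(inverse(g), h), g) for g in G)

classes = set(conjugacy_class(h) for h in G)
centre = [h for h in G if len(conjugacy_class(h)) == 1]

# Class equation: |G| = |Z(G)| + sum of the sizes of the non-singleton orbits.
assert len(G) == len(centre) + sum(len(c) for c in classes if len(c) > 1)
print(len(centre), sorted(len(c) for c in classes))  # expect 1 and [1, 2, 3]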

The normal subgroup \langle x \rangle gives us a quotient group G / \langle x \rangle and a quotient map \phi : G \longrightarrow G / \langle x \rangle (if n = 1 then \langle x \rangle is already a p-Sylow subgroup, so assume n \geq 2 ). Applying the inductive hypothesis to G / \langle x \rangle gives us a p-Sylow subgroup K' of G / \langle x \rangle whose order is p^{n-1} , due to the equality |G| = |\langle x \rangle||G/ \langle x \rangle| = p|G/\langle x \rangle| from Lagrange. Looking at K = \phi^{-1}(K') , since \langle x \rangle \subset K and \phi maps K onto K' we have K' \cong K/\langle x \rangle , and applying Lagrange (again) we obtain |K|= |\langle x \rangle||K'| = p^n , as desired. We have found our p-Sylow subgroup K of G and this finishes the proof.

So we have existence, but what more can we ask for and look for? How many such subgroups can we find? Also, if we have an arbitrary p-subgroup, is it always contained in a p-Sylow subgroup? How hard is it to find such subgroups?

If you don’t know the answers to these, it is instructive to have a think about how you would go about answering them, and to try finding some p-Sylow subgroups for your favourite groups. As a hint, conjugation is very important.
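
Following that hint by hand can be fiddly, so here is a brute-force Python sketch (the choice of S_4 and the two-generator search are just illustrative assumptions) which finds a 2-Sylow subgroup of the symmetric group S_4 , i.e. a subgroup of order 8 :

from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p(q(i)); permutations stored as tuples.
    return tuple(p[q[i]] for i in range(len(q)))

def generated_subgroup(gens, identity):
    # Close a set of permutations under composition; in a finite group the
    # positive words in the generators already form the generated subgroup.
    elements = {identity}
    changed = True
    while changed:
        changed = False
        for g in list(elements):
            for h in gens:
                prod = compose(g, h)
                if prod not in elements:
                    elements.add(prod)
                    changed = True
    return elements

G = list(permutations(range(4)))  # S_4, of order 24 = 2^3 * 3
identity = tuple(range(4))

# A 2-Sylow subgroup has order 2^3 = 8; search over pairs of generators.
sylow = None
for a in G:
    for b in G:
        H = generated_subgroup([a, b], identity)
        if len(H) == 8:
            sylow = H
            break
    if sylow:
        break

print(sorted(sylow))  # a subgroup of order 8 (it turns out to be dihedral)

Conjugating the subgroup found above by elements of S_4 produces the remaining 2-Sylow subgroups, in line with the hint about conjugation.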