Understanding ring homomorphisms

Oct 2017
24
0
Sweden
I'm trying to study the chapters on ring homomorphisms and integral domains, and there is one problem in the book where I don't yet see how to approach this kind of question (I'm not used to it yet). The problem is:

Show that the set $R$ of matrices $\begin{pmatrix}
a & b \\
0 & a \\
\end{pmatrix}$ with $a,b \in \mathbb{R}$ is a subring of the ring of matrices $M_2(\mathbb{R})$; also, find a ring homomorphism $\Phi:R \rightarrow \mathbb{R}$ that is onto.

I would appreciate any help, as I am very new to ring homomorphisms.
 

Country Boy

Math Team
Jan 2015
3,261
899
Alabama
You know what a ring is, don't you? To show that the set of all matrices of the form \(\displaystyle \begin{pmatrix}a & b \\ 0 & a\end{pmatrix}\) is a ring, you need to show:
1) It is closed under matrix addition
\(\displaystyle \begin{pmatrix}p & q\\ 0 & p \end{pmatrix}+ \begin{pmatrix}r & s \\ 0 & r\end{pmatrix}= \begin{pmatrix}p+ r & q+ s \\ 0 & p+ r\end{pmatrix}\) is of that same form.
2) It contains the additive identity: \(\displaystyle \begin{pmatrix}0 & 0 \\ 0 & 0\end{pmatrix}\) is of this form with \(\displaystyle a= b= 0\).
3) Addition is associative.
\(\displaystyle \left(\begin{pmatrix}a & b \\ 0 & a \end{pmatrix}+ \begin{pmatrix}c & d \\ 0 & c\end{pmatrix}\right)+ \begin{pmatrix}e & f \\ 0 & e \end{pmatrix}= \begin{pmatrix}a+ c & b+ d \\ 0 & a+ c\end{pmatrix}+ \begin{pmatrix}e & f \\ 0 & e\end{pmatrix}= \begin{pmatrix} (a+ c)+ e & (b+ d)+ f \\ 0 & (a+ c)+ e\end{pmatrix}\)

\(\displaystyle \begin{pmatrix}a & b \\ 0 & a \end{pmatrix}+ \left(\begin{pmatrix}c & d \\ 0 & c\end{pmatrix}+ \begin{pmatrix}e & f \\ 0 & e \end{pmatrix}\right)= \begin{pmatrix}a & b \\ 0 & a\end{pmatrix}+ \begin{pmatrix}c+e & d+f \\ 0 & c+e\end{pmatrix}= \begin{pmatrix} a+ (c+ e) & b+ (d+ f) \\ 0 & a+ (c+ e)\end{pmatrix}\)

Essentially, that follows from the fact that matrix addition is "element-wise" and addition of real numbers is associative.

For multiplication, you only have to prove that
4) It is closed under matrix multiplication.
\(\displaystyle \begin{pmatrix}a & b \\ 0 & a \end{pmatrix}\begin{pmatrix}c & d \\ 0 & c \end{pmatrix}= \begin{pmatrix}ac & ad+ bc \\ 0 & ac\end{pmatrix}\) is again of that same form.

In a ring, there is not necessarily a multiplicative identity nor multiplicative inverses, so we don't need to show that. (Actually, this is a "ring with identity": \(\displaystyle \begin{pmatrix}1 & 0 \\ 0 & 1 \end{pmatrix}\), with \(\displaystyle a= 1,\ b= 0\), is the multiplicative identity. The matrix \(\displaystyle \begin{pmatrix}0 & 1 \\ 0 & 0 \end{pmatrix}\) is of this form, but does not have an inverse.)
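To see why that last matrix, call it \(\displaystyle M\), has no inverse: it squares to the zero matrix,
\(\displaystyle \begin{pmatrix}0 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix}0 & 1 \\ 0 & 0 \end{pmatrix}= \begin{pmatrix}0 & 0 \\ 0 & 0 \end{pmatrix}\),
so if \(\displaystyle N\) were an inverse, we would have \(\displaystyle M= (NM)M= NM^2= 0\), contradicting \(\displaystyle M\ne 0\).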

5) Multiplication is associative:
\(\displaystyle \left(\begin{pmatrix}a & b \\ 0 & a \end{pmatrix}\begin{pmatrix}c & d \\ 0 & c\end{pmatrix}\right)\begin{pmatrix}e & f \\ 0 & e\end{pmatrix}= \begin{pmatrix}ac & ad+ bc \\ 0 & ac\end{pmatrix}\begin{pmatrix}e & f \\ 0 & e\end{pmatrix}= \begin{pmatrix} ace & acf+ ade+ bce \\ 0 & ace\end{pmatrix}\)

A ring homomorphism that is "onto" (but not one-to-one) simply maps \(\displaystyle \begin{pmatrix} a & b \\ 0 & a\end{pmatrix}\) to \(\displaystyle a\).
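Indeed, using the sums and products computed above, this map preserves both operations: the sum \(\displaystyle \begin{pmatrix}a+ c & b+ d \\ 0 & a+ c\end{pmatrix}\) is sent to \(\displaystyle a+ c\), and the product \(\displaystyle \begin{pmatrix}ac & ad+ bc \\ 0 & ac\end{pmatrix}\) is sent to \(\displaystyle ac\), which are exactly the sum and product of the images \(\displaystyle a\) and \(\displaystyle c\).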
 

Country Boy

Math Team
Jan 2015
3,261
899
Alabama
Above I neglected the second part of "multiplication is associative".
\(\displaystyle \begin{pmatrix}a & b \\ 0 & a\end{pmatrix}\left(\begin{pmatrix}c & d \\ 0 & c\end{pmatrix}\begin{pmatrix}e & f \\ 0 & e\end{pmatrix}\right)\)\(\displaystyle = \begin{pmatrix}a & b \\ 0 & a\end{pmatrix}\begin{pmatrix}ce & cf+ de \\ 0 & ce\end{pmatrix}\)\(\displaystyle = \begin{pmatrix}ace & acf+ ade+ bce \\ 0 & ace\end{pmatrix}\).
 
Aug 2017
313
112
United Kingdom
You still need to show
2') $R$ contains additive inverses (see the check below)
6) addition is commutative
7) multiplication distributes over addition

Note that 2') and 1) immediately imply 2). Also, the fact that $R$ is a subset of the ring $M_2(\mathbb{R})$ and has the same addition/multiplication (except restricted to $R$) gives 3), 5), 6) and 7) for free.
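For 2'), the check is a one-liner: additive inverses stay in $R$, since
$$-\begin{pmatrix}a & b \\ 0 & a\end{pmatrix} = \begin{pmatrix}-a & -b \\ 0 & -a\end{pmatrix},$$
which again has equal diagonal entries and a zero in the bottom-left corner.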
 
Aug 2017
313
112
United Kingdom
This is actually never needed for unital rings:
$$a+b+a+b = (a+b)(1+1) = a(1+1) + b(1+1) = a+a+b+b$$
hence, cancelling the leading $a$ on the left and the trailing $b$ on the right (which we may do, since additive inverses exist), we have $b+a=a+b$.
Thanks for the reply - I hadn't considered that you get commutativity of + from the other conditions of a unital ring. However, in this case, as Country Boy's convention is that a ring needn't be unital, we do need to specify it in the definition. But of course, when we do happen to have a multiplicative identity, your argument means we don't need to bother checking if addition is commutative.

I've noticed a mistake in my post, though - conditions 2)' and 1) only give 2) (existence of additive identity) if we know the set is nonempty. And to show the set is nonempty, often the best way to go is to show 2) directly!
 
Oct 2009
942
367
Totally irrelevant, but it is very curious how many axioms for vector spaces are actually superfluous. The most common axiomatization has 10 axioms, but you can heavily reduce it. https://www.jstor.org/stable/3615171?seq=1#page_scan_tab_contents
 
Aug 2011
85
14
Nouakchott, Mauritania
Salam!

...
also, find a ring homomorphism $\Phi:R \rightarrow \mathbb{R}$ that is onto.
...
Any matrix \(\displaystyle M\) in \(\displaystyle R\) can be written as \(\displaystyle M=\begin{pmatrix}a&b\\0&a\end{pmatrix}\) where \(\displaystyle a,b\in\mathbb R\). So we can define the following map:

\(\displaystyle
\begin{array}{cccc}
\Phi:&R &\rightarrow&\mathbb{R}\\
&M=\begin{pmatrix}a&b\\0&a\end{pmatrix}&\mapsto& \Phi(M)=a\end{array}\).

Now you can try to check that:
- \(\displaystyle \Phi(M+N)=\Phi(M)+\Phi(N)\)
- \(\displaystyle \Phi(MN)=\Phi(M)\Phi(N)\)
- \(\displaystyle \Phi(1_R)=1\) (some references don't require this condition)
for all \(\displaystyle M,N\in R\).
If these hold, \(\displaystyle \Phi\) will be a ring homomorphism.
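For instance, the third condition holds because the identity of \(\displaystyle R\) is \(\displaystyle 1_R=I_2\) (the case \(\displaystyle a=1,\ b=0\)), so \(\displaystyle \Phi(1_R)=\Phi\begin{pmatrix}1&0\\0&1\end{pmatrix}=1\). The first two conditions follow in the same way from the sum and product formulas worked out earlier in the thread.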

Finally, \(\displaystyle \Phi\) is a surjective (onto) map because:
\(\displaystyle \forall a\in\mathbb{R},\ \exists M=aI_2\in R,\quad\Phi(M)=a.\)