The sum of m x n matrices A = (a_{ij}) and B = (b_{ij}) is the m x n matrix C = (c_{ij}),
whose entries are c_{ij} = a_{ij} + b_{ij}.
The product cA of a matrix A = (a_{ij}) and a number c is the matrix (ca_{ij}).
In particular, −B = (−b_{ij}) is the negative of a matrix B = (b_{ij}), and the difference of matrices is defined as A − B = A + (−B).
The product AB (in this order) of matrices A_{m x n} = (a_{ik}) and B_{n x p} = (b_{kj}) is the m x p matrix C = (c_{ij}), where

c_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + ··· + a_{in}b_{nj} = Σ_{k=1}^{n} a_{ik}b_{kj},

that is,

AB =  a_{11}b_{11} + a_{12}b_{21} + ··· + a_{1n}b_{n1}   a_{11}b_{12} + ··· + a_{1n}b_{n2}   ···
      a_{21}b_{11} + a_{22}b_{21} + ··· + a_{2n}b_{n1}   a_{21}b_{12} + ··· + a_{2n}b_{n2}   ···
      :                                                  :
      a_{m1}b_{11} + a_{m2}b_{21} + ··· + a_{mn}b_{n1}   a_{m1}b_{12} + ··· + a_{mn}b_{n2}   ···
Note that the product AB is defined only when the number of columns of A equals the
number of rows of B.
It is possible that AB is defined while BA is not.
Example 2: Multiplication of matrices
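The entrywise product formula c_{ij} = Σ_{k} a_{ik}b_{kj} can be checked with a short numpy sketch; the matrices and their values here are made up for illustration:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # 2 x 3
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])         # 3 x 2

m, n = A.shape
n2, p = B.shape
assert n == n2  # product defined only when columns of A = rows of B

# Compute C entry by entry, following c_ij = sum_k a_ik * b_kj.
C = np.zeros((m, p), dtype=int)
for i in range(m):
    for j in range(p):
        for k in range(n):
            C[i, j] += A[i, k] * B[k, j]

print(C)
print(np.array_equal(C, A @ B))  # matches numpy's built-in product
```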
If AB = BA, we say that A and B commute. For both products to be defined and to have the same size, A and B must be square matrices of the same dimension n x n.
Proposition 1 When the operations below are defined, we have
1) A + B = B + A
2) (A + B) + C = A + (B + C)
3) A + O = A
4) A + (−A) = O
5) λ(µA) = (λµ)A
6) (λ + µ)A = λA + µA
7) λ(A + B) = λA + λB
8) 1· A = A
9) (AB)C = A (BC)
10) A(B + C) = AB + AC
11) (A + B)C = AC + BC
12) λ(AB) = (λA)B = A(λB)
13) IA = A
14) AI = A
Proof. Follows from the previous definitions of matrix operations.
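A few of the identities in Proposition 1 can be spot-checked numerically; this numpy sketch uses arbitrary random integer matrices of compatible sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(2, 3))
B = rng.integers(-5, 5, size=(3, 4))
C = rng.integers(-5, 5, size=(4, 2))
lam, mu = 3, -2

# 9) associativity of the product
assert np.array_equal((A @ B) @ C, A @ (B @ C))
# 12) a scalar moves freely through a product
assert np.array_equal(lam * (A @ B), (lam * A) @ B)
# 6) (lam + mu)A = lam*A + mu*A
assert np.array_equal((lam + mu) * A, lam * A + mu * A)
# 13)-14) identity matrices of matching size
assert np.array_equal(np.eye(2, dtype=int) @ A, A)
assert np.array_equal(A @ np.eye(3, dtype=int), A)
print("all sampled identities hold")
```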
Notice! If AB = O, it does not necessarily follow that A = O or B = O:
Example. Let

A =  0  1        B =  1  0
     0  0 ,           0  0 .

Then AB = O, although A ≠ O and B ≠ O (note also that BA ≠ O).
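A hedged numpy check with two concrete nonzero 2 x 2 matrices whose product is the zero matrix (the particular matrices are just one standard choice):

```python
import numpy as np

A = np.array([[0, 1],
              [0, 0]])
B = np.array([[1, 0],
              [0, 0]])

print(A @ B)   # the 2x2 zero matrix, even though A and B are nonzero
print(B @ A)   # not zero: the order of the factors matters
```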
The transpose or the transposed matrix of an m x n matrix A = (a_{ij }) is the n x m matrix A^{T} = (a_{ji }), that is, the rows of A^{T} are the columns of A and the columns of A^{T} are the rows of A.
A is symmetric if A^{T} = A, and skew symmetric if A^{T} = −A.
Proposition 2
1) (A^{T})^{T} = A
2) (A + B)^{T} = A^{T} + B^{T}
3) (λA)^{T} = λA^{T}
4) (AB)^{T} = B^{T}A^{T}
Problem: Show that a square matrix A can be written as the sum of a symmetric and a skew symmetric matrix: A = ½(A + A^{T}) + ½(A − A^{T}).
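The decomposition in the problem can be verified directly in numpy; the matrix A below is an arbitrary example:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])

S = 0.5 * (A + A.T)   # symmetric part:       S^T = S
K = 0.5 * (A - A.T)   # skew symmetric part:  K^T = -K

assert np.array_equal(S, S.T)
assert np.array_equal(K, -K.T)
assert np.array_equal(S + K, A)   # they sum back to A
```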
In applications (for example, in signal processing) one often needs real symmetric Toeplitz matrices (for example, the autocorrelation matrix)

R =  r(0)    r(1)    r(2)    ···  r(n−1)
     r(1)    r(0)    r(1)    ···  r(n−2)
     r(2)    r(1)    r(0)    ···  r(n−3)
     :                            :
     r(n−1)  r(n−2)  r(n−3)  ···  r(0)
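Such a matrix has R[i, j] = r(|i − j|), which makes it easy to construct; in this numpy sketch the autocorrelation values r(0), …, r(n−1) are made up:

```python
import numpy as np

r = np.array([4.0, 2.0, 1.0, 0.5])   # hypothetical autocorrelation lags
n = len(r)

# R[i, j] = r(|i - j|): constant along every diagonal (Toeplitz), symmetric.
i, j = np.indices((n, n))
R = r[np.abs(i - j)]

assert np.array_equal(R, R.T)        # symmetric
assert (np.diag(R) == r[0]).all()    # constant main diagonal r(0)
print(R)
```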
If A = (a_{ij}) is a complex matrix, then the conjugate matrix of A is Ā = (ā_{ij}), obtained by conjugating every entry.
A square matrix A is a Hermite matrix (Hermitian) if A^{T} = Ā, and a skew Hermite matrix (skew Hermitian) if A^{T} = −Ā.
The Hermite matrix of A is A^{H} = Ā^{T}. If A = A^{H}, then A is Hermitian.
The diagonal elements of a Hermite matrix are real, because a_{ii} = ā_{ii}. The diagonal elements of a skew Hermite matrix are purely imaginary or zero, because a_{ii} = −ā_{ii}. Hermitian symmetry generalizes the notion of symmetry to complex matrices.
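These diagonal properties can be illustrated in numpy; any matrix of the form M + M^{H} is Hermitian and M − M^{H} is skew Hermitian, with M below chosen arbitrarily:

```python
import numpy as np

M = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j]])

A = M + M.conj().T   # Hermitian: A^H = A
K = M - M.conj().T   # skew Hermitian: K^H = -K

assert np.array_equal(A, A.conj().T)
assert np.allclose(A.diagonal().imag, 0)   # Hermitian diagonal is real
assert np.allclose(K.diagonal().real, 0)   # skew Hermitian diagonal is purely imaginary or zero
```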
Matrix B is the inverse of a matrix A if AB = BA = I.
Proposition 3. If A has an inverse, the inverse is unique.
Proof. Let B and B' be inverses of a matrix A, that is,
AB = BA = I, and AB' = B'A = I.
But it now follows that B = IB = (B'A)B = B' (AB) = B'I = B'.
If the inverse of A exists, it is denoted by A^{−1} and we say that A is regular. If A does not have an inverse, it is singular.
It can be shown that for square matrices A and B, AB = I already implies BA = I.
Therefore, to prove that B is the inverse of A, it suffices to show that AB = I.
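A quick numerical illustration with numpy; the matrix A is an arbitrary invertible example:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 1.]])
B = np.linalg.inv(A)

# For square matrices, checking AB = I suffices: BA = I follows.
assert np.allclose(A @ B, np.eye(2))
assert np.allclose(B @ A, np.eye(2))
print(B)
```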
Proposition 4. If A and B are regular and λ ≠ 0, then
1) (A^{−1})^{−1} = A
2) (λA)^{−1} = (1/λ)A^{−1}
3) (AB)^{−1} = B^{−1}A^{−1}
4) (A^{T})^{−1} = (A^{−1})^{T}
Proof. 4): Denote B = (A^{−1})^{T}. We have to show that A^{T}B = BA^{T} = I:
A^{T}B = A^{T}(A^{−1})^{T} = (A^{−1}A)^{T} = I^{T} = I
BA^{T} = (A^{−1})^{T}A^{T} = (AA^{−1})^{T} = I^{T} = I
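Items 3) and 4) of Proposition 4 can be spot-checked numerically; random Gaussian matrices are invertible with probability one, so this numpy sketch uses them:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# 3) (AB)^{-1} = B^{-1} A^{-1}  (note the reversed order)
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))
# 4) (A^T)^{-1} = (A^{-1})^T
assert np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T)
print("inverse identities hold on this sample")
```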
A is orthogonal if it is real and A^{T} = A^{−1}.
If A and B are orthogonal, then so is AB.
Proof. (AB)^{T} = B^{T}A^{T} = B^{−1}A^{−1} = (AB)^{−1}.
An orthogonal mapping (matrix) preserves norms: ||Ax||² = (Ax)^{T}(Ax) = x^{T}A^{T}Ax = x^{T}x = ||x||².
More generally: an m x n matrix U is orthogonal if U^{T}U = I (column orthogonal, i.e. its columns are orthonormal).
Example 7: Rotation and reflection matrices
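A plane rotation and a reflection are standard examples of orthogonal matrices; this numpy sketch (with an arbitrary angle) checks both the orthogonality condition and norm preservation:

```python
import numpy as np

theta = 0.7   # arbitrary rotation angle in radians
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation
H = np.array([[1.,  0.],
              [0., -1.]])                          # reflection in the x-axis

assert np.allclose(R.T @ R, np.eye(2))   # orthogonal: R^T = R^{-1}
assert np.allclose(H.T @ H, np.eye(2))

x = np.array([3.0, -4.0])
# An orthogonal mapping preserves the norm of every vector.
assert np.isclose(np.linalg.norm(R @ x), np.linalg.norm(x))
print(np.linalg.norm(R @ x))   # 5.0, same as ||x||
```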
A is unitary if Ā^{T} = A^{−1}, that is, if A^{−1} = A^{H}. Unitary matrices are the complex counterpart of (real) orthogonal matrices.
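A small complex example of a unitary matrix, checked in numpy (the particular matrix is just one convenient choice):

```python
import numpy as np

# U = (1/sqrt(2)) [[1, i], [i, 1]] satisfies U^H U = I, so U^{-1} = U^H.
U = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)

assert np.allclose(U.conj().T @ U, np.eye(2))   # unitary
assert np.allclose(np.linalg.inv(U), U.conj().T)
```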
Often it is useful to partition matrices into smaller ones, called blocks, for example

A =  A_{11}  A_{12}
     A_{21}  A_{22}

where each block A_{ij} is itself a matrix.
Proposition 5. Let matrices A and B be partitioned as follows:

A =  A_{11}  ···  A_{1n}          B =  B_{11}  ···  B_{1m}
     :            :                    :            :
     A_{p1}  ···  A_{pn}               B_{r1}  ···  B_{rm}

where the block A_{ij} is an s_{i} x t_{j} matrix and B_{ij} is a u_{i} x v_{j} matrix. Then
1) λA = (λA_{ij}), that is, a scalar multiplies every block.
2) A^{T} = (A_{ji}^{T}), that is, the block rows and columns are interchanged and each block is transposed.
3) If p = r, n = m and s_{i} = u_{i}, t_{j} = v_{j} for all i, j, then

A + B =  C_{11}  ···  C_{1n}
         :            :
         C_{p1}  ···  C_{pn}

where C_{ij} = A_{ij} + B_{ij}.
4) If n = r and t_{j} = u_{j} for all j, then

AB =  C_{11}  ···  C_{1m}
      :            :
      C_{p1}  ···  C_{pm}

where C_{ij} = Σ_{k=1}^{n} A_{ik}B_{kj}.
Proof. Straightforward calculation.
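Item 4) can be demonstrated with numpy: partition A and B into compatible blocks, multiply blockwise, and compare with the ordinary product. The block sizes below are an arbitrary compatible choice:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.integers(0, 5, size=(4, 6))   # split into 2x2 blocks: rows (2,2), cols (3,3)
B = rng.integers(0, 5, size=(6, 3))   # split into 2x1 blocks: rows (3,3)

A11, A12 = A[:2, :3], A[:2, 3:]
A21, A22 = A[2:, :3], A[2:, 3:]
B1, B2 = B[:3, :], B[3:, :]

# Blockwise product: C_ij = sum_k A_ik B_kj, assembled with np.block.
C = np.block([[A11 @ B1 + A12 @ B2],
              [A21 @ B1 + A22 @ B2]])

assert np.array_equal(C, A @ B)   # agrees with the ordinary matrix product
```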