A vector space V is a normed space if there is a norm || || : V -> R_{+} = [0, ∞), defined for elements of V, satisfying:
(1) ||x|| = 0 <=> x = 0,
(2) ||x + y|| <= ||x|| + ||y||, x, y ∈ V,
(3) ||ax|| = |a| ||x||, x ∈ V, a ∈ K (= R or C).
Proposition 1. If dim V = n < ∞, then all norms on V are equivalent, i.e., if || || and || ||' are norms defined on V, then there exist a, b > 0 such that
a ||x|| <= ||x||' <= b ||x|| for all x ∈ V.
The set M of all m x n matrices is a finite-dimensional vector space. The following are all norms on M; here A = (a_{ij}) is an m x n matrix:
||A||_{1} = max_{1<=j<=n} Σ_{i=1}^{m} |a_{ij}|, the maximum of absolute column sums,
||A||_{∞} = max_{1<=i<=m} Σ_{j=1}^{n} |a_{ij}|, the maximum of absolute row sums,
||A||_{2} = sqrt( λ_{max}(A^{T}A) ) (the spectral norm),
where λ_{max}(A^{T}A) is the largest eigenvalue of the matrix A^{T}A, and
||A||_{Fr} = ( Σ_{i,j} |a_{ij}|^{2} )^{1/2}, the Frobenius norm.
The first three are special cases of an operator norm of the form
(4) ||A|| = max_{x ≠ 0} [ ||Ax|| / ||x|| ],
where || || is a norm defined on K^{n}. (The Frobenius norm is not of this form: (4) always gives ||I|| = 1, whereas ||I||_{Fr} = sqrt(n).)
Note. All the norms defined above satisfy ||AB|| <= ||A|| ||B|| (submultiplicativity).
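The entrywise norms above can be evaluated directly from their definitions. The following sketch (plain Python, with an arbitrary 2 x 2 example matrix) computes ||A||_{1}, ||A||_{∞} and ||A||_{Fr} and checks submultiplicativity for the 1-norm:

```python
import math

def norm_1(A):
    # maximum absolute column sum
    return max(sum(abs(A[i][j]) for i in range(len(A))) for j in range(len(A[0])))

def norm_inf(A):
    # maximum absolute row sum
    return max(sum(abs(x) for x in row) for row in A)

def norm_fr(A):
    # Frobenius norm: square root of the sum of squared entries
    return math.sqrt(sum(x * x for row in A for x in row))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# arbitrary example matrices
A = [[1.0, -2.0], [3.0, 4.0]]
B = [[0.0, 1.0], [-1.0, 2.0]]

print(norm_1(A), norm_inf(A))   # 6.0 7.0
# submultiplicativity ||AB|| <= ||A|| ||B||, here for the 1-norm:
print(norm_1(matmul(A, B)) <= norm_1(A) * norm_1(B))  # True
```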
In numerical computations one often needs the condition number of a regular (invertible) matrix A:
cond(A) = ||A|| ||A^{-1}||.
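For a 2 x 2 matrix the inverse is available in closed form, so the condition number in, say, the ∞-norm can be computed by hand; a minimal sketch (the matrix is an arbitrary example):

```python
def norm_inf(A):
    # maximum absolute row sum
    return max(sum(abs(x) for x in row) for row in A)

def inv2(A):
    # inverse of a 2 x 2 matrix via the adjugate formula
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 2.0], [3.0, 4.0]]
cond = norm_inf(A) * norm_inf(inv2(A))   # cond(A) = ||A|| ||A^{-1}||
print(cond)  # 21.0
```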
A sequence x_{1}, ..., x_{n}, ... of vectors of V converges to x ∈ V in the norm || || if
||x_{n} - x|| -> 0 as n -> ∞.
The convergence of a sequence of matrices is defined in the same manner.
It follows from Proposition 1 that if dim V = n < ∞ (in particular, if V = R^{n} or V = M) and if x_{n} -> x with respect to a norm || ||, then x_{n} -> x also in every other norm defined on V (equivalence of norms).
A matrix series Σ_{k=0}^{∞} A_{k}, A_{k} ∈ M, converges (in a matrix norm) if the sequence of partial sums
S_{N} = Σ_{k=0}^{N} A_{k}
converges. It can be shown that A_{k} = (a^{(k)}_{ij}) -> A = (a_{ij}) <=> a^{(k)}_{ij} -> a_{ij} for all i, j as k -> ∞.
Let A be a square matrix. We define the matrix e^{A} by setting
(5) e^{A} = Σ_{k=0}^{∞} (1/k!) A^{k}.
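Definition (5) makes sense for every square matrix, diagonalizable or not. For instance, for a nilpotent matrix N with N^{2} = 0 the series terminates after two terms, so e^{N} = I + N; a small sketch summing the series directly:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

N = [[0.0, 1.0], [0.0, 0.0]]   # nilpotent: N^2 = 0
S = [[1.0, 0.0], [0.0, 1.0]]   # k = 0 term of (5): the identity
P = [[1.0, 0.0], [0.0, 1.0]]   # running term N^k / k!
for k in range(1, 10):
    P = matmul(P, N)
    P = [[x / k for x in row] for row in P]
    S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]
print(S)  # [[1.0, 1.0], [0.0, 1.0]], i.e. e^N = I + N
```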
If A is diagonalizable, i.e., if A = TDT^{-1} where D = diag(λ_{1}, ..., λ_{n}), then
A^{k} = TD^{k}T^{-1} = T diag(λ_{1}^{k}, ..., λ_{n}^{k}) T^{-1},
so
Σ_{k=0}^{N} (1/k!) A^{k} = T [ diag( Σ_{k=0}^{N} λ_{1}^{k}/k!, ..., Σ_{k=0}^{N} λ_{n}^{k}/k! ) ] T^{-1}.
Since
Σ_{k=0}^{N} λ_{i}^{k}/k! -> e^{λ_{i}} as N -> ∞, i = 1, ..., n,
it follows that
|| Σ_{k=0}^{N} (1/k!) A^{k} - Te^{D}T^{-1} ||
= || T [ diag( Σ_{k=0}^{N} λ_{1}^{k}/k!, ..., Σ_{k=0}^{N} λ_{n}^{k}/k! ) ] T^{-1} - Te^{D}T^{-1} ||
= || T [ diag( Σ_{k=0}^{N} λ_{1}^{k}/k!, ..., Σ_{k=0}^{N} λ_{n}^{k}/k! ) - e^{D} ] T^{-1} ||
<= ||T|| || diag( Σ_{k=0}^{N} λ_{1}^{k}/k!, ..., Σ_{k=0}^{N} λ_{n}^{k}/k! ) - e^{D} || ||T^{-1}||
-> 0, as N -> ∞.
Hence: If A = TDT^{-1}, then
e^{A} = Te^{D}T^{-1} = T diag(e^{λ_{1}}, ..., e^{λ_{n}}) T^{-1}.
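The identity e^{A} = Te^{D}T^{-1} can be checked numerically against a truncated series (5); in the sketch below T, T^{-1} and D are chosen by hand so that A = TDT^{-1} is known exactly:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# hand-picked example: A = T D T^{-1} with eigenvalues 1 and 2
T    = [[1.0, 1.0], [0.0, 1.0]]
Tinv = [[1.0, -1.0], [0.0, 1.0]]
D    = [[1.0, 0.0], [0.0, 2.0]]
A = matmul(T, matmul(D, Tinv))          # = [[1, 1], [0, 2]]

# e^A from the diagonalization formula T diag(e^{lambda_i}) T^{-1}
eD = [[math.e, 0.0], [0.0, math.e ** 2]]
expA = matmul(T, matmul(eD, Tinv))

# e^A from the truncated series sum_{k=0}^{N} A^k / k!
S = [[1.0, 0.0], [0.0, 1.0]]
P = [[1.0, 0.0], [0.0, 1.0]]
for k in range(1, 25):
    P = matmul(P, A)
    P = [[x / k for x in row] for row in P]
    S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]

err = max(abs(S[i][j] - expA[i][j]) for i in range(2) for j in range(2))
print(err < 1e-9)  # True
```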
The solution of the differential equation x'(t) = Ax(t) (A need not be diagonalizable) with the initial condition x(0) = x_{0} is
x(t) = e^{tA} x_{0}.
Here e^{tA} is the so-called state-transition matrix. If a control u(t) is included, i.e., when
x'(t) = Ax(t) + Bu(t), x(0) = x_{0},
then (prove!)
x(t) = e^{tA} x_{0} + ∫_{0}^{t} e^{(t-s)A} B u(s) ds.
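As a quick consistency check of this variation-of-constants formula, take the scalar case n = 1 with a constant control u(t) = 1, where the integral evaluates in closed form to (b/a)(e^{ta} - 1); the coefficients below are arbitrary example values:

```python
import math

# scalar case: x'(t) = a x(t) + b u(t), u(t) = 1, so the formula gives
# x(t) = e^{ta} x0 + (b/a)(e^{ta} - 1)
a, b, x0 = 0.5, 2.0, 1.0

def x(t):
    return math.exp(t * a) * x0 + (b / a) * (math.exp(t * a) - 1.0)

t, h = 1.0, 1e-6
lhs = (x(t + h) - x(t - h)) / (2 * h)   # central-difference estimate of x'(t)
rhs = a * x(t) + b                      # right-hand side of the ODE
print(abs(lhs - rhs) < 1e-6)  # True
```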