Linear Algebra
Linear Operators and Matrices¶
A linear operator is, once bases are fixed, just a matrix. Suppose $A : V \rightarrow W$ is any function that is linear in its inputs,
$$A\Big(\sum_i a_i |v_i\rangle\Big) = \sum_i a_i A(|v_i\rangle).$$
Fixing bases $|v_1\rangle, \dots, |v_m\rangle$ for $V$ and $|w_1\rangle, \dots, |w_n\rangle$ for $W$, the operator is completely determined by the matrix entries $A_{ji}$ defined through $A|v_i\rangle = \sum_j A_{ji} |w_j\rangle$.
Inner Products¶
An inner product is an operation $(\cdot\,,\cdot) : V \times V \rightarrow \mathbb{C}$, written $\langle v|w\rangle$ in Dirac notation, that satisfies:
- linearity in the second argument: $\langle v|\sum_i \lambda_i w_i\rangle = \sum_i \lambda_i \langle v|w_i\rangle$,
- conjugate symmetry: $\langle v|w\rangle = \langle w|v\rangle^*$,
- positivity: $\langle v|v\rangle \geq 0$, with equality iff $|v\rangle = 0$.
In finite dimensions, an inner product space, i.e., a vector space equipped with an inner product defined for all pairs of vectors, is exactly the same thing as a Hilbert space.
Consider $\mathbb{C}^n$ with $\langle v|w\rangle \equiv \sum_i v_i^* w_i = v^\dagger w$; this is the standard example.
Norm of a vector¶
We can say that the norm of a vector is $\||v\rangle\| \equiv \sqrt{\langle v|v\rangle}$; a unit vector has norm $1$, and any nonzero $|v\rangle$ can be normalized as $|v\rangle / \||v\rangle\|$.
A set of vectors $\{|i\rangle\}$ is orthonormal if each vector is a unit vector and distinct vectors are orthogonal, i.e., $\langle i|j\rangle = \delta_{ij}$.
Gram–Schmidt: constructing an orthonormal basis¶
Given a linearly independent set $|w_1\rangle, \dots, |w_d\rangle$, set $|v_1\rangle = |w_1\rangle / \||w_1\rangle\|$ and, for each $k$, subtract off the components along the vectors already built before normalizing:
$$|v_{k+1}\rangle = \frac{|w_{k+1}\rangle - \sum_{i=1}^{k} \langle v_i|w_{k+1}\rangle\, |v_i\rangle}{\big\|\, |w_{k+1}\rangle - \sum_{i=1}^{k} \langle v_i|w_{k+1}\rangle\, |v_i\rangle \,\big\|}.$$
The resulting $|v_1\rangle, \dots, |v_d\rangle$ form an orthonormal basis for the same span.
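A minimal numpy sketch of the procedure; the function name and the example vectors are my own illustrative choices:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize the rows of `vectors` (assumed linearly independent)."""
    basis = []
    for w in vectors:
        # Subtract the projection onto every basis vector found so far.
        v = w - sum(np.vdot(b, w) * b for b in basis)
        basis.append(v / np.linalg.norm(v))
    return np.array(basis)

# Example: three linearly independent vectors.
W = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
V = gram_schmidt(W)
print(np.allclose(V @ V.conj().T, np.eye(3)))  # True: rows are orthonormal
```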
Outer Product¶
The outer product $|w\rangle\langle v|$ is the linear operator defined by $(|w\rangle\langle v|)\,|v'\rangle \equiv \langle v|v'\rangle\, |w\rangle$.
- From this notion we obtain the completeness relation, $\sum_i |i\rangle\langle i| = I$, valid for any orthonormal basis $\{|i\rangle\}$.
- Cauchy–Schwarz inequality: $|\langle v|w\rangle|^2 \leq \langle v|v\rangle \langle w|w\rangle$, with equality iff $|v\rangle$ and $|w\rangle$ are linearly dependent; the completeness relation gives a quick proof.
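A quick numerical check of the completeness relation; the QR decomposition of a random complex matrix is just a convenient way to generate an orthonormal basis:

```python
import numpy as np

d = 3
# Columns of a random unitary form an orthonormal basis.
Q, _ = np.linalg.qr(np.random.randn(d, d) + 1j * np.random.randn(d, d))
basis = [Q[:, i] for i in range(d)]

# Completeness relation: sum_i |i><i| = I.
P = sum(np.outer(v, v.conj()) for v in basis)
print(np.allclose(P, np.eye(d)))  # True
```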
Hilbert Space¶
A Hilbert space $\mathcal{H}$ is a complete inner product space; in the finite-dimensional setting used throughout these notes it is just $\mathbb{C}^d$ with the standard inner product.
For any orthonormal basis $\{|i\rangle\}$ of $\mathcal{H}$, every vector expands as $|v\rangle = \sum_i \langle i|v\rangle\, |i\rangle$.
Eigenvectors and Eigenvalues¶
Under a given linear transformation $A$, certain nonzero vectors are merely scaled: $A|v\rangle = \lambda |v\rangle$.
All such vectors are referred to as eigenvectors, and the corresponding scalars $\lambda$ as eigenvalues; the eigenvalues are the roots of the characteristic polynomial $\det(A - \lambda I) = 0$.
If all eigenvalues of $A$ are distinct, the corresponding eigenvectors are linearly independent and $A$ is diagonalizable.
Eigenspace¶
It is the space of all vectors with a given eigenvalue $\lambda$, i.e., the kernel of $A - \lambda I$. An eigenspace of dimension greater than one is called degenerate.
Adjoints and Hermitian Conjugation¶
Suppose $A$ is a linear operator on a Hilbert space $V$. There exists a unique linear operator $A^\dagger$ on $V$ such that for all $|v\rangle, |w\rangle \in V$,
$$(|v\rangle, A|w\rangle) = (A^\dagger |v\rangle, |w\rangle).$$
This operator is called the adjoint or Hermitian conjugate of the operator $A$; in matrix terms, $A^\dagger = (A^*)^T$, the conjugate transpose.
- We have $(AB)^\dagger = B^\dagger A^\dagger$, $(A^\dagger)^\dagger = A$, and $(A|v\rangle)^\dagger = \langle v| A^\dagger$.
Some definitions¶
- Normal matrices: $AA^\dagger = A^\dagger A$.
- Hermitian matrices: $A = A^\dagger$.
- Unitary matrices: $U^\dagger U = UU^\dagger = I$.
- A normal matrix is Hermitian if and only if it has real eigenvalues.
- If $\langle v|A|v\rangle \geq 0$ for all $|v\rangle$, then $A$ is positive semi-definite and has non-negative eigenvalues; if the inequality is strict for all $|v\rangle \neq 0$, $A$ is positive definite with strictly positive eigenvalues.

These definitions are easy to test numerically, as in the sketch below.
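A small numpy sketch of these checks (the helper names and the default tolerance of `np.allclose` are assumptions on my part):

```python
import numpy as np

def is_normal(A):    return np.allclose(A @ A.conj().T, A.conj().T @ A)
def is_hermitian(A): return np.allclose(A, A.conj().T)
def is_unitary(A):   return np.allclose(A.conj().T @ A, np.eye(A.shape[0]))

X = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
H = X + X.conj().T   # Hermitian by construction
P = X.conj().T @ X   # positive semi-definite (hence Hermitian)

print(is_hermitian(H), is_normal(H))             # True True
print(np.all(np.linalg.eigvalsh(P) >= -1e-12))   # eigenvalues non-negative
```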
Some properties¶
- If a Hermitian matrix has non-negative eigenvalues then it is positive semi-definite (strictly positive eigenvalues give positive definiteness).
- If an operator has the form $A^\dagger A$ for some operator $A$, then it is both Hermitian and positive semi-definite.
- All positive semi-definite operators are Hermitian, by definition.
Spectral Decomposition¶
Theorem (spectral decomposition): a linear operator is diagonalizable with respect to some orthonormal basis if and only if it is normal, in which case it can be written $M = \sum_i \lambda_i |i\rangle\langle i|$ with eigenvalues $\lambda_i$ and orthonormal eigenvectors $|i\rangle$.
Some notes and a derivation regarding the above follow.
Matrices and Vectors¶
In the following statements we are dealing with finite-dimensional complex vector spaces with orthonormal bases, so operators and their matrices can be used interchangeably.
To represent an operator $A$ as a matrix in an orthonormal basis $\{|i\rangle\}$, take $A_{ij} = \langle i|A|j\rangle$, so that $A = \sum_{ij} A_{ij} |i\rangle\langle j|$.
Diagonalization of any normal matrix takes the form
$$A = U D U^\dagger,$$
where $U$ is unitary (its columns are orthonormal eigenvectors of $A$) and $D$ is diagonal, carrying the eigenvalues.
If $A$ is Hermitian the diagonal entries of $D$ are real, and if $A$ is unitary they have unit modulus.
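A numerical illustration with a Hermitian matrix, the most common normal case, since `np.linalg.eigh` returns an orthonormal eigenbasis directly:

```python
import numpy as np

X = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
A = X + X.conj().T                   # Hermitian, hence normal

eigvals, U = np.linalg.eigh(A)       # A = U diag(eigvals) U^dagger
D = np.diag(eigvals)

print(np.allclose(A, U @ D @ U.conj().T))       # True
print(np.allclose(U.conj().T @ U, np.eye(4)))   # U is unitary
```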
Tensor Products¶
The tensor product puts vector spaces together: $V \otimes W$ has dimension $\dim V \cdot \dim W$, with elements of the form $\sum_i a_i |v_i\rangle \otimes |w_i\rangle$. In coordinates the tensor product is the Kronecker product, whose block form is $A \otimes B = [A_{ij} B]$.
Linear Product¶
Inner Product¶
The inner product on $V \otimes W$ is inherited factor-wise:
$$\Big(\sum_i a_i |v_i\rangle \otimes |w_i\rangle,\; \sum_j b_j |v'_j\rangle \otimes |w'_j\rangle\Big) = \sum_{ij} a_i^* b_j\, \langle v_i|v'_j\rangle \langle w_i|w'_j\rangle.$$
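A short numpy sketch of both facts, with arbitrary example vectors (`np.kron` computes the Kronecker product):

```python
import numpy as np

v, w = np.array([1.0, 2.0]), np.array([3.0, 4.0, 5.0])
vp, wp = np.array([0.0, 1.0]), np.array([1.0, 1.0, 0.0])

# Tensor product of vectors (and of matrices) is the Kronecker product.
vw = np.kron(v, w)

# Inner product factorizes: <v(x)w, v'(x)w'> = <v,v'><w,w'>.
lhs = np.vdot(vw, np.kron(vp, wp))
rhs = np.vdot(v, vp) * np.vdot(w, wp)
print(np.isclose(lhs, rhs))  # True
```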
Trace¶
Properties of the trace:
- $\mathrm{tr}(A) = \sum_i \langle i|A|i\rangle = \sum_i A_{ii}$ for any orthonormal basis $\{|i\rangle\}$; the value is basis-independent.
- Linearity: $\mathrm{tr}(A + B) = \mathrm{tr}(A) + \mathrm{tr}(B)$ and $\mathrm{tr}(zA) = z\,\mathrm{tr}(A)$.
- Cyclic property: $\mathrm{tr}(AB) = \mathrm{tr}(BA)$.

The above properties yield some useful implications:
- Similarity invariance, $\mathrm{tr}(UAU^\dagger) = \mathrm{tr}(A)$ for unitary $U$, follows from the cyclic property.
- $\mathrm{tr}(A|\psi\rangle\langle\psi|) = \langle\psi|A|\psi\rangle$ for any unit vector $|\psi\rangle$.
- $\mathrm{tr}(A) = \sum_i \lambda_i$, the sum of the eigenvalues of $A$ counted with algebraic multiplicities.
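A numpy sketch verifying the main properties on random matrices:

```python
import numpy as np

A = np.random.randn(3, 3) + 1j * np.random.randn(3, 3)
B = np.random.randn(3, 3) + 1j * np.random.randn(3, 3)

# Cyclic property: tr(AB) = tr(BA).
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))            # True

# Basis independence: tr(U A U^dagger) = tr(A) for unitary U.
U, _ = np.linalg.qr(np.random.randn(3, 3) + 1j * np.random.randn(3, 3))
print(np.isclose(np.trace(U @ A @ U.conj().T), np.trace(A)))   # True

# tr(A |psi><psi|) = <psi| A |psi> for a unit vector psi.
psi = np.random.randn(3) + 1j * np.random.randn(3)
psi /= np.linalg.norm(psi)
print(np.isclose(np.trace(A @ np.outer(psi, psi.conj())),
                 psi.conj() @ A @ psi))                        # True
```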
Partial Trace¶
Entanglement excludes the possibility of associating state vectors with individual subsystems. We therefore introduce density matrices and the corresponding idea of reduction, performed with the partial trace: on a composite space $\mathcal{H}_A \otimes \mathcal{H}_B$,
$$\mathrm{tr}_B\big(|a_1\rangle\langle a_2| \otimes |b_1\rangle\langle b_2|\big) = |a_1\rangle\langle a_2|\; \mathrm{tr}\big(|b_1\rangle\langle b_2|\big) = \langle b_2|b_1\rangle\, |a_1\rangle\langle a_2|,$$
extended to general operators by linearity.
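A compact numpy sketch of the partial trace via reshaping (the helper name `partial_trace_B` is mine). The Bell state illustrates the point above: its reduced state is the maximally mixed density matrix $I/2$, which corresponds to no single state vector:

```python
import numpy as np

def partial_trace_B(rho, dA, dB):
    """Trace out subsystem B of a density matrix on a dA*dB-dimensional space."""
    return np.trace(rho.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

# Bell state (|00> + |11>)/sqrt(2).
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(bell, bell.conj())
print(partial_trace_B(rho, 2, 2))  # -> I/2
```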
Hilbert-Schmidt Inner Product¶
The set $L_V$ of operators on a $d$-dimensional Hilbert space $V$ is itself a Hilbert space of dimension $d^2$ under the Hilbert-Schmidt inner product $(A, B) \equiv \mathrm{tr}(A^\dagger B)$. Also, we have the induced norm $\|A\|_{HS} = \sqrt{\mathrm{tr}(A^\dagger A)}$.
Commutator and Anti-commutator¶
The commutator is $[A, B] \equiv AB - BA$ and the anti-commutator is $\{A, B\} \equiv AB + BA$; two operators commute when $[A, B] = 0$, and any product decomposes as $AB = \tfrac{1}{2}\big([A,B] + \{A,B\}\big)$.
Theorem of Simultaneous Diagonalization¶
Suppose $A$ and $B$ are Hermitian operators. Then $[A, B] = 0$ if and only if there exists an orthonormal basis in which both are diagonal: $A = \sum_i a_i |i\rangle\langle i|$ and $B = \sum_i b_i |i\rangle\langle i|$.
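A numerical sanity check: build two Hermitian operators from a shared orthonormal eigenbasis (an arbitrary random unitary $Q$) and confirm both claims:

```python
import numpy as np

d = 4
Q, _ = np.linalg.qr(np.random.randn(d, d) + 1j * np.random.randn(d, d))

# Commuting Hermitian operators with a common eigenbasis Q.
A = Q @ np.diag(np.random.randn(d)) @ Q.conj().T
B = Q @ np.diag(np.random.randn(d)) @ Q.conj().T

print(np.allclose(A @ B, B @ A))     # True: they commute

# Q diagonalizes both simultaneously.
for M in (A, B):
    D = Q.conj().T @ M @ Q
    print(np.allclose(D, np.diag(np.diag(D))))   # True
```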
Polar Decomposition¶
If $A$ is any linear operator, there exist a unitary $U$ and the positive operators $J \equiv \sqrt{A^\dagger A}$ and $K \equiv \sqrt{AA^\dagger}$ such that $A = UJ = KU$ (the left and right polar decompositions).
Moreover, if $A$ is invertible, then $U$ is unique.
Singular Value Decomposition¶
In general the SVD of a matrix $A \in \mathbb{C}^{m \times n}$ is
$$A = U \Sigma V^\dagger,$$
where $U$ ($m \times m$) and $V$ ($n \times n$) are unitary and $\Sigma$ ($m \times n$) is diagonal with non-negative real entries, the singular values of $A$.
It generalizes the eigendecomposition of a square normal matrix with an orthonormal eigenbasis to any $m \times n$ matrix.
Corollary: If $A$ is square, then $A = UDV$, where $U$ and $V$ are unitary and $D$ is diagonal with the singular values of $A$ as its non-negative entries.
Corollary: If $A$ is positive semi-definite, its SVD coincides with its spectral decomposition.
- If $A$ is square, the SVD always exists and an EVD may exist as well, but the two need not be the same.
- If $A$ is a square symmetric positive semi-definite matrix, both the SVD and the EVD exist and are equivalent (for a general symmetric matrix the singular values are the absolute values of the eigenvalues).
- If $A$ is non-square, only the SVD is possible.

These cases are illustrated in the sketch after this list.
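A numpy sketch of the decomposition and of the symmetric positive semi-definite case:

```python
import numpy as np

A = np.random.randn(4, 3)                # non-square: only the SVD exists
U, s, Vh = np.linalg.svd(A)              # A = U Sigma V^dagger
Sigma = np.zeros((4, 3))
Sigma[:3, :3] = np.diag(s)
print(np.allclose(A, U @ Sigma @ Vh))    # True

# For a symmetric positive semi-definite matrix, SVD and EVD coincide.
P = A.T @ A
_, s2, _ = np.linalg.svd(P)
w = np.linalg.eigvalsh(P)[::-1]          # eigenvalues, descending
print(np.allclose(s2, w))                # singular values = eigenvalues
```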
Rank of a matrix¶
The rank of $A$ is the dimension of the space spanned by its columns (equivalently, its rows).
- The row rank is the largest number of rows of $A$ that constitute a linearly independent set.
- The column rank is the largest number of columns of $A$ that constitute a linearly independent set.

Moreover, column rank $=$ row rank for any $A \in \mathbb{C}^{m \times n}$, so we speak simply of the rank, and $\mathrm{rank}(A) \leq \min(m, n)$.
A matrix is called full rank if equality holds.
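A quick numpy check; building a matrix as a sum of two outer products makes its rank 2 by construction (with probability 1 for random vectors):

```python
import numpy as np

a1, b1 = np.random.randn(5), np.random.randn(4)
a2, b2 = np.random.randn(5), np.random.randn(4)
A = np.outer(a1, b1) + np.outer(a2, b2)   # rank-2 by construction

print(np.linalg.matrix_rank(A))   # 2
print(min(A.shape))               # 4: A is not full rank
```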
Projection and Spaces¶
The null space $N(A)$ denotes all vectors in the domain that land at the origin after the transformation; it is also called the kernel. The column space $C(A)$ denotes the space spanned by the transformed basis vectors, i.e., the columns of $A$. The null space $N(A)$ and the row space $C(A^T)$ are orthogonal spaces which together span the domain $\mathbb{R}^n$.
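A sketch extracting orthonormal bases for these subspaces from the SVD; the cutoff used for the numerical rank is an assumption:

```python
import numpy as np

A = np.random.randn(3, 5)             # maps R^5 -> R^3
U, s, Vh = np.linalg.svd(A)
r = np.sum(s > 1e-10)                 # numerical rank

row_space  = Vh[:r].T                 # orthonormal basis of C(A^T)
null_space = Vh[r:].T                 # orthonormal basis of N(A)

print(np.allclose(A @ null_space, 0))            # kernel maps to origin
print(np.allclose(row_space.T @ null_space, 0))  # orthogonal subspaces
print(row_space.shape[1] + null_space.shape[1])  # 5 = dim of the domain
```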
Determinant¶
The determinant of $A$ gives the (signed) volume scaling factor of the transformation; $\det(A) = \prod_i \lambda_i$, and $A$ is invertible if and only if $\det(A) \neq 0$.
Quadratic Forms¶
Reminder: we are in real space here. A quadratic form is a map $q(x) = x^T A x$ with $A$ symmetric; $A$ is positive definite exactly when $q(x) > 0$ for all $x \neq 0$, and positive semi-definite when $q(x) \geq 0$.
Moore-Penrose Pseudoinverse¶
This is a pseudo-inverse formalism built from left and right inverses. If $A$ has full column rank, $A^+ = (A^\dagger A)^{-1} A^\dagger$ is a left inverse ($A^+ A = I$); if $A$ has full row rank, $A^+ = A^\dagger (A A^\dagger)^{-1}$ is a right inverse ($A A^+ = I$). In general, $A^+ = V \Sigma^+ U^\dagger$ from the SVD $A = U \Sigma V^\dagger$, where $\Sigma^+$ inverts the nonzero singular values and transposes.
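A numpy sketch of the full-column-rank case, cross-checked against `np.linalg.pinv`, which computes the pseudoinverse via the SVD:

```python
import numpy as np

A = np.random.randn(5, 3)                      # full column rank (generically)

# Left inverse from the normal equations: A+ = (A^T A)^{-1} A^T.
A_plus = np.linalg.inv(A.T @ A) @ A.T
print(np.allclose(A_plus @ A, np.eye(3)))      # True: left inverse

# pinv agrees, and also handles the rank-deficient case.
print(np.allclose(A_plus, np.linalg.pinv(A)))  # True
```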
Row Echelon Forms¶
A matrix is in row echelon form if:
- All rows consisting of only zeroes are at the bottom.
- The leading coefficient (also called the pivot) of a nonzero row is always strictly to the right of the leading coefficient of the row above it.
A matrix is in reduced row echelon form if:
- It is in row echelon form.
- The leading entry in each nonzero row is a 1 (called a leading 1).
- Each column containing a leading 1 has zeros in all its other entries.
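A small, self-contained Gaussian-elimination sketch that produces the reduced row echelon form (the pivot tolerance `tol` is an arbitrary choice):

```python
import numpy as np

def rref(A, tol=1e-12):
    """Reduce a matrix to reduced row echelon form by Gaussian elimination."""
    R = A.astype(float).copy()
    pivot_row = 0
    for col in range(R.shape[1]):
        # Find the row with the largest usable pivot in this column.
        pivot = pivot_row + np.argmax(np.abs(R[pivot_row:, col]))
        if np.abs(R[pivot, col]) < tol:
            continue                                      # no pivot here
        R[[pivot_row, pivot]] = R[[pivot, pivot_row]]     # swap rows
        R[pivot_row] /= R[pivot_row, col]                 # leading 1
        mask = np.arange(R.shape[0]) != pivot_row
        R[mask] -= np.outer(R[mask, col], R[pivot_row])   # clear the column
        pivot_row += 1
        if pivot_row == R.shape[0]:
            break
    return R

print(rref(np.array([[1., 2., 3.],
                     [2., 4., 7.],
                     [1., 2., 4.]])))
```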
Spectral Decomposition¶
Any normal operator $M$ on a vector space $V$ is diagonal with respect to some orthonormal basis for $V$; that is, $M = \sum_i \lambda_i |i\rangle\langle i|$, where $\lambda_i$ are the eigenvalues of $M$ and $\{|i\rangle\}$ is an orthonormal basis of eigenvectors.
Conversely, any diagonalizable operator is normal.
Proof¶
The converse is immediate, since a matrix that is diagonal in an orthonormal basis commutes with its adjoint. For the forward direction, proceed by induction on the dimension of $V$. Let $\lambda$ be an eigenvalue of $M$, $P$ the projector onto the $\lambda$-eigenspace, and $Q = I - P$ the complementary projector. Now we have
$$M = (P + Q)M(P + Q) = PMP + QMP + PMQ + QMQ.$$
Since $M$ maps the eigenspace into itself, $QMP = 0$; using the normality of $M$ one also shows $PMQ = 0$. Thus, if $M$ is normal, $M = PMP + QMQ$, where $PMP = \lambda P$ is already diagonal and $QMQ$ is normal on the subspace onto which $Q$ projects. Thus, by the inductive hypothesis $QMQ$ is diagonal with respect to some orthonormal basis of that subspace, and combining the two bases diagonalizes $M$. $\blacksquare$
Polar Decomposition¶
Let $A$ be a linear operator on a vector space $V$. Then there exist a unitary operator $U$ and positive operators $J$ and $K$ such that
$$A = UJ = KU,$$
where the unique positive operators satisfying these equations are $J = \sqrt{A^\dagger A}$ and $K = \sqrt{AA^\dagger}$.
Moreover, if $A$ is invertible, then $U$ is unique.
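The factors are easy to construct from the SVD, as in this numpy sketch (here `Up` plays the role of $U$, and the last lines check both polar forms):

```python
import numpy as np

# From A = U Sigma V^dag:  A = (U V^dag)(V Sigma V^dag) = Up @ J.
A = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
U, s, Vh = np.linalg.svd(A)

Up = U @ Vh                                 # unitary factor
J = Vh.conj().T @ np.diag(s) @ Vh           # J = sqrt(A^dag A), positive
K = U @ np.diag(s) @ U.conj().T             # K = sqrt(A A^dag), positive

print(np.allclose(A, Up @ J))               # left polar form A = U J
print(np.allclose(A, K @ Up))               # right polar form A = K U
print(np.allclose(J @ J, A.conj().T @ A))   # J is indeed sqrt(A^dag A)
```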
Singular Value Decomposition¶
Let $A$ be a square matrix. Then there exist unitary matrices $U$ and $V$, and a diagonal matrix $D$ with non-negative entries, such that $A = UDV$.
The diagonal elements of $D$ are called the singular values of $A$.
Proof¶
From the polar decomposition we have $A = SJ$, with $S$ unitary and $J$ positive. By the spectral theorem, $J = TDT^\dagger$ for some unitary $T$ and diagonal $D$ with non-negative entries. Setting $U = ST$ and $V = T^\dagger$, both unitary, gives $A = UDV$. $\blacksquare$