
Linear Algebra

Linear Operators and Matrices

$$A\Big(\sum_i a_i |v_i\rangle\Big) = \sum_i a_i A|v_i\rangle$$

Now, observe that a linear operator is just a matrix once bases are fixed. Suppose $A: V \to W$, and suppose $|v_1\rangle, |v_2\rangle, \ldots, |v_m\rangle$ is a basis of $V$ and $|w_1\rangle, |w_2\rangle, \ldots, |w_n\rangle$ is a basis of $W$. Then,

$$A|v_j\rangle = \sum_i A_{ij} |w_i\rangle$$
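As a quick numerical illustration (a minimal numpy sketch; the names `operator_matrix` and `rotate90` are just illustrative), the matrix of an operator can be assembled column by column from its action on basis vectors:

```python
import numpy as np

# Column j of the matrix is the image of basis vector e_j:
# A|v_j> = sum_i A_ij |w_i>, so the coefficients A_ij fill column j.
def operator_matrix(apply_op, dim):
    return np.column_stack([apply_op(np.eye(dim)[:, j]) for j in range(dim)])

rotate90 = lambda v: np.array([-v[1], v[0]])   # rotation by 90 degrees in R^2
A = operator_matrix(rotate90, 2)
print(A)  # [[ 0. -1.]
          #  [ 1.  0.]]
```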

Inner Products

Consider an operation $(\cdot\,,\cdot): V \times V \to \mathbb{C}$ such that the following properties hold:

  1. $(|v\rangle, \sum_i \lambda_i |w_i\rangle) = \sum_i \lambda_i (|v\rangle, |w_i\rangle)$
  2. $(|v\rangle, |w\rangle) = (|w\rangle, |v\rangle)^*$
  3. $(|v\rangle, |v\rangle) \ge 0$, and $= 0$ iff $|v\rangle = 0$

In finite dimensions, an inner product space (i.e., a vector space equipped with an inner product) is the same thing as a Hilbert space.

Taking $|i\rangle$ and $|j\rangle$ from an orthonormal basis, we have

$$\langle v|w\rangle = \Big(\sum_i v_i |i\rangle, \sum_j w_j |j\rangle\Big) = \sum_i \sum_j v_i^* w_j \,\delta_{ij} = \sum_i v_i^* w_i = (|v\rangle)^\dagger |w\rangle$$

Norm of a vector

$$\||v\rangle\| = \sqrt{\langle v|v\rangle}$$

We say that $|v\rangle$ is normalized iff $\||v\rangle\| = 1$.

A set of vectors $\{|a_i\rangle\}$ is orthonormal if $\langle a_i|a_j\rangle = \delta_{ij}$, i.e., $\langle a_i|a_j\rangle = 0$ for $i \ne j$ and $\langle a_i|a_i\rangle = 1$ for $i = j$.

Gram-Schmidt: producing an orthonormal basis

$$|v_{k+1}\rangle = \frac{|w_{k+1}\rangle - \sum_{i=1}^{k} \langle v_i|w_{k+1}\rangle |v_i\rangle}{\Big\| |w_{k+1}\rangle - \sum_{i=1}^{k} \langle v_i|w_{k+1}\rangle |v_i\rangle \Big\|}, \qquad |v_1\rangle = \frac{|w_1\rangle}{\||w_1\rangle\|}$$
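A minimal numpy sketch of the procedure (the helper name `gram_schmidt` is illustrative; this is the modified variant that subtracts projections one at a time):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of 1-D arrays, following the formula above."""
    basis = []
    for w in vectors:
        v = w.astype(complex)
        for b in basis:
            v -= np.vdot(b, v) * b        # subtract <v_i|w> |v_i>
        norm = np.linalg.norm(v)
        if norm > 1e-12:                  # skip linearly dependent inputs
            basis.append(v / norm)
    return basis

basis = gram_schmidt([np.array([1.0, 1.0]), np.array([1.0, 0.0])])
print(np.round(basis, 3))   # two orthonormal vectors
```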

Outer Product

$$(|w\rangle\langle v|)\,(|v'\rangle) = |w\rangle \langle v|v'\rangle = \langle v|v'\rangle\, |w\rangle$$
  • From this notion we obtain the completeness relation, $\sum_i |i\rangle\langle i| = I$.
  • $A = I_W A I_V = \sum_{ij} |w_j\rangle\langle w_j|\, A\, |v_i\rangle\langle v_i| = \sum_{ij} \langle w_j|A|v_i\rangle\; |w_j\rangle\langle v_i|$
  • Cauchy-Schwarz: $\langle v|v\rangle\langle w|w\rangle \ge \langle v|w\rangle\langle w|v\rangle = |\langle v|w\rangle|^2$
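These identities are easy to check numerically; a small sketch of the completeness relation and the outer-product action:

```python
import numpy as np

dim = 3
# Completeness: sum_i |i><i| = I, with the rows of eye(dim) as the basis.
identity = sum(np.outer(e, e.conj()) for e in np.eye(dim))
assert np.allclose(identity, np.eye(dim))

# Outer-product action: (|w><v|)|v'> = <v|v'> |w>.
v = np.array([1.0, 2.0, 0.0])
w = np.array([0.0, 1.0, 1.0])
vp = np.array([3.0, 0.0, 1.0])
assert np.allclose(np.outer(w, v.conj()) @ vp, np.vdot(v, vp) * w)
```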

Hilbert Space

A Hilbert space $H$ is complete, which means that every Cauchy sequence of vectors converges to a limit within the space itself. Under this hypothesis there exist Hilbert bases, also known as complete orthonormal systems of vectors in $H$.

For any orthonormal basis of H, we have the following.

Orthonormality: $\langle\psi_i|\psi_j\rangle = \delta_{ij}$
Completeness: $\sum_i |\psi_i\rangle\langle\psi_i| = I$

Eigenvectors and Eigenvalues

Under a given linear transformation $A$, the eigenvectors are those vectors $|v\rangle$ that are not shifted off their own span: $A|v\rangle = \lambda|v\rangle$.

All such vectors are referred to as eigenvectors, and $(A - \lambda I)|v\rangle = 0 \Rightarrow \det(A - \lambda I) = 0$ gives all possible eigenvalues.

If all $\lambda_i \ge 0$, the operator is positive semidefinite, and if all $\lambda_i > 0$, it is positive definite.
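Numerically, eigenpairs come from `np.linalg.eig`, or `np.linalg.eigh` for Hermitian input; a small check of $A|v\rangle = \lambda|v\rangle$ on a positive definite example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eigh(A)      # eigh assumes Hermitian/symmetric A
print(eigvals)                            # [1. 3.] -> all > 0: positive definite
for lam, v in zip(eigvals, eigvecs.T):    # eigenvectors are the columns
    assert np.allclose(A @ v, lam * v)    # A|v> = lambda |v>
```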

Eigenspace

It is the space of all vectors with a given eigenvalue $\lambda$. When an eigenspace is more than one-dimensional, we call it degenerate.

Adjoints and Hermitian

Suppose $A: V \to V$. Then there exists $A^\dagger: V \to V$ such that for all $|v\rangle, |w\rangle \in V$ we have,

$$(|v\rangle, A|w\rangle) = (A^\dagger|v\rangle, |w\rangle)$$

This operator $A^\dagger$ is called the adjoint or Hermitian conjugate of the operator $A$.

$$(|v\rangle, A|w\rangle) = \langle v|A|w\rangle = v^\dagger A w = (A^\dagger v)^\dagger w = (A^\dagger|v\rangle, |w\rangle)$$
  • We have $(AB)^\dagger = B^\dagger A^\dagger$.
  • $(|v\rangle)^\dagger = \langle v|$

Some definitions

  • Normal matrices: $AA^\dagger = A^\dagger A$
  • Hermitian matrices: $A^\dagger = A$
  • Unitary matrices: $A^\dagger A = I$
  • A normal matrix is Hermitian if and only if it has real eigenvalues.
  • If $\langle x|A|x\rangle \ge 0$ for all $|x\rangle$, then $A$ is positive semi-definite and has non-negative eigenvalues.
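A minimal sketch that tests a matrix against these definitions (the helper name `classify` is illustrative):

```python
import numpy as np

def classify(A, tol=1e-10):
    Ad = A.conj().T
    return {
        "normal":    np.allclose(Ad @ A, A @ Ad, atol=tol),
        "hermitian": np.allclose(Ad, A, atol=tol),
        "unitary":   np.allclose(Ad @ A, np.eye(A.shape[0]), atol=tol),
    }

pauli_y = np.array([[0, -1j], [1j, 0]])
print(classify(pauli_y))   # Pauli Y is normal, Hermitian, and unitary
```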

Some properties

  • If a Hermitian matrix has non-negative eigenvalues then it is positive semi-definite.
  • If $M = A^\dagger A$ then $M$ is both Hermitian and positive semi-definite.
  • All positive semi-definite operators are Hermitian, by definition.

Spectral Decomposition

Theorem: A linear operator is diagonalizable with respect to some orthonormal basis if and only if it is normal.

Some notes and derivation regarding the above:

$A q_i = \lambda_i q_i$, where the $q_i$'s are linearly independent eigenvectors of $A$.

$$AQ = Q\Lambda \;\text{ where }\; Q = [\,q_1\; q_2\; \cdots\; q_n\,] \;\Rightarrow\; A = Q\Lambda Q^{-1}$$

Writing $P$ for the projector onto an eigenspace of $A$ with eigenvalue $\lambda$, and $Q = I - P$ for the complementary projector:

$$A = IAI = (P+Q)A(P+Q) = PAP + QAP + PAQ + QAQ$$
$$A = \lambda P^2 + 0 + 0 + QAQ$$
$$A = \lambda P + QAQ$$

(using $P^2 = P$; the cross terms vanish for normal $A$, as shown in the Spectral Decomposition proof below).

Matrices and Vectors

In the following statements, $\{|i\rangle\}$ is an orthonormal basis set.

$$I = \sum_i |i\rangle\langle i|$$
$$|\psi\rangle = \sum_i \sigma_i |i\rangle, \quad \text{where } \sigma_i = \langle i|\psi\rangle$$

Now, to represent an operator (a linear transformation) as a matrix in an orthonormal basis:

$$A_{ij} = \langle i|A|j\rangle \qquad A = \sum_{i,j} \langle i|A|j\rangle \, |i\rangle\langle j| \qquad \mathrm{tr}(A) = \sum_i \langle i|A|i\rangle$$

Now, the diagonalization of any normal matrix:

$$M = \sum_i \lambda_i |i\rangle\langle i| = \sum_i \lambda_i P_i, \quad \text{where } P_i = |i\rangle\langle i|$$
$$f(M) = \sum_i f(\lambda_i)\, |i\rangle\langle i|$$

where the $\lambda_i$ are the eigenvalues of $M$, $\{|i\rangle\}$ is an orthonormal basis for the vector space $V$, and each $|i\rangle$ is an eigenvector of $M$ with eigenvalue $\lambda_i$.

If $M$ is Hermitian, all eigenvalues $\lambda_i$ are real.
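The operator-function formula above translates directly into code; a minimal sketch for Hermitian $M$ using `np.linalg.eigh` (the helper name `apply_function` is illustrative):

```python
import numpy as np

def apply_function(M, f):
    lam, U = np.linalg.eigh(M)            # M = U diag(lam) U^dagger
    return U @ np.diag(f(lam)) @ U.conj().T

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
sqrtM = apply_function(M, np.sqrt)        # f(M) = sum_i f(lambda_i) |i><i|
assert np.allclose(sqrtM @ sqrtM, M)      # sqrt(M)^2 recovers M
```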

Tensor Products

  • $z|vw\rangle = (z|v\rangle) \otimes (|w\rangle) = (|v\rangle) \otimes (z|w\rangle)$
  • $(|v_1\rangle + |v_2\rangle) \otimes |w\rangle = |v_1 w\rangle + |v_2 w\rangle$
  • $|v\rangle \otimes (|w_1\rangle + |w_2\rangle) = |v w_1\rangle + |v w_2\rangle$
  • $|\psi\rangle^{\otimes k} = |\psi\rangle \otimes \cdots \otimes |\psi\rangle$ ($k$ times)
  • $(A \otimes B)^\dagger = A^\dagger \otimes B^\dagger$

Linear Operators

$A \otimes B$ forms the linear operator that acts on the vector space $V \otimes W$, given that $A$ acts on $V$ and $B$ acts on $W$.

$$(A \otimes B)\Big(\sum_i a_i |v_i\rangle \otimes |w_i\rangle\Big) = \sum_i a_i\, A|v_i\rangle \otimes B|w_i\rangle$$
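In numpy the tensor product is the Kronecker product, `np.kron`; a quick check of $(A \otimes B)(|v\rangle \otimes |w\rangle) = A|v\rangle \otimes B|w\rangle$:

```python
import numpy as np

A = np.array([[0, 1], [1, 0]])        # example operators on C^2
B = np.array([[1, 0], [0, -1]])
v = np.array([1.0, 2.0])
w = np.array([3.0, 4.0])

lhs = np.kron(A, B) @ np.kron(v, w)   # (A ⊗ B)(|v> ⊗ |w>)
rhs = np.kron(A @ v, B @ w)           # A|v> ⊗ B|w>
assert np.allclose(lhs, rhs)
```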

Inner Product

$$\Big(\sum_i a_i |v_i\rangle \otimes |w_i\rangle,\; \sum_j b_j |v_j'\rangle \otimes |w_j'\rangle\Big) = \sum_{ij} a_i^* b_j \,\langle v_i|v_j'\rangle \langle w_i|w_j'\rangle$$

Trace

Properties of the trace are given below.

  • $\mathrm{tr}(A) = \sum_i A_{ii}$
  • $\mathrm{tr}(A) = \sum_i \langle i|A|i\rangle$ for an orthonormal basis
  • $\mathrm{tr}(AB) = \mathrm{tr}(BA)$
  • $\mathrm{tr}(zA + B) = z\,\mathrm{tr}(A) + \mathrm{tr}(B)$

The above properties yield the following implications.

  • $\mathrm{tr}(UAU^\dagger) = \mathrm{tr}(A)$
  • $\mathrm{tr}(A|\psi\rangle\langle\psi|) = \sum_i \langle i|A|\psi\rangle\langle\psi|i\rangle = \langle\psi|A|\psi\rangle$
  • $\mathrm{tr}(A) = \sum_i \lambda_i$ and $\det(A) = \prod_i \lambda_i$, with algebraic multiplicities
  • $\|A\| = \sqrt{\mathrm{tr}(A^\dagger A)}$
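A few of these properties checked numerically on random complex matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

assert np.isclose(np.trace(A @ B), np.trace(B @ A))           # cyclic property
U, _ = np.linalg.qr(B)                                        # a unitary via QR
assert np.isclose(np.trace(U @ A @ U.conj().T), np.trace(A))  # unitary invariance
fro = np.sqrt(np.trace(A.conj().T @ A).real)                  # sqrt(tr(A^dag A))
assert np.isclose(fro, np.linalg.norm(A))                     # = Frobenius norm
```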

Partial Trace

Entanglement excludes the possibility of associating state vectors with individual subsystems. Therefore, we introduce density matrices and the corresponding idea of reduction performed with the partial trace.

$$\mathrm{tr}(A \otimes B) = \mathrm{tr}(A)\,\mathrm{tr}(B)$$
$$\rho_{AB} : H_A \otimes H_B \;\xrightarrow{\;\mathrm{tr}_B\;}\; \rho_A : H_A$$
$$\mathrm{tr}_B(A \otimes B) = A\,\mathrm{tr}(B)$$
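A minimal partial-trace sketch (the helper name `partial_trace_B` is illustrative; it assumes $\rho$ acts on $H_A \otimes H_B$ with dimensions `dA`, `dB` in the usual Kronecker ordering):

```python
import numpy as np

def partial_trace_B(rho, dA, dB):
    """Reshape to (dA, dB, dA, dB) and trace out the two B indices."""
    return np.trace(rho.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

# Check tr_B(A ⊗ B) = A tr(B) on random matrices.
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((3, 3))
assert np.allclose(partial_trace_B(np.kron(A, B), 2, 3), A * np.trace(B))
```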

Hilbert-Schmidt Inner Product

$L_V$ forms the vector space of operators over the Hilbert space $V$. Then, we can show that $L_V$ is also a Hilbert space with $\mathrm{tr}(A^\dagger B)$ as the inner product on $L_V \times L_V$.

Also, we have $\dim(L_V) = \dim(V)^2$.

Commutator and Anti-commutator

$$[A, B] = AB - BA \qquad \{A, B\} = AB + BA$$

Theorem of Simultaneous Diagonalization

Suppose $A$ and $B$ are both Hermitian matrices. Then $[A, B] = 0$ iff there exists an orthonormal basis such that both $A$ and $B$ are diagonal with respect to that basis.

Polar Decomposition

If $A$ is any linear operator, then there exist a unitary $U$ and positive operators $J, K$ such that

$$A = UJ = KU, \quad \text{where } J = \sqrt{A^\dagger A} \text{ and } K = \sqrt{AA^\dagger}$$

Moreover, if $A^{-1}$ exists, then $U$ is unique.
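Assuming SciPy is available, `scipy.linalg.polar` computes both forms; a quick check against $J = \sqrt{A^\dagger A}$:

```python
import numpy as np
from scipy.linalg import polar

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
U, J = polar(A, side='right')                  # A = U @ J
assert np.allclose(A, U @ J)
assert np.allclose(U.conj().T @ U, np.eye(2))  # U is unitary

# J agrees with the positive square root of A^dagger A:
lam, V = np.linalg.eigh(A.conj().T @ A)
assert np.allclose(J, V @ np.diag(np.sqrt(lam)) @ V.conj().T)
```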

Singular Value Decomposition

SVD in general is given as follows.

$$A = U \Sigma V^T$$

It generalizes the eigendecomposition of a square normal matrix with an orthonormal eigenbasis to any $m \times n$ matrix.

Σ is an m×n rectangular diagonal matrix with non-negative real numbers on the diagonal (called singular values).

Corollary: If $A$ is a square matrix, then there exist unitaries $U, V$ and a diagonal matrix $D$ such that

$$A = UDV$$

where D has non-negative values.

Corollary: If $A$ is a normal matrix with non-negative eigenvalues, then $A = UDU^\dagger$ is possible, where $D$ has non-negative values.

  • If $A$ is square, both the SVD and EVD may exist, but they might not be the same.
  • If $A$ is a square symmetric positive semidefinite matrix, both the SVD and EVD exist and are equivalent.
  • If $A$ is non-square, only the SVD is possible.
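For a non-square example, `np.linalg.svd` returns $U$, the singular values, and $V^T$; the singular values must be embedded in a rectangular $\Sigma$ to reconstruct $A$:

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])            # 2x3: only the SVD applies
U, s, Vt = np.linalg.svd(A)
Sigma = np.zeros_like(A)                   # rectangular diagonal Sigma
Sigma[:len(s), :len(s)] = np.diag(s)
assert np.allclose(A, U @ Sigma @ Vt)      # A = U Sigma V^T
```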

Rank of a matrix

Rank = the dimension of the column space.

  • The row rank is the largest number of rows of A that constitute a linearly independent set.
  • The column rank is the largest number of columns of $A$ that constitute a linearly independent set. Moreover, column rank = row rank for $A \in \mathbb{R}^{m \times n}$.

$$\mathrm{rank}(A \in \mathbb{R}^{m\times n}) \le \min(m, n)$$

A matrix is called full rank if equality holds.

  • $\mathrm{rank}(A^T) = \mathrm{rank}(A)$
  • $\mathrm{rank}(AB) \le \min(\mathrm{rank}(A), \mathrm{rank}(B))$
  • $\mathrm{rank}(A + B) \le \mathrm{rank}(A) + \mathrm{rank}(B)$
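These inequalities are easy to confirm with `np.linalg.matrix_rank`:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))  # rank <= 2
B = rng.standard_normal((4, 4))                  # full rank (generically)
rank = np.linalg.matrix_rank

assert rank(A) <= min(4, 4)
assert rank(A.T) == rank(A)
assert rank(A @ B) <= min(rank(A), rank(B))
assert rank(A + B) <= rank(A) + rank(B)
```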

Projection and Spaces

$$\mathrm{Proj}(y; A) = \arg\min_{v \in \mathcal{R}(A)} \|v - y\|_2 = A(A^T A)^{-1} A^T y$$

  • $\mathcal{N}(A) = \{x \in \mathbb{R}^n : Ax = 0\}$ denotes all vectors in $\mathbb{R}^n$ that land at the origin after transformation. It is also called the kernel.
  • $\mathcal{R}(A) = \{v \in \mathbb{R}^m : v = Ax,\ x \in \mathbb{R}^n\}$ denotes the space spanned by the transformed basis vectors of $\mathbb{R}^n$.
  • $\mathcal{R}(A^T)$ and $\mathcal{N}(A)$ are orthogonal spaces which together span $\mathbb{R}^n$.
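A small sketch of the projection formula, assuming $A$ has full column rank so that $A^T A$ is invertible; the residual is orthogonal to $\mathcal{R}(A)$, as expected:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])

proj = A @ np.linalg.solve(A.T @ A, A.T @ y)   # A (A^T A)^{-1} A^T y
residual = y - proj
assert np.allclose(A.T @ residual, 0)          # residual ⟂ column space of A
```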

Determinant $\ne 0$ implies that the matrix has an inverse.

Quadratic Forms

Reminder: here we work over the real field $\mathbb{R}$.

$$x^T A x = (x^T A x)^T = x^T \Big(\tfrac{1}{2}A + \tfrac{1}{2}A^T\Big) x$$

Since $x^T A x$ is a scalar, it equals its own transpose $x^T A^T x$; hence only the symmetric part of $A$ contributes to a quadratic form.
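Checked numerically, the full matrix and its symmetric part give the same quadratic form:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
x = rng.standard_normal(3)

sym = 0.5 * (A + A.T)                      # symmetric part of A
assert np.isclose(x @ A @ x, x @ sym @ x)  # quadratic forms agree
```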

Moore-Penrose Pseudoinverse

$$A_{\text{left inv}} = (A^\dagger A)^{-1} A^\dagger$$
$$A_{\text{right inv}} = A^\dagger (A A^\dagger)^{-1}$$

This is a pseudoinverse formalism with left and right inverses:

$$A_{\text{left inv}}\, A = I \qquad A\, A_{\text{right inv}} = I$$
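For a tall matrix with full column rank, the left inverse exists and coincides with `np.linalg.pinv`; a minimal sketch:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])                           # 3x2, full column rank

left_inv = np.linalg.inv(A.T @ A) @ A.T              # (A^T A)^{-1} A^T
assert np.allclose(left_inv @ A, np.eye(2))          # left inverse: A+ A = I
assert np.allclose(left_inv, np.linalg.pinv(A))      # matches Moore-Penrose
```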

Row Echelon Forms

A matrix is in row echelon form if:

  • All rows consisting of only zeroes are at the bottom.
  • The leading coefficient (also called the pivot) of a nonzero row is always strictly to the right of the leading coefficient of the row above it.

A matrix is in reduced row echelon form if:

  • It is in row echelon form.
  • The leading entry in each nonzero row is a 1 (called a leading 1).
  • Each column containing a leading 1 has zeros in all its other entries.
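Assuming SymPy is available, `Matrix.rref()` returns the reduced row echelon form together with the pivot-column indices:

```python
from sympy import Matrix

M = Matrix([[1, 2, 1],
            [2, 4, 0],
            [3, 6, 1]])
rref_M, pivots = M.rref()
print(rref_M)   # Matrix([[1, 2, 0], [0, 0, 1], [0, 0, 0]])
print(pivots)   # (0, 2) -> columns containing a leading 1
```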

Spectral Decomposition

Any normal operator M on a vector space V is diagonal with respect to some orthonormal basis for V.

Conversely, any operator that is diagonalizable with respect to an orthonormal basis is normal.

Proof

Let $P$ be the projector onto an eigenspace of $M$ with eigenvalue $\lambda$, and let $Q = I - P$ project onto the orthogonal complement. Then

$$M = (P+Q)M(P+Q) = PMP + QMP + PMQ + QMQ = PMP + QMQ$$

since $QMP = 0$ (as $MP = \lambda P$ and $QP = 0$) and, by normality of $M$, $PMQ = (QM^\dagger P)^\dagger = 0$ as well.

Now we have

$$QM = QM(P+Q) = QMQ \quad\text{and similarly}\quad QM^\dagger = QM^\dagger Q$$
$$(QMQ)(QM^\dagger Q) = QMM^\dagger Q = QM^\dagger MQ = (QM^\dagger Q)(QMQ)$$

so $QMQ$ is itself normal.

Thus, if $M$ is normal then $M = PMP + QMQ$, where $PMP = \lambda P^2 = \lambda P$ is already diagonal with respect to an orthonormal basis of the subspace onto which $P$ projects. Similarly (by induction on dimension), the normal operator $QMQ$ is diagonalizable with respect to some orthonormal basis of the subspace onto which $Q$ projects.

Thus, $M$ is diagonalizable with respect to an orthonormal basis of the entire vector space:

$$M = \sum_i \lambda_i |i\rangle\langle i| = \sum_i \lambda_i P_i$$

Polar Decomposition

Let $A$ be a linear operator on a vector space $V$. Then there exist a unitary $U$ and positive operators $J$ and $K$ such that $A = UJ = KU$, where the unique positive operators satisfy $J = \sqrt{A^\dagger A}$ and $K = \sqrt{AA^\dagger}$.

Moreover, if A is invertible then U is unique.

Singular Value Decomposition

Let $A$ be a square matrix. Then there exist unitary matrices $U$ and $V$, and a diagonal matrix $D$ with non-negative entries, such that $A = UDV$.

The diagonal elements of D are called the singular values of A.

Proof

From the polar decomposition we have $A = SJ$ with $S$ unitary and $J$ positive. Spectrally decomposing $J = TDT^\dagger$ (with $T$ unitary and $D$ diagonal with non-negative entries) gives $A = STDT^\dagger = (ST)\,D\,(T^\dagger) = UDV$, where $U = ST$ and $V = T^\dagger$ are unitary.