Orthogonality and Inner Product Spaces

Section 1: Concepts and Proofs


Theorem: For any linear transformation $T: V \to W$ between finite-dimensional vector spaces, once bases of $V$ and $W$ are fixed there exists a unique matrix $A$ such that $T(x) = Ax$ for every $x \in V$, where vectors are written in coordinates relative to those bases.

Lemma: If $v_1, v_2, \dots, v_k$ are linearly independent vectors, then any subset of them is also
linearly independent.

Proof: Since $T$ is linear, $\ker T$ and $\text{Im } T$ are subspaces. Choose a basis $\{w_1, \dots, w_k\}$ of $\ker T$ and extend it to a basis $\{w_1, \dots, w_k, u_1, \dots, u_m\}$ of $V$; one checks that $\{T(u_1), \dots, T(u_m)\}$ is a basis of $\text{Im } T$, so $\dim(\ker T) + \dim(\text{Im } T) = k + m = \dim(V)$.

Example: Consider $A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$. The eigenvalues are $2$ and
$3$, with corresponding eigenvectors $(1,0)$ and $(0,1)$.
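
As a quick numerical check (a NumPy sketch that is not part of the original notes), the eigenvalues and eigenvectors of this matrix can be computed directly:

    import numpy as np

    # Diagonal matrix from the example above.
    A = np.array([[2.0, 0.0],
                  [0.0, 3.0]])

    # np.linalg.eig returns the eigenvalues and a matrix whose columns
    # are corresponding (unit-length) eigenvectors.
    eigenvalues, eigenvectors = np.linalg.eig(A)
    print(eigenvalues)         # [2. 3.]
    print(eigenvectors)        # columns are (1, 0) and (0, 1)

    # Verify A v = lambda v for each eigenpair.
    for lam, v in zip(eigenvalues, eigenvectors.T):
        assert np.allclose(A @ v, lam * v)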

Theorem: If $A$ is an $n \times n$ matrix, then a scalar $\lambda$ is an eigenvalue of $A$ if and only if there exists a nonzero vector $v$ such that $Av = \lambda v$, equivalently if and only if $\det(A - \lambda I) = 0$.

Lemma: If $T$ is a linear operator on a finite-dimensional space $V$, then $\dim(\ker T) + \dim(\text{Im } T) = \dim(V)$.

Proof: Since $T$ is linear, $\ker T$ and $\text{Im } T$ are subspaces. Choose a basis $\{w_1, \dots, w_k\}$ of $\ker T$ and extend it to a basis $\{w_1, \dots, w_k, u_1, \dots, u_m\}$ of $V$; one checks that $\{T(u_1), \dots, T(u_m)\}$ is a basis of $\text{Im } T$, so $\dim(\ker T) + \dim(\text{Im } T) = k + m = \dim(V)$.
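
The dimension count can also be illustrated numerically (a NumPy sketch, not from the notes; the matrix below is chosen only for illustration):

    import numpy as np

    # A 3x3 matrix whose third column equals the sum of the first two,
    # so the image is 2-dimensional and the kernel is 1-dimensional.
    A = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 1.0, 2.0]])

    rank = np.linalg.matrix_rank(A)                  # dim(Im T)
    singular_values = np.linalg.svd(A, compute_uv=False)
    nullity = int(np.sum(singular_values < 1e-10))   # dim(ker T), counted independently

    print(rank, nullity)                             # 2 1
    assert rank + nullity == A.shape[1]              # rank-nullity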

Example: Let $v_1 = (1,1,0)$ and $v_2 = (1,0,1)$. Applying Gram-Schmidt gives $u_1 = \frac{1}{\sqrt{2}}(1,1,0)$; subtracting the projection of $v_2$ onto $u_1$ leaves $w_2 = (\tfrac{1}{2}, -\tfrac{1}{2}, 1)$, and normalizing gives $u_2 = \frac{1}{\sqrt{6}}(1,-1,2)$.
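
A small NumPy check of this computation (not part of the original notes):

    import numpy as np

    v1 = np.array([1.0, 1.0, 0.0])
    v2 = np.array([1.0, 0.0, 1.0])

    # Gram-Schmidt on v1, v2.
    u1 = v1 / np.linalg.norm(v1)
    w2 = v2 - np.dot(v2, u1) * u1            # remove the component along u1
    u2 = w2 / np.linalg.norm(w2)

    print(u1)                                # (1, 1, 0) / sqrt(2)
    print(u2)                                # (1, -1, 2) / sqrt(6)
    assert np.isclose(np.dot(u1, u2), 0.0)   # orthogonal
    assert np.isclose(np.linalg.norm(u2), 1.0)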

Theorem: If $A$ is an $n \times n$ matrix, then a scalar $\lambda$ is an eigenvalue of $A$ if and only if there exists a nonzero vector $v$ such that $Av = \lambda v$, equivalently if and only if $\det(A - \lambda I) = 0$.

Lemma: If $v_1, v_2, \dots, v_k$ are linearly independent vectors, then any subset of them is also
linearly independent.

Proof: Let $A$ be symmetric, and let $v_1, v_2$ be eigenvectors for eigenvalues $\lambda_1, \lambda_2$ respectively, so $A v_1 = \lambda_1 v_1$ and $A v_2 = \lambda_2 v_2$. Since $A = A^T$, $\langle A v_1, v_2 \rangle = \langle v_1, A v_2 \rangle$, and hence $\lambda_1 \langle v_1, v_2 \rangle = \lambda_2 \langle v_1, v_2 \rangle$. If $\lambda_1 \neq \lambda_2$, it follows that $\langle v_1, v_2 \rangle = 0$.
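
A numerical illustration of this orthogonality (a NumPy sketch; the symmetric matrix below is my own example, not from the notes):

    import numpy as np

    # A symmetric matrix with distinct eigenvalues 1 and 3.
    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # np.linalg.eigh is specialised for symmetric matrices and returns
    # eigenvalues in ascending order with orthonormal eigenvectors.
    eigenvalues, V = np.linalg.eigh(A)
    print(eigenvalues)                       # [1. 3.]
    v1, v2 = V[:, 0], V[:, 1]
    assert np.isclose(np.dot(v1, v2), 0.0)   # eigenvectors are orthogonal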

Example: Let $v_1 = (1,1,0)$ and $v_2 = (1,0,1)$. Applying Gram-Schmidt gives $u_1 = \frac{1}{\sqrt{2}}(1,1,0)$; subtracting the projection of $v_2$ onto $u_1$ leaves $w_2 = (\tfrac{1}{2}, -\tfrac{1}{2}, 1)$, and normalizing gives $u_2 = \frac{1}{\sqrt{6}}(1,-1,2)$.

Theorem: The Gram-Schmidt process converts any finite linearly independent set in an inner product space into an orthonormal set with the same span; applied to a basis, it produces an orthonormal basis.

Lemma: If $v_1, v_2, \dots, v_k$ are linearly independent vectors, then any subset of them is also
linearly independent.

Proof: Suppose vectors $v_{i_1}, \dots, v_{i_m}$ taken from a linearly independent set satisfy $a_1 v_{i_1} + \dots + a_m v_{i_m} = 0$. Extending this relation with zero coefficients on the remaining vectors gives a dependence relation on the whole set, so every $a_j = 0$. Hence the subset is linearly independent.

Example: Consider $A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$. The eigenvalues are $2$ and
$3$, with corresponding eigenvectors $(1,0)$ and $(0,1)$.

Section 2: Concepts and Proofs

Theorem: A symmetric matrix $A \in \mathbb{R}^{n \times n}$ is orthogonally diagonalizable, that
is, there exists an orthogonal matrix $P$ such that $P^T A P = D$, where $D$ is diagonal.
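
A numerical sketch of this theorem using NumPy (the matrix is illustrative only and not part of the notes):

    import numpy as np

    # A symmetric matrix.
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 4.0]])

    # eigh returns an orthogonal matrix P whose columns are eigenvectors of A.
    eigenvalues, P = np.linalg.eigh(A)
    D = P.T @ A @ P

    assert np.allclose(P.T @ P, np.eye(3))        # P is orthogonal
    assert np.allclose(D, np.diag(eigenvalues))   # P^T A P is diagonal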

Lemma: If $T$ is a linear operator on a finite-dimensional space $V$, then $\dim(\ker T) + \dim(\text{Im } T) = \dim(V)$.

Proof: Let $A$ be symmetric, and let $v_1, v_2$ be eigenvectors for eigenvalues $\lambda_1, \lambda_2$ respectively, so $A v_1 = \lambda_1 v_1$ and $A v_2 = \lambda_2 v_2$. Since $A = A^T$, $\langle A v_1, v_2 \rangle = \langle v_1, A v_2 \rangle$, and hence $\lambda_1 \langle v_1, v_2 \rangle = \lambda_2 \langle v_1, v_2 \rangle$. If $\lambda_1 \neq \lambda_2$, it follows that $\langle v_1, v_2 \rangle = 0$.

Example: Let $v_1 = (1,1,0)$ and $v_2 = (1,0,1)$. Applying Gram-Schmidt gives $u_1 = \frac{1}{\sqrt{2}}(1,1,0)$; subtracting the projection of $v_2$ onto $u_1$ leaves $w_2 = (\tfrac{1}{2}, -\tfrac{1}{2}, 1)$, and normalizing gives $u_2 = \frac{1}{\sqrt{6}}(1,-1,2)$.

Theorem: If $A$ is an $n \times n$ matrix, then a scalar $\lambda$ is an eigenvalue of $A$ if and only if there exists a nonzero vector $v$ such that $Av = \lambda v$, equivalently if and only if $\det(A - \lambda I) = 0$.

Lemma: If $T$ is a linear operator on a finite-dimensional space $V$, then $\dim(\ker T) + \dim(\text{Im } T) = \dim(V)$.

Proof: Let $A$ be symmetric, and let $v_1, v_2$ be eigenvectors for eigenvalues $\lambda_1, \lambda_2$ respectively, so $A v_1 = \lambda_1 v_1$ and $A v_2 = \lambda_2 v_2$. Since $A = A^T$, $\langle A v_1, v_2 \rangle = \langle v_1, A v_2 \rangle$, and hence $\lambda_1 \langle v_1, v_2 \rangle = \lambda_2 \langle v_1, v_2 \rangle$. If $\lambda_1 \neq \lambda_2$, it follows that $\langle v_1, v_2 \rangle = 0$.

Example: Consider $A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$. The eigenvalues are $2$ and
$3$, with corresponding eigenvectors $(1,0)$ and $(0,1)$.

Theorem: A symmetric matrix $A \in \mathbb{R}^{n \times n}$ is orthogonally diagonalizable, that
is, there exists an orthogonal matrix $P$ such that $P^T A P = D$, where $D$ is diagonal.

Lemma: If $T$ is a linear operator on a finite-dimensional space $V$, then $\dim(\ker T) + \dim(\text{Im } T) = \dim(V)$.

Proof: Since $T$ is linear, $\ker T$ and $\text{Im } T$ are subspaces. Choose a basis $\{w_1, \dots, w_k\}$ of $\ker T$ and extend it to a basis $\{w_1, \dots, w_k, u_1, \dots, u_m\}$ of $V$; one checks that $\{T(u_1), \dots, T(u_m)\}$ is a basis of $\text{Im } T$, so $\dim(\ker T) + \dim(\text{Im } T) = k + m = \dim(V)$.

Example: With respect to the standard basis, the transformation $T(x, y) = (2x + y, x - y)$ is represented by the matrix $\begin{pmatrix} 2 & 1 \\ 1 & -1 \end{pmatrix}$.
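
A short check that this matrix reproduces $T$ (a NumPy sketch, assuming the standard basis as in the example):

    import numpy as np

    def T(v):
        x, y = v
        return np.array([2 * x + y, x - y])

    # Columns of the representing matrix are the images of the standard basis vectors.
    A = np.column_stack([T(np.array([1.0, 0.0])),
                         T(np.array([0.0, 1.0]))])
    print(A)                                  # [[ 2.  1.], [ 1. -1.]]

    v = np.array([3.0, -2.0])
    assert np.allclose(A @ v, T(v))           # T(x) = A x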

Theorem: If $A$ is an $n \times n$ matrix, then a scalar $\lambda$ is an eigenvalue of $A$ if and only if there exists a nonzero vector $v$ such that $Av = \lambda v$, equivalently if and only if $\det(A - \lambda I) = 0$.

Lemma: If $T$ is a linear operator on a finite-dimensional space $V$, then $\dim(\ker T) + \dim(\text{Im } T) = \dim(V)$.

Proof: Since $T$ is linear, $\ker T$ and $\text{Im } T$ are subspaces. Choose a basis $\{w_1, \dots, w_k\}$ of $\ker T$ and extend it to a basis $\{w_1, \dots, w_k, u_1, \dots, u_m\}$ of $V$; one checks that $\{T(u_1), \dots, T(u_m)\}$ is a basis of $\text{Im } T$, so $\dim(\ker T) + \dim(\text{Im } T) = k + m = \dim(V)$.

Example: With respect to the standard basis, the transformation $T(x, y) = (2x + y, x - y)$ is represented by the matrix $\begin{pmatrix} 2 & 1 \\ 1 & -1 \end{pmatrix}$.

Section 3: Concepts and Proofs

Definition: Let $V$ be a vector space over a field $F$. A subset $S \subseteq V$ is linearly independent if, for any vectors $v_1, v_2, \dots, v_n \in S$, the equation $a_1v_1 + a_2v_2 + \dots + a_nv_n = 0$ implies $a_1 = a_2 = \dots = a_n = 0$.
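
In coordinates, independence of a finite list can be tested by comparing the rank of the matrix whose columns are the vectors to the number of vectors (a NumPy sketch; the helper function name is my own):

    import numpy as np

    def linearly_independent(vectors):
        """Vectors are independent iff the column matrix has full column rank."""
        M = np.column_stack(vectors)
        return np.linalg.matrix_rank(M) == M.shape[1]

    print(linearly_independent([np.array([1.0, 1.0, 0.0]),
                                np.array([1.0, 0.0, 1.0])]))   # True
    print(linearly_independent([np.array([1.0, 2.0, 0.0]),
                                np.array([2.0, 4.0, 0.0])]))   # False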

Lemma: Distinct eigenvalues of a symmetric matrix correspond to orthogonal eigenvectors.

Proof: Since $T$ is linear, $\ker T$ and $\text{Im } T$ are subspaces. Choose a basis $\{w_1, \dots, w_k\}$ of $\ker T$ and extend it to a basis $\{w_1, \dots, w_k, u_1, \dots, u_m\}$ of $V$; one checks that $\{T(u_1), \dots, T(u_m)\}$ is a basis of $\text{Im } T$, so $\dim(\ker T) + \dim(\text{Im } T) = k + m = \dim(V)$.

Example: Consider $A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$. The eigenvalues are $2$ and
$3$, with corresponding eigenvectors $(1,0)$ and $(0,1)$.

Theorem: For any linear transformation $T: V \to W$ between finite-dimensional vector spaces, once bases of $V$ and $W$ are fixed there exists a unique matrix $A$ such that $T(x) = Ax$ for every $x \in V$, where vectors are written in coordinates relative to those bases.

Lemma: Distinct eigenvalues of a symmetric matrix correspond to orthogonal eigenvectors.

Proof: Since $T$ is linear, $\ker T$ and $\text{Im } T$ are subspaces. Choose a basis $\{w_1, \dots, w_k\}$ of $\ker T$ and extend it to a basis $\{w_1, \dots, w_k, u_1, \dots, u_m\}$ of $V$; one checks that $\{T(u_1), \dots, T(u_m)\}$ is a basis of $\text{Im } T$, so $\dim(\ker T) + \dim(\text{Im } T) = k + m = \dim(V)$.

Example: Consider $A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$. The eigenvalues are $2$ and
$3$, with corresponding eigenvectors $(1,0)$ and $(0,1)$.

Theorem: A symmetric matrix $A \in \mathbb{R}^{n \times n}$ is orthogonally diagonalizable, that
is, there exists an orthogonal matrix $P$ such that $P^T A P = D$, where $D$ is diagonal.

Lemma: If $T$ is a linear operator on a finite-dimensional space $V$, then $\dim(\ker T) + \dim(\text{Im } T) = \dim(V)$.

Proof: Let $A$ be symmetric, and let $v_1, v_2$ be eigenvectors for eigenvalues $\lambda_1, \lambda_2$ respectively, so $A v_1 = \lambda_1 v_1$ and $A v_2 = \lambda_2 v_2$. Since $A = A^T$, $\langle A v_1, v_2 \rangle = \langle v_1, A v_2 \rangle$, and hence $\lambda_1 \langle v_1, v_2 \rangle = \lambda_2 \langle v_1, v_2 \rangle$. If $\lambda_1 \neq \lambda_2$, it follows that $\langle v_1, v_2 \rangle = 0$.

Example: With respect to the standard basis, the transformation $T(x, y) = (2x + y, x - y)$ is represented by the matrix $\begin{pmatrix} 2 & 1 \\ 1 & -1 \end{pmatrix}$.

Definition: Let $V$ be a vector space over a field $F$. A subset $S \subseteq V$ is linearly independent if, for any vectors $v_1, v_2, \dots, v_n \in S$, the equation $a_1v_1 + a_2v_2 + \dots + a_nv_n = 0$ implies $a_1 = a_2 = \dots = a_n = 0$.

Lemma: If $T$ is a linear operator on a finite-dimensional space $V$, then $\dim(\ker T) + \dim(\text{Im } T) = \dim(V)$.

Proof: Let $A$ be symmetric, and let $v_1, v_2$ be eigenvectors for eigenvalues $\lambda_1, \lambda_2$ respectively, so $A v_1 = \lambda_1 v_1$ and $A v_2 = \lambda_2 v_2$. Since $A = A^T$, $\langle A v_1, v_2 \rangle = \langle v_1, A v_2 \rangle$, and hence $\lambda_1 \langle v_1, v_2 \rangle = \lambda_2 \langle v_1, v_2 \rangle$. If $\lambda_1 \neq \lambda_2$, it follows that $\langle v_1, v_2 \rangle = 0$.

Example: Consider $A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$. The eigenvalues are $2$ and
$3$, with corresponding eigenvectors $(1,0)$ and $(0,1)$.

Section 4: Concepts and Proofs

Theorem: For any linear transformation $T: V \to W$ between finite-dimensional vector spaces, once bases of $V$ and $W$ are fixed there exists a unique matrix $A$ such that $T(x) = Ax$ for every $x \in V$, where vectors are written in coordinates relative to those bases.

Lemma: Distinct eigenvalues of a symmetric matrix correspond to orthogonal eigenvectors.

Proof: Suppose vectors $v_{i_1}, \dots, v_{i_m}$ taken from a linearly independent set satisfy $a_1 v_{i_1} + \dots + a_m v_{i_m} = 0$. Extending this relation with zero coefficients on the remaining vectors gives a dependence relation on the whole set, so every $a_j = 0$. Hence the subset is linearly independent.

Example: With respect to the standard basis, the transformation $T(x, y) = (2x + y, x - y)$ is represented by the matrix $\begin{pmatrix} 2 & 1 \\ 1 & -1 \end{pmatrix}$.

Theorem: A symmetric matrix $A \in \mathbb{R}^{n \times n}$ is orthogonally diagonalizable, that
is, there exists an orthogonal matrix $P$ such that $P^T A P = D$, where $D$ is diagonal.

Lemma: Distinct eigenvalues of a symmetric matrix correspond to orthogonal eigenvectors.

Proof: Let $A$ be symmetric, and let $v_1, v_2$ be eigenvectors for eigenvalues $\lambda_1, \lambda_2$ respectively, so $A v_1 = \lambda_1 v_1$ and $A v_2 = \lambda_2 v_2$. Since $A = A^T$, $\langle A v_1, v_2 \rangle = \langle v_1, A v_2 \rangle$, and hence $\lambda_1 \langle v_1, v_2 \rangle = \lambda_2 \langle v_1, v_2 \rangle$. If $\lambda_1 \neq \lambda_2$, it follows that $\langle v_1, v_2 \rangle = 0$.

Example: Let $v_1 = (1,1,0)$ and $v_2 = (1,0,1)$. Applying Gram-Schmidt gives $u_1 = \frac{1}{\sqrt{2}}(1,1,0)$; subtracting the projection of $v_2$ onto $u_1$ leaves $w_2 = (\tfrac{1}{2}, -\tfrac{1}{2}, 1)$, and normalizing gives $u_2 = \frac{1}{\sqrt{6}}(1,-1,2)$.

Theorem: The Gram-Schmidt process converts any finite linearly independent set in an inner product space into an orthonormal set with the same span; applied to a basis, it produces an orthonormal basis. A sketch implementation is given below.
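
A compact implementation sketch of the process described by the theorem (NumPy; the function name and tolerance are my own choices, not from the notes):

    import numpy as np

    def gram_schmidt(vectors, tol=1e-12):
        """Return an orthonormal list spanning the same space as `vectors`.

        Assumes the inputs are linearly independent; a remainder whose norm
        falls below `tol` signals a dependent input.
        """
        basis = []
        for v in vectors:
            w = np.asarray(v, dtype=float)
            for u in basis:
                w = w - np.dot(w, u) * u          # subtract the projection onto u
            norm = np.linalg.norm(w)
            if norm < tol:
                raise ValueError("input vectors are linearly dependent")
            basis.append(w / norm)
        return basis

    u1, u2 = gram_schmidt([(1, 1, 0), (1, 0, 1)])
    print(u1, u2)                                 # matches the worked example above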

Lemma: If $T$ is a linear operator on a finite-dimensional space $V$, then $\dim(\ker T) + \dim(\text{Im } T) = \dim(V)$.

Proof: Suppose vectors $v_{i_1}, \dots, v_{i_m}$ taken from a linearly independent set satisfy $a_1 v_{i_1} + \dots + a_m v_{i_m} = 0$. Extending this relation with zero coefficients on the remaining vectors gives a dependence relation on the whole set, so every $a_j = 0$. Hence the subset is linearly independent.

Example: Let $v_1 = (1,1,0)$ and $v_2 = (1,0,1)$. Applying Gram-Schmidt gives $u_1 = \frac{1}{\sqrt{2}}(1,1,0)$; subtracting the projection of $v_2$ onto $u_1$ leaves $w_2 = (\tfrac{1}{2}, -\tfrac{1}{2}, 1)$, and normalizing gives $u_2 = \frac{1}{\sqrt{6}}(1,-1,2)$.

Theorem: A symmetric matrix $A \in \mathbb{R}^{n \times n}$ is orthogonally diagonalizable, that
is, there exists an orthogonal matrix $P$ such that $P^T A P = D$, where $D$ is diagonal.

Lemma: If $T$ is a linear operator on a finite-dimensional space $V$, then $\dim(\ker T) + \dim(\text{Im } T) = \dim(V)$.

Proof: Suppose vectors $v_{i_1}, \dots, v_{i_m}$ taken from a linearly independent set satisfy $a_1 v_{i_1} + \dots + a_m v_{i_m} = 0$. Extending this relation with zero coefficients on the remaining vectors gives a dependence relation on the whole set, so every $a_j = 0$. Hence the subset is linearly independent.

Example: Consider $A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$. The eigenvalues are $2$ and
$3$, with corresponding eigenvectors $(1,0)$ and $(0,1)$.

Section 5: Concepts and Proofs

Theorem: The Gram-Schmidt process converts any finite linearly independent set in an inner product space into an orthonormal set with the same span; applied to a basis, it produces an orthonormal basis.

Lemma: If $T$ is a linear operator on a finite-dimensional space $V$, then $\dim(\ker T) + \dim(\text{Im } T) = \dim(V)$.

Proof: Since $T$ is linear, $\ker T$ and $\text{Im } T$ are subspaces. Choose a basis $\{w_1, \dots, w_k\}$ of $\ker T$ and extend it to a basis $\{w_1, \dots, w_k, u_1, \dots, u_m\}$ of $V$; one checks that $\{T(u_1), \dots, T(u_m)\}$ is a basis of $\text{Im } T$, so $\dim(\ker T) + \dim(\text{Im } T) = k + m = \dim(V)$.

Example: Let $v_1 = (1,1,0)$ and $v_2 = (1,0,1)$. Applying Gram-Schmidt gives $u_1 = \frac{1}{\sqrt{2}}(1,1,0)$; subtracting the projection of $v_2$ onto $u_1$ leaves $w_2 = (\tfrac{1}{2}, -\tfrac{1}{2}, 1)$, and normalizing gives $u_2 = \frac{1}{\sqrt{6}}(1,-1,2)$.

Definition: Let $V$ be a vector space over a field $F$. A subset $S \subseteq V$ is linearly independent if, for any vectors $v_1, v_2, \dots, v_n \in S$, the equation $a_1v_1 + a_2v_2 + \dots + a_nv_n = 0$ implies $a_1 = a_2 = \dots = a_n = 0$.

Lemma: Distinct eigenvalues of a symmetric matrix correspond to orthogonal eigenvectors.

Proof: Suppose vectors $v_{i_1}, \dots, v_{i_m}$ taken from a linearly independent set satisfy $a_1 v_{i_1} + \dots + a_m v_{i_m} = 0$. Extending this relation with zero coefficients on the remaining vectors gives a dependence relation on the whole set, so every $a_j = 0$. Hence the subset is linearly independent.

Example: Let $v_1 = (1,1,0)$ and $v_2 = (1,0,1)$. Applying Gram-Schmidt gives $u_1 = \frac{1}{\sqrt{2}}(1,1,0)$; subtracting the projection of $v_2$ onto $u_1$ leaves $w_2 = (\tfrac{1}{2}, -\tfrac{1}{2}, 1)$, and normalizing gives $u_2 = \frac{1}{\sqrt{6}}(1,-1,2)$.

Theorem: For any linear transformation $T: V \to W$ between finite-dimensional vector spaces, once bases of $V$ and $W$ are fixed there exists a unique matrix $A$ such that $T(x) = Ax$ for every $x \in V$, where vectors are written in coordinates relative to those bases.

Lemma: If $T$ is a linear operator on a finite-dimensional space $V$, then $\dim(\ker T) + \dim(\text{Im } T) = \dim(V)$.

Proof: Since $T$ is linear, $\ker T$ and $\text{Im } T$ are subspaces. Choose a basis $\{w_1, \dots, w_k\}$ of $\ker T$ and extend it to a basis $\{w_1, \dots, w_k, u_1, \dots, u_m\}$ of $V$; one checks that $\{T(u_1), \dots, T(u_m)\}$ is a basis of $\text{Im } T$, so $\dim(\ker T) + \dim(\text{Im } T) = k + m = \dim(V)$.

Example: Consider $A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$. The eigenvalues are $2$ and
$3$, with corresponding eigenvectors $(1,0)$ and $(0,1)$.

Theorem: A symmetric matrix $A \in \mathbb{R}^{n \times n}$ is orthogonally diagonalizable, that
is, there exists an orthogonal matrix $P$ such that $P^T A P = D$, where $D$ is diagonal.

Lemma: If $T$ is a linear operator on a finite-dimensional space $V$, then $\dim(\ker T) + \dim(\text{Im } T) = \dim(V)$.

Proof: Suppose vectors $v_{i_1}, \dots, v_{i_m}$ taken from a linearly independent set satisfy $a_1 v_{i_1} + \dots + a_m v_{i_m} = 0$. Extending this relation with zero coefficients on the remaining vectors gives a dependence relation on the whole set, so every $a_j = 0$. Hence the subset is linearly independent.

Example: Consider $A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$. The eigenvalues are $2$ and
$3$, with corresponding eigenvectors $(1,0)$ and $(0,1)$.
