Alternating $k$-Forms

Definition
Given a real vector space $V$, the set of alternating $k$-forms on $V$, denoted $\Omega^k(V)$, is the set of alternating multi-linear, real-valued maps on $V^k$.
In particular, given $\omega\in\Omega^k(V)$, for any $v_1,\ldots,v_k\in V$ the map $\omega$ satisfies: $$\omega( v_{\sigma(1)},\ldots,v_{\sigma(k)})=\hbox{sign}(\sigma)\cdot\omega(v_1,\ldots,v_k),\quad\forall\sigma\in S_k,$$ with $S_k$ the permutation group on $k$ elements.
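
For concreteness, the following minimal Python/NumPy sketch (an illustration, not part of the formal development) checks the sign property and multi-linearity for a sample alternating $2$-form on ${\mathbb R}^3$:

import numpy as np

# A sample alternating 2-form on R^3: the 2x2 minor built from the first
# two coordinates, omega(u, v) = u_1 v_2 - u_2 v_1.
def omega(u, v):
    return u[0] * v[1] - u[1] * v[0]

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)

# Swapping the two arguments flips the sign (a transposition has sign -1).
assert np.isclose(omega(u, v), -omega(v, u))

# Multi-linearity in the first argument.
a, b = 2.0, -3.0
assert np.isclose(omega(a * u + b * w, v), a * omega(u, v) + b * omega(w, v))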

Claim
If $V$ is an $n$-dimensional vector space, then $\Omega^k(V)$ has dimension $\left(\begin{array}{c}n\\k\end{array}\right)$ for $0\leq k \leq n$ (and $0$ otherwise).

Definition
Given vector spaces $V$ and $W$, a linear map $L:V\rightarrow W$, and an alternating $k$-form $\omega\in\Omega^k(W)$, define $L^*(\omega):V^k\rightarrow{\mathbb R}$ to be the map: $$[L^*(\omega)](v_1,\ldots,v_k) \equiv \omega\big( L(v_1),\ldots,L(v_k)\big).$$

Claim
The map $L^*(\omega)$ is an alternating $k$-form and the mapping $L^*:\Omega^k(W)\rightarrow\Omega^k(V)$ is linear.
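
A short numerical sketch of the pullback (reusing the sample $2$-form from above, with $L$ a random map from ${\mathbb R}^2$ to ${\mathbb R}^3$):

import numpy as np

# A sample alternating 2-form on W = R^3.
def omega(u, v):
    return u[0] * v[1] - u[1] * v[0]

# [L*(omega)](v_1, v_2) = omega(L(v_1), L(v_2)).
def pullback(L, omega):
    return lambda u, v: omega(L @ u, L @ v)

rng = np.random.default_rng(1)
L = rng.standard_normal((3, 2))       # L : V = R^2 -> W = R^3
u, v = rng.standard_normal(2), rng.standard_normal(2)

pulled = pullback(L, omega)
# The pullback is again alternating.
assert np.isclose(pulled(u, v), -pulled(v, u))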


Volume Forms

Definition
When $n$ is the dimension of the vector space $V$, we refer to $\Omega^n(V)$ as the (one-dimensional) space of volume forms on $V$.

Claim
Given linearly independent $\{v_1,\ldots,v_n\}\subset V$, there exists a (necessarily unique) volume form $\omega_{\{v_1,\ldots,v_n\}}$ such that: $$\omega_{\{v_1,\ldots,v_n\}}(v_1,\ldots,v_n) = 1.$$

Property
Given an invertible linear map $L:V\rightarrow W$ between vector spaces, and given linearly independent $\{v_1,\ldots,v_n\}\subset V$, we have: $$\omega_{\{v_1,\ldots,v_n\}} = L^*(\omega_{\{L(v_1),\ldots,L(v_n)\}}).$$
Proof Both sides are volume forms that evaluate to $1$ on $(v_1,\ldots,v_n)$, and by the preceding claim such a volume form is unique.

Property
Given a linear map $L:V\rightarrow V$, there exists $s_L\in{\mathbb R}$ such that: $$L^*(\omega) = s_L\cdot\omega,\quad\forall\omega\in\Omega^n(V).$$
Proof This follows from the facts that the map $L^*:\Omega^n(V)\rightarrow\Omega^n(V)$ is linear and that $\Omega^n(V)$ is one-dimensional.

Definition
Given a linear map $L:V\rightarrow V$, the determinant of $L$, denoted $\hbox{det}(L)$, is the real value such that: $$L^*(\omega) = \hbox{det}(L)\cdot\omega,\qquad\forall\omega\in\Omega^n(V).$$
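
On $V={\mathbb R}^n$, the map sending $n$ vectors to the determinant of the matrix with those vectors as columns is a volume form, and the defining identity $L^*(\omega)=\hbox{det}(L)\cdot\omega$ can be checked numerically; a minimal sketch:

import numpy as np

# The standard volume form on R^n: vol(v_1, ..., v_n) = det[v_1 | ... | v_n].
def vol(*vectors):
    return np.linalg.det(np.column_stack(vectors))

rng = np.random.default_rng(2)
n = 4
L = rng.standard_normal((n, n))
vs = [rng.standard_normal(n) for _ in range(n)]

# [L*(vol)](v_1, ..., v_n) = det(L) * vol(v_1, ..., v_n).
assert np.isclose(vol(*[L @ v for v in vs]), np.linalg.det(L) * vol(*vs))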

Property
Given linear maps $L,M:V\rightarrow V$, we have: $$\hbox{det}(L\circ M)=\hbox{det}(L)\cdot\hbox{det}(M).$$
Proof Given a volume form $\omega\in\Omega^n(V)$, we have: \begin{align*} \hbox{det}(L\circ M)\cdot\omega &= (L\circ M)^*(\omega)\\ &= L^*(M^*(\omega)) \\ &= L^*( \hbox{det}(M)\cdot\omega )\\ &= \hbox{det}(M)\cdot L^*(\omega) \\ &= \hbox{det}(M)\cdot \hbox{det}(L)\cdot\omega. \end{align*}

Property
Given vector spaces $V$ and $W$, an invertible map $L:V\rightarrow W$, and any linear map $M:W\rightarrow W$, we have: $$\hbox{det}(M)=\hbox{det}(L^{-1}\circ M\circ L).$$
Proof Let $\{v_1,\ldots,v_n\}\subset V$ be any linearly independent vectors, and write $L^{-*}\equiv(L^{-1})^*$ for the dual of the inverse. Then: \begin{align*} \hbox{det}(L^{-1}\circ M\circ L) &= \hbox{det}(L^{-1}\circ M\circ L)\cdot\omega_{\{v_1,\ldots,v_n\}}(v_1,\ldots,v_n)\\ &= \omega_{\{v_1,\ldots,v_n\}}\big((L^{-1}\circ M\circ L)(v_1),\ldots,(L^{-1}\circ M\circ L)(v_n)\big)\\ &= [L^{-*}(\omega_{\{L^{-1}(L(v_1)),\ldots,L^{-1}(L(v_n))\}})]\big((M\circ L)(v_1),\ldots,(M\circ L)(v_n)\big)\\ &= \omega_{\{L(v_1),\ldots,L(v_n)\}}\big((M\circ L)(v_1),\ldots,(M\circ L)(v_n)\big)\\ &= \hbox{det}(M)\cdot\omega_{\{L(v_1),\ldots,L(v_n)\}}(L(v_1),\ldots,L(v_n))\\ &= \hbox{det}(M). \end{align*}

In particular, this implies that if $L:V\rightarrow W$ and $M:W\rightarrow V$ are linear maps such that one of them is invertible, then: $$\hbox{det}(M\circ L) = \hbox{det}(L\circ M).$$
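
In coordinates this is the familiar identity $\hbox{det}(\mathbf{A}\mathbf{B})=\hbox{det}(\mathbf{B}\mathbf{A})$; a quick numerical check:

import numpy as np

rng = np.random.default_rng(3)
A, B = rng.standard_normal((5, 5)), rng.standard_normal((5, 5))
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(B @ A))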



Duality

Definition
Given a real vector space $V$, the dual of $V$, denoted $V^*$, is the space of linear functionals on $V$ -- linear maps from $V$ to the real numbers.

Property
There is a canonical isomorphism $\Phi:V\rightarrow V^{**}$ defined by evaluation: $$[\Phi(v)](\alpha) \equiv \alpha(v),\qquad\forall v\in V,\alpha\in V^*.$$

Definition
Given real vector spaces $U$ and $V$ and a linear map $L:U\rightarrow V$ between them, we can define the canonical dual map $L^*:V^*\rightarrow U^*$, defined by: $$L^*(\alpha) \equiv \alpha \circ L,\quad\forall\alpha\in V^*.$$
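
In coordinates (representing a functional by its coefficient vector), the dual map acts by the matrix transpose; a minimal sketch:

import numpy as np

rng = np.random.default_rng(4)
L = rng.standard_normal((3, 2))       # L : U = R^2 -> V = R^3
alpha = rng.standard_normal(3)        # a functional on V, as a coefficient vector
u = rng.standard_normal(2)

# [L*(alpha)](u) = alpha(L(u)); L* is represented by the transpose of L.
assert np.isclose((L.T @ alpha) @ u, alpha @ (L @ u))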

Property
Given real vector spaces $U$ and $V$ and a linear map $L:U\rightarrow V$ between them, the dual of the dual of $L$ (under the canonical isomorphisms $\Phi_U:U\rightarrow U^{**}$ and $\Phi_V:V\rightarrow V^{**}$) is the operator $L$ itself: $$L^{**}\circ\Phi_U=\Phi_V\circ L.$$
Proof For all $u\in U$ and $\alpha\in V^*$, we have: \begin{align*} [(\Phi_V\circ L)(u)](\alpha) &= \alpha(L(u))\\ &= [L^*(\alpha)](u)\\ &= [\Phi_U(u)](L^*(\alpha))\\ &= [(L^{**}\circ \Phi_U)(u)](\alpha). \end{align*}

Throughout, we will use the canonical isomorphism $\Phi:V\rightarrow V^{**}$ between a vector space and its double dual, and will assume that $\Phi$ is applied implicitly, as needed. Thus, the above becomes simply: $$L^{**} = L.$$

Property
Given $\{\alpha_1,\ldots,\alpha_k\}\subset V^*$, the map: \begin{align*} \omega_{\{\alpha_1,\ldots,\alpha_k\}}:V^k&\rightarrow{\mathbb R}\\ (v_1,\ldots,v_k) &\mapsto \sum_{\sigma\in S_k}\hbox{sign}(\sigma)\cdot\prod_{i=1}^k\alpha_i(v_{\sigma(i)}) \end{align*} is an alternating $k$-form.
Proof The fact that $\omega_{\{\alpha_1,\ldots,\alpha_k\}}$ is multi-linear follows from the fact that each functional $\alpha_i$ is linear.
The fact that it is alternating follows from the fact that if $\tau\in S_k$ then for all $\{v_1,\ldots,v_k\}\subset V$ we have: \begin{align*} \omega_{\{\alpha_1,\ldots,\alpha_k\}}(v_1,\ldots,v_k) &= \sum_{\sigma\in S_k}\hbox{sign}(\sigma)\cdot\prod_{i=1}^k\alpha_i(v_{\sigma(i)})\\ &= \sum_{\sigma\in \tau^{-1}\circ S_k}\hbox{sign}(\tau\circ\sigma)\cdot\prod_{i=1}^k\alpha_i(v_{(\tau\circ\sigma)(i)})\\ &= \hbox{sign}(\tau)\cdot\sum_{\sigma\in S_k}\hbox{sign}(\sigma)\cdot\prod_{i=1}^k\alpha_i(v_{\tau(\sigma(i))})\\ &= \hbox{sign}(\tau)\cdot\omega_{\{\alpha_1,\ldots,\alpha_k\}}(v_{\tau(1)},\ldots,v_{\tau(k)}) \end{align*} where the second equality re-indexes the sum (replacing $\sigma$ with $\tau\circ\sigma$) and the third follows from the fact that (as a set) $\tau^{-1}\circ S_k=S_k$.
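
The construction can be implemented directly by summing over permutations; the following sketch (using a brute-force inversion count for $\hbox{sign}(\sigma)$) checks the alternating property numerically. Note that for $k=n$, with the $\alpha_i$ the dual of the standard basis, the sum recovers the determinant.

import itertools
import numpy as np

def perm_sign(sigma):
    # Sign of a permutation (given as a tuple) via counting inversions.
    s = 1
    for i in range(len(sigma)):
        for j in range(i + 1, len(sigma)):
            if sigma[i] > sigma[j]:
                s = -s
    return s

def omega_from_functionals(alphas):
    # alphas: a list of k functionals on R^n, each stored as a length-n array.
    k = len(alphas)
    def omega(*vs):
        total = 0.0
        for sigma in itertools.permutations(range(k)):
            term = float(perm_sign(sigma))
            for i in range(k):
                term *= alphas[i] @ vs[sigma[i]]
            total += term
        return total
    return omega

rng = np.random.default_rng(5)
alphas = [rng.standard_normal(4) for _ in range(3)]
v1, v2, v3 = (rng.standard_normal(4) for _ in range(3))

om = omega_from_functionals(alphas)
# Swapping two arguments flips the sign, as shown in the proof above.
assert np.isclose(om(v1, v2, v3), -om(v2, v1, v3))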

Property
Given $\{v_1,\ldots,v_k\}\subset V$ and $\{\alpha_1,\ldots,\alpha_k\}\subset V^*$ the alternating $k$-forms $\omega_{\{\alpha_1,\ldots,\alpha_k\}}\in\Omega^k(V)$ and $\omega_{\{v_1,\ldots,v_k\}}\in\Omega^k(V^*)$ satisfy: $$\omega_{\{\alpha_1,\ldots,\alpha_k\}}(v_1,\ldots,v_k) = \omega_{\{v_1,\ldots,v_k\}}(\alpha_1,\ldots,\alpha_k).$$
Proof Since for any $\sigma\in S_k$, the permutation $\sigma^{-1}$ acts on the set $\{1,...,k\}$ by reordering elements, we have: $$\prod_{i=1}^k \alpha_i(v_{\sigma(i)}) = \prod_{i=1}^k \alpha_{\sigma^{-1}(i)}(v_i) = \prod_{i=1}^k v_i(\alpha_{\sigma^{-1}(i)}).$$ Using the facts that $\hbox{sign}(\sigma)=\hbox{sign}(\sigma^{-1})$ and that (as a set) $S_k=S_k^{-1}$, it follows that: \begin{align*} \omega_{\{\alpha_1,\ldots,\alpha_k\}}(v_1,\ldots,v_k) &= \sum_{\sigma\in S_k}\hbox{sign}(\sigma)\cdot\prod_{i=1}^k\alpha_i(v_{\sigma(i)})\\ &= \sum_{\sigma\in S_k}\hbox{sign}(\sigma^{-1})\cdot\prod_{i=1}^k v_i(\alpha_{\sigma^{-1}(i)})\\ &= \sum_{\sigma\in S_k^{-1}}\hbox{sign}(\sigma)\cdot\prod_{i=1}^k v_i(\alpha_{\sigma(i)})\\ &= \sum_{\sigma\in S_k}\hbox{sign}(\sigma)\cdot\prod_{i=1}^k v_i(\alpha_{\sigma(i)})\\ &=\omega_{\{v_1,\ldots,v_k\}}(\alpha_1,\ldots,\alpha_k). \end{align*}

Property
If $L:V\rightarrow V$ is a linear map that acts by scalar multiplication (i.e. there exists $s\in{\mathbb R}$ such that $L(v)=s\cdot v$ for all $v\in V$), then $L^*$ acts by scalar multiplication by the same scalar.
Proof For all $v\in V$ and $\alpha\in V^*$ we have: $$[L^*(\alpha)](v) = \alpha(L(v)) = \alpha(s\cdot v) = s\cdot \alpha(v)\qquad\Longleftrightarrow\qquad L^*(\alpha) = s\cdot\alpha.$$

Property
Given a linear map $L:V\rightarrow V$, we have: $$\hbox{det}(L) = \hbox{det}(L^*).$$
Proof Let $\{v_1,\ldots,v_n\}\subset V$ be a basis and $\{\alpha_1,\ldots,\alpha_n\}\subset V^*$ the associated dual basis, so that $\omega_{\{v_1,\ldots,v_n\}}(\alpha_1,\ldots,\alpha_n)=1$. Applying $L^*$ to the dual volume form $\omega_{\{v_1,\ldots,v_n\}}\in\Omega^n(V^*)$ gives: \begin{align*} \hbox{det}(L^*)\cdot\omega_{\{v_1,\ldots,v_n\}}(\alpha_1,\ldots,\alpha_n) &=\omega_{\{v_1,\ldots,v_n\}}(L^*(\alpha_1),\ldots,L^*(\alpha_n))\\ &=\sum_{\sigma\in S_n}\hbox{sign}(\sigma)\cdot\prod_{i=1}^n v_i(L^*(\alpha_{\sigma(i)}))\\ &=\sum_{\sigma\in S_n}\hbox{sign}(\sigma)\cdot\prod_{i=1}^n \alpha_{\sigma(i)}(L(v_i))\\ &=\sum_{\sigma\in S_n}\hbox{sign}(\sigma)\cdot\prod_{i=1}^n\alpha_i(L(v_{\sigma^{-1}(i)}))\\ &=\sum_{\sigma\in S_n}\hbox{sign}(\sigma)\cdot\prod_{i=1}^n\alpha_i(L(v_{\sigma(i)}))\\ &=\omega_{\{\alpha_1,\ldots,\alpha_n\}}(L(v_1),\ldots,L(v_n))\\ &=\hbox{det}(L)\cdot\omega_{\{\alpha_1,\ldots,\alpha_n\}}(v_1,\ldots,v_n)\\ &=\hbox{det}(L)\cdot\omega_{\{v_1,\ldots,v_n\}}(\alpha_1,\ldots,\alpha_n), \end{align*} where the fourth equality re-indexes the product by $j=\sigma(i)$ and the fifth re-indexes the sum by $\sigma\rightarrow\sigma^{-1}$ (using $\hbox{sign}(\sigma)=\hbox{sign}(\sigma^{-1})$ and $S_n=S_n^{-1}$). Since $\omega_{\{v_1,\ldots,v_n\}}(\alpha_1,\ldots,\alpha_n)=1\neq0$, it follows that: $$\hbox{det}(L^*) = \hbox{det}(L).$$

Property
Given real vector spaces $U$, $V$, and $W$ and linear maps $L:U\rightarrow V$ and $M:V\rightarrow W$, the dual of the composition is the composition of the duals, in the opposite order: $$(M\circ L)^* = L^* \circ M^*.$$
Proof For $u\in U$ and $\alpha\in W^*$ we have: \begin{align*} [(M\circ L)^*(\alpha)](u) &= \alpha\big( (M\circ L)(u)\big )\\ &= \alpha\big( M(L(u))\big )\\ &= [M^*(\alpha)](L(u))\\ &= [L^*(M^*(\alpha))](u)\\ &= [(L^*\circ M^*)(\alpha)](u) \end{align*} $$\Longleftrightarrow\quad (M\circ L)^* = L^*\circ M^*.$$

Definition
A linear map $L:V\rightarrow V^*$ is said to be symmetric if $$[L(v)](w) = [L(w)](v),\quad\forall v,w\in V.$$ It is said to be positive semi-definite (resp. negative semi-definite) if $$[L(v)](v)\geq0\qquad(\hbox{resp. }[L(v)](v)\leq0),\quad\forall v\in V.$$ And it is said to be positive definite (resp. negative definite) if, in addition, $$[L(v)](v)=0\qquad\Longleftrightarrow\qquad v=0.$$
Choosing a Basis Assume that the vector spaces $U$ and $V$ are finite-dimensional and that we are given bases ${\mathcal B}_U=\{u_1,\ldots,u_m\}\subset U$ and ${\mathcal B}_V=\{v_1,\ldots,v_n\}\subset V$, with associated dual bases ${\mathcal B}_U^*\subset U^*$ and ${\mathcal B}_V^*\subset V^*$.
If a linear map $L:U\rightarrow V$ is expressed by the matrix $\mathbf{L}\in{\mathbb R}^{n\times m}$ with respect to ${\mathcal B}_U$ and ${\mathcal B}_V$, then the dual map $L^*:V^*\rightarrow U^*$ is expressed by the transpose $\mathbf{L}^\top\in{\mathbb R}^{m\times n}$ with respect to ${\mathcal B}_V^*$ and ${\mathcal B}_U^*$.


Inner Products
Assume that we are given an inner product space, $\{V\,,\,\,{\mathcal I}:V\times V\rightarrow{\mathbb R}\}$ (with $\mathcal I$ a symmetric, positive-definite, bilinear form).

Note
The inner product $\mathcal I:V\times V\rightarrow{\mathbb R}$ is equivalent to a symmetric positive definite map $D_{\mathcal I}:V\rightarrow V^*$ defined by: $$[D_{\mathcal I}(v)](w)\equiv{\mathcal I}(v,w),\quad\forall v,w\in V.$$

Note
By the symmetry of $\mathcal I$, we have: $$D_{\mathcal I} = D_{\mathcal I}^*:V\rightarrow V^*.$$

Note
An inner product on the primal space, ${\mathcal I}:V\times V\rightarrow{\mathbb R}$, defines an associated inner product on the dual space ${\mathcal I}^*:V^*\times V^*\rightarrow{\mathbb R}$.
It is defined by mapping dual vectors to primal vectors (using the inverse of $D_{\mathcal I}$) and then applying the inner product in the primal space: $${\mathcal I}^*(\alpha,\beta) \equiv {\mathcal I}\left(D_{\mathcal I}^{-1}(\alpha),D_{\mathcal I}^{-1}(\beta)\right),\quad\forall\alpha,\beta\in V^*.$$
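
In matrix terms (anticipating the basis discussion below, where $D_{\mathcal I}$ is represented by the Gram matrix), the dual inner product is represented by the inverse Gram matrix; a minimal numerical sketch, with a randomly drawn symmetric positive-definite Gram matrix standing in for ${\mathcal I}$:

import numpy as np

rng = np.random.default_rng(6)
n = 4
G = rng.standard_normal((n, n))
D = G @ G.T + n * np.eye(n)           # an SPD Gram matrix representing I (and D_I)

alpha, beta = rng.standard_normal(n), rng.standard_normal(n)

# I*(alpha, beta) = I(D^{-1} alpha, D^{-1} beta) collapses to alpha^T D^{-1} beta.
v, w = np.linalg.solve(D, alpha), np.linalg.solve(D, beta)
assert np.isclose(v @ D @ w, alpha @ np.linalg.solve(D, beta))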

Note
The map $D_{\mathcal I}^*:V\rightarrow V^*$ is the dual of $D_{\mathcal I}$ while $D_{{\mathcal I}^*}:V^*\rightarrow V$ is the realization of the inner product on the dual space as a linear map. In particular, we have: $$D_{{\mathcal I}^*}=D_{\mathcal I}^{-*}$$ (with $D_{\mathcal I}^{-*}$ the dual of the inverse of $D_{\mathcal I}$).

Definition
Given inner product spaces $\{V\,,\,\,{\mathcal I}_V:V\times V\rightarrow{\mathbb R}\}$ and $\{W\,,\,\,{\mathcal I}_W:W\times W\rightarrow{\mathbb R}\}$, a linear transformation $L:V\rightarrow W$ is said to be an orthogonal transformation from $\{V,{\mathcal I}_V\}$ to $\{W,{\mathcal I}_W\}$, denoted $L\in O(\{V,{\mathcal I}_V\},\{W,{\mathcal I}_W\})$, if it satisfies: $$ \mathcal{I}_W\big(L(v),L(w)\big)=\mathcal{I}_V(v,w),\quad\forall v,w\in V $$ $$ \Longleftrightarrow\quad L^*\circ D_W \circ L = D_V, $$ where $D_V\equiv D_{{\mathcal I}_V}$ and $D_W\equiv D_{{\mathcal I}_W}$.

Property
Given a vector space $V$ and two inner products ${\mathcal I}_1,{\mathcal I}_2:V\times V\rightarrow{\mathbb R}$ on the space, if $L\in O(\{V,{\mathcal I}_2\},\{V,{\mathcal I}_1\})$, then: $$\hbox{det}(L) = \pm\sqrt{\hbox{det}(D_1^{-1}\circ D_2)},$$ where $D_1\equiv D_{{\mathcal I}_1}$ and $D_2\equiv D_{{\mathcal I}_2}$.

Proof \begin{align*} 1 &= \hbox{det}(Id.) \\ &= \hbox{det}(D_2^{-1}\circ D_2) \\ &= \hbox{det}(D_2^{-1}\circ L^*\circ D_1 \circ L) \\ &= \hbox{det}(D_2^{-1}\circ L^*\circ D_1) \cdot \hbox{det}(L) \\ &= \hbox{det}(D_1\circ D_2^{-1}\circ L^*) \cdot \hbox{det}(L) \\ &= \hbox{det}(D_1\circ D_2^{-1})\cdot\hbox{det}(L^*) \cdot \hbox{det}(L) \\ &= \hbox{det}(D_1\circ D_2^{-1})\cdot\hbox{det}^2(L) \\ &= \hbox{det}(D_2^{-1}\circ D_1)\cdot\hbox{det}^2(L)\\ &= \frac{1}{\hbox{det}(D_1^{-1}\circ D_2)}\cdot\hbox{det}^2(L) \end{align*} $$\Longleftrightarrow\quad\hbox{det}(L) = \pm\sqrt{\hbox{det}(D_1^{-1}\circ D_2)}.$$
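
A numerical sketch of this property; the Cholesky-based construction of an orthogonal $L$ below is one convenient choice made for the sketch, not part of the development above:

import numpy as np

rng = np.random.default_rng(8)
n = 4
G1, G2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
D1, D2 = G1 @ G1.T + n * np.eye(n), G2 @ G2.T + n * np.eye(n)

# With Cholesky factors D = C C^T, the map L = C1^{-T} C2^T satisfies
# L^T D1 L = C2 C1^{-1} D1 C1^{-T} C2^T = C2 C2^T = D2, so L lies in
# O({V, I_2}, {V, I_1}).
C1, C2 = np.linalg.cholesky(D1), np.linalg.cholesky(D2)
L = np.linalg.solve(C1.T, C2.T)
assert np.allclose(L.T @ D1 @ L, D2)

# det(L) = +/- sqrt(det(D1^{-1} D2)).
assert np.isclose(abs(np.linalg.det(L)),
                  np.sqrt(np.linalg.det(np.linalg.solve(D1, D2))))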

Note
In the case that $V=W$ and ${\mathcal I}_V={\mathcal I}_W$, the set of orthogonal transformations: $$O(\{V,{\mathcal I}_V\})\equiv O(\{V,{\mathcal I}_V\},\{V,{\mathcal I}_V\})$$ is a group, and for any $L\in O(\{V,{\mathcal I}_V\})$ we have $\hbox{det}(L)=\pm1$.

Choosing a Basis Given a basis ${\mathcal B}=\{v_1,\ldots,v_n\}\subset V$, let ${\mathcal B}^*=\{v^1,\ldots,v^n\}$ be the associated dual basis.

In the basis ${\mathcal B}$, the inner product ${\mathcal I}$ is expressed as the Gram matrix ${\mathbf I}\in{\mathbb R}^{n\times n}$ with: $${\mathbf I}_{ij}={\mathcal I}(v_i,v_j).$$

With respect to the primal basis ${\mathcal B}$ and associated dual basis ${\mathcal B}^*$, the map $D_{\mathcal I}:V\rightarrow V^*$ is expressed by the matrix $\mathbf{D}_{\mathcal I}\in{\mathbb R}^{n\times n}$ with: $$(\mathbf{D}_{\mathcal I})_{ij} = \left[D_{\mathcal I}(v_j)\right](v_i) = {\mathcal I}(v_i,v_j).$$ Thus, with respect to the basis ${\mathcal B}$ (and the associated basis ${\mathcal B}^*$) the matrix expressions for the inner product ${\mathcal I}:V\times V\rightarrow{\mathbb R}$ and the linear map $D_{\mathcal I}:V\rightarrow V^*$ are identical: $${\mathbf D}_{\mathcal I} = {\mathbf I}.$$ Similarly, with respect to the bases ${\mathcal B}$ and ${\mathcal B}^*$ the map $D_{{\mathcal I}^*}:V^*\rightarrow V$ (realizing the dual inner product) is expressed as: $$\mathbf{D}_{{\mathcal I}^*} = {\mathbf I}^{-1}.$$ In particular, if ${\mathcal B}$ is an orthonormal basis, then the representation of all these operators in this basis is the identity: $$\mathbf{D}_{\mathcal I} = \mathbf{D}_{{\mathcal I}^*} = {\mathbf I}=\mathbf{Id}.$$



Adjoint Operators
Assume that we are given inner product spaces, $\{U\,,\,\,{\mathcal I}_U:U\times U\rightarrow{\mathbb R}\}$ and $\{V\,,\,\,{\mathcal I}_V:V\times V\rightarrow{\mathbb R}\}$, and a linear operator $L:U\rightarrow V$.
For simplicity, we will write the associated linear maps: $$D_U\equiv D_{{\mathcal I}_U}:U\rightarrow U^*\quad\hbox{and}\quad D_V\equiv D_{{\mathcal I}_V}:V\rightarrow V^*.$$

Definition
The adjoint of $L$, denoted $L^\dagger:V\rightarrow U$, is the linear operator satisfying: $${\mathcal I}_U(L^\dagger(v),u) = {\mathcal I}_V(v,L(u)),\qquad\forall u\in U, v\in V.$$ Or, equivalently: $$D_U\circ L^\dagger = L^*\circ D_V\qquad\Longleftrightarrow\qquad L^\dagger = D_U^{-1}\circ L^*\circ D_V.$$

Property
The adjoint of the adjoint of $L$ is $L$ itself: $$(L^\dagger)^\dagger = L.$$

Proof \begin{align*} (L^\dagger)^\dagger &= ( D_U^{-1}\circ L^*\circ D_V )^\dagger\\ &= D_V^{-1}\circ( D_U^{-1}\circ L^*\circ D_V )^*\circ D_U\\ &= D_V^{-1}\circ D_V^*\circ (L^*)^*\circ D_U^{-*}\circ D_U\\ &= D_V^{-1}\circ D_V\circ L\circ D_U^{-1}\circ D_U\\ &= L. \end{align*}

Property
If, in addition, we are given an inner product space $\{W\,,\,\,{\mathcal I}_W:W\times W\rightarrow{\mathbb R}\}$ and a linear map $M:V\rightarrow W$, the adjoint of the composition is the composition of the adjoints, in the opposite order: $$(M\circ L)^\dagger = L^\dagger\circ M^\dagger.$$
Proof The adjoint of the composition must satisfy: \begin{align*} D_U\circ (M\circ L)^\dagger &= (M\circ L)^*\circ D_W\\ &= L^*\circ M^*\circ D_W\\ &= L^*\circ D_V\circ M^\dagger\\ &= D_U\circ L^\dagger\circ M^\dagger \end{align*} $$\Longleftrightarrow (M\circ L)^\dagger = L^\dagger\circ M^\dagger.$$

Property
Given a vector space $V$ with two inner products, ${\mathcal I}_1,{\mathcal I}_2:V\times V\rightarrow{\mathbb R}$, a linear map $L:V\rightarrow V$ is orthogonal (from $\{V,{\mathcal I}_1\}$ to $\{V,{\mathcal I}_2\}$) if and only if its inverse is its adjoint.

Proof Setting $D_1\equiv D_{{\mathcal I}_1}$ and $D_2\equiv D_{{\mathcal I}_2}$, we have: \begin{align*} L\in O(\{V,{\mathcal I}_1\},\{V,{\mathcal I}_2\}) &\quad\Longleftrightarrow\quad {\mathcal I}_2(L(v),L(w)) = {\mathcal I}_1(v,w),\qquad\forall v,w\in V\\ &\quad\Longleftrightarrow\quad L^*\circ D_2\circ L = D_1\\ &\quad\Longleftrightarrow\quad D_1^{-1}\circ L^*\circ D_2 = L^{-1} \\ &\quad\Longleftrightarrow\quad L^\dagger = L^{-1}. \end{align*}

Choosing a Basis Assume that we are given bases ${\mathcal B}_U=\{u_1,\ldots,u_m\}\subset U$ and ${\mathcal B}_V=\{v_1,\ldots,v_n\}\subset V$.

With respect to the bases ${\mathcal B}_U$ and ${\mathcal B}_V$, the adjoint operator $L^\dagger:V\rightarrow U$ is expressed by the matrix ${\mathbf L}^\dagger\in{\mathbb R}^{m\times n}$ with: $${\mathbf L}^\dagger = {\mathbf D}_U^{-1}\cdot{\mathbf L}^\top\cdot{\mathbf D}_V.$$ In particular, if the bases ${\mathcal B}_U$ and ${\mathcal B}_V$ are orthonormal, the matrix representation of the adjoint is the matrix transpose: $${\mathbf L}^\dagger = {\mathbf L}^\top.$$
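
A minimal numerical sketch of the matrix formula and the defining identity of the adjoint, with randomly drawn symmetric positive-definite Gram matrices standing in for the two inner products:

import numpy as np

rng = np.random.default_rng(7)
m, n = 3, 4
GU, GV = rng.standard_normal((m, m)), rng.standard_normal((n, n))
DU = GU @ GU.T + m * np.eye(m)        # SPD Gram matrix for I_U
DV = GV @ GV.T + n * np.eye(n)        # SPD Gram matrix for I_V
L = rng.standard_normal((n, m))       # L : U = R^m -> V = R^n

L_dag = np.linalg.solve(DU, L.T @ DV)  # L^dagger = D_U^{-1} L^T D_V : V -> U

u, v = rng.standard_normal(m), rng.standard_normal(n)
# I_U(L^dagger(v), u) = I_V(v, L(u)).
assert np.isclose((L_dag @ v) @ DU @ u, v @ DV @ (L @ u))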



Self-Adjoint Operators
Assume that we are given an inner product space $\{V\,,\,\,{\mathcal I}:V\times V\rightarrow{\mathbb R}\}$.

Definition
A linear operator $L:V\rightarrow V$ is said to be self-adjoint if it is its own adjoint.

Property
An orthogonal transformation $L\in O(\{V,{\mathcal I}\})$ must be invertible.

Proof Otherwise there would be a non-zero vector $v\in V$ s.t. $L(v)=0$. But then $0\neq{\mathcal I}(v,v)={\mathcal I}(L(v),L(v))={\mathcal I}(0,0)=0$ would be a contradiction.

Property
If $L:V\rightarrow V$ is self-adjoint and invertible, then so is $L^{-1}$.

Proof If $L:V\rightarrow V$ is self-adjoint and invertible then for all $v,w\in V$ we have: $${\mathcal I}\big(L^{-1}(v),w\big)={\mathcal I}\big(L^{-1}(v),(L\circ L^{-1})(w)\big) = {\mathcal I}\big((L\circ L^{-1})(v),L^{-1}(w)\big) = {\mathcal I}\big(v,L^{-1}(w)\big),$$ where the middle equality moves one factor of $L$ across the inner product using the self-adjointness of $L$; so $L^{-1}$ is self-adjoint as well.

Property
Given a linear map $L:V\rightarrow W$ between inner product spaces, the map $L^\dagger\circ L:V\rightarrow V$ is self-adjoint.

Proof $$(L^\dagger\circ L)^\dagger = L^\dagger\circ(L^\dagger)^\dagger = L^\dagger\circ L.$$

Property
Given a vector space $V$ with two inner products ${\mathcal I}_1,{\mathcal I}_2:V\times V\rightarrow{\mathbb R}$, define $L=D_{{\mathcal I}_1}^{-1}\circ D_{{\mathcal I}_2}:V\rightarrow V$.
Then $L$ is self-adjoint with respect to ${\mathcal I}_1$ and $L^{-1}$ is self-adjoint with respect to ${\mathcal I}_2$.

Proof To see that $L:V\rightarrow V$ is self-adjoint with respect to ${\mathcal I}_1$, note that for all $u,v\in V$ we have: \begin{align*} {\mathcal I}_1(L(u),v) &= [(D_{{\mathcal I}_1}\circ L)(u)](v) \\ &= \big[(D_{{\mathcal I}_1}\circ D_{{\mathcal I}_1}^{-1}\circ D_{{\mathcal I}_2})(u)\big](v) \\ &= [D_{{\mathcal I}_2}(u)](v) \\ &= {\mathcal I}_2(u,v) \\ &= {\mathcal I}_2(v,u) \\ &= {\mathcal I}_1(L(v),u)\\ &= {\mathcal I}_1(u,L(v)) \end{align*} In a similar manner, we have: $${\mathcal I}_2(L^{-1}(u),v) = {\mathcal I}_1(u,v) = {\mathcal I}_1(v,u) = {\mathcal I}_2(u,L^{-1}(v)).$$

Choosing a Basis Assume that we are given a basis ${\mathcal B}_V=\{v_1,\ldots,v_n\}\subset V$.

With respect to this basis, a linear map $L:V\rightarrow V$ is self-adjoint if and only if its matrix representation $\mathbf{L}\in{\mathbb R}^{n\times n}$ satisfies: $${\mathbf L}={\mathbf I}^{-1}\cdot{\mathbf L}^\top\cdot{\mathbf I}.$$ If the basis is orthonormal, the linear map is self-adjoint if and only if its matrix representation is symmetric.
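
A minimal numerical sketch, combining the previous property (that $L=D_{{\mathcal I}_1}^{-1}\circ D_{{\mathcal I}_2}$ is self-adjoint with respect to ${\mathcal I}_1$) with the matrix criterion above:

import numpy as np

rng = np.random.default_rng(9)
n = 4
G1, G2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
D1, D2 = G1 @ G1.T + n * np.eye(n), G2 @ G2.T + n * np.eye(n)

Lmat = np.linalg.solve(D1, D2)        # L = D_{I_1}^{-1} o D_{I_2}
# Self-adjointness with respect to I_1, in matrix form: L = I^{-1} L^T I.
assert np.allclose(Lmat, np.linalg.solve(D1, Lmat.T @ D1))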



Differentials

Definition
Given a vector space $V$ and given a function $F:V\rightarrow{\mathbb R}$, the differential of $F$ at $v\in V$, denoted $dF\big|_v$, is the element of $V^*$ satisfying: $$\left[dF\big|_v\right](w)\equiv\lim_{\varepsilon\rightarrow0}\frac{F(v+\varepsilon\cdot w) - F(v)}{\varepsilon},\quad\forall w\in V.$$

Definition
Given an inner product space $\{V\,,\,\,{\mathcal I}:V\times V\rightarrow{\mathbb R}\}$ and given a function $F:V\rightarrow{\mathbb R}$, the gradient of $F$ at $v$, denoted $\nabla F\big|_v$, is the element of $V$ satisfying: $$\mathcal{I}(\nabla F\big|_v,w) = \left[dF\big|_v\right](w),\quad\forall w\in V.$$ Or, simply: $$\nabla F\big|_v \equiv D_{\mathcal I}^{-1}\left(dF\big|_v\right).$$

Note
Given a linear operator $L:V\rightarrow V^*$, the differential of the function $F(v) = [L(v)](v)$ is: $$dF\big|_v = \big(L + L^*\big)(v).$$
Proof Writing out the derivative along direction $w\in V$ gives: \begin{align*} \left[dF\big|_v\right](w) &=\lim_{\varepsilon\rightarrow0}\frac{F(v+\varepsilon\cdot w) - F(v)}{\varepsilon}\\ &=\lim_{\varepsilon\rightarrow0}\frac{[L(v+\varepsilon\cdot w)](v+\varepsilon\cdot w) - [L(v)](v)}{\varepsilon}\\ &=\lim_{\varepsilon\rightarrow0}\frac{[L(v)](v) + \varepsilon\cdot [L(v)](w) + \varepsilon\cdot [L(w)](v) + \varepsilon^2\cdot [L(w)](w) - [L(v)](v)}{\varepsilon}\\ &=[L(v)](w) + [L(w)](v)\\ &=\left[\big(L + L^*\big)(v)\right](w), \end{align*} where the last equality uses the identification $[L(w)](v)=[L^*(v)](w)$ (with $v\in V\cong V^{**}$).
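
Since $F$ is quadratic, a centered finite difference recovers the differential essentially exactly; a minimal numerical sketch:

import numpy as np

rng = np.random.default_rng(10)
n = 4
M = rng.standard_normal((n, n))        # a (generally non-symmetric) matrix for L : V -> V*

F = lambda v: v @ (M @ v)              # F(v) = [L(v)](v)
v, w = rng.standard_normal(n), rng.standard_normal(n)

# dF|_v = (L + L^*)(v); in coordinates, (M + M^T) v.
analytic = ((M + M.T) @ v) @ w
eps = 1e-6
numeric = (F(v + eps * w) - F(v - eps * w)) / (2 * eps)
assert np.isclose(analytic, numeric, atol=1e-5)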

Note
In particular, given an inner product space $\{V\,,\,\,{\mathcal I}:V\times V\rightarrow{\mathbb R}\}$ and a linear operator $L:V\rightarrow V$, we can set $\tilde{L}=D_{\mathcal I}\circ L:V\rightarrow V^*$.
Then, the differential of the function $F(v) = {\mathcal I}(L(v),v) = [\tilde{L}(v)](v)$ is: $$dF\big|_v = \big(D_{\mathcal I}\circ L + L^*\circ D_{\mathcal I}^*\big)(v) = \big(D_{\mathcal I}\circ L + L^*\circ D_{\mathcal I}\big)(v),$$ where the second equality follows from the symmetry of the inner product.

Note
In particular, if $L$ is self-adjoint, we get: $$dF\big|_v = 2\big(D_{\mathcal I}\circ L\big)(v) = 2\big(L^*\circ D_{\mathcal I}\big)(v).$$


Frobenius Inner Product

Definition
Given real inner product spaces $\{V\,,\,\,{\mathcal I}_V:V\times V\rightarrow{\mathbb R}\}$ and $\{W\,,\,\,{\mathcal I}_W:W\times W\rightarrow{\mathbb R}\}$ and given two linear maps $A,B:V\rightarrow W$, the Frobenius inner product between $A$ and $B$ is defined as: $$\langle A , B \rangle_F = \hbox{Tr}(D_{{\mathcal I}_V}^{-1}\circ B^*\circ D_{{\mathcal I}_W} \circ A).$$ The Frobenius norm of an operator $A:V\rightarrow W$ is then defined as: $$\|A\|_F = \sqrt{\hbox{Tr}(D_{{\mathcal I}_V}^{-1} \circ A^*\circ D_{{\mathcal I}_W} \circ A)}.$$

Note
The argument of the trace is a map from the space $V$ into itself, $D_{{\mathcal I}_V}^{-1}\circ B^*\circ D_{{\mathcal I}_W} \circ A:V\rightarrow V$.
Expressed in matrix form, the value of the Frobenius inner product does not depend on the choice of bases.

Proof Suppose that we have chosen a basis for $V$ and $W$ and in these bases the inner products are represented by the matrices $\mathbf{I}_V$, $\mathbf{I}_W$, and the linear operators are represented by the matrices $\mathbf{A}$ and $\mathbf{B}$.
Let $\mathbf{L}_V$ and $\mathbf{L}_W$ be change of basis matrices.
Then with respect to the new bases the associated matrices become: \begin{align*} \mathbf{A}&\rightarrow \mathbf{L}_W^{-1}\cdot\mathbf{A}\cdot\mathbf{L}_V\\ \mathbf{B}&\rightarrow \mathbf{L}_W^{-1}\cdot\mathbf{B}\cdot\mathbf{L}_V\\ \mathbf{I}_V&\rightarrow \mathbf{L}_V^\top\cdot\mathbf{I}_V\cdot\mathbf{L}_V\\ \mathbf{I}_W&\rightarrow \mathbf{L}_W^\top\cdot\mathbf{I}_W\cdot\mathbf{L}_W. \end{align*} With respect to the new bases, the trace becomes: \begin{align*} \hbox{Tr}(\mathbf{I}_V^{-1}\cdot\mathbf{B}^\top\cdot\mathbf{I}_W\cdot\mathbf{A}) \rightarrow&\hbox{Tr}\left((\mathbf{L}_V^\top\cdot\mathbf{I}_V\cdot\mathbf{L}_V)^{-1}\cdot(\mathbf{L}_W^{-1}\cdot\mathbf{B}\cdot\mathbf{L}_V)^\top\cdot(\mathbf{L}_W^\top\cdot\mathbf{I}_W\cdot\mathbf{L}_W)\cdot(\mathbf{L}_W^{-1}\cdot\mathbf{A}\cdot\mathbf{L}_V)\right)\\ =&\hbox{Tr}\left(\mathbf{L}_V^{-1}\cdot\mathbf{I}_V^{-1}\cdot\mathbf{L}_V^{-\top}\cdot\mathbf{L}_V^\top\cdot\mathbf{B}^\top\cdot\mathbf{L}_W^{-\top}\cdot\mathbf{L}_W^\top\cdot\mathbf{I}_W\cdot\mathbf{L}_W\cdot\mathbf{L}_W^{-1}\cdot\mathbf{A}\cdot\mathbf{L}_V\right)\\ =&\hbox{Tr}\left(\mathbf{L}_V^{-1}\cdot\mathbf{I}_V^{-1}\cdot\mathbf{B}^\top\cdot\mathbf{I}_W\cdot\mathbf{A}\cdot\mathbf{L}_V\right)\\ =&\hbox{Tr}\left(\mathbf{L}_V\cdot\mathbf{L}_V^{-1}\cdot\mathbf{I}_V^{-1}\cdot\mathbf{B}^\top\cdot\mathbf{I}_W\cdot\mathbf{A}\right)\\ =&\hbox{Tr}\left(\mathbf{I}_V^{-1}\cdot\mathbf{B}^\top\cdot\mathbf{I}_W\cdot\mathbf{A}\right), \end{align*} where the second-to-last equality uses the invariance of the trace under cyclic permutation.
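
A minimal numerical sketch of this basis-independence (with randomly drawn change-of-basis matrices, which are invertible with probability one, and random symmetric positive-definite Gram matrices):

import numpy as np

rng = np.random.default_rng(11)
m, n = 3, 4
A, B = rng.standard_normal((n, m)), rng.standard_normal((n, m))
GV, GW = rng.standard_normal((m, m)), rng.standard_normal((n, n))
IV = GV @ GV.T + m * np.eye(m)        # SPD Gram matrix for I_V
IW = GW @ GW.T + n * np.eye(n)        # SPD Gram matrix for I_W

def frob(A, B, IV, IW):
    # <A, B>_F = Tr(I_V^{-1} B^T I_W A) in matrix form.
    return np.trace(np.linalg.solve(IV, B.T @ IW @ A))

LV, LW = rng.standard_normal((m, m)), rng.standard_normal((n, n))

before = frob(A, B, IV, IW)
after = frob(np.linalg.solve(LW, A @ LV), np.linalg.solve(LW, B @ LV),
             LV.T @ IV @ LV, LW.T @ IW @ LW)
assert np.isclose(before, after)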

[NOTE TO SELF 1]
The above definition requires defining the trace of a linear operator (i.e. rather than that of a matrix), which is noticeably absent.

[NOTE TO SELF 2]
One can also think of the space of linear maps from $V$ to $W$ as the tensor-product space $W\otimes V^*$. In this case, inner products on $V$ and $W$ induce an inner product on $W\otimes V^*$.
The idea would be to use the fact that to define the inner product on $W\otimes V^*$ it suffices to define its value on all pairs $v\otimes\alpha,w\otimes\beta\in W\otimes V^*$ (with $v,w\in W$ and $\alpha,\beta\in V^*$), which would naturally be defined as: $${\mathcal I}_{W\otimes V^*}\big(v\otimes\alpha,w\otimes\beta\big) \equiv {\mathcal I}_{V^*}(\alpha,\beta)\cdot{\mathcal I}_W(w,v).$$ An ugly way to go about this would be by choosing a basis.
If ${\mathcal B}_V=\{v_1,\ldots,v_m\}$ and ${\mathcal B}_W=\{w_1,\ldots,w_n\}$ are bases for $V$ and $W$ (and ${\mathcal B}_{V^*}$ and ${\mathcal B}_{W^*}$ are the associated bases for the dual spaces) then $\{w_k\otimes v^i\}$ (with $1\leq i\leq m$ and $1\leq k\leq n$) is a basis for $W\otimes V^*$.
In this basis, a vector in $W\otimes V^*$ can be represented by a matrix in ${\mathbb R}^{n\times m}$, wherein the matrix $\mathbf{E}_{k,i}$ (with $1$ in the $k$-th row and $i$-th column, and $0$ everywhere else) is used to represent the vector $w_k\otimes v^i$.
Thus, setting $A=w_k\otimes v^i$ and $B=w_l\otimes v^j$, their corresponding representations in the basis are $\mathbf{A} = \mathbf{E}_{k,i}$ and $\mathbf{B} = \mathbf{E}_{l,j}$.
Letting $\mathbf{e}_i\in{\mathbb R}^m$ denote the vector with $1$ in the $i$-th entry and $0$ everywhere else, and letting $\mathbf{f}_k\in{\mathbb R}^n$ denote the vector with $1$ in the $k$-th entry and $0$ everywhere else, we can force this through to get: \begin{align*} {\mathcal I}_{W\otimes V^*}\big(w_k\otimes v^i,w_l\otimes v^j\big) &= {\mathcal I}_{V^*}(v^i,v^j)\cdot {\mathcal I}_W(w_l,w_k)\\ &= ({\mathbf D}_V^{-1})_{i,j} \cdot ({\mathbf D}_W)_{l,k}\\ &= \mathbf{e}_i^\top\cdot{\mathbf D}_V^{-1}\cdot\mathbf{e}_j \cdot \mathbf{f}_l^\top\cdot{\mathbf D}_W\cdot\mathbf{f}_k\\ &= \hbox{Tr}\big(\mathbf{e}_i^\top\cdot{\mathbf D}_V^{-1}\cdot\mathbf{e}_j \cdot \mathbf{f}_l^\top\cdot{\mathbf D}_W\cdot\mathbf{f}_k\big)\\ &= \hbox{Tr}\big({\mathbf D}_V^{-1}\cdot\mathbf{e}_j \cdot \mathbf{f}_l^\top\cdot{\mathbf D}_W\cdot\mathbf{f}_k\cdot\mathbf{e}_i^\top\big)\\ &= \hbox{Tr}\big({\mathbf D}_V^{-1}\cdot\mathbf{E}_{l,j}^\top\cdot{\mathbf D}_W\cdot\mathbf{E}_{k,i}\big)\\ &= \hbox{Tr}\big({\mathbf D}_V^{-1}\cdot\mathbf{B}^\top\cdot{\mathbf D}_W\cdot\mathbf{A}\big)\\ \end{align*} It would be nice to have a cleaner derivation that does not require choosing a basis.