In particular, this implies that if $L:V\rightarrow W$ and $M:W\rightarrow V$ are linear maps such that one of them is invertible, we have: $$\hbox{det}(M\circ L) = \hbox{det}(L\circ M).$$
Throughout, we will use the canonical isomorphism $\Phi:V\rightarrow V^{**}$ between a vector space and its double dual, and will assume that $\Phi$ is applied implicitly, as needed. Thus, the above becomes simply: $$L^{**} = L.$$
Proof
\begin{align*}
1
&= \hbox{det}(Id.) \\
&= \hbox{det}(D_2^{-1}\circ D_2) \\
&= \hbox{det}(D_2^{-1}\circ L^*\circ D_1 \circ L) \\
&= \hbox{det}(D_2^{-1}\circ L^*\circ D_1) \cdot \hbox{det}(L) \\
&= \hbox{det}(D_1\circ D_2^{-1}\circ L^*) \cdot \hbox{det}(L) \\
&= \hbox{det}(D_1\circ D_2^{-1})\cdot\hbox{det}(L^*) \cdot \hbox{det}(L) \\
&= \hbox{det}(D_1\circ D_2^{-1})\cdot\hbox{det}^2(L) \\
&= \hbox{det}(D_2^{-1}\circ D_1)\cdot\hbox{det}^2(L)\\
&= \frac{1}{\hbox{det}(D_1^{-1}\circ D_2)}\cdot\hbox{det}^2(L)
\end{align*}
$$\Longleftrightarrow\quad\hbox{det}(L) = \pm\sqrt{\hbox{det}(D_1^{-1}\circ D_2)}.$$
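As a quick numeric sanity check of this identity, one can pick an arbitrary Gram matrix $\mathbf{D}_1$ and an invertible $\mathbf{L}$, define $\mathbf{D}_2$ via the relation $L^*\circ D_1\circ L = D_2$ used in the first step of the proof, and verify the conclusion. The specific matrices below are illustrative choices (not from the text), and the $2\times 2$ helpers are hand-rolled:

```python
# Sketch: verify det(L) = ±sqrt(det(D1^{-1} ∘ D2)) for illustrative 2x2 matrices.
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv2(A):
    d = det2(A)
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

D1 = [[2.0, 0.0], [0.0, 1.0]]             # Gram matrix of I_1 (assumed SPD)
L = [[1.0, 1.0], [0.0, 2.0]]              # an invertible map
D2 = matmul(transpose(L), matmul(D1, L))  # define D2 via L* ∘ D1 ∘ L = D2

lhs = abs(det2(L))
rhs = math.sqrt(det2(matmul(inv2(D1), D2)))
assert abs(lhs - rhs) < 1e-12             # det(L) = ±sqrt(det(D1^{-1} ∘ D2))
```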
In the basis ${\mathcal B}$, the inner product ${\mathcal I}$ is expressed as the Gram matrix ${\mathbf I}\in{\mathbb R}^{n\times n}$ with:
$${\mathbf I}_{ij}={\mathcal I}(v_i,v_j).$$
With respect to the primal basis ${\mathcal B}$ and associated dual basis ${\mathcal B}^*$, the map $D_{\mathcal I}:V\rightarrow V^*$ is expressed by the matrix $\mathbf{D}_{\mathcal I}\in{\mathbb R}^{n\times n}$ with:
$$(\mathbf{D}_{\mathcal I})_{ij} = \left[D_{\mathcal I}(v_j)\right](v_i) = {\mathcal I}(v_i,v_j).$$
Thus, with respect to the basis ${\mathcal B}$ (and the associated basis ${\mathcal B}^*$) the matrix expressions for the inner product ${\mathcal I}:V\times V\rightarrow{\mathbb R}$ and the linear map $D_{\mathcal I}:V\rightarrow V^*$ are identical:
$${\mathbf D}_{\mathcal I} = {\mathbf I}.$$
Similarly, with respect to the bases ${\mathcal B}$ and ${\mathcal B}^*$, the linear map $D_{{\mathcal I}^*}:V^*\rightarrow V$ associated with the dual inner product ${\mathcal I}^*$ is expressed as:
$$\mathbf{D}_{{\mathcal I}^*} = {\mathbf I}^{-1}.$$
In particular, if ${\mathcal B}$ is an orthonormal basis, then the representation of all of these operators in this basis is the identity:
$$\mathbf{D}_{\mathcal I} = \mathbf{D}_{{\mathcal I}^*} = {\mathbf I}=\mathbf{Id}.$$
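A concrete (hypothetical) example may help: take $V={\mathbb R}^2$ with the standard dot product, and compute the Gram matrix in the non-orthonormal basis $\{(1,0),(1,1)\}$ versus the orthonormal basis $\{e_1,e_2\}$:

```python
# Gram matrix I_ij = I(v_i, v_j) for an illustrative non-orthonormal basis of R^2.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

v = [[1.0, 0.0], [1.0, 1.0]]  # v1 = (1,0), v2 = (1,1)
I = [[dot(v[i], v[j]) for j in range(2)] for i in range(2)]
assert I == [[1.0, 1.0], [1.0, 2.0]]

# For the orthonormal basis {e1, e2} the Gram matrix is the identity.
e = [[1.0, 0.0], [0.0, 1.0]]
Id = [[dot(e[i], e[j]) for j in range(2)] for i in range(2)]
assert Id == [[1.0, 0.0], [0.0, 1.0]]
```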
Choosing a Basis
Given a basis ${\mathcal B}=\{v_1,\ldots,v_n\}\subset V$, let ${\mathcal B}^*=\{v^1,\ldots,v^n\}$ be the associated dual basis.
Proof
\begin{align*}
(L^\dagger)^\dagger
&= ( D_U^{-1}\circ L^*\circ D_V )^\dagger\\
&= D_V^{-1}\circ( D_U^{-1}\circ L^*\circ D_V )^*\circ D_U\\
&= D_V^{-1}\circ D_V^*\circ (L^*)^*\circ D_U^{-*}\circ D_U\\
&= D_V^{-1}\circ D_V\circ L\circ D_U^{-1}\circ D_U\\
&= L.
\end{align*}
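Since the proof reduces to matrix algebra, the identity $(L^\dagger)^\dagger=L$ can also be spot-checked numerically, using the matrix expression $\mathbf{L}^\dagger=\mathbf{D}_U^{-1}\cdot\mathbf{L}^\top\cdot\mathbf{D}_V$ derived below. The Gram matrices and $\mathbf{L}$ here are illustrative choices:

```python
# Sketch: check (L†)† = L for illustrative symmetric Gram matrices DU, DV.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv2(A):
    d = det2(A)
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

DU = [[2.0, 0.0], [0.0, 3.0]]  # Gram matrix on U (assumed SPD)
DV = [[2.0, 1.0], [1.0, 2.0]]  # Gram matrix on V (assumed SPD)
L  = [[1.0, 2.0], [3.0, 4.0]]  # an arbitrary map U -> V

def adjoint(M, Ddom, Dcod):
    # M† = Ddom^{-1} ∘ M* ∘ Dcod, with M* represented by the transpose
    return matmul(inv2(Ddom), matmul(transpose(M), Dcod))

Ldag  = adjoint(L, DU, DV)     # L†   : V -> U
Lddag = adjoint(Ldag, DV, DU)  # (L†)†: U -> V
assert all(abs(Lddag[i][j] - L[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```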
Proof
Setting $D_1\equiv D_{{\mathcal I}_1}$ and $D_2\equiv D_{{\mathcal I}_2}$, we have:
\begin{align*}
L\in O(\{V,{\mathcal I}_1\},\{V,{\mathcal I}_2\})
&\quad\Longleftrightarrow\quad {\mathcal I}_2(L(v),L(w)) = {\mathcal I}_1(v,w),\qquad\forall v,w\in V\\
&\quad\Longleftrightarrow\quad L^*\circ D_2\circ L = D_1\\
&\quad\Longleftrightarrow\quad D_1^{-1}\circ L^*\circ D_2 = L^{-1} \\
&\quad\Longleftrightarrow\quad L^\dagger = L^{-1}.
\end{align*}
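The equivalence can be exercised numerically: starting from an illustrative Gram matrix $\mathbf{D}_1$ and invertible $\mathbf{L}$, one can define $\mathbf{D}_2$ so that $L^*\circ D_2\circ L = D_1$ holds (making $L$ orthogonal by the second line above), and then confirm that $\mathbf{L}^\dagger=\mathbf{D}_1^{-1}\cdot\mathbf{L}^\top\cdot\mathbf{D}_2$ equals $\mathbf{L}^{-1}$. All matrices below are assumptions for the sketch:

```python
# Sketch: if L* ∘ D2 ∘ L = D1 then L† = L^{-1}, checked on 2x2 matrices.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv2(A):
    d = det2(A)
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

D1 = [[2.0, 0.0], [0.0, 1.0]]  # Gram matrix of I_1 (assumed SPD)
L  = [[1.0, 1.0], [0.0, 2.0]]  # an invertible map
Linv = inv2(L)
# Define D2 so that L* ∘ D2 ∘ L = D1, i.e. L is orthogonal:
D2 = matmul(transpose(Linv), matmul(D1, Linv))

Ldag = matmul(inv2(D1), matmul(transpose(L), D2))  # L† = D1^{-1} ∘ L* ∘ D2
assert all(abs(Ldag[i][j] - Linv[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```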
With respect to the bases ${\mathcal B}_U$ and ${\mathcal B}_V$, the adjoint operator $L^\dagger:V\rightarrow U$ is expressed by the matrix ${\mathbf L}^\dagger\in{\mathbb R}^{m\times n}$ with:
$${\mathbf L}^\dagger = {\mathbf D}_U^{-1}\cdot{\mathbf L}^\top\cdot{\mathbf D}_V.$$
In particular, if the bases ${\mathcal B}_U$ and ${\mathcal B}_V$ are orthonormal, the matrix representation of the adjoint is the matrix transpose:
$${\mathbf L}^\dagger = {\mathbf L}^\top.$$
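One can also confirm that the matrix ${\mathbf L}^\dagger = {\mathbf D}_U^{-1}\cdot{\mathbf L}^\top\cdot{\mathbf D}_V$ realizes the defining property of the adjoint, ${\mathcal I}_V(L(u),v)={\mathcal I}_U(u,L^\dagger(v))$, for sample vectors. The Gram matrices, map, and test vectors below are illustrative assumptions:

```python
# Sketch: I_V(L u, v) = I_U(u, L† v) with L† = DU^{-1} · L^T · DV.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv2(A):
    d = det2(A)
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

DU = [[2.0, 0.0], [0.0, 3.0]]  # Gram matrix of I_U (assumed SPD)
DV = [[1.0, 1.0], [1.0, 2.0]]  # Gram matrix of I_V (assumed SPD)
L  = [[1.0, 2.0], [3.0, 4.0]]  # an arbitrary map U -> V

Ldag = matmul(inv2(DU), matmul(transpose(L), DV))

def ip(G, a, b):  # inner product a^T · G · b
    return sum(a[i] * G[i][j] * b[j] for i in range(2) for j in range(2))

def apply(M, a):
    return [M[0][0]*a[0] + M[0][1]*a[1], M[1][0]*a[0] + M[1][1]*a[1]]

u, v = [1.0, -2.0], [3.0, 0.5]  # sample vectors
assert abs(ip(DV, apply(L, u), v) - ip(DU, u, apply(Ldag, v))) < 1e-12
```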
Choosing a Basis
Assume that we are given bases ${\mathcal B}_U=\{u_1,\ldots,u_m\}\subset U$ and ${\mathcal B}_V=\{v_1,\ldots,v_n\}\subset V$.
Proof
Otherwise there would be a non-zero vector $v\in V$ such that $L(v)=0$. But then we would have $0\neq{\mathcal I}(v,v)={\mathcal I}(L(v),L(v))={\mathcal I}(0,0)=0$, a contradiction.
Proof
If $L:V\rightarrow V$ is orthogonal then for all $v,w\in V$ we have:
$${\mathcal I}\big(L^{-1}(v),L^{-1}(w)\big)={\mathcal I}\big((L\circ L^{-1})(v),(L\circ L^{-1})(w)\big) = {\mathcal I}(v,w)$$
so that $L^{-1}$ is orthogonal as well.
Proof
$$(L^\dagger\circ L)^\dagger = L^\dagger\circ(L^\dagger)^\dagger = L^\dagger\circ L.$$
Proof
To see that $L:V\rightarrow V$ is self-adjoint with respect to ${\mathcal I}_1$, note that for all $u,v\in V$ we have:
\begin{align*}
{\mathcal I}_1(L(u),v)
&= [(D_{{\mathcal I}_1}\circ L)(u)](v) \\
&= \big[(D_{{\mathcal I}_1}\circ D_{{\mathcal I}_1}^{-1}\circ D_{{\mathcal I}_2})(u)\big](v) \\
&= [D_{{\mathcal I}_2}(u)](v) \\
&= {\mathcal I}_2(u,v) \\
&= {\mathcal I}_2(v,u) \\
&= {\mathcal I}_1(L(v),u)\\
&= {\mathcal I}_1(u,L(v)).
\end{align*}
In a similar manner, we have:
$${\mathcal I}_2(L^{-1}(u),v)
= {\mathcal I}_1(u,v)
= {\mathcal I}_1(v,u) = {\mathcal I}_2(u,L^{-1}(v)).$$
With respect to this basis, a linear map $L:V\rightarrow V$ is self-adjoint if its matrix representation $\mathbf{L}\in{\mathbb R}^{n\times n}$ satisfies:
$${\mathbf L}={\mathbf I}^{-1}\cdot{\mathbf L}^\top\cdot{\mathbf I}.$$
If the basis is orthonormal, the linear map is self-adjoint if and only if its matrix representation is symmetric.
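The condition is easy to exercise numerically: picking an illustrative SPD Gram matrix $\mathbf{I}$ and any symmetric $\mathbf{S}$, the map $\mathbf{L}=\mathbf{I}^{-1}\cdot\mathbf{S}$ is self-adjoint with respect to the corresponding inner product, and one can verify both the matrix condition and ${\mathcal I}(L(u),v)={\mathcal I}(u,L(v))$ on sample vectors (all specific matrices and vectors below are assumptions for the sketch):

```python
# Sketch: L = I^{-1} · S (S symmetric) satisfies L = I^{-1} · L^T · I
# and I(L u, v) = I(u, L v).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv2(A):
    d = det2(A)
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

I = [[2.0, 1.0], [1.0, 2.0]]  # Gram matrix (assumed SPD)
S = [[1.0, 0.0], [0.0, 3.0]]  # any symmetric matrix
L = matmul(inv2(I), S)        # I · L = S is symmetric, so L is self-adjoint

rhs = matmul(inv2(I), matmul(transpose(L), I))  # I^{-1} · L^T · I
assert all(abs(L[i][j] - rhs[i][j]) < 1e-12
           for i in range(2) for j in range(2))

def ip(G, a, b):
    return sum(a[i] * G[i][j] * b[j] for i in range(2) for j in range(2))

def apply(M, a):
    return [M[0][0]*a[0] + M[0][1]*a[1], M[1][0]*a[0] + M[1][1]*a[1]]

u, v = [1.0, -2.0], [3.0, 0.5]  # sample vectors
assert abs(ip(I, apply(L, u), v) - ip(I, u, apply(L, v))) < 1e-12
```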
Choosing a Basis
Assume that we are given a basis ${\mathcal B}_V=\{v_1,\ldots,v_n\}\subset V$.
Proof
Suppose that we have chosen bases for $V$ and $W$, and that in these bases the inner products are represented by the matrices $\mathbf{I}_V$ and $\mathbf{I}_W$,
and the linear operators are represented by the matrices $\mathbf{A}$ and $\mathbf{B}$.
Let $\mathbf{L}_V$ and $\mathbf{L}_W$ be the change-of-basis matrices for $V$ and $W$, respectively.
Then with respect to the new bases the associated matrices become:
\begin{align*}
\mathbf{A}&\rightarrow \mathbf{L}_W^{-1}\cdot\mathbf{A}\cdot\mathbf{L}_V\\
\mathbf{B}&\rightarrow \mathbf{L}_W^{-1}\cdot\mathbf{B}\cdot\mathbf{L}_V\\
\mathbf{I}_V&\rightarrow \mathbf{L}_V^\top\cdot\mathbf{I}_V\cdot\mathbf{L}_V\\
\mathbf{I}_W&\rightarrow \mathbf{L}_W^\top\cdot\mathbf{I}_W\cdot\mathbf{L}_W.
\end{align*}
With respect to the new bases, the trace becomes:
\begin{align*}
\hbox{Tr}(\mathbf{I}_V^{-1}\cdot\mathbf{B}^\top\cdot\mathbf{I}_W\cdot\mathbf{A})
\rightarrow&\hbox{Tr}\left((\mathbf{L}_V^\top\cdot\mathbf{I}_V\cdot\mathbf{L}_V)^{-1}\cdot(\mathbf{L}_W^{-1}\cdot\mathbf{B}\cdot\mathbf{L}_V)^\top\cdot(\mathbf{L}_W^\top\cdot\mathbf{I}_W\cdot\mathbf{L}_W)\cdot(\mathbf{L}_W^{-1}\cdot\mathbf{A}\cdot\mathbf{L}_V)\right)\\
=&\hbox{Tr}\left(\mathbf{L}_V^{-1}\cdot\mathbf{I}_V^{-1}\cdot\mathbf{L}_V^{-\top}\cdot\mathbf{L}_V^\top\cdot\mathbf{B}^\top\cdot\mathbf{L}_W^{-\top}\cdot\mathbf{L}_W^\top\cdot\mathbf{I}_W\cdot\mathbf{L}_W\cdot\mathbf{L}_W^{-1}\cdot\mathbf{A}\cdot\mathbf{L}_V\right)\\
=&\hbox{Tr}\left(\mathbf{L}_V^{-1}\cdot\mathbf{I}_V^{-1}\cdot\mathbf{B}^\top\cdot\mathbf{I}_W\cdot\mathbf{A}\cdot\mathbf{L}_V\right)\\
=&\hbox{Tr}\left(\mathbf{L}_V\cdot\mathbf{L}_V^{-1}\cdot\mathbf{I}_V^{-1}\cdot\mathbf{B}^\top\cdot\mathbf{I}_W\cdot\mathbf{A}\right)\\
=&\hbox{Tr}\left(\mathbf{I}_V^{-1}\cdot\mathbf{B}^\top\cdot\mathbf{I}_W\cdot\mathbf{A}\right).
\end{align*}
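This invariance is straightforward to confirm numerically: fix illustrative Gram matrices, operators, and invertible change-of-basis matrices (all assumptions, not from the text), transform everything as above, and compare the traces:

```python
# Sketch: Tr(I_V^{-1} · B^T · I_W · A) is unchanged by a change of basis.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv2(A):
    d = det2(A)
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def trace(M):
    return M[0][0] + M[1][1]

IV = [[2.0, 1.0], [1.0, 2.0]]  # Gram matrices (illustrative SPD choices)
IW = [[3.0, 0.0], [0.0, 1.0]]
A  = [[1.0, 2.0], [3.0, 4.0]]  # arbitrary operator matrices
B  = [[0.0, 1.0], [1.0, 1.0]]
LV = [[1.0, 1.0], [0.0, 1.0]]  # invertible change-of-basis matrices
LW = [[2.0, 0.0], [1.0, 1.0]]

def pairing(IV_, IW_, A_, B_):
    return trace(matmul(inv2(IV_), matmul(transpose(B_), matmul(IW_, A_))))

before = pairing(IV, IW, A, B)
after = pairing(matmul(transpose(LV), matmul(IV, LV)),   # I_V -> LV^T I_V LV
                matmul(transpose(LW), matmul(IW, LW)),   # I_W -> LW^T I_W LW
                matmul(inv2(LW), matmul(A, LV)),         # A -> LW^{-1} A LV
                matmul(inv2(LW), matmul(B, LV)))         # B -> LW^{-1} B LV
assert abs(before - after) < 1e-9
```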
The idea would be to use the fact that to define the inner product on $W\otimes V^*$ it suffices to define its value on all pairs of simple tensors $v\otimes\alpha,w\otimes\beta\in W\otimes V^*$ (with $v,w\in W$ and $\alpha,\beta\in V^*$), which would naturally be defined as:
$${\mathcal I}_{W\otimes V^*}\big(v\otimes\alpha,w\otimes\beta\big) \equiv {\mathcal I}_{V^*}(\alpha,\beta)\cdot{\mathcal I}_W(w,v).$$
An ugly way to go about this would be by choosing a basis.
If ${\mathcal B}_V=\{v_1,\ldots,v_m\}$ and ${\mathcal B}_W=\{w_1,\ldots,w_n\}$ are bases for $V$ and $W$ (and ${\mathcal B}_{V^*}$ and ${\mathcal B}_{W^*}$ are the associated bases for the dual spaces) then $\{w_k\otimes v^i\}$ (with $1\leq i\leq m$ and $1\leq k\leq n$) is a basis for $W\otimes V^*$.
In this basis, a vector in $W\otimes V^*$ can be represented by a matrix in ${\mathbb R}^{n\times m}$, wherein the matrix $\mathbf{E}_{k,i}$ (with $1$ in the $k$-th row and $i$-th column, and $0$ everywhere else) is used to represent the vector $w_k\otimes v^i$.
Thus, setting $A=w_k\otimes v^i$ and $B=w_l\otimes v^j$, their corresponding representations in the basis are $\mathbf{A} = \mathbf{E}_{k,i}$ and $\mathbf{B} = \mathbf{E}_{l,j}$.
Letting $\mathbf{e}_i\in{\mathbb R}^m$ denote the vector with $1$ in the $i$-th entry and $0$ everywhere else, and letting $\mathbf{f}_k\in{\mathbb R}^n$ denote the vector with $1$ in the $k$-th entry and $0$ everywhere else, we can force this through to get:
\begin{align*}
{\mathcal I}_{W\otimes V^*}\big(w_k\otimes v^i,w_l\otimes v^j\big)
&= {\mathcal I}_{V^*}(v^i,v^j)\cdot {\mathcal I}_W(w_l,w_k)\\
&= ({\mathbf D}_V^{-1})_{i,j} \cdot ({\mathbf D}_W)_{l,k}\\
&= \mathbf{e}_i^\top\cdot{\mathbf D}_V^{-1}\cdot\mathbf{e}_j \cdot \mathbf{f}_l^\top\cdot{\mathbf D}_W\cdot\mathbf{f}_k\\
&= \hbox{Tr}\big(\mathbf{e}_i^\top\cdot{\mathbf D}_V^{-1}\cdot\mathbf{e}_j \cdot \mathbf{f}_l^\top\cdot{\mathbf D}_W\cdot\mathbf{f}_k\big)\\
&= \hbox{Tr}\big({\mathbf D}_V^{-1}\cdot\mathbf{e}_j \cdot \mathbf{f}_l^\top\cdot{\mathbf D}_W\cdot\mathbf{f}_k\cdot\mathbf{e}_i^\top\big)\\
&= \hbox{Tr}\big({\mathbf D}_V^{-1}\cdot\mathbf{E}_{l,j}^\top\cdot{\mathbf D}_W\cdot\mathbf{E}_{k,i}\big)\\
&= \hbox{Tr}\big({\mathbf D}_V^{-1}\cdot\mathbf{B}^\top\cdot{\mathbf D}_W\cdot\mathbf{A}\big).
\end{align*}
It would be nice to have a cleaner derivation that does not require choosing a basis.
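In the meantime, the closing trace formula can at least be verified on the basis elements: with illustrative SPD Gram matrices ${\mathbf D}_V$ and ${\mathbf D}_W$ (assumptions, not from the text), $\hbox{Tr}\big({\mathbf D}_V^{-1}\cdot\mathbf{B}^\top\cdot{\mathbf D}_W\cdot\mathbf{A}\big)$ with $\mathbf{A}=\mathbf{E}_{k,i}$ and $\mathbf{B}=\mathbf{E}_{l,j}$ should reproduce $({\mathbf D}_V^{-1})_{i,j}\cdot({\mathbf D}_W)_{l,k}$ for every choice of indices:

```python
# Sketch: check Tr(D_V^{-1} · E_{l,j}^T · D_W · E_{k,i}) = (D_V^{-1})_ij (D_W)_lk.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv2(A):
    d = det2(A)
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def trace(M):
    return M[0][0] + M[1][1]

def E(k, i):  # 1 in row k, column i, 0 elsewhere
    M = [[0.0, 0.0], [0.0, 0.0]]
    M[k][i] = 1.0
    return M

DV = [[2.0, 1.0], [1.0, 2.0]]  # Gram matrix on V (illustrative SPD choice)
DW = [[3.0, 1.0], [1.0, 1.0]]  # Gram matrix on W (illustrative SPD choice)
DVinv = inv2(DV)

cases = 0
for i in range(2):
    for j in range(2):
        for k in range(2):
            for l in range(2):
                lhs = trace(matmul(DVinv,
                                   matmul(transpose(E(l, j)),
                                          matmul(DW, E(k, i)))))
                assert abs(lhs - DVinv[i][j] * DW[l][k]) < 1e-12
                cases += 1
assert cases == 16
```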