Multilinear algebra concerns multilinear functionals and tensors.

**Tensor product** $\omega \otimes \eta$ of linear functionals (aka covectors)
$\omega \in \mathcal{L}(V)$ and $\eta \in \mathcal{L}(W)$
is the functional defined by $\omega \otimes \eta (v, w) = \omega(v) \eta(w)$,
which is bilinear: $\omega \otimes \eta \in \mathcal{L}(V \times W)$.
**Tensor product** $F \otimes G$ of multilinear functionals
$F \in \mathcal{L}(\prod_{i=1}^k V_i)$ and
$G \in \mathcal{L}(\prod_{j=1}^l W_j)$
is the functional defined by $F \otimes G (v, w) = F(v) G(w)$, which is multilinear:
$F \otimes G \in \mathcal{L}(\prod_{i=1}^k V_i \times \prod_{j=1}^l W_j)$.
The tensor product map $\otimes$
from $\mathcal{L}(\prod_{i=1}^k V_i) \times \mathcal{L}(\prod_{j=1}^l W_j)$
to $\mathcal{L}(\prod_{i=1}^k V_i \times \prod_{j=1}^l W_j)$
is bilinear and associative.
Given a basis $(e^{(i)}_j)_{j=1}^{n_i}$ for each component vector space $V_i$,
the set $\{\otimes_{i=1}^k \omega^i : \omega^i \in \{ \varepsilon_{(i)}^j \}_{j=1}^{n_i}\}$
of all tensor products of the basis covectors
is a basis for the multilinear functional space $\mathcal{L}(\prod_{i=1}^k V_i)$,
which therefore has dimension $\prod_{i=1}^k n_i$.
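The basis construction above can be checked numerically. As a sketch (using NumPy, with covectors represented by their component arrays in chosen dual bases; all variable names are illustrative), the tensor product of two covectors has the outer product of their component arrays as its component array:

```python
import numpy as np

omega = np.array([1.0, 2.0, 3.0])   # a covector on R^3, in the dual basis
eta   = np.array([4.0, 5.0])        # a covector on R^2

# Components of omega (x) eta: the outer product, shape (3, 2)
T = np.outer(omega, eta)

v = np.array([1.0, -1.0, 2.0])
w = np.array([0.5, 3.0])

# (omega (x) eta)(v, w) = omega(v) * eta(w)
lhs = v @ T @ w
rhs = (omega @ v) * (eta @ w)
assert np.isclose(lhs, rhs)

# dim L(V x W) = dim V * dim W, matching the basis of tensor products
# of basis covectors
assert T.size == omega.size * eta.size
```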

**Formal linear combination** $f: S \mapsto \mathbb{R}$ of elements of a set $S$
is a function that is nonzero on only a finite subset of $S$.
The set of all formal linear combinations of elements of a set can be written as
$\mathcal{F}(S) = \bigsqcup_{n = 0}^\infty \bigsqcup_{A \subset S, |A| = n} \mathbb{R}_{\ne0}^A$.
Real **free vector space** $(\mathcal{F}(S), (+, \cdot_{\mathbb{R}}))$ on a set $S$
is the vector space consisting of the set of all formal linear combinations of elements of the set,
endowed with pointwise addition and scalar multiplication.
Every element of a free vector space can be written uniquely
as a finite linear combination of indicator functions:
$f = \sum_{i=1}^m f(x_i) 1_{x_i}$ where $\{x_i\}_{i=1}^m = \{x : f(x) \ne 0\}$.
Thus, the free vector space has a basis $(1_x)_{x \in S}$,
and is finite-dimensional if and only if the underlying set is a finite set.
We may identify $1_x$ with $x$, and thus consider $S \subset \mathcal{F}(S)$.
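A free vector space over a finite support is easy to model concretely. As a minimal sketch (plain Python dicts mapping elements to nonzero coefficients; the helper names are illustrative), formal linear combinations with pointwise operations look like:

```python
# A formal linear combination is a dict S -> R with finitely many
# nonzero values; zero coefficients are pruned so support stays finite.
def add(f, g):
    h = {x: f.get(x, 0.0) + g.get(x, 0.0) for x in set(f) | set(g)}
    return {x: c for x, c in h.items() if c != 0.0}

def scale(a, f):
    return {x: a * c for x, c in f.items() if a * c != 0.0}

def indicator(x):          # 1_x, the basis element identified with x
    return {x: 1.0}

f = add(scale(2.0, indicator("a")), scale(-3.0, indicator("b")))
assert f == {"a": 2.0, "b": -3.0}

# Unique expansion f = sum_x f(x) 1_x in the basis (1_x)
expansion = {}
for x, c in f.items():
    expansion = add(expansion, scale(c, indicator(x)))
assert expansion == f
```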
Abstract **tensor product** $\otimes_{i=1}^k V_i$ of a finite family of real vector spaces
is the quotient space $\mathcal{F}(V) / \mathcal{R}$
of the free vector space on their Cartesian product $V = \prod_{i=1}^k V_i$,
by the subspace $\mathcal{R} = \text{Span}(A \cup B)$ where
$A = \{1_{v'} - a 1_v : v \in V, a \in \mathbb{R}, j \in \{i\}_{i=1}^k$, $v' = (v_i a^{\delta_{ij}})_{i=1}^k\}$
and $B = \{1_{v''} - 1_v - 1_{v'}: v \in V, j \in \{i\}_{i=1}^k$,
$v' \in v + V_j, v'' = (v_i + v_i' \delta_{ij})_{i=1}^k\}$.
Abstract **tensor product** $\otimes_{i=1}^k v_i$ of vectors
is the element of the abstract tensor product space defined by
$\otimes_{i=1}^k v_i = 1_v + \mathcal{R}$.
The abstract tensor product map is a multilinear operator:
$\otimes \in \mathcal{L}(\prod_{i=1}^k V_i, \otimes_{i=1}^k V_i)$.
Given a basis $(e^{(i)}_j)_{j=1}^{n_i}$ for each component vector space $V_i$,
the set $\{\otimes_{i=1}^k v_i : v_i \in \{ e^{(i)}_j \}_{j=1}^{n_i}\}$
of all tensor products of the basis vectors is a basis for the abstract tensor product space
$\otimes_{i=1}^k V_i$, which therefore has dimension $\prod_{i=1}^k n_i$.

For any finite family of finite-dimensional real vector spaces, there is a canonical isomorphism $\otimes_{i=1}^k V_i^∗ \cong \mathcal{L}(\prod_{i=1}^k V_i)$ between the abstract tensor product of their dual spaces and the space of multilinear functionals on their Cartesian product, under which the abstract tensor product of covectors corresponds to the tensor product of covectors as functionals. Identifying each vector space with its second dual space by the canonical isomorphism, we also have $\otimes_{i=1}^k V_i \cong \mathcal{L}(\prod_{i=1}^k V_i^∗ )$.

**Covariant tensor** (共变张量) $\alpha$ of rank $k$, or **covariant k-tensor**,
on a finite-dimensional real vector space $V$
is an element of the k-fold abstract tensor product of its dual space,
aka the **covariant k-tensor space** $T^k(V^∗)$ on the vector space:
$\alpha \in T^k(V^∗) = \otimes_{i=1}^k V^∗$.
Due to the canonical isomorphism $\otimes_{i=1}^k V^∗ \cong \mathcal{L}(V^k)$,
we typically think of a covariant k-tensor as a multilinear functional of $k$ vectors:
$\alpha \in \mathcal{L}(V^k)$.
**Tensor product** of covariant tensors
is thus defined by the tensor product of multilinear functionals:
$\otimes: T^k(V^∗) \times T^l(V^∗) \mapsto T^{k+l}(V^∗)$.

**Contravariant tensor** (反变张量) of rank $k$, or **contravariant k-tensor**,
on a finite-dimensional real vector space
is an element of its k-fold abstract tensor product space,
aka the **contravariant k-tensor space** $T^k(V)$ on the vector space:
$T^k(V) = \otimes_{i=1}^k V$.

**Mixed tensor** (混合张量) of type $(k, l)$, or **mixed (k,l)-tensor**,
on a finite-dimensional real vector space
is an element of the abstract tensor product of
its contravariant k-tensor space and its covariant l-tensor space,
aka the **mixed (k,l)-tensor space** $T^{(k,l)}(V)$ or $T^k_l(V)$ on the vector space:
$T^{(k,l)}(V) = T^k(V) \otimes T^l(V^∗)$.

Given a basis $(e_i)_{i=1}^n$ for the vector space, the set $\{(\otimes_{j=1}^k v_j) \otimes (\otimes_{j=1}^l \omega_j)$: $v_j \in \{ e_i \}_{i=1}^n, \omega_j \in \{\varepsilon^i\}_{i=1}^n\}$ of all tensor products of $k$ basis vectors and $l$ basis covectors is a basis for the mixed (k,l)-tensor space $T^{(k,l)}(V)$, which therefore has dimension $n^{k+l}$. In particular, every covariant k-tensor can be written uniquely as $\alpha = \alpha_{(i_j)_{j=1}^k} \otimes_{j=1}^k \varepsilon^{i_j}$, where $\alpha_{(i_j)_{j=1}^k} = \alpha (e_{i_j})_{j=1}^k$. The coordinate representation $(\alpha_i)_{i \in n^k}$ of a covariant k-tensor is a k-dimensional array with $n$ components in each dimension.
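The coordinate representation of a covariant 2-tensor can be computed exactly as described: evaluate the functional on pairs of basis vectors. A sketch (NumPy; the bilinear functional here is an arbitrary illustrative example):

```python
import numpy as np

# An illustrative covariant 2-tensor on R^3, given as a bilinear functional
A = np.array([[1., 2., 0.],
              [0., 3., 1.],
              [4., 0., 2.]])
alpha = lambda v, w: v @ A @ w

# Components alpha_ij = alpha(e_i, e_j) in the standard basis
e = np.eye(3)
components = np.array([[alpha(e[i], e[j]) for j in range(3)]
                       for i in range(3)])
assert np.allclose(components, A)

# Reconstruction alpha = alpha_ij eps^i (x) eps^j: evaluating the
# component array against (v, w) recovers the functional
v, w = np.array([1., -1., 2.]), np.array([0., 1., 3.])
assert np.isclose(np.einsum('ij,i,j->', components, v, w), alpha(v, w))
```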

**Covariant k-tensor bundle** $T^k T^∗ M$ on a smooth manifold $M$
is the disjoint union of covariant k-tensor spaces on tangent spaces at all points of $M$:
$T^k T^∗ M = \sqcup_{p \in M} T^k(T_p^∗ M)$.
Analogously, we define **contravariant k-tensor bundle** $T^k T M = \sqcup_{p \in M} T^k(T_p M)$
and **mixed (k,l)-tensor bundle** $T^{(k, l)} T M = \sqcup_{p \in M} T^{(k, l)}(T_p M)$.

**Tensor field** on a smooth manifold is a section of a tensor bundle on the manifold.
**Smooth tensor field** is a smooth section of a tensor bundle.
Denote the **space of smooth covariant k-tensor fields** as $\Gamma(T^k T^∗ M)$
or $\mathfrak{T}^k(M)$,
the **space of smooth contravariant k-tensor fields** as $\Gamma(T^k T M)$,
and the **space of smooth mixed (k,l)-tensor fields** as $\Gamma(T^{(k, l)} T M)$,
all of which are real vector spaces and modules over the ring $C^\infty(M)$
of smooth real-valued functions on the manifold:
$A \in \Gamma(T^{(k, l)} T M)$, $f \in C^\infty(M)$, then $f A \in \Gamma(T^{(k, l)} T M)$.
The (pointwise) action of a smooth covariant k-tensor field on $k$ smooth vector fields
is a smooth real-valued function:
$A \in \mathfrak{T}^k(M)$, $X_i \in \mathfrak{X}(M)$, then $A(X_i)_{i=1}^k \in C^\infty(M)$.
Any smooth covariant k-tensor field is a multilinear operator over $C^\infty(M)$
from the k-th Cartesian power $\mathfrak{X}^k(M)$ of the space of smooth vector fields
to the space $C^\infty(M)$ of smooth functions; the converse is also true.

Tensor field is a unifying concept: 0-tensor fields are continuous real-valued functions; contravariant 1-tensor fields are vector fields; covariant 1-tensor fields are covector fields; covariant 2-tensors are bilinear forms; smooth, nondegenerate, constant-index, symmetric 2-tensor fields are pseudo-Riemannian metrics; smooth, positive-definite, symmetric 2-tensor fields are Riemannian metrics; closed, nondegenerate, alternating 2-tensor fields are symplectic forms; alternating k-tensor fields are differential k-forms.

Given a smooth chart on the manifold,
every smooth tensor field can be written uniquely as
a linear combination of the coordinate basis at each point in the chart:
$A = A_{(i_{j'})_{j'=1}^l}^{(i_j)_{j=1}^k} \left(\otimes_{j=1}^k
\frac{\partial}{\partial x^{i_j}}\right) \otimes \left(\otimes_{j'=1}^l d x^{i_{j'}}\right)$,
where $\frac{\partial}{\partial x^i}$ are coordinate vectors and $d x^i$ are coordinate covectors.
We call $A_{(i_{j'})_{j'=1}^l}^{(i_j)_{j=1}^k}$
the **component functions** of the smooth tensor field associated with the smooth chart,
which are smooth real-valued functions.
**Local coordinate representation** $(A^i_j)_{i \in n^k, j \in n^l}$
of a smooth mixed (k,l)-tensor field w.r.t. a smooth chart
is a smooth function whose values are (k+l)-dimensional arrays
with $n$ components in each dimension.

**Pullback** $F^∗ A$ of a covariant k-tensor field $A$ on $N$ by a smooth map
$F \in C^\infty(M, N)$ is the rough covariant k-tensor field on $M$
whose value at each point equals the pullback of the covariant k-tensor at that point:
$(F^∗ A)_p = d F_p^∗ (A_{F(p)})$, i.e.
$\forall v_i \in T_p M$, $(F^∗ A)_p (v_i)_{i=1}^k = A_{F(p)}(d F_p (v_i))_{i=1}^k$.
The pullback of any covariant k-tensor field by a smooth map is a covariant k-tensor field;
if the tensor field is smooth, its pullback is also smooth.
The global tangent map $F_∗$ and cotangent map $F^∗$ of a diffeomorphism $F: M \mapsto N$
form a pair of isomorphisms between the spaces (as real vector spaces and $C^\infty$ modules)
of smooth mixed (k,l)-tensor fields on the domain and the codomain:
$F: M \cong N$ then $\forall k, l \in \mathbb{N}$,
$F_∗: \Gamma(T^{(k, l)} T M) \cong \Gamma(T^{(k, l)} T N)$,
$F^∗: \Gamma(T^{(k, l)} T N) \cong \Gamma(T^{(k, l)} T M)$.
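In coordinates, the pullback of a covariant 2-tensor at a point is computed through the Jacobian: $(F^∗ A)_{ij} = A_{kl} \partial_i F^k \partial_j F^l$. A sketch (NumPy; the map $F$ and the tensor components are hypothetical examples, with $A$ taken constant for simplicity):

```python
import numpy as np

# Hypothetical smooth map F: R^2 -> R^2, F(x, y) = (x*y, x + y)
def F(p):
    x, y = p
    return np.array([x * y, x + y])

def dF(p):                       # Jacobian of F at p
    x, y = p
    return np.array([[y, x],
                     [1., 1.]])

# Components of a covariant 2-tensor A at F(p) (constant for simplicity)
A = np.array([[2., 1.],
              [0., 3.]])

p = np.array([1.0, 2.0])
J = dF(p)
pullback = J.T @ A @ J           # (F*A)_ij = A_kl J^k_i J^l_j

# Defining property: (F*A)_p(v, w) = A_{F(p)}(dF_p v, dF_p w)
v, w = np.array([1., 0.]), np.array([0., 1.])
lhs = v @ pullback @ w
rhs = (J @ v) @ A @ (J @ w)
assert np.isclose(lhs, rhs)
```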

Although tensor is a unifying concept, it is only indispensable when the total rank is at least two. Among covariant tensors of rank two or higher, the symmetric and the alternating ones have predictable changes of value under argument reordering. Symmetric and alternating 2-tensor spaces are complementary subspaces of a covariant 2-tensor space. For higher ranks, they are merely linearly independent subspaces: they intersect trivially, but their sum is a proper subspace.

**Symmetric k-tensor** (对称张量) is a covariant k-tensor
whose value does not depend on the order of arguments:
$\forall \pi \in S_k$, $\alpha \circ \pi = \alpha$,
where $S_k$ is the symmetric group over the set $\{i\}_{i=1}^k$.
For example, the inner product $(\cdot,\cdot)$ of a Euclidean space
is a symmetric 2-tensor.
**Symmetric k-tensor space** $\Sigma^k(V^∗)$ on a finite-dimensional real vector space
is the subspace of its covariant k-tensor space $T^k(V^∗)$ consisting of all the symmetric ones.

**Symmetrization** $\text{Sym}: T^k(V^∗) \mapsto \Sigma^k(V^∗)$
is the projection from a covariant k-tensor space to its symmetric subspace
defined by averaging across all permutations of a covariant k-tensor:
$\text{Sym}~\alpha = \frac{1}{k!} \sum_{\pi \in S_k} \alpha \circ \pi$.
**Symmetric product** $\alpha \beta$ of symmetric tensors
is the symmetrized tensor product of these tensors:
$\alpha \beta = \text{Sym}(\alpha \otimes \beta)$.
The symmetric product map is a symmetric bilinear operator
from $\Sigma^k(V^∗) \times \Sigma^l(V^∗)$ to $\Sigma^{k+l}(V^∗)$.

**Bilinear form** (双线性形式) $q: V^2 \mapsto \mathbb{F}$ ('q' for "quadratic")
on a finite-dimensional vector space
is a symmetric 2-tensor, i.e. a symmetric bilinear functional: $q \in \Sigma^2(V^∗)$.
Note that differential forms are alternating tensor fields,
so be careful which "form" a term refers to.
Every covariant 2-tensor $\alpha$ is equivalent to a linear operator $\hat{\alpha}: V \mapsto V^∗$
defined by $\forall v, w \in V$, $\hat{\alpha}(v)(w) = \alpha(v, w)$.
**Nondegenerate bilinear form** on a finite-dimensional vector space
is a symmetric 2-tensor whose equivalent linear operator $\hat{q}: V \mapsto V^∗$
is a vector space isomorphism.
**Positive definite bilinear form** on a finite-dimensional vector space
is a symmetric 2-tensor whose value on any pair of the same nonzero vector is positive:
$x \ne 0$ then $q(x, x) > 0$.
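Both properties are easy to test in coordinates, since $\hat{q}$ is represented by the component matrix of $q$: nondegeneracy is invertibility of that matrix, and positive definiteness is positivity of its eigenvalues. A sketch (NumPy; the matrix is an illustrative example):

```python
import numpy as np

# A symmetric 2-tensor q corresponds to a symmetric matrix Q via
# q(v, w) = v^T Q w; its hat operator V -> V* is v |-> Q v.
Q = np.array([[2., 1.],
              [1., 2.]])
assert np.allclose(Q, Q.T)                 # symmetric

# Nondegenerate iff hat{q} is an isomorphism iff Q is invertible
assert not np.isclose(np.linalg.det(Q), 0.0)

# Positive definite iff q(x, x) > 0 for all x != 0
# iff all eigenvalues of Q are positive
assert np.all(np.linalg.eigvalsh(Q) > 0)
```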

**Alternating k-tensor**, **antisymmetric k-tensor** (反对称张量),
**skew-symmetric k-tensor**, **k-covector**, or **exterior form** $\omega$
is a covariant k-tensor whose value changes sign under a transposition of any two arguments:
$\forall 1 \le i < j \le k$, $\omega \circ (i~j) = -\omega$.
For example, the determinant $\det$, as a function of the $n$ column vectors of a matrix,
is an alternating n-tensor on $\mathbb{R}^n$.
**Alternating k-tensor space** or **k-covector space** $\Lambda^k(V^∗)$
on a finite-dimensional real vector space
is the subspace of its covariant k-tensor space $T^k(V^∗)$ consisting of all the alternating ones.
A covariant k-tensor is alternating if and only if
its value is zero whenever two arguments coincide, or equivalently,
zero on every linearly dependent k-tuple of vectors:
given $\alpha \in T^k(V^∗)$, $\alpha \in \Lambda^k(V^∗)$ if and only if
$\{v \in V^k: \exists i \ne j, v_i = v_j\} \subset \alpha^{-1}(0)$.
There is no k-covector on an n-dimensional real vector space besides zero if $k > n$.
The action of an n-covector on linearly transformed vectors in an n-dimensional vector space
equals its action on the original vectors, multiplied by the determinant of the linear transformation:
$\omega \in \Lambda^n(V^∗)$, $T \in \mathcal{L}(V, V)$,
$\omega(T v_i)_{i=1}^n = (\det T) \omega(v_i)_{i=1}^n$.

**Alternation** $\text{Alt}: T^k(V^∗) \mapsto \Lambda^k(V^∗)$
is the projection from a covariant k-tensor space to its alternating subspace
defined by averaging across all signed permutations of a covariant k-tensor:
$\text{Alt}~\alpha = \frac{1}{k!} \sum_{\pi \in S_k} (\text{sgn}~\pi) \alpha \circ \pi$,
where $\text{sgn}$ gives the sign of a permutation.
**Exterior product** or **wedge product** $\wedge$ of k- and l-covectors
is their alternated tensor product, with a combinatorial number of their ranks:
$\omega \in \Lambda^k(V^∗)$, $\eta \in \Lambda^l(V^∗)$, $\omega \wedge \eta =
\binom{k+l}{k} \text{Alt}(\omega \otimes \eta)$.
The "$\text{Alt}$ convention" of the wedge product defines it similarly to the symmetric product:
$\omega \overline{\wedge} \eta = \text{Alt}(\omega \otimes \eta)$.
The wedge product map is an associative, anticommutative, bilinear operator
from $\Lambda^k(V^∗) \times \Lambda^l(V^∗)$ to $\Lambda^{k+l}(V^∗)$:
$\omega \wedge \eta = (-1)^{kl} \eta \wedge \omega$.
The wedge product of covectors equals the determinant of the matrix of their actions:
$(\wedge_{j=1}^k \omega^j) (v_i)_{i=1}^k = \det(\omega^j(v_i))$;
because of this simple relation with the determinant,
we call our current definition of the wedge product its "determinant convention".
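Alternation and the determinant convention of the wedge product can be verified on component arrays (NumPy; the permutation-sign helper and all variable names are illustrative):

```python
import math
from itertools import permutations

import numpy as np

def sign(pi):                  # sign of a permutation, by counting inversions
    s, pi = 1, list(pi)
    for i in range(len(pi)):
        for j in range(i + 1, len(pi)):
            if pi[i] > pi[j]:
                s = -s
    return s

# Alternation of a covariant k-tensor stored as a k-dimensional array
def alt(T):
    k = T.ndim
    return sum(sign(pi) * np.transpose(T, pi)
               for pi in permutations(range(k))) / math.factorial(k)

# Determinant convention: omega ^ eta = binom(k+l, k) Alt(omega (x) eta)
def wedge(a, b):
    k, l = a.ndim, b.ndim
    return math.comb(k + l, k) * alt(np.multiply.outer(a, b))

# For covectors a, b: (a ^ b)(v, w) = det [[a(v), a(w)], [b(v), b(w)]]
a, b = np.array([1., 2., 0.]), np.array([0., 1., 3.])
W = wedge(a, b)
v, w = np.array([1., 0., 2.]), np.array([0., 1., 1.])
lhs = np.einsum('ij,i,j->', W, v, w)
rhs = np.linalg.det(np.array([[a @ v, a @ w],
                              [b @ v, b @ w]]))
assert np.isclose(lhs, rhs)

# Anticommutativity for 1-covectors: a ^ b = - b ^ a
assert np.allclose(W, -wedge(b, a))
```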

**Multi-index** $I = (i_j)_{j=1}^k$ of length $k$ from the index set $A$ of an indexed family
is a k-tuple of indices: $I \in A^k$.
**Elementary k-covector** $\varepsilon^I$ on an n-dimensional vector space $V$,
given a basis $(\varepsilon^i)_{i=1}^n$ to its dual space and a multi-index $I$ of length $k$,
is the k-covector defined by $\forall v_j \in V$,
$\varepsilon^I(v_j)_{j=1}^k = \det [v]_{I,\cdot}$
where $[v]$ is the matrix representation of the k-tuple of vectors
in the basis dual to that of the dual space, i.e. $v_j^i = \varepsilon^i(v_j)$.
Elementary 1-covector $\varepsilon^i$ equals the given basis covector.
**Kronecker delta** $\delta^I_J$ for multi-indices $I$ and $J$ of length $k$ on $\{i\}_{i=1}^n$
is a symbol defined by $\delta^I_J = \det (\delta^i_j)_{I,J}$.
For any n-dimensional vector space $V$,
given a basis $(e_i)_{i=1}^n$ with dual basis $(\varepsilon^i)_{i=1}^n$,
the value of any elementary k-covector $\varepsilon^I$
on any k-tuple $(e_j)_{j \in J}$ of the basis vectors
equals the Kronecker delta for the corresponding multi-indices:
$\varepsilon^I(e_j)_{j \in J} = \delta^I_J$.
**Increasing multi-index** is a multi-index from a well-ordered index set that preserves the order:
$a < b$ then $i_a < i_b$.
We use $\sum_I'$ to denote a sum over all increasing multi-indices:
$\sum_I' = \sum_{\{I : i_1 < \cdots < i_k\}}$.
Given a basis $(\varepsilon^i)_{i=1}^n$ to a dual space $V^∗$, for all $k \in \{i\}_{i=1}^n$,
the set $\mathscr{E} = \{\varepsilon^I : i_1 < \cdots < i_k\}$
of all elementary k-covectors on increasing multi-indices of length $k$
is a basis for the k-covector space $\Lambda^k(V^∗)$,
which therefore has dimension $\binom{n}{k}$.
**Decomposable k-covector** is a k-covector that can be expressed as
the wedge product of a k-tuple of covectors: $\eta = \wedge_{i=1}^k \omega^i$.
The wedge product of elementary covectors equals the elementary covector on
the concatenated multi-index: $\varepsilon^I \wedge \varepsilon^J = \varepsilon^{(I,J)}$.
The wedge product of $k$ basis covectors equals the elementary k-covector on
the corresponding multi-index: $\wedge_{i \in I} \varepsilon^i = \varepsilon^I$.
Every elementary k-covector is decomposable, and thus
every k-covector can be written as a linear combination of decomposable k-covectors.
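The increasing multi-indices that index this basis are exactly the k-element subsets of $\{i\}_{i=1}^n$ listed in order, so the dimension count can be sketched directly (Python standard library; $n = 4$ is an arbitrary illustrative choice):

```python
import math
from itertools import combinations

# Increasing multi-indices of length k from {1, ..., n} index the basis
# of elementary k-covectors for Lambda^k(V*).
n = 4
for k in range(n + 1):
    increasing = list(combinations(range(1, n + 1), k))
    assert len(increasing) == math.comb(n, k)   # dim Lambda^k(V*) = C(n, k)

# Summing over all k recovers dim Lambda(V*) = 2^n
total = sum(math.comb(n, k) for k in range(n + 1))
assert total == 2 ** n
```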

For an n-dimensional real vector space $V$,
the 0-covector space is the space $\mathbb{R}$ of real numbers,
the 1-covector space is its dual space $V^∗$,
the n-covector space is the linear span of the determinant $\det$.
**Covector space** $\Lambda(V^∗)$ on an n-dimensional real vector space
is the direct sum of all the k-covector spaces on the vector space:
$\Lambda(V^∗) = \oplus_{k=0}^n \Lambda^k(V^∗)$, which has dimension $2^n$.
**Exterior algebra** or **Grassmann algebra** $(\Lambda(V^∗), \wedge)$
of an n-dimensional vector space $V$ is the associative algebra
consisting of its covector space and the wedge product.
**Graded algebra** is an algebra $(A, (+, \cdot_{\mathbb{R}}, \times))$
that has a direct sum decomposition $A = \oplus_{k \in \mathbb{Z}} A^k$ into subspaces
such that its product maps among these subspaces in accord with the indices:
$\forall k,l \in \mathbb{Z}$, $\times: A^k \times A^l \mapsto A^{k+l}$.
**Anticommutative graded algebra** is a graded algebra
whose product, under the transposition of arguments, changes sign in accord with the indices:
$\forall a \in A^k$, $\forall b \in A^l$, $a \times b = (-1)^{kl} b \times a$.
Exterior algebra is an anticommutative graded algebra.

**Interior multiplication** (内乘) $i_v: \Lambda^k(V^∗) \mapsto \Lambda^{k-1}(V^∗)$ by a vector $v$
is a linear map that inserts the vector into the first argument of a k-covector:
$i_v \omega(w_i)_{i=1}^{k-1} = \omega(v, w_i)_{i=1}^{k-1}$.
The interior multiplication of a vector with a k-covector is also denoted as
$v \lrcorner \omega = i_v \omega$ (read "v **into** ω").
The equivalent linear operator of a 2-covector
maps each vector to the interior multiplication of the vector with the 2-covector:
$\hat{\omega}(v) = v \lrcorner \omega$.
**Nondegenerate 2-covector** on a finite-dimensional vector space
is a 2-covector whose equivalent linear operator $\hat{\omega}: V \mapsto V^∗$
is a vector space isomorphism.
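In coordinates, a 2-covector is an antisymmetric matrix, interior multiplication is a matrix-vector contraction on the first index, and nondegeneracy is invertibility. A sketch (NumPy; the matrix is an illustrative example):

```python
import numpy as np

# A 2-covector omega on R^2 as an antisymmetric matrix: omega(v, w) = v^T W w
W = np.array([[0., 2.],
              [-2., 0.]])
assert np.allclose(W, -W.T)

# Interior multiplication i_v omega: the covector w |-> omega(v, w),
# obtained by inserting v into the first index
v = np.array([1., 3.])
i_v_omega = v @ W

w = np.array([4., -1.])
assert np.isclose(i_v_omega @ w, v @ W @ w)

# hat{omega}(v) = i_v omega; omega is nondegenerate iff W is invertible
assert not np.isclose(np.linalg.det(W), 0.0)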

**Trace** or **contraction** $\text{tr}$
on the last covariant and contravariant indices of a mixed tensor
is a map from mixed (k+1, l+1)-tensors to mixed (k, l)-tensors defined by:
$(\text{tr}~F)(\omega^i, v_j)^{i \in k}_{j \in l} =
\text{tr} (F(\omega^i, \cdot, v_j, \cdot)^{i \in k}_{j \in l})$;
in a basis, its components are obtained by summing over terms with the same
last upper and lower indices: $(\text{tr}~F)^{(i_j)_{j \in k}}_{(i_{j'})_{j' \in l}} =
F^{(i_j, m)_{j \in k}}_{(i_{j'}, m)_{j' \in l}}$.
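In components, contraction is a sum over a paired upper and lower index, which `numpy.einsum` expresses directly; a sketch with illustrative arrays (index order: contravariant indices first, then covariant):

```python
import numpy as np

# A (1,1)-tensor as a matrix F^i_j; its full contraction F^m_m is the
# ordinary matrix trace.
F = np.array([[1., 2.],
              [3., 4.]])
assert np.isclose(np.einsum('mm->', F), np.trace(F))

# A (2,1)-tensor G^{i m}_{m'}: contracting the last upper and lower
# index leaves a (1,0)-tensor, (tr G)^i = G^{i m}_{m}
G = np.arange(8.0).reshape(2, 2, 2)     # axes: (i, m upper, m lower)
trG = np.einsum('imm->i', G)
assert np.allclose(trG, np.array([G[0].trace(), G[1].trace()]))
```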

Musical isomorphisms sharp $\sharp$ and flat $\flat$...