Cool stuff! I had previously seen that Vect has a biproduct structure, but never even considered that matrices could be appropriately generalized using it.
One thought I had was about handling potentially infinite-dimensional vector spaces. In that case, if $V$ has a basis $\{v_i : i \in I\}$, we still get a decomposition into a direct sum of copies of $R$:
$$V \cong \bigoplus_{i \in I} R.$$
However, when the indexing set $I$ is infinite, coproducts and products in Vect differ, so the argument from the post doesn't go through verbatim. But it still feels like we have a (perhaps less useful) notion of matrix here: if $V$ has basis $\{v_i : i \in I\}$ and $U$ has basis $\{u_j : j \in J\}$, then a linear map $T : V \to U$ is still fully determined by the collection $\{a_{i,j} : (i,j) \in I \times J\} \subseteq R$ with $T(v_i) = \sum_{j \in J} a_{i,j} u_j$, where, for each fixed $i \in I$, $a_{i,j}$ vanishes for all but finitely many $j \in J$. Conversely, any collection $\{a_{i,j} : (i,j) \in I \times J\} \subseteq R$ satisfying that vanishing condition uniquely determines a linear map $T : V \to U$. In the finite-dimensional case, when $V$ has dimension $m$ and $U$ has dimension $n$, this is the usual correspondence with $I = \{1, \dots, m\}$ and $J = \{1, \dots, n\}$.
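To make the vanishing condition concrete, here's a minimal sketch (the names `apply` and `shift` are just made up for illustration) of how such a "column-finite matrix" could be stored: each basis index $i$ of $V$ maps to a finitely supported dictionary over $J$, and that finite support is exactly what makes applying the map well-defined.

```python
from collections import defaultdict

# A vector in a space with basis indexed by I is a finitely supported map
# I -> R, modelled as a dict holding only the indices with nonzero
# coefficient.  (`apply` and `shift` are made-up names for this sketch.)

def apply(matrix, v):
    """Apply the linear map encoded by `matrix` to the vector `v`.

    `matrix` sends each basis index i of V to the finitely supported dict
    {j: a_ij} over the basis of U -- i.e. the family {a_ij : (i, j) in I x J},
    with the column-finiteness condition holding automatically because every
    dict has finitely many keys.
    """
    out = defaultdict(float)
    for i, coeff in v.items():             # v = sum_i coeff_i * v_i (finite sum)
        for j, a_ij in matrix(i).items():  # T(v_i) = sum_j a_ij * u_j (finite sum)
            out[j] += coeff * a_ij
    return {j: c for j, c in out.items() if c != 0.0}

# Example: the shift map on a space with basis indexed by the naturals,
# T(v_i) = u_{i+1}.  I and J are infinite, but each column has exactly one
# nonzero entry, so the vanishing condition holds.
shift = lambda i: {i + 1: 1.0}
print(apply(shift, {0: 2.0, 5: -1.0}))  # {1: 2.0, 6: -1.0}
```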
This feels like it still captures the notion of "linear maps are in bijection with indexed collections of scalars" (now with an extra vanishing condition) that makes matrices useful for computing in the finite-dimensional case, although infinite collections are not as nice to work with (induction becomes an issue, for example). The category theory part of my brain wants to figure out the categorical structure that enables this, which would presumably be something weaker than products and coproducts coinciding, maybe just that they interact well. I might be asking for too much, though, since infinite-dimensional vector spaces usually show up in areas, and are approached with tools, that in my experience aren't purely algebraic (e.g. functional analysis).
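For what it's worth, I think the bijection above only uses the universal property of the coproduct on the domain side (assuming I haven't slipped up):
$$\operatorname{Hom}(V, U) \cong \operatorname{Hom}\Big(\bigoplus_{i \in I} R,\ U\Big) \cong \prod_{i \in I} \operatorname{Hom}(R, U) \cong \prod_{i \in I} U \cong \prod_{i \in I} \bigoplus_{j \in J} R,$$
and the final step, expanding $U$ in the basis $\{u_j\}$, is where the "finitely many nonzero $a_{i,j}$ for each fixed $i$" condition comes from.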