Cool stuff! I had previously seen that Vect has a biproduct structure, but never even considered that matrices could be appropriately generalized using it.
One thought I had was about handling potentially infinite-dimensional vector spaces. In that case, if $V$ has a basis $\{v_i : i \in I\}$, we still get a decomposition into a direct sum of copies of $R$:

$$V \cong \bigoplus_{i \in I} R.$$
However, when the indexing set $I$ is infinite, coproducts and products in Vect differ, so the argument from the post doesn’t go through verbatim. But it still feels like we have a (perhaps less useful) notion of matrix here: if $V$ has basis $\{v_i : i \in I\}$ and $U$ has basis $\{u_j : j \in J\}$, then a linear map $T : V \to U$ is still fully determined by the collection $\{a_{i,j} : (i,j) \in I \times J\} \subseteq R$, where $T(v_i) = \sum_{j \in J} a_{i,j} u_j$ and, for any fixed $i \in I$, $a_{i,j}$ vanishes for all but finitely many $j \in J$. Conversely, any collection $\{a_{i,j} : (i,j) \in I \times J\} \subseteq R$ satisfying the same vanishing condition uniquely defines a linear map $T : V \to U$. In the finite-dimensional case, when $V$ has dimension $m$ and $U$ has dimension $n$, we recover $I = \{1, \dots, m\}$ and $J = \{1, \dots, n\}$.
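To make the vanishing condition concrete, here is a minimal sketch in Haskell (the names `Vec`, `LinMap`, `apply`, and `shift` are my own, not from the post), representing a vector as a finitely supported family of scalars and a linear map by its rows:

```haskell
import qualified Data.Map.Strict as M

-- A vector with basis indexed by i: a finitely supported family of
-- scalars, represented as a finite Map (absent keys are zero).
type Vec i = M.Map i Double

-- A linear map V -> U, given by its rows: basis index i is sent to
-- the finitely supported vector T(v_i) = sum_j a_{i,j} u_j.
type LinMap i j = i -> Vec j

-- T(sum_i x_i v_i) = sum_i x_i T(v_i); the outer sum is finite because
-- x has finite support, and each row (t i) has finite support in j.
apply :: Ord j => LinMap i j -> Vec i -> Vec j
apply t x = M.unionsWith (+) [M.map (c *) (t i) | (i, c) <- M.toList x]

-- Example: the right-shift map on sequences, sending v_i to u_{i+1}.
shift :: LinMap Int Int
shift i = M.singleton (i + 1) 1.0
```

The finiteness built into `Vec j` is exactly the vanishing condition: the “matrix” may have infinitely many rows, but each row has only finitely many nonzero entries.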
This feels like it still captures the notion that “linear maps are in bijection with indexed collections of scalars” (now with an extra vanishing condition), which is what makes matrices useful for computing in the finite-dimensional case, although infinite collections are not as nice to work with (induction becomes an issue, for example). The category theory part of my brain wants to figure out the appropriate categorical structure that enables this, which would be something weaker than products and coproducts coinciding, maybe just that they interact well in some sense. I might be asking for too much, though, since infinite-dimensional vector spaces usually appear in areas, and are approached with tools, that in my experience aren’t purely algebraic (e.g. functional analysis).
Here’s a basic problem with infinite bases. Suppose $f : R \to R^\omega$ duplicates its argument $\omega$ times, and suppose $g : R^\omega \to R$ sums all $\omega$ entries. Then $g \circ f$ is not a sensible function: $(g \circ f)(x)$ would have to be the infinite sum $x + x + \cdots$.
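A sketch of the same pathology in Haskell (my illustration, using lazy infinite lists to stand in for the unrestricted $R^\omega$):

```haskell
-- f : R -> R^omega, duplicating its argument omega times.
f :: Double -> [Double]
f = repeat

-- g : R^omega -> R, summing all entries; sensible only on finite lists.
g :: [Double] -> Double
g = sum

-- bad = g (f 1.0)  -- never terminates: the infinite sum 1 + 1 + 1 + ...
```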
So you really need some restriction. For example, maybe we interpret $R^\omega$ as requiring all but a finite number of entries to be zero. That would at least rule out $f$. Now $R^\omega$ is not a “true infinite product” in the category-theory sense, but we would still have $R^\omega \cong R \oplus R^\omega$ (“first” and “rest” of infinite list), which might enable induction. I’m not sure.
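Here is a sketch of that isomorphism under the finite-support reading, with finitely supported sequences represented as finite lists and trailing zeros left implicit (again my own encoding, not from the post):

```haskell
-- Finitely supported sequences in R^omega: finite lists, with all
-- entries past the end implicitly zero.
type FinSeq = [Double]

-- One direction of R^omega ~= R (+) R^omega: "first" and "rest".
split :: FinSeq -> (Double, FinSeq)
split []       = (0, [])
split (x : xs) = (x, xs)

-- The other direction: prepend an entry.
glue :: (Double, FinSeq) -> FinSeq
glue (x, xs) = x : xs
```

`split` and `glue` are mutually inverse up to trailing zeros (`glue (split [])` gives `[0]`, which represents the same sequence as `[]`), and structural recursion on the list is one way the hoped-for induction could go.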
Alternatively we could have $R^\omega$ be unrestricted, but then $g$ can’t be defined. Either way, there’s an issue with allowing functions to or from $R^\omega$ to be represented by arbitrary infinite matrices.
EDIT: another framing of this is that the “infinite product” ($R^\omega$ unrestricted) and the “infinite coproduct” ($R^\omega$ restricted to all but finitely many entries being zero) come apart in Vect. So there isn’t strictly an infinite biproduct.
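Spelling out the two constructions:

$$\prod_{i \in \omega} R = \{(x_0, x_1, \dots) \mid x_i \in R\}, \qquad \bigoplus_{i \in \omega} R = \{(x_0, x_1, \dots) \mid x_i = 0 \text{ for all but finitely many } i\},$$

and the biproduct argument needs these two objects to coincide, which happens only over finite index sets.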