So in my research into machine learning algorithms, I have stumbled upon a dimensionality reduction algorithm for tensors, and my computer experiments have so far yielded interesting results. I am not sure that this dimensionality reduction is new, but I plan on generalizing this dimensionality reduction to more complicated constructions that I am pretty sure are new and am confident would work well.
Suppose that K is either the field of real numbers or the field of complex numbers. Suppose that d_1, …, d_n are positive integers and (m_0, …, m_n) is a sequence of positive integers with m_0 = m_n = 1. Suppose that X_{i,j} is an m_{i-1} × m_i matrix whenever 1 ≤ i ≤ n and 1 ≤ j ≤ d_i. Then define a tensor T((X_{i,j})) = (X_{1,j_1} ⋯ X_{n,j_n})_{j_1,…,j_n} ∈ K^{d_1} ⊗ ⋯ ⊗ K^{d_n}; since m_0 = m_n = 1, each product X_{1,j_1} ⋯ X_{n,j_n} is a 1 × 1 matrix, i.e., a scalar.
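As a concrete illustration, here is a minimal numpy sketch of the map T. The function name and the toy matrices below are my own hypothetical choices for the write-up, not part of the construction itself.

```python
import numpy as np
from itertools import product

def matrix_table_to_tensor(X):
    """Given X[i][j]: an m_{i-1} x m_i matrix for each axis i and index j
    (with m_0 = m_n = 1), return the tensor whose (j_1, ..., j_n) entry is
    the scalar X[0][j_1] @ X[1][j_2] @ ... @ X[n-1][j_n]."""
    dims = tuple(len(row) for row in X)
    T = np.empty(dims)
    for idx in product(*[range(d) for d in dims]):
        M = X[0][idx[0]]
        for i in range(1, len(X)):
            M = M @ X[i][idx[i]]
        T[idx] = M[0, 0]  # m_0 = m_n = 1, so M is a 1x1 matrix
    return T

# toy example: n = 2, d_1 = d_2 = 2, type (1, 2, 1)
X = [
    [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])],      # two 1x2 matrices
    [np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])],  # two 2x1 matrices
]
T = matrix_table_to_tensor(X)  # equals the 2x2 identity matrix
```

For this toy table, the (j_1, j_2) entry is the product of the j_1-th row vector with the j_2-th column vector, which recovers the 2 × 2 identity.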
If v ∈ K^{d_1} ⊗ ⋯ ⊗ K^{d_n} and (X_{i,j})_{i,j} is a system of matrices that minimizes the value ‖v − T((X_{i,j}))‖, then T((X_{i,j})_{i,j}) is a dimensionality reduction of v, and we let u denote this tensor of reduced dimension T((X_{i,j})_{i,j}). We shall call u a matrix table to tensor dimensionality reduction of v of type (m_0, …, m_n).
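The minimization defining u can be sketched by naive finite-difference gradient descent on the squared error ‖v − T((X_{i,j}))‖². The helper names (`evaluate`, `fit`) and all hyperparameters below are assumptions made for illustration; a serious implementation would use alternating least squares or automatic differentiation instead.

```python
import numpy as np
from itertools import product

def evaluate(params, dims, shapes):
    """Unflatten a parameter vector into the matrix table (X_{i,j})
    and evaluate the tensor T((X_{i,j}))."""
    X, pos = [], 0
    for d, (r, c) in zip(dims, shapes):
        row = []
        for _ in range(d):
            row.append(params[pos:pos + r * c].reshape(r, c))
            pos += r * c
        X.append(row)
    T = np.empty(dims)
    for idx in product(*[range(d) for d in dims]):
        M = X[0][idx[0]]
        for i in range(1, len(X)):
            M = M @ X[i][idx[i]]
        T[idx] = M[0, 0]  # m_0 = m_n = 1, so the product is a scalar
    return T

def fit(v, dims, shapes, steps=300, lr=0.05, h=1e-6, seed=0):
    """Minimize ||v - T((X_{i,j}))||^2 by finite-difference gradient descent."""
    rng = np.random.default_rng(seed)
    n_params = sum(d * r * c for d, (r, c) in zip(dims, shapes))
    p = 0.5 * rng.standard_normal(n_params)
    losses = []
    for _ in range(steps):
        base = np.sum((v - evaluate(p, dims, shapes)) ** 2)
        losses.append(base)
        grad = np.empty_like(p)
        for k in range(len(p)):
            q = p.copy()
            q[k] += h
            grad[k] = (np.sum((v - evaluate(q, dims, shapes)) ** 2) - base) / h
        p -= lr * grad
    return evaluate(p, dims, shapes), losses

# fit a type-(1, 2, 1) reduction to the 2x2 identity, which is exactly representable
v = np.eye(2)
u, losses = fit(v, dims=(2, 2), shapes=[(1, 2), (2, 1)])
```

Here `dims` lists (d_1, …, d_n) and `shapes` lists the (m_{i-1}, m_i) matrix shapes along each axis; the loss should decrease toward zero since the target is exactly representable at this type.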
Observation 1: (Sparsity) If v is sparse in the sense that most entries in the tensor v are zero, then the tensor u will tend to have plenty of zero entries, but as expected, u will be less sparse than v.
Observation 2: (Repeated entries) If v is sparse, v = (x_{i_1,…,i_n})_{i_1,…,i_n}, and the set of entries {x_{i_1,…,i_n} : i_1, …, i_n} has small cardinality, then the tensor u will contain plenty of repeated non-zero entries.
Observation 3: (Tensor decomposition) Let v be a tensor. Then we can often find a matrix table to tensor dimensionality reduction u of type (m0,…,mn) so that v−u is its own matrix table to tensor dimensionality reduction.
Observation 4: (Rational reduction) Suppose that v is sparse and the entries in v are all integers. Then the value ‖u − v‖² is often a positive integer, both in the case when u has only integer entries and in the case when u has non-integer entries.
Observation 5: (Multiple lines) Let m be a fixed positive even number. Suppose that v is sparse and the entries in v are all of the form r·e^{2πik/m} for some integer k and some real r ≥ 0. Then the entries in u are often exclusively of the form r·e^{2πik/m} as well.
Observation 6: (Rational reductions) I have observed a sparse tensor v, all of whose entries are integers, together with matrix table to tensor dimensionality reductions u_1, u_2 of v where ‖v − u_1‖ = 3, ‖v − u_2‖ = 2, and ‖u_2 − u_1‖ = 5.
This is not an exhaustive list of the observations that I have made about the matrix table to tensor dimensionality reduction.
From these observations, one should conclude that the matrix table to tensor dimensionality reduction is a well-behaved machine learning algorithm. I hope and expect this machine learning algorithm and many similar ones to be used both to interpret the AI models that we have now and will have in the future, and to construct more interpretable and safer AI models.