I keep thinking about the idea of ‘virtual neurons’: functional units corresponding to natural abstractions, made up of subtle combinations of weights and biases distributed throughout a neural network. I’d like to be able to ‘sparsify’ this set of virtual neurons: project them out into the full sparse space of virtual neurons, somehow tease them apart from each other, then recombine the pieces with new boundary lines drawn around the true abstractions. I’m not sure how to do this, but I keep circling back to the idea.

Maybe the network could first be distilled down to 16-bit or 8-bit numbers without much capability loss? Then a sparse space could be the full representation of the integer conversion of every point of precision: upcast each number, multiply by the power of ten corresponding to the smallest point of precision of the original low-bit number, round off the remaining decimal portion, and convert to a big integer. That integer could then serve as an index into a sparse space with dimensionality equal to the full set of weights of the model (with the biases folded into the weights). Then look for concentrations in this huge space which correspond to abstractions in the world? … step 4. Profit?
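The quantize-and-index step above can be sketched in code. This is only a toy illustration, not a worked-out method: the function name `weight_to_index` and the choice of four decimal places are my own assumptions, and a real implementation would need to pick the scaling factor from the actual precision of the quantized format.

```python
import numpy as np

def weight_to_index(w_low: np.float16, decimals: int = 4) -> int:
    """Map a low-precision weight to an integer index (hypothetical sketch).

    Follows the steps described in the text: upcast the number, scale by
    the power of ten matching the smallest point of precision, round off
    the remaining decimal portion, and convert to a (big) integer.
    """
    upcast = float(w_low)              # upcast to float64
    scaled = upcast * 10 ** decimals   # shift the decimal point left-to-right
    return int(round(scaled))          # round off the remainder -> integer

# A model's flattened weight vector (biases folded in) then maps to one
# integer coordinate per weight, giving a point in a huge sparse space:
weights = np.array([0.5, -0.25, 2.0], dtype=np.float16)
indices = [weight_to_index(w) for w in weights]
# indices is [5000, -2500, 20000]
```

Each weight contributes one coordinate, so the sparse space has dimensionality equal to the total parameter count; the hoped-for ‘concentrations’ would then be clusters of models (or model fragments) landing near each other in that space.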