EvoNet (Part 1): Can persistent, iterative neural graphs really work?
TL;DR:
This short post introduces EvoNet, a prototype of a novel neural architecture that maintains a persistent internal state, routes activations bidirectionally, groups nodes into clusters, and remains GPU-friendly through iterative updates. The core question: could such an architecture transport and refine information in a way that can compete with conventional ANNs?
A New Structure: Persistent State and Flowing Activations
Unlike typical ANNs (which clear their activations after each forward pass), EvoNet maintains a persistent internal state. Signals aren’t just pushed forward and discarded — they continue to circulate and evolve with each iteration. This is more akin to how biological brains maintain working memory and adapt responses over time.
Information-Flow Diagrams: Conventional ANN vs EvoNet
In conventional networks, information moves in a single sweep. In EvoNet, activations loop back and cross paths, letting the network build a richer internal context. Whether this is truly viable remains an open question. Can such recursive flows reliably transport meaningful signals, or do they collapse into noise or a static state?
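To make the contrast concrete, here is a minimal NumPy sketch of the two update styles. The leak factor, tanh nonlinearity, and blending rule are illustrative assumptions for this post, not the prototype's actual update rule:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                  # number of nodes
W = rng.normal(scale=0.1, size=(n, n))  # recurrent weights (dense here for brevity)

def conventional_forward(x):
    # Conventional ANN: one sweep, activations discarded afterwards.
    return np.tanh(W @ x)

# EvoNet-style update: the state persists and keeps evolving across iterations.
state = np.zeros(n)
decay = 0.1                             # assumed leak term; the real rule may differ

def evonet_step(state, external_input):
    # The new activation mixes the persistent state with fresh input,
    # then is blended back into the state instead of replacing it.
    activation = np.tanh(W @ state + external_input)
    return (1.0 - decay) * state + decay * activation

x = rng.normal(size=n)
for t in range(10):                     # the same input keeps reverberating
    state = evonet_step(state, x)
```

The open question above is exactly whether this kind of loop settles into something informative or just drifts toward noise or a fixed point.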
Nodes and Clusters: Scalable Building Blocks
EvoNet’s basic units are nodes, similar to ANN neurons except that each node updates its local state every iteration, so signals ripple through the graph like waves instead of being processed once. Nodes group into clusters, forming natural units for scaling and specialization. This could later support dynamic growth by inserting nodes or clusters precisely where informational “strain” emerges.
These ideas for network growth will be discussed in later posts.
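As a rough illustration of how nodes and clusters could be laid out (my own simplified reading, not the repo's actual data structures), the sketch below assigns nodes to clusters, samples denser wiring inside clusters than between them, and shows how a new cluster could be appended later:

```python
import numpy as np

rng = np.random.default_rng(1)

def build_topology(num_clusters=4, nodes_per_cluster=16,
                   p_intra=0.3, p_inter=0.02):
    """Assign nodes to clusters and sample a sparse connection mask:
    denser wiring inside a cluster, only a few links between clusters."""
    n = num_clusters * nodes_per_cluster
    cluster_of = np.repeat(np.arange(num_clusters), nodes_per_cluster)
    same_cluster = cluster_of[:, None] == cluster_of[None, :]
    prob = np.where(same_cluster, p_intra, p_inter)
    mask = rng.random((n, n)) < prob
    np.fill_diagonal(mask, False)        # no self-connections
    return cluster_of, mask

def grow_cluster(cluster_of, mask, new_nodes=8, p_attach=0.05):
    """Sketch of growth: append a fresh cluster and wire it sparsely
    into the existing graph (in the full idea, where 'strain' is detected)."""
    n_old = mask.shape[0]
    new_id = cluster_of.max() + 1
    cluster_of = np.concatenate([cluster_of, np.full(new_nodes, new_id)])
    n_new = n_old + new_nodes
    grown = np.zeros((n_new, n_new), dtype=bool)
    grown[:n_old, :n_old] = mask
    grown[n_old:, :] = rng.random((new_nodes, n_new)) < p_attach
    grown[:, n_old:] |= rng.random((n_new, new_nodes)) < p_attach
    np.fill_diagonal(grown, False)
    return cluster_of, grown

cluster_of, mask = build_topology()
cluster_of, mask = grow_cluster(cluster_of, mask)
```

The probabilities and sizes here are placeholders; the point is only that the cluster structure gives a natural unit both for distributing work and for growing the graph.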
GPU-Friendly Design
Unlike typical spiking neural nets, EvoNet is built for GPUs by:
Iteratively updating the entire network in parallel via sparse matrix multiplications.
Exploiting sparsity to skip zero connections.
Distributing clusters across GPUs with only lightweight communication between them, similar to model parallelism.
This should make it computationally feasible even on existing hardware.
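Here is a minimal sketch of the kind of GPU update loop described above, using PyTorch sparse tensors; the sizes, sparsity level, and update rule are placeholder assumptions rather than the prototype's actual implementation:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
n, density = 10_000, 0.001              # large, very sparse graph

# Random sparse weight matrix in COO format; zero connections are never stored.
nnz = int(n * n * density)
idx = torch.randint(0, n, (2, nnz), device=device)
val = torch.randn(nnz, device=device) * 0.1
W = torch.sparse_coo_tensor(idx, val, (n, n)).coalesce()

state = torch.zeros(n, 1, device=device)
inp = torch.randn(n, 1, device=device)
decay = 0.1

for _ in range(50):
    # One synchronous iteration over the whole graph is a single sparse matmul,
    # which is exactly the kind of operation GPUs handle well.
    # For multi-GPU runs, each cluster block would live on its own device and
    # only boundary activations would be exchanged between iterations.
    activation = torch.tanh(torch.sparse.mm(W, state) + inp)
    state = (1 - decay) * state + decay * activation
```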
Next Steps: Testing in DOOM
I plan to continue by testing EvoNet in DOOM or another simple RL environment. These games are perfect early testbeds: they’re complex enough to challenge learning, but controlled enough to clearly observe what the network is doing.
The main goal isn’t just to perform well, but to see whether EvoNet’s iterative, state-driven architecture can genuinely adapt over time, building stable internal representations and handling new situations without brute-force parameter scaling.
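To be concrete about what such a test setup might look like, here is a rough interaction loop with a stand-in gymnasium environment (CartPole instead of DOOM/ViZDoom) and a placeholder agent. `PlaceholderEvoNetAgent` is a hypothetical stub, not the actual prototype, and no learning rule is included; the point is only that the internal state persists across environment steps instead of being recomputed from scratch.

```python
import numpy as np
import gymnasium as gym

class PlaceholderEvoNetAgent:
    """Stand-in for the real prototype: keeps a persistent internal state
    across steps and lets activity circulate for a few iterations per step."""
    def __init__(self, obs_dim, n_actions, n_nodes=128, iters_per_step=5):
        rng = np.random.default_rng(0)
        self.W_in = rng.normal(scale=0.1, size=(n_nodes, obs_dim))
        self.W = rng.normal(scale=0.1, size=(n_nodes, n_nodes))
        self.W_out = rng.normal(scale=0.1, size=(n_actions, n_nodes))
        self.state = np.zeros(n_nodes)
        self.iters_per_step = iters_per_step

    def act(self, obs):
        drive = self.W_in @ obs
        for _ in range(self.iters_per_step):   # let the signal circulate
            self.state = 0.9 * self.state + 0.1 * np.tanh(self.W @ self.state + drive)
        return int(np.argmax(self.W_out @ self.state))

env = gym.make("CartPole-v1")                  # stand-in for a DOOM environment
agent = PlaceholderEvoNetAgent(env.observation_space.shape[0], env.action_space.n)

obs, info = env.reset(seed=0)
done = False
while not done:
    obs, reward, terminated, truncated, info = env.step(agent.act(obs))
    done = terminated or truncated
```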
Call for Collaboration & Support
I am an independent researcher working on this without institutional backing. Funding is currently the most significant hurdle — even modest resources would let me run experiments that are not possible on personal hardware.
I mainly wanted to publish this so the idea exists publicly; even if I cannot fully pursue it, it might inspire others to build on it. (I’ve also already written up detailed sections on entropy-guided learning, self-supervision, alignment, and expansion, which I plan to post if there is interest.)
If you are a researcher, engineer, hobbyist, or simply curious about this topic, I would love to hear your thoughts, critiques, or any leads on potential collaboration. Even small contributions can help bring this very different kind of architecture closer to reality.
E-mail: leonhard.evonet@gmail.com
Prototype repo: github.com/Leonhard17/SelfExpandingNN
Open Question for Discussion
I have read some prior posts on spiking neural nets and graph-based models here, but have not seen detailed explorations of persistent, iterative flows like this. If you know of discussions or papers that explore similar architectures, I would love pointers.
Can such persistent, iterative flows in a loosely structured graph really carry and refine useful information, or is this more likely to collapse under noise or uniformity?