I spent months trying to break the quadratic O(N^2) attention bottleneck of Transformers. Today I'm releasing Pulse-Field v3.0, an event-driven, neuro-symbolic architecture that runs in O(N) time.
Benchmarks vs. a GPT-2-style baseline (on CPU):
- Latency: 5ms (vs. 60ms)
- Context: tested up to 100k tokens with <3ms of added latency.
- Size: starts at ~20MB and grows dynamically.
The architecture uses "Event-Driven Routing" instead of dense attention matrices. Tokens travel as impulses through a graph of specialized "crystals" (logic/memory nodes), activating only relevant paths.
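To make the routing concrete, here is a minimal sketch of the idea in Python. This is an illustrative toy rather than the actual implementation: the Crystal and route_impulse names, the tanh update, and the norm-based thresholding rule are placeholders chosen for this example.

    # Toy sketch of event-driven routing; names and rules are illustrative only.
    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class Crystal:
        """A logic/memory node that fires only if an impulse clears its threshold."""
        transform: np.ndarray                      # per-node projection (d x d)
        threshold: float = 0.5
        edges: list = field(default_factory=list)  # indices of downstream crystals

    def route_impulse(crystals, start, impulse, max_hops=8):
        """Propagate one token impulse, taking only the strongest active edge."""
        node, path = start, [start]
        for _ in range(max_hops):
            impulse = np.tanh(crystals[node].transform @ impulse)
            best, best_score = None, 0.0
            for nxt in crystals[node].edges:
                score = float(np.linalg.norm(crystals[nxt].transform @ impulse))
                if score > crystals[nxt].threshold and score > best_score:
                    best, best_score = nxt, score
            if best is None:   # no downstream crystal fired; the impulse dies out
                break
            node = best
            path.append(node)
        return impulse, path

The point is that each token touches only a bounded number of nodes (hops x fan-out), independent of sequence length, instead of the N x N pair interactions of dense attention.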
This entire core was architected and coded in a 55-minute sprint using a swarm of AI agents (reasoning models) that I orchestrated to overcome the "average output" bias of standard LLMs.
Happy to answer questions about the routing logic!
The repository looked kinda fake. It looks like it has been taken down?
You might want to read the code your AI agents are producing. Even the agents are aware that the metrics are all made up.
https://github.com/makimilan/pulse-field-core/blob/main/puls...
Thanks. Most likely they adjusted something in the latest changes; unfortunately, it's still difficult to cope with hallucinations.
Where did the repository go? It has disappeared.