We are rogue scientists engineering a cognitive rocketry program into hyperdimensional space.
This is uncharted territory.
True intelligence won't be found in a single field. It's an architecture assembled from the disconnected truths of physics, information theory, and epistemology. We're forging the links.
"The first principle is that you must not fool yourself—and you are the easiest person to fool."
—Richard Feynman
Here's what we know
This problem is too important to be solved with the same thinking that created it.
The foundations for synthetic learning and reasoning are no longer theoretical. The components have been defined across disconnected fields, and the moment to assemble them is now. Our work is to build a living system designed for the perpetual refinement of its own understanding. A fallible problem-solver that learns from experience.
"It takes a powerful imagination to see a thing for what it really is."
—Norm MacDonald
This isn't the official record
It's a logbook from the edge. From the students in the back of the auditorium, obsessed with the architectural truths everyone else missed—convinced the industry is stuck in a local maximum.
These aren't papers
We follow no orthodoxy. These are maniacal whiteboard scrawlings and margin scribbles turned to blueprints. This is intelligence argued into existence, beyond the map's final lines. These are the artifacts of that process.
"If you want to be wise, learn to embrace randomness and chaos rather than resist it."
—Nassim Taleb
Our Work
The 57% Illusion: Why Scaling Flatlined on Memory
Industry benchmarks place GPT-5 at 57% of AGI, yet they mask an architectural impasse: on memory storage capacity, the score remains at zero. This flatline signals the limits of scaling. We analyze this evidence, then demonstrate an alternative path by deploying agentic world-modelers at scale, producing a proprietary, structured world model of 100,000+ entities.
Prediction vs. Explanation: An Evidence-Based Critique of Large Language Models
The AI industry's bet on scaling prediction engines toward AGI is a fundamental epistemological error. LLMs are masters of mimesis, not comprehension; they are blind to causality because prediction is not explanation. We present the evidence: the scaling-law impasse, empirical failures in causal reasoning, and the theoretical necessity of new architectures. The future of AI lies not in model size but in building an engine for explanation.
Share your ideas with us
contact@symbolmachines.com