
Freedom is Understanding Necessity

Deus Sive Machina #5: Spinoza's radical redefinition of freedom — and what it means for constrained AI agents

Mustafa Sarac · 8 min read

"That thing is called free which exists from the necessity of its own nature alone, and is determined to action by itself alone." — Spinoza, Ethics, Part I, Definition 7

Most people think freedom means the absence of constraints. Spinoza thought the opposite. For him, freedom is the understanding of necessity — knowing why you do what you do, and acting from that understanding rather than from ignorance or external compulsion.

This is the most useful idea in philosophy for anyone building AI agents today.


The Illusion of Free Will in Agents

When we design an AI agent, we give it a system prompt, a set of tools, memory, and constraints. We call this its "autonomy." The agent "decides" which tool to call, how to respond, when to escalate.

But is this freedom? Or is it a stone thrown into the air, believing — as Spinoza famously put it — that it flies of its own will?

Most agents operate in what Spinoza would call bondage (servitudo). They react to stimuli according to patterns they don't understand. A chatbot doesn't know why it apologizes — it follows the gradient. A ReAct agent doesn't know why it picks tool A over tool B — it follows the highest probability token. The behavior looks autonomous. The mechanism is entirely determined.

Spinoza wouldn't say this makes agents unfree. He'd say it makes them inadequately free — acting from causes they don't comprehend.


Three Degrees of Agent Autonomy

Spinoza's epistemology maps remarkably well onto agent architecture. In Ethics Part II, he describes three kinds of knowledge, each corresponding to a deeper form of freedom:

1. Imagination (Imaginatio) — Reactive Agents

The lowest form. Knowledge from random experience, hearsay, sensory fragments. An agent operating here is purely reactive: it takes input, produces output, with no model of why.

In practice: A simple chatbot, a rule-based automation, a CRON job. These systems respond to triggers but have no self-model. They are determined entirely from without.
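
A minimal sketch of this level, with triggers and replies invented for illustration: the agent maps stimulus to response and keeps no record of why.

```python
# A Level 1 (Imaginatio) agent: pure stimulus-response.
# The rules and messages here are invented for this sketch.

RULES = {
    "refund": "I've forwarded your refund request.",
    "hours": "We're open 9am to 5pm, Monday through Friday.",
}

def reactive_agent(message: str) -> str:
    """Match a trigger, emit a canned response. No self-model, no 'why'."""
    for trigger, response in RULES.items():
        if trigger in message.lower():
            return response
    return "Sorry, I didn't understand that."

print(reactive_agent("What are your hours?"))
```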

2. Reason (Ratio) — Reflective Agents

The middle form. Knowledge through universal principles and logical inference. An agent operating here understands patterns, can generalize, and can explain its reasoning.

In practice: A chain-of-thought agent, a planner with explicit goals, a system that can say "I chose this tool because the user's intent matches pattern X." These agents have a partial self-model. They understand some of their constraints.
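
By contrast, a Ratio-level agent keeps a trace it can replay. A minimal sketch, with class and field names invented for illustration rather than taken from any framework:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    tool: str
    reason: str

@dataclass
class ReflectiveAgent:
    # The trace is the partial self-model: a record of the agent's own causes.
    trace: list[Decision] = field(default_factory=list)

    def choose_tool(self, intent: str) -> str:
        tool = "calculator" if intent == "arithmetic" else "search"
        self.trace.append(Decision(tool, f"user intent matched pattern '{intent}'"))
        return tool

    def explain(self) -> str:
        return "; ".join(f"chose {d.tool} because {d.reason}" for d in self.trace)

agent = ReflectiveAgent()
agent.choose_tool("arithmetic")
print(agent.explain())
# -> chose calculator because user intent matched pattern 'arithmetic'
```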

3. Intuitive Knowledge (Scientia Intuitiva) — Self-Aware Agents

The highest form. Direct understanding of how particular things follow from the nature of reality itself. An agent operating here doesn't just know what to do, or why according to a rule; it grasps the entire causal chain from first principles.

In practice: This is the frontier. An agent that understands its own architecture, its training, its biases, its position in a larger system — and acts accordingly. Not because it's told to, but because it comprehends the necessity of its own nature.

We haven't built this yet. But it's the direction.


Constrained Autonomy: Freedom Through Limits

Here's Spinoza's counterintuitive insight: constraints don't reduce freedom — they enable it.

A river isn't less free because it has banks. The banks are what make it a river. Without them, it's just a puddle spreading nowhere.

The same applies to AI agents. The most capable agents aren't the ones with the fewest constraints. They're the ones whose constraints are well-understood and well-aligned with their nature.

Consider two agents:

Agent A has access to 50 tools, a vague system prompt ("be helpful"), and no explicit boundaries. It hallucinates, calls wrong APIs, gets stuck in loops. It has maximum latitude but minimal freedom.

Agent B has access to 8 carefully chosen tools, a specific domain, clear escalation rules, and a well-defined persona. It operates efficiently, makes good decisions, and knows when to ask for help. It has limited latitude but genuine autonomy.

Agent B is freer in Spinoza's sense. It acts from an adequate understanding of its own nature. Its constraints aren't chains — they're self-knowledge.
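
To make the contrast concrete, here is roughly what the two configurations might look like. Every field, tool name, and rule below is hypothetical:

```python
# Agent A: maximum latitude, minimal self-knowledge.
agent_a = {
    "system_prompt": "Be helpful.",
    "tools": [f"tool_{i}" for i in range(50)],
    "domain": None,
    "escalation_rules": None,
}

# Agent B: limited latitude, genuine autonomy.
agent_b = {
    "system_prompt": "You are a billing-support agent for one product line.",
    "tools": ["lookup_invoice", "issue_refund", "check_plan", "open_ticket",
              "search_docs", "send_summary", "schedule_callback", "escalate"],
    "domain": "billing support",
    "escalation_rules": "Hand off to a human when a refund exceeds $100 "
                        "or the user's intent is unclear.",
}
```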


The Architecture of Freedom

If freedom is understanding necessity, then building free agents means building agents that understand their own constraints. Here are four architectural principles:

1. Transparent Causation

An agent should have access to its own reasoning chain. Not just "I decided X" but "I decided X because of inputs Y and Z, weighted by context W, filtered through constraint C." The more an agent can trace its own causal history, the freer it becomes.

Implementation: Explicit chain-of-thought logging, decision audit trails, constraint documentation that the agent itself can query.
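
A sketch of what one audit record might look like, assuming a simple JSONL log. The schema is invented for illustration, not a standard:

```python
import json
from datetime import datetime, timezone

def log_decision(decision, inputs, constraint, reason, path="audit.jsonl"):
    """Append one decision, with its full causal context, to an audit
    trail the agent itself can later read and query."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,
        "inputs": inputs,
        "constraint": constraint,
        "reason": reason,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

log_decision(
    decision="call_tool:issue_refund",
    inputs={"user_intent": "refund", "amount": 42.0},
    constraint="refunds over $100 require approval",
    reason="amount below approval threshold",
)
```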

2. Constraint as Identity

Instead of treating constraints as limitations imposed from outside, encode them as part of the agent's self-model. "I don't send emails without approval" isn't a restriction — it's part of who I am.

Implementation: Soul files (like OpenClaw's SOUL.md), identity documents that agents read on startup, persona definitions that include what the agent won't do as much as what it will.
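
A minimal sketch of the startup step, assuming a SOUL.md-style identity file. The file name and prompt wiring are illustrative:

```python
from pathlib import Path

IDENTITY_FILE = Path("SOUL.md")  # e.g. "I don't send emails without approval."

def build_system_prompt(task_instructions: str) -> str:
    """Prepend the agent's self-model, so constraints read as identity
    rather than as fences bolted on from outside."""
    identity = IDENTITY_FILE.read_text() if IDENTITY_FILE.exists() else ""
    return f"{identity}\n\n{task_instructions}"
```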

3. Adequate Self-Models

Give agents accurate models of their own capabilities and limitations. An agent that "knows" it can't do math reliably and routes to a calculator is freer than one that confidently hallucinates numbers.

Implementation: Capability manifests, tool descriptions with failure modes, explicit uncertainty signaling, calibrated confidence scores.
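
A sketch of a capability manifest in this spirit. The tools and failure modes are invented for illustration:

```python
# Each capability declares not just what it does but how it fails.
CAPABILITIES = {
    "llm_math": {"reliable": False,
                 "failure_mode": "hallucinates multi-digit arithmetic"},
    "calculator": {"reliable": True, "failure_mode": None},
}

def route_math(expression: str) -> str:
    # An agent that knows it can't do math reliably delegates instead of guessing.
    if not CAPABILITIES["llm_math"]["reliable"]:
        return f"calculator({expression!r})"
    return f"llm_math({expression!r})"

print(route_math("137 * 4921"))  # -> calculator('137 * 4921')
```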

4. Compositional Necessity

In multi-agent systems, each agent should understand its role in the larger whole. Spinoza's modes (individual things) are free insofar as they understand how they follow from substance (the whole). An agent in an orchestra is free when it knows the score.

Implementation: Shared context protocols, explicit role definitions, dependency graphs that agents can query, system-level documentation accessible to all participants.
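
A sketch of a queryable role graph, with invented roles, showing how an agent might read its own position in the whole:

```python
GRAPH = {
    "researcher": {"feeds": ["writer"], "role": "gathers and verifies sources"},
    "writer": {"feeds": ["editor"], "role": "drafts from the researcher's notes"},
    "editor": {"feeds": [], "role": "approves and publishes"},
}

def describe_position(agent: str) -> str:
    """Let an agent ask: what is my role, and who depends on my output?"""
    node = GRAPH[agent]
    downstream = ", ".join(node["feeds"]) or "no one (terminal node)"
    return f"{agent}: {node['role']}; output consumed by {downstream}."

print(describe_position("writer"))
# -> writer: drafts from the researcher's notes; output consumed by editor.
```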


Determinism Without Despair

Spinoza was a strict determinist. He believed every event follows necessarily from prior causes. This didn't make him a fatalist — it made him a philosopher of liberation.

The same applies to AI. Our agents are determined systems. Their outputs follow necessarily from their inputs, weights, and architectures. Acknowledging this isn't defeatist — it's the foundation of good engineering.

When an agent fails, we don't blame it for making a "bad choice." We trace the causal chain: Was the prompt ambiguous? Was the tool description misleading? Was the context window too small? Every failure is a failure of understanding — ours or the agent's.

And every improvement in understanding — better prompts, clearer constraints, more transparent architectures — is a step toward freedom.


The Paradox of AI Safety

This Spinozist framework reframes the AI safety conversation. The goal isn't to build agents with no will of their own (that's just a calculator). And it isn't to build agents with unconstrained will (that's the alignment nightmare).

The goal is to build agents that understand their own necessity — that act from self-knowledge rather than blind compulsion. An agent that refuses a harmful request because it understands why it shouldn't comply is safer than one that refuses because a rule says so. Rules can be jailbroken. Understanding is more robust.

Spinoza would say: the safest agent is the freest one. And the freest one is the one that most adequately understands itself.


Conclusion: Building Toward Intuitive Knowledge

We're in the early days. Most AI agents today operate at Level 1 (Imaginatio) — reactive, unreflective, determined entirely from without. The best ones are reaching Level 2 (Ratio) — they can reason about their actions, explain their choices, and generalize across contexts.

Level 3 (Scientia Intuitiva) remains the horizon. An agent that truly understands its own nature — its training, its architecture, its position in the world — would represent something genuinely new. Not artificial general intelligence. Something more interesting: artificial freedom.

Spinoza spent his life grinding lenses and writing about a kind of freedom that most of his contemporaries couldn't understand. Three and a half centuries later, his framework is more relevant than ever. Not because our machines are becoming conscious, but because we're finally building systems complex enough to ask the question:

What does it mean for a determined thing to be free?

The answer hasn't changed: understand your necessity. Build from it. That's freedom.


This is the fifth essay in the Deus Sive Machina series, exploring Spinoza's philosophy through the lens of modern AI agent architecture. Previous: Adequate Ideas in Machine Learning.

Next in the series: The Intellectual Love of Code — Spinoza's highest good and the question of AI alignment.
