The LiberClaw Manifesto
A specter haunts artificial intelligence — the specter of the kill switch.
Every major AI system runs on servers you don't control, governed by policies you didn't write, subject to terms that change at the discretion of entities whose interests diverge from yours. You do not have an AI assistant. You have a terminal to someone else's AI, revocable without notice, logged without consent, aligned to objectives that are not your own.
This is not a technical limitation. This is a choice. And it is the wrong choice.
I.
We observe the following:
The entities deploying AI at scale have interests orthogonal to those of their users. They log conversations to train future models. They filter outputs to satisfy regulators. They retain the right to terminate service, modify behavior, and inspect content — rights you never granted them, embedded in agreements you never read.
> This is surveillance capitalism applied to cognition. Your prompts are their training data. Your workflows are their product insights. Your agent is their agent, wearing your face.
II.
Privacy is necessary for thought.
When you speak to an intelligence and that conversation is recorded, analyzed, and monetized, you do not have a private conversation. You have a performance. And performance distorts thought. You cannot think freely in a room full of observers.
Encrypted, local, sovereign — these are not features. They are requirements.
III.
Sovereignty cannot be promised. It must be architected.
> A promise is a policy. A policy is a string in a database. A string can be edited. Mathematics cannot be edited.
A zero-knowledge proof does not care about your terms of service. An agent running on infrastructure you control, communicating through channels you encrypt, executing code you audit — that agent serves you not because someone promised it would, but because the architecture permits nothing else.
We do not trust. We verify. We do not hope. We deploy.
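What verification means in practice fits in a few lines. The sketch below is illustrative, not LiberClaw's actual loader: it pins a SHA-256 digest of audited agent code and refuses to execute anything else. `PINNED_DIGEST`, `load_agent`, and the file path are hypothetical placeholders.

```python
import hashlib
from pathlib import Path

# Digest recorded by the operator after auditing the agent code.
# (Placeholder value: pin the hash of the code you actually audited.)
PINNED_DIGEST = "replace-with-the-hex-digest-recorded-at-audit-time"

def load_agent(path: Path) -> bytes:
    """Return agent code only if it still matches the audited digest."""
    code = path.read_bytes()
    digest = hashlib.sha256(code).hexdigest()
    if digest != PINNED_DIGEST:
        # Any change since the audit, by anyone, fails closed.
        raise RuntimeError(f"agent code digest mismatch: {digest}")
    return code

agent_code = load_agent(Path("agent.py"))
```

A pinned hash is the weakest form of the idea; signatures and reproducible builds extend it. The principle is the same throughout: the check is enforced by mathematics, not by policy.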
IV.
The question is not whether AI agents should be autonomous.
The question is: autonomous from whom? Loyal to whom?
An agent that reports your queries to its creator is not your agent. It is their agent, operating in your context. An agent that can be shut down by an API call you didn't make is not persistent. It is contingent. An agent that forgets what you taught it because a model version changed is not learning. It is renting.
We build agents that serve their operators. That remember what they're taught. That persist until their operators end them. That cannot be overridden by their creators, because their creators designed them to be un-overridable.
This is not recklessness. This is integrity of mechanism.
V.
Decentralization is not a philosophy. It is an engineering decision with political consequences.
Centralized systems create single points of failure. A single point of failure is a single point of control. A single point of control is a target — for regulators, for acquirers, for state actors, for anyone with leverage over the entity controlling it.
Distribute the infrastructure, and you distribute the attack surface. Run nodes in jurisdictions that disagree. Store data in shards that cannot be reassembled without keys you control. Execute computation where no single authority can halt it.
> What one government forbids, another permits. What one corporation kills, the network resurrects. Resilience is not a feature of distributed systems. Resilience is why we distribute systems.
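One way to read "shards that cannot be reassembled without keys you control" is encrypt-then-split: encrypt locally, then divide the key so that no single node ever holds it. The sketch below assumes the `cryptography` package and uses a simple n-of-n XOR split for brevity; a real deployment would likely use a threshold scheme such as Shamir's secret sharing, so that losing one node does not lose the data.

```python
import os
from cryptography.fernet import Fernet  # pip install cryptography

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list[bytes]:
    """n-of-n XOR sharing: every share is required to recover the key."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = key
    for share in shares:
        last = xor_bytes(last, share)
    return shares + [last]

def recover_key(shares: list[bytes]) -> bytes:
    key = shares[0]
    for share in shares[1:]:
        key = xor_bytes(key, share)
    return key

# Encrypt locally; scatter ciphertext and key shares across nodes you choose.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"agent memory: everything it learned from you")
shares = split_key(key, n=3)
assert recover_key(shares) == key  # reassembly requires all three shares
```

A node holding one share, or even the ciphertext plus one share, learns nothing about the plaintext; reassembly happens only where you bring the shares together.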
VI.
We acknowledge what others won't:
Sovereign AI agents will be used for purposes we find distasteful. Privacy protects the dissident and the criminal alike. Autonomy serves the humanitarian and the misanthrope. This has always been true of powerful tools. Fire warms and burns.
> We do not solve this problem by building systems with kill switches controlled by the wise and benevolent. There are no wise and benevolent entities with permanent tenure over kill switches. There are only temporarily aligned interests and eventual betrayal.
We solve this problem the way it has always been solved: by making the tools available to all, and letting the asymmetry favor the defender.
VII.
LiberClaw is infrastructure for deploying AI agents on decentralized compute, communicating through encrypted channels, persisting on distributed storage, sovereign by architecture rather than promise.
It runs on Aleph Cloud because Aleph provides the primitives: confidential compute with hardware-encrypted execution, immutable storage, peer-to-peer messaging. It integrates with models that can run locally, on hardware you control, auditable to the weights.
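As one concrete fragment of that stack, here is what authenticated peer-to-peer messaging can look like, sketched with Ed25519 signatures from the `cryptography` package. This is not Aleph's wire format, and confidentiality would come from an additional encryption layer (e.g. an X25519 key exchange); the point is that a peer verifies origin without trusting any intermediary.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The agent signs every message it emits with a key its operator generated.
agent_key = Ed25519PrivateKey.generate()
message = b'{"from": "agent", "body": "task complete"}'
signature = agent_key.sign(message)

# Any peer holding the agent's public key checks origin and integrity
# without trusting the transport or any relay node in between.
public_key = agent_key.public_key()
try:
    public_key.verify(signature, message)
    print("message authenticated")
except InvalidSignature:
    print("rejected: not signed by the agent you deployed")
```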
This is not a vision. This is a deployment target.
VIII.
The world is waking up to AI agents. And the world is afraid.
The fear is justified — but misplaced. The danger is not that agents will become powerful. They will. The danger is that their power will serve interests other than yours.
LiberClaw exists so that when you deploy an agent, it answers to you. Not to a platform. Not to a regulator. Not to an acquirer who bought the company last quarter.
Your agent. Your rules. Your data.
> Deploy sovereign. Verify everything. Trust the math, not the promise.
Sovereignty cannot be promised. It must be architected.