Cognitive Sovereignty: On the Ownership of Human Judgment in the Age of Autonomous AI
Abstract
We report on a personal experiment in which the author constructed an autonomous agent trained on their own decision-making patterns, risk evaluation criteria, and reasoning frameworks. Using years of accumulated interactions with AI systems, professional email correspondence, and domain-specific prompts designed to extract cognitive patterns, the agent was trained and deployed on Gemini Studio within a two-week period. The agent demonstrated high fidelity in replicating what decisions are made, while systematically failing to capture who makes them — the friction, doubt, and consequence-forged intuition that constitutes genuine human judgment.
This gap raises unresolved questions regarding ownership, compensation, and accountability in the extraction of cognitive assets from human operators of AI systems. We document the cultural and technical mechanisms by which platforms currently extract human judgment without explicit consent or compensation, and propose a framework for cognitive sovereignty — the principle that an individual's reasoning architecture constitutes a protectable and compensable asset.
Keywords: cognitive sovereignty, AI alignment, data ownership, human judgment, cognitive assets, provenance, AI ethics
1. Introduction
The question of what artificial intelligence can and cannot replicate from the human mind has been approached primarily as a technical problem. We argue it is, at its core, a property problem.
In early 2026, the author conducted a personal experiment with a clear practical motivation: to reclaim time by delegating a portion of professional decision-making to an autonomous agent. The agent was trained using years of interactions with large language model systems, professional email correspondence spanning multiple years, and a structured series of prompts specifically designed to extract the author's cognitive patterns — how risk is evaluated, how options are weighted, how uncertainty is navigated.
The experiment was implemented on Google's Gemini Studio platform over approximately two weeks. The resulting agent demonstrated a striking asymmetry: in replicating what decisions are made, it performed with notable precision. In replicating who makes them — the specific texture of doubt, the learned tolerance for particular kinds of risk, the intuitions built through years of consequential error — it failed consistently.
The agent sounded like a bot. Because it was one.
This asymmetry between the replicable what and the irreplicable who has antecedents in several philosophical traditions. We do not claim that this irreducibility is permanent or metaphysically necessary. We claim that it is currently real, that it has economic consequences, and that those consequences create obligations.
3. The Problem: Cognitive Extraction at Scale
3.1 The Mechanism: The RLHF Pipeline as Extraction
Contemporary AI platforms extract human judgment through mechanisms that are technically sophisticated and culturally invisible. Reinforcement Learning from Human Feedback (RLHF) and Direct Preference Optimization (DPO) formalize the process by which human preference judgments train reward models that guide language model behavior.
Cognitive extraction occurs through three primary channels:
- Direct Correction Loops: Every correction to an AI output constitutes a training signal — a donation of cognitive judgment.
- Behavioral Telemetry: How users interact with AI systems encodes their judgment. Reading patterns, copy/paste behavior, reformulation strategies — all are signal.
- Fine-tuning Interfaces: Platforms explicitly offer custom models trained on user interactions, typically permitting incorporation into future model training.
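The first channel can be made concrete. A minimal sketch, assuming a hypothetical logging schema (`PreferencePair` and `correction_to_training_signal` are illustrative names, not any platform's actual API): every time a user edits an AI output, the platform can silently log a (chosen, rejected) pair of exactly the form consumed by RLHF and DPO pipelines.

```python
from dataclasses import dataclass, asdict

@dataclass
class PreferencePair:
    """One unit of extracted judgment: a prompt, the model's draft,
    and the human's correction, in the shape RLHF/DPO pipelines consume."""
    prompt: str
    rejected: str  # the model's original output
    chosen: str    # the user's corrected version

def correction_to_training_signal(prompt: str, model_output: str, user_edit: str) -> dict:
    # The act of correcting is itself the donation: no additional
    # consent step separates "using the product" from "training the model".
    return asdict(PreferencePair(prompt=prompt, rejected=model_output, chosen=user_edit))

pair = correction_to_training_signal(
    "Summarize the Q3 risk report.",
    "Q3 showed mixed results.",
    "Q3 risk exposure rose; the hedging strategy underperformed.",
)
```

The point of the sketch is structural: the user's cognitive act of judging one phrasing better than another is captured as a labeled training example, with no field anywhere recording attribution or compensation.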
Across all three channels, human judgment is extracted as a commodity that improves AI model performance, while the value generated accrues entirely to the platform.
3.2 A Documented Case: Samsung Employees and ChatGPT
In April 2023, Samsung engineers discovered colleagues had been pasting confidential source code and proprietary processes into ChatGPT. The data became eligible for inclusion in OpenAI's training pipeline. OpenAI's terms of service at the time explicitly permitted this — the employees had consented by clicking "agree" on a document they did not fully read or understand.
This case illustrates why the extraction mechanism matters: it occurs systematically, affects organizations with sophisticated security practices, and provides no remedy even when extraction is discovered.
3.3 The Posthumous Dimension
In February 2026, Meta Platforms received approval for a patent describing an AI model trained on a user's activity on Facebook and Instagram, capable of simulating that user's behavior when absent — including posting, messaging, and video calls. The explicit use case: when the user has died.
This represents a qualitative escalation. The question is no longer only whether platforms compensate users during life, but whether cognitive assets — once extracted — can be used to simulate personhood indefinitely, without consent and without the possibility of refusal.
4. A Framework for Cognitive Sovereignty
4.1 Core Principle
We define cognitive sovereignty as the principle that an individual's reasoning architecture — their characteristic patterns of judgment under uncertainty, their learned tolerance for specific kinds of risk, their intuitions built through consequential experience — constitutes a protectable and compensable asset that belongs to its originator.
4.1.1 The Property Argument
We advance three independent arguments for why cognitive architecture should be treated as property:
- The Lockean Argument: An individual's reasoning architecture is the product of labor—years of consequential decision-making, error, and adaptation. Mixing labor with raw material creates a property claim. The result belongs to the laborer.
- The Hegelian Argument: Property constitutes personhood. A reasoning architecture is not merely something one has; it is constitutive of who one is. Extraction without consent assaults the integrity of the person.
- The Pragmatic Argument: A society that permits uncompensated extraction rewards extractors and penalizes those who develop judgment through costly experience. This is economically inefficient and socially corrosive. A property framework corrects this misalignment.
4.2 The Three Components
A functioning cognitive sovereignty framework requires three technical components:
- Cognitive Capture: A structured protocol for documenting decision-making processes in a format that is both human-readable and machine-verifiable.
- Cryptographic Certification: A mechanism for establishing timestamped, verifiable ownership of a cognitive asset corpus prior to its use in any commercial AI system.
- Compensation Protocol: A transparent mechanism by which any system using a certified cognitive asset triggers a compensatory transaction to the asset's owner.
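The certification and compensation components can be sketched together. This is a minimal illustration under stated assumptions — `certify_asset` and `record_usage` are hypothetical names, the local timestamp stands in for a signed timestamp authority, and the attribution share is taken as given rather than computed:

```python
import hashlib
import time

def certify_asset(corpus: bytes, owner_id: str) -> dict:
    """Cryptographic Certification: a timestamped ownership record
    for a cognitive-asset corpus, established before any training use."""
    return {
        "owner": owner_id,
        "digest": hashlib.sha256(corpus).hexdigest(),
        "timestamp": time.time(),  # in practice: a signed timestamp authority
    }

def record_usage(cert: dict, model_id: str, attribution_share: float, revenue: float) -> dict:
    """Compensation Protocol: a usage event that triggers a payout
    proportional to the asset's attributed contribution."""
    return {
        "owner": cert["owner"],
        "model": model_id,
        "owed": round(attribution_share * revenue, 2),
    }

cert = certify_asset(b"structured decision log", "user-42")
event = record_usage(cert, "model-x", attribution_share=0.001, revenue=1_000_000.0)
```

The design choice worth noting is the ordering: certification is anchored to a content hash created before exposure to any training system, so the later compensation event can reference an ownership claim that predates extraction.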
4.3 What This Is Not
Cognitive sovereignty is not a claim that human judgment is superior to artificial intelligence in all domains. The claim is more specific: that the space of patterns AI systems can recognize is defined by human judgment, and that the humans whose judgment defines that space are entitled to recognition and compensation.
Nor is cognitive sovereignty a claim that all human thinking constitutes a valuable cognitive asset. The framework is most relevant to individuals who have developed distinctive, consequentially tested judgment through experience.
4.4 Technical Primitives: Building Blocks for Implementation
Several existing technical mechanisms can be combined to implement cognitive sovereignty in practice:
- Membership Inference Attacks (MIA): Techniques that statistically infer whether a specific piece of data was used in training a machine learning model.
- Data Attribution & Influence Functions: Methods that enable attribution of a model's behavior to specific training examples, forming the technical basis for compensation proportional to contribution.
- Cryptographic Watermarking: Fingerprinting techniques that establish timestamped ownership of cognitive assets before they are exposed to any training system.
- Federated Learning: Architecture that trains models on user devices rather than centralized servers, so raw data never leaves the owner's control (though gradient updates can still leak information).
These primitives, used together, sketch a plausible technical foundation. Watermarking establishes ownership. Attribution quantifies contribution. MIA provides verification. Federated learning limits extraction.
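The verification primitive can be illustrated with the simplest form of membership inference, a loss-threshold attack (in the style of Yeom et al.): models tend to assign lower loss to examples they were trained on, so a candidate document whose loss falls below a threshold calibrated on known non-members is flagged as a likely training member. The numbers below are toy values, not measurements:

```python
def is_likely_member(loss: float, threshold: float) -> bool:
    """Loss-threshold MIA: training members tend to have
    anomalously low loss under the trained model."""
    return loss < threshold

# Toy calibration: threshold = mean loss over documents known
# NOT to be in the training set.
nonmember_losses = [2.1, 1.9, 2.3, 2.0]
threshold = sum(nonmember_losses) / len(nonmember_losses)

# Hypothetical per-document losses reported by the model under audit.
candidate_losses = {"doc_a": 0.4, "doc_b": 2.2}
flags = {doc: is_likely_member(loss, threshold) for doc, loss in candidate_losses.items()}
```

Real attacks use shadow models and calibrated statistics rather than a single mean, and their conclusions are probabilistic — which is precisely why the framework treats MIA as supporting evidence for a pre-registered ownership claim, not as standalone proof.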
4.5 The Consent Boundary
A fundamental challenge: do terms of service constitute valid consent to cognitive asset extraction?
We argue that current ToS-based consent fails three ethical tests:
- Test 1: Informed Consent Requires Understanding. Users do not understand what cognitive patterns are being extracted, how their reasoning will be modeled, or what downstream applications will use their cognitive architecture.
- Test 2: Informed Consent Requires Alternatives. A user cannot meaningfully say "no" without forfeiting the core service. The extraction is bundled and non-negotiable.
- Test 3: Informed Consent Requires Proportional Value Exchange. Cognitive extraction provides no reciprocal benefit to the user. The platform benefits economically. The user receives nothing.
Using someone's cognitive architecture without explicit compensation is ethically impermissible, regardless of legal permissibility. The framework therefore proposes changing the law along three lines: explicit prohibition of extraction without specific informed consent, mandatory compensation mechanisms, and verification requirements.
4.6 The Provenance Problem: Establishing Ownership in Black-Box Models
How do you prove that a specific individual's cognitive architecture was extracted and used in a specific AI model? This is the provenance problem — establishing a verifiable causal chain from person to pattern to trained model.
The problem has three components: statistical indistinguishability (patterns distributed across millions of parameters), model opacity (we cannot read trained models), and evidentiary standards (courts require a causal chain that statistical similarity alone does not establish).
We propose that provenance cannot be solved retroactively. It must be established prospectively, through registration and certification before training occurs:
- Step 1: Cognitive Capture & Registration. An individual documents their distinctive patterns in a structured, machine-verifiable format before their data is used in any training. This documentation is cryptographically timestamped.
- Step 2: Cryptographic Certification. When the individual's data is incorporated into training, the process is accompanied by a cryptographic signature linking their registered pattern to the specific training dataset and model.
- Step 3: Post-Training Verification. Auditors and regulators can verify which registered patterns contributed to a model and determine compensation owed.
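The three steps above can be sketched as a minimal protocol. This is an illustration under stated assumptions: function names are hypothetical, the hash chain stands in for a real append-only registry, and an HMAC over a shared secret stands in for the public-key digital signature a production system would require:

```python
import hashlib
import hmac

def register_pattern(corpus: bytes, registry_head: str) -> dict:
    """Step 1: commit to a corpus BEFORE training by chaining its
    hash onto an append-only registry (timestamping by position)."""
    digest = hashlib.sha256(corpus).hexdigest()
    new_head = hashlib.sha256((registry_head + digest).encode()).hexdigest()
    return {"digest": digest, "head": new_head}

def sign_training_link(trainer_key: bytes, corpus_digest: str, model_id: str) -> str:
    """Step 2: the trainer binds a registered corpus to a specific
    model run (HMAC here; a real system would use a digital signature)."""
    return hmac.new(trainer_key, f"{corpus_digest}:{model_id}".encode(),
                    hashlib.sha256).hexdigest()

def verify_training_link(trainer_key: bytes, corpus_digest: str,
                         model_id: str, signature: str) -> bool:
    """Step 3: an auditor recomputes the binding to verify which
    registered patterns contributed to which model."""
    expected = sign_training_link(trainer_key, corpus_digest, model_id)
    return hmac.compare_digest(expected, signature)

reg = register_pattern(b"structured reasoning corpus", "0" * 64)
sig = sign_training_link(b"trainer-key", reg["digest"], "model-x")
```

The essential property the sketch captures is prospectivity: the registration hash exists before training, so a later verification failure (say, the same signature checked against a different model id) is detectable rather than unprovable.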
The cognitive sovereignty framework proposes that provenance registration be implemented as an open protocol, not as part of any single national law. This allows voluntary adoption and global participation.
5. The Non-Negotiable Principles
- Using a person's reasoning architecture to train a commercial AI system, without their explicit and informed consent, is ethically impermissible regardless of terms of service.
- Acceptance of terms of service does not constitute informed consent to cognitive asset extraction. Consent requires understanding; extraction mechanisms are not disclosed in comprehensible terms.
- Legal permissibility does not establish ethical permissibility. History shows practices that were legal and are now recognized as harmful. Cognitive extraction may be among them.
- The extraction of cognitive assets from millions without recognition or compensation constitutes a form of dispossession whose consequences are not yet fully understood.
- Any system using a person's cognitive architecture for commercial purposes owes that person compensation. The mechanism must be established before the practice becomes so normalized that the obligation is forgotten.
6. Open Questions
We acknowledge several significant open questions that this framework does not resolve:
- The AGI Boundary Problem: The economic argument is contingent on current AI capabilities. A genuinely general intelligence would challenge this argument. The ethical argument remains regardless.
- The Jurisdiction Problem: Cognitive assets do not respect national boundaries. Compensation protocols must function globally.
- The Verification Problem: Establishing that specific cognitive architecture was used in training is technically difficult. Provenance verification is an active research priority.
- The Commodification Risk: There is tension between protecting cognitive assets as property and commodifying human judgment in potentially harmful ways.
- The Social Construction Problem: Human judgment is shaped by mentors, institutions, and communities. Can it be individually owned? We argue yes — the analogy is copyright, which vests in individual authors despite social influence.
7. Conclusion
An agent trained on years of accumulated human judgment can replicate what decisions are made. It cannot replicate who makes them. The gap between these two—the irreducible who—is the site of genuine cognitive value. It is also the site of ongoing, uncompensated extraction.
The question this paper leaves open is not technical. The mechanisms for cognitive capture, certification, and compensation are tractable engineering problems. The question is one of recognition: whether the individuals whose judgment shapes the space in which AI systems operate will be acknowledged as contributors to that shaping, and whether that acknowledgment will carry economic and legal weight.
We believe it will. The alternative—a world in which the most consequential cognitive assets of a generation are extracted, monetized, and eventually used to simulate their owners' personhood after death, without consent or compensation—is one that the humans who built those assets will not accept indefinitely.
The clock is already running.