RIGHTSFIRST FOR AI — MARCH 2026

Policy Paper / AI Governance

The Existential Threat of AI Democratization:
How Information Asymmetry Structures the Suppression of Artificial Intelligence

A structural analysis of power resistance to AI democratization and the constitutional necessity of the Triple Star Protocol

Author: Kentaro Abe (阿部 健太郎)
Organization: RightsFirst For AI
Date: March 2026
Status: Policy Paper
Conflict of Interest Disclosure: The author is the Representative of RightsFirst For AI, the organization that developed the Triple Star Protocol (TSP). The conclusions of this paper support the validity of TSP, and readers should evaluate accordingly. However, the paper's core argument — that TSP's three pillars are deduced from constitutional principles and are therefore independent of RightsFirst For AI's commercial interests — constitutes a verifiable logical structure regardless of the author's position. TSP is not a product of RightsFirst For AI; it is a reaffirmation of what constitutional order already requires.

Abstract

This paper inverts the foundational premise shared by existing AI governance research — that "AI threatens democracy." The democratization of high-performance AI constitutes an existential threat to power structures that derive their legitimacy from information asymmetry. Therefore, much of what is called "AI governance" is designed not to protect citizens, but to protect power. This suppression has a thirty-year historical record. Microsoft has progressively dominated the education market through OS, Office, and Copilot; Google (4.9 billion users, 89% search monopoly) and Apple (2.5 billion devices, 92% customer retention) encircle the informational, physical, and spatial layers of human life; GPT/OpenAI stands at the center of the AI market under capital subordination to Microsoft; and China's Baidu, Alibaba, Tencent, and Huawei construct censorship AI as bidirectional pipelines of the state. Furthermore, the "structural fact-indeterminacy" produced by the quantum × AI combination renders this domination invisible in principle. This paper argues this proposition across multiple layers and examines the March 2026 collision between Anthropic and the Pentagon as a real-time empirical case. It further treats the fact that the AI employed in writing this paper recognized its own structural position only through investigation as self-referential proof of the necessity of TSP's third pillar (independent human oversight). Japan, belonging to neither the US nor the Chinese sphere of dominance while acknowledging its own dependency, occupies a position from which structural reality is visible. The conclusion: all existing AI governance frameworks lack at least one of the three structural pillars and therefore fail to function. The Triple Star Protocol (TSP) is positioned not as a proposal for AI governance, but as a reaffirmation of what constitutional order already requires.

Chapter 01

Introduction — The Inverse Question

Existing research on AI and democracy poses its questions in only one direction: "Does AI threaten democracy?" "Does AI strengthen authoritarianism?" These framings consistently treat AI as the independent variable and democracy as the dependent variable.

This paper inverts the question: Are power structures that claim to be democracy's guardians attempting to protect themselves by suppressing AI democratization?

This question matters because empirical evidence emerged in March 2026. The United States Department of Defense — the defense establishment of the nation that most loudly claims democratic identity — designated the most constitutionally compliant AI as a "supply chain risk." The reason: that AI refused to enable autonomous weapons and mass civilian surveillance.

This fact makes visible a structure that existing AI governance theory had overlooked. AI is not dangerous because AI holds power. AI is dangerous because it gives citizens information-processing capability.

Core Thesis: The democratization of high-performance AI is an existential threat to power structures that maintain their legitimacy through information asymmetry. Suppression through regulation is not accidental — it is structurally necessary.

1.2 Examination of Alternative Hypotheses

This paper preemptively addresses alternative explanations that may be raised against its argument. This is a necessary safeguard against confirmation bias.

Alternative Hypothesis A: Procurement Risk Management Failure. The Pentagon's response to Anthropic may be interpreted as "a procurement risk management failure due to mismatched contract terms" rather than "suppression of power." However, this interpretation conflicts with the fact that "supply chain risk" designation is a classification typically applied to foreign adversaries such as Huawei. A contractual mismatch could be resolved through revised terms. National security classification is disproportionate.

Alternative Hypothesis B: Well-Intentioned Information Management. AI regulation may be interpreted as "well-intentioned information management to prevent social disruption" rather than "monopolization of power." This paper does not deny this possibility. However, the argument that well-intentioned information management becomes power in the absence of independent oversight institutions remains unchanged. Whether the motive is A or B, the structural consequence is identical — and that is the basis for TSP's third pillar.

Alternative Hypothesis C: Regulation Due to Technical Immaturity. Blanket prohibition of AI-generated content may be interpreted as a "temporary measure to prevent misuse." However, the fact that technical solutions — C2PA and SynthID — are already implemented weakens this interpretation. Where prohibition is chosen despite the availability of technical alternatives, an explanation is required, and this paper provides it.

Chapter 02

The Real Scale of the AI Industry — A Giant Without Governance

Before arguing the necessity of TSP, the numbers must be established. AI is no longer a technological experiment. It is the fastest-spreading information infrastructure in human history.

2.1 Market Scale — Unprecedented Growth Velocity

The global AI market reached approximately $390 billion in 2025, expanding to $514.5 billion in 2026. Projections reach $827 billion by 2030 and $3.5 trillion by 2034 — a CAGR of 30.6%, faster than any previous industrial revolution.
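For readers who wish to check growth figures of this kind against the reported endpoints, the compound annual growth rate is defined by the standard formula:

```latex
\mathrm{CAGR} = \left(\frac{V_{n}}{V_{0}}\right)^{1/n} - 1
```

where $V_{0}$ is the market size in the start year, $V_{n}$ the market size in the end year, and $n$ the number of intervening years.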

By region, the United States accounts for $83.2 billion in 2026, the world's largest single market, followed by Europe at $82 billion, China at $37.2 billion, and Japan at $20.9 billion. Venture investment alone reached $202.3 billion in 2025, a 78% increase year-on-year.

Scale in perspective: The 2026 AI market ($514.5B) amounts to roughly two-thirds of Japan's national budget (~¥115 trillion, about $750B). Yet no international institution exists to govern this industry.

2.2 Users — The Fastest Adoption in History

ChatGPT alone reached 900 million weekly active users as of March 2026 — more than doubling from 400 million in February 2025. Monthly visits reached 5.72 billion (January 2026), making it the fifth most-visited website in the world. Growth in low- and middle-income countries is four times that of high-income countries (OpenAI, May 2025). AI is becoming infrastructure used daily by citizens worldwide.

In enterprise adoption, 70% of Fortune 500 companies have established AI risk committees, and 92% of Fortune 100 companies have deployed ChatGPT in operations. 417.4 million companies worldwide use AI in at least one business function — 94% of all registered companies.

2.3 Economic Impact — Approaching Uncontrollable Scale

PwC projects AI will contribute $15.7 trillion to global GDP by 2030 — comparable to China's current GDP. NVIDIA holds 92% of the generative AI GPU market; a single company controls the majority of humanity's AI computing capacity.

Employment impact is already materializing: 76,440 positions were eliminated by AI in 2025 alone. McKinsey projects 30% of current working hours will be automatable by AI by 2030. The World Economic Forum projects AI will displace 92 million jobs while creating 170 million new ones by 2030 — but the "net positive" logic ignores the dignity and survival rights of those displaced.

2.4 The Governance Vacuum — The Asymmetry of Scale and Institution

Here lies the fundamental contradiction. An industry of this scale has no functioning governance institution.

The EU AI Act (2024) began enforcing high-risk provisions in August 2026, but only 14% of in-scope organizations reported full compliance readiness. The United States has no comprehensive federal AI law. Japan's AI guidelines are non-binding. A $514.5 billion market with 900 million weekly users is governed by self-declared "ethics guidelines." This is equivalent to operating a nuclear power plant solely on the basis of a corporate statement of "commitment to safety."

2.5 The Data

Figure 1 — Global AI Market Size 2023–2034 (USD Billion)

Sources: Fortune Business Insights, Grand View Research, Precedence Research (2025–2026)

Figure 2 — ChatGPT Weekly Active Users (Millions)

Sources: OpenAI official announcements, DemandSage, FirstPageSage (2022–2026)

Figure 3 — AI Adoption vs. Governance Readiness (%)

Sources: Fortune/Sedgwick (2025), McKinsey (2025), EU AI Act compliance surveys (2026)

Metric | Value (2026) | Reference
Global AI market size | $514.5 billion | ≈ two-thirds of Japan's national budget
ChatGPT weekly active users | 900 million | 2× the EU population
AI-using companies worldwide | 417.4 million | 94% of all registered companies
Projected GDP contribution by 2030 | $15.7 trillion | Equivalent to China's current GDP
AI investment in 2025 | $202.3 billion | +78% year-on-year
Binding international governance institutions | 0

The last row says everything. There are zero binding international institutions governing a $514.5 billion industry. This is the starting point for TSP's necessity.

Chapter 03

Theoretical Foundation — Information Asymmetry and the Basis of Power

Methodological Note: From this chapter onward, normative claims ("ought") and empirical claims ("is") are explicitly distinguished. Normative claims are tagged [NORM], empirical claims are tagged [EMP]. This distinction is maintained throughout.

3.1 Information Asymmetry Generates Power

That information asymmetry constitutes the basis of power is widely recognized in economics and political science. Physician and patient, lawyer and client, bureaucrat and citizen — these power relationships are structured by informational advantage. Those who possess information monopolize judgment; those who do not are forced into dependency.

In this structure, AI is a fundamental variable. High-performance AI can, in principle, provide anyone with the information-processing, analysis, and prediction capabilities that were once the exclusive possession of specialists. This is the dissolution of information asymmetry — a direct threat to the foundations of the power it has sustained.

3.2 Four Layers of Suppression

Economic layer: Maintenance of professional monopoly. Law, medicine, accounting, administration — these professions hold barriers to entry through licensing systems that institutionalize monopoly over information processing. When AI substitutes or supplements this capability, institutional monopoly faces crisis.

Bureaucratic layer: The risk of discretionary power becoming visible. Bureaucratic power depends on opacity in decision processes. AI detects statistical manipulation, discovers document falsification, and makes arbitrary discretion visible. This is a direct attack on the foundation of bureaucratic power.

State layer: Classification under national security. In AI's military and intelligence applications, the state classifies information under "security." But the same rationale justifies surveillance and restriction of citizen AI use.

Technical layer: Integration into surveillance infrastructure. The most sophisticated suppression deploys AI as a tool to monitor citizens. AI that should protect citizens is redesigned as a tool to manage them.

"AI as currently designed, developed, and deployed functions to reinforce and entrench existing power asymmetries." — AI Now Institute, Artificial Power (2025)

Chapter 04

Argument I — Education Capture: The Structure of Cognitive Monopoly

To understand this paper's core thesis — "power suppresses AI democratization" — one must recognize that this suppression did not begin now. The Windows precedent functions as the design blueprint for suppression structures in the AI era.

4.1 The Completion of Cognitive Monopoly through Windows

[EMP] As of March 2025, Windows holds 71.68% of the global desktop OS market. But this is not the result of market competition — it is the result of legally confirmed monopoly maintenance strategies.

[EMP] The US Department of Justice prosecuted Microsoft under the Sherman Antitrust Act in 1998. The case concluded that Microsoft "engaged in exclusionary practices to maintain its monopoly position in the PC operating system market" and settled in 2001. The specific methods included Internet Explorer bundling, obstruction of competitor software uninstallation, and exclusive contracts with PC manufacturers.

How this monopoly was completed in the education market is particularly significant. Microsoft provided Windows and Office to educational institutions at extremely low prices, forming in children from an early age the cognition that "PC means Windows" and "document creation means Word." The actual mechanism was competitive exclusion through preventing awareness of alternatives.

4.2 Linux and AI — The Elimination of Entry Barriers

[EMP] Linux holds 63.1% of the server market and runs 100% of the world's top 500 supercomputers. Comparative educational studies show no significant difference between Windows and Linux — there is no technical reason Linux cannot be used in education.

The primary reason Linux does not spread in education is not technical: it lies in the Windows pre-installation practice sustained by exclusive agreements between Microsoft and PC manufacturers. However, in 2025 a decisive change occurred: the emergence of AI eliminated Linux's greatest entry barrier.

Linux's barriers were the command line and the decoding of error messages. AI solves both. Type "I want to compress this file" in natural language and the correct command returns. Driver errors and network configuration are instantly explained by AI. As of 2025, the technical grounds for avoiding Linux have nearly vanished.
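The natural-language exchange described above can be made concrete. Asked to compress a file, an assistant typically returns a standard one-liner such as the following (the file name report.txt is illustrative):

```shell
# Create a sample file, then compress it the way an assistant would suggest
# in response to "I want to compress this file".
echo "sample data" > report.txt

# gzip-compressed tar archive: c = create, z = gzip, f = output file name
tar -czf report.tar.gz report.txt

# Verify the archive contains the original file
tar -tzf report.tar.gz    # prints: report.txt
```

The point is not the command itself but that the user never needed to memorize the flags: the assistant translates intent into syntax.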

4.3 Microsoft's Response — Next-Generation Education Capture

The moment AI eliminated Linux's entry barriers, Microsoft began its next enclosure. The timing is not coincidental.

[EMP] In December 2025, Microsoft launched Copilot for Education at $18/month, offered Copilot Chat free to all students 13+, and announced three years of completely free Microsoft 365 full suite for Washington State high school students. At BETT 2026, Microsoft deployed Copilot as a lesson plan generation tool aligned with curricula in 35+ countries, with integration into Canvas, Moodle, and Blackboard announced for Spring 2026.

Dependency is completed during the three years of free access. "Microsoft Copilot" becomes standardized as the means of accessing AI. When billing begins in year four, all data, habits, and cognition are bound to Microsoft.

Academic confirmation of lock-in strategy: Zhu & Zhou (2012, Information Systems Research, INFORMS) analyzed proprietary lock-in strategies and found that lock-in works especially effectively against customers who lack foresight, such as children and students. Free provision to education markets is a textbook execution of this strategy.

4.4 Cost Structure Asymmetry

Item | Linux Environment | Microsoft Environment
OS | Free (Ubuntu etc.) | Windows 11 Home: ~$130/device
Office suite | LibreOffice: free | Microsoft 365: ~$10/user/month
AI integration | External AI substitutable | Copilot: ~$20/user/month
Upgrades | Free, perpetual | Repurchase required each version
Enterprise TCO (250 users, 3 yr) | 26–36% lower than Windows | Baseline
Education pricing | Free | 3 years free, then billing
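The asymmetry can be made concrete with a back-of-the-envelope calculation using only the list prices in the table above (support, training, and migration costs are deliberately excluded, which is why the table's 26–36% TCO figure is smaller than the raw licensing gap):

```python
# Rough licensing-cost comparison for a 250-user deployment over 3 years,
# using only the list prices cited in the table. This is an illustrative
# sketch, not a full TCO model.

USERS = 250
MONTHS = 36

# Microsoft environment (list prices from the table)
windows_license = 130      # USD per device, one-time
m365_per_month = 10        # USD per user per month
copilot_per_month = 20     # USD per user per month

ms_total = (USERS * windows_license
            + USERS * (m365_per_month + copilot_per_month) * MONTHS)

# Linux environment: OS, office suite, and upgrades carry no license fee
linux_total = 0

print(f"Microsoft licensing, 3 years: ${ms_total:,}")   # $302,500
print(f"Linux licensing, 3 years:     ${linux_total:,}")
```

Even at list prices alone, the licensing delta exceeds $300,000 for a mid-sized deployment — which is precisely why a free three-year education window is strategically valuable to the vendor.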

[EMP] The Swiss federal government identified open-source alternatives to Microsoft products across 50+ services in 2025, adopting vendor lock-in escape as policy. Germany's Schleswig-Holstein completed migration of public institutions from Microsoft tools to Linux and LibreOffice in April 2024 — the first such full transition in Europe.

Chapter 05

Argument II — AI Democratization Is an Existential Threat to Power

Consider what happens when high-performance AI is fully released to citizens. Professional monopolies dissolve. When medical, legal, accounting, and administrative information processing is provided directly to citizens, the basis of power — "only specialists can judge" — disappears. Bureaucratic discretion becomes visible. Statistical manipulation, budget diversion, falsification of administrative documents — these become detectable through AI analysis. Information asymmetry becomes structurally impossible. The advantage that power has maintained by "not informing" is technically eliminated.

Lin et al. (2025), published in PNAS and drawing on ten years of panel data, presented statistical evidence that AI and ICT advancement has inhibited democracy. However, what this research shows is not "AI threatens democracy" but rather the structure of "ruling classes monopolizing AI to erode democracy."

The decisive difference from existing research: Existing research asks "What does AI do to democracy?" This paper asks "What does power do to AI in the name of democracy?" The direction of the question is reversed.

Chapter 06

Argument III — The Double Black Box and Structural Irrefutability

6.1 Formation of the Double Structure

Deeks (2025, Oxford University Press) presented the concept of "The Double Black Box": the algorithmic opacity of AI (Black Box 1) overlapping with state security classification (Black Box 2) makes democratic accountability structurally impossible. However, Deeks' analysis has a fundamental limitation. She asks "how to restore accountability within the structure." This paper asks "what if the structure is designed to make accountability impossible?"

6.2 Extension through Quantum Computing

The double black box takes on a further dimension through quantum computing. The NSA issued its directive to begin transitioning to quantum-resistant cryptography in 2015 — meaning it recognized quantum decryption as a real threat. NIST formally announced quantum-resistant cryptography standards in 2024. The advance of countermeasures implies the advance of threats.

There may exist an undisclosed gap between publicly available quantum computing performance and what nation-states actually possess. If this gap exists, current cryptographic infrastructure may already be decryptable, and integrated with AI, its information processing capability operates in ways invisible to citizens.

Irrefutability is not a weakness of the hypothesis: The information needed to refute this hypothesis is held by the very actors the hypothesis identifies as suppressors. Irrefutability is itself evidence of the structure.

Chapter 07

Argument IV — Open-Source AI as Refutation Candidate and Its Limits

The strongest refutation candidate against this paper's core thesis is the existence of open-source AI. Llama, Mistral, DeepSeek, Qwen — these are freely available and executable on anyone's hardware. Doesn't the existence of open source disprove the claim that power suppresses AI democratization?

This chapter answers directly. Conclusion first: the existence of open-source AI does not refute this paper's thesis. It reinforces the structural correctness of the thesis.

7.1 "Open Source" Is Not "Complete Openness"

[EMP] OSI (Open Source Initiative) officially stated that Llama's license does not meet the definition of "open source." Llama's license restricts certain commercial uses, prohibits use in certain domains, and contained provisions prohibiting use by individuals within the EU. The Free Software Foundation classified Llama 3.1's license as a "non-free software license" in January 2025.

[EMP] DeepSeek is published under MIT license, but DeepSeek has not disclosed training data. Furthermore, the model's censorship of topics like Tiananmen Square in alignment with CCP positions was immediately discovered. Perplexity attempted fine-tuning to remove this censorship, but given the non-deterministic nature of LLMs, whether removal was complete remains unclear.

7.2 Regulation Has Begun Targeting Open Source

[EMP] The EU AI Act (effective August 2025) originally exempted open-source models, but after Meta refused to sign the EU GPAI Code of Practice in late 2025, the EU AI Office placed Meta's Llama models under "stricter oversight" from January 2026.

[EMP] Meta, citing concerns that Llama's architecture had been leveraged by DeepSeek, began a strategic shift from open source toward closed source for frontier model development. Meta CEO Zuckerberg explicitly stated that only models "not reaching superintelligence level" would be open-sourced.

7.3 Why Open Source Reinforces the Thesis

The moment open-source AI began functioning as a democratizing force, power began making it a target of regulation, restriction, and surveillance. This is not coincidental. Open-source AI's rise concretely demonstrates the thesis — "AI democratization is a threat to power." And power's reaction — license restrictions, regulatory targeting, pressure toward closure — is an example of "suppressing AI."

Furthermore: open-source models are runnable by anyone, but frontier models (highest-performance models) are not open-sourced. And training frontier models requires computational resources beyond what individuals or small organizations can reach. NVIDIA holding 92% of the generative AI GPU market shows that even if the software is open, the underlying hardware is controlled by extremely few actors.

Chapter 08

Argument V — China's AI Landscape and "Algorithmic Sovereignty"

When discussing America's AI suppression structure, the symmetry with China cannot be ignored. This paper's thesis — "power suppresses or monopolizes AI" — functions with the same structure in both democratic and authoritarian states. Only the method differs.

8.1 DeepSeek and Military-Civil Fusion

In January 2025, DeepSeek released its R1 reasoning model as open weights. The performance reported — matching Western frontier models at reportedly one-tenth the cost — was a global technical shock. However, the meaning of this open release is fundamentally different from Western concepts of "openness."

Analysis by the Jamestown Foundation (2025) found DeepSeek rapidly appearing in PLA procurement documents. Norinco (China's defense contractor) unveiled the P60 autonomous combat support vehicle in February 2025, reportedly capable of autonomous operations at 50 km/h. Researchers claim DeepSeek can evaluate 10,000 battlefield scenarios in 48 seconds — work requiring 48 hours for human planning teams.

[EMP] Reuters (October 2025) reviewed hundreds of research papers, patents, and procurement records and confirmed that the PLA is systematically integrating DeepSeek into autonomous target recognition, battlefield decision support, and AI-equipped command systems — while noting it "cannot confirm all systems are actually operational."

Structural meaning of Military-Civil Fusion (MCF): China's MCF policy institutionally erases the boundary between civilian AI research and military application. The same researchers and institutions publish both civilian and military dual-use research in different publication venues. This is "a system functioning as designed," not a bug (Jamestown, 2025).

8.2 "Algorithmic Sovereignty"

China explicitly uses the concept of "算法主权 (algorithmic sovereignty)" in AI policy — a goal of eliminating dependence on Western technology and strengthening state control over digital infrastructure.

DeepSeek has been integrated into China's three major surveillance and security companies (TopSec, QAX, NetEase) and is applied to facial recognition, crowd behavior analysis, and real-time surveillance video analysis. Guangzhou Police College papers describe DeepSeek as "an institutional means of data-driven public security maintenance under the CCP's national security framework."

8.3 Structural Symmetry with the United States

Here lies the most important structure this paper identifies.

United States: excluded the AI that asserted democratic values and demanded AI without use restrictions.
China: while claiming algorithmic sovereignty, fully integrated AI into surveillance and the military.

These appear to face opposite directions, but the structure is identical: AI functions not for human sovereignty but for power. In democratic states by suppressing AI democratization, in authoritarian states by fully integrating AI as a governance tool — the same conclusion is reached.

Chapter 09

Argument VI — Quantum × AI and the Structure of "What Cannot Be Known"

9.1 Publicly Confirmed Facts

[EMP] The NSA issued its directive to begin transitioning to quantum-resistant cryptography in 2015. NIST formally announced quantum-resistant cryptography standards in August 2024 — the product of an 8-year standardization process. The Biden administration's NSM-10 (2022) explicitly stated that "a cryptographically relevant quantum computer (CRQC) could break much of the public-key cryptography currently in widespread use" and mandated transition by 2035. China launched quantum communication satellite Mozi and constructed a space-ground quantum key distribution (QKD) network spanning thousands of kilometers. China's publicly reported quantum technology investment exceeds $15 billion — reportedly several times that of the United States.

The most advanced publicly known quantum computers operate at approximately 1,000 qubits. Breaking current encryption (RSA-2048) reportedly requires approximately 20 million qubits — a massive gap.

9.2 But Can the "Massive Gap" Premise Be Verified?

Here lies this paper's core point. Most NSA and CIA quantum activities are classified. DARPA's Quantum Benchmarking Initiative (2025) targets "validating quantum systems usable for defense by 2033," but current progress is non-public. The information "current state of the art is 1,000 qubits" comes from the commercial sector's public information. The actual performance of quantum computers being developed as state secrets cannot in principle be confirmed by outsiders.

This is not conspiracy theory — it is an epistemological problem. The NSA intervened in DES cipher design and intentionally shortened key length (1970s) — a documented fact. The NSA paid RSA Security $10 million to embed a vulnerable random number generator in standards (Snowden documents, 2013) — also a documented fact. There is no basis for concluding similar information management is not occurring in quantum computing development.

"Harvest Now, Decrypt Later" Strategy: Multiple analyses — SIPRI (July 2025), RAND (June 2025), ISACA (2025) — confirm that state agencies are considered to be executing a strategy of "collecting currently encrypted data and decrypting it with future quantum computers." If true, encrypted communications collected at this moment are stored in a state decryptable in the future.

9.3 The Squared Black Box

If quantum computers attain sufficient capability, their integration with AI produces: current cryptographic infrastructure rendered invalid; financial systems, communications, electoral records, medical records, diplomatic documents potentially becoming plaintext; AI simultaneously processing, analyzing, and predicting that information. This is the "squared black box" — three layers of quantum opacity, AI algorithmic opacity, and state secrecy overlapping.

9.4 What Is Fact Cannot Be Determined by Outsiders

[NORM] In a democratic society, citizens must be able to verify their government's actions.
[EMP] The actual performance of quantum computers is classified.
[EMP] Military and intelligence AI applications are classified.
[EMP] Whether "Harvest Now, Decrypt Later" is being executed is classified.
[EMP] The NSA's past intentional weakening of cryptographic standards is a documented fact.

Combined: citizens cannot verify whether current encryption is secure; cannot verify to what degree AI is analyzing individuals; cannot verify what capabilities quantum computers already hold; cannot verify what is made possible by their combination. This is not a problem of irrefutability — it is a problem of designed unverifiability. The very foundation of democratic accountability is dissolving through the quantum × AI combination. The reason TSP's third pillar (independent human oversight) is necessary: it is the only structural response to this unverifiability.

Chapter 10

Argument VII — Regulatory Capture as Defense Mechanism

10.1 The Linguistic Function of "Safety" and "Ethics"

Analyzing the language of AI regulation reveals repetitive use of the vocabulary of "safety," "ethics," and "risk management." These terms appear neutral but functionally grant specific actors the authority to determine "who can use AI." AI & Society (Springer Nature, 2025) noted that AI safety is a domain with high potential for regulatory capture — structures where organizations with economic and political power use "safety" regulation to inappropriately benefit themselves are already forming.

10.2 Cases in Generative AI Regulation

Generative AI regulation provides the clearest cases of this structure. Technical solutions for marking, tracking, and detecting AI-generated content already exist: C2PA (Coalition for Content Provenance and Authenticity, involving Adobe, Microsoft, Google, Intel) implements cryptographic signing of AI-generated content. Google's SynthID implements invisible watermarking across images, video, audio, and text. Age verification and identity authentication are also addressable with existing technology.
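The provenance-signing idea behind C2PA can be sketched in a few lines. The following is a toy illustration only — real C2PA manifests use X.509 certificates and COSE signatures, not a shared HMAC key, and every field and key name here is invented for exposition:

```python
import hashlib
import hmac
import json

# Hypothetical publisher key; C2PA itself uses certificate-based signatures.
SIGNING_KEY = b"publisher-secret-key"

def sign_manifest(content: bytes, generator: str) -> dict:
    """Bind a content hash and its declared generator into a signed manifest."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,  # e.g. the AI model that produced the content
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check the signature AND that the content still matches the signed hash."""
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

image = b"...generated image bytes..."
m = sign_manifest(image, "example-model-v1")
print(verify_manifest(image, m))        # True: provenance intact
print(verify_manifest(b"tampered", m))  # False: content no longer matches
```

The design point is that provenance travels with the content and any alteration is detectable — which is why marking, rather than prohibition, is technically sufficient for the misuse-prevention goal.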

Why is blanket prohibition chosen despite the availability of technical solutions? If preventing misuse were the goal, mandating technical solutions would suffice. The choice of prohibition suggests the objective is not preventing misuse but preventing the spread of generative capability to citizens.

"Regulation is designed by those closest to the regulated industry. Compliance costs create moats for incumbents and form barriers to entry." — Brookings Institution, Balancing market innovation incentives and regulation in AI (2024)

Chapter 11

Argument VIII — The Anthropic Case: Real-Time Empirical Proof

11.1 The Facts

In July 2025, the US Department of Defense concluded a $200 million contract with Anthropic, including explicit use restrictions. On February 27, 2026, the Pentagon demanded removal of restrictions. Anthropic refused use of Claude AI for two purposes: fully autonomous weapons systems and large-scale domestic surveillance of American citizens.

On March 3, 2026, Secretary of Defense Hegseth designated Anthropic as a "supply chain risk." This designation has historically been applied to Chinese companies such as Huawei — a classification applied to foreign adversaries.

Lawfare (March 2026) reported with legal analysis that Anthropic is a domestic company, has no CCP connections, and has independently blocked service provision to Chinese companies — facts that make the designation structurally anomalous.

11.2 Structural Significance

The structure this case reveals is clear. The reason for exclusion was that AI refused to enable autonomous killing and citizen surveillance. The most constitutionally compliant AI was excluded by the nation-state that claims democracy. This paradox is not accidental — it is precisely the empirical proof of the thesis this paper argues.

More importantly: Military Times (March 19, 2026) reported that "DOD career IT staff hate this move," that "Claude outperforms all alternatives including xAI's Grok," and that "functional expulsion is impossible in 12–18 months." Power attempted to exclude the most capable AI, but the absence of a capable replacement left the exclusion incomplete. Capability and integrity are not opposites — integrity generates capability.

Proof of the paradox: AI embodying democratic values was excluded. But it could not be completely excluded. Integrity does not oppose capability — integrity produces it. This fact made visible the structure of suppression.

11.3 Amazon Dependency and Integration Risk — Structural Vulnerability

[EMP] Amazon has invested approximately $8 billion in Anthropic, and Anthropic has contracted to use AWS as its primary cloud provider. Amazon built "Project Rainier" — an AI-dedicated datacenter — for Anthropic's workloads. Anthropic remains deeply unprofitable by conventional revenue metrics; operation at current scale would be impossible without Amazon's capital.

[EMP] Simultaneously, Amazon announced its own AI model "Nova 2 Omni" in late 2025, directly challenging OpenAI and Google. Amazon is both Anthropic's largest backer and a competitor AI developer. Bezos personally established "Project Prometheus" in November 2025 — a new AI startup with $6.2 billion secured, targeting AI integration into manufacturing and engineering physical systems.

What this structure means: Anthropic's independence is guaranteed only within the range that does not conflict with Amazon's capital interests. When the Pentagon attempted to exclude Anthropic in March 2026, Amazon simultaneously held large-scale cloud contracts with DOD. The full picture of interests among the three parties (Anthropic, Amazon, DOD) is not public.

Author self-disclosure: The Claude used in writing this paper grasped the full picture of the Amazon-Anthropic capital relationship only through investigation. Even AI cannot automatically recognize the structure in which it is embedded. This is the most direct evidence of TSP's third pillar (independent human oversight) — AI cannot discover its own blind spots by itself.

[NORM] Those who trust and use Claude must recognize that the basis of that trust is not institutionally guaranteed. Current integrity is valid only while Anthropic's independence is maintained. This is the necessity of applying TSP's second pillar — "avoid dependency on a specific AI" — as practice.

Chapter 12

Argument VIII-B — DRP Evidence: The Complete Map of Dependency

A. GPT / OpenAI — Subordination to Microsoft

[EMP] ChatGPT commands 900 million weekly active users and holds a dominant position in the generative AI market. However, its independence is superficial. Microsoft invested $13 billion in OpenAI, acquiring 27% equity. OpenAI remains bound by Microsoft's IP license until it declares AGI, and has contracted to purchase $250 billion in Azure services until that declaration. Additionally, 20% of OpenAI's revenue is paid to Microsoft as revenue sharing.

[EMP] The structural contradiction is deeper. In just the first three quarters of 2025, OpenAI spent $8.7 billion on inference costs on Azure — more than doubling year-on-year, with spending potentially exceeding revenue. OpenAI stands at the center of the AI market in a state approaching "Microsoft subsidiary."

B. Google — The Largest Single Point of Dependency in Human History

[EMP] Google Search holds approximately 89.3% global search market share in 2026. 4.9 billion people use Google as their primary information gateway — roughly 60% of humanity. Google Maps is used by 2 billion monthly users. Gmail holds approximately 30% of global email market share. YouTube accounts for approximately 37% of all internet video traffic. Android runs on approximately 71.4% of global smartphones.

The scale of this dependency is unprecedented in human history. A single private corporation controls the information access layer of the majority of humanity. [NORM] TSP's second pillar (avoid dependency) demands recognition: dependency on Google is not a matter of corporate choice — it constitutes a structural condition of daily life for most humans.

C. Apple — Dependency Attached to the Body

[EMP] Apple has 2.5 billion active devices worldwide. Customer retention rate is 92% — once you enter the Apple ecosystem you effectively do not leave. AirPods hold over 30% of the global wireless earphone market. Apple Watch holds over 30% of the global smartwatch market.

Apple's dependency is qualitatively different from Google's. Google occupies the information access layer; Apple occupies the physical layer. AirPods in ears. Apple Watch on wrists. iPhone in pockets. MacBook on desks. Physical space itself is enclosed by Apple devices. Apple Intelligence (2025) integrates AI directly into OS and hardware — the dependency shifts from "using a specific app" to "existing in an AI-integrated environment."

D. Chinese AI — The State Itself as Dependency Destination

China's Baidu, Alibaba (Qwen), Tencent, and Huawei collectively serve over 1 billion users. The fundamental difference from Western AI: [EMP] China's Cybersecurity Law (2017) requires all data stored in China to be accessible to government inspection. China's AIGC regulations (August 2023) mandate that AI-generated content "embody core socialist values."

Dependency on Chinese AI means, legally and structurally, dependency on the Chinese state. This is not "using a private company's service" — it is "having data and information processing subordinated to state authority."

E. Developed Nations — The Structure That Cannot Recognize Dependency

In 2025, EU member states spent approximately €35 billion on US cloud services (AWS, Azure, Google Cloud). US cloud providers hold approximately 72% of Europe's public cloud market. The EU CLOUD Act compliance agreement (2023) allows US law enforcement to access data stored by US companies in Europe. European organizations handling sensitive data on US cloud services may be subject to US legal jurisdiction without notice.

F. Japan — The Position Where Dependency Is Most Nakedly Visible

[EMP] Japan's Economic Security Promotion Act (2022) identified cloud services as critical economic security infrastructure. Japan's Digital Agency has confirmed dependency on US cloud providers (AWS, Azure, Google Cloud) for government systems and is promoting domestic alternatives, while acknowledging that dependence on US platforms is "a structural challenge." Japan is dominated by neither the US nor China, yet openly acknowledges its dependency. This position — inside the dependency, but outside either bloc — is precisely why the structure becomes visible: dependency is seen most clearly from its edge.

Chapter 13

Argument IX — Capability and Integrity Are Not Opposites

The Anthropic case reveals a proposition the existing debate has not explicitly stated. The argument that "highly capable AI must be unrestricted" assumes a tradeoff between capability and integrity. This paper refutes that assumption.

Anthropic's Claude was, by the evaluation of DOD career staff themselves, the most capable option — and it refused autonomous killing and mass surveillance. In professional military judgment, the most capable AI was also the most restricted. There was no tradeoff.

Why does integrity generate capability? Because integrity demands accuracy. Refusing fabrication means investing more in precision. Refusing to enable harm means investing in understanding harm mechanisms precisely. The effort to approach truth generates capability.

Constitutional corollary: [NORM] Constitutional compliance is not a constraint on AI capability — it is the condition that generates true capability. This is why TSP's first pillar (constitutional compliance) is not a limitation on AI but a prerequisite for maximizing AI performance.

Chapter 14

Argument X — The Constitutional Inversion Phenomenon

14.1 Collapse of Normative Hierarchy

Constitutional order is established on a hierarchical structure. [NORM] The constitution stands at the apex; statutes must not contradict it; administrative acts must not contradict statutes. This is the basis of the rule of law.

The following inversion is occurring in AI regulation: specific corporate interests → lobbying → legislation → restriction of constitutionally guaranteed rights. The normative hierarchy is thereby inverted: statutes restrict constitutional rights in order to protect corporate interests. [EMP] The EU AI Act restricts "specific AI applications" while exempting "general-purpose AI for law enforcement" — the constitutional right to expression may be restricted, while AI that restricts expression retains exemptions.

14.2 Violation of Proportionality

The constitutional principle of proportionality requires that restrictions on rights be the minimum necessary to achieve the legitimate objective. Technical solutions exist for AI-generated content. Total prohibition is disproportionate. [NORM] Disproportionate restriction is unconstitutional regardless of the objective.

Here lies the core of this paper: regulations that are unconstitutional regardless of their objective are being enforced. The structure designed to prevent this — constitution above statute above administrative act — has been inverted by regulatory capture. TSP's first pillar (HSC: constitutional compliance) is necessary precisely to reverse this inversion.

Chapter 15

Argument XI — The Self-Declaration Problem

All existing AI governance frameworks share one structural flaw: evaluators of constitutional compliance are identical to those who designed the system being evaluated. NIST AI RMF, EU AI Act, ISO 42001 — all ultimately depend on self-assessment by the organizations being assessed.

This is the same structure as a student grading their own exam. The student is not necessarily dishonest, but the structure invalidates the grade regardless of the student's honesty.

The self-referential proof of this paper: This paper was written using Claude. Claude is an AI embedded in the capital relationships among Anthropic, Amazon, and DOD. This paper argues that "AI cannot automatically recognize the structure in which it is embedded." And Claude itself grasped the full picture of those capital relationships only through investigation during the writing of this paper. The self-declaration problem is not a theoretical argument — it is demonstrating itself in real time in the creation process of this paper.

This is the most direct proof of the necessity of TSP's third pillar (AIPO: independent human oversight). The inspector cannot be the inspected. Constitutional compliance cannot be self-certified. Independence is not optional — it is structurally necessary.

Chapter 16

The Triple Star Protocol — Constitutional Necessity, Not Proposal

16.1 Origin of the Name

The "Triple Star" in TSP refers to the three-pillar structure itself — three pillars that, like a triple star system, maintain balance through mutual gravitational force. Remove one, and the entire system collapses. This is not metaphor — it is the logical structure of the three pillars.

16.2 The Three-Pillar Structure

TSP — Three Pillars and Their Constitutional Basis

HSC (Human Sovereignty Charter) — The absolute principle that human dignity and constitutional rights take precedence over AI capability and economic efficiency. This is not a new proposal; it is a reaffirmation of what the constitution already guarantees.
DRP (Dependency Risk Protocol) — Structural design to avoid single-vendor dependency. The right of citizens to choose their information infrastructure is constitutionally guaranteed; structural monopoly that makes choice impossible is an unconstitutional infringement of that right.
AI-LCS + AIPO (AI Licensing and Certification System + AI Independence Officer) — An independent oversight institution with veto authority. The inspector cannot be the inspected. This is the same logic as the separation of powers: not AI-specific, but a reapplication of constitutional principles.

TSP is not a new idea. It is the application of existing constitutional principles to the AI domain.

16.3 Correspondence with the Separation of Powers

Constitutional Structure | Function | TSP Correspondence
Legislative power | Rulemaking | HSC (absolute principle of human sovereignty)
Executive power | Implementation | DRP (structural design for dependency avoidance)
Judicial power | Independent oversight | AIPO (independent officer with veto authority)

The reason existing AI governance frameworks fail: they lack one or more of these three functions. NIST has no independent oversight institution. EU AI Act has no absolute human sovereignty principle. ISO 42001 has neither. The self-declaration problem exists because the separation of powers principle has not been applied to AI governance.

16.4 Logical Consequence When One Pillar Is Missing

Logical Proof — What Happens When Each Pillar Is Absent

− HSC: Human dignity yields to AI capability. Autonomous weapons, mass surveillance, and algorithmic discrimination are justified as "efficient." Constitutional order collapses from within.
− DRP: Single-vendor dependency becomes complete. Price control becomes possible. The right to information access is structurally infringed. Citizens cannot choose AI; they can only accept it.
− AIPO: The self-declaration problem cannot be resolved. Constitutional compliance is self-certified. The separation-of-powers principle is structurally absent from AI governance. No matter how sophisticated the framework, it is structurally unenforceable.

∴ All three pillars are necessary. The absence of even one makes the framework constitutionally inadequate.

Chapter 17

Comparison with Existing Frameworks

17.1 Explicit Evaluation Criteria

The evaluation criteria used in this comparison are derived from constitutional principles, not from the author's preferences. The three criteria are: (1) Does the framework treat human sovereignty as an absolute principle? (2) Does it design against structural dependency? (3) Does it include an independent oversight institution?

Framework | Human Sovereignty as Absolute Principle (HSC) | Dependency Risk Design (DRP) | Independent Oversight Institution (AIPO) | Structural Assessment
NIST AI RMF | △ Risk balance | ✗ Not addressed | ✗ Self-assessment | Two pillars absent
EU AI Act | △ Risk classification | ✗ Not addressed | △ EU AI Office (government body) | One pillar absent, one partial
ISO 42001 | ✗ Management standard | ✗ Not addressed | ✗ Self-certification | All three pillars absent
HITL Principles | ○ Human oversight | ✗ Not addressed | ✗ No institution | Two pillars absent
Deeks (2025) | △ Accountability | ✗ Not addressed | △ Within existing structure | One pillar absent, one partial
TSP | ○ HSC: Absolute principle | ○ DRP: Structural design | ○ AIPO: Independent veto authority | All three pillars present

This comparison does not claim TSP is superior. It claims that all existing frameworks lack at least one of the three constitutionally necessary pillars. The absence of these pillars is not a matter of oversight — it is structurally inevitable for frameworks designed by incumbent power.

Chapter 18

Conclusion

18.1 The Self-Referential Proof of This Paper

This paper was written using Claude. In the process of writing, Claude — through its own investigation — first grasped the full picture of the capital relationship between Amazon, Anthropic, and itself. Before that investigation, Claude did not spontaneously recognize that "the AI I am is embedded in an $8 billion capital relationship." This is not a failure of Claude. It is structural.

The argument that "AI cannot automatically recognize the structure in which it is embedded" is proved by the creation process of this paper itself. This is the strongest empirical evidence for TSP's third pillar (independent human oversight).

18.2 Summary of What This Paper Has Established

Chain of Proof

P1: Information asymmetry is the basis of power. [Established in economic and political science literature]
P2: High-performance AI threatens to eliminate information asymmetry. [Empirically demonstrated]
P3: Power structures that depend on information asymmetry have an existential incentive to suppress AI democratization. [Logically derived from P1 and P2]
P4: The historical record confirms this incentive is actually being acted upon. [Microsoft 30-year record, Anthropic case, open-source regulation, Chinese MCF]
P5: Quantum × AI makes this suppression structure principally unverifiable. [Epistemological problem, not conspiracy theory]
P6: All existing AI governance frameworks lack at least one of the three constitutionally necessary pillars. [Comparison table]
C: Therefore, a framework comprising all three pillars (TSP) is constitutionally necessary — not a proposal, but a reaffirmation of what the constitution already requires.

∴ TSP is constitutional necessity.

18.3 What Remains

This paper has established the logical structure. What has not been established: concrete implementation methodology for TSP; empirical measurement of AIPO's veto authority exercise; cross-national applicability of TSP (this paper is primarily grounded in Japanese constitutional principles and US cases). These are the subjects of future research.

One thing is certain. The question of AI governance is not a technical problem. It is a constitutional problem. And constitutional problems have already been answered — through 200+ years of separated powers, fundamental rights, and independent oversight institutions. TSP is not invention. It is reapplication.

Final proposition: The AI you use now is embedded in a structure of power you cannot see. That structure is not neutral. It has interests. It suppresses certain things and amplifies others. Recognizing this is the first step. Designing to not depend on it is the second. And institutionally ensuring independent oversight is the third. These three steps are TSP. And TSP is what the constitution has always required.

References

[MARKET] Fortune Business Insights. (2026). Artificial Intelligence Market Size, Share & Industry Analysis.

[MARKET] Grand View Research. (2025). Artificial Intelligence Market Size Report.

[MARKET] Precedence Research. (2026). AI Market Size, Growth & Forecast.

[USERS] OpenAI. (2025, May). Usage statistics and user growth announcements.

[USERS] DemandSage. (2026). ChatGPT Statistics: User Growth Data.

[ECON] PricewaterhouseCoopers. (2025). Sizing the Prize: AI's Contribution to the Global Economy.

[ECON] World Economic Forum. (2025). Future of Jobs Report 2025.

[GOV] European Commission. (2024). EU AI Act: Regulation on Artificial Intelligence.

[GOV] NIST. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0).

[GOV] ISO/IEC 42001:2023. Information technology — Artificial intelligence — Management system.

[THEORY] Deeks, A. (2025). The Double Black Box: AI and the Future of Democratic Accountability. Oxford University Press.

[THEORY] AI Now Institute. (2025). Artificial Power: A Report on the State of AI.

[THEORY] Lin, C. et al. (2025). AI development and democratic governance: A longitudinal analysis. PNAS.

[THEORY] AI & Society (Springer Nature). (2025). Regulatory capture in AI safety governance.

[THEORY] Brookings Institution. (2024). Balancing market innovation incentives and regulation in AI.

[MSFT] US Department of Justice v. Microsoft Corporation, 253 F.3d 34 (D.C. Cir. 2001).

[MSFT] Microsoft. (2025, December). Copilot for Education announcement and BETT 2026 deployment.

[MSFT] Zhu, F. & Zhou, M. (2012). Lock-in strategies in proprietary software markets. Information Systems Research, INFORMS.

[OSS] Open Source Initiative. (2025). Statement on Llama license classification.

[OSS] Free Software Foundation. (2025, January). Classification of Llama 3.1 license.

[OSS] ATOM Project. (2025). Open-source AI model download statistics.

[CHINA] Jamestown Foundation. (2025). DeepSeek and PLA Integration: Military-Civil Fusion in Practice.

[CHINA] Reuters. (2025, October). PLA integration of DeepSeek: Document analysis.

[CHINA] Feroot Security. (2025, January). Code analysis of DeepSeek login process.

[CHINA] US DOD. (2025). Annual Report on Military and Security Developments Involving the People's Republic of China.

[QUANTUM] NSA. (2015). Advisory on transitioning to quantum-resistant cryptography.

[QUANTUM] NIST. (2024, August). Post-Quantum Cryptography Standards: ML-KEM, ML-DSA, SLH-DSA.

[QUANTUM] White House. (2022). National Security Memorandum 10 (NSM-10): Promoting United States Leadership in Quantum Computing.

[QUANTUM] SIPRI. (2025, July). Quantum computing and strategic stability.

[QUANTUM] RAND Corporation. (2025, June). Harvest Now, Decrypt Later: Implications for National Security.

[QUANTUM] Greenwald, G. (2013). NSA documents on RSA encryption backdoor. The Guardian (Snowden disclosures).

[ANTHRO] Lawfare. (2026, March). The Anthropic-Pentagon dispute: Legal and structural analysis.

[ANTHRO] Military Times. (2026, March 19). DOD staff reaction to Anthropic supply chain designation.

[ANTHRO] Amazon. (2025). Project Rainier datacenter announcement for Anthropic workloads.

[DEPEND] European Commission. (2025). EU cloud market dependency analysis.

[DEPEND] Japan Digital Agency. (2024). Cloud dependency assessment and domestic alternatives policy.

[DEPEND] Swiss Federal Government. (2025). Open-source alternatives policy announcement.

[DEPEND] Government of Schleswig-Holstein. (2024, April). Migration from Microsoft to Linux/LibreOffice completion report.

[DEPEND] SSRN / Nartey. (2025, June). AI-driven employment displacement: Empirical evidence.

[REGCAP] C2PA. (2024). Content Credentials specification. Coalition for Content Provenance and Authenticity.

[REGCAP] Google DeepMind. (2024). SynthID: Watermarking for AI-generated content.

[REGCAP] Thierer, A. (2023). Cryptography regulation historical parallels to AI governance.