The Origin Spectrum — Why It Exists & Why It Matters
Authorship Transparency Framework

The Origin Spectrum

A universal standard for declaring how any written work was made — honoring human creativity as the irreplaceable seed of all intelligence.

How This Framework Came to Be

Every great tool begins with an honest question.

"If a reader cannot tell what a human wrote and what a machine generated, the entire contract of authorship collapses."

For centuries, the written word carried an implicit promise: behind every sentence was a person — with a life, a perspective, a cost of effort, a reason to care. That promise is now, for the first time in the history of writing, genuinely uncertain.

The Origin Spectrum was conceived not as a restriction, but as a restoration. It emerged from a simple observation that markets, cultures, and legal systems all function better when the thing being exchanged is clearly understood. We label food. We label medicine. We disclose financial conflicts of interest. But we have, until now, had no standard language for the most intimate thing humans exchange: the written word.

The problem is not AI. AI is a tool, and like all tools, its value depends entirely on how knowingly it is used. The problem is ambiguity — the growing inability of readers to calibrate what they are reading, what effort and soul went into it, and whether the trust they are extending is warranted.

The Origin Spectrum draws a map across eight levels — from works written entirely by hand to works generated entirely by machine — giving every stakeholder in the written word ecosystem a shared vocabulary. It does not moralize. It does not ban. It names. And in naming, it creates something the current moment desperately lacks: a basis for informed consent between author and reader.

Critically, the framework is designed so that human creativity sits at the top of the value hierarchy — not as nostalgia, but as economic and epistemic necessity. Every AI model was trained on human-written text. Human works are not just culturally precious; they are the generative substrate of machine intelligence itself. Without that substrate, acknowledged and compensated, the flywheel stops.

What the Research Tells Us

The evidence is not subtle.

These are the conditions under which the Origin Spectrum was designed — not theoretical projections, but documented realities already reshaping the literary ecosystem.

51%

of published novelists in the UK believe AI is likely to entirely replace their work as fiction writers — and 85% expect their future income to be driven down.

University of Cambridge / Minderoo Centre for Technology and Democracy, 2025
50%

surge in self-publishing volume recorded by Draft2Digital in 2024, with retailers like Barnes & Noble forced to delist thousands of titles as quality control failed under the volume of AI-generated content.

Draft2Digital / Jane Friedman, 2024–2025
13

experiments confirmed that when AI usage is disclosed, the discloser is trusted less — but undisclosed AI use later exposed by a third party causes even greater trust damage. Standardized framing is the only escape from this dilemma.

Schilke & Reimann, Organizational Behavior and Human Decision Processes, 2025
59%

of novelists report knowing their work was used to train AI without permission or payment — a consent violation with no current systemic remedy beyond litigation.

University of Cambridge / Minderoo Centre, 2025
2026

is the EU AI Act's full compliance date (Article 50), mandating machine-readable labeling of AI-generated content. No comparable industry-led standard yet exists. The Origin Spectrum is designed to complement and precede regulatory mandates.

Regulation (EU) 2024/1689, European Parliament; EC Code of Practice Draft, Nov 2025

Undisclosed AI use, when later exposed, damages trust more than voluntary disclosure does. Research shows "detailed" disclosures outperform vague ones — specificity, not disclosure alone, is the mechanism that restores credibility.

arxiv.org / Full Disclosure, Less Trust, 2026; Gamage et al., 2025
The Structural Problem

The market cannot fix what it cannot see.

What readers currently face

A reader picking up a book on Amazon today has no reliable way to know whether the words they are reading were written by a person who lived through the experience, or by a model trained to simulate that experience. Both look identical. Both have covers, blurbs, and five-star reviews. One carries the accumulated truth of a human life. The other does not — and the reader has no language to ask for the difference.

What the industry currently faces

Publishers, booksellers, and library systems are being asked to make editorial and commercial decisions without a common vocabulary. Authors are losing income to unlabeled competition. Legal systems are racing to create mandates without any industry-led standard to align around. The EU's compliance deadline is August 2026. The US Copyright Office has already ruled that purely AI-generated images are not copyrightable. A shared, voluntary framework adopted before mandates arrive is worth far more than compliance retrofitted after the fact.

The Origin Spectrum does not require anyone to change how they write. It asks only that they be honest about it — using a standard that benefits everyone who chooses transparency over ambiguity.

The Eight-Level Framework

Eight levels. One shared language.

Each Origin level carries a code, a name, a color, and a defined human-to-AI contribution ratio. Together they form a complete map of contemporary authorship practice.

← Pure Human · · · AI Automated →
O·1
Pure Manuscript
O·2
Assisted Craft
O·3
Guided Voice
O·4
Collaborative Draft
O·5
Creative Partnership
O·6
Directed Synthesis
O·7
Curated Generation
O·8
Autonomous Text
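For platforms integrating the framework, the eight levels reduce to a simple lookup. The codes and names below come from the spectrum above; the Python structure and function names are an illustrative sketch, not an official implementation (the framework's per-level contribution ratios are not reproduced here).

```python
# Illustrative sketch: the eight Origin levels as a lookup table.
# Codes and names are from the framework; the structure is hypothetical.
ORIGIN_LEVELS = {
    1: "Pure Manuscript",
    2: "Assisted Craft",
    3: "Guided Voice",
    4: "Collaborative Draft",
    5: "Creative Partnership",
    6: "Directed Synthesis",
    7: "Curated Generation",
    8: "Autonomous Text",
}

def origin_code(level: int) -> str:
    """Format a level as its O·n code, e.g. 3 -> 'O·3'."""
    if level not in ORIGIN_LEVELS:
        raise ValueError(f"Origin level must be 1-8, got {level}")
    return f"O·{level}"

def is_human_verified_range(level: int) -> bool:
    """Levels O·1-O·3 qualify for the human-verified designations."""
    return 1 <= level <= 3
```

A platform could use `is_human_verified_range` to drive the "human-authored" catalog filters described later in this document.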
Who This Is For

Six stakeholders. One standard.

01 / 06
Authors

What's at stake

Authors face a two-sided threat: their income is being undercut by unlabeled AI competition, and their reputation is being quietly contaminated by suspicion that they themselves may be using AI undisclosed. The Origin Spectrum resolves both problems simultaneously. It gives human authors a mark that AI cannot earn, and it gives AI-assisted authors a path to honest participation in the market.

What the framework offers

An O·1–O·3 designation functions like a provenance certificate. It tells readers that the voice they fell in love with is genuinely human. For authors at O·4–O·6, the framework removes the shame of ambiguity and replaces it with professional clarity. Disclosure, made on the author's terms and in a standardized form, is always better than exposure.

If they adopt: Amara is a debut literary fiction author. She declares O·1 and receives the human-verified Origin Seal. Her book is shelved with other human-authored titles on platforms that have adopted the framework. A reader specifically searching for human-written literary fiction finds her. She builds an audience that trusts her categorically — not just for this book, but for everything she writes thereafter.
If they do not: James writes at O·4 but discloses nothing. Three years later, a journalist runs a detection analysis on bestsellers. James's book appears in the report. He didn't lie — but he didn't tell the truth either. His publisher drops him, his agent distances herself, and his backlist sales collapse. The cover-up is worse than the practice would have been.
02 / 06
Publishers

What's at stake

Publishers are simultaneously the gatekeepers most likely to be blamed for flooding the market with unlabeled AI content and the institutions best positioned to establish a credible voluntary standard before governments do it for them. The Origin Spectrum is a compliance infrastructure investment that pays dividends in brand trust.

What the framework offers

Publishers who adopt Origin marking early establish themselves as the transparent tier of the market — the houses readers, libraries, and literary institutions can trust. This is the same advantage that organic certification gave food companies that adopted it before it was mandated: market share secured before the cost became universal.

If they adopt: Meridian Press integrates Origin codes into their intake workflow and applies Origin Seals to all titles beginning in 2025. When the EU AI Act's Article 50 comes into full force in August 2026, Meridian is already compliant. Their catalog is also filterable as "human-authored" on major retail platforms — a category that commands a 15–20% price premium among engaged readers.
If they do not: Cascade Books continues publishing without disclosure. In 2026, a class-action brought by a readers' advocacy group alleges deceptive trade practices. Even if Cascade wins, the litigation costs exceed what a simple disclosure system would have cost to implement. Meanwhile, library purchasing consortia begin excluding undisclosed publishers from their preferred vendor lists.
03 / 06
Readers

What's at stake

Reading is an act of trust. When you open a book, you invite a stranger's mind into your own. Research from the University of Cambridge confirms that readers already feel this trust is being violated: authors' names are appearing on books they didn't write, AI-generated content is receiving reviews that poison legitimate authors' rankings, and detection tools remain too unreliable for readers to use independently. The contract is broken. The Origin Spectrum repairs it.

What the framework offers

A standardized label means a reader can, at a glance, understand what kind of attention went into the work they are holding. This is not about AI being bad. It is about choice — the same choice you make when you decide whether to buy a handmade ceramic mug or a factory-produced one. Both serve coffee. Only one carries the particular weight of human hands.

If they adopt (and demand it): Sophie reads voraciously and cares deeply about human literary culture. With Origin labeling in place, she filters her book purchases to O·1–O·3 and pays a willing premium. She reads a debut author's O·1 memoir knowing it was written in full presence. The emotional experience is different — not because the words are different, but because she knows they are true.
Without it: Sophie buys a grief memoir after reading a moving excerpt. She later discovers — via a tweet, not from the publisher — that the book was AI-generated from prompts. She doesn't stop buying books. But she stops trusting them. The cumulative effect of that distrust across millions of readers is a market collapse of the very category that sustains literary culture.
04 / 06
Governments

What's at stake

Governments across the EU, US, UK, and beyond are racing to regulate AI-generated content without a functional industry vocabulary. The EU AI Act's Article 50, requiring machine-readable labeling of AI-generated content by August 2026, is the most advanced regulation in the world — and it still lacks a content-specific implementation standard for published books. The Origin Spectrum provides exactly that: a voluntary, layered, human-readable and machine-readable standard governments can reference, endorse, or build upon.

What the framework offers

Governments that align with voluntary frameworks like the Origin Spectrum before mandating their own create less regulatory friction, lower compliance costs for industry, and more functional consumer protections. The alternative is fragmented national mandates that create compliance chaos for global publishers and protect no reader effectively.

If they align with it: The UK government, facing pressure from its creative industries sector (worth £11bn annually from publishing alone), endorses the Origin Spectrum as the voluntary standard underlying its AI content disclosure guidance. UK publishers adopting it achieve simultaneous compliance with emerging EU requirements. The framework becomes the de facto global standard, reducing regulatory fragmentation across 64+ countries with active AI legislation.
If they ignore it: Without an endorsed voluntary standard, six different national mandates emerge in three years. An author publishing in the UK, US, EU, Canada, Australia, and Japan must now navigate six compliance regimes with different thresholds, different labels, and different penalties. Smaller publishers exit global markets. The very authors that AI disruption was hurting become further disadvantaged by regulatory complexity.
05 / 06
Corporations

What's at stake

For AI companies, publishing platforms, and technology corporations, the Origin Spectrum resolves a problem that is growing faster than their legal teams can track: the liability exposure of undisclosed AI-generated content. For retail platforms like Amazon, the framework provides a quality signal that reduces the cost of content moderation. For AI companies, it provides a consent and attribution architecture for training data that is both ethically defensible and commercially advantageous.

What the framework offers

Corporations that build Origin Spectrum integration into their platforms become the trusted layer of the publishing ecosystem. Training data labeled with Origin codes and author consent marks becomes premium licensed corpus — with documented provenance, legal clarity, and market value that unlicensed scraping can never match. The framework turns the AI training crisis from a liability into a commerce model.

If they adopt: A major AI company integrates Origin Spectrum consent marks into their publisher licensing agreements. Authors at O·1–O·3 who opt in receive compensation per word of licensed training corpus. The company builds its next generation model on the highest-quality, fully consented human writing available — and can say so publicly. This becomes a competitive differentiator in a market where the provenance of training data is increasingly scrutinized.
If they do not: An AI platform continues scraping unlabeled content to train models. In 2027, a coordinated legal action by authors' guilds in the US, UK, and EU results in a $2.4B settlement. The reputational damage exceeds the legal cost. Crucially, the models trained on unlabeled data are now legally encumbered — and the platform must retrain on provenance-documented corpus anyway, at three times the original cost.
The Highest Stakeholder

Humanity

What human writing actually is

Written language is the oldest technology for transmitting consciousness across time. It is how the dead speak to the living, how isolated individuals discover they are not alone, how cultures negotiate their values, how children learn what it means to be a person. Every AI language model that has ever existed was built on the substrate of human writing — without exception. The quality, range, and moral complexity of machine intelligence are a direct function of the quality, range, and moral complexity of the human writing it was trained on.

This is not metaphor. It is mechanism. AI models do not think. They pattern-match at scale across the accumulated expression of human thought. If that expression degrades — if the ratio of human to machine-generated text in the world inverts, if human writers stop writing because they cannot compete economically — then the very training data that makes AI capable collapses. The machine does not survive the death of the human writer. It merely takes longer to notice.

What's at stake at the civilizational scale

The Origin Spectrum is, in the end, an argument for maintaining the conditions under which human creativity continues to exist as a practiced, economically viable, culturally valued activity. Not because machines are bad. Because without a living human creative tradition, machines have nothing to learn from — and neither do children, or grieving adults, or anyone searching for meaning in language.

"The quality of machine intelligence is a direct function of the quality of human writing. Protect the source, and the whole ecosystem survives."
If humanity adopts this framework: The Origin Spectrum establishes a global norm that human authorship has measurable, protected value. Human writers are compensated for training data use. Readers who value human voice can find it. AI models are trained on acknowledged, consented, high-quality human corpus. The result is a creative ecosystem where human writing improves AI, AI tools extend human capability, and the two exist in a productive relationship that is transparent about its terms. Literary culture survives and evolves. Future generations inherit both a living human literary tradition and an AI ecosystem built on its best qualities.
Without this framework: The race to the bottom accelerates. Human writers exit the market. AI trains increasingly on AI-generated text. Model quality degrades through recursive self-reference — a phenomenon researchers call "model collapse." The diversity of human voice — the full range of cultural, linguistic, and experiential perspectives that makes written language an honest mirror of human life — narrows. What remains is a simulacrum of writing that no longer has any living human source to draw from. It is not a dramatic ending. It is a slow forgetting.
Proof of Origin · The AuthenChain Protocol

A blockchain you can trust.

Declaration without verification is a promise without proof. The Origin Spectrum carries its full authority only when any party — reader, publisher, court, AI company, government regulator — can confirm, independently and permanently, that a work's declared Origin level is accurate.

"AuthenChain creates a tamper-proof record of how a work was made — before, during, and after the writing process. Not a claim. A cryptographic fact."

Drawing on the AuthenWrite protocol developed alongside this framework, AuthenChain is a distributed ledger system designed specifically for the authorship verification needs of the publishing ecosystem — including authors working alone, independent presses with no technical staff, and enterprise publishers managing thousands of submissions.

It operates on a tiered verification model. The higher the claimed Origin level (closer to O·1), the more verification layers are required. A self-published author claiming O·3 follows a simple, free registration process. An O·1 designation submitted for a literary prize requires multi-layer biometric and behavioral verification. The system is calibrated to the stakes — not to the institution.

AuthenChain Composite Score — How Origin Level Is Calculated
Behavioral Signal
35%
Keystroke cadence, revision patterns, hesitation rhythms — the cognitive fingerprint of a human writing in real time
Linguistic Analysis
25%
Stylometric comparison, perplexity scoring, syntactic variance, and idiolect consistency across the manuscript
Process Metadata
25%
Tool logs, version history, AI session records, and time-stamped writing environment data submitted at registration
Declaration + Attestation
15%
Author's signed disclosure statement, editor attestation, and optional publisher co-signature — the human oath layer
On Verification Confidence: No system achieves 100% certainty. AuthenChain targets ≥92% confidence at O·1–O·2, ≥85% at O·3–O·4, and uses probabilistic banding at O·5–O·8 where AI contribution is known and declared. The goal is not perfect detection — it is a credible, auditable, legally admissible record of the author's process and intent.
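The four weighted signals above combine into a straightforward weighted sum. The weights (35/25/25/15) and the confidence thresholds come from the framework text; the function names, the 0-to-1 signal scale, and the pass/fail treatment of O·5–O·8 banding are assumptions for illustration.

```python
# Illustrative sketch of the AuthenChain composite score. Weights and
# thresholds are from the framework text; everything else is assumed.
WEIGHTS = {
    "behavioral":  0.35,  # keystroke cadence, revision patterns
    "linguistic":  0.25,  # stylometry, perplexity, idiolect consistency
    "process":     0.25,  # tool logs, version history, session records
    "attestation": 0.15,  # signed declaration, editor attestation
}

def composite_score(signals: dict) -> float:
    """Weighted sum of the four verification signals, each in [0, 1]."""
    missing = WEIGHTS.keys() - signals.keys()
    if missing:
        raise ValueError(f"missing signals: {sorted(missing)}")
    return sum(WEIGHTS[k] * signals[k] for k in WEIGHTS)

def meets_confidence(level: int, score: float) -> bool:
    """Apply the stated targets: >=0.92 for O·1-O·2, >=0.85 for O·3-O·4.
    O·5-O·8 use probabilistic banding, treated here as always passing."""
    if level <= 2:
        return score >= 0.92
    if level <= 4:
        return score >= 0.85
    return True
```

For example, a manuscript scoring 0.9 behavioral, 0.8 linguistic, and 1.0 on the other two signals yields a composite of 0.915, enough for an O·3 claim but short of the O·1–O·2 threshold.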
How AuthenChain Works — The Seven-Step Process

Step 1 — Registration & Session Initiation

Author registers with AuthenChain (free for independent authors; integrated for publishers). A unique Work-in-Progress token (WIP-Token) is issued before writing begins. Authors using supported writing environments (Scrivener, Google Docs, Word, iA Writer) install a lightweight session monitor. Manual registration is available for typewriter or handwritten works with a notarized manuscript submission path.

Step 2 — Behavioral Capture (Opt-In)

For O·1–O·3 claims, authors opt into behavioral monitoring during drafting. The system captures keystroke dynamics, pause-and-revision patterns, and session timing — never the content itself, only the metadata of how it was produced. Data is encrypted client-side; no raw keystrokes leave the author's machine. Authors may disable monitoring at any time; doing so flags the session gap in the chain without invalidating the record.
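As a concrete illustration of the metadata-only design described above, a client-side monitor would record only event timing and type, never content. The class and method names here are hypothetical sketches, not the actual AuthenChain client API, and client-side encryption is omitted.

```python
import time

# Illustrative sketch of metadata-only behavioral capture: timing and
# event types are recorded, never the typed characters themselves.
class SessionMonitor:
    def __init__(self):
        self.events = []   # (monotonic timestamp, event type); no content
        self.enabled = True

    def on_key(self, _char):
        """Record that a keystroke occurred, deliberately discarding the
        character itself — only cadence metadata leaves this handler."""
        if self.enabled:
            self.events.append((time.monotonic(), "keystroke"))

    def pause(self):
        """Disabling monitoring flags a session gap in the record rather
        than silently erasing it, matching the behavior described above."""
        self.enabled = False
        self.events.append((time.monotonic(), "session_gap"))
```

Note the design choice the text implies: pausing appends a `session_gap` marker instead of deleting history, so the chain records the gap without invalidating the session.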


Step 3 — Process Log Submission

At manuscript completion, the author submits a Process Declaration — a structured record of all tools used, AI interactions conducted, and the nature of each. For O·4–O·6, AI session exports (conversation logs, generation records) are submitted. For O·7–O·8, automated pipeline logs are required. The declaration is signed with the author's cryptographic key and timestamped on-chain. This is the human oath, made immutable.


Step 4 — Linguistic Verification Pass

The submitted manuscript is run through AuthenChain's linguistic analysis engine — stylometric fingerprinting, AI-pattern scoring, and coherence analysis. This step does not make a binary human/AI determination. It produces a probability distribution across the eight Origin levels, which is then weighted against the behavioral and process data from Steps 2 and 3. The composite score drives the Level Recommendation.


Step 5 — Human Authorship Number (HAN) Issuance

Once the composite score confirms the declared level within tolerance, AuthenChain issues a Human Authorship Number (HAN) — a permanent, globally unique identifier analogous to the ISBN but carrying provenance data. The HAN encodes the Origin level, composite confidence score, verification date, authoring tools declared, and a cryptographic hash of the manuscript at submission. The HAN is registered on a public, permissionless blockchain and can be verified by anyone, instantly, for free.
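A minimal sketch of the provenance record the HAN is described as encoding. SHA-256 and JSON canonicalization are assumed implementation details the text does not specify, and the field names are illustrative.

```python
import hashlib
import json
from datetime import date

def manuscript_hash(text: str) -> str:
    """Cryptographic hash of the manuscript at submission (SHA-256 here;
    the actual algorithm is an assumption)."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def build_han_record(level: int, confidence: float, tools: list,
                     manuscript: str) -> dict:
    """Assemble the provenance fields the HAN is described as encoding:
    Origin level, confidence, date, declared tools, manuscript hash."""
    return {
        "origin_level": f"O·{level}",
        "composite_confidence": round(confidence, 4),
        "verification_date": date.today().isoformat(),
        "declared_tools": sorted(tools),
        "manuscript_sha256": manuscript_hash(manuscript),
    }

def record_digest(record: dict) -> str:
    """Deterministic digest of the record, as would be anchored on-chain."""
    canonical = json.dumps(record, sort_keys=True, ensure_ascii=False)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Because the digest is computed over a canonical serialization, any party holding the manuscript and the public record can independently confirm that neither has changed since registration.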

Step 6 — Seal Authorization

With an active HAN, the author or publisher is authorized to apply the corresponding Origin Seal to all editions of the work — print, ebook, audiobook, website, and social media. Seals are issued as cryptographically-signed digital assets; print seals include a QR code linking to the live HAN record. Any party scanning the seal can verify the claim in real time. Revocation is possible if new evidence contradicts the declaration; the chain preserves all history.
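The real-time check a scanned seal triggers can be sketched as a lookup against the public registry. The record fields, status strings, and return shape below are assumptions, not the AuthenChain API.

```python
# Illustrative sketch of seal verification: a scanned seal carries a HAN
# and a claimed Origin level, compared against the public registry record.
def verify_seal(scanned: dict, registry: dict) -> tuple:
    """Return (ok, reason). `scanned` = {"han": ..., "claimed_level": ...};
    `registry` maps HAN -> {"origin_level": int, "status": str}."""
    record = registry.get(scanned["han"])
    if record is None:
        return False, "no HAN record"
    if record["status"] != "active":
        # Revocation history is preserved on-chain, so a revoked HAN
        # fails verification rather than disappearing.
        return False, f"HAN is {record['status']}"
    if scanned["claimed_level"] != record["origin_level"]:
        return False, "seal level does not match verified level"
    return True, "verified"
```

This captures the two misuse cases the guidelines later prohibit: displaying a seal at a higher level than the verified HAN, and displaying a seal under revocation review.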

Step 7 — Training Consent Registration

As part of HAN issuance, authors declare their training data consent status: Opt In (licensed corpus, eligible for compensation), Opt Out (no training use permitted), or Conditional (case-by-case licensing). This consent record lives permanently on-chain alongside the HAN. AI companies and publishers building training datasets can query the AuthenChain registry to identify consented works, filter by Origin level, and initiate licensing through the integrated marketplace. No scraping. No guessing.
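The registry query described here might look like the following sketch, where the record fields and consent status strings (`opt_in`, `opt_out`, `conditional`) are assumed for illustration.

```python
# Illustrative sketch of a training-corpus query against the consent
# registry: filter to consented works at or below a given Origin level
# (lower levels = more human contribution).
def query_consented(records: list, max_level: int = 3,
                    include_conditional: bool = False) -> list:
    """Return HANs of works eligible for training use. Conditional works
    require case-by-case licensing, so they are excluded by default."""
    allowed = {"opt_in"}
    if include_conditional:
        allowed.add("conditional")
    return [r["han"] for r in records
            if r["consent"] in allowed and r["origin_level"] <= max_level]
```

An AI company building a human-written corpus would query with the default `max_level=3`, then initiate licensing for any `conditional` works separately.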

Access Tiers — Built for Every Scale of Publisher and Author
Quill · Free
  Who it serves: Independent & self-publishing authors
  Included: HAN registration, basic linguistic pass, digital seal, training consent record, QR verification
  Verification depth: Process declaration + linguistic analysis (O·3–O·8 confidence). O·1–O·2 requires upgrade.
  Cost: Free — always

Folio · Standard
  Who it serves: Independent authors seeking O·1–O·2, hybrid publishers, small presses (<50 titles/yr)
  Included: All Quill features + behavioral monitoring integration, biometric session verification, Human Verified seal eligibility, publisher co-signature, priority HAN queue
  Verification depth: Full composite score. O·1–O·2 eligible with ≥92% confidence. Legally admissible certificate.
  Cost: $12/month per author, or $249/month for presses up to 50 titles

Imprint · Publisher
  Who it serves: Mid-size publishers (50–500 titles/yr), literary agents with volume needs
  Included: All Folio features + submissions portal integration, batch HAN processing, agent API, white-label verification dashboard, legal certificate generation, editorial attestation workflows
  Verification depth: Full pipeline with editorial co-verification. Agent and editor attestation layer. Bulk dispute resolution.
  Cost: $1,200/month, up to 500 titles

Codex · Enterprise
  Who it serves: Big 5 / major publishers, global platforms, AI company licensing desks
  Included: All Imprint features + unlimited titles, multi-imprint management, custom API integration, dedicated account team, legal consultation services, training corpus licensing marketplace access, regulatory compliance reporting (EU AI Act Article 50)
  Verification depth: Full suite including third-party audit option and court-ready provenance packages.
  Cost: Custom — from $8,000/month
The Origin Mark System · Seals, Signals & Brand Architecture

Every format. One family.

A seal only works if it is instantly recognizable, universally trusted, and impossible to confuse with anything else. The Origin Seal system is built on a single visual language — the Origin Mark — expressed differently across every surface a book touches: cover, spine, copyright page, ebook metadata, audiobook file header, author website, publisher imprint page, and social profile.

Each seal has a proper name drawn from bookmaking and manuscript tradition. Each has a designated color name from the natural world — not a hex code, but a name a reader can remember. Together they form a brand architecture that belongs to no single company: it is a public standard, like a nutrition label or a safety rating, that any party may use when their work has been verified through AuthenChain.

The seal system distinguishes between three types of marks: Work Seals (applied to a specific title), Creator Seals (applied to an author's identity or publisher's imprint), and Platform Seals (applied by websites, distributors, and digital storefronts that have adopted the framework). All three types use the same visual grammar but different form factors and naming conventions.

No seal may be applied without a corresponding verified HAN record. Misuse of any Origin Seal without verification is a violation of the framework's terms and, where legislation applies, may constitute deceptive trade practice under consumer protection law.

Work Seals — Applied to Individual Titles
O·1
Work Seal · Origin Level 1
The Manuscript Seal
Sienna — named for the iron-rich earth of Tuscany, home of the earliest European manuscript tradition
Awarded only to works verified O·1 at ≥92% composite confidence. The highest designation in the system. Eleven-pointed star form references the eleven strokes of a calligrapher's pen.
O·2–O·3
Work Seal · Origin Levels 2–3
The Craft Seal
Goldenrod — the color of aged vellum, of candlelight on a writer's desk
Triple-ring form signifies the three circles of authorship: conception, drafting, and revision — all human at this level. Covers both unassisted research use and AI-suggestion-informed work where all prose is the author's own.
O·4–O·5
Work Seal · Origin Levels 4–5
The Partnership Seal
Slate Teal — the blue-grey of the horizon line, where human sky meets machine sea
The crossed-axis square form signals equal structural weight — human vision holding the frame while collaborative forces fill it. Two-level seal covering Human-Primary and Co-Authored works.
O·6–O·7
Work Seal · Origin Levels 6–7
The Synthesis Seal
Admiralty — deep navigational blue, the color of charts and the calculated course
The nested triangle represents a human apex directing a broad AI base. Human direction narrows into a point of editorial intent over a wide machine-generated foundation. Covers Directed Synthesis and Curated Generation.
O·8
Work Seal · Origin Level 8
The Autonomous Seal
Midnight Lapis — deep machine-age blue, neither natural nor cold, but deliberately composed
The compass-rose form: four cardinal points of machine output, a central point of human approval. Clear, honest, and functional. This seal does not diminish the work — it makes its nature legible.
HUMAN VERIFIED
Creator Seal · Author Identity
The Verified Author Seal
Sienna & Vellum — human warmth on aged paper ground
Applied to an author's profile, website, bio, and social presence — not to a single title, but to the author as a verified human creator. Requires at least one O·1–O·3 HAN-verified work. The sixteen-point starburst is the Author Mark: the most recognizable shape in the system.
Creator & Platform Seals
PUBLISHER SIGNATORY
Creator Seal · Publisher Imprint
The Imprint Seal
Walnut — the color of a publisher's leather-bound ledger
Applied to publisher imprint pages, catalogs, and websites. Awarded to publishers who have adopted the Origin Spectrum as standard practice across their catalog — not merely for one title. Requires that ≥80% of new titles carry HANs.
ORIGIN COMPLIANT
Platform Seal · Distributor / Retailer
The Clearinghouse Seal
Forest — the deep green of a library's reading room
Applied by ebook platforms, audiobook distributors, bookstores, and library systems that display Origin levels in their catalogs and enforce HAN verification for listed works. The checkmark within the circle is the most minimal, digitally legible signal in the system.
AUDIO VERIFIED
Format Seal · Audiobook
The Voice Seal
Thistle — the soft purple of sound waves rendered visible, of voice made form
Applied to audiobook editions where narration is confirmed human. Separate verification covers both the written source text (Origin level of the underlying work) and the narration method — human narrator vs. AI voice synthesis. Each is declared independently.
Brand Architecture & Usage Guidelines

Naming Architecture

  • All seals are referred to collectively as Origin Marks — never "badges," "labels," or "stickers"
  • Work Seals are named with a bookmaking noun: Manuscript Seal, Craft Seal, Partnership Seal, Synthesis Seal, Autonomous Seal
  • Creator Seals use role language: Verified Author Seal, Imprint Seal
  • Platform Seals use infrastructure language: Clearinghouse Seal, Voice Seal
  • The system's visual mark is called the Origin Mark. The authentication credential is the HAN. These are distinct terms and must not be conflated.
  • Never refer to any seal as "AI-free" — the framework does not use negative framing. Human-verified and origin-declared are the preferred descriptors.

Color Standards

  • Sienna (#8B4513) — O·1, Manuscript Seal, Verified Author Seal. The warmest, most human color in the system.
  • Goldenrod (#B8860B) — O·2–O·3, Craft Seal. Warm and earned.
  • Slate Teal (#4A7B8A) — O·4–O·5, Partnership Seal. Balanced, neither warm nor cold.
  • Admiralty (#2E5A78) — O·6–O·7, Synthesis Seal. Structured, purposeful.
  • Midnight Lapis (#1E4060) — O·8, Autonomous Seal. The deepest, most machine-adjacent tone.
  • Walnut (#6B4226) — Imprint Seal. Publisher credibility brown.
  • Forest (#3A7A5A) — Clearinghouse Seal. Trust green.
  • Thistle (#6A4A8A) — Voice Seal. Audio purple.

Minimum Size & Clear Space

  • Minimum display size: 18px diameter (digital), 8mm diameter (print)
  • Clear space equal to the seal's own radius on all sides — no other marks, text, or borders within this zone
  • On print covers: seal appears in bottom-right corner of back cover or lower spine. Never on front cover unless cover design explicitly features it.
  • On ebook files: embedded in metadata; optionally displayed on copyright page spread
  • On audiobook files: embedded in ID3/Vorbis metadata; displayed on retailer product page via platform seal integration
  • On websites: footer or "about" page. Must link to live HAN verification record.
  • On social: profile bio or pinned post. Author seals may appear in profile images within a defined circular crop frame.

What Is Never Permitted

  • Applying any Origin Seal without a valid, active HAN record from AuthenChain
  • Displaying a seal at a higher Origin level than the verified HAN
  • Modifying seal colors, proportions, or forms in any way
  • Placing seal marks on works where AI contribution is not accurately disclosed
  • Using the Origin Mark system in advertising without a corresponding verified work
  • Displaying any seal on a work under active HAN dispute or revocation review
  • Creating imitation marks that approximate but are not official Origin Marks
  • Referring to the framework in any way that implies governmental or regulatory endorsement it does not yet hold
Seal Placement by Format

Print Book
  Primary placement: Back cover, lower-right corner. Size: 14–18mm. Color seal with QR code linking to HAN verification.
  Secondary / optional: Copyright page (text form: "Origin Level O·3 · HAN: XXXXXXXX"). Spine above barcode if space permits.

Ebook
  Primary placement: Copyright page spread (full color, minimum 40px). Embedded in EPUB metadata as dc:rights field.
  Secondary / optional: Retailer product page via platform integration (Clearinghouse Seal required on the retailer's end).

Audiobook
  Primary placement: ID3/Vorbis/AAX metadata field. Cover art lower-right corner at minimum 80px. Origin level read aloud in the opening credits.
  Secondary / optional: Voice Seal displayed separately if narration is also human-verified. Retailer product page badge.

Author Website
  Primary placement: Footer of every page (Verified Author Seal, 24–32px, linking to AuthenChain author profile).
  Secondary / optional: "About" page with full disclosure statement and link to HAN records for all verified titles.

Publisher Website
  Primary placement: Imprint Seal in site footer and on catalog/submissions pages.
  Secondary / optional: Individual title pages showing the Work Seal and HAN for each verified title.

Social Media
  Primary placement: Verified Author Seal in bio (circular crop, 80×80px minimum). HAN link in bio URL field.
  Secondary / optional: Work Seal image card when sharing book announcement posts. Never as a standalone post without context.

Self-Published / Indie
  Primary placement: Same as print/ebook/audiobook above. Free Quill tier covers all placement needs. No publisher intermediary required.
  Secondary / optional: KDP, IngramSpark, Draft2Digital, Findaway, and ACX: insert HAN in title metadata field. Platform integration in development.
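For the ebook placement above, the dc:rights entry in the EPUB's OPF metadata could carry the same text form used on the copyright page. The helper below is a sketch under that assumption; the exact rights-string format is not specified by the framework.

```python
# Illustrative helper that renders the dc:rights entry called for in the
# ebook placement row. The string reuses the copyright-page text form
# ("Origin Level O·n · HAN: ..."); this helper is an assumption, not spec.
def epub_rights_entry(level: int, han: str) -> str:
    """Return an OPF <dc:rights> element carrying the Origin declaration."""
    return f"<dc:rights>Origin Level O·{level} · HAN: {han}</dc:rights>"
```

A publisher's build pipeline could inject this element into the OPF `<metadata>` block alongside the existing Dublin Core fields.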
Take the Next Step

Declare your Origin.

The framework is free, voluntary, and designed to work before regulation requires it. Register for your HAN, download the seal kit, and begin declaring Origin levels on everything you publish.