---
title: TEXT-AUTH - Evidence-Based Text Forensics System
emoji: 🔍
colorFrom: blue
colorTo: purple
sdk: docker
sdk_version: 4.36.0
app_file: text_auth_app.py
pinned: false
license: mit
---

๐Ÿ›ก๏ธ TEXT-AUTH

**Evidence-First Text Forensics & Authenticity Assessment**





๐Ÿ“ Abstract

TEXT-AUTH is a research-oriented, production-minded text forensics system that evaluates written content using multiple independent linguistic, statistical, and semantic signals.

Rather than claiming authorship or identifying a generation source, the platform performs evidence-based probabilistic assessment of textual consistency patterns. It reports confidence-calibrated signals, uncertainty estimates, and human-interpretable explanations to support downstream decision-making.

TEXT-AUTH is designed as a decision-support and forensic analysis tool, not a binary classifier or attribution oracle.


## 🚀 Overview

Problem. Modern text, whether human-written, assisted, edited, or fully generated, often exhibits patterns that are difficult to evaluate using binary classifiers.

Solution. A domain-aware analysis system that combines six orthogonal evidence signals (perplexity, entropy, structural, semantic, linguistic, and multi-perturbation stability) into a confidence-calibrated ensemble. Outputs are explainable, with sentence-level highlighting and downloadable reports (JSON/PDF).

Live Deployment Link: AI Text Authenticator Platform

MVP Scope. End-to-end FastAPI backend, lightweight HTML UI, modular metrics, Hugging Face model auto-download, and a prototype ensemble classifier. Model weights are not committed to the repo; they are fetched at first run.


## 🎯 Key Differentiators

| Feature | Description | Impact |
|---------|-------------|--------|
| Domain-Aware Detection | Calibrated thresholds and metric weights for 16 content types (Academic, Technical, Creative, Social Media, etc.) | Improved signal calibration and reduced false positives compared to generic binary systems |
| 6-Signal Evidence Ensemble | Orthogonal statistical, syntactic, and semantic indicators | Robust assessments with reduced false positives |
| Explainability | Sentence-level scoring, highlights, and human-readable reasoning | Trust & auditability |
| Auto Model Fetch | First-run download from Hugging Face, local cache, offline fallback | Lightweight repo & reproducible runs |
| Extensible Design | Plug-in metrics, model registry, and retraining pipeline hooks | Easy research iteration |

## 📊 Supported Domains & Threshold Configuration

The platform supports domain-aware forensic analysis tailored to the following 16 domains, each with specific synthetic-text consistency thresholds and metric weights defined in config/threshold_config.py. These configurations are used by the ensemble classifier to adapt its decision-making process.

Domains:

- general (default fallback)
- academic
- creative
- ai_ml
- software_dev
- technical_doc
- engineering
- science
- business
- legal
- medical
- journalism
- marketing
- social_media
- blog_personal
- tutorial

Threshold Configuration Details (config/threshold_config.py):

Each domain is configured with specific thresholds for the six detection metrics and an ensemble threshold. The weights determine the relative importance of each metric's output during the ensemble aggregation phase.

- High-Consistency Threshold: if a metric's synthetic-consistency score exceeds this value, it contributes stronger evidence toward a synthetic-consistency assessment for that metric.
- Low-Consistency Threshold: if a metric's synthetic-consistency score falls below this value, it contributes evidence toward higher human-authored consistency for that metric.
- Weight: the relative weight assigned to the metric's result during ensemble combination (normalized internally to sum to 1.0 for active metrics). A sketch of the configuration shape follows this list.
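
For orientation, here is a minimal sketch of what one domain's entry might look like; the field names and numbers are illustrative assumptions, not the actual values in config/threshold_config.py.

```python
# Hypothetical shape of a single domain's threshold entry; all values are invented
ACADEMIC_THRESHOLDS = {
    "perplexity": {"high_consistency": 0.75, "low_consistency": 0.35, "weight": 0.22},
    "entropy":    {"high_consistency": 0.70, "low_consistency": 0.30, "weight": 0.18},
    # ... the remaining four metrics follow the same shape ...
    "ensemble_threshold": 0.60,
}
```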

Confidence-Calibrated Aggregation (High Level)

  1. Start with domain-specific base weights (defined in config/threshold_config.py).
  2. Adjust these weights dynamically based on each metric's individual confidence score using a scaling function.
  3. Normalize the adjusted weights.
  4. Compute the final weighted aggregate probability (a toy numeric walkthrough follows below).
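
All numbers below are invented for illustration; for simplicity, the confidence scaling in step 2 is shown as plain multiplication, whereas the system uses a sigmoid scaling function.

```python
# Step 1: domain base weights (toy values, three metrics for brevity)
base = {"perplexity": 0.25, "entropy": 0.20, "semantic": 0.15}
conf = {"perplexity": 0.9, "entropy": 0.5, "semantic": 0.7}

# Step 2: scale each base weight by its metric's confidence
adjusted = {m: base[m] * conf[m] for m in base}        # {'perplexity': 0.225, ...}

# Step 3: normalize so the active weights sum to 1.0
total = sum(adjusted.values())                         # 0.43
weights = {m: w / total for m, w in adjusted.items()}  # perplexity ≈ 0.523

# Step 4: the final probability is the weighted sum of per-metric probabilities
print(weights)
```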

๐Ÿ—๏ธ System Architecture

Architecture (dark-themed Mermaid)

```mermaid
%%{init: {'theme': 'dark'}}%%
flowchart LR
    subgraph FE [Frontend Layer]
        A[Web UI<br/>File Upload & Input]
        B[Interactive Dashboard]
    end

    subgraph API [API & Gateway]
        C[FastAPI<br/>Auth & Rate Limit]
    end

    subgraph ORCH [Forensic Orchestrator]
        D[Domain Classifier]
        E[Preprocessor]
        F[Metric Coordinator]
    end

    subgraph METRICS [Metrics Pool]
        P1[Perplexity]
        P2[Entropy]
        P3[Structural]
        P4[Linguistic]
        P5[Semantic]
        P6[MultiPerturbationStability]
    end

    G[Evidence Aggregator]
    H[Postprocessing & Reporter]
    I["Statistical Reference Models<br/>(HuggingFace Cache)"]
    J[Storage: Logs, Reports, Cache]

    A --> C
    B --> C
    C --> ORCH
    ORCH --> METRICS
    METRICS --> G
    G --> H
    H --> C
    I --> ORCH
    C --> J
```

Notes: The orchestrator schedules parallel metric computation, handles timeouts, and coordinates with the model manager for model loading and caching.


๐Ÿ” Workflow / Data Flow

```mermaid
%%{init: {'theme': 'dark'}}%%
sequenceDiagram
    participant U as User (UI/API)
    participant API as FastAPI
    participant O as Orchestrator
    participant M as Metrics Pool
    participant E as Ensemble
    participant R as Reporter

    U->>API: Submit text / upload file
    API->>O: Validate & enqueue job
    O->>M: Preprocess & dispatch metrics (parallel)
    M-->>O: Metric results (async)
    O->>E: Aggregate & calibrate
    E-->>O: Final assessment + uncertainty
    O->>R: Generate highlights & report
    R-->>API: Report ready (JSON/PDF)
    API-->>U: Return analysis + download link
```

## 🧮 Forensic Signals & Mathematical Foundation

This section provides the exact metric definitions implemented in metrics/ and rationale for their selection. The ensemble combines these orthogonal signals to increase robustness against edited, paraphrased, or algorithmically regularized text.

Metric summary (weights are configurable per domain):

- Perplexity: 25%
- Entropy: 20%
- Structural: 15%
- Semantic: 15%
- Linguistic: 15%
- Multi-perturbation stability: 10%

### 1) Perplexity (25% weight)

Definition

$$\text{Perplexity} = \exp\left(-\frac{1}{N}\sum_{i=1}^{N} \log P(w_i \mid \text{context})\right)$$

Implementation sketch

```python
import math

def calculate_perplexity(text, model, k=512):
    # `tokenize` and `model.get_probability` are repository-level helpers
    tokens = tokenize(text)
    log_probs = []
    for i in range(len(tokens)):
        context = tokens[max(0, i - k):i]  # sliding window of up to k tokens
        prob = model.get_probability(tokens[i], context)
        log_probs.append(math.log(prob))
    return math.exp(-sum(log_probs) / len(tokens))
```

Domain calibration example

```python
if domain == Domain.ACADEMIC:
    perplexity_threshold *= 1.2
elif domain == Domain.SOCIAL_MEDIA:
    perplexity_threshold *= 0.8
```

### 2) Entropy (20% weight)

Shannon entropy (token level)

$$H(X) = -\sum_{i} p(x_i) \log_2 p(x_i)$$

Implementation sketch

```python
import math
from collections import Counter

def calculate_text_entropy(text):
    tokens = text.split()
    token_freq = Counter(tokens)
    total = len(tokens)
    # Shannon entropy over the empirical token distribution
    return -sum((f / total) * math.log2(f / total) for f in token_freq.values())
```

### 3) Structural Metric (15% weight)

Burstiness

$$\text{Burstiness} = \frac{\sigma - \mu}{\sigma + \mu}$$

Uniformity

$$\text{Uniformity} = 1 - \frac{\sigma}{\mu}$$

where μ is the mean sentence length and σ is the standard deviation of sentence lengths.

Sketch

```python
import numpy as np

def calculate_burstiness(text):
    # `split_sentences` is a repository-level helper
    sentences = split_sentences(text)
    lengths = [len(s.split()) for s in sentences]
    mean_len = np.mean(lengths)
    std_len = np.std(lengths)
    denom = std_len + mean_len
    burstiness = (std_len - mean_len) / denom if denom > 0 else 0.0
    uniformity = 1 - (std_len / mean_len if mean_len > 0 else 0)
    return {'burstiness': burstiness, 'uniformity': uniformity}
```

### 4) Semantic Analysis (15% weight)

Coherence (sentence embedding cosine similarity)

$$\text{Coherence} = \frac{1}{n-1} \sum_{i=1}^{n-1} \cos(e_i, e_{i+1})$$

Sketch

```python
import numpy as np

def calculate_semantic_coherence(text, embed_model):
    # `split_sentences` and `cosine_similarity` are repository-level helpers;
    # `embed_model` is any sentence-embedding model exposing an `encode` method
    sentences = split_sentences(text)
    embeddings = [embed_model.encode(s) for s in sentences]
    sims = [cosine_similarity(embeddings[i], embeddings[i + 1])
            for i in range(len(embeddings) - 1)]
    return {'mean_coherence': np.mean(sims), 'coherence_variance': np.var(sims)}
```

### 5) Linguistic Metric (15% weight)

POS diversity, parse tree depth, syntactic complexity

```python
import numpy as np

def calculate_linguistic_features(text, nlp_model):
    # `nlp_model` is a spaCy pipeline; `get_tree_depth` (repository helper)
    # returns a token's distance from its dependency-tree root
    doc = nlp_model(text)
    pos_tags = [token.pos_ for token in doc]
    pos_diversity = len(set(pos_tags)) / len(pos_tags)
    depths = [max(get_tree_depth(token) for token in sent) for sent in doc.sents]
    return {'pos_diversity': pos_diversity, 'mean_tree_depth': np.mean(depths)}
```

### 6) Multi-Perturbation Stability (10% weight)

Stability under perturbation (curvature principle)

$$\text{Stability} = \frac{1}{n} \sum_{j=1}^{n} \left| \log P(x) - \log P(x_{\text{perturbed},j}) \right|$$

Sketch

```python
import numpy as np

def multi_perturbation_stability_score(text, model, num_perturbations=20):
    # `generate_perturbation` and `model.get_log_probability` are repository helpers
    original = model.get_log_probability(text)
    diffs = []
    for _ in range(num_perturbations):
        perturbed = generate_perturbation(text)
        diffs.append(abs(original - model.get_log_probability(perturbed)))
    return np.mean(diffs)
```

๐Ÿ›๏ธ Ensemble Methodology

Confidence-Calibrated Aggregation (high level)

- Start with domain base weights (e.g., DOMAIN_WEIGHTS in config/threshold_config.py)
- Adjust weights per metric with a sigmoid confidence scaling function (a sketch of a plausible form follows the code below)
- Normalize and compute the weighted aggregate
- Quantify uncertainty using variance, confidence means, and decision distance from 0.5

```python
def ensemble_aggregation(metric_results, domain):
    # `get_domain_weights`, `sigmoid_confidence`, and `weighted_aggregate`
    # are repository-level helpers; each result carries a `confidence` field
    base = get_domain_weights(domain)
    adj = {m: base[m] * sigmoid_confidence(r.confidence) for m, r in metric_results.items()}
    total = sum(adj.values())
    final_weights = {k: v / total for k, v in adj.items()}
    return weighted_aggregate(metric_results, final_weights)
```
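
The sigmoid confidence scaling function is referenced above but not defined in this README. A plausible minimal form, assuming a steepness parameter k and a midpoint of 0.5 (both invented for this sketch), could be:

```python
import math

def sigmoid_confidence(confidence, k=10.0, midpoint=0.5):
    # Maps a raw metric confidence in [0, 1] to a smooth multiplier that
    # down-weights low-confidence metrics and boosts high-confidence ones;
    # k and midpoint are illustrative defaults, not repository values
    return 1.0 / (1.0 + math.exp(-k * (confidence - midpoint)))
```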

Uncertainty Quantification

```python
import numpy as np

def calculate_uncertainty(metric_results, ensemble_result):
    # Disagreement among the metrics' synthetic probabilities
    var_uncert = np.var([r.synthetic_probability for r in metric_results.values()])
    # Average lack of confidence across metrics
    conf_uncert = 1 - np.mean([r.confidence for r in metric_results.values()])
    # Proximity of the ensemble probability to the 0.5 decision boundary
    decision_uncert = 1 - 2 * abs(ensemble_result.synthetic_probability - 0.5)
    return var_uncert * 0.4 + conf_uncert * 0.3 + decision_uncert * 0.3
```

## 🧭 Domain-Aware Detection

Domain weights and thresholds are configurable. Example weights (in config/threshold_config.py):

```python
DOMAIN_WEIGHTS = {
    'academic':     {'perplexity': 0.22, 'entropy': 0.18, 'structural': 0.15, 'linguistic': 0.20, 'semantic': 0.15, 'multi_perturbation_stability': 0.10},
    'technical':    {'perplexity': 0.20, 'entropy': 0.18, 'structural': 0.12, 'linguistic': 0.18, 'semantic': 0.22, 'multi_perturbation_stability': 0.10},
    'creative':     {'perplexity': 0.25, 'entropy': 0.25, 'structural': 0.20, 'linguistic': 0.12, 'semantic': 0.10, 'multi_perturbation_stability': 0.08},
    'social_media': {'perplexity': 0.30, 'entropy': 0.22, 'structural': 0.15, 'linguistic': 0.10, 'semantic': 0.13, 'multi_perturbation_stability': 0.10},
}
```

Domain Calibration Strategy (brief)

- Academic: increase linguistic weight, raise the perplexity multiplier
- Technical: prioritize semantic coherence, raise the synthetic threshold to reduce false positives
- Creative: boost entropy & structural weights for burstiness detection
- Social media: prioritize perplexity and relax linguistic demands

## ⚡ Performance Characteristics

Processing Times & Resource Estimates

| Text Length | Typical Time | vCPU | RAM |
|-------------|--------------|------|-----|
| Short (100–500 words) | 1.2 s | 0.8 | 512 MB |
| Medium (500–2000 words) | 3.5 s | 1.2 | 1 GB |
| Long (2000+ words) | 7.8 s | 2.0 | 2 GB |

Optimizations implemented

- Parallel metric computation (thread/process pools); a sketch of this pattern follows the list
- Conditional execution & early exit on high confidence
- Model caching & quantization support for memory efficiency
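
The sketch below illustrates the parallel-dispatch pattern described above; it is not the repository's orchestrator code, and `metric_fns` (a mapping from metric names to scoring callables) is a hypothetical argument.

```python
from concurrent.futures import ThreadPoolExecutor

def run_metrics_parallel(text, metric_fns, timeout=30):
    # Dispatch every metric concurrently and collect results as they finish;
    # metrics that fail or exceed the timeout are recorded as None
    results = {}
    with ThreadPoolExecutor(max_workers=len(metric_fns)) as pool:
        futures = {name: pool.submit(fn, text) for name, fn in metric_fns.items()}
        for name, future in futures.items():
            try:
                results[name] = future.result(timeout=timeout)
            except Exception:
                results[name] = None
    return results
```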

๐Ÿ“ Project Structure (as in repository)

```
text_auth/
├── config/
│   ├── model_config.py
│   ├── settings.py
│   ├── enums.py
│   ├── constants.py
│   ├── schemas.py
│   └── threshold_config.py
├── data/
│   ├── reports/
│   ├── validation_data/
│   └── uploads/
├── services/
│   ├── reasoning_generator.py
│   ├── ensemble_classifier.py
│   ├── highlighter.py
│   └── orchestrator.py
├── metrics/
│   ├── base_metric.py
│   ├── multi_perturbation_stability.py
│   ├── entropy.py
│   ├── linguistic.py
│   ├── perplexity.py
│   ├── semantic_analysis.py
│   └── structural.py
├── models/
│   ├── model_manager.py
│   └── model_registry.py
├── processors/
│   ├── document_extractor.py
│   ├── domain_classifier.py
│   ├── language_detector.py
│   └── text_processor.py
├── reporter/
│   └── report_generator.py
├── ui/
│   └── static/index.html
├── utils/
│   └── logger.py
├── validation/
├── example.py
├── requirements.txt
├── run.sh
├── README.md
├── Dockerfile
├── .gitignore
├── setup.sh
├── test_integration.py
├── .env.example
└── text_auth_app.py
```

๐ŸŒ API Endpoints

### /api/analyze: Text Analysis (POST)

Analyze raw text. Returns ensemble assessment, perโ€‘metric signals, highlights, and explainability reasoning.

Request (JSON)

```json
{
  "text": "...",
  "domain": "academic|technical_doc|creative|social_media",
  "enable_highlighting": true,
  "use_sentence_level": true
}
```

Response (JSON, abbreviated)

```json
{
  "status": "success",
  "analysis_id": "analysis_170...",
  "assessment": {
    "final_verdict": "Synthetic / Authentic / Hybrid",
    "overall_confidence": 0.89,
    "uncertainty_score": 0.23
  },
  "metric_signals": {
    "perplexity": { "score": 0.92, "confidence": 0.89 }
  },
  "highlighted_html": "<div>...</div>",
  "reasoning": {
    "summary": "...",
    "key_indicators": ["...", "..."]
  }
}
```

Note: The final verdict represents a probabilistic consistency assessment, not an authorship or generation claim.

### /api/analyze/file: File Analysis (POST, multipart/form-data)

Supports PDF, DOCX, TXT, DOC, and MD files. Default file size limit: 10 MB. Returns the same structure as the text analysis endpoint (a client sketch for both analysis endpoints appears after the utility endpoint list below).

### /api/report/generate: Report Generation (POST)

Generate downloadable JSON or PDF reports for a given analysis id.

Utility endpoints

- GET /health: health status, models loaded, uptime
- GET /api/domains: supported domains and thresholds
- GET /api/models: detectable model list
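
For illustration, a minimal Python client for the analysis endpoints might look like the following; the base URL/port, the multipart field name `file`, and the sample file path are all assumptions about a particular deployment.

```python
import requests

BASE = "http://localhost:7860"  # adjust to your deployment

# Text analysis
resp = requests.post(f"{BASE}/api/analyze", json={
    "text": "Sample passage to analyze...",
    "domain": "academic",
    "enable_highlighting": True,
    "use_sentence_level": True,
})
assessment = resp.json()["assessment"]
print(assessment["final_verdict"], assessment["overall_confidence"])

# File analysis (multipart/form-data)
with open("sample.pdf", "rb") as fh:
    resp = requests.post(f"{BASE}/api/analyze/file", files={"file": fh})
print(resp.json()["status"])
```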

โš™๏ธ Installation & Setup

Prerequisites

- Python 3.8+
- 4 GB RAM (8 GB recommended)
- Disk: 2 GB (models & deps)
- OS: Linux/macOS/Windows (WSL supported)

Quickstart

```bash
git clone https://github.com/satyaki-mitra/text_authentication.git
cd text_authentication
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
# Copy .env.example -> .env and set HF_TOKEN if using private models
python text_auth_app.py
# or: ./run.sh
```

Dev tips

- Use DEBUG=True in config/settings.py for verbose logs
- For containerized runs, see the Dockerfile in the repo

## 🧠 Model Management & First-Run Behavior

- The application automatically downloads required model weights from Hugging Face on the first run and caches them in the local HF cache (or a custom path specified in config/model_config.py).
- Model IDs and revisions are maintained in models/model_registry.py and referenced by models/model_manager.py.
- Best practices implemented:
  - Pin model revisions (e.g., repo_id@revision)
  - Resumable downloads using huggingface_hub.snapshot_download
  - Optional OFFLINE_MODE to load local model paths (see the sketch below)
  - Optional integrity checks (SHA256) after download
  - Support for private HF repos using the HF_TOKEN env var

Example snippet

```python
from huggingface_hub import snapshot_download

snapshot_download(repo_id="satyaki-mitra/statistical-text-reference-v1",
                  local_dir="./models/text-detector-v1")
```
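
A hedged sketch of the offline fallback described above; the OFFLINE_MODE handling shown here is an assumption about how it could be wired, not confirmed repository behavior.

```python
import os
from huggingface_hub import snapshot_download

def resolve_model_path(repo_id, local_dir):
    # In offline mode, trust a previously downloaded local copy
    if os.environ.get("OFFLINE_MODE") == "1":
        return local_dir
    # Otherwise download (resumable) and return the snapshot path
    return snapshot_download(repo_id=repo_id, local_dir=local_dir)
```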

## 🎨 Frontend Features (UI)

- Dual-panel responsive web UI (left: input / upload; right: live analysis)
- Sentence-level color highlights with tooltips and per-metric breakdown
- Progressive analysis updates (metric-level streaming)
- Theme: light/dark toggle (UI respects user preference)
- Export: JSON and PDF report download
- Interactive elements: click to expand sentence reasoning, copy text snippets, download raw metrics

## 💼 Business Model & Market Analysis

TAM: $20B (education, hiring, publishing); see the detailed breakdown in the original repo. Use cases: universities (plagiarism & integrity), hiring platforms (resume authenticity), publishers (content verification), social platforms (spam & SEO abuse).

Competitive landscape (summary)

- Binary authorship-claim systems (e.g., GPTZero-style tools). Our advantages: domain adaptation, explainability, evidence transparency, lower false positives, and competitive pricing. TEXT-AUTH explicitly avoids authorship claims in favor of evidence-based forensic assessment.

Monetization ideas

- SaaS subscription (seat / monthly analyze limits)
- Enterprise licensing with on-prem deployment & priority support
- API billing (per-analysis tiered pricing)
- Onboarding & consulting for institutions

## 🔮 Research Impact & Future Scope

Research directions

- Adversarial robustness (paraphrase & synonym attacks)
- Cross-model generalization & zero-shot detection
- Explainability: counterfactual examples & feature importance visualization

Planned features (Q1-Q2 2026)

- Multi-language support (Spanish, French, German, Chinese)
- Real-time streaming API (WebSocket)
- Institution-specific calibration & admin dashboards

Detailed research methodology and academic foundation available in our Whitepaper. Technical implementation details in Technical Documentation.


๐Ÿ—๏ธ Infrastructure & Deployment

Deployment (Mermaid dark diagram)

```mermaid
%%{init: {'theme': 'dark'}}%%
flowchart LR
    CDN[CloudFront / CDN] --> LB["Load Balancer (ALB/NLB)"]
    LB --> API1[API Server 1]
    LB --> API2[API Server 2]
    LB --> APIN[API Server N]
    API1 --> Cache[Redis Cache]
    API1 --> DB[PostgreSQL]
    API1 --> S3["S3 / Model Storage"]
    DB --> Backup["RDS Snapshot"]
    S3 --> Archive["Cold Storage"]
```

Deployment notes

- Containerize the app with Docker; orchestrate with Kubernetes or ECS for scale
- Autoscaling groups for API servers & worker nodes
- Use spot GPU instances for retraining & large metric compute jobs
- Integrate observability: Prometheus + Grafana, Sentry for errors, Datadog if available

๐Ÿ” Security & Risk Mitigation

Primary risks & mitigations

- Model performance drift: monitoring + retraining + rollback
- Adversarial attacks: adversarial training & input sanitization
- Data privacy: avoid storing raw uploads unless the user consents; redact PII in reports
- Secrets management: use env vars and vaults; avoid committing tokens
- Rate limits & auth: JWT/OAuth2, API key rotation, request throttling

File handling best practices (examples)

```python
ALLOWED_EXT = {'.txt', '.pdf', '.docx', '.doc', '.md'}

def allowed_file(filename):
    # Compare the lowercased filename's suffix against the extension whitelist
    return any(filename.lower().endswith(ext) for ext in ALLOWED_EXT)
```
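
A complementary size guard; the 10 MB value mirrors the file endpoint's default limit, and the constant and function names are assumptions for this sketch.

```python
MAX_UPLOAD_BYTES = 10 * 1024 * 1024  # matches the documented 10 MB default

def within_size_limit(data: bytes) -> bool:
    # Reject oversized payloads before any parsing or model work
    return len(data) <= MAX_UPLOAD_BYTES
```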

Continuous Improvement Pipeline (TODO)

- Regular retraining & calibration on new model releases
- Feedback loop: user-reported false positives integrated into training
- A/B testing for weight adjustments
- Monthly accuracy audits & quarterly model updates

## 📄 License & Acknowledgments

This project is licensed under the MIT License; see LICENSE in the repo.

Acknowledgments:

- DetectGPT (Mitchell et al., 2023): inspiration for perturbation-based detection
- Hugging Face Transformers & Hub
- Open-source NLP community and early beta testers

Built with ❤️ for evidence-based text forensics, transparency, and real-world readiness.

Version 1.0.0 | Last Updated: October 2025