# Document Parsing Models - Inference Guide

## Overview

The scripts in this folder extract structured data from unstructured documents using different document parsing services and libraries. Each service follows a standard installation procedure and provides an `infer_*` script that performs inference on PDF/image samples.

You can choose from document parsing products such as **Upstage DP**, **AWS Textract**, **Google Document AI**, **Microsoft Azure Document Intelligence**, **LlamaParse**, or **Unstructured**. Most of these services require an API key for access. Follow the specific setup instructions for each product to configure the environment properly.

Each service generates a JSON output file in a consistent format with a `time_sec` field for performance measurement.

---

## Quick Start

**Run a single inference script:**

```bash
python scripts/infer_upstage.py \
    --data-path <data-path> \
    --save-path <save-path> \
    [--concurrent 4] [--sampling-rate 0.5] [--request-timeout 600]
```

---

## Common CLI Arguments

All `infer_*` scripts share these arguments:

| Argument | Description | Default |
|----------|-------------|---------|
| `--data-path` | Path to documents directory | Required |
| `--save-path` | Output JSON file path | Required |
| `--input-formats` | File extensions to process | `.pdf .jpg .jpeg .png .bmp .tiff .heic` |
| `--concurrent` | Enable async mode with N concurrent requests | None (sync mode) |
| `--sampling-rate` | Fraction of files to process (0.0-1.0) | 1.0 |
| `--request-timeout` | API timeout in seconds | 600 |
| `--random-seed` | Random seed for reproducible sampling | None (random) |

---

## Common Features

All inference scripts share the following features:

- **Time Measurement**: Automatically measures API latency and stores `time_sec` in each result
- **Interim Results**: Saves individual API results to avoid redundant API calls on re-runs
- **Error Handling**: Continues execution even if some files fail
- **Progress Tracking**: Shows progress and completion status for each document
- **Cost Optimization**: Skips already processed files to avoid unnecessary API costs
- **Concurrency**: Optional async mode with semaphore-based rate limiting
- **Sampling**: Optional random sampling with reproducible seeds

### How Interim Results Work

Each inference script creates an interim directory (named after the output file) where individual API results are stored:

```
predictions/
├── upstage_infer.json      # Final merged results
└── upstage_infer/          # Interim directory
    ├── document1.pdf.json
    ├── document2.pdf.json
    └── document3.pdf.json
```

Benefits:

1. **Crash Recovery**: If the script crashes, already processed files are preserved
2. **Incremental Processing**: Re-running the script only processes new files
3. **Cost Savings**: Avoids redundant API calls for successful results

### Sampling and Reproducible Results

All inference scripts support random sampling of input files via the `--sampling-rate` parameter (0.0-1.0). For reproducible results across multiple runs, use the `--random-seed` parameter:

```bash
# Sample 50% of files with reproducible selection
python scripts/infer_upstage.py \
    --data-path ./documents \
    --save-path results.json \
    --sampling-rate 0.5 \
    --random-seed 42
```

**Benefits:**

- **Reproducible Experiments**: Same seed + same sampling rate = identical file selection
- **Performance Testing**: Compare different services on the exact same documents
- **Cost Control**: Test on smaller datasets while maintaining representative samples

**Note**: Without `--random-seed`, sampling differs on each run (standard random behavior).

---

## Upstage

Follow the [official Upstage DP Documentation](https://developers.upstage.ai/docs/apis/document-parse) to set up Upstage for Document Parsing.
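The interim-result cycle described above can be sketched as follows. The function names mirror the `utils.py` helpers listed later in this guide, but the actual signatures in the repository may differ:

```python
import json
from pathlib import Path


def save_interim_result(interim_dir: Path, file_name: str, result: dict) -> None:
    """Persist one document's API result as <file_name>.json in the interim directory."""
    interim_dir.mkdir(parents=True, exist_ok=True)
    (interim_dir / f"{file_name}.json").write_text(json.dumps(result))


def load_interim_result(interim_dir: Path, file_name: str):
    """Return the cached result for a document, or None if it has not been processed yet."""
    path = interim_dir / f"{file_name}.json"
    return json.loads(path.read_text()) if path.exists() else None
```

On a re-run, a script can call `load_interim_result()` first and skip the API call whenever a cached result exists, which is what makes crash recovery and cost savings possible.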
### Environment Variables

```bash
export UPSTAGE_API_KEY="your-api-key"
export UPSTAGE_ENDPOINT="https://api.upstage.ai/v1/document-ai/document-parse"
```

### Inference

```bash
python scripts/infer_upstage.py \
    --data-path <data-path> \
    --save-path <save-path> \
    [--model-name document-parse-nightly] \
    [--mode standard|enhanced] \
    [--output-formats text html markdown]
```

**Service-specific arguments:**

- `--model-name`: Model version (default: `document-parse-nightly`)
- `--mode`: Parsing mode - `standard` or `enhanced`
- `--output-formats`: Output formats to request

---

## AWS Textract

### Installation

```bash
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws configure
pip install boto3
```

Refer to the [AWS Textract Documentation](https://docs.aws.amazon.com/en_us/textract/latest/dg/getting-started.html) for detailed instructions.

### Environment Variables

```bash
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export AWS_REGION="your-region"
export AWS_S3_BUCKET_NAME="your-bucket"  # Required for PDF processing
```

### Inference

```bash
python scripts/infer_aws.py \
    --data-path <data-path> \
    --save-path <save-path>
```

**Note:** PDFs use async Textract jobs (S3 upload + polling); images use direct analysis.

---

## Google Document AI

### Installation

```bash
apt-get install apt-transport-https ca-certificates gnupg curl
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
apt-get update && apt-get install google-cloud-cli
gcloud init
pip install google-cloud-documentai
```

More information is available in the [Google Document AI Documentation](https://console.cloud.google.com/ai/document-ai).
### Environment Variables

```bash
export GOOGLE_PROJECT_ID="your-project-id"
export GOOGLE_PROCESSOR_ID="your-processor-id"
export GOOGLE_LOCATION="us"
export GOOGLE_ENDPOINT="us-documentai.googleapis.com"
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/credentials.json"
```

### Inference

```bash
python scripts/infer_google.py \
    --data-path <data-path> \
    --save-path <save-path>
```

---

## Microsoft Azure Document Intelligence

### Installation

```bash
pip install azure-ai-formrecognizer==3.3.0
```

See the [Microsoft Azure Form Recognizer Documentation](https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/quickstarts/get-started-sdks-rest-api) for additional details.

### Environment Variables

```bash
export MICROSOFT_API_KEY="your-api-key"
export MICROSOFT_ENDPOINT="https://your-resource.cognitiveservices.azure.com/"
```

### Inference

```bash
python scripts/infer_microsoft.py \
    --data-path <data-path> \
    --save-path <save-path>
```

---

## LlamaParse

Refer to the [official LlamaParse Documentation](https://docs.cloud.llamaindex.ai/category/API/parsing) to set up LlamaParse.

### Environment Variables

```bash
export LLAMAPARSE_API_KEY="your-api-key"
export LLAMAPARSE_POST_URL="https://api.cloud.llamaindex.ai/api/v1/parsing/upload"
export LLAMAPARSE_GET_URL="https://api.cloud.llamaindex.ai/api/v1/parsing/job"
```

### Inference

```bash
python scripts/infer_llamaparse.py \
    --data-path <data-path> \
    --save-path <save-path> \
    [--mode cost-effective|agentic|agentic-plus]
```

**Service-specific arguments:**

- `--mode`: Parsing mode
  - `cost-effective`: Fast, standard documents (default)
  - `agentic`: Balanced quality/cost
  - `agentic-plus`: Highest quality

**Note:** Time measurement includes polling time for async API calls.
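The note above means that for async services such as LlamaParse, `time_sec` covers the full submit-then-poll cycle, not just the upload request. A minimal sketch of a timed polling loop, where `check_status` is a hypothetical callable standing in for a GET request against `LLAMAPARSE_GET_URL` (the real job-status schema is defined by the service):

```python
import time


def poll_until_done(check_status, interval_sec=2.0, timeout_sec=600):
    """Call check_status() until it returns "SUCCESS"; return total elapsed seconds.

    check_status is a hypothetical callable returning a status string such as
    "PENDING" or "SUCCESS". Raises TimeoutError after timeout_sec.
    """
    start = time.monotonic()
    while True:
        status = check_status()
        if status == "SUCCESS":
            # Elapsed time includes every poll and sleep, matching how
            # time_sec is described for async API calls.
            return time.monotonic() - start
        if time.monotonic() - start > timeout_sec:
            raise TimeoutError(f"job still {status} after {timeout_sec}s")
        time.sleep(interval_sec)
```

The `timeout_sec` default of 600 mirrors the shared `--request-timeout` argument.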
---

## Unstructured

### Installation

```bash
pip install "unstructured[all-docs]"
pip install poppler-utils
apt install tesseract-ocr libtesseract-dev
apt install tesseract-ocr-[lang]  # Use the appropriate language code
```

Detailed installation instructions are available in the [Unstructured Documentation](https://unstructured-io.github.io/unstructured/installing.html). Use [Tesseract Language Codes](https://tesseract-ocr.github.io/tessdoc/Data-Files-in-different-versions.html) for OCR support in different languages.

### Environment Variables

```bash
export UNSTRUCTURED_API_KEY="your-api-key"
export UNSTRUCTURED_URL="https://api.unstructured.io/general/v0/general"
```

### Inference

```bash
python scripts/infer_unstructured.py \
    --data-path <data-path> \
    --save-path <save-path>
```

---

## Category Mapping

Each `infer_*` script defines a `CATEGORY_MAP` that standardizes the layout-element labels emitted by the different products. Mapping the extracted document layout classes to shared categories ensures uniform evaluation. Example from LlamaParse:

```python
CATEGORY_MAP = {
    "text": "paragraph",
    "heading": "heading1",
    "table": "table"
}
```

Modify the `CATEGORY_MAP` in the inference scripts to match your document layout categories for accurate results.
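A mapping like the one above is typically applied per extracted element. The `normalize_category` helper below is illustrative (not from the repository) and falls back to the raw label for classes you have not mapped:

```python
CATEGORY_MAP = {
    "text": "paragraph",
    "heading": "heading1",
    "table": "table",
}


def normalize_category(raw_label: str) -> str:
    """Map a service-specific layout label to a standardized category."""
    # Unmapped labels pass through unchanged so they stay visible in the
    # results and can be added to CATEGORY_MAP later.
    return CATEGORY_MAP.get(raw_label.lower(), raw_label)
```

Passing unmapped labels through, rather than dropping them, makes it easy to spot categories your map is missing when comparing services.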
---

## Utils Module

The `utils.py` module provides shared functionality:

- `read_file_paths()` - Find files with supported formats
- `validate_json_save_path()` - Validate the output file path
- `load_json_file()` - Safely load existing JSON results
- `get_interim_dir_path()` - Get the interim directory path
- `save_interim_result()` - Save an individual API result
- `load_interim_result()` - Load an existing interim result
- `collect_all_interim_results()` - Merge all interim results

---

## Base Classes (for developers)

The `base.py` module provides an inheritance hierarchy:

- **`BaseInference`**: Core class with sync/async orchestration, interim result handling, and performance metrics
- **`HttpClientInference`**: For HTTP-based APIs (Upstage, LlamaParse) - manages `httpx.AsyncClient`

Use `create_argument_parser()` from `base.py` to get the standard CLI arguments when creating new inference scripts.
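A new inference script would start from the common arguments table. The parser below is a stand-in that mirrors that table, assuming `create_argument_parser()` in `base.py` returns a comparable `argparse.ArgumentParser` (the repository's actual implementation may differ):

```python
import argparse


def create_argument_parser() -> argparse.ArgumentParser:
    """Stand-in mirroring the common CLI arguments shared by all infer_* scripts."""
    parser = argparse.ArgumentParser(description="Document parsing inference")
    parser.add_argument("--data-path", required=True,
                        help="Path to documents directory")
    parser.add_argument("--save-path", required=True,
                        help="Output JSON file path")
    parser.add_argument("--input-formats", nargs="+",
                        default=[".pdf", ".jpg", ".jpeg", ".png",
                                 ".bmp", ".tiff", ".heic"])
    parser.add_argument("--concurrent", type=int, default=None,
                        help="Enable async mode with N concurrent requests")
    parser.add_argument("--sampling-rate", type=float, default=1.0)
    parser.add_argument("--request-timeout", type=int, default=600)
    parser.add_argument("--random-seed", type=int, default=None)
    return parser
```

Service-specific flags such as `--model-name` or `--mode` can then be added on top of the shared parser before calling `parse_args()`.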