This page exists strictly to describe how your images move through the system, which data points are collected, where every result is recorded, and how the raw data is returned to the dashboard or PDF. There is no marketing spin—just the steps, the outputs, and how to read them.
Every upload is authenticated and checked for MIME type and extension before it is written into a private blob container. URL submissions are fetched server-side, validated, and stored beside uploads so the processing pipeline always references the same binary data.
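As a rough sketch of what that gate implies (the allowed types, function name, and error handling here are illustrative assumptions, not the service's actual code):

```python
# Sketch of an upload gate: verify MIME type and extension before the blob write.
# ALLOWED_* values are illustrative assumptions, not the service's real policy.
from pathlib import Path

ALLOWED_MIME = {"image/jpeg", "image/png", "image/webp"}
ALLOWED_EXT = {".jpg", ".jpeg", ".png", ".webp"}

def validate_upload(filename: str, content_type: str) -> None:
    """Reject the upload before anything touches storage."""
    if content_type not in ALLOWED_MIME:
        raise ValueError(f"unsupported MIME type: {content_type}")
    if Path(filename).suffix.lower() not in ALLOWED_EXT:
        raise ValueError(f"unsupported extension: {filename}")
```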
Each job record captures the status, timestamps, features requested (basic analysis, detectors, reverse search, PDF), and the blob path before the async processing queue picks it up.
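For orientation, a job record along those lines might look like the following; every field name not mentioned in the text above is an assumption, not the actual schema:

```python
# Illustrative shape of a job record (field names beyond those mentioned
# in the text are assumptions).
job_record = {
    "id": "0f3c-example",                  # illustrative job identifier
    "status": "queued",                    # e.g. queued | processing | done | failed
    "created_utc": "2024-01-01T12:00:00Z",
    "features": ["basic_analysis", "detectors", "reverse_search", "pdf"],
    "blob_path": "uploads/0f3c-example/original.jpg",
}
```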
Feature outcomes land in the feature_statuses table, so the system maintains a complete audit trail of successes, failures, and retries.

The analysis heuristics adjust themselves based on context: editing/AI tags raise the signals, while trusted cameras, mobile phones, or screenshots reduce the sensitivity, and every adjustment is recorded inside forensics_json.contextual_flags alongside the observations.
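A minimal sketch of how such contextual weighting could work, assuming illustrative tag names and multipliers (none of which are confirmed by the system itself):

```python
# Sketch of context-based weighting; multipliers and tag names are assumptions.
def apply_context(signals: dict[str, float], tags: set[str]) -> tuple[dict, list[str]]:
    flags = []
    scale = 1.0
    if tags & {"edited_in_software", "ai_generator_tag"}:
        scale *= 1.25
        flags.append("editing/AI metadata present: signals raised")
    if tags & {"trusted_camera", "mobile_phone", "screenshot"}:
        scale *= 0.8
        flags.append("trusted device or screenshot: sensitivity reduced")
    adjusted = {k: min(1.0, v * scale) for k, v in signals.items()}
    return adjusted, flags  # flags mirror what forensics_json.contextual_flags records
```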
Each data point produces structured JSON and optional explanations. Nothing is thrown away.
EXIF and file metadata land in metadata_json.exif plus metadata_json.modified_utc and size_bytes. Missing or stripped tags are recorded instead of interpreted, so the UI can show every field the worker saw.

Forensic heuristic results are stored in forensics_json. Observations (e.g., "Noise texture varies across regions") describe how the values were derived.

Detector output is stored in detector_json along with flags for the moderation categories and captions/tags. The dashboard surfaces provider/model names plus confidence, and the raw JSON is accessible on demand.

Reverse search queries the provider with safe=active (Google Lens as the default provider), performs a best-attempt reverse image lookup, and records the matches, domains, and similarity scores under reverse_search and job_matches. Failures are noted in feature statuses and credits are refunded when appropriate.

The worker's pixel analysis module performs math down to the channel level before the image even reaches the Azure detector. It opens the buffer into a grayscale array, measures its histograms, and then normalizes four core signals: noise inconsistency, boundary artifacts, JPEG blockiness, and an image-type heuristic that detects screenshots or synthetic canvases.
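As a sketch of that first step, assuming a Pillow/NumPy-style implementation (the actual worker code is not shown here):

```python
# Minimal sketch of the first pixel-analysis step described above:
# decode the stored bytes, convert to grayscale, and take a histogram.
from io import BytesIO

import numpy as np
from PIL import Image

def load_grayscale(buffer: bytes) -> tuple[np.ndarray, np.ndarray]:
    gray = np.asarray(Image.open(BytesIO(buffer)).convert("L"), dtype=np.float32)
    hist, _ = np.histogram(gray, bins=256, range=(0, 255))
    return gray, hist
```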
Noise inconsistency subtracts the local standard deviation inside 8×8 patches from the global standard deviation, clips the result between 0 and 1, and reports the clipped difference as the signal. Higher values mean the noise texture shifts from one region to another, which can flag composites. The routine also calculates a cluster score by binarizing the grayscale image and measuring connected regions, then combines the two values so the final ratio reflects both uniform patches and artifact clusters.
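A rough sketch of how a measure like this could be computed; the normalization by the global standard deviation and the 50/50 blend of the two values are assumptions, not the worker's exact math:

```python
import numpy as np
from scipy import ndimage

def noise_inconsistency(gray: np.ndarray, patch: int = 8) -> float:
    """Rough version of the described measure: global std minus mean local std,
    normalized and clipped to 0..1 (the normalization choice is an assumption)."""
    h, w = gray.shape
    h, w = h - h % patch, w - w % patch
    blocks = gray[:h, :w].reshape(h // patch, patch, w // patch, patch)
    local_std = blocks.std(axis=(1, 3)).mean()
    global_std = gray.std()
    diff = np.clip((global_std - local_std) / (global_std + 1e-6), 0.0, 1.0)

    # Cluster score: binarize and count connected regions relative to area.
    mask = gray > gray.mean()
    _, n_regions = ndimage.label(mask)
    cluster = np.clip(n_regions / (mask.size / (patch * patch)), 0.0, 1.0)

    return float(np.clip(0.5 * diff + 0.5 * cluster, 0.0, 1.0))  # blend is an assumption
```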
Boundary artifacts rely on dilating the trimap of the image, growing regions until only a few pixels per edge remain, and then computing how much the border changes as it expands; that change is a proxy for tampering where edges get smudged. The number is scaled so 0 means no change and 1 is the maximum observed deformation.
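Sketched under the assumption that the mask comes from a simple threshold and grows by a few pixels (the exact trimap construction and scaling are not specified here):

```python
import numpy as np
from scipy import ndimage

def boundary_artifacts(gray: np.ndarray, grow_px: int = 3) -> float:
    """Sketch of the border-change proxy: dilate a binary mask a few pixels
    and measure how much its outline moves (the scaling choice is an assumption)."""
    mask = gray > gray.mean()
    grown = ndimage.binary_dilation(mask, iterations=grow_px)
    border_change = np.logical_xor(grown, mask).sum()
    perimeter_estimate = max(1, np.logical_xor(mask, ndimage.binary_erosion(mask)).sum())
    return float(np.clip(border_change / (perimeter_estimate * grow_px * 2), 0.0, 1.0))
```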
JPEG blockiness compares the absolute difference across rows/columns that straddle the 8×8 compressed grid, then divides that measure by the average gradient of the image so bright/high-contrast photos don't look artificially blocky. Again the ratio is clipped between 0 and 1, giving you an easy-to-read likelihood that JPEG compression artifacts dominate the frame.
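A compact sketch of that ratio; treating the average gradient as the baseline and subtracting it before clipping is an assumption about the scaling:

```python
import numpy as np

def jpeg_blockiness(gray: np.ndarray) -> float:
    """Sketch: mean absolute difference across 8-pixel grid boundaries,
    divided by the average gradient so contrast does not inflate the score."""
    col_edges = np.abs(np.diff(gray, axis=1))[:, 7::8].mean()   # vertical 8x8 boundaries
    row_edges = np.abs(np.diff(gray, axis=0))[7::8, :].mean()   # horizontal 8x8 boundaries
    avg_gradient = (np.abs(np.diff(gray, axis=1)).mean() +
                    np.abs(np.diff(gray, axis=0)).mean()) / 2 + 1e-6
    ratio = ((col_edges + row_edges) / 2) / avg_gradient
    return float(np.clip(ratio - 1.0, 0.0, 1.0))  # subtracting the baseline is an assumption
```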
The image-type heuristic pulls from shape metrics on the binary mask. If the mask is dominated by a single dense cluster with smooth boundaries, the routine nudges interpretations toward screenshots or synthetic renders. That value, together with the other three signals, goes straight into forensics_json.signals so you can see the scaled number plus any narrative observation about how it was computed.
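As an illustration of the stored shape, something like the following; only boundary_artifacts is confirmed by the examples on this page, the remaining key names are guesses derived from the signal names:

```python
# Illustrative shape of forensics_json.signals (key names other than
# boundary_artifacts are guesses based on the signal names in the text).
signals = {
    "noise_inconsistency": 0.42,
    "boundary_artifacts": 0.35,
    "jpeg_blockiness": 0.18,
    "image_type": "screenshot_like",
    "observations": ["Noise texture varies across regions"],
}
```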
The dashboard pulls from job_results and exposes the stored JSON for each data point. Feature pills show status (success, failed, skipped). When a PDF report is requested, the worker generates it, uploads it to reports/<jobId>/report.pdf, and a fresh SAS token is minted on download so you can always access the same content.
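Minting that read-only SAS with the Azure Blob SDK looks roughly like this; the account wiring, expiry window, and helper name are assumptions:

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

def report_download_url(job_id: str, account: str, key: str) -> str:
    """Mint a fresh read-only SAS for reports/<jobId>/report.pdf."""
    blob_name = f"{job_id}/report.pdf"
    sas = generate_blob_sas(
        account_name=account,
        container_name="reports",
        blob_name=blob_name,
        account_key=key,
        permission=BlobSasPermissions(read=True),
        expiry=datetime.now(timezone.utc) + timedelta(minutes=15),  # window is an assumption
    )
    return f"https://{account}.blob.core.windows.net/reports/{blob_name}?{sas}"
```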
We never return aggregate score summaries without the underlying data points. The UI always ties a percentage or bucketed likelihood to explicit stored values (metadata, analysis heuristics, Azure JSON, reverse matches).
Results the worker posts back must carry the x-worker-secret header so the API can confirm authenticity before persisting results.

When you open a job, the status bullets describe which data points were collected successfully and which failed. Below them you will find the raw JSON, the normalized values (e.g., 0.35 boundary_artifacts), and the plain-language explanation generated from the stored data. PDFs mirror the same tables so you can distribute an exact copy of what the worker logged.
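A minimal sketch of the x-worker-secret check mentioned above, assuming the shared secret lives in an environment variable (the variable name and helper are illustrative):

```python
import hmac
import os

def verify_worker(headers: dict[str, str]) -> bool:
    """Constant-time check of the x-worker-secret header before results persist.
    The environment variable name is an assumption."""
    expected = os.environ.get("WORKER_SHARED_SECRET", "")
    provided = headers.get("x-worker-secret", "")
    return bool(expected) and hmac.compare_digest(provided, expected)
```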
The entire audit trail (metadata_json, forensics_json, detector_json, reverse_search, job_matches, feature_statuses) can be downloaded via the API or reviewed on-screen.
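For example, pulling the stored JSON over the API could look like this; the route and auth header are hypothetical placeholders, not the documented endpoint:

```python
import requests

# Hypothetical endpoint and auth header; the real route may differ.
resp = requests.get(
    "https://api.example.com/jobs/<jobId>/results",
    headers={"Authorization": "Bearer <api-key>"},
    timeout=30,
)
audit_trail = resp.json()  # metadata_json, forensics_json, detector_json, reverse_search, ...
```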