Developer API v2.1

SentinelFlow API Reference

Welcome to the official API documentation for SentinelFlow. This API exposes two primary subsystems: the Security Engine for real-time log ingestion and scoring, and the Forensic Engine for offline threat hunting on large datasets.

All endpoints return standard JSON responses and use HTTP status codes to indicate success or failure.

Base URLs (Environment: Localhost)
# 1. Real-time Security & Threat Intelligence
BASE_URL: http://127.0.0.1:8000/security

# 2. Forensic File Analysis & Auditing
BASE_URL: http://127.0.0.1:8000/audit

Authentication

SentinelFlow is multi-tenant. You must establish a user identity to scope your logs and analysis.

POST /security/signup

Register a new tenant identity. This isolates your logs from other users.

Field      Type      Description
username   string    Unique identifier
password   string    Secure password
BASH
curl -X POST http://127.0.0.1:8000/security/signup \
-H "Content-Type: application/json" \
-d '{"username": "corp_admin", "password": "s3cure"}'

PATCH /security/user/update/{username}

Update credentials or subscription plan. Migrates existing logs if the username changes.

JSON Payload
{
  "new_username": "admin_v2",
  "new_plan": "Pro"
}
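
A minimal request sketch for this endpoint, assuming the tenant registered above ("corp_admin") is the one being updated; the payload mirrors the JSON shown.

PYTHON
import requests

# Update credentials/plan for the tenant created during signup (placeholder username)
payload = {"new_username": "admin_v2", "new_plan": "Pro"}
r = requests.patch(
    "http://127.0.0.1:8000/security/user/update/corp_admin",
    json=payload
)
print(r.status_code)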

Real-time Ingestion

Push logs from your Firewall, Auth Provider, or Infrastructure to the AI Engine for immediate scoring.

POST /security/analyze/{username}

The core ingestion endpoint. It sends a structured log to Gemini 2.0 Flash for threat classification and returns a unique log_id.

Request Body (LogEntry)

Field            Type      Description
source           string    e.g., "AWS WAF", "Auth0"
event_id         integer   ID from your origin system
severity_level   string    "INFO", "WARNING", "CRITICAL"
message          string    The raw log text to analyze
metadata         dict      Context (IPs, user_ids, regions)
PYTHON
import requests

payload = {
  "source": "Nginx Gateway",
  "event_id": 90210,
  "severity_level": "WARNING",
  "message": "SQL Injection pattern detected in header",
  "metadata": {
    "ip": "45.22.19.11",
    "path": "/api/v1/users"
  }
}

r = requests.post(
  "http://127.0.0.1:8000/security/analyze/admin", 
  json=payload
)

Reporting & Statistics

Retrieve processed intelligence for visualization on the dashboard.

GET /security/logs/{username}

Fetches the 10 most recent active threats. Note: Issues marked as "resolved" are automatically filtered out.

Response Example
[
  {
    "id": "65b...",
    "source": "Nginx Gateway",
    "analysis": {
      "score": 85,
      "severity": "HIGH",
      "summary": "Potential SQLi attempt...",
      "prevention_measures": "Block IP"
    }
  }
]
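
A minimal fetch sketch; the username "admin" matches the analyze example above, and the printed fields assume the response shape shown.

PYTHON
import requests

# Fetch the 10 most recent unresolved threats for this tenant
r = requests.get("http://127.0.0.1:8000/security/logs/admin")
for issue in r.json():
    print(issue["id"], issue["analysis"]["severity"], issue["analysis"]["summary"])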

GET /security/stats/{username}

Returns calculated time-series data for the dashboard risk graph.

Response Example
{
  "labels": ["-30m", "-28m", ...],
  "counts": [4, 7, 2, 0],
  "risks": ["Stable", "High", "Stable"]
}
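
A minimal sketch for pulling the dashboard series; field names follow the response example above.

PYTHON
import requests

# Retrieve time-series data for the risk graph
r = requests.get("http://127.0.0.1:8000/security/stats/admin")
stats = r.json()
print(stats["labels"], stats["counts"], stats["risks"])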

DELETE /security/issue/{issue_id}

Marks a threat as resolved. It remains in the database for historical stats but disappears from the active feed.

cURL
curl -X DELETE http://127.0.0.1:8000/security/issue/65b12...
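
The same call from Python; the issue_id is a placeholder taken from the "id" field returned by GET /security/logs/{username}.

PYTHON
import requests

# Resolve an issue using the id from the active feed (placeholder value)
issue_id = "65b12..."
r = requests.delete(f"http://127.0.0.1:8000/security/issue/{issue_id}")
print(r.status_code)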

Forensic Audit Module

The Forensic module handles retrospective analysis of large datasets using a TensorFlow Autoencoder (or a statistical fallback on unsupported hardware).

POST /audit/scan

Upload a file for deep-learning anomaly detection. The system normalizes numeric data and identifies deviations.

Supported Formats: .csv, .xls, .xlsx
PYTHON (Using Requests)
import requests

# Upload a file for deep-learning anomaly detection
with open('audit_log.csv', 'rb') as f:
    r = requests.post(
        "http://127.0.0.1:8000/audit/scan",
        files={'file': f}
    )

# Returns list of anomalies & outliers
print(r.json()['threats'])

GET /audit/logs

Retrieve the history of previous forensic scans stored in MongoDB.

Response Example
[
  {
    "timestamp": "2024-01-25T10:00:00",
    "total_scanned": 5000,
    "threats_detected": 12,
    "threshold_used": 0.98
  }
]
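
A minimal sketch for listing scan history; the printed fields assume the response shape shown above.

PYTHON
import requests

# List previous forensic scans stored in MongoDB
r = requests.get("http://127.0.0.1:8000/audit/logs")
for scan in r.json():
    print(scan["timestamp"], scan["threats_detected"], "threats detected")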