Quickstart
Install Sebastion AI on GitHub in under a minute, then optionally pick up an API key for programmatic access.
The fastest way to use Foundation Machines is to install the Sebastion AI GitHub App on a repo and let it review your next pull request. API access is available as a second step for power users and CI integrations.
Step 1: Install Sebastion AI on GitHub
- Visit github.com/apps/sebastionai.
- Click Install and choose the repositories you want covered. You can pick "All repositories" or a specific subset; you can change this later from your GitHub settings.
- Open a pull request in one of those repositories.
Within about 60 seconds, Sebastion opens a new GitHub issue on the same repository with structured findings: severity, file, line, message, and suggested fix. No config files, no CI changes, no model picker.
The free tier uses Claude Sonnet 4.6 for audits. Paid tiers (Pro / Team / Enterprise) automatically route audits to Claude Opus 4.7. See models for the full per-plan map.
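The findings in the issue can be pictured as small records. This is an illustrative sketch only; the field names follow the list above, but the severity values and the exact issue formatting are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    # One structured finding as posted in the audit issue.
    severity: str       # assumed values, e.g. "high" / "medium" / "low"
    file: str           # path relative to the repo root
    line: int
    message: str
    suggested_fix: str

example = Finding(
    severity="high",
    file="src/auth.ts",
    line=1,
    message="Hard-coded AWS credential",
    suggested_fix="Load the secret from an environment variable.",
)
print(example.severity)  # → high
```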
Step 2: API access (for power users)
Beyond the GitHub App, the same engine is exposed as an
OpenAI-compatible API at https://api.foundationmachines.ai/v1. This is
optional, and primarily useful for IDE integrations, CI scripts and
custom automation.
Get an API key
Sign in at app.foundationmachines.ai
with GitHub and mint a personal API key from the dashboard. Keys look
like fm_<tenant>_<rand>.
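Because keys follow fm_<tenant>_<rand>, the tenant can be recovered with a bounded split. A minimal sketch, assuming the tenant itself contains no underscore (the random suffix may):

```python
def key_tenant(key: str) -> str:
    """Return the <tenant> part of an fm_<tenant>_<rand> API key."""
    # maxsplit=2 keeps any underscores in the random suffix intact.
    prefix, tenant, _rand = key.split("_", 2)
    if prefix != "fm":
        raise ValueError("not a Foundation Machines key")
    return tenant

print(key_tenant("fm_acme_1a2b3c"))  # → acme
```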
Run a security audit on a snippet
curl -X POST https://api.foundationmachines.ai/v1/audit \
  -H "Authorization: Bearer fm_acme_..." \
  -H "Content-Type: application/json" \
  -d '{
    "snippets": [
      {
        "filename": "src/auth.ts",
        "language": "typescript",
        "content": "const AWS_SECRET = \"AKIAIOSFODNN7EXAMPLE\";\n"
      }
    ],
    "ruleset": "security"
  }'

You don't pass a model; Sebastion picks one based on your plan. See
the audit reference for the full request and response schema.
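The same request can be built from Python. This sketch constructs exactly the body the curl command above sends; actually posting it requires a real key and network access, so that part is shown commented out:

```python
import json

API_URL = "https://api.foundationmachines.ai/v1/audit"

payload = {
    "snippets": [
        {
            "filename": "src/auth.ts",
            "language": "typescript",
            "content": 'const AWS_SECRET = "AKIAIOSFODNN7EXAMPLE";\n',
        }
    ],
    "ruleset": "security",
}
body = json.dumps(payload)

# To actually send it (requires a real fm_... key):
# import urllib.request
# req = urllib.request.Request(
#     API_URL,
#     data=body.encode(),
#     headers={
#         "Authorization": "Bearer fm_acme_...",
#         "Content-Type": "application/json",
#     },
# )
# print(urllib.request.urlopen(req).read())

print(json.loads(body)["ruleset"])  # → security
```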
Call the chat-completions gateway
You can also use the gateway as a drop-in replacement for OpenAI's chat
endpoint. Here model is yours to choose, within the set allowed by
your plan.
Python
from openai import OpenAI
client = OpenAI(
base_url="https://api.foundationmachines.ai/v1",
api_key="fm_acme_...",
)
resp = client.chat.completions.create(
model="claude-sonnet-4.6",
messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)

TypeScript
import OpenAI from "openai";
const client = new OpenAI({
baseURL: "https://api.foundationmachines.ai/v1",
apiKey: process.env.FM_API_KEY!,
});
const resp = await client.chat.completions.create({
model: "claude-sonnet-4.6",
messages: [{ role: "user", content: "Hello" }],
});

curl

curl https://api.foundationmachines.ai/v1/chat/completions \
  -H "Authorization: Bearer fm_acme_..." \
  -H "Content-Type: application/json" \
  -d '{"model":"claude-sonnet-4.6","messages":[{"role":"user","content":"Hello"}]}'

Check your usage
curl https://api.foundationmachines.ai/v1/usage/me \
  -H "Authorization: Bearer fm_acme_..."

Returns your plan, daily token cap, and what you've used today.
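If you want to act on the usage response in a script, a small helper like the one below works. The key names here (plan, daily_token_cap, tokens_used_today) are assumptions, not the documented schema; check the real /v1/usage/me payload and adjust:

```python
def summarize_usage(usage: dict) -> str:
    # Assumed response keys; verify against the actual /v1/usage/me payload.
    remaining = usage["daily_token_cap"] - usage["tokens_used_today"]
    return f"{usage['plan']}: {remaining} tokens left today"

# Hypothetical response for illustration.
sample = {"plan": "pro", "daily_token_cap": 1000000, "tokens_used_today": 250000}
print(summarize_usage(sample))  # → pro: 750000 tokens left today
```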