1. Final deploy
You've been running `supabase functions deploy mcp` along the way. One more time, so you see exactly what's shipping:

```bash
supabase functions deploy mcp --no-verify-jwt
```

The `--no-verify-jwt` flag matters: Supabase's default function-level JWT verification would reject the OAuth metadata endpoints (which by spec must be unauthenticated). We do verification inside our function for the protected routes, so we want function-level verification off.
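Inside the function, that split amounts to a small allow-list check that runs before the auth middleware. A sketch of the idea; the exact path prefixes are assumptions about how your routes are laid out:

```typescript
// Paths that must stay unauthenticated per the OAuth discovery specs.
// The exact prefixes here are assumptions about this server's routing.
const PUBLIC_PREFIXES = [
  "/.well-known/oauth-protected-resource",
  "/.well-known/oauth-authorization-server",
];

// Decide whether a request path needs a bearer token.
function requiresAuth(path: string): boolean {
  return !PUBLIC_PREFIXES.some((prefix) => path.startsWith(prefix));
}
```

The auth middleware can call this first and fall through for public paths, which is exactly why function-level verification has to be off: Supabase would otherwise reject those requests before your code ever runs.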
Confirm the deploy:

```bash
# Public discovery — should return JSON without any auth header
curl https://<ref>.supabase.co/functions/v1/mcp/.well-known/oauth-protected-resource | jq

# Protected RPC — should return 401 with the WWW-Authenticate header
curl -i https://<ref>.supabase.co/functions/v1/mcp/
```

If both look right, the function is ready for real OAuth traffic.
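That `WWW-Authenticate` header on the 401 is what lets OAuth-aware clients find your resource metadata. A sketch of building it (the header shape follows RFC 9728; the metadata URL is a placeholder):

```typescript
// Build the WWW-Authenticate challenge a 401 response should carry,
// pointing OAuth-aware clients at the protected-resource metadata.
function wwwAuthenticate(metadataUrl: string): string {
  return `Bearer resource_metadata="${metadataUrl}"`;
}

const header = wwwAuthenticate(
  "https://<ref>.supabase.co/functions/v1/mcp/.well-known/oauth-protected-resource",
);
// e.g. set it on the response before returning 401:
//   c.header("WWW-Authenticate", header)
```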
2. Connect Claude Code
In a terminal where Claude Code is installed:

```bash
claude mcp add shared-skills \
  --transport http \
  https://<ref>.supabase.co/functions/v1/mcp/
```

Two things happen behind the scenes:

- Claude makes an unauthenticated request. Our server returns 401 + `WWW-Authenticate: Bearer ... resource_metadata="..."`.
- Claude fetches the resource metadata, follows `authorization_servers[0]` to Supabase, runs the OAuth 2.1 authorization-code-with-PKCE flow (dynamically registering as a client if needed), pops a browser window for the user to sign in, exchanges the code for an access token, and stores it.
The browser handoff looks like:

```
Claude → opens browser at https://<ref>.supabase.co/auth/v1/authorize?...
    ↓
User signs in with email/Google/whatever you enabled in step 4.
    ↓
Supabase redirects to http://127.0.0.1:<port>/callback?code=...
    ↓
Claude exchanges the code (+ PKCE verifier) for an access token.
    ↓
Token saved; Claude makes its first authed request to your MCP server.
```

Verify the connection:

```bash
claude mcp list
# shared-skills: connected
```

And inside a Claude session, you should now be able to ask:
"What snippets do I have saved?"
Claude calls
list_snippets()→ "You have 4 snippets across 2 workspaces. Want me to show one?"
That's the entire flow.
3. Connect Claude Desktop
Claude Desktop reads its MCP servers from a JSON config file:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
Add an entry:
```json
{
  "mcpServers": {
    "shared-skills": {
      "url": "https://<ref>.supabase.co/functions/v1/mcp/",
      "transport": "streamable-http"
    }
  }
}
```

Restart Claude Desktop. The first time you mention skills, it'll prompt the same OAuth flow, save the token, and you're connected.
4. The production checklist
Now that it works end-to-end, harden it. Each item below has caught me at least once.
Rate limiting
Supabase Edge Functions don't ship with a built-in per-user rate limiter. The simplest option: lean on a small in-database counter.
```sql
create table public.rate_limits (
  user_id uuid not null references auth.users(id),
  bucket text not null,
  window_start timestamptz not null,
  count int not null default 0,
  primary key (user_id, bucket, window_start)
);

create or replace function public.bump_rate_limit(
  p_bucket text,
  p_limit int,
  p_window interval
) returns boolean
language plpgsql security definer
set search_path = public
as $$
declare
  -- Snap "now" to the start of the current p_window-sized bucket,
  -- so the window length actually comes from the argument.
  v_window_start timestamptz := to_timestamp(
    floor(extract(epoch from now()) / extract(epoch from p_window))
    * extract(epoch from p_window)
  );
  v_count int;
begin
  insert into public.rate_limits (user_id, bucket, window_start, count)
  values (auth.uid(), p_bucket, v_window_start, 1)
  on conflict (user_id, bucket, window_start)
  do update set count = public.rate_limits.count + 1
  returning count into v_count;
  return v_count <= p_limit;
end $$;
```

Then, in any expensive tool handler:
```ts
const { data: under } = await supabase.rpc("bump_rate_limit", {
  p_bucket: "save_snippet",
  p_limit: 30, // 30 saves per minute per user
  p_window: "1 minute",
});
if (!under) throw new Error("rate limit exceeded — try again in a minute");
```

That's a coarse, single-region limiter — good enough for "stop one user from hammering the server." For finer control, drop in Upstash Ratelimit, which works in Deno.
Observability
The bare minimum:
```bash
# Tail logs
supabase functions logs mcp --tail

# Spot-check the last 100
supabase functions logs mcp
```

For more than that, ship to a real log service. Edge Functions support outbound HTTP, so:
```ts
async function logEvent(level: string, msg: string, ctx: Record<string, unknown>) {
  if (Deno.env.get("LOG_ENDPOINT")) {
    await fetch(Deno.env.get("LOG_ENDPOINT")!, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ level, msg, ts: new Date().toISOString(), ...ctx }),
    }).catch(() => { /* don't fail the request if logging fails */ });
  }
  console.log(`[${level}] ${msg}`, ctx);
}
```

Wire it into the auth middleware to log every authenticated call:
```ts
await logEvent("info", "mcp.request", {
  sub: user.sub,
  clientId: user.clientId,
  path: c.req.path,
  method: c.req.method,
});
```

What to alert on:
- Sustained 5xx rate > 1% — a regression
- Authentication-failure spike — possible credential-stuffing
- Unusual tool-call patterns from a single sub — abuse
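None of these checks needs fancy tooling to start with. A hypothetical helper you could run over windowed request counts; the stats shape, names, and thresholds are illustrative, not part of the server code:

```typescript
// Summary of one monitoring window (say, five minutes of traffic).
interface WindowStats {
  total: number;        // all requests in the window
  serverErrors: number; // 5xx responses
  authFailures: number; // 401s
}

// Return the list of alert conditions the window trips.
// Thresholds mirror the checklist above; tune them to your traffic.
function shouldAlert(w: WindowStats): string[] {
  const alerts: string[] = [];
  if (w.total > 0 && w.serverErrors / w.total > 0.01) {
    alerts.push("sustained 5xx rate above 1%");
  }
  if (w.authFailures > 50) {
    alerts.push("auth-failure spike (possible credential stuffing)");
  }
  return alerts;
}
```

Run it on a schedule against whatever your log service aggregates, and page on a non-empty result.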
Key rotation
Supabase's signing key rotates automatically; because we use `createRemoteJWKSet` (step 6), `jose` handles the rotation on its own. There's nothing to do here, but it's worth confirming: rotate a key in the Supabase dashboard (Project Settings → API → Rotate JWT Secret), wait a moment, and hit the MCP server with a fresh token. It should keep working without a redeploy.
The anon key is not a secret per se (it's in client-side code), but if it ever needs rotating (compromise, public-repo leak), do it in the dashboard and run `supabase secrets set SUPABASE_ANON_KEY=...` for any custom deployments.
Backups
Everything is great until the day someone runs `drop table snippets;`. Set up nightly backups:
- Free tier: daily automated backups, 7-day retention
- Pro+: point-in-time recovery, 7-30 day retention
Confirm in the dashboard under Project Settings → Database → Backups. For belt-and-suspenders, schedule a daily `pg_dump` to S3:

```bash
# Run via GitHub Actions or a cron worker
pg_dump "$SUPABASE_DB_URL" \
  --schema=public \
  --no-owner \
  --no-privileges \
  | gzip > "snippets-$(date +%F).sql.gz"
aws s3 cp "snippets-$(date +%F).sql.gz" "s3://my-backups/snippets/"
```

Security review
A short list of things to double-check before publishing the server URL widely:
- Service role key is NOT in any Edge Function. Only the anon key is referenced; user tokens drive everything else.
- RLS is enabled on every table in the `public` schema we created. Re-run the verification queries from step 3.
- The `user_id_for_email` SECURITY DEFINER function (step 9) is the only definer in this schema and only does the one lookup. Don't expand its body without thought — definer functions bypass RLS.
- CORS is restrictive. Streamable HTTP MCP doesn't require browser CORS, but if you ever expose the same endpoint to a web client, lock `Access-Control-Allow-Origin` to specific origins.
- Resource indicator binding works. Mint a token for a different MCP server (i.e., set the `resource` parameter to a different URL during auth), hit yours with it, and confirm a 401. (RFC 8707 enforcement — covered in step 6.)
- Snippet bodies don't get returned to non-members. Pick a private snippet, mint a token for a user not in its workspace, and attempt `get_snippet`. It should return "not found or not visible."
- Logs are scrubbed of bodies. It's tempting to log `tool_input` for debugging; for `save_snippet` that means snippet bodies end up in your log retention. Either redact bodies in the logger, or accept that logs are at the same sensitivity as the database and apply the same access controls.
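The resource-indicator check in that list boils down to comparing the token's `aud` claim against this server's canonical URL. A sketch of the comparison; per RFC 7519 the `aud` claim may be a string or an array, and the URL here is a placeholder:

```typescript
// RFC 8707 enforcement: reject tokens minted for some other resource.
// `aud` can be a single string or an array of strings per RFC 7519.
function audienceMatches(
  aud: string | string[] | undefined,
  resource: string,
): boolean {
  if (!aud) return false;
  const audiences = Array.isArray(aud) ? aud : [aud];
  return audiences.includes(resource);
}

const MCP_RESOURCE = "https://<ref>.supabase.co/functions/v1/mcp/";
// In the auth middleware:
//   if (!audienceMatches(payload.aud, MCP_RESOURCE)) → respond 401
```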
Scaling notes (for when this isn't just your team)
The architecture scales reasonably well in its default shape:
- Edge Functions scale horizontally and cold-start in ~100 ms. A team of 50 users making low hundreds of calls per day is well within the free or pro tier.
- Postgres + RLS is the bottleneck if you grow. The `is_workspace_member` helper runs once per query as a function call; if you see slow tool latencies in the logs, run `explain analyze` on a representative query and consider indexing `workspace_members(workspace_id, user_id)` (we did this in step 3 — confirm it's still there) and `snippets(workspace_id, updated_at desc)`.
- MCP connections themselves are stateless. Each request rebuilds the `Server` and Supabase client. There's no in-memory state to lose if a function instance spins down.
The first thing to outgrow is probably the resource listing — 50 snippets per user is arbitrary and small for an active team. Promote that to a paginated cursor-based API when it bites.
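When that cap bites, a cursor is just an opaque encoding of the last row's sort key. A sketch under the assumption that snippets sort by `(updated_at desc, id)`; the `Cursor` shape and helper names are hypothetical:

```typescript
// Opaque keyset-pagination cursor over (updated_at, id).
// Clients echo it back; the next query filters rows "after" this position.
interface Cursor {
  updatedAt: string; // ISO timestamp of the last row served
  id: string;        // tiebreaker for identical timestamps
}

function encodeCursor(c: Cursor): string {
  return btoa(JSON.stringify(c));
}

function decodeCursor(s: string): Cursor | null {
  try {
    return JSON.parse(atob(s)) as Cursor;
  } catch {
    return null; // malformed cursor → treat as the first page
  }
}
```

The matching SQL predicate is a keyset filter like `(updated_at, id) < ($1, $2)` rather than `offset`, which degrades as users page deeper.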
5. Add the slash command, for ergonomics
This is optional but nice. In Claude Code, create a project slash command at `.claude/commands/snippet.md`:
```markdown
---
description: Save the current discussion as a team snippet
---

Use the `save_snippet` MCP tool on the `shared-skills` server.
Workspace: ask me which workspace to save to if I haven't said.
Title: come up with something short and descriptive.
Body: the relevant block(s) of our conversation, cleaned up.
Tags: 2-3 relevant ones.
Visibility: "workspace" by default.
Confirm the snippet id after saving.
```

Now anyone on the team can type `/snippet` and Claude will use the MCP tool to capture the moment. Pair with `/find <topic>` (use `list_snippets`) and `/load <topic>` (chain `list_snippets` → `get_snippet`) for the full "shared skills" experience.
6. What you built
Counting from step 1:
- A multi-tenant MCP server, deployed to Supabase Edge Functions.
- OAuth 2.1 with dynamic client registration, PKCE, JWKS-verified bearer tokens, and RFC 8707 audience binding.
- A Postgres schema with workspaces, member roles, and snippet visibility — entirely policed by Row-Level Security.
- Seven MCP tools (`list_snippets`, `get_snippet`, `save_snippet`, `share_snippet`, `list_workspaces`, `create_workspace`, `invite_to_workspace`) and three MCP resource types.
- A production checklist covering rate limits, observability, backups, and security review.
The end-to-end thing your teammates see: they run `claude mcp add shared-skills <url>`, sign in once, and from then on every Claude session knows their entire team's prompt library.
7. Where to take it next
A few directions, depending on what you want:
- Versioning snippets. Add a `snippet_versions` table that captures each edit; `save_snippet` writes both. That adds undo, plus an audit trail for "who wrote this prompt and when."
- Snippet variables. Add a templating layer — `{{topic}}`, `{{audience}}` — so a saved prompt can be partially filled by Claude before use. The MCP tool surface stays the same; the change is in how Claude reads and uses the body.
- A web admin UI. Same Supabase project, a Next.js app on Vercel reading/writing the same `snippets` table. RLS already does the multi-tenancy; you're just adding a different surface.
- Convert this into a Smithery-style public listing. Smithery and the MCP Registry let users discover hosted MCP servers; the auth + multi-tenancy you built is what makes it safe to do that.
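The snippet-variables idea needs only a few lines on the read path. A sketch, where the `{{name}}` syntax and the `fillTemplate` helper are hypothetical, not part of the server you built:

```typescript
// Replace {{name}} placeholders with supplied values; leave unknown
// placeholders intact so Claude can see what still needs filling in.
function fillTemplate(body: string, vars: Record<string, string>): string {
  return body.replace(/\{\{(\w+)\}\}/g, (match, name: string) =>
    name in vars ? vars[name] : match,
  );
}
```

Leaving unmatched placeholders untouched is deliberate: Claude can then ask the user for the missing values instead of silently shipping a half-filled prompt.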
You shipped a small, real thing that does something genuinely useful. That's the whole win.