Models

Every model that MarkedDown can score an instruction file against. Bring your own API key: no keys are stored, and costs pass through per request.

10 models, 6 providers, 2 of 10 tested.
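Bring-your-own-key pass-through means the key travels with each request rather than living in a database. A minimal sketch of the idea (the function and header names here are illustrative, not MarkedDown's actual code):

```python
def build_auth_headers(api_key: str) -> dict:
    """Build per-request auth headers from a caller-supplied key.

    The key is used only for this one request and never persisted,
    so provider charges accrue directly to the caller's account.
    """
    if not api_key:
        raise ValueError("an API key must be supplied with every request")
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

headers = build_auth_headers("sk-example")
```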

Anthropic

Claude Haiku 4.5
  claude-haiku-4-5-20251001
  79% avg | In $0.800/1M | Out $4.00/1M | 45 files

Claude Sonnet 4.5
  claude-sonnet-4-5-20250918
  Untested | In $3.00/1M | Out $15.00/1M

Claude Opus 4.6
  claude-opus-4-6
  Untested | In $15.00/1M | Out $75.00/1M

OpenAI

GPT-4o mini
  gpt-4o-mini
  92% avg | In $0.150/1M | Out $0.600/1M | 43 files

GPT-4o
  gpt-4o
  Untested | In $5.00/1M | Out $15.00/1M

OpenRouter

Gemma 4 27B
  google/gemma-4-27b-it
  Untested | In $0.100/1M | Out $0.220/1M
  Endpoint https://openrouter.ai/api

Qwen3 235B
  qwen/qwen3-235b-a22b
  Untested | In $0.220/1M | Out $0.880/1M
  Endpoint https://openrouter.ai/api

Z.AI

GLM-5.1
  glm-5.1
  Untested | In $1.75/1M | Out $5.50/1M
  Endpoint https://api.z.ai/api/paas/v4

MiniMax

MiniMax M2.7
  MiniMax-M2.7
  Untested | In $0.300/1M | Out $1.20/1M
  Endpoint https://api.minimax.io

Venice

GLM-5.1 (Venice)
  zai-org-glm-5-1
  Untested | In $1.75/1M | Out $5.50/1M
  Endpoint https://api.venice.ai/api/v1
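The Endpoint fields above are base URLs. Providers like these commonly expose an OpenAI-style chat-completions route under the base URL, but the exact path is an assumption here; check each provider's docs. A sketch of assembling such a request:

```python
import json

def build_score_request(endpoint: str, model_id: str, api_key: str, prompt: str):
    """Assemble an OpenAI-style chat-completions request for a custom base URL.

    Assumes the provider serves a /chat/completions route under the listed
    endpoint; this is an assumption, not something the model list guarantees.
    """
    url = endpoint.rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model_id,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

url, headers, body = build_score_request(
    "https://api.venice.ai/api/v1", "zai-org-glm-5-1", "sk-example", "Score this file."
)
```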

Pricing shown per 1M tokens. Actual charges are billed directly to the API key you provide; MarkedDown never stores keys or proxies billing. Scores are averaged across all cached test runs (sanity, Tier 1, and Tier 2 suites).
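Per-1M pricing converts to a per-request cost with simple arithmetic. A quick sketch using the GPT-4o mini rates from the table above ($0.150 in, $0.600 out per 1M tokens); the token counts are made-up examples:

```python
def estimate_cost_usd(in_tokens: int, out_tokens: int,
                      in_price_per_1m: float, out_price_per_1m: float) -> float:
    """Pass-through cost of one request at per-1M-token rates."""
    return (in_tokens * in_price_per_1m + out_tokens * out_price_per_1m) / 1_000_000

# A 10,000-token instruction file scored with a 2,000-token response:
cost = estimate_cost_usd(10_000, 2_000, 0.150, 0.600)
# 0.0015 + 0.0012 = $0.0027
```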