Learn how to integrate the OpenLLM API into your applications.
Get started with OpenLLM API in minutes. Our API is fully compatible with OpenAI's interface.
1. Sign in and generate your API key from the settings page.
2. Install the OpenAI SDK or use our compatible endpoints:

   ```shell
   pip install openai
   ```

3. Start making requests with your preferred model.
All API requests require authentication using your API key.
Include your API key in the Authorization header:
```
Authorization: Bearer YOUR_API_KEY
```

Keep your API keys secure and never expose them in client-side code.
Generate conversational responses using various AI models.
```
POST https://api.openllm.dev/v1/chat/completions
```

| Parameter | Description |
|---|---|
| `model` | ID of the model to use |
| `messages` | Array of message objects |
| `temperature` | Sampling temperature (0-2) |
| `max_tokens` | Maximum tokens to generate |
| `stream` | Enable streaming responses |

```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.openllm.dev/v1',
  apiKey: process.env.OPENLLM_API_KEY,
});

const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [
    { role: 'user', content: 'Hello!' }
  ],
});

console.log(response.choices[0].message.content);
```

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.openllm.dev/v1",
    api_key="YOUR_API_KEY"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
```

Access hundreds of AI models through a single API.
```
GET https://api.openllm.dev/v1/models
```

- Latest and most capable models from major providers
- Optimized for code generation and technical tasks
- Advanced reasoning and complex problem-solving
- Support for images, audio, and video inputs
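Since the API is OpenAI-compatible, the models endpoint presumably returns the OpenAI-style list shape. A sketch for extracting the available IDs (the response shape is an assumption based on that compatibility, not confirmed output):

```python
def model_ids(models_response: dict) -> list[str]:
    # Assumes the OpenAI-compatible list shape:
    # {"object": "list", "data": [{"id": "..."}, ...]}
    return sorted(item["id"] for item in models_response.get("data", []))

# Illustrative payload, not real API output:
sample = {"object": "list", "data": [{"id": "gpt-4"}, {"id": "gpt-3.5-turbo"}]}
```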
Stream responses in real-time for better user experience.
```javascript
const stream = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
```

Understand and handle API errors effectively.
| Code | Meaning |
|---|---|
| 401 | Unauthorized - Invalid API key |
| 429 | Too Many Requests - Rate limit exceeded |
| 500 | Internal Server Error - Service error |
| 503 | Service Unavailable - Temporary outage |

Transparent pricing based on actual usage.
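Of these codes, 429, 500, and 503 are transient while 401 is not, so a common pattern is to retry only the transient codes with exponential backoff. A sketch (the helper and its `(status, body)` call convention are illustrative, not part of any SDK):

```python
import random
import time

RETRYABLE = {429, 500, 503}  # transient errors; worth retrying

def with_retries(call, max_attempts: int = 4, base_delay: float = 0.5):
    """Call `call()` -> (status, body), retrying transient failures.

    A 401 (bad key) is returned immediately: retrying will not fix it.
    """
    for attempt in range(max_attempts):
        status, body = call()
        if status not in RETRYABLE:
            return status, body
        if attempt < max_attempts - 1:
            # Exponential backoff with a little jitter to avoid
            # synchronized retries from many clients.
            time.sleep(base_delay * (2 ** attempt) + random.random() * base_delay * 0.1)
    return status, body
```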
| Model | Input Price (per 1M tokens) | Output Price (per 1M tokens) |
|---|---|---|
| GPT-4 | $5.00 | $15.00 |
| GPT-3.5 Turbo | $0.50 | $1.50 |
| Claude 3 Opus | $15.00 | $75.00 |
Pay-as-you-go pricing with no subscription required.
Track your usage and costs in real-time from the dashboard.
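The per-million-token prices above make cost estimation straightforward. A sketch using the table's numbers (the model keys are illustrative slugs, not confirmed API identifiers):

```python
# USD per 1M tokens, from the pricing table above: (input, output).
PRICES = {
    "gpt-4": (5.00, 15.00),
    "gpt-3.5-turbo": (0.50, 1.50),
    "claude-3-opus": (15.00, 75.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from its token counts."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# A GPT-4 call with 1,000 input and 500 output tokens:
# 1000 * 5.00/1M + 500 * 15.00/1M = 0.005 + 0.0075 = 0.0125 USD
```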
Official and community-maintained SDKs for popular languages.
Use the official OpenAI Python library:

```shell
pip install openai
```

Use the official OpenAI Node.js library:

```shell
npm install openai
```

API usage limits to ensure fair access and service stability.
| Tier | Requests | Tokens |
|---|---|---|
| Free | 100 req/day | 100K tokens/day |
| Pro | 10,000 req/day | 10M tokens/day |
Rate limit information is included in response headers:
```
X-RateLimit-Limit: 10000
X-RateLimit-Remaining: 9999
X-RateLimit-Reset: 1640995200
```

Join our Discord community for help and discussions.
Contact our team at support@openllm.dev
Check real-time API status and uptime
Stay updated with latest features and improvements