Micro AI Service Platform (v1.0.0)

Documentation

Comprehensive guides and API documentation for the Micro AI platform

Migrating to Micro AI
A complete guide for software developers migrating their applications to local and online LLMs through Micro AI

Quick Setup

Base URL

https://microai.staging.sirenanalytics.com/llm_router/v1

Replace OpenAI's base URL with your Micro AI endpoint

Authentication

Bearer your-microai-key

Use Micro AI generated keys (not OpenAI keys)
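If you prefer not to touch client code at all, the base URL and key can often be supplied via environment variables instead. This is a hedged sketch: recent OpenAI SDKs (Python and Node) read these variables, but older versions may require the explicit constructor arguments shown in the Migration Examples.

```shell
# Assumption: your SDK version reads OPENAI_BASE_URL / OPENAI_API_KEY;
# verify against your SDK's docs before relying on this.
export OPENAI_BASE_URL="https://microai.staging.sirenanalytics.com/llm_router/v1"
export OPENAI_API_KEY="microai-key-xxx"
```

With these set, `OpenAI()` can be constructed with no arguments and will route through Micro AI.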

Migration Examples

Python: Before → After

❌ Before (OpenAI)

from openai import OpenAI

client = OpenAI(
    api_key="sk-proj-xxx"  # OpenAI key
)

response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Hello"}]
)

✅ After (Micro AI)

from openai import OpenAI

client = OpenAI(
    base_url="https://microai.staging.sirenanalytics.com/llm_router/v1",
    api_key="microai-key-xxx",  # Micro AI key
    timeout=60  # Increased timeout
)

response = client.chat.completions.create(
    model="openai/gpt-4.1",  # Provider-prefixed model name
    messages=[{"role": "user", "content": "Hello"}]
)

JavaScript: Before → After

❌ Before (OpenAI)

import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'sk-proj-xxx'  // OpenAI key
});

const response = await openai.chat.completions.create({
  model: 'gpt-4.1',
  messages: [{ role: 'user', content: 'Hello' }]
});

✅ After (Micro AI)

import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'https://microai.staging.sirenanalytics.com/llm_router/v1',
  apiKey: 'microai-key-xxx',  // Micro AI key
  timeout: 60000  // Increased timeout (60 s, in milliseconds)
});

const response = await openai.chat.completions.create({
  model: 'openai/gpt-4.1',  // Provider-prefixed model name
  messages: [{ role: 'user', content: 'Hello' }]
});

cURL: Raw HTTP Request

curl -X POST "https://microai.staging.sirenanalytics.com/llm_router/v1/chat/completions" \
  -H "Authorization: Bearer microai-key-xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4.1",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 150
  }'
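The same raw request can be issued from standard-library Python without the OpenAI SDK. This is a sketch only; the `build_chat_request` helper is hypothetical, and the endpoint and key are the placeholders from this page.

```python
import json
import urllib.request

BASE_URL = "https://microai.staging.sirenanalytics.com/llm_router/v1"

def build_chat_request(api_key, model, content, max_tokens=150):
    """Build a chat-completions POST mirroring the cURL example above."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": content}],
        "max_tokens": max_tokens,
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_chat_request("microai-key-xxx", "openai/gpt-4.1", "Hello")
    # urllib.request.urlopen(req) would send it; omitted to avoid a live call
    print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` returns the same JSON body the cURL call does.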

Available Models

🏠 Local Models

Check available local models:

GET https://microai.staging.sirenanalytics.com/services

☁️ Online Models

All OpenAI, Cohere, and OpenRouter models:

GET https://microai.staging.sirenanalytics.com/llm_router/models

Additional Services

Text Tools

Chunking, tokenization, and NLP utilities

https://microai.staging.sirenanalytics.com/text_tools

Translator

Machine translation services

https://microai.staging.sirenanalytics.com/translator

LangFuse Logs

Monitor your requests and performance

https://microai.staging.sirenanalytics.com/langfuse

Best Practices & Lessons Learned
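The migration examples above raise the client timeout; transient router errors are worth handling the same way. This is a generic retry-with-backoff sketch, not a Micro AI API; `with_retries` is a hypothetical helper you would wrap around any client call.

```python
import time

def with_retries(call, attempts=3, base_delay=1.0):
    """Retry a zero-argument callable with exponential backoff.

    Re-raises the last exception once all attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Usage: `with_retries(lambda: client.chat.completions.create(...))`. Note that the OpenAI SDKs also expose their own built-in retry settings, which may be preferable when available.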

Additional Resources

LiteLLM Documentation

Complete API reference for all available endpoints

View LiteLLM Docs

Contact Support

Need help with API keys or LangFuse access?

elie.r@sirenanalytics.com