Microservices
Overview of all microservices in the Micro AI platform. Each service provides a specific piece of functionality within the overall architecture.
Core Services
Nginx Gateway (Core, active)
Front-facing gateway that manages and routes requests to all services.
No external documentation.
LiteLLM Router (Core, active)
LLM-specific router that handles model management and request routing for AI models.
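LiteLLM's proxy speaks the OpenAI API, so any OpenAI-compatible client can talk to the router. A minimal sketch follows, assuming a proxy reachable at localhost:4000 with a placeholder API key; the model name stands in for whatever your deployment registers with the router.

```python
# Minimal sketch of calling the LiteLLM router through its OpenAI-compatible
# API. The base URL, port, API key, and model name are assumptions for
# illustration; substitute the values your deployment actually uses.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000/v1",  # assumed LiteLLM proxy address
    api_key="sk-example",                 # assumed key configured on the proxy
)

# The router resolves the model name to a configured backend deployment.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # any model name registered with the router
    messages=[{"role": "user", "content": "Hello from the Micro AI platform"}],
)
print(response.choices[0].message.content)
```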
Service Manager (Core, active)
Orchestrates and manages the service lifecycle and deployment.
AI/ML Services
Translator (AI/ML, active)
Translation service integrated with LiteLLM for machine translation tasks.
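The Translator's API is not documented in this overview, so the sketch below is purely hypothetical: the /translate path, port, and payload fields are invented for illustration only.

```python
# Hypothetical sketch of calling the Translator service over HTTP. The path,
# port, and payload fields are assumptions; the actual API is not documented
# in this overview.
import requests

payload = {
    "text": "Guten Morgen",
    "source_lang": "de",
    "target_lang": "en",
}
resp = requests.post("http://localhost:8080/translate", json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())
```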
Text Tools (AI/ML, active)
Text processing utilities, including chunking, tokenization, and other NLP tasks.
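The service's internals are not documented here; the sketch below only illustrates the kind of chunking such a service might perform, using the common fixed-size-window-with-overlap approach.

```python
# Illustrative sketch of one common chunking strategy: fixed-size word windows
# with overlap between consecutive chunks. This is not the service's actual
# implementation, which is not documented in this overview.
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 20) -> list[str]:
    words = text.split()
    step = chunk_size - overlap
    return [
        " ".join(words[i:i + chunk_size])
        for i in range(0, max(len(words) - overlap, 1), step)
    ]

chunks = chunk_text("word " * 500, chunk_size=200, overlap=20)
print(len(chunks), "chunks")  # 3 overlapping chunks for 500 words
```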
Adapters (AI/ML, active)
Service for model-to-API mapping and configuration management.
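The Adapters configuration schema is not documented in this overview; the sketch below shows one plausible shape for a model-to-API mapping, with every name and field invented for illustration.

```python
# Hypothetical sketch of a model-to-API mapping such as the Adapters service
# might manage. All names, endpoints, and fields here are assumptions; the
# real configuration schema is not documented in this overview.
MODEL_ADAPTERS = {
    "gpt-4o-mini": {
        "provider": "openai",
        "endpoint": "https://api.openai.com/v1",
        "api_key_env": "OPENAI_API_KEY",
    },
    "llama-3-8b": {
        "provider": "vllm",
        "endpoint": "http://vllm:8000/v1",
        "api_key_env": None,
    },
}

def resolve_adapter(model: str) -> dict:
    """Look up the backend API configuration for a requested model name."""
    try:
        return MODEL_ADAPTERS[model]
    except KeyError:
        raise ValueError(f"No adapter configured for model {model!r}")

print(resolve_adapter("llama-3-8b")["endpoint"])
```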
Monitoring Services
LangFuse (Monitoring, active)
Logging and observability platform for monitoring LLM applications and performance.
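LiteLLM ships a built-in LangFuse callback, which is one plausible way these two services connect. The sketch below shows that integration at the Python SDK level, assuming LangFuse and provider credentials in the environment; the platform itself more likely wires this up in the proxy's configuration.

```python
# Minimal sketch of emitting traces to LangFuse via LiteLLM's built-in
# callback. Assumes LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, LANGFUSE_HOST,
# and the relevant provider key (e.g. OPENAI_API_KEY) are set in the
# environment; the model name is an assumption for illustration.
import litellm

litellm.success_callback = ["langfuse"]  # log successful calls to LangFuse

response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```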
Grafana (Monitoring, active)
Monitoring and visualization platform for metrics and system performance.
Checkmate (Monitoring, active)
Uptime monitoring service for tracking service availability and health status.
Infrastructure Services
Redis (Infrastructure, active)
In-memory cache for request caching and temporary storage.
No external documentation.
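A minimal caching sketch with the redis-py client follows, assuming Redis on its default port with no authentication; the platform's actual connection settings and key scheme are not documented here.

```python
# Minimal sketch of request caching with Redis: hash the request payload into
# a cache key, return a hit if present, otherwise compute and store with a TTL.
# Connection settings are assumptions for illustration.
import hashlib
import json

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def expensive_call(payload: dict) -> str:
    """Placeholder for the real work (e.g. a model call)."""
    return f"response for {payload}"

def cached_response(request_payload: dict, ttl_seconds: int = 300) -> str:
    """Return a cached response for a request, computing it on a miss."""
    key = "cache:" + hashlib.sha256(
        json.dumps(request_payload, sort_keys=True).encode()
    ).hexdigest()
    hit = r.get(key)
    if hit is not None:
        return hit
    result = expensive_call(request_payload)
    r.setex(key, ttl_seconds, result)  # expire the entry after the TTL
    return result

print(cached_response({"q": "hello"}))
```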
PostgreSQL (Infrastructure, active)
Primary database for service data persistence, with support for multiple databases.
No external documentation.
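A minimal connection sketch with psycopg2 follows; the host, database name, and credentials are assumptions, since the actual databases behind the multi-database setup are not named in this overview.

```python
# Minimal sketch of connecting to one of the platform's PostgreSQL databases
# with psycopg2. Host, database name, user, and password are assumptions for
# illustration; use the credentials from your deployment.
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    dbname="litellm",    # assumed: one database per service
    user="microai",
    password="example",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])
conn.close()
```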
Architecture Overview
This platform follows a microservices architecture where each component is containerized and can be scaled independently. Services communicate through well-defined APIs, with NGINX acting as the primary gateway and LiteLLM handling specialized routing for AI model requests.
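To make the flow concrete, the hypothetical sketch below sends two requests through a single gateway entry point, one bound for the LiteLLM router and one for the Translator. Every path, port, and key here is invented for illustration; the real routing table lives in the Nginx configuration, which is not documented in this overview.

```python
# Hypothetical sketch of the request flow: a single gateway entry point fans
# requests out to services by path prefix. All paths, ports, and credentials
# below are assumptions for illustration.
import requests

GATEWAY = "http://localhost:80"

# AI traffic is forwarded to the LiteLLM router behind the gateway...
llm = requests.post(
    f"{GATEWAY}/llm/v1/chat/completions",
    headers={"Authorization": "Bearer sk-example"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "hi"}],
    },
)

# ...while other services are reached under their own prefixes.
translation = requests.post(
    f"{GATEWAY}/translator/translate",
    json={"text": "hola", "source_lang": "es", "target_lang": "en"},
)

print(llm.status_code, translation.status_code)
```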