Introducing MattinAI

A comprehensive AI toolbox that powers intelligent applications with advanced LLM integration, semantic search, and automated AI agents.

[Screenshot: MattinAI Core Tools Dashboard, showing the login interface with LLM integration options and user authentication]

LLM Integration & Management

Seamlessly integrate with OpenAI GPT, Anthropic Claude, Azure OpenAI, Mistral AI, Ollama, and more. Unified interface for all major LLM providers.
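A unified multi-provider interface can be sketched as a small registry behind a common base class. This is an illustrative sketch only, not MattinAI's actual API; the names `LLMProvider`, `EchoProvider`, and `get_provider` are assumptions, and the echo provider stands in for a real backend so the example runs without API keys.

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Common interface every backend (OpenAI, Anthropic, Ollama, ...) would implement."""

    @abstractmethod
    def complete(self, prompt: str, **options) -> str: ...


class EchoProvider(LLMProvider):
    """Stand-in provider so the example runs offline, without credentials."""

    def complete(self, prompt: str, **options) -> str:
        return f"echo: {prompt}"


# Registry mapping provider names to implementations; real entries
# would wrap each vendor SDK behind the same complete() signature.
PROVIDERS: dict[str, type[LLMProvider]] = {"echo": EchoProvider}


def get_provider(name: str) -> LLMProvider:
    """Look up a provider by name, so calling code never depends on a vendor SDK."""
    return PROVIDERS[name]()
```

Swapping providers then becomes a one-line configuration change rather than a code change.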

RAG & Semantic Search

Powerful Retrieval-Augmented Generation (RAG) systems with semantic search capabilities and vector database management using PostgreSQL + pgvector/Qdrant.
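At its core, semantic retrieval ranks stored chunks by vector similarity to the query embedding. The pure-Python sketch below shows the ranking logic; with pgvector the same ranking is a single SQL query (shown in the comment). The function names here are illustrative, not MattinAI's API.

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def top_k(query_vec: list[float], docs: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the ids of the k chunks most similar to the query.

    With pgvector this ranking is one SQL statement, e.g.:
        SELECT id FROM chunks ORDER BY embedding <=> %(query)s LIMIT %(k)s;
    (<=> is pgvector's cosine-distance operator.)
    """
    ranked = sorted(docs.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]
```

The retrieved chunks are then injected into the LLM prompt, which is the "augmented generation" half of RAG.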

AI Agents & Automation

Build intelligent AI agents with our comprehensive automation framework. Create sophisticated multi-agent architectures and task automation.
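One common multi-agent shape is an orchestrator that routes each task to a specialized sub-agent. The sketch below illustrates that pattern in miniature; `Agent` and `Orchestrator` are hypothetical names, not classes from MattinAI.

```python
from typing import Callable


class Agent:
    """A sub-agent with a single skill (here just a callable, for brevity)."""

    def __init__(self, name: str, skill: Callable[[str], str]):
        self.name = name
        self.skill = skill

    def handle(self, task: str) -> str:
        return self.skill(task)


class Orchestrator:
    """Routes each incoming task to the sub-agent registered for its kind."""

    def __init__(self) -> None:
        self.agents: dict[str, Agent] = {}

    def register(self, kind: str, agent: Agent) -> None:
        self.agents[kind] = agent

    def dispatch(self, kind: str, task: str) -> str:
        return self.agents[kind].handle(task)
```

In a real system each sub-agent would wrap its own LLM calls, tools, and memory; the orchestrator only decides who handles what.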

Modular Architecture

Extensible and modular design with FastAPI/Python backend and React/TypeScript frontend. Easy to customize and extend for your needs.

Technical Requirements

Backend: Python 3.11+

Frontend: Node.js 18+

Database: PostgreSQL + pgvector

Framework: FastAPI + React

License: AGPL 3.0 (Open Source) + Commercial License

Agent Architecture

Technical architecture of MattinAI agents, designed for flexibility, scalability, and observability.

[Diagram: MattinAI Agent Architecture, showing three main components: Configuration (system instructions, memory settings), Runtime (orchestration engine, LLM integration), and Observability (execution tracing, performance metrics)]

Configuration

Defines the agent's behavior and capabilities through modular components.

  • System Instructions: Prompts and personality settings
  • Memory Configuration: Strategy, context window, and persistence
  • Tool Registry: MCP connectors and agent capabilities
  • Data Schemas: JSON/Pydantic for data exchange
  • RAG Configuration: Data sources and retrieval strategy
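The components above can be grouped into a single typed configuration object. The document mentions JSON/Pydantic schemas; this sketch mirrors that shape with stdlib dataclasses so it runs without dependencies. All field names and defaults here are illustrative assumptions, not MattinAI's schema.

```python
from dataclasses import dataclass, field


@dataclass
class MemoryConfig:
    """Memory strategy, context window, and persistence (assumed fields)."""
    strategy: str = "buffer"       # e.g. "buffer" or "summary"
    context_window: int = 8192     # tokens kept in short-term memory
    persistent: bool = False       # whether history survives the session


@dataclass
class AgentConfig:
    """One agent definition: prompt, tools, memory, and RAG sources."""
    system_prompt: str
    tools: list[str] = field(default_factory=list)        # names from the tool registry
    memory: MemoryConfig = field(default_factory=MemoryConfig)
    rag_sources: list[str] = field(default_factory=list)  # data sources for retrieval
```

With Pydantic instead of dataclasses, the same structure would also validate and (de)serialize agent definitions from JSON.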

Runtime & Orchestration

Core execution engine powered by the ReAct pattern for intelligent decision-making.

  • API Gateway & Security: Authentication and rate limiting
  • Memory & State Management: Short-term and session history
  • Orchestration Engine: ReAct loop for reasoning and action
  • Tool Execution: External API and sub-agent calls
  • LLM Integration: Multi-provider support (OpenAI, Anthropic, etc.)

Observability

Comprehensive monitoring and tracking for debugging and optimization.

  • Execution Tracing: Step-by-step agent workflow tracking
  • Log Registry: System errors and events logging
  • Performance Metrics: Latency, token usage, and cost tracking
  • User Feedback: Response quality ratings and insights
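Execution tracing of the kind listed above is often implemented as a decorator that records each step's name, latency, and output size into a trace. This is a generic sketch of the technique, not MattinAI's tracing API; the `traced` name and the recorded fields are assumptions.

```python
import functools
import time


def traced(trace: list):
    """Decorator factory: append a record per call to `trace` for later inspection."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            trace.append({
                "step": fn.__name__,                               # which step ran
                "latency_s": round(time.perf_counter() - start, 6),  # how long it took
                "output_chars": len(str(result)),                  # rough output size
            })
            return result
        return inner
    return wrap
```

A production system would ship these records to a log registry and aggregate them into latency, token-usage, and cost metrics.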