Reactive AI Agent Framework

A reactive AI agent framework for creating agents that use tools to perform tasks, with support for multiple LLM providers and MCP servers.

🚀 Reactive AI Agent Framework

CI PyPI version Python License: MIT Coverage Code style: black Downloads GitHub stars

Docs Discord

An Elegant, Powerful, and Flexible Framework for Building Reactive AI Agents

๐Ÿ Quick Start โ€ข ๐Ÿ“– Documentation โ€ข ๐ŸŽฏ Features โ€ข ๐Ÿ› ๏ธ Installation โ€ข ๐Ÿ’ก Examples โ€ข ๐Ÿค Contributing


🌟 What is Reactive Agents?

Reactive Agents is a cutting-edge AI agent framework that makes building intelligent, autonomous agents as simple as Laravel makes web development. With its elegant builder pattern, comprehensive tooling ecosystem, and production-ready architecture, you can create sophisticated AI agents that think, plan, execute, and adapt.

🔎 Definition: "Reactive" (adj.)

reactive /ˈriːæk.tɪv/

  1. Promptly responsive to change or external stimuli; able to sense, interpret, and act in real time.
  2. Architected for rapid feedback loops, context-aware adaptation, and low-latency decision-making.

🚀 Why "Reactive"?

Reactive agents turn sensing into instant value: they detect shifts, call the right tools, and adjust plans on the fly. That means faster answers, fewer failures, better user experiences, and systems that scale gracefully under real-world uncertainty. In short, reactive means faster, smarter, more dependable AI that delivers outcomes now.

🎯 Perfect For

  • 🔬 Research Automation - Intelligent web research and data analysis
  • 📊 Business Intelligence - Automated reporting and decision support
  • 🛠️ DevOps & Infrastructure - Intelligent monitoring and automation
  • 💬 Customer Support - Smart assistants with tool integration
  • 📈 Data Processing - Complex workflows with multiple data sources
  • 🎮 Interactive Applications - AI-powered user experiences
  • 🤖 Multi-Agent Systems - Orchestrated AI teams solving complex problems
  • ⚙️ Automation & Scripting - Intelligent task automation

✨ Key Features

🧠 Multiple Reasoning Strategies

Composable strategies with a component-based architecture: strategies are modular and pluggable, built from discrete components (planners, executors, reflectors, and goal evaluators) that you can mix and match to craft custom reasoning flows.

  • Modular components: planners, executors, reflectors, and evaluators are independent and swappable.
  • Pluggable strategies: implement BaseReasoningStrategy and register with StrategyManager to add new strategies.
  • Testable & reusable: small, well-typed components make unit testing and reuse simple.
  • Designed for composition: use the Adaptive strategy or compose multiple strategies to handle complex, dynamic tasks.

Pre-built strategies include:

  • Reactive: Fast, direct problem-solving
  • Plan-Execute-Reflect: Structured approach for complex tasks
  • Reflect-Decide-Act: Adaptive strategy for dynamic environments
  • Adaptive: AI-driven strategy selection based on task complexity
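The planner/executor/reflector composition described above can be sketched in plain Python. The class and function names here are illustrative, not the framework's actual API:

```python
# Conceptual sketch of component-based strategy composition; names are
# illustrative placeholders, not reactive_agents classes.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ComposedStrategy:
    planner: Callable[[str], List[str]]    # task -> ordered steps
    executor: Callable[[str], str]         # step -> result
    reflector: Callable[[List[str]], bool] # results so far -> "good enough?"
    history: List[str] = field(default_factory=list)

    def run(self, task: str) -> List[str]:
        # Plan, execute each step, and stop early when reflection is satisfied
        for step in self.planner(task):
            self.history.append(self.executor(step))
            if self.reflector(self.history):
                break
        return self.history

strategy = ComposedStrategy(
    planner=lambda task: [f"search: {task}", f"summarize: {task}"],
    executor=lambda step: f"done({step})",
    reflector=lambda results: len(results) >= 2,
)
print(strategy.run("quantum computing news"))
```

Because each component is just a callable, any one of them can be swapped out independently, which is the essence of the composability claim above.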

🔧 Comprehensive Tool Ecosystem

  • Custom Python Tools with @tool() decorator
  • Model Context Protocol (MCP) integration
  • Pre-built Tools: Web search, file operations, databases, and more
  • Tool Composition and validation system

๐Ÿ—๏ธ Production-Ready Architecture

  • Event-Driven Design with real-time monitoring
  • Robust Error Recovery with intelligent retry mechanisms
  • Memory Management with vector storage and persistence
  • Performance Monitoring with detailed metrics and scoring
  • Context Optimization with adaptive pruning strategies

🔄 Advanced Workflow Management

  • Multi-Agent Orchestration with dependency management
  • A2A Communication (Agent-to-Agent) protocols
  • Parallel Execution and synchronization
  • Workflow Templates for common patterns

🎛️ Developer Experience

  • Fluent Builder API with sensible defaults
  • Type Safety with Pydantic models throughout
  • Comprehensive Logging with structured events
  • 🚧 Plugin System for extensibility
  • Hot-reloading for development workflows

๐Ÿ Quick Start

Installation

pip install reactive-agents

Your First Agent (30 seconds)

import asyncio
from reactive_agents import ReactiveAgentBuilder, ReasoningStrategies

async def main():
    # Create an intelligent research agent
    agent = await (
        ReactiveAgentBuilder()
        .with_name("Research Assistant")
        .with_model("ollama:llama3")  # or "openai:gpt-4", "anthropic:claude-3-sonnet"
        .with_tools(["brave-search", "time"])  # Auto-detects MCP tools vs custom tools
        .with_instructions("Research thoroughly and provide detailed analysis")
        .with_reasoning_strategy(ReasoningStrategies.REACTIVE)
        .build()
    )

    async with agent:
        result = await agent.run(
            "What are the latest developments in quantum computing this week?"
        )
        print(result.final_answer)
        print(f"Status: {result.status_message}")

asyncio.run(main())

That's it! You now have a fully functional AI agent that can search the web, analyze information, and provide comprehensive answers.

🎯 Core Concepts

🤖 Agent Architecture

Reactive Agents uses a component-based architecture where each agent is composed of specialized, swappable components:

# The agent automatically manages these components:
ExecutionEngine  # Coordinates task execution and strategy selection
ReasoningEngine  # Handles different reasoning strategies
ToolManager     # Manages tool registration and execution
MemoryManager   # Handles persistent storage and retrieval
EventBus        # Coordinates real-time event communication
MetricsManager  # Tracks performance and provides insights
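The EventBus in the list above coordinates these components through publish/subscribe. A minimal sketch of that pattern (not the framework's actual EventBus API) looks like this:

```python
# Minimal publish/subscribe bus illustrating the event-driven coordination
# pattern; the real EventBus exposes richer, typed events than this sketch.
from collections import defaultdict
from typing import Callable, DefaultDict, List

class SimpleEventBus:
    def __init__(self) -> None:
        self._handlers: DefaultDict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        # Register a handler for one event type
        self._handlers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        # Fan the event out to every registered handler
        for handler in self._handlers[event_type]:
            handler({"type": event_type, **payload})

bus = SimpleEventBus()
bus.subscribe("tool_called", lambda e: print(f"tool used: {e['tool_name']}"))
bus.publish("tool_called", {"tool_name": "brave-search"})
```

Components stay decoupled because publishers never reference subscribers directly; they only share event names and payload shapes.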

🧭 Reasoning Strategies

Choose the right strategy for your task:

from reactive_agents import ReactiveAgentBuilder, ReasoningStrategies

# Reactive: Fast, direct execution
agent = await ReactiveAgentBuilder().with_reasoning_strategy(ReasoningStrategies.REACTIVE).build()

# Plan-Execute-Reflect: Structured approach
agent = await ReactiveAgentBuilder().with_reasoning_strategy(ReasoningStrategies.PLAN_EXECUTE_REFLECT).build()

# Adaptive: AI selects the best strategy
agent = await ReactiveAgentBuilder().with_reasoning_strategy(ReasoningStrategies.ADAPTIVE).build()  # Default

🛠️ Tool Integration

Multiple ways to add capabilities to your agents:

from reactive_agents import tool

# 1. Custom Python functions with @tool decorator
@tool()
async def get_weather(city: str) -> str:
    """Get weather information for a city."""
    return f"Weather in {city}: Sunny, 72°F"

# 2. Mixed tools - auto-detection!
# Strings = MCP servers, Functions = custom tools
.with_tools([get_weather, "brave-search", "time", "filesystem"])

# 3. Or use explicit methods
.with_mcp_tools(["brave-search", "sqlite"])
.with_custom_tools([get_weather])
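The auto-detection behind .with_tools() can be understood as a simple type dispatch: callables become custom tools, strings become MCP server names. This is an illustrative sketch, not the framework's internal implementation:

```python
# Hypothetical illustration of tool auto-detection: callables are treated as
# custom Python tools, strings as MCP server names.
from typing import Callable, List, Tuple, Union

ToolSpec = Union[str, Callable]

def split_tools(tools: List[ToolSpec]) -> Tuple[List[Callable], List[str]]:
    """Separate a mixed tool list into custom callables and MCP server names."""
    custom = [t for t in tools if callable(t)]
    mcp = [t for t in tools if isinstance(t, str)]
    return custom, mcp

async def get_weather(city: str) -> str:
    return f"Weather in {city}: Sunny, 72°F"

custom, mcp = split_tools([get_weather, "brave-search", "time"])
print([f.__name__ for f in custom], mcp)
```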

💡 Examples

๐Ÿ” Smart Research Agent

from reactive_agents import ReactiveAgentBuilder, tool, ReasoningStrategies

@tool()
async def analyze_trends(data: str) -> str:
    """Analyze data trends and patterns."""
    # Your analysis logic here
    return f"Trend analysis: {data}"

async def create_research_agent():
    return await (
        ReactiveAgentBuilder()
        .with_name("Research Pro")
        .with_model("openai:gpt-4")
        .with_reasoning_strategy(ReasoningStrategies.PLAN_EXECUTE_REFLECT)
        .with_tools([analyze_trends, "brave-search", "time", "filesystem"])
        .with_instructions("""
            You are a professional research analyst. Always:
            1. Search for the most recent information
            2. Cross-reference multiple sources
            3. Provide data-driven insights
            4. Save important findings to files
        """)
        .with_max_iterations(15)
        .build()
    )

📊 Business Intelligence Agent

async def create_bi_agent():
    return await (
        ReactiveAgentBuilder()
        .with_name("BI Analyst")
        .with_model("anthropic:claude-3-sonnet")
        .with_tools(["sqlite", "filesystem", "brave-search"])
        .with_vector_memory("bi_agent_memory")  # Enable persistent vector memory
        .with_instructions("""
            You are a business intelligence analyst. Create comprehensive
            reports with data visualizations and actionable insights.
        """)
        .with_response_format("""
            ## Executive Summary
            [Key findings and recommendations]

            ## Data Analysis
            [Detailed analysis with charts/tables]

            ## Recommendations
            [Specific, actionable next steps]
        """)
        .build()
    )

🔄 Multi-Agent Workflow

from reactive_agents.workflows import WorkflowOrchestrator

async def create_content_pipeline():
    orchestrator = WorkflowOrchestrator()

    # Research agent
    researcher = await (
        ReactiveAgentBuilder()
        .with_name("Content Researcher")
        .with_tools(["brave_web_search"])
        .build()
    )

    # Writing agent
    writer = await (
        ReactiveAgentBuilder()
        .with_name("Content Writer")
        .with_tools(["filesystem"])
        .build()
    )

    # Create workflow
    workflow = (
        orchestrator
        .add_agent("research", researcher)
        .add_agent("writing", writer)
        .add_dependency("writing", "research")  # Writer waits for researcher
        .build()
    )

    return workflow
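The add_dependency call above implies an execution order. A topological sort over the dependency graph (illustrative, not the orchestrator's actual scheduler) shows why "research" runs before "writing":

```python
# Resolve agent execution order from declared dependencies using the
# standard library; this mirrors what a dependency-aware scheduler must do.
from graphlib import TopologicalSorter

# add_dependency("writing", "research") means: writing depends on research
dependencies = {"writing": {"research"}, "research": set()}
order = list(TopologicalSorter(dependencies).static_order())
print(order)  # ['research', 'writing']
```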

🎛️ Event-Driven Monitoring

from reactive_agents.events import AgentStateEvent

async def create_monitored_agent():
    # Track performance in real-time
    metrics = {"tool_calls": 0, "errors": 0, "duration": 0}

    def on_tool_called(event):
        metrics["tool_calls"] += 1
        print(f"🔧 Tool used: {event['tool_name']}")

    def on_error(event):
        metrics["errors"] += 1
        print(f"❌ Error: {event['error_message']}")

    def on_completion(event):
        metrics["duration"] = event["total_duration"]
        print(f"✅ Completed in {metrics['duration']:.2f}s")
        print(f"📊 Final metrics: {metrics}")

    return await (
        ReactiveAgentBuilder()
        .with_name("Monitored Agent")
        .with_model("ollama:qwen2:7b")
        .on_tool_called(on_tool_called)
        .on_error_occurred(on_error)
        .on_session_ended(on_completion)
        .build()
    )

🛠️ Installation & Setup

Prerequisites

  • Python 3.10+
  • Poetry (recommended) or pip

Basic Installation

# Using pip
pip install reactive-agents

# Using Poetry
poetry add reactive-agents

Development Installation

# Clone the repository
git clone https://github.com/tylerjrbuell/reactive-agents
cd reactive-agents

# Install with Poetry
poetry install

# Run tests
poetry run pytest

Environment Configuration

Create a .env file:

# LLM Providers
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key
GROQ_API_KEY=your_groq_key
OLLAMA_HOST=http://localhost:11434

# MCP Tools
BRAVE_API_KEY=your_brave_search_key

# Optional: Custom MCP configuration
MCP_CONFIG_PATH=/path/to/custom/mcp_config.json
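At runtime, the values in the .env file above are read from the process environment. A small sketch of a provider-key lookup (the helper name is ours, not part of the framework; loading the file itself is typically done with a package such as python-dotenv):

```python
# Look up provider API keys from the environment variables listed above.
import os

def get_provider_key(provider: str) -> "str | None":
    """Return the API key for a provider, or None if it is not configured."""
    env_names = {
        "openai": "OPENAI_API_KEY",
        "anthropic": "ANTHROPIC_API_KEY",
        "groq": "GROQ_API_KEY",
        "brave": "BRAVE_API_KEY",
    }
    return os.environ.get(env_names[provider])

os.environ.setdefault("OPENAI_API_KEY", "sk-example")  # demo value only
print(get_provider_key("openai"))
```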

🎯 Advanced Features

🧠 Custom Reasoning Strategies

Implement your own reasoning approach:

from reactive_agents.strategies import BaseReasoningStrategy
# ReasoningContext and StrategyResult are assumed to live alongside
# BaseReasoningStrategy; adjust the import to match your installed version.
from reactive_agents.strategies import ReasoningContext, StrategyResult

class MyCustomStrategy(BaseReasoningStrategy):
    @property
    def name(self) -> str:
        return "my_custom_strategy"

    async def execute_iteration(self, task: str, context: ReasoningContext):
        # Your custom reasoning logic; build whatever payload your flow needs
        payload = {"task": task, "answer": "..."}
        return StrategyResult.success(payload)

# Register and use
ReactiveAgentBuilder().with_reasoning_strategy("my_custom_strategy")

🔧 Custom Tool Creation

Build sophisticated tools with validation:

from reactive_agents.tools import tool
from pydantic import BaseModel

class WeatherRequest(BaseModel):
    city: str
    units: str = "metric"

@tool("Get detailed weather information", validation_model=WeatherRequest)
async def advanced_weather(request: WeatherRequest) -> dict:
    # Sophisticated weather logic with API calls; fetch_weather_api is a
    # placeholder for your own async HTTP helper, not a framework function
    weather_data = await fetch_weather_api(request.city, request.units)
    return {
        "temperature": weather_data.temp,
        "conditions": weather_data.conditions,
        "forecast": weather_data.forecast
    }

📊 Performance Monitoring

Track and optimize agent performance:

async def monitor_performance():
    agent = await ReactiveAgentBuilder().with_name("Performance Agent").build()

    # Get real-time metrics
    session = agent.context.session

    print(f"Completion Score: {session.completion_score}")
    print(f"Tool Usage Score: {session.tool_usage_score}")
    print(f"Overall Score: {session.overall_score}")

    # Access detailed metrics
    metrics = agent.context.metrics_manager.get_metrics()
    print(f"Total Duration: {metrics['total_time']:.2f}s")
    print(f"Tool Calls: {metrics['tool_calls']}")
    print(f"Model Calls: {metrics['model_calls']}")

🔄 Plugin System 🚧

Extend the framework with plugins:

from reactive_agents.plugins import Plugin

class CustomAnalyticsPlugin(Plugin):
    def on_load(self, framework):
        # Initialize your plugin; AnalyticsClient is a placeholder for
        # your own analytics integration
        self.analytics_client = AnalyticsClient()

    def on_agent_created(self, agent):
        # Hook into agent lifecycle
        agent.on_completion(self.track_completion)

    async def track_completion(self, event):
        await self.analytics_client.track(event)

# Load plugin
framework.load_plugin(CustomAnalyticsPlugin())

📖 Documentation

📚 Comprehensive Guides

🔧 API Reference

💡 Examples & Tutorials


๐ŸŒ Model Provider Support

Reactive Agents works with all major LLM providers:

Provider  | Models                      | Features
OpenAI    | GPT-4o, GPT-4, GPT-3.5      | Function calling, streaming, vision
Anthropic | Claude 3.5 Sonnet, Claude 3 | Large context, tool use
Groq      | Llama 3, Mixtral            | Ultra-fast inference
Ollama    | Any local model             | Privacy, customization
Google    | Gemini Pro, Gemini Flash    | Multimodal capabilities

v0.1.0a7 Update: Google provider now uses the latest google-genai SDK (v1.5.0) with improved performance and zero deprecation warnings.

# Easy provider switching
.with_model("gpt-4o")                    # OpenAI
.with_model("claude-3-sonnet")           # Anthropic
.with_model("groq:llama3-70b")          # Groq
.with_model("ollama:qwen2:7b")          # Ollama
.with_model("google:gemini-pro")        # Google

🎯 Structured Outputs & Provider Architecture

🔄 Universal Structured Output System

Reactive Agents implements a unified structured output system using the Instructor Python Package across all model providers. This ensures consistent Pydantic model validation and type safety regardless of your LLM choice.

from pydantic import BaseModel
from typing import List
from reactive_agents import ReactiveAgentBuilder

class ResearchResult(BaseModel):
    summary: str
    key_findings: List[str]
    confidence_score: float
    sources: List[str]

# Works identically across ALL providers
agent = await (
    ReactiveAgentBuilder()
    .with_model("ollama:qwen2:7b")  # or any provider
    .build()
)

# Get validated, structured output
result: ResearchResult = await agent.get_structured_response(
    ResearchResult,
    "Research the latest AI developments"
)

print(result.summary)           # ✅ Type-safe string
print(result.confidence_score)  # ✅ Type-safe float
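At its core, this is Pydantic validation applied to the model's raw output: the JSON text is parsed and type-checked before your code ever sees it. A self-contained sketch (repeating the model definition so the snippet runs on its own):

```python
# Plain-Pydantic view of what "validated, structured output" means: raw model
# text either parses into a typed object or is rejected with a clear error.
from typing import List
from pydantic import BaseModel, ValidationError

class ResearchResult(BaseModel):
    summary: str
    key_findings: List[str]
    confidence_score: float
    sources: List[str]

raw = ('{"summary": "AI is advancing", "key_findings": ["agents"], '
       '"confidence_score": 0.9, "sources": ["example.com"]}')
result = ResearchResult.model_validate_json(raw)
print(result.confidence_score)

try:
    # Missing fields and wrong types are caught before reaching your code
    ResearchResult.model_validate_json('{"summary": 1}')
except ValidationError:
    print("rejected invalid output")
```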

🔧 OpenAI-Style Parameter Interface

The framework uses OpenAI-compatible parameters as the standard interface, with automatic translation to provider-specific formats:

# ✅ Same parameters work everywhere
universal_options = {
    "temperature": 0.2,
    "max_tokens": 500,
    "top_p": 0.9,
    "frequency_penalty": 0.1,
    "presence_penalty": 0.05,
    "stop": ["END", "STOP"],
    "seed": 42
}

# Automatically optimized for each provider
providers = [
    "openai:gpt-4o",
    "anthropic:claude-3-5-sonnet-latest",
    "groq:llama-3.1-8b-instant",
    "ollama:cogito:14b",
    "google:gemini-2.5-flash"
]

for provider_model in providers:
    agent = await ReactiveAgentBuilder()\
        .with_model(provider_model)\
        .with_model_provider_options(universal_options)\
        .build()

    # Same code, provider-specific optimization! 🚀
    result = await agent.run("Analyze this data...")

🔄 Dual-Parameter Architecture

The framework uses an elegant dual-parameter system:

1๏ธโƒฃ User Interface Layer (OpenAI-style)

# Clean, standardized interface
{
    "temperature": 0.3,
    "max_tokens": 200,
    "top_p": 0.8,
    "frequency_penalty": 0.1
}

2๏ธโƒฃ Provider Optimization Layer (Native formats)

# Ollama native (automatically translated)
{
    "temperature": 0.3,
    "num_predict": 200,      # max_tokens → num_predict
    "top_p": 0.8,
    "repeat_penalty": 1.1,   # frequency_penalty → repeat_penalty (scaled)
    "num_ctx": 4096,         # Added Ollama optimizations
    "repeat_last_n": 64,
    "top_k": 40
}

# Anthropic native (automatically translated)
{
    "temperature": 0.3,
    "max_tokens": 200,        # Direct mapping
    "top_p": 0.8,
    "stop_sequences": ["END"] # stop → stop_sequences
}

# Groq native (automatically translated)
{
    "temperature": 0.3,
    "max_completion_tokens": 200,  # max_tokens → max_completion_tokens
    "top_p": 0.8,
    "frequency_penalty": 0.1       # Direct OpenAI compatibility
}

# Google native (automatically translated)
{
    "temperature": 0.3,
    "max_output_tokens": 200,      # max_tokens → max_output_tokens
    "top_p": 0.8,
    "stop_sequences": ["END"],     # stop → stop_sequences (up to 5)
    "top_k": 40                    # Google-specific optimization
}
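A simplified version of the OpenAI-to-Ollama mapping shown above can be written as a plain translation function. This is illustrative only; the framework's real mapping layer covers more keys and provider-specific defaults:

```python
# Translate a subset of OpenAI-style options to Ollama's native names.
def openai_to_ollama(options: dict) -> dict:
    native: dict = {}
    for key, value in options.items():
        if key in {"temperature", "top_p", "seed"}:
            native[key] = value                     # identical names
        elif key == "max_tokens":
            native["num_predict"] = value           # max_tokens -> num_predict
        elif key == "frequency_penalty":
            native["repeat_penalty"] = 1.0 + value  # centered at 1.0, not 0.0
    return native

print(openai_to_ollama({"temperature": 0.3, "max_tokens": 200,
                        "frequency_penalty": 0.1}))
```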

✨ Key Benefits

Feature                | Benefit
🔄 Universal Interface | Same parameters across all providers
🎯 Type Safety         | Full Pydantic validation for structured outputs
⚡ Performance          | Provider-specific optimizations automatically applied
🛡️ Reliability         | Graceful fallback when structured outputs fail
🔧 Maintainable        | Clean separation between user interface and implementation
🚀 Future-Proof        | Easy to add new providers following established patterns

🧪 Testing Your Provider Setup

# Test parameter mapping
python -c "
from reactive_agents.providers.llm.ollama import OllamaModelProvider
provider = OllamaModelProvider('cogito:14b')
options = {'temperature': 0.2, 'max_tokens': 100}
print('OpenAI params:', provider.get_openai_params(options))
print('Native params:', provider.get_native_params(options))
"

# Integration testing across providers
python -m reactive_agents.tests.integration.diagnose_provider_issues

🎨 Advanced Usage

# Provider-specific optimizations while maintaining compatibility
provider = "ollama"  # set to the provider prefix you are targeting
builder = ReactiveAgentBuilder()

# Ollama with GPU acceleration
if provider == "ollama":
    builder.with_model_provider_options({
        "temperature": 0.2,
        "max_tokens": 1000,
        "num_gpu": 256,      # Ollama-specific: GPU layers
        "num_ctx": 8192,     # Ollama-specific: context window
    })

# Anthropic with advanced parameters
elif provider == "anthropic":
    builder.with_model_provider_options({
        "temperature": 0.2,
        "max_tokens": 1000,
        "top_k": 50,         # Anthropic-specific: top-k sampling
    })

# Google with structured schema
elif provider == "google":
    builder.with_model_provider_options({
        "temperature": 0.2,
        "max_tokens": 1000,
        "candidate_count": 3,    # Google-specific: multiple candidates
        "response_schema": schema  # Google-specific: schema is your own Pydantic/JSON schema object
    })

agent = await builder.build()

🧪 Playground & Testing

The Reactive Agents framework includes a comprehensive playground for testing, experimentation, and validation.

Quick Start with Playground

# Run basic functionality tests
poetry run python -m playground.runner agents

# Test reasoning strategies
poetry run python -m playground.runner strategies

# Stress test the framework
poetry run python -m playground.runner stress

# Run real-world scenarios
poetry run python -m playground.runner real-world

# List all available test suites
poetry run python -m playground.runner --list

What's in the Playground?

The playground provides:

  • Agent Tests - Core functionality validation
  • Strategy Tests - Compare reasoning approaches
  • Stress Tests - Find breaking points and edge cases
  • Real-World Tests - Production-ready scenarios
  • Memory Tests - Persistence and learning
  • System Tools - Meta-actions and agent control

Example Test Output

$ poetry run python -m playground.runner agents

=== Running Agent Tests ===

✅ Basic agent creation: PASS (8.2s)
✅ Tool integration: PASS (12.5s)
✅ Event system: PASS (6.1s)
✅ Custom tools: PASS (9.8s)

4/4 tests passed in 36.6s

For complete playground documentation, see docs/playground.md


🔧 Available Tools & Integrations

๐ŸŒ Web & Data

  • Web Search - Brave Search, DuckDuckGo
  • Web Scraping - Playwright automation
  • APIs - REST/GraphQL client tools
  • Data Processing - Pandas, NumPy integrations

💾 Storage & Databases

  • File System - Read, write, organize files
  • SQLite - Database operations and queries
  • Vector Stores - ChromaDB, Pinecone integration
  • Cloud Storage - AWS S3, Google Cloud

🔧 Development & DevOps

  • Git Operations - Repository management
  • Docker - Container orchestration
  • CI/CD - GitHub Actions, Jenkins
  • Monitoring - Prometheus, Grafana

🤖 AI & ML

  • Model Inference - Multiple LLM providers
  • Embeddings - Text and multimodal embeddings
  • Vision - Image analysis and processing
  • Speech - TTS and STT capabilities

📈 Performance & Benchmarks

Reactive Agents is built for performance and scalability:

Metric            | Result
Agent Creation    | < 100ms
Tool Execution    | < 50ms overhead
Memory Usage      | < 100MB per agent
Concurrent Agents | 1000+ per instance
Throughput        | 10,000+ tasks/hour

🚀 Optimization Features

  • Lazy Loading - Components loaded on demand
  • Connection Pooling - Efficient resource management
  • Context Caching - Intelligent conversation optimization
  • Parallel Execution - Multi-threaded tool execution
  • Memory Management - Automatic cleanup and optimization

๐Ÿค Contributing

We love contributions! Join our growing community:

🎯 Quick Contribution

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Commit your changes: git commit -m 'Add amazing feature'
  4. Push to the branch: git push origin feature/amazing-feature
  5. Open a Pull Request

🔧 Development Setup

# Clone and setup
git clone https://github.com/tylerjrbuell/reactive-agents
cd reactive-agents
poetry install

# Run tests
poetry run pytest

# Run with coverage
poetry run pytest --cov=reactive_agents

# Lint and format
poetry run black .
poetry run ruff check .

💬 Community

Join the project community on Discord for chat, support, and collaboration:

We welcome contributors, users, and maintainers. Stop by, introduce yourself, and let us know what you're building with Reactive Agents!

๐Ÿ“ Contribution Areas

  • ๐Ÿง  New Reasoning Strategies
  • ๐Ÿ”ง Tool Integrations
  • ๐Ÿ“š Documentation & Examples
  • ๐Ÿ› Bug Fixes & Performance
  • ๐ŸŽจ UI/UX Improvements
  • ๐ŸŒ Internationalization

📊 Project Stats

GitHub Repo stars GitHub forks GitHub watchers

PyPI downloads GitHub issues GitHub pull requests


๐Ÿ™ Acknowledgments

Built with love using these amazing technologies:

Special thanks to our amazing contributors and the AI community! ๐Ÿš€


📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


Ready to build the future with AI agents?

โญ Star this repo if you find it useful!
๐Ÿ› Report issues to help us improve
๐Ÿ’ฌ Join our community for support and discussions

๐Ÿš€ Get Started Now | ๐Ÿ“– Read the Docs | ๐Ÿ’ฌ Join Discord


Made with ❤️ by the Reactive Agents team
