ProjectFlow

A workflow management system for AI-assisted development, similar to Jira or Azure DevOps. It supports both REST API interactions and the Model Context Protocol (MCP) for seamless AI agent integration, with flexible storage via the file system or PostgreSQL.

Features

  • Hierarchical task management (Epics, Stories, Subtasks)
  • 🚀 NEW: Natural Language Chat Interface - Interact with ProjectFlow using conversational commands
  • REST API for programmatic access
  • Model Context Protocol (MCP) support for AI agents
  • Web interface for human users
  • Flexible storage: File system (JSON) or PostgreSQL database
  • Clean, modern UI with accessibility features
  • Containerized deployment

Tech Stack

  • Backend: Go 1.24
  • Storage: File system (JSON) or PostgreSQL database
  • Frontend: HTML templates, CSS, JavaScript
  • Containerization: Docker/Podman
  • Protocols: HTTP REST API + Model Context Protocol

Quick Start

Prerequisites

  • Go 1.24 or later
  • Docker/Podman (for containerized deployment)

Running Locally

  1. Clone the repository:

    git clone https://github.com/aykay76/projectflow.git
    cd projectflow
    
  2. Run the application:

    go run cmd/server/main.go
    
  3. Open your browser and navigate to http://localhost:16191

💬 Natural Language Chat Interface

ProjectFlow now features an AI-powered chat interface that allows you to manage tasks and projects using natural language commands. Simply click the chat button (💬) in the header or use the keyboard shortcut ⌘+/ (Mac) or Ctrl+/ (Windows/Linux) to get started.

Quick Examples

Create a high priority task to fix the login bug
List all tasks in the PF project  
Mark task PF-123 as done
Show me overdue tasks
Create a new project called "Website Redesign"

Getting Started with Chat

  1. Open the chat interface: Click the 💬 button in the header or press ⌘+/ (Ctrl+/ on Windows/Linux)
  2. Type your request: Use natural language to describe what you want to do
  3. Get instant results: The AI will interpret your request and perform the action

For detailed chat commands and examples, see the Chat Interface Guide.

LLM Configuration

The chat interface supports multiple LLM providers:

  • 🚀 Ollama: Use local LLM models for privacy and offline capability
  • OpenAI GPT: Use OpenAI's GPT models for natural language understanding
  • Groq: Fast cloud-based LLM inference
  • Anthropic Claude: Leverage Anthropic's Claude AI for conversational interactions

Quick Setup with Ollama (Local LLM)

For the fastest setup with privacy and no API costs:

# Install Ollama
brew install ollama  # macOS
# or curl -fsSL https://ollama.com/install.sh | sh  # Linux

# Start Ollama and install a model
ollama serve &
ollama pull llama3.2

# Configure ProjectFlow
export LLM_PROVIDER=ollama
export LLM_OLLAMA_MODEL=llama3.2
./projectflow

See the Ollama Quick Start Guide for detailed setup instructions.

Cloud LLM Providers

For cloud-based LLMs, configure your API key:

# For OpenAI
export LLM_PROVIDER=openai
export LLM_API_KEY=your-openai-key
export LLM_MODEL=gpt-4

# For Groq
export LLM_PROVIDER=groq
export LLM_API_KEY=your-groq-key
export LLM_MODEL=llama-3.1-8b-instant

Environment Variables

Server Configuration:

  • PORT: Server port (default: 16191)
  • SHUTDOWN_TIMEOUT: Graceful shutdown timeout in seconds (default: 30)
  • LOG_LEVEL: Logging level - DEBUG, INFO, WARN, ERROR (default: INFO)
  • LOG_FORMAT: Log format - json or text (default: text)

Storage Configuration:

  • STORAGE_TYPE: Storage backend - file or postgres (default: file)

File Storage:

  • DATA_DIR: Directory for data storage (default: ./data)

PostgreSQL Storage:

  • DB_HOST: Database host (default: localhost)
  • DB_PORT: Database port (default: 5432)
  • DB_NAME: Database name (default: projectflow)
  • DB_USER: Database user (default: projectflow)
  • DB_PASSWORD: Database password (required for postgres)
  • DB_SSL_MODE: SSL mode - disable, require, verify-ca, verify-full, prefer, allow (default: prefer)

LLM Configuration (for Chat Interface):

  • LLM_PROVIDER: LLM provider - ollama, groq, openai, disabled (default: disabled)
  • LLM_API_KEY: API key for cloud LLM providers (required for groq, openai)
  • LLM_BASE_URL: Custom base URL for the LLM provider (optional)
  • LLM_MODEL: Model name to use (default varies by provider)
  • LLM_TIMEOUT: Request timeout in seconds (default: 60)
  • LLM_MAX_TOKENS: Maximum tokens per response (default: 1000)

Ollama-specific (for local LLM):

  • LLM_OLLAMA_HOST: Ollama server URL (default: http://localhost:11434)
  • LLM_OLLAMA_MODEL: Ollama model name (default: llama3.2)

For detailed PostgreSQL setup, see PostgreSQL Storage Documentation.

Using Docker

  1. Build the image:

    podman build -t projectflow .
    
  2. Run the container:

    podman run -p 16191:16191 -v $(pwd)/data:/app/data projectflow
    

API Documentation

Chat API

  • POST /api/chat - Send a natural language message to the chat interface
  • GET /api/chat/history - Retrieve conversation history

LLM API

  • GET /api/llm/info - Get LLM provider information and status
  • GET /api/llm/health - Check LLM provider health
  • POST /api/llm/chat - Send direct messages to the LLM (bypasses ProjectFlow translation)

Chat Request/Response

Send Message:

POST /api/chat
{
  "message": "Create a high priority task to fix the login bug",
  "conversation_id": "optional-uuid"
}

Response:

{
  "response": "I've created task PF-123: 'Fix login bug' with high priority.",
  "actions_taken": ["create_task"],
  "task_ids": ["PF-123"],
  "conversation_id": "uuid",
  "confidence": 0.95,
  "intent": "create_task"
}

Get History:

GET /api/chat/history?conversation_id=uuid

{
  "id": "uuid",
  "messages": [
    {
      "id": "msg-uuid",
      "role": "user",
      "content": "Create a task...",
      "timestamp": "2025-06-22T15:17:44.334579Z"
    }
  ],
  "created": "2025-06-22T15:17:44.334574Z",
  "updated": "2025-06-22T15:17:44.334574Z"
}

LLM API Examples

Get LLM Info:

GET /api/llm/info

{
  "enabled": true,
  "provider": "ollama",
  "model": "llama3.2",
  "status": "healthy",
  "timestamp": "2025-06-23T08:56:47.927Z",
  "metadata": {
    "host": "http://localhost:11434",
    "version": "0.1.17"
  }
}

Check LLM Health:

GET /api/llm/health

{
  "healthy": true,
  "status": "healthy",
  "provider": "ollama",
  "timestamp": "2025-06-23T08:56:47.927Z",
  "duration_ms": 45,
  "suggestions": []
}

Direct LLM Chat:

POST /api/llm/chat
{
  "messages": [
    {"role": "user", "content": "Hello!"}
  ],
  "max_tokens": 1000,
  "temperature": 0.7
}

Response:
{
  "response": {
    "choices": [
      {
        "message": {
          "role": "assistant",
          "content": "Hello! How can I help you today?"
        },
        "finish_reason": "stop"
      }
    ]
  },
  "provider": "ollama",
  "model": "llama3.2"
}

Tasks API

  • GET /api/tasks - List all tasks
  • POST /api/tasks - Create a new task
  • GET /api/tasks/{id} - Get task by ID
  • PUT /api/tasks/{id} - Update task
  • DELETE /api/tasks/{id} - Delete task
  • GET /api/hierarchy - Get tasks in hierarchical structure

Task Structure

{
  "id": "string",
  "title": "string",
  "description": "string",
  "status": "string",
  "priority": "string",
  "parent_id": "string",
  "children": ["string"],
  "created_at": "timestamp",
  "updated_at": "timestamp"
}

Hierarchy Structure

The /api/hierarchy endpoint returns tasks in a nested structure:

[
  {
    "task": {
      "id": "string",
      "title": "string",
      "description": "string",
      "status": "string",
      "priority": "string",
      "type": "string",
      "parent_id": "string",
      "children": ["string"],
      "created_at": "timestamp",
      "updated_at": "timestamp"
    },
    "child_tasks": [
      {
        "task": { /* nested task */ },
        "child_tasks": [ /* recursively nested */ ]
      }
    ]
  }
]

Development

Project Structure

├── cmd/server/          # Application entry point
├── internal/
│   ├── handlers/        # HTTP handlers
│   ├── models/          # Data models
│   └── storage/         # Storage implementations
├── pkg/api/             # Public API definitions
├── web/
│   ├── templates/       # HTML templates
│   └── static/          # CSS, JS, images
├── data/                # Local data storage
└── Dockerfile           # Container definition

Running Tests

go test ./...

Building

go build -o bin/projectflow cmd/server/main.go

Model Context Protocol (MCP)

ProjectFlow includes a Model Context Protocol (MCP) server that enables AI agents to interact with tasks programmatically. This allows AI assistants to create, read, update, and delete tasks as part of their workflow.

MCP Server Setup

  1. Start the MCP server:

    go run cmd/mcp-server/main.go
    

    The MCP server runs on port 3001 by default.

  2. Configure your MCP client: Use the provided mcp-config.json file or configure manually:

    {
      "mcpServers": {
        "projectflow": {
          "command": "go",
          "args": ["run", "cmd/mcp-server/main.go"],
          "cwd": "/path/to/projectflow"
        }
      }
    }
    

Available MCP Tools

The MCP server provides these tools for task management:

  • list_tasks - List all tasks with optional filtering
  • create_task - Create a new task
  • get_task - Get a specific task by ID
  • update_task - Update an existing task
  • delete_task - Delete a task
  • get_task_hierarchy - Get tasks in hierarchical structure

Available MCP Resources

The MCP server exposes these resources:

  • tasks://all - List of all tasks
  • tasks://hierarchy - Hierarchical task structure
  • tasks://summary - Project summary with statistics

Example Usage

# Start both servers
go run cmd/server/main.go &          # HTTP server on :16191
go run cmd/mcp-server/main.go &      # MCP server on :3001

# Use with MCP-compatible AI clients
# The AI can now create, manage, and query tasks programmatically

Integration with AI Agents

AI agents can use the MCP interface to:

  • Create and manage development tasks
  • Track project progress
  • Generate reports and summaries
  • Automate workflow processes
  • Integrate with other development tools

For detailed MCP documentation, see docs/mcp.md.

Project Integration with VS Code

ProjectFlow can be seamlessly integrated into your VS Code projects, allowing you to store and manage tasks alongside your code in Git. This enables powerful AI-assisted development workflows where coding agents can create, update, and track development tasks directly within your project context.

Setup .vscode/mcp.json

Add a .vscode/mcp.json file to your project root to configure ProjectFlow as an MCP server:

{
  "mcpServers": {
    "projectflow": {
      "command": "go",
      "args": ["run", "cmd/mcp-server/main.go"],
      "cwd": "/path/to/projectflow",
      "env": {
        "STORAGE_DIR": "./.projectflow/data"
      }
    }
  }
}

Project-Specific Task Storage

When integrated with your project, ProjectFlow will store tasks in a .projectflow/data/ directory within your project:

your-project/
├── .vscode/
│   └── mcp.json              # MCP configuration
├── .projectflow/
│   └── data/
│       └── tasks/            # Project-specific tasks
│           ├── epic-1.json   # Your development epics
│           ├── story-1.json  # User stories
│           └── task-1.json   # Development tasks
├── src/                      # Your application code
├── README.md
└── .gitignore

Benefits of Project Integration

  1. Unified Version Control: Tasks are versioned alongside your code
  2. Context-Aware AI: Coding agents understand both code and task context
  3. Team Collaboration: Shared task management through Git
  4. Branch-Specific Tasks: Different branches can have different task states
  5. Automated Workflows: AI agents can create tasks from code analysis

Example Workflow

  1. Initialize ProjectFlow in your project:

    mkdir -p .projectflow/data/projects
    echo ".projectflow/data/projects/*/*.json" >> .gitignore  # Optional: exclude project and task files
    
  2. Configure VS Code MCP:

    {
      "mcpServers": {
        "projectflow": {
          "command": "go",
          "args": ["run", "/path/to/projectflow/cmd/mcp-server/main.go"],
          "env": {
            "STORAGE_DIR": "./.projectflow/data"
          }
        }
      }
    }
    
  3. Use with AI Coding Agents:

    • AI agents can create tasks based on code analysis
    • Track development progress alongside code changes
    • Generate tasks from TODO comments in code
    • Link tasks to specific commits or pull requests

Integration with Development Workflow

The ProjectFlow MCP integration enables powerful development workflows:

  • Automated Task Creation: AI agents analyze code and create relevant tasks
  • Progress Tracking: Link tasks to commits and pull requests
  • Code Review Tasks: Generate review tasks for specific code changes
  • Bug Tracking: Create and track bugs directly from code analysis
  • Feature Planning: Plan features as hierarchical tasks (Epic → Story → Task)

Frontend Access

While the primary interface is through MCP and AI agents, you can still access the web frontend:

  1. Start the ProjectFlow server pointing to your project's data:

    STORAGE_DIR=./.projectflow/data go run /path/to/projectflow/cmd/server/main.go
    
  2. Open http://localhost:16191 to view and manage tasks in the web interface

Git Integration Best Practices

  • Commit task changes: Include task updates in your commits
  • Branch-specific tasks: Use different task states per branch
  • Team synchronization: Pull task updates when syncing with team
  • Task cleanup: Archive completed tasks periodically

Documentation

  • User Documentation
  • Administrator Documentation
  • Developer Documentation

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes with proper tests
  4. Submit a pull request

See our Developer Guide for detailed contribution guidelines.

License

MIT License - see LICENSE file for details.
