Gaurav Mishra

Backend engineer building distributed systems, cloud platforms, and AI-native workflows.

© 2026 Gaurav Mishra. All rights reserved.

Multi-Cloud Pricing Engine Documentation

End-to-end guide for deploying, configuring, and using the Multi-Cloud Pricing Engine API and MCP Server.

Go · Docker · PostgreSQL · MCP

Challenges

Obtaining accurate, real-time cost visibility across clouds is difficult due to fragmented APIs and complex, non-standardized billing models.

Architecture

Go, PostgreSQL, Docker, MCP

Impact

Unified cloud cost management

Multi-Cloud Pricing Engine

The Multi-Cloud Pricing Engine is a high-performance orchestration system that unifies pricing data from AWS, Azure, and GCP. It provides a canonical API for cost resolution and acts as a financial context server for AI agents via the Model Context Protocol (MCP).


Getting Started

Prerequisites

  • Docker Engine (v20.10+)
  • Docker Compose
  • 4GB+ RAM recommended for large-scale data syncing

Installation

  1. Pull the Docker Image. The engine is distributed as a production-ready container on Docker Hub.

    bash
    docker pull gauravmishra0/cost-engine:latest
    
  2. Prepare Configuration. Create a docker-compose.yml file. This file controls the API, the database, and the ingestion scheduler.

    yaml
    services:
      cost-engine:
        image: gauravmishra0/cost-engine:latest
        ports:
          - "9090:9090"
        environment:
          - DB_PASSWORD=secure_password
          # Scheduler Configuration
          - SYNC_ENABLED=true           # Enables the multi-cloud ingestion job
          - SYNC_SCHEDULE="@daily"      # Interval (e.g., "@hourly", "0 0 * * *")
          - SYNC_REGIONS="us-east-1,us-west-2" # Required: Comma-separated list of target regions
          # Secrets
          - GCP_API_KEY=${GCP_API_KEY}  # Required for GCP ingestion
        depends_on:
          - postgres
        profiles: ["bundled-db"]
    
      postgres:
        image: postgres:16-alpine
        environment:
          - POSTGRES_PASSWORD=secure_password
        volumes:
          - postgres_data:/var/lib/postgresql/data
    
  3. Start the Engine. Launch the stack in detached mode.

    bash
    docker compose --profile bundled-db up -d
    
  4. Verify Health. Ensure the API is running.

    bash
    curl http://localhost:9090/v1/health
    

Automated Ingestion Scheduler

The engine features a built-in Cron Scheduler designed to keep your pricing data synchronized across all supported clouds (AWS, Azure, GCP).

How to Configure

To enable scheduled syncs, define the SYNC_SCHEDULE environment variable in your Docker configuration. No code changes or external cron jobs are required.

  • SYNC_ENABLED: Set to true to activate the background worker.
  • SYNC_REGIONS: Comma-separated list of regions to sync (e.g., us-east-1,eastus). The scheduler will explicitly sync these regions for supported providers.
  • SYNC_SCHEDULE: Accepts standard Cron syntax.
    • @daily: Run once at midnight.
    • @hourly: Run once at the top of every hour.
    • 30 0 * * *: Run every day at 30 minutes past midnight.

Behavior

When triggered, the scheduler performs the following operations sequentially:

  1. AWS Sync: Fetches latest EC2, RDS, and Lambda pricing for configured regions.
  2. Azure Sync: Pulls retail pricing updates for Virtual Machines and Storage.
  3. GCP Sync: Updates SKU catalog for Compute Engine and BigQuery.

Note: The scheduler skips execution if a previous sync job is still running to prevent race conditions.


AI Agent Integration (MCP)

This engine is designed to give AI agents access to real-time cloud pricing. To connect Claude Desktop or other MCP clients:

  1. Open your claude_desktop_config.json.
  2. Add the following server configuration:
json
{
  "mcpServers": {
    "cost-engine": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "--network", "host",
        "-e", "DB_HOST=localhost",
        "gauravmishra0/cost-engine:latest",
        "/app/cost-engine", "mcp"
      ]
    }
  }
}

API Reference

1. Ingestion and Syncing

Manually trigger updates for specific providers.

Sync AWS Data

Note: If services is omitted, ALL supported services (EC2, RDS, etc.) are synced.

Input:

bash
curl -X POST http://localhost:9090/v1/ingest/aws/sync \
     -H "Content-Type: application/json" \
     -d '{"regions": ["us-east-1"]}'

Output:

json
{
  "results": [
    {
      "provider": "aws",
      "status": "success",
      "candidates_stored": 15420,
      "message": "Sync completed successfully"
    }
  ]
}

Sync Azure Data

Input:

bash
curl -X POST http://localhost:9090/v1/ingest/azure/sync \
     -H "Content-Type: application/json" \
     -d '{"regions": ["eastus"]}'

Output:

json
{
  "results": [
    {
      "provider": "azure",
      "status": "success",
      "candidates_stored": 8500,
      "message": "Sync completed successfully"
    }
  ]
}

Sync GCP Data

Input:

bash
curl -X POST http://localhost:9090/v1/ingest/gcp/sync \
     -H "Content-Type: application/json" \
     -d '{"regions": ["us-east1"]}'

Output:

json
{
  "results": [
    {
      "provider": "gcp",
      "status": "success",
      "candidates_stored": 12000,
      "message": "Sync completed successfully"
    }
  ]
}

2. Metadata Discovery

Explore available data.

List Regions

Input:

bash
curl "http://localhost:9090/v1/metadata/regions?provider=aws"

Output:

json
{
  "provider": "aws",
  "regions": ["us-east-1", "us-west-2", "eu-central-1"],
  "count": 3
}

List Services

Input:

bash
curl "http://localhost:9090/v1/metadata/services?provider=azure&region=eastus"

Output:

json
{
  "provider": "azure",
  "region": "eastus",
  "services": ["Virtual Machines", "SQL Database", "Storage"],
  "count": 3
}

3. Cost Resolution

Calculate costs using the engine's resolution logic.

Single Resource Lookup

Input:

bash
curl -X POST http://localhost:9090/v1/cost/resolve \
     -H "Content-Type: application/json" \
     -d '{
       "provider": "aws",
       "region": "us-east-1",
       "service": "compute",
       "resource_type": "vm",
       "instance_type": "t3.medium",
       "os": "linux",
       "billing_model": "hourly",
       "hours": 730,
       "quantity": 1
     }'

Output:

json
{
  "provider": "aws",
  "region": "us-east-1",
  "instance_type": "t3.medium",
  "unit_price": 0.0416,
  "total_cost": 30.368,
  "currency": "USD",
  "effective_date": "2024-01-15T00:00:00Z"
}

Batch Resolution

Input:

bash
curl -X POST http://localhost:9090/v1/cost/resolve/batch \
     -H "Content-Type: application/json" \
     -d '{
       "requests": [
         {
           "provider": "aws", 
           "region": "us-east-1", 
           "instance_type": "t3.medium", 
           "service": "compute", 
           "resource_type": "vm", 
           "os": "linux", 
           "billing_model": "hourly", 
           "hours": 730, 
           "quantity": 10
         }
       ]
     }'

Output:

json
{
  "results": [
    {
      "provider": "aws",
      "instance_type": "t3.medium",
      "total_cost": 303.68,
      "currency": "USD"
    }
  ]
}

Technical Concepts

  • Ingestion (Phase A): The engine pulls raw data from cloud providers, filters noise, and stores valid candidates.
  • Resolution (Phase B): Applies effective-date resolution to select the price in force at any given timestamp.
  • Streaming: Large datasets (like AWS EC2) are processed via streaming JSON decoders to keep memory usage under 150MB.

Troubleshooting

Common Issues:

  • connection refused: Ensure the Docker container is running and port 9090 is mapped correctly.
  • price_not_found: Verify that the specific region has been synced.
  • database_locked: The engine allows only one sync operation per provider at a time.

For additional support, please open an issue in the repository.


Production Deployment Guide

This section is for customers integrating the pricing engine into their own application stack.

1. Sidecar Deployment (Kubernetes)

For low-latency access, deploy the engine as a sidecar container in your application pod.

yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
spec:
  containers:
    # Your Main Application
    - name: my-app
      image: my-company/app:latest
      env:
        - name: COST_ENGINE_URL
          value: "http://localhost:9090"

    # Multi-Cloud Pricing Engine Sidecar
    - name: cost-engine
      image: gauravmishra0/cost-engine:latest
      ports:
        - containerPort: 9090
      env:
        - name: DB_HOST
          value: "postgres-service"
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-secrets
              key: password
        - name: SYNC_ENABLED
          value: "true"
        - name: SYNC_SCHEDULE
          value: "@daily"
        - name: SYNC_REGIONS
          value: "us-east-1,eu-central-1"
        - name: GCP_API_KEY
          valueFrom:
            secretKeyRef:
              name: gcp-secrets
              key: api-key

2. Standalone Service (Docker Compose)

For centralized access across multiple microservices, deploy as a standalone service.

bash
# 1. Create a dedicated network
docker network create pricing-net

# 2. Run Database
docker run -d --name pricing-db --net pricing-net \
  -e POSTGRES_PASSWORD=prod_pass \
  postgres:16-alpine

# 3. Run Engine
docker run -d --name cost-engine --net pricing-net \
  -p 9090:9090 \
  -e DB_HOST=pricing-db \
  -e DB_PASSWORD=prod_pass \
  -e SYNC_ENABLED=true \
  -e SYNC_SCHEDULE="0 3 * * *" \
  -e SYNC_REGIONS="us-east-1" \
  -e GCP_API_KEY=your_gcp_key \
  gauravmishra0/cost-engine:latest

3. Sizing Recommendations

  • Small Scale (Under 10k requests/day): 1 vCPU, 512MB RAM (Sufficient for background ingestion of hourly datasets)
  • Large Scale (Over 1M requests/day): 2 vCPU, 4GB RAM (Recommended to handle memory spikes during full-catalog syncs)
  • Storage: 50GB SSD persistent volume recommended for the PostgreSQL backend.