This document describes how to run Postgres AI Monitoring locally for development.
- Docker + Docker Compose plugin (`docker compose`)
- Git
- Python 3.11+ (for running the reporter on your host)
- Node.js (optional, for `npx postgresai ...` helpers)
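A quick way to confirm the prerequisites are on your `PATH` (the tool names below are the standard binaries; `node` is only needed for the optional helpers):

```shell
# Report which prerequisite tools are installed
for tool in docker git python3 node; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: ok"
  else
    echo "$tool: MISSING"
  fi
done
```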
If you cloned the repo with submodules, make sure the `.cursor` submodule is initialized:

```bash
git submodule update --init --recursive
```

This repo uses pre-commit with gitleaks to catch secrets before they are committed.
```bash
# Install pre-commit
pip install pre-commit
# or: brew install pre-commit

# Install the hooks (one-time, after cloning)
pre-commit install
```

This workflow lets you:
- run the monitoring stack via Docker Compose
- iterate on custom code without rebuilding images or committing changes
- run the reporter on your host (recommended) and debug it
- optionally debug the Flask backend running in Docker
- Docker: pgwatch collectors + sinks + Grafana (+ optional Flask dev container)
- Host: `reporter/postgres_reports.py` (recommended for iteration & debugging)
`docker-compose.yml` requires `PGAI_TAG`. Copy the example and edit as needed:

```bash
cp .env.example .env
# edit .env and set at least PGAI_TAG=...
```

The repo ships an example override file. Copy it to the standard Compose override filename:
```bash
cp docker-compose.override.example.yml docker-compose.override.yml
```

This enables:

- using local `./config/**` (Prometheus/Grafana/pgwatch configs) instead of published config images
- Flask bind-mount + optional debugpy
- exposing `sink-postgres` on localhost for the host-run reporter
- (optional) an alternate mode to run the reporter inside Docker (commented in the example override); the host-run reporter is the primary workflow
The compose stack bind-mounts this file into the reporter container. Create it up front so Docker doesn't accidentally create a directory in its place:

```bash
: > .pgwatch-config
```

Optional (enables uploading reports to PostgresAI):

```bash
echo "api_key=YOUR_KEY" >> .pgwatch-config
```

This repo is already configured (locally) to ignore these files via `.git/info/exclude`:

- `.env`
- `.pgwatch-config`
- `docker-compose.override.yml`
- `.vscode/`

That means you can iterate without polluting `git status`.
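If your clone doesn't have these exclusions yet, they can be added by hand; `.git/info/exclude` uses the same syntax as `.gitignore` but is never committed:

```shell
mkdir -p .git/info   # already exists in a normal clone
cat >> .git/info/exclude <<'EOF'
.env
.pgwatch-config
docker-compose.override.yml
.vscode/
EOF
```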
`docker compose` automatically loads:

- `docker-compose.yml`
- `docker-compose.override.yml` (if present)
This repo includes a `docker-compose.override.yml` that overrides `config-init` to copy configs from your local working tree (`./config`) into the shared `postgres_ai_configs` Docker volume.

That means edits to:

- `config/prometheus/prometheus.yml`
- `config/grafana/dashboards/*.json`
- `config/pgwatch-*/metrics.yml`
- `config/scripts/*.sh`

can be picked up locally (see “Applying config changes” below).
The pgwatch collectors read targets from `instances.yml`. By default, the included demo target is disabled, so you must enable at least one instance.

Edit `instances.yml` and set:

- `is_enabled: true` for `target-database`
- keep `conn_str: postgresql://monitor:monitor_pass@target-db:5432/target_database`
The demo DB is initialized by `config/target-db/init.sql`, which creates the `monitor` user/password used above.
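For orientation, an enabled entry might look roughly like this; only the `is_enabled` and `conn_str` fields are confirmed above, so treat the surrounding shape (including the `name` key) as hypothetical and check the shipped `instances.yml` for the real schema:

```yaml
# Hypothetical sketch — the actual field layout may differ from instances.yml
- name: target-database        # assumed key, for illustration only
  is_enabled: true
  conn_str: postgresql://monitor:monitor_pass@target-db:5432/target_database
```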
- Ensure your DB is reachable from Docker containers (network + `pg_hba.conf`).
- Create/prepare a monitoring role (recommended, idempotent):

  ```bash
  PGPASSWORD='...' npx postgresai prepare-db postgresql://admin@host:5432/dbname
  ```

- Add your instance to `instances.yml` (or use the CLI: `postgresai mon targets add 'postgresql://user:pass@host:port/db' my-db`)
- Make sure `is_enabled: true` is set for that instance.
This turns `instances.yml` into pgwatch `sources.yml` files inside the shared configs volume:

```bash
docker compose run --rm sources-generator
```

Start everything:

```bash
docker compose up -d
```

If you edited `instances.yml` after the stack was already running, restart pgwatch to pick up the new sources:

```bash
docker compose restart pgwatch-postgres pgwatch-prometheus
```

Useful URLs/ports:
- Grafana: http://localhost:3000
- VictoriaMetrics (Prometheus API): http://localhost:59090
- sink-postgres (exposed to the host by the override): `postgresql://pgwatch@127.0.0.1:55433/measurements`
Quick sanity checks:
```bash
docker compose ps
curl -fsS http://localhost:59090/metrics >/dev/null && echo "victoriametrics ok"
curl -fsS http://localhost:3000/api/health || true
```

When you change files under `config/`, re-run `config-init` to copy them into the shared volume:
```bash
docker compose up -d --force-recreate config-init
```

Then restart the affected services:
- Prometheus scrape config changed (`config/prometheus/prometheus.yml`):

  ```bash
  docker compose restart sink-prometheus
  ```

- pgwatch metrics changed (`config/pgwatch-*/metrics.yml`):

  ```bash
  docker compose restart pgwatch-postgres pgwatch-prometheus
  ```

- Grafana dashboards/provisioning changed (`config/grafana/**`): dashboards are file-provisioned and often reload automatically, but if in doubt:

  ```bash
  docker compose restart grafana
  ```

Use whichever venv you prefer; `.venv` is a common convention:
```bash
python3 -m venv .venv
source .venv/bin/activate
pip install -r reporter/requirements.txt
```

Use the helper script:
```bash
./scripts/run_reporter_local.sh
```

Or run directly:
```bash
orig_dir="$(pwd)"
ts="$(date -u +%Y%m%d_%H%M%S)"
out_dir="${orig_dir}/dev_reports/dev_report_${ts}"
mkdir -p "${out_dir}"
cd "${out_dir}"
PYTHONPATH="${orig_dir}${PYTHONPATH:+:${PYTHONPATH}}" \
  python -m reporter.postgres_reports \
    --prometheus-url http://127.0.0.1:59090 \
    --postgres-sink-url postgresql://pgwatch@127.0.0.1:55433/measurements \
    --no-upload \
    --output -
echo "Wrote reports to: ${out_dir}"
cd "${orig_dir}"
```

This repo includes `.vscode/launch.json` with a config:
- Run Reporter (local)
Use Run and Debug → select Run Reporter (local).
The override file bind-mounts `./monitoring_flask_backend` into the container for fast iteration.
```bash
docker compose up -d --force-recreate monitoring_flask_backend
```

To enable the debugger, set `DEBUGPY_FLASK=1`:

```bash
DEBUGPY_FLASK=1 docker compose up -d --force-recreate monitoring_flask_backend
```

Then attach from Cursor/VS Code:
- Attach (Flask in Docker: debugpy 5678)
The Flask service (gunicorn) is exposed on:
http://localhost:55000
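A quick reachability check from the host (the root path is used here because no specific route is documented; `-m 2` caps the wait at two seconds):

```shell
# Succeeds silently if the backend answers; prints a note otherwise
curl -fsS -m 2 http://localhost:55000/ || echo "backend not reachable"
```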
This is usually slower than host-run debugging, but it exists for parity:
```bash
DEBUGPY_REPORTER=1 docker compose up -d --force-recreate postgres-reports
```

Then attach:
- Attach (Reporter in Docker: debugpy 5679)
- Compose says `PGAI_TAG` is required: set it in `.env` (see above).
- Host reporter can't connect to sink-postgres: make sure the stack is up and `sink-postgres` is bound to `127.0.0.1:55433` (it is in `docker-compose.override.yml`).
- Need to regenerate pgwatch sources after editing `instances.yml`:

  ```bash
  docker compose run --rm sources-generator
  docker compose restart pgwatch-postgres pgwatch-prometheus
  ```

The default `docker-compose.yml` uses published images. For local development you can opt in to building services from source via `docker-compose.local.yml`.
```bash
make up
make up-local
```

Use the CLI with `COMPOSE_FILE` to include the local compose override:

```bash
COMPOSE_FILE="docker-compose.yml:docker-compose.local.yml" postgresai mon local-install --demo -y
```

To rebuild on every run:
```bash
COMPOSE_FILE="docker-compose.yml:docker-compose.local.yml" \
  docker compose -f docker-compose.yml -f docker-compose.local.yml build --no-cache
COMPOSE_FILE="docker-compose.yml:docker-compose.local.yml" postgresai mon restart
```

Bring the stack up and force a rebuild:
```bash
docker compose -f docker-compose.yml -f docker-compose.local.yml up -d --build --force-recreate
```

If you want to rebuild everything without cache (slow, but deterministic):
```bash
docker compose -f docker-compose.yml -f docker-compose.local.yml build --no-cache --pull
docker compose -f docker-compose.yml -f docker-compose.local.yml up -d --force-recreate
```

```bash
postgresai mon reset
```

```bash
postgresai mon logs
postgresai mon logs grafana
postgresai mon logs monitoring_flask_backend
```

```bash
postgresai mon stop
postgresai mon start
```

Preview environments allow you to test monitoring changes in isolated, publicly accessible deployments before merging.
- Each preview runs at `https://preview-{branch-slug}.pgai.watch`
- Includes: PostgreSQL target database, pgwatch collector, VictoriaMetrics, Grafana, and a variable workload generator
- Auto-expires after 3 days of inactivity
- Maximum of 2 concurrent previews
- Push your branch to GitLab
- Open the merge request pipeline
- Click the Play button on `preview:deploy` (manual trigger)
- Wait for deployment to complete (~2-3 minutes)
- Access the preview URL shown in the job output
URL: https://preview-{branch-slug}.pgai.watch
Credentials: The password is generated per-preview and stored on the VM. To retrieve it:
```bash
# SSH to the preview VM (requires access)
ssh deploy@<PREVIEW_VM_HOST> "cat /opt/postgres-ai-previews/previews/{branch-slug}/.env"
```

Username: `monitor`
When you push new commits to a branch with an active preview:
- The `preview:update` job runs automatically
- Grafana dashboards and configs are refreshed
- No need to destroy and recreate
Previews are automatically cleaned up when:
- The branch is merged or deleted
- The 3-day TTL expires
To manually destroy:
- Open the merge request pipeline
- Click `preview:destroy`
Branch names are sanitized for DNS compatibility:
| Original | Sanitized |
|---|---|
| `claude/feature-x` | `claude-feature-x` |
| `feature_test` | `feature-test` |
| `UPPERCASE-Branch` | `uppercase-branch` |
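The sanitization shown in the table can be approximated with standard shell tools; this is an illustrative reimplementation, not the CI's actual code, and it maps every non-alphanumeric character (not just `/` and `_`) to `-`:

```shell
# slugify: lowercase the name, then replace each non-alphanumeric char with '-'
slugify() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr -c '[:alnum:]' '-'
}

slugify "claude/feature-x"   # → claude-feature-x
```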
Preview won't deploy:
- Check if the maximum concurrent previews limit (2) is reached
- Verify disk/memory on the preview VM
Grafana shows "No Data":
- Wait 1-2 minutes for metrics to be collected
- Check if pgwatch container is running
Can't access the preview URL:
- DNS propagation may take a few minutes
- Verify the SSL certificate is valid (Let's Encrypt DNS-01 challenge)