

Storage paths

Pulse is local-first. Every byte the server writes — board metadata, cards, specs, attachments, knowledge-graph nodes — lives under a single root directory on your machine. By default that root is ~/.okto-pulse/. You can move it, mount it as a Docker volume, or back it up with tar. There is no remote dependency. This page documents the on-disk layout, the override knobs, the Docker volume gotcha, and how to move the data directory between machines safely. For runtime configuration (ports, bind hosts, logging) see Server configuration. For the full env-var reference see Environment variables.

Default layout

Source: okto-pulse/src/okto_pulse/community/config.py:CommunitySettings._derive_paths (inventory:160–174).
~/.okto-pulse/
├── data/
│   └── pulse.db                   SQLite (WAL mode, FK ON) — boards, cards, specs, queue, audit
├── uploads/                       Card file attachments (one file per Attachment row)
├── boards/
│   └── {board_id}/
│       └── graph.lbug             Per-board knowledge graph (LadybugDB)
├── global/
│   └── discovery.lbug             Cross-board discovery index (LadybugDB)
└── mcp_traces/                    MCP tool-call traces (only if MCP_TRACE_ENABLED=1)
    └── session_{ts}_{id}.jsonl
The root resolves to Path.home() / ".okto-pulse" when data_dir is empty, evaluated at startup by CommunitySettings._derive_paths (inventory:162).
| Path | Contents | Source |
| --- | --- | --- |
| ~/.okto-pulse/data/pulse.db | SQLite database, WAL mode, FK ON. 38 SQLAlchemy tables (boards, cards, specs, queue, audit, etc.) | inventory:165, inventory:1022–1077 |
| ~/.okto-pulse/uploads/ | Per-card file attachments referenced by Attachment.file_path | inventory:166 |
| ~/.okto-pulse/boards/{board_id}/graph.lbug | One LadybugDB graph per board: nodes, edges, HNSW vector indexes | inventory:167, inventory:927 |
| ~/.okto-pulse/global/discovery.lbug | Cross-board discovery index (entities, search history) | inventory:168, inventory:928 |
| ~/.okto-pulse/mcp_traces/ | One JSONL file per MCP session when tracing is on | inventory:771–772 |
The graph store is LadybugDB, not Kuzu. Kuzu was retired on 2026-05-03; the file extension .lbug is the LadybugDB on-disk format. Some env-var names (KG_KUZU_BUFFER_POOL_MB, KG_CONNECTION_POOL_SIZE) keep the legacy KUZU prefix for backward compatibility — they configure the LadybugDB engine.

Overriding the data directory

DATA_DIR is the single knob. It maps to the data_dir field on CommunitySettings (community/config.py:11) — pydantic-settings reads it with case_sensitive=False and no env-prefix, so the field name uppercased is the env-var name. Set it before okto-pulse serve starts and every subdirectory rebases under the new root.
export DATA_DIR=/srv/pulse
okto-pulse serve --accept-terms
$ tree -L 2 /srv/pulse
/srv/pulse
├── boards
│   └── 01HZQ8...
├── data
│   └── pulse.db
├── global
│   └── discovery.lbug
└── uploads
For finer control the underlying core knob is KG_BASE_DIR (inventory:189, inventory:773) — when set independently it relocates only the per-board and global graph files, leaving SQLite and uploads at the original path:
| Env var | Default | Affects | Source |
| --- | --- | --- | --- |
| DATA_DIR | ~/.okto-pulse | All paths above | CommunitySettings.data_dir (community/config.py:11–22, inventory:162) |
| KG_BASE_DIR | ~/.okto-pulse | boards/, global/ only | CoreSettings.kg_base_dir (inventory:189) |
| DATABASE_URL | sqlite+aiosqlite:///{data_dir}/data/pulse.db | SQLite location | CoreSettings.database_url (inventory:185) |
| UPLOAD_DIR | {data_dir}/uploads | Attachments only | CoreSettings.upload_dir (inventory:186) |
| MAX_UPLOAD_SIZE | 10485760 (10 MB) | Per-attachment cap | CoreSettings.max_upload_size (inventory:187) |
Do not split DATA_DIR and KG_BASE_DIR across filesystems unless you have a reason. The KG consolidation pipeline reads from the SQLite queue and writes to LadybugDB in the same transaction window — putting them on different filesystems removes any failure-atomicity guarantees you might assume.
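The precedence among these knobs can be sketched as a small resolver. This is a hedged illustration of the behavior documented in the table above, not the actual `CoreSettings` code — the function and its return keys are invented for the example:

```python
from pathlib import Path

def resolve_storage(env: dict[str, str]) -> dict[str, str]:
    """Sketch of the override precedence: DATA_DIR rebases everything;
    KG_BASE_DIR, DATABASE_URL and UPLOAD_DIR each override only their
    own slice when set explicitly."""
    data_dir = Path(env.get("DATA_DIR", str(Path.home() / ".okto-pulse")))
    kg_base = Path(env.get("KG_BASE_DIR", str(data_dir)))
    return {
        "database_url": env.get(
            "DATABASE_URL", f"sqlite+aiosqlite:///{data_dir}/data/pulse.db"),
        "upload_dir": env.get("UPLOAD_DIR", str(data_dir / "uploads")),
        "boards_dir": str(kg_base / "boards"),   # graph files follow KG_BASE_DIR
        "global_dir": str(kg_base / "global"),
    }

# Graph files split onto separate storage; SQLite and uploads stay put.
print(resolve_storage({"DATA_DIR": "/srv/pulse", "KG_BASE_DIR": "/nvme/kg"}))
```

Note how setting only `KG_BASE_DIR` leaves the database and uploads at the `DATA_DIR` default — exactly the split the warning above cautions against doing casually.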

Docker: the volume gotcha

Inside the official container, the data root is /data and is set via DATA_DIR=/data and KG_BASE_DIR=/data. The compose file ships in okto-pulse/docker-compose.yml and declares an unprefixed named volume:
services:
  okto-pulse:
    environment:
      MCP_HOST: "0.0.0.0"
      DATA_DIR: /data
      KG_BASE_DIR: /data
    volumes:
      - okto-pulse-data:/data

volumes:
  okto-pulse-data:        # ← unprefixed; this is where live data lives
Compose may auto-prefix the volume name. Running docker compose up from a project directory named, for example, okto-pulse/ typically creates okto-pulse_okto-pulse-data, a different volume from the unprefixed okto-pulse-data that production scripts and snapshots reference. The board “looks empty” because Pulse is reading a brand-new, empty volume.

Fix: declare the volume as external in an override file so Compose never re-namespaces it.
# docker-compose.override.yml
services:
  okto-pulse:
    volumes:
      - okto-pulse-data:/data
volumes:
  okto-pulse-data:
    external: true
Then docker volume create okto-pulse-data once and every compose up will reuse it.
Verify which volume your container is actually using:
docker inspect okto-pulse --format '{{range .Mounts}}{{.Name}} -> {{.Destination}}{{"\n"}}{{end}}'
okto-pulse-data -> /data
If the left-hand name shows a project prefix (e.g. okto-pulse_okto-pulse-data), you are on the wrong volume. See Docker install for the full container recipe.
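To automate that check, you can parse the JSON form of the same inspect output (`docker inspect okto-pulse --format '{{json .Mounts}}'`). The `Name` and `Destination` keys are the standard fields of Docker's Mounts structure; the helper itself is a sketch, not part of the Pulse CLI:

```python
import json

def wrong_volume(mounts_json: str, expected: str = "okto-pulse-data") -> bool:
    """Return True if the /data mount is backed by anything other than the
    expected unprefixed volume (e.g. a project-prefixed one).
    Input: output of `docker inspect okto-pulse --format '{{json .Mounts}}'`."""
    for mount in json.loads(mounts_json):
        if mount.get("Destination") == "/data":
            return mount.get("Name") != expected
    return True  # no /data mount at all is also wrong

sample = '[{"Type": "volume", "Name": "okto-pulse_okto-pulse-data", "Destination": "/data"}]'
print(wrong_volume(sample))  # True -> the auto-prefixed volume is in use
```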

Backup strategy

Pulse data is plain files. A consistent backup needs two things: SQLite at a transaction boundary, and the LadybugDB files quiesced.
1. Stop the server (or pause writes)

The simplest correct backup is offline. Stop okto-pulse serve and the consolidation worker stops with it.
pkill -INT -f "okto-pulse serve"
For a containerized deploy:
docker stop okto-pulse
2. Snapshot the data directory

A single tar of the root captures everything Pulse needs to restore.
tar -czf pulse-backup-$(date +%Y%m%d).tar.gz -C ~ .okto-pulse
For the Docker named volume:
docker run --rm \
  -v okto-pulse-data:/data:ro \
  -v "$PWD":/backup \
  alpine tar -czf /backup/pulse-backup-$(date +%Y%m%d).tar.gz -C /data .
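Whichever form you use, verify the archive before trusting it. A minimal stdlib sketch, assuming the `-C <root> .` invocation above (so member names look like `./data/pulse.db`):

```python
import tarfile

def verify_backup(archive: str) -> int:
    """Check that a Pulse backup tarball contains the critical database
    file; returns the entry count on success, raises on a bad archive."""
    with tarfile.open(archive, "r:gz") as tar:
        names = {name.lstrip("./") for name in tar.getnames()}
    if "data/pulse.db" not in names:
        raise SystemExit("backup incomplete: data/pulse.db missing")
    return len(names)
```

Graph files (`boards/**/graph.lbug`, `global/discovery.lbug`) only exist once boards have been created, so the sketch checks only the file that must always be present.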
3. Restart

okto-pulse serve --accept-terms
Or docker start okto-pulse. The next agent connection re-opens SQLite in WAL mode and re-attaches each per-board LadybugDB on demand.

Online (hot) backup

If you cannot stop the server, use SQLite’s online backup API for pulse.db and copy the LadybugDB files separately. SQLite first:
sqlite3 ~/.okto-pulse/data/pulse.db ".backup '/tmp/pulse.db'"
Then snapshot boards/, global/, and uploads/. There is a small window where the SQLite snapshot may reference KG queue entries that have not yet landed in LadybugDB; on restore, the consolidation worker will replay any in-flight entries from the queue table.
Do not copy the .lbug files while a write is in flight. The advisory lock in kg/workers/advisory_lock.py prevents concurrent writers, but it does not prevent a copy tool from reading a half-written page. Pause writes (stop the server) or rely on a filesystem snapshot (LVM, ZFS, APFS) for atomic point-in-time copies.
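The two-phase hot backup above can also be driven from Python with the stdlib. This is a sketch under the same caveat — pause KG writes (or use a filesystem snapshot) before copying the graph directories; the helper name and layout assumptions are illustrative:

```python
import shutil
import sqlite3
from pathlib import Path

def hot_backup(root: Path, dest: Path) -> None:
    """Snapshot pulse.db through SQLite's online backup API (consistent
    even while the server is writing), then copy the quiesced graph and
    upload directories."""
    dest.mkdir(parents=True, exist_ok=True)
    src = sqlite3.connect(root / "data" / "pulse.db")
    dst = sqlite3.connect(dest / "pulse.db")
    src.backup(dst)  # page-by-page online copy, equivalent to `.backup`
    src.close()
    dst.close()
    for sub in ("boards", "global", "uploads"):
        if (root / sub).exists():
            shutil.copytree(root / sub, dest / sub, dirs_exist_ok=True)
```

The SQLite half is safe at any time; the `copytree` half carries the half-written-page risk described above unless writes are paused.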

Migrating between machines

Pulse stores no machine-specific state. You can move the data directory verbatim from one host to another as long as the receiving Pulse version matches.
# On source
tar -czf pulse-data.tar.gz -C ~ .okto-pulse
scp pulse-data.tar.gz target-host:/tmp/

# On target
ssh target-host
tar -xzf /tmp/pulse-data.tar.gz -C ~
okto-pulse --version            # must match source version
okto-pulse serve --accept-terms
Okto Pulse listening on:
  API + UI  http://127.0.0.1:8100
  MCP       http://127.0.0.1:8101/mcp
Match versions on both ends. The KG schema version is stamped in each graph.lbug (SCHEMA_VERSION = "0.3.3" at the time of writing — kg/schema.py, inventory:897). A newer Pulse will run a forward migration on first open; an older Pulse will refuse to open a newer graph. Upgrade the target host before restoring.
For SDLC pipelines that need to copy boards between Pulse instances at the same version, prefer a direct data-dir copy over API replay — the API replay path cannot reach spec.in_progress because the validated→in_progress evaluation gate is enforced at the MCP layer only. Stop both servers, copy data/pulse.db and the relevant boards/{board_id}/graph.lbug, then trim the SQLite rows that should not move (queue entries, audit log) before restarting.
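The version rule above can be encoded as a small pre-restore guard. The same-or-newer semantics (newer target runs a forward migration, older target refuses) come from this page; the parsing and function are illustrative, not part of the Pulse CLI:

```python
def restore_allowed(source_ver: str, target_ver: str) -> bool:
    """Illustrative guard: a target at the same or a newer version can
    open restored data (newer runs a forward migration on first open);
    an older target must be upgraded first."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(target_ver) >= parse(source_ver)

print(restore_allowed("0.3.3", "0.3.3"))  # True  - exact match
print(restore_allowed("0.3.3", "0.4.0"))  # True  - forward migration on open
print(restore_allowed("0.4.0", "0.3.3"))  # False - upgrade the target first
```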

Resetting state

okto-pulse reset deletes the data directory after confirmation. Source: cli.py cmd_reset (inventory:139–151).
okto-pulse reset
This will delete /Users/you/.okto-pulse and ALL boards, cards, specs, and graphs.
Type the data directory path to confirm: /Users/you/.okto-pulse
Removed.
Never delete .lbug files manually to fix a KG issue. Use the documented KG operations (okto-pulse kg backfill, okto-pulse kg dedup-entities) — see KG operations. Deleting a graph file orphans its SQLite queue entries and the KuzuNodeRef rows that point at it; the next consolidation pass will fail to commit and dead-letter the entries.

Inspecting current paths

okto-pulse status prints the resolved data directory and database path:
okto-pulse status
Okto Pulse status
─────────────────
Data dir:     /Users/you/.okto-pulse
Database:     /Users/you/.okto-pulse/data/pulse.db (412 KB)
Boards:       1   Cards: 23   Specs: 4   Agents: 2
API server:   running on 127.0.0.1:8100
MCP server:   running on 127.0.0.1:8101
Source: cli.py:567–614 (cmd_status) (inventory:67–78). The 198 MCP tools exposed by the running process operate against the paths shown here — every board read, every consolidation, every attachment upload resolves under this root.

Server configuration

Ports, bind hosts, CORS, logging, and worker tuning.

Environment variables

Full reference of every CoreSettings field and its env-var name.

Knowledge Graph

What lives in graph.lbug and how consolidation populates it.

Docker install

The named-volume recipe and the unprefixed-volume gotcha.
Last modified on May 8, 2026