ManyHats Integration
OmniData is designed to work with ManyHats — the role-switching system that lets a person wear different professional hats throughout their day. Each hat can have its own `.omnidata` container, keeping knowledge isolated by role.
The `omni:` field in hat YAML
To associate an OmniData container with a hat, add the `omni:` field to the hat’s YAML configuration:
```yaml
# ~/.config/manyhats/hats/director-of-ai.yaml
name: director-of-ai
display_name: Director of AI
omni:
  instance_name: director-of-ai
  auto_create: true
  adapters:
    - name: filesystem
      config:
        watch_paths:
          - ~/Documents/ai-strategy
    - name: chrome-capture
      config:
        capture_screenshots: true
```
The `omni:` block tells ManyHats that this hat has a companion OmniData container. The `instance_name` maps to the bundle directory: `director-of-ai.omnidata/`.
Auto-create on wear
When `auto_create: true` is set, the first time you wear a hat, ManyHats will:
- Check if `~/.local/share/eidosomni/instances/<instance_name>.omnidata/` exists
- If not, create the bundle directory structure (`index.db`, `memory.db`, `blobs/`, `manifest.json`, `adapters.json`)
- Write the manifest with the hat’s identity to `manifest.json`
- Register the configured adapters in `adapters.json`
- Run an initial sync for all enabled adapters
This means wearing a new hat for the first time automatically provisions its knowledge container. No manual setup required.
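The provisioning steps above can be sketched in a few lines. This is an illustrative sketch, not the actual ManyHats implementation: the manifest and adapter file schemas shown here are assumptions, and the root directory is passed in rather than hard-coded.

```python
import json
import sqlite3
import tempfile
from pathlib import Path

def bootstrap_bundle(instance_name: str, root: Path) -> Path:
    """Provision a fresh .omnidata bundle if it does not already exist."""
    bundle = root / f"{instance_name}.omnidata"
    if bundle.exists():
        return bundle  # already provisioned: wearing the hat again is a no-op

    (bundle / "blobs").mkdir(parents=True)

    # Create the two empty SQLite databases
    for db in ("index.db", "memory.db"):
        sqlite3.connect(bundle / db).close()

    # Record the hat's identity (manifest fields here are assumptions)
    (bundle / "manifest.json").write_text(
        json.dumps({"instance_name": instance_name, "schema": 1}, indent=2)
    )

    # Adapter registration would be filled in from the hat YAML
    (bundle / "adapters.json").write_text(json.dumps({"adapters": []}, indent=2))
    return bundle

# First wear provisions the bundle; later wears find it already in place.
bundle = bootstrap_bundle("director-of-ai", Path(tempfile.mkdtemp()))
```

Because the function returns early when the bundle exists, it is safe to call on every wear, not just the first.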
Whoami: active container detection
When an OmniData surface (CLI, MCP server, Chrome extension) needs to know which container to use, it queries ManyHats for the currently active hat:
```bash
manyhats whoami
# Returns: director-of-ai
```
The runtime then resolves this to the corresponding `.omnidata` bundle. This is how the MCP server knows which container to search when an AI agent asks a question — it uses the knowledge scoped to the hat the user is currently wearing.
The resolution path:
- `manyhats whoami` returns the active hat name
- Read the hat YAML to find `omni.instance_name`
- Open `~/.local/share/eidosomni/instances/<instance_name>.omnidata/`
If no hat is active, surfaces fall back to a default container or prompt the user.
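The resolution path might look roughly like this sketch. The `manyhats whoami` subprocess call and YAML parsing are elided (the active hat name and parsed configs are passed in), and the `default` fallback container name is an assumption:

```python
from pathlib import Path
from typing import Optional

INSTANCES_DIR = Path.home() / ".local/share/eidosomni/instances"

def resolve_bundle(active_hat: Optional[str], hat_configs: dict) -> Path:
    """Map the output of `manyhats whoami` to an .omnidata bundle path.

    active_hat:  hat name from `manyhats whoami`, or None if no hat is worn.
    hat_configs: hat name -> parsed hat YAML (loading omitted for brevity).
    """
    if active_hat is None:
        instance = "default"  # fallback container name is an assumption
    else:
        instance = hat_configs[active_hat]["omni"]["instance_name"]
    return INSTANCES_DIR / f"{instance}.omnidata"

configs = {"director-of-ai": {"omni": {"instance_name": "director-of-ai"}}}
```

Every surface (CLI, MCP server, extension) can share this one resolution function, so they all agree on which container is active.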
Take-off: WAL checkpoint
When switching away from a hat (`manyhats take-off`), ManyHats runs a WAL checkpoint on both databases inside the active `.omnidata` bundle:
```python
import sqlite3

# Checkpoint index.db
index_conn = sqlite3.connect("instance.omnidata/index.db")
index_conn.execute("PRAGMA wal_checkpoint(TRUNCATE)")
index_conn.close()

# Checkpoint memory.db
memory_conn = sqlite3.connect("instance.omnidata/memory.db")
memory_conn.execute("PRAGMA wal_checkpoint(TRUNCATE)")
memory_conn.close()
```
This flushes the Write-Ahead Log back into each database file. The checkpoint serves two purposes:
- Clean handoff — Both databases are in a fully consistent state with no pending WAL data. Safe to copy, backup, or sync to another machine.
- Size management — WAL files can grow during active use. Checkpointing on take-off keeps them bounded.
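The size effect is easy to observe with a scratch database in WAL mode: after a `TRUNCATE` checkpoint, the `-wal` sidecar file drops to zero bytes. A self-contained sketch (the table and data are arbitrary):

```python
import sqlite3
import tempfile
from pathlib import Path

db = Path(tempfile.mkdtemp()) / "index.db"
conn = sqlite3.connect(db)
conn.execute("PRAGMA journal_mode=WAL")          # enable write-ahead logging
conn.execute("CREATE TABLE docs (body TEXT)")
conn.executemany("INSERT INTO docs VALUES (?)", [("x" * 1024,)] * 100)
conn.commit()

wal = Path(str(db) + "-wal")
size_before = wal.stat().st_size                 # committed pages sit in the WAL

conn.execute("PRAGMA wal_checkpoint(TRUNCATE)")  # flush and truncate the WAL
size_after = wal.stat().st_size                  # now zero bytes
conn.close()
```

With a single connection and no concurrent readers the `TRUNCATE` checkpoint always completes, which is exactly the situation at take-off time.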
Lifecycle summary
| ManyHats Event | OmniData Action |
|---|---|
| `wear` (first time) | Bootstrap `.omnidata` bundle (`index.db`, `memory.db`, `blobs/`, `manifest.json`, `adapters.json`), initial sync |
| `wear` (subsequent) | Open databases, set PRAGMAs, resume adapter scheduling |
| `whoami` | Return active container for surfaces to use |
| `take-off` | WAL checkpoint both databases, pause adapter scheduling |
| `remove` (hat deleted) | Mark manifest as deleted (bundle preserved) |
Multiple hats, multiple containers
A person might have:
- `director-of-ai.omnidata/` — AI strategy docs, research papers, agent logs
- `health.omnidata/` — medical records, fitness data, nutrition logs
- `financial-manager.omnidata/` — portfolio data, tax documents, bank statements
- `builder.omnidata/` — code repos, architecture decisions, devlogs
Each is completely independent. Searching the health container will never return results from financial-manager. This isolation is by design — it mirrors the real-world separation of concerns across roles.
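Because each container is a separate set of files on disk, this isolation needs no access-control logic: a query against one container's `index.db` physically cannot see another's rows. A toy illustration with two stand-in bundles (schema and sample data are invented for the demo):

```python
import sqlite3
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())

# Each container is its own SQLite file; there is no shared database.
for name, doc in [("health", "blood panel 2024"),
                  ("financial-manager", "tax return 2024")]:
    bundle = root / f"{name}.omnidata"
    bundle.mkdir()
    conn = sqlite3.connect(bundle / "index.db")
    conn.execute("CREATE TABLE docs (title TEXT)")
    conn.execute("INSERT INTO docs VALUES (?)", (doc,))
    conn.commit()
    conn.close()

# Querying the health container's index can only ever return health rows.
conn = sqlite3.connect(root / "health.omnidata" / "index.db")
rows = [r[0] for r in conn.execute("SELECT title FROM docs")]
conn.close()
```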