Written after migrating 10 issues from ExpGraph and exploring the codebase architecture
QNTX is an attestation-based continuous intelligence system. Not a knowledge base, not a note-taking app, not a database GUI. It's an attempt to answer: "How do I build understanding that stays current?" For quick definitions, see the Glossary.
The core primitive is the attestation: structured facts of the form "X has property Y in context Z". Everything flows from this:
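A minimal sketch of that shape, with hypothetical field names (the actual QNTX schema isn't shown in this document):

```go
package qntx // hypothetical package name, used only for illustration

import "time"

// Attestation sketches the core primitive: "X has property Y in context Z".
// Field names are illustrative, not the real QNTX schema.
type Attestation struct {
	Subject   string    // X: the entity the fact is about
	Property  string    // Y: the property being asserted
	Value     string    // the asserted value of that property
	Context   string    // Z: the context in which the fact holds
	Source    string    // provenance: who or what produced the fact
	CreatedAt time.Time // facts are immutable; freshness comes from newer attestations
}
```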
The segment symbols (꩜ ⌬ ≡ ⨳ ⋈) are not decoration—they're a namespace system:
꩜ (Pulse) - Async operations, scheduling, rate limiting
⌬ (Actor/Agent) - Entity identification and relationships
≡ (Configuration) - System settings and parameters
⨳ (Ingestion) - Data import and processing
⋈ (Join/Merge) - Entity resolution and merging

This is visual grep. You can scan code/UI and instantly know which domain you're in.
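Purely as an illustration of the convention (the identifiers below are invented, not taken from the codebase), the mapping might live somewhere like this:

```go
package qntx // hypothetical; shown only to illustrate the "visual grep" convention

// segmentDomains maps each segment symbol to the domain it marks in code and UI.
var segmentDomains = map[string]string{
	"꩜": "pulse",     // async operations, scheduling, rate limiting
	"⌬": "actor",     // entity identification and relationships
	"≡": "config",    // system settings and parameters
	"⨳": "ingestion", // data import and processing
	"⋈": "join",      // entity resolution and merging
}
```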
┌─────────────────────────────────────┐
│ Prose/Views (composition layer)     │ ← Human-facing intelligence
├─────────────────────────────────────┤
│ Graph/Tiles (spatial visualization) │ ← Pattern recognition layer
├─────────────────────────────────────┤
│ ATS Queries (semantic access)       │ ← Query/exploration layer
├─────────────────────────────────────┤
│ Attestations (ground truth)         │ ← Data layer (immutable facts)
├─────────────────────────────────────┤
│ Pulse (continuous execution)        │ ← Currency layer (keeps fresh)
└─────────────────────────────────────┘
Each layer has a clear contract. You can work on graph rendering without touching attestation storage.
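As a hypothetical sketch of those contracts, continuing the same invented package as the Attestation sketch above (these interfaces are illustrative and not the actual QNTX API):

```go
package qntx

import "context"

// AttestationStore is the ground-truth layer: append-only facts.
type AttestationStore interface {
	Append(ctx context.Context, a Attestation) error
	Find(ctx context.Context, subject string) ([]Attestation, error)
}

// QueryEngine is the semantic access layer (ATS) built on top of the store.
type QueryEngine interface {
	Query(ctx context.Context, ats string) ([]Attestation, error)
}

// Renderer is any presentation layer (graph tiles, prose views) that consumes
// query results without knowing how attestations are stored.
type Renderer interface {
	Render(results []Attestation) error
}
```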
The WebSocket architecture reveals intent (see WebSocket API for protocol details):
/ws endpoint for parse_response (semantic tokens, diagnostics)
/lsp endpoint for LSP protocol (completions, hover)

Pattern: The system assumes you want to see changes as they happen. Not on refresh, not on save—on keystroke, on schedule tick, on data arrival.
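A minimal sketch of how a client might dispatch /ws traffic, assuming a JSON envelope with a type field (the field names and shape are guesses for illustration; the documented wire format lives in the WebSocket API reference, not here):

```go
package wsclient

import (
	"encoding/json"
	"log"
)

// wsMessage is an assumed envelope for /ws frames; not the documented protocol.
type wsMessage struct {
	Type    string          `json:"type"`    // e.g. "parse_response"
	Payload json.RawMessage `json:"payload"` // semantic tokens, diagnostics, ...
}

// handleWS decodes one frame and routes it by type.
func handleWS(raw []byte) {
	var msg wsMessage
	if err := json.Unmarshal(raw, &msg); err != nil {
		log.Printf("bad frame: %v", err)
		return
	}
	switch msg.Type {
	case "parse_response":
		// update editor decorations as each keystroke's response arrives
	default:
		log.Printf("unhandled message type %q", msg.Type)
	}
}
```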
Most systems treat their query language as an afterthought. QNTX treats ATS as a programming language:
The CodeMirror + LSP + ProseMirror integration is expensive to build. You don't do this unless the query language is central to the user experience.
Migrating 10 issues from ExpGraph to QNTX exposed vision clusters:
Insight: The system wants to show you what's running right now. Not just logs, not just status—active awareness of execution.
Insight: The UI is not display-only. Every surface should be explorable and composable. Click a term, see its connections. Arrange data spatially based on relationships.
Insight: These are polish issues, not foundational. The language infrastructure exists; now it's about feel and responsiveness.
Insight: Even operational concerns (usage tracking) get first-class visualization. This is not a "just query the database" system.
This is conviction design. Not "best practices"—specific, opinionated choices.
Don't think: "It's a tool that does X, Y, Z."
Think: "It's a substrate for building intelligence systems that happen to include graph visualization, scheduled execution, and prose composition."
The test: Can you build new intelligence workflows without modifying core infrastructure?
When you see ꩜ in code or UI:
When you see ⋈:
This is semantic indexing for humans.
ATS isn't just a query language—it's an ontology definition language:
User(id: String, email: String)
Document(path: String, content: Text)
hasPermission: User -> Document -> Permission
The types are the data model. The queries are the API. There's no separation.
The real-time everything, LSP integration, D3 visualization, scheduled execution—this is a 2024-2025 stack. It assumes:
If built in 2020, it would've felt premature. In 2026, it might feel expected.
Don't try to be general-purpose. Pick one workflow that's currently painful:
Make that workflow 10x better than alternatives. Expand from there.
Let people use pieces before buying the whole vision:
By Week 4, they're using the full system without realizing it.
The fastest path to value: "Here's my existing data → here's QNTX making sense of it."
If you can import:
Then users get immediate value from data they already have.
The unique value prop is continuous currency. Show:
Make the "stays current" part visible and celebrated.
This is conviction software. It has opinions about how intelligence systems should work:
Those opinions might be right or wrong, but they're coherent. The architecture follows from principles, not from "what's popular."
The question isn't "Is this useful to everyone?" It's "Is this indispensable to someone?"
Build for that someone. If the conviction is sound, it'll expand.
Based on config-system.md and config-panel.md
QNTX's config system has 5 sources with strict precedence:
1. System /etc/qntx/config.toml (lowest)
2. User ~/.qntx/config.toml
3. User UI ~/.qntx/config_from_ui.toml
4. Project ./config.toml
5. Environment QNTX_* environment variables (highest)
The clever part: Separate config_from_ui.toml prevents accidental git commits of user preferences.
Problem it solves: settings changed in the UI would otherwise be written to project/config.toml, which lives in the repository.
Solution: write UI changes to ~/.qntx/config_from_ui.toml (home directory, never in project).
Core philosophy: Config is a first-class citizen. Don't hide where values come from.
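A minimal sketch of the precedence merge described above, assuming each source has already been parsed into a flat key/value map (loading and TOML parsing are elided; the signature is invented for illustration):

```go
package config

// merge applies the five sources from lowest to highest precedence:
// system, user, user UI, project, then QNTX_* environment variables.
// Later sources overwrite earlier ones, so environment always wins.
func merge(system, user, userUI, project, env map[string]string) map[string]string {
	merged := map[string]string{}
	for _, src := range []map[string]string{system, user, userUI, project, env} {
		for k, v := range src {
			merged[k] = v // later sources override earlier ones
		}
	}
	return merged
}
```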
The config panel shows all 5 sources simultaneously:
Each setting shows:
Example:
┌─────────────────────────────┐
│ openrouter.api_key          │
│ sk-or-v1-9bee...            │
│ [user] ⚠ Overridden by env  │
└─────────────────────────────┘
User immediately understands: "My manually configured key is being ignored because environment variable wins."
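A hedged sketch of the data that could back one such panel row, assuming the resolver records where each value came from and what overrode it (field names are invented):

```go
package config

// ResolvedSetting is an illustrative shape for one config panel entry:
// the value a source supplied, plus whether a higher-precedence source won.
type ResolvedSetting struct {
	Key          string // e.g. "openrouter.api_key"
	Value        string // the value from this source (masked in the UI if secret)
	Source       string // "system", "user", "user-ui", "project", or "env"
	OverriddenBy string // non-empty when a higher-precedence source wins, e.g. "env"
}
```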
This is dataflow visualization as product design. Most systems hide complexity. QNTX makes it comprehensible.
Why this matters: When config doesn't work as expected, users can debug themselves instead of filing support tickets.
The config panel design includes space for inline documentation. See Issue #207 for discussion of ProseMirror-based documentation editing and viewing capabilities.
The concept is "right-click → Go to Definition" UX for configuration - making help contextual and immediately accessible.
The current configuration system with its precedence visualization and source tracking provides a solid foundation that could support additional configuration sources in the future through plugins. See Issue #205 for discussion of potential multi-provider support.
Based on config-system.md, task-logging.md
The docs don't just say what to build—they explain why and what alternatives were rejected.
Example from config-system.md:
Why TOML Marshaling vs Regex?
Original approach: Regex pattern matching to preserve comments
Problems:
- Fragile (breaks with formatting changes)
- Hard to maintain
- No type safety
Current approach: Proper TOML marshaling
Tradeoff: Comments in UI config are lost (acceptable for auto-generated file)
This is conviction documentation. Not "here's how it works" but "here's why we chose this over that."
What this enables:
task-logging.md uses explicit phase numbering with status:
### Phase 1: Database Schema ✓ COMPLETED
### Phase 2: LogCapturingEmitter ✓ COMPLETED
### Phase 3: Integrate in Async Worker Handlers ✓ COMPLETED
### Phase 4: Integrate in Ticker (PENDING - deferred to Issue #30)
...
### Phase 9: Documentation & Cleanup ✓ COMPLETED
Not just aspirational design docs. Status tracks reality:
Example E2E results:
Test Scenario: Manual async job execution (JB_MANUAL_E2E_LOG_TEST_123)
Results:
- ✅ 3 log entries written to task_logs table
- ✅ API endpoints returned correct hierarchical data
Sample Captured Logs:
stage: read_jd | level: info | Reading job description...
stage: extract_requirements | level: info | Extracting with llama3.2:3b...
stage: extract | level: error | file not found: file:/tmp/test-jd.txt
This is documentation that proves completion, not just claims it.
Every doc links to related docs with context:
config-panel.md:
Architecture Reference: For backend config system architecture, see docs/architecture/config-system.md
task-logging.md:
Related Documentation
- Pulse Execution History - Designed the pulse_executions table
- Pulse Frontend Remaining Work - Frontend status
What this means: No doc is an island. New developers (or LLMs) can follow breadcrumbs.
Based on Issue #30 and direct confirmation
Current state:
From Issue #30:
The Pulse panel is heavily WIP and currently broken:
- ❌ Fails to get tasks - Task loading is broken
- ❌ Panel breaks frequently - Features conflict with each other
- ⚠️ Individual features worked independently - Each feature has worked at some point, but never all together
This is remarkable transparency. Most projects would say "known issues" or "needs refinement."
Instead: "heavily WIP and currently broken" with explicit broken features listed.
What this reveals:
When you're working solo:
The decision transparency makes sense: You're explaining to yourself why you chose X over Y, so when you revisit in 6 months, you don't question the decision without understanding the context.
Issue #30 Priority 1: Fix broken functionality (integration stability)
The problem: WebSocket updates, task hierarchy, job polling, execution history—each works in isolation, all break together.
Likely causes:
The fix: Needs investigation (high priority). Then state management refactor or integration testing.
The need: Frontend developer to complement backend expertise.
Based on task-logging.md, config-system.md
Problem: Need to log all job execution emissions without modifying existing code.
Solution: Decorator pattern—wrap existing emitter:
// Before:
emitter := async.NewJobProgressEmitter(job, queue, h.streamBroadcaster)
// After:
baseEmitter := async.NewJobProgressEmitter(job, queue, h.streamBroadcaster)
emitter := ix.NewLogCapturingEmitter(baseEmitter, h.db, job.ID)
Why this is good:
Decision doc explains alternative:
Alternative considered: Modify JobProgressEmitter directly
- Simpler (one implementation)
- But couples logging to progress tracking
- Harder to test
- Not reusable for other emitter types
This is textbook Gang of Four. But more importantly: It's documented with rationale.
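A minimal sketch of what the decorator could look like, assuming an Emitter interface with a single Emit method and a task_logs table with job_id/stage/level/message columns (the real interface, ix package internals, and schema may differ):

```go
package ix

import "database/sql"

// Emitter is an assumed single-method interface; the real one may differ.
type Emitter interface {
	Emit(stage, level, message string) error
}

// LogCapturingEmitter decorates another Emitter: it persists every emission
// to the task_logs table, then delegates to the wrapped emitter unchanged.
type LogCapturingEmitter struct {
	next  Emitter
	db    *sql.DB
	jobID string
}

func NewLogCapturingEmitter(next Emitter, db *sql.DB, jobID string) *LogCapturingEmitter {
	return &LogCapturingEmitter{next: next, db: db, jobID: jobID}
}

func (e *LogCapturingEmitter) Emit(stage, level, message string) error {
	if _, err := e.db.Exec(
		`INSERT INTO task_logs (job_id, stage, level, message) VALUES (?, ?, ?, ?)`,
		e.jobID, stage, level, message,
	); err != nil {
		// In this sketch a failed insert does not block progress reporting.
	}
	return e.next.Emit(stage, level, message)
}
```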
task-logging.md shows:
From user: "Logging is critical" (explaining why test coverage is high here)
Pattern: Test coverage follows importance, not dogma.
From task-logging.md:
Decision: Store full logs without size limits
Rationale:
- Debugging requires complete information
- SQLite TEXT field supports unlimited size
- Storage is cheap (compared to debugging time)
- TTL cleanup handles growth
Risk mitigation:
- Monitor database size
- Implement TTL cleanup (separate task)
- Alert if logs table grows >10GB
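The TTL cleanup mentioned above could be as simple as a periodic delete; a sketch assuming task_logs has a created_at column storing RFC3339 timestamps (the actual schema and retention policy aren't shown in this document):

```go
package ix

import (
	"database/sql"
	"fmt"
	"time"
)

// pruneTaskLogs deletes task_logs rows older than ttl and reports how many
// rows were removed. Column name and timestamp format are assumptions.
func pruneTaskLogs(db *sql.DB, ttl time.Duration) (int64, error) {
	cutoff := time.Now().Add(-ttl).UTC().Format(time.RFC3339)
	res, err := db.Exec(`DELETE FROM task_logs WHERE created_at < ?`, cutoff)
	if err != nil {
		return 0, fmt.Errorf("prune task_logs: %w", err)
	}
	return res.RowsAffected()
}
```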
This is responsible unlimited storage:
config-panel.md future sections:
task-logging.md future:
Current broken state:
Work in progress:
The vision guides architecture choices NOW:
Implementation catches up incrementally:
The mitigation:
This is strategic incrementalism:
Evidence:
The architecture is sound. The implementation is incomplete. The gap is acknowledged and tracked.
Not just "settings"—it's a debuggable dataflow system:
This reveals production operations thinking:
Every doc includes:
This pays off:
Immediate (from Issue #30):
Near-term:
Long-term (tracked in GitHub Issues):
The key: Fix integration before adding features. Stable foundation first.
This analysis is based on migrating 10 issues, exploring the codebase architecture, reading config-system.md, config-panel.md, task-logging.md, Issue #30, and direct conversation with the developer. It represents honest assessment, not marketing copy.