Status: Aspirational - Core concepts for future UI evolution
## Core Concept
Transform entity visualization from nodes as dots to tiles as always-on surfaces - a paradigm shift toward pane-based semantic computing where data is visible without interaction.
## Key Principles

### 1. Tiles as Surfaces, Not Tooltips

**Current pattern:** Hover to reveal entity details

**Vision:** Data always visible on the tile surface

Tiles are persistent information surfaces displaying contextual data at all zoom levels. No interaction required to see entity state.

### 2. Semantic Zoom (Progressive Modes)

Progressive detail disclosure through meaning density, not pixel scaling. The true shape will emerge during development - current thinking:
| Mode | Gesture (Mobile) | Display | Use Case |
|------|------------------|---------|----------|
| Focus | Pinch in (max zoom) | Single tile fills screen, full detail | Deep inspection of one entity |
| Relational | Pinch out slightly | Focused tile + half of connected tiles visible | Navigate relationships, jump between entities |
| Overview | Pinch out fully | Full force-directed graph | Landscape view, see all connections |
**Mobile-first insight:** Relational mode is the key innovation - it shows relationship context without losing focus. Drag a connected tile to the center to navigate.

**Desktop:** Same three modes, plus potential intermediate levels for specific workflows (these will emerge organically).

**Philosophy:** Each mode serves a distinct cognitive task - focus for depth, relational for exploration, overview for orientation.
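As a rough sketch of how the three modes might be represented in code - all names here (`ZoomMode`, `ModeDisplay`, the display values) are hypothetical, not an existing API:

```typescript
// Hypothetical sketch: the three semantic zoom modes and what each renders.
type ZoomMode = 'focus' | 'relational' | 'overview';

interface ModeDisplay {
  tilesShown: 'single' | 'focused-plus-neighbors' | 'all';
  detail: 'full' | 'partial' | 'symbol-only';
  layout: 'fill-screen' | 'radial' | 'force-directed';
}

// One possible mapping, following the table above. 'radial' for relational
// mode is an assumption; the doc only says connected tiles are half-visible.
const MODE_DISPLAY: Record<ZoomMode, ModeDisplay> = {
  focus:      { tilesShown: 'single',                 detail: 'full',        layout: 'fill-screen' },
  relational: { tilesShown: 'focused-plus-neighbors', detail: 'partial',     layout: 'radial' },
  overview:   { tilesShown: 'all',                    detail: 'symbol-only', layout: 'force-directed' },
};
```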
### 3. Pane-Based Computing

Inspired by Smalltalk's pane model - tiles are compositional surfaces that can be:

- Arranged in layouts (grid, hierarchy, timeline, pipeline); a config sketch follows below
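One way those intentional layouts could be configured, as a sketch - `TileArrangement`, `ViewLayoutConfig`, and the field names are illustrative, not an actual config schema:

```typescript
// Hypothetical sketch: a view-level layout config instead of hardcoded
// force-directed physics. Field names are assumptions.
type TileArrangement = 'grid' | 'hierarchy' | 'timeline' | 'pipeline';

interface ViewLayoutConfig {
  arrangement: TileArrangement;
  orderBy?: string;  // e.g. a timestamp field for 'timeline' (assumed)
  groupBy?: string;  // e.g. a parent reference for 'hierarchy' (assumed)
}

// Example: a pipeline view ordered by a (hypothetical) 'stage' field.
const pipelineView: ViewLayoutConfig = {
  arrangement: 'pipeline',
  orderBy: 'stage',
};
```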
#### Example Workflow: Mobile Exploration

- **Navigate:** Drag a candidate protein-coding gene to the center - examine its function predictions
- **Discovery:** Pinch to relational mode - see this gene's regulatory network
- **Backtrack:** Swipe-back gesture - return to the original cluster for comparison

**Key insight:** A potential novel protein function is discovered before arriving at the lab - gestural exploration enables hypothesis formation during a commute.
#### Desktop Deep Dive (Zoom levels emerge organically)

- **Working context:** See key fields without interaction
- **Deep inspection:** Full metadata for detailed comparison
- **Neighborhood exploration:** Embedded relationship graphs
## Comparison to Current UI

| Aspect | Current (Nodes) | Vision (Tiles) |
|--------|-----------------|----------------|
| Default state | Label on hover | Key fields always visible |
| Zoom behavior | Pixel scaling | Semantic detail levels |
| Layout | Force-directed physics | Intentional layouts (grid/tree/timeline) |
| Customization | Hardcoded | Config-driven per entity type |
| Information density | Low (interaction required) | High (data always on) |
## Mobile-First Considerations

**Deep exploratory analysis (30+ minute sessions on mobile):**

- Fast navigation via gesture shortcuts (swipe patterns, drag-to-center)
- Threshold-based zoom: smooth pinch within modes, discrete snap between modes
- Simplified physics: less movement at the overview level for touch stability
- Adaptive detail: entity type determines what fields show at each mode (see the config sketch after this list)
- Landscape enhancement: show 2-3 tiles side by side when horizontal
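A minimal sketch of what config-driven, per-entity-type adaptive detail could look like - the entity types and field names below are examples only, not the real schema:

```typescript
// Hypothetical sketch: which fields each entity type shows at each zoom mode.
type ZoomMode = 'focus' | 'relational' | 'overview';

type TileFieldConfig = Record<ZoomMode, string[]>;

const tileFields: Record<string, TileFieldConfig> = {
  // Example entity types; field names are placeholders.
  gene: {
    focus:      ['symbol', 'name', 'predictedFunction', 'regulators', 'attestations'],
    relational: ['symbol', 'name'],
    overview:   ['symbol'],
  },
  protein: {
    focus:      ['name', 'structure', 'interactions', 'attestations'],
    relational: ['name'],
    overview:   ['name'],
  },
};

// At render time, a tile would look up tileFields[entityType][currentMode].
```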
**Gesture mapping:**

- Pinch in → Focus mode (single tile, full detail)
- Pinch out slightly → Relational mode (show connected tiles)
- Pinch out fully → Overview mode (full graph)
- Drag tile to center → Navigate to that tile (in relational mode)
- Swipe → Gesture shortcuts (back, related entities, etc.)
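The pinch-to-mode mapping with threshold-based snapping (smooth within a mode, discrete snap between modes) might look roughly like this - the threshold values and function names are placeholders:

```typescript
// Hypothetical sketch: pinch scale moves smoothly, but the mode snaps at
// discrete thresholds. Threshold values below are arbitrary placeholders.
type ZoomMode = 'focus' | 'relational' | 'overview';

const FOCUS_THRESHOLD = 0.66;      // above this, snap to focus
const RELATIONAL_THRESHOLD = 0.33; // above this (and below focus), snap to relational

// scale: normalized pinch scale in [0, 1], where 1 = fully pinched in.
function resolveMode(scale: number): ZoomMode {
  if (scale >= FOCUS_THRESHOLD) return 'focus';
  if (scale >= RELATIONAL_THRESHOLD) return 'relational';
  return 'overview';
}

let currentMode: ZoomMode = 'overview';

function onPinch(scale: number): void {
  const next = resolveMode(scale);
  if (next !== currentMode) {
    currentMode = next;
    // Discrete snap: re-layout tiles for the new mode here.
    console.log(`snapped to ${currentMode} mode`);
  }
  // Within a mode, the pinch can still drive smooth scaling of the current layout.
}
```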
**Key design principle:** Mobile is not a compromise - it's the primary exploratory interface. Desktop adds power-user features.
## Open Questions

- **Tile sizing:** Fixed dimensions or dynamic sizing based on content?
- **Transition animations:** Smooth zoom interpolation or discrete snapping? *Answer:* threshold-based - smooth within modes, snap between modes
- **Relational mode polish:** Exactly how much of each connected tile is shown? *Answer:* half-tile (symbol, label, 1-2 fields visible)
- **Performance:** Can we render 1000+ tiles without degradation?
- **Gesture vocabulary:** Which swipe patterns map to shortcuts (back, forward, star, hide, etc.)?
## Success Criteria

A successful tile-based UI should:

- ✅ Show more information without user interaction
- ✅ Support multiple layout modes for different workflows
- ✅ Enable non-technical users to customize via config
- ✅ Scale from 10 to 1000+ entities
- ✅ Maintain real-time update responsiveness
- ✅ Preserve existing graph capabilities where valuable
## Related Concepts

- **Semantic Zoom** - progressive detail disclosure
- **Pane-Based Computing** - Smalltalk's compositional window model
- **Information Density** - Tufte's principles of data visualization
- **Always-On Interfaces** - data visibility without interaction
## When to Build This

**Prerequisites:**

- Stable entity model and attestation system
- Clear use cases demanding more visible data
- User feedback that the current graph UI is limiting
- Resources for UI/UX iteration

**Trigger conditions:**

- Users frequently hover to see data (interaction overhead)
- Need to compare multiple entities visually
- Different workflows require different layouts (not just force-directed)
- Non-technical users want to customize views
## Status

**Current:** Vision document - concepts not yet implemented

**Future:** Consider a prototype when the core attestation system stabilizes and user feedback indicates a need for enhanced visualization.
## Key Architectural Patterns

### Real-Time Updates via WebSocket

Delta update protocol for live tile changes:

- The client subscribes to view updates
- The server pushes only changed tiles (not a full re-query)
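A sketch of what the client side of that flow could look like - the endpoint URL, message shapes, and field names are assumptions, not a defined protocol:

```typescript
// Hypothetical sketch: subscribe to a view, then apply per-tile deltas
// instead of re-querying the whole view on every change.
interface TileDelta {
  tileId: string;
  changedFields: Record<string, unknown>;
}

type ServerMessage =
  | { type: 'snapshot'; tiles: Record<string, Record<string, unknown>> }
  | { type: 'delta'; deltas: TileDelta[] };

const tiles = new Map<string, Record<string, unknown>>();

const ws = new WebSocket('wss://example.invalid/views/live'); // placeholder URL

ws.onopen = () => {
  // Subscribe to updates for one view (message shape is assumed).
  ws.send(JSON.stringify({ type: 'subscribe', viewId: 'my-view' }));
};

ws.onmessage = (event) => {
  const msg: ServerMessage = JSON.parse(event.data);
  if (msg.type === 'snapshot') {
    // Initial full state for the view.
    tiles.clear();
    for (const [id, data] of Object.entries(msg.tiles)) tiles.set(id, data);
  } else {
    // Delta: merge only the changed tiles - no full re-query.
    for (const { tileId, changedFields } of msg.deltas) {
      tiles.set(tileId, { ...tiles.get(tileId), ...changedFields });
    }
  }
  // Re-render only the affected tiles here.
};
```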