Architecture Overview - V2 Pipeline System
Complete system design and component interactions in 360 Hextile V2
Last updated: 2026-01-03
360 Hextile V2 is a complete redesign featuring a modular, multi-pipeline architecture with dynamic UI generation, VRAM-aware processor management, and an extensible plugin system for AI models.
🎯 What's New in V2
Key Improvements
- Multi-Pipeline Support: SDXL Inpainting, SD3.5 Inpainting, Flux.2 Inpainting, Real-ESRGAN Upscaling
- Dynamic UI Generation: Backend defines UI, frontend renders automatically
- Schema-Driven Tab Visibility: Tabs show/hide based on pipeline capabilities
- Smart VRAM Management: Exclusive loading prevents Out-of-Memory errors
- Modular Processors: Plugin architecture for easy model integration
- Backward Compatible: Legacy SDXL workflows still work
- Unified Architecture: Single FastAPI server (no separate diffusion server)
System Architecture (V2)
┌─────────────────────────────────────────────────────────────────────────┐
│ FRONTEND (SvelteKit + Tauri) │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ HEADER: [Config ▼] [Pipeline Selector ▼] [Input][Template]...[RENDER]│
│ │ │
│ │ Dynamic Pipeline Dropdown │
│ ▼ │
│ ┌──────────────────────────┐ │
│ │ Available Pipelines: │ │
│ │ • SDXL Inpaint │ │
│ │ • SD3.5 Inpaint │ │
│ │ • Flux Inpaint │ │
│ │ • Real-ESRGAN Upscaler │ │
│ └──────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ DYNAMIC FORM (Schema-Driven UI) │ │
│ │ │ │
│ │ IF pipeline === "sdxl_inpaint": │ │
│ │ Shows legacy hardcoded 6-tab form (backward compatible) │ │
│ │ │ │
│ │ IF pipeline === "flux_inpaint" OR "realesrgan": │ │
│ │ ├── Fetch schema from backend │ │
│ │ ├── Dynamically render tabs and fields │ │
│ │ ├── Apply conditional visibility rules │ │
│ │ └── Real-time validation │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ POST /api/renders/create │
│ { pipeline: "flux_inpaint", config: {...} } │
│ │
└─────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────┐
│ BACKEND (FastAPI) │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ 📋 API ENDPOINTS │
│ ├── GET /api/processors → List available pipelines │
│ ├── GET /api/processors/{id}/schema → Get dynamic UI schema │
│ ├── POST /api/processors/{id}/validate → Validate config │
│ ├── POST /api/renders/create → Create render with pipeline │
│ ├── GET /api/renders/{id} → Get render status │
│ ├── WS /api/renders/{id}/ws → Real-time progress │
│ ├── GET /api/templates → List hextile templates │
│ └── GET /api/lora/ → List LoRA models │
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ PROCESSOR REGISTRY (Singleton) │ │
│ │ │ │
│ │ _processors: Dict[str, Type[BaseProcessor]] │ │
│ │ _instances: Dict[str, BaseProcessor] │ │
│ │ _active_diffusion_processor: Optional[str] │ │
│ │ │ │
│ │ Registered Processors: │ │
│ │ ├── sdxl_inpaint (TiledProcessor, 12GB VRAM) │ │
│ │ ├── sd35_inpaint (TiledProcessor, 12-16GB VRAM) │ │
│ │ ├── flux_inpaint (TiledProcessor, 16GB VRAM) │ │
│ │ └── realesrgan (DirectProcessor, 2GB VRAM) │ │
│ │ │ │
│ │ VRAM Management Logic: │ │
│ │ When switching diffusion models (SDXL ↔ Flux): │ │
│ │ 1. Shutdown old processor (free VRAM) │ │
│ │ 2. Run gc.collect() + torch.cuda.empty_cache() │ │
│ │ 3. Load new processor │ │
│ │ │ │
│ │ Direct processors (upscalers) don't trigger unloading │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ PROCESSING PIPELINE ROUTER │ │
│ │ │ │
│ │ def run(config): │ │
│ │ pipeline_id = config["pipeline"] │ │
│ │ processor = ProcessorRegistry.get(pipeline_id, config) │ │
│ │ │ │
│ │ if processor.type == ProcessorType.TILED: │ │
│ │ return _run_tiled_pipeline(processor) │ │
│ │ # Extract → Process → Stitch │ │
│ │ │ │
│ │ else: # DirectProcessor │ │
│ │ return _run_direct_pipeline(processor) │ │
│ │ # Load → Process → Save │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────┬──────────────┬──────────────┬──────────────────────┐ │
│ │ SDXLInpaint │ SD35Inpaint │ FluxInpaint │ RealESRGANProcessor │ │
│ │ Processor │ Processor │ Processor │ │ │
│ ├──────────────┼──────────────┼──────────────┼──────────────────────┤ │
│ │ Type: Tiled │ Type: Tiled │ Type: Tiled │ Type: Direct │ │
│ │ VRAM: ~12GB │ VRAM: ~12-16GB│ VRAM: ~16GB │ VRAM: ~2GB │ │
│ │ LoRA: Yes │ LoRA: Yes(SD3)│ LoRA: Yes │ LoRA: No │ │
│ │ Cardinal: Yes│ Cardinal: Yes│ Cardinal: Yes│ Cardinal: No │ │
│ │ Hextile: Yes │ Hextile: Yes │ Hextile: Yes │ Hextile: No │ │
│ │ Neg Prompt:✅│ Neg Prompt:❌│ Neg Prompt:❌│ Neg Prompt: N/A │ │
│ └──────────────┴──────────────┴──────────────┴──────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
Core Concepts
1. Pipeline System
A pipeline is a complete processing workflow with:
- Processor Implementation: Python class handling the AI model
- UI Schema: JSON defining form fields and tabs
- Processing Type: Tiled (with hextile) or Direct (full image)
- Resource Requirements: VRAM, model files, dependencies
Current Pipelines:
- sdxl_inpaint: Stable Diffusion XL with hexagonal tiling
- sd35_inpaint: Stable Diffusion 3.5 with Turbo and Quality modes
- flux_inpaint: Flux.2 with hexagonal tiling
- realesrgan: Real-ESRGAN upscaling (no tiling)
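Whichever pipeline is selected, a render request reduces to a pipeline id plus a pipeline-specific config. A minimal sketch of such a payload (every field other than pipeline is an illustrative placeholder, not the system's exact schema):

```python
import json

# Hypothetical render request: only the "pipeline" key is defined by the
# system; the nested config fields below are illustrative placeholders.
request = {
    "pipeline": "flux_inpaint",
    "config": {
        "template": "Hextile_32_3K",
        "prompt": "alpine meadow at sunrise",
        "steps": 28,
    },
}

payload = json.dumps(request)   # what the frontend would POST
decoded = json.loads(payload)

assert decoded["pipeline"] == "flux_inpaint"
```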
2. Processor Architecture
# Abstract Base Class
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Tuple

class BaseProcessor(ABC):
    processor_name: str            # Unique identifier
    processor_type: ProcessorType  # "tiled" or "direct"

    @abstractmethod
    def get_ui_schema(self) -> Dict[str, Any]:
        """Define UI tabs, fields, validation rules"""
        pass

    @abstractmethod
    def initialize(self, config: Dict[str, Any]):
        """Load AI model with configuration"""
        pass

    @abstractmethod
    def shutdown(self):
        """Free VRAM and cleanup resources"""
        pass

    def validate_config(self, config: Dict) -> Tuple[bool, List[str]]:
        """Validate configuration before processing"""
        pass
# Tiled Processor (for diffusion models with hextile)
class TiledProcessor(BaseProcessor):
    processor_type = ProcessorType.TILED

    @abstractmethod
    def process_tile(self, tile_image_path, mask_path, output_path,
                     tile_data, config) -> Dict:
        """Process a single hexagonal tile"""
        pass

    def supports_lora(self) -> bool:
        return False

    def supports_controlnet(self) -> bool:
        return False
# Direct Processor (for models that process full images)
class DirectProcessor(BaseProcessor):
    processor_type = ProcessorType.DIRECT

    @abstractmethod
    def process_image(self, input_path, output_path, config) -> Dict:
        """Process entire image without tiling"""
        pass

    @abstractmethod
    def get_output_dimensions(self, input_width, input_height,
                              config) -> Tuple[int, int]:
        """Calculate output dimensions"""
        pass
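The ProcessorType marker referenced by these base classes is not defined in this document; a minimal assumed definition would be:

```python
from enum import Enum

class ProcessorType(str, Enum):
    """Processing strategy marker checked by the pipeline router.

    Assumed definition: the document only shows the TILED/DIRECT names,
    not the enum itself.
    """
    TILED = "tiled"    # hexagonal tile extract -> process -> stitch
    DIRECT = "direct"  # full-image load -> process -> save

# As a str subclass it compares directly against raw schema strings:
assert ProcessorType.TILED == "tiled"
assert ProcessorType("direct") is ProcessorType.DIRECT
```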
3. Processor Registry
Singleton pattern managing all processors:
class ProcessorRegistry:
    _processors: Dict[str, Type[BaseProcessor]] = {}
    _instances: Dict[str, BaseProcessor] = {}
    _active_diffusion_processor: Optional[str] = None

    @classmethod
    def register(cls, processor_class: Type[BaseProcessor]):
        """Auto-register processor on import"""
        name = processor_class.processor_name
        cls._processors[name] = processor_class

    @classmethod
    def get(cls, name: str, config: Optional[Dict] = None) -> BaseProcessor:
        """Get processor with VRAM management"""
        processor_class = cls._processors[name]
        is_diffusion = issubclass(processor_class, TiledProcessor)

        # Exclusive loading for diffusion models
        if is_diffusion and cls._active_diffusion_processor:
            if cls._active_diffusion_processor != name:
                # Shutdown old model first
                old = cls._instances.get(cls._active_diffusion_processor)
                if old:
                    old.shutdown()  # Frees VRAM

        # Get or create instance
        if name not in cls._instances:
            cls._instances[name] = processor_class()
        instance = cls._instances[name]

        # Initialize if needed
        if config and not instance._initialized:
            instance.initialize(config)

        # Track active diffusion model
        if is_diffusion:
            cls._active_diffusion_processor = name
        return instance
Key Features:
- Auto-Registration: Processors register on import
- Singleton Instances: One instance per processor
- Exclusive Diffusion Loading: Only one diffusion model in VRAM
- Direct Processor Coexistence: Upscalers can run alongside diffusion
- Lazy Initialization: Models load on first use
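The exclusive-loading rule can be demonstrated in isolation. The following is a self-contained toy registry with stand-in processor classes (the Fake* names and MiniRegistry are illustrative, not the project's actual code):

```python
from typing import Dict, Optional, Type

class FakeProcessor:
    """Stand-in for BaseProcessor; 'loaded' stands in for resident VRAM."""
    name = "base"
    is_diffusion = False

    def __init__(self):
        self.loaded = True

    def shutdown(self):
        self.loaded = False  # stands in for freeing VRAM

class FakeSDXL(FakeProcessor):
    name = "sdxl_inpaint"
    is_diffusion = True

class FakeFlux(FakeProcessor):
    name = "flux_inpaint"
    is_diffusion = True

class MiniRegistry:
    _classes: Dict[str, Type[FakeProcessor]] = {}
    _instances: Dict[str, FakeProcessor] = {}
    _active: Optional[str] = None

    @classmethod
    def register(cls, proc_cls: Type[FakeProcessor]) -> None:
        cls._classes[proc_cls.name] = proc_cls

    @classmethod
    def get(cls, name: str) -> FakeProcessor:
        proc_cls = cls._classes[name]
        # Exclusive loading: only one diffusion model resident at a time
        if proc_cls.is_diffusion and cls._active and cls._active != name:
            old = cls._instances.get(cls._active)
            if old:
                old.shutdown()
        instance = cls._instances.get(name)
        if instance is None:
            instance = proc_cls()
            cls._instances[name] = instance
        if proc_cls.is_diffusion:
            cls._active = name
        return instance

MiniRegistry.register(FakeSDXL)
MiniRegistry.register(FakeFlux)

sdxl = MiniRegistry.get("sdxl_inpaint")
flux = MiniRegistry.get("flux_inpaint")  # triggers SDXL shutdown
assert sdxl.loaded is False and flux.loaded is True
```

Requesting the same pipeline twice returns the cached singleton without any shutdown, which is what keeps repeated renders on one model cheap.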
4. Dynamic UI Schema
Backend processors define their own UI structure:
def get_ui_schema(self) -> Dict[str, Any]:
    return {
        "mode_id": "flux_inpaint",
        "display_name": "Flux Inpainting",
        "type": "tiled",
        "description": "AI generation using Flux.2",
        "icon": "Sparkle",
        "capabilities": {
            "supports_lora": True,
            "supports_controlnet": False,
            "requires_template": True,
            "min_resolution": (512, 256),
            "max_resolution": (16384, 8192)
        },
        "tabs": [
            {
                "id": "input",
                "label": "Input",
                "icon": "FileImage",
                "fields": [
                    {
                        "id": "input_file",
                        "type": "file",
                        "label": "Input Image",
                        "required": True,
                        "accept": "image/*",
                        "description": "360° equirectangular image"
                    },
                    # ... more fields
                ]
            },
            # ... more tabs
        ]
    }
Field Types Supported:
- text, textarea, number, slider
- checkbox, select, file
- template_selector (special: template picker)
- lora_selector (special: multi-LoRA manager)
- cardinal_grid (special: 360° prompt grid)
Conditional Visibility:
{
    "id": "negative_prompt",
    "type": "textarea",
    "label": "Negative Prompt",
    "visible_if": {
        "use_negative_prompt": True  # Only show if checkbox enabled
    }
}
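An evaluator for these visible_if rules can be a single dict comparison. A sketch of the assumed semantics (every key in visible_if must equal the current form value; the all-must-match rule is an assumption, not stated in the schema spec):

```python
from typing import Any, Dict

def is_field_visible(field: Dict[str, Any], values: Dict[str, Any]) -> bool:
    """Return True unless a visible_if rule hides the field.

    Assumed semantics: fields without a rule are always visible; with a
    rule, every listed key must equal the current form value.
    """
    rule = field.get("visible_if")
    if not rule:
        return True
    return all(values.get(key) == expected for key, expected in rule.items())

field = {"id": "negative_prompt", "visible_if": {"use_negative_prompt": True}}
assert is_field_visible(field, {"use_negative_prompt": True}) is True
assert is_field_visible(field, {"use_negative_prompt": False}) is False
assert is_field_visible({"id": "prompt"}, {}) is True  # no rule -> visible
```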
Component Deep Dive
Frontend Architecture
frontend/src/
├── lib/
│ ├── types/
│ │ └── processor.ts # Processor TypeScript interfaces
│ ├── api/
│ │ ├── processors.ts # Processor API client
│ │ └── renders.ts # Render job API client
│ ├── stores/
│ │ ├── pipeline.ts # Pipeline state management
│ │ └── render.ts # Render config state
│ ├── components/
│ │ ├── PipelineDropdown.svelte # Pipeline selector
│ │ ├── DynamicForm.svelte # Schema-driven form
│ │ └── fields/ # Field renderers
│ │ ├── TextField.svelte
│ │ ├── SliderField.svelte
│ │ ├── TemplateSelectorField.svelte
│ │ ├── LoRASelectorField.svelte
│ │ └── CardinalGridField.svelte
│ └── services/
│ └── render.ts # Render submission logic
└── routes/
├── +layout.svelte # Header with pipeline selector
├── +page.svelte # Main configuration page
└── renders/[id]/+page.svelte # Render details
Key Frontend Features:
- Reactive Pipeline Selection
// Stores
export const currentPipeline = writable<string>('sdxl_inpaint');
export const availableProcessors = writable<Processor[]>([]);
export const currentSchema = writable<ProcessorSchema | null>(null);
export const formValues = writable<Record<string, any>>({});

// When pipeline changes
async function selectPipeline(pipelineId: string) {
  currentPipeline.set(pipelineId);
  renderStore.setPipeline(pipelineId); // Sync to render store
  const schema = await fetchProcessorSchema(pipelineId);
  currentSchema.set(schema);
  resetFormToDefaults(schema);
}
- Config Save/Load Integration
// Save: Includes pipeline field
exportConfig(params) {
  return {
    pipeline: params.pipeline || 'sdxl_inpaint',
    hextile: { ... },
    diffusion: { ... },
    output: { ... }
  };
}

// Load: Restores pipeline selection (async because it fetches the schema)
async loadConfig(config, fileName) {
  const pipeline = config.pipeline || 'sdxl_inpaint';
  currentPipeline.set(pipeline);
  const schema = await fetchProcessorSchema(pipeline);
  currentSchema.set(schema);
  // Populate form with loaded values
}
- Dynamic Field Rendering
{#each $currentSchema.tabs as tab, index}
  {#if activeTab === index}
    {#each tab.fields as field}
      {#if isFieldVisible(field)}
        {#if field.type === 'text'}
          <TextField {field} bind:value={$formValues[field.id]} />
        {:else if field.type === 'template_selector'}
          <TemplateSelectorField {field} bind:value={$formValues[field.id]} />
        {:else if field.type === 'lora_selector'}
          <LoRASelectorField {field} bind:value={$formValues[field.id]} />
        <!-- ... other field types -->
        {/if}
      {/if}
    {/each}
  {/if}
{/each}
Backend Architecture
backend/
├── processors/
│ ├── __init__.py # Auto-registration
│ ├── base.py # BaseProcessor, TiledProcessor, DirectProcessor
│ ├── registry.py # ProcessorRegistry singleton
│ ├── tiled/
│ │ ├── __init__.py
│ │ ├── sdxl_inpaint.py # SDXL implementation
│ │ └── flux_inpaint.py # Flux implementation
│ └── direct/
│ ├── __init__.py
│ └── realesrgan.py # Real-ESRGAN implementation
├── api/
│ ├── main.py # FastAPI app
│ └── routes/
│ ├── processors.py # Processor endpoints
│ ├── renders.py # Render job endpoints
│ ├── templates.py # Template endpoints
│ └── lora.py # LoRA model endpoints
├── services/
│ └── pipeline.py # Processing pipeline router
└── core/
├── projection.py # Hextile projection
└── prompt_utils.py # Cardinal prompt interpolation
Key Backend Features:
- Auto-Registration System
# backend/processors/__init__.py
from backend.processors.registry import ProcessorRegistry
from backend.processors.tiled import SDXLInpaintProcessor, FluxInpaintProcessor
from backend.processors.direct import RealESRGANProcessor
# Auto-register on module import
ProcessorRegistry.register(SDXLInpaintProcessor)
ProcessorRegistry.register(FluxInpaintProcessor)
ProcessorRegistry.register(RealESRGANProcessor)
- Processing Pipeline Router
# backend/services/pipeline.py
class ProcessingPipeline:
    def run(self, input_path, config, output_dir, progress_callback=None):
        # Get pipeline from config
        pipeline_id = config.get("pipeline", "sdxl_inpaint")
        processor_config = config.get("processor_config", {})

        # Get processor (VRAM management happens here)
        processor = ProcessorRegistry.get(pipeline_id, processor_config)

        # Route based on type
        if processor.processor_type == ProcessorType.TILED:
            return self._run_tiled_pipeline(input_path, config,
                                            output_dir, processor)
        else:
            return self._run_direct_pipeline(input_path, config,
                                             output_dir, processor)

    def _run_tiled_pipeline(self, input_path, config, output_dir, processor):
        # Phase 1: Extract hexagonal tiles
        tiles_data = extract_tiles(input_path, template_dir, tiles_dir)

        # Phase 2: Process each tile
        for tile in tiles_data:
            result = processor.process_tile(
                tile_image_path=tile_path,
                mask_path=mask_path,
                output_path=output_path,
                tile_data=tile,
                config=processor_config
            )
            if progress_callback:
                progress_callback(tiles_processed / total_tiles)

        # Phase 3: Stitch tiles back
        final_output = stitch_tiles(tiles_data, template_dir, output_path)
        return final_output

    def _run_direct_pipeline(self, input_path, config, output_dir, processor):
        # Simple: Load → Process → Save
        output_path = output_dir / "output.png"
        result = processor.process_image(
            input_path=input_path,
            output_path=output_path,
            config=config.get("processor_config", {})
        )
        return output_path
- API Endpoints
# backend/api/routes/processors.py
@router.get("/api/processors")
async def list_processors():
    """List all available processors"""
    processors = ProcessorRegistry.list_pipelines()
    return {"processors": processors}

@router.get("/api/processors/{pipeline_id}/schema")
async def get_processor_schema(pipeline_id: str):
    """Get dynamic UI schema for a processor"""
    schema = ProcessorRegistry.get_ui_schema(pipeline_id)
    return schema

@router.post("/api/processors/{pipeline_id}/validate")
async def validate_config(pipeline_id: str, body: dict):
    """Validate config before rendering"""
    config = body.get("config", {})
    is_valid, errors = ProcessorRegistry.validate_config(pipeline_id, config)
    return {"valid": is_valid, "errors": errors}

# backend/api/routes/renders.py
@router.post("/api/renders/create")
async def create_render(
    file: Optional[UploadFile] = File(None),
    pipeline: str = Form(default="sdxl_inpaint"),  # Pipeline parameter
    template: str = Form(...),
    # ... other params
):
    """Create render with pipeline selection"""
    config = {
        "pipeline": pipeline,  # Include pipeline in config
        "hextile": { ... },
        "diffusion": { ... },
        "processor_config": { ... }
    }

    # Create render job
    render_info = {
        "render_id": render_id,
        "pipeline": pipeline,  # Store pipeline in metadata
        "status": "pending",
        ...
    }
    renders[render_id] = render_info
    save_render_metadata(render_id, render_info)
    return render_info
Data Flow
Complete Render Flow (V2)
1. User selects pipeline in dropdown
↓
2. Frontend fetches schema: GET /api/processors/{pipeline}/schema
↓
3. Frontend renders dynamic form based on schema
↓
4. User configures settings and uploads image
↓
5. User clicks RENDER button
↓
6. Frontend: POST /api/renders/create
Body: { pipeline: "flux_inpaint", file: <image>, config: {...} }
↓
7. Backend validates input and creates render job
↓
8. Backend saves config with pipeline field
↓
9. Frontend opens WebSocket: WS /api/renders/{id}/ws
↓
10. Backend: POST /api/renders/{id}/start (auto-triggered)
↓
11. ProcessingPipeline.run() called
↓
12. ProcessorRegistry.get("flux_inpaint", config)
├─ Check if SDXL is active
├─ If yes: shutdown SDXL (free VRAM)
├─ Load Flux processor
└─ Mark Flux as active
↓
13. Route to _run_tiled_pipeline() (Flux is tiled)
↓
14. Phase 1: Extract hexagonal tiles
- Load template metadata
- Apply projection transforms
- Save tiles to temp directory
↓
15. Phase 2: Process each tile
For each tile:
- Build prompt with cardinal interpolation
- processor.process_tile() → Flux generates tile
- Apply inpainting mask for seamless blending
- Send progress via WebSocket
- Send tile preview via WebSocket
↓
16. Phase 3: Stitch tiles
- Reverse projection transforms
- Blend overlapping regions
- Composite into final equirectangular
↓
17. Save final output
↓
18. Send completion via WebSocket
↓
19. Frontend displays result and download button
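Step 15's cardinal prompt interpolation lives in prompt_utils.py, which isn't shown in this document. One plausible sketch blends the two nearest cardinal prompts by tile yaw; the blend-weight syntax and the 90° linear weighting are illustrative guesses, not the project's actual algorithm:

```python
def cardinal_prompt(yaw_deg: float, prompts: dict) -> str:
    """Blend the two nearest cardinal prompts for a tile's yaw.

    `prompts` maps N/E/S/W to prompt strings. The weighted "(p:w)" output
    format is an illustrative convention, not confirmed from the codebase.
    """
    cardinals = [(0, "N"), (90, "E"), (180, "S"), (270, "W"), (360, "N")]
    yaw = yaw_deg % 360
    for (a, da), (b, db) in zip(cardinals, cardinals[1:]):
        if a <= yaw <= b:
            t = (yaw - a) / 90.0  # 0 at cardinal a, 1 at cardinal b
            if t < 1e-6:
                return prompts[da]
            if t > 1 - 1e-6:
                return prompts[db]
            return f"({prompts[da]}:{1 - t:.2f}), ({prompts[db]}:{t:.2f})"
    return prompts["N"]

p = {"N": "mountains", "E": "forest", "S": "lake", "W": "desert"}
assert cardinal_prompt(0, p) == "mountains"
assert cardinal_prompt(45, p) == "(mountains:0.50), (forest:0.50)"
```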
Config Save/Load Flow (V2)
Save:
User clicks Save Config
↓
renderStore.exportConfig(params)
↓
Returns:
{
"pipeline": "flux_inpaint", ← Now included!
"hextile": { ... },
"diffusion": { ... },
"output": { ... }
}
↓
Write to .hextile.json file
Load:
User clicks Load Config
↓
Read .hextile.json file
↓
renderStore.loadConfig(config, fileName)
├─ Extract: pipeline = config.pipeline || 'sdxl_inpaint'
├─ Load all other settings
└─ Set pipeline in params
↓
currentPipeline.set(pipeline)
↓
Fetch schema: GET /api/processors/{pipeline}/schema
↓
currentSchema.set(schema)
↓
UI updates to show correct pipeline and form
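The load path's defaulting rule, which keeps V1 configs loadable, can be sketched as follows (save_config/load_config are illustrative helpers, not the store's actual API; the nested sections are placeholders):

```python
import json
import tempfile
from pathlib import Path

def save_config(path: Path, config: dict) -> None:
    """Write a .hextile.json config file."""
    path.write_text(json.dumps(config, indent=2))

def load_config(path: Path) -> dict:
    """Read a config; old V1 files carry no pipeline field, so fall
    back to the legacy SDXL pipeline."""
    config = json.loads(path.read_text())
    config.setdefault("pipeline", "sdxl_inpaint")
    return config

with tempfile.TemporaryDirectory() as tmp:
    v1_path = Path(tmp) / "old.hextile.json"
    save_config(v1_path, {"hextile": {}, "diffusion": {}})  # V1: no pipeline
    v1 = load_config(v1_path)

    v2_path = Path(tmp) / "new.hextile.json"
    save_config(v2_path, {"pipeline": "flux_inpaint", "hextile": {}})
    v2 = load_config(v2_path)

assert v1["pipeline"] == "sdxl_inpaint"
assert v2["pipeline"] == "flux_inpaint"
```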
VRAM Management Strategy
Problem
- SDXL: ~12GB VRAM
- Flux: ~16GB VRAM
- Both together: ~28GB, which would exceed a 24GB GPU → OOM crash
Solution: Exclusive Diffusion Loading
# In ProcessorRegistry.get()
_active_diffusion_processor: Optional[str] = None

def get(cls, name: str, config: Optional[Dict] = None) -> BaseProcessor:
    processor_class = cls._processors[name]
    is_diffusion = issubclass(processor_class, TiledProcessor)

    if is_diffusion and cls._active_diffusion_processor:
        if cls._active_diffusion_processor != name:
            # Switching diffusion models
            old_name = cls._active_diffusion_processor
            old_instance = cls._instances.get(old_name)
            if old_instance:
                logger.info(f"Unloading {old_name} to free VRAM")
                old_instance.shutdown()

    # Load (and cache) new processor
    instance = cls._instances.get(name)
    if instance is None:
        instance = processor_class()
        cls._instances[name] = instance
    if config and not instance._initialized:
        instance.initialize(config)

    if is_diffusion:
        cls._active_diffusion_processor = name
    return instance
VRAM Cleanup Helper
# In BaseProcessor
def _cleanup_vram(self) -> None:
    """Aggressively free GPU memory"""
    import gc
    gc.collect()

    import torch
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
        torch.cuda.synchronize()
Direct Processor Coexistence
Real-ESRGAN is a DirectProcessor and doesn't trigger unloading:
- Uses only ~2GB VRAM
- Can run alongside diffusion models
- Doesn't participate in exclusive loading
Adding a New Pipeline
Step 1: Create Processor Class
# backend/processors/tiled/my_model.py
from PIL import Image

from backend.processors.base import TiledProcessor

class MyModelProcessor(TiledProcessor):
    processor_name = "my_model"
    processor_display_name = "My Model"

    def get_ui_schema(self) -> Dict[str, Any]:
        return {
            "mode_id": "my_model",
            "display_name": "My Model",
            "type": "tiled",
            "capabilities": {
                "supports_lora": True,
                "requires_template": True
            },
            "tabs": [
                # Define your UI tabs and fields
            ]
        }

    def initialize(self, config: Dict[str, Any]):
        # Load your model
        self.model = load_model(config.get("model_path"))
        self._initialized = True

    def shutdown(self):
        # Free resources
        del self.model
        self._cleanup_vram()
        self._initialized = False

    def process_tile(self, tile_image_path, mask_path, output_path,
                     tile_data, config):
        # Process one tile with your model
        image = Image.open(tile_image_path)
        result = self.model.generate(image, **config)
        result.save(output_path)
        return {"status": "success"}
Step 2: Register Processor
# backend/processors/__init__.py
from backend.processors.tiled.my_model import MyModelProcessor
ProcessorRegistry.register(MyModelProcessor)
Step 3: Done!
The system now automatically:
- ✅ Shows "My Model" in pipeline dropdown
- ✅ Fetches and renders your UI schema
- ✅ Routes renders to your processor
- ✅ Manages VRAM (unloads other diffusion models)
- ✅ Saves/loads pipeline in configs
Performance & Optimization
GPU Acceleration
- CuPy: CUDA operations for tile extraction/stitching
- PyTorch: Model inference on GPU
- Custom Kernels: Optimized projection transforms
- Memory Pooling: Reuse GPU arrays
Processing Modes
- Sequential: Process tiles one-by-one (lower VRAM, slower)
- Batch: Process multiple tiles (higher VRAM, faster)
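The sequential/batch distinction reduces to how many tiles are handed to the model per step; a minimal batching helper (illustrative, not the project's actual scheduler):

```python
from typing import Iterator, List, TypeVar

T = TypeVar("T")

def batched(items: List[T], batch_size: int) -> Iterator[List[T]]:
    """Yield tiles in fixed-size batches; batch_size=1 is sequential mode.

    Larger batches raise peak VRAM roughly linearly but amortize
    per-call model overhead.
    """
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

tiles = list(range(7))
assert list(batched(tiles, 1)) == [[0], [1], [2], [3], [4], [5], [6]]
assert list(batched(tiles, 3)) == [[0, 1, 2], [3, 4, 5], [6]]
```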
Caching
- Template Metadata: Cached on first load
- Processor Instances: Singleton per processor
- Model Weights: Stay in VRAM until switched
Async Processing
- Non-blocking: Render jobs run in background threads
- WebSocket: Real-time progress without polling
- Queue: Multiple jobs queued and processed sequentially
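The queue-plus-worker pattern above can be sketched with the standard library; the names here are illustrative stand-ins for the real job runner, not code from the project:

```python
import queue
import threading

# Jobs are enqueued by request handlers and drained by one background
# worker, so renders run sequentially off the request thread.
jobs: "queue.Queue" = queue.Queue()
results: list = []

def worker() -> None:
    while True:
        job = jobs.get()
        if job is None:  # sentinel -> shut down
            break
        results.append(f"rendered:{job}")  # stands in for pipeline.run()
        jobs.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()
for render_id in ("r1", "r2", "r3"):
    jobs.put(render_id)  # enqueue returns immediately (non-blocking)
jobs.put(None)
t.join()
assert results == ["rendered:r1", "rendered:r2", "rendered:r3"]
```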
Security Considerations
Input Validation
- File Type: Only images allowed
- File Size: Max 100MB default
- Dimensions: Min/max resolution checks
- Path Traversal: Sanitize all file paths
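The path-traversal check in the last bullet can be implemented with pathlib; safe_resolve is a hypothetical helper for illustration, not a function from the codebase:

```python
from pathlib import Path

def safe_resolve(base_dir: Path, user_path: str) -> Path:
    """Resolve a user-supplied relative path, rejecting anything that
    escapes base_dir (e.g. via ../ segments)."""
    candidate = (base_dir / user_path).resolve()
    if not candidate.is_relative_to(base_dir.resolve()):
        raise ValueError(f"path escapes base directory: {user_path}")
    return candidate

base = Path("/srv/hextile/output")
ok = safe_resolve(base, "render_42/output.png")
assert ok.name == "output.png"

try:
    safe_resolve(base, "../../etc/passwd")
    escaped = False
except ValueError:
    escaped = True
assert escaped
```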
Resource Limits
- VRAM Monitoring: Prevent OOM crashes
- Timeout: Max processing time per render
- Disk Space: Check before saving outputs
API Security
- Rate Limiting: Prevent abuse (optional)
- CORS: Configure allowed origins
- Input Sanitization: Validate all parameters
Technology Stack
Backend
- Python 3.10+: Core language
- FastAPI: Web framework
- Pydantic: Data validation
- Uvicorn: ASGI server
- PyTorch: Deep learning
- Diffusers: Stable Diffusion pipelines
- CuPy: GPU acceleration
- Pillow: Image processing
Frontend
- SvelteKit: Reactive framework
- TypeScript: Type safety
- Tailwind CSS: Utility-first styling
- shadcn-svelte: UI components
- Vite: Build tool
Desktop
- Tauri 2.0: Native wrapper
- Rust: System integration
- WebView2: Modern rendering
AI Models
- Stable Diffusion XL: High-quality diffusion
- Flux.2: Advanced diffusion model
- Real-ESRGAN: AI upscaling
- LoRA: Fine-tuning adapters
File Structure (V2)
360-HEXTILE/
├── backend/
│ ├── processors/ # NEW: Modular processor system
│ │ ├── __init__.py # Auto-registration
│ │ ├── base.py # Base classes
│ │ ├── registry.py # Singleton registry
│ │ ├── tiled/ # Tiled processors
│ │ │ ├── sdxl_inpaint.py
│ │ │ └── flux_inpaint.py
│ │ └── direct/ # Direct processors
│ │ └── realesrgan.py
│ ├── api/
│ │ ├── main.py
│ │ └── routes/
│ │ ├── processors.py # NEW: Processor endpoints
│ │ ├── renders.py # UPDATED: Pipeline param
│ │ ├── templates.py
│ │ ├── lora.py # NEW: LoRA endpoints
│ │ └── websocket.py
│ ├── services/
│ │ └── pipeline.py # UPDATED: Pipeline router
│ └── core/
│ ├── projection.py
│ └── prompt_utils.py # NEW: Cardinal interpolation
├── frontend/
│ ├── src/
│ │ ├── lib/
│ │ │ ├── types/
│ │ │ │ └── processor.ts # NEW: Processor types
│ │ │ ├── api/
│ │ │ │ ├── processors.ts # NEW: Processor API
│ │ │ │ └── renders.ts
│ │ │ ├── stores/
│ │ │ │ ├── pipeline.ts # NEW: Pipeline state
│ │ │ │ └── render.ts # UPDATED: Pipeline field
│ │ │ ├── components/
│ │ │ │ ├── PipelineDropdown.svelte # NEW
│ │ │ │ ├── DynamicForm.svelte # NEW
│ │ │ │ └── fields/ # NEW
│ │ │ │ ├── TemplateSelectorField.svelte
│ │ │ │ ├── LoRASelectorField.svelte
│ │ │ │ └── CardinalGridField.svelte
│ │ │ └── services/
│ │ │ └── render.ts # UPDATED: Pipeline param
│ │ └── routes/
│ │ ├── +layout.svelte # UPDATED: Pipeline selector
│ │ ├── +page.svelte # UPDATED: Dynamic form
│ │ └── renders/[id]/+page.svelte # UPDATED: Pipeline badge
│ └── src-tauri/
├── resources/
│ ├── templates/ # Hextile templates
│ │ ├── Hextile_20_2K/
│ │ ├── Hextile_32_3K/
│ │ └── Hextile_44_4K/
│ └── lora/ # NEW: LoRA models directory
├── output/ # Render outputs
└── dev/ # NEW: Development docs
└── 360-HEXTILE-PIPELINE-REFACTOR-V2.md
Migration from V1 to V2
Breaking Changes
- ❌ No separate diffusion server (integrated into FastAPI)
- ❌ No mode parameter (renamed to pipeline)
- ❌ No HTTP calls between backend components
Backward Compatible
- ✅ SDXL workflow still works (legacy form)
- ✅ Old configs load with default pipeline
- ✅ API endpoints mostly unchanged
- ✅ Template system unchanged
- ✅ Output format unchanged
New Features
- ✨ Multi-pipeline support
- ✨ Dynamic UI generation
- ✨ VRAM management
- ✨ LoRA integration
- ✨ Config save/load includes pipeline
- ✨ Processor plugin system
Next Steps
For Users
- Getting Started - Setup and first render
- Pipeline Guide - Choose the right pipeline
- Templates - Understanding hextile templates
For Developers
- API Reference - Complete endpoint docs
- Adding Pipelines - Create custom processors
- Contributing - Join development
Want to dive deeper? Check out:
- API Reference - Complete endpoint documentation
- Pipeline Development Guide - Create custom pipelines
- Troubleshooting - Common problems and solutions
Last Updated: 2026-01-03 | V2-PIPELINE Architecture