#12 feat: implement inpainting in the UI

Closed
Opened 3 months ago by fszontagh · 1 comment

This feature has been fully implemented. Added complete inpainting UI with canvas and backend integration.

Key components delivered:

  • Inpainting UI page (webui/app/inpainting/page.tsx)
  • Canvas component for mask editing (webui/components/inpainting-canvas.tsx)
  • Backend endpoint integration (src/server.cpp)
  • API library updates (webui/lib/api.ts)
  • Model management for inpainting (src/model_manager.cpp)

The implementation includes a full-featured inpainting interface with brush tools, mask editing, and seamless integration with the existing generation queue system.

Szontágh Ferenc commented 3 months ago
Owner

Feature Analysis 📋

Scope: Implement inpainting canvas UI with Stable Diffusion masking

Stable Diffusion Inpainting Rules:

  • White pixels (255): Areas to regenerate/modify
  • Black pixels (0): Areas to preserve unchanged
  • Gray pixels (1-254): Blending zones (optional)
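
The three pixel rules above can be sketched as a small classifier. This is an illustrative helper, not project code; the function and type names are our own.

```typescript
// Map a grayscale mask value (0-255) to its inpainting role,
// following the Stable Diffusion mask convention listed above.
type MaskRole = "preserve" | "regenerate" | "blend";

function classifyMaskPixel(value: number): MaskRole {
  if (value === 0) return "preserve";     // black: keep unchanged
  if (value === 255) return "regenerate"; // white: repaint this area
  return "blend";                         // gray: partial blending zone
}
```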

Requirements:

1. Canvas Component

Technology: HTML5 Canvas API or React-based canvas library (e.g., react-konva, fabric.js)

Features:

  • Brush tool with adjustable size
  • Eraser tool
  • Fill bucket tool
  • Undo/Redo stack
  • Clear all
  • Zoom/Pan controls
  • Brush opacity control
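
The undo/redo stack could be a simple pair of arrays. A minimal sketch, assuming mask snapshots are pushed after each stroke (in the real component the snapshot type would be `ImageData`; here it is generic to stay self-contained):

```typescript
// Minimal undo/redo history: push a snapshot after every completed
// stroke; undo moves snapshots to the redo stack, redo moves them back.
class HistoryStack<T> {
  private past: T[] = [];
  private future: T[] = [];

  push(state: T): void {
    this.past.push(state);
    this.future = []; // a new action invalidates any pending redos
  }

  undo(): T | undefined {
    const state = this.past.pop();
    if (state !== undefined) this.future.push(state);
    return state;
  }

  redo(): T | undefined {
    const state = this.future.pop();
    if (state !== undefined) this.past.push(state);
    return state;
  }
}
```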

2. Image Loading & Sizing

Source Image:

  • File upload or paste from clipboard
  • Drag & drop support
  • Image preview before canvas initialization

Size Validation:

  • Must match SD model architecture constraints:
    • SD1.5: 512x512, 768x512, 512x768
    • SDXL: 1024x1024, 1024x768, 768x1024
    • Custom sizes (multiples of 64)
  • Auto-resize with aspect ratio preservation
  • Warning if size doesn't match model requirements
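
One possible shape for the auto-resize step: scale the upload so its longer side fits the model's maximum, then round each side to the nearest multiple of 64. The function name and the 1024-px default are illustrative assumptions, not existing project API.

```typescript
// Snap arbitrary upload dimensions onto the model's 64-px grid while
// preserving the aspect ratio as closely as rounding allows.
function snapToModelGrid(
  width: number,
  height: number,
  maxSide = 1024, // assumed cap; SD1.5 would use 768 or lower
): { width: number; height: number } {
  // Scale down (never up) so the longer side fits the model's maximum.
  const scale = Math.min(1, maxSide / Math.max(width, height));
  const round64 = (v: number) =>
    Math.max(64, Math.round((v * scale) / 64) * 64);
  return { width: round64(width), height: round64(height) };
}
```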

Canvas Setup:

  • Source image as background layer
  • Transparent mask layer on top
  • White brush paints on mask layer

3. UI Layout

```
┌─────────────────────────────────────────────────┐
│ File: [Upload Image ▼]  Model: [SD1.5 ▼]       │
├─────────────┬───────────────────────────────────┤
│ Tools:      │ Canvas Preview                    │
│ ○ Brush     │ ┌───────────────────────────────┐ │
│ ○ Eraser    │ │                               │ │
│ ○ Fill      │ │   [Source Image + Mask]       │ │
│             │ │                               │ │
│ Size: 20px  │ │                               │ │
│ [─────|────]│ │                               │ │
│             │ └───────────────────────────────┘ │
│ [Undo]      │                                   │
│ [Redo]      │ Prompt:                           │
│ [Clear]     │ [__________________________]      │
│ [Invert]    │                                   │
│             │ Negative Prompt:                  │
│             │ [__________________________]      │
│             │                                   │
│             │ Steps: [20] CFG: [7.5]            │
│             │ [Generate Inpaint]                │
└─────────────┴───────────────────────────────────┘
```

4. Mask Generation

Output Requirements:

  • Generate two images:
    1. Source Image: Original uploaded image (resized if needed)
    2. Mask Image: Binary mask (white = inpaint, black = preserve)
  • Both must be same dimensions
  • Convert to base64 for API transmission
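
A sketch of the mask-flattening step, assuming the mask layer is read out as canvas-style RGBA data (4 bytes per pixel): any painted (non-transparent) pixel becomes white, everything else black. The helper name is hypothetical.

```typescript
// Flatten the transparent mask layer into the binary mask the backend
// expects: alpha > 0 means the user painted there (inpaint = white),
// alpha === 0 means untouched (preserve = black).
function maskLayerToBinary(rgba: Uint8ClampedArray): Uint8ClampedArray {
  const out = new Uint8ClampedArray(rgba.length);
  for (let i = 0; i < rgba.length; i += 4) {
    const painted = rgba[i + 3] > 0;     // alpha channel decides
    const v = painted ? 255 : 0;
    out[i] = out[i + 1] = out[i + 2] = v; // grayscale RGB
    out[i + 3] = 255;                     // output is fully opaque
  }
  return out;
}
```

Dimensions are preserved automatically, which keeps the "both images must be the same size" requirement trivially satisfied.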

API Payload:

```json
{
  "image": "data:image/png;base64,...",
  "mask": "data:image/png;base64,...",
  "prompt": "replace with flowers",
  "negative_prompt": "blurry, distorted",
  "steps": 20,
  "cfg_scale": 7.5,
  "width": 512,
  "height": 512
}
```
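
Assembling that payload on the client could look like the sketch below. `buildInpaintRequest` is a hypothetical helper; the endpoint path comes from section 5, and the `steps`/`cfg_scale` defaults mirror the example values above.

```typescript
// Shape of the inpainting request body, matching the payload above.
interface InpaintParams {
  image: string;            // base64 data URL of the source image
  mask: string;             // base64 data URL of the binary mask
  prompt: string;
  negative_prompt?: string;
  steps?: number;
  cfg_scale?: number;
  width?: number;
  height?: number;
}

// Build the fetch() arguments for the inpaint endpoint; defaults are
// applied first so callers can override them.
function buildInpaintRequest(params: InpaintParams) {
  return {
    url: "/api/v1/inpaint",
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ steps: 20, cfg_scale: 7.5, ...params }),
    },
  };
}
```

The caller would then pass `url` and `init` straight to `fetch`, and feed the result into the existing generation queue.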

5. Backend Support

API Endpoint: POST /api/v1/inpaint

Parameters:

  • image (required): Base64 source image
  • mask (required): Base64 mask image
  • prompt (required): Text prompt
  • All standard generation parameters

stable-diffusion.cpp Support: Check whether inpainting is supported in the current version; it may require:

  • Specific model types (SD1.5 inpainting models)
  • CLI flags for mask input

6. Additional Features

  • Mask Opacity Slider: Preview mask transparency over source
  • Invert Mask: Swap black/white
  • Brush Presets: Small (10px), Medium (30px), Large (60px)
  • Feather/Blur Mask Edges: Smoother blending
  • Load Previous Mask: Save/load mask patterns
  • Export Mask: Download mask as PNG
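
The "Invert Mask" feature is a per-pixel complement. A minimal sketch over canvas-style RGBA data (helper name is ours):

```typescript
// Swap black and white on a grayscale mask: invert the RGB channels,
// leave alpha untouched so the layer's visibility is unchanged.
function invertMask(rgba: Uint8ClampedArray): Uint8ClampedArray {
  const out = new Uint8ClampedArray(rgba.length);
  for (let i = 0; i < rgba.length; i += 4) {
    out[i] = 255 - rgba[i];
    out[i + 1] = 255 - rgba[i + 1];
    out[i + 2] = 255 - rgba[i + 2];
    out[i + 3] = rgba[i + 3]; // alpha untouched
  }
  return out;
}
```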

Estimated Effort: High (12-16 hours)

  • Canvas component: 6-8 hours
  • Image handling: 2-3 hours
  • API integration: 2-3 hours
  • UI polish: 2-3 hours

Priority: Medium - Advanced feature for power users

Dependencies:

  • Verify stable-diffusion.cpp supports inpainting
  • May need inpainting-specific models
  • Canvas library selection

Technical Challenges:

  • Ensuring mask aligns perfectly with source image
  • Handling different image sizes/aspect ratios
  • Touch device support for canvas drawing
  • Performance with large images

Recommendation:

  1. Verify backend inpainting support first
  2. Prototype with simple canvas implementation
  3. Test with inpainting-specific models
  4. Add polish and advanced features incrementally