Technology

Stable Diffusion is a deep learning, text-to-image model released in 2022.

Key Note: Stable Diffusion is primarily used to generate detailed images conditioned on text descriptions, though it can also be applied to other tasks such as inpainting, outpainting, and image-to-image translation guided by a text prompt.
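As a concrete illustration of the text-to-image task, here is a minimal sketch using Hugging Face's `diffusers` library and a publicly released v1.5 checkpoint. Neither the library nor the checkpoint id is named on this page, so treat both as assumptions rather than an official recipe:

```python
# Minimal text-to-image sketch. Assumes the `diffusers` library and the
# "runwayml/stable-diffusion-v1-5" checkpoint id (not specified on this page).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generate one image conditioned on a text description.
image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```

The same library exposes analogous pipelines for the other tasks mentioned above: the image-to-image variant takes an input image alongside the prompt, and the inpainting variant additionally takes a mask marking the region to regenerate.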

Stable Diffusion is a latent diffusion model, a kind of deep generative neural network developed by the CompVis group at LMU Munich.[4] The model was released through a collaboration between Stability AI, CompVis LMU, and Runway, with support from EleutherAI and LAION.[5][1][6] In October 2022, Stability AI raised US$101 million in a round led by Lightspeed Venture Partners and Coatue Management.[7]
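To make the term concrete: a latent diffusion model runs the denoising diffusion process in a compressed latent space rather than directly on pixels, which is what keeps compute manageable. A sketch of the training objective, following the CompVis latent diffusion formulation (the symbols below are not defined on this page):

$$
L_{LDM} = \mathbb{E}_{\mathcal{E}(x),\; y,\; \epsilon \sim \mathcal{N}(0,1),\; t}\Big[\, \big\| \epsilon - \epsilon_\theta\big(z_t, t, \tau_\theta(y)\big) \big\|_2^2 \,\Big]
$$

Here $\mathcal{E}$ is the image encoder that maps an image $x$ to a latent $z$, $z_t$ is that latent after $t$ noising steps, $\tau_\theta(y)$ encodes the text prompt $y$, and $\epsilon_\theta$ is the denoising network trained to predict the added noise $\epsilon$.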

Stable Diffusion's code and model weights have been released publicly,[8] and it can run on most consumer hardware equipped with a modest GPU with at least 8 GB of VRAM. This marked a departure from previous proprietary text-to-image models such as DALL-E and Midjourney, which were accessible only via cloud services.[9][10]
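As a rough illustration of what running on a modest GPU looks like in practice, the sketch below shows two common memory-saving settings from the `diffusers` library (again an assumption on our part, not something this page prescribes) that help the model fit within an ~8 GB VRAM budget:

```python
# Hypothetical memory-conscious setup for a consumer GPU (~8 GB VRAM).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed checkpoint id
    torch_dtype=torch.float16,          # half precision roughly halves weight memory
)
pipe.enable_attention_slicing()          # compute attention in slices to lower peak VRAM
pipe = pipe.to("cuda")

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
```

Half-precision weights and sliced attention each trade a small amount of speed or numerical precision for a substantially lower peak memory footprint, which is typically what makes the difference on consumer cards.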
