Stable Diffusion is a generative model that can create realistic and diverse images from text or image inputs. During training, noise is gradually added to an image until only random noise remains; the model then learns to reverse this process, reconstructing an image from noise while being conditioned on the input. Stable Diffusion has many applications, such as text-to-image synthesis, image inpainting, style transfer, super-resolution, and more.
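The forward (noising) half of this process has a simple closed form in the standard DDPM formulation. Here is a minimal numpy sketch; the schedule values and shapes are illustrative, not the exact ones any particular Stable Diffusion checkpoint uses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule (illustrative values in the DDPM style).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)  # cumulative signal-retention factor

def add_noise(x0, t, rng):
    """Closed-form forward process: x_t = sqrt(a_bar_t)*x0 + sqrt(1 - a_bar_t)*eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

x0 = rng.standard_normal((3, 64, 64))   # a stand-in "image"
x_early = add_noise(x0, 10, rng)        # still strongly correlated with x0
x_late = add_noise(x0, T - 1, rng)      # nearly pure Gaussian noise
```

A trained model learns to predict the added noise and invert this chain step by step, which is what turns random noise back into an image.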
However, Stable Diffusion offers limited control over the output. For example, if you want to generate an image of a cat with blue eyes and a pink collar, you may not get the desired result by simply providing a text description. Or if you want to inpaint a missing part of an image with specific content, you may not be able to specify exactly what to fill in.
This is where ControlNet comes in. ControlNet is an extension for Stable Diffusion that gives users an extra layer of control over img2img processing. The main idea behind ControlNet is to use additional input conditions that tell the model exactly what to do. These conditions can take the form of masks, sketches, edges, poses, depth maps, normal maps, segmentation maps, or any other information that can guide the generation process.
For example, to generate an image of a cat with blue eyes and a pink collar, you can use ControlNet to provide a mask that indicates where the eyes and the collar are, together with a color map that specifies the color for each region. Or, to inpaint a missing part of an image with specific content, you can provide a sketch or an edge map that outlines what you want to fill in.
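Conditions like these are simply images aligned pixel-for-pixel with the target. A hypothetical numpy sketch of the mask and color map from the cat example (the region coordinates are made up for illustration):

```python
import numpy as np

H, W = 64, 64

# Binary mask marking the regions to control (hypothetical eye/collar areas).
mask = np.zeros((H, W), dtype=np.float32)
mask[20:26, 24:30] = 1.0   # "eye" region
mask[48:54, 16:48] = 1.0   # "collar" region

# Color map: an RGB image assigning a target color to each masked region.
color_map = np.zeros((H, W, 3), dtype=np.float32)
color_map[20:26, 24:30] = (0.0, 0.0, 1.0)   # blue eyes
color_map[48:54, 16:48] = (1.0, 0.4, 0.7)   # pink collar

# Condition images are consumed alongside the text prompt; here we just
# stack mask and colors into a single (H, W, 4) conditioning tensor.
condition = np.concatenate([mask[..., None], color_map], axis=-1)
```

The point is that the condition carries spatial information a text prompt cannot express: exactly *where* each attribute should appear.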
ControlNet works by injecting these additional conditions into the Stable Diffusion model at different noise levels. This way, the model learns to incorporate the conditions into the output image while preserving the realism and diversity of Stable Diffusion. ControlNet can also be combined with pre-processors, such as the Canny edge detector or the OpenPose skeleton extractor, to generate the conditions automatically from the input image.
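A key trick from the ControlNet paper is that the condition branch is attached to the frozen base model through zero-initialized "zero convolution" layers, so at the start of training the combined model behaves exactly like the original one. A minimal numpy sketch of that residual injection (shapes and names are illustrative, not the real UNet's):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w, b):
    """Pointwise (1x1) convolution: mix channels at every spatial location."""
    # x: (C_in, H, W), w: (C_out, C_in), b: (C_out,)
    return np.einsum("oc,chw->ohw", w, x) + b[:, None, None]

C, H, W = 8, 16, 16
feat = rng.standard_normal((C, H, W))       # features from the frozen base block
cond = rng.standard_normal((C, H, W))       # features from the condition branch

# Zero convolution: weights and bias start at exactly zero.
w_zero = np.zeros((C, C))
b_zero = np.zeros(C)

out = feat + conv1x1(cond, w_zero, b_zero)  # at init: out == feat

# After training moves the weights away from zero, the condition
# starts to steer the output.
w_trained = 0.1 * rng.standard_normal((C, C))
out_trained = feat + conv1x1(cond, w_trained, b_zero)
```

Starting from an exact identity is what lets ControlNet fine-tune safely on top of a large pre-trained model without destroying it.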
ControlNet is an open-source project that can be installed as an extension for AUTOMATIC1111's Stable Diffusion web UI, making it easy to experiment with different models and conditions for img2img generation. ControlNet currently ships 14 pre-trained models for different condition types, such as Canny edges, depth maps, normal maps, OpenPose skeletons, scribbles, line art, and segmentation maps. Users can also train their own models using the provided code and instructions.
ControlNet is a powerful and flexible tool that enhances control over Stable Diffusion models and opens up new possibilities for creative and artistic expression through img2img generation. If you are interested in learning more about ControlNet, you can visit its GitHub repository [1] or read its paper [2].
[1] Mikubill/sd-webui-controlnet: WebUI extension for ControlNet - GitHub
[2] Adding Conditional Control to Text-to-Image Diffusion Models (the ControlNet paper)