A method for editing images from human instructions: given an input image and a written instruction telling the model what to do, the model follows the instruction to edit the image. To generate training data for this task, the article combines the knowledge of two large pretrained models, a language model (GPT-3) and a text-to-image model (Stable Diffusion), to produce a large dataset of image-editing examples.
The Stable Diffusion WebUI Plugin is a plugin for Photoshop and Krita that interfaces with AUTOMATIC1111's Stable Diffusion WebUI without requiring you to switch to another WebUI or modify an existing installation. It supports text-to-image, image-to-image, inpainting, and outpainting directly inside Photoshop and Krita, eliminating the need to fuss with the browser's inpainting tool or upload masks. The plugin also offers better script support, a standalone face-fix feature, pause and interrupt, a render queue, and experimental features, with upscaling support planned for the future.
The Stability Photoshop plugin enables users to generate and edit images using both Stable Diffusion and DALL•E 2 directly within Photoshop. The plugin can be obtained in two ways: by installing it from the Adobe Exchange or by downloading the CCX file directly. Users who wish to generate images locally will also need the Stable Diffusion API Server, and those who want to fine-tune their own models can use a fork of the DreamBooth project.
The personal, minimalist, super-fast, database free, bookmarking service by the Shaarli community