The former image was generated with SDXL 0.9 rather than 1.0. The new one has been
generated with diffusers:
import torch
from diffusers import StableDiffusionXLPipeline, DDIMScheduler
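# DDIM scheduler with the scaled-linear beta schedule used by SD/SDXL checkpoints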
noise_scheduler = DDIMScheduler(
    num_train_timesteps=1000,
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    clip_sample=False,
    set_alpha_to_one=False,
    steps_offset=1,
)
base_model_path = "/path/to/stabilityai/stable-diffusion-xl-base-1.0"
device = "cuda"
prompt = "a cute cat, detailed high-quality professional image"
negative_prompt = "lowres, bad anatomy, bad hands, cropped, worst quality"
seed = 2
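# Load the SDXL base pipeline in fp16 with the DDIM scheduler above;
# add_watermarker=False disables the invisible watermark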
pipe = StableDiffusionXLPipeline.from_pretrained(
    base_model_path,
    scheduler=noise_scheduler,
    torch_dtype=torch.float16,
    add_watermarker=False,
)
pipe = pipe.to(device)
generator = torch.Generator(device).manual_seed(seed)
images = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=30,
    generator=generator,
).images
This generalizes the Adapter abstraction to higher-level
constructs such as a high-level LoRA (targeting e.g. the
SD UNet), ControlNet, and Reference-Only Control. Some
adapters now work by adapting child models with
"sub-adapters" that they inject and eject when needed.