Commit graph

369 commits

Author SHA1 Message Date
Benjamin Trom 121ef4df39 add is_optimized option for attention 2023-09-14 14:12:27 +02:00
Pierre Chapuis 0e0c39b4b5 black 2023-09-13 17:02:47 +02:00
Cédric Deltheil eea340c6c4 add support for SDXL IP-Adapter
This only supports the latest SDXL IP-Adapter release (2023.9.8) which
builds upon the ViT-H/14 CLIP image encoder.
2023-09-12 18:00:39 +02:00
Cédric Deltheil 1b4dcebe06 make scheduler an actual abstract base class 2023-09-12 16:47:47 +02:00
Cédric Deltheil 12e37f5d85 controlnet: replace Lambda w/ Slicing basic layer 2023-09-12 15:37:33 +02:00
Pierre Chapuis 7a32699cc6 add ensure_find and ensure_find_parent helpers 2023-09-12 14:19:10 +02:00
Pierre Chapuis dc2c3e0163 implement CrossAttentionAdapter using chain operations 2023-09-12 11:58:24 +02:00
Pierre Chapuis 3c056e2231 expose lookup_top_adapter 2023-09-12 11:58:24 +02:00
Benjamin Trom b515c02867 add new basic layers and Matmul chain 2023-09-12 10:55:34 +02:00
Doryan Kaced 2f2510a9b1 Use bias correction on Prodigy 2023-09-12 10:44:05 +02:00
Pierre Chapuis be54cfc016 fix weight loading for float16 LoRAs 2023-09-11 16:14:19 +02:00
Cédric Deltheil e5425e2968 make IP-Adapter generic for SD1 and SDXL 2023-09-08 16:38:01 +02:00
Cédric Deltheil 61858d9371 add CLIPImageEncoderG 2023-09-08 12:00:21 +02:00
Cédric Deltheil c6fadd1c81 deprecate bidirectional_mapping util 2023-09-07 18:43:20 +02:00
limiteinductive 2786117469 implement SDXL + e2e test on random init 2023-09-07 18:34:42 +02:00
limiteinductive 02af8e9f0b improve typing of ldm and sd1, introducing SD1Autoencoder class 2023-09-07 18:34:42 +02:00
Benjamin Trom cf43cb191f Add better tree representation for fluxion Module 2023-09-07 16:33:24 +02:00
Cédric Deltheil c55917e293 add IP-Adapter support for SD 1.5
Official repo: https://github.com/tencent-ailab/IP-Adapter
2023-09-06 15:12:48 +02:00
Pierre Chapuis 864937a776 support injecting several LoRAs simultaneously 2023-09-06 11:49:55 +02:00
limiteinductive 88efa117bf fix model comparison with custom layers 2023-09-05 12:34:38 +02:00
Pierre Chapuis 566656a539 fix text encoder LoRAs 2023-09-04 15:51:39 +02:00
limiteinductive ebfa51f662 Make breakpoint a ContextModule 2023-09-04 12:22:10 +02:00
limiteinductive 9d2fbf6dbd Fix tuple annotation for pyright 1.1.325 2023-09-04 10:41:06 +02:00
Doryan Kaced 44e184d4d5 Init dtype and device correctly for OutputBlock 2023-09-01 19:44:06 +02:00
Cédric Deltheil 3a10baa9f8 cross-attn 2d: record use_bias attribute 2023-09-01 19:23:33 +02:00
Cédric Deltheil b933fabf31 unet: get rid of clip_embedding attribute for SD1
It is implicitly defined by the underlying cross-attention layer. This
also makes it consistent with SDXL.
2023-09-01 19:23:33 +02:00
Cédric Deltheil 134ee7b754 sdxl: remove wrong structural_attrs in cross-attn 2023-09-01 19:23:33 +02:00
Pierre Chapuis e91e31ebd2 check no two controlnets have the same name 2023-09-01 17:47:29 +02:00
Pierre Chapuis bd59790e08 always respect _can_refresh_parent 2023-09-01 17:44:16 +02:00
Pierre Chapuis d389d11a06 make basic adapters a part of Fluxion 2023-09-01 17:29:48 +02:00
Pierre Chapuis 31785f2059 scope range adapter in latent diffusion 2023-09-01 17:29:48 +02:00
Pierre Chapuis 73813310d0 rename SelfAttentionInjection to ReferenceOnlyControl and vice-versa 2023-09-01 17:29:48 +02:00
Pierre Chapuis eba0c33001 allow lora_targets to take a list of targets as input 2023-09-01 11:52:39 +02:00
Cédric Deltheil 92cdf19eae add Distribute to fluxion layers' __init__.py 2023-09-01 11:20:48 +02:00
Doryan Kaced 9f6733de8e Add concepts learning via textual inversion 2023-08-31 16:07:53 +02:00
Pierre Chapuis 0f476ea18b make high-level adapters Adapters
This generalizes the Adapter abstraction to higher-level
constructs such as high-level LoRA (targeting e.g. the
SD UNet), ControlNet and Reference-Only Control.

Some adapters now work by adapting child models with
"sub-adapters" that they inject / eject when needed.
2023-08-31 10:57:18 +02:00
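The inject/eject mechanism described in the commit body above can be sketched as follows. This is a hypothetical illustration of the "sub-adapter" idea, not the actual Refiners API: all class and method names here are invented for the example.

```python
# Hypothetical sketch (not the real Refiners API): a high-level adapter
# that adapts a model by injecting child "sub-adapters" into its
# submodules, and ejecting them to restore the original model.

class SubAdapter:
    """Wraps one child module, replacing it in the parent while injected."""

    def __init__(self, parent: list, index: int):
        self.parent = parent
        self.index = index
        self.original = parent[index]

    def inject(self) -> None:
        # Swap the original module for its adapted version in place.
        self.parent[self.index] = ("adapted", self.original)

    def eject(self) -> None:
        # Restore the original module, leaving the model untouched.
        self.parent[self.index] = self.original


class HighLevelAdapter:
    """Adapts a whole model by managing several sub-adapters at once."""

    def __init__(self, model: list):
        self.subs = [SubAdapter(model, i) for i in range(len(model))]

    def inject(self) -> None:
        for sub in self.subs:
            sub.inject()

    def eject(self) -> None:
        for sub in self.subs:
            sub.eject()


model = ["attn_0", "attn_1"]
adapter = HighLevelAdapter(model)
adapter.inject()
assert model == [("adapted", "attn_0"), ("adapted", "attn_1")]
adapter.eject()
assert model == ["attn_0", "attn_1"]
```

The point of the design, as the commit body states, is that a high-level construct (a LoRA targeting the UNet, a ControlNet, Reference-Only Control) is itself an Adapter, and injection/ejection cascades to its children.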
Cédric Deltheil d8004718c8 foundationals: add clip image encoder 2023-08-30 21:50:01 +02:00
Doryan Kaced 08a5341452 Make image resize configurable in training scripts 2023-08-30 14:05:29 +02:00
Doryan Kaced 437fa24368 Make horizontal flipping configurable in training scripts 2023-08-30 12:41:03 +02:00
Pierre Chapuis 18c84c7b72 shorter import paths 2023-08-29 16:57:40 +02:00
limiteinductive 8615dbdbde Add inner_dim Parameter to Attention Layer in Fluxion 2023-08-28 16:34:25 +02:00
limiteinductive 7ca6bd0ccd implement the ConvertModule class and refactor conversion scripts 2023-08-28 14:39:14 +02:00
Doryan Kaced 3680f9d196 Add support for learned concepts e.g. via textual inversion 2023-08-28 10:37:39 +02:00
Benjamin Trom 8b1719b1f9 remove unused TextEncoder and UNet protocols 2023-08-25 17:34:26 +02:00
limiteinductive 92a21bc21e refactor latent_diffusion module 2023-08-25 12:30:20 +02:00
Pierre Chapuis 802970e79a simplify Chain#append 2023-08-23 17:49:59 +02:00
Pierre Chapuis 16618d73de remove useless uses of type: ignore 2023-08-23 17:49:59 +02:00
Pierre Chapuis dec0d64432 make walk and layers not recurse by default
There is now a parameter to get the old (recursive) behavior.
2023-08-23 12:15:56 +02:00
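The behavior change described above (non-recursive traversal by default, with a parameter restoring the old recursive behavior) can be sketched like this. The names below are illustrative only and are not the Refiners signatures.

```python
# Hypothetical sketch of the traversal change: walk() yields only direct
# children by default; passing recurse=True restores the old behavior of
# descending into the whole subtree.

class Node:
    def __init__(self, name: str, children: tuple = ()):
        self.name = name
        self.children = list(children)

    def walk(self, recurse: bool = False):
        for child in self.children:
            yield child
            if recurse:
                yield from child.walk(recurse=True)


tree = Node("root", [Node("a", [Node("a1")]), Node("b")])
assert [n.name for n in tree.walk()] == ["a", "b"]
assert [n.name for n in tree.walk(recurse=True)] == ["a", "a1", "b"]
```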
Pierre Chapuis 2ad26a06b0 fix LoRAs on Self target 2023-08-23 12:13:01 +02:00
limiteinductive 3565a4127f implement DoubleTextEncoder for SDXL 2023-08-23 11:05:38 +02:00
Cédric Deltheil 71ddb55a8e infer device and dtype in LoraAdapter 2023-08-22 11:55:39 +02:00
Benjamin Trom 8c7298f8cc fix chain slicing with structural copy 2023-08-22 11:44:11 +02:00
limiteinductive e7c1db50e0 turn CLIPTokenizer into a fl.Module 2023-08-22 00:09:01 +02:00
Cédric Deltheil 1ad4e1a35a converter: add missing structural_attrs 2023-08-21 16:04:12 +02:00
Cédric Deltheil b91a457495 use Converter layer for sinusoidal embedding 2023-08-21 16:04:12 +02:00
limiteinductive 108fa8f26a add converter layer + tests 2023-08-21 12:09:58 +02:00
limiteinductive 4526d58cd5 update CTOR of CLIPTextEncoder with max_sequence_length 2023-08-21 11:21:12 +02:00
limiteinductive 6fd5894caf split PositionalTokenEncoder 2023-08-21 11:21:12 +02:00
limiteinductive 9d663534d1 cosmetic changes for text_encoder.py 2023-08-21 11:21:12 +02:00
limiteinductive b8e7179447 make clip g use quick gelu and pad_token_id 0 2023-08-17 17:31:15 +02:00
limiteinductive 6594502c11 parametrize tokenizer for text_encoder 2023-08-17 17:31:15 +02:00
limiteinductive 4575e3dd91 add start, end and pad tokens as parameter 2023-08-17 17:31:15 +02:00
limiteinductive 63fda2bfd8 add use_quick_gelu kwarg for CLIPTextEncoder 2023-08-17 17:31:15 +02:00
limiteinductive efe923a272 cosmetic changes 2023-08-17 17:31:15 +02:00
limiteinductive 17dc75421b make basic layers an enum and work with subtyping 2023-08-17 15:36:43 +02:00
Pierre Chapuis 97b162d9a0 add InformativeDrawings
https://github.com/carolineec/informative-drawings

This is the preprocessor for the Lineart ControlNet.
2023-08-16 12:29:09 +02:00
Pierre Chapuis e10f761a84 GroupNorm and LayerNorm must be affine to be WeightedModules 2023-08-16 12:29:09 +02:00
Pierre Chapuis bd49304fc8 add Sigmoid activation 2023-08-07 19:56:28 +02:00
Cédric Deltheil 48f674c433 initial commit 2023-08-04 15:28:41 +02:00