Laurent
4f94dfb494
implement dinov2 positional embedding interpolation
2024-04-02 10:02:43 +02:00
hugojarkoff
a93ceff752
Add HQ-SAM Adapter
2024-03-21 15:36:55 +01:00
Pierre Chapuis
cce2a98fa6
add sanity check to auto_attach_loras
2024-03-08 15:43:57 +01:00
Pierre Chapuis
7d8e3fc1db
add SDXL-Lightning weights to conversion script + support safetensors
2024-02-26 12:14:02 +01:00
Pierre Chapuis
d14c5bd5f8
add option to override unet weights for conversion
2024-02-26 12:14:02 +01:00
Pierre Chapuis
684e2b9a47
add docstrings for LCM / LCM-LoRA
2024-02-21 16:37:27 +01:00
Pierre Chapuis
f8d55ccb20
add LcmAdapter
...
This adds support for the condition scale embedding.
Also updates the UNet converter to support LCM.
2024-02-21 16:37:27 +01:00
Pierre Chapuis
8139b2dd91
fix IP-Adapter weights conversion
2024-02-21 15:03:48 +01:00
Laurent
00270604ef
fix conversion_script bug + rename control_lora e2e test
2024-02-14 18:20:46 +01:00
Laurent
5fee723cd1
write ControlLora weight conversion script
2024-02-14 18:20:46 +01:00
Pierre Chapuis
471ef91d1c
make __getattr__ on Module return object, not Any
...
PyTorch chose to make it Any because they expect its users' code
to be "highly dynamic": https://github.com/pytorch/pytorch/pull/104321
That is not the case for us: in Refiners, having untyped code
goes against one of our core principles.
Note that there is currently an open PR in PyTorch to
return `Module | Tensor`, but in practice this is not always
correct either: https://github.com/pytorch/pytorch/pull/115074
I also moved Residuals-related code from SD1 to latent_diffusion
because SDXL should not depend on SD1.
2024-02-06 11:32:18 +01:00
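The typing change above can be sketched without PyTorch. In this hedged, minimal stand-in (illustrative names, not Refiners' actual code), annotating `__getattr__` as returning `object` instead of `Any` forces callers to narrow the type explicitly before using the attribute:

```python
from typing import cast


class Module:
    """Minimal stand-in for torch.nn.Module, for illustration only."""

    def __init__(self) -> None:
        self._buffers: dict[str, object] = {}

    def register_buffer(self, name: str, value: object) -> None:
        self._buffers[name] = value

    # PyTorch annotates the return type as Any; annotating it as `object`
    # means a type checker rejects untyped use of dynamic attributes.
    def __getattr__(self, name: str) -> object:
        buffers = self.__dict__.get("_buffers", {})
        if name in buffers:
            return buffers[name]
        raise AttributeError(name)


m = Module()
m.register_buffer("scale", 2.0)
scale = cast(float, m.scale)  # explicit narrowing required; `m.scale + 1` would not type-check
```

With `Any`, the cast would be unnecessary but every attribute access would silently escape the type checker, which is exactly what the commit avoids.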
limiteinductive
73f6ccfc98
make Scheduler a fl.Module + rename Scheduler -> Solver
2024-01-31 17:03:52 +01:00
limiteinductive
ed3621362f
Add load_tensors utils in fluxion
2024-01-21 12:34:33 +01:00
limiteinductive
a1f50f3f9d
refactor Lora, LoraAdapter and the latent_diffusion/lora file
2024-01-18 16:27:38 +01:00
Cédric Deltheil
dd87b9706e
pick the right class for CLIP text converter
...
i.e. CLIPTextModel by default, or CLIPTextModelWithProjection for SDXL's
so-called text_encoder_2.
This silences false-positive warnings like:
Some weights of CLIPTextModelWithProjection were not initialized
from the model checkpoint [...]
2024-01-18 11:17:41 +01:00
Pierre Chapuis
7839c54ae8
unet conversion: add option to skip init check
2024-01-16 19:10:59 +01:00
Pierre Chapuis
d2f38871fd
add a way to specify the subfolder of the unet
...
(no subfolder -> pass an empty string)
2024-01-16 19:10:59 +01:00
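The "empty string means no subfolder" convention can be sketched with argparse. This is a hypothetical illustration (the flag name and default are assumptions, not the conversion script's exact interface):

```python
import argparse

# Hypothetical sketch of the CLI convention described above: a --subfolder
# option where passing an empty string means "no subfolder".
parser = argparse.ArgumentParser()
parser.add_argument("--subfolder", default="unet", help="set to '' for no subfolder")

args = parser.parse_args(["--subfolder", ""])
# Skip empty path components so '' really resolves to the repo root:
path = "/".join(part for part in ["some/repo", args.subfolder] if part)
```

With the default, the same join would yield `some/repo/unet`.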
Pierre Chapuis
94a918a474
fix invalid default value for --half in help
2024-01-16 19:10:59 +01:00
limiteinductive
c9e973ba41
refactor CrossAttentionAdapter to work with context.
2024-01-08 15:20:23 +01:00
hugojarkoff
00f494efe2
SegmentAnything: add dense mask prompt support
2024-01-05 18:53:25 +01:00
limiteinductive
20c229903f
upgrade pyright to 1.1.342 ; improve no_grad typing
2023-12-29 15:09:02 +01:00
Cédric Deltheil
832f012fe4
convert_dinov2: tweak command-line args
...
i.e. mimic the other conversion scripts
2023-12-18 10:29:28 +01:00
Bryce
5ca1549c96
refactor: convert bash script to python
...
Ran successfully to completion. But on a repeat run `convert_unclip` didn't pass the hash check for some reason.
- fix inpainting model download urls
- shows a progress bar for downloads
- skips downloading existing files
- uses a temporary file to prevent partial downloads
- can do a dry run to check if url is valid `DRY_RUN=1 python scripts/prepare_test_weights.py`
- displays the downloaded file hash
2023-12-15 09:55:59 +01:00
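The "temporary file to prevent partial downloads" point above can be sketched as follows. This is a hedged approximation of the pattern, not the actual helpers in `scripts/prepare_test_weights.py`: write the payload to a temporary file, rename it into place atomically, and report the sha256 digest.

```python
import hashlib
import os
import tempfile


def save_atomically(data: bytes, dest: str) -> str:
    """Write data to dest via a temp file; return the sha256 hex digest.

    An interrupted run never leaves a truncated file at dest, because the
    final os.replace is atomic (illustrative sketch, names are assumptions).
    """
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(dest) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp_path, dest)  # atomic rename: no partial downloads
    except BaseException:
        os.unlink(tmp_path)  # clean up the temp file on any failure
        raise
    return hashlib.sha256(data).hexdigest()


workdir = tempfile.mkdtemp()
dest = os.path.join(workdir, "model.safetensors")
digest = save_atomically(b"fake-weights", dest)
```

A real downloader would stream chunks into the temp file instead of holding the payload in memory, but the rename-at-the-end guarantee is the same.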
Cédric Deltheil
e978b3665d
convert_dinov2: ignore pyright errors
...
And save converted weights into safetensors instead of pickle
2023-12-14 17:50:41 +01:00
Laureηt
9337d65e0e
feature: add DINOv2
...
Co-authored-by: Benjamin Trom <benjamintrom@gmail.com>
2023-12-14 17:27:32 +01:00
Cédric Deltheil
792a0fc3d9
run lint rules using latest isort settings
2023-12-11 11:58:43 +01:00
limiteinductive
86c54977b9
replace poetry by rye for python dependency management
...
Co-authored-by: Cédric Deltheil <cedric@deltheil.me>
Co-authored-by: Pierre Chapuis <git@catwell.info>
2023-12-08 17:40:10 +01:00
limiteinductive
807ef5551c
refactor fl.Parameter basic layer
...
Co-authored-by: Cédric Deltheil <cedric@deltheil.me>
2023-12-08 10:20:34 +01:00
Benjamin Trom
ea44262a39
unnest Residual subchain by modifying its forward
...
Also replaced the remaining Sum-Identity layers by Residual.
The tolerance used to compare SAM's ViT models has been tweaked: for
some reason there is a small difference (in float32) in the neck layer
(first conv2D).
Co-authored-by: Cédric Deltheil <cedric@deltheil.me>
2023-10-19 10:34:51 +02:00
Cédric Deltheil
7f7e129bb6
convert autoencoder: add an option for subfolder
2023-09-29 18:54:24 +02:00
Cédric Deltheil
5fc6767a4a
add IP-Adapter plus (aka fine-grained features)
2023-09-29 15:23:43 +02:00
Cédric Deltheil
2106c237d9
add T2I-Adapter conversion script
2023-09-25 13:54:26 +02:00
Benjamin Trom
282578ddc0
add Segment Anything (SAM) to foundational models
...
Note: dense prompts (i.e. masks) support is still partial (see MaskEncoder)
Co-authored-by: Cédric Deltheil <cedric@deltheil.me>
2023-09-21 11:44:30 +02:00
Cédric Deltheil
eea340c6c4
add support for SDXL IP-Adapter
...
This only supports the latest SDXL IP-Adapter release (2023.9.8) which
builds upon the ViT-H/14 CLIP image encoder.
2023-09-12 18:00:39 +02:00
Pierre Chapuis
7a32699cc6
add ensure_find and ensure_find_parent helpers
2023-09-12 14:19:10 +02:00
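The "ensure" pattern behind these helpers can be sketched in a few lines. This is a simplified stand-in (plain iterables rather than Refiners' actual Chain API): the same search as a plain `find`, but failing loudly instead of returning `None`.

```python
from typing import Iterable, Optional, Type, TypeVar

T = TypeVar("T")


def find(items: Iterable[object], kind: Type[T]) -> Optional[T]:
    """Return the first item of the given type, or None (illustrative)."""
    return next((item for item in items if isinstance(item, kind)), None)


def ensure_find(items: Iterable[object], kind: Type[T]) -> T:
    """Like find, but guarantees a non-None result to the type checker."""
    found = find(items, kind)
    assert found is not None, f"no {kind.__name__} found"
    return found


result = ensure_find([1, "two", 3.0], str)
```

The benefit is ergonomic: callers get a `T` instead of `Optional[T]`, so no repetitive `is not None` checks at every call site.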
Pierre Chapuis
43075f60b0
do not use get_parameter_name in conversion script
2023-09-12 11:58:24 +02:00
Cédric Deltheil
9364c0ea1c
converters: get rid of default=True for --half
2023-09-11 21:49:24 +02:00
Cédric Deltheil
cc3b20320d
make clip text converter support SDXL
...
i.e. convert the 2nd text encoder and save the final double text encoder
2023-09-11 21:49:24 +02:00
Cédric Deltheil
e5425e2968
make IP-Adapter generic for SD1 and SDXL
2023-09-08 16:38:01 +02:00
Cédric Deltheil
946e7c2974
add threshold for clip image encoder conversion
2023-09-08 12:00:21 +02:00
Cédric Deltheil
c55917e293
add IP-Adapter support for SD 1.5
...
Official repo: https://github.com/tencent-ailab/IP-Adapter
2023-09-06 15:12:48 +02:00
limiteinductive
88efa117bf
fix model comparison with custom layers
2023-09-05 12:34:38 +02:00
Cédric Deltheil
b933fabf31
unet: get rid of clip_embedding attribute for SD1
...
It is implicitly defined by the underlying cross-attention layer. This
also makes it consistent with SDXL.
2023-09-01 19:23:33 +02:00
Pierre Chapuis
d389d11a06
make basic adapters a part of Fluxion
2023-09-01 17:29:48 +02:00
Pierre Chapuis
0f476ea18b
make high-level adapters Adapters
...
This generalizes the Adapter abstraction to higher-level
constructs such as high-level LoRA (targeting e.g. the
SD UNet), ControlNet and Reference-Only Control.
Some adapters now work by adapting child models with
"sub-adapters" that they inject / eject when needed.
2023-08-31 10:57:18 +02:00
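The inject/eject idea described above can be sketched minimally. Names and structure here are illustrative assumptions, not Refiners' actual Adapter API: injecting activates extra behavior around a target, ejecting restores the original behavior.

```python
class Double:
    """Stand-in for a target layer (illustrative)."""

    def __call__(self, x: float) -> float:
        return 2.0 * x


class ResidualScaleAdapter:
    """Adds a scaled residual branch around its target while injected."""

    def __init__(self, target: Double, scale: float) -> None:
        self.target = target
        self.scale = scale
        self.injected = False

    def inject(self) -> "ResidualScaleAdapter":
        self.injected = True
        return self

    def eject(self) -> None:
        self.injected = False

    def __call__(self, x: float) -> float:
        out = self.target(x)
        return out + self.scale * x if self.injected else out


adapter = ResidualScaleAdapter(Double(), scale=0.5)
before = adapter(2.0)          # plain target behavior
after = adapter.inject()(2.0)  # residual branch active
adapter.eject()
restored = adapter(2.0)        # original behavior again
```

A higher-level adapter (e.g. one targeting a whole UNet) would hold several such sub-adapters and inject or eject them together.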
Cédric Deltheil
3746d7f622
scripts: add converter for clip image encoder
...
Tested with:
python scripts/conversion/convert_transformers_clip_image_model.py \
  --from /path/to/stabilityai/stable-diffusion-2-1-unclip
2023-08-30 21:50:01 +02:00
Pierre Chapuis
18c84c7b72
shorter import paths
2023-08-29 16:57:40 +02:00
limiteinductive
7ca6bd0ccd
implement the ConvertModule class and refactor conversion scripts
2023-08-28 14:39:14 +02:00