Pierre Chapuis
928da1ee1c
fix repr when there are non-Fluxion modules in a tree
2024-08-08 19:26:56 +02:00
limiteinductive
b4ee65b9b1
improve load_from_safetensors typing
2024-08-02 15:29:39 +02:00
limiteinductive
1de567590b
fix typing issues introduced by torch 2.4; typing is not guaranteed for torch < 2.4
2024-08-02 12:02:00 +02:00
limiteinductive
09a9dfd494
Add stochastic sampling to DPM solver (SDE)
2024-07-23 11:13:12 +02:00
Pierre Chapuis
daee77298d
improve FrankenSolver
It now takes a Scheduler factory instead of a Scheduler.
This lets the user potentially recreate the Scheduler on `rebuild`.
It also properly sets the device and dtype on rebuild,
and it has better typing.
2024-07-19 16:46:52 +02:00
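The rebuild behavior described in this commit can be sketched as follows. This is a minimal illustration of the factory pattern it names; `FrankenSolverSketch` and `DummyScheduler` are hypothetical stand-ins, not the actual refiners API:

```python
from typing import Callable


class DummyScheduler:
    """Stand-in for a Diffusers scheduler (hypothetical)."""

    def __init__(self) -> None:
        self.device = "cpu"


class FrankenSolverSketch:
    """Stores a scheduler *factory* so the scheduler can be recreated on rebuild."""

    def __init__(self, scheduler_factory: Callable[[], DummyScheduler], device: str = "cpu") -> None:
        self.scheduler_factory = scheduler_factory
        self.device = device
        self.scheduler = scheduler_factory()

    def rebuild(self, device: str) -> "FrankenSolverSketch":
        # Recreate the scheduler from the factory instead of reusing the old
        # instance, and propagate the target device to the new solver.
        return FrankenSolverSketch(self.scheduler_factory, device=device)
```

Holding the factory rather than the instance is what lets `rebuild` return a solver with a fresh scheduler on the requested device.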
Laurent
88325c3bbc
multi-upscaler: specify map_location when loading negative embedding
2024-07-12 18:49:25 +02:00
Laurent
6ddd5435b8
fix broken dtypes in tiled auto encoders
2024-07-11 15:23:02 +02:00
Laurent
f3b5c8d3e1
create SD1.5 MultiUpscaler pipeline
Co-authored-by: limiteinductive <benjamin@lagon.tech>
Co-authored-by: Cédric Deltheil <355031+deltheil@users.noreply.github.com>
2024-07-11 15:23:02 +02:00
Laurent
66cd0d57a1
improve MultiDiffusion pipelines
Co-authored-by: limiteinductive <benjamin@lagon.tech>
Co-authored-by: Cédric Deltheil <355031+deltheil@users.noreply.github.com>
2024-07-11 15:23:02 +02:00
Pierre Chapuis
9e8c2a3753
add FrankenSolver
This solver is designed to use Diffusers Schedulers as Refiners Solvers.
2024-07-10 19:31:34 +02:00
Cédric Deltheil
245f51393e
auto_encoder: swap x/y names when generating tiles
Cosmetic change
2024-06-26 14:17:28 +02:00
limiteinductive
b42881e54e
Implement Tiled Autoencoder inference to save VRAM
2024-06-26 11:59:18 +02:00
limiteinductive
15ccdb38f3
Add scale_decay parameter for SD1 ControlNet
2024-06-24 13:21:27 +02:00
Pierre Chapuis
98de9d13d3
fix typing problems
2024-06-24 11:12:29 +02:00
Pierre Chapuis
1886e6456c
add a docstring for set_inference_steps
This explains the relation between first_step and strength,
as shown by @holwech here: https://github.com/finegrain-ai/refiners/discussions/374
2024-05-29 16:20:50 +02:00
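For illustration, the relation between `first_step` and `strength` referenced in that discussion can be sketched with the common img2img convention, where higher strength means more denoising steps are actually run. This is an assumption about the general technique, not necessarily refiners' exact formula:

```python
def first_step_from_strength(num_inference_steps: int, strength: float) -> int:
    """Common img2img convention: strength 1.0 starts from pure noise (step 0);
    lower strength skips the earliest denoising steps."""
    return int(num_inference_steps * (1.0 - strength))
```

For example, with 30 inference steps, strength 1.0 starts at step 0 (full denoising) while strength 0.5 starts halfway through the schedule.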
Cédric Deltheil
ddcd84c740
training_utils: export neptune config and mixin
Like W&B. With this, #371 would have broken `test_import_training_utils`.
Follow-up of #371 and #372.
2024-05-29 10:29:34 +02:00
Laurent
7dba1c8034
(training_utils) neptune callback pydantic fix
2024-05-28 22:42:28 +02:00
Laurent
3ad7f592db
(training_utils) add NeptuneCallback
2024-05-28 16:35:11 +02:00
limiteinductive
3a7f14e4dc
Fix clock log order; fix that the first iteration was skipped
2024-05-21 17:57:14 +02:00
limiteinductive
0bec9a855d
annotated validators for TimeValue
2024-05-09 10:53:58 +02:00
limiteinductive
22f4f4faf1
DataLoader validation
2024-05-09 10:53:58 +02:00
limiteinductive
38bddc49bd
implement data_iterable
2024-05-09 10:53:58 +02:00
Benjamin Trom
05a63ef44e
apply suggestions from code review
2024-05-09 10:53:58 +02:00
limiteinductive
d6c225a112
implement data_iterable (bis)
2024-05-09 10:53:58 +02:00
limiteinductive
de8334b6fc
remove dataset length
2024-05-09 10:53:58 +02:00
limiteinductive
b497b27cd3
remove dataset length (bis)
2024-05-09 10:53:58 +02:00
limiteinductive
603c8abb1e
fix clock
2024-05-09 10:53:58 +02:00
limiteinductive
44760ac19f
deprecate evaluation
2024-05-09 10:53:58 +02:00
limiteinductive
061d44888f
batch to step
2024-05-09 10:53:58 +02:00
limiteinductive
b7bb8bba80
remove EventConfig
This is a partial rollback of commit 5dde281
2024-05-09 10:53:58 +02:00
Laurent
7aff743019
initialize StableDiffusion_1_Inpainting with a 9-channel SD1Unet if not provided
2024-04-23 16:58:22 +02:00
limiteinductive
f32ccc3474
Remove seed_everything logging because it is too verbose
2024-04-22 18:14:33 +02:00
limiteinductive
5dde281ada
Implement EventConfig
2024-04-22 18:14:33 +02:00
limiteinductive
446796da57
Refactor TimeValue
2024-04-18 20:58:47 +02:00
Laurent
17246708b9
Add sample_noise staticmethod and modify add_noise to support batched steps
2024-04-18 12:55:49 +02:00
limiteinductive
7427c171f6
fix training_utils requirements check
2024-04-17 18:10:28 +02:00
Pierre Colle
bf7852b88e
SAM: image_to_scaled_tensor gray images
2024-04-16 18:45:17 +02:00
Laurent
eb4bb34f8b
(training_utils) add new ForceCommit callback
2024-04-16 14:43:10 +02:00
limiteinductive
be7d065a33
Add DataLoaderConfig to Trainer
2024-04-15 20:56:19 +02:00
limiteinductive
b9b999ccfe
turn scoped_seed into a context manager
2024-04-13 15:03:35 +02:00
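The `scoped_seed` pattern named by this commit can be sketched as a context manager that seeds the RNG on entry and restores the previous state on exit. This is a minimal sketch for Python's `random` module only; the real implementation presumably also covers torch (and possibly numpy):

```python
import random
from contextlib import contextmanager
from typing import Iterator


@contextmanager
def scoped_seed(seed: int) -> Iterator[None]:
    """Seed the RNG inside the `with` block, then restore the previous state."""
    state = random.getstate()
    random.seed(seed)
    try:
        yield
    finally:
        # Restore even if the body raises, so the seed never leaks out.
        random.setstate(state)
```

Two blocks entered with the same seed produce the same draws, and code after the block continues from the untouched outer RNG state.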
Pierre Colle
64692c3b5b
TrainerClock: assert dataset_length >= batch_size
2024-04-12 15:05:52 +02:00
Pierre Colle
0ac290f67d
SAM: expose sizing helpers
2024-04-12 08:56:23 +02:00
Laurent
06ff2f0a5f
add support for dinov2 giant flavors
2024-04-11 14:48:33 +02:00
Laurent
04e59bf3d9
fix GLU Activation docstrings
2024-04-11 14:48:33 +02:00
limiteinductive
f26b6ee00a
add static typing to the __call__ method of latent_diffusion models; fix a multi_diffusion bug where guidance_scale was not taken into account
2024-04-11 12:13:30 +02:00
Cédric Deltheil
a2ee705783
hq sam: add constructor args to docstring
Additionally, mark `register_adapter_module` for internal use.
2024-04-08 11:46:37 +02:00
Pierre Colle
d05ebb8dd3
SAM/HQSAMAdapter: docstring examples
2024-04-08 07:12:57 +02:00
hugojarkoff
bbb46e3fc7
Fix clock step inconsistencies on batch end
2024-04-05 15:52:43 +02:00
Pierre Chapuis
09af570b23
add DINOv2-FD metric
2024-04-03 16:45:00 +02:00
Laurent
5f07fa9c21
fix dinov2 interpolation, support batching
2024-04-02 18:57:25 +02:00
Pierre Chapuis
fd5a15c7e0
update pyright and fix Pillow 10.3 typing issues
2024-04-02 18:15:52 +02:00
Laurent
4f94dfb494
implement dinov2 positional embedding interpolation
2024-04-02 10:02:43 +02:00
Laurent
0336bc78b5
simplify interpolate function and layer
2024-04-02 10:02:43 +02:00
Pierre Colle
6c37e3f933
hq-sam: weights/load_weights
2024-03-29 11:25:43 +01:00
Pierre Chapuis
404a15aad2
tweak auto_attach_loras so debugging is easier when it fails
2024-03-26 16:12:48 +01:00
Laurent
a0715806d2
modify ip_adapter's ImageCrossAttention scale getter and setter
This new version makes it robust when multiple Multiply layers are inside the Chain (e.g. if the Linear layers are LoRA-ified).
2024-03-26 11:15:04 +01:00
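One way such a getter/setter can be made robust is to only consider the Multiply that is a direct child of the Chain, ignoring Multiply layers nested deeper (e.g. inside LoRA-ified Linears). A minimal sketch with hypothetical names, not the actual refiners implementation:

```python
class Multiply:
    def __init__(self, scale: float = 1.0) -> None:
        self.scale = scale


class Chain:
    def __init__(self, *children: object) -> None:
        self.children = list(children)


class ImageCrossAttentionSketch(Chain):
    """The adapter's scale lives on its *direct-child* Multiply only."""

    def _own_multiply(self) -> Multiply:
        # Ignore Multiply layers buried inside nested chains (e.g. LoRAs).
        return next(c for c in self.children if isinstance(c, Multiply))

    @property
    def scale(self) -> float:
        return self._own_multiply().scale

    @scale.setter
    def scale(self, value: float) -> None:
        self._own_multiply().scale = value
```

Setting the scale then leaves any LoRA-internal Multiply untouched instead of clobbering it.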
Laurent
7e64ba4011
modify ip_adapter's CrossAttentionAdapters injection logic
2024-03-26 11:15:04 +01:00
Cédric Deltheil
df0cc2aeb8
do not call __getattr__ with keyword argument
Same for `__setattr__`. Use positional arguments instead. E.g.:

```python
import torch
import refiners.fluxion.layers as fl

m = torch.compile(fl.Linear(1, 1))
m(torch.zeros(1))
# TypeError: Module.__getattr__() got an unexpected keyword argument 'name'
```
2024-03-25 21:46:13 +01:00
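The pitfall generalizes beyond torch.compile: CPython's built-in slot wrappers such as `object.__setattr__` take positional-only arguments and reject keywords outright, which is one reason positional calls are the safer convention for dunder methods. A small self-contained demonstration:

```python
class Plain:
    pass


obj = Plain()

# Slot wrappers take positional-only arguments, so keyword calls fail:
try:
    object.__setattr__(obj, name="x", value=1)
    keyword_call_ok = True
except TypeError:
    keyword_call_ok = False

assert not keyword_call_ok

# The equivalent positional call works as expected:
object.__setattr__(obj, "x", 1)
assert obj.x == 1
```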
Pierre Colle
cba83b0558
SAM init with mask_decoder after #325
2024-03-24 20:18:57 +01:00
Pierre Chapuis
364e196874
support no CFG in compute_clip_text_embedding
2024-03-22 17:06:51 +01:00
Pierre Colle
94e8b9c23f
SAM MaskDecoder token slicing
2024-03-22 13:11:40 +01:00
hugojarkoff
a93ceff752
Add HQ-SAM Adapter
2024-03-21 15:36:55 +01:00
limiteinductive
38c86f59f4
Switch gradient clipping to native torch.nn.utils.clip_grad_norm_
2024-03-19 22:08:48 +01:00
Pierre Colle
68fe725767
Add multimask_output flag to SAM
2024-03-19 17:40:26 +01:00
limiteinductive
6a72943ff7
change TimeValue to a dataclass
2024-03-19 14:49:24 +01:00
Laurent
b8fae60d38
make LoRA's weight initialization overridable
2024-03-13 17:32:16 +01:00
Pierre Chapuis
e32d8d16f0
LoRA loading: forward exclusions when preprocessing parts of the UNet
2024-03-13 15:25:00 +01:00
limiteinductive
ff5341c85c
Change weight decay for Optimizer to normal PyTorch default
2024-03-12 15:20:21 +01:00
Pierre Chapuis
975560165c
improve docstrings
2024-03-08 15:43:57 +01:00
Pierre Chapuis
cd5fa97c20
ability to get LoRA weights in SDLoraManager
2024-03-08 15:43:57 +01:00
Pierre Chapuis
fb90b00e75
add_loras_to_unet: add preprocess values as exclusions in last step
2024-03-08 15:43:57 +01:00
Pierre Chapuis
4259261f17
simplify LCM weights loader using new manager features
2024-03-08 15:43:57 +01:00
Pierre Chapuis
ccd9414ff1
fix debug map when attaching two LoRAs
(in that case return the path of the LoraAdapter)
2024-03-08 15:43:57 +01:00
Pierre Chapuis
8c7fcbc00f
LoRA manager: move exclude / include to add_loras call
Always exclude the TimestepEncoder by default, because some keys include both e.g. `resnet` and `time_emb_proj`.
Preprocess blocks that tend to mix up with others in a separate auto_attach call.
2024-03-08 15:43:57 +01:00
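The exclusion problem described in that commit body can be illustrated with plain substring filtering: a single weight key can match both a wanted pattern and an excluded one, so exclusions must win. This hypothetical `exclude_keys` helper is an illustration, not the actual SDLoraManager code:

```python
def exclude_keys(keys: list[str], exclusions: list[str]) -> list[str]:
    """Keep only keys that match none of the exclusion substrings."""
    return [k for k in keys if not any(e in k for e in exclusions)]


keys = [
    "unet.down_blocks.0.resnets.0.conv1.weight",
    # This key contains both "resnet" and "time_emb_proj":
    "unet.down_blocks.0.resnets.0.time_emb_proj.weight",
]
# Excluding the timestep-embedding projection drops the ambiguous key
# even though it also matches "resnet".
filtered = exclude_keys(keys, ["time_emb_proj"])
```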
Pierre Chapuis
052a20b897
remove add_multiple_loras
2024-03-08 15:43:57 +01:00
Pierre Chapuis
ed8ec26e63
allow passing inclusions and exclusions to SDLoraManager
2024-03-08 15:43:57 +01:00
Pierre Chapuis
cce2a98fa6
add sanity check to auto_attach_loras
2024-03-08 15:43:57 +01:00
Pierre Chapuis
1eb71077aa
use same scale setter / getter interface for all controls
2024-03-08 11:29:28 +01:00
Laurent
5e7986ef08
adding more log messages in training_utils
2024-03-08 10:52:14 +01:00
Pierre Chapuis
be2368cf20
ruff 3 formatting (Rye 0.28)
2024-03-08 10:42:05 +01:00
Pierre Chapuis
91d1b46aa9
Add a note that we mitigate non-zero SNR in DDIM.
2024-02-26 12:14:02 +01:00
Pierre Chapuis
7f51d18045
clarify that add_lcm_lora can load SDXL-Lightning
2024-02-26 12:14:02 +01:00
Pierre Chapuis
7e4e0f0650
correctly scale init latents for Euler scheduler
2024-02-26 12:14:02 +01:00
Pierre Chapuis
bf0ba58541
refactor solver params, add sample prediction type
2024-02-26 12:14:02 +01:00
Pierre Chapuis
ddc1cf8ca7
refactor solvers to support different timestep spacings
2024-02-26 12:14:02 +01:00
Cédric Deltheil
176807740b
control_lora: fix adapter set scale
The adapter's scale setter did not propagate the scale to the underlying zero convolutions; the value set at construction time was used instead.
Follow-up of #285
2024-02-22 10:01:05 +01:00
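The shape of that fix can be sketched as a scale setter that propagates the new value to the child layers instead of only updating the adapter's own field. Hypothetical names; not the actual ControlLoraAdapter code:

```python
class ZeroConv:
    """Stand-in for a zero-convolution layer with its own scale."""

    def __init__(self, scale: float) -> None:
        self.scale = scale


class ControlLoraAdapterSketch:
    def __init__(self, scale: float, num_convs: int = 3) -> None:
        self._scale = scale
        self.zero_convs = [ZeroConv(scale) for _ in range(num_convs)]

    @property
    def scale(self) -> float:
        return self._scale

    @scale.setter
    def scale(self, value: float) -> None:
        self._scale = value
        # The fix: propagate to the underlying zero convolutions, so they
        # do not keep the value set at construction time.
        for conv in self.zero_convs:
            conv.scale = value
```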
Pierre Chapuis
03b79d6d34
rename ResidualBlock to ConditionScaleBlock in LCM
2024-02-21 16:37:27 +01:00
Pierre Chapuis
684e2b9a47
add docstrings for LCM / LCM-LoRA
2024-02-21 16:37:27 +01:00
Pierre Chapuis
12b6829a26
add support for LCM LoRA weights loading
2024-02-21 16:37:27 +01:00
Pierre Chapuis
fafe5f8f5a
Improve filtering when auto-attaching LoRAs.
Also support debug output to help diagnose bad mappings.
2024-02-21 16:37:27 +01:00
Pierre Chapuis
f8d55ccb20
add LcmAdapter
This adds support for the condition scale embedding.
Also updates the UNet converter to support LCM.
2024-02-21 16:37:27 +01:00
Pierre Chapuis
c8c6294550
add LCMSolver (Latent Consistency Models)
2024-02-21 16:37:27 +01:00
Pierre Chapuis
4a619e84f0
support disabling CFG in LatentDiffusionModel
2024-02-21 16:37:27 +01:00
Pierre Colle
d199cd4f24
batch sdxl + sd1 + compute_clip_text_embedding
Co-authored-by: Cédric Deltheil <355031+deltheil@users.noreply.github.com>
2024-02-21 15:17:11 +01:00
Cédric Deltheil
5ab5d7fd1c
import ControlLoraAdapter part of latent_diffusion
2024-02-19 14:11:32 +01:00
Laurent
2a3e353f04
enable StyleAligned-related docstrings in mkdocstrings
2024-02-15 15:22:47 +01:00
Laurent
efa3988638
implement StyleAlignedAdapter
2024-02-15 15:22:47 +01:00
limiteinductive
432e32f94f
rename Scheduler -> LRScheduler
2024-02-15 11:48:36 +01:00
Laurent
684303230d
export ControlLora and ControlLoraAdapter in refiners.foundationals.latent_diffusion.stable_diffusion_xl
2024-02-15 11:32:49 +01:00
Laurent
41a5ce2052
implement ControlLora and ControlLoraAdapter
2024-02-14 18:20:46 +01:00