hugojarkoff
c6b5eb24a1
Add logits comparison for base SAM in single mask output prediction mode
2024-03-21 10:48:48 +01:00
limiteinductive
38c86f59f4
Switch gradient clipping to native torch torch.nn.utils.clip_grad_norm_
2024-03-19 22:08:48 +01:00
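The commit above replaces a hand-rolled gradient clipping routine with PyTorch's built-in `torch.nn.utils.clip_grad_norm_`. As a hedged illustration of what that function does, here is a minimal pure-Python sketch operating on plain lists of floats instead of tensors (the function name and signature here are illustrative only, not the Refiners or PyTorch API):

```python
import math

def clip_grad_norm(grads: list[list[float]], max_norm: float) -> float:
    """Scale all gradients in place so their global L2 norm is at most
    max_norm. Mirrors the spirit of torch.nn.utils.clip_grad_norm_,
    but on plain Python lists for illustration."""
    # Global norm over every parameter's gradient, not per-parameter.
    total_norm = math.sqrt(sum(g * g for grad in grads for g in grad))
    # Only scale down, never up; epsilon guards against division by zero.
    clip_coef = max_norm / (total_norm + 1e-6)
    if clip_coef < 1.0:
        for grad in grads:
            for i, g in enumerate(grad):
                grad[i] = g * clip_coef
    return total_norm
```

The real call in a training loop would be `torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)`, placed between `loss.backward()` and `optimizer.step()`.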
Pierre Colle
68fe725767
Add multimask_output flag to SAM
2024-03-19 17:40:26 +01:00
limiteinductive
6a72943ff7
change TimeValue to a dataclass
2024-03-19 14:49:24 +01:00
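Turning `TimeValue` into a dataclass buys value-based equality, hashing, and a readable repr for free. A minimal sketch of the idea, assuming hypothetical `number`/`unit` fields (the real class lives in Refiners' training utils and may differ):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TimeValue:
    """A duration expressed as a count plus a unit, e.g. 10 epochs.

    Field names are hypothetical, for illustration only.
    """
    number: int
    unit: str

# Unlike a hand-written class, two instances with the same field
# values compare equal and can be used as dict/set keys (frozen=True).
```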
Pierre Chapuis
5d784bedab
add test for "Adapting SDXL" guide
2024-03-08 15:43:57 +01:00
Pierre Chapuis
72fa17df48
fix slider loras test
2024-03-08 15:43:57 +01:00
Pierre Chapuis
8c7fcbc00f
LoRA manager: move exclude / include to add_loras call
...
Always exclude the TimestepEncoder by default.
This is because some keys include both e.g. `resnet` and `time_emb_proj`.
Blocks that tend to get mixed up with others are handled in a separate
auto_attach call.
2024-03-08 15:43:57 +01:00
Pierre Chapuis
052a20b897
remove add_multiple_loras
2024-03-08 15:43:57 +01:00
Pierre Chapuis
c383ff6cf4
fix DPO LoRA loading in tests
2024-03-08 15:43:57 +01:00
Pierre Chapuis
1eb71077aa
use same scale setter / getter interface for all controls
2024-03-08 11:29:28 +01:00
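A uniform scale interface means every control type exposes the same `scale` property instead of ad-hoc setters. A hypothetical sketch of the pattern (class and attribute names are illustrative, not the Refiners API):

```python
class Control:
    """Illustrative control adapter exposing a uniform `scale`
    property, so callers never need to know which control type
    they hold to adjust its strength."""

    def __init__(self, scale: float = 1.0) -> None:
        self._scale = scale

    @property
    def scale(self) -> float:
        return self._scale

    @scale.setter
    def scale(self, value: float) -> None:
        # Validation lives in one place instead of in each setter method.
        if value < 0:
            raise ValueError("scale must be non-negative")
        self._scale = value
```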
Pierre Chapuis
be2368cf20
ruff 3 formatting (Rye 0.28)
2024-03-08 10:42:05 +01:00
Pierre Chapuis
a0be5458b9
snip long prompt in tests
2024-03-05 19:54:44 +01:00
Pierre Chapuis
d5d199edc5
add tests for SDXL Lightning
2024-02-26 12:14:02 +01:00
Pierre Chapuis
7e4e0f0650
correctly scale init latents for Euler scheduler
2024-02-26 12:14:02 +01:00
Pierre Chapuis
bf0ba58541
refactor solver params, add sample prediction type
2024-02-26 12:14:02 +01:00
Pierre Chapuis
ddc1cf8ca7
refactor solvers to support different timestep spacings
2024-02-26 12:14:02 +01:00
Pierre Chapuis
8f614e7647
check hash of downloaded LoRA weights, update DPO refs
...
(the DPO LoRA weights have changed: 2699b36e22)
2024-02-23 12:02:18 +01:00
Cédric Deltheil
176807740b
control_lora: fix adapter set scale
...
The adapter set scale did not propagate the scale to the underlying
zero convolutions. The value set at construction time was used instead.
Follow-up of #285
2024-02-22 10:01:05 +01:00
Pierre Chapuis
684e2b9a47
add docstrings for LCM / LCM-LoRA
2024-02-21 16:37:27 +01:00
Pierre Chapuis
383c3c8a04
add tests for LCM and LCM-LoRA
...
(As of now, LoRA with guidance > 1, and especially the base model, do not pass with these tolerances.)
2024-02-21 16:37:27 +01:00
Pierre Chapuis
c8c6294550
add LCMSolver (Latent Consistency Models)
2024-02-21 16:37:27 +01:00
Cédric Deltheil
446967859d
test_style_aligned: switch to CLIP text batch API
...
Added in #263
2024-02-21 16:33:03 +01:00
Pierre Colle
d199cd4f24
batch sdxl + sd1 + compute_clip_text_embedding
...
Co-authored-by: Cédric Deltheil <355031+deltheil@users.noreply.github.com>
2024-02-21 15:17:11 +01:00
Cédric Deltheil
5ab5d7fd1c
import ControlLoraAdapter part of latent_diffusion
2024-02-19 14:11:32 +01:00
Laurent
da3c3602fb
write StyleAligned e2e test
2024-02-15 15:22:47 +01:00
Laurent
60c0780fe7
write StyleAligned inject/eject tests
2024-02-15 15:22:47 +01:00
limiteinductive
432e32f94f
rename Scheduler -> LRScheduler
2024-02-15 11:48:36 +01:00
Laurent
00270604ef
fix conversion_script bug + rename control_lora e2e test
2024-02-14 18:20:46 +01:00
Laurent
7fe392298a
write ControlLora e2e tests
2024-02-14 18:20:46 +01:00
limiteinductive
bec845553f
update deprecated validator for field_validator
2024-02-13 18:35:51 +01:00
limiteinductive
ab506b4db2
fix bug that was causing double registration
2024-02-13 11:12:13 +01:00
limiteinductive
3488273f50
Enforce correct subtype for the config param in both decorators
...
Also add a custom ModelConfig for the MockTrainer test
Update src/refiners/training_utils/config.py
Co-authored-by: Cédric Deltheil <355031+deltheil@users.noreply.github.com>
2024-02-12 16:21:04 +01:00
limiteinductive
cef8a9936c
refactor register_model decorator
2024-02-12 16:21:04 +01:00
limiteinductive
d6546c9026
add @register_model and @register_callback decorators
...
Refactor ClockTrainer to include Callback
2024-02-12 10:24:19 +01:00
limiteinductive
f541badcb3
Allow optional train ModelConfig + forbid extra input for configs
2024-02-10 16:13:10 +01:00
Pierre Colle
25bfa78907
lr, betas, eps, weight_decay at model level
...
Co-authored-by: Cédric Deltheil <355031+deltheil@users.noreply.github.com>
2024-02-09 12:05:13 +01:00
Cédric Deltheil
9aefc9896c
test_trainer: use model_copy instead of copy
...
The `copy` method has been deprecated.
2024-02-08 20:07:34 +01:00
Colle
f4aa0271b8
less than 1 epoch training duration
2024-02-08 19:20:31 +01:00
limiteinductive
41508e0865
change param name of abstract get_item method
2024-02-08 18:52:52 +01:00
Laurent
6d599d53fd
beautify EXPECTED_TREE in test_chain.py
2024-02-08 15:09:47 +01:00
limiteinductive
2e526d35d1
Make Dataset part of the trainer
2024-02-07 16:13:01 +01:00
limiteinductive
2ef4982e04
remove wandb from base config
2024-02-07 11:06:59 +01:00
Pierre Chapuis
11da76f7df
fix sdxl structural copy
2024-02-07 10:51:26 +01:00
Pierre Chapuis
ca9e89b22a
cosmetics
2024-02-07 10:51:26 +01:00
limiteinductive
ea05f3d327
make device and dtype work in Trainer class
2024-02-06 23:10:10 +01:00
Pierre Chapuis
37425fb609
make LoRA generic
2024-02-06 11:32:18 +01:00
Pierre Chapuis
471ef91d1c
make __getattr__ on Module return object, not Any
...
PyTorch chose to make it Any because they expect their users' code
to be "highly dynamic": https://github.com/pytorch/pytorch/pull/104321
This is not the case for us: in Refiners, untyped code
goes against one of our core principles.
Note that there is currently an open PR in PyTorch to
return `Module | Tensor`, but in practice this is not always
correct either: https://github.com/pytorch/pytorch/pull/115074
I also moved Residuals-related code from SD1 to latent_diffusion
because SDXL should not depend on SD1.
2024-02-06 11:32:18 +01:00
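The point of typing `__getattr__` as returning `object` rather than `Any` is that static checkers then force callers to narrow the type before using the value, so untyped values cannot silently leak through. A small self-contained sketch of the pattern (this is not the actual Refiners `Module`, just an illustration):

```python
from typing import cast

class Module:
    """Illustrative container whose dynamic attribute lookup is typed
    as `object`: the value comes back, but the type checker refuses
    operations on it until the caller narrows the type."""

    def __init__(self) -> None:
        self._children: dict[str, object] = {}

    def __getattr__(self, name: str) -> object:
        # Only called when normal attribute lookup fails.
        try:
            return self._children[name]
        except KeyError:
            raise AttributeError(name)

m = Module()
m._children["scale"] = 2.0
# With `-> Any`, `m.scale + 1` would type-check silently.
# With `-> object`, the caller must narrow first:
value = cast(float, m.scale)
```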
Pierre Chapuis
3de1508b65
increase tolerance on Euler test
2024-02-04 08:58:22 +01:00
Pierre Chapuis
83b478c0ff
fix test failure caused by Diffusers 0.26.0
2024-02-04 08:58:22 +01:00
Colle
4a6146bb6c
clip text, lda encode batch inputs
...
* text_encoder([str1, str2])
* lda decode_latents/encode_image image_to_latent/latent_to_image
* images_to_tensor, tensor_to_images
---------
Co-authored-by: Cédric Deltheil <355031+deltheil@users.noreply.github.com>
2024-02-01 17:05:28 +01:00
Pierre Chapuis
8a2b955bd0
add a test for SDXL + EulerScheduler (deterministic)
2024-02-01 16:17:07 +01:00
Pierre Chapuis
5ac5373310
add a test for SDXL with sliced attention
2024-02-01 16:17:07 +01:00
Pierre Chapuis
3ddd258d36
add a test for noise schedules
2024-02-01 16:17:07 +01:00
Pierre Chapuis
df843f5226
test SAG setter
2024-02-01 16:17:07 +01:00
Pierre Chapuis
f4ed7254fa
test IP adapter scale setter
2024-02-01 16:17:07 +01:00
Pierre Chapuis
8341d3a74b
use float16, save memory
2024-02-01 16:17:07 +01:00
Pierre Chapuis
d185711bc5
add tests based on repr for inject / eject
2024-02-01 16:17:07 +01:00
Pierre Chapuis
0e77ef1720
add inject / eject test for concept extender (+ better errors)
2024-02-01 16:17:07 +01:00
Pierre Chapuis
93270ec2d7
add inject / eject test for t2i adapter
2024-02-01 16:17:07 +01:00
Pierre Chapuis
f43a530254
add extra tests for Chain
2024-02-01 16:17:07 +01:00
Pierre Chapuis
bca50b71f2
test (and fix) basic_attributes
2024-02-01 16:17:07 +01:00
Pierre Chapuis
bba478abf2
create test_module
2024-02-01 16:17:07 +01:00
limiteinductive
abe50076a4
add NoiseSchedule to solvers __init__ + simplify some import pathing
...
further improve import pathing
2024-01-31 17:03:52 +01:00
limiteinductive
73f6ccfc98
make Scheduler a fl.Module + Change name Scheduler -> Solver
2024-01-31 17:03:52 +01:00
Pierre Chapuis
7eb8eb4c68
add support for pytorch 2.2 (2.1 is still supported)
...
also bump all dev dependencies to their latest version
2024-01-31 15:03:06 +01:00
Cédric Deltheil
5634e68fde
add end-to-end test for multi-ip adapter
2024-01-31 11:03:49 +01:00
Cédric Deltheil
feff4c78ae
segment-anything: fix class name typo
...
Note: weights are impacted
2024-01-30 09:52:40 +01:00
Pierre Chapuis
19bc081658
call GC between each e2e test to avoid OOM
2024-01-29 17:57:40 +01:00
Pierre Chapuis
ce0339b4cc
add a get_path helper to modules
2024-01-26 19:31:13 +01:00
limiteinductive
0ee2d5e075
Fix warmup steps calculation when gradient_accumulation is used
2024-01-25 12:20:36 +01:00
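The subtlety behind this fix: with gradient accumulation the optimizer only steps once every N batches, so a warmup length expressed in batches must be converted into optimizer steps. A hedged sketch of a linear warmup schedule that accounts for this (names and formula are illustrative, not the actual Refiners implementation):

```python
def warmup_lr(step: int, base_lr: float, warmup_batches: int,
              grad_accumulation: int) -> float:
    """Linear LR warmup measured in optimizer steps.

    `step` counts optimizer updates; an update happens once every
    `grad_accumulation` batches, so the warmup length in batches is
    divided by the accumulation factor. Illustrative only.
    """
    warmup_steps = max(1, warmup_batches // grad_accumulation)
    if step >= warmup_steps:
        return base_lr
    return base_lr * step / warmup_steps
```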
Bryce
12a5439fc4
refactor: rename noise => predicted_noise
...
and in euler, `alt_noise` can now be simply `noise`
2024-01-24 18:15:10 +01:00
limiteinductive
3b458f0d8d
fix test_names LoraManager test
2024-01-23 14:12:03 +01:00
limiteinductive
421da6a3b6
Load Multiple LoRAs with SDLoraManager
2024-01-23 14:12:03 +01:00
Cédric Deltheil
40c33b9595
rollback to 50 inference steps in IP-Adapter tests
...
Follow-up of 8a36c8c.
This is what is used in the official notebooks (ip_adapter_demo.ipynb and
ip_adapter-plus_demo.ipynb)
2024-01-22 09:35:31 +01:00
limiteinductive
ed3621362f
Add load_tensors utils in fluxion
2024-01-21 12:34:33 +01:00
Pierre Colle
91aea9b7ff
fix: summarize_tensor(tensor) when tensor.numel() == 0
2024-01-20 14:32:35 +01:00
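The failure mode this commit fixes is generic: statistics such as min, max, or mean are undefined on an empty input, so a summarizer must guard for it before computing them. A minimal pure-Python sketch of the guard (function name and output format are hypothetical, not the `summarize_tensor` API):

```python
def summarize(values: list[float]) -> str:
    """Summarize a sequence of numbers; guard the empty case first,
    mirroring a check like `tensor.numel() == 0`."""
    if not values:
        return "empty"
    mean = sum(values) / len(values)
    return f"min={min(values):.2f} max={max(values):.2f} mean={mean:.2f}"
```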
Pierre Chapuis
8a36c8c279
make the first diffusion step a first class property of LDM & Schedulers
2024-01-19 18:52:45 +01:00
hugojarkoff
42b7749630
Fix references for e2e tests
2024-01-19 15:00:03 +01:00
Pierre Chapuis
ce3035923b
improve DPM solver test
2024-01-18 19:23:11 +01:00
hugojarkoff
17d9701dde
Remove additional noise in final sample of DDIM inference process
2024-01-18 18:43:13 +01:00
limiteinductive
a1f50f3f9d
refactor Lora LoraAdapter and the latent_diffusion/lora file
2024-01-18 16:27:38 +01:00
limiteinductive
2b977bc69e
fix broken self-attention guidance with ip-adapter
...
The #168 and #177 refactorings caused this regression. A new end-to-end
test has been added for proper coverage.
(This fix will be revisited at some point)
2024-01-16 17:21:24 +01:00
limiteinductive
d9ae7ca6a5
cast to float32 before converting to image in tensor_to_image to fix bfloat16 conversion
2024-01-16 11:50:58 +01:00
limiteinductive
7f722029be
add basic unit test for training_utils
2024-01-14 22:08:20 +01:00
Colle
dba9065229
fix test_debug_print
...
Follow-up of #173
2024-01-12 18:32:22 +01:00
Colle
c141091afc
Make summarize_tensor robust to non-float dtypes (#171)
2024-01-11 09:57:58 +01:00
Cédric Deltheil
ce0f9887a3
test_schedulers: fix pyright error
...
Due to changes in diffusers 0.25.0
2024-01-10 16:53:06 +01:00
Cédric Deltheil
6dbaec3e56
add end-to-end test for euler scheduler
...
Reference image generated with diffusers [1]
[1]: tests/e2e/test_diffusion_ref/README.md#expected-outputs
2024-01-10 16:53:06 +01:00
Cédric Deltheil
2b2b6740b7
fix or silence pyright issues
2024-01-10 16:53:06 +01:00
Cédric Deltheil
65f19d192f
ruff fix
2024-01-10 16:53:06 +01:00
Cédric Deltheil
ad143b0867
ruff format
2024-01-10 16:53:06 +01:00
Israfel Salazar
8423c5efa7
feature: Euler scheduler (#138)
2024-01-10 11:32:40 +01:00
limiteinductive
c9e973ba41
refactor CrossAttentionAdapter to work with context.
2024-01-08 15:20:23 +01:00
hugojarkoff
00f494efe2
SegmentAnything: add dense mask prompt support
2024-01-05 18:53:25 +01:00
limiteinductive
20c229903f
upgrade pyright to 1.1.342 ; improve no_grad typing
2023-12-29 15:09:02 +01:00
Cédric Deltheil
22ce3fd033
sam: wrap high-level methods with no_grad
2023-12-19 21:45:23 +01:00
Cédric Deltheil
e7892254eb
dinov2: add some coverage for registers
...
These are not yet supported in HF, so we just compare against a precomputed
norm. Note: in the initial PR [1] the Refiners implementation was
tested against the official code using Torch Hub.
[1]:
https://github.com/finegrain-ai/refiners/pull/132#issuecomment-1852021656
2023-12-18 10:29:28 +01:00
Cédric Deltheil
68cc346905
add minimal unit tests for DINOv2
...
To be completed with tests using image preprocessing, e.g. test cosine
similarity on a relevant pair of images
2023-12-18 10:29:28 +01:00
Benjamin Trom
e2f2e33add
Update tests/fluxion/layers/test_basics.py
...
Co-authored-by: Cédric Deltheil <355031+deltheil@users.noreply.github.com>
2023-12-13 17:03:28 +01:00
limiteinductive
7d9ceae274
change default behavior of end to None
2023-12-13 17:03:28 +01:00