Commit graph

149 commits

Author SHA1 Message Date
Pierre Chapuis 999e429697 fix bug in dpm_solver_first_order_update 2024-01-18 19:23:11 +01:00
Pierre Chapuis 59db1f0bd5 style
- avoid useless multiple assignments
- use consistent variable names
2024-01-18 19:23:11 +01:00
Pierre Chapuis aaddead17d DPM: add a mode to use first order for last step 2024-01-18 19:23:11 +01:00
hugojarkoff 17d9701dde Remove additional noise in final sample of DDIM inference process 2024-01-18 18:43:13 +01:00
limiteinductive a1f50f3f9d refactor Lora, LoraAdapter, and the latent_diffusion/lora file 2024-01-18 16:27:38 +01:00
hugojarkoff a6a9c8b972 Fix Value dimension in ImageCrossAttention 2024-01-17 16:46:24 +01:00
limiteinductive 2b977bc69e fix broken self-attention guidance with ip-adapter
The #168 and #177 refactorings caused this regression. A new end-to-end
test has been added for proper coverage.

(This fix will be revisited at some point)
2024-01-16 17:21:24 +01:00
limiteinductive d9ae7ca6a5 cast to float32 before converting to image in tensor_to_image to fix bfloat16 conversion 2024-01-16 11:50:58 +01:00
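For context on the fix above: NumPy (and hence PIL) has no bfloat16 dtype, so a bfloat16 tensor must be upcast before it can be turned into an image. A minimal sketch of the cast-first idea, using a hypothetical helper name rather than the actual refiners `tensor_to_image`:

```python
import numpy as np

def to_image_array(data: np.ndarray) -> np.ndarray:
    # Hypothetical helper, not the refiners implementation.
    # Cast to float32 first: NumPy/PIL cannot represent bfloat16,
    # so low-precision inputs must be upcast before conversion.
    data = data.astype(np.float32)
    # Scale [0, 1] floats to the 0-255 byte range images expect.
    return (data.clip(0.0, 1.0) * 255).astype(np.uint8)
```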
Colle 457c3f5cbd display weighted module dtype and device (#173)
Co-authored-by: Benjamin Trom <benjamintrom@gmail.com>
2024-01-11 22:37:35 +01:00
limiteinductive 14ce2f50f9 make trainer an abstract class 2024-01-11 18:19:18 +01:00
limiteinductive deed703617 simplify CrossAttentionAdapter even more
Following Laurent2916's idea: see #167
2024-01-11 14:47:12 +01:00
limiteinductive 3ab8ed2989 remove unused script field from training BaseConfig 2024-01-11 12:28:47 +01:00
Colle c141091afc Make summarize_tensor robust to non-float dtypes (#171) 2024-01-11 09:57:58 +01:00
Cédric Deltheil 2b2b6740b7 fix or silence pyright issues 2024-01-10 16:53:06 +01:00
Cédric Deltheil 65f19d192f ruff fix 2024-01-10 16:53:06 +01:00
Cédric Deltheil ad143b0867 ruff format 2024-01-10 16:53:06 +01:00
Israfel Salazar 8423c5efa7 feature: Euler scheduler (#138) 2024-01-10 11:32:40 +01:00
limiteinductive c9e973ba41 refactor CrossAttentionAdapter to work with context. 2024-01-08 15:20:23 +01:00
hugojarkoff 00f494efe2 SegmentAnything: add dense mask prompt support 2024-01-05 18:53:25 +01:00
limiteinductive 20c229903f upgrade pyright to 1.1.342 ; improve no_grad typing 2023-12-29 15:09:02 +01:00
limiteinductive 12eef9cca5 remove default hf_repo from config 2023-12-20 16:58:12 +01:00
limiteinductive 6a1fac876b remove huggingface datasets from default config 2023-12-20 16:58:12 +01:00
Cédric Deltheil 22ce3fd033 sam: wrap high-level methods with no_grad 2023-12-19 21:45:23 +01:00
Cédric Deltheil 68cc346905 add minimal unit tests for DINOv2
To be completed with tests using image preprocessing, e.g. test cosine
similarity on a relevant pair of images
2023-12-18 10:29:28 +01:00
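The cosine-similarity check mentioned in the commit body fits in a few lines; this is a generic sketch over embedding vectors, not the test code from the repository:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine of the angle between two embedding vectors:
    # 1.0 means identical direction, 0.0 means orthogonal.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```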
Laureηt 9337d65e0e feature: add DINOv2
Co-authored-by: Benjamin Trom <benjamintrom@gmail.com>
2023-12-14 17:27:32 +01:00
limiteinductive 7d9ceae274 change default behavior of end to None 2023-12-13 17:03:28 +01:00
Cédric Deltheil 82a2aa1ec4 deprecate DDPM step which is unused for now 2023-12-13 15:51:42 +01:00
limiteinductive a7551e0392 Change fl.Slicing API 2023-12-13 09:38:13 +01:00
Cédric Deltheil 11b0ff6f8c ddim: remove unused attribute 2023-12-12 17:26:14 +01:00
limiteinductive 7992258dd2 add before/after init callback to trainer 2023-12-12 10:22:55 +01:00
Pierre Chapuis 42a0fc4aa0 fix circular imports 2023-12-11 15:27:11 +01:00
Cédric Deltheil 792a0fc3d9 run lint rules using latest isort settings 2023-12-11 11:58:43 +01:00
Cédric Deltheil 4fc5e427b8 training_utils: fix extra detection
Requirements could be, e.g.:

    wandb (>=0.15.7,<0.16.0) ; extra == "training"

Or:

    wandb>=0.16.0; extra == 'training'

Follow up of 86c5497
2023-12-08 19:09:16 +01:00
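Both marker spellings shown above can be matched with one tolerant pattern; a minimal sketch with a hypothetical helper name, not the actual training_utils code:

```python
import re

def requires_extra(requirement: str, extra: str) -> bool:
    # Match an environment marker such as `; extra == "training"`,
    # tolerating optional spaces around ';' and '==' as well as
    # single or double quotes around the extra name.
    pattern = r";\s*extra\s*==\s*['\"]" + re.escape(extra) + r"['\"]"
    return re.search(pattern, requirement) is not None
```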
limiteinductive 86c54977b9 replace poetry with rye for Python dependency management
Co-authored-by: Cédric Deltheil <cedric@deltheil.me>
Co-authored-by: Pierre Chapuis <git@catwell.info>
2023-12-08 17:40:10 +01:00
limiteinductive 807ef5551c refactor fl.Parameter basic layer
Co-authored-by: Cédric Deltheil <cedric@deltheil.me>
2023-12-08 10:20:34 +01:00
Cédric Deltheil 46b4b4b462 training_utils: fix naming issue timestep->step 2023-12-05 10:05:34 +01:00
limiteinductive 1075ea4a62 fix ddpm and ddim __init__ 2023-12-04 15:27:06 +01:00
limiteinductive ad8f02e555 add Karras sampling to the Scheduler abstract class; default is quadratic 2023-12-04 15:27:06 +01:00
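"Karras sampling" here presumably refers to the noise schedule from Karras et al. (2022), which spaces sigmas by interpolating in sigma^(1/rho) space. A minimal sketch with illustrative parameter values (not the refiners defaults):

```python
def karras_sigmas(n: int, sigma_min: float = 0.1, sigma_max: float = 10.0, rho: float = 7.0) -> list[float]:
    # Interpolate linearly in sigma^(1/rho) space, then raise back to
    # the rho-th power: this concentrates steps near sigma_min, where
    # fine detail is resolved.
    min_inv = sigma_min ** (1 / rho)
    max_inv = sigma_max ** (1 / rho)
    ramp = [i / (n - 1) for i in range(n)]
    return [(max_inv + r * (min_inv - max_inv)) ** rho for r in ramp]
```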
Pierre Chapuis f22f969d65 remove Black preview mode
also fix multiline logs in training
2023-12-04 14:15:56 +01:00
Bryce 4176868e79 feature: add sliced-attention for memory efficiency
This allowed me to produce HD images on an M1 with 32 GB, and 7000x5000 images on an Nvidia 4090.

I saw no visual difference in images generated.

Some datapoints on slice_size:
# 4096 max needed for SD 1.5 512x512
# 9216 max needed for SD 1.5 768x768
# 16384 max needed for SD 1.5 1024x1024
# 32400 max needed for SD 1.5 1920x1080 (HD)
# 129600 max needed for SD 1.5 3840x2160 (4k)
# 234375 max needed for SD 1.5 5000x3000
2023-12-01 15:30:23 +01:00
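The slice_size datapoints above come from slicing attention over the query dimension, so the full (seq, seq) score matrix never materializes at once. A minimal NumPy sketch of the idea (illustrative, not the refiners implementation):

```python
import numpy as np

def sliced_attention(q: np.ndarray, k: np.ndarray, v: np.ndarray, slice_size: int) -> np.ndarray:
    # Process queries in chunks: peak memory holds a (slice_size, seq)
    # score matrix instead of the full (seq, seq) one.
    scale = q.shape[-1] ** -0.5
    out = np.empty((q.shape[0], v.shape[1]), dtype=q.dtype)
    for start in range(0, q.shape[0], slice_size):
        chunk = q[start : start + slice_size]
        scores = (chunk @ k.T) * scale
        scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)
        out[start : start + slice_size] = weights @ v
    return out
```

Results are identical to unsliced attention; only peak memory changes.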
Benjamin Trom 2d4c4774f4 add maxpool to refiners layer 2023-11-20 10:58:53 +01:00
Bryce f666bc82f5 feature: support self-attention guidance with SD1 inpainting model 2023-11-20 10:17:15 +01:00
Cédric Deltheil ab0915d052 add tests for FreeU 2023-11-18 16:15:44 +01:00
Benjamin Trom 6eeb01137d Add Adapter in refiners.fluxion.adapters init 2023-11-18 13:54:40 +01:00
isamu-isozaki 770879a6df FreeU 2023-11-17 17:22:20 +01:00
Cédric Deltheil fc71e900a0 black 2023-10-21 13:51:06 +02:00
Benjamin Trom ea44262a39 unnest Residual subchain by modifying its forward
Also replaced the remaining Sum-Identity layers with Residual.

The tolerance used to compare SAM's ViT models has been tweaked: for
some reason there is a small difference (in float32) in the neck layer
(first conv2D).

Co-authored-by: Cédric Deltheil <cedric@deltheil.me>
2023-10-19 10:34:51 +02:00
Benjamin Trom 6ddd901767 improve image_to_tensor and tensor_to_image utils 2023-10-17 18:08:58 +02:00
limiteinductive 585c7ad55a improve consistency of the dpm scheduler 2023-10-12 15:48:43 +02:00
limiteinductive 7a62049d54 implement Restart method for latent diffusion 2023-10-12 15:48:43 +02:00