Commit graph

572 commits

Author SHA1 Message Date
limiteinductive f32ccc3474 Remove seed_everything logging because it is too verbose 2024-04-22 18:14:33 +02:00
limiteinductive 5dde281ada Implement EventConfig 2024-04-22 18:14:33 +02:00
limiteinductive 07985694ed fix training_101 import 2024-04-18 20:58:47 +02:00
limiteinductive 446796da57 Refactor TimeValue 2024-04-18 20:58:47 +02:00
Laurent 17246708b9 Add sample_noise staticmethod and modify add_noise to support batched steps 2024-04-18 12:55:49 +02:00
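The `add_noise` change above concerns the standard DDPM forward-noising formula applied with a possibly different step per batch item. A minimal pure-Python sketch (hypothetical names and flat lists, not Refiners' actual tensor-based API):

```python
import math
import random

def sample_noise(n, seed=None):
    # hypothetical sketch: n samples of standard Gaussian noise
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

def add_noise(x0, noise, alpha_bars, steps):
    # DDPM forward noising, one (possibly different) step per batch item:
    # x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * noise
    out = []
    for x, n, t in zip(x0, noise, steps):
        abar = alpha_bars[t]
        out.append(math.sqrt(abar) * x + math.sqrt(1.0 - abar) * n)
    return out
```

Supporting batched steps simply means indexing `alpha_bars` per sample rather than once for the whole batch.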
limiteinductive 7427c171f6 fix training_utils requirements check 2024-04-17 18:10:28 +02:00
Pierre Colle bf7852b88e SAM: image_to_scaled_tensor gray images 2024-04-16 18:45:17 +02:00
limiteinductive f48712ee29 lint 2024-04-16 16:24:33 +02:00
limiteinductive 347fdbc794 add init file for segment_anything tests 2024-04-16 16:24:33 +02:00
Laurent eb4bb34f8b (training_utils) add new ForceCommit callback 2024-04-16 14:43:10 +02:00
limiteinductive be7d065a33 Add DataloaderConfig to Trainer 2024-04-15 20:56:19 +02:00
limiteinductive b9b999ccfe turn scoped_seed into a context manager 2024-04-13 15:03:35 +02:00
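A seed-scoping context manager of the kind this commit introduces can be sketched with the standard library alone (illustrative only, covering just Python's `random` module rather than every RNG the trainer seeds):

```python
import random
from contextlib import contextmanager

@contextmanager
def scoped_seed(seed):
    # save the RNG state, seed deterministically, and restore the
    # original state on exit so surrounding code is unaffected
    state = random.getstate()
    random.seed(seed)
    try:
        yield
    finally:
        random.setstate(state)
```

Inside the `with` block, draws are reproducible for a given seed; afterwards, the outer random stream continues exactly where it left off.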
Pierre Colle 64692c3b5b TrainerClock: assert dataset_length >= batch_size 2024-04-12 15:05:52 +02:00
Pierre Colle 0ac290f67d SAM: expose sizing helpers 2024-04-12 08:56:23 +02:00
Laurent 06ff2f0a5f add support for dinov2 giant flavors 2024-04-11 14:48:33 +02:00
Laurent 04e59bf3d9 fix GLU Activation docstrings 2024-04-11 14:48:33 +02:00
limiteinductive f26b6ee00a add static typing to __call__ method for latent_diffusion models ; fix multi_diffusion bug that wasn't taking guidance_scale into account 2024-04-11 12:13:30 +02:00
Cédric Deltheil a2ee705783 hq sam: add constructor args to docstring
Additionally, mark `register_adapter_module` for internal use.
2024-04-08 11:46:37 +02:00
Pierre Colle d05ebb8dd3 SAM/HQSAMAdapter: docstring examples 2024-04-08 07:12:57 +02:00
Pierre Chapuis e033306f60 use factories in context example
Using the same instance multiple times is a bad idea
because PyTorch memorizes things internally. Among
other things this breaks Chain's `__repr__`.
2024-04-05 18:06:54 +02:00
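The pitfall this commit's docs change avoids can be illustrated without PyTorch. `Layer` and `Chain` below are toy stand-ins, not Refiners classes: registering the same instance in two containers silently overwrites per-instance bookkeeping, whereas a factory hands each container a fresh instance.

```python
class Layer:
    """Toy stand-in for a module; tracks its parent container."""
    def __init__(self, name):
        self.name = name
        self.parent = None

class Chain:
    """Toy container: registering a layer records this chain as its parent."""
    def __init__(self, *layers):
        self.layers = layers
        for layer in layers:
            layer.parent = self  # a shared instance gets silently re-parented

# Sharing one instance: the second chain clobbers the first one's bookkeeping.
shared = Layer("linear")
c1, c2 = Chain(shared), Chain(shared)

# A factory gives each chain its own fresh instance instead.
def make_layer():
    return Layer("linear")

c3, c4 = Chain(make_layer()), Chain(make_layer())
```

This mirrors why reusing one module instance breaks Chain's `__repr__` and other internal state in the real library.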
hugojarkoff bbb46e3fc7 Fix clock step inconsistencies on batch end 2024-04-05 15:52:43 +02:00
Pierre Chapuis 09af570b23 add DINOv2-FD metric 2024-04-03 16:45:00 +02:00
Cédric Deltheil c529006d13 get rid of invisible-watermark test dependency
Was needed originally for diffusers' StableDiffusionXLPipeline. It has
been relaxed in the meanwhile (see `add_watermarker` for details).
2024-04-03 14:48:56 +02:00
Laurent 2ecf7e4b8c skip dinov2 float16 test on cpu + test dinov2 when batch_size>1 2024-04-02 18:57:25 +02:00
Laurent 5f07fa9c21 fix dinov2 interpolation, support batching 2024-04-02 18:57:25 +02:00
Laurent ef427538a6 revert "arange" typo ignore 2024-04-02 18:18:22 +02:00
Pierre Chapuis 6c40f56c3f make numpy dependency explicit 2024-04-02 18:15:52 +02:00
Pierre Chapuis fd5a15c7e0 update pyright and fix Pillow 10.3 typing issues 2024-04-02 18:15:52 +02:00
Laurent 328fcb8ed1 update typos config, ignore torch.arange 2024-04-02 15:37:28 +02:00
Laurent 1a8ea9180f refactor dinov2 tests, check against official implementation 2024-04-02 10:02:43 +02:00
Laurent 4f94dfb494 implement dinov2 positional embedding interpolation 2024-04-02 10:02:43 +02:00
Laurent 0336bc78b5 simplify interpolate function and layer 2024-04-02 10:02:43 +02:00
Pierre Colle 6c37e3f933 hq-sam: weights/load_weights 2024-03-29 11:25:43 +01:00
Pierre Chapuis 2b48988c07 add missing word in documentation 2024-03-28 14:41:27 +01:00
Pierre Chapuis cb6ca60a4e add ci.yml to source (so it runs when we change it) 2024-03-28 14:40:07 +01:00
Pierre Chapuis daaa8c5416 use uv for Rye 2024-03-28 14:40:07 +01:00
Pierre Chapuis 404a15aad2 tweak auto_attach_loras so debugging is easier when it fails 2024-03-26 16:12:48 +01:00
Cédric Deltheil 2345f01dd3 test weights: check hash of pre-downloaded weights 2024-03-26 16:01:03 +01:00
Cédric Deltheil 04daeced73 test weights: fix control-lora expected hashes 2024-03-26 16:01:03 +01:00
Laurent a0715806d2 modify ip_adapter's ImageCrossAttention scale getter and setter
this new version makes it robust in case multiple Multiply-s are inside the Chain (e.g. if the Linear layers are LoRA-ified)
2024-03-26 11:15:04 +01:00
Laurent 7e64ba4011 modify ip_adapter's CrossAttentionAdapters injection logic 2024-03-26 11:15:04 +01:00
Cédric Deltheil df0cc2aeb8 do not call __getattr__ with keyword argument
Same for __setattr__. Use positional arguments instead. E.g.:

    import torch
    import refiners.fluxion.layers as fl
    m = torch.compile(fl.Linear(1,1))
    m(torch.zeros(1))
    # TypeError: Module.__getattr__() got an unexpected keyword argument 'name'
2024-03-25 21:46:13 +01:00
hugojarkoff 0f87ea29e0 Update README.md with HQ-SAM news 2024-03-25 09:19:19 +01:00
Pierre Colle cba83b0558 SAM init with mask_decoder after #325 2024-03-24 20:18:57 +01:00
Pierre Colle 5c937b184a HQ-SAM logit equal test, following #331 2024-03-23 21:58:32 +01:00
Pierre Colle 2763db960e SAM e2e test tolerance explained 2024-03-22 21:31:28 +01:00
Pierre Chapuis 364e196874 support no CFG in compute_clip_text_embedding 2024-03-22 17:06:51 +01:00
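For context on the no-CFG commit: classifier-free guidance (CFG) combines a conditional and an unconditional prediction, so "no CFG" means using the conditional output directly. A minimal sketch with hypothetical helper names (not the actual `compute_clip_text_embedding` signature, which works on embedding tensors):

```python
def apply_cfg(cond, uncond, scale):
    # classifier-free guidance: start from the unconditional prediction
    # and move toward the conditional one, amplified by `scale`
    return [u + scale * (c - u) for c, u in zip(cond, uncond)]

def text_embedding(cond, uncond=None, scale=7.5):
    # hypothetical sketch: with no unconditional input ("no CFG"),
    # return the conditional embedding unchanged
    if uncond is None:
        return cond
    return apply_cfg(cond, uncond, scale)
```

With `scale=1.0` CFG reduces to the conditional prediction, which is why skipping the unconditional pass entirely is a valid (and cheaper) special case.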
Pierre Colle 94e8b9c23f SAM MaskDecoder token slicing 2024-03-22 13:11:40 +01:00
hugojarkoff a93ceff752 Add HQ-SAM Adapter 2024-03-21 15:36:55 +01:00
hugojarkoff c6b5eb24a1 Add logits comparison for base SAM in single mask output prediction mode 2024-03-21 10:48:48 +01:00