Laurent
eb4bb34f8b
(training_utils) add new ForceCommit callback
2024-04-16 14:43:10 +02:00
limiteinductive
be7d065a33
Add DataloaderConfig to Trainer
2024-04-15 20:56:19 +02:00
limiteinductive
b9b999ccfe
turn scoped_seed into a context manager
2024-04-13 15:03:35 +02:00
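A minimal sketch of what a seed-scoping context manager can look like (illustration only, not the actual refiners implementation, which may also handle e.g. CUDA RNG state):
import random
from contextlib import contextmanager
import torch

@contextmanager
def scoped_seed(seed: int):
    # save the current RNG states, seed deterministically, restore on exit
    py_state = random.getstate()
    torch_state = torch.get_rng_state()
    random.seed(seed)
    torch.manual_seed(seed)
    try:
        yield
    finally:
        random.setstate(py_state)
        torch.set_rng_state(torch_state)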
Pierre Colle
64692c3b5b
TrainerClock: assert dataset_length >= batch_size
2024-04-12 15:05:52 +02:00
Pierre Colle
0ac290f67d
SAM: expose sizing helpers
2024-04-12 08:56:23 +02:00
Laurent
06ff2f0a5f
add support for dinov2 giant flavors
2024-04-11 14:48:33 +02:00
Laurent
04e59bf3d9
fix GLU Activation docstrings
2024-04-11 14:48:33 +02:00
limiteinductive
f26b6ee00a
add static typing to __call__ method for latent_diffusion models; fix multi_diffusion bug that wasn't taking guidance_scale into account
2024-04-11 12:13:30 +02:00
Cédric Deltheil
a2ee705783
hq sam: add constructor args to docstring
Additionally, mark `register_adapter_module` for internal use.
2024-04-08 11:46:37 +02:00
Pierre Colle
d05ebb8dd3
SAM/HQSAMAdapter: docstring examples
2024-04-08 07:12:57 +02:00
Pierre Chapuis
e033306f60
use factories in context example
Using the same instance multiple times is a bad idea
because PyTorch memorizes things internally. Among
other things this breaks Chain's `__repr__`.
2024-04-05 18:06:54 +02:00
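The point above, sketched with refiners' fl.Chain and fl.Linear (layer sizes made up for illustration):
import refiners.fluxion.layers as fl

# bad: the same instance appears twice in the Chain; PyTorch keeps
# per-instance state internally, and sharing breaks Chain's __repr__
shared = fl.Linear(8, 8)
bad = fl.Chain(shared, shared)

# good: a factory returns a fresh instance for each use
def make_linear() -> fl.Linear:
    return fl.Linear(8, 8)

good = fl.Chain(make_linear(), make_linear())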
hugojarkoff
bbb46e3fc7
Fix clock step inconsistencies on batch end
2024-04-05 15:52:43 +02:00
Pierre Chapuis
09af570b23
add DINOv2-FD metric
2024-04-03 16:45:00 +02:00
Cédric Deltheil
c529006d13
get rid of invisible-watermark test dependency
Was needed originally for diffusers' StableDiffusionXLPipeline. It has
been relaxed in the meantime (see `add_watermarker` for details).
2024-04-03 14:48:56 +02:00
Laurent
2ecf7e4b8c
skip dinov2 float16 test on cpu + test dinov2 when batch_size>1
2024-04-02 18:57:25 +02:00
Laurent
5f07fa9c21
fix dinov2 interpolation, support batching
2024-04-02 18:57:25 +02:00
Laurent
ef427538a6
revert "arange" typo ignore
2024-04-02 18:18:22 +02:00
Pierre Chapuis
6c40f56c3f
make numpy dependency explicit
2024-04-02 18:15:52 +02:00
Pierre Chapuis
fd5a15c7e0
update pyright and fix Pillow 10.3 typing issues
2024-04-02 18:15:52 +02:00
Laurent
328fcb8ed1
update typos config, ignore torch.arange
2024-04-02 15:37:28 +02:00
Laurent
1a8ea9180f
refactor dinov2 tests, check against official implementation
2024-04-02 10:02:43 +02:00
Laurent
4f94dfb494
implement dinov2 positional embedding interpolation
2024-04-02 10:02:43 +02:00
Laurent
0336bc78b5
simplify interpolate function and layer
2024-04-02 10:02:43 +02:00
Pierre Colle
6c37e3f933
hq-sam: weights/load_weights
2024-03-29 11:25:43 +01:00
Pierre Chapuis
2b48988c07
add missing word in documentation
2024-03-28 14:41:27 +01:00
Pierre Chapuis
cb6ca60a4e
add ci.yml to source (so it runs when we change it)
2024-03-28 14:40:07 +01:00
Pierre Chapuis
daaa8c5416
use uv for Rye
2024-03-28 14:40:07 +01:00
Pierre Chapuis
404a15aad2
tweak auto_attach_loras so debugging is easier when it fails
2024-03-26 16:12:48 +01:00
Cédric Deltheil
2345f01dd3
test weights: check hash of pre-downloaded weights
2024-03-26 16:01:03 +01:00
Cédric Deltheil
04daeced73
test weights: fix control-lora expected hashes
2024-03-26 16:01:03 +01:00
Laurent
a0715806d2
modify ip_adapter's ImageCrossAttention scale getter and setter
this new version makes it robust in case multiple Multiply-s are inside the Chain (e.g. if the Linear layers are LoRA-ified)
2024-03-26 11:15:04 +01:00
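A hedged sketch of the idea: update every Multiply found in the chain instead of assuming a single one. set_scale is a hypothetical helper; it assumes Chain.layers and Multiply's scale attribute from refiners.fluxion.layers:
import refiners.fluxion.layers as fl

def set_scale(chain: fl.Chain, scale: float) -> None:
    # walk all Multiply layers so the setter keeps working when
    # LoRA-ification inserts extra layers around the Linears
    for multiply in chain.layers(fl.Multiply):
        multiply.scale = scale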
Laurent
7e64ba4011
modify ip_adapter's CrossAttentionAdapters injection logic
2024-03-26 11:15:04 +01:00
Cédric Deltheil
df0cc2aeb8
do not call __getattr__ with keyword argument
Same for __setattr__. Use positional arguments instead. E.g.:
import torch
import refiners.fluxion.layers as fl
m = torch.compile(fl.Linear(1, 1))
m(torch.zeros(1))
# TypeError: Module.__getattr__() got an unexpected keyword argument 'name'
2024-03-25 21:46:13 +01:00
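The fix the message describes, sketched as a before/after on a hypothetical call site inside a Module subclass:
import torch.nn as nn

class Example(nn.Module):
    def lookup(self, name: str):
        # before (breaks once the module is torch.compile'd):
        #   return super().__getattr__(name=name)
        # after: passing the name positionally works in both cases
        return super().__getattr__(name)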
hugojarkoff
0f87ea29e0
Update README.md with HQ-SAM news
2024-03-25 09:19:19 +01:00
Pierre Colle
cba83b0558
SAM init with mask_decoder after #325
2024-03-24 20:18:57 +01:00
Pierre Colle
5c937b184a
HQ-SAM logit equality test, following #331
2024-03-23 21:58:32 +01:00
Pierre Colle
2763db960e
SAM e2e test tolerance explained
2024-03-22 21:31:28 +01:00
Pierre Chapuis
364e196874
support no CFG in compute_clip_text_embedding
2024-03-22 17:06:51 +01:00
Pierre Colle
94e8b9c23f
SAM MaskDecoder token slicing
2024-03-22 13:11:40 +01:00
hugojarkoff
a93ceff752
Add HQ-SAM Adapter
2024-03-21 15:36:55 +01:00
hugojarkoff
c6b5eb24a1
Add logits comparison for base SAM in single mask output prediction mode
2024-03-21 10:48:48 +01:00
limiteinductive
38c86f59f4
Switch gradient clipping to native torch.nn.utils.clip_grad_norm_
2024-03-19 22:08:48 +01:00
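For reference, minimal usage of the native utility the trainer now relies on:
import torch

model = torch.nn.Linear(8, 8)
model(torch.randn(2, 8)).sum().backward()
# rescale gradients in place so their global norm is at most max_norm,
# then step the optimizer as usual
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)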
Pierre Colle
68fe725767
Add multimask_output flag to SAM
2024-03-19 17:40:26 +01:00
limiteinductive
6a72943ff7
change TimeValue to a dataclass
2024-03-19 14:49:24 +01:00
Laurent
b8fae60d38
make LoRA's weight initialization overridable
2024-03-13 17:32:16 +01:00
Pierre Chapuis
c1b3a52141
set refine.rs home title
2024-03-13 16:34:42 +01:00
Pierre Chapuis
e32d8d16f0
LoRA loading: forward exclusions when preprocessing parts of the UNet
2024-03-13 15:25:00 +01:00
limiteinductive
ff5341c85c
Change weight decay for Optimizer to the normal PyTorch default
2024-03-12 15:20:21 +01:00
Laurent
46612a5138
fix stalebot message config
2024-03-11 17:05:14 +01:00
Pierre Chapuis
975560165c
improve docstrings
2024-03-08 15:43:57 +01:00