Laurent
328fcb8ed1
update typos config, ignore torch.arange
2024-04-02 15:37:28 +02:00
Laurent
1a8ea9180f
refactor dinov2 tests, check against official implementation
2024-04-02 10:02:43 +02:00
Laurent
4f94dfb494
implement dinov2 positional embedding interpolation
2024-04-02 10:02:43 +02:00
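A minimal sketch of the general technique (not the refiners implementation; the function name and shapes are illustrative): reshape the patch position embeddings into their 2D grid, resize with bicubic interpolation, then flatten back. E.g.:
import torch
import torch.nn.functional as F
def interpolate_pos_embed(pos: torch.Tensor, new_hw: tuple[int, int]) -> torch.Tensor:
    # pos: (1, N, D) grid positional embeddings, N = h * w (class token excluded)
    n, d = pos.shape[1], pos.shape[2]
    hw = int(n**0.5)
    grid = pos.reshape(1, hw, hw, d).permute(0, 3, 1, 2)     # (1, D, h, w)
    grid = F.interpolate(grid, size=new_hw, mode="bicubic")  # resize the grid
    return grid.permute(0, 2, 3, 1).reshape(1, -1, d)        # back to (1, h' * w', D)
interpolate_pos_embed(torch.randn(1, 16 * 16, 384), (24, 24)).shape  # (1, 576, 384)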
Laurent
0336bc78b5
simplify interpolate function and layer
2024-04-02 10:02:43 +02:00
Pierre Colle
6c37e3f933
hq-sam: weights/load_weights
2024-03-29 11:25:43 +01:00
Pierre Chapuis
2b48988c07
add missing word in documentation
2024-03-28 14:41:27 +01:00
Pierre Chapuis
cb6ca60a4e
add ci.yml to source (so it runs when we change it)
2024-03-28 14:40:07 +01:00
Pierre Chapuis
daaa8c5416
use uv for Rye
2024-03-28 14:40:07 +01:00
Pierre Chapuis
404a15aad2
tweak auto_attach_loras so debugging is easier when it fails
2024-03-26 16:12:48 +01:00
Cédric Deltheil
2345f01dd3
test weights: check hash of pre-downloaded weights
2024-03-26 16:01:03 +01:00
Cédric Deltheil
04daeced73
test weights: fix control-lora expected hashes
2024-03-26 16:01:03 +01:00
Laurent
a0715806d2
modify ip_adapter's ImageCrossAttention scale getter and setter
this new version makes it robust in case multiple Multiply-s are inside the Chain (e.g. if the Linear layers are LoRA-ified)
2024-03-26 11:15:04 +01:00
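The idea in a minimal sketch (plain-PyTorch stand-ins, not the actual refiners classes or method names): the setter walks the whole chain and updates every Multiply it finds, rather than assuming one scaling layer at a fixed position. E.g.:
import torch
class Multiply(torch.nn.Module):  # stand-in for fl.Multiply
    def __init__(self, scale: float = 1.0) -> None:
        super().__init__()
        self.scale = scale
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scale * x
class AttentionSketch(torch.nn.Sequential):
    @property
    def scale(self) -> float:
        # read from the first Multiply found
        return next(m for m in self.modules() if isinstance(m, Multiply)).scale
    @scale.setter
    def scale(self, value: float) -> None:
        # update every Multiply, so extra wrapping (e.g. LoRA-ified Linears) is harmless
        for m in self.modules():
            if isinstance(m, Multiply):
                m.scale = value
chain = AttentionSketch(torch.nn.Linear(4, 4), Multiply(), torch.nn.Linear(4, 4), Multiply())
chain.scale = 0.5  # updates both Multiply layers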
Laurent
7e64ba4011
modify ip_adapter's CrossAttentionAdapters injection logic
2024-03-26 11:15:04 +01:00
Cédric Deltheil
df0cc2aeb8
do not call __getattr__ with a keyword argument
Same for __setattr__. Use positional arguments instead. E.g.:
import torch
import refiners.fluxion.layers as fl
m = torch.compile(fl.Linear(1, 1))
m(torch.zeros(1))
# TypeError: Module.__getattr__() got an unexpected keyword argument 'name'
2024-03-25 21:46:13 +01:00
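A sketch of the resulting convention (the subclass is illustrative, not refiners code): forward the attribute name positionally so wrappers like torch.compile's OptimizedModule can relay the call. E.g.:
import torch
class Sketch(torch.nn.Module):
    def __getattr__(self, name: str):
        # positional: super().__getattr__(name=name) is what triggered the TypeError
        return super().__getattr__(name)
m = Sketch()
m.register_buffer("w", torch.zeros(1))
m.w  # resolved through __getattr__, called positionally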
hugojarkoff
0f87ea29e0
Update README.md with HQ-SAM news
2024-03-25 09:19:19 +01:00
Pierre Colle
cba83b0558
SAM init with mask_decoder after #325
2024-03-24 20:18:57 +01:00
Pierre Colle
5c937b184a
HQ-SAM logit equality test, following #331
2024-03-23 21:58:32 +01:00
Pierre Colle
2763db960e
SAM e2e test tolerance explained
2024-03-22 21:31:28 +01:00
Pierre Chapuis
364e196874
support no CFG in compute_clip_text_embedding
2024-03-22 17:06:51 +01:00
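For context, a sketch of what "no CFG" means here (tensor shapes are illustrative): with classifier-free guidance the negative and conditional embeddings are batched together; without it only the conditional embedding is returned. E.g.:
import torch
cond = torch.randn(1, 77, 768)     # conditional text embedding (illustrative shape)
neg = torch.randn(1, 77, 768)      # negative / unconditional embedding
with_cfg = torch.cat((neg, cond))  # (2, 77, 768), the batch used for guidance
without_cfg = cond                 # (1, 77, 768), no guidance batch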
Pierre Colle
94e8b9c23f
SAM MaskDecoder token slicing
2024-03-22 13:11:40 +01:00
hugojarkoff
a93ceff752
Add HQ-SAM Adapter
2024-03-21 15:36:55 +01:00
hugojarkoff
c6b5eb24a1
Add logits comparison for base SAM in single mask output prediction mode
2024-03-21 10:48:48 +01:00
limiteinductive
38c86f59f4
Switch gradient clipping to native torch.nn.utils.clip_grad_norm_
2024-03-19 22:08:48 +01:00
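For reference, the native call looks like this (the model and max_norm value are illustrative):
import torch
model = torch.nn.Linear(4, 4)
model(torch.randn(8, 4)).sum().backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # clips in place, returns the total norm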
Pierre Colle
68fe725767
Add multimask_output flag to SAM
2024-03-19 17:40:26 +01:00
limiteinductive
6a72943ff7
change TimeValue to a dataclass
2024-03-19 14:49:24 +01:00
Laurent
b8fae60d38
make LoRA's weight initialization overridable
2024-03-13 17:32:16 +01:00
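For context, a minimal sketch of the pattern (standard LoRA initialization; class and method names are illustrative): isolating the init in a dedicated method makes it overridable in subclasses. E.g.:
import torch
class LoraSketch(torch.nn.Module):
    def __init__(self, in_dim: int, out_dim: int, rank: int = 16) -> None:
        super().__init__()
        self.down = torch.nn.Parameter(torch.empty(rank, in_dim))
        self.up = torch.nn.Parameter(torch.empty(out_dim, rank))
        self.reset_parameters()  # subclasses override this to change the init
    def reset_parameters(self) -> None:
        torch.nn.init.normal_(self.down)  # random down projection
        torch.nn.init.zeros_(self.up)     # zero up projection: no-op at start
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.down.t() @ self.up.t()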
Pierre Chapuis
c1b3a52141
set refine.rs home title
2024-03-13 16:34:42 +01:00
Pierre Chapuis
e32d8d16f0
LoRA loading: forward exclusions when preprocessing parts of the UNet
2024-03-13 15:25:00 +01:00
limiteinductive
ff5341c85c
Change weight decay for Optimizer to the normal PyTorch default
2024-03-12 15:20:21 +01:00
Laurent
46612a5138
fix stalebot message config
2024-03-11 17:05:14 +01:00
Pierre Chapuis
975560165c
improve docstrings
2024-03-08 15:43:57 +01:00
Pierre Chapuis
5d784bedab
add test for "Adapting SDXL" guide
2024-03-08 15:43:57 +01:00
Pierre Chapuis
cd5fa97c20
ability to get LoRA weights in SDLoraManager
2024-03-08 15:43:57 +01:00
Pierre Chapuis
fb90b00e75
add_loras_to_unet: add preprocess values as exclusions in last step
2024-03-08 15:43:57 +01:00
Pierre Chapuis
4259261f17
simplify LCM weights loader using new manager features
2024-03-08 15:43:57 +01:00
Pierre Chapuis
ccd9414ff1
fix debug map when attaching two LoRAs
(in that case return the path of the LoraAdapter)
2024-03-08 15:43:57 +01:00
Pierre Chapuis
72fa17df48
fix slider loras test
2024-03-08 15:43:57 +01:00
Pierre Chapuis
8c7fcbc00f
LoRA manager: move exclude / include to add_loras call
Always exclude the TimestepEncoder by default. This is because some keys
include both, e.g. `resnet` and `time_emb_proj`. Preprocess blocks that
tend to get mixed up with others in a separate auto_attach call.
2024-03-08 15:43:57 +01:00
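A sketch of the ambiguity (the key below is hypothetical): one state-dict key can match several include/exclude patterns at once, so defaults and ordering matter. E.g.:
key = "unet.down_blocks.1.resnets.0.time_emb_proj"  # hypothetical key
assert "resnet" in key and "time_emb_proj" in key   # matches two different block patterns
# hence the TimestepEncoder is excluded by default, and ambiguous blocks are
# preprocessed in a separate auto_attach call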
Pierre Chapuis
052a20b897
remove add_multiple_loras
2024-03-08 15:43:57 +01:00
Pierre Chapuis
c383ff6cf4
fix DPO LoRA loading in tests
2024-03-08 15:43:57 +01:00
Pierre Chapuis
ed8ec26e63
allow passing inclusions and exclusions to SDLoraManager
2024-03-08 15:43:57 +01:00
Pierre Chapuis
cce2a98fa6
add sanity check to auto_attach_loras
2024-03-08 15:43:57 +01:00
limiteinductive
5593b40073
fix training 101 guide inconsistencies
2024-03-08 14:24:34 +01:00
Pierre Chapuis
1eb71077aa
use same scale setter / getter interface for all controls
2024-03-08 11:29:28 +01:00
Laurent
5e7986ef08
adding more log messages in training_utils
2024-03-08 10:52:14 +01:00
Pierre Chapuis
be2368cf20
ruff 3 formatting (Rye 0.28)
2024-03-08 10:42:05 +01:00
Pierre Chapuis
a0be5458b9
snip long prompt in tests
2024-03-05 19:54:44 +01:00
Pierre Chapuis
defbb9eb3a
update deps and use ruff in Rye to format
2024-03-05 19:40:52 +01:00
Laurent
d26ec690e8
(CI) add bounty stale bot
2024-03-05 14:13:59 +01:00
Pierre Chapuis
98e2ab94c9
fix CI (again)
https://github.com/eifinger/setup-rye/releases/tag/v2.0.0
2024-03-04 11:28:40 +01:00