limiteinductive
6a1fac876b
remove huggingface datasets from default config
2023-12-20 16:58:12 +01:00
limiteinductive
0f560437bc
add tomli to training dependencies
2023-12-20 11:11:44 +01:00
Cédric Deltheil
22ce3fd033
sam: wrap high-level methods with no_grad
2023-12-19 21:45:23 +01:00
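For reference, the no_grad pattern applied here, as a tiny self-contained sketch (TinyPredictor and predict are made-up names, not the Refiners SAM API):

    import torch
    from torch import nn

    class TinyPredictor(nn.Module):
        def __init__(self) -> None:
            super().__init__()
            self.proj = nn.Linear(8, 8)

        @torch.no_grad()  # high-level, inference-only entry point: no autograd graph is built
        def predict(self, x: torch.Tensor) -> torch.Tensor:
            return self.proj(x)

    assert not TinyPredictor().predict(torch.randn(2, 8)).requires_grad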
Cédric Deltheil
e7892254eb
dinov2: add some coverage for registers
...
These are not supported yet in HF, so we just compare against a precomputed
norm. Note: in the initial PR [1], the Refiners implementation was tested
against the official code using Torch Hub.
[1]:
https://github.com/finegrain-ai/refiners/pull/132#issuecomment-1852021656
2023-12-18 10:29:28 +01:00
Cédric Deltheil
f0ea1a2509
prepare_test_weights: add DINOv2
2023-12-18 10:29:28 +01:00
Cédric Deltheil
68cc346905
add minimal unit tests for DINOv2
...
To be completed with tests using image preprocessing, e.g. testing cosine
similarity on a relevant pair of images.
2023-12-18 10:29:28 +01:00
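One possible shape for such a cosine-similarity check, sketched generically (the embedding tensors would come from the DINOv2 forward pass after preprocessing; nothing here is the actual test code):

    import torch

    def embedding_similarity(a: torch.Tensor, b: torch.Tensor) -> float:
        # pooled image embeddings are compared by direction, not magnitude
        return torch.nn.functional.cosine_similarity(a.flatten(), b.flatten(), dim=0).item()

    # e.g. two crops of the same scene should score close to 1.0,
    # while unrelated images should score noticeably lower.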
Cédric Deltheil
832f012fe4
convert_dinov2: tweak command-line args
...
i.e. mimic the other conversion scripts
2023-12-18 10:29:28 +01:00
Bryce
5ca1549c96
refactor: convert bash script to python
...
Ran successfully to completion, though on a repeat run `convert_unclip` did not pass the hash check for some reason.
- fixes the inpainting model download URLs
- shows a progress bar for downloads
- skips downloading existing files
- uses a temporary file to prevent partial downloads
- can do a dry run to check whether a URL is valid: `DRY_RUN=1 python scripts/prepare_test_weights.py`
- displays the downloaded file hash
2023-12-15 09:55:59 +01:00
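A minimal sketch of the download pattern those bullet points describe (skip-if-present, DRY_RUN, temporary file, hash display); the real scripts/prepare_test_weights.py is more elaborate and this is not its actual code:

    import hashlib, os, tempfile, urllib.request

    def download(url: str, dest: str) -> str:
        """Fetch url into dest via a temporary file; return the SHA-256 of the content."""
        if os.path.exists(dest):  # skip downloading existing files
            with open(dest, "rb") as f:
                return hashlib.sha256(f.read()).hexdigest()
        if os.environ.get("DRY_RUN") == "1":  # only check that the URL answers
            urllib.request.urlopen(urllib.request.Request(url, method="HEAD"))
            return ""
        sha = hashlib.sha256()
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest) or ".")
        with os.fdopen(fd, "wb") as f, urllib.request.urlopen(url) as response:
            while chunk := response.read(1 << 20):  # 1 MiB chunks; a progress bar would hook in here
                f.write(chunk)
                sha.update(chunk)
        os.replace(tmp, dest)  # atomic rename, so a partial download never lands at dest
        return sha.hexdigest()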
Pierre Chapuis
77fb8032c2
tweak CONTRIBUTING.md section on tests
2023-12-14 18:44:10 +01:00
Pierre Chapuis
c27fd62fc5
add a way to run CI on external PRs using a label
2023-12-14 17:52:56 +01:00
Cédric Deltheil
3ff7719cb8
README: add DINOv2
2023-12-14 17:50:41 +01:00
Cédric Deltheil
e978b3665d
convert_dinov2: ignore pyright errors
...
And save converted weights into safetensors instead of pickle
2023-12-14 17:50:41 +01:00
Laureηt
9337d65e0e
feature: add DINOv2
...
Co-authored-by: Benjamin Trom <benjamintrom@gmail.com>
2023-12-14 17:27:32 +01:00
Benjamin Trom
e2f2e33add
Update tests/fluxion/layers/test_basics.py
...
Co-authored-by: Cédric Deltheil <355031+deltheil@users.noreply.github.com>
2023-12-13 17:03:28 +01:00
limiteinductive
7d9ceae274
change default behavior of end to None
2023-12-13 17:03:28 +01:00
Cédric Deltheil
82a2aa1ec4
deprecate DDPM step which is unused for now
2023-12-13 15:51:42 +01:00
limiteinductive
a7551e0392
Change fl.Slicing API
2023-12-13 09:38:13 +01:00
Cédric Deltheil
11b0ff6f8c
ddim: remove unused attribute
2023-12-12 17:26:14 +01:00
Cédric Deltheil
315b4ed2e4
test_schedulers: enforce manual seed
2023-12-12 17:26:14 +01:00
limiteinductive
7992258dd2
add before/after init callback to trainer
2023-12-12 10:22:55 +01:00
Pierre Chapuis
42a0fc4aa0
fix circular imports
2023-12-11 15:27:11 +01:00
Cédric Deltheil
c8d5faff9b
pyproject.toml: add example for combine-as-imports
...
The wording in b44d612 was misleading (s/avoid/allow/).
2023-12-11 13:57:57 +01:00
Cédric Deltheil
792a0fc3d9
run lint rules using latest isort settings
2023-12-11 11:58:43 +01:00
Cédric Deltheil
b44d6122c4
pyproject.toml: enable isort rule in Ruff
...
Use `combine-as-imports = true` [1] to avoid such kinds of imports on a single line:
from torch import Tensor, device as Device, dtype as DType
[1]: https://docs.astral.sh/ruff/settings/#isort-combine-as-imports
2023-12-11 11:58:43 +01:00
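Concretely (assuming Ruff's isort emulation mirrors isort's combine_as_imports option), the setting controls whether aliased names stay on separate statements or may be merged:

    # combine-as-imports = false (default): "as" imports are kept apart
    from torch import Tensor
    from torch import device as Device
    from torch import dtype as DType

    # combine-as-imports = true: a single combined statement is allowed
    from torch import Tensor, device as Device, dtype as DType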
Cédric Deltheil
4c07225d68
pyproject.toml: remove tool.isort section
...
Ruff is going to be used instead.
Follow up of #141
2023-12-11 11:58:43 +01:00
Cédric Deltheil
4fc5e427b8
training_utils: fix extra detection
...
Requirements could be, e.g.:
wandb (>=0.15.7,<0.16.0) ; extra == "training"
Or:
wandb>=0.16.0; extra == 'training'
Follow up of 86c5497
2023-12-08 19:09:16 +01:00
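One way to detect whether a requirement belongs to an extra that copes with both metadata styles above is to parse the string with packaging instead of matching it textually; a sketch, not necessarily what training_utils actually does:

    from importlib.metadata import requires
    from packaging.requirements import Requirement

    def extra_requirements(distribution: str, extra: str) -> list[str]:
        """Names of the requirements of `distribution` gated behind `extra`."""
        names: list[str] = []
        for line in requires(distribution) or []:
            req = Requirement(line)  # handles both poetry- and rye-style strings
            if req.marker is not None and req.marker.evaluate({"extra": extra}):
                names.append(req.name)
        return names

    # extra_requirements("refiners", "training") should include "wandb"
    # regardless of which tool wrote the metadata.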
limiteinductive
86c54977b9
replace poetry with rye for Python dependency management
...
Co-authored-by: Cédric Deltheil <cedric@deltheil.me>
Co-authored-by: Pierre Chapuis <git@catwell.info>
2023-12-08 17:40:10 +01:00
limiteinductive
807ef5551c
refactor fl.Parameter basic layer
...
Co-authored-by: Cédric Deltheil <cedric@deltheil.me>
2023-12-08 10:20:34 +01:00
Cédric Deltheil
11422a3faf
fix typo in finetune-ldm config
2023-12-07 18:19:45 +01:00
Cédric Deltheil
46b4b4b462
training_utils: fix naming issue timestep->step
2023-12-05 10:05:34 +01:00
limiteinductive
0dc3a17fbf
remove unnecessary test
2023-12-04 15:27:06 +01:00
limiteinductive
8bbcf2048d
add README bullet point
2023-12-04 15:27:06 +01:00
limiteinductive
37a74bd549
format test_scheduler file
2023-12-04 15:27:06 +01:00
limiteinductive
90db6ef59d
add e2e test for sd15 with karras noise schedule
2023-12-04 15:27:06 +01:00
limiteinductive
6f110ee2b2
fix test_scheduler_utils
2023-12-04 15:27:06 +01:00
limiteinductive
1075ea4a62
fix ddpm and ddim __init__
2023-12-04 15:27:06 +01:00
limiteinductive
ad8f02e555
add Karras sampling to the Scheduler abstract class (default is quadratic)
2023-12-04 15:27:06 +01:00
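For reference, the Karras et al. (2022) noise-level spacing this refers to, as a generic sketch; the actual Scheduler API in Refiners may expose it differently:

    import torch

    def karras_sigmas(n: int, sigma_min: float, sigma_max: float, rho: float = 7.0) -> torch.Tensor:
        """Noise levels spaced as in Karras et al. 2022 (arXiv:2206.00364), eq. (5)."""
        ramp = torch.linspace(0, 1, n)
        min_inv_rho = sigma_min ** (1 / rho)
        max_inv_rho = sigma_max ** (1 / rho)
        return (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho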
Pierre Chapuis
f22f969d65
remove Black preview mode
...
also fix multiline logs in training
2023-12-04 14:15:56 +01:00
Bryce
4176868e79
feature: add sliced-attention for memory efficiency
...
This allowed me to produce HD images on an M1 with 32 GB of RAM and 7000x5000 images on an Nvidia 4090.
I saw no visual difference in the generated images.
Some data points on slice_size:
- 4096 max needed for SD 1.5 at 512x512
- 9216 max needed for SD 1.5 at 768x768
- 16384 max needed for SD 1.5 at 1024x1024
- 32400 max needed for SD 1.5 at 1920x1080 (HD)
- 129600 max needed for SD 1.5 at 3840x2160 (4K)
- 234375 max needed for SD 1.5 at 5000x3000
2023-12-01 15:30:23 +01:00
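The underlying idea, as a generic sketch rather than the exact Refiners implementation: compute attention over slices of the query dimension so that only a slice_size x sequence_length score matrix is materialized at a time.

    import torch

    def sliced_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor, slice_size: int) -> torch.Tensor:
        """softmax(q k^T / sqrt(d)) v, computed in query slices to bound peak memory.

        Assumes q, k and v share the same head dimension.
        """
        scale = q.shape[-1] ** -0.5
        out = torch.empty_like(q)
        for start in range(0, q.shape[-2], slice_size):
            end = start + slice_size
            scores = q[..., start:end, :] @ k.transpose(-2, -1) * scale  # (..., slice, seq_len)
            out[..., start:end, :] = torch.softmax(scores, dim=-1) @ v
        return out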
Cédric Deltheil
b306c7db1b
freeu: add one more test for identity scales
...
It should act as a no-op when [1.0, 1.0] is used for both the backbone and
skip scales.
2023-12-01 12:48:19 +01:00
Cédric Deltheil
761678d9a5
README: add yet another badge for discord
2023-11-30 17:33:32 +01:00
Cédric Deltheil
01cf4efba2
README: add code bounties badge
2023-11-28 11:41:11 +01:00
Cédric Deltheil
cbb13ed032
README: add a link to imaginAIry
2023-11-24 12:18:55 +01:00
Cédric Deltheil
dde47318da
README: add a link to the intro blog post
2023-11-21 11:19:23 +01:00
Benjamin Trom
2d4c4774f4
add maxpool to refiners layers
2023-11-20 10:58:53 +01:00
Bryce
f666bc82f5
feature: support self-attention guidance with SD1 inpainting model
2023-11-20 10:17:15 +01:00
Cédric Deltheil
ab0915d052
add tests for FreeU
2023-11-18 16:15:44 +01:00
Benjamin Trom
6eeb01137d
Add Adapter in refiners.fluxion.adapters init
2023-11-18 13:54:40 +01:00
Cédric Deltheil
86e7dfe0c7
add FreeU to latest news
2023-11-17 18:26:11 +01:00
isamu-isozaki
770879a6df
FreeU
2023-11-17 17:22:20 +01:00