8615dbdbde | 2023-08-28 16:34:25 +02:00 | limiteinductive | Add inner_dim Parameter to Attention Layer in Fluxion
7ca6bd0ccd | 2023-08-28 14:39:14 +02:00 | limiteinductive | implement the ConvertModule class and refactor conversion scripts
3680f9d196 | 2023-08-28 10:37:39 +02:00 | Doryan Kaced | Add support for learned concepts, e.g. via textual inversion
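For context on the "learned concepts" entry: textual inversion teaches a model a new concept by learning an embedding vector for a fresh pseudo-token, which is then looked up like any ordinary token. The following toy sketch illustrates only that idea; the table and function names are hypothetical, not the library's API.

```python
# Toy sketch of textual inversion's core idea: a new pseudo-token is
# mapped to a learned embedding vector, and the embedding lookup is
# extended with it. All names here are illustrative, not real API.

embeddings = {
    "a": [0.1, 0.2],
    "photo": [0.3, 0.1],
    "of": [0.0, 0.4],
}

def add_concept(table, token, learned_vector):
    """Return a copy of the table with the learned concept registered."""
    extended = dict(table)
    extended[token] = learned_vector
    return extended

def embed_prompt(table, prompt):
    """Look up one embedding per whitespace-separated token."""
    return [table[token] for token in prompt.split()]

extended = add_concept(embeddings, "<my-cat>", [0.9, -0.5])
vectors = embed_prompt(extended, "a photo of <my-cat>")
```

The base table is left untouched, mirroring how textual inversion trains only the new embedding while the rest of the text encoder stays frozen.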
8b1719b1f9 | 2023-08-25 17:34:26 +02:00 | Benjamin Trom | remove unused TextEncoder and UNet protocols
92a21bc21e | 2023-08-25 12:30:20 +02:00 | limiteinductive | refactor latent_diffusion module
802970e79a | 2023-08-23 17:49:59 +02:00 | Pierre Chapuis | simplify Chain#append
16618d73de | 2023-08-23 17:49:59 +02:00 | Pierre Chapuis | remove useless uses of type: ignore
dec0d64432 | 2023-08-23 12:15:56 +02:00 | Pierre Chapuis | make walk and layers not recurse by default (a parameter restores the old, recursive behavior)
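The shallow-by-default traversal described in this commit can be illustrated with a generic tree walk; the `Node` class and `walk` signature below are hypothetical stand-ins, not the repository's actual Chain API.

```python
# Illustrative sketch of a walk() that only recurses on request:
# by default it yields direct children, and recurse=True restores
# the full-subtree behavior. Names are hypothetical, not real API.

class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def walk(self, recurse=False):
        """Yield direct children; with recurse=True, all descendants."""
        for child in self.children:
            yield child
            if recurse:
                yield from child.walk(recurse=True)

root = Node("root", [Node("a", [Node("a1")]), Node("b")])
shallow = [n.name for n in root.walk()]           # direct children only
deep = [n.name for n in root.walk(recurse=True)]  # full subtree
```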
2ad26a06b0 | 2023-08-23 12:13:01 +02:00 | Pierre Chapuis | fix LoRAs on Self target
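As background for the LoRA-related entries: a LoRA augments a frozen weight matrix W with a trainable low-rank product B·A, so the adapted layer computes W·x plus a scaled B(A·x). The sketch below shows only that standard formulation with toy matrices; the shapes and scaling convention are illustrative, not taken from this codebase.

```python
# Minimal sketch of the LoRA idea: a frozen base weight W plus a
# low-rank update B @ A applied at forward time.

def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

W = [[1.0, 0.0], [0.0, 1.0]]  # frozen base weight (2x2 identity)
A = [[1.0, 1.0]]              # trainable down-projection, rank r = 1
B = [[0.5], [0.0]]            # trainable up-projection
scale = 1.0                   # alpha / r scaling factor

def lora_forward(x):
    base = matvec(W, x)                # frozen path
    update = matvec(B, matvec(A, x))   # low-rank path
    return [b + scale * u for b, u in zip(base, update)]

y = lora_forward([2.0, 3.0])
```

Because only A and B are trained, the number of trainable parameters scales with the rank r rather than with the full size of W.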
3565a4127f | 2023-08-23 11:05:38 +02:00 | limiteinductive | implement DoubleTextEncoder for SDXL
71ddb55a8e | 2023-08-22 11:55:39 +02:00 | Cédric Deltheil | infer device and dtype in LoraAdapter
8c7298f8cc | 2023-08-22 11:44:11 +02:00 | Benjamin Trom | fix chain slicing with structural copy
e7c1db50e0 | 2023-08-22 00:09:01 +02:00 | limiteinductive | turn CLIPTokenizer into a fl.Module
1ad4e1a35a | 2023-08-21 16:04:12 +02:00 | Cédric Deltheil | converter: add missing structural_attrs
b91a457495 | 2023-08-21 16:04:12 +02:00 | Cédric Deltheil | use Converter layer for sinusoidal embedding
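For reference, the sinusoidal embedding mentioned here is the classic scheme from "Attention Is All You Need": alternating sine and cosine values at geometrically spaced frequencies. The exact layout used by this repository's implementation may differ; the function below is only the textbook formula.

```python
import math

# Classic sinusoidal position embedding: alternating sin/cos at
# geometrically spaced frequencies (base 10000 by convention).

def sinusoidal_embedding(position, dim, base=10_000.0):
    emb = []
    for i in range(0, dim, 2):
        freq = base ** (-i / dim)          # frequency for this pair
        emb.append(math.sin(position * freq))
        emb.append(math.cos(position * freq))
    return emb[:dim]                       # trim if dim is odd

e = sinusoidal_embedding(0, 4)             # position 0, 4 dimensions
```

At position 0 every sine term is 0 and every cosine term is 1, which makes the layout easy to check by hand.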
108fa8f26a | 2023-08-21 12:09:58 +02:00 | limiteinductive | add converter layer + tests
4526d58cd5 | 2023-08-21 11:21:12 +02:00 | limiteinductive | update constructor of CLIPTextEncoder with max_sequence_length
6fd5894caf | 2023-08-21 11:21:12 +02:00 | limiteinductive | split PositionalTokenEncoder
9d663534d1 | 2023-08-21 11:21:12 +02:00 | limiteinductive | cosmetic changes for text_encoder.py
b8e7179447 | 2023-08-17 17:31:15 +02:00 | limiteinductive | make CLIP G use quick GELU and pad_token_id 0
6594502c11 | 2023-08-17 17:31:15 +02:00 | limiteinductive | parametrize tokenizer for text_encoder
4575e3dd91 | 2023-08-17 17:31:15 +02:00 | limiteinductive | add start, end and pad tokens as parameters
63fda2bfd8 | 2023-08-17 17:31:15 +02:00 | limiteinductive | add use_quick_gelu kwarg for CLIPTextEncoder
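The "quick GELU" referenced by these CLIP-related commits is the activation used in OpenAI's CLIP: x · sigmoid(1.702x), a cheap approximation of the exact GELU x · Φ(x). A self-contained comparison of the two, using only the standard formulas:

```python
import math

# Quick GELU (as in OpenAI's CLIP): x * sigmoid(1.702 * x),
# approximating the exact GELU x * Phi(x) where Phi is the
# standard normal CDF.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def quick_gelu(x):
    return x * sigmoid(1.702 * x)

def exact_gelu(x):
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

approx = quick_gelu(1.0)   # close to exact_gelu(1.0)
```

The two curves agree to within roughly 1e-2 over typical activation ranges, which is why the approximation was acceptable in the original CLIP model.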
efe923a272 | 2023-08-17 17:31:15 +02:00 | limiteinductive | cosmetic changes
17dc75421b | 2023-08-17 15:36:43 +02:00 | limiteinductive | make basic layers an enum and work with subtyping
97b162d9a0 | 2023-08-16 12:29:09 +02:00 | Pierre Chapuis | add InformativeDrawings (https://github.com/carolineec/informative-drawings), the preprocessor for the Lineart ControlNet
e10f761a84 | 2023-08-16 12:29:09 +02:00 | Pierre Chapuis | GroupNorm and LayerNorm must be affine to be WeightedModules
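The affine constraint above reflects a general property of normalization layers: only the affine variant carries a learnable scale and shift, so a non-affine norm has no weights at all (in PyTorch, for instance, `nn.LayerNorm` with `elementwise_affine=False` has no parameters). A minimal pure-Python sketch of that distinction, with illustrative names rather than the repository's classes:

```python
import math

# Sketch of why a normalization layer only counts as "weighted"
# when it is affine: without the learnable scale/shift there are
# no parameters at all. Names are illustrative.

class LayerNorm:
    def __init__(self, dim, affine=True, eps=1e-5):
        self.eps = eps
        self.weight = [1.0] * dim if affine else None  # scale (gamma)
        self.bias = [0.0] * dim if affine else None    # shift (beta)

    def parameters(self):
        return [p for p in (self.weight, self.bias) if p is not None]

    def __call__(self, x):
        mean = sum(x) / len(x)
        var = sum((v - mean) ** 2 for v in x) / len(x)
        normed = [(v - mean) / math.sqrt(var + self.eps) for v in x]
        if self.weight is None:
            return normed
        return [w * v + b for w, v, b in zip(self.weight, normed, self.bias)]

plain = LayerNorm(3, affine=False)   # no learnable parameters
affine = LayerNorm(3, affine=True)   # weight and bias
```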
bd49304fc8 | 2023-08-07 19:56:28 +02:00 | Pierre Chapuis | add Sigmoid activation
48f674c433 | 2023-08-04 15:28:41 +02:00 | Cédric Deltheil | initial commit