Mirror of https://github.com/finegrain-ai/refiners.git (synced 2024-11-12 16:18:22 +00:00)
A microframework on top of PyTorch with first-class citizen APIs for foundation model adaptation
https://refine.rs/
Topics: background-generation, background-removal, controlnet, diffusion-models, dinov2, image-generation, ip-adapter, lcm, lcm-lora, lora, sam, sdxl, segment-anything, segment-anything-model, shadow-generation, stable-diffusion, t2i-adapter, text-to-image, textual-inversion
The simplest way to train and run adapters on top of foundation models
Manifesto | Docs | Guides | Discussions | Discord
Latest News 🔥
- Added Euler's method to solvers (contributed by @israfelsr)
- Added DINOv2 for high-performance visual features (contributed by @Laurent2916)
- Added FreeU for improved quality at no cost (contributed by @isamu-isozaki)
- Added Restart Sampling for improved image generation (example)
- Added Self-Attention Guidance to avoid e.g. too smooth images (example)
- Added T2I-Adapter for extra guidance (example)
- Added MultiDiffusion for e.g. panorama images
- Added IP-Adapter, aka image prompt (example)
- Added Segment Anything to foundation models
- Added SDXL 1.0 to foundation models
- Made it possible to add new concepts to the CLIP text encoder, e.g. via Textual Inversion
Installation
The current recommended way to install Refiners is from source using Rye:
git clone "git@github.com:finegrain-ai/refiners.git"
cd refiners
rye sync --all-features
Documentation
Refiners comes with a MkDocs-based documentation website available at https://refine.rs. There you will find a quick start guide, a description of the key concepts, and in-depth foundation model adaptation guides.
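To give a flavor of what "adaptation" means here, below is a framework-free sketch of the LoRA idea that several of the supported adapters build on: a frozen weight matrix is augmented with a small trainable low-rank update. This is pure illustrative Python, not the Refiners API; all names are hypothetical.

```python
# Conceptual sketch of a LoRA-style low-rank adapter, in pure Python
# (no PyTorch) for illustration only -- this is NOT the Refiners API.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [
        [sum(a[i][k] * b[k][j] for k in range(len(b))) for j in range(len(b[0]))]
        for i in range(len(a))
    ]

def lora_forward(x, w, a, b, scale=1.0):
    """y = x @ W + scale * (x @ A @ B): the frozen weight W plus a
    trainable low-rank update A @ B, the core idea behind LoRA."""
    base = matmul(x, w)
    update = matmul(matmul(x, a), b)
    return [
        [base[i][j] + scale * update[i][j] for j in range(len(base[0]))]
        for i in range(len(base))
    ]

# Frozen 2x2 weight, rank-1 adapter (2x1 down-projection, 1x2 up-projection).
w = [[1.0, 0.0], [0.0, 1.0]]
a = [[1.0], [0.0]]       # down-projection (in_dim x rank)
b = [[0.0, 2.0]]         # up-projection (rank x out_dim)
x = [[3.0, 4.0]]

print(lora_forward(x, w, a, b))  # -> [[3.0, 10.0]]
```

Because only `a` and `b` are trained, the adapter is tiny compared to the base model, and setting the up-projection to zero recovers the original model exactly.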
Awesome Adaptation Papers
If you're interested in understanding the diversity of use cases for foundation model adaptation (potentially beyond the specific adapters supported by Refiners), we suggest you take a look at these outstanding papers:
- ControlNet
- T2I-Adapter
- IP-Adapter
- Medical SAM Adapter
- 3DSAM-adapter
- SAM-adapter
- Cross Modality Attention Adapter
- UniAdapter
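A common thread in these papers is injecting a small trainable module into a frozen backbone while leaving the pretrained weights untouched. A minimal sketch of that residual-adapter pattern, in plain Python with hypothetical names (not any specific paper's method):

```python
# Minimal residual-adapter pattern: a frozen base model is wrapped so
# a small trainable module adds a correction to its output.
# Illustrative sketch only; all names are hypothetical.

class FrozenBase:
    """Stands in for a pretrained model whose weights are never updated."""

    def __init__(self, scale):
        self.scale = scale  # pretend this is a frozen pretrained weight

    def forward(self, x):
        return [v * self.scale for v in x]

class ResidualAdapter:
    """Wraps a frozen base and adds a learned residual to its output."""

    def __init__(self, base, shift):
        self.base = base    # never modified
        self.shift = shift  # the only "trainable" parameter

    def forward(self, x):
        # base output plus a learned residual correction
        return [v + self.shift for v in self.base.forward(x)]

base = FrozenBase(scale=2.0)
adapted = ResidualAdapter(base, shift=0.5)
print(adapted.forward([1.0, 2.0]))  # -> [2.5, 4.5]
```

Only the adapter's parameters are optimized during training, which is what makes these methods cheap to train and easy to swap in and out of a foundation model.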
Projects using Refiners
Credits
We took inspiration from these great projects:
- tinygrad - For something between PyTorch and karpathy/micrograd
- Composer - A PyTorch Library for Efficient Neural Network Training
- Keras - Deep Learning for humans
Citation
@misc{the-finegrain-team-2023-refiners,
  author = {Benjamin Trom and Pierre Chapuis and Cédric Deltheil},
  title = {Refiners: The simplest way to train and run adapters on top of foundation models},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/finegrain-ai/refiners}}
}