Mirror of https://github.com/finegrain-ai/refiners.git (synced 2024-11-22 14:18:46 +00:00)
A microframework on top of PyTorch with first-class citizen APIs for foundation model adaptation
https://refine.rs/
Topics: background-generation, background-removal, controlnet, diffusion-models, dinov2, image-generation, ip-adapter, lcm, lcm-lora, lora, sam, sdxl, segment-anything, segment-anything-model, shadow-generation, stable-diffusion, t2i-adapter, text-to-image, textual-inversion
The simplest way to train and run adapters on top of foundation models
Manifesto | Docs | Guides | Discussions | Discord
## Latest News 🔥
- Added IC-Light to manipulate the illumination of images
- Added Multi Upscaler for high-resolution image generation, inspired by Clarity Upscaler (HF Space)
- Added HQ-SAM for high quality mask prediction with Segment Anything
- Added SDXL-Lightning
- Added Latent Consistency Models and LCM-LoRA for Stable Diffusion XL
- Added Style Aligned adapter to Stable Diffusion models
- Added ControlLoRA (v2) adapter to Stable Diffusion XL
- Added Euler's method to solvers (contributed by @israfelsr)
- Added DINOv2 for high-performance visual features (contributed by @Laurent2916)
- Added FreeU for improved quality at no cost (contributed by @isamu-isozaki)
- Added Restart Sampling for improved image generation (example)
- Added Self-Attention Guidance to avoid e.g. overly smooth images (example)
- Added T2I-Adapter for extra guidance (example)
- Added MultiDiffusion for e.g. panorama images
- Added IP-Adapter, aka image prompt (example)
- Added Segment Anything to foundation models
- Added SDXL 1.0 to foundation models
- Made it possible to add new concepts to the CLIP text encoder, e.g. via Textual Inversion
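To give a flavor of how concept injection works, here is a framework-free sketch of the Textual Inversion idea: the text encoder stays frozen, and only a new embedding vector for a fresh pseudo-token is trained. This is illustrative only (the names and the toy embedding table are made up, and it is not the Refiners API):

```python
# Textual Inversion in a nutshell: freeze the text encoder; learn only a new
# embedding vector for a fresh pseudo-token (e.g. "<my-style>").
vocab = {"cat": 0, "dog": 1}
embeddings = [[0.1, 0.2], [0.3, 0.4]]  # 2 tokens, embedding dim 2

def add_concept(token, vocab, embeddings, dim=2):
    """Register a new pseudo-token and give it a fresh (trainable) embedding."""
    vocab[token] = len(embeddings)
    embeddings.append([0.0] * dim)  # in practice, optimized by gradient descent
    return vocab[token]

idx = add_concept("<my-style>", vocab, embeddings)
```

In a real setup the new vector is optimized against a handful of reference images while every other parameter stays untouched, which is why the technique is so cheap to train and to distribute.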
## Installation

The current recommended way to install Refiners is from source using Rye:

```bash
git clone "git@github.com:finegrain-ai/refiners.git"
cd refiners
rye sync --all-features
```
## Documentation

Refiners comes with a MkDocs-based documentation website, available at https://refine.rs. There you will find a quick start guide, a description of the key concepts, and in-depth foundation model adaptation guides.
## Awesome Adaptation Papers
If you're interested in understanding the diversity of use cases for foundation model adaptation (potentially beyond the specific adapters supported by Refiners), we suggest you take a look at these outstanding papers:
- ControlNet
- T2I-Adapter
- IP-Adapter
- Medical SAM Adapter
- 3DSAM-adapter
- SAM-adapter
- Cross Modality Attention Adapter
- UniAdapter
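To make "adaptation" concrete, here is a minimal, framework-free sketch of the low-rank update idea behind LoRA, one of the adapter families listed above. The helper names and the toy matrices are made up for illustration; this is not the Refiners API:

```python
# LoRA in a nutshell: instead of fine-tuning a full weight matrix W,
# train a small low-rank update B @ A and add it back, scaled:
#   W_adapted = W + (alpha / r) * (B @ A)

def matmul(X, Y):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_adapt(W, A, B, alpha, r):
    delta = matmul(B, A)  # (out x r) @ (r x in) -> (out x in)
    scale = alpha / r
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# 2x2 frozen weight, rank-1 update: only 4 adapter parameters are trained.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]   # out x r
A = [[0.5, 0.5]]     # r x in
W_adapted = lora_adapt(W, A, B, alpha=1.0, r=1)  # [[1.5, 0.5], [1.0, 2.0]]
```

The point the papers above explore from different angles is that such small, composable deltas can steer a large frozen model without touching its original weights.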
## Projects using Refiners
## Credits
We took inspiration from these great projects:
- tinygrad - For something between PyTorch and karpathy/micrograd
- Composer - A PyTorch Library for Efficient Neural Network Training
- Keras - Deep Learning for humans
## Citation

```bibtex
@misc{the-finegrain-team-2023-refiners,
  author = {Benjamin Trom and Pierre Chapuis and Cédric Deltheil},
  title = {Refiners: The simplest way to train and run adapters on top of foundation models},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/finegrain-ai/refiners}}
}
```