Mirror of https://github.com/finegrain-ai/refiners.git (synced 2024-11-23 06:38:45 +00:00)
A microframework on top of PyTorch with first-class citizen APIs for foundation model adaptation
https://refine.rs/
Topics: background-generation, background-removal, controlnet, diffusion-models, dinov2, image-generation, ip-adapter, lcm, lcm-lora, lora, sam, sdxl, segment-anything, segment-anything-model, shadow-generation, stable-diffusion, t2i-adapter, text-to-image, textual-inversion
The simplest way to train and run adapters on top of foundation models
Manifesto | Docs | Guides | Discussions | Discord
Latest News 🔥
- Added ELLA for better prompt handling (contributed by @ily-R)
- Added the Box Segmenter all-in-one solution (model, HF Space)
- Added MVANet for high resolution segmentation
- Added IC-Light to manipulate the illumination of images
- Added Multi Upscaler for high-resolution image generation, inspired by Clarity Upscaler (HF Space)
- Added HQ-SAM for high-quality mask prediction with Segment Anything
- ...see past releases
Installation
The current recommended way to install Refiners is from source using Rye:
```shell
git clone "git@github.com:finegrain-ai/refiners.git"
cd refiners
rye sync --all-features
```
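After syncing, you can sanity-check the install from the Rye-managed virtual environment. This is a minimal sketch assuming Rye's default `.venv` location inside the repository; it is not part of the official instructions.

```shell
# Activate the Rye-managed virtualenv (default location: .venv in the repo root)
. .venv/bin/activate

# Verify that the package imports without errors
python -c "import refiners"
```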
Documentation
Refiners comes with a MkDocs-based documentation website available at https://refine.rs. There you will find a quick start guide, a description of the key concepts, and in-depth foundation model adaptation guides.
Projects using Refiners
- Finegrain Editor: use state-of-the-art visual AI skills to edit product photos
- Visoid: AI-powered architectural visualization
- brycedrennan/imaginAIry: Pythonic AI generation of images and videos
- chloedia/layerdiffuse: an implementation of LayerDiffuse (foreground generation only)
Awesome Adaptation Papers
If you're interested in understanding the diversity of use cases for foundation model adaptation (potentially beyond the specific adapters supported by Refiners), we suggest you take a look at these outstanding papers:
- ControlNet
- T2I-Adapter
- IP-Adapter
- Medical SAM Adapter
- 3DSAM-adapter
- SAM-adapter
- Cross Modality Attention Adapter
- UniAdapter
Credits
We took inspiration from these great projects:
- tinygrad - For something between PyTorch and karpathy/micrograd
- Composer - A PyTorch Library for Efficient Neural Network Training
- Keras - Deep Learning for humans
Citation
```bibtex
@misc{the-finegrain-team-2023-refiners,
  author = {Benjamin Trom and Pierre Chapuis and Cédric Deltheil},
  title = {Refiners: The simplest way to train and run adapters on top of foundation models},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/finegrain-ai/refiners}}
}
```