better home page + menu icons

This commit is contained in:
Pierre Chapuis 2024-02-01 18:32:05 +01:00 committed by Cédric Deltheil
parent 3f3d192375
commit 39055f39a4
9 changed files with 79 additions and 42 deletions

BIN
docs/assets/logo_light.png Normal file



@@ -1,3 +1,7 @@
---
icon: material/tray-plus
---
# Adapter
Adapters are the final and most high-level abstraction in Refiners. They are the concept of adaptation turned into code.


@@ -1,3 +1,7 @@
---
icon: material/family-tree
---
# Chain


@@ -1,3 +1,7 @@
---
icon: material/comment-alert-outline
---
# Context
## Motivation: avoiding "props drilling"


@@ -0,0 +1,13 @@
---
icon: material/wrench-cog-outline
---
# Advanced usage
## Using other package managers (pip, Poetry...)
We use Rye to maintain and release Refiners but it conforms to the standard Python packaging guidelines and can be used with other package managers. Please refer to their respective documentation to figure out how to install a package from Git if you intend to use the development branch, as well as how to specify features.
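For instance, with pip-compatible tools a Git dependency with optional features can typically be declared as a PEP 508 direct reference. The sketch below is an assumption based on the `conversion` feature flag used elsewhere in these docs, not an officially documented snippet:

```toml
# pyproject.toml (sketch): depend on the development branch, with an extra
[project]
dependencies = [
    "refiners[conversion] @ git+https://github.com/finegrain-ai/refiners.git@main",
]
```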
## Using stable releases from PyPI
Although we recommend using our development branch, we do [publish more stable releases to PyPI](https://pypi.org/project/refiners/) and you are welcome to use them in your project. However, note that the format of weights can be different from the current state of the development branch, so you will need the conversion scripts from the corresponding tag in GitHub, for instance [here for v0.2.0](https://github.com/finegrain-ai/refiners/tree/v0.2.0).


@@ -1,16 +1,14 @@
# Getting Started
---
icon: material/star-outline
---
Refiners is a micro framework on top of PyTorch with first class citizen APIs for foundation model adaptation.
Refiners requires Python 3.10 or later; its main dependency is PyTorch.
## Recommended usage (development branch, with Rye)
# Recommended usage
Refiners is still a young project and development is active, so to use the latest and greatest version of the framework we recommend you use the `main` branch from our development repository.
Moreover, we recommend using [Rye](https://rye-up.com) which simplifies several things related to Python package management, so start by following the instructions to install it on your system.
### Trying Refiners, converting weights
## Trying Refiners, converting weights
To try Refiners, clone the GitHub repository and install it with all optional features:
@@ -28,7 +26,7 @@ python "scripts/conversion/convert_diffusers_autoencoder_kl.py" --to "lda.safete
If you need to convert weights for all models, check out `scripts/prepare_test_weights.py` (warning: it requires a GPU with significant VRAM and a lot of disk space).
Now let to check that it works copy your favorite 512x512 picture in the current directory as `input.png` and create `ldatest.py` with this content:
Now, to check that it works, copy your favorite 512x512 picture into the current directory as `input.png` and create `ldatest.py` with this content:
```py
from PIL import Image
@@ -53,7 +51,7 @@ python ldatest.py
Inspect `output.png`: it should be similar to `input.png` but have a few differences. Latent Autoencoders are good compressors!
### Using refiners in your own project
## Using refiners in your own project
So far you used Refiners as a standalone package, but if you want to create your own project using it as a dependency here is how you can proceed:
@@ -74,16 +72,6 @@ rye add --dev --git "git@github.com:finegrain-ai/refiners.git" --features conver
Note that you will still need to download the conversion scripts independently if you go that route.
### What next?
## What's next?
We suggest you check out the [guides](/guides/) section to dive into the usage of Refiners, or the [Key Concepts](/concepts/chain/) section for a better understanding of how the framework works.
## Advanced usage
### Using other package managers (pip, Poetry...)
We use Rye to maintain and release Refiners but it conforms to the standard Python packaging guidelines and can be used with other package managers. Please refer to their respective documentation to figure out how to install a package from Git if you intend to use the development branch, as well as how to specify features.
### Using stable releases from PyPI
Although we recommend using our development branch, we do [publish more stable releases to PyPI](https://pypi.org/project/refiners/) and you are welcome to use them in your project. However, note that the format of weights can be different from the current state of the development branch, so you will need the conversion scripts from the corresponding tag in GitHub, for instance [here for v0.2.0](https://github.com/finegrain-ai/refiners/tree/v0.2.0).

docs/home/why.md Normal file

@@ -0,0 +1,35 @@
---
icon: material/lightbulb-on-outline
---
# Why Refiners?
## PyTorch: an imperative framework
PyTorch is a great framework to implement deep learning models, widely adopted in academia and industry around the globe. A core design principle of PyTorch is that users write *imperative* Python code that manipulates Tensors[^1]. This code can be organized in Modules, which are just Python classes whose constructors typically initialize parameters and load weights, and which implement a `forward` method that computes the forward pass. Dealing with reconstructing an inference graph, backpropagation and so on are left to the framework.
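As an illustration, a minimal imperative PyTorch Module looks like this (a generic sketch, not code from Refiners):

```python
import torch
from torch import nn

class TinyMLP(nn.Module):
    def __init__(self, in_dim: int = 8, hidden: int = 16, out_dim: int = 4) -> None:
        super().__init__()
        # The constructor initializes parameters (and would load weights here).
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Imperative code manipulating tensors; autograd handles the rest.
        return self.fc2(torch.relu(self.fc1(x)))

model = TinyMLP()
y = model(torch.randn(2, 8))
```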
This approach works very well in general, as demonstrated by the popularity of PyTorch. However, the growing importance of the Adaptation pattern is challenging it.
## Adaptation: patching foundation models
Adaptation is the idea of *patching* existing powerful models to implement new capabilities. Those models are called foundation models; they are typically trained from scratch on amounts of data inaccessible to most individuals, small companies or research labs, and exhibit emergent properties. Examples of such models are LLMs (GPT, LLaMa, Mistral), image generation models (Stable Diffusion, Muse), vision models (BLIP-2, LLaVA 1.5, Fuyu-8B) but also models trained on more specific tasks such as embedding extraction (CLIP, DINOv2) or image segmentation (SAM).
Adaptation of foundation models can take many forms. One of the simplest but most powerful derives from fine-tuning: re-training a subset of the weights of the model on a specific task, then distributing only those weights. Add to this a trick to significantly reduce the size of the fine-tuned weights and you get LoRA[^2], which is probably the most well-known adaptation method. However, adaptation can go beyond that and change the shape of the model or its inputs.
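To make the idea concrete, here is a LoRA-style linear layer sketched in plain PyTorch. This is an illustration of the general technique, not Refiners' implementation; the rank and scaling values are arbitrary:

```python
import torch
from torch import nn

class LoRALinear(nn.Module):
    """Frozen base layer plus a trainable low-rank update: y = Wx + scale * B(Ax)."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0) -> None:
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the low-rank weights get fine-tuned
        self.down = nn.Linear(base.in_features, rank, bias=False)  # A
        self.up = nn.Linear(rank, base.out_features, bias=False)   # B
        nn.init.zeros_(self.up.weight)  # start as a no-op adaptation
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.up(self.down(x))

base = nn.Linear(32, 32)
adapted = LoRALinear(base, rank=4)
x = torch.randn(1, 32)
```

Only `down` and `up` need to be trained and distributed, which is why the fine-tuned weights are so small.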
## Imperative code is hard to patch cleanly
There are several approaches to patch the code of a foundation model implemented in typical PyTorch imperative style to support adaptation, including:
- Just duplicate the original code base and edit it in place unconditionally. This approach is often adopted by researchers today.
- Change the original code base to optionally support the adapter. This approach is often used by frameworks and libraries built on top of PyTorch and works well for a single adapter. However, as you start adding support for multiple adapters to the same foundation model the cyclomatic complexity explodes and the code becomes hard to maintain and error-prone. The end result is that adapters typically do not compose well.
- Change the original code to abstract adaptation by adding ad-hoc hooks everywhere. This approach has the advantage of keeping the foundation model independent from its adapter, but it makes the code extremely non-linear and hard to reason about - so-called "spaghetti code".
As believers in adaptation, none of those approaches was appealing to us, so we designed Refiners as a better option. Refiners is a micro-framework built on top of PyTorch which does away with its imperative style. In Refiners, models are implemented in a *declarative* way instead, which makes them by nature easier to manipulate and patch.
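In plain PyTorch terms, the difference resembles replacing a hand-written `forward` with a declarative composition of layers. The sketch below uses `nn.Sequential` as a rough stand-in; Refiners' own chain abstraction is richer:

```python
import torch
from torch import nn

# Declarative style: the model *is* its structure, so it can be inspected
# and patched programmatically (insert, replace, or wrap layers).
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 4),
)

# Patching becomes data manipulation rather than code edits:
model[1] = nn.GELU()  # swap the activation without touching any forward()
y = model(torch.randn(2, 8))
```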
## What's next?
Now that you know *why* we wrote a declarative framework, you can check out [*how*](/concepts/chain/). It's not that complicated, we promise!
[^1]: Paszke et al., 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library.
[^2]: Hu et al., 2022. LoRA: Low-Rank Adaptation of Large Language Models.


@@ -1,24 +1,11 @@
# Refiners - Docs
---
icon: material/water-outline
---
## Why Refiners?
# ![Refiners](/assets/logo_light.png)
PyTorch is a great framework to implement deep learning models, widely adopted in academia and industry around the globe. A core design principle of PyTorch is that users write *imperative* Python code that manipulates Tensors[^1]. This code can be organized in Modules, which are just Python classes whose constructors typically initialize parameters and load weights, and which implement a `forward` method that computes the forward pass. Dealing with reconstructing an inference graph, backpropagation and so on are left to the framework.
Refiners is the simplest way to train and run [adapters](/concepts/adapter/) on top of foundation models.
This approach works very well in general, as demonstrated by the popularity of PyTorch. However, the growing importance of the Adaptation pattern is challenging it.
It is a microframework built on top of PyTorch with first-class citizen APIs for foundation model adaptation.
Adaptation is the idea of *patching* existing powerful models to implement new capabilities. Those models are called foundation models; they are typically trained from scratch on amounts of data inaccessible to most individuals, small companies or research labs, and exhibit emergent properties. Examples of such models are LLMs (GPT, LLaMa, Mistral), image generation models (Stable Diffusion, Muse), vision models (BLIP-2, LLaVA 1.5, Fuyu-8B) but also models trained on more specific tasks such as embedding extraction (CLIP, DINOv2) or image segmentation (SAM).
Adaptation of foundational models can take many forms. One of the simplest but most powerful derives from fine-tuning: re-training a subset of the weights of the model on a specific task, then distributing only those weights. Add to this a trick to significantly reduce the size of the fine-tuned weights and you get LoRA[^2], which is probably the most well-known adaptation method. However, adaptation can go beyond that and change the shape of the model or its inputs.
There are several approaches to patch the code of a foundation model implemented in typical PyTorch imperative style to support adaptation, including:
- Just duplicate the original code base and edit it in place unconditionally. This approach is often adopted by researchers today.
- Change the original code base to optionally support the adapter. This approach is often used by frameworks and libraries built on top of PyTorch and works well for a single adapter. However, as you start adding support for multiple adapters to the same foundational module the cyclomatic complexity explodes and the code becomes hard to maintain and error-prone. The end result is that adapters typically do not compose well.
- Change the original code to abstract adaptation by adding ad-hoc hooks everywhere. This approach has the advantage of keeping the foundational model independent from its adapter, but it makes the code extremely non-linear and hard to reason about - so-called "spaghetti code".
As believers in adaptation, none of those approaches was appealing to us, so we designed Refiners as a better option. Refiners is a micro-framework built on top of PyTorch which does away with its imperative style. In Refiners, models are implemented in a *declarative* way instead, which makes them by nature easier to manipulate and patch.
Now that you know *why* we do that, you can check out [*how*](/concepts/chain/). It's not that hard, we promise!
[^1]: Paszke et al., 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library.
[^2]: Hu et al., 2022. LoRA: Low-Rank Adaptation of Large Language Models.
Refiners is Open Source and published under the MIT License.


@@ -53,9 +53,11 @@ extra_css:
- stylesheets/extra.css
nav:
- Home:
- index.md
- Refiners: index.md
- home/why.md
- Getting started:
- getting_started.md
- getting-started/recommended.md
- getting-started/advanced.md
- Guides:
- guides/index.md
- Key Concepts: