add some links to API reference

Pierre Chapuis 2024-02-02 11:48:46 +01:00 committed by Cédric Deltheil
parent f7cc6e577c
commit 2afb7cb638
3 changed files with 10 additions and 8 deletions


@@ -8,7 +8,7 @@ Adapters are the final and most high-level abstraction in Refiners. They are the
An Adapter is [generally](#higher-level-adapters) a Chain that replaces a Module (the target) in another Chain (the parent). Typically the target will become a child of the adapter.
-In code terms, `Adapter` is a generic mixin. Adapters subclass `type(parent)` and `Adapter[type(target)]`. For instance, if you adapt a Conv2d in a Sum, the definition of the Adapter could look like:
+In code terms, [`Adapter`][refiners.fluxion.adapters.Adapter] is a generic mixin. Adapters subclass `type(parent)` and `Adapter[type(target)]`. For instance, if you adapt a `Conv2d` in a `Sum`, the definition of the Adapter could look like:
```py
class MyAdapter(fl.Sum, fl.Adapter[fl.Conv2d]):
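For context, a complete adapter of this shape could look like the sketch below. The `MyAdapter` body and its parallel 1x1 convolution branch are illustrative assumptions, not the documentation's actual example, while `setup_adapter`, `inject` and `eject` are the usual `Adapter` API:

```py
import refiners.fluxion.layers as fl

class MyAdapter(fl.Sum, fl.Adapter[fl.Conv2d]):
    def __init__(self, target: fl.Conv2d) -> None:
        with self.setup_adapter(target):
            # sum the target's output with a parallel branch
            # (a hypothetical 1x1 convolution here)
            super().__init__(
                target,
                fl.Conv2d(target.in_channels, target.out_channels, kernel_size=1),
            )

# typical lifecycle (parent is the Chain containing the target):
# adapter = MyAdapter(target).inject(parent)
# ... run the adapted model ...
# adapter.eject()  # restores the original model
```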
@@ -68,7 +68,7 @@ Starting from the same model as earlier, let us assume we want to:
- invert the order of the Linear and Chain B in Chain A;
- replace the first child block of Chain B with the original Chain A.
-This Adapter will perform a `structural_copy` of part of its target, which means it will duplicate all Chain nodes but keep pointers to the same `WeightedModule`s, and hence not use extra GPU memory.
+This Adapter will perform a [`structural_copy`][refiners.fluxion.layers.Chain.structural_copy] of part of its target, which means it will duplicate all Chain nodes but keep pointers to the same [`WeightedModule`][refiners.fluxion.layers.WeightedModule]s, and hence not use extra GPU memory.
```py
class MyAdapter(fl.Chain, fl.Adapter[fl.Chain]):
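A standalone sketch of what `structural_copy` guarantees, under the stated assumption that duplicated Chain nodes are new objects while `WeightedModule` leaves are shared by reference:

```py
import refiners.fluxion.layers as fl

original = fl.Chain(fl.Linear(4, 4), fl.Chain(fl.Linear(4, 4)))
copy = original.structural_copy()

# Chain nodes are duplicated...
assert copy is not original
assert copy[1] is not original[1]
# ...but WeightedModule leaves are shared, so no extra GPU memory is used
assert copy[0] is original[0]
```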
@@ -98,7 +98,7 @@ Note that the Linear is in the Chain twice now, but that does not matter as long
As before, we can call `eject` on the adapter to go back to the original model.
-## A real-world example: LoraAdapter
+## A real-world example: [LoraAdapter][refiners.fluxion.adapters.LoraAdapter]
A popular example of adaptation is [LoRA](https://arxiv.org/abs/2106.09685). You can check out [how we implement it in Refiners](https://github.com/finegrain-ai/refiners/blob/main/src/refiners/fluxion/adapters/lora.py).
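To connect LoRA back to the adapter pattern above, a LoRA on a `Linear` boils down to something like this simplified sketch; it is not Refiners' actual `LoraAdapter` implementation, which also handles details such as scaling and weight loading:

```py
import refiners.fluxion.layers as fl

class LinearLoraSketch(fl.Sum, fl.Adapter[fl.Linear]):
    """Simplified LoRA: y = W x + B A x, with A and B low-rank."""

    def __init__(self, target: fl.Linear, rank: int = 16) -> None:
        with self.setup_adapter(target):
            super().__init__(
                target,  # the original (typically frozen) weight
                fl.Chain(
                    fl.Linear(target.in_features, rank, bias=False),   # A
                    fl.Linear(rank, target.out_features, bias=False),  # B
                ),
            )
```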


@@ -5,7 +5,7 @@ icon: material/family-tree
# Chain
-When we say models are implemented in a declarative way in Refiners, what this means in practice is they are implemented as Chains. `Chain` is a Python class to implement trees of modules. It is a subclass of Refiners' `Module`, which is in turn a subclass of PyTorch's `Module`. All inner nodes of a Chain are subclasses of `Chain`, and leaf nodes are subclasses of Refiners' `Module`.
+When we say models are implemented in a declarative way in Refiners, what this means in practice is they are implemented as Chains. [`Chain`][refiners.fluxion.layers.Chain] is a Python class to implement trees of modules. It is a subclass of Refiners' [`Module`][refiners.fluxion.layers.Module], which is in turn a subclass of PyTorch's `Module`. All inner nodes of a Chain are subclasses of `Chain`, and leaf nodes are subclasses of Refiners' `Module`.
## A first example
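The class hierarchy described above is easy to see on a toy instance; a quick sketch:

```py
import refiners.fluxion.layers as fl

model = fl.Chain(        # inner node: a Chain
    fl.Linear(8, 8),     # leaf: a weighted Module
    fl.Chain(            # inner node: a nested Chain
        fl.ReLU(),       # leaf: a plain Module
        fl.Linear(8, 8),
    ),
)
assert isinstance(model, fl.Module)  # every Chain is also a Refiners Module
```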
@@ -41,9 +41,10 @@ class BasicModel(fl.Chain):
)
```
-> **Note** - We often use the namespace `fl`, which stands for `fluxion`, the part of Refiners that implements basic layers.
+!!! note
+    We often use the namespace `fl`, which stands for `fluxion`, the part of Refiners that implements basic layers.
-As of writing, Refiners does not include a `Softmax` layer by default, but as you can see you can easily call arbitrary code using `fl.Lambda`. Alternatively, if you just wanted to write `Softmax()`, you could implement it like this:
+As of writing, Refiners does not include a `Softmax` layer by default, but as you can see you can easily call arbitrary code using [`fl.Lambda`][refiners.fluxion.layers.Lambda]. Alternatively, if you just wanted to write `Softmax()`, you could implement it like this:
```py
class Softmax(fl.Module):
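For illustration, the `fl.Lambda` route mentioned above wraps an arbitrary callable as a Chain leaf; a minimal sketch:

```py
import torch
import refiners.fluxion.layers as fl

model = fl.Chain(
    fl.Linear(10, 10),
    fl.Lambda(lambda x: torch.nn.functional.softmax(x, dim=-1)),
)
```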
@@ -51,7 +52,8 @@ class Softmax(fl.Module):
return torch.nn.functional.softmax(x)
```
-> Note that we use type hints here. All of Refiners' codebase is typed, which makes it a pleasure to use if your downstream code is typed too.
+!!! note
+    Notice the type hints here. All of Refiners' codebase is typed, which makes it a pleasure to use if your downstream code is typed too.
## Inspecting and manipulating


@@ -44,7 +44,7 @@ m.set_context("my context", {"my key": 4})
m() # prints 6
```
-As you can see, to use the context, you define it by subclassing any `Chain` and defining `init_context`. You can set the context with the `set_context` method or the `SetContext` layer, and you can access it anywhere down the provider's tree with `UseContext`.
+As you can see, to use the context, you define it by subclassing any `Chain` and defining `init_context`. You can set the context with the [`set_context`][refiners.fluxion.layers.Chain.set_context] method or the [`SetContext`][refiners.fluxion.layers.SetContext] layer, and you can access it anywhere down the provider's tree with [`UseContext`][refiners.fluxion.layers.UseContext].
## Simplifying complex models with Context
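For reference, the whole pattern fits in a few lines. The `MyModel` below is an illustrative assumption rather than the documentation's exact example, but `init_context`, `set_context` and `UseContext` are used as described above:

```py
import torch
import refiners.fluxion.layers as fl

class MyModel(fl.Chain):
    def init_context(self) -> dict[str, dict[str, torch.Tensor | None]]:
        return {"my context": {"my key": None}}

    def __init__(self) -> None:
        super().__init__(
            fl.Sum(
                fl.Identity(),                          # pass the input through
                fl.UseContext("my context", "my key"),  # fetch the context value
            ),
        )

m = MyModel()
m.set_context("my context", {"my key": torch.tensor(4.0)})
print(m(torch.tensor(2.0)))  # tensor(6.)
```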