Commit

Merge branch 'docs' into dev: Update docs
NikolasEnt committed Apr 20, 2024
2 parents cba99e5 + da5a6eb commit 5a1924f
Showing 2 changed files with 33 additions and 29 deletions.
README.md (3 changes: 2 additions & 1 deletion)
@@ -39,8 +39,8 @@ pip install -U git+https://github.com/lRomul/argus.git@dev
Simple image classification example with `create_model` from [pytorch-image-models](https://github.com/rwightman/pytorch-image-models):

```python
-from torchvision.datasets import MNIST
from torch.utils.data import DataLoader
+from torchvision.datasets import MNIST
from torchvision.transforms import Compose, ToTensor, Normalize

import timm
@@ -100,6 +100,7 @@ if __name__ == "__main__":
```

You can find more examples [here](https://pytorch-argus.readthedocs.io/en/latest/examples.html).
+Additional guides on how to customize and use argus components can be found in the [Guides](https://pytorch-argus.readthedocs.io/en/latest/guides.html) section.


## Why this name, Argus?
docs/source/guides.rst (59 changes: 31 additions & 28 deletions)
@@ -147,7 +147,7 @@ The simplest user case is allows to load a model with saved parameters and compo
However, the model loading process may require customizations; some cases are provided below.

-1. Load the model to a specified device.
+1. Load the model to a specific device.
Just provide the desired device name or a list of devices.

.. code:: python
@@ -156,9 +156,10 @@ However, the model loading process may require customizations; some cases are pr
model = load_model('/path/to/model/file', device='cuda:0')
The feature is helpful if one wants to load the model to a specific device for training or inference
-and also to load the model on a machine that does not have the device, which was specified before the
-model file was saved. For example, if the model was saved with ``device=='cuda:1'``,
-while the target machine is equipped with the only GPU, so, ``device=='cuda:0'`` is the only valid option.
+and also to load the model on a machine that does not have the device, which was used before the
+model file was saved. For example, if a model was saved with ``device='cuda:1'`` but the target machine
+only has one GPU, one would need to load the model on that GPU. In this case, the device
+should be specified as ``device='cuda:0'``, as it is the only valid GPU option.
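
The same call can also target several GPUs at once (a sketch, assuming the list form of ``device`` mentioned above; the concrete GPU ids are placeholders):

.. code:: python

    from argus import load_model

    # Sketch: pass a list of device names to spread the model across
    # multiple GPUs; the ids here are placeholders.
    model = load_model('/path/to/model/file', device=['cuda:0', 'cuda:1'])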

.. note::

@@ -257,6 +258,9 @@ dynamic batch size by using :func:`torch.onnx.export`.

.. code:: python
+import torch
+from argus import load_model
+# Assuming the model has one input and one output.
model = load_model('/path/to/model/file', device='cpu', loss=None,
                   optimizer=None, prediction_transform=None)
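
The export call itself is folded out of this hunk; a minimal sketch of what the section describes might look as follows (the input shape, tensor names, and output file name are assumptions, not part of the diff):

.. code:: python

    # Placeholder input; the shape (1, 3, 224, 224) is an assumed example.
    dummy_input = torch.randn(1, 3, 224, 224)
    torch.onnx.export(
        model.nn_module,  # the underlying torch.nn.Module of the argus model
        dummy_input,
        'model.onnx',
        input_names=['input'],
        output_names=['output'],
        # Mark the batch dimension as dynamic.
        dynamic_axes={'input': {0: 'batch_size'},
                      'output': {0: 'batch_size'}},
    )
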
@@ -303,10 +307,12 @@ The first attribute specifies the name of the evaluation metric, while the secon
(`max`) or a lower value (`min`) means improvement for this metric.
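
As a minimal sketch of those two attributes (class-level values; the metric name here is a placeholder):

.. code-block:: python

    from argus.metrics import Metric

    class MyMetric(Metric):
        name = 'my_metric'  # key under which the value appears in state.metrics
        better = 'max'      # a higher value means improvement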

The code below demonstrates a top-K accuracy metric, which implements the required methods.
+:class:`argus.utils.AverageMeter` is used to compute the average metric value over the predictions.

.. code-block:: python
from argus.metrics import Metric
+from argus.utils import AverageMeter
class TopKAccuracy(Metric):
@@ -321,26 +327,24 @@ The code below demonstrates a top-K accuracy metric, which implements the requir
    def __init__(self, k: int = 5):
        self.k = k
-        self.correct = 0
-        self.count = 0
+        self.accuracy_meter = AverageMeter()
        # Parametrized name allows having several instances of the metric with different k values
        self.name = f'top_{self.k}_accuracy'

    def reset(self):
-        self.correct = 0
-        self.count = 0
+        self.accuracy_meter.reset()

    def update(self, step_output: dict):
        indices = torch.topk(step_output['prediction'], k=self.k, dim=1)[1]
        target = step_output['target'].unsqueeze(1)
-        correct = torch.any(indices == target, dim=1)
-        self.correct += torch.sum(correct).item()
-        self.count += correct.shape[0]
+        n_correct = torch.sum(torch.any(indices == target, dim=1)).item()
+        n_items = target.shape[0]
+        self.accuracy_meter.update(n_correct, n=n_items)

    def compute(self) -> float:
-        if self.count == 0:
+        if self.accuracy_meter.count == 0:
            raise RuntimeError('Must be at least one example for computation')
-        return self.correct / self.count
+        return self.accuracy_meter.average
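
Once defined, the metric can be passed to ``fit`` like a built-in one (a usage sketch; the loaders and epoch count are placeholders):

.. code-block:: python

    # Usage sketch: train_loader/val_loader are placeholder DataLoaders.
    model.fit(train_loader,
              val_loader=val_loader,
              num_epochs=10,
              metrics=[TopKAccuracy(k=5)])  # logged as 'val_top_5_accuracy'
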
In some more advanced use cases, it may be required to create a custom metric to
@@ -359,9 +363,10 @@ correct answer was present among the top-K predictions.
from argus.engine import State
from argus.metrics import Metric
+from argus.utils import AverageMeter

-class TopKAccuracy(Metric):
+class TopKAccuracyRank(Metric):
    """Calculate the top-K accuracy for multiclass classification.

    It also reports the average rank of the correct top-K predictions.
@@ -374,34 +379,32 @@ correct answer was present among the top-K predictions.
    def __init__(self, k: int = 5):
        self.k = k
-        self.correct = 0
-        self.rank = 0
-        self.count = 0
+        self.accuracy_meter = AverageMeter()
+        self.rank_meter = AverageMeter()
        self.name = f'top_{self.k}_accuracy'

    def reset(self):
-        self.correct = 0
-        self.rank = 0
-        self.count = 0
+        self.accuracy_meter.reset()
+        self.rank_meter.reset()

    def update(self, step_output: dict):
        indices = torch.topk(step_output['prediction'], k=self.k, dim=1)[1]
        target = step_output['target'].unsqueeze(1)
-        correct = torch.any(indices == target, dim=1)
-        rank = torch.nonzero(indices == target)[:, 1]
-        self.correct += torch.sum(correct).item()
-        self.rank += torch.sum(rank).item()
-        self.count += correct.shape[0]
+        n_correct = torch.sum(torch.any(indices == target, dim=1)).item()
+        rank_sum = torch.sum(torch.nonzero(indices == target)[:, 1]).item()
+        n_items = target.shape[0]
+        self.accuracy_meter.update(n_correct, n=n_items)
+        self.rank_meter.update(rank_sum, n=n_items)

    def compute(self) -> float:
-        if self.count == 0:
+        if self.accuracy_meter.count == 0:
            raise RuntimeError('Must be at least one example for computation')
-        return self.correct / self.count
+        return self.accuracy_meter.average

    def epoch_complete(self, state: State):
        with torch.no_grad():
            accuracy = self.compute()
-            rank = self.rank / self.count + 1.0  # +1.0 because ranks are 1-indexed
+            rank = self.rank_meter.average + 1.0  # +1.0 because ranks are 1-indexed
            name_prefix = f"{state.phase}_" if state.phase else ''
            state.metrics[f'{name_prefix}{self.name}'] = accuracy
            state.metrics[f'{name_prefix}rank_{self.k}'] = rank
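
Since ``epoch_complete`` writes both values into ``state.metrics``, either can be monitored by callbacks (a sketch; the checkpoint directory and loader names are placeholders):

.. code-block:: python

    from argus.callbacks import MonitorCheckpoint

    # Sketch: keep the three checkpoints with the best top-5 accuracy;
    # the 'val_' prefix comes from the validation phase name.
    callbacks = [MonitorCheckpoint(dir_path='checkpoints/',
                                   monitor='val_top_5_accuracy',
                                   max_saves=3)]
    model.fit(train_loader,
              val_loader=val_loader,
              metrics=[TopKAccuracyRank(k=5)],
              callbacks=callbacks)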
