PyTorch-Ignite

High-level library to help with training and evaluating neural networks in PyTorch flexibly and transparently.


Simple Engine and Event System

Trigger handlers on any built-in or custom event.

from ignite.engine import Engine, Events

# The process function receives the engine and the current batch and
# returns this iteration's output (stored in engine.state.output)
trainer = Engine(lambda engine, batch: batch / 2)

@trainer.on(Events.ITERATION_COMPLETED(every=2))
def print_output(engine):
    print(engine.state.output)
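
Running the engine over any iterable then drives the loop; a minimal sketch with toy data (the values and epoch count are illustrative):

# Each element of the iterable is passed to the process function as `batch`;
# print_output fires after every 2nd iteration and prints batch / 2.
trainer.run([1.0, 2.0, 3.0, 4.0], max_epochs=2)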

Rich Handlers

Checkpointing, early stopping, profiling, parameter scheduling, learning rate finder, and more.

import torch
import torch.nn as nn

from ignite.engine import Engine, Events
from ignite.handlers import ModelCheckpoint, EarlyStopping, PiecewiseLinear

model = nn.Linear(3, 3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.5)
trainer = Engine(lambda engine, batch: None)
evaluator = Engine(lambda engine, batch: None)

# Model checkpoint handler: save the model every 2nd epoch
checkpoint = ModelCheckpoint('/tmp/ckpts', 'training')
trainer.add_event_handler(Events.EPOCH_COMPLETED(every=2), checkpoint, {'model': model})

# Early stopping handler: stop if the score has not improved for 3 evaluations
def score_function(engine):
    return engine.state.metrics['acc']

es = EarlyStopping(patience=3, score_function=score_function, trainer=trainer)
# Note: the handler is attached to an *evaluator* (runs one epoch on the validation dataset)
evaluator.add_event_handler(Events.COMPLETED, es)

# Piecewise linear scheduler: drive 'lr' through the given (event_index, value) milestones
scheduler = PiecewiseLinear(optimizer, 'lr', [(10, 0.5), (20, 0.45), (21, 0.3), (30, 0.1), (40, 0.1)])
trainer.add_event_handler(Events.ITERATION_STARTED, scheduler)
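
# Typical wiring (a sketch): run the evaluator after every training epoch so
# EarlyStopping sees a fresh score. `val_loader` is a placeholder, and a metric
# named 'acc' is assumed to be attached to the evaluator.
@trainer.on(Events.EPOCH_COMPLETED)
def run_validation(engine):
    evaluator.run(val_loader)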

Distributed Training

Speed up training on CPUs, GPUs, and TPUs.

import ignite.distributed as idist

def training(local_rank, *args, **kwargs):
    dataloader_train = idist.auto_dataloader(dataset, ...)

    model = ...
    model = idist.auto_model(model)

    optimizer = ...
    optimizer = idist.auto_optim(optimizer)

backend = 'nccl'  # or 'gloo', 'horovod', 'xla-tpu'
with idist.Parallel(backend) as parallel:
    parallel.run(training)
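
Inside the training function, per-process context is available through idist helpers; a minimal sketch:

rank = idist.get_rank()              # global rank of the current process
world_size = idist.get_world_size()  # total number of participating processes
device = idist.device()              # torch.device assigned to this process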

50+ Metrics

Distributed-ready, out-of-the-box metrics to easily evaluate models.

from ignite.engine import Engine
from ignite.metrics import Accuracy

trainer = Engine(...)
acc = Accuracy()
acc.attach(trainer, 'accuracy')
state = trainer.run(data)
print(f"Accuracy: {state.metrics['accuracy']}")

Rich Integration with Experiment Managers

TensorBoard, MLflow, WandB, Neptune, and more.

from ignite.engine import Engine, Events
from ignite.contrib.handlers.tensorboard_logger import TensorboardLogger

trainer = Engine(...)

# Create a TensorBoard logger
with TensorboardLogger(log_dir="experiments/tb_logs") as tb_logger:
    # Attach the logger to the trainer to log the training loss at each iteration
    tb_logger.attach_output_handler(
        trainer,
        event_name=Events.ITERATION_COMPLETED,
        tag="training",
        output_transform=lambda loss: {"loss": loss}
    )
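
    # The same logger can also track optimizer parameters; a sketch assuming
    # an `optimizer` object exists in scope (a placeholder here):
    tb_logger.attach_opt_params_handler(
        trainer,
        event_name=Events.ITERATION_STARTED,
        optimizer=optimizer,
        param_name="lr"  # log the current learning rate at each iteration
    )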

Ecosystem

Project MONAI

MONAI is a PyTorch-based, open-source framework for deep learning in healthcare imaging, part of the PyTorch Ecosystem.

Code-Generator

An application that generates training scripts with PyTorch-Ignite.

Nussl

A flexible source separation library in Python.

See all projects

With the support of:

NumFOCUS, Quansight Labs, Amazon Web Services (AWS), Agenium Space