
How to Use Tensorboard in Pytorch

source link: https://jdhao.github.io/2022/04/20/pytorch-tensorboard-use/

2022-04-20 252 words 2 mins read

This is a brief note on how to use Tensorboard in PyTorch.

First we need to install tensorboard:

pip install tensorboard

The main interface we use is SummaryWriter. It has many built-in methods, such as add_scalar, add_image, and add_graph (for visualizing torch models).

For most use cases, we just need to use add_scalar().

import os

import numpy as np
from torch.utils.tensorboard import SummaryWriter


def main():
    log_dir = "exp_log"
    # creating the directory up front is optional: SummaryWriter will
    # create log_dir itself if it does not exist
    if not os.path.exists(log_dir):
        os.makedirs(log_dir)

    writer = SummaryWriter(log_dir=log_dir)
    for i in range(50):
        # tag, scalar value, global step
        writer.add_scalar("my curve", np.random.random(), i)

    # close the writer after training so all pending events are flushed
    writer.close()


if __name__ == "__main__":
    main()

The first argument is the tag given to this value series, the second is the scalar value itself, and the third is the global step, which becomes the x-axis in the TensorBoard plot.
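To make the three arguments concrete, here is a small sketch that logs a synthetic, decaying "loss" curve. The decay formula is just an illustration; in real training you would pass something like loss.item() instead:

```python
import glob
import math
import os
import tempfile

from torch.utils.tensorboard import SummaryWriter

log_dir = tempfile.mkdtemp()  # throwaway directory for this demo
writer = SummaryWriter(log_dir=log_dir)

losses = []
for step in range(100):
    loss = math.exp(-step / 30)  # synthetic stand-in for a training loss
    losses.append(loss)
    # tag, scalar value, global step (x-axis in TensorBoard)
    writer.add_scalar("Loss/train", loss, step)

writer.close()

# the writer serializes everything into event files in log_dir
event_files = glob.glob(os.path.join(log_dir, "events.out.tfevents.*"))
```

After running this, TensorBoard pointed at log_dir shows a single smoothly decreasing curve under the tag Loss/train.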

We can also group the plots by giving tags a common prefix:

for n_iter in range(100):
    writer.add_scalar('Loss/train', np.random.random(), n_iter)
    writer.add_scalar('Loss/test', np.random.random(), n_iter)
    writer.add_scalar('Accuracy/train', np.random.random(), n_iter)
    writer.add_scalar('Accuracy/test', np.random.random(), n_iter)

In the visualization, we will get two groups, one for Loss and one for Accuracy. Each group has two plots, for train and test respectively.

Often we want to show/compare several curves on the same plot. This can be achieved with add_scalars():

for n_iter in range(100):
    writer.add_scalars('Loss', {'train': np.random.random(),
                                'test': np.random.random()}, n_iter)

    writer.add_scalars('Accuracy', {'train': np.random.random(),
                                    'test': np.random.random()}, n_iter)

In the above code, we have two groups, and each group has one plot showing both train and test stats.
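One detail worth knowing: under the hood, add_scalars() creates an extra event-file writer for each key in the dict, so the log directory gains one subdirectory per series (the exact naming scheme may vary between PyTorch versions). A minimal sketch to observe this:

```python
import os
import tempfile

from torch.utils.tensorboard import SummaryWriter

log_dir = tempfile.mkdtemp()  # throwaway directory for this demo
writer = SummaryWriter(log_dir=log_dir)

# one call with two keys -> two sub-writers, one per series
writer.add_scalars("Loss", {"train": 0.5, "test": 0.7}, 0)
writer.close()

# list the subdirectories that add_scalars created
subdirs = sorted(
    d for d in os.listdir(log_dir)
    if os.path.isdir(os.path.join(log_dir, d))
)
```

This also means add_scalars() is heavier than add_scalar(): calling it with many keys creates many event files, so keep the dict small.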

To actually show the visualizations, we can run the following command:

tensorboard --logdir=exp_log

The --logdir argument should point to the directory containing the TensorBoard logs written during your experiment. Then you can open the printed URL (by default http://localhost:6006) in the browser and check the plots.
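If the default port is taken, or if you are training on a remote server and want to reach the dashboard from another machine, the standard tensorboard flags --port and --bind_all can help; the port number below is just an example:

```shell
# serve the logs on a custom port and listen on all interfaces,
# so the dashboard is reachable from other machines
tensorboard --logdir=exp_log --port=6007 --bind_all
```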
