|                            | train steps | batch size | samples/sec | train time (sec) | x faster |
|----------------------------|-------------|------------|-------------|------------------|----------|
| huggingface-pytorch-v100x4 | 900         | 16         |             | 336.2196         |          |
```shell
python run_mlm.py \
    --model_name_or_path roberta-base \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --do_train \
    --do_eval \
    --output_dir ../test-result \
    --per_device_train_batch_size 4
```
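Note that this is a single-process launch: with four V100s visible, the Trainer parallelizes across them with torch.nn.DataParallel, which is why the log below reports a total train batch size of 16 (4 per device across 4 GPUs). For comparison, the same job could also be launched with DistributedDataParallel. The sketch below is an assumption based on the 4-GPU instance name, not a command taken from this run:

```shell
# Hypothetical DDP launch of the same job. --nproc_per_node 4 assumes the
# four V100s implied by the instance name; the per-device batch size stays
# at 4, so the total train batch size is still 4 x 4 = 16.
python -m torch.distributed.launch --nproc_per_node 4 run_mlm.py \
    --model_name_or_path roberta-base \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --do_train \
    --do_eval \
    --output_dir ../test-result \
    --per_device_train_batch_size 4
```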
```shell
[INFO|trainer.py:837] 2021-02-21 00:57:30,952 >> ***** Running training *****
[INFO|trainer.py:838] 2021-02-21 00:57:30,952 >>   Num examples = 4798
[INFO|trainer.py:839] 2021-02-21 00:57:30,952 >>   Num Epochs = 3
[INFO|trainer.py:840] 2021-02-21 00:57:30,952 >>   Instantaneous batch size per device = 4
[INFO|trainer.py:841] 2021-02-21 00:57:30,952 >>   Total train batch size (w. parallel, distributed & accumulation) = 16
[INFO|trainer.py:842] 2021-02-21 00:57:30,952 >>   Gradient Accumulation steps = 1
[INFO|trainer.py:843] 2021-02-21 00:57:30,952 >>   Total optimization steps = 900
02/21/2021 01:03:08 - INFO - __main__ - ***** Train results *****
02/21/2021 01:03:08 - INFO - __main__ -   epoch = 3.0
02/21/2021 01:03:08 - INFO - __main__ -   train_runtime = 336.2196
02/21/2021 01:03:08 - INFO - __main__ -   train_samples_per_second = 2.677
02/21/2021 01:03:08 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:1600] 2021-02-21 01:03:08,520 >> ***** Running Evaluation *****
[INFO|trainer.py:1601] 2021-02-21 01:03:08,520 >>   Num examples = 496
[INFO|trainer.py:1602] 2021-02-21 01:03:08,521 >>   Batch size = 32
02/21/2021 01:03:12 - INFO - __main__ - ***** Eval results *****
02/21/2021 01:03:12 - INFO - __main__ -   perplexity = 3.5460593834614293
```
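A quick sanity check on the throughput numbers: the logged train_samples_per_second matches 900 steps / 336.2196 s ≈ 2.677, so it is effectively counting optimization steps rather than individual samples here. With 16 sequences per step, the throughput works out to roughly 2.677 × 16 ≈ 42.8 sequences/sec, which is the figure to compare against the samples/sec column in the table above.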