| configuration | train steps | batch size | samples/sec | train time (sec) | x faster |
|---|---|---|---|---|---|
| huggingface-pytorch-v100x4 | 225 | 64 | | 245.2095 | |
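
The step count in the table can be cross-checked against the training command and log below: 4,798 training examples at a total batch size of 64 give ceil(4798 / 64) = 75 steps per epoch, and 75 steps × 3 epochs = 225 optimization steps. A minimal sketch of that arithmetic (variable names are illustrative, not from run_mlm.py):

```python
import math

# Values taken from the training log below; variable names are illustrative.
num_examples = 4798       # "Num examples"
per_device_batch = 16     # --per_device_train_batch_size
num_gpus = 4              # the v100x4 setup
epochs = 3                # "Num Epochs"

total_batch = per_device_batch * num_gpus                # 64
steps_per_epoch = math.ceil(num_examples / total_batch)  # 75
total_steps = steps_per_epoch * epochs
print(total_steps)  # 225, matching "Total optimization steps"

# The logged train_samples_per_second (0.918) matches steps / runtime
# rather than samples / runtime in this transformers version:
print(round(total_steps / 245.2095, 3))  # 0.918
```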
```shell
python run_mlm.py \
    --model_name_or_path roberta-base \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --do_train \
    --do_eval \
    --output_dir ../test-result \
    --per_device_train_batch_size=16
```
```shell
[INFO|trainer.py:837] 2021-02-21 01:07:07,135 >> ***** Running training *****
[INFO|trainer.py:838] 2021-02-21 01:07:07,135 >>   Num examples = 4798
[INFO|trainer.py:839] 2021-02-21 01:07:07,135 >>   Num Epochs = 3
[INFO|trainer.py:840] 2021-02-21 01:07:07,135 >>   Instantaneous batch size per device = 16
[INFO|trainer.py:841] 2021-02-21 01:07:07,135 >>   Total train batch size (w. parallel, distributed & accumulation) = 64
[INFO|trainer.py:842] 2021-02-21 01:07:07,135 >>   Gradient Accumulation steps = 1
[INFO|trainer.py:843] 2021-02-21 01:07:07,135 >>   Total optimization steps = 225
02/21/2021 01:11:13 - INFO - __main__ - ***** Train results *****
02/21/2021 01:11:13 - INFO - __main__ -   epoch = 3.0
02/21/2021 01:11:13 - INFO - __main__ -   train_runtime = 245.2095
02/21/2021 01:11:13 - INFO - __main__ -   train_samples_per_second = 0.918
02/21/2021 01:11:13 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:1600] 2021-02-21 01:11:13,693 >> ***** Running Evaluation *****
[INFO|trainer.py:1601] 2021-02-21 01:11:13,693 >>   Num examples = 496
[INFO|trainer.py:1602] 2021-02-21 01:11:13,694 >>   Batch size = 32
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:03<00:00, 4.03it/s]
02/21/2021 01:11:17 - INFO - __main__ - ***** Eval results *****
02/21/2021 01:11:17 - INFO - __main__ -   perplexity = 3.5358893246784704
```
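
The reported perplexity is derived from the evaluation loss: the run_mlm.py example script computes it as exp(eval_loss). A minimal illustration of that relationship (the eval_loss value below is back-computed from the reported perplexity, since the log does not print it directly):

```python
import math

# run_mlm.py derives perplexity from the evaluation loss as exp(eval_loss).
# eval_loss here is back-computed from the perplexity reported above.
eval_loss = math.log(3.5358893246784704)  # ~1.2631
print(math.exp(eval_loss))                # 3.5358893246784704
```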