
|                             | train steps | batch-size | samples/sec | train-time (sec) | x faster |
|-----------------------------|-------------|------------|-------------|------------------|----------|
| huggingface-pytorch-tpuv3x8 | 225         | 64         |             | 90.8725          |          |
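As a cross-check on the table, the batch size and step count follow directly from the run's settings reported in the training log below: 8 examples per device across the 8 cores of a TPU v3-8, 4798 training examples, and 3 epochs. A minimal sketch of that arithmetic, using only numbers that appear in the log:

import math

num_examples = 4798      # "Num examples" in the training log
per_device_batch = 8     # --per_device_train_batch_size=8
num_cores = 8            # --num_cores 8 on a TPU v3-8
num_epochs = 3           # "Num Epochs" in the training log

total_batch = per_device_batch * num_cores                # 64, matches "batch-size"
steps_per_epoch = math.ceil(num_examples / total_batch)   # 75
total_steps = steps_per_epoch * num_epochs                # 225, matches "train steps"
print(total_batch, steps_per_epoch, total_steps)          # 64 75 225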
python xla_spawn.py --num_cores 8 \
  language-modeling/run_mlm.py \
  --model_name_or_path roberta-base \
  --dataset_name wikitext \
  --dataset_config_name wikitext-2-raw-v1 \
  --do_train \
  --do_eval \
  --output_dir ../test-result \
  --per_device_train_batch_size=8
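For context, xla_spawn.py is a thin launcher that starts one training process per TPU core using torch_xla's multiprocessing helper. A minimal sketch of that pattern, assuming torch_xla is installed on the TPU VM (the function name _mp_fn is illustrative, not the example script's exact code):

import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp

def _mp_fn(index):
    # Each spawned process binds to one TPU core and runs the training loop there.
    device = xm.xla_device()
    print(index, device, xm.xrt_world_size())

if __name__ == "__main__":
    # 8 processes for a v3-8, equivalent to --num_cores 8 above.
    xmp.spawn(_mp_fn, args=(), nprocs=8)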
[INFO|trainer.py:837] 2021-02-20 13:16:51,836 >> ***** Running training *****
[INFO|trainer.py:838] 2021-02-20 13:16:51,836 >> Num examples = 4798
[INFO|trainer.py:839] 2021-02-20 13:16:51,836 >> Num Epochs = 3
[INFO|trainer.py:840] 2021-02-20 13:16:51,836 >> Instantaneous batch size per device = 8
[INFO|trainer.py:841] 2021-02-20 13:16:51,836 >> Total train batch size (w. parallel, distributed & accumulation) = 64
[INFO|trainer.py:842] 2021-02-20 13:16:51,837 >> Gradient Accumulation steps = 1
[INFO|trainer.py:843] 2021-02-20 13:16:51,837 >> Total optimization steps = 225
02/20/2021 13:18:28 - INFO - run_mlm - ***** Train results *****
02/20/2021 13:18:28 - INFO - run_mlm - epoch = 3.0
02/20/2021 13:18:28 - INFO - run_mlm - train_runtime = 90.8725
02/20/2021 13:18:28 - INFO - run_mlm - train_samples_per_second = 2.476
02/20/2021 13:18:28 - INFO - run_mlm - *** Evaluate ***
[INFO|trainer.py:1600] 2021-02-20 13:18:28,429 >> ***** Running Evaluation *****
[INFO|trainer.py:1601] 2021-02-20 13:18:28,429 >> Num examples = 496
[INFO|trainer.py:1602] 2021-02-20 13:18:28,429 >> Batch size = 8
100%|###########################################################################################################################| 8/8 [00:01<00:00, 5.49it/s]
02/20/2021 13:18:30 - INFO - run_mlm - ***** Eval results *****
02/20/2021 13:18:30 - INFO - run_mlm - perplexity = 3.4844321240810725
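The perplexity printed at the end is the exponential of the masked-LM evaluation loss, so it maps back to a loss value directly; a quick sanity check using the 3.484… figure from the eval log above:

import math

eval_perplexity = 3.4844321240810725   # "perplexity" from the eval log
eval_loss = math.log(eval_perplexity)  # ≈ 1.25, the underlying cross-entropy loss
print(round(eval_loss, 4))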