Accelerating Model Training with the ONNX Runtime
TL;DR: This article introduces the new improvements to the ONNX Runtime for accelerated training and outlines the 4 key steps for training a model with ORT.

What is the ONNX Runtime (ORT)?

The ONNX Runtime is Microsoft's open-source, cross-platform engine for running models in the Open Neural Network Exchange (ONNX) format, and it now includes training acceleration that integrates with existing training scripts with only a few code changes.

[More from Microsoft: Announcing accelerated training with ONNX Runtime—train models up to 45% faster]

[More from Microsoft: onnxruntime-training-examples]

Step 1: Set Up ORT Distributed Training Environment

device = ort_supplement.setup_onnxruntime_with_mpi(args)

The setup_onnxruntime_with_mpi function can be found in the ort_supplement module.
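Concretely, a setup helper like this typically reads the rank layout that mpirun exposes and binds each process to its own GPU. The sketch below is an assumption about what such a function might do (OpenMPI environment variables, one CUDA device per rank, a PyTorch front end); it is not the actual ort_supplement code.

# Minimal sketch of an MPI-aware setup helper (an assumption, not the
# actual ort_supplement implementation): read the rank layout that mpirun
# exposes via OpenMPI environment variables and pin each process to one GPU.
import os
import torch

def setup_onnxruntime_with_mpi(args):
    args.world_rank = int(os.environ.get("OMPI_COMM_WORLD_RANK", 0))
    args.world_size = int(os.environ.get("OMPI_COMM_WORLD_SIZE", 1))
    args.local_rank = int(os.environ.get("OMPI_COMM_WORLD_LOCAL_RANK", 0))

    # One GPU per rank: bind this process to its local device.
    torch.cuda.set_device(args.local_rank)
    return torch.device("cuda", args.local_rank)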

Step 2: Create an ORT Trainer Model

model = ort_supplement.create_ort_trainer(args, device, model)

The create_ort_trainer function can be found in the ort_supplement module.

Note that tensor dimensions should be passed as concrete numeric values (rather than symbolic names) to get the full graph-optimization benefits.
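To make that note concrete, the sketch below contrasts fixed numeric shapes with symbolic ones: concrete sizes such as the batch size and sequence length let ORT pre-plan memory and fuse kernels across the whole graph, while symbolic names keep shapes flexible but block some of those optimizations. The structure and field names here are illustrative assumptions, not the exact format used by create_ort_trainer.

# Illustrative only: concrete numeric shapes vs. symbolic dimension names.
# The exact model-description format expected by ORT depends on the
# onnxruntime version, so treat this structure as an assumption.
def build_model_desc(batch_size=32, seq_len=128, static=True):
    if static:
        # Fully static shapes: ORT can pre-plan memory and apply all fusions.
        dims = [batch_size, seq_len]
    else:
        # Symbolic shapes: more flexible, but fewer graph optimizations apply.
        dims = ["batch", "seq_len"]
    return {
        "inputs": [
            ("input_ids", dims),
            ("attention_mask", dims),
        ],
        "outputs": [("loss", [])],
    }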

Step 3: Call ORT Training Steps to Train Model

loss, global_step = ort_supplement.run_ort_training_step(
    args, global_step, training_steps, model, batch)  # runs the actual training step

The run_ort_training_step function can be found in the ort_supplement module.
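In context, this call usually sits inside an ordinary epoch/batch loop. The sketch below is a hypothetical outer loop around the helpers already shown; train_dataloader and the args fields (num_train_epochs, max_steps) are assumptions standing in for whatever your script provides.

# Hypothetical training loop around the ort_supplement helpers above.
# train_dataloader, args.num_train_epochs, and args.max_steps are assumed
# to come from the surrounding script.
global_step = 0
training_steps = 0
for epoch in range(args.num_train_epochs):
    for batch in train_dataloader:
        training_steps += 1
        # Forward, backward, and the optimizer update all run inside the
        # ONNX Runtime training session.
        loss, global_step = ort_supplement.run_ort_training_step(
            args, global_step, training_steps, model, batch)
        if global_step >= args.max_steps:
            break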

Step 4: Export Trained ONNX Model

 model.save_as_onnx(out_path)
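The saved file is a standard ONNX graph, so it can be loaded back with an ordinary ONNX Runtime inference session to sanity-check the export. The snippet below is a sketch: the dummy input shape and dtype are placeholders, not values taken from the article, so adjust them to your model.

# Load the exported model for inference and run a quick smoke test.
# The dummy input shape/dtype below are placeholders; adjust to your model.
import numpy as np
import onnxruntime

session = onnxruntime.InferenceSession(out_path, providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
dummy_input = np.zeros((1, 128), dtype=np.int64)  # placeholder shape
outputs = session.run(None, {input_name: dummy_input})
print("Output shapes:", [o.shape for o in outputs])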

Conclusion

With four small additions to an existing training script (environment setup, trainer creation, the per-batch training step, and the final ONNX export), the ONNX Runtime can take over distributed training, which Microsoft reports can make training up to 45% faster for some models.

Next Steps

Accelerate your NLP pipelines using Hugging Face Transformers and ONNX Runtime

ONNX and Azure Machine Learning: Create and accelerate ML models

Evaluating Deep Learning Models in 10 Different Languages (With Examples)


About the Author

ODSC Community

The Open Data Science community is passionate and diverse, and we always welcome contributions from data science professionals! All of the articles under this profile are from our community, with individual authors mentioned in the text itself.
