There are tons of options in the Machine Learning world. You may have noticed a large number of frameworks, libraries, and formats that are floating around.

Machine Learning is one of the hottest topics of the decade, and the tooling keeps evolving as we ride this train. The two main libraries for building machine learning models have been PyTorch and TensorFlow, and most recent research papers use one of the two.

How many times have you found a pre-built model that was written with a library you didn’t know? Should you learn that library just for this one model? Is this where the industry is heading? You just want to run some code, and here you are, stuck trying to fit a square peg into a round hole.

ONNX solves that problem by letting you build your model wherever you want. A model exported to ONNX can then run anywhere, on the hardware technology of your choosing.

Here’s a scenario. Your data scientists build a model using the library they are most comfortable with (TensorFlow, PyTorch, Caffe, etc.) and output an ONNX model. Your developers can then pick that model up without knowing the framework it came from. When your Machine Learning gurus change frameworks to follow the industry, your developers don’t have to.

Until now, this focus was mostly on languages and platforms such as Python, C++, C#, C, Java, and WinRT. Today at BUILD 2020, Microsoft announced a model inferencing preview for Node 🤯.

Here’s what some simple code would look like.

// import the runtime
const ort = require('onnxruntime');

// load the model
// (the awaits below must run inside an async function,
// since top-level await isn't available in CommonJS)
const session = await ort.InferenceSession.create('./model.onnx');

// prepare inputs: a tensor needs its corresponding TypedArray as data
const dataA = Float32Array.from([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]);
const dataB = Float32Array.from([10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120]);
const tensorA = new ort.Tensor('float32', dataA, [3, 4]);
const tensorB = new ort.Tensor('float32', dataB, [4, 3]);

// prepare feeds, using the model's input names as keys
const feeds = { a: tensorA, b: tensorB };

// feed the inputs and run
const results = await session.run(feeds);

// read from the results
const dataC = results.c.data;
console.log(`data of result tensor 'c': ${dataC}`);
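What values should land in `c`? The model file itself isn’t shown here, but given the input names and the 3×4 / 4×3 shapes, it is presumably a small matrix-multiply demo (an assumption on my part). A few lines of plain JavaScript can compute the same product, which is handy for sanity-checking the runtime’s output:

```javascript
// Plain-JS matrix multiply over flat Float32Arrays, mirroring the
// row-major [rows, cols] layout ONNX Runtime tensors use.
// NOTE: this assumes model.onnx computes c = a x b, which the
// post does not state explicitly.
function matmul(a, b, m, k, n) {
  const out = new Float32Array(m * n);
  for (let i = 0; i < m; i++) {
    for (let j = 0; j < n; j++) {
      let sum = 0;
      for (let p = 0; p < k; p++) {
        sum += a[i * k + p] * b[p * n + j];
      }
      out[i * n + j] = sum;
    }
  }
  return out;
}

const dataA = Float32Array.from([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]);
const dataB = Float32Array.from([10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120]);

// Multiply the 3x4 matrix by the 4x3 matrix: the result has shape [3, 3].
const expected = matmul(dataA, dataB, 3, 4, 3);
console.log(expected);
// -> [700, 800, 900, 1580, 1840, 2100, 2460, 2880, 3300]
```

If the runtime returns the same nine values, your model, shapes, and feeds are all wired up correctly.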

That’s it! It is that easy!

With ONNX Runtime coming to the Node ecosystem, I can’t wait to build my next project with ONNX.

What are you going to build with ONNX?

About the author:

Francesca Lazzeri, PhD is an experienced scientist and machine learning practitioner with over 10 years of both academic and industry experience. She is author of a number of publications, including technology journals, conferences, and books. She currently leads an international team of cloud advocates, developers, and data scientists at Microsoft. Before joining Microsoft, she was a research fellow at Harvard University in the Technology and Operations Management Unit.

Related from Francesca: Training and Operationalizing Interpretable Machine Learning Models

ODSC Community

The Open Data Science community is passionate and diverse, and we always welcome contributions from data science professionals! All of the articles under this profile are from our community, with individual authors mentioned in the text itself.
