But it works on my machine?! ONNX 101
PyData Skopje Chapter
After a long pause, the chapter is finally back. If you are based nearby, or would like to follow the events remotely, join the official Meetup Group.
It was an honor to help reignite the community spirit by speaking at the first event after the hiatus. Exploring interoperability and operationalizing ML models with ONNX was truly rewarding, and digging into the deployment capabilities of ONNX Runtime made the evening all the more fun. A heartfelt thanks to everyone who joined the discussion, and here's to many more insightful conversations on the horizon!
Synopsis of my talk
Challenges: Taming the Complexity
First off, let's acknowledge the challenges we face in the ever-expanding universe of deep learning. Multiple frameworks like TensorFlow and PyTorch, combined with a variety of hardware accelerators (think T4 GPUs and VPUs), make the landscape intricate. This is where ONNX steps in. Initiated by Microsoft and Facebook (now Meta), and soon joined by AWS, it acts as a unifying force: an abstraction layer over frameworks and hardware that reduces the complexity of integrating these different components.
Design Principles: Flexibility and Standardization
ONNX isn't just another acronym; it's a set of design principles that supports both deep learning and traditional machine learning. These principles aim to be flexible enough to keep up with the rapid advances in AI while providing a standardized, cross-platform representation for serialization. Think of it as what the Common Language Runtime is to programming languages: a shared representation that minimizes the number of moving parts in the deep learning landscape.
Understanding the ONNX Specification
So, what's under the hood of ONNX? The ONNX specification defines a structured format in which each computational graph is a directed acyclic graph. Nodes represent operator invocations, the graph declares its own inputs and outputs, and metadata documents crucial details about the model and its production environment. The supported data types, tensors in core ONNX plus non-tensor types such as sequences and maps in ONNX-ML, showcase the format's versatility.
Operators: The Essence of ONNX
Now, let's talk about operators – the heart and soul of ONNX. These are defined by name, domain, and version, and they encapsulate the essence of various operations. Take, for instance, the Relu and Abs operators, which I'll be diving into during the presentation. These examples demonstrate input-output relationships and type constraints, showcasing the elegance and power of ONNX.
Practical Demos: Bridging Theory and Application
Enough theory; let's get hands-on! In Demo 1, we'll walk through defining a PyTorch model, training it, exporting it to ONNX, and visualizing the resulting graph. The practical application comes to life as we tackle a real-world problem – training a classifier for the MNIST dataset using a convolutional neural network in PyTorch.
ONNX Runtime: Bridging the Gap
What about deployment? Enter ONNX Runtime, the bridge between the ONNX file format and executing the graph on different hardware. Microsoft takes the lead here, maintaining ONNX Runtime as a project distinct from the ONNX specification itself, which is governed by the LF AI & Data Foundation under the Linux Foundation.
Interoperability: ONNX in Action
But ONNX isn't limited to theoretical discussions. We'll explore interoperability, touching on the ONNX Model Zoo, Azure Cognitive Services, and methods to convert existing models or train from scratch. Practical tools like Netron and VisualDL will be your companions in understanding and visualizing ONNX models.
Demo 2: ONNX in the Real World
Join the ONNX Adventure
I'm beyond excited to share this ONNX adventure with you. Whether you're a seasoned deep learning enthusiast or just dipping your toes into the data science world, this journey promises to offer insights, practical knowledge, and a newfound appreciation for the unifying force that ONNX brings to the table.