Amazon Web Services has become the latest tech firm to join the deep learning community's collaboration on the Open Neural Network Exchange, which was launched recently to advance artificial intelligence in a frictionless, interoperable environment. Facebook and Microsoft led the effort.
As part of that collaboration, AWS released ONNX-MXNet, an open source Python package for importing ONNX models into Apache MXNet, a deep learning framework that offers application programming interfaces across multiple languages, including Python, Scala and the open source statistics software R.
The ONNX format will help developers build and train models in other frameworks, including PyTorch, Microsoft Cognitive Toolkit and Caffe2, AWS Deep Learning Engineering Manager Hagay Lupesko and Software Developer Roshani Nagmote wrote in an online post last week. It will let developers import those models into MXNet and run them for inference.
Facebook and Microsoft this summer launched ONNX to support a shared model of interoperability for the advancement of AI. Microsoft committed its Cognitive Toolkit to support ONNX, and Facebook committed its Caffe2 and PyTorch frameworks.
Cognitive Toolkit and other frameworks make it easier for developers to construct and run computational graphs that represent neural networks, Microsoft said.
Initial versions of ONNX code and documentation were made available on GitHub.
AWS and Microsoft last month announced plans for Gluon, a new interface in Apache MXNet that allows developers to build and train deep learning models.
“Google’s omission from this is quite telling but also speaks to their dominance in the market,” Tractica’s Kaul told LinuxInsider.
“Even TensorFlow is open source, and so open source is not the big catch here — but the rest of the ecosystem teaming up to compete with Google is what this boils down to,” Kaul said.
The Apache MXNet community earlier this month introduced version 0.12 of MXNet, which extends Gluon functionality to allow for new, cutting-edge research, according to AWS. Among its new features is variational dropout, which lets developers apply dropout, a technique for mitigating overfitting, to recurrent neural networks.
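To illustrate the idea (this is a conceptual sketch in plain Python, not MXNet's actual API): standard dropout samples a fresh random mask at every time step, which disrupts a recurrent network's memory, while variational dropout samples one mask per sequence and reuses it at every step.

```python
import random

def dropout_mask(size, p, rng):
    """Sample a keep-mask with drop probability p, scaled by 1/(1-p)
    so the expected activation is unchanged."""
    return [(1.0 / (1.0 - p)) if rng.random() >= p else 0.0
            for _ in range(size)]

def variational_dropout(sequence, p, seed=0):
    """Apply the SAME dropout mask at every time step of a sequence,
    unlike standard dropout, which would resample the mask per step."""
    rng = random.Random(seed)
    mask = dropout_mask(len(sequence[0]), p, rng)  # one mask per sequence
    return [[x * m for x, m in zip(step, mask)] for step in sequence]

seq = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]  # two time steps, three features
out = variational_dropout(seq, p=0.5, seed=42)
# The units zeroed at step 0 are exactly the units zeroed at step 1.
```

Because the same units are dropped for the whole sequence, the recurrent state sees a consistent sub-network, which is what makes dropout viable for RNNs.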
Convolutional RNN, Long Short-Term Memory (LSTM) and gated recurrent unit (GRU) cells allow datasets to be modeled along both time-based sequences and spatial dimensions, AWS noted.
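To show how a gated cell carries state across a time-based sequence, here is a toy, scalar GRU step in plain Python; the weights are arbitrary illustrative values, not trained parameters, and real cells operate on vectors and matrices.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, w):
    """One GRU step for scalar input x and scalar hidden state h.
    w is a dict of arbitrary, untrained scalar weights."""
    z = sigmoid(w["wz"] * x + w["uz"] * h)               # update gate
    r = sigmoid(w["wr"] * x + w["ur"] * h)               # reset gate
    h_cand = math.tanh(w["wh"] * x + w["uh"] * (r * h))  # candidate state
    return (1.0 - z) * h + z * h_cand  # blend old state with candidate

weights = {"wz": 0.5, "uz": -0.3, "wr": 0.8, "ur": 0.2, "wh": 1.0, "uh": 0.7}
h = 0.0
for x in [1.0, -0.5, 0.25]:  # a short time-based input sequence
    h = gru_step(x, h, weights)
```

The gates decide, per step, how much of the previous state to keep and how much of the new candidate to blend in; because the candidate passes through tanh, the hidden state stays bounded in (-1, 1).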
“This looks like a great way to deliver inference regardless of which framework generated a model,” said Paul Teich, principal analyst at Tirias Research.
“This is basically a framework-neutral way to deliver inference,” he told LinuxInsider.
Cloud providers like AWS, Microsoft and others are under pressure from customers who want to train a model on one framework while delivering it on another, in order to advance AI, Teich pointed out.
“I see this as kind of a baseline way for these vendors to check the interoperability box,” he remarked.
“Framework interoperability is a good thing, and this will only help developers in making sure that models that they build on MXNet or Caffe or CNTK are interoperable,” Tractica’s Kaul pointed out.
As to how this interoperability might apply in the real world, Teich pointed to technologies such as natural language translation and speech recognition: Alexa's voice recognition technology, for example, could be packaged and delivered to another developer's embedded environment.
Thanks, Open Source
“Despite their competitive differences, these companies all recognize they owe a significant amount of their success to the software development advancements generated by the open source movement,” said Jeff Kaplan, managing director of ThinkStrategies.
“The Open Neural Network Exchange is committed to producing similar benefits and innovations in AI,” he told LinuxInsider.
A growing number of major technology companies have announced plans to use open source to speed collaboration on AI development, in order to create more uniform platforms for development and research.