Deep Learning is the most widely applied area of Artificial Intelligence, and the majority of current research and applications build on its concepts. Despite such broad and pressing demand, there are still few dedicated compilers for it. Intel has released its own Deep Learning compiler, nGraph. It lifts a large share of the engineering burden from AI developers and data scientists by decluttering their workflow. Intel's goal is to raise the productivity of people working in this area, thereby advancing AI research.
nGraph frameworks and ecosystem explained
Image source: https://ngraph.nervanasys.com/index.html
The nGraph compiler supports many of the frameworks used in Deep Learning. Neural networks demand a great deal of processing power, and nGraph handles multiple workloads and compiles code for different processor architectures. Intel has open sourced it, so it can serve as a universal compiler.
At the front end, the nGraph compiler supports a wide variety of libraries and frameworks. It works directly with the deep learning frameworks MXNet and TensorFlow. It also supports ONNX, an open model-exchange format backed by Facebook and Microsoft; through ONNX, models from PyTorch, CNTK and others can be run as well. In addition, it supports high-level frameworks such as neon, the API Intel developed to work with nGraph.
On the back end, nGraph targets a range of processors. Given the heavy compute demands of deep learning, support for high-end hardware is essential. Intel has its own Nervana Neural Network Processor, designed specifically for neural networks; Nvidia GPUs and Intel Xeon CPUs are among the other targets supported. Support also extends to field-programmable gate arrays (FPGAs) and the Movidius Vision Processing Unit (VPU), and may extend to application-specific integrated circuits (ASICs) as well. By bringing all of these targets together, the compiler reduces the number of frameworks and APIs a developer has to manage, simplifying the workflow and sparing developers many design-time decisions.
An explanation of nGraph’s working
Image source: https://www.datanami.com/2018/04/26/inside-intels-ngraph-a-universal-deep-learning-compiler
Arjun Bansal, vice president of Intel’s AI Lab, says nGraph helps developers as well: it reduces the complexity of the design work and framework management that projects require. As the technology advances, new tools appear all the time, and keeping up with them increases the workload for engineers and researchers enormously.
nGraph works by understanding and optimizing the computational graph. It searches the graph for patterns it can optimize for the framework and target in use. The principle is to take the different framework graphs and lower them all to a single kernel-level representation. Doing this by hand in a project is tedious and complex, so nGraph takes care of it by compiling everything together; emitting one set of kernel-level code streamlines the process. In TensorFlow, Theano and several others, for example, developers need to understand the framework, its graph and its tensor shapes. Like any good automation layer, nGraph removes that need, saving developers the trouble and reducing the probability of errors in the process.
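The pattern-searching idea can be illustrated with a toy sketch (plain Python, not nGraph's actual API or IR): a pass that spots a multiply feeding an add and fuses the pair into one "muladd" node, the way a real compiler fuses operations before emitting kernel-level code.

```python
# Toy computational graph: each node maps a name to (op, inputs).
# A real compiler like nGraph works on a far richer IR; this
# sketch only illustrates pattern-based operator fusion.

def fuse_mul_add(graph):
    """Replace mul(x, y) feeding add(., z) with a single
    fused 'muladd' node, mimicking kernel fusion."""
    fused = {}
    for name, (op, inputs) in graph.items():
        if op == "add":
            a, b = inputs
            # If the first input is a mul node, collapse
            # both operations into one fused op.
            if a in graph and graph[a][0] == "mul":
                x, y = graph[a][1]
                fused[name] = ("muladd", [x, y, b])
                continue
        fused[name] = (op, inputs)
    # Drop mul nodes that are no longer referenced anywhere.
    used = {i for _, ins in fused.values() for i in ins}
    return {n: v for n, v in fused.items()
            if v[0] != "mul" or n in used}

graph = {
    "t1": ("mul", ["x", "y"]),
    "out": ("add", ["t1", "z"]),
}
print(fuse_mul_add(graph))  # {'out': ('muladd', ['x', 'y', 'z'])}
```

Two graph nodes become one fused node, so the back end can emit a single kernel instead of two, which is the essence of what graph-level optimization buys.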
nGraph is an easy-to-use compiler that takes care of essential tasks such as naming and graph construction. The nGraph API lets users name defined axes, associate them with tensors, and categorize them by name rather than by position alone. A particular axis can therefore be named for the problem and task at hand, which helps both the user and the program understand the data before it is transformed into a graph. Each framework the compiler supports also has its own framework bridge.
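The convenience of named axes can be sketched in plain Python (the classes and names below are illustrative, not nGraph's actual API): an axis carries a name, so a tensor's dimensions can be looked up by meaning rather than by position.

```python
# Illustrative sketch of named axes; nGraph's real API differs.

class Axis:
    """A tensor dimension identified by a human-readable name."""
    def __init__(self, name, length):
        self.name, self.length = name, length

class NamedTensor:
    """A tensor shape whose dimensions are addressed by axis
    name, not only by their position in the shape tuple."""
    def __init__(self, axes):
        self.axes = list(axes)

    def length_of(self, name):
        # Find an axis by name, independent of axis order.
        for ax in self.axes:
            if ax.name == name:
                return ax.length
        raise KeyError(name)

batch = Axis("batch", 32)
features = Axis("features", 128)
t = NamedTensor([batch, features])
print(t.length_of("features"))  # 128, regardless of axis order
```

Because lookup is by name, code that asks for the "features" axis keeps working even if the underlying layout reorders the dimensions, which is the point of naming axes for the task at hand.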
Optimization and learning:
The best part of the nGraph compiler is its usability for developers: it delivers strong performance benchmarks on complex algorithms. nGraph handles reinforcement learning, convolutional networks and recurrent neural networks without a hiccup, while still accommodating the changes a developer needs to make. Besides reducing complexity and raising performance, it allows performance tweaking, making it an all-rounder for optimizing algorithms. On top of this, supported frameworks attach to and work with the compiler seamlessly. The compiler streamlines the process as much as possible without taking away the user's control.
Image source: https://ai.intel.com/high-performance-tensorflow-on-intel-xeon-using-ngraph/
According to the head of the Intel AI lab behind nGraph, the compiler is expected to reach at least 80% of the performance of hand-written code; it has exceeded that figure in testing, but 80% remains the stated floor. AI programs and algorithms need powerful computational resources, and nGraph's optimizations mean developers no longer have to spend engineering effort hand-tuning code wherever possible. Rather than employ engineers to handle such a task, why not have a program take care of it?
Intel has spent a great deal of effort developing nGraph, and Nervana Systems had carried out extensive research of its own before Intel acquired it in 2016. Despite this investment, Intel is open sourcing the code to improve adoption rates: the more widely it is adopted, the more it advances research and development.
Companies do not usually open source their products, but given the nature of the Artificial Intelligence field, it is much needed, and many companies and organizations have open sourced deep learning products and libraries. Deep learning is the area of Artificial Intelligence that demands the most effort, since the usability and application of other frameworks depend on deep learning research, and the whole point of recent AI research is to automate as much as possible. To learn more, reach out to our Deep Learning Kolkata centers. For people interested in other fields, our Artificial Intelligence Kolkata centers can help.