
NVIDIA drives AI revolution, an ultimate challenge to GPU computing

Sponsored content

In his keynote speech, "The Revolution of Deep Learning and Artificial Intelligence," delivered at NVIDIA GTC in Taipei, Jen-Hsun Huang, Co-founder & CEO, NVIDIA, elaborated on the surging trend and rapid advancement of AI technologies in recent years. As a supplier of the computing technologies fueling the trend, NVIDIA has built a forward-looking roadmap with comprehensive solutions, a partnership network and an industry ecosystem to accelerate the adoption of AI technologies and their commercial applications. As Huang emphasized, "This will be the next computer revolution and also the new industrial revolution." We can expect to see a very different world in the future.

The dawn of the new era for AI

Alex Krizhevsky first used two NVIDIA GPUs to train deep learning software in 2012, cutting visual recognition training from months to just days. In 2015, with R&D investments from industry leaders including Google and Microsoft, it was confirmed that GPU-based deep learning could achieve image and voice recognition accuracy surpassing human capability. AI has started taking various industries by storm, signaling the coming of a whole new computing era.

Asked why AI researchers are adopting GPUs for deep learning, Huang pointed out, "When people think, the brain creates a mental image. Working similarly to the human brain, a GPU's thousands of micro-processors can work in parallel connections like synapses and neurons to solve large-scale and complex problems."
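
As an illustration of why neural networks map so naturally onto this kind of parallel hardware, note that a single network layer boils down to a large matrix multiplication in which every output element can be computed independently. The sketch below, in plain Python with NumPy rather than anything from NVIDIA's stack, contrasts the element-by-element computation a serial processor would perform with the single bulk operation that thousands of GPU cores can execute simultaneously.

    import numpy as np

    # One fully connected layer: 256 inputs -> 128 outputs, batch of 64 samples.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((64, 256))   # input activations
    w = rng.standard_normal((256, 128))  # layer weights

    # Serial view: each output element is an independent dot product,
    # so a single processor core would walk through them one at a time.
    y_serial = np.empty((64, 128))
    for i in range(64):
        for j in range(128):
            y_serial[i, j] = np.dot(x[i, :], w[:, j])

    # Parallel view: the same work expressed as one matrix multiplication.
    # On a GPU, thousands of cores compute these dot products simultaneously,
    # which is why deep learning workloads suit the architecture so well.
    y_parallel = x @ w

    assert np.allclose(y_serial, y_parallel)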

At the start of the AI era, the computing industry is being reshaped as the GPU speeds up machine thinking so that computers have the ability to learn. This has also enabled the GPU to go beyond graphics applications. Huang commented, "The GPU computing we have created is at the center of the most exciting advances in computing today. Therefore, NVIDIA is becoming known as the AI computing company."

Three aspects of AI development: training, datacenter inferencing and device inferencing

Deep learning software designs itself so that computers will learn from experience and become smarter.

Huang indicated that deep learning will create a whole new computing model and discussed its practical applications from three aspects: training, datacenter inferencing and device inferencing.

First of all, future software engineers will create and train neural network models, with the GPU executing trillions of computations to produce more sophisticated and accurate models. Inferencing, on the other hand, is searching. In terms of cloud services, neural networks will operate in datacenters, receiving billions of search requests for images, audio and video clips. The GPU, deployed in datacenters, will be in charge of responding to these requests through inferencing.
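
To make the distinction between the two phases concrete, the minimal sketch below (plain Python with NumPy, not drawn from NVIDIA's materials) trains a tiny logistic-regression model by gradient descent, the compute-heavy phase described above, and then uses the frozen weights to answer individual requests, which is essentially what inferencing does.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy training set: 2 features, binary labels (label = 1 when x0 + x1 > 0).
    X = rng.standard_normal((1000, 2))
    y = (X.sum(axis=1) > 0).astype(float)

    # --- Training: many passes over the data, repeatedly adjusting the weights ---
    w = np.zeros(2)
    b = 0.0
    lr = 0.1
    for _ in range(500):                         # trillions of such steps at real scale
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # current predictions
        grad_w = X.T @ (p - y) / len(y)          # gradient of the loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b

    # --- Inferencing: weights are now fixed; each request is a single forward pass ---
    def infer(request):
        return 1.0 / (1.0 + np.exp(-(request @ w + b)))

    print(infer(np.array([0.5, 0.7])))    # high probability, as expected
    print(infer(np.array([-1.0, -2.0])))  # low probability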

Device inferencing enables AI in a variety of end devices. End devices will be able to infer and learn from experience, which enables them to perform tasks better and become smart devices with sensing, understanding and learning capabilities. What's more, the intelligence gained at the end devices can be passed back to the cloud for datacenters to learn from, together forming a complete AI deep learning loop.
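
A minimal sketch of that loop is shown below; the helper names collect_sample and upload_for_retraining are hypothetical placeholders rather than any NVIDIA API. The device runs inference with frozen weights, keeps the inputs it was least confident about, and ships them back to the datacenter so the next training round can learn from them.

    import numpy as np

    rng = np.random.default_rng(2)
    w, b = np.array([1.2, 0.8]), 0.05  # weights trained in the datacenter, copied to the device

    def infer(sample):
        """On-device forward pass with frozen weights."""
        return 1.0 / (1.0 + np.exp(-(sample @ w + b)))

    def collect_sample():
        """Hypothetical stand-in for a sensor reading (camera frame, audio clip, ...)."""
        return rng.standard_normal(2)

    def upload_for_retraining(batch):
        """Hypothetical stand-in for sending data back to the datacenter."""
        print(f"uploading {len(batch)} low-confidence samples for the next training round")

    # Device loop: act locally, remember what the model was unsure about.
    uncertain = []
    for _ in range(1000):
        sample = collect_sample()
        confidence = infer(sample)
        if 0.4 < confidence < 0.6:            # model was unsure about this input
            uncertain.append(sample)
        if len(uncertain) >= 32:
            upload_for_retraining(uncertain)  # closes the device-to-datacenter loop
            uncertain = []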

Building end-to-end solutions enabling far-reaching AI computing

NVIDIA has built a complete technological roadmap targeting the three aspects of application. Its latest Pascal architecture is the first GPU designed specifically for deep learning. Its outstanding computing power answers the most demanding training tasks in deep learning, and training performance has been boosted 65-fold in four years.

With Internet companies and AI startups beginning to deploy deep neural networks for their new services and applications, demand for datacenter inferencing is emerging. To respond to these inferencing requirements, NVIDIA has launched the Tesla P4 and P40 accelerators and the TensorRT software to accelerate AI inferencing for workloads such as video.

"The CPU is based on traditional computation architecture, not tailored for AI. Tesla P4 and P40 are superior to the most advanced CPU in terms of both performance and power efficiency and therefore are capable of handling the massive inferencing demands on the Internet in the future," said Marc Hamilton, VP, Solutions Architecture and Engineering at NVIDIA.

Certainly, amid the rising trend of AI computation, the GPU faces challenges from different processor architectures including the CPU, FPGA and ASIC. Hamilton noted that the GPU outperforms the competition in terms of scalability, computing performance and power efficiency. More importantly, in recognition of the critical role software plays, NVIDIA has crafted a complete platform that includes training programs, middleware and software development kits (SDKs) to help expedite AI development and deployment.
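
One technique commonly used by inference accelerators and inference software in general (the code below is a generic illustration, not NVIDIA's implementation) is reduced-precision arithmetic: once a network is trained, its weights can be stored and multiplied as 8-bit integers instead of 32-bit floats, trading a small amount of accuracy for much higher throughput and lower power. A minimal sketch of symmetric INT8 weight quantization:

    import numpy as np

    rng = np.random.default_rng(3)
    w_fp32 = rng.standard_normal((256, 128)).astype(np.float32)  # trained FP32 weights

    # Symmetric INT8 quantization: map the weight range onto [-127, 127].
    scale = np.abs(w_fp32).max() / 127.0
    w_int8 = np.round(w_fp32 / scale).astype(np.int8)

    # At inference time the accelerator multiplies in INT8 and rescales once.
    x = rng.standard_normal((1, 256)).astype(np.float32)
    y_int8 = (x @ w_int8.astype(np.float32)) * scale  # integer math emulated in float here
    y_fp32 = x @ w_fp32

    # The quantized result stays close to the full-precision one.
    rel_error = np.abs(y_int8 - y_fp32).max() / np.abs(y_fp32).max()
    print(f"max relative error after INT8 quantization: {rel_error:.4f}")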

For embedded device inferencing, NVIDIA has introduced the Jetson TX1 supercomputer-on-module, with power consumption as low as 10W, and the JetPack SDK, equipping billions of autonomous devices with smart sensing and inferencing capabilities and driving revolutionary advances in drones, robotics and other autonomous devices.

Huang said, "We are witnessing a rapid growth of the entire industry ecosystem. Almost all leading Internet service providers are adopting NVIDIA's technologies to train their datacenters for providing better Internet services such as searching, recognition and translation. We are set to build an end-to-end deep learning computing platform based on a uniform hardware architecture supporting software compatibility to greatly simplify development and deployment by vendors."

Opportunities that Taiwan's high-tech industry cannot miss

In addition to AI, the latest VR and AR technologies can also leverage the GPU's superior computing power. The key lies in how to integrate actions in the physical world with the virtual environment.

Victoria Rege, Manager, Global VR Alliances & Ecosystem Development at NVIDIA, stated, "The GPU can perform dynamic computations of physical properties. For example, ray tracing technologies can give objects a realistic feel. We are also actively developing SDKs to help developers create VR content and accelerate the process. VR applications are now widely used not only in gaming but also in business operations including automobile design, architectural design and retail demonstrations. We hope to strengthen collaboration with Taiwan-based vendors to jointly push forward the development of VR technology."
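
As a rough illustration of the kind of computation ray tracing involves (a generic sketch in plain Python, not NVIDIA's renderer), each pixel casts a ray into the scene and tests it against the geometry, and a GPU can evaluate millions of such independent rays in parallel. Below, a single ray is intersected with a sphere:

    import numpy as np

    def ray_sphere_hit(origin, direction, center, radius):
        """Return the distance along the ray to the nearest sphere hit, or None."""
        oc = origin - center
        b = 2.0 * np.dot(direction, oc)
        c = np.dot(oc, oc) - radius ** 2
        disc = b * b - 4.0 * c           # direction is unit length, so a = 1
        if disc < 0:
            return None                  # ray misses the sphere
        t = (-b - np.sqrt(disc)) / 2.0
        return t if t > 0 else None

    # One ray from the camera toward a sphere sitting 5 units down the z-axis.
    origin = np.array([0.0, 0.0, 0.0])
    direction = np.array([0.0, 0.0, 1.0])          # already unit length
    hit = ray_sphere_hit(origin, direction, center=np.array([0.0, 0.0, 5.0]), radius=1.0)
    print(f"nearest hit at distance {hit}")        # expected: 4.0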

The rise of AI and VR represents an important technological trend and calls for more innovation and R&D effort. The number of NVIDIA developers has tripled, reaching a total of 400,000, Huang indicated. The number of AI developers adopting NVIDIA GPUs as the deep learning R&D standard has grown 25-fold in two years. Application areas are wide-ranging, encompassing the automotive, government, Internet and healthcare sectors. AI has moved beyond being a lab concept to become a truly commercialized technology. To broaden AI applications, NVIDIA is working with world-leading online learning organizations including Coursera and Udacity to bring AI within reach of more people.

NVIDIA has announced a collaboration project with National Taiwan University to establish an AI laboratory for strengthening and driving Taiwan's AI R&D. NVIDIA has also initiated Taiwan's first self-driving car research program with ITRI. Furthermore, at this year's GTC, Huang introduced four Taiwan-based AI startups, V5 Technologies, Viscovery, Umbo CV and SkyREC, and he credited Taiwan's high-tech industry, and in particular TSMC and Quanta Computer, for their contribution to bringing NVIDIA's technology to reality.

Huang highlighted, "This lays the foundation of the future industrial revolution and is an important trend not to be taken lightly. Taiwan should devote more resources to basic research in this field. With its existing strength, now is the ideal time for Taiwan to expand into AI."


DIGITIMES' editorial team was not involved in the creation or production of this content. Companies looking to contribute commercial news or press releases are welcome to contact us.