
eGuide: Exploring the path from AI theory to business value

White paper published by: Intel
Published: Nov 14, 2019

Infrastructure considerations for IT leaders

By 2020, deep learning will have reached a fundamentally different stage of maturity. Deployment and adoption will move beyond experimentation to become a core part of day-to-day business operations across most industries and fields of research.

Why? Because advancements in the speed and accuracy of the hardware and software that underpin deep learning workloads have made them both viable and cost-effective. Much of this added value will be generated by deep learning inference – that is, using a trained model to infer something about data it has never seen before. Models can be deployed in the cloud or data center, but increasingly they will run on end devices such as cameras and phones.
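To make the training/inference distinction concrete, here is a minimal sketch using a toy nearest-neighbour classifier in plain Python. All names and data are illustrative, not drawn from any Intel toolkit; the point is only the shape of the workflow: a training phase that builds the model, and an inference phase that applies it to unseen inputs.

```python
from math import dist

# "Training" phase: memorise labelled examples (feature vector -> label).
# A real deployment would instead load a trained deep learning model.
training_data = [
    ((0.0, 0.0), "cat"),
    ((1.0, 1.0), "cat"),
    ((8.0, 9.0), "dog"),
    ((9.0, 8.0), "dog"),
]

def infer(sample):
    """Inference phase: predict a label for a sample the model has never
    seen, by returning the label of the nearest memorised example."""
    nearest = min(training_data, key=lambda pair: dist(pair[0], sample))
    return nearest[1]

# An unseen point near the "dog" cluster:
print(infer((8.5, 8.5)))  # → dog
```

Whether `infer` runs in the cloud, in the data center, or on an edge device, the pattern is the same: training happens once (or occasionally), while inference runs on every new input – which is why inference cycles come to dominate at scale.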

Intel predicts that there will be a shift in the ratio between cycles of inference and training from 1:1 in the early days of deep learning, to well over 5:1 by 2020¹. Intel calls this the shift to ‘inference at scale’ and, with inference also taking up almost 80 percent of artificial intelligence (AI) workflows (Figure 1, Page 3), it follows that the path to true AI readiness starts with selecting hardware architectures that are well-suited to this task.

However, because the AI space is becoming increasingly complex, no one-size-fits-all solution can address the unique constraints of each environment across the AI spectrum.

In this context, critical hardware considerations include availability, ease of use, and operational expense. What type of infrastructure do you use for your edge devices, workstations or servers today? Do you want to deal with the complexities of multiple architectures?

Exploring these challenges is the subject of this paper.
