James Moawad - Technical Solution Specialist - Intel
Title:
Abstract: We propose to show a Deep Learning Inference toolkit (OpenVINO), which provides a common API for inference independent of the underlying compute hardware. The inference engine can run on CPU or be accelerated with GPU, VPU, or FPGA. We will further look into the details of an OpenCL-based Deep Learning Accelerator running on FPGA and how it is integrated into the software flow. We will conclude with a brief discussion of how the oneAPI unified programming model could be used for future development of such hardware-agnostic accelerators.
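The "common API independent of the underlying compute hardware" idea can be sketched in a few lines. This is an illustration only, not OpenVINO's actual code: the class and backend functions below are hypothetical stand-ins for the pattern OpenVINO follows, where the application selects a device by name (e.g. "CPU", "GPU", "FPGA") and the inference call itself is unchanged.

```python
# Toy sketch of a device-agnostic inference API (hypothetical names,
# in the spirit of selecting a device by string at load time).

def _infer_cpu(x):
    # Placeholder CPU path: stands in for real inference math.
    return [v * 2 for v in x]

def _infer_accel(x):
    # Placeholder accelerated path: a GPU/VPU/FPGA plugin would slot
    # in here; it must produce the same results as the CPU path.
    return [v * 2 for v in x]

# One registry maps device names to backends; adding a new accelerator
# means registering a plugin, not changing application code.
_BACKENDS = {
    "CPU": _infer_cpu,
    "GPU": _infer_accel,
    "VPU": _infer_accel,
    "FPGA": _infer_accel,
}

class InferenceEngine:
    """Common inference API: the caller only picks a device string."""

    def __init__(self, device="CPU"):
        if device not in _BACKENDS:
            raise ValueError(f"unknown device: {device}")
        self._run = _BACKENDS[device]

    def infer(self, inputs):
        return self._run(inputs)
```

Usage: `InferenceEngine("FPGA").infer([1, 2, 3])` returns the same result as `InferenceEngine("CPU").infer([1, 2, 3])`, which is the point of the design: the accelerator choice is a deployment decision, not a code change.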
Keywords: Inference, ML toolkit, CPU, GPU, VPU, FPGA
Bio: James Moawad is a Technical Solution Specialist with Intel’s Programmable Solutions Group specializing in compute acceleration using Field Programmable Gate Arrays (FPGAs). He holds a B.S. in Electrical Engineering from the University of Illinois at Urbana-Champaign and an M.S. in Electrical and Computer Engineering from the Georgia Institute of Technology with a focus on processor architecture. He designed telecommunication systems at Bell Laboratories / Lucent Technologies from 1999 to 2006 utilizing FPGAs and multi-processor arrays. Since 2006, he has worked as a Field Application Engineer helping customers architect systems with FPGAs, embedded processors, DSPs, and various memory solutions including DRAM, solid state drives, and high bandwidth memory (HBM).