Poster
Cortex: A Compiler for Recursive Deep Learning Models
Pratik Fegade · Tianqi Chen · Phillip Gibbons · Todd Mowry

Tue Apr 06 05:00 PM (PDT) @ Virtual

Optimizing deep learning models is generally performed in two steps: (i) high-level graph optimizations such as kernel fusion, and (ii) low-level kernel optimizations such as those found in vendor libraries. This approach often leaves significant performance on the table, especially for recursive deep learning models. In this paper, we present Cortex, a compiler-based approach to generate highly efficient code for recursive models for low-latency inference. Our compiler approach and low reliance on vendor libraries enable us to perform end-to-end optimizations, leading to up to 14X lower inference latencies over past work, across different backends.
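To make the problem setting concrete, below is a minimal sketch (not code from the paper) of the kind of recursive model the abstract refers to: a toy tree-structured RNN whose computation follows the recursive shape of its input, so the sequence of kernels is only known at runtime. All names, shapes, and weights here are illustrative assumptions.

```python
import numpy as np

HIDDEN = 4
rng = np.random.default_rng(0)
W = rng.standard_normal((HIDDEN, 2 * HIDDEN))  # combines two child states
b = np.zeros(HIDDEN)

class Node:
    def __init__(self, left=None, right=None, embedding=None):
        self.left, self.right = left, right
        self.embedding = embedding  # set only for leaves

def encode(node):
    """Recursively encode a binary tree bottom-up. The call structure
    mirrors the input data structure, which is what makes ahead-of-time
    kernel scheduling hard for conventional deep learning compilers."""
    if node.left is None:  # leaf: return its embedding directly
        return node.embedding
    h_left = encode(node.left)
    h_right = encode(node.right)
    return np.tanh(W @ np.concatenate([h_left, h_right]) + b)

leaf = lambda: Node(embedding=rng.standard_normal(HIDDEN))
tree = Node(Node(leaf(), leaf()), leaf())
print(encode(tree))  # hidden state at the root
```

Because the recursion unfolds differently for every input tree, a two-step pipeline of static graph optimization plus fixed vendor kernels struggles here; this is the gap that a whole-program compiler approach like Cortex aims to close.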

Author Information

Pratik Fegade (Carnegie Mellon University)

I am a fifth-year PhD student in the Computer Science Department at CMU, advised by Prof. Todd Mowry and Prof. Phil Gibbons. My main research focus is building compiler analysis techniques that understand and optimize programs at semantically higher levels than is currently possible.

Tianqi Chen (CMU)
Phillip Gibbons (CMU)
Todd Mowry (Carnegie Mellon University)
