Posted On: Jul 20, 2020

An HAQM EMR cluster running G4dn instances is 5.4x cheaper and 4.5x faster than an HAQM EMR cluster running HAQM EC2 R5 memory-optimized instances. To learn more, please see the NVIDIA blog post.

You can now use HAQM EC2 G4 instances with HAQM EMR to take advantage of the latest generation NVIDIA T4 GPUs for machine learning and deep learning use cases. G4 instances are well suited for running machine learning inference applications such as image classification, object detection, recommendation engines, automated speech recognition, and language translation. 
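As a rough sketch of how a cluster with G4 instances might be provisioned, the following uses the boto3 EMR client to request a small cluster with g4dn.xlarge core nodes. The cluster name, key pair, subnet, and log bucket shown here are placeholders, not values from this announcement.

```python
import boto3

# Hypothetical example: launch a small EMR cluster with G4dn core nodes.
# The cluster name, key pair, subnet, and log URI below are placeholders.
emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="g4dn-ml-cluster",                         # placeholder cluster name
    ReleaseLabel="emr-5.30.0",                      # G4 support starts with EMR 5.30
    Applications=[{"Name": "Spark"}, {"Name": "Livy"}],
    Instances={
        "InstanceGroups": [
            {
                "Name": "Master",
                "InstanceRole": "MASTER",
                "InstanceType": "m5.xlarge",
                "InstanceCount": 1,
            },
            {
                "Name": "Core",
                "InstanceRole": "CORE",
                "InstanceType": "g4dn.xlarge",      # NVIDIA T4 GPU instances
                "InstanceCount": 2,
            },
        ],
        "Ec2KeyName": "my-key-pair",                # placeholder
        "Ec2SubnetId": "subnet-0123456789abcdef0",  # placeholder
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    LogUri="s3://my-bucket/emr-logs/",              # placeholder
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)

print("Cluster ID:", response["JobFlowId"])
```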

HAQM EMR enables you to run scalable machine learning workloads. HAQM EMR already supports HAQM EC2 G3 and P3 instances, which are optimized for running graphics-intensive and compute-intensive applications. You can use HAQM EMR to run deep learning and machine learning frameworks such as TensorFlow and Apache MXNet, and add your preferred tools and libraries. HAQM EMR also supports EMR Notebooks and Apache Livy for building interactive Apache Spark applications.
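Because HAQM EMR includes Apache Livy, Spark applications can also be submitted to the cluster over Livy's REST API. The sketch below assumes a master node reachable at a hypothetical hostname and a PySpark script already staged in S3; both are illustrative placeholders.

```python
import json
import time
import requests

# Hypothetical example: submit a PySpark batch job through Apache Livy,
# which listens on port 8998 of the EMR master node by default.
LIVY_URL = "http://ec2-xx-xx-xx-xx.compute-1.amazonaws.com:8998"  # placeholder master DNS
headers = {"Content-Type": "application/json"}

payload = {
    "file": "s3://my-bucket/scripts/train_model.py",  # placeholder script location
    "args": ["--epochs", "10"],
}

# Create the Livy batch session for the Spark application.
resp = requests.post(f"{LIVY_URL}/batches", data=json.dumps(payload), headers=headers)
batch_id = resp.json()["id"]

# Poll the batch state until the application finishes.
while True:
    state = requests.get(f"{LIVY_URL}/batches/{batch_id}/state", headers=headers).json()["state"]
    print("Batch state:", state)
    if state in ("success", "dead", "killed"):
        break
    time.sleep(30)
```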

For HAQM EMR pricing, please visit the HAQM EMR pricing page. For documentation, please review Supported Instance Types, Hadoop Daemon Configuration Settings, and Task Configuration.

HAQM EMR support for HAQM EC2 G4 instances is now generally available on EMR version 5.30 in the following regions: Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), Europe (Stockholm), Europe (Ireland), Europe (London), Europe (Paris), Middle East (Bahrain), South America (São Paulo), US East (N. Virginia), US East (Ohio), AWS GovCloud (US), US West (N. California), and US West (Oregon). For the overall regional availability of HAQM EMR, see the Regional Availability page.