TensorFlow benchmarks were performed on bare-metal, single-socket servers with equivalent memory, networking, and storage configurations for the x86 platforms shown. Processors tested include the AMD EPYC 7763 “Milan” with TF 2.7 ZenDNN, the Intel Xeon 7375 “Cascade Lake” with TF 2.7 DNNL, the Intel Xeon 8380 “Ice Lake” with TF 2.7 DNNL, and the Ampere Altra Max M128-80 with Ampere Optimized TF 2.7. The Arm64-based “Graviton 2”, available exclusively through AWS (c6g instances), was tested in a 64-core configuration.

Benchmarks were performed with Ampere’s internal testing software, based on the Ampere Model Library. This software is written entirely in Python and complies with the MLCommons Inference (a.k.a. MLPerf) methodology for calculating latency and throughput. It uses the frameworks’ standard APIs in the same way real-life applications do.

For the latency benchmarks, a single system process was executed at a time for each configuration listed below. Each process, following a warm-up run, ran the workload with batch size 1 in a loop for a minimum of 60 seconds. The final latency value was then calculated from the collected net inference time of each pass through the network.
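In code terms, the latency procedure amounts to roughly the following minimal Python sketch. This is illustrative only, not Ampere’s actual harness: the stand-in ResNet50 network, the warm-up count, and the use of the median as the summary statistic are assumptions made for the example.

```python
import time
import numpy as np
import tensorflow as tf

def measure_latency(infer, sample, warmup_runs=10, min_seconds=60.0):
    """Run batch-size-1 inference in a loop and collect per-pass latencies."""
    for _ in range(warmup_runs):                    # warm-up passes, not timed
        infer(sample)
    latencies = []
    start = time.perf_counter()
    while time.perf_counter() - start < min_seconds:
        t0 = time.perf_counter()
        infer(sample)                               # net inference time only
        latencies.append(time.perf_counter() - t0)
    return float(np.median(latencies))              # summary statistic (assumed)

# Stand-in network; the source does not name the benchmarked models.
model = tf.keras.applications.ResNet50(weights=None)
infer = tf.function(lambda x: model(x, training=False))
sample = tf.random.uniform([1, 224, 224, 3])        # batch size 1
print(f"median latency: {measure_latency(infer, sample) * 1e3:.2f} ms")
```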
For the multi-process throughput benchmarks, a search space of different batch sizes and numbers of threads per process was covered. Final throughput values were estimated from the median (50th percentile) latencies observed during 60-second multi-process runs, as sketched below. All systems were benchmarked running workloads of the following batch sizes for each of n parallel processes: [1, 4, 16, 32, 64, 128, 256]. The numbers of threads per process and the corresponding total numbers of processes were, respectively:
Benchmarks on all platforms were run using the same scripting, the same datasets, and the same representations of the models. All platforms ran the same workloads, applying identical pre- and post-processing and making uniform inference calls. In the case of the fp16 Altra data, values were obtained with the same scripting, while the AI model representations differed from their fp32 counterparts only in the precision of the weights: the model quantization process consisted solely of casting to the lower float precision.
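The throughput estimation referenced above can be sketched as follows. The derivation, in which each process contributes its batch size divided by its median pass latency and the contributions are summed, is an assumed reading of the methodology rather than Ampere’s actual code.

```python
import numpy as np

def estimate_throughput(per_process_latencies, batch_size):
    """Aggregate throughput (samples/s) of one n-process run.

    per_process_latencies holds one list of per-pass latencies (seconds)
    per parallel process, collected during a 60-second run.
    """
    # Each pass processes batch_size samples, so a process's rate is
    # batch_size / median pass latency; the aggregate rate is the sum.
    return sum(batch_size / np.percentile(lat, 50)  # 50th percentile = median
               for lat in per_process_latencies)

# Hypothetical numbers: 4 parallel processes, batch size 16, ~50 ms passes.
lats = [np.random.uniform(0.040, 0.060, size=1000) for _ in range(4)]
print(f"{estimate_throughput(lats, 16):.1f} samples/s")
```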
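Casting a model to fp16 in this sense can be illustrated with a short Keras sketch; ResNet50 again stands in for the benchmarked networks, and the source does not describe Ampere’s actual conversion pipeline.

```python
import tensorflow as tf

# Hypothetical fp32 source model standing in for the benchmarked networks.
model_fp32 = tf.keras.applications.ResNet50(weights=None)

# Rebuild the same architecture with float16 variables, then copy the fp32
# weights over; Keras casts values to each variable's dtype on assignment.
tf.keras.mixed_precision.set_global_policy("float16")
model_fp16 = tf.keras.applications.ResNet50(weights=None)
model_fp16.set_weights(model_fp32.get_weights())
```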
Across all tested systems, the TensorFlow library was used in the latest version available for each platform: