Today, MLCommons®, an open engineering consortium, released new results for MLPerf™ Inference v1.1, the organization's machine learning inference performance benchmark suite. MLPerf Inference measures the performance of applying a trained machine learning model to new data for a wide variety of applications and form factors, and optionally includes system power measurement.

MLPerf Inference is a full system benchmark, testing machine learning models, software, and hardware. The open-source and peer-reviewed benchmark suite provides a level playing field for competition that drives innovation and performance for the entire industry. While the majority of systems improved by 5-30% in just five months, some submissions more than doubled their previous performance, demonstrating the value of software optimization that will have a real impact on AI workloads.

As in past MLPerf Inference rounds, the submissions fall into two divisions: closed and open. Closed submissions use the same reference model to ensure a level playing field across systems, while participants in the open division may submit a variety of models. Within each division, submissions are further classified by availability: commercially available, in preview, or RDI (research, development, and internal).

MLPerf Inference v1.1 results further MLCommons’ goal to provide benchmarks and metrics that level the industry playing field through the comparison of ML systems, software, and solutions. The latest benchmark round received submissions from 20 organizations and released over 1,800 peer-reviewed performance results for machine learning systems ranging from edge devices to data center servers. This is the second round of MLPerf Inference to offer power measurement, with over 350 power results.

Submissions this round included software and hardware innovations from Alibaba, Centaur Technology, cTuning, Dell, EdgeCortix, Fujitsu, FuriosaAI, Gigabyte, HPE, Inspur, Intel, Krai, Lenovo, LTechKorea, Nettrix, Neuchips, NVIDIA, OctoML, Qualcomm Technologies, Inc., and Supermicro. In particular, MLCommons would like to congratulate first-time submitters cTuning, LTechKorea, and OctoML, as well as Inspur for submitting its first power measurements. To view the results, please visit https://mlcommons.org/en/inference-datacenter-11/ and https://www.mlcommons.org/en/inference-edge-11/.

“We had an outstanding set of results that showed significant improvement across the MLPerf Inference benchmark suite,” said Ramesh Chukka, Co-Chair of the MLPerf Inference Working Group. “We are excited about bringing these performance gains to the AI community. Congratulations to all of our submitters, especially the first-time submission teams.”

“The progress demonstrated in this round of results is outstanding over such a short time frame,” said David Kanter, Executive Director of MLCommons. “We are especially excited to see more software solution providers joining the MLPerf community to help improve machine learning.”

Additional information about the Inference v1.1 benchmarks is available at https://mlcommons.org/en/inference-datacenter-11/ and https://www.mlcommons.org/en/inference-edge-11/.

About MLCommons

MLCommons is an open engineering consortium with a mission to accelerate machine learning innovation, raise all boats, and increase machine learning’s positive impact on society. The foundation for MLCommons began with the MLPerf benchmark in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. In collaboration with its 50+ founding partners, including global technology providers, academics, and researchers, MLCommons is focused on collaborative engineering work that builds tools for the entire machine learning industry through benchmarks and metrics, public datasets, and best practices.

For additional information on MLCommons and details on becoming a Member or Affiliate of the organization, please visit http://mlcommons.org/ or contact participation@mlcommons.org.

Press Contact:
press@mlcommons.org