Today, MLCommons®, an open engineering consortium, released new results for MLPerf™ Training v1.0, the organization's machine learning training performance benchmark suite. MLPerf Training measures the time it takes to train machine learning models to a standard quality target in a variety of tasks including image classification, object detection, NLP, recommendation, and reinforcement learning. In its fourth round, MLCommons added two new benchmarks to evaluate the performance of speech-to-text and 3D medical imaging tasks.

MLPerf Training is a full system benchmark, testing machine learning models, software, and hardware. With MLPerf, MLCommons now has a reliable and consistent way to track performance improvement over time; in addition, results from a level-playing-field benchmark drive competition, which in turn drives performance. Compared to the last submission round, the best benchmark results improved by up to 2.1x, demonstrating substantial gains in hardware, software, and system scale.

Similar to past MLPerf Training results, the submissions consist of two divisions: closed and open. Closed submissions use the same reference model to ensure a level playing field across systems, while participants in the open division are permitted to submit a variety of models. Submissions are additionally classified by availability within each division: commercially available systems, systems in preview, and research, development, or internal (RDI) systems.

New MLPerf Training Benchmarks to Advance ML Tasks and Performance

As industry adoption and use cases for machine learning expand, MLPerf will continue to evolve its benchmark suites to evaluate new capabilities, tasks, and performance metrics. With the MLPerf Training v1.0 round, MLCommons included two new benchmarks to measure performance for speech-to-text and 3D medical imaging. These new benchmarks use RNN-T as the reference model for speech recognition and 3D U-Net as the reference model for 3D medical image segmentation.

MLPerf Training v1.0 results further MLCommons’ goal to provide benchmarks and metrics that level the industry playing field through the comparison of ML systems, software, and solutions. The latest benchmark round received submissions from 13 organizations and released nearly 150 peer-reviewed results for machine learning systems spanning from edge devices to data center servers. Submissions this round included software and hardware innovations from Dell, Fujitsu, Gigabyte, Google, Graphcore, Habana Labs, Inspur, Intel, Lenovo, Nettrix, NVIDIA, PCL & PKU, and Supermicro. To view the results, please visit https://mlcommons.org/en/training-normal-10/.

“We’re thrilled to see the continued growth and enthusiasm from the MLPerf community, especially as we’re able to measure significant improvement across the industry with the MLPerf Training benchmark suite,” said Victor Bittorf, Co-Chair of the MLPerf Training Working Group. “Congratulations to all of our submitters in this v1.0 round. We’re excited to continue our work together, bringing transparency across machine learning system capabilities.”

“The industry progress highlighted in this round of results is outstanding,” said John Tran, Co-Chair of the MLPerf Training Working Group. “The training benchmark suite is at the center of MLCommons’ mission to push machine learning innovation forward for everyone, and we’re incredibly pleased both with the engagement from this round’s submitters and with the increasing interest in MLPerf benchmark results from businesses looking to adopt AI solutions.”

Additional information about the Training v1.0 benchmarks is available at https://mlcommons.org/en/training-normal-10/.

About MLCommons

MLCommons is an open engineering consortium with a mission to accelerate machine learning innovation, raise all boats, and increase machine learning's positive impact on society. The foundation for MLCommons began with the MLPerf benchmark in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. In collaboration with its 50+ founding partners, including global technology providers, academics, and researchers, MLCommons is focused on collaborative engineering work that builds tools for the entire machine learning industry through benchmarks and metrics, public datasets, and best practices.

For additional information on MLCommons and details on becoming a Member or Affiliate of the organization, please visit http://mlcommons.org/ or contact participation@mlcommons.org.

Press Contact:
press@mlcommons.org