Today the MLPerf™ effort released MLPerf Training v0.6 results, the second round from its machine learning training performance benchmark suite. MLPerf is a consortium of over 40 companies and researchers from leading universities, and the MLPerf benchmark suites are rapidly becoming the industry standard for measuring machine learning performance. The MLPerf Training benchmark suite measures the time it takes to train one of six machine learning models to a standard quality target in tasks including image classification, object detection, translation, and playing Go. To see the results, go to mlcommons.org/en/training-normal-06/.

The first version of MLPerf Training was v0.5; this release, v0.6, improves on the first round in several ways, including higher quality targets for the ResNet, SSD, and GNMT benchmarks, a change to how system overhead is timed, and an improved MiniGo engine with a different quality target. According to MLPerf Training Special Topics Chairperson Paulius Micikevicius, “these changes demonstrate MLPerf’s commitment to its benchmarks’ representing the current industry and research state.”

Submissions showed substantial technological progress over v0.5. Many benchmarks featured submissions at higher scales than in v0.5. Benchmark results on the same system show substantial performance improvements over v0.5, even after the impact of the rule changes is factored out. (The higher quality targets lead to higher times on ResNet, SSD, and GNMT. The change to overhead timing leads to lower times, especially on larger systems. The improved engine and different quality target make MiniGo times substantially different.) “The rapid improvement in MLPerf results shows how effective benchmarking can be in accelerating innovation,” said Victor Bittorf, MLPerf Submitters Working Group Chairperson.

MLPerf Training v0.6 showed increased support for the benchmark and greater interest from submitters. The round received sixty-three entries, up more than 30% from v0.5, and submissions came from five submitters, up from three in the previous round. They also included the first submission to the “Open Division,” which allows the model to be further optimized or replaced with a different model (though the same model was used in this v0.6 submission) as a means of showcasing additional performance gains achievable through software changes. The MLPerf effort now has over 40 supporting companies, and recently released a complementary inference benchmark suite.

“We are creating a common yardstick for training and inference performance. We invite everyone to become involved by going to mlperf.org or emailing info@mlperf.org,” said Peter Mattson, MLPerf General Chair.