History
The foundation for MLCommons® started with the MLPerf™ benchmarks in 2018, which established industry-standard metrics for measuring machine learning performance and quickly grew to encompass data sets and best practices. The MLPerf benchmarks played a critical role for industry and research and proved tremendously popular. The community quickly spread across nearly every continent and grew to over 70 supporting organizations, from software startups and researchers at top universities to cloud computing and semiconductor giants.
From the beginning, we knew that to drive progress in machine learning we needed benchmarks that pushed on the frontier between research and industrial practice, and that creating large-scale open data sets would be critical to shifting that frontier over time. To democratize these newfound technological capabilities and ensure wide adoption, we needed to reduce friction and improve ML portability so that best practices could be shared across boundaries: between countries, between academia and industry, and between researchers and engineers within companies. These ideas became our three pillars and the mission of MLCommons, which we formed in 2020. Some key milestones in our history include:
2018
- February: Initial meetings between engineers and researchers from Baidu, Google, Harvard University, Stanford University, and the University of California, Berkeley
- May 2: Launched the MLPerf Training benchmark suite
- December 5: Launched the MLPerf HPC benchmark suite
- December 12: Published results from the first MLPerf Training benchmark suite, including results from Google, Intel, and NVIDIA
2019
- June 24: Launched the MLPerf Inference benchmark suite
- October 22: Launched the TinyML benchmark suite
- November 6: Published results from the first MLPerf Inference benchmark suite, including results from Alibaba, Centaur Technology, Dell EMC, dividiti, FuriosaAI, Google, Habana Labs, Hailo, Inspur, Intel, NVIDIA, Polytechnic University of Milan, Qualcomm, and Tencent
- November 8: MLCube created
2020
- January: People’s Speech kickoff
- April: First 10,000 hours of data for People’s Speech
- September: Completed a prototype of MLCube; reached 80,000 hours of aligned data for People’s Speech
- October 21: First results from the MLPerf Mobile benchmark suite, including results from Intel, MediaTek, Qualcomm, and Samsung
- November: People’s Speech shared with early adopters
- November 18: First MLPerf HPC Training results, including results from the Swiss National Supercomputing Centre (CSCS), Fujitsu, Lawrence Berkeley National Laboratory (LBNL), the National Center for Supercomputing Applications (NCSA), Japan’s Institute of Physical and Chemical Research (RIKEN), and the Texas Advanced Computing Center (TACC)
- December 3: MLCommons officially launched