MLCommons

Best Practices Working Group

Mission

The best practices working group aims to improve the ease of use of AI and to make AI accessible to more people.

Purpose

The best practices working group looks at opportunities to address common and cross-cutting needs of AI practitioners. The starting point for this effort is to reduce friction in machine learning by ensuring that models are easily portable and reproducible. That work is embodied in the MLCube™ project, where we are creating the source code and specifications to achieve it.

MLCube is the shipping container that enables researchers and developers to easily share the software that powers machine learning. MLCube is a set of common conventions for creating ML software that can just "plug-and-play" on many different systems. It makes it easier for researchers to share innovative ML models, for developers to experiment with many different models, and for software companies to build infrastructure around them. It creates opportunities by putting ML in the hands of more people.
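
To make the plug-and-play idea concrete, here is a minimal sketch of what a task entrypoint following these conventions might look like: each task is invoked by name, with its input and output directories passed as command-line parameters, so a runner can execute the same packaged software on many different systems. The task name and parameters used here (train, --data_dir, --model_dir) are illustrative assumptions, not taken from the MLCube specification.

    # Illustrative sketch only: the task name and parameters are assumptions,
    # not part of the MLCube specification.
    import argparse
    from pathlib import Path

    def train(data_dir: Path, model_dir: Path) -> None:
        # Placeholder training step: read inputs from data_dir and write a
        # model artifact into model_dir.
        model_dir.mkdir(parents=True, exist_ok=True)
        (model_dir / "model.txt").write_text(f"trained on {data_dir}\n")

    if __name__ == "__main__":
        parser = argparse.ArgumentParser(description="MLCube-style task entrypoint")
        parser.add_argument("task", choices=["train"], help="Task to run")
        parser.add_argument("--data_dir", type=Path, required=True)
        parser.add_argument("--model_dir", type=Path, required=True)
        args = parser.parse_args()
        if args.task == "train":
            train(args.data_dir, args.model_dir)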

Deliverables

  1. The MLCube specification
  2. Tutorials and instructions for MLCube
  3. The MLCube OSS project

Meeting Schedule

Weekly on Fridays from 9:00-10:00 AM Pacific.

How to Join

Use this link to request to join the group/mailing list and receive the meeting invite:
Best Practices Google Group.
Requests are reviewed manually, so please be patient.

Working Group Resources

Shared documents and meeting minutes:

  1. Associate a Google account with your e-mail address.
  2. Ask to join our Public Google Group.
  3. Ask to join our Members Google Group.
  4. Once approved, go to the Best Practices folder in the Members Google Drive.

Working Group Chair Emails

Sergey Serebryakov (sergey.serebryakov@hpe.com)

Diane Feddema (dfeddema@redhat.com)

Working Group Chair Bios

Diane is a Principal Software Engineer at Red Hat, where she leads performance analysis and visualization for the Open Data Hub managed service. She also designs experiments that compare different types of infrastructure and software frameworks to validate reference architectures for machine learning workloads using MLPerf™. Previously, Diane was a performance engineer at the National Center for Atmospheric Research (NCAR), where she worked on optimization and tuning of parallel global climate models. She has also worked at SGI and Cray on performance and compilers. She has a BS in Computer Science from the University of Iowa and an MS in Computer Science from the University of Colorado.

LinkedIn