As I discussed recently, the AI industry is developing benchmarking suites that will help practitioners determine the target environment in which their machine learning, deep learning or other statistical models might perform best. Increasingly, these frameworks are turning their focus to benchmarking AI workloads that run on edge devices, such as “internet of things” endpoints, smartphones and embedded systems.
No AI benchmarking suite has yet been widely adopted. Of those under development, the following stand the greatest chance of prevailing:
- Transaction Processing Performance Council’s AI Working Group: The TPC includes more than 20 top server and software makers. Late last year, the organization formed a working group to define AI hardware and software benchmarks that are agnostic to the underlying chipsets where the workloads are executed.
- MLPerf: Early this month, Google Inc. and Baidu Inc. announced that they are teaming with chipmakers and academic research centers to create MLPerf, a benchmark suite for measuring the training and inference performance of machine learning hardware and software.
Read more at SiliconANGLE