
BenchCouncil: International Open Benchmark Council


The International Open Benchmark Council (BenchCouncil) is a non-profit international organization that aims to promote the standardization, benchmarking, evaluation, and incubation of chips, AI, big data, blockchain, and other emerging technologies.

Since its founding, BenchCouncil has had two fundamental responsibilities. On one hand, it incubates and hosts benchmark projects and encourages reliable, reproducible research that uses the BenchCouncil benchmarks. On the other hand, it encourages benchmark-based quantitative approaches to tackling multi-disciplinary challenges.

What's New:

04/30/2020: AIBench: An Industry Standard AI Benchmark Suite from Internet Services (Updated Technical Report). This TR presents a balanced AI benchmarking methodology that meets the subtly different requirements of two situations: developing a new system or architecture, and ranking or purchasing commercial off-the-shelf ones. We identify and include seventeen representative AI tasks to guarantee the representativeness and diversity of the benchmarks, and, to reduce benchmarking cost, we select a minimum benchmark subset of three tasks. The evaluations show that AIBench outperforms MLPerf in the diversity and representativeness of model complexity, computational cost, convergence rate, computation and memory access patterns, and hotspot functions.

04/27/2020: Bench'20 Call for Papers (Submission Deadline: July 15, 2020). Bench'20 will be held in Atlanta, Georgia, USA on November 14-16, 2020. Its main themes are benchmarking, measuring, and optimizing Big Data, AI, blockchain, HPC, datacenter, IoT, and edge systems. As in previous years, Bench'20 will present the BenchCouncil Achievement Award ($3,000), the BenchCouncil Rising Star Award ($1,000), and the BenchCouncil Best Paper Award ($1,000).

04/20/2020: AIBench: Scenario-distilling AI Benchmarking (Technical Report). This TR proposes a scenario-distilling AI benchmarking methodology. Instead of using real-world applications directly, we propose permutations of essential AI and non-AI tasks as scenario-distilling benchmarks. A preliminary evaluation shows the advantage of scenario-distilling AI benchmarking over using component or micro AI benchmarks alone.

03/03/2020: AIBench Tutorial at ASPLOS 2020 Updated. This tutorial introduces the agile domain-specific benchmarking methodology, ten end-to-end application scenarios distilled from industry-scale applications, the AIBench framework, the end-to-end, component, and micro benchmarks, and AIBench's value for software and hardware designers, micro-architectural researchers, and code developers. Additionally, videos show how to use AIBench on a publicly available testbed. All the AIBench slide presentations and hands-on tutorial videos are publicly available from Tutorial_Link; a separate link for each talk is also provided on the Tutorial Website.

02/22/2020: AIBench Tutorial at HPCA 2020 Updated. This tutorial introduces the challenges, motivation, industry partners' requirements, and the AIBench framework and benchmarks. Additionally, videos show how to use AIBench on a publicly available testbed. All the AIBench slide presentations and hands-on tutorial videos are publicly available from Tutorial_Link; a separate link for each talk is also provided on the Tutorial Website.

02/17/2020: Updated BenchCouncil AIBench Technical Report. This TR proposes an agile domain-specific benchmarking methodology that speeds up software and hardware co-design. Together with seventeen industry partners, we identify ten important end-to-end application scenarios, from which sixteen representative AI tasks are distilled as AI component benchmarks. We propose permutations of essential AI and non-AI component benchmarks as end-to-end benchmarks, each of which distills the essential attributes of an industry-scale application; a minimal sketch of this composition idea appears below. We design and implement a reusable benchmark framework, propose a guideline for building end-to-end benchmarks, and present the first end-to-end Internet service AI benchmark. A preliminary evaluation shows the value of the benchmark suite for hardware and software designers, micro-architectural researchers, and code developers.
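As an illustration only (a minimal sketch in Python; the stage names are hypothetical placeholders, not AIBench's actual components), composing end-to-end benchmarks as permutations of essential component stages might look like this:

    # Minimal sketch: each permutation of essential AI and non-AI
    # component stages is one candidate end-to-end benchmark pipeline.
    # Stage names are hypothetical, not AIBench's actual tasks.
    from itertools import permutations

    AI_STAGES = ["image_classification", "recommendation"]
    NON_AI_STAGES = ["query_parsing", "result_caching"]

    for pipeline in permutations(AI_STAGES + NON_AI_STAGES):
        print(" -> ".join(pipeline))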

12/20/2019: BenchCouncil AI Benchmark Specification Call for Comments: AIBench Specification, HPC AI500 Specification, Edge AIBench Specification, and AIoTBench Specification.

12/02/2019: 2020 BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench'20) is online (Nov 14-16, 2020 @ Atlanta, Georgia, USA).

12/02/2019: 2020 BenchCouncil International Federated Intelligent Computing and Block Chain Conferences (FICC, Jun 29-Jul 3, 2020 @ Qingdao, Shandong, China) is online. Embracing intelligent computing and blockchain technologies, FICC 2020 consists of five individual symposia: the Symposium on Intelligent Computers (IC 20), the Symposium on Block Chain (BChain 20), the Symposium on Intelligent Medical Technology (MedTech 20), the Symposium on Financial Technology (FinTech 20), and the Symposium on Education Technology (EduTech 20).

12/02/2019: 2019 International Symposium on Chips (Chips 2019) registration is open. Chips 2019 aims to discuss advanced technologies and strategies for general chips, open-source chips, and intelligent chips, toward building an open chip ecosystem.

11/19/2019: BenchCouncil's View On Benchmarking AI and Other Emerging Workloads (Technical Report, Slides presented by Prof. Jianfeng Zhan at the BenchCouncil SC BoF). This paper outlines BenchCouncil's view on the challenges, rules, and vision of benchmarking modern workloads. We summarize the challenges of benchmarking modern workloads as FIDSS (Fragmented, Isolated, Dynamic, Service-based, and Stochastic), and propose the PRDAERS benchmarking rules: benchmarks should be specified in a paper-and-pencil manner, relevant, diverse, provided at different levels of abstraction, explicit about evaluation metrics and methodology, repeatable, and scalable. We believe that proposing simple but elegant abstractions that achieve both efficiency and generality is the ultimate, though not yet pressing, goal of benchmarking. In light of this vision, we briefly discuss BenchCouncil's related projects.

11/16/2019: Prof. Dr. Tony Hey honored with the BenchCouncil Achievement Award. Prof. Dr. Tony Hey, Chief Data Scientist at the STFC Rutherford Appleton Laboratory, has been named the 2019 recipient of the International Open Benchmark Council (BenchCouncil) Achievement Award. He is a fellow of the ACM, the American Association for the Advancement of Science, and the Royal Academy of Engineering.

11/14/2019: Chips 2019 (Dec 18-20 @ Beijing, China) is online. Chips 2019 is organized by BenchCouncil and the Bulletin of Chinese Academy of Sciences, and aims to discuss key technologies and the software and hardware ecosystem of the chip industry.

10/28/2019: Bench'19 Call for Participation. The Bench'19 program is updated. Bench'19 features six keynote presentations, four invited talks, seventeen regular paper presentations, four AI challenge paper presentations, and a BenchCouncil AI Benchmarks tutorial.

10/23/2019: AIBench is open-sourced, including the AIBench framework, 2 end-to-end application benchmarks, 16 component benchmarks, and 12 micro benchmarks. AIBench covers sixteen prominent AI problem domains: classification, image generation, text-to-text translation, image-to-text, image-to-image, speech-to-text, face embedding, 3D face recognition, object detection, video prediction, image compression, recommendation, 3D object reconstruction, text summarization, spatial transformer, and learning to rank.

10/23/2019: AIBench technical report updated. AIBench is the first industry-scale end-to-end AI benchmark suite, developed jointly with many industry partners. The report presents a highly extensible, configurable, and flexible benchmark framework containing loosely coupled modules for data input, prominent AI problem domains, online inference, offline training, and automatic deployment; a sketch of this modular design follows.
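To make the loosely coupled design concrete, here is a minimal sketch in Python under assumed interfaces (the class and function names are hypothetical, not AIBench's actual API):

    # Minimal sketch of a loosely coupled benchmark framework: each
    # concern (data input, problem domain, training, inference) is a
    # separate, swappable module. All names are hypothetical.
    from abc import ABC, abstractmethod
    from typing import Any, Iterable

    class DataInput(ABC):
        """Data input module: yields batches for training or inference."""
        @abstractmethod
        def batches(self) -> Iterable[Any]: ...

    class Task(ABC):
        """An AI problem domain, e.g., classification or object detection."""
        @abstractmethod
        def train_step(self, batch: Any) -> float: ...
        @abstractmethod
        def infer(self, batch: Any) -> Any: ...

    def offline_training(data: DataInput, task: Task, epochs: int = 1) -> float:
        """Offline training module: drives any Task over any DataInput."""
        loss = 0.0
        for _ in range(epochs):
            for batch in data.batches():
                loss = task.train_step(batch)
        return loss

    def online_inference(data: DataInput, task: Task) -> list:
        """Online inference module: runs any Task on incoming batches."""
        return [task.infer(batch) for batch in data.batches()]

Because the modules interact only through these narrow interfaces, a new problem domain or data source can be plugged in without touching the rest of the framework.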

10/16/2019: Bench'18 proceedings are online. Bench'18 was organized by the International Open Benchmark Council (BenchCouncil) and dedicated to benchmarking, measuring, and optimizing complex systems.

10/16/2019: Call for BenchCouncil Achievement Award Nominations. This award recognizes a senior researcher who has made long-term contributions to the benchmarking, measuring, and optimizing community. Any member of this community is welcome to nominate an eligible individual.

10/06/2019: Call for HPC AI System Award Nominations. This award recognizes a group of scientists and engineers who have made significant contributions to the design and implementation of a state-of-the-art or state-of-the-practice system. This year, the award is dedicated to HPC AI systems.

09/26/2019: Performance numbers are updated. BenchCouncil publishes performance numbers for intelligent chips, using six representative component benchmarks from AIBench. The numbers comprehensively evaluate eight NVIDIA GPUs of different types, architectures, memory capacities, and prices.

09/17/2019: SC19 BoF is online. The BoF on BenchCouncil AI Benchmarks will be held at SC19 in Denver, Colorado, USA, on Tuesday, November 19, 2019, 5:15 pm - 6:45 pm, in Room 503-504.

09/07/2019: Bench'19 registration is open. Bench'19 provides a high-quality, single-track forum for presenting results and discussing ideas that further the knowledge and understanding of the benchmarking community as a whole.

08/12/2019: AIBench, HPC AI500, AIoTBench, and Edge AIBench are released.

08/12/2019: BigDataBench is updated.

06/27/2019 - 06/29/2019: The 2019 BenchCouncil International Symposium on Intelligent Computers is held in Shenzhen, China.

05/01/2019: Registration for the BenchCouncil Competition is open online.

[Older items can be found in the archive.]