BenchCouncil: International Open Benchmark Council

The International Open Benchmark Council (BenchCouncil) is a non-profit international organization that aims to promote the standardization, benchmarking, evaluation, and incubation of Big Data, AI, chip, blockchain, and other emerging technologies.

Since its founding, BenchCouncil has had three fundamental responsibilities: keeping the benchmark, data, standard, evaluation, and optimization communities open, inclusive, and growing; promoting data- and benchmark-based quantitative approaches to tackling multidisciplinary and interdisciplinary challenges; and connecting the architecture, system, data management, algorithm, and application communities to better co-design for inherent workload characteristics.

What's New:

11/15/2021: The accepted papers of Bench'21 are online at Elsevier ScienceDirect (https://www.sciencedirect.com/journal/benchcouncil-transactions-on-benchmarks-standards-and-evaluations/articles-in-press) as a special issue of the BenchCouncil Transactions on Benchmarks, Standards, and Evaluations.

11/11/2021: Three primary contributors to the MLPerf and AIBench projects were honored with the 2021 BenchCouncil Rising Star Award: Dr. Peter Mattson from Google, Prof. Dr. Vijay Janapa Reddi from Harvard University, and Dr. Wanling Gao from the Chinese Academy of Sciences. They will present keynote speeches at the virtual Bench'21 award ceremony (Nov 14-16, 2021, 8:00 am EST, Program, Free registration).

11/11/2021: Four researchers were selected as finalists for the 2021 BenchCouncil Distinguished Doctoral Dissertation Award: Dr. Romain Jacob from ETH Zurich, Switzerland; Dr. Pei Guo from the University of Maryland, Baltimore County, USA; Dr. Belen Bermejo from the University of the Balearic Islands, Spain; and Dr. Kai Shu from the Illinois Institute of Technology, USA. They will give their presentations at the virtual Bench'21 award ceremony (Nov 14-16, 2021, 8:00 am EST, Program, Free registration).

11/10/2021: Professor Jack Dongarra was honored with the 2021 BenchCouncil Achievement Award. Prof. Jack Dongarra from the University of Tennessee has been named the 2021 recipient for his long-term "novel and substantial contributions to the development, testing, and documentation of high-quality mathematical software" and his work on "benchmarking HPC systems." He will present a keynote speech at the virtual Bench'21 award ceremony (Nov 14-16, 2021, 8:00 am EST, Program, Free registration).

11/10/2021: Bench'21 Call for Participation (Nov 14-16, 2021, 8:00 am EST, Program, Free registration). Keynotes: Professor Jack Dongarra (Achievement Award recipient) and the three Rising Star Award recipients; presentations by the four Distinguished Doctoral Dissertation Award finalists and the Best Paper Award and Tony Hey Best Student Paper Award winners; and four tutorials on benchmarking and optimizing.

11/01/2021: The BenchCouncil Tony Hey Best Student Paper Award is announced. Prof. Tony Hey generously donated to the BenchCouncil Award Committee to establish a best student paper award. The committee will present this award to a student who, as first author, publishes a paper with potential impact on benchmarking, measuring, and optimizing at the BenchCouncil conferences. Each award carries a $1,000 honorarium. This award will be given for the first time at Bench'21. Prof. Dr. Tony Hey is the Chief Data Scientist at Rutherford Appleton Laboratory STFC and a fellow of the ACM, the American Association for the Advancement of Science, and the Royal Academy of Engineering. He was named the 2019 recipient of the BenchCouncil Achievement Award.

10/08/2021: BenchCouncil Distinguished Doctoral Dissertation Award Call for Nomination (Deadline: October 15, 2021, End of Day, AoE, Submission Site). Prof. Jack Dongarra from the University of Tennessee, Dr. Xiaoyi Lu from the University of California, Merced, Dr. Jeyan Thiyagalingam from STFC-RAL, Dr. Lei Wang from ICT, CAS, and Dr. Spyros Blanas from The Ohio State University form the Award Committee. This award recognizes and encourages superior research and writing by doctoral candidates in the broad field of benchmarks, data, standards, evaluations, and optimization. Among the submissions, four candidates will be selected as finalists. They will be invited to give a 30-minute presentation at the Bench'21 conference and contribute research articles to the BenchCouncil Transactions on Benchmarks, Standards, and Evaluations. Finally, one of the four will receive the award, which carries a $1,000 honorarium.

09/08/2021: Dr. Jianfeng Zhan presents a keynote speech about AI benchmarking challenges, methodology, and progress at the AI4S workshop @ Cluster 2021.

09/05/2021: The ScenarioBench project is online! The goal of ScenarioBench is to propose methodologies, tools, and metrics to model, characterize, and optimize ultra-scale real-world or future applications and systems using benchmarks.

08/03/2021: The camera-ready version of the AIBench Scenario paper, accepted by PACT 2021, is available. This paper presents a methodology for benchmarking modern real-world application scenarios like Internet services, which consist of diverse AI and non-AI modules with huge code sizes and long, complicated execution paths. We formalize a real-world application scenario as a Directed Acyclic Graph-based model and propose rules to distill it into a permutation of essential AI and non-AI tasks, which we call a scenario benchmark. Together with seventeen industry partners, we extract nine typical scenario benchmarks. We design and implement an extensible, configurable, and flexible benchmark framework, on which we implement two Internet service AI scenario benchmarks as proxies to two real-world application scenarios. Link: AIBench Scenario Homepage and GitHub Download.
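
To make the DAG-based model concrete, here is a minimal Python sketch of a scenario expressed as a DAG of AI and non-AI tasks and distilled into one valid execution order. The task names and graph structure below are illustrative assumptions, not the actual AIBench implementation.

    # Hypothetical sketch of a DAG-based scenario model; the task names
    # are illustrative and not taken from the AIBench source.
    from graphlib import TopologicalSorter  # Python 3.9+

    # Each key is a task; its value is the set of tasks it depends on.
    scenario = {
        "recommend": {"rank", "query_cache"},     # AI task
        "rank": {"feature_extract"},              # AI task (learning to rank)
        "feature_extract": {"parse_request"},     # non-AI preprocessing
        "query_cache": {"parse_request"},         # non-AI lookup
        "parse_request": set(),                   # non-AI entry point
    }

    # Distill the DAG into a permutation of essential tasks, i.e., one
    # topological order that a scenario benchmark could execute.
    order = list(TopologicalSorter(scenario).static_order())
    print(order)  # e.g. ['parse_request', 'feature_extract', 'query_cache', 'rank', 'recommend']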

07/22/2021: Bench'21 CFP extended to August 6, 2021. Bench'21 will be held on Nov. 14-16, 2021 (Submission Site). Upon acceptance, papers will be scheduled for publication in the BenchCouncil Transactions on Benchmarks, Standards, and Evaluations (TBench) and presentation at the Bench'21 conference. As usual, Bench'21 will present the BenchCouncil Achievement Award ($3,000), the BenchCouncil Rising Star Award ($1,000), the BenchCouncil Best Paper Award ($1,000), and the BenchCouncil Distinguished Doctoral Dissertation Award ($1,000).

07/14/2021: The camera-ready version of the HPC AI500 V2.0 paper, accepted by CLUSTER 2021, is available. This paper presents a comprehensive HPC AI benchmarking methodology that achieves equivalence, representativeness, repeatability, and affordability. Among the nineteen AI workloads of AIBench Training, by far the most comprehensive AI benchmark suite, we choose two representative and repeatable AI workloads in terms of both AI model and micro-architectural characteristics. The selected HPC AI benchmarks cover both business and scientific computing: Image Classification and Extreme Weather Analytics. Finally, we propose three high levels of benchmarking and the corresponding rules to assure equivalence. To rank the performance of HPC AI systems, we present a new metric named Valid FLOPS, emphasizing both throughput and target quality.
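
As a rough illustration of the idea behind Valid FLOPS, the sketch below discounts raw throughput by how close a run comes to the target quality. The penalty form and the exponent n are assumptions made for illustration, not the exact definition in the paper.

    # Hedged sketch of a Valid FLOPS-style metric: raw throughput is
    # discounted when the run misses the target quality. The penalty form
    # and exponent n are illustrative assumptions.
    def valid_flops(flops, achieved_quality, target_quality, n=10):
        penalty = (achieved_quality / target_quality) ** n
        return flops * min(penalty, 1.0)  # no extra credit for overshooting

    # Example: 40 PFLOPS raw throughput at 75.9% top-1 accuracy against a
    # 76.0% target yields slightly less than 40 Valid PFLOPS.
    print(valid_flops(40e15, 0.759, 0.760))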

07/08/2021: BenchCouncil Distinguished Doctoral Dissertation Award Call for Nomination (Deadline: August 31, 2021, End of Day, AoE, Submission Site). Prof. Jack Dongarra from the University of Tennessee, Dr. Xiaoyi Lu from the University of California, Merced, and Dr. Jeyan Thiyagalingam from STFC-RAL will co-lead the Award Committee. This award recognizes and encourages superior research and writing by doctoral candidates in the broad field of benchmarks, data, standards, evaluations, and optimization. Among the submissions, four candidates will be selected as finalists. They will be invited to give a 30-minute presentation at the Bench'21 conference and contribute research articles to the BenchCouncil Transactions on Benchmarks, Standards, and Evaluations. Finally, one of the four will receive the award, which carries a $1,000 honorarium.

07/07/2021: The HPC AI500 V2.0 paper is accepted by CLUSTER 2021, a high-quality high-performance computing conference. The paper title is "HPC AI500 V2.0: The Methodology, Tools, and Metrics for Benchmarking HPC AI Systems." We will release the camera-ready version soon.

06/17/2021: AIBench Tutorial at ISCA 2021: Call for Participation (Thursday, June 17, 9 AM - 3:45 PM EDT). Please contact gaowanling@ict.ac.cn or tangfei@ict.ac.cn for the tutorial Zoom link.

06/16/2021: The WPC methodology and tool paper is accepted by IEEE Computer Architecture Letters (PDF). WPC presents a whole-picture workload characterization methodology across Intermediate Representation (IR), ISA, and microarchitecture to capture inherent workload characteristics and explain the reasons behind the numbers. For example, using the WPC tool we contradict an influential observation in CloudSuite, namely that higher front-end stalls are an intrinsic characteristic of scale-out workloads. The project homepage is https://www.benchcouncil.org/WPC.
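
For readers who want to check such front-end-stall claims on their own hardware, the sketch below (not the WPC tool itself) computes the front-end stall fraction of a command with Linux perf; it assumes perf is installed and that the CPU exposes the generic stalled-cycles-frontend event.

    # Minimal sketch (not the WPC tool): measure the front-end stall
    # fraction of a command via Linux perf counters.
    import subprocess

    def frontend_stall_fraction(cmd):
        # perf stat writes its machine-readable CSV output (-x,) to stderr.
        result = subprocess.run(
            ["perf", "stat", "-x", ",",
             "-e", "cycles,stalled-cycles-frontend"] + cmd,
            capture_output=True, text=True)
        counts = {}
        for line in result.stderr.splitlines():
            fields = line.split(",")
            if len(fields) >= 3 and fields[0].isdigit():
                counts[fields[2]] = int(fields[0])  # value, unit, event, ...
        return counts["stalled-cycles-frontend"] / counts["cycles"]

    print(frontend_stall_fraction(["sleep", "1"]))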

06/07/2021: Bench'21 calls for papers. Bench'21 will be held on Nov. 14-16, 2021 (Submission Site). The paper submission deadline is July 30, 2021, and the reviewing process is double-blind. Upon acceptance, papers will be scheduled for publication in the BenchCouncil Transactions on Benchmarks, Standards, and Evaluations (TBench) and presentation at the Bench'21 conference. As usual, Bench'21 will present the BenchCouncil Achievement Award ($3,000), the BenchCouncil Rising Star Award ($1,000), the BenchCouncil Best Paper Award ($1,000), and the BenchCouncil Distinguished Doctoral Dissertation Award ($1,000).

04/18/2021: AIBench Tutorial Slides @ ASPLOS 21 are online.

04/16/2021: The Bench steering committee is glad to announce the chairs of Bench'21. Prof. Resit Sendag from the University of Rhode Island, USA, and Dr. Arne J. Berre from SINTEF Digital, Norway, are named general chairs. Dr. Lei Wang from the Chinese Academy of Sciences, China, Prof. Axel Ngonga from Paderborn University, Germany, and Prof. Chen Liu from Clarkson University, USA, are named PC co-chairs. Bench'21 calls for papers.

04/13/2021: AIBench Tutorial at ASPLOS 2021: Call for Participation (Wednesday, April 14, 7am-11am PT and 4pm-8pm PT). Zoom Link: https://zoom.us/j/98791303035?pwd=Q3BvdUNjbjhhNmhGKzlyN3RkdEh1Zz09.

04/13/2021: AIBench Project Homepage updated.

04/02/2021: FICC 2020 proceedings are online. Free access from the English Website and Chinese Website, until April 22, 2021.

03/12/2021: Bench'20 proceedings are online. Free access from the homepage and conference program, until April 8, 2021.

03/08/2021: The camera-ready version of the AIBench Training paper, accepted by ISPASS 2021, is available. This paper summarizes the AI benchmarking challenges: prohibitive cost, conflicting requirements, short shelf-life, scalability, and repeatability. AIBench is the first benchmark project that systematically tackles these challenges: it distills and abstracts real-world application scenarios into scenario, training, inference, micro, and synthetic AI benchmarks. AIBench Training uses real-world benchmarks to cover, to the greatest extent, the factor space that impacts learning dynamics. For repeatable performance ranking (RPR subset) and workload characterization (WC subset), we keep two subsets to a minimum for affordability.

02/10/2021: Bench'21 calls for papers. Bench'21 will be held on Nov. 14-16, 2021. There are three submission opportunities, and the reviewing process is double-blind. Upon acceptance, papers will be scheduled for publication in the BenchCouncil Transactions on Benchmarks, Standards, and Evaluations (TBench) and presentation at the Bench'21 conference. As usual, Bench'21 will present the BenchCouncil Achievement Award ($3,000), the BenchCouncil Rising Star Award ($1,000), and the BenchCouncil Best Paper Award ($1,000). If you are interested in joining the Bench'21 TPC, please contact the BenchCouncil executive committee member Dr. Wanling Gao via gaowanling(at)ict(dot)ac(dot)cn.

02/10/2021: BenchCouncil launches a new journal, the BenchCouncil Transactions on Benchmarks, Standards, and Evaluations (TBench). TBench is an open-access multi-disciplinary journal dedicated to benchmarks, standards, evaluations, data sets, and optimizations. It uses a double-blind peer-review process to guarantee integrity and offers fast-track publication with an average turnaround time of two months. Authors publishing in TBench are not required to pay an article-processing charge (APC). If you are interested in joining TBench's editorial board, please contact the Co-EICs, Prof. Jianfeng Zhan and Prof. Tony Hey.

02/09/2021: The AIBench Tutorial at ISCA 2021 website is online. We will give a full-day tutorial about the BenchCouncil AIBench benchmarks at ISCA 2021. AIBench is a comprehensive AI benchmark suite, distilling real-world application scenarios into AI Scenario, Training, Inference, and Micro Benchmarks across Datacenter, HPC, IoT, and Edge. We also provide hands-on demos of using AIBench on the BenchCouncil testbed, an open testbed for AI in HPC, Datacenter, IoT, and Edge.

02/09/2021: An AIBench Training paper is accepted by ISPASS 2021, a high-quality conference on performance analysis of systems and software. The paper title is "AIBench Training: Balanced Industry-Standard AI Training Benchmarking." We will release the camera-ready version soon.

12/24/2020: The AIBench Tutorial at ASPLOS 2021 website is online. We are honored and pleased to accept the invitation from Dr. Tamara Silbergleit Lehman, Workshop and Tutorial co-chair of ASPLOS 2021, to give a full-day tutorial about BenchCouncil benchmarks. This tutorial presents BenchCouncil AIBench, a comprehensive AI benchmark suite that distills real-world application scenarios into AI Scenario, Training, Inference, and Micro Benchmarks across Datacenter, HPC, IoT, and Edge.

12/07/2020: AIBench and Its Performance Rankings (Video and Slides presented by Professor Jianfeng Zhan at Bench'20). AIBench is the most comprehensive and representative AI benchmark suite for datacenter, HPC, edge computing, and IoT scenarios. AIBench provides scenario, training, and inference benchmarks to fulfill different benchmarking requirements. Using AIBench, BenchCouncil released the AI chip ranking and the first performance ranking of HPC systems.

11/15/2020: Professor Torsten Hoefler was honored with the 2020 BenchCouncil Rising Star Award. Prof. Torsten Hoefler from ETH Zurich has been named the 2020 recipient of the International Open Benchmark Council (BenchCouncil) Rising Star Award for his outstanding contributions to benchmarking, measuring, and optimizing: "proposing the fastest routing algorithm for arbitrary topologies with J. Domke," "co-authoring the latest versions of the MPI message-passing standard with Jack Dongarra and Rajeev Thakur," and "the recent work on the Deep500 project, a deep learning meta-framework and HPC AI benchmarking library."

11/15/2020: Professor David J. Lilja was honored with the 2020 BenchCouncil Achievement Award. Prof. David J. Lilja from the University of Minnesota in Minneapolis has been named the 2020 recipient of the International Open Benchmark Council (BenchCouncil) Achievement Award for his long-term contributions to benchmarking, measuring, and optimizing: "summarizing practical methods of measurement, simulation, and analytical modeling," "proposing MinneSPEC for simulation-based computer architecture research," and "exploiting hardware-software interactions and architecture-circuit interactions to improve system performance."

11/09/2020: Bench'20 Call for Participation (Nov 15-16, 2020, 8:00 am EST, Program, Free registration). Keynotes: Professor Torsten Hoefler from ETH Zurich, Professor David J. Lilja from the University of Minnesota, and Professor Kristel Michielsen from the Jülich Supercomputing Centre.

10/09/2020: Registration is open for the 2020 BenchCouncil Federated Intelligent Computing and Block Chain Conferences (FICC 2020). Program Updates. Early registration deadline: Oct. 23rd, 2020.

09/08/2020: AIBench Training Ranking Released. Multiple TPU and GPU types are ranked. The benchmarks use the AIBench subset: Image Classification, Object Detection, and Learning to Rank. It provides three benchmarking levels: free, system, and hardware. This ranking list reports system-level benchmarking numbers. The AIBench methodology is available from the AIBench TR.

08/13/2020: Bench'20 Submission Deadline: Saturday 15 Aug 2020 23:59:59 (Anywhere on Earth).

07/12/2020: Bench'20 CFP Deadline Extended to August 15, 2020. Bench'20 will be held in Atlanta, Georgia, USA on November 14-16, 2020. The main themes of Bench'20 are benchmarking, measuring, and optimizing Big Data, AI, Block Chain, HPC, Datacenter, IoT, Edge, and other emerging domains. As usual, Bench'20 will present the BenchCouncil Achievement Award ($3,000), the BenchCouncil Rising Star Award ($1,000), and the BenchCouncil Best Paper Award ($1,000).

07/03/2020: The First HPC AI500 Ranking Released. The Fujitsu system ranks first among all HPC AI systems, achieving 31.41 Valid PFLOPS (a new metric considering both FLOPS and target quality) using ImageNet/ResNet50. HPC AI500 includes two benchmarks, Image Classification and Extreme Weather Analytics (Object Detection), selected from the 17 AI tasks of AIBench (TR). It provides three benchmarking levels: free, system, and hardware. This ranking list reports free-level benchmarking numbers, which aim to advance the state of the art of software and hardware co-design. The HPC AI500 methodology is available from the HPC AI500 TR.

06/30/2020: HPC AI500: The Methodology, Tools, Roofline Performance Models, and Metrics for Benchmarking HPC AI Systems. This TR proposes a comprehensive HPC AI benchmarking methodology that achieves the goals of being equivalent, relevant, representative, affordable, and repeatable. Following this methodology, we present open-source benchmarks and a Roofline performance model for benchmarking and optimizing these systems. We propose two innovative metrics, Valid FLOPS and Valid FLOPS per watt, to rank the performance and energy efficiency of HPC AI systems. The evaluations show that our methodology, benchmarks, performance models, and metrics can measure, optimize, and rank HPC AI systems in a scalable, simple, and affordable way.
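
The Roofline model mentioned here follows a standard formulation; below is a minimal sketch assuming the textbook form (attainable performance capped by peak compute or by memory bandwidth times arithmetic intensity), not the HPC AI500 code itself.

    # Textbook Roofline: attainable FLOPS at a given arithmetic intensity
    # (FLOPs per byte) is bounded by peak compute or by memory bandwidth.
    def roofline(peak_flops, peak_bw_bytes_per_s, intensity_flops_per_byte):
        return min(peak_flops, peak_bw_bytes_per_s * intensity_flops_per_byte)

    # Example: 100 TFLOPS peak compute and 1 TB/s bandwidth put the ridge
    # point at 100 FLOPs/byte; workloads below it are memory-bound.
    for ai in (10, 100, 1000):
        print(ai, roofline(100e12, 1e12, ai))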

06/22/2020: 2020 BenchCouncil International Federated Intelligent Computing and Block Chain Conferences (FICC, Mid-October, 2020 @ Qingdao, Shandong, China) Call for Papers. Submission Site (five individual tracks for five conferences, Deadline: August 15, 2020). The five conferences are: Symposium on Intelligent Computers (IC 20), Symposium on Block Chain (BChain 20), Symposium on Intelligent Medical Technology (MedTech 20), Symposium on Financial Technology (FinTech 20), and Symposium on Education Technology (EduTech 20).

06/22/2020: The Bench'19 proceedings are online. Free access until July 31, 2020. See the AI Challenge papers on Cambricon, RISC-V, and x86 using AIBench.

04/30/2020: AIBench: An Industry Standard AI Benchmark Suite from Internet Services (Updated Technical Report). This TR presents a balanced AI benchmarking methodology for meeting the subtly different requirements of different stages in developing a new system/architecture and ranking/purchasing commercial off-the-shelf ones. We identify and include seventeen representative AI tasks to guarantee the representativeness and diversity of the benchmarks. Meanwhile, we reduce the benchmark subset to a minimum of three tasks to save benchmarking cost. The evaluations show that AIBench outperforms MLPerf in terms of the diversity and representativeness of model complexity, computational cost, convergence rate, computation and memory access patterns, and hotspot functions.

04/27/2020: Bench'20 Call for Papers (Submission Deadline: July 15, 2020). Bench'20 will be held in Atlanta, Georgia, USA on November 14-16, 2020. The main themes of Bench'20 are benchmarking, measuring, and optimizing Big Data, AI, Block Chain, HPC, Datacenter, IoT, Edge, and other emerging domains. As usual, Bench'20 will present the BenchCouncil Achievement Award ($3,000), the BenchCouncil Rising Star Award ($1,000), and the BenchCouncil Best Paper Award ($1,000).

03/03/2020: AIBench Tutorial at ASPLOS 2020 Updated. This tutorial introduces the agile domain-specific benchmarking methodology, ten end-to-end application scenarios distilled from industry-scale applications, the AIBench framework, the end-to-end, component, and micro benchmarks, and AIBench's value for software and hardware designers, micro-architectural researchers, and code developers. Additionally, videos show how to use AIBench on a publicly available testbed. All the AIBench slide presentations and hands-on tutorial videos are publicly available from Tutorial_Link. A separate link for each talk is also provided on the Tutorial Website.

02/22/2020: AIBench Tutorial at HPCA 2020 Updated. This tutorial introduces the challenges, motivation, industry partners' requirements, the AIBench framework, and the benchmarks. Additionally, videos show how to use AIBench on a publicly available testbed. All the AIBench slide presentations and hands-on tutorial videos are publicly available from Tutorial_Link. A separate link for each talk is also provided on the Tutorial Website.

02/17/2020: Updated BenchCouncil AIBench Technical Report. This TR proposes an agile domain-specific benchmarking methodology, speeding up software and hardware co-design. With seventeen industry partners, we identify ten critical end-to-end application scenarios, from which we distill sixteen representative AI tasks as the AI component benchmarks. We propose permutations of essential AI and non-AI component benchmarks as end-to-end benchmarks, each of which is a distillation of an industry-scale application's fundamental attributes. We design and implement a reusable benchmark framework, propose a guideline for building end-to-end benchmarks, and present the first end-to-end Internet service AI benchmark. The preliminary evaluation shows our benchmark suite's value for hardware and software designers, micro-architectural researchers, and code developers.

12/20/2019: BenchCouncil AI Benchmark Specification Call for Comments: AIBench Specification, HPC AI500 Specification, Edge AIBench Specification, and AIoTBench Specification.

12/02/2019: 2020 BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench'20) is online (Nov 14-16, 2020 @ Atlanta, Georgia, USA).

12/02/2019: 2020 BenchCouncil International Federated Intelligent Computing and Block Chain Conferences (FICC, Jun 29-Jul 3, 2020 @ Qingdao, Shandong, China) is online. Embracing intelligent computing and block chain technologies, FICC 2020 consists of five individual symposiums: Symposium on Intelligent Computers (IC 20), Symposium on Block Chain (BChain 20), Symposium on Intelligent Medical Technology (MedTech 20), Symposium on Financial Technology (FinTech 20), and Symposium on Education Technology (EduTech 20).

12/02/2019: 2019 International Symposium on Chips (Chips 2019) registration open. Chips 2019 aims to discuss advanced technologies and strategies for general-purpose chips, open-source chips, and intelligent chips, toward building an open chip ecosystem.

11/19/2019: BenchCouncil's View On Benchmarking AI and Other Emerging Workloads (Technical Report, Slides presented by Prof. Jianfeng Zhan at the BenchCouncil SC BoF). This paper outlines BenchCouncil's view on the challenges, rules, and vision of benchmarking modern workloads. We summarize the challenges of benchmarking modern workloads as FIDSS: Fragmented, Isolated, Dynamic, Service-based, and Stochastic. We propose the PRDAERS benchmarking rules: benchmarks should be specified in a paper-and-pencil manner, relevant, diverse, containing different levels of abstraction, specifying the evaluation metrics and methodology, repeatable, and scalable. We believe that proposing simple but elegant abstractions that achieve both efficiency and generality is the final target of benchmarking in the future. In light of this vision, we briefly discuss BenchCouncil's related projects.

11/16/2019: Prof. Dr. Tony Hey was honored with the BenchCouncil Achievement Award. Prof. Dr. Tony Hey, the Chief Data Scientist at Rutherford Appleton Laboratory STFC, has been named the 2019 recipient of the International Open Benchmark Council (BenchCouncil) achievement award. Prof. Dr. Tony Hey is a fellow of ACM, the American Association for the Advancement of Science, and the Royal Academy of Engineering.

11/14/2019: Chips 2019 (Dec 18-20 @ Beijing, China) is online. Chips 2019 is organized by BenchCouncil and the Bulletin of the Chinese Academy of Sciences, aiming to discuss the key technologies and the software and hardware ecosystem of the chip industry.

10/28/2019: Bench'19 Call for Participation. The Bench'19 Program is updated. Bench'19 provides six keynote presentations, four invited talks, seventeen regular paper presentations, four AI challenge paper presentations, and a BenchCouncil AI Benchmarks tutorial.

10/23/2019: AIBench is open-sourced, including the AIBench framework, two end-to-end application benchmarks, 16 component benchmarks, and 12 micro benchmarks. AIBench covers sixteen prominent AI problem domains: classification, image generation, text-to-text translation, image-to-text, image-to-image, speech-to-text, face embedding, 3D face recognition, object detection, video prediction, image compression, recommendation, 3D object reconstruction, text summarization, spatial transformer, and learning to rank.

10/23/2019: AIBench technical report updated. AIBench is the first industry-scale end-to-end AI benchmark suite, developed jointly with many industry partners. First, we present a highly extensible, configurable, and flexible benchmark framework containing multiple loosely coupled modules, such as data input, prominent AI problem domains, online inference, offline training, and automatic deployment tool modules.

10/16/2019: The Bench'18 proceedings are online. Bench'18 was organized by the International Open Benchmark Council (BenchCouncil) and is dedicated to benchmarking, measuring, and optimizing complex systems.

10/16/2019: Call for BenchCouncil Achievement Award Nomination. This award recognizes a senior researcher who has made long-term contributions to the benchmarking, measuring, and optimizing community. Any member of this community is welcome to nominate an individual who is eligible for this award.

10/06/2019: Call for HPC AI System Award Nomination. This award recognizes a group of scientists and engineers who have made significant contributions to the design and implementation of a state-of-the-art or state-of-the-practice system. This year, the award is dedicated to HPC AI systems.

09/26/2019: Performance numbers are updated. BenchCouncil publishes the performance numbers of intelligent chips, using six representative component benchmarks from AIBench. The numbers comprehensively evaluate eight NVIDIA GPUs of different types and architectures, with various memory capacities and prices.

09/17/2019: The SC 19 BoF is online. The BoF about BenchCouncil AI Benchmarks will be held at SC19 in Denver, Colorado, USA, on November 19, 2019 (Tuesday), 5:15 pm - 6:45 pm (an hour and a half), Room 503-504.

09/07/2019: Bench'19 registration is open. Bench'19 provides a high-quality, single-track forum for presenting results and discussing ideas that further the knowledge and understanding of the benchmark community as a whole.

08/12/2019: AIBench, HPC AI500, AIoTBench, Edge AIBench are released.

08/12/2019: BigDataBench is updated.

06/27/2019 - 06/29/2019: The 2019 BenchCouncil International Symposium on Intelligent Computers was held in Shenzhen.

05/01/2019: Registration for the BenchCouncil Competition is open.

[Older items can be found in the archive.]