Since its inception, BenchCouncil has undertaken three main responsibilities: advancing and advocating for evaluatology as a universal science and engineering discipline applicable across diverse fields; setting benchmarks and releasing consistent, comparable evaluation outcomes for any evaluation subject using BenchCouncil Evaluatology; and nurturing an open, inclusive, and growing community for benchmarks and evaluations.
As a non-profit organization, BenchCouncil relies on your support to sustain its development. You are encouraged to contribute to BenchCouncil's growth by purchasing its commercial tools and services.
What's New:
10/16/2024: Evaluatology 2024 1-page abstract submission deadline extended to October 25, 2024 at 11:59 PM AoE (CFP, Submission Site). Evaluatology 2024 is the First International Workshop on Evaluatology, held in conjunction with the 16th BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench 2024), December 4-6, 2024, Guangzhou, China.
09/20/2024: Evaluatology 2024 Call for Papers. Evaluatology 2024 is the First International Workshop on Evaluatology, held in conjunction with the 16th BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench 2024), December 4-6, 2024, Guangzhou, China. The 1-page abstract submission deadline is October 15, 2024 at 11:59 PM AoE (Submission Site: https://eva2024.hotcrp.com/).
09/03/2024: BenchCouncil launched ETRanking to benchmark the Top 500 Pioneers Across 500 Emerging Technologies.
08/30/2024: BenchCouncil AISys-IQ Specification Call for Comments. AISys-IQ provides the specification of IQ evaluation for AI systems.
08/05/2024: The Call for Papers (CFP) deadline for Bench 2024 has been extended to August 19, 2024. Please submit your papers through the Submission Site: https://bench2024.hotcrp.com/. Bench 2024 will be held from December 4 to 6, 2024. The reviewing process is double-blind. All papers accepted for Bench 2024 will be presented at the conference and published in the Springer Lecture Notes in Computer Science (LNCS), which is indexed by EI.
08/05/2024: The IC 2024 CFP has been extended to August 27, 2024 (Submission Site: https://ic2024.hotcrp.com/). IC 2024 will be held on Dec. 4-6, 2024. The reviewing process is double-blind. All accepted papers will be presented at the IC 2024 conference and published by Springer CCIS (indexed by EI).
07/01/2024: 2024 BenchCouncil International Symposium on Intelligent Computers, Algorithms, and Applications (IC 2024) website is online (Guangzhou, Guangdong Province, China).
05/02/2024: The 16th BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench 2024) website is online (Guangzhou, Guangdong Province, China).
03/19/2024: BenchCouncil has recently published a technical article on Evaluatology, which focuses on the science and engineering of evaluation. Led by Dr. Jianfeng Zhan, the article explores the development of a universal science and engineering of evaluation across different fields.
01/02/2024: Bench 2023 proceedings are online. The Bench 2023 proceedings include 11 papers.
01/02/2024: IC 2023 proceedings are online. Free access from the English website and Chinese Website, until March 3, 2024.
12/06/2023: Prof. Lieven Eeckhout was honored with the 2023 BenchCouncil Achievement Award (Evaluation Report). Prof. Lieven Eeckhout from Ghent University has been named the 2023 recipient of the BenchCouncil Achievement Award for his long-term contributions to workload characterization methodologies and tools, and for his contribution to Sniper, a fast, accurate, and parallel x86 multi-core simulator. He presented a keynote speech at the Bench 2023 award ceremony (Dec. 3-6, 2023, Sanya).
12/06/2023: Congratulations to the FICC 2023 travel grant recipients: Singh Rohit, University of Cincinnati (300 USD), Xinlin Wang, University of Luxembourg (100 USD), Haojia Huang, Sun Yat-sen University (100 USD), Jiayi Xu, Institute of High Energy Physics, Chinese Academy of Sciences (100 USD), Jiaxing Li, Nankai University (100 USD), Jiahui Shen, Civil Aviation Flight University of China (100 USD), Hao Wang, Tongji University (100 USD), Song Li, University of Copenhagen (100 USD), Cen Mo, Shanghai Jiaotong University (100 USD), and Zejia Lu, Shanghai Jiaotong University (100 USD).
06/06/2023: IC 2023 Call for Papers. IC 2023 will be held in conjunction with FICC 2023 on December 4-6, 2023 in Sanya, a beautiful seaside city known as the "Hawaii of China". The paper submission deadline is July 31, 2023 at 11:59 PM AoE (Submission Site). The reviewing process is double-blind. Please note that citizens of up to 59 nations can visit Sanya without a Chinese visa.
05/06/2023: Bench 2023 Call for Papers. Bench 2023 will be held in conjunction with FICC 2023 on December 3-5, 2023 in Sanya, a beautiful seaside city known as the "Hawaii of China". The paper submission deadline is July 31, 2023 at 11:59 PM AoE (Submission Site). The reviewing process is double-blind. Please note that citizens of up to 59 nations can visit Sanya without a Chinese visa.
05/04/2023: Congratulations to Dr. Akshitha Sriraman from Carnegie Mellon University, 2022 BenchCouncil Distinguished Doctoral Dissertation Award (Computer Architecture) Recipient. According to the review criteria, the finalists must submit an article to TBench. Her article is available from Volume 3, Issue 1.
04/12/2023: To keep the community open, inclusive, and growing, we recommend influential benchmark and tool projects from BenchCouncil and other organizations. If you would like your project to be included in, or removed from, these recommendations, please do not hesitate to contact us (benchcouncil@gmail.com).
03/30/2023: The International Symposium on Open-source Computer Systems (OpenCS 2023) website is online (Sanya, Hainan, China).
03/30/2023: The 15th BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench 2023) website is online (Sanya, Hainan, China).
11/9/2022: John L. Henning was honored with the 2022 BenchCouncil Achievement Award. John L. Henning from Oracle has been named the 2022 recipient of the BenchCouncil achievement award for his long-term "leadership technical contributions to the SPEC CPU 2000, SPEC CPU2006, and SPEC CPU 2017". He presented a keynote speech at the virtual Bench 2022 award ceremony (Nov 7-9, 2022, 8:00 am EST, Program).
11/9/2022: Dr. Douwe Kiela was honored with the 2022 BenchCouncil Rising Star Award. Dr. Douwe Kiela from Hugging Face has been named the 2022 recipient of the BenchCouncil rising star award for his "contribution to the natural language processing evaluation and benchmarking, including Dynabench, Senteval, Adversarial NLI". He presented a keynote speech at the virtual Bench 2022 award ceremony (Nov 7-9, 2022, 8:00 am EST, Program).
11/4/2022: Bench 2022 Call for Participation (Nov 6-11, 2022, 8:00 am UTC-5, Program, Free registration). Keynotes: John L. Henning (Achievement Award recipient) and Dr. Douwe Kiela (Rising Star Award recipient); invited talk: Dr. Kai Shu; Distinguished Doctoral Dissertation Award Finalists; the Best Paper Award; the Tony Hey Best Student Paper Award; paper presentations of Bench 2022 papers and TBench papers; two workshops on OpenBench and Open-source Computer System (OpenCS).
10/13/2022: TBench Special Issue of "Open-source Computer Systems": Call for Papers (Submission Site). This special issue focuses on studies exploring the software-hardware co-design space in high-end computer systems and on advancing the open-source movement, including novel abstractions and methodologies, open-source hardware, open-source software, and measurement and optimization tools.
10/13/2022: Open-source Computer System (OpenCS) Workshop Call for Participation (Nov 10-11, 2022, 8:00 am UTC-5, Preliminary Program, Free registration).
10/13/2022: Bench 2022 Call for Participation (Nov 6-11, 2022, 8:00 am UTC-5, Preliminary Program, Free registration). Highlights: BenchCouncil Achievement Award Lecture; BenchCouncil Rising Star Award Lectures; Paper presentations of Bench 2022 papers and TBench papers; Two workshops on OpenBench and Open-source Computer System (OpenCS).
07/31/2022: The Bench 2022 CFP has been extended to August 18, 2022. Bench 2022 will be held on Nov. 7-8, 2022 (Submission Site). All accepted papers will be presented at the Bench 2022 conference and published by Springer LNCS (indexed by EI). Distinguished papers will be recommended to and published by the BenchCouncil Transactions on Benchmarks, Standards, and Evaluation (TBench). As usual, Bench 2022 will present the BenchCouncil Achievement Award ($3,000), the BenchCouncil Rising Star Award ($1,000), the BenchCouncil Best Paper Award ($1,000), and the BenchCouncil Distinguished Doctoral Dissertation Award in Computer Architecture ($1,000) and in other areas ($1,000).
07/01/2022: Three issues of the BenchCouncil Transactions on Benchmarks, Standards and Evaluations (TBench) are online. You are welcome to visit and download them.
06/09/2022: The SAIBench project is online! The position paper and slides [PDF] are available. The goal of SAIBench is to benchmark AI for science.
05/30/2022: Congratulations to Dr. Romain Jacob from ETH Zürich, recipient of the 2021 BenchCouncil Distinguished Doctoral Dissertation Award. Through anonymous voting by the award committee, Dr. Romain Jacob and Dr. Kai Shu from the Illinois Institute of Technology won first and second place, respectively. According to the review criteria, the finalists must submit an article to TBench. Their articles are available in Volume 2, Issue 1.
05/23/2022: BenchCouncil Distinguished Doctoral Dissertation Award in Computer Architecture Call for Nomination (Deadline: October 15, 2022 End of Day, AoE, Submission Site). This award recognizes and encourages superior research and writing by doctoral candidates on benchmarks, workload characterization, and evaluations of the computer architecture community. Each candidate is encouraged to submit articles to BenchCouncil Transactions on Benchmarks, Standards, and Evaluation. Among the submissions, four candidates will be selected as finalists. They will be invited to give a 30-minute presentation at the BenchCouncil Bench Conferences. The finalists must submit an article to BenchCouncil Transactions on Benchmarks, Standards, and Evaluation. Finally, one among the four will receive the award, which carries a $1,000 honorarium.
05/23/2022: BenchCouncil Distinguished Doctoral Dissertation Award in Other Areas Call for Nomination (Deadline: October 15, 2022 End of Day, AoE, Submission Site). Prof. Jack Dongarra from the University of Tennessee, Dr. Xiaoyi Lu from the University of California, Merced, Dr. Jeyan Thiyagalingam from STFC-RAL, Dr. Lei Wang from ICT, CAS, and Dr. Spyros Blanas from The Ohio State University form the Award Committee. This award is to recognize and encourage superior research and writing by doctoral candidates in the broad field of benchmarks, data, standards, evaluations, and optimization. Among the submissions, four candidates will be selected as finalists. They will be invited to give a 30-minute presentation at the BenchCouncil Bench conference and contribute research articles to BenchCouncil Transactions on Benchmarks, Standards and Evaluation. Finally, one among the four will receive the award, which carries a $1,000 honorarium.
05/22/2022: Bench 2022 calls for papers. Bench 2022 will be held on Nov. 7-9, 2022 (Submission Site). The paper submission deadline is July 28, 2022 at 11:59 PM AoE, and the reviewing process is double-blind.
05/17/2022: A BenchCouncil view on benchmarking emerging and future computing. Prof. Jianfeng Zhan presents a unifying benchmark definition, a conceptual framework, and a traceable, supervised-learning-based benchmarking methodology for benchmark science and engineering. The measurable properties of artifacts in the computer, management, or finance disciplines are extrinsic, not inherent: they depend on the problem definitions and solution instantiations. Only after instantiation can the solutions to a problem be measured. The processes of definition, instantiation, and measurement are entangled and exert complex mutual influences. Meanwhile, technology inertia introduces instantiation bias: solutions become trapped in a subspace, or even a single point, of a high-dimensional solution space. These daunting challenges mean that metrology cannot work for benchmark communities, and it is pressing to establish an independent benchmark science and engineering.
04/19/2022: TBench calls for papers. BenchCouncil Transactions on Benchmarks, Standards and Evaluations (TBench) is an open-access multi-disciplinary journal dedicated to benchmarks, standards, evaluations, optimizations, and data sets. It seeks fast-track publishing with an average turnaround time of one month. The journal operates a double-anonymized review process. Authors do not have to pay any open-access publication fee. However, at least one of the authors must register for the Bench conference (https://www.benchcouncil.org/bench/) and present their work.
03/31/2022: Congratulations to Prof. Jack J. Dongarra, 2021 ACM Turing Award Recipient. Prof. Jack J. Dongarra is also the 2021 BenchCouncil Achievement Award Recipient and the award committee chair of BenchCouncil Distinguished Doctoral Dissertation Award.
03/31/2022: The camera-ready version of the OLxPBench paper, accepted by ICDE 2022, is available. This paper presents OLxPBench, a composite HTAP benchmark suite. OLxPBench proposes: (1) the abstraction of a hybrid transaction, which performs a real-time query in between an online transaction, to model a widely observed behavior pattern: making a quick decision while consulting real-time analysis; (2) a semantically consistent schema to express the relationships between OLTP and OLAP schemas; (3) the combination of domain-specific and general benchmarks to characterize diverse application scenarios with varying resource demands. Our evaluations justify the three design decisions of OLxPBench and pinpoint the bottlenecks of two mainstream distributed HTAP DBMSs. Links: OLxPBench Homepage, Paper Download, and GitHub Download.
12/17/2021: Prof. Dr. Jianfeng Zhan calls for establishing benchmark science and engineering across multi-disciplines in TBench's first issue [PDF].
11/15/2021: The accepted papers of Bench 2021 are online at Elsevier ScienceDirect (https://www.sciencedirect.com/journal/benchcouncil-transactions-on-benchmarks-standards-and-evaluations/vol/1/issue/1) as a Special issue of BenchCouncil Transactions on Benchmarks, Standards, and Evaluations.
11/11/2021: Three primary contributors to the MLPerf and AIBench projects were honored with the 2021 BenchCouncil Rising Star Award. They are Dr. Peter Mattson from Google, Prof. Dr. Vijay Janapa Reddi from Harvard University, and Dr. Wanling Gao from the Chinese Academy of Sciences. They will present keynote speeches at the virtual Bench 2021 award ceremony (Nov 14-16, 2021, 8:00 am EST, Program), Free registration.
11/11/2021: Four Ph.D. graduates were selected as finalists for the 2021 BenchCouncil Distinguished Doctoral Dissertation Award. They are Dr. Romain Jacob from ETH Zurich, Switzerland; Dr. Pei Guo from the University of Maryland, Baltimore County, USA; Dr. Belen Bermejo from the University of the Balearic Islands, Spain; and Dr. Kai Shu from the Illinois Institute of Technology, USA. They will give their presentations at the virtual Bench 2021 award ceremony (Nov 14-16, 2021, 8:00 am EST, Program), Free registration.
11/10/2021: Professor Jack Dongarra was honored with the 2021 BenchCouncil Achievement Award. Prof. Jack Dongarra from the University of Tennessee has been named the 2021 recipient of the BenchCouncil Achievement Award for his long-term "novel and substantial contributions to development, testing and documentation of high-quality mathematical software" and "benchmarking HPC systems." He will present a keynote speech at the virtual Bench 2021 award ceremony (Nov 14-16, 2021, 8:00 am EST, Program), Free registration.
11/10/2021: Bench 2021 Call for Participation (Nov 14-16, 2021, 8:00 am EST, Program, Free registration). Keynotes: Professor Jack Dongarra (Achievement Award recipient) and the three Rising Star Award recipients; the four Distinguished Doctoral Dissertation Award Finalists; the Best Paper Award; the Tony Hey Best Student Paper Award; four tutorials on benchmarking and optimizing.
11/01/2021: The BenchCouncil Tony Hey Best Student Paper Award announcement. Prof. Tony Hey generously donated to the BenchCouncil Award committee to spin off the best student paper award. The committee will present this award to a student who, as first author, publishes a paper with a potential impact on benchmarking, measuring, and optimizing at the BenchCouncil conferences. Each award carries an honorarium of $1,000. This award will be given at Bench 2021 for the first time. Prof. Dr. Tony Hey is the Chief Data Scientist at Rutherford Appleton Laboratory STFC, and a fellow of the ACM, the American Association for the Advancement of Science, and the Royal Academy of Engineering. He was named the 2019 recipient of the BenchCouncil Achievement Award.
10/08/2021: BenchCouncil Distinguished Doctoral Dissertation Award Call for Nomination (Deadline: October 15, 2021 End of Day, AoE, Submission Site). Prof. Jack Dongarra from the University of Tennessee, Dr. Xiaoyi Lu from the University of California, Merced, Dr. Jeyan Thiyagalingam from STFC-RAL, Dr. Lei Wang from ICT, CAS, and Dr. Spyros Blanas from The Ohio State University form the Award Committee. This award is to recognize and encourage superior research and writing by doctoral candidates in the broad field of benchmarks, data, standards, evaluations, and optimization. Among the submissions, four candidates will be selected as finalists. They will be invited to give a 30-minute presentation at the Bench 2021 conference and contribute research articles to BenchCouncil Transactions on Benchmarks, Standards and Evaluation. Finally, one among the four will receive the award, which carries a $1,000 honorarium.
09/08/2021: Dr. Jianfeng Zhan presents a keynote speech about AI benchmarking challenges, methodology, and progress at the AI4S workshop @ Cluster 2021.
09/05/2021: The ScenarioBench project is online! The goal of ScenarioBench is to propose methodology, tools, and metrics to model, characterize, and optimize ultra-scale real-world or future applications and systems using the benchmarks.
08/03/2021: The camera-ready version of the AIBench Scenario paper, accepted by PACT 2021. This paper presents a methodology to attack the challenge of benchmarking modern real-world application scenarios like Internet services, which consist of a diversity of AI and non-AI modules with huge code sizes and long and complicated execution paths. We formalize a real-world application scenario as a Directed Acyclic Graph-based model and propose the rules to distill it into a permutation of essential AI and non-AI tasks, which we call a scenario benchmark. Together with seventeen industry partners, we extract nine typical scenario benchmarks. We design and implement an extensible, configurable, and flexible benchmark framework. We implement two Internet service AI scenario benchmarks based on the framework as proxies to two real-world application scenarios. Link: AIBench Scenario Homepage and GitHub Download.
07/22/2021: The Bench 2021 CFP has been extended to August 6, 2021. Bench 2021 will be held on Nov. 14-16, 2021 (Submission Site). Upon acceptance, papers will be scheduled for publication in the BenchCouncil Transactions on Benchmarks, Standards, and Evaluation (TBench) and presentation at the Bench 2021 conference. As usual, Bench 2021 will present the BenchCouncil Achievement Award ($3,000), the BenchCouncil Rising Star Award ($1,000), the BenchCouncil Best Paper Award ($1,000), and the BenchCouncil Distinguished Doctoral Dissertation Award ($1,000).
07/14/2021: The camera-ready version of the HPC AI500 V2.0 paper, accepted by CLUSTER 2021, is available. This paper presents a comprehensive HPC AI benchmarking methodology that achieves equivalence, representativeness, repeatability, and affordability. Among the nineteen AI workloads of AIBench Training, by far the most comprehensive AI benchmark suite, we choose two representative and repeatable AI workloads in terms of both AI model and micro-architectural characteristics. The selected HPC AI benchmarks cover both business and scientific computing: Image Classification and Extreme Weather Analytics. Finally, we propose three benchmarking levels and the corresponding rules to assure equivalence. To rank the performance of HPC AI systems, we present a new metric named Valid FLOPS, emphasizing both throughput performance and target quality.
07/08/2021: BenchCouncil Distinguished Doctoral Dissertation Award Call for Nomination (Deadline: August 31, 2021 End of Day, AoE, Submission Site). Prof. Jack Dongarra from the University of Tennessee, Dr. Xiaoyi Lu from the University of California, Merced, and Dr. Jeyan Thiyagalingam from STFC-RAL will co-lead the Award Committee. This award is to recognize and encourage superior research and writing by doctoral candidates in the broad field of benchmarks, data, standards, evaluations, and optimization. Among the submissions, four candidates will be selected as finalists. They will be invited to give a 30-minute presentation at the Bench 2021 conference and contribute research articles to BenchCouncil Transactions on Benchmarks, Standards and Evaluation. Finally, one among the four will receive the award, which carries a $1,000 honorarium.
07/07/2021: The HPC AI500 V2.0 paper is accepted by CLUSTER 2021, a high-quality conference on high performance computing. The paper title is "HPC AI500 V2.0: The Methodology, Tools, and Metrics for Benchmarking HPC AI Systems." We will release the camera-ready version soon.
06/17/2021: AIBench Tutorial on ISCA 2021 call for participation (Thursday, June 17 | 9 AM - 3:45 PM EDT). Please contact gaowanling@ict.ac.cn or tangfei@ict.ac.cn for tutorial Zoom link.
06/16/2021: The WPC methodology and tool paper is accepted by IEEE Computer Architecture Letters (PDF). WPC presents a whole-picture workload characterization methodology across Intermediate Representation (IR), ISA, and microarchitecture to sum up inherent workload characteristics and understand the reasons behind the numbers. For example, using the WPC tool, we contradict an influential observation from CloudSuite, namely that higher front-end stalls are an intrinsic characteristic of scale-out workloads. The project homepage is https://www.benchcouncil.org/WPC.
06/07/2021: Bench 2021 calls for papers. Bench 2021 will be held on Nov. 14-16, 2021 (Submission Site). The paper submission deadline is July 30, 2021, and the reviewing process is double-blind. Upon acceptance, papers will be scheduled for publication in the BenchCouncil Transactions on Benchmarks, Standards, and Evaluation (TBench) and presentation at the Bench 2021 conference. As usual, Bench 2021 will present the BenchCouncil Achievement Award ($3,000), the BenchCouncil Rising Star Award ($1,000), the BenchCouncil Best Paper Award ($1,000), and the BenchCouncil Distinguished Doctoral Dissertation Award ($1,000).
04/18/2021: AIBench Tutorial Slides @ ASPLOS 21 are online.
04/16/2021: The Bench steering committee is glad to announce the chairs of Bench 2021. Prof. Resit Sendag from the University of Rhode Island, USA, and Dr. Arne J. Berre from SINTEF Digital, Norway, are named general chairs. Dr. Lei Wang from the Chinese Academy of Sciences, China, Prof. Axel Ngonga from Paderborn University, Germany, and Prof. Chen Liu from Clarkson University, USA, are named PC Co-Chair. Bench 2021 calls for papers.
04/13/2021: AIBench Tutorial on ASPLOS 2021 call for participation (Wednesday, April 14 | 7am-11am PT and 4pm-8pm PT). Zoom Link: https://zoom.us/j/98791303035?pwd=Q3BvdUNjbjhhNmhGKzlyN3RkdEh1Zz09.
04/13/2021: AIBench Project Homepage updated.
04/02/2021: FICC 2020 proceedings are online. Free access from the English Website and Chinese Website, until April 22, 2021.
03/12/2021: Bench 2020 proceedings are online. Free access from the homepage and conference program, until April 8, 2021.
03/08/2021: The camera-ready version of the AIBench Training paper, accepted by ISPASS 2021, is available. This paper summarizes the AI benchmarking challenges: prohibitive cost, conflicting requirements, short shelf-life, scalability, and repeatability. AIBench is the first benchmark project that systematically tackles these challenges: it distills and abstracts real-world application scenarios into scenario, training, inference, micro, and synthetic AI benchmarks. AIBench Training uses real-world benchmarks to cover, to the greatest extent, the factor space that impacts the learning dynamics. For repeatable performance ranking (the RPR subset) and workload characterization (the WC subset), we keep the two subsets to a minimum for affordability.
02/10/2021: Bench 2021 calls for papers. Bench 2021 will be held on Nov. 14-16, 2021. There are three submission opportunities, and the reviewing process is double-blind. Upon acceptance, papers will be scheduled for publication in the BenchCouncil Transactions on Benchmarks, Standards, and Evaluation (TBench) and presentation at the Bench 2021 conference. As usual, Bench 2021 will present the BenchCouncil Achievement Award ($3,000), the BenchCouncil Rising Star Award ($1,000), and the BenchCouncil Best Paper Award ($1,000). If you are interested in joining the Bench 2021 TPC, please contact the BenchCouncil executive committee member Dr. Wanling Gao via gaowanling(at)ict(dot)ac(dot)cn.
02/10/2021: BenchCouncil launches a new journal, the BenchCouncil Transactions on Benchmarks, Standards, and Evaluations (TBench). TBench is an open-access multi-disciplinary journal dedicated to benchmarks, standards, evaluations, data sets, and optimizations. It uses a double-blind peer-review process to guarantee integrity and offers fast-track publication with an average turnaround time of two months. Authors publishing in TBench are not required to pay an article-processing charge (APC). If you are interested in joining TBench's editorial board, please contact the Co-EICs, Prof. Jianfeng Zhan and Prof. Tony Hey.
02/09/2021: AIBench Tutorial on ISCA 2021 Website Online. We will give a full-day tutorial about BenchCouncil AIBench benchmarks on ISCA 2021. AIBench is a comprehensive AI benchmark suite, distilling real-world application scenarios into AI Scenario, Training, Inference, and Micro Benchmarks across Datacenter, HPC, IoT, and Edge. We also provide hands-on demos on using AIBench on the BenchCouncil testbed---an open testbed for AI in HPC, Datacenter, IoT, and Edge.
02/09/2021: An AIBench Training paper is accepted by ISPASS 2021, a high-quality conference on performance analysis of systems and software. The paper title is "AIBench Training: Balanced Industry-Standard AI Training Benchmarking." We will release the camera-ready version soon.
12/24/2020: AIBench Tutorial on ASPLOS 2021 Website Online. We are honored and pleased to accept the invitation from Dr. Tamara Silbergleit Lehman, Workshop and Tutorial co-chair of ASPLOS 2021, to give a full-day tutorial about BenchCouncil benchmarks. This tutorial aims at presenting BenchCouncil AIBench. AIBench is a comprehensive AI benchmark suite, distilling real-world application scenarios into AI Scenario, Training, Inference, and Micro Benchmarks across Datacenter, HPC, IoT, and Edge.
12/07/2020: AIBench and Its Performance Rankings (Video and Slides presented by Professor Jianfeng Zhan at Bench 2020). AIBench is the most comprehensive and representative AI benchmark suite for datacenter, HPC systems, Edge computing, and IoT scenarios. AIBench provides scenario benchmarks, training benchmarks, and inference benchmarks to fulfill different benchmarking requirements. Using AIBench, BenchCouncil released AI chip ranking and the first performance ranking of HPC systems.
11/15/2020: Professor Torsten Hoefler was honored with the 2020 BenchCouncil Rising Star Award. Prof. Torsten Hoefler from ETH Zurich has been named the 2020 recipient of the International Open Benchmark Council (BenchCouncil) Rising Star Award for his outstanding contributions to benchmarking, measuring, and optimization:
"proposing the fastest routing algorithm for arbitrary topologies with J. Domke" and "co-authoring the latest versions of MPI message-passing standard with Jack Dongarra and Rajeev Thakur" and "the recent work on the Deep500 project --- a deep learning meta-framework and HPC AI benchmarking library".
11/15/2020: Professor David J. Lilja was honored with the 2020 BenchCouncil Achievement Award. Prof. David J. Lilja from the University of Minnesota in Minneapolis has been named the 2020 recipient of the International Open Benchmark Council (BenchCouncil) Achievement Award for his long-term contributions to benchmarking, measuring, and optimization: "Summarizing practical methods of measurement, simulation and analytical modeling", "proposing MinneSPEC for simulation-based computer architecture research", and "exploiting hardware-software interactions and architecture-circuit interactions to improve system performance".
11/09/2020: Bench 2020 Call for Participation (Nov 15-16, 2020, 8:00 am EST, Program, Free registration). Keynote: Professor Torsten Hoefler from ETH Zurich, Professor David J. Lilja from University of Minnesota, and Professor Kristel Michielsen from Jülich Supercomputing Centre.
10/09/2020: Open registration for the 2020 BenchCouncil Federated Intelligent Computing and Block Chain Conferences (FICC 2020). Program updates. Early registration deadline: Oct. 23, 2020.
09/08/2020: AIBench Training Ranking Released. Multiple TPU and GPU types are ranked. The benchmarks use the AIBench subset: Image Classification, Object Detection, and Learning to Rank. AIBench provides three benchmarking levels: free level, system level, and hardware level. This ranking list reports system-level benchmarking numbers. The AIBench methodology is available from the AIBench TR.
08/13/2020: Bench 2020 Submission Deadline: Saturday 15 Aug 2020 23:59:59 (Anywhere on Earth).
07/12/2020: The Bench 2020 CFP deadline has been extended to August 15, 2020. Bench 2020 will be held in Atlanta, Georgia, USA on November 14-16, 2020. The main themes of Bench 2020 are benchmarking, measuring, and optimizing Big Data, AI, Block Chain, HPC, Datacenter, IoT, Edge, and more. As usual, Bench 2020 will present the BenchCouncil Achievement Award ($3,000), the BenchCouncil Rising Star Award ($1,000), and the BenchCouncil Best Paper Award ($1,000).
07/03/2020: The First HPC AI500 Ranking Released. A Fujitsu system ranks first among all HPC AI systems, achieving 31.41 Valid PFLOPS (a new metric considering both FLOPS and target quality) using ImageNet/ResNet50. HPC AI500 includes two benchmarks, Image Classification and Extreme Weather Analytics (Object Detection), selected from the 17 AI tasks of AIBench (TR). It provides three benchmarking levels: free level, system level, and hardware level. This ranking list reports free-level benchmarking numbers, which aim to advance the state of the art of software and hardware co-design. The HPC AI500 methodology is available from the HPC AI500 TR.
06/30/2020: HPC AI500: The Methodology, Tools, Roofline Performance Models, and Metrics for Benchmarking HPC AI Systems. This TR proposes a comprehensive HPC AI benchmarking methodology that achieves the goal of being equivalent, relevant, representative, affordable, and repeatable. Following this methodology, we present open-source benchmarks and a Roofline performance model for benchmarking and optimizing the systems. We propose two innovative metrics, Valid FLOPS and Valid FLOPS per watt, to rank HPC AI systems' performance and energy efficiency. The evaluations show that our methodology, benchmarks, performance models, and metrics can measure, optimize, and rank HPC AI systems in a scalable, simple, and affordable way.
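As a rough illustration of how a quality-aware throughput metric such as Valid FLOPS might gate raw FLOPS on achieved model quality (the exact formula is defined in the HPC AI500 TR; the penalty form and exponent below are assumptions for illustration only):

```python
def valid_flops(measured_flops: float, achieved_quality: float,
                target_quality: float, penalty_exponent: float = 5.0) -> float:
    """Hypothetical sketch of a quality-gated throughput metric.

    Raw FLOPS are scaled by a penalty factor that shrinks rapidly when the
    achieved quality (e.g., top-1 accuracy) falls below the target quality.
    The penalty form and exponent are illustrative assumptions, not the
    official HPC AI500 definition.
    """
    # Cap the ratio at 1.0 so exceeding the target earns no bonus.
    penalty = min(achieved_quality / target_quality, 1.0) ** penalty_exponent
    return measured_flops * penalty

# A run that meets the target keeps its full throughput score;
# a run that misses the target is discounted.
full_score = valid_flops(31.41, achieved_quality=0.93, target_quality=0.93)
reduced_score = valid_flops(31.41, achieved_quality=0.90, target_quality=0.93)
```

The design intent such a metric captures is that a system cannot trade model quality for raw speed: only runs converging to the target quality receive full credit.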
06/22/2020: 2020 BenchCouncil International Federated Intelligent Computing and Block Chain Conferences (FICC, Mid-October, 2020 @ Qingdao, Shandong, China) Call for Papers. Submission Site (five individual tracks for five conferences, Deadline: August 15, 2020). The five conferences are: Symposium on Intelligent Computers (IC 20), Symposium on Block Chain (BChain 20), Symposium on Intelligent Medical Technology (MedTech 20), Symposium on Financial Technology (FinTech 20), and Symposium on Education Technology (EduTech 20).
06/22/2020: Bench 2019 proceedings are online. Free access until July 31, 2020. See the AI Challenges papers on Cambricon, RISC-V, and X86 using AIBench.
04/30/2020: AIBench: An Industry Standard AI Benchmark Suite from Internet Services (Updated Technical Report). This TR presents a balanced AI benchmarking methodology that meets the subtly different requirements of two scenarios: developing a new system/architecture and ranking or purchasing commercial off-the-shelf ones. We identify and include seventeen representative AI tasks to guarantee the representativeness and diversity of the benchmarks. Meanwhile, we distill a minimum benchmark subset of three tasks to reduce the benchmarking cost. The evaluations show that AIBench outperforms MLPerf in terms of the diversity and representativeness of model complexity, computational cost, convergence rate, computation and memory access patterns, and hotspot functions.
04/27/2020: Bench 2020 Call for Papers (Submission Deadline: July 15, 2020). Bench 2020 will be held in Atlanta, Georgia, USA on November 14-16, 2020. The main themes of Bench 2020 are benchmarking, measuring, and optimizing Big Data, AI, Block Chain, HPC, Datacenter, IoT, Edge, and other domains. As usual, Bench 2020 will present the BenchCouncil Achievement Award ($3,000), the BenchCouncil Rising Star Award ($1,000), and the BenchCouncil Best Paper Award ($1,000).
03/03/2020: AIBench Tutorial at ASPLOS 2020 Updated. This tutorial introduces the agile domain-specific benchmarking methodology; ten end-to-end application scenarios distilled from industry-scale applications; the AIBench framework; the end-to-end, component, and micro benchmarks; and AIBench's value for software and hardware designers, micro-architectural researchers, and code developers. Additionally, videos are provided to show how to use AIBench on a publicly available testbed. All the AIBench slide presentations and hands-on tutorial videos are publicly available from Tutorial_Link. A separate link for each talk is also provided on the Tutorial Website.
02/22/2020: AIBench Tutorial at HPCA 2020 Updated. This tutorial introduces the challenges, motivation, industry partners' requirements, the AIBench framework, and the benchmarks. Additionally, videos are provided to show how to use AIBench on a publicly available testbed. All the AIBench slide presentations and hands-on tutorial videos are publicly available from Tutorial_Link. A separate link for each talk is also provided on the Tutorial Website.
02/17/2020: Updated BenchCouncil AIBench Technical Report. This TR proposes an agile domain-specific benchmarking methodology that speeds up software and hardware co-design. With seventeen industry partners, we identify ten critical end-to-end application scenarios, from which we distill sixteen representative AI tasks as the AI component benchmarks. We propose permutations of essential AI and non-AI component benchmarks as end-to-end benchmarks, each of which is a distillation of an industry-scale application's fundamental attributes. We design and implement a reusable benchmark framework, propose a guideline for building end-to-end benchmarks, and present the first end-to-end Internet service AI benchmark. The preliminary evaluation shows our benchmark suite's value for hardware and software designers, micro-architectural researchers, and code developers.
12/20/2019: BenchCouncil AI Benchmark Specification Call for Comments: AIBench Specification, HPC AI500 Specification, Edge AIBench Specification, and AIoTBench Specification.
12/02/2019: 2020 BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench 2020) is online (Nov 14-16, 2020 @ Atlanta, Georgia, USA).
12/02/2019: 2020 BenchCouncil International Federated Intelligent Computing and Block Chain Conferences (FICC, Jun 29-Jul 3, 2020 @ Qingdao, Shandong, China) is online. Embracing intelligent computing and block chain technologies, FICC 2020 consists of five individual symposiums: Symposium on Intelligent Computers (IC 20), Symposium on Block Chain (BChain 20), Symposium on Intelligent Medical Technology (MedTech 20), Symposium on Financial Technology (FinTech 20), and Symposium on Education Technology (EduTech 20).
12/02/2019: 2019 International Symposium on Chips (Chips 2019) registration open. Chips 2019 aims to discuss advanced technologies and strategies for general-purpose chips, open-source chips, and intelligent chips, toward building an open chip ecosystem.
11/19/2019: BenchCouncil's View On Benchmarking AI and Other Emerging Workloads (Technical Report, Slides presented by Prof. Jianfeng Zhan at the BenchCouncil SC BoF). This paper outlines BenchCouncil's view on the challenges, rules, and vision of benchmarking modern workloads. We summarize the challenges of benchmarking modern workloads as FIDSS, short for Fragmented, Isolated, Dynamic, Service-based, and Stochastic. We propose the PRDAERS benchmarking rules: benchmarks should be specified in a paper-and-pencil manner, relevant, diverse, containing different levels of abstraction, specifying the evaluation metrics and methodology, repeatable, and scalable. We believe that proposing simple but elegant abstractions that achieve both efficiency and general-purpose applicability is the ultimate goal of benchmarking. In light of this vision, we briefly discuss BenchCouncil's related projects.
11/16/2019: Prof. Dr. Tony Hey was honored with the BenchCouncil Achievement Award. Prof. Dr. Tony Hey, the Chief Data Scientist at Rutherford Appleton Laboratory STFC, has been named the 2019 recipient of the International Open Benchmark Council (BenchCouncil) achievement award. Prof. Dr. Tony Hey is a fellow of ACM, the American Association for the Advancement of Science, and the Royal Academy of Engineering.
11/14/2019: Chips 2019 (Dec 18-20 @ Beijing, China) is online. Chips 2019 is organized by BenchCouncil and the Bulletin of the Chinese Academy of Sciences, aiming to discuss the key technologies and the software and hardware ecosystem of the chip industry.
10/28/2019: Bench 2019 Call for Participation. The Bench 2019 Program is updated. Bench 2019 provides six keynote presentations, four invited talks, seventeen regular paper presentations, four AI challenge paper presentations, and a BenchCouncil AI Benchmarks tutorial.
10/23/2019: AIBench is open-sourced, including the AIBench framework, two end-to-end application benchmarks, sixteen component benchmarks, and twelve micro benchmarks. AIBench covers sixteen prominent AI problem domains: classification, image generation, text-to-text translation, image-to-text, image-to-image, speech-to-text, face embedding, 3D face recognition, object detection, video prediction, image compression, recommendation, 3D object reconstruction, text summarization, spatial transformer, and learning to rank.
10/23/2019: AIBench technical report updated. AIBench is the first industry-scale end-to-end AI benchmark suite, developed jointly with many industry partners. First, we present a highly extensible, configurable, and flexible benchmark framework containing multiple loosely coupled modules, including data input, prominent AI problem domains, online inference, offline training, and automatic deployment tool modules.
10/16/2019: Bench 2018 proceedings are online. Bench 2018 is organized by the International Open Benchmark Council (BenchCouncil) and is dedicated to benchmarking, measuring, and optimizing complex systems.
10/16/2019: Call for BenchCouncil Achievement Award Nomination. This award recognizes a senior researcher who has made long-term contributions to the benchmarking, measuring, and optimizing community. Any member of this community is welcome to nominate an individual who is eligible for this award.
10/06/2019: Call for HPC AI System Award Nomination. This award recognizes a group of scientists and engineers who have made significant contributions to the design and implementation of a state-of-the-art or state-of-the-practice system. This year, the award is dedicated to HPC AI systems.
09/26/2019: Performance numbers are updated. BenchCouncil publishes the performance numbers of intelligent chips, using six representative component benchmarks in AIBench. The numbers comprehensively evaluate eight NVIDIA GPUs covering different types, architectures, memory capacities, and prices.
09/17/2019: SC19 BoF is online. The BoF on BenchCouncil AI Benchmarks will be held at SC19 in Denver, Colorado, USA, on November 19, 2019 (Tuesday), 5:15 pm - 6:45 pm (an hour and a half), Room 503-504.
09/07/2019: Bench 2019 Registration is open. Bench 2019 provides a high-quality, single-track forum for presenting results and discussing ideas that further the knowledge and understanding of the benchmark community as a whole.
08/12/2019: AIBench, HPC AI500, AIoTBench, Edge AIBench are released.
08/12/2019: BigDataBench is updated.
06/27/2019 - 06/29/2019: 2019 BenchCouncil International Symposium on Intelligent Computers is held in Shenzhen.
05/01/2019: Registration for the BenchCouncil Competition is open online.
[Older items can be found in the archive.]