BenchCouncil Federated Conferences

The BenchCouncil Federated Conferences (FC) facilitate multidisciplinary and interdisciplinary communication and collaboration, and promote benchmark-based quantitative approaches to addressing complex challenges across different fields. FC brings together a diverse range of affiliated research and industry conferences, creating a week-long joint meeting comprising several individual conferences. Accepted papers are typically published in collaboration with Springer or in BenchCouncil Transactions.

BenchCouncil organizes a series of conferences to fulfill its missions: promoting benchmark-based quantitative approaches to multidisciplinary and interdisciplinary challenges, and advancing state-of-the-art and state-of-the-practice technologies. Since 2021, all conferences have adopted a double-blind review process to ensure review integrity.

2023 BenchCouncil International Federated Intelligent Computing and Chip Conference (FICC 2023)

Dec 3-7, 2023 @ Sanya, Hainan, China

2020 BenchCouncil International Federated Intelligent Computing and Block Chain Conference (FICC 2020)

Oct. 30 - Nov. 3, 2020 @ Qingdao, Shandong, China

The goal of FICC 2020 is to foster communication, collaboration, and interplay among the communities of artificial intelligence, computer science, finance, medicine, and education.

Bench

Evolving from nine editions of the BPOE workshop series and the SDBA workshop, held in conjunction with ASPLOS, VLDB, and ICS, Bench is an international multidisciplinary conference on benchmarks, standards, datasets, and evaluations, organized by the International Open Benchmark Council (BenchCouncil). The Bench conference encompasses a wide range of topics in benchmarks, datasets, metrics, indexes, measurement, evaluation, optimization, supporting methods and tools, and other best practices in computer science, medicine, finance, education, management, etc. The conference invites manuscripts describing original work in these areas. At least one author of each TBench article is requested to present the work at the Bench conference.

Bench Steering Committees

Prof. Dr. Jack Dongarra, University of Tennessee

Prof. Dr. Geoffrey Fox, Indiana University

Prof. Dr. D. K. Panda, The Ohio State University

Prof. Dr. Felix Wolf, TU Darmstadt

Prof. Dr. Xiaoyi Lu, University of California, Merced

Dr. Wanling Gao, ICT, Chinese Academy of Sciences & UCAS

Prof. Dr. Jianfeng Zhan, ICT, Chinese Academy of Sciences & BenchCouncil

Conference Sites

The 15th BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench 2023) (Dec 3-5, 2023 @ Sanya, Hainan, China) (In conjunction with the Federated Intelligent Computing and Chip Conference)

The 14th BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench 2022)

The 13th BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench 2021)

The 12th BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench 2020)

The 11th BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench 2019) (Denver, Colorado, USA)

The 10th BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench 2018) (Seattle, WA, USA)

The 9th Workshop on Big Data Benchmarks, Performance Optimization, and Emerging Hardware (BPOE-9, in conjunction with ASPLOS 2018) (Williamsburg, VA, USA)

The 8th Workshop on Big Data Benchmarks, Performance Optimization, and Emerging Hardware (BPOE-8, in conjunction with ASPLOS 2017) (Xi’an, China)

The 7th Workshop on Big Data Benchmarks, Performance Optimization, and Emerging Hardware (BPOE-7, in conjunction with ASPLOS 2016) (Atlanta, GA, USA)

The 6th Workshop on Big Data Benchmarks, Performance Optimization, and Emerging Hardware (BPOE-6, in conjunction with VLDB 2015; proceedings published by Springer) (Kohala Coast, Hawai‘i, USA)

The 5th Workshop on Big Data Benchmarks, Performance Optimization, and Emerging Hardware (BPOE-5, in conjunction with VLDB 2014) (Hangzhou, Zhejiang Province, China)

The 4th Workshop on Big Data Benchmarks, Performance Optimization, and Emerging Hardware (BPOE-4, in conjunction with ASPLOS 2014) (Salt Lake City, Utah, USA)

The 3rd Workshop on Big Data Benchmarks, Performance Optimization, and Emerging Hardware (BPOE-3, in conjunction with Big Data Technology Conference 2013) (Beijing, China)

The 2nd Workshop on Big Data Benchmarks, Performance Optimization, and Emerging Hardware (BPOE-2, in conjunction with HPC China 2013) (Guilin, China)

The 1st Workshop on Big Data Benchmarks, Performance Optimization, and Emerging Hardware (BPOE-1, in conjunction with BigData 2013) (San Jose, CA, USA)

OpenCS

The OpenCS conference was launched to promote the open-source computer system (OpenCS) initiative in response to global challenges such as pandemics and climate change. The conference encompasses a wide range of topics exploring the software and hardware co-design space in computer systems, providing an ideal environment for developers and researchers from the architecture, system, algorithm, and application communities to advance the OpenCS initiative.

2023 International Symposium on Open-source Computer Systems (OpenCS 2023) (Dec 3-5, 2023 @ Sanya, Hainan, China) (In conjunction with the 15th BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing)

2022 International Workshop on Open-source Computer Systems (In conjunction with the 14th BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing) (Nov 10-11, 2022, 8:00 am UTC-5)

BenchCouncil Conference Review Rules

1. The online discussion is blind. While the reviewers discuss the papers, they do not know each other's identities beyond reviewer #A, #B, and so on; hence, a single reviewer cannot easily assert seniority, silence other voices, or influence others beyond the strength of their arguments.

2. When reviewers point out closeness to prior work as grounds for lowering their assessment of a paper's novelty and contribution, they should provide a full citation to that prior work.

3. When reviewers ask authors to draw a comparison with concurrent work (e.g., work that was published or appeared online after the paper submission deadline) or with preliminary work (e.g., a poster or abstract that is not archival), this comparison should not inform a lower score.

4. Provide useful and constructive feedback to the authors. Be respectful, professional, and positive in your reviews, and offer suggestions that help the authors improve their work.

5. Score each paper both absolutely and relative to the group of papers you are reviewing (a sketch of the relative binning follows this rule).
Absolute overall merit - There are four grades you can give each paper for absolute overall merit; the top two ratings mean that you think the paper is acceptable to the conference, and the bottom two mean that, in your opinion, the paper is below the threshold for the conference. Assign these grades by considering whether the paper is above or below the threshold for the conference.
Relative overall merit is based on the set of papers you are reviewing. Rank your papers and then group them into four bins. Except for rounding, you should divide your papers equally among the four bins.
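
To make the relative binning concrete, here is a minimal Python sketch. It is not part of the official rules; the bin_papers helper and the paper IDs are illustrative assumptions. Given a best-first ranking of your assigned papers, it splits them into four near-equal bins (sizes differ by at most one), with bin 1 the strongest quartile and bin 4 the weakest.

def bin_papers(ranked_paper_ids, num_bins=4):
    """Split a best-first ranking into num_bins near-equal groups."""
    n = len(ranked_paper_ids)
    base, extra = divmod(n, num_bins)  # bin sizes differ by at most one
    bins, start = [], 0
    for i in range(num_bins):
        size = base + (1 if i < extra else 0)
        bins.append(ranked_paper_ids[start:start + size])
        start += size
    return bins

# Example: nine assigned papers, already ranked best-first (IDs are made up).
ranking = ["P07", "P02", "P11", "P05", "P09", "P01", "P04", "P08", "P03"]
for grade, group in enumerate(bin_papers(ranking), start=1):
    print(f"Relative merit bin {grade}: {group}")

For nine papers this yields bins of sizes 3, 2, 2, and 2, matching the "equal except for rounding" requirement.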

6. Reviewers must treat all submissions as strictly confidential and destroy all papers once the technical program has been finalized.

7. Reviewers must contact the PC chair or EIC if they feel there is an ethical violation of any sort (e.g., authors seeking support for a paper, authors seeking to identify who the reviewers are).

8. Do not actively look for author identities. Reviewers should judge a paper solely on its merits.

9. If you happen to know the authors, do not reveal their identities. If you would like to recuse yourself from the review task, contact the PC Chair.

10. Review the current submission on its own merits. If you reviewed a previous version, make sure your review is based on the current submission.

11. Reviewers must not share the papers with students/colleagues.

12. Reviewers must compose the reviews themselves and provide unbiased reviews.

13. Do not solicit external reviews without consulting the PC chairs or EIC. If you regularly involve your students in the review process as part of their Ph.D. training, contact the PC chairs first; you remain responsible for the reviews. You may do this on no more than one of your reviews.

14. Reviewers must keep review discussions (including which papers you reviewed) confidential.

15. Do not discuss the content of a submitted paper or its reviews with anyone except through official channels, such as the submission management system (e.g., HotCRP), during the online discussion period or the PC meeting. This applies from now until the paper is published in any venue.

16. Do not reveal the names of a paper's authors if you happen to be aware of their identities. (Author names of accepted papers will be revealed after the PC meeting; author names of rejected papers will never be revealed.)

17. Do not disclose the outcome of a paper until its authors are notified of its acceptance or rejection.

18. Do not download or acquire material from the review site that you do not need access to.

19. Do not disclose the content of reviews, the identities of reviewers, or discussions about a paper.

Acknowledgments:
This set of review ethics is derived from the MICRO 2020, ASPLOS 2020-2021, and ISCA 2020-2021 review guidelines.