BenchCouncil organizes a series of conferences to fulfill its mission: promoting data- or benchmark-based quantitative approaches to tackle multidisciplinary or interdisciplinary challenges, and advancing state-of-the-art and state-of-the-practice benchmarks, data, standards, evaluations, and optimizations. Since 2021, all conferences have used a double-blind review process to ensure integrity.


BenchCouncil Federated Conference (FICC)

The BenchCouncil Federated Conference (FC) promotes multidisciplinary and interdisciplinary communication and collaboration, and advances data- or benchmark-based quantitative approaches to tackle multidisciplinary or interdisciplinary challenges. FC organizes a series of affiliated academic and industry conferences into a week-long federated event consisting of several individual conferences. Typically, accepted papers are published in cooperation with Springer or in BenchCouncil Transactions.

2023 BenchCouncil International Federated Intelligent Computing and Chip Conference (FICC 2023)

December 3-7, 2023, Sanya, Hainan, China

2020 BenchCouncil International Federated Intelligent Computing and Blockchain Conference (FICC 2020)

October 30 - November 3, 2020, Qingdao, Shandong, China

FICC 2020 aims to promote communication, collaboration, and mutual influence across disciplines such as artificial intelligence, computing, finance, medicine, and education, and to encourage benchmark-based quantitative approaches to tackle multidisciplinary challenges.

BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench)

Bench evolved from the nine-workshop BPOE series and the SDBA workshop, held in conjunction with ASPLOS, VLDB, and ICS. It is an international multidisciplinary conference on benchmarks, standards, datasets, and evaluation organized by the International Open Benchmark Council (BenchCouncil). The Bench conference covers a wide range of topics, including benchmarks, datasets, metrics, indexes, measurement, evaluation, optimization, supporting methods and tools, and other best practices in computer science, medicine, finance, education, management, and beyond. Bench invites manuscripts describing original work in the above areas and topics. At least one author of each TBench article is required to present the work at the Bench conference.

Bench Steering Committee

Prof. Dr. Jack Dongarra, University of Tennessee

Prof. Dr. Geoffrey Fox, Indiana University

Prof. Dr. D. K. Panda, The Ohio State University

Prof. Dr. Felix Wolf, TU Darmstadt

Prof. Dr. Xiaoyi Lu, University of California, Merced

Dr. Wanling Gao, ICT, Chinese Academy of Sciences & UCAS

Prof. Dr. Jianfeng Zhan, ICT, Chinese Academy of Sciences & BenchCouncil

Conference Links

The 15th International Symposium on Benchmarking, Measuring and Optimizing (Bench 2023) (December 3-5, 2023, Sanya, Hainan, China), held in conjunction with the 2023 BenchCouncil International Federated Intelligent Computing and Chip Conference (FICC 2023)

The 14th International Symposium on Benchmarking, Measuring and Optimizing (Bench 2022)

The 13th International Symposium on Benchmarking, Measuring and Optimizing (Bench 2021)

The 12th International Symposium on Benchmarking, Measuring and Optimizing (Bench 2020)

The 11th International Symposium on Benchmarking, Measuring and Optimizing (Bench 2019) (Denver, Colorado, USA)

The 10th International Symposium on Benchmarking, Measuring and Optimizing (Bench 2018) (Seattle, Washington, USA)

The 9th edition (BPOE-9, in conjunction with ASPLOS 2018) (Williamsburg, Virginia, USA)

The 8th edition (BPOE-8, in conjunction with ASPLOS 2017) (Xi'an, China)

The 7th edition (BPOE-7, in conjunction with ASPLOS 2016) (Atlanta, Georgia, USA)

The 6th edition (BPOE-6, in conjunction with VLDB 2015; BPOE-6 proceedings published by Springer) (Kohala Coast, Hawaii, USA)

The 5th edition (BPOE-5, in conjunction with VLDB 2014) (Hangzhou, Zhejiang, China)

The 4th edition (BPOE-4, in conjunction with ASPLOS 2014) (Salt Lake City, Utah, USA)

The 3rd edition (BPOE-3, in conjunction with the Big Data Technology Conference 2013) (Beijing, China)

The 2nd edition (BPOE-2, in conjunction with HPC China 2013) (Guilin, China)

The 1st edition (BPOE-1, in conjunction with BigData 2013) (San Jose, California, USA)

Open-Source Computer Systems Conference (OpenCS)

The OpenCS conference was launched to promote the Open-Source Computer Systems (OpenCS) Initiative, which addresses global challenges such as pandemics and climate change. The conference covers a wide range of topics exploring the software-hardware co-design space of computer systems, providing an ideal venue for developers and researchers from the architecture, systems, algorithms, and applications communities to advance the OpenCS Initiative.

The 2023 Open-Source Computer Systems Conference (OpenCS 2023) (December 3-5, 2023, Sanya, Hainan, China), held in conjunction with the 15th International Symposium on Benchmarking, Measuring and Optimizing (Bench 2023)

The 2022 Open-Source Computer Systems Conference (OpenCS 2022) (in conjunction with Bench 2022) (November 10-11, 2022)

Publications

BenchCouncil Conference Review Guidelines

1. The online discussion is blind. While the reviewers discuss the papers, they don't know others' identities beyond reviewer #A, #B; hence, a single reviewer cannot easily assert seniority, silence other voices, or influence them beyond the strength of their arguments.

2. When the reviewers point out closeness to prior work that informs the reviewer’s decision to lower the novelty and contribution of a paper, they should provide a full citation to that previous work.

3. When reviewers ask authors to draw a comparison with concurrent work (e.g., work that was published or appeared online after the paper submission deadline) or with preliminary work (e.g., a non-archival poster or abstract), this comparison should not lead the reviewer to assign a lower score.

4. Provide useful and constructive feedback to the authors. Be respectful, professional, and positive in your reviews and provide suggestions for the authors to improve their work.

5. Score the paper both absolutely and relative to the group of papers you are reviewing.
Absolute overall merit - There are 4 grades you can give each paper for absolute overall merit; the top 2 ratings mean that you think the paper is acceptable to the conference, and the bottom 2 ratings mean that in your opinion the paper is below the conference's threshold. Please assign these values by considering whether the paper is above or below the threshold for the conference.
Relative overall merit is based on the set of papers you are reviewing. Rank your papers and then group them into four bins. Except for rounding, you should divide your papers equally among the four categories.
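The even four-way split behind the relative-merit rule can be sketched as follows. This is a hypothetical illustration, not part of any BenchCouncil tooling; the function name and paper labels are invented for the example.

```python
def relative_merit_bins(ranked_papers):
    """Assign each paper a relative-merit bin from 1 (best) to 4 (worst).

    ranked_papers: a reviewer's papers ordered best-first.
    Integer arithmetic spreads the papers as evenly as possible,
    so bin sizes differ by at most one when the count is not
    divisible by four ("except for rounding").
    """
    n = len(ranked_papers)
    return {paper: i * 4 // n + 1 for i, paper in enumerate(ranked_papers)}

# Usage: eight reviewed papers split into two papers per bin.
bins = relative_merit_bins(["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"])
```

With a count not divisible by four (say five papers), the earlier bins simply absorb the extra paper, matching the guideline's allowance for fractional errors.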

6. Reviewers must treat all submissions as strictly confidential and destroy all papers once the technical program has been finalized.

7. Reviewers must contact the PC chair or EIC if they feel there is an ethical violation of any sort (e.g., authors seeking support for a paper, authors seeking to identify who the reviewers are).

8. Do not actively look for author identities. Reviewers should judge a paper solely on its merits.

9. If you know the authors, do not publicize the authors. If you would like to recuse yourself from the review task, contact the PC Chair.

10. Reviewers should review the current submission. If you have reviewed a previous submission, make sure your review is based on the current submission.

11. Reviewers must not share the papers with students/colleagues.

12. Reviewers must compose the reviews themselves and provide unbiased reviews.

13. Do not solicit external reviews without consulting the PC chairs or EIC. If you regularly involve your students in the review process as part of their Ph.D. training, contact the PC chairs. You are still responsible for the reviews. You may do this on no more than one of your reviews.

14. Reviewers must keep review discussions (including which papers you reviewed) confidential.

15. Do not discuss the content of a submitted paper/reviews with anyone other than officially on the submission management system like HotCRP during the online discussion period or the PC meeting (from now until paper publication in any venue).

16. Do not reveal the name of paper authors in case reviewers happen to be aware of author identity. (Author names of accepted papers will be revealed after the PC meeting; author names of rejected papers will never be revealed.)

17. Do not disclose the outcome of a paper until its authors are notified of its acceptance or rejection.

18. Do not download or acquire material from the review site you do not need access to.

19. Do not disclose the reviews' content, including the reviewers' identities or discussions about a paper.

Acknowledgments:
This set of review ethics is derived from the MICRO 2020, ASPLOS 2020-2021, and ISCA 2020-2021 review guidelines.