BenchCouncil Transactions on Benchmarks, Standards and Evaluations (TBench) is an open-access multi-disciplinary journal dedicated to benchmarks, standards, evaluations, optimizations, and data sets. It is a peer-reviewed, subsidized open-access journal: the International Open Benchmark Council pays the open-access fee, so authors do not pay any open-access publication fee. However, at least one of the authors must register for the BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench) (https://www.benchcouncil.org/bench/) and present their work. The journal offers fast-track publication with an average turnaround time of one month.

TBench Editorial Board

Co-Editors-in-Chief

Prof. Dr. Jianfeng Zhan, ICT, Chinese Academy of Sciences and BenchCouncil

Prof. Dr. Tony Hey, Rutherford Appleton Laboratory STFC, UK

Editorial office

Dr. Wanling Gao, ICT, Chinese Academy of Sciences and BenchCouncil

Shaopeng Dai, ICT, Chinese Academy of Sciences and BenchCouncil

Dr. Chunjie Luo, University of Chinese Academy of Sciences, China

Advisory Board

Prof. Jack Dongarra, University of Tennessee, USA

Prof. Geoffrey Fox, Indiana University, USA

Prof. D. K. Panda, The Ohio State University, USA

Founding Editor

Prof. H. Peter Hofstee, IBM Systems, USA and Delft University of Technology, Netherlands

Dr. Zhen Jia, Amazon, USA

Prof. Blesson Varghese, Queen's University Belfast, UK

Prof. Raghu Nambiar, AMD, USA

Prof. Jidong Zhai, Tsinghua University, China

Prof. Francisco Vilar Brasileiro, Federal University of Campina Grande, Brazil

Prof. Jianwu Wang, University of Maryland, USA

Prof. David Kaeli, Northeastern University, USA

Prof. Bingsheng He, National University of Singapore, Singapore

Dr. Lei Wang, Institute of Computing Technology, Chinese Academy of Sciences, China

Prof. Weining Qian, East China Normal University, China

Dr. Arne J. Berre, SINTEF, Norway

Prof. Ryan Eric Grant, Sandia National Laboratories, USA

Prof. Rong Zhang, East China Normal University, China

Prof. Cheol-Ho Hong, Chung-Ang University, Korea

Prof. Vladimir Getov, University of Westminster, UK

Prof. Zhifei Zhang, Capital Medical University, China

Prof. K. Selcuk Candan, Arizona State University, USA

Dr. Yunyou Huang, Guangxi Normal University, China

Prof. Woongki Baek, Ulsan National Institute of Science and Technology, Korea

Prof. Radu Teodorescu, The Ohio State University, USA

Prof. John Murphy, University College Dublin, Ireland

Prof. Marco Vieira, The University of Coimbra (UC), Portugal

Prof. Jose Merseguer, University of Zaragoza (UZ), Spain

Prof. Xiaoyi Lu, University of California, USA

Prof. Yanwu Yang, Huazhong University of Science and Technology, China

Prof. Jungang Xu, University of Chinese Academy of Sciences, China

Prof. Jiaquan Gao, Nanjing Normal University, China

Associate Editor

Dr. Chen Zheng, Institute of Software, Chinese Academy of Sciences, China

Dr. Biwei Xie, Institute of Computing Technology, Chinese Academy of Sciences, China

Dr. Mai Zheng, Iowa State University, USA

Dr. Wenyao Zhang, Beijing Institute of Technology, China

Dr. Bin Liao, North China Electric Power University, China

Aims and Scope

BenchCouncil Transactions on Benchmarks, Standards, and Evaluations (TBench) publishes position articles that open new research areas; research articles that address new problems, methodologies, or tools; survey articles that build up comprehensive knowledge; and comment articles that discuss published articles. Submissions should deal with the benchmark, standard, and evaluation research areas. Particular areas of interest include, but are not limited to:

  • 1. Generalized benchmark science and engineering (see https://www.sciencedirect.com/science/article/pii/S2772485921000120), including but not limited to
    • measurement standards
    • standardized data sets with defined properties
    • representative workloads
    • representative data sets
    • best practices
  • 2. Benchmark and standard specifications, implementations, and validations of:
    • Big Data
    • AI
    • HPC
    • Machine learning
    • Big scientific data
    • Datacenter
    • Cloud
    • Warehouse-scale computing
    • Mobile robotics
    • Edge and fog computing
    • IoT
    • Blockchain
    • Data management and storage
    • Financial domains
    • Education domains
    • Medical domains
    • Other application domains
  • 3. Data sets
    • Detailed descriptions of research or industry datasets, including the methods used to collect the data and technical analyses supporting the quality of the measurements.
    • Analyses or meta-analyses of existing data and original articles on systems, technologies, and techniques that advance data sharing and reuse to support reproducible research.
    • Evaluating the rigor and quality of the experiments used to generate the data and the completeness of the data description.
    • Tools generating large-scale data while preserving their original characteristics.
  • 4. Workload characterization, quantitative measurement, design, and evaluation studies of:
    • Computer and communication networks, protocols, and algorithms
    • Wireless, mobile, ad-hoc and sensor networks, IoT applications
    • Computer architectures, hardware accelerators, multi-core processors, memory systems, and storage networks
    • High-Performance Computing
    • Operating systems, file systems, and databases
    • Virtualization, data centers, distributed and cloud computing, fog, and edge computing
    • Mobile and personal computing systems
    • Energy-efficient computing systems
    • Real-time and fault-tolerant systems
    • Security and privacy of computing and networked systems
    • Software systems and services, and enterprise applications
    • Social networks, multimedia systems, Web services
    • Cyber-physical systems, including the smart grid
  • 5. Methodologies, metrics, abstractions, algorithms, and tools for:
    • Analytical modeling techniques and model validation
    • Workload characterization and benchmarking
    • Performance, scalability, power, and reliability analysis
    • Sustainability analysis and power management
    • System measurement, performance monitoring, and forecasting
    • Anomaly detection, problem diagnosis, and troubleshooting
    • Capacity planning, resource allocation, run time management, and scheduling
    • Experimental design, statistical analysis, simulation
  • 6. Measurement and evaluation
    • Evaluation methodologies and metrics
    • Testbed methodologies and systems
    • Instrumentation, sampling, tracing, and profiling of large-scale real-world applications and systems
    • Collection and analysis of measurement data that yield new insights
    • Measurement-based modeling (e.g., workloads, scaling behavior, assessment of performance bottlenecks)
    • Methods and tools to monitor and visualize measurement and evaluation data
    • Systems and algorithms that build on measurement-based findings
    • Advances in data collection, analysis, and storage (e.g., anonymization, querying, sharing)
    • Reappraisal of previous empirical measurements and measurement-based conclusions
    • Descriptions of challenges and future directions the measurement and evaluation community should pursue

Guide for authors

Types of paper

  • Contributions falling into the following categories will be considered for publication: Position papers, Full-length articles/Research articles, Review articles, Short communications, Discussions, Editorials, Case reports, Practice guidelines, Product reviews, Conference reports, and Opinion papers. Please ensure that you select the appropriate article type from the list of options when making your submission. Authors contributing to special issues should ensure that they select the special issue article type from this list.
  • Position Papers – No page limits.
  • Full-Length Articles/Research Articles - 12 double-column pages (All research article page limits do not include references and author biographies)
  • Review Papers - no page limits
  • Short Communications - 4 double-column pages (All short communication article page limits do not include references and author biographies)
  • Discussions - 2 double-column pages (All discussion article page limits do not include references and author biographies)
  • Editorials - 10 double-column pages (All editorial page limits do not include references and author biographies)
  • Case Studies - 8 double-column pages (All case report page limits do not include references and author biographies)
  • Practice Guidelines - 12 double-column pages (All practice guideline page limits do not include references and author biographies)
  • Product Reviews - 4 double-column pages (All product review page limits do not include references and author biographies)
  • Conference Reports - 10 double-column pages (All conference report page limits do not include references and author biographies)
  • Opinion Papers - 4 double-column pages (All opinion page limits do not include references and author biographies)
Before you begin

Ethics in Publishing

For information on ethics in publishing and ethical guidelines for journal publication, see http://www.elsevier.com/publishingethics and http://www.elsevier.com/ethicalguidelines.

Conflict of interest

All authors are requested to disclose any actual or potential conflict of interest, including any financial, personal, or other relationships with other people or organizations within three years of beginning the submitted work that could inappropriately influence, or be perceived to influence, their work. See also http://www.elsevier.com/conflictsofinterest.

Open access

Every peer-reviewed research article appearing in this journal is published open access. This means that the article is universally and freely accessible via the internet in perpetuity, in an easily readable format, immediately after publication. Authors do not pay any publication charge; the International Open Benchmark Council pays to make the article open access. However, at least one of the authors must register for the BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench) (https://www.benchcouncil.org/bench/) and present their work.

Creative Commons Attribution-NonCommercial-NoDerivs (CC BY-NC-ND): for non-commercial purposes, this license lets others distribute and copy the article, and include it in a collective work (such as an anthology), as long as they credit the author(s) and do not alter or modify the article.

Peer review

This journal operates a double anonymized review process. All contributions are typically sent to a minimum of two independent expert reviewers to assess the paper's scientific quality. The Editor is responsible for the final decision regarding the acceptance or rejection of articles. The Editor's decision is final. Editors are not involved in decisions about papers that they have written themselves or have been written by family members or colleagues or which relate to products or services in which the editor has a conflict of interest. Any such submission is subject to the journal's usual procedures, with peer review handled independently of the relevant editor and their research groups.

More information on the peer-review process is as follows.
1. All manuscripts are pre-checked by the editorial office. Any submission that fails to meet the basic standards of the journal is desk rejected for reasons such as being out of scope, ethical issues, or high similarity to existing work. The editorial office then assigns the submission to the Editor-in-Chief, an Associate Editor-in-Chief, or an Editorial Board member.
2. The editor invites multiple reviewers to review the paper.
3. After at least two reviewers provide their review reports and comments, the editor provides feedback based on the review comments to the authors.
4. When the authors submit the revised manuscript, the editor recommends a decision to the Editorial Office.
5. The Editorial Office makes a final decision based on the reviewers' comments and the editor's recommendation.
6. Submissions from an Editor-in-Chief, an Associate Editor-in-Chief, or an Editorial Board member are handled independently by other journal editors.


Double anonymized review

This journal uses double anonymized review, which means the authors' identities are concealed from the reviewers, and vice versa. More information is available on our website. To facilitate this, please include the following separately:

Title page (with author details): This should include the title, authors' names, affiliations, acknowledgments, any Declaration of Interest statement, and a complete address for the corresponding author, including an e-mail address.

Anonymized manuscript (no author details): The main body of the paper (including the references, figures, tables, and any acknowledgments) should not include any identifying information, such as the authors' names or affiliations.

Review Rules

1. When reviewers point out closeness to prior work as grounds for lowering a paper's novelty and contribution, they should provide a full citation to that previous work.
2. A comparison should not inform a lower score when the reviewer asks authors to compare with concurrent work published or appearing online after the paper submission deadline, or with preliminary, non-archival work (e.g., a poster or abstract).
3. Provide useful and constructive feedback to the authors. Be respectful, professional, and positive in your reviews and provide suggestions for the authors to improve their work.
4. Reviewers must contact the AE or EIC if they feel there is an ethical violation of any sort (e.g., authors seeking support for a paper, authors seeking to identify who the reviewers are).
5. Do not actively look for author identities. Reviewers should judge a paper solely on its merits.
6. Reviewers should review the current submission. If you have reviewed a previous submission, make sure your review is based on the current submission.
7. Reviewers must not share the papers with students/colleagues.
8. Reviewers must compose the reviews themselves and provide unbiased reviews.
9. Do not solicit external reviews without consulting the EIC. If you regularly involve your students in the review process as part of their Ph.D. training, contact the EIC. You are still responsible for the reviews.
10. Do not discuss the content of a submitted paper/review with anyone from now until paper publication in any venue.
11. Do not reveal the names of a paper's authors if you happen to be aware of their identity. (Author names of accepted papers will be revealed after acceptance; the editorial board will never reveal the author names of rejected papers.)
12. Do not disclose a paper's outcome until the editorial board notifies the authors of its acceptance or rejection.
13. Do not download or acquire material from the review site that you do not need access to.
14. Do not disclose the reviews' content, including the reviewers' identities or discussions about a paper.