TY - CONF
T1 - Empirical Evaluation of Pareto Efficient Multi-objective Regression Test Case Prioritisation
T2 - International Symposium on Software Testing and Analysis (ISSTA'15)
Y1 - 2015
A1 - Michael G. Epitropakis
A1 - Shin Yoo
A1 - Mark Harman
A1 - Edmund K. Burke
KW - additional greedy algorithm
KW - coverage compaction
KW - multi-objective evolutionary algorithm
KW - test case prioritisation
AB - The aim of test case prioritisation is to determine an ordering of test cases that maximises the likelihood of early fault revelation. Previous prioritisation techniques have tended to be single-objective, for which the additional greedy algorithm is the current state of the art. Unlike test suite minimisation, multi-objective test case prioritisation has not been thoroughly evaluated. This paper presents an extensive empirical study of the effectiveness of multi-objective test case prioritisation, evaluating it on multiple versions of five widely used benchmark programs and on a much larger real-world system of over 1 million lines of code. The paper also presents a lossless coverage compaction algorithm that dramatically scales the performance of all algorithms studied by between 2 and 4 orders of magnitude, making prioritisation practical for even very demanding problems.
JF - International Symposium on Software Testing and Analysis (ISSTA'15)
PB - ACM
CY - Baltimore, MD, USA
ER -
TY - RPRT
T1 - Pareto Efficient Multi-Objective Regression Test Suite Prioritisation
Y1 - 2014
A1 - Michael G. Epitropakis
A1 - Shin Yoo
A1 - Mark Harman
A1 - Edmund K. Burke
AB - Test suite prioritisation seeks a test case ordering that maximises the likelihood of early fault revelation. Previous prioritisation techniques have tended to be single-objective, for which the additional greedy algorithm is the current state of the art. We study multi-objective test suite prioritisation, evaluating it on multiple versions of five widely used benchmark programs and a much larger real-world system of over 1 million lines of code. Our multi-objective algorithms find faults significantly faster, with large effect size, for 20 of the 22 versions. We also introduce a non-lossy coverage compaction algorithm that dramatically scales the performance of all algorithms studied by between 2 and 4 orders of magnitude, making prioritisation practical for even very demanding problems.
PB - Department of Computer Science, University College London
CY - Gower Street, London
ER -
TY - CHAP
T1 - Repairing and Optimizing Hadoop hashCode Implementations
T2 - Search-Based Software Engineering: 6th International Symposium, SSBSE 2014, Fortaleza, Brazil, August 26-29, 2014. Proceedings
Y1 - 2014
A1 - Kocsis, Zoltan A.
A1 - Neumann, Geoff
A1 - Swan, Jerry
A1 - Epitropakis, Michael G.
A1 - Brownlee, Alexander E. I.
A1 - Haraldsson, Sami O.
A1 - Bowles, Edward
ED - Le Goues, Claire
ED - Yoo, Shin
AB - We describe how contract violations in Java™ hashCode methods can be repaired using a novel combination of semantics-preserving and generative methods, the latter achieved via Automatic Improvement Programming. The method described is universally applicable. When applied to the Hadoop platform, it was found to produce hashCode functions that are at least as good as the original, broken method, as well as those produced by a widely used alternative method from the ‘Apache Commons’ library.
JF - Search-Based Software Engineering: 6th International Symposium, SSBSE 2014, Fortaleza, Brazil, August 26-29, 2014. Proceedings
PB - Springer International Publishing
CY - Cham
SN - 978-3-319-09940-8
UR - http://dx.doi.org/10.1007/978-3-319-09940-8_22
ER -