TY - CONF
T1 - Empirical Evaluation of Pareto Efficient Multi-objective Regression Test Case Prioritisation
T2 - International Symposium on Software Testing and Analysis (ISSTA'15)
Y1 - 2015
A1 - Michael G. Epitropakis
A1 - Shin Yoo
A1 - Mark Harman
A1 - Edmund K. Burke
KW - additional greedy algorithm
KW - coverage compaction
KW - multi-objective evolutionary algorithm
KW - test case prioritization
AB - The aim of test case prioritisation is to determine an ordering of test cases that maximises the likelihood of early fault revelation. Previous prioritisation techniques have tended to be single objective, for which the additional greedy algorithm is the current state of the art. Unlike test suite minimisation, multi-objective test case prioritisation has not been thoroughly evaluated. This paper presents an extensive empirical study of the effectiveness of multi-objective test case prioritisation, evaluating it on multiple versions of five widely used benchmark programs and a much larger real-world system of over 1 million lines of code. The paper also presents a lossless coverage compaction algorithm that dramatically scales the performance of all algorithms studied by between 2 and 4 orders of magnitude, making prioritisation practical for even very demanding problems.
JF - International Symposium on Software Testing and Analysis (ISSTA'15)
PB - ACM
CY - Baltimore, MD, USA
ER -
TY - JOUR
T1 - Evolving cognitive and social experience in Particle Swarm Optimization through Differential Evolution: A hybrid approach
JF - Information Sciences
Y1 - 2012
A1 - M. G. Epitropakis
A1 - V. P. Plagianakos
A1 - M. N. Vrahatis
AB - In recent years, Particle Swarm Optimization has rapidly gained increasing popularity and many variants and hybrid approaches have been proposed to improve it. In this paper, motivated by the behavior and the spatial characteristics of the social and cognitive experience of each particle in the swarm, we develop a hybrid framework that combines the Particle Swarm Optimization and the Differential Evolution algorithms. Particle Swarm Optimization has the tendency to distribute the best personal positions of the swarm particles in the vicinity of the problem's optima. In an attempt to efficiently guide the evolution and enhance convergence, we evolve the personal experience, or memory, of the particles with the Differential Evolution algorithm, without destroying the search capabilities of the algorithm. The proposed framework can be applied to any Particle Swarm Optimization algorithm with minimal effort. To evaluate the performance and highlight the different aspects of the proposed framework, we initially incorporate six classic Differential Evolution mutation strategies into the canonical Particle Swarm Optimization, and subsequently employ five state-of-the-art Particle Swarm Optimization variants and four popular Differential Evolution algorithms. Extensive experimental results on 25 high-dimensional multimodal benchmark functions, along with the corresponding statistical analysis, suggest that the hybrid variants are very promising and significantly improve the original algorithms in the majority of the studied cases.
VL - 216
ER -
TY - JOUR
T1 - Enhancing Differential Evolution Utilizing Proximity-based Mutation Operators
JF - IEEE Transactions on Evolutionary Computation
Y1 - 2011
A1 - M. G. Epitropakis
A1 - D. K. Tasoulis
A1 - N. G. Pavlidis
A1 - V. P. Plagianakos
A1 - M. N. Vrahatis
AB - Differential evolution is a very popular optimization algorithm and considerable research has been devoted to the development of efficient search operators. Motivated by the different manner in which various search operators behave, we propose a novel framework based on the proximity characteristics among the individual solutions as they evolve. Our framework incorporates information about neighboring individuals in an attempt to efficiently guide the evolution of the population toward the global optimum, without sacrificing the search capabilities of the algorithm. More specifically, the random selection of parents during mutation is modified by assigning to each individual a probability of selection that is inversely proportional to its distance from the mutated individual. The proposed framework can be applied to any mutation strategy with minimal changes. In this paper, we incorporate this framework into the original differential evolution algorithm, as well as other recently proposed differential evolution variants. Through an extensive experimental study, we show that the proposed framework results in enhanced performance for the majority of the benchmark problems studied.
VL - 15
ER -
TY - CONF
T1 - Evolving cognitive and social experience in Particle Swarm Optimization through Differential Evolution
T2 - IEEE Congress on Evolutionary Computation, 2010. CEC 2010. (IEEE World Congress on Computational Intelligence)
Y1 - 2010
A1 - M. G. Epitropakis
A1 - V. P. Plagianakos
A1 - M. N. Vrahatis
KW - cognitive experience
KW - convergence
KW - differential evolution
KW - evolutionary computation
KW - particle swarm optimisation
KW - particle swarm optimization
KW - social experience
AB - In recent years, Particle Swarm Optimization has rapidly gained increasing popularity and many variants and hybrid approaches have been proposed to improve it. Motivated by the behavior and the proximity characteristics of the social and cognitive experience of each particle in the swarm, we develop a hybrid approach that combines the Particle Swarm Optimization and the Differential Evolution algorithms. Particle Swarm Optimization has the tendency to distribute the best personal positions of the swarm in the vicinity of the problem's optima. In an attempt to efficiently guide the evolution and enhance convergence, we evolve the personal experience of the swarm with the Differential Evolution algorithm. Extensive experimental results on twelve high-dimensional multimodal benchmark functions indicate that the hybrid variants are very promising and improve the original algorithm.
JF - IEEE Congress on Evolutionary Computation, 2010. CEC 2010. (IEEE World Congress on Computational Intelligence)
CY - Barcelona, Spain
ER -
TY - CONF
T1 - Evolutionary Adaptation of the Differential Evolution Control Parameters
T2 - IEEE Congress on Evolutionary Computation, 2009. CEC 2009
Y1 - 2009
A1 - M. G. Epitropakis
A1 - V. P. Plagianakos
A1 - M. N. Vrahatis
KW - adaptive control
KW - differential evolution control parameter
KW - evolutionary adaptation
KW - evolutionary computation
KW - optimisation
KW - optimization
KW - self-adaptive differential evolution algorithm
KW - self-adjusting systems
KW - user-defined parameter tuning
AB - This paper proposes a novel self-adaptive scheme for the evolution of crucial control parameters in evolutionary algorithms. More specifically, we suggest utilizing the differential evolution algorithm to endemically evolve its own control parameters. To achieve this, two simultaneous instances of Differential Evolution are used, one of which is responsible for the evolution of the crucial user-defined mutation and recombination constants. This self-adaptive differential evolution algorithm alleviates the need for tuning these user-defined parameters, while maintaining the convergence properties of the original algorithm. The evolutionary self-adaptive scheme is evaluated through several well-known optimization benchmark functions and the experimental results indicate that the proposed approach is promising.
JF - IEEE Congress on Evolutionary Computation, 2009. CEC 2009
CY - Trondheim, Norway
ER -
TY - CHAP
T1 - Evolutionary Algorithm Training of Higher-Order Neural Networks
T2 - Artificial Higher Order Neural Networks for Computer Science and Engineering: Trends for Emerging Applications
Y1 - 2009
A1 - M. G. Epitropakis
A1 - V. P. Plagianakos
A1 - M. N. Vrahatis
ED - Ming Zhang
JF - Artificial Higher Order Neural Networks for Computer Science and Engineering: Trends for Emerging Applications
PB - IGI Global
ER -