Publication: Quantifying Performance Changes with Effect Size Confidence Intervals - School of Computing - University of Kent
Tech report with more detail on the statistics behind Tomas Kalibera and Richard Jones's approach. In particular, it describes how to do the same thing in either a parametric or a non-parametric way (a rough sketch of the non-parametric version is below), and gives some assessment of how badly the parametric approach performs when the underlying data isn't normally distributed (not very badly, as it turns out).
to benchmarking confidence effect-size non-parametric performance statistics ... on 26 March 2014
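As a reminder of the general idea (and emphatically not the report's exact procedure), here's a minimal Python sketch of the non-parametric route: a percentile-bootstrap confidence interval for an effect size, taken here as the ratio of mean execution times between an old and a new system. The function name, the flat single-level resampling, and the example timings are all my own invention -- the report resamples hierarchically over the levels of the experiment (e.g. VM executions and the iterations within them).

import random
import statistics

def bootstrap_ratio_ci(old_times, new_times, reps=10000, alpha=0.05):
    # Percentile bootstrap CI for the effect size mean(new) / mean(old).
    # Flat (single-level) resampling: a simplification of the report's
    # hierarchical scheme.
    ratios = []
    for _ in range(reps):
        old_sample = [random.choice(old_times) for _ in old_times]
        new_sample = [random.choice(new_times) for _ in new_times]
        ratios.append(statistics.mean(new_sample) / statistics.mean(old_sample))
    ratios.sort()
    return ratios[int(reps * alpha / 2)], ratios[int(reps * (1 - alpha / 2)) - 1]

# Made-up timings in seconds; if the interval excludes 1.0, the measured
# change is unlikely to be noise at the chosen confidence level.
old = [1.02, 0.98, 1.01, 1.05, 0.99, 1.03]
new = [0.91, 0.94, 0.89, 0.93, 0.92, 0.90]
print(bootstrap_ratio_ci(old, new))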
Rigorous Benchmarking in Reasonable Time - Kent Academic Repository
Tomas Kalibera and Richard Jones's paper on how to do benchmarking that's actually meaningful -- presenting results as confidence intervals for effect sizes, with techniques for checking that results are i.i.d. and for working out how many repetitions you need to do (a simplified parametric sketch is below). Very nice work for a pretty short paper! (I've spent most of today chasing references from this in the interests of understanding the maths behind it...)
to benchmarking compiler confidence effect-size independence java performance reproducibility statistics vm ... on 26 March 2014
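The parametric side can be sketched similarly, assuming a simple single-level experiment and a delta-method (normal approximation) interval for the ratio of two means. This hypothetical helper is my own simplification: the paper's actual intervals account for variation at several levels of the experiment (iterations, executions, compiled binaries), which this ignores.

import math
import statistics

def ratio_of_means_ci(old_times, new_times, z=1.96):
    # Delta-method (first-order) confidence interval for mean(new) / mean(old),
    # treating the two sample means as independent and approximately normal.
    m_old, m_new = statistics.mean(old_times), statistics.mean(new_times)
    v_old = statistics.variance(old_times) / len(old_times)  # variance of the sample mean
    v_new = statistics.variance(new_times) / len(new_times)
    ratio = m_new / m_old
    var_ratio = v_new / m_old ** 2 + (m_new ** 2 / m_old ** 4) * v_old
    half = z * math.sqrt(var_ratio)
    return ratio - half, ratio + half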
Computing and Interpreting Effect Sizes - Springer
A fairly grumpy study of effect size measurement -- this makes some good points, though.
to effect-size significance statistics ... on 26 March 2014