The USE Method

"For every resource, check utilization, saturation, and errors."

to ag0803 analysis benchmarking debugging networking performance ... on 24 August 2014
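
A minimal sketch of what a USE-style spot check might look like for one resource (the CPU) on Linux, reading /proc and the load average directly; the metric choices are illustrative, not part of the method's definition.

    # A USE-style spot check for the CPU resource on Linux (illustrative only).
    import os
    import time

    def cpu_utilization(interval=1.0):
        # Fraction of time the CPUs were busy over `interval` seconds,
        # computed from two snapshots of the aggregate line in /proc/stat.
        def snapshot():
            with open("/proc/stat") as f:
                fields = [int(x) for x in f.readline().split()[1:]]
            idle = fields[3] + fields[4]          # idle + iowait
            return idle, sum(fields)
        idle1, total1 = snapshot()
        time.sleep(interval)
        idle2, total2 = snapshot()
        return 1.0 - (idle2 - idle1) / (total2 - total1)

    def cpu_saturation():
        # 1-minute load average per CPU; values above 1 suggest runnable
        # tasks are queueing for CPU time.
        return os.getloadavg()[0] / os.cpu_count()

    print(f"utilization: {cpu_utilization():.0%}")
    print(f"saturation (load per CPU): {cpu_saturation():.2f}")
    # Errors are rare for CPUs (machine-check counts); for other resources,
    # check the obvious counters, e.g. rx_errors/tx_errors for a NIC.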

A concrete illustration of practical running time vs big-O notation - The Old New Thing - MSDN Blogs

One for AG0803 students: why algorithmic complexity isn't everything in a world with complex memory architectures.

to ag0803 complexity data-structures memory optimisation performance ... on 13 August 2014
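
Not the example from the linked post, but a quick way to see the memory-architecture point in Python with NumPy: two loops that do the same number of additions (same big-O) can differ markedly because one walks memory contiguously and the other with a large stride.

    # Same asymptotic cost, different memory-access patterns (illustrative).
    import time
    import numpy as np

    a = np.zeros((4096, 4096))      # C-ordered, ~128 MB: rows are contiguous

    start = time.perf_counter()
    row_sums = [row.sum() for row in a]       # stride-1 access
    contiguous = time.perf_counter() - start

    start = time.perf_counter()
    col_sums = [col.sum() for col in a.T]     # large-stride access
    strided = time.perf_counter() - start

    print(f"contiguous: {contiguous:.3f}s, strided: {strided:.3f}s")
    # Both compute 4096 sums of 4096 doubles, but the strided traversal
    # typically runs several times slower because of cache behaviour.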

An Introduction to the Python Buffer Protocol

How to expose binary data as NumPy-style arrays in Python bindings.

to array binding buffer numeric performance python ... on 20 May 2014
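
The article is about the C side of the protocol; as a quick consumer-side illustration in pure Python, any object that exports a buffer (here a bytearray) can be viewed both through memoryview and as a NumPy array without copying.

    # Consuming the buffer protocol without copying (illustrative).
    import numpy as np

    buf = bytearray(16)                         # exports the buffer protocol
    mv = memoryview(buf).cast("I")              # view as C unsigned ints
    arr = np.frombuffer(buf, dtype=np.uint32)   # NumPy view over the same bytes

    arr[0] = 42
    print(mv[0])    # 42 -- both views share the underlying storage
                    # (assumes the common case of a 32-bit C unsigned int)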

gdb-heap

Memory usage tracking for GDB. "gdb-heap is different in that it allows for unplanned memory usage debugging: if a process unexpectedly starts using large amounts of memory you can attach to it with gdb, and use a new heap command to figure out where the memory is going."

to debugging gdb heap memory performance ... on 28 April 2014

STABILIZER: statistically sound performance evaluation

Neat trick: this uses an LLVM compiler pass plus a runtime library to repeatedly shuffle a program's memory layout (code, stack and heap) while it's running, randomising the effects of layout on performance. As a result of the central limit theorem, this also tends to normalise the distribution of timing errors (provided your program runs long enough to have been thoroughly shuffled).

to benchmarking compiler llvm performance research statistics timing ... on 01 April 2014
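
A toy simulation of the central-limit-theorem intuition (nothing to do with STABILIZER's actual implementation): if each run's time is the sum of many independent, re-randomised layout effects, the run-time distribution comes out approximately normal even though each individual effect is skewed.

    # Toy central-limit-theorem simulation (illustrative numbers throughout).
    import random
    import statistics

    def one_run(num_layout_effects=200):
        # Each effect is drawn from a skewed, decidedly non-normal distribution.
        return sum(random.expovariate(1.0) for _ in range(num_layout_effects))

    runs = [one_run() for _ in range(1000)]
    print(f"mean={statistics.mean(runs):.1f}  stdev={statistics.stdev(runs):.1f}")
    # A histogram of `runs` looks approximately Gaussian, which is what makes
    # standard parametric tests on the measured times defensible.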

Welcome to the Evaluate Collaboratory!

Tomas and Richard are involved in this project on empirical measurement in CS. Their position paper would be sensible reading for students: it explains some of the common pitfalls of performance measurement.

to ag0803 benchmarking cs empirical performance research statistics ... on 26 March 2014

Publication: Quantifying Performance Changes with Effect Size Confidence Intervals - School of Computing - University of Kent

Tech report with more detail on the statistics behind Tomas and Richard's approach. In particular, it describes how to construct the intervals in either parametric or non-parametric ways, and examines how badly the parametric approach performs when the underlying data isn't normally distributed (not very badly, as it turns out).

to benchmarking confidence effect-size non-parametric performance statistics ... on 26 March 2014
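
A much-simplified sketch of the non-parametric flavour -- a plain bootstrap on the ratio of mean run times, not the report's full method -- with made-up timings.

    # Bootstrap confidence interval for a speed-up ratio (simplified sketch).
    import random
    import statistics

    old = [103.1, 98.7, 101.4, 99.9, 102.6, 100.8]   # hypothetical timings (ms)
    new = [92.4, 95.1, 93.8, 94.6, 91.9, 93.2]

    def bootstrap_ratio_ci(a, b, reps=10_000, alpha=0.05):
        ratios = []
        for _ in range(reps):
            resampled_a = [random.choice(a) for _ in a]
            resampled_b = [random.choice(b) for _ in b]
            ratios.append(statistics.mean(resampled_b) / statistics.mean(resampled_a))
        ratios.sort()
        return ratios[int(reps * alpha / 2)], ratios[int(reps * (1 - alpha / 2)) - 1]

    lo, hi = bootstrap_ratio_ci(old, new)
    print(f"new/old mean-time ratio: 95% CI [{lo:.3f}, {hi:.3f}]")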

Rigorous Benchmarking in Reasonable Time - Kent Academic Repository

Tomas Kalibera and Richard Jones' paper on how to do benchmarking that's actually meaningful -- presenting results as confidence intervals for effect sizes, with techniques for checking that results really are i.i.d. and for working out how many repetitions you need to do. Very nice work for a pretty short paper! (I've spent most of today chasing references from this in the interests of understanding the maths behind it...)

to benchmarking compiler confidence effect-size independence java performance reproducibility statistics vm ... on 26 March 2014
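
One small piece of that, sketched rather than taken from the paper: before treating repeated in-process measurements as independent, it's worth checking that successive iterations aren't correlated once warm-up has been discarded. The timings and cut-off below are made up.

    # Crude independence check on successive iteration timings (illustrative).
    import statistics

    timings = [12.9, 11.4, 10.2, 10.1, 10.3, 10.0, 10.2, 10.1, 10.3, 10.2]
    steady = timings[3:]                    # discard warm-up iterations

    def lag1_autocorrelation(xs):
        mean = statistics.mean(xs)
        num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(len(xs) - 1))
        den = sum((x - mean) ** 2 for x in xs)
        return num / den

    print(f"lag-1 autocorrelation: {lag1_autocorrelation(steady):.2f}")
    # Values near zero are consistent with i.i.d. measurements; strong positive
    # correlation suggests more (or higher-level) repetitions are needed.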

Statistically rigorous Java performance evaluation

One of the papers that inspired Tomas and Richard's rigorous benchmarking work. It proposes a much simpler strategy: checking whether confidence intervals overlap -- which is statistically pretty dubious, but common in other disciplines...

to benchmarking confidence java performance research statistics ... on 26 March 2014
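
The overlap heuristic itself is easy to sketch -- with the usual caveat that non-overlap implies a difference, but overlap doesn't imply there isn't one. The timings and the hard-coded t value for samples of 10 are illustrative.

    # Checking whether two t-based confidence intervals overlap (illustrative).
    import statistics

    a = [101.2, 99.8, 100.5, 102.1, 98.9, 100.7, 101.5, 99.4, 100.9, 100.2]
    b = [96.1, 97.3, 95.8, 96.9, 97.0, 96.4, 95.5, 96.8, 97.2, 96.0]

    def mean_ci(xs, t_crit=2.262):      # t at 0.975 for 9 degrees of freedom
        mean = statistics.mean(xs)
        half_width = t_crit * statistics.stdev(xs) / len(xs) ** 0.5
        return mean - half_width, mean + half_width

    (a_lo, a_hi), (b_lo, b_hi) = mean_ci(a), mean_ci(b)
    overlap = a_lo <= b_hi and b_lo <= a_hi
    print(f"A: [{a_lo:.2f}, {a_hi:.2f}]  B: [{b_lo:.2f}, {b_hi:.2f}]  overlap: {overlap}")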

Producing wrong data without doing anything obviously wrong!

Lots of examples of how environmental factors (e.g. environment variable size, room temperature, link order, ASLR...) can affect experimental results, to the tune of 20% or more. Basically: why pretty much any benchmark you've seen in a paper where the effect size isn't huge is probably nonsense.

to benchmarking compiler performance reproducibility research statistics ... on 26 March 2014
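
One of those confounds (UNIX environment size) is easy to poke at yourself; the benchmark binary and the padding variable below are hypothetical placeholders, not the paper's experimental setup.

    # Varying environment size around a fixed workload (placeholder command).
    import os
    import subprocess
    import time

    def timed_run(cmd, padding_bytes):
        # A larger environment shifts the initial stack layout of the child.
        env = dict(os.environ, PADDING="x" * padding_bytes)
        start = time.perf_counter()
        subprocess.run(cmd, env=env, check=True, stdout=subprocess.DEVNULL)
        return time.perf_counter() - start

    for pad in (0, 1024, 4096, 16384):
        print(f"env padding {pad:5d} bytes: {timed_run(['./benchmark'], pad):.3f}s")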
