36-350, Statistical Computing, Fall 2013
Programming course for people who know some statistics already.
to maths programming r statistics teaching ... on 03 October 2014
How Not To Run An A/B Test
"Decide on a sample size in advance and wait until the experiment is over before you start believing the “chance of beating original” figures that the A/B testing software gives you."
to ag0803 honours significance statistics testing ... on 24 August 2014
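For the record, a rough sketch of the kind of up-front sample-size calculation the quote is recommending, using the standard normal-approximation power formula (the 5% baseline and one-point lift are made-up figures):

    # Rough fixed-horizon sample size for an A/B test on conversion rates,
    # using the standard normal-approximation power calculation.
    # The 5% baseline and one-point lift below are made-up example figures.
    import math
    from scipy.stats import norm

    def samples_per_arm(p_a, p_b, alpha=0.05, power=0.8):
        """Visitors needed in each arm to detect p_a -> p_b (two-sided test)."""
        z_alpha = norm.ppf(1 - alpha / 2)
        z_power = norm.ppf(power)
        variance = p_a * (1 - p_a) + p_b * (1 - p_b)
        return int(math.ceil((z_alpha + z_power) ** 2 * variance / (p_a - p_b) ** 2))

    # e.g. baseline 5% conversion, hoping to detect a rise to 6%
    print(samples_per_arm(0.05, 0.06))   # roughly 8000 visitors per arm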
Kenneth W. Regan's Chess Page
Mostly about detecting cheating in chess through extremely crafty statistical methods. Fascinating stuff -- although you probably need to know more about chess than I do to really appreciate it.
to cheating chess competition games statistics ... on 24 August 2014
StatsModels: Statistics in Python — statsmodels documentation
Fancy statistics package for Python. (Not quite as fancy as R, but then again you also don't have to write in R to use it...)
to library python statistics ... on 28 April 2014
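A minimal taste of what using it looks like -- an ordinary least squares fit on invented data, with the R-style summary table:

    # Minimal statsmodels example: ordinary least squares on made-up data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.RandomState(0)
    x = rng.uniform(0, 10, 100)
    y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=100)

    X = sm.add_constant(x)          # add an intercept column
    model = sm.OLS(y, X).fit()
    print(model.summary())          # R-like table of coefficients, p-values, R^2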
With lots of handy guides for scientific data processing with Python. A good starting point.
to analysis data matplotlib plotting python science scipy statistics ... on 28 April 2014
matplotlib: python plotting — Matplotlib 1.3.1 documentation
Having had a play with this, I can see why everyone's so enthusiastic about it. It even has an XKCD mode.
to data graph plotting python software statistics ... on 28 April 2014
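A tiny example, including said XKCD mode (the sine curve is just a placeholder):

    # Tiny matplotlib example, using the XKCD mode mentioned above.
    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0, 2 * np.pi, 200)
    with plt.xkcd():                     # hand-drawn "XKCD" styling
        plt.plot(x, np.sin(x), label="sin(x)")
        plt.xlabel("x")
        plt.ylabel("sin(x)")
        plt.legend()
        plt.savefig("xkcd-sine.png")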
PyTables - Getting the most *out* of your data
"PyTables is a package for managing hierarchical datasets and designed to efficiently and easily cope with extremely large amounts of data." Might be overkill for my temperature sensors!
to data database python statistics ... on 28 April 2014
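For scale, a minimal sketch of what logging those temperature readings into an HDF5 table might look like (the column names are invented for illustration):

    # Minimal PyTables sketch: an HDF5 table of (timestamp, sensor, temperature)
    # readings. Column names here are invented for illustration.
    import tables

    class Reading(tables.IsDescription):
        timestamp = tables.Time64Col()
        sensor = tables.UInt8Col()
        temperature = tables.Float32Col()

    h5 = tables.open_file("readings.h5", mode="w")
    table = h5.create_table("/", "readings", Reading, "Temperature log")

    row = table.row
    row["timestamp"] = 1398671100.0   # made-up reading
    row["sensor"] = 1
    row["temperature"] = 19.5
    row.append()
    table.flush()

    # Queries can then be run in-kernel, e.g.:
    warm = [r["timestamp"] for r in table.where("temperature > 19.0")]
    h5.close()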
STABILIZER: statistically sound performance evaluation
Neat trick: this uses some LLVM instrumentation to shuffle memory layout around in a program while it's running, to randomise the effects of layout on performance. As a result of the central limit theorem, this tends to normalise the distribution of timing errors too (provided your program runs long enough to have been thoroughly shuffled).
to benchmarking compiler llvm performance research statistics timing ... on 01 April 2014
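A toy simulation of the central-limit-theorem argument -- mine, not from the paper: if each run's total time is the sum of many independent layout-induced delays, the totals come out roughly normal even when the individual delays are badly skewed:

    # Toy illustration (not from the STABILIZER paper) of the CLT argument:
    # a run's total time is modelled as the sum of many independent
    # layout-induced delays, so the total is close to normally distributed
    # even though each individual delay here is drawn from a skewed
    # (exponential) distribution.
    import numpy as np
    from scipy.stats import shapiro

    rng = np.random.RandomState(42)
    n_runs, n_shuffles = 200, 1000
    per_shuffle_delay = rng.exponential(scale=1.0, size=(n_runs, n_shuffles))
    run_times = per_shuffle_delay.sum(axis=1)

    print(shapiro(run_times))   # typically a large p-value: no evidence against normality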
Sixteen is not magic: Comment on Friston (2012) | [citation needed]
Review of "Ten ironic rules for non-statistical reviewers". Read the original paper first, since it's got some good points -- particularly on exactly what the limitations on normality are, and why you need to be careful about very large studies -- but it probably overstates its case a bit, as this review suggests.
to hypothesis normality research statistics testing ... on 01 April 2014
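A quick illustration of the very-large-study point (mine, not from either paper): with a big enough sample, a practically negligible difference still gets a tiny p-value:

    # Quick illustration (mine, not from either paper) of the very-large-study
    # point: a practically negligible difference in means still gives a tiny
    # p-value once the sample is huge.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.RandomState(1)
    n = 1000000
    a = rng.normal(loc=0.00, scale=1.0, size=n)
    b = rng.normal(loc=0.01, scale=1.0, size=n)   # effect size d = 0.01

    t, p = ttest_ind(a, b)
    print(t, p)   # p is far below 0.05 despite the trivial effect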
Welcome to the Evaluate Collaboratory! | Evaluate Collaboratory
Tomas and Richard are involved in this project on empirical measurement in CS. Their position paper would be sensible reading for students: it explains some of the common pitfalls of performance measurement.
to ag0803 benchmarking cs empirical performance research statistics ... on 26 March 2014
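A sketch of one of the basic points such guidance tends to make (mine, not taken from their position paper): repeat the measurement and report the spread, not a single number:

    # Sketch of one basic point about performance measurement (mine, not taken
    # from the position paper): time a benchmark repeatedly and report the mean
    # with a confidence interval rather than a single run.
    import time
    import numpy as np
    from scipy import stats

    def bench(f, repeats=30):
        times = []
        for _ in range(repeats):
            start = time.perf_counter()
            f()
            times.append(time.perf_counter() - start)
        times = np.array(times)
        mean = times.mean()
        # 95% confidence interval for the mean, via Student's t
        half_width = stats.t.ppf(0.975, repeats - 1) * times.std(ddof=1) / np.sqrt(repeats)
        return mean, half_width

    mean, hw = bench(lambda: sorted(np.random.rand(100000)))
    print("%.4f s +/- %.4f s (95%% CI)" % (mean, hw))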