So, I'm gathering data on an algorithm's runtime before and after I implemented some improvements.

I ran 4 different queries 100 times each, using both versions of the algorithm. There are some network requests involved, so I figured repeating the runs would smooth out the random time spikes introduced by network slowness.

Looking at the data now, the times are mostly uniform, but each data set has a few extreme outliers that are really skewing my averages. I want to drop the 5 largest and 5 smallest query times from each data set to remove this noise — is that a normal thing to do in statistics, or am I making stuff up to give myself better numbers?
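For what it's worth, here's a minimal sketch of what I mean (the data is simulated, not my actual measurements): drop the k smallest and k largest values, then average what's left — what I've seen called a "trimmed mean":

```python
import random
from statistics import mean

def trimmed_mean(data, k=5):
    """Drop the k smallest and k largest values, then average the rest."""
    s = sorted(data)
    return mean(s[k:-k])

random.seed(0)
# Simulated query times in ms: mostly ~100ms, plus a few network spikes.
times = [random.gauss(100, 5) for _ in range(90)] + \
        [random.uniform(300, 500) for _ in range(10)]

print(mean(times))          # inflated by the spikes
print(trimmed_mean(times))  # much closer to the typical ~100ms
```

(`scipy.stats.trim_mean` does the same thing by fraction rather than count, if you'd rather not roll it by hand.)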
