You can’t infer a whole lot about methodology just by looking at data. For what it’s worth, your methods look very good and probably can’t be improved on much.
However, you can infer some things about uncontrolled variables by looking at data. One of the striking features of your data sets is the high standard deviation, or high variability if you prefer to call it that. As a general rule, statisticians get uneasy when the standard deviation is more than half of the mean. Without going into too much detail, when the variability is that high, the data are often not normally distributed. Such data can often be normalized by transforming them and doing the statistics on the transformed values. For example, square root or logarithmic transforms are commonly used to make skewed data behave better. These techniques are not really necessary for your data, though.
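To make the rule of thumb concrete, here is a minimal sketch of the idea, using made-up numbers that merely stand in for real ignition-time measurements (the values and units are my assumption, not your data). It computes the standard-deviation-to-mean ratio (the coefficient of variation) and then the log transform mentioned above:

```python
import math
import statistics

# Hypothetical ignition-time measurements in milliseconds (illustrative only).
times = [42.0, 55.0, 130.0, 61.0, 210.0, 48.0, 95.0, 77.0, 160.0, 52.0]

mean = statistics.mean(times)
sd = statistics.stdev(times)
cv = sd / mean  # coefficient of variation: the "SD more than half the mean" rule

print(f"mean={mean:.1f}  sd={sd:.1f}  cv={cv:.2f}")  # here cv comes out above 0.5

# Log transform: do the statistics on log(x), then report back on the
# original scale via the geometric mean, the natural center for skewed data.
logs = [math.log(t) for t in times]
log_sd = statistics.stdev(logs)
geo_mean = math.exp(statistics.mean(logs))
print(f"log-scale sd={log_sd:.2f}  geometric mean={geo_mean:.1f} ms")
```

With data like these, the statistician would run t-tests or ANOVA on the log values, where the normality assumption holds much better, and quote geometric means in the write-up.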
High variability can be the result of an uncontrolled variable, or it can be an accurate representation of the feature you are measuring. I’ve thought about your data a lot, and I think the variability is caused by one (or both) of two things: variability in how the loose powder in the pan ignites and spreads the flame front, or variability in how many sparks land on the powder and where they land. If I had to guess, I’d guess the latter is the more likely cause.
I think you could test this hypothesis quite easily. If you used a uniform source of ignition, perhaps a nichrome wire, you could see how much the variability decreased. Any decrease in variability could be attributed to a spark effect and the remaining variability would be due to spread of the flame front.
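The comparison you’d make after that experiment is essentially a variance-ratio (F) test: spark-ignited variance over wire-ignited variance. Here is a sketch with invented numbers standing in for the two conditions (both data sets are my assumption, purely for illustration):

```python
import statistics

# Hypothetical lock-time data (ms): spark ignition vs. a uniform nichrome-wire
# igniter. These numbers are made up to show the calculation, nothing more.
spark = [42.0, 55.0, 130.0, 61.0, 210.0, 48.0, 95.0, 77.0]
wire = [58.0, 66.0, 71.0, 62.0, 75.0, 69.0, 64.0, 72.0]

var_spark = statistics.variance(spark)
var_wire = statistics.variance(wire)
ratio = var_spark / var_wire  # F statistic for the equal-variance test

print(f"spark variance={var_spark:.0f}  wire variance={var_wire:.0f}  ratio={ratio:.1f}")
# With 7 and 7 degrees of freedom, a ratio above about 3.8 (the 5% F critical
# value) says the sparks contribute real extra variability; whatever variance
# remains in the wire-ignited runs is the flame-front spread.
```

The nice property of this design is that the wire condition gives you the flame-front variance directly, so the spark contribution falls out by subtraction rather than by guesswork.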
At the risk of getting flamed myself, I think I could make a stronger argument for minimizing lock time by keeping the flint sharp and the frizzen hard than I could for preferring a coned vent over a straight hole. I’ll try to get some time in the next few days to crunch the numbers and show you why.