Re: LOS - Reply
- Posted by jiri babor <jbabor at PARADISE.NET.NZ> Aug 26, 1998
Robert Craig wrote:
> Michael's timing approach may not be as bad as it looks.
> By the time he gets to 1000 iterations, he may have a
> somewhat reasonable result (to one significant figure). He
> should ignore the first 999 values that he prints, since they are
> less accurate. In fact, he should only print the final result at the end
> of the loop after 1000 iterations.
>
> You are correct that most iterations will add 0 seconds,
> but every now and then a clock interrupt will occur during an
> iteration, and that iteration will be (unfairly) charged for
> .055 seconds (assuming default tick rate of 18.2/sec).
> When you average it out, it should be reasonably fair.
> If an iteration actually takes .055/100 seconds (very roughly
> what he reported), then about 1 out of every 100 iterations
> will be charged .055 and the other 99 will be charged 0.
> In 1000 iterations he would have about 990 0's plus
> 10 * 0.055 = .55, giving a result of .00055 per iteration,
> which is fair.

I am sorry, Rob, but it is *not* fair. What you said would be true if we had a sufficient number of consecutive (adjacent in time) samples or iterations. But that is clearly not the case here! The purpose of the whole exercise was to exclude the relatively large overheads of the screen prints and the randomizer. I hope you can see that the bigger the disparity between those relatively large overheads and the much shorter measured inner cycle, the more the final result is subject to (or submerged in) the random noise of the system. A good statistician (which I am not) would be able to tell you how many iterations you would need to get a really fair estimate, but the number would be *huge*: not just thousands, probably millions in this case, depending, of course, on the required accuracy.

jiri
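P.S. A rough sketch of what I mean, in Python rather than Euphoria. The 0.00055 s per-iteration figure and the 0.055 s tick come from Rob's arithmetic above; the extra assumption (mine, for illustration only) is that each short iteration is an *independent* sample, because it is separated from the next one by the large, unpredictable print/randomizer overheads, so whether a tick lands inside it is essentially a coin toss:

import random

TICK = 0.055          # clock resolution, seconds (18.2 ticks/sec)
TRUE_COST = 0.00055   # assumed "real" per-iteration time, seconds
P_HIT = TRUE_COST / TICK   # chance a tick falls inside one isolated iteration

def estimate(n):
    """Average measured time over n isolated (non-consecutive) iterations."""
    total = 0.0
    for _ in range(n):
        # Each iteration is charged a whole tick or nothing at all.
        if random.random() < P_HIT:
            total += TICK
    return total / n

random.seed(1)
for n in (1_000, 10_000, 100_000, 1_000_000):
    est = estimate(n)
    err = abs(est - TRUE_COST) / TRUE_COST * 100
    print(f"{n:>9} iterations: estimate = {est:.6f} s  (off by {err:.1f}%)")

Under these assumptions the relative error of the averaged estimate shrinks only like 1/sqrt(n): roughly 30% after a thousand isolated samples, and about 1% only somewhere around a million, which is why I say the number would be huge.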