1. LOS - Reply

Hi Einar,

I do not think it will get much faster than the nowall() function in
the attached example file, but I have been so badly wrong too often in
my life to bet on it. It combines scaling with a very fast integer
routine. If, for any reason, you want to eliminate the scaling, just
take out the first four statements with the floor() function, and
re-introduce the '/20' division into each of the 'if LEVEL[...'
statements in the body of the nowall() function. Enjoy. jiri


NOTE: This mail message has enclosures, 1 more mail message(s)
follow. The files are:
EINAR.EX
---


2. Re: LOS - Reply

Michael,

I am not going to comment on your nobisect() function for Einar, just
the timing method you used. Basically, you are kidding yourself if
you believe what it tells you. The trap is that, probably in your
effort to avoid the random number overheads for the parameters (quite
legitimate!), you resorted to timing *each* pass separately. But each
pass, even on very slow machines, would take much less than a single
tick (18.2 ticks/s ==> about 0.055s per tick). Consequently, throughout
your test cycle you are accumulating zeros, and at the end you divide
the bunch of zeros by a thousand for a very satisfying result. jiri
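[The effect jiri describes can be sketched in a few lines of Python
(the original code is Euphoria; the pass duration used here is a
hypothetical figure, chosen only for illustration):

```python
import random

TICK = 1 / 18.2        # PC clock resolution: ~0.055 s per tick
PASS_TIME = 0.00055    # hypothetical true duration of one pass

def clock(t):
    """A clock that only advances in whole ticks."""
    return (t // TICK) * TICK

random.seed(1)
zeros = 0
for _ in range(1000):
    start = random.uniform(0.0, 100.0)   # pass begins at an arbitrary moment
    elapsed = clock(start + PASS_TIME) - clock(start)
    if elapsed == 0:
        zeros += 1

print(zeros)  # almost every pass measures as exactly zero
```

A pass registers a nonzero time only when a tick boundary happens to
fall inside it, which for a pass this short is roughly a 1-in-100
event. -- ed.]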


3. Re: LOS - Reply

Jiri Babor writes:

> Michael,
> I am not going to comment on your nobisect() function for
> Einar, just the timing method you used. Basically, you are
> kidding yourself if you believe what it tells you. The trap is
> that, probably in your effort to avoid the random number
> overheads for the parameters (quite legitimate!), you resorted
> to timing *each* pass separately. But each pass, even on
> very slow machines, would take much less than a single tick
> (18.2 ticks/s ==> about 0.055s per tick). Consequently,
> throughout your test cycle you are accumulating zeros, and at
> the end you divide the bunch of zeros by a thousand for a very
> satisfying result.

Michael's timing approach may not be as bad as it looks.
By the time he gets to 1000 iterations, he may have a
somewhat reasonable result (to one significant figure). He
should ignore the first 999 values that he prints, since they are
less accurate. In fact, he should only print the final result at the end
of the loop after 1000 iterations.

You are correct that most iterations will add 0 seconds,
but every now and then a clock interrupt will occur during an
iteration, and that iteration will be (unfairly) charged for
.055 seconds (assuming default tick rate of 18.2/sec).
When you average it out, it should be reasonably fair.
If an iteration actually takes .055/100 seconds (very roughly
what he reported), then about 1 out of every 100 iterations
will be charged .055 and the other 99 will be charged 0.
In 1000 iterations he would have about 990 0's plus
10 * 0.055 = .55, giving a result of .00055 per iteration,
which is fair.

Clearly he needs to increase his iterations and
only print the final, averaged result. He might also increase his
tick_rate.

I think his timing approach is legitimate, since
he doesn't want to count the calls to rand(). He may already
be within an order of magnitude of the truth.

I've used this method in the past to measure
operations that take a very short time (less than a tick).
The chance of a tick occurring during an operation is proportional
to the time taken, so things should average out correctly
if you allow enough operations to take place.
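[Rob's averaging argument can be checked with a small simulation
(Python rather than Euphoria; the per-pass time is hypothetical, and
the random untimed gap between passes stands in for the rand() and
print overheads):

```python
import random

TICK = 1 / 18.2        # ~0.055 s charged whenever a clock interrupt lands
PASS_TIME = 0.00055    # hypothetical true duration of one pass

random.seed(7)
n = 100_000
charged = 0.0
phase = 0.0            # position within the current tick interval
for _ in range(n):
    # untimed overhead between passes randomizes the phase
    phase = (phase + random.uniform(0.0, TICK)) % TICK
    phase += PASS_TIME
    if phase >= TICK:          # an interrupt fired during this pass
        phase -= TICK
        charged += TICK        # this pass is charged a whole tick

average = charged / n
print(average)  # close to PASS_TIME, as Rob predicts
```

With enough passes the rare whole-tick charges average out to the true
per-pass time, exactly as described above. -- ed.]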

Regards,
     Rob Craig
     Rapid Deployment Software
     http://members.aol.com/FilesEu/


4. Re: LOS - Reply

Robert Craig wrote:

>Michael's timing approach may not be as bad as it looks.
>By the time he gets to 1000 iterations, he may have a
>somewhat reasonable result (to one significant figure). He
>should ignore the first 999 values that he prints, since they are
>less accurate. In fact, he should only print the final result at the end
>of the loop after 1000 iterations.
>
>You are correct that most iterations will add 0 seconds,
>but every now and then a clock interrupt will occur during an
>iteration, and that iteration will be (unfairly) charged for
>.055 seconds (assuming default tick rate of 18.2/sec).
>When you average it out, it should be reasonably fair.
>If an iteration actually takes .055/100 seconds (very roughly
>what he reported), then about 1 out of every 100 iterations
>will be charged .055 and the other 99 will be charged 0.
>In 1000 iterations he would have about 990 0's plus
>10 * 0.055 = .55, giving a result of .00055 per iteration,
>which is fair.

I am sorry, Rob, but it is *not* fair. What you said would be true if
we had a sufficient number of consecutive (adjacent in time) samples
or iterations. But this is clearly not the case! The purpose of the
whole exercise was to exclude the potentially large (relatively
speaking) overheads of screen prints and the randomizer. I hope you
can see that the bigger the disparity between the relatively large
overheads and the much shorter measured inner cycle, the more the
final result is subject to (or submerged in) the random noise of the
system. A good statistician (which I am not) would be able to tell you
how many iterations you would need to get a really fair estimate, but
the number would be *huge*: not just thousands, probably millions in
this case, depending, of course, on the required accuracy. jiri
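[The size of jiri's "huge" number can be estimated with a simple
binomial model: if each independent pass has probability
p = PASS_TIME/TICK of being charged a tick, the relative standard
error of the averaged result is sqrt((1-p)/(p*n)). A sketch, using the
same hypothetical figures as above:

```python
import math

TICK = 1 / 18.2         # ~0.055 s per tick
PASS_TIME = 0.00055     # hypothetical true duration of one pass
p = PASS_TIME / TICK    # chance a given pass is charged a tick, ~0.01

def passes_needed(rel_error):
    """Passes needed for the standard error of the averaged timing
    to fall to the given relative error (independent-pass model)."""
    return math.ceil((1 - p) / (p * rel_error ** 2))

print(passes_needed(0.10))  # ~10% accuracy: around ten thousand passes
print(passes_needed(0.01))  # ~1% accuracy: around a million passes
```

Under this model a thousand passes gives only about a 30% standard
error, and one-percent accuracy indeed takes on the order of a
million passes. -- ed.]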

