Re: which is faster?


Chris Bensler wrote:
> 
> CChris wrote:
> > 
> > Here is a benchmark to prove and disprove stuff....
> 
> Try adding this bit of test code to verify that the variations are accurate.
> 
> }}}
> <eucode>
> sequence test
> test = {
>   st_dev0(s[1],0)
>  ,st_dev1(s[1],0)
>  ,st_dev2(s[1],0)
>  ,st_dev3(s[1],0)
>  ,st_dev4(s[1],0)
>  ,st_dev5(s[1],0)
> }
> test = (test = test[1])
> test[1] = 0
> for i = 2 to length(test) do
>   if test[i] = 0 then
>     test[1] += 1
>     printf(1,"st_dev%d() != st_dev0()\n",{i-1})
>   end if
> end for
> if test[1] then
>   machine_proc(26,{})
>   abort(1)
> end if
> </eucode>
> {{{

> 
> Chris Bensler
> Code is Alchemy

I had tried this, and all the values differ, though by very tiny amounts. This
is roundoff error accumulating.

Adding N numbers, each with some fuzz in a ]-eps..+eps[ range, gives a total
error whose magnitude is on the order of eps*sqrt(N), because the individual
errors partly cancel like a random walk. With 53 bits of precision in floating
point numbers and N=10000, you can easily expect the last few bits to differ
when you change the method of computation.
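
To see the effect without the st_dev*() routines, here is a minimal sketch (the
rand()-based data and the two summation orders are my own choices, purely for
illustration): summing the same values in two different orders usually already
disagrees in the last bits.
}}}
<eucode>
sequence s
atom fwd, bwd

s = repeat(0, 10000)
for i = 1 to length(s) do
    s[i] = rand(1000000) / 7    -- arbitrary values with inexact binary fractions
end for

fwd = 0
for i = 1 to length(s) do       -- sum in increasing index order
    fwd += s[i]
end for

bwd = 0
for i = length(s) to 1 by -1 do -- same values, reverse order
    bwd += s[i]
end for

-- the two sums agree to roughly 15 digits, but the last bits usually differ
printf(1, "difference: %e\n", fwd - bwd)
</eucode>
{{{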

But there is hardly any point in computing a standard deviation estimate to
many significant digits, is there?

You can compute a mean with more accuracy by doing
}}}
<eucode>
m += (s[i] - m)/i
</eucode>
{{{
starting with m = s[1]. The roundoff error builds up way more slowly. But
because of the division performed at every step, the algorithm is way slower
too.
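
For reference, here is that update wrapped into a complete routine (a sketch:
the running_mean() name and loop scaffolding are mine; only the update line
comes from above).
}}}
<eucode>
function running_mean(sequence s)
    atom m
    m = s[1]                    -- start from the first element
    for i = 2 to length(s) do
        m += (s[i] - m) / i     -- m is now the mean of s[1..i]
    end for
    return m
end function
</eucode>
{{{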

CChris }}}
