Re: Reducing fractions


Matt Lewis wrote:

<snip>

> As a first step optimization, you could ignore when the number is less
> than the square root of whatever your precision limit is.  Assuming you're
> using doubles, then you can go up to 51 or 52 bits, which give you up to
> 2,251,799,813,685,248 or 4,503,599,627,370,496 as your max integers before
> you start losing precision.  That means that you can safely ignore numbers
> up to either 47,453,132 or 67,108,864 (check the spec for a 64-bit floating
> point number--you'll get 1 additional bit over whatever the mantissa size
> is).

I see. Should have thought of that myself ...
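For the record, here's a quick sketch of that threshold idea (in Python, just to check the arithmetic -- the exact cutoff depends on which mantissa size you assume):

```python
import math

# Assuming the 52-bit limit: every integer up to 2**52 fits exactly
# in a double.
LIMIT = 2 ** 52                  # 4,503,599,627,370,496

# Any product of two integers no larger than sqrt(LIMIT) is still
# exact, so numbers up to this threshold can be skipped safely.
THRESHOLD = math.isqrt(LIMIT)    # 67,108,864  (= 2**26)

def is_exact_in_double(n):
    # Sanity check: converting to float and back loses nothing.
    return int(float(n)) == n

print(THRESHOLD)                      # 67108864
print(is_exact_in_double(LIMIT))      # True
print(is_exact_in_double(2 ** 53 + 1))  # False -- past the exact range
```

With the 51-bit assumption the same computation gives the 47,453,132 cutoff instead.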

> Of course, you may well find that one large factorization takes longer
> than a few small factorizations, so you may have to be more clever in
> deciding when factoring is worth it.

I noticed that Ricardo's "genfunc.e" library contains PrimeFactors(),
a basic function for factorization. I'll play with it a little.
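In the meantime, here's a rough sketch (Python, plain trial division -- genfunc.e's PrimeFactors() may well have a different interface) of how a factorization routine can be used to reduce a fraction:

```python
def prime_factors(n):
    # Simple trial division -- fine for small numbers, slow for large
    # ones, which is exactly where the cleverer algorithms come in.
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)   # whatever is left is prime
    return factors

def reduce_fraction(num, den):
    # Cancel each common prime factor of numerator and denominator.
    for p in prime_factors(num):
        if den % p == 0:
            num //= p
            den //= p
    return num, den

print(prime_factors(360))        # [2, 2, 2, 3, 3, 5]
print(reduce_fraction(360, 84))  # (30, 7)
```

Of course, a plain Euclidean gcd reduces a fraction without any factoring at all; factorization only pays off when you need the factors for something else as well.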

> Beyond that, you'll probably need
> to investigate faster factoring methods.  Algorithms you could investigate
> include:
>
> Quadratic Sieve
> General Number Field Sieve
> Lenstra Elliptic Curve Factorization
>
> Google will give you lots of results for these, but implementing them
> seems far from simple (and will likely include needing a good matrix
> library).

This seems to be a bit over my head, so I'll probably have to wait
until someone provides an Eu library for it. blink
Well, currently I don't need it for a real application.

Thanks,
   Juergen
