RE: Defining long constants - do-able but not exactly elegant


On 14 Mar 2002, at 8:07, bensler at mail.com wrote:

> 
> What are you using to read in the file data?
> get(), gets() or getc()?

I tried all 3.

> Can you give an example of what is taking so long?

dictionary = repeat({},200) -- we are going to organize the words by size
time1 = time()
for ocindex = 1 to length(ocypher) do
  if length(ocypher[ocindex]) then
    junk_i = length(ocypher[ocindex])
    if not equal(ocypher[ocindex],{}) and equal(dictionary[junk_i],{}) then
      dfilename = "D:\\Gertie\\decypher\\dfile"&sprintf("%d",junk_i)&".txt"
      puts(1,sprintf("%d",ocindex)&"  "&dfilename&"\n")
      readfile = open(dfilename,"r")
      if not equal(readfile,-1) then
        readline = gets(readfile)
        while not equal(readline,-1) do
          dictionary[junk_i] = append(dictionary[junk_i],
                                      readline[1..length(readline)-1])
          readline = gets(readfile)
        end while
        close(readfile)
      end if
    end if
  end if
end for
time2 = time()
puts(1,"dfile load time: "&sprintf("%d",time2-time1)&" seconds\n")
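
(For anyone reading along: the loop above buckets dictionary words into per-length slots, one pre-split dfileN.txt per word length. A minimal Python sketch of the same bucketing scheme follows; the file-name pattern and the 200-slot limit are taken from the Euphoria code, everything else is an assumed stand-in, not Kat's actual program. One difference worth noting: it reads each file in one call and splits, rather than a gets()-per-line loop.)

```python
import time

def load_dictionary(max_len=200, name_fmt="dfile%d.txt"):
    """Bucket dictionary words by length, one pre-split file per length.

    name_fmt is a hypothetical stand-in for the dfileN.txt paths above.
    """
    dictionary = [[] for _ in range(max_len)]  # a fresh list per slot
    for size in range(1, max_len):
        try:
            with open(name_fmt % size) as f:
                # Read the whole file at once, then split on whitespace:
                # far fewer I/O calls than fetching one line at a time.
                dictionary[size] = f.read().split()
        except FileNotFoundError:
            pass  # no words of this length; leave the slot empty
    return dictionary

t0 = time.time()
words = load_dictionary()
print("dfile load time: %d seconds" % (time.time() - t0))
```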

> I used to play with Eu on my 486DX 100mhz, and I never noticed any 
> significant file loading times. Granted, I never tried opening any 50Mb 
> files.

In this case, i pre-munged the dictionary into separate files by word size and 
used gets() to read each word. I had made one file using printf(), and one 
with get(), which took longer and made a *much* bigger file. The getc() 
version ran the slowest.
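(The ranking reported here, gets() per line beating getc() per character, matches the usual per-call-overhead story. A small Python sketch of the two read styles, purely illustrative and not from any of the programs discussed in this thread:)

```python
def read_words_per_char(path):
    """getc()-style: one tiny read call per character, words built by hand."""
    words, cur = [], []
    with open(path) as f:
        while True:
            c = f.read(1)          # one call per character -- the slow way
            if c == "":
                break              # end of file
            if c == "\n":
                if cur:
                    words.append("".join(cur))
                cur = []
            else:
                cur.append(c)
    if cur:                        # last word if file lacks a final newline
        words.append("".join(cur))
    return words

def read_words_per_line(path):
    """gets()-style: one call per line, the faster of the two."""
    with open(path) as f:
        return [line.rstrip("\n") for line in f if line.strip()]
```

Both return the same word list; the difference is only in how many I/O calls they make per word.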

Kat

> 
> 
> Chris
> 
> 
> Kat wrote:
> > On 14 Mar 2002, at 0:55, Andy Serpa wrote:
> > 
> > > 
> > > Kat wrote:
> > > > 
> > > > What's wrong with simply gets()ing a text file of any length you want,
> > > > edited any way you want, and displayed any way you want, in any editor
> > > > you want, rather than cluttering up the code files?
> > > > 
> > > 
> > > Yeah, that's what I was gonna say.  The rules for #3 don't preclude you
> > > from loading in any file you want (except machine code, or something
> > > like that). The program I'm working on loads in a "pre-hashed"
> > > dictionary, along with some other stuff, instead of wasting time on
> > > that every time it starts....
> > 
> > I found loading the pre-munged data took way longer than grabbing the
> > plain dictionary. Near as i can tell, it's the stuffing into 3-deep
> > nested sequences that is eating the time. Maybe i should try making them
> > a set length in each nest with repeat() from the start. But munging
> > dictionaries took forever too. Anyhoo, like i said, i am out of the
> > contest; i can write the code, but it won't execute in time. It's so
> > weird, because the grammar parser runs faster than loading the
> > dictionary for this contest.
> > 
> > What bugs me a little about real-world apps using these crypto-solvers
> > is that anyone with a reason to use them would have more than one
> > paragraph to solve, and loading and dictionary-munging time would be
> > inconsequential, because they'd be turned on and left running, looking
> > for input. (I leave one mining program running sometimes; it goes active
> > when the other programs feed it an url.) But then, these guys have load
> > time down to sub-second times, so, errr, nevermind.
> > 
> > Kat
