program ran out of memory!!


hi all,
I would like to know if there are any techniques in programming that can
bypass this "your program ran out of memory" error. I'm parsing a
688,880-byte file using the parse() in the token library that I used
in my previous email. As a test I have checked three of the functions
I used that lead to this message, and I really don't see how they
produce "your program ran out of memory".
Here they are:
-----------------------This is my version of deparse. It "untokenizes" a
-----------------------sequence based on the delimiter c.
-----------------------It cannot be my routine; the original deparse is
-----------------------worse.
-----------------------eg data = {"hi","all","cool"}
-----------------------    void = deparse(data,32) -- delimiter 32 is for
-----------------------    space
-----------------------    -- void is "hi all cool"
global function deparse_new(sequence list, integer c)
object t
sequence s
   t = time()
   if length(list) then
      s = list[1]
      for i = 2 to length(list) do
         s = s & c
         s = s & list[i]
      end for
      printf(1,"%.32f my version of deparse\n",time()-t)
      return s
   end if

   return ""
end function
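For what it's worth, the usual memory killer in a loop like the one above is the repeated `s = s & c` and `s = s & list[i]`: each concatenation copies the whole result built so far, so the interpreter churns through far more memory than the final string needs. One style that avoids this is to compute the final size once and fill a preallocated buffer. A minimal sketch of that idea, in Python for illustration only (the function name is my own, not from the token library):

```python
def deparse_prealloc(parts, c):
    """Join the byte strings in `parts` with the single delimiter byte `c`.

    The total output size is computed up front, so the result buffer is
    allocated exactly once instead of being recopied on every append.
    """
    if not parts:
        return b""
    total = sum(len(p) for p in parts) + len(parts) - 1
    buf = bytearray(total)
    pos = 0
    for i, p in enumerate(parts):
        if i > 0:
            buf[pos] = c            # write the delimiter in place
            pos += 1
        buf[pos:pos + len(p)] = p   # copy this token once, never again
        pos += len(p)
    return bytes(buf)

print(deparse_prealloc([b"hi", b"all", b"cool"], 32))  # b'hi all cool'
```

The same two-pass, count-then-fill style should carry over to a sequence loop: size the result with repeat() first, then write into it by index instead of growing it with &.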
------------------------
-----------------------parse() by Andy Serpa. It "tokenizes" a string
-----------------------based on the delimiter c.
-----------------------eg data = " hi all cool"
-----------------------    void = parse(data,32) -- void is
-----------------------    {"hi","all","cool"}
-- Andy Serpa's turbo version, 3x faster
global function parse(sequence s, integer c)
integer slen, spt, flag
sequence parsed

	parsed = {}
	slen = length(s)

	spt = 1
	flag = 0
	for i = 1 to slen do
		if s[i] = c then
			if flag = 1 then
				parsed = append(parsed,s[spt..i-1])
				flag = 0
				spt = i+1
			else
				spt += 1
			end if
		else
			flag = 1
		end if
	end for
	if flag = 1 then
		parsed = append(parsed,s[spt..slen])
	end if
	return parsed
end function
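parse() itself is a single linear pass, so the tokenizing is probably not the culprit; the memory goes into holding the whole 688,880-byte file plus the complete token list at once. A style that sidesteps that, sketched in Python under my own naming (an assumption, not the token library's API), is to stream the file and tokenize one line at a time, so peak memory stays near one line rather than the whole file:

```python
def parse(s, c):
    # same contract as the Euphoria parse(): split on chr(c),
    # skipping the empty tokens produced by runs of delimiters
    return [tok for tok in s.split(chr(c)) if tok]

def parse_file(path, c):
    # generator: yields the tokens of one line at a time, so only a
    # single line is ever held in memory, never the full file
    with open(path) as f:
        for line in f:
            yield parse(line.rstrip("\n"), c)
```

If the data has no natural line breaks, the same idea works with fixed-size chunks, carrying any partial token over to the next chunk.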
-------------------------This function checks a sequence for "\n\n" and
-------------------------replaces it with "\n".
-------------------------'' is ASCII 15.
-------------------------for example data = "hi\nall\n\nhow\nare\nyou"
-------------------------void = Unique_LF(data) -- void is
-------------------------"hi\nall\nhow\nare\nyou"
-------------------------
global function Unique_LF(sequence data)
sequence res
   res = {}
   for i = 1 to length(data) do
      -- only drop a '\n' when another '\n' follows; guard i+1 so the
      -- last character never indexes past the end of data
      if data[i] = '\n' and i < length(data) and data[i+1] = '\n' then
         -- skip it: the linefeed that follows will be kept, so any
         -- run of linefeeds collapses to a single one
      else
         res = res & data[i]
      end if
   end for
   return res
end function
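On the same theme, a single-pass collapse can key off "was the previous character a linefeed" instead of peeking ahead, which handles runs of two, three, or more linefeeds alike. A hedged sketch in Python (the function name is my own): characters are collected in a list and joined once at the end, so there is no repeated whole-string copying.

```python
def unique_lf(data):
    out = []
    prev_was_lf = False
    for ch in data:
        if ch == "\n" and prev_was_lf:
            continue                 # drop repeated linefeeds
        out.append(ch)               # keep everything else, and the first '\n'
        prev_was_lf = (ch == "\n")
    return "".join(out)

print(unique_lf("hi\nall\n\nhow\nare\nyou"))  # hi\nall\nhow\nare\nyou
```

In a sequence version the equivalent trick is one integer flag plus writing into a preallocated result by index, truncating to the final length after the loop.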
--************************************************************


The above routines work for data less than 688,880 bytes. They work on
computers with plenty of memory, but I prefer testing my programs on a
486 with 12MB of RAM. Are there any programming styles that do the above
things but don't chew up memory? While we are at it, can anyone find
a faster way to write the Unique_LF function? I wrote a super-fast one,
but it fails on detecting three linefeeds in a row.

thanx Jordah Ferguson




--Processed and saved using: Cirrus Mail ver 0.2a
