1. Faster parse for strtok.e

Here is a simpler & faster (by almost 3x) version of the tokenizing 
routine found in STRTOK.E.  I know Kat uses this a lot -- this one 
should speed things up a bit.  It maintains the same behavior of 
skipping over multiple delimiters, delimiters at the start of the 
string, etc. (i.e. it doesn't return any empty elements unless the 
subject string is empty or all delimiters).  It is about twice as fast 
as the similar routine in token.e.

-- Andy Serpa


-- Split sequence s on the delimiter c.  Runs of delimiters, and delimiters
-- at the start or end of s, produce no empty elements.
global function parse(sequence s, integer c)
integer slen, spt, flag
sequence parsed

	parsed = {}
	slen = length(s)

	spt = 1		-- start index of the current token
	flag = 0	-- 1 while inside a token, 0 while between delimiters
	for i = 1 to slen do
		if s[i] = c then
			if flag = 1 then
				-- close off the current token
				parsed = append(parsed,s[spt..i-1])
				flag = 0
				spt = i+1
			else
				spt += 1
			end if
		else
			flag = 1
		end if
	end for
	if flag = 1 then
		-- final token runs to the end of the string
		parsed = append(parsed,s[spt..slen])
	end if
	return parsed
end function
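
For illustration (a hypothetical call, not part of the original post), 
splitting on ',' shows the behavior described above -- leading, trailing, 
and repeated delimiters produce no empty elements:

sequence words
words = parse(",,one,,two,three,", ',')
-- words is now {"one","two","three"}
? words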


2. Re: Faster parse for strtok.e

On 22 Mar 2002, at 3:06, Andy Serpa wrote:

> 
> 
> Here is a simpler & faster (by almost 3x) version of the tokenizing 
> routine found in STRTOK.E.  I know Kat uses this a lot -- this one 
> should speed things up a bit.  It maintains the same behavior of 
> skipping over multiple delimiters, delimiters at the start of the 
> string, etc. (i.e. it doesn't return any empty elements unless the 
> subject string is empty or all delimiters).  It is about twice as fast 
> as the similar routine in token.e.

Thanks, I'll give it a spin. The find_all() is so easy to drop into code 
that I keep using it. I have been going through strtok() lately, not so much 
to speed it up as to add functionality without breaking backwards 
compatibility. Any suggestions for strtok.e v2?

I am interested in what you are doing in AI stuff. Wanna tell?

Kat
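
For illustration (not from the original thread), a position-based split built 
on find_all() might look roughly like this.  It is only a sketch: it assumes 
find_all(x, s, start) returns every index of x in s (the actual strtok.e 
signature may differ), and it mimics the no-empty-elements behavior of 
parse() above.

include strtok.e	-- assumed to provide find_all()

function split_on(sequence s, integer c)
sequence hits, out
integer spt
	hits = find_all(c, s, 1)			-- assumed: every index of c in s
	hits = append(hits, length(s)+1)	-- treat end-of-string as a delimiter
	out = {}
	spt = 1
	for i = 1 to length(hits) do
		if hits[i] > spt then			-- skip empty pieces
			out = append(out, s[spt..hits[i]-1])
		end if
		spt = hits[i] + 1
	end for
	return out
end function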

