1. Faster parse for strtok.e
- Posted by Andy Serpa <renegade at earthling.net> Mar 21, 2002
- 472 views
Here is a simpler & faster (by almost 3x) version of the tokenizing routine found in STRTOK.E. I know Kat uses this a lot -- this one should speed things up a bit. It maintains the same behavior of skipping over multiple delimiters, delimiters at the start of the string, etc. (i.e. it doesn't return any empty elements unless the subject string is empty or all delimiters). It is about twice as fast as the similar routine in token.e.

-- Andy Serpa

global function parse(sequence s, integer c)
    integer slen, spt, flag
    sequence parsed
    parsed = {}
    slen = length(s)
    spt = 1         -- start index of the current token
    flag = 0        -- 1 while we are inside a token
    for i = 1 to slen do
        if s[i] = c then
            if flag = 1 then
                -- end of a token: save it and reset
                parsed = append(parsed, s[spt..i-1])
                flag = 0
                spt = i+1
            else
                -- consecutive delimiter: just advance the start marker
                spt += 1
            end if
        else
            flag = 1
        end if
    end for
    if flag = 1 then
        -- string did not end with a delimiter: save the last token
        parsed = append(parsed, s[spt..slen])
    end if
    return parsed
end function
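[Editor's note: a quick usage sketch; the test string and expected result below are an illustration based on the behavior described above, not part of the original post.]

sequence words
words = parse("  the quick  brown fox ", ' ')
-- leading, trailing, and repeated spaces produce no empty elements,
-- so this should yield {"the", "quick", "brown", "fox"}
? words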
2. Re: Faster parse for strtok.e
- Posted by Kat <gertie at PELL.NET> Mar 21, 2002
- 445 views
On 22 Mar 2002, at 3:06, Andy Serpa wrote:

> Here is a simpler & faster (by almost 3x) version of the tokenizing
> routine found in STRTOK.E. I know Kat uses this a lot -- this one
> should speed things up a bit. It maintains the same behavior of
> skipping over multiple delimiters, delimiters at the start of the
> string, etc. (i.e. it doesn't return any empty elements unless the
> subject string is empty or all delimiters.) It is about twice as fast
> as the similar routine in token.e.

Thanks, I'll give it a spin. The find_all() is so easy to drop into code, I keep on using it.

I have been going through strtok() lately, not so much to speed it up as to add functionality, without breaking backwards compatibility. Any suggestions for strtok.e v2?

I am interested in what you are doing in AI stuff. Wanna tell?

Kat