John Carter wrote:
Slightly unfair of me to comment since I wrote it…
But yes, I have used it to drive very many (hundreds of) real and virtual
devices simultaneously. So yes, it is robust and does scale, which I
found the standard one didn’t.
Great!
That’s not quite the absolute latest version I have, but it should be good
to try. (The API hasn’t changed, except that I now throw a TimeoutError
instead of an EOFError on timeout.)
Ok, a few questions then:
Is there some historical reason why the result of a match is “buffer”, $1,
$2, $3…? To me, it would make more sense to do
matchdata = regexp.match(buffer)
return matchdata.to_a
I don’t see why returning the whole buffer is useful, and MatchData#to_a
also copes with really complex regexps that have more than 9 capture groups.
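For instance (a sketch with an invented method name, not either library’s actual API):

def expect_match(buffer, regexp)
  matchdata = regexp.match(buffer)
  matchdata && matchdata.to_a   # nil if nothing matched
end

m = expect_match("login: guest password: secret", /login: (\w+) password: (\w+)/)
# m => ["login: guest password: secret", "guest", "secret"]

Callers then get every capture, however many there are, and element 0 is
still the full match for anyone who wants it.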
Also, the other efficiency thing that concerns me is that both versions
attempt to match against the entire buffer. Most regexps could in
principle match something huge, but in practice they are matched against
something less than, say, 80 chars long. I want to use an ‘expect’-type
library against a program that prints a running count of each byte of a
200k+ file as it is written. After that progress meter it prints “Done.”,
which is all I really want to match against. It would be more efficient if
there were an option to specify the maximum size of the buffer to match
against, throwing out anything older than that. Something like:
# Append the data to the match buffer, and shorten it if necessary
match_buff << read_buff
if match_buff.length > maxlength
  match_buff = match_buff[-maxlength..-1]
end
debug("match buff is now <<#{match_buff}>>")
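To make that concrete, here is a self-contained sketch (the class and
method names are mine, not from either library):

class BoundedBuffer
  def initialize(maxlength = 80)
    @maxlength = maxlength
    @match_buff = ""
  end

  # Keep only the last @maxlength characters of everything appended.
  def append(read_buff)
    @match_buff << read_buff
    if @match_buff.length > @maxlength
      @match_buff = @match_buff[-@maxlength..-1]
    end
    self
  end

  def match(regexp)
    regexp.match(@match_buff)
  end
end

buf = BoundedBuffer.new(80)
buf.append("writing byte 204800 of 204800\n" * 1000)
buf.append("Done.")
p buf.match(/Done\./)   # scans at most the last 80 characters

The regexp never has to scan more than maxlength characters, no matter
how much output the progress meter produces.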
RubyForge didn’t exist when I submitted it; I guess I should create a
RubyForge project for it. Except it is so small that it would be more
appropriate to include it as part of some other project.
It really isn’t all that small; I mean, look at most of the Ruby libs,
they’re much smaller. I think it’s worth adding to the stdlib.
Ben