From: James F. Hranicky [mailto:jfh@cise.ufl.edu]
I proposed this in my last solution, though it doesn't grab a block of
data first. I know the source for tail does this, but if we're counting
char by char anyway, how does first grabbing a 4k block (or whatever)
help?

Say you wanted to tail 100,000 lines before your tail -f.
Given 80 chars per line, that's 8,000,000 seeks and reads, vs roughly
2000 seeks and reads for reading in 4k chunks. The comparison for
c == '\n' gets done the same number of times in each version, so unless
one seek/read of a 4k chunk is somehow slower than reading the same data
with 4000 seeks/reads of one char each (low memory, perhaps), I'd think
the chunked version should be much faster.

Or am I off base here?
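A minimal Ruby sketch of the chunked backward read described above (the method name and CHUNK constant are mine, not from the thread):

```ruby
# Sketch of a chunked tail: seek backward in fixed-size blocks and count
# newlines, instead of seeking and reading one character at a time.
# CHUNK and tail_lines are illustrative names, not from the thread.
CHUNK = 4096

def tail_lines(path, n)
  File.open(path, 'rb') do |f|
    f.seek(0, IO::SEEK_END)
    pos      = f.pos   # start at end of file
    newlines = 0
    data     = ''
    # Read backward until we've seen more than n newlines (the extra one
    # guards against a partial first line) or we hit the start of file.
    while pos > 0 && newlines <= n
      read_size = [CHUNK, pos].min
      pos -= read_size
      f.seek(pos, IO::SEEK_SET)
      chunk = f.read(read_size)
      newlines += chunk.count("\n")
      data = chunk + data
    end
    data.lines.last(n)
  end
end
```

For the 100,000-line / 80-chars-per-line case above, this issues about 8,000,000 / 4096, i.e. roughly 2000 reads, instead of 8,000,000 single-char reads.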
Ok. I had it in my head that no one would tail more than 4k anyway, though
now that I think about it, it's obviously a possibility.

Now I'm wondering if there's a good way to dynamically compute a buffer size
for such an operation with each 'grab', rather than a fixed buffer size. Is
that a dumb idea? Am I getting into the "premature optimization" stage?
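On the buffer-size question, one common heuristic (my suggestion, not something from the thread) is to ask the filesystem for its preferred I/O block size via File::Stat#blksize rather than hard-coding 4k:

```ruby
# Pick a read-chunk size from the filesystem's preferred block size,
# falling back to 4096 where the platform doesn't report one.
# chunk_size_for is an illustrative name, not from the thread.
def chunk_size_for(path)
  blk = File.stat(path).blksize
  (blk && blk > 0) ? blk : 4096
end
```

Another option would be to grow the chunk geometrically (4k, 8k, 16k, ...) as more lines are requested, which bounds the number of reads without a large up-front allocation.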
Jim Hranicky, Senior SysAdmin UF/CISE Department |
E314D CSE Building Phone (352) 392-1499 |
jfh@cise.ufl.edu http://www.cise.ufl.edu/~jfh |
You're a Gator?! Go Seminoles!!
Regards,
Dan
···
-----Original Message-----
On Fri, 2 Aug 2002 23:55:42 +0900, "Berger, Daniel" djberge@qwest.com wrote: