A very basic tail -f implementation

From: James F. Hranicky [mailto:jfh@cise.ufl.edu]

I proposed this in my last solution, though it doesn’t grab a block of data
first. I know the source for tail does this, but if we’re counting char by
char anyway, how does first grabbing a 4k block (or whatever) help?

Say you wanted to tail 100,000 lines before your tail -f. Given 80 chars
per line, that’s 8,000,000 seeks and reads, vs 2,000 seeks and reads for
reading in 4k chunks. The comparison for c == "\n" gets done the same number
of times in each version, so unless one seek/read of a 4k chunk is somehow
slower than 4,000 seeks/reads of the same data (low memory, perhaps), I’d
think the chunked version should be much faster.

Or am I off base here?
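[For what it’s worth, here’s a rough sketch of the chunked approach in
Ruby. The names tail_lines and CHUNK and the log path are mine, for
illustration only; this is not the posted implementation.]

CHUNK = 4096

def tail_lines(path, n)
  File.open(path) do |f|
    f.seek(0, IO::SEEK_END)
    pos      = f.pos
    newlines = 0
    data     = ""

    # Walk backwards one chunk at a time, counting newlines as we go,
    # instead of issuing one seek/read per character.
    while pos > 0 && newlines <= n
      step = [CHUNK, pos].min
      pos -= step
      f.seek(pos, IO::SEEK_SET)
      chunk = f.read(step)
      newlines += chunk.count("\n")
      data = chunk + data
    end

    # Drop any partial leading line and keep the last n lines.
    data.split("\n").last(n)
  end
end

puts tail_lines("/var/log/messages", 100)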

Ok. I had it in my head that no one would tail more than 4k anyway, though
now that I think about it, it’s obviously a possibility.

Now I’m wondering if there’s a good way to dynamically compute a buffer size
for such an operation with each ‘grab’, rather than a fixed buffer size. Is
that a dumb idea? Am I getting into the “premature optimization” stage?


Jim Hranicky, Senior SysAdmin UF/CISE Department |
E314D CSE Building Phone (352) 392-1499 |
jfh@cise.ufl.edu http://www.cise.ufl.edu/~jfh |


You’re a Gator?! Go Seminoles!! :P

Regards,

Dan

···

-----Original Message-----
On Fri, 2 Aug 2002 23:55:42 +0900, “Berger, Daniel” djberge@qwest.com wrote:

Ok. I had it in my head that no one would tail more than 4k anyway, though
now that I think about it, it’s obviously a possibility.

I do it all the time :->

Now I’m wondering if there’s a good way to dynamically compute a buffer size
for such an operation with each ‘grab’, rather than a fixed buffer size. Is
that a dumb idea? Am I getting into the “premature optimization” stage?

Actually, I was wondering that, too… some kind of percentage of the #
of lines requested? Not sure…
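[One hypothetical heuristic along those lines, sketched in Ruby. The
constants are guesses of mine, not anything from the thread: assume an
average line length, scale by the line count, and clamp to sane bounds.]

AVG_LINE_GUESS = 80          # assumed average line length
MAX_BUFFER     = 64 * 1024   # arbitrary cap on a single grab

def buffer_size_for(num_lines)
  # At least one 4k block, at most MAX_BUFFER.
  [[num_lines * AVG_LINE_GUESS, 4096].max, MAX_BUFFER].min
end

buffer_size_for(10)      # => 4096
buffer_size_for(100_000) # => 65536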

You’re a Gator?! Go Seminoles!! :P

I grew up in Tallahassee – go both! :->

Jim

···

On Sat, 3 Aug 2002 00:14:07 +0900 “Berger, Daniel” djberge@qwest.com wrote:

file.stat.blksize

will return the block size of the underlying file; multiples of this
would probably be most efficient.
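[A minimal sketch of that suggestion; the path and the multiple of four
are placeholders. Note that File::Stat#blksize can return nil on
platforms that don’t report a block size, hence the fallback.]

File.open("/var/log/messages") do |f|
  block = f.stat.blksize || 4096   # fall back when blksize is nil
  buf   = f.read(block * 4)        # grab a few blocks per read
end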

···

On Friday 02 August 2002 08:14 am, Berger, Daniel wrote:

Now I’m wondering if there’s a good way to dynamically compute a
buffer size for such an operation with each ‘grab’, rather than a
fixed buffer size. Is that a dumb idea? Am I getting into the
“premature optimization” stage?


Ned Konz
http://bike-nomad.com
GPG key ID: BEEA7EFE