This works very quickly with sets of files that are in directories
containing (say) 90K documents or fewer. But when there are 200k+
documents in the directories, it begins taking a substantial amount of
time.

Suffice it to say that few file systems are optimized for the case of
200k files per directory.
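For anyone who wants to reproduce the effect, here is a rough, throwaway
sketch (in Python, purely for illustration) that times a directory listing
as the file count grows; the counts and file names are invented, and the
absolute numbers will depend heavily on the file system and kernel:

import os
import tempfile
import time

# Throwaway sketch: time os.listdir() on directories of increasing
# size. Counts and file names are invented; absolute timings depend
# heavily on the file system and kernel version.
def time_listing(count):
    d = tempfile.mkdtemp()
    for i in range(count):
        open(os.path.join(d, "doc%07d.txt" % i), "w").close()
    start = time.time()
    names = os.listdir(d)
    return time.time() - start, len(names)

for count in (10_000, 90_000, 200_000):
    elapsed, n = time_listing(count)
    print("%7d files: %.3f s to list (%d entries)" % (count, elapsed, n))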
Fair enough. But we sometimes get directories with several hundred
thousand records.
From your example, I’m guessing that you’re using Windows, which I
don’t know much about. But my advice is to restructure your program
to use more, smaller directories.
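For illustration, here is a minimal sketch (in Python rather than whatever
the original program uses) of the kind of restructuring being suggested:
spread one flat directory across many small subdirectories keyed by a short
hash prefix of each file name. The two-character prefix, the hash choice,
and the helper names are just examples:

import hashlib
import os
import shutil

# Sketch of the "more, smaller directories" idea: spread a flat
# directory of files across 256 subdirectories named after the first
# two hex digits of a hash of each file name. Prefix width and hash
# are arbitrary examples; assumes flat_dir contains only plain files.
def bucket_path(root, filename, width=2):
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    return os.path.join(root, digest[:width], filename)

def restructure(flat_dir, bucketed_root):
    for name in os.listdir(flat_dir):
        dest = bucket_path(bucketed_root, name)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        shutil.move(os.path.join(flat_dir, name), dest)

Lookups then go through bucket_path() to find a file by name instead of
scanning one huge directory.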
We don’t create the input files or their structure; we are simply
processing them. We get media that contain them, with text files
listing the files and their relationships. If we moved the files, we’d
have to modify the description files, which would be a time-consuming
and error-prone task.
We’re using Red Hat’s most recent Linux (on a dual Athlon with 2GB RAM).
Why would you think I was using Windows from the example?
···
On Feb 13, 2004, at 1:04 PM, J.Herre wrote:
On Feb 13, 2004, at 9:16 AM, David King Landrith wrote:
David King Landrith
(w) 617.227.4469x213
(h) 617.696.7133
One useless man is a disgrace, two
are called a law firm, and three or more
become a congress – John Adams
public key available upon request