Beyond YAML? (scaling)

Hi,

I've been using YAML files to store hashes of numbers, e.g.,

  { ["O_Kc_01"] => [ 0.01232, 0.01212, 0.03222, ... ], ... }

This has worked wonderfully for portability and visibility
into the system as I've been creating it.

Recently, however, I've increased my problem size by orders
of magnitude in both the number of variables and the number
of associated values. The resulting YAML files are prohibitive:
10s of MBs big and requiring 10s of minutes to dump/load.

Where should I go from here?

Thanks,

···

--
Bil Kleb
http://fun3d.larc.nasa.gov

Some random thoughts:

* If they are just super straightforward lists of numbers like this, a trivial flat-file scheme, say with one number per line, might get the job done (see the sketch after this list).
* XML can be pretty darn easy to output manually, and if you use REXML's stream parser (not slurping everything into a DOM) you should be able to read it reasonably quickly.
* If you are willing to sacrifice a little visibility, you can always take the step up to a real database, even if it's just sqlite. These have varying degrees of portability as well.
* You might want to look at KirbyBase. (It has a younger brother Mongoose, but that uses binary output.)
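
For that first flat-file idea, here's a minimal sketch of what I have in mind, assuming a layout of a '#'-prefixed key line followed by one number per line (the file name and helper names are made up):

  # Dump a hash of arrays to a trivial flat file and read it back.
  def dump_flat(path, data)
    File.open(path, "w") do |f|
      data.each do |key, values|
        f.puts "# #{key}"
        values.each { |v| f.puts v }
      end
    end
  end

  def load_flat(path)
    data, current = {}, nil
    File.foreach(path) do |line|
      line = line.strip
      if line =~ /\A# (.+)/
        current = $1
        data[current] = []
      elsif !line.empty?
        data[current] << line.to_f
      end
    end
    data
  end

  dump_flat("samples.dat", { "O_Kc_01" => [0.01232, 0.01212, 0.03222] })
  p load_flat("samples.dat")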

Hope something in there helps.

James Edward Gray II

···

On May 3, 2007, at 8:50 AM, Bil Kleb wrote:

Hi,

I've been using YAML files to store hashes of numbers, e.g.,

{ ["O_Kc_01"] => [ 0.01232, 0.01212, 0.03222, ... ], ... }

This has worked wonderfully for portability and visibility
into the system as I've been creating it.

Recently, however, I've increased my problem size by orders
of magnitude in both the number of variables and the number
of associated values. The resulting YAML files are prohibitive:
10s of MBs big and requiring 10s of minutes to dump/load.

Where should I go from here?

Use a SQL database?

It all depends what sort of processing you're doing. If you're adding to a
dataset (rather than starting with an entirely fresh data set each time),
having a database makes sense. If you're doing searches across the data,
and/or if the data is larger than the available amount of RAM, then a
database makes sense. If you're only touching small subsets of the data at
any one time, then a database makes sense.

Put it another way, does your processing really require you to read the
entire collection of objects into RAM before you can perform any processing?

If it does, and your serialisation needs are as simple as it appears above,
then maybe something like CSV would be better.

O_Kc_01,0.01232,0.01212,0.03222,...

If the source of the data is another Ruby program, then Marshal will be much
faster than YAML (but unfortunately binary).
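
For completeness, a minimal sketch of the Marshal round trip (the file name is just an example):

  samples = { "O_Kc_01" => [0.01232, 0.01212, 0.03222] }

  # Dump and load in binary mode; Marshal output is not plain text.
  File.open("samples.marshal", "wb") { |f| Marshal.dump(samples, f) }
  restored = File.open("samples.marshal", "rb") { |f| Marshal.load(f) }
  p restored == samples   # => true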

You could consider using something like Madeleine:
http://madeleine.rubyforge.org/
This snapshots your object tree to disk (using Marshal by default I think,
but can also use YAML). You can then make incremental changes and
occasionally rewrite the snapshot.

B.

···

On Thu, May 03, 2007 at 10:50:06PM +0900, Bil Kleb wrote:

I've been using YAML files to store hashes of numbers, e.g.,

{ ["O_Kc_01"] => [ 0.01232, 0.01212, 0.03222, ... ], ... }

This has worked wonderfully for portability and visibility
into the system as I've been creating it.

Recently, however, I've increased my problem size by orders
of magnitude in both the number of variables and the number
of associated values. The resulting YAML files are prohibitive:
10s of MBs big and requiring 10s of minutes to dump/load.

Where should I go from here?

Bil Kleb wrote:

Hi,

I've been using YAML files to store hashes of numbers, e.g.,

{ ["O_Kc_01"] => [ 0.01232, 0.01212, 0.03222, ... ], ... }

This has worked wonderfully for portability and visibility
into the system as I've been creating it.

Recently, however, I've increased my problem size by orders
of magnitude in both the number of variables and the number
of associated values. The resulting YAML files are prohibitive:
10s of MBs big and requiring 10s of minutes to dump/load.

Where should I go from here?

Hey, Bil. If you don't mind a couple of shameless plugs, you might want to try KirbyBase or Mongoose.

KirbyBase should be faster than YAML and it still stores the data in plain text files, if that is important to you.

Mongoose is faster than KirbyBase, at the expense of the data not being stored as plain text.

I don't know if either will be fast enough for you.

HTH,

Jamey Cribbs

I guess that depends on whether you need the files to be easily readable or not. If you don't, Marshal will be faster than YAML.

Kirk Haines

···

On Thu, 3 May 2007, Bil Kleb wrote:

I've been using YAML files to store hashes of numbers, e.g.,

{ ["O_Kc_01"] => [ 0.01232, 0.01212, 0.03222, ... ], ... }

Recently, however, I've increased my problem size by orders
of magnitude in both the number of variables and the number
of associated values. The resulting YAML files are prohibitive:
10s of MBs big and requiring 10s of minutes to dump/load.

Where should I go from here?

khaines@enigo.com wrote:

I guess that depends on whether you need the files to be easily readable or not. If you don't, Marshal will be faster than YAML.

At this point, I'm looking for an easy out that will
reduce size and increase speed, and I'm willing to
go binary if necessary.

Of the answers I've seen so far (thanks everyone!),
migrating to Marshal seems to be the Simplest Thing
That Could Possibly Work.

Thanks,

···

--
Bil Kleb
http://fun3d.larc.nasa.gov

Jamey Cribbs wrote:

Hey, Bil.

Hi.

KirbyBase should be faster than YAML and it still stores the data in plain text files, if that is important to you.

Plain text will be too big -- I've got an n^2 problem.

Mongoose is faster than KirbyBase, at the expense of the data not being stored as plain text.

Sounds intriguing, but where can I find some docs? So far, I'm
coming up empty...

Regards,

···

--
Bil Kleb
http://fun3d.larc.nasa.gov

Brian Candler wrote:

Use a SQL database?

I always suspect that I should be doing that more often,
but as my experience with databases is rather limited
and infrequent, I always shy away from those as James
already knows. Regardless, I should probably overcome
my aggressive incompetence one day!

It all depends what sort of processing you're doing. If you're adding to a
dataset (rather than starting with an entirely fresh data set each time),
having a database makes sense.

At this point, I'm generating an entirely fresh data set
each time, but I can foresee a point where that will change
to an incremental model...

Put it another way, does your processing really require you to read the
entire collection of objects into RAM before you can perform any processing?

Yes, AFAIK, but I suppose there are algorithms that could
compute statistical correlations incrementally.
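
Something like an online (Welford-style) update would probably do it. A rough sketch, not anything I've actually wired into the system (the class name and interface are invented):

  # Incrementally accumulate the Pearson correlation between an input x
  # and an output y without keeping every sample in RAM.
  class OnlineCorrelation
    def initialize
      @n = 0
      @mean_x = @mean_y = 0.0
      @m2_x = @m2_y = @cov = 0.0
    end

    def add(x, y)
      @n += 1
      dx = x - @mean_x                # deviations from the old means
      dy = y - @mean_y
      @mean_x += dx / @n
      @mean_y += dy / @n
      @m2_x += dx * (x - @mean_x)     # Welford-style co-moment updates
      @m2_y += dy * (y - @mean_y)
      @cov  += dx * (y - @mean_y)
    end

    def pearson
      return 0.0 if @n < 2 || @m2_x.zero? || @m2_y.zero?
      @cov / Math.sqrt(@m2_x * @m2_y)
    end
  end

  corr = OnlineCorrelation.new
  [[1.0, 2.0], [2.0, 4.1], [3.0, 5.9]].each { |x, y| corr.add(x, y) }
  p corr.pearson   # => close to 1.0 for nearly linear data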

You could consider using something like Madeleine:
http://madeleine.rubyforge.org/
This snapshots your object tree to disk (using Marshal by default I think,
but can also use YAML). You can then make incremental changes and
occasionally rewrite the snapshot.

Probably not a good fit as I won't change existing data,
only add new...

Thanks,

···

--
Bil Kleb
http://fun3d.larc.nasa.gov

Bil Kleb wrote:

···

khaines@enigo.com wrote:

I guess that depends on whether you need the files to be easily readable or not. If you don't, Marshal will be faster than YAML.

At this point, I'm looking for an easy out that will
reduce size and increase speed, and I'm willing to
go binary if necessary.

Of the answers I've seen so far (thanks everyone!),
migrating to Marshal seems to be the Simplest Thing
That Could Possibly Work.

Well, maybe not so simple...

  `dump': can't dump hash with default proc (TypeError)

which seems to be due to the trick I learned from zenspider
and drbrain to quickly set up a hash of arrays:

  Hash.new{ |hash,key| hash[key] = [] }
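
A minimal reproduction, in case anyone wants to poke at it:

  samples = Hash.new { |hash, key| hash[key] = [] }
  samples["O_Kc_01"] << 0.01232

  begin
    Marshal.dump(samples)
  rescue TypeError => e
    puts e.message   # => can't dump hash with default proc
  end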

Later,
--
Bil Kleb
http://fun3d.larc.nasa.gov

Bil Kleb wrote:

Jamey Cribbs wrote:

Hey, Bil.

Hi.

KirbyBase should be faster than YAML and it still stores the data in plain text files, if that is important to you.

Plain text will be too big -- I've got an n^2 problem.

Mongoose is faster than KirbyBase, at the expense of the data not being stored as plain text.

Sounds intriguing, but where can I find some docs? So far, I'm
coming up empty...

Docs are light compared to KirbyBase. If you download the distribution, there is the README file, some pretty good examples in the aptly named "examples" directory, and unit tests in the "tests" directory.

HTH,

Jamey

Don't be afraid of the database solution. It is much more scalable in the long term and will pay dividends almost immediately.
MySQL and PostgreSQL are both pretty fast and scalable. With a large data set you do need to plan the schema carefully, but it should end up fairly similar to your existing data structures anyway.
The database APIs in Ruby are pretty simple.

···

On May 4, 2007, at 12:35 AM, Bil Kleb wrote:

Brian Candler wrote:

Use a SQL database?

I always suspect that I should be doing that more often,
but as my experience with databases is rather limited
and infrequent, I always shy away from those as James
already knows. Regardless, I should probably overcome
my aggressive incompetence one day!

Bil Kleb wrote:

···

khaines@enigo.com wrote:

I guess that depends on whether you need the files to be easily readable or not. If you don't, Marshal will be faster than YAML.

At this point, I'm looking for an easy out that will
reduce size and increase speed, and I'm willing to
go binary if necessary.

What about mmap and pack/unpack, as ara does in

http://blade.nagaokaut.ac.jp/cgi-bin/scat.rb/ruby/ruby-talk/175944

?

--
       vjoel : Joel VanderWerf : path berkeley edu : 510 665 3407

If you want to describe your data needs a bit, and what operations you
need to perform on it, I'll be happy to play around with a ruby/sqlite3
program and see what pops out.
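
To give an idea, a minimal sketch of the sort of schema I'd start from (table and column names are just guesses at this point):

  require 'sqlite3'

  db = SQLite3::Database.new("samples.db")
  db.execute <<-SQL
    CREATE TABLE IF NOT EXISTS samples (
      variable TEXT NOT NULL,
      value    REAL NOT NULL
    )
  SQL

  # Insert one variable's values in a single transaction.
  db.transaction do
    [0.01232, 0.01212, 0.03222].each do |v|
      db.execute("INSERT INTO samples (variable, value) VALUES (?, ?)",
                 ["O_Kc_01", v])
    end
  end

  # Pull everything back for one variable.
  values = db.execute("SELECT value FROM samples WHERE variable = ?",
                      ["O_Kc_01"]).flatten
  p values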

Since there's no Ruby Quiz this weekend, we all need something to work
on :-).

enjoy,

-jeremy

···

On Thu, May 03, 2007 at 11:45:05PM +0900, Bil Kleb wrote:

khaines@enigo.com wrote:
>
>I guess that depends on whether you need the files to be easily readable
>or not. If you don't, Marshal will be faster than YAML.

At this point, I'm looking for an easy out that will
reduce size and increase speed, and I'm willing to
go binary if necessary.

--

Jeremy Hinegardner jeremy@hinegardner.org

Jamey Cribbs wrote:

Docs are light compared to KirbyBase. If you download the distribution, there is the README file, some pretty good examples in the aptly named "examples" directory, and unit tests in the "tests" directory.

Roger, I was afraid you'd say that. :)

Please throw those up on your Rubyforge webpage at some point?

Later,

···

--
Bil Kleb
http://fun3d.larc.nasa.gov

Bil Kleb wrote:

Hash.new{ |hash,key| hash[key] = [] }

Is there a better way than,

  samples[tag] = [] unless samples.has_key? tag
  samples[tag] << sample

?

Anyway, apart from Marshal not having a convenient
#load_file method like YAML's, the conversion was
very painless: it dropped file sizes considerably
and brought run times down into the minutes category
instead of hours.
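
For what it's worth, a tiny wrapper along these lines fills the #load_file gap (just a convenience shim, not something Marshal provides itself):

  # YAML.load_file-style convenience wrappers for Marshal.
  module Marshal
    def self.load_file(path)
      File.open(path, "rb") { |f| load(f) }
    end

    def self.dump_file(obj, path)
      File.open(path, "wb") { |f| dump(obj, f) }
    end
  end

  Marshal.dump_file({ "O_Kc_01" => [0.01232] }, "samples.marshal")
  p Marshal.load_file("samples.marshal")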

Thanks,

···

--
Bil Kleb
http://fun3d.larc.nasa.gov

Would making a copy of the hash use too much
memory or time?

h = Hash.new{ |hash,key| hash[key] = [] }

h['foo'] << 44
h['foo'] << 88

# Copy into a plain hash (no default proc) so it can be Marshal'd.
h_copy = {}
h.each{ |k,v| h_copy[k] = v }
p h_copy

···

On May 3, 10:11 am, Bil Kleb <Bil.K...@NASA.gov> wrote:

Bil Kleb wrote:
> khai...@enigo.com wrote:

>> I guess that depends on whether you need the files to be easily
>> readable or not. If you don't, Marshal will be faster than YAML.

> At this point, I'm looking for an easy out that will
> reduce size and increase speed, and I'm willing to
> go binary if necessary.

> Of the answers I've seen so far (thanks everyone!),
> migrating to Marshal seems to be the Simplest Thing
> That Could Possibly Work.

Well, maybe not so simple...

  `dump': can't dump hash with default proc (TypeError)

which seems to be due to the trick I learned from zenspider
and drbrain to quickly set up a hash of arrays:

  Hash.new{ |hash,key| hash[key] = [] }

Later,
--
Bil Kleb
http://fun3d.larc.nasa.gov

Hi,

Jeremy Hinegardner wrote:

If you want to describe your data needs a bit, and what operations you
need to perform on it, I'll be happy to play around with a ruby/sqlite3
program and see what pops out.

I've created a small tolerance DSL, and coupled with the Monte Carlo
Method[1] and the Pearson Correlation Coefficient[2], I'm performing
sensitivity analysis[3] on some of the simulation codes used for our
Orion vehicle[4]. In other words, jiggle the inputs, and see how
sensitive the outputs are and which inputs are the most influential.

The current system[5] works, and after the YAML->Marshal migration,
it scales well enough for now. The trouble is that the entire
architecture is wrong if I want to monitor the Monte Carlo statistics
to see when I can stop sampling, i.e., once the statistics have converged.

The current system consists of the following steps:

  1) Prepare a "sufficiently large" number of cases, each with random
     variations of the input parameters per the tolerance DSL markup.
     Save all these input variables and all their samples for step 5.
  2) Run all the cases.
  3) Collect all the samples of all the outputs of interest.
  4) Compute a running history of the output statistics to see
     if they have converged, i.e., the "sufficiently large"
     guess was correct -- typically a wasteful number of around 3,000.
     If not, start at step 1 again with a bigger number of cases.
  5) Compute normalized Pearson correlation coefficients for the
     outputs and see which inputs they are most sensitive to by
     using the data collected in steps 1 and 3.
  6) Lobby for experiments to nail down these "tall pole" uncertainties.

This system is plagued by the question of what counts as "sufficiently large".
The next generation system would do steps 1 through 3 in small
batches, and at the end of each batch, check for the statistical
convergence of step 4. If convergence has been reached, shutdown
the Monte Carlo process, declare victory, and proceed with steps
5 and 6.
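
Roughly, the batch loop I have in mind would look something like the sketch below, where run_batch is a stand-in for the existing steps 1-3 machinery and the tolerance is arbitrary:

  # Stand-in for steps 1-3: in reality this would set up, run, and
  # collect a batch of cases; here it just returns random numbers.
  def run_batch(n)
    Array.new(n) { rand }
  end

  BATCH = 100
  TOL   = 1.0e-3

  samples  = []
  old_mean = nil

  loop do
    samples.concat(run_batch(BATCH))
    mean = samples.inject(0.0) { |sum, x| sum + x } / samples.size
    break if old_mean && (mean - old_mean).abs < TOL
    old_mean = mean
  end
  # ...then proceed with steps 5 and 6 using `samples`.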

I'm thinking this more incremental approach, combined with my lack of
database experience, would make a perfect match for Mongoose[6]...

Since there's no Ruby Quiz this weekend, we all need something to work
on :-).

:)

Regards,

···

--
Bil Kleb
http://fun3d.larc.nasa.gov

[1] http://en.wikipedia.org/wiki/Monte_Carlo_method
[2] http://en.wikipedia.org/wiki/Pearson_correlation_coefficient
[3] http://en.wikipedia.org/wiki/Sensitivity_analysis
[4] http://en.wikipedia.org/wiki/Crew_Exploration_Vehicle
[5] The current system consists of 5 Ruby codes at ~40 lines each
plus some equally tiny library routines.
[6] http://mongoose.rubyforge.org/

Not exactly identical but usually good enough:

samples[tag] ||= []
samples[tag] << sample

And you can probably combine:

(samples[tag] ||= []) << sample

···

On Fri, May 04, 2007 at 01:25:05AM +0900, Bil Kleb wrote:

Bil Kleb wrote:
>
> Hash.new{ |hash,key| hash[key] = [] }

Is there a better way than,

samples[tag] = [] unless samples.has_key? tag
samples[tag] << sample

William James wrote:

Would making a copy of the hash use too much
memory or time?

I don't know, but that's surely another way out of
the Marshal-hash-proc trap...

Thanks,

···

--
Bil Kleb
http://fun3d.larc.nasa.gov

You're building an Orion? Please tell me it's not true!

-- Matt
It's not what I know that counts. It's what I can remember in time to use.

···

On Sat, 5 May 2007, Bil Kleb wrote:

I've created a small tolerance DSL, and coupled with the Monte Carlo
Method[1] and the Pearson Correlation Coefficient[2], I'm performing
sensitivity analysis[3] on some of the simulation codes used for our
Orion vehicle[4]. In other words, jiggle the inputs, and see how
sensitive the outputs are and which inputs are the most influential.