Hm...ok, then I could just return the cache itself, as Sean originally
suggested. Not as slick, but it does avoid namespace pollution.
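Something along these lines, maybe (just a rough sketch, not the actual
memoize source; the singleton-class approach and the Fib example are only
assumptions for illustration):

module Memoize
  # Wrap +name+ in a caching version and return the cache Hash itself,
  # rather than defining any extra accessor methods on the object.
  def memoize(name)
    cache = {}
    meth  = method(name)

    # One plausible way to install the wrapper: define it on this object's
    # singleton class so only this instance is affected.
    singleton = class << self; self; end
    singleton.send(:define_method, name) do |*args|
      cache.has_key?(args) ? cache[args] : (cache[args] = meth.call(*args))
    end

    cache   # "return the cache itself"
  end
end

class Fib
  include Memoize
  def fib(n)
    n < 2 ? n : fib(n - 1) + fib(n - 2)
  end
end

f = Fib.new
cache = f.memoize(:fib)   # the return value is the cache
f.fib(30)
p cache.size              # callers can inspect the cache directly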
Dan
-----Original Message-----
From: Pit Capitain [mailto:pit@capitain.de]
Sent: Tuesday, October 18, 2005 8:21 AM
To: ruby-talk ML
Subject: Re: declaratively caching results of a method
Berger, Daniel wrote:
>> (accessing the memoize cache)
>
> Alrighty, then. I'll make that change in the next release.
My code was meant as a quick hack for the OP to be able to experiment
with the cache. I wouldn't include it in the library, though, because I
don't like polluting the namespace with those additional methods.
> > Alrighty, then. I'll make that change in the next release.
Here's another change to memoize for your consideration:
Suppose the result of a method call is nil or false, and suppose it
takes a lot of work to find that out. As written, methods that return
nil or false get no speedup from memoize.
How about changing memoize to something like the code below?
define_method(name) do |*args|
  # cache[args] ||= meth.call(*args)  # what if the cached value is nil or false?
  cache.has_key?(args) ? cache[args] : (cache[args] = meth.call(*args))
end
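For instance, here is a standalone toy example (not memoize itself, just
an illustration of the difference):

calls = 0
expensive_nil = lambda do |key|
  calls += 1   # count how often the "expensive" work really runs
  nil          # a hard-won result that happens to be nil
end

# With ||= the cached nil is falsy, so every lookup recomputes:
cache = {}
3.times { cache[[:k]] ||= expensive_nil.call(:k) }
puts calls   # => 3

# With has_key? the nil result counts as "already computed":
calls = 0
cache = {}
3.times do
  cache.has_key?([:k]) ? cache[[:k]] : (cache[[:k]] = expensive_nil.call(:k))
end
puts calls   # => 1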
Brian