# I have this
arr = [[1.1, 2.2, 3.3], [4.1, 5.6, 6.8], [7.1, 8.7, 9.0], [10.0, 11.4, 12.6]]
# I want this [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
# This works. But I want to find a way that is faster than this in 1 or 2 lines of code.
# Any nice ideas how I can do it without 2 maps?
# Even if it is not faster, I would like to see how you would do it.
I would personally use maps, but to do it without mapping it would look like
this. Or were you wanting to go completely procedural and see what it would
look like without any iterators at all?
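The fully procedural version mentioned above might be sketched like this (a minimal illustration using plain `while` loops, not the original reply's code):

```ruby
arr = [[1.1, 2.2, 3.3], [4.1, 5.6, 6.8], [7.1, 8.7, 9.0], [10.0, 11.4, 12.6]]

# Walk the nested array with index counters instead of iterators,
# truncating each Float to an Integer in place.
i = 0
while i < arr.length
  j = 0
  while j < arr[i].length
    arr[i][j] = arr[i][j].to_i
    j += 1
  end
  i += 1
end

p arr  # => [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
```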
On Sat, Sep 10, 2011 at 9:16 AM, Harry Kakueki <list.push@gmail.com> wrote:
Thanks to everyone who offered some type of solution.
arr.each{|y| y.map!{|z| z.to_i}} gave me a little more speed.
I will study to learn more about GC, for some reason I usually forget
about inject, and I did not know about #with_object.
But, I'll be learning more about these now.
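For reference, the inject and each_with_object variants mentioned above might look like this (a sketch, not code from the thread):

```ruby
arr = [[1.1, 2.2, 3.3], [4.1, 5.6, 6.8], [7.1, 8.7, 9.0], [10.0, 11.4, 12.6]]

# inject threads an accumulator through the iteration, appending a
# converted row at each step.
ints1 = arr.inject([]) { |acc, row| acc << row.map(&:to_i) }

# each_with_object does the same but yields the accumulator second,
# so the block can mutate it directly.
ints2 = arr.each_with_object([]) { |row, acc| acc << row.map(&:to_i) }

p ints1           # => [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
p ints1 == ints2  # => true
```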
I didn't mention it, but I was trying not to iterate. But, I guess I
need to if I want to get more speed.
The question here is going to come down to why you're so worried about speed. What's your specific use case? The one you gave us seems too simplified to justify going the optimization route. Are you repeating this process thousands of times? With larger data sets? Computer processing power has evolved a lot over the years, to the point where the bar is set higher for when granular optimizations are required. If, however, such speed is critical to the application, you also have the route of writing a C-level extension (assuming MRI here).
I just gave a very simple example.
I want to use larger arrays millions of times.
The speed I have now is acceptable.
I was just trying to squeeze a little more out to see if I could.
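One way to check whether a tweak actually squeezes anything out is the stdlib Benchmark module. A minimal sketch (the array sizes and repeat count here are arbitrary, chosen only for illustration):

```ruby
require 'benchmark'

# Illustrative data: 1,000 rows of 100 random floats.
arr = Array.new(1_000) { Array.new(100) { rand * 100 } }
n = 100

Benchmark.bm(12) do |x|
  # Non-destructive: builds a fresh nested array each time.
  x.report('map of maps') { n.times { arr.map { |row| row.map(&:to_i) } } }
  # Destructive on a copy: mutates each row in place with map!.
  x.report('each + map!') { n.times { arr.map(&:dup).each { |row| row.map!(&:to_i) } } }
end
```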
Thanks.
Harry
···
On Mon, Sep 12, 2011 at 11:47 AM, Chris White <cwprogram@live.com> wrote:
Garbage collection could be part of it, as the GC has to stop for the collection phase, which can interrupt current processing. You can try doing some calculations using the GC module:
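For example, GC.count reports how many collections have run so far, so bracketing the conversion gives a rough signal of GC pressure. A minimal sketch (my illustration, with arbitrary sizes, not code from the thread):

```ruby
# Illustrative data: 10,000 rows of 10 random floats.
arr = Array.new(10_000) { Array.new(10) { rand * 100 } }

runs_before = GC.count
arr.each { |row| row.map!(&:to_i) }
puts "GC runs during conversion: #{GC.count - runs_before}"

# For a hot section you can also trade memory for speed by pausing
# collection entirely:
GC.disable
# ... critical loop here ...
GC.enable
```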
I've seen some pretty impressive numbers for mass object creation on Rubinius, and the JVM gives you quite a number of optimization options and garbage-collection methods. As I mentioned, though, you can fall back to creating C extensions to have more precise control over things.
I didn't mention it, but I was trying not to iterate. But, I guess I
need to if I want to get more speed.
What exactly do you mean by that? If the array is given as is and you
need to change all the values, there is no way around iterating.
I just gave a very simple example.
I want to use larger arrays millions of times.
The speed I have now is acceptable.
I was just trying to squeeze a little more out to see if I could.
Well, then you could do the conversion when you *create* the nested
arrays. That way you save yourself a separate pass through the whole dataset.
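A hedged sketch of that idea, assuming the floats arrive as whitespace-separated text lines (the input format is my assumption, not something stated in the thread):

```ruby
# Hypothetical raw input; in practice this might come from a file or socket.
lines = ["1.1 2.2 3.3", "4.1 5.6 6.8", "7.1 8.7 9.0", "10.0 11.4 12.6"]

# String#to_i parses the leading integer part of each token, so the
# conversion happens while the nested array is built -- no second pass.
arr = lines.map { |line| line.split.map(&:to_i) }

p arr  # => [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
```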
Can you explain with more detail what your use case is? Where do you
get the data from? Why do you need to convert it? Where do you put
it? Things like that.
Kind regards
robert
···
On Mon, Sep 12, 2011 at 5:09 AM, Harry Kakueki <list.push@gmail.com> wrote: