Not 2 maps

# I have this
arr = [[1.1, 2.2, 3.3], [4.1, 5.6, 6.8], [7.1, 8.7, 9.0], [10.0, 11.4, 12.6]]

# I want this [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]

# This works. But, I want to find a way that is faster than this in 1 or 2 lines of code.
# Any nice ideas how I can do it without 2 maps?
# Even if it is not faster, I would like to see how you would do it.

p arr.map{|y| y.map{|z| z.to_i}}

Harry

A marginally faster solution would be

arr.each{|y| y.map!{|z| z.to_i}}

This doesn't create new objects; instead it modifies the arrays in place.
It's about 15% faster for me.

Benchmark: https://gist.github.com/1208377
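For anyone who wants to reproduce the comparison without the gist, a minimal benchmark along these lines (array sizes picked arbitrarily) would be:

```ruby
require 'benchmark'

# Sample data: 1,000 rows of 10 floats (sizes are arbitrary).
rows = Array.new(1_000) { Array.new(10) { rand * 100 } }

Benchmark.bm(6) do |x|
  # map builds a new outer array and new inner arrays on every pass.
  x.report('map')  { 100.times { rows.map { |y| y.map { |z| z.to_i } } } }
  # map! converts in place; dup the rows first so each pass starts from floats
  # (the dup overhead is charged to this variant, so it understates the win).
  x.report('map!') { 100.times { rows.map { |y| y.dup }.each { |y| y.map!(&:to_i) } } }
end
```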

-- Matma Rex

I would personally use maps, but to do it without mapping it would look like
this. Or were you wanting to go completely procedural and see what it would
look like without any iterators at all?

pre = [[1.1, 2.2, 3.3], [4.1, 5.6, 6.8], [7.1, 8.7, 9.0], [10.0, 11.4, 12.6]]

post = Array.new
pre.each do |inner_array|
  post << []
  inner_array.each { |num| post.last << num.to_i }
end

post == [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]] # => true

···

On Sat, Sep 10, 2011 at 9:16 AM, Harry Kakueki <list.push@gmail.com> wrote:

> # I have this
> arr = [[1.1, 2.2, 3.3], [4.1, 5.6, 6.8], [7.1, 8.7, 9.0], [10.0, 11.4, 12.6]]
>
> # I want this [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
>
> # This works. But, I want to find a way that is faster than this in 1 or 2 lines of code.
> # Any nice ideas how I can do it without 2 maps?
> # Even if it is not faster, I would like to see how you would do it.
>
> p arr.map{|y| y.map{|z| z.to_i}}

Thanks to everyone who offered some type of solution.

arr.each{|y| y.map!{|z| z.to_i}} gave me a little more speed.

I will study GC to learn more about it; for some reason I usually forget
about inject, and I did not know about #with_object.
But I'll be learning more about these now.

Thanks again.

Harry

I didn't mention it, but I was trying not to iterate. But, I guess I
need to if I want to get more speed.

Thanks.

Harry

···

On Sat, Sep 10, 2011 at 11:43 PM, Bartosz Dziewoński <matma.rex@gmail.com> wrote:

> A marginally faster solution would be
>
> arr.each{|y| y.map!{|z| z.to_i}}
>
> This doesn't create new objects, instead modifying arrays in place.
> It's about 15% faster for me.
>
> Benchmark: https://gist.github.com/1208377
>
> -- Matma Rex

luv this inject code. hope matz can enhance inject further

pre.inject([]) do |new_array, inner_array|
  new_array + [inner_array.map(&:to_i)]
end
#=> [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
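One thing to watch out for: inject needs an explicit seed array here. Without one, the first inner array itself becomes the starting accumulator and its floats survive untouched:

```ruby
pre = [[1.1, 2.2], [3.9, 4.0]]

# Seeded: every inner array goes through the block.
with_seed = pre.inject([]) { |acc, inner| acc + [inner.map(&:to_i)] }

# Unseeded: [1.1, 2.2] is taken as the accumulator as-is.
without_seed = pre.inject { |acc, inner| acc + [inner.map(&:to_i)] }

p with_seed     #=> [[1, 2], [3, 4]]
p without_seed  #=> [1.1, 2.2, [3, 4]]
```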

kind regards -botp

···

On Mon, Sep 12, 2011 at 12:12 PM, Josh Cheek <josh.cheek@gmail.com> wrote:

> pre = [[1.1, 2.2, 3.3], [4.1, 5.6, 6.8], [7.1, 8.7, 9.0], [10.0, 11.4, 12.6]]
>
> post = Array.new
> pre.each do |inner_array|
>   post << []
>   inner_array.each { |num| post.last << num.to_i }
> end

> I didn't mention it, but I was trying not to iterate. But, I guess I
> need to if I want to get more speed.

The question here is going to come down to why you're so worried about speed. What's your specific use case? The one you gave us seems too simplified to vouch for going the optimization route. Are you repeating this process thousands of times? With larger data sets? Computer processing power has evolved a lot over the years, to the point where the bar is set higher for when granular optimizations are required. If, however, such speed is critical to the application, you also have the route of writing a C-level extension (assuming MRI here).

Regards,
Chris White
Twitter: http://www.twitter.com/cwgem

IDK, I feel like most places #inject is used, #with_object would be a better
choice.

pre.each.with_object([]) do |inner_array, new_array|
  new_array << inner_array.map(&:to_i)
end
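For what it's worth, #each_with_object is the shorthand for that enumerator chain; note the accumulator comes *last* in the block parameters, the opposite of inject:

```ruby
pre = [[1.1, 2.2, 3.3], [4.1, 5.6, 6.8]]

# The seed array is yielded as the second block parameter
# and returned at the end, so no explicit accumulator juggling.
post = pre.each_with_object([]) do |inner_array, new_array|
  new_array << inner_array.map(&:to_i)
end

p post  #=> [[1, 2, 3], [4, 5, 6]]
```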

···

On Sun, Sep 11, 2011 at 11:49 PM, botp <botpena@gmail.com> wrote:

On Mon, Sep 12, 2011 at 12:12 PM, Josh Cheek <josh.cheek@gmail.com> wrote:
> pre = [[1.1, 2.2, 3.3], [4.1, 5.6, 6.8], [7.1, 8.7, 9.0], [10.0, 11.4, 12.6]]
>
> post = Array.new
> pre.each do |inner_array|
>   post << []
>   inner_array.each { |num| post.last << num.to_i }
> end

luv this inject code. hope matz can enhance inject further

pre.inject([]) do |new_array, inner_array|
  new_array + [inner_array.map(&:to_i)]
end
#=> [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]

kind regards -botp

I just gave a very simple example.
I want to use larger arrays millions of times.
The speed I have now is acceptable.
I was just trying to squeeze a little more out to see if I could.

Thanks.

Harry

···

On Mon, Sep 12, 2011 at 11:47 AM, Chris White <cwprogram@live.com> wrote:

I didn't mention it, but I was trying not to iterate. But, I guess I
need to if I want to get more speed.

The question here is going to come down to why you're so worried about speed. What's your specific use case? The one you gave us seems too simplified to vouch for going the optimization route. Are you repeating this process thousands of times? With larger data sets? Computer processing power has evolved a lot over the years, to the point where the bar is set higher for when granular optimizations are required. If, however, such speed is critical to the application, you also have the route of writing a C-level extension (assuming MRI here).

Regards,
Chris White
Twitter: http://www.twitter.com/cwgem

totally agree, josh. but i still luv the inject magic. i can
concentrate on the code block result, not some var..
kind regards -botp

···

On Mon, Sep 12, 2011 at 1:13 PM, Josh Cheek <josh.cheek@gmail.com> wrote:

IDK, I feel like most places #inject is used, #with_object would be a better
choice.

pre.each.with_object([]) do |inner_array, new_array|
  new_array << inner_array.map(&:to_i)
end

I just gave a very simple example.
I want to use larger arrays millions of times.
The speed I have now is acceptable.
I was just trying to squeeze a little more out to see if I could.

Garbage collection could be part of it, as the GC has to stop for the collection phase, which can interrupt current processing. You can try doing some calculations using the GC module:

http://www.ruby-doc.org/core/classes/GC.html

You can also disable and re-enable it when doing expensive operations. Another option is trying a different implementation, such as Rubinius:

http://rubini.us/

or JRuby:

http://jruby.org/

I've seen some pretty impressive numbers for mass object creation with Rubinius, and the JVM gives you quite a number of optimization options and garbage collection methods. As I mentioned though, you can fall back to creating C extensions to have more precise control over things.
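Sketching the disable/re-enable idea (the workload below is made up):

```ruby
# Pause the collector while churning through a large batch,
# then turn it back on and collect once afterwards.
data = Array.new(10_000) { Array.new(10) { rand * 100 } }

GC.disable
converted = data.map { |row| row.map(&:to_i) }
GC.enable
GC.start  # clean up the garbage that piled up while GC was off
```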

Regards,
Chris White
http://www.twitter.com/cwgem

> I didn't mention it, but I was trying not to iterate. But, I guess I
> need to if I want to get more speed.

What exactly do you mean by that? If the array is given as is and you
need to change all values, there is no other way than iterating.

> I just gave a very simple example.
> I want to use larger arrays millions of times.
> The speed I have now is acceptable.
> I was just trying to squeeze a little more out to see if I could.

Well, then you could do the conversion when you *create* the nested
arrays. That way you spare the iteration through the whole dataset.
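For example, if the rows come in as text (a made-up comma-separated source here), the conversion can happen while the nested arrays are built, so there is no second pass:

```ruby
require 'stringio'

# Stand-in for a real file or socket.
io = StringIO.new("1.1,2.2,3.3\n4.1,5.6,6.8\n7.1,8.7,9.0\n")

# String#to_i truncates toward zero just like Float#to_i,
# so the integers are produced as the rows are created.
rows = io.each_line.map { |line| line.split(',').map(&:to_i) }

p rows  #=> [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```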

Can you explain with more detail what your use case is? Where do you
get the data from? Why do you need to convert it? Where do you put
it? Things like that.

Kind regards

robert

···

On Mon, Sep 12, 2011 at 5:09 AM, Harry Kakueki <list.push@gmail.com> wrote:

On Mon, Sep 12, 2011 at 11:47 AM, Chris White <cwprogram@live.com> wrote:

--
remember.guy do |as, often| as.you_can - without end
http://blog.rubybestpractices.com/