Proc {} vs. Method#to_proc

Hi –

Arising from a recent discussion on #ruby-lang:

Proc objects created by Method#to_proc seem to care about their arity
in a way that other Proc objects don’t. To illustrate:

class A
  def talk
    puts "hi from A#talk"
  end
end

pr = Proc.new { puts "hi from anonymous Proc" }
methproc = A.new.method(:talk).to_proc

[1,2,3].each &pr        # "hi from anonymous Proc\n" * 3
[1,2,3].each &methproc  # in `talk': wrong # of arguments(1 for 0)

The #to_proc proc is reacting like a method, not a proc, when given
the “wrong” number of arguments. For me, this doesn’t fit in with the
idea of (complete) conversion to a proc.

Any other thoughts or interpretations of this behavior?

David



David Alan Black
home: dblack@candle.superlink.net
work: blackdav@shu.edu
Web: http://pirate.shu.edu/~blackdav

Hi –

Arising from a recent discussion on #ruby-lang:

Proc objects created by Method#to_proc seem to care about their arity
in a way that other Proc objects don’t. To illustrate:

class A
  def talk
    puts "hi from A#talk"
  end
end

pr = Proc.new { puts "hi from anonymous Proc" }
methproc = A.new.method(:talk).to_proc

[1,2,3].each &pr        # "hi from anonymous Proc\n" * 3
[1,2,3].each &methproc  # in `talk': wrong # of arguments(1 for 0)

Interesting. I would have just done

[1,2,3].each { methproc.call }

and never have noticed. Can you explain what is happening when
I replace { ... } with &pr?


On Friday, 21 February 2003 at 1:19:17 +0900, dblack@candle.superlink.net wrote:

The #to_proc proc is reacting like a method, not a proc, when given
the “wrong” number of arguments. For me, this doesn’t fit in with the
idea of (complete) conversion to a proc.


Jim Freeze

If God had intended Man to Walk, He would have given him Feet.

Hi –

Arising from a recent discussion on #ruby-lang:

Proc objects created by Method#to_proc seem to care about their arity
in a way that other Proc objects don’t. To illustrate:

Procs don't care about the arity when they are transformed into blocks
with '&': unneeded arguments are discarded, and if there aren't enough,
the missing ones are filled with nil.

p = proc { |a,b,c,d,e| p(a) }
=> #<Proc:0x402ab4b4>
[1,2,3].each &p
(irb):4: warning: `&' interpreted as argument prefix
1
2
3
=> [1, 2, 3]
p.call(1)
ArgumentError: wrong # of arguments (1 for 5)
        from (irb):5
        from (irb):3:in `call'
        from (irb):5

class A
  def talk
    puts "hi from A#talk"
  end
end

pr = Proc.new { puts "hi from anonymous Proc" }
methproc = A.new.method(:talk).to_proc

[1,2,3].each &pr        # "hi from anonymous Proc\n" * 3
[1,2,3].each &methproc  # in `talk': wrong # of arguments(1 for 0)

The #to_proc proc is reacting like a method, not a proc, when given
the “wrong” number of arguments. For me, this doesn’t fit in with the
idea of (complete) conversion to a proc.

Any other thoughts or interpretations of this behavior?

Method#to_proc seems to be working like this:
batsman@tux-chan:/tmp$ expand -t 2 j.rb
class Method
  def my_to_proc
    proc { |*args| self.call(*args) }
  end
end

class A
  def talk
    puts "A#talk"
  end
end

methproc = A.new.method(:talk).my_to_proc
[1,2,3].each(&methproc)

batsman@tux-chan:/tmp$ ruby j.rb
j.rb:4:in `talk': wrong # of arguments(1 for 0) (ArgumentError)
        from j.rb:4:in `call'
        from j.rb:4:in `my_to_proc'
        from j.rb:4:in `each'
        from j.rb:15

You can make it behave like a “real” proc with the following:

batsman@tux-chan:/tmp$ expand -t 2 k.rb

class Method
  def my_to_proc
    case
    when arity > 0
      proc do |*args|
        (arity - args.size).times { args << nil } if arity > args.size
        self.call(*(args[0,arity]))
      end
    when arity == 0
      proc { |*args| self.call }
    when arity < 0
      rarity = -1 - arity
      proc do |*args|
        (rarity - args.size).times { args << nil } if rarity > args.size
        self.call(*args)
      end
    end
  end
end

class A
  def talk
    puts "A#talk"
  end

  def var_arg(a,b,c,*others)
    p a,b,c, others
  end

  def normal(a,b,c)
    p a,b,c
  end
end

methproc = A.new.method(:talk).my_to_proc
m2 = A.new.method(:var_arg).my_to_proc
m3 = A.new.method(:normal).my_to_proc
[1].each(&methproc)
[1].each(&m2)
[1].each(&m3)

batsman@tux-chan:/tmp$ ruby k.rb
A#talk
1
nil
nil
[]
1
nil
nil


On Fri, Feb 21, 2003 at 01:19:17AM +0900, dblack@candle.superlink.net wrote:


Running Debian GNU/Linux Sid (unstable)
batsman dot geo at yahoo dot com

"Save yourself from the 'Gates' of hell, use Linux." -- like that one.
    -- The_Kind @ LinuxNet

Hi,


In message "proc {} vs. Method#to_proc" on 03/02/21, dblack@candle.superlink.net writes:

Proc objects created by Method#to_proc seem to care about their arity
in a way that other Proc objects don’t.

Method#to_proc returns a Proc defined like

proc {|*args| self.call(*args)}

this explains the behavior.

						matz.
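That wrapper can be sketched in plain Ruby, reusing the A#talk example from upthread (the real to_proc is implemented in C; this is just the shape matz describes, with talk returning a string so the result is visible):

```ruby
# A hand-rolled version of the wrapper: the outer proc absorbs any
# number of block arguments via *args, but the wrapped method still
# enforces its own arity when it is finally called.
class A
  def talk
    "hi from A#talk"
  end
end

m = A.new.method(:talk)
wrapper = proc { |*args| m.call(*args) }

puts wrapper.call    # fine: *args is empty, talk gets 0 arguments

begin
  wrapper.call(1)    # the proc accepts the argument...
rescue ArgumentError => e
  puts "...but the method raises #{e.class}"
end
```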

& turns a proc into a block.


Jim Freeze (jim@freeze.org) wrote:

On Friday, 21 February 2003 at 1:19:17 +0900, dblack@candle.superlink.net wrote:

pr = Proc.new { puts "hi from anonymous Proc" }
methproc = A.new.method(:talk).to_proc

[1,2,3].each &pr        # "hi from anonymous Proc\n" * 3
[1,2,3].each &methproc  # in `talk': wrong # of arguments(1 for 0)

Interesting. I would have just done

[1,2,3].each { methproc.call }

and never have noticed. Can you explain what is happening when
I replace {…} with &pr.


Eric Hodel - drbrain@segment7.net - http://segment7.net
All messages signed with fingerprint:
FEC2 57F1 D465 EB15 5D6E 7C11 332A 551C 796C 9F04

Hi –


On Fri, 21 Feb 2003, Mauricio Fernández wrote:

On Fri, Feb 21, 2003 at 01:19:17AM +0900, dblack@candle.superlink.net wrote:

Hi –

Arising from a recent discussion on #ruby-lang:

Proc objects created by Method#to_proc seem to care about their arity
in a way that other Proc objects don’t. To illustrate:

Procs don’t care about the arity when they are transformed into blocks
with ‘&’: unneeded arguments are discarded and if there’s not enough
they get filled with nil.

But that isn’t what’s happening with the to_proc procs. That’s why
their behavior strikes me as anomalous.

David



"Yukihiro Matsumoto" matz@ruby-lang.org wrote in message
news:1045775864.869816.25765.nullmailer@picachu.netlab.jp...

Hi,

Proc objects created by Method#to_proc seem to care about their arity
in a way that other Proc objects don’t.

Method#to_proc returns a Proc defined like

proc {|*args| self.call(*args)}

this explains the behavior.

While we're at it... I did a little testing, and two things strike me as odd:

irb(main):143:0* RUBY_VERSION
=> "1.6.8"
irb(main):144:0> procs = [
irb(main):145:1*   Proc.new{|a| p a},
irb(main):146:1*   Proc.new{|*a| p a},
irb(main):147:1*   Proc.new{|a,| p a},
irb(main):148:1*   Proc.new{|a,b| p a}
irb(main):149:1> ]
=> [#<Proc:0x27dda90>, #<Proc:0x27dda60>, #<Proc:0x27dda30>, #<Proc:0x27dda00>]
irb(main):150:0> procs.each_with_index{|p,i| puts "proc[#{i}].arity = #{p.arity}"}
proc[0].arity = -1
proc[1].arity = -1
proc[2].arity = 1
proc[3].arity = 2
=> [#<Proc:0x27dda90>, #<Proc:0x27dda60>, #<Proc:0x27dda30>, #<Proc:0x27dda00>]
irb(main):151:0> procs.each_with_index{|p,i| puts "proc[#{i}]"
irb(main):152:1>   begin
irb(main):153:2*     p.call(i)
irb(main):154:2>   rescue
irb(main):155:2>     puts "ERROR"
irb(main):156:2>   end
irb(main):157:1> }
proc[0]
0
proc[1]
[1]
proc[2]
2
proc[3]
ERROR
=> [#<Proc:0x27dda90>, #<Proc:0x27dda60>, #<Proc:0x27dda30>, #<Proc:0x27dda00>]
irb(main):158:0>

1. Why is it that "proc[0].arity = -1"? Or put differently: to me it seems
   there is an asymmetry between proc[0].arity = proc[1].arity and the
   behavior of proc[0], which resembles proc[2]'s more. If "proc[0].arity = 1"
   I would be fine, but -1 puzzles me...

2. Why does this parse "Proc.new{|a,| p a}"? (the comma with following bar)

Thanks!

robert

In message "proc {} vs. Method#to_proc" on 03/02/21, dblack@candle.superlink.net writes:

So why the error "wrong # of arguments"?


On Friday, 21 February 2003 at 3:38:30 +0900, Eric Hodel wrote:

Jim Freeze (jim@freeze.org) wrote:

On Friday, 21 February 2003 at 1:19:17 +0900, dblack@candle.superlink.net wrote:

pr = Proc.new { puts "hi from anonymous Proc" }
methproc = A.new.method(:talk).to_proc

[1,2,3].each &pr        # "hi from anonymous Proc\n" * 3
[1,2,3].each &methproc  # in `talk': wrong # of arguments(1 for 0)

Interesting. I would have just done

[1,2,3].each { methproc.call }

and never have noticed. Can you explain what is happening when
I replace {…} with &pr.

& turns a proc into a block.



irb(main):144:0> procs = [
irb(main):145:1*   Proc.new{|a| p a},
irb(main):146:1*   Proc.new{|*a| p a},
irb(main):147:1*   Proc.new{|a,| p a},
irb(main):148:1*   Proc.new{|a,b| p a}
irb(main):149:1> ]

1. Why is it that "proc[0].arity = -1"? Or put differently: to me it seems
   there is an asymmetry between proc[0].arity = proc[1].arity and the
   behavior of proc[0], which resembles proc[2]'s more. If "proc[0].arity = 1"
   I would be fine, but -1 puzzles me...

-1 means 0 or more args, -2 means 1 or more args, etc
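That encoding is easy to check with Proc#arity (sticking to splatted parameter lists, whose reported arity is unambiguous across versions):

```ruby
# arity encodes "n or more required" as -(n + 1)
p proc { |*a| }.arity     # -1: zero or more arguments
p proc { |a, *b| }.arity  # -2: one or more arguments
p proc { |a, b| }.arity   #  2: exactly two arguments
```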

2. Why does this parse "Proc.new{|a,| p a}"? (the comma with following bar)

the |...| construct is pretty much the same as multiple assignment with a
trailing comma:

a,b,c = [1,2,3]

a, = [:a, :b, :c]

a is now :a, and :b and :c got thrown away.

You can even do stuff like:

class Foo
  attr_accessor :bar
end

f = Foo.new

[1,2,3].each do |f.bar|
end

f.bar == 3  # true



Read the rest of my message :)

to_proc works like
proc { |*args| self.call(*args) }
calling the block works (it accepts any number of args, so even .call
would), self.call fails.

You can use #my_to_proc in [ruby-talk:65372] to make these procs behave like the
others.


On Fri, Feb 21, 2003 at 06:03:38AM +0900, dblack@candle.superlink.net wrote:

Hi –

On Fri, 21 Feb 2003, Mauricio Fernández wrote:

On Fri, Feb 21, 2003 at 01:19:17AM +0900, dblack@candle.superlink.net wrote:

Hi –

Arising from a recent discussion on #ruby-lang:

Proc objects created by Method#to_proc seem to care about their arity
in a way that other Proc objects don’t. To illustrate:

Procs don’t care about the arity when they are transformed into blocks
with ‘&’: unneeded arguments are discarded and if there’s not enough
they get filled with nil.

But that isn’t what’s happening with the to_proc procs. That’s why
their behavior strikes me as anomalous.



It seems that Method#to_proc works like

class Method
  def my_to_proc
    proc { |*args| self.call(*args) }
  end
end

no matter what you pass the block, it won’t complain, but at some point
you actually call the method…

In eval.c you have

static VALUE
bmcall(args, method)
    VALUE args, method;
{
    args = svalue_to_avalue(args);
    /* => this is the line responsible for the arity error if I'm right */
    return method_call(RARRAY(args)->len, RARRAY(args)->ptr, method);
}

static VALUE
method_proc(method)
    VALUE method;
{
    return rb_iterate((VALUE(*)_((VALUE)))mproc, 0, bmcall, method);
}

Read my other msg. for a quick “solution”.


On Fri, Feb 21, 2003 at 04:10:41AM +0900, Jim Freeze wrote:

On Friday, 21 February 2003 at 3:38:30 +0900, Eric Hodel wrote:

Jim Freeze (jim@freeze.org) wrote:

On Friday, 21 February 2003 at 1:19:17 +0900, dblack@candle.superlink.net wrote:

pr = Proc.new { puts "hi from anonymous Proc" }
methproc = A.new.method(:talk).to_proc

[1,2,3].each &pr        # "hi from anonymous Proc\n" * 3
[1,2,3].each &methproc  # in `talk': wrong # of arguments(1 for 0)

Interesting. I would have just done

[1,2,3].each { methproc.call }

and never have noticed. Can you explain what is happening when
I replace {…} with &pr.

& turns a proc into a block.

So why the error "wrong # of arguments"?



"Eric Hodel" drbrain@segment7.net wrote in message
news:20030220231342.GH73411@segment7.net

irb(main):144:0> procs = [
irb(main):145:1*   Proc.new{|a| p a},
irb(main):146:1*   Proc.new{|*a| p a},
irb(main):147:1*   Proc.new{|a,| p a},
irb(main):148:1*   Proc.new{|a,b| p a}
irb(main):149:1> ]

1. Why is it that "proc[0].arity = -1"? Or put differently: to me it seems
   there is an asymmetry between proc[0].arity = proc[1].arity and the
   behavior of proc[0], which resembles proc[2]'s more. If "proc[0].arity = 1"
   I would be fine, but -1 puzzles me...

-1 means 0 or more args, -2 means 1 or more args, etc

I know that, sorry if I wasn’t clear enough. What bugs me is the fact that
the block with a single argument has negative arity, indicating that there
can be more arguments. Why then doesn’t the block with two arguments have
arity -2, too? Or otherwise, why doesn’t the block with |a| have arity 1?

2. Why does this parse "Proc.new{|a,| p a}"? (the comma with following
   bar)

the |...| construct is pretty much the same as multiple assignment with a
trailing comma:

a,b,c = [1,2,3]

a, = [:a, :b, :c]

It just seemed strange to me that you can have a comma here but no
following enumerated element - especially since a block definition with
|a,| behaves differently from the one with |a|.

Regards

robert

Robert (bob.news@gmx.net) wrote:

Hi –

Hi –

Hi –

Arising from a recent discussion on #ruby-lang:

Proc objects created by Method#to_proc seem to care about their arity
in a way that other Proc objects don’t. To illustrate:

Procs don’t care about the arity when they are transformed into blocks
with ‘&’: unneeded arguments are discarded and if there’s not enough
they get filled with nil.

But that isn’t what’s happening with the to_proc procs. That’s why
their behavior strikes me as anomalous.

Read the rest of my message :)

to_proc works like
proc { |*args| self.call(*args) }
calling the block works (it accepts any number of args, so even .call
would), self.call fails.

You can use #my_to_proc in [ruby-talk:65372] to make these procs behave like the
others.

I understand that (we were already playing around with
reimplementations in the original irc session on this :) but my point
was that I think the behavior of these procs is potentially
misleading, and I wonder whether it should (or could) be changed in the
language itself. Somehow "to_proc" doesn't convey the idea of a
wrapper of this kind to me.

David


On Fri, 21 Feb 2003, Mauricio Fernández wrote:

On Fri, Feb 21, 2003 at 06:03:38AM +0900, dblack@candle.superlink.net wrote:

On Fri, 21 Feb 2003, Mauricio Fernández wrote:

On Fri, Feb 21, 2003 at 01:19:17AM +0900, dblack@candle.superlink.net wrote:



Hi,

-1 means 0 or more args, -2 means 1 or more args, etc

I know that, sorry if I wasn’t clear enough. What bugs me is the fact that
the block with a single argument has negative arity, indicating that there
can be more arguments. Why then doesn’t the block with two arguments have
arity -2, too? Or otherwise, why doesn’t the block with |a| have arity 1?

proc{|a| ...}.call(1,2,3)

works like

a = 1,2,3   # a = [1,2,3]

so that it takes an arbitrary number of arguments (thus arity = -1). If
you want to receive one argument and only one, you have to do |a,|,
which works like

a, = 1,2,3  # a = 1

						matz.
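The assignment half of matz's analogy can be run on its own:

```ruby
# Parallel assignment collects extra right-hand values into an array...
a = 1, 2, 3
p a   # prints [1, 2, 3]

# ...while a trailing comma keeps only the first value.
a, = 1, 2, 3
p a   # prints 1
```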

In message "Re: proc {} vs. Method#to_proc" on 03/02/21, "Robert Klemme" bob.news@gmx.net writes:

Robert Klemme wrote:

I know that, sorry if I wasn’t clear enough. What bugs me is the fact that
the block with a single argument has negative arity, indicating that there
can be more arguments. Why then doesn’t the block with two arguments have
arity -2, too? Or otherwise, why doesn’t the block with |a| have arity 1?

I think it has to do with being dynamic so that iterators and blocks can
be combined more easily.

The fact is that with one block argument, if more values are yielded to
it, they are collected in an array and passed in the one argument. Thus
-1 arity. With two block parameters, where would the extra yielded
values go? We could have treated the last parameter as *args, as in
methods, but then too much is going on at once, for me at least. (And
getting used to blocks took a while for me.)

For a two argument -2 arity block:
{|a,*b|}

It might not seem apparent at first, but the special case of the lone
block parameter helps make Ruby more dynamic, IMHO. Iterating over
hashes is a rather obvious case:
{1=>3, 2=>4}.each{|a,b| puts "#{a}->#{b}"}
versus
{1=>3, 2=>4}.each{|a| puts "#{a.first}->#{a.last}"}

With the latter block, you could have your iterator change a lot more
without affecting your usage of it. You, the block writer, do not have
to worry about whether the iterator yields one value (an array with two
values) or two separate values. And that is yet another bit of freedom
and dynamicity that makes Ruby stand out. (This kind of magic is
inherent in return values too, IIRC)
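The two forms can be compared side by side (using map rather than each so the results are collectable):

```ruby
# Each hash entry is yielded as one [key, value] pair; a lone block
# parameter receives the pair itself, two parameters destructure it.
h = { 1 => 3, 2 => 4 }

split = h.map { |a, b| "#{a}->#{b}" }          # pair destructured
whole = h.map { |a| "#{a.first}->#{a.last}" }  # pair kept intact

p split  # ["1->3", "2->4"]
p whole  # ["1->3", "2->4"] -- same result, chosen by the block
```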

It just seemed strange to me that you can have a comma here but no
following enumerated element - especially since a block definition with
|a,| behaves differently from the one with |a|.

An extra character and visual complexity for those who are really strict
about what they'll accept from the iterator and bind themselves closer
to the iterator implementation. Suits them right :)



Kent Dahl - http://www.stud.ntnu.no/~kentda/
NTNU - graduate engineering - 5. year
Industrial economics and technological management

Tangentially, is the following code safe:

def initialize(*factors)
  @factors = factors
  @filters = []
  @factors.select {|i| i.is_a?(Proc)}.each {|fn|
    (@filters[(fn.arity.abs) - 1] ||= []) << fn
    # ( a kludge to get around the fact that {|x|}.arity = -1 )
  }
  @factors.reject!{|i| i.is_a?(Proc)}
  @dim = @factors.length
end

assuming the blocks in @factors are guaranteed not to have *'d arguments?
Or is there some insidious corner case I'm missing?

martin


Robert Klemme bob.news@gmx.net wrote:

I know that, sorry if I wasn’t clear enough. What bugs me is the fact that
the block with a single argument has negative arity, indicating that there
can be more arguments. Why then doesn’t the block with two arguments have
arity -2, too? Or otherwise, why doesn’t the block with |a| have arity 1?

Hi Kent,

thanks for the lengthy explanation. Some remarks below.

"Kent Dahl" kentda@stud.ntnu.no wrote in message
news:3E55E417.32BFBC78@stud.ntnu.no...

Robert Klemme wrote:

I know that, sorry if I wasn't clear enough. What bugs me is the fact that
the block with a single argument has negative arity, indicating that there
can be more arguments. Why then doesn't the block with two arguments have
arity -2, too? Or otherwise, why doesn't the block with |a| have arity 1?

I think it has to do with being dynamic so that iterators and blocks can
be combined more easily.

The fact is that with one block argument, if more values are yielded to
it, they are collected in an array and passed in the one argument. Thus
-1 arity. With two block parameters, where would the extra yielded
values go? We could have treated the last parameter as *args, as in
methods, but then too much is going on at once, for me at least. (And
getting used to blocks took a while for me.)

For a two argument -2 arity block:
{|a,*b|}

It might not seem apparent at first, but the special case of the lone
block parameter helps make Ruby more dynamic, IMHO. Iterating over
hashes is a rather obvious case:
{1=>3, 2=>4}.each{|a,b| puts "#{a}->#{b}"}
versus
{1=>3, 2=>4}.each{|a| puts "#{a.first}->#{a.last}"}

With the latter block, you could have your iterator change a lot more
without affecting your usage of it. You, the block writer, do not have
to worry about whether the iterator yields one value (an array with two
values) or two separate values. And that is yet another bit of freedom
and dynamicity that makes Ruby stand out. (This kind of magic is
inherent in return values too, IIRC)

I'm not fully convinced. The reason is that, unless the parameters are
just passed on to some other processing, the block writer must expect
multiple values. In your example, writing

{1=>3, 2=>4}.each{|*a| puts "#{a.first}->#{a.last}"}  # note the "*a"

is not too much overhead but a bit clearer IMHO. If you write a block
that relies on some type property (i.e. method present) you have to
explicitly deal with that:

X.each do |a|
  if a.kind_of? String then
    p a if a =~ /foo/
  else
    a.each do |e|
      p e if e =~ /foo/
    end
  end
end

or

X.each do |a|
  a.to_a.each do |e|
    p e if e =~ /foo/
  end
end

or

X.each do |*a|
  a.each do |e|
    p e if e =~ /foo/
  end
end

But I guess it’s a lot a matter of taste.

It just seemed strange to me that you can have a comma here but no
following enumerated element - especially since a block definition with
|a,| behaves differently from the one with |a|.

An extra character and visual complexity for those who are really strict
about what they'll accept from the iterator and bind themselves closer
to the iterator implementation. Suits them right :)

:)

Regards

robert

"Martin DeMello" martindemello@yahoo.com wrote in message
news:Y0w5a.296694$sV3.9652811@news3.calgary.shaw.ca...

I know that, sorry if I wasn't clear enough. What bugs me is the fact that
the block with a single argument has negative arity, indicating that there
can be more arguments. Why then doesn't the block with two arguments have
arity -2, too? Or otherwise, why doesn't the block with |a| have arity 1?

Tangentially, is the following code safe:

def initialize(*factors)
  @factors = factors
  @filters = []
  @factors.select {|i| i.is_a?(Proc)}.each {|fn|
    (@filters[(fn.arity.abs) - 1] ||= []) << fn
    # ( a kludge to get around the fact that {|x|}.arity = -1 )
  }
  @factors.reject!{|i| i.is_a?(Proc)}
  @dim = @factors.length
end

assuming the blocks in @factors are guaranteed not to have *'d arguments?
Or is there some insidious corner case I'm missing?

I'm not sure whether I understand what you're up to. I wonder why you
subtract 1 from abs(arity).

And another tip aside: move the reject! above the @factors.select. Then
you can omit the select and gain a bit of performance. :)

robert

Robert Klemme bob.news@gmx.net wrote:

Robert Klemme wrote:

I'm not fully convinced. The reason is that, unless the parameters are
just passed on to some other processing, the block writer must expect
multiple values. In your example, writing

{1=>3, 2=>4}.each{|*a| puts "#{a.first}->#{a.last}"}  # note the "*a"

is not too much overhead but a bit clearer IMHO.

And a lot more symmetric with methods, which is yet another argument for
the asterisk. But after having grown used to it, I still don't want to
have to use the * here.

If you write a block that
relies on some type property (i.e. method present) you have to
explicitly deal with that:

Do I? First of all, there is the issue of duck typing, so I wouldn't
test like this anyway. What I am advocating is not that yield 1 and
yield 1,2,3 are interchangeable (which would require explicit testing of
the type), but that yield 1,2,3 and yield [1,2,3] are.

That doesn't seem like a big thing at first, until you suddenly find the
need to extend the returned values for some special cases. I'll use
something I wrote up to convince myself after writing the previous post.
Let's say we have been yielding two values, key and value, such as from a
Hash, yielding it with "yield key, value". Then suddenly yet another
property is needed. I could change values to [value, new_property] and
extract the value when iterating normally and yield all including
new_property in a special method. This way lies madness, and it isn't
particularly OO either. So let's write a simple Hash that uses a concrete
object for the entry pair key-value:

class Entry < Array
end

class MyHash
  def initialize
    @entries = []
  end
  def [](key)
    # strange hashing magic
  end
  def insert(key, value)
    e = Entry.new
    e << key
    e << value
    @entries << e
  end
  def each
    @entries.each{|e| yield e}
  end
end

m = MyHash.new
m.insert( "a", "Alfa" )
m.insert( "b", "Beta" )
m.each{|e| p e.type }            # I care about entry metadata
m.each{|k,v| p k.type, v.type }  # I don't, gimme the content!

Those two calls to each are what I’m all about. If I want access to all
of the object, and do the cumbersome lookup, and have access to whatever
extra properties that I decide to add to Entry, then I only write one
block parameter. If I only want the key and value from the entry, I use
two. The control is in the receiving block, and I do not have to
consider both cases in one block; I choose which way to attack what is
yielded by how I define the block.

Now, this doesn't look all too wrong with the *a version, does it? Well,
it blows up with *a. Remember how methods use * to pack and unpack
arguments in an Array? The line

m.each{|*e| p e.type } # But I cared about entry metadata! sob

turns out to turn my stuff inside out, removing my dear Entry class and
rewrapping everything in a new Array. Should the * be allowed to have a
differing behaviour and take over for the lone block argument? Doing so
would make block arguments more like method arguments, removing one
surprise, but introducing another: *e can turn out to be something
different from an Array with blocks. Net gain, zilch.

But I guess it’s a lot a matter of taste.

Aye, and like fine wine, it's an acquired one at that :)

Disclaimer: I hope the above makes some sense and isn't too wrong in
parts. I haven't slept too well, so my head is everywhere.




"Martin DeMello" martindemello@yahoo.com wrote in message

Tangentially, is the following code safe:

def initialize(*factors)
  @factors = factors
  @filters = []
  @factors.select {|i| i.is_a?(Proc)}.each {|fn|
    (@filters[(fn.arity.abs) - 1] ||= []) << fn
    # ( a kludge to get around the fact that {|x|}.arity = -1 )
  }
  @factors.reject!{|i| i.is_a?(Proc)}
  @dim = @factors.length
end

assuming the blocks in @factors are guaranteed not to have *'d arguments?
Or is there some insidious corner case I'm missing?

I'm not sure whether I understand what you're up to. I wonder why you
subtract 1 from abs(arity).

To have the array start indexing from 0 :) The bane of array programmers
for generations now. The abs() is the kludge I was referring to - it's
there solely to take care of the arity=-1 case.

And another tip aside: move the reject! above the @factors.select. Then
you can omit the select and gain a bit of performance. :)

Sadly, reject! returns the array, not the rejected elements. I’m trying
to separate the array into procs and non-procs (what I ought to do is to
use partition, now that it’s in 1.8).
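With partition the select/reject! pair collapses into one pass; a sketch with made-up factor values:

```ruby
# Enumerable#partition returns two arrays: the elements for which the
# block is true, then the rest -- one traversal instead of two.
factors = [2, proc { |x| x > 0 }, 5, proc { |x, y| x + y }]
procs, non_procs = factors.partition { |i| i.is_a?(Proc) }

p non_procs   # [2, 5]
p procs.size  # 2
```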

martin


Robert Klemme bob.news@gmx.net wrote: