I am trying to copy a file from my local machine to a remote machine using Net::SSH. The copy fails part way leaving the file partly written on the remote machine. The size of the remote file portion is always 131072 bytes (128 kB). My local file is ~1.2MB. This leads me to suspect that the data are being fed in chunks and something is going wrong after the first chunk -- though that's a guess.
Here's the output:
Copying /path/to/my/file.zip to /path/to/remote/directory/file.zip...done.
/usr/local/lib/ruby/gems/1.8/gems/net-ssh-1.0.9/lib/net/ssh/transport/session.rb:256:in `wait_for_message': disconnected: Received data for nonexistent channel 0. (2) (Net::SSH::Transport::Disconnect)
from /usr/local/lib/ruby/gems/1.8/gems/net-ssh-1.0.9/lib/net/ssh/transport/session.rb:240:in `wait_for_message'
from /usr/local/lib/ruby/gems/1.8/gems/net-ssh-1.0.9/lib/net/ssh/connection/driver.rb:148:in `process'
from /usr/local/lib/ruby/gems/1.8/gems/net-ssh-1.0.9/lib/net/ssh/connection/driver.rb:138:in `loop'
from /usr/local/lib/ruby/gems/1.8/gems/net-ssh-1.0.9/lib/net/ssh/service/process/popen3.rb:66:in `popen3'
[snipped rest of trace]
Interestingly the 'puts "...done."' line is executed before the error is thrown. Since session.process.popen3 is synchronous, does that not imply that the copy has finished?
And here's the code (based on Ruby Cookbook recipe 14.11):
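A minimal sketch of what a Cookbook-style copy_file probably looks like -- assuming session.process.popen3 piping into a remote cat; the helper name, the shell command and the layout are guesses pieced together from the fragments quoted further down:
# Sketch only: Net::SSH 1.x's process service runs 'cat > target' remotely,
# and everything written to stdin ends up in the target file.
def copy_file(session, source_path, target_path)
  print "Copying #{source_path} to #{target_path}..."
  session.process.popen3("cat > #{target_path}") do |stdin, stdout, stderr|
    open(source_path) { |f| stdin.write f.read }
    puts "done."
  end
end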
> I am trying to copy a file from my local machine to a remote machine using
what platform(s)?
> Net::SSH. The copy fails part way leaving the file partly written on the
> remote machine. The size of the remote file portion is always 131072 bytes
> (128 kB). My local file is ~1.2MB. This leads me to suspect that the data
[...]
> And here's the code (based on Ruby Cookbook recipe 14.11):
# I suspect this should be:
open(source_path, 'rb') { |f| stdin.write f.read }
# so that that ctrl-Z isn't treated as EOF, or something ghastly
# of that sort.
> puts "done."
> end
> end
> Net::SSH.start(host, :username => user) do |session|
>   copy_file session, my_file, my_file.sub(/^.*\//, "#{upload_dir}")
> end
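For reference, the sub in that copy_file call just strips the local directory part and prepends the upload directory; with illustrative values taken from the output above:
my_file    = "/path/to/my/file.zip"
upload_dir = "/path/to/remote/directory/"
my_file.sub(/^.*\//, "#{upload_dir}")  # => "/path/to/remote/directory/file.zip"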
# Parses the environment variables printed by `ssh-agent -s`
# (e.g. SSH_AUTH_SOCK and SSH_AGENT_PID) into a hash.
class SSHAgent
  def initialize
    @agent_env = Hash.new
    agenthandle = IO.popen("/usr/bin/ssh-agent -s", "r")
    agenthandle.each_line do |line|
      if line.index("echo") == nil
        line = line.slice(0..(line.index(';') - 1))
        key, value = line.chomp.split(/=/)
        puts "Key = #{key}, Value = #{value}"
        @agent_env[key] = value
      end
    end
  end
  # Hash-style accessor, e.g. agent['SSH_AUTH_SOCK']
  def [](key)
    return @agent_env[key]
  end
end
Net::SSH.start('192.168.1.12',
               :username          => 'pgquiles',
               :compression_level => 0,
               :compression       => 'none') do |session|
  session.sftp.connect do |sftp|
    sftp.put_file("bigvideo.avi", "bigvideo.avi")
  end
end
That code is using public-key, password-less cryptography, but with slight
modifications it will work with public-key+password or only password. There
is some more info about Net::SFTP in the blog post.
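As a sketch of one of those slight modifications -- authenticating with a password instead of an agent-held key (the :password value here is a placeholder):
Net::SSH.start('192.168.1.12',
               :username => 'pgquiles',
               :password => 'secret') do |session|
  session.sftp.connect do |sftp|
    sftp.put_file("bigvideo.avi", "bigvideo.avi")
  end
end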
···
On Wednesday 08 November 2006 18:28, Andrew Stewart wrote:
Hello,
I am trying to copy a file from my local machine to a remote machine
using Net::SSH. The copy fails part way leaving the file partly
written on the remote machine. The size of the remote file portion
is always 131072 bytes (128 kB). My local file is ~1.2MB. This
leads me to suspect that the data are being fed in chunks and
something is going wrong after the first chunk -- though that's a guess.
Here's the output:
Copying /path/to/my/file.zip to /path/to/remote/directory/file.zip...done.
/usr/local/lib/ruby/gems/1.8/gems/net-ssh-1.0.9/lib/net/ssh/transport/session.rb:256:in `wait_for_message': disconnected: Received data for nonexistent channel 0. (2) (Net::SSH::Transport::Disconnect)
from /usr/local/lib/ruby/gems/1.8/gems/net-ssh-1.0.9/lib/net/ssh/transport/session.rb:240:in `wait_for_message'
from /usr/local/lib/ruby/gems/1.8/gems/net-ssh-1.0.9/lib/net/ssh/connection/driver.rb:148:in `process'
from /usr/local/lib/ruby/gems/1.8/gems/net-ssh-1.0.9/lib/net/ssh/connection/driver.rb:138:in `loop'
from /usr/local/lib/ruby/gems/1.8/gems/net-ssh-1.0.9/lib/net/ssh/service/process/popen3.rb:66:in `popen3'
[snipped rest of trace]
Interestingly the 'puts "...done."' line is executed before the error
is thrown. Since session.process.popen3 is synchronous, does that
not imply that the copy has finished?
And here's the code (based on Ruby Cookbook recipe 14.11):
# I suspect this should be:
open(source_path, 'rb') { |f| stdin.write f.read }
# so that that ctrl-Z isn't treated as EOF, or something ghastly
# of that sort.
I tried that just now but sadly it didn't change the result.
undefined method `flush' for #<Net::SSH::Service::Process::POpen3Manager::SSHStdinPipe:0x6f387c> (NoMethodError)
from /usr/local/lib/ruby/gems/1.8/gems/net-ssh-1.0.9/lib/net/ssh/service/process/popen3.rb:52:in `popen3'
Regards,
Andy
···
On 8 Nov 2006, at 18:19, ara.t.howard@noaa.gov wrote:
That code is using public-key, password-less cryptography, but with slight
modifications it will work with public-key+password or only password. There
is some more info about Net::SFTP in the blog post.
Thanks very much for this. I'll check it out (though I've just got my code working after several helpful suggestions from others) because it's using a different approach.
Thanks again,
Andy
···
On 8 Nov 2006, at 18:43, Pau Garcia i Quiles wrote:
OK, given they're both unices, that figures... I'd then read and
write in smaller chunks. I don't do enough deep networking to know
what packet sizes are sensible, but...
> # I suspect this should be:
> open(source_path, 'rb') { |f| stdin.write f.read }
> # so that that ctrl-Z isn't treated as EOF, or something ghastly
> # of that sort.
> I tried that just now but sadly it didn't change the result.
--------------------------------------------------------------- IO#write
     ios.write(string)    => integer
------------------------------------------------------------------------
     Writes the given string to _ios_. The stream must be opened for
     writing. If the argument is not a string, it will be converted to a
     string using +to_s+. Returns the number of bytes written.
[...]
...probably a good idea to check the return value. And for f.read.
I had the same problem with my ssh connection closing early. I solved
it by opening a shell -- I don't know if that can help with sftp, though. I
am having no problems copying files with the ftp library.
Net::SSH.start(SSH_SERVER, :username => u, :password => p) do |session|
  shell = session.shell.sync
  shell.exec(cmd).stdout
end
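And for the ftp library mentioned above, a minimal sketch with the standard library's Net::FTP (the server name and credentials are made up):
require 'net/ftp'

# putbinaryfile streams the local file in binary mode to the remote name.
Net::FTP.open('ftp.example.com', 'user', 'secret') do |ftp|
  ftp.putbinaryfile('/path/to/my/file.zip', 'file.zip')
end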
open(source_path) { |f|
  x = f.read
  puts "x: #{x.length}"
  result = stdin.write x
  puts "result: #{result}"
}
And it told me that x is the size of my file, so f.read is reading in the entire file.
The stdin object is an instance of SSHStdinPipe. The documentation for its write method [1] says: "Write the given data as channel data to the underlying channel." I.e. it doesn't mention anything about size limits.
But clearly there is a size limit somewhere and the 'stdin.write data' method must be the line which hits it. I dug into the source but got a bit confused.
>> OK, given they're both unices, that figures... I'd then read and
>> write in smaller chunks. I don't do enough deep networking to know
>> what packet sizes are sensible, but...
> [ri output]
>> ...probably a good idea to check the return value. And for f.read.
> I changed this:
> [...]
> To this:
> open(source_path) { |f|
>   x = f.read
>   puts "x: #{x.length}"
>   result = stdin.write x
>   puts "result: #{result}"
> }
> And it told me that x is the size of my file, so f.read is reading in the
> entire file.
what was #{result}? Oh ...
> The stdin object is an instance of SSHStdinPipe. The documentation for its
> write method [1] says: "Write the given data as channel data to the underlying
> channel." I.e. it doesn't mention anything about size limits.
... and write doesn't return how many things it wrote, breaking the Duck Type
that says "treat me like IO". Rats.
> But clearly there is a size limit somewhere and the 'stdin.write data' method
> must be the line which hits it. I dug into the source but got a bit confused.
> Any more ideas?
I'd read then write q bytes at a time until the whole file is read.
I think the value for q is the length of the file you ended up with
at the far end. That would make a good start for a search, anyway.
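A minimal sketch of that read-and-write loop, assuming the same popen3 stdin as in the original script (q is a placeholder for whatever chunk size turns out to work):
q = 1024
open(source_path, 'rb') do |f|
  while chunk = f.read(q)
    stdin.write chunk
  end
end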
I tried various chunk sizes, descending from 128 kB (the amount that made it over the wire previously). The failures remained until the chunk size came down to 1024 bytes.
The overall speed was comparable with scp from the shell.
Thanks for your help,
Andy
···
On 8 Nov 2006, at 18:32, Hugh Sasse wrote:
I'd read then write q bytes at a time until the whole file is read
I think the value for q is the length of the file you ended up with
at the far end. That would make a good start for a search, anyway.