Net::SSH Data Stream Problem

Hello,

I am trying to copy a file from my local machine to a remote machine using Net::SSH. The copy fails part way leaving the file partly written on the remote machine. The size of the remote file portion is always 131072 bytes (128 kB). My local file is ~1.2MB. This leads me to suspect that the data are being fed in chunks and something is going wrong after the first chunk -- though that's a guess.

Here's the output:

Copying /path/to/my/file.zip to /path/to/remote/directory/file.zip...done.
/usr/local/lib/ruby/gems/1.8/gems/net-ssh-1.0.9/lib/net/ssh/transport/session.rb:256:in `wait_for_message': disconnected: Received data for nonexistent channel 0. (2) (Net::SSH::Transport::Disconnect)
         from /usr/local/lib/ruby/gems/1.8/gems/net-ssh-1.0.9/lib/net/ssh/transport/session.rb:240:in `wait_for_message'
         from /usr/local/lib/ruby/gems/1.8/gems/net-ssh-1.0.9/lib/net/ssh/connection/driver.rb:148:in `process'
         from /usr/local/lib/ruby/gems/1.8/gems/net-ssh-1.0.9/lib/net/ssh/connection/driver.rb:138:in `loop'
         from /usr/local/lib/ruby/gems/1.8/gems/net-ssh-1.0.9/lib/net/ssh/service/process/popen3.rb:66:in `popen3'
[snipped rest of trace]

Interestingly the 'puts "...done."' line is executed before the error is thrown. Since session.process.popen3 is synchronous, does that not imply that the copy has finished?

And here's the code (based on Ruby Cookbook recipe 14.11):

require 'rubygems'
require 'net/ssh'

host = 'xyz.com'
user = 'me'
upload_dir = '/path/to/remote/directory/'
my_file = '/path/to/my/file.zip'

def copy_file(session, source_path, destination_path=nil)
   destination_path ||= source_path
   cmd = %{cat > "#{destination_path.gsub('"', '\"')}"}
   session.process.popen3 cmd do |stdin, stdout, stderr|
     print "Copying #{source_path} to #{destination_path}..."
     open(source_path) { |f| stdin.write f.read }
     puts "done."
   end
end

Net::SSH.start(host, :username => user) do |session|
   copy_file session, my_file, my_file.sub(/^.*\//, "#{upload_dir}")
end

I would greatly appreciate any help.

Thanks and regards,
Andy Stewart

> Hello,
>
> I am trying to copy a file from my local machine to a remote machine using

what platform(s)?

> Net::SSH. The copy fails part way leaving the file partly written on the
> remote machine. The size of the remote file portion is always 131072 bytes
> (128 kB). My local file is ~1.2MB. This leads me to suspect that the data
>
>         [...]
>
> And here's the code (based on Ruby Cookbook recipe 14.11):
>
> require 'rubygems'
> require 'net/ssh'
>
> host = 'xyz.com'
> user = 'me'
> upload_dir = '/path/to/remote/directory/'
> my_file = '/path/to/my/file.zip'
>
> def copy_file(session, source_path, destination_path=nil)
>   destination_path ||= source_path
>   cmd = %{cat > "#{destination_path.gsub('"', '\"')}"}
>   session.process.popen3 cmd do |stdin, stdout, stderr|
>     print "Copying #{source_path} to #{destination_path}..."
>     open(source_path) { |f| stdin.write f.read }

      # I suspect this should be:
      open(source_path, 'rb') { |f| stdin.write f.read }
      # so that ctrl-Z isn't treated as EOF, or something ghastly
      # of that sort.

>     puts "done."
>   end
> end
>
> Net::SSH.start(host, :username => user) do |session|
>   copy_file session, my_file, my_file.sub(/^.*\//, "#{upload_dir}")
> end

        Hugh

···

On Thu, 9 Nov 2006, Andrew Stewart wrote:

> Interestingly the 'puts "...done."' line is executed before the error is
> thrown. Since session.process.popen3 is synchronous, does that not imply
> that the copy has finished?
>
> And here's the code (based on Ruby Cookbook recipe 14.11):
>
> require 'rubygems'
> require 'net/ssh'
>
> host = 'xyz.com'
> user = 'me'
> upload_dir = '/path/to/remote/directory/'
> my_file = '/path/to/my/file.zip'

does this help?

def copy_file(session, source_path, destination_path=nil)
  destination_path ||= source_path
  cmd = %{cat > "#{destination_path.gsub('"', '\"')}"}
  session.process.popen3 cmd do |stdin, stdout, stderr|
    print "Copying #{source_path} to #{destination_path}..."
    open(source_path) { |f| stdin.write f.read }

    stdin.flush
    stdin.close_write
    stdout.read
    stderr.read

    puts "done."
  end
end

-a

···

On Thu, 9 Nov 2006, Andrew Stewart wrote:
--
my religion is very simple. my religion is kindness. -- the dalai lama
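[Editor's note: the stdin.close_write suggested above matters for any `cat`-style pipeline, because the reading side only sees EOF once the write end is closed. The pattern can be sketched locally with Open3 and cat, with no SSH involved; this is an illustrative analogue, not the thread's Net::SSH code.]

```ruby
require 'open3'

# Feed data to cat's stdin, close the write end so cat sees EOF,
# then collect its output. Without close_write, stdout.read would
# block forever waiting for more input.
data = "hello, world\n" * 3

output = Open3.popen3('cat') do |stdin, stdout, _stderr, _wait_thr|
  stdin.write data
  stdin.close_write
  stdout.read        # block's value becomes popen3's return value
end
```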

What about this:
http://www.elpauer.org/index.php?p=213

require 'net/ssh'
require 'net/sftp'

class SSHAgent
  def initialize
    @agent_env = Hash.new
    agenthandle = IO.popen("/usr/bin/ssh-agent -s", "r")
    agenthandle.each_line do |line|
      if line.index("echo") == nil
        line = line.slice(0..(line.index(';') - 1))
        key, value = line.chomp.split(/=/)
        puts "Key = #{key}, Value = #{value}"
        @agent_env[key] = value
      end
    end
  end

  def [](key)
    return @agent_env[key]
  end

agent = SSHAgent.new
ENV["SSH_AUTH_SOCK"] = agent["SSH_AUTH_SOCK"]
ENV["SSH_AGENT_PID"] = agent["SSH_AGENT_PID"]
system("/usr/bin/ssh-add")

Net::SSH.start( '192.168.1.12',
                :username=>'pgquiles',
                :compression_level=>0,
                :compression=>'none'
              ) do |session|
  session.sftp.connect do |sftp|
    sftp.put_file("bigvideo.avi", "bigvideo.avi")
  end
end

That code is using public-key, password-less cryptography, but with slight
modifications it will work with public-key+password or only password. There
is some more info about Net::SFTP in the blog post.
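[Editor's note: the line parsing inside SSHAgent#initialize can be exercised without starting a real agent. A minimal sketch against a made-up sample of `ssh-agent -s` output follows; the socket path and PIDs are invented for illustration.]

```ruby
# Made-up sample of `ssh-agent -s` output; real values vary per run.
sample = <<~OUT
  SSH_AUTH_SOCK=/tmp/ssh-XXXXXX/agent.1234; export SSH_AUTH_SOCK;
  SSH_AGENT_PID=1235; export SSH_AGENT_PID;
  echo Agent pid 1235;
OUT

agent_env = {}
sample.each_line do |line|
  next unless line.index("echo").nil?                # skip the echo line
  assignment = line.slice(0..(line.index(';') - 1))  # drop "; export ...;"
  key, value = assignment.chomp.split(/=/)
  agent_env[key] = value
end
```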

···

On Wednesday 08 November 2006 18:28, Andrew Stewart wrote:

> Hello,
>
> I am trying to copy a file from my local machine to a remote machine
> using Net::SSH. The copy fails part way leaving the file partly
> written on the remote machine.

[snipped rest of original message]

--
Pau Garcia i Quiles
http://www.elpauer.org
(Due to the amount of work, I usually need 10 days to answer)

> what platform(s)?

Local: OS X
Remote: Linux

>      # I suspect this should be:
>      open(source_path, 'rb') { |f| stdin.write f.read }
>      # so that ctrl-Z isn't treated as EOF, or something ghastly
>      # of that sort.

I tried that just now but sadly it didn't change the result.

Andy

···

On 8 Nov 2006, at 17:39, Hugh Sasse wrote:

It gives me an undefined method error for flush:

undefined method `flush' for #<Net::SSH::Service::Process::POpen3Manager::SSHStdinPipe:0x6f387c> (NoMethodError)
         from /usr/local/lib/ruby/gems/1.8/gems/net-ssh-1.0.9/lib/net/ssh/service/process/popen3.rb:52:in `popen3'

Regards,
Andy

···

On 8 Nov 2006, at 18:19, ara.t.howard@noaa.gov wrote:

> does this help?
>
> def copy_file(session, source_path, destination_path=nil)
>   destination_path ||= source_path
>   cmd = %{cat > "#{destination_path.gsub('"', '\"')}"}
>   session.process.popen3 cmd do |stdin, stdout, stderr|
>     print "Copying #{source_path} to #{destination_path}..."
>     open(source_path) { |f| stdin.write f.read }
>
>     stdin.flush
>     stdin.close_write
>     stdout.read
>     stderr.read
>
>     puts "done."
>   end
> end

> What about this:
> http://www.elpauer.org/index.php?p=213
>
> [snipped]
>
> That code is using public-key, password-less cryptography, but with slight
> modifications it will work with public-key+password or only password. There
> is some more info about Net::SFTP in the blog post.

Thanks very much for this. I'll check it out (though I've just got my code working after several helpful suggestions from others) because it uses a different approach.

Thanks again,
Andy

···

On 8 Nov 2006, at 18:43, Pau Garcia i Quiles wrote:

> Local: OS X
> Remote: Linux

OK, given they're both unices, that figures... I'd then read and
write in smaller chunks. I don't do enough deep networking to know
what packet sizes are sensible, but...

--------------------------------------------------------------- IO#write
     ios.write(string) => integer
------------------------------------------------------------------------
     Writes the given string to _ios_. The stream must be opened for
     writing. If the argument is not a string, it will be converted to a
     string using +to_s+. Returns the number of bytes written.
        [...]

...probably a good idea to check the return value. And for f.read.

        Hugh

···

On Thu, 9 Nov 2006, Andrew Stewart wrote:
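[Editor's note: Hugh's advice to check IO#write's return value can be made concrete with plain Ruby IO objects. write_all below is a hypothetical helper, not part of the thread's code; it loops until every byte has been accepted, using the byte count that IO#write returns.]

```ruby
require 'stringio'

# Keep writing until the whole string is accepted, using IO#write's
# return value (number of bytes written) to handle partial writes.
def write_all(io, data)
  total = 0
  while total < data.bytesize
    total += io.write(data.byteslice(total, data.bytesize - total))
  end
  total
end

sink = StringIO.new
written = write_all(sink, "chunk of data " * 100)
```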

I had the same problem with my SSH connection closing early. I solved
it by opening a shell -- I don't know if that can help with sftp though. I
am having no problems copying files with the ftp library.

Net::SSH.start(SSH_SERVER, :username => u, :password => p) do |session|
  shell = session.shell.sync
  shell.exec( cmd ).stdout
end

I changed this:

     open(source_path) { |f| stdin.write f.read }

To this:

     open(source_path) { |f|
       x = f.read
       puts "x: #{x.length}"
       result = stdin.write x
       puts "result: #{result}"
     }

And it told me that x is the size of my file, so f.read is reading in the entire file.

The stdin object is an instance of SSHStdinPipe. The documentation for its write method [1] says: "Write the given data as channel data to the underlying channel." I.e. it doesn't mention anything about size limits.

But clearly there is a size limit somewhere and the 'stdin.write data' method must be the line which hits it. I dug into the source but got a bit confused.

Any more ideas?

Thanks,
Andy

[1] http://net-ssh.rubyforge.org/api/classes/Net/SSH/Service/Process/POpen3Manager/SSHStdinPipe.html

···

On 8 Nov 2006, at 17:56, Hugh Sasse wrote:

> > OK, given they're both unices, that figures... I'd then read and
> > write in smaller chunks. I don't do enough deep networking to know
> > what packet sizes are sensible, but...
>
>         [ri output]
>
> > ...probably a good idea to check the return value. And for f.read.
>
> I changed this:
>
>         [...]
>
> To this:
>
>    open(source_path) { |f|
>      x = f.read
>      puts "x: #{x.length}"
>      result = stdin.write x
>      puts "result: #{result}"
>    }
>
> And it told me that x is the size of my file, so f.read is reading in the
> entire file.

what was #{result}? Oh ...

> The stdin object is an instance of SSHStdinPipe. The documentation for its
> write method [1] says: "Write the given data as channel data to the underlying
> channel." I.e. it doesn't mention anything about size limits.

... and write doesn't return how many things it wrote, breaking the Duck Type
that says "treat me like IO". Rats.

> But clearly there is a size limit somewhere and the 'stdin.write data' method
> must be the line which hits it. I dug into the source but got a bit confused.
>
> Any more ideas?

I'd read then write q bytes at a time until the whole file is read.
I think the value for q is the length of the file you ended up with
at the far end. That would make a good start for a search, anyway.

> Thanks,
> Andy
>
> [1] http://net-ssh.rubyforge.org/api/classes/Net/SSH/Service/Process/POpen3Manager/SSHStdinPipe.html

Nice docs, except for not being able to see and navigate to the parent
class easily.

        Hugh

···

On Thu, 9 Nov 2006, Andrew Stewart wrote:

On 8 Nov 2006, at 17:56, Hugh Sasse wrote:

Thanks for the suggestion -- it worked. Here are the code changes (based on Ruby Cookbook recipe 6.6):

I added:

class File
   def each_chunk(chunk_size=1024)
     yield read(chunk_size) until eof?
   end
end

And I changed:

     open(source_path) { |f| stdin.write f.read }

To:

     open(source_path) { |f|
       f.each_chunk { |chunk| stdin.write chunk }
     }

I tried various chunk sizes, descending from 128 kB (the size which previously made it over the wire). The copy kept failing until the chunk size came down to 1024 bytes.

The overall speed was comparable with scp from the shell.

Thanks for your help,
Andy

···

On 8 Nov 2006, at 18:32, Hugh Sasse wrote:

> I'd read then write q bytes at a time until the whole file is read.
> I think the value for q is the length of the file you ended up with
> at the far end. That would make a good start for a search, anyway.
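[Editor's note: the chunked approach that resolved the thread can be exercised locally. File#each_chunk is the Cookbook 6.6 helper quoted above; copy_chunked is a hypothetical wrapper, with a plain IO sink standing in for the channel's stdin pipe -- no SSH session involved.]

```ruby
require 'stringio'
require 'tempfile'

# Ruby Cookbook recipe 6.6: read a file in fixed-size chunks.
class File
  def each_chunk(chunk_size = 1024)
    yield read(chunk_size) until eof?
  end
end

# Write source_path to any IO-like sink one chunk at a time; in the
# real script the sink would be the stdin pipe from session.process.popen3.
def copy_chunked(source_path, sink, chunk_size = 1024)
  File.open(source_path) do |f|
    f.each_chunk(chunk_size) { |chunk| sink.write chunk }
  end
end
```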