Yes, I will authenticate using their OS username and password. I
haven't used PTY.spawn before. Does it just spawn off a new pseudo
terminal? After doing a 'su - username' and authenticating, I want to
execute a block of Ruby code as that user.
I was thinking of something like this (untested):
username = "foo"
password = "bar"
require 'pty'
require 'expect'
inp, out = PTY.spawn("su - #{username}")
inp.expect /assword: /
out.puts password
inp.expect /\$ /
out.puts "ruby /path/to/foo.rb"
Ideally that block would
return a Ruby object that I could then manipulate in the broader app.
That's more difficult, because your web server is running as one uid, while
the code acting on the user's behalf runs as a completely different uid, and
therefore by definition must live in a separate process.
You can run a DRb server in the child process, and have a DRb client (proxy)
object in the web service. This will work fine, but DRb doesn't run over
stdin/stdout, so you would have to bind each child process to a different
port number.
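Something along these lines (untested; the port number and the JobService
name are just placeholders for whatever per-user object you actually want
to expose) shows the shape of it:

# child.rb -- started via the PTY above, so it runs under the user's own uid
require 'drb'

class JobService
  def job_files
    Dir.glob(File.expand_path("~/jobs/*"))   # whatever per-user work you need
  end
end

DRb.start_service("druby://localhost:9000", JobService.new)
DRb.thread.join    # keep serving requests until the child is killed

# meanwhile, in the web app:
require 'drb'
jobs = DRbObject.new_with_uri("druby://localhost:9000")
jobs.job_files     # the call executes in the child, under that user's uid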
This isn't insurmountable, though. I think you can let DRb in the child pick
a port number dynamically, and then send that number back over stdout (which
you then pick up by reading the PTY).
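Untested, but something like the following should show the handoff (the
whoami method and the child.rb path are just stand-ins):

# child.rb -- port 0 asks the OS for any free port; DRb.uri reports what it got
require 'drb'

front = Object.new
def front.whoami
  `id -un`.strip     # proves which uid the child is running under
end

DRb.start_service("druby://localhost:0", front)
puts DRb.uri         # e.g. druby://localhost:54321
$stdout.flush
DRb.thread.join

# back in the web app, after the su/expect dance above:
out.puts "ruby /path/to/child.rb"
buf = inp.expect(/druby:\S+/)          # wait for the child to print its URI
uri = buf[0][/druby:\S+/]

require 'drb'
proxy = DRbObject.new_with_uri(uri)
proxy.whoami                           # => "foo", i.e. runs as that user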
There's a DRb tutorial at
http://wiki.rubygarden.org/Ruby/page/show/DrbTutorial
which you may find useful. (Personal interest disclaimer: I wrote it.)
(i.e. since this is a Rails app, I would execute controller code as the
authenticated user (e.g. to get job files), and use one set of views for
all users.)
It seems you have hit the same brick wall that many PHP hosting
providers run into: namely, they want to run Apache with mod_php for
efficiency, but they don't want one user's PHP scripts to be able to access
or modify another user's files. Since Apache runs as a single user, and
mod_php runs as that user, this turns out to be tricky.
One solution is to run each user's applications as fastcgi scripts, where
the users' code runs as a pool of separate processes running under their own
uid. A similar solution is to give each user their own completely separate
webserver instance (e.g. httpd), either bound to a separate IP, or bound to
a separate port with a proxy in front which routes the incoming HTTP
requests to the right webserver instance.
In your case, this would effectively mean running multiple copies of your
entire Rails app, one under each user ID. If the number of users you have is
not large, this is probably a reasonable approach.
You could do this statically, e.g. with webrick or mongrel (run a separate
application instance for each user, bound to a separate port, running as
their userid). Or, although I've not tried it myself, it should also be
possible to run Rails under fastcgi with suexec or cgiwrap. The advantage of
this approach is that when a user is no longer active, the webserver can
completely kill off their Rails application instance.
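For example (untested, and the usernames, ports and paths are made up), a
small launcher run as root could start one instance per user along these
lines, with the front-end proxy then routing each user's requests to the
matching port:

# launcher.rb -- run as root: one Rails instance per user, each on its own
# port and under that user's uid (assumes each user has the app in ~/railsapp)
USERS = { "alice" => 3001, "bob" => 3002 }

USERS.each do |user, port|
  # 'su - user -c ...' gives a login shell, so ~/railsapp is the user's copy
  system("su", "-", user, "-c",
         "cd ~/railsapp && ruby script/server -p #{port} -d")
end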
In both cases, the app would then take on responsibility for authenticating
the user. Once they had authenticated, the controller would have rights to
do whatever that user could do.
Regards,
Brian.