FreeRIDE hangs when I use gets

Hi there, and happy new year!

I’m starting with FreeRIDE. I think it’s an excellent project, and I hope it keeps going.

But if I run a script with a simple gets inside FreeRIDE, it hangs.

puts works fine.

Is there a problem with the console input inside FreeRIDE?

Thanks a lot!

Sothoth.

Is there any way to retrieve the table and database associated with
each column in a resultset?

Also, I haven’t been able to find anywhere that outlines the functionality goals of Ruby DBI vis-à-vis other database APIs. It would be quite nice if the goal of Ruby DBI were parity with JDBC or ODBC rather than parity with Perl DBI (which is just barely good enough for rudimentary database programming).

···

David King Landrith
(w) 617.227.4469x213
(h) 617.696.7133

One useless man is a disgrace, two
are called a law firm, and three or more
become a congress – John Adams

public key available upon request

Yeah…there is a problem with gets input. Right now the script_runner simply executes the current script file using ruby #{file}, with a bit o’ magic to interleave stdout/err. If you are doing this under Windows, from what I remember, a gets called in an exec like this does not work right. Regardless, there is no current way to fulfill the gets request. The script_runner is really basic, and we are going to make it more robust going forward.
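The current approach amounts to something like this (not the actual script_runner code, just a sketch of the idea; FreeRIDE routes the output to its own pane rather than puts):

file = "myscript.rb"   # the script being run
# Run the child with stderr merged into stdout and stream the output.
# Nothing ever feeds the child's stdin, which is why a script that
# calls gets just blocks.
IO.popen("ruby #{file} 2>&1") do |io|
  io.each_line { |line| puts line }
end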

-rich

···

On Wednesday, January 1, 2003, at 08:25 AM, Yog-Sothoth wrote:

Hi there, and happy new year!

I’m starting with FreeRIDE. I think it’s an excellent project, and I hope it keeps going.

But if I run a script with a simple gets inside FreeRIDE, it hangs.

puts works fine.

Is there a problem with the console input inside FreeRIDE?

Thanks a lot!

Sothoth.

Is there any way to retrieve the table and database associated with
each column in a resultset?

i have not seen a way, but i wonder how you got the resultset without first
knowing the database and table name?

Also, I haven’t been able to find anywhere that outlines the functionality goals of Ruby DBI vis-à-vis other database APIs. It would be quite nice if the goal of Ruby DBI were parity with JDBC or ODBC rather than parity with Perl DBI (which is just barely good enough for rudimentary database programming).

all i can say is that i haven’t been at a loss for getting things done with Ruby DBI. what functionality does it lack that ODBC and JDBC have? by the way, there is a Ruby ODBC library available, though i have no experience with it.

-transami

···

On Wednesday 01 January 2003 07:03 am, David King Landrith wrote:



Is there any way to retrieve the table and database associated with
each column in a resultset?

i have not seen a way, but i wonder how you got the resultset without first knowing the database and table name?

Two situations come to mind immediately:

First, if you create objects corresponding to tables in your database,
you’ll want to make them subclasses of some abstract superclass that
contains the common code. In the superclass, there are often cases
where you’ll be dealing with result sets without table information.

Second, if you want to create utilities for dealing with result sets.
For example, in Java you can create (and I have created) an object that
buffers a result set and metadata. If you are clever, you can track
changes, deletions, and additions to the data. If you are really
clever, you can use the metadata to construct SQL to change the
database to match the buffer. This allows you to instantiate a result
set object by sending it any SQL whatever, change the values in that
object, add rows to the object, and delete rows from the object, and then
simply call a “commit” method to have the object issue the appropriate
update, insert, and delete statements. This type of setup allows you
to eliminate hundreds or thousands of error-prone sql-computation lines
from your code. Moreover, such an object makes an ideal superclass for
objects that correspond to database tables.
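In Ruby DBI terms, the core of the idea might look something like this (an untested, minimal sketch; the class and method names are made up, it assumes AutoCommit is off, and a real version would also need inserts, deletes, and key discovery from metadata):

require "dbi"

# Minimal sketch of a buffered result set: cache the rows and column
# names up front, record cell edits, and replay them as UPDATEs on
# commit. Table and key column are passed in because DBI's metadata
# can't tell us which table a column came from.
class BufferedResultSet
  def initialize(dbh, sql, table, key_column)
    @dbh, @table, @key = dbh, table, key_column
    sth = dbh.execute(sql)
    @columns = sth.column_names
    @rows = sth.fetch_all
    sth.finish
    @edits = []   # [key_value, column, new_value]
  end

  def update(key_value, column, value)
    @edits << [key_value, column, value]
  end

  def commit
    @edits.each do |key_value, column, value|
      @dbh.do("UPDATE #{@table} SET #{column} = ? WHERE #{@key} = ?",
              value, key_value)
    end
    @dbh.commit   # assumes the handle was opened with AutoCommit off
    @edits.clear
  end
end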

Also, I haven’t been able to find anywhere that outlines the functionality goals of Ruby DBI vis-à-vis other database APIs. It would be quite nice if the goal of Ruby DBI were parity with JDBC or ODBC rather than parity with Perl DBI (which is just barely good enough for rudimentary database programming).

all i can say is that i haven’t been at a loss for getting things done with Ruby DBI. what functionality does it lack that ODBC and JDBC have?

The difference is primarily in the metadata. JDBC and ODBC can identify primary keys, foreign keys, constraints, source database, and source schema. This type of metadata allows you to do real object-oriented database programming, rather than simply bare or parameterized SQL.
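For contrast, about all Ruby DBI gives you is per-column information, something like this (from memory, so treat the exact field names as approximate; how much gets filled in depends on the DBD, and the DSN and credentials here are made up):

require "dbi"

# Per-column metadata is roughly all DBI exposes. Nothing here ties a
# column back to its source table, schema, or key constraints the way
# JDBC's DatabaseMetaData does.
dbh = DBI.connect("dbi:Pg:mydb", "user", "pass")
sth = dbh.execute("SELECT * FROM some_table")
sth.column_info.each do |ci|
  puts "#{ci['name']}: #{ci['type_name']}, nullable=#{ci['nullable']}"
end
sth.finish
dbh.disconnect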

by the way, there is a Ruby ODBC library available, though i have no experience with it.

Is this a DBD for ODBC, or an alternative to DBI? If it’s an
alternative, then it may be workable, but using Ruby through ODBC seems
to represent another layer of overhead. It would be nice if Ruby
supported robust database functionality through DBI/DBD.

···

On Wednesday, January 1, 2003, at 09:20 AM, Tom Sawyer wrote:


This sounds really good. Has anyone really really clever
generalised all this and released it as a library (in any language)?

Gavin

···

On Thursday, January 2, 2003, 4:10:53 AM, David wrote:

Second, if you want to create utilities for dealing with result sets.
For example, in Java you can create (and I have created) an object that
buffers a result set and metadata. If you are clever, you can track
changes, deletions, and additions to the data. If you are really
clever, you can use the metadata to construct SQL to change the
database to match the buffer. This allows you to instantiate a result
set object by sending it any SQL whatever, change the values in that
object, add rows to the object, and delete rows from the object, and then
simply call a “commit” method to have the object issue the appropriate
update, insert, and delete statements. This type of setup allows you
to eliminate hundreds or thousands of error-prone sql-computation lines
from your code. Moreover, such an object makes an ideal superclass for
objects that correspond to database tables.


This sounds really good. Has anyone really really clever
generalised all this and released it as a library (in any language)?

Well, i did just extend DBI myself with a class Meta; i don’t see any reason that this class can’t be scaled up to include much of this functionality.

do you want to work on it David? i’ll send you my code. it’s very straightforward; i’ll even do a little preliminary work on it to make it easier to do. mostly i expect all you’ll have to do is write the “commit” code. except…one question: are there going to be issues here with cross-SQL compatibility for the different DBD backends? that would mean hacking the different DBDs as well. hmm…well, i suppose one could start with just the functionality that works cross-database first, then worry about where the DBDs need work later.

-transami

···

On Wednesday 01 January 2003 04:57 pm, Gavin Sinclair wrote:

On Thursday, January 2, 2003, 4:10:53 AM, David wrote:

From: “Gavin Sinclair”

Second, if you want to create utilities for dealing with result sets.
For example, in Java you can create (and I have created) an object that
buffers a result set and metadata. If you are clever, you can track
changes, deletions, and additions to the data. If you are really
clever, you can use the metadata to construct SQL to change the
database to match the buffer. This allows you to instantiate a result
set object by sending it any SQL whatever, change the values in that
object, add rows to the object, and delete rows from the object, and then
simply call a “commit” method to have the object issue the appropriate
update, insert, and delete statements.
[snip]

This sounds really good. Has anyone really really clever
generalised all this and released it as a library (in any language)?

Microsoft ADO.NET has something like this, the so-called “disconnected recordset”.

Regards,
M.

PS Maybe you knew this but deliberately wanted to filter out this answer with the “really really clever” clause? :)

I wrote classes that did this in Java in 1998-1999 and used them in several projects for web-driven database access (buffered result sets are perfect for web-driven stuff, because the result sets will always be reasonably small). They had some shortcomings, because at that time not all JDBC drivers were up to spec. For example, the DB2 Type 3 and Type 4 drivers would always pass an empty string as the schema name. Oracle (as is always the case unless it is set correctly by a wise DBA at install time) had its screwy date format and didn’t much like the default java.sql.Date class.

I’d be happy to dig up the classes. In the past when I’ve explained
what they did, nobody expressed any interest.

Dave


···

On Wednesday, January 1, 2003, at 06:57 PM, Gavin Sinclair wrote:

On Thursday, January 2, 2003, 4:10:53 AM, David wrote:

Second, if you want to create utilities for dealing with result sets. For example, in Java you can create (and I have created) an object that buffers a result set and metadata. If you are clever, you can track changes, deletions, and additions to the data. If you are really clever, you can use the metadata to construct SQL to change the database to match the buffer. This allows you to instantiate a result set object by sending it any SQL whatever, change the values in that object, add rows to the object, and delete rows from the object, and then simply call a “commit” method to have the object issue the appropriate update, insert, and delete statements. This type of setup allows you to eliminate hundreds or thousands of error-prone sql-computation lines from your code. Moreover, such an object makes an ideal superclass for objects that correspond to database tables.

This sounds really good. Has anyone really really clever
generalised all this and released it as a library (in any language)?

Gavin



It is known as a DataSet. Legacy ADO also had the “disconnected recordset”, but ADO.NET does not have recordsets any more.

···

On 1/2/2003 4:53 PM, Milan Maksimovic wrote:

From: “Gavin Sinclair”

Second, if you want to create utilities for dealing with result sets.
For example, in Java you can create (and I have created) an object that
buffers a result set and metadata. If you are clever, you can track
changes, deletions, and additions to the data. If you are really
clever, you can use the metadata to construct SQL to change the
database to match the buffer. This allows you to instantiate a result
set object by sending it any SQL whatever, change the values in that
object, add rows to the object, and delete rows from the object, and then
simply call a “commit” method to have the object issue the appropriate
update, insert, and delete statements.
[snip]

This sounds really good. Has anyone really really clever
generalised all this and released it as a library (in any language)?

Microsoft ADO.NET has something like this, the so-called “disconnected recordset”.


Cheers, Peter

speaking of Ruby DBI, does anyone know if there is a limitation to the size of
the sql executed by DBI?

in my case i’m thinking of sending multiple queries at once:

sql = %Q{
UPDATE table SET a='1' WHERE x=1;
UPDATE table SET a='2' WHERE x=2;
UPDATE table SET a='3' WHERE x=3;

}

rather than doing each of these individually. will i hit a cap on this type of execution? does it depend on the DBD or database itself?

-transami

I’d very much like to see the code if you’re willing to share it,
Dave.

Cheers,
Gavin

···

On Friday, January 3, 2003, 1:59:03 AM, David wrote:

I wrote classes that did this in Java in 1998-1999 and used them in several projects for web-driven database access (buffered result sets are perfect for web-driven stuff, because the result sets will always be reasonably small). They had some shortcomings, because at that time not all JDBC drivers were up to spec. For example, the DB2 Type 3 and Type 4 drivers would always pass an empty string as the schema name. Oracle (as is always the case unless it is set correctly by a wise DBA at install time) had its screwy date format and didn’t much like the default java.sql.Date class.

I’d be happy to dig up the classes. In the past when I’ve explained
what they did, nobody expressed any interest.

Dave

I doubt if that SQL will actually work, since it is many statements,
and I’d bet $250 that DBI will only allow one statement at a time.
The smart money is on:

dbh.prepare("update table set a = ? where x = ?") do |sth|
  records.each do |rec|
    sth.execute(rec.a, rec.x)
  end
  dbh.commit
end

i.e. a prepared statement will let you handle multiple statements
gracefully, and more efficiently (depending on DB) than sending
boatloads of hardcoded SQL at it.

Cheers,
Gavin

···

On Thursday, January 2, 2003, 11:38:01 PM, Tom wrote:

speaking of Ruby DBI, does anyone know if there is a limitation to the size of
the sql executed by DBI?

in my case i’m thinking of sending multiple queries at once:

sql = %Q{
UPDATE table SET a='1' WHERE x=1;
UPDATE table SET a='2' WHERE x=2;
UPDATE table SET a='3' WHERE x=3;

}

rather than doing each of these individually. will i hit a cap on this type of execution? does it depend on the DBD or database itself?

continuing on with the various DBI questions, now i’m stuck on a cross-database issue and would like recommendations.

as it stands, when i insert a new record into a table, i then do: “SELECT currval(‘table_idfield_seq’) as recid;” to get the latest automatically generated record id, and thus know which record i just inserted. but this seems to me like a jerry-rig. i don’t like it and i doubt it works on anything but postgresql. what’s the best way to deal with this? how does one get auto-generated id fields in such a way as to be cross-compatible? or am i mistaken in using auto-generated id fields in the first place?

thanks,
transami

gavin, i’ll take that bet! :)

%Q{
CREATE TABLE je_d
( record SERIAL PRIMARY KEY,
  je_ref int REFERENCES je(record) ON DELETE CASCADE,
  debit float DEFAULT 0,
  credit float DEFAULT 0,
  account_ref int REFERENCES acct(record),
  pr text DEFAULT '',
  sort int DEFAULT 0
);
INSERT INTO je_d (je_ref, debit, credit, account_ref, sort) VALUES (1, 0, 0, NULL, 1);
INSERT INTO je_d (je_ref, debit, credit, account_ref, sort) VALUES (1, 0, 0, NULL, 2);
}

run it all the time. 3 statements in one. well, at least it works against the postgresql DBD. what was that, $250, did you say? :)

really think prep’d statements are more efficient? do they really do anything special? i thought they just did a gsub before sending off the sql, nothing more.

well, maybe i’ll throw a quick test together. not much for profiling, but it shouldn’t be too hard. just two big long repetitive scripts and a watch.
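something along these lines, i figure (rough and untested; the table, columns, and DSN are made up, and Time.now stands in for real profiling):

require "dbi"

N = 1000
dbh = DBI.connect("dbi:Pg:mydb", "user", "pass")   # made-up DSN

# round 1: one big multi-statement string
big_sql = (1..N).map { |i| "UPDATE t SET a='#{i}' WHERE x=#{i};" }.join("\n")
t0 = Time.now
dbh.do(big_sql)
puts "one big string: #{Time.now - t0}s"

# round 2: one prepared statement executed N times
t0 = Time.now
dbh.prepare("UPDATE t SET a=? WHERE x=?") do |sth|
  (1..N).each { |i| sth.execute(i.to_s, i) }
end
puts "prepared:       #{Time.now - t0}s"

dbh.disconnect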

-transami


···

On Thursday 02 January 2003 05:53 am, Gavin Sinclair wrote:

I doubt if that SQL will actually work, since it is many statements,
and I’d bet $250 that DBI will only allow one statement at a time.
The smart money is on:

dbh.prepare("update table set a = ? where x = ?") do |sth|
  records.each do |rec|
    sth.execute(rec.a, rec.x)
  end
  dbh.commit
end

i.e. a prepared statement will let you handle multiple statements
gracefully, and more efficiently (depending on DB) than sending
boatloads of hardcoded SQL at it.

well, i just dropped 1000 sql statements on my database via DBI in one fell swoop. finished in less than a second. think that’s good enough for me :)

so unless anyone knows for sure that other DBDs/databases won’t hack it, this looks like the way to go. the only advantage i could see to prepared statements is that the gsub of the parameters might be faster than building the sql myself. is DBI c code? guess i can check; i thought it was pure ruby myself, in which case it won’t make much difference and i’m probably getting better execution dropping all the statements at once. perhaps it’s just postgresql though. you can use the psql cli tool and drop a file of sql statements into it to be executed. same principle.

how does one get auto-generated id fields in such a way as to be cross-compatible?

There is no way to do this portably, AFAIK.

or am i mistaken in using auto-generated id fields in the first place?

It’s a standard database practice, and a lot of hassle to do manually.

Gavin

···

On Friday, January 3, 2003, 3:46:32 PM, Tom wrote:

AFAIK there isn’t a database-independent way of doing what you want. Scott Ambler has written a paper describing some approaches that you might find useful. The part about OIDs is the bit I am referring to.

http://www.ambysoft.com/mappingObjects.html


Signed,
Holden Glova

···

On Fri, 03 Jan 2003 17:46, Tom Sawyer wrote:

continuing on with the various DBI questions, now i’m stuck on a cross-database issue and would like recommendations.

as it stands, when i insert a new record into a table, i then do: “SELECT currval(‘table_idfield_seq’) as recid;” to get the latest automatically generated record id, and thus know which record i just inserted. but this seems to me like a jerry-rig. i don’t like it and i doubt it works on anything but postgresql. what’s the best way to deal with this? how does one get auto-generated id fields in such a way as to be cross-compatible? or am i mistaken in using auto-generated id fields in the first place?

thanks,
transami

Let me answer your questions in the reverse order that you asked them in:

First, you should be using auto-generated id fields whenever you can. They are fast, easy, and they stay out of the way. I even use them in intersect tables. On a totally unrelated note: I recommend that you use the Oracle naming convention, which is to give all auto-increment primary keys the same name, e.g., id. The DB2 convention (having a pk named for the table and the id, e.g., docId or userId) strikes me as just plain silly, since using this convention causes you to have to constantly describe your tables while writing SQL. (Although I prefer DB2 over Oracle as a database, I do prefer this one Oracle convention, which is external to the actual databases.)

Second, whenever someone tells you that an object-oriented language can’t create non-implementation-specific interfaces, either they are wrong or the language isn’t really object-oriented. Allowing for non-implementation-specific interfaces is what object-oriented languages do best (literally). So don’t listen to all the naysayers who claim that you can’t write code that will get you across multiple databases. Not only can it be done, but it is surprisingly easy. There are several ways to do this, from simple wrapper classes to whole APIs. (I’m currently writing a whole API, which doesn’t bear reproducing on a mailing list.) The easiest way that comes to my mind is to use a simple wrapper with an abstract method (mind you, the following is way, way pared down so that it fits in this email):

require "dbi"

# DISCLAIMER: there may be some minor typos in this code, since I've
# written it off the top of my head and have not tried to run it.
# But it should give you a clear idea of what I'm getting at.

class Connection

  def initialize(dsn, user=nil, auth=nil, parms=nil)
    @dbh = DBI.connect(dsn, user, auth, parms)
  end

  def quote(str)
    return @dbh.quote(str)
  end

  def execute(sql, *vals)
    return @dbh.execute(sql, *vals)
  end

  def executeInsert(sql, *vals)
    raise(RuntimeError, "Abstract method. Implement in subclass")
  end

  def do(sql, *vals)
    return @dbh.do(sql, *vals)
  end

  def select_one(sql, *vals)
    return @dbh.select_one(sql, *vals)
  end

  # insert other DatabaseHandle methods here so that you wrap up
  # the full complement of DatabaseHandle functions

  def disconnect()
    @dbh.disconnect()
  end

end

Now you can extend this class as follows:

# for mysql
class Connection_mysql < Connection

  def executeInsert(sql, *vals)
    self.do(sql, *vals)
    return self.select_one("SELECT last_insert_id()")[0]
  end

end

# for postgres
class Connection_pg < Connection

  def executeInsert(sql, *vals)
    self.do(sql, *vals)
    return self.select_one("SELECT currval('table_idfield_seq')")[0]
  end

end

You can implement other databases the same way. For example, with a Connection_ora for Oracle you’ll want to grab the value from a sequence, and with a Connection_db2 for DB2 you’ll want to grab and parse the generated unique value.

Now you can either hard-code the type of Connection that you want (e.g., Connection_pg or Connection_mysql) into your code, or you can create a class method (might I suggest calling it “connect”) in your Connection class that chooses which type of Connection object to return based on the driver being used (e.g., with a big case statement).
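For instance (same disclaimer as above, written off the top of my head and untested; it assumes the usual “dbi:Driver:database” DSN format):

# Picks the Connection subclass from the driver name embedded in
# the DSN, e.g. "dbi:Pg:mydb" or "dbi:Mysql:mydb".
class Connection

  def Connection.connect(dsn, user=nil, auth=nil, parms=nil)
    case dsn.split(':')[1]
    when 'Mysql' then Connection_mysql.new(dsn, user, auth, parms)
    when 'Pg'    then Connection_pg.new(dsn, user, auth, parms)
    else raise(RuntimeError, "No Connection subclass for #{dsn}")
    end
  end

end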

···

On Thursday, January 2, 2003, at 11:46 PM, Tom Sawyer wrote:

as it stands, when i insert a new record into a table, i then do: “SELECT currval(‘table_idfield_seq’) as recid;” to get the latest automatically generated record id, and thus know which record i just inserted. but this seems to me like a jerry-rig. i don’t like it and i doubt it works on anything but postgresql. what’s the best way to deal with this? how does one get auto-generated id fields in such a way as to be cross-compatible? or am i mistaken in using auto-generated id fields in the first place?



I doubt if that SQL will actually work, since it is many statements,
and I’d bet $250 that DBI will only allow one statement at a time.
The smart money is on:

[code snipped]
i.e. a prepared statement will let you handle multiple statements
gracefully, and more efficiently (depending on DB) than sending
boatloads of hardcoded SQL at it.

gavin, i’ll take that bet! :slight_smile:
[transami won :( ]

really think prep’d statements are more efficient? do they really do anything
special? i thought they just did a gsub before sending off the sql, nothing
more.

I definitely think they’re more efficient in some DBs. In Oracle, the statement is compiled internally. The prepared statement is a database resource. For one or two statements, it’s probably the same or worse, but for 100+, I reckon PSs would fly. Let us know the results of your tests.

(I’d be surprised if PostgreSQL didn’t support them.)

Even if it wasn’t more efficient, it’s a nice way to program.

well, maybe i’ll throw a quick test together. not much for profiling, but it shouldn’t be too hard. just two big long repetitive scripts and a watch.

Again, let us know.

Gavin

···

On Friday, January 3, 2003, 12:08:23 AM, Tom wrote:

On Thursday 02 January 2003 05:53 am, Gavin Sinclair wrote: