You can't get in trouble with your boss for picking C#

Hello Robert,

Roland Schmitt wrote:

Robert Klemme wrote:

Roland Schmitt wrote:

Robert Klemme wrote:

Btw, is J2EE enterprise ready? Is there a significant number of
applications out there that make full use of J2EE's feature set
(including scalability features) and serve large numbers of
concurrent users? I'm not prejudging, just curious.

Yes, they are.
I'm working at a company with products based on J2EE/JBoss where
scalability is one reason for our customers to choose the products.
They want to add users as they need to, and the front-end is a
rich client for image processing, not only a web front-end.

Can you post some figures (# of hosts, max concurrent users etc.) or
is this somehow confidential?

Ok, I looked at my current installations and found one with:
- 10 servers (8 CPUs each...)
- 120-160 users
- Oracle and MSSQL DBs
- SAP integration

Hmm, ok, it is not that big, but because it is a type of accounting
system, it must be very reliable/scalable.

You need 80 CPUs to serve 120-160 concurrent users? That would be 2
concurrent users per CPU *max*. This certainly sounds like bad resource
usage - or did I miss something here?

Maybe you missed the bad word in his email: SAP

It is a common joke that you need a SUN Enterprise Server for each user.

--
Best regards, emailto: scholz at scriptolutions dot com
Lothar Scholz http://www.ruby-ide.com
CTO Scriptolutions Ruby, PHP, Python IDE 's

Robert Klemme wrote:

Roland Schmitt wrote:

Robert Klemme wrote:

Roland Schmitt wrote:

Robert Klemme wrote:

Btw, is J2EE enterprise ready? Is there a significant number of
applications out there that make full use of J2EE's feature set
(including scalability features) and serve large numbers of
concurrent users? I'm not prejudging, just curious.

Yes, they are.
I'm working at a company with products based on J2EE/JBoss where
scalability is one reason for our customers to choose the products.
They want to add users as they need to, and the front-end is a
rich client for image processing, not only a web front-end.

Can you post some figures (# of hosts, max concurrent users etc.) or
is this somehow confidential?

Ok, I looked at my current installations and found one with:
- 10 servers (8 CPUs each...)
- 120-160 users
- Oracle and MSSQL DBs
- SAP integration

Hmm, ok, it is not that big, but because it is a type of accounting
system, it must be very reliable/scalable.

You need 80 CPUs to serve 120-160 concurrent users? That would be 2
concurrent users per CPU *max*. This certainly sounds like bad resource
usage - or did I miss something here?

It is not that simple. There is a lot of graphics processing, character recognition, web services and SAP.

So I think it is not a "typical example" where you use J2EE more or less as a back end for a browser application.

Regards,
Roland

Very interesting. But you'll still need C :wink: I'm interested to know
how this will affect performance, though.

No, you need C to implement a virtual machine, but not for the
language parser/bytecode compiler/runtime.

But I'm still convinced that Ruby in Ruby will not give us any real
benefits.

My point was that you still need to use C (a _strongly_ typed
language) at some point,

C is anything *BUT* a strongly typed language. C is a very weakly typed
language that uses some static type analysis for some optimizations. C++
is only marginally more strongly typed than C. Java and C# are also only
marginally more strongly typed than C++, in turn. There is a distinct
difference between the strong/weak typing axis and the static/dynamic
typing axis. Ruby is strong-dynamic; Ada is strong-static; C is weak-
static; JavaScript is weak-dynamic.
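
To make the axes concrete, here is a quick Ruby sketch of the
strong-dynamic corner (illustration only):

begin
  1 + "1"
rescue TypeError => e
  # Strong: Ruby refuses to guess a conversion between Fixnum and String.
  puts "strong typing at work: #{e.message}"
end
# Dynamic: that check only happens at runtime, when the message is sent;
# there is no compile-time pass that could reject the expression earlier.

C, by contrast, will happily mix an int and a char through implicit
conversions, while Ada rejects the equivalent mixing before the program
ever runs.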

C's primary advantages are how close the development model is to the
bare metal of the machine without being tied to a single machine
architecture ("portable assembly") and that many thousands of man-years
of effort have been put into optimizing code generated by C compilers.

or else you can't do what you want to do.

Your assertion here is wrong. One does not require a strongly typed or
even a statically typed language to do what you're suggesting. What you
need is a development model that makes it easy to adapt to the hardware
on which you're developing. Ruby is several abstraction levels above the
hardware, and *that* is what I believe makes it difficult to consider
writing an OS in Ruby. That said, I will *also* point out that many
early operating systems were developed entirely in machine code or
assembly language, neither of which can be described as either strongly
or statically typed.

I was still reacting to the guy who flat out said that strongly typed
languages were just wrong.

I don't agree with Phlip here, but he's not completely *wrong*, either.
Static typing is a false promise -- it gives a promise of compile-time
correctness, but if you've ever had to debug a C program, you know
better. Statically typed languages -- which is what he was actually
talking about, and I'll thank you to not confuse the two axes -- aren't
the panacea or the benefit that their advocates claim. Dynamically typed
languages are far more suitable to *most* problems than statically typed
languages.

My entire point is that you should use the best tool for the job. It
would be a bad idea to implement something like Basecamp in C. It would
also be a bad idea to implement the latest NVidia 3D device driver in
Ruby.

Not if Ruby had a good model for generating optimized machine code for
that NVidia 3D device driver. C is appropriate because it's effectively
and efficiently portable while remaining close to the hardware on which
it is to run. It is also highly optimized and optimizable to machine
code. Solve those, and Ruby is appropriate for developing device drivers
as well as for developing Rails websites.

-austin

On 9/16/05, Josh Charles <josh.charles@gmail.com> wrote:
--
Austin Ziegler * halostatue@gmail.com
               * Alternate: austin@halostatue.ca

Josh Charles <josh.charles@gmail.com> writes:

> Very interesting. But you'll still need C :wink: I'm interested to know
> how this will affect performance, though.

No, you need C to implement a virtual machine, but not for the
language parser/bytecode compiler/runtime.

But I'm still convinced that Ruby in Ruby will not give us any real
benefits.

My point was that you still need to use C (a _strongly_ typed
language) at some point, or else you can't do what you want to do.

How ironic. :slight_smile: Did you ever look into the Ruby implementation?
With a proper, *strongly* typed language, it would look a lot different.

--
Christian Neukirchen <chneukirchen@gmail.com> http://chneukirchen.org

Josh Charles wrote:

My point was that you still need to use C (a _strongly_ typed
language) at some point, or else you can't do what you want to do. I
was still reacting to the guy who flat out said that strongly typed
languages were just wrong.

First, some guideposts. C is weakly statically typed. Ruby is strongly
dynamically typed. Ada is strongly statically typed. Perl is weakly
dynamically typed.

Next, some definitions. "Strong" means you cannot glom two variables of
different types together and have the compiler "guess" how to resolve
them. Perl is "weak" when it lets you do this, because in Perl everything
is a string, so every activity that's not obviously math can resolve to a
string manipulation.

"Dynamic" means all objects inherit from a great Object in the sky, which
virtually contains every possible method. So the line foo.bar() will compile
and run for any foo of any type that contains any bar(). "Static" means the
compiler trivially checks at compile time that all uses of foo.bar() use a
foo that refers to an object of a class in the same inheritance hierarchy.
bar() must only bind to a single root bar() that all variations must derive
from.
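
A short Ruby sketch of the "dynamic" half (class names invented for the
example):

class Drum
  def bar; "boom"; end
end

class Lawyer
  def bar; "objection!"; end
end

def poke(foo)
  foo.bar    # resolved at runtime; any foo that answers bar() will do
end

puts poke(Drum.new)      # => boom
puts poke(Lawyer.new)    # => objection!

Drum and Lawyer share no ancestor besides Object, and poke() never names a
type, yet both calls run.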

My entire point is that you should use
the best tool for the job. It would be a bad idea to implement
something like Basecamp in C. It would also be a bad idea to
implement the latest NVidia 3D device driver in Ruby.

Totally. The closer to the metal, the more you need a strong statically
typed language. (C, a weak language, squeaks by because everyone who learns
it knows how to use it strongly.)

The closer to the user, the more you need a strong dynamic language, for the
command-and-control stuff. Note that the popular ORBs (CORBA, ActiveX) are
strongly dynamically typed, regardless of the languages that support them.

I forgot what I wrote, but what I meant was "static typing is wrong". Hacks
like generics are attempts to put "a little" dynamic typing into statically
typed languages. Static typing should be optional, and it should not be an
excuse not to write unit tests. They make static typing less important for
robustness.
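
For instance, a throwaway Test::Unit case (method names made up for the
example) catches the mistake a static checker would catch, plus the ones
it wouldn't:

require 'test/unit'

def price_with_tax(amount)
  amount * 1.19
end

class PriceTest < Test::Unit::TestCase
  def test_rejects_strings
    # the kind of error a static type checker would flag at compile time
    assert_raise(TypeError) { price_with_tax("100") }
  end

  def test_computes_tax
    # the kind it would not: the value has to be right, not just the type
    assert_in_delta 119.0, price_with_tax(100), 0.001
  end
end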

--
  Phlip
  greencheese.org <-- NOT a blog!!!

Austin Ziegler wrote:

[snip]

My point was that you still need to use C (a _strongly_ typed
language) at some point,

C is anything *BUT* a strongly typed language. C is a very weakly typed
language that uses some static type analysis for some optimizations.

"C is strongly-typed" is indeed a pernicious meme.

[Richter@GEN-WAYNE ~/src/c]$ cat weak_type.c
#include <stdio.h>

int main()
{

int i = 2000000000;
int j = 2000000000;
int k;

k = i * j;
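/* the product of two values near 2e9 does not fit in a 32-bit int (as here);
the compiler accepts the multiplication anyway, and the result
silently overflows, as the output below shows */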

printf( "%d * %d = %d\n", i, j, k );

}

[Richter@GEN-WAYNE ~/src/c]$ a.exe
2000000000 * 2000000000 = -1651507200
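
For contrast, a quick Ruby sketch of the same arithmetic; here Fixnum
quietly promotes to Bignum instead of wrapping:

i = 2_000_000_000
j = 2_000_000_000
puts "#{i} * #{j} = #{i * j}"
# prints: 2000000000 * 2000000000 = 4000000000000000000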

[rest snipped]

On 9/16/05, Josh Charles <josh.charles@gmail.com> wrote:

-austin

--
Daryl

"We want great men who, when fortune frowns, will not be discouraged."
     -- Colonel Henry Knox, 1776

Hey all,

On the typing issues, it seems that the third axis of declarative vs
inferred typing has been missed.

For example, in Java/C# etc. you have to declare the type of any and
every variable you create. In languages like ML / Haskell, the
compiler is able to infer the types of your variables (you may
optionally explicitly declare the type of them yourself if you want).
If you want a language that mixes the whole
dynamic/static/optional/inferred system, there's Boo sitting on Codehaus
that (the last time I looked) does all of those, which was quite neat.
[Though .NET based]

Of course, in the dynamic languages where the checks are done at
runtime, it doesn't make sense to declare the types yourself. That said,
some level of static typing / static analysis in a language like
Ruby is still useful (e.g. for accurate IDE code completions).

I forgot what I wrote, but what I meant was "static typing is wrong".

I'd disagree: static typing is right, forced declarative typing is
wrong. The computer should be able to infer the types of most things.

I'll admit that actually writing the algorithms to do the inference for
something as dynamic as Ruby would be hard (and probably provably
"impossible") to cover every case, but if you're coding in the sane
90%ish of the language that doesn't use eval or equivalent to redefine
every method call to something else, then you should be able to have
the computer check that you are doing something right, or at least not
something that is impossibly wrong.

Hacks like generics are attempts to put "a little" dynamic typing into
statically typed languages.

Actually, (Java) generics add a small layer of static safety to what
would otherwise only be a dynamic operation. (Though in Java the
dynamic check is still made.)

Static typing should be optional,

Declarative typing should be optional. Static typing should be used
for whatever can be analysed, and dynamic typing for the rest.

and it should not be an excuse not to write unit tests. They make static typing less important for robustness.

Absolutely. Stepping back into context, type errors are just a small
subset of all the kinds of programming errors that tests (in all
guises) should be used to pick up on.

My £0.02

Tris

ToRA wrote:

On the typing issues, it seems that the third axis of declarative
vs inferred typing has been missed.

That's because I don't understand it.

For example, in Java/C# etc. you have to declare the type of any
and every variable you create. In languages like ML / Haskell,
the compiler is able to infer the types of your variables (you may
optionally explicitly declare the type of them yourself if you want).

Woah, that is _exactly_ like the Eclipse editor for Java. If you don't
declare the type correctly, Eclipse will infer what it is (I suspect by
running a Java parser inside Java), and at a keystroke will push the correct
type in.

Conclusion: Java + Eclipse == Haskell :wink:

I'll admit that actually writing the algorithms to do the inference for
something as dynamic as Ruby would be hard (and probably provably
"impossible") to cover every case, but if you're coding in the sane
90%ish of the language that doesn't use eval or equivalent to redefine
every method call to something else, then you should be able to have
the computer check that you are doing something right, or at least not
something that is impossibly wrong.

We need look no further than the efforts to optimize Smalltalk to see how
far this concept can (and can't) be pushed.

My £0.02

That's why I just added this:

http://www.c2.com/cgi/wiki?RubyVsJava

It compares Java's Log4j to its clone, Ruby's Log4r.

"So they both apparently solve exactly same problem in exactly the same way.

"log4j's src folder has 31,764 lines of code.

"log4r's src folder has 2,071 lines of code."

Some languages are just more fun to belt out lots of lines with, I guess...

--
  Phlip
  greencheese.org <-- NOT a blog!!!

Hello ToRA,

For example, in Java/C# etc. you have to declare the type of any and
every variable you create. In languages like ML / Haskell, the
compiler is able to infer the types of your variables (you may
optionally explicitly declare the type of them yourself if you want).

You forget that this absolutely does not fit into the Ruby model. You
can't do type inference with overloaded operators. I don't know
Haskell, but I know a little bit of OCaml, and there you have to use the
ugly floating-point operators "+." etc.

Otherwise there is nothing the compiler can find out.

Ruby is highly overloaded, so type inference is almost impossible.
Take this together with all the other methods (I'm not talking about eval
here) and you see that this is a different world.
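
A tiny illustration of what I mean (made-up method, nothing more):

def combine(a, b)
  a + b    # which + this calls depends entirely on the class of a at runtime
end

combine(1, 2)          # Fixnum#+  => 3
combine("ab", "cd")    # String#+  => "abcd"
combine([1], [2])      # Array#+   => [1, 2]

Nothing in the text of combine tells a compiler which + will run, so there
is no single type it could infer for a or b.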

--
Best regards, emailto: scholz at scriptolutions dot com
Lothar Scholz http://www.ruby-ide.com
CTO Scriptolutions Ruby, PHP, Python IDE 's

ToRA wrote:

For example, in Java/C# etc. you have to declare the type of any and
every variable you create.

This is changed in C# 3. You will be able to do var, and AFAIK there are even experiments with automatically specializing generic collections to a specific type if you only ever put in objects of that type.

I guess I'm sounding more and more like a C# fanboy, but I think it is going in an interesting direction...

Lothar Scholz http://www.ruby-ide.com

Just a note, dude. I clicked on your URL there, and was greeted with...

"our full featured editor that is inspired by the famous emacs and vi
editors"

Your marketeers need to learn there's no faster way to turn some of us off
to an editor than praise its comparison to those pieces of crap. I worked vi
classic for 2 years, and vim for 2 years, and I will do anything to avoid
them.

If you all do indeed mean vim-like power without the f~~~ed up keystrokes,
find another way to say it. CUA keystrokes provide the flattest possible
learning curve for modern programmers...

--
  Phlip
  greencheese.org <-- NOT a blog!!!

In Haskell you have both overloaded operators/functions and type inference.

Paolo

On 9/17/05, Lothar Scholz <mailinglists@scriptolutions.com> wrote:

Hello ToRA,

> For example, in Java/C# etc. you have to declare the type of any and
> every variable you create. In languages like ML / Haskell, the
> compiler is able to infer the types of your variables (you may
> optionally expicitly declare the type of them yourself if you want).

You forget that this does absolutely not fit into the ruby model. You
can't do type inference with overloaded operators. I don't know
Haskell but i know a little bit of OCAML and there you have to use the
ugly floating point operators "+." etc.

Lothar Scholz wrote:

You
can't do type inference with overloaded operators. I don't know
Haskell but i know a little bit of OCAML and there you have to use the
ugly floating point operators "+." etc.

Otherwise there is nothing the compiler can find out.

well, haskell is able to handle overloaded operators, see here:
http://www.haskell.org/tutorial/classes.html

But I think that the Haskell model would be quite hard to retrofit into ruby, anyway :slight_smile:

Florian Groß wrote:

ToRA wrote:

For example, in Java/C# etc. you have to declare the type of any and
every variable you create.

This is changed in C# 3. You will be able to do var and AFAIK there is even experiments with automatically specifying generic collections to a specific type if you only ever put in objects of that type.

in C# 2.0 there is type inference for delegates IIRC

I guess I'm increasingly more sounding like a C# fan boy, but I think it is going into an interesting direction...

I somewhat agree, but I think the C# team made a great design error: growing the language by stacking feature over feature over feature..

Florian Groß wrote:

ToRA wrote:

> For example, in Java/C# etc. you have to declare the type of any and
> every variable you create.

This is changed in C# 3. You will be able to do var and AFAIK there is
even experiments with automatically specifying generic collections to a
specific type if you only ever put in objects of that type.

I guess I'm increasingly more sounding like a C# fan boy, but I think it
is going into an interesting direction...

Yes, C# 3.0 and VB9 are going in a very interesting direction.

There's something hilarious about the idea that VB could be a cool
language - but WOW these are mainstream languages with a sizable
userbase.

Odd. I pine for vim when I'm (for whatever reason) forced to use some
bloated-ass IDE. To each, his own, I suppose... :wink:

On 9/17/05, Phlip <phlipcpp@yahoo.com> wrote:

> Lothar Scholz http://www.ruby-ide.com

Just a note, dude. I clicked on your URL there, and was greeted with...

"our full featured editor that is inspired by the famous emacs and vi
editors"

Your marketeers need to learn there's no faster way to turn some of us off
to an editor than praise its comparison to those pieces of crap. I worked vi
classic for 2 years, and vim for 2 years, and I will do anything to avoid
them.

--
Regards,
John Wilger

-----------
Alice came to a fork in the road. "Which road do I take?" she asked.
"Where do you want to go?" responded the Cheshire cat.
"I don't know," Alice answered.
"Then," said the cat, "it doesn't matter."
- Lewis Carrol, Alice in Wonderland

Just a note, dude. I clicked on your URL there, and was greeted with...

"our full featured editor that is inspired by the famous emacs and vi
editors"

Your marketeers need to learn there's no faster way to turn some of us off
to an editor than praise its comparison to those pieces of crap. I worked vi
classic for 2 years, and vim for 2 years, and I will do anything to avoid
them.

Opinionated, or just ignorant? You seem quite happy to label the two most
popular text editors by usage (excluding Notepad) as crap.
And yes, their popularity isn't just because of a lack of other options.

If you all do indeed mean vim-like power without the f~~~ed up keystrokes,

I think you meant to type modal editing.

find another way to say it. CUA keystrokes provide the flattest possible
learning curve for modern programmers...

Huh, what good is a short learning curve if you end up learning all the
features of a very basic editor?

--

Phlip
greencheese.org <-- NOT a blog!!!

--
Into RFID? www.rfidnewsupdate.com - Simple, fast, news.

Hey again,

Further to that, I probably should explicitly point out that in Ruby
the type model is subtly different from most(?) of the other
languages being mentioned, since a valid type for something in
Ruby is one that can respond to the appropriate messages/method calls.
This is 'Duck' or structural typing, as opposed to the explicit type
tree that Java et al have.

So really, you don't have operator overloading in the classical sense;
an operator is just a message the lhs can respond to when passed a rhs
argument.

Types then become sets of methods, and super/subclass relations become
sub/superset relations on those method sets.
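
A small Ruby sketch of that view (class name invented for the example):

class Money
  attr_reader :cents

  def initialize(cents)
    @cents = cents
  end

  # '+' is not special syntax here, just another method the lhs responds
  # to when it is passed a rhs argument
  def +(other)
    Money.new(cents + other.cents)
  end
end

total = Money.new(150) + Money.new(250)   # the same as Money.new(150).+(Money.new(250))
puts total.cents                          # => 400
puts total.respond_to?(:+)                # => true; the "type" question becomes
                                          #    "which messages does it answer?"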

Static type inference then becomes a hefty job of filling in those sets
and checking constraints, which, if you Google it, turns out to have
already been done (to varying degrees) for JavaScript and Python.

Tris.

#: gabriele renzi changed the world a bit at a time by saying on 9/18/2005 12:36 PM :#

Florian Groß wrote:

ToRA wrote:

For example, in Java/C# etc. you have to declare the type of any and
every variable you create.

This is changed in C# 3. You will be able to do var and AFAIK there is even experiments with automatically specifying generic collections to a specific type if you only ever put in objects of that type.

in C# 2.0 there is type inference for delegates IIRC

I guess I'm increasingly more sounding like a C# fan boy, but I think it is going into an interesting direction...

I somewhat agree, but I think the C# team made a great design error: growing the language by stacking feature over feature over feature..

From their point of view this is not a problem. They have the monopoly, so nobody will be able to complain. On the other hand, in the Java world things sometimes move way too slowly, because there are many players involved (for example, generics were praised by users for a couple of years). I'm not in a position to say which approach is better, but I get the feeling that the truth is always somewhere in the middle :-).

./alex

--
.the_mindstorm.

The "learning curve" is a red herring here. You're making the classic
mistake of confusing "easy to learn" with "easy to use once learnt".

martin

Phlip <phlipcpp@yahoo.com> wrote:

If you all do indeed mean vim-like power without the f~~~ed up keystrokes,
find another way to say it. CUA keystrokes provide the flattest possible
learning curve for modern programmers...