Ruby UTF-8

I'm working with Japanese character sets in Windows. I can save my
*.rb files with notepad using UTF-8 but I can't run them with Ruby.
This is what happens when I try to run it.

c:\> ruby -Ku myFile.rb
jpn.rb:1: undefined method `∩╗┐' for main:Object
(NoMethodError)

Am I doing something wrong?

My goal is to read/write strings (containing Japanese characters)
from a web browser. Is there a recommended way of doing this?

Peter

Wolfgang Nádasi-Donner wrote:

"Peter C" <pkchau@gmail.com> schrieb im Newsbeitrag
news:2c9220c5.0503150920.6eedfbc9@posting.google.com...

I'm working with Japanese character sets in Windows. I can save my
*.rb files with notepad using UTF-8 but I can't run them with Ruby.
This is what happens when I try to run it.

c:\> ruby -Ku myFile.rb
jpn.rb:1: undefined method `∩╗┐' for main:Object
(NoMethodError)

The Windows editor (Notepad) always writes a "Byte Order Mark" (BOM) at the
beginning of UTF-8/UTF-16LE/UTF-16BE encoded files. In this case a UTF-8
encoded file begins with "EF BB BF" (hex). These marker bytes should usually
be ignored (for more information see http://www.unicode.org/).

One possibility is to remove the first three bytes of the UTF-8 encoded file
with a small filter program or a hex editor. You should do this to a copy,
because the Windows editor cannot handle the changed file correctly :-((
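
For example, a tiny filter along these lines would produce a BOM-free copy
(a minimal sketch; the file names are only placeholders):

    # strip_bom.rb - write a copy of a script without a leading UTF-8 BOM.
    data = File.open("jpn.rb", "rb") { |f| f.read }
    # Drop the first three bytes only if they really are EF BB BF.
    data.slice!(0, 3) if data.unpack("C3") == [0xEF, 0xBB, 0xBF]
    File.open("jpn_nobom.rb", "wb") { |f| f.write(data) }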

Another option is to put an assignment to a scratch variable at the very beginning of the script. Ruby then parses the BOM as part of the variable name and does not complain about it.
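
For instance, a Notepad-saved UTF-8 file could start like this (just a
sketch; the variable name is arbitrary):

    bom_guard = nil   # must be the very first thing in the file: the
                      # invisible BOM sticks to the identifier, so with
                      # -Ku Ruby just defines an oddly named variable
    puts "こんにちは"  # the rest of the script follows as usual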

I posted this to ruby-core a few months ago and would like to see it fixed...

* Wolfgang Nádasi-Donner (Mar 15, 2005 19:10):

> I'm working with Japanese character sets in Windows. I can save my
> *.rb files with notepad using UTF-8 but I can't run them with Ruby.

The Windows editor (Notepad) always writes a "Byte Order Mark" (BOM)
at the beginning of UTF-8/UTF-16LE/UTF-16BE encoded files. In this case
a UTF-8 encoded file begins with "EF BB BF" (hex). These marker bytes
should usually be ignored (for more information see http://www.unicode.org/).

Why does it write a BOM for UTF-8 encoded files? It's utterly
meaningless to discuss byte order for UTF-8 encoded text,
        nikolai

···

--
::: name: Nikolai Weibull :: aliases: pcp / lone-star / aka :::
::: born: Chicago, IL USA :: loc atm: Gothenburg, Sweden :::
::: page: minimalistic.org :: fun atm: gf,lps,ruby,lisp,war3 :::
main(){printf(&linux["\021%six\012\0"],(linux)["have"]+"fun"-97);}

Nikolai Weibull wrote:

I'm working with Japanese character sets in Windows. I can save my
*.rb files with notepad using UTF-8 but I can't run them with Ruby.

The Windows editor (Notepad) always writes a "Byte Order Mark" (BOM)
at the beginning of UTF-8/UTF-16LE/UTF-16BE encoded files. In this case
a UTF-8 encoded file begins with "EF BB BF" (hex). These marker bytes
should usually be ignored (for more information see http://www.unicode.org/).

Why does it write a BOM for UTF-8 encoded files? It's utterly
meaningless to discuss byte order for UTF-8 encoded text,

So that it can identify the file as UTF-8 encoded in the future without having to guess based on byte count, I assume.

I think that that behavior makes sense and would like to see it supported in Ruby.

Quoting mailing-lists.ruby-talk@rawuncut.elitemail.org, on Sun, Mar 20, 2005 at 11:25:27PM +0900:

* Wolfgang Nádasi-Donner (Mar 15, 2005 19:10):
> > I'm working with Japanese character sets in Windows. I can save my
> > *.rb files with notepad using UTF-8 but I can't run them with Ruby.

> The Windows editor (Notepad) always writes a "Byte Order Mark" (BOM)
> at the beginning of UTF-8/UTF-16LE/UTF-16BE encoded files. In this case
> a UTF-8 encoded file begins with "EF BB BF" (hex). These marker bytes
> should usually be ignored (for more information see http://www.unicode.org/).

Why does it write a BOM for UTF-8 encoded files? It's utterly
meaningless to discuss byte order for UTF-8 encoded text,

It's a tag to indicate the data is UTF-8. If it wasn't there it could be
anything, iso-8859-*, koi8, ...

Apple does this in NSString when I convert UCS-2 data to UTF-8, too. It
confused me. I still don't like it, but I have a suspicion that it's
partly because these OSes have a legacy "assumed encoding" for 8-bit
text, and that it isn't UTF-8.

In a way it means that there are two "dialects" of utf-8. This is
unfortunate.
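
In that spirit, code that has to accept both "dialects" could sniff for the
marker and otherwise fall back to an assumed legacy encoding (a sketch only;
the fallback name is just an example):

    # Return the data without a leading BOM, plus the encoding it implies.
    def sniff_utf8(data, fallback = "ISO-8859-1")
      if data.unpack("C3") == [0xEF, 0xBB, 0xBF]
        [data[3..-1], "UTF-8"]   # BOM present: the data is tagged as UTF-8
      else
        [data, fallback]         # no tag: assume the legacy encoding
      end
    end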

Sam

Hello Florian,

Nikolai Weibull wrote:

I'm working with Japanese character sets in Windows. I can save my
*.rb files with notepad using UTF-8 but I can't run them with Ruby.

The Windows editor (Notepad) always writes a "Byte Order Mark" (BOM)
at the beginning of UTF-8/UTF-16LE/UTF-16BE encoded files. In this case
a UTF-8 encoded file begins with "EF BB BF" (hex). These marker bytes
should usually be ignored (for more information see http://www.unicode.org/).

Why does it write a BOM for UTF-8 encoded files? It's utterly
meaningless to discuss byte order for UTF-8 encoded text,

So that it can identify the file as UTF-8 encoded in the future without
having to guess based on byte count, I assume.

I think that that behavior makes sense and would like to see it
supported in Ruby.

Doesn't Ruby CVS already do this?
I thought I read something about this on the ruby-core list.

If there is no BOM, I would really recommend the way Python specifies
source encodings; it's simple and excellent.
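
For reference, Python's declaration (PEP 263) is just a specially formatted
comment near the top of the file. Ruby 1.8 treats such a line as an ordinary
comment; the sketch below only illustrates the convention being suggested:

    #!/usr/bin/env ruby
    # -*- coding: utf-8 -*-
    # In Python the interpreter reads the "coding:" comment above to pick
    # the source encoding; the idea is that Ruby could do the same instead
    # of relying on a BOM.
    puts "こんにちは"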

···

--
Best regards, emailto: scholz at scriptolutions dot com
Lothar Scholz http://www.ruby-ide.com
CTO Scriptolutions Ruby, PHP, Python IDEs

Florian Gross <flgr@ccan.de> writes:

Nikolai Weibull wrote:

I'm working with Japanese character sets in Windows. I can save my
*.rb files with notepad using UTF-8 but I can't run them with Ruby.

The Windows editor (Notepad) always writes a "Byte Order Mark" (BOM)
at the beginning of UTF-8/UTF-16LE/UTF-16BE encoded files. In this case
a UTF-8 encoded file begins with "EF BB BF" (hex). These marker bytes
should usually be ignored (for more information see http://www.unicode.org/).

Why does it write a BOM for UTF-8 encoded files? It's utterly
meaningless to discuss byte order for UTF-8 encoded text,

So that it can identify the file as UTF-8 encoded in the future
without having to guess based on byte count, I assume.

I think that that behavior makes sense and would like to see it
supported in Ruby.

To what extent do BOMs interfere with shebang-lines?

···

--
Christian Neukirchen <chneukirchen@gmail.com> http://chneukirchen.org

Lothar Scholz wrote:

> So that it can identify the file as UTF-8 encoded in the future without
> having to guess based on byte count, I assume.

> I think that that behavior makes sense and would like to see it
> supported in Ruby.

Doesn't Ruby CVS already do this?
I thought I read something about this on the ruby-core list.

No idea, I'm on 1.8.2.

* Christian Neukirchen (Mar 20, 2005 17:50):

To what extent do BOMs interfere with shebang-lines?

They can't coexist, unless the operating-system deals with a BOM
appropriately. Linux doesn't, for one,
        nikolai

···

--
::: name: Nikolai Weibull :: aliases: pcp / lone-star / aka :::
::: born: Chicago, IL USA :: loc atm: Gothenburg, Sweden :::
::: page: minimalistic.org :: fun atm: gf,lps,ruby,lisp,war3 :::
main(){printf(&linux["\021%six\012\0"],(linux)["have"]+"fun"-97);}

Quoting chneukirchen@gmail.com, on Mon, Mar 21, 2005 at 01:45:57AM +0900:

Florian Gross <flgr@ccan.de> writes:
To what extent do BOMs interfere with shebang-lines?

They are the first characters in the file, so the first two characters
are not # and !.
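
One way to see this is to look at the first bytes of the script: a
BOM-prefixed file starts with EF BB BF rather than 23 21 ("#!"), so the
kernel never sees the shebang. A small diagnostic sketch:

    # shebang_check.rb - report whether a BOM hides a script's shebang.
    bytes = (File.open(ARGV[0], "rb") { |f| f.read(5) } || "").unpack("C5")
    if bytes[0, 3] == [0xEF, 0xBB, 0xBF]
      puts "UTF-8 BOM found; '#!' is not the first thing the kernel sees"
    elsif bytes[0, 2] == [0x23, 0x21]
      puts "shebang is the first thing in the file"
    else
      puts "no BOM and no shebang at the start of the file"
    end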

Cheers,
Sam

Sam Roberts wrote:

Quoting chneukirchen@gmail.com, on Mon, Mar 21, 2005 at 01:45:57AM +0900:

Florian Gross <flgr@ccan.de> writes:
To what extent do BOMs interfere with shebang-lines?

Are you really sure I wrote that? ;-)