Hi,
> >
> > Yes but what about stuff already encoded in UTF-16?
>
> That's why I said read up on unicode!
> After you read that stuff you'll understand why it's no problem.
> I'm not going to explain it. Many people understand it, but might
> make mistakes when explaining it.
> Read the unicode stuff carefully. It's vital for many things.
>
> The only thing you might run into is BOM or Endian-ness, but it's
> doubtful it will be an issue in most cases.
>
> This might get you started.
> FAQ - UTF-8, UTF-16, UTF-32 & BOM
>
>
> Even Joel Spolsky wrote a brief bit on unicode... mostly trumpeting
> how programmers need to know it and how few actually do.
> The short version is that UTF-16 is basically wasteful. It uses 2
> bytes for lower-level code-points (the stuff also known as ASCII
> range) where UTF-8 does not.
As you suggested, I read the article:
The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) – Joel on Software
I didn't find anything new. It just explains character sets in a
rather non-specific way. ASCII uses 7 bits, so it only covers 128
characters (a byte has room for 256), so it can't store all the
characters in the world, so other character sets are needed (really?
I would have never guessed that). UTF-16 basically stores each
character in 2 bytes (so it can represent far more of the world's
characters). UTF-8 also allows more characters, but it doesn't
necessarily need 2 bytes: it uses 1 byte for characters up to 127 and
2 or more beyond that, extending to at most 4 bytes per character in
practice (the original design allowed 6).
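Something like this, say (using the newer String#bytesize, so treat
it as a sketch, and assume a UTF-8 source file):

  "A".bytesize    # => 1   ASCII range, one byte in UTF-8
  "é".bytesize    # => 2   U+00E9, two bytes in UTF-8
  "あ".bytesize   # => 3   Japanese kana, three bytes in UTF-8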
So what exactly am I looking for here?
UTF-8 and UTF-16 are pretty much the same idea. They encode a single
character using one or more code units, where those units are 8-bit
or 16-bit respectively. The only things you buy by converting to
UTF-16 are space efficiency for code points that need close to 16
bits to represent (such as Japanese characters) and, as a downside,
endianness issues. Note that some characters may (and some must) be
composed of multiple code points (a base character code point plus
additional combining accent code point(s)).
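For instance (again a rough sketch using the newer String#encode,
just to show the numbers):

  s = "fooobar"
  s.encode("UTF-8").bytesize       # => 7   one byte per ASCII character
  s.encode("UTF-16LE").bytesize    # => 14  two bytes per character

  "日本語".encode("UTF-8").bytesize     # => 9  three bytes per character
  "日本語".encode("UTF-16LE").bytesize  # => 6  two bytes per character

  composed   = "\u00E9"   # "é" as a single code point
  decomposed = "e\u0301"  # "e" plus COMBINING ACUTE ACCENT
  composed == decomposed  # => false, even though both render as "é"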
> You really need to spend an afternoon reading about unicode. It
> should be required in any computer science program as part of an
> encoding course. Americans in particular are often the ones who know
> the least about it....
What is there to know about Unicode? There are a couple of character
sets, you should use UTF-8, and you have to remember that one
character != one byte. Is there anything else for practical purposes?
I'm sorry if I'm being rude, but I really don't like it when people
tell me to read stuff I already know.
My question is still there:
Let's say I want to rename a file "fooobar" by removing the third
"o", but the name is UTF-16, and Ruby only supports UTF-8. So I
remove the "o", and of course a stray 0x00 is left behind; that's
assuming the string is recognized at all.
Why is there no issue with UTF-16 if only UTF-8 is supported?
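To make the problem concrete, here is a rough sketch of what I mean,
working on plain byte strings (which is how Ruby 1.8 sees them
anyway); the file name is just an example:

  # "fooobar" in UTF-16LE is the byte sequence "f\0o\0o\0o\0b\0a\0r\0".
  utf16 = "f\0o\0o\0o\0b\0a\0r\0"

  # Deleting an "o" as if this were a one-byte-per-character string
  # removes a single byte of a two-byte code unit...
  broken = utf16.sub("o", "")
  # ...which leaves an orphaned 0x00 and an odd number of bytes,
  # so the name is no longer valid UTF-16:
  broken  # => "f\0\0o\0o\0b\0a\0r\0"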
If you handle UTF-16 as if it were something else, you break it
regardless of the language's support. If you know (or have a way to
find out) that it's UTF-16, you can convert it. If there is no way to
find out, all the language support is moot.
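This is what I mean by converting when you do know the encoding (a
sketch with the newer String#encode; Iconv does the same job in 1.8):

  utf16 = "f\0o\0o\0o\0b\0a\0r\0"
  utf8  = utf16.force_encoding("UTF-16LE").encode("UTF-8")  # => "fooobar"
  fixed = utf8.sub("o", "")          # => "foobar", a whole character removed
  fixed.encode("UTF-16LE")           # back to valid UTF-16 bytes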
Thanks
Michal
···
On 30/09/2007, Felipe Contreras <felipe.contreras@gmail.com> wrote:
On 9/29/07, John Joyce <dangerwillrobinsondanger@gmail.com> wrote: