[ANN] FixedPt-0.0.1

First release. This is the first gem I've ever made, and I've gotta say
it was pretty much painless. There's even a C extension involved. Kudos
to the RubyGems folks. Please let me know if there are any problems with
the gem.

From the README:

FixedPt class
-------------
The FixedPt class represents fixed-point numbers and defines
mathematical operations on them.

This class is not very useful unless you are trying to model hardware -
if you're doing that, it can be very useful. If you're doing web
development, there's no need to read further.

What's a fixed point number?
-------------------------------------
First off, a fixed-point number is represented by a limited number of
bits. You can define that limit when you construct a FixedPt:
  fp = FixedPt.new(9.25,6,2)
which means "construct a fixed point number with the value of 9.25,
represented by 6 total bits, with the binary point at 2" - or, in other
words, the number 9.25 with 4 bits for the integer part and 2 bits for
the fractional part, like so:
  "1001.01"
       ^
       >+----- That's the binary point. It's just like a decimal point,
       but it's for binary numbers.
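
Continuing the example above (a quick sketch, assuming FixedPt is already
loaded; to_binary is the method used later in this README, though the exact
output formatting may differ slightly in your installed version):

  fp = FixedPt.new(9.25, 6, 2)   # 4 integer bits, 2 fractional bits
  fp.to_binary                   #=> "1001.01"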

Actually, to be honest, if you have to ask what a fixed point number is
you probably aren't interested in this package.

Why would I want to use this?
------------------------------
You probably don't want to unless you're modeling a hardware design in
software. (I suppose you could also use FixedPt if you think that math is
just too fast in Ruby these days :)

For example, say you're doing some sort of Digital Signal Processing
(DSP) and you'd like to implement a particular algorithm in an FPGA. But
before you reach for the HDL and simulator, you want to try an
implementation in a programming language (a real programming language
like Ruby, of course, not Matlab :) to see if your algorithm works.
However, you know that you'll eventually be implementing this in
hardware, and you also know that you don't (generally speaking) have
infinite precision in hardware, so you'd like to see what effect
arithmetic on limited-precision numbers (fixed-point numbers) will have
on your algorithm. How many fractional bits do you need to accurately
represent exp(-x) in your design, for example? How many fractional and
integer bits will you need to represent your numbers? These are
questions you can answer with this handy FixedPt class.

Is it slow?
-----------
If you don't have a C compiler, you bet it'll be slow - it's mostly pure
Ruby. However, there is some sneakiness afoot to help speed things up: if
you do happen to have a C compiler, some of the methods of the class could
end up being implemented in C. I say 'could end up being implemented in C'
because it all depends on your setup. But not to worry - even if you don't
have a C compiler, and even if you don't have Ruby set up right to create
extensions, it'll still work; just more slowly (about half as fast,
actually).

Hopefully, I'll add more C code in the future to speed things up more.

Are there bugs?
---------------
It's a sure bet.
Lots of unit tests still need to be written.

What else do I need to know?
----------------------------
* You can do math on FixedPt numbers (+ - * /)
* You can choose from two different overflow handlers (shown in the sketch
  below):
  1) saturate (the default): FixedPt(9,4,1).to_binary #=> "111.1"
  2) truncate: FixedPt(9,4,1,:truncate).to_binary #=> "001.0"
* After running your algorithm you can check whether any of the FixedPt
  numbers overflowed and see what the maximum number was that caused the
  overflow. You can then use this information to size your actual hardware
  registers.
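
Here's a rough sketch of the first two items in use. It assumes FixedPt is
loaded and that FixedPt.new accepts the same arguments as the FixedPt(...)
form shown above; the overflow-inspection methods aren't shown because
their names aren't given in this README:

  a = FixedPt.new(9.25, 6, 2)          # "1001.01"
  b = FixedPt.new(1.5,  6, 2)          # "0001.10"
  (a + b).to_binary                    #=> "1010.11"  (9.25 + 1.5 = 10.75)

  # 9 doesn't fit in 4 bits with 1 fractional bit (the max is 7.5 = "111.1"):
  FixedPt.new(9, 4, 1).to_binary             #=> "111.1"  saturate (default)
  FixedPt.new(9, 4, 1, :truncate).to_binary  #=> "001.0"  keep low-order bits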

Where can I find it?
--------------------
http://rubyforge.org/projects/fixedpt/

How do I install it?
---------------------
Use gem:
  gem install FixedPt-x.y.z.gem

What if I run into some of those bugs?
--------------------------------------
Send me an email:
philtomson<at>gmail.com

License
-------
Ruby's

TODO:
* Add more docs besides this README
* Add examples

Phil Tomson wrote:

What's a fixed point number?
-------------------------------------
First off a fixed point number is represented by a limited number of bits.
You can define that limit when you construct a FixedPt:
  fp = FixedPt.new(9.25,6,2)

It's an interesting library and who knows -- I wouldn't be too surprised if there were use cases outside of hardware design for this.

Anyway, I'm writing because I am wondering if you also allow a String-style constructor like FixedPt.new("9.25", 6, 2)? Otherwise you could get into a bit of trouble when working with FixedPt numbers with lots of post-decimal digits, because the Float literal implies a certain inaccuracy.

In article <d9ivki$iu0$1@sea.gmane.org>,

Phil Tomson wrote:

What's a fixed point number?
-------------------------------------
First off a fixed point number is represented by a limited number of
bits.
You can define that limit when you construct a FixedPt:
  fp = FixedPt.new(9.25,6,2)

It's an interesting library and who knows -- I'd not be entirely too
surprised if there were use cases out of hardware design for this.

Anyway, I'm writing this because I am wondering if you also allow a
String style constructor like FixedPt.new("9.25", 6, 2)? Otherwise you
could get a bit of trouble when working with FixedPt numbers with lots
of post-decimal digits because the float literal implies a certain
inaccurateness.

Construction with Strings is not currently supported, but that's a
thought. Practically speaking, I'm not sure how big of a problem this
is; I doubt I'd really need to have a FixedPt number with more
precision than Ruby has.

I did some experimenting in irb to find out where the cutoff is:

irb(main):027:0> x = 9.0000000000001
=> 9.0000000000001
irb(main):028:0> x = 9.00000000000001
=> 9.00000000000001
irb(main):029:0> x = 9.000000000000001
=> 9.0

It happens at 15 digits to the right of the decimal point. (When you
go the other way by increasing the size of the number, you automatically
convert to a Bignum - however, we don't have a 'Smallnum' equivalent when
the numbers get very small.) That would be about 50 fractional bits in a
binary fixed point number - practically speaking, you usually (where
'usually' probably covers 99.999% of the cases) don't need that kind of
precision in a hardware implementation of an algorithm.
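
(A quick sanity check on that figure, using the standard digits-to-bits
conversion rather than anything from FixedPt itself:)

  15 * Math.log(10) / Math.log(2)   #=> about 49.8 bits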

Mostly, you would use FixedPt to figure out how few fractional bits you
can get away with before things break. For example (the actual example
that caused me to develop FixedPt), let's say you need to calculate exp(-x)
in a function. Since you want to implement it in hardware, often the best
way to represent a function like exp is to use a lookup table. Keep in
mind that the value of exp(-x) is between 0 and 1 (1 when x is 0, '0' for
all practical purposes when x is large). There are three issues to
consider when constructing a lookup table for exp(-x):

1) How many bits of precision will I need to accurately represent the
values I get from the lookup table? (where 'accurately' is dependent on
how the values will be used.) This can be determined by experimentation
in which you try some number of fractional bits and then use the values
in your algorithm; if it works, keep trying to reduce the number of
fractional bits till it fails. Remember, we want to reduce the number of
bits as much as possible because we're going to implement in real
hardware where such resources cost $.

2) At what value of x will we consider the output of the function to be
0.0?

3) How much resolution should the lookup table have in order to produce
accurate results in the target algorithm? In other words, how many
entries will the lookup table need? (again, this has to be determined
experimentally)

I implemented a model of a Support Vector Machine (kind of like a
Perceptron, a type of artificial neuron) using FixedPt and an exp(-x)
lookup table and came up with the following results:

1) I needed 9 fractional bits at a minimum.
2) The value of x for which exp(-x) = 0.0 was -12.25.
3) The number of entries in the exp(-x) lookup table was 128 (7 bits of
address) which was surprisingly small.
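
To make that recipe concrete, here is an illustrative sketch of how such a
table might be built with FixedPt. This is not code from the project: the
constants are the ones reported above, the FixedPt.new(value, bits,
binary_pt) signature is the one from the README, and the table layout and
the exp_neg helper are made up for illustration:

  TABLE_BITS = 7        # 128 entries (7 bits of address)
  X_CUTOFF   = 12.25    # exp(-x) treated as 0.0 beyond this point
  FRAC_BITS  = 9        # minimum number of fractional bits that worked

  # Each entry holds exp(-x) quantized to 1 integer bit plus FRAC_BITS
  # fractional bits (the values all lie between 0 and 1).
  EXP_TABLE = (0...(1 << TABLE_BITS)).map do |i|
    x = i * X_CUTOFF / ((1 << TABLE_BITS) - 1)
    FixedPt.new(Math.exp(-x), 1 + FRAC_BITS, FRAC_BITS)
  end

  # Look up exp(-x) by rounding x to the nearest table slot.
  def exp_neg(x)
    return EXP_TABLE.last if x >= X_CUTOFF
    EXP_TABLE[(x / X_CUTOFF * ((1 << TABLE_BITS) - 1)).round]
  end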

Phil

···

Florian Groß <florgro@gmail.com> wrote:

Phil Tomson wrote:

Constuction with Strings is not currently supported, but that's a thought. Practically speaking, I'm not sure how big of a problem this is, I doubt I'd really need to have a FixedPt number with more precision than Ruby has.

I did some experimenting in irb to find out where the cutoff is:

irb(main):027:0> x = 9.0000000000001
=> 9.0000000000001
irb(main):028:0> x = 9.00000000000001
=> 9.00000000000001
irb(main):029:0> x = 9.000000000000001
=> 9.0

It happens at 15 digits to the right of the decimal point. (When you go the other way by increasing the size of the number you automatically convert to a Bignum - however we don't have a 'Smallnum' equivilent when the numbers get very small.)

BigDecimal would work, but AFAIK it does not automatically take care of the precision that would be needed to avoid data loss. Keeping numbers around as Rationals and converting to a BigDecimal on output might work in those cases.
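
A rough sketch of that idea, using only standard Rational and BigDecimal
calls (the 9.25 value is just an example; on very old Rubies you would also
need to require 'rational'):

  require 'bigdecimal'

  exact = Rational(925, 100)              # 9.25 held exactly (reduces to 37/4)
  num   = BigDecimal(exact.numerator.to_s)
  den   = BigDecimal(exact.denominator.to_s)
  (num / den).to_s("F")                   #=> "9.25"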

That would be about 50 fractional bits in a binary fixed point number - practically speaking, you usually (where 'usually' probably covers 99.999% of the cases ) don't need that kind of precision in a hardware implementation of an algorithm.

Ah, I'm not sure how frequently such accurate numbers are used (I'm not experienced in hardware design), but it might still be worth considering. After all, 1.1 will not be represented accurately either (the first error occurs at the 15th digit).
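
For instance, printing 1.1 with extra digits shows where the Float
representation drifts:

  "%.20f" % 1.1   #=> "1.10000000000000008882"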

[...]

I implemented a model of a Support Vector Machine (kind of like a Perceptron, a type of artificial neuron) using FixedPt and an exp(-x) lookup table and came up with the following results:

1) I needed 9 fractional bits at a minimum.
2) The value of x for which exp(-x) = 0.0 was -12.25.
3) The number of entries in the exp(-x) lookup table was 128 (7 bits of address) which was surprisingly small.

Thank you for the detailed description of the process in which this library would be involved. It was very interesting to me -- it's always nice to see Ruby being used in the real world where you might not have expected it at first. Perhaps this is an interesting enough use case to be added to the RealWorldRuby page of the rubygarden.org Wiki? If you can name the company you are doing this for, then even better. :)

Anyway, I think you are likely right that this will rarely be much of a problem -- but having the possibility in the next release (there will be one anyway, won't there? ;)) might still be a good thing -- even if it just gets users into the habit of being careful with Floats. Your users will probably know anyway, but allowing for such a style consistently across all libraries should keep confusion low.

In article <d9kfk2$55j$1@sea.gmane.org>,

Phil Tomson wrote:

Constuction with Strings is not currently supported, but that's a
thought. Practically speaking, I'm not sure how big of a problem this
is, I doubt I'd really need to have a FixedPt number with more
precision than Ruby has.

I did some experimenting in irb to find out where the cutoff is:

irb(main):027:0> x = 9.0000000000001
=> 9.0000000000001
irb(main):028:0> x = 9.00000000000001
=> 9.00000000000001
irb(main):029:0> x = 9.000000000000001
=> 9.0

It happens at 15 digits to the right of the decimal point. (When you
go the other way by increasing the size of the number you automatically
convert to a Bignum - however we don't have a 'Smallnum' equivilent when the
numbers get very small.)

BigDecimal would work, but does AFAIK not automatically take care of the
precision that would be needed to avoid data loss. Keeping numbers
around as Rational and converting to a BigDecimal on output might work
in that cases.

I'll look into this.

That would be about 50 fractional bits in a
binary fixed point number - practically speaking, you usually (where
'usually' probably covers 99.999% of the cases ) don't need that kind of
precision in a hardware implementation of an algorithm.

Ah, I'm not sure how frequently such accurate numbers are used (I'm not
experienced at hardware design), but it might still be worth
considering. After all 1.1 will not be accurately presented (the first
error occurs at the 15th digit) either.

[...]

I implemented a model of a Support Vector Machine (kind of like a
Perceptron, a type of artificial neuron) using FixedPt and an exp(-x)
lookup table and came up with the following results:

1) I needed 9 fractional bits at a minimum.
2) The value of x for which exp(-x) = 0.0 was -12.25.
3) The number of entries in the exp(-x) lookup table was 128 (7 bits of
address) which was surprisingly small.

Thank you for the detailed description of the process where this library
would be involved. It was very interesting for me -- always nice to see
where Ruby can be used in the real world where you might not have
expected it to be at first. Perhaps this is an interesting enough use
case for it to be added to the RealWorldRuby page of the rubygarden.org
Wiki? If you can name the company you are doing this for then even
better. :)

It was for a university project, so I'm not sure it qualifies as a
'RealWorld' application ;) Actually, I spent last winter
working on this project as a visiting researcher at the University of
Genova, Italy.

Anyway, I think you are likely right in that this will rarely be much of
a problem -- but having the possibility in the next release (there will
be one anyway, won't there? ;)) might still be a good thing -- even if
just getting users in the habit of being careful with Floats. Yours will
probably know anyway, but allowing for such a style consistently through
all libraries should keep confusion low.

Hopefully there will be other releases: I certainly hope to implement
more methods in C to speed things up. I'll keep your suggestion in
mind. Thanks.

Phil

···

Florian Groß <florgro@gmail.com> wrote: