It’s good news that Ruby 1.7.x and later uses the Mersenne Twister for
rand. However, I feel we should have “taken it all the way” and introduced
a class for RNGs of which MersenneTwister is a subclass, so that rand/srand
are really calls on a MersenneTwister instance.
I’d like to know any reasons not to go down that route. The added
complexity would be minimal since the MT algorithm is already in the
interpreter.
The main benefits imho would be:
Multiple different RNG streams can be used independently of each other
Persistence of RNG streams by marshalling MT objects
Conceptually nicer, more OO
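For concreteness, the first two benefits can be sketched with the Random class that later Ruby versions (1.9.2+) ended up adding to core, itself a Mersenne Twister wrapper; in the proposal’s terms, read Random as the hypothetical MersenneTwister class:

```ruby
# Sketch using Ruby's later core Random class (1.9.2+), a stand-in
# here for the hypothetical MersenneTwister class proposed above.
r1 = Random.new(1234)            # two independent streams...
r2 = Random.new(5678)
r1.rand                          # ...advancing one never disturbs the other

saved = Marshal.dump(r2)         # persist a stream's full state
x = r2.rand
restored = Marshal.load(saved)   # resume exactly where it was saved
restored.rand == x               # => true
```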
In message “Objectify the mersenne twister in 1.8?” on 03/02/25, Robert Feldt feldt@ce.chalmers.se writes:
I’d like to know any reasons not to go down that route. The added
complexity would be minimal since the mt algorithm is already in the
interpreter.
Historical reasons: the API was defined under the influence of UNIX
rand(3), and Mersenne Twister was its drop-in replacement. I don’t
object to objectifying the RNG (no pun intended). It’s just not done yet,
and I have no driving force to do it right now.
It seems to me that I noticed something about this on your web site.
Could we hope you have already done some work and could integrate it with
the current MT implementation?
Having a PRNG class would be cool, and marshalling (seen as seed-saving)
would be good too.
It would even be cool to have a choice, because (I seem to remember)
MT is not a secure algorithm.
And… this hypothetical module could support the underlying OS’s RNG
(at least on some)…
We could benefit from powerful algorithms on modern OSes, such as
Yarrow, and even get a fast native implementation.
···
On Tue, 25 Feb 2003 04:00:25 +0900, Robert Feldt feldt@ce.chalmers.se wrote:
It’s good news that Ruby 1.7.x and later uses the Mersenne Twister for
rand. However, I feel we should have “taken it all the way” and introduced
a class for RNGs of which MersenneTwister is a subclass, so that rand/srand
are really calls on a MersenneTwister instance.
I’d like to know any reasons not to go down that route. The added
complexity would be minimal since the MT algorithm is already in the
interpreter.
The main benefits imho would be:
Multiple different RNG streams can be used independently of each other
Persistence of RNG streams by marshalling MT objects
Conceptually nicer, more OO
This is a good idea. It would also make it easier to “plug in” different
RNG algorithms for different RNG streams.
This has been on my wish list for a long time. I’ve been using an
“objectified” version of the Numerical Recipes generators, but those are
of poorer quality and require a license.
The only disadvantage (for my use at least) is that MT has much larger
state than NR, so you want to think twice about using large numbers of them.
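For scale, the standard MT19937 parameters (nothing Ruby-specific is assumed here) put the state at 624 32-bit words per generator, roughly 2.5 KB, which is what makes thousands of instances noticeable:

```ruby
# Back-of-the-envelope cost of many MT19937 instances: the algorithm
# keeps 624 32-bit words of state (plus an index) per generator.
MT_STATE_WORDS = 624
bytes_per_gen  = MT_STATE_WORDS * 4              # 2496 bytes, ~2.5 KB
n = 10_000
puts "#{n} generators hold ~#{n * bytes_per_gen / 1024} KB of raw state"
```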
It seems to me that I noticed something about this on your web site.
Could we hope you have already done some work and could integrate it with
the current MT implementation?
Yes, most of the needed code is in RandomR. Needs to be integrated with
random.c in ruby src though.
Having a PRNG class would be cool, and marshalling (seen as seed-saving)
would be good too.
It would even be cool to have a choice, because (I seem to remember)
MT is not a secure algorithm.
And… this hypothetical module could support the underlying OS’s RNG
(at least on some)…
We could benefit from powerful algorithms on modern OSes, such as
Yarrow, and even get a fast native implementation.
Yarrow is much too complex for the ruby src distribution imho (it needs
lots of code for encrypting state etc.). But sure, a natural consequence
of objectifying MT is that we could make it pluggable (i.e. rand calls
$DEFAULT_RNG.rand
and srand(s) does
$DEFAULT_RNG = $DEFAULT_RNG_CLASS.new(s)
).
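That pluggable scheme can be sketched in plain Ruby; SimpleLCG, my_rand and my_srand below are illustrative stand-ins (not actual core API), with a toy linear congruential generator in place of MT just to keep it short:

```ruby
# Illustrative stand-ins for the scheme above: any object with a
# #rand method can be plugged in as the default generator.
class SimpleLCG                          # toy PRNG in place of MersenneTwister
  def initialize(seed)
    @state = seed % 2**31
  end

  def rand                               # float in [0, 1)
    @state = (1_103_515_245 * @state + 12_345) % 2**31
    @state.fdiv(2**31)
  end
end

$default_rng = SimpleLCG.new(0)

def my_rand                              # Kernel#rand would delegate like this
  $default_rng.rand
end

def my_srand(seed)                       # srand swaps in a fresh generator
  $default_rng = SimpleLCG.new(seed)
end

my_srand(7)
a = my_rand
my_srand(7)
b = my_rand                              # same seed, same stream: a == b
```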
It would even be cool to have a choice, because (I seem to remember)
MT is not a secure algorithm.
I didn’t know MT is insecure. Do you have any reference?
Most PRNGs are insecure; only a few are considered secure. It’s not so
much about the algorithm itself as about how you create seeds and maintain
the state.
I haven’t seen any papers talking about the security of MT, but see:
“Caution: MT is for MonteCarlo, and is NOT SECURE for CRYPTOGRAPHY as
it is.”
That was the reason I wrote “secure” that way…
Well, I could have explained myself better, sorry :-/
As a side note, it’s an open question whether there is ANY PRNG that is
secure for cryptography. All you can do is show that a given PRNG is not
suitable for cryptography.
There is some very interesting work happening in this area based on the
Yarrow paper by Bruce Schneier and John Kelsey. The EGADS system provided by
Actually I think having OS-level entropy gathering, and thus the ability
to provide good random numbers, is a must for crypto-level randomness.
FreeBSD 5.0 (maybe OpenBSD too?) and Linux <insert recent version
support Yarrow for random, and they could even use hardware
PRNG devices; that’s why I’d like to exploit these features, even if
losing some portability.
Oh, and someone feels that Intel saying they put a “pure RNG” in their
CPU is hype?
Is reading /dev/random (under Linux) enough to assure the random numbers
are cryptographically secure?
man urandom
The random number generator gathers environmental noise from
device drivers and other sources into an entropy pool. The generator
also keeps an estimate of the number of bits of noise in the
entropy pool. From this entropy pool random numbers are created.
When read, the /dev/random device will only return random bytes
within the estimated number of bits of noise in the entropy pool.
/dev/random should be suitable for uses that need very high
quality randomness such as one-time pad or key generation.
When the entropy pool is empty, reads to /dev/random will block
until additional environmental noise is gathered.
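As a sketch of how an OS-backed generator could tap into this from Ruby (Linux-specific, and only an illustration): read the device files directly; /dev/urandom is the variant that never blocks.

```ruby
# Pull 16 bytes from the kernel's entropy-fed generator (Linux).
# /dev/urandom never blocks; /dev/random may block when the kernel's
# entropy estimate runs low, as the man page excerpt describes.
bytes = File.open("/dev/urandom", "rb") { |f| f.read(16) }
puts bytes.unpack("H*").first    # hex-encoded, e.g. "9f3c..."
```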
Oh, and someone feels that Intel saying they put a “pure RNG” in their
cpu is hype?
If they got something like the Pentium division bug to bite randomly
(by quantum effects or the like) it’d do the trick, wouldn’t it?
The chips run so hot it must be easy to gather some entropy there…
···
On Thu, Feb 27, 2003 at 05:36:36AM +0900, gabriele renzi wrote:
Actually I think having OS-level entropy gathering, and thus the ability
to provide good random numbers, is a must for crypto-level randomness.
Well, any system that doesn’t is probably not good enough for
crypto, but you can do these things and still get it wrong.
FreeBSD 5.0 (maybe OpenBSD too?) and Linux <insert recent version
support Yarrow for random, and they could even use hardware
PRNG devices; that’s why I’d like to exploit these features, even if
losing some portability.
Is reading /dev/random (under Linux) enough to assure the random numbers
are cryptographically secure?
There’s a problem with that sentence:
“random numbers are cryptographically secure”
The only truly cryptographically secure random number
generator is a Geiger counter on a serial port, and even then you have
to be careful with the implementation to get it right.
The real question to ask is:
“Is /dev/random good enough?”
And if they are using entropy pools and something Yarrow-based,
it’s as good as anything that’s currently publicly
available. It’s certainly better than the random number
generation in most currently deployed ssh implementations
and many other pieces of crypto software.
The random number generator gathers environmental noise from
device drivers and other sources into an entropy pool. The generator
also keeps an estimate of the number of bits of noise in the
entropy pool. From this entropy pool random numbers are created.
When read, the /dev/random device will only return random bytes
within the estimated number of bits of noise in the entropy pool.
/dev/random should be suitable for uses that need very high
quality randomness such as one-time pad or key generation.
I would not use it for a one-time pad. I think it’s fine for
key generation.
When the entropy pool is empty, reads to /dev/random will block
until additional environmental noise is gathered.
Oh, and someone feels that Intel saying they put a “pure RNG” in their
cpu is hype?
If they got something like the Pentium division bug to bite randomly
(by quantum effects or the like) it’d do the trick, wouldn’t it?
The chips run so hot it must be easy to gather some entropy there…
Depends on what they did. A really small gieger counter would
be ideal, I’m sure there are things you could do to get
randomness from some quantum effects. I have no ideal what
Intel has done though.
Are you saying that the thermally generated random numbers on the P4 are not
truly random?
Depends on what they did. A really small gieger counter would
be ideal, I’m sure there are things you could do to get
randomness from some quantum effects. I have no ideal what
Intel has done though.
Please read carefully (okay, I’m terrible at proofreading my own
writing); it should say:
“I have no idea what Intel has done.”
I can’t comment one way or the other on what’s in the P4.
If they are sampling a random physical process[1], then they
have something really useful. I’m just trying to get
across the incredible difficulty of actually generating a
truly random sequence on a computer. It’s pretty hard to
get one suitable for physical simulations; getting one that’s
crypto quality is an open problem.
Booker C. Bense
[1]- These are much harder to find than you might think at
first, particularly if you need a very large/fast random byte stream.
I am aware of the difficulties. Intel claims that with the P4 a physical
temperature measurement is used as part of the generation of random
numbers, and that this produces truly random numbers rather than
pseudo-random numbers. I’m interested in independent verification
(hopefully) or refutation of this claim, which is why I posted the question.
···
On Thursday 27 February 2003 11:40 am, bbense+comp.lang.ruby.Feb.27.03@telemark.slac.stanford.edu wrote: