I felt like giving myself a small project to get my feet a bit more wet
with Rails
http://www.robbyonrails.com/articles/read/7
resulted in having this up a few hours later:
http://rubyurl.com/
I promise... if you add links, I won't delete them. It'll be up for as
long as I can maintain the $8/year for the domain.
*be gentle..* 
-Robby
···
--
/***************************************
* Robby Russell | Owner.Developer.Geek
* PLANET ARGON | www.planetargon.com
* Portland, OR | robby@planetargon.com
* 503.351.4730 | blog.planetargon.com
* PHP, Ruby, and PostgreSQL Development
* http://www.robbyonrails.com/
****************************************/
Quoting robby@planetargon.com, on Mon, Mar 14, 2005 at 07:45:19PM +0900:
I felt like giving myself a small project to get my feet a bit more wet
with Rails
http://www.robbyonrails.com/articles/read/7
resulted in having this up a few hours later:
http://rubyurl.com/
I tried:
http://rubyurl.com/
and got
<html><body><h1>Application error (Rails)</h1></body></html>
Cheers,
Sam
Robby Russell wrote:
I felt like giving myself a small project to get my feet a bit more wet
with Rails
http://www.robbyonrails.com/articles/read/7
resulted in having this up a few hours later:
http://rubyurl.com/
Nice. I noticed, though, that you do not get the same RubyURL for the same source URL each time you do a conversion. I'm guessing, then, that there is no straightforward hash mapping going on.
Would this sort of 1-to-1 mapping be useful? Two arguments come to mind:
1. Multiple use of the same source URL would not consume additional space on each conversion; once a URL is entered, the same RubyURL is reused.
2. If a predictable hashing system is used that assures each source URL maps to only one Ruby URL, and that algorithm is published, then people can manually decode Ruby URLs if need be (should, say, the site go away). For example, if you see this:
http://rubyurl.com/2OJCU
you should be able to reverse-engineer it to this
http://www.ruby-doc.org/
Anyways, thanks for the site.
James
I felt like giving myself a small project to get my feet a bit more wet
with Rails
http://www.robbyonrails.com/articles/read/7
In my case qurl was a small project to get a bit more used to writing
web applications in Ruby/FCGI. I've found it very fast and reliable;
Ruby seems very well suited to running daemonized applications, which is
a breath of fresh air next to PHP appservers which like to leak until
they explode, despite not maintaining any state to speak of across
requests.
I'm just mapping qurl strings to table IDs using base62 ([0-9a-zA-Z]).
This keeps the mapping as dense as possible (nearly 4,000 URLs with
2 characters, almost a quarter of a million with 3), at the expense of
predictability (which could be avoided by randomizing allocation within a
sensible range, but I don't really see the point).
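In Ruby the whole mapping is only a few lines. A rough sketch, assuming
the 62-symbol alphabet above (the constant and method names are mine, not
qurl's actual code):

    ALPHABET = ('0'..'9').to_a + ('a'..'z').to_a + ('A'..'Z').to_a

    def encode(id)
      return ALPHABET.first if id.zero?
      s = ''
      while id > 0
        s = ALPHABET[id % 62] + s   # peel off the least significant digit
        id /= 62
      end
      s
    end

    def decode(str)
      str.chars.inject(0) { |n, c| n * 62 + ALPHABET.index(c) }
    end

    encode(3843)          # => "ZZ", the last two-character code
    decode(encode(242))   # => 242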
Next step is to optionally generate a link ID which is easy to give to
someone verbally. I have some pronounceable-password generation code
somewhere, maybe that could help...
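A common trick is to alternate consonants and vowels; a hypothetical
sketch (not that actual code):

    CONSONANTS = %w[b d f g h k l m n p r s t v z]
    VOWELS     = %w[a e i o u]

    def pronounceable(length = 6)
      (0...length).map { |i| (i.even? ? CONSONANTS : VOWELS).sample }.join
    end

    pronounceable   # => e.g. "kofupa"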
If qurl is down, btw, it's probably because the server is getting
senile. It "lost" the network interface a few days ago and needed
rebooting, which it occasionally decides to do on its own.
resulted in having this up a few hours later:
http://rubyurl.com/
Nice. It doesn't seem to like data: URIs (Application Error); qurl
allows for pretty freeform URLs up to 64k, letting you "link" to things
like: http://qurl.net/27 (non-IE users only).
*be gentle..* 
Damn, *aborts DDoS tests* 
···
* Robby Russell (robby@planetargon.com) wrote:
--
Thomas 'Freaky' Hurst
http://hur.st/
Hmm. It's working for me...
(Loading the first page anyway.)
James Edward Gray II
···
On Mar 14, 2005, at 8:14 AM, Sam Roberts wrote:
I tried:
http://rubyurl.com/
and got
<html><body><h1>Application error (Rails)</h1></body></html>
Yeah, that's on the list of things to do. It'll just return the
existing short_url rather than generating a new one.
*working on getting FCGI installed so it'll be a bit faster*
Cheers,
Robby
···
On Tue, 2005-03-15 at 00:35 +0900, James Britt wrote:
Robby Russell wrote:
> I felt like giving myself a small project to get my feet a bit more wet
> with Rails
>
> http://www.robbyonrails.com/articles/read/7
>
> resulted in having this up a few hours later:
>
> http://rubyurl.com/
Nice. I noticed, though, that you do not get the same RubyURL for the
same source URL each time you do a conversion. I'm guessing, then, that
there is no straightforward hash mapping going on.
Would this sort of 1-to-1 mapping be useful? Two arguments come to mind:
--
/***************************************
* Robby Russell | Owner.Developer.Geek
* PLANET ARGON | www.planetargon.com
* Portland, OR | robby@planetargon.com
* 503.351.4730 | blog.planetargon.com
* PHP, Ruby, and PostgreSQL Development
* http://www.robbyonrails.com/
****************************************/
qurl
allows for pretty freeform URLs up to 64k, letting you "link" to things
like: http://qurl.net/27 (non-IE users only).
Heheheheh.... Very nice !!
=D
Regards,
Bill
···
From: "Thomas Hurst" <tom.hurst@clara.net>
James Britt wrote:
Nice. I noticed, though, that you do not get the same RubyURL for the
same source URL each time you do a conversion. I'm guessing, then, that
there is no straightforward hash mapping going on.
In my little IOWA-based system, I just do a query back to the database
when one enters a URL. If there is already a short version available, one
is informed of the existing short URL. The expense of a query here, when
entering a URL into the system, is minimal.
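In ActiveRecord terms (Kirk's system is IOWA, not Rails, so this is just
an illustration; the Rubyurl model and its url/short_code columns are
hypothetical), the query-back amounts to:

    def short_url_for(long_url)
      existing = Rubyurl.find_by_url(long_url)      # query back first
      return existing.short_code if existing        # reuse the old mapping
      Rubyurl.create(:url => long_url).short_code   # otherwise allocate one
    end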
away). For example, if you see this:
http://rubyurl.com/2OJCU
you should be able to reverse-engineer it to this
http://www.ruby-doc.org/
2OJCU does not have all of the information in it that www.ruby-doc.org does.
Hashing algorithms like that are one-way encodings.
General philosophical question about these sorts of services. What is the
advantage of using a hashing algorithm over some sort of simple counter?
My implementation just uses a count, expressed as a base62 number. So, even
with a very, very large number of URLs in the database, the URLs will still
stay quite short, and if one has the disk space, one need never purge or
overwrite a URL as can happen when there are hashing algorithm collisions.
I suppose the downside is just that this makes it easy for someone to scan
the database of URLs?
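The scan in question is a one-liner (reusing the base62 encode sketch
from earlier in the thread; the host name here is made up):

    (0...1000).each do |id|                      # first thousand IDs
      puts "http://qurl.example/#{encode(id)}"   # each one is a live link
    end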
Kirk Haines
This comes down to an issue of how compressible URLs are. I think most
(if not all) of the URL-shortening sites use the fact that URLs people
submit are sparse in the space of all URLs, and just keep assigning them
arbitrarily generated symbols; I don't think an algorithmically reversible
scheme would be possible when you consider some of the huge
server-state-carrying URLs that webapps generate. As a quick experiment
I catted this example from upthread:
http://www.mapquest.com/maps/map.adp?ovi=1&mqmap.x=300&mqmap.y=75&map...
z8OOUkZWYe7NRH6ldDN96YFTIUmSH3Q6OzE5XVqcuc5zb%252fY5wy1MZwTnT2pu%252bNMj
OjsHjvNlygTRMzqazPStrN%252f1YzA0oWEWLwkHdhVHeG9sG6cMrfXNJKHY6fML4o6Nb0Se
Qm75ET9jAjKelrmqBCNta%252bsKC9n8jslz%252fo188N4g3BvAJYuzx8J8r%252f1fPFWk
PYg%252bT9Su5KoQ9YpNSj%252bmo0h0aEK%252bofj3f6vCP
into a file and tried running some standard compression routines on it -
they didn't do very much.
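The experiment is easy to repeat with Ruby's standard zlib bindings (the
string below stands in for the full URL above):

    require 'zlib'

    url = "http://www.mapquest.com/maps/map.adp?ovi=1&" \
          "z8OOUkZWYe7NRH6ldDN96YFTIUmSH3Q6OzE5XVqcuc5zb%252fY5wy1MZwTnT2pu"

    deflated = Zlib::Deflate.deflate(url)
    puts "#{url.bytesize} -> #{deflated.bytesize} bytes"
    # The percent-encoded session state is nearly random, so deflate saves
    # very little; a 3-character table ID is shorter by well over an order
    # of magnitude.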
martin
···
James Britt <jamesUNDERBARb@neurogami.com> wrote:
2. If a predictable hashing system is used that assures each source URL
maps to only one Ruby URL, and that algorithm is published, then people
can manually decode Ruby URLs if need be (should, say, the site go
away). For example, if you see this:
http://rubyurl.com/2OJCU
you should be able to reverse-engineer it to this
http://www.ruby-doc.org/
Hi James,
> I tried:
> http://rubyurl.com/
>
> and got
>
> <html><body><h1>Application error (Rails)</h1></body></html>
Hmm. It's working for me...
(Loading the first page anyway.)
James Edward Gray II
Try putting "http://rubyurl.com/" in the form field and you will get this
error.
Regards,
Alex
···
--
People in cars cause accidents. Accidents in cars cause people.
I tried:
http://rubyurl.com/
and got
<html><body><h1>Application error (Rails)</h1></body></html>
Hmm. It's working for me...
It seems to work for all URLs except when trying to create a link to RubyURL itself. Reading Robby's blog, it looks like the URL filter has a problem.
martinus
http://marnanel.org/writing/tinyurl-whacking
martin
···
Kirk Haines <wyhaines@gmail.com> wrote:
I suppose the downside is just that this makes it easy for someone to scan
the database of URLs?
* Alex Martin Ugalde (Mar 14, 2005 16:20):
⋮
> > <html><body><h1>Application error (Rails)</h1></body></html>
⋮
> Hmm. It's working for me...
⋮
Try putting "http://rubyurl.com/" in the form field and you will get
this error.
See http://rubyurl.com/4U1Cm for why this is so,
nikolai
P.S.
It's of course not a very good error message for this condition.
D.S.
···
--
::: name: Nikolai Weibull :: aliases: pcp / lone-star / aka :::
::: born: Chicago, IL USA :: loc atm: Gothenburg, Sweden :::
::: page: www.pcppopper.org :: fun atm: gf,lps,ruby,lisp,war3 :::
main(){printf(&linux["\021%six\012\0"],(linux)["have"]+"fun"-97);}
>> I tried:
>> http://rubyurl.com/
>> and got
>> <html><body><h1>Application error (Rails)</h1></body></html>
>
> Hmm. It's working for me...
It seems to work for all URLs except when trying to create a link to
RubyURL itself. Reading Robby's blog, it looks like the URL filter
has a problem.
martinus
Indeed, it's displaying an error as I have a constraint on the
PostgreSQL table to block things like 'goatse, tubgurl, and rubyurl' in
the table itself. RubyURL was my latest "I should see how quick I can
get something up" project. I should be adding some friendly error
messages when people attempt to submit URLs like that.
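Such a constraint might look something like this in a migration (a guess
at the shape; only the rubyurls table name appears in this thread, the
column name and pattern are illustrative):

    class AddUrlFilter < ActiveRecord::Migration
      def self.up
        execute <<-SQL
          ALTER TABLE rubyurls
            ADD CONSTRAINT no_blocked_urls
            CHECK (url !~* '(goatse|tubgurl|rubyurl)')
        SQL
      end

      def self.down
        execute "ALTER TABLE rubyurls DROP CONSTRAINT no_blocked_urls"
      end
    end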
Otherwise, it seems to be working well... hoping to also get FastCGI
running on the server in the next day or two.
In a day and a half:
# SELECT count(id) FROM rubyurls ;
count
-------
242
I also hope to add a web service for it so that it can be used by IRC bots and such... in time.
-Robby
···
On Tue, 2005-03-15 at 22:59 +0900, Martin Ankerl wrote:
--
/***************************************
* Robby Russell | Owner.Developer.Geek
* PLANET ARGON | www.planetargon.com
* Portland, OR | robby@planetargon.com
* 503.351.4730 | blog.planetargon.com
* PHP, Ruby, and PostgreSQL Development
* http://www.robbyonrails.com/
****************************************/
Kirk Haines wrote:
It looks like Bill Kelly used the same algorithm with qurl.net as I did.
Argh. Never type while reading something else. Thomas Hurst is the name
that I wanted there. Apologies.
Kirk Haines
Just a thing
More fun than otherwise, IMO.
martin
···
Kirk Haines <wyhaines@gmail.com> wrote:
Martin DeMello wrote:
> Kirk Haines <wyhaines@gmail.com> wrote:
>> I suppose the downside is just that this makes it easy for someone to
>> scan the database of URLs?
>
> http://marnanel.org/writing/tinyurl-whacking
Sure, but is that a _bad_ thing, or merely a thing? That is, is it a
problem, or just one facet of providing the service in that way?
If you're blocking stuff, you should probably block other common
redirect sites like makeashorterlink and tinyurl
martin
···
Robby Russell <robby@planetargon.com> wrote:
Indeed, it's displaying an error as I have a constraint on the
PostgreSQL table to block things like 'goatse, tubgurl, and rubyurl' in
the table itself. RubyURL was my latest "I should see how quick I can
I could spend days adding filters. I guess at some point, you just have
to trust the public a little bit. 
Right now it's HTTP only. I had noticed at least one entry for
"file:///etc/passwd"
not that it would be bad... but some people might freak out about it. Heh.
-Robby
···
On Wed, 2005-03-16 at 14:44 +0900, Martin DeMello wrote:
Robby Russell <robby@planetargon.com> wrote:
> Indeed, it's displaying an error as I have a constraint on the
> PostgreSQL table to block things like 'goatse, tubgurl, and rubyurl' in
> the table itself. RubyURL was my latest "I should see how quick I can
If you're blocking stuff, you should probably block other common
redirect sites like makeashorterlink and tinyurl
martin
--
/***************************************
* Robby Russell | Owner.Developer.Geek
* PLANET ARGON | www.planetargon.com
* Portland, OR | robby@planetargon.com
* 503.351.4730 | blog.planetargon.com
* PHP, Ruby, and PostgreSQL Development
* http://www.robbyonrails.com/
****************************************/