RubyGarden Spam

The rubygarden wiki has been over-run with spam links.

220.163.37.233 is one of the offending source IP addresses.

I fixed the home page, and then saw the extent of the crap. Looks like
many personal pages have been altered.

Those with user pages may want to go check their own page to assist with
the clean up.

James

You should create a way to generate images with text
verification. This would eliminate spam.

--dross

···

--- James Britt <jamesUNDERBARb@neurogami.com> wrote:

The rubygarden wiki has been over-run with spam links.

220.163.37.233 is one of the offending source IP addresses.

I fixed the home page, and then saw the extent of the crap. Looks like
many personal pages have been altered.

Those with user pages may want to go check their own page to assist with
the clean up.

James


I've got a list, but it has become obvious that maintaining a list
manually isn't going to work. I'm tempted to require registration and
authentication at this point as much as I hate the thought.

Chad

···

On Tue, 28 Sep 2004 13:29:04 +0900, James Britt <jamesunderbarb@neurogami.com> wrote:

The rubygarden wiki has been over-run with spam links.

220.163.37.233 is one of the offending source IP addresses.

I fixed the home page, and then saw the extent of the crap. Looks like
many personal pages have been altered.

Those with user pages may want to go check their own page to assist with
the clean up.

You should create a way to generate images with text
verification. This would eliminate spam.

I think it would slow them down but it wouldn't eliminate them completely.

Captchas can generally be defeated by programs, and they violate usability
standards in any case unless there's a fallback -- which spammers would
likely be able to use to continue their process.

-austin

···

On Tue, 28 Sep 2004 13:39:59 +0900, David Ross <drossruby@yahoo.com> wrote:

You should create a way to generate images with text
verification. This would eliminate spam.

--
Austin Ziegler * halostatue@gmail.com
               * Alternate: austin@halostatue.ca
: as of this email, I have [ 6 ] Gmail invitations

Chad Fowler wrote:

···

On Tue, 28 Sep 2004 13:29:04 +0900, James Britt <jamesunderbarb@neurogami.com> wrote:

The rubygarden wiki has been over-run with spam links.

220.163.37.233 is one of the offending source IP addresses.

I fixed the home page, and then saw the extent of the crap. Looks like
many personal pages have been altered.

Those with user pages may want to go check their own page to assist with
the clean up.

I've got a list, but it has become obvious that maintaining a list
manually isn't going to work. I'm tempted to require registration and
authentication at this point as much as I hate the thought.

Chad

As much as I like the idea of having authentication, I don't think it would work. Automated scripts or a program would allow them to bypass the authentication system. These attacks are not automatic; they are performed manually by morons.

--dross

I've got a list, but it has become obvious that maintaining a list
manually isn't going to work. I'm tempted to require registration and
authentication at this point as much as I hate the thought.

I'd certainly be against it. I know spam is a bad thing, and indeed my
own wiki has had it from time to time, but requiring authentication /
registration removes a freedom from people that they shouldn't have to
give up, and it might indeed push people away from using it.

Also, there is nothing to stop spammers from setting up a ton of "junk"
accounts to get around it. This has happened a lot on Yahoo Groups, and
the group we are in basically decided that new users (for a period of
a couple of weeks) have to have their posts moderated. This was to
prevent general spam and job solicitations. I can't think of a way to
make that sort of scheme work in a wiki environment, though.

Rob

···

--
Personal responsibility is battling extinction.

There are other, less intrusive, ways of combatting wiki spam. Why
not be tempted by one of those instead?

Gavin

···

On Tuesday, September 28, 2004, 9:53:53 PM, Chad wrote:

On Tue, 28 Sep 2004 13:29:04 +0900, James Britt <jamesunderbarb@neurogami.com> wrote:

The rubygarden wiki has been over-run with spam links.

220.163.37.233 is one of the offending source IP addresses.

I fixed the home page, and then saw the extent of the crap. Looks like
many personal pages have been altered.

Those with user pages may want to go check their own page to assist with
the clean up.

I've got a list, but it has become obvious that maintaining a list
manually isn't going to work. I'm tempted to require registration and
authentication at this point as much as I hate the thought.

Hello,

You should create a way to generate images with text
verification. This would eliminate spam.

The only way to stop wiki spam is to have a dedicated admin. Creativity helps reduce the time burden, but it is a constant endeavor.

A tarpit would be easier to implement than a captcha. In the usemod settings, you use NetAddr::IP to check if the env's Remote Addr is within a known spammer domain. If it is a spammer, set the pages database to a copy. Nightly / weekly / whatever, dump the latest pages directory on top of the tarpit.

There goes one of my points for my presentation :)

The main resource in fighting spammers is time. You want to waste their time, let them think that things are working.

Cheers,

Patrick

···

On Tuesday, September 28, 2004, at 12:39 AM, David Ross wrote:

Yes. Captcha analyzers would work on it, but only to an extent. If you
make it complex enough -- not just squiggly text but mixed, complicated
lines -- it should be able to confuse an AI. Squiggly letters alone are
the sign of a novice ;). I certainly don't use them. If I were to make
one, it would be so confusing that it wouldn't look like words or
anything to an analyzer.

Wiki spam is ridiculous. Of course they are going to keep doing it;
these considerations should have been thought through before the wiki
software was written. Security should be the ultimate goal in any
software, because there are nasty people out there who will exploit it.

--dross

···

--- Austin Ziegler <halostatue@gmail.com> wrote:

On Tue, 28 Sep 2004 13:39:59 +0900, David Ross <drossruby@yahoo.com> wrote:
> You should create a way to generate images with text
> verification. This would eliminate spam.

Captchas can generally be defeated by programs, and they violate
usability standards in any case unless there's a fallback -- which
spammers would likely be able to use to continue their process.

-austin
--
Austin Ziegler * halostatue@gmail.com
               * Alternate: austin@halostatue.ca
: as of this email, I have [ 6 ] Gmail invitations


I disagree. With a little cleverness, this would stop it completely.

Sadly, b/c of the spam, I for one have stopped using Garden like I used to.

T.

···

On Tuesday 28 September 2004 05:05 am, Robert McGovern wrote:

> You should create a way to generate images with text
> verification. This would eliminate spam.

I think it would slow them down but it wouldn't eliminate them completely.

As much as I like the idea of having authentication, I don't think it
would work. Automated scripts or a program would allow them to bypass
the authentication system. These attacks are not automatic; they are
performed manually by morons.

If you think these are being performed manually by morons, why did you
suggest earlier having a captcha-type system?

"You should create a way to generate images with text verification.
This would eliminate spam."

Rob

···

--
Personal responsibility is battling extinction.

How about displaying a trivial line of Ruby code and asking the user to enter the value? Something like:

    To stop spammers, please enter the value of the following

       1.+(2) = | |

Change the + to a - or * randomly, and pick random numbers between 1 and 9.
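
In Ruby that check might look something like this -- just a minimal sketch, with an invented method name and no hooks into the actual wiki code:

def make_challenge
  a  = rand(9) + 1                  # random number from 1 to 9
  b  = rand(9) + 1
  op = ["+", "-", "*"][rand(3)]     # pick the operator at random
  question = "#{a}.#{op}(#{b})"
  answer   = a.send(op, b)
  [question, answer]
end

question, answer = make_challenge
print "To stop spammers, please enter the value of #{question}: "
puts(gets.to_i == answer ? "Thanks, edit accepted." : "Sorry, wrong answer.")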

Cheers

Dave

···

On Sep 28, 2004, at 5:59, Austin Ziegler wrote:

On Tue, 28 Sep 2004 13:39:59 +0900, David Ross <drossruby@yahoo.com> wrote:

You should create a way to generate images with text
verification. This would eliminate spam.

Captchas can generally be defeated by programs, and they violate usability
standards in any case unless there's a fallback -- which spammers would
likely be able to use to continue their process.

Robert McGovern wrote:

You should create a way to generate images with text
verification. This would eliminate spam.

I think it would slow them down but it wouldn't eliminate them completely.

If the spam is entered by a script, then the wiki code should be able to use some simple heuristics to block the most annoying crap.

For example, if the diff from the old page to the new page is greater than some percentage, or if the new page contains X number of links to the same site.

Make this Ruby Quiz #2 :)

Might this cause a problem for legit users once in a while? Sure. But we have that now, with spam clean-up.
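
A rough sketch of those two heuristics in Ruby -- the thresholds, the length-based stand-in for a real diff, and the looks_like_spam? name are all made up, not part of any existing wiki code:

require "uri"

MAX_CHANGE_RATIO = 0.8   # reject edits that replace most of the page
MAX_SAME_SITE    = 5     # reject edits adding many links to one host

def looks_like_spam?(old_text, new_text)
  # Crude stand-in for "diff greater than some percentage":
  # compare the change in length against the old page size.
  if old_text.length > 0
    changed = (new_text.length - old_text.length).abs.to_f
    return true if changed / old_text.length > MAX_CHANGE_RATIO
  end

  # Count how many links in the new text point at the same host.
  counts = Hash.new(0)
  URI.extract(new_text, ["http", "https"]).each do |link|
    host = URI.parse(link).host rescue next
    counts[host] += 1
  end
  counts.values.any? { |n| n > MAX_SAME_SITE }
end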

James Britt

Interesting thought... I wonder if a sort of Ruby Passport service
would get any use and create less hassle. I don't really agree with
having it centralised for the whole world by one company, as Microsoft
are doing, but targeted at a community like Ruby it could be useful.
All the sites such as ruby-forum, ruby-garden and rubyforge could then
identify you, and you'd only have to go through one registration
procedure. Who knows, it could even have uses for things like
distributing rubygems and other Ruby programs...
Just a crazy thought.

  -Daniel

···

On Tue, 2004-09-28 at 14:13, Robert McGovern wrote:

> I've got a list, but it has become obvious that maintaining a list
> manually isn't going to work. I'm tempted to require registration and
> authentication at this point as much as I hate the thought.

I'd certainly be against it. I know spam is a bad thing, and indeed my
own wiki has had it from time to time, but requiring authentication /
registration removes a freedom from people that they shouldn't have to
give up, and it might indeed push people away from using it.

Hello,

The only way to stop wiki spam is to have a dedicated admin. Creativity helps reduce the time burden, but it is a constant endeavor.

A tarpit would be easier to implement than a captcha. In the usemod settings, you use NetAddr::IP to check if the env's Remote Addr is within a known spammer domain. If it is a spammer, set the pages database to a copy. Nightly / weekly / whatever, dump the latest pages directory on top of the tarpit.

I said domain. I meant subnet. You can just put a whole ISP on probation and not allow changes from it to be propagated to the main database.

Cheers,

Patrick

I'm approaching it, again, from a slightly different perspective. My
goal is to make the page seem as if it were entirely a read-only
website to robots, and to return a 403 if they are known bad crawlers.
I don't yet have IP banning, but I have robot exclusion.
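
Not my actual implementation -- just an illustration, in CGI-style Ruby, of what user-agent based exclusion could look like; the pattern list and the serve_page helper are invented:

# Crawlers we refuse outright, plus a generic "looks like a robot" test.
BANNED_AGENTS = [/EmailSiphon/i, /WebCopier/i, /grub-client/i]

def robot?(agent)
  agent.to_s =~ /bot|crawl|spider/i
end

agent = ENV["HTTP_USER_AGENT"]

if BANNED_AGENTS.any? { |pat| agent.to_s =~ pat }
  # Known bad crawler: send a bare 403 and stop.
  print "Status: 403 Forbidden\r\nContent-Type: text/plain\r\n\r\nForbidden.\n"
elsif robot?(agent)
  # serve_page is a stand-in for the wiki's page renderer.
  serve_page(:read_only => true)    # robots never see edit links or forms
else
  serve_page(:read_only => false)   # normal visitors get the editable wiki
end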

-austin

···

On Wed, 29 Sep 2004 08:14:42 +0900, Patrick May <patrick@hexane.org> wrote:

A tarpit would be easier to implement than a captcha. In the usemod
settings, you use NetAddr::IP to check if the env's Remote Addr is
within a known spammer domain. If it is a spammer, set the pages
database to a copy. Nightly / weekly / whatever, dump the latest pages
directory on top of the tarpit.

There goes one of my points for my presentation :)

The main resource in fighting spammers is time. You want to waste
their time, let them think that things are working.

--
Austin Ziegler * halostatue@gmail.com
               * Alternate: austin@halostatue.ca
: as of this email, I have [ 6 ] Gmail invitations

Hello,

Hello,

You should create a way to generate images with text
verification. This would eliminate spam.

The only way to stop wiki spam is to have a dedicated admin. Creativity helps reduce the time burden, but it is a constant endeavor.

A tarpit would be easier to implement than a captcha. In the usemod settings, you use NetAddr::IP to check if the env's Remote Addr is within a known spammer domain. If it is a spammer, set the pages database to a copy. Nightly / weekly / whatever, dump the latest pages directory on top of the tarpit.

I threw together tarpit logic for usemod:

# == Configuration ====================================
use NetAddr::IP;
use vars qw( $TarpitDir $VandalFile );

$DataDir    = "/tmp/mywikidb";    # Main wiki directory
$TarpitDir  = "/tmp/tarpitdb";    # Tarpit (decoy) wiki directory
$VandalFile = "/Users/patsplat/Desktop/usemod10/vandals.txt";

# If the request comes from a known vandal subnet, silently switch the
# wiki over to the tarpit copy of the page database.
open(SOURCE, "< $VandalFile")
    or die "Couldn't open $VandalFile for reading: $!\n";
my $remote_addr = NetAddr::IP->new( $ENV{"REMOTE_ADDR"} );
while (<SOURCE>) {
    chomp;                        # strip the trailing newline
    next unless /\S/;             # skip blank lines
    my $vandal_host = NetAddr::IP->new($_);
    if ( $remote_addr->within($vandal_host) ) {
        $DataDir = $TarpitDir;
        last;
    }
}
close(SOURCE);

Cheers,

Patrick

···

On Tuesday, September 28, 2004, at 07:14 PM, Patrick May wrote:

On Tuesday, September 28, 2004, at 12:39 AM, David Ross wrote:

> > You should create a way to generate images with text
> > verification. This would eliminate spam.
>
> I think it would slow them down but it wouldn't eliminate them completely.

I disagree. With a little cleverness, this would stop it completely.

It all hangs on whether it's bot spam or manual spam. I never believe
in absolutes :)

Rob

···

--
Personal responsibility is battling extinction.

Robert McGovern wrote:

As much as I like the idea of having authentication, I don't think it
would work. Automated scripts or a program would allow them to bypass
the authentication system. These attacks are not automatic; they are
performed manually by morons.
   
If you think these are being performed manually by morons, why did you
suggest earlier having a captcha-type system?

"You should create a way to generate images with text verification.
This would eliminate spam."

Rob

Sorry, I didn't explain well. I mean the sites are targeted manually; most likely they are using automation scripts to do the actual spamming. It depends on how much spam was actually performed as well. It could've been done manually for all I know, and if that's the case nothing will stop the morons. The internet is an insecure place, unfortunately. Bans can be evaded via open proxies, HTTP, HTTPS, SOCKS, etc.

Chad, Britt: would it be possible to just have a simple command that rolls everything back by time? Automation scripts can be halted, but the manual attacks McGovern just brought to mind can never be stopped completely.
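
For what it's worth, a roll-back-by-time script is easy to imagine. The sketch below is hypothetical: it assumes pages live as files under a pages directory and that timestamped backups are kept as "<pagename>.<unix-timestamp>" files, which is not how any particular wiki engine actually stores its revisions:

require "fileutils"

PAGE_DIR = "/var/wiki/pages"
KEEP_DIR = "/var/wiki/keep"
cutoff   = Time.now - (24 * 60 * 60)   # undo everything from the last 24 hours

Dir.glob(File.join(PAGE_DIR, "*")).each do |page|
  next if File.mtime(page) <= cutoff   # untouched since the cutoff; leave it

  name = File.basename(page)
  # Pick the newest backup that was made before the cutoff.
  safe = Dir.glob(File.join(KEEP_DIR, "#{name}.*")).select { |rev|
    stamp = rev.split(".").last.to_i
    stamp > 0 && Time.at(stamp) <= cutoff
  }.max_by { |rev| rev.split(".").last.to_i }

  FileUtils.cp(safe, page) if safe
end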

-dross