Ruby web crawler for https

There is Ruby code that collects all the URLs from your
site and stores them in a file.
I got it from http://snippets.dzone.com/posts/show/1893
As I understand it, the code only works for http connections, using GET
requests.
I want to scan addresses of the form https://site.domain.ru
How do I edit this code?

···

--
Posted via http://www.ruby-forum.com/.

How do I edit this code?

vi crawler.rb

xdg-open How To Ask Questions The Smart Way

···


change this line

        if %r{http://([^/]+)/([^/]+)}i =~ $_

to this

        if %r{https?://([^/]+)/([^/]+)}i =~ $_
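One caveat: if the snippet fetches pages with Net::HTTP (an assumption about the dzone code), widening the regex alone is not enough, because an https URL also needs TLS enabled on the connection. A minimal sketch, with a hypothetical `fetch_page` helper that is not part of the original snippet:

```ruby
require 'net/http'
require 'uri'

# The widened pattern from the fix above: matches http and https URLs.
URL_PATTERN = %r{https?://([^/]+)/([^/]+)}i

# Hypothetical helper showing the extra step https needs:
# Net::HTTP must have use_ssl set before the request is made.
def fetch_page(url)
  uri = URI.parse(url)
  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = (uri.scheme == 'https') # enable TLS for https URLs
  http.get(uri.request_uri).body
end

# The pattern now accepts both schemes:
puts URL_PATTERN =~ 'https://site.domain.ru/some/page' ? 'match' : 'no match'
```

With `use_ssl` left at its default of false, Net::HTTP would try a plain-text connection to port 443 and fail.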

-- Sergey Avseyev

···

On Thu, Aug 4, 2011 at 14:42, Temur Fatkulin <stickz@rambler.ru> wrote:

There is Ruby code that collects all the URLs from your
site and stores them in a file.
I got it from http://snippets.dzone.com/posts/show/1893
As I understand it, the code only works for http connections, using GET
requests.
I want to scan addresses of the form https://site.domain.ru
How do I edit this code?


Simple ETL (Extract Transform Load) Tool for web
https://github.com/alexeypetrushin/wetl

Here's a complete example of how to grab & parse the membrana.ru
site:
https://github.com/alexeypetrushin/wetl/tree/master/examples/membrana

···
