I'm new to programming and have mostly just been reading documentation. For this little project I've read a couple of Perl books and a PHP Cookbook. I picked out some of the recipes and, believe it or not, the results looked terrible on screen. I think I need some help now -
with my limited knowledge it's hard to get this done... I need some working Mechanize examples, because some of the ones out there are outdated:
see the CPAN site for Mechanize examples
I'd like to learn more - through real, working examples - if you have any more...
I'd love to hear from you.
You could be a bit more specific about what exactly you're after... For example, here is a script that logs in to a website:
use strict;
use warnings;
use WWW::Mechanize;

my $mech = WWW::Mechanize->new();
my $url  = "http://www.test.com";

# pre-set a cookie before requesting the page
$mech->cookie_jar->set_cookie( 0, "start", 1, "/", ".test.com" );

# fetch the page, select the login form and fill in the credentials
$mech->get($url);
$mech->form_name("frmLogin");
$mech->set_fields( user => 'test', passwrd => 'test' );

# submit the form and save the resulting page
$mech->click();
$mech->save_content("logged_in.html");
And here is a script that performs a Google search:
use strict;
use warnings;
use 5.10.0;
use WWW::Mechanize;

my $mech   = WWW::Mechanize->new();
my $option = $ARGV[-1];    # maximum number of results to fetch per query

# you may customize your Google search by editing this url (always end it with "q=" though)
my $google = 'http://www.google.co.uk/search?q=';
my @dork   = ( "inurl:dude", "cheese" );

# start the main loop, one iteration for every Google query
for my $query (@dork) {
    my $max = 0;

    # loop until the chosen maximum number of results is reached
    while ( $max <= $option ) {
        $mech->get( $google . $query . "&start=" . $max );

        # print every result link, skipping relative urls and Google's own links
        foreach my $link ( $mech->links() ) {
            my $google_url = $link->url;
            if ( $google_url !~ /^\// && $google_url !~ /google/ ) {
                say $google_url;
            }
        }
        $max += 10;
    }
}
And a simple site crawler that extracts information (HTML comments) from every page:
use strict;
use warnings;
use 5.10.0;
use WWW::Mechanize;

# create the Mechanize object with autocheck switched off,
# so we don't die when a bad/malformed url is requested
my $mech = WWW::Mechanize->new( autocheck => 0 );

my %comments;
my %links;
my $target = "http://google.com";

# store the first target url as not yet checked
$links{$target} = 0;

# initiate the search
my $url = get_url();

# start the main loop
while ( $url ne "" ) {

    # fetch the target url and search the source for any html comments
    $mech->get($url);
    my $res     = $mech->content;
    my @comment = $res =~ /<!--(.*?)-->/gs;

    # store the comments in the 'comments' hash and print them, if any were found
    if (@comment) {
        $comments{$url} = "@comment";
        say "\n$url \n---------------->\n $comments{$url}";
    }

    # loop through all the links on the current page (only urls contained in an html anchor)
    foreach my $link ( $mech->links() ) {
        $link = $link->url();

        # exclude irrelevant stuff such as javascript functions or external links;
        # you might want to add a domain-name check to ensure relevant links aren't excluded
        if ( $link !~ /^(#|mailto:|(f|ht)tp(s)?\:|www\.|javascript:)/ ) {

            # check whether the link has a leading slash so we can build the whole url properly
            $link = $link =~ /^\// ? $target . $link : $target . "/" . $link;

            # store it in our hash of links to be searched, unless it's already present
            $links{$link} = 0 unless exists $links{$link};
        }
    }

    # mark this url as searched and start over
    $links{$url} = 1;
    $url = get_url();
}

# return the next url that hasn't been searched yet;
# when every url has been searched, return an empty string to end the main loop
sub get_url {
    for my $key ( keys %links ) {
        return $key if $links{$key} == 0;
    }
    return "";
}
It really depends on what you're after, but if you want more examples I'd point you to perlmonks.org, where you'll find plenty of material to help you.
Definitely bookmark the WWW::Mechanize module's man page; it's the ultimate resource...
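To give you a starting point for reading that documentation, here is a minimal sketch of the basic page/link handling it covers. The URL and the link-text pattern below are just placeholders I made up; swap in whatever site you're experimenting with:

use strict;
use warnings;
use 5.10.0;
use WWW::Mechanize;

my $mech = WWW::Mechanize->new();

# fetch a page (example.com is only a placeholder url)
$mech->get("http://www.example.com");
say "Title: ", $mech->title();

# list every link found on the page
foreach my $link ( $mech->links() ) {
    printf "%s -> %s\n", $link->text() // '', $link->url();
}

# follow the first link whose text matches a pattern (the pattern is just for illustration)
my $response = $mech->follow_link( text_regex => qr/about/i );
say "Now at: ", $mech->uri() if $response && $response->is_success;

Run it against a site you control and compare the output with the links() and follow_link() sections of the documentation; that's usually the quickest way to see what the module gives you for free.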