I want to detect (on the server side) which requests are from bots. I don't care about malicious bots at this point, just the ones that are playing nice. I've seen a few approaches that mostly involve matching the user agent string against keywords like 'bot'. But that seems awkward, incomplete, and unmaintainable. So does anyone have a more solid approach? If not, do you have any resources you use to keep up to date with all the friendly user agents?
If you're curious: I'm not trying to do anything against any search engine policy. We have a section of the site where a user is randomly presented with one of several slightly different versions of a page. However, if a web crawler is detected, we always give it the same version so that the index stays consistent.
I'm using Java, but I would imagine the approach is similar for any server-side technology.
You said matching the user agent on 'bot' may be awkward, but we've found it to be a pretty good match. Our studies have shown that it covers about 98% of the hits you receive, and we haven't come across any false positives yet either. If you want to raise that to 99.9%, you can include a few other well-known matches such as 'crawler', 'baiduspider', 'ia_archiver', 'curl', etc. We've tested this on our production systems over millions of hits.
Here are a few C# solutions for you:
1) Simple. Fastest when processing a miss, i.e. traffic from non-bots (normal users). Catches 99+% of crawlers.
bool iscrawler = Regex.IsMatch(Request.UserAgent, @"bot|crawler|baiduspider|80legs|ia_archiver|voyager|curl|wget|yahoo! slurp|mediapartners-google", RegexOptions.IgnoreCase);
2) Intermediate. Fastest when processing a hit, i.e. traffic from bots, and still fast for misses. Catches close to 100% of crawlers. Matches 'bot', 'crawler', and 'spider' up front; you can add any other known crawlers to the list.
// Requires System.Collections.Generic.
List<string> Crawlers3 = new List<string>()
{
    "bot","crawler","spider","80legs","baidu","yahoo! slurp","ia_archiver","mediapartners-google",
    "lwp-trivial","nederland.zoek","ahoy","anthill","appie","arale","araneo","ariadne",
    "atn_worldwide","atomz","bjaaland","ukonline","calif","combine","cosmos","cusco",
    "cyberspyder","digger","grabber","downloadexpress","ecollector","ebiness","esculapio",
    "esther","felix ide","hamahakki","kit-fireball","fouineur","freecrawl","desertrealm",
    "gcreep","golem","griffon","gromit","gulliver","gulper","whowhere","havindex","hotwired",
    "htdig","ingrid","informant","inspectorwww","iron33","teoma","ask jeeves","jeeves",
    "image.kapsi.net","kdd-explorer","label-grabber","larbin","linkidator","linkwalker",
    "lockon","marvin","mattie","mediafox","merzscope","nec-meshexplorer","udmsearch","moget",
    "motor","muncher","muninn","muscatferret","mwdsearch","sharp-info-agent","webmechanic",
    "netscoop","newscan-online","objectssearch","orbsearch","packrat","pageboy","parasite",
    "patric","pegasus","phpdig","piltdownman","pimptrain","plumtreewebaccessor","getterrobo-plus",
    "raven","roadrunner","robbie","robocrawl","robofox","webbandit","scooter","search-au",
    "searchprocess","senrigan","shagseeker","site valet","skymob","slurp","snooper","speedy",
    "curl_image_client","suke","www.sygol.com","tach_bw","templeton","titin","topiclink","udmsearch",
    "urlck","valkyrie libwww-perl","verticrawl","victoria","webscout","voyager","crawlpaper",
    "webcatcher","t-h-u-n-d-e-r-s-t-o-n-e","webmoose","pagesinventory","webquest","webreaper",
    "webwalker","winona","occam","robi","fdse","jobo","rhcs","gazz","dwcp","yeti","fido","wlm",
    "wolp","wwwc","xget","legs","curl","webs","wget","sift","cmc"
};
string ua = Request.UserAgent.ToLower();
bool iscrawler = Crawlers3.Exists(x => ua.Contains(x));
3) Paranoid. Pretty fast, but a little slower than options 1 and 2. It's the most accurate, and lets you maintain the lists as you see fit. If you're afraid of future false positives, you can maintain a separate list of names that contain 'bot'. If we get a short match, we log it and check it for a false positive.
// Requires System.Collections.Generic and System.Linq.
// Crawlers that have 'bot' in their user agent:
List<string> Crawlers1 = new List<string>()
{
    "googlebot","bingbot","yandexbot","ahrefsbot","msnbot","linkedinbot","exabot","compspybot",
    "yesupbot","paperlibot","tweetmemebot","semrushbot","gigabot","voilabot","adsbot-google",
    "botlink","alkalinebot","araybot","undrip bot","borg-bot","boxseabot","yodaobot","admedia bot",
    "ezooms.bot","confuzzledbot","coolbot","internet cruiser robot","yolinkbot","diibot","musobot",
    "dragonbot","elfinbot","wikiobot","twitterbot","contextad bot","hambot","iajabot","news bot",
    "irobot","socialradarbot","ko_yappo_robot","skimbot","psbot","rixbot","seznambot","careerbot",
    "simbot","solbot","mail.ru_bot","spiderbot","blekkobot","bitlybot","techbot","void-bot",
    "vwbot_k","diffbot","friendfeedbot","archive.org_bot","woriobot","crystalsemanticsbot","wepbot",
    "spbot","tweetedtimes bot","mj12bot","who.is bot","psbot","robot","jbot","bbot","bot"
};

// Crawlers that don't have 'bot' in their user agent:
List<string> Crawlers2 = new List<string>()
{
    "baiduspider","80legs","baidu","yahoo! slurp","ia_archiver","mediapartners-google","lwp-trivial",
    "nederland.zoek","ahoy","anthill","appie","arale","araneo","ariadne","atn_worldwide","atomz",
    "bjaaland","ukonline","bspider","calif","christcrawler","combine","cosmos","cusco","cyberspyder",
    "cydralspider","digger","grabber","downloadexpress","ecollector","ebiness","esculapio","esther",
    "fastcrawler","felix ide","hamahakki","kit-fireball","fouineur","freecrawl","desertrealm",
    "gammaspider","gcreep","golem","griffon","gromit","gulliver","gulper","whowhere","portalbspider",
    "havindex","hotwired","htdig","ingrid","informant","infospiders","inspectorwww","iron33",
    "jcrawler","teoma","ask jeeves","jeeves","image.kapsi.net","kdd-explorer","label-grabber",
    "larbin","linkidator","linkwalker","lockon","logo_gif_crawler","marvin","mattie","mediafox",
    "merzscope","nec-meshexplorer","mindcrawler","udmsearch","moget","motor","muncher","muninn",
    "muscatferret","mwdsearch","sharp-info-agent","webmechanic","netscoop","newscan-online",
    "objectssearch","orbsearch","packrat","pageboy","parasite","patric","pegasus","perlcrawler",
    "phpdig","piltdownman","pimptrain","pjspider","plumtreewebaccessor","getterrobo-plus","raven",
    "roadrunner","robbie","robocrawl","robofox","webbandit","scooter","search-au","searchprocess",
    "senrigan","shagseeker","site valet","skymob","slcrawler","slurp","snooper","speedy",
    "spider_monkey","spiderline","curl_image_client","suke","www.sygol.com","tach_bw","templeton",
    "titin","topiclink","udmsearch","urlck","valkyrie libwww-perl","verticrawl","victoria",
    "webscout","voyager","crawlpaper","wapspider","webcatcher","t-h-u-n-d-e-r-s-t-o-n-e",
    "webmoose","pagesinventory","webquest","webreaper","webspider","webwalker","winona","occam",
    "robi","fdse","jobo","rhcs","gazz","dwcp","yeti","crawler","fido","wlm","wolp","wwwc","xget",
    "legs","curl","webs","wget","sift","cmc"
};

string ua = Request.UserAgent.ToLower();
string match = null;

if (ua.Contains("bot")) match = Crawlers1.FirstOrDefault(x => ua.Contains(x));
else match = Crawlers2.FirstOrDefault(x => ua.Contains(x));

// Log() is your own logging method; short matches are worth reviewing by hand.
if (match != null && match.Length < 5) Log("Possible new crawler found: ", ua);

bool iscrawler = match != null;
Notes:
It's tempting to just keep adding names to regex option 1, but if you do it will get slower. If you want a more complete list, then LINQ with a lambda is faster.
Make sure the .ToLower() call is outside your LINQ method; remember the method body is a loop, and you would otherwise be lower-casing the string again on every iteration.
Always put the heaviest-hitting bots at the start of the list, so they match sooner.
Put the lists into a static class so that they aren't rebuilt on every page view (see the sketch after these notes).
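To make the last two notes concrete, here is a minimal sketch of option 3 wrapped in a static class. The list contents are abbreviated; fill them in from the full lists above.

using System.Collections.Generic;

public static class CrawlerDetector
{
    // Built once per app domain, not on every page view.
    private static readonly List<string> Crawlers1 =
        new List<string> { "googlebot", "bingbot", /* ...full list above... */ "bot" };
    private static readonly List<string> Crawlers2 =
        new List<string> { "baiduspider", "80legs", /* ...full list above... */ "cmc" };

    public static bool IsCrawler(string userAgent)
    {
        if (string.IsNullOrEmpty(userAgent)) return false;
        string ua = userAgent.ToLower(); // lower-case once, outside the loop
        return ua.Contains("bot")
            ? Crawlers1.Exists(x => ua.Contains(x))
            : Crawlers2.Exists(x => ua.Contains(x));
    }
}

Each page can then simply call bool iscrawler = CrawlerDetector.IsCrawler(Request.UserAgent);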
Honeypots
The only real alternative to these approaches is to create a 'honeypot' link on your site that only a bot will reach. You then log the user agent strings that hit the honeypot page to a database, and use those logged strings to classify crawlers. (A minimal logging sketch follows the pros and cons below.)
Positives:
It will match some unknown crawlers that aren't declaring themselves.
Negatives:
Not all crawlers dig deep enough to hit every link on your site, so they may never reach your honeypot.
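For illustration, a minimal sketch of such a honeypot endpoint as an ASP.NET generic handler. The /trap/ URL, the log file path, and the 404 response are all assumptions, and a real system would write to a database instead of a flat file:

using System;
using System.IO;
using System.Web;

public class TrapHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Record who stumbled into the honeypot for later classification.
        string ua = context.Request.UserAgent ?? "(none)";
        string ip = context.Request.UserHostAddress;
        File.AppendAllText(
            context.Server.MapPath("~/App_Data/honeypot.log"),
            string.Format("{0}\t{1}\t{2}\r\n", DateTime.UtcNow, ip, ua));

        // Look uninteresting to whatever found the link.
        context.Response.StatusCode = 404;
    }

    public bool IsReusable { get { return true; } }
}

You would map the handler to the honeypot URL in web.config. Note that File.AppendAllText isn't safe under heavy concurrent writes, which is another reason to prefer a database.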
You can find a very thorough database of known "good" web crawlers in the robotstxt.org Robots Database. Utilizing this data would be far more effective than just matching 'bot' in the user agent.
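One way to put such a database to work (a sketch, assuming you have exported its user agent strings into a plain text file with one entry per line; the file path is hypothetical):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

public static class KnownGoodBots
{
    // Loaded once; maintain this file from your robots database export.
    private static readonly List<string> Patterns =
        File.ReadAllLines(@"App_Data\good-bots.txt")   // hypothetical path
            .Select(line => line.Trim().ToLower())
            .Where(line => line.Length > 0)
            .ToList();

    public static bool Matches(string userAgent)
    {
        if (string.IsNullOrEmpty(userAgent)) return false;
        string ua = userAgent.ToLower();
        return Patterns.Any(p => ua.Contains(p));
    }
}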
One suggestion is to create an empty anchor on your page that only a bot would follow. Normal users would never see the link, leaving only spiders and bots to follow it. For example, an empty anchor tag pointing to a subfolder would record the GET request in your logs...
Many people use this method while running a HoneyPot to catch malicious bots that aren't following the robots.txt file. I use the empty anchor method in an ASP.NET honeypot solution I wrote to trap and block those creepy crawlers... A sketch of emitting such an anchor follows.
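A minimal sketch of emitting that anchor from an ASP.NET control. The /trap/ path is hypothetical; if the goal is specifically to catch bots that ignore robots.txt, also add a Disallow rule for it so compliant crawlers never request it:

using System.Web.UI;

public class HoneypotLink : Control
{
    protected override void Render(HtmlTextWriter writer)
    {
        // Invisible to human visitors; a crawler that walks every href
        // will still issue a GET for /trap/ and show up in the logs.
        writer.Write("<a href=\"/trap/\" style=\"display:none\"></a>");
    }
}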
Any visitor whose entry page is /robots.txt is probably a bot.
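A sketch of that heuristic in Global.asax. Note this assumes requests for /robots.txt actually pass through the ASP.NET pipeline (by default IIS serves it as a static file), and most bots don't keep session cookies anyway, so this is mostly useful for tagging requests in your own logging:

protected void Session_Start(object sender, EventArgs e)
{
    // Fires on the first request of a session; if that request is for
    // robots.txt, the visitor is very likely a crawler.
    if (string.Equals(Request.AppRelativeCurrentExecutionFilePath,
                      "~/robots.txt", StringComparison.OrdinalIgnoreCase))
    {
        Session["ProbablyBot"] = true;
    }
}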