Scrapy is an application framework written for crawling websites and extracting structured data. It can be used in a wide range of programs, from data mining and information processing to archiving historical data. Although it was originally designed for page scraping (more precisely, web scraping), it can also be used to fetch data returned by APIs (such as Amazon Associates Web Services) or as a general-purpose web crawler. Scrapy is widely used for data mining, monitoring, and automated testing.

Scrapy uses the Twisted asynchronous networking library to handle network communication. The overall architecture is roughly as follows; the main components are:

- Engine (Scrapy): handles the data flow of the whole system and triggers events; it is the core of the framework.
- Scheduler: accepts requests from the engine, pushes them onto a queue, and hands them back when the engine asks for the next one. Think of it as a priority queue of URLs (the addresses of the pages to crawl): it decides which URL to fetch next and filters out duplicate URLs.
- Downloader: downloads page content and hands it back to the spiders. The downloader is built on Twisted, an efficient asynchronous model.
- Spiders: do the actual work of extracting the information you need, the so-called items, from specific pages. A spider can also extract links so that Scrapy goes on to crawl the next page.
- Item Pipeline: processes the items extracted by the spiders. Its main jobs are persisting items, validating them, and discarding unwanted data. Once a page has been parsed by a spider, its items are sent through the pipeline and processed in a defined order.
- Downloader middlewares: sit between the engine and the downloader and mainly process the requests and responses passed between them.
- Spider middlewares: sit between the engine and the spiders and mainly process the spiders' response input and request output.
- Scheduler middlewares: sit between the engine and the scheduler and process the requests and responses sent between them.

The Scrapy run flow is roughly:

1. The engine takes a URL from the scheduler to crawl next.
2. The engine wraps the URL in a Request and passes it to the downloader.
3. The downloader fetches the resource and wraps it in a Response.
4. The spider parses the Response. Items that are parsed out go to the item pipeline for further processing; URLs that are parsed out go back to the scheduler to wait their turn.

Installation on Linux:

pip3 install scrapy

Installation on Windows:

# Scrapy depends on pywin32, pyOpenSSL, Twisted, lxml and zope.interface;
# read the error messages carefully while installing.
# Install wheel
pip3 install wheel -i http://pypi.douban.com/simple --trusted-host pypi.douban.com
# Install this dependency first, otherwise Twisted will not install
pip3 install Incremental -i http://pypi.douban.com/simple --trusted-host pypi.douban.com
# Then install Twisted with pip3; if it still fails, resolve the remaining dependencies
pip3 install Twisted -i http://pypi.douban.com/simple --trusted-host pypi.douban.com
# If that still fails, change into the directory where you saved the wheel and install it directly
pip3 install Twisted-17.1.0-cp35-cp35m-win32.whl
# Install scrapy
pip3 install scrapy -i http://pypi.douban.com/simple --trusted-host pypi.douban.com
# pywin32 download: https://sourceforge.net/projects/pywin32/files/

Check whether pywin32 installed successfully:

C:\Users\Administrator>python
Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:01:18) [MSC v.1900 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import win32api
>>> import win32con
>>> win32api.MessageBox(win32con.NULL, 'Python 你好', '你好', win32con.MB_OK)
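Scrapy itself can be verified the same way: import it in the interpreter and check its version. This quick sanity check is an addition to the original walkthrough; the exact version string depends on what pip installed.

>>> import scrapy
>>> scrapy.__version__        # any version string here, e.g. '1.1.1', means the install worked

The equivalent from the command line is simply: scrapy version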
II. Basic usage

1. Basic commands

# Create a project
scrapy startproject xiaohuar
# Enter the project directory
cd xiaohuar
# Create a spider application inside the project
scrapy genspider xiaohuar xiaohuar.com
# Run a spider by its name (here a spider called chouti)
scrapy crawl chouti --nolog

2. Project structure and a minimal spider

File overview:

- scrapy.cfg: the project's main configuration file; the configuration that actually matters for crawling lives in settings.py.
- items.py: defines the data templates for structured data, much like Django models.
- pipelines.py: data-processing behaviour, typically persisting the structured data.
- settings.py: configuration such as recursion depth, concurrency, download delay and so on.
- spiders/: the spider directory, where you create files and write the crawl rules. By convention a spider file is named after the site's domain.

import scrapy

class XiaoHuarSpider(scrapy.spiders.Spider):
    name = "xiaohuar"                            # spider name (required)
    allowed_domains = ["xiaohuar.com"]           # allowed domains
    start_urls = [
        "http://www.xiaohuar.com/hua/",          # start URL
    ]

    def parse(self, response):
        # callback invoked once the start URL has been fetched
        pass

(file: 爬虫1.py)

3、Locating tags with selectors

# -*- coding: utf-8 -*-
import scrapy
import sys
import io
from scrapy.http import Request
from scrapy.selector import Selector, HtmlXPathSelector
from ..items import ChoutiItem

sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='gb18030')

class ChoutiSpider(scrapy.Spider):
    name = "chouti"
    allowed_domains = ["chouti.com"]
    start_urls = ['http://dig.chouti.com/']
    visited_urls = set()

    # def start_requests(self):
    #     for url in self.start_urls:
    #         yield Request(url, callback=self.parse)

    def parse(self, response):
        # content = str(response.body, encoding='utf-8')

        # Find every <a> tag in the document
        # hxs = Selector(response=response).xpath('//a')   # list of selector objects
        # for i in hxs:
        #     print(i)                                      # selector object

        # Convert selector objects to strings
        # hxs = Selector(response=response).xpath('//div[@id="content-list"]/div[@class="item"]').extract()
        # hxs = Selector(response=response).xpath('//div[@id="content-list"]/div[@class="item"]')
        # for obj in hxs:
        #     a = obj.xpath('.//a[@class="show-content"]/text()').extract_first()
        #     print(a.strip())

        # Selector cheat sheet:
        #   //                    descendants of the document root
        #   .//                   descendants of the current node
        #   /                     direct children
        #   /div                  <div> children
        #   /div[@id="i1"]        <div> children whose id is "i1"
        #   obj.extract()         convert every object in the list to a string -> list
        #   obj.extract_first()   convert and return only the first element of the list
        #   //div/text()          text content of a tag

        # Page numbers on the current page
        # hxs = Selector(response=response).xpath('//div[@id="dig_lcpage"]//a/text()')
        # hxs = Selector(response=response).xpath('//div[@id="dig_lcpage"]//a/@href').extract()
        # hxs = Selector(response=response).xpath('//a[starts-with(@href, "/all/hot/recent/")]/@href').extract()

        hxs1 = Selector(response=response).xpath('//div[@id="content-list"]/div[@class="item"]')
        for obj in hxs1:
            title = obj.xpath('.//a[@class="show-content"]/text()').extract_first().strip()
            href = obj.xpath('.//a[@class="show-content"]/@href').extract_first().strip()
            item_obj = ChoutiItem(title=title, href=href)
            # hand the item over to the pipeline
            yield item_obj

        hxs2 = Selector(response=response).xpath('//a[re:test(@href, "/all/hot/recent/\d+")]/@href').extract()
        for url in hxs2:
            md5_url = self.md5(url)
            if md5_url in self.visited_urls:
                pass
            else:
                self.visited_urls.add(md5_url)
                url = "http://dig.chouti.com%s" % url
                # push the new URL to the scheduler
                yield Request(url=url, callback=self.parse)

        #   a/@href                                             get an attribute
        #   //a[starts-with(@href, "/all/hot/recent/")]/@href   attribute starting with ...
        #   //a[re:test(@href, "/all/hot/recent/\d+")]          regular expression match
        #   yield Request(url=url, callback=self.parse)         push a new URL to the scheduler
        # Override start_requests to control which requests are issued first
        # def show(self, response):
        #     print(response.text)

    def md5(self, url):
        import hashlib
        obj = hashlib.md5()
        obj.update(bytes(url, encoding='utf-8'))
        return obj.hexdigest()
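The spider above imports ChoutiItem from the project's items.py, which is not shown at this point in the post. A minimal sketch of what it needs to contain, using the two field names the spider actually passes (title and href):

import scrapy

class ChoutiItem(scrapy.Item):
    title = scrapy.Field()
    href = scrapy.Field()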
3. A quick first spider

import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request

class DigSpider(scrapy.Spider):
    # the spider's name; the crawl is started with this name from the command line
    name = "dig"

    # allowed domains
    allowed_domains = ["chouti.com"]

    # start URLs
    start_urls = [
        'http://dig.chouti.com/',
    ]

    has_request_set = {}

    def parse(self, response):
        print(response.url)

        hxs = HtmlXPathSelector(response)
        page_list = hxs.select('//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\d+")]/@href').extract()
        for page in page_list:
            page_url = 'http://dig.chouti.com%s' % page
            key = self.md5(page_url)
            if key in self.has_request_set:
                pass
            else:
                self.has_request_set[key] = page_url
                obj = Request(url=page_url, method='GET', callback=self.parse)
                yield obj

    @staticmethod
    def md5(val):
        import hashlib
        ha = hashlib.md5()
        ha.update(bytes(val, encoding='utf-8'))
        key = ha.hexdigest()
        return key

To run this spider, open a terminal in the project directory and execute:

scrapy crawl dig --nolog

The important points in the code above:

- Request is the class that wraps a user request; yielding one from a callback means "continue crawling this URL".
- HtmlXpathSelector structures the HTML and provides the selector functionality.

4. Selectors

#!/usr/bin/env python
# -*- coding:utf-8 -*-
from scrapy.selector import Selector, HtmlXPathSelector
from scrapy.http import HtmlResponse

html = """<!DOCTYPE html>
<html>
    <head lang="en">
        <meta charset="UTF-8">
        <title></title>
    </head>
    <body>
        <ul>
            <li class="item-"><a id='i1' href="link.html">first item</a></li>
            <li class="item-0"><a id='i2' href="llink.html">first item</a></li>
            <li class="item-1"><a href="llink2.html">second item<span>vv</span></a></li>
        </ul>
        <div><a href="llink2.html">second item</a></div>
    </body>
</html>
"""
response = HtmlResponse(url='http://example.com', body=html, encoding='utf-8')
# hxs = HtmlXPathSelector(response)
# print(hxs)
# hxs = Selector(response=response).xpath('//a')                              # all <a> tags
# print(hxs)
# hxs = Selector(response=response).xpath('//a[2]')                           # the second <a> under each parent
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@id]')                         # <a> tags that have an id attribute
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@id="i1"]')                    # <a> tags whose id equals "i1"
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@href="link.html"][@id="i1"]') # both conditions
# print(hxs)
# hxs = Selector(response=response).xpath('//a[contains(@href, "link")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[starts-with(@href, "link")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]')        # regular expression
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/text()').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/@href').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('/html/body/ul/li/a/@href').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('//body/ul/li/a/@href').extract_first()
# print(hxs)

# ul_list = Selector(response=response).xpath('//body/ul/li')
# for item in ul_list:
#     v = item.xpath('./a/span')
#     # or
#     # v = item.xpath('a/span')
#     # or
#     # v = item.xpath('*/a/span')
#     print(v)
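Recent Scrapy versions (roughly 1.8 and later) expose the same selector engine directly on the response object, and .get()/.getall() are the newer names for extract_first()/extract(). This short sketch is an addition to the original post and assumes the response object built above:

# assuming `response` is the HtmlResponse constructed in the snippet above
for li in response.xpath('//body/ul/li'):
    print(li.xpath('./a/@href').get())          # same result as extract_first()
print(response.xpath('//a/text()').getall())    # same result as extract()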
Example: log in to Chouti automatically and upvote posts

# -*- coding: utf-8 -*-
import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request
from scrapy.http.cookies import CookieJar
from scrapy import FormRequest

class ChouTiSpider(scrapy.Spider):
    # the spider's name; the crawl is started with this name from the command line
    name = "chouti"
    # allowed domains
    allowed_domains = ["chouti.com"]

    cookie_dict = {}
    has_request_set = {}

    def start_requests(self):
        url = 'http://dig.chouti.com/'
        # return [Request(url=url, callback=self.login)]
        yield Request(url=url, callback=self.login)

    def login(self, response):
        cookie_jar = CookieJar()
        cookie_jar.extract_cookies(response, response.request)
        for k, v in cookie_jar._cookies.items():
            for i, j in v.items():
                for m, n in j.items():
                    self.cookie_dict[m] = n.value

        req = Request(
            url='http://dig.chouti.com/login',
            method='POST',
            headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'},
            body='phone=8615131255089&password=pppppppp&oneMonth=1',
            cookies=self.cookie_dict,
            callback=self.check_login
        )
        yield req

    def check_login(self, response):
        req = Request(
            url='http://dig.chouti.com/',
            method='GET',
            callback=self.show,
            cookies=self.cookie_dict,
            dont_filter=True
        )
        yield req

    def show(self, response):
        # print(response)
        hxs = HtmlXPathSelector(response)
        news_list = hxs.select('//div[@id="content-list"]/div[@class="item"]')
        for new in news_list:
            # temp = new.xpath('div/div[@class="part2"]/@share-linkid').extract()
            link_id = new.xpath('*/div[@class="part2"]/@share-linkid').extract_first()
            yield Request(
                url='http://dig.chouti.com/link/vote?linksId=%s' % (link_id,),
                method='POST',
                cookies=self.cookie_dict,
                callback=self.do_favor
            )

        page_list = hxs.select('//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\d+")]/@href').extract()
        for page in page_list:
            page_url = 'http://dig.chouti.com%s' % page
            import hashlib
            hash = hashlib.md5()
            hash.update(bytes(page_url, encoding='utf-8'))
            key = hash.hexdigest()
            if key in self.has_request_set:
                pass
            else:
                self.has_request_set[key] = page_url
                yield Request(
                    url=page_url,
                    method='GET',
                    callback=self.show
                )

    def do_favor(self, response):
        print(response.text)

Note: set DEPTH_LIMIT = 1 in settings.py to control how many levels of "recursion" are followed.

5. Structured processing with items and pipelines

The examples so far are simple enough that everything is handled directly in the parse method. If you want to collect and process richer data, use Scrapy items to structure it and hand everything over to the pipelines in a uniform way.

spiders/xiaohuar.py

import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request
from scrapy.http.cookies import CookieJar
from scrapy import FormRequest

class XiaoHuarSpider(scrapy.Spider):
    # the spider's name; the crawl is started with this name from the command line
    name = "xiaohuar"
    # allowed domains
    allowed_domains = ["xiaohuar.com"]

    start_urls = [
        "http://www.xiaohuar.com/list-1-1.html",
    ]
    # custom_settings = {
    #     'ITEM_PIPELINES': {
    #         'spider1.pipelines.JsonPipeline': 100
    #     }
    # }
    has_request_set = {}

    def parse(self, response):
        # Analyse the page:
        # 1. find the content that matches the rules (the photos) and save it
        # 2. find all <a> tags, visit them, and keep going level by level
        hxs = HtmlXPathSelector(response)

        items = hxs.select('//div[@class="item_list infinite_scroll"]/div')
        for item in items:
            src = item.select('.//div[@class="img"]/a/img/@src').extract_first()
            name = item.select('.//div[@class="img"]/span/text()').extract_first()
            school = item.select('.//div[@class="img"]/div[@class="btns"]/a/text()').extract_first()

            url = "http://www.xiaohuar.com%s" % src
            from ..items import XiaoHuarItem
            obj = XiaoHuarItem(name=name, school=school, url=url)
            yield obj

        urls = hxs.select('//a[re:test(@href, "http://www.xiaohuar.com/list-1-\d+.html")]/@href')
        for url in urls:
            key = self.md5(url)
            if key in self.has_request_set:
                pass
            else:
                self.has_request_set[key] = url
                req = Request(url=url, method='GET', callback=self.parse)
                yield req

    @staticmethod
    def md5(val):
        import hashlib
        ha = hashlib.md5()
        ha.update(bytes(val, encoding='utf-8'))
        key = ha.hexdigest()
        return key

items.py

import scrapy

class XiaoHuarItem(scrapy.Item):
    name = scrapy.Field()
    school = scrapy.Field()
    url = scrapy.Field()

pipelines.py

import json
import os
import requests

class JsonPipeline(object):
    def __init__(self):
        self.file = open('xiaohua.txt', 'w')

    def process_item(self, item, spider):
        v = json.dumps(dict(item), ensure_ascii=False)
        self.file.write(v)
        self.file.write('\n')
        self.file.flush()
        return item

class FilePipeline(object):
    def __init__(self):
        if not os.path.exists('imgs'):
            os.makedirs('imgs')

    def process_item(self, item, spider):
        response = requests.get(item['url'], stream=True)
        file_name = '%s_%s.jpg' % (item['name'], item['school'])
        with open(os.path.join('imgs', file_name), mode='wb') as f:
            f.write(response.content)
        return item

settings.py

ITEM_PIPELINES = {
    'spider1.pipelines.JsonPipeline': 100,
    'spider1.pipelines.FilePipeline': 300,
}
# The integer assigned to each pipeline determines the order in which they run:
# items pass through the pipelines from the lowest number to the highest.
# By convention these numbers are defined in the 0-1000 range.
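The commented-out custom_settings block in the spider above hints at the alternative to a global setting: a spider can override ITEM_PIPELINES (or any other setting) just for itself through the custom_settings class attribute. A minimal sketch under the same project layout:

import scrapy

class XiaoHuarSpider(scrapy.Spider):
    name = "xiaohuar"
    # per-spider override: only this spider writes JSON, whatever the global ITEM_PIPELINES says
    custom_settings = {
        'ITEM_PIPELINES': {
            'spider1.pipelines.JsonPipeline': 100,
        }
    }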
A custom pipeline:

from scrapy.exceptions import DropItem

class CustomPipeline(object):
    def __init__(self, v):
        self.value = v

    def process_item(self, item, spider):
        # do the work and persist the item
        # returning the item lets the remaining pipelines keep processing it
        return item
        # raising DropItem discards the item so later pipelines never see it
        # raise DropItem()

    @classmethod
    def from_crawler(cls, crawler):
        """
        Called at start-up to create the pipeline object.
        :param crawler:
        :return:
        """
        val = crawler.settings.getint('MMMM')
        return cls(val)

    def open_spider(self, spider):
        """
        Called when the spider starts running.
        :param spider:
        :return:
        """
        print('000000')

    def close_spider(self, spider):
        """
        Called when the spider is closed.
        :param spider:
        :return:
        """
        print('111111')

6. Middlewares

Spider middleware:

class SpiderMiddleware(object):

    def process_spider_input(self, response, spider):
        """
        Called once the download has finished, before the response is handed to parse().
        :param response:
        :param spider:
        :return:
        """
        pass

    def process_spider_output(self, response, result, spider):
        """
        Called when the spider has finished processing and returns its results.
        :param response:
        :param result:
        :param spider:
        :return: must return an iterable containing Request and/or Item objects
        """
        return result

    def process_spider_exception(self, response, exception, spider):
        """
        Called when an exception is raised.
        :param response:
        :param exception:
        :param spider:
        :return: None to let the remaining middlewares handle the exception, or an
                 iterable containing Response/Item objects that is handed to the
                 scheduler or the pipelines
        """
        return None

    def process_start_requests(self, start_requests, spider):
        """
        Called when the spider starts.
        :param start_requests:
        :param spider:
        :return: an iterable containing Request objects
        """
        return start_requests

Downloader middleware:

class DownMiddleware1(object):
    def process_request(self, request, spider):
        """
        Called for every request that needs to be downloaded; the request passes through
        the process_request of every downloader middleware.
        :param request:
        :param spider:
        :return: None: continue with the remaining middlewares and download the request
                 Response object: stop calling process_request and start calling process_response
                 Request object: stop the middleware chain and send the request back to the scheduler
                 raise IgnoreRequest: stop process_request and start process_exception
        """
        pass

    def process_response(self, request, response, spider):
        """
        Called on the way back, once the download has finished.
        :param request:
        :param response:
        :param spider:
        :return: Response object: handed on to the other middlewares' process_response
                 Request object: stop the middleware chain; the request is rescheduled for download
                 raise IgnoreRequest: Request.errback is called
        """
        print('response1')
        return response

    def process_exception(self, request, exception, spider):
        """
        Called when the download handler or a process_request (downloader middleware) raises an exception.
        :param request:
        :param exception:
        :param spider:
        :return: None: let the remaining middlewares handle the exception
                 Response object: stop calling the remaining process_exception methods
                 Request object: stop the middleware chain; the request is rescheduled for download
        """
        return None

7. Custom commands

Create a directory (any name, e.g. commands) at the same level as spiders/ and put a crawlall.py file in it; the file name becomes the command name.

crawlall.py

from scrapy.commands import ScrapyCommand
from scrapy.utils.project import get_project_settings

class Command(ScrapyCommand):
    requires_project = True

    def syntax(self):
        return '[options]'

    def short_desc(self):
        return 'Runs all of the spiders'

    def run(self, args, opts):
        spider_list = self.crawler_process.spiders.list()
        for name in spider_list:
            self.crawler_process.crawl(name, **opts.__dict__)
        self.crawler_process.start()

Add the following to settings.py:

COMMANDS_MODULE = '<project name>.<directory name>'

Then run the new command from the project directory:

scrapy crawlall

8. Custom extensions

A custom extension uses signals to register an action at a chosen point in the crawl:

from scrapy import signals

class MyExtension(object):
    def __init__(self, value):
        self.value = value

    @classmethod
    def from_crawler(cls, crawler):
        val = crawler.settings.getint('MMMM')
        ext = cls(val)

        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)

        return ext

    def spider_opened(self, spider):
        print('open')

    def spider_closed(self, spider):
        print('close')
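None of the classes in sections 6 and 8 take effect until they are registered in settings.py. The setting names below are the standard Scrapy ones; the module paths reuse the step8_king layout that appears in the settings walkthrough later in this post, and MMMM is the custom setting both from_crawler examples read. A sketch:

SPIDER_MIDDLEWARES = {
    'step8_king.middlewares.SpiderMiddleware': 543,
}
DOWNLOADER_MIDDLEWARES = {
    'step8_king.middlewares.DownMiddleware1': 100,
}
EXTENSIONS = {
    'step8_king.extensions.MyExtension': 500,
}
MMMM = 10   # the custom value read by from_crawler in the pipeline/extension examples above

The priorities follow the same 0-1000 ordering convention as ITEM_PIPELINES.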
9. Avoiding duplicate requests

By default Scrapy uses scrapy.dupefilter.RFPDupeFilter for de-duplication. The related settings are:

DUPEFILTER_CLASS = 'scrapy.dupefilter.RFPDupeFilter'
DUPEFILTER_DEBUG = False
JOBDIR = "/root/"    # directory for the visited-request log; the final path is /root/requests.seen

A custom URL de-duplication filter:

class RepeatUrl:
    def __init__(self):
        self.visited_url = set()

    @classmethod
    def from_settings(cls, settings):
        """
        Called at initialisation time.
        :param settings:
        :return:
        """
        return cls()

    def request_seen(self, request):
        """
        Check whether the current request has already been visited.
        :param request:
        :return: True if it has been visited, False if not
        """
        if request.url in self.visited_url:
            return True
        self.visited_url.add(request.url)
        return False

    def open(self):
        """
        Called when crawling starts.
        :return:
        """
        print('open replication')

    def close(self, reason):
        """
        Called when the crawl finishes.
        :param reason:
        :return:
        """
        print('close replication')

    def log(self, request, spider):
        """
        Log a duplicate request.
        :param request:
        :param spider:
        :return:
        """
        print('repeat', request.url)
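For comparison, the default RFPDupeFilter does not compare raw URL strings the way RepeatUrl does: it hashes a canonical form of the whole request (method, canonicalised URL, body). A rough sketch of that idea using the request_fingerprint helper from the Scrapy 1.x line this post targets (the helper exists; the surrounding function is just an illustration):

from scrapy.http import Request
from scrapy.utils.request import request_fingerprint

seen = set()

def is_duplicate(request):
    # the fingerprint covers method + canonicalised URL + body, so URLs that differ
    # only in query-parameter order normally collapse to the same entry
    fp = request_fingerprint(request)
    if fp in seen:
        return True
    seen.add(fp)
    return False

print(is_duplicate(Request('http://example.com/?a=1&b=2')))   # False
print(is_duplicate(Request('http://example.com/?b=2&a=1')))   # True (same fingerprint)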
10. Other settings

Notes on the parameters in settings.py:

# -*- coding: utf-8 -*-

# Scrapy settings for step8_king project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

# 1. Bot name
BOT_NAME = 'step8_king'

# 2. Spider module paths
SPIDER_MODULES = ['step8_king.spiders']
NEWSPIDER_MODULE = 'step8_king.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# 3. Client User-Agent request header
# USER_AGENT = 'step8_king (+http://www.yourdomain.com)'

# Obey robots.txt rules
# 4. robots.txt handling (set to False to ignore it)
# ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# 5. Number of concurrent requests
# CONCURRENT_REQUESTS = 4

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# 6. Download delay in seconds
# DOWNLOAD_DELAY = 2

# The download delay setting will honor only one of:
# 7. Concurrency per domain; the download delay is then applied per domain
# CONCURRENT_REQUESTS_PER_DOMAIN = 2
# Concurrency per IP; if set, CONCURRENT_REQUESTS_PER_DOMAIN is ignored and the delay is applied per IP
# CONCURRENT_REQUESTS_PER_IP = 3

# Disable cookies (enabled by default)
# 8. Whether cookies are enabled (handled through the cookiejar)
# COOKIES_ENABLED = True
# COOKIES_DEBUG = True

# Disable Telnet Console (enabled by default)
# 9. The Telnet console lets you inspect and control the running crawler:
#    telnet <ip> <port>, then issue commands
# TELNETCONSOLE_ENABLED = True
# TELNETCONSOLE_HOST = '127.0.0.1'
# TELNETCONSOLE_PORT = [6023,]

# 10. Default request headers
# Override the default request headers:
# DEFAULT_REQUEST_HEADERS = {
#     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#     'Accept-Language': 'en',
# }

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
# 11. Item pipelines
# ITEM_PIPELINES = {
#    'step8_king.pipelines.JsonPipeline': 700,
#    'step8_king.pipelines.FilePipeline': 500,
# }

# 12. Custom extensions, invoked through signals
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#     # 'step8_king.extensions.MyExtension': 500,
# }

# 13. Maximum crawl depth; the current depth can be read from meta; 0 means unlimited
# DEPTH_LIMIT = 3

# 14. Crawl order: 0 means depth-first, LIFO (default); 1 means breadth-first, FIFO

# last in, first out: depth-first
# DEPTH_PRIORITY = 0
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleLifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.LifoMemoryQueue'
# first in, first out: breadth-first
# DEPTH_PRIORITY = 1
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue'

# 15. Scheduler queue
# SCHEDULER = 'scrapy.core.scheduler.Scheduler'
# from scrapy.core.scheduler import Scheduler

# 16. URL de-duplication
# DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html

"""
17. The auto-throttle algorithm
    from scrapy.contrib.throttle import AutoThrottle
    How the automatic speed limit is worked out:
    1. take the minimum delay from DOWNLOAD_DELAY
    2. take the maximum delay from AUTOTHROTTLE_MAX_DELAY
    3. set the initial download delay from AUTOTHROTTLE_START_DELAY
    4. when a request finishes downloading, take its "connection" latency, i.e. the time
       between starting the request and receiving the response headers
    5. AUTOTHROTTLE_TARGET_CONCURRENCY is then used in the calculation:

    target_delay = latency / self.target_concurrency
    new_delay = (slot.delay + target_delay) / 2.0       # slot.delay is the previous delay
    new_delay = max(target_delay, new_delay)
    new_delay = min(max(self.mindelay, new_delay), self.maxdelay)
    slot.delay = new_delay
"""

# Enable auto-throttling
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# AUTOTHROTTLE_MAX_DELAY = 10
# The average number of requests Scrapy should be sending in parallel to each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = True
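# A worked example of the delay formula above (the numbers are illustrative and not
# from the original post): with latency = 0.5s, AUTOTHROTTLE_TARGET_CONCURRENCY = 2.0
# and a previous slot.delay of 1.0s:
#     target_delay = 0.5 / 2.0 = 0.25
#     new_delay    = (1.0 + 0.25) / 2.0 = 0.625
#     new_delay    = max(0.25, 0.625) = 0.625
# and the result is finally clamped between the minimum delay (DOWNLOAD_DELAY) and
# AUTOTHROTTLE_MAX_DELAY. Each response therefore pulls the per-slot delay halfway
# toward latency / target_concurrency.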
# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings

"""
18. HTTP caching
    Purpose: cache requests/responses that have already been sent so they can be reused later.
    from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware
    from scrapy.extensions.httpcache import DummyPolicy
    from scrapy.extensions.httpcache import FilesystemCacheStorage
"""
# Whether the cache is enabled
# HTTPCACHE_ENABLED = True

# Cache policy: cache every request; the next identical request is served from the cache
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
# Cache policy: cache according to the HTTP response headers (Cache-Control, Last-Modified, etc.)
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"

# Cache expiration time
# HTTPCACHE_EXPIRATION_SECS = 0

# Cache directory
# HTTPCACHE_DIR = 'httpcache'

# HTTP status codes that are never cached
# HTTPCACHE_IGNORE_HTTP_CODES = []

# Cache storage backend
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

"""
19. Proxies; these need to be set in the environment variables
    from scrapy.contrib.downloadermiddleware.httpproxy import HttpProxyMiddleware

    Option 1: use the default middleware and os.environ
        os.environ
        {
            http_proxy:  http://root:woshiniba@192.168.11.11:9999/
            https_proxy: http://192.168.11.11:9999/
        }
    Option 2: use a custom downloader middleware
    (the snippet below also needs random, base64 and six to be imported)

    def to_bytes(text, encoding=None, errors='strict'):
        if isinstance(text, bytes):
            return text
        if not isinstance(text, six.string_types):
            raise TypeError('to_bytes must receive a unicode, str or bytes '
                            'object, got %s' % type(text).__name__)
        if encoding is None:
            encoding = 'utf-8'
        return text.encode(encoding, errors)

    class ProxyMiddleware(object):
        def process_request(self, request, spider):
            PROXIES = [
                {'ip_port': '111.11.228.75:80', 'user_pass': ''},
                {'ip_port': '120.198.243.22:80', 'user_pass': ''},
                {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
                {'ip_port': '101.71.27.120:80', 'user_pass': ''},
                {'ip_port': '122.96.59.104:80', 'user_pass': ''},
                {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
            ]
            proxy = random.choice(PROXIES)
            if proxy['user_pass'] is not None:
                request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
                encoded_user_pass = base64.encodestring(to_bytes(proxy['user_pass']))
                request.headers['Proxy-Authorization'] = to_bytes('Basic ' + encoded_user_pass)
                print("**************ProxyMiddleware have pass************" + proxy['ip_port'])
            else:
                print("**************ProxyMiddleware no pass************" + proxy['ip_port'])
                request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])

    DOWNLOADER_MIDDLEWARES = {
        'step8_king.middlewares.ProxyMiddleware': 500,
    }
"""

"""
20. HTTPS access
    There are two cases when crawling HTTPS sites:
    1. the target site uses a trusted certificate (supported by default)
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"
    2. the target site requires a custom (e.g. client) certificate
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "step8_king.https.MySSLFactory"

        # https.py
        from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory
        from twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate)

        class MySSLFactory(ScrapyClientContextFactory):
            def getCertificateOptions(self):
                from OpenSSL import crypto
                v1 = crypto.load_privatekey(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.key.unsecure', mode='r').read())
                v2 = crypto.load_certificate(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.pem', mode='r').read())
                return CertificateOptions(
                    privateKey=v1,                     # pKey object
                    certificate=v2,                    # X509 object
                    verify=False,
                    method=getattr(self, 'method', getattr(self, '_ssl_method', None))
                )
    Related classes:
        scrapy.core.downloader.handlers.http.HttpDownloadHandler
        scrapy.core.downloader.webclient.ScrapyHTTPClientFactory
        scrapy.core.downloader.contextfactory.ScrapyClientContextFactory
    Related settings:
        DOWNLOADER_HTTPCLIENTFACTORY
        DOWNLOADER_CLIENTCONTEXTFACTORY
"""

"""
21. Spider middleware
    class SpiderMiddleware(object):

        def process_spider_input(self, response, spider):
            '''
            Called once the download has finished, before the response is handed to parse().
            :param response:
            :param spider:
            :return:
            '''
            pass

        def process_spider_output(self, response, result, spider):
            '''
            Called when the spider has finished processing and returns its results.
            :param response:
            :param result:
            :param spider:
            :return: must return an iterable containing Request and/or Item objects
            '''
            return result

        def process_spider_exception(self, response, exception, spider):
            '''
            Called when an exception is raised.
            :param response:
            :param exception:
            :param spider:
            :return: None to let the remaining middlewares handle the exception, or an
                     iterable containing Response/Item objects that is handed to the
                     scheduler or the pipelines
            '''
            return None

        def process_start_requests(self, start_requests, spider):
            '''
            Called when the spider starts.
            :param start_requests:
            :param spider:
            :return: an iterable containing Request objects
            '''
            return start_requests

    Built-in spider middlewares:
        'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,
        'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,
        'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,
        'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,
        'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,
"""
# from scrapy.contrib.spidermiddleware.referer import RefererMiddleware
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
   # 'step8_king.middlewares.SpiderMiddleware': 543,
}
"""
22. Downloader middleware
    class DownMiddleware1(object):
        def process_request(self, request, spider):
            '''
            Called for every request that needs to be downloaded; the request passes
            through the process_request of every downloader middleware.
            :param request:
            :param spider:
            :return: None: continue with the remaining middlewares and download the request
                     Response object: stop calling process_request and start calling process_response
                     Request object: stop the middleware chain and send the request back to the scheduler
                     raise IgnoreRequest: stop process_request and start process_exception
            '''
            pass

        def process_response(self, request, response, spider):
            '''
            Called on the way back, once the download has finished.
            :param request:
            :param response:
            :param spider:
            :return: Response object: handed on to the other middlewares' process_response
                     Request object: stop the middleware chain; the request is rescheduled for download
                     raise IgnoreRequest: Request.errback is called
            '''
            print('response1')
            return response

        def process_exception(self, request, exception, spider):
            '''
            Called when the download handler or a process_request (downloader middleware)
            raises an exception.
            :param request:
            :param exception:
            :param spider:
            :return: None: let the remaining middlewares handle the exception
                     Response object: stop calling the remaining process_exception methods
                     Request object: stop the middleware chain; the request is rescheduled for download
            '''
            return None

    Default downloader middlewares:
        {
            'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
            'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
            'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,
            'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
            'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
            'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
            'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,
            'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,
            'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
            'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
            'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
            'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,
            'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
            'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
        }
"""
# from scrapy.contrib.downloadermiddleware.httpauth import HttpAuthMiddleware
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#    'step8_king.middlewares.DownMiddleware1': 100,
#    'step8_king.middlewares.DownMiddleware2': 500,
# }
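One practical point that follows from the default list above: to switch a built-in downloader middleware off you do not remove it from Scrapy, you map it to None in your own DOWNLOADER_MIDDLEWARES, and your custom entries are merged with the defaults by priority. A short sketch (the DownMiddleware1 path is the one used in this post; picking UserAgentMiddleware is just an example):

DOWNLOADER_MIDDLEWARES = {
    # disable a built-in middleware
    'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None,
    # and slot a custom one in at roughly the same position
    'step8_king.middlewares.DownMiddleware1': 400,
}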
11. TinyScrapy

#!/usr/bin/env python
# -*- coding:utf-8 -*-
import types
from twisted.internet import defer
from twisted.web.client import getPage
from twisted.internet import reactor

class Request(object):
    def __init__(self, url, callback):
        self.url = url
        self.callback = callback
        self.priority = 0

class HttpResponse(object):
    def __init__(self, content, request):
        self.content = content
        self.request = request

class ChouTiSpider(object):

    def start_requests(self):
        url_list = ['http://www.cnblogs.com/', 'http://www.bing.com']
        for url in url_list:
            yield Request(url=url, callback=self.parse)

    def parse(self, response):
        print(response.request.url)
        # yield Request(url="http://www.baidu.com", callback=self.parse)

from queue import Queue
Q = Queue()

class CallLaterOnce(object):
    def __init__(self, func, *a, **kw):
        self._func = func
        self._a = a
        self._kw = kw
        self._call = None

    def schedule(self, delay=0):
        if self._call is None:
            self._call = reactor.callLater(delay, self)

    def cancel(self):
        if self._call:
            self._call.cancel()

    def __call__(self):
        self._call = None
        return self._func(*self._a, **self._kw)

class Engine(object):
    def __init__(self):
        self.nextcall = None
        self.crawlling = []
        self.max = 5
        self._closewait = None

    def get_response(self, content, request):
        response = HttpResponse(content, request)
        gen = request.callback(response)
        if isinstance(gen, types.GeneratorType):
            for req in gen:
                req.priority = request.priority + 1
                Q.put(req)

    def rm_crawlling(self, response, d):
        self.crawlling.remove(d)

    def _next_request(self, spider):
        if Q.qsize() == 0 and len(self.crawlling) == 0:
            self._closewait.callback(None)

        if len(self.crawlling) >= 5:
            return
        while len(self.crawlling) < 5:
            try:
                req = Q.get(block=False)
            except Exception as e:
                req = None
            if not req:
                return
            d = getPage(req.url.encode('utf-8'))
            self.crawlling.append(d)
            d.addCallback(self.get_response, req)
            d.addCallback(self.rm_crawlling, d)
            d.addCallback(lambda _: self.nextcall.schedule())

    @defer.inlineCallbacks
    def crawl(self):
        spider = ChouTiSpider()
        start_requests = iter(spider.start_requests())
        flag = True
        while flag:
            try:
                req = next(start_requests)
                Q.put(req)
            except StopIteration as e:
                flag = False

        self.nextcall = CallLaterOnce(self._next_request, spider)
        self.nextcall.schedule()

        self._closewait = defer.Deferred()
        yield self._closewait

    @defer.inlineCallbacks
    def pp(self):
        yield self.crawl()

_active = set()
obj = Engine()
d = obj.crawl()
_active.add(d)

li = defer.DeferredList(_active)
li.addBoth(lambda _, *a, **kw: reactor.stop())

reactor.run()

More documentation: http://scrapy-chs.readthedocs.io/zh_CN/latest/index.html