Scrapy is an application framework written for crawling web sites and extracting structured data. It can be used in a wide range of programs for data mining, information processing, or archiving historical data. It was originally designed for page scraping (more precisely, web scraping), but it can also be used to fetch the data returned by APIs (such as Amazon Associates Web Services) or as a general-purpose web crawler. Scrapy has broad uses: data mining, monitoring, and automated testing. Scrapy uses the Twisted asynchronous networking library to handle network communication. The overall architecture is roughly as follows.

Scrapy mainly consists of the following components:

- Engine (Scrapy): handles the data flow of the whole system and triggers events (the core of the framework).
- Scheduler: accepts requests from the engine, pushes them into a queue, and returns them when the engine asks for them again. Think of it as a priority queue of URLs (the addresses, or links, of the pages to crawl); it decides which URL to crawl next and also removes duplicate URLs.
- Downloader: downloads page content and hands it back to the spiders (the downloader is built on Twisted, an efficient asynchronous model).
- Spiders: do the main work. They extract the information they need from specific pages, i.e. the so-called items. They can also extract links and let Scrapy go on to crawl the next pages.
- Item Pipeline: processes the items extracted by the spiders; its main jobs are persisting items, validating them, and discarding unwanted data. After a page has been parsed by a spider, the result is sent to the item pipeline and processed in several specific steps.
- Downloader middlewares: hooks that sit between the engine and the downloader and process the requests and responses passed between them.
- Spider middlewares: hooks that sit between the engine and the spiders and process the spiders' response input and request output.
- Scheduler middlewares: middleware between the engine and the scheduler that processes the requests and responses sent from the engine to the scheduler.

The Scrapy run flow is roughly:

1. The engine takes a URL from the scheduler for the next crawl.
2. The engine wraps the URL in a Request and passes it to the downloader.
3. The downloader fetches the resource and wraps it in a Response.
4. The spider parses the Response. Items parsed out of it are handed to the item pipeline for further processing; URLs parsed out of it are handed back to the scheduler to wait for crawling.

I. Installation

Linux:
    pip3 install scrapy

Windows:
    a. pip3 install wheel
    b. download Twisted from http://www.lfd.uci.edu/~gohlke/pythonlibs/#twisted
    c. in the download directory run: pip3 install Twisted‑17.1.0‑cp35‑cp35m‑win_amd64.whl
    d. pip3 install scrapy
    e. download and install pywin32: https://sourceforge.net/projects/pywin32/files/

II. Basic usage

1. Basic commands

1. scrapy startproject <project_name>
   # create a project in the current directory (similar to Django)
2. scrapy genspider [-t template] <name> <domain>
   # create a spider, e.g.
   scrapy genspider -t basic oldboy oldboy.com
   scrapy genspider -t xmlfeed autohome autohome.com.cn
   PS: list all templates:  scrapy genspider -l
       show one template:   scrapy genspider -d <template_name>
3. scrapy list
   # list the spiders in the project
4. scrapy crawl <spider_name>
   # run a single spider (must be run inside the project)

2. Project layout and a minimal spider

project_name/
    scrapy.cfg              # main project configuration; the crawler-related settings live in settings.py
    project_name/
        __init__.py
        items.py            # data models for structured data, similar to Django's Model
        pipelines.py        # data-processing behaviour, e.g. persisting the structured data
        settings.py         # configuration, e.g. recursion depth, concurrency, download delay
        spiders/            # spider directory; create files here and write the crawl rules
            __init__.py
            spider1.py
            spider2.py
            spider3.py

Note: spider files are usually named after the domain of the target site.

spider1.py:

import scrapy

class XiaoHuarSpider(scrapy.spiders.Spider):
    name = 'spidername'                  # spider name  *****
    allowed_domains = ['spider.com']     # allowed domains
    start_urls = [
        'http://www.flepeng.com/',       # start URL
    ]

    def parse(self, response):
        # callback invoked with the result of fetching the start URL
        pass

About the Windows console encoding:

import sys, io
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='gb18030')

3. A first spider

import scrapy
from scrapy.selector import HtmlXPathSelector   # apparently deprecated in newer versions; use Selector instead
from scrapy.http.request import Request


class DigSpider(scrapy.Spider):
    name = 'dig'                        # spider name, used when starting the crawl command
    allowed_domains = ['chouti.com']    # allowed domains
    start_urls = [
        'http://dig.chouti.com/',       # start URLs
    ]
    has_request_set = {}

    def parse(self, response):
        print(response.url)
        hxs = HtmlXPathSelector(response)
        page_list = hxs.select('//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\d+")]/@href').extract()
        for page in page_list:
            page_url = 'http://dig.chouti.com%s' % page
            key = self.md5(page_url)
            if key not in self.has_request_set:
                self.has_request_set[key] = page_url
                obj = Request(url=page_url, method='GET', callback=self.parse)
                yield obj

    @staticmethod
    def md5(val):
        import hashlib
        ha = hashlib.md5()
        ha.update(bytes(val, encoding='utf-8'))
        key = ha.hexdigest()
        return key

To run this spider, open a terminal in the project directory and execute:

scrapy crawl dig --nolog    # --nolog suppresses the log output

The important points in the code above:

- Request is a class that wraps a user request; yielding a Request object from a callback tells Scrapy to keep crawling.
- HtmlXPathSelector structures the HTML and provides selector functionality.
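HtmlXPathSelector has long been deprecated; on current Scrapy versions the same pagination spider can presumably be written against response.xpath() and rely on the scheduler's built-in duplicate filter instead of the manual md5 bookkeeping. The following is a minimal sketch along those lines (response.xpath and response.urljoin are standard Scrapy APIs not shown in the original post):

import scrapy


class DigSpider(scrapy.Spider):
    name = 'dig'
    allowed_domains = ['chouti.com']
    start_urls = ['http://dig.chouti.com/']

    def parse(self, response):
        print(response.url)
        # same XPath as above, expressed through the response's own selector
        pages = response.xpath(
            r'//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\d+")]/@href').extract()
        for page in pages:
            # duplicate URLs are dropped by the default RFPDupeFilter,
            # so no has_request_set dictionary is needed
            yield scrapy.Request(url=response.urljoin(page), callback=self.parse)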
4. Selectors

XPath path expressions:

Expression   Description
nodename     selects all child nodes of the named node
/            selects from the root node
//           selects matching nodes anywhere in the document, regardless of their position
.            selects the current node
..           selects the parent of the current node
@            selects attributes

Some path expressions and their results:

Path expression    Result
bookstore          selects all child nodes of the bookstore element
/bookstore         selects the root element bookstore (note: if a path starts with a slash, it is always an absolute path to an element)
bookstore/book     selects all book elements that are children of bookstore
//book             selects all book elements, wherever they are in the document
bookstore//book    selects all book elements that are descendants of bookstore, wherever they sit below it
//@lang            selects all attributes named lang

#!/usr/bin/env python
# -*- coding:utf-8 -*-
from scrapy.selector import Selector, HtmlXPathSelector  # HtmlXPathSelector is apparently deprecated in newer versions; Selector is used the same way
from scrapy.http import HtmlResponse

html = """<!DOCTYPE html>
<html>
    <head lang="en">
        <meta charset="UTF-8">
        <title></title>
    </head>
    <body>
        <ul>
            <li class="item-"><a id="i1" href="link.html">first item</a></li>
            <li class="item-0"><a id="i2" href="llink.html">first item</a></li>
            <li class="item-1"><a href="llink2.html">second item<span>vv</span></a></li>
        </ul>
        <div><a href="llink2.html">second item</a></div>
    </body>
</html>
"""
response = HtmlResponse(url='http://example.com', body=html, encoding='utf-8')
# hxs = HtmlXPathSelector(response)
# print(hxs)
# hxs = Selector(response=response).xpath('//a')    # every <a> element in the document
# print(hxs)
# hxs = Selector(response=response).xpath('//a[2]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@id]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@id="i1"]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[@href="link.html"][@id="i1"]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[contains(@href, "link")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[starts-with(@href, "link")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]')
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/text()').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('//a[re:test(@id, "i\d+")]/@href').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('/html/body/ul/li/a/@href').extract()
# print(hxs)
# hxs = Selector(response=response).xpath('//body/ul/li/a/@href').extract_first()
# print(hxs)

# ul_list = Selector(response=response).xpath('//body/ul/li')
# for item in ul_list:
#     v = item.xpath('./a/span')
#     # or
#     # v = item.xpath('a/span')
#     # or
#     # v = item.xpath('*/a/span')
#     print(v)

Example: log in to Chouti automatically and upvote posts

# -*- coding: utf-8 -*-
import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request
from scrapy.http.cookies import CookieJar
from scrapy import FormRequest


class ChouTiSpider(scrapy.Spider):
    # spider name, used when starting the crawl command
    name = 'chouti'
    # allowed domains
    allowed_domains = ['chouti.com']
    cookie_dict = {}
    has_request_set = {}

    def start_requests(self):
        url = 'http://dig.chouti.com/'
        # return [Request(url=url, callback=self.login)]
        yield Request(url=url, callback=self.login)

    def login(self, response):
        cookie_jar = CookieJar()
        cookie_jar.extract_cookies(response, response.request)
        for k, v in cookie_jar._cookies.items():
            for i, j in v.items():
                for m, n in j.items():
                    self.cookie_dict[m] = n.value
        req = Request(
            url='http://dig.chouti.com/login',
            method='POST',
            headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'},
            body='phone=8615131255089&password=pppppppp&oneMonth=1',
            cookies=self.cookie_dict,
            callback=self.check_login
        )
        yield req

    def check_login(self, response):
        req = Request(
            url='http://dig.chouti.com/',
            method='GET',
            callback=self.show,
            cookies=self.cookie_dict,
            dont_filter=True
        )
        yield req

    def show(self, response):
        # print(response)
        hxs = HtmlXPathSelector(response)
        news_list = hxs.select('//div[@id="content-list"]/div[@class="item"]')
        for new in news_list:
            # temp = new.xpath('div/div[@class="part2"]/@share-linkid').extract()
            link_id = new.xpath('*/div[@class="part2"]/@share-linkid').extract_first()
            yield Request(
                url='http://dig.chouti.com/link/vote?linksId=%s' % (link_id,),
                method='POST',
                cookies=self.cookie_dict,
                callback=self.do_favor
            )

        page_list = hxs.select('//div[@id="dig_lcpage"]//a[re:test(@href, "/all/hot/recent/\d+")]/@href').extract()
        for page in page_list:
            page_url = 'http://dig.chouti.com%s' % page
            import hashlib
            hash = hashlib.md5()
            hash.update(bytes(page_url, encoding='utf-8'))
            key = hash.hexdigest()
            if key in self.has_request_set:
                pass
            else:
                self.has_request_set[key] = page_url
                yield Request(
                    url=page_url,
                    method='GET',
                    callback=self.show
                )

    def do_favor(self, response):
        print(response.text)

Handling cookies with meta['cookiejar']:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.http.response.html import HtmlResponse
from scrapy.http import Request
from scrapy.http.cookies import CookieJar


class ChoutiSpider(scrapy.Spider):
    name = 'chouti'
    allowed_domains = ['chouti.com']
    start_urls = (
        'http://www.chouti.com/',
    )

    def start_requests(self):
        url = 'http://dig.chouti.com/'
        yield Request(url=url, callback=self.login, meta={'cookiejar': True})

    def login(self, response):
        print(response.headers.getlist('Set-Cookie'))
        req = Request(
            url='http://dig.chouti.com/login',
            method='POST',
            headers={'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'},
            body='phone=8613121758648&password=woshiniba&oneMonth=1',
            callback=self.check_login,
            meta={'cookiejar': True}
        )
        yield req

    def check_login(self, response):
        print(response.text)

Note: set DEPTH_LIMIT = 1 in settings.py to limit how many levels the "recursion" goes.

5. Structured output: items and pipelines

The examples above are simple, so everything is handled directly in parse(). If you want to capture more data and process it further, you can use Scrapy items to structure the data and then hand it over to pipelines for unified processing.

spiders/xiahuar.py:

import scrapy
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request
from scrapy.http.cookies import CookieJar
from scrapy import FormRequest


class XiaoHuarSpider(scrapy.Spider):
    name = 'xiaohuar'
    allowed_domains = ['xiaohuar.com']
    start_urls = [
        'http://www.xiaohuar.com/list-1-1.html',
    ]
    # per-spider pipeline configuration (same structure as ITEM_PIPELINES in settings):
    # custom_settings = {
    #     'ITEM_PIPELINES': {
    #         'spider1.pipelines.JsonPipeline': 100
    #     }
    # }
    has_request_set = {}

    def parse(self, response):
        # Analyse the page:
        # - find the content that matches the rules (the pictures) and save it
        # - find all <a> tags and follow them, level by level
        hxs = HtmlXPathSelector(response)

        items = hxs.select('//div[@class="item_list infinite_scroll"]/div')
        for item in items:
            src = item.select('.//div[@class="img"]/a/img/@src').extract_first()
            name = item.select('.//div[@class="img"]/span/text()').extract_first()
            school = item.select('.//div[@class="img"]/div[@class="btns"]/a/text()').extract_first()
            url = 'http://www.xiaohuar.com%s' % src
            from ..items import XiaoHuarItem
            obj = XiaoHuarItem(name=name, school=school, url=url)
            yield obj

        urls = hxs.select('//a[re:test(@href, "http://www.xiaohuar.com/list-1-\d+.html")]/@href')
        for url in urls:
            key = self.md5(url)
            if key in self.has_request_set:
                pass
            else:
                self.has_request_set[key] = url
                req = Request(
                    url=url,
                    method='GET',
                    callback=self.parse
                )
                yield req

    @staticmethod
    def md5(val):
        import hashlib
        ha = hashlib.md5()
        ha.update(bytes(val, encoding='utf-8'))
        key = ha.hexdigest()
        return key

items.py:

import scrapy


class XiaoHuarItem(scrapy.Item):
    name = scrapy.Field()
    school = scrapy.Field()
    url = scrapy.Field()

pipelines.py:

import json
import os
import requests


class JsonPipeline(object):
    def __init__(self):
        self.file = open('xiaohua.txt', 'w')

    def process_item(self, item, spider):
        v = json.dumps(dict(item), ensure_ascii=False)
        self.file.write(v)
        self.file.write('\n')
        self.file.flush()
        return item


class FilePipeline(object):
    def __init__(self):
        if not os.path.exists('imgs'):
            os.makedirs('imgs')

    def process_item(self, item, spider):
        response = requests.get(item['url'], stream=True)
        file_name = '%s_%s.jpg' % (item['name'], item['school'])
        with open(os.path.join('imgs', file_name), mode='wb') as f:
            f.write(response.content)
        return item

settings.py:

ITEM_PIPELINES = {
    'spider1.pipelines.JsonPipeline': 100,
    'spider1.pipelines.FilePipeline': 300,
}
# The integer values determine the order in which the pipelines run: items pass
# through them from the lowest number to the highest. By convention these numbers
# are kept in the 0-1000 range.
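Opening the output file in __init__ works, but pipelines also expose the open_spider and close_spider hooks shown in the next section, which tie the file's lifetime to the crawl. A small alternative sketch of the same JsonPipeline written that way (my rewrite, not code from the original post):

import json


class JsonPipeline(object):
    def open_spider(self, spider):
        # called once when the spider starts
        self.file = open('xiaohua.txt', 'w')

    def close_spider(self, spider):
        # called once when the spider finishes
        self.file.close()

    def process_item(self, item, spider):
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item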
A pipeline can do more than that. The full set of hooks looks like this:

from scrapy.exceptions import DropItem


class CustomPipeline(object):
    def __init__(self, v):
        self.value = v

    def process_item(self, item, spider):
        # called for every item; do the processing and persistence here
        # returning the item passes it on to the remaining pipelines
        return item
        # raising DropItem discards the item so later pipelines never see it
        # raise DropItem()

    @classmethod
    def from_crawler(cls, crawler):
        # called at start-up to create the pipeline object
        val = crawler.settings.getint('MMMM')
        return cls(val)

    def open_spider(self, spider):
        # called when the spider starts
        print('000000')

    def close_spider(self, spider):
        # called when the spider is closed
        print('111111')

6. Middlewares

Spider middleware:

class SpiderMiddleware(object):

    def process_spider_input(self, response, spider):
        """
        Called after the download finishes, before the response is handed to parse().
        :param response:
        :param spider:
        :return:
        """
        pass

    def process_spider_output(self, response, result, spider):
        """
        Called with what the spider returns after processing the response.
        :param response:
        :param result:
        :param spider:
        :return: must return an iterable of Request and/or Item objects
        """
        return result

    def process_spider_exception(self, response, exception, spider):
        """
        Called when an exception is raised.
        :param response:
        :param exception:
        :param spider:
        :return: None to let the following middlewares keep handling the exception,
                 or an iterable containing Response or Item objects, which is handed
                 to the scheduler or the pipelines
        """
        return None

    def process_start_requests(self, start_requests, spider):
        """
        Called when the spider starts.
        :param start_requests:
        :param spider:
        :return: an iterable of Request objects
        """
        return start_requests

Downloader middleware:

class DownMiddleware1(object):
    def process_request(self, request, spider):
        """
        Called by every downloader middleware when a request needs to be downloaded.
        :param request:
        :param spider:
        :return: None: continue to the following middlewares and download;
                 Response object: stop process_request and start process_response;
                 Request object: stop the middleware chain and send the Request back to the scheduler;
                 raise IgnoreRequest: stop process_request and start process_exception
        """
        pass

    def process_response(self, request, response, spider):
        """
        Called on the way back, before the spider gets the response.
        :param request:
        :param response:
        :param spider:
        :return: Response object: passed on to the other middlewares' process_response;
                 Request object: stop the middleware chain, the request is rescheduled for download;
                 raise IgnoreRequest: Request.errback is called
        """
        print('response1')
        return response

    def process_exception(self, request, exception, spider):
        """
        Called when the download handler or a process_request() (downloader middleware) raises an exception.
        :param request:
        :param exception:
        :param spider:
        :return: None: keep passing the exception to the following middlewares;
                 Response object: stop the remaining process_exception methods;
                 Request object: stop the middleware chain, the request is rescheduled for download
        """
        return None

7. Custom commands

- Create a directory (any name, e.g. commands) at the same level as spiders, and inside it create a file named crawlall.py; the file name becomes the name of the custom command.

crawlall.py:

from scrapy.commands import ScrapyCommand
from scrapy.utils.project import get_project_settings


class Command(ScrapyCommand):
    requires_project = True

    def syntax(self):
        # the command's arguments
        return '[options]'

    def short_desc(self):
        # the command's description
        return 'Runs all of the spiders'

    def run(self, args, opts):
        spider_list = self.crawler_process.spiders.list()
        for name in spider_list:
            self.crawler_process.crawl(name, **opts.__dict__)
        self.crawler_process.start()

- In settings.py add: COMMANDS_MODULE = '<project_name>.<directory_name>'
- In the project directory run: scrapy crawlall

Running a single spider from a script:

import sys
from scrapy.cmdline import execute

if __name__ == '__main__':
    execute(['scrapy', 'crawl', 'github', '--nolog'])

8. Custom extensions

A custom extension uses signals to register an operation at a chosen point:

from scrapy import signals


class MyExtension(object):
    def __init__(self, value):
        self.value = value

    @classmethod
    def from_crawler(cls, crawler):
        val = crawler.settings.getint('MMMM')
        ext = cls(val)
        # register the signal handlers
        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        return ext

    def spider_opened(self, spider):
        print('open')

    def spider_closed(self, spider):
        print('close')
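The extension above only runs if it is listed in the EXTENSIONS setting (item 12 of the settings file below). As a small sketch of other available signals (my own, with the module path and the MMMM setting following the naming used in this article), the same extension can also count items via signals.item_scraped:

# settings.py:
#   EXTENSIONS = {
#       'step8_king.extensions.MyExtension': 500,
#   }
#   MMMM = 10
from scrapy import signals


class MyExtension(object):
    def __init__(self, value):
        self.value = value
        self.items_scraped = 0

    @classmethod
    def from_crawler(cls, crawler):
        ext = cls(crawler.settings.getint('MMMM'))
        crawler.signals.connect(ext.item_scraped, signal=signals.item_scraped)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        return ext

    def item_scraped(self, item, response, spider):
        # fired once for every item that reaches the pipelines
        self.items_scraped += 1

    def spider_closed(self, spider):
        print('%s closed, %d items scraped' % (spider.name, self.items_scraped))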
9. Avoiding duplicate visits

By default Scrapy uses scrapy.dupefilter.RFPDupeFilter for deduplication. The related settings are:

DUPEFILTER_CLASS = 'scrapy.dupefilter.RFPDupeFilter'
DUPEFILTER_DEBUG = False
JOBDIR = "path where the record of visited requests is kept, e.g. /root"   # the final path is /root/requests.seen

Custom URL deduplication:

class RepeatUrl:
    def __init__(self):
        self.visited_url = set()

    @classmethod
    def from_settings(cls, settings):
        """
        Called at start-up.
        :param settings:
        :return:
        """
        return cls()

    def request_seen(self, request):
        """
        Check whether the current request has already been visited.
        :param request:
        :return: True if it has been visited, False if not
        """
        if request.url in self.visited_url:
            return True
        self.visited_url.add(request.url)
        return False

    def open(self):
        """
        Called when crawling starts.
        :return:
        """
        print('open replication')

    def close(self, reason):
        """
        Called when the crawl finishes.
        :param reason:
        :return:
        """
        print('close replication')

    def log(self, request, spider):
        """
        Log a duplicate request.
        :param request:
        :param spider:
        :return:
        """
        print('repeat', request.url)
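To plug the class above in, point DUPEFILTER_CLASS at it (item 16 of the settings file below does exactly that). Conversely, any single request can opt out of deduplication regardless of which filter is configured. A tiny sketch of both, reusing names that already appear in this article:

# settings.py, as in item 16 below:
# DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'

from scrapy import Request

# dont_filter=True bypasses the duplicate filter for this one request,
# which is how check_login() in the Chouti example revisits the start page
req = Request(url='http://dig.chouti.com/', dont_filter=True)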
10. Other: settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for step8_king project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

# 1. Bot (project) name
BOT_NAME = 'step8_king'

# 2. Paths of the spider modules
SPIDER_MODULES = ['step8_king.spiders']
NEWSPIDER_MODULE = 'step8_king.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# 3. Client User-Agent request header
# USER_AGENT = 'step8_king (+http://www.yourdomain.com)'

# Obey robots.txt rules
# 4. robots.txt handling; this should be enabled to check whether crawling is allowed
# ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# 5. Number of concurrent requests
# CONCURRENT_REQUESTS = 4

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
# 6. Download delay, in seconds
# DOWNLOAD_DELAY = 2

# The download delay setting will honor only one of:
# 7. Concurrency per domain; the download delay is then applied per domain as well
# CONCURRENT_REQUESTS_PER_DOMAIN = 2
# Concurrency per IP; if set, CONCURRENT_REQUESTS_PER_DOMAIN is ignored and the
# download delay is applied per IP instead
# CONCURRENT_REQUESTS_PER_IP = 3

# Disable cookies (enabled by default)
# 8. Whether cookies are supported (the cookiejar used to handle cookies)
# COOKIES_ENABLED = True
# COOKIES_DEBUG = True

# Disable Telnet Console (enabled by default)
# 9. The telnet console can be used to inspect and control the running crawler:
#    connect with telnet <ip> <port> and then issue commands
# TELNETCONSOLE_ENABLED = True
# TELNETCONSOLE_HOST = '127.0.0.1'
# TELNETCONSOLE_PORT = [6023,]

# 10. Default request headers
# Override the default request headers:
# DEFAULT_REQUEST_HEADERS = {
#     'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#     'Accept-Language': 'en',
# }

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
# 11. Pipelines that process the items
# ITEM_PIPELINES = {
#     'step8_king.pipelines.JsonPipeline': 700,
#     'step8_king.pipelines.FilePipeline': 500,
# }

# 12. Custom extensions, driven by signals
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
# EXTENSIONS = {
#     # 'step8_king.extensions.MyExtension': 500,
# }

# 13. Maximum crawl depth; the current depth can be read from meta; 0 means unlimited
# DEPTH_LIMIT = 3

# 14. Crawl order: 0 means depth-first, LIFO (the default); 1 means breadth-first, FIFO

# last in, first out: depth-first
# DEPTH_PRIORITY = 0
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleLifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.LifoMemoryQueue'

# first in, first out: breadth-first
# DEPTH_PRIORITY = 1
# SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue'
# SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue'

# 15. Scheduler queue
# SCHEDULER = 'scrapy.core.scheduler.Scheduler'
# from scrapy.core.scheduler import Scheduler

# 16. URL deduplication
# DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html

"""
17. Auto-throttling
    from scrapy.contrib.throttle import AutoThrottle

    The algorithm:
    1. take the minimum delay from DOWNLOAD_DELAY
    2. take the maximum delay from AUTOTHROTTLE_MAX_DELAY
    3. set the initial download delay from AUTOTHROTTLE_START_DELAY
    4. when a request finishes, take its latency, i.e. the time between opening the
       connection and receiving the response headers
    5. combine it with AUTOTHROTTLE_TARGET_CONCURRENCY:
        target_delay = latency / self.target_concurrency
        new_delay = (slot.delay + target_delay) / 2.0   # slot.delay is the previous delay
        new_delay = max(target_delay, new_delay)
        new_delay = min(max(self.mindelay, new_delay), self.maxdelay)
        slot.delay = new_delay
"""

# Enable auto-throttling
# AUTOTHROTTLE_ENABLED = True
# The initial download delay
# AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
# AUTOTHROTTLE_MAX_DELAY = 10
# The average number of requests Scrapy should be sending in parallel to each remote server
# AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
# AUTOTHROTTLE_DEBUG = True

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings

"""
18. HTTP cache: stores requests and responses that have already been fetched so they can be reused later
    from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware
    from scrapy.extensions.httpcache import DummyPolicy
    from scrapy.extensions.httpcache import FilesystemCacheStorage
"""
# Enable the cache
# HTTPCACHE_ENABLED = True

# Cache policy: cache every request; later identical requests are answered straight from the cache
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
# Cache policy: cache according to the HTTP response headers (Cache-Control, Last-Modified, ...)
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"

# Cache expiration time
# HTTPCACHE_EXPIRATION_SECS = 0
# Cache directory
# HTTPCACHE_DIR = 'httpcache'
# HTTP status codes that are never cached
# HTTPCACHE_IGNORE_HTTP_CODES = []
# Cache storage backend
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

"""
19. Proxies, configured either through environment variables or a custom downloader middleware
    from scrapy.contrib.downloadermiddleware.httpproxy import HttpProxyMiddleware

    Option 1: use the default middleware and set environment variables
        os.environ
        {
            http_proxy: http://root:woshiniba@192.168.11.11:9999/
            https_proxy: http://192.168.11.11:9999/
        }

    Option 2: use a custom downloader middleware

        def to_bytes(text, encoding=None, errors='strict'):
            if isinstance(text, bytes):
                return text
            if not isinstance(text, six.string_types):
                raise TypeError('to_bytes must receive a unicode, str or bytes '
                                'object, got %s' % type(text).__name__)
            if encoding is None:
                encoding = 'utf-8'
            return text.encode(encoding, errors)

        class ProxyMiddleware(object):
            def process_request(self, request, spider):
                PROXIES = [
                    {'ip_port': '111.11.228.75:80', 'user_pass': ''},
                    {'ip_port': '120.198.243.22:80', 'user_pass': ''},
                    {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
                    {'ip_port': '101.71.27.120:80', 'user_pass': ''},
                    {'ip_port': '122.96.59.104:80', 'user_pass': ''},
                    {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
                ]
                proxy = random.choice(PROXIES)
                if proxy['user_pass'] is not None:
                    request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
                    encoded_user_pass = base64.encodestring(to_bytes(proxy['user_pass']))
                    request.headers['Proxy-Authorization'] = to_bytes('Basic ' + encoded_user_pass)
                    print("**************ProxyMiddleware have pass************" + proxy['ip_port'])
                else:
                    print("**************ProxyMiddleware no pass************" + proxy['ip_port'])
                    request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])

        DOWNLOADER_MIDDLEWARES = {
            'step8_king.middlewares.ProxyMiddleware': 500,
        }
"""
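A note before moving on to item 20: a custom middleware is only needed for rotation or authentication logic like the above. Scrapy's downloader already honours request.meta['proxy'], so a proxy can be set per request straight from the spider. A minimal sketch; the spider name and proxy address are made up for illustration:

import scrapy


class ProxyDemoSpider(scrapy.Spider):
    name = 'proxy_demo'                      # made-up name, for illustration only
    start_urls = ['http://httpbin.org/ip']   # echoes the caller's IP, handy for checking the proxy

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(
                url,
                meta={'proxy': 'http://127.0.0.1:8888'},   # made-up proxy address
                callback=self.parse,
            )

    def parse(self, response):
        self.logger.info(response.text)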
"""
20. HTTPS access

    There are two cases when crawling HTTPS sites:

    1. The site uses a trusted certificate (supported by default):
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"

    2. The site uses a custom certificate:
        DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
        DOWNLOADER_CLIENTCONTEXTFACTORY = "step8_king.https.MySSLFactory"

        # https.py
        from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory
        from twisted.internet.ssl import (optionsForClientTLS, CertificateOptions, PrivateCertificate)

        class MySSLFactory(ScrapyClientContextFactory):
            def getCertificateOptions(self):
                from OpenSSL import crypto
                v1 = crypto.load_privatekey(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.key.unsecure', mode='r').read())
                v2 = crypto.load_certificate(crypto.FILETYPE_PEM, open('/Users/wupeiqi/client.pem', mode='r').read())
                return CertificateOptions(
                    privateKey=v1,   # pKey object
                    certificate=v2,  # X509 object
                    verify=False,
                    method=getattr(self, 'method', getattr(self, '_ssl_method', None))
                )

    Other related classes:
        scrapy.core.downloader.handlers.http.HttpDownloadHandler
        scrapy.core.downloader.webclient.ScrapyHTTPClientFactory
        scrapy.core.downloader.contextfactory.ScrapyClientContextFactory
    Related settings:
        DOWNLOADER_HTTPCLIENTFACTORY
        DOWNLOADER_CLIENTCONTEXTFACTORY
"""

"""
21. Spider middleware

    (the SpiderMiddleware skeleton with process_spider_input / process_spider_output /
     process_spider_exception / process_start_requests is the same as the one shown
     in section 6 above)

    Built-in spider middlewares:
        'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,
        'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,
        'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,
        'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,
        'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,
"""
# from scrapy.contrib.spidermiddleware.referer import RefererMiddleware
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
    # 'step8_king.middlewares.SpiderMiddleware': 543,
}
"""
22. Downloader middleware

    (the DownMiddleware1 skeleton with process_request / process_response /
     process_exception is the same as the one shown in section 6 above)

    Default downloader middlewares:
    {
        'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
        'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
        'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,
        'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
        'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
        'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
        'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,
        'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,
        'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
        'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
        'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
        'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,
        'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
        'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
    }
"""
# from scrapy.contrib.downloadermiddleware.httpauth import HttpAuthMiddleware
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#     'step8_king.middlewares.DownMiddleware1': 100,
#     'step8_king.middlewares.DownMiddleware2': 500,
# }
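To round this off, a minimal downloader middleware in the DownMiddleware1 style that picks a random User-Agent in process_request; the class, the example UA strings and the module path are my own, following the step8_king naming used above:

import random


class RandomUserAgentMiddleware(object):
    # made-up example values; use real browser User-Agent strings in practice
    USER_AGENTS = [
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6)',
    ]

    def process_request(self, request, spider):
        request.headers['User-Agent'] = random.choice(self.USER_AGENTS)
        # returning None lets the request continue through the remaining
        # middlewares and on to the downloader
        return None

# settings.py:
# DOWNLOADER_MIDDLEWARES = {
#     'step8_king.middlewares.RandomUserAgentMiddleware': 400,
# }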
This article is reposted from: https://www.cnblogs.com/wupeiqi/articles/6229292.html
