
Using the Scrapy Crawler Framework

source link: http://www.cnblogs.com/teark/p/14290334.html

Overview: How do you implement a crawler with the Scrapy framework in four steps? What is the CrawlSpider template? How do you configure downloader middlewares? How do you deploy and monitor crawlers remotely with Scrapyd? Read on to see how each of these is done.

Installing Scrapy (macOS)

pip install scrapy

Note: do not install with the Python bundled with the Command Line Tools, or you may hit architecture errors; use a Python installed via brew instead.

Building a Crawler with Scrapy

Create a project

scrapy startproject demoSpider, where demoSpider is the project name.

Define the target items

Edit items.py and add the target fields, e.g. person = scrapy.Field()
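
A minimal items.py might look like the sketch below; the DemoItem class name and the title/content fields are assumptions chosen to match the parse() example later in this article.

import scrapy

class DemoItem(scrapy.Item):
    # fields the spider will fill in (names are illustrative)
    title = scrapy.Field()
    content = scrapy.Field()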

Write the spider

scrapy genspider demo "baidu.com" creates the demo spider file and sets the allowed crawl domain.

Change start_urls in demo.py to the address you want to crawl, e.g. https://www.cnblogs.com/teark/
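
At this point demo.py looks roughly like the sketch below (the generated boilerplate varies a little between Scrapy versions; allowed_domains is adjusted here to match the new start_urls):

import scrapy

class DemoSpider(scrapy.Spider):
    name = "demo"
    allowed_domains = ["cnblogs.com"]
    start_urls = ["https://www.cnblogs.com/teark/"]

    def parse(self, response):
        pass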

You can then modify the parse() method as you like; for example, to save the crawled page:

def parse(self, response):
    with open("teark.html", "w") as f:
        f.write(response.text)

Run the spider and check the result: scrapy crawl demo

Once you have the saved page (the saving code can then be commented out or removed), extract the data you need according to the page structure, usually with XPath expressions, e.g.:

from demoSpider.items import DemoItem  # the Item class defined in items.py

def parse(self, response):
    for each in response.xpath("//div[@class='teark_article']"):
        item = DemoItem()
        title = each.xpath("h3/text()").extract()
        content = each.xpath("p/text()").extract()
        item['title'] = title[0]
        item['content'] = content[0]
        yield item

Save the data: scrapy crawl demo -o demo.json (the json, jsonl, csv and xml formats all work)

Note: extraction usually requires some debugging against the live page. Use scrapy shell for this (ideally with ipython installed, which gives you completion), then move the working expressions into your code, e.g.:

scrapy shell "https://www.cnblogs.com/teark/"
site = response.xpath('//*[@class="even"]')
print(site[0].xpath('./td[2]/text()').extract()[0])

Processing the scraped items

Pipelines are commonly used to store the scraped content. A pipeline class in pipelines.py must implement a process_item() method, and that method must return the Item object, e.g.:

import json

class DemoJsonPipeline(object):

    def __init__(self):
        self.file = open('demo.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        content = json.dumps(dict(item), ensure_ascii=False) + "\n"
        self.file.write(content)
        return item

    def close_spider(self, spider):
        self.file.close()

Register the pipeline in the ITEM_PIPELINES setting in settings.py, e.g.:

ITEM_PIPELINES = {
    "demoSpider.pipelines.DemoJsonPipeline":300
}

Restart the spider with scrapy crawl demo and check whether demo.json has been generated in the current directory.

CrawlSpider

CrawlSpider is a subclass of Spider designed to extract links from crawled pages and follow them automatically.

Quickly create a spider from the CrawlSpider template: scrapy genspider -t crawl baidu baidu.com

The Rule class defines the crawl rules, and the LinkExtractor class extracts the links, e.g.:

scrapy shell "http://teark.com/article.php?&start=0#a"

from scrapy.linkextractors import LinkExtractor

# note the characters that need escaping in the regex (. and ? are metacharacters)
page_lx = LinkExtractor(allow=(r'comment\.php\?&start=\d+',))
page_lx.extract_links(response)

Once you have tested it in the shell, you know what to put in allow and rules; update the spider code:

# extract links matching 'http://teark.com/article.php?&start=\d+'
page_lx = LinkExtractor(allow=(r'start=\d+',))

rules = [
    # extract matching links, parse them with the parseContent callback,
    # and follow them (when no callback is given, follow defaults to True)
    Rule(page_lx, callback='parseContent', follow=True)
]

Note: never use 'parse' as the callback name; CrawlSpider uses parse() internally to implement its logic.
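
Putting the pieces together, a complete CrawlSpider might look like the sketch below; the spider name, domain, and parseContent body are assumptions for illustration.

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class TearkCrawlSpider(CrawlSpider):
    name = "teark_crawl"
    allowed_domains = ["teark.com"]
    start_urls = ["http://teark.com/article.php?&start=0"]

    # follow every pagination link matching start=\d+
    rules = [
        Rule(LinkExtractor(allow=(r'start=\d+',)), callback='parseContent', follow=True),
    ]

    def parseContent(self, response):
        # extract the data you need from each followed page
        for each in response.xpath("//div[@class='teark_article']"):
            yield {
                'title': each.xpath("h3/text()").get(),
                'content': each.xpath("p/text()").get(),
            }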

Logging

To add logging, put the following in settings.py:

LOG_FILE = "DemoSpider.log"

# other levels include CRITICAL, ERROR, WARNING and DEBUG
LOG_LEVEL = "INFO"
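
With these settings, messages at level INFO and above are written to DemoSpider.log. Inside a spider you can emit log messages through the built-in self.logger, for example:

def parse(self, response):
    self.logger.info("Crawled %s (status %d)", response.url, response.status)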

FormRequest

FormRequest is used to send POST requests; in the spider it looks like this:

def start_requests(self):
    url = 'http://www.renren.com/PLogin.do'
    yield scrapy.FormRequest(
        url=url,
        formdata={"email": "teark@9133***34.com", "password": "**teark**"},
        callback=self.parse_page
    )

Simulating a login:

import scrapy

class LoginSpider(scrapy.Spider):
    name = 'demo.com'
    start_urls = ['http://www.demo.com/users/login.php']

    def parse(self, response):
        return scrapy.FormRequest.from_response(
            response,
            formdata={'username': 'teark', 'password': '***'},
            callback=self.after_login
        )

    def after_login(self, response):
        # check whether the login succeeded or failed
        if "authentication failed" in response.text:
            self.logger.error("Login failed")
            return

Downloader Middlewares

Strategies to keep the crawler from being blocked:

Set a random User-Agent

Disable cookies; COOKIES_ENABLED toggles the CookiesMiddleware on or off

Set a download delay to lower the request rate

Fetch page data from the caches of search engines such as Google or Baidu

Use an IP pool: VPNs and proxy IPs

Use Crawlera (a proxy component built for crawlers); once its middleware is configured, all requests go out through Crawlera

Configuring downloader middlewares

In DOWNLOADER_MIDDLEWARES, the key is the import path of the middleware class and the value is its order:

DOWNLOADER_MIDDLEWARES = {
    'mySpider.middlewares.MyDownloaderMiddleware': 543,
}

process_request() is called when a request passes through the downloader middleware; process_response() is called when the middleware hands the response back to the engine after the download finishes.
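
A minimal skeleton showing both hooks might look like this (the class name matches the settings example above; returning None from process_request lets the request continue through the chain, and process_response must return a Response or a new Request):

class MyDownloaderMiddleware(object):
    def process_request(self, request, spider):
        # called for every request before it is downloaded;
        # return None to let the request continue through the chain
        return None

    def process_response(self, request, response, spider):
        # called for every response on its way back to the engine;
        # must return a Response (or a new Request)
        return response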

Create a middlewares.py file in the same directory as settings.py, e.g.:

import random
import base64
from mySpider.settings import USER_AGENTS
from mySpider.settings import PROXIES


class RandomUserAgent(object):
    def process_request(self, request, spider):
        useragent = random.choice(USER_AGENTS)
        request.headers.setdefault("User-Agent", useragent)

class RandomProxy(object):
    def process_request(self, request, spider):
        proxy = random.choice(PROXIES)
        if proxy['user_passwd'] is None:
            # proxy without authentication
            request.meta['proxy'] = "http://" + proxy['ip_port']
        else:
            # base64-encode the credentials and send them in the Proxy-Authorization header
            base64_userpasswd = base64.b64encode(proxy['user_passwd'].encode()).decode()
            request.headers['Proxy-Authorization'] = 'Basic ' + base64_userpasswd
            request.meta['proxy'] = "http://" + proxy['ip_port']

If the proxy server requires authentication, the credentials must be base64-encoded and added as a Proxy-Authorization header.

Configure USER_AGENTS and PROXIES in settings.py:

Free proxy IPs can be found online, or you can buy reliable private proxies.

USER_AGENTS = ["Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
    "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5"
    ]

PROXIES = [
  {'ip_port': '111.8.60.9:8123', 'user_passwd': 'user1:pass1'},
  {'ip_port': '101.71.27.120:80', 'user_passwd': 'user2:pass2'},
  {'ip_port': '122.96.59.104:80', 'user_passwd': 'user3:pass3'},
  ]

# disable cookies
COOKIES_ENABLED = False

# set a download delay
DOWNLOAD_DELAY = 3

# register the custom downloader middlewares
DOWNLOADER_MIDDLEWARES = {
    #'mySpider.middlewares.MyCustomDownloaderMiddleware': 543,
    'mySpider.middlewares.RandomUserAgent': 1,
    'mySpider.middlewares.RandomProxy': 100
}

Scrapyd: Remote Deployment and Monitoring of Crawlers

Install Scrapyd

sudo pip install scrapyd
sudo pip install scrapyd-client

Edit the deployment configuration

The deploy target is configured in the project's scrapy.cfg, at the project root (the Scrapyd service itself is configured separately in default_scrapyd.conf, e.g. /usr/local/lib/python3.8.6/site-packages/scrapyd/default_scrapyd.conf, where its bind address and port can be changed). In scrapy.cfg, name the Scrapyd deploy target for this project, set the URL (IP and port) of the Scrapyd service, and set the project name, e.g.:

# location of the project's settings module; no need to change this
[settings]
default = Demo.settings

# Scrapyd_Demo is the name of the deploy target
[deploy:Scrapyd_Demo]

# use localhost for a Scrapyd service on this machine; otherwise use the server's IP
url = http://localhost:6800/

# name of the Scrapy project to deploy and monitor
project = Demo

Deploy the project with the Scrapyd client

Command: scrapyd-deploy Scrapyd_Demo -p Demo

scrapyd-deploy is the client command; its argument is the deploy target name from scrapy.cfg, and -p specifies the project name.

Starting and stopping crawls remotely

Start a crawl: curl http://localhost:6800/schedule.json -d project=Demo -d spider=demo

Stop a crawl: curl http://localhost:6800/cancel.json -d project=Demo -d job=iundsw....

Note: when a crawl is started successfully, Scrapyd returns a job id; you need this job id to cancel the crawl.
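
If you have lost the job id, Scrapyd's listjobs.json endpoint lists the pending, running, and finished jobs of a project, e.g.:

curl "http://localhost:6800/listjobs.json?project=Demo"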

