
Hands-on guide: scraping 《某某某报》 in 30 lines of code


I. Project Overview

Hi everyone. I built this crawler to read the newspaper myself, and it has been running stably for about five years. Seeing the community's call for crawler projects, I'm sharing it here.

1. Approach

  • 1. Crawl the official 《某某某报》 site and fetch the PDF of each page for a given date
  • 2. Merge the downloaded single-page PDFs into one document
  • 3. Output the final newspaper PDF

2. Features

  • The project walks through the whole process of fetching the data with a crawler and producing the complete digital edition of 《某某某报》
  • The code is short, making it a good practice project for crawler beginners

II. Setting Up the Environment

1. Python environment

Python 3 at a minimum; hardly anyone uses Python 2 anymore.

2. Dependencies

The project depends on os, httplib2, PyPDF2, and a few other packages.

2.1 The PyPDF2 package

PyPDF2 is a pure-Python PDF library that can read, write, split, merge, crop, and transform the pages of PDF files. It can also add custom data, viewing options, and passwords to PDFs, and retrieve text and metadata from them.

Note that PyPDF2's text extraction is not always reliable, and that PyPDF2 3.0.x is the final release under the PyPDF2 name; development continues in the pypdf project, starting from pypdf==3.1.0.
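
As a quick taste of the PyPDF2 API used later in this article (a minimal sketch; 'sample.pdf' and 'first_page.pdf' are hypothetical file names):

import PyPDF2

reader = PyPDF2.PdfReader('sample.pdf', strict=False)  # 'sample.pdf' is a placeholder
print(len(reader.pages))   # number of pages
print(reader.metadata)     # document metadata (title, author, ...)

writer = PyPDF2.PdfWriter()
writer.add_page(reader.pages[0])           # copy the first page into the writer
with open('first_page.pdf', 'wb') as out:  # hypothetical output name
    writer.write(out)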

2.2 The httplib2 package

httplib2 is an HTTP client library written in Python with very comprehensive feature support.

httplib2 requires Python 2.3 or later, and versions 0.5.0 and later also support Python 3. It supports HTTP/1.1 with persistent connections, connection pooling, chunked transfer encoding, content encoding (gzip and deflate), authentication (Basic, Digest, and OAuth), caching, retries, redirects, error handling, progress notification, SOCKS proxies, TLS/SSL, and more.
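
As a quick illustration of how httplib2 is commonly used (a minimal sketch using the high-level Http class; the crawler below uses the lower-level HTTPConnectionWithTimeout instead):

import httplib2

h = httplib2.Http()  # high-level client; handles redirects and caching
response, content = h.request('http://paper.people.com.cn/', 'HEAD')
print(response.status)  # HTTP status code, e.g. 200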

!pip install httplib2 pypdf2
Collecting httplib2
  Downloading httplib2-0.22.0-py3-none-any.whl (96 kB)
     |████████████████████████████████| 96 kB 16 kB/s            
Collecting pypdf2
  Downloading pypdf2-3.0.1-py3-none-any.whl (232 kB)
     |████████████████████████████████| 232 kB 11 kB/s
Collecting pyparsing!=3.0.0,!=3.0.1,!=3.0.2,!=3.0.3,<4,>=2.4.2
  Downloading pyparsing-3.0.7-py3-none-any.whl (98 kB)
     |████████████████████████████████| 98 kB 13 kB/s
Collecting typing_extensions>=3.10.0.0
  Downloading typing_extensions-4.1.1-py3-none-any.whl (26 kB)
Collecting dataclasses
  Downloading dataclasses-0.8-py3-none-any.whl (19 kB)
Installing collected packages: typing-extensions, pyparsing, dataclasses, pypdf2, httplib2
  Attempting uninstall: typing-extensions
    Found existing installation: typing-extensions 3.7.4
    Uninstalling typing-extensions-3.7.4:
      Successfully uninstalled typing-extensions-3.7.4
  Attempting uninstall: pyparsing
    Found existing installation: pyparsing 2.1.10
    Uninstalling pyparsing-2.1.10:
      Successfully uninstalled pyparsing-2.1.10
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
auto-sklearn 0.5.2 requires scikit-learn<0.20,>=0.19, but you have scikit-learn 0.21.1 which is incompatible.
Successfully installed dataclasses-0.8 httplib2-0.22.0 pyparsing-3.0.7 pypdf2-3.0.1 typing-extensions-4.1.1

III. Hands-On Walkthrough

1. Page analysis

Each page of a given day's edition is published as a separate PDF at a predictable URL of the form http://paper.people.com.cn/rmrb/images/YYYY-MM/DD/NN/rmrbYYYYMMDDNN.pdf, where NN is the two-digit page number. Once that pattern is known, downloading the paper is just a matter of building the URL for every page and checking which ones exist.
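
Based on that pattern, the URL for any page can be assembled from the date and a two-digit page number; a small sketch (the date below is arbitrary):

year, month, day, page = '2023', '05', '01', '01'   # arbitrary example date, page 1
name = 'rmrb' + year + month + day + page + '.pdf'
url = 'http://paper.people.com.cn/rmrb/images/' + year + '-' + month + '/' + day + '/' + page + '/' + name
print(url)
# http://paper.people.com.cn/rmrb/images/2023-05/01/01/rmrb2023050101.pdf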

2. Import the required packages

import os
import httplib2
import urllib.request
import PyPDF2
import time

3. Download the single-page PDFs

def downFile1(mytime):
    # Build the date strings used in both the URLs and the file names.
    year = str(mytime.tm_year)
    month = '%02d' % mytime.tm_mon
    day = '%02d' % mytime.tm_mday
    files = []
    for i in range(1, 9):  # pages 01 to 08
        page = '%02d' % i
        name = 'rmrb' + year + month + day + page + '.pdf'
        path = '/rmrb/images/' + year + '-' + month + '/' + day + '/' + page + '/' + name
        # Send a HEAD request first to check that this page exists for the given date.
        connection = httplib2.HTTPConnectionWithTimeout("paper.people.com.cn")
        connection.request("HEAD", path)
        response = connection.getresponse()
        if response.status == 200:
            url = 'http://paper.people.com.cn' + path
            urllib.request.urlretrieve(url, name)  # download the single-page PDF
            files.append(name)
    # Merge the downloaded pages into one PDF named after the date, then remove the page files.
    targetfile = 'rmrb' + year + month + day + '.pdf'
    merge_pdfs(files, targetfile)
    for file in files:
        os.remove(file)

4. Merge the PDFs

def merge_pdfs(paths, output):
    # Append every page of every input PDF to a single writer, then write it out.
    pdf_writer = PyPDF2.PdfWriter()
    for path in paths:
        pdf_reader = PyPDF2.PdfReader(path, strict=False)
        for page in pdf_reader.pages:
            pdf_writer.add_page(page)
    with open(output, 'wb') as out:
        pdf_writer.write(out)
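
merge_pdfs is not tied to the newspaper at all and can be reused on any list of PDFs (the file names here are hypothetical):

merge_pdfs(['page1.pdf', 'page2.pdf'], 'combined.pdf')  # hypothetical file names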

5. Start the download

downFile1(time.localtime())
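
downFile1 accepts any time.struct_time, so a back issue can be fetched by parsing a date string first (a usage sketch; the date is arbitrary):

downFile1(time.strptime('2023-05-01', '%Y-%m-%d'))  # arbitrary example date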

6. Check the downloaded file

  • The output file is named after the date, in the form rmrbYYYYMMDD.pdf.

  • Opening it in any PDF reader shows the complete paper for that day.

IV. If you found this useful, please leave a like.