The Meow Planet Pet Research Institute
- Do you have a cat?
- Do you envy people who do?
My answer: I don't have a cat, and I don't envy people who do. But this writing event asked us to "appreciate cats with code", and browsing it I found that quite a few people are into cats; some of my group-chat friends keep showing off theirs too, which made me curious about these feline aliens.
If you want to enjoy a cat, you presumably have to buy one first, and to buy one you should at least understand cats. As someone who knows nothing about them, I'll take this event as a chance to properly learn about the various breeds of pet cat.
I found a website dedicated to trading cats, the "Maomi Jiaoyi" cat-trading network: www.maomijiaoyi.com/
Its cat-breed section lists all kinds of pet cats:
We can collect its data to learn what characterizes each breed.
By the end, this article produces the following results:
Data Collection
First, we scrape the list of breed links reachable from the index page:
from lxml import etree
import requests

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36",
}
url_base = "http://www.maomijiaoyi.com"
session = requests.Session()
# Visit the breed index page and collect the link to each breed's detail page
url = url_base + "/index.php?/pinzhongdaquan_5.html"
res = session.get(url, headers=headers)
html = etree.HTML(res.text)
main_data = []
for a_tag in html.xpath("//div[@class='pinzhong_left']/a"):
    url = url_base + a_tag.xpath("./@href")[0]
    pet_name, pet_price = None, None
    # the breed name and reference price sit in child divs; either may be absent
    pet_name_tag = a_tag.xpath("./div[@class='pet_name']/text()")
    if pet_name_tag:
        pet_name = pet_name_tag[0].strip()
    pet_price_tag = a_tag.xpath("./div[@class='pet_price']/span/text()")
    if pet_price_tag:
        pet_price = pet_price_tag[0].strip()
    print(pet_name, pet_price, url)
    main_data.append((pet_name, pet_price, url))
The printed result looks like this:
To really get to know the cats, we have to open each detail page and look at the detailed attributes:
The detail page has three parts of data to parse; let's test the parsing against the first link:
pet_name, pet_price, url = main_data[0]
res = session.get(url, headers=headers)
html = etree.HTML(res.text)
row = {}
# Parse the basic attributes: a text node ending with ":" is a field name,
# and the text node that follows it is the corresponding value
for text in html.xpath("//div[@class='details']//text()"):
    text = text.strip()
    if not text:
        continue
    if text.endswith(":"):
        key = text[:-1]
    else:
        row[key] = text
row["参考价格"] = pet_price
# Parse the appearance attributes (name/value pairs)
for shuxing in html.xpath("//div[@class='shuxing']/div"):
    name, v = shuxing.xpath("./div/text()")
    row[name.strip()] = v.strip()
row["链接"] = url
# Parse the long-form descriptions: pair each section title with its body text
titles = html.xpath(
    "//div[@class='content']/div[@class='property_title']/div/text()")
property_tags = html.xpath(
    "//div[@class='content']/div[@class='property_list']/div")
for title, property_tag in zip(titles, property_tags):
    p_texts = []
    for p_tag in property_tag.xpath(".//p|.//div"):
        p_text = "".join([t.strip()
                          for t in p_tag.xpath(".//text()") if t.strip()])
        if p_text:
            p_texts.append(p_text)
    text = "\n".join(p_texts)
    row[title] = text
row  # display the parsed record (notebook cell output)
As we can see, the first two parts parse out smoothly:
The third part comes out fine as well:
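If you are working outside a notebook, a small loop makes the parsed record easier to eyeball; the 40-character truncation below is an arbitrary choice:

# Print each parsed field, truncating long description text for readability
for k, v in row.items():
    print(k, ":", str(v)[:40])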
Besides the descriptive text, we also want to save the pictures. Next, parse the image URLs and download them:
import os

os.makedirs("imgs", exist_ok=True)  # make sure the target folder exists
img_urls = [
    url_base + url for url in html.xpath("//div[@class='big_img']/img/@src") if url]
row["图片地址"] = img_urls
for i, img_url in enumerate(img_urls, 1):
    with requests.get(img_url) as res:
        imgbytes = res.content
    with open(f"imgs/{pet_name}{i}.jpg", "wb") as f:
        f.write(imgbytes)
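A slightly more defensive variant of this download loop might add a timeout and a short pause between requests. Both values below are arbitrary choices of mine, not anything the site requires:

import time

for i, img_url in enumerate(img_urls, 1):
    res = requests.get(img_url, headers=headers, timeout=10)
    res.raise_for_status()  # fail loudly on HTTP errors
    with open(f"imgs/{pet_name}{i}.jpg", "wb") as f:
        f.write(res.content)
    time.sleep(1)  # be polite: pause between image requests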
All the images come down without a hitch:
Now we can tidy the code into a single script that writes the text data to Excel and the images to files:
import os
import pandas as pd
from lxml import etree
import requests

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36",
}
url_base = "http://www.maomijiaoyi.com"
session = requests.Session()
# Visit the breed index page and collect the link to each breed's detail page
url = url_base + "/index.php?/pinzhongdaquan_5.html"
res = session.get(url, headers=headers)
html = etree.HTML(res.text)
main_data = []
for a_tag in html.xpath("//div[@class='pinzhong_left']/a"):
    url = url_base + a_tag.xpath("./@href")[0]
    pet_name, pet_price = None, None
    pet_name_tag = a_tag.xpath("./div[@class='pet_name']/text()")
    if pet_name_tag:
        pet_name = pet_name_tag[0].strip()
    pet_price_tag = a_tag.xpath("./div[@class='pet_price']/span/text()")
    if pet_price_tag:
        pet_price = pet_price_tag[0].strip()
    main_data.append((pet_name, pet_price, url))

os.makedirs("imgs", exist_ok=True)
data = []
for pet_name, pet_price, url in main_data:
    res = session.get(url, headers=headers)
    html = etree.HTML(res.text)
    row = {}
    # Basic attributes: ":"-terminated texts are keys, the following text is the value
    for text in html.xpath("//div[@class='details']//text()"):
        text = text.strip()
        if not text:
            continue
        if text.endswith(":"):
            key = text[:-1]
        else:
            row[key] = text
    row["参考价格"] = pet_price
    # Appearance attributes (name/value pairs)
    for shuxing in html.xpath("//div[@class='shuxing']/div"):
        name, v = shuxing.xpath("./div/text()")
        row[name.strip()] = v.strip()
    row["链接"] = url
    # Long-form descriptions: pair each section title with its body text
    titles = html.xpath(
        "//div[@class='content']/div[@class='property_title']/div/text()")
    property_tags = html.xpath(
        "//div[@class='content']/div[@class='property_list']/div")
    for title, property_tag in zip(titles, property_tags):
        p_texts = []
        for p_tag in property_tag.xpath(".//p|.//div"):
            p_text = "".join([t.strip()
                              for t in p_tag.xpath(".//text()") if t.strip()])
            if p_text:
                p_texts.append(p_text)
        text = "\n".join(p_texts)
        row[title] = text
    img_urls = [
        url_base + url for url in html.xpath("//div[@class='big_img']/img/@src") if url]
    row["图片地址"] = img_urls
    data.append(row)
    for i, img_url in enumerate(img_urls, 1):
        with requests.get(img_url) as res:
            imgbytes = res.content
        with open(f"imgs/{pet_name}{i}.jpg", "wb") as f:
            f.write(imgbytes)

df = pd.DataFrame(data)
df.to_excel("猫咪.xlsx", index=False)
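One caveat about this script: a single failed request aborts the whole loop, and everything collected so far is lost. A minimal hardening sketch wraps the body of the per-breed loop in try/except so a bad page is skipped instead of crashing the crawl (the timeout value is an arbitrary choice):

# Sketch: skip breeds whose pages fail instead of crashing the whole crawl
for pet_name, pet_price, url in main_data:
    try:
        res = session.get(url, headers=headers, timeout=10)
        res.raise_for_status()
        # ... parse the page and append to data, exactly as above ...
    except Exception as e:
        print("skipped:", pet_name, url, e)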
The first few columns of the crawled result look like this:
And the downloaded pictures of the various cats:
With this Excel data in hand, we can start analyzing:
Data Analysis
First, load the Excel data:
import pandas as pd
df = pd.read_excel("猫咪.xlsx")
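Before plotting anything, a quick sanity check of what we loaded can't hurt:

print(df.shape)             # number of breeds x number of columns
print(df.columns.tolist())  # the field names we scraped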
A look at the data shows that many breeds go by several aliases, so we can draw a relationship graph mapping each breed to its aliases:
from pyecharts import options as opts
from pyecharts.charts import Graph

links = []
nodes = []
nodes.append({"name": "猫", "symbolSize": 10})
for name, alias in df[["中文学名", "别名"]].values:
    nodes.append({"name": name, "symbolSize": 10})
    links.append({"source": "猫", "target": name})
    for dest in alias.split(","):
        if name == dest:
            continue
        nodes.append({"name": dest, "symbolSize": 10})
        links.append({"source": name, "target": dest})
c = (
    Graph(init_opts=opts.InitOpts(width="800px", height="800px"))
    .add("", nodes, links, repulsion=250,
         linestyle_opts=opts.LineStyleOpts(width=0.5, curve=0.3, opacity=0.7))
    .set_global_opts(title_opts=opts.TitleOpts(title="宠物猫的品种"))
)
c.render_notebook()
Hovering over the central node reveals the primary name:
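One thing to watch for: if two breeds ever share an alias, the nodes list above will contain duplicate names, which ECharts tends to render incorrectly. A small deduplication pass before calling .add() is a cheap safeguard; this is a sketch, not something this particular dataset necessarily needs:

# Keep only the first node carrying each name, then plot unique_nodes instead
seen = set()
unique_nodes = []
for node in nodes:
    if node["name"] not in seen:
        seen.add(node["name"])
        unique_nodes.append(node)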
Distribution of breeds by place of origin:
from pyecharts.charts import Bar

data = df.原产地.value_counts()
c = (
    Bar()
    .add_xaxis(data.index.to_list())
    .add_yaxis("", data.values.tolist())
    .set_global_opts(
        xaxis_opts=opts.AxisOpts(axislabel_opts=opts.LabelOpts(rotate=15)),
        title_opts=opts.TitleOpts(title="宠物猫原产地分布")
    )
)
c.render_notebook()
As we can see, the breeds mainly originate from Britain, the United States, and Scotland.
Next, let's draw a treemap showing which breeds come from which country:
from pyecharts.charts import TreeMap  # TreeMap was missing from the imports above

data = []
tmp = df.groupby("原产地", as_index=False).agg(
    品种=("中文学名", ",".join), 品种数=("中文学名", "count"))
for src, dest in tmp.values[:, :2]:
    dests = dest.split(",")
    children = []
    data.append({"value": len(dests), "name": src, "children": children})
    for dest in dests:
        children.append({"name": dest, "value": 1})
c = (
    TreeMap(init_opts=opts.InitOpts(width='1280px', height='560px'))
    .add("", data,
         levels=[
             opts.TreeMapLevelsOpts(
                 treemap_itemstyle_opts=opts.TreeMapItemStyleOpts(
                     border_color="#555", border_width=1, gap_width=1
                 )
             ),
             opts.TreeMapLevelsOpts(
                 color_saturation=[0.3, 0.6],
                 treemap_itemstyle_opts=opts.TreeMapItemStyleOpts(
                     border_color_saturation=0.7, gap_width=5, border_width=10
                 ),
                 upper_label_opts=opts.LabelOpts(
                     is_show=True, position='insideTopLeft', vertical_align='top'
                 )
             ),
             opts.TreeMapLevelsOpts(
                 color_saturation=[0.3, 0.5],
                 treemap_itemstyle_opts=opts.TreeMapItemStyleOpts(
                     border_color_saturation=0.6, gap_width=1
                 ),
             ),
             opts.TreeMapLevelsOpts(color_saturation=[0.3, 0.5]),
         ])
    .set_global_opts(title_opts=opts.TitleOpts(title="宠物猫原产地分布"))
)
c.render_notebook()
Now let's look at how body sizes are distributed across the breeds:
from pyecharts.charts import Pie

c = (
    Pie()
    .add(
        "体型",
        df.体型.value_counts().reset_index().values.tolist(),
        radius=["40%", "55%"],
        label_opts=opts.LabelOpts(
            position="outside",
            formatter="{a|{a}}{abg|}\n{hr|}\n {b|{b}: }{c} {per|{d}%} ",
            background_color="#eee",
            border_color="#aaa",
            border_width=1,
            border_radius=4,
            rich={
                "a": {"color": "#999", "lineHeight": 22, "align": "center"},
                "abg": {
                    "backgroundColor": "#e3e3e3",
                    "width": "100%",
                    "align": "right",
                    "height": 22,
                    "borderRadius": [4, 4, 0, 0],
                },
                "hr": {
                    "borderColor": "#aaa",
                    "width": "100%",
                    "borderWidth": 0.5,
                    "height": 0,
                },
                "b": {"fontSize": 16, "lineHeight": 33},
                "per": {
                    "color": "#eee",
                    "backgroundColor": "#334455",
                    "padding": [2, 4],
                    "borderRadius": 2,
                },
            },
        ),
    )
    .set_global_opts(
        title_opts=opts.TitleOpts(title="品种体型占比"),
    )
)
c.render_notebook()
As we can see, only one breed falls in the largest size class: the Ragdoll (布偶猫). Next, let's find the cheapest and the most expensive breeds. Here we simply treat the breed with the lowest minimum price as the cheapest, and the breed with the highest maximum price as the most expensive:
# 参考价格 holds a price range separated by "-"; split it into numeric bounds
tmp = df.参考价格.str.split("-", expand=True)
tmp.columns = ["最低价格", "最高价格"]
tmp.dropna(inplace=True)
tmp = tmp.astype("int")
cheap_cat = df.loc[tmp.index[tmp.最低价格 == tmp.最低价格.min()], "中文学名"].to_list()
costly_cat = df.loc[tmp.index[tmp.最高价格 == tmp.最高价格.max()], "中文学名"].to_list()
print("Cheapest breeds:", cheap_cat)
print("Most expensive breeds:", costly_cat)
Cheapest breeds: ['加菲猫', '金渐层', '银渐层', '橘猫']
Most expensive breeds: ['布偶猫', '缅因猫', '无毛猫']
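Going one step further, we could rank breeds by the midpoint of their price range. A quick sketch reusing the tmp frame built above (中间价格 is a column name I made up, not one from the dataset):

# Rank breeds by the midpoint of their price range (中间价格 is our own column)
tmp["中间价格"] = (tmp["最低价格"] + tmp["最高价格"]) / 2
ranked = df.loc[tmp.index, ["中文学名"]].assign(中间价格=tmp["中间价格"])
print(ranked.sort_values("中间价格", ascending=False).head())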
The columns ['整体', '毛发', '颜色', '头部', '眼睛', '耳朵', '鼻子', '尾巴', '胸部', '颈部', '前驱', '后驱'] in the dataset are all descriptive text about the cats. We can combine them and draw a word cloud for our feline friends:
import stylecloud
from IPython.display import Image

text = ""
for row in df[['整体', '毛发', '颜色', '头部', '眼睛', '耳朵',
               '鼻子', '尾巴', '胸部', '颈部', '前驱', '后驱']].values:
    for v in row:
        if pd.isna(v):
            continue
        text += v
stylecloud.gen_stylecloud(text,
                          collocations=False,
                          font_path=r'C:\Windows\Fonts\msyhbd.ttc',
                          icon_name='fas fa-cat',
                          output_name='tmp.png')
Image(filename='tmp.png')
Next, we make separate word clouds for the personality-traits (性格特点) and living-habits (生活习性) columns.
Personality-traits word cloud:
import jieba
import stylecloud
from IPython.display import Image

stopwords = ["主人", "它们", "毛猫", "不会", "性格特点", "猫咪"]
words = df.性格特点.astype("str").apply(jieba.lcut).explode()
words = words[words.apply(len) > 1]
words = [word for word in words if word not in stopwords]
stylecloud.gen_stylecloud(" ".join(words),
                          collocations=False,
                          font_path=r'C:\Windows\Fonts\msyhbd.ttc',
                          icon_name='fas fa-square',
                          output_name='tmp.png')
Image(filename='tmp.png')
Living-habits word cloud:
import jieba
import stylecloud
from IPython.display import Image

stopwords = ["主人", "它们", "毛猫", "不会", "性格特点", "猫咪"]
words = df.生活习性.astype("str").apply(jieba.lcut).explode()
words = words[words.apply(len) > 1]
words = [word for word in words if word not in stopwords]
stylecloud.gen_stylecloud(" ".join(words),
                          collocations=False,
                          font_path=r'C:\Windows\Fonts\msyhbd.ttc',
                          icon_name='fas fa-square',
                          output_name='tmp.png')
Image(filename='tmp.png')
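The two word-cloud cells above differ only in which column they read, so a small helper removes the duplication. The function name, parameters, and output file names below are my own, not from any library:

import jieba
import stylecloud

def column_wordcloud(df, column, output_name,
                     stopwords=("主人", "它们", "毛猫", "不会", "性格特点", "猫咪")):
    # Tokenize one text column, drop single-character and stop words,
    # and render the result as a word cloud image
    words = df[column].astype("str").apply(jieba.lcut).explode()
    words = words[words.apply(len) > 1]
    words = [w for w in words if w not in stopwords]
    stylecloud.gen_stylecloud(" ".join(words),
                              collocations=False,
                              font_path=r'C:\Windows\Fonts\msyhbd.ttc',
                              icon_name='fas fa-square',
                              output_name=output_name)

column_wordcloud(df, "性格特点", "xingge.png")
column_wordcloud(df, "生活习性", "xixing.png")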
Generating the Cat Chart
With the analysis above, we now have a basic picture of cats. Next, let's generate one chart covering every breed.
What kind of chart should it be? After some thought, I settled on a mind map.
First, generate the grouped text:
for a, bs in df.中文学名.groupby(df.体型):
    print(a)
    for b in bs.values:
        print(f"\t{b}")
中型
    加菲猫
    金渐层
    英短蓝猫
    英短蓝白
    英国短毛猫
    美国短毛猫
    苏格兰折耳猫
    银渐层
    异国短毛猫
    孟买猫
    暹罗猫
    孟加拉豹猫
大型
    布偶猫
小型
    缅因猫
    金吉拉猫
    无毛猫
    高地折耳猫
    曼基康矮脚猫
    波斯猫
    橘猫
    阿比西尼亚猫
    德文卷毛猫
I then pasted this into a mind-mapping tool and, after a while of editing, ended up with:
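If you'd rather not copy the outline from the console, the same grouping can be written straight to a text file; a tab-indented outline imports cleanly into most mind-mapping tools. The file name below is arbitrary:

# Write the size -> breed outline to a file for import into a mind-map tool
with open("cat_outline.txt", "w", encoding="utf-8") as f:
    for size, names in df.中文学名.groupby(df.体型):
        f.write(size + "\n")
        for name in names.values:
            f.write("\t" + name + "\n")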
Reposted from: https://juejin.cn/post/7032697100601294855