
Python Data Analysis from Beginner to Advanced: Fast Text Processing (with Code)

By 站长

🍁1. Cleaning Text

Basic cleaning of unstructured text data can be done with Python's built-in string methods:

  • strip
  • split
  • replace
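`split` is listed above but not demonstrated below; a quick sketch of its two common uses, splitting on whitespace and on a custom separator:

```python
# split() on whitespace, then on a custom separator
text = 'Interrobang. By Aishwarya Henriette'
words = text.split()            # split on any whitespace
print(words)                    # ['Interrobang.', 'By', 'Aishwarya', 'Henriette']
author = text.split('By ')[1]   # split on a custom separator, keep the tail
print(author)                   # Aishwarya Henriette
```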
# Create the text
text_data = ['   Interrobang. By Aishwarya Henriette   ',
             'Parking And goding. by karl fautier',
             '   Today is the night. by jarek prakash    ']
# Strip whitespace from both ends of each string
strip_whitespace = [string.strip() for string in text_data]
strip_whitespace
['Interrobang. By Aishwarya Henriette', 'Parking And goding. by karl fautier', 'Today is the night. by jarek prakash']
# Remove periods
remove_periods = [string.replace('.','') for string in text_data]
remove_periods
['   Interrobang By Aishwarya Henriette   ', 'Parking And goding by karl fautier', '   Today is the night by jarek prakash    ']
# Create a function
def capitalizer(string):
    return string.upper()
[capitalizer(string) for string in remove_periods]
['   INTERROBANG BY AISHWARYA HENRIETTE   ', 'PARKING AND GODING BY KARL FAUTIER', '   TODAY IS THE NIGHT BY JAREK PRAKASH    ']
# Use a regular expression
import re
def replace_letters_with_x(string):
    return re.sub(r'[a-zA-Z]','x',string)
[replace_letters_with_x(string) for string in remove_periods]
['   xxxxxxxxxxx xx xxxxxxxxx xxxxxxxxx   ', 'xxxxxxx xxx xxxxxx xx xxxx xxxxxxx', '   xxxxx xx xxx xxxxx xx xxxxx xxxxxxx    ']
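The individual steps above can also be chained in a single comprehension — a sketch reusing the same `text_data`:

```python
# Chain strip -> replace -> upper in one pass
text_data = ['   Interrobang. By Aishwarya Henriette   ',
             'Parking And goding. by karl fautier',
             '   Today is the night. by jarek prakash    ']
cleaned = [s.strip().replace('.', '').upper() for s in text_data]
print(cleaned)
# ['INTERROBANG BY AISHWARYA HENRIETTE', 'PARKING AND GODING BY KARL FAUTIER', 'TODAY IS THE NIGHT BY JAREK PRAKASH']
```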

🍂2. Parsing and Cleaning HTML

# Use Beautiful Soup to parse the HTML
from bs4 import BeautifulSoup
# Create some (deliberately messy) HTML
html = """
        <div class='full_name'><span style='font-weight:bold'>
        Masege Azra"
    
    """
# Create a soup object
soup = BeautifulSoup(html, 'lxml')
soup.find('div')
<div class="full_name"><span style="font-weight:bold">
        Masege Azra"
    
    </span></div>
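Beautiful Soup (and the `lxml` parser it uses here) is a third-party dependency; when it is unavailable, the standard library's `html.parser` can pull the text out of a fragment like this — a minimal sketch, using a self-contained copy of the markup:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect every text node encountered while parsing."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

html = "<div class='full_name'><span style='font-weight:bold'>Masege Azra</span></div>"
parser = TextExtractor()
parser.feed(html)
text = ''.join(parser.parts).strip()
print(text)  # Masege Azra
```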

🍃3. Removing Punctuation

import unicodedata
import sys
text_data = ['Hi!!!! I. love. This. Song....',
             '10000% Agree!!!! #LoveIT',
             'Right??!!']
# Build a dictionary mapping every Unicode punctuation code point to None
punctuation = dict.fromkeys(
    i for i in range(sys.maxunicode)
    if unicodedata.category(chr(i)).startswith('P')
)
[string.translate(punctuation) for string in text_data]
['Hi I love This Song', '10000 Agree LoveIT', 'Right']
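The dictionary above covers every Unicode punctuation category. For ASCII-only text, a lighter alternative (an assumption on my part, not part of the original) is `str.maketrans` with `string.punctuation`:

```python
import string

# Translation table that deletes every ASCII punctuation character
table = str.maketrans('', '', string.punctuation)
text_data = ['Hi!!!! I. love. This. Song....',
             '10000% Agree!!!! #LoveIT',
             'Right??!!']
no_punct = [s.translate(table) for s in text_data]
print(no_punct)
# ['Hi I love This Song', '10000 Agree LoveIT', 'Right']
```

Note this only strips ASCII punctuation; the Unicode version above also handles characters like 。 and 、.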

🌍4. Tokenizing Text

Here we introduce the jieba library. jieba is designed for Chinese word segmentation; on English text it essentially splits on whitespace, keeping the spaces as tokens, as the output below shows.

import jieba
# Create text
string = 'The science of study is the technology of tomorrow'
seg = jieba.lcut(string)
print(seg)
['The', ' ', 'science', ' ', 'of', ' ', 'study', ' ', 'is', ' ', 'the', ' ', 'technology', ' ', 'of', ' ', 'tomorrow']
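For English text specifically, a plain regular expression gives word tokens without the whitespace entries jieba returns above — a minimal stdlib sketch:

```python
import re

# \w+ matches runs of word characters, so whitespace never becomes a token
text = 'The science of study is the technology of tomorrow'
tokens = re.findall(r'\w+', text)
print(tokens)
# ['The', 'science', 'of', 'study', 'is', 'the', 'technology', 'of', 'tomorrow']
```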

Of course, this article only covers some of the most basic text-processing methods used in data cleaning; later posts will cover mainstream NLP methods and code.

Reposted from: https://juejin.cn/post/7278983339544150028