Python Data Storage: CSV

2021-07-20 23:08

CSV files store tabular data (numbers and text) as plain text: each record sits on its own line, separated by a line break, and the fields within a record are separated by another character, most commonly a comma or a tab.

  • For example:
#coding:utf-8

import csv
headers = ['ID','UserName','Password','Age','Country']
rows = [(1001,"guobao","1382_pass",21,"China"),
        (1002,"Mary","Mary_pass",20,"USA"),
        (1003,"Jack","Jack_pass",20,"USA"),
       ]
with open('guguobao.csv','w') as f:
    f_csv = csv.writer(f)
    f_csv.writerow(headers)   # write the header row
    f_csv.writerows(rows)     # write all data rows at once


Output:

ID,UserName,Password,Age,Country
1001,guobao,1382_pass,21,China
1002,Mary,Mary_pass,20,USA
1003,Jack,Jack_pass,20,USA
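
One detail worth noting: with Python 2's csv module on Windows, the output file is normally opened in binary mode ('wb'), otherwise an extra blank line can appear after every record. The sketch below is a minimal variant of the example above, written in the same Python 2 style; the file name guguobao.tsv and the tab delimiter are just illustrative assumptions:

#coding:utf-8
import csv

headers = ['ID','UserName','Password','Age','Country']
rows = [(1001,"guobao","1382_pass",21,"China")]

# 'wb' lets the csv module control line endings itself on Python 2/Windows;
# delimiter='\t' writes tab-separated fields instead of commas.
with open('guguobao.tsv','wb') as f:   # hypothetical file name
    f_csv = csv.writer(f, delimiter='\t')
    f_csv.writerow(headers)
    f_csv.writerows(rows)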
  • The rows list above holds tuples, but it can also be a list of dictionaries, for example:
import csv
headers = ['ID','UserName','Password','Age','Country']
rows = [{'ID':1001,'UserName':"qiye",'Password':"qiye_pass",'Age':24,'Country':"China"},
        {'ID':1002,'UserName':"Mary",'Password':"Mary_pass",'Age':20,'Country':"USA"},
        {'ID':1003,'UserName':"Jack",'Password':"Jack_pass",'Age':20,'Country':"USA"},
       ]
with open('qiye.csv','w') as f:
    f_csv = csv.DictWriter(f,headers)
    f_csv.writeheader()
    f_csv.writerows(rows)
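
DictWriter also takes a restval argument (the value written for keys missing from a row) and extrasaction ('raise' or 'ignore') for keys that are not listed in the header. A minimal sketch reusing the same headers; the file name qiye_extra.csv and the sample row are made up for illustration:

import csv

headers = ['ID','UserName','Password','Age','Country']
with open('qiye_extra.csv','wb') as f:   # hypothetical file name
    # restval fills in columns missing from a row dict;
    # extrasaction='ignore' silently drops keys not listed in headers.
    f_csv = csv.DictWriter(f, headers, restval='N/A', extrasaction='ignore')
    f_csv.writeheader()
    f_csv.writerow({'ID':1004,'UserName':"Tom",'Country':"UK",'Note':"extra key"})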

Next comes reading CSV. To read a CSV file, create a reader object, for example:

import csv
with open('guguobao.csv','r') as f:
    f_csv = csv.reader(f)
    headers = next(f_csv)   # the first row is the header
    print headers
    for row in f_csv:
        print row
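
csv.reader accepts the same formatting options as csv.writer, so a tab-separated file can be read back by passing a matching delimiter. A minimal sketch, assuming a tab-separated file like the guguobao.tsv written earlier:

import csv

with open('guguobao.tsv','rb') as f:
    # the delimiter must match the one used when the file was written
    f_csv = csv.reader(f, delimiter='\t')
    for row in f_csv:
        print row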
  • ID can be accessed with row[0] and Age with row[3], but index-based access quickly gets confusing, so consider using a namedtuple instead:
from collections import namedtuple
import csv
with open('qiye.csv') as f:
    f_csv = csv.reader(f)
    headings = next(f_csv)
    Row = namedtuple('Row', headings)
    for r in f_csv:
        row = Row(*r)
        print row.UserName,row.Password
        print row

Output:
C:\Python27\python.exe F:/爬虫/5.1.2.py
qiye qiye_pass
Row(ID='1001', UserName='qiye', Password='qiye_pass', Age='24', Country='China')
Mary Mary_pass
Row(ID='1002', UserName='Mary', Password='Mary_pass', Age='20', Country='USA')
Jack Jack_pass
Row(ID='1003', UserName='Jack', Password='Jack_pass', Age='20', Country='USA')

Process finished with exit code 0
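
As the output shows, every field comes back as a string (Age='24', for example), so numeric columns need an explicit conversion. A minimal sketch based on the same namedtuple loop:

from collections import namedtuple
import csv

with open('qiye.csv') as f:
    f_csv = csv.reader(f)
    Row = namedtuple('Row', next(f_csv))
    for r in f_csv:
        row = Row(*r)
        # csv always yields strings, so convert numeric fields by hand
        print row.UserName, int(row.Age)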
  • Column names such as row.UserName and row.Password can now be used instead of numeric indexes. Besides namedtuples, another option is to read each row into a dictionary with DictReader, as follows:
import csv
with open('qiye.csv') as f:
    f_csv = csv.DictReader(f)
    for row in f_csv:
        print row.get('UserName'),row.get('Password')

Output:

qiye qiye_pass
Mary Mary_pass
Jack Jack_pass
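
Because DictReader yields one dictionary per row, filtering on a column value is straightforward. A minimal sketch that keeps only the rows whose Country is "USA":

import csv

with open('qiye.csv') as f:
    f_csv = csv.DictReader(f)
    # keep only the rows whose Country column equals 'USA'
    usa_rows = [row for row in f_csv if row.get('Country') == 'USA']
for row in usa_rows:
    print row['UserName'], row['Country']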

Finally, use CSV to store the section headings, chapter titles, and links parsed from the homepage of http://seputu.com:

from lxml import etree
import requests
import re
import csv

user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
headers = {'User-Agent':user_agent}
r = requests.get('http://seputu.com/', headers=headers)

# parse the page with lxml
html = etree.HTML(r.text)
div_mulus = html.xpath('.//*[@class="mulu"]')  # find all div tags with class="mulu"
pattern = re.compile(r'\s*\[(.*)\]\s+(.*)')
rows = []
for div_mulu in div_mulus:
    # find the section heading (h2) inside each mulu block
    div_h2 = div_mulu.xpath('./div[@class="mulu-title"]/center/h2/text()')
    if len(div_h2) > 0:
        h2_title = div_h2[0].encode('utf-8')
        a_s = div_mulu.xpath('./div[@class="box"]/ul/li/a')
        for a in a_s:
            # extract the href attribute
            href = a.xpath('./@href')[0].encode('utf-8')
            # extract the title attribute
            box_title = a.xpath('./@title')[0]
            match = pattern.search(box_title)
            if match is not None:
                date = match.group(1).encode('utf-8')
                real_title = match.group(2).encode('utf-8')
                # print real_title
                content = (h2_title, real_title, href, date)
                print content
                rows.append(content)

headers = ['title','real_title','href','date']
with open('qiye.csv','w') as f:
    f_csv = csv.writer(f)
    f_csv.writerow(headers)
    f_csv.writerows(rows)
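
The regular expression above splits each link title into the date inside the square brackets and the remaining chapter title. A quick check of that pattern on a made-up title string (the sample text is hypothetical; only its "[date] title" shape matches the page):

import re

pattern = re.compile(r'\s*\[(.*)\]\s+(.*)')
# hypothetical title in the same "[date] title" shape as the page's title attribute
box_title = '[2015-1-1 0:00:00] Example chapter title'
match = pattern.search(box_title)
if match is not None:
    print match.group(1)   # '2015-1-1 0:00:00' -- the bracketed date
    print match.group(2)   # 'Example chapter title'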

Original post: https://www.cnblogs.com/guguobao/p/9515298.html

