aiohttp Usage in Detail

2021-03-27 10:25


aiohttp has both a server side and a client side; this article covers only the client.
Because everything runs in an async context, request code must be placed inside an async function:

async def fn():
    pass
Installation
pip install aiohttp

 

Basic syntax

async with aiohttp.request('GET', 'https://github.com') as r:
    await r.text()

 

Specifying the encoding
await resp.text(encoding='windows-1251')
For reading binary data such as images:
await resp.read()

aiohttp.request examples
Timeout handling (timeout)
async with session.get('https://github.com', timeout=60) as r:
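In aiohttp 3.x, timeouts are expressed with an aiohttp.ClientTimeout object rather than a bare number; a minimal sketch of the same 60-second limit, applied session-wide:

import aiohttp, asyncio

async def main():
    timeout = aiohttp.ClientTimeout(total=60)  # total time budget for the whole request, in seconds
    async with aiohttp.ClientSession(timeout=timeout) as session:
        async with session.get('https://github.com') as r:
            await r.text()

loop = asyncio.get_event_loop()
loop.run_until_complete(main())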

 

# Usage example: a single request

import aiohttp, asyncio

async def main():  # aiohttp must be used inside an async function
    async with aiohttp.request('GET', 'https://api.github.com/events') as resp:
        json = await resp.json()
        print(json)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())

 


------------------------------------------------------------------------------
# Usage example: several requests

import aiohttp, asyncio

async def main():  # aiohttp must be used inside an async function
    tasks = [fetch('https://api.github.com/events?a={}'.format(i)) for i in range(10)]  # ten requests
    await asyncio.wait(tasks)

async def fetch(url):
    async with aiohttp.request('GET', url) as resp:
        json = await resp.json()
        print(json)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
------------------------------------------------------------------------------

 


# Usage example: several requests, with a cap on how many run at once

import aiohttp, asyncio

async def main(pool):  # aiohttp must be used inside an async function
    sem = asyncio.Semaphore(pool)  # cap the number of concurrent requests
    tasks = [control_sem(sem, 'https://api.github.com/events?a={}'.format(i)) for i in range(10)]  # ten requests
    await asyncio.wait(tasks)

async def control_sem(sem, url):  # gate each request on the semaphore
    async with sem:
        await fetch(url)

async def fetch(url):
    async with aiohttp.request('GET', url) as resp:
        json = await resp.json()
        print(json)

loop = asyncio.get_event_loop()
loop.run_until_complete(main(pool=2))

 

The examples above use coroutines correctly, but aiohttp itself will emit an "Unclosed client session" warning. The official documentation discourages making requests through aiohttp.request directly; simply replace aiohttp.request with aiohttp.ClientSession(**kw).request.
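As a sketch, the single-request example above rewritten through a session (same URL as before):

import aiohttp, asyncio

async def main():
    async with aiohttp.ClientSession() as session:  # the session is closed cleanly on exit
        async with session.request('GET', 'https://api.github.com/events') as resp:
            print(await resp.json())

loop = asyncio.get_event_loop()
loop.run_until_complete(main())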

session
import aiohttp
async with aiohttp.ClientSession() as session:
    async with session.get('https://api.github.com/events') as resp:
        print(resp.status)
        print(await resp.text())

 

We create a ClientSession object named session, then call its get method to obtain a ClientResponse object named resp. The get method takes one required parameter, url: the HTTP URL whose content we want to fetch. With that, an asynchronous GET request has been completed via coroutines.
Other HTTP methods (with data)

session.post('http://httpbin.org/post', data=b'data')
session.put('http://httpbin.org/put', data=b'data')
session.delete('http://httpbin.org/delete')
session.head('http://httpbin.org/get')
session.options('http://httpbin.org/get')
session.patch('http://httpbin.org/patch', data=b'data')

 

Do not create a new session for every request; normally you create a single session and use it for all requests.

Each session object holds an internal connection pool and keeps connections alive for reuse (enabled by default), which speeds things up overall.

# Usage example

import aiohttp, asyncio

async def main(pool):  # entry point
    sem = asyncio.Semaphore(pool)
    async with aiohttp.ClientSession() as session:  # one shared session for all requests
        tasks = [control_sem(sem, 'https://api.github.com/events?a={}'.format(i), session) for i in range(10)]  # ten requests
        await asyncio.wait(tasks)

async def control_sem(sem, url, session):  # gate each request on the semaphore
    async with sem:
        await fetch(url, session)

async def fetch(url, session):  # perform one async request
    async with session.get(url) as resp:
        json = await resp.json()
        print(json)

loop = asyncio.get_event_loop()
loop.run_until_complete(main(pool=2))

 

Passing URL parameters

params = {'key1': 'value1', 'key2': 'value2'}
# params = [('key', 'value1'), ('key', 'value2')]
async with session.get('http://httpbin.org/get', params=params) as resp:
    assert str(resp.url) == 'http://httpbin.org/get?key2=value2&key1=value1'

 

Request headers

import json
url = 'https://api.github.com/some/endpoint'
payload = {'some': 'data'}
headers = {'content-type': 'application/json'}

await session.post(url,
                   data=json.dumps(payload),
                   headers=headers)

Responses

assert resp.status == 200
resp.headers

await resp.text()
await resp.text(encoding='gb2312')
await resp.read()
await resp.json()
await resp.content.read(10)  # read the first 10 bytes

 

Saving to a file

with open(filename, 'wb') as fd:
    while True:
        chunk = await resp.content.read(chunk_size)
        if not chunk:
            break
        fd.write(chunk)

Cookies
url = 'http://httpbin.org/cookies'
cookies = {'cookies_are': 'working'}
async with aiohttp.ClientSession(cookies=cookies) as session:
    async with session.get(url) as resp:
        assert await resp.json() == {
            "cookies": {"cookies_are": "working"}}

 

POST
Form data

payload = {'key1': 'value1', 'key2': 'value2'}
async with session.post('http://httpbin.org/post',
                        data=payload) as resp:
    print(await resp.text())
JSON

import json
url = 'https://api.github.com/some/endpoint'
payload = {'some': 'data'}

async with session.post(url, data=json.dumps(payload)) as resp:
    ...

 

Small files

url = 'http://httpbin.org/post'
files = {'file': open('report.xls', 'rb')}
await session.post(url, data=files)

Setting the filename and content type explicitly

url = 'http://httpbin.org/post'
data = aiohttp.FormData()
data.add_field('file',
               open('report.xls', 'rb'),
               filename='report.xls',
               content_type='application/vnd.ms-excel')

await session.post(url, data=data)

 

Large files
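For large uploads, passing an open file object as data lets aiohttp stream the body chunk by chunk instead of reading the whole file into memory; a minimal sketch (the file name is a placeholder):

url = 'http://httpbin.org/post'
# aiohttp reads and sends the file in chunks rather than buffering it whole
await session.post(url, data=open('massive-body', 'rb'))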

Keep-alive, connection pooling, and shared cookies
Cookie safety
By default, ClientSession uses aiohttp.CookieJar in strict mode. Following RFC 2109, it explicitly refuses cookies set by URLs that use a bare IP address; it only accepts cookies from DNS hostnames. This can be relaxed by constructing the aiohttp.CookieJar with unsafe=True:

jar = aiohttp.CookieJar(unsafe=True)
session = aiohttp.ClientSession(cookie_jar=jar)

 

Limiting concurrency
Total simultaneous connections
conn = aiohttp.TCPConnector(limit=30)  # at most 30 connections at once; the default is 100, and limit=0 means unlimited
Simultaneous connections to the same host
conn = aiohttp.TCPConnector(limit_per_host=30)  # default is 0 (unlimited)
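The connector only takes effect once it is handed to a session; a minimal sketch:

conn = aiohttp.TCPConnector(limit=30)
session = aiohttp.ClientSession(connector=conn)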

 

Custom DNS resolution
Specify your own nameservers:

from aiohttp.resolver import AsyncResolver

resolver = AsyncResolver(nameservers=["8.8.8.8", "8.8.4.4"])
conn = aiohttp.TCPConnector(resolver=resolver)
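Note that AsyncResolver depends on the third-party aiodns package (pip install aiodns), and as above the connector is passed to the session via aiohttp.ClientSession(connector=conn).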

 

Proxies
Plain proxy

async with aiohttp.ClientSession() as session:
    async with session.get("http://python.org",
                           proxy="http://some.proxy.com") as resp:
        print(resp.status)

Proxy with authentication

async with aiohttp.ClientSession() as session:
    proxy_auth = aiohttp.BasicAuth('user', 'pass')
    async with session.get("http://python.org",
                           proxy="http://some.proxy.com",
                           proxy_auth=proxy_auth) as resp:
        print(resp.status)

 

Or:

session.get("http://python.org",
            proxy="http://user:pass@some.proxy.com")

 

Original (Chinese): https://www.cnblogs.com/wukai66/p/12632680.html

