
Downloading a file from the web with Python 3

If you want to get the contents of a web page into a variable, just read the response of urllib.request.urlopen:

import urllib.request
...
url = 'http://example.com/'
response = urllib.request.urlopen(url)
data = response.read()      # a `bytes` object
text = data.decode('utf-8') # a `str`; this step can't be used if data is binary
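
If the encoding is not known in advance, a small refinement is to use the charset declared in the Content-Type header when the server sends one. This is a minimal sketch; falling back to UTF-8 when no charset is declared is an assumption, not something the original answer specifies:

import urllib.request
...
with urllib.request.urlopen(url) as response:
    data = response.read()
    # `response.headers` exposes the charset from the Content-Type header, if any
    charset = response.headers.get_content_charset() or 'utf-8'  # UTF-8 fallback is an assumption
    text = data.decode(charset)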

The easiest way to download and save a file is to use the urllib.request.urlretrieve function:

import urllib.request
...
# Download the file from `url` and save it locally under `file_name`:
urllib.request.urlretrieve(url, file_name)

import urllib.request
...
# Download the file from `url`, save it in a temporary directory and get the
# path to it (e.g. '/tmp/tmpb48zma.txt') in the `file_name` variable:
file_name, headers = urllib.request.urlretrieve(url)
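
urlretrieve also accepts a reporthook callback, which can be used to display download progress. The sketch below is illustrative only; the show_progress name and the output format are my own choices, not part of the original answer:

import urllib.request
...
def show_progress(block_num, block_size, total_size):
    # Called by urlretrieve after each block; `total_size` is -1 if the
    # server did not send a Content-Length header.
    if total_size > 0:
        downloaded = min(block_num * block_size, total_size)
        print(f'\r{downloaded}/{total_size} bytes', end='')

urllib.request.urlretrieve(url, file_name, reporthook=show_progress)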

But keep in mind that urlretrieve is considered legacy and might become deprecated (it is not clear why, though).

So the most correct way to do this is to use the urllib.request.urlopen function to get a file-like object that represents an HTTP response, and then copy it to a real file using shutil.copyfileobj.

import urllib.request
import shutil
...
# Download the file from `url` and save it locally under `file_name`:
with urllib.request.urlopen(url) as response, open(file_name, 'wb') as out_file:
    shutil.copyfileobj(response, out_file)
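
If you need more control, a Request object can be passed to urlopen instead of a bare URL, and copyfileobj can copy in explicit chunks. This is a sketch under my own assumptions; the User-Agent value and the 1 MiB chunk size are arbitrary choices:

import urllib.request
import shutil
...
# Same as above, but sending a custom User-Agent header and copying in 1 MiB chunks
request = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})  # header value is arbitrary
with urllib.request.urlopen(request) as response, open(file_name, 'wb') as out_file:
    shutil.copyfileobj(response, out_file, length=1024 * 1024)  # chunk size is arbitrary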

If this seems too complicated, you may want to go simpler and store the whole download in a bytes object and then write it to a file. But this works well only for small files.

import urllib.request
...
# Download the file from `url` and save it locally under `file_name`:
with urllib.request.urlopen(url) as response, open(file_name, 'wb') as out_file:
    data = response.read() # a `bytes` object
    out_file.write(data)
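
For a small file this can be shortened further with pathlib; the following one-off is a sketch that, like the version above, assumes the whole body fits in memory:

import pathlib
import urllib.request
...
# Read the whole response into memory and write it to `file_name` in one call
with urllib.request.urlopen(url) as response:
    pathlib.Path(file_name).write_bytes(response.read())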

It is possible to extract .gz (and maybe other formats) compressed data on the fly, but such an operation probably requires the HTTP server to support random access to the file.

import urllib.request
import gzip
...
# Read the first 64 bytes of the file inside the .gz archive located at `url`
url = 'http://example.com/something.gz'
with urllib.request.urlopen(url) as response:
    with gzip.GzipFile(fileobj=response) as uncompressed:
        file_header = uncompressed.read(64) # a `bytes` object
        # Or do anything shown above using `uncompressed` instead of `response`.
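
To make use of that random access explicitly, the client can send an HTTP Range header so that only the beginning of the compressed file is transferred. This is a sketch, not part of the original answer; the byte range is an arbitrary choice, and a server that does not support byte ranges will simply return the whole file:

import urllib.request
...
# Fetch only the first 1024 bytes of the remote file via an HTTP Range request
request = urllib.request.Request(url, headers={'Range': 'bytes=0-1023'})  # range size is arbitrary
with urllib.request.urlopen(request) as response:
    partial_data = response.read()  # at most 1024 bytes if the server honours the range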