Download Html Page And Its Contents
Solution 1:
You can use the urllib module to download individual URLs, but this will just return the raw data. It will not parse the HTML and automatically download things like CSS files and images.
If you want to download the "whole" page, you will need to parse the HTML and find the other things you need to download. You could use something like Beautiful Soup to parse the HTML you retrieve.
This question has some sample code doing exactly that.
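As a rough sketch of that approach (hypothetical: urllib fetches the page, Beautiful Soup finds the referenced assets, and the URL is just a placeholder):

import urllib.request
from urllib.parse import urljoin
from bs4 import BeautifulSoup

url = "http://www.server.com/dir/page.html"
html = urllib.request.urlopen(url).read()
soup = BeautifulSoup(html, "html.parser")

# collect the URLs of the resources the page references
resources = []
for tag, attr in (("img", "src"), ("link", "href"), ("script", "src")):
    for node in soup.find_all(tag):
        if node.get(attr):
            resources.append(urljoin(url, node[attr]))

# each of these still has to be downloaded separately
for resource_url in resources:
    print(resource_url)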
Solution 2:
What you're looking for is a mirroring tool. If you want one in Python, PyPI lists spider.py, but I have no experience with it. Others might be better, but I don't know; I use wget, which supports getting the CSS and the images. This probably does what you want (quoting from the manual):
Retrieve only one HTML page, but make sure that all the elements needed for the page to be displayed, such as inline images and external style sheets, are also downloaded. Also make sure the downloaded page references the downloaded links.
wget -p --convert-links http://www.server.com/dir/page.html
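If you would rather drive wget from Python than from the shell, a minimal wrapper (assuming wget is installed and on your PATH) could look like this:

import subprocess

def mirror_page(url):
    # -p fetches the page requisites (images, css); --convert-links rewrites the references
    subprocess.run(["wget", "-p", "--convert-links", url], check=True)

mirror_page("http://www.server.com/dir/page.html")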
Solution 3:
You can use urllib:
import urllib.request
opener = urllib.request.FancyURLopener({})
url = "http://stackoverflow.com/"
f = opener.open(url)
content = f.read()
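Note that FancyURLopener has been deprecated since Python 3.3; a minimal sketch of the same fetch with the current urllib.request API:

import urllib.request

url = "http://stackoverflow.com/"
with urllib.request.urlopen(url) as f:
    content = f.read()  # raw bytes of the page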
Solution 4:
Function savePage below can:
- Save the .html in the current folder
- Download the javascripts, css and images, based on the tags script, link and img, into a folder with the suffix _files
- Print any exceptions on sys.stderr
- Return a BeautifulSoup object

Uses Python 3+, Requests, BeautifulSoup and other standard libraries.
The function savePage receives a url and the filename where to save it. You can expand/adapt it to suit your needs:
import os, sys
import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup
import re

def savePage(url, pagepath='page'):
    def soupfindnSave(pagefolder, tag2find='img', inner='src'):
        """saves on specified `pagefolder` all tag2find objects"""
        if not os.path.exists(pagefolder):  # create only once
            os.mkdir(pagefolder)
        for res in soup.findAll(tag2find):  # images, css, etc.
            try:
                if not res.has_attr(inner):  # check if the inner attribute (file reference) exists
                    continue  # may or may not exist
                filename = re.sub(r'\W+', '', os.path.basename(res[inner]))  # clean special chars
                fileurl = urljoin(url, res.get(inner))
                filepath = os.path.join(pagefolder, filename)
                # rename the html reference so the html and its folder of files can move anywhere
                res[inner] = os.path.join(os.path.basename(pagefolder), filename)
                if not os.path.isfile(filepath):  # was not downloaded yet
                    with open(filepath, 'wb') as file:
                        filebin = session.get(fileurl)
                        file.write(filebin.content)
            except Exception as exc:
                print(exc, file=sys.stderr)
        return soup

    session = requests.Session()
    # ... whatever other requests config you need here
    response = session.get(url)
    soup = BeautifulSoup(response.text, features="lxml")
    pagepath, _ = os.path.splitext(pagepath)  # avoid a duplicate .html extension
    pagefolder = pagepath + '_files'  # folder for the page contents
    soup = soupfindnSave(pagefolder, 'img', 'src')
    soup = soupfindnSave(pagefolder, 'link', 'href')
    soup = soupfindnSave(pagefolder, 'script', 'src')
    with open(pagepath + '.html', 'wb') as file:
        file.write(soup.prettify('utf-8'))
    return soup
Example saving google.com as google.html with its contents in a google_files folder (in the current folder):

soup = savePage('https://www.google.com', 'google')
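A quick (hypothetical) check that the page and its assets landed where expected:

import os

print(os.path.exists('google.html'))  # the saved page
print(os.listdir('google_files'))     # its downloaded images, css and scripts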