Here is a simple three-step process to convert any site to a PDF. It's good for traveling or for when internet access is unreliable. In my case, the original inspiration came from the Pyglet API docs. Pyglet offers an introductory PDF but limits the full API reference to HTML. Their server used to be really spotty and would disappear for days at a time. Trying to use their server was less fun than learning a new CLI trick.
Step one: wget the site
wget -nd -mk http://example.com
-nd: flattens any directory structure
-m: mirrors the site
-k: converts all internet links to local filesystem links
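If the mirror comes back missing images or stylesheets, a slightly expanded variant is worth a try. This is just a sketch; the extra flags (-p, -np, -w) are standard wget options that weren't part of the original recipe:

# -p also grabs page requisites (images, CSS) that htmldoc needs locally
# -np keeps wget from wandering up into parent directories
# -w 1 waits one second between requests, kinder to a spotty server
wget -nd -mk -p -np -w 1 http://example.com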
Step two: make htmldoc happy
enca -L none -x ISO-8859-1 *.html
mogrify -background white -flatten *.png
It turns out htmldoc 1.8 does not support UTF-8. Version 1.9 does, but on older versions enca gets around it by transcoding the pages to ISO-8859-1. Furthermore, htmldoc does not play well with transparent PNGs. The mogrify command removes the alpha channel and replaces it with a white background.
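If you skipped -nd and kept the directory tree, the same two fixes can be applied recursively. A minimal sketch using standard find syntax:

# feed every HTML file to enca and every PNG to mogrify, converting in place
find . -name '*.html' -exec enca -L none -x ISO-8859-1 {} +
find . -name '*.png' -exec mogrify -background white -flatten {} +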
Step three: convert to PDF
htmldoc --webpage -f example.pdf example_path/toc.html example_path/*.html
--webpage: glues many HTML documents together, inserting a page break between each
-f: names the output file
Optionally, to format the document for a computer screen, use --size 12x9in.
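Putting it together, a screen-oriented build looks like this (the output file name is just an example; the flags are the same ones described above):

# identical to the letter-size command, with wider pages for a monitor
htmldoc --webpage --size 12x9in -f example-screen.pdf example_path/toc.html example_path/*.html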
The default glob expansion puts the pages in alphabetical order. Explicitly listing the table of contents first places it on the first page. The glob will include a second copy of the TOC, but it won't get in the way.
You'll have little control over the exact formatting and ordering of the sections, but htmldoc usually makes the output look pretty good. The page order does not matter too much because all the HTML links are preserved in the PDF. More importantly, the result is electronically searchable.
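If alphabetical order does get in the way, you can skip the glob and list the chapters yourself. The file names here are hypothetical placeholders:

# pages appear in exactly the order listed, TOC first
htmldoc --webpage -f example.pdf \
    example_path/toc.html \
    example_path/intro.html \
    example_path/api.html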
And finally, here is a complete, fully linked, offline, searchable PDF version of the Pyglet API, in 8.5x11 paper format and formatted for computer screens.
I found out about the UTF-8 and PNG quirks while building a copy of Pro Git.