Here is a simple two-step process to convert any site to pdf. Good for traveling or for when you have no internet. In my case, the original inspiration came from the Pyglet API docs. Pyglet offers an introductory pdf but limits the full api to html, and their server can be really spotty, disappearing for days at a time. Trying to use their server is less fun than learning a new CLI trick.
March 2009: Now with a sample site conversion.
Step one: wget the site
wget -nd -mk http://example.com
-m mirrors the site recursively
-nd flattens any directory structure
-k converts all internet links to local filesystem links
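If the docs live under a subdirectory, a couple of extra flags keep the mirror polite and contained. This is just a sketch of step one as a dry run that prints the command for review; the -np and --wait additions are mine, not part of the original recipe, though both are standard wget options:

```shell
#!/bin/sh
# Dry-run wrapper for step one.
#   -np      never ascend to the parent directory, so only the docs are fetched
#   --wait=1 pause a second between requests, kinder to a spotty server
mirror_cmd() {
    # echo the command instead of running it, so you can review it first
    echo wget -nd -mk -np --wait=1 "$1"
}

mirror_cmd http://example.com/docs/
# prints: wget -nd -mk -np --wait=1 http://example.com/docs/
```

Drop the echo once the printed command looks right.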
Step two: convert to pdf
htmldoc --webpage -f example.pdf example_path/toc.html example_path/*.html
--webpage glues many html documents together, inserting page breaks between each
-f names the output file
The default glob expansion puts the pages in alphabetical order. Explicitly mentioning the table of contents places it on the first page. The glob will include a second copy of the TOC, but it won't get in the way.
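A quick demonstration of that ordering, using made-up filenames: the shell expands *.html alphabetically, so listing toc.html first simply puts it at the front, and it appears a second time inside the glob.

```shell
#!/bin/sh
# Create a throwaway directory with a few fake pages.
dir=$(mktemp -d)
cd "$dir"
touch apple.html toc.html zebra.html

# The same argument pattern the htmldoc command uses:
echo toc.html *.html
# prints: toc.html apple.html toc.html zebra.html
```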
You'll have little control over the exact formatting and ordering of the sections, but htmldoc usually makes the formatting look pretty good. The page order doesn't matter much because all the html links are preserved in the pdf. More importantly, the result is electronically searchable.
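The two steps can be bundled into a small script. A sketch only: the function name is mine, and it assumes wget and htmldoc are both on your PATH:

```shell
#!/bin/sh
# Mirror a site into a temp directory and render it to a single pdf,
# so the pdf is the only thing left in your working directory.
site_to_pdf() {
    url="$1"
    out="$2"
    dir=$(mktemp -d)
    ( cd "$dir" && wget -nd -mk "$url" )
    # toc.html named first so the table of contents lands on page one
    htmldoc --webpage -f "$out" "$dir"/toc.html "$dir"/*.html
}

# usage: site_to_pdf http://example.com/ example.pdf
```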
And finally here is a complete, fully linked, offline, searchable PDF version of the Pyglet API.