just in case someone's interested, wget can often easily save a whole webpage, including the files it depends on
wget -p <url> gives you the page and whatever it needs to render, but doesn't try to recursively download the entire website (which it would certainly fail at and just make a mess). It seems to work flawlessly for chosts, though you might need to host the result on a web server of some kind to view it; python -m http.server does the job.
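For reference, the whole flow looks roughly like this (the URL, directory, and port are just placeholders, not a real post):

    # grab the post and the files it needs to render
    wget -p https://cohost.org/example-user/post/12345-example-post
    # wget typically mirrors the URL path, so the saved page ends up under a cohost.org/ directory
    cd cohost.org
    # serve it locally, then open http://localhost:8000/ in a browser
    python -m http.server 8000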
Adding the -k option will have it try to rewrite links to point at the local copies, but that's probably not a good idea when archiving chosts.
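For completeness, that would just be the same command with the flag added:

    wget -p -k <url>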
EDIT: Actually, if you use the basic Python web server, you need to rename the post's file to have a .html at the end, otherwise your browser might just try to download it instead of displaying it (the server guesses the content type from the file extension).
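So something along these lines, with whatever filename wget actually saved (this one is made up):

    mv example-user/post/12345-example-post example-user/post/12345-example-post.html

Then you'd open it at http://localhost:8000/example-user/post/12345-example-post.html instead.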