I have no idea what I’m doing and you can’t stop me.

Author, Trans Woman, Hypno Domme, Hopeless Romantic, Sadist, newly out system.

Pronouns are She/It, perpetually happy HRT gave me titties and sad it didn’t give me tentacles.

I had shame once.

Ξ

Θ Δ

Dating: @lunasorcery

18+ only


lunasorcery
@lunasorcery

I took the HTTP Range header and wrapped an impl of Rust’s Read+Seek around it

which means I can “open” a URL as if it’s a file, and do random-access partial reads

so now I can do things like “hey open the zipfile at this URL and list the contents, don’t bother downloading the whole thing, just pull out the file tables” and it Just Works



in reply to @lunasorcery's post:

so a handful of years back, i made something similar by chaining together fuse-zip and something that fuse-mounted a cloud folder. it was for a thing where a developer uploads their game server zip to the cloud, and then that gets deployed into a bunch of containers in a cluster.

problem was, downloading the zip onto each host was taking ages when customers included giant asset files, so startup times were getting into the minutes. my hack was a quick fix to get the containers booting quicker while the proper download-and-extract task ran - once the files were available on the local SSD, we'd transparently swap the FUSE part out of the overlayfs and suddenly all your file i/o would get faster.

building something more intentional and less redundant around what you're describing - a souped-up BufferedReader of sorts that downloads the ranges you're trying to access on demand, and then, when nothing is being actively demanded, fills in the gaps until you have the full archive locally - sounds like a really cool direction to take this

this isn't cursed/crimes at all, if you ask me! Always cool to see something actually use Range requests and go beyond "download everything to a temp file and continue from there".

Perhaps this isn't what Range requests were meant for originally, but this seems like such a logical combination of features that I'm always surprised how rarely it's used. The world is constantly downloading large .zip files, yet somehow no browser supports previewing the contents of a .zip URL before downloading the whole thing (or downloading only some files from the archive, like the other comment says)? But I've never experimented too much with this, so perhaps there are practical issues that make it unreliable or not worth it in practice...