Slowly listening to old podcasts
tl;dr I made a service that lets you replay old podcast feeds as if they're just starting: PodReplay.com. Read on if you're curious about a smattering of design notes and half-remembered anecdotes.
The long version
I have a bad habit. Well, maybe more than one, but I'll try to suppress the one about talking about too many ideas at once and focus on the one about experimenting with too many new technologies at the same time. Baby steps.
I like listening to podcasts. Too many podcasts? Also not the bad habit we're talking about here. Many good podcasts are about current events or trends and don't tend to age well. However, there are others (e.g. history- or fiction-oriented shows) where an episode from several years ago is still good today.
So I wanted a simple way to intersperse old episodes of a podcast in my feed as if picking up from the start. A nice trickle over time until caught up. I kept searching in vain1 and it sat in my pile of "things I want to exist" for a while.
In parallel, I've been lightly dabbling with Rust and Svelte. I've also read glowing reviews of Fly.io. It was time to try them out. All at once2.
The plan
This is a "coffee table project"3, which means I get to set the rules. For my own sake, I don't want to worry too much about long-term maintenance. Which means I should minimize moving parts and the use of novel technologies4. But hey, I get to both set and interpret the rules.
The basic idea was to have a web service that takes a podcast feed URL, a start timestamp, and a schedule. The server would fetch the feed and reschedule the items, discarding any that fall after the current date. Take, for example, a podcast with the following episodes:
2022-01-24 One
2022-02-20 Two
2022-03-22 Three
A request comes in on 2022-11-24 for a replay that was started on 2022-10-20:
/replay?feed=https://example.com/feed&start=2022-10-20T21:05:36Z&rule=1m
They would receive a response with a new episode every month up to the current date:
2022-10-20 One
2022-11-20 Two
Once the current date advances far enough, the third episode will appear:
2022-10-20 One
2022-11-20 Two
2022-12-20 Three
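At its core, the rescheduling boils down to pairing the feed's episodes (oldest first) with a stream of dates generated from the schedule rule, and dropping anything that lands in the future. Here's a minimal sketch of that idea; the names are mine rather than the actual PodReplay code, and the real thing uses chronoutil's DateRule to generate calendar-aware schedules:

use chrono::{DateTime, Utc};

// Pair each original episode with the next date from the schedule,
// dropping anything that would land after `now`.
fn reschedule(
    episodes: &[DateTime<Utc>], // original timestamps, oldest first
    schedule: impl Iterator<Item = DateTime<Utc>>, // e.g. monthly from the start
    now: DateTime<Utc>,
) -> Vec<(DateTime<Utc>, DateTime<Utc>)> {
    episodes
        .iter()
        .copied()
        .zip(schedule)
        .take_while(|&(_, replayed)| replayed <= now)
        .collect()
}

Feed this a monthly schedule starting 2022-10-20 and the example above falls out: "Three" pairs with 2022-12-20 and only survives the take_while once the current date reaches it.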
The first problem
There are several existing feed parsing libraries for Rust, and the few that I tried seemed fine. The thing about feed parsing, especially for podcasts, is that publishers do all kinds of interesting things with their feeds. The bulk of the effort goes into normalizing these eccentricities down into a simpler structure for the host app to consume.
In my case, I only need to identify the timestamp of the episode (to change) and the boundaries of the item in the feed (to drop). Everything else should be left as untouched as possible.
So I had to drop down to lower-level XML parsing. I ended up using quick-xml, which allowed me to stream in the original feed and write directly back to a new XML document on the fly. A super rough excerpt:
match reader.read_event(&mut buf) {
    Ok(Event::Eof) => break,
    Ok(Event::Start(start)) => match start.name() {
        // An <item> (RSS) or <entry> (Atom) marks the start of an episode
        b"item" | b"entry" => {
            rewrite_or_skip_item(start, &mut reader, &mut writer, schedule)?;
        }
        // ... snip ...
Apart from the usual borrow checker shenanigans, I got an initial working web server (based on Axum) up and running fairly quickly!
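For flavor, here's a hedged sketch of what the Axum side of an endpoint like /replay could look like (axum 0.6-era API; the struct and handler are hypothetical stand-ins, and the real handler fetches and rewrites the feed rather than echoing its parameters):

use axum::{extract::Query, routing::get, Router};
use serde::Deserialize;

#[derive(Deserialize)]
struct ReplayQuery {
    feed: String,  // upstream feed URL
    start: String, // replay start timestamp
    rule: String,  // schedule rule, e.g. "1m"
}

async fn replay(Query(q): Query<ReplayQuery>) -> String {
    // Placeholder: the real handler streams back the rewritten feed.
    format!("would replay {} from {} every {}", q.feed, q.start, q.rule)
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/replay", get(replay));
    axum::Server::bind(&"0.0.0.0:3000".parse().unwrap())
        .serve(app.into_make_service())
        .await
        .unwrap();
}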
The second problem
Many podcasts only maintain something like the last one hundred episodes in their feeds (sometimes substantially less!). Whether it's for practical reasons (they publish new items every day) or for business model reasons (old episodes are only available to subscribers), I needed to account for this. If not, my naive approach would get confused and start replaying the episodes earlier. Say the feed from our earlier example stops containing the first episode: now "Two" and "Three" will be scheduled a month earlier than before, shifting the entire replay forward an episode.
2022-10-20 Two
2022-11-20 Three
This error would accumulate with every old episode that drops off the end of the feed.
Adding state
So I needed to keep state. I figured that if I tracked all the item GUIDs/timestamps observed for a given feed, I could reconstruct enough history to reliably reschedule the rest long after older items have expired. I won't include them in the replayed feed (since I'm not caching enough data to reconstruct them in full), but I can account for them when scheduling out the items that are available.
Once I determined that I needed to store persistent state, I decided it was time to pull another thing off my "want to try" pile: Litestream. Litestream can back up SQLite databases to external blob storage in the background. If my server goes down or I deploy a new version, the most recent backup is automatically restored and everything carries on.
This is sufficient for the nature of the data I'm storing (and the amount of long-term effort/money I'm interested in spending). This would let me keep a small SQLite database file locally on the single Fly.io instance I was planning to run and not mess with managing a separate database5.
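The state itself is tiny: one row per feed item ever observed. Roughly what recording an item might look like with sqlx (the table and column names here are my own invention, and binding a chrono timestamp assumes sqlx is built with its chrono feature):

use chrono::{DateTime, Utc};
use sqlx::SqlitePool;

// Remember that this item was seen in this feed (ignoring duplicates),
// so it can still be counted after it drops out of the upstream feed.
async fn remember_item(
    pool: &SqlitePool,
    feed_url: &str,
    guid: &str,
    published: DateTime<Utc>,
) -> Result<(), sqlx::Error> {
    sqlx::query(
        "INSERT OR IGNORE INTO feed_items (feed_url, guid, published)
         VALUES (?, ?, ?)",
    )
    .bind(feed_url)
    .bind(guid)
    .bind(published)
    .execute(pool)
    .await?;
    Ok(())
}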
The front end
I built out the front end with SvelteKit. I was running low on steam at this point, so the UI design is… basic.
I wanted to be able to show a live preview of the effect various options have on the replayed feed. To keep this snappy, I tried to do as much of the incremental updating on the client as practical, which required either porting or sharing the relatively involved rescheduling code. A perfect excuse to throw WebAssembly into the mix!
To preview a feed, we first fetch the feed from the server and parse it to extract a summary containing a guid/title/timestamp for each item. The core rescheduling code (now a standalone Rust lib compilable to WASM) takes this summary and the scheduling config and generates a preview.
WebAssembly asides
It's easy to accidentally generate very large bundle sizes depending on the dependencies you're using. This is similar to problems we have in the JavaScript ecosystem, but the extra opaqueness of WASM makes it harder to debug.
wasm-bindgen does a really good job of gluing the WASM/JavaScript worlds together. However, there is a pretty substantial cost to serializing and transferring complex data structures across the boundary between worlds. For the sake of performance, I ended up simplifying most of my transferable data to a simple Float64Array containing timestamps. Strictly necessary? No.
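Happily, wasm-bindgen maps Vec<f64> to a Float64Array out of the box, so the exported API can stay flat. A toy sketch of the shape of such an export (the function name and the naive 30-day "month" are mine; the real lib does proper calendar math with chrono/chronoutil):

use wasm_bindgen::prelude::*;

// Timestamps cross the JS/WASM boundary as flat arrays of epoch
// milliseconds, sidestepping the cost of serializing richer structures.
#[wasm_bindgen]
pub fn preview_schedule(episodes_ms: Vec<f64>, start_ms: f64, now_ms: f64) -> Vec<f64> {
    const MONTH_MS: f64 = 30.0 * 24.0 * 60.0 * 60.0 * 1000.0; // naive month
    episodes_ms
        .iter()
        .enumerate() // only the episode count matters in this toy rule
        .map(|(i, _)| start_ms + i as f64 * MONTH_MS)
        .take_while(|&t| t <= now_ms)
        .collect()
}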
Deployment
This part was anticlimactic, which is a testament to Fly.io. Once I got a Docker container sorted out locally, it took only minor fiddling to get things up and running. It reminded me of all the best feelings of using Heroku back in the day.
One really neat thing I'd like to call out is their slick static file support, specifically how it works when a deploy is in progress.
Conclusion
As with many side projects, the most important outcome is not the final artifact but what was learned along the way. I left out several details, either because I've forgotten them (I wrote most of this post several months after I wrapped up work) or because they could blow up into complete standalone posts (e.g. feed autodiscovery).
If you're still interested, here's the link again: PodReplay.com. Or, you can check out the actual code at github.com/tgecho/podreplay.
Potential future enhancements
- Add the ability to pause/resume/skip a replay without creating a new feed. This will require additional per-replay state.
- Add more robust autodiscovery, possibly based on something like the ListenNotes API.
Stuff I used
An incomplete list of notable tools/libraries used to build PodReplay, many for the first time:
- Rust (used to implement server and part of client)
- TypeScript (used where Rust didn't make sense)
- Axum (Rust web server framework)
- quick-xml (XML reading/writing)
- wasm-bindgen (gluing WASM/JavaScript together in the browser)
- chrono (Rust date/time library)
- chronoutil (provided a really nice DateRule iterator)
- SQLite (a lightweight embeddable database engine that seems to be going places)
- sqlx (Rust database toolkit)
- Svelte/SvelteKit (front end UI library)
- pnpm (fast and economical npm replacement)
- vite (JavaScript bundler)
Footnotes
1. I later discovered Recast and a few other possible solutions, some of which require manual management. Recast looks quite nice, though it wouldn't really cover the part of my use case that involves building new things.
2. I like the idea of a novelty budget, but I'm terrible at applying it to my own hobby projects. This is how I ended up trying to learn Kafka, spaCy and Docker Compose while exploring topical RSS feed clustering back in ~2015. Multiple pieces of complicated bleeding edge tech (at least to me) applied to an Unsolved Problem. What could go wrong?
3. I made this up. If I were into woodworking, I might build a coffee table not because it makes practical or economic sense, but because I derive satisfaction from making something in whatever way I see fit. It may even be functional. That's a bonus!
4. Oh wait, not the goal this time.
5. Since I built this out, Ben Johnson (Litestream's creator) announced a new project called LiteFS, which is oriented toward replication, as opposed to the disaster recovery focus of Litestream. Also, Fly.io has a partially managed Postgres offering. Both are interesting, but overkill for my needs.