# honker

SQLite extension + bindings for Postgres NOTIFY/LISTEN semantics with durable queues, streams, pub/sub, and a scheduler ([russellromney/honker](https://github.com/russellromney/honker))

honker is a SQLite extension plus language bindings that add Postgres-style NOTIFY/LISTEN semantics to SQLite, with built-in durable pub/sub, a task queue, and event streams, without client polling or a daemon/broker. Any language that can `SELECT load_extension('honker')` gets the same features.

honker ships as a Rust crate (`honker`, plus `honker-core`/`honker-extension`), a SQLite loadable extension, and language packages: Python (`honker`), Node (`@russellthehippo/honker-node`), Bun (`@russellthehippo/honker-bun`), Ruby (`honker`), Go, Elixir, C++. The on-disk layout is defined once in Rust; every binding is a thin wrapper around the loadable extension.

honker works by replacing a polling interval with event notifications on SQLite's WAL file, achieving push semantics and enabling cross-process notifications with single-digit-millisecond delivery.

**Experimental. API may change.**

SQLite is increasingly the database for shipped projects, and those projects inevitably need pub/sub and a task queue. The usual answer is "add Redis + Celery." That works, but it introduces a second datastore with its own backup story, a dual-write problem between your business table and the queue, and the operational overhead of running a broker. honker takes the approach that if SQLite is the primary datastore, the queue should live in the same file. That means `INSERT INTO orders` and `queue.enqueue(...)` commit in the same transaction, and rollback drops both. The queue is just rows in a table with a partial index.

Prior art: pg_notify (fast triggers, no retry/visibility), Huey (SQLite-backed Python), pg-boss and Oban (the Postgres-side gold standards we're chasing on SQLite). If you already run Postgres, use those; they are excellent.

## At a glance

```python
import honker

db = honker.open("app.db")
emails = db.queue("emails")

# Enqueue
emails.enqueue({"to": "alice@example.com"})

# Consume (worker process)
async for job in emails.claim("worker-1"):
    send(job.payload)
    job.ack()
```

Any enqueue can be atomic with a business write. Rollback drops both.

```python
with db.transaction() as tx:
    tx.execute("INSERT INTO orders (user_id) VALUES (?)", [42])
    emails.enqueue({"to": "alice@example.com"}, tx=tx)
```

## Features

Today:

- Notify/listen across processes on one `.db` file
- Work queues with retries, priority, delayed jobs, and a dead-letter table
- Any send can be atomic with your business write (commit together or roll back together)
- Single-digit-millisecond cross-process reaction time, no polling
- Handler timeouts, declarative retries with exponential backoff
- Delayed jobs, task expiration, named locks, rate limiting
- Crontab-style periodic tasks with a leader-elected scheduler
- Opt-in task result storage (enqueue returns an id, the worker persists the return value, the caller awaits `queue.wait_result(id)`)
- Durable streams with per-consumer offsets and a configurable flush interval
- SQLite loadable extension so any SQLite client can read the same tables
- Bindings: Python, Node.js, Rust, Go, Ruby, Bun, Elixir

Deliberately not built: task pipelines/chains/groups/chords, multi-writer replication, workflow orchestration with DAGs.

## Quick start

### Python: queue (durable at-least-once work)

```
pip install honker
```

```python
import honker

db = honker.open("app.db")
emails = db.queue("emails")

with db.transaction() as tx:
    tx.execute("INSERT INTO orders (user_id) VALUES (?)", [42])
    emails.enqueue({"to": "alice@example.com"}, tx=tx)  # atomic with the order

# Then in a worker, do:
async for job in emails.claim("worker-1"):  # wakes on any WAL commit
    try:
        send(job.payload)
        job.ack()
    except Exception as e:
        job.retry(delay_s=60, error=str(e))
```

`claim()` is an async iterator. Each iteration is one `claim_batch(worker_id, 1)`. It wakes on any WAL commit and falls back to a 5 s paranoia poll only if the WAL watcher can't fire. For batched work, call `claim_batch(worker_id, n)` explicitly and ack with `queue.ack_batch(ids, worker_id)`.
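The claim-with-visibility-timeout pattern described above ("the queue is just rows in a table with a partial index") can be sketched in plain stdlib `sqlite3`. This is an illustrative sketch, not honker's actual schema or code; the table and column names here are invented:

```python
import json
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE jobs (
    id INTEGER PRIMARY KEY,
    queue TEXT NOT NULL,
    payload TEXT NOT NULL,
    claimed_by TEXT,
    invisible_until REAL NOT NULL DEFAULT 0)""")
# Partial index covers only claimable rows, as the README describes
db.execute("""CREATE INDEX jobs_ready ON jobs(queue, id)
              WHERE claimed_by IS NULL""")

def enqueue(queue, payload):
    with db:
        return db.execute("INSERT INTO jobs (queue, payload) VALUES (?, ?)",
                          (queue, json.dumps(payload))).lastrowid

def claim_batch(queue, worker_id, n, visibility_s=300):
    """Claim up to n ready jobs: mark them invisible for visibility_s seconds."""
    now = time.time()
    with db:  # select + mark in one transaction
        rows = db.execute(
            """SELECT id, payload FROM jobs
               WHERE queue = ? AND claimed_by IS NULL AND invisible_until <= ?
               ORDER BY id LIMIT ?""", (queue, now, n)).fetchall()
        db.executemany(
            "UPDATE jobs SET claimed_by = ?, invisible_until = ? WHERE id = ?",
            [(worker_id, now + visibility_s, jid) for jid, _ in rows])
    return rows

def ack_batch(ids, worker_id):
    """Ack deletes the rows, mirroring honker_ack_batch's return-count semantics."""
    with db:
        return db.executemany(
            "DELETE FROM jobs WHERE id = ? AND claimed_by = ?",
            [(jid, worker_id) for jid in ids]).rowcount
```

A real implementation would also need the sweep step the extension exposes (moving jobs whose visibility expired into a dead-letter table), which is omitted here.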
Defaults: visibility timeout 300 s.

### Python: tasks (Huey-style decorators)

If you want a function call to turn into an enqueued job without wrapping `queue.enqueue` by hand:

```python
@emails.task(retries=3, timeout_s=30)
def send_email(to: str, subject: str) -> dict:
    ...
    return {"sent_at": time.time()}

# Caller
r = send_email("alice@example.com", "Hi")  # enqueues, returns a TaskResult
print(r.get(timeout=10))                   # blocks until a worker runs it
```

Worker side, either in-process or as its own process:

```
python -m honker worker myapp.tasks:db --queue=emails --concurrency=4
```

The auto-generated task name is `{module}.{qualname}` (the Huey/Celery convention). Explicit names with `@emails.task(name="...")` are recommended in production so renames don't orphan pending jobs. Periodic tasks use `@emails.periodic_task(crontab("0 3 * * *"))`. Full details in `packages/honker/examples/tasks.py`.

### Python: stream (durable pub/sub)

```python
stream = db.stream("user-events")

with db.transaction() as tx:
    tx.execute("UPDATE users SET name=? WHERE id=?", [name, uid])
    stream.publish({"user_id": uid, "change": "name"}, tx=tx)

async for event in stream.subscribe(consumer="dashboard"):
    await push_to_browser(event)
```

Each named consumer tracks its own offset in the `_honker_stream_consumers` table. `subscribe` replays rows past the saved offset, then transitions to live delivery on WAL wake. The iterator auto-saves the offset at most every 1000 events or every 1 second (whichever comes first) so a high-throughput stream doesn't hammer the single-writer slot. Override with `save_every_n=` / `save_every_s=`, or set both to 0 to disable auto-save and call `stream.save_offset(consumer, offset, tx=tx)` yourself (atomic with whatever else you did in that tx). At-least-once: after a crash, events past the last flushed offset are re-delivered.
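The per-consumer offset bookkeeping described above can be sketched as a monotonic upsert in stdlib `sqlite3`. The table and column names below are illustrative guesses, not honker's actual `_honker_stream_consumers` schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE stream_consumers (
    consumer TEXT,
    stream TEXT,
    pos INTEGER NOT NULL,
    PRIMARY KEY (consumer, stream))""")

def save_offset(consumer, stream, pos):
    """Monotonic upsert: never move a consumer's offset backwards."""
    with db:
        db.execute("""INSERT INTO stream_consumers (consumer, stream, pos)
                      VALUES (?, ?, ?)
                      ON CONFLICT (consumer, stream)
                      DO UPDATE SET pos = max(pos, excluded.pos)""",
                   (consumer, stream, pos))

def get_offset(consumer, stream):
    """Return the saved offset, or 0 for an unknown consumer (start of stream)."""
    row = db.execute(
        "SELECT pos FROM stream_consumers WHERE consumer = ? AND stream = ?",
        (consumer, stream)).fetchone()
    return row[0] if row else 0
```

The monotonic `max()` guard is what makes a crash-then-replay safe: a stale offset flush arriving late can never rewind a consumer that has already advanced.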
### Python: notify (ephemeral pub/sub)

```python
async for n in db.listen("orders"):
    print(n.channel, n.payload)

with db.transaction() as tx:
    tx.execute("INSERT INTO orders (id, total) VALUES (?, ?)", [42, 99.99])
    tx.notify("orders", {"id": 42})
```

Listeners attach at the current `MAX(id)`; history is not replayed. Use `db.stream()` if you need durable replay. The notifications table is not auto-pruned; call `db.prune_notifications(older_than_s=…, max_keep=…)` from a scheduled task. Payloads have to be valid JSON so a Python writer and a Node reader can share a channel.

### Node.js

```js
const { open } = require('@russellthehippo/honker-node');
const db = open('app.db');

// Atomic: business write + notify commit together
const tx = db.transaction();
tx.execute('INSERT INTO orders (id) VALUES (?)', [42]);
tx.notify('orders', { id: 42 });
tx.commit();

// Listen wakes on WAL commits, filters by channel
for await (const n of db.listen('orders')) {
  handle(n.payload);
}
```

### SQLite extension (any SQLite 3.9+ client)

```sql
.load ./libhonker_ext
SELECT honker_bootstrap();

INSERT INTO _honker_live (queue, payload) VALUES ('emails', '{"to":"alice"}');
SELECT honker_claim_batch('emails', 'worker-1', 32, 300);  -- JSON array
SELECT honker_ack_batch('[1,2,3]', 'worker-1');            -- DELETEs; returns count
SELECT honker_sweep_expired('emails');                     -- count moved to dead

SELECT honker_lock_acquire('backup', 'me', 60);            -- 1 = got it, 0 = held
SELECT honker_lock_release('backup', 'me');                -- 1 = released

SELECT honker_rate_limit_try('api', 10, 60);               -- 1 = under, 0 = at limit
SELECT honker_rate_limit_sweep(3600);                      -- drop windows >1h old

SELECT honker_cron_next_after('0 3 * * *', unixepoch());   -- unix ts of next fire
SELECT honker_scheduler_register('nightly', 'backups', '0 3 * * *', '"go"', 0, NULL);  -- register periodic task
SELECT honker_scheduler_tick(unixepoch());                 -- JSON: fires due
SELECT honker_scheduler_soonest();                         -- min next_fire_at
SELECT honker_scheduler_unregister('nightly');             -- 1 = deleted

SELECT honker_stream_publish('orders', 'k', '{"id":42}');  -- returns offset
SELECT honker_stream_read_since('orders', 0, 1000);        -- JSON array
SELECT honker_stream_save_offset('worker', 'orders', 42);  -- monotonic upsert
SELECT honker_stream_get_offset('worker', 'orders');       -- offset or 0

SELECT honker_result_save(42, '{"ok":true}', 3600);        -- save with 1 h TTL
SELECT honker_result_get(42);                              -- value or NULL
SELECT honker_result_sweep();                              -- prune expired

SELECT notify('orders', '{"id":42}');
```

The extension shares `_honker_live`, `_honker_dead`, and `_honker_notifications` with the Python binding, so a Python worker can claim jobs any other language pushed via the extension.
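Since the notifications table is not auto-pruned, a scheduled pruning task is needed. As a sketch of what a pruning pass with the two knobs mentioned above (`older_than_s` and `max_keep`) could do, here is an illustrative stdlib `sqlite3` version; the schema is invented, not honker's:

```python
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE notifications (
    id INTEGER PRIMARY KEY,
    channel TEXT NOT NULL,
    payload TEXT NOT NULL,
    created_at REAL NOT NULL)""")

def prune_notifications(older_than_s=None, max_keep=None):
    """Delete rows older than the age cutoff, then cap the table at the
    max_keep newest rows. Returns the total number of rows deleted."""
    deleted = 0
    with db:
        if older_than_s is not None:
            cur = db.execute("DELETE FROM notifications WHERE created_at < ?",
                             (time.time() - older_than_s,))
            deleted += cur.rowcount
        if max_keep is not None:
            cur = db.execute(
                """DELETE FROM notifications WHERE id NOT IN (
                       SELECT id FROM notifications ORDER BY id DESC LIMIT ?)""",
                (max_keep,))
            deleted += cur.rowcount
    return deleted
```

Running this from a crontab-style periodic task (as the README suggests) keeps the listener tables bounded without affecting live delivery, since listeners attach at the current `MAX(id)` anyway.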
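The `honker_rate_limit_try(name, limit, window_s)` call above returns 1 while under the limit and 0 at it. A plausible fixed-window reading of those semantics (my guess, not the extension's actual implementation) can be sketched in stdlib `sqlite3`:

```python
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE rate_limits (
    name TEXT,
    window_start INTEGER,
    count INTEGER NOT NULL,
    PRIMARY KEY (name, window_start))""")

def rate_limit_try(name, limit, window_s, now=None):
    """Return 1 if this call is under the limit for the current fixed
    window, else 0. The conditional UPDATE makes check-and-increment atomic."""
    now = time.time() if now is None else now
    window = int(now // window_s) * window_s  # fixed-window bucket start
    with db:
        db.execute("""INSERT INTO rate_limits (name, window_start, count)
                      VALUES (?, ?, 0)
                      ON CONFLICT (name, window_start) DO NOTHING""",
                   (name, window))
        cur = db.execute(
            """UPDATE rate_limits SET count = count + 1
               WHERE name = ? AND window_start = ? AND count < ?""",
            (name, window, limit))
        return 1 if cur.rowcount else 0

def rate_limit_sweep(older_than_s, now=None):
    """Drop windows that started more than older_than_s ago; returns count."""
    now = time.time() if now is None else now
    with db:
        return db.execute("DELETE FROM rate_limits WHERE window_start < ?",
                          (now - older_than_s,)).rowcount
```

The sweep exists for the same reason as `honker_rate_limit_sweep`: stale window rows are never consulted again, so they only cost disk until something deletes them.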
Schema compatibility is pinned by `tests/test_extension_interop.py`.

## Design

This repo includes the honker SQLite loadable extension and bindings for Python, Node, Rust, Go, Ruby, Bun, and Elixir.

For most applications, SQLite alone is sufficient, and there are already great libraries that leverage SQLite for durable messaging. Huey is the one honker draws the most from; this project is inspired by it and seeks to do something similar across languages and frameworks by moving the package logic into a SQLite extension. For Postgres-backed apps, pg_notify + pg-boss or Oban is the equivalent. This library is for apps where SQLite is the primary datastore.

The extension has three primitives that tie it together:

- ephemeral pub/sub (`notify()`)
- durable pub/sub with per-consumer offsets (`stream()`)
- an at-least-once work queue (`queue()`)

All three are INSERTs inside your transaction, which lets a task "send" be atomic with your business write; rollback drops everything.

The explicit goal is NOTIFY/LISTEN semantics without constant polling, to achieve single-digit-millisecond reaction time. If you use your app's existing SQLite file, every WAL commit notifies workers, so most wakeups find nothing to do: workers check the message and queue tables, see no new work, and go back to waiting. This "overtriggering" is deliberate and is the tradeoff for push semantics and fast reaction time.

### WAL-only by design

honker requires `journal_mode = WAL` on every database it manages. `honker_bootstrap()` refuses to run on a file-backed DB that isn't in WAL mode, and the language bindings set `PRAGMA journal_mode = WAL` in their default open path.

Workers hold open read views (WAL subscription channels, listener iterators) for their whole lifetime. In DELETE / TRUNCATE modes, writers take an EXCLUSIVE lock, and every active reader blocks them until release; a single worker actively claiming would serialize every `enqueue()` / `notify()` in the system behind it. WAL lets readers and writers coexist.

The `.db-wal` sidecar grows on every commit and only shrinks at checkpoint, so stat-polling it gives a monotonic, unambiguous change signal. The rollback-journal sidecar (`.db-journal`) in DELETE mode appears mid-transaction and vanishes on commit, making it a poor stat-poll target.
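The monotonic-growth property of the `-wal` sidecar is easy to observe with stdlib `sqlite3` and `os.stat`. This is a sketch of the change signal itself, not honker's actual watcher:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "app.db")
db = sqlite3.connect(path)
db.execute("PRAGMA journal_mode=WAL")  # honker is WAL-only by design
db.execute("CREATE TABLE t (x)")
db.commit()

def wal_size():
    """Size of the .db-wal sidecar: grows on every commit, shrinks only at checkpoint."""
    try:
        return os.stat(path + "-wal").st_size
    except FileNotFoundError:
        return 0

before = wal_size()
db.execute("INSERT INTO t VALUES (1)")
db.commit()  # appends frames to the WAL
after = wal_size()
# after > before: a stat poll (or OS file watcher) sees every commit as growth
```

In DELETE mode the equivalent probe would have to race the short-lived `.db-journal` file, which is exactly why the README calls it a poor stat-poll target.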