Expose docusaurus generation over OpenRPC as async jobs #92
Labels
No labels
prio_critical
prio_low
type_bug
type_contact
type_issue
type_lead
type_question
type_story
type_task
No project
No assignees
2 participants
Notifications
Due date
No due date set.
Dependencies
No dependencies set.
Reference
lhumina_code/hero_books#92
Loading…
Add table
Add a link
Reference in a new issue
No description provided.
Delete branch "%!s()"
Deleting a branch is permanent. Although the deleted branch may continue to exist for a short time before it actually gets removed, it CANNOT be undone in most cases. Continue?
Goal
Bring docusaurus site generation into
hero_books_serverso it can be invoked over JSON-RPC from any Hero service or UI, not only from the standalonehero_docsCLI. Long-running generation must be async — the call returns a job ID immediately, and a separate method reports status.Motivation
Today
hero_docsis a standalone binary that blocks on generation. Making this a server capability means:books.pdfcan apply to docusaurus output.Scope
Dependencies
hero_books_docusaurustocrates/hero_books_server/Cargo.toml.New OpenRPC methods
Add to
crates/hero_books_server/openrpc.json:docs.new— scaffold a new docusaurus site. Params:name,path, optionalforce. Returns:{ job_id }.docs.generate— generate a docusaurus site from an existing book (by book id or path). Returns:{ job_id }.docs.jobStatus— poll a job. Params:{ job_id }. Returns:{ state: pending|running|done|failed, output_path?, error? }.Job registry
Arc<Mutex<HashMap<JobId, JobState>>>.tokio::spawnand updates state on completion or failure.books.pdfcaching) so repeated calls with identical inputs are cheap.Handlers
crates/hero_books_server/src/web/rpc.rsalongsidehandle_books_pdf(~line 1249). Dispatch entries go in the same switch block.The standalone CLI
hero_docsas-is. It callshero_books_docusaurusdirectly — the server is an additional entry point, not a replacement.Acceptance criteria
docs.newanddocs.generatereturn a job id within milliseconds.docs.jobStatusreflectsrunningwhile work is in flight anddonewithoutput_pathon success.state: failedwith anerrorstring; the server does not panic.docs.generatecalls for different inputs run in parallel.openrpc.json.docs.new→docs.jobStatuspolling loop.Non-goals
tokio::spawnper job is enough at this stage.books.*methods use.Dependency order
Lands after #1 (fix
--pathnesting) so the server exposes correct scaffold behavior. Can land in parallel with #3 inhero_skills(installer work).Implementation Spec for Issue #92
Objective
Add three new JSON-RPC methods (
docs.new,docs.generate,docs.jobStatus) tohero_books_serverthat exposehero_books_docusaurusscaffolding and site generation as asynchronous background jobs. Callers receive a job ID immediately and poll for completion, mirroring the existingimport_jobs()pattern.Requirements
docs.newacceptsname,path, and optionalforce; spawns scaffold + full generation in background; returns{ job_id }within millisecondsdocs.generateacceptspath(heroscript path or book identifier); spawns docusaurus generation in background; returns{ job_id }docs.jobStatusaccepts{ job_id }; returns{ state: "pending"|"running"|"done"|"failed", output_path?, error? }std::thread::spawn(docusaurus APIs are synchronous/CPU-bound, dispatch is synchronous)state: "failed"witherrorstring; server never panics.docusaurus_cache/docs.new->docs.jobStatuspollingFiles to Modify
crates/hero_books_server/Cargo.tomlhero_books_docusaurusdependencycrates/hero_books_server/src/web/server.rsDocsJobStatusenum,DocsJobstruct,docs_jobs()registry, cache dir helper, input hash computation, running job dedupcrates/hero_books_server/src/web/rpc.rscrates/hero_books_server/src/web/mod.rscrates/hero_books_server/openrpc.jsondocs.new,docs.generate,docs.jobStatusmethod definitionscrates/hero_books_server/src/web/rpc_spec.rsImplementation Plan
Step 1: Add
hero_books_docusaurusdependencyFiles:
crates/hero_books_server/Cargo.tomlhero_books_docusaurus = { path = "../hero_books_docusaurus" }to[dependencies]Dependencies: none
Step 2: Add docs job registry and cache helpers to
server.rsFiles:
crates/hero_books_server/src/web/server.rsDocsJobStatusenum (Pending/Running/Done/Failed)DocsJobstruct (state, output_path, error, input_hash)docs_jobs()global registry (followsimport_jobs()pattern)get_docusaurus_cache_dir()(followsget_pdf_cache_dir()pattern)calculate_docs_input_hash()andfind_running_job_for_hash()helpersDependencies: Step 1
Step 3: Add dispatch entries and handler functions to
rpc.rsFiles:
crates/hero_books_server/src/web/rpc.rsdocs.new,docs.generate,docs.jobStatusin the match blockhandle_docs_new(): parse params, compute hash, dedup check, spawn thread for scaffold + generatehandle_docs_generate(): parse params, compute hash, dedup check, spawn thread for generationhandle_docs_job_status(): look up job, return stateDependencies: Step 2
Step 4: Update
mod.rsre-exportsFiles:
crates/hero_books_server/src/web/mod.rspub use server::{ ... }blockDependencies: Step 2
Step 5: Update
openrpc.jsonwith new method definitionsFiles:
crates/hero_books_server/openrpc.jsonDependencies: none
Step 6: Add typed structs to
rpc_spec.rsand update inline schemaFiles:
crates/hero_books_server/src/web/rpc_spec.rsget_openrpc_schema()inline JSONDependencies: none
Step 7: Add smoke test
Files:
crates/hero_books_server/src/web/server.rs,crates/hero_books_server/src/web/rpc.rsdocs.new->docs.jobStatusviahandle_rpc_requestDependencies: Steps 2, 3
Acceptance Criteria
hero_books_docusaurusis a dependency ofhero_books_serverdocs.newreturns{ job_id }within milliseconds (non-blocking)docs.generatereturns{ job_id }within milliseconds (non-blocking)docs.jobStatusreturns correct state transitions (pending -> running -> done/failed){ state: "failed", error: "..." }; server does not panicopenrpc.jsoncontains three new method definitionscargo buildsucceeds with no new warningsNotes
handle_rpc_requestfunction is synchronous (called insidespawn_blocking). Job handlers usestd::thread::spawnfor background work..docusaurus_cache/{hash}/. If cache exists withbuild/subdirectory, skip regeneration.import_jobs()behavior).parking_lot::Mutex<HashMap<...>>for thread safety (same asimport_jobs()).uuidcrate withv4feature will be added for job ID generation.Test Results
All tests pass, including the 2 new docs job tests:
test_docs_job_registry-- verifies job state transitions (Pending -> Running -> Done)test_docs_job_dedup-- verifies duplicate job detection by input hashBuild compiles cleanly with no warnings.
Implementation Summary
Changes Made
Modified files:
crates/hero_books_server/Cargo.toml-- Addedhero_books_docusaurus(path dependency) anduuid(v1, v4 feature) dependencies.crates/hero_books_server/src/web/server.rs-- AddedDocsJobStateenum,DocsJobstruct,docs_jobs()global registry (followsimport_jobs()pattern),get_docusaurus_cache_dir(),calculate_docs_input_hash(), andfind_running_docs_job()helpers. Added 2 unit tests.crates/hero_books_server/src/web/rpc.rs-- Added dispatch entries fordocs.new,docs.generate,docs.jobStatus. Implementedhandle_docs_new(),handle_docs_generate(), andhandle_docs_job_status()handler functions with async job spawning viastd::thread::spawn.crates/hero_books_server/src/web/mod.rs-- Added re-exports for all new public items.crates/hero_books_server/openrpc.json-- Added three new method definitions with params and result schemas.crates/hero_books_server/src/web/rpc_spec.rs-- AddedDocsNewRequest,DocsGenerateRequest,DocsJobStatusRequest,DocsJobStatusResponsestructs. Updated inlineget_openrpc_schema()with the three new methods.Key Design Decisions
std::thread::spawnsincehandle_rpc_requestis synchronous (runs insidespawn_blocking)..docusaurus_cache/{hash}/. If a cachedbuild/directory exists, a synthetic "done" job is returned immediately.import_jobs()behavior.Test Results
test_docs_job_registry,test_docs_job_dedupPull request opened: #94
This PR implements the changes discussed in this issue.