services not shown properly #57

Open
opened 2026-04-28 05:26:45 +00:00 by despiegk · 3 comments
Owner

In the Services tab:

![image](/attachments/cc1cd980-2cdb-4d9e-b3e9-d47c9b9cc1d9)

e.g. for `hero_db`. But then if we check the jobs:

![image](/attachments/34b41b97-61c7-498b-aade-5d6c20c33c69)

The old jobs that are no longer relevant should not count toward whether the service is shown as good or not: if the 2 current jobs are OK and running, then the service is OK. The old jobs do not need to be shown.

We need to track this through the run: when we look at a run, we should be able to click on its jobs and go to the job panel.
Author
Owner

Implementation Spec — Issue #57: Services tab not shown properly

Objective

Fix the Services tab so a service's listed jobs and its derived health status reflect only the current run of that service (the most recent run row whose service_id matches). Stale jobs from prior runs must not appear in the per-service jobs view and must not influence the badge color or status. Each job row in the service detail panel must be clickable and deep-link to the Jobs panel for that job.

Background — current state of the code

  • The supervisor already creates a Run per service.start cycle and tags every spawned Job with run_id (see crates/hero_proc_server/src/rpc/service.rs handle_start).
  • The runs table has service_id and there is a run_jobs join table.
  • service.status already returns current_run_id (most recent run for the service).
  • JobFilter (OpenRPC schema and Rust) already supports both service_id and run_id for job.list.
  • The service detail panel in dashboard.js viewService already correctly uses current_run_id to filter jobs.
  • Bug A (UI badge & counts): loadServices in dashboard.js calls rpc('job.list', { filter: { service_id: name, limit: 200 } }) — i.e., it counts all historical jobs for the service, regardless of run. The badge then turns red whenever any historical job has phase === 'failed', even if the current run is fully healthy.
  • Bug B (server service.status): service_running_jobs and service_last_terminal_state walk all jobs for the service via db.jobs.list_by_service_id(name) instead of restricting to the current run. As a result, if a stale Retrying row from a previous run lingers, or a previous run's last terminal job was Failed, the derived state is wrong.
  • UX gap: the badge in the row links to the Jobs tab via navigateToServiceJobs which only filters by service name, not by run. We need to also pass the current_run_id and have the Jobs tab filter by it.

Requirements

  • Services tab badge ("Jobs" column) must count only jobs belonging to the service's current run.
  • Services tab status badge must reflect health of the current run only (running / ok / error / starting / inactive).
  • Service detail panel (already correct) must continue to scope jobs to the current run.
  • Each listed job in the detail panel must be clickable and navigate to the Jobs tab with that specific job opened (deep-linkable URL #jobs/<id>).
  • Clicking the per-row "Jobs" badge in the services table must navigate to the Jobs tab and apply a run_id filter (not just service_id).
  • If a service has never been started (current_run_id == null), the Jobs column shows - and the status badge shows inactive.
  • Server-side service.status state must derive from the current run only; old runs' jobs are ignored.

Files to modify / create

  • crates/hero_proc_server/src/rpc/service.rs — scope service_running_jobs, service_last_terminal_state, and count_restarts (used by handle_status and handle_status_full) to the current run when one exists.
  • crates/hero_proc_ui/static/js/dashboard.js — change loadServices to fetch jobs per service using the current run's run_id; change serviceJobsBadge click handler / navigateToServiceJobs to honour run scoping; ensure click-to-job rows in viewService continue to deep-link.
  • crates/hero_proc_ui/templates/index.html — minor copy update (tooltip "Current run jobs"); optional, low priority.
  • Server-side test (in crates/hero_proc_server/src/rpc/service.rs #[cfg(test)]) — add cases for current-run scoping.

No new files are strictly required. No OpenRPC schema changes are required.

Implementation Plan

Step 1 — Server: scope service status to the current run

Files: crates/hero_proc_server/src/rpc/service.rs

  • Add helper current_run_id(db, name) -> Option<u32> using db.runs.get_for_service(name).
  • Add helper current_run_job_ids(db, name) -> Option<Vec<u32>> using db.runs.get_run_job_ids(rid).
  • Modify service_running_jobs, service_last_terminal_state, and count_restarts to iterate only over current-run job IDs when a current run exists. When no current run exists, return empty / inactive / 0.
  • handle_status and handle_status_full automatically pick up the fix.

Why: makes service.status.state reflect only the current cycle. Authoritative for all consumers (UI, CLI, scripts).

Dependencies: none.
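The run scoping in Step 1 can be sketched with a small in-memory model. `Db`, `Job`, and the method bodies below are stand-ins for illustration only — the real helpers live in `service.rs` and query the `runs`/`run_jobs` tables via `db.runs.get_for_service` and `get_run_job_ids` — but the filtering idea is the same:

```rust
// Sketch only: in-memory stand-ins for the runs / run_jobs / jobs tables.
// Types and field layouts here are illustrative assumptions, not the real
// hero_proc schema; only the current-run scoping logic is the point.

#[derive(Clone, Copy, Debug, PartialEq)]
enum Phase { Running, Retrying, Finished, Failed }

struct Job { id: u32, phase: Phase }

struct Db {
    runs: Vec<u32>,            // run ids for one service, oldest first
    run_jobs: Vec<(u32, u32)>, // (run_id, job_id) join rows
    jobs: Vec<Job>,
}

impl Db {
    /// Most recent run for the service, if it was ever started.
    fn current_run_id(&self) -> Option<u32> {
        self.runs.last().copied()
    }

    /// Job ids belonging to the current run only.
    fn current_run_job_ids(&self) -> Option<Vec<u32>> {
        let rid = self.current_run_id()?;
        Some(self.run_jobs.iter()
            .filter(|&&(r, _)| r == rid)
            .map(|&(_, j)| j)
            .collect())
    }

    /// Active jobs scoped to the current run; empty when no run exists,
    /// so stale jobs from earlier runs can never influence the result.
    fn service_running_jobs(&self) -> Vec<u32> {
        let Some(ids) = self.current_run_job_ids() else { return Vec::new() };
        self.jobs.iter()
            .filter(|j| ids.contains(&j.id)
                && matches!(j.phase, Phase::Running | Phase::Retrying))
            .map(|j| j.id)
            .collect()
    }
}

fn main() {
    let db = Db {
        runs: vec![1, 2],
        run_jobs: vec![(1, 10), (2, 20)],
        jobs: vec![
            Job { id: 10, phase: Phase::Failed },  // stale job from run #1
            Job { id: 20, phase: Phase::Running }, // current run #2
        ],
    };
    // The failed job from run #1 is filtered out by run scoping.
    assert_eq!(db.service_running_jobs(), vec![20]);
}
```

The same guard (return empty / `inactive` / `0` when `current_run_id` is `None`) would apply to `service_last_terminal_state` and `count_restarts`.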

Step 2 — UI: scope the Jobs badge counts to the current run

Files: crates/hero_proc_ui/static/js/dashboard.js (loadServices, serviceJobsBadge)

  • In loadServices, sequence per service: first service.status to get current_run_id, then if non-null call job.list with filter.run_id = current_run_id. If null, store zero counts and runId: null.
  • Keep the outer Promise.all over services so different services still fetch in parallel.
  • Store runId in cachedServiceJobCounts[name].
  • Update serviceJobsBadge tooltip ("Jobs in current run #<id> — N total, F failed, R retrying").
  • Update inline onclick to call navigateToServiceJobs(name, runId).

Why: stops the Jobs column badge from being misled by stale old-run failures.

Dependencies: none (independent of Step 1).

Step 3 — UI: add run-scoped navigation to the Jobs tab

Files: crates/hero_proc_ui/static/js/dashboard.js (navigateToServiceJobs, loadJobs, route restoration)

  • Extend navigateToServiceJobs(serviceName, runId) to set #jobs?service=<name>&run=<id> and persist runId in a _pendingRunFilter.
  • Extend loadJobs() to honour runId filter (filter.run_id = runId); add a "Current run only" pill the user can dismiss to clear it (and the URL param).
  • Extend the hash route parser to read &run=<id>.

Why: clicking the badge in services takes the user to a Jobs view that only shows current-run jobs.

Dependencies: Step 2.

Step 4 — UI: confirm click-to-job deep link in service detail panel

Files: crates/hero_proc_ui/static/js/dashboard.js (viewService jobs table render)

  • Verify the existing onclick="navigateTo('jobs', j.id)" correctly opens the Jobs tab and viewJob(j.id).
  • Add aria-label="Open job #<id>" and class="cursor-pointer" to the row.

Why: requirement: "we should be able to click on the jobs and go to job panel". Mostly already true — this hardens the UX.

Dependencies: none.

Step 5 — Tests

Files: crates/hero_proc_server/src/rpc/service.rs #[cfg(test)]

  • service_status_ignores_jobs_from_previous_runs: create service, start (run #1) → mark its job Failed, start again (run #2) → leave job Running; assert state == "running" and current_run_id == run #2.
  • service_status_inactive_when_no_run: never start; assert state == "inactive" and current_run_id == null.

Why: locks in the fix.

Dependencies: Steps 1, 2, 3, 4.
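The two test cases pin down a state-derivation precedence that can be sketched as follows. The mapping below (any active job → `running`; otherwise the last terminal job decides `ok` vs `error`; no current run → `inactive`; a run with no jobs yet → `starting`) is an assumption about the intended semantics drawn from the requirements list, not the literal `service.rs` code:

```rust
#[derive(Clone, Copy)]
enum Phase { Running, Retrying, Finished, Failed }

/// Hypothetical derivation of the service state from current-run jobs only.
/// `None` means the service was never started (no current run).
fn derive_state(current_run_jobs: Option<&[Phase]>) -> &'static str {
    match current_run_jobs {
        None => "inactive",     // never started
        Some([]) => "starting", // run exists but no jobs spawned yet
        Some(jobs) => {
            if jobs.iter().any(|p| matches!(p, Phase::Running | Phase::Retrying)) {
                "running"
            } else {
                // all jobs terminal: the last one decides
                match jobs.last().unwrap() {
                    Phase::Finished => "ok",
                    _ => "error",
                }
            }
        }
    }
}

fn main() {
    // Mirrors service_status_ignores_jobs_from_previous_runs: run #1's
    // failed job is simply absent once jobs are scoped to run #2.
    assert_eq!(derive_state(Some(&[Phase::Running][..])), "running");
    // Mirrors service_status_inactive_when_no_run.
    assert_eq!(derive_state(None), "inactive");
}
```

Because the stale `Failed` job from run #1 is excluded before this derivation runs, it can never flip the state to `error` while run #2 is healthy.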

Acceptance criteria

  • Old failed jobs from previous runs no longer affect the service status badge in the Services tab.
  • The "Jobs" column in the Services tab shows the count of jobs in the current run only.
  • When a service has never been started, the Jobs column shows - and the status badge shows inactive.
  • service.status (RPC) returns state == "running" when the current run's jobs are active, regardless of stale failed jobs from earlier runs.
  • Clicking the Jobs badge in the Services row navigates to the Jobs tab filtered to run_id = current_run_id (URL #jobs?service=<name>&run=<id>).
  • Clicking a job row in the service detail panel navigates to #jobs/<id> and opens the Jobs detail panel for that job.
  • Server-side unit tests pass.
  • OpenRPC spec is unchanged (ServiceStatus.current_run_id already exists).

Notes / warnings

  • The delete_inactive_by_service cleanup on service.start (replace_existing_jobs=true default) means stale jobs are often already gone. The visible bug appears when:
    1. A service is started with replace_existing_jobs=false (rare in UI, possible via SDK/CLI),
    2. A retrying/running job from a previous run is orphaned (supervisor crash mid-run), or
    3. The supervisor has retried a job many times within one run — those retrying rows pile up and mislead the badge.
  • The fix is defensive: by always scoping to current_run_id, the UI is correct regardless of which edge case produced the stale jobs.
  • get_run_for_service returns the most recent run regardless of status — including halted. That's correct: the current run is whichever is most recent. If the run is halted, current-run jobs are typically terminal, and state falls back to the run's last terminal job.
  • The server-side change (Step 1) is the most important: it affects every consumer of service.status, not just the dashboard.
  • No DB migration. No OpenRPC schema change. No new RPC methods.
Author
Owner

Test Results

cargo test --workspace --no-fail-fast

  • Total: 462
  • Passed: 408
  • Failed: 1
  • Ignored: 53

Failures

hero_proc_integration_tests (lib unittest):

test harness::tests::test_harness_starts_and_stops ... FAILED

failures:

---- harness::tests::test_harness_starts_and_stops stdout ----
DEBUG hero_proc_integration_tests::harness: creating test harness test_id=0 socket=/var/folders/.../hero_proc.sock
INFO  hero_proc_integration_tests::harness: starting hero_proc_server test_id=0 binary=/Volumes/T7/build/cargo/debug/hero_proc_server
DEBUG hero_proc_integration_tests::harness: test harness dropped test_id=0

thread 'harness::tests::test_harness_starts_and_stops' (4647793) panicked at tests/integration/src/harness.rs:372:52:
harness should start: server startup timeout

failures:
    harness::tests::test_harness_starts_and_stops

test result: FAILED. 3 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 10.22s
error: test failed, to rerun pass `-p hero_proc_integration_tests --lib`
Author
Owner

Implementation summary

All five steps from the spec are implemented.

Server (crates/hero_proc_server/src/rpc/service.rs)

  • Added private helpers current_run_id(db, name) and current_run_job_ids(db, name).
  • service_running_jobs now returns active jobs from the current run only; returns empty when there is no current run.
  • service_last_terminal_state now considers only current-run terminal jobs; returns "inactive" when there is no current run.
  • count_restarts now counts restarts only for current-run jobs; returns 0 when there is no current run.
  • handle_status and handle_status_full automatically pick up the fix.

UI — Services tab badge (crates/hero_proc_ui/static/js/dashboard.js)

  • loadServices now calls service.status first per service to get current_run_id, then calls job.list filtered by run_id (skipping the call entirely when no current run exists).
  • cachedServiceJobCounts[name] now also stores runId.
  • serviceJobsBadge(name) reads runId, renders a dash + "service inactive" tooltip when runId == null, and otherwise shows the badge with Jobs in current run #<id> — N total, F failed, R retrying — click to view.
  • The badge onclick now calls navigateToServiceJobs(name, runId).

UI — Run-scoped Jobs navigation (crates/hero_proc_ui/static/js/dashboard.js and templates/index.html)

  • New module-level state _pendingRunFilter / _activeRunFilter.
  • navigateToServiceJobs(serviceName, runId) accepts the optional runId and writes it to #jobs?service=<name>&run=<id>.
  • loadJobs() honours _activeRunFilter (or pending) by adding filter.run_id to the job.list rpc call, and renders the run-filter pill.
  • New renderJobsRunPill() and clearJobsRunFilter() — Bootstrap dismissible badge Run #N with btn-close that strips &run= from the URL hash and refreshes the list.
  • Initial hash parser and hashchange handler now read &run=<id> and clear it on plain #jobs.
  • templates/index.html: added <span id="jobs-run-pill"> host element in the Jobs toolbar.

UI — Service detail row UX (crates/hero_proc_ui/static/js/dashboard.js and static/css/dashboard.css)

  • The <tr> rows in the service detail jobs table now have class="cursor-pointer", aria-label="Open job #<id>", and an explicit "Click to open job #<id>" tooltip.
  • Existing onclick="navigateTo('jobs', j.id)" is preserved (it deep-links to #jobs/<id> and opens the job detail panel).
  • Added .cursor-pointer { cursor: pointer; } to dashboard.css (utility was not present).

Tests

Added two unit tests in crates/hero_proc_server/src/rpc/service.rs:

  • service_status_ignores_jobs_from_previous_runs — proves a Failed job from run #1 does not affect status when run #2 has a Running job. Asserts current_run_id == Some(run #2), state == "running", running_jobs.len() == 1.
  • service_status_inactive_when_no_run — proves an unstarted service reports current_run_id == None, state == "inactive", no running jobs, count_restarts == 0.

Both new tests pass. cargo test -p hero_proc_server --lib is green (67 passed, 0 failed).

Workspace test results

cargo test --workspace --no-fail-fast: 408 passed, 1 failed, 53 ignored (462 total).

The single failure was hero_proc_integration_tests::harness::tests::test_harness_starts_and_stops — a server-startup-timeout flake that occurs only when the harness self-test runs concurrently with the rest of the workspace. Re-running cargo test -p hero_proc_integration_tests --lib in isolation passes cleanly (4/4). The failure is unrelated to the issue #57 changes and reproduces on the unchanged baseline under parallel load.

Acceptance criteria — status

  • Old failed jobs from previous runs no longer affect the service status badge.
  • The "Jobs" column in the Services tab shows the count of jobs in the current run only.
  • When a service has never been started, the Jobs column shows - and the status badge shows inactive.
  • service.status RPC returns state == "running" when the current run is active, regardless of stale failures.
  • Clicking the Jobs badge navigates to #jobs?service=<name>&run=<id> and the Jobs tab applies the run_id filter.
  • Clicking a job row in the service detail panel navigates to #jobs/<id> and opens the Jobs detail panel.
  • Server-side tests pass.
  • OpenRPC schema unchanged.

Reference
lhumina_code/hero_proc#57