Livekit service not starting successfully #153

Open
opened 2026-04-28 08:13:45 +00:00 by ashraf · 1 comment
Member

The service_livekit config files are either not written or are inconsistent with each other: livekit.yaml and runtime.json should share the same key.


Implementation Spec for Issue #153

Objective

Make service_livekit start reliably produce livekit.yaml, backend.env, and runtime.json whose api_key / api_secret values are mutually consistent, so the upstream livekit-server and the lk-backend JWT signer agree on credentials and the service starts successfully on first boot, on re-runs, and on upgrade from a previously broken state.

Root-cause summary

Two coupled defects produced the symptom "config files not written or inconsistent":

  1. Upstream supervisor (already fixed in hero_livekit commit 64c5711 on 2026-04-28)

    • livekitservice.install only downloaded the binary; it did not write livekit.yaml/backend.env/runtime.json. If the user ran start before configure, the supervisor errored "configs missing — call configure first" instead of self-healing.
    • livekitservice.configure minted a fresh api_secret only when the in-memory secret was empty / placeholder, so an in-memory cfg loaded from a partial runtime.json (no api_secret) could write a YAML with one key while leaving stale state on disk.
    • Net effect: re-installs / partial failures could leave livekit.yaml keyed K1: S1 while runtime.json had api_key=K2, api_secret=S2 (or S2 empty). livekit-server accepted tokens signed with S1; lk-backend (which loads LIVEKIT_API_SECRET from backend.env, derived from runtime.json) signed with S2. Browsers got 401 and the service appeared "not started".
  2. hero_skills bootstrap (tools/modules/services/service_livekit.nu) — the orchestration path that drives install → configure → start against the supervisor:

    • Does not pin / require the supervisor version that contains the fix above. On hosts that already had the broken hero_livekit binary cached in ~/hero/bin, service_livekit start (without --update) silently reuses the broken binary even after pulling new skills.
    • Has no post-write validation: it doesn't verify that the keys: map in livekit.yaml actually matches api_key/api_secret in runtime.json. Stale, mismatched files left over from a prior broken run survive because start's self-heal in the supervisor only triggers when livekit.yaml or backend.env is missing — not when they are inconsistent with runtime.json.
    • svx_bootstrap_livekit calls livekitservice.configure with all-empty params (api_key: "", api_secret: "", domain: "", redis_address: "") and trusts the supervisor to do the right thing. When the supervisor is the broken pre-fix version, this is what produces the inconsistency.

The fix in this repo is to (a) force a rebuild of hero_livekit at the fixed commit on start, and (b) add a defensive pre-flight that wipes mismatched config files so the (now-fixed) supervisor's self-heal path is forced to regenerate a consistent triple.

Requirements

  • After service_livekit start finishes successfully, all three files in ~/hero/var/hero_livekit/ must be mutually consistent: the single key in livekit.yaml's keys: map must equal runtime.json.api_key, its value must equal runtime.json.api_secret, and backend.env's LIVEKIT_API_KEY / LIVEKIT_API_SECRET must equal the same pair.
  • The start command must detect an inconsistent on-disk state from a prior broken run and force regeneration (rather than reusing stale files).
  • The skill must build / run a hero_livekit containing the supervisor-side fix (64c5711 "always make sure secrets are set"). On hosts with stale local binaries, service_livekit start must either rebuild or print an actionable error instructing the operator to pass --update.
  • The bootstrap must not silently swallow livekitservice.configure failures: if configure fails or the post-write consistency check fails, the skill must print a clear error and return non-zero.
  • The fix must be idempotent — re-running service_livekit start against an already-consistent state must be a no-op (no secret rotation, since that breaks tokens already issued).
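For illustration, a consistent triple looks like the following (APIabc123 / secretxyz are hypothetical placeholder credentials, not values from the repo):

```
# livekit.yaml (excerpt)
keys:
  APIabc123: "secretxyz"

# runtime.json (excerpt)
{ "api_key": "APIabc123", "api_secret": "secretxyz" }

# backend.env
LIVEKIT_API_KEY=APIabc123
LIVEKIT_API_SECRET=secretxyz
```

Any deviation between the three (different key, different secret, or a missing file) is what the pre-flight below treats as inconsistent.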

Files to Modify/Create

  • tools/modules/services/service_livekit.nu — Add a pre-flight consistency check that inspects livekit.yaml + runtime.json + backend.env and deletes them if mismatched; tighten svx_bootstrap_livekit to fail loudly on configure errors and validate post-write consistency; bump the install path to force --update when the local binary predates the supervisor fix.
  • tools/modules/services/lib.nu — (read-only inspection) confirm svc_install --update semantics and whether a "minimum-commit-required" guard belongs here or in service_livekit.nu. No change expected unless we want a reusable helper for "rebuild if binary older than Git ref".
  • (Optional) claude/skills/hero_running/SKILL.md — Add a row to the "Common failure modes" table describing the symptom ("livekit-server 401s / service_collab huddles never connect") and the one-line fix (service_livekit start --reset --update).

No new files are required.

Implementation Plan

Step 1: Add a config-consistency pre-flight helper in service_livekit.nu

Files: tools/modules/services/service_livekit.nu

  • Add a private helper svx_lk_configs_consistent [root: bool] -> bool that:
    • Resolves cfg_dir = $"(svc_home $root)/var/hero_livekit".
    • Returns true (vacuously consistent) if none of livekit.yaml, runtime.json, backend.env exist (clean install path — supervisor will populate).
    • Returns false if some-but-not-all exist (partial state — must be wiped).
    • When all three exist:
      • Parses runtime.json with open to get api_key (string) and api_secret (string).
      • Reads livekit.yaml as text and extracts the single keys: mapping line with a regex like ^\s+(\S+):\s*"([^"]+)"\s*$ applied to the line immediately after the keys: line. (Avoid pulling in a full YAML parser — the file is generated by render_livekit_yaml and has a fixed shape.)
      • Reads backend.env as text and extracts LIVEKIT_API_KEY=… and LIVEKIT_API_SECRET=… values.
      • Returns true iff all four pairs are equal: yaml-key == runtime.api_key == env.LIVEKIT_API_KEY, and yaml-value == runtime.api_secret == env.LIVEKIT_API_SECRET.
    • The helper must never throw; on parse failure it returns false so callers wipe and regenerate.
  • Add a private helper svx_lk_wipe_stale_configs [root: bool] that, when called, removes the three files (livekit.yaml, backend.env, runtime.json) from ~/hero/var/hero_livekit/. Leaves data/ and logs/ alone. Uses ^rm -f and tolerates missing files.
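A minimal sketch of the two helpers, assuming svc_home resolves the hero home directory as elsewhere in this module; the exact parsing and names are illustrative, not a final implementation:

```nu
# Sketch only — assumes svc_home from lib.nu; the regex shape matches the
# fixed output of render_livekit_yaml (single entry under keys:).
def svx_lk_configs_consistent [root: bool] {
    let dir = $"(svc_home $root)/var/hero_livekit"
    let files = [$"($dir)/livekit.yaml" $"($dir)/runtime.json" $"($dir)/backend.env"]
    let present = ($files | where {|f| $f | path exists })
    if ($present | is-empty) { return true }           # clean install: vacuously consistent
    if (($present | length) < 3) { return false }      # partial state: must be wiped
    try {
        let rt = (open $"($dir)/runtime.json")
        let pair = (open --raw $"($dir)/livekit.yaml"
            | parse --regex '(?m)^\s+(?<k>\S+):\s*"(?<v>[^"]+)"\s*$' | first)
        let env_txt = (open --raw $"($dir)/backend.env")
        let ek = ($env_txt | parse --regex '(?m)^LIVEKIT_API_KEY=(?<v>.*)$' | first | get v)
        let es = ($env_txt | parse --regex '(?m)^LIVEKIT_API_SECRET=(?<v>.*)$' | first | get v)
        ($pair.k == $rt.api_key) and ($pair.v == $rt.api_secret) and ($ek == $rt.api_key) and ($es == $rt.api_secret)
    } catch { false }                                  # never throw: parse failure => wipe + regenerate
}

def svx_lk_wipe_stale_configs [root: bool] {
    let dir = $"(svc_home $root)/var/hero_livekit"
    # data/ and logs/ are deliberately untouched
    for f in [livekit.yaml backend.env runtime.json] { ^rm -f $"($dir)/($f)" }
}
```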

Dependencies: none. This step adds new helpers; nothing else in the file changes yet.

Step 2: Wire the pre-flight into svx_bootstrap_livekit

Files: tools/modules/services/service_livekit.nu

  • In svx_bootstrap_livekit, before the livekitservice.install RPC call (i.e. between the rpc.sock wait loop and the print "→ livekitservice.install …" line):
    • Call svx_lk_configs_consistent $root. If false, print "WARNING stale/inconsistent livekit configs detected — wiping so they will be regenerated", then call svx_lk_wipe_stale_configs $root.
    • Rationale: this guarantees the supervisor's start self-heal path (lines 615-626 of crates/hero_livekit_server/src/livekit/rpc.rs) is exercised, since now both livekit.yaml and backend.env are absent and write_config_artifacts will be called.
  • Tighten the configure failure handling (currently lines 261-266):
    • On failure, also delete the partially-written files (svx_lk_wipe_stale_configs $root) before returning false, so a retry starts from clean state instead of half-written files.
  • Add a post-configure consistency assertion:
    • After the if not ($runtime | path exists) guard succeeds (around line 270), call svx_lk_configs_consistent $root and abort with a clear error (print "FAIL post-configure consistency check failed — refusing to start livekit-server with mismatched keys"; return false) if it returns false. This catches a still-broken supervisor binary in the field.
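The wiring inside svx_bootstrap_livekit could look roughly like this (a sketch; the surrounding variable names follow the existing function, and the RPC calls in between are elided):

```nu
# Pre-flight (before the livekitservice.install RPC): wipe mismatched files so
# the fixed supervisor's self-heal path regenerates a consistent triple.
if not (svx_lk_configs_consistent $root) {
    print "WARNING stale/inconsistent livekit configs detected — wiping so they will be regenerated"
    svx_lk_wipe_stale_configs $root
}

# ... livekitservice.install / livekitservice.configure RPC calls here ...
# On configure failure: svx_lk_wipe_stale_configs $root; return false

# Post-configure assertion: refuse to start against mismatched keys.
if not (svx_lk_configs_consistent $root) {
    print "FAIL post-configure consistency check failed — refusing to start livekit-server with mismatched keys"
    return false
}
```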

Dependencies: Step 1.

Step 3: Force a rebuild path that picks up the supervisor fix

Files: tools/modules/services/service_livekit.nu

  • In export def start, change the install --root=$root --update=$update --reset=$reset call (currently line 336) so that when --reset is not passed and the local hero_livekit_server binary already exists, the skill still pulls + rebuilds the supervisor source. Two acceptable approaches — pick (a) for minimum diff:
    • (a) When neither --update nor --reset was passed AND ~/hero/var/hero_livekit/runtime.json does not exist OR the consistency pre-flight from Step 1 returned false, internally promote $update = true for the install call so forge merge runs and the supervisor binary is rebuilt from the dev branch (which contains commit 64c5711).
    • (b) Add a one-shot version probe (e.g. invoke hero_livekit_server --version or shell out to strings $bin | grep ensure_secret as a heuristic) and force --reset if absent.
  • Document the behaviour change in the doc comment block above export def start (the comment that begins on line 305): mention that on a detected inconsistent state the skill auto-rebuilds the supervisor.
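Approach (a) could be sketched as follows inside export def start; the variable names mirror the existing flags, but this is an illustration, not the final diff:

```nu
# Promote --update when missing/inconsistent state is detected, so forge merge
# rebuilds the supervisor from the branch carrying commit 64c5711.
mut do_update = $update
let runtime = $"(svc_home $root)/var/hero_livekit/runtime.json"
if (not $reset) and ((not ($runtime | path exists)) or (not (svx_lk_configs_consistent $root))) {
    print "→ inconsistent or missing livekit configs — forcing supervisor rebuild (--update)"
    $do_update = true
}
install --root=$root --update=$do_update --reset=$reset
```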

Dependencies: Step 1 (uses the consistency helper).

Step 4: Surface the failure mode in the operator-facing skill

Files: claude/skills/hero_running/SKILL.md

  • Add one row to the table under "### Common failure modes" (around line 97):
    • symptom: huddles silently fail / livekit-server logs "invalid signature" / 401
    • cause: stale livekit.yaml + runtime.json from a pre-fix supervisor (issue #153)
    • fix: service_livekit start --reset --update
  • No other docs in this repo reference livekit start failures, so this is the only doc touchpoint.

Dependencies: none. Can run in parallel with Steps 1-3.

Step 5: Manual verification on a host with the bug

Files: none (operational check)

  • On the dev box, simulate the broken state:
    • service_livekit stop
    • Hand-edit ~/hero/var/hero_livekit/runtime.json to set "api_key": "wrongkey" (mismatched against livekit.yaml).
    • Run service_livekit start. Expected: pre-flight prints "stale/inconsistent livekit configs detected — wiping…", supervisor regenerates all three files, post-configure consistency check passes, livekit-server comes up on :7880, and proc service status hero_livekit reports running.
  • On a fresh box (no ~/hero/var/hero_livekit/):
    • Run service_livekit start. Expected: pre-flight is a no-op (vacuous), supervisor's install writes consistent triple, configure re-writes consistent triple, start spawns livekit-server + lk-backend, status is running.
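The hand-edit in the first scenario can be scripted in nushell (hypothetical helper; assumes the default non-root home layout):

```nu
# Simulate the broken state: flip api_key in runtime.json while leaving
# livekit.yaml untouched, producing a detectable mismatch.
let runtime = $"($env.HOME)/hero/var/hero_livekit/runtime.json"
open $runtime | update api_key "wrongkey" | save -f $runtime
```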

Dependencies: Steps 1-3 merged.

Acceptance Criteria

  • livekit.yaml and runtime.json share consistent key/secret values: the single entry under keys: in the YAML equals <runtime.api_key>: "<runtime.api_secret>".
  • backend.env's LIVEKIT_API_KEY and LIVEKIT_API_SECRET equal runtime.api_key and runtime.api_secret.
  • All three files are present in ~/hero/var/hero_livekit/ after service_livekit start completes successfully.
  • livekit-server process is alive (proc service status hero_livekit reports running) and accepts WebSocket connections on ws://<node_ip>:7880/rtc/v1.
  • On a host with a previously inconsistent triple, service_livekit start (no flags) auto-detects, wipes, and regenerates a consistent triple — operator does not need to manually rm files.
  • On a clean host, service_livekit start succeeds without ever creating an inconsistent triple at any point during the sequence.
  • Re-running service_livekit start against a consistent state does not rotate api_secret (token-stability guarantee from the upstream ensure_secret idempotency).
  • If livekitservice.configure fails, the skill returns non-zero, prints a clear error, and leaves no half-written files behind.

Notes

  • The supervisor-side fix (commit 64c5711 in lhumina_code/hero_livekit, branch development) is a hard prerequisite for this issue to truly close. The hero_skills changes specified above are necessary because (a) operators on stale local binaries will never pick up the fix without a rebuild trigger, and (b) inconsistent files left over from a previous broken run survive even after the supervisor is upgraded — the supervisor's self-heal only triggers when files are missing, not when they disagree with each other.
  • Do not rotate api_secret on every start: tokens already issued (e.g. by service_collab for an in-progress huddle) would silently invalidate. The pre-flight wipe path means rotation only happens after a detected inconsistency, which is exactly the case where the existing tokens are already broken.
  • The YAML parser in Step 1 is intentionally a regex over the known-shape output of render_livekit_yaml (lines 887-918 of crates/hero_livekit_server/src/livekit/rpc.rs). A full YAML dependency in nushell is unnecessary; if upstream render_livekit_yaml ever grows multi-key support, this regex needs to be revisited (currently the renderer hard-codes a single keys: entry, so single-line extraction is safe).
  • Consider opening a follow-up issue against hero_livekit to add a livekitservice.health() RPC that returns (in_memory_cfg_hash, on_disk_yaml_hash) so future skills can do this consistency check over RPC instead of by file inspection.
  • Do not touch tools/modules/services/service_collab.nu. It already chains service_livekit start correctly and reads runtime.json defensively.
mahmoud added this to the now milestone 2026-04-28 09:37:37 +00:00
mahmoud added this to the ACTIVE project 2026-04-28 09:37:39 +00:00