runners dont seem to pick up the actions? #2

Closed
opened 2026-02-05 05:31:57 +00:00 by despiegk · 4 comments
Owner
No description provided.
Member
  • From the logs here, runner 1 picks up the jobs, then gets stuck timing out on Podman operations (especially cleanup), so jobs fail/cancel/retry:

```sh
root@kristof2 ~ # journalctl -u forgejo-runner-1.service -S "2 hours ago" --no-pager | tail -n 200
Feb 05 07:13:27 kristof2 forgejo-runner[2539]: time="2026-02-05T07:13:27+01:00" level=warning msg="failed to remove docker container 1a1e6c7ef9ebd22c98f331ec2a73861586aa63a3ce05ac106fc48b7b6f8e0d39 (retry #0): Delete \"http://%2Frun%2Fpodman%2Fpodman.sock/v1.41/containers/1a1e6c7ef9ebd22c98f331ec2a73861586aa63a3ce05ac106fc48b7b6f8e0d39?force=1&v=1\": context deadline exceeded\n"
Feb 05 07:13:27 kristof2 forgejo-runner[2539]: time="2026-02-05T07:13:27+01:00" level=error msg="Error while stop job container FORGEJO-ACTIONS-TASK-3898_WORKFLOW-5df67c74def4ebfcdd083b2db4cbe0d2439fef08ed3b2b54748583fd722daad1_JOB-lint-linux: Error occurred running finally: All attempts fail:\n#1: Delete \"http://%2Frun%2Fpodman%2Fpodman.sock/v1.41/containers/1a1e6c7ef9ebd22c98f331ec2a73861586aa63a3ce05ac106fc48b7b6f8e0d39?force=1&v=1\": context deadline exceeded\n#2: context deadline exceeded (original error: <nil>)"
Feb 05 07:13:27 kristof2 forgejo-runner[2539]: time="2026-02-05T07:13:27+01:00" level=error msg="Error while cleaning network WORKFLOW-ed88f0eb449252dbf92af7ecafe3870b: Get \"http://%2Frun%2Fpodman%2Fpodman.sock/v1.51/networks\": context deadline exceeded"
Feb 05 07:13:27 kristof2 forgejo-runner[2539]: time="2026-02-05T07:13:27+01:00" level=info msg="UpdateTask returned task result RESULT_FAILURE for a task that was in local state RESULT_CANCELLED - beginning local task termination"
Feb 05 07:13:27 kristof2 forgejo-runner[2539]: time="2026-02-05T07:13:27+01:00" level=info msg="task 3902 repo is lhumina_code/hero_db https://code.forgejo.org https://forge.ourworld.tf/"
root@kristof2 ~ #
```
  • Runners 2-4, judging from their logs, also pick up jobs but then get stuck:

```sh
root@kristof2 ~ # journalctl -u forgejo-runner-2.service -S "2 hours ago" --no-pager | tail -n 200
Feb 05 07:13:47 kristof2 forgejo-runner[2594]: time="2026-02-05T07:13:47+01:00" level=info msg="task 3903 repo is lhumina_code/hero_redis https://code.forgejo.org https://forge.ourworld.tf/"
root@kristof2 ~ # journalctl -u forgejo-runner-3.service -S "2 hours ago" --no-pager | tail -n 200
Feb 05 07:47:45 kristof2 forgejo-runner[2674]: time="2026-02-05T07:47:45+01:00" level=info msg="task 3905 repo is lhumina_code/hero_embedder https://code.forgejo.org https://forge.ourworld.tf/"
root@kristof2 ~ # journalctl -u forgejo-runner-4.service -S "2 hours ago" --no-pager | tail -n 200
Feb 05 07:47:43 kristof2 forgejo-runner[2732]: time="2026-02-05T07:47:43+01:00" level=info msg="task 3904 repo is lhumina_code/hero_redis https://code.forgejo.org https://forge.ourworld.tf/"
root@kristof2 ~ #
```
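The "context deadline exceeded" pattern above can be counted per log stream with a small grep helper (a sketch; `count_timeouts` is a hypothetical name, and the sample input below is abbreviated from the logs above):

```sh
# Count lines containing "context deadline exceeded" on stdin.
count_timeouts() { grep -c 'context deadline exceeded'; }

# Demo on an inline sample: two timeout lines, one ordinary info line.
count_timeouts <<'EOF'
level=warning msg="failed to remove docker container ... context deadline exceeded"
level=error msg="Error while cleaning network ... context deadline exceeded"
level=info msg="task 3903 repo is lhumina_code/hero_redis ..."
EOF
# prints: 2
```

In practice the input would come from `journalctl -u forgejo-runner-N.service -S "2 hours ago"` for each runner unit.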

  • Podman itself is stuck:

```sh
root@kristof2 ~ # podman ps -a
^Croot@kristof2 ~ #
root@kristof2 ~ # systemctl status podman.socket
Failed to get properties: Transport endpoint is not connected
root@kristof2 ~ # systemctl status podman
Failed to get properties: Transport endpoint is not connected
root@kristof2 ~ #
```
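When `podman ps` hangs like this, a bounded probe avoids wedging the shell too. A minimal sketch (`probe` is a hypothetical helper; the simulated hang uses `sleep` so the logic is demonstrable without a broken Podman, and restarting `podman.socket` is the usual recovery step, not something taken from these logs):

```sh
# Run a command with a hard deadline; report whether it responded,
# hung (GNU timeout exits with 124 on expiry), or failed outright.
probe() {
  timeout 2 "$@" >/dev/null 2>&1
  case $? in
    0)   echo responsive ;;
    124) echo hung ;;
    *)   echo error ;;
  esac
}

probe true       # healthy command   -> prints "responsive"
probe sleep 10   # simulated hang    -> prints "hung"

# Real use: probe podman ps -a
# If it reports "hung", try: systemctl restart podman.socket
```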
Member
  • We hit a kernel Oops (page fault in kernel mode) that killed a rustc process, alongside a large accumulation of zombie sshd processes (`[sshd] <defunct>`). The crash happened on Ubuntu 24.04 / kernel 6.8.0-90-generic, and the stack trace points into Linux memory-management code (__lruvec_stat_mod_folio() / folio_remove_rmap_ptes() / exit_mmap()), i.e. a kernel bug / MM race.

The fix was to move to the Ubuntu 24.04 HWE kernel (now running 6.17.0-14-generic), which is expected to include fixes relative to the older 6.8 series.

```sh
root@kristof2 ~ # uname -r
6.17.0-14-generic
```
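The before/after kernel versions can be compared mechanically with `sort -V`; a small sketch (`kernel_at_least` is a hypothetical helper name):

```sh
# True when the running kernel version is >= the required one,
# using GNU sort's version ordering.
kernel_at_least() {  # usage: kernel_at_least <required> <running>
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

kernel_at_least 6.17 6.8.0-90-generic  && echo "6.8 ok"  || echo "6.8 too old"
kernel_at_least 6.17 6.17.0-14-generic && echo "6.17 ok" || echo "6.17 too old"
# prints: 6.8 too old
#         6.17 ok

# On the host itself: kernel_at_least 6.17 "$(uname -r)"
```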

Logs:

  • `rustc` process became dead (kernel killed the task)

```sh
root@kristof2 ~ # ps -p 1209070 -o pid,ppid,stat,wchan:40,comm,args
    PID    PPID STAT WCHAN                                    COMMAND         COMMAND
1209070 1201613 X    -                                        rustc           [rustc]

root@kristof2 ~ # cat /proc/1209070/stack

root@kristof2 ~ # cat /proc/1209070/status | egrep 'State|Name|Pid|PPid|Threads|voluntary_ctxt_switches|nonvoluntary_ctxt_switches'
Name:	rustc
State:	X (dead)
Pid:	1209070
PPid:	1201613
TracerPid:	0
Threads:	1
voluntary_ctxt_switches:	2
nonvoluntary_ctxt_switches:	16
```
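The zombie accumulation mentioned above can be quantified with a ps/awk one-liner; a sketch (the count depends on whatever zombies exist when it runs):

```sh
# Count processes in zombie (Z) state, as the defunct sshd children were.
# "stat=" prints the state column with no header.
ps -eo stat= | awk '$1 ~ /^Z/ {n++} END {print n+0}'
```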
  • Kernel Oops: page fault in __lruvec_stat_mod_folio() while exiting/unmapping

```sh
Feb 05 03:43:55 kristof2 kernel: BUG: unable to handle page fault for address: 0000000000024558
Feb 05 03:43:55 kristof2 kernel: #PF: supervisor read access in kernel mode
Feb 05 03:43:55 kristof2 kernel: #PF: error_code(0x0000) - not-present page
Feb 05 03:43:55 kristof2 kernel: PGD 0 P4D 0
Feb 05 03:43:55 kristof2 kernel: Oops: 0000 [#1] PREEMPT SMP NOPTI
Feb 05 03:43:55 kristof2 kernel: CPU: 10 PID: 1209070 Comm: rustc Not tainted 6.8.0-90-generic #91-Ubuntu
Feb 05 03:43:55 kristof2 kernel: Hardware name: Hetzner /B450 Pro4 R2.0, BIOS H8.02E 08/30/2024
Feb 05 03:43:55 kristof2 kernel: RIP: 0010:__lruvec_stat_mod_folio+0x57/0xc0
Feb 05 03:43:55 kristof2 kernel: Code: 48 8b 53 38 48 89 d0 48 83 e0 fc 83 e2 02 74 04 48 8b 40 10 48 85 c0 74 48 66 90 49 63 96 40 9e 02 00 48 8b bc d0 90 08 00 00 <4c> 3b b7 10 06 00 00 75 56 44 89 e2 44 89 ee e8 35 ff ff ff e8 a0
Feb 05 03:43:55 kristof2 kernel: RSP: 0018:ffffcd0e93097768 EFLAGS: 00010286
Feb 05 03:43:55 kristof2 kernel: RAX: ffff890800d81000 RBX: ffffef8f2d873780 RCX: 00000000ffffffff
Feb 05 03:43:55 kristof2 kernel: RDX: 0000000000000000 RSI: 0000000000000011 RDI: 0000000000023f48
Feb 05 03:43:55 kristof2 kernel: RBP: ffffcd0e93097788 R08: 00000000ffffffff R09: ffffef8f2d873780
Feb 05 03:43:55 kristof2 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 00000000ffffffff
Feb 05 03:43:55 kristof2 kernel: R13: 0000000000000011 R14: ffff8914ff355000 R15: ffffcd0e93097a70
Feb 05 03:43:55 kristof2 kernel: FS:  0000000000000000(0000) GS:ffff8914be700000(0000) knlGS:0000000000000000
Feb 05 03:43:55 kristof2 kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Feb 05 03:43:55 kristof2 kernel: CR2: 0000000000024558 CR3: 000000054fd76000 CR4: 0000000000350ef0
Feb 05 03:43:55 kristof2 kernel: Call Trace:
Feb 05 03:43:55 kristof2 kernel:  <TASK>
Feb 05 03:43:55 kristof2 kernel:  folio_remove_rmap_ptes+0x78/0xe0
Feb 05 03:43:55 kristof2 kernel:  zap_pte_range+0x24f/0xcd0
Feb 05 03:43:55 kristof2 kernel:  zap_pmd_range.isra.0+0x121/0x280
Feb 05 03:43:55 kristof2 kernel:  unmap_page_range+0x2c6/0x4f0
Feb 05 03:43:55 kristof2 kernel:  unmap_single_vma+0x89/0xf0
Feb 05 03:43:55 kristof2 kernel:  unmap_vmas+0xb5/0x190
Feb 05 03:43:55 kristof2 kernel:  exit_mmap+0x10a/0x3e0
Feb 05 03:43:55 kristof2 kernel:  __mmput+0x41/0x140
Feb 05 03:43:55 kristof2 kernel:  mmput+0x31/0x40
Feb 05 03:43:55 kristof2 kernel:  exit_mm+0xbe/0x130
Feb 05 03:43:55 kristof2 kernel:  do_exit+0x276/0x530
Feb 05 03:43:55 kristof2 kernel:  ? srso_return_thunk+0x5/0x5f
Feb 05 03:43:55 kristof2 kernel:  ? wake_up_state+0x10/0x20
Feb 05 03:43:55 kristof2 kernel:  ? srso_return_thunk+0x5/0x5f
Feb 05 03:43:55 kristof2 kernel:  do_group_exit+0x35/0x90
Feb 05 03:43:55 kristof2 kernel:  __x64_sys_exit_group+0x18/0x20
Feb 05 03:43:55 kristof2 kernel:  x64_sys_call+0x259b/0x25a0
Feb 05 03:43:55 kristof2 kernel:  do_syscall_64+0x7f/0x180
Feb 05 03:43:55 kristof2 kernel:  ? srso_return_thunk+0x5/0x5f
Feb 05 03:43:55 kristof2 kernel:  ? filemap_map_pages+0x2fe/0x4c0
Feb 05 03:43:55 kristof2 kernel:  ? srso_return_thunk+0x5/0x5f
Feb 05 03:43:55 kristof2 kernel:  ? do_read_fault+0x112/0x200
Feb 05 03:43:55 kristof2 kernel:  ? srso_return_thunk+0x5/0x5f
Feb 05 03:43:55 kristof2 kernel:  ? do_fault+0xf0/0x260
Feb 05 03:43:55 kristof2 kernel:  ? srso_return_thunk+0x5/0x5f
Feb 05 03:43:55 kristof2 kernel:  ? handle_pte_fault+0x114/0x1d0
Feb 05 03:43:55 kristof2 kernel:  ? srso_return_thunk+0x5/0x5f
Feb 05 03:43:55 kristof2 kernel:  ? __handle_mm_fault+0x654/0x800
Feb 05 03:43:55 kristof2 kernel:  ? srso_return_thunk+0x5/0x5f
Feb 05 03:43:55 kristof2 kernel:  ? __count_memcg_events+0x6b/0x120
Feb 05 03:43:55 kristof2 kernel:  ? srso_return_thunk+0x5/0x5f
Feb 05 03:43:55 kristof2 kernel:  ? count_memcg_events.constprop.0+0x2a/0x50
Feb 05 03:43:55 kristof2 kernel:  ? srso_return_thunk+0x5/0x5f
Feb 05 03:43:55 kristof2 kernel:  ? handle_mm_fault+0xad/0x380
Feb 05 03:43:55 kristof2 kernel:  ? srso_return_thunk+0x5/0x5f
Feb 05 03:43:55 kristof2 kernel:  ? arch_exit_to_user_mode_prepare.isra.0+0x1a/0xe0
Feb 05 03:43:55 kristof2 kernel:  ? srso_return_thunk+0x5/0x5f
Feb 05 03:43:55 kristof2 kernel:  ? irqentry_exit_to_user_mode+0x38/0x1e0
Feb 05 03:43:55 kristof2 kernel:  ? srso_return_thunk+0x5/0x5f
Feb 05 03:43:55 kristof2 kernel:  ? irqentry_exit+0x43/0x50
Feb 05 03:43:55 kristof2 kernel:  ? srso_return_thunk+0x5/0x5f
Feb 05 03:43:55 kristof2 kernel:  ? exc_page_fault+0x94/0x1b0
Feb 05 03:43:55 kristof2 kernel:  entry_SYSCALL_64_after_hwframe+0x78/0x80
Feb 05 03:43:55 kristof2 kernel: RIP: 0033:0x7844892d6f8d
Feb 05 03:43:55 kristof2 kernel: Code: Unable to access opcode bytes at 0x7844892d6f63.
Feb 05 03:43:55 kristof2 kernel: RSP: 002b:00007ffd58dcb948 EFLAGS: 00000206 ORIG_RAX: 00000000000000e7
Feb 05 03:43:55 kristof2 kernel: RAX: ffffffffffffffda RBX: 00007844893f2fe8 RCX: 00007844892d6f8d
Feb 05 03:43:55 kristof2 kernel: RDX: 00000000000000e7 RSI: ffffffffffffc840 RDI: 0000000000000000
Feb 05 03:43:55 kristof2 kernel: RBP: 00007ffd58dcb9a0 R08: fffffffffffff000 R09: 000078447f2010c0
Feb 05 03:43:55 kristof2 kernel: R10: 0000000000000000 R11: 0000000000000206 R12: 0000000000000a64
Feb 05 03:43:55 kristof2 kernel: R13: 0000000000000000 R14: 00007844893f1680 R15: 00007844893f3000
Feb 05 03:43:55 kristof2 kernel:  </TASK>
Feb 05 03:43:55 kristof2 kernel: Modules linked in: tls xt_addrtype xt_mark xt_conntrack xt_comment nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_compat nf_tables veth bridge stp llc overlay cfg80211 binfmt_misc nls_iso8859_1 intel_rapl_msr intel_rapl_common edac_mce_amd kvm_amd kvm irqbypass rapl wmi_bmof ccp k10temp i2c_piix4 gpio_amdpt mac_hid sch_fq_codel dm_multipath efi_pstore nfnetlink dmi_sysfs ip_tables x_tables autofs4 btrfs blake2b_generic raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 crct10dif_pclmul crc32_pclmul polyval_clmulni polyval_generic nvme ghash_clmulni_intel sha256_ssse3 sha1_ssse3 r8169 nvme_core realtek ahci xhci_pci xhci_pci_renesas libahci nvme_auth wmi aesni_intel crypto_simd cryptd
Feb 05 03:43:55 kristof2 kernel: CR2: 0000000000024558
Feb 05 03:43:55 kristof2 kernel: ---[ end trace 0000000000000000 ]---
```

Member
  • Started Runners again.
Owner

It seems good now. CI builds and tests are working. We should be able to close this issue afaik.

Reference: lhumina_code/hero_skills#2