Homu retry log - rust

Time (UTC) PR Message
2025-03-31 22:20:23 139185
@bors retry
2025-03-31 20:35:00 139182
@bors retry
2025-03-31 12:33:51 138749
network problems @bors retry
2025-03-30 03:42:28 139000
ugh race condition @bors r- retry
2025-03-29 06:15:07 139067
@bors retry
2025-03-28 15:21:29 139054
@bors retry
2025-03-28 14:43:51 138745
@bors retry
2025-03-28 12:50:39 138478
This is the same failure as the previous time (maybe due to OOM?), but let's try again.
@bors retry
2025-03-28 03:22:49 138478
Seems unrelated to this PR.

@bors retry
2025-03-28 03:22:38 139037
looks spurious

@bors retry
2025-03-25 12:48:20 138923
Job not picked up
@bors retry
2025-03-25 02:21:34 138912
@bors retry
2025-03-24 23:10:27 138909
The error seems unrelated to the pull request. @bors retry
2025-03-24 21:24:28 138634
@bors retry GHA internal error
2025-03-24 10:22:04 138884
@bors r- retry
2025-03-24 07:22:14 138878
@bors retry r-
2025-03-24 07:09:04 138848
>>> warning: spurious network error (3 tries remaining): [28] Timeout was reached (download of `object v0.36.7` failed to transfer more than 10 bytes in 30s)
  warning: spurious network error (3 tries remaining): [28] Timeout was reached (failed to download any data for `adler2 v2.0.0` within 30s)
  warning: spurious network error (2 tries remaining): [28] Timeout was reached (download of `object v0.36.7` failed to transfer more than 10 bytes in 30s)
  warning: spurious network error (2 tries remaining): [28] Timeout was reached (failed to download any data for `adler2 v2.0.0` within 30s)
    Downloaded adler2 v2.0.0
  warning: spurious network error (1 tries remaining): [28] Timeout was reached (failed to download any data for `object v0.36.7` within 30s)
  error: failed to download from `https://static.crates.io/crates/object/0.36.7/download`

Spurious
@bors retry
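The cargo output above shows its download retry loop counting down "tries remaining" before giving up. The same pattern can be sketched as a small generic helper; `fetch` here is a stand-in for any flaky network call, and nothing below is real cargo code.

```rust
use std::fmt::Display;

/// Retry a fallible operation up to `max_tries` times, logging each
/// spurious failure the way cargo's download retry loop does.
/// (Illustrative sketch only; `fetch` stands in for any flaky network call.)
fn retry<T, E: Display>(
    max_tries: u32,
    mut fetch: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut tries_left = max_tries;
    loop {
        match fetch() {
            Ok(v) => return Ok(v),
            Err(e) if tries_left > 1 => {
                tries_left -= 1;
                eprintln!("warning: spurious error ({tries_left} tries remaining): {e}");
            }
            // Out of retries: surface the final error to the caller.
            Err(e) => return Err(e),
        }
    }
}

fn main() {
    // Simulated flaky download: fails twice, then succeeds on the third try.
    let mut attempts = 0;
    let result = retry(3, || {
        attempts += 1;
        if attempts < 3 {
            Err("timeout was reached")
        } else {
            Ok("adler2 v2.0.0")
        }
    });
    println!("{}", result.unwrap()); // prints "adler2 v2.0.0" after two retries
}
```

This only covers errors that are genuinely transient; a deterministic failure just burns through the attempt budget, which is why the log distinguishes "spurious" retries from real breakage.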
2025-03-24 06:08:11 138580
Sorry, bootstrap CI LLVM is currently broken with regard to perf.
@bors retry r-
@bors treeclosed=100
2025-03-24 06:04:56 137995
Sorry, perf and CI LLVM are a bit broken at the moment. Please re-approve once bootstrap & perf are fixed.
@bors retry r-
2025-03-24 00:11:58 138859
@bors r- retry
2025-03-23 20:58:08 138859
yield to larger rollup

@bors retry
2025-03-23 20:57:45 138866
@bors retry
2025-03-23 18:42:57 138859
@bors retry network error crates io
2025-03-22 09:07:56 136974
@bors retry
2025-03-21 18:49:50 138580
@bors retry
2025-03-20 20:09:47 138747
@bors retry
2025-03-19 20:59:19 138714
@bors retry
2025-03-18 17:58:19 138661
@rustbot ping fuchsia
This is the second fuchsia error in a row, in a PR that's just a revert, so this kind of has to be spurious, right? No PR has landed since ~6h ago, so it seems like CI might just be broken by the fuchsia job right now?

@bors retry odd fuchsia error
2025-03-18 16:44:17 138653
@bors retry
2025-03-18 14:46:06 138661
@bors retry Fuchsia networking issue 
2025-03-18 13:30:03 138653
@bors retry
2025-03-18 12:06:06 138515
@bors retry
2025-03-18 05:24:40 138630
@bors retry
2025-03-17 15:56:53 138515
@bors retry
2025-03-17 10:20:47 136929
That failure seems spurious...
@bors retry
2025-03-16 05:07:22 138525
dead runner?

@bors retry
2025-03-15 12:27:12 137665
Ok, looks good.

@bors retry
2025-03-14 13:55:27 138452
@bors retry

More Windows segfaults..
2025-03-14 01:00:29 138157
```
2025-03-14T00:40:22.1512202Z --- stderr -------------------------------
2025-03-14T00:40:22.1580734Z error: couldn't create a temp dir: Access is denied. (os error 5) at path "C:\\Users\\RUNNER~1\\AppData\\Local\\Temp\\rustcfzdxGW"
2025-03-14T00:40:22.1581195Z 
2025-03-14T00:40:22.1581369Z error: aborting due to 1 previous error
2025-03-14T00:40:22.1581672Z ------------------------------------------
```
@bors retry (x86_64-pc-windows-gnu transient Windows filesystem access issue)
2025-03-13 10:22:19 137665
That looked like a spurious network error.

@bors retry
2025-03-12 23:12:36 138416
This is a classic MSVC spurious failure.
@bors retry
2025-03-12 08:17:07 137612
The failure doesn't seem relevant to this PR.

@bors retry
2025-03-12 07:59:47 138388
@bors retry
2025-03-10 23:48:40 138302
@bors retry (access denied spurious)
2025-03-10 18:40:38 138302
@bors retry
```
2025-03-10T18:36:02.0742736Z thread 'main' panicked at src/tools/remote-test-client/src/main.rs:309:9:
2025-03-10T18:36:02.0743637Z client.read_exact(&mut header) failed with Connection reset by peer (os error 104)
2025-03-10T18:36:02.0744533Z note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
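The panic above comes from `read_exact` failing when the remote end drops the socket. The error kind is observable without panicking, which is what lets tooling classify such a failure as retryable. A minimal sketch (`ResettingReader` is a made-up mock; this is not real remote-test-client code):

```rust
use std::io::{self, Read};

/// A mock reader that simulates the remote end dropping the connection,
/// as in the remote-test-client failure above. (`ResettingReader` is a
/// made-up name for illustration.)
struct ResettingReader;

impl Read for ResettingReader {
    fn read(&mut self, _buf: &mut [u8]) -> io::Result<usize> {
        Err(io::Error::new(
            io::ErrorKind::ConnectionReset,
            "Connection reset by peer",
        ))
    }
}

fn main() {
    let mut header = [0u8; 8];
    // read_exact surfaces the reset as an io::Error instead of panicking,
    // so the caller can decide whether the failure is worth a retry.
    match ResettingReader.read_exact(&mut header) {
        Err(e) if e.kind() == io::ErrorKind::ConnectionReset => {
            eprintln!("retryable: {e}");
        }
        other => panic!("unexpected result: {other:?}"),
    }
}
```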
2025-03-10 17:37:33 138302
@bors retry
2025-03-08 19:25:01 138200
Seems unrelated 

@bors retry
2025-03-06 20:36:20 138114
@bors retry
2025-03-06 16:50:44 138039
I can't seem to repro this locally on mingw. Let's see if it's spurious; I'll open an issue to track this.

@bors retry
2025-03-06 07:36:21 138079
Looks like no luck yet

@bors r- retry
2025-03-06 04:34:30 136780
@bors retry

GitHub credits issue.
2025-03-05 17:32:58 137513
> The hosted runner encountered an error while running your job. (Error Type: Disconnect).

@bors retry


2025-03-05 17:03:01 137011
@bors retry
2025-03-04 12:07:08 137373
@bors retry (spurious mingw)
2025-03-04 10:08:13 137373
@bors retry
2025-03-04 08:12:46 137985
@bors retry
2025-03-03 18:24:49 137925
The rfl job is a rustup error and is getting disabled.

@bors retry
2025-03-03 16:31:53 137927
@bors retry
2025-03-03 16:13:16 137914
@bors retry
2025-03-03 13:54:04 137373
Oh... I think that might be a `rust-lld` issue where sometimes it just *dies*.

Let's see on a retry.

@bors retry
2025-03-03 11:33:25 137921
@bors retry
2025-03-03 10:39:42 137373
@bors retry

Trying again after ~30 minutes.
2025-03-03 10:10:03 137914
@bors retry
2025-03-03 10:03:31 137373
@bors p=101
@bors retry
2025-03-03 09:45:55 137373
@bors retry
2025-03-03 09:44:31 135695
@bors retry
@bors treeclosed ghcr network errors
2025-03-03 09:29:02 137914
@bors retry
2025-03-03 09:12:41 135695
@bors retry network error

sigh..
2025-03-03 08:32:59 137921
@bors retry Get "https://ghcr.io/v2/": context deadline exceeded
2025-03-03 08:32:39 135695
@bors retry network error
2025-03-03 08:29:41 137918
@bors retry spurious network error
2025-03-03 07:14:36 137914
@bors retry
2025-03-01 22:20:58 137855
 @bors retry
2025-03-01 18:12:18 137752
@bors retry
2025-03-01 05:21:59 133250
@bors retry
2025-02-28 10:56:38 137517
@bors retry
2025-02-28 10:15:01 137775
@bors retry
2025-02-28 09:50:42 137775
@bors retry
2025-02-27 12:02:40 137710
@bors retry
2025-02-27 10:54:22 137669
@bors retry
2025-02-27 10:25:58 137710
@bors retry
2025-02-27 07:43:05 137425
@bors retry for the time being 
2025-02-27 05:55:56 135695
what the fuck?????
@bors retry 
2025-02-25 07:44:16 133832
@bors retry
2025-02-25 04:52:53 136921
@bors retry
2025-02-23 20:31:26 137497
Yeah sorry meant to do that but got distracted.

@bors r- retry
2025-02-23 20:17:51 137476
Unfortunately it looks like dist-powerpc64le-linux is stuck, last log update was two hours ago and all other jobs are complete. I don't think this would be a side effect of the changes here.

@bors r- retry
2025-02-21 16:21:11 137189
With the switch to smaller builders, updating CI config now more often ends in timeouts :disappointed: Let's try again.

@bors retry
2025-02-21 15:00:46 136921
@bors retry
2025-02-20 22:33:42 137189
Sounds spurious.

@bors retry
2025-02-19 19:19:03 137123
@bors retry temp for failure on windows
2025-02-19 17:57:37 137023
This essentially wiped the sccache cache, and this builder didn't like it, lol. Let's try again with the cache (hopefully) primed.

@bors retry
2025-02-18 17:29:31 137226
@bors r- retry
2025-02-18 07:48:01 135408
@bors r- retry
There's still something wrong with `tests/codegen/abi-x86-sse.rs`.
2025-02-17 02:11:55 136804
@bors retry r- (still in the queue (unapproved ofc))