Self-Hosted GitHub Runner
There has been significant activity this past month regarding the WTF-Model. After getting accustomed to streaming and video tools like OBS (useful later for producing video content for the WTF), I realized that—besides the existing pipeline tests for the Web Release—I also needed at least some smoke tests for the Steam Release.
The problem: in the past, I occasionally introduced changes that broke the Steam build. Since my development primarily focuses on the Web Release (with the Steam Release just inheriting downstream from that), these issues often went unnoticed until much later.
As the Steam Build is tailored for Windows end-consumers, the real blocker was—surprise—testing on Windows. And a standard GitHub runner for that wasn’t an option.
For context: a runner is the machine executing the CI/CD pipeline—testing code, building, and deploying the application.
While GitHub does provide Windows runners, Steam & Steamworks authentication makes this impractical: Steam’s partner program is notoriously strict and airtight regarding authentication, even for development and testing purposes.
Long story short: I inevitably had to set up a self-hosted runner at home. As a nice side effect, this also significantly improves speed compared to GitHub’s cloud runners, since it allows local caching. My main network server (Linux) was already fully booked providing other services, so I decided to get a small dedicated rig specialized for the runner task:

I chose a low-end Intel NUC with minimal storage and RAM—just enough for its purpose—keeping costs down to a few hundred euros. I went with Windows Server 2025, zip-tied the new cable overhead to the VESA-mounted NUC behind the iMac, and connected it directly to my main machine. With NAT configured on the iMac, the NUC accesses the internet only through it, giving me a lightweight isolation setup without fully integrating it into the main network.

The registration with GitHub was straightforward, and I quickly had the NUC up and running as an independent runner. No issues here at all - it just worked, which was a pleasant surprise.
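For reference, the registration boils down to downloading the runner package, extracting it, and pointing the config script at the repository with a registration token from the Actions settings page. A rough sketch in PowerShell; the runner version, folder, repository URL, and token are placeholders, and GitHub’s “New self-hosted runner” page shows the exact current commands:

```powershell
# Placeholder folder; any local path works.
New-Item -ItemType Directory -Path C:\actions-runner | Out-Null
Set-Location C:\actions-runner

# Download and extract the runner package (the version below is only an example).
Invoke-WebRequest -Uri 'https://github.com/actions/runner/releases/download/v2.319.1/actions-runner-win-x64-2.319.1.zip' -OutFile 'actions-runner.zip'
Expand-Archive -Path 'actions-runner.zip' -DestinationPath .

# Register against the repository and install as a Windows service so the
# runner comes back up after a reboot.
.\config.cmd --url https://github.com/OWNER/REPO --token YOUR_REGISTRATION_TOKEN `
    --name wtf-nuc --labels windows,steam --unattended --runasservice
```

A workflow job then picks it up via `runs-on: [self-hosted, windows]` together with any custom labels.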

The following week was spent working through the existing pipeline code and salvaging parts that eventually contributed to building a solid new workflow with WPF & WebView2 smoke tests. In contrast to the usual Bash and Python scripts that run the Linux pipelines, here I used PowerShell 7 exclusively.

Just as for the Web Release earlier, I used Playwright here as well. I additionally modified the C# .NET application to allow hooks for it when the GitHub CI environment variable is detected, making debugging seamless.
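For local debugging this is easy to emulate, since GitHub Actions sets `CI=true` on its runners automatically; setting the same variable by hand before launching the app enables the same hooks. A minimal sketch (the executable path and name are placeholders):

```powershell
# GitHub Actions exports CI=true on its runners; setting it manually lets the
# app enable its CI-only hooks during a local debug run as well.
if (-not $env:CI) {
    $env:CI = 'true'
}

# Placeholder path; not the real install location of the app.
Start-Process -FilePath '.\publish\WtfModel.exe'
```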

In the end, I implemented two complementary smoke tests that do not affect production (a rough sketch of both checks follows the list):

- UIAutomation (`Run-App-UiAutomationSmoke.ps1`)
  - What it does: Starts the app; on interactive desktops, waits for the main WPF window and detects WebView2. In Session 0, falls back to DevTools/liveness.
  - Verifies: The app launches; (interactive) a real window renders; WebView2 is hosted.
- CI Bridge (`Run-App-BridgeSmoke.ps1`)
  - What it does: Uses the app’s CI-only TCP bridge (127.0.0.1:38999) to read status, discover Kestrel, fetch `/index.html` with `X-WTF-Auth`, and evaluate simple JS.
  - Verifies: Chunks are unpacked; Kestrel serves the real site; the auth path works; the app remains stable.
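To make the mechanics more concrete, here is a rough, simplified sketch of both checks in PowerShell 7. The executable name, the Kestrel address, and the auth token value are assumptions for illustration; only the bridge port (127.0.0.1:38999), the `X-WTF-Auth` header, and the overall flow come from the post:

```powershell
# Simplified sketch of both smoke checks; not the real scripts.

# --- UIAutomation-style check: launch the app and wait for a main window ---
$app = Start-Process -FilePath '.\publish\WtfModel.exe' -PassThru   # placeholder path
$deadline = (Get-Date).AddSeconds(60)
while ((Get-Date) -lt $deadline -and $app.MainWindowHandle -eq [IntPtr]::Zero) {
    Start-Sleep -Milliseconds 500
    $app.Refresh()   # re-read process info so MainWindowHandle gets updated
}
if ($app.MainWindowHandle -eq [IntPtr]::Zero) {
    throw 'Main WPF window never appeared.'
}

# --- Bridge-style check: probe the CI-only TCP bridge, then hit Kestrel ---
$tcp = [System.Net.Sockets.TcpClient]::new()
try {
    $tcp.Connect('127.0.0.1', 38999)   # port taken from the post
}
catch {
    throw 'CI bridge is not listening on 127.0.0.1:38999.'
}
finally {
    $tcp.Dispose()
}

# Kestrel address and token are assumptions; the real script discovers them
# through the bridge.
$kestrel = 'http://127.0.0.1:5000'
$response = Invoke-WebRequest -Uri "$kestrel/index.html" -Headers @{ 'X-WTF-Auth' = '<token>' }
Write-Host "Kestrel answered with HTTP $($response.StatusCode)."
```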
Together, these tests provide a pragmatic “green light” for deployment. They are smoke tests, so gaps remain, but this is a solid foundation to build on.
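In the pipeline itself, the gate stays deliberately simple: run both scripts in sequence and let the first uncaught error fail the job before any deployment step runs. Something along these lines (paths assumed relative to the checkout):

```powershell
# Hypothetical gate step: run both smoke scripts and stop on the first failure
# so the job goes red before anything is deployed.
$ErrorActionPreference = 'Stop'

& .\Run-App-UiAutomationSmoke.ps1
& .\Run-App-BridgeSmoke.ps1

Write-Host 'Both smoke tests passed - green light for deployment.'
```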

This was quite a piece of work and, like the streaming setup, caused some delays. Still, I’m committed to pushing forward, and I believe things are moving in the right direction. As I plan to ship the WTF-Model commercially, proper testing is non-negotiable—I can’t expect customers to tolerate buggy software or failed updates.
Additional progress was also made on IP compliance and its related imagery AI rework, but that will be covered in a future post.