TL;DR: What does it actually take to send pixels from one computer to another and have it feel instant? Turns out this is a beautiful engineering problem — every solution trades bandwidth for latency for quality. There are no free lunches, only interesting constraints.
Display cables feel like magic. Plug in HDMI or DisplayPort, and pixels just... appear. No lag. No compression artifacts. It just works.
But what's actually happening? Can I replicate it by daisy-chaining two computers?
I had a reason to find out: I wanted to use one monitor with two computers without swapping cables. What if the Mac was the KVM? I'd already gotten Mac-to-PC networking working over USB-C at 5-10 Gbps. That's real bandwidth. Could I send pixels over it?
How hard could it be?
Engineering is finding out.
In a perfect world, you'd send raw pixels. No compression. No quality loss. The GPU renders a frame, you grab it, you send it.
This is actually what your monitor does. DisplayPort and HDMI are streaming raw pixels at ridiculous bandwidth — DisplayPort 2.1 pushes 80 Gbps. No encoding. No decoding. Just bits flying down a wire.
5K resolution is 5120 × 2880 pixels. Each pixel is 4 bytes (RGBA). That's 58.98 megabytes per frame.
At 60fps, that's 3.5 gigabytes per second. Or about 28 Gbps.
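Here's that arithmetic as a quick sketch, using nothing but the numbers above:

```python
# Back-of-the-envelope: raw 5K at 60fps, 4 bytes per pixel (RGBA).
width, height, bytes_per_pixel = 5120, 2880, 4
fps = 60

frame_bytes = width * height * bytes_per_pixel   # 58,982,400 bytes ≈ 58.98 MB
throughput_bytes = frame_bytes * fps              # ≈ 3.54 GB per second
throughput_gbps = throughput_bytes * 8 / 1e9      # ≈ 28.3 Gbps

print(f"{frame_bytes / 1e6:.2f} MB/frame, {throughput_gbps:.1f} Gbps at {fps}fps")
# -> 58.98 MB/frame, 28.3 Gbps at 60fps
```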
Look at the visualization — the pipeline is beautiful. Capture: ~1ms. Send: ~1ms. Receive: ~1ms. Render: ~1ms. Total: under 5ms. This is what "cable feel" looks like.
If you have infinite bandwidth, you're done. Ship it.
I don't have infinite bandwidth. I have a USB-C cable running at 5 Gbps. Maybe 10 Gbps on a good day. I still haven't cracked getting it to 20 Gbps.
Here's the thing nobody tells you: you don't have a full second to send a frame. At 60fps, you have 16.67 milliseconds. And in those 16 milliseconds, you need to capture, send, receive, and render.
This is where my mental model broke and rebuilt itself.
5 Gbps sounds like a lot. But what does it actually mean for real-time video?
5 Gbps = 5,000 megabits per second
5,000 megabits ÷ 1000 milliseconds = 5 megabits per millisecond
At 60fps, I have 16.67ms per frame. At 5 Gbps, that's ~83 megabits I can transmit in one frame period.
My raw 5K frame is 472 megabits (58.98 MB × 8).
472 megabits doesn't fit in 83 megabits. Not even close.
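The same budget math as a sketch you can poke at; link_gbps is whatever your cable actually sustains (5 here, per the measurements above):

```python
# How many bits can cross the link during one frame period?
link_gbps = 5                                 # measured USB-C throughput
fps = 60
frame_period_ms = 1000 / fps                  # 16.67 ms

budget_megabits = link_gbps * 1000 / fps      # ≈ 83 Mb per frame period
raw_frame_megabits = 58.98 * 8                # ≈ 472 Mb for a raw 5K frame

print(f"{frame_period_ms:.2f} ms per frame -> budget {budget_megabits:.0f} Mb")
print("raw frame fits" if raw_frame_megabits <= budget_megabits else "raw frame does not fit")
# -> 16.67 ms per frame -> budget 83 Mb
# -> raw frame does not fit
```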
I'd been thinking about bandwidth wrong my entire career. "5 Gbps is plenty" only makes sense if you have seconds to work with. When you're measuring in milliseconds, the math is completely different.
The Mac shows a sad face because the frame simply won't arrive in time. Raw pixels at 5K/60fps need ~28 Gbps. I have 5. Broken.
Raw doesn't fit. So I compress.
HEVC is the industry standard. What Netflix uses. What game streaming uses. It's what everyone recommends.
HEVC can compress 5K video down to ~50-100 Mbps. That's 0.8-1.7 megabits per frame. Now we're talking. That fits in my 5 Gbps with room to spare.
I set up an NVENC encoder on Windows, sent HEVC frames over the network, decoded on the Mac. It worked.
Kind of.
HEVC is an inter-frame codec by default. It achieves those incredible compression ratios by referencing previous frames. "This pixel is the same as 3 frames ago, so I'll just point to it instead of encoding it again."
Great for Netflix. Problematic for real-time streaming.
If I lose a single frame, the decoder can't reconstruct subsequent frames. My entire pipeline breaks. One dropped packet and suddenly I'm staring at corrupted garbage until the next keyframe.
Solution: Force HEVC to intra-frame mode. Every frame is self-contained. Lose one, the next one still works. But now my compression ratios tank from 100:1 to maybe 20:1.
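A quick sanity check on what that costs. The ratios are the rough 100:1 and 20:1 figures above, so treat this as order-of-magnitude, not a benchmark:

```python
# Does the frame still fit the ~83 Mb per-frame budget at each compression ratio?
raw_frame_megabits = 472          # raw 5K frame, from earlier
budget_megabits = 83              # 5 Gbps over one 16.67 ms frame period

for name, ratio in [("inter-frame (~100:1)", 100), ("all-intra (~20:1)", 20)]:
    frame_megabits = raw_frame_megabits / ratio
    fits = "fits" if frame_megabits <= budget_megabits else "does not fit"
    print(f"{name}: ~{frame_megabits:.1f} Mb/frame -> {fits}")
# inter-frame (~100:1): ~4.7 Mb/frame -> fits
# all-intra (~20:1):   ~23.6 Mb/frame -> fits, with room to spare
```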
Even with intra-frame, encoding takes time: 8-11ms on NVENC hardware. That's half to two-thirds of the frame budget gone before we even hit the network.
Encode and Decode now eat into the budget. The network is fast, but I'm paying for it in compute.
At 60fps, 10ms of codec latency means I'm always teetering on the edge. One slow frame and I miss the deadline. Jitter city.
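To make "teetering" concrete, here's a rough tally. The encode range is the NVENC figure above, capture and render reuse the ~1ms numbers from the raw pipeline, the network time is a ~24-megabit intra frame at 5 Gbps, and the decode time is my own assumption, so read it as an illustration rather than a measurement:

```python
# Rough per-frame latency tally at 5K/60fps (illustrative, not measured).
FRAME_BUDGET_MS = 1000 / 60                     # 16.67 ms

def total_latency_ms(encode_ms, decode_ms=2.0): # decode_ms is an assumption
    capture_ms = 1.0                            # from the raw-pixel pipeline
    network_ms = 23.6 / 5000 * 1000             # ~24 Mb frame at 5 Gbps ≈ 4.7 ms
    render_ms = 1.0
    return capture_ms + encode_ms + network_ms + decode_ms + render_ms

for encode_ms in (8.0, 11.0):                   # NVENC intra-frame range quoted above
    total = total_latency_ms(encode_ms)
    print(f"encode {encode_ms:>4.1f} ms -> total {total:.1f} ms "
          f"(budget {FRAME_BUDGET_MS:.2f} ms)")
# encode  8.0 ms -> total 16.7 ms: right at the edge
# encode 11.0 ms -> total 19.7 ms: over budget, the frame slips
```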
Here's the thing that really got me: HEVC optimizes for bandwidth, not quality. It's designed to make videos small enough to stream over bad internet connections.
I have a USB cable. I don't need 50 Mbps. I need pixels that look correct.
Dark scenes? Blocky mess. The algorithm sees "these pixels are all similar-ish black" and crushes them together. Here's my solar eclipse wallpaper:

Original desktop with solar eclipse wallpaper
And here's the delta between raw and HEVC (amplified 10x to make artifacts visible):

Delta view: RGB differences amplified 10x. Concentric rings = DCT block artifacts. Menu bar rainbow = chroma subsampling.
See those concentric rings around the corona? That's HEVC's block-based compression struggling with subtle gradients. And the rainbow noise in the menu bar? That's 4:2:0 chroma subsampling: HEVC stores color at half resolution in each dimension (a quarter of the samples), and high-contrast text pays the price. This is at maximum quality settings.
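For a sense of how much color 4:2:0 throws away, here's the plane math on the 5K frame; this is purely about sample counts, not any particular encoder:

```python
# Chroma sample counts for a 5120x2880 frame.
width, height = 5120, 2880
luma_samples = width * height

# 4:4:4 keeps full-resolution chroma; 4:2:0 halves chroma in both dimensions.
chroma_444 = 2 * luma_samples                    # two full-resolution chroma planes
chroma_420 = 2 * (width // 2) * (height // 2)    # two quarter-resolution chroma planes

print(f"4:4:4 chroma samples: {chroma_444:,}")
print(f"4:2:0 chroma samples: {chroma_420:,} ({chroma_420 / chroma_444:.0%} of 4:4:4)")
# -> 4:2:0 keeps only 25% of the color samples; sharp, colorful edges
#    like menu-bar text are where the loss shows up as fringing.
```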
Text? Even worse. Sharp edges, high contrast, no motion to exploit: HEVC wasn't designed for this. The Windows desktop and Steam UI look like they're being viewed through fog.
HEVC optimizes for constrained networks. I have a dedicated cable. Different problem.
Apple designed ProRes for professional video editing, where color accuracy matters and the benchmark is RAW. It has properties that happen to be perfect for real-time streaming: every frame is intra-coded, chroma stays at 4:2:2 or better, and the bitrates are high enough that nothing gets crushed.
Text stays sharp. Dark scenes stay smooth. The codec isn't trying to outsmart the content; it's just compressing each frame cleanly.

Delta view: ProRes 422 HQ — not even 4444 — and already much cleaner. Softer gradients, no banding, no chroma fringing.
Look at the pipeline now — Encode and Decode are tiny. Network is bigger than HEVC but still fits. The total is well under budget.
ProRes doesn't have hardware acceleration on Windows.
For Apple to Apple streaming, ProRes is incredible. MacBook to Mac Studio over Thunderbolt? Chef's kiss. Visually very hard to tell the difference.
For Windows→Mac? I'd need to software-encode ProRes on the Windows side. That means CPU encoding at maybe 10fps instead of 60. Not viable.
This is the engineering reality: the "right" answer depends on which machines you're connecting.
Here's an idea: what if we didn't wait for each step to complete sequentially?
A frame is just a grid of pixels. What if I split it into horizontal stripes, encoded each stripe independently, and started sending the first stripe while I'm still encoding the rest?
That's striped encoding. The guest encodes stripes and pushes them to the network as they complete. The host receives stripes and starts decoding while more are still arriving. Two parallel pipelines instead of one sequential chain.
Look at the visualization — three tracks running simultaneously. The host starts working before the guest finishes. Wall-clock time drops because we're not waiting anymore.
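A minimal sketch of the idea, assuming a queue-based hand-off between an encode loop and a send thread. encode_stripe, send_stripe, and the stripe count are stand-ins for the real encoder and transport; the point is only that stripe N encodes while stripe N-1 is already on the wire:

```python
import threading, queue

NUM_STRIPES = 8          # illustrative; real stripe counts depend on the encoder

def encode_stripe(frame, i):
    """Stand-in for encoding one horizontal stripe of the frame."""
    rows = len(frame) // NUM_STRIPES
    return b"".join(frame[i * rows:(i + 1) * rows])

def send_stripe(payload):
    """Stand-in for pushing one encoded stripe onto the wire."""
    pass

def stream_frame(frame):
    # Encode stripes top to bottom, handing each off the moment it's done,
    # so the sender works on stripe N-1 while stripe N is still encoding.
    stripes = queue.Queue()

    def sender():
        while (payload := stripes.get()) is not None:
            send_stripe(payload)

    tx = threading.Thread(target=sender)
    tx.start()
    for i in range(NUM_STRIPES):
        stripes.put(encode_stripe(frame, i))
    stripes.put(None)                 # sentinel: frame finished
    tx.join()

# A "frame" here is just 2880 rows of bytes.
stream_frame([bytes(16) for _ in range(2880)])
```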
Split HEVC encoding is RTX 4090 only. NVIDIA's split encoding API requires their latest architecture. My 3090? Not supported.
So while this is conceptually beautiful — and the math works out to shave 3-5ms off the pipeline — I can't actually use it. Yet.
This is what engineering looks like. You find the elegant solution, then discover it requires hardware you don't have.
We're not done. But we understand the trade-offs:
For now, HEVC at the highest bitrate I can push, forced to intra-frame mode, is the pragmatic choice for Windows→Mac. It works. It's not perfect (text is fuzzy, dark scenes suffer, and there's the occasional jitter), but it's playable.
Think in bits per millisecond, not bits per second. For real-time systems, 5 Gbps = 5 megabits per millisecond. Frame budgets change the math.
Every codec has a design goal. HEVC minimizes bandwidth at the cost of latency and quality. ProRes preserves quality for professional editing — and happens to be fast. Raw is the baseline. Know what each tool optimizes for.
The laws of physics aren't optional. DisplayPort feels instant because it's scanout — pixels stream directly from GPU buffer to wire, line by line, no frame buffering, with guaranteed bandwidth.
Here's the interactive version. Change the connection type, resolution, and frame rate to see how your setup would perform:

Every solution so far trades quality and latency for bandwidth. HEVC compresses a 60MB frame down to 0.2MB. But I have 5 Gbps — that's ~10MB per frame I could be using. I'm leaving 98% of my bandwidth on the table.
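Putting a number on "on the table", using the figures above:

```python
# How much of the per-frame link budget does one HEVC frame actually use?
budget_mb = 5e9 / 8 / 60 / 1e6      # bytes a 5 Gbps link moves in 16.67 ms ≈ 10.4 MB
hevc_frame_mb = 0.2                 # the compressed-frame size quoted above

print(f"{hevc_frame_mb / budget_mb:.1%} of the link used per frame")
# -> ~1.9%: the other ~98% of the cable sits idle every frame
```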
What if I made a different trade-off? Use more bandwidth to buy back quality and speed. A codec that's dumber but faster. Compress less, transfer more, decode faster.