Hacker News | bradgessler's comments

What’s the advantage of using fiber optics for home networking over 10Gbe Ethernet?

Many people believe running fibre between buildings, even in ducts, is safer than running copper because you get opto-isolation from lightning.

The second thing is that domestic buildings usually do not come with a consistent ground plane. I worked in a 1960s building purpose-built for mainframes, and we had ~48V floating between racks at either end of the building; in the 90s we had to do a shitload of work to reground the building (we were decommissioning an IBM 3033 and deploying a secondhand Cray-1). The point being: somehow, God knows how, the prior RS-232 serial wiring didn't care, and the ground plane for the mainframe was fine at the time. Pre-Ethernet, this stuff maybe just passed code.

I suspect people who build their own home to some spec acquire these theories. Data comms? Not much reason tbh unless you're pushing a lot more data than normal.


> The second thing is that domestic buildings usually do not come with a consistent ground plane.

Ordinary unshielded copper Ethernet doesn’t care: it’s transformer-isolated at both ends. Shielded cable may object to carrying any substantial amount of current through the shield.

Anyway, there are a handful of good reasons to use fiber:

- Length. Copper is specced for 100m. Panduit will sell fancy copper cable that they pinky-swear works for Ethernet at 150m. Single mode fiber will work at silly long distances.

- High bandwidth. Copper will do 10Gbps. High speed specs exist, but there is approximately zero commercial availability of anything beyond 10G using copper at any appreciable distance. Fiber has no such problems.

IMO if you are running fiber anywhere that makes it awkward to replace (i.e. not just within a single room), use single mode. Multimode fiber has gone through ~5 revisions over the last few decades, and the earlier ones have very unimpressive bandwidth capabilities at any reasonable distance. Even the latest version, where truly heroic engineering has gone into reducing modal dispersion, relies on fancy multistrand cables for the faster Ethernet speeds. Single-mode fiber, meanwhile, continues to work very well and supports truly huge bandwidth at rather long distances, and even decades-old fiber supports the latest standards. And the transceivers for single-mode fiber are no longer much more expensive than multimode transceivers.


Late edit: one exception is A/V. Sometimes you want fiber for applications other than networking. There are A/V applications for fiber, for example. If you need this, use what the thing you’re using the fiber for requires, and consider putting it in conduit. If your application calls for MMF, consider using the highest grade you can get at a reasonable price, which is probably OM4.

Also, preterminated fiber is a thing. While it’s not that hard to terminate MMF, it’s still easier and more reliable to buy preterminated fiber. SMF terminations are apparently much more sensitive to being done perfectly, and buying preterminated fiber is wise. (I’ve never personally terminated any fiber, but I have installed and connected fiber, and it’s delightful to just plug it in.)


It's one of those "just because" moments. The idea was to future-proof my home infra for a 25G NAS connection. Most Ethernet connections tap out at 10G. While Cat 8 cables can theoretically do 40G, hardware support for full 40G Cat 8 NICs is rare. Fibre is very, very flexible with its potential bandwidth, and SFP28 transceivers are relatively affordable (if you don't do what I did and use SMF; home networks only need MMF unless the property is a mansion.)

I still prefer SMF over MMF. It also allows me to "move" where the city fiber comes in and patch it to another location without any active hardware. So I can for example move my router into another room.

I ran SMF and have no regrets. https://sschueller.github.io/posts/wiring-a-home-with-fiber/


I'd say the opposite - any fibre in a wall should probably be single-mode fibre (SMF), simply for future proofing. Single mode optics aren't much more expensive and single mode fibre hasn't changed nearly as many times as multi-mode. You can run 25G over the same SMF that once ran 1G - not so with MMF.

10GbE RJ45 (normal Ethernet jack) SFP+ modules tend to burn power and get extremely hot, to the point of burning you if you touch one - the manual for my switch said to leave adjacent ports unoccupied when using one of those. The fiber ones run cool to the touch.

Also, you don't need to rerun any cabling if you want to bump up speeds in the future; you just change the laser module on either end. These should be good to >100x current speeds. Not the case with copper.


"10GBASE-T runs hot" is only half true.

The real problem here is that 10GBASE-T is ancient: the spec dates back to 2006! And worst of all, it only saw lukewarm adoption in the datacenter industry, so there hasn't really been a reason for manufacturers to refresh their lineups. This means the SFP+ transceiver you buy in 2026 might be using chips manufactured on a 20-year-old node. No wonder it runs hot!

2.5GBASE-T and 5GBASE-T use essentially the same technology, but you don't hear anyone complaining about them running hot: hardware for these only recently became available due to consumer demand, so it is being manufactured with more modern processes, which means lower power use.

It's still going to consume more power than fiber, but a modern 10GBASE-T SFP+ transceiver should not be burning hot.


Realtek recently made a new 10GBASE-T NIC that consumes much less power:

https://www.servethehome.com/cheap-10gbe-realtek-rtl8127-nic...


Ah, thanks! That helps explain why it seemed like a sort of poorly executed/ill conceived product. Any good brands you know of? I think I just got something random from FS.com or maybe even Amazon.

Fiber runs cool because it's operating well within the physical capacity of the channel. Copper needs an incredibly high signal to noise ratio to overcome the limitations of its medium. Copper will consume 5-10x more power than fiber for the same # of bits transmitted.

10G is looking to be the end for twisted-pair copper.

25GBASE-T and 40GBASE-T were standardized 10 years ago, but there are still basically zero products that support them. The datacenter market just wasn't interested and chose fiber and DAC instead. Worst of all, they require Cat8 cables and are limited to 30 meters. This means they can't reuse existing cabling and don't have the reach for many home applications - OP's blog post mentions the longest run in their apartment being 55 meters.

Combine that with the general death of wired networking for home & office use, and it is extremely unlikely the market of hardcore tech enthusiasts is big enough to warrant massive investments into developing some kind of 25G-over-Cat6-for-100m standard.

10G is pretty much the standard for high-end gear these days. This means any kind of future-proof setup needs to be prepared for a future upgrade to a fiber-based technology.


Ethernet can be run over copper or fiber cabling; it's not an alternative to fiber networking. Assuming you meant "what's the advantage of fiber over copper": faster speeds, longer distances, and lower power on fiber, plus it's not electrically coupled.

(speeds: 100 gig today, but faster speeds are coming.)


They get less hot (especially the network adapters on the ends of them), can go a lot further, can be a lot denser (including being able to carry things other than Ethernet in the same bundle), and are a lot more future-proof (unless the cable jacket literally crumbles at the slightest movement in a few years).

For me, it was a nearly 100ft run where, try as I might to get a good termination, I'd often find my Mac Studio and Ubiquiti EdgeSwitch struggling to negotiate more than 5Gbps. So I got a smaller switch upstairs, ran 10GbE to it over approximately 10ft, and then ran OM3 fiber for the 100ft up into my attic, across my house, and down into the garage. Rock solid at 10Gbps.

If they're using SFP+, it's almost certainly Ethernet on the fiber. Do you mean, why not use copper twisted pair?

Ethernet runs on many mediums, as well as over the ether.


For me, I wanted my networking to be future-proof over the next 25 years if I'm going to be putting all that work into wiring up the house. My ISP already offers 5Gb/s upstream and will most likely offer 10Gb/s in the coming years.

I'm kind of going the opposite way. I've got 10gig internet, and I planned on running fiber for future-proofing in our remodel, but the past 20 years have taught me that fiber's day has been coming "real soon now" for a long time... and then copper keeps up because everyone uses it.

I kinda want some just 'cause it's cool, with the only problem being that I haven't been able to find an excuse to justify (to myself) needing it.

Also much lower power and heat.

This is the most compelling reason (unless you really need the range of fiber) - 10GbE can be really power hungry. Each 10G switch port that is in use adds something like 1-5 watts to the power budget. 1 watt is reasonable but most switch hardware isn't nearly that efficient. That could mean 10 watts for every single link if you're using 5 watts at each end. Multiply that by several links and it starts to add up really quickly.
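
A quick back-of-the-envelope sketch of that budget, using the 1-5 W per-port figures above (these are the comment's rough estimates, not measurements):

```python
# Power budget for 10GBASE-T links: every link has a port at each end.
def link_watts(watts_per_port: float) -> float:
    """One link burns power at both ends."""
    return watts_per_port * 2

def budget_watts(links: int, watts_per_port: float) -> float:
    """Total draw for several links at a given per-port figure."""
    return links * link_watts(watts_per_port)

# Efficient vs. typical hardware, for a modest 4-link home network.
print(budget_watts(4, 1.0))  # 8.0  -> best case
print(budget_watts(4, 5.0))  # 40.0 -> power-hungry case
```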

Server-grade fiber optics can get to terabit/sec speeds. In other words, ludicrous speed.

(He’s gone, plaid!)


Yeah, software orgs ship their promotion structures.

This explains why all commercial software enshittifies.

And why open source UIs are anarchist. :)

https://terminalwire.com

It’s “Hotwire for command-line apps”, meaning you can ship a CLI in a Rails app without building an API. The dream is to make it work for all major web frameworks.

Terminalwire streams stdio, browser launch commands, and a few other things needed to ship a CLI for a SaaS quickly.

The best part is when you want to ship a feature for the CLI, you don’t have to worry about pushing out updates to clients and making sure it’s compatible with your API.

A more interesting development is companies that are using it as a replacement for MCP in AI stacks. They're reporting less token usage and better overall results.


Dustin Curtis wrote about a similar incident at https://dcurt.is/apple-card-can-disable-your-icloud-account

Slightly different issue involving the Apple credit card, but it’s just as insane that there’s no separation between the different parts of Apple.

For that reason I will never have an Apple Card, and I guess I won’t be redeeming Apple gift cards with my Apple ID.


If you run an Apple HomeKit stack and don’t need all the other stuff HA offers, I recommend checking out https://homebridge.io


There is a lot of complexity lurking there. At a second home I have HA running on an RPi and also an Apple Homepod Mini (with Matter). I wish someone could explain to me how these two devices do (or do not) interact, and how I can get Apple Home on my iPhone to play nicely with HA both locally and (when I'm away) over Tailscale (so I can do remote operation of thermostats and a heat pump on wi-fi).

Is this the kind of scenario that I should be asking an LLM to sort out for me ?


A big problem with CLI tooling is it starts off seeming like it’s an easy problem to solve from a devs perspective. “I’ll just write a quick Go or Node app that consumes my web app’s API”

Fast forward 12-18 months: after several new features ship and several breaking API changes are made, teams that ship CLIs start to realize it's actually a big undertaking to keep installed CLI software up to date with the API. It turns out there's a lot of auto-updating infrastructure that has to be managed, and even if the team gets that right, it can still be tricky managing which versions get deprecated and which don't.
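
The deprecation side of that alone tends to grow into something like this hypothetical client-side check (all names and version numbers invented for illustration):

```python
# Hypothetical compatibility gate: the server publishes the minimum CLI
# version it still supports, and older clients must refuse to run.
CLIENT_VERSION = (1, 4, 2)  # invented example version

def parse_version(s: str) -> tuple:
    """Turn '1.2.0' into (1, 2, 0) for tuple comparison."""
    return tuple(int(part) for part in s.split("."))

def still_supported(min_supported: str) -> bool:
    """True if this client hasn't been deprecated by the API yet."""
    return CLIENT_VERSION >= parse_version(min_supported)

print(still_supported("1.2.0"))  # True  -> keep running
print(still_supported("2.0.0"))  # False -> prompt the user to update
```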

I built Terminalwire (https://terminalwire.com) to solve this problem. It replaces JSON APIs with a smaller API that streams stdio (kind of like ssh), and other commands that control browsers, security, and file access to the client.

It’s so weird to me how each company wants to ship their own CLI and auto-update infrastructure around it. It’s analogous to companies wanting to ship their own browser to consume their own website and deal with all the auto update infrastructure around that. It’s madness.
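
To make the stdio-streaming idea concrete, here is a toy sketch in Python. This is not Terminalwire's actual protocol (which also handles browser launches, security, and file access); it only illustrates the shape: command logic lives server-side and the thin client just ferries bytes.

```python
import socket
import threading

# Toy sketch only: the "server" owns all command logic, so shipping a
# new CLI feature never means shipping a client update.
srv = socket.create_server(("127.0.0.1", 0))  # OS picks a free port
port = srv.getsockname()[1]

def serve_once():
    """Server-side 'CLI': read a line of client stdin, write a response."""
    conn, _ = srv.accept()
    with conn:
        name = conn.recv(1024).decode().strip()
        conn.sendall(f"hello, {name}\n".encode())

def thin_client(line: str) -> str:
    """Thin client: forward input upstream, return whatever comes back."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall((line + "\n").encode())
        return sock.recv(1024).decode()

t = threading.Thread(target=serve_once)
t.start()
reply = thin_client("world")
t.join()
srv.close()
print(reply, end="")  # prints "hello, world"
```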


I wrote a paper about building web applications in 10th grade a long time ago. When class was out the teacher asked me to stay for a minute after everybody left. He asked in disbelief, “did you really write that paper?”

I could see why he didn’t, so I wasn’t offended or defensive and started to tell him the steps required to build web apps and explained it in a manner he could understand using analogies. Towards the end of our conversation he could see I both knew about the topic and was enthusiastic about it. I think he was still a bit shocked that I wrote that paper, but he could tell from the way I talked about it that it was authentic.

It will be interesting to see how these situations evolve as AI gets even better. I suspect assessment will be more manual and in-person.


I built https://terminalwire.com to make it easier for more web applications to ship CLIs like gh. Are there any web services like GH that you use in your workflow that don’t have decent AI integration that would benefit from a CLI interface?


I built https://terminalwire.com around the idea that CLIs are a great way to interact with web applications.

Turns out the approach works well for integrating web apps with LLMs. I have a payroll company using it in their stack to replace MCP and they’re reporting lower token usage and a better end result.


Re CLI: In fact it seems very similar to the command line in AutoCAD (you can do things visually with the mouse or choose to draw via the CLI). With LLMs it is more sophisticated (intelligent) because you are not limited to a set of predefined commands.

I am waiting for Excel CLI…


I want macOS on my iPad Pro so I can plug my Moonlander keyboard into it and have a “laptop” that doesn’t destroy my wrists.


You can have that today. If you get a USB-C breakout for the dock, it'll treat the Moonlander, or any keyboard, like a normal keyboard. You can not destroy your wrists right now, as I sometimes choose to do.


I’m pretty sure they are saying they want both that keyboard plugged in and macOS rather than the limited iPadOS; not that they think macOS is required to be able to plug a keyboard in.


Exactly. iPadOS can’t run a Unix shell and docker, but macOS can.

I don’t even want touch to work on the iPad running macOS. Just let me run my own input hardware against it.

