
I mentioned recently that, for the third time in roughly a decade, a subset of the electronics suite in my residence had gotten zapped by a close-proximity lightning storm. Although this follow-up writeup, one of a planned series, was already proposed to (and approved by) Aalyia at the time, subsequent comment exchanges with a couple of readers on that earlier post were equal parts informative and validating of this one’s topical relevance.
First off, here’s what reader Thinking_J had to say:
Only 3 times in a 10-year span, in the area SW of Colorado Springs?
Brian, you appear to be lucky.
My response:
Southwest of Golden (and Denver, for that matter), not Colorado Springs, but yes, the broader area is active each year’s “monsoon season”:
https://climate.colostate.edu/co_nam.html
The “monsoon season” I was referencing historically runs from mid-June through the end of September. Storms normally fire up beginning mid-afternoon and can continue overnight and into the next morning. As an example of what they look like, I grabbed a precipitation-plot screenshot during a subsequent storm this year; I live in Genesee, explicitly noted on the map:
Wild, huh?
Then there were the in-depth thoughts of reader “bdcst”; for brevity, I’ve republished only the first half of the post here (that said, I encourage you to read it in its entirety at its originally published location):
Hi Brian,
Several things come to mind. First is, if you think it was EMP, then how will moving your copper indoors make a difference unless you live in a Faraday cage shielded home? The best way to prevent lightning induced surges from entering your equipment via your network connection, is to go to a fiber drop from your ISP, cable or telecom carrier. You could also change over to shielded CAT-6 Ethernet cable.
At my broadcast tower sites, it’s the incoming copper, from the tower, or telephone system or from the power line itself that brings lightning-induced current indoors. Even with decent suppressors on all incoming copper, the only way to dissipate most of the differential voltage from the large current spikes is with near zero ohms bonding between every piece of equipment and to a single very low impedance earth ground point. All metal surfaces in my buildings are grounded by large diameter flexible copper wire, even the metal entrance door is bonded to it bypassing the resistance of its hinges.
When I built my home at the end of a long rural power line, I experienced odd failures during electrical storms. I built my own power line suppressor with the largest GE MOV’s I could find. That eliminated my lightning issues. Of course, surge suppressors must have very low resistance path to ground to be effective. If you can’t get a fiber drop for your data, then do install several layers of Ethernet suppressors between the incoming line and your home. And do install at least a small AC line suppressor in place of a two-pole circuit breaker in your main panel, preferably at the top of the panel where the main circuit breaker resides.
My response, several aspects of which I’ll elaborate on in this writeup:
Thanks ever so for your detailed comments and suggestions. Unfortunately, fiber broadband isn’t an option here; I actually feel fortunate (given its rural status) to have Gbit coax courtesy of Comcast:
https://www.edn.com/a-quest-for-faster-upstream-bandwidth/
Regarding internal- vs. external-wired Ethernet spans, I don’t know why, but the only times I’ve had Ethernet-connected devices fry (excluding coax- and HDMI-connected ones, which have also been problematic in the past) have involved those (multi-port switches, to be precise) on one or both ends of an externally routed Ethernet span. Fully internal Ethernet connections appear to be immune. The home has cedar siding, and of course there’s also insulation in the walls and ceiling, so perhaps that (along with incremental air gaps) in sum provides sufficient protection?
Your question regarding Ethernet suppressors ties nicely into one of the themes of an upcoming planned blog post. I’ve done only rudimentary research so far, but from what I’ve uncovered to date, they tend to be either:
- Inexpensive but basically ineffective or
- Incredibly expensive, but then again, replacement plasma TVs and such are pricey too (http://www.edn.com/electronics-blogs/brians-brain/4435969/lightning-strike-becomes-emp-weapon-)
Plus, I’m always concerned about bandwidth degradation that may result from the added intermediary circuitry (same goes for coax). Any specific suggestions you have would be greatly appreciated.
Thanks again for writing!
Before continuing, an overview of my home network will be informative for first-time readers and a memory-refresher for long-time ones for whom I’ve already touched on various aspects. Mine’s a two-story home, with the furnace room, roughly in the middle of the lower level, acting as the networking nexus. Comcast-served coax enters there from the outside and, after routing through my cable modem and router, feeds into an eight-port GbE switch. From there, I’ve been predominantly leveraging Ethernet runs originally laid by the prior owner.
In one direction, Cat 5 (I’m assuming, given its age, versus a newer generation) first routes through the interstitial space between the two levels of the house to the far wall of the family room next to the furnace room, connecting to another eight-port GbE switch. At that point, another Ethernet span exits the house, is tacked to the cedar-wood exterior, and runs to the upper-level living room at one end of the house, where it re-enters and connects to another eight-port GbE switch. In the opposite direction, another Cat 5 span exits the house at the furnace room and routes outside to the upper-level master bedroom at the other end of the house, where it re-enters and connects to a five-port GbE switch. Although the internal-only Ethernet is seemingly composed of conventional unshielded cable, judging from its flexibility, I was reminded via examination in prep for tackling this writeup that the external wiring is definitely shielded. Not that this did me any protective good; unsurprisingly, sadly, externally routed shielded coax spans from room to room have similarly proven vulnerable in the past.
Normally, there are four Wi-Fi nodes in operation, in a mesh configuration composed of Google Nest Wifi routers:
- The router, in the furnace room downstairs
- A mesh point in the master bedroom upstairs at one end of the house
- Another in the living room upstairs at the other end of the house
- And one more downstairs, in an office directly below the living room
Why routers in the latter three cases, versus less-expensive access points? In the Google Nest Wifi generation, unlike the Google OnHub and Google Wifi precursors (as well as the Google Nest Wifi Pro successor, ironically), access points are only wirelessly accessible; they don’t offer Ethernet connectivity as an option for, among other things, creating a wired “mesh” backbone (you’ll soon see why such a backbone is desirable). Plus, Google Nest Wifi routers’ Wi-Fi subsystems are more robust: AC2200 MU-MIMO, with 4×4 on 5 GHz and 2×2 on 2.4 GHz, versus only AC1200 MU-MIMO (2×2 on both 2.4 GHz and 5 GHz) for the Google Nest Wifi Point. And the Point’s inclusion of a speaker is a don’t-care (more accurately: a detriment) to me.
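Those “AC2200” and “AC1200” marketing labels roughly decompose into per-band peak PHY rates, and the 5 GHz portion falls straight out of the 802.11ac math. Here’s a minimal sketch of that calculation, assuming the best case the spec allows (80 MHz channel, MCS 9, short guard interval); the constants come from the 802.11ac VHT specification:

```python
# Rough 802.11ac (VHT) peak PHY-rate math behind the "AC2200" vs.
# "AC1200" marketing numbers. Assumptions: 80 MHz channel, MCS 9
# (256-QAM, 5/6 coding), short guard interval -- the spec's best case.

DATA_SUBCARRIERS_80MHZ = 234   # data-bearing OFDM subcarriers at 80 MHz
BITS_PER_SYMBOL_256QAM = 8
CODING_RATE_MCS9 = 5 / 6
SYMBOL_TIME_SGI_US = 3.6       # microseconds, with short guard interval

def vht_phy_rate_mbps(spatial_streams: int) -> float:
    """Peak 5 GHz PHY rate in Mb/s for the given spatial-stream count."""
    bits_per_symbol = (DATA_SUBCARRIERS_80MHZ * BITS_PER_SYMBOL_256QAM
                       * CODING_RATE_MCS9 * spatial_streams)
    return bits_per_symbol / SYMBOL_TIME_SGI_US

# Nest Wifi router: 4x4 on 5 GHz -> ~1733 Mb/s
# Nest Wifi point:  2x2 on 5 GHz -> ~867 Mb/s
print(round(vht_phy_rate_mbps(4)))  # 1733
print(round(vht_phy_rate_mbps(2)))  # 867
```

Add the 2.4 GHz band’s contribution (~400 Mb/s for the router’s 2×2 with 256-QAM, ~300 Mb/s for the Point’s) and you land near the 2200 and 1200 figures, respectively.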
When we bought the house, I augmented the already existing Ethernet wiring with two other notable spans, both internal-only. One runs from the furnace room to my office directly above it (I did end up replacing the original incomplete-cable addition with a fully GbE-compliant successor). The other goes through the wall between the family room and the earlier-mentioned office beyond it (and below the living room), providing the latter with robust Wi-Fi coverage. As you’ll soon see, this particular AP ended up being a key (albeit imperfect) player in my current monsoon-season workaround.
Speaking of workarounds, what are my solution options, given that the outdoor-routed Ethernet cable is already shielded? Perhaps the easiest would be to install Ethernet surge protectors at each end of the two outdoors-dominant spans. Here, for example, are some that sell for $9.99 a pair at Amazon (and were discounted to $7.99 a pair during the recent Prime Fall Days promotion; I actually placed an order, but then canceled it after reading the fine print):
As the inset image shows and the following teardown image (conveniently supplied by the manufacturer) further details, they basically just consist of a bunch of diodes:
This one’s twice as expensive, albeit still quite inexpensive, and adds an earth ground strap:
Again, nothing but diodes (the cluster of four on each end are M7s; I can’t read the markings on the middle two), though:
Problem #1: dive into the fine print (therefore my earlier-mentioned order cancellation), and you’ll find that they only pass 100 Mbit Ethernet through, not GbE. Problem #2: judging from the user comments published on both products, they don’t seem to work, at least at the atmospheric-electricity intensities my residence sees.
OK, then: if my observation passes muster that internal-only Ethernet spans, even unshielded ones, are seemingly EMI-immune, why not run replacement cabling from the furnace room to both upper-level ends of the house through the interstitial space between the levels, as well as between the inner and outer walls? That may indeed be what I end up biting the bullet and doing, but the necessary navigation around (and/or through) en-route joists, ductwork, and other obstacles is not something that I’m relishing, fiscally or otherwise. In-advance is always preferable to after-the-fact when it comes to such things, after all! Coincidentally, right before sitting down to start writing this post, I skimmed through the final print edition of Sound & Vision magazine, which included a great writeup by home installer (and long-time column contributor) John Sciacca. There’s a fundamentally solid reason why he wrote the following wise words:
A few of my biggest tips: Prewire for everything (a wire you aren’t using today might be a lifesaver tomorrow!), leave a conduit if possible…
What about MoCA (coax-based networking) or powerline networking? No thanks. As I’ve already mentioned, the existing externally routed coax wiring has proven vulnerable to close-proximity lightning, too; if I’m going to run internally routed cable instead, I’ll just do Ethernet. As for powerline, after several decades’ worth of dealing with its unfulfilled promise, I’ve frankly collected more than enough scars already. It struggles to traverse multiple circuit breakers and phases, including at this house (which has two breaker boxes, believe it or not: the original one in the garage and a newer supplement in the furnace room), and it contends with injected noise from furnaces, air conditioning units, hair dryers, innumerable wall warts, and the like. But speaking of breaker boxes, I’ve already implemented one of the earlier-documented suggestions from reader “bdcst”, courtesy of an electrician visit a few years back:
The final option, which I did try (with interesting results), involved disconnecting both ends of the exterior-routed Cat 5 spans and instead relying solely on wireless backbones for the mesh access points upstairs at both ends of the house. As setup for the results to come, I’ll first share what the wired-only connectivity looks like between the furnace room and my office directly above it. I’m still relying predominantly on my legacy, now-obsolete (per Windows 8’s demise) Windows Media Center-based cable TV-distribution scheme, which has a convenient built-in Network Tuner facility accessible via any of the Xbox 360s acting as Windows Media Extenders:
In preparation for my external-Ethernet severing experiment, to maximize the robustness of the resultant wireless backbone connectivity to both ends of the house, I installed a fifth Google Nest Wifi router-as-access point in the office. It indeed resulted in reasonably robust, albeit more erratic, bandwidth between the router and the access point in the living room, first as reported in the Google Home app:
and then by Windows Media Center’s Network Tuner:
I occasionally experienced brief A/V dropouts and freezes with this specific configuration. More notably, the Windows Media Center UI was more sluggish than before, especially in its response to remote control button presses (fast-forward and -rewind attempts were particularly maddening). Most disconcerting, however, was the fact that my wife’s iPhone now frequently lost network connectivity after she traversed from one level of the house to the other, until she toggled it into and then back out of Airplane Mode.
One of the downsides of mesh networks is that, because all nodes broadcast the exact same SSID (in the various Google Wifi product families’ case), or the same multi-SSID suite (for other mesh setups that use different names for the 2.4 GHz, 5 GHz, and 6 GHz beacons), it’s difficult (especially with Google’s elementary Home utility) to figure out exactly which node you’re connected to at any point in time. I hypothesized that her iPhone was stubbornly clinging to the now-unusable Wi-Fi node she’d been using before, versus switching to the now-stronger signal of a different node at her destination location. Regardless, once I re-disconnected the additional access point in my office, her phone’s robust roaming behavior returned:
But as the above screenshot alludes to, I ended up with other problems in exchange. Note, specifically, the now-weak backbone connectivity reported by the living room node (although, curiously, connectivity between the master bedroom and furnace room remained solid even now over Wi-Fi). The mesh access point in the living room was, I suspect, now wirelessly connected to the one in the office below it, ironically a shorter node-to-node distance than before, but passing through the interstitial space between the levels. And directly between the two nodes in that interstitial space is a big hunk of metal ductwork. Note, too, that the Google Nest Wifi system is based on Wi-Fi 5 (802.11ac) technology, and that the wireless backbone is specifically implemented using the 5 GHz band, which is higher-bandwidth than its 2.4 GHz counterpart but also inherently shorter-range. The result was predictable:
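The 5 GHz range penalty mentioned above is easy to quantify, even before accounting for that ductwork: free-space path loss grows with the logarithm of frequency, so 5 GHz always loses several more dB than 2.4 GHz over the same distance. A quick sketch, using an assumed (illustrative) 10-meter node-to-node distance:

```python
import math

# Free-space path loss, illustrating why a 5 GHz backbone is inherently
# shorter-range than 2.4 GHz (before even counting the metal ductwork
# sitting in the signal path). Standard FSPL formula for d in meters
# and f in Hz: 20*log10(d) + 20*log10(f) - 147.55 dB.

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB."""
    return (20 * math.log10(distance_m) + 20 * math.log10(freq_hz)
            - 147.55)

d = 10.0  # an assumed, representative node-to-node distance in meters
loss_24 = fspl_db(d, 2.4e9)
loss_5 = fspl_db(d, 5.0e9)
print(f"2.4 GHz: {loss_24:.1f} dB")           # ~60.1 dB
print(f"5 GHz:   {loss_5:.1f} dB")            # ~66.4 dB
print(f"delta:   {loss_5 - loss_24:.1f} dB")  # ~6.4 dB extra at 5 GHz
```

Six-plus dB is more than three-quarters of the signal power gone, relative to 2.4 GHz, at any distance; add an obstruction like ductwork and the backbone link degrades quickly.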
The experiment wasn’t a total waste, though. On a hunch, I tried using the Xfinity Stream app on my Roku to view Comcast-sourced content instead. The delivery mechanism here is completely different: streamed over the Internet from Comcast’s server, versus solely over my LAN from the mini PC source (in all cases, whether live, time-shifted, or fully pre-recorded, originating at my Comcast coax TV feed via a SiliconDust HDHomeRun Prime CableCARD intermediary). I wasn’t direct-connecting the Roku to premises Wi-Fi; I kept it wired to the same multi-port switch as before, which in turn leveraged the now-wireless-backbone-connected access point in that room. And, as a pleasant surprise to me, I consistently received solid streaming delivery.
What changed? Let’s look first at the video codec leveraged. The WTV “wrapper” (container) format now in use by Windows Media Center supersedes the DVR-MS precursor with expanded support for both legacy MPEG-2 and newer MPEG-4 video. And indeed, although a perusal of a recently recorded-show file via Windows Explorer’s File Properties option was fruitless (the audio and video codec sections were blank), pulling the file into VLC Media Player and examining it there proved more enlightening. There were two embedded audio tracks, one English and the other Spanish, both Dolby AC3-encoded. And the video was encoded using H.264, i.e., MPEG-4 AVC (Part 10). Interestingly, again according to VLC, it was formatted at 1280×720 pixel resolution and a 59.940060 fps frame rate. And the bitrate varied over time, confirming VBR encoding, with input and demuxed stream bitrates both spiking to >8,000 kb/sec peaks.
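To put those bitrate numbers in perspective, some back-of-the-envelope arithmetic for what such a recording occupies on disk. The ~6 Mb/s average is my assumption for illustration (a VBR stream peaking above 8 Mb/s plausibly averages somewhat less), not a measured figure:

```python
# Back-of-the-envelope storage math for the recording described above:
# 1280x720 at 59.94 fps, H.264, VBR peaking above 8 Mb/s. The ~6 Mb/s
# average used here is an illustrative assumption, not a measurement.

def recording_size_gb(avg_mbps: float, duration_min: float) -> float:
    """File size in GB (10^9 bytes) for a given average bitrate."""
    bits = avg_mbps * 1e6 * duration_min * 60
    return bits / 8 / 1e9

# A one-hour show at an assumed 6 Mb/s average:
print(f"{recording_size_gb(6.0, 60):.1f} GB")  # 2.7 GB
```

That per-hour footprint, sustained across the LAN in real time, is also the load the wireless backbone had to carry without hiccups, and it’s why the fixed high frame rate (and therefore bitrate) matters.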
The good news here, from a Windows Media Center standpoint, is two-fold: it’s not still using archaic MPEG-2, as I’d feared beforehand might be the case, and the MPEG-4 profile in use is reasonably advanced. The bad news, however, is that it’s only using AVC, and at a high frame rate (therefore bitrate) to boot. Conversely, Roku players also support the more advanced HEVC and VP9 video codec formats (alas, I have no idea which is being used in this case). And, because the content is streamed directly from Comcast’s server, the Roku and server can communicate to adaptively adjust resolution, frame rate, compression level, and other bitrate-related variables, maximizing playback quality as WAN and LAN bandwidth dynamically vary.
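That adaptive adjustment is the essence of adaptive-bitrate (ABR) streaming: the server publishes a ladder of renditions, and the client continually picks the highest one that fits its measured throughput, with some headroom. A minimal sketch of that selection logic; the ladder values and headroom factor below are hypothetical, not Comcast’s actual encoding profiles:

```python
# Sketch of the adaptive-bitrate (ABR) rendition selection a streaming
# client like the Roku can perform -- something the fixed-bitrate
# Windows Media Center path cannot. The bitrate ladder and headroom
# factor are hypothetical, for illustration only.

LADDER_KBPS = [800, 1800, 3500, 6000, 9000]  # renditions, lowest to highest
HEADROOM = 0.8  # spend only 80% of measured bandwidth, to absorb variance

def pick_rendition(measured_kbps: float) -> int:
    """Return the highest rendition bitrate fitting the throughput
    budget; fall back to the lowest rung if none fit."""
    budget = measured_kbps * HEADROOM
    fitting = [r for r in LADDER_KBPS if r <= budget]
    return fitting[-1] if fitting else LADDER_KBPS[0]

print(pick_rendition(12000))  # ample bandwidth -> 9000 kb/s rendition
print(pick_rendition(5000))   # budget of 4000 -> 3500 kb/s rendition
print(pick_rendition(500))    # starved link -> lowest rung, 800 kb/s
```

When my erratic wireless backbone sagged, the stream could gracefully step down a rung instead of stuttering, which is plausibly why the Roku path stayed watchable while the Windows Media Center path did not.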
For now, given that monsoon season is (supposedly, at least) over until next summer, I’ve reconnected the external Cat 5 spans. And it’s nice to know that when the “thunderbolt and lightning, very, very frightening” return, I can always temporarily sever the external Ethernet again, relying on my Rokus’ Xfinity Stream apps instead. That said, I also plan to eventually try out newer Wi-Fi technology, to further test the hypothesis that “wires beat wireless every time”. Nearing 3,000 words, I’ll save more details on that for another post to come. Until then, as always, I welcome your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Lightning strikes…thrice???!!!
- Empty powerline networking promises
- Lightning strike becomes EMP weapon
- Devices fall victim to lightning strike, again
- Ground strikes and lightning protection of buried cables
- Teardown: Lightning strike explodes a switch’s IC
- Teardown: Ethernet and EMP take out TV tuner
The post The whole-house LAN: Achilles-heel alternatives, tradeoffs, and plans appeared first on EDN.