I lied. This post is about two-port network switches.

The dumbest product in the entire world of computer networking is the two-port Ethernet switch. It should be the equivalent of one of those Useless Machines that simply turns itself off when you turn it on. What can you do with two ports? You can plug in two machines - which is functionally identical to plugging them into each other, so why not just do that?
The dumbest thing about it is that it actually serves an extremely valid purpose, but one most people won't get unless I explain how much Ethernet sucked in the 90s. In general, few people really know how Ethernet works, because it all got papered over a long, long time ago.
At its core, Ethernet is almost nothing more than a serial framing protocol, like HDLC - a thin wrapper around a "bit pipe" that lets you tell where individual messages begin and end, but leaves everything else up to software. You can actually treat it this way if you want, too.
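At the byte level, that framing really is this thin. Here's a minimal sketch in Python of what an Ethernet II frame looks like on the wire - the MAC values are made-up, locally administered addresses, and 0x88B5 is one of the EtherType values reserved for local experiments:

```python
import struct

def build_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
    """Assemble an Ethernet II frame: 6-byte destination MAC,
    6-byte source MAC, 2-byte EtherType, then the payload."""
    header = struct.pack("!6s6sH", dst_mac, src_mac, ethertype)
    # Real hardware also pads short payloads to a 46-byte minimum
    # and appends a 4-byte CRC; we only model the header + payload.
    return header + payload

BROADCAST = b"\xff" * 6                      # "a solid string of binary 1s"
me = bytes.fromhex("02aabbccddee")           # invented, locally administered MAC
frame = build_frame(BROADCAST, me, 0x88B5,   # 0x88B5: local experimental EtherType
                    b"hello, wire")
```

Fourteen bytes of header and you're done - everything after the EtherType is yours to define.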
It's a fascinatingly sophisticated and forward-thinking design, beautiful in its simplicity. I would love to run pure-Ethernet networks, with no higher layers; these were not uncommon in the earlier days of the protocol if I'm not mistaken, and are perfectly practical even now, if you want to write your own clients and servers.
You could completely rawdog it; nothing requires you to use IP or any other further encapsulation.

ACAB Includes The TCP/IP Model
Of course, it's a mixed bag. Ethernet contains a lot of solutions to problems which it creates itself. But it does so by virtue of its incredible simplicity.
Given the opportunity, I regularly assert that nobody has used Ethernet as designed in almost 30 years. The network switch is an abomination, you see. Ethernet wasn't designed for them, doesn't know they exist, and the way they work is tantamount to NAT IP routers in terms of "oh no, oh no, don't do that, I was sure you weren't doing it like that. Now I need a shower."
So What Made Ethernet A Mess
The original design of Ethernet used a shared medium, which is similar to the "bus" concept used inside computers. That approach, used most famously by ISA, PCI, IDE and SCSI, connects multiple devices to a computer by... just putting them all on the same wires. They all simply shout into the same shared bundle of copper, and much of the spec is about making sure that two devices don't talk at the same time. Generally speaking, this has to have a literal 100% success rate, or your PC crashes and corrupts your hard drive.
Ethernet did this too, originally with coaxial cable. I'm not going to get into the whole "vampire taps" lore, but in 1987, an Ethernet LAN was literally a single piece of copper, plus a ground, and everyone vibrated that single wire to send packets. Video and phone calls could happen over this, somehow.
Because that single wire was shared, there was no way to address a specific host on the network. If you sent a packet, everyone received it - they had to. This is where the MAC address came from.
When you send an Ethernet frame*, the MAC addresses say who it came from and who it's going to. The former is just a return address, unnecessary but convenient. The latter is essential, because Ethernet hardware automatically filters out all packets that aren't addressed either to its own specific MAC address, or to a special broadcast address that's just a solid string of binary 1s.
*frame, packet, or datagram are all valid terms from a computer science perspective; they just got de facto assigned to different layers.
Remember that early Ethernet-connected machines were very, very slow, and did not necessarily have multiprocessing OSes behind them. Requiring the OS to react to every single packet on a heavily utilized network, only to find 99% of the time that the traffic was intended for other machines, would have rendered any system totally unusable.
This is why MAC addresses are baked into network adapters from the factory. The hardware itself needs to know what its address is, so it can drop all frames that aren't intended for that host without even handing them off to the system bus. It's not much acceleration, but it's just enough to make Ethernet practical on single-thread/single-process or simply ancient hardware.
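That in-silicon accept/drop decision is simple enough to sketch in a few lines of Python (the function name and MAC values are invented for illustration):

```python
BROADCAST = b"\xff" * 6  # the all-ones broadcast address

def nic_accepts(frame: bytes, my_mac: bytes) -> bool:
    """The decision a NIC makes in hardware: only frames addressed
    to this adapter's MAC, or to the broadcast address, ever cross
    the system bus and bother the host."""
    dst = frame[:6]  # destination MAC is the first 6 bytes on the wire
    return dst == my_mac or dst == BROADCAST

my_mac = bytes.fromhex("02aabbccddee")  # made-up addresses
other  = bytes.fromhex("02f00dfeed00")
```

Everything else on the wire gets silently discarded before software ever hears about it.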
Shouting Doctors
Still, it doesn't fix the inherent problems of a shared medium.
Imagine you're a doctor trying to order medication for patients, but the only way to do this is to open a door and shout into a long hallway that terminates at the hospital pharmacy. There are 100 other doors lining the hallway.
You open yours and start shouting: "AMOXICILLIN 500 MEG PO Q 12 HOUR." But two words in, another door opens and another doctor starts shouting a completely different order.
Naturally, the pharmacist is going to throw up his hands. "Sorry, I couldn't understand either of you!" Although really, you'll both have realized right away that you're talking over each other and stopped before you finish your sentences. The pharmacist ignores both of your fragmentary requests.
Ethernet has this same problem - it's called a collision - and it solves it the exact same way: you both stop talking for a moment, then try again. Since there's no good way to negotiate who goes first, each of you waits a random amount of time and hopes the other didn't pick the same delay.
Most of the time this works. With two or three doctors, the stochastic nature of your requests means that collisions don't often happen, and when they do, they get resolved quickly. But when you ramp up to 30, 40, or 100 doctors, who are also talking to each other, conversation becomes impossible. You're constantly shouting over each other, stopping and starting again, and hardly anyone can get a sentence out.
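The actual rule Ethernet uses for this is CSMA/CD's truncated binary exponential backoff. A sketch, assuming time is measured in slot times (the function name is mine):

```python
import random

def backoff_slots(attempt: int) -> int:
    """After the Nth consecutive collision on one frame, wait a
    random number of slot times drawn from [0, 2^N - 1].  The
    window is capped at 2^10 slots, and after 16 failed attempts
    the frame is abandoned entirely."""
    if attempt > 16:
        raise RuntimeError("excessive collisions, frame dropped")
    window = 2 ** min(attempt, 10)
    return random.randrange(window)
```

The window doubling is why a lightly loaded segment recovers almost instantly, while a saturated one degrades so spectacularly: every extra contender makes repeat collisions more likely, and repeat collisions make everyone wait longer.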
The solution to this, of course, is to switch to a telephone system. Stop shouting into the hallway - pick up a phone and call someone. Their line will ring, and if they're on a call already, they'll finish it or put it on hold gracefully, so their other conversation doesn't get trashed.
The fresh problem: doctors are notoriously unreceptive to new processes. They hate having to adapt to new tools to do their jobs, and want to just do things the same way forever and focus on their core skills instead of the meta. You will not get them to pick up the phone.
So, how do you fix this? I would begin my answer with "easy:" but it's not. It's hard as hell.
You don't tell the doctors you solved the problem. You let them continue thinking that they're shouting into a hallway, but when they open the door, a personal secretary is standing there. They shout their order into her face, she notes it down, then carries it to the pharmacy, waits in line, and hands it over.
Great! The doctor doesn't need to learn any new tricks, and the problem is solved!
...at the cost of 100 new fulltime employees. This is exactly how we fixed Ethernet.
Wait What Was The Problem Exactly
Ethernet started out at a couple megabits, then accelerated to 10, and then 100 megabits, while still using a shared medium. In 1998, you could very easily have a dozen, or a hundred, computers plugged into an Ethernet segment, pushing one hundred million bits per second. This is terrifying to think about.
It wasn't a single shared wire anymore. This seems important, but isn't, really. We had left "thicknet" and "thinnet" (which both used simple coaxial cable) in the past and switched to the same twisted pair cables we use now, but it didn't really change much.
Twisted pair Ethernet uses balanced signaling, which is a magic spell that allows unshielded wire - Basically, Total Garbage - to carry stupidly, phenomenally high-bandwidth signals that simply should not be possible. The downside is that it depends on being a "closed loop," where all current is perfectly balanced at both ends, so you can't just hang multiple network cards (or MAUs) off the same wire pair.
Fast Ethernet required twisted pair, so 100 megabit LANs couldn't just have everyone punched onto a single conductor anymore. They required a dedicated transceiver for every single host, but it wasn't very sophisticated. The signal actually going onto the line was very simple, and the transceivers didn't have any intelligence, they just ensured that the current was always perfectly balanced between the wires.
The Ethernet "hub" was introduced to plaster over the balanced signaling problem. It's simply a pile of transceivers, connected together with a logical "OR" in the middle - topologically, identical to the original single-wire-everyone-touches. When one computer on a Fast Ethernet LAN wobbles its line, every other computer sees the exact same wobble. There is zero intelligence in a hub.
So this meant that you now had hundreds of computers transmitting at up to one hundred million bits per second, all going into "a single wire." When any one of them said anything, everything else saw it. Needless to say, the chances of actually seeing full throughput in this scenario were... slim.
Trying to actually utilize a 100 megabit LAN at full throughput with dozens of machines attached was likely to be a near-constant series of collisions. If you were very lucky and disciplined, and could get people to coordinate their use of the network, there's no reason you couldn't get the full throughput between any two machines at once, but as soon as you added a third machine doing anything at all, speeds could drop dramatically.
Switches buried this problem completely in a graveyard.
Oh My God What Do Switches Do Then
Switches are the secretaries behind all the doors. When you send a packet to an Ethernet switch, it doesn't just send it out to all the other ports. Instead, the switch finds out exactly where that packet needs to go, and sends it only there.
This is, unsurprisingly, not a trivial task, because:
- You need a dedicated network interface for every port. Not just a dumb transceiver, but something with custom silicon that understands what an Ethernet packet is.
- You need somewhere to store packets. The concept of network port "utilization" is misleading; ports are not "partially utilized." A network port is either transmitting, or not. There are no other states. If a packet comes in destined for port 3, but another packet is busy being transmitted out that same port, the new packet simply has to wait. Where? In a RAM buffer, somewhere.
- You need to figure out what is attached to each port. If a packet comes in addressed to $GIVEN_MAC, the switch needs to know that it's on e.g. port 3, and that $GIVEN_MAC_2 is on port 4, and so on. Oops! Ethernet provides no mechanism for this.
This is where we get into "ethernet doesn't know what a switch is." It doesn't. Still. To this day. Nobody has ever built a solution for this problem.
When you connect a bunch of PCs to a bunch of switches, what would make sense for building the network? All the PCs should announce themselves to the switches, so they know what's where, and then if the network has multiple switches, they should tell each other which MACs they've seen.
But nothing does this. Doctors won't learn how the new phone system works, they just keep shouting into the hallway. Fortunately for us, however, most computers do shout. Thanks to some accidental side effects of higher level protocols, and for no other reason, we avoided needing to extend Ethernet to solve this.
Because of DHCP/BOOTP and ARP, both part of the TCP/IP suite, the practical reality is that once a computer is connected to any Ethernet port, it will almost certainly send a packet with its source MAC address attached, for any one of many reasons, immediately after being plugged in. Because this works, switches can leverage it for MAC learning.
Basically, switches just remember everything they've seen. If a switch sees a packet come in on a given port, it looks at the source MAC, then stores it in a table (the CAM table) which it consults for all later forwarding. If it doesn't see a given MAC transmit anything for a while, it forgets about it.
This is something that shouldn't work; there's no guarantee it'll work reliably, but thanks to sheer luck, probability, and the chattiness of IP protocols, it does. When it doesn't work, that's okay: the switch just falls back to the old-school approach. If it receives a packet for which it has no destination in the CAM table, it sends it to every port, which is called "flooding." This continues until that MAC says something; if it never transmits a single packet from its source address, then all packets destined to it will get flooded.
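The whole learn-and-flood behavior fits in a toy model. A sketch, with invented names, of the logic just described:

```python
class LearningSwitch:
    """Toy model of switch forwarding: learn source MACs as frames
    arrive, forward known destinations out one port, flood the rest."""

    def __init__(self, num_ports: int):
        self.ports = range(num_ports)
        self.cam = {}  # MAC -> port: the "CAM table"

    def receive(self, in_port: int, src_mac: str, dst_mac: str) -> list:
        # Learning: remember which port this source MAC was seen on.
        self.cam[src_mac] = in_port
        # Forwarding: a known unicast goes out exactly one port;
        # an unknown destination is flooded to every port except
        # the one the frame arrived on.
        if dst_mac in self.cam:
            return [self.cam[dst_mac]]
        return [p for p in self.ports if p != in_port]

sw = LearningSwitch(4)
sw.receive(0, "aa", "bb")  # "bb" unknown: flooded out ports 1, 2, 3
sw.receive(3, "bb", "aa")  # "aa" was learned on port 0: sent only there
```

(A real switch also ages entries out of the CAM table after a timeout - the "forgets about it" part - which this sketch omits.)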
This seems bad. Doesn't it lose all the advantages of switching? Well, not quite - you still won't ever get a collision this way, because the switch will always ensure that only one packet is being sent through any given port at any given moment. So if a port is in use, then any other packets destined to it have to wait until it's free.
But hang on. That's impossible, unless they have somewhere to wait around. Yeah. Yeah.
Okay So Why A Two Port Switch Though
The problem with switches is that they're incredibly sophisticated. A 48-port switch is basically a computer with 48 network cards. It's a little simpler than that, because switches have only a rudimentary understanding of the protocol (at least, they used to - modern ones are basically full routers), but it's still a lot of dedicated silicon.
Because, yeah, if a port is in use when another machine wants to transmit to it, then the packet has to wait somewhere. There's no way to tell the originating machine "hey, hang on, don't send that yet," so the switch just has to accept the packet and store it until the port's free. It does this with a buffer or "queue", and these can come in different depths, but they still cost money. So, in the mid 90s, when switches were new, they were obscenely expensive.
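That store-until-free behavior is just a bounded FIFO per port. A sketch (class and method names are mine) of the accept-or-drop decision that made buffer RAM so expensive:

```python
from collections import deque

class OutputPort:
    """A port is either transmitting or idle.  Frames that arrive
    for a busy port wait in a fixed-depth RAM buffer; once that
    buffer is full, new arrivals are simply dropped."""

    def __init__(self, depth: int):
        self.queue = deque()
        self.depth = depth

    def enqueue(self, frame) -> bool:
        if len(self.queue) >= self.depth:
            return False          # buffer exhausted: tail drop
        self.queue.append(frame)
        return True

    def transmit(self):
        # Called whenever the wire goes idle: send one frame, FIFO order.
        return self.queue.popleft() if self.queue else None
```

Deeper queues mean fewer drops under bursts, but every slot is dedicated silicon you pay for on every single port.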
In 1998, despite knowing a Fast Ethernet network with 100 computers was going to be dog slow, you couldn't give everyone their own switch port, it was just too expensive. So what was a network architect to do? You had to build what you could afford.


Around that time, a 24 port Ethernet switch was as much as seven thousand dollars. A hub was a fifth that price. So, you were gonna buy hubs unless you were absolutely Microsoft-loaded.
However, just because you bought hub ports in bulk didn't mean your entire network had to be on a "single wire" as it were. There were options for mitigating the pain of large hubbed networks.
One was segmentation. It's dumb to think about, but valid: if you bought a 24-port hub for the best value, but not all your machines needed to talk to all the others, some hubs let you simply press a button to break the unit into two twelve-port hubs. This split the "collision domain" - that is, the group of machines that could talk over each other - but it also split the broadcast domain, meaning the group of machines that could talk to each other at all.
The better approach was to add a switch. Switches are so much better than hubs that they can make a network healthier without needing to fully control it.
Switches don't care whether a device is directly attached to them. They only care what MACs they've seen on a given port, so if you plug a hub into a switchport, the switch will go "Okay, I've seen 12 MACs on port 3," and remember all of those.
So, take your 24-port hub. Press the button to divide it into two 12-port hubs. Now buy a four port switch, for maybe a thousand dollars, and plug one port of each half of that hub into it.
If a device on hub 1 wants to talk to another device on hub 1, it does it just like normal. It sends a packet to all ports on that hub, and collisions can happen. But nothing on hub 2 will see that packet. So if you have one very busy conversation on each hub, and no more, they will both be able to move at top speed.
And if they want to talk to a machine that's on the other hub, they'll send a packet addressed to its MAC, and the switch will wake up and go "Oh, I know where that is," and hand it off from one hub to the other.
You just doubled or tripled your network capacity, by splitting your collision domains, but not your broadcast domains. Everything can still talk to everything else, but half the hosts cannot cause collisions with the other half. The smaller the hubs, and the more ports on the Ethernet switch, the better performance you'll get, for far less cost than replacing your entire hub stack.
This was a popular solution, as I understand it, in the transitional period of the late 90s. In the 2000s, switches plummeted in cost, but for a short time it made a lot of sense to do this. And that's why the two-port switch existed.

If your network consisted of just a couple hubs, or one hub with a switchable partition, then for only $159 you could massively improve its performance. You simply bought a two-port switch, connected cables to ports on either hub, and that was that. It was like a bolt-on upgrade - I wouldn't be surprised if there were hubs that supported literal slide-in modules that wired into dedicated ports on the backplane.
It's such a silly device, and it solves a silly but nonetheless very genuine problem in a genuinely effective way. I own one, and have not yet gotten around to testing it out with some hubs to see what the gains really look like.
My favorite thing about it however is that, without doing research into the state of networking almost 25 years ago, you'd never be able to puzzle out what these were for. They make absolutely no sense in a world where silicon is so cheap and plentiful that we just unthinkingly connect our computers together with these breathtakingly sophisticated devices.
Modern switches have frightening amounts of throughput. A high end 48 port switch has, you know, 48 gigabits of backplane bandwidth; low end and consumer units used to be a lot slower if you actually tried to saturate all the ports, but again, as things get cheaper, the gap narrows. And the featureset is shocking.
Modern switches, even consumer ones, quietly include tons of little tuning features and intelligence and QoS (lol.) And business-grade ones with remarkable featuresets are amazingly cheap. My home LAN runs off a Juniper EX3300, with a full speed backplane and 48 ports of gigabit PoE, which I got for something like $120.
Even new, these aren't that much, and they're essentially 48-port IP routers. I'd have to go check, but I think this damn thing can do NAT and has ALGs. The secretary is more like a full fledged RN, who not only carries the orders to the pharmacy, but knows how to sanity-check them against the patient's chart. Things have come so far since the 90s.
