I was just browsing around on Slashdot a bit, and I found an article on the topic of Token Ring Networks.
We all know that back in the day, Token Ring was considered 'better' than Ethernet because Ethernet would start colliding like mad under heavy loads (60-70% and higher). This was because Ethernet had no central authority deciding who was allowed to send over the wire, so nodes on the network had to duke it out amongst themselves using Carrier Sense Multiple Access with Collision Detection (CSMA/CD). The more nodes on the network, the more collisions. Token Ring didn't suffer from this problem, because its token-passing scheme lets only one node transmit at a time, so it could sustain higher loads on a network.
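To get a feel for why contention gets worse as nodes are added, here's a toy model in Python. It's a slotted-ALOHA-style calculation, not real CSMA/CD (which listens before sending and backs off after a collision), but the trend it shows is the same one described above: the more stations competing for the wire, the smaller the chance any given attempt gets through cleanly.

```python
def success_probability(n, p):
    """Toy contention model: n stations each try to transmit in a given
    time slot with probability p. A slot carries a frame successfully
    only when exactly ONE station transmits; two or more is a collision.
    P(success) = n * p * (1 - p)^(n - 1)."""
    return n * p * (1 - p) ** (n - 1)

# More nodes fighting over the same shared wire -> fewer usable slots.
for n in (2, 5, 10, 20):
    print(f"{n:2d} nodes: P(clean slot) = {success_probability(n, 0.5):.4f}")
```

With the per-slot attempt probability held fixed, the success probability falls off fast as stations are added, which is exactly the "colliding like mad under heavy load" behaviour.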
But this interesting comment on Slashdot claims something else:
In theory, Ethernet on coax should be stable under heavy load. But in the late 1980s and early 1990s, it wasn't, due to defective design of some widely used interface chips. Here's the actual story. See this note by Wes Irish at Xerox PARC [link dead, so not included]
The worst device was the SEEQ 8003 chip, found in some Cisco and SGI devices. Due to an error in the design of its hardware state machine, it would turn on its transmitter for a few nanoseconds in the middle of an interframe gap. This noise caused other machines on the LAN to restart their interframe gap timers and ignore the next packet, if it followed closely enough. This happened even if the SEEQ chip was neither the sender nor the receiver of the packets involved. So as soon as you plugged one of these things into a LAN, throughput went down, even if it wasn't doing anything. A network analyzer wouldn't even see the false collision; this was at too low a level.
This was tough to find. Wes Irish worked on the problem by arranging for both ends of Xerox PARC's main coax LAN to terminate in one office. Then he hooked up a LeCroy digital oscilloscope to both ends. Then he tapped into a machine with an Ethernet controller to bring out a signal when the problem was detected and trigger the oscilloscope. Then, when the problem occurred, he had a copy of the entire packet as an analog waveform stored in the scope. This could then be printed with a thermal printer and gone over by hand.
Because he had the same signal from both ends of the wire, the weird SEEQ interference mentioned above appeared time-shifted due to speed-of-light lag, making it clear that the interference was from a different node than the one that was supposed to be sending. You could measure the time shift and figure out from where on the cable the noise was being inserted. Which he did.
It took some convincing to get manufacturers to admit there was a problem. It helped that Wes was at Xerox PARC, where Ethernet was born. I went up there to see his work, and once I saw the waveforms, I was convinced. There was much faxing of waveform printouts for a few months, and some vendors were rather unhappy, but the problem got fixed.
So that's why.
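The localization trick in the quoted story is simple arithmetic, and it's neat enough to sketch out. A burst inserted at distance x from one end of a cable of length L arrives at the near end after x/v and at the far end after (L - x)/v, where v is the signal's propagation speed in the cable. Measuring the arrival-time difference between the two ends gives you x. The velocity factor below is an assumption on my part (thick coax is typically quoted around 0.77c); the story doesn't say what cable PARC used.

```python
C = 299_792_458.0        # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.77   # assumed; typical figure for thick coax

def locate_source(cable_length_m, dt_seconds):
    """Locate where on the cable a noise burst was inserted.

    dt_seconds is the arrival-time difference (t_far_end - t_near_end)
    of the same burst as seen at the two ends of the cable.  From
    dt = (L - 2x) / v it follows that x = (L - v * dt) / 2, the distance
    from the near end to the insertion point.
    """
    v = C * VELOCITY_FACTOR
    return (cable_length_m - v * dt_seconds) / 2
```

A burst inserted dead-center arrives at both ends simultaneously (dt = 0, x = L/2); the further it sits toward one end, the bigger the skew, which is exactly what showed up on the two-channel scope traces.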
I'm not completely convinced that this is the real reason why Ethernet performed poorly under high loads back in the day (these days, switches sidestep the problem entirely by giving each node its own collision domain), but it does make for an interesting read.