In February of 1999, Intel announced that they were creating a consortium of companies to bring USB to faster speeds and allow it to compete with IEEE-1394 FireWire. This USB 2.0 spec is supposed to let USB work with the full range of devices, including cameras, drives and other things that USB was never intended to handle.
The spirit of USB (up until February) was basically to copy ADB (Apple's Desktop Bus). Ten years after Macs, NeXT machines, Suns and some others used ADB to connect keyboards, mice and other controllers through a cool serial bus, Intel decided the PC needed the same thing -- and so USB was (re)created. USB added some new ideas -- but most of those "new" ideas were borrowed from Apple's other standard serial bus (1394 FireWire). USB is faster than ADB and allows hot-swappable devices (devices that can be plugged in and pulled out while the computer is running, i.e. while the bus is "hot" or "with the power on"). From the start, USB was designed to complement FireWire and allow low and medium speed devices (like printers, keyboards, mice, modems and scanners) to talk to the computer through a shared serial bus.
Dave's first law of engineering is "There are no free features!" When you target a product for one thing, it will usually do that one thing well. The more you try to do with one product, the worse it will be at doing any one thing. Since Apple was creating a high-speed video, storage and communication standard with 1394 FireWire, it made perfect sense for Intel not to overlap and try to compete in that market. So Intel instead targeted USB at low cost and low speed devices. If they had tried to make USB as fast as FireWire, that would have driven up the cost (or decreased the reliability) and made the bus less useful for low-cost devices. Engineering is about tradeoffs. Intel understood this (notice the past tense), and pounded the message home with many comments about how USB was for the low end, was complementary to FireWire, and was never intended to compete with or replace 1394 FireWire.
Intel changed their mind about doing one thing well and moved towards doing many things poorly -- or so it seems, since USB 2.0 is definitely a change in that direction. USB 2.0 is about trying to use one bus to talk to low speed, low cost devices while simultaneously talking to higher cost, higher speed devices. This is like saying you want a really fast car that gets great gas mileage -- it just doesn't work that way. Don't get me wrong -- more speed is good (overall), but it also has costs. Remember, engineering is about tradeoffs, and Intel isn't just allowing moderately higher speed devices -- they are trying to make the bus 40 times faster and completely retargeting what USB is supposed to be used for. Instead of 12 Mbps (megabits per second, or 1.5 megabytes per second), they are going for 480 Mbps. This speed increase doesn't come free (and is actually a bit of a fraud). Intel can't achieve what they claim, and they are starting to use these artificial (and misleading) specs to imply that USB can replace FireWire -- and that is idiocy.
Problems with the design
By jumping speeds so dramatically, Intel is creating "issues". The old high-speed mode of USB (1.1) was 12 megabits per second, and the low-speed mode was 1.5 megabits per second. While a low speed packet is being sent, no high speed packets can be sent. If you have a mix of both kinds of devices, the slower devices reduce the performance of the whole bus -- so even in USB 1.1 the bus can't really transmit 12 megabits per second if you are using any low speed devices at the same time. However, the difference between 1.5 and 12 isn't that big a deal for most users -- and with low speed devices you really don't care about a small loss of performance; it is still "fast enough". With USB 2.0 the issues become much bigger. High speed devices are a completely different market -- when working with video or high end sound and so on, you care about speed (a lot) -- and glitches in performance really matter!
In USB 2.0 there is a new high speed mode (480 megabits per second), the older high speed mode (12 megabits per second) and the old low-speed mode (1.5 megabits per second) -- a HUGE difference in speed (320:1). What this means is that a lot of potential bandwidth will be wasted -- or things will never come close to their claimed performance. Think of the following example:
If you are using 3 devices -- a keyboard (or mouse), a printer, and a new USB 2.0 video camera -- then the bus time gets divided among them: roughly 1/3 of the time is used up at the lowest speed, 1/3 at the middle speed, and what is left goes to the highest speed mode. This means your theoretical 480 Mbps bus (data pipe) might really only achieve about 149 Mbps in our sample usage -- about 330 Mbps of potential is wasted. Imagine how much worse it gets if you have many more slow speed or USB 1 devices (like a keyboard, mouse, joystick, printer, scanner, disk drive, etc.) and only one high speed USB 2.0 device.
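The arithmetic above can be sketched as a back-of-the-envelope model (my own simplification, not anything from the USB spec): assume each device holds the bus for some share of the time, transferring at its own rate while it does. The function name and the equal-share default are mine; the article's figure (~149 Mbps) came from slightly different proportions, so the equal-share result differs a bit, but the order of magnitude is the same.

```python
def effective_throughput_mbps(device_rates, shares=None):
    """Average bus throughput when each device gets a time share of the bus.

    device_rates: signaling rate (Mbps) of each device while it holds the bus.
    shares: fraction of bus time each device gets (defaults to equal shares).
    """
    if shares is None:
        shares = [1.0 / len(device_rates)] * len(device_rates)
    return sum(rate * share for rate, share in zip(device_rates, shares))

# Keyboard (1.5 Mbps), printer (12 Mbps), USB 2.0 camera (480 Mbps),
# each holding the bus a third of the time:
rates = [1.5, 12.0, 480.0]
print(effective_throughput_mbps(rates))        # roughly 164.5 Mbps, nowhere near 480
print(480 - effective_throughput_mbps(rates))  # hundreds of Mbps of wasted potential
```

Either way the point stands: a couple of slow devices sharing the wire keep the bus far from its 480 Mbps headline number.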
Now it isn't quite that simple, since there is a way for one device to steal as much time as it needs, leaving the others only the leftover time (called isochronous mode). That has tradeoffs too -- it means the isochronous device (the video camera) will go faster, but at the expense of the other devices -- the printer and keyboard run even slower to make up for it. Even then, USB just can't live up to the promise of 480 Mbps in the real world -- so 480 Mbps in Intel's USB lingo is not the same as 480 Mbps to anyone else.
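The isochronous tradeoff can be shown with a toy calculation (an assumed model for illustration only -- real USB frame scheduling is more involved than this):

```python
def leftover_share_per_device(iso_reservation, other_device_count):
    """Time share left for each non-isochronous device after a reservation.

    iso_reservation: fraction of bus time the isochronous device reserves (0..1).
    other_device_count: how many remaining devices split the leftover time.
    """
    leftover = 1.0 - iso_reservation
    return leftover / other_device_count

# A video camera reserving 80% of the bus leaves a printer and a keyboard
# with only about 10% of the bus time each -- the camera goes faster,
# and everything else gets even slower to make up for it:
print(leftover_share_per_device(0.8, 2))   # about 0.1
```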
Since Intel didn't want USB 2.0 to run that slow all the time, with every slow speed device dragging down the performance of all the others, they figured out a hack to get around the problem... sometimes. However, their solution comes at the expense of users and ease of use, since it only works right when you become a network topology expert.
Users can opt to replace their simple, low cost USB 1.1 hubs with complex, higher cost, higher speed USB 2.0 routers (they are too sophisticated to be called hubs anymore). If users do that, the USB 2.0 router can step up the speed of the slower (USB 1) devices' packets so they waste less space on the way to the computer. From there on, the USB 1 packets are sped up (it takes less time to send them), so they don't slow down the bus (as much) and the USB 1 devices behave like higher speed devices... unless there is a USB 1.1 hub in the way, in which case all devices collapse back down to the slower 1.1 speed. This makes for a complex solution (to get the speed you want).
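That topology rule can be captured in a tiny model (my own simplification of the behavior described above, not spec text): a device only gets its full speed if every hub between it and the computer is a USB 2.0 hub; a single USB 1.1 hub in the chain collapses everything downstream of it to USB 1.1 rates.

```python
def effective_device_speed_mbps(device_speed_mbps, hub_versions):
    """Speed a device can actually run at through a chain of hubs.

    hub_versions: hub spec levels from the computer out to the device,
    e.g. [2.0, 1.1] means a USB 2.0 hub, then a USB 1.1 hub.
    """
    if any(version < 2.0 for version in hub_versions):
        return min(device_speed_mbps, 12.0)   # collapsed to USB 1.1 signaling
    return device_speed_mbps                  # all-USB 2.0 path: full speed

print(effective_device_speed_mbps(480.0, [2.0, 2.0]))  # 480.0 -- all new hubs
print(effective_device_speed_mbps(480.0, [2.0, 1.1]))  # 12.0 -- one old hub ruins it
```

The asymmetry is the point: the user has to know the version of every hub in the chain to predict what speed a device will actually get.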
Consider an example USB network mixing USB 1 and USB 2 devices and hubs. (NOTE: not all devices of a certain type, like a camera or scanner, will behave as a certain USB speed -- one model of scanner can be a USB 1 device while another model might be USB 2, which makes picking and choosing models more confusing and interesting.) How each "node" in such a network behaves depends on the devices and hubs between it and the computer.
To sum all this up: the only way to get anywhere near the performance claims of USB 2 is to have all your users (and configuration people) know the USB level of each and every device -- and to map out the devices so that only the high speed (USB 2) devices are on high speed hubs and ports, and the low speed (USB 1) devices are at the ends, plugged into high speed (USB 2) hubs along the way.
The issues can be even more complex than explained -- because there are not only bandwidth/throughput issues (how much data can be moved around), but also latency issues (how long the data takes to get to/from the computer from another device). The more stages (hubs) you have, the slower things are to respond. Your USB 1 devices (like keyboards and mice) are likely to be way out at the end of the chain to prevent them from slowing the USB 2 devices down. Ironically, the devices that use low bandwidth and are unlikely to be upgraded to USB 2 (like keyboards, mice and joysticks) are often the devices you want closest to the computer to decrease latency (delays in response) -- but if you put them there, they slow everything else down (since they are USB 1 devices).
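The latency point can be illustrated with assumed numbers (the per-hub delay and base delay here are made up for illustration, not taken from the USB spec) -- the shape of the tradeoff is what matters: every hub between a device and the computer adds delay to every round trip.

```python
def round_trip_latency_us(hub_count, per_hub_delay_us=1.0, base_us=10.0):
    """One request/response round trip through `hub_count` hubs (made-up units)."""
    return 2.0 * (base_us + hub_count * per_hub_delay_us)

# A mouse plugged straight into the computer, versus parked three hubs deep
# to keep it from dragging the USB 2 devices down:
print(round_trip_latency_us(0))   # 20.0 -- best response time
print(round_trip_latency_us(3))   # 26.0 -- less bandwidth impact, worse response
```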
USB 1 was about a simple design that let users walk up, plug in (anywhere), and have things work right. Now, with USB 2.0, users must know the speed of every device they are plugging in, the speed of the hub they are plugging into, and the speed of all the other devices on that hub. Plus, all users of USB 2.0 will need to buy at least one new high-speed hub to split the slower speed devices off from the faster ones. So you can only use USB 2 well if you understand USB and all of your devices' specs. Intel has done it again -- now users have to know all this Stupid-Knowledge (things you shouldn't have to know, but need to know because of incompetent design) in order to use USB correctly. This leads me to wonder if Intel is being bribed to guarantee IS/IT's job security, or to keep tech-support costs higher?!?! USB 2.0 works -- but only if you stand on your right leg, hop and bark like a dog -- gee thanks Intel, great forward-thinking design.
FireWire is not completely immune to these topology issues -- it just does things better -- and they don't matter as much, since the difference between 100 Mbps FireWire and 800 Mbps FireWire (8:1) isn't nearly as dramatic as the difference between 1.5 Mbps USB and 480 Mbps USB (320:1). Any of these issues in FireWire are magnified about 40 times for USB. Also, almost all FireWire devices use the 200 Mbps version or above, and in the video and drive industries there is strong incentive to upgrade device speeds -- while a great many USB devices are the slowest speed and will remain so, and there are negative incentives (cost) to upgrading the speed and standard. These things will make the problems far less important for FireWire, and far worse for USB.
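The ratio argument is simple arithmetic:

```python
# Spread between each bus's slowest and fastest modes:
firewire_spread = 800 / 100      # 8:1  (100 Mbps vs 800 Mbps FireWire)
usb_spread = 480 / 1.5           # 320:1 (1.5 Mbps vs 480 Mbps USB)

print(firewire_spread)               # 8.0
print(usb_spread)                    # 320.0
print(usb_spread / firewire_spread)  # 40.0 -- the "magnified about 40 times" figure
```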
Yesterday and tomorrow
FireWire can transfer data point-to-point (one device to another), while USB requires the computer to be the go-between. With USB, if you want to move data from one place to another, you have to move it from one device to the computer, the computer adds some overhead (which adds a delay, or latency), and then you move it from the computer to the other device. In typical Intel fashion, they figured out a way to double the overhead on the bus itself while simultaneously wasting processor time doing useless things (like being a network router). The result is that point-to-point USB performs about half as fast as FireWire would at the same speed. So 480 Mbps USB 2 (with only USB 2 devices and hubs) is really less of a performer than 200 Mbps FireWire.
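A simplified way to see the doubling (an illustrative model only -- it counts bus trips and ignores the extra processor overhead described above, so it actually understates USB's disadvantage):

```python
def usb_device_to_device_seconds(megabits, bus_mbps):
    """On USB the data crosses the shared bus twice: device -> computer -> device."""
    return 2.0 * megabits / bus_mbps

def firewire_device_to_device_seconds(megabits, bus_mbps):
    """FireWire devices talk point-to-point, so the data crosses the bus once."""
    return megabits / bus_mbps

# Moving 480 megabits between two devices:
print(usb_device_to_device_seconds(480, 480))       # 2.0 s on 480 Mbps USB 2.0
print(firewire_device_to_device_seconds(480, 400))  # 1.2 s on 400 Mbps FireWire
```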
FireWire already has 800 Mbps designed (1394b), and will evolve to that speed as we need it (and as chips become more available). FireWire was designed to scale in the first place -- and there is also a 1.6 Gbps version or two on the books (not quite formalized, but ready to go if needed). FireWire will go to 800 Mbps this year or next, and 1.6 Gbps probably a year or so after that. USB 2.0 is not born yet... in fact, it isn't even fully conceived -- it is more a dirty thought in someone's mind. The draft of the specification is being finalized, but still has more work to go. Give it another 6 months to a year (when FireWire is at 800 Mbps) and the requirements and design will be done for USB 2.0 -- another 6 months to a year and the first versions might be leaking out -- another 1-2 years to get all the bugs and kinks worked out (changes impact reliability) -- then another 2-3 years to get everyone using the higher speed versions of USB (if ever). USB 2.0 should be marketed as "Yesterday's technology tomorrow... only worse".
What about standards?
There are standards, and then there are standards. Intel and Microsoft don't make real standards -- they make these proprietary things that everyone follows because they have to. These are de facto standards that are almost never as well defined as real standards -- and they often change, or get interpreted differently, which causes problems. There is a process for creating standards -- ISO, IEEE and many other bodies do open standards, and that is why you see things like Apple's FireWire numbered as IEEE-1394: it is a real open standard. In open standards there are committees that decide things, so a single company can't just change things on a whim. USB is not a real standard but a de facto one.
Because USB (and most Intel and Microsoft standards) are not "real" standards, and because there is no open process for defining them (Intel defines the rules by which anything will happen), Intel and MS are free to tweak and change things and make people follow along. Following these "pseudo-standards" is fine for a while -- but as a designer, manufacturer, or user, you need to know that it is just a matter of time before you get burned.
Microsoft and Intel are aware of the proper procedures for creating standards -- they both join the boards and committees of competing standards -- but Intel and Microsoft don't always feel that they should have to conform to real standards themselves. Instead, they think everyone should follow their whims. The sad part is that so many people do, even when they know it is wrong. Then when the followers of MS or Intel get thumped for their stupidity, they just get back in line to have it happen again.
Specsmanship is the way the game is played -- but users (and managers) won't know that Intel's 480 Mbps USB 2.0 will almost never perform as well as 100 Mbps FireWire in the real world. Users will be defrauded by the numbers, Intel's marketing, and their own ignorance into thinking that 480 is better than 400 -- just like they think Intel's MHz ploys or other specsmanship matters (when it often doesn't). Intel is good at selling tomorrow's promises today, and so is Microsoft. Microsoft even used that as their slogan, "Where do you want to go today?". My answer is always, "just to where you promised me that I would be three years ago!". Years ago I could get FireWire that outperforms tomorrow's USB 2.0 -- but if Intel can convince people and companies not to jump to FireWire in favor of their proprietary and incomplete standard, then they win, and only the rest of us lose. This technique is called FUD (Fear, Uncertainty and Doubt) -- make manufacturers fear that if they don't go the Intel path, that they will miss out on something cool. The truth is that if they follow Intel then we all miss out on something cool.
So the truth is that the USB 2.0 spec sucks for the industry. Not because of what it does -- but because of how it does it and how it will be sold. Don't get me wrong -- USB is a nice little bus. I like it for what it does well: basic low (and mid) speed input and output. I don't mind having it go faster -- that is a good thing -- but I care about the costs in good design. Making it go faster only if users all know these little topology tweaks is not a big win -- it is a loss to ease of use and added complexity -- and Intel needs to learn to be honest in the salesmanship of what they sell. If they sell USB 2.0 as a minor upgrade to USB, then I agree, it is -- but when they start marketing it as a replacement for FireWire, or selling it as "the way" we should hook things up to our computers, then I take issue. That hype does no one any good -- it confuses users, slows down progress, and frustrates the industry -- especially when there is a superior "competing" standard (like FireWire) already working. I won't mind using USB 2.0, and don't mind my devices eventually going a bit faster. But I do mind having to explain to users that Intel is defrauding the public when they imply real-world 480 Mbps performance, or that USB is even close to as good for high-end data as FireWire.