Most potential users assume cellular data modems are just a newer form of the old Hayes AT-style analog dial-up modems - you know, your old ATDT {phone-number} interface. While a few old, rapidly obsolescing standards still work like this, all modern cellular data systems are IP-based. They operate much like your home Cable or DSL modem: your cellular data modem links and authenticates to your ISP (or "carrier") and is assigned an IP address - just like your home Cable/DSL modem. There are no phone numbers to dial; your modem is "connected" as long as it is powered up, and your charges relate only to IP packets moved, with no concept of minutes connected.
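To make the contrast concrete, here is a minimal sketch in Python of what "dialing" looks like on an IP-based modem: it doesn't exist. You just open a socket and send packets, exactly as you would over Ethernet or DSL. (The loopback address and port below are stand-ins for a real remote host, so the example runs anywhere.)

```python
import socket

# Old world: open a serial port, send "ATDT 5551234", wait for CONNECT.
# New world: the cellular modem is just another IP interface.
# 127.0.0.1:9000 is a stand-in here for your real remote host.

# A listener, playing the role of the remote telemetry server.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 9000))

# The "modem side": no dialing, no call setup for UDP -
# just address a datagram and send it.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"poll", ("127.0.0.1", 9000))

data, addr = server.recvfrom(1024)
print(data)  # b'poll'

client.close()
server.close()
```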
But ... marketing hype aside, current and near-term cellular systems are NOT broadband as you or I would use that term. Ever used a cell phone and had a call drop? A voice garbled beyond recognition? No signal now even when you had one an instant ago? The same variability applies to cellular data networks. Cell towers do their best to share the airwaves with all users; thus the speed and latency of your data movement are deeply affected by both your location and what other cellular users are doing.
In my world it is even worse ... by "my world" I mean people who use IP networks primarily for telemetry or data collection with small, cyclic polls for data. While you'd think this would be a natural fit for cellular data networks, it turns out to be fairly abnormal when viewed in the context of how the standards have been evolving. Every new advance in cellular data networks is aimed at pumping up throughput for web browsing, email exchange, or image/music downloads. New advances are aimed at your average human user with a PDA, notebook computer, or iPod; users who pay lots of cash per month for a single account with little concern for Return-On-Investment (ROI). Parents of teenagers know this best - what is the ROI of the $50 to $200 monthly the average "wired" teenager seems to spend? My youngest daughter spent an amazing three years during high school working part-time at a noisy place with video games and a big dancing mouse to pay for her cell phone usage.
In old-fashioned network terms, these are connection-oriented paradigms: larger latency and overhead are invested to initiate the connection in exchange for a lower cost to move each fragment of a large amount of data. Without going into too much detail, understand that traditional digital cellular voice calls can be viewed as small streaming media sessions. Each cell phone negotiates a strict series of repeated tiny time-periods during which it can send the next small digitized and compressed portion of your conversation. You can visualize this as an impossible juggling act where a few dozen assistants arranged in a star around an old juggling pro set up a rhythm and tempo that passes their pins in and out without a hiccup. Each pin must be launched at the exact correct instant to find an empty hand waiting on the other side. (Note that CDMA uses a different paradigm than these GSM "slots", but the need to share a limited resource is the same.)
The 2G/2.5G cellular data networks (slightly past state-of-the-art) functioned by just mimicking a voice call but substituting small chunks of pure data for the compressed slices of voice. Later advances to these standards allowed the tower and data modems to negotiate more or fewer of the tiny time-periods per second to better match the actual data throughput. Thus your perceived data network throughput can grow or shrink as the tower has fewer or more voice calls to handle. But breaking data up into a whole series of contiguous tiny time periods separated by overhead is very inefficient. So a key evolution in the latest standards is the ability of the tower to, in effect, negotiate consolidating a block of contiguous tiny time periods into fewer, longer time periods.
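The efficiency gain from consolidating slots can be sketched with a toy model. All the numbers below are illustrative assumptions, not real GSM parameters: give each time slot a fixed overhead, then compare many tiny slots against a few consolidated ones within the same frame.

```python
def payload_per_frame(total_units, slot_len, overhead=2):
    """Toy model: a frame of `total_units` time units is divided into
    slots of `slot_len` units; each slot loses `overhead` units to
    guard time and signaling. Returns usable payload units per frame.
    (Illustrative numbers only - not real GSM parameters.)"""
    slots = total_units // slot_len
    return slots * max(slot_len - overhead, 0)

# 120 time units per frame, split into tiny vs. consolidated slots:
tiny = payload_per_frame(120, slot_len=6)   # 20 slots * 4 usable = 80
big = payload_per_frame(120, slot_len=30)   # 4 slots * 28 usable = 112
print(tiny, big)  # 80 112
```

Same airtime, same per-slot overhead; the consolidated slots simply amortize that overhead over more payload, which is the whole point of the newer standards' negotiation.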
Ok, enough over-simplified details - back to why telemetry data is abnormal in this world of "Cellular Broadband". Let's say every 15 minutes I send a Modbus/RTU poll that consists of 8 bytes of data into my 400 kbps to 3 Mbps "Broadband" connection ... well, I'll get my blazingly fast 100-byte response back in 2 to 5 seconds. Hmm, sounds a lot like the performance of an old 600 bps (600 baud) radio modem! So cellular data networks have fairly large latency when small, infrequent amounts of data are moved. This goes back to the "juggler" paradigm I mentioned before. If your cellular device has no data to send after some number of seconds, the cell tower asks it to "stop juggling" - your device gives back its allocated tiny time-period(s) to be reused by other voice or data devices. After all, there are just so many of these periods to be shared by all users. So the process of the tower and data modem reallocating these tiny time-period(s) is the primary source of this large, variable latency.
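You can check the "600 baud" comparison with a little arithmetic, using the round-trip sizes and times from the example above:

```python
def effective_bps(bytes_moved, seconds):
    """Effective throughput of one poll/response cycle, in bits per second."""
    return bytes_moved * 8 / seconds

# 8-byte Modbus/RTU poll + 100-byte response = 108 bytes per round trip.
best = effective_bps(8 + 100, 2.0)   # fast case: 2-second round trip
worst = effective_bps(8 + 100, 5.0)  # slow case: 5-second round trip
print(round(best), round(worst))  # 432 173
```

Even in the best case the effective rate is well under 600 bps - the "broadband" link really does behave like an old radio modem for this traffic pattern.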
Web browsers won't notice this latency since once the web page request is sent up, the page content just flows rapidly down in multiple TCP/IP streams without the explicit poll/response behavior of a telemetry session. Since the data exchange between the tower and cell modem is active and heavy, the tower does its best to bump up and allocate as much bandwidth to the cell modem as it can spare. This is where the elusive "Broadband" performance is able to peek out.
Today, your data modem needs to send data at least every 40 seconds or so to avoid this latency; with 3G that window will drop to roughly every 3 seconds - and sending keep-alive traffic that often would be cost-prohibitive. You say, just get an unlimited plan? Sorry, no such thing - but that will be my next blog entry.
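To see why a 3-second keep-alive gets cost-prohibitive, here's a rough estimate of the monthly traffic burned just keeping the channel alive. The 29-byte packet size is my own assumption (a 1-byte UDP payload plus typical IP/UDP headers); adjust it for your own traffic and your carrier's billing increments.

```python
def keepalive_mb_per_month(interval_s, packet_bytes=29, days=30):
    """Monthly data consumed by periodic keep-alive packets.
    packet_bytes assumes a 1-byte UDP payload plus 28 bytes of
    IP/UDP headers - an assumption, not a carrier-billed figure."""
    packets = days * 24 * 3600 / interval_s
    return packets * packet_bytes / 1e6  # megabytes

print(round(keepalive_mb_per_month(40), 2))  # every 40 s: 1.88 MB/month
print(round(keepalive_mb_per_month(3), 2))   # every 3 s: 25.06 MB/month
```

On a metered telemetry plan sized for a few kilobytes of real polls per day, tens of megabytes of pure keep-alive overhead is a hard number to justify.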
Want more details? See Also:
http://www.gsmworld.com/index.shtml
http://electronics.howstuffworks.com/cell-phone.htm/printable