Besides cost, there's the matter of potential data loss and
(in the wireless world) collisions. Traditional networking needs lots of checks
and double-checks on message integrity and order to minimize costly
retransmissions. These constraints led to the protocol stacks with which we are
familiar today, such as TCP/IP and 802.11.
In most of the Internet of Things, however, the situation is
completely different. Oh, the costs of wireless and wide-area bandwidth are
still high, to be sure. But the amounts of data from most devices will be
almost immeasurably small, and the
delivery of any single "chirp" or message non-critical. As I keep saying, the IoT is lossy and
intermittent, so the end devices will be designed to function perfectly well
even if they miss sending or receiving data for a while – even a long while.
It's this self-sufficiency that eliminates the criticality of any single
"chirp".
It might be worthwhile at this point to contrast my view of
the IoT with traditional IP. First, IP is fundamentally oriented toward large packets.
With large packets, the IP overhead is a relatively small percentage of the
overall transmission. But in the IoT, IP overhead is much larger than the
typical payload of a chirp.
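To put rough, illustrative numbers on that (header sizes vary by transport, and the four-byte payload here is simply my assumption for a small sensor reading):

```python
# Illustrative arithmetic only: minimum IPv4 + TCP headers versus a tiny chirp payload.
ipv4_header = 20    # bytes, minimum IPv4 header
tcp_header = 20     # bytes, minimum TCP header
chirp_payload = 4   # bytes, assumed size of a simple sensor reading

overhead = ipv4_header + tcp_header
print(f"{overhead} bytes of header for a {chirp_payload}-byte payload")
print(f"Overhead is {overhead / chirp_payload:.0f}x the payload itself")
# -> 40 bytes of header for a 4-byte payload
# -> Overhead is 10x the payload itself
```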
In addition, a significant amount of the overhead in IP is
dedicated to security, encryption, and other services, none of which matter at the very edges of the Internet of Things
where the simplest devices predominate (if my view of the IoT is correct).
By contrast, IoT chirps are like pollen – lightweight,
broadly propagated, and with meaning only at the "interested" Integrator
functions. The IoT is receiver-centric, whereas IP is sender-centric. Because
IoT chirps are so small and no individual
chirp is critical, we have no concern over retries and resulting broadcast
storms, which are a danger in IP.
It’s true that efficient IoT propagator nodes will prune and
bundle broadcasts, but seasonal or episodic broadcast storms from end devices
are much less of a problem because the chirps are small and individually
non-critical. Just as nature treats pollen, the IoT may treat any single chirp as
truly "best effort" – so heavy broadcast storms caused by an external
event will die out pretty quickly.
In my view of the IoT, this means that huge packets,
security at the publisher, and assured delivery of any single message are passé. This will allow us to mirror nature with
massive networks based on lightweight components. In my technical imagination,
this makes the IoT more "female" (receiver-oriented) than the
"male" structure of IP (sender-oriented).
But having said all that, what's the point in having an IoT
if nothing ever gets through? How can
we deal with the unpredictable nature of connections? The answer, perhaps
surprisingly, is over-provisioning.
That is, we can resend these short, simple chirps over and over again as a
brute-force means of ensuring that some get through.
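As a sketch of what that brute-force repetition might look like at an end device (the repeat count, pause, and broadcast stand-in are my assumptions, not a specification):

```python
import random
import time

def broadcast(chirp: bytes) -> None:
    """Stand-in for the device's radio or link-layer send."""
    pass

def chirp_repeatedly(chirp: bytes, copies: int = 5, base_pause: float = 1.0) -> None:
    # Send the same tiny chirp several times; any one copy getting through is enough.
    for _ in range(copies):
        broadcast(chirp)
        # A small random pause keeps devices from repeating in lock-step.
        time.sleep(base_pause + random.uniform(0.0, 0.5))
```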
Because the chunks of data are so small, the costs of this
over-provisioning at the very edge of the IoT are infinitesimal. But the
benefits of this sort of scheme are huge. Since no individual message is
critical, there's no need for any error-recovery or integrity-checking overhead
(except for the most basic checksum to avoid a garbled message). Each message
simply has an address, a short data field, and a checksum. In some ways, these
messages are what IP datagrams were meant to be. The cost and complexity burden
on the end devices will be very low, as it must be in the IoT.
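Here is a minimal sketch of such a message, assuming a two-byte address, a few data bytes, and a one-byte checksum (the field sizes are my own illustration, not a standard):

```python
import struct

def make_chirp(address: int, data: bytes) -> bytes:
    # Frame = 2-byte address + short data field + 1-byte checksum.
    body = struct.pack(">H", address) + data
    checksum = sum(body) & 0xFF          # the most basic integrity check
    return body + bytes([checksum])

def parse_chirp(frame: bytes) -> tuple[int, bytes] | None:
    body, checksum = frame[:-1], frame[-1]
    if (sum(body) & 0xFF) != checksum:
        return None                      # garbled chirp: simply drop it, no retry
    (address,) = struct.unpack(">H", body[:2])
    return address, body[2:]
```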
The address will incorporate the "arrow" of
transmission I mentioned earlier, identifying the general direction of the message:
whether toward end devices or toward integrator functions. Messages moving
to or from end devices will need only the address of the end device – where it
is headed or where it is from is unimportant to the vast majority of simple end
devices. They're merely broadcasting and/or listening.
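One way to carry that "arrow" is to reserve a single bit alongside the address; this encoding is purely illustrative:

```python
TOWARD_INTEGRATOR = 0   # chirp flowing up from an end device
TOWARD_END_DEVICE = 1   # chirp flowing down toward an end device

def tag_with_arrow(address: int, direction: int) -> int:
    # Fold the direction bit into the top bit of a 16-bit address field.
    return (direction << 15) | (address & 0x7FFF)

def read_arrow(tagged: int) -> tuple[int, int]:
    return tagged >> 15, tagged & 0x7FFF
```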
So the end devices are awash in the ebb and flow of
countless transmissions. But replicating this traffic willy-nilly throughout
the IoT would clearly choke the network, so we must apply intelligence at
levels above the individual devices. For this, we'll turn to the propagator
nodes I've referenced in past posts.
Propagator nodes will use their knowledge of adjacencies to
form a near-range picture of the network, locating end devices and nearby
propagator nodes. The propagator nodes will intelligently package and prune the
various data messages before broadcasting them to adjacent nodes. Using the
simple checksum and the "arrow" of transmission (toward end devices
or toward integrator functions), redundant messages will be discarded. Groups
of messages that are all to be propagated via an adjacent node may be bundled
into one "meta" message for efficient transmission. Arriving
"meta" messages may be unpacked and re-packed.
Propagator nodes will be biased to forward certain
information in particular directions based on routing instructions passed down
from the integrator functions interested in communicating with a particular
functional or geographic neighborhood of end devices. It is the integrator
functions that will dictate the overall communications flow based on their
needs to get data or set parameters in a neighborhood of IoT end devices.
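One way to picture those instructions is as a simple bias table handed down from an integrator function; the table shape and names here are hypothetical:

```python
# Hypothetical routing bias pushed down by an integrator function: chirps whose
# address falls in a given "neighborhood" prefix are preferentially forwarded
# toward the named adjacent propagator node.
routing_bias = {
    0x1200: "propagator-north",
    0x3400: "propagator-east",
}

def preferred_next_hop(address: int, default: str = "broadcast") -> str:
    # Use the top byte of a 16-bit address as a crude neighborhood prefix.
    return routing_bias.get(address & 0xFF00, default)
```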
Discovery of new end devices, propagator nodes, and
integrator functions will again be similar to my architecture for wireless
mesh. When messages from or to new end devices appear, propagator nodes will
forward those and add the addresses to their tables. Appropriate age-out
algorithms will allow for pruning the tables of adjacencies for devices that go
off-line or are mobile and are only passing through.
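A sketch of that discovery-and-age-out bookkeeping, with the timeout chosen arbitrarily for illustration:

```python
import time

class AdjacencyTable:
    def __init__(self, max_age_seconds: float = 600.0):
        self.max_age = max_age_seconds
        self.last_heard = {}    # address -> time we last heard a chirp from it

    def note(self, address: int) -> bool:
        # Record a sighting; returns True if this is a newly discovered device.
        is_new = address not in self.last_heard
        self.last_heard[address] = time.monotonic()
        return is_new

    def age_out(self) -> None:
        # Prune devices that have gone off-line or were merely passing through.
        now = time.monotonic()
        stale = [a for a, t in self.last_heard.items() if now - t > self.max_age]
        for a in stale:
            del self.last_heard[a]
```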
One other aspect of communication to be addressed within the
Internet of Things is the matter of wireless networking. It’s likely that many
of the end device connections in the IoT will be wireless, using a wide variety
of frequencies. This fact seems to suggest a need for something like CSMA/CA
(Carrier Sense Multiple Access with Collision Avoidance), as used in 802.11 WiFi. But
that's another aspect of traditional networking that we need to forget.
Again, data rates will be very low and most individual
transmissions non-critical. Even in a location with many devices vying
for airtime, the overall duty cycle will be very low. And most messages will be
duplicates, from our earlier principle of over-provisioning. With that in mind,
an occasional collision is of no significance. All that we must avoid is a
"deadly embrace" in which multiple devices, unaware of one another's
presence, continue transmitting at exactly the same time and colliding over and
over.
The solution is a simple randomization of transmission times
at every device, perhaps with continuously varying pauses between transmissions
based on prime numbers, a hashed end device address, or some other factor that
provides uniquely varying transmission events.
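For example, an end device might derive a fixed offset from a hash of its own address and add fresh jitter on every transmission, so no two devices pause in quite the same way (the constants here are arbitrary):

```python
import hashlib
import random

def next_pause(address: int, base_interval: float = 10.0) -> float:
    # A per-device offset from a hash of the address keeps devices out of
    # lock-step; fresh random jitter varies every individual transmission.
    digest = hashlib.sha256(address.to_bytes(4, "big")).digest()
    per_device_offset = digest[0] / 255.0   # 0.0 .. 1.0, fixed for this device
    jitter = random.uniform(0.0, 1.0)       # different on every send
    return base_interval + per_device_offset + jitter
```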
While the resulting communication scheme is very different
from traditional networking protocols, it will be all that we need for the IoT.
Providing just enough communication
at very low cost and complexity will be good enough for the Internet of Things.
Many of the ideas I'm developing for the Internet of Things
are inspired by the interactions of beings in nature. Next time, a look at the
way aggregations of creatures become highly functioning colonies and
"SuperOrganisms" – and the lessons this provides for the IoT.