Wednesday, October 3, 2018

Critics from the Past – Smarter Simulation

The previous three posts described work done, starting in 2002, on building mesh networking products and devising a truly scalable architecture for the Internet of Things (IoT). One other area of interest dates back to 1992. The problem I was working on then was auto-programming robots: leveraging Artificial Intelligence-based reasoning tools to critique and collaboratively flesh out skeletal robotic assembly strategies.

The “Critics” system developed at that time first modeled the (virtual) robot environment, and then progressively refined it with (real) robot sensor feedback.

Today, simulation systems model large networks, and I was tasked with developing one such system to validate a wireless mesh network design of ours. I could have focused on either of two things: modeling the Finite State Machine of the wireless “robot” interactions, or simulating the RF channel and link quality of the RF transmitters. I needed both to do this project justice.

So we developed a framework to run exactly the same firmware image that runs on the embedded devices on an x86 desktop platform. Only the RF characteristics must be estimated in the model (the physical nodes are constantly measuring and analyzing real-world RF). Everything else operates in the simulation exactly as in the networking devices themselves – and vice-versa.
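The idea can be sketched in a few lines: swap only the radio back end, and let the same "firmware" logic run against either a real driver or a simulated channel. This is a minimal illustrative sketch of the pattern -- the class and function names are my own, not MeshSuite APIs:

```python
import random

class SimulatedRadio:
    """Stand-in for the physical radio driver. Only the RF link quality
    is modeled; everything above this layer runs unchanged."""
    def __init__(self, channel, loss_prob=0.1):
        self.channel = channel          # shared in-process "air"
        self.loss_prob = loss_prob      # estimated RF loss for this link

    def send(self, frame):
        # The simulated channel drops frames per the estimated link quality.
        if random.random() >= self.loss_prob:
            self.channel.append(frame)

    def recv(self):
        return self.channel.pop(0) if self.channel else None

def node_firmware(radio, payload):
    """The same state-machine code runs against either radio back end."""
    radio.send(payload)
    return radio.recv()

channel = []
radio = SimulatedRadio(channel, loss_prob=0.0)   # lossless for the demo
assert node_firmware(radio, b"hello") == b"hello"
```

The point of the abstraction is that `node_firmware` never knows which back end it is talking to.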

New processors, new radios, different environments – all may easily be accommodated in the simulation. Our OEM and System Developer partners are assured that their next-generation Structured Mesh™ products will still work. MeshSuite™ does not "model" their embedded device software – it is their embedded software, running on a desktop (x86) target platform.

Straddling both worlds 

In addition, the software may be moved back and forth between simulation and the physical network. It only needs to be tuned in one environment or the other – ideally, they move together in lockstep. This is engendered by the Abstraction Layers built into both the networking device software and the simulation – the core networking strategies and State Machines don’t change. Different channels, different network goals, different embedded user applications – all are accommodated.

Combined with the autonomy of the nodes, this gives us something like a Mars Rover situation: we can create a mission-level strategic plan for network tree topology and count on devices to execute those tactics to accomplish it without further ado. 

And we can run multiple different scenarios in parallel in the simulation without reconfiguring physical nodes for every test, rapidly prototyping the network under different conditions – much as genetic algorithms evaluate many candidates at once.
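As a toy sketch of that genetic-algorithm-flavored workflow (the fitness function here is a placeholder I invented; a real run would replay the firmware under each channel and topology condition):

```python
def run_scenario(params):
    """Placeholder fitness: fewer hops and less link loss score higher.
    In the real simulation this would exercise the actual firmware."""
    hop_count, link_loss = params
    return 1.0 / (hop_count * (1.0 + link_loss))

# Candidate network conditions to evaluate side by side.
scenarios = [(2, 0.1), (3, 0.05), (5, 0.2)]
scores = {p: run_scenario(p) for p in scenarios}
best = max(scores, key=scores.get)   # keep the fittest configuration
```

Each scenario is independent, so they parallelize trivially across simulator instances.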

Ants and Finite State Automata

Once again, this approach is informed by Nature. Individual ants operate on a very simple “If … Then … Else” decision tree, biased by pheromones from the Queen and their nest mates – and it scales. They are all driven by the same very simple “Operating System”, so the overall actions of the colony appear nearly optimal from an external view.
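That ant "Operating System" really is just a short biased decision tree. A sketch, with thresholds and action names invented for the example:

```python
def ant_step(sees_food, pheromone_level, threshold=0.5):
    """One tick of a worker's if/then/else 'operating system'.
    The pheromone level biases, but does not command, the choice."""
    if sees_food:
        return "carry_food_home"
    elif pheromone_level > threshold:
        return "follow_trail"      # bias from nest mates' pheromones
    else:
        return "wander"

assert ant_step(True, 0.0) == "carry_food_home"
assert ant_step(False, 0.9) == "follow_trail"
assert ant_step(False, 0.1) == "wander"
```

Multiply this tiny function by millions of workers and the colony-level behavior looks intelligent, though no individual is.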

By developing the simulation and network operating software identically and basing it on collaborating Critics, we achieve both resilience and high performance – and can scale up or down. We have only scratched the surface of what may be accomplished with an interested OEM or System Developer partner. Reach me via my LinkedIn page or website.

Friday, September 28, 2018

Saying Controversial Things – Again!

When I wrote Rethinking the Internet of Things, I expected one proposal to be controversial. And it was.

I stated that the myriad devices making up the emerging Internet of Things (IoT) will be too dumb, cheap, and uncritical to justify the cost and power of equipping them all with IPv6. Howls of derision resulted.

But I have been through this before. Back in 2004 I wrote another controversial piece about the ugly truths of (single-radio) wireless mesh. Today, almost all mesh network products use (at least) dual-radio wireless switch stacks – even in home WiFi networks.

At the edge of industrial and other networks, the vast majority of devices will speak and listen in tiny bits of data – like ants. I developed the concept of "Chirp" networks – communications using tiny bits of data with minimal framing. And like Nature's lightweight pollen in springtime, there is no guaranteed delivery, either.

Digital “pollen” can be non-unique as well

These Chirps are self-classified. Chirps identify themselves in both public and private ways, to allow integration of data from a wide variety of sources. This identification might include type (moisture sensor, door lock, and so on), location, and other attributes.
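A minimal sketch of what a self-classified chirp might carry -- the field names here are my own invention, not taken from the patents or the book:

```python
from dataclasses import dataclass

@dataclass
class Chirp:
    """A chirp classifies itself: a public type marker, an optional
    private/affinity marker, and a tiny payload with minimal framing."""
    public_type: str      # e.g. "moisture", "door-lock"
    private_marker: int   # affinity group; 0 = fully public
    payload: bytes        # tiny data field

    def matches(self, wanted_type, group=0):
        # An integrator discovers chirps by classification, not by address.
        return (self.public_type == wanted_type
                and self.private_marker in (0, group))

c = Chirp("moisture", 0, b"\x2a")
assert c.matches("moisture")
assert not c.matches("door-lock")
```

Note there is no source or destination address at all -- classification is the whole identity.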

Because the size of the future IoT is well beyond our current comprehension, it won’t be possible for humans (or even machines!) to catalog every device or pre-configure preferred data sources. Instead, Publish/Discover/Subscribe will reveal useful data that no one could have known of or predicted.

At the edges of the network, the vast numerical majority of devices will simply speak and listen in tiny bits of data. And they will be designed with a basic trust in an IoT universe that propagates these messages to some sort of integration point where the IoT may be interpreted for human consumption.

Also controversial – Chirps need not be uniquely addressed – nor do they need IPv6

Nobody confuses my grandfather Francis daCosta with me. Our lineage paths differ in our family's network tree. All that is necessary is the context of where we come from and where we have been.

Routing – back to trees

So. If the Chirps are so simple and non-uniquely addressed, how will big data integrators ever make sense of the cacophony?

The routing and other network intelligence come from a device I’ve called a Propagator in the Chirp-related patents and the book. It’s a straightforward derivative of our mesh node. The Propagators add the context of the data (location, lineage, and so on) and the intelligence to the transmission (multicast bundling, pruning, and routing; addressing and IPv6 packetization; management; control loops; the list goes on …). Economies of scale stem from placing CPU cycles on mesh nodes, thus simplifying end devices. Rather than build a bulky IPv6 packet for every tiny squib of data, the Propagators spoof, trim, and package as necessary.
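One pass of that prune-tag-bundle pipeline might look like this sketch (the function, the toy checksum, and the context fields are all illustrative assumptions, not the patented mechanism):

```python
def bundle_chirps(chirps, seen_checksums, location):
    """A Propagator pass: discard duplicate chirps by checksum, tag the
    survivors with local context, and bundle them for one upstream send."""
    bundle = []
    for payload in chirps:
        csum = sum(payload) & 0xFF          # toy checksum for pruning
        if csum in seen_checksums:
            continue                        # prune a redundant chirp
        seen_checksums.add(csum)
        bundle.append({"ctx": location, "data": payload})
    return bundle   # would be framed as a single IPv6 packet upstream

seen = set()
out = bundle_chirps([b"\x01", b"\x01", b"\x02"], seen, "gateway-7")
assert len(out) == 2
```

The end devices stay dumb; all of this CPU work lives on the mesh node.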

In this three-tiered architecture (Integrators, Propagators, Devices) it is critical that deterministic paths are available to link them. Happily, the overlying structured tree topology I discussed earlier (and developed in 2002 for MeshDynamics mesh nodes) works perfectly well for Propagators. Nature tells us that trees scale, connecting trillions and trillions of cells in a networked path (leaves-to-roots and vice-versa) that doesn’t burden any cell with management of the whole.

I am interested in developing collaboration with the larger and sophisticated OEMs and System Developers who might share this vision of the full potential of a massively-scaled public/private Internet of Things. If these ideas intrigue you, let’s talk. Reach me via my LinkedIn page or website.

Thursday, September 20, 2018

Making Tree Topologies Dynamic

As I noted in my previous post, after years of developing higher-performance wireless networking products my focus is shifting in two ways. First, I am orienting my efforts toward working with OEMs, System Developers, System Integrators, and major agencies to integrate my software into their "things".

Second, I am refining my algorithms based on lessons from nature conferred by my friend and marine biologist (by training) Byron Henderson, which we began to explore in our book Rethinking the Internet of Things.

Trees and network switches

One of these lessons from nature involves trees. In 2002, my robotics and control system experience suggested that a tree-like branching structure would be the way to create a deterministic network architecture across a physical mesh of inter-working wireless nodes. Essentially, this is creating a switch-like topology from a physical mesh. But I also wanted networks to converge -- and more importantly, re-converge -- quickly and with more intention than conventional Spanning Tree Protocols. This required placing more independent intelligence in each node, as I’ll describe later.

With the recent emergence of new networking requirements, such as drone swarms and other mobile applications, I have been reflecting on trees in nature -- again. Tree-like branching structures have evolved multiple times and in varied lineages -- in organisms as diverse as giant oaks and the marine Gorgonian soft corals (Order Alcyonacea), which are colonial animals.

Trees are a mathematically efficient way to organize living tissue (and other things) to maximize spread and coverage from a fixed connection (such as a root).

What if the branches could move?

But a tree-like structure has limitations in adapting to rapid and unpredictable changes in environments. We’ve all seen trees and shrubs growing at odd angles to try to reach the sunlight when another tree or building shades the plant. But a tree can only adapt to environmental changes to a limited degree, as the branching structure is already set. It’s not possible for the organism to disconnect some branches and reconnect them elsewhere to optimize for the current situation.

Returning to networking, some of my recent algorithm work has aimed to enhance the efficiency and “tune-ability” of tree topologies with even more rapid re-convergence as nodes move and/or the environment changes. I’ve wanted to minimize latency and (especially) jitter to support real-time Publish/Subscribe applications in my algorithms.

So I built in the capacity to bias and tune the network topology to optimize for a flexible variety of factors, including hop count, link cost, bandwidth, and end-to-end delay. But the fundamental architectural decision I made early on -- the one enabling these refinements -- is distributing the networking intelligence to every node. In essence, I freed up every “branch” to make its own decision on how, where, and when to connect -- or reconnect.

This is accomplished by having each node maintain an awareness of its adjacent nodes and potential connections (usually radios and channels). The tuning and biasing takes place on top of this foundation, which nicely separates the two functions I wish to optimize: rapid convergence for immediate adaptation; and sophisticated capacity for performance optimization.
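A minimal sketch of that kind of tunable, per-node parent selection -- the weights and link fields here are invented for illustration, not the shipped MeshDynamics algorithm:

```python
def link_cost(link, weights):
    """Blend the tuning factors into one scalar cost: penalize hops and
    delay, reward bandwidth. The biasing knobs are the weights."""
    return (weights["hops"] * link["hops"]
            + weights["delay"] * link["delay_ms"]
            - weights["bandwidth"] * link["mbps"])

def choose_parent(candidates, weights):
    # Each node independently picks its lowest-cost upstream "branch".
    return min(candidates, key=lambda l: link_cost(l, weights))

links = [
    {"id": "A", "hops": 1, "delay_ms": 30, "mbps": 6},
    {"id": "B", "hops": 2, "delay_ms": 5,  "mbps": 24},
]
w = {"hops": 1.0, "delay": 0.1, "bandwidth": 0.05}
# A scores about 3.7, B about 1.3, so the extra hop wins on delay/bandwidth.
assert choose_parent(links, w)["id"] == "B"
```

Re-convergence falls out naturally: when a link degrades, the node simply re-runs the same selection over its current adjacency list.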

If the applications you are working on demand real-time performance in complex networking environments, I look forward to discussing how we might work together -- perhaps in the shade of a tree. 😁

Please connect via LinkedIn or my website -- and thanks for your time.

Tuesday, September 11, 2018

Change is Natural, Collaboration is Next

After a long absence, I am returning to this blog because of some changes in personal philosophy and developments in the emerging Internet of Things and Networking spaces. I have been developing wireless mesh networking algorithms, software, and products for 16 years as MeshDynamics, drawing on my lifetime experience in real time embedded systems, robotics, and wireless networking to create technologies uniquely suited to demanding outdoor environments.

MeshDynamics software, for example, is especially well-suited to the most demanding outdoor environments requiring the highest performance over many hops, in motion, and/or for high throughput and low-latency applications like voice, video, and real-time command and control. Because my networking software is abstracted and isolated from the radio and other hardware, it may be optimized for use with any combination of radios, frequencies, and device configurations. Much of my software has been re-written based on open-source packages like OpenWRT to speed integration.

Integrating networking into others’ products

Post-2014, we shifted our emphasis from building products to providing source code licenses and working primarily with OEMs, Embedded Software Developers, System Developers, and major agencies to integrate our software into their devices and solutions. To this end, MeshDynamics has created an open-source-based suite of software modules, source code included, intended to be incorporated into "things": robot drone swarms, mesh nodes, Internet of Things hubs, etc.

We are now seeking partners ready to test this source code base for a fit with their own offerings, just as organizations as diverse as Sharp, the PGA Tour, mining OEMs, and the US Navy have used the software, now and in the past.

Emulating Nature's networks

Back in 2002, when I began architecting "wireless" switch stacks, I was developing algorithms based on my judgment that radios would become cheaper and that enterprise networking environments would become more complex. The last mile needed more than single-radio, MANET-based access points and obsolete hub-like mesh architectures.

This has proven true, but over the last few years I have realized that scaling to the large numbers and dynamic network configurations required by swarms of drones or self-driving cars, etc., represents an unprecedented challenge.

Unprecedented in traditional wireless networking -- but not in nature.

So in recent years, I have been using the communication principles that have emerged over millennia in nature to inform my networking development. Some of this thinking is reflected in the book Rethinking the Internet of Things, which I wrote with the help of long-time friend Byron Henderson. We drew on our combined backgrounds in networking, robotics, embedded systems, and biology to describe an architecture for the IoT that builds on lessons from the way nature deals with copious tiny “signals” -- from pollen and birdsong on up. Industry interactions and the developments in drone technology and Artificial/Augmented Intelligence are causing me to expand the biological approach to network topology once again.

Directed propagation

Metaphors by themselves can be misleading, but building on actual principles developed by nature over millions of years of evolution yields insights. The key driver of all biological existence is propagation – placing as many of an individual’s genes as possible into future generations. In that process, the environment exerts a pressure through natural selection that leads to the best-adapted individuals leaving more offspring. This creates the illusion of progress in evolution, as successive generations become better adapted to conditions over hundreds of thousands or millions of years. Sterile hybrids, such as mules, leave no offspring and thus are not refined by this environmental pressure.

Robotic drone swarms have a similar drive to propagate inherent in their design and programming. But this propagation is of data and information related to their mission. Adapting to their local physical and radio environments, they only survive and carry on their mission through communication (messaging) – with other devices in the swarm, and with command, control, and big data analysis functions at some distance.

Interconnected drone robots may adapt more quickly to their environment than living beings.
So the “generations” pass in seconds rather than millennia – but only if the communications paths are persistent and resilient, even reforming after interruptions. And the devices may learn and pass on information from the environment – a process mirroring human cultural evolution, which proceeds much more quickly than biological evolution can.

This concept of swarms of adaptive robotic individuals communicating wirelessly in a rapidly evolving topology is top-of-mind for me now as I develop new networking algorithms for use by OEMs, agencies, and System Developers. Demanding outdoor environments requiring mobility, low latency, large hop-to-hop counts (as in mines, tunnels, or a long string of drones), and high throughput are the most likely to need these developing capabilities.

A delicate balance

A delicate balance is needed between individual autonomy and learning on the one hand, and the ability to externally “bias” the network for better efficiency across aggregated devices on the other. Biological evolution similarly acts on individuals – but aggregations of individuals may better survive through common adaptations. This is seen in human society as well as in “super-organisms” such as ants and bees. Networks not inherently driven to learn, propagate, and evolve are the mules of the wireless world – and thus have no future.

Networking technologies have evolved: from the strict topologies of Token Ring to the shared backplanes of hubs, to dedicated switched ports, and now to wireless.

I believe that the next phase will be driven by independent but interconnected machines responding to environmental pressures and the "mission" bias to rapidly evolve their internal networking topology.

I am interested in talking with those who are intrigued by these ideas and wish to work together on developing solutions for dynamic networking environments of today and the future. Contact me via my website to start a conversation.

Friday, February 28, 2014

Writing the Book on the IoT

I’d like to announce that I have written a book proposing a new architecture for the coming explosion of devices in the Internet of Things (IoT). I hasten to add that I didn't set out to write a book – rather Intel Corporation approached me on the basis of the blog posts below and asked me to expand my thoughts into a more detailed treatment. You can click the link to learn more about Rethinking the Internet of Things: A Scalable Approach to Connecting Everything.

I did not originally intend to develop a new architecture for the IoT. Rather, I was thinking about the implications of control and scheduling within machine social networks in the context of Metcalfe's Law. The coming tsunami of machine-to-machine interconnections could yield tremendous flows of information – and knowledge. The book builds on the principles outlined in the blog posts below, but it also expands on a number of new directions that I will touch on in later blog posts.

Once we free the machine social network of the drag of human interaction, there is tremendous potential for creating autonomous communities of machines that require only occasional interaction with or reporting to humans. As the Internet of Things expands exponentially over the coming years, it will be expected to connect to devices that are cheaper, dumber, and more diverse. Traditional networking thinking will fail for multiple reasons.

Fundamentally, traditional IP-based peer-to-peer relationships lock out much of the potential richness of the Internet of Things. There will be vast streams of data flowing, many of which are unknown or unplanned. Only a publish/subscribe architecture allows us to tap into this knowledge by discovering interesting data flows and relationships. And only a publish/subscribe network can scale to the tremendous size of the coming Internet of Things. So appliances, sensors, and actuators must use self-classified traffic schemes to allow for discovery and creation of information “neighborhoods”.
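The Publish/Discover/Subscribe pattern can be sketched in miniature (every name below is invented for the example; this is not an API from the book):

```python
class Integrator:
    """Toy publish/discover/subscribe hub: devices publish self-classified
    readings, and integrators discover flows by classification rather
    than by pre-configured peer addresses."""
    def __init__(self):
        self.flows = {}                      # classification -> readings

    def publish(self, classification, value):
        self.flows.setdefault(classification, []).append(value)

    def discover(self, keyword):
        # Surface interesting flows that nobody pre-configured.
        return [c for c in self.flows if keyword in c]

    def subscribe(self, classification):
        return self.flows.get(classification, [])

hub = Integrator()
hub.publish("moisture/field-3", 17)
hub.publish("door-lock/dock-9", 1)
assert hub.discover("moisture") == ["moisture/field-3"]
assert hub.subscribe("moisture/field-3") == [17]
```

The publishing device never needs to know who, if anyone, is listening -- which is exactly what lets the scheme scale.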

Note that the data needs of the IoT are completely different from the traditional global Internet. Most of the communications will be terse machine-to-machine interchanges that are largely asymmetrical, with much more data flowing in one direction (sensor to server, for example) than in the other. And in most cases, losing an individual message to an intermittent or noisy connection will be no big deal. Unlike the traditional Internet, which is primarily human-oriented (and thus averse to data loss), much of the Internet of Things traffic will be analyzed over time, not acted upon immediately. Most of the end devices will be essentially autonomous, operating independently whether anyone is “listening” or not.

Although IPv6 can potentially provide addresses for myriad devices, the largest population of these appliances, sensors, and actuators will lack the horsepower in terms of processors, memory, and bandwidth to run the bloated IP protocol stack. It simply does not make financial sense to burden a simple sensor with all of the protocol overhead needed for host-to-host communications.

Additionally, the conventional implementation of IP protocols implies networking knowledge on the part of device manufacturers: without centrally authorized MAC IDs and end-to-end management, IP falls flat. Many of the hundreds of thousands of manufacturers of all sizes worldwide building moisture sensors, streetlights, and toasters lack the technical expertise to implement legacy network technology in traditional ways.

There is another subtle reason why a new architecture is needed, and it relates to a favorite theme of mine: control loops. When there are real-time sensing and response loops needed in the Internet of Things, traditional network architectures with their round-trip control loops will be problematic. Instead, a way would be needed to engender independent local control loops managing the “business” of appliances, sensors, and actuators while still permitting occasional “advise and consent” communications with central servers.

The only systems on earth that have ever scaled to the size and scope of the Internet of Things are natural systems: pollen distribution, ant colonies, redwoods, and so on. From examining these natural systems, I developed the concept of the three-tiered IoT architecture described in the book: simple end devices, networking specialist propagator nodes, and information-seeking integrator functions.

I hope that you’ll read my explanation of why terse, self-classified messages, networking overhead isolated to a specialized tier of devices, and publish/subscribe relationships are the only way to fully distill the power of the coming Internet of Things. And I hope especially for your feedback, pro and con.

Wednesday, September 5, 2012

Think Like an Ant

The title of this blog post is slightly misleading, because I'm finding that to "think like an ant" would mean almost not at all. Many species of insects, such as ants, bees, and termites, form social colonies often characterized by one reproductive individual (the queen) and hundreds to millions of non-reproductive workers (in bees and ants, all these workers are female). Because these colonies in some ways essentially form a single distributed individual, they are often referred to as "SuperOrganisms".

These are complex societies, complete with clear divisions of labor, the ability to exploit new food sources and defend the nest from enemies, and the capacity to move the entire community to a new nest location if necessary. But there is no centralized "command and control" by the queen or any other individual.

In place of any centralized control is a set of highly-evolved behaviors that each individual is essentially born with. Ant workers begin these tasks from the moment they emerge as recognizable adults, often immediately beginning to care for nearby young in the brood chamber. No on-the-job training, no instruction from mission control, no learning. They simply operate.

Often worker activities are guided by a simple algorithm selecting from a small set of choices based on situation. For example, an ant worker encountering a hungry larva in the brood chamber feeds it. If the same ant encounters the same larva outside the brood chamber, she carries it back to the brood chamber, hungry or not. A relatively small set of these highly-evolved simple decision trees is multiplied by thousands (or even millions) of workers and then integrated in the form of the colony to result in a highly adaptive, seemingly very intelligent SuperOrganism. But the actual decision-making process of any individual worker is very simple.
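The larva-care decision tree described above is small enough to write out directly (the action names are mine, but the logic follows the example in the text):

```python
def worker_action(larva_hungry, in_brood_chamber):
    """An ant worker's complete decision process on encountering a larva."""
    if not in_brood_chamber:
        return "carry_to_brood_chamber"   # hungry or not, carry it back
    return "feed" if larva_hungry else "continue_patrol"

assert worker_action(True, False) == "carry_to_brood_chamber"
assert worker_action(True, True) == "feed"
assert worker_action(False, True) == "continue_patrol"
```

That is the entire "program" -- the sophistication lives in the colony, not the individual.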

These basic principles of individual simplicity and collective sophistication could be mirrored in the structure of the Internet of Things. End devices may be outfitted with only the barest of intelligence, simply broadcasting their state or responding to simple direction from other functions. Meanwhile, intermediate devices in the Internet of Things may be forwarding messages as necessary to effect communications throughout the system.

A key to keeping the overall architecture simple may be gleaned from the uncomplicated social insect decision process outlined above. A basic set of communication and networking rules, multiplied billions of times over in devices throughout the Internet of Things, may yield great sophistication in overall function. Each individual ant does very little "computing" – the same may be true for many members of the Internet of Things.

One other factor has led to the incredible success of many social insect colonies: the relatedness of the workers in the colony. In many cases, the workers are all sisters of the same parents. This makes them very closely related to one another. Because of this (and because workers typically don't reproduce), there is no incentive for competition with nest mates. Thus, the various classes of workers (or castes) perform a variety of duties on one another's behalf selflessly. No external control is needed to guide the colony to maximize its environmental fitness.

In contrast, the Internet of Things is being architected by Homo sapiens, whose innate drive for competition with one another is adaptive and tragically well-documented. If this competition (one might label it "greed" if uncharitable) is reflected in the architecture of the Internet of Things, a great opportunity will be lost.

If instead devices in the Internet of Things can be designed to "help out" other devices (when the costs to do so are low), there will be a tremendous benefit to the network as a whole. This might take the form of forwarding messages for unrelated devices or other means of ensuring better overall function at a minor cost in delay or compute power. Thinking like the humble ant might prove to have great power in the Internet of Things.

Some writers have used ideas from nature simply as metaphors for the IoT, while others make a more meaningful link to network engineering. I believe that the latter are on the right track: architecting the Internet of Things based on successful natural models of massive systems offers many possibilities.

In the next posts, I may move off communications for a while and explore other areas of the IoT.

Wednesday, August 29, 2012

Talk (Should Be) Cheap

When contemplating how the Internet of Things will communicate, it helps to forget everything you know about traditional networking schemes – especially wide area networking and wireless networking. In traditional wide area and wireless networking, the bandwidth or spectrum is expensive and limited; and the amount of data to be transmitted is large and always growing. While over-provisioning data paths to the wired desktop is commonplace, this isn't usually practical in the WAN or wireless network – it’s just too expensive.

Besides cost, there's the matter of potential data loss and (in the wireless world) collisions. Traditional networking needs lots of checks and double-checks on message integrity and order to minimize costly retransmissions. These constraints led to the protocol stacks with which we are familiar today such as TCP/IP and 802.11.

In most of the Internet of Things, however, the situation is completely different. Oh, the costs of wireless and wide-area bandwidth are still high, to be sure. But the amounts of data from most devices will be almost immeasurably low and the delivery of any single "chirp" or message completely uncritical. As I keep saying, the IoT is lossy and intermittent, so the end devices will be designed to function perfectly well even if they miss sending or receiving data for a while – even a long while. It's this self-sufficiency that eliminates the criticality of any single "chirp".

It might be worthwhile at this point to contrast my view of the IoT with traditional IP. First, IP is fundamentally oriented toward large packets. With large packets, the IP overhead is a relatively small percentage of the overall transmission. But in the IoT, IP overhead is much larger than the typical payload of a chirp.

In addition, a significant amount of the overhead in IP is dedicated to security, encryption, and other services, none of which matter at the very edges of the Internet of Things where the simplest devices predominate (if my view of the IoT is correct).

By contrast, IoT chirps are like pollen – lightweight, broadly propagated, and with meaning only at the "interested" Integrator functions. The IoT is receiver-centric; IP is sender-centric. Because IoT chirps are so small and no individual chirp is critical, we have no concern over retries and resulting broadcast storms, which are a danger in IP.

It’s true that efficient IoT propagator nodes will prune and bundle broadcasts, but seasonal or episodic broadcast storms from end devices are much less of a problem because the chirps are small and individually uncritical. As nature treats pollen, the IoT may treat any single chirp as truly "best effort" – so heavy broadcast storms caused by an external event will die out pretty quickly.

In my view of the IoT, this means that huge packets, security at the publisher, and assured delivery of any single message are passé. This will allow us to mirror nature with massive networks based on lightweight components. In my technical imagination, this makes the IoT more "female" (receiver-oriented) than the "male" structure of IP (sender-oriented).

But having said all that, what's the point in having an IoT if nothing ever gets through? How can we deal with the unpredictable nature of connections? The answer, perhaps surprisingly, is over-provisioning. That is, we can resend these short simple chirps over and over again as a brute force means of ensuring that some get through.

Because the chunks of data are so small, the costs of this over-provisioning at the very edge of the IoT are infinitesimal. But the benefits of this sort of scheme are huge. Since no individual message is critical, there's no need for any error-recovery or integrity-checking overhead (except for the most basic checksum to avoid a garbled message). Each message simply has an address, a short data field, and a checksum. In some ways, these messages are what IP Datagrams were meant to be. The cost and complexity burden on the end devices will be very low, as it must be in the IoT.
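That three-field message -- address, short data field, checksum -- could be sketched as follows. The byte layout and the modulo-256 checksum are illustrative choices of mine, not a specification:

```python
def encode_chirp(address, data):
    """Pack an address byte, a tiny data field, and a basic checksum."""
    body = bytes([address]) + data
    checksum = sum(body) & 0xFF
    return body + bytes([checksum])

def decode_chirp(frame):
    """Return (address, data), or None for a garbled frame -- which is
    simply dropped; no retries, no error-recovery overhead."""
    body, checksum = frame[:-1], frame[-1]
    if (sum(body) & 0xFF) != checksum:
        return None
    return body[0], body[1:]

frame = encode_chirp(0x2A, b"\x01\x02")
assert decode_chirp(frame) == (0x2A, b"\x01\x02")
```

Four bytes on the air for a two-byte reading -- compare that to the dozens of bytes of IP framing.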

The address will incorporate the "arrow" of transmission I mentioned earlier, identifying the general direction of the message: whether toward end devices or toward integrator functions. Messages moving to-or-from end devices will only need the address of the end device – where it is headed or where it is from is unimportant to the vast majority of simple end devices. They're merely broadcasting and/or listening.

So the end devices are awash in the ebb and flow of countless transmissions. But replicating this traffic willy-nilly throughout the IoT would clearly choke the network, so we must apply intelligence at levels above the individual devices. For this, we'll turn to the propagator nodes I've referenced in past posts.

Propagator nodes will use their knowledge of adjacencies to form a near-range picture of the network, locating end devices and nearby propagator nodes. The propagator nodes will intelligently package and prune the various data messages before broadcasting them to adjacent nodes. Using the simple checksum and the "arrow" of transmission (toward end devices or toward integrator functions), redundant messages will be discarded. Groups of messages that are all to be propagated via an adjacent node may be bundled into one "meta" message for efficient transmission. Arriving "meta" messages may be unpacked and re-packed.
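The unpack-prune-repack step might look something like this sketch (the field names and the dictionary structure are my own invention):

```python
def repack(meta_messages, out_arrow):
    """Open arriving 'meta' bundles, drop duplicates keyed on the
    (arrow, checksum) pair, and re-bundle messages heading the same
    direction into one outgoing meta message for the next hop."""
    seen, outgoing = set(), []
    for meta in meta_messages:
        for msg in meta:                      # unpack the bundle
            key = (msg["arrow"], msg["csum"])
            if msg["arrow"] == out_arrow and key not in seen:
                seen.add(key)
                outgoing.append(msg)
    return outgoing                           # one new meta message

inbound = [
    [{"arrow": "up", "csum": 1}, {"arrow": "down", "csum": 2}],
    [{"arrow": "up", "csum": 1}],             # duplicate of the first
]
assert repack(inbound, "up") == [{"arrow": "up", "csum": 1}]
```

Notice that the checksum does double duty: integrity check at the end device, duplicate-suppression key at the propagator.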

Propagator nodes will be biased to forward certain information in particular directions based on routing instructions passed down from the integrator functions interested in communicating with a particular functional or geographic neighborhood of end devices. It is the integrator functions that will dictate the overall communications flow based on their needs to get data or set parameters in a neighborhood of IoT end devices.

Discovery of new end devices, propagator nodes, and integrator functions will be again similar to my architecture for wireless mesh. When messages from-or-to new end devices appear, propagator nodes will forward those and add the addresses to their tables. Appropriate age-out algorithms will allow for pruning the tables of adjacencies for devices that go off-line or are mobile and are only passing through.

One other aspect of communication to be addressed within the Internet of Things is the matter of wireless networking. It’s likely that many of the end device connections in the IoT will be wireless, using a wide variety of frequencies. This fact seems to suggest a need for something like CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance), as used in 802.11 WiFi. But that's another aspect of traditional networking that we need to forget.

Again, data rates will be very small and most individual transmissions completely uncritical. Even in a location with many devices vying for airtime, the overall duty cycle will be very low. And most messages will be duplicates, from our earlier principle of over-provisioning. With that in mind, an occasional collision is of no significance. All that we must avoid is a "deadly embrace" in which multiple devices, unaware of one another's presence, continue transmitting at exactly the same time and colliding over and over.

The solution is a simple randomization of transmission times at every device, perhaps with continuously varying pauses between transmissions based on prime numbers, a hash of the end device address, or some other factor that provides uniquely varying transmission events.
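A sketch of the hashed-address variant -- the specific hash, slot scheme, and timing range below are assumptions of mine, just to make the idea concrete:

```python
import hashlib

def next_pause_ms(device_addr, slot, base_ms=100):
    """Derive a per-device, per-slot pause from a hash of the address,
    so each device's transmission times drift differently over time."""
    digest = hashlib.sha256(f"{device_addr}:{slot}".encode()).digest()
    return base_ms + digest[0]       # somewhere in 100..355 ms

# Two devices in the same slot almost surely pick different pauses,
# breaking the "deadly embrace" of repeated simultaneous collisions.
p1 = next_pause_ms("device-A", 0)
p2 = next_pause_ms("device-B", 0)
```

No carrier sensing, no acknowledgments -- just cheap, deterministic-per-device jitter.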

While the resulting communication scheme is very different from traditional networking protocols, it will be all that we need for the IoT. Providing just enough communication at very low cost and complexity will be good enough for the Internet of Things.

Many of the ideas I'm developing for the Internet of Things are inspired by the interactions of beings in nature. Next time, a look at the way aggregations of creatures become highly-functioning colonies and "SuperOrganisms" – and the lessons this provides for the IoT.