Fog and Edge Computing

Edge and fog computing are closely related – both refer to the ability to process data closer to the requestor / consumer to reduce latency and cost and to improve the user experience. Both can filter data before it hits a big data lake for further consumption, reducing the amount of data that needs to be processed. The basic idea of edge computing is to move data logic (mainly data validation / data grammar checks) to an outer ring of capabilities.

This is a direct response to the sheer increase in data bandwidth required by end devices, fuelled by the explosion of IoT (Internet of Things), which in turn has increased the need to process the generated data much closer to the source, in real time. In other words, edge and fog computing push the cloud (read: data centre) closer to the requestor to minimise latency, minimise cost, and increase quality.

Let’s look at some examples:

  • A Boeing 787 generates 40 terabytes (TB) per hour of flight, half a TB of which is ultimately transmitted to a data centre for analysis and storage.
  • A large retail store collects approximately 10 gigabytes (GB) of data per hour, and 1 GB of that is transmitted to a data centre.
  • An automated manufacturing facility generates approximately 1 TB of data per hour, and 5 GB of that is transmitted to a data centre.
  • A mining operation such as that of Rio Tinto can generate up to 2.4 TB per minute.

Someone or something has to crunch all that data to decide what is discarded and what is processed further, and in what manner. For a 787 it is not very efficient (or even possible) to install a data centre on board; the same applies to the large retailer – it is neither effective nor very sensible to install a data centre in each store.
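The figures above imply an aggressive filtering ratio at the edge. A small calculation, using only the numbers quoted earlier, shows how little of the generated data is actually worth shipping to a central data centre:

```python
# Filtering ratios implied by the figures quoted above: for each source,
# only a small fraction of the generated data is transmitted centrally.

sources = {
    # name: (generated per hour in GB, transmitted per hour in GB)
    "Boeing 787": (40_000, 500),           # 40 TB generated, 0.5 TB transmitted
    "Retail store": (10, 1),               # 10 GB generated, 1 GB transmitted
    "Manufacturing facility": (1_000, 5),  # 1 TB generated, 5 GB transmitted
}

for name, (generated, transmitted) in sources.items():
    filtered = 100 * (1 - transmitted / generated)
    print(f"{name}: {filtered:.1f}% of the data never leaves the edge")
```

For the 787 that works out to 98.75% of the data being dealt with before anything crosses the network – which is exactly the work that edge / fog computing takes on.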

The answer is to “pull” the data validation and pre-processing closer to the source without having to install a full set of servers, storage, and networking. Edge devices like routers and switches, as well as standard servers (like an in-store server for the retail store or servers on board a 787), can provide much of this near real-time data processing and validation.
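As a minimal sketch of that pre-processing step (the function name, the temperature range, and the summary fields are all invented for illustration): an edge node can validate a raw sensor stream and forward only a compact summary instead of every reading.

```python
from statistics import mean

def preprocess_at_edge(readings, low=-40.0, high=125.0):
    """Hypothetical edge-side step: validate raw sensor readings and
    forward only a compact summary instead of the full stream."""
    valid = [r for r in readings if low <= r <= high]  # drop out-of-range glitches
    if not valid:
        return None  # nothing worth forwarding to the data centre
    return {
        "count": len(valid),
        "min": min(valid),
        "max": max(valid),
        "mean": round(mean(valid), 2),
    }

# Six raw temperature readings, two of them sensor glitches, collapse
# into a single small record before anything crosses the network.
raw = [21.5, 21.6, 999.0, 21.4, -80.0, 21.7]
print(preprocess_at_edge(raw))
```

The point is not the specific summary chosen but the shape of the pattern: validation and reduction happen next to the source, and only the distilled result travels to the central data lake.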

The main difference between edge and fog computing is the location of the devices – edge computing places the data validation and processing intelligence on, or directly adjacent to, the end devices themselves, whereas fog computing pushes that intelligence into the local network, onto a more central gateway-like device.

Cloud and edge / fog computing cater for a different set of requirements, as set out in the comparison table below:

IoT devices are “chatty”: they produce a constant stream of data that has to be validated, analysed, and processed. Traditional transactional systems had the requestor (say, a customer application) send unvalidated data to a data processor that was typically installed in a central data centre. With the explosion of IoT devices, the data validation / data grammar checks have to happen closer to the requestor.
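A minimal sketch of such a “data grammar check” moving to the edge (the field names and types are invented for illustration): the device-facing gateway rejects malformed messages immediately, instead of letting the central data centre discover the problem later.

```python
# Hypothetical message schema for an IoT sensor reading.
REQUIRED_FIELDS = {"device_id": str, "timestamp": float, "value": float}

def grammar_check(message):
    """Return a list of problems; an empty list means the message may pass."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in message:
            problems.append(f"missing field: {field}")
        elif not isinstance(message[field], expected_type):
            problems.append(f"bad type for {field}: {type(message[field]).__name__}")
    return problems

ok = {"device_id": "sensor-7", "timestamp": 1700000000.0, "value": 21.5}
bad = {"device_id": "sensor-7", "value": "21.5"}

print(grammar_check(ok))   # empty list: forward to the data centre
print(grammar_check(bad))  # rejected at the edge, never crosses the network
```

Running the same check at a central data centre would mean the malformed message still consumed bandwidth end to end; at the edge it is stopped at the first hop.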

The advantages of edge / fog computing:

  • edge computing can crunch through more data at a faster pace than having the data processed in a central location.
  • it allows for offline / disconnected validation and can help reduce the total amount of end-to-end bandwidth needed.
  • it can help reduce exposure to security threats such as viruses and hacking attempts, as it can apply encryption and other security measures before the data traverses unprotected parts of the internet.
  • due to its proximity, edge / fog computing can lower bandwidth costs.
What are the disadvantages of edge computing for the industrial internet of things?

Edge / fog computing

  • can potentially add a layer of complexity to the overall compute, storage, and network architecture, which in turn can add to the time it takes to perform root cause analysis.
  • as edge computing “pulls” capabilities to a decentralised location, all physical assets have to be secured, maintained, and operated. In some instances the total cost of ownership can increase.
  • the refresh cycles of these edge devices can be longer than those of a typical cloud infrastructure device, resulting in “architecture design” lock-ins. For instance, an in-store computing and storage device might not be changed for 6+ years. This is an important aspect when designing the overall architecture, as it restricts the amount of physical change the environment can cater for.
  • it also means that the innovation cycles of existing and new IoT devices might be restricted, as the environment cannot accommodate them if fundamental changes to the edge hardware are needed.

Summary

Fog and edge computing are both able to filter data before it hits a big data lake for further consumption, reducing the amount of data that needs to be processed. The basic idea of edge and fog computing is to move data logic (mainly data validation / data grammar checks) to an outer ring of capabilities.

Thanks for Reading
