Cloud computing provides hardware-based services: computing, networking, and storage capacity. These services are delivered on demand, hosted by the cloud provider, and can easily scale up and down.

- Wiki

“The Cloud” is infinite. It can scale to eternity. It’s entirely redundant and resilient to any outage. Except when it isn’t.

When it comes to measuring application performance across our local enterprise network, we think we know what network latency is and how to calculate it. But when we move these applications off-premises and onto private or public cloud infrastructure, there are many subtleties that can affect latency in ways we don't immediately realise.
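One of those subtleties is the tail: the average round-trip time looks healthy while a few slow cross-region requests dominate user experience. As an illustrative sketch (not from any particular monitoring tool), summarising latency samples with percentiles rather than just the mean makes this visible:

```python
import statistics

def latency_summary(samples_ms):
    """Summarise round-trip-time samples in milliseconds.

    The mean hides the tail: a handful of slow requests (a cold
    cache, a cross-region hop) can dominate user experience, so
    report high percentiles alongside the mean.
    """
    cuts = statistics.quantiles(samples_ms, n=100)  # 99 percentile cut points
    return {
        "mean": round(statistics.mean(samples_ms), 2),
        "p50": cuts[49],   # median
        "p99": cuts[98],   # 99th percentile
    }

# Hypothetical sample: 99 local round trips at ~5 ms plus one
# 250 ms cross-region outlier. The mean barely moves; p99 explodes.
samples = [5.0] * 99 + [250.0]
print(latency_summary(samples))
```

Here the mean is about 7.5 ms, suggesting nothing is wrong, while the p99 is over 200 ms. In the cloud, where a request may cross availability zones or regions, it is this tail that users actually feel.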

Using a public cloud such as Microsoft Azure comes with a general expectation of infinite capacity and scalability. While we all know there are physical limits, the massive scale, ease of management, and self-service nature of cloud environments give the impression of a seemingly infinite pool of computing resources. In reality, every cloud resource has finite capacity, so when creating cloud apps we should design for scalability from the very beginning.
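One concrete way finite capacity shows up is throttling: a service pushes back (for example with HTTP 429) when clients exceed its limits. Designing for this from the start typically means retrying with exponential backoff and jitter. A minimal sketch, assuming a hypothetical `Throttled` exception raised by the call being wrapped:

```python
import random
import time

class Throttled(Exception):
    """Raised when the service asks the client to slow down (e.g. HTTP 429)."""

def call_with_backoff(operation, max_attempts=5, base_delay=0.1):
    """Retry a throttled call with exponential backoff and full jitter.

    `operation` is any callable that raises Throttled when the service
    pushes back. Backing off spreads retries out instead of hammering
    an already-saturated service even harder.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except Throttled:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error to the caller
            # full jitter: sleep a random time up to base * 2^attempt
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))
```

The randomised ("jittered") delay matters as much as the exponential growth: without it, many clients throttled at the same moment would all retry in lockstep and overload the service again.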

The word edge in this context means literal geographic distribution. Edge computing is computing that’s done at or near the source of the data, instead of relying on the cloud at one of a dozen data centers to do all the work. It doesn’t mean the cloud will disappear. It means the cloud is coming to you.