Scalability is the ability of a system, network, or process to handle a growing amount of work in a capable manner, or its ability to be enlarged to accommodate that growth.

- Stackoverflow.com Wiki

Redis drives Timeline, Twitter’s most important service. Timeline is an index of tweets, indexed by an id. Chaining tweets together in a list produces the Home Timeline. The User Timeline, which consists of the tweets the user has tweeted, is just another list.
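A minimal sketch of how such list-based timelines might look with redis-py; the key names and fan-out helper here are hypothetical illustrations, not Twitter’s actual implementation.

```python
import redis

r = redis.Redis()

def push_tweet(author_id: int, tweet_id: int, follower_ids: list[int]) -> None:
    # The author's User Timeline is just a list of tweet ids.
    r.lpush(f"timeline:user:{author_id}", tweet_id)
    # Fan the tweet out to each follower's Home Timeline, trimming the list
    # so it stays a bounded, cache-sized index rather than growing forever.
    for follower_id in follower_ids:
        pipe = r.pipeline()
        pipe.lpush(f"timeline:home:{follower_id}", tweet_id)
        pipe.ltrim(f"timeline:home:{follower_id}", 0, 799)
        pipe.execute()

def home_timeline(user_id: int, count: int = 20) -> list[bytes]:
    # Reading a timeline is a single LRANGE over the precomputed list.
    return r.lrange(f"timeline:home:{user_id}", 0, count - 1)
```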


Using a public Cloud such as Microsoft Azure is associated with a general expectation of infinite capacity and scalability. While we all know that there are always physical limits, the massive scale, ease of management and self-service nature of cloud environments give us the impression of a seemingly infinite set of computing resources. However, all cloud resources have finite capacity and when creating cloud apps we should carefully design for scalability from the very beginning.


The idea behind cloud computing, as pioneer Amazon Web Services believed when it launched its first utility compute and storage products eight years ago, is to abstract away the underlying hardware and provide raw resources to programmers for applications to run on.


When you're designing an application, there is a temptation to build it on a super-scalable, future-proof architecture, even when the immediate requirements can be met by a simple single-tier application that exploits the pure power of IIS and SQL Server.


How to Handle 10B Requests a Day


Failure is part of engineering any large-scale system. One of Facebook's cultural values is embracing failure.


Principles behind writing highly available, fault-resilient systems.


At Netflix we receive high quality sources for our movies and TV shows and encode them to the best video streams possible for a given member’s viewing device and bandwidth capabilities. With the continued growth of our service it has been essential to build a video encoding pipeline that is highly robust, efficient and scalable.


How do you scale a system from one user to more than 11 million users?


Since this is the cloud, smaller machines will give you better scale-in/out granularity - so ymmv depending on what kind of workload you are running. Also, in production you want to give yourself headroom for bursts rather than trying to max out your connections - which should also be kinder on latency.


Multi-user online games are among the most successful interactive, world-scale distributed systems built in the past decade.


This is #1 in a very long series of posts on Stack Overflow’s architecture. Welcome.


Fewer companies know how to build world-spanning distributed services than there are countries with nuclear weapons. Facebook is one of those companies, and Facebook Live, Facebook’s new live video streaming product, is one of those services.


Scaling the traffic is not the issue. Scaling the team and the product feature release rate is the primary driver.


Mastodon consists of two parts that scale differently: databases and code. Databases scale vertically. That means it’s a lot easier and more cost-efficient to buy a super beefy machine for your database than it is to spread the database over multiple machines with sharding or replication. The Mastodon code, on the other hand, scales horizontally: run it from as many machines as you want, concurrently, load balance the web requests, and you’re good.


This is a tale of infrastructure evolution. In the span of 12 months, we went from large, release-oriented deploys to an architecture and approach that enabled 150+ deploys a day. Getting there wasn’t easy, but we learned a lot along the way…


In pretty much any serious real-world interactive system, it is the database that is The Bottleneck™.


The consumption plan for Azure Functions is capable of scaling your app to run on hundreds of VMs, enabling high-performance scenarios without having to reserve and pay for huge amounts of compute capacity up front.
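For context, an HTTP-triggered function on the Consumption plan can be as small as the following sketch (Python programming model; the route name and greeting are hypothetical, and the accompanying function.json binding configuration is omitted). The platform, not the code, decides how many instances to run.

```python
# A minimal HTTP-triggered function; on the Consumption plan the platform
# adds and removes instances based on incoming load, so the handler itself
# stays stateless and scale-agnostic.
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```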


Netflix is a service that streams around 250 million hours of video per day to around 98 million paying subscribers in 190 countries. At this scale, providing quality entertainment in a matter of a few seconds to every user is no joke.


How many nodes do I need to deploy to accommodate x requests per second? When should I consider scaling out my application? How does scale-out affect the customer experience? This is precisely why server time and response time matter!
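A back-of-the-envelope way to answer the first question is Little's Law (in-flight requests = arrival rate x mean response time); the numbers below are hypothetical, purely for illustration.

```python
import math

# Hypothetical capacity estimate using Little's Law:
# in-flight requests = arrival rate (req/s) * mean response time (s).
target_rps = 2000            # expected peak requests per second (assumed)
mean_response_s = 0.050      # measured mean server response time (assumed)
per_node_concurrency = 25    # in-flight requests one node handles comfortably (assumed)

in_flight = target_rps * mean_response_s             # 100 concurrent requests
nodes = math.ceil(in_flight / per_node_concurrency)  # 4 nodes before headroom
print(f"{in_flight:.0f} concurrent requests -> at least {nodes} node(s), plus headroom for bursts")
```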


Recommended books