[Video] by a Google Engineer about what it takes to render a web page in under 1 second.
Page weights and load times vary so much from site to site and industry to industry. While it’s easy to spot the obviously bad examples, it can be much more difficult to find the line between what is “fast enough” and what is slow.
How to use a waterfall chart to diagnose your website's performance pains
How pixels get onto your users' screens is something you should know about. Not for the sake of knowing, but because in order to be effective as a modern web developer you're going to need to optimize for it.
Because great performance starts with design.
You really don't know what needs to be optimized until you measure performance.
The main lesson to take away from Stack Overflow (the team/product) is that they take performance seriously.
Using HTTPS on your site can be a great way to ensure that your users are receiving safe, encrypted information. If not configured correctly, HTTPS can be slightly slower when compared to HTTP, but there are steps that can be taken to reduce this overhead.
Measuring performance is so important that we need a hashtag (#perfmatters) to discuss all the difficult scenarios and topics surrounding the question “How can I make my website faster?”
Netflix: "We’ve been busy building our next-generation Netflix.com web application using Node.js. Today, I want to share some recent learnings from performance tuning this new application stack."
Social media share buttons can be easily added to any website. The buttons make it simple for users to share the page, and display the number of times people have shared that page. This might make it seem like a no-brainer to include on every website you design, but these buttons come at a cost to performance.
How .NET treats loops involving array and collection access and what kinds of optimizations you can expect.
If we need to provide fast-responding websites, we have to focus our optimization on the client side and on how to efficiently deliver content to the end user.
Elasticsearch configuration properties are key to its elasticity. If the default configurations are working adequately for you at the current stage of your application’s evolution, rest assured that you’ll have plenty of levers available to you as your application grows.
One of my favorite features of ASP.NET Web API is the opportunity to run your code outside Internet Information Services (IIS). I don’t have anything against IIS, but System.Web is really a problem and, in some cases, the IIS pipeline is too complicated for a simple REST call.
Most of these techniques involve common sense once you have understood the underlying problem.
Flipboard launched during the dawn of the smartphone and tablet as a mobile-first experience, allowing us to rethink content layout principles from the web for a more elegant user experience on a variety of touchscreen form factors.
There are compelling arguments why companies – particularly online retailers – should care about serving faster pages to their users. Countless studies have found an irrefutable connection between load times and key performance indicators ranging from page views to revenue.
Scott Jehl takes a look at Wired's new site and explains a few optimization tweaks that could massively improve the perceived performance.
Simply put, performance matters. We know members want to immediately start browsing or watching their favorite content and have found that faster startup leads to more satisfying usage.
There are plenty of ways to measure performance, and measuring it is essential when we’re trying to improve our sites: you need to know where you are before you know where you can go next. Let’s go over a few of those ways.
Modern browsers try their best to anticipate what connections the site will need before the actual request is made. By initiating early "preconnects", the browser can set up the necessary sockets ahead of time and eliminate the costly DNS, TCP, and TLS roundtrips from the critical path of the actual request. That said, as smart as modern browsers are, they cannot reliably predict all the preconnect targets for each and every website.
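A minimal sketch of a manual hint, for cases where the browser can’t predict a connection on its own (the CDN host name here is just a placeholder):

```html
<!-- Set up DNS + TCP + TLS early for an origin we know we'll request from -->
<link rel="preconnect" href="https://cdn.example.com" crossorigin>
<!-- Cheaper fallback for browsers without preconnect: resolve DNS only -->
<link rel="dns-prefetch" href="https://cdn.example.com">
```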
Building a web application and hosting it on a web server is insanely easy with ASP.NET and IIS. But there are lots of hidden configuration options that can be tweaked to make it a high-performance web application.
When we talk about time that can be measured with a stopwatch, we’re talking about objective time or clock time. Objective time, though, is usually different from how users perceive time while waiting for or interacting with a website, app, etc. When we talk about the user’s perception of time, we mean psychological time or brain time.
Your website is slow, but the backend is fast. How do you diagnose performance issues on the frontend of your site?
Load web fonts asynchronously. Avoid big reflows in layout. Load web fonts as fast as possible. Avoid loading web fonts for recurring visitors.
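One common way to apply these ideas (the font file name and family here are illustrative, not from the article): preload the font file so it starts downloading early, and use `font-display: swap` so text renders immediately in a fallback font instead of blocking:

```html
<!-- Start the font download early, without blocking rendering -->
<link rel="preload" href="/fonts/site.woff2" as="font" type="font/woff2" crossorigin>
<style>
  @font-face {
    font-family: "Site Sans";
    src: url("/fonts/site.woff2") format("woff2");
    /* Show fallback text right away; swap in the web font when it's ready */
    font-display: swap;
  }
</style>
```

For recurring visitors, long-lived caching of the font file means the swap is effectively instant on repeat views.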
The total size of a webpage, measured in bytes, has little to do with its load time. Instead, increase network utilization: make your site preloader-friendly, minimize parser blocking, and start downloading resources ASAP with Resource Hints.
C# (and any language that runs on the CLR) is a garbage-collected language, meaning that objects that have no references to them remaining will have their memory reclaimed at some point in the future. Creating too much garbage (by creating too many ephemeral objects or over-using the new keyword) can induce the garbage-collector too frequently, slowing down the entire application.
Jonathan Blow of “The Witness” fame likes to talk about just typing the obvious code first. Usually it will turn out to be fast enough. If it doesn’t, you can go back and optimize it later.
Going the extra mile designing for performance can shave vital seconds off of the page load of your site.
Jamie Knight reveals the techniques the BBC uses to speed up its site and help users flow from one page to the next.
Kestrel is the new cross-platform .NET web server (based on libuv) which runs on Linux, Mac and Windows 10 and will, eventually, run on Raspberry Pi. One of the outstanding improvements is the sheer speed: according to some measurements, it is about 20 times faster than ASP.NET running on IIS.
Once you start working with the Varnish source code, you will notice that Varnish is not your average, run-of-the-mill application. That is not a coincidence.
Are you using progressive booting already? What about tree-shaking and code-splitting in React and Angular? Have you set up Brotli or Zopfli compression, OCSP stapling and HPACK compression? Also, how about resource hints, client hints and CSS containment — not to mention IPv6, HTTP/2 and service workers?
Writing a fast website is like raising a puppy, it requires constancy and consistency (both over time and from everyone involved). You can do a great job keeping everything lean and mean, but if you get sloppy and use an 11 KB library to format a date and let the puppy shit in the bed just one time, you’ve undone a lot of hard work and have some cleaning up to do.
ASYNC and DEFER are similar in that they allow scripts to load without blocking the HTML parser which means users see page content more quickly. But they do have differences.
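A minimal illustration of the difference (script names are placeholders): an `async` script executes as soon as it finishes downloading, in whatever order that happens, while `defer` scripts wait for the parser to finish and then run in document order:

```html
<!-- Runs whenever it arrives; execution order not guaranteed -->
<script async src="/js/analytics.js"></script>
<!-- Runs after HTML parsing completes, in document order -->
<script defer src="/js/app.js"></script>
```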
The jury has spoken: performance, conversion, and brand engagement are inextricably connected. Amazon has shown that each 100ms of latency costs them 1% in sales. Walmart chalks up an extra 2% conversions with every second of performance improvement. Any online shopper will tell you that faster is better than slower — but is speed as simple as the shortest distance from point A to B?
The OutputCache attribute is a great way to improve both response time and scalability, except there are many times when you can’t use it. Here’s how to leverage the HtmlHelper Action method to handle those exceptions.
One of the most exciting aspects of .NET Core is performance. There’s been a lot of discussion about the significant advancements that have been made in ASP.NET Core performance, its status as a top contender on various TechEmpower benchmarks, and the continual advancements being made in pushing it further. However, there’s been much less discussion about some equally exciting improvements throughout the runtime and the base class libraries.
In this post we’ll be discussing lots of ways to tune web servers and proxies. Please do not cargo-cult them. For the sake of the scientific method, apply them one-by-one, measure their effect, and decide whether they are indeed useful in your environment.
When I launched Pwned Passwords V2 last week, I made it fast - real fast - and I want to talk briefly here about why that was important, how I did it and then how I've since shaved another 56% off the load time for requests that hit the origin. And a bunch of other cool perf stuff while I'm here.
How many nodes do I need to deploy to accommodate x number of requests per second? When should I consider scaling out my application? How does scale-out affect the customer experience? This is precisely why server time and response time matter!