My top 10 takeaways from Velocity
After attending Velocity this year, I thought I would take a moment to reflect on some things I learned.
Unfortunately, my laptop and all of my notes from the conference are in the hands of an unknown taxi driver somewhere in the San Francisco Bay area, so I’m pulling most of this from memory. In no particular order:
- Perceptual diffs - I forget which talk this was from, but one of the speakers mentioned that as part of automated regression testing in their CI process, they actually perform “perceptual diffs” of new content vs. old content, meaning that purely visual bugs (think CSS error on a web page) can be caught where traditional regression tests would miss them. Cool, right?
- RUM (Real user monitoring) - Not a replacement for synthetic monitoring, but a complement. Using the Navigation Timing API, we can gather metrics on what real users are actually experiencing (don’t forget about the outliers!). Best of all, Google Analytics by default will now gather samples from 1% of your traffic and report it under the “Site Speed” section. Now we just need to wait and see what goodies the Resource Timing and User Timing specs provide.
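For the curious, here’s a minimal sketch (mine, not from any talk) of turning a Navigation Timing record into user-level metrics. In a browser you’d pass `window.performance.timing`; the field names come from the Navigation Timing spec (millisecond epoch timestamps).

```javascript
// Derive a few user-perceived metrics from a Navigation Timing record.
function navTimingMetrics(t) {
  return {
    dns: t.domainLookupEnd - t.domainLookupStart,
    tcp: t.connectEnd - t.connectStart,
    ttfb: t.responseStart - t.navigationStart,                 // time to first byte
    domReady: t.domContentLoadedEventEnd - t.navigationStart,  // DOM ready
    pageLoad: t.loadEventEnd - t.navigationStart               // full page load
  };
}

// In a real page: navTimingMetrics(window.performance.timing)
```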
- Jon Snow is not only a bastard brother of the Night’s Watch and true heir to the kingdom of Westeros (speculation/spoiler, sorry); John Snow was also a renowned physician, famous for his “work in tracing the source of a cholera outbreak in Soho, England”.
- Async script hacks aren’t always the best option. To download a script asynchronously, we have to do some hacks, right? Inject a DOM node, or one of those other crazy tricks? Well, it sounds like the answer is “no, not really”. The main issue is that these hacks defeat “look-ahead” script downloading, meaning the browser can’t start downloading other scripts ahead of time, so you can actually get worse parallelization and ultimately worse performance. That’s where the script tag’s “async” attribute comes into play. We favor the async hacks because the “async” attribute is not fully supported by all browsers, namely the old ones. However, considering the advantages of “look-ahead” script downloads, and that 80% of users are on browsers that DO support the “async” attribute, it may actually be more advantageous to use a script tag with the “async” attribute. The one BIG exception is third-party scripts. For example, if you’re including addThis (who don’t actually provide you with an async snippet … shame on you!) on your website, you’ll want to include the script via an async hack, because otherwise you’d risk introducing a SPOF for users whose browsers DO NOT support the “async” attribute.
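For reference, here’s a sketch of both approaches (the widget URL is made up; the `doc` parameter is just there so the function doesn’t hard-code the global `document`):

```javascript
// Preferred where supported: a plain tag with the async attribute keeps
// the browser's look-ahead downloader in play:
//   <script src="app.js" async></script>

// Fallback hack for third-party scripts: inject the tag from script, so a
// hung third-party host can't block rendering in ANY browser.
function loadScriptAsync(doc, src) {
  var s = doc.createElement('script');
  s.src = src;
  s.async = true; // harmless no-op in browsers that ignore it
  var first = doc.getElementsByTagName('script')[0];
  first.parentNode.insertBefore(s, first);
  return s;
}

// In a real page: loadScriptAsync(document, '//example.com/widget.js');
```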
- SPOF tools - Speaking of Single Points Of Failure, there are now 2 tools available for auditing your site for SPOFs: one Chrome plugin by Stoyan Stefanov and another by Patrick Meenan.
- “Don’t fight stupid, make awesome” - I love this concept … don’t waste your time fighting stupid; spend your time elsewhere making awesome. And people do: there was a whole lot of awesome all around … awesome programmers, awesome projects, awesome talks … it’s all very inspiring.
- Automated optimization - This was new to me. There are a number of vendors offering products that will actually intercept your server responses and rewrite them to implement front end performance techniques. They do everything from script deferral and image optimization to domain sharding. The option that seemed most realistic for me was mod_pagespeed, an Apache plugin that will automatically implement a subset of these best practices. It’s smaller, free, and gives you more fine-grained control. This will make for a fun “experiment”.
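For a taste, here’s roughly what enabling it looks like in httpd.conf (directive and filter names are from the mod_pagespeed docs as I remember them, and the module path will vary, so double-check before copying):

```apache
# Load the module and turn it on
LoadModule pagespeed_module /usr/lib/apache2/modules/mod_pagespeed.so
ModPagespeed on

# Opt in to specific rewriters for fine-grained control,
# rather than taking the whole default set
ModPagespeedEnableFilters combine_css,extend_cache,rewrite_images
ModPagespeedDisableFilters collapse_whitespace
```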
- Google Critical Path Explorer - Basically, a waterfall diagram, but you can interactively explore the “critical path” … meaning what is blocking what from executing. This will be offered as part of Page Speed Insights.
- “Networky” things matter - I used to ignore all the different colors on a waterfall chart, but as it turns out, they’re quite important. They represent a bunch of “networky” things you can tune to make your page faster … minimizing DNS lookups, setting a keep-alive header, preventing scripts from being split across packets, chunked encoding, gzip, and a few other techniques.
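To make two of those concrete, here’s a tiny Apache sketch of persistent connections and gzip (these are standard Apache directives, but the values are just illustrative):

```apache
# Reuse TCP connections instead of paying a handshake per request
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 5

# Compress text responses with mod_deflate
AddOutputFilterByType DEFLATE text/html text/css application/javascript
```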
- SPDY is not always faster than HTTP - As I understand it, while SPDY is strictly faster than HTTPS, it is only faster than HTTP in certain situations. A lot of the problems SPDY was originally meant to solve have been fixed in most browsers. So while SPDY is “awesome”, depending on the situation, it is not always faster than HTTP. I’ll be interested to see how widely it is adopted.
Bonus: Facebook is big … I mean REALLY big. Like 6 million images uploaded every 30 minutes big.
Bonus: CSS can be tuned for performance. Who knew? It mainly involves minimizing your DOM size and favoring the most efficient selectors.
I use Fiddler extensively. It’s one of the few tools that I honestly don’t think I could do my job without.
A few good use cases:
- Any time redirects are involved … like with authentication systems with multiple redirects for one request. You can see step by step what’s happening.
- Cookies … rather than just checking your browser tools to see what cookies have been set, check the response headers and see exactly where and by what they were set.
- Caching. Viewing cache headers is huge. Not to mention seeing whether a request for a resource was actually made to the server, or just served from cache.
- Ajax. Obviously, it helps to see what you are sending and receiving.
- Tracking code … is your beacon request being sent?
- Basically any time there is a front end issue with your web page, Fiddler can usually help … are all the resources you’re expecting being included? Is there a funny third-party request from your page that you don’t recognize? Are there 404s on any of your resources? … etc.
- Pausing requests and responses. This is helpful if you want to simulate a SPOF on a file, or test performance while a resource hangs.
- Editing requests and responses. Is there an obfuscated js file on a page that you want to debug, but can’t because it’s all on a single line? No worries: pause the response, copy the script into a js beautifier, edit the response by adding the beautified js, and voilà, you can now debug.
You can also manipulate requests and responses programmatically, something I only recently discovered and which is way more useful than I would have anticipated: http://www.fiddler2.com/fiddler/dev/scriptsamples.asp
- Highlight requests that have certain text or headers … (like requests that set a certain cookie, etc.)
- Simulate delays
- Pause particular responses so you can edit (or test SPOF)
- Return a 404 or 500 on certain resources to test how your app will handle it
- Add / Remove headers
- You can record a session and use it for load testing.
- You can use it to test a REST api by creating your own requests from scratch.
- You can replay requests and sessions over again without actually using your browser.
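Here’s a FiddlerScript sketch (Rules > Customize Rules opens CustomRules.js, which is JScript.NET) covering a few of the above. The session flags and methods below are my recollection of the Fiddler object model, so verify against the docs before relying on them:

```javascript
static function OnBeforeRequest(oSession: Session) {
    // Highlight requests that carry a particular cookie
    if (oSession.oRequest.headers.Exists("Cookie") &&
        oSession.oRequest.headers["Cookie"].indexOf("mycookie") > -1) {
        oSession["ui-color"] = "orange";
    }

    // Simulate a slow network: trickle the request, 300ms per KB
    if (oSession.uriContains("slow-me-down")) {
        oSession["request-trickle-delay"] = "300";
    }

    // Return a 404 for a resource to test how the app handles it
    if (oSession.uriContains("flaky-resource.js")) {
        oSession.oRequest.FailSession(404, "Blocked", "Testing 404 handling");
    }
}
```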
There are other good tools out there that can accomplish the same thing, and Fiddler is admittedly not the most intuitive program. But just like Eclipse, once you figure it out, you’ll love it.
Bah! This took longer than 10 minutes … I might have to bump my description up to 15 :/