First days with Ghost Inspector

A while ago we switched away from using Newrelic APM, simply because it is way too expensive compared to the price points we are used to in an AWS environment. It's fine to have a few cents of overage on an AMI, but what about when you're trying to monitor Lambda functions, or have a bunch of micro-services?

The APM piece we were able to replace pretty easily with AppEnlight, which also brought in logging and other analytics as a bonus. However, one piece that I thought Newrelic did very well, and which I still missed, was browser-based end-to-end testing. This was called Newrelic Synthetics, and unlike the rest of Newrelic, it was priced very reasonably.

Unfortunately we'd made the push to move away from Newrelic entirely, so I had to search for something else. This also gave me the advantage of not being confined to just what Newrelic could offer, so when I ran across a Promoted Tweet (yes, marketers, those do occasionally work!) for Ghost Inspector, I decided to take a look. What I found initially amazed me, then made me slightly sad, but finally came around to amaze me again.

The Chrome Recorder

This is a great little tool that automatically builds a test just from you running commands in a Chrome browser. You can even stop in the middle of a test and insert assertions. It's easy to use, and can be enabled to work in Incognito Mode (which I prefer when using Chrome, since it starts a fresh session).

It was what brought me back to Ghost Inspector (after looking around I realized I had tried them out before, but found it too cumbersome). The greatest part about it was that it recorded clicks, input assignments, and everything else I was doing. It seemed like a great way to go!

Then I took a look at the actual recorded events: the Chrome Recorder, and in fact all of Ghost Inspector, just uses CSS selectors to pick elements, make assertions, and generally build your tests. I quickly realized that many of the CSS selectors the Chrome Recorder was generating just wouldn't hold up; it was picking up on Bootstrap classes instead of things like data-role attributes. Still, this was a good starting point, so I could simply fix up the CSS selectors a bit and make them work for both production and development environments.
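As a rough sketch of what that fix-up looks like, here's a small Python check (the selectors and the data-role attribute are made up for illustration; the recorder itself only hands you raw CSS selectors to edit):

```python
# Hypothetical cleanup of recorded selectors. The recorder tends to emit
# Bootstrap styling classes, which break whenever the layout changes;
# swapping in data-role attributes (assumed to exist in our own markup)
# keeps a step stable across production and development.

def is_brittle(selector: str) -> bool:
    """Flag selectors that lean on Bootstrap styling/layout classes."""
    bootstrap_markers = (".btn", ".col-", ".pull-", ".badge")
    return any(marker in selector for marker in bootstrap_markers)

recorded = ".btn.btn-primary.pull-right"  # what the Chrome Recorder produced
fixed = '[data-role="search-submit"]'     # what we hand-edit it to

print(is_brittle(recorded))  # True  -> needs fixing
print(is_brittle(fixed))     # False -> safe to keep
```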

The Screen Comparison Tool

Another pretty cool idea is Screenshot Comparison. You can even tell Ghost Inspector to include only specific areas, or exclude areas, all by CSS selector. Pretty neat if you're working with a mostly static layout, and it would actually alert you after a deployment if you changed the layout of a given page. It's nice to be able to do, but for the most part it wasn't what drew me to the tool. Still, it's good to see both a final screenshot of each test run as well as a video, especially in the event of a failure.

Variables, and imported tests

Ok, so by this point I was "so-so" about Ghost Inspector, and then I noticed the ability to import steps from other tests and set custom variables. There's not exactly a "for each of these elements, do this" type of option; however, with a few quick shared tests, I was able to not only build a "test template" to handle login operations for tests that required it, but also create a "template" test for a specific type of page, then set a variable and run that same test against dozens of pages that all needed to be checked.
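To sketch how that template pattern plays out, Ghost Inspector's API lets you execute a test and pass custom variables as query parameters. The test id, variable name (`pageUrl`), and page list below are all hypothetical:

```python
# Run one "template" test against many pages by overriding a variable per
# run. The template test would reference {{pageUrl}} in its navigation step.
from urllib.parse import urlencode

API_KEY = "my-api-key"       # placeholder
TEMPLATE_TEST_ID = "abc123"  # hypothetical id of the shared template test

pages = [
    "https://example.com/stories/politics",
    "https://example.com/stories/sports",
    "https://example.com/stories/weather",
]

def execute_url(page_url: str) -> str:
    """Build the execute-API URL that runs the template against one page."""
    query = urlencode({"apiKey": API_KEY, "pageUrl": page_url})
    return f"https://api.ghostinspector.com/v1/tests/{TEMPLATE_TEST_ID}/execute/?{query}"

for page in pages:
    print(execute_url(page))
    # urllib.request.urlopen(execute_url(page)) would actually kick off the run
```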

Obviously, tests like these are only needed to check against regressions, but when you do find a bug, write a test that fails, then build in the fix and watch it pass, it's something amazing.

My setup

Ok, so down to the nitty-gritty: I've set up Ghost Inspector with three separate "Projects", or test suites. One suite is never actually run; it's just called "Shared". It holds all of the test snippets, like logging in, or testing one specific page component that's used in multiple other tests, and multiple times.

Another suite is called "Development Tests"; these are tests that are not scheduled to run automatically, but are designed to run one at a time, right before any deployment. They are time-consuming, there are a lot of them, and they're basically advanced unit tests. They point directly at our staging/development server and really hammer it with user- and non-user-based tests.

The last suite is called "Production Hourly Tests"; as the name implies, these tests run against production servers once every hour. Some of them test screenshot compliance, some just use assertions to make sure certain items are visible. These tests are usually shorter, and we try not to let them spam our servers.

A final note about Analytics

One last thing: we needed to make sure our analytics didn't get jacked up because of a test. We track how many users view any given story to see how popular it is, but we didn't want our repeated searches for the same thing to skew that. Ghost Inspector doesn't (yet) let you pass in custom headers like the DNT header, but it does allow you to specify a User Agent. I set that to "Ghost Inspector" and check for it on each request. If it's set, the backend sets the DNT flag and excludes all analytics: Intercom, Google Analytics, and our custom tracking. It's not ideal to rely on a User Agent string, but so far it seems to be working.
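A minimal sketch of what that backend guard might look like (the function and exact header handling are illustrative, not our actual code):

```python
# Requests whose User Agent contains "Ghost Inspector" are treated like
# Do Not Track traffic and excluded from every analytics hook.
def should_track(headers: dict) -> bool:
    """Return False for synthetic test traffic or explicit DNT requests."""
    if "Ghost Inspector" in headers.get("User-Agent", ""):
        return False  # test run: skip Intercom, Google Analytics, custom tracking
    if headers.get("DNT") == "1":
        return False  # honor a real Do Not Track header too
    return True

print(should_track({"User-Agent": "Ghost Inspector"}))          # False
print(should_track({"User-Agent": "Mozilla/5.0", "DNT": "1"}))  # False
print(should_track({"User-Agent": "Mozilla/5.0"}))              # True
```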