Make Your Specs Faster with Poltergeist. And Fail.
Some time ago we decided to make our acceptance tests faster. We were using Cucumber with Selenium, and we replaced it with the Poltergeist driver. Poltergeist uses the PhantomJS engine, and thanks to that our tests run around three times faster than before. Everything works smoothly on our machines, but there is one small problem: sometimes, in some steps, PhantomJS crashes on CircleCI :).
It forces us to click “rebuild” a few times in a row. This doesn’t make our tests faster. But the direction is good, so what could we do? We could:
- Connect with CircleCI VM using SSH.
- Download the crash dump.
- Notice that it contains sensitive data.
- Report the crash by creating an issue on GitHub.
- Wait for someone to fix it, or fix it ourselves.
- Wait for a new version of Poltergeist.
- Wait for CircleCI to update their Poltergeist version.
Or maybe…
Rerun Failing Specs
Cucumber, like most testing tools, lets you choose the output format. What’s more, it has one special format, called rerun, which writes a list of failing scenarios to a specified file.
Once you have this file, you can run these scenarios again:
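Assuming the list goes into a file called failing_scenarios.txt (the name used in the rest of this post), the two invocations look roughly like this:

```
# First run: record the scenarios that fail
cucumber --format rerun --out failing_scenarios.txt

# Second run: execute only the scenarios listed in that file
cucumber @failing_scenarios.txt
```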
It’s as easy as this! Let’s write rake tasks which do this:
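A minimal sketch, assuming the file name failing_scenarios.txt and the task names used later in this post (not necessarily the exact tasks from our app):

```ruby
# lib/tasks/failing_cucumber_specs.rake
namespace :failing_cucumber_specs do
  desc "Run all scenarios and record the failing ones in failing_scenarios.txt"
  task :record do
    system "cucumber --format rerun --out failing_scenarios.txt"
    # Always exit successfully here so the build moves on to the rerun step
    # even when some scenarios fail on the first pass (more on this below).
    exit 0
  end

  desc "Rerun only the scenarios recorded by failing_cucumber_specs:record"
  task :rerun do
    # If the file is empty (the first pass was green), Cucumber runs nothing and returns 0.
    # If the file is missing, this run fails, which is what we want.
    exit 1 unless system "cucumber @failing_scenarios.txt"
  end
end
```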
At first I was afraid that this would not work with parallel nodes: failing_scenarios.txt shouldn’t be shared between them. But every CircleCI node is an independent virtual machine with its own filesystem, so every node has a separate file.
Now you can type rake failing_cucumber_specs:record and rake failing_cucumber_specs:rerun.
It’s a good idea to add failing_scenarios.txt to the .gitignore file before committing the changes.
Usage with Knapsack
We use Knapsack (written by Artur Trzop), which splits tests among multiple nodes. Knapsack has its own adapter for Cucumber, so I had to modify the failing_cucumber_specs:record task. Here is a version for Knapsack:
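A sketch of that change, assuming Knapsack’s rake tasks are loaded in the Rakefile (Knapsack.load_tasks), its Cucumber adapter is bound in features/support, and the extra Cucumber options are passed through as a task argument:

```ruby
# Record task adjusted for Knapsack (replaces the plain Cucumber version above)
namespace :failing_cucumber_specs do
  desc "Run this node's share of scenarios via Knapsack and record the failing ones"
  task :record do
    # knapsack:cucumber runs only the scenarios assigned to the current CI node;
    # the argument string is handed over to Cucumber.
    system %(bundle exec rake "knapsack:cucumber[--format rerun --out failing_scenarios.txt]")
    # As before, never break the build at the recording stage.
    exit 0
  end
end
```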
I also updated the test section of circle.yml:
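Roughly, using the 1.0-style syntax where each command is marked to run on every parallel node:

```yaml
# circle.yml (only the relevant part)
test:
  override:
    - bundle exec rake failing_cucumber_specs:record:
        parallel: true
    - bundle exec rake failing_cucumber_specs:rerun:
        parallel: true
```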
Possible Problems
Exit 0 Is Not a Perfect Solution
If you look closely at the record task, you can see exit 0 after running Cucumber. We must return a successful exit code because we don’t want our build to be interrupted while recording failing scenarios. The problem with Cucumber is that it returns 1 when some scenarios fail, as well as when it fails itself for any reason. Imagine such a situation:
- Cucumber doesn’t run specs, creates an empty failing scenarios file and crashes.
- CircleCI doesn’t notice that, because we force exit 0.
- The second Cucumber execution runs specs from the empty file. No specs, so it returns 0.
- The build is green.
Fortunately, the first point seems very unlikely. Even if Cucumber fails for a reason other than red specs (which is unlikely itself), it doesn’t create an empty file, so the second Cucumber run fails. However, there was a feature request regarding Cucumber exit status codes. It’s implemented and merged into the master branch, so in future releases we will be able to determine whether scenarios failed (exit status 1) or the application returned an error (exit status 2).
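Once that version is out, the record task could swallow only the "failing scenarios" status and still let genuine Cucumber errors break the build. A hypothetical tweak:

```ruby
namespace :failing_cucumber_specs do
  task :record do
    # Hypothetical: relies on a Cucumber release that distinguishes
    # "scenarios failed" (1) from "Cucumber itself errored" (2).
    system "cucumber --format rerun --out failing_scenarios.txt"
    status = $?.exitstatus
    # Swallow only failing scenarios; propagate anything else.
    exit(status == 1 ? 0 : status)
  end
end
```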
Less Trust in Specs
Imagine some functionality which doesn’t work as expected from time to time, let’s say because of a race condition. The problem could be noticed when its test fails. Rerunning failing tests decreases the probability of detecting such an issue. I don’t think it’s a huge problem in our case. I’ve never encountered it in any project I’ve developed at our company, but I feel obliged to mention it.