There are few books I feel I can recommend to those starting off in test automation but this
book is one of them.
Most books begin with the tedious ‘how to install [the software]’ chapter, but by the end of Chapter 1
Matt has taken you on the journey from installing the relevant software to having a fully working
test. If you’re the type who wants to get down to business quickly (hello!) then this book is for you.
As well as giving clear examples of how to deal with each of the types of elements you’re likely to
encounter when automating a web UI with Capybara, Matt also explains the often-confusing rules
around finders, scoping and multiple matchers. If you want to better understand how Capybara deals
with finding elements then read this chapter!
A final thing: most tutorials you come across on the web only deal with the simplest use cases.
Instead of avoiding the more difficult scenarios, Matt deals with them head on in the ‘Ninja Topics’
chapter. If you’re struggling to use Capybara with tools other than Cucumber, this is the chapter
for you. If you want to get under Capybara’s hood, there’s lots of good info. Want to use
Capybara with a driver other than Selenium? Everything you need is here :)
```ruby
class SearchResults < SitePrism::Page
  element :view_more, "li", text: "View More"
end
```

```ruby
@results_page.<element_or_section_name> :text => "Welcome!"
@results_page.has_<element_or_section_name>? :count => 25
@results_page.has_no_<element_or_section_name>? :text => "Logout"
@results_page.wait_for_<element_or_section_name> :count => 25
@results_page.wait_until_<element_or_section_name>_visible :text => "Some ajaxy text appears!"
@results_page.wait_until_<element_or_section_name>_invisible :text => "Some ajaxy text disappears!"
```
Both changes supplied by tmertens, both changes being the last 2 gripes I hear from people about SitePrism :)
Thanks to LukasMac for his extensive testing of this version.
Running xpath queries shouldn’t be hard. It used to be that you’d have to install
plugins into whatever browser you were using. They were often clumsy and always buggy.
And, even though querying using CSS selectors has grown more popular in the
automated-acceptance-web-test world, there are still times when xpath is the only option.
It turns out that Chrome has built-in support for xpath queries in its dev tools: simply
run $x("your_xpath") in the console tab.
For example, here’s a screenshot of $x("your_xpath") in action on the http://www.google.co.uk page:
Which one more quickly and clearly transmits the intent of the test to the reader? I’d argue that the first does. The reader is not distracted with unnecessary details; instead they know they are creating a new generic Human object, creating an account with it and then verifying that the account has been created. The second one achieves the same thing but uses a lot more code – figuring out the intent of the test takes more time and effort; the result is less maintainable too.
It doesn’t take much work to use test data abstractions like Human, and what little work is required is paid back many, many, many times over. E.g. creating the above Human class is as simple as this:
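Here’s a minimal, stdlib-only sketch of such a class. The attribute names and default values are assumptions for illustration, and to_xml here builds its string by hand where real code might use an XML builder library:

```ruby
class Human
  # attr_accessors expose the test data so individual tests can change it
  attr_accessor :name, :age

  def initialize
    # sensible defaults created up front (values here are illustrative)
    @name = "Joe Bloggs"
    @age  = 30
  end

  def to_xml
    # a plain-string sketch of the XML representation
    "<Human>\n  <name>#{@name}</name>\n  <age>#{@age}</age>\n</Human>"
  end
end
```

A to_json method would follow exactly the same shape, just serialising to a different format.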
The initialize method creates sensible default test data attributes when an instance of the Human class is created. The test data attributes are exposed using attr_accessors so the test data object can be changed in the test. The to_xml method creates an XML representation of the human. This could just as well be a to_json method that spits out a json representation of the human*.
* For those who take abstraction seriously this is a fine place to use the Template or Strategy patterns to decide between json and xml output at runtime.
Being able to create objects containing default test data that can be changed in the test will lead to more expressive test code (have I said that already?). Here’s what I mean:
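A sketch of the idea, using a minimal stand-in for the Human class described above (AccountService is the hypothetical service from the earlier example, so it’s left commented out):

```ruby
# minimal stand-in for the Human test-data class described above
class Human
  attr_accessor :name, :age

  def initialize
    @name = "Joe Bloggs" # sensible default
    @age  = 30           # sensible default
  end
end

# in the test: keep the sensible defaults, override only what matters
@baby = Human.new
@baby.age = 1

# AccountService.create_account_for(@baby)  # reads nicely in the test
```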
The defaults, other than age are sensible, so we’ll leave them. The only one we need to change is age, so we override the default value with 1. Now that the @baby instance of Human has been created, when we see @baby in the test code it will read nicely:
But still, it would be nice to not have to change the age of @baby in the test – why can’t this happen automatically?
Well, by using the Factory pattern you can create specific instances of test data without cluttering up your test code. Factory classes are those that create instances of other classes, hiding any complicated setup. E.g.:
```ruby
class HumanFactory
  def self.standard
    Human.new
  end

  def self.baby
    human = Human.new
    human.age = 1
    human
  end

  def self.too_old
    human = Human.new
    human.age = 500
    human.name = "Methuselah"
    human
  end
end

# and to use the factory...
@standard  = HumanFactory.standard
@baby      = HumanFactory.baby
@geriatric = HumanFactory.too_old
Thus far we are able to dynamically create objects that represent test data.
There is one more important thing in this pattern – the separation between the data and the representation of the data. When we call to_xml, we get back a string containing an XML representation of the test data object. What’s this for? Well, in your tests you can use the output of the method to pass to services, etc – that’s what the AccountService.create_account_for(@baby) example is doing – the to_xml method would be called inside the create_account_for method.
An essential attribute of the to_xml method is that however many times it is called, unless the data changes it should always return the same thing. For this reason the following would be bad:
```ruby
require 'builder'
require 'active_support/time'

class Human
  attr_accessor :birthday

  def initialize
    # @birthday not set to sensible default :(
  end

  def to_xml
    b = Builder::XmlMarkup.new :indent => 2
    b.instruct! :xml, :version => "1.0", :encoding => "utf-8"
    b.Human do
      b.birthday Time.now # <-- this is bad!
    end
    b.target!
  end
end
```
The problem with the above is that in the to_xml method there is a call to something that will change every time it is called; Time.now. To illustrate the point here’s what happens when you create an instance of the above class and call to_xml on it lots of times:
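A stdlib-only sketch of the effect (the class above uses the builder gem; the anti-pattern is the same either way):

```ruby
# sketch of the anti-pattern: to_xml embeds Time.now, so the "same"
# unchanged object serialises differently on every call
class Human
  def to_xml
    "<Human>\n  <birthday>#{Time.now.to_f}</birthday>\n</Human>"
  end
end

human  = Human.new
first  = human.to_xml
sleep 0.01
second = human.to_xml

puts first == second  # false: two calls, two different XML strings
```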
In setting up a Fedora 19 VM for web testing with Chrome
I found that I needed to install a few packages before things started to work. After some trial-and-error, I came up
with the following command:
It should install everything you need to start coding in ruby (the above command assumes your code is stored in git and you’re
using vim) – I had particular trouble trying to get nokogiri to work. But the above command solved it :)
Today I found myself building a Fedora-based VM for running some tests that require Chrome. I needed
to be able to prevent the screensaver from starting, and after a bit of googling I figured out how
to do it. Here’s the command you need to run:
$ gsettings set org.gnome.desktop.session idle-delay 0
I tested this on Fedora 19 but I guess it should work on any Gnome 3 distro…
I very frequently find myself debugging http calls. Curl makes it easy to do this through its -v switch, which lets you see exactly what it’s doing. For example:
$ curl -v http://www.google.co.uk
* About to connect() to www.google.co.uk port 80 (#0)
*   Trying 184.108.40.206... connected
* Connected to www.google.co.uk (220.127.116.11) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.21.4 (universal-apple-darwin11.0) libcurl/7.21.4 OpenSSL/0.9.8x zlib/1.2.5
> Host: www.google.co.uk
> Accept: */*
Hostname-to-IP resolution, any SSL handshaking, and full header details are all on display. If you’re
doing a POST or PUT you get the body too. All very helpful.
But once I’ve figured out what I need my code to do, I need to translate my curl incantations into HTTParty – currently my favourite ruby http library. It is possible to get
similar details out of HTTParty, but it’s a little esoteric.
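The hook is HTTParty’s debug_output setting, which hands an IO object down to Net::HTTP’s set_debug_output and prints much the same wire-level detail as curl -v. A minimal sketch, with the stdlib form shown live and the HTTParty form (class name assumed for illustration) in comments:

```ruby
require 'net/http'
require 'stringio'

# HTTParty equivalent (illustrative class name):
#   class Google
#     include HTTParty
#     debug_output $stderr   # log every request/response to stderr
#   end
#
# Under the hood that boils down to Net::HTTP's verbose mode:
log  = StringIO.new
http = Net::HTTP.new('www.google.co.uk', 80)
http.set_debug_output(log)   # capture the wire-level log, like curl -v

# http.get('/')              # uncomment to make the request
# puts log.string            # request and response headers, as curl -v shows
```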