As a Test Analyst, I'm responsible for ensuring user experiences are consistent across a range of devices. Why is this important? Well, imagine the following scenario...
You're at home on your sofa with your iPad, sipping a cup of tea and browsing your favourite website. The next day you visit the same website on your 24-inch desktop monitor during your lunch hour at work. Then, commuting home from work, you check out your favourite website again on your shiny new smartphone.
Of course you expect the same high-quality, consistent experience across all these devices. As a tester, it's my job to ensure you get exactly this.
At Quba we build and evolve lots of enterprise websites each year. Almost all of them are responsive and handle significant volumes of traffic, so if the experience on mobiles or tablets isn't working, we know about it quickly. In this article I talk about how we test responsive websites and the tools we use to ensure consistency.
Tip 1: Devise a responsive test strategy that runs through the whole project
Deciding on a strategy that best tests the responsiveness of a website is an important step. Unless you have a significant testing budget, you will probably not be able to test every browser/operating system/device combination, so it becomes important to identify a core set of devices which broadly covers the browsing habits of your users. We have a device wall at Quba which we use to test our clients' websites on the handsets most frequently used by their customers.
To ensure we can spot issues on new devices, we also do some cross-browser testing using BrowserStack. This isn't as reliable as having the device in your hand, but it is useful.
Using Google Analytics, we establish which browsers and devices drive the majority of the site's traffic. Browsers and devices that bring in a smaller percentage of the traffic are not disregarded, but we can make sensible suggestions to clients about how much time and effort they should spend on optimising experiences for smaller sets of users.
As an example, we might suggest spending less time optimising the on-site experience for someone on an iPad 1 using Firefox than for someone on an iPad 3 using Safari. We spot these issues early by testing the front-end mark-up rather than waiting until the end of the project and testing the whole solution with the CMS back-end attached. A secondary benefit of this approach is that it reduces the time spent re-engineering back-end and front-end code that doesn't work well on tablets and phones.
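The prioritisation described above can be sketched in code. This is a minimal illustration only: the device/browser combinations and traffic percentages below are invented, and in practice the figures would come from a Google Analytics report.

```python
# Sketch: pick a core device set that covers most of the site's traffic.
# Device names and percentages are made up for illustration; real figures
# would come from a Google Analytics browsers/devices report.

traffic_share = {
    ("iPhone 6", "Safari"): 24.0,
    ("Windows desktop", "Chrome"): 22.0,
    ("iPad 3", "Safari"): 15.0,
    ("Android phone", "Chrome"): 14.0,
    ("Windows desktop", "IE 11"): 10.0,
    ("Mac desktop", "Safari"): 6.0,
    ("iPad 1", "Firefox"): 1.5,
    # ...long tail of rarer combinations
}

def core_device_set(share, coverage_target=90.0):
    """Return the smallest set of device/browser pairs (taken in order of
    traffic share) whose combined share meets the coverage target."""
    core, covered = [], 0.0
    for combo, pct in sorted(share.items(), key=lambda kv: -kv[1]):
        if covered >= coverage_target:
            break
        core.append(combo)
        covered += pct
    return core, covered

core, covered = core_device_set(traffic_share)
print(f"Test on {len(core)} combinations covering {covered:.1f}% of traffic")
```

With these made-up figures, the low-traffic iPad 1/Firefox combination falls outside the core set, matching the trade-off described above.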
We had a great example of this on a recent project.
While the design rendered well on most devices and browsers, there was clearly a problem in a few versions of Internet Explorer (IE). In IE the copy in the design looked misaligned, giving a skewed look.
Picking up the issue at the mark-up stage meant the developers could work out a solution easily. Interestingly, the issue we spotted wasn't to do with our code: IE had not standardised on World Wide Web Consortium (W3C) guidelines. We made a small change to the CSS which solved the issue quickly. If we hadn't spotted it at this stage, it could have been very tricky to sort out later on.
Tip 2: Be smart with your time
As a tester, there is a constant struggle to make the most of the time available. While this may seem straightforward, a number of maintenance-related tasks can eat into it. For example, ensuring that all devices are charged regularly and that the software on them is up to date may take only a short amount of time per device, but multiplied across seven or eight devices, that time soon adds up.
I have found that breaking browsers up into representational groups provides a shortcut: if there are only minor changes to the site, you can test just the primary and secondary groups of browsers. Keeping the devices in the device wall updated frequently and regularly also means you do not need to 'waste' time on this essential maintenance when you are hard pressed for testing time. To ensure this, we check our devices at least once a week.
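The grouping idea can be sketched as follows. The tier names and browser lists here are hypothetical examples, not an actual recommended matrix; the point is simply that the scope of a test run can be derived from the size of the change.

```python
# Sketch: representational browser groups, with test scope driven by
# the size of the change. Groupings below are illustrative only.

browser_groups = {
    "primary":   ["Chrome (desktop)", "Safari (iPhone)", "Safari (iPad)"],
    "secondary": ["Firefox (desktop)", "Chrome (Android)", "IE 11"],
    "tertiary":  ["Opera (desktop)", "older Android stock browser"],
}

def browsers_to_test(change_size, groups=browser_groups):
    """Minor changes get primary + secondary coverage only;
    major changes get the full matrix."""
    tiers = ["primary", "secondary"]
    if change_size == "major":
        tiers.append("tertiary")
    return [browser for tier in tiers for browser in groups[tier]]

print(browsers_to_test("minor"))  # primary and secondary groups only
```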
Tip 3: Be aware of subtle differences
The phrase responsive design implies a degree of 'change' in the design at different screen sizes. This might seem like an obvious statement, but it is worth making because testing multiple template designs at the same time can be a bit disorientating. To get around this, plan your testing so that you are not constantly flipping from desktops to tablets to phones.
There are, of course, occasions towards the end of testing when avoiding this becomes impossible, and at this stage it is important to fully understand the designer's intentions on different pages. For example, the desktop design may call for results to appear in four columns while on mobile there are just two. The search box on the desktop may be fixed at the top, while on mobile a search toggle that shows and hides the search option may be more appropriate, preserving the limited real estate of a mobile design.
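One way to keep track of the designer's intentions is to encode them as a simple oracle that tests can check rendered pages against. The breakpoint widths and layout rules below are invented for illustration; real values would come from the design specification.

```python
# Sketch: an oracle mapping viewport width to the layout the design
# calls for. Breakpoints and rules are hypothetical examples.

BREAKPOINTS = [
    # (min viewport width in px, result columns, search treatment)
    (1024, 4, "fixed search box"),
    (600,  3, "fixed search box"),
    (0,    2, "search toggle"),
]

def expected_layout(viewport_width):
    """Return (columns, search treatment) the design calls for at a width."""
    for min_width, columns, search in BREAKPOINTS:
        if viewport_width >= min_width:
            return columns, search
    raise ValueError("viewport width must be non-negative")

# The designer's intent at each screen size, in one place:
assert expected_layout(1280) == (4, "fixed search box")  # desktop
assert expected_layout(375) == (2, "search toggle")      # phone
```

A table like this gives testers and developers a single shared reference, so a mismatch found on a device can immediately be classified as a bug rather than debated as a possible design decision.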
There are also instances where designers rely on a hover effect for links on the desktop; this simply would not work on a mobile or tablet. Conversely, touchscreen devices support gestures that have no equivalent on a desktop website.
To me as a Test Analyst, this highlights the importance of considering testing within the project specification and requirements definition. If I don't have the information I need in the project documentation, I ask our team for more detail.
Failing to plan your testing efforts on a responsive design project is a recipe for disaster, so you should expect to talk to a Test Analyst like me during the planning stage of your project. If you would like to talk about testing in your project, get in touch by emailing firstname.lastname@example.org or giving us a call on 0114 279 7779.