Sizing up UX: Motivating for UX budget and scoping design time effectively
- 22 November, 2019
- Article - Software Best Practices
Sizing for user experience (UX) is a complex topic and poses challenges to business development and delivery management teams, as well as UX engineers (UXEs) themselves. We’ll look at some of those challenges, as well as methods to overcome them and scope projects effectively.
Challenge 01: Organisational UX maturity
Many of the challenges UXEs face when sizing UX tasks relate directly back to the UX maturity of their organisations. UX maturity is how far along an organisation is in its adoption of UX as a fundamental part of the software delivery process. While good UX is crucial to the success of any digital product, in many organisations work still needs to be done to fully integrate UX into the software development lifecycle (SDLC).
If a company is in the early stages of UX adoption, simply motivating for UX research and design to be part of a project’s scope is a challenge. To organisations with low UX maturity, UX may seem more academic than practical due to its research-intensive nature.
While the value of user research and user-centred design has proved itself over time among Fortune 500 companies, it's still seen as redundant by many stakeholders. These stakeholders often feel they know their users better than their users know themselves. This makes securing any budget for UX a bigger challenge than scoping it accurately.
Challenge 02: Giving definite sizes to iterative processes
The second challenge is giving definite timelines to tasks that form part of iterative design processes. Iterative design uses cyclical processes of research, prototyping and testing to optimise and refine design over time. Based on insights gleaned from user testing results, the UXE will often change and adapt their prototypes to either validate or challenge their design decisions. A prototype might go through three or four iterations, or it may take months of rework, ideation and testing to truly answer a user’s needs.
The solution: Prove the value of UX without going over budget
While challenges will always present themselves, the UXE needs to be strategic in their selection of research and design methods to optimise delivery and stay within budget. It’s a tricky balance that requires negotiation, paired with advocacy and education around UX to convince resistant stakeholders.
The power of open dialogue
It is essential to engage in candid discussions prior to the procurement phase and well before project kick-off. One way to do this is to include product owners and other decision-makers in design thinking workshops. These workshops, such as the now well-known Design Sprint, should be conducted prior to planning and budgeting. This is where design, business and technology representatives will work together to map out the business landscape and define mutual goals for the project, ensuring strategic alignment and understanding. If you don’t engage with your stakeholders, crucial research or testing exercises will be excluded from the project due to a lack of understanding.
If successfully advocated for, the UXE can make use of a strategic arsenal of research and design methods to output effective software, and in turn, prove the value of UX. This exercise may need to happen several times before it is institutionalised, but once an organisation sees direct business value, it will become easier to negotiate scope on future projects.
You’ve got the buy-in, now it’s time to size your UX tasks
Let’s assume you’ve successfully advocated for UX, motivated for budget and developed a solid research strategy. It’s now time to accurately size the various UX tasks you feel you’ll need to complete the project. Sticking to defined timelines is crucial to engender trust with your stakeholders. This is particularly true on fixed-price projects, where there is little room for error and you need to estimate with a high degree of confidence.
We can’t account for the sizing of all UX tasks in a single article as that would require a book’s worth of content, but we can look at some key tasks as examples. For the purposes of this article, we’ll look at the sizing of qualitative and quantitative UX research.
Qualitative research scoping example: Contextual inquiries
We'll start with a well-established qualitative research method, the contextual inquiry, precisely because its semi-structured interview format makes it difficult to estimate for. Contextual inquiries are intentionally less structured, as this allows users to go about their regular activities in their context of use. The UX researcher uses observation, as well as periodic, pointed questions to uncover key insights.
When sizing contextual inquiries or other forms of user research, there are two approaches you can use. The first is a dedicated time-box, in which you would conduct as many inquiries as you can within a given timeframe. The second is to decide on how many users you would like to observe and then pre-define how long you would need with each. Let's use the second example, as it is a bit more complex. We’ll work with a single user, then scale it to form part of a full study.
The time you require to observe a user depends on:
a. The complexity of the system they use
b. The complexity of the tasks they need to complete
c. Your access to the users themselves
Let's use a theoretical example of a task that takes a user 30 minutes to complete. You'll then need to double that time to account for introducing yourself and the work you’re intending to do, as well as getting consent to observe the user.
You now need to repeat this exercise with at least 5 participants to make your research effective.
With the added complexity of travel and setup, we can anticipate this task will take a full day.
Further to that is preparation time: Writing up consent forms, logistics and travel arrangements, as well as any background research that might be required. For this entire preparatory exercise, one might want to bank at least 2 days.
Following this is transcribing, collating and analysing the data, which may take as much time as the research itself.
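The estimate above can be tallied in a few lines. This is a rough sketch using the article’s illustrative figures (a 30-minute task doubled for introductions and consent, five participants, a full day of fieldwork, two days of preparation, and analysis taking as long as the research itself); none of these numbers are fixed rules.

```python
# Rough scoping sketch for the contextual-inquiry example above.
# All figures are illustrative assumptions from the worked example.

task_minutes = 30                    # time for the user to complete the task
session_minutes = task_minutes * 2   # doubled for intro, context and consent
participants = 5                     # minimum for effective qualitative research

observation_hours = session_minutes / 60 * participants  # pure session time

fieldwork_days = 1              # a full day once travel and setup are added
prep_days = 2                   # consent forms, logistics, background research
analysis_days = fieldwork_days  # transcribing and analysis ≈ the research itself

total_days = prep_days + fieldwork_days + analysis_days

print(f"Observation time: {observation_hours:.1f} hours")  # 5.0 hours
print(f"Estimated scope: {total_days} days")               # 4 days
```

Laying the arithmetic out like this also makes it easy to re-run the estimate when a stakeholder changes one input, such as the participant count.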
We can now see that qualitative research has many dynamics that affect its timing. It’s not much different from investigative journalism or in-store consumer research, so ensure you're accounting for all aspects of your research when scoping.
Quantitative research scoping: User surveys
Let’s now look at a passive research method, the user survey. Surveys can take many forms and can use both qualitative and quantitative methods to gain results. The advantage of a survey is the same as its disadvantage, in that the researcher need not be present.
Most of the time spent on a survey goes into setting it up beforehand and analysing the findings afterwards; running the survey itself is relatively quick. If done through an online form, the survey can be conducted without the involvement of the UXE, meaning passive data collection.
The sizing of a survey is dependent on the number of questions needed to account for the relative complexity of a system or user. If a user or the system they use is particularly nuanced, one might need to develop a variety of questions over a few days. The following explores what goes into creating and sizing a survey:
1. Defining the users you would need to survey to achieve your research aims
Meet with your product owner or an equivalent subject matter expert (SME) to determine who would be participating in the survey and what questions you will need answered. You will need to define the following:
a. Is there a single user group or many distinct groups requiring separate surveys?
b. Run a research gap analysis: What don't you know about your users?
c. What is the complexity of the system you are designing for?
The total time spent on your survey preparation can range from a single day to several days. Let’s assume the user base is somewhat homogenous and the system is relatively simple. In this case, you will only need a day or two of preparation.
2. Writing your survey and choosing the appropriate technology
You now have the necessary background information and stakeholder alignment on your research aims and therefore shouldn’t need to spend more than a day writing your survey. Use online survey tools such as Google Forms, which offers data visualisation and CSV exports to assist in your analysis (it’s also free).
3. Determining how long you should run your survey
Once you’ve emailed your survey out to your chosen participants, you’ll need at least 10% of them to respond. If the survey is internal, you can aim for a 30–40% response rate.
If you hit your response target, you can use the law of diminishing returns to decide whether to continue your survey or begin analysing your results. If sent using a well-marketed email, you’ll generally have a large uptake at the beginning of your survey. After a few weeks, your responses will plateau before they drop off entirely. You’ll want to close your survey and begin analysing your results once responses plateau, rather than waiting for them to stop entirely.
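One lightweight way to spot that plateau is to compare the recent daily intake against the early surge. The sketch below is a made-up heuristic with hypothetical daily counts, not a standard survey-tool feature; tune the window and threshold to your own data.

```python
# Hypothetical daily response counts after the survey email goes out:
# a strong early uptake that trails off into a trickle.
daily_responses = [60, 45, 30, 18, 9, 5, 3, 2, 2, 1]

def has_plateaued(counts, window=3, threshold=0.15):
    """Flag a plateau once the recent daily average falls below a
    fraction of the initial daily average (illustrative heuristic)."""
    if len(counts) < window * 2:
        return False  # not enough data to compare early vs recent
    initial = sum(counts[:window]) / window
    recent = sum(counts[-window:]) / window
    return recent < initial * threshold

print(has_plateaued(daily_responses))  # True: recent trickle vs early surge
```

When the function starts returning True, further fielding time buys very few extra responses, which is the signal to close the survey and move on to analysis.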
4. Analysing your survey data
Once you’ve gathered all your data, you’ll need to spend a fair bit of time combing through results and trying to find patterns. The time spent will be dependent on how disparate or diverse your results are. If the survey answers are one-sided, then you won’t have to spend much time on analysis. Let's extend our example of a relatively homogenous user base and assume the following:
a. We created a simple survey made up of 10 multiple-choice and yes/no questions
b. We included 2 or 3 qualitative questions that will need to be read and interpreted
c. We received 200 responses out of 2,000 recipients over a period of one week (a 10% response rate)
d. We found most answers have a distinct pattern with minor disparities
Based on the example above, we can assume our analysis and reporting won’t take more than 2 days. The total active time spent on the survey would therefore be around 4 days, while the total time required would be approximately two weeks.
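Pulling the worked example together, the active and elapsed totals can be sanity-checked with simple arithmetic. The figures below are the article’s illustrative assumptions for a simple system and a homogenous user base.

```python
# Totalling the survey scoping example above.
prep_days = 1        # defining users and research aims (the simple case)
writing_days = 1     # drafting the survey in an online tool
running_days = 7     # survey collected responses passively for one week
analysis_days = 2    # 200 mostly-patterned responses to analyse

active_days = prep_days + writing_days + analysis_days   # UXE working time
elapsed_days = prep_days + writing_days + running_days + analysis_days

print(f"Active UXE time: {active_days} days")        # ~4 days
print(f"Elapsed calendar time: {elapsed_days} days")  # ~two weeks
```

Separating active time from elapsed time matters when billing: the UXE is only booked for the active days, while the calendar must still reserve the full elapsed window.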
Estimating design time
Once research is complete, you will need to apply your findings. This means coming up with prototypes that truly represent and integrate your research. The scoping of prototyping is often defined by the research outcomes and is therefore a bit tricky on fixed-price projects. You often don’t know what you’ll be designing until you do the research.
If you’re in a situation where you need to define the scope of design before having conducted research, you will need to analyse work you’ve done on previous projects. For example, you may have designed a similar system in the past or you may have been a user of a system not much different from the one you’re required to design. With this type of knowledge, you can begin to estimate the following:
a. The number of screens that would make up the user-flow
b. The relative complexity of the system
c. The features that the user would need to achieve their goals
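Those three factors can be combined into a back-of-the-envelope estimate. Every figure below is a hypothetical placeholder, not guidance from this article; substitute numbers drawn from your own past projects.

```python
# Back-of-the-envelope design estimate from prior-project knowledge.
# All inputs are hypothetical placeholders.

screens = 12               # (a) estimated screens in the user flow
complexity = 1.5           # (b) relative complexity multiplier (1.0 = simple)
features = 8               # (c) features the user needs to achieve their goals

base_hours_per_screen = 4  # assumed effort for a straightforward screen
hours_per_feature = 6      # assumed design effort per feature

design_hours = (screens * base_hours_per_screen
                + features * hours_per_feature) * complexity
design_days = design_hours / 8

print(f"Estimated design effort: {design_days:.1f} days")  # 18.0 days
```

A crude model like this is still useful on fixed-price work: it forces you to state your assumptions explicitly, so that when the research changes one of them, the re-estimate is transparent to stakeholders.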
Implementation and measuring success
Once your prototypes have been designed, tested and refined, it’s time to hand over to the development team to implement the design vision. It’s crucial you put measurements in place to track the success of your system, as this will be fundamental in negotiating scope on future projects. Success metrics will also assist in UX adoption more broadly at your organisation. We can therefore add a few more steps, focused on measurement, to our iterative design thinking model.
It’s certainly a complex task going from advocacy for UX, to the negotiation of budgets and resources, and finally executing your design strategy. Keep the following in mind when unpacking this process:
- Align all stakeholders from the start, under a user-centred goal
- Estimate research and design time as accurately as possible
- Define measurements of success to prove value over time
In the end it’s all about advocating for the user
Sizing for UX and negotiating budgets all relates back to the UXE’s central purpose of user advocacy. It’s the UXE’s job to give the user a voice and bring them into business and technology discussions when planning a delivery schedule. This means that the UXE needs to push back if there is a real chance of under-delivery, due to lack of user research and testing. If you fail to provide a good user experience, the cost saved on research methods will be outweighed by lack of user adoption and failed user experiences. As the saying goes, ‘there is never enough time to do it right, but there is always enough time to do it over.’