We start with the user: the child we are trying to reach, the caregiver we are trying to help, and the health workers who serve them. From there we define the intended program outcome as a measurable goal and focus on the biggest obstacles we will set out to address, obstacles that must be researched and proven during our user research. The final objective statement focuses our work through all subsequent activities.
Clearly delineate exactly which community we are concerned with.
Specify the change in immunization outcomes that the team is capable of influencing.
Explain how the group is or is not engaging with services.
This tool helps describe how a group of people is or is not using the services provided to them.
This tool helps separate out each element of the objective formula to arrive at a final objective statement.
This phase is about downloading the local knowledge that already exists and determining what we still don’t know. What might we be assuming? What might we suppose we know more about than we really do? What ‘best practices’ could be called into question?
Gather available information about the challenge, past efforts and the individual or community in question. Mark key pieces of information that show what we have learned, what we should keep in mind, and the relevance this information has to the present.
To help avoid bias, document the possible assumptions that you and your team might carry with you. Talk through assumptions, expectations, closely-held beliefs, perspectives, hypotheses, and contradictions.
Use this tool to document existing assumptions about the challenge, past efforts, and the user-group in question.
A short-list of general assumptions that span contexts and communities.
Use the Journey to Immunization model to think about what areas need the most attention and what we can learn at each stage.
Using the “Journey to Immunization” as a tool, clarify what you hope to get out of the research. These learning goals will help you to choose the research methods to use during Question 3.
During research, each step will yield distinct outputs — your “Field Notes.” This tool gives you a framework in which to capture them.
What prevents users from using services? What do they do now and what do we want them to do? To find out, we conduct user research. The result is a set of specific challenges to solve.
Collect information in the field. Choose which activities, including observations and interviews, should be used for research. Record what is seen, heard, felt, and said.
These tools share activities, including both observations (what we see) and interviews (what others say), to plan your field research.
After each day of field research, quickly synthesize and record the information you have gathered.
Share information from the field. Prioritize the most important information by identifying patterns, surprises, and commonalities. Analyze these findings to hypothesize why this is happening. Translate hypotheses to final diagnoses, which may require returning to the field to gather more information.
This tool gives sample activities for empathetically conveying what you’ve seen and heard in the field to your team members.
This tool shares a process for turning key pieces of information into final diagnoses.
This tool shares a process to isolate the most important pieces of information.
Use these examples of recurring challenges to prompt new thinking about why the problems we witness in the field persist.
Translate diagnoses of the root causes of the challenge into creative prompts.
Create a persona profile for your prioritized user group and each additional person who has a role in your identified challenge.
This tool organizes the different pieces of the system to show how they connect to and communicate with one another.
Use this tool to create prompts that respond to your challenges and guide the generation of creative solutions.
Given what we know about users, how can we shape their environments and influence their behaviors to achieve our objective? This is a creative and collaborative process: generating ideas and testing them out.
With an extended team, quickly generate many possible solutions for each prompt. Assess the solutions to identify 2-3 promising ideas per prompt.
Generate a large quantity of possible solutions to each of the prompts drawn from your Field Notes.
After brainstorming, use this chart to organize ideas for each prompt.
Make ideas concrete through initial outlines, models or rough sketches of ways to implement promising concepts.
For each of the candidate ideas that made it through your Assess Concepts step, make the idea real by visualizing, building a model or storyboarding a sequence.
Define learning goals for each design, then select activities that will test (prototype) the design in the field. Take draft ideas into the field to trial them with users and gather feedback.
For each solution you are taking into the field, use this worksheet to develop a prototype plan in preparation for gathering in-field feedback.
Use these three dimensions, each focused on an idea’s potential, to evaluate the prototyped solution’s likely future success. Use this page to assess each idea after prototyping.
Good ideas are not only innovative, but also effective. This last phase is about continuous inquiry — measuring how the ideas respond to the challenges identified during user research and making adjustments to improve their efficacy. Implementation begins with defining performance indicators and continues as an exercise in ongoing user research.
Devise an Adaptation Plan for each draft initiative. Define the key evaluative questions, possible risks, measurable criteria and corresponding indicators to track progress over time. We will return to the Adaptation Plan after each phase of implementation and make adjustments.
This tool outlines what we hope to learn by selectively deciding what to measure and track for the purposes of adapting an idea over time.
Assess each revised idea in the field using the Adaptation Plan as a guide. Evaluate the accuracy of diagnoses and determine what we still don’t know much about.
Revisit the initial Adaptation Plan to reflect what we’re learning, adjust what we’re measuring, and continue to improve the execution of our ideas. Implement adaptive changes that respond to findings as you scale the improved idea.