About the project
Have you ever wished someone would read a recipe out to you while you just follow the instructions in the kitchen, without having to go back and forth to a screen with messy hands? This is the story of designing Head Chef, an Amazon Alexa skill that provides quick recipes with step-by-step instructions. This voice skill allows users to choose a recipe and navigate through the steps to prepare their meal.
My Role
This was a solo project completed between April - June 2022. I was the UX Researcher and UX/UI Designer for the voice interface.
Tools
Alexa Developer Console, Draw.io
The Problem
Handling screens while cooking
Making a meal is difficult in today’s busy world. Consider Jennifer.
She is a 38-year-old woman who juggles work and hobbies while managing a household with two young kids. She finds it difficult to find quick meal options that are nutritious. When she does, her hands are tied up gathering and mixing ingredients, and she cannot scroll to the next part of the recipe without dirtying her phone. Just like Jennifer, anyone who cooks hates going back and forth, scrolling through recipes and getting their devices dirty.
“How can we make cooking a meal a quick, easy, hands-free task?”
The Challenge
Smooth Navigation
The challenge here was to allow a smooth interaction between the users and the system and make every exchange worthwhile. Considering the speaking style of our persona, I mapped out all possible utterances for the core intents and made sure to handle errors and unrecognized speech elegantly.
Current Solutions and opportunities
I chose the Allrecipes and recipeBuddy skills as competitors since these are Amazon Alexa skills too. Their objectives are similar to mine, but both have several shortcomings in terms of usability and cognitive load.
SWOT Analysis for AllRecipes Skill
The Approach
What happens in the kitchen...
I interviewed users to uncover how they currently search for recipes and follow the instructions. My aim was to deeply understand their cooking habits and style, along with any concerns and challenges. The findings from this research helped me make informed decisions during the design phase and script writing.
Target Market: My target user base was people aged 25-50, leading busy lives. Most of them have families with young children and therefore need to prepare numerous meals a day, including snacks. They may not have a lot of time on hand to cook an elaborate meal.
Research Goals:
1. Learn about the users’ experience with voice assistants or voice apps.
2. Learn the frequency of cooking meals in the user’s household.
3. Understand how the user searches for recipes.
4. Understand the importance of dietary preferences and favorite cuisines.
The Discovery
Prefer home-cooked meals
Users preferred home-cooked meals, and all of them cooked at home 4-5 days a week. They were also conscious about healthy eating and always looked for nutritious options.
Utilize available ingredients
Most users do not like to waste food and would like to use the ingredients on hand to quickly prepare something healthy to eat. Hence, they would like to search for recipes based on their pantry.
Quick quicker quickest
Everyone is always looking for the easiest, simplest recipe with the fewest ingredients. Most users are busy and do not have time to cook elaborate meals, or need to fix something quick for their hungry kiddos.
System doesn't understand
80% of the users use some kind of voice assistant or app daily. At times they get frustrated with a repeating loop of system responses that doesn't guide them on to the next task, and they have to abandon the interaction completely.
The Vision
A cooking buddy in the form of an Alexa skill
A voice recipe app in the form of an Amazon Alexa skill that allows users to gather ingredients and follow step-by-step instructions for the recipe of their choice, at their own pace.
Users want their cooking buddy to be helpful, reliable and friendly. I developed a system persona that aligns with these expectations. Betty is a 35-year-old British lady who is efficient, cooperative and friendly, and her informative speaking style provides the user with appropriate meal-prep guidance.
The Framework
Applying UX principles to voice interactions
User stories
To better understand what the voice experience needs to cover, I started by identifying user needs and wrote a few user stories keeping the functional requirements in mind. These stories served as a good foundation for making informed decisions in the design process.
Sample dialogues
I chose the most important user stories and wrote sample dialogues for those tasks to lay out what a user-system interaction would look like.
User flows
To clarify how the system would work, I mapped out the logic of the skill in a user flow that showed how all the intents were related and how errors would be handled. Creating the flow also allowed me to expose any new cases I would have to account for when writing the voice script. View the full flow here.
Voice scripts
Once I had the logic of the system mapped out, I wrote the complete script to lay out all the utterances, prompts and responses for the skill. I defined all the skill variables and slots here for each intent. View the full script here.
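The intents, utterances and slots defined in the script eventually live in the skill's JSON interaction model in the Alexa Developer Console. Below is a minimal sketch of what a fragment of that model could look like, built as a Python dict; the slot names and the MEAL_TIME custom slot type are illustrative assumptions, not the skill's actual model.

```python
import json

# Hypothetical fragment of the Head Chef interaction model.
# AMAZON.Food is a built-in slot type; MEAL_TIME is an assumed custom type.
interaction_model = {
    "interactionModel": {
        "languageModel": {
            "invocationName": "head chef",
            "intents": [
                {
                    "name": "ChooseTypeIntent",
                    "slots": [
                        {"name": "cuisine", "type": "AMAZON.Food"},
                        {"name": "mealTime", "type": "MEAL_TIME"},
                    ],
                    "samples": [
                        "suggest a quick {mealTime} recipe",
                        "what can I make with {cuisine}",
                    ],
                },
                {
                    "name": "NextStepIntent",
                    "slots": [],
                    "samples": ["next", "next step"],
                },
            ],
        }
    }
}

print(json.dumps(interaction_model, indent=2))
```

Each intent groups its sample utterances, and slots mark the variable parts of an utterance that the system needs to capture.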
Adding context
Acknowledging returning users
I designed the dialogues so that the system applies context to the interactions. By making certain inferences, the system can give smart replies that reduce the number of interactions needed for the user to reach their goals.
I made the experience more personal by recognizing returning users and writing their script in a way that allows them to move forward quickly. Remembering user preferences makes the experience smooth and enjoyable.
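Conceptually, the returning-user logic can be sketched in plain Python. The real skill would use Alexa's persistent attributes for this; the store, function and prompts below are hypothetical stand-ins.

```python
# user_store stands in for Alexa's persistent attributes;
# the greeting text is illustrative, not the skill's actual script.
def launch_greeting(user_store, user_id):
    prefs = user_store.get(user_id)
    if prefs:
        # Returning user: skip the intro and build on saved preferences.
        return f"Welcome back! Another {prefs['cuisine']} recipe today?"
    # New user: give the full introduction and start a preference record.
    user_store[user_id] = {}
    return "Welcome to Head Chef! What would you like to cook?"

store = {"user-1": {"cuisine": "Indian"}}
print(launch_greeting(store, "user-1"))
```

The saved preference lets a returning user jump straight to a suggestion instead of sitting through the onboarding prompt again.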
The Refinement
Wizard of Oz testing
Since this was the initial stage of the skill development process, I decided to use the Wizard of Oz testing method for the usability study. It was apt for quickly testing the basic functionality of my skill without investing more time and resources. The testing was done remotely, using Zoom for moderation. My video was kept off so that the participant would get the feel of talking to a voice system, not a person. I was the moderator and pretended to be the system (the wizard), reading out the responses and prompts per the user's utterances. The participants were given three scenario tasks to perform and were later asked a Single Ease Question (SEQ) to see how easy or difficult they found each task.
Test Objectives: I tested the core intents based on the most important user flows.
1. Learn whether users are able to launch the skill.
2. Learn steps taken by the user to find the recipe they need.
3. Test the interactions for getting the instructions for their chosen recipe.
4. Learn whether users are able to move forward in a recipe at their own pace.

Success metrics to measure the outcome of the scenario-based tasks:
1. Task completion.
2. Post task confidence rating via SEQ.
Insights
All participants were able to complete all three tasks, with different levels of ease. The main errors were related to user utterances and system responses, which were then fixed in the script.
Issue 1: User said 'continue' to go to the next step and system did not understand.
Modification: 
The only utterances saved in the skill for moving on to the next step were ‘next’ and ‘next step’. I updated the script for the NextStepIntent to also support ‘continue’, ‘done’ and ‘ok done’.
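The step-navigation logic behind NextStepIntent is simple to picture. Here is a minimal plain-Python sketch, with made-up steps, of how the intent handler could advance through a recipe at the user's pace; it is not the actual skill code.

```python
# Illustrative recipe steps (not from the real Head Chef script).
STEPS = [
    "Rinse 2 cups of rice.",
    "Heat oil in a pan.",
    "Add the rice and lemon juice.",
]

def handle_next_step(session):
    """Return the next instruction, or a closing prompt when done."""
    i = session.get("step", 0)
    if i >= len(STEPS):
        return "That was the last step. Enjoy your meal!"
    session["step"] = i + 1  # remember progress between turns
    return STEPS[i]

session = {}
print(handle_next_step(session))  # first step
print(handle_next_step(session))  # second step
```

Because the position is stored in the session, the user controls the pacing: the skill only moves on when they say one of the "next" utterances.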
Issue 2: User could not start cooking with "Let's make it" utterance.
Modification: 
Added the 'let's make it' utterance to the InstructionsIntent.
Issue 3: Duration of recipe not available.
Modification: 
I added the recipe duration to the ChooseTypeIntent, where the system provides recipe suggestions to the user.
Previous prompt:
Sure, here are two options - [Delicious lemon rice] and [spicy mushroom rice]. Would you like to hear more?
New prompt:
Sure, here are two options - [Delicious lemon rice], cooking time 10 mins, and [spicy mushroom rice], cooking time 15 mins. Would you like to hear more?
Issue 4: Number of servings and scaling the recipe.
Modification: 
New utterances and responses have been added to the InstructionsIntent in the Gather Ingredients section. Also, a new slot needs to be added to capture the number of servings from the user.
New utterances:
How many servings will this make?
I need to cook for 5 people
New responses:
Alright, let's gather the ingredients for delicious lemon rice for 5 servings. The first one is 2 cups rice.
This will make 2 servings.
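The scaling behind the servings feature is simple arithmetic: multiply each quantity by the ratio of requested to base servings. A hypothetical sketch, assuming an illustrative base serving count and ingredient list (the numbers are not from the actual recipe; the servings value would come from a numeric slot such as AMAZON.NUMBER):

```python
# Illustrative base recipe: quantities below are assumptions for the sketch.
BASE_SERVINGS = 2
INGREDIENTS = {"rice (cups)": 2, "lemon juice (tbsp)": 1.5}

def scale_ingredients(requested_servings):
    """Scale every quantity by requested/base servings."""
    factor = requested_servings / BASE_SERVINGS
    return {name: qty * factor for name, qty in INGREDIENTS.items()}

print(scale_ingredients(5))
# {'rice (cups)': 5.0, 'lemon juice (tbsp)': 3.75}
```

The skill can then read the scaled quantities back one at a time in the Gather Ingredients flow.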
The Solution
Head chef - an Amazon Alexa recipe skill
Based on the research and insights gained, I designed the flow and script for the voice interface that allows users to cook quick meals in their kitchen, without getting their devices dirty. 
Features:
1. Choose your own recipe based on time of the day, ingredients and cuisine.
2. Get step by step instructions to gather ingredients and cook the recipe.
3. Get help and support to navigate the voice skill.​​​​​​​

Next steps
Iterative design
Design is an iterative process, and I will be making incremental improvements to the prototype. This will involve writing out the script for additional intents, namely GlobalIntents and SaveRecipeIntent. The next round of testing will be done after working with developers to incorporate the changes.
Reflections
My learnings
1. I learnt how to apply UX design principles to voice, taking into account the complexities that come with a voice interface: context, safety and privacy.
2. Write more sample dialogues for all kinds of intents and do a table read of each one. It is a simple method but can give so much insight into what can be improved.
