With just a handful of examples and minimal instruction, the GPT-3 language model we use in Viable generates summaries from customer feedback datasets.
Like any other model, it needs something to learn from. At Viable, we have spent many hours training our GPT-3 model to provide useful answers from qualitative customer support data. Along the way, we learned a few tricks for getting the most useful answers.
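Viable's actual prompts are internal, but a minimal sketch of the few-shot technique described above — an instruction, a couple of worked examples, then the new input — might look like this (the feedback snippets and summaries below are hypothetical, purely for illustration):

```python
# Sketch of few-shot prompt assembly for a completion-style model.
# The instruction and examples here are made up, not Viable's actual prompt.

EXAMPLES = [
    ("The app crashes when I upload a photo.",
     "Customers report crashes during photo upload."),
    ("I wish I could pay with PayPal.",
     "Customers want more payment options."),
]

def build_prompt(feedback: str) -> str:
    """Assemble a few-shot prompt: instruction, worked examples, then the new input."""
    parts = ["Summarize the customer feedback below.\n"]
    for raw, summary in EXAMPLES:
        parts.append(f"Feedback: {raw}\nSummary: {summary}\n")
    # The model is asked to continue the text after the final "Summary:".
    parts.append(f"Feedback: {feedback}\nSummary:")
    return "\n".join(parts)

prompt = build_prompt("Logging in takes forever on mobile.")
print(prompt)
```

The worked examples show the model the pattern to follow, so it completes the final "Summary:" line in the same style without any weight updates.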
For a broader range of examples, check out our guide on what questions you should ask in Viable.
Start with a broad question such as:
How can we improve our products?
The response will reveal specific areas that you can probe further. For example, the answer to the question above might be:
We should improve the sign in process for customers, make the checkout process faster, and enable customers to add more than one credit card.
Let’s say you already knew about the checkout issues and the requests for more payment options, and your team is addressing both in the next release. But perhaps you didn’t know users were also having trouble signing into their accounts. You can investigate this further by asking a more specific question:
How can we improve the sign in process?
This would then provide a more insightful and detailed answer such as:
We should improve the sign in process for customers by adding more login links across more pages, making the page load faster, and giving them the option of a single sign on.
Starting broad with questions about potential challenges (or opportunities) will lead you to specific areas to dig into for deeper insights.
Additional examples of specific questions across a variety of products might include:
What features would users like to see for keyboard shortcuts?
What do our customers find frustrating about payments?
How can we improve the alerting function?
Why are deposits challenging for our users?
Sample question flow
Question: How can we improve our email product?
Answer: We can improve our email product by making it easier to use, adding more features, and improving the product’s reliability.
Follow up question: What features do customers want us to add to our email product?
Answer: Customers want the ability to download attachments, unlimited access to calendars, and more options for shortcuts.
Follow up question No. 2: How can we improve attachments?
Answer: We can improve attachments by building a one-click link to files and supporting more formats.
As you ask questions about specific topics, the model will pull from the relevant data points to answer them, as long as customer feedback data on those topics is available.
GPT-3 is best at answering questions that start with What, Why or How. For example:
What is confusing about our onboarding?
What do customers find frustrating about checkout?
How can we improve keyboard shortcuts?
How can we make the calendar feature better?
What is difficult about adding attachments to messages?
These types of questions will generate responses that are more accurate, consistent, and specific than comparison questions such as “Which is better, X or Y?”
Avoid yes-or-no questions. Much like in user research, asking yes-or-no questions won’t yield much insight in Viable. The model will answer, but it won’t give you much more than a yes or a no. Examples of yes-or-no questions include:
Do customers like our tutorials?
Are customers using our support documents?
Do customers cancel their subscriptions?
Are users getting stuck at login?
Avoid using the question box as a search bar. Treating the question box like a search bar is less likely to yield good answers. The model is smart enough to interpret single words and will do its best to provide an appropriate answer; however, it crafts better answers from full questions than from single-word terms, since that’s how it was trained.
Don’t stay too high level in your questions. We’ve seen that asking questions that are too broad and never digging deeper into specific areas of your product won’t generate much insight. The model is good at identifying specific topics in the customer feedback and surfacing them, so take advantage of that.
The best way to learn what questions work best is to try a few different questions or variations of the same question. You’ll notice that even asking the same exact question multiple times will yield slightly different results. The same core concepts will be included in the answer summaries but the exact language will vary. That’s because the model is not deterministic.
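The variation comes from how language models pick each word: they sample from a probability distribution over candidate tokens rather than always choosing the single most likely one. Here is a toy illustration with made-up token scores (not actual GPT-3 internals) showing how a sampling "temperature" controls that variation:

```python
# Toy illustration of temperature sampling: the scores below are invented,
# not real model outputs, but the mechanism is the same in spirit.
import math
import random

def sample_token(scores, temperature, rng):
    """Softmax the scores at the given temperature, then sample one token."""
    tokens = list(scores.keys())
    weights = [math.exp(s / temperature) for s in scores.values()]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Hypothetical next-word candidates with relative scores.
scores = {"improve": 2.0, "enhance": 1.6, "fix": 1.2}
rng = random.Random(0)

# At a moderate temperature, repeated samples vary in wording.
varied = {sample_token(scores, temperature=1.0, rng=rng) for _ in range(50)}

# As temperature approaches zero, the top-scoring token dominates,
# so the output becomes effectively deterministic.
greedy = {sample_token(scores, temperature=0.01, rng=rng) for _ in range(50)}
```

This is why the core concepts stay stable across repeated questions while the exact phrasing shifts: the highest-probability ideas keep winning, but individual word choices are drawn with some randomness.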
If an answer looks particularly useful, give it a thumbs up. If an answer seems a bit off, give it a thumbs down. This feedback helps the AI learn what makes a good answer and what doesn’t.
So go ahead and ask away!