Ad Experience
UX Research
CONTEXT
During a design research class taken in Spring 2019, my teammates and I worked at a high level with a confidential search engine company to define what “good” ad experiences consisted of for users and what factors influenced those experiences.
The project was completed within a span of 8 weeks and involved design research methods such as diary studies, interviews, and reaction cards.
Thanks to Maggie Chen, Prasad Gaikwad, and Nathan Khuu for making the JMPN on Ads team so fun!
RESEARCH GOALS
The focus of our project was to answer the following questions:
What makes up a “good” ad experience in web, mobile, and app environments?
How do good ad experiences affect user attitudes and behaviors?
What components of an ad (content, creative, and contexts) define a “good” ad experience?
As some foundational research on bad ad experiences had already been published online by Think With Google and the Coalition for Better Ads, we decided to center our attention on defining “good” ad experiences - based on whatever definitions of “good” our participants had. However, in order to narrow the scope of our project, we concentrated only on online shopping ads and conducted interviews with frequent online shoppers (3+ times a week) over the age of 18.
Since product shopping ads are largely representative of the ads that most people encounter online, we hoped to capture measurable data about user behaviors in order to inform future directions of strategies and products.
DESIGN METHODS
Because much of the research on ads was foundational and exploratory, we decided to focus on three design methods.
Diary Studies
Because diary studies allowed us to observe participants in their natural environment without being physically present, we used them as a starting point to gain a broad overview of how people viewed and interacted with ads.
Length: 5 days
Procedure: Through Google Forms, participants uploaded screenshots of ads that they considered “good” ad experiences and answered questions based on their ad.
Outcome: From the 18 participants selected through a screener, we received roughly 70 responses in total and proceeded to analyze the data further. Three themes emerged: visual appeal, how straightforward and concise the ad content was, and relevancy to the user.
Challenges: Out of our 18 participants, 3 dropped out, and several others had missing responses. Our participant pool was also not very diverse: the majority were Asian, technology-savvy college students.
“The caption was very catchy, and the demonstration was also short and sweet.”
Interviews
After gathering and analyzing the diary response data, we followed up with 6 of the 15 remaining participants whose interesting or thoughtful responses we wanted to explore more deeply. We chose interviews as one of our research methods because they are an information-rich way to gain a deeper understanding of participants’ experiences.
Length: 30 minutes to 1 hour
Procedure: Our team finalized a set of interview questions and asked the participants follow-up questions about their experience with the diary study activity and general ads that they commonly saw.
Outcome: Through multiple interviews, we gathered a good amount of rich, qualitative data. One insight was that opinions on ad creative varied: while some people liked bright colors and visuals, others preferred native ads that weren’t distracting. We also gained a deeper understanding of the relationship between relevancy and privacy.
Challenges: Because our sample size (n=6) was small, it may have been more beneficial to tailor each interview script to that participant’s diary study responses. Moreover, conducting the reaction card activity before the interviews may have helped establish a common foundation for each interview.
Reaction Cards
After conducting interviews with our participants, we immediately followed up with a reaction card activity via Trello that allowed them to specify their own definitions of what a “good” ad experience was.
Length: ~10 minutes
Procedure: A list of positive and negative reaction words was created, some taken from a word cloud generated from our diary study activity. Participants were asked to think about their current and ideal experiences with online shopping ads and, for each, sort the list of 26 reaction words into empty “Yes” and “No” lists. They then ranked the top 3 words in each list and explained why they chose each word.
Outcome: These were the top reaction words for each experience.
Current ad experience: Familiar, Busy, Relevant, Simple
Ideal ad experience: Ethical, Trustworthy, Eye-catching
Challenges: Because the reaction card activity was conducted digitally to accommodate remote participants, it was slightly difficult for them to rank their top three choices, as they had to scroll through each list.
KEY FINDINGS
After completing our research methods, our team gathered to distill insights and key findings, some of which were:
There was a fine line between relevancy and privacy: while personalized ad content was useful and (usually) welcomed, participants were extremely uncomfortable when ads reflected their exact tracking history.
Participants took a company’s ethical and honest practices and values into account, and these perceptions affected their behavior.
While participants had different opinions about ad creative, most preferred concise, to-the-point ad content.
REFLECTION
As this was my first project focused solely on UX research, it gave me a deeper understanding of various design methods. Because my previous experience with design methodologies was mostly confined to interviews and usability tests, I had considered them to be the best methods in every scenario. Through this class and project, I’ve become familiar with which research methods are most appropriate for the data that needs to be collected. Moreover, working with various stakeholders was an extremely valuable experience that allowed me to practice developing goals and insights cross-functionally; in an industry setting, it’s crucial to balance business needs and present actionable insights.
One thing I learned through this process was how long it can take to narrow down a topic or audience, especially since our project wasn’t tied to a physical product or app. Because our topic was so broad at the beginning, it was difficult to settle on a specific set of ads to focus on, and I’ve come to appreciate the importance of defining research goals quickly. Recruiting a diverse set of participants beyond college students was also difficult.