
NIQ: DEFINING DATA
REFINING & EVOLVING A TEDIOUS TASK

Web-based application
OVERVIEW
PROBLEM SPACE
Analytical data comes from hundreds of sources, producing thousands of attributes.
The challenge: Even experienced users find locating the right attributes difficult and time-consuming. For novice users, it's overwhelming—sometimes even a blocker.
PRODUCT GOALS
Evolve the data selection capabilities to match the functionality of the old platform so that all customers can migrate to the new platform, consolidating costs onto one platform, while refining the experience to make it simple enough for newer (less frequent) users to feel comfortable learning how to define reports.
MY ROLE
Lead Product Designer
1 of 2 Designers
KEY RESPONSIBILITIES
Mentoring designers
Defining & communicating the long-term vision
Leading research
Visualizing problem spaces
Crafting UI solutions for product teams to consider for their roadmap
Wireframing, prototyping, concept testing, handoff
OUTCOME
20% reduction in time to insight
After 12 months of refinement, we reduced data selection time by 8 minutes per report.
UNDERSTANDING
With data selection being a critical step in the journey for over 100K monthly active users, we used several routes to learn about the space and uncover helpful insights, including:
- Led and conducted multiple rounds of qualitative research sessions with our external and internal users
- Collaborated with our product teams to gather quantitative research from our online feedback tool
- Worked with other team members to collect and analyze available analytics from our usage tool
- Met with multiple stakeholders to learn about the intricacies of the product and get their perspectives on the biggest problems and their vision
RESEARCH OUTCOMES
We synthesized findings into a comprehensive document highlighting major user pain points, which directly shaped the product roadmap and identified priority focus areas.

IDEATION

As part of navigating the ambiguity of this space, I organized and ran a cross-functional ideation session with members from our tech teams, product teams, and fellow design team members to explore "how might we" improve the data selection task.
WHAT DID WE ACHIEVE?
By including those who might not normally be involved in the ideation steps, we gained fresh ideas and perspectives, and we achieved better alignment across the teams as members gravitated toward shared concepts.
Not only were we able to merge some ideas into stronger concepts, we also learned about some of the constraints in the system through ideas that pushed the boundaries of what is possible.
"How might we" collaborative Ideation board
ALIGNMENT
WHERE DID WE FOCUS?
With such a generalized problem space and many directions to pursue, it was critical that we identify and align on the areas offering the most value at a level of effort that could fit into the next 6-12 months. Based on value and effort, below are the top opportunities we targeted:
- Natural language data selection - Allow users to simply tell the system what insight they were looking for in a single search query
- Contextual filtering - Show shorter, more relevant lists based on data type
- Control complexity - Let users manage display density and prioritize frequently used options
- Inline searching - Provide easy ways for users to find the "one" thing amongst the many
1
NATURAL LANGUAGE DATA SELECTION
Providing users the ability to simply enter, in natural language, what insights they are looking for was part of the original vision for the product, but it was considered too much effort until the teams could lean on AI-based learning models.
TRADE-OFFS
The data summary component was central to Discover's design. We intended natural language search (NLS) to enhance—and eventually replace—this view. Technical limitations forced us to launch NLS as a supplemental view instead, creating undesired visual complexity. The initial release treats NLS as an additive feature that will replace the original once adoption increases.

2
CONTEXTUAL FILTERING
With hundreds of data sources feeding together, there are hundreds of attributes to select from when defining or refining a report. But not all of those attributes are needed or even relevant. We learned from our research that users typically rely on the same 8-10 attributes, so we wanted to offer a more contextual feature to display these options.
This could extend to many different scenarios, including our natural language search feature, where users can quickly switch between attributes that the system identifies from the search string.
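As a rough illustration of the behavior described above (all names and usage counts here are hypothetical; the actual feature is not implemented this way in the case study), contextual filtering amounts to narrowing the catalog to the attributes relevant to the selected data type and surfacing the handful users pick most often:

```python
# Illustrative sketch with made-up attribute names and usage counts.
USAGE_COUNTS = {"Brand": 120, "Category": 95, "Retailer": 40, "Pack Size": 12}

def contextual_options(all_attributes, data_type, limit=10):
    """Return only attributes relevant to the selected data type,
    ordered by how often users actually pick them."""
    relevant = [a for a in all_attributes if data_type in a["applies_to"]]
    relevant.sort(key=lambda a: USAGE_COUNTS.get(a["name"], 0), reverse=True)
    return [a["name"] for a in relevant[:limit]]

attributes = [
    {"name": "Brand", "applies_to": {"sales", "share"}},
    {"name": "Pack Size", "applies_to": {"sales"}},
    {"name": "Retailer", "applies_to": {"share"}},
    {"name": "Category", "applies_to": {"sales", "share"}},
]
short_list = contextual_options(attributes, "sales")
```

The `limit` default mirrors the research finding that users typically rely on the same 8-10 attributes; a real system would source the usage counts from analytics rather than a hard-coded table.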
TRADE-OFFS
This feature could not be offered for all dimension types and report types. The first iteration was delivered as a beta feature that users could explore but were not required to use. The language model was developed to best handle our most popular data inquiries.

3
CONTROL COMPLEXITY
We learned in our research that, depending on user type, users were either overwhelmed by or comfortable with the number of attributes presented in one view. We redesigned the initial view to provide more structure and an opportunity to prioritize the most commonly used attributes.
TRADE-OFF
Since we didn’t have any “intelligence” in the system to lean on, we had to rely on user interaction to manage complexity. Our initial ideas centered on “recommended” attributes based on the data, which would have removed the dependency on users to manage it themselves.

4
INLINE SEARCHING
Since we are dealing with a large quantity of available attributes, searching is a basic need for users. This feature was originally implemented in the product at its inception, but the technical approach made it unusable, and it was later removed because it was so frustrating to use. It was important that we met some basic expectations before attempting to re-deliver it:
- It must be able to search across all instances in the respective attribute type
- It must return results in less than 3 seconds
- It must be able to differentiate attributes from folders of attributes
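The requirements above can be sketched as a minimal search function (hypothetical names; the real implementation is a server-side, indexed search and is not described in this case study). The key point is the last requirement: results must keep attributes distinguishable from the folders that contain them:

```python
from dataclasses import dataclass

# Hypothetical catalog entry: either a selectable attribute or a folder.
@dataclass
class CatalogItem:
    name: str            # display name
    kind: str            # "attribute" or "folder"
    attribute_type: str  # e.g. "product", "market"

def inline_search(catalog, query, attribute_type):
    """Case-insensitive substring match across every instance of the
    given attribute type; attributes sort ahead of folders so the two
    stay visually distinguishable in the results list."""
    q = query.lower()
    hits = [item for item in catalog
            if item.attribute_type == attribute_type and q in item.name.lower()]
    return sorted(hits, key=lambda item: item.kind != "attribute")

catalog = [
    CatalogItem("Brand Family", "folder", "product"),
    CatalogItem("Brand", "attribute", "product"),
    CatalogItem("Retailer", "attribute", "market"),
]
results = inline_search(catalog, "brand", "product")
```

An in-memory filter like this trivially meets the 3-second bound; at the product's scale the same contract would be satisfied by an indexed backend search instead.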

VALIDATION
The validation (concept testing) for each of these feature sets was conducted separately, as they were delivered at different times.
1
Natural language data selection
KEY FINDINGS
The system's ability to provide prompts and guide users through selections was appreciated, making the overall experience more user-friendly.
Users did not understand the significance of different colors used, e.g., blue for matched dimensions, AI gradient for something Arthur suggests.
The processes for changing time periods, making edits, and handling multiple selections were not intuitive for all users.
HOW DID WE ITERATE?
We included suggestions to help users create prompts that could produce more relevant results.

2
A shorter list of contextual options
KEY FINDINGS
Advanced users resisted the change, preferring the familiar side panel, while newer users found the list menu intuitive and expected.
3
Control the amount of complexity
KEY FINDINGS
Users responded positively to the features and content organization, claiming they'd customize their views. However, analytics showed minimal actual usage of customization options.

4
Inline searching
KEY FINDINGS
Users welcomed search as essential functionality, though some remained skeptical after experiencing the poorly performing original version.



