FloowDrive
Project overview
Problem
Drivers complained about “inaccurate” scores and about being penalised for things they did not do whilst driving.
Solution
The goal of the project was to educate drivers by improving the findability of content, helping them understand where their scores came from, how they were generated and how they could improve them, all so they could become safer on the road and get a better policy premium.
Impact
- Reduced support burden by 63%
- Increased user satisfaction by 34%
Background
FloowDrive is a mobile app that automatically detects and scores a driver’s journeys. Insurance companies use it to assess drivers and offer good drivers better rates.
The app guides drivers on how to drive more safely so that they are less likely to be involved in a car accident and pay less for their car insurance.
My role
I led quantitative and qualitative research and the creation of a feature that informs and guides drivers on their driving behaviour.
I collaborated with the commercial, data science and support teams, and with members of my squad, to design a better experience for finding the educational content.
What I did
- Lead UX and UI designer
- Running a series of workshops within the company
- Planning and running remote usability tests
- Executive and company-wide presentations
- iOS and Android development and QA support
Running workshops
Throughout my research, users kept telling us the app was broken because they were penalised for things they did not do. The biggest area of concern across all the research was a misunderstanding of how the “Mobile use” score worked, so this became the first problem we worked on solving.
I organised and ran three remote workshops: one to ideate on the problem and potential solutions, one to critique those solutions and one to evaluate improved ones.
The team included the commercial team (who work with our clients on a daily basis), the data science team (who know the ins and outs of how our scores work) and my product team (who work on the product day to day).
Testing
I tested the potential solutions with six participants, all remotely. I focused on three areas:
- Discoverability of content: to learn whether users could find the information that previous participants had struggled to find.
- Tone of voice: to understand whether the vocabulary and keywords were clear.
- Use-case-based questions: to learn whether participants understood how the mobile distraction score was generated.
I iterated on the designs three times, attempting to fix all of the problems participants had found.