Content design for FloowDrive, a mobility monitoring app

My contributions:

  • Conducted a full content inventory & audit to identify gaps and redundancies

  • Ensured content reflected the latest product and flow changes

  • Ran a series of workshops with stakeholders

  • Collaborated with the commercial, data science and support teams

  • Planned, executed, and analysed usability tests to validate design decisions

I led quantitative and qualitative research and created an experience that educates drivers and guides them on their driving behaviour.

""
""
""
""

My role

Content designer

Company

The Floow

My team

I was part of a multidisciplinary team

Project background

FloowDrive is a white-labelled mobile app that automatically detects and scores a driver's journeys. Insurance companies use it to assess driving behaviour and offer better rates to safe drivers.

Problem

The app struggled to effectively communicate drivers’ scores and the behaviours influencing them. It also lacked actionable guidance on how drivers could improve their scores to become safer drivers. These issues resulted in frustration, distrust, and increased complaints from users.

Key Business and User Needs

  • Business needs. Reduce user complaints, build trust in the scoring system, and maintain strong partnerships with insurance companies.

  • User needs. Provide clarity on driving scores and actionable guidance to help users improve their behaviour and benefit from lower insurance rates.

Approach

At the beginning of the project, several gaps became apparent:

  • A lack of clarity around how the score (and its components) were calculated

  • No centralised repository for this information and limited knowledge of where the educational resources were stored

  • Limited understanding of the problem space

Content inventory & audit

To address these issues, I organised a series of workshops with the data science team to gain a deep understanding of the scoring system. I then created a definitive source of truth, enabling everyone in the company to access and learn from this vital information.

""

A snapshot of where I stored the copy on all driving events, faqs and more

Assessing the problem space

Gathering app feedback from individual clients was challenging. To tackle this—with approval from our insurance clients—I proposed designing a feedback card. This feature enabled us to collect feedback directly from all apps using FloowDrive, and let us gain a deeper understanding of the problem space.

Research outcome

After analysing 2,000 pieces of feedback from the feedback card, and around 200 support tickets, I learnt that feedback was consistently similar across all clients. There was a clear problem with educational content and driving guidance, and overall satisfaction was very poor: the app was rated 2.4 out of 5.

The mobile use score was the biggest point of concern for our users, so it became our priority to fix.

Education problem

Drivers felt unfairly penalised by perceived score inaccuracies, with some describing the app as "trying to get you." The lack of transparency in how scores were calculated left users feeling confused and incapable of improving their driving behaviour.

I keep getting told to get off my phone even though I don’t go on it. My phone can be in my bag yet says I was on it. Awful!
— App user

Guidance problem

Although the app included a map displaying driving events, it lacked explanations for what those events meant. This ambiguity made it difficult for drivers to understand how to improve. The absence of clear, actionable feedback was a major issue that needed addressing.

Running workshops

Before the workshops, I developed a journey map that illustrated our users' actions and emotions while reviewing information about their latest drive.

A few sticky-note examples from the workshop

The visualisation helped participants empathise with user frustrations before brainstorming solutions. I then organised three remote workshops to ideate, critique, and evaluate potential solutions, collaborating with the commercial team, data science team, and my product team.

Running moderated usability tests

I planned and conducted three sets of remote, moderated usability tests, focusing on the following areas:

  • Content discoverability: To determine whether users could locate information that had previously been difficult to find.

  • Tone of voice: To assess whether the vocabulary and keywords were clear and easy to understand.

  • Use-case-based scenarios: To evaluate if participants could comprehend how the mobile distraction score was calculated.

Based on the feedback, I iterated on the designs three times, addressing and resolving all the issues identified by participants.

Solution

Over the course of 18 weeks, I redesigned how content is discovered, organised, and written. The new solution enables drivers to access personalised content tailored to their driving behaviour and provides clear explanations of how scores are calculated, addressing their frustrations with the app.

""

Key features

  • Driving insights: Clear, actionable steps are provided to help drivers become safer and improve their scores. These are prominently accessible from the Home page.

  • Educational articles: Drivers can access detailed articles that explain key driving behaviours and tips for improvement in a user-friendly format.

  • Score transparency: A breakdown of score calculations helps users understand the factors influencing their scores, building trust in the system.

  • Improved map experience: The map now includes detailed explanations of driving events, making it easier for users to interpret and take corrective action.

Prototype

The prototype includes interactive features such as the driving insights page, new educational articles, and a transparent score breakdown page. It also showcases the improved map experience, with event descriptions and links to see the events in Google Maps.

Outcome

The solution successfully addressed drivers’ frustrations with the app and delivered measurable improvements in both user satisfaction and operational efficiency.

The solution was designed to improve usability and reduce user churn, which we expected would lead to better user retention over time. However, as I left the business not long after delivery, I was unable to track the long-term results.

  • Reduced support burden by 63%

  • Increased user satisfaction by 34%

  • Better alignment of DS components

What I could have done differently

I could have tried to secure stakeholder buy-in to conduct usability tests with a more diverse participant pool. This could have uncovered a wider range of user experiences and potential issues, leading to more robust solutions and an even more personalised experience.