Novartis | Global Risk Management
Helping to mitigate a $1.3 billion risk profile with a breakthrough platform
Role
Product Design Lead
Engagement Lead
Team
Scope
Project Management
Product Management
UI/UX Design
User Research
Prototyping
User Testing
Components
Context + Problem
In 2020 alone, Novartis was fined $1.3 billion for kickbacks, price-fixing and bribery.
To advance the ethical standards of the business, one solution was to improve how risky activities were identified and subsequently mitigated.
Whilst there was a broad guide for how to test activities for risk, each of the 71 global RAM (Risk Assessment Management) Units decided how to go about it. Testing was therefore time-intensive and done manually in Excel, with documents varying across units and no central record of the testing process.
The original brief
A RAM Unit had created a new Excel Testing Template and the idea was to create a digital version of this, with activities being referenced in from existing databases.
The aim: one universal process that would reduce manual input errors, cut the time and money spent, centralise records - and ultimately make it easier to identify risks.
Oh - and we had 2 months until launch, at the beginning of the testing cycle.
The proposed Excel Testing Template
Challenging the brief
Concerned that simply replicating the proposed Testing Template would oversimplify the needs of a diverse global user base, I decided we needed to validate our assumptions.
We conducted 8 in-depth user interviews (across the 5 core subdivisions of units and responsibility levels).
From the initial insights we defined 3 key opportunity areas to design for, and secured buy-in and agreement not to replicate the Excel template 1:1.
First release | Creating a linear testing journey
Initial discovery workshops
The testing flow was the most important feature to develop for first release, as the progress metrics and reporting would gain more significance further into the business cycle.
In the first two weeks I ran three 2-hour workshops with the core project team, mapping out crude user journeys in Miro and Figma.
This allowed us to quickly map the process and data requirements, and to challenge the team on existing taxonomies.
The key details we needed were how many RAM Units each user type needed visibility of, and whether or not they actively tested.
A defined testing journey
The outcome of this continuous discovery was a defined linear process with 4 repeating steps (dependent on a user's testing responsibilities).
Top-line user journey
Moderated usability testing
"We have to test with users to de-risk"
One of the challenges on this project was advocating for a user-centred approach; the most effective way I found to do this was to define the risk and business cost of developing unvalidated designs.
With buy-in secured, we conducted moderated usability testing of the testing flow with 5 participants across business units. We identified the need for:
Custom questions per control
The ability to record multiple deficiencies per sample
Adjustments to how users record and assess activity-type results at the end of testing (shown below)
Other minor shifts in UI
Compromising on automated samples
Working with the engineers on the requirements for referencing in automated samples (and selecting X no. to suggest for testing), it became apparent that a significant data mapping exercise would be needed to reference all samples - the 8 databases being drawn from had inconsistent naming conventions, among other issues.
Given the 2-month time frame to first release, we decided the development cost of the data mapping would block the development of more important features.
We compromised for first release by requiring users to manually input sample information. This allowed us, as the core team, to define the nomenclature and minimum requirements for recording samples, which could form the basis for data mapping in the future.
The final solution
With over 100 screens designed, I haven't included them all here, but please see a selection below.
The process of creating the data-viz reports section is worthy of a case study in itself, so if you're interested please get in touch.
Identifying a moment of delight
Whether you're designing a process to assess risk, a payment flow or helping someone learn a new language - there are always opportunities to create moments of delight.
Putting myself in a risk manager's shoes, I thought what an anticlimax it would be to end a 6-month testing cycle with a simple "Submitted, confirmed" and a redirect to home.
Was this celebration screen a legal requirement? No - but it helps to reinforce that the action just taken is one of significance, one that mattered.
Oh, and let's be honest, it's fun.
Outcomes
An initial release allowed Units to begin testing via the Testing tab; a subsequent release added the Home tab, with progress-tracking metrics, and the Reports tab, with custom data-viz reports.
Across the project I consider the successes to be:
Alignment across Units on a single testing process, including nomenclature and taxonomy
Central tracking for the first time, allowing visibility of progress at all levels
Comprehensive reporting capabilities
150+ users trained
Teaching and up-skilling the core team on user-centred methodologies