
dCube

AI-first machine learning platform for fraud management

Lead Product Designer

IMPORT-DATA-01 Copy.png

ENTERPRISE FRAUD ANALYTICS DASHBOARD

The project

What is dCube?

dCube is a machine learning platform for organizations that addresses all steps in the fraud management lifecycle: from data management and feature engineering, through model development and deployment, to fraud review and investigation.

Today, machine learning platforms are highly technical in nature, and organizations need specialized data scientists to build models and review the results. With dCube, the goal was to design the product so that even non-data scientists can perform these tasks.

dCube aims to streamline the fraud investigation and management workflow for fraud analysts, while also giving data science teams deeper insight into the underlying nature of fraud so they can build more effective models and deploy them to production while meeting compliance requirements.

dCube flow.png
Personas

Harold Streeter, Fraud Analyst

headshots-sydney.jpg
Group 2.png
  • Reviews suspicious activities and handles case management

  • Creates and manages rules to detect fraud

  • Reviews and labels model results for re-tuning

Kathleen Avery, Data Scientist

headshots-for-business.jpg
Group 3.png
  • Manages and transforms datasets

  • Performs feature engineering

  • Builds a variety of machine learning models

  • Tunes and re-tunes models and analyzes model results

Luis Rojas, Chief Risk Officer

CEO.jpeg
Group 4.png
  • Monitors the performance and detection results of the ML models

  • Explores insights into emerging fraud patterns

  • Analyzes team productivity metrics and operations status highlights

Designs

STEP 1: DATA PREPARATION

Goals

  • Collect data from multiple data sources, then profile, cleanse, enrich, and combine them into a derived dataset for use in downstream processes (a rough sketch of these steps follows this list).

  • Create an intuitive workflow to prepare the data so that even non-data scientists can do the job easily.

  • Tailor the process for the data scientist and non-data scientist personas.
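
To make these steps concrete, here is a minimal pandas sketch of the profile, cleanse, enrich, and combine flow described above. The file names, columns, and rules are hypothetical placeholders, not dCube's actual pipeline, which runs these steps through its visual workflow rather than code.

```python
import pandas as pd

# Hypothetical inputs: transaction events and account records from two sources.
transactions = pd.read_csv("transactions.csv", parse_dates=["event_time"])
accounts = pd.read_csv("accounts.csv")

# Profile: inspect column types and missing-value counts.
print(transactions.dtypes)
print(transactions.isna().sum())

# Cleanse: drop exact duplicates and rows missing required fields,
# and normalize an inconsistently formatted categorical field.
transactions = transactions.drop_duplicates()
transactions = transactions.dropna(subset=["account_id", "amount"])
transactions["currency"] = transactions["currency"].str.upper().str.strip()

# Enrich: add derived columns a downstream model might use.
transactions["hour_of_day"] = transactions["event_time"].dt.hour
transactions["is_high_value"] = transactions["amount"] > 1_000

# Combine: join account attributes onto each transaction to form
# the derived dataset handed off to feature engineering.
derived = transactions.merge(accounts, on="account_id", how="left")
derived.to_parquet("derived_dataset.parquet", index=False)
```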

Research

After receiving the requirements spec from the product team, I conducted several rounds of user interviews with 8-10 data scientists to understand both their current process and the ideal procedure for preparing data for modeling. I also talked to other stakeholders, including the fraud analyst, customer success, and sales engineering teams, and did a competitive study of existing data preparation tools.

Design Exploration and Validation

After this research, I started by sketching the high-level workflow and the navigation between different areas, then ran a first round of validation with the target personas to confirm it made sense at a high level. From there, I dove into sketching the individual screens and interactions to work out the details.

unnamed-2.jpg
unnamed (1)-2.jpg

Hi-fi designs

DATA STUDIO -- HOME --EMPTY STATE Copy.p

1. Import Data - Zero State Page

V12 - PREVIEW DATASET.png
IMPORT-DATA-01 Copy.png

2. Import Data

V12 - DATA QUALITY CHECK POPUP.png

3. Preview Data

V12 - SET DATATYPES - ROW EXPANDED.png
Group 8.png

4. Transform fields and Validate data

6. Validation in progress - popup

STEP 2: FEATURE ENGINEERING

Feature engineering and model tuning determine model performance. To ensure optimal results, dCube gives data scientists and fraud analysts access to an extensive library of out-of-the-box features, along with a powerful platform for engineering custom features. dCube also provides deep learning features and features derived from intelligence gathered across 4.2B+ user accounts.

Goals

  • Accelerate feature engineering with out-of-the-box features and built-in feature templates for rapid production deployment.

  • Create an intuitive workflow for building a variety of feature types, such as velocity and non-velocity features (a velocity feature is sketched below).

  • Create workflows to replay features on uploaded datasets, and build dashboards to analyze feature effectiveness.
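
For readers unfamiliar with the term, a velocity feature counts how often something happens within a trailing time window, such as transactions per account per hour, since abnormally high short-term activity is a common fraud signal. The sketch below shows one hypothetical way to compute such a feature with pandas; the column names and window size are placeholders, not dCube's actual feature definitions.

```python
import pandas as pd

def velocity_feature(events: pd.DataFrame, window: str = "1h") -> pd.Series:
    """Count of events per account within a trailing time window."""
    events = events.sort_values("event_time")
    counts = (
        events.set_index("event_time")
        .groupby("account_id")["account_id"]
        .rolling(window)   # time-based trailing window, e.g. "1h"
        .count()
    )
    return counts.rename(f"txn_count_{window}")

# Hypothetical usage on the derived dataset from the data preparation step.
# derived = pd.read_parquet("derived_dataset.parquet")
# txn_velocity = velocity_feature(derived, window="1h")
```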

Problem Definition

After receiving the requirements spec from the product team, I worked closely with the data science team to first understand the world of feature engineering. Then, with the help of design workshops, brainstorming sessions, and stakeholder interviews, I designed a clean and intuitive workflow to create and manage features.

Sitemap

Feature Platform Standalone taskflow dia

Product Video

STEP 3: MODELING

dCube offers powerful model development capabilities, ranging from automated to manual model development. Models can be tuned via feature selection as well as feature weighting. This gives users the greatest flexibility to combine DataVisor's domain expertise with their own: accepting recommendations where appropriate, customizing as needed, and refining over time.
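
To illustrate what tuning via feature selection and feature weighting can look like in code, here is a minimal, hypothetical scikit-learn sketch: it keeps the k most predictive engineered features and applies per-feature weights before fitting a fraud classifier. This is only an illustration of the general technique, not dCube's model development pipeline, and the data and weights are synthetic placeholders.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler

# Synthetic placeholder data: an engineered feature matrix X and fraud labels y.
rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 20))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1_000) > 1).astype(int)

# Feature selection: keep the k features most associated with the fraud label.
# Feature weighting: scale the selected columns by (here arbitrary) weights,
# so domain knowledge can emphasize or de-emphasize particular signals.
k = 8
weights = np.linspace(1.0, 2.0, num=k)

model = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(score_func=f_classif, k=k)),
    ("weight", FunctionTransformer(lambda Z: Z * weights)),
    ("classify", LogisticRegression(max_iter=1_000)),
])

model.fit(X, y)
print("training accuracy:", model.score(X, y))
```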

Goals

  • Support both automated and manual model development workflows.

  • Allow models to be tuned through feature selection and feature weighting.

  • Make it easy for users to combine DataVisor's domain expertise with their own by accepting recommendations, customizing where needed, and refining models over time.

Research, Solution Exploration, Wireframes

I led regular brainstorming workshops involving key stakeholders, the data scientists, and the development team to understand the advances the research team had made to the machine learning models, and to brainstorm how to make these capabilities easier for non-data scientists to understand and use.

When working with remote teams, we used the InVision Freehand tool to collaborate and communicate ideas.

The design team then moved forward with that understanding, iteratively creating workflow diagrams and low- and high-fidelity prototypes.

UML Enterprise Task Flow diagram.png
SML - 4.jpg
SML - 5-2.jpg
SML - 3.jpg
SML Invision Freehand.png

Final Design

Usability Testing

At the end of the design of each major workflow, we conducted comprehensive summative usability testing. As part of this, we targeted and recruited ideal participants, prepared usability test scripts, and defined both qualitative and quantitative metrics.

 

For qualitative metrics, we recorded user comments, usability issues observed as the user performed the tasks, and the user's ease-of-use rating. For quantitative measures, we followed each usability test with a survey.

FP Usability Testing - 1.png
FP Usability Testing - 5.png
FP Usability Testing - 6.png
FP Usability Testing - 4.png
FP Usability Testing - 2.png
FP Usability Testing - 3.png
FP Usability Testing - 7.png
Making designs come to life - Implementation

Key Challenges

1. The implementation almost never matched the design.

2. Any feedback or bug reports from the design team about the implementation ended up in the product backlog because the engineering team was busy implementing new features.

3. Inconsistent use of libraries for key UI components. Since the engineering team was spread across two locations, developers used different libraries for the same UI components, which led to an inconsistent experience and also slowed down performance.

Approach

1. Design System

I presented to the key stakeholders, including the CEO, on the need for and importance of a design system and how it could save time, resources, and dollars. After getting buy-in from the key stakeholders, I led the effort to design a robust design system (taking inspiration from Canvas, Material Design, and a few other design systems). I also worked closely with the product manager to make changes to the product development lifecycle to ensure designs get implemented as expected.

Styleguide - 1.png
Styleguide - 2.png
Styleguide - 5.png
Screen Shot 2019-11-17 at 12.23.25 AM.pn

2. To bring the implementation closer to the design, I worked closely with product management on a few process changes, such as adding design and interaction details to each JIRA user story so that QA could verify the design as well as the functionality. I also introduced a new 'Design Review' status in JIRA for the design team's final sign-off on every user story. After these changes, a user story was only complete once the design team had given it a thumbs up.

JIRA.png

3. Worked with the engineering team to understand the UI framework they were using and the libraries they were comfortable with for UI components and data visualization charts. We ended up using Angular Material for form elements, React for the data grid, and D3 for all data visualizations. This ensured that all developers used the same set of central libraries during implementation.
