Data Integrity Commitment

How I verify every metric in this portfolio

Every number cited here will be questioned by Legal, validated by Product, scrutinised by Engineering. This page is the answer. Four evidence tiers, transparent formulas, attribution-honest claims. Confidential employer data is paraphrased, not exposed — the same discipline I'd bring to your data.

4 Evidence tiers
3 Live formulas with sample size + limitations
100% Sources attribution-honest
0 Confidential leaks · ever
View Complete Verification Manifest (JSON)

The Framework

Four evidence tiers — every claim sits in one of these

Public evidence first, NDA paraphrase last. The tier is always disclosed alongside the number. Reviewers can verify Tier 1 themselves; Tiers 2–4 unlock progressively in interviews.

PUBLIC · ANYONE CAN CHECK: Awwwards · ASIC records · website footers

Tier 01

Public evidence

Verifiable by anyone through public sources. Awwwards nominations, ASIC public records, jurisdictional disclosures visible in website footers.

VERIFIABLE NOW
GA4 · 30-day rolling window · 42.37%

Tier 02

Analytics screenshots

Hotjar, Google Analytics 4, Search Console. Sample size and date range disclosed. Live dashboard walkthrough in video interviews.

SCREENSHOT ON REQUEST
NDA · INDEXED ONLY · +35%

Tier 03

NDA-protected (paraphrased)

Confidential business metrics presented as relative changes ("+35% improvement") with methodology disclosed but exact values withheld. Final-round disclosure under bilateral NDA.

CONFIDENTIAL
SANITISED · C-SUITE · IR · LEGAL

Tier 04

Executive collaboration

Sanitised investor-deck templates, FY26/27 financial framing artefacts, C-level strategic feedback summaries. Walkthroughs in final interviews.

FINAL ROUND

Why this page exists

Quantifying design isn't proof — it's trust infrastructure

I'm a designer, not a data scientist. But every design decision I make has consequences that Legal, Engineering, and the CFO will ask me to defend. This framework is how I answer them — with rigour, attribution, and the same confidentiality I'd carry into your team.

Question 01 · Legal

"How do users actually see this risk disclosure before they commit?"

Answered with eye-tracking samples, scroll-depth analytics, and interaction logs. Not a screenshot of the disclaimer; evidence the disclaimer was read.

Question 02 · Engineering

"Does this redesign improve task time or is it just prettier?"

Answered with controlled usability data, before/after times, sample size, and statistical caveats. Cohen's d only when the test design supports it.

Question 03 · CFO

"What's the ROI of the design system investment?"

Answered with implementation-time delta against baseline period and explicit confounding factors (TypeScript migration ran in parallel — not all gain is design system attribution).

Attribution Discipline

What I measure, what I don't claim, how I handle confidential data

Most designer portfolios over-attribute team outcomes. Mine doesn't. The line between "I designed this" and "the company reported this" is drawn explicitly on every claim.

Principle 01

What I measure

User behaviour (task time, error rates), system adoption (component reuse rate, design system uptake), and compliance outcomes (zero design-related regulatory findings).

DESIGN-ATTRIBUTABLE

Principle 02

What I don't claim credit for

Revenue, user growth, conversion rates. These are team outcomes driven by Product, Marketing, Engineering — not design alone. Company-wide metrics are attributed to source, not to me.

TEAM-ATTRIBUTABLE

Principle 03

How I handle confidential data

Four tiers: public evidence anyone can check, analytics screenshots in interviews, confidential data paraphrased as directional estimates with methodology, and sanitised executive artefacts in final rounds. I won't compromise a previous employer's confidentiality to make my portfolio look better.

PROFESSIONAL ETHICS

Calculation Methodology

Three live formulas with sample sizes and limitations

Every quantitative claim in the portfolio is backed by a transparent formula. Below: the maths, the sample, and the confounders that should keep you sceptical of the headline number.

BASELINE: 3 days/component (n=10) · POST-DS: 2 days/component (n=20) · Δ = ((3−2)/3)·100 = 33.33%, reported as a ≈30–40% range

Formula 01

Design system implementation speed

Baseline 3 days/component (n=10, 6-month pre-system) vs. 2 days (n=20, 9-month post-system). Reported as a range to absorb engineer-level and component-complexity variance. Confounders: TypeScript migration ran in parallel — not all gain is attributable to the design system. Directional, not causal.

OBSERVATIONAL DIRECTIONAL ONLY
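For reviewers who want to rerun the arithmetic, here is Formula 01 as a runnable sketch. Only the two means cited above go in; the per-component raw data stays with my former employer.

```python
# Formula 01: design system implementation-speed delta.
baseline_days = 3.0   # mean days/component, n=10 (6-month pre-system)
post_ds_days = 2.0    # mean days/component, n=20 (9-month post-system)

delta_pct = (baseline_days - post_ds_days) / baseline_days * 100
print(f"Point estimate: {delta_pct:.2f}% faster")   # 33.33%

# Reported as "30-40%" rather than the point estimate: observational
# samples of n=10 vs n=20 don't support two-decimal precision.
```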
n=15 TRADERS · 8.2s PRE → 2.9s POST · −5.3s (64.6% reduction)

Formula 02

Order placement flow time reduction

Moderated lab test, n=15 (5 novice / 7 intermediate / 3 expert). Within-subjects, ±0.2s manual timing. Cohen's d = 2.47 (very large) — expected, since steps were cut 6→2. Limitations: learning effect not counterbalanced, lab ≠ live trading stress.

CONTROLLED LAB d = 2.47
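For transparency about the effect-size method: within-subjects Cohen's d (d_z) is the mean of the paired differences divided by their standard deviation. The timings below are hypothetical placeholders chosen to illustrate the calculation; the real n=15 raw data is Tier 2/3 material.

```python
import statistics

# Within-subjects Cohen's d (d_z) = mean paired difference / SD of differences.
# HYPOTHETICAL placeholder timings -- not the study's raw data.
pre  = [9.5, 7.0, 8.2, 10.1, 6.2]   # seconds, old 6-step flow
post = [2.6, 3.6, 2.9, 2.5, 2.9]    # seconds, new 2-step flow

diffs = [a - b for a, b in zip(pre, post)]
d_z = statistics.mean(diffs) / statistics.stdev(diffs)
print(f"mean diff = {statistics.mean(diffs):.1f}s, d_z = {d_z:.2f}")
# ~2.7 on these placeholders; the full n=15 study reports d = 2.47.
```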
31M impressions · 187K clicks · CTR = 0.603% · avg. position 21.7 · 12-month period · ~2,400 unique queries

Formula 03

TradingCup SEO performance

187K organic clicks ÷ 31M impressions = 0.603% CTR over 12 months (Google Search Console). Attribution: SEO strategy was Marketing + Content. I claim only the UX-side scaffolding — page structure, Schema.org markup, Core Web Vitals.

PUBLIC TIER 1 UX SCAFFOLD ONLY
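The full CTR calculation, reproducible from the two public Search Console totals above:

```python
# Formula 03: organic CTR from public Search Console totals.
impressions = 31_000_000
clicks = 187_000

ctr_pct = clicks / impressions * 100
print(f"CTR = {ctr_pct:.3f}%")   # 0.603%
```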

Honest Assessment

What this evidence means — and what it would take at institutional scale

Small samples done well are still small samples. The portfolio's strongest study (Finlogix, n=15 within-subjects) is a directional signal, not a guarantee. Naming that gap honestly is part of the methodology.

What I have · current portfolio

Exploratory + one well-designed comparison

  • Finlogix n=15 within-subjects, p<0.001, d=2.47
  • Nova n=8, HorizonSync simulated scenarios — proof-of-concept
  • Production analytics (n=100K+) used only for IA-level claims
  • Every directional finding labelled directional, not validated
What I'd add · institutional scale

Proper randomised + longitudinal protocol

  • n=200+ per cell with random assignment (95% confidence / ±7% margin; worked check after this list)
  • Counterbalanced order, controls for trader experience & market regime
  • Effect-size + confidence intervals, not just p-values
  • 3–6 month longitudinal tracking to catch learning effects
  • Multi-market replication — confirm results aren't one-region artefacts
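The worked check behind "n=200+ per cell", under one standard reading of the 95% / ±7% figure: the worst-case sample size for estimating a proportion at 95% confidence with a ±7% margin of error.

```python
from math import ceil

# Worst-case sample size for a proportion: n = z^2 * p(1-p) / e^2
z = 1.96        # two-sided z-score for 95% confidence
p = 0.5         # worst-case proportion (maximises variance)
margin = 0.07   # +/-7% margin of error

n = ceil(z**2 * p * (1 - p) / margin**2)
print(n)   # 196 -> quoted conservatively as "n=200+ per cell"
```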

Why this matters for institutional roles: on retail platforms (100K+ users), fast exploratory testing drives shipping. At institutional scale — where one UX choice can affect billions in AUM or expose the firm to regulatory risk — design needs rigorous testing, peer review, and reproducible methods. I'm ready to partner with quantitative researchers and compliance teams to validate decisions to that bar.

Real-World Application

Four claims, four verification paths

Each example below is one tier of evidence. Tier 1 examples can be checked right now; Tier 2 unlocks in interviews; Tier 3 stays paraphrased.

[Screenshot] Google Search Console · TradingCup · 3-month window showing clicks and impressions. Tier 1 public evidence — live dashboard walkthrough available in interview. SEO strategy attributed to Marketing + Content; UX scaffolding mine.

Tier 01 · Public

TradingCup · 187K clicks

Google Search Console screenshot above. Live GSC dashboard walkthrough on request. CTR 0.6% at average position 21.7 demonstrates discovery effectiveness on the UX side, not the SEO strategy itself.

VERIFIABLE NOW

Tier 01 · Attribution

100K+ traders · 40+ jurisdictions

Company-wide metric, publicly stated by my former employer. I designed for a platform operating at this scale — I do not claim user-acquisition credit. Jurisdiction count verifiable in website footer regulatory disclosures (ASIC, FCA, CySEC, FSA).

PLATFORM-STATED FOOTER-VERIFIABLE

Tier 02 · Analytics

User behaviour · Hotjar & GA4

Hotjar heatmaps + session recordings + GA4 engagement metrics drove design prioritisation. What I won't show: internal dashboard screenshots, session recordings, geographic distributions — these belong to my prior employer. Methodology walkthrough in interview; detailed data under NDA in final round.

NDA-PROTECTED METHOD ON REQUEST

Tier 01 · Live demo

Macro Signal · FRED + EDGAR pipeline

Every metric on Macro Signal Network is derived from public APIs — FRED, SEC EDGAR, U.S. Treasury FiscalData, BLS. Reproducible end-to-end by any reviewer with an API key. Zero NDA risk; nothing paraphrased from a prior employer's dashboard.

FULLY PUBLIC SOURCE
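As a concrete starting point for reproduction, a minimal sketch pulling one pipeline input from FRED. The series ID and key are placeholders (a free key comes from fred.stlouisfed.org); the endpoint and parameters follow FRED's documented REST API.

```python
import requests

# Fetch the latest observation of one public FRED series.
API_KEY = "YOUR_FRED_API_KEY"   # placeholder -- free key from fred.stlouisfed.org
SERIES = "CPIAUCSL"             # example series: US CPI, all urban consumers

resp = requests.get(
    "https://api.stlouisfed.org/fred/series/observations",
    params={"series_id": SERIES, "api_key": API_KEY, "file_type": "json"},
    timeout=30,
)
resp.raise_for_status()
latest = resp.json()["observations"][-1]
print(latest["date"], latest["value"])
```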

Why I take the confidentiality stance: If I share a previous employer's internal data to land a new role, why would you trust me with yours when I eventually leave? This page demonstrates I can measure design impact rigorously while honouring professional confidentiality. In financial services, that's not optional.