METHODOLOGY · DESIGN FAILURES & ITERATIONS · ACY SECURITIES

What Didn't Work, and Why

Four design directions that failed in production at ACY Securities. The hypotheses that seemed reasonable, what the data said, and what shipped instead.

4 Documented failures
ACY Practice base
0 UX violations (post-V2)
40+ Jurisdictions covered

In financial design, the cost of a design failure is not a sprint retro and a note in the changelog. It is user trust eroded, compliance risk incurred, and — at ACY's scale — real money lost by traders who made decisions on a surface that misled them. I document these failures not as confessions but as the evidence that makes every subsequent design decision less guessable. What you see below is the shortest version of four hypotheses that failed, what the tests showed, and what shipped in the second iteration.

01

Maximum Transparency

Hypothesis: more data equals more trust. If users see all performance metrics, they will feel confident making decisions.

V1 — Failed

What I built:

  • 12+ metrics visible per provider card
  • Full monthly returns table on the browse page
  • Sharpe ratio, drawdown, win rate all exposed at the same level
  • Assumption: "power users want density"
V2 — Shipped

What changed:

  • Visual risk gauge (colour-coded: green / amber / red) as the primary signal
  • Strategy style labels ("Conservative", "Aggressive Scalper") before numbers
  • "Show Details" progressive disclosure for the full metric set
  • Metrics hierarchy: right things at the right decision stage
78%
Users overwhelmed in V1 usability test
3/15
Accidental taps on wrong metrics (observation sessions)
73%
Users preferred simple risk labels over raw numbers (V2 test)
Key learning

Trust comes from showing the right things at the right hierarchy, not everything at once. V2's layered approach split users naturally: 73% into a "quick decision" path (visual risk gauge only) and 27% into "deep analysis" (expanded metrics). This matched actual user behaviour far better than forcing everyone through the same information-dense interface — which mostly produced paralysis and accidental taps.
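The V2 mechanics above can be sketched in code. This is an illustrative model only: the single-input drawdown mapping and the specific green/amber/red cut-offs are assumptions for the example, not ACY's production logic.

```typescript
// Hypothetical risk-gauge sketch. Thresholds are illustrative assumptions:
// <10% max drawdown = green, 10–25% = amber, >25% = red.
type RiskBand = "green" | "amber" | "red";

function riskBand(maxDrawdownPct: number): RiskBand {
  if (maxDrawdownPct < 10) return "green";
  if (maxDrawdownPct <= 25) return "amber";
  return "red";
}

// Progressive disclosure: the card shows only the band by default;
// the full metric set is revealed when the user taps "Show Details".
interface ProviderCard {
  band: RiskBand;
  detailsVisible: boolean;
}

function makeCard(maxDrawdownPct: number): ProviderCard {
  return { band: riskBand(maxDrawdownPct), detailsVisible: false };
}
```

The design point is that the default render path carries one signal (the band); everything else is opt-in, which is what split users into the 73% quick-decision and 27% deep-analysis paths.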

02

Gamifying Risk

Hypothesis: badges and leaderboards will increase engagement without encouraging reckless trading behaviour.

V1 — Failed

What I built:

  • "Top Trader" leaderboard ranked by raw profit for the week
  • Achievement badges for high-risk strategy selection
  • Streak counters ("5 winning days in a row!")
  • Assumption: "social proof drives trust"
V2 — Shipped

What changed:

  • Leaderboards penalise high drawdown, not just reward high returns
  • Badges for "Consistent performance" — not "Highest return"
  • Mandatory risk disclosure gate before copying any provider
  • ASIC compliance: no inducements to trade in any surface element
Key learning

Gamification in financial products requires ruthless ethical discipline. What works in fitness apps — streaks, badges, leaderboards — can be dangerous in a trading platform. The V1 design inadvertently encouraged users to chase high-risk strategies to "win" badges. Legal flagged this as a potential violation of ASIC's inducement prohibitions (Corporations Act 2001, s. 911A context). V2 shifted every engagement mechanic to risk-adjusted performance metrics and explicit warnings. The lesson is not that gamification cannot exist in fintech — it is that every gamification element must be tested against the question: does this incentivise a behaviour that is good for the user's financial outcome?
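The shift from raw-return ranking to drawdown-penalised ranking can be shown with a minimal scoring sketch. The formula here (return divided by max drawdown, a Calmar-style ratio) and the field names are assumptions for illustration, not ACY's actual scoring model.

```typescript
// Hypothetical risk-adjusted leaderboard score — not ACY's formula.
// A high return earned through deep drawdowns no longer tops the board.
interface Provider {
  name: string;
  weeklyReturnPct: number; // e.g. 4.2 for +4.2%
  maxDrawdownPct: number;  // e.g. 18 for an 18% peak-to-trough drop
}

function riskAdjustedScore(p: Provider): number {
  // Floor the drawdown at 1% so a near-zero drawdown
  // cannot produce an unbounded score.
  return p.weeklyReturnPct / Math.max(p.maxDrawdownPct, 1);
}

function leaderboard(providers: Provider[]): Provider[] {
  return [...providers].sort(
    (a, b) => riskAdjustedScore(b) - riskAdjustedScore(a)
  );
}
```

Under this scoring, a provider with +5% on a 5% drawdown outranks one with +10% on a 40% drawdown — which is exactly the incentive reversal the V1 "most profitable this week" board lacked.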

03

Institutional Density for Retail Users

Hypothesis: if the platform looks like Bloomberg Terminal, users will feel like professionals and trust it more.

V1 — Failed

What I built:

  • 8 concurrent widgets on the Finlogix dashboard (Bloomberg-inspired)
  • Dense typography — 12px font size for data tables
  • Advanced technical indicators exposed by default on first session
  • Assumption: "more data = more professional = more trust"
V2 — Shipped

What changed:

  • 2–3 essential widgets on first load (Account Summary, Watchlist, Quick Trade)
  • Progressive disclosure: "Add More Tools" panel for advanced features
  • Three explicit modes: Novice, Intermediate, Expert — user-selectable
  • 14px minimum body font throughout, supporting WCAG 2.1 AA contrast and text-resize criteria
3×
Higher error rate for novice traders in V1
67%
V1 users who never customised the widget layout
+40%
Session duration increase after V2 shipped
Key learning

Institutional design patterns do not translate directly to retail — context matters more than aesthetic. Bloomberg Terminal users are professional traders who spend 8+ hours a day in the platform and receive formal onboarding training. Finlogix users are part-time retail traders managing personal portfolios between other commitments. Research showed novice traders made 3× more errors in V1's dense layout — accidentally clicking wrong instruments, misreading P&L direction. V2's progressive disclosure let users graduate to complexity as they gained experience, matching the actual learning curve instead of assuming day-one expertise.
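The mode-based disclosure described above can be sketched as a simple mapping from mode to default widget set. The widget names and the per-mode groupings here are assumptions for the example; the actual Finlogix widget catalogue is not reproduced.

```typescript
// Hypothetical sketch of three-mode progressive disclosure.
// Widget names and mode mappings are illustrative assumptions.
type Mode = "novice" | "intermediate" | "expert";

const DEFAULT_WIDGETS: Record<Mode, string[]> = {
  novice: ["Account Summary", "Watchlist", "Quick Trade"],
  intermediate: [
    "Account Summary", "Watchlist", "Quick Trade",
    "Open Positions", "News",
  ],
  expert: [
    "Account Summary", "Watchlist", "Quick Trade", "Open Positions",
    "News", "Depth of Market", "Technical Indicators", "Economic Calendar",
  ],
};

// First load shows the mode's defaults; the "Add More Tools" panel
// unions in whatever the user opts into, deduplicated.
function visibleWidgets(mode: Mode, added: string[] = []): string[] {
  return [...new Set([...DEFAULT_WIDGETS[mode], ...added])];
}
```

The key property is that complexity is additive and user-initiated: a novice starts with three widgets and can graduate toward the expert set, rather than starting at eight and being expected to prune.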

This lesson applies directly to Private Banking: UHNW clients expect institutional-grade tools, but wrapped in a UX that does not require 40 hours of training. Power when needed, simplicity by default.

04

Design First, Legal Later

Hypothesis: design the best UX first, then adjust for compliance — Legal can work around good design.

V1 — Failed

What happened:

  • Designed ACYVerse "Recommended Providers" feature based on raw performance rankings
  • Four weeks of design iterations presented to stakeholders
  • Legal review flagged potential ASIC violation — RG146 inducement prohibition
  • Result: four weeks of work discarded, morale impact across the team
V2 — Shipped

New process:

  • Legal stakeholder invited to Day 1 kickoff — regulatory constraints scoped before wireframes
  • Low-fidelity wireframes reviewed with Legal before any high-fidelity investment
  • Pivoted to "Risk-Adjusted Performance" ranking with compliant disclaimers
  • Zero design-related violations across 40+ jurisdictions for the following 2+ years
Key learning

In regulated environments, Legal is not a blocker — they are a design constraint that should inform architecture from day one. Legal teams think in risk mitigation: what could go wrong? Designers think in opportunity: what could users do if this friction were removed? These perspectives conflict unless aligned early. V2's process — inviting Legal to wireframe reviews, building dedicated compliance QA checkpoints into the design cycle — turned Legal from a veto gate into a collaborative partner. The cost of a 30-minute Day 1 conversation is almost always less than four weeks of rework.
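The V2 process can be modelled as a gated state machine: each fidelity stage requires a Legal sign-off before work advances. The stage names and the single-approver model are illustrative assumptions, not a description of ACY's actual workflow tooling.

```typescript
// Hypothetical design-cycle gate: no stage advances without Legal sign-off.
const STAGES = ["kickoff", "wireframe", "hi-fi", "build", "ship"] as const;
type Stage = typeof STAGES[number];

interface DesignCycle {
  stage: Stage;
  legalSignoffs: Set<Stage>; // stages Legal has reviewed and approved
}

// Advance only if Legal has signed off the current stage; otherwise hold.
function advance(cycle: DesignCycle): DesignCycle {
  const i = STAGES.indexOf(cycle.stage);
  if (i === STAGES.length - 1 || !cycle.legalSignoffs.has(cycle.stage)) {
    return cycle; // blocked: missing review, or already shipped
  }
  return { ...cycle, stage: STAGES[i + 1] };
}
```

The structural point is that the gate sits before the expensive stages: a missing sign-off at "wireframe" blocks a week of low-fidelity work, not four weeks of high-fidelity rework.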

This is critical for any institution operating under ASIC, MiFID II, FCA COBS, or FINRA. A single design-driven compliance finding does not just cost the fine — it costs the audit cycle, the legal review, the remediation sprint, and the client trust that evaporated while the feature was offline.

Portfolio thread

Where this connects

These four failures are documented evidence — the same evidence discipline that runs across the portfolio's research and methodology pages.

Thread

Evidence & Verification Discipline

How quantitative claims are sourced, validated, and turned into design decisions — pooled-SD, A/B, usability metrics

  • Lessons Learned — Four documented failures. Usability data and compliance outcomes from ACY Securities production. Methodology · 4 iterations · 0 post-V2 violations
  • Data Verification Methodology. How quantitative claims in financial UI are sourced and audited. Methodology · pooled-SD · Cohen's d · citation discipline
  • MiFID II Best-Execution Report. Slippage in basis points — measurement discipline for venue comparison. Field note · MiFID II Art 27 · bps unit standard

Thread

Retail → Institutional Translation

What breaks when consumer UX patterns meet regulated institutional contexts — and what specifically replaces them

  • Lessons Learned — Bloomberg density for retail. Failure 03: institutional pattern mismatch in a retail CFD context. Methodology · 3× error rate · progressive complexity
  • FIX 4.4 Latency & Order Entry. Consumer form patterns fail at institutional latency — four replacements. Field note · 8–12ms budget · pre-flight validation
  • KYC Drop-Off at EDD. Consumer onboarding mental model fails at Enhanced Due Diligence. Field note · UHNW context · progressive disclosure

Want to discuss these in more depth?

These failures taught me more than most successes. If you are building regulated financial products and want to work through design trade-offs, compliance process, or user research methodology, I am happy to go further.

ed@edwson.com