Designing MiFID II Best-Execution Reports for Buy-Side PMs
Article 27 forces the proof. The default output is a PDF nobody reads. Here is what a PM actually needs on the screen.
MiFID II Article 27(1) is short enough to quote in full:
"Member States shall require that investment firms take all sufficient steps to obtain, when executing orders, the best possible result for their clients taking into account price, costs, speed, likelihood of execution and settlement, size, nature or any other consideration relevant to the execution of the order."
That one sentence generates the two regulatory deliverables most buy-side shops hate producing: the RTS 27 venue-quality report (quarterly, from execution venues) and the RTS 28 top-five-venues report (annual, from the firm itself). Both must be made public. Both are, in almost every case I have seen on the buy side, an unreadable PDF churned out by a compliance team, rubber-stamped, and forgotten until the next cycle.
That is a design failure disguised as a regulatory one. The data is there. The portfolio managers who need it cannot see it. Here is how I approach the UX when I get to build it properly.
What PMs actually ask the report
When a buy-side PM reviews best-ex, the questions are specific and always the same five:
- Where did my flow go? Venue-level volume share over the period, by asset class and by order size bucket.
- Was I getting better prices elsewhere? Slippage versus arrival-price midpoint and versus EBBO (European best bid/offer), broken down by venue.
- How fast did it fill? Time-to-fill distribution — not the mean, the full histogram, because the tail is where information leakage lives.
- What did it cost me beyond the spread? Explicit fees + implicit market impact, per venue, per order size.
- Is this pattern new? Versus last quarter, versus same-quarter-last-year, and versus the firm's stated RTS 28 execution policy.
None of those questions are answered well by a PDF. They are all answerable by the same data that generates the PDF. The design job is to present that data as a filterable, drill-downable surface that a PM can spend 90 seconds on during a Monday review.
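All five questions fall out of the same per-execution record. A minimal sketch of that record in Python; the field names and the size-bucket thresholds are illustrative assumptions, not a prescribed MiFID II schema:

```python
from dataclasses import dataclass

# Illustrative per-execution record (field names are assumptions,
# not a prescribed MiFID II schema).
@dataclass
class Execution:
    order_id: str
    venue: str
    asset_class: str
    side: int               # +1 buy, -1 sell
    arrival_mid: float      # midpoint at order arrival, in quote currency
    fill_price: float
    fill_size: float
    time_to_fill_ms: float
    explicit_fee_bps: float
    trn: str                # MiFID transaction reference number

def size_bucket(notional: float) -> str:
    """Hypothetical order-size buckets for the 'where did my flow go' cut."""
    if notional < 50_000:
        return "small"
    if notional < 500_000:
        return "medium"
    return "large"
```

Every cut described above (venue share, slippage, time-to-fill tail, cost, period-over-period drift) is a group-by or a percentile over a list of these records.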
The three-layer screen I keep coming back to
The best-ex surface I designed for Argos (a compliance analytics concept), and the variations of it I have shipped at broker desks, all use the same three-layer layout:
Layer 1 — Policy compliance strip (top, always visible)
A horizontal strip with the RTS 28 stated policy on one side and observed reality on the other. "Policy: 60% LSE, 25% Cboe, 10% Turquoise, 5% other." "Observed this quarter: 52% LSE, 31% Cboe, 12% Turquoise, 5% other." A red pill appears if drift exceeds a threshold the compliance team sets. This answers the fifth question ("Is this pattern new?") in 3 seconds and gives compliance something they can staple to the quarterly file.
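The computation behind that strip is trivial, which is part of the point. A sketch using the shares from the example above; the 5pp breach threshold is an assumption standing in for whatever the compliance team configures:

```python
def venue_drift(policy: dict[str, float], observed: dict[str, float]) -> dict[str, float]:
    """Observed venue share minus stated RTS 28 policy share, in percentage points."""
    venues = set(policy) | set(observed)
    return {v: observed.get(v, 0.0) - policy.get(v, 0.0) for v in venues}

policy   = {"LSE": 60.0, "Cboe": 25.0, "Turquoise": 10.0, "Other": 5.0}
observed = {"LSE": 52.0, "Cboe": 31.0, "Turquoise": 12.0, "Other": 5.0}

drift = venue_drift(policy, observed)
breach = any(abs(d) > 5.0 for d in drift.values())  # assumed 5pp red threshold
```

Here LSE drifts -8pp and Cboe +6pp, so the strip goes red and the quarterly file gets its note.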
Layer 2 — Venue attribution grid (middle, primary workspace)
A table of venues as rows; columns for volume share, arrival-price slippage (bps), EBBO capture %, mean time-to-fill, p95 time-to-fill, explicit cost (bps), implicit cost (bps), reject rate. Every cell is clickable and drills to the underlying trades. Row hover shows a 30-day sparkline of that venue's slippage against the firm's average. This is where the PM lives.
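The roll-up that feeds those rows is a plain group-by over the execution records. A sketch covering a subset of the columns (volume share, slippage, mean and p95 time-to-fill); the trade dicts and field names are illustrative, and the percentile is nearest-rank, which is good enough at row granularity:

```python
import math
from statistics import mean

def p95(xs: list[float]) -> float:
    """Nearest-rank 95th percentile; enough for a row-level sketch."""
    xs = sorted(xs)
    return xs[max(0, math.ceil(0.95 * len(xs)) - 1)]

def venue_rows(trades: list[dict]) -> dict[str, dict]:
    """Roll per-trade records up into one grid row per venue."""
    total = sum(t["size"] for t in trades) or 1.0
    by_venue: dict[str, list[dict]] = {}
    for t in trades:
        by_venue.setdefault(t["venue"], []).append(t)
    rows = {}
    for venue, ts in by_venue.items():
        ttf = [t["ttf_ms"] for t in ts]
        rows[venue] = {
            "volume_share_pct": 100.0 * sum(t["size"] for t in ts) / total,
            "slippage_bps": mean(t["slippage_bps"] for t in ts),
            "mean_ttf_ms": mean(ttf),
            "p95_ttf_ms": p95(ttf),
        }
    return rows

# Illustrative trades; real input would be the firm's execution records.
trades = [
    {"venue": "LSE",  "size": 100.0, "slippage_bps": 2.0, "ttf_ms": 10.0},
    {"venue": "LSE",  "size": 300.0, "slippage_bps": 4.0, "ttf_ms": 30.0},
    {"venue": "Cboe", "size": 100.0, "slippage_bps": 1.0, "ttf_ms": 20.0},
]
rows = venue_rows(trades)
```

The drill-down is then just the `by_venue` slice behind whichever cell was clicked, handed to the drawer unaggregated.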
Layer 3 — Trade-level audit trail (right drawer, on demand)
When the PM clicks into a cell, a right-side drawer opens with the individual executions: order ID, timestamp (microsecond), venue, arrival mid, fill price, fill size, venue latency, counterparty, MiFID transaction reference number (TRN). Copy-to-clipboard on the TRN because that is the field compliance will later need to cross-reference with the transaction reporting system (ARM). Export-to-CSV so the PM can hand the slice to the TCA (transaction cost analysis) team.
Design choices that matter more than they look
Time-to-fill as a distribution, not a mean. Showing the mean time-to-fill is actively misleading — a venue with a 12ms mean and a 180ms p99 is a venue with an information-leakage problem. I always show the full distribution as a small histogram or boxplot in the row.
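A sketch of why the mean hides this, with hypothetical fill-time samples: venue B actually beats venue A on the mean while carrying the leaky tail. The bucketing function is the kind of thing that feeds the row histogram; edges are illustrative.

```python
import math

def percentile(xs: list[float], q: float) -> float:
    """Nearest-rank percentile, q in (0, 1]."""
    xs = sorted(xs)
    return xs[max(0, math.ceil(q * len(xs)) - 1)]

def histogram(xs: list[float], edges: list[float]) -> list[int]:
    """Bucket counts for the row histogram; the final bucket is overflow (>= last edge)."""
    counts = [0] * (len(edges) + 1)
    for x in xs:
        i = 0
        while i < len(edges) and x >= edges[i]:
            i += 1
        counts[i] += 1
    return counts

# Hypothetical fill-time samples (ms). Venue B *wins* on the mean,
# but its p99 is where the information leakage hides.
venue_a = [12.0] * 100
venue_b = [8.0] * 98 + [180.0, 200.0]

mean_a, mean_b = sum(venue_a) / 100, sum(venue_b) / 100
p99_a, p99_b = percentile(venue_a, 0.99), percentile(venue_b, 0.99)
```

Ranking on the mean picks venue B; the p99 column and the overflow bucket of the histogram tell the real story.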
Slippage in basis points, not currency. A PM trades AAPL at 210 USD and VOD at 70 GBp in the same session. Currency-denominated slippage is noise. Bps is the only unit that compares cleanly.
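The normalisation is one line. A sketch using the AAPL and VOD prices from the paragraph above; the 5-cent and 5-pence slips are my illustrative assumptions, and they show the same nominal slip producing very different bps:

```python
def slippage_bps(fill_price: float, arrival_mid: float, side: int) -> float:
    """Signed slippage vs arrival mid, in basis points.
    side: +1 buy, -1 sell; positive means executed worse than arrival mid."""
    return side * (fill_price - arrival_mid) / arrival_mid * 10_000

# Same nominal slip of 0.05, very different cost (prices illustrative):
aapl = slippage_bps(210.05, 210.00, +1)   # buy AAPL 5 cents through the mid
vod  = slippage_bps(69.95, 70.00, -1)     # sell VOD 5 pence through the mid
```

AAPL comes out around 2.4 bps and VOD around 7.1 bps: three times the cost on the same currency-denominated slip, which is exactly why bps is the only comparable unit.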
Policy vs observed colour logic. Drift < 2pp = neutral. 2–5pp = amber. > 5pp = red, which auto-generates a compliance note prompt. The colour is a signal to the compliance officer, not to the PM; the PM does not need to be told their execution is drifting, they already know. The compliance team needs the escalation path visible.
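The thresholds above reduce to a tiny mapping; the function name and return strings are my illustration of it:

```python
def drift_flag(drift_pp: float) -> str:
    """Map absolute policy drift (percentage points) to the strip colour.
    Thresholds: < 2pp neutral, 2-5pp amber, > 5pp red."""
    d = abs(drift_pp)
    if d < 2.0:
        return "neutral"
    if d <= 5.0:
        return "amber"
    return "red"   # red is also what triggers the compliance-note prompt
```

Keeping this a pure function of drift, with the thresholds injected by compliance config rather than hard-coded, is what makes the escalation auditable.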
Export everything. Buy-side PMs live in spreadsheets. Every drill-down must be CSV-exportable with one click, headers preserved, so they can paste into their own TCA model without reformatting. If the UI is a dead end for data, the PM will stop using it within two weeks.
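"Headers preserved" is the whole requirement, and the stdlib covers it. A sketch of the one-click export, with illustrative field names matching the drawer columns:

```python
import csv
import io

def export_slice(rows: list[dict]) -> str:
    """Serialise a drill-down slice as CSV, headers preserved from the row keys."""
    if not rows:
        return ""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

# Illustrative drill-down slice behind one clicked cell.
slice_ = [
    {"order_id": "O-1", "venue": "LSE",  "slippage_bps": 2.4, "trn": "TRN0001"},
    {"order_id": "O-2", "venue": "Cboe", "slippage_bps": 1.1, "trn": "TRN0002"},
]
csv_text = export_slice(slice_)
```

The output pastes straight into a TCA spreadsheet with the column names intact, which is the difference between the surface being a source and a dead end.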
What I push back on
Compliance teams often ask for the report surface to double as the client-facing RTS 28 publication. It should not. The public RTS 28 is a static annual artefact in a prescribed ESMA format; it does not need a UI. The internal best-ex surface is a living tool for decision-making. Conflating the two produces a page that is too complex for regulators and too dumbed-down for PMs. Ship two things.
The regulatory driver for best-ex is not going away — ESMA's 2024 MiFID II review kept Article 27 largely intact and tightened the monitoring obligation. The firms that treat it as a PDF-generation problem will keep producing unreadable PDFs. The ones that treat it as a product-design problem will give their PMs a surface that actually moves execution quality. That is the whole argument.