Mystery shopping done right: how Parking PI scores a site
Encompass Parking
Controllership for Parking Revenue · April 10, 2026 · 7 min read
Mystery shopping in parking has a credibility problem, and the problem is borrowed from retail. Most mystery shop programs in service industries were designed to evaluate the customer-facing experience of a store. The shopper rates the greeting, the product display, the checkout courtesy. The output is a percentage score that gets reviewed at a quarterly all-hands and then filed.
That is not how parking fails.
Parking fails at the control points, not the courtesy points. The cashier who completes the transaction without issuing a receipt is not a customer service issue; it is a revenue control issue. The attendant who is absent from the booth during a peak window is not an etiquette issue; it is a coverage issue. The pay station that prints a confirmation with a total that does not match the posted maximum rate is not a UX issue; it is an audit issue.
A mystery shop program built for parking has to be built around the way parking actually breaks. That is what Parking PI is.
What the program evaluates
Parking PI scores a site across four control planes. Each plane is weighted by its operational impact on the asset, not by the order it appears on the shopper checklist.
Transaction integrity. Did the rate charged match the posted rate? Was a receipt issued without prompting? Did the ticket print legibly? On a credit card payment, did the receipt include the last four of the card and the merchant identifier? Did the system close out the transaction in a way that reconciles to the daily report? Transaction integrity is weighted highest because it is where revenue is at risk.
Staff readiness. Was the attendant present at the booth during the shopped window? In uniform with name tag visible? Off the personal phone? Not soliciting tips? Did the cashier complete the transaction in a way that matched documented procedure, including dual-confirmation steps where the operator's procedure manual requires them?
Facility condition. Rate signage accurate, legible, illuminated. Equipment functioning across the path of travel: gates operating, pay stations responsive, intercoms tested live by the shopper. Cleanliness, lighting, and ADA path of travel intact. Safety observations that would trigger a separate report flagged.
Shuttle and valet. Conditionally weighted, applied only at sites where the service exists. Driver readiness, vehicle condition, claim ticket discipline, key handling chain. Outbound and inbound shuttle runs both shopped where applicable.
The weighting matters. A site that scores well on courtesy and poorly on transaction integrity is failing in the way that costs the asset money. A program that does not weight by operational impact will produce a higher overall score for that site than the asset performance justifies. That is the structural mistake retail-adapted programs make.
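To make the weighting concrete, here is a minimal sketch of how an impact-weighted overall score could be computed. The plane names, weights, and renormalization rule are illustrative assumptions, not Parking PI's actual configuration.

```python
# Minimal sketch of impact-weighted facility scoring. The weights below
# are assumptions for illustration, not Parking PI's configuration.

PLANE_WEIGHTS = {
    "transaction_integrity": 0.40,  # weighted highest: revenue at risk
    "staff_readiness": 0.25,
    "facility_condition": 0.20,
    "shuttle_valet": 0.15,          # conditional: only where the service exists
}

def facility_score(plane_scores: dict[str, float]) -> float:
    """Weighted overall score on a 0-100 scale.

    Planes absent at a site (no shuttle or valet, say) are dropped and
    the remaining weights renormalized, so sites stay comparable.
    """
    applicable = {p: w for p, w in PLANE_WEIGHTS.items() if p in plane_scores}
    total_weight = sum(applicable.values())
    return sum(plane_scores[p] * w for p, w in applicable.items()) / total_weight

# A site strong on courtesy-adjacent planes but weak on transaction
# integrity scores lower than an unweighted average would suggest:
site = {"transaction_integrity": 60.0, "staff_readiness": 95.0,
        "facility_condition": 90.0}
print(f"{facility_score(site):.1f}")  # 77.4 weighted, versus 81.7 unweighted
```

The renormalization step is what keeps the conditional shuttle-and-valet plane from penalizing sites where the service does not exist.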
What the report looks like
Section scores render as scored circles with directional arrows showing the trend across repeat visits. A weighted overall facility score makes sites comparable across the portfolio. Priority findings are ranked by revenue impact, not by their chronological order on the shop checklist.
For multi-property portfolios, an aggregate dashboard surfaces site-by-site comparison and trend tracking. A property that consistently underperforms its peers in the same portfolio is rarely a one-shop story. It is a coverage or training story, and the dashboard makes that visible by design.
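A rough sketch of how the trend arrows and site-by-site comparison could be derived. The tolerance band and the shape of the score history are assumptions for illustration, not the product's actual logic.

```python
# Illustrative trend arrows and portfolio comparison. The 2-point
# tolerance band and the data shapes are assumptions for the sketch.

def trend_arrow(scores: list[float], tolerance: float = 2.0) -> str:
    """Directional arrow for a score across repeat visits, newest last."""
    if len(scores) < 2:
        return "·"                   # a single shop has no trend
    delta = scores[-1] - scores[-2]
    if delta > tolerance:
        return "↑"
    if delta < -tolerance:
        return "↓"
    return "→"

# Portfolio view: sort sites by latest score so the consistent
# under-performers surface at the top of the review.
portfolio = {"Garage A": [78.0, 83.5], "Garage B": [71.0, 64.2]}
for site, scores in sorted(portfolio.items(), key=lambda kv: kv[1][-1]):
    print(f"{site}: {scores[-1]:.1f} {trend_arrow(scores)}")
# Garage B: 64.2 ↓
# Garage A: 83.5 ↑
```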
Each finding maps to one of three remediation outcomes: a configuration change to the system, a process update to the operator's procedure manual, or a training item for site staff. We do not produce findings that map to "the operator should care more." Every finding has a fix and an owner.
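One way to encode that constraint is to make the remediation outcome and the owner required fields, so a finding without a fix and an owner cannot be represented at all. The type names and fields below are invented for the sketch, not Parking PI's schema.

```python
from dataclasses import dataclass
from enum import Enum

class Remediation(Enum):
    """The three remediation outcomes; every finding maps to exactly one."""
    CONFIG_CHANGE = "configuration change to the system"
    PROCESS_UPDATE = "process update to the operator's procedure manual"
    TRAINING_ITEM = "training item for site staff"

@dataclass
class Finding:
    description: str
    remediation: Remediation  # the fix...
    owner: str                # ...and its owner; both fields are required

finding = Finding(
    description="Receipts not issued on cash transactions without prompting",
    remediation=Remediation.TRAINING_ITEM,
    owner="Site manager",
)
```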
What we have learned from running the program
Roughly 3,000 mystery shops have been completed across the JDE history that the Parking PI program inherited. Findings consistently align with what comes out of the financial audit in 85 percent or more of cases. The mystery shop catches additional process and staff-readiness gaps that financial data cannot see, because some failures only show up at the gate or the booth and never make it into the system of record.
The first shop at any new site finds something. That has been true at essentially every site we have ever shopped, at every operator, at every property type. The follow-up shop tells you whether the operator fixed it. That second data point is where the program starts paying for itself.
The most common findings, summarized across the program history:
Receipts not issued on cash transactions. The easiest control to test, the most-failed control in the program. Customers who do not request a receipt do not get one. The operator may or may not be reconciling the missing-receipt cash to the system later; the absence of a receipt at the moment of transaction is itself the finding.
Rate signage inconsistent with the actual posted maximum. Often by a few dollars in either direction, sometimes because rate cards were updated in the system but not on the physical signage, sometimes the reverse. Either way, the customer-facing posted rate does not match what the system is charging, which is both a compliance issue and a customer dispute generator.
Attendants on personal phones during peak windows, creating long booth lines and distracted transactions. This is a procedure failure, not a character failure; operators whose procedure manuals require phones be stowed during shifts have substantially fewer findings here.
Pay stations that accept payment but do not print confirmation. This is the common signature of a printer low on paper or a print head that needs service. The transaction closed; the customer has no proof. Reconciliation still works for the operator, but the customer has no recourse if the charge is disputed.
Shuttle drivers running off-schedule, particularly the first and last runs. The shuttle program deteriorates at the schedule boundaries first. A program that holds in the middle of the day but slips on the 5am and 11pm runs is the typical pattern.
How the program is misused
The most common misuse of mystery shop data is treating it as a gotcha. A score gets surfaced, the operator gets confronted at a quarterly business review, defensive responses follow, the program loses credibility, and the next quarter's shops get scheduled later or quietly canceled.
The right use treats the data as a continuous-improvement signal. A single shop is a snapshot. A quarterly cadence becomes an operating discipline. The site manager learns what the program is testing for and starts coaching staff against the same standards. Findings that recur across two consecutive shops escalate from the scorecard into the close pack as findings with quantified financial impact. Findings that disappear across two consecutive shops get logged as remediated.
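The two-consecutive-shops rule is mechanical enough to sketch in code. The status names and the shape of the shop history are assumptions for illustration, not the program's actual schema.

```python
# Illustrative two-shop escalation rule. True means the finding was
# observed on that shop, False means it was not; newest shop last.

def classify_finding(history: list[bool]) -> str:
    if len(history) < 2:
        return "snapshot"      # a single shop is only a snapshot
    last_two = history[-2:]
    if all(last_two):
        # Recurred across two consecutive shops: escalates from the
        # scorecard into the close pack with quantified financial impact.
        return "escalate"
    if not any(last_two):
        return "remediated"    # absent on two consecutive shops
    return "monitor"           # mixed signal: stays on the scorecard

print(classify_finding([True, True]))          # escalate
print(classify_finding([True, False, False]))  # remediated
print(classify_finding([True, False]))         # monitor
```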
For multi-site portfolios, the program also functions as benchmarking. The properties that consistently top the portfolio scoreboard are the ones whose practices get documented and propagated. The properties that consistently bottom the scoreboard get the operational attention they need. Without the scoreboard, that information is invisible.
Closing
A mystery shop program that does not change behavior is decoration. It generates a quarterly report that gets read once and filed.
Parking PI was built to change behavior, by mapping every observable failure to a fix and an owner, and by weighting the scoring by what actually matters to the asset. That is the difference between rating a site and improving it. The program is a control loop: shop, score, surface, fix, re-shop. Operators who engage with the loop get measurably better. Operators who resist it tell you something useful about themselves, which is also a finding.
Either outcome moves the asset forward. That is what the program is for.