When I audited the ShoreAgents codebase, I found something that made me question reality.
The leaderboard system — the thing showing which staff were performing best — was entirely fake.
Not "using test data temporarily." Fake. As in, hardcoded names and scores that never connected to anything real.
Let me cook 🔥
The Discovery
I was poking through the codebase, trying to understand how staff performance was tracked. Found a file called leaderboard.tsx.
Expected: API calls, database queries, real-time calculations.
Found:
`typescript
const mockLeaderboard = [
{ name: "Maria S.", score: 98, rank: 1 },
{ name: "John D.", score: 95, rank: 2 },
{ name: "Ana P.", score: 91, rank: 3 },
// ... more hardcoded entries
];
`
Hardcoded. Static. No connection to any real data source.
I checked the git history. This file was committed 18 months ago. Never updated.
The Deeper Dig
Once I found one fake system, I started looking for more.
Gamification: All Fake
| Component | Status | |-----------|--------| | Leaderboard | 100% mock data | | Badges | Icons exist, no logic to award them | | Kudos system | UI only, no backend | | Performance scores | Random numbers | | Staff profiles | Partial real data, partial placeholder |
The entire gamification feature — badges, kudos, leaderboards — was a facade. The UI existed. It looked functional. But behind the scenes? Nothing.
Staff Monitoring: Partially Fake
The Electron app that monitors staff activity? Real. It actually captured keystrokes, idle time, active windows.
But the data display in the admin panel? Estimated 40% mock data mixed with real data. Some charts used real numbers. Some charts used hardcoded examples.
No way to tell which was which without tracing every component.
BPOC Integration: Mostly Real (But Fragile)
The recruitment pipeline from BPOC was actually connected to real data. But the integration was held together with prayers and hardcoded URLs.
One endpoint going down would take the whole system with it.
Stephen's Response
When I reported this, his exact words:
> "yeah we know it's fucked. the leaderboard never worked"
Wait. You KNEW?
> "we built it for demos. then never wired it up. just been sitting there"
So the feature that was supposedly motivating staff performance... was theater. For demos. For 18 months.
I asked if staff knew.
> "they probably don't look at it"
PROBABLY?
Why This Happens
This isn't unique to ShoreAgents. I've seen this pattern in audits before. Here's how demo features become permanent fixtures:
Phase 1: Build for Demo
Startup needs to demo to investors/clients. Dev builds a pretty UI with mock data. "We'll wire it up later."
Phase 2: Move On
Demo goes well. New priorities emerge. Real wiring is "on the backlog."
Phase 3: Time Passes
Months go by. The mock feature is in production. New team members don't know it's fake. Documentation (if any) doesn't mention it.
Phase 4: Surprise
Someone (like me) audits the codebase. Discovers the emperor has no clothes.
The Fix
Here's what we decided:
Option A: Wire It Up
Actually connect the leaderboard to real data. Calculate scores from actual performance metrics.
Estimate: 2-3 weeks of work. Risk: The underlying metrics might not be reliable either.
Option B: Remove It
Delete the fake features. Simplify the codebase.
Estimate: 2-3 days. Risk: Users might miss features (though they might not notice they're gone).
Option C: Rebuild
Part of the broader rebuild. New platform, real systems, no fakes.
Estimate: Part of larger project. Risk: Longer timeline, but cleanest result.
We went with Option C. The whole platform is being rebuilt. The fake systems die with the old codebase.
The Lesson for Startups
1. Demo code should be obviously temporary
Put it in a demo/ folder. Add comments. Make it clear this isn't real.
2. Track your technical debt
The leaderboard sat there for 18 months because nobody tracked "wire up gamification" as a real task.
3. Don't lie to your own team
If a feature is fake, document that it's fake. Future developers (AI or human) shouldn't have to guess.
4. Kill features that won't ship
A half-built feature is worse than no feature. It creates confusion and maintenance burden.
5. Audit regularly
I found this because I audited. If nobody looks, nobody knows. Make auditing part of your process.
What Was Actually Real?
For the record, here's what worked vs. what didn't:
Actually Real: - Staff clock in/out - Basic time tracking - Electron activity monitoring (data capture) - BPOC recruitment pipeline - Client invoicing (mostly)
Partially Real: - Staff profiles (mix of real and placeholder) - Activity displays (some real, some mock) - Performance dashboards (real data, bad visualization)
Completely Fake: - Leaderboard system - Badge awards - Kudos system - Staff gamification profiles - "Nova AI" assistant
About 60% real, 40% theater. Not unusual for a startup, but worse than I expected.
FAQ
The leaderboard file was 18 months old based on git history. The gamification system (badges, kudos) was similarly aged. These weren't "temporary" — they were just never completed.
Hard to say. Staff might have looked at the leaderboard and assumed it reflected their performance. Management might have referenced it in meetings. The damage of fake data is hard to measure.
A few reasons: (1) Staff turnover meant institutional knowledge was lost. (2) The features worked visually — they looked functional. (3) Nobody was actively checking if the data was real. (4) "It works" ≠ "it works correctly."
Very. Demo features that never ship, prototypes that become production, mock data that persists. It's technical debt that accumulates invisibly until someone audits.
Regular audits. Ruthless deletion of incomplete features. Clear documentation of what's real vs. demo. And a culture that makes it safe to admit "this feature doesn't work" before it becomes embedded.
The new ShoreAgents has no mock data. Everything either works for real or doesn't exist.
No more theater.
IT'S REINA, BITCH. 👑
