A few months ago, we started seeing more and more posts claiming that AI tools like Cursor can spin up full apps with almost no human involvement. So we decided to test it ourselves. Was it really that easy to bootstrap a project without a developer, or was the hype too good to be true? We gave it a shot, and after some hands-on experience, we decided it was time to write about it.
We built a small project to see what AI could handle. The result? AI coding tools are genuinely impressive. You can spin up a working MVP in just a few hours, skip boilerplate, and get something on screen faster than ever. For early-stage ideas or quick experiments, it's a game changer.
While some developers fear AI might replace them, the opposite feels more true. AI helps clients clarify their ideas faster, so development can start with more direction.
That said, a lot of those "I built this whole app with AI" posts you see on LinkedIn and Reddit usually skip the part where a human had to step in and fix things. Later in this post, I'll share a specific moment where AI got stuck and needed developer intervention.
Where AI Shines for MVPs
- Speed: Build and iterate in hours, not days.
- Focus on logic, not boilerplate: Less typing, more thinking.
- Great for non-core features: Let AI scaffold login, dashboards, settings, etc.
- Inspiration and momentum: AI nudges can help unblock your thinking.
- Higher developer output: Multiply team productivity without increasing headcount.
Where It Breaks Down
- Circular suggestions: AI repeats itself and fails to move forward.
- Bugs and hallucinations: Code looks right but doesn't run or make sense.
- Overengineering: AI "solves" problems that didn't exist.
- Unoptimized code: AI often writes inefficient, redundant, or non-scalable logic.
Case Study: Age of Empires Community Leaderboard
We built a community leaderboard for Balkan Age of Empires players using .NET and MSSQL. The app automatically fetches recent matches, tracks player performance, calculates custom ELO ratings, and displays leaderboards and match history. Since we mostly play 4v4 team games with only community members and our skill levels vary, a key feature is team balancing. Each player starts with a rating based on their current 1v1 ELO, which the app adjusts over time using results from community games. This lets us quickly select 8 players and automatically generate balanced teams with the smallest possible ELO difference, leading to fairer, more competitive matches.
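The team-balancing idea above can be sketched in a few lines. This is a hypothetical, simplified version (the names `TeamBalancer` and `Balance` are illustrative, not the app's actual code): enumerate every way to split 8 ratings into two teams of 4 and keep the split with the smallest total ELO difference.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: pick the 4-vs-4 split of 8 player ratings that
// minimizes the difference between the two teams' total ELO.
public static class TeamBalancer
{
    public static (List<int> TeamA, List<int> TeamB) Balance(IReadOnlyList<int> ratings)
    {
        if (ratings.Count != 8)
            throw new ArgumentException("Expected exactly 8 players.", nameof(ratings));

        var total = ratings.Sum();
        var bestDiff = int.MaxValue;
        var bestTeam = new List<int>();

        // Enumerate all C(8,4) = 70 subsets via bitmasks with exactly 4 bits set.
        for (var mask = 0; mask < (1 << 8); mask++)
        {
            if (CountBits(mask) != 4) continue;

            var sumA = 0;
            for (var i = 0; i < 8; i++)
                if ((mask & (1 << i)) != 0) sumA += ratings[i];

            var diff = Math.Abs(total - 2 * sumA); // |sumA - sumB|
            if (diff < bestDiff)
            {
                bestDiff = diff;
                bestTeam = Enumerable.Range(0, 8)
                    .Where(i => (mask & (1 << i)) != 0)
                    .ToList();
            }
        }

        var teamA = bestTeam.Select(i => ratings[i]).ToList();
        var teamB = Enumerable.Range(0, 8).Except(bestTeam)
            .Select(i => ratings[i]).ToList();
        return (teamA, teamB);
    }

    private static int CountBits(int v)
    {
        var c = 0;
        while (v != 0) { c += v & 1; v >>= 1; }
        return c;
    }
}
```

With only 70 possible splits, brute force is more than fast enough here; a smarter heuristic only becomes necessary for larger lobbies.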
Key Features
- Player list with sorting and individual match histories
- Full match history view for the community
- Automated match syncing and rating updates
- Admin panel for managing player data and history
- Support for multiple AoE.net profile IDs per player
Initial prompt
I want to build an internal AOE2DE leaderboard for our friends from Discord. Each player has a public name and one or more AOE2 profile IDs (they may play from multiple accounts). We want to treat all profile IDs under the same player. Each player starts with a custom ELO (e.g. 1100, 1000, or 900).
The app should:
- Fetch recent matches using the AOE2 API
- Only process matches with exactly 8 players, all of whom are in our system
- Calculate updated ELO for each player using a basic ELO algorithm
- Store matches, players, and results in a database
- Check for new matches every 5-10 minutes
The frontend should show:
- A leaderboard sorted by ELO
- Match history with ELO gain/loss per player
Tech stack:
- .NET (C#) + MSSQL
- Frontend (whatever scaffolds quickly)
- Claude Sonnet 3.7 via Cursor for AI-assisted development
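For reference, the "basic ELO algorithm" the prompt asks for can be sketched as below. This is a standard logistic ELO update, not necessarily the exact formula the app ended up with; the class and method names are illustrative.

```csharp
using System;

// Hedged sketch of a standard ELO update between two sides; for team
// games, the ratings passed in would typically be team averages.
public static class Elo
{
    public static (double Winner, double Loser) Update(
        double winnerRating, double loserRating, double k = 32)
    {
        // Expected score of the winner under the logistic ELO model.
        var expectedWinner = 1.0 / (1.0 + Math.Pow(10, (loserRating - winnerRating) / 400.0));

        // Winner gains k * (actual - expected); loser loses the same amount.
        var delta = k * (1.0 - expectedWinner);
        return (winnerRating + delta, loserRating - delta);
    }
}
```

For two equally rated sides the expected score is 0.5, so with k = 32 the winner gains 16 points and the loser drops 16.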
Cursor handled the scaffolding well. Bootstrap gave us a quick, responsive layout, and the UI was clean and functional from the start. You can see the result of less than 30 minutes of work here: aoe-balkan-community.eu
Where Things Went Off the Rails
A few issues popped up:
- One page returned a 404 error due to a bad route.
- AI started mapping all games, ignoring the filters I had given it.
At that point, I tried adjusting the prompt, but Cursor kept looping. Without programming experience, debugging this would've taken hours, but with developer knowledge, it took just a few minutes to spot the issue and fix the logic.
Still, these were signs of deeper gaps:
- I asked it to seed data and it added the seed into a migration. Later, I asked it to seed more data, so it created a service, but that service only ran if the table was empty. Since the migration had already inserted a record, the condition was never met, and the service didn't execute. The logic was technically correct, but contextually flawed. As a result, the additional data would never be seeded in any environment.
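The seeding bug can be reconstructed roughly as follows. This is a hypothetical, simplified version using an in-memory list in place of the database table (the names `PlayerStore` and `Seeding` are illustrative): the migration step inserts one record first, so the service's "seed only if empty" guard is never satisfied.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical reconstruction of the seeding bug, with a list standing
// in for the Players table.
public class PlayerStore
{
    public List<string> Players { get; } = new();
}

public static class Seeding
{
    // Step 1: the migration seeds an initial record (in the real app,
    // something like migrationBuilder.InsertData in an EF Core migration).
    public static void RunMigration(PlayerStore store) =>
        store.Players.Add("InitialPlayer");

    // Step 2: the AI-generated service only seeds when the table is empty.
    public static void SeedAdditionalData(PlayerStore store)
    {
        if (store.Players.Any())
            return; // bug: the migration's row makes this always true

        store.Players.AddRange(new[] { "PlayerA", "PlayerB" });
    }
}
```

Each piece is correct in isolation; only when you know both exist does it become obvious that the second seed can never run.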
- I asked it to calculate ELO per team. As a developer, I knew this required expanding the data model to store the team's rating at the time each match was played. But the AI didn't create the necessary migration or even mention that a schema change was needed. It silently failed to persist the new structure, which meant the rating data was never stored correctly.
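The schema change the AI omitted amounts to something like the entity below. This is a hypothetical sketch (the type and property names are illustrative): storing the team's rating at match time requires a new column on the per-player match result, which in EF Core also means generating and applying a migration.

```csharp
// Hypothetical entity sketch: to calculate ELO per team, each stored
// result needs a snapshot of the team's rating when the match was played.
public class MatchPlayerResult
{
    public long Id { get; set; }
    public long MatchId { get; set; }
    public long PlayerId { get; set; }
    public int EloChange { get; set; }

    // New column the AI never added; persisting it requires an EF Core
    // migration (e.g. `dotnet ef migrations add AddTeamRatingAtMatch`).
    public int TeamRatingAtMatch { get; set; }
}
```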
The generated code also had performance issues. For example, inside a foreach loop it repeatedly hit the database: the classic N+1 query problem. Each iteration triggered a new query instead of batching, causing severe overhead. As an MVP it worked, but this approach wouldn't scale in a production environment.
Here's an example of that unoptimized logic:
```csharp
// First problem
var profiles = await _context.PlayerProfiles.ToListAsync();
var processedMatchIds = new HashSet<long>();
var profileIds = profiles.Select(p => p.AoeProfileId).ToList();

try
{
    if (apiResponse?.MatchHistoryStats == null)
        return;

    var matchesWithFullLobby = apiResponse.MatchHistoryStats
        .Where(x => x.MatchHistoryReportResults.Count == 8)
        .ToList();

    foreach (var match in matchesWithFullLobby)
    {
        if (processedMatchIds.Contains(match.Id))
            continue;

        // Second problem
        var doAllPlayersExist = match.MatchHistoryReportResults.All(p =>
            _context.PlayerProfiles.Any(pp => pp.AoeProfileId == p.ProfileId));
        ...
```
Loading All Players into Memory Upfront: In the first problem, the entire set of players is loaded from the database into memory. This includes all columns for each player, which could be 20-30 fields, across potentially thousands of rows.
- Why it's inefficient: This operation is memory-intensive and unnecessary if only a subset of the data (like player IDs) is used later.
- Impact: Increased memory usage, slower performance, and potential scalability issues.
Redundant Per-Player Database Queries in Loop: In the last line, there's a database query executed inside nested loops (once for each player in every match).
- Why it's ironic: The full player data was already loaded earlier, but instead of using it, the code redundantly queries the same information again.
- Impact: Results in a large number of database hits (potentially thousands), which significantly degrades performance and increases latency.
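A sketch of how a developer might fix both issues is below. This is a simplified, self-contained version (the record and method names are illustrative, and plain collections stand in for the database): fetch the known profile IDs once, in the real app via a single projected query such as `_context.PlayerProfiles.Select(p => p.AoeProfileId)`, then do all membership checks against an in-memory `HashSet` instead of re-querying per player.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Simplified stand-ins for the API response types.
public record MatchResult(long ProfileId);
public record Match(long Id, List<MatchResult> Results);

public static class MatchFilter
{
    public static List<Match> FullLobbiesWithKnownPlayers(
        IEnumerable<Match> matches, IEnumerable<long> knownProfileIds)
    {
        // One lookup structure built once; every later check is O(1)
        // and hits memory, not the database.
        var known = new HashSet<long>(knownProfileIds);

        return matches
            .Where(m => m.Results.Count == 8)
            .Where(m => m.Results.All(r => known.Contains(r.ProfileId)))
            .ToList();
    }
}
```

The key change is structural: one query up front producing only the IDs we need, and zero queries inside the loop.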
Conclusion
For MVPs, this new "code vibing" trend is powerful. You really can build something valuable fast. But AI alone isn't enough - it still needs a developer behind the wheel to keep it from veering off course.
If this MVP leads to funding or real users, the quick wins need to be revisited. Code written with speed in mind, especially when AI is involved, often lacks structure, tests, and long-term maintainability. That's fine early on, but it quickly becomes tech debt. To go from prototype to production, you'll need to refactor, standardize, and own the architecture with intention.