
Prototyping iterations and fixes with Lovable

Getting 80% of the way there, then hitting the wall

I wanted a simple tool for creating profile pictures with vibrant colored backgrounds. You know, the kind where you upload a selfie and the app replaces the background with something more visually interesting. It seemed straightforward enough — upload, remove background, add color, download.

So I decided to build it with Lovable, one of the most popular vibe coding tools out there right now. It promises that anyone can create web apps, dashboards, landing pages, whatever you can imagine. They even have kids building their own games with it.

What I learned was that getting to a working prototype was surprisingly easy. But getting it production-ready? That’s where things got complicated.

Writing the initial prompt

As with my past prototypes, I started with Claude to draft the first prompt. I wanted to be clear about what I was trying to achieve and what core functionality mattered for an MVP. The result was a detailed prompt describing a profile image generator web app with image upload, cropping, background color selection, and AI-powered background removal.

I fed that into Lovable and watched it generate the first version. The right panel showed a pretty bare interface for a profile picture generator, but I wasn’t looking to optimize design yet. I wanted to confirm the functionality worked first.

Watching it fail (repeatedly)

When I tested that first version, the app didn’t work. I’d upload a photo and watch it process indefinitely, stuck in an endless loop. So I went through multiple iterations with Lovable to try to fix it.

This is where Lovable’s chat feature became useful. Instead of immediately asking it to implement changes, I could use “plan mode” to investigate. I’d describe the issue, and Lovable would do its own debugging, then recommend next steps based on what it found.

After a few days of trying, I finally got to a version where the functionality started working. That’s when I shifted focus to design.

Making it look playful

Once the core features worked, I asked Lovable for suggestions to make the web app look fun and playful. The current version felt too bare for a tool meant to generate images with vibrant backgrounds.

Lovable came back with design analysis and suggestions. Some felt too extreme for what I had in mind, so it gave me phased options. I approved phases one and two, and it implemented them. The result was “Profile Pic Magic” — a much more personality-filled version compared to that initial bare-bones interface.

The issue I couldn’t solve

The functionality looked great for square profile pictures. Upload a square photo, select a background color (purple, blue, whatever), click generate, and it worked as intended. The AI removed the background and replaced it with the vibrant color I selected.
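Conceptually, the generate step is a two-stage pipeline: the model returns a cutout with a transparent background, and the app composites that cutout over the selected color. Here’s a minimal sketch of the compositing math in Python — the pixel-tuple representation and function name are mine for illustration, not the app’s actual code:

```python
# Composite an RGBA "cutout" (alpha = 0 where the background was removed)
# over a solid, opaque background color. Pixels are (r, g, b, a) tuples
# with values 0-255. Illustrative sketch only, not the app's real code.

def composite_over_color(pixels, bg_color):
    """Alpha-blend each pixel over an opaque background color."""
    r_bg, g_bg, b_bg = bg_color
    out = []
    for r, g, b, a in pixels:
        alpha = a / 255
        out.append((
            round(r * alpha + r_bg * (1 - alpha)),
            round(g * alpha + g_bg * (1 - alpha)),
            round(b * alpha + b_bg * (1 - alpha)),
            255,  # result is fully opaque
        ))
    return out

# A fully transparent pixel becomes pure background color;
# a fully opaque pixel keeps its original color.
purple = (128, 0, 200)
result = composite_over_color([(0, 0, 0, 0), (255, 255, 255, 255)], purple)
```

Real implementations do this per-pixel blend on canvas or image buffers rather than Python lists, but the math is the same.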

But when I uploaded a portrait-mode photo, the orientation changed in the final output. The background was correctly replaced, but the image itself came back rotated. I needed it to preserve the original orientation.

I tried multiple iterations to fix this. Each time, Lovable wasn’t actually modifying the web app’s code. Instead, it was modifying the prompt being sent to the Google Nano Banana model handling background removal. It kept making the prompt more explicit about preserving orientation, but the issue persisted.

This is where I got stuck. It’s a prompt-within-a-prompt situation: the web app is modifying instructions for an external AI model, and I haven’t been able to get past it.
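For what it’s worth, one common cause of unexpected rotation in uploaded portrait photos is the EXIF orientation tag: phones often store the pixels sideways and record a tag telling viewers how to rotate them on display. I can’t say whether that’s what was happening here, but normalizing orientation in the app’s own code, before the image ever reaches the model, would sidestep the prompt-within-a-prompt problem entirely. A sketch of the tag-to-rotation mapping (names are mine; real apps would use a library like Pillow’s `ImageOps.exif_transpose` rather than hand-rolling this):

```python
# Degrees of clockwise rotation needed to display stored pixels upright,
# per the EXIF Orientation tag (mirrored variants 2, 4, 5, 7 omitted for
# brevity). Applying the rotation once on upload, then resetting the tag
# to 1, "bakes in" the correct orientation before any model call.
ROTATION_FOR_ORIENTATION = {
    1: 0,     # already upright
    3: 180,   # upside down
    6: 90,    # typical portrait shot: rotate 90 degrees clockwise
    8: 270,   # rotate 270 degrees clockwise (90 counter-clockwise)
}

def upright_rotation(orientation):
    """Return the clockwise rotation that makes the image upright."""
    return ROTATION_FOR_ORIENTATION.get(orientation, 0)
```

Whether Lovable could be steered to make this kind of code-level fix, instead of repeatedly rewording the model prompt, is exactly the sort of thing I couldn’t get it to do.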

What this revealed

This experience reminded me of something I’ve seen before: it’s very easy to create prototypes with vibe coding tools like Lovable, Bolt, or Replit. The path to production readiness, though, is not as easy.

When you hit issues that involve multiple systems working together (your web app code, external AI models, image processing), debugging becomes exponentially harder. The AI can iterate quickly on what it controls, but when the problem lives in the handoff between systems, you start running into limitations.

It takes patience to work through these issues. And if you’re building something you expect a lot of users to rely on, it’s still important to have expert guidance from real designers and engineers rather than being fully reliant on AI right now.

Where I landed

I have a working prototype that handles one use case well (square photos) and fails at another (portrait photos). It’s functional enough to demonstrate the concept, but not polished enough to ship publicly.

That’s probably the most honest assessment I can give. Lovable got me 80% of the way there in a fraction of the time it would have taken me to code this from scratch. But that last 20% (handling edge cases, debugging multi-system interactions, ensuring reliability) is where the real work lives.

I’m not giving up on it. But I am recognizing that the iterative prototyping process has a ceiling, and crossing it requires different skills than the ones that got me this far.

Have you used Lovable or similar vibe coding tools? Where did you hit your ceiling? Was it design, functionality, or something in between?

And more broadly, how are you thinking about the gap between “working prototype” and “production-ready product” when using AI-assisted development tools? I’m still figuring out where to invest time versus when to bring in expert help.
