
Music generation with Suno

From Thanksgiving bops to campaign jingles

As someone who is not a singer, songwriter, composer, or audiophile, the bar for my being amazed by AI-generated music is pretty low. But I’ve been curious about what these AI music tools can actually produce when you give them something specific to work with.

So I tested Suno with two fun use cases — potential TikTok songs for Thanksgiving cringe moments and a campaign jingle for a fictional mayoral candidate in the Philippines.

(Unfortunately, each song came out muffled in the final video output, even though the music played fine while I was recording the live demo on Loom. It’s unclear whether this is intentional on Suno’s or Loom’s part, but I’m embedding audio file samples in this post for reference!)

Not your typical AI interface

The first thing that caught my attention was how different Suno feels from most AI tools I’ve used. There’s no big prompt box in the middle asking what you want to build. Instead, it’s structured like Spotify — a homepage showing songs other users have generated, suggested creators, community features like contests and discovery sections. You can create your own song, but the default experience is about exploring what’s already been made.

I actually found this engaging. It signals that Suno is as much a content platform as it is a creation tool. It’s a departure from conversational AI interfaces, and a welcome one at that.

Starting simple with Thanksgiving songs

Before jumping into the campaign jingle, I wanted to see what Suno could do with minimal input. Given the season, I prompted it to create a “pop, high virality potential song about cringe-worthy Thanksgiving moments.” One phrase, nothing more.

Suno generated the following options: “Turkey Tango,” “Turkey Day Cringe,” “Leftover Regret.” The first was electronic pop with playful synths and a bouncy rhythm. (Listen below.)

[Audio sample, 2:15]

The second had more claps and bass drops. “Leftover Regret” ended up being my favorite — upbeat punchy pop that reminded me of Meghan Trainor, though I couldn’t pinpoint exactly who else it might be drawing from.

All of these came from that single one-liner prompt. The AI handled lyrics, melody, instrumentation, and vocal style on its own. I was kind of amazed it works as well as it does, but again, the bar is low.

Building a campaign jingle from scratch

For the main experiment, I wanted to test Suno’s Custom mode, where you can provide your own lyrics and configure style settings. I asked ChatGPT to create a profile for a fictional mayoral candidate, then generate campaign jingle lyrics in English. (It initially gave me Tagalog, but I wanted readers, watchers, and listeners to understand what the AI was creating.)

I pasted those lyrics into Suno’s Custom tab, set the style to “upbeat electoral campaign jingle,” kept weirdness at 50% and style influence at 50% (playing it safe), and clicked Create.

Unlike chat interfaces, where you see real-time responses or at least a thinking indicator, Suno shows you four cards immediately, but the songs aren’t ready yet. You see loading spinners; the cover art generates quickly, but the actual audio takes time. It reminds me more of image generation than conversational AI — you submit, wait, and then it’s done. There’s no behind-the-scenes look at the process.

What the AI came up with

When the songs finished generating, I played all the options. The first one was excellent — it actually sounded like the catchy campaign jingles popular in Philippine elections. If I were working for this fictional candidate, I’d go with that one without hesitation.

The second option had value too, though not as strong. The third one? That didn’t sound like a campaign jingle at all. (Listen below.) Good beat, but not what we were looking for. Same lyrics, same style guidance, but the AI interpreted the brief very differently across the three outputs.

[Audio sample, 1:21]

Even with specific inputs, Suno doesn’t always nail the intent. It generates options, but you’re hoping at least one of them lands. It’s more like a slot machine than a conversation. You can’t iterate in real-time or ask for adjustments. You get what you get, and if nothing works, you start over.

Where I see this fitting

I think Suno is a genuinely fun tool for hobbyists to play around with. I also see real value for content creators who want to create their own background music, songwriters who are looking for musical inspiration when they already have lyrics, or musicians who are searching for the right words to complement fresh beats.

But the workflow feels different from other AI tools I’ve used. The lack of real-time iteration, the platform-like interface that emphasizes discovery, the way it generates multiple options without letting you refine — all of this suggests a different mental model. It’s not “AI assistant that helps you build things.” It’s more “AI that gives you options and you pick what works.”

To me, Suno works best when you’re open to what it gives you rather than trying to execute a precise vision. The campaign jingle that worked felt more like luck than precision. Maybe that’s fine for music creation, where serendipity can be part of the process. But it’s a different relationship with AI than I’ve developed with other tools.

Have you tried Suno or other AI music generation tools? When AI generates multiple options without letting you iterate or refine, how does that change how you think about creative work? Does the “slot machine” dynamic feel limiting, or does it actually open up possibilities you wouldn’t have explored otherwise?
