I’d been sitting on a collection of research papers about disinformation and elections for years. Twelve PDFs from reputable sources, all focused on the Philippines and the United States. The kind of material you save thinking “I should really synthesize this someday” but never quite get around to.
So I decided to test NotebookLM, Google’s AI-powered research tool, to see what it could do with that pile of academic papers.
The setup: Sources, studio, and chat
NotebookLM’s interface is cleanly split into three sections. The left panel holds your sources (I uploaded those 12 PDFs on disinformation). The right panel is what Google calls the “studio,” where you generate different output formats. And the middle is a standard chat interface where you can ask questions about your sources.
What caught my attention was that studio panel. Eight different formats: audio overview, video overview, mind map, reports, flashcards, quiz, infographic, and slide deck. The free version gives you access to the first four, with the rest available through Google One or in beta.
I skipped the flashcards and quiz for this test. I wanted to see how the tool handled synthesis and visualization across the other six formats.
Testing six different outputs
I didn’t customize much. For most formats, I just hit generate and waited to see what would happen. And unlike some AI tools that spit out results in seconds, NotebookLM takes its time. Simple formats took a couple of seconds. Video, reports, and slide decks took a few minutes.
Audio overview generated a podcast-style conversation between two AI hosts discussing the professionalization of disinformation operations. But what stood out was the interactive mode. I could literally interrupt the podcast mid-conversation and ask questions. When I asked whether the focus was global or country-specific, the hosts paused and answered: “Both. We’re covering 28 countries by 2017, but we’ll deep dive into the Philippines and the U.S. as case studies.” It wasn’t perfect (the voices are clearly synthetic), but the ability to jump into an audio discussion felt genuinely useful.
Video overview created a seven-and-a-half-minute video without any storyboard or outline from me. The imagery was surprisingly relevant (not generic stock photos), and the narration walked through the organizational structure of disinformation campaigns. I didn’t watch the whole thing, but the preview showed more polish than I expected from a fully automated generation.
Mind map is where NotebookLM does something I haven’t seen elsewhere. It created an interconnected web of topics from my sources — six main branches under “Organized Digital Influence Operations.” When I clicked on “Policy and Intervention,” the map expanded with five subtopics and automatically generated a chat prompt: “Discuss what these sources say about policy and intervention in the larger context of organized digital influence operations.” The answer pulled from multiple sources with citations. The visual mapping combined with real-time chat felt like a genuinely different way to explore complex material.
Reports offered four suggested formats based on my content — strategic analysis, policy paper, concept explainer, or case study. I chose concept explainer and got a comprehensive document defining key terms and frameworks. It reads like something you’d use as a briefing doc, not a final paper, but as a starting point, it saved hours of manual synthesis.
Infographic generated a clean, portrait-mode one-pager titled “The Architecture of Disinformation.” No typos. Clear images. Proper text hierarchy. AI-generated graphics I’ve seen in the past had overlapping elements or garbled text, but this was polished enough to actually use.
Slide deck produced 15 slides that honestly impressed me more than I expected. The formatting was sophisticated — not just templated consulting deck layouts, but thoughtfully designed slides with relevant imagery. One slide about the 2022 Philippines election included an AI-generated image of the current president surrounded by a web representing media and family connections. The balance of text and visuals across all 15 slides felt intentional, not algorithmic.
What stood out (and what I’m curious about)
The range in sophistication across formats was striking. The mind map and slide deck felt genuinely powerful — the kind of outputs I’d actually use in real work. The slide deck especially caught me off guard with how tailored it was to my specific sources, not just generic templates.
What I’m most curious to test with more time is how these formats hold up with different types of source material. This worked incredibly well with academic papers on a focused topic, but I wonder how it would handle messier inputs like blog posts, transcripts, or loosely connected ideas. The tool clearly excels at synthesis when you bring it curated sources.
I also didn’t have time to deeply fact-check the outputs against my sources. The citations were there, which is reassuring, but I’d want to verify accuracy more thoroughly before relying on this for high-stakes research or presentations. That’s less a limitation of the tool and more a reality of working with any AI-generated content.
Where this fits
This is the most impressed I’ve been by an AI research tool. Not because it replaces the work of reading and thinking, but because it accelerates how you organize and communicate what you’ve learned.
The variety of formats isn’t just a nice-to-have; it changes how you can actually use synthesized information. Need to present findings to a team? The slide deck gives you a real starting point. Trying to understand connections across sources? The mind map makes relationships explicit in ways that linear notes don’t. Want to process material while commuting? The interactive audio mode lets you learn actively, not passively.
I’ve tested enough AI tools to be skeptical of anything that promises to “transform” research. But NotebookLM delivered on something more practical: It made a pile of PDFs I’d been avoiding actually approachable. For topics where you have good sources and need to synthesize quickly or communicate findings across multiple formats, this tool is worth trying.
Have you tried NotebookLM? If you’re sitting on research sources you haven’t synthesized yet, which format would be most useful — audio, visual, or written? I’m particularly curious whether anyone’s used the interactive audio mode and found it actually helpful versus gimmicky.
And for those who regularly do research synthesis: Where do you see tools like this fitting into your workflow? I’m convinced this accelerates communication of findings, but I’m still thinking through how it changes the actual research process itself.