I had a thought: where were all the AI leaders actually born? And I wondered if I could visualize it - brains on a map, each one placed at a leader's birthplace.
So I described what I wanted to an AI agent. Twenty minutes later, the interactive map below existed. That's Vibe Blogging - ideas becoming real through conversation with AI.
A note on accuracy: Fact-finding is not my preferred use case for generative AI. I'd rather enter a conversation with well-established facts and use AI to manipulate, analyze, or visualize them. For this post, I used deep research tools to demonstrate the workflow, which resulted in some discrepancies in birthplace data. Some locations may be approximate or require verification. The point here is the process, not the precision of every data point.
How I Built This (The Prompt Engineering)
The real point of this post isn't just the map - it's demonstrating that ideas can come to life in ways that weren't possible before. Let me walk you through the two key techniques I used.
Technique 1: Ensembling
Ensembling means using multiple generative AI or agentic tools in one workflow. Instead of relying on a single model, you triangulate across several to get better, more reliable results.
Here's how I used it:
- I started with Claude Code to write a research prompt - essentially describing what information I needed about AI companies
- I fed that prompt into both Gemini Deep Research and ChatGPT Deep Research - two different models doing the same task
- I took both outputs, brought them back to Claude Code, and asked it to synthesize a unified list of the top 20 AI companies worth investigating
- Then I repeated the process: wrote another research prompt about the leaders of those companies and their birthplaces, ran it through both Gemini and ChatGPT again
- I ran a second pass on leader birthplaces through Gemini and Claude to validate and catch any discrepancies
- Finally, I combined all that research to populate this interactive map
This is textbook ensembling - one of the best-researched techniques for actually improving generative AI performance. You're not trusting any single model; you're cross-referencing and validating across multiple systems.
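To make the cross-referencing step concrete, here's a minimal sketch of the core idea in Python. The model outputs are hard-coded stand-ins - in my actual workflow the answers came from interactive deep-research sessions, not API calls - and the function names and example data are all hypothetical:

```python
from collections import Counter

def ensemble_answers(answers: list[str]) -> tuple[str, bool]:
    """Majority-vote across model outputs; flag disagreement for manual review."""
    norm = [a.strip() for a in answers]
    counts = Counter(a.lower() for a in norm)          # normalize casing before counting
    top_lower, votes = counts.most_common(1)[0]
    top = next(a for a in norm if a.lower() == top_lower)
    return top, votes == len(answers)                  # unanimous = no follow-up needed

# Hypothetical outputs from three models answering one birthplace query
place, unanimous = ensemble_answers(
    ["Tel Aviv, Israel", "Tel Aviv, Israel", "tel aviv, israel"]
)
# place -> "Tel Aviv, Israel", unanimous -> True
```

When the models disagree (`unanimous` is `False`), that entry gets flagged for a second validation pass - exactly the step I ran on the leader birthplaces.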
Technique 2: Decomposition
Decomposition simply means breaking big tasks into smaller chunks. If I had tried to prompt: "Build me an interactive blog post with a world map showing where AI leaders were born, research all the companies, find their leaders, get their birthplaces, and write the content" - it would have been a disaster. The model would have hallucinated. The results would have been unreliable.
Instead, I decomposed the problem:
- Phase 1: Identify which AI companies matter (separate research task)
- Phase 2: Identify who leads those companies and where they were born (separate research task)
- Phase 3: Build the interactive visualization (separate coding task)
- Phase 4: Write the blog content explaining the process (separate writing task)
Each phase had clear inputs and outputs. Each could be validated before moving to the next. The result is what you're reading now.
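The phase structure above can be sketched as a small pipeline, where each phase is a separate function with explicit inputs and outputs, and a validation gate runs before the next phase starts. All the function names and data here are illustrative placeholders, not the actual research results:

```python
def identify_companies() -> list[str]:
    # Phase 1: separate research task (stubbed with made-up names)
    return ["ExampleAI", "DemoLabs"]

def research_leaders(companies: list[str]) -> list[dict]:
    # Phase 2: separate research task per company (stubbed)
    return [{"name": "A. Founder", "company": c, "birthplace": "TBD"}
            for c in companies]

def build_map(leaders: list[dict]) -> str:
    # Phase 3: separate coding task - placeholder for the real visualization
    return f"map with {len(leaders)} markers"

def validate(data, check, phase: str):
    # Gate between phases: fail loudly instead of passing bad data forward
    if not check(data):
        raise ValueError(f"{phase} output failed validation")
    return data

companies = validate(identify_companies(), lambda xs: len(xs) > 0, "Phase 1")
leaders = validate(research_leaders(companies),
                   lambda xs: all("birthplace" in l for l in xs), "Phase 2")
artifact = build_map(leaders)
```

The point of the `validate` gates is the same as in the manual workflow: each phase's output is checked before it becomes the next phase's input, so an error in company selection never silently corrupts the birthplace research downstream.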
The Data
Here's the complete breakdown of the 26 AI leaders I researched, organized by company.
| Name | Company | Role | Birthplace | Status |
|---|---|---|---|---|