
Building a Council of AI Agents: Multiple LLMs Working Together

5 min read · By Hamza

Learn how to build a council of AI agents that combines multiple LLMs for better insights. Inspired by Perplexity's Model Council, explore multi-agent AI systems.

AI Agents · Multi-LLM System · Council of Agents · Perplexity Model Council · AI Collaboration · Multiple AI Models · Agent Chain · MCP Council



Have you ever wondered what would happen if you could get multiple AI models to debate your ideas? Perplexity AI introduced this exact concept with their Model Council, and it worked remarkably well. Inspired by their approach, I built my own Council of Agents system that combines insights from different LLMs to provide more comprehensive analysis.


Think of it as a panel of AI experts, each with their own perspective and capabilities. The concept, proven effective by Perplexity's implementation, leverages the fact that different AI models have different strengths, training data, and reasoning patterns. Instead of relying on a single AI model, you can now:


  • Get diverse perspectives from multiple providers (Claude, GPT-4, Gemini, etc.)
  • Chain responses where each agent sees what previous agents said
  • Build consensus by comparing different viewpoints
  • Reduce bias by avoiding reliance on a single model's limitations
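
As a concrete starting point, the council itself can be as simple as a typed list of agents. The provider and model names below are illustrative placeholders, not identifiers from any specific SDK:

```typescript
// A minimal sketch of a council definition. Each agent pairs a
// provider with one of its models; the values here are examples only.
interface CouncilAgent {
  name: string;     // label shown in transcripts, e.g. "Claude"
  provider: string; // which API this agent calls
  model: string;    // model identifier for that provider
}

const council: CouncilAgent[] = [
  { name: "Claude", provider: "anthropic", model: "claude-sonnet" },
  { name: "GPT-4", provider: "openai", model: "gpt-4" },
  { name: "Gemini", provider: "google", model: "gemini-pro" },
];
```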

The Chain Mechanism


Here's where it gets interesting. When you send a message:


  1. First Agent receives your original prompt
  2. Second Agent sees your prompt PLUS the first agent's response
  3. Third Agent sees everything that came before
  4. Each agent is instructed to provide DIFFERENT perspectives

This creates a cascading effect where agents build upon, challenge, or refine previous responses—just like a real brainstorming session.
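
The steps above can be sketched as a sequential loop. `callModel` here is a stand-in for whichever provider SDK you actually use, so the chaining logic itself is testable without a real API:

```typescript
// Sketch of the cascading chain: each agent after the first sees the
// original prompt plus every prior agent's answer.
type CallModel = (systemPrompt: string, userPrompt: string) => Promise<string>;

async function runCouncil(
  agentCount: number,
  userPrompt: string,
  callModel: CallModel,
): Promise<string[]> {
  const responses: string[] = [];
  for (let i = 0; i < agentCount; i++) {
    // Build the accumulated context from all previous responses.
    const context = responses
      .map((r, j) => `Agent ${j + 1} said:\n${r}`)
      .join("\n\n");
    const prompt = context ? `${userPrompt}\n\n${context}` : userPrompt;
    const system =
      i === 0
        ? "You are Council Agent 1. Answer the user's prompt."
        : `You are Council Agent ${i + 1}. Previous agents have provided ` +
          `their analysis below. Provide a DIFFERENT perspective or ` +
          `additional insights. If you genuinely agree with all previous ` +
          `responses and have nothing new to add, simply state your agreement.`;
    responses.push(await callModel(system, prompt));
  }
  return responses;
}
```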


Each agent after the first receives a modified system prompt:


"You are Council Agent ${index}. Previous agents have provided their 
analysis below. Provide a DIFFERENT perspective or additional insights. 
If you genuinely agree with all previous responses and have nothing 
new to add, simply state your agreement."

This instruction prevents echo chambers and encourages critical thinking. The system maintains full conversation history, so context carries across turns. Agent 2 in the second turn knows what Agent 1 said in the first turn.
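
One way to keep that cross-turn context is a flat transcript that records every user and agent turn and replays it as context on the next turn. This is a minimal sketch of the idea, not the exact data structure from my implementation:

```typescript
// Cross-turn memory: the full transcript (user turns plus every
// agent's reply) is flattened into the context handed to each agent.
interface Turn {
  role: "user" | "agent";
  agent?: number; // which council agent spoke (for role "agent")
  content: string;
}

const history: Turn[] = [];

function recordUserTurn(content: string): void {
  history.push({ role: "user", content });
}

function recordAgentTurn(agent: number, content: string): void {
  history.push({ role: "agent", agent, content });
}

// Flatten the transcript so Agent 2 in turn two can see
// what Agent 1 said in turn one.
function buildContext(): string {
  return history
    .map((t) =>
      t.role === "user" ? `User: ${t.content}` : `Agent ${t.agent}: ${t.content}`,
    )
    .join("\n");
}
```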


Use Cases


This setup is perfect for:


  • Brainstorming sessions: Get multiple creative angles on a problem
  • Decision validation: Cross-check important choices across models
  • Code review: Different models might catch different issues
  • Research synthesis: Combine analytical approaches from various AIs
  • Bias detection: Identify when models disagree significantly

This multi-model approach, as demonstrated by Perplexity's success, can significantly improve answer quality and reduce hallucinations by having models cross-validate each other's responses.


Challenges & Considerations


  • Context window limits: Each agent needs the full conversation history
  • API costs: More agents = more tokens used
  • Response time: Sequential processing takes longer
  • Rate limiting: Multiple providers means managing multiple rate limits
  • Quality variance: Not all models are equally capable
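
To make the cost and context-window points concrete: because each agent's input includes everything before it, input tokens compound with council size. A rough back-of-the-envelope estimate, assuming ~4 characters per token (a crude approximation, not a real tokenizer):

```typescript
// Estimate total input tokens across a council chain. Each agent
// reads the full context so far; its reply then joins that context
// for the next agent, so input cost grows roughly quadratically.
function estimateInputTokens(
  promptChars: number,
  responseChars: number[], // one entry per agent's reply length
): number {
  let total = 0;
  let contextChars = promptChars;
  for (const r of responseChars) {
    total += Math.ceil(contextChars / 4); // this agent reads all prior text
    contextChars += r;                    // its reply joins the context
  }
  return total;
}
```

For a 400-character prompt and three agents each replying with 400 characters, the agents read roughly 100, 200, and 300 input tokens respectively, so a three-agent council costs about six times the input tokens of a single call, not three.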

I experimented with this council of models, and I think it can be genuinely useful depending on what you want to achieve. I built a simple version, but you can of course extend it by giving each LLM a different persona, turning it into something that actually helps you brainstorm or validate ideas. Remember, this is still AI, and models are often confidently wrong, so be careful and don't blindly follow whatever they tell you.

I still think it's a useful pattern. Currently I'm using the Council of Mine MCP, which is far simpler since you just need to install the MCP server. So if you don't want to build your own council, there are already options available in the market, though they have limited functionality.