Note: I wrote this blog post about Deep Research vs Perplexity the day before OpenAI's Deep Research was released. I expect Leung will create new videos on the topic that cover OpenAI's tool as well.
The world of AI research tools is rapidly evolving, leaving many of us wondering which platform best suits our needs. While there are other AI research tools on the market, tech reviewer Grace Leung’s recent YouTube video focuses on a head-to-head comparison of two prominent options: Gemini Deep Research and Perplexity. This post aims to provide an objective summary of her findings, allowing you to draw your own conclusions. If you’re looking for the best AI research tool, this summary of Leung’s insightful comparison of Gemini Deep Research vs Perplexity will help you make an informed decision.

Grace Leung is a Digital Growth Consultant and Educator who shares her expertise on all things digital marketing, personal growth, tech, and business on her YouTube channel. In the video, she compares two powerful AI research tools: Gemini Deep Research and Perplexity.
What is Gemini Deep Research?
Gemini Deep Research is Google’s answer to the growing need for sophisticated AI-powered research tools. As Leung explains, it’s an add-on feature exclusive to the Gemini Advanced subscription plan, priced at around $20 per month. This tool is designed to create multi-step research plans, boasting an “agentic capability” to generate comprehensive and well-thought-out research outlines. Some of its key features, as highlighted in the video, include:
- Integration with other Google products like Docs and NotebookLM (Google’s AI-powered note-taking tool), streamlining the research workflow.
- Detailed citations for each point, making it easy to trace sources.
- A planned upgrade path to the more advanced Gemini 2.0 model once it’s stabilized.
However, Leung notes that it currently lacks file upload capabilities. She also points out that while Deep Research excels at creating initial research plans, its performance in other areas might vary depending on the research topic.
What is Perplexity?
Perplexity, as described by Leung, is a standalone AI research tool whose Pro plan, an upgrade from the free tier, also costs around $20 per month. Unlike Gemini Deep Research, Perplexity lets users switch between different advanced AI models, including popular options like DeepSeek and OpenAI's o1. This flexibility is a significant advantage for those who like to experiment with different models. Other features Leung highlights are:
- A limit of 300 Pro searches per month.
- Support for file uploads, enabling users to build their own knowledge base relevant to their research.
- Interactive searching and the ability to refine source selection, giving users more control over the research process.
- Different modes to narrow down or broaden the search.
With a basic understanding of both tools, let’s dive into Grace Leung’s detailed comparison and see how they stack up against each other when it comes to Gemini Deep Research vs Perplexity.
Grace Leung’s Comparison: Key Takeaways
Leung’s video provides a thorough comparison of the two platforms. Here are the key takeaways, categorized for clarity:
Research Efficiency
- Perplexity: Leung finds Perplexity to be faster in terms of response time. Its interactive searching makes it easy to adjust queries and refine the research scope on the fly; you can even deselect individual sources, and it offers different search modes.
- Deep Research: While Deep Research generates detailed and comprehensive research plans, it’s slower, taking around 8-10 minutes for each search. Leung notes that it’s less flexible when it comes to adjusting the search direction during the process. However, it’s well-suited for creating formal research documentation and conveniently exports to Google Docs.
- Leung’s Verdict: Perplexity is slightly better for overall efficiency and offers more flexibility in refining the search.
Source Reliability and Diversity
- Perplexity: Pulls from a diverse range of sources, including major brands, tech media outlets, social media platforms, and online forums. Leung finds the sources to be timely and reliable.
- Deep Research: Tends to rely more heavily on big brand websites, which Leung points out can limit source diversity. While the sources are generally reliable and it has a neat feature to highlight the specific part of a source used, it includes fewer social media or academic sources compared to Perplexity.
- Leung’s Observation: Perplexity provides better source diversity, although both platforms offer good source reliability.
Information Depth and Output Quality
- Deep Research: Provides more comprehensive responses that are well-structured and detailed. It excels at creating a good flow with different section headings. However, Leung observes that it sometimes relies too heavily on single sources, even when those sources are not entirely relevant to the specific point being made. It can also occasionally include generic “buzzwords” instead of specific insights. It’s important to note that this comparison focused on a specific research topic, and Deep Research’s output quality might differ in other areas.
- Perplexity: Offers more condensed responses, but Leung often finds them to be more meaningful and insightful. It generally uses multiple sources for each point, reducing the risk of bias.
- Leung’s Verdict: While Deep Research has the potential to perform well, and Leung acknowledges its strengths in generating detailed plans and maintaining context, Perplexity’s output quality is often “more insightful” for the AI agent research topic.
Context Retention and Cross-Referencing
- Deep Research: Performs better at retaining context throughout the research process. Leung notes that it effectively links metrics back to specific use cases mentioned earlier in the conversation. However, she also points out that follow-up prompts tend to use fewer sources than the initial research plan and the analysis can be a bit too general at times.
- Perplexity: While less effective at tying specific metrics back to all use cases, it excels at cross-referencing. Leung highlights its ability to identify gaps between marketed claims and actual performance by using specific examples. It also tends to use more sources in follow-up prompts, enhancing its ability to cross-validate claims.
- Leung’s Verdict: A mixed bag. Deep Research is better at maintaining context, while Perplexity demonstrates stronger cross-referencing capabilities in her examples.
Pricing and Value
- Both Gemini Deep Research and Perplexity Pro are priced similarly, at around $20 per month.
- Leung’s Implication: The value of each tool depends on individual needs and research style.
Specific Use Cases
- Deep Research: Due to its ability to generate comprehensive reports with detailed citations and its integration with the Google ecosystem, it might be more suitable for academic research or projects requiring formal documentation. Students writing research papers might find Deep Research’s citation features and Google Docs integration particularly useful.
- Perplexity: Its speed, flexibility, and source refinement features make it a better choice for quick analyses, high-level research, and situations where users need to adjust their search direction frequently. Marketers conducting competitor analysis might prefer Perplexity’s ability to quickly scan various sources and identify trends. Content creators looking for quick information and diverse perspectives may prefer Perplexity’s speed and source variety.
- Leung’s Note: Deep Research should not be confused with an AI-powered search feature like Google’s AI Overviews.
Leung’s Overall Conclusion
Grace Leung concludes that, for now, Perplexity is slightly better overall due to its efficiency, flexibility, source diversity, and the insightful quality of its output. While she acknowledges that Gemini Deep Research has high potential, especially given Google’s vast resources and the anticipated improvements with future Gemini models, she doesn’t find it “super impressive” in its current state.
She emphasizes the importance of fact-checking, regardless of which tool you use, and suggests that combining tools (like Perplexity with NotebookLM) can create a powerful research workflow. While this post focused on Grace Leung’s comparison of Gemini Deep Research vs Perplexity, we encourage you to explore other AI research tools to find the perfect fit for your workflow.
For a more in-depth understanding of the nuances of each platform, be sure to watch Grace Leung’s full video linked above.