Setting Up Your AI Agent
System Prompt Configuration
Copy this system prompt to configure your AI agent to use Valyu DeepSearch effectively:

Why Effective Query Prompting Matters
Valyu is AI-native and built for AI agents that need factual grounding from authoritative sources. The more precise the agent’s search queries to Valyu DeepSearch, the more relevant the returned results, which reduces hallucinations and improves your AI’s accuracy.

Anatomy of a Good Prompt
When calling the API from your LLM, agent, or user, effective prompts for Valyu DeepSearch should include:

Component | Description | Example |
---|---|---|
Intent | What specific knowledge do you need? | "LLM transformer efficiency optimizations" |
Source Type | Which data sources should Valyu prioritize? | "{author} {document name}" |
Constraints | What filters improve relevance? | "production-ready solutions" |
Time Range | What time period should results cover? | "2022-2024", "last 6 months", "after 2021" |
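As a rough illustration (not the official SDK), the components above can be assembled into one compact query string. The `build_query` helper here is hypothetical, and it also enforces the 400-character ceiling that Valyu works best under:

```python
MAX_QUERY_CHARS = 400  # Valyu works best with queries under 400 characters

def build_query(intent, source_type=None, constraints=None, time_range=None):
    """Join the prompt components into one compact, high-signal query."""
    parts = [intent, source_type, constraints, time_range]
    query = " ".join(p for p in parts if p)
    if len(query) > MAX_QUERY_CHARS:
        raise ValueError(f"query is {len(query)} chars; keep it under {MAX_QUERY_CHARS}")
    return query

query = build_query(
    intent="LLM transformer efficiency optimizations",
    constraints="production-ready solutions",
    time_range="2022-2024",
)
```

Omitted components are simply skipped, so the same helper covers both minimal and fully specified queries.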
Pro tip: Don’t want to write prompts? Use `tool_call_mode=false` in the API parameters. However, to get the best results, keep reading.

Query Optimisation Essentials
Character Limits and Precision
Valyu works best with focused, concise queries under 400 characters. Treat the character limit as a hard ceiling: short, high-signal phrasing beats extremely verbose prompts.

❌ Too long (450+ characters): "I need comprehensive information about the latest developments in artificial intelligence and machine learning technologies, particularly focusing on large language models, their training methodologies, performance benchmarks, computational requirements, and how they compare to previous generations of AI systems in terms of accuracy and efficiency"

✅ Optimized (under 400 characters): "LLM training methodologies performance benchmarks computational requirements vs previous AI systems"

Multi-Topic Query Strategy
Complex research needs? Split them into targeted sub-queries rather than cramming multiple intents into one request. Instead of one broad query, issue several focused ones. This approach delivers more precise results and better leverages Valyu’s DeepSearch capabilities on each query.

Developer insight: Parallel focused queries often outperform single comprehensive ones, delivering higher relevance scores.
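One way to sketch the parallel sub-query pattern in Python; the `deep_search` stub stands in for a real Valyu DeepSearch call, and the query strings are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def deep_search(query: str) -> dict:
    """Stub for a real Valyu DeepSearch request (hypothetical)."""
    return {"query": query, "results": []}

# Three focused sub-queries instead of one broad "LLM efficiency" request
sub_queries = [
    "transformer attention mechanism computational complexity analysis",
    "LLM inference latency optimization techniques 2023-2024",
    "mixture-of-experts routing efficiency benchmarks",
]

# Run the focused queries concurrently, preserving input order in the results
with ThreadPoolExecutor(max_workers=len(sub_queries)) as pool:
    results = list(pool.map(deep_search, sub_queries))
```

Because each sub-query carries a single intent, every result set can be scored and merged independently before it reaches your LLM’s context.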
Common Prompting Mistakes
Ineffective prompts that waste your API credits:

Avoid Generic Queries
Too generic: Valyu needs specificity to deliver factual grounding.

❌ "AI research"
✅ "transformer attention mechanism computational complexity analysis"

Generic queries return broad, unfocused results that dilute relevance. Specific technical terms help Valyu’s search algorithms identify precise sources, details, and content that match your AI’s needs.

Specify Source Guidance
Missing source guidance: Specify the type of content you need.

❌ "Stock data"
✅ "Apple quarterly earnings financial statements SEC filings"

Without source context, Valyu may return news articles when you need financial data, or vice versa. Explicit source indicators help prioritise the right content from Valyu’s comprehensive search index.

Focus Your Scope
Overly broad scope: Granular search controls work better with focused queries.

❌ "Everything about quantum computing"
✅ "quantum error correction surface codes implementation"

Broad topics dilute your search intent and return surface-level or imprecise content. Focused queries surface the specialised content that is most relevant and useful for your AI system.

Single Intent Per Query
Multiple queries in one prompt: Broad, multi-part prompts dilute the intent; keep to focused queries.

❌ "Explain causes of high inflation rates, and also tell me about cryptocurrency market trends"
✅ "Federal Reserve interest rate policy impact on inflation 2023-2024"

Multiple intents dilute the query and reduce precision for each topic. Single-intent queries allow Valyu’s relevance algorithms to optimise for a specific domain, delivering higher-quality results that better serve your LLM’s context requirements.

Optimize for Low-Verbosity Structure
Too verbose querying: Don’t add noise to the query; keep to the key information.

❌ "Explain concepts on how bioinformatics works by helix"
✅ "DNA helix structure bioinformatics sequence analysis"

Verbose phrasing with unnecessary words reduces search precision and wastes tokens. Compressed, keyword-focused queries improve search precision, especially when looking for specific information within a specific document.

Transform weak prompts into high-intent queries:
Ineffective Prompt | Optimized for Valyu |
---|---|
"Find information about machine learning" | "production RAG benchmarks enterprise deployment technical whitepapers 2023" |
"Cancer research" | "CAR-T cell therapy B-cell lymphoma phase III outcomes FDA briefing documents 2023" |
"Recent studies on psychology" | "CBT efficacy treatment-resistant adolescent depression meta-analysis peer-reviewed journals 2020-2024" |
"Database optimization" | "PostgreSQL time-series query tuning indexing partitioning official documentation benchmarks" |
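The verbosity trimming shown above can be approximated mechanically. This sketch drops common filler words; the `FILLER` set is illustrative, not exhaustive:

```python
FILLER = {
    "explain", "concepts", "on", "how", "works", "by", "the", "a", "an",
    "about", "please", "find", "information", "tell", "me", "i", "need",
}

def compress_query(query: str) -> str:
    """Keep high-signal keywords in order; drop low-signal filler words."""
    return " ".join(w for w in query.split() if w.lower() not in FILLER)

compress_query("Find information about machine learning")  # -> "machine learning"
```

A real pipeline would likely let the LLM do this compression itself, but a deterministic pass like this is a cheap safety net before a query hits the API.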
If a user is querying the Valyu API directly (not through an LLM tool call), set `tool_call_mode=false` for better results.

Maximizing Valyu’s Search Parameters
Combine well-crafted prompts with Valyu’s search parameters for better search guardrails. This boxes your search into specific areas, which is useful when using the DeepSearch API for a specific domain or tool call.

Pro tip: Leverage Valyu’s beyond-the-web capabilities with `included_sources` like `valyu/valyu-arxiv` for academic content, financial market data, or specialized datasets that other Search APIs can’t access.

Avoid Common Integration Mistakes
- Token waste: Focus prompts on the essential information for your LLM context; don’t ask general questions
- Ambiguous queries: Define domain-specific terms and expand acronyms to improve search precision
- Missing filters: Always use Valyu’s relevance thresholds and source controls
- Ignoring cost optimization: Balance `max_price` with result quality needs
- Wrong source expectations: Sometimes highly-cited or popular sources may not contain the context you need. For example, the “Attention Is All You Need” paper is foundational but terrible for learning how transformers work in modern LLMs
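Putting these guardrails together, a request body might look like the sketch below. `tool_call_mode`, `included_sources`, and `max_price` appear in this guide; the `relevance_threshold` and `relevance_score` names, and the numeric values, are assumptions to be checked against the API Reference:

```python
# Hypothetical DeepSearch request body combining the guardrails above
payload = {
    "query": "quantum error correction surface codes implementation",
    "tool_call_mode": False,                    # direct query, not an LLM tool call
    "included_sources": ["valyu/valyu-arxiv"],  # academic-only guardrail
    "max_price": 20,                            # illustrative budget; tune to quality needs
    "relevance_threshold": 0.6,                 # assumed parameter name; see API Reference
}

def filter_results(results, min_score=0.6):
    """Client-side fallback: keep only results at or above a relevance score.
    The 'relevance_score' field name is an assumption about the response shape."""
    return [r for r in results if r.get("relevance_score", 0) >= min_score]
```

Even if the server already applies a threshold, a client-side filter like this keeps low-relevance stragglers out of your LLM’s context window.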
Start Building with Valyu
Ready to integrate production-grade search into your AI stack?

Quick Integration
Get your first Valyu search running in minutes
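A minimal first call might look like this sketch using the standard library; the endpoint URL and auth header scheme are assumptions, so substitute the values from the API Reference below:

```python
import json
import urllib.request

API_URL = "https://api.valyu.network/v1/deepsearch"  # assumed endpoint; verify in the API Reference
API_KEY = "YOUR_API_KEY"

def build_request(query: str) -> urllib.request.Request:
    """Assemble a DeepSearch POST (endpoint and auth header are assumptions)."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps({"query": query, "tool_call_mode": False}).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("PostgreSQL time-series query tuning indexing partitioning")
# response = urllib.request.urlopen(req, timeout=30)
# results = json.loads(response.read())
```

Separating request construction from the network call keeps the query-building logic easy to unit-test before you wire in real credentials.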
API Reference
Explore all search parameters and response formats
Developer Support
Building something ambitious? Our team helps optimize search strategies for mission-critical AI applications:

- Technical Support: contact@valyu.network
- Developer Community: Join our Discord
Performance tip: The most effective prompts combine domain expertise with
Valyu’s search controls. Start with our templates, then iterate based on your
LLM’s specific context requirements.