Setting Up Your AI Agent
System Prompt Configuration
Add query-writing guidance like the rules below to your AI agent's system prompt.

Why Good Queries Matter
Better queries mean better results. When your AI agent sends precise search queries to Valyu, it gets back more relevant information, which means fewer hallucinations and more accurate responses.

What Makes a Good Query
A good search query has four parts:

| Component | What it does | Example |
|---|---|---|
| Intent | The specific knowledge you need | "LLM transformer efficiency optimisations" |
| Source Type | Which data sources to prioritise | "{author} {document name}" |
| Constraints | Filters that improve relevance | "production-ready solutions" |
| Time Range | The time period results should cover | "2022-2024", "last 6 months", "after 2021" |
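The four components above can be combined mechanically into one compact search string. A minimal sketch in Python (the helper name and component values are illustrative, not part of Valyu's API):

```python
def build_query(intent, source_type="", constraints="", time_range=""):
    """Join the four query components into one compact search string."""
    parts = [intent, source_type, constraints, time_range]
    # Drop empty components so the result stays keyword-focused
    return " ".join(p for p in parts if p)

query = build_query(
    intent="LLM transformer efficiency optimisations",
    constraints="production-ready solutions",
    time_range="2022-2024",
)
```

Omitting a component simply leaves it out of the final string, so the same helper works for queries that only need an intent.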
Writing Better Queries
Keep It Short
Queries work best when they're under 400 characters. Short, focused phrasing beats long, rambling prompts.

❌ Too long (450+ characters): "I need comprehensive information about the latest developments in artificial intelligence and machine learning technologies, particularly focusing on large language models, their training methodologies, performance benchmarks, computational requirements, and how they compare to previous generations of AI systems in terms of accuracy and efficiency"

✅ Better (under 400 characters): "LLM training methodologies performance benchmarks computational requirements vs previous AI systems"

Split Complex Queries
If you're researching something complex, break it into several focused queries instead of one broad query. You'll get more precise results.

Common Mistakes
Being Too Vague
❌ "AI research"
✅ "transformer attention mechanism computational complexity analysis"

Vague queries return unfocused results. Specific terms help Valyu find exactly what you need.

Not Specifying Source Type

❌ "Stock data"
✅ "Apple quarterly earnings financial statements SEC filings"

Without context, Valyu might return news articles when you want financial statements. Be explicit about what type of content you need.

Asking for Too Much

❌ "Everything about quantum computing"
✅ "quantum error correction surface codes implementation"

Broad queries return surface-level content. Narrow your focus to get deeper, more relevant results.

Mixing Multiple Topics

❌ "Explain causes of high inflation rates, and also tell me about cryptocurrency market trends"
✅ "Federal Reserve interest rate policy impact on inflation 2023-2024"

One topic per query. Multiple topics dilute the search and reduce precision.

Using Too Many Words

❌ "Explain concepts on how bioinformatics works by helix"
✅ "DNA helix structure bioinformatics sequence analysis"

Strip out unnecessary words. Keyword-focused queries are more precise.

Quick Reference
| Weak Query | Better Query |
|---|---|
| "Find information about machine learning" | "production RAG benchmarks enterprise deployment technical whitepapers 2023" |
| "Cancer research" | "CAR-T cell therapy B-cell lymphoma phase III outcomes FDA briefing documents 2023" |
| "Recent studies on psychology" | "CBT efficacy treatment-resistant adolescent depression meta-analysis peer-reviewed journals 2020-2024" |
| "Database optimization" | "PostgreSQL time-series query tuning indexing partitioning official documentation benchmarks" |
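The rules above can be enforced with a small pre-flight check before a query is sent. A sketch of such a linter (the thresholds and filler-word list are illustrative choices, not part of Valyu's API):

```python
# Words that add length without adding search signal (illustrative list)
FILLER = {"explain", "find", "information", "about", "tell", "me",
          "everything", "comprehensive"}

def lint_query(query: str) -> list[str]:
    """Return a list of warnings for a candidate search query."""
    warnings = []
    words = query.lower().split()
    if len(query) > 400:
        warnings.append("over 400 characters: shorten or split the query")
    if len(words) < 3:
        warnings.append("too vague: add specific technical terms")
    if " and also " in query.lower():
        warnings.append("multiple topics: use one query per topic")
    filler = FILLER & set(words)
    if filler:
        warnings.append(f"filler words to strip: {sorted(filler)}")
    return warnings

lint_query("AI research")  # flags vagueness
lint_query("transformer attention mechanism computational complexity analysis")  # no warnings
```

Running the linter in the agent loop and retrying flagged queries is one way to keep the quality bar consistent without relying on the model to self-check.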
Using Search Parameters
Combine good queries with Valyu's search parameters to narrow your results further.

Things to Avoid
- Wasting tokens: Keep prompts focused on what your LLM actually needs
- Vague queries: Define technical terms and expand acronyms
- Skipping filters: Use relevance thresholds and source controls
- Ignoring costs: Balance `max_price` with the quality you need
- Wrong source assumptions: Popular sources aren't always the best for learning; "Attention is All You Need" is foundational but not great for understanding modern LLMs
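Putting the query and the filters together, a search request might look roughly like the payload below. This is a hypothetical sketch: the parameter names (`relevance_threshold`, `max_price`, `included_sources`) and the endpoint shown in the comment are assumptions, so check the API reference for the real request shape.

```python
# Hypothetical request payload; parameter names are assumptions --
# consult the Valyu API reference for the actual request shape.
payload = {
    "query": "PostgreSQL time-series query tuning indexing partitioning",
    "relevance_threshold": 0.6,     # assumed: drop low-relevance results
    "max_price": 20,                # assumed: cap cost per search
    "included_sources": ["arxiv"],  # assumed: restrict which sources are searched
}

# Sending it would look roughly like (not executed here):
# requests.post("https://api.valyu.example/search",  # placeholder URL
#               headers={"Authorization": "Bearer YOUR_API_KEY"},
#               json=payload)
```

The point is the shape: a single focused query string plus explicit relevance, cost, and source controls, rather than a long prose prompt.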
Next Steps
Quick Integration
Get your first Valyu search running in minutes
API Reference
Explore all search parameters and response formats
Get Help
Need assistance? We're here to help:

- Technical Support: [email protected]
- Developer Community: Join our Discord

