update exa search block: add deep search type, reorder defaults#12476
theishangoswami wants to merge 2 commits into Significant-Gravitas:dev
Conversation
Co-Authored-By: ishan <ishan@exa.ai>
This PR targets the dev branch. Automatically setting the base branch to dev.
Walkthrough
Updated the Exa search block's public enum.
Changes
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~3 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
🧹 Nitpick comments (1)
autogpt_platform/backend/backend/blocks/exa/search.py (1)
128-128: Description omits keyword and neural search types.
The updated description mentions "auto, fast, and deep search modes" but the ExaSearchTypes enum still includes KEYWORD and NEURAL as valid options. Users may be confused when they see these additional options in the dropdown. Consider either:
- Updating the description to mention all supported types, or
- Adding a note that keyword/neural are advanced/legacy options
📝 Suggested description update

```diff
- description="Searches the web using Exa, the best search engine for AI agents. Supports auto, fast, and deep search modes.",
+ description="Searches the web using Exa, the best search engine for AI agents. Supports auto, fast, deep, keyword, and neural search modes.",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@autogpt_platform/backend/backend/blocks/exa/search.py` at line 128, The description string currently states "auto, fast, and deep search modes" but the ExaSearchTypes enum also exposes KEYWORD and NEURAL, which can confuse users; update the description (the description="..." argument near Exa search registration in search.py) to either list all supported types including KEYWORD and NEURAL or append a short clarifying note that KEYWORD and NEURAL are advanced/legacy options so the dropdown matches the enum (reference: ExaSearchTypes).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@autogpt_platform/backend/backend/blocks/exa/search.py`:
- Around line 27-32: The ExaSearchTypes enum contains an invalid member KEYWORD
which the Exa API rejects; remove the KEYWORD member from the ExaSearchTypes
Enum declaration (class ExaSearchTypes) so only valid types remain (AUTO, FAST,
DEEP, NEURAL), or if you need an alternative, replace KEYWORD with a supported
type such as INSTANT or DEEP_REASONING and update any references to
ExaSearchTypes.KEYWORD accordingly (search usages, input validation, and tests)
to prevent runtime API failures.
---
Nitpick comments:
In `@autogpt_platform/backend/backend/blocks/exa/search.py`:
- Line 128: The description string currently states "auto, fast, and deep search
modes" but the ExaSearchTypes enum also exposes KEYWORD and NEURAL, which can
confuse users; update the description (the description="..." argument near Exa
search registration in search.py) to either list all supported types including
KEYWORD and NEURAL or append a short clarifying note that KEYWORD and NEURAL are
advanced/legacy options so the dropdown matches the enum (reference:
ExaSearchTypes).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 68d47dec-6173-4fb4-a8ff-b3354d3fa4b7
📒 Files selected for processing (1)
autogpt_platform/backend/backend/blocks/exa/search.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: types
- GitHub Check: check-docs-sync
- GitHub Check: Check PR Status
- GitHub Check: Analyze (python)
- GitHub Check: test (3.11)
- GitHub Check: test (3.13)
- GitHub Check: test (3.12)
🧰 Additional context used
📓 Path-based instructions (5)
autogpt_platform/backend/**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/backend/**/*.py: Use Python 3.11 (required; managed by Poetry via pyproject.toml) for backend development
Always run 'poetry run format' (Black + isort) before linting in backend development
Always run 'poetry run lint' (ruff) after formatting in backend development
Files:
autogpt_platform/backend/backend/blocks/exa/search.py
autogpt_platform/backend/backend/blocks/**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/backend/backend/blocks/**/*.py: Inherit from 'Block' base class with input/output schemas when adding new blocks in backend
Implement 'run' method with proper error handling in backend blocks
Generate block UUID using 'uuid.uuid4()' when creating new blocks in backend
Write tests alongside block implementation when adding new blocks in backend
Files:
autogpt_platform/backend/backend/blocks/exa/search.py
autogpt_platform/backend/**/*.{py,txt}
📄 CodeRabbit inference engine (autogpt_platform/backend/CLAUDE.md)
Use
poetry runprefix for all Python commands, including testing, linting, formatting, and migrations
Files:
autogpt_platform/backend/backend/blocks/exa/search.py
autogpt_platform/backend/backend/**/*.py
📄 CodeRabbit inference engine (autogpt_platform/backend/CLAUDE.md)
Use Prisma ORM for database operations in PostgreSQL with pgvector for embeddings
Files:
autogpt_platform/backend/backend/blocks/exa/search.py
autogpt_platform/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Format Python code with
poetry run format
Files:
autogpt_platform/backend/backend/blocks/exa/search.py
🧠 Learnings (8)
📚 Learning: 2026-02-05T04:11:00.596Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 11796
File: autogpt_platform/backend/backend/blocks/video/concat.py:3-4
Timestamp: 2026-02-05T04:11:00.596Z
Learning: In autogpt_platform/backend/backend/blocks/**/*.py, when creating a new block, generate a UUID once with uuid.uuid4() and hard-code the resulting string as the block's id parameter. Do not call uuid.uuid4() at runtime; IDs must be constant across all imports and runs to ensure stability.
Applied to files:
autogpt_platform/backend/backend/blocks/exa/search.py
📚 Learning: 2026-03-16T16:32:21.686Z
Learnt from: Abhi1992002
Repo: Significant-Gravitas/AutoGPT PR: 12417
File: autogpt_platform/backend/backend/blocks/agent_mail/pods.py:62-74
Timestamp: 2026-03-16T16:32:21.686Z
Learning: In autogpt_platform/backend/backend/blocks/, the Block base class execute() already wraps run() in a try/except to convert uncaught exceptions into BlockExecutionError/BlockUnknownError. Do not add per-block try/except in individual block run() methods, as this is not the established pattern (e.g., Gmail, Slack, Todoist blocks omit it). Only use explicit try/except within blocks that need to distinguish between success and error yield paths inside a generator (e.g., attachment blocks). This guidance applies to all Python files under autogpt_platform/backend/backend/blocks/ and similar block implementations; avoid duplicating error handling in run() unless a block requires generator-based branching.
Applied to files:
autogpt_platform/backend/backend/blocks/exa/search.py
📚 Learning: 2026-02-26T17:02:22.448Z
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12211
File: .pre-commit-config.yaml:160-179
Timestamp: 2026-02-26T17:02:22.448Z
Learning: Keep the pre-commit hook pattern broad for autogpt_platform/backend to ensure OpenAPI schema changes are captured. Do not narrow to backend/api/ alone, since the generated schema depends on Pydantic models across multiple directories (backend/data/, backend/blocks/, backend/copilot/, backend/integrations/, backend/util/). Narrowing could miss schema changes and cause frontend type desynchronization.
Applied to files:
autogpt_platform/backend/backend/blocks/exa/search.py
📚 Learning: 2026-03-05T15:42:08.207Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12297
File: .claude/skills/backend-check/SKILL.md:14-16
Timestamp: 2026-03-05T15:42:08.207Z
Learning: In Python files under autogpt_platform/backend (recursively), rely on poetry run format to perform formatting (Black + isort) and linting (ruff). Do not run poetry run lint as a separate step after poetry run format, since format already includes linting checks.
Applied to files:
autogpt_platform/backend/backend/blocks/exa/search.py
📚 Learning: 2026-03-16T16:30:11.452Z
Learnt from: Abhi1992002
Repo: Significant-Gravitas/AutoGPT PR: 12417
File: autogpt_platform/backend/backend/blocks/agent_mail/threads.py:80-102
Timestamp: 2026-03-16T16:30:11.452Z
Learning: In autogpt_platform/backend/backend/blocks/ (and related blocks under autogpt_platform/backend/backend/blocks/), do not add try/except blocks around a block's run() method for standard error propagation. The block executor framework (backend/executor/manager.py) catches uncaught exceptions from run() and emits them on the 'error' output. Only add explicit try/except blocks when you need to control partial outputs in failure cases (e.g., certain outputs must not be yielded on error, as in attachment blocks). This is the standard pattern across the codebase; apply it broadly to blocks' run() implementations.
Applied to files:
autogpt_platform/backend/backend/blocks/exa/search.py
📚 Learning: 2026-03-16T16:30:23.196Z
Learnt from: Abhi1992002
Repo: Significant-Gravitas/AutoGPT PR: 12417
File: autogpt_platform/backend/backend/blocks/agent_mail/pods.py:62-74
Timestamp: 2026-03-16T16:30:23.196Z
Learning: In any Python file under autogpt_platform/backend/backend/blocks, do not add a try/except around run() solely for standard error handling. The block framework’s _execute() in _base.py already catches unhandled exceptions and re-raises as BlockExecutionError or BlockUnknownError. If you yield ("error", message), _execute() raises BlockExecutionError immediately, so the error port will not propagate downstream. Reserve explicit try/except for scenarios where you must control partial output (e.g., attachment blocks that must skip yielding content_base64 on failure).
Applied to files:
autogpt_platform/backend/backend/blocks/exa/search.py
📚 Learning: 2026-03-16T16:30:11.452Z
Learnt from: Abhi1992002
Repo: Significant-Gravitas/AutoGPT PR: 12417
File: autogpt_platform/backend/backend/blocks/agent_mail/threads.py:80-102
Timestamp: 2026-03-16T16:30:11.452Z
Learning: Do not wrap synchronous AgentMail SDK calls with asyncio.to_thread() in blocks under autogpt_platform/backend/backend/blocks (and across the codebase). The block executor runs node execution in dedicated threads via asyncio.run_coroutine_threadsafe (see manager.py around lines ~745-752 and ~1079). The existing pattern avoids using asyncio.to_thread for SDK calls inside async run() methods, so maintain that approach and do not add to_thread usage in these code paths.
Applied to files:
autogpt_platform/backend/backend/blocks/exa/search.py
📚 Learning: 2026-03-16T16:35:40.236Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12440
File: autogpt_platform/backend/backend/api/features/workflow_import.py:54-63
Timestamp: 2026-03-16T16:35:40.236Z
Learning: Avoid using the word 'competitor' in public-facing identifiers and text. Use neutral naming for API paths, model names, function names, and UI text. Examples: rename 'CompetitorFormat' to 'SourcePlatform', 'convert_competitor_workflow' to 'convert_workflow', '/competitor-workflow' to '/workflow'. Apply this guideline to files under autogpt_platform/backend and autogpt_platform/frontend.
Applied to files:
autogpt_platform/backend/backend/blocks/exa/search.py
```python
class ExaSearchTypes(Enum):
    AUTO = "auto"
    FAST = "fast"
    DEEP = "deep"
    KEYWORD = "keyword"
    NEURAL = "neural"
```
🧩 Analysis chain
🌐 Web query:
Exa API search type parameter deep auto fast keyword neural
💡 Result:
The Exa Search API's type parameter specifies the search mode, balancing speed and quality. Valid values from official docs (as of 2026) are:
- neural: Embeddings-based semantic search.
- fast: Streamlined models for high speed (~450ms p50 latency), ideal for low-latency apps and agents. Introduced July 2025.
- auto: Default (~1s), intelligently combines neural and other methods for balanced performance.
- deep: Light deep search (4-12s), for complex queries with multi-step reasoning and structured outputs.
- deep-reasoning: Higher-effort deep search (12-50s), enhanced in March 2026 with structured outputs and grounding.
- instant: Lowest latency (~200ms), optimized for real-time applications like chat/voice.

Usage example (Python): result = exa.search("query", type="fast")

keyword is not a supported type; use auto or neural for semantic/keyword-like search. See API reference for full details.
Citations:
- 1: https://exa.ai/docs/changelog/new-fast-search-type
- 2: https://docs.exa.ai/reference/search-api-guide
- 3: https://exa.ai/docs/reference/search
- 4: https://exa.ai/docs/changelog
- 5: https://docs.exa.ai/reference/search
- 6: https://exa.ai/blog/exa-api-2-0
Remove KEYWORD from the enum—it is not a valid Exa API search type.
The verification confirms that DEEP is valid (light deep search, 4-12s latency), as are AUTO, FAST, and NEURAL. However, KEYWORD is not a supported search type according to the official Exa API documentation. The API explicitly rejects "keyword" as an invalid value; use auto or neural for semantic/keyword-like search instead.
If the code attempts to use ExaSearchTypes.KEYWORD, the Exa API call will fail at runtime. Remove this enum member, or consider adding other valid types like INSTANT (lowest latency, ~200ms) or DEEP_REASONING (higher-effort deep search, 12-50s, enhanced March 2026).
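Based on the web-query result above, the reviewer's suggestion can be sketched as an enum that keeps only the confirmed-valid members. This is a sketch of the suggested fix, not the patch actually in the PR; the comments paraphrase the latency figures quoted in the query result:

```python
from enum import Enum


class ExaSearchTypes(Enum):
    """Search types the Exa API accepts, per the query result above."""

    AUTO = "auto"      # default (~1s), blends neural and other methods
    FAST = "fast"      # ~450ms p50, for low-latency apps and agents
    DEEP = "deep"      # light deep search, 4-12s, multi-step reasoning
    NEURAL = "neural"  # embeddings-based semantic search
    # KEYWORD is deliberately absent: the API rejects "keyword" as a value.
```

Any code that previously referenced `ExaSearchTypes.KEYWORD` would then need to fall back to `AUTO` or `NEURAL`.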
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@autogpt_platform/backend/backend/blocks/exa/search.py` around lines 27 - 32,
The ExaSearchTypes enum contains an invalid member KEYWORD which the Exa API
rejects; remove the KEYWORD member from the ExaSearchTypes Enum declaration
(class ExaSearchTypes) so only valid types remain (AUTO, FAST, DEEP, NEURAL), or
if you need an alternative, replace KEYWORD with a supported type such as
INSTANT or DEEP_REASONING and update any references to ExaSearchTypes.KEYWORD
accordingly (search usages, input validation, and tests) to prevent runtime API
failures.
…rding Co-Authored-By: ishan <ishan@exa.ai>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@autogpt_platform/backend/backend/blocks/exa/search.py`:
- Line 126: Update the descriptive metadata so the resolved type wording matches
the new search-mode terms: find the description/metadata string in
autogpt_platform/backend/backend/blocks/exa/search.py (the variable/argument
named description and any text mentioning "resolved type" or "(neural or
keyword)") and replace that old phrase with the new wording "(auto, fast, or
deep)" so the schema/docs consistently reference the new search modes; ensure
the sentence remains grammatically correct and aligns with the existing
description text that mentions "auto, fast, and deep".
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 9faf171a-e37e-4174-b03a-2a1cb9d22efd
📒 Files selected for processing (1)
autogpt_platform/backend/backend/blocks/exa/search.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
- GitHub Check: Check PR Status
- GitHub Check: Analyze (python)
- GitHub Check: test (3.11)
- GitHub Check: test (3.13)
- GitHub Check: test (3.12)
- GitHub Check: end-to-end tests
- GitHub Check: setup
🔇 Additional comments (1)
autogpt_platform/backend/backend/blocks/exa/search.py (1)
29-30: Enum update looks good.Adding
DEEPand keepingAUTOas the default-facing option is consistent with the block input schema behavior.
```diff
 super().__init__(
     id="996cec64-ac40-4dde-982f-b0dc60a5824d",
-    description="Searches the web using Exa's advanced search API",
+    description="Searches the web using Exa, the best web search API for AI agents. Supports auto, fast, and deep search modes.",
```
Align output metadata text with the new search-mode wording.
The new description mentions auto, fast, and deep, but line 116 still says the resolved type is "(neural or keyword)". That can mislead users reading the generated schema/docs.
Proposed doc-text fix

```diff
-    resolved_search_type: str = SchemaField(
-        description="The search type that was actually used for this request (neural or keyword)"
-    )
+    resolved_search_type: str = SchemaField(
+        description="The search type that was actually used for this request."
+    )
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
description="Searches the web using Exa, the best web search API for AI agents. Supports auto, fast, and deep search modes.",
```

```python
resolved_search_type: str = SchemaField(
    description="The search type that was actually used for this request."
)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@autogpt_platform/backend/backend/blocks/exa/search.py` at line 126, Update
the descriptive metadata so the resolved type wording matches the new
search-mode terms: find the description/metadata string in
autogpt_platform/backend/backend/blocks/exa/search.py (the variable/argument
named description and any text mentioning "resolved type" or "(neural or
keyword)") and replace that old phrase with the new wording "(auto, fast, or
deep)" so the schema/docs consistently reference the new search modes; ensure
the sentence remains grammatically correct and aligns with the existing
description text that mentions "auto, fast, and deep".
themavik left a comment:
clean update, matches Exa API changes
Added the new deep search type to the Exa search block and reordered the enum so auto is the default first option. Also updated the block description to better reflect what Exa does.
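The reordering described above works because Python's Enum preserves member definition order: iterating the class yields members in source order, which is what lets auto surface as the first option in a dropdown built from the enum. A quick sketch (that the block UI derives its dropdown from enum iteration order is an assumption here, not something shown in this PR):

```python
from enum import Enum


class ExaSearchTypes(Enum):
    # Members as defined in the PR: AUTO listed first so it appears
    # as the first (default-facing) dropdown option.
    AUTO = "auto"
    FAST = "fast"
    DEEP = "deep"
    KEYWORD = "keyword"
    NEURAL = "neural"


# Enum preserves definition order, so iterating yields "auto" first.
options = [t.value for t in ExaSearchTypes]
```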