Compare commits

...

4 Commits

  1. .github/ISSUE_TEMPLATE/bug_report.md (32 changed lines)
  2. .github/ISSUE_TEMPLATE/feature_request.md (22 changed lines)
  3. .github/mergify.yml (34 changed lines)
  4. .github/workflows/cd-docs.yml (20 changed lines)
  5. .github/workflows/ci-docs.yml (24 changed lines)
  6. .github/workflows/docs.yml (27 changed lines)
  7. .github/workflows/release.yml (37 changed lines)
  8. .github/workflows/ruff.yml (25 changed lines)
  9. deepsearcher/agent/deep_search.py (173 changed lines)
  10. deepsearcher/backend/templates/index.html (2 changed lines)
  11. deepsearcher/config.yaml (2 changed lines)
  12. deepsearcher/configuration.py (2 changed lines)
  13. deepsearcher/llm/openai_llm.py (8 changed lines)
  14. deepsearcher/loader/splitter.py (11 changed lines)
  15. deepsearcher/online_query.py (6 changed lines)
  16. main.py (51 changed lines)

32
.github/ISSUE_TEMPLATE/bug_report.md

@@ -1,32 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
Please describe your issue **in English**.
*Note: Small LLMs do not follow prompts well and are prone to hallucinations. Please make sure your LLM is cutting-edge, preferably a reasoning model, e.g. OpenAI o-series, DeepSeek R1, Claude 3.7 Sonnet, etc.*
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: [e.g. MacOS]
- pip dependencies
- Version [e.g. 0.0.1]
**Additional context**
Add any other context about the problem here.

22
.github/ISSUE_TEMPLATE/feature_request.md

@@ -1,22 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
Please describe your suggestion **in English**.
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

34
.github/mergify.yml

@@ -1,34 +0,0 @@
misc:
- branch: &BRANCHES
# In this pull request, the changes are based on the main branch
- &MASTER_BRANCH base=main
- name: Label bug fix PRs
conditions:
# branch condition: in this pull request, the changes are based on any branch referenced by BRANCHES
- or: *BRANCHES
- 'title~=^fix:'
actions:
label:
add:
- kind/bug
- name: Label feature PRs
conditions:
# branch condition: in this pull request, the changes are based on any branch referenced by BRANCHES
- or: *BRANCHES
- 'title~=^feat:'
actions:
label:
add:
- kind/feature
- name: Label enhancement PRs
conditions:
# branch condition: in this pull request, the changes are based on any branch referenced by BRANCHES
- or: *BRANCHES
- 'title~=^enhance:'
actions:
label:
add:
- kind/enhancement

20
.github/workflows/cd-docs.yml

@@ -1,20 +0,0 @@
name: "Run Docs CD with UV"
on:
  push:
    branches:
      - "main"
      - "master"
    paths:
      - 'docs/**'
      - 'mkdocs.yml'
      - '.github/workflows/docs.yml'
jobs:
  build-deploy-docs:
    if: github.repository == 'zilliztech/deep-searcher'
    uses: ./.github/workflows/docs.yml
    with:
      deploy: true
    permissions:
      contents: write

24
.github/workflows/ci-docs.yml

@@ -1,24 +0,0 @@
name: "Run Docs CI with UV"
on:
  pull_request:
    types: [opened, reopened, synchronize]
    paths:
      - 'docs/**'
      - 'mkdocs.yml'
      - '.github/workflows/docs.yml'
  push:
    branches:
      - "**"
      - "!gh-pages"
    paths:
      - 'docs/**'
      - 'mkdocs.yml'
      - '.github/workflows/docs.yml'
jobs:
  build-docs:
    if: ${{ github.event_name == 'push' || (github.event.pull_request.head.repo.full_name != 'zilliztech/deep-searcher') }}
    uses: ./.github/workflows/docs.yml
    with:
      deploy: false

27
.github/workflows/docs.yml

@@ -1,27 +0,0 @@
on:
  workflow_call:
    inputs:
      deploy:
        type: boolean
        description: "If true, the docs will be deployed."
        default: false
jobs:
  run-docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install uv
        uses: astral-sh/setup-uv@v5
      - name: Install dependencies
        run: |
          uv sync --all-extras --dev
          source .venv/bin/activate
      - name: Build docs
        run: uv run mkdocs build --verbose --clean
      - name: Build and push docs
        if: inputs.deploy
        run: uv run mkdocs gh-deploy --force

37
.github/workflows/release.yml

@@ -1,37 +0,0 @@
# git tag v0.x.x  # Must be same as the version in pyproject.toml
# git push --tags
name: Publish Python Package to PyPI
on:
  push:
    tags:
      - "v*"
jobs:
  publish:
    name: Publish to PyPI
    runs-on: ubuntu-latest
    environment: pypi
    permissions:
      id-token: write
      contents: read
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      - name: Install build tools
        run: python -m pip install build
      - name: Build package
        run: python -m build
      - name: Publish to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1

25
.github/workflows/ruff.yml

@@ -1,25 +0,0 @@
name: Ruff
on:
  push:
    branches: [ main, master ]
  pull_request:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install uv
        uses: astral-sh/setup-uv@v5
      - name: Install the project
        run: |
          uv sync --all-extras --dev
          source .venv/bin/activate
      - name: Run Ruff
        run: |
          uv run ruff format --diff
          uv run ruff check
      # - name: Run tests
      #   run: uv run pytest tests

173
deepsearcher/agent/deep_search.py

@@ -1,5 +1,3 @@
import asyncio
from deepsearcher.agent.base import BaseAgent, describe_class
from deepsearcher.embedding.base import BaseEmbedding
from deepsearcher.llm.base import BaseLLM
@@ -7,20 +5,23 @@ from deepsearcher.utils import log
from deepsearcher.vector_db import RetrievalResult
from deepsearcher.vector_db.base import BaseVectorDB, deduplicate
COLLECTION_ROUTE_PROMPT = """
I provide you with collection_name(s) and corresponding collection_description(s).
Please select the collection names that may be related to the question and return a python list of str.
If there is no collection related to the question, you can return an empty list.
Please select the collection names that may be related to the query and return a python list of str.
If there is no collection related to the query, you can return an empty list.
"QUESTION": {question}
"Query": {query}
"COLLECTION_INFO": {collection_info}
When you return, you can ONLY return a JSON-convertible Python list of str, WITHOUT any other content.
Your selected collection name list is:
"""
SUB_QUERY_PROMPT = """
To answer this question more comprehensively, please break down the original question into few numbers of sub-questions (more if necessary).
To answer this question more comprehensively, please break down the original question into a few sub-questions
(the fewer the better, but more if necessary to ensure the original question can be answered).
If this is a very simple question and no decomposition is necessary, then keep only the original question.
Make sure each sub-question is clear, concise and atomic.
Return as a JSON-convertible, Python-style list of str.
@@ -43,16 +44,18 @@ Example output:
Provide your response in a python code list of str format:
"""
RERANK_PROMPT = """
Based on the query questions and the retrieved chunks, determine whether each chunk is helpful in answering any of the query questions.
For each chunk, you must return "YES" or "NO" without any other information.
Based on the query and the retrieved chunks, give a quick judgment of whether each chunk is helpful in answering the query.
For each chunk, you must return "YES" or "NO" in a Python-style list, without any other information.
Query Questions: {query}
Query: {query}
Retrieved Chunks:
{retrieved_chunks}
Respond with a list of "YES" or "NO" values, one for each chunk, in the same order as the chunks are listed. For example a list of chunks of three: ["YES", "NO", "YES"]
Respond with a list of "YES" or "NO" values, one for each chunk, in the same order as the chunks are listed.
For example, if there is a list of four chunks, the answer could be: ["YES", "NO", "YES", "YES"]
"""
@@ -60,16 +63,16 @@ REFLECT_PROMPT = """
Determine whether additional search queries are needed based on the original query, previous sub queries, and all retrieved document chunks.
If the returned chunks do not cover all previous sub-queries, it means that no related documents could be retrieved.
In this case, try to generate queries that are similar to, but slightly different from, the previous sub-queries.
And if further research is needed based on the new information, provide a Python list of more queries.
(which is preferred, even if the previous sub-queries can be well answered by retrieved chunks, but ultimately according to your judgment)
And if further research is needed based on the new information those chunks provide, give more queries building on them.
(which is preferred, but ultimately according to your judgment)
If no further research is needed, return an empty list.
Original Query: {question}
Original Query: {original_query}
Previous Sub Queries: {mini_questions}
Previous Sub Queries: {all_sub_queries}
Related Chunks:
{mini_chunk_str}
{chunks}
Respond exclusively in valid List of str format without any other text."""
@@ -79,14 +82,15 @@ You are an AI content analysis expert.
Please generate a long, specific and detailed answer or report based on the previous queries and the retrieved document chunks.
If the chunks are not enough to answer the query or additional information is needed to enhance the content, you should answer with your own knowledge.
In this case, mark the part(s) generated from your own knowledge with <unref>your knowledge here</unref>
(Don't place the <unref></unref> part(s) in a separate paragraph on their own, but insert them at the proper place in the context)
(Don't place the <unref></unref> part(s) in a separate paragraph on their own, but insert them at the proper place in the report)
Plus, you should give references in the report where you quote from the chunks using markdown links, and give a list of references at the end of the report.
Original Query: {question}
Original Query: {original_query}
Previous Sub Queries: {mini_questions}
Previous Sub Queries: {all_sub_queries}
Related Chunks:
{mini_chunk_str}
{chunks}
"""
@@ -108,7 +112,7 @@ class DeepSearch(BaseAgent):
embedding_model: BaseEmbedding,
vector_db: BaseVectorDB,
max_iter: int = 3,
route_collection: bool = True,
route_collection: bool = False,
text_window_splitter: bool = True,
**kwargs,
):
@@ -162,7 +166,7 @@ class DeepSearch(BaseAgent):
)
return [the_only_collection]
vector_db_search_prompt = COLLECTION_ROUTE_PROMPT.format(
question=query,
query=query,
collection_info=[
{
"collection_name": collection_info.collection_name,
@@ -198,7 +202,7 @@
content = self.llm.remove_think(content)
return self.llm.literal_eval(content)
async def _search_chunks_from_vectordb(self, query: str):
def _search_chunks_from_vectordb(self, query: str):
if self.route_collection:
selected_collections = self.invoke(
query=query, dim=self.embedding_model.dimension
@@ -222,7 +226,7 @@
# Format all chunks for batch processing
formatted_chunks = ""
for i, retrieved_result in enumerate(retrieved_results):
formatted_chunks += f"<chunk_{i}>\n{retrieved_result.text}\n</chunk_{i}>\n"
formatted_chunks += f'''
<chunk_{i + 1}>\n{retrieved_result.text}\n</chunk_{i + 1}>\n
<reference_{i + 1}>\n{retrieved_result.reference}\n</reference_{i + 1}>
'''
# Batch process all chunks with a single LLM call
content = self.llm.chat(
@@ -278,21 +285,27 @@
)
return all_retrieved_results
def _generate_gap_queries(
self, original_query: str, all_sub_queries: list[str], all_chunks: list[RetrievalResult]
def _generate_more_sub_queries(
self, original_query: str, all_sub_queries: list[str], all_retrieved_results: list[RetrievalResult]
) -> list[str]:
chunks = []
for i, chunk in enumerate(all_retrieved_results):
if self.text_window_splitter and "wider_text" in chunk.metadata:
chunks.append(chunk.metadata["wider_text"])
else:
chunks.append(f'''<chunk {i + 1}>{chunk.text}</chunk {i + 1}><reference {i + 1}>{chunk.reference}</reference {i + 1}>''')
reflect_prompt = REFLECT_PROMPT.format(
question=original_query,
mini_questions=all_sub_queries,
mini_chunk_str=self._format_chunk_texts([chunk.text for chunk in all_chunks])
if len(all_chunks) > 0
original_query=original_query,
all_sub_queries=all_sub_queries,
chunks="\n".join(chunks)
if len(all_retrieved_results) > 0
else "NO RELATED CHUNKS FOUND.",
)
response = self.llm.chat([{"role": "user", "content": reflect_prompt}])
response = self.llm.remove_think(response)
return self.llm.literal_eval(response)
def retrieve(self, original_query: str, **kwargs) -> tuple[list[RetrievalResult], dict]:
def retrieve(self, original_query: str, **kwargs) -> tuple[list[RetrievalResult], list[str]]:
"""
Retrieve relevant documents from the knowledge base for the given query.
@@ -308,15 +321,10 @@
- A list of retrieved document results
- Additional information about the retrieval process
"""
return asyncio.run(self.async_retrieve(original_query, **kwargs))
async def async_retrieve(
self, original_query: str, **kwargs
) -> tuple[list[RetrievalResult], dict]:
max_iter = kwargs.pop("max_iter", self.max_iter)
### SUB QUERIES ###
log.color_print(f"<query> {original_query} </query>\n")
all_search_res = []
all_search_results = []
all_sub_queries = []
sub_queries = self._generate_sub_queries(original_query)
@@ -324,54 +332,46 @@
log.color_print("No sub queries were generated by the LLM. Exiting.")
return [], {}
else:
log.color_print(
f"<think> Break down the original query into new sub queries: {sub_queries}</think>\n"
)
log.color_print(f"</think> Break down the original query into new sub queries: {sub_queries} ")
all_sub_queries.extend(sub_queries)
sub_gap_queries = sub_queries
for iter in range(max_iter):
log.color_print(f">> Iteration: {iter + 1}\n")
search_res_from_vectordb = []
search_res_from_internet = [] # TODO
# Create all search tasks
search_tasks = [
self._search_chunks_from_vectordb(query)
for query in sub_gap_queries
]
# Execute all tasks in parallel and wait for results
search_results = await asyncio.gather(*search_tasks)
# Merge all results
for result in search_results:
search_res = result
search_res_from_vectordb.extend(search_res)
search_res_from_vectordb = deduplicate(search_res_from_vectordb)
for it in range(max_iter):
log.color_print(f">> Iteration: {it + 1}\n")
# Execute all search tasks sequentially
for query in sub_queries:
result = self._search_chunks_from_vectordb(query)
all_search_results.extend(result)
undeduped_len = len(all_search_results)
all_search_results = deduplicate(all_search_results)
deduped_len = len(all_search_results)
if undeduped_len - deduped_len != 0:
log.color_print(
f"<search> Removed {undeduped_len - deduped_len} duplicates </search> "
)
# search_res_from_internet = deduplicate_results(search_res_from_internet)
all_search_res.extend(search_res_from_vectordb + search_res_from_internet)
if iter == max_iter - 1:
log.color_print("<think> Exceeded maximum iterations. Exiting. </think>\n")
# all_search_res.extend(search_res_from_vectordb + search_res_from_internet)
if it == max_iter - 1:
log.color_print("</think> Exceeded maximum iterations. Exiting. ")
break
### REFLECTION & GET GAP QUERIES ###
log.color_print("<think> Reflecting on the search results... </think>\n")
sub_gap_queries = self._generate_gap_queries(
original_query, all_sub_queries, all_search_res
### REFLECTION & GET MORE SUB QUERIES ###
log.color_print("</think> Reflecting on the search results... ")
sub_queries = self._generate_more_sub_queries(
original_query, all_sub_queries, all_search_results
)
if not sub_gap_queries or len(sub_gap_queries) == 0:
log.color_print("<think> No new search queries were generated. Exiting. </think>\n")
if not sub_queries or len(sub_queries) == 0:
log.color_print("</think> No new search queries were generated. Exiting. ")
break
else:
log.color_print(
f"<think> New search queries for next iteration: {sub_gap_queries} </think>\n"
)
all_sub_queries.extend(sub_gap_queries)
f"</think> New search queries for next iteration: {sub_queries} ")
all_sub_queries.extend(sub_queries)
all_search_res = deduplicate(all_search_res)
additional_info = {"all_sub_queries": all_sub_queries}
return all_search_res, additional_info
all_search_results = deduplicate(all_search_results)
return all_search_results, all_sub_queries
def query(self, query: str, **kwargs) -> tuple[str, list[RetrievalResult]]:
def query(self, original_query: str, **kwargs) -> tuple[str, list[RetrievalResult]]:
"""
Query the agent and generate an answer based on retrieved documents.
@@ -387,31 +387,24 @@
- The generated answer
- A list of retrieved document results
"""
all_retrieved_results, additional_info = self.retrieve(query, **kwargs)
all_retrieved_results, all_sub_queries = self.retrieve(original_query, **kwargs)
if not all_retrieved_results or len(all_retrieved_results) == 0:
return f"No relevant information found for query '{query}'.", []
all_sub_queries = additional_info["all_sub_queries"]
chunk_texts = []
for chunk in all_retrieved_results:
return f"No relevant information found for query '{original_query}'.", []
chunks = [] # type: list[str]
for i, chunk in enumerate(all_retrieved_results):
if self.text_window_splitter and "wider_text" in chunk.metadata:
chunk_texts.append(chunk.metadata["wider_text"])
chunks.append(chunk.metadata["wider_text"])
else:
chunk_texts.append(chunk.text)
chunks.append(f'''<chunk {i + 1}>{chunk.text}</chunk {i + 1}><reference {i + 1}>{chunk.reference}</reference {i + 1}>''')
log.color_print(
f"<think> Summarize answer from all {len(all_retrieved_results)} retrieved chunks... </think>\n"
)
summary_prompt = SUMMARY_PROMPT.format(
question=query,
mini_questions=all_sub_queries,
mini_chunk_str=self._format_chunk_texts(chunk_texts),
original_query=original_query,
all_sub_queries=all_sub_queries,
chunks="\n".join(chunks)
)
response = self.llm.chat([{"role": "user", "content": summary_prompt}])
log.color_print("\n==== FINAL ANSWER====\n")
log.color_print(self.llm.remove_think(response))
return self.llm.remove_think(response), all_retrieved_results
def _format_chunk_texts(self, chunk_texts: list[str]) -> str:
chunk_str = ""
for i, chunk in enumerate(chunk_texts):
chunk_str += f"""<chunk_{i}>\n{chunk}\n</chunk_{i}>\n"""
return chunk_str
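For orientation, a minimal caller-side sketch of the reshaped DeepSearch interface after this diff: retrieve() now returns the retrieved results together with the list of all sub-queries instead of an additional-info dict, and query() takes original_query. The configuration wiring below follows the snippets elsewhere in this compare; treat the exact constructor and config-loading behaviour as assumptions, not repository code.

    # Hypothetical usage sketch, not part of this diff; configuration wiring is assumed.
    from deepsearcher import configuration
    from deepsearcher.configuration import Configuration, init_config

    config = Configuration()      # assumed to pick up deepsearcher/config.yaml defaults
    init_config(config)

    searcher = configuration.default_searcher
    # retrieve() now returns (retrieved results, all sub-queries)
    results, all_sub_queries = searcher.retrieve("How does sentence-window splitting work?")
    # query() now takes `original_query` and returns (answer, retrieved results)
    answer, retrieved = searcher.query("How does sentence-window splitting work?")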

2
deepsearcher/backend/templates/index.html

@@ -297,7 +297,6 @@
<div id="queryResult" class="result-container">
<h3>Query results:</h3>
<div class="query-result" id="resultText"></div>
<div id="tokenInfo"></div>
</div>
</div>
</main>
@@ -515,7 +514,6 @@
if (response.ok) {
showStatus('queryStatus', '查询完成', 'success');
document.getElementById('resultText').textContent = data.result;
document.getElementById('tokenInfo').textContent = `Tokens consumed: ${data.consume_token}`;
showResult();
// Display the progress log

2
deepsearcher/config.yaml

@@ -80,7 +80,7 @@ provide_settings:
# port: 6333
query_settings:
max_iter: 3
max_iter: 2
load_settings:
chunk_size: 2048

2
deepsearcher/configuration.py

@@ -210,6 +210,6 @@ def init_config(config: Configuration):
embedding_model=embedding_model,
vector_db=vector_db,
max_iter=config.query_settings["max_iter"],
route_collection=True,
route_collection=False,
text_window_splitter=True,
)

8
deepsearcher/llm/openai_llm.py

@@ -32,7 +32,7 @@ class OpenAILLM(BaseLLM):
base_url = kwargs.pop("base_url")
self.client = OpenAI(api_key=api_key, base_url=base_url, **kwargs)
def chat(self, messages: list[dict], stream_callback = None) -> str:
def chat(self, messages: list[dict]) -> str:
"""
Send a chat message to the OpenAI model and get a response.
@@ -47,7 +47,7 @@ class OpenAILLM(BaseLLM):
with self.client.chat.completions.create(
model=self.model,
messages=messages,
stream=True,
stream=True
) as stream:
content = ""
reasoning_content = ""
@@ -59,12 +59,8 @@
if hasattr(delta, 'reasoning_content') and delta.reasoning_content is not None:
print(delta.reasoning_content, end='', flush=True)
reasoning_content += delta.reasoning_content
if stream_callback:
stream_callback(delta.reasoning_content)
if hasattr(delta, 'content') and delta.content is not None:
print(delta.content, end="", flush=True)
content += delta.content
if stream_callback:
stream_callback(delta.content)
print("\n")
return content
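With stream_callback gone, chat() prints the streamed deltas itself and returns only the accumulated content string. A minimal sketch of a caller under that contract; the constructor arguments and model name below are placeholders, since only the api_key/base_url handling is visible in the hunks above.

    # Hypothetical caller sketch; constructor arguments are placeholders/assumptions.
    from deepsearcher.llm.openai_llm import OpenAILLM

    llm = OpenAILLM(model="o4-mini", api_key="sk-...", base_url="https://api.openai.com/v1")
    # Streaming output goes to stdout inside chat(); only the final text is returned.
    answer = llm.chat([{"role": "user", "content": "Give a one-sentence summary of RAG."}])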

11
deepsearcher/loader/splitter.py

@@ -1,7 +1,6 @@
## Sentence Window splitting strategy, ref:
# https://github.com/milvus-io/bootcamp/blob/master/bootcamp/RAG/advanced_rag/sentence_window_with_langchain.ipynb
from typing import List
from langchain_core.documents import Document
from langchain_text_splitters import RecursiveCharacterTextSplitter
@@ -26,7 +25,7 @@ class Chunk:
text: str,
reference: str,
metadata: dict = None,
embedding: List[float] = None,
embedding: list[float] = None,
):
"""
Initialize a Chunk object.
@@ -44,8 +43,8 @@
def _sentence_window_split(
split_docs: List[Document], original_document: Document, offset: int = 200
) -> List[Chunk]:
split_docs: list[Document], original_document: Document, offset: int = 200
) -> list[Chunk]:
"""
Create chunks with context windows from split documents.
@@ -78,8 +77,8 @@ def _sentence_window_split(
def split_docs_to_chunks(
documents: List[Document], chunk_size: int = 1500, chunk_overlap=100
) -> List[Chunk]:
documents: list[Document], chunk_size: int = 1500, chunk_overlap=100
) -> list[Chunk]:
"""
Split documents into chunks with context windows.

6
deepsearcher/online_query.py

@@ -39,10 +39,10 @@ def retrieve(
Returns:
A tuple containing:
- A list of retrieval results
- An empty list (placeholder for future use)
- A list of strings representing consumed tokens
"""
default_searcher = configuration.default_searcher
retrieved_results, consume_tokens, metadata = default_searcher.retrieve(
retrieved_results, metadata = default_searcher.retrieve(
original_query, max_iter=max_iter
)
return retrieved_results, []
return retrieved_results
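A matching sketch for the slimmed-down wrapper: online_query.retrieve() now unpacks two values from the searcher and hands back only the retrieval results. It assumes the configuration has already been initialized, as in the sketch further above.

    # Hypothetical usage sketch; assumes init_config has been called beforehand.
    from deepsearcher.online_query import retrieve

    results = retrieve("What is sentence-window retrieval?", max_iter=2)
    for r in results:
        print(r.reference, r.text[:80])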

51
main.py

@@ -38,55 +38,15 @@ async def read_root():
"""
Serve the main HTML page.
"""
# Get the directory containing the current file
current_dir = os.path.dirname(os.path.abspath(__file__))
# Build the template file path - fixes a path issue
template_path = os.path.join(current_dir, "deepsearcher", "backend", "templates", "index.html")
# Read the HTML file content
try:
with open(template_path, encoding="utf-8") as file:
html_content = file.read()
return HTMLResponse(content=html_content, status_code=200)
except FileNotFoundError:
# If the file cannot be found, serve a simple default page
default_html = f"""
<!DOCTYPE html>
<html>
<head>
<title>DeepSearcher</title>
<meta charset="utf-8">
<style>
body {{ font-family: Arial, sans-serif; margin: 40px; }}
.container {{ max-width: 800px; margin: 0 auto; }}
h1 {{ color: #333; }}
.info {{ background: #f0f8ff; padding: 15px; border-radius: 5px; }}
.error {{ background: #ffe4e1; padding: 15px; border-radius: 5px; color: #d00; }}
</style>
</head>
<body>
<div class="container">
<h1>DeepSearcher</h1>
<div class="info">
<p>Welcome to the DeepSearcher intelligent search system!</p>
<p>The system is running, but the front-end template file was not found</p>
<p>Please check whether the file exists: {template_path}</p>
</div>
<div class="info">
<h2>API Endpoints</h2>
<p>You can still use the system through the following API endpoints:</p>
<ul>
<li><code>POST /load-files/</code> - Load local files</li>
<li><code>POST /load-website/</code> - Load website content</li>
<li><code>GET /query/</code> - Run a query</li>
</ul>
<p>For API usage details, see the <a href="/docs">API documentation</a></p>
</div>
</div>
</body>
</html>
"""
return HTMLResponse(content=default_html, status_code=200)
raise HTTPException(status_code=404, detail="Template file not found")
@app.post("/set-provider-config/")
@@ -154,12 +114,11 @@ def load_files(
HTTPException: If loading files fails.
"""
try:
# Fix the case where batch_size is None
load_from_local_files(
paths_or_directory=paths,
collection_name=collection_name,
collection_description=collection_description,
batch_size=batch_size if batch_size is not None else 8, # provide a default value
batch_size=batch_size if batch_size is not None else 8,
)
return {"message": "Files loaded successfully."}
except Exception as e:
@@ -205,12 +164,11 @@ def load_website(
HTTPException: If loading website content fails.
"""
try:
# Fix the case where batch_size is None
load_from_website(
urls=urls,
collection_name=collection_name,
collection_description=collection_description,
batch_size=batch_size if batch_size is not None else 256, # provide a default value
batch_size=batch_size if batch_size is not None else 8,
)
return {"message": "Website loaded successfully."}
except Exception as e:
@@ -249,7 +207,7 @@ def perform_query(
from deepsearcher.utils.log import clear_progress_messages
clear_progress_messages()
result_text, _, consume_token = query(original_query, max_iter)
result_text, _ = query(original_query, max_iter)
# Get the progress messages
from deepsearcher.utils.log import get_progress_messages
@@ -257,7 +215,6 @@
return {
"result": result_text,
"consume_token": consume_token,
"progress_messages": progress_messages
}
except Exception as e:
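For completeness, a hedged example of exercising the updated /query/ endpoint, whose response no longer carries consume_token; the host/port and the exact query-parameter names are assumptions based on the handler shown above.

    # Hypothetical request sketch; URL, port, and parameter names are assumptions.
    import requests

    resp = requests.get(
        "http://localhost:8000/query/",
        params={"original_query": "What is DeepSearcher?", "max_iter": 2},
    )
    print(resp.json()["result"])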
