# Frequently Asked Questions

## 🔍 Common Issues and Solutions

---

### 💬 Q1: Why am I failing to parse LLM output format / How to select the right LLM?

Solution: Small language models often struggle to follow prompts and generate responses in the expected format. For better results, we recommend using large reasoning models.

These models provide superior reasoning capabilities and are more likely to produce correctly formatted outputs.
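If you configure the LLM through DeepSearcher's Python configuration API, switching to a stronger model is a small change. The sketch below assumes the `Configuration.set_provider_config` interface; the provider name `"OpenAI"` and model `"o1-mini"` are only examples, so substitute whichever reasoning model you have access to.

```python
from deepsearcher.configuration import Configuration, init_config

config = Configuration()

# Point the "llm" provider at a large reasoning model instead of a small chat model.
# Provider and model names here are illustrative; adjust them to your own setup.
config.set_provider_config("llm", "OpenAI", {"model": "o1-mini"})

init_config(config=config)
```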

---

### 🌐 Q2: "We couldn't connect to 'https://huggingface.co'" error

Error Message:

OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like GPTCache/paraphrase-albert-small-v2 is not the path to a directory containing a file named config.json. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.

Solution: This issue is typically caused by restricted network access to Hugging Face. Try one of the following:

Network Issue? Try Using a Mirror

```bash
export HF_ENDPOINT=https://hf-mirror.com
```

Permission Issue? Set Up a Personal Token

```bash
export HUGGING_FACE_HUB_TOKEN=xxxx
```
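If you cannot export shell variables (for example, in a managed notebook environment), the same settings can be applied from Python, provided they are set before any Hugging Face library is imported, since the endpoint is read when those modules load. The values below are placeholders.

```python
import os

# Set these BEFORE importing transformers / huggingface_hub (or anything that imports them).
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"   # mirror endpoint
os.environ["HUGGING_FACE_HUB_TOKEN"] = "hf_xxxx"      # placeholder token
```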
---

### 📓 Q3: DeepSearcher doesn't run in Jupyter notebook

Solution: This is a common issue with asyncio in Jupyter notebooks: the notebook already runs its own event loop, so code that tries to start another one fails. Install nest_asyncio and add the following code to the top of your notebook:

Step 1: Install the required package

```bash
pip install nest_asyncio
```

Step 2: Add these lines to the beginning of your notebook

```python
import nest_asyncio
nest_asyncio.apply()
```
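For context, the patch is needed because the Jupyter kernel already runs an asyncio event loop, so any nested `asyncio.run()` call (which DeepSearcher may make internally) fails until the loop is patched. A minimal sketch:

```python
import asyncio
import nest_asyncio

nest_asyncio.apply()  # allow asyncio.run() inside Jupyter's already-running event loop

async def main():
    await asyncio.sleep(0.1)
    return "ok"

# Without nest_asyncio.apply(), this line raises
# "RuntimeError: asyncio.run() cannot be called from a running event loop" in a notebook.
print(asyncio.run(main()))
```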