# Dynamic Model Context Protocol (MCP) Client

A Python implementation of a Model Context Protocol (MCP) client that dynamically interacts with local LLMs through Ollama and an MCP server.

## Overview

This project provides a complete system for building an MCP client that can:

1. Connect to a local Llama 3.2 model running on Ollama
2. Dynamically decide when to use external context
3. Communicate with an MCP server for additional information
4. Present a user-friendly chat interface

The key innovation in this implementation is the dynamic decision-making process: the LLM itself determines when external context is needed and which specific tools to use.

## Components

- **Dynamic MCP Client**: Core client that handles communication with both Ollama and the MCP server
- **Enhanced MCP Server**: Flask-based server that provides context through registered tools
- **Chat Interface**: Interactive CLI for chatting with the LLM with MCP awareness
- **Setup Script**: Easy installation and setup process

## Requirements

- Python 3.6+
- [Ollama](https://ollama.ai/) with the Llama 3.2 model installed
- Required Python packages:
  - flask
  - requests
  - sseclient-py

## Installation

1. Clone this repository:

   ```bash
   git clone https://github.com/yourusername/dynamic-mcp-client.git
   cd dynamic-mcp-client
   ```

2. Run the setup script:

   ```bash
   chmod +x setup.sh
   ./setup.sh
   ```

   This creates a virtual environment, installs dependencies, and generates the run scripts.

## Usage

### Starting the System

1. Start the MCP server in one terminal:

   ```bash
   ./run_server.sh
   ```

2. Start the chat interface in another terminal:

   ```bash
   ./run_chat.sh
   ```

3. Make sure Ollama is running with the Llama 3.2 model:

   ```bash
   ollama run llama3.2
   ```

### Chat Commands

- Type your messages normally to chat with the LLM
- Type `exit`, `quit`, or `q` to end the chat session

## How It Works

### Dynamic Decision Process

When you send a message, the system:

1. Asks the LLM whether external context is needed for your query
2. If so, determines which tool to use and what parameters to pass
3. Calls the appropriate tool on the MCP server
4. Enhances your original query with the retrieved context
5. Sends the enhanced query to the LLM for the final response

### Tools Available

The MCP server provides these tools:

- **knowledge_retrieval**: Fetches information on specific topics from the knowledge base
- **topic_search**: Searches across the knowledge base using keywords

## Project Structure

```
dynamic-mcp-client/
├── dynamic_mcp_client.py   # Core client implementation
├── dynamic_mcp_server.py   # MCP server with tools
├── dynamic_mcp_chat.py     # Interactive chat interface
├── setup.sh                # Setup script
├── run_server.sh           # Script to run the server
├── run_chat.sh             # Script to run the chat interface
├── data/                   # Directory for knowledge base data
│   └── knowledge_base.json # Sample knowledge base
├── venv/                   # Virtual environment (created by setup)
└── mcp_chat_logs.jsonl     # Chat interaction logs
```

## Extending the System

### Adding New Tools

To add a new tool to the MCP server:

1. Register it in the `_register_tools` method
2. Implement the tool's functionality in a new method
3. Update the `process_query` method to handle the new tool

### Enhancing the Knowledge Base

To add information to the knowledge base:

1. Use the `/add` endpoint of the MCP server:

   ```bash
   curl -X POST http://localhost:3000/add \
     -H "Content-Type: application/json" \
     -d '{"key": "new_topic", "data": {"title": "New Topic", "description": "Description", "details": ["Detail 1", "Detail 2"]}}'
   ```

2. Or manually edit the `knowledge_base.json` file in the `data` directory

## Logging

All interactions are logged to `mcp_chat_logs.jsonl` for analysis. Each log entry includes:

- User input
- Assistant response
- Whether MCP context was used
- The tool used (if any)
- The reasoning behind the decision
- Processing time

## License

MIT

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
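As a rough illustration, the decision and query-enhancement steps described under "How It Works" might be structured like the sketch below. This is a hypothetical illustration, not the repository's actual code: the decision prompt wording, the JSON field names (`needs_context`, `tool`, `params`, `reasoning`), and the helper functions are all assumptions; see `dynamic_mcp_client.py` for the real implementation.

```python
import json

# Hypothetical sketch of the client's decision step. The prompt text,
# JSON schema, and helpers are illustrative assumptions only.
DECISION_PROMPT = (
    "Decide whether the user's query needs external context.\n"
    'Reply with JSON only, e.g. {"needs_context": true, '
    '"tool": "topic_search", "params": {"keywords": "..."}, '
    '"reasoning": "..."}\n\nQuery: '
)

def parse_decision(raw: str) -> dict:
    """Parse the LLM's JSON decision, defaulting to 'no context needed'."""
    try:
        decision = json.loads(raw)
    except json.JSONDecodeError:
        decision = {}
    return {
        "needs_context": bool(decision.get("needs_context", False)),
        "tool": decision.get("tool"),
        "params": decision.get("params", {}),
        "reasoning": decision.get("reasoning", ""),
    }

def enhance_query(query: str, context: str) -> str:
    """Prepend retrieved context to the user's original query (step 4)."""
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Falling back to `needs_context: false` on a malformed reply keeps the chat usable even when the model fails to emit valid JSON.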
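The registration-and-dispatch pattern described under "Adding New Tools" could be sketched as follows. The `ToolRegistry` class and the example `word_count` tool are hypothetical; the actual server registers its tools in `_register_tools` and dispatches them in `process_query`.

```python
# Hypothetical sketch of a name-to-callable tool registry; not the
# server's actual internals.
class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, name, func, description=""):
        """Map a tool name to a callable plus a short description."""
        self._tools[name] = {"func": func, "description": description}

    def call(self, name, **params):
        """Dispatch a tool call by name, as process_query might."""
        if name not in self._tools:
            raise KeyError(f"Unknown tool: {name}")
        return self._tools[name]["func"](**params)

# Registering a new (hypothetical) tool is then a single call:
registry = ToolRegistry()
registry.register("word_count", lambda text: len(text.split()),
                  description="Counts words in a piece of text")
```

Keeping tools in a dict like this means step 3 (updating the dispatch logic) reduces to a single lookup rather than a growing if/elif chain.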