From def51a70adea777f4b53c1c13338a85a7802bebd Mon Sep 17 00:00:00 2001
From: Olivier Chafik
Date: Tue, 20 Jan 2026 01:08:01 +0000
Subject: [PATCH 1/2] fix(say-server): clean up README

- Add screenshot reference
- Remove GPU prerequisite (not needed, wasn't requested)
- Use GitHub raw URLs instead of local file paths
- Simplify Docker section
- Fix Claude Desktop config to use remote URL
---
 examples/say-server/README.md | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/examples/say-server/README.md b/examples/say-server/README.md
index b6629ef8..94c5a9fb 100644
--- a/examples/say-server/README.md
+++ b/examples/say-server/README.md
@@ -2,6 +2,8 @@
 
 A real-time text-to-speech MCP App with karaoke-style text highlighting, powered by [Kyutai's Pocket TTS](https://github.com/kyutai-labs/pocket-tts).
 
+![Screenshot](screenshot.png)
+
 ## MCP App Features Demonstrated
 
 This example showcases several MCP App capabilities:
@@ -26,35 +28,31 @@ This example showcases several MCP App capabilities:
 
 ## Prerequisites
 
-- [uv](https://docs.astral.sh/uv/getting-started/installation/) - fast Python package manager
-- A CUDA GPU (recommended) or CPU with sufficient RAM (~2GB for model)
+- [uv](https://docs.astral.sh/uv/) - fast Python package manager
 
 ## Quick Start
 
-The server is a single self-contained Python file that can be run directly with `uv`:
+The server is a single self-contained Python file that can be run directly from GitHub:
 
 ```bash
-# Run directly (uv auto-installs dependencies)
-uv run examples/say-server/server.py
+# Run directly from GitHub (uv auto-installs dependencies)
+uv run https://raw.githubusercontent.com/modelcontextprotocol/ext-apps/main/examples/say-server/server.py
 ```
 
 The server will be available at `http://localhost:3109/mcp`.
 
 ## Running with Docker
 
-Run directly from GitHub using the official `uv` Docker image. Mount your HuggingFace cache to avoid re-downloading the model:
+Run directly from GitHub using the official `uv` Docker image:
 
 ```bash
 docker run --rm -it \
   -p 3109:3109 \
   -v ~/.cache/huggingface:/root/.cache/huggingface \
-  -e HF_HOME=/root/.cache/huggingface \
   ghcr.io/astral-sh/uv:debian \
   uv run https://raw.githubusercontent.com/modelcontextprotocol/ext-apps/main/examples/say-server/server.py
 ```
 
-For GPU support, add `--gpus all` (requires [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html)).
-
 ## Usage
 
 ### With Claude Desktop
@@ -66,8 +64,11 @@ Add to your Claude Desktop config (`~/Library/Application Support/Claude/claude_
   "mcpServers": {
     "say": {
       "command": "uv",
-      "args": ["run", "server.py", "--stdio"],
-      "cwd": "/path/to/examples/say-server"
+      "args": [
+        "run",
+        "https://raw.githubusercontent.com/modelcontextprotocol/ext-apps/main/examples/say-server/server.py",
+        "--stdio"
+      ]
     }
   }
 }

From 198a1be034a587da636596d1aadf634c204717cf Mon Sep 17 00:00:00 2001
From: Olivier Chafik
Date: Tue, 20 Jan 2026 10:46:21 +0000
Subject: [PATCH 2/2] fix(say-server): use isolated HuggingFace cache for
 Docker

Avoid sharing the host's HF cache with the Docker container, since
downloaded models may contain unsafe pickled files or code.
---
 examples/say-server/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/examples/say-server/README.md b/examples/say-server/README.md
index 94c5a9fb..9718fe4f 100644
--- a/examples/say-server/README.md
+++ b/examples/say-server/README.md
@@ -48,7 +48,7 @@ Run directly from GitHub using the official `uv` Docker image:
 ```bash
 docker run --rm -it \
   -p 3109:3109 \
-  -v ~/.cache/huggingface:/root/.cache/huggingface \
+  -v ~/.cache/huggingface-docker-say-server:/root/.cache/huggingface \
   ghcr.io/astral-sh/uv:debian \
   uv run https://raw.githubusercontent.com/modelcontextprotocol/ext-apps/main/examples/say-server/server.py
 ```