Welcome to the blog of a data science voyager: hands-on machine learning, experiments with large language models, and dataset exploration.
TL;DR: I set up a local MCP server (FastMCP) with Ollama models to read and write markdown notes, then stress-tested tool calling over 50 runs per model. Result: small local models are surprisingly unstable, but some patterns (gates, an explicit END_OF_RUN tag, better catalogs) make them more usable.
Exploring FineVision with CLIP and SSCD: embeddings, FAISS range search for near-duplicates, and UMAP visuals. A hands-on introduction to multimodal datasets.
TL;DR: I set up three experiments: Cline as an in-IDE agent, CrewAI for multi-agent work, and a RAG over my Obsidian notes using Chroma and LlamaIndex. Everything runs locally via Ollama on a 16 GB GPU. The models aren't quite there in terms of reasoning power, but you can still get interesting results.