LLM Tutorials
Keep up to date with the latest news, techniques, and resources for Large Language Models. Our tutorials are full of practical walkthroughs and use cases you can use to upskill.
Run Qwen3-Coder-Next Locally: Vibe Code an Analytics Dashboard
Run Qwen3-Coder-Next locally on an RTX 3090 with llama.cpp, then vibe code a complete analytics dashboard in minutes using Qwen Code CLI.
Abid Ali Awan
February 6, 2026
How to Run Kimi K2.5 Locally
Learn how to run a top open-source model locally with llama.cpp, connect it to the Kimi CLI, and one-shot an interactive game using vibe coding.
Abid Ali Awan
February 5, 2026
FLUX.2 Klein Tutorial: Building a Generate-and-Edit Image App with Gradio
Learn how to combine local FLUX.2 Klein 4B generation with API-based image editing, multi-reference conditioning, and session history to create an image editor with Gradio.
Aashi Dutt
February 3, 2026
Using Claude Code With Ollama Local Models
Run GLM 4.7 Flash locally (RTX 3090) with Claude Code and Ollama in minutes: no cloud, no lock-in, just speed and control.
Abid Ali Awan
February 3, 2026
Kimi K2.5 and Agent Swarm: A Guide With Four Practical Examples
Learn what Moonshot’s Kimi K2.5 is, how Agent Swarm works, and see it in action through four hands-on, real-world experiments.
Aashi Dutt
January 29, 2026
OpenClaw (Clawdbot) Tutorial: Control Your PC from WhatsApp
Set up OpenClaw (formerly Clawdbot/Moltbot), a self-hosted agent connecting Claude to your Mac via WhatsApp. Search files and run shell commands from your phone.
Bex Tuychiev
February 2, 2026
Transformers v5 Tokenization: Architecture and Migration Guide
Upgrade to Transformers v5. A practical guide to the unified Rust backend, API changes, and side-by-side v4 vs v5 migration patterns for encoding and chat.
Aashi Dutt
January 27, 2026
Google MCP Servers Tutorial: Deploying Agentic AI on GCP
Explore the architecture of Google’s managed MCP servers and learn how to turn LLMs into proactive operators for BigQuery, Maps, GCE, and Kubernetes.
Aryan Irani
January 26, 2026
How to Run GLM-4.7 Locally with llama.cpp: A High-Performance Guide
Set up llama.cpp to run the GLM-4.7 model on a single NVIDIA H100 80GB GPU, achieving up to 20 tokens per second with GPU offloading, Flash Attention, an optimized context size, efficient batching, and tuned CPU threading.
Abid Ali Awan
January 26, 2026
How to Run GLM 4.7 Flash Locally
Learn how to run GLM-4.7-Flash on an RTX 3090 for fast local inference and integrate it with OpenCode to build a fully local, automated AI coding agent.
Abid Ali Awan
January 22, 2026
How to Fine-Tune FunctionGemma: A Step-by-Step Guide
Learn how to fine-tune FunctionGemma in under 10 minutes using Kaggle’s free GPUs, from dataset preparation and baseline evaluation to training and post-fine-tuning validation.
Abid Ali Awan
January 21, 2026
Claude Code Hooks: A Practical Guide to Workflow Automation
Learn how hook-based automation works and get started using Claude Code hooks to automate tasks like testing, formatting, and notifications.
Bex Tuychiev
January 19, 2026