Category

LLM Tutorials

Keep up with the latest news, techniques, and resources on large language models. Our tutorials are packed with practical, step-by-step guides and use cases you can apply to build your skills.

How to Run Kimi K2.5 Locally

Learn how to run a top open-source model locally with llama.cpp, connect it to the Kimi CLI, and one-shot an interactive game using vibe coding.

Abid Ali Awan

February 5, 2026

Using Claude Code With Ollama Local Models

Run GLM 4.7 Flash locally (RTX 3090) with Claude Code and Ollama in minutes: no cloud, no lock-in, just pure speed and control.
Abid Ali Awan

February 3, 2026

Kimi K2.5 and Agent Swarm: A Guide With Four Practical Examples

Learn what Moonshot’s Kimi K2.5 is, how Agent Swarm works, and see it in action through four hands-on, real-world experiments.
Aashi Dutt

January 29, 2026

OpenClaw (Clawdbot) Tutorial: Control Your PC from WhatsApp

Set up OpenClaw (formerly Clawdbot/Moltbot), a self-hosted agent connecting Claude to your Mac via WhatsApp. Search files and run shell commands from your phone.
Bex Tuychiev

February 2, 2026

Transformers v5 Tokenization: Architecture and Migration Guide

Upgrade to Transformers v5: a practical guide to the unified Rust backend, API changes, and side-by-side v4 vs. v5 migration patterns for encoding and chat.
Aashi Dutt

January 27, 2026

Google MCP Servers Tutorial: Deploying Agentic AI on GCP

Explore the architecture of Google’s managed MCP servers and learn how to turn LLMs into proactive operators for BigQuery, Maps, GCE, and Kubernetes.
Aryan Irani

January 26, 2026

How to Run GLM-4.7 Locally with llama.cpp: A High-Performance Guide

Set up llama.cpp to run the GLM-4.7 model on a single NVIDIA H100 80GB GPU, achieving up to 20 tokens per second with GPU offloading, Flash Attention, an optimized context size, efficient batching, and tuned CPU threading.
Abid Ali Awan

January 26, 2026
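The llama.cpp settings this guide covers (GPU offloading, Flash Attention, context size, batching, and CPU threading) map to a single server invocation. The sketch below is illustrative only: the model filename and the numeric values are assumptions, and flag spellings can vary between llama.cpp releases.

```shell
# Hypothetical llama.cpp invocation, assuming a GGUF build of GLM-4.7
# and a recent llama.cpp release.
#   -ngl 999   offload all model layers to the GPU
#   -fa        enable Flash Attention
#   -c 32768   context window size in tokens
#   -b 2048    logical batch size for prompt processing
#   -t 16      CPU threads for any work not offloaded to the GPU
llama-server \
  -m GLM-4.7-Q4_K_M.gguf \
  -ngl 999 \
  -fa \
  -c 32768 \
  -b 2048 \
  -t 16 \
  --port 8080
```

Once the server is up, any OpenAI-compatible client can point at `http://localhost:8080` for local inference.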

How to Run GLM 4.7 Flash Locally

Learn how to run GLM-4.7-Flash on an RTX 3090 for fast local inference, and integrate it with OpenCode to build a fully local, automated AI coding agent.
Abid Ali Awan

January 22, 2026

How to Fine-Tune FunctionGemma: A Step-by-Step Guide

Learn how to fine-tune FunctionGemma in under 10 minutes using Kaggle’s free GPUs, from dataset preparation and baseline evaluation to training and post-fine-tuning validation.
Abid Ali Awan

January 21, 2026

Claude Code Hooks: A Practical Guide to Workflow Automation

Learn how hook-based automation works and get started using Claude Code hooks to automate coding tasks like testing, formatting, and receiving notifications.
Bex Tuychiev

January 19, 2026

Claude Cowork Tutorial: How to Use Anthropic's AI Desktop Agent

Learn what Claude Cowork is and how to use it for file organization, document generation, and browser automation. A hands-on tutorial with real examples and limitations.
Bex Tuychiev

January 16, 2026