LLM Tutorials
Follow the latest news, techniques, and resources for Large Language Models. Our tutorials are packed with practical, step-by-step guides and use cases you can apply to level up your skills.
FLUX.2 Klein Tutorial: Building a Generate-and-Edit Image App with Gradio
Learn how to combine local FLUX.2 Klein 4B generation with API-based image editing, multi-reference conditioning, and session history to create an image editor with Gradio.
Aashi Dutt
February 3, 2026
Using Claude Code With Ollama Local Models
Run GLM 4.7 Flash locally (RTX 3090) with Claude Code and Ollama in minutes: no cloud, no lock-in, just pure speed and control.
Abid Ali Awan
February 3, 2026
Kimi K2.5 and Agent Swarm: A Guide With Four Practical Examples
Learn what Moonshot’s Kimi K2.5 is, how Agent Swarm works, and see it in action through four hands-on, real-world experiments.
Aashi Dutt
January 29, 2026
OpenClaw (Clawdbot) Tutorial: Control Your PC from WhatsApp
Set up OpenClaw (formerly Clawdbot/Moltbot), a self-hosted agent that connects Claude to your Mac via WhatsApp. Search files and run shell commands from your phone.
Bex Tuychiev
February 2, 2026
Transformers v5 Tokenization: Architecture and Migration Guide
Upgrade to Transformers v5. A practical guide to the unified Rust backend, API changes, and side-by-side v4 vs v5 migration patterns for encoding and chat.
Aashi Dutt
January 27, 2026
Google MCP Servers Tutorial: Deploying Agentic AI on GCP
Explore the architecture of Google’s managed MCP servers and learn how to turn LLMs into proactive operators for BigQuery, Maps, GCE, and Kubernetes.
Aryan Irani
January 26, 2026
How to Run GLM-4.7 Locally with llama.cpp: A High-Performance Guide
Set up llama.cpp to run the GLM-4.7 model on a single NVIDIA H100 80GB GPU, achieving up to 20 tokens per second with GPU offloading, Flash Attention, an optimized context size, efficient batching, and tuned CPU threading.
Abid Ali Awan
January 26, 2026
How to Run GLM 4.7 Flash Locally
Learn how to run GLM-4.7-Flash on an RTX 3090 for fast local inference and integrate it with OpenCode to build a fully local, automated AI coding agent.
Abid Ali Awan
January 22, 2026
How to Fine-Tune FunctionGemma: A Step-by-Step Guide
Learn how to fine-tune FunctionGemma in under 10 minutes using Kaggle’s free GPUs, from dataset preparation and baseline evaluation to training and post-fine-tuning validation.
Abid Ali Awan
January 21, 2026
Claude Code Hooks: A Practical Guide to Workflow Automation
Learn how hook-based automation works and get started using Claude Code hooks to automate coding tasks like testing, formatting, and receiving notifications.
Bex Tuychiev
January 19, 2026
Claude Cowork Tutorial: How to Use Anthropic's AI Desktop Agent
Learn what Claude Cowork is and how to use it for file organization, document generation, and browser automation. A hands-on tutorial with real examples and limitations.
Bex Tuychiev
January 16, 2026
Fine-Tuning T5Gemma-2
A hands-on, end-to-end guide to fine-tuning T5Gemma-2 (270M-270M) for LaTeX OCR, showing how to correctly train and run inference with a multimodal encoder–decoder model using a small dataset.
Abid Ali Awan
January 13, 2026