How to Set Up and Run QwQ 32B Locally With Ollama
Learn how to install, set up, and run QwQ-32B locally with Ollama and build a simple Gradio application.
Mar 10, 2025 · 12 min read
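The tutorial walks through pulling QwQ-32B with Ollama and wrapping it in a simple Gradio app. As a rough preview, the sketch below shows one way to wire those pieces together; it assumes the model is available under the Ollama tag `qwq` (pulled with `ollama pull qwq`) and that the `ollama` and `gradio` Python packages are installed. It is not the article's exact code.

```python
# Minimal sketch (not the article's exact code): chat with QwQ-32B running
# locally under Ollama, wrapped in a Gradio chat UI.
# Assumes: `ollama pull qwq` has already been run, and the `ollama` and
# `gradio` Python packages are installed (pip install ollama gradio).

import ollama
import gradio as gr

MODEL_NAME = "qwq"  # assumed Ollama tag for QwQ-32B; change if your tag differs


def chat(message, history):
    # Gradio (type="messages") passes history as {"role", "content"} dicts;
    # keep only those two keys so they map cleanly onto ollama.chat messages.
    messages = [{"role": m["role"], "content": m["content"]} for m in history]
    messages.append({"role": "user", "content": message})
    response = ollama.chat(model=MODEL_NAME, messages=messages)
    return response["message"]["content"]


# Launches a local web UI (by default at http://127.0.0.1:7860).
gr.ChatInterface(chat, type="messages", title="QwQ-32B via Ollama").launch()
```

Running the script starts a local Gradio server where you can chat with the model in the browser.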
Learn AI with these courses!
course · Developing LLM Applications with LangChain · 3 hr · 18.3K
track · Llama Fundamentals · 5 hr
Related
tutorial · How to Set Up and Run DeepSeek R1 Locally With Ollama
Learn how to install, set up, and run DeepSeek-R1 locally with Ollama and build a simple RAG application.
Aashi Dutt · 12 min

tutorial · How to Run Llama 3 Locally: A Complete Guide
Run Llama 3 locally with GPT4All and Ollama, and integrate it into VS Code. Then build a Q&A retrieval system using LangChain, Chroma DB, and Ollama.
Abid Ali Awan · 15 min

tutorial · Local AI with Docker, n8n, Qdrant, and Ollama
Learn how to build secure, local AI applications that protect your sensitive data using a low/no-code automation framework.
Abid Ali Awan

tutorial · Llama 3.2 and Gradio Tutorial: Build a Multimodal Web App
Learn how to use the Llama 3.2 11B vision model with Gradio to create a multimodal web app that functions as a customer support assistant.
Aashi Dutt · 10 min

tutorial · How to Use Qwen2.5-VL Locally
Learn about the new flagship vision-language model and run it on a laptop with 8 GB of VRAM.
Abid Ali Awan · 10 min

tutorial · RAG With Llama 3.1 8B, Ollama, and LangChain: Tutorial
Learn to build a RAG application with Llama 3.1 8B using Ollama and LangChain by setting up the environment, processing documents, creating embeddings, and integrating a retriever.
Ryan Ong · 12 min