How to Set Up and Run Gemma 3 Locally With Ollama
Learn how to install, set up, and run Gemma 3 locally with Ollama and build a simple file assistant on your own device.
Mar 17, 2025 · 12 min read
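Before the FAQs and related resources below, here is a minimal sketch of the kind of local call the tutorial builds up to: chatting with a Gemma 3 model that Ollama serves on your own machine. The model tag gemma3 and the prompt are illustrative assumptions, not the tutorial's exact code.

```python
# Minimal sketch (not the tutorial's full file assistant): chat with a locally
# pulled Gemma 3 model through the ollama Python package.
# Assumes Ollama is installed and running, the `ollama` package is installed
# (pip install ollama), and a Gemma 3 variant has been pulled, e.g. `ollama pull gemma3`.
import ollama

response = ollama.chat(
    model="gemma3",  # illustrative tag; use the variant you actually pulled, e.g. gemma3:4b
    messages=[
        {"role": "user", "content": "In one sentence, what can a local file assistant do?"}
    ],
)

# Subscript access to the reply works across ollama-python versions.
print(response["message"]["content"])
```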
FAQs
What are the hardware requirements for running Gemma 3 locally?
Can I run multiple instances of Gemma 3 simultaneously?
Is it necessary to use Anaconda to set up a Python environment?
Learn AI with these courses!
Course: Understanding the EU AI Act (1 hr)
Track: Developing AI Applications (23 hrs)
Related
Tutorial: How to Set Up and Run QwQ 32B Locally With Ollama (Aashi Dutt, 12 min)
Learn how to install, set up, and run QwQ-32B locally with Ollama and build a simple Gradio application.
Tutorial: How to Run Llama 3 Locally With Ollama and GPT4ALL (Abid Ali Awan, 12 min)
Run Llama 3 locally with GPT4All and Ollama, and integrate it into VS Code. Then, build a Q&A retrieval system using LangChain and Chroma DB.
Tutorial: How to Set Up and Run DeepSeek R1 Locally With Ollama (Aashi Dutt, 12 min)
Learn how to install, set up, and run DeepSeek-R1 locally with Ollama and build a simple RAG application.
Tutorial: Llama 3.2 Vision With RAG: A Guide Using Ollama and ColPali (Ryan Ong, 12 min)
Learn the step-by-step process of setting up a RAG application using Llama 3.2 Vision, Ollama, and ColPali.
Tutorial: RAG With Llama 3.1 8B, Ollama, and LangChain (Ryan Ong, 12 min)
Learn to build a RAG application with Llama 3.1 8B using Ollama and LangChain by setting up the environment, processing documents, creating embeddings, and integrating a retriever.
Tutorial: vLLM: Setting Up vLLM Locally and on Google Cloud for CPU (François Aubry, 12 min)
Learn how to set up and run vLLM (Virtual Large Language Model) locally using Docker and in the cloud using Google Cloud.