In this article, I’ll show you how I turned an old laptop I hadn’t used in a while into a local ChatGPT clone in a few simple steps. It’s easy to set up and doesn’t require much effort.
Why use local AI chatbots?
- Privacy: Your data stays on your machine.
- Customization: Fine-tune the model to suit your needs.
- Flexibility: Use AI however you like, without depending on an external service.
Steps to set up a local AI chatbot
1. Run Open WebUI through Docker
To simplify the installation, I will use Docker to run Open WebUI with bundled Ollama support.
Execute the following command on your computer:
docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
This command starts Open WebUI in the background. Once it’s running, visit http://localhost:3000 in your browser and set up an admin account.
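If you want to make sure the container started correctly before opening the browser, you can check it with a couple of standard Docker commands (assuming you kept the container name open-webui from the command above):

docker ps --filter name=open-webui
docker logs -f open-webui

The first shows the container status (it should read “Up”), and the second follows the startup logs; press Ctrl+C to stop watching them.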
2. Install llama3.2:1b
For this example I’m using the llama3.2:1b model, but you can choose any model from the Ollama website, depending on your needs.
- In the upper left corner of Open WebUI, type the name of the model you want to install.
- In my case it’s llama3.2:1b.
- Click to pull “llama3.2:1b” from Ollama.com.
- Open WebUI will show the download progress while the model is being pulled.
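If you prefer the terminal, the bundled image also ships the Ollama CLI, so you can pull the same model without the UI (a minimal sketch, assuming the container name open-webui from step 1):

docker exec -it open-webui ollama pull llama3.2:1b
docker exec open-webui ollama list

The second command lists the installed models, so you can confirm the pull succeeded.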
That’s it! You can also access your LLM (Large Language Model) from another device on the same network by using the laptop’s local IP address instead of localhost.
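For example, on Linux you can look up the laptop’s address like this (the address below is just an illustration; yours will be different):

hostname -I

If it prints something like 192.168.1.42, open http://192.168.1.42:3000 on your phone or another computer on the same network.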
Once you install the model, you can start using your local AI chatbot. Open WebUI supports multiple LLMs, so you can try different models to find the one that best suits your needs.
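Since everything goes through the same Ollama backend, trying another model is just another pull, from the UI or from the terminal. The tags below are examples of other small models from the Ollama library; check ollama.com for what’s currently available:

docker exec -it open-webui ollama pull gemma2:2b
docker exec -it open-webui ollama pull phi3:mini

Every model you pull shows up in Open WebUI’s model picker, so you can switch between them per chat.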
What to do next?
I plan to fine-tune Llama on my own dataset. So, wish me luck ;v