
Deploy an AI Coding Assistant in the Cloud with Hetzner, Ollama, and Tailscale for Cursor

Focused CEO Austin Vance shows you how to set up and run your own coding assistant model

Aug 29, 2024

By Austin Vance


Learn how to easily set up and run your own coding assistant model in the cloud using Hetzner servers, NVIDIA GPU drivers, CUDA, and Ollama. The tutorial covers the essential steps, from initial server setup with SSH keys to installing the necessary drivers and software.
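
At a high level, the server-side steps look something like the sketch below. This is only an outline, assuming an Ubuntu-based Hetzner GPU instance; the key path, driver packages, and model name are placeholders you would swap for your own choices.

    # Generate an SSH key locally and add the public key when creating the Hetzner server
    ssh-keygen -t ed25519 -f ~/.ssh/hetzner_ollama
    ssh -i ~/.ssh/hetzner_ollama root@<server-ip>

    # On the server: install the NVIDIA driver and CUDA toolkit, then reboot
    sudo apt update
    sudo ubuntu-drivers autoinstall
    sudo apt install -y nvidia-cuda-toolkit
    sudo reboot
    nvidia-smi    # after reboot, confirm the GPU and driver are visible

    # Install Ollama and pull a coding-oriented model of your choice
    curl -fsSL https://ollama.com/install.sh | sh
    ollama pull codellama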

Additionally, discover how to enhance security and control by integrating the Tailscale VPN, managing access with the UFW firewall, and exposing models to the internet with Tailscale Funnel for seamless access through coding assistants like Cursor.
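
A rough sketch of that hardening and exposure flow is below, assuming Ollama is listening on its default port 11434 and that Funnel is enabled for your tailnet; the exact tailscale funnel syntax depends on your Tailscale version, and the hostname shown for Cursor is a placeholder.

    # Join the server to your tailnet
    curl -fsSL https://tailscale.com/install.sh | sh
    sudo tailscale up

    # Lock down the public interface with UFW, allowing traffic only over the tailnet
    # (keep an SSH allow rule until you have confirmed access over Tailscale)
    sudo ufw default deny incoming
    sudo ufw allow in on tailscale0
    sudo ufw enable

    # Expose Ollama's API over HTTPS with Tailscale Funnel
    sudo tailscale funnel 11434

    # In Cursor, override the OpenAI base URL with the Funnel endpoint, e.g.
    #   https://<your-node>.<your-tailnet>.ts.net/v1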

Ideal for developers seeking to manage their own coding infrastructure with full observability and security.

00:00 Introduction to hosting your own coding assistant 
00:55 Setting up the server 
01:47 Installing drivers and CUDA 
08:35 Configuring Ollama for coding assistance 
12:38 Securing the server with VPN and Tailscale 
20:02 Integrating with Cursor and Tailscale Funnel
24:15 Conclusion and next steps
