Building My Personal Server: A Complete Journey

So, are you ready to take control of your data and learn valuable tech skills? This guide details my personal server build, from choosing hardware to self-hosting a secure website with Docker, a VPN with WireGuard, and more. It's a journey into digital ownership.


Estimated Reading Time: 6 minutes

At a glance:

  • Control and Privacy: Maintain ownership over your data.
  • Learning and Experimentation: A hands-on approach to gaining technical skills.
  • Cost Efficiency: Long-term savings compared to cloud solutions.
  • Reliability and Availability: Consistent access to services.


Why I Decided to Build a Personal Server

Suggesting someone run their own server at home sounds a bit like recommending they mill their own flour or weave their own clothes. I mean, why would you deal with hardware, electricity bills, and middle-of-the-night troubleshooting when Google, AWS, and countless other services are just a credit card swipe away? Well, I asked myself that same question about two years ago, and somehow still ended up with a humming box in my homelab that I genuinely love. Here's what pushed me over the edge:

  • Control and Privacy: Having complete control over my data and services means I know exactly where my information lives and who has access to it. No third-party terms of service changes, no unexpected shutdowns, and no concerns about data mining.
  • Learning and Experimentation: Running a home server provides an incredible learning playground. I can experiment with different technologies, break things safely (that happens a lot), and understand how enterprise-level services work under the hood.
  • Cost Efficiency: While the upfront investment is significant, the long-term costs of running my own services often beat monthly cloud subscription fees, especially when you factor in multiple services (some rough numbers follow this list).
  • Reliability and Availability: With proper setup, my home server provides 24/7 access to my services, regardless of internet connectivity issues with cloud providers.
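
To put rough numbers on the cost point (illustrative figures, not my actual bill): a server idling around 40 W draws 0.04 kW × 24 h × 365 ≈ 350 kWh per year, or about €70 at €0.20/kWh. A mid-size cloud VPS plus block storage for the same set of services easily runs €15–25 per month, so the hardware can pay for itself within a couple of years.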

Hardware Choices: Building the Foundation

Choosing the right hardware was crucial for creating a stable, efficient, and future-proof setup. When we pick hardware, most of us reach for the "good stuff": the model we saw in a YouTube review, or the newest chip with the flashiest spec sheet (core count, clock speed, and so on). I tried to resist that pull. Here's what I settled on and why:

  • CPU: I opted for an Intel Core i5-8600 because it offers excellent virtualization support with hardware-assisted features like Intel VT-x. Its six cores allow me to run several virtual machines and LXC (Linux Containers) simultaneously without performance degradation.
  • RAM: I went with 16GB of DDR4 RAM. While I could have spent a lot of money buying ECC (Error-Correcting Code) memory with the appropriate processor and motherboard to support it, I found it to be overkill for my setup and use cases.
  • Storage Strategy: I implemented a multi-tier storage approach:
    • Boot Drive: A small, reliable SSD for the operating system
    • Fast Storage: NVMe SSDs for frequently accessed data and VM storage
    • Bulk Storage: Large capacity HDDs in a redundant configuration for file storage and backups
  • Network: Gigabit Ethernet connectivity ensures fast data transfer, both for accessing services remotely and for local network operations.
  • Power Supply: I chose an 80+ Gold-certified PSU with plenty of headroom. Efficiency matters when the system runs continuously.

The key was balancing performance, reliability, and power consumption. This isn't a gaming rig – it's designed for consistent, efficient operation over the years.

Operating System: Why I Chose Proxmox

After evaluating several options, including ESXi, unRAID, and TrueNAS, I settled on Proxmox Virtual Environment (VE). Here's why:

  • Open Source Freedom: Proxmox is built on Debian Linux and is completely open source. This means no licensing costs, no vendor lock-in, and full transparency in how the system operates.
  • Hybrid Virtualization: Proxmox supports both KVM virtual machines and LXC containers in a single platform. This flexibility allows me to choose the right virtualization method for each use case.
  • Web-Based Management: The intuitive web interface makes managing VMs, containers, storage, and networking straightforward, even from mobile devices.
  • Built-in Backup and Clustering: Proxmox includes robust backup capabilities and can easily scale to a multi-node cluster if I decide to expand.
  • Storage Flexibility: Support for various storage types, including ZFS, which provides features like snapshots, compression, and data integrity checking.
  • Active Community: A vibrant community provides excellent documentation, tutorials, and support.

The installation was straightforward, and the learning curve was manageable even for someone new to enterprise virtualization platforms.

NFS Storage Setup: Centralized File Management

Setting up Network File System (NFS) storage was crucial for centralizing file access across all my services. Here's how I approached it:

  • ZFS Pool Configuration: I created a ZFS pool using multiple drives in a RAID-Z configuration, providing redundancy while maximizing usable space. ZFS's copy-on-write functionality and built-in snapshots give me confidence in data integrity and easy recovery options.
  • NFS Shares Structure: I organized my NFS exports into logical categories:
    • /media - For media files accessed by Plex, Jellyfin, etc.
    • /documents - Personal and work documents
    • /backups - Automated backup storage
    • /shared - Files that need access from multiple containers/VMs
  • Performance Tuning: I optimized NFS settings for my network, adjusting parameters like rsize and wsize for better throughput, and enabling NFSv4 for improved security and performance.
  • Security Considerations: I configured NFS exports with appropriate restrictions, limiting access to specific IP ranges and implementing proper user mapping to maintain security while ensuring functionality.

The centralized storage approach means I can easily backup, migrate, or share files between different services without duplicating data.
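
Putting those pieces together, here's a rough sketch of the commands involved. The pool name tank, the drive IDs, and the 192.168.1.0/24 subnet are placeholders rather than my exact values:

```bash
# Create a RAID-Z1 pool from three drives (stable by-id paths beat /dev/sdX)
zpool create tank raidz1 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3

# Enable compression and carve out one dataset per share
zfs set compression=lz4 tank
zfs create tank/media

# Export the dataset over NFS, restricted to the local subnet
echo '/tank/media 192.168.1.0/24(rw,sync,no_subtree_check,root_squash)' >> /etc/exports
exportfs -ra

# On a client: NFSv4 mount with larger rsize/wsize for better throughput
mount -t nfs4 -o rsize=1048576,wsize=1048576 192.168.1.10:/tank/media /mnt/media
```

A scheduled `zfs snapshot tank/media@$(date +%F)` on top of this gives cheap point-in-time recovery.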

LXC Containers: Privileged vs Unprivileged Decision

One of the most important architectural decisions was choosing between privileged and unprivileged LXC containers. After careful consideration, here's the approach I took:

  • Unprivileged Containers: For most services, I use unprivileged containers because:
    • Security: They provide better isolation and a reduced attack surface.
    • Best Practice: It's the recommended approach for production environments.
    • Peace of Mind: Even if a container is compromised, the damage is limited.
  • Privileged Containers: I only use privileged containers for specific use cases:
    • Hardware access (like for GPU passthrough).
    • Nested virtualization, e.g., Docker-in-LXC setups that won't run unprivileged even with Proxmox's nesting feature enabled.
    • Legacy applications that don't work well with user namespace mapping.

I set up unprivileged containers with careful UID/GID mapping to ensure they can access shared NFS storage while maintaining security. This required some initial setup work but provides the best of both worlds. I allocate resources conservatively, starting small and scaling up as needed. LXC's efficiency means I can run many more containers than traditional VMs on the same hardware.
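
To give a sense of what that mapping looks like, here's an illustrative excerpt. The container ID 101 and host group 1005 (say, a group that owns the NFS shares) are hypothetical values:

```bash
# Map the container's GID 1005 straight through to host GID 1005,
# while everything else keeps the default unprivileged 100000+ shift
cat >> /etc/pve/lxc/101.conf <<'EOF'
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 1005
lxc.idmap: g 1005 1005 1
lxc.idmap: g 1006 101006 64530
EOF

# Permit root to delegate that single host GID to containers
echo 'root:1005:1' >> /etc/subgid
```

After restarting the container, files owned by group 1005 on the NFS share become accessible inside it without loosening permissions anywhere else.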

Website Deployment: WordPress + Docker + Nginx Proxy Manager

Setting up my personal website was one of the most rewarding parts of this project. Here's how I created a professional web presence:

  • Docker in LXC: I created an unprivileged LXC container specifically for web services (with Proxmox's nesting feature enabled) and installed Docker inside it. This provides good isolation while maintaining easy management.
  • WordPress Stack: I used Docker Compose to deploy the following (a minimal compose sketch comes right after this list):
    • WordPress: The main application container.
    • MySQL: Database container with persistent volume.
    • Redis: Caching layer for improved performance.
  • Nginx Proxy Manager: This was a game-changer for managing multiple web services:
    • Reverse Proxy: Routes traffic to appropriate containers based on domain.
    • SSL Certificates: Automatic Let's Encrypt certificate management.
    • User-Friendly Interface: Web-based configuration instead of editing nginx configs.
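
For illustration, a compose file for that stack might look like the sketch below; the image tags, passwords, and port 8080 are placeholders, and Nginx Proxy Manager then forwards the public domain to this container's IP and port:

```bash
cat > docker-compose.yml <<'EOF'
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: changeme         # placeholder
      MYSQL_ROOT_PASSWORD: changeme2   # placeholder
    volumes:
      - db_data:/var/lib/mysql
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    restart: unless-stopped

  wordpress:
    image: wordpress:latest
    depends_on: [db, redis]
    ports:
      - "8080:80"   # NPM proxies the domain to this port
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: changeme  # placeholder
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wp_data:/var/www/html
    restart: unless-stopped

volumes:
  db_data:
  wp_data:
EOF

docker compose up -d
```

One caveat: WordPress doesn't talk to Redis on its own; you still need an object-cache plugin (e.g., Redis Object Cache) pointed at the `redis` service name.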

Domain and DNS Setup:

  • Domain Registrar: Purchased domain through papaki.gr.
  • Cloudflare Integration: Configured Cloudflare as DNS provider for:
    • DDoS protection
    • CDN capabilities
    • Flexible SSL/TLS modes between visitors, Cloudflare, and my origin server
    • DNS management flexibility

The result is a fast, secure website that I have complete control over, with professional features like automatic HTTPS and CDN acceleration.

VPN Solutions: WireGuard and Tailscale

Network security and remote access were critical requirements, leading me to implement two complementary VPN solutions:

WireGuard: The Performance Champion

Why WireGuard:
  • Speed: Significantly faster than older protocols like OpenVPN in most benchmarks.
  • Simplicity: Minimal configuration with strong security by default.
  • Efficiency: Low overhead and excellent battery life on mobile devices.
  • Modern Cryptography: Uses state-of-the-art cryptographic primitives.

I set up WireGuard in an LXC container (a sample config follows this list) with:

  • Site-to-Site Access: Full access to my home network and services.
  • Mobile Clients: Easy connection from phones and laptops.
  • Split Tunneling: Option to route only specific traffic through the VPN.
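
As a sketch of how little configuration WireGuard actually needs (the keys, port 51820, and the 10.0.0.0/24 tunnel range below are placeholders):

```bash
# Generate a key pair on the server (repeat on each client)
wg genkey | tee server.key | wg pubkey > server.pub

# Minimal server config
cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <contents of server.key>

[Peer]
# A phone that gets tunnel address 10.0.0.2
PublicKey = <client public key>
AllowedIPs = 10.0.0.2/32
EOF

systemctl enable --now wg-quick@wg0
```

Split tunneling is just the client-side mirror of `AllowedIPs`: set it to `192.168.1.0/24` to route only home-network traffic through the tunnel, or `0.0.0.0/0` to send everything.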

Tailscale: For Accessing My Entire Network Securely

Why Tailscale:
  • Zero Configuration: Automatic mesh networking with no manual port forwarding.
  • Cross-Platform: Seamless experience across all devices and operating systems.
  • NAT Traversal: Works behind any router without configuration.
  • Access Controls: Granular permissions and device management.

I installed Tailscale in a VM (virtual machine) because it provides a more isolated and stable environment for a network-critical service like a subnet router (a setup sketch follows this list). It gives me:

  • Quick Access: Instant secure connection to services from anywhere.
  • File Sharing: Easy access to NFS shares from any location.
  • Device Management: Centralized view and control of all connected devices.
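
The subnet-router setup itself is short; the install script and flag below are Tailscale's documented ones, while 192.168.1.0/24 stands in for your own LAN:

```bash
# Let the VM forward traffic on behalf of the rest of the LAN
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-tailscale.conf
sysctl -p /etc/sysctl.d/99-tailscale.conf

# Install Tailscale and advertise the home subnet to the tailnet
curl -fsSL https://tailscale.com/install.sh | sh
tailscale up --advertise-routes=192.168.1.0/24
```

The advertised route still has to be approved once in the Tailscale admin console before other devices can reach the LAN through it.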

Why Both? Having both solutions provides redundancy and optimization:

  • WireGuard: For high-performance, always-on connections.
  • Tailscale: For convenience when a traditional VPN setup isn't possible.

Lessons Learned and Future Plans

Building and maintaining a personal server has been incredibly rewarding. Some key takeaways:

  • You Own Every Problem (And That's Both Scary and Awesome): When you run your own server, you become a one-person IT department. When your website mysteriously stops working at 2 AM, there's no helpdesk to call, just you, a cup of tea, and whatever troubleshooting skills you can deploy. At first, this responsibility felt overwhelming, but it makes you a much better systems administrator.
  • Start Small, Scale Smart: Begin with essential services/hardware and add complexity gradually. This approach helps you understand each component before building on it.
  • Documentation is Critical: Keep detailed notes about configurations, especially network settings and container mappings. Future you will thank present you.
  • Backups Are Non-Negotiable: Implement automated, tested backup strategies from day one. I learned this the hard way (a sample backup job follows this list).
  • Power Management Matters: Consider UPS systems and power-efficient hardware. Unexpected shutdowns can cause data corruption.
  • Community Resources: The homelab community is incredibly helpful. Don't hesitate to ask questions and share your experiences.
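
On Proxmox, the backup half of this is pleasantly boring: a scheduled job in the GUI, or its CLI equivalent below (the container IDs and the backup-nfs storage name are placeholders):

```bash
# Nightly snapshot-mode backup of three containers to NFS-backed storage
vzdump 101 102 103 --storage backup-nfs --mode snapshot --compress zstd
```

The "tested" half matters just as much: restore one of these archives now and then (`pct restore` for containers) to confirm they're actually usable.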

Future Enhancements: I'm planning to add:

  • Monitoring Stack: Prometheus and Grafana for system monitoring.
  • Media Server: Plex or Jellyfin for personal media streaming.
  • Home Automation: Integration with smart home devices.
  • Kubernetes: Eventually migrate some services to K3s for learning.