Congratulations, you’ve made it this far! You have a server powered by open-source software that you can access from anywhere in the world, safely and securely. Now that you have a solid foundation in Linux and networking, you can start playing with different applications and services. Our final post will look at some tips for making the most out of your new server.
SSH’s password authentication is handy, but there’s an even safer and more convenient way of connecting remotely to your server. Similar to SSL, SSH can use public key cryptography to authenticate users through a feature known as SSH keys. SSH keys can add to – or replace entirely – the security of a password by using a pre-authorized key.
The exact steps will vary based on whether you’re using Windows, Linux, or OS X on your client machine, but the general idea is the same: you generate a public/private key pair on your client machine, send your public key to your server, and then configure the SSH service to accept public keys as an authentication method (Ubuntu Server will by default). For even greater security, you can then disable password authentication altogether, requiring the client to have an approved key before it can connect.
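On a Linux or OS X client, those steps can be sketched like this (the key path below is a throwaway directory for demonstration, and user@your-server is a placeholder):

```shell
# Generate a 4096-bit RSA key pair; -N "" skips the passphrase to keep
# this non-interactive (in practice, consider setting one):
keydir=$(mktemp -d)
ssh-keygen -t rsa -b 4096 -N "" -f "$keydir/id_rsa"

# Install the public half on the server (placeholder address):
#   ssh-copy-id -i "$keydir/id_rsa.pub" user@your-server

# To disable password logins afterwards, set this in the server's
# /etc/ssh/sshd_config, then reload the SSH service:
#   PasswordAuthentication no
```

ssh-copy-id appends the public key to ~/.ssh/authorized_keys on the server; you can also paste it into that file by hand.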
Ubuntu maintains a collection of logs for everything ranging from login attempts to failed services. The Ubuntu wiki explains almost all of the default logs and how to interpret them. Some of the services you install, such as Apache, will create their own log files or even their own log directories. And other services, such as Subsonic, will maintain log files in an entirely different directory (in the case of Subsonic, the main log file is /var/subsonic/subsonic_sh.log)! Knowing how to find and read logs will save you time and frustration when you need to troubleshoot or review a potential problem.
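Because the logs are plain text, the standard command-line tools work on all of them. A quick sketch, run here against a fabricated auth.log excerpt (on a real server you would point tail and grep at /var/log/auth.log):

```shell
# On a live system: tail -n 20 /var/log/auth.log
# Here, a made-up excerpt stands in for the real log:
printf '%s\n' \
  'sshd[1024]: Failed password for root from 203.0.113.7 port 4242 ssh2' \
  'sshd[1025]: Accepted publickey for alice from 198.51.100.2 port 51515 ssh2' \
  'sshd[1026]: Failed password for invalid user admin from 203.0.113.7 port 4250 ssh2' \
  > sample_auth.log

# Count the failed login attempts:
grep -c 'Failed password' sample_auth.log    # prints 2
```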
In addition to security, one of the most critical components of computer maintenance is backing up your data. When your data is hosted by a third party, you rarely have to worry about data loss since the responsibility is on the service provider to maintain backups. However, when you’re the one providing the service, it’s on you to make sure the data you’re hosting can be recovered in the event of a full system crash.
A good backup scheme follows the 3-2-1 rule: you should always have 3 copies of your data at any given time, on 2 separate storage devices, with at least 1 copy stored in a completely different (i.e. off-site) location. If one of the backups is corrupt, a hard drive fails, or even if your house catches fire, you still have a preserved copy of your data.
Most decent backup software creates two forms of backups: full and incremental. A full backup makes a complete copy of the data stored on one drive onto another drive. An incremental backup copies only what has changed since the previous backup: for example, if you run a full backup, modify a single file, then run an incremental backup, only that one modified file will be copied. This allows you to maintain consistent backups over a period of weeks or months without immediately running out of disk space.
My personal recommendation for backup software is Duplicity. Duplicity can be automated to run on a daily or weekly basis, can perform full and incremental backups, and can automatically merge or even delete older backups to preserve free space. Ubuntu provides a guide for getting started with Duplicity.
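As a rough sketch of a Duplicity workflow (the source and target paths are placeholders, and --no-encryption is used only for brevity; Duplicity can GPG-encrypt its backups):

```shell
# First run: a full backup of /home to a drive mounted at /mnt/backup.
duplicity full --no-encryption /home file:///mnt/backup/home

# Later runs without "full" create incremental backups automatically:
duplicity --no-encryption /home file:///mnt/backup/home

# Keep only the two newest full backup chains to reclaim space:
duplicity remove-all-but-n-full 2 --force file:///mnt/backup/home

# Review what can be restored:
duplicity collection-status file:///mnt/backup/home
```

Scheduling the incremental command from cron, with an occasional full run, gives you the automated daily or weekly backups mentioned above.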
For a better understanding of managing external drives in Ubuntu server, see the Mount/USB guide.
While servers aren’t generally designed to save power, scaling back could prolong the life expectancy of your hardware while lowering your electric bill. Ubuntu provides the cpufreq service, which dynamically changes the speed of your server’s CPU to meet demand. cpufreq will automatically boost your server’s processor speed if it detects a spike in CPU usage, and will likewise pull it back if it detects little to no CPU usage.
cpufreq changes the state of the CPU based on rule sets called governors. The default governor is ondemand, which jumps to the highest speed as soon as it detects heavy load. Other available governors include powersave, which locks the CPU to its lowest speed; conservative, which behaves like ondemand but increases the CPU speed gradually rather than jumping straight to the maximum; and performance, which constantly runs the CPU at its maximum speed.
cpufreq is provided by the cpufrequtils package. Once it’s installed, the cpufrequtils service will automatically apply the ondemand governor. To change the active governor, use the cpufreq-set command:
sudo cpufreq-set -g <governor>
To permanently change your governor, edit the /etc/init.d/cpufrequtils file and change the line beginning with “GOVERNOR=”.
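For example (which governors are available depends on your CPU driver; both commands come from the cpufrequtils package):

```shell
# List the governors this CPU supports, and show the current policy:
cpufreq-info -g
cpufreq-info -p

# Switch to powersave until the next reboot:
sudo cpufreq-set -g powersave

# The active governor is also readable through sysfs:
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
```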
An open SSH port is a prime target for attackers. While a secure password (or key) will prevent an unauthorized user from gaining access, frequent failed access attempts will clog up your server’s log files. Fail2ban is a popular intrusion detection service that automatically scans log files and blacklists IP addresses with repeating failures. Fail2ban can be installed with the fail2ban package.
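A minimal sketch of an install plus a local override (the thresholds below are illustrative, and the SSH jail is named [ssh] on older fail2ban releases, [sshd] on newer ones):

```shell
sudo apt-get install fail2ban

# Local overrides go in jail.local so upgrades don't clobber them:
sudo tee /etc/fail2ban/jail.local > /dev/null <<'EOF'
[DEFAULT]
# ban offending IPs for an hour after five failures
bantime  = 3600
maxretry = 5

[ssh]
enabled = true
EOF

sudo service fail2ban restart

# List the jail's currently banned addresses:
sudo fail2ban-client status ssh
```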
Your server will most likely be running 24/7, but only a fraction of that time will be spent performing work. If you want to keep your server busy when you’re not using it, you can make it a member of a distributed computing cluster through BOINC.
BOINC is a platform for research institutions to run CPU-intensive calculations on a worldwide computing platform. By spreading the workload across millions of low-power computers, institutions can quickly perform work without the cost of acquiring and setting up their own supercomputers. BOINC lets you choose from a collection of projects hosted by different institutions, with goals ranging from curing cancer to mapping global weather patterns.
You can install BOINC using the boinc-client package. You can also tune BOINC in tandem with cpufreq to keep your server’s power consumption in check.
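A sketch of the command-line workflow (the project URL and account key are placeholders; boinccmd talks to the running client daemon):

```shell
# Install the headless BOINC client:
sudo apt-get install boinc-client

# Attach the client to a project (placeholder URL and key):
boinccmd --project_attach http://project.example.org/ YOUR_ACCOUNT_KEY

# Inspect the work the client has picked up:
boinccmd --get_tasks
```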
There are countless possibilities open to you now that you have your own server. You can learn more about each of the technologies we used through the following links:
- Domain Names
- Dynamic DNS
- Firewall (ufw, iptables/netfilter)
- Internet Protocol (v4, v6)
- Network Address Translation
You can also begin looking into different ways to use your web server. Try hosting a server for one of your favorite online games. Learn how to develop dynamic websites using MySQL and PHP. Run a BitTorrent service to host popular files such as Linux images. Run a Tor relay to help Internet users evade censorship from oppressive governments. Install a virtual machine manager and run copies of other operating systems. Install Samba and make your server browsable from a Windows or OS X computer. Your options are limited only by your willingness to learn and to explore.
Before this guide began I mentioned my own personal reasons for building a private server. Growing my technical knowledge, developing skills, and regaining ownership of my data were some of the excuses I used. But the main reason why I did this, the true reason why I created this seven-part guide on moving your data from a third party isn’t because there’s necessarily a strong pragmatic, economic, or intellectual benefit to doing so. It’s because it’s fun.
It’s fun to tinker with the systems and services that drive our everyday life. It’s fun to break down the walls of a black box, examine how it works, and reproduce or improve on its design. It’s fun to take a pile of unused scraps and electronics and create a working system with nothing but a bit of knowledge and perseverance. And, as I’ve learned over the course of writing this guide, it’s fun to share your experiences and your discoveries with others who are interested in hearing what you have to say. As you explore the world of servers, software, and services, my only request is that you pass on your own experiences and discoveries to others so that they can examine and enjoy the world of information technology.
Keep exploring, stay safe, and – most importantly – have fun!