Ajaxtown

Automatic Domain mapping in Letterpad platform


I have been contributing to and maintaining Letterpad for quite some time now. For those of you who do not know Letterpad, it's an open-source blogging platform. During its development, I have faced many interesting problems. One of the issues I wanted to solve is allowing users to link their custom domain with Letterpad. This post talks about how I did it and the various challenges I faced.

This post assumes some familiarity with the Nginx server and Certbot, though both are explained briefly along the way.

When you register on Letterpad, you receive a free subdomain. However, users often want to use their own existing domain. In this scenario, the user owns the domain but not the server, so I somehow have to manage this mapping in a secure and automatic way.

How do we map a domain?

The hosting server where you keep all your content provides you with an IP address. This IP address is hard to remember, so you buy a domain name (example.com) and configure it to point to the IP address of your server. You then configure your server to accept requests for this domain and, depending on the request, the server sends back a response, which is nothing but your content.
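The first half of that mapping can be checked from the command line. The helper below is my own sketch (not part of Letterpad); it resolves a host to an IPv4 address with `getent`, which consults /etc/hosts as well as DNS, and compares the result with the server IP you expect:

```shell
#!/bin/bash
# Sketch: check whether a host currently resolves to the IP we expect.

resolve_ipv4() {
    # getent consults /etc/hosts as well as DNS (Linux; elsewhere you
    # might use `dig +short A "$1"` instead)
    getent ahostsv4 "$1" | awk '{print $1; exit}'
}

resolves_to() {
    [ "$(resolve_ipv4 "$1")" = "$2" ]
}
```

For example, `resolves_to localhost 127.0.0.1` succeeds, while a domain that has not yet been pointed at your server fails the check.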

e.g. letterpad.app pointing to its server

I am using Nginx, and configuring the server to accept requests is very straightforward.

server {
    listen 80;
    server_name letterpad.app;
    root /var/www/letterpad.app;
}

Here Nginx is listening on port 80. When a request comes in for letterpad.app, it serves the content kept in /var/www/letterpad.app. But this is not secure: it can only accept requests over HTTP, not HTTPS. HTTPS listens on port 443 and requires an SSL certificate. The most important property of an SSL (Secure Sockets Layer) certificate is that it is digitally signed by a trusted CA (Certificate Authority), like Let's Encrypt. Anyone can create a certificate, but browsers only trust certificates that come from an organization on their list of trusted CAs.
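To see why the signature matters, compare a certificate's issuer and subject. The quick experiment below (my own illustration, not from Letterpad) creates a throwaway self-signed certificate; its issuer is itself, which is exactly what browsers refuse to trust, whereas a CA-signed certificate would name the CA as the issuer:

```shell
#!/bin/bash
# Anyone can create a certificate: generate a throwaway self-signed one
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout "$dir/key.pem" -out "$dir/cert.pem" \
    -days 1 -subj "/CN=example.com" 2>/dev/null

# ...but its issuer is itself, so no browser will trust it. A certificate
# from Let's Encrypt would show the CA as the issuer instead.
openssl x509 -in "$dir/cert.pem" -noout -issuer -subject
```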

In Letterpad, I used Let's Encrypt as the CA, together with Certbot, a free and open-source utility for obtaining and managing SSL certificates from the Let's Encrypt certificate authority.

Problems

There are multiple steps involved in domain mapping, especially a secure one. Ideally, when a user wants to map their domain, I should save the domain and let a cron job or a separate process generate the certificates, create the Nginx configs and reload the server. But since Letterpad is not heavy on traffic, I wanted to do all of this at the request level.

Doing it at the request level comes with its own problems.

  • Certbot is managed by the root user, as is Nginx. But the Letterpad application runs as a non-root user.
  • I need to verify that the user has pointed their domain to Letterpad's IP, so Nginx has to be configured on both port 80 and port 443.
  • I used bash to create the Nginx configuration and generate the certificates, and it has to communicate with NodeJS and handle errors gracefully.

Solving the Nginx permission issue

When you create a new configuration in Nginx that will accept requests for your domain name, you need to reload Nginx for the configuration to take effect. Since Letterpad runs as a non-root user, it didn't have permission to reload. Then I thought: what if I don't have to worry about reloading Nginx at all? So I wrote a watcher script which checks whether any configuration has been added, modified or deleted and, if so, reloads Nginx. This watcher runs independently, has nothing to do with Letterpad, and is run by the root user.

nginxWatcher.sh

#!/bin/bash
###########

while true
do
 # Block until something changes inside the Nginx config directory
 inotifywait --exclude .swp -e create -e modify -e delete -e move /etc/nginx/sites-enabled
 # Reload only if the new configuration is valid
 if nginx -t
 then
  date +%F_%T
  echo "Detected Nginx Configuration Change"
  echo "Executing: nginx -s reload"
  nginx -s reload
 fi
done

I used nohup to create a job that runs this script continuously. It keeps a log which I can monitor.

nohup bash nginxWatcher.sh </dev/null >./log.txt 2>&1 &

The log prints something like this.

Setting up watches.
Watches established.
/etc/nginx/sites-enabled/ CREATE .example.com.enabled
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
2022-04-16_23:19:20
Detected Nginx Configuration Change
Executing: nginx -s reload

Solving the Certbot permission issue

Certbot generates the certificates in /etc/letsencrypt/live, and it maintains the logs in /var/log/letsencrypt. I changed the permissions of these two folders so that the non-root user can create certificates.
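The exact commands depend on the setup, but it was roughly the following. This is a sketch: the `letterpad` application user and the `$PREFIX` indirection are my own illustrative assumptions (on the real server, `PREFIX` would be empty and the command run once as root):

```shell
#!/bin/bash
# Sketch: hand the Certbot directories over to the application user so a
# non-root process can create certificates. APP_USER and PREFIX are
# illustrative assumptions, not Letterpad's actual values.
APP_USER=${APP_USER:-letterpad}
PREFIX=${PREFIX:-}

grant_certbot_access() {
    chown -R "$APP_USER" "$PREFIX/etc/letsencrypt/live" "$PREFIX/var/log/letsencrypt"
    chmod -R u+rwX "$PREFIX/etc/letsencrypt/live" "$PREFIX/var/log/letsencrypt"
}
```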

Handling bash commands from NodeJS

Communication between NodeJS and bash is not complex at all, but since there are many steps involved, error handling is important.

Handling errors:

#!/bin/bash

#This function is used to cleanly exit any script. It does this by displaying
# a given error message, and exiting with an error code.
function error_exit {
    echo "$@"
    exit 1
}
#Trap the killer signals so that we can exit with a good message.
trap "error_exit 'Received signal SIGHUP'" SIGHUP
trap "error_exit 'Received signal SIGINT'" SIGINT
trap "error_exit 'Received signal SIGTERM'" SIGTERM

#Alias the function so that it will print a message with the following format:
#prog-name(@line#): message
#We have to explicitly allow aliases, we do this because they make calling the
#function much easier (see example).
shopt -s expand_aliases
alias die='error_exit "Error (@`echo $(( $LINENO - 1 ))`):"'

Now whenever there is an error, it will gracefully handle them like this.

./domainMapping.sh: line 58: {fakeCommand}: command not found

I will be adding functions to the bash script to handle the different steps, so I need a way to call those functions. I added the lines below.

# Check if the function exists (bash specific)
if declare -f "$1" > /dev/null
then
  # call arguments verbatim
  "[email protected]"
else
  echo "'$1' is not a known function name" >&2
  exit 1
fi

I tested this with the below two functions.

function testSuccess {
    echo "success"
}

function testFailure {
    # the below command does not exist and should throw error
    helloWorld
}
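This dispatcher can be tried straight from the shell. The snippet below writes a minimal copy of the script to a temp file so the example is self-contained:

```shell
#!/bin/bash
# Minimal stand-in for domainMapping.sh: one function plus the dispatcher.
script=$(mktemp)
cat > "$script" <<'EOF'
#!/bin/bash
function testSuccess {
    echo "success"
}
# Dispatcher: call the function named by the first argument
if declare -f "$1" > /dev/null
then
  "$@"
else
  echo "'$1' is not a known function name" >&2
  exit 1
fi
EOF

bash "$script" testSuccess         # prints: success
bash "$script" helloWorld || true  # prints: 'helloWorld' is not a known function name
```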

But before that, I need to call these functions from NodeJS. So I need to create a child process that can communicate with a script written in bash, call these functions and also take care of errors.

import { exec } from "child_process";

export const execShellCommand = (command: string): Promise<string> => {
  return new Promise((resolve, reject) => {
    exec(command, (err, stdout) => {
      if (err) {
        reject(err);
      } else {
        resolve(stdout);
      }
    });
  });
};

Using the above snippet, I can execute any bash command. I also created a wrapper on top so that I don't have to enter the filename every time.

async function execShell(fn: string, domain = "") {
  try {
    const result = await execShellCommand(
      `./domainMapping.sh ${fn} ${domain}`.trim(),
    );
    if (result.includes("success")) {
      return {
        ok: true,
      };
    } else {
      return {
        ok: false,
        message: result,
      };
    }
  } catch (e) {
    return {
      ok: false,
      message: e.message,
    };
  }
}

We can test the functions this way.

execShell("testSuccess").then(console.log); // { ok: true }

execShell("testFailure").then(console.log).catch(console.log); 
// { ok: false, message: "./domainMapping.sh: line 58: helloWorld: command not found" } 

Now I am ready to actually write the functions to do the domain mapping.

Generating Certificates

The domain-mapping page in Letterpad looks like this.

After the user points the domain to Letterpad's IP, the user is going to submit the domain.

Validate if the user has pointed the domain to Letterpad's IP

I create an Nginx configuration which accepts requests for that domain, and then make a curl request to check that the domain returns a 200 response. You can also add a header in the Nginx config and validate that you receive that header back.

function nginxSetConfig_80 {
    DOMAIN=$1
    # Create nginx config file
    cat > $NGINX_AVAILABLE_VHOSTS/$DOMAIN.enabled <<EOF
server {
    listen 80;
    server_name $DOMAIN;
    add_header X-App-Name Letterpad;
    root $WEB_DIR;
}
EOF

    if curl -Is http://$DOMAIN | head -1 | grep -o '200'; then
        echo "success"
    else
        die "Unable to ping the server"
    fi
}
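The header variant of this check can look like the sketch below (the function names are mine, not Letterpad's). The header-matching step reads from stdin, so it can be exercised without a network:

```shell
#!/bin/bash
# Sketch of the stricter check: instead of trusting any 200 response, verify
# the X-App-Name header that only our own Nginx config adds.

hasLetterpadHeader() {
    # Response headers are read from stdin; -i ignores case, -q stays quiet
    grep -iq '^x-app-name: letterpad'
}

verifyDomainHeader() {
    if curl -Is "http://$1" | hasLetterpadHeader; then
        echo "success"
    else
        echo "Domain does not point to Letterpad's server" >&2
        return 1
    fi
}
```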

I can call the above function from NodeJS like this.

const res = await execShell("nginxSetConfig_80", "example.com");

Create Certificates

If the above works well, I can generate the SSL certificates. While generating the certificates with Certbot, make sure to use the webroot method, which lets Certbot place its validation files under the running web server's root instead of stopping the server.

function createCertificate {
    DOMAIN=$1
    certbot certonly \
        --webroot \
        --agree-tos \
        --email [email protected] \
        -d $DOMAIN \
        -w $WEB_DIR >/dev/null 2>&1

    l=$?
    if [ $l -eq 0 ]; then
        echo "success"
    elif [ $l -eq 127 ]; then
        die "Install certbot to generate certificates"
    else
        die "Certificate generation failed"
    fi
}

I can call this function from NodeJS like this.

const res = await execShell("createCertificate", "example.com");

Update nginx config to use SSL

Now that the certificates are ready, we need to update the Nginx config to use them. I wrote the below function for that.

function nginxSetConfig_443 {
    DOMAIN=$1
    # Create nginx config file
    cat > $NGINX_AVAILABLE_VHOSTS/$DOMAIN.enabled <<EOF
server {
    listen 80;
    server_name $DOMAIN;
    add_header X-App-Name Letterpad;
    return 301 https://$DOMAIN\$request_uri;
}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name $DOMAIN;

    # RSA certificate
    ssl_certificate /etc/letsencrypt/live/$DOMAIN/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/$DOMAIN/privkey.pem; # managed by Certbot

    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot

    gzip on;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    root $WEB_DIR;
    add_header X-App-Name Letterpad;
    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade \$http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host \$host;
        proxy_cache_bypass \$http_upgrade;
    }
}
EOF
    echo "success"
}

Now you know how we can call this function from NodeJS.

Whenever we touch the Nginx configuration, the server reloads automatically, as described earlier.

Finally, we need to test that everything worked. I used this snippet for that.

function verifySSL {
    DOMAIN=$1
    if curl -Is https://$DOMAIN | head -1 | grep -o '200'; then
        echo "success"
    else
        die "Failed to ping using https"
    fi
}
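Putting it all together, the request handler runs these four steps in order. Expressed as a single bash function it would look like the sketch below; the steps are stubbed here so the sketch is self-contained, while in Letterpad the real functions above are invoked one by one from NodeJS via execShell:

```shell
#!/bin/bash
# The four steps of the mapping flow, stubbed so the sketch runs anywhere.
nginxSetConfig_80()  { echo "success"; }  # stub
createCertificate()  { echo "success"; }  # stub
nginxSetConfig_443() { echo "success"; }  # stub
verifySSL()          { echo "success"; }  # stub

mapDomain() {
    DOMAIN=$1
    nginxSetConfig_80 "$DOMAIN" &&
        createCertificate "$DOMAIN" &&
        nginxSetConfig_443 "$DOMAIN" &&
        verifySSL "$DOMAIN"
}
```

Each step short-circuits via `&&`, so a failed step stops the chain, just as a `die` in the real script stops the bash process and surfaces the error to NodeJS.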

That's the whole process of automated domain mapping.

Summary

I hope you got an idea of how this works. If you do this differently or know a better way of handling it, do post a comment here.