
Deploying Rasa Chatbot on Google Cloud with Docker


Chatbots are everywhere these days, and with Rasa you can easily build one from scratch. If you haven’t read part 1 of our chatbot series, refer to our previous chatbot blog.

So you have a simple chatbot demo running on your local machine; what are the remaining steps to make it live? We will cover all of them in this blog.

Ideally, a chatbot should run on a live website over a secure connection and be equipped with tools to modify and enhance it, and to visualize the chats, the incoming traffic, and the analytics around it.

To achieve this, we cover building the front end of the chatbot (i.e. the chat interface), setting up a secure connection to the chat server, and hosting it on a cloud server.

 

Connecting Rasa to Webchat

The first task after building a chatbot with Rasa is to connect it to a chat window so that we can chat with it. For this purpose, we will use Webchat by Botfront.

After setting up Webchat, we can run the Rasa server and the action server to see if everything works. To try this, run the command below:

rasa run -m models --enable-api --cors "*" --debug

This command runs the Rasa server as an HTTP server. After running it, you should see something like this in the terminal:

Starting Rasa server on http://localhost:5005

Now we need to point the socketUrl in our Webchat’s index.html to http://localhost:5005.
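As a reference, a minimal index.html might look like the sketch below. The exact snippet depends on the rasa-webchat version you installed, and the CDN path, selector, title, and initPayload here are illustrative placeholders; the important part is that socketUrl points at the Rasa server.

<!DOCTYPE html>
<html>
  <body>
    <div id="webchat"></div>
    <!-- Botfront webchat bundle (path/version may differ for your install) -->
    <script src="https://cdn.jsdelivr.net/npm/rasa-webchat/lib/index.min.js"></script>
    <script>
      WebChat.default.init({
        selector: "#webchat",
        initPayload: "/get_started",
        socketUrl: "http://localhost:5005",   // the Rasa server we just started
        socketPath: "/socket.io/",
        title: "My Chatbot",
      });
    </script>
  </body>
</html>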

Open another terminal and run:

rasa run actions

This command starts the action server for us.

Make sure to uncomment the lines below inside your credentials.yml file and modify them as shown.
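For reference, the relevant section enables the socketio channel that Webchat talks to; the event names below are the defaults used by rasa-webchat:

socketio:
  user_message_evt: user_uttered
  bot_message_evt: bot_uttered
  session_persistence: true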

Once the above steps are done, we can visit http://localhost:5005 to check that it is working. You should see something like this:

Hello from Rasa: 1.10.3 (or some other version of Rasa)

If you see the above message, open your index.html in any browser and your chatbot will appear inside a chat window.

We are good to proceed now.


Hosting the Chatbot 

The next step in making it live is to host the chatbot we created. Here, we are hosting it on GCP. Create a GCP account and spin up a VM instance. If you have never created an account before, you get $300 in free credits the first time.

After that, install Rasa on the VM instance you just created, copy all your setup files, and train the model there.

But locally we used two terminals to run the Rasa server and the action server, and as soon as we close the Rasa server terminal the chatbot stops working. We clearly can’t host our chatbot that way. So how do we run both the Rasa server and the action server in the background with a single command, and keep them running until we explicitly stop them?

The answer is Docker!

 


Rasa inside Docker

The first thing you need to do is install Docker and docker-compose on the VM instance you created in the previous step.

After that, we will use Docker to run the Rasa server and the action server inside containers. If you are not familiar with Docker, here is a description:

     Docker provides the ability to package and run an application in a loosely isolated environment called a container. The isolation and security allow you to run many containers simultaneously on a given host. You can even run Docker containers within host machines that are actually virtual machines!


This is what the internet says about Docker.

Now, how do we actually do it?

We will create separate Docker containers for the Rasa server and the action server, and a docker-compose.yml file listing the services to run.

The docker-compose.yml file should look something like this:

version: '3'

services:
  rasa:
    container_name: "rasa_server"
    build:
      context: backend
    ports:
      - "5005:5005"

  action_server:
    container_name: "action_server"
    build:
      context: actions
    volumes:
      - ./actions:/app/actions
    ports:
      - "5055:5055"


Here rasa_server and action_server are the names of our containers.

 

The hierarchy of our project directory should be as follows:
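Roughly, assuming the folder names used in the docker-compose.yml above, it might look like this (file names inside backend and actions are the usual Rasa project files and are shown for illustration):

project/
  docker-compose.yml
  backend/
    Dockerfile
    config.yml
    domain.yml
    credentials.yml
    endpoints.yml
    data/
    models/
  actions/
    Dockerfile
    actions.py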

 

Now the content of the Dockerfile inside backend should look something like this:
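Here is a minimal sketch, assuming the model has already been trained and copied into backend/models; the rasa/rasa image uses rasa as its entrypoint, so CMD only lists the arguments:

FROM rasa/rasa:1.10.3-full

WORKDIR /app
# copy the Rasa project (config, domain, credentials, trained models, ...)
COPY . /app

EXPOSE 5005
# arguments passed to the image's "rasa" entrypoint
CMD ["run", "-m", "models", "--enable-api", "--cors", "*", "--debug"]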

 

If you always want the latest version of Rasa, you can instead use:

FROM rasa/rasa:latest-full


A small caveat with using the latest Rasa image in the Dockerfile: if you are not training the model during the build but using a model trained locally (or on another machine), the Rasa versions must match, otherwise you will get a version-mismatch error. Hence, in many cases it is advisable to pin a fixed version.

 

And the contents of the Dockerfile inside the actions folder should be as follows:
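Again a minimal sketch, assuming the custom action code lives in actions/actions.py; the rasa/rasa-sdk image starts the action server on port 5055 by default, and the code is also bind-mounted by docker-compose so you can update it without rebuilding:

FROM rasa/rasa-sdk:1.10.2

# custom action code (build context is the actions folder)
COPY . /app/actions

EXPOSE 5055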

 

Now we are all set to run our Docker containers. To do so, go to the project directory and run the command below:

sudo docker-compose build

This downloads the necessary Docker images and builds the containers. Once the containers are ready, run the following command to bring them up:

sudo docker-compose up

This starts the services. We can now go to http://ip_of_your_vm_instance:5005 to see whether the Rasa server responds.

You should see something like:

Hello from Rasa: 1.10.3

If you are not getting any response, check the firewall rules of your VM and allow incoming HTTP traffic on port 5005.
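If you manage the VM with the gcloud CLI, a rule along these lines opens the port (the rule name allow-rasa-5005 is arbitrary):

gcloud compute firewall-rules create allow-rasa-5005 --allow=tcp:5005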

If you get the same message, the Rasa server is up and running inside the Docker container on GCP.


Did you notice that it is running over HTTP and not HTTPS? That is something we don’t want: almost all websites run over HTTPS for security reasons, and a site served over HTTPS cannot integrate with a chat server running on plain HTTP. So the next thing to do is serve it over HTTPS.


 

 

Reverse proxy using nginx

To serve Rasa over HTTPS, we will use nginx as a reverse proxy. But before that, we need a registered domain name and an SSL certificate issued for that domain. If you don’t have these, you can refer to this blog to generate a certificate and set up nginx to serve HTTPS.

You can verify that nginx is serving HTTPS by visiting https://your_domain_name.

Once nginx is up and running with an SSL certificate over HTTPS, we can set up the reverse proxy.

To do that, we will create an nginx config file that routes all requests arriving at https://domain_name to our Rasa server.

The config file will look something like this:
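As a sketch, assuming your certificate and key are mounted at /etc/nginx/certs and the Rasa container is reachable as rasa (the compose service name) on port 5005; the Upgrade/Connection headers are needed because Webchat talks to Rasa over socket.io (WebSockets):

server {
    listen 443 ssl;
    server_name your_domain_name;

    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location / {
        # forward everything to the Rasa server container
        proxy_pass http://rasa:5005;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}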

Now create a new directory inside your project directory and copy this config file into it. The config file’s extension is .conf.

We will now create a new Docker container for nginx, which will run as a service alongside the Rasa server and the action server.

The docker-compose.yml file will now look something like this:
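Here is a sketch of the updated file, assuming the nginx config directory is called nginx and the certificates live in a certs folder next to it (adjust the paths to wherever your certificate actually is):

version: '3'

services:
  rasa:
    container_name: "rasa_server"
    build:
      context: backend
    ports:
      - "5005:5005"

  action_server:
    container_name: "action_server"
    build:
      context: actions
    volumes:
      - ./actions:/app/actions
    ports:
      - "5055:5055"

  nginx:
    container_name: "nginx"
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx:/etc/nginx/conf.d
      - ./certs:/etc/nginx/certs
    depends_on:
      - rasa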

Now we need to stop the running Docker containers. Go to the project directory and type sudo docker-compose down.

Then build the containers again by typing sudo docker-compose build.

Once the build is done, start the services again with sudo docker-compose up.

That’s it!! We are done.

We can now visit https://your_domain_name. If it says “Hello from Rasa: 1.10.3”, congrats, you are done! Everything worked, and you can now include the chatbot in a website.

If you get an error saying a process is already running on port 443, kill it by typing sudo fuser -k 443/tcp, then go to /etc/nginx/nginx.conf and comment out this line:

worker_processes auto;

Now restart nginx again. It should work.

Now change the socketUrl in your index.html to https://your_domain_name and you will be connected to your bot running on GCP. You can verify this by opening index.html in any browser.

You now have a secure, well-deployed chatbot that you can extend into a SaaS chatbot platform.

 
 
