It sounds like you might benefit from using Docker for this. I recommend using one of the Cloud9 images available on Docker Hub and creating a container for each user who wants to access the server. Have each Cloud9 container listen on 127.0.0.1 and its own port (say 3000 for this example), then use an NGINX reverse proxy to route requests to the correct container:
server {
    listen 80;

    location /user/username/ {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # Cloud9 relies on WebSockets, so the proxy must support the Upgrade handshake:
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        # The trailing slash strips the /user/username/ prefix before proxying,
        # so the container sees requests at its root.
        proxy_pass http://127.0.0.1:3000/;
    }
}
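Launching the per-user containers might look like the sketch below. It only prints the `docker run` command for each user so you can review it before running anything. The image name `my-cloud9-image` is a placeholder, as is the container-internal port 80; substitute whichever Cloud9 image you pick from Docker Hub and whatever port it actually exposes.

```shell
# Placeholder image name; substitute a real Cloud9 image from Docker Hub.
IMAGE="my-cloud9-image"

# Build the `docker run` command for one user's container.
# $1 = username, $2 = host port to bind on 127.0.0.1.
# Binding to 127.0.0.1 keeps each container reachable only through NGINX,
# never directly from the outside.
c9_run_cmd() {
  echo "docker run -d --name c9-$1 -p 127.0.0.1:$2:80 $IMAGE"
}

c9_run_cmd user1 3000
c9_run_cmd user2 3001
```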
Then, you can create a new location block for each container: replace “username” in the path with that user’s name, and change the port to point at that user’s container. This is by no means a perfect solution, as storing the workspace data and scaling your userbase are difficult. You could also skip the reverse proxy entirely and give each user a dedicated port (have a container listen on port 3001 for User1, 3002 for User2, and so on). I do, however, believe that Docker is what you’re looking for, as the alternative is running a lot of Cloud9 instances on different ports and using a chroot jail for each user, which is much more difficult to set up and scale. I also do not recommend that path, as it is far less secure.
You might also try an easier means of managing Docker, such as Rancher with a load balancer. This takes a larger effort to set up, but it eases the pain of storage and makes scalability a breeze. It really depends on how many users you plan on handling.