Raspberry Pi (+ Odroid C1) Docker Cluster

In this day of containerized workloads, having a development or testing lab on an IoT cluster can be a good use of a fraction of your desk space. Running Debian (Raspbian) on a mix of Raspberry Pis and Odroid C1s, I have successfully spun up a Docker Swarm cluster and have Portainer running on it.

Inventory of Hardware:

The first step for my setup was getting a board to mount all of my inventory in a single space, wired together. I could have made the presentation smaller by stacking them, but I wanted to be able to see each one individually. I may change from the board to a stack at a later date to save even more space. (see below)

The second step for me was to get all the images loaded onto the SD cards. I went with the latest Raspbian for each of the respective models, which turned out to be Debian 10 (Buster) for all three types. I would suggest you look up your specific IoT models to see what the latest version of Linux is (and which one you are most comfortable with). Most have a Docker build available that will support Docker Swarm.
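
If your distribution does not ship a recent Docker package, a minimal sketch for getting Docker onto each node is Docker's convenience script (this assumes a Debian/Raspbian image and the default pi user; adjust the username for the Odroid boards):

# Install Docker using the upstream convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Let the regular user run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker pi
# Quick sanity check
docker version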

The next step is to SSH to the node you are classifying as the "Master" or "Leader" of the swarm and run the command: docker swarm init. This will create the Docker Swarm cluster and output the join command you need for the rest of the IoT devices.
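
If the leader node has more than one network interface, you may need to tell Docker which address to advertise; the IP below is just an example from my network:

# Run on the node that will manage the swarm
docker swarm init --advertise-addr 192.168.0.10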

After that, you will want to SSH into each of the additional IoT devices and run the join command that was output in the step above, which joins all of the IoT devices to the Docker Swarm cluster. Back on the leader, you should be able to run the command: docker node ls, which should output all the nodes that you have added and show you their status.
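
For reference, the join and verification commands look roughly like this (the token and IP are placeholders; use the exact command printed by docker swarm init):

# On each worker node
docker swarm join --token SWMTKN-1-<token-from-init-output> 192.168.0.10:2377
# Back on the leader, confirm every node shows up as Ready
docker node ls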

The final step for this post is loading up the Portainer service. On the leader node, create a YAML file with the data below. (This has to be done on a manager node; docker service commands cannot be executed on worker nodes.)

version: '3.2'

services:
  agent:
    image: portainer/agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    networks:
      - agent_network
    deploy:
      mode: global
      placement:
        constraints: [node.platform.os == linux]

  portainer:
    image: portainer/portainer-ce
    command: -H tcp://tasks.agent:9001 --tlsskipverify
    ports:
      - "9000:9000"
      - "8000:8000"
    volumes:
      - portainer_data:/data
    networks:
      - agent_network
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]

networks:
  agent_network:
    driver: overlay
    attachable: true

volumes:
  portainer_data:

After you have created the YAML file, run the command: docker stack deploy -c portainer.yml Management to create the new Portainer stack. Give it a minute or five (depending on your internet speed and the power of your IoT devices) to pull the Docker images down from Docker Hub and start the services. After that, run docker service ls to see the status of the services. You should see something like the below:

pi@Docker1:~ $ docker service ls
ID             NAME                   MODE         REPLICAS   IMAGE                           PORTS
4ff7hmvm8hqe   Management_agent       global       4/4        portainer/agent:latest
imoa5784pdi9   Management_portainer   replicated   1/1        portainer/portainer-ce:latest   *:8000->8000/tcp, *:9000->9000/tcp

Please note that the replicas are showing the same number on both sides of the slash: 4/4 for the agent service. Once all the replicas for the Portainer service are showing up, you will be able to log into the leader node on port 9000: http://192.168.0.10:9000 (for example). The first time you pull up the page you will be prompted to set an admin user and password; once these are set you will be shown the Portainer dashboard, where you can manage your Docker Swarm services and containers.

For the next post we’ll talk about automating Docker Swarm deployments with Fedora CoreOS on AWS.

Loading up an Elixir App on Heroku

In a project I’ve been working on, the want/need came up to put an Elixir (Erlang) app in the cloud. Personally, I like to manage the infrastructure as much as possible, but as my life continues to get busier and busier, sometimes it’s nice to have an infrastructure that you can simply deploy to while you just worry about making the code work. This is where places like Heroku and Gigalixir (and many, many others) come into play. The dev I’ve been working with created the Elixir app on Windows, and it did not go natively into either Heroku or Gigalixir, so it definitely took some tweaking.

In this overall app’s case there were three sub-apps; we’ll call these App1, App2, and App3, where App1 has a dependency on App2 and App3. When the developer coded this app on his Windows machine, he put all three apps’ folders at the same level in his dev folder:

\dev\App1

\dev\App2

\dev\App3

This worked fine for what he had running on the temp server I had set up for him in my VMware cluster; however, when trying to upload into a cloud environment, it would have treated the three apps as separate (or at least this is what I figured). So, in order for it to operate under a single container with either of the cloud providers I was working with, I found the dependency section in the mix.exs of App1 and updated the entries so I could bring them all into a single “App” folder (a shell sketch of the move follows the layout below):

\dev\App1

\dev\App1\App2

\dev\App1\App3
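
If you are doing the same reshuffle from a shell, the move itself is just the following (run from inside the App1 folder; the paths are examples matching the layout above):

# Pull the sibling apps into App1 so a single folder/repository holds everything
mv ../App2 ./App2
mv ../App3 ./App3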

Then I updated the mix.exs like this. (Before, the path had two periods (e.g. "../App2"), so I just had to remove one to make it point at the local folder.)

{:App2, path: "./App2"},

{:App3, path: "./App3"},

This was the first step in getting it working; without it I wasn’t even able to get the git push to trigger Heroku’s or Gigalixir’s pre-hook build. After this I discovered that the dev had not included the elixir_buildpack.config file, which tells these cloud providers environment settings such as the Erlang version, Elixir version, runtime_path, etc. Without the config file it will try to use defaults, which in our case were not the correct versions of Erlang and Elixir. My elixir_buildpack.config is below:


# Erlang version
erlang_version=21.0

# Elixir version
elixir_version=1.6.6

# Always rebuild from scratch on every deploy?
always_rebuild=true

# A command to run right before fetching dependencies
hook_pre_fetch_dependencies="pwd"

# A command to run right before compiling the app (after elixir, etc.)
hook_pre_compile="pwd"

# A command to run right after compiling the app
hook_post_compile="pwd"

# Set the path the app is run from
runtime_path=/app
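
For what it's worth, the keys in this file match the ones used by the HashNuke heroku-buildpack-elixir; assuming that is the buildpack in play, you can make sure it is actually attached to your Heroku app with the CLI:

# Attach the Elixir buildpack to the app (run from the app's folder)
heroku buildpacks:set https://github.com/HashNuke/heroku-buildpack-elixir.git
# List the buildpacks currently attached, to confirm
heroku buildpacks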


Other gotchas I noticed: when my developer built the mix.exs file, the default for “start_permanent: Mix.env” was set to prod (which is good in my opinion), however he hadn’t put any settings in the prod.exs file, so in order for the app to run I had to update a number of the prod.exs entries. These included an unused reference to Windows system environment variables, as well as updating the host URLs to the respective app URL provided by our cloud provider.

Now, one interesting thing I learned (at least about Heroku) is that when you specify a port in the prod.exs file, it is ignored when running on their platform. In fact, I’m not 100% sure, but it may ignore any of the settings in the .exs files that pertain to URLs or port numbers, as their platform has a standard of running on specific ports; Heroku assigns the port at runtime through the PORT environment variable, which your app is expected to bind to (at least in the free account). Which brings up another great point about both of these cloud providers (as with most cloud providers): they have a free tier which allows you to get your app up and running at no hosting cost and to ensure it’s the right platform for what you are wanting to do. I’ve really grown to like the PaaS environment the more I use it, and it sure makes what I used to do many years ago as my primary living, managing hardware for development teams and their test environments at a large software company, kind of obsolete; it makes me glad I’ve been migrating into dev and DevOps, among other things.

I know I didn’t cover everything you need in this post to get up and running on Heroku; feel free to comment (sorry, I’m not the fastest at responding) and I will get back to you. I have included some of the basic commands you will need to get your code base into Heroku or Gigalixir.


Commands (a condensed end-to-end Heroku sequence follows after the list):

  • Heroku Specific 
    • heroku login -> This one is pretty obvious; you will need the Heroku CLI installed for it to work, and it authenticates you with the Heroku environment
    • heroku git:clone -a app1 -> This clones your app’s repository from Heroku into a local directory and sets up the remote repository on Heroku’s side that you will be pushing your code to
    • git add .   -> (Must include the period at the end) This stages the changes you have made in the local directory for your “local git repository” (more about local versus remote repositories some day later)
    • git commit -am "Some relevant commit text" -> This commits the code to your “local repository” with the message specified in the quotations
    • git push heroku master -> Finally, this command pushes your code to the Heroku “remote repository”, which kicks off the hook scripts that run your app code through their verification and build-out process as specified in your code. After this step your site “should” be live, unless there are issues.
    • heroku logs -> This is a great command if you need to see the logs of your app, which you wouldn’t otherwise be able to see; great for troubleshooting apps once they’ve been deployed
  • gigalixir specific 
    • git remote add gigalixir https://<URL Provided by gigalixir> -> This command adds the Gigalixir remote repository to your local folder so that you can push your code to Gigalixir with the following command
    • git push gigalixir master -> This command pushes your code up to your Gigalixir repository and subsequently rebuilds your app and deploys it to your replica
    • Gigalixir also has a CLI you can install, but it requires you to install Python first (and subsequently pip, so you can install the CLI with pip). Please refer to Gigalixir’s commands page for details
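
Putting the Heroku side together, a typical cycle looks something like this (the app name is an example; swap in your own):

heroku login
heroku git:clone -a app1
cd app1
# ...make your changes...
git add .
git commit -am "Describe the change"
git push heroku master
heroku logs --tail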

As a final comment on the commands above, please keep in mind that these were accurate at the time of writing; if any don’t work, there’s a good possibility something was updated, and I would recommend you go to the respective cloud provider for the most up-to-date commands. Also, Heroku and Gigalixir are not affiliated in any way with this site or this post; they are simply the mediums I used to get my code base on the internet.

VPS vs AWS Instance vs Other (GCE/Azure, etc)

When looking at hosting your site/web app somewhere, what are some things to help you determine which method of hosting you should use? As we know from searching the internet, the options are almost as limitless as the choices of what car to buy. In the last decade or two, things have changed tremendously with respect to hosting web sites and web applications. When I started hosting, the most economical option was to find someone with rack space in a colocation datacenter and rent it from them. (My first colocation rack space was 1U for $100 USD/month.) This allows you to control the vast majority of your hosting infrastructure, just not the networking/internet access portion of it; then again, who really has control over the internet... And while colocation is still an option, it is very static and oftentimes not as cost efficient; don’t forget your time is worth something as well.

Then Amazon made a change to how they utilize, and subsequently offer up, their own hosting infrastructure, giving birth to Amazon Web Services (AWS).

In a lot of ways this revolutionized the online hosting industry, giving people the option to run “instances” and only pay for the hours in which they are used, only pay for allocated storage, bandwidth used... the list goes on. This arguably caused a great shift in the online virtual server/dedicated server market, and other companies sprinted to catch up, never quite matching the offerings or customer service of AWS.

Now, with all this hype about AWS, that doesn’t mean it’s always the right choice. AWS was originally designed for a specific target audience: those who consider their setup to be ephemeral, or transitory. The idea is that your critical data is stored in a dedicated data store (such as a database or caching system) and nothing is stored on the instances serving up the web site or web application. This type of setup is ideal for fluctuating load levels that need more computing power during some periods and less during others, especially when you include Auto Scaling in the mix of your AWS service offerings. The other catch is that AWS, when utilized like a traditional VPS (systems running 24/7, significant allocated storage, and high bandwidth utilization), can be more expensive than some VPS offerings; the trick is to determine what you are wanting to do.

Traditional VPS offerings are nice because they are a fixed cost and, depending on the service company, can include unmetered (unlimited amount of bandwidth) internet connections (unlike AWS, which bills you for your bandwidth usage, outbound only). For instance, one of the companies I’ve been using lately, OVH, has very economical plans which include unmetered bandwidth. I have a number of their $3.50/month VPSes which run a handful of different web apps and services; the speed of the machines is great, the speed of the internet is great, and when you commit to multi-month or yearly plans they offer discounts as well.

Now, the title mentions GCE and Azure, and while their offerings have some similarities to AWS, in the case of Azure its interface and service offerings leave a lot to be desired. I found the understanding of how the different virtual machines and networking interact kind of confusing and not intuitive at all; then again, I probably said the same thing when I first learned AWS more than 5 years ago. However, if you already have an MSDN subscription, there is a set of compute hours (and more) that you get with it, as well as discounts, which is beneficial to those with the subscription.

I’ll be honest that I have not used GCE at all, but as I understand it, Google’s offering is very similar to Azure’s and not as widely used as AWS.

To summarize, the options for application, site, service, and other types of hosting are substantial, and what is needed for your specific use will need to be determined by your technical staff. However, if you can get your development team to focus on a more ephemeral design, it will benefit you in the long run; it gives you more options than the traditional monolith.

The views expressed here are my opinions and are not intended to be used as factual data for any decision making or reports.

SEGS – Super Entity Game Server on CentOS 7 Build-Out Guide

One of the things I love doing is building out servers that the average user never realized existed. Today’s build-out is a SEGS server, built on CentOS 7 from source. SEGS is a service that can run on most *nix platforms and allows users to connect their CoX client to it and run around the city. (Well, it will be that when it’s done; it still has a fair bit of work to go.) If you don’t know what CoX is, here is a quick definition: http://cityofheroes.wikia.com/wiki/CoX.

Now let’s get building:

  1. First we are starting out with a bare-minimum install of CentOS 7, at which point there are some general tools I like to install: vim (my text editor of choice), htop (a prettier and better version of top), git (required to pull the source code), cmake3 (used for building the source code), and screen (which lets you put running processes into a screen session that doesn’t close if you disconnect). Run the commands below to install these (assumes you are running as root):
    • yum install epel-release
    • yum install htop vim git screen cmake3
  2. Next I’ll be slightly modifying what the SEGS readme on their repo says (https://github.com/Segs/Segs). Note the change of devtoolset-6 to devtoolset-7:
    • yum install git centos-release-scl-rh
    • yum install devtoolset-7-gcc-c++ devtoolset-6-gcc-c++
    • source /opt/rh/devtoolset-7/enable
    • source /opt/rh/devtoolset-6/enable
    • yum install gcc gcc-c++ git-core kernel-devel qt5-qtdeclarative-devel qt5-qtbase-devel qt5-qtbase qt5-qtwebsockets qt5-qtwebsockets-devel
    • Optional: yum install qt5-qtbase-mysql
  3. Next you will need to pull the source from the repo
    • git clone https://github.com/Segs/Segs.git
  4. Now that you have installed the pre-reqs and pulled the source down, you should be ready to build. These instructions are on the site, but I figured I’d condense them here:
    • Make your build directory
      • mkdir bld
    • Run cmake against the source code from your build directory
      • cmake <path to source code>
        • e.g. cmake ../Segs (use cmake3 here if you installed the EPEL cmake3 package)
    • Now to compile you need to run make in the build directory
      • Run make
  5. After this compiles you will see an out folder; this is where all the compiled files went, including settings.cfg, in which you will need to update all of the localhost (127.0.0.1) entries (so clients can reach the server), except the database entries if you are using SQLite.
  6. You will need to create the default database and user, as well as the MapInstances folder.
    • from the out folder run:
      • ./dbtool -f create
    • To create an admin user run:
      • ./dbtool adduser -l otlichno -p Temp1234 -a 9
    • And finally create the MapInstances folder with sub folder
      • From the out folder run:
        • mkdir MapInstances
        • mkdir MapInstances/City_00_01
  7. Finally, the pigg files: these are the files containing different components of the CoX game and come with the CoX download. A number of them need to be extracted in order for the service to run. However, all pigg files included with your CoX download need to be copied directly into the ./out/data folder. Steps are below:
    • Copy the pigg files to ./out/data (copy the files themselves, not the folder)
    • Change directory into the data directory and run
      • ../piggtool -x bin.pigg
      • ../piggtool -x geom.pigg
      • ../piggtool -x geomBC.pigg
      • ../piggtool -x geomV1.pigg
      • ../piggtool -x geomV2.pigg
      • ../piggtool -x mapsCities1.pigg
      • ../piggtool -x mapsCities2.pigg
      • ../piggtool -x mapsHazards.pigg
      • ../piggtool -x mapsMisc.pigg
      • ../piggtool -x mapsMissions.pigg
      • ../piggtool -x mapsTrials.pigg
  8. Then all you need to do is run ./authserver from the out folder
  9. **NOTE** If your firewall is enabled (which it is by default), you will need to open ports 2106, 7002, and 7003:
    • sudo firewall-cmd --add-port=2106/tcp --permanent
    • sudo firewall-cmd --add-port=7002/udp --permanent
    • sudo firewall-cmd --add-port=7003/udp --permanent
    • sudo firewall-cmd --reload
    • **Additional Note** Each MapInstance will have its port incremented from the MapServer’s base port, which means you may want to execute the following firewall-cmd command prior to running the reload command
      • sudo firewall-cmd --add-port=7004-7009/udp --permanent

After all this you should have a ‘working’ version of SEGS and should be able to point your client at the server, log in, create characters, and get into the world, as it were.

Thank you for trying this out, let me know if you have any questions or issues.

************************************************************************************

Further notes:

If you are seeing the error

c++: error: unrecognized command line option ‘-std=c++14’

you may need to add the below to your cmake command:

-DCMAKE_CXX_COMPILER=g++ -DCMAKE_C_COMPILER=gcc
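
For example, with the build directory layout from the steps above, the full invocation would look something like the following (make sure the devtoolset environment from step 2 has been sourced first, so that g++ resolves to the newer compiler):

# Point cmake at the devtoolset compilers and re-run against the source tree
cmake -DCMAKE_CXX_COMPILER=g++ -DCMAKE_C_COMPILER=gcc ../Segs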

LetsEncrypt, a Free SSL Certificate Service

In this world of identity theft and man-in-the-middle attacks, it’s always nice to see that green lock in the address bar of your browser. As a service provider hosting sites for numerous individuals and companies, keeping your sites secure and safe is a top priority. There are many ways of doing this, including hardening your site (which I will post about later), ensuring proper firewall settings (another post for later), and, if you’re hosting a website or web application, having an SSL certificate, which is also key, especially if you are dealing with users’ information.

Great, so we need an SSL certificate. That sounds easy. Well, it is easy if you don’t mind spending the money. Before LetsEncrypt, if you wanted any sort of reputable SSL certificate you would have to spend at least $20 USD for a single-FQDN certificate, and at least $99 USD for a wildcard certificate (per year!). Fortunately for us that is no longer the case: a group of large companies and organizations, like Akamai, Mozilla, and the EFF just to name a few, came together and now offer a free SSL certificate service, including a script that will (when set up to run on a schedule) auto-renew your certificates.

I have been using this service for a few years now and will likely be switching all my old paid-for certificates to free LetsEncrypt certificates. The only restriction on this service is that you have to own the domain you are registering the certificate for. Sounds pretty sweet if you ask me.

Getting started with LetsEncrypt is pretty simple. If you are running on Linux, you simply download the client (Certbot) and run it on your system; it will ask you a series of questions such as the domain name, the entity registering the certificate, etc. Certbot will do all of the Apache/Nginx SSL configuration for you if you want it to, including placing both the private key and the public certificate where they need to go. This is also the same tool you will set up a cron job for, to check your certificate status and renew automatically if need be.
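
As a rough sketch of what that looks like (the domain is a placeholder, and the plugin flag depends on whether you run Apache or Nginx):

# Obtain and install a certificate for an Nginx-served site, answering the prompts
sudo certbot --nginx -d example.com -d www.example.com
# Dry-run the renewal to confirm the scheduled job will work
sudo certbot renew --dry-run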

Now, if you want to use LetsEncrypt on a Windows machine, there are a number of ACME clients out there that will let you do this; however, I have not tried any of them yet, as my Windows hosting days are long in the past. But if you check out the LetsEncrypt Getting Started guide, it will have more information for you there.

All in all, I have been pleased with the LetsEncrypt service and am very appreciative of the companies who sponsor it and provide ongoing support for it. Please go check out their site and all the sponsors.