11 April 2022

Installing NextCloud on CentOS 7

So I’m going to walk through installing Nextcloud on CentOS 7. Your mileage will vary if you attempt to use this as a guide for NextCloud on CentOS 8 (which is EOL) or CentOS Stream 8/9; it is not intended for those versions of CentOS.

Nextcloud is an open-source self-hosted sync and file sharing server that was forked from OwnCloud. It is written in PHP and JavaScript and supports multiple databases like MySQL, PostgreSQL, SQLite, and Oracle Database.

Before we get started, we will need to make sure we are set up with a LAMP stack. LAMP stands for Linux, Apache, MySQL, PHP; it basically turns the machine into a web server. And since we are going to be a web server, we should also add Let’s Encrypt for SSL on our machine.

First step is to update your system.

yum -y update

Install PHP

To install PHP 8, you will need to add the EPEL and Remi repositories to your machine. You should also import each repo’s signing key.

yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm --import http://download.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-7

yum -y install http://rpms.remirepo.net/enterprise/remi-release-7.rpm
rpm --import https://rpms.remirepo.net/RPM-GPG-KEY-remi

You can verify the repositories were added by using the command below to check that the PHP packages are available.

yum list php

Install “yum-utils”

yum -y install yum-utils

Enable the Remi repository for PHP, after disabling any existing repo for PHP.

yum-config-manager --disable 'remi-php*'
yum-config-manager --enable remi-php80

Install PHP and all of the required extensions

yum -y install php php-{bcmath,cli,common,curl,devel,gd,imagick,intl,json,mbstring,mcrypt,mysql,mysqlnd,pdo,pear,pecl-apcu,pecl-apcu-devel,ldap,xml,zip}

Verify that PHP is installed and check the version. In my case, PHP v8.0.17 was installed.

php -v

Open the php.ini config file and set your timezone. You will need to uncomment the line for date.timezone and set it to your timezone of choice.

vi /etc/php.ini

date.timezone = Pacific/Honolulu
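If you would rather make that change non-interactively, a sed one-liner in the same spirit as the memory-limit tweak below should do it. This is a sketch assuming the stock php.ini, where the line ships commented out as “;date.timezone =”:

sed -i 's~^;date.timezone =.*~date.timezone = Pacific/Honolulu~' /etc/php.ini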

Raise PHP’s memory limit

sed -i '/^memory_limit =/s/=.*/= 512M/' /etc/php.ini

Install Apache

Install Apache on your machine.

yum -y install httpd mod_ssl

Start Apache and enable the Apache service at boot.

systemctl start httpd
systemctl enable httpd

Install MariaDB

Add the MariaDB repository to your machine

cat <<EOF | sudo tee /etc/yum.repos.d/MariaDB.repo
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.6/centos7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
EOF

Refresh the yum cache

yum makecache fast

Install MariaDB 10.6

yum -y install MariaDB-server MariaDB-client

Start and enable MariaDB service:

systemctl start mariadb
systemctl enable mariadb

Secure our instance of MariaDB by running the ‘mariadb-secure-installation’ command.

mariadb-secure-installation

Enter your root credentials when prompted. For the next two prompts, if you have your root account protected correctly, it will tell you so and you can follow the recommendation to enter ‘n’ for them.

For the next four prompts, enter ‘Y’ for them.

Check which version of MariaDB you are running with the command below, or log into the database and check from there.

mysql -V

Create the Database and the user account for NextCloud using the commands below.

Take note of what you set for:
<nextcloud_db> : This will be the name of your NextCloud database.
<nextcloud_user> : This will be the NextCloud user.
<nextcloud_pw> : This is a strong password that you have created for your ‘nextcloud_user’.

mysql -u root -p

create database <nextcloud_db>;
create user '<nextcloud_user>'@'localhost' identified by '<nextcloud_pw>';
grant all privileges on <nextcloud_db>.* to '<nextcloud_user>'@'localhost';
flush privileges;
\q
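Before moving on, it’s worth confirming the new account can actually reach the database. A quick check, using whatever values you substituted above:

mysql -u <nextcloud_user> -p <nextcloud_db>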

Give Apache access to MariaDB

setsebool -P httpd_can_network_connect_db 1

Let us go ahead and reboot the system before we proceed with installing NextCloud.

init 6

Installing NextCloud

Install the packages needed to download and unzip NextCloud.

yum -y install wget unzip

Next, download the latest stable release of NextCloud to your system.

wget https://download.nextcloud.com/server/releases/latest.zip

Unzip the file we just downloaded, move the extracted folder, and then delete the zip file.

unzip latest.zip
mv nextcloud/ /var/www/html/
rm -f latest.zip

Create a data directory to store files that get uploaded to NextCloud. If you use a symlink, this can be any type of path to a NAS, SAN, or NFS. Then give Apache ownership of it.

mkdir /var/www/html/nextcloud/data
chown apache:apache -R /var/www/html/nextcloud/data

Give the Apache user and group ownership of the NextCloud folder.

chown apache:apache -R /var/www/html/nextcloud

The next step is to create an Apache VirtualHost configuration file.

vi /etc/httpd/conf.d/nextcloud.conf

Copy and paste the following code block into the file.
Note: Make sure to update the “ServerName” and “ServerAdmin” settings to suit your environment. The “ServerName” is its FQDN, so remember to set up your DNS entry for it, if necessary.

<VirtualHost *:80>
  ServerName nextcloud.pwwf.com
  ServerAdmin nextcloud.admin@pwwf.com
  DocumentRoot /var/www/html/nextcloud
  <Directory /var/www/html/nextcloud>
    Require all granted
    AllowOverride All
    Options FollowSymLinks MultiViews
    SetEnv HOME /var/www/html/nextcloud
    SetEnv HTTP_HOME /var/www/html/nextcloud
  </Directory>
</VirtualHost>
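Apache won’t pick up the new VirtualHost until it reloads, so after saving the file, check the syntax and restart the service:

apachectl configtest
systemctl restart httpd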

Configure SELinux

Install the package that provides the ‘semanage’ utility.

yum -y install policycoreutils-python

Add the context rules to allow NextCloud to write data into its directories.

semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/data(/.*)?'
semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/html(/.*)?"
semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/config(/.*)?'
semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/apps(/.*)?'
semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/3rdparty(/.*)?'
semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/.htaccess'
semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/nextcloud/.user.ini'

restorecon -Rv /var/www/html
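You can spot-check that the new contexts were applied, for example on the data directory:

ls -dZ /var/www/html/nextcloud/data
# the context should include httpd_sys_rw_content_t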

Configure Firewall

Set the firewall to allow HTTP and HTTPS traffic.

firewall-cmd --add-service={http,https} --permanent
firewall-cmd --reload
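A quick way to confirm both services made it into the active zone:

firewall-cmd --list-services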

Completing the NextCloud Setup in the UI

Open your web browser of choice and enter either the server name URL you set in the ‘nextcloud.conf’ file, or alternatively the IP address of your machine, to access the NextCloud web GUI.

example – http://nextcloud.pwwf.com/
http://10.1.2.169/

The first fields are for creating an admin account for your NextCloud instance. Set it to anything you wish, just don’t forget those credentials.

Then select “MySQL/MariaDB” and configure the database fields with the information we used earlier when we set up the database in MariaDB.

Then click on the “Install” button at the very bottom of the page.

Once the install completes, your dashboard will be ready to use.
In your browser, go to: http://<ServerName>/nextcloud/index.php/apps/dashboard

example: http://nextcloud.pwwf.com/nextcloud/index.php/apps/dashboard

Configure SSL with Let’s Encrypt

Having HTTP access is great… but I think that we would like to have some security. There are plenty of paid services out there to get an SSL certificate from. But for this post, let us add SSL encryption using the FREE resource that is Let’s Encrypt so that we can utilize HTTPS without any additional cost.

The first thing we need to do is install certbot.

yum -y install epel-release certbot

Next we will need to request our SSL certificate for this machine.

export DOMAIN="nextcloud.pwwf.com"
export EMAIL="admin@playswellwithflavors.com"
sudo certbot certonly --standalone -d $DOMAIN --preferred-challenges http --agree-tos -n -m $EMAIL --keep-until-expiring
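One caveat: certbot’s standalone mode spins up its own temporary web server on port 80, so if Apache is already listening there, the HTTP challenge can fail. If that happens to you, stopping httpd for the duration of the request should clear the conflict:

systemctl stop httpd
# run the certbot command from above, then bring Apache back
systemctl start httpd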

Note: If certbot is not working for you, you will need to figure out whatever issue it is having before proceeding. If you cannot resolve it, the rest of this article will not benefit you. Unfortunately, troubleshooting certbot is outside the scope of this article.

After the SSL certificate has successfully been generated, it is time to edit your Apache config file for NextCloud, again.

vi /etc/httpd/conf.d/nextcloud.conf

Make your configuration file look like what I have below.
Note: Make sure to update the “ServerName” and “ServerAdmin” settings to suit your environment.

<VirtualHost *:80>
  ServerName nextcloud.pwwf.com
  ServerAdmin nextcloud.admin@pwwf.com
  Redirect permanent / https://nextcloud.pwwf.com/
</VirtualHost>

<IfModule mod_ssl.c>
   <VirtualHost *:443>
  ServerName nextcloud.pwwf.com
  ServerAdmin nextcloud.admin@pwwf.com
     DocumentRoot /var/www/html/nextcloud
     <Directory /var/www/html/nextcloud>
        Require all granted
        AllowOverride All
        Options FollowSymLinks MultiViews

      <IfModule mod_dav.c>
        Dav off
      </IfModule>

        SetEnv HOME /var/www/html/nextcloud
        SetEnv HTTP_HOME /var/www/html/nextcloud
    </Directory>

    <IfModule mod_headers.c>
      Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains"
    </IfModule>

    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/nextcloud.pwwf.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/nextcloud.pwwf.com/privkey.pem

    RewriteEngine On
    RewriteRule ^/\.well-known/carddav https://%{SERVER_NAME}/remote.php/dav/ [R=301,L]
    RewriteRule ^/\.well-known/caldav https://%{SERVER_NAME}/remote.php/dav/ [R=301,L]
    RewriteRule ^/\.well-known/host-meta https://%{SERVER_NAME}/public.php?service=host-meta [QSA,L]
    RewriteRule ^/\.well-known/host-meta\.json https://%{SERVER_NAME}/public.php?service=host-meta-json [QSA,L]
    RewriteRule ^/\.well-known/webfinger https://%{SERVER_NAME}/public.php?service=webfinger [QSA,L]
   </VirtualHost>
</IfModule>
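As before, Apache needs a syntax check and a restart to start serving the new certificate:

apachectl configtest
systemctl restart httpd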

In your browser, you can now go to: https://<ServerName>/nextcloud/index.php/apps/dashboard

example: https://nextcloud.pwwf.com/nextcloud/index.php/apps/dashboard

Other Stuff

Enable OPCache

yum -y install php-opcache

Edit the opcache ini file like so

vi /etc/php.d/10-opcache.ini

Make sure these values are set

zend_extension=opcache
opcache.enable=1
opcache.enable_cli=1
opcache.interned_strings_buffer=8
opcache.max_accelerated_files=10000
opcache.memory_consumption=128
opcache.save_comments=1
opcache.revalidate_freq=1

Then restart Apache

systemctl restart httpd
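If you want to confirm the module actually loaded, you can list PHP’s modules; “Zend OPcache” should appear in the output:

php -m | grep -i opcache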

Pretty Links

To remove the “index.php” from every URL, open the Nextcloud config file.

vi /var/www/html/nextcloud/config/config.php

Depending on how your config file is set up, you will add one of the following entries based on how your URL is configured. If you get this wrong, don’t worry; you will see an “Internal Server Error” message instead of your NextCloud page and will have to come back into this file and change it.

If your line for “overwrite.cli.url” looks like this

'overwrite.cli.url' => 'https://nextcloud.pwwf.com',

then add this line of code under it.

'htaccess.RewriteBase' => '/',

OR – If your line for “overwrite.cli.url” looks like this

'overwrite.cli.url' => 'https://nextcloud.pwwf.com/nextcloud',

Then you will want to add the following line of code under it.

'htaccess.RewriteBase' => '/nextcloud',

Run the following command

sudo -u apache php /var/www/html/nextcloud/occ maintenance:update:htaccess

Now go back to your browser and in the address bar, enter your pretty url without the ‘index.php’ in it…
In my case, it will be “https://nextcloud.pwwf.com/”

Proxy override

I was having an issue with the UI inside NextCloud. I could view folders and files, but I could not create new folders or files. After some troubleshooting (recreating the NextCloud server, testing both before and after adding the SSL certificate, and testing with the proxy bypassed), I was able to confirm that the proxy was indeed causing my headaches. This should help you if you are behind a proxy…

vi /var/www/html/nextcloud/config/config.php

Under your line for “overwrite.cli.url” add this entry.

'overwriteprotocol' => 'https',

This will make sure that any requests, and replies, are done over HTTPS and not HTTP.

Max Upload

PHP is going to try to limit the file upload size that you can use. Since you are probably going to want to save/share some large files, let us update those limits to something more realistic.

vi /etc/php.ini

Search the file and update these values to your desired limit; I’m going to set mine to 10GB.

upload_max_filesize = 10240M
post_max_size = 10342M

While you can adjust these values to your environment, just remember to always make your “post_max_size” a little bit larger than your “upload_max_filesize”. This will keep you from having any issues when uploading a file that is the same size as your max upload limit.

Lastly, you will need to restart Apache.

systemctl restart httpd
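To double-check that the new limits took effect, you can grep PHP’s own view of them (on this setup the CLI reads the same /etc/php.ini):

php -i | grep -E 'upload_max_filesize|post_max_size'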

Trash Cleanup

So NextCloud isn’t always great at cleaning up your deleted files. By design, it is set to hold on to your deleted items for 30 days, then it only forces a delete if you are running low on space. Since you’re probably sitting on at least a few terabytes of storage, those deleted files may never actually get deleted.

Open your NextCloud config file.

vi /var/www/html/nextcloud/config/config.php

Here is how you can control NextCloud’s behavior with these settings.

  • auto – default setting. keeps files and folders in the trash bin for 30 days and automatically deletes anytime after that if space is needed (note: files may not be deleted if space is not needed).
  • D, auto – keeps files and folders in the trash bin for D+ days, delete anytime if space needed (note: files may not be deleted if space is not needed)
  • auto, D – delete all files in the trash bin that are older than D days automatically, delete other files anytime if space needed
  • D1, D2 – keep files and folders in the trash bin for at least D1 days and delete when exceeds D2 days (note: files will not be deleted automatically if space is needed)
  • disabled – trash bin auto clean disabled, files and folders will be kept forever

To automatically delete the files after 30 days and allow NextCloud to purge them sooner if space is needed, you can add this line.

'trashbin_retention_obligation' => 'auto, 30',

To retain the files for 30 days and then absolutely purge them after 40 days, you would add this line.

'trashbin_retention_obligation' => '30, 40',
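If you don’t want to wait for the background job, Nextcloud’s occ tool can also purge trash bins on demand. A sketch, assuming the same install path used throughout this post:

sudo -u apache php /var/www/html/nextcloud/occ trashbin:cleanup --all-users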

Install ClamAV

Here is how to add the open-source antivirus tool ClamAV to the CentOS machine and configure it to automatically run a virus scan on newly uploaded files. ClamAV detects all forms of malware including Trojan horses, viruses, and worms, and it operates on all major file types including Windows, Linux, and Mac files, compressed files, executables, image files, Flash, PDF, and many others. ClamAV’s Freshclam daemon automatically updates its malware signature database at scheduled intervals.

yum -y install clamav clamav-scanner clamav-scanner-systemd clamav-server clamav-server-systemd clamav-update

First edit freshclam.conf and configure your options.

vi /etc/freshclam.conf

Freshclam updates your malware database, so you want it to run frequently to get updated malware signatures. Run it manually post-installation to download your first set of malware signatures:

freshclam

Next, edit scan.conf.

vi /etc/clamd.d/scan.conf

Uncomment this line

LocalSocket /run/clamd.scan/clamd.sock

When you’re finished you must enable the clamd service file and start clamd:

systemctl enable clamd@scan.service
systemctl start clamd@scan.service

Cron Jobs

You will first want to check if there are any existing cron jobs for the Apache user.

crontab -u apache -l

If you don’t see any NextCloud cron job after running the command above, add one.

crontab -u apache -e

Add this line at the bottom of the file to run the NextCloud cron every 5 minutes.

*/5 * * * * php -f /var/www/html/nextcloud/cron.php
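For that cron job to actually be used, Nextcloud also needs its background-jobs mode switched from the default AJAX to cron. You can do that in the admin UI, or with occ:

sudo -u apache php /var/www/html/nextcloud/occ background:cron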

Open and edit your NextCloud config file to schedule the maintenance hours in UTC time.

vi /var/www/html/nextcloud/config/config.php

'maintenance_window_start' => 10,

Other things…

https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/index.html

9 April 2022

Upgrade CentOS 7 to CentOS 8

Warning: CentOS 8 has reached End of Life (EOL) and is no longer supported. You should really consider moving to a supported OS such as CentOS Stream 8.

I was looking at some virtual machines earlier today and I realized that they were not running the most current version of CentOS. Since I am going to upgrade them, I figured it’d be the perfect time to document the process of how to do it.

The first thing I do is make a backup of my virtual machine. You can’t recover from an accident if you don’t have a recovery point. At the very least, make sure you have taken a snapshot of your virtual machine.

Next, I verify what version of CentOS I’m on by running the following command.

cat /etc/centos-release

In my case, I am currently on version 7.9.2009.

At this point, I’m going to enter “sudo su” on my VM and then enter my credentials, so that I can continue as ‘root’ and I don’t have to type “sudo” before every single command.

First step is to install the EPEL repository.

yum -y install epel-release

Next, install both ‘yum-utils’ and ‘rpmconf’ by using this command.

yum -y install yum-utils rpmconf

Next, use ‘rpmconf’ to resolve any outstanding RPM configuration-file conflicts on your VM.

rpmconf -a

Then find and clean up any packages that are not required by your system.

package-cleanup --leaves

package-cleanup --orphans

Go ahead and reboot the system.

init 6

Log back in and do “sudo su” again.
CentOS 8 uses dnf as its default package manager, so it is time to install it.

yum -y install dnf

With dnf installed, it is time to remove the yum package manager.

dnf -y remove yum yum-metadata-parser
rm -Rf /etc/yum

Update all of the dnf packages.

dnf -y update

The next step is to install the CentOS 8 release package.

dnf -y install http://vault.centos.org/8.5.2111/BaseOS/x86_64/os/Packages/{centos-linux-repos-8-3.el8.noarch.rpm,centos-linux-release-8.5-1.2111.el8.noarch.rpm,centos-gpg-keys-8-3.el8.noarch.rpm}

Then upgrade the EPEL repository.

dnf -y upgrade https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
rpm --import http://download.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-8

Next, clean up the dnf cached files.

dnf clean all
rm -rf /var/cache/dnf

CentOS Linux 8 actually reached End Of Life (EOL) on December 31st, 2021, which means that CentOS 8 no longer receives development from the official CentOS project. After that EOL date, if you need to update your CentOS (yes, that means us right now), you need to change the mirrors to point to vault.centos.org, where they are archived. A better option would actually be to upgrade to CentOS Stream instead, but we’ll save that for another post…
Here is how to change the mirrors.

cd /etc/yum.repos.d/
sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-*
sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*
dnf update
cd

There are two packages, dracut-network and rpmconf, that conflict with upgrading and need to be removed.

dnf remove dracut-network rpmconf

Remove the old CentOS 7 kernel

rpm -e `rpm -q kernel`

Remove any conflicting packages that are not needed any longer

rpm -e --nodeps sysvinit-tools

Now run the upgrade for CentOS 8

dnf -y --releasever=8 --allowerasing --setopt=deltarpm=false distro-sync

Next it is time to install a new kernel on your VM.

dnf -y install kernel-core

The final step is to install the CentOS 8 minimal packages.

dnf -y groupupdate "Core" "Minimal Install"

Now if you recheck you can see that both the CentOS version and the kernel version have been updated.

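Re-running the release check from the start of this post, plus a kernel check, will confirm it:

cat /etc/centos-release
uname -r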

13 November 2021

Adding a wildcard SSL certificate to your WordPress site

So this one threw me for a little bit of a loop when I was first trying to figure it out, even though it shouldn’t have. I was just overthinking it. There was plenty of documentation out there for adding a certificate to a single site, but there is not much when it comes to adding a wildcard certificate to a multi-site WordPress install. I guess that was where I had gotten confused. For reference, this was the specific KB article that helped me the most.

For folks that don’t know what I’m talking about, a multi-site install is one where you can host different WordPress sites on the same server, meaning that site1.<yoursite>.com and site2.<yoursite>.com could both reside on the same server even if they are about completely different content. Thus you only have to cover the cost of hosting one server instead of paying for two, one for each site. Yes, they do share some resources, so there are some possible drawbacks… But for most personal sites it should not really be an issue for a few sites to share the same host.

You will need OpenSSL installed on your machine before we continue. It will likely already be installed if you are using Linux. If it’s not, please use your OS’s package manager to install it.

Generate a new private key:

sudo openssl genrsa -out /opt/bitnami/apache2/conf/server.key 2048

Use that key to create a certificate:
***IMPORTANT: Enter the server domain name when the below command asks for the “Common Name”. For a wildcard certificate, use the wildcard form, e.g. *.yoursite.com.***

sudo openssl req -new -key /opt/bitnami/apache2/conf/server.key -out /opt/bitnami/apache2/conf/cert.csr
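Before sending the CSR off, you can sanity-check its contents, including the Common Name you just entered:

sudo openssl req -noout -text -in /opt/bitnami/apache2/conf/cert.csr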

Send the cert.csr file to your Certificate Authority (CA). After they complete their validation checks, they will issue you your new certificate.

Download your certificates. You should have received two files, one was your new certificate and the other file is the CA’s certificate. Rename them as follows:

  • STAR_YourSite_com.crt –> server.crt
  • STAR_YourSite_com.ca-bundle –> server-ca.crt

Back up your private key after generating a password-protected version in the PEM format.

sudo openssl rsa -des3 -in /opt/bitnami/apache2/conf/server.key -out privkey.pem

Note: To regenerate the key and remove the password protection, you can use this command:

sudo openssl rsa -in privkey.pem -out /opt/bitnami/apache2/conf/server.key

We’re almost done. Next you’ll open the Apache configuration file to verify it is set up to use the certificates you just uploaded. The config file can be found at: /opt/bitnami/apache2/conf/bitnami/

Scroll down until you find “<VirtualHost _default_:443>” and verify that it is pointing to the correct certificate, key, and CA certificate bundle that you uploaded earlier. You should find the below lines, if you don’t, go ahead and add them.

SSLCertificateFile "/opt/bitnami/apache2/conf/server.crt"
SSLCertificateKeyFile "/opt/bitnami/apache2/conf/server.key"
SSLCACertificateFile "/opt/bitnami/apache2/conf/server-ca.crt"

Note: It’s easiest to use these default names and not a custom name for these files. If you use a custom name you might need to update that name in other spots of the Apache config file, and you’ll have to google that on your own. If your cert/key is using another name, I recommend just renaming them to the default names above that Apache uses.

After we have copied our files over and have verified that the Apache config file is correct, we are going to update the file permissions on our certificate files. We will make them readable by the root user only with the following commands:

sudo chown root:root /opt/bitnami/apache2/conf/server*
sudo chmod 600 /opt/bitnami/apache2/conf/server*

Open port 443 in the server firewall. If you’re using Bitnami you can reference this KB.

Restart your server.
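If a full reboot feels heavy-handed, restarting just Apache should be enough on a Bitnami stack:

sudo /opt/bitnami/ctlscript.sh restart apache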

Once it comes up, you should now be able to connect to your site using HTTPS.


  • If you are looking for where to purchase an SSL certificate, check out SSLs.com. I use them for my projects. I’ve shopped around, and they have the best deals that I have found anywhere on the Internet.

16 May 2021

GitLab Certified Associate Certification

I’ll be honest; I’ve little experience using Git, or any other versioning software for that matter. I have had an interest in Git for a while now though, mostly for keeping a personal code repo: scripts for working in the Azure and AWS clouds, PowerShell scripts for system administration tasks, and most recently to use to learn and deploy Docker and Kubernetes in my home lab. Previously, I just never thought that I had the time to learn it. So when I stumbled across a link to register for FREE for the GitLab Certified Associate (GCA) training and exam, I decided “What the hell, let’s do it!” (The link only lasted 2 days before they took down the free offering due to overwhelming interest, so sorry folks, I can’t provide you with the link.)

In my personal opinion, this certification is much more of a knowledge certificate than a technical certification. I feel like the course is designed to take you from 0 to drive. You cover all the basics and afterward, you’ll be able to jump right into using git without feeling like an imposter. If you have no experience, like me, this is the perfect place to start. If you’re already familiar with Git, well tough… You’ll still need to get the GCA before you can get one of their ‘specialist’ or ‘professional’ certifications. More info on their more advanced certifications can be found here.

The hands-on, self-paced training lab was informative. There was definitely a sprinkle of marketing in there, like the inclusion of GitLab’s history. But they did do a good job of teaching the various Git concepts and terminology. They also included a bunch of labs to work on while proceeding through the training. The hands-on portion, doing labs, was by far my favorite part. I like to learn by doing. So doing stuff like making a pull request, making changes in the WebIDE and from the command prompt, tagging code, and committing code to a project was what really made the training count. I was also able to recall that hands-on training to complete the exams later on. Like I mentioned earlier, I didn’t think I had the time to commit to learning Git… Well, by spending 1-2 hrs a night, for just a few nights, I was totally able to learn how to use Git.

The exam was twofold. One part was a “written” exam with questions you had to answer. The second part was a “lab” exam where you had to work through a project and submit it for grading. The written exam was not too bad. They give you a series of questions and you have to score 100% on them before you can proceed to the “lab” project exam. The questions dealt with terminology and things that GitLab could do. Honestly, if you did the labs, it was pretty easy as they had already covered all the information. I didn’t feel like there were any surprises or gotchas. I was a little more worried about doing the “lab” project. But again, having done the hands-on training labs, it was a pretty straightforward exam. Some of the verbiage in the lab instructions confused me, and I had to reread the task a couple of times. But in the end, they again were only asking you to do stuff they had covered in the training materials. So nothing too bad if you take your time to complete it.

I feel like unless you work in development or DevOps, this is not going to be a high-priority cert for you to get. For most folks, this certification is going to be more of a skill that they can add to their resume to show one more item that they are knowledgeable in. That said, it won’t hurt any to get GitLab’s GCA if the opportunity presents itself like it did for me. You never know what you will be working on 1, 2, 5, or even 10 years from now. IT is always changing. Who knows… Tomorrow could come, and you or I might find ourselves in some sort of role needing to deploy code to a production CI/CD pipeline and using GitLab to commit our code change and push it. You never know… It could happen, and when it does you’ll be happy you got yourself the GCA.

26 March 2020

Howto: Folding@Home – Linux


The Folding@Home (F@H) team has released v7 (currently v7.5.1) of their F@H software. It has a newer, simpler graphical interface aimed at making it easier for people to install and contribute to the project. Here is how to make it run on your Linux computer. Linux has been growing in popularity as a desktop OS, so it’s great to see projects like this include it as a viable platform for contributing to F@H.

You can find F@H’s official documentation for Linux here – https://foldingathome.org/support/faq/installation-guides/linux/

Install F@H

I’m going to use a 64-bit Ubuntu v19.10 desktop to show you how to install F@H. You can download the latest Ubuntu Desktop versions here.

1. Download the installer from here: https://foldingathome.org/alternative-downloads/ (link opens in new tab)

2. Click on the “fahclient_7.5.1_amd64.deb” installer.

3. Allow the file to open with the default software installer.

4. Click the ‘Install’ button.

5. Enter your password, if asked, to allow the F@H client to get installed.

6. Enter your F@H user and passkey, then click ‘Next’.
*Make sure to check the box to automatically start the FAHClient.

7. The install itself should be really quick.

8. Open a browser on your Linux machine and in the address bar go to: 127.0.0.1:7396

It will open the F@H web GUI where you can watch your work progress or adjust settings.

9. Just like that you are contributing to F@H! The client will be running as a service in the background.

I know that I left my F@H username and passkey in my post. Go ahead and use my F@H username & passkey if you really want to… It just means my F@H user will get credit for any folding you do.

25 March 2020

PhotonOS – Set timezone

PhotonOS is VMware’s minimalist Linux-based OS that has been heavily optimized for vSphere environments. Many of VMware’s appliances and OVAs are based on this super lightweight platform. The problem with appliances and OVAs is that I have yet to find or launch one that is set to MY timezone by default. I guess that is the price I have to pay for living in Hawaii.

While having the timezone misconfigured probably won’t hurt the VM itself most of the time, it definitely makes reading timestamps and logs more difficult. I mean come on, we’ve all been there before, adding or subtracting your timezone offset to figure out what time an event actually happened, since we probably don’t live in the GMT or UTC timezones. Much to our luck, setting the timezone in PhotonOS using SSH (or the console’s CLI) is pretty easy after you log in as ‘root’.

Enter the command below to get a list of all available timezones.

ls -lsa /usr/share/zoneinfo | more

If you live in a region that is divided into subregions, such as the ‘Pacific’, we can use the following command instead to list those zones.

ls -lsa /usr/share/zoneinfo/Pacific | more

Once you have found the name of your desired timezone, set it by making a symbolic link from /etc/localtime to its zoneinfo file. I’m using “Pacific/Honolulu” as my desired timezone.

ln -sf /usr/share/zoneinfo/Pacific/Honolulu /etc/localtime

The final step is to check and visually confirm that the timezone is correct. To do this, we simply run the following command.

date

Now we can finally make some sense out of our logs!!!

13 December 2019

dracut-initqueue

I was updating the firmware on some Dell FC630 servers when I came across this. I really thought that the server had hung during the update and I was in for a long night of trying to fix it. Wait until you see what the fix was…

So using the DellEMC Repository Manager tool, I created a Linux-based Smart Bootable ISO that included the desired updates for my hardware. I then connected to the server’s iDRAC virtual console, mounted the ISO, and booted the server to the ISO image. Everything appeared fine as I watched the server boot up. Then I saw it throw the following message:

dracut-initqueue[686]: mount /dev/sr0/ is write-protected, mounting read-only

Then after waiting and staring for about 5 minutes I started to worry. What’s going on? Did it just freeze?

Well… No, thankfully it had not frozen.

It was just mounting a file as read-only, which apparently took longer than you would think it would. After waiting even longer than felt right, it finally got past this step, and the server proceeded along with its boot-up process. The wait time varied slightly between servers, ranging from about 7 to 10 minutes.

So if you happen to see the “dracut-initqueue” message, don’t panic, your server did not hang. Just wait it out… Grab a coffee or go have a restroom break. Use those few minutes to stretch. Your server will continue chugging along shortly.

30 November 2019

Stop the Ads @ Home – Pi-Hole

Advertising, love it or hate it, truly drives the internet. It is scary how much data companies skim about you from the ads that get served to you and what you click on. But with the right tools, you can do a lot to protect your privacy. One of the best things you can do at home to protect your privacy and stop unwanted ads is to deploy Pi-hole.

As described on their homepage Pi-hole is “A black hole for Internet ads”, that is “Easy-to-install”, and “is a DNS sinkhole that protects your devices from unwanted content”. All of which can be done in a one-time setup, usually on a RaspberryPi, without installing any software on your devices.
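For reference, that one-time setup is usually done with the project’s one-line installer from the official docs (as always with piped installers, review the script first if you’re cautious):

curl -sSL https://install.pi-hole.net | bash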

Pi-hole acts upon your network. It takes on the role of serving as the DNS server on your network, and optionally the role of a DHCP server.

In most homes today, both of these roles are usually performed by your router/wifi access point. DNS, in its simplest terms, acts as the white pages that helps your devices translate a URL to an IP address. DHCP lets your device get a ‘dynamically’ assigned address on the network so that it can communicate with everything else. So by utilizing some whitelists and blacklists, Pi-hole can simply decline to serve the address for known advertising URLs, cutting the ads off before the request ever leaves your house.

I’ve been using it at home for about 3 years now and absolutely love it. There are some ads that still come through, and sometimes if I click on an ad, I’ll get a “page can’t be reached” message. It was different at first to get used to, but now… I wouldn’t trade it for the world!



Another great piece of software to install on your RPi is PiVPN. It’s an easy and secure way to create a VPN (a private tunnel) to your home when you are out and about. The best part about it is that it can allow you to use Pi-hole when you’re not at home. Check out my article here.