Current OS = Linux Mint 13


The aim of this blog: you know how something challenges you, you google away, find a fix with some trial and error, and then in the future someone asks how you did it, or you need to alter or redo it at a later date, but you have forgotten what little trick you used to accomplish it? Well, my aim is to keep track of what I am working on and the methods I have used here. Now I can access it easily, it can be indexed by Google for others, and I will have a URL to send people when I can't recall off hand how I fixed a problem. I hope you find this site useful.

27-11-2015 19:35

Upgrade a Debian 7 VPS to Debian 8

Some VPS providers don't have a Debian 8 template to build your VPS from. Just Debian 7. So you need to do the upgrade yourself. Not a big task, but there is the change to systemd to consider. So here are the commands that you need for this. After changing all references to wheezy in your /etc/apt/sources.list file to jessie:

# update with the new sources.list
apt-get update
# Make sure you have all the keys you need
apt-get install --reinstall debian-archive-keyring
# In case you're coming from a minimal Debian 7 template
apt-get -y install dialog
# In case your locale is not set up, again coming from a minimal template
dpkg-reconfigure locales
# The upgrade
apt-get dist-upgrade
# Check the version upgrade has taken place
lsb_release -a
# or
cat /etc/debian_version
# Swap out upstart with systemd
apt-get -y install systemd systemd-sysv
# Need to force a reboot I found as init is not working due to the above
reboot -f
# Check systemd and not upstart is running
ps aux | grep -E '(systemd|upstart)'
# Remove upstart and its files
apt-get -y purge upstart
rm -fr /var/log/upstart
# Done
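The wheezy-to-jessie edit itself can be done in one pass with sed. A sketch, demonstrated on a demo file here; on a real box you would point it at /etc/apt/sources.list itself:

```shell
# Build a small demo sources.list, then swap every wheezy reference for jessie.
# The -i.bak keeps the original around as sources.list.demo.bak, just in case.
printf 'deb http://ftp.debian.org/debian wheezy main\ndeb http://ftp.debian.org/debian wheezy-updates main\n' > sources.list.demo
sed -i.bak 's/wheezy/jessie/g' sources.list.demo
cat sources.list.demo
```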
If your sources.list (and sources.list.d) is a mess, you can build a clean one with an online sources.list generator.

Posted by DaveQB | Permanent Link | Categories: IT

26-06-2015 16:02

cgi.fix_pathinfo and nginx

Hi all,

If you're reading this, you may have run into the same issue I did.

  • FUDForum gives a 200 HTTP response but 0bytes of data (blank white page)
  • No errors in your logs
I ran into this when Roundcube stopped working for me after I moved my webmail vhosts to a different server.

I found the issue here was that in my fpm php.ini config I had:

cgi.fix_pathinfo = 0

This was the default config for this server.
I changed this to:

cgi.fix_pathinfo = 1

And this got Roundcube working. Yay! And it is safer too.

But now FUDForum was broken (see symptoms above). Digging further, I pieced together that the solution was to add the below to your *.php block in nginx:

fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
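For context, here is a minimal sketch of the kind of PHP location block that line lives in; the socket path and the fastcgi_params include are assumptions for your particular setup:

```nginx
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}
```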

Once this is done, you may find you are now getting 500 Internal Server Error responses from your server. The issue is that PHP can't find the GLOBALS.php file due to the include path (well, at least for me anyway).

So then set your include_path in your php pool config for the php pool your FUDForum website is using:

php_value[include_path]  = "/usr/share/php:/usr/share/pear:/var/local/FUDforum/scripts:/var/local/FUDforum/include:/var/local/FUDforum/include/theme/default"

Change the path to match the data folder of your installation (/var/local/FUDforum). You should be right to go.


Posted by DaveQB | Permanent Link | Categories: IT

26-06-2015 12:25


So it seems the cgi.fix_pathinfo setting needs to be set to 1 in your php.ini file in order for this line to not break everything in your nginx config:

fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

So to wrap up, set:

cgi.fix_pathinfo = 1
In your php.ini.

Posted by DaveQB | Permanent Link | Categories: IT

20-03-2015 20:56

CrashPlan on FreeBSD 10

So you want to run CrashPlan on FreeBSD. So do I!

I did lots of searching, but the main article I used was this one. I'll go through it next.
I didn't edit my make.conf, so right off the bat I am diverging. I did edit my rc.conf and added the below to it:
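A sketch of what that rc.conf addition typically is for the Linux binary compatibility layer (an assumption; check against the article you follow):

```
linux_enable="YES"
```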

and then installed these with pkgng:
* Needed to use ports to install this package.
I only added one fstab entry as the linux compat seems to have full access to the FS:
linproc   /compat/linux/proc   linprocfs   rw   0  0
Download crashplan somewhere under /compat/linux, and then chroot into your linux environment. Run the install script, and back out.
# chroot /compat/linux/ /bin/bash
# cd CrashPlan-install && ./
Edited my /compat/linux/usr/local/crashplan/bin/run.conf to look like this:
SRV_JAVA_OPTS=" -Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan -Xms20m -Xmx1024m -Dnetworkaddress.cache.ttl=300 -Dnetworkaddress.cache.negative.ttl=0 -Dc42.native.md5.enabled=false"
GUI_JAVA_OPTS=" -Dfile.encoding=UTF-8 -Dapp=CrashPlanDesktop -DappBaseName=CrashPlan -Xms20m -Xmx512m -Dnetworkaddress.cache.ttl=300 -Dnetworkaddress.cache.negative.ttl=0 -Dc42.native.md5.enabled=false"
And to /compat/linux/usr/local/crashplan/install.vars I appended:
And my /usr/local/etc/rc.d/crashplan contains:

#!/bin/sh

# PROVIDE: crashplan
# KEYWORD: shutdown

. /etc/rc.subr

name="crashplan"
rcvar=crashplan_enable
start_cmd="crashplan_start"
stop_cmd="crashplan_stop"

crashplan_start () {
  /compat/linux/bin/bash /usr/local/crashplan/bin/CrashPlanEngine start
}

crashplan_stop () {
  /compat/linux/bin/bash /usr/local/crashplan/bin/CrashPlanEngine stop
}

load_rc_config $name
run_rc_command "$1"

An issue I had after upgrading from 9.3 to 10.0 was that the path in run.conf reverted back to default. It took a while to realise and even longer to figure out. I haven't proofread this post, but I hope it helps me (and you) when the time comes.

Posted by DaveQB | Permanent Link | Categories: IT

26-11-2014 18:34

MySQL multi-master replication with GTID on version 5.6

It turns out that multi-master replication is nothing more than a "criss-cross" master-slave replication setup. To explain that better with an example: server A is the master for server B, and server B is the master for server A. This obviously means any change on either is replicated to the other. So you simply get master-slave replication working in one direction, then mimic it in the other direction once satisfied.

The first step in getting this working is to set up the config files:

# Needed for masters and slaves
server-id       = 3
log_bin         = /var/log/mysql/mysql-bin.log

# Needed for the slaves
# Better to filter here than using binlog_do_db on the master etc

relay_log         = /var/log/mysql/mysql-relay-bin.log
auto-increment-offset = 1
auto-increment-increment = 4

I simply add this to /etc/mysql/conf.d/, ensuring the file name ends in .cnf. Restart MySQL and that part is done. Next we connect to one of the servers; let's call it A. The first task is to create a dump or a snapshot of your data.

 mysqldump -u root -p --all-databases --flush-privileges --single-transaction --master-data=2 --flush-logs --triggers --routines --events --hex-blob  |bzip2 > $HOSTNAME-$(date +%F).sql.bz2
The next step is to add the replication user on server A. You could use root, but it's best not to, as you will be setting up this user (password and all) on another server.
 GRANT REPLICATION SLAVE ON *.* TO 'rep'@'slave_ip' IDENTIFIED BY 'some_secret';
Now on server B, import the dump from A.
 bzcat $HOSTNAME-$(date +%F).sql.bz2 | mysql -u root -p
Now we tell server B where the master server is and the user to use (the one we just setup).
 change master to master_host='master_ip', master_port=3306, master_user='rep', master_password='some_secret', master_auto_position=1; 
start slave; 
We should be up and running. Check with the "show slave status\G" command on the slave. Now we simply repeat this the other way around (skipping the dump and restore) in order to set up multi-master: set up a replication user on B, then run the "change master to" command on server A so it is now a slave of B. All should be done now. See the references for troubleshooting. I actually use stunnel to connect my MySQL servers over the internet. Also, you can create a ~/.my.cnf file with your login info to save having to pass that to the mysql command every time. Contents would look like so:
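A minimal sketch of such a ~/.my.cnf; the credentials here are placeholders:

```ini
[client]
user=root
password=some_secret
```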

Posted by DaveQB | Permanent Link | Categories: IT

03-10-2014 15:55

How the MySQL ALTER command works.

What does an ALTER command do in MySQL? Quoting from this StackOverflow thread:

In most cases, ALTER TABLE works by making a temporary copy of the original table. The alteration is performed on the copy, and then the original table is deleted and the new one is renamed. While ALTER TABLE is executing, the original table is readable by other sessions. Updates and writes to the table are stalled until the new table is ready, and then are automatically redirected to the new table without any failed updates.
This makes a lot of sense. Using the "show processlist;" command, we can see the state an ALTER command is in, such as "copy to tmp table ALTER TABLE mdl_sessions2 ENGINE=InnoDB". This gives one confidence to cancel the operation while nothing has actually been changed. There might be a small window between checking the state of the command and actually cancelling it, but if it is a long-running query (which would be the only way it would be humanly possible to cancel it) then the next step is altering the tmp table, so again, no issue if that is cancelled. So cancel away on an ALTER command while it is still copying. If it has moved past that to swapping the tables in, be careful, as that is when the original table is deleted.
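Checking the state and cancelling would look something like this; the connection id 42 is a placeholder for whatever SHOW PROCESSLIST reports:

```sql
-- Look at the State column for the ALTER's connection
SHOW PROCESSLIST;
-- Cancel it while it is still in "copy to tmp table"
KILL QUERY 42;
```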

Posted by DaveQB | Permanent Link | Categories: IT

27-09-2014 23:26

Flash in Chromium on Ubuntu based OS

sudo apt-add-repository ppa:skunk/pepper-flash
sudo apt-get update
sudo apt-get install pepflashplugin-installer -y
sudo su -c "echo '. /usr/lib/pepflashplugin-installer/' >> /etc/chromium-browser/default"

This works on any Ubuntu-based distro. I ran it on my Mint 13 desktop and it worked after restarting Flash.

Posted by DaveQB | Permanent Link | Categories: IT

21-07-2014 12:13

Networking tips and tricks

This is a good article on the basics of using Network Manager, and a good read for anyone using it. The one issue is the hosts file editing: it is better to put the FQDN first after the IP address, and any short names after that, as the first name after the IP is what the system uses to resolve its own hostname. So the command hostname -f won't work if your system's FQDN appears second or later on its line. It is just a good habit to be in.
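For example, an /etc/hosts line with the FQDN first; the address and names here are made up:

```
192.168.1.10   box1.example.com   box1
```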

Posted by DaveQB | Permanent Link | Categories: IT

23-06-2014 15:57

AWS Sydney Region network throughput

I did some iperf testing between the two Availability Zones (AZs) in the Sydney AWS region. I have four Gluster servers: an m1.medium and an m3.large in ap-southeast-2a, and an m1.medium and an m3.large in ap-southeast-2b. So I did a cross-AZ iperf test between matching instance types.

m1.medium 2a > m1.medium 2b

[ ID] Interval Transfer Bandwidth 
[ 4] 0.0-10.0 sec 220 MBytes 184 Mbits/sec 
[ 7] 0.0-10.0 sec 188 MBytes 157 Mbits/sec 
[ 6] 0.0-10.0 sec 193 MBytes 161 Mbits/sec 
[ 8] 0.0-10.1 sec 262 MBytes 218 Mbits/sec 
[SUM] 0.0-10.1 sec 863 MBytes 719 Mbits/sec 

[ 4] 0.0-10.1 sec 219 MBytes 181 Mbits/sec 
[ 6] 0.0-10.2 sec 227 MBytes 187 Mbits/sec 
[ 7] 0.0-10.2 sec 197 MBytes 162 Mbits/sec 
[ 3] 0.0-10.2 sec 221 MBytes 182 Mbits/sec 
[SUM] 0.0-10.2 sec 864 MBytes 712 Mbits/sec 

m3.large 2a > m3.large 2b

[ ID] Interval Transfer Bandwidth 
[ 7] 0.0-10.0 sec 222 MBytes 185 Mbits/sec 
[ 3] 0.0-10.0 sec 66.0 MBytes 55.1 Mbits/sec 
[ 6] 0.0-10.1 sec 96.6 MBytes 80.6 Mbits/sec 
[ 8] 0.0-10.1 sec 56.8 MBytes 47.3 Mbits/sec 
[SUM] 0.0-10.1 sec 441 MBytes 368 Mbits/sec 

[ 7] 0.0-10.1 sec 140 MBytes 116 Mbits/sec 
[ 3] 0.0-10.1 sec 162 MBytes 134 Mbits/sec 
[ 6] 0.0-10.2 sec 79.1 MBytes 65.3 Mbits/sec 
[ 5] 0.0-10.2 sec 57.4 MBytes 47.4 Mbits/sec 
[SUM] 0.0-10.2 sec 438 MBytes 362 Mbits/sec

Not that good, not gigabit speeds, but not terrible.

Now keeping the test inside the same AZ.

m1.medium 2b > m3.large 2b

[ ID] Interval Transfer Bandwidth 
[ 7] 0.0-10.0 sec 105 MBytes 87.6 Mbits/sec 
[ 6] 0.0-10.0 sec 161 MBytes 135 Mbits/sec 
[ 5] 0.0-10.1 sec 93.9 MBytes 78.3 Mbits/sec 
[ 8] 0.0-10.1 sec 86.4 MBytes 71.9 Mbits/sec 
[SUM] 0.0-10.1 sec 446 MBytes 371 Mbits/sec 

[ 5] 0.0-10.2 sec 143 MBytes 118 Mbits/sec 
[ 4] 0.0-10.2 sec 93.5 MBytes 77.0 Mbits/sec 
[ 7] 0.0-10.2 sec 119 MBytes 98.2 Mbits/sec 
[ 6] 0.0-10.2 sec 89.0 MBytes 73.3 Mbits/sec 
[SUM] 0.0-10.2 sec 445 MBytes 366 Mbits/sec 

Not much better.

Posted by DaveQB | Permanent Link | Categories: IT

21-05-2014 13:32

How to generate a HmacSHA512 from the shell

This is ripped right from here, but I am copying it here for my own (and others') reference.

I realise this isn't exactly what you're asking for, but there's no point in reinventing the wheel and writing a bash version. You can simply use the openssl command to generate the hash within your script.

[me@home] echo -n "value" | openssl dgst -sha1 -hmac "key"
Or simply:
[me@home] echo -n "value" | openssl sha1 -hmac "key"
Remember to use -n with echo, or else a line-break character is appended to the string, which changes your data and the hash. That command comes from the OpenSSL package, which should already be installed (or easily installed) in your choice of Linux/Unix, Cygwin and the like. Do note that older versions of openssl (such as that shipped with RHEL4) may not provide the -hmac option. As an alternative solution, but mainly to prove that the results are the same, we can also call PHP's hmac_sha1() from the command line:
[me@home]$ echo '' | php
Edit: One could use printf rather than echo -n.
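Since the title says HmacSHA512, the same approach works with SHA-512; just swap the digest (and use printf to avoid any trailing newline):

```shell
# HMAC-SHA512 of "value" keyed with "key"; prints the hex digest to stdout
printf '%s' "value" | openssl dgst -sha512 -hmac "key"
```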

Posted by DaveQB | Permanent Link | Categories: IT