Current OS = KDE Neon 20.04
The aim of this blog. You know how something challenges you, you google away and find a fix with some trial and error, and then later someone asks how you did it, or you need to alter or redo it, but you have forgotten what little trick you used to accomplish it? Well, my aim is to keep track of what I am working on and the methods I have used, here. Now I can access it easily, it can be indexed by Google for others, and I will have a URL to send people when I can't recall off hand how I fixed something. I hope you find this site useful.
I think I have fallen behind on the latest tools for doing this elegantly. I found a great page explaining the commands needed, so I will paraphrase it here for my own notes and in case the site goes away.
A handy command here is ubuntu-drivers. In this instance, we run ubuntu-drivers devices to see the driver options we have. From there, we can run sudo ubuntu-drivers autoinstall to install the recommended driver, or sudo apt install nvidia-driver-460 to install a specific driver. Reboot. Another command, nvidia-smi, gives us good info about the Nvidia graphics card and the driver being used.
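Put together, a session looks roughly like this (460 is just the example version from above; use whatever ubuntu-drivers devices recommends on your machine):

# list the GPU and the candidate drivers
ubuntu-drivers devices
# install whichever driver is marked "recommended"
sudo ubuntu-drivers autoinstall
# ...or pin a specific version instead
sudo apt install nvidia-driver-460
sudo reboot
# after the reboot, confirm the card and driver in use
nvidia-smi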
Long time, no post. I am now running KDE Neon on my desktop and added a USB Bluetooth dongle. Whenever I turned off Bluetooth, I could not turn it back on using the GUI. I found the "btusb" module was not loaded, so a:
sudo modprobe btusb
Solved that. I figured it might be permissions that didn't allow me to turn Bluetooth back on. Sure enough, I was not in the "bluetooth" group.
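If you hit the same two problems, a sketch of the fixes (the group name comes from my system; the modules-load.d file name is my own choice):

# add yourself to the bluetooth group, then log out and back in
sudo usermod -aG bluetooth $USER
# have btusb loaded automatically at boot
echo btusb | sudo tee /etc/modules-load.d/btusb.conf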
I have long wanted to try FreeIPA but haven't had the need. Now I do. I just wanted to record the issues I have had.
I am installing on RHEL 7.2 in AWS using the AWS AMI.
So far that's it. I am looking at using FreeIPA's builtin DNS instead of my Bind9 DNS servers. There's no zone file as I am used to, since FreeIPA uses bind-dyndb-ldap, so records are kept in LDAP. But the commands to manage them seem very thorough. I am just worried that if DNS breaks down, I won't have the knowledge to fix it like I would with my own managed bind setup.
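For future me, record management is all done through the ipa CLI rather than zone files. A few commands I have been playing with (the zone and record values here are made-up examples):

# add an A record for www in the example.test zone
ipa dnsrecord-add example.test www --a-rec=203.0.113.10
# list all records in the zone
ipa dnsrecord-find example.test
# show the zone's settings
ipa dnszone-show example.test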
The issue is that mail coming into a Postfix server destined for an external address (think group addresses) will skip procmail. SpamAssassin only rewrites headers; it does not delete mail. As a result, mail that is detected and marked as spam is still sent on to external users, making your mail server a spam bot. I found many tutorials online about this; see the references. The answer was in the value set for the Postfix option content_filter in the master.cf config. Here you put the name of a transport that filters mail and then puts it back in the queue (or deletes it, etc.). I had spamassassin there, defined later in the config to use spamc. Again, this won't delete anything.
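For context, the relevant master.cf entries end up looking something like this (the transport name spamfilter, the user, and the script path are my own choices, not fixed names):

# hand all inbound smtp mail to the filter transport
smtp      inet  n       -       n       -       -       smtpd
    -o content_filter=spamfilter:dummy

# the filter transport: pipe each message through a script that
# scans it and re-injects (or deletes) it
spamfilter unix -       n       n       -       10      pipe
    flags=Rq user=spamd null_sender=
    argv=/usr/local/bin/spamfilter.sh -f ${sender} -- ${recipient}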
So I created my own script, a variant of what I saw in the tutorials, that deletes mail marked as spam. After some typos and spelling mistakes, it was all working. I added a logger line when deleting an email so there's more of a record. A sketch of the script is below.
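I won't reproduce my script exactly, but the shape of it, modelled on the simple filter in the Postfix FILTER_README (the paths and the log tag are assumptions), is:

#!/bin/sh
# spamfilter.sh: scan with spamc, drop mail SpamAssassin flags as
# spam, re-inject everything else via sendmail.
INSPECT_DIR=/var/spool/filter
SENDMAIL="/usr/sbin/sendmail -G -i"
EX_TEMPFAIL=75

cd $INSPECT_DIR || { echo $INSPECT_DIR does not exist; exit $EX_TEMPFAIL; }

# clean up the spool file however we exit
trap "rm -f in.$$" 0 1 2 3 15

# run the message (stdin) through spamc so headers get rewritten
/usr/bin/spamc > in.$$ || { echo spamc failed; exit $EX_TEMPFAIL; }

# if SpamAssassin marked it, log it and silently discard
if grep -q '^X-Spam-Flag: YES' in.$$; then
    logger -t spamfilter "deleted spam from $2"   # $2 is ${sender}
    exit 0
fi

# otherwise hand it back to postfix; "$@" is -f ${sender} -- ${recipient}
$SENDMAIL "$@" < in.$$
exit $?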
Ref:
http://www.postfix.org/FILTER_README.html#simple_filter
http://www.akadia.com/services/postfix_spamassassin.html
https://joost.vunderink.net/blog/2011/04/23/deleting-spam-with-postfix-and-spamassassin/
Some VPS providers don't have a Debian 8 template to build your VPS from, just Debian 7. So you need to do the upgrade yourself. Not a big task, but there is the change to systemd to consider. Here are the commands you need, after changing all references to wheezy in your /etc/apt/sources.list file to jessie:
# Update with the new sources.list
apt-get update
# Make sure you have all the keys you need
apt-get install --reinstall debian-archive-keyring
# In case you're coming from a minimal Debian 7 template
apt-get -y install dialog
# In case your locale is not set up, again, coming from a minimal template
dpkg-reconfigure locales
# The upgrade
apt-get dist-upgrade
# Check the version upgrade has taken place
lsb_release -a
# or
cat /etc/debian_version
# Swap out upstart for systemd
apt-get -y install systemd systemd-sysv
# Need to force a reboot, I found, as init is not working after the above
reboot -f
# Check systemd and not upstart is running
ps aux | grep -E '(systemd|upstart)'
# Remove upstart and its files
apt-get -y purge upstart
rm -fr /var/log/upstart
# Done

If you have a mess of a sources.list(.d), then you can build a clean one here.
Ref: http://justinfranks.com/linux-administration/upgrade-openvz-vps-from-debian-7-wheezy-64-bit-to-debian-8-jessie-64-bit
Hi all. If you're reading this, you may have run into the same issue I did.
cgi.fix_pathinfo = 0
This was the default config for this server. I changed this to:
cgi.fix_pathinfo = 1
And this got Roundcube working. Yay! And it is safer too. But now FUDForum was broken (see symptoms above). Digging further, I pieced together that the solution was to add the below to your *.php block in nginx:
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
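In context, the whole *.php block ends up looking something like this (the fastcgi_pass socket path is from my setup and will likely differ on yours):

location ~ \.php$ {
    include fastcgi_params;
    # the fix: pass PHP-FPM the full path of the script to run
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}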
Once this is done, you should now be getting 500 internal error pages from your server. The issue is it can't find the GLOBALS.php file due to the include path (well, at least for me anyway). So then set your include_path in the php pool config for the pool your FUDForum website is using:
php_value[include_path] = "/usr/share/php:/usr/share/pear:/var/local/FUDforum/scripts:/var/local/FUDforum/include:/var/local/FUDforum/include/theme/default"
Change the path to match the data folder of your installation (/var/local/FUDforum). You should be right to go.
Ref:
http://serverfault.com/questions/514157/how-to-set-php-include-path-for-php-fpm-in-nginx-config
http://fudforum.org/forum/index.php?t=msg&th=119217&goto=162124msg_162124
So it seems the cgi.fix_pathinfo setting needs to be set to 1 in your php.ini file for this line not to break everything in your nginx config:
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
So to wrap up, set:
cgi.fix_pathinfo = 1

in your php.ini.
So you want to run CrashPlan on FreeBSD. So do I! I did a lot of searching, but the main article I used was this: http://worrbase.com/2012/03/31/Crashplan.html. I'll go through it next. I didn't edit my make.conf, so right off the bat I am diverging. I did edit my rc.conf and added the below to it:
crashplan_enable="YES"
linux_enable="YES"

and then installed these with pkgng:
linux-f10-expat-2.0.1_1
linux-f10-fontconfig-2.6.0_1
linux-f10-procps-3.2.7
linux-f10-xorg-libs-7.4_1
linux-sun-jre17-7.71 (needed to use ports to install this package)
linux_base-f10-10_7

I only added one fstab entry, as the linux compat seems to have full access to the FS:
linproc /compat/linux/proc linprocfs rw 0 0

Download CrashPlan somewhere under /compat/linux, and then chroot into your linux environment. Run the install script, and back out.
# chroot /compat/linux/ /bin/bash
# cd CrashPlan-install && ./install.sh

Edited my /compat/linux/usr/local/crashplan/bin/run.conf to look like this:
SRV_JAVA_OPTS="-Djava.nio.channels.spi.SelectorProvider=sun.nio.ch.PollSelectorProvider -Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan -Xms20m -Xmx1024m -Djava.net.preferIPv4Stack=true -Dsun.net.inetaddr.ttl=300 -Dnetworkaddress.cache.ttl=300 -Dsun.net.inetaddr.negative.ttl=0 -Dnetworkaddress.cache.negative.ttl=0 -Dc42.native.md5.enabled=false"
GUI_JAVA_OPTS="-Djava.nio.channels.spi.SelectorProvider=sun.nio.ch.PollSelectorProvider -Dfile.encoding=UTF-8 -Dapp=CrashPlanDesktop -DappBaseName=CrashPlan -Xms20m -Xmx512m -Djava.net.preferIPv4Stack=true -Dsun.net.inetaddr.ttl=300 -Dnetworkaddress.cache.ttl=300 -Dsun.net.inetaddr.negative.ttl=0 -Dnetworkaddress.cache.negative.ttl=0 -Dc42.native.md5.enabled=false"

And to /compat/linux/usr/local/crashplan/install.vars I appended:
JAVACOMMON=/usr/local/linux-sun-jre1.7.0/bin/java

And my /usr/local/etc/rc.d/crashplan contains:
#!/bin/sh
#
# PROVIDE: crashplan
# REQUIRE: NETWORKING
# KEYWORD: shutdown

. /etc/rc.subr

name="crashplan"
rcvar=`set_rcvar`

start_cmd=crashplan_start
stop_cmd=crashplan_stop

crashplan_start () {
    /compat/linux/bin/bash /usr/local/crashplan/bin/CrashPlanEngine start
}

crashplan_stop () {
    /compat/linux/bin/bash /usr/local/crashplan/bin/CrashPlanEngine stop
}

load_rc_config $name
run_rc_command "$1"
An issue I had after upgrading from 9.3 to 10.0 was that the path in run.conf reverted back to the default. It took a while to realise and even longer to figure out. I haven't proofread this post, but I hope it helps me (and you) when the time comes.
It turns out that multi-master replication is nothing more than a "criss-cross" master-slave replication setup. To explain with an example: server A is the master for server B, and server B is the master for server A. This obviously means any change on either is replicated to the other. So you simply get master-slave replication working in one direction, then mimic it in the other once satisfied.
The first step in getting this working is to set up the config files. See here:
[mysqld]
# Needed for masters and slaves
server-id = 3
log_bin = /var/log/mysql/mysql-bin.log
binlog_format=row
gtid_mode=on
enforce_gtid_consistency=true
log_slave_updates=true

# Needed for the slaves
# Better to filter here than using binlog_do_db on the master etc
# http://www.mysqlperformanceblog.com/2009/05/14/why-mysqls-binlog-do-db-option-is-dangerous/
# more here: http://dev.mysql.com/doc/refman/5.1/en/replication-options-binary-log.html#option_mysqld_binlog-do-db
replicate-wild-ignore-table=mysql.%
replicate-wild-ignore-table=information_schema.%
replicate-wild-ignore-table=performance_schema.%
relay_log = /var/log/mysql/mysql-relay-bin.log

# http://jonathonhill.net/2011-09-30/mysql-replication-that-hurts-less/
auto-increment-offset = 1
auto-increment-increment = 4

read_only=OFF
I simply add this to /etc/mysql/conf.d/ on each server, ensuring the file name ends in .cnf and that each server gets a unique server-id. Restart MySQL and that part is done. Next we connect to one of the servers; let's call this one A. Step 1 is to create a dump or a snapshot of your data.
mysqldump -u root -p --all-databases --flush-privileges --single-transaction --master-data=2 --flush-logs --triggers --routines --events --hex-blob | bzip2 > $HOSTNAME-$(date +%F).sql.bz2

Next step is to add the replication user on server A. You could use root, but best not to, as you will be setting up the user (password and all) on another server.
GRANT REPLICATION SLAVE ON *.* TO 'rep'@'slave_ip' IDENTIFIED BY 'some_secret';

Now on server B, import the dump from A.
bzcat $HOSTNAME-$(date +%F).sql.bz2 | mysql -u root -p

Now we tell server B where the master server is and the user to use (the one we just set up).
change master to master_host='master_ip', master_port=3306, master_user='rep', master_password='some_secret', master_auto_position=1;
start slave;

We should be up and running. Check with the "show slave status\G" command on the slave.
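The fields to watch in that output are the two running flags and the lag; a quick way to pull just those out from the shell (assuming a ~/.my.cnf like the one below, so there's no password prompt):

mysql -e 'show slave status\G' | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'

Both _Running fields should say Yes.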
Now we simply repeat this the other way around (skipping the dump and restore) to set up multi-master: set up a replication user on B, then run the "change master to" command on server A so it is now a slave of B. All should be done now. See the references for troubleshooting. I actually use stunnel to connect my MySQL servers over the internet. Also, you can create a ~/.my.cnf file with your login info to save having to pass it to the mysql command every time. Contents would look like so:

[client]
user=root
password=some_secret

Ref:
http://fromdual.com/gtid_in_action?_ga=1.204815570.317472044.1416546765
http://fromdual.com/replication-troubleshooting-classic-vs-gtid
https://www.digitalocean.com/community/tutorials/how-to-set-up-mysql-master-master-replication
http://www.percona.com/blog/2013/02/08/how-to-createrestore-a-slave-using-gtid-replication-in-mysql-5-6/
What does an ALTER command do in MySQL? Quoting from this StackOverflow thread:
In most cases, ALTER TABLE works by making a temporary copy of the original table. The alteration is performed on the copy, and then the original table is deleted and the new one is renamed. While ALTER TABLE is executing, the original table is readable by other sessions. Updates and writes to the table are stalled until the new table is ready, and then are automatically redirected to the new table without any failed updates.

This makes a lot of sense. Using the "show processlist;" command, we can see the state an ALTER command is in, such as "copy to tmp table ALTER TABLE mdl_sessions2 ENGINE=InnoDB". This gives one confidence to cancel the operation while nothing has actually been changed. There might be a small window between checking the state of the command and actually cancelling it, but if it is a long-running query (which is the only way it would be humanly possible to cancel it), then the next step is still only altering the tmp table, so again, no issue if that is cancelled. So cancel away on an ALTER command while it is still copying. If it has reached the final stage, where the original table is deleted and the copy renamed, be careful about cancelling it.
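For completeness, the cancelling itself is just a KILL QUERY against the thread running the ALTER; roughly like this (1234 stands in for whatever Id show processlist reports):

show processlist;
-- note the Id of the thread whose State is "copy to tmp table"
kill query 1234;

Note that kill query stops the statement but leaves the connection open; a plain kill would drop the connection too.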