Tuesday, July 12, 2016

Live Migrating Virtual Machines in KVM Without Shared Storage (with scripts!)

The last few weeks I've been working on redistributing our KVM guests amongst our host systems at the datacenters. Unfortunately, this process is slow and labor intensive due to some legacy design (the state of which is steadily improving). One of the few remaining items on the improvements list is some big, fast shared storage. After several frustrating days of hunting down system custodians, coordinating maintenance windows, shutting down services, scping files, and battling with fussy services upon reboot, I decided I had to come up with a better solution. This post will walk you through my process for migrating a VM between two hosts on the same network using local storage.

If you're interested, there's a whole lot of really great information on the libvirt migration page about how libvirt handles migrations. After a brief read of this page, a few more searches around the net and a brief study of the man pages I came up with the following command:

virsh migrate --live \
--persistent \
--undefinesource \
--copy-storage-all \
--verbose \
--desturi [destination] [vm_name]

The key to this command is the --copy-storage-all flag. This copies the contents of the source image file to the destination. Before we can run this command, though, we need to create a representation of the image on the destination server, preferably in the same location. Since I use sparsely allocated qcow2 files, I will be using qemu-img with the "qcow2" format flag. If you are using something different, you should create the destination image the same way you created the original source image. This file must be the same virtual size as the source image:

qemu-img create -f qcow2 /vm_storage/[my_vm].img [size]G
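If you don't remember the source image's virtual size, qemu-img info reports it. A sketch of extracting it, run here against sample output so the parsing is visible (the file name and sizes are examples; in practice you would pipe `qemu-img info /vm_storage/[my_vm].img` directly into the awk):

```shell
# Sample of what `qemu-img info` prints for a qcow2 image; the
# "virtual size" line carries the value we need for the destination
info="image: my_vm.img
file format: qcow2
virtual size: 100G (107374182400 bytes)
disk size: 23G"

# Field 3 of the "virtual size" line is the human-readable size
size=$(echo "$info" | awk '/virtual size/ {print $3}')
echo "$size"
```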

Once you've created the file on the destination server, you are almost ready to kick off migration. First we need to make sure you can connect to the remote host. You can test this with virsh:

virsh -c qemu+ssh://[remote_host]/system -l

If all goes well, you should see a list of the VMs on the remote host. You may, however, receive an error or an empty list (assuming it shouldn't be empty). If this happens, you probably need to add the following pkla file which allows the group your user is in to connect to the remote libvirt socket. I've provided the policykit file contents below:

[Remote libvirt SSH access]
Identity=unix-group:MY_GROUP
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes

Change "MY_GROUP" to whatever group you want to use ("wheel" is a good choice if your user is a member). On a Red Hat/CentOS system, save this file as /etc/polkit-1/localauthority/50-local.d/50-libvirt-remote-access.pkla and try again.

Once you're able to successfully run the test above, you can try to migrate your VM. I've provided an example below. Run this as the user who is a member of the group we added to the pkla file above. You should probably also run this inside of a tmux or screen session as it will take a while to complete:

virsh migrate --live --persistent --undefinesource --copy-storage-all \
--verbose --desturi qemu+ssh://[remote_host]/system [vm_name]

If all goes well, you should see a progress indicator that advances slowly. In my tests, a 100GB VM with 4GB of RAM under minimal load took about 15 minutes to transfer over a 1GbE network.
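Before kicking off a transfer that long, it's worth confirming the destination actually has room for the image. A sketch of the check, run here against canned `df -P` output so the parsing is visible (in practice you would feed it `ssh [remote_host] df -P /vm_storage`; the numbers below are made up):

```shell
# df -P guarantees a stable one-record-per-line format; this sample
# mimics the destination host's /vm_storage mount
df_out="Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/sda1 524288000 104857600 419430400 20% /vm_storage"

# Field 4 of the second line is available space in 1K blocks
avail_kb=$(echo "$df_out" | awk 'NR==2 {print $4}')
avail_gb=$(( avail_kb / 1024 / 1024 ))
echo "${avail_gb}G available on destination"
```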

This script should be run from the source VM host. It will handle creating the img file on the destination host and do some sanity checking for you:

#!/bin/bash
# Change libvirt default URI to allow enforcement by the pkla file
export LIBVIRT_DEFAULT_URI=qemu:///system
vm="$1"
dest_host="$2"
storage="/directory/of/img/files"

# Are you root?
if [[ "$UID" == "0" ]]
then
  echo "You cannot run migrate as root"
  exit 1
fi

# Exit if not running in screen or tmux
if [[ -z "$TMUX" && "$TERM" != screen* ]]
then
  echo "You must run migrations in either screen or tmux. Aborting."
  exit 1
fi

# Confirmation
read -p "You are about to move ${vm} to ${dest_host}. Are you sure? y/N " -n 1 -r
echo
if [[ ! "$REPLY" =~ ^[Yy]$ ]]
then
  echo "Aborting."
  exit 1
fi

# Check if VM exists
if ! virsh list --all | awk '{print $2}' | grep -qx "$vm"
then
  echo "$vm does not exist on this server. Aborting."
  exit 1
fi

# Check if remote destination exists
if ! host "$dest_host" > /dev/null
then
  echo "Unable to reach $dest_host. Aborting."
  exit 1
fi

# Check that we are forwarding ssh agent
if [[ -z "${SSH_AUTH_SOCK}" ]]
then
  echo "Please exit and re-connect using ssh's -A flag"
  exit 1
fi

# Get VM size (this is ugly)
# TODO: Make this handle multiple disks. Right now it will only work if the
#       img file is named the same as the host and there is only one file
disk_size=$(virsh vol-info --pool default "${vm}".img | awk '/Capacity/ {print $2}' | awk -F'.' '{print $1}')

# Create remote disk
if ! ssh "${dest_host}" "[[ ! -f ${storage}/${vm}.img ]] && sudo qemu-img create -f qcow2 ${storage}/${vm}.img ${disk_size}G"
then
  echo "Unable to create image file ${storage}/${vm}.img on ${dest_host}. Confirm that this does not already exist and try again."
  exit 1
fi

# Migrate VM to new host
if virsh migrate --live --persistent --undefinesource --copy-storage-all --verbose --desturi qemu+ssh://"${dest_host}"/system "${vm}"
then
  echo
  echo "Migration complete. A copy of the VM image file still resides on the old host inside of ${storage}."
  read -n 1 -p "Do you want to delete ${storage}/${vm}.img now? y/N " -r del
  if [[ "$del" =~ ^[Yy]$ ]]
  then
    rm -f "${storage}"/"${vm}".img
  fi
else
  echo "The migration does not appear to have completed cleanly. Cleaning up remote host and exiting."
  ssh "${dest_host}" "[[ -f ${storage}/${vm}.img ]] && sudo rm -f ${storage}/${vm}.img"
  exit 1
fi

exit 0


Invoke the script as follows:

script [vm_name] [destination_server]
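The script's size-parsing step (the part its own comments flag as ugly) could be tightened by asking virsh for the capacity in bytes and rounding up to whole GiB, so the destination image is never smaller than the source. This is a sketch under the assumption that your virsh vol-info supports the --bytes flag; the literal byte count below stands in for that command's output:

```shell
# In the script this would come from:
#   virsh vol-info --bytes --pool default "${vm}.img"
bytes=107374182400

# Round up to whole GiB (ceiling division by 2^30)
disk_size=$(( (bytes + 1073741823) / 1073741824 ))
echo "${disk_size}G"
```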


Good luck!

Sunday, July 10, 2016

Native screen lock and suspend on FreeBSD

For some time I've tinkered with FreeBSD. Mainly on my home file server. Sometimes on a Raspberry Pi and most recently on laptops. I had been running FreeBSD 11-current on my old Thinkpad X61 for the last few months while waiting for 11.0 to be released. The X61 being an older laptop, however, made compiling updates a tedious and boring process so I rarely used it. Well, the wait is over! FreeBSD 11.0 Beta-1 was released on July 8th, 2016. I've installed it on the X61 and we're back in business! This time around, however, I wanted to challenge myself to minimize my use of non-native packages and see how much I could get done with what FreeBSD provides out of the box.

Right off the bat I'm going to tell you that, unless you want to run FreeBSD without X, you have to install at least a couple of ports. Xorg, at a minimum. I opted to install Xorg and use Fluxbox as my window manager. Everything was wonderful. Outside of those two ports, I had a fully functional laptop. I didn't even have to resort to Google for answers to my questions thanks to FreeBSD's awesome handbook which is included right there in the base installation!

One problem remained. The problem that plagues every Linux and BSD user. Lid triggered suspend/resume. Without this feature, it's a lot harder to use your laptop (or at least more annoying to have to shut it down and start it up constantly). Thankfully, I knew from my past experiences that the X61 had perfect suspend/resume support using the native `hw.acpi.lid_switch_state` setting in sysctl.conf. Set that puppy to "S3" and you're off and running. There is just one problem. This does not lock your screen so anyone who opens your laptop while it's asleep can just jump right on and use the computer as your user. I needed to find a way to lock the screen before suspend kicked in.

In the past, I have installed xfce-power-manager and this has handled everything very neatly for me. This time, however, I was trying to minimize the number of ports installed on the system and see how far I could get with just the native tools. Enter devd.

For those coming from Linux, devd is FreeBSD's udev. You can configure actions based on hardware events. After reading the man pages, guess what!? It supports a "Lid" ACPI subsystem trigger with a 0 and a 1 state. A simple config file added to /etc/devd/ and we're in business. But what should we use to lock the screen? This comes down to personal preference. I've always liked and used xscreensaver. Others may prefer slock or xlock. Whichever you choose is up to you. Heck, there might even be one included in the base system. I just never explored that since I prefer xscreensaver. Install whichever you would like from ports and continue on to the next section.

Once I had xscreensaver installed, I created a file called "lidlock.conf" inside of /etc/devd/. This is a dump of this file:


notify 20 {
    match "system"    "ACPI";
    match "subsystem" "Lid";
    match "notify"    "0x00";
    action "/usr/bin/su [my_user] -c '/usr/local/bin/xscreensaver-command -display :0.0 -lock'";
    action "acpiconf -s3";
};

The only part you have to change is "[my_user]". Set that to your user and you'll be all set. This example also assumes you have a single-user system. If there are other users on your system and you don't always want xscreensaver running as your user, you might consider creating a special user just for this purpose. You could also leave off the 'su -c' entirely and run xscreensaver-command directly. I chose to run it as an unprivileged user, however, for security reasons.

Once you have this saved inside /etc/devd you can restart devd and it should work. Note that I had issues doing it this way, however, and had to actually restart my entire system before the config file was picked up. I'm not sure why. You will also want to make sure you have 'hw.acpi.lid_switch_state' inside of /etc/sysctl.conf set to "NONE" or it will trigger before devd can run this custom action and you'll spend hours trying to figure out why (trust me).
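For reference, the sysctl.conf line in question looks like this (NONE disables the kernel's built-in lid handling so devd's action runs instead):

```shell
# /etc/sysctl.conf
hw.acpi.lid_switch_state=NONE
```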

Anyway, that's it! Enjoy your FreeBSD laptop!

Sunday, May 15, 2016

Building RPMs with Docker-Cookery

Earlier this year I started a new chapter in my career. With this new position comes some new challenges and the need to learn some new tools. One of the tool sets we use is FPM, which we manage with the FPM-Cookery project. An aspect of our package management system that I've found particularly interesting (at least viewed through the lens of my past experience) is that package builds are all done locally by the package maintainer and uploaded to our Pulp server. Of course, this would not be terribly interesting in itself except for the fact that everyone within the ops team (with a few exceptions) uses a Macbook Pro, not a CentOS or Fedora based system. To facilitate local package builds on a non-rpm based system, we've employed docker containers with the aid of Docker Machine.

My exposure to Docker up to this point had been purely academic. I'd read through some tutorials and official documentation. I'd gone so far as to deploy Owncloud and Postgres together via docker and had it running successfully in a test environment for several months (until ultimately deciding to scrap it and return to a more traditional VM based approach in production). But I'd never really had any exposure to using Docker as part of a large production environment.

To help myself learn a little more about Docker and FPM-Cookery, I've taken it upon myself to build some shell wrappers which can easily automate the process of generating fpm-cookery recipes and building them using Docker. I'm calling the project "Docker-Cookery". While the tools are still a work in progress, I've made everything publicly available on Github. Please feel free to read through the documentation and submit feedback or pull requests.

Friday, November 13, 2015

Multitouch Gestures with Touchegg in Fedora 23

Just a quick entry for the log book.

Today I installed Fedora 23 on a spare Macbook Pro Retina I had in the office. Man, they have really polished this product. I had heard that Fedora 22 was good but I never tried it out. Fedora 23 just knocks it out of the park, though. Near perfect out of the box experience on the Macbook. The only hardware problem I noticed was that the wifi didn't work out of the box. Not a big surprise, though, since it's a Broadcom chip. Normally the fix is to install akmod-wl from the RPM Fusion repos; however, this package didn't work for me and I had to build it myself, adding an upstream patch. Thankfully, there's a kind soul out there who has created a script to automate this for you.

Now, on to the real reason for this post: Multitouch Gestures!

TL;DR: Touchegg is currently broken in Gnome 3.14 and up. This is because Gnome now has native gesture support (which I still can't figure out how to enable) that interferes with Touchegg's ability to read the gestures. The fix was actually really obvious once I realized this: make Touchegg start before Gnome! Follow along below to see how to do this:

First we need to download and install touchegg and all its dependencies. You'll need to use a copr repository. Once you've followed the instructions on that page, create a new file in /etc/X11/xinit/xinitrc.d called touchegg.sh that contains the following lines:
#!/bin/sh
# remove three button mapping from synaptics
synclient ClickFinger3=0
# Launch touchegg
touchegg &
Now just restart gdm with systemctl and everything should be working with the default config.

If you want to customize your config, copy /usr/share/touchegg/touchegg.conf to ~/.config/touchegg/touchegg.conf (you'll need to make that directory inside .config). You can then either edit the file by hand or use the touchegg-gce.
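The copy step can be scripted; a minimal sketch (the packaged config path is the one mentioned above, and the guard keeps this safe to run even before touchegg is installed):

```shell
# Copy the packaged default config to the per-user location,
# creating the directory first as the text describes
SRC=/usr/share/touchegg/touchegg.conf
DEST="$HOME/.config/touchegg"
mkdir -p "$DEST"
# Only copy if the packaged file is actually present
[ -f "$SRC" ] && cp "$SRC" "$DEST/"
echo "config dir ready: $DEST"
```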

One note before I go. While this does technically "work", it seems that your actions are limited to "SEND_KEYS" only. Something is still interfering with touchegg controlling windows with actions such as "RESIZE_WINDOW" and "MOVE_WINDOW". Thankfully, for my use case anyway, I just wanted to map some of the keyboard shortcuts that snap windows and switch workspaces to mouse gestures (similar to how they are in Mac OSX).

I hope you found this helpful and if you have any comments or corrections please don't hesitate to leave a comment.

Monday, September 21, 2015

FreeIPA with Kerberos for OSX 10.7+

Wow, it's been a long time since I last posted something! I finally completed a project I've been working on for some time now and wanted to share it so that others could benefit. This article is going to be pretty dense and technical. A lot of the documentation you'll see here came directly from http://linsec.ca/Using_FreeIPA_for_User_Authentication#Mac_OS_X_10.7.2F10.8. That page has some other information regarding FreeIPA so I would encourage you to check it out. This post expands a good bit on the original document to include, among other things, a more security-focused approach to connecting to the server, including the use of LDAPS and disallowing weak crypto in the Kerberos configuration.

Kerberos and Pam Configuration

Create a file in /Library/Preferences/edu.mit.Kerberos with the following contents:
[domain_realm]

   .example.com = EXAMPLE.COM

   example.com = EXAMPLE.COM

[libdefaults]
   default_realm = EXAMPLE.COM
   allow_weak_crypto = no
   dns_lookup_realm = true
   dns_lookup_kdc = true
   rdns = false
   ticket_lifetime = 24h
   forwardable = yes
   renewable = true

[realms]
   EXAMPLE.COM = {
       kdc = ipa.example.com:88
       master_kdc = ipa.example.com:88
       admin_server = ipa.example.com:749
       default_domain = example.com
       pkinit_anchors = FILE:/etc/ipa/ca.crt
   }
Download the ca.crt file, create a directory as shown below and place the ca.crt within that directory:
# sudo -i
# cd /etc/
# mkdir ipa
# cd ipa
# curl -OL https://ipa.example.com/ipa/config/ca.crt
Edit /etc/pam.d/authorization to match the following:
 # authorization: auth account
 auth       optional       pam_krb5.so use_first_pass use_kcminit default_principal
 auth       sufficient     pam_krb5.so use_first_pass default_principal
 auth       optional       pam_ntlm.so use_first_pass
 auth       required       pam_opendirectory.so use_first_pass nullok
 account    required       pam_opendirectory.so
Edit /etc/pam.d/screensaver to match the following:
 # screensaver: auth account
 auth       optional       pam_krb5.so use_first_pass use_kcminit default_principal
 auth       required       pam_opendirectory.so use_first_pass nullok
 account    required       pam_opendirectory.so
 account    sufficient     pam_self.so
 account    required       pam_group.so no_warn group=admin,wheel fail_safe
 account    required       pam_group.so no_warn deny group=admin,wheel ruser fail_safe
Edit /etc/pam.d/sudo to match the following:
 # sudo: auth account password session
 auth       sufficient     pam_krb5.so try_first_pass default_principal
 auth       required       pam_opendirectory.so use_first_pass
 account    required       pam_permit.so
 password   required       pam_deny.so
 session    required       pam_permit.so

IPA Enrollment

Because we cannot enroll the system into IPA the easy way, we need to visit the web UI and add a new host. In the IPA web UI, go to the Identity and then the Hosts page. Click the "Add" button, where you will need to add the fully qualified domain name of the host (e.g. mac.example.com), and then click the "Add and Edit" button. You don't need to add much here, other than the MAC address of the system, and the SSH public keys, which can be found in /etc/ssh_host_dsa_key.pub and /etc/ssh_host_rsa_key.pub. The Ethernet MAC address can be found via either ifconfig or System Preferences.
Generate ssh_host_rsa_key

If you don't have /etc/ssh_host_rsa_key.pub or /etc/ssh_host_dsa_key.pub then follow the steps below to create one:
# cd /etc
# ssh-keygen -t rsa -b 4096 -f /etc/ssh_host_rsa_key

Generate keytab File

This, unfortunately, does not generate a keytab file for the host, so on the server, using the ipa-getkeytab program, we will create and obtain the keytab for our new host:

    # ipa-getkeytab -s ipa.example.com -p host/your_hostname.example.com -k ~/mac.keytab

Now that the keytab is generated, scp it from the server to the new workstation and place it in /etc/krb5.keytab. Make sure the file is owned by the user root and group wheel (root:wheel) and is mode 0600.
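The ownership and mode requirement is easy to verify. The sketch below demonstrates on a scratch file, since touching /etc/krb5.keytab needs root (note that `stat -c` is the GNU coreutils form; on OS X the equivalent is `stat -f %Lp`):

```shell
# Demonstrate the required 0600 mode on a temporary file; in practice
# the target is /etc/krb5.keytab owned by root:wheel
f=$(mktemp)
chmod 0600 "$f"
mode=$(stat -c %a "$f")   # GNU stat; on OS X: stat -f %Lp "$f"
echo "mode: $mode"
rm -f "$f"
```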
OpenSSL and OpenLDAP Configuration

Import and symlink the CA certificate to /System/Library/OpenSSL/certs/:
# sudo -i
# cd /System/Library/OpenSSL/certs
# curl -OL https://ipa.example.com/ipa/config/ca.crt
# ln -s /System/Library/OpenSSL/certs/ca.crt $(openssl x509 -noout -hash -in /System/Library/OpenSSL/certs/ca.crt).0
Now we need to import the ca.crt file to Keychain Access and trust it.

First, either copy the ca.crt file from /System/Library/OpenSSL/certs to a more convenient location (e.g. the Desktop) or re-download it from https://ipa.example.com/ipa/config/ca.crt using your web browser. Next, open up Keychain Access. Select "System" under the "Keychains" menu on the left of the screen. Next, select "Certificates" beneath the "Category" field. Click on the padlock to unlock the keychain. Click on the "+" symbol at the bottom of the window, navigate to where you saved the ca.crt file, highlight it and click "Open". You'll now see a new entry in the list with a red dot. This red dot indicates that the certificate is untrusted. Trust the CA by double-clicking the entry, clicking the arrow next to "Trust" and, in the drop-down next to "When using this certificate:", selecting "Always Trust". Close the Keychain Access window to save your changes.

Finally, we need to configure the ldap.conf located in /etc/openldap/ldap.conf. Edit the file to look like the following example:
#
# LDAP Defaults
#
# See ldap.conf(5) for details
# This file should be world readable but not world writable.
#BASE    dc=example,dc=com
#URI    ldap://ldap.example.com ldap://ldap-master.example.com:666
#SIZELIMIT    12
#TIMELIMIT    15

DEREF        never
REFERRALS    off

TLS_REQCERT    demand
TLS_CACERTDIR    /System/Library/OpenSSL/certs
Once all the above changes have been made you need to reboot to reload all of the services that read these files. If you do not, the following steps will fail.
Directory Utility Setup

To begin, launch Directory Utility. On the Services pane will be three service names: Active Directory, LDAPv3, and NIS. After authenticating (click the padlock), click the "LDAPv3" line to highlight it, then click the little pencil icon to edit.

Click the "New..." button and enter the IPA server name (ipa.example.com) in the "Server Name or IP Address" field. Make sure that "Encrypt using SSL" is checked, and "Use for contacts" is unchecked (you could, optionally, use the LDAP directory for contact information but the point of this particular exercise is for authentication, so at this point turn it off).

You should be back at the option list. Enter "IPA LDAP" for the configuration name, and select "Custom" for the LDAP Mappings. Make sure the SSL checkbox is checked. Now highlight the new entry and click the "Edit..." button

Under the "Connection" tab, change a few of the defaults:
  • Open/close times out in 10 seconds
  • Query times out in 10 seconds
  • Connection idles out in 1 minute
Unfortunately, at least in OS X 10.8, you cannot change the re-bind attempts timeout from the default of 120 seconds; you can change it in OS X 10.7, so if using that version set it to 10s as well. Also ensure that SSL encryption and the custom port are checked and that the custom port is 636.

User Mapping

Under the "Search & Mappings" tab, you will need to add a few record types and attributes. In the first pane, click the "Add..." button and add the Record Type of "Users". This should show up in the first pane, so in the second pane click the "Add..." button and you'll have an entry field. Type in inetOrgPerson here and press enter or click outside of the edit box.

Now you should be able to define the Search base near the bottom of the window; set it to dc=example,dc=com and make sure the "all subtrees" radio button is selected.

Click on "Users" in the first pane and then click the "Add..." button. We will be adding a number of Attribute Types and setting the associated map, similar to how we mapped "Users" to "inetOrgPerson". The Attribute Types and their respective values are noted below:
AuthenticationAuthority: uid
HomeDirectory: #/Users/$uid$
NFSHomeDirectory: #/Users/$uid$ (NOTE: odd as it sounds, this seems to be required, even if you're not using NFS)
PrimaryGroupID: gidNumber
RealName: cn
RecordName: uid
UniqueID: uidNumber
UserShell: loginShell
For the above, you have a choice for the HomeDirectory and NFSHomeDirectory options. If you use homeDirectory for both, it will map to /home/[user], which is fine for automounted home directories (/home/ is an automount on OS X). However, if you want a local directory on the machine for the user (not an automounted/shared home directory), use #/Users/$uid$ instead.

Once this is done, click the "OK" button to save and return to the server list. Click "OK" again and head to the Search Policy pane. In the "Authentication" page, you should see "/LDAPv3/ipa.example.com" beneath "/LocalDefault". If you do not, click the "+" button and add your new LDAP server definition (e.g. "/LDAPv3/ipa.example.com"). It will show up after the "/Local/Default" domain. Make sure that the "Search" field is set to "Custom path".

Now move to the Directory Editor pane. If everything is set up correctly, you should see a list of users pulled from the IPA server's LDAP directory on the right (if you don't, you likely missed something during the SSL configuration; revisit the previous section on OpenSSL and OpenLDAP configuration). If you click one of the user names, you should see a pane full of name and value pairs, which is what OS X is mapping locally from the directory server. The items in grey are the static bits that OS X generates, and the names starting with "dsAttrTypeNative:" are the un-mapped bits from the LDAP directory. You should see quite a few of them, including the kerberos principal name, password policy references, the "dsAttrTypeNative:ipaUniqueID", and so on. More importantly, you should see at the top various bits that are being mapped properly.

To see the results of your changes without rebooting, go to the Terminal and use dscacheutil to empty the cache which will allow it to pick up the changes:
$ dscacheutil -flushcache
Next, use dscacheutil to do a lookup to make sure that the user is actually found:

 $ dscacheutil -q user -a name jsmith
 name: jsmith
 password: ********
 uid: 1000
 gid: 100
 dir: /Users/jsmith
 shell: /bin/bash
 gecos: Smith, John

Group Mapping

Now that the user information is present, the last step is to setup the groups (from the above, you can see that the group names are missing). Once again, in Directory Utility you want to go to the "Search & Mappings" pane and this time add a "Groups" record type, which should map to "posixgroup". There are only a few attributes to add under the Groups record type:
PrimaryGroupID: gidNumber
RecordName: cn
As with the Users record type, you will need to set the search base for groups. Click the "Groups" record type and use cn=groups,cn=accounts,dc=example,dc=com for the search base.

Click "OK" to save. You can now go to the "Directory Editor" and select "Groups" from the "Viewing" pulldown menu and you should see all of the groups from your directory, just as you did for the users. You should also see a lot of information in the Name/Value screen showing that the groups were properly found.

Once again, head back to the Terminal, flush the cache, and do a lookup:
 $ dscacheutil -flushcache
 $ dscacheutil -q group -a name jsmith
 name: jsmith
 password: *
 gid: 1000
 $ id jsmith
 uid=1000(jsmith) gid=100(_lpoperator) groups=100(_lpoperator)
You'll notice a lot of weird looking groups in the previous `id` command. This is because Apple has chosen to use a non-standard numbering scheme for its local groups file. Sometime in the future we might be able to find a way to clean this up, but for now, your user account will look something like the above if you are a member of any LDAP groups whose gidNumber overlaps one of the local groups.
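You can see where such an overlap comes from by comparing a group's LDAP gidNumber against the local group ids. A sketch of the comparison, parsing a sample local group entry (on OS X you would feed this from something like `dscl . -list /Groups PrimaryGroupID` rather than a literal string; the `_lpoperator` line mirrors the output above):

```shell
# Extract the numeric gid from a local group entry (colon-delimited,
# gid in field 3, like the _lpoperator group shown in the id output)
group_line="_lpoperator:*:100:"
local_gid=$(echo "$group_line" | awk -F: '{print $3}')

# Example gidNumber from the LDAP directory
ldap_gid=100
[ "$local_gid" = "$ldap_gid" ] && echo "collision on gid $local_gid"
```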

Creating Home Directories

If you elected to have the home directory on the local system (using /Users/[user]), you have one further step to take. OS X does not auto-create home directories for LDAP-based users, so you will need to create them yourself. All you need to do is create the directory; upon first login, the rest will be populated:
$ sudo -i
# mkdir /Users/jsmith
# chown jsmith:100 /Users/jsmith

 System Preferences: Login

Finally, make a trip to System Preferences, in particular the Users & Groups settings. Click the "Login Options". Here you will want to ensure that the following are set:
  • Display login window as: Name and password (otherwise network users cannot login)
  • Allow network users to log in at login window (checked; you can restrict to certain users by clicking "Options...")
  • Network Account Server is set and has a green light (should display the IPA server's hostname)

Enable Mobile Accounts for LDAP Users

In order to be able to log in to your system while disconnected from the local network, you will need to turn on "Mobile Accounts" for your user (or the user using the system if you're setting this up for someone else). This is most easily done from the command line using the following command:
$ sudo /System/Library/CoreServices/ManagedClient.app/Contents/Resources/createmobileaccount -n jsmith
If you are setting this up for yourself, or the user you are setting it up for is present, it may also be beneficial to provide the password to the previous command with the -P flag. This will prompt you to enter the user's password. If this fails, mobile accounts will pick up the password the next time the user logs in while connected to the network.

To add the password run:
sudo /System/Library/CoreServices/ManagedClient.app/Contents/Resources/createmobileaccount -n jsmith -P
Sometimes the above does not work using the user's actual username. For some yet-to-be-determined reason, you may need to use the user's real name in place of the username, in the format "Lastname, Firstname" (including the double quotes). More work needs to be done to determine why this is and whether it's actually required. If the above commands do not work as advertised, try running them with "Lastname, Firstname" in place of the username.

Thursday, May 9, 2013

Move a Window Between Monitors with the Keyboard

This one will be really quick. I wanted to share something I found today in the hopes that someone else finds it useful. I was looking for a way to use my keyboard to move windows between monitors in Gnome3. It already has built-in shortcuts for moving between workspaces with ctrl+alt+shift+{UP,DOWN} but I wanted to move my window between left and right physical monitors. To do this, you'll need to install xdotool and write a very short script. My steps below will work on Red Hat/CentOS/Fedora. The only difference for Debian-based systems should be the command to install the package and possibly the location of xdotool after installation (to find out where it is, run: which xdotool). Download xdotool with your package manager:

sudo yum install xdotool
Create a file called move-window-to-display.sh. I prefer to place my scripts in ~/bin but any directory will work.
#!/bin/bash

if [ "$1" = "2" ]; then
    POS="1680 0"
else
    POS="0 0"
fi

/usr/bin/xdotool windowmove "$(/usr/bin/xdotool getwindowfocus)" $POS
exit 0
Next we need to make this new script executable:

sudo chmod +x ~/bin/move-window-to-display.sh
The script above is written assuming your monitors are 1680 pixels wide. If your monitor is wider or narrower change the value of $POS from "1680 0" to "1024 0" or "1440 0", etc. The settings I've placed here also put the window in the top left corner of the monitor upon being moved. If you wish to place it elsewhere you can treat these numbers as X and Y pixel coordinates and put whatever value in them you would like. The first definition of $POS is for the right monitor and the $POS in the "else" portion is for the left monitor.
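If you'd rather not hard-code the monitor width, it can be pulled from xrandr. The sketch below parses a sample line of `xrandr --query` output so the extraction is visible (in practice you would pipe the real output and match the line for your left monitor; the output name and geometry here are examples):

```shell
# A typical xrandr line for a connected primary output; field 4 holds
# the geometry as WIDTHxHEIGHT+X+Y
line="eDP-1 connected primary 1680x1050+0+0 (normal left inverted right) 331mm x 207mm"

# Split the geometry on "x" and keep the width in pixels
width=$(echo "$line" | awk '{split($4, a, "x"); print a[1]}')
echo "$width"
```

The width could then be substituted into $POS in the script above instead of the literal 1680.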
Finally we need to assign it to a keyboard shortcut. I chose Alt+Shift+{LEFT,RIGHT} but you can choose whatever you like as long as it doesn't conflict with another shortcut (I found that combinations with the Super key did not work, for some reason). You can set these by going to the Gnome "Keyboard" tool. Click "Shortcuts" at the top and go to "Custom Shortcuts" at the bottom of the left-hand list. From there, click the [+] button, give the shortcut a name, and in the "Command" field give the full path of the above script with a "1" or "2" appended depending on which monitor you would like to move to.

Thursday, March 28, 2013

No more Google Reader? Try Tiny Tiny RSS!

It's old news now but, in case you've been living under a rock, I should let you know that Google Reader is shutting down. As a result, many users, myself included, have been looking for a solid replacement. I'm happy to report that I think I've found my new home in Tiny-Tiny RSS. After trying out many other alternatives, among them The Old Reader and Feedly I have found that this simple, self-hosted solution really does everything I want. What's more? Did I mention it's Self Hosted!?

At this point you're probably asking yourself, "But Ted, why would I want to go through all the effort of setting up and maintaining my own server just for an RSS reader?". A local RSS reader has its limitations. Sure, you could carry your reader's config around on a flash drive or use a cloud storage solution like Owncloud to move it between machines, but then you have to find a reader that works on all of your platforms. A web reader is generally a much more flexible route since it usually works on every platform, and Tiny-Tiny RSS even has a native Android app. So why not just use a hosted solution like Feedly, TheOldReader or GoogleRea... oh, right, because those are run by companies who have the right to shut down a product whether you like it or not. By hosting it yourself, you only have to answer to what you want. It doesn't have to shut down just because the rest of the market would prefer to hear what their neighbor had for breakfast on the [insert latest social media/micro-blogging site]. You control the site, you control the content, you control its fate.

I found Tiny Tiny RSS to be extremely easy to set up. Now, don't say, "But you're a sysadmin, that's not saying very much". I pride myself on trying hard to analyse things from the average user's (or at least the average Linux user's) perspective. If you know what the terminal is, can edit a plain-English configuration file and have a spare computer or shared web host just laying around (or even if you don't), Tiny Tiny RSS takes only a few minutes to set up.

The Install

The installation of Tiny Tiny RSS requires a web host running PHP and either MySQL or PostgreSQL. I've outlined the basic installation procedure below:
  1. Download the .tar.gz file from here (as always, make sure you get the latest version).

  2. Extract this to your web directory. On most servers this will be /var/www. Make sure your Apache user owns this directory; on Ubuntu you can do this with: chown -R www-data:www-data /var/www/tt-rss
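If you want to see step 2 end to end, here's a small sketch that rehearses the extract-and-rename flow in a temporary directory instead of /var/www. The tarball layout and version number are made up for illustration, and the real chown needs root so it's left as a comment:

```shell
#!/bin/sh
set -e
WEBROOT=$(mktemp -d)    # stand-in for /var/www
SRC=$(mktemp -d)

# Fake up an upstream tarball with the usual versioned top-level directory
# (the "1.7" version number here is invented).
mkdir "$SRC/tt-rss-1.7"
echo '<?php /* stub */' > "$SRC/tt-rss-1.7/index.php"
tar -czf "$SRC/ttrss.tar.gz" -C "$SRC" tt-rss-1.7

# The real steps: extract into the web root and normalize the directory name.
tar -xzf "$SRC/ttrss.tar.gz" -C "$WEBROOT"
mv "$WEBROOT"/tt-rss-* "$WEBROOT/tt-rss"
# On a real server, also: chown -R www-data:www-data "$WEBROOT/tt-rss"
ls "$WEBROOT/tt-rss"    # -> index.php
```

Normalizing the directory name to plain tt-rss means your Apache DocumentRoot doesn't have to change when you upgrade to a new version.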

  3. Once you've downloaded the files and extracted them to your web directory, you need to tell apache where to look. Since you're going to be logging in to this server, you'll probably want to use HTTPS. It's not necessary but I strongly encourage it. I've copied my server configuration below as an example of how to do a port 80 (http) to port 443 (https) redirect to make your life easier when navigating to your feeds, as well as the main apache configuration I used. If you're lucky enough to have a server running Ubuntu or Debian (or another Debian derivative that uses the /etc/apache2/sites-available directory structure) you can just copy and paste these two files to /etc/apache2/sites-available and proceed.

    /etc/apache2/sites-available/yoursite-redirect
    
    <VirtualHost feeds.yoursite.com:80>
        RewriteEngine on
        RewriteCond %{SERVER_PORT} !^443$
        RewriteRule ^/(.*)$ https://%{SERVER_NAME}/$1 [NC,R,L]
    </VirtualHost>
    

    /etc/apache2/sites-available/yoursite
    
    <VirtualHost feeds.yoursite.com:443>
        ServerName feeds.yoursite.com
        ServerAdmin admin@feeds.yoursite.com
    
        DocumentRoot /var/www/tt-rss
    
        SSLEngine on
        SSLCertificateFile      /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile   /etc/ssl/private/ssl-cert-snakeoil.key
    
        <Directory /var/www/tt-rss>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>
    
        ErrorLog ${APACHE_LOG_DIR}/feeds_yoursite_error.log
    
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel info
    
        CustomLog ${APACHE_LOG_DIR}/feeds_yoursite_access.log combined
    
        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
            Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>
    </VirtualHost>
    

  4. Disable the default apache configuration and enable your new ones:
    sudo a2dissite 000-default && sudo a2dissite default-ssl
    sudo a2ensite yoursite && sudo a2ensite yoursite-redirect

  5. Since we're using apache's ssl module along with the apache rewrite module to redirect you to https://, we need to enable two modules:

    sudo a2enmod ssl && sudo a2enmod rewrite

    enable PHP as well if it isn't already (on the Ubuntu releases current at the time of writing, the module is named php5):

    sudo a2enmod php5

  6. Restart apache:

    sudo service apache2 restart
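If you'd like a little extra safety, you can have Apache check the configuration syntax before bouncing the service; the apache2ctl utility ships with Apache on Debian/Ubuntu:

```shell
# Verify syntax first; restart only if the check passes.
sudo apache2ctl configtest && sudo service apache2 restart
```

That way a typo in one of the files above takes down the config check, not your running site.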

  7. Set up a cron job for the www-data user to refresh feeds periodically. To do this, run the following command and add the line below, making changes where appropriate for your site. The example given will refresh all feeds, for all users, every 30 minutes.

    sudo crontab -u www-data -e

    */30 * * * * cd /var/www/tt-rss && /usr/bin/php /var/www/tt-rss/update.php --feeds >/dev/null 2>&1
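One caveat with a frequent cron job: if an update run ever takes longer than 30 minutes, a second copy starts on top of it. A common fix (not something tt-rss does for you) is to wrap the command in flock(1) so overlapping runs bail out. The snippet below demonstrates the idea with a sleep standing in for update.php:

```shell
#!/bin/sh
# Demonstrate flock's non-blocking mode: a second run that can't get the
# lock exits immediately instead of piling up behind the first.
LOCK=$(mktemp)                 # in a real crontab you'd use a fixed path
flock -n "$LOCK" sleep 2 &     # first "update" grabs the lock and holds it
sleep 1                        # give the background job time to start
if flock -n "$LOCK" true; then
    STATUS="lock free"
else
    STATUS="update already running"
fi
echo "$STATUS"                 # -> update already running
wait
```

In the crontab line above, that would look something like flock -n /tmp/ttrss-update.lock /usr/bin/php /var/www/tt-rss/update.php --feeds (the lock file path is arbitrary).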

That's pretty much it for the install. In addition to the web interface, you can also download the Android app from the marketplace. There's a free demo and it's only a few dollars to buy outright. I have used the app for a few days now and have come to like it almost as much as, if not more than, I loved the Google Reader app.