Monday, September 21, 2015

FreeIPA with Kerberos for OSX 10.7+

Wow, it's been a long time since I last posted something! I finally completed a project I've been working on for some time now and wanted to share it so that others could benefit. This article is going to be pretty dense and technical. A lot of the documentation you'll see here came directly from http://linsec.ca/Using_FreeIPA_for_User_Authentication#Mac_OS_X_10.7.2F10.8. That page has some other information regarding FreeIPA, so I would encourage you to check it out. This post expands a good bit on the original document to include, among other things, a more security-focused approach to connecting to the server, including the use of LDAPS and disallowing weak crypto in the Kerberos configuration.

Kerberos and PAM Configuration

Create the file /Library/Preferences/edu.mit.Kerberos with the following contents:
[domain_realm]
   .example.com = EXAMPLE.COM
   example.com = EXAMPLE.COM

[libdefaults]
   default_realm = EXAMPLE.COM
   allow_weak_crypto = no
   dns_lookup_realm = true
   dns_lookup_kdc = true
   rdns = false
   ticket_lifetime = 24h
   forwardable = yes
   renewable = true

[realms]
   EXAMPLE.COM = {
       kdc = ipa.example.com:88
       master_kdc = ipa.example.com:88
       admin_server = ipa.example.com:749
       default_domain = example.com
       pkinit_anchors = FILE:/etc/ipa/ca.crt
   }
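Once saved, you can sanity-check the Kerberos piece from Terminal. The user jsmith below is a placeholder for an existing IPA user; a ticket showing up in klist means the KDC is reachable and the realm mapping works:
 $ kinit jsmith@EXAMPLE.COM
 $ klist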
Download the ca.crt file, create a directory as shown below and place the ca.crt within that directory:
# sudo -i
# cd /etc/
# mkdir ipa
# cd ipa
# curl -OL https://ipa.example.com/ipa/config/ca.crt
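As a quick sanity check, confirm that what you downloaded is actually a PEM certificate (the subject and issuer will reflect your own realm):
 # openssl x509 -noout -subject -issuer -in /etc/ipa/ca.crt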
Edit /etc/pam.d/authorization to match the following:
 # authorization: auth account
 auth       optional       pam_krb5.so use_first_pass use_kcminit default_principal
 auth       sufficient     pam_krb5.so use_first_pass default_principal
 auth       optional       pam_ntlm.so use_first_pass
 auth       required       pam_opendirectory.so use_first_pass nullok
 account    required       pam_opendirectory.so
Edit /etc/pam.d/screensaver to match the following:
 # screensaver: auth account
 auth       optional       pam_krb5.so use_first_pass use_kcminit default_principal
 auth       required       pam_opendirectory.so use_first_pass nullok
 account    required       pam_opendirectory.so
 account    sufficient     pam_self.so
 account    required       pam_group.so no_warn group=admin,wheel fail_safe
 account    required       pam_group.so no_warn deny group=admin,wheel ruser fail_safe
Edit /etc/pam.d/sudo to match the following:
 # sudo: auth account password session
 auth       sufficient     pam_krb5.so try_first_pass default_principal
 auth       required       pam_opendirectory.so use_first_pass
 account    required       pam_permit.so
 password   required       pam_deny.so
 session    required       pam_permit.so

IPA Enrollment

Because we cannot enroll the system into IPA the easy way, we need to visit the web UI and add a new host. In the IPA web UI, go to the Identity and then the Hosts page. Click the "Add" button, where you will need to add the fully qualified domain name of the host (e.g. mac.example.com), and then click the "Add and Edit" button. You don't need to add much here, other than the MAC address of the system, and the SSH public keys, which can be found in /etc/ssh_host_dsa_key.pub and /etc/ssh_host_rsa_key.pub. The Ethernet MAC address can be found via either ifconfig or System Preferences.
Generate ssh_host_rsa_key

If you don't have /etc/ssh_host_rsa_key.pub or /etc/ssh_host_dsa_key.pub then follow the steps below to create one:
# cd /etc
# ssh-keygen -t rsa -b 4096 -f /etc/ssh_host_rsa_key
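If the DSA key is missing as well, the same tool generates it (DSA host keys are limited to 1024 bits):
# ssh-keygen -t dsa -f /etc/ssh_host_dsa_key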

Generate keytab File

This, unfortunately, does not generate a keytab file for the host, so on the server, using the ipa-getkeytab program, we will generate and obtain the keytab for our new host:

    # ipa-getkeytab -s ipa.example.com -p host/your_hostname.example.com -k ~/mac.keytab

Now that the keytab is generated, scp it from the server to the new workstation and place it at /etc/krb5.keytab. Make sure the file is owned by the user root and group wheel (root:wheel) and is mode 0600.
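For example, run from the workstation (the account and path on the server side are illustrative):
    # scp admin@ipa.example.com:mac.keytab /etc/krb5.keytab
    # chown root:wheel /etc/krb5.keytab
    # chmod 0600 /etc/krb5.keytab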
OpenSSL and OpenLDAP Configuration

Import and symlink the CA certificate to /System/Library/OpenSSL/certs/:
# sudo -i
# cd /System/Library/OpenSSL/certs
# curl -OL https://ipa.example.com/ipa/config/ca.crt
# ln -s /System/Library/OpenSSL/certs/ca.crt $(openssl x509 -noout -hash -in /System/Library/OpenSSL/certs/ca.crt).0
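You can confirm that OpenSSL now finds the CA via the hashed symlink (a self-signed root should verify against itself):
# openssl verify -CApath /System/Library/OpenSSL/certs /System/Library/OpenSSL/certs/ca.crt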
Now we need to import the ca.crt file to Keychain Access and trust it.

First, either copy the ca.crt file from /System/Library/OpenSSL/certs to a more convenient location (e.g. Desktop) or re-download it from https://ipa.example.com/ipa/config/ca.crt using your web browser. Next, open up Keychain Access. Select "System" under the "Keychains" menu on the left of the screen. Next, select "Certificates" beneath the "Category" field. Click on the padlock to unlock the keychain. Click on the "+" symbol at the bottom of the window, navigate to where you saved the ca.crt file, highlight it and click "Open". You'll now see a new entry in the list with a red dot. The red dot indicates that the certificate is untrusted. We need to trust the CA by double-clicking on the entry, clicking the arrow next to "Trust", and in the drop-down next to "When using this certificate:" selecting "Always Trust". Close the certificate window and the Keychain Access window to save your changes.

Finally, we need to configure the ldap.conf located in /etc/openldap/ldap.conf. Edit the file to look like the following example:
#
# LDAP Defaults
#
# See ldap.conf(5) for details
# This file should be world readable but not world writable.
#BASE    dc=example,dc=com
#URI    ldap://ldap.example.com ldap://ldap-master.example.com:666
#SIZELIMIT    12
#TIMELIMIT    15

DEREF        never
REFERRALS    off

TLS_REQCERT    demand
TLS_CACERTDIR    /System/Library/OpenSSL/certs
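With ldap.conf in place, you can test the LDAPS connection directly (ldapsearch ships with OS X; substitute your own base DN and an existing uid):
$ ldapsearch -H ldaps://ipa.example.com -x -b dc=example,dc=com uid=jsmith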
Once all of the above changes have been made, you need to reboot to reload all of the services that read these files. If you do not, the following steps will fail.
Directory Utility Setup

To begin, launch Directory Utility. On the Services pane will be three service names: Active Directory, LDAPv3, and NIS. After authenticating (click the padlock), click the "LDAPv3" line to highlight it, then click the little pencil icon to edit.

Click the "New..." button and enter the IPA server name (ipa.example.com) in the "Server Name or IP Address" field. Make sure that "Encrypt using SSL" is checked, and "Use for contacts" is unchecked (you could, optionally, use the LDAP directory for contact information but the point of this particular exercise is for authentication, so at this point turn it off).

You should be back at the option list. Enter "IPA LDAP" for the configuration name, and select "Custom" for the LDAP Mappings. Make sure the SSL checkbox is checked. Now highlight the new entry and click the "Edit..." button.

Under the "Connection" tab, change a few of the defaults:
  • Open/close times out in 10 seconds
  • Query times out in 10 seconds
  • Connection idles out in 1 minute
Unfortunately, at least in OS X 10.8, you cannot change the re-bind attempts timeout from the default of 120 seconds; you can change it in OS X 10.7, so if using that version set it to 10s as well. Also ensure that SSL encryption and the custom port are checked and that the custom port is 636.

User Mapping

Under the "Search & Mappings" tab, you will need to add a few record types and attributes. In the first pane, click the "Add..." button and add the Record Type of "Users". This should show up in the first pane, so in the second pane click the "Add..." button and you'll have an entry field. Type in inetOrgPerson here and press enter or click outside of the edit box.

Now you should be able to define the Search base near the bottom of the window; set it to dc=example,dc=com and make sure the "all subtrees" radio button is selected.

Click on "Users" in the first pane and then click the "Add..." button. We will be adding a number of Attribute Types and setting the associated map, similar to how we mapped "Users" to "inetOrgPerson". The Attribute Types and their respective values are noted below:
AuthenticationAuthority: uid
HomeDirectory: #/Users/$uid$
NFSHomeDirectory: #/Users/$uid$ (NOTE: odd as it sounds, this seems to be required, even if you're not using NFS)
PrimaryGroupID: gidNumber
RealName: cn
RecordName: uid
UniqueID: uidNumber
UserShell: loginShell
For the above, you have a choice for the HomeDirectory and NFSHomeDirectory options. If you use homeDirectory for both, it will map to /home/[user], which is fine for automounted home directories (/home/ is an automount on OS X). However, if you want a local directory on the machine for the user (not an automounted/shared home directory), use #/Users/$uid$ instead.

Once this is done, click the "OK" button to save and return to the server list. Click "OK" again and head to the Search Policy pane. In the "Authentication" page, you should see "/LDAPv3/ipa.example.com" beneath "/Local/Default". If you do not, click the "+" button and add your new LDAP server definition (e.g. "/LDAPv3/ipa.example.com"). It will show up after the "/Local/Default" domain. Make sure that the "Search" field is set to "Custom path".

Now move to the Directory Editor pane. If everything is set up correctly, you should see a list of users pulled from the IPA server's LDAP directory on the right (if you don't, you likely missed something during the SSL configuration; revisit the previous section on OpenSSL and OpenLDAP configuration). If you click one of the user names, you should see a pane full of name and value pairs, which is what OS X is mapping locally from the directory server. The items in grey are the static bits that OS X generates, and the names starting with "dsAttrTypeNative:" are the un-mapped bits from the LDAP directory. You should see quite a few of them, including the Kerberos principal name, password policy references, "dsAttrTypeNative:ipaUniqueID", and so on. More importantly, you should see at the top the various bits that are being mapped properly.

To see the results of your changes without rebooting, go to the Terminal and use dscacheutil to empty the cache which will allow it to pick up the changes:
$ dscacheutil -flushcache
Next, use dscacheutil to do a lookup to make sure that the user is actually found:

 $ dscacheutil -q user -a name jsmith
 name: jsmith
 password: ********
 uid: 1000
 gid: 100
 dir: /Users/jsmith
 shell: /bin/bash
 gecos: Smith, John

Group Mapping

Now that the user information is present, the last step is to set up the groups (from the above, you can see that the group names are missing). Once again, in Directory Utility you want to go to the "Search & Mappings" pane, and this time add a "Groups" record type, which should map to "posixgroup". There are only a few attributes to add under the Groups record type:
PrimaryGroupID: gidNumber
RecordName: cn
As with the Users record type, you will need to set the search base for groups. Click the "Groups" record type and use cn=groups,cn=accounts,dc=example,dc=com for the search base.

Click "OK" to save. You can now go to the "Directory Editor" and select "Groups" from the "Viewing" pulldown menu and you should see all of the groups from your directory, just as you did for the users. Should also see a lot of information in the Name/Value screen showing that the groups were properly found.

Once again, head back to the Terminal, flush the cache, and do a lookup:
 $ dscacheutil -flushcache
 $ dscacheutil -q group -a name jsmith
 name: jsmith
 password: *
 gid: 1000
 $ id jsmith
 uid=1000(jsmith) gid=100(_lpoperator) groups=100(_lpoperator)
You'll notice a lot of weird looking groups in the previous `id` command. This is because Apple has chosen to use a non-standard numbering scheme for its /etc/group file. Sometime in the future we might be able to find a way to clean this up, but for now your user account will just look something like the above if you are a member of any LDAP groups that have a gidNumber that overlaps one of the local groups.

Creating Home Directories

If you elected to have the home directory on the local system (using /Users/[user]), you have one further step to make. OS X does not auto-create home directories for LDAP-based users, so you will need to create them yourself. All you need to do is create the directory; upon first login, the rest will be populated:
$ sudo -i
# mkdir /Users/jsmith
# chown jsmith:100 /Users/jsmith

System Preferences: Login

Finally, make a trip to System Preferences, in particular the Users & Groups settings. Click "Login Options". Here you will want to ensure that the following are set:
  • Display login window as: Name and password (otherwise network users cannot log in)
  • Allow network users to log in at login window (checked; you can restrict this to certain users by clicking "Options...")
  • Network Account Server is set and has a green light (should display the IPA server's hostname)

Enable Mobile Accounts for LDAP Users

In order to be able to log in to your system while disconnected from the local network, you will need to turn on "Mobile Accounts" for your user (or the user using the system if you're setting this up for someone else). This is most easily done from the command line using the following command:
$ sudo /System/Library/CoreServices/ManagedClient.app/Contents/Resources/createmobileaccount -n jsmith
If you are setting this up for yourself, or the user you are setting it up for is present, it may also be beneficial to provide the password to the previous command with the -P flag, which will prompt you to enter the user's password. If this fails, mobile accounts will pick up the password the next time the user logs in while connected to the network.

To add the password, run:
sudo /System/Library/CoreServices/ManagedClient.app/Contents/Resources/createmobileaccount -n jsmith -P
Sometimes the above does not work using the user's actual username. For some yet-to-be-determined reason, you may need to use the user's real name in place of the username, in the format of "Lastname, Firstname" (including the double quotes). More work needs to be done to determine why this is and whether it's actually required. If the above commands do not work as advertised, try running them with "Lastname, Firstname" in place of the username.
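For example, using the hypothetical user John Smith:
$ sudo /System/Library/CoreServices/ManagedClient.app/Contents/Resources/createmobileaccount -n "Smith, John" -P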

Thursday, May 9, 2013

Move a Window Between Monitors with the Keyboard

This one will be really quick. I wanted to share something I found today in the hopes that someone else finds it useful. I was looking for a way to use my keyboard to move windows between monitors in Gnome3. It already has built-in shortcuts for moving between workspaces with ctrl+alt+shift+{UP,DOWN}, but I wanted to move my window between the left and right physical monitors. To do this, you'll need to install xdotool and write a very short script. My steps below will work on Red Hat/CentOS/Fedora. The only differences for Debian-based systems should be the command to install the package and possibly the location of xdotool after installation; to find out where it is located, run which xdotool. Download xdotool with your package manager:

sudo yum install xdotool
Create a file called move-window-to-display.sh. I prefer to place my scripts in ~/bin but any directory will work.
#!/bin/bash
# Move the currently focused window to monitor $1 (1 = left, 2 = right).

# 1680 is the width of the left monitor in pixels; adjust for your displays.
if [ "${1:-1}" -eq 2 ]; then
    POS="1680 0"   # top-left corner of the right monitor
else
    POS="0 0"      # top-left corner of the left monitor
fi

# $POS is left unquoted on purpose so it expands to two arguments (X and Y).
/usr/bin/xdotool windowmove "$(/usr/bin/xdotool getwindowfocus)" $POS
exit 0
Next we need to make this new script executable:

chmod +x ~/bin/move-window-to-display.sh
The script above is written assuming your monitors are 1680 pixels wide. If your monitor is wider or narrower, change the value of $POS from "1680 0" to "1024 0" or "1440 0", etc. The settings I've placed here also put the window in the top left corner of the monitor upon being moved. If you wish to place it elsewhere, you can treat these numbers as X and Y pixel coordinates and put whatever values you would like in them. The first definition of $POS is for the right monitor, and the $POS in the "else" portion is for the left monitor.
Finally, we need to assign it to a keyboard shortcut. I chose Alt+Shift+{LEFT,RIGHT}, but you can choose whatever you like as long as it doesn't conflict with another shortcut (I also found that combinations with the Super key did not work, for some reason). You can set these by going to the Gnome "Keyboard" tool. Click "Shortcuts" at the top and go to "Custom Shortcuts" at the bottom of the left-hand list. From there, click the [+] button, give the shortcut a name, and in the "Command" field give the full path of the above script with a "1" or "2" appended, depending on which monitor you would like to move to.
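For example, the "Command" field for the right-monitor shortcut would contain something like the following (assuming the script lives in ~/bin and your home directory is /home/you):

/home/you/bin/move-window-to-display.sh 2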

Thursday, March 28, 2013

No more Google Reader? Try Tiny Tiny RSS!

It's old news now but, in case you've been living under a rock, I should let you know that Google Reader is shutting down. As a result, many users, myself included, have been looking for a solid replacement. I'm happy to report that I think I've found my new home in Tiny Tiny RSS. After trying out many other alternatives, among them The Old Reader and Feedly, I have found that this simple, self-hosted solution really does everything I want. What's more? Did I mention it's self-hosted!?

At this point you're probably asking yourself, "But Ted, why would I want to go through all the effort of setting up and maintaining my own server just for an RSS reader?". A local RSS reader has its limitations. Sure, you could copy your reader's config around with you on a flash drive, or use a cloud storage solution like ownCloud to move it around your machines, but then you have to figure out what reader works on all of your platforms. A web reader is generally a much more flexible route, as it usually works on all platforms, and Tiny Tiny RSS even has a native Android app. So why not just use a hosted solution like Feedly, The Old Reader or Google Rea... oh, right, because these are run by companies who have the right to shut down a product whether you like it or not. By hosting it yourself, you only have to answer to what you want. It doesn't have to shut down just because the rest of the market would prefer to hear what their neighbor had for breakfast on [insert latest social media/micro-blogging site]. You control the site, you control the content, you control its fate.

I found Tiny Tiny RSS to be extremely easy to set up. Now, don't say, "But you're a sysadmin, that's not saying very much". I pride myself on trying hard to analyse things from the average user's (or at least the average Linux user's) perspective. If you know what the terminal is, can edit a plain-English configuration file, and have a spare computer or shared web host just lying around (or even if you don't), Tiny Tiny RSS takes only a few minutes to set up.

The Install

The installation of Tiny Tiny RSS requires that you have a web host running PHP and MySQL or PostgreSQL. I've outlined the basic installation procedure below:
  1. Download the .tar.gz file from here (as always, make sure you get the latest version).

  2. Extract this to your web directory. On most servers this will be /var/www. Make sure your apache user owns this directory; on Ubuntu you can do this with: chown -R www-data:www-data /var/www/tt-rss

  3. Once you've downloaded the files and extracted them to your web directory, you need to tell apache where to look. Since you're going to be logging in to this server, you'll probably want to use HTTPS. It's not necessary, but I strongly encourage it. I've copied my server configuration below as an example of how to do a port 80 (http) to port 443 (https) redirect to make your life easier when navigating to your feeds, as well as the main apache configuration I used. If you're lucky enough to have a server running Ubuntu or Debian (or another Debian derivative that uses the /etc/apache2/sites-available directory structure) you can just copy and paste these two files to /etc/apache2/sites-available and proceed.

    /etc/apache2/sites-available/yoursite-redirect
    
    <VirtualHost feeds.yoursite.com:80>
        RewriteEngine on
        RewriteCond %{SERVER_PORT} !^443$
        RewriteRule ^/(.*)$ https://%{SERVER_NAME}/$1 [NC,R,L]
    </VirtualHost>
    

    /etc/apache2/sites-available/yoursite
    
    <VirtualHost feeds.yoursite.com:443>
        ServerName feeds.yoursite.com
        ServerAdmin admin@feeds.yoursite.com
    
        DocumentRoot /var/www/tt-rss
    
        SSLEngine on
        SSLCertificateFile      /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile   /etc/ssl/private/ssl-cert-snakeoil.key
    
        <Directory /var/www/tt-rss>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>
    
        ErrorLog ${APACHE_LOG_DIR}/feeds_yoursite_error.log
    
        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel info
    
        CustomLog ${APACHE_LOG_DIR}/feeds_yoursite_access.log combined
    
        Alias /doc/ "/usr/share/doc/"
        <Directory "/usr/share/doc/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
            Allow from 127.0.0.0/255.0.0.0 ::1/128
        </Directory>
    </VirtualHost>
    

  4. Disable the default apache configurations and enable your new ones:
    sudo a2dissite 000-default && sudo a2dissite default-ssl
    sudo a2ensite yoursite && sudo a2ensite yoursite-redirect

  5. Since we're using apache's ssl module along with the apache rewrite module to redirect you to https://, we need to enable two modules:

    sudo a2enmod ssl && sudo a2enmod rewrite

    Enable PHP as well, if it is not already enabled (on Ubuntu/Debian of this era, mod_php is provided by the libapache2-mod-php5 package and the module is named php5):

    sudo a2enmod php5

  6. Restart apache:

    sudo service apache2 restart

  7. Set a cron job for www-data to refresh the feeds periodically. To do this, run the following command and paste in the line below it (making changes where appropriate for your site). The example I have given will refresh all feeds, for all users, every 30 minutes.

    sudo crontab -u www-data -e

    */30 * * * * cd /var/www/tt-rss && /usr/bin/php /var/www/tt-rss/update.php --feeds >/dev/null 2>&1

That's pretty much it for the install. In addition to the web interface, you can also download the Android app from the marketplace. It's free for the demo and only a few dollars to buy outright. I have used the app for a few days now and have come to like it almost as much as, if not more than, I loved the Google Reader app.

Friday, January 4, 2013

Syncing Encrypted Files Between Multiple Platforms

This is going to be a fairly short and quick how-to on syncing encrypted files between different systems running different operating systems. The software we will be using is EncFS (on Linux) and Boxcryptor (on everything else).

The first thing we need to do is choose what we're going to use as the common storage point for the encrypted files. Boxcryptor supports Dropbox, Google Drive, SkyDrive, WebDAV and SD card. Since Google hasn't gotten their act together on a Google Drive application for Linux and I don't have anything running WebDAV, I chose to use Dropbox.

Download Boxcryptor to your device (I chose to use my Android phone) and connect it to your Dropbox account. Once connected, click the button to create a new encrypted folder. Once you have done this, we can move on to setting up your Linux desktop with EncFS. If you don't have any Linux systems (shame on you), you can stop here and repeat this process of setting up Boxcryptor on your other closed-source platforms, the only difference being that instead of creating a new folder, you'll be selecting the one you created from your first device.

Setting up EncFS on Linux can be a bit tricky. This is mainly because a lot of tutorials I found around the web recommended using the "Cryptkeeper" application as a front end. I tried this and it was terrible. I would recommend skipping it altogether and just using the command line utility encfs. To do this, use your package manager to install it ("apt-get install encfs" on Ubuntu, "yum install fuse-encfs" on RHEL/Fedora). Next, download and install the Dropbox build for your distribution. Sign in to your account and let it sync your files. Once it finishes synchronizing, we're ready to set up the encfs mount. Run "encfs ~/Dropbox/path/to/folder.bc ~/mount/point/for/encfs". This will ask whether you want to create the mountpoint (if it does not yet exist) and then prompt for your encryption passphrase.
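A minimal sketch of the sequence, assuming the example paths above (adjust them to match your own Dropbox layout):

 $ sudo apt-get install encfs      # or: sudo yum install fuse-encfs
 $ encfs ~/Dropbox/path/to/folder.bc ~/mount/point/for/encfs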

That's it! It's important to note that in order for files to be encrypted, you must access the directory via the mount point, not the raw directory within Dropbox. To demonstrate this, you can create a text file within ~/mount/point/for/encfs and put some random gibberish in it. Save the file and then try to read it from the raw directory, in this case ~/Dropbox/path/to/folder.bc. You'll notice that the contents of the file, when viewed outside of the mountpoint, are scrambled.
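For example (the encrypted filename shown here is made up; EncFS encrypts filenames too, so the real one will differ):

 $ echo "random gibberish" > ~/mount/point/for/encfs/test.txt
 $ ls ~/Dropbox/path/to/folder.bc
 WLK4n0sXKor2NsCbrJGBF6B
 $ cat ~/Dropbox/path/to/folder.bc/WLK4n0sXKor2NsCbrJGBF6B
 (unreadable ciphertext)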

Please feel free to ask any questions in the comments section; I'll be happy to help you out if you get stuck. Or send me a message on Google+ if that's more convenient.

Thursday, December 1, 2011

Using Rsync to Backup Windows

I don't normally write about Windows and I don't think I would consider today's post to be any different. That said, I am going to post about something related to Windows. I recently made the switch from Red Hat Enterprise Linux 6.1 back to Windows 7 for day to day work on my office workstation. The reason behind this was mainly just software compatibility. I found most of my work was being done on my secondary Windows workstation anyway and decided that the few advantages Linux had over Windows, for my purposes in a 70% Windows corporate environment, were just not worth the hassle of maintaining two workstations. As of yesterday I'm now running off of my HP Elitebook with Windows 7 Enterprise edition. With this switch has come several challenges, not the least of which was how to make sure I had regular, local, backups of my data.

My laptop has a 120GB SSD in it and, while they are getting better, I still don't trust the reliability of MLC SSDs. In the past, under Linux, I had just been running rsync as a cron job every few hours while I was in the office. This had been working very well for me for some time, and I wanted to find a way to accomplish the same thing under Windows. Enter a project named Cygwin. Cygwin allows you to run Linux applications inside of your Windows environment; setup is easy and you can easily expand the selection of available tools by re-running the installer. I opted to give this a try by installing a few simple utilities: bash, openssh, openssl, git and rsync.

Since this provides you with a full bash shell, you are able to create bash scripts just like you would in a real Linux environment, so I set off creating myself a quick backup.sh script that would use rsync to back up C:\Users\my_username. I've posted the results of my work below; feel free to copy this and use it on your own Windows system for backups. Feel free to post any comments or questions in the comments section below as well.

Note: Don't forget to change the variables I've noted in the comments in the script.


#!/bin/bash
#
# Backs up /cygdrive/c/Users/$username to /cygdrive/$drv/$hostname/Users/$username
#
## Change these values to match your computer
#
hostname="MY_COMPUTER" # Name of your computer
drv="g"                # Drive letter to back up to
username="YOUR_USER"   # Your username
## You might need to change "c" on the next line if your Users folder is elsewhere
bkpfrm="/cygdrive/c/Users/$username"

## Nothing below here should need to be changed

# Check that the drive letter entered is present, else prompt for a new one
while [ ! -d "/cygdrive/$drv/" ]; do
    read -p "$drv:\\ does not appear to be a valid drive letter, please enter the correct drive letter: " drv1
    if [ -n "$drv1" ]; then # Change drive to the new value if one is entered
        drv="$drv1"
    fi
done

# Make sure the destination exists, then mirror the profile, deleting
# files on the backup that no longer exist locally
mkdir -p "/cygdrive/$drv/$hostname/Users/$username"
rsync -au --delete "$bkpfrm/" "/cygdrive/$drv/$hostname/Users/$username"
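If you want to mimic the old every-few-hours cron setup, Cygwin also ships a cron package that can be run as a Windows service via cygrunsrv; the crontab entry below is a sketch assuming the script was saved as /home/YOUR_USER/backup.sh:

0 * * * * /home/YOUR_USER/backup.sh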

Monday, November 21, 2011

Android Hacking on the Kindle Fire

I've been working recently on making the new Amazon Kindle Fire do everything that Amazon doesn't want it to do. Among other things, after a few days of tinkering and hacking away with the Android Debug Bridge (ADB), I now have a fully rooted Kindle Fire. In addition, I've also been working to get all of the Google Android apps working; here is the progress so far.

1. Rooted the Kindle Fire using the well publicized "SuperOneClick" method.
2. Loaded the Google account manager (required step for the applications to follow).
3. Loaded Gmail, Google+, Android Market and several other applications.

At this point, with those three steps, I've managed to convert what was intended to be a simple ebook reader into a fully featured Android tablet (now we just need an Android 4 ROM). If you have a Kindle Fire and wish to do the same to yours, head over to xda-developers.com and take a peek at the Kindle forums.

Speaking of Android 4 (ICS), I have started playing around with compiling my own Android 4.0 ROM. So far I've not made a lot of progress, but I'll definitely post an update here once I've successfully booted my first ROM.

Saturday, November 19, 2011

The Eee is dead and a mini MAME system is born

I must apologize to those of you who follow me. I've been absent recently and I would like to take a moment and try to explain why.

For those of you who know where I work, I don't need to explain too much; you know it's been a busy month at the office. In addition, as the title of the post states, my brand new laptop has sadly processed its last bit, at least for now. For those who don't know, the laptop took a tumble last month while I was getting into my car on my way into the office. I was in the middle of writing a review on the performance of Ubuntu 11.10 on the Eee 1215B when this occurred. After the fateful tumble the touchpad frequently malfunctioned, requiring me to shut down and pull the battery out to reset it. I tried for several days to repair it and called Asus about acquiring a replacement wrist rest. I was unsuccessful in my repair attempts and was told by Asus technical support that a replacement was not available. This effectively brought a halt to my review, and the search for a replacement test machine is still ongoing. As unfortunate as this was, I do not believe this will be the EOL for this machine.

I was fortunate enough to attend SkyDogCon earlier this month. At the conference I began talking with one of the members of Unallocated Space, a hackerspace in Maryland, who had brought with him a device called a "MAME box". For those who are unfamiliar with MAME (Multiple Arcade Machine Emulator), it is a cross-platform application designed to play classic arcade and video games. A few days later I came across the Nanocade and decided that I could give the Eee new life as a desktop arcade machine. If you go to my Google+ page you can check out the first prototype design I made in SketchUp for the housing. The system is undoubtedly going to run some incarnation of Linux, likely Debian or Arch. Subscribe to the blog to stay up to date on my progress as I move forward with the project.