Sep 22, 2015

So I’ve been getting a tremendous amount of spam lately, which is really annoying, since my e-mail is filtered through my own mail server.  Absentmindedly, I’ve been moving the spam into the appropriate “spam” folder each day so that spamtrainer can do its work and make spamassassin smarter.  Only spamassassin doesn’t appear to be getting any smarter…  So it occurs to me that I should just make my own rule to deal with the onslaught.  Since most of what I’m seeing slip through the filter (how? I don’t know!) has ASCII characters used to draw emphasis to the phishing scam of the day, I set up some super basic rules as follows:

NOTE: I had to update the rules to ensure rule names begin with a letter, and also escape any characters reserved for use by regex.  Oops.
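For illustration, a rule of the sort I’m describing might look like this in local.cf; the pattern, score, and rule name here are hypothetical examples, not my actual rules:

```
# Hypothetical body rule: flag messages that use runs of ASCII decoration
# characters for emphasis. Note the rule name starts with a letter and the
# regex metacharacters are escaped.
body     LOCAL_ASCII_EMPHASIS  /[\*\=\#]{5,}/
score    LOCAL_ASCII_EMPHASIS  2.5
describe LOCAL_ASCII_EMPHASIS  Run of ASCII decoration characters
```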

Mar 20, 2015

Until recently, Mozy Pro was used to back up offsite nightly.  The backup included all administrative data, user profiles, SQL databases, etc.  The SQL databases were automatically backed up to the server nightly, and then pushed offsite via Mozy, a huge improvement over the unreliable DLT autoloader we had previously used.

The document attachment feature of our production software has helped us realize the benefit of transitioning to a paperless file system.  However, once the SQL attachment database alone grew past 40GB, uploading the nightly backup was taking longer than 48 hours!  Even worse, the performance impact of Mozy uploading the data from our SQL server during business hours was resulting in noticeable delays at point of sale terminals!  Although the 4 NICs in the server are bonded into a 4Gbps ethernet connection to the core switch, the SQL databases are stored on a RAID5 volume, which also has an impact on database performance.

In looking for ideas to improve the backup schema, I turned to the Google group moderated by our software vendor.  Another IT manager recommended Backup Assist (BA) for easily scheduled SQL backups, in lieu of doing something in PowerShell or changing settings in SQL Server Management Studio.  We are now running BA 7.5 on our Windows 2003 server.  I would strongly recommend this software, as it has an easy-to-navigate GUI, an excellent feature set, and add-on modules which are very useful.  Compared to some other enterprise-class backup management software, BA is a good value.  The SQL backup module automates creating a daily backup of SQL databases as well as incremental backups every 15 minutes.  This is now a crucial part of our disaster recovery protocol.  Previously, a catastrophic failure could have resulted in the loss of all transactions and data modified on the day of the failure.  Now any lost transactions should be possible to manually recreate, even in the event of a catastrophic failure!
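Under the hood this boils down to a nightly full backup plus frequent log backups.  As a rough T-SQL sketch (the database name and paths are hypothetical, and BA’s 15-minute incrementals may be implemented as differential rather than log backups):

```sql
-- Nightly full backup (database name and path are hypothetical)
BACKUP DATABASE ProductionDB
  TO DISK = 'D:\Backups\ProductionDB_full.bak'
  WITH INIT;

-- Every 15 minutes: back up the transaction log, which is what makes
-- recovering the day's transactions after a failure possible
BACKUP LOG ProductionDB
  TO DISK = 'D:\Backups\ProductionDB_log.trn';
```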

There are three multi-mode fiber links between the buildings at work, and internal network data wiring has been optimized to eliminate two switches from the physical network topology.  Ethernet media converters are being replaced with fiber optic transceivers in more advanced layer-2 switches that allow link aggregation on trunk ports.  This is important to the new backup schema: the additional bandwidth used by Backup Assist to transfer the SQL backups to the back building is physically as well as logically separated from other network traffic using VLAN access rules on the core layer 3 switch stack.

I prefer rsync as a network transfer agent, but I will need to do some more work in setting up multiple rsync daemons between the local storage servers in the back building and the offsite server.  A 2MBps upload across the hardware VPN keeps a steady stream of traffic between the local storage servers and the offsite server.  This has caused a few incomplete backups, with the rsync daemon getting confused running multiple operations in the same shell.  For now, SMB is working with Backup Assist to transfer the SQL backups to the local storage server in the back building.  Because I am using a non-NTFS (ext4) file system on the offsite server, the current backup reports in Backup Assist throw some warnings about NTFS permissions and mount points being stripped from files and folders in the copy job.  I don’t see this as a problem given the nature of what we are accomplishing with our offsite backup.  Most restores are done from the shadow-copied files and folders on the local servers; the offsite backups are more for full restores in the event of catastrophic failure or intentional deletion.
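For reference, the daemon side of what I’m working toward looks roughly like this; the module name, paths, and user are assumptions for illustration, not my actual config:

```
# /etc/rsyncd.conf on the offsite box (hypothetical module definition)
[offsite-backup]
    path = /mnt/backup/sql
    read only = false
    uid = backup
    gid = backup

# Pushed from the local storage server, tunneled over SSH rather than the
# bare daemon port, e.g.:
#   rsync -az --partial --delete /backups/sql/ offsite:/mnt/backup/sql/
```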

There are two backup sets running for SQL.  The production software file attachment database and main database are archived in full each business day, and incrementally backed up every 15 minutes for the remainder of the day.  Each hour, these files are copied across the fiber link to a local storage backup location, where they are then moved through a series of six folders based on the date/time stamp of the backup files, so that approximately two weeks’ worth of backups are available on the local network.  Another backup job makes a full backup of all additional databases in the production environment and transfers all of the SQL backups across the fiber link to a different location on the local storage server at the end of the day.  All the SQL backups are then archived daily to an alternate location that keeps one week’s worth of backups of the SQL database in folders labeled by day.  These are then transferred to the offsite server across the VPN with rsync over SSH.  On Sundays, all of the other network data (user profiles, documents, files, pictures, etc.) are copied to the local backup servers, and then transferred to the offsite server via the rsync daemon that runs hourly, pushing any new content to the offsite server.
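The folder-rotation part of the scheme above can be sketched in a few lines of shell; the paths, the .bak extension, and the 14-day window are assumptions for illustration:

```shell
#!/bin/sh
# Sketch of the rotation scheme: file incoming SQL backups into a folder
# named for the current date, then prune date-folders older than the
# retention window. (The original scheme keys folders off each backup
# file's own timestamp; this simplified version uses today's date.)
rotate_backups() {
  src=$1; dest=$2; keep_days=$3
  day=$(date +%Y-%m-%d)
  mkdir -p "$src" "$dest/$day"
  for f in "$src"/*.bak; do
    [ -e "$f" ] || continue
    mv "$f" "$dest/$day/"
  done
  # drop day-folders beyond the retention window (14 days = ~2 weeks)
  find "$dest" -mindepth 1 -maxdepth 1 -type d -mtime +"$keep_days" \
    -exec rm -rf {} +
}

rotate_backups /tmp/sql/incoming /tmp/sql/daily 14
```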

Because I wanted to keep the remote system headless, and because I had some existing components hanging around the office, here is the equipment I used at the remote location:
SonicWALL TZ-100W
  • Inexpensive firewall that allows an easy-to-manage hardware VPN with the main office.
APC Masterswitch AP9225
  • Initially, I think I would have preferred the AP9211.  The installed firmware offers a web-GUI option to ‘restart’ individual outlets.
  • The AP9225 has matching serial ports for pairing UPS communication with each outlet.  It may be possible to integrate some environmental probes for temperature/humidity, and RS-232 communication with an Arduino board for more advanced household automation applications.
  • I haven’t looked at scripting or automation on these yet, so time will tell which ultimately works better.  I want an iOS app that will give me push-button access to these controls so that I can simplify procedural troubleshooting that may require restarting devices based on an error condition (e.g. triggering the cable modem to turn off and on if the VPN drops for x minutes and Google’s DNS cannot be pinged).
PowerConnect 2816
  • An 8 port switch would have been fine, but the 16 was available.  SNMP allows network traffic data to be graphed, which is useful for planning and scheduling backup transfers.
PogoPlug E02
  • ARM powered ‘cloud storage’ device marketed to consumers that is easily reconfigured to run Arch Linux.
8GB Sandisk USB Flash drive
  • This is partitioned as an ext3 volume and serves as the root volume for the Linux OS.
Sabrent USB 3.0 to SATA Dual Bay External Hard Drive Docking Station
  • Easy access to media, and adaptable as backup requirements change.  Disk cloning is a nice perk.
2 x WD RE4 2TB Enterprise Hard Drives
  • Target Storage media for offsite data
APC Back-UPS 750VA
  • Provides battery power to network components until generator is online in the event of power failure

DIY Mozy

I am working on setting up scripts to automatically correct and report common issues with the rsync portion of the offsite backup.

Now for the $64 question:

Who has fully implemented and tested a disaster recovery protocol, purposely failing over to backup systems?  I have not done a SQL restore from the offsite data yet, so I cannot call this project complete until I have done so.  Since Windows 2003 is end-of-life this June, I will have to replace the server with Windows 2012.  Given the age of the existing system, coupled with hardware limitations (I would have to add another CPU to increase the memory, based on the server system board chipset), I plan to deploy new hardware.  Once this is operational, the plan is to house the 2nd server offsite and keep the software updated to match the production server (SQL version and production software version), provided the operating system and software licensing allow it.

This should allow for greater flexibility in recovery capability.  In a case where we lose the production server, the SQL database could be restored offsite, and production software could be accessed across the VPN.  Alternatively, the backup server could be physically moved allowing IT to react to any number of recovery scenarios.

Nov 11, 2014

My 2009 Mac Mini is in desperate need of an SSD upgrade.  Unfortunately, I have close to 1TB of data there (mostly PLEX media), so upgrading the two drives would be cost prohibitive.  I will probably pick up a 2012 MD388LL/A with i7 CPU at some point…  I hope.  The new ones are pretty much not upgradeable.

Since my website and mail server are running on Mavericks, I dare not load Yosemite on that system.  Lord only knows what would happen to PLEX.  It was just recently updated to solve a memory leak issue which pretty much crippled the machine on more than one occasion.  I even bought an APC Masterswitch so that I can remotely hard boot the server if my mail stops working.

I’ve replicated most of the shell configuration from the Mac Mini Server I set up at work to the one at home.  There were some pretty helpful blogs which outlined how to include command aliases in bash.  This makes it really easy to execute a multitail of all three mail server logs.  I think it might be useful to consider customizing the log system on the new servers to make data mining easier.  Everyone is so convinced that the mail server is broken that I am constantly providing proof of delivery from the mail log data.  It’s actually really useful when dealing with another company’s IT, especially if they’re outsourced.
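The alias trick boils down to a one-liner in ~/.bash_profile.  The three log paths below are assumptions; point them at wherever postfix, dovecot, and amavis actually log on your server:

```shell
# Hypothetical ~/.bash_profile alias: one command to multitail all three
# mail server logs (log paths are assumptions -- adjust to your setup)
alias maillogs='multitail /var/log/mail.log /Library/Logs/Mail/mailaccess.log /Library/Logs/Mail/amavis.log'
```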

multitail of Mavericks mail server



I’ve got a Linux box that I’m messing around with here as well.  I think I can do port mirroring on the switch and send all mail traffic to both servers, or maybe an even simpler configuration until the Linux box goes live.  In either event, Mavericks’ handling of spam messages (assuming you want to use spamtrainer to update Bayesian filter rules) leaves much to be desired.  Accounts have to be created on the system for “junkmail” and “notjunkmail”.  This looks ridiculous on the login screen for starters…  Messages meant to be used for training must be redirected to the other accounts.  This is easy for experienced IMAP users, but for novices using a PC based mail client, it may be difficult at best.


Nov 11, 2014

Following many months of attempting to resolve an issue whereby incoming mail delivery was disrupted every 48–60 hours, I now have a functioning patch in place.  Recently, I determined that the mail filter (amavis) was faulting during its cleanup cycle.  Somehow its temp (working) folder is deleted, and then the process hangs.  Consequently, postfix is unable to deliver mail since the filter has broken its connection.  Thanks to monit, I was able to configure a service that verifies the temp folder status every 60 seconds, and then creates the folder with the proper user/group permissions (_amavisd:_amavisd) if it does not exist.  Mail delivery is restored immediately, as the amavis process is now able to execute.
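For reference, the monit service amounts to a directory check with an exec action.  The path below is an assumption (the amavis working directory varies between OS X Server releases), so treat this as a sketch rather than my exact config:

```
# Hypothetical monitrc fragment: recreate the amavis temp folder with the
# correct owner whenever it goes missing (path is an assumption)
check directory amavis_tmp path /var/amavis/tmp
  if does not exist then exec "/bin/sh -c 'mkdir -p /var/amavis/tmp && chown _amavisd:_amavisd /var/amavis/tmp'"
```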

The mail server has now been error free for four days and counting!


No need to watch the server logs in real-time any longer!

The amavisd version included with Mountain Lion is 2.8.0.  I believe that somewhere in the modified code is an error triggered by a yet-to-be-identified instruction sequence or message handling, or it may be due to some modification I made to the server config at some point.  Either way, there should be no further ‘tweaking’ required.  I am now able to direct my attention back to the pure Linux mail server that will enable end users to customize their own mail filtering options.  Once the configuration is tested, I can begin importing the active directory accounts and replicating dovecot folders.

Now I won’t have to restart the mail service; the monit solution solves the problem gracefully.  Existing IMAP connections to dovecot are not disrupted, so end users are not disconnected from their mailboxes.  Not only will confidence be restored, but avoiding secondary issues, such as incorrect passwords entered at the mail client’s prompting, will improve end user satisfaction.

Aug 25, 2014

If you are running postfix/dovecot using the server app on OSX 10.8.x and want to implement the markasjunk2 plugin for roundcube, allow me to save you hours of frustration…  Here are the settings that worked for me.

Assuming you intend to use sa-learn to update the Bayesian filter when using the plugin, modify your roundcube config as follows:

Set the plugin to use the cmd_learn driver:

$rcmail_config['markasjunk2_learning_driver'] = 'cmd_learn';

Set the spam option for the learn driver:

$rcmail_config['markasjunk2_spam_cmd'] = 'sudo /Applications/ --spam %f';

Set the ham option for the learn driver:

$rcmail_config['markasjunk2_ham_cmd'] = 'sudo /Applications/ --ham %f';

If you want to see it in action, be sure to turn on logging:

$rcmail_config['markasjunk2_debug'] = true;

In order for roundcube to call sa-learn with access permission to the spamassassin database, it is necessary to update the sudoers file.

Open terminal and type:  sudo visudo



Once in the sudoers file, add the following line:

_www ALL=(root) NOPASSWD:/Applications/

After you have made your changes, save them: ‘:’ brings up the command prompt and ‘w’ writes the changes, then ‘:q’ to quit.  (I prefer nano to vim, but supposedly there is some voodoo about changing the sudoers file in an unsafe manner and you’ll shoot your eye out… blah blah blah.)

Open the roundcube inbox, mash the junk button, and see the results in the log file:

displayed at bottom of roundcube interface


learned some tokens!

Here are some good references (without which, I’d have never gotten this working):


Oct 22, 2011

Hooray!  After a few hours of struggle, I now have Gnome Classic running with the window picker option enabled and working.  What a piece of shit Unity is.  Also, my netbook is noticeably slower in 11.10 than 11.04.  May have to add the extra stick of RAM after all…