Saturday 9 March 2019

Gmail account archiving - why is it so hard?

Long time no post, I think I'm averaging one post a year now!

Whilst we love G Suite, especially the API, of which we make extensive use, I can't work out why there is no easy way to archive a user's email when you delete them.

So a member of staff leaves, and as part of the delete process you can transfer all the files they own to another user. We transfer all files to an archive account, and Google kindly puts them in a folder named after the user. Brilliant!

But why oh why can't we do the same thing with email?

Some of our clients have to keep all correspondence for a certain amount of time for legal reasons, but there is no easy workflow for this that I can see, so we have a lot of dormant accounts sitting there "just in case".

OK: I know I could download all the email via IMAP using Thunderbird or similar and then either upload it to a Gmail archive account or move the whole lot to Glacier via S3 in MBOX format.
But the latter doesn't give me searchability, and the former is a PITA to organise.

I just don't understand why Google have made G Suite so damn easy for a sysadmin to manage, and have produced a fantastic API that lets us automate workflows efficiently, yet we can't do something as seemingly simple as clicking an "archive email to" button.

Monday 20 June 2016

Let's Encrypt - Updating Cert on Amazon AWS Linux AMI

So I'm using certs from Let's Encrypt (letsencrypt.org) for the web servers I run on EC2.
I'm also using the Amazon Linux AMI so that I have all the S3 tools etc. pre-installed.
Amazon's version of Python 2.7 appears to have some issues, but I managed to get round them by piecing together some guides on the web to get LE installed and working.

However, in the spirit of good practice, I ran a package update recently, which overwrote the changes I had made to get LE working and borked the cert update process :(

The error I was getting whilst running the update command in debug was:

letsencrypt-auto: line 167: virtualenv: command not found

The suggested fix was to run pip upgrade then pip install virtualenv.

This doesn't work :( The fix was to do the following:

sudo easy_install --upgrade pip
sudo easy_install virtualenv

After this I was able to update the cert.
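With virtualenv restored, the renewal itself can be scripted so a broken environment is noticed straight away. This is only a sketch: the install path of letsencrypt-auto and the httpd reload are assumptions, so adjust them for your setup.

```shell
#!/bin/bash
# Renew the cert and reload the web server if renewal succeeded.
# The path below is an assumed install location - adjust for your system.
LE=/opt/letsencrypt/letsencrypt-auto

if [ -x "$LE" ]; then
    # --debug is needed on the Amazon Linux AMI, as noted above.
    "$LE" renew --debug && sudo service httpd reload
else
    echo "letsencrypt-auto not found at $LE"
fi
```

Dropping something like this into cron means the next package update that breaks the virtualenv shows up in the cron mail rather than as an expired cert.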

Posting here to remember it and in case it helps someone!

Tuesday 1 December 2015

Extending SNMP

One of the things I love about SNMP is how easy it is to extend and use it as a wrapper service.

I was training some of my colleagues today on SNMP and as part of the course we covered extending SNMP in detail.

Beyond the simple things like echoing back a value, it's great that it can call external scripts and pass arguments to them.

An example of this is to be able to get the percentage of used disk space on a hard drive.
To extend snmp, we add a line to the snmpd.conf file:
extend rootspace /bin/bash /scripts/getdiskspace.sh /dev/disk1

/scripts/getdiskspace.sh - the script to call

/dev/disk1 - the parameter to pass to the script.
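The script itself isn't shown in the post, but a minimal sketch of what getdiskspace.sh could look like is below. The df/awk approach is an assumption; any script that prints a single number would do.

```shell
#!/bin/bash
# getdiskspace.sh - print the used-space percentage of the given volume.
# Takes the device or mount point as its only argument, e.g. /dev/disk1
# df -P gives POSIX output, so field 5 is always the capacity column.
df -P "$1" | awk 'NR==2 { sub(/%/, "", $5); print $5 }'
```

Keeping the output to a single bare number is deliberate: it arrives in nsExtendOutput1Line as a string that is trivial to parse on the monitoring side.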

Once we have set this up, we can get all of the extended parameters by walking the NET-SNMP-EXTEND-MIB::nsExtendObjects OID

snmpwalk -c public 192.168.0.2 NET-SNMP-EXTEND-MIB::nsExtendObjects

Which gives us:
NET-SNMP-EXTEND-MIB::nsExtendNumEntries.0 = INTEGER: 2
NET-SNMP-EXTEND-MIB::nsExtendCommand."dataspace" = STRING: /bin/bash
NET-SNMP-EXTEND-MIB::nsExtendCommand."rootspace" = STRING: /bin/bash
NET-SNMP-EXTEND-MIB::nsExtendArgs."dataspace" = STRING: /scripts/getdiskspace.sh /Volumes/Data
NET-SNMP-EXTEND-MIB::nsExtendArgs."rootspace" = STRING: /scripts/getdiskspace.sh /dev/disk1
NET-SNMP-EXTEND-MIB::nsExtendInput."dataspace" = STRING: 
NET-SNMP-EXTEND-MIB::nsExtendInput."rootspace" = STRING: 
NET-SNMP-EXTEND-MIB::nsExtendCacheTime."dataspace" = INTEGER: 5
NET-SNMP-EXTEND-MIB::nsExtendCacheTime."rootspace" = INTEGER: 5
NET-SNMP-EXTEND-MIB::nsExtendExecType."dataspace" = INTEGER: exec(1)
NET-SNMP-EXTEND-MIB::nsExtendExecType."rootspace" = INTEGER: exec(1)
NET-SNMP-EXTEND-MIB::nsExtendRunType."dataspace" = INTEGER: run-on-read(1)
NET-SNMP-EXTEND-MIB::nsExtendRunType."rootspace" = INTEGER: run-on-read(1)
NET-SNMP-EXTEND-MIB::nsExtendStorage."dataspace" = INTEGER: permanent(4)
NET-SNMP-EXTEND-MIB::nsExtendStorage."rootspace" = INTEGER: permanent(4)
NET-SNMP-EXTEND-MIB::nsExtendStatus."dataspace" = INTEGER: active(1)
NET-SNMP-EXTEND-MIB::nsExtendStatus."rootspace" = INTEGER: active(1)
NET-SNMP-EXTEND-MIB::nsExtendOutput1Line."dataspace" = STRING: 93
NET-SNMP-EXTEND-MIB::nsExtendOutput1Line."rootspace" = STRING: 76
NET-SNMP-EXTEND-MIB::nsExtendOutputFull."dataspace" = STRING: 93
NET-SNMP-EXTEND-MIB::nsExtendOutputFull."rootspace" = STRING: 76
NET-SNMP-EXTEND-MIB::nsExtendOutNumLines."dataspace" = INTEGER: 1
NET-SNMP-EXTEND-MIB::nsExtendOutNumLines."rootspace" = INTEGER: 1
NET-SNMP-EXTEND-MIB::nsExtendResult."dataspace" = INTEGER: 0
NET-SNMP-EXTEND-MIB::nsExtendResult."rootspace" = INTEGER: 0
NET-SNMP-EXTEND-MIB::nsExtendOutLine."dataspace".1 = STRING: 93
NET-SNMP-EXTEND-MIB::nsExtendOutLine."rootspace".1 = STRING: 76

The info we are interested in is the output lines, e.g. for rootspace:

NET-SNMP-EXTEND-MIB::nsExtendOutput1Line."rootspace" = STRING: 76
NET-SNMP-EXTEND-MIB::nsExtendOutLine."rootspace".1 = STRING: 76

The result value gives the exit status of the script, so the script can report a status simply by exiting with an integer:

NET-SNMP-EXTEND-MIB::nsExtendResult."rootspace" = INTEGER: 0

To get just a single response with snmpget, we would use the following:

snmpget -c public 192.168.0.2 'NET-SNMP-EXTEND-MIB::nsExtendOutput1Line."rootspace"'
which responds with:

NET-SNMP-EXTEND-MIB::nsExtendOutput1Line."rootspace" = STRING: 76

For reference, the format of the extend directive line in snmpd.conf is:
extend - the extend directive
rootspace - the name of the extension, which the daemon will respond to
/bin/bash - the interpreter to run the extension with
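Once the extension is in place, the value is easy to consume from a monitoring script. A minimal sketch, assuming SNMPv2c and the same community string as above (-Oqv tells snmpget to print just the value, with no OID or type prefix):

```shell
#!/bin/bash
# Alert if the root volume crosses a usage threshold, via the SNMP extension.
THRESHOLD=90

if command -v snmpget >/dev/null 2>&1; then
    used=$(snmpget -v2c -c public -Oqv 192.168.0.2 \
        'NET-SNMP-EXTEND-MIB::nsExtendOutput1Line."rootspace"')
    if [ "$used" -gt "$THRESHOLD" ]; then
        echo "root volume is ${used}% full"
    fi
else
    echo "snmpget not installed"
fi
```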

Sunday 29 November 2015

Thoughts on the future of IT

I have just become a certified AWS Solutions Architect, and along the journey to becoming qualified a new perspective regarding the future of IT for the SMB market is slowly dawning on me.

In the past the standard procedure was to put kit on premise and then maintain it, as hosting it / paying a provider was quite often more expensive.

Now however the market is changing and it no longer makes sense to put kit on premise due to the economies of scale and high resilience offered by hosting services such as AWS.

Couple this with the decreasing cost of internet connectivity, and the move to truly cloud based infrastructure makes sense.

I am currently designing HA systems that have little to no on premise infrastructure, with almost everything provided as a managed service.

This moves most of the cost into op-ex, reduces the TCO drastically, and seems to be a no-brainer!


Saturday 23 August 2014

NFS Client Settings on OSX 10.9

In OSX 10.9 you can specify the options that the Finder uses to connect to NFS shares by putting them in /etc/nfs.conf

For example
#
# nfs.conf: the NFS configuration file
#
nfs.client.mount.options = nolocks,locallocks,intr,soft,nfc

This brings back the functionality available in 10.5 and 10.6.

I have noticed that where I sometimes had to explicitly specify resvport in 10.5 and 10.6, this no longer seems to be necessary in 10.9.

Thursday 14 August 2014

Reset an OSX Server

There's an easy way to reset an OSX 10.8 or 10.9 server back to its initial settings.

1. Drag the Server app to the Trash; the system will detect this and warn you that it is stopping all services.
2. Rename /Library/Server to /Library/Server.old
3. Reboot the machine (maybe not necessary, but it might be needed to clear any stuck processes)
4. Move the Server app back into Applications
5. Launch it. It will start afresh.

Tuesday 29 July 2014

Backing up and Nuking an OSX 10.8 Calendar Server

Backup

There are various bits of documentation around about how to do this on 10.7 Server, but although the principle is correct it doesn't work on 10.8.

The reason for this is that in 10.8 there are two instances of the postgres daemon.

One is in userland, for sysadmins to set up their own databases, and is also used by the Topicdesk Roundcube package. The second is hidden and is used to store the server's data, such as calendar events and wikis.

The 10.7 instructions to backup the DB are:
sudo pg_dump -U _postgres caldav -c -f caldav.sql

If you run this on 10.8 server it will fail, saying that it can't find the caldav database.

The only way to access the caldav db is via a unix domain socket located at:
/Library/Server/PostgreSQL\ For\ Server\ Services/Socket/.s.PGSQL.5432

You can verify this by using telnet to connect to it:
telnet -u /Library/Server/PostgreSQL\ For\ Server\ Services/Socket/.s.PGSQL.5432

** Note, a service that uses the postgres db must be running for the socket to exist.

In addition, the pg_dump program located on the standard path (i.e. /usr/bin) is the wrong version to access the postgres daemon hosting the service databases. There is another version hidden away within the Server app at:
/Applications/Server.app/Contents/ServerRoot/usr/bin/

Even if you call this directly, it will fail because it tries to access the standard TCP port for postgres.

So, we need to use the pg_dump in the server app, and pass the socket location to it.

Luckily we can do this with the host flag, from the man page:
-h host, --host=host
           Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket.
           The default is taken from the PGHOST environment variable, if set, else a Unix domain socket connection is attempted.

pg_dump appends the socket file name itself, so it only wants the directory path.

Putting this all together, we can successfully backup the caldav DB with the following command:
/Applications/Server.app/Contents/ServerRoot/usr/bin/pg_dump -U _postgres -h /Library/Server/PostgreSQL\ For\ Server\ Services/Socket/ caldav -c -f caldav.sql

This will back up the DB that contains all the events; however, there is also another program hidden in the Server app that we need to run, which will back up the sqlite DBs and the server settings.

/Applications/Server.app/Contents/ServerRoot/usr/sbin/calendarserver_backup

To back up, pass the following:
/Applications/Server.app/Contents/ServerRoot/usr/sbin/calendarserver_backup backup file.tgz

To restore
/Applications/Server.app/Contents/ServerRoot/usr/sbin/calendarserver_backup restore file.tgz

If you run it without any options or with -h it will give you basic help info.
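The two backup steps above can be collected into one script. This is just a sketch using the paths described earlier; the output file names are examples, and note that the space-containing socket path needs quoting throughout:

```shell
#!/bin/bash
# Back up the 10.8 calendar server: the caldav postgres DB via the hidden
# socket, plus the sqlite DBs and settings via calendarserver_backup.
SERVER_ROOT=/Applications/Server.app/Contents/ServerRoot
SOCKET_DIR="/Library/Server/PostgreSQL For Server Services/Socket/"

if [ -d "$SERVER_ROOT" ]; then
    "$SERVER_ROOT/usr/bin/pg_dump" -U _postgres -h "$SOCKET_DIR" caldav -c -f caldav.sql
    "$SERVER_ROOT/usr/sbin/calendarserver_backup" backup calendar-backup.tgz
else
    echo "Server.app not found - run this on the server itself"
fi
```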

Nuke and Rebuild

So now we know how to backup the DB, what if we want to wipe it and start again?
To do this we need to drop the caldav db, and then run calendarserver_bootstrap_database to recreate the DB.

However, the socket only exists when services that use the db are running, and you can't drop the db if it's being used.

Luckily, the wiki service uses the postgres daemon and creates the socket, but does not lock the caldav DB, so we can use it to keep the socket alive.

Step 1: get the background postgres daemon running

In Server App
Stop Calendar
Stop Contacts 
Start Wiki

Step 2: drop the caldav DB

sudo /Applications/Server.app/Contents/ServerRoot/usr/bin/dropdb -U _postgres -h /Library/Server/PostgreSQL\ For\ Server\ Services/Socket/ caldav

Step 3: rebuild the caldav DB

sudo calendarserver_bootstrap_database -v
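Steps 2 and 3 can likewise be put into one script, run after the Wiki service has been started so the socket exists. A sketch under those assumptions:

```shell
#!/bin/bash
# Drop the caldav DB via the hidden socket, then rebuild it from scratch.
# Assumes Calendar and Contacts are stopped and Wiki is running (see above).
SERVER_ROOT=/Applications/Server.app/Contents/ServerRoot
SOCKET_DIR="/Library/Server/PostgreSQL For Server Services/Socket/"

if [ -d "$SERVER_ROOT" ]; then
    sudo "$SERVER_ROOT/usr/bin/dropdb" -U _postgres -h "$SOCKET_DIR" caldav
    sudo calendarserver_bootstrap_database -v
else
    echo "Server.app not found - run this on the server itself"
fi
```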