Why I Do Not Use Amazon RDS (no longer valid)

This post is no longer valid, as my main reason for not using RDS has been turned into a good reason to use RDS.

Amazon RDS announced cross region read replicas in November 2013. You can read their blog post about it here.

I think this makes DR easier using RDS, and encourage everyone to use RDS with cross region replicas.

Old post is here for archival reading.

This post is a bit rambly, I apologize in advance.

TL;DR I don’t use Amazon RDS because it doesn’t have global/non-Amazon failover.

I use Percona’s distribution of MySQL, primarily because of their XtraDB. When Amazon Web Services (AWS) came out with their hosted MySQL database service (RDS) I checked it out, but did not move to it from hosting my own MySQL servers on their EC2 cloud. The solution I had set up was similar to RDS; both use EC2 instances running Linux with an active slave, and store data on their Elastic Block Store (EBS). Don’t get me wrong, RDS is a great service, and if you don’t know how to run your own MySQL server, or simply don’t want to, it’s a great option, and you should be using it.

RDS uses Amazon’s Linux with MySQL. It stores its data on EBS volumes. It’s easy to set up a replication slave in a different Amazon Availability Zone (AZ) for redundancy, and Amazon has a smooth process to fail over from your master to its slave, should the master have a problem that requires replacement. This is all great, as replication and failover usually require someone who knows MySQL to set up. The one thing it does not account for is inter-region failover, or failover to a non-EC2 solution. The simplest solution to this problem is to set up a MySQL slave in the desired location (another region, or on your own hardware). The upside is that your live slave would hopefully not be far behind the master, and able to take over with minimal trouble or downtime. The downside is that running that extra instance or physical server has lots of costs associated with it, and failover to a different location is often non-trivial for the things that talk to the database, like web nodes.

I had set up my binlog replay system, which basically keeps warm disks partially up to date for multiple MySQL master nodes, using transient compute resources. I have started using ec2-consistent-snapshot for master/slave node replication, and now that Amazon has enabled copying of EBS snapshots between regions, I plan to retire my binlog replay system, and either update ec2-consistent-snapshot to support inter-region copying, or, more likely, make a snapshot monitor/manager that watches snapshots for completion, and copies them to another region.
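The monitor/manager idea is simple enough to sketch with the EC2 API tools. The snapshot ID and regions below are placeholders, and the script only prints the two commands it would run, so treat it as a sketch, not a finished tool:

```shell
#!/bin/bash
# Sketch: wait for a snapshot to complete, then copy it to the DR region.
# SNAPSHOT_ID would come from ec2-consistent-snapshot in real life.
SNAPSHOT_ID="snap-12345678"
SRC_REGION="us-east-1"
DST_REGION="us-west-2"

# Poll this until the snapshot status reads 'completed':
POLL_CMD="ec2-describe-snapshots --region ${SRC_REGION} ${SNAPSHOT_ID}"
# Then copy it across regions:
COPY_CMD="ec2-copy-snapshot --region ${DST_REGION} --source-region ${SRC_REGION} --source-snapshot-id ${SNAPSHOT_ID}"

echo "Wait for 'completed' from: ${POLL_CMD}"
echo "Then run: ${COPY_CMD}"
```

Wrap the poll in a sleep loop and a cron job and you have most of the snapshot monitor described above.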

Since Amazon actually listens to customer requests, I would sum up my non-RDS desires with this request.

Amazon, I would like to be able to have an RDS instance automatically ship binlogs to S3 buckets, take consistent MySQL snapshots, and copy them to other regions.
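Until RDS does this, the binlog half can be hand-rolled. The bucket name and datadir below are placeholders, s3put is from Tim Kay's aws tool, and this version only prints the uploads rather than running them:

```shell
#!/bin/bash
# Sketch: flush the current binlog, then ship the closed binlogs to S3.
shopt -s nullglob   # so an empty datadir doesn't echo a literal glob
S3_BUCKET="binlog-archive.example.com"   # placeholder bucket
BINLOG_DIR="/var/lib/mysql"              # placeholder datadir

# FLUSH LOGS rotates to a new binlog, so every earlier one is safe to copy:
echo "mysql -e 'FLUSH LOGS'"
for LOG in "${BINLOG_DIR}"/mysql-bin.[0-9]*; do
  echo "s3put ${S3_BUCKET}/$(basename "${LOG}") ${LOG}"
done
```

Drop the echoes, add a check that skips the binlog still being written, and cron it.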


iPad Mini is a Pass For Me.

* Disclaimer * I love Apple products. I’m typing this on a rMBP. I use an iPhone 4S. I recommend 99% of people get iPhones over anything else. I love the Mac Air, Mac Mini, Mac OS X, and everything Apple has done.

That said, I just went to Best Buy to see the new iPad Mini. They have one model in stock, the $529 64GB model. They sold out of the 16/32GB ones early in the day, an employee said.

My impression is, pass. And I don’t feel like I’m missing out on anything, other than Apple innovating!


  • It’s too big; it feels only slightly smaller than a full iPad. Whose giant, manly hands are in the picture on the iPad Mini page? It feels much closer to a big tablet than a small one.
  • Plastic back feels cheap and fragile, and is too thick.
  • It’s *way* too expensive. At $199 I would have been slightly excited and probably very likely to get a 16GB model. $199 for a 32GB and I would have been there at the open, $249 for a 64GB probably the same. But $329 for a 16GB is almost double comparable high-end Android tablets. I would pay a premium for the Apple version, but not 65% more, maybe 20% more.
  • The new connector. It’s nice it’s not that giant 30 pin monster, but seriously? Not micro-usb? WTF? That’s a blatant wallet punch. I hated their old connectors, but at least they were ubiquitous. Everything else in the world is now Micro-USB.
  • Still not happy Apple bars jailbreaking; my device, my choice.



  • It’s fairly light, battery life looks awesome.
  • Some people may like the size, between most 7″ tablets and 10″ tablets.
  • It has all iPad and iPhone apps from the app store. There are way more apps for iOS than Android.
  • Front and rear cameras. No tablet should ship without at least one camera for video conferencing, and really all should have a front and rear facing camera.


I am sad that Apple launched 3 products that I have no desire to own. This new iPad “Mini” is the one I had the highest hopes for.

The iPad 4: I don’t need my full-size iPad to be twice as fast, I need it to weigh less. And didn’t I just buy this iPad 3?

The iPhone 5, WTF? Worst. Form factor. Ever. What were they thinking? People miss long phones? We took the iPhone and made it more like a thin banana? I like the thin. But give me more screen, not taller screen. If you need a reference, I really like the size, weight and feel of my Motorola DROID RAZR. I think it may be the perfect size smartphone.

I am waiting for the Barnes and Noble Nook HD on Nov 8.

I really like the 7″ tablet space, and have high hopes for the awesome, portable computers these could become.

What do you think?


How I install Sun Java JDK on Ubuntu Linux

Oracle’s licensing debacle is annoying, and I would love to ditch the Sun JVM. But, nothing out there is 100% compatible and free.

Install Sun java JDK in /usr/java manually
Unfortunately, Sun doesn’t have a good url for wget/curl’ing, so you have to manually download the self-extracting Linux installer.
The current version is 6u25 (1.6.0_25), and is here http://www.oracle.com/technetwork/java/javase/downloads/jdk-6u25-download-346242.html
You should be getting the x64 version (there is no good reason to run a 32-bit OS unless you are on a *very* old, or embedded, computer)
The current file is jdk-6u25-linux-x64.bin
Put this on your server, mark it executable, and run it

chmod uog+x jdk-6u25-linux-x64.bin
./jdk-6u25-linux-x64.bin

then move the directory to /usr/
I like to leave the full version in the directory name, and symlink it to /usr/java

mv jdk1.6.XXX /usr/
cd /usr/
ln -s jdk1.6.XXX java
chown -R root:root jdk*
chown -R root:root java*

Setup system JAVA_HOME and add /usr/java/bin to the default PATH

echo 'declare -x JAVA_HOME="/usr/java"' | sudo tee -a /etc/bash.bashrc
echo 'declare -x PATH="${PATH}:${JAVA_HOME}/bin"' | sudo tee -a /etc/bash.bashrc

Registering the Sun binaries with Ubuntu’s update-alternatives crap keeps /usr/bin/java pointing at this JDK, even if OpenJDK gets accidentally installed

sudo update-alternatives --install /usr/bin/java java /usr/java/bin/java 1
sudo update-alternatives --install /usr/bin/javac javac /usr/java/bin/javac 1
sudo update-alternatives --install /usr/bin/itweb-settings itweb-settings /usr/java/bin/itweb-settings 1
sudo update-alternatives --install /usr/bin/javaws javaws /usr/java/bin/javaws 1
sudo update-alternatives --install /usr/bin/jexec jexec /usr/java/bin/jexec 1
sudo update-alternatives --install /usr/bin/jexec-binfmt jexec-binfmt /usr/java/bin/jexec-binfmt 1
sudo update-alternatives --install /usr/bin/keytool keytool /usr/java/bin/keytool 1
sudo update-alternatives --install /usr/bin/orbd orbd /usr/java/bin/orbd 1
sudo update-alternatives --install /usr/bin/pack200 pack200 /usr/java/bin/pack200 1
sudo update-alternatives --install /usr/bin/policytool policytool /usr/java/bin/policytool 1
sudo update-alternatives --install /usr/bin/rmid rmid /usr/java/bin/rmid 1
sudo update-alternatives --install /usr/bin/rmiregistry rmiregistry /usr/java/bin/rmiregistry 1
sudo update-alternatives --install /usr/bin/servertool servertool /usr/java/bin/servertool 1
sudo update-alternatives --install /usr/bin/tnameserv tnameserv /usr/java/bin/tnameserv 1
sudo update-alternatives --install /usr/bin/unpack200 unpack200 /usr/java/bin/unpack200 1
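If you prefer, those fifteen registrations can be written as a loop. The leading echo here is a safety catch so you can review the generated commands first; drop it to actually run them:

```shell
#!/bin/bash
# Same tools, same priority (1), as the explicit list above.
JDK_TOOLS="java javac itweb-settings javaws jexec jexec-binfmt keytool orbd pack200 policytool rmid rmiregistry servertool tnameserv unpack200"
for TOOL in ${JDK_TOOLS}; do
  echo sudo update-alternatives --install /usr/bin/${TOOL} ${TOOL} /usr/java/bin/${TOOL} 1
done
```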

Confirm the version is installed with
java -version
which java

Good luck. I can’t wait for OpenJDK to deprecate Sun JDK!


Howto: Add an Outlook.com address to your existing hotmail.com or live.com address

  1. Go to http://outlook.com
  2. Login with your hotmail/live account
  3. Click the gear at the top right
  4. Select More Mail Settings
  5. Click “Create an Outlook alias”
  6. Select how you want that mail to arrive (in the Inbox, or a new Folder)
  7. Done! You now have your Outlook.com address AND your old address.


Amazon EBS Provisioned IOPS and EBS connected instances

Amazon just blew my mind (again!) with dedicated IOPS EBS volumes and instances with dedicated connectivity to the EBS network.

The announcement is on their blog, here http://aws.typepad.com/aws/2012/08/fast-forward-provisioned-iops-ebs.html

This is awesome news!
So this basically replaces paying per I/O request (you now pay either for actual I/Os or for dedicated IOPS).
The size of the volume doesn’t matter; it’s $0.10 per provisioned IOPS per month.
At first this didn’t seem reasonable, but now I see that 1TB w/1k IOPS is far fewer total IOPS than 10x100GB volumes each with 1k IOPS.

It looks like the break-even is around 35 sustained IOPS, at which point it’s cheaper to pay for the 100 IOPS/month dedicated minimum than for the actual I/O.
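A quick back-of-the-envelope check of that break-even, assuming $0.10 per million standard I/O requests, the $0.10 per provisioned IOPS-month price, the 100 IOPS provisioned minimum, and a 30-day month, lands in the same neighborhood:

```shell
#!/bin/bash
# Compare the standard-EBS cost of 1 sustained IOPS for a month against
# the minimum provisioned-IOPS bill, and find where they cross.
BREAKEVEN=$(awk 'BEGIN {
  std_per_iops = 30 * 24 * 3600 / 1000000 * 0.10   # $/month for 1 sustained IOPS at $0.10 per 1M I/Os
  piops_min    = 100 * 0.10                        # $/month for the 100 IOPS provisioned minimum
  printf "%.0f", piops_min / std_per_iops
}')
echo "Standard I/O stays cheaper below roughly ${BREAKEVEN} sustained IOPS"
```

That lands near the ~35 figure above; the exact number moves with the month length and how steady the load is.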

Here is my google spreadsheet with the pricing breakdown.


Thanks again, Amazon!


Howto Install Percona with XtraDB MySQL on Ubuntu Via Packages

Percona makes a great MySQL distribution, including XtraDB, their drop-in replacement for InnoDB.

This Bash snippet will install Percona from their provided repository.

gpg --keyserver  hkp://keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
gpg -a --export CD2EFD2A | sudo apt-key add -

if [ ! -f /etc/apt/sources.list.d/percona.list ]; then
     echo "deb http://repo.percona.com/apt $(grep DISTRIB_CODENAME /etc/lsb-release | sed 's/=/ /' | awk '{ print $2 }') main" | sudo tee /etc/apt/sources.list.d/percona.list
     echo "deb-src http://repo.percona.com/apt $(grep DISTRIB_CODENAME /etc/lsb-release | sed 's/=/ /' | awk '{ print $2 }') main" | sudo tee -a /etc/apt/sources.list.d/percona.list
else
     echo "Percona sources exist, $(cat /etc/apt/sources.list.d/percona.list)"
fi
sudo apt-get update
sudo apt-get install libmysqlclient18 percona-server-server-5.5 percona-server-client-5.5


If you already have MySQL, you can back up, stop the existing mysql, remove the packages, remove the config and data files, install Percona, and restore from backup.
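A rough sketch of that migration path follows. DRY_RUN=1 (the default) only prints the commands so you can review them; the dump path, dump options, and package names are assumptions to adapt before running with DRY_RUN=0:

```shell
#!/bin/bash
# Hedged sketch: dump, stop, purge MySQL, install Percona, restore.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "${DRY_RUN}" = "1" ]; then echo "WOULD RUN: $*"; else "$@"; fi; }

run mysqldump --all-databases --single-transaction --routines -r /root/all-databases.sql
run service mysql stop
run apt-get -y remove --purge 'mysql-server*' 'mysql-client*'
run apt-get -y install libmysqlclient18 percona-server-server-5.5 percona-server-client-5.5
run service mysql start
run mysql -e "source /root/all-databases.sql"
```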



Script to backup files & MySQL to S3, and to restore them.

Here is my script that will backup files & MySQL db’s to Amazon S3, and restore them.

Thanks to the team over at Chatterfly for sponsoring this script’s development; check them out.

The use case I wrote this for was having an auto-scaling group that included the latest database. For infrequently changing LAMP sites like my WordPress blog, which include a MySQL database, you run the script to update S3 after making changes. Then when traffic gets heavier, your auto-scaling group launches more instances, they boot up and download the latest files and db, and then join the ELB pool.

It also works well for backing up a non-EC2 Linux server to S3 for Disaster Recovery (DR).

WARNING: Running the start command on the wrapper script, or on the bluesun-setup.sh script will DELETE the local versions of everything the script otherwise backs up. I recommend NEVER adding this script to init on your MASTER server. If you only want backups, run the script only with updateS3. Then, when you need to restore, run it manually with start.


Steps to setup backups to Amazon S3 using this script.

  1. Download script from github, here https://github.com/jonzobrist/Bash-Admin-Scripts/tree/master/bluesun-setup
  2. Copy scripts somewhere on your server, I use /etc/init.d. Mark script bluesun-setup.sh executable.
  3. Create directory /etc/bluesun-setup and put server.conf in it.
  4. Edit /etc/bluesun-setup/server.conf. Read the comments, it is heavily commented and includes use cases.
  5. Download and setup Tim Kay’s excellent Amazon AWS tools. Follow Tim’s instructions for setting up your AWS credentials file.
  6. Set up a mysql user, and put the credentials in root’s my.cnf.
  7. Create an Amazon S3 bucket to store your files.
  8. Push your files to Amazon S3 manually.
  9. Set up a cron job to push them as frequently as you like.

Steps to restore from Amazon S3 using this script.

  1. Follow steps 1-6 from above.
  2. Copy the /etc/bluesun-setup/server.conf from the original server to the restore server.
  3. Restore from S3.

Detailed version of backup to Amazon S3 using this script


cd /etc/init.d
wget https://github.com/jonzobrist/Bash-Admin-Scripts/raw/master/bluesun-setup/bluesun-setup.sh
chmod uog+x bluesun-setup.sh
mkdir -p /etc/bluesun-setup
cd /etc/bluesun-setup
wget https://github.com/jonzobrist/Bash-Admin-Scripts/raw/master/bluesun-setup/server.conf
#Edit server.conf. At a minimum set S3_BUCKET, DIRS and one of MYSQL_FILENAME or MYSQL_DATABASES.
mkdir -p /root/bin
cd /root/bin
curl https://raw.github.com/timkay/aws/master/aws -o aws
chmod uog+x aws
perl aws --install
touch /root/.awssecret
chmod og-rwx /root/.awssecret
#Edit /root/.awssecret. Add your AWS credentials, put the ACCESS KEY ID on the first line, and the SECRET KEY on the second line
s3mkdir backups.example.com
touch /root/.my.cnf
chmod og-rwx /root/.my.cnf
#Create your mysql user, if not using root. A typical grant for a read only backup user could be
# mysql> GRANT SELECT, LOCK TABLES ON *.* TO user@'localhost' IDENTIFIED BY 'password';
#Edit /root/.my.cnf so it has a mysql [client] section that includes your username and password, so it looks like this:
#[client]
#user=user
#password=password
#Now backup to your new S3 bucket with
/etc/init.d/bluesun-setup.sh updateS3
#Check that your files are there
s3ls backups.example.com
#Setup a cron job for more backups
crontab -e
#Add this line & save
0 * * * * /etc/init.d/bluesun-setup.sh updateS3

To restore follow all the same steps except manually run

/etc/init.d/bluesun-setup.sh start

Also, don’t set that up in a cron job 😉

That should be it! Email me jon@jonzobrist.com or ping me on twitter @jonzobrist


My response to Edward Capriolo’s “Myth Busters: Ops edition. Is EC2 is less expensive then running your own gear?”

Edward Capriolo’s (@edwardcapriolo) post may be better titled “Myth Busters: Ops edition. The Misleading Appearance of Amazon AWS Costs.”


Edward, you are absolutely correct. The cost of servers on AWS is more than the cost of servers in real life.

Your final conclusion is absolutely incorrect.

In fairness you are attempting an apples to apples comparison, and concluding that apples are better than oranges.

I suggest you consider apples to oranges comparison and see that the cloud (specifically Amazon’s AWS) is not the sour apples you’re comparing it to.

So, if you compared features on the servers, and, even more so, if you change your application to take advantage of the AWS cloud, the cloud will absolutely crush your comparison in price and scale.

Things that you won’t have on your servers for $175k.

  1. Atomic, multi-data-center, sub-second volume snapshots. EBS volumes rock. Snapshots persisted to S3 are amazing.
  2. Global redundancy. You pay $2k/month for your data center, I’m guessing if it gets hit by a meteor you’re SOL. With AWS for far less than $2k/month we can recover to either coast of the US, Ireland, Singapore, Tokyo or São Paulo in < 1 hour.
  3. Elasticity. They named their platform Elastic Cloud for a reason. You bought 20 servers. How long did that take? A week, a month, a day? On Amazon it took 2 minutes. Need 20 more? 2 more minutes. Don’t want to watch your Cassandra cluster for load and pre-order servers? Set up an auto-scaling group, have 10-60 nodes based on average CPU, or any other metric you want. You don’t only get elastic load scaling; if your app can wait, you get elastic pricing. Don’t care when your job runs, just needs to be sometime between midnight and 6am? Game the spot instance market, save a ton, 50-90%. Have a resource that is usually idle but sometimes needs 60GB RAM? Pay for a micro/medium instance and scale it to a 4XL whenever the spikes hit.

This is all just with EC2, server virtualization.

If you added the components necessary to do this with your own hardware your price would be 4X what it is on AWS.

Now, let’s talk about where you can save 10X. Things AWS excels in that you did not even mention.

  1. S3. I know your local SATA drives or SAN are cheaper. But are they designed for 11 9’s of durability? Compare that cost. Are they secure and globally accessible? Do they have virtually unlimited bandwidth to your alternate site/customers? Can you just keep growing them and only paying for allocated space?
  2. Bandwidth. You did not even mention this. We went from a traditional 5Mbps commit on dual 100Mbps Ethernet, at $800/month under the 95th-percentile billing scam, to no upper limit on burst and tens of dollars per month based on fair, actual usage at pennies per GB.
  3. Actual cloud apps. Ditch your MySQL database and use SimpleDB or the SSD-based DynamoDB. Get infinite scale, price per actual data used, and built-in redundancy.
  4. Support staff. As an IT person this one pains me a bit, until I recall how many bad IT departments I’ve experienced. How much are you paying the monkey who maintains those 20 servers? Your developers do it? What if they could just concentrate on coding? All of this costs more than your straight hardware comparison.
  5. Opportunity cost. You own those servers. If Microsoft or Apple or Google or someone completely new comes up with a new cloud paradigm, I can migrate off AWS in days, if not less. Owned hardware gives you no good option for dealing with one of the most consistent paradigms of our age: change is inevitable.
  6. Development flexibility. You did not price the likely necessary QA and dev servers. Right off the top, that doubles or triples your price if you need a clone of production for test or development. On AWS you automatically clone your running production and test your continuous deploys on real, identical data and setups. It takes some work, but once you’re there it’s a million times better, and you never have to hear “It worked on QA” again! And what about new development? You or one of your engineers wants to ‘try out’ something new. How much is that server? On AWS it’s pennies to tens of dollars to let people play with wild new configurations.
  7. Growth. Already mentioned in the elastic part, but consider how much happier your boss is if sales hit the hockey stick and he didn’t hear complaints of any operations issues, vs. calling you on vacation to yell about “Everything being down” right at the moment you made it big.


There are 2 instances where I think you should not drop everything you are doing and migrate to AWS.

  1. If you are very data heavy (PB scale) you should burst to the cloud; get a 10Gb cross-connect to AWS.
  2. If you are CPU heavy (>80% CPU/server avg) you should burst CPU to the cloud.


I did the same comparisons you did when I first evaluated AWS, and am so glad my boss urged me to try it out. And, yes, it seemed scary and more expensive, but has turned out easy and far less expensive.

Everyone else should refactor their applications and move to the cloud. If you haven’t already started you are behind in the game.

And you can quote me on that.



My updated Cloud Drive Pricing Breakdowns

The old Google pricing WAS 🙁 the cheapest per GB, at $0.25/GB/year.
Microsoft SkyDrive is second at $0.50/GB/year, and the new Google Drive pricing is third, with tiers mostly around $0.60/GB/year.

Here is the link to my “Public Google Spreadsheet With Cloud Storage Options Breakdown”

There are plenty of good articles out there comparing the features of each cloud drive, as many provide nice add ons.
Here are a few

Mashable Tech – Google Officially Launches Google Drive


Telegraph – Google Drive: iCloud, Dropbox, SkyDrive and Box comparison


The Guardian – Google Drive versus Dropbox and the rest: cloud storage compared


CNN – How does Google Drive compare to the competition?


Lifehacker’s article, “Drag-and-Drop To Automatically Encrypt Files in Google Drive Using Automator on Mac” http://lifehacker.com/google-drive/

And they discuss it extensively on Twit.tv’s podcast [which totally rocks] – This Week in Google episode 143 http://twit.tv/show/this-week-in-google/143

So many cloud storages, still too many files, and still too little bandwidth…. My ~3.5TB of files would take 169 Days 21 Hours 47 Minutes 44 Seconds to upload at 2 Mbit/Sec…
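For the curious, that figure works out if you use binary units all the way through (3.5 TiB of data over a 2 binary-megabit/sec link):

```shell
#!/bin/bash
# Total seconds = bytes * 8 bits/byte / bits-per-second, binary units throughout.
TOTAL_S=$(awk 'BEGIN { printf "%d", 3.5 * 2^40 * 8 / (2 * 2^20) }')
D=$((TOTAL_S / 86400))
H=$((TOTAL_S % 86400 / 3600))
M=$((TOTAL_S % 3600 / 60))
S=$((TOTAL_S % 60))
echo "${D} days ${H} hours ${M} minutes ${S} seconds"
# → 169 days 21 hours 47 minutes 44 seconds
```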


Change ports on an Amazon Elastic Load Balancer (ELB)

Of course you need your ELB command line tools, but you also need the IAM CLI tools if you are using an SSL certificate.

There is more detail here for SSL certificates https://makandracards.com/makandra/1673-change-update-ssl-certificate-for-amazon-elastic-load-balancer


You will need your ELB load balancer’s name, find it with elb-describe-lbs

1. Remove the old port if there is one already (in this example, 80 and 443)

./elb-delete-lb-listeners my-inthinc-com-oregon --lb-ports 80

-or- for HTTPS

./elb-delete-lb-listeners my-inthinc-com-oregon --lb-ports 443

2. Add the new port (using your SSL cert name found from iam-servercertlistbypath)

./elb-create-lb-listeners my-inthinc-com-oregon --listener "lb-port=80,instance-port=80,protocol=http"

-or- for HTTPS

./elb-create-lb-listeners my-inthinc-com-oregon --listener "lb-port=443,instance-port=8080,protocol=https,cert-id=arn:aws:iam::322191361670:server-certificate/www.example.com"

That’s it!

You can’t do this in the current AWS Management Console.
