Setting JAVA_HOME on OS X Lion

I recently upgraded to OS X 10.7 (Lion) and found that my JAVA_HOME was no longer set correctly. I discovered this when my command-line EC2 tools failed.

If you currently have your JAVA_HOME set to something like “/Library/Java/Home”, under OS X Lion you’ll want to change that to $(/usr/libexec/java_home), thus:

export JAVA_HOME=$(/usr/libexec/java_home)

The script /usr/libexec/java_home outputs the true location of JAVA_HOME, so unless that script goes away, this should prove upgrade-safe.
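To make the setting persist and confirm it took effect, you can add the export to your shell profile; a minimal sketch, assuming bash (adjust the profile file for your shell):

# Append the export to your bash profile so new shells pick it up
echo 'export JAVA_HOME=$(/usr/libexec/java_home)' >> ~/.bash_profile
source ~/.bash_profile

# Verify: both lines should print the same JDK path
echo $JAVA_HOME
/usr/libexec/java_home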

I found this at http://steveswinsburg.wordpress.com/2011/07/22/java_home-on-os-x-lion/

Postfix SMTP AUTH w/TLS

The sysadmins at TnR Global, LLC explain how to get email delivered successfully from EC2 instances, instead of having it caught by Spamhaus and others.

One of the widely discussed issues with Amazon EC2 instances is the inability to reliably send email from them. In all too many cases, email from EC2 instances is automatically categorized as spam by the various relay databases, and by many ISPs and carriers. There are several solutions, the most common being a smarthost setup using either an external smarthost SMTP service, such as http://authsmtp.com, or an existing SMTP server within our own infrastructure. Continue reading “Postfix SMTP AUTH w/TLS”
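As a rough sketch of the smarthost approach, the Postfix side amounts to a relayhost plus SASL credentials over TLS; the hostname, port, and user:password pair below are placeholders:

# Relay all outbound mail through the smarthost, authenticating over TLS
# (smtp.example.com:587 and the credentials are placeholders)
postconf -e 'relayhost = [smtp.example.com]:587'
postconf -e 'smtp_sasl_auth_enable = yes'
postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
postconf -e 'smtp_sasl_security_options = noanonymous'
postconf -e 'smtp_use_tls = yes'

# Store the credentials, build the hash map, and lock the files down
echo '[smtp.example.com]:587 user:password' > /etc/postfix/sasl_passwd
postmap /etc/postfix/sasl_passwd
chmod 600 /etc/postfix/sasl_passwd*
/etc/init.d/postfix reload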

Will Amazon AWS save me money?

One of the first questions we asked when deciding to use Amazon’s AWS services was: will we save money?

At first, your EC2 servers may be simply architected, perhaps small instances. Once a more serious commitment is made to a more robust architecture at Amazon, additional costs are introduced. For example, take a small instance, add a two-disk 100GB EBS volume with RAID0 plus monitoring and a static IP, and the cost goes from the current ~$68 per month per system to ~$88 per month per system. Bump the instances to large instances (likely needed for any server running MySQL, or any other equally intensive application) and that cost rises to ~$265 per instance. Add in the costs of additional services like bandwidth, static IPs, CloudWatch, etc., and the costs can quickly escalate. Of course, upfront payments for Reserved Instances can drastically reduce these costs.

However, I think the savings in development and deployment costs far outweigh the narrower gap between the cost of physical servers and AWS servers, and the real monthly recurring cost (MRC) of the AWS servers will likely be lower for a given amount of computing resources.

So, can you save money? Yes. In some cases, it will be a direct, apples-to-apples savings of hard dollars. In other cases, the agility gained will provide the greatest savings. In most cases, a combination of both will drive your cost savings.

To learn more about how operating in the cloud can save your company money, contact us for a free consultation.

Leveraging your assets: Repurposing a physical server as an OpenVZ virtual server

As an active system administrator, part of my job is determining which systems require decommissioning due to the age of the OS or for other reasons. When readying a server for retirement, we’ll take the opportunity to move and upgrade the services running on that server. Often, we are then left with a perfectly good piece of hardware that has already been paid for and is still a valuable asset. A great way to leverage this equipment is virtualization, specifically OpenVZ.

Oftentimes, servers are underutilized, especially for development work or lower-impact applications. Rather than dealing with multiple users on one system, Apache virtual hosts for multiple websites, worries about secure file access, or one user or customer hogging a huge share of resources, we have found that creating multiple virtual servers using OpenVZ is an ideal solution. I won’t delve into OpenVZ deployment other than to briefly note that on our CentOS and RedHat servers, installation is as simple as adding the correct repository and installing via yum (see here for more info; a sketch follows below). Once installed, a quick reboot into the new kernel and you are ready to roll. We are running 45-50 virtual servers (VEs, or containers) on one of our two quad-core CPU, 8GB RAM servers, with plenty of room to spare. I recommend running ‘vzsplit’ to generate a good configuration basis for your VEs.
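For reference, the install looks roughly like this on CentOS/RHEL; package and repository names come from the OpenVZ wiki and may vary by release:

# Add the OpenVZ yum repository and GPG key (CentOS/RHEL)
cd /etc/yum.repos.d
wget http://download.openvz.org/openvz.repo
rpm --import http://download.openvz.org/RPM-GPG-Key-OpenVZ

# Install the OpenVZ kernel and tools, then reboot into the new kernel
yum install vzkernel vzctl vzquota
reboot

# Generate a baseline VE config that splits this host's resources
# across ~50 containers; writes /etc/vz/conf/ve-vps.split.conf-sample
vzsplit -n 50 -f vps.split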

Once we have installed and configured OpenVZ on our new server, we are then able to deploy a large number of VEs for individual users or customers. Each VE provides the user with the ability to have root access, update and install their own software, deploy their own applications, etc. To the user, they are on their own complete system. Should their application misbehave, it won’t affect the others on the system.
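Provisioning one of those VEs is only a few vzctl calls; the container ID, template, addresses, and password below are made up for illustration:

# Create container 101 from an OS template, using the vzsplit profile
vzctl create 101 --ostemplate centos-5-x86_64 --config vps.split

# Give it an identity and an address, then start it
vzctl set 101 --hostname dev1.example.com --save
vzctl set 101 --ipadd 192.168.1.101 --nameserver 192.168.1.1 --save
vzctl start 101

# Set a root password so the user can log in to "their" system
vzctl set 101 --userpasswd root:changeme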

Additionally, many resources can be adjusted on the fly. Running out of disk space? Increase it on the fly. Need more memory? Increase on the fly. Live resource management such as this is a very powerful way to leverage your hardware.
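Both adjustments are single vzctl calls against the running container; the soft:hard limits here are illustrative:

# Grow container 101's disk quota to 20GB soft / 22GB hard, live
vzctl set 101 --diskspace 20G:22G --save

# Raise its memory allowance (privvmpages is counted in 4KB pages,
# so 262144:287744 is roughly a 1GB barrier and a ~1.1GB limit)
vzctl set 101 --privvmpages 262144:287744 --save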

We are currently using OpenVZ for CMS development, custom programming development, building custom rpms, running websites, and various other testing where we need easily deployed servers which may or may not be needed for extended periods of time.

Virtualizing our own equipment in this way makes great economic sense for several reasons. We are using a server which we already own, thus helping us increase our “green” sensibilities by keeping this system out of the landfill. We eliminate the need for more servers for development work. We can even host paying customers, thus deriving income from the hardware. Using our own equipment also helps us keep costs lower by lessening the need to move data and applications offsite to providers such as Slicehost. Slicehost has its place, and in fact we use them for certain applications, but they do not provide the versatility necessary for much of our development work.

In summary, by leveraging existing, underutilized, or potentially retired hardware, you can save money through reduced hardware costs, derive income, and help the environment. Additionally, the agility we gain in development and deployment simply adds another layer to the economic advantages. That sounds like a good plan to me!

Sidekick Danger – It’s not the cloud, it’s the approach

I recently saw the headline, “T-Mobile and Microsoft/Danger data loss is bad for the cloud”, and, as an admin who works with cloud technology on a daily basis, viewed the headline with some concern. However, after reading the article itself, my only thought was, “What does this have to do with the cloud?” Reading through, we find that Microsoft/Danger stores your phone data (contacts, photos, etc.) on its servers, and that the phone needs to be in constant contact with those servers in order to maintain service and data. Unfortunately, the servers crashed, and all of the data was lost. Turn off your phone, lose all your data. Yet this is exactly what the Sidekick service promises to protect you from, and it failed.

The problem with blaming this on the “cloud” is that, while your cell phone and the Microsoft/Danger servers technically form a “cloud”, the failure lies with the servers and with those who administer them. It doesn’t matter whether those servers are virtual or physical: if there is no disaster recovery plan in place, and if that disaster recovery plan has not been tested, data will be lost. Your data. This is not a shortcoming of cloud computing; it is a result of depending on others to maintain your data. It should also make us cautious about depending on external providers over the network to always be available. Services stop. Power fails. Disks die. Routing interruptions happen.

This is just network computing. But if the people (or companies) behind it all don’t do their own due diligence, disasters like this, and worse, will continue to happen.

Amazon EC2 system restore

Recently, one of our small EC2 instances failed. While we had Nagios monitoring it, Nagios only provides alerts when services fail, or when the host goes down. In this case, the failure was on Amazon’s side: the hardware where our instance resided was failing.

Continue reading “Amazon EC2 system restore”

Migrating an OpenVZ Virtual Machine

One of the great features of OpenVZ is the ability to easily migrate a virtual machine (VM) to another server. While identifying the best methods to perform this task recently, I read about two tools for the job: vzdump and vzmigrate. Continue reading “Migrating an OpenVZ Virtual Machine”
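In its simplest form, vzmigrate pushes a container to another OpenVZ host over SSH, while --online attempts a live migration; the container ID and hostname here are placeholders:

# Offline migration: stop, copy, and restart container 101 on the target
vzmigrate newhost.example.com 101

# Live migration: checkpoint and resume on the target, minimizing downtime
vzmigrate --online newhost.example.com 101

# Alternatively, "vzdump 101" produces a tarball that vzrestore can
# re-create on any OpenVZ host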

Transparent MySQL migration using MySQL proxy

How can we transparently migrate MySQL from one server to another when we don’t want to disrupt end users? That was the question posed as we came to the final phase of decommissioning a server. We have transitioned almost all services away from the older server (CHIMAY), but there is one external cron job, not under our control, that we can see in the logs generating several MySQL queries. Therefore, we need to transparently move MySQL through another server (TECATE). Here’s the scenario: Continue reading “Transparent MySQL migration using MySQL proxy”
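One way to make the move transparent is to stop mysqld on CHIMAY and run MySQL Proxy in its place, forwarding the stray queries to TECATE; a minimal sketch, with the listen port and backend hostname assumed:

# On CHIMAY, with the local mysqld stopped: listen on the standard
# MySQL port and forward every connection to TECATE
mysql-proxy \
  --proxy-address=0.0.0.0:3306 \
  --proxy-backend-addresses=tecate.example.com:3306 \
  --daemon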

Identifying bottlenecks on your server

As heavy users of the LAMP stack for our applications, we naturally find that various systems do not always perform as expected. We have one webserver (part of an application cluster) that often experiences load spikes that seem to be unrelated to the actual traffic on the machine. For example, we may have 80 httpd requests, yet the load on the machine is 8 or 9. So how do we begin to identify where the bottleneck exists? Typically, those bottlenecks can be narrowed down to two places: CPU and I/O (disk). We can check our system with a couple of tools to identify where the problem is: vmstat and iostat. Continue reading “Identifying bottlenecks on your server”
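A quick first pass with both tools looks like this; the five-second interval is just a reasonable sampling choice:

# Sample system activity every 5 seconds: watch the 'r' run queue,
# swap traffic (si/so), and the CPU 'wa' (I/O wait) column
vmstat 5

# Extended per-device stats every 5 seconds: consistently high %util
# and await values point to a disk I/O bottleneck rather than CPU
iostat -x 5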