A bit over a year ago, the OSU Libraries web development team first started toying with Ruby on Rails. Since that time we have established it as our language of choice for any new project. For me, the switch to Ruby has been absolutely outstanding. For the most part, Ruby makes things so much easier. It tends to get out of my way and let me focus on the application I’m trying to write rather than the syntax of the language. Heck, even running a local development copy of a web application is easy! In PHP you need MAMP (or some other Apache installation) set up and configured, pointed at the directory, a new virtual host created…guh! I just want to run my application real quick to check something. In Rails it’s as simple as “rails s” on the command line.

But what happens when you want to take that Rails application you wrote on your laptop and push it to a production web server? It turns out that this is an incredibly complex thing; there are so many options that it’s easy to end up confused. For us, it took a progression from one method to another. I’m by no means saying that our solution is the best, or even that it’s finished yet, but maybe our progression will provide some insight for someone else out there.

Stage 1 – Phusion Passenger

Passenger seems like an obvious solution when you’re coming from PHP. It’s an Apache add-on that you can quickly drop in, point at your Rails path, and go. Seems simple enough, right? We thought so, so we tried it. It turns out that it’s not entirely simple, and it’s also incredibly slow. It has (or at least had) fairly abysmal performance for us and didn’t scale well at all. It is worth noting that this may have been down to misconfiguration on our part; maybe we weren’t doing it right. But generally speaking we have some smart people working here, so if we couldn’t get it right, maybe it’s just too complicated.

Regardless of reason, passenger never seemed to work well for us.  So we tried something else.

Stage 2 – Unicorn

One of the key features of Unicorn is that it loads your Rails environment once and then forks an arbitrary number of worker processes that handle individual requests. This was much faster for us, fairly easy to set up and configure, and had room to grow. Because you set the number of workers yourself, you can scale the app to fit the server you put it on and take advantage of multiple cores or processors.
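
To make that concrete, here is a rough sketch of what a minimal config/unicorn.rb could look like. This isn’t our production file; the paths, worker count, and socket location are placeholder assumptions:

# config/unicorn.rb -- illustrative sketch, not our real config
worker_processes 4                        # tune to the cores available on the server
working_directory "/var/www/myapp"        # hypothetical deploy path

# Load the Rails environment once in the master process,
# then fork the workers from it.
preload_app true

listen "/var/www/myapp/tmp/sockets/unicorn.sock", :backlog => 64
pid    "/var/www/myapp/tmp/pids/unicorn.pid"
timeout 30

before_fork do |server, worker|
  # The master's database connection shouldn't be shared with workers.
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
end

after_fork do |server, worker|
  # Each worker opens its own connection after forking.
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end

With preload_app turned on, only the workers need to be respawned on a restart, which is a big part of what keeps things fast.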

This also solved an issue we had with keeping Rails running at all. One problem we had with Passenger was that, for some reason, it kept shutting down after long periods of inactivity (overnight), and the first page load would have to reload the entire Rails environment all over again. This led to the first page load of the day timing out.

We were also able to add Unicorn start and stop commands to our Capistrano files, making a restart of the app server automatic upon deployment, with zero downtime (when properly configured).
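
For anyone curious what that looks like, here’s the general shape of a Capistrano (version 2 style) task for it. The pid file location and signals are illustrative assumptions, not our exact recipe:

# config/deploy.rb -- illustrative Capistrano 2 sketch
namespace :unicorn do
  desc "Reload Unicorn after a deploy (USR2 re-execs the master)"
  task :restart, :roles => :app do
    run "kill -USR2 $(cat #{shared_path}/pids/unicorn.pid)"
  end

  desc "Stop Unicorn gracefully"
  task :stop, :roles => :app do
    run "kill -QUIT $(cat #{shared_path}/pids/unicorn.pid)"
  end
end

# Hook the reload into the normal deploy flow
after "deploy:restart", "unicorn:restart"

The USR2 signal tells the running Unicorn master to re-exec itself against the newly deployed code while the old workers keep serving requests, which is where the zero-downtime part comes from (assuming your unicorn.rb is written to handle the handoff).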

The one problem we did have with Unicorn was making sure that it started back up properly when the server itself rebooted. Enter Bluepill.

Stage 3 – Bluepill

Bluepill isn’t really a Rails web server at all; it’s a process monitoring tool. For the most part, what it does is bring up various processes and make sure they stay up and stay under control. For example, one of our applications runs three different processes: Unicorn, delayed_job, and clockwork (the latter two being incredibly helpful gems that you should check out). Before Bluepill, we had to set up separate cron jobs to start all of these and keep them going.
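
As a quick aside on clockwork: it’s a scheduler that runs as its own long-lived process, which is exactly why something has to keep it alive. A hypothetical scheduler file might look like this (the job names and classes here are made up for illustration, not from our app):

# config/clockwork.rb -- hypothetical example
require 'clockwork'
require File.expand_path('../environment', __FILE__)  # boot the Rails app

module Clockwork
  # Hand the real work off to delayed_job so the clockwork
  # process itself stays small and never blocks.
  every(1.hour, 'reservations.cleanup') do
    ReservationCleanup.delay.run
  end

  every(1.day, 'reports.nightly', :at => '02:00') do
    NightlyReport.delay.generate
  end
end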

This was not only tedious and annoying, it also led to problems when moving apps from system to system. The problem is that you end up putting application-specific information on the server instead of keeping everything about the application within the application itself. Bluepill bundles all of your app-specific processes into the application. It also allows for monitoring and ensures that processes don’t go out of control, eating up all of your CPU or memory; if they do, they get restarted. Here is an example of our Bluepill configuration file to show you a few of the things it can do:

# Custom trigger: if Bluepill ever gives up and marks a process as
# unmonitored, stop it and schedule a fresh start 20 seconds later.
class NeverUnmonitor < Bluepill::Trigger
  def notify(transition)
    if transition.to_name == :unmonitored
      dispatch!(:stop)
      schedule_event(:start, 20)
    end
  end
end

Bluepill.application('room') do |app|

  app_path = "/room-reservation" 

  # Set environment variables
  rails_root = File.expand_path('../', File.dirname(__FILE__))
  rails_env = ENV['RAILS_ENV'] || 'production'

  app.working_dir = rails_root

  app.process('unicorn') do |process|
    process.pid_file = "#{rails_root}/tmp/pids/unicorn.pid"
    process.start_command = "unicorn_rails -D -c config/unicorn.rb -E #{rails_env} --path #{app_path}"
    process.stop_command = "kill -QUIT {{PID}}"
    process.auto_start = true

    process.start_grace_time = 10.seconds
    process.stop_grace_time = 10.seconds
    process.restart_grace_time = 10.seconds

    process.checks :never_unmonitor

    process.monitor_children do |child_process|
      child_process.checks :cpu_usage, 
                           :every => 10, 
                           :below => 30, 
                           :times => 3
      child_process.checks :mem_usage, 
                           :every => 10, 
                           :below => 200.megabytes, 
                           :times => 3
      child_process.stop_command = "kill -QUIT {{PID}}"
    end
  end

  app.process('delayed_job') do |process|
    process.pid_file = "#{rails_root}/tmp/pids/delayed_job.pid"

    process.auto_start = true
    process.start_grace_time    = 30.seconds
    process.stop_grace_time     = 10.seconds
    process.restart_grace_time  = 40.seconds

    process.checks :never_unmonitor

    process.start_command = "script/delayed_job start"
    process.stop_command = "script/delayed_job stop"
    process.checks :cpu_usage, 
                   :every => 10, 
                   :below => 30, :times => 3
    process.checks :mem_usage, 
                   :every => 10, 
                   :below => 200.megabytes, 
                   :times => 3
  end

  app.process('clockwork') do |process|
    process.auto_start = true

    process.start_grace_time    = 30.seconds
    process.stop_grace_time     = 10.seconds
    process.restart_grace_time  = 40.seconds

    process.checks :never_unmonitor

    process.pid_file = "#{rails_root}/tmp/pids/clockwork.pid"
    process.start_command = "ruby config/clockwork_control.rb start"
    process.stop_command     = "kill -QUIT {{PID}}"
    process.checks :cpu_usage, 
                   :every => 10, 
                   :below => 30
  end
end

For more specific information about what this file does or how to create your own, I would look into the Bluepill documentation; I only include it here to show a few of the features. You can see that a single Bluepill file, contained within the git repository, covers all three processes that need to start and also monitors their CPU and memory usage, restarting them if, for example, they go above 30% CPU usage for three checks in a row. This allows for quick spikes but kills runaway processes. That ability is hugely useful, and no matter what server you are running on, a quick Upstart script that simply runs Bluepill will handle all of the processes associated with the application. No more writing a new Upstart script every time you add a new gem to your application.

Summary

As you can see, our server configuration has been an iterative improvement toward the solution that works best for us. In the future I’m interested in looking at things like Puma and JRuby. We don’t need the boost in performance they seem to give yet, but from the toying I’ve done so far, they seem like a fairly easy switch that provides better performance with less overhead. We here at AD&S may toy with them in the future.

I’ve glossed over some of the technical details here to focus on the overall path we took.  If there is interest I’m happy to delve into more specific details. I welcome your comments and questions.