Using SQLite3 & Capistrano & Mongrel Cluster oh my!

Eric Pugh, March 1, 2007

The biggest slam against file-based databases like SQLite is that the database is an actual file embedded in your application, so you have to jump through extra hoops when upgrading. For example, if your production prod.sqlite3 file lives in ./db, then every time you deploy your application with Capistrano you'll replace your production database with a fresh one!

Well, my first attempt to work around this involved excluding the ./db directory from checkout and creating a db directory under shared/. Symlinking shared/db to ./db worked, but it was clumsy and required extra code in my deployment recipe.
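That clumsy version looked roughly like this. This is a sketch assuming Capistrano 1.x conventions; the hook task name and paths are illustrative, not the exact recipe I used:

```ruby
# Sketch of the symlink workaround, assuming a Capistrano 1.x recipe.
# Cap 1.x runs a task named :after_update_code automatically after
# the code is checked out into the new release directory.
desc 'Link the shared SQLite database directory into the new release'
task :after_update_code, :roles => :app do
  # ln -nfs replaces any existing link, so repeated deploys stay idempotent
  run "ln -nfs #{shared_path}/db #{release_path}/db"
end
```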

A light bulb finally went on: the path in database.yml for my production database could be something like:

production:
  adapter: sqlite3
  database: ../../shared/db/prod.sqlite3

Just go up two directories and over into shared/db, and locate the database file there! I tried it out, and it worked great. I ran cap deploy_with_migrations and soon enough had a shared SQLite3 database. Then I fired up the app with cap cold_deploy and everything ground to a halt :( . I was getting an error like "database can't be opened". Eventually, after pulling my hair out, I discovered that while starting Mongrel via ./script/server works great with relative paths to the database, starting with mongrel_rails requires an absolute path:

production:
  adapter: sqlite3
  database: /opt/apps/horseshow/shared/db/prod.sqlite3
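The root cause is that SQLite resolves a relative database path against the server process's current working directory, which differs between the two launch methods. A quick Ruby sketch (the paths are hypothetical, matching the article's layout, and "/" stands in for whatever directory a daemonized mongrel_rails ends up in) shows how the same relative path lands in different places:

```ruby
# The relative path from database.yml and the hypothetical release dir
rel      = "../../shared/db/prod.sqlite3"
app_root = "/opt/apps/horseshow/releases/20070301"

# Started from the release directory (as ./script/server does),
# the relative path resolves to the shared database:
from_app_root = File.expand_path(rel, app_root)
# => "/opt/apps/horseshow/shared/db/prod.sqlite3"

# A daemonized mongrel_rails may run with a different working
# directory, so the same relative path points somewhere else entirely:
from_elsewhere = File.expand_path(rel, "/")
# => "/shared/db/prod.sqlite3"
```

Hence the "database can't be opened" error: the file SQLite was asked to open simply didn't exist at the resolved location. An absolute path sidesteps the working-directory question entirely.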

Victory at last!

To automate the creation of the shared/db directory, just override the setup task in your deploy.rb recipe:

desc 'A setup task to put shared system, log, and database directories in place'
task :setup, :roles => [:app, :db, :web] do
  run <<-CMD
    mkdir -p -m 775 #{releases_path} #{shared_path}/system #{shared_path}/db &&
    mkdir -p -m 777 #{shared_path}/log
  CMD
end
