Optimising Memory Usage In Ruby and Rails
What’s the difference between these two lines of Ruby code:
Post.all.count
# VS
Post.all.to_a.count
If you have experience with Ruby on Rails, you know that the logs give away a big hint here:
DEBUG -- : (1.8ms) SELECT COUNT(*) FROM "posts"
# VS
DEBUG -- : Post Load (246.7ms) SELECT "posts".* FROM "posts"
In the first case, we simply execute a database count; in the second, we load every post from the database into memory. The logs alone make it clear that the first performs far better than the second. This is one of the classic optimisations we’re taught when we start out with Rails - don’t do in memory what can be done in the database.
While this example is straightforward, I have seen code in the wild that does far more in memory than it should, when the work could have been pushed down to the database.
Learning About Memory Profiling in Ruby
I wanted to get a better grip on how to profile memory in Ruby, so I decided to explore Sam Saffron’s memory_profiler gem to see how I could use it to debug potential memory issues in future.
The gem reports two figures: allocated memory and retained memory. Allocated is the total memory allocated while the block runs - the garbage collector is disabled while the profiler is running, so nothing is freed mid-measurement. Retained is the memory that is still occupied after the block has finished.
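Here’s a minimal sketch of the distinction (the $kept global is just a contrived way to keep a reference alive past the block):

require 'memory_profiler'

report = MemoryProfiler.report do
  # Allocated but not retained: nothing references these strings after the block
  10.times { 'transient' * 100 }

  # Allocated and retained: the global variable keeps these strings alive
  $kept = Array.new(10) { 'persistent' * 100 }
end

report.pretty_print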
I expected a memory profiler gem to be a complex codebase that leveraged C extensions to do some fancy low-level stuff. This gem is more straightforward than that - the codebase is pure Ruby and very easy to read. Looking at lib/memory_profiler/reporter.rb, you can see it calls GC.start a bunch of times (presumably to start with a clean slate). It then disables the garbage collector and calls ObjectSpace.trace_object_allocations_start to start collecting stats about memory usage.
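The same core APIs are available to poke at directly. Here’s a rough sketch of the building blocks (not how the gem itself is structured):

require 'objspace'

# Clean slate, then stop the GC from interfering with the measurement
GC.start
GC.disable

ObjectSpace.trace_object_allocations_start

str = 'hello' + ' world' # force a fresh string allocation

ObjectSpace.trace_object_allocations_stop

# Ruby can now tell us where the object was allocated
puts ObjectSpace.allocation_sourcefile(str)
puts ObjectSpace.allocation_sourceline(str)

GC.enable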
Scripting a Test
I decided to write a single-script Rails application based on the Rails bug report template. Using this template, I was able to create 100,000 fake posts in a SQLite DB. I check whether the DB file already exists on each run, so I don’t have to repopulate the 100,000 records every time - that takes a while!
require 'bundler/inline'

gemfile(true) do
  source 'https://rubygems.org'

  gem 'rails'
  gem 'sqlite3'
  gem 'memory_profiler'
end

require 'active_record'
require 'logger'

# This connection will do for database-independent bug reports.
DB_FILE_PATH = './count_memory.db'
ActiveRecord::Base.establish_connection(adapter: 'sqlite3', database: DB_FILE_PATH)
ActiveRecord::Base.logger = Logger.new(STDOUT)

class Post < ActiveRecord::Base
end

# Only build the schema and seed the fake posts on the first run
unless File.file?(DB_FILE_PATH)
  ActiveRecord::Schema.define do
    create_table :posts, force: true do |t|
      t.string :title
      t.string :body
    end
  end

  100_000.times { Post.create!(title: 'title', body: 'body') }
end
I can then run two tests at the end of my script:
require 'memory_profiler'
require 'fileutils'

# pretty_print writes straight to to_file, so make sure the directory exists
FileUtils.mkdir_p('./reports')

MemoryProfiler.report do
  Post.all.count
end.pretty_print(top: 20, scale_bytes: true, normalize_paths: true, to_file: './reports/count_db.txt')

MemoryProfiler.report do
  Post.all.to_a.count
end.pretty_print(top: 20, scale_bytes: true, normalize_paths: true, to_file: './reports/count_memory.txt')
Analysing Results
The report spat out by MemoryProfiler opens with some headline figures, and the sections that follow break down the memory and strings allocated and retained (by gem, file, location and class).
Let’s start by looking at the headline figures. It’s no surprise that the in-memory version allocates significantly more memory:
Total allocated: 333.84 kB (2852 objects)
Total retained: 60.20 kB (443 objects)
# VS
Total allocated: 65.94 MB (901305 objects)
Total retained: 19.53 kB (87 objects)
Interestingly, the database query seems to retain around three times as much memory - I’m not sure why that is.
Potential Solutions
If you run into these problems in Rails, there are a couple of patterns you can leverage.
Use Active Record
Most often this problem occurs when developers aren’t familiar with the power of Active Record’s query interface. If you’re unfamiliar, thoughtbot have a great free course that can get you up to speed.
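As a contrived example using our posts table from earlier, a count that filters in Ruby can usually be pushed down into SQL:

# In memory: loads every post, instantiates the models, then filters in Ruby
Post.all.to_a.select { |post| post.title == 'title' }.count

# In the database: a single SELECT COUNT(*) with a WHERE clause
Post.where(title: 'title').count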
Batch Your Queries
If you must load a large number of records for processing, you can reduce memory consumption by processing those records in batches. find_in_batches will break your query into batches (of 1000 by default) and pass each batch to the given block. Similarly, find_each batches your query underneath but yields individual records to the block.
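For our posts example, that looks something like this - only one batch of records is in memory at a time (process here is a stand-in for your own logic):

# Yields arrays of up to 1,000 posts at a time
Post.find_in_batches(batch_size: 1_000) do |batch|
  batch.each { |post| process(post) }
end

# Same batching underneath, but yields one post at a time
Post.find_each(batch_size: 1_000) do |post|
  process(post)
end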
Lazy Enumerators
One optimisation you may want to try outside of Active Record is lazy enumerators. Lazy enumerators work like regular enumerators, except they only evaluate as much as they need to. Honeybadger have a great write-up of how this can be used to save memory when processing files.
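As a rough sketch (assuming a large.log file exists), the lazy chain below reads only as many lines as it needs to find the first ten errors, rather than loading the whole file into an array first:

File.open('large.log') do |file|
  # Without .lazy, select would read every line before first(10) ran
  errors = file.each_line.lazy.select { |line| line.include?('ERROR') }.first(10)
  puts errors
end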