Rails performance and caching
While it may not be all that obvious, this entire website is built within a Ruby on Rails application. One of the main reasons I did this was to get some real-world deployment experience, and while it’s only been a few days since launch, I’m learning fast. Today, I added caching to help some performance issues; here is what I’ve learned: caching rocks!
When I first launched the site last Saturday it was without caching, and it showed. I had Remote Desktop set up on one screen watching the CPU usage of Zelda (the Mac mini that runs this site) and was using my other computer to browse. Every time I would request a page, CPU on Zelda would hit 100% for approximately three seconds. This is three seconds under Apache+FastCGI mind you. Ouch.
So what on earth am I doing that would peg the processor like that? Nothing too crazy (at least I don't think so). For many pages I'm just using an internal helper that takes `.markdown` plain-text files from a `content` folder that mirrors the site hierarchy, processes them, and displays the page. Examples include home, services, rate, and about. I do suspect that the Ruby implementation of Markdown isn't as fast as others I've used, but I don't think it's the lone cause by itself. For the blog, we're doing database fetches, initializing `entries` that own `comments`, and then laying it all out. Blog entries are written in Markdown too, so again there is a conversion process.
While I was proud of my new site, I dared not tell people to visit it for fear of crushing poor little Zelda. Thus, caching became my number-one to-do.
It turns out caching in Rails comes in three basic flavors: you can cache the entire page, the action, or fragments that you define manually. I'm now using page caching for those plain static pages (home, services, rate, and so on), and it works well. When you ask Rails to cache a page, it writes a static HTML file inside `/public`; when a visitor requests that page again, they never even hit the Rails environment, because Apache just serves them the static file.
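In Rails 1.0 (current as of this writing), page caching is a one-liner in the controller. A minimal sketch, assuming a hypothetical `PagesController` handling those static pages (the controller and action names are illustrative, not my actual code):

```ruby
class PagesController < ApplicationController
  # Cache these actions as static HTML files under /public.
  # After the first request, Apache serves the file directly
  # and the Rails stack is never touched again.
  caches_page :home, :services, :rate, :about

  def home
    # render content/home.markdown through the Markdown helper
  end
end
```

The flip side is that when one of those pages changes, the cached file has to be expired by hand, e.g. with `expire_page :action => "home"` (or by simply deleting the file from `/public`).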
For the blog and its subsections, I'm using fragment caching. Here, Rails caches named sections of the template. In the controller, you check whether a fragment with a given name already exists; if it does, you skip the database queries and other expensive calls, and when the template renders, Rails substitutes the static HTML sitting on the file system for the area that would have used the results of those calls. Finally, you need to invalidate certain caches when you edit a blog entry, when someone adds a comment, and so on. This is where my current code base gets really clunky: I have cache invalidators all over the place. What I really need to do is set up some sweepers, which are objects that observe models for significant changes and invalidate caches accordingly. They're explained in the Rails book; I just haven't had a chance to play with them yet.
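As a sketch of how those pieces fit together (the controller, template, and model names here are hypothetical, not my actual code):

```ruby
# app/views/blog/index.rhtml -- the template wraps the expensive
# part in a named cache block:
#
#   <% cache("blog_index") do %>
#     <%= render :partial => "entry", :collection => @entries %>
#   <% end %>

class BlogController < ApplicationController
  cache_sweeper :entry_sweeper, :only => [:create, :update, :destroy]

  def index
    # Skip the database work entirely when the fragment already
    # exists; the view's cache block renders the stored HTML instead.
    unless read_fragment("blog_index")
      @entries = Entry.find(:all, :order => "created_at DESC")
    end
  end
end

# A sweeper observes the model and expires caches in one place,
# instead of scattering expire_fragment calls around the code base.
class EntrySweeper < ActionController::Caching::Sweeper
  observe Entry

  def after_save(entry)
    expire_fragment("blog_index")
  end

  def after_destroy(entry)
    expire_fragment("blog_index")
  end
end
```

The appeal of the sweeper is exactly the clunkiness it removes: the invalidation logic lives next to the model it watches rather than in every controller action that can touch an entry.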
So, while not perfectly implemented (yet), the caches are helping:
- Plain pages before caching => ~0.18 seconds
- Plain pages after caching => as fast as Apache can serve a static file
- Blog pages before caching => ~0.50 seconds
- Blog pages after caching => ~0.02 seconds (25 times faster)
- RSS feeds before caching => ~0.50 seconds
- RSS feeds after caching => ~0.009 seconds (55 times faster)
My next big performance bottleneck is Markdown. I don't care as much about Markdown speed on the server, but I'm allowing people to use Markdown in blog comments and then showing a live preview of what their comment will look like. In development, I was refreshing the preview once per second and it felt very nice. In production, the server can't keep up, so I've been forced to throttle the live preview to a 5-second update rate, which just isn't as responsive as I'd like it to be.
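For what it's worth, the throttle itself is a single parameter in the Rails JavaScript helpers. A sketch, assuming a comment form field with the DOM id `comment_body`, a `preview` div, and BlueCloth as the Markdown library (all names here are assumptions for illustration):

```ruby
# In the comment form template (.rhtml), the live preview is one
# helper call; :frequency is the throttle discussed above:
#
#   <%= observe_field "comment_body",
#         :frequency => 5,                          # seconds between polls
#         :update    => "preview",                  # DOM id to fill in
#         :url       => { :action => "preview" },
#         :with      => "body" %>                   # post value as params[:body]

# A matching controller action renders the Markdown source to HTML:
class CommentsController < ApplicationController
  def preview
    render :text => BlueCloth.new(params[:body]).to_html
  end
end
```

Dropping `:frequency` back to 1 restores the per-second refresh, so the real fix is making the Markdown conversion itself cheaper, not the polling machinery.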
Posted on: December 15, 2005 – 4:04 PM