Earlier this morning I got a chance to sit in on a phone conversation with an engineer from Heroku. The topic of discussion was how we can improve performance for fairgoods.com. These are some of my notes from that conversation.
- Memory usage reports from New Relic are not very useful.
- Look up log-runtime-metrics in the Heroku Dev Center.
- It gives visibility into load and memory usage for running dynos.
- The goal is to keep response times below 500 ms.
- Find the optimal number of Unicorn workers per dyno without constraining resources (3 seems to be the magic number for us).
- Look up a plugin called manual deploy to do rolling restarts. It allows you to restart workers one at a time instead of all at once.
- A load average below 4 is OK; above that you might need to tweak the number of Unicorn workers.
- Heroku currently offers 2X dynos, which have approximately double the memory available.
- They suggested we A/B test 1X vs. 2X dynos to see if that makes a difference for us.
- We had no H12 errors, which indicate request timeouts.
- The suggested Unicorn timeout is 15 seconds; we are currently at 30 seconds.
- If requests need to take longer than that, consider partitioning different parts of the application.
- One place that seems to have longer request times is running reports.
- Break out the admin section of the site into its own application. Different applications can connect to the same database by configuring environment variables. This moves slow processing on the admin side onto separate dynos from the dynos that serve the main website.
- You can also scale up the dynos when interacting with the admin section.
- In our case we could increase the Unicorn worker count.
- Move image assets to CloudFront. Serving images and static content through Heroku is not ideal; we should see a performance increase if we can do this.
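A minimal `config/unicorn.rb` reflecting the numbers from the call (3 workers, 15-second timeout). The `WEB_CONCURRENCY` variable and the ActiveRecord fork hooks are conventional Heroku/Rails boilerplate, not something the engineer specified:

```ruby
# config/unicorn.rb -- sketch; tune the numbers per app.
worker_processes Integer(ENV["WEB_CONCURRENCY"] || 3)  # 3 was our magic number
timeout 15       # suggested timeout (we were at 30 seconds)
preload_app true

before_fork do |server, worker|
  # Master process gives up its DB connection before forking workers.
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  # Each worker reconnects with its own connection.
  defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
end
```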
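Sharing one database between the main site and a split-out admin app comes down to pointing both apps at the same connection string. A sketch with the Heroku CLI, using hypothetical app names (`fairgoods-web`, `fairgoods-admin`):

```shell
# Read the database URL from the main app...
heroku config:get DATABASE_URL -a fairgoods-web

# ...and set the same value on the admin app (paste the value from above).
heroku config:set DATABASE_URL=<value from above> -a fairgoods-admin

# The admin app can then be scaled independently when running heavy reports.
heroku ps:scale web=2 -a fairgoods-admin
```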
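For the CloudFront move, the Rails side is mostly one setting: point the asset host at the CDN so compiled assets and images are served from there instead of the dynos. The `CLOUDFRONT_HOST` environment variable name is an assumption for illustration:

```ruby
# config/environments/production.rb -- sketch, assuming a CLOUDFRONT_HOST
# env var holding the distribution domain (e.g. dxxxx.cloudfront.net).
config.action_controller.asset_host = ENV["CLOUDFRONT_HOST"]
```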
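The log-runtime-metrics notes above can be sketched in code. This is a rough helper, assuming the `sample#key=value` line format shown in the Heroku Dev Center docs; the sample log line itself is made up for illustration:

```ruby
# Parse a log-runtime-metrics line into a hash of metric name => raw value,
# then apply the "load average below 4" rule of thumb from the call.
def parse_runtime_metrics(line)
  line.scan(/sample#(\S+?)=(\S+)/).to_h
end

# Hypothetical sample line, shaped like the Dev Center examples.
line = "source=web.1 dyno=heroku.123 sample#load_avg_1m=2.46 " \
       "sample#memory_total=103.50MB sample#memory_rss=94.70MB"

metrics = parse_runtime_metrics(line)
load_ok = metrics["load_avg_1m"].to_f < 4.0
puts "load_avg_1m=#{metrics['load_avg_1m']} ok=#{load_ok}"
```

If the 1-minute load average regularly crosses 4, that is the signal to revisit the Unicorn worker count per dyno.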
Example Procfile:

```
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
worker: bundle exec rake jobs:work
```