Using Enumerations to Make a Faster Activity Feed in Rails

A month ago the bright guys over at Thoughtbot wrote a blog post about using polymorphism to make a better activity feed in Rails. Now, I have great respect for the work that Thoughtbot does, and in particular I'm a huge fan of their blog content. However, on this specific issue I have to take exception. As it turns out, we've stared this issue in the face over at GiveGab, and while the solution that Thoughtbot proposes may seem tempting, there are some serious issues to consider. I'll explain my solution shortly, but before that I'd like to take a step back and talk about activity feed design in the abstract.

Fanning In or Out

First off, I highly recommend Thierry Schellenbach's post over at High Scalability. It does a great job giving an overview of the design decisions that go into building an activity feed. However, the decision I particularly want to call out is whether to fan out on write or fan in on read. That is to say, do you materialize the activity feed on write by updating a table as the activities happen, or do you materialize the activity feed on read by selecting from all the tables that populate the activity feed? There is the option of a hybrid approach, but the complexity prevents this implementation from making sense in most cases.

Where Thoughtbot Went Wrong

In the Thoughtbot post, they walk through an example of an ebay like site where users can both bid and comment on items. From there, they advocate creating a fan out on write feed. In the post, they juxtapose this with a fan in on read example which they label as an “anti-pattern”. Unfortunately, the fan in on read example that they refer to is the most naive implementation that one could conceive of.

class User < ActiveRecord::Base
  def recent_activities(limit)
    # eagerly load everything, then sort and trim in Ruby
    [comments, items.flat_map(&:comments), bids].
      flatten.sort_by(&:created_at).reverse.first(limit)
  end
end

This code suffers from two egregious issues. First, the entirety of the activity sources are being eagerly loaded from the database and turned into Ruby objects. This obviously won't scale. Second, once all the records are loaded they are then sorted in Ruby. It's quite clear that we should be pushing most of this work down to the database. However, instead of attacking these problems, Thoughtbot adds another problem: significant additional write load on the database. Without the feed, every bid used to result in one record being created. With the feed, a new bid now creates both a bid record and an activity record. The comments situation is even worse, since the number of commenters on popular items is unbounded and every new comment triggers a write for every other commenter on the thread. I would argue this won't scale beyond a trivial example either.

A Better Approach

We've seen that a fan out on write approach can cause some serious write time pain, so let's flip back to the fan in on read approach. The initial naive implementation that Thoughtbot held up is flawed, but can we improve on it? Well, as it turns out we can, and by a large margin. The first step is to stop eagerly loading all of the results. This is as simple as using the find_each method on each model and allowing Rails to batch the database loads for us behind the scenes. We can even go one step further and convert the result to an enumerator using enum_for. We will see how this helps in a second. The second issue with the naive approach was sorting, which we can also push to the db in part by using a simple order(:created_at).

class User < ActiveRecord::Base
  def recent_activities(limit)
    c_stream = comments.order(:created_at).to_enum(:find_each)
    i_stream = items.comments.order(:created_at).to_enum(:find_each)
    b_stream = bids.order(:created_at).to_enum(:find_each)
    # ...
  end
end

However, we still have to contend with sorting between the various streams. This is where the magic comes in. We have a collection of ordered streams of records, so let’s create a simple aggregator that pulls the next newest record from amongst all the streams and continues in a loop until it has pulled enough records to satisfy the limit parameter.

class ActivityAggregator
  def initialize(streams)
    @streams = streams
  end

  def next_activities(limit)
    activities = []
    while (activities.size < limit) && more_activities?
      activities << next_activity
    end
    activities
  end

  private

  # Pull the single newest record from among the live stream heads.
  def next_activity
    @streams.select{ |s| has_next?(s) }.sort_by{ |s| s.peek.created_at }.last.next
  end

  def more_activities?
    @streams.any?{ |s| has_next?(s) }
  end

  # Enumerator#peek raises StopIteration once a stream is exhausted.
  def has_next?(stream)
    stream.peek
    true
  rescue StopIteration
    false
  end
end

When we put this all together, we end up with an elegant solution that lazily loads data as needed from the database and pushes most of the heavy sorting down to the database level. This solution results in a small number of database queries and can be further tuned by matching the batch size of the find_each to more closely align with the limit parameter. Moreover, it greatly reduces the write load on the database and keeps everything humming along much more smoothly.

class User < ActiveRecord::Base
  def recent_activities(limit)
    c_stream = comments.order(:created_at).to_enum(:find_each)
    i_stream = items.comments.order(:created_at).to_enum(:find_each)
    b_stream = bids.order(:created_at).to_enum(:find_each)
    ActivityAggregator.new([c_stream, i_stream, b_stream]).next_activities(limit)
  end
end
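
To see the merge in isolation, here is a toy sketch with plain Ruby enumerators and a made-up Activity struct (no database involved); the peek/StopIteration mechanics are the same ones the aggregator relies on:

```ruby
Activity = Struct.new(:body, :created_at)

# Two streams already sorted newest-first, as a feed query would return them.
comments = [Activity.new("c2", 4), Activity.new("c1", 1)].to_enum
bids     = [Activity.new("b2", 3), Activity.new("b1", 2)].to_enum

# Repeatedly take the record from whichever stream's head is newest.
# Enumerator#peek looks ahead without consuming; it raises StopIteration
# once a stream is exhausted.
def merge_newest_first(streams)
  merged = []
  loop do
    live = streams.select { |s| (s.peek; true) rescue false }
    break if live.empty?
    merged << live.max_by { |s| s.peek.created_at }.next
  end
  merged
end

timeline = merge_newest_first([comments, bids])
timeline.map(&:body)  # => ["c2", "b2", "b1", "c1"]
```

Because each `next` only advances one enumerator by one element, records are pulled lazily; nothing past the requested window ever needs to be loaded.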

11 Best Practices for Low Latency Systems

It's been 8 years since Google noticed that an extra 500ms of latency dropped traffic by 20% and Amazon realized that 100ms of extra latency dropped sales by 1%. Ever since then, developers have been racing to the bottom of the latency curve, culminating in front-end developers squeezing every last millisecond out of their JavaScript, CSS, and even HTML. What follows is a random walk through a variety of best practices to keep in mind when designing low latency systems. Most of these suggestions are taken to the logical extreme, but of course tradeoffs can be made. (Thanks to an anonymous user for asking this question on Quora and getting me to put my thoughts down in writing.)

Choose the right language

Scripting languages need not apply. Though they keep getting faster and faster, when you are looking to shave those last few milliseconds off your processing time you cannot have the overhead of an interpreted language. Additionally, you will want a strong memory model to enable lock free programming so you should be looking at Java, Scala, C++11 or Go.

Keep it all in memory

I/O will kill your latency, so make sure all of your data is in memory. This generally means managing your own in-memory data structures and maintaining a persistent log, so you can rebuild the state after a machine or process restart. Some options for a persistent log include Bitcask, Krati, LevelDB, and BDB-JE. Alternatively, you might be able to get away with running a local, persisted in-memory database like Redis or MongoDB (with memory >> data). Note that you can lose some data on crash due to their background syncing to disk.
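
To make the idea concrete, here is a minimal sketch (in Ruby, with made-up names, purely for illustration) of the "memory plus a persistent log" pattern: reads are served from a hash, every write is appended to a log file, and a restart rebuilds the hash by replaying the log:

```ruby
require 'json'
require 'tmpdir'

# In-memory store backed by an append-only log. All reads hit the hash;
# the log exists only so state survives a process restart.
class LoggedStore
  def initialize(log_path)
    @log_path = log_path
    @data = {}
    replay
  end

  def put(key, value)
    File.open(@log_path, 'a') { |f| f.puts({ 'k' => key, 'v' => value }.to_json) }
    @data[key] = value
  end

  def get(key)
    @data[key]
  end

  private

  # Rebuild in-memory state by replaying every logged write in order.
  def replay
    return unless File.exist?(@log_path)
    File.foreach(@log_path) do |line|
      rec = JSON.parse(line)
      @data[rec['k']] = rec['v']
    end
  end
end

path = File.join(Dir.tmpdir, "logged-store-demo-#{Process.pid}.log")
LoggedStore.new(path).put('user', 'lindsey')
restarted = LoggedStore.new(path)  # simulate a process restart: state is rebuilt
```

A production system would also snapshot and truncate the log so replay stays fast, but the shape is the same.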

Keep data and processing colocated

Network hops are faster than disk seeks but even still they will add a lot of overhead. Ideally, your data should fit entirely in memory on one host. With AWS providing almost 1/4 TB of RAM in the cloud and physical servers offering multiple TBs this is generally possible. If you need to run on more than one host you should ensure that your data and requests are properly partitioned so that all the data necessary to service a given request is available locally.

Keep the system underutilized

Low latency requires always having resources to process the request. Don’t try to run at the limit of what your hardware/software can provide. Always have lots of head room for bursts and then some.

Keep context switches to a minimum

Context switches are a sign that you are doing more compute work than you have resources for. You will want to limit your number of threads to the number of cores on your system and to pin each thread to its own core.

Keep your reads sequential

All forms of storage, whether rotational, flash based, or memory, perform significantly better when used sequentially. When issuing sequential reads to memory you trigger the use of prefetching at the RAM level as well as at the CPU cache level. If done properly, the next piece of data you need will always be in L1 cache right before you need it. The easiest way to help this process along is to make heavy use of arrays of primitive data types or structs. Following pointers, either through use of linked lists or through arrays of objects, should be avoided at all costs.

Batch your writes

This sounds counterintuitive, but you can gain significant improvements in performance by batching writes. However, there is a misconception that this means the system should wait an arbitrary amount of time before doing a write. Instead, one thread should spin in a tight loop doing I/O. Each write will batch all the data that arrived since the last write was issued. This makes for a very fast and adaptive system.
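
Here is one way that pattern might look, sketched in Ruby with a plain Queue standing in for the real I/O layer (names are illustrative): a single writer loops, and each pass drains everything that arrived while the previous write was in flight:

```ruby
require 'thread'

queue   = Queue.new  # producers push individual writes here
batches = []         # stand-in for the actual combined I/O calls

writer = Thread.new do
  loop do
    batch = [queue.pop]                    # block until at least one item
    batch << queue.pop until queue.empty?  # then drain everything pending
    stop = batch.delete(:stop)
    batches << batch unless batch.empty?   # one combined "write" per iteration
    break if stop
  end
end

10.times { |i| queue << i }
queue << :stop
writer.join
```

Under light load each batch holds a single item and latency is minimal; under heavy load batches grow automatically, which is exactly the adaptive behavior described above.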

Respect your cache

With all of these optimizations in place, memory access quickly becomes a bottleneck. Pinning threads to their own cores helps reduce CPU cache pollution, and sequential I/O also helps preload the cache. Beyond that, you should keep memory sizes down using primitive data types so more data fits in cache. Additionally, you can look into cache-oblivious algorithms, which work by recursively breaking down the data until it fits in cache and then doing any necessary processing.

Non blocking as much as possible

Make friends with non-blocking and wait-free data structures and algorithms. Every time you use a lock you have to go down the stack to the OS to mediate the lock, which is a huge overhead. Often, if you know what you are doing, you can get around locks by understanding the memory model of the JVM, C++11, or Go.

Async as much as possible

Any processing and particularly any I/O that is not absolutely necessary for building the response should be done outside the critical path.

Parallelize as much as possible

Any processing and particularly any I/O that can happen in parallel should be done in parallel. For instance if your high availability strategy includes logging transactions to disk and sending transactions to a secondary server those actions can happen in parallel.


Almost all of this comes from following what LMAX is doing with their Disruptor project. Read up on that and follow anything that Martin Thompson does.


Mathematical Purity in Distributed Systems: CRDTs Without Fear

This is a story about distributed systems, commutativity, idempotency, and semilattices. It's a real nail-biter so put the kettle on to boil and settle in. Our story starts one bright morning at your budding mobile gaming startup a few days before the big launch. Let's watch.

The Simplest Thing That Works, Right?

It's Thursday and the big launch is just a few days away. You are putting the finishing touches on your viral referral loop when you think to yourself, "wouldn't it be grand if we tracked how many levels our players complete". Since you fully expect that you are about to release the next Candy Crush Saga, you don't want to build something that won't scale, so you bust out your favorite NoSQL database, MongoDB, and turn on the webscale option. You lean back in your chair and after a few minutes of staring at the ceiling you come up with the perfect schema.

{
    _id: hash(<user_name>),
    level: <int>
}

You flip over to your mobile app and wire up a “new level” message that gets sent back to your servers. Then, in the server code you process the message and increment the appropriate MongoDB record.

But we don’t even have that many levels!

Your app launches with a feeble roar (it's OK, just wait for TechCrunch to pick it up) and you decide to check in on your analytics. You start poking around and quickly notice that some user has played to level 534 when there are only 32 levels in the game. What gives? You poke around some more and realize that this user is actually your co-worker Lindsey. You wander over to her desk to see if she can think of anything that might have caused this. The two of you pore over the logs from the app on her phone only to find a lot of network timeouts and retries. That's when Lindsey says, "You know, I was at my parents' house last night showing them the game and the cell service there is pretty crappy. I wonder if the game kept resending the 'new level' message thinking that the send had timed out, when in fact the server had received the message but wasn't able to acknowledge it fast enough." Your stomach quickly sinks as you realize that this behavior will totally screw up your numbers. The two of you wander over to a whiteboard and in short order you realize the MongoDB increment function has to go. Instead, the two of you conclude that the game should send a "level X" message to the server and the server will simply write the level number into MongoDB. A quick update to your server and a lengthy update of the app later (because Apple), your analytics are starting to look better.


What you've just experienced is the pain that comes from a system that lacks idempotency. Idempotent operations are ones that can be safely retried without affecting the end result of the system. In the first design, the increment function is not idempotent since, when it is retried, it increments the MongoDB number again even though it shouldn't. By storing the level number that is sent along with the message you transformed the system into one that is idempotent, since storing the same level number over and over will always end up with the same result.
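
The difference is easy to demonstrate with a plain hash standing in for MongoDB (all names here are made up; only the retry behavior matters):

```ruby
# Toy stand-in for the database: a hash from user to level.
store = Hash.new(0)

increment = ->(s, user)        { s[user] += 1 }    # the original design
set_level = ->(s, user, level) { s[user] = level } # the idempotent fix

# A flaky network delivers the same "new level" message twice:
2.times { increment.call(store, :lindsey) }
2.times { set_level.call(store, :you, 1) }

store[:lindsey]  # the retry was counted: off by one
store[:you]      # the retry was harmless: still level 1
```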

I know I played more levels than that!

TechCrunch is taking its sweet time to cover your awesome game so after blindly throwing half of your seed money at Google AdWords you decide to check in on the analytics again. This time, you start by looking at your personal record and notice another oddity; you've beaten all 32 levels and yet your analytics record has only recorded up to level 30. You mull it over for a few minutes and realize when you beat the game you were on the subway where there wasn't any cell reception. You take out your phone and check the logs and sure enough you still have two "level X" messages waiting to be sent back to the server. You start up the game and watch the logs as the last two messages are safely sent off to your servers. For fun you've been tailing the server logs as well and notice something interesting; the two messages were handled by two different servers. "Just the load balancer doing its job" you think. Back to MongoDB you go, but instead of the 32 you were hoping for, you see a 31 staring you in the face. What the? And then it hits you. The "level 31" message must have been processed a little bit slower than the "level 32" message and overwrote the 32 in MongoDB. "Damn this stuff is hard". You head back to Lindsey's desk and explain the problem to her. The two of you hit the whiteboard again and decide that what you really should do is update the value in MongoDB to be the maximum of the value in MongoDB and the value in the message. Luckily this change can be done solely on the server side (no waiting for Apple). A quick deploy later and you are feeling pretty smug.


This time you were bitten by the lack of commutativity in your system. Commutative operations are ones that can be processed in any order without affecting the end result of the system. In the second design, updating the value in MongoDB was not commutative since it blindly overwrote any previous value, resulting in a last-write-wins situation. By using the maximum function you transformed the system into one that is commutative, since the maximum of a stream of numbers always returns the same result regardless of the order of the numbers.
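
Again with a toy stand-in, here is last-write-wins versus taking the maximum when the delayed "level 31" message arrives after "level 32":

```ruby
arrival_order = [32, 31]  # level 32 processed first, the delayed 31 second

# Last-write-wins: whatever arrives last overwrites the stored value.
last_write_wins = arrival_order.reduce(0) { |_current, level| level }

# Commutative update: the stored value only ever moves toward the max.
take_the_max = arrival_order.reduce(0) { |current, level| [current, level].max }

last_write_wins  # the delayed message clobbered the real progress
take_the_max     # arrival order no longer matters
```

The max-based version gives the same answer for every permutation of the message stream, which is exactly the commutativity property described above.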

Monotonic Semilattices

Now we are breaking out the big words. As it turns out, if you create a system that is idempotent and commutative (and associative but that almost always goes hand in hand with commutative) you have created a semilattice. Moreover, in your particular case, you created a monotonic semilattice. That is to say, you created an idempotent and commutative system that grows only in one direction. Specifically, the number in MongoDB only increases. Now why is this interesting? Well, monotonic semilattices are the building blocks for Conflict-free Replicated Data Types. These things are all the rage and Riak has already implemented one CRDT for their distributed counters feature.
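
For the level-tracking example, the semilattice join is just a per-user maximum. A toy sketch of merging two replica states (hypothetical data, plain hashes) shows the idempotent, commutative, and associative properties directly:

```ruby
# Join of two replica states: per-user max of recorded levels. This is the
# least upper bound for the level-tracking semilattice.
join = ->(a, b) { a.merge(b) { |_user, x, y| [x, y].max } }

r1 = { alice: 30, bob: 12 }
r2 = { alice: 32, carol: 5 }

merged = join.call(r1, r2)
# merged combines both replicas, taking the max where they disagree
```

Joining a replica with itself changes nothing (idempotent), the order of replicas does not matter (commutative), and neither does the grouping (associative), which is why replicas can exchange state in any pattern and still converge.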

Wrapping Up

So as you can see, none of this stuff is particularly hard, but when these simple principles are combined they can make life much easier. Go forth and wrangle some distributed systems!

Dropping ACID on your NoSQL

A lot has been said about NoSQL and the resulting backlash of NewSQL. NoSQL reared its head in reaction to the severe pain of sharding traditional open source SQL databases and quickly took the software community by storm. However, scalability wasn't the only selling point. Many made the argument that SQL itself was to blame and that developers should rise up and throw off the shackles of this repressive language. Fast forward a few years and, while NoSQL alternatives are here to stay, it is quite clear that SQL is far from being overthrown. Instead, we now have a new breed of SQL options, the so called NewSQL databases, that offer the familiarity of SQL, with all its tooling and frameworks, while still giving us the scalability we need in this web scale world. It is in the midst of all this commotion that a quiet trend is emerging, one that is going mostly unnoticed, a trend of dropping ACID on your NoSQL.


So what is this trend? Well it is a middle ground between the NoSQL world and the NewSQL world where ACID guarantees are provided across multiple keys but without the overhead of SQL. Until recently if you wanted multi-key transactions you had to use a NewSQL database; the ideas of SQL and ACID were intertwined and no one seemed to be trying to untangle them. However, in the past year and a half or so that has changed.


The first one of these databases to come to my attention was HyperDex. I was originally drawn to the project when I learned of its approach to secondary indexes, which they call "hyperspace hashing". In a traditional key-value store the values are placed in a 1-dimensional ring based on the primary key. In HyperDex the values are placed in n-dimensional space based on the primary and any secondary keys that are specified. This allows secondary index queries to be routed to a subset of the servers in the cluster instead of every server, as is the case with secondary indexes in traditional key-value stores. I highly recommend reading the research paper to get a better understanding. However, it was only recently, with the addition of HyperDex Warp, that ACID support was added. I won't go into the details of how Warp works but instead I'll direct you to the research paper describing it.


The next database on the scene was FoundationDB. While HyperDex, being developed at Cornell University, has a strong research influence, FoundationDB instead comes from a practical engineering background. Not only has the FoundationDB team built a compelling, ACID compliant, NoSQL store but they ended up building Flow, a C++ actor library, and a trifecta of testing tools to ensure that FoundationDB is as fast and as resilient as possible. More recently they acquired Akiban, makers of an interesting SQL database, and have started building a SQL layer on top of their key value store. This approach is very similar to the approach that Google took when creating F1 and not unlike the approach that NuoDB is taking. It will be interesting to follow that NewSQL trend.


Finally we come to Spanner, Google's geo-distributed datastore. While significantly more vast in scale than either HyperDex or FoundationDB, it still shares the common thread of ACID transactions in a NoSQL store. Of course the interesting part of Spanner that everyone is talking about is TrueTime, which uses atomic clocks and GPS to create a consistent ordering of transactions across the globe. It's going to be a while before that becomes commonplace outside of Google data centers, but it's a fascinating answer to the usual assumptions about synchronized clocks in distributed systems.

Trend Watch: Technology Email Newsletters

Newsletters? Really? 2014 is a few months away, and yet I'm talking about newsletters like it's 1998? Didn't RSS kill newsletters and Twitter supplant RSS? Well, that is certainly the prevailing thinking. However, there is a retro trend cropping up, technology email newsletters, and it's quickly becoming the primary way in which I find interesting articles online.

The trend seems to have started in August 2010 when Peter Cooper first published Ruby Weekly. Back in those days Ruby 1.8.7 was still shiny and new and Rails 3.0 had only just been released upon the world. Since then, Peter has slowly added newsletter after newsletter, finally reaching a total of 6 newsletters under the Cooper Press banner. Many others, including Rahul Chaudhary, have taken inspiration from Peter and started their own newsletters. So, without further ado, here is a complete listing of newsletters you should know about as of October 2013.

Management and Leadership

Technology & Leadership News

An incredible wealth of knowledge from Kate Matsudaira and Kate Stull spanning technology, product, leadership, management and anything else interesting from the week. Probably the single most interesting and comprehensive newsletter of the bunch. If you enjoy the content then I highly recommend checking out their latest project over at

Agile Weekly

A focused newsletter on agile development with an eye towards productivity and team leadership. Well worth the read, as is the podcast run by the same folks.

Software Lead Weekly

A great complement to the Agile Weekly newsletter with links focused on team building and leadership.

Founder Weekly

One of Rahul’s three newsletters. If you want information about financing, sales, marketing, PR, hackathons, incubators and the like, you will find it here.


Status Code

Another one of Peter's newsletters and one of the most interesting technology newsletters, digging into all sorts of hard-core programming nerdery: C, UNIX, protocols, algorithms, editors, etc.

NoSQL Weekly

Interested in MongoDB, Riak, Cassandra and other NoSQL stores? This newsletter from Rahul helps you keep up with what's going on in the NoSQL world.

DevOps Weekly

Your weekly list of links covering Puppet, Chef, Docker, Vagrant, Boxen and all other manner of interesting DevOps tools as well as plenty of slide decks and blog posts on culture and process.

Postgres Weekly

Peter Cooper strikes again with news from the world of PostgreSQL.

Heroku Weekly

A production of Higher Order Heroku for those that want to level up their Heroku skills.

HTML5 Weekly

CSS 3, HTML 5, Websockets, Canvas, all covered, here, in another Cooper Press newsletter.

Responsive Design Weekly

A complement to the HTML 5 newsletter for all the web designers out there.

Programming Languages

These don’t require much in the way of introduction so here goes:

Introducing Chronophage

At Localytics we deal with data, lots of data, and that data is never as clean as you want it. When we developed the first version of our upload API, thrown together during sleep-deprived coding binges while participating in TechStars, the decision was made to upload datetimes as ISO 8601 formatted date strings. Code was written, uploads were sent, and life was grand; or at least we thought it was. You see, Ruby, being the helpful language that it is, very kindly parses all manner of garbage, often quietly ignoring huge swaths of the string that it couldn't make sense of. But I'm getting ahead of myself.
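
If you have never seen this behavior, try handing Ruby's Date.parse a string that is mostly noise (the string below is invented for illustration); it quietly scans for anything date-like and ignores the rest:

```ruby
require 'date'

# Date.parse happily scans past surrounding junk to find something date-like,
# rather than rejecting the malformed input outright.
parsed = Date.parse("uploaded sometime around 2011-05-03, we think")
parsed.iso8601  # the surrounding words were silently discarded
```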

Putting on Grownup Pants

When I was brought on board, I was tasked with rewriting the processing pipeline, which was beginning to creak under the weight of the data we were receiving. Being a fan of the JVM and wanting to hang out with all the cool kids, I decided to use Scala for the rewrite. Piece by piece the pipeline was rewritten, care was taken to unit test, and code coverage was high. Then the fateful day came when we threw the switch and migrated to the new system. No sooner had we completed the migration than alerts started flying. Date strings were all over the map. Twelve hour time? Hindi? Kanji? What was this garbage? Only now were we seeing the dirt that Ruby had been silently sweeping under the rug for us. With a bit of scrambling the first iteration of what would become Chronophage was unleashed upon the world.

The Problem

It turns out, what we thought was ISO formatting was in fact a crapshoot based upon clients' locale settings, language settings, and who knows what else. Dates were frequently formatted in 12-hour time and then kindly translated to whatever language the client was in. Did you know priekšpusdienā is Latvian for AM? Neither did I.


Fast forward a few years and Chronophage has been battle-hardened by trillions of datapoints. It supports twelve-hour time in dozens of languages and locales, thanks in large part to Dave Rolsky's DateTime-Locale project for the locale translations, and a wide variety of other odd formats, thanks to the Joda-Time library's ISO datetime parser. And now, it is with great pleasure that I can announce we are releasing Chronophage as an open source project. Go have a look at the GitHub project and if you find yourself drowning under a deluge of unintelligible date strings give it a spin.

My Scala Toolset – May 2013

A friend recently asked about my Scala toolset. When my response quickly spiraled out of control, I decided to turn it into a quick blog post so everyone could benefit from the reply. So without further ado, here is the current set of tools, libraries, and frameworks that I use on a semi-regular basis.


  • Akka: A very slick actor framework.
  • Scalatra: A Scala port of Sinatra.
  • ScalaTest: A nice testing framework; specs is the other main player.
  • IntelliJ: A very nice IDE for Scala; I prefer it to Eclipse.


  • SBT: It's a bit of a pain to work with, but it's the default build tool for Scala. If you are just trying to do basic dependency management it's pretty nice. Extending it is where the pain comes in.
  • sbt-idea plugin: will generate intellij projects from sbt
  • xsbt-web-plugin: will create wars for your project
  • junit_xml_listener: outputs scalatest results as junit xml so Jenkins can process it
  • NOTE: SBT runs everything in parallel as much as it can. This can screw with JUnit tests, but you can turn it off by setting parallelExecution := false

Java Libs

  • JodaTime: Replaces the standard Java date classes, which are quite weak. The next version of Java will actually have a new date/time API (JSR-310) directly inspired by JodaTime
  • c3p0: A solid JDBC connection pool. I’ve also heard some talk about BoneCP which the Play framework uses.
  • Apache Commons: Look here for things like string utils, IO utils, DB utils, codecs, and really anything that seems like it should already exist.
  • SLF4J: The only logging framework worth using anymore
  • Whirlycache: A simple in-memory cache, though I keep hearing Guava's MapMaker is very nice.
  • Jackson: The only JSON library you should use, stupid fast

Scala Wrappers

  • scalaj-time: wrapper for joda time, lets you do things like 4.minutes.ago
  • scalaj-collection: wrapper for collections that allows you to easily convert between scala and java collections. However, this was essentially moved into the standard library in Scala 2.9 so you might just want to use that now
  • Jerkson: A Scala wrapper for Jackson, though it is now abandoned. Jackson has its own Scala module, and Json4s is trying to unify all the Scala JSON wrappers and supports Jackson.
  • slf4s: A Scala wrapper for slf4j
  • scalaj-http: A simple wrapper for the built in Java HTTP library, not super performant but works in a pinch
  • dispatch: A Scala wrapper around Apache HTTP Async, better for high performance HTTP but a bit more complicated to use as you should use a completion handler to process the response
