11 Best Practices for Low Latency Systems

It's been 8 years since Google noticed that an extra 500ms of latency dropped traffic by 20% and Amazon realized that 100ms of extra latency dropped sales by 1%. Ever since then developers have been racing to the bottom of the latency curve, culminating in front-end developers squeezing every last millisecond out of their JavaScript, CSS, and even HTML. What follows is a random walk through a variety of best practices to keep in mind when designing low latency systems. Most of these suggestions are taken to the logical extreme, but of course tradeoffs can be made. (Thanks to an anonymous user for asking this question on Quora and getting me to put my thoughts down in writing.)

Choose the right language

Scripting languages need not apply. Though they keep getting faster and faster, when you are looking to shave those last few milliseconds off your processing time you cannot have the overhead of an interpreted language. Additionally, you will want a strong memory model to enable lock free programming so you should be looking at Java, Scala, C++11 or Go.

Keep it all in memory

I/O will kill your latency, so make sure all of your data is in memory. This generally means managing your own in-memory data structures and maintaining a persistent log, so you can rebuild the state after a machine or process restart. Some options for a persistent log include Bitcask, Krati, LevelDB and BDB-JE. Alternatively, you might be able to get away with running a local, persisted in-memory database like Redis or MongoDB (with memory >> data). Note that you can lose some data on crash due to their background syncing to disk.
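
Here is a minimal sketch of the idea, assuming a simple string key-value workload and hypothetical names; a real log would add checksums, fsync control, rotation, and compaction:

import java.io.{File, FileWriter, PrintWriter}
import scala.collection.mutable
import scala.io.Source

// Hypothetical sketch: an in-memory map backed by an append-only log.
class PersistedMap(logFile: File) {
  private val state = mutable.Map.empty[String, String]

  // Replay the log on startup to rebuild the in-memory state.
  if (logFile.exists()) {
    for (line <- Source.fromFile(logFile).getLines()) {
      val parts = line.split('\t')
      state(parts(0)) = parts(1)
    }
  }

  private val out = new PrintWriter(new FileWriter(logFile, true), true)

  def put(key: String, value: String): Unit = {
    out.println(s"$key\t$value") // persist first ...
    state(key) = value           // ... then update memory
  }

  def get(key: String): Option[String] = state.get(key)
}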

Keep data and processing colocated

Network hops are faster than disk seeks, but even so they add a lot of overhead. Ideally, your data should fit entirely in memory on one host. With AWS providing almost 1/4 TB of RAM in the cloud and physical servers offering multiple TBs this is generally possible. If you need to run on more than one host you should ensure that your data and requests are properly partitioned so that all the data necessary to service a given request is available locally.
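
To make "properly partitioned" a bit more concrete, here is a hypothetical sketch where requests are routed by hashing the same key the data was partitioned on, so a given user's data and requests always land on the same host. Real deployments usually swap the naive modulo for consistent hashing so hosts can be added without reshuffling everything, but the principle is the same.

// Hypothetical sketch: route each request to the host that owns that user's data.
val hosts = Vector("host-0", "host-1", "host-2", "host-3")

def ownerOf(userId: String): String =
  hosts(Math.floorMod(userId.hashCode, hosts.size))

// All data and all requests for "alice" land on the same host.
println(ownerOf("alice"))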

Keep the system underutilized

Low latency requires always having resources to process the request. Don’t try to run at the limit of what your hardware/software can provide. Always have lots of head room for bursts and then some.

Keep context switches to a minimum

Context switches are a sign that you are doing more compute work than you have resources for. You will want to limit your number of threads to the number of cores on your system and to pin each thread to its own core.
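
Capping the thread count is easy to sketch with the plain JDK; actually pinning each thread to a core is OS specific and is typically done with something like taskset on Linux or a native affinity library, so it is only noted in a comment here.

import java.util.concurrent.Executors

// Hypothetical sketch: never create more worker threads than there are cores.
val cores = Runtime.getRuntime.availableProcessors()
val pool  = Executors.newFixedThreadPool(cores)

// Pinning each of those threads to its own core happens outside the JVM
// (e.g. taskset/numactl on Linux) or via a native thread-affinity library.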

Keep your reads sequential

All forms of storage, whether rotational, flash based, or memory, perform significantly better when used sequentially. When issuing sequential reads to memory you trigger the use of prefetching at the RAM level as well as at the CPU cache level. If done properly, the next piece of data you need will always be in L1 cache right before you need it. The easiest way to help this process along is to make heavy use of arrays of primitive data types or structs. Following pointers, either through use of linked lists or through arrays of objects, should be avoided at all costs.
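
A rough illustration of the difference: summing a primitive array walks one contiguous block of memory and lets the prefetcher do its job, while folding over a list chases a pointer to a separate heap object for every element.

// Sequential and prefetcher friendly: one contiguous block of longs.
def sumArray(values: Array[Long]): Long = {
  var total = 0L
  var i = 0
  while (i < values.length) {
    total += values(i)
    i += 1
  }
  total
}

// Pointer chasing: each cons cell and each boxed value is its own heap object.
def sumList(values: List[Long]): Long =
  values.foldLeft(0L)(_ + _)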

Batch your writes

This sounds counterintuitive but you can gain significant improvements in performance by batching writes. However, there is a misconception that this means the system should wait an arbitrary amount of time before doing a write. Instead, one thread should spin in a tight loop doing I/O. Each write will batch all the data that arrived since the last write was issued. This makes for a very fast and adaptive system.
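
A hypothetical sketch of that adaptive batching loop: producers drop records on a concurrent queue and never touch the disk, while a single writer thread spins, draining whatever accumulated since the last flush and handing it off as one batch (writeBatch below is a stand-in for the real I/O call).

import java.util.concurrent.ConcurrentLinkedQueue
import scala.collection.mutable.ArrayBuffer

val pending = new ConcurrentLinkedQueue[String]()

// Producers only enqueue; the writer thread owns all of the I/O.
def submit(record: String): Unit = pending.offer(record)

// One dedicated thread spins in a tight loop, batching everything that
// arrived since the previous write was issued.
def writerLoop(writeBatch: Seq[String] => Unit): Unit = {
  val batch = new ArrayBuffer[String]()
  while (true) {
    batch.clear()
    var next = pending.poll()
    while (next != null) {
      batch += next
      next = pending.poll()
    }
    if (batch.nonEmpty) writeBatch(batch.toSeq)
  }
}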

Respect your cache

With all of these optimizations in place, memory access quickly becomes a bottleneck. Pinning threads to their own cores helps reduce CPU cache pollution and sequential I/O also helps preload the cache. Beyond that, you should keep memory sizes down using primitive data types so more data fits in cache. Additionally, you can look into cache-oblivious algorithms which work by recursively breaking down the data until it fits in cache and then doing any necessary processing.
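
To make the cache-oblivious idea concrete, here is a toy sketch: a divide-and-conquer sum that keeps halving the range, so at some depth the working set fits in whatever cache is present, without the code ever knowing the cache sizes.

// Toy cache-oblivious style traversal: recurse until the range is small,
// then process it sequentially. No cache sizes appear anywhere in the code.
def recursiveSum(values: Array[Long], from: Int, until: Int): Long =
  if (until - from <= 1024) {
    var total = 0L
    var i = from
    while (i < until) { total += values(i); i += 1 }
    total
  } else {
    val mid = from + (until - from) / 2
    recursiveSum(values, from, mid) + recursiveSum(values, mid, until)
  }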

Non blocking as much as possible

Make friends with non-blocking and wait-free data structures and algorithms. Every time you take a contended lock you have to go down the stack to the OS to mediate it, which is a huge overhead. Often, if you know what you are doing, you can get around locks entirely by understanding the memory model of the JVM, C++11 or Go.
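
As a small sketch of what "getting around locks" often looks like on the JVM: a compare-and-swap retry loop on an atomic variable in place of a synchronized block (the clamp-to-a-maximum update is just an arbitrary example).

import java.util.concurrent.atomic.AtomicLong

val counter = new AtomicLong(0L)

// Lock-free update: read, compute, and publish with compareAndSet,
// retrying only if another thread won the race.
@annotation.tailrec
def addClamped(delta: Long, max: Long): Long = {
  val current = counter.get()
  val updated = math.min(current + delta, max)
  if (counter.compareAndSet(current, updated)) updated
  else addClamped(delta, max)
}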

Async as much as possible

Any processing and particularly any I/O that is not absolutely necessary for building the response should be done outside the critical path.
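
As a sketch, anything like audit logging or metrics that the response does not depend on can be handed to a separate pool and forgotten (compute and writeAuditLog are stand-ins for the real work):

import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

// A separate pool for work that is off the critical path (size is illustrative).
val background = ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(2))

def compute(request: String): String = request.toUpperCase      // stand-in
def writeAuditLog(request: String, response: String): Unit = () // stand-in

def handleRequest(request: String): String = {
  val response = compute(request)                         // critical path only
  Future { writeAuditLog(request, response) }(background) // fire and forget
  response
}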

Parallelize as much as possible

Any processing and particularly any I/O that can happen in parallel should be done in parallel. For instance if your high availability strategy includes logging transactions to disk and sending transactions to a secondary server those actions can happen in parallel.
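
Keeping with the same example, here is a hedged sketch of issuing the disk log and the replication to the secondary at the same time, waiting only as long as the slower of the two (both functions are stand-ins):

import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

def logToDisk(txn: String): Unit = ()        // stand-in for the local write
def sendToSecondary(txn: String): Unit = ()  // stand-in for the network send

def commit(txn: String): Unit = {
  val disk    = Future(logToDisk(txn))
  val replica = Future(sendToSecondary(txn))
  // Both run in parallel; the caller blocks only for the slower of the two.
  Await.result(disk.zip(replica), 1.second)
}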

Resources

Almost all of this comes from following what LMAX is doing with their Disruptor project. Read up on that and follow anything that Martin Thompson does.

Additional Blogs

Mathematical Purity in Distributed Systems: CRDTs Without Fear

This is a story about distributed systems, commutativity, idempotency, and semilattices. It’s a real nail-biter so put the kettle on to boil and settle in. Our story starts one bright morning at your budding mobile gaming startup a few days before the big launch. Let’s watch.

The Simplest Thing That Works, Right?

It’s Thursday and the big launch is just a few days away. You are putting the finishing touches on your viral referral loop when you think to yourself, “wouldn’t it be grand if we tracked how many levels our players complete”. Since you fully expect that you are about to release the next Candy Crush Saga you don’t want to build something that won’t scale, so you bust out your favorite NoSQL database, MongoDB, and turn on the webscale option. You lean back in your chair and after a few minutes of staring at the ceiling you come up with the perfect schema.

{
    _id: hash(<user_name>),
    level: <int>
}

You flip over to your mobile app and wire up a “new level” message that gets sent back to your servers. Then, in the server code you process the message and increment the appropriate MongoDB record.

But we don’t even have that many levels!

Your app launches with a feeble roar (it’s ok, just wait for TechCrunch to pick it up) and you decide to check in on your analytics. You start poking around and quickly notice that some user has played to level 534 when there are only 32 levels in the game. What gives? You poke around some more and realize that this user is actually your co-worker Lindsey. You wander over to her desk to see if she can think of anything that might have caused this. The two of you pore over the logs from the app on her phone only to find a lot of network timeouts and retries. That’s when Lindsey says “You know, I was at my parents’ house last night showing them the game and the cell service there is pretty crappy. I wonder if the game kept resending the “new level” message thinking that the send had timed out, when in fact the server had received the message but wasn’t able to acknowledge it fast enough.” Your stomach quickly sinks as you realize that this behavior will totally screw up your numbers. The two of you wander over to a whiteboard and in short order you realize the MongoDB increment function has to go. Instead, the two of you conclude that the game should send a “level X” message to the server and the server will simply write the level number into MongoDB. A quick update to your server and a lengthy update of the app later (because Apple) your analytics are starting to look better.

Idempotency

What you’ve just experienced is the pain that comes from a system that lacks idempotency. Idempotent operations are ones that can be safely retried without affecting the end result of the system. In the first design, the increment function is not idempotent since when it is retried it increments the MongoDB number again even though it shouldn’t. By storing the level number that is sent along with the message you transformed the system into one that is idempotent since storing the same level number over and over will always end up with the same result.
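
In code the difference is roughly the sketch below (hypothetical server-side handlers, not the actual game code): the first handler moves the stored value every time a retry sneaks through, while the second converges to the same value no matter how many times the message is replayed.

import scala.collection.mutable

val levels = mutable.Map.empty[String, Int]

// Not idempotent: every retried "new level" message bumps the count again.
def onNewLevel(user: String): Unit =
  levels(user) = levels.getOrElse(user, 0) + 1

// Idempotent: replaying "level X" any number of times leaves the same value.
def onLevelReached(user: String, level: Int): Unit =
  levels(user) = level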

I know I played more levels than that!

TechCrunch is taking its sweet time to cover your awesome game so after blindly throwing half of your seed money at Google AdWords you decide to check in on the analytics again. This time, you start by looking at your personal record and notice another oddity; you’ve beaten all 32 levels and yet your analytics record has only recorded up to level 30. You mull it over for a few minutes and realize that when you beat the game you were on the subway where there wasn’t any cell reception. You take out your phone and check the logs and sure enough you still have two “level X” messages waiting to be sent back to the server. You start up the game and watch the logs as the last two messages are safely sent off to your servers. For fun you’ve been tailing the server logs as well and notice something interesting; the two messages were handled by two different servers. “Just the load balancer doing its job” you think. Back to MongoDB you go, but instead of the 32 you were hoping for, you see a 31 staring you in the face. What the? And then it hits you. The “level 31” message must have been processed a little bit slower than the “level 32” message and overwrote the 32 in MongoDB. “Damn this stuff is hard”. You head back to Lindsey’s desk and explain the problem to her. The two of you hit the whiteboard again and decide that what you really should do is update the value in MongoDB to be the maximum of the value in MongoDB and the value in the message. Luckily this change can be done solely on the server side (no waiting for Apple). A quick deploy later and you are feeling pretty smug.

Commutativity

This time you were bitten by the lack of commutativity in your system. Commutative operations are ones that can be processed in any order without affecting the end result of the system. In the second design, updating the value in MongoDB was not commutative since it blindly overwrote any previous value, resulting in a last-write-wins situation. By using the maximum function you transformed the system into one that is commutative since the maximum of a stream of numbers always returns the same result regardless of the order of the numbers.

Monotonic Semilattices

Now we are breaking out the big words. As it turns out, if you create a system that is idempotent and commutative (and associative but that almost always goes hand in hand with commutative) you have created a semilattice. Moreover, in your particular case, you created a monotonic semilattice. That is to say, you created an idempotent and commutative system that grows only in one direction. Specifically, the number in MongoDB only increases. Now why is this interesting? Well, monotonic semilattices are the building blocks for Conflict-free Replicated Data Types. These things are all the rage and Riak has already implemented one CRDT for their distributed counters feature.
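
Put together, the whole mechanism fits in a few lines. Here is a sketch of the “highest level reached” value as a tiny grow-only register whose merge operation is max, which is idempotent, commutative, and associative, and therefore safe to retry, reorder, and replicate:

// Sketch of a max-register: merge is idempotent, commutative, and associative.
final case class MaxLevel(value: Int) {
  def merge(other: MaxLevel): MaxLevel = MaxLevel(math.max(value, other.value))
}

val a = MaxLevel(31)
val b = MaxLevel(32)

assert(a.merge(b) == b.merge(a))            // commutative: order doesn't matter
assert(a.merge(a) == a)                     // idempotent: retries are harmless
assert(a.merge(b).merge(b) == MaxLevel(32)) // replays never move the result backwards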

Wrapping Up

So as you can see, none of this stuff is particularly hard, but when these simple principles are combined they can make life much easier. Go forth and wrangle some distributed systems!

Dropping ACID on your NoSQL

A lot has been said about NoSQL and the resulting backlash of NewSQL. NoSQL reared its head in reaction to the severe pain of sharding traditional open source SQL databases and quickly took the software community by storm. However, scalability wasn’t the only selling point. Many made the argument that SQL itself was to blame and that developers should rise up and throw off the shackles of this repressive language. Fast forward a few years and, while NoSQL alternatives are here to stay, it is quite clear that SQL is far from being overthrown. Instead, we now have a new breed of SQL options, the so called NewSQL databases, that offer the familiarity of SQL, with all its tooling and frameworks, while still giving us the scalability we need in this web scale world. It is in the midst of all this commotion that a quiet trend is emerging, one that is going mostly unnoticed, a trend of dropping ACID on your NoSQL.

NewACID

So what is this trend? Well it is a middle ground between the NoSQL world and the NewSQL world where ACID guarantees are provided across multiple keys but without the overhead of SQL. Until recently if you wanted multi-key transactions you had to use a NewSQL database; the ideas of SQL and ACID were intertwined and no one seemed to be trying to untangle them. However, in the past year and a half or so that has changed.

HyperDex

The first one of these databases to come to my attention was HyperDex. I was originally drawn to the project when I learned of its approach to secondary indexes which they call “hyperspace hashing”. In a traditional key-value store the values are placed in a 1-dimensional ring based on the primary key. In HyperDex the values are placed in n-dimensional space based on the primary and any secondary keys that are specified. This allows secondary index queries to be routed to a subset of the servers in the cluster instead of every server as is the case with secondary indexes in traditional key-value stores. I highly recommend reading the research paper to get a better understanding. However, it was only recently with the addition of HyperDex Warp that ACID support was added. I won’t go into the details of how Warp works but instead I’ll direct you to the research paper describing it.

FoundationDB

The next database on the scene was FoundationDB. While HyperDex, being developed at Cornell University, has a strong research influence, FoundationDB instead comes from a practical engineering background. Not only has the FoundationDB team built a compelling, ACID compliant, NoSQL store but they ended up building Flow, a C++ actor library, and a trifecta of testing tools to ensure that FoundationDB is as fast and as resilient as possible. More recently they acquired Akiban, makers of an interesting SQL database, and have started building a SQL layer on top of their key value store. This approach is very similar to the approach that Google took when creating F1 and not unlike the approach that NuoDB is taking. It will be interesting to follow that NewSQL trend.

Spanner

Finally we come to Spanner, Google’s geo-distributed datastore. While significantly more vast in scale than either HyperDex or FoundationDB it still shares the common thread of ACID transactions in a NoSQL store. Of course the interesting part of Spanner that everyone is talking about is TrueTime, which uses atomic clocks and GPS to create a consistent ordering of transactions across the globe. It’s going to be a while before that becomes commonplace outside of Google data centers but it’s a fascinating solution to the normal assumptions about synchronized clocks in distributed systems.

Trend Watch: Technology Email Newsletters

Newsletters? Really? 2014 is a few months away, and yet I’m talking about newsletters like it’s 1998? Didn’t RSS kill newsletters and Twitter supplant RSS? Well, that is certainly the prevailing thinking. However, there is a retro trend cropping up, technology email newsletters, and it’s quickly becoming the primary way in which I find interesting articles online.

The trend seems to have started in August 2010 when Peter Cooper first published Ruby Weekly. Back in those days Ruby 1.8.7 was still shiny and new and Rails 3.0 had only just been released on the world. Since then, Peter has slowly added newsletter after newsletter, finally reaching a total of 6 newsletters under the Cooper Press banner. Many others, including Rahul Chaudhary, have taken inspiration from Peter and started their own newsletters. So, without further ado, here is a complete listing of newsletters you should know about as of October 2013.

Management and Leadership

Technology & Leadership News

An incredible wealth of knowledge from Kate Matsudaira and Kate Stull spanning technology, product, leadership, management and anything else interesting from the week. Probably the single most interesting and comprehensive newsletter of the bunch. If you enjoy the content then I highly recommend checking out their latest project over at https://popforms.com/.

Agile Weekly

A focused newsletter on agile development with an eye towards productivity and team leadership. Well worth the read, as is the podcast run by the same folks.

Software Lead Weekly

A great complement to the Agile Weekly newsletter with links focused on team building and leadership.

Founder Weekly

One of Rahul’s three newsletters. If you want information about financing, sales, marketing, PR, hackathons, incubators and the like, you will find it here.

Technology

Status Code

Another one of Peter’s newsletters and one of the most interesting technology newsletters, digging into all sorts of hard-core programming nerdery: C, UNIX, protocols, algorithms, editors, etc.

NoSQL Weekly

Interested in MongoDB, Riak, Cassandra and other NoSQL stores? This newsletter from Rahul helps you keep up with what’s going on in the NoSQL world.

DevOps Weekly

Your weekly list of links covering Puppet, Chef, Docker, Vagrant, Boxen and all other manner of interesting DevOps tools as well as plenty of slide decks and blog posts on culture and process.

Postgres Weekly

Peter Cooper strikes again with news from the world of PostgreSQL.

Heroku Weekly

A production of Higher Order Heroku for those that want to level up their Heroku skills.

HTML5 Weekly

CSS 3, HTML 5, Websockets, Canvas, all covered, here, in another Cooper Press newsletter.

Responsive Design Weekly

A complement to the HTML 5 newsletter for all the web designers out there.

Programming Languages

These don’t require much in the way of introduction so here goes:

Introducing Chronophage

At Localytics we deal with data, lots of data, and that data is never as clean as you want it. When we developed the first version of our upload API, thrown together during sleep-deprived coding binges while participating in TechStars, the decision was made to upload datetimes as ISO 8601 formatted date strings. Code was written, uploads were sent, and life was grand; or at least we thought it was. You see, Ruby, being the helpful language that it is, very kindly parses all manner of garbage, often quietly ignoring huge swaths of the string that it couldn’t make sense of. But I’m getting ahead of myself.

Putting on Grownup Pants

When I was brought on board, I was tasked with rewriting the processing pipeline which was beginning to creak under the weight of the data we were receiving. Being a fan of the JVM and wanting to hang out with all the cool kids, I decided to use Scala for the rewrite. Piece by piece the pipeline was rewritten, care was taken to unit test, and code coverage was high. Then the fateful day came when we threw the switch and migrated to the new system. No sooner had we completed the migration than alerts started flying. Date strings were all over the map. Twelve hour time? Hindi? Kanji? What was this garbage? Only now were we seeing the dirt that Ruby had been silently sweeping under the rug for us. With a bit of scrambling the first iteration of what would become Chronophage was unleashed upon the world.

The Problem

It turns out, what we thought was ISO formatting was in fact a crap shoot based upon the client’s locale settings, language settings, and who knows what else. Dates were frequently formatted in 12-hour time and then kindly translated to whatever language the client was in. Did you know priekšpusdienā is Latvian for AM? Neither did I.

Chronophage

Fast forward a few years and Chronophage has been battle-hardened by trillions of datapoints. It supports twelve hour time in dozens of languages and locales, thanks in large part to Dave Rolsky’s DateTime-Locale project for the locale translations, and a wide variety of other odd formats, thanks to the Joda Time library’s ISO datetime parser. And now, it is with great pleasure that I can announce we are releasing Chronophage as an open source project. Go have a look at the GitHub project and if you find yourself drowning under a deluge of unintelligible date strings give it a spin.
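
To give a flavor of the strict-parsing side of the problem, here is an illustration using Joda Time's standard ISO 8601 parser directly (this is not Chronophage's internals, just the behavior that motivated it): well-formed input parses, and garbage fails loudly instead of being silently "fixed" the way Ruby did.

import org.joda.time.DateTime
import org.joda.time.format.ISODateTimeFormat

// Strict ISO 8601 parsing: anything else is rejected rather than guessed at.
val parser = ISODateTimeFormat.dateTimeParser().withZoneUTC()

def parseIso(raw: String): Either[String, DateTime] =
  try Right(parser.parseDateTime(raw))
  catch { case _: IllegalArgumentException => Left(s"unparseable datetime: $raw") }

parseIso("2013-10-07T12:34:56Z")          // Right(...)
parseIso("07.10.13 12:34 priekšpusdienā") // Left(...) -- the kind of input Chronophage exists for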

My Scala Toolset – May 2013

A friend recently asked about my Scala toolset. When my response quickly spiraled out of control I decided to turn it into a quick blog post so everyone could benefit from the reply. So without further ado here is the current set of tools, libraries, and frameworks that I use on a semi-regular basis.

Tools/Frameworks

  • Akka: A very slick Actor framework.
  • Scalatra: A Scala port of Sinatra
  • Scalatest: Nice testing framework; specs is the other main player
  • IntelliJ: Very nice IDE for Scala, I prefer it to Eclipse

SBT

  • SBT: It's a bit of a pain to work with but it's the default build tool for Scala. If you are just trying to do basic dependency management it's pretty nice. Extending it is where the pain comes in.
  • sbt-idea plugin: will generate IntelliJ projects from sbt
  • xsbt-web-plugin: will create wars for your project
  • junit_xml_listener: outputs scalatest results as junit xml so Jenkins can process it
  • NOTE: SBT runs everything in parallel as much as it can, which can interfere with JUnit tests; you can turn it off by setting parallelExecution := false (see the snippet below)
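
For reference, the setting from that note goes straight into build.sbt; a minimal example assuming the sbt 0.12/0.13-era key syntax and scoping it to tests only:

// build.sbt -- run tests one at a time so stateful JUnit fixtures don't collide
parallelExecution in Test := false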

Java Libs

  • JodaTime: Replaces the standard Java date stuff which is quite weak. The next version of Java will actually ship a new date/time API (JSR-310) that is heavily inspired by Joda-Time
  • c3p0: A solid JDBC connection pool. I’ve also heard some talk about BoneCP which the Play framework uses.
  • Apache Commons: look here for things like string utils, io utils, db utils, utils around codecs, really anything that seems like it should already exist.
  • SLF4J: The only logging framework worth using anymore
  • Whirlycache: A simple in memory cache though I keep hearing Google MapMaker is very nice.
  • Jackson: The only JSON library you should use, stupid fast

Scala Wrappers

  • scalaj-time: wrapper for joda time, lets you do things like 4.minutes.ago
  • scalaj-collection: wrapper for collections that allows you to easily convert between scala and java collections. However, this was essentially moved into the standard library in Scala 2.9 so you might just want to use that now
  • Jerkson: scala wrapper for Jackson though this is abandoned now. Jackson has a Scala module now and Json4s is trying to unify all the Scala JSON wrappers and supports Jackson.
  • slf4s: A Scala wrapper for slf4j
  • scalaj-http: A simple wrapper for the built in Java HTTP library, not super performant but works in a pinch
  • dispatch: A Scala wrapper around Apache HTTP Async, better for high performance HTTP but a bit more complicated to use as you should use a completion handler to process the response

Web Scale Analytics Reading List

Big data is taking the world by storm and with it comes an explosion of new ideas and technologies looking to help us understand what this data is telling us. With VLDB 2012 under way I decided to take another look at the literature to see what advances are out there as well as refresh on the classics. The result of this deep dive is the web scale analytics reading list below. The list is grouped at a high level into column oriented database solutions and online analytical processing (OLAP) solutions. Column oriented databases are by far more powerful but also more complicated to implement. As such, much of the work on column oriented databases is being done at companies that are building one as a product. The obvious exception being Google. OLAP systems on the other hand, while less powerful, are simpler to implement. For this reason we see a variety of companies rolling their own solutions in response to their growing analytics problems.

Column Oriented Databases

Online Analytical Processing
