caustik's blog

programming and music

Archive for the ‘node.js’ Category

Node.js w/1M concurrent connections!

I’ve decided to ramp up the Node.js experiments and pass the 1 million concurrent connections milestone. It worked: a swarm of 500 Amazon EC2 test clients, each establishing ~2,000 active long-poll COMET connections to a single 15GB Rackspace cloud server.

This isn’t landing the Mars rover or curing cancer; it’s just a pretty cool milestone, IMO. Hopefully it’s of some benefit to Node developers who can use these settings as a starting point in their own projects.

Here’s the connection count as displayed on the sprite’s page:

Here’s a sysctl dumping the number of open file handles (sockets are file handles):
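On Linux, the same counts can be read straight from /proc (a sketch; this may not be the exact command shown in the screenshot):

```shell
# Three fields: allocated file handles, allocated-but-unused, system max
cat /proc/sys/fs/file-nr
```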

Here’s the view of “top” showing system resources in use:

I think it’s pretty reasonable for 1M connections to consume 16GB of memory, but it could probably be trimmed down quite a bit. I haven’t spent any time optimizing that. I’ll leave that for another day.

Here’s a latency test run against the comet URL:

The new tweaks, placed in /etc/sysctl.conf (CentOS) and then reloaded with “sysctl -p”:

net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.ipv4.tcp_rmem = 4096 16384 33554432
net.ipv4.tcp_wmem = 4096 16384 33554432
net.ipv4.tcp_mem = 786432 1048576 26777216
net.ipv4.tcp_max_tw_buckets = 360000
net.core.netdev_max_backlog = 2500
vm.min_free_kbytes = 65536
vm.swappiness = 0
net.ipv4.ip_local_port_range = 1024 65535

Other than that, the steps were identical to the steps described in my previous blog posts, except this time using Node.js version 0.8.3.

Here is the server source code, so you can get a sense of the complexity level. Each connected client actively sends messages, for the purpose of verifying the connections are alive. I haven’t yet pushed that throughput to see what data rate can be sustained; with the modest 16GB of memory already consumed, doing so would likely have caused swapping and told us little. I’ll give it a shot with a higher-memory server next time.

Written by caustik

August 19th, 2012 at 12:43 am

Escape the 1.4GB V8 heap limit in Node.js!

(continued from the sprites scalability experiments [250k] [100k])

Well, I finally found a workaround for the 1.4GB limit imposed on the V8 heap. At first this was a “hard” limit, and after some tweaks it became a “soft” limit. Now, the combined tweaks listed below have removed the limit entirely! YESSSSSS!!

The first two were already mentioned in my previous few blog articles, and the new ones (#3 and #4) finally open up the possibility of utilizing the real capacity of your server hardware.

1) ulimit -n 999999

This effectively increases the number of sockets Node.js can have open. You won’t get far without it, as the default tends to be around 1024.

2) --nouse-idle-notification

This is a command line parameter you can pass to node. It gets passed along to the V8 engine, and will prevent it from constantly running that darn garbage collector. IMO, you can’t do a real-time server with constant ~4 second latency hiccups every 30 seconds or so.

You might want to also use the flag "--expose-gc", which enables the “gc();” function in your server JavaScript code. You can then tie this to an admin mechanism, so you retain the power to trigger garbage collection at any time without having to restart the server. For the most part, if you don’t leak Objects all over the place, you won’t really need to do this often, or at all. Still, it’s useful to have the capability.

3) --max-old-space-size=8192

You can tweak this to fit your particular server; the value is in MB. I chose 8GB because my expectation is that 4GB will be plenty, and 8GB is just for good measure. You may also consider using "--max-new-space-size=2048" (measured in KB, unlike the other). I don’t believe that one is nearly as critical, though.
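Putting the flags together, the launch command might look like this (server.js is a placeholder name, and --nouse-idle-notification only exists in V8 builds of this era):

```shell
node --nouse-idle-notification --expose-gc --max-old-space-size=8192 server.js
```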

4) Compile the latest V8 source code, with two modifications

The current node distribution isn’t using the newest V8 engine, which has significant improvements to garbage collection and to performance in general. It’s worth upgrading if you need more memory and better performance. Luckily, upgrading is really easy.

cd node-v0.6.14/deps/v8
rm -rf *
svn export http://v8.googlecode.com/svn/tags/3.10.0.5 .

You can replace the version numbers as appropriate; there may be newer versions of node and/or V8 by the time you read this.

Next, you’ll need to add a single line to “SConstruct” inside the V8 directory.

    'CPPPATH': [src_dir],
    'CPPDEFINES': ['V8_MAX_SEMISPACE_SIZE=536870912'],

The second line above is new. Basically, you just need to set that V8_MAX_SEMISPACE_SIZE define. It otherwise defaults to a much lower value, causing frequent garbage collection to trigger, depending on the memory characteristics of your server JS.

Next, comment out the call to “CollectAllGarbage” in “src/heap-inl.h” inside the V8 directory:

    if (amount_since_last_global_gc > external_allocation_limit_) {
      //CollectAllGarbage(kNoGCFlags, "external memory allocation limit reached");
    }

This was done “just in case” because it costs me money each time I run a scaling test. I wanted to be damn sure it was not going to trigger the external allocation limit GC!

That’s it! You will now want to rebuild both V8 and node. I recommend a clean build of both, since replacing the entire V8 source may otherwise fail to take effect.

While writing this, I’ve got a server log open showing 2281 MB of V8 heap usage. That’s far beyond the normal 1.4 GB limitation. Garbage collection is behaving, and the server remains VERY responsive with low latency. In addition to the 250k concurrent and active connections, each of those nodes is also sending a sprite every 5 seconds.

There is still CPU to spare. The only thing keeping me from trying 500k concurrent is my Amazon EC2 quota, which I already had to request an increase on to do 250k.

OK – Now, go eat up all the memory you want on your Node servers :)

Written by caustik

April 11th, 2012 at 2:43 am

Node.js w/250k concurrent connections!

Okay, I figured it must be possible, so it had to be done...

The sprites server has smashed through the 250k barrier with 250,001 concurrent, active connections. In addition to the stuff talked about in my previous two blog entries, it took a few more optimizations.

1) Latest tagged revision of V8, which appears to perform a little better

The 250k limit is right on the fringe of what this server can pull off without violating the 1.4GB heap limitation in V8. It’s not clear to me why this limitation hasn’t bubbled up in priority enough to be taken care of yet. Just look at those free CPU cycles and unused memory! V8 is a complete bad-ass, and it’s just this one limitation that is holding it back from some really extreme capabilities.

I had tried 250k a few times, and couldn’t get it stable until after upgrading the /deps/v8/ directory of Node.js to the latest version tagged in SVN. I was really hoping the 1.4GB limit had been removed, but alas, I had to settle for some significant improvements to garbage collection and performance in general.

2) Workers via “cluster” module

I figured the onslaught of HTTP GET requests that the 100 EC2 servers were unleashing, at a rate of 100,000 JSON packets per second, was contributing to both CPU and memory consumption, agitating the garbage collector enough to resist the 250k mark.

So, to reduce the overhead of these transient requests, which come in addition to the 250k concurrent connections (leaving us at roughly 350k connections at any given second, 100k of them transient), I decided to leverage the cluster module.

The master performs the exact same tasks it did before, except now workers are spawned, one for each CPU on the system, listening on a separate port from the master process. The clients use this port for the transient requests; as they arrive, they are parsed and forwarded to the master using the Node.js send() function.

The only reason this was a critical adjustment is, again, the 1.4GB heap limitation in V8. Keeping all the resources associated with those requests off the master process saves just enough memory to get past the 250k milestone.

TL;DR – V8, you’re so good, but it’s time for you to support more memory!

Written by caustik

April 10th, 2012 at 7:53 am

Scaling node.js to 100k concurrent connections!

UPDATE: Broke the 250k barrier, too :]

The node.js powered sprites fun continues, with a new milestone:

That’s right, 100,004 active connections! Note the low %CPU and %MEM numbers in the picture. To be fair, the CPU usage does wander between about 5% and 40% – but it’s also not a very beefy box. This is on a $0.12/hr rackspace 2GB cloud server.

Each connection simulates sending a single sprite every 5 seconds. The destination for each sprite is randomized to an equal distribution across all nodes. This means traffic of 20,000 sprites per second, which amounts to 40,000 JSON packets per second. This doesn’t even include the keep-alive pings, which occur on a 2-minute interval per connection.

At this scale, the sprite network topology remains very responsive. Testing with my desktop PC sitting next to my laptop, a sprite thrown off one screen arrives at the other so fast that I can’t gauge any latency at all.

Here are a few key tweaks which contribute to this performance:

1) Nagle’s algorithm is disabled

If you’re familiar at all with real-time network programming, you’ll recognize this algorithm as a common socket tweak. This makes each response leave the server much quicker.

The tweak is available through the node.js API “socket.setNoDelay”, which is set on each long-poll COMET connection’s socket.

2) V8’s idle garbage collection is disabled via "--nouse-idle-notification"

This was critical, as the server pre-allocates over 2 million JS Objects for the network topology. If you don’t disable idle garbage collection, you’ll see a full second of delay every few seconds, which would be an intolerable bottleneck to scalability and responsiveness. The delay appears to be caused by the garbage collector traversing this list of objects, even though none of them are actually candidates for garbage collection.

I’m eager to experiment further by scaling this up to 250k connections. The only thing keeping that test from being run is the quota on my Amazon EC2 account, which limits the number of simulated clients I can run simultaneously. They have responded to my request to increase the quota, but sadly it hasn’t taken effect yet.

The sprites source code, both client and server, is available via Subversion. The repository URLs are provided on the sprites web site.

http://sprites.caustik.com/

For more information about the testing and tweaks involved in scaling the server, check my previous post Node.js scalability testing with EC2.

Written by caustik

April 8th, 2012 at 9:09 am

Node.js scalability testing with EC2

sprites server

The sprites project leverages Node.js to implement its server logic. The server is surprisingly simple JavaScript code which accepts long-poll COMET HTTP connections from its C++ clients in order to push JSON-format sprite information in real time. Sprites can be thrown from one client’s desktop to another via HTTP. The server maintains a network topology (worth a separate post in itself; it turned out to be an interesting algorithm), which is used to send the posted sprite to the appropriate neighbor.

The asynchronous sockets in Node.js scale very well. Combine this with the JavaScript being interpreted by the blazing fast V8 engine, and you have the foundation of a very simple and flexible server platform which scales mightily.

So, how well does it scale?

In order to test that, I decided to leverage Amazon EC2 to run a bunch of fake clients to simulate network usage. Each EC2 instance can make thousands of connections to the sprites server, and simulate throwing sprites around.

To implement this stress test, I first booted up a single instance with one of the default Linux templates. From there it only takes 5 minutes to install a basic build environment using “yum install subversion make gcc-c++” — after that, I modified /etc/rc.local to do a couple things:

1) ulimit -n 999999

This is necessary to get around the default limitation in file handles, which applies to sockets, that would otherwise prevent the instance from making more than about 1024 connections.

This command must also be used on the Node.js server, which you don’t really see documented anywhere! If you don’t increase the file handle limit on your Node.js server, somewhere down the line clients will be unable to connect, and the server will suddenly decide to peg the CPU at 100% and behave strangely.

I’ve seen all sorts of posts around the net struggling to figure out why their Node.js server doesn’t scale quite as high as they think it should. Well, if you’re not increasing your file handle limit, that would definitely be one explanation. Aside from that, it’s obviously critical to write very performance-conscious JavaScript code.

2) svn update the test working directory

This is done so that each instance is always up to date with the latest test code after a reboot. This greatly simplifies scalability stress testing. You just start and stop instances using the EC2 console to magically test the latest code at whatever scale you want. You can literally test a million concurrent connections using this technique (though, I’m pending a quota increase on EC2 to actually give this a try, they stop you after 20 instances by default).

3) cd into the test working directory, build, and run the test
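Put together, the /etc/rc.local additions might look like this (the paths, repo layout, and binary name are illustrative placeholders, not the actual project’s):

```shell
# Step 1: raise the per-process file handle limit
ulimit -n 999999
# Steps 2 and 3: refresh the test code, build it, and swarm the server
cd /home/ec2-user/sprites-test
svn update
make
./stress-client &
```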

That’s it! I use a Windows development machine to create the cross-platform test code, and whenever a change is made, I commit it to subversion, and recreate/reboot all the EC2 instances. They automatically update, and swarm the server. It’s a beautiful thing :)

Since each instance on EC2 costs $0.02 per hour (yes, 2 cents), this is actually incredibly cheap. Each test run takes well under an hour, so for 20 servers with 1,000 simulated connections each, you can test 20,000 users for 40 cents! To test a million clients, it would run you about 20 bucks. Not bad… You could tweak this to be more frugal by packing more connections per instance, and maybe even leverage Spot instances for a lower hourly rate.
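A quick sanity check of that arithmetic, priced in whole cents to sidestep floating-point noise:

```javascript
// $0.02 (2 cents) per instance-hour, 1000 simulated connections each.
const connectionsPerInstance = 1000;
const centsPerInstanceHour = 2;

function costCents(totalConnections) {
  const instances = Math.ceil(totalConnections / connectionsPerInstance);
  return instances * centsPerInstanceHour;
}

console.log(costCents(20000));    // 20 instances  -> 40 cents
console.log(costCents(1000000));  // 1000 instances -> 2000 cents ($20)
```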

Written by caustik

April 6th, 2012 at 3:28 am