AIM bookmark bot

I coded this AIM bot quite a while ago; it lets you add, remove, and search bookmarks. It was written using the C interface to Amazon’s SimpleDB and the AIM SDK. It’s pretty basic: just send an IM to “whutsnu” on AIM, and he’ll reply with the list of commands.

The “export” command hasn’t worked for quite a while, because the domain expired. The other features work, though. This was originally meant to be one part of a larger project.

Node.js w/1M concurrent connections!

I’ve decided to ramp up the Node.js experiments and pass the 1 million concurrent connections milestone. It worked, using a swarm of 500 Amazon EC2 test clients, each establishing ~2000 active long-poll COMET connections to a single 15GB Rackspace cloud server.

This isn’t landing the Mars rover or curing cancer. It’s just a pretty cool milestone IMO, and hopefully it’s of some benefit to Node developers who want to handle a large number of concurrent connections and can use these settings as a starting point in their own projects.

Here’s the connection count as displayed on the sprites page:

Here’s a sysctl dumping the number of open file handles (sockets are file handles):

Here’s the view of “top” showing system resources in use:

I think it’s pretty reasonable for 1M connections to consume 16GB of memory, but it could probably be trimmed down quite a bit. I haven’t spent any time optimizing that. I’ll leave that for another day.

Here’s a latency test run against the comet URL:

The new tweaks, placed in /etc/sysctl.conf (CentOS) and then reloaded with “sysctl -p” :

net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.ipv4.tcp_rmem = 4096 16384 33554432
net.ipv4.tcp_wmem = 4096 16384 33554432
net.ipv4.tcp_mem = 786432 1048576 26777216
net.ipv4.tcp_max_tw_buckets = 360000
net.core.netdev_max_backlog = 2500
vm.min_free_kbytes = 65536
vm.swappiness = 0
net.ipv4.ip_local_port_range = 1024 65535

Other than that, the steps were identical to the steps described in my previous blog posts, except this time using Node.js version 0.8.3.

Here is the server source code, so you can get a sense of the complexity level. Each connected client is actively sending messages, for the purpose of verifying the connections are alive. I haven’t pushed that throughput yet to see what data rate can be sent; since the modest 16GB of memory was already consumed, doing so would likely have caused swapping and the results would have meant little. I’ll give it a shot with a higher-memory server next time.
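To give a rough idea of the shape of that source, here’s a minimal, hypothetical long-poll COMET sketch with a connection counter and a periodic keep-alive flush (not the actual sprites server, just an illustration of the pattern):

    // Minimal long-poll COMET sketch (hypothetical, not the actual sprites server).
    var http = require('http');

    var waiting = [];  // long-poll responses currently being held open

    http.createServer(function (req, res) {
      if (req.url === '/comet') {
        // Hold the response open until the next broadcast.
        waiting.push(res);
        // Drop it if the client disconnects early.
        req.on('close', function () {
          var i = waiting.indexOf(res);
          if (i !== -1) waiting.splice(i, 1);
        });
      } else if (req.url === '/count') {
        res.end(JSON.stringify({ connections: waiting.length }));
      } else {
        res.end('ok');
      }
    }).listen(8080);

    // Every 5 seconds, answer all held polls with a small JSON message,
    // which doubles as verification that the connections are alive.
    setInterval(function () {
      var held = waiting;
      waiting = [];
      held.forEach(function (res) {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ t: Date.now() }));
      });
    }, 5000);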


Trance Evolution 001

Here’s a mix with some older trance tracks (2006-2009)

Trance Evolution 001 [Listen]

01. Chris Corrigan pres Velvet Skies – Uplifter (Mike Koglin Re-edit)
02. Catching Dreams – Timeless (DJ Ray Remix)
03. Paul Davies – Ironic Perception (Luke Terry pres. Akemi Remix)
04. Jon O’Bir – Answers (Original Mix)
05. Kyau & Albert – Velvet Morning (Mirco De Govia Remix)
06. Marcel Woods – Advanced (Woods Re-visited Remix)
07. Mike Koglin vs Seventh Heaven – Sanctuary
08. Smith & Pledger – Believe (Smith & Pledger’s 2004 Mix)
09. Danjo & Styles – What Lies Ahead (Estuera Remix)
10. Rapid Sense – Insight (Original Mix)
11. Evbointh – One Wish (Daniel Kandi Remix)
12. Spirit & Dave – Afterglow (Original Mix)
13. Super8 & Tab – Delusion (Original Mix)

Dry ice

Dry ice w/cup

[Video: http://www.youtube.com/v/qlx-5LnnJPY]

Dry ice w/soap

[Video: http://www.youtube.com/v/HCLV-p0i4ww]

That is all.. 😛

May 2012 Electro House Mix

Caustik – May 2012 Electro House Mix [ Download MP3 ]

01. Hirshee – Bang This (Original Mix)
02. Hypster – Nitro Party Music (Heren Remix)
03. Neologic vs Beat Hunters – Logical Beats (Darth & Vader Remix)
04. Andrey Sher – Exodus (Original Mix)
05. Fatso – Nightlife (Original Mix)
06. The Slag – Crackling Vinyl (Original Mix)
07. Darth & Vader & Perfect Cell – Cellbacca (Original Mix)
08. Asser – Evil (Original Mix)
09. Ben Coda, Ad Brown – Rinse & Repeat (Magitman Remix)
10. Paul Oakenfold & Marco V – Groove Machine (Gareth Wyn Remix)
11. Dragon and Jontron – Sriracha (Original Mix)
12. Swen Weber – Back To Rave (Original Mix)
13. Porter Robinson – The Seconds (feat. Jano)
14. Digitalchord – Digicollapse (Paulo Pacco Remix)
15. Digitalchord – Digicollapse (Divine X Remix)

[audio:http://caustik.com/mixes/May%202012%20Electro%20House%20Mix.mp3]

Pixel Scaling (Nearest Neighbor, Scale2X, HQX)

I decided to get HQX to compile on Win64, so it can be used by sprites. The author didn’t reply to my email, so I just decided to hack it up. It’s open source, after all.

It wasn’t too bad, just a bunch of random tweaks to get it to compile using Visual Studio instead of MinGW. So, since there was already some code sitting around for the Scale2X algorithm, I took a few screenshots to see how the two compare with Nearest Neighbor scaling.
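For reference, the Scale2X rules themselves are tiny; here’s a rough JavaScript sketch (hypothetical code operating on a flat array of packed pixel values, not the implementation used by sprites):

    // Rough Scale2X (AdvMAME2x) sketch; src is a flat w-by-h array of pixel values.
    // Returns a flat (2w)-by-(2h) array. Hypothetical, for illustration only.
    function scale2x(src, w, h) {
      var dw = w * 2;
      var dst = new Array(dw * h * 2);
      for (var y = 0; y < h; y++) {
        for (var x = 0; x < w; x++) {
          var E = src[y * w + x];
          // Neighbors above/below/left/right, clamped at the image edges.
          var B = src[Math.max(y - 1, 0) * w + x];
          var H = src[Math.min(y + 1, h - 1) * w + x];
          var D = src[y * w + Math.max(x - 1, 0)];
          var F = src[y * w + Math.min(x + 1, w - 1)];
          // Expand E into a 2x2 block, bending edges where adjacent neighbors agree.
          var E0 = (D === B && B !== F && D !== H) ? D : E;
          var E1 = (B === F && B !== D && F !== H) ? F : E;
          var E2 = (D === H && D !== B && H !== F) ? D : E;
          var E3 = (H === F && D !== H && B !== F) ? F : E;
          var dy = y * 2, dx = x * 2;
          dst[dy * dw + dx]           = E0;
          dst[dy * dw + dx + 1]       = E1;
          dst[(dy + 1) * dw + dx]     = E2;
          dst[(dy + 1) * dw + dx + 1] = E3;
        }
      }
      return dst;
    }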

Nearest Neighbor

Scale2X

HQX

I can’t really decide which I prefer. Nearest Neighbor always looks a bit too jagged for my tastes, but some sprites don’t look right with HQX. Scale2X does a good job retaining the classic look, but it’s not without artifacts.

There were a few researchers at Microsoft who wrote a paper on a really cool scaling algorithm, called Depixelizing Pixel Art. The sad part is, they didn’t release any source code or a library. So it’s pretty much just something to look at and say “Oh. That’s nice. Too bad none of us can actually use it” ..

I’d love to give their algorithm a shot to see how it stacks up against the others. I honestly don’t know what the point is to create an elaborate algorithm and presentation, then do nothing with it. Maybe it has some use case at Microsoft, I suppose.

So, which do you prefer? Do you have a favorite scaling method outside of these 3?

Escape the 1.4GB V8 heap limit in Node.js!

(continued from the sprites scalability experiments [250k] [100k])

Well, I finally found a workaround for the 1.4GB limit imposed on the V8 heap. This limitation was at first a “hard” limit, and after some tweaks it became a “soft” limit. Now, the combined tweaks listed below have removed the limit entirely! YESSSSSS!!

The first two were already mentioned in my previous few blog articles, and the new ones (#3 and #4) finally open up the possibility of utilizing the real capacity of your server hardware.

1) ulimit -n 999999

This effectively increases the number of sockets Node.js can have open. You won’t get far without it, as the default tends to be around 1024.

2) --nouse-idle-notification

This is a command line parameter you can pass to node. It gets passed along to the V8 engine, and will prevent it from constantly running that darn garbage collector. IMO, you can’t do a real-time server with constant ~4 second latency hiccups every 30 seconds or so.

You might want to also use the flag “--expose-gc”, which will enable the “gc();” function in your server JavaScript code. You can then tie this to an admin mechanism, so you will retain the power to trigger garbage collection at any time you want, without having to restart the server. For the most part, if you don’t leak Objects all over the place, you won’t really need to do this often, or at all. Still, it’s useful to have the capability.
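As a rough illustration of what that admin mechanism could look like (hypothetical endpoint and port, assuming the process was launched with --expose-gc):

    // Hypothetical admin hook; requires launching with: node --expose-gc server.js
    var http = require('http');

    http.createServer(function (req, res) {
      if (req.url === '/admin/gc' && typeof gc === 'function') {
        gc();  // force a full garbage collection right now
        res.end('gc triggered\n');
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(9090);  // separate admin port; keep it firewalled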

3) --max-old-space-size=8192

You can tweak this to fit your particular server; the value is in MB. I chose 8GB because my expectation is that 4GB is going to be plenty, and 8GB was just for good measure. You may also consider using “--max-new-space-size=2048” (measured in KB, as opposed to the other). I don’t believe that one is nearly as critical, though.
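An easy way to confirm the flags are actually taking effect is to log the heap size from inside the process; a quick sketch using the standard process.memoryUsage() call (the interval and formatting are just my own choices):

    // Periodically log V8 heap usage, to verify the heap can grow past 1.4GB.
    setInterval(function () {
      var m = process.memoryUsage();
      console.log('rss=' + Math.round(m.rss / 1048576) + 'MB' +
                  ' heapTotal=' + Math.round(m.heapTotal / 1048576) + 'MB' +
                  ' heapUsed=' + Math.round(m.heapUsed / 1048576) + 'MB');
    }, 30000);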

4) Compile the latest V8 source code, with two modifications

The current node distribution isn’t using the new V8 engine which has significant improvements to garbage collection and performance in general. It’s worth upgrading, if you need more memory and better performance. Luckily, upgrading is really easy.

cd node-v0.6.14/deps/v8
rm -rf *
svn export http://v8.googlecode.com/svn/tags/3.10.0.5 .

You can replace the version numbers as appropriate. There may be new versions of either node and/or V8 at the time you are reading this.

Next, you’ll need to add a single line to “SConstruct” inside the V8 directory.

    'CPPPATH': [src_dir],
    'CPPDEFINES': ['V8_MAX_SEMISPACE_SIZE=536870912'],

The second line above is new. Basically, you just need to set that V8_MAX_SEMISPACE_SIZE definition. It will otherwise default to a much lower value, causing frequent garbage collection to trigger, depending on the memory characteristics of your server JS.

Next, comment out the call to “CollectAllGarbage” in “V8/heap-inl.h”:

    if (amount_since_last_global_gc > external_allocation_limit_) {
      //CollectAllGarbage(kNoGCFlags, "external memory allocation limit reached");
    }

This was done “just in case” because it costs me money each time I run a scaling test. I wanted to be damn sure it was not going to trigger the external allocation limit GC!

That’s it! You will now want to rebuild both V8 and node. I would recommend doing a clean build for both, since replacing the entire V8 source may otherwise fail to take effect.

While writing this, I’ve got a server log open with 2281 MB of V8 heap usage. That’s far beyond the normal 1.4 GB limitation. The garbage collection is behaving, and the server remains VERY responsive with low latency. In addition to the 250k concurrent and active connections, each of those nodes is also sending a sprite every 5 seconds.

There is still CPU to spare; the only limitation keeping me from trying 500k concurrent is my Amazon EC2 quota, which I already had to request an increase on just to do 250k.

OK – Now, go eat up all the memory you want on your Node servers 🙂

Node.js w/250k concurrent connections!

Okay, I figured it must be possible, so it had to be done..

The sprites server has smashed through the 250k barrier with 250,001 concurrent, active connections. In addition to the stuff talked about in my previous two blog entries, it took a few more optimizations.

1) Latest tagged revision of V8, which appears to perform a little better

The 250k limit is right on the fringe of what this server can pull off without violating the 1.4GB heap limitation in V8. It’s not clear to me why this limitation hasn’t bubbled up in priority enough to be taken care of yet. Just look at those free CPU cycles and unused memory! V8 is a complete bad-ass, and it’s just this one limitation that is holding it back from some really extreme capabilities.

I had tried 250k a few times, and couldn’t get it stable until after upgrading the /deps/v8/ directory of Node.js with the latest version tagged in SVN. I was really hoping the 1.4GB limit had been removed, but alas. Instead, I just had to settle for some significant improvements to garbage collection and performance in general.

2) Workers via “cluster” module

I figured the onslaught of HTTP GET requests the 100 EC2 servers were unleashing at a rate of 100,000 JSON packets per second was contributing to both CPU and memory consumption, agitating the garbage collector enough to keep the server from reaching the 250k mark.

So, to reduce the overhead of these transient requests, which come in addition to the 250k concurrent connections (really leaving us at about 350k connections at any given second on average, 100k of which are transients), I decided to leverage the cluster module.

The master performs the exact same tasks it did before, except now there are workers spawned, one for each CPU on the system, listening on a separate port from the master process. This port is used by the clients for these transient requests, and as they arrive they are parsed and forwarded to the master using Node.js’s inter-process send() function.

The only reason this was a critical adjustment is, again, the 1.4GB heap limitation in V8. Keeping all the resources associated with those requests off the master process saves just enough memory to get past the 250k milestone.
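A hedged sketch of that master/worker split (hypothetical ports and message format, not the actual sprites code):

    // Hypothetical sketch of the master/worker arrangement described above.
    var cluster = require('cluster');
    var http = require('http');
    var os = require('os');

    if (cluster.isMaster) {
      // The master keeps the long-poll connections and the topology, as before.
      for (var i = 0; i < os.cpus().length; i++) {
        var worker = cluster.fork();
        worker.on('message', function (msg) {
          // A parsed transient request forwarded from a worker;
          // apply it to the topology / held long-poll responses here.
        });
      }
      http.createServer(function (req, res) {
        // ... long-poll COMET handling on the master's port ...
        res.end('ok');
      }).listen(8080);
    } else {
      // Workers absorb the transient HTTP GETs on a separate, shared port.
      http.createServer(function (req, res) {
        process.send({ url: req.url });  // hand the parsed request to the master
        res.end('ok');
      }).listen(8081);
    }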

TL;DR – V8, you’re so good, but it’s time for you to support more memory!

Scaling node.js to 100k concurrent connections!

UPDATE: Broke the 250k barrier, too :]

The node.js powered sprites fun continues, with a new milestone:

That’s right, 100,004 active connections! Note the low %CPU and %MEM numbers in the picture. To be fair, the CPU usage does wander between about 5% and 40% – but it’s also not a very beefy box. This is on a $0.12/hr Rackspace 2GB cloud server.

Each connection simulates sending a single sprite every 5 seconds. The destination for each sprite is randomized to an equal distribution across all nodes. This means there is traffic of 20,000 sprites per second, which amounts to 40,000 JSON packets per second. This doesn’t even include the keep-alive pings which occur on a 2-minute interval per connection.

At this scale, the sprite network topology remains very responsive. Testing with my desktop PC neighboring my laptop, a sprite thrown off the desktop’s screen arrives at the laptop so fast that I can’t gauge any latency at all.

Here are a few key tweaks which contribute to this performance:

1) Nagle’s algorithm is disabled

If you’re familiar at all with real-time network programming, you’ll recognize this algorithm as a common socket tweak. This makes each response leave the server much quicker.

The tweak is available through the Node.js API “socket.setNoDelay”, which is set on each long-poll COMET connection’s socket.
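In code it amounts to a single line per accepted connection; a minimal hypothetical example:

    // Disable Nagle's algorithm on each long-poll connection's socket,
    // so small JSON responses are flushed immediately instead of being buffered.
    var http = require('http');

    http.createServer(function (req, res) {
      req.socket.setNoDelay(true);
      // ... hold the response open for the long-poll as usual ...
    }).listen(8080);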

2) V8’s idle garbage collection is disabled via “--nouse-idle-notification”

This was critical, as the server pre-allocates over 2 million JS Objects for the network topology. If you don’t disable idle garbage collection, you’ll see a full second of delay every few seconds, which would be an intolerable bottleneck to scalability and responsiveness. The delay appears to be caused by the garbage collector traversing this list of objects, even though none of them are actually candidates for garbage collection.
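For a sense of scale, this is the kind of long-lived pre-allocation that keeps the collector busy (hypothetical structure and count, launched with the flag above):

    // Launched with: node --nouse-idle-notification server.js
    // Hypothetical illustration of a large pre-allocated topology; these objects
    // live forever, yet an idle GC cycle still has to traverse every one of them.
    var NODE_COUNT = 2000000;
    var topology = new Array(NODE_COUNT);
    for (var i = 0; i < NODE_COUNT; i++) {
      topology[i] = { id: i, neighbors: [], pending: null };
    }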

I’m eager to experiment further by scaling this up to 250k connections. The only thing keeping that test from being run is the quota on my Amazon EC2 account, which is limiting the number of simulated clients I can run simultaneously. They have responded to my request to increase the quota, but sadly it hasn’t taken effect yet.

The sprites source code, both client and server, is available via Subversion. The repository URLs are provided on the sprites web site.

http://sprites.caustik.com/

For more information about the testing and tweaks involved in scaling the server, check my previous post Node.js scalability testing with EC2.

Node.js scalability testing with EC2

sprites server

The sprites project leverages Node.js to implement its server logic. The server is surprisingly simple JavaScript code which accepts long-poll COMET HTTP connections from its C++ clients in order to push JSON-formatted sprite information in real time. Sprites can be thrown from one client’s desktop to another via HTTP. The server maintains a network topology (which is worth a separate post in itself; it turned out to be an interesting algorithm), which is then used to send the posted sprite to the appropriate neighbor.
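A hedged sketch of that push path (hypothetical routes and a trivially simplified neighbor pick; the real topology logic is the interesting part mentioned above):

    // Hypothetical sketch: throw a sprite and deliver it to a neighbor's held long-poll.
    var http = require('http');
    var url = require('url');

    var clients = {};  // clientId -> { neighbors: [ids], pending: res or null }

    function pushSprite(toId, sprite) {
      var c = clients[toId];
      if (c && c.pending) {
        c.pending.end(JSON.stringify(sprite));  // answer the held COMET request
        c.pending = null;
      }
    }

    http.createServer(function (req, res) {
      var u = url.parse(req.url, true);
      var id = u.query.id;
      clients[id] = clients[id] || { neighbors: [], pending: null };

      if (u.pathname === '/poll') {
        clients[id].pending = res;  // hold open until a sprite arrives for this client
      } else if (u.pathname === '/throw') {
        // The real server picks the destination from the network topology;
        // here we just pick one of the client's neighbors at random.
        var n = clients[id].neighbors;
        var target = n[Math.floor(Math.random() * n.length)];
        pushSprite(target, { from: id, x: u.query.x, y: u.query.y });
        res.end('ok');
      } else {
        res.end('ok');
      }
    }).listen(8080);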

The asynchronous sockets in Node.js scale very well. Combine this with the JavaScript being interpreted by the blazing fast V8 engine, and you have the foundation of a very simple and flexible server platform which scales mightily.

So, how well does it scale?

In order to test that, I decided to leverage Amazon EC2 to run a bunch of fake clients to simulate network usage. Each EC2 instance can make thousands of connections to the sprites server, and simulate throwing sprites around.

To implement this stress test, I first booted up a single instance with one of the default Linux templates. From there it only takes 5 minutes to install a basic build environment using “yum install subversion make gcc-c++” — after that, I modified /etc/rc.local to do a couple things:

1) ulimit -n 999999

This is necessary to get around the default limit on file handles, which applies to sockets, and would otherwise prevent the instance from making more than about 1024 connections.

This command must also be used on the Node.js server, which you don’t really see documented anywhere! If you don’t increase the file handle limit on your Node.js server, somewhere down the line you are going to run into clients being unable to connect, along with the server suddenly deciding to peg the CPU at 100% and behave strangely.

I’ve seen all sorts of posts around the net from people struggling to figure out why their Node.js server doesn’t scale quite as high as they think it should. Well, if you’re not increasing your file handle limit, that would definitely be one explanation. Aside from that, it’s obviously critical to write very performance-conscious JavaScript code.

2) svn update the test working directory

This is done so that each instance is always up to date with the latest test code after a reboot. This greatly simplifies scalability stress testing. You just start and stop instances using the EC2 console to magically test the latest code at whatever scale you want. You can literally test a million concurrent connections using this technique (though I’m pending a quota increase on EC2 to actually give this a try; they stop you after 20 instances by default).

3) cd into the test working directory, build, and run the test

That’s it! I use a Windows development machine to create the cross-platform test code, and whenever a change is made, I commit it to Subversion and recreate/reboot all the EC2 instances. They automatically update and swarm the server. It’s a beautiful thing 🙂

Since each instance on EC2 costs $0.02 (yes, 2 cents), this is actually incredibly cheap. Each test run takes well under an hour, so for 20 servers with 1000 simulated connections each, you can test 20,000 users for 40 cents! To test a million clients, it would run you about 20 bucks. Not bad… You could tweak this to be more frugal by adding more connections per-instance, and maybe even leverage Spot instances for a lower rate per-hour.