March 2016 Trance Mix

01. Boom Jinx, Daniel Kandi – Azzura (Juventa Vs. Willem de Roo Remix)
02. Juventa – Like These Eyes (Answer42 Remix)
03. Bakke – Birds (Atmos Remix)
04. Solarstone – Twisted Wing feat. Julie Scott (Dave Horne Remix)
05. Alex O’Rion – Satellites (Thomas Coastline Remix)
06. Mino Safy – Patience (Original Mix)
07. Ovnimoon – High Nrg (Original Mix)
08. Spark7, Bec Peters – Pieces Broken (Solid Stone Remix)
09. Alex M.O.R.P.H., Chriss Ortega – Ocean Drive (Protoculture Remix)
10. Estiva – Fame (Masters Series Edit)
11. Alex O’Rion – Changing Pace (Original Mix)
12. Mat Zo – Back In Time (Original Mix)
13. Airwave – The Moment Of Truth (Matt Holliday Remix)
14. Tempo Giusto – Spatter Analysis (Original Mix)
15. Oliver Smith – Cadence (Original Mix)
16. Offshore Wind, Aimoon – Alpha (Original Mix)
17. Aimoon – Snowball (Original Mix)
18. Hudson & Kant – Coconut (Solid Stone Remix)
19. Leon Bolier, LWB – Deep Red (Original Mix)
20. Sunset, Mino Safy – Prometheus (Iversoon & Alex Daf Remix)
21. M6, Klauss Goulart – Hidden Light (Skytech Remix)
22. Matt Hardwick, Mark Pledger – Fallen Tides feat. Melinda Gareh (Mat Zo Vocal Remix)

VST for automating MIDI program changes

There’s a piece of functionality missing from Ableton Live. Say you’ve got an external device whose knobs and other controls you’re using as input to Ableton Live through the MIDI Learn functionality. This works great for controlling things like volume levels, effect parameters, etc. But oddly enough, you can’t use this method to control program changes on an instrument. I searched the internet for a few hours trying to find a solution, and finally decided to just program it myself. So, a couple of hours later, introducing the “RGProgram” VST plugin. Each time you change any one of its values (Program, Bank, Sub-bank), a MIDI program change message is sent out from the plugin. You can then use the routing in Ableton to send that MIDI output to whichever device(s) you’d like the program changes sent to, and map the plugin parameters to whatever MIDI controller you want.
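For reference, the MIDI messages involved look roughly like this – a minimal sketch of the usual Bank Select (CC 0 / CC 32) followed by Program Change sequence, assuming that’s how the Bank and Sub-bank parameters map (this is not the plugin’s actual source):

    // Sketch only: builds the three raw MIDI messages for a bank + program change.
    // Assumes Bank maps to Bank Select MSB (CC 0) and Sub-bank to LSB (CC 32).
    function programChangeMessages(channel, bank, subBank, program) {
      return [
        [0xB0 | channel, 0x00, bank & 0x7F],    // Control Change 0:  Bank Select MSB
        [0xB0 | channel, 0x20, subBank & 0x7F], // Control Change 32: Bank Select LSB
        [0xC0 | channel, program & 0x7F]        // Program Change
      ];
    }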

I’ve only tested this in Ableton Live, so apologies if there are any compatibility issues.

RGProgramInstaller.exe (32-bit installer)
RGProgram.dll (or you can just copy the plugin to your VST folder manually)

Crucial SSD firmware bug, crashes/BSOD

Recently, I left for about a week on vacation. After coming home, I noticed this PC had crashed. After restarting, it crashed again an hour later. After restarting, it crashed again… an hour later.

After checking the hot-swap bay on my Corsair 800D (bypassing it by connecting directly), re-arranging the power cables to daisy-chain fewer drives on a single cable, swapping SATA cables between the working and broken drives, upgrading the BIOS, and a few other things, I finally came across this forum post for my Crucial SSD’s firmware:

http://forum.crucial.com/t5/Solid-State-Drives-SSD/Firmware-Update-Notifications/td-p/57854

Release Date: 01/13/2012
Change Log:

  • Changes made in version 0002 (m4 can be updated to revision 0309 directly from either revision 0001, 0002, or 0009)
  • Correct a condition where an incorrect response to a SMART counter will cause the m4 drive to become unresponsive after 5184 hours of Power-on time. The drive will recover after a power cycle, however, this failure will repeat once per hour after reaching this point. The condition will allow the end user to successfully update firmware, and poses no risk to user or system data stored on the drive.

Apparently, after 5184 hours of power-on time (216 days), the default firmware on these drives causes the drive to “disappear” an hour after booting up, due to a S.M.A.R.T.-related bug. This timeout doesn’t reset until you power OFF, so the drive will still be missing from the BIOS settings after a crash and soft reboot.

Crucial claims there is no risk of data loss – but that’s completely false, as the associated BSOD / freeze degraded my RAID mirrors almost every time this happened. It also corrupted the TrueCrypt volume I had mounted – every time. The failing drive flushing its own caches isn’t enough to prevent data loss on other drives as an indirect result of the failure!

I would have never guessed, at the start of this problem, that hard drive firmware could have been the issue. To be honest, I don’t usually even think to upgrade hard drive firmware… since when did that become a necessary maintenance step?

After a firmware upgrade for both drives (I’m using 2 in a striped RAID), everything is back to stable 🙂

AIM bookmark bot

I coded this AIM bot quite a while ago; it lets you add, remove, and search bookmarks. It was written using the C interface to Amazon’s SimpleDB and the AIM SDK. It’s pretty basic. Just send an IM to “whutsnu” on AIM, and he’ll reply with the list of commands.

The “export” command hasn’t worked for quite a while, because the domain expired. The other features work, though. This was originally meant to be one part of a larger project.

Node.js w/1M concurrent connections!

I’ve decided to ramp up the Node.js experiments, and pass the 1 million concurrent connections milestone. It worked, using a swarm of 500 Amazon EC2 test clients, each establishing ~2000 active long-poll COMET connections to a single 15GB Rackspace cloud server.

This isn’t landing the Mars rover, or curing cancer. It’s just a pretty cool milestone IMO, and hopefully these settings are a useful starting point for other Node.js developers who need to handle a large number of concurrent connections in their own projects.

Here’s the connection count as displayed on the sprite’s page:

Here’s a sysctl dumping the number of open file handles (sockets are file handles):
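(The screenshot isn’t reproduced here; on Linux, a count like that can be read with something along these lines – an assumption about the exact command used, where the first number reported is the count of allocated file handles:)

    sysctl fs.file-nr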

Here’s the view of “top” showing system resources in use:

I think it’s pretty reasonable for 1M connections to consume 16GB of memory – that works out to roughly 16KB per connection – but it could probably be trimmed down quite a bit. I haven’t spent any time optimizing that; I’ll leave it for another day.

Here’s a latency test run against the comet URL:

The new tweaks, placed in /etc/sysctl.conf (CentOS) and then reloaded with “sysctl -p”:

net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.ipv4.tcp_rmem = 4096 16384 33554432
net.ipv4.tcp_wmem = 4096 16384 33554432
net.ipv4.tcp_mem = 786432 1048576 26777216
net.ipv4.tcp_max_tw_buckets = 360000
net.core.netdev_max_backlog = 2500
vm.min_free_kbytes = 65536
vm.swappiness = 0
net.ipv4.ip_local_port_range = 1024 65535

Other than that, the steps were identical to the steps described in my previous blog posts, except this time using Node.js version 0.8.3.

Here is the server source code, so you can get a sense of the complexity level. Each connected client actively sends messages, to verify the connections stay alive. I haven’t pushed that throughput yet to see what data rate can be sustained – since the modest 16GB of memory was already consumed, doing so would likely have caused swapping and meant little. I’ll give it a shot with a higher-memory server next time.
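(The full source was attached to the original post. Purely as an illustration of the general shape of such a server – hypothetical code, not the actual test server – a minimal long-poll handler looks something like this:)

    // Minimal long-poll sketch: each client GETs /poll and the response is held
    // open until the next broadcast. Hypothetical code, not the tested server.
    var http = require('http');

    var waiting = []; // responses currently parked, waiting for the next message

    http.createServer(function (req, res) {
      if (req.url === '/poll') {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        waiting.push(res); // park the response until the next broadcast
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(8080);

    // Complete every parked long-poll once per second with a small heartbeat.
    setInterval(function () {
      var clients = waiting;
      waiting = [];
      for (var i = 0; i < clients.length; i++) {
        clients[i].end(JSON.stringify({ t: Date.now() }));
      }
    }, 1000);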


Pixel Scaling (Nearest Neighbor, Scale2X, HQX)

I decided to get HQX to compile on Win64, so it can be used by sprites. The author didn’t reply to my email, so I just decided to hack it up. It’s open source, after all.

It wasn’t too bad – just a bunch of small tweaks to get it to compile using Visual Studio instead of MinGW. And since there was already some code sitting around for the Scale2X algorithm, I took a few screenshots to see how the two compare with Nearest Neighbor scaling.
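(For reference, the core Scale2X rule itself is simple enough to sketch in a few lines – this is a generic illustration of the algorithm, not the code used here:)

    // Scale2X (AdvMAME2x): each source pixel E expands into a 2x2 block.
    // B, D, F, H are the pixels above, left of, right of, and below E.
    // Generic sketch on a flat array of pixel values; edges are clamped.
    function scale2x(src, w, h) {
      var dst = new Array(w * 2 * h * 2);
      for (var y = 0; y < h; y++) {
        for (var x = 0; x < w; x++) {
          var E = src[y * w + x];
          var B = src[Math.max(y - 1, 0) * w + x];
          var H = src[Math.min(y + 1, h - 1) * w + x];
          var D = src[y * w + Math.max(x - 1, 0)];
          var F = src[y * w + Math.min(x + 1, w - 1)];
          var E0 = E, E1 = E, E2 = E, E3 = E;
          if (B !== H && D !== F) {
            E0 = (D === B) ? D : E;
            E1 = (B === F) ? F : E;
            E2 = (D === H) ? D : E;
            E3 = (H === F) ? F : E;
          }
          var o = (y * 2) * (w * 2) + (x * 2);
          dst[o] = E0; dst[o + 1] = E1;
          dst[o + w * 2] = E2; dst[o + w * 2 + 1] = E3;
        }
      }
      return dst;
    }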

Nearest Neighbor

Scale2X

HQX

I can’t really decide which I prefer. Nearest Neighbor always looks a bit too jagged for my taste, but some sprites don’t look right with HQX. Scale2X does a good job of retaining the classic look, but it’s not without artifacts.

There were a few researchers at Microsoft who wrote a paper on a really cool scaling algorithm, called Depixelizing Pixel Art. The sad part is, they didn’t release any source code or a library. So it’s pretty much just something to look at and say “Oh. That’s nice. Too bad none of us can actually use it”…

I’d love to give their algorithm a shot to see how it stacks up against the others. I honestly don’t see the point of creating an elaborate algorithm and presentation, then doing nothing with it. Maybe it has some use case at Microsoft, I suppose.

So, which do you prefer? Do you have a favorite scaling method outside of these 3?

Escape the 1.4GB V8 heap limit in Node.js!

(continued from the sprites scalability experiments [250k] [100k])

Well, I finally found a workaround for the 1.4GB limit imposed on the V8 heap. This limitation was at first a “hard” limit, and after some tweaks it became a “soft” limit. Now, the combined tweaks listed below have removed the limit entirely! YESSSSSS!!

The first two were already mentioned in my previous few blog articles, and the new ones (#3 and #4) finally open up the possibility of utilizing the real capacity of your server hardware.

1) ulimit -n 999999

This effectively increases the number of sockets Node.js can have open, since each socket is a file descriptor. You won’t get far without it, as the default tends to be around 1024.

2) --nouse-idle-notification

This is a command line parameter you can pass to node. It gets passed along to the V8 engine, and will prevent it from constantly running that darn garbage collector. IMO, you can’t do a real-time server with constant ~4 second latency hiccups every 30 seconds or so.

You might want to also use the flag “--expose-gc”, which will enable the “gc();” function in your server JavaScript code. You can then tie this to an admin mechanism, so you will retain the power to trigger garbage collection at any time you want, without having to restart the server. For the most part, if you don’t leak Objects all over the place, you won’t really need to do this often, or at all. Still, it’s useful to have the capability.
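(As a sketch of that kind of admin mechanism – hypothetical, with a made-up /admin/gc route – it can be as simple as:)

    // Requires node to be started with --expose-gc so that global.gc() exists.
    // Tiny admin endpoint, bound to localhost only, that triggers a manual GC.
    var http = require('http');

    http.createServer(function (req, res) {
      if (req.url === '/admin/gc' && typeof global.gc === 'function') {
        global.gc();
        res.end('gc triggered\n');
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(9090, '127.0.0.1');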

3) --max-old-space-size=8192

This one you can tweak to fit your particular server; the value is in MB. I chose 8GB because my expectation is that 4GB will be plenty, and 8GB was just for good measure. You may also consider using “--max-new-space-size=2048” (measured in KB, as opposed to the other). I don’t believe that one is nearly as critical, though.
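(Putting 1) through 3) together, the launch ends up looking something like this – “server.js” is just a placeholder name:)

    ulimit -n 999999
    node --nouse-idle-notification --expose-gc --max-old-space-size=8192 server.js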

4) Compile the latest V8 source code, with two modifications

The current node distribution isn’t using the new V8 engine which has significant improvements to garbage collection and performance in general. It’s worth upgrading, if you need more memory and better performance. Luckily, upgrading is really easy.

cd node-v0.6.14/deps/v8
rm -rf *
svn export http://v8.googlecode.com/svn/tags/3.10.0.5 .

You can replace the version numbers as appropriate. There may be new versions of either node and/or V8 at the time you are reading this.

Next, you’ll need to add a single line to “SConstruct” inside the V8 directory:

    'CPPPATH': [src_dir],
    'CPPDEFINES': ['V8_MAX_SEMISPACE_SIZE=536870912'],

The second line above is new. Basically, you just need to set that V8_MAX_SEMISPACE_SIZE define to 512MB (536870912 bytes). It will otherwise default to a much lower value, causing frequent garbage collection to trigger, depending on the memory characteristics of your server JS.

Next, comment out the call to “CollectAllGarbage” in “V8/heap-inl.h”:

    if (amount_since_last_global_gc > external_allocation_limit_) {
      //CollectAllGarbage(kNoGCFlags, "external memory allocation limit reached");
    }

This was done “just in case” because it costs me money each time I run a scaling test. I wanted to be damn sure it was not going to trigger the external allocation limit GC!

That’s it! You will now want to rebuild both V8 and node. I would recommend doing a clean build of both, since replacing the entire V8 source may otherwise fail to take effect.
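(For node 0.6.x that boils down to roughly the following, run from the node source root – treat it as a sketch and adjust to your setup:)

    ./configure
    make clean
    make
    sudo make install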

While writing this, I’ve got a server log open showing 2281 MB of V8 heap usage – far beyond the normal 1.4 GB limitation. Garbage collection is behaving, and the server remains VERY responsive and low latency. In addition to the 250k concurrent and active connections, each of those nodes is also sending a sprite every 5 seconds.

There is still CPU to spare; the only limitation keeping me from trying 500k concurrent is my Amazon EC2 quota, which I already had to request an increase on just to do 250k.

OK – Now, go eat up all the memory you want on your Node servers 🙂