Archive for August, 2012
There’s an awesome utility called F.lux which automatically tweaks your monitor colors throughout the day, making your display easier on the eyes. The problem is that, by default, it doesn’t work with fullscreen games.
Luckily, there is a cool utility called Color Clutch, created to work around the fact that Windows desktop color calibration doesn’t apply to DirectX fullscreen mode. It accomplishes this using function hooking, so it won’t work for all games (strict anti-cheat systems may flag it as a false positive).
To get this working, just download Color Clutch from the website above, then create a batch file with contents like this (adjusting the paths to match your setup):
inject.exe "D:\cclutch\cclutch_ix.dll" patch "C:\Program Files (x86)\Guild Wars 2\gw2.exe"
Now, just run that batch file when you want to launch Guild Wars 2.
If the game you want to use F.lux with uses a different version of DirectX, you’ll need to modify the batch file to point to the matching cclutch_*.dll – easy enough.
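For example, a DirectX 9 title might look something like this – note the DLL name and game path here are hypothetical, so check the filenames that actually ship with your Color Clutch download:

```bat
rem Hypothetical example: DLL name and game path are guesses,
rem substitute the DirectX 9 DLL bundled with Color Clutch.
inject.exe "D:\cclutch\cclutch_9.dll" patch "C:\Games\SomeGame\game.exe"
```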
Recently, myself and a friend did the Half-dome hike in Yosemite, CA.
The last stretch of this hike requires the use of cables to reach the summit. In the past, this area became very crowded, so they’ve since begun using a lottery process to divvy up permits, limiting traffic. You “must” have a permit to reach the summit, and this permit is actually checked at the start of a rocky, steep climb leading up to the cables. So, the permit is really important to get your hands on if you want to do the full hike.
Since this trip was spontaneous, there was no opportunity to participate in the advance lottery. This being the case, our only choice was to attempt to enter the daily lottery for each of the 3 days that would work for us. So, each day we entered the lotto, using the very weak and unreliable cell phone signal to slowly enter credit card info for the submission process.
We didn’t win the lottery, on any of the 3 attempts.
I figured there was still a chance we could get onto the cables somehow – so, we set out to leave the trailhead by 6:30am. We took the Mist Trail, starting from the valley, at a quick pace and without making any significant stops along the way. Toward the tail end of the hike, nearing the start of the permit-checking area, my friend struck up a conversation with a guy and his son. It turned out basically their entire group had bailed on the hike and was still at camp (side note: it’s frustrating to know that, for a lottery with such low odds of winning, some people win and then don’t even use their permits). They offered to let us head up with them, using their extra permit slots.
They were moving at a slower pace, so we made it to the permit station before them. Since our start time was early and our pace fast, we actually reached the permit area before the permit-checking ranger even got there. Another ranger was there instead, and she was telling people: “If you have a permit, you can go ahead – the other ranger will be here on your way down, and will check your permit then.”
Rather than sit around waiting for the other ranger, we headed up without a permit.
So, everything worked out – even though we didn’t have a permit. On the way back down, the permit-checking ranger was there. We just told her the name of the guy, and she let us leave (what could she do, anyway?). Supposedly, you’re meant to stay with your group or they won’t let you up, but obviously it was a moot point since we were already done with the cables. The website actually claims you can receive a significant fine, or even jail time, for going up without a permit – but given the scenario, I think the odds of that happening were about zero.
Recently, I left for about a week on vacation. After coming home, I noticed my PC had crashed. After restarting, it crashed again an hour later. After restarting, it crashed again… an hour later.
After checking the hot-swap bay on my Corsair 800D (bypassing it by connecting the drive directly), re-arranging the power cables to daisy-chain fewer drives on a single cable, swapping SATA cables between the working and broken drives, upgrading the BIOS, and a few other things… I finally came across this forum post about my Crucial SSD’s firmware:
Release Date: 01/13/2012
- Changes made in version 0002 (m4 can be updated to revision 0309 directly from either revision 0001, 0002, or 0009)
- Correct a condition where an incorrect response to a SMART counter will cause the m4 drive to become unresponsive after 5184 hours of Power-on time. The drive will recover after a power cycle, however, this failure will repeat once per hour after reaching this point. The condition will allow the end user to successfully update firmware, and poses no risk to user or system data stored on the drive.
Apparently, after 5184 hours, the default firmware on these drives causes the drive to “disappear” an hour after booting up – due to some S.M.A.R.T. related bug. This timeout doesn’t reset until you power OFF, so the drive will still be missing from the BIOS settings after a crash and soft reboot.
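If you want to know how close a drive is to that threshold, SMART attribute 9 (Power_On_Hours) has the number. Here’s a minimal sketch, assuming smartmontools is installed – the device path is just an example:

```shell
# Extract the raw Power_On_Hours value (SMART attribute 9,
# column 10 of `smartctl -A` output).
power_on_hours() {
  awk '$1 == 9 { print $10 }'
}

# Usage against a real drive (device path is an example):
#   smartctl -A /dev/sda | power_on_hours
```

Anything at or past 5184 hours on the affected firmware is already in the bug window; the firmware update clears it.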
Crucial claims there is no risk of data loss – but that’s completely false, as the associated BSOD / freeze degraded my RAID mirrors almost every time this happened. It also corrupted the TrueCrypt volume I had mounted – every time. The failing drive flushing its own caches isn’t enough to prevent data loss on other drives as an indirect result of the failure!
I never would have guessed, at the start of this problem, that hard drive firmware could be the issue. To be honest, I don’t usually even think to upgrade hard drive firmware… since when did that become a necessary maintenance step?
After a firmware upgrade for both drives (I’m using 2 in a striped RAID), everything is back to stable.
Quite a while ago, I coded this AIM bot, which lets you add, remove, and search bookmarks. It was written using the C interface to Amazon’s SimpleDB and the AIM SDK. It’s pretty basic. Just send an IM to “whutsnu” on AIM, and he’ll reply with the list of commands.
The “export” command hasn’t worked for quite a while, because the domain expired. The other features work, though. This was originally meant to be one part of a larger project.
I’ve decided to ramp up the Node.js experiments and pass the 1 million concurrent connections milestone. It worked, using a swarm of 500 Amazon EC2 test clients, each establishing ~2000 active long-poll COMET connections to a single 15GB Rackspace Cloud server.
This isn’t landing the Mars rover, or curing cancer. It’s just a pretty cool milestone, IMO – hopefully it’s of some benefit to Node.js developers who want to handle a large number of concurrent connections and can use these settings as a starting point in their own projects.
Here’s the connection count as displayed on the sprite’s page:
Here’s a sysctl dumping the number of open file handles (sockets are file handles):
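I’m not showing the exact command from that screenshot here, but on Linux the same counters are exposed as fs.file-nr, readable via sysctl or procfs directly:

```shell
# fs.file-nr reports three numbers: allocated file handles,
# free handles, and the system-wide maximum. Sockets count
# as file handles, so this tracks open connections.
cat /proc/sys/fs/file-nr
```

`sysctl fs.file-nr` prints the same three values.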
Here’s the view of “top” showing system resources in use:
I think it’s pretty reasonable for 1M connections to consume 16GB of memory, but it could probably be trimmed down quite a bit. I haven’t spent any time optimizing that. I’ll leave that for another day.
Here’s a latency test run against the comet URL:
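A quick latency check along these lines can be reproduced with curl’s built-in timing variables – the COMET URL below is a placeholder, not the actual test endpoint:

```shell
# Minimal latency check: prints TCP connect time and total request
# time for a given URL. -o /dev/null discards the response body,
# -s silences the progress meter.
latency() {
  curl -o /dev/null -s -w 'connect: %{time_connect}s  total: %{time_total}s\n' "$1"
}

# e.g.  latency http://example.com/comet   (placeholder URL)
```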
The new tweaks, placed in /etc/sysctl.conf (CentOS) and then reloaded with “sysctl -p”:
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.ipv4.tcp_rmem = 4096 16384 33554432
net.ipv4.tcp_wmem = 4096 16384 33554432
net.ipv4.tcp_mem = 786432 1048576 26777216
net.ipv4.tcp_max_tw_buckets = 360000
net.core.netdev_max_backlog = 2500
vm.min_free_kbytes = 65536
vm.swappiness = 0
net.ipv4.ip_local_port_range = 1024 65535
Other than that, the steps were identical to the steps described in my previous blog posts, except this time using Node.js version 0.8.3.
Here is the server source code, so you can get a sense of the complexity level. Each connected client actively sends messages, for the purpose of verifying the connections are alive. I haven’t pushed that throughput yet to see what data rate can be sustained – since the modest 16GB of memory was already consumed, doing so would likely have caused swapping and meant little. I’ll give it a shot with a higher-memory server next time.