…one man's contribution to the Weeeeerly Wild World
My last SETI@home update was back in October, when I lost one of my Nvidia 560 Ti GPUs. Strangely, I've since had another GPU fail in the same manner, this time an Nvidia 460 – one of my lowest performers (I have, or had, two 460s) but still a shame to see it go. I listed both on eBay, described their faults, and was shut of them.
More recently I've had a fan failure in a PSU, which caused it to overheat and shut down – preventing further damage, but still requiring an RMA. The first strange thing was that the Corsair PSU was only a couple of months old. I contacted Scan to arrange a return and frustratingly heard nothing back, so I then contacted Corsair directly and promptly received instructions to return the part; the second strange thing is that Corsair's UK returns centre turns out to be shared with Scan. Perhaps the PSU just needs its fan swapping out, but even to save myself the cost of the return I wasn't prepared to void the warranty by lifting the cover to try. For a PSU that's out of warranty I would have fitted another fan, but one shouldn't take poking about inside a PSU too lightly – there can be dangerous electricity lurking within. Luckily I had a spare PSU to hand, so I could keep all my rigs running.
January saw the arrival of the setiathome v.8 app, which "gives us the ability to process data from multiple sources, including the Green Bank Telescope. That means we'll be ready for data from Breakthrough Listen when it's available." I suppose I could read up a little more on this to really understand what it means! Sadly there was no GPU work for a while until that point, so it was a case of either switching everything off or running only CPU work. I opted for the latter; I could have taken the opportunity to save some electricity, but it's winter here and my house was getting chilly, so I kept all rigs on and optimised them to run SETI@home on all CPU cores (the usual best practice is to limit the number of CPU cores crunching work units in order to leave capacity to feed the GPUs). The idea was that my RAC (Recent Average Credit) wouldn't drop off quite so dramatically, and I'd have a little head start over those who are normally optimised for GPU crunching and who stopped crunching for the time being.
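For anyone wondering how the core-limiting is done without touching the website preferences, BOINC reads a local override file from its data directory. A minimal sketch – the percentage here is illustrative, not my actual setting:

```xml
<!-- global_prefs_override.xml, placed in the BOINC data directory -->
<!-- Sketch: use 75% of CPU cores, leaving the rest free to feed the GPUs. -->
<!-- While there was no GPU work I effectively ran this at 100. -->
<global_preferences>
  <max_ncpus_pct>75.0</max_ncpus_pct>
</global_preferences>
```

After saving the file, BOINC Manager's "read local prefs file" option (or a client restart) picks up the change.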
As soon as v.8 work started flowing in for the GPUs I switched back to GPU crunching, and as soon as Lunatics v0.44 became available to better push my hardware I installed that too.
My tactics paid off and I sat quietly with a smug look on my face as I rose from 20th to 15th in team GPUUG. I've also had the run on a few fellow participants in the UK: my RAC dropped significantly while there was no GPU work, but I'm currently 5th based on weekly credit, whereas realistically I'm 8th. Things will settle down once my performance peaks, of course – I expect that to occur in a few weeks.
Until that occurs I'll continue to mull over the idea of upgrading some of my lowest crunchers; my last Nvidia 460 and my 560 really 'need' replacing with something more recent if I'm to make any headway this year before the weather warms up and I stop crunching for the summer. So far I've managed to talk myself out of any such expenditure – "Save your money," I tell myself.
As a final point, I noticed that not all my computers were crunching so well – GPU usage wasn't in the mid-to-high 90s (percent) as I'd like to see. The two rigs making best use of their GPUs are old Windows XP machines, one with an Intel Core 2 Duo and the other with a Core 2 Quad, both with identical Nvidia 560 Tis. By contrast, on an AMD socket AM3 machine running Windows Vista 64-bit, GPU usage of the Nvidia 460 was hovering around 60–70% – it seems different work units using different apps push things differently, but it's peculiar. Loading the card up with three work units improved things by 10% or so (although trying this on another problematic rig with a 670 saw no such improvement); loading the card up with four work units made matters worse, since more than the card's 1GB of RAM was required.
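The multiple-work-units-per-GPU trick is done with a per-project app_config.xml. A sketch of the three-per-card setup – the app name below is an assumption and may differ depending on which application versions your client has:

```xml
<!-- app_config.xml, placed in the project folder, e.g. projects/setiathome.berkeley.edu/ -->
<!-- Sketch: gpu_usage of 0.33 tells BOINC each task needs a third of a GPU,
     so three run concurrently; cpu_usage reserves half a CPU core per GPU task. -->
<app_config>
  <app>
    <name>setiathome_v8</name>
    <gpu_versions>
      <gpu_usage>0.33</gpu_usage>
      <cpu_usage>0.5</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

BOINC Manager's "read config files" option applies it without a restart. Worth noting from my experience above: going to four tasks (gpu_usage 0.25) can backfire if the tasks together need more than the card's VRAM.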