It's because some projects are optimizing their apps.
So why doesn't
everybody optimize their apps? Getting more done faster seems like it would only be a good thing. Isn't that the whole idea behind distributed computing? Or am I missing something?
There have been a couple threads about this. I actually started one.
First, it's difficult to write an app that works well and says "if you have SSE3, use it." One thing I did not know is that many of the optimized math routines (log functions, for example) may not follow the same rounding rules, so you may get different results with SSE, SSE2, SSE3, MMX, 3DNow!, etc. In fact, some projects have even had to compensate for Intel, AMD, and PPC chips giving different answers when you go out many decimal places. (Some of my early SZTAKI units were invalidated because I have an AMD processor and all the testing was done on Intel. The same thing happened with my G5 Mac.) You have to remember that optimized processor instructions are designed to do specific operations more quickly than the standard x86 instruction set. The thing is, you're not running at a faster clock speed, so just how do you do the same math faster on the same chip?
Even setting that aside, if you get something that produces good results with, say, SSE3 (which adds more floating-point instructions), is it worth losing the volunteers who have older processors? Is it worth supporting two apps?
Further, as I understand it, BOINC itself does not yet have a reliable way to say "this processor can use SSE3, send it that work unit." BOINC does list processor features when it starts (part of the "alphabet soup"), but it can't yet pick a crunching engine based on what the processor has.