PoorBoy has noted that the 5670 only supports single-precision floating point. . . so it is apparently useless for the BOINC projects that need double precision.
. . . it sounded too good to be true. . . .
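For anyone wondering why single precision is a dealbreaker: a quick sketch (plain NumPy on the CPU, nothing BOINC- or GPU-specific, just illustrating the float formats themselves) shows how much sooner float32 runs out of digits than float64:

```python
import numpy as np

# Double precision (float64): ~15-16 significant decimal digits.
# Single precision (float32): ~7 significant decimal digits.
# Add a tiny increment to 1.0 in each format and see if it survives.
x64 = np.float64(1.0) + np.float64(1e-10)
x32 = np.float32(1.0) + np.float32(1e-10)

print(x64 - 1.0)              # increment survives in double precision
print(x32 - np.float32(1.0))  # increment is lost entirely in single precision
```

Scientific workloads that accumulate millions of small contributions hit exactly this wall, which is why a single-precision-only card can be a non-starter for some projects.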
Still, there are decent CUDA cards near that price, especially if you can find a sale. I got my nVidia 9800 GT for $128. Quadro and Fermi cards will drive the lower-end prices down.
Not to hijack the thread, but which is better for BOINC? With Folding, throughput seems to be better on the CUDA side. Historically, there have been some significant differences between the two manufacturers' architectures: ATI ran 24-bit floating-point processors on the GPU, while Nvidia (I'm still used to the capital "V", old habits...) ran 32-bit processors. Early GPGPU studies always used Nvidia for that reason.
Also, the two companies took approaches similar to the CISC / RISC processor split. ATI has optimized libraries for everything: there are libraries, and versions of libraries, for shading, rendering, moving, lighting, cross-lighting, glows, super-intelligent shades of the color blue, etc. While the GPU itself is slower and 24-bit, a programmer can call an optimized library and get a significant speed boost for that function. Nvidia, by contrast, offered fewer specialized functions and focused on a faster GPU: more of the work is done on the GPU itself, which increases the load, but that load runs on faster hardware. It also lessens reliance on specific versions of specific functions. This helped Nvidia in GPGPU projects as well, since developers could rely on a more robust processor rather than forcing their data into the formats the optimized libraries expect.
Mike