Pages: [1]
Rakarin
 
BAM!ID: 1019
Joined: 2006-05-30
Posts: 92
Credits: 0
World-rank: 0

2007-11-14 04:23:26

Hello,

I apologize for the length of this post...

Something crossed my mind this past weekend, and I was wondering if any projects had worked in this direction. I recently purchased one of the Wal-Mart gOS PCs. It runs a Via C7-D processor. I discuss it in another thread here. The floating-point power is a bit... abysmal, but it has SSE3. If I could get a Linux client with SSE2 / SSE3 optimization for some of my projects, they would run faster.

Anyway, one thought led to another, and I thought of Gentoo Linux. Gentoo is a source-based distribution, which means that when you run the install CD, you actually install very little from the CD. The CD boots a small kernel that runs a setup wizard, your network connection, and a small set of GNU compilers. After you pick what you want, it downloads the source code for everything, compiles it on your PC, and installs the compiled code. An install can take a day or more (it's a complete operating system, window manager, applications, etc.), but what you get is optimized for your PC. This means that if your processor supports, say, MMX instructions, and the kernel has code for MMX optimization, that code gets compiled into the kernel. If you do not have MMX+, the code for MMX+ does not get compiled into the binary. If your processor has SSE, but not SSE2 or SSE3, you only get SSE optimizations. The result is a smaller (in both hard drive and RAM), more stable build.

So, I was wondering if any BOINC projects were looking into following a Gentoo-style model. What I imagine (and I'm not a CS major, so please tell me if this is unrealistic) would eliminate the need for different optimized binaries. There would be one large code set for each architecture (x86, x64, PPC, SPARC) + OS (Win, Linux, Unix/OS X, Solaris). When a new user starts crunching for a project, or there is a client engine upgrade, BOINC downloads a small GNU C (C++, Python, Fortran, etc. as needed) compiler and the source. The compiler then builds the client engine based on what the user's processor and OS support. For example, my Via C7 has MMX, 3DNow, and SSE through SSE3, but not MMX+ or 3DNow+, and runs Ubuntu. BOINC would download gcc and build a client with MMX, 3DNow, and SSE 1-3, but not MMX+ or 3DNow+. On my Windows box, which runs an AMD Athlon XP 3200+, the client would build an application with 3DNow, MMX, and SSE (no SSE2 or SSE3).
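To make the idea concrete, here is a minimal sketch (not from any actual BOINC code; the mapping table and helper names are my own invention) of how a build step could read the feature flags the Linux kernel reports in /proc/cpuinfo and turn them into matching gcc options:

```python
# Hypothetical sketch: read the CPU feature flags the kernel reports in
# /proc/cpuinfo and translate them into the gcc options a Gentoo-style
# client builder might pass when compiling a per-host optimized binary.

# Map of /proc/cpuinfo flag names to the gcc switch that enables them.
FLAG_TO_GCC = {
    "mmx": "-mmmx",
    "3dnow": "-m3dnow",
    "sse": "-msse",
    "sse2": "-msse2",
    "pni": "-msse3",  # the kernel reports SSE3 as "pni" (Prescott New Instructions)
}

def gcc_flags_for(cpu_flags):
    """Return the gcc options supported by a CPU with the given flag set."""
    return [opt for flag, opt in FLAG_TO_GCC.items() if flag in cpu_flags]

def read_cpu_flags(path="/proc/cpuinfo"):
    """Collect the 'flags' line from the kernel's CPU description."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

# Example: a Via C7 reports MMX, 3DNow, and SSE through SSE3.
via_c7 = {"mmx", "3dnow", "sse", "sse2", "pni"}
print(gcc_flags_for(via_c7))
# -> ['-mmmx', '-m3dnow', '-msse', '-msse2', '-msse3']
```

A real implementation would then invoke gcc with those options plus the usual -O flags; this only illustrates the feature-detection half of the idea.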

To me, this seems to have numerous benefits. On the x86 platform, BOINC already reports potentially useful processor instruction sets when it starts. (My G5 only lists "AltiVec".) The initial development, granted, could be a nightmare, but in the long run:

More optimized clients means work gets done faster.

Improved security. It would be easy to build in the option "check code before compiling", which would let the user view the source and give permission to compile, if desired.

For old/slow clients, a default un-optimized client could be sent, and compilation could require an opt-in. The project could also do this if it sets thresholds for instruction sets it finds useful. For example, if it finds that MMX, 3DNow, or SSE provide only a small boost, it could require an opt-in to compile for older processors. If the project finds that only SSE3 provides any real benefit, it could require an opt-in for anything below relatively new processors.

For concerns of stability, the project could add a weighted "buggy_{project}" variable. Successful runs reduce the variable; client errors raise it a lot. When the buggy variable hits a certain weight, the project could force a reset of the local client to a standard client, or force a re-compile. This could also be split so that after a certain number of re-compiles (of the same version), the standard client is forced and the user has to opt in to another re-compile.
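The weighted "buggy" variable above could look something like this sketch (all names and tuning constants are made up for illustration; nothing like this exists in BOINC):

```python
# Hypothetical sketch of the weighted "buggy" variable described above:
# successful runs decay the weight, client errors raise it sharply, and
# crossing a threshold forces a re-compile or a fall back to the stock
# client. All constants here are invented tuning parameters.

BUGGY_THRESHOLD = 10.0   # take action at this weight
ERROR_PENALTY = 4.0      # each client error adds this much
SUCCESS_DECAY = 1.0      # each valid result subtracts this much
MAX_RECOMPILES = 3       # of the same version, before giving up

class HostRecord:
    def __init__(self):
        self.buggy = 0.0
        self.recompiles = 0

    def report_success(self):
        self.buggy = max(0.0, self.buggy - SUCCESS_DECAY)

    def report_error(self):
        self.buggy += ERROR_PENALTY

    def action(self):
        """Decide what the project should do with this host's client."""
        if self.buggy < BUGGY_THRESHOLD:
            return "keep-client"
        self.buggy = 0.0
        self.recompiles += 1
        # After too many re-compiles of the same version, stop trying
        # and require an explicit opt-in for any further re-compile.
        if self.recompiles < MAX_RECOMPILES:
            return "recompile"
        return "force-standard-client"

host = HostRecord()
for _ in range(3):
    host.report_error()   # 3 errors -> weight 12.0, over the threshold
print(host.action())      # -> recompile
```

The split behavior from the paragraph above falls out of the re-compile counter: once it reaches the cap, the decision flips to forcing the standard client.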

Faster integration of new core technology. According to Wikipedia, Intel will release SSE4 in two sets. SSE4.1, which contains 47 new instructions, will be in the Penryn line of processors. SSE4.2, which adds another seven instructions, will be in the Nehalem chips released two to three quarters later. By updating source code, new technology could be quickly integrated into a project's common use without affecting users with older (or not cutting-edge) processors.

Again I apologize for the length of this post. This is just an idea, so feel free to point out why it's wrong. (All I ask is that reasons are given, rather than bickering.) If this idea has been picked apart elsewhere, the admins can delete this thread.

Thank you.
Dotsch
Tester
BAM!ID: 833
Joined: 2006-05-27
Posts: 74
Credits: 3,240,670
World-rank: 104,771

2007-11-14 17:13:58

I see some problems with your idea...

For most projects, the source is not freely available. SETI, BURP, and Leiden (for only one of its three applications) have freely available sources. At the other projects the source is often copyrighted and cannot be made public.

Most projects need absolutely correct mathematical results, and those results must be validated.
The problem is that if the source is free, changes become possible that alter the final results. Different compilers, compiler flags, CPU types, or CPU optimizations can also produce different results. Some projects use homogeneous redundancy, comparing results within CPU classes, to prevent such problems...
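The validation problem is easy to demonstrate: floating-point addition is not associative, so two perfectly correct builds that merely sum in a different order (say, one auto-vectorized with SSE and one not) can return bitwise-different results. A tiny illustration:

```python
# Floating-point addition is not associative, so two correct builds that
# evaluate the same sum in a different order (e.g. one vectorized with
# SSE, one not) can produce bitwise-different results -- which is exactly
# what makes validating redundant results across mixed clients hard.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c    # one evaluation order
right = a + (b + c)   # the same sum, reassociated

print(left == right)  # -> False: the two orders round differently
print(left, right)
```

This is why homogeneous redundancy compares results only between hosts of the same class, where the evaluation order is expected to match.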

Application optimization can also be tricky. Which optimizations produce better results depends strongly on the application and the compiler. A wrong compiler version or the wrong flags can give bad performance.
Also, some projects use good compilers like the Intel Fortran or C/C++ compilers, which produce better results than gcc with all optimizations enabled.
I think this would not give a huge performance gain overall, and would cause more problems than benefits.
Rakarin
 
BAM!ID: 1019
Joined: 2006-05-30
Posts: 92
Credits: 0
World-rank: 0

2007-11-15 02:55:34

I see some problems with your idea...
...
Application optimization can also be tricky. Which optimizations produce better results depends strongly on the application and the compiler. A wrong compiler version or the wrong flags can give bad performance.
Also, some projects use good compilers like the Intel Fortran or C/C++ compilers, which produce better results than gcc with all optimizations enabled.
I think this would not give a huge performance gain overall, and would cause more problems than benefits.


Thank you for explaining that. I understand the points you make, and I cannot argue with any of them.

Thank you for your time. It is appreciated.
PovAddict
BAM!ID: 115
Joined: 2006-05-10
Posts: 1013
Credits: 4,227,477
World-rank: 88,176

2007-11-15 16:45:44

I think it shouldn't be too hard to compile the application for a dozen different platforms and instruction sets. The problem is testing whether they all give acceptable numerical differences, and the big modifications needed to the scheduler to choose which app to send.
Not running BOINC anymore for several reasons...

Index :: BOINC :: Idea: BOINC project, Gentoo Linux style