Pages: [1]
Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7331
Credits: 579,013,549
World-rank: 2,376

2008-12-29 01:19:05





Many-core chips are the great hope for more performance, but Sandia National Lab simulations show they are about to hit a memory wall. How bad is it?

Memory bandwidth is the limiting performance factor in CPUs. If you can't feed the beast, it stops working; simple as that. Performance roughly doubles from 2 cores to 4 (yay!), is nearly flat to 8 (boo!), and then falls (hiss!).

Many-core is the future of computer performance. Memory bandwidth is one big problem; software support for efficient many-core use is another. Either could bring the performance expected from Moore's Law to a dead stop.
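The "doubles to 4, flat at 8" shape above falls out of a simple bandwidth-bound scaling model. This is an illustrative sketch, not Sandia's simulation: the per-core bandwidth demand and socket bandwidth numbers are made-up assumptions chosen to reproduce the shape of the curve.

```python
# Toy model of a memory-bandwidth-bound workload: cores scale linearly
# until their aggregate bandwidth demand exceeds what the socket supplies.
# bw_per_core and total_bw are illustrative assumptions, not measurements.

def speedup(cores, bw_per_core=4.0, total_bw=16.0):
    """Speedup is capped once cores * bw_per_core exceeds total_bw."""
    return min(cores, total_bw / bw_per_core)

for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} cores -> speedup {speedup(n):.1f}")
```

With these numbers the model doubles from 2 to 4 cores and then flatlines at a speedup of 4, no matter how many cores are added; real chips do worse past the knee because of contention, which is why the measured curve falls.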


More . . .

Guest

2009-04-18 15:30:44
last modified: 2009-04-18 15:42:09

I know this is old news, but I want to ask: does this apply to hardware-thread-based architectures? I mean, that graph refers to the number of cores in the same CPU, not the number of threads per core, which don't need the same communication that symmetric cores do.
Gerry Rough
 
BAM!ID: 713
Joined: 2006-05-25
Posts: 226
Credits: 6,726,654
World-rank: 60,308

2009-04-20 13:21:51

Your post raises a question or two in my mind. Is the industry looking to increase the available bandwidth of the memory chips themselves? That would be the answer to the dilemma. I assume they are, of course, but then what is the barrier that is preventing the industry from increasing bandwidth?

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7331
Credits: 579,013,549
World-rank: 2,376

2009-04-20 16:24:29

Is the industry looking to increase the available bandwidth of the memory chips themselves? That would be the answer to the dilemma. I assume they are, of course, but then what is the barrier that is preventing the industry from increasing bandwidth?



GR:


[From my meager understanding:] Placing the memory controller on the chip instead of on the motherboard is one of the keys to the Intel i7's major improvements . . .

. . . and going to DDR3 memory.



Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7331
Credits: 579,013,549
World-rank: 2,376

2009-12-03 15:11:46


In this segment, Dr. Nash Palaniswamy (Intel) talks about some of the hardware considerations when looking at your HPC performance vector. Check it out!


Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7331
Credits: 579,013,549
World-rank: 2,376

2009-12-03 15:16:31


Part 2 of 2.


Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7331
Credits: 579,013,549
World-rank: 2,376

2010-05-06 20:10:40


Nvidia chief scientist says that everyone needs to rethink processors in order for Moore's Law to continue.



Bill Dally, the chief scientist and senior vice president of research at Nvidia, wrote an article for Forbes arguing that Moore's Law, the observation that transistor count and performance double roughly every 18 months, is dead.

The problem, according to Dally's Forbes piece, is that current CPU architectures are still serial processors, while he believes the future is in parallel processing. He gives the example of reading an essay: a single reader can only read one word at a time, but assigning a group of readers one paragraph each would greatly accelerate the process.
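Dally's essay analogy maps directly onto data-parallel code: split the input into independent chunks and hand each to its own worker. A minimal sketch (the paragraphs and word-counting task are invented for illustration; on CPython, threads won't actually speed up this CPU-bound toy because of the GIL, but the structure is the same for true throughput hardware):

```python
# One reader vs. one reader per paragraph: same answer, work divided.
from concurrent.futures import ThreadPoolExecutor

paragraphs = ["one two three", "four five", "six seven eight nine"]

def count_words(paragraph):
    """Each worker handles its paragraph independently (no communication)."""
    return len(paragraph.split())

# Serial: a single reader walks every paragraph in turn.
serial_total = sum(count_words(p) for p in paragraphs)

# Parallel: one worker per paragraph, results combined at the end.
with ThreadPoolExecutor(max_workers=len(paragraphs)) as pool:
    parallel_total = sum(pool.map(count_words, paragraphs))

print(serial_total, parallel_total)
```

The point of the analogy is that the per-paragraph work is independent, so the combine step at the end is the only coordination needed.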

"The computing industry must seize this opportunity and avoid stagnation, by focusing software development and training on throughput computers - not on multi-core CPUs," Dally concluded. "Let's enable the future of computing to fly--not rumble along on trains with wings."


More . . .

magyarficko
 
BAM!ID: 76666
Joined: 2009-10-30
Posts: 619
Credits: 287,367,952
World-rank: 3,976

2010-05-07 00:28:44

Bill Dally wrote:

The problem .... is .... serial processors


And just what exactly is wrong with cereal processing? I think it makes some Grrrrrrrreat products


Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7331
Credits: 579,013,549
World-rank: 2,376

2010-05-11 22:20:53


Conventional Wisdoms

1. Old CW: Power is free, but transistors are expensive.
   New CW: We can put more transistors on a chip than we have the power to turn on.

2. Old CW: Dynamic power is the only power concern.
   New CW: For desktops and servers, static power leakage can be 40% of the total.

3. Old CW: Monolithic processors are reliable internally, with errors only at the pins.
   New CW: As chips drop below 65 nm feature sizes, they will have high error rates.

4. Old CW: Building upon the past, we can continue to raise the level of abstraction and the size of hardware designs.
   New CW: Wire delay, noise, cross coupling, manufacturing variability, reliability, validation, etc. increase the cost of large designs below 65 nm.

5. Old CW: New architecture ideas are demonstrated by building chips.
   New CW: The cost of masks at 65 nm, of ECAD software for designing chips, and of designing for GHz clock rates means that building believable prototypes is no longer feasible.



Parallel Computing: A View From Berkeley ~ E. M. Hielscher 6 February 2008
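New CW #2 above is just a ratio, but it's worth seeing the arithmetic: leakage burns power even when transistors do no work, so it caps how much of the power budget is available for actual switching. The wattage figures below are illustrative assumptions picked to match the 40% claim, not numbers from the Berkeley slides.

```python
# Back-of-envelope split of chip power into dynamic (switching) and
# static (leakage) components. Values are illustrative assumptions.
dynamic_w = 60.0  # switching power, roughly proportional to C * V^2 * f
static_w = 40.0   # leakage power, drawn even when transistors are idle

leakage_share = static_w / (dynamic_w + static_w)
print(f"leakage is {leakage_share:.0%} of total power")
```

The design consequence: lowering the clock (f) helps only the dynamic term, which is one reason feature-size scaling stopped translating directly into usable performance.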

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7331
Credits: 579,013,549
World-rank: 2,376

2010-07-24 00:23:39


The Trouble With Multicore: Chipmakers are busy designing microprocessors that most programmers can't handle


In the past, programmers could just wait for transistors to get smaller and faster, allowing microprocessors to become more powerful. So programs would run faster without any new programming effort, which was a big disincentive to anyone tempted to pioneer ways to write parallel code. The La-Z-Boy era of program performance is now officially over, so programmers who care about performance must get up off their recliners and start making their programs parallel.



More . . .


Index :: Interesting things on the web. :: The many-core performance wall