Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,058,630
World-rank: 3,055

2010-02-04 14:04:53


Justin Rattner (Intel) talks about the Larrabee computational co-processor and shows a demo in which 1 teraflop is reached with overclocking.


Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,058,630
World-rank: 3,055

2010-02-04 14:26:52


Larrabee pullout: GPU battle far from over


Intel's decision to shutter what would have been its first discrete GPU (graphics processor unit) offers more breathing space for graphics market leaders Nvidia and AMD, but the battle is far from over, an analyst has pointed out.


From the start, in developing Larrabee, Intel tried to create a graphics architecture "that was programmable just like a standard x86 processor", which required both a new hardware architecture and a new programming model. Both were significantly challenging tasks, he noted.

Tom Halfhill, senior analyst for In-Stat's Microprocessor Report, concurred:

    Larrabee was a potential threat to [AMD and Nvidia's] GPU businesses, but now it should be apparent that designing a state-of-the-art graphics processor is very hard, even for the world's biggest semiconductor company. Anyone who thought Intel would easily stomp AMD and Nvidia needs to rethink their position.



To achieve its goals for graphics performance, Intel may have to compromise on x86 compatibility, he pointed out: "Intel is trying very hard to jam a square peg into a round hole. It may be possible, but obviously, it isn't easy."




More . . .


Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,058,630
World-rank: 3,055

2010-02-04 14:44:13

Maloney demonstrated early silicon based on the Larrabee architecture, the codename for a family of future graphics-centric co-processors. He also confirmed that key developers have received development systems.

With the first product due next year, Larrabee takes the programmability of Intel Architecture and dramatically extends its parallel processing capabilities. This flexible programmability and the ability to take advantage of available developers, software and design tools give programmers the freedom to realize the benefits of fully programmable rendering and thus easily implement a variety of 3-D graphics pipelines such as rasterization, volumetric rendering or ray tracing.

Combined, these capabilities will deliver stunning visual experiences on Intel-based PCs that incorporate this product. While Larrabee silicon will initially appear in discrete graphics cards, the Larrabee architecture will eventually be integrated into the processor along with other technologies.


More . . .

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,058,630
World-rank: 3,055

2010-02-04 15:31:12



Memory Sharing for Visual Computing

Research to enable memory sharing between CPUs and the Intel® Architecture code-named Larrabee


Go to link for video and discussion

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,058,630
World-rank: 3,055

2010-02-04 15:37:50



Larrabee Architecture

Larrabee is a many-core x86 visual computing architecture that is based on in-order cores that run an extended version of the x86 instruction set, including wide vector processing instructions and some specialized scalar instructions. Each of the cores contains a 32 KB instruction cache and a 32 KB L1 data cache, and accesses its own subset of a coherent L2 cache to provide high-bandwidth L2 cache access. Each L2 cache subset is 256 KB, and the subsets are connected by a high-bandwidth on-die ring interconnect. Data written by a CPU core is stored in its own L2 cache subset and is flushed from other subsets, if necessary. Each ring data path is 512 bits wide per direction. The fixed-function units and memory controller are spread across the ring to reduce congestion.

Each core has 4 hyper-threads with separate register sets per thread. Instruction issue alternates between the threads and covers cases where the compiler is unable to schedule code without stalls. The core uses a dual-issue decoder, and the pairing rules for the primary and secondary instruction pipes are deterministic. All instructions can issue on the primary pipe, while the secondary pipe supports a large subset of the scalar x86 instruction set, including loads, stores, simple ALU operations, vector stores, etc. The core supports 64-bit extensions and the full Pentium processor x86 instruction set. Larrabee has a 16-wide vector processing unit which executes integer, single-precision float, and double-precision float instructions. The vector unit supports gather-scatter and masked instructions, and supports instructions with up to 3 source operands.
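
To make the vector unit's behavior concrete, here is a minimal scalar sketch in C of what a single Larrabee-style masked gather-plus-multiply-add would compute across the 16 lanes described above. The function and names are illustrative only, not the real LRBni intrinsics:

    #include <stdint.h>

    #define VLEN 16  /* Larrabee's vector unit is 16 floats wide */

    /* Scalar model of one masked gather + multiply-add: per-lane
     * write masks, a gather from memory via an index vector, and
     * 3 source operands, as the description above lists.
     * Hypothetical helper, not an actual Larrabee instruction. */
    static void masked_gather_fma(float dst[VLEN], const float *base,
                                  const int32_t idx[VLEN],
                                  const float a[VLEN], uint16_t mask)
    {
        for (int lane = 0; lane < VLEN; ++lane) {
            if (mask & (1u << lane))       /* lane enabled by mask */
                dst[lane] = base[idx[lane]] * a[lane] + dst[lane];
            /* masked-off lanes keep their previous value */
        }
    }

On the real hardware all 16 lanes would execute in one vector instruction; the loop here only spells out the semantics.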


More . . .

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,058,630
World-rank: 3,055

2010-02-04 15:59:30


Modern GPUs are increasingly programmable in order to support advanced graphics algorithms and other parallel applications. However, general-purpose programmability of the graphics pipeline is restricted by limitations on the memory model and by fixed-function blocks that schedule the parallel threads of execution. For example, pixel processing order is controlled by the rasterization logic and other dedicated scheduling logic.

This paper describes a highly parallel architecture that makes the rendering pipeline completely programmable. The Larrabee architecture is based on in-order CPU cores that run an extended version of the x86 instruction set, including wide vector processing operations and some specialized scalar instructions. The cores each access their own subset of a coherent L2 cache to provide high-bandwidth L2 cache access from each core and to simplify data sharing and synchronization.

Larrabee is more flexible than current GPUs. Its CPU-like x86-based architecture supports subroutines and page faulting. Some operations that GPUs traditionally perform with fixed-function logic, such as rasterization and post-shader blending, are performed entirely in software in Larrabee. Like GPUs, Larrabee uses fixed-function logic for texture filtering, but the cores assist the fixed-function logic, e.g. by supporting page faults.
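
To give a flavor of what "rasterization performed entirely in software" means, here is a minimal C sketch of the textbook edge-function test a software rasterizer evaluates to decide whether a pixel lies inside a triangle. This is a standard formulation for illustration, not Intel's actual renderer:

    #include <stdbool.h>

    /* Signed-area test: positive when point (px,py) lies to the left
     * of the directed edge a->b. Software rasterizers build pixel
     * coverage from three of these per triangle. */
    static float edge(float ax, float ay, float bx, float by,
                      float px, float py)
    {
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
    }

    /* A pixel center is inside a counter-clockwise triangle when it
     * passes the test against all three edges. */
    static bool inside_triangle(const float v[3][2], float px, float py)
    {
        return edge(v[0][0], v[0][1], v[1][0], v[1][1], px, py) >= 0.0f &&
               edge(v[1][0], v[1][1], v[2][0], v[2][1], px, py) >= 0.0f &&
               edge(v[2][0], v[2][1], v[0][0], v[0][1], px, py) >= 0.0f;
    }

On a conventional GPU this test lives in fixed-function rasterization hardware; on Larrabee it is ordinary x86 code, which is what makes the pipeline reprogrammable.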



More . . .

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,058,630
World-rank: 3,055

2010-02-04 16:34:44


Larrabee Microarchitecture: The new flexible standard



As parallelism and advances in multi-core processing bring new levels of performance and application capabilities, the Larrabee architecture's programmability creates new opportunities for developers to innovate in the visual computing realm. Designed to provide unprecedented freedom for developers, the Larrabee microarchitecture features a number of hardware advances, including a many-core throughput design for a wide range of highly parallel visual computing applications spanning graphics, media, medical imaging, and financial services.

The enabling factor for each of these core elements is programmability. Platforms built around the Larrabee microarchitecture will operate across a unified infrastructure that includes other components based on Intel® architecture programmable cores.



More . . .

magyarficko
 
BAM!ID: 76666
Joined: 2009-10-30
Posts: 619
Credits: 287,367,952
World-rank: 5,175

2010-02-04 18:56:38

Guest

2010-02-04 19:10:53

it's not at all helpful to dig out old, outdated 3rd- or whatever-source "information". Just disturbing...
Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,058,630
World-rank: 3,055

2010-02-04 19:56:29

magyarficko wrote:



As a video card. . . for now.

. . . as a co-processor, still in the development phase:


Market analyst Jon Peddie of Jon Peddie Research remains optimistic. "I believe they will definitely come back. Intel's commitment has not slackened. The part is being repositioned as an HPC co-processor, where I think it will do very well," he said. "They learned a whole lot from this. A whole lot. They are not going to throw that investment or knowledge away. I wouldn't be surprised to see them come back in a few years with a graphics part. Intel could decide to follow the high-performance trail like AMD is doing with Fusion," he added.

More. . .



Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,058,630
World-rank: 3,055

2010-02-04 20:14:04

frankhagen wrote:
it's not at all helpful to dig out old, outdated 3rd- or whatever-source "information". Just disturbing...



Frank:


Is the thought of all those video cards being as obsolete for BOINC use as the PS3 disturbing?

magyarficko
 
BAM!ID: 76666
Joined: 2009-10-30
Posts: 619
Credits: 287,367,952
World-rank: 5,175

2010-02-04 20:41:03
last modified: 2010-02-04 20:47:16

Sid2 wrote:

Market analyst Jon Peddie of Jon Peddie Research remains optimistic.



Yeah well Sid, I think that first line explains it all. Of course Jon Peddie would remain optimistic -- he's a MARKET ANALYST! Do you know what a "market analyst" is? They are PAID stock touts -- paid to keep a company's stock price high, nothing more.

Here he is pimping for ATI ...

http://www.youtube.com/watch?v=fQZxAgs5aMA

Sid! Please don't tell me you are one of those people that believe ... "I found it on the Internet so it MUST be true!".

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,058,630
World-rank: 3,055

2010-02-05 18:02:02


The plan to field a multicore graphics engine, which would have put Intel into direct competition against Nvidia, has been put on hold for now.


Larrabee has been in the works for several years. It's been billed as a "many-core x86 architecture for visual computing." That tag line highlights how its DNA differs from prevailing industry thinking, which holds that graphics processors must implement specialized rendering and mapping functions in purpose-built hardware blocks.

In contrast, Intel was planning to extend its general-purpose x86 processors into the graphics realm, and by doing it smartly create an engine that is easy to build and leverages the Intel knowledge base. In simplest architectural terms, Larrabee would take thirty-two x86 CPU cores and tie them together with a ring of cache memory enabling fast inter-processor communication.

Lending credence to the thinking that Intel isn't abandoning Larrabee is the wealth of developer material on Intel's forums.


More . . .

Guest

2010-02-05 18:51:54

Sid2 wrote:
Lending credence to the thinking that Intel isn't abandoning Larrabee is the wealth of developer material on Intel's forums.




just go on turning this formerly useful place of information into a rumor-box..

... and of course you'll make this comment vanish as many others before.


Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,058,630
World-rank: 3,055

2010-02-05 19:14:08

frankhagen wrote:
Sid2 wrote:
Lending credence to the thinking that Intel isn't abandoning Larrabee is the wealth of developer material on Intel's forums.




just go on turning this formerly useful place of information into a rumor-box..

... and of course you'll make this comment vanish as many others before.





Not a chance. I will showcase this post when you are proved wrong [it might be a year or two. . .].

Intel invested big bucks in the Larrabee architecture and will eventually bring it to market as a co-processor, not a video card.

. . . probably combining the CPU and graphics processor on a single chip.

I wouldn't sell Intel short anytime soon . . . .

magyarficko
 
BAM!ID: 76666
Joined: 2009-10-30
Posts: 619
Credits: 287,367,952
World-rank: 5,175

2010-02-05 19:31:49

Sid2 wrote:

The plan to field a multicore graphics engine, which would have put Intel into direct competition against Nvidia, has been put on hold for now.


Larrabee has been in the works for several years. It's been billed as a "many-core x86 architecture for visual computing." That tag line highlights how its DNA differs from prevailing industry thinking, which holds that graphics processors must implement specialized rendering and mapping functions in purpose-built hardware blocks.

In contrast, Intel was planning to extend its general-purpose x86 processors into the graphics realm, and by doing it smartly create an engine that is easy to build and leverages the Intel knowledge base. In simplest architectural terms, Larrabee would take thirty-two x86 CPU cores and tie them together with a ring of cache memory enabling fast inter-processor communication.

Lending credence to the thinking that Intel isn't abandoning Larrabee is the wealth of developer material on Intel's forums.


More . . .


Uhhhmmmmm, uh ... Sid??? Didn't you forget to quote the FIRST TWO paragraphs from that article you linked to? Since you actually gave the link but didn't quote the paragraphs, I can only assume that you KNOW nobody bothers to read this trash anyway OR you think the SHEEP will not bother following the link and will just take you at your word.

QUOTED FROM YOUR LINK ...

Intel's plan to field a standalone multicore processor dedicated solely to advanced graphics has hit a major roadblock, with the chip giant e-mailing around a statement saying that the chip, code-named Larrabee, won't be launching anytime soon.

What remains unclear is whether Intel is merely pushing its productization plans back until it can work the kinks out of an admittedly complex design, or whether there's any credence to rumblings from the Nvidia camp that Intel's envisioned graphics architecture isn't up to snuff.



magyarficko
 
BAM!ID: 76666
Joined: 2009-10-30
Posts: 619
Credits: 287,367,952
World-rank: 5,175

2010-02-05 19:36:58
last modified: 2010-02-05 19:38:17

By the way Sid .... did you watch that YouTube link I gave you yesterday? In it, your favorite Market Analyst (Dr. Jon Peddie) very clearly laid out his position that IF Intel is successful with its plans to integrate graphics on a chip, there will still and always be a NEED and a PLACE in the marketplace for discrete graphics cards. So, according to him, even IF Larrabee succeeds -- it will NOT DISPLACE that very expensive graphics card you just bought recently.



magyarficko
 
BAM!ID: 76666
Joined: 2009-10-30
Posts: 619
Credits: 287,367,952
World-rank: 5,175

2010-02-05 19:42:47

frankhagen wrote:
Sid2 wrote:
Lending credence to the thinking that Intel isn't abandoning Larrabee is the wealth of developer material on Intel's forums.




just go on turning this formerly useful place of information into a rumor-box..

... and of course you'll make this comment vanish as many others before.




Hey Frank, what's the matter with you? I just noticed you don't have your BOINC Support Level ranking here in the forums yet. I got mine and I'm very proud of it! I can hardly wait until I achieve BOINC Grouchy Old Fart Level 99




Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,058,630
World-rank: 3,055

2010-02-05 19:48:32


ATI and Nvidia will find it harder to build a CPU than a GPU. . . Intel is ramping up to do both

Intel's only competition in the CPU game is Moore's Law.

Larrabee co-processor architecture along with Sandy Bridge processors. . .





Guest

2010-02-05 20:40:40

magyarficko wrote:
Hey Frank, what's the matter with you? I just noticed you don't have your BOINC Support Level ranking here in the forums yet. I got mine and I'm very proud of it! I can hardly wait until I achieve BOINC Grouchy Old Fart Level 99


Just forgot I am the real noob around..
Rakarin
 
BAM!ID: 1019
Joined: 2006-05-30
Posts: 92
Credits: 0
World-rank: 0

2010-02-06 04:21:25

I was considering posting this in another thread:

http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3683

AMD is planning for the FPU (floating-point unit) to eventually go away. On their server chips, they are planning a "bulldozer unit," which will be two integer units and one shared FPU. I have seen on a few blogs the notion that with moderate GPUs being moved onto processor dies, it will be possible to eventually do away with the FPU on the processor. The FPU is one of the larger and hotter areas on the processor silicon. Meanwhile, if a moderately powered GPU designed for double-precision floating-point math is in the same package, and the GPU is designed for fast streaming data, it makes sense to use that. It would actually make sense to incorporate the FPU functions into the on-die GPU.

Larrabee seems designed for this. The simple fact is, video cards will not go away for a long time. The high-end GPUs are far too hot to put next to the processor. Also, Intel does not make high-end video cards. Actually, I don't even know if they make any AGP / PCI video cards. Their video chipsets are designed to be used on the motherboard, so they don't get too hot. Larrabee only has to aim for video processing power equal to on-board GPU chipsets.

If Larrabee is on the processor, someone using it as the "video card" will probably not be doing any high-end gaming or data crunching. Someone doing either of those will probably buy a good ATI or Nvidia card, which means that most of Larrabee's video responsibility will be eliminated, allowing it to function entirely as a massively parallel, x86-programmable math co-processor. (A co-processor on the processor die, meaning an excellent data pipeline.)

Personally, I think this is exactly what Intel is planning for Larrabee. I also think the new mystery GPU architecture AMD is working on for ATI will be similar for AMD Fusion chips.

As an aside, I was discussing this with the manager. Years ago, engineers used this wondrous device called a math co-processor. That is why better 286 and 386 (and some 486) motherboards had a second processor slot. Then Intel introduced the DX chips, which had this miraculous math co-processor built in. Still, engineers often disabled the on-processor FPU (it was a BIOS setting or motherboard jumper) to get one of the better Cyrix math co-processors. And with this wonderful device, some functions were so much faster.

Now, skipping from the early 90's to present day, engineers are telling us of this wondrous device... a co-processor that specializes in math....
magyarficko
 
BAM!ID: 76666
Joined: 2009-10-30
Posts: 619
Credits: 287,367,952
World-rank: 5,175

2010-02-06 15:35:59
last modified: 2010-02-06 15:47:43

frankhagen wrote:
magyarficko wrote:
Hey Frank, what's the matter with you? I just noticed you don't have your BOINC Support Level ranking here in the forums yet. I got mine and I'm very proud of it! I can hardly wait until I achieve BOINC Grouchy Old Fart Level 99


Just forgot I am the real noob around..


Well, THAT didn't last long ... I lost my "support level ranking" so now I'm rankless again like you, Frank

I do wonder why (somebody else who shall remain unnamed) is special though? He NOW has both a BOINC Support AND a TECH Support ranking. And NO!, it's not you, Sid.


[EDIT] This is a JOKE! I just realized where these rankings come from and I can vote for myself [/EDIT]

Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,058,630
World-rank: 3,055

2010-02-06 15:44:15

Rakarin wrote:


If Larrabee is on the processor, someone using it as the "video card" will probably not be doing any high-end gaming or data crunching. Someone doing either of those will probably buy a good ATI or Nvidia card, which means that most of Larrabee's video responsibility will be eliminated, allowing it to function entirely as a massively parallel, x86-programmable math co-processor. (A co-processor on the processor die, meaning an excellent data pipeline.)



Rakarin:

Isn't that sorta what IBM was schmoozing about with their Power6 processor?


Rakarin
 
BAM!ID: 1019
Joined: 2006-05-30
Posts: 92
Credits: 0
World-rank: 0

2010-02-07 16:34:40

Sid2 wrote:

Rakarin:
Isn't that sorta what IBM was schmoozing about with their Power6 processor?


Do you mean the massively parallel thing? Yes. PowerPC architecture is vector based. With scalar architecture, the data sits in the processor pipeline longer, and you hammer the bejesus out of it. In vector-based architectures (PPC and video), you have a shorter pipeline and single-instruction, multiple-data processing. Vector data is also easier (as I understand it, on PPC through "AltiVec") to move between processors / cores, and even spread across cores, so parallel processing is easier. Years ago, when I had a G4, I remember Folding@Home had their two SMP (Symmetric Multi-Processing, or parallel processing) clients in beta, and BOINC projects were discussing the issue in the abstract. Meanwhile, I had BOINC running on one processor, and F@H on the other on my G4. If F@H couldn't get work, my WCG work units would temporarily "spread" over both processors, and the one process would take 200% of the processor. (Note: On Windows, dual core is 100% [50+50]; on OS X, dual core is 200% [100+100].)

Anyway, PowerPC and SPARC processors are working with this idea. If your data can be handled in parallel, you are better off with 6 or 8 or 10 or 12 small cores working together slower and cooler, rather than two or four cores that require a cooling system that can suck up pets and small children. On the floating-point side, you see the exact same thing with Cell (8xi and BBE), CUDA, and Larrabee:. (I don't know if that colon is required. I think they are trying to make it look like an old-school port, like COM1: or LPT1:.)

Now, that's your SIMD (single-instruction, multiple-data) crunching. If you need high performance per thread, and you have a small / singular data (set) and just need to hammer it with a lot of math, Intel or AMD is the processor of choice. If you do scalar-type math on a PPC processor, because of the short pipeline, the processor sucks in one or a few elements, runs a portion of the instructions, releases, inputs, quick-crunches, inputs, etc. It's not efficient. On x86, the data is held longer and more can be done on each cycle.

GPU processing is a hot thing because x86 processors have developed parallelism (SSE 1-4.x), but GPUs do it better and faster, and with floating point. Also, everyone has a GPU now. It's like trying to scrimp and save to buy a good engineering calculator, but discovering your neighbor has a high-end server. Your time and resources are then better spent buying gifts for your neighbor so he will let you get a network connection and access.
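
For anyone who hasn't seen it, here is what that x86 parallelism actually looks like: a minimal C sketch using the standard SSE intrinsics, where one _mm_add_ps instruction adds four floats at once instead of issuing four scalar adds. GPUs push the same single-instruction, multiple-data idea much wider:

    #include <xmmintrin.h>  /* SSE intrinsics */

    /* Adds two float arrays four elements per instruction. Assumes
     * n is a multiple of 4 and the arrays are 16-byte aligned --
     * a simplification for this sketch. */
    static void add_arrays_sse(float *dst, const float *a,
                               const float *b, int n)
    {
        for (int i = 0; i < n; i += 4) {
            __m128 va = _mm_load_ps(a + i);  /* four floats from a */
            __m128 vb = _mm_load_ps(b + i);  /* four floats from b */
            _mm_store_ps(dst + i, _mm_add_ps(va, vb));
        }
    }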


Sid2
 
Forum moderator - BOINCstats SOFA member
BAM!ID: 28578
Joined: 2007-06-13
Posts: 7336
Credits: 593,058,630
World-rank: 3,055

2010-06-26 16:33:50


Intel plans to beat AMD's and Nvidia's graphics chips at High Performance Computing (HPC) with Larrabee, but under a new name.

At the Supercomputer Conference ISC’10, server boss Kirk Skaugen explained that the ex-Larrabee is now sailing under the flag of the “Many Integrated Core” (MIC) architecture.

A 32-nm version named Aubrey Isle will be released as a developer sample with 32 cores, 8 MB of shared cache and a clock speed of 1.2 GHz. Thanks to quad hyperthreading, each chip handles 128 threads (32 cores × 4 threads per core) in quasi-parallel.



More . . .
