
Thread: Anandtech News

  1. RSS Bot FEED
    #3751

    Anandtech: Best Video Cards: March 2014

    We’re back once again with our regular guide to desktop video cards, this time for March of 2014.
    Since our last update in January 2014, we’ve seen some significant shifts on both the high-end and low-end of the market. On the low end we’ve seen AMD introduce the Radeon R7 250X and the Radeon R7 265 to shore up their competitive positioning in the sub-$150 markets. Meanwhile NVIDIA has launched the first of their new Maxwell series of cards, the GeForce GTX 750 series, to do battle in that price segment. Unlike the competitive positioning prior to these launches, AMD and NVIDIA have pulled apart a bit in their technical capabilities and pricing, so we have two families of cards that aren’t quite as close substitutes to each other as before.
    As for the high-end of the market, we’ve seen a pair of new product launches. From NVIDIA we have the GeForce GTX Titan Black. Titan Black is fundamentally a 6GB version of the GTX 780 Ti with increased GPU clockspeeds, and like the previous Titan, the Titan Black is NVIDIA's entry-level prosumer double precision compute card, thanks to its unlocked FP64 performance. Unlike the original Titan though, the lead over its closest competitor (GTX 780 Ti) is slim, so while it's still the fastest gaming card around, the $320 (~50%) premium means most gamers are better served by the GTX 780 Ti, leaving the Titan Black to compute tasks.
    Meanwhile from AMD we have the Radeon R9 280. The R9 280 isn’t based on any new GPU from AMD – it’s essentially just a rebadged Radeon HD 7950 – but it goes hand-in-hand with AMD’s supply and demand situation coming back into balance. After a period of $900 R9 290Xs, the R9 280 has heralded shifts in supply and demand that have almost entirely corrected the prices of AMD cards in a matter of weeks. Cryptocoin Mania seems to have subsided to a degree as coin prices have softened and difficulty curves have increased (not to mention spring is upon us), and while it’s hard to directly measure supply, the fact that the R9 280 remains in stock is as good an indicator as we can hope for that supply stands stronger than before. As a result most AMD cards are only running slightly above their MSRPs (R9 290 excluded), which significantly improves AMD’s competitive positioning above $200.

    Recently Launched: GeForce GTX 750 Series
    Finally, we’ve also seen some inventory drawdowns on both sides as card families are prepared for retirement. For AMD cards the supply of Radeon 7700 series cards is finally drying up, shifting our focus to the 200 series from top to bottom. Meanwhile NVIDIA’s GTX 650 Ti is now starting to run into short supply – having been replaced by the GTX 750 Ti – and it’s likely that the rest of NVIDIA’s GK106 based products will soon follow. As a top-end GK106 card is not much faster than a top-end GM107 card, we don’t expect NVIDIA will want to be producing the larger GK106 for too much longer.
    Anyhow, market summaries behind us, let’s look at individual recommendations. As always, we’ve laid out our ideas of price/performance bands and recommendations in our table below, with our full explanations and alternative options to follow. For this edition we’ve tweaked our bands in response to the recent product launches from AMD and NVIDIA. But in the case of the sub-$200 market it’s worth pointing out that there’s a video card for roughly every $10, so picking a good video card is as much about budgets as it is finding an especially strong card.
    March 2014 GPU Performance Guide
    Performance Band | Price Range | Recommendation
    1080p (Low) | $99-$149 | AMD Radeon R7 250X
    1080p (Med) | $149-$209 |
    1080p (High) | $209-$329 |
    1440p (Med) | $329-$499 |
    1440p (High) | $499-$679 |
    1440p (Max) | $679+ |
    4K/Multi-Monitor (High) | $1200+ |
    As a general recommendation for gaming, we suggest starting at $99. There are cards below this price, but the amount of performance you have to give up below $99 far outweighs the cost savings. Even then, performance gains will generally exceed the price increases up to $150 or so.
    Meanwhile for gamers looking for high quality 1080p gaming or better, that will start at $209. Going above that will find cards that are good for 1440p, 4K, and multi-monitor, while going below that will find cards that will require some quality sacrifices to stay at 1080p.
    Budget (

  2. RSS Bot FEED
    #3752

    Anandtech: GIGABYTE F2A88X-UP4 Review

    In terms of motherboard output, there seems to be a clear dichotomy between AMD-based motherboards and Intel motherboards. Innovation starts on the higher-selling Intel ATX products, whereas AMD is more focused on smaller form factors. With the Kaveri APUs moving to more integrated graphics power, this makes sense. However, some of those high-end innovations do make it over to the AMD + ATX crowd, which is what GIGABYTE has done with the F2A88X-UP4, an AMD FM2+ motherboard with reinforced power delivery that we are reviewing today.

    More...

  3. RSS Bot FEED
    #3753

    Anandtech: Return of the DirectX vs. OpenGL Debates

    With the announcement of DirectX 12 features like low-level programming, it appears we're having a revival of the DirectX vs. OpenGL debates—and we can toss AMD's Mantle into the mix in place of Glide (RIP 3dfx). I was around back in the days of the flame wars between OGL and DX1/2/3 devotees, with id Software's John Carmack and others weighing in on behalf of OGL at the time. As Microsoft continued to add features to DX, and with a healthy dose of marketing muscle, the subject mostly faded away after a few years. Today, the vast majority of Windows games run on DirectX, but with mobile platforms predominantly using variants of OpenGL (i.e. smartphones and tablets use a subset called OpenGL ES—the ES being for "Embedded Systems") we're seeing a bit of a resurgence in OGL use. There's also the increasing support for Linux and OS X, making a cross-platform graphics API even more desirable.
    At the Game Developers Conference 2014, in a panel including NVIDIA's Cass Everitt and John McDonald, AMD's Graham Sellers, and Intel's Tim Foley, explanations and demonstrations were given suggesting OpenGL could unlock as much as a 7X to 15X improvement in performance. Even without fine tuning, they note that in general OpenGL code is around 1.3X faster than DirectX. It almost makes you wonder why we ever settled for DirectX in the first place—particularly considering many developers felt DirectX code was always a bit more complex than OpenGL code. Anyway, if you have an interest in graphics programming (or happen to be a game developer), you can find a full set of 130 slides from the presentation on NVIDIA's blog. Not surprisingly, Valve is also promoting OpenGL in various ways; the same link also has a video from a couple weeks back at Steam Dev Days covering the same topic.
    The key to unlocking improved performance appears to be pretty straightforward: reducing driver overhead and increasing the number of draw calls. These are both items targeted by AMD's Mantle API, and presumably the low-level DX12 API as well. I suspect the "7-15X improved performance" is going to be far more than we'll see in most real-world situations (i.e. games), but even a 50-100% performance improvement would be huge. Many of the mainstream laptops I test can hit 30-40 FPS at high quality 1080p settings, but there are periodic dips into the low 20s or maybe even the teens. Double the frame rates and everything becomes substantially smoother.
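    To make the draw call point concrete, here is a minimal sketch in C of what trading many small submissions for fewer, bigger ones looks like in OpenGL. This is not the GDC demo code linked below; the function names are hypothetical, and it assumes an already-created GL 3.1+ context (loaded via GLEW), a bound VAO, and a vertex shader that reads per-instance transforms by gl_InstanceID.

    /* Hedged sketch: per-object draws vs. one instanced draw. Setup (context,
     * VAO, shaders, buffers) is assumed to exist and is not shown here. */
    #include <GL/glew.h>

    /* Naive path: one glDrawArrays per object. Every call crosses into the
     * driver, which validates state and builds GPU commands; that per-call CPU
     * cost is the "driver overhead" the GDC panel is describing. */
    static void draw_naive(GLint model_loc, const float *models, int n, int verts)
    {
        for (int i = 0; i < n; ++i) {
            glUniformMatrix4fv(model_loc, 1, GL_FALSE, &models[i * 16]);
            glDrawArrays(GL_TRIANGLES, 0, verts);
        }
    }

    /* Batched path: a single instanced call submits all n objects, so the
     * driver is entered once instead of n times. Per-object transforms live in
     * a buffer object the vertex shader indexes with gl_InstanceID. */
    static void draw_instanced(int n, int verts)
    {
        glDrawArraysInstanced(GL_TRIANGLES, 0, verts, n);
    }

    Mantle and (presumably) DX12 attack the same per-call cost from the other direction, by letting the application assemble command buffers itself rather than funneling every state change and draw through the driver.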
    I won't pretend to have a definitive answer on which API is "best", but just like being locked into a single hardware platform or OS can lead to stagnation, I think it's always good to have alternatives. Obviously there's a lot going on with developing game engines, and sometimes slower code that's easier to use/understand is preferable to fast/difficult code. There's also far more to making a "good" game than graphics, which is a topic unto itself. Regardless, code for some of the testing scenarios provided by John McDonald is available on Github if you're interested in checking it out. It should work on Windows and Linux but may require some additional work to get it running on OS X for now.


    More...

  4. RSS Bot FEED
    #3754

    Anandtech: HTC launches the One (2014): Formerly M8

    HTC’s struggle in the smartphone OEM space has almost become a constant in the past few years, marked by a dramatic fall from the top of sales and market share in many regions around 2011 to where the company stands now. While the One line in 2012 was hoped to be the reinvention that would bring HTC back, Samsung effectively dominated 2012 with the Galaxy S3 and Note 2, and while the HTC One/M7 in 2013 was a ground-breaking phone with great critical acclaim, HTC posted its first-ever loss.
    That brings us to the new One, one of the most leaked devices ever. While the hype surrounding the One (2014) doesn’t quite approach Moto X levels, the leaks have certainly served to fan the hype in many ways. As always, the best way to get all of this out of the way is a table to show the specs. Interestingly enough, the Asian SKU will get the MSM8974ACv3 SoC, which is the 2.45 GHz bin of the Snapdragon 801. At any rate, all the relevant information for international units is below.
     | HTC One (2013) | HTC One (2014)
    SoC | APQ8064AB 1.7 GHz Snapdragon 600 | MSM8974ABv3 2.26 GHz Snapdragon 801
    RAM/NAND | 2 GB LPDDR2, 32/64GB NAND | 2GB LPDDR3, 16/32GB NAND + microSD
    Display | 4.7” SLCD3 1080p | 5” 1080p LCD
    Network | 2G / 3G / 4G LTE (Qualcomm MDM9x15 UE Category 3 LTE) | 2G / 3G / 4G LTE (Qualcomm MDM9x25 UE Category 4 LTE)
    Dimensions | 137.4 x 68.2 x 9.3mm max / 4mm min, 143 grams | 146.36 x 70.6 x 9.35mm max, 160 grams
    Camera | 4.0 MP (2688

    More...

  5. RSS Bot FEED
    #3755

    Anandtech: NVIDIA GTC 2014 Keynote Live Blog

    We're live from NVIDIA's 2014 GPU Technology Conference. Jen-Hsun's keynote will begin at 9:00AM PT/12:00PM ET, check back here for live updates!

    More...

  6. RSS Bot FEED
    #3756

    Anandtech: NVIDIA Announces GeForce GTX Titan Z: Dual-GPU GK110 For $3000

    Today at GTC NVIDIA announced their next GTX Titan family card. Dubbed the GTX Titan Z (no idea yet on why it’s Z), the card is NVIDIA's obligatory entry into the dual-GPU/single-card market, finally bringing NVIDIA’s flagship GK110 GPU into a dual-GPU desktop/workstation product.
    While NVIDIA has not released the complete details about the product – in particular we don’t know precise clockspeeds or TDPs – we have been given some information on core configuration, memory, pricing, and availability.
     | GTX Titan Z | GTX Titan Black | GTX 780 Ti | GTX Titan
    Stream Processors | 2 x 2880 | 2880 | 2880 | 2688
    Texture Units | 2 x 240 | 240 | 240 | 224
    ROPs | 2 x 48 | 48 | 48 | 48
    Core Clock | 700MHz? | 889MHz | 875MHz | 837MHz
    Boost Clock | ? | 980MHz | 928MHz | 876MHz
    Memory Clock | 7GHz GDDR5 | 7GHz GDDR5 | 7GHz GDDR5 | 6GHz GDDR5
    Memory Bus Width | 2 x 384-bit | 384-bit | 384-bit | 384-bit
    VRAM | 2 x 6GB | 6GB | 3GB | 6GB
    FP64 | 1/3 FP32 | 1/3 FP32 | 1/24 FP32 | 1/3 FP32
    TDP | ? | 250W | 250W | 250W
    Transistor Count | 2 x 7.1B | 7.1B | 7.1B | 7.1B
    Manufacturing Process | TSMC 28nm | TSMC 28nm | TSMC 28nm | TSMC 28nm
    Launch Date | 04/XX/14 | 02/18/14 | 11/07/13 | 02/21/13
    Launch Price | $2999 | $999 | $699 | $999
    In brief, the GTX Titan Z is a pair of fully enabled GK110 GPUs. NVIDIA isn’t cutting any SMXes or ROP partitions to bring down power consumption, so each half of the card is equivalent to a GTX 780 Ti or GTX Titan Black, operating at whatever (presumably lower) clockspeeds NVIDIA has picked. And although we don’t have precise clockspeeds, NVIDIA has quoted the card as having 8 TFLOPS of FP32 performance, which would put the GPU clockspeed at around 700MHz, nearly 200MHz below GTX Titan Black’s base clock (to say nothing of boost clocks).
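    For those curious how that estimate falls out of the quoted FLOPS number, here is the quick arithmetic as a sketch; the 2 FLOPs per core per clock figure is the standard FMA rate for Kepler CUDA cores, and we are assuming the quoted 8 TFLOPS is a base-clock figure.

    /* Back-of-the-envelope check of the ~700MHz core clock implied by
     * NVIDIA's quoted 8 TFLOPS FP32 figure for GTX Titan Z. */
    #include <stdio.h>

    int main(void)
    {
        const double cores = 2.0 * 2880.0;           /* two fully enabled GK110s */
        const double flops = 8.0e12;                 /* quoted 8 TFLOPS FP32 */
        const double clock = flops / (cores * 2.0);  /* 2 FLOPs per core per clock (FMA) */
        printf("implied core clock: ~%.0f MHz\n", clock / 1.0e6);  /* ~694 MHz */
        return 0;
    }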
    On the memory front GTX Titan Z is configured with 12GB of VRAM, 6GB per GPU. NVIDIA’s consumer arm has also released the memory clockspeed specifications, telling us that the card won’t be making any compromises there, operating at the same 7GHz memory clockspeed as the GTX Titan Black. This is something of a big accomplishment given the minimal routing space a dual-GPU card provides.
    In terms of build the GTX Titan Z shares a lot of similarities with NVIDIA’s previous generation dual-GPU card, the GTX 690. NVIDIA is keeping the split blower design, with a single axial fan pushing out air via both the front and the back of the card, essentially exhausting half the hot air and sending the other half back into the case. We haven’t had any hands-on time with the card, but NVIDIA is clearly staying with the black metal styling of the GTX Titan Black.
    The other major unknown right now is power consumption. GTX Titan Black is rated for 250W, and meanwhile NVIDIA was able to get a pair of roughly 200W GTX 680s into the 300W GTX 690 (with reduced clockspeeds). So it’s not implausible that GTX Titan Z is a 375W card, but we’ll have to wait and see.
    But perhaps the biggest shock will be price. The GTX Titan series has already straddled the prosumer line with its $1000/GPU pricing; GTX Titan was by far the fastest thing on the gaming market in the winter of 2013, while GTX Titan Black is a bit more professional-leaning due to the existence of the GTX 780 Ti. With GTX Titan Z, NVIDIA will be asking for a cool $3000 for the card, or three times the price of a GTX Titan Black.
    It goes without saying then that GTX Titan Z is aimed at an even more limited audience than the GTX Titan and GTX Titan Black. To be sure, NVIDIA is still targeting both gamers and compute users with this card, and since it is a GeForce card it will use the standard GeForce driver stack, but the $3000 price tag is much more within the realm of compute users than gamers. For gamers this may as well be a specialty card, like an Asus ARES.
    Now for compute users this will still be an expensive card, but potentially very captivating. Per FLOP GTX Titan Black is still a better deal, but with compute users there is a far greater emphasis on density. Meanwhile the GTX Titan brand has by all accounts been a success for NVIDIA, selling more cards to compute users than they had ever expected, so a product like GTX Titan Z is more directly targeted at those users. I have no doubt that there are compute users who will be happy with it – like the original GTX Titan it’s far cheaper per FP64 FLOP than any Tesla card, maintaining its “budget compute” status – but I do wonder if part of the $3000 pricing is in reaction to GTX Titan undercutting Tesla sales.
    Anyhow, we should have more details next month. NVIDIA tells us that they’re expecting to launch the card in April, so we would expect to hear more about it in the next few weeks.


    More...

  7. RSS Bot FEED
    #3757

    Anandtech: NVIDIA SHIELD Price Cuts and Portal

    NVIDIA has a couple big SHIELD-related announcements today. The first is a “limited time” price cut to $199. The original price was $299, which then dropped to $249 – and there was an additional $50 rebate if you purchased an NVIDIA GTX GPU. Now the price is a flat $199, with no rebate for GTX GPU purchasers. This reduced pricing will be in effect at least through the end of April, though personally I think we might see the price stay there. There are additional incentives to go along with the April price cut, of course.
    First, in the way of software updates NVIDIA will be providing a welcome update to Android 4.4.2 “KitKat”, along with other enhancements including modifications to the Gamepad Mapper, Bluetooth mouse and keyboard support in Console Mode, and other tweaks and changes. It’s great to see continued support for SHIELD with OS updates like this, as I’ve had several other Android tablets that basically got kicked to the curb after one or two updates. Naturally, with NVIDIA using SHIELD as more than just a standard tablet, it’s important to keep it up to date and relevant.
    Along with the upgrade to KitKat, NVIDIA will be updating the GameStream client with a new addition: Remote GameStream. The current GameStream is designed to work over your local WiFi network (5GHz dual-stream preferred/required), which limits its use to within your own home. On the other side of the equation, they’ve had the GRID Streaming Beta running for a while with remote access to games rendered on a GRID computing farm and streamed to your SHIELD device. Now the two aspects are being combined with remote streaming from your home PC to your SHIELD device, anywhere you have a (presumably “good”) WiFi connection – or if you have a good LTE connection on your smartphone, you can enable tethering and potentially use that as well.
    I think this is a much bigger deal than local GameStream – if I’m in my home, I’m usually happier sitting at a PC with a keyboard and mouse (though admittedly gaming on an HDTV is one potential use case that’s still appealing). Now, running the GameStream client on your home PC with an appropriate (GTX 600 or later) desktop GPU gives you remote access to any of those games. What’s more, since SHIELD isn’t really doing much computational work, battery life will still be very good. The latest GameStream also extends support to select laptop GPUs: GTX 800M, GTX 700M, and select GTX 600M (Kepler-based GTX 600M, basically) parts will support GameStream as well. Another GameStream addition is support for multiple-PC pairings, so you can choose to stream from different desktops/laptops (e.g. if one of your desktops is already running a game, you could use a secondary system).
    Returning to Console Mode, the addition of Bluetooth mouse and keyboard support allows you to connect your SHIELD to an HDTV and then sit on the couch with a Bluetooth keyboard and mouse. Basically, you get a “portable PC” that you can connect to any appropriate large screen. It’s not clear if Console Mode also supports Remote GameStream, but that would seem to make sense. NVIDIA notes that three of the top five games among current GeForce Experience users are mouse-and-keyboard-only (League of Legends, Civilization V, and Diablo III are specifically mentioned), so Console Mode extends gaming support to additional titles, albeit in a roundabout way. Seriously: PC streaming to SHIELD connected to HDTV controlled by Bluetooth keyboard and mouse – am I the only one that feels it’s perhaps a bit too involved?
    Wrapping up the announcement, NVIDIA also discussed their efforts to bring additional full PC and console ports to SHIELD. Recently they have worked with developers to port Grand Theft Auto: San Andreas, Mount & Blade Warband, and the indie hit Rochard to Android, with Tegra 4 enhancements for SHIELD devices. Coming soon is another major hit: Portal. This is the original title and not the sequel, but if you’ve never experienced the joys of Portal then you’re in for a treat. What would make the announcement even better would be free copies for existing and future SHIELD owners, but that might be asking too much.


    More...

  8. RSS Bot FEED
    #3758

    Anandtech: Facebook To Acquire Oculus VR Inc for $2 Billion

    Back in September 2012, a $2.4 million Kickstarter campaign wrapped up to help develop the next wave of immersive gaming: the Oculus Rift. The premise behind the Oculus Rift is a virtual reality headset that puts you deeper into the game than any other headset has done before. Since that Kickstarter campaign, news about Oculus has penetrated all of the technical media, covering the development, the nature of the device, and what sort of games are going to be able to use it. I remember a few images of Brian and Anand trying the Crystal Cove prototype at CES this year.
    The news today comes as a shock (to me at least) – Facebook has announced that it has reached a definitive agreement to acquire Oculus VR Inc at a value of $2 billion. This includes $400 million in cash and 23.1 million shares of Facebook stock (~$1.6 billion based on the last 20-day average).
    Oculus will keep their headquarters in Irvine, CA and continue development on the Rift. With Facebook moving in to help (it is unclear at this point just how much of a role they will play), the focus may shift towards a more social scenario and future for the device, alongside the anticipated action game genre.
    The deal is expected to be completed during Q2, and we are awaiting further information as to the depth of the acquisition and how each firm will operate under the new structure. Facebook should have a lot of money from its IPO in order to help drive Oculus investment, perhaps accelerating the process.
    Source: PRNewswire
    "We are excited to work with Mark and the Facebook team to deliver the very best virtual reality platform in the world," said Brendan Iribe, co-founder and CEO of Oculus VR. "We believe virtual reality will be heavily defined by social experiences that connect people in magical, new ways. It is a transformative and disruptive technology, that enables the world to experience the impossible, and it's only just the beginning."



    More...

  9. RSS Bot FEED
    #3759

    Anandtech: NVIDIA Updates GPU Roadmap; Unveils Pascal Architecture For 2016

    In something of a surprise move, NVIDIA took to the stage today at GTC to announce a new roadmap for their GPU families. With today’s announcement comes news of a significant restructuring of the roadmap that will see GPUs and features moved around, and a new GPU architecture, Pascal, introduced in the middle.
    We’ll get to Pascal in a second, but to put it into context let’s first discuss NVIDIA’s restructuring. At GTC 2013 NVIDIA announced their future Volta architecture. Volta, which had no scheduled date at the time, would be the GPU after Maxwell. Volta’s marquee feature would be on-package DRAM, utilizing Through Silicon Vias (TSVs) to die stack memory and place it on the same package as the GPU. Meanwhile in that roadmap NVIDIA also gave Maxwell a date and a marquee feature: 2014, and Unified Virtual Memory.

    NVIDIA's Old Volta Roadmap
    NVIDIA's New Pascal Roadmap
    As of today that roadmap has more or less been thrown out. No products have been removed, but what Maxwell is and what Volta is have changed, as has the pacing. Maxwell for its part has “lost” its unified virtual memory feature. This feature is now slated for the chip after Maxwell, and in the meantime the closest Maxwell will get is the software based unified memory feature being rolled out in CUDA 6. Furthermore NVIDIA has not offered any further details on second generation Maxwell (the higher performing Maxwell chips) and how those might be integrated into professional products.
    As far as NVIDIA is concerned, Maxwell’s marquee feature is now DirectX 12 support (though even the extent of this isn’t perfectly clear), and that with the shipment of the GeForce GTX 750 series, Maxwell is now shipping in 2014 as scheduled. We’re still expecting second generation Maxwell products, but at this juncture it does not look like we should be expecting any additional functionality beyond what Big Kepler + 1st Gen Maxwell can achieve.
    Meanwhile Volta has been pushed back and stripped of its marquee feature. Its on-package DRAM has been promoted to the GPU before Volta, and while Volta still exists, publicly it is a blank slate. We do not know anything else about Volta beyond the fact that it will come after the 2016 GPU.
    Which brings us to Pascal, the 2016 GPU. Pascal is NVIDIA’s latest GPU architecture and is being introduced in between Maxwell and Volta. In the process it has absorbed old Maxwell’s unified virtual memory support and old Volta’s on-package DRAM, integrating those feature additions into a single new product.


    With today’s announcement comes a small degree of additional detail on NVIDIA’s on-package memory plans. The bulk of what we wrote for Volta last year remains true: NVIDIA uses on-package stacked DRAM, allowed by the use of TSVs. What’s new is that NVIDIA has confirmed they will be using JEDEC’s High Bandwidth Memory (HBM) standard, and the prototype Pascal card we have seen uses entirely on-package memory, so there isn’t a split memory design. Though we’d also point out that unlike the old Volta announcement, NVIDIA isn’t listing any solid bandwidth goals like the 1TB/sec number we had last time. From what NVIDIA has said, this likely comes down to a cost issue: how much memory bandwidth are customers willing to pay for, given the cutting edge nature of this technology?
    Meanwhile NVIDIA hasn’t said anything else directly about the unified memory plans that Pascal has inherited from old Maxwell. However after we get to the final pillar of Pascal, how that will fit in should make more sense.
    Coming to the final pillar then, we have a brand new feature being introduced for Pascal: NVLink. NVLink, in a nutshell, is NVIDIA’s effort to supplant PCI-Express with a faster interconnect bus. From the perspective of NVIDIA, who is looking at what it would take to allow compute workloads to better scale across multiple GPUs, the 16GB/sec made available by PCI-Express 3.0 is hardly adequate. Especially when compared to the 250GB/sec+ of memory bandwidth available within a single card. PCIe 4.0 in turn will eventually bring higher bandwidth yet, but this still is not enough. As such NVIDIA is pursuing their own bus to achieve the kind of bandwidth they desire.
    The end result is a bus that looks a whole heck of a lot like PCIe, and is even programmed like PCIe, but operates with tighter requirements and a true point-to-point design. NVLink uses differential signaling (like PCIe), with the smallest unit of connectivity being a “block.” A block contains 8 lanes, each rated for 20Gbps, for a combined bandwidth of 20GB/sec. In terms of transfers per second this puts NVLink at roughly 20 gigatransfers/second per lane, as compared to an already staggering 8GT/sec for PCIe 3.0, indicating just how high a frequency this bus is planned to run at.
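    As a quick sketch of the arithmetic behind those per-block figures (the announcement quotes aggregate numbers, so directionality is left out here):

    /* Quick check of the NVLink per-block bandwidth quoted above, with the
     * commonly cited PCIe 3.0 x16 figure alongside it for comparison. */
    #include <stdio.h>

    int main(void)
    {
        const double lanes     = 8.0;                      /* lanes per NVLink block */
        const double lane_gbps = 20.0;                     /* 20 Gb/s per lane (~20 GT/s) */
        const double block_gbs = lanes * lane_gbps / 8.0;  /* bits -> bytes: 20 GB/s per block */

        /* PCIe 3.0: 8 GT/s per lane with 128b/130b encoding, 16 lanes */
        const double pcie_gbs  = 16.0 * 8.0 * (128.0 / 130.0) / 8.0;  /* ~15.8 GB/s */

        printf("NVLink block: %.0f GB/s, PCIe 3.0 x16: %.1f GB/s\n", block_gbs, pcie_gbs);
        return 0;
    }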
    Multiple blocks in turn can be teamed together to provide additional bandwidth between two devices, or those blocks can be used to connect to additional devices, with the number of blocks depending on the SKU. The actual bus is purely point-to-point – no root complex has been discussed – so we’d be looking at processors directly wired to each other instead of going through a discrete PCIe switch or the root complex built into a CPU. This makes NVLink very similar to AMD’s HyperTransport, or Intel’s QuickPath Interconnect (QPI). This includes the NUMA aspects of not necessarily having every processor connected to every other processor.
    But the rabbit hole goes deeper. To pull off the kind of transfer rates NVIDIA wants to accomplish, the traditional PCI/PCIe style edge connector is no good; if nothing else the lengths that can be supported by such a fast bus are too short. So NVLink will be ditching the slot in favor of what NVIDIA is labeling a mezzanine connector, the type of connector typically used to sandwich multiple PCBs together (think GTX 295). We haven’t seen the connector yet, but it goes without saying that this requires a major change in motherboard designs for the boards that will support NVLink. The upside of this however is that with this change and the use of a true point-to-point bus, what NVIDIA is proposing is for all practical purposes a socketed GPU, just with the memory and power delivery circuitry on the GPU instead of on the motherboard.
    NVIDIA’s Pascal prototype is one such example of what a card would look like. We cannot see the connector itself, but the basic idea is that it will lie flat on the motherboard, parallel to the board (instead of perpendicular like PCIe slots), with each Pascal card connected to the board through the NVLink mezzanine connector. Besides reducing trace lengths, this has the added benefit of allowing such GPUs to be cooled with CPU-style cooling methods (we’re talking about servers here, not desktops) in a space efficient manner. How many NVLink mezzanine connectors are available would of course depend on how many the motherboard design calls for, which in turn will depend on how much space is available.

    An example of a modern, high bandwidth mezzanine connector
    One final benefit NVIDIA is touting is that the new connector and bus will improve both energy efficiency and energy delivery. When it comes to energy efficiency NVIDIA is telling us that per byte, NVLink will be more efficient than PCIe – this being a legitimate concern when scaling up to many GPUs. At the same time the connector will be designed to provide far more than the 75W PCIe is spec’d for today, allowing the GPU to be directly powered via the connector, as opposed to requiring external PCIe power cables that clutter up designs.
    With all of that said, while NVIDIA has grand plans for NVLink, it’s also clear that PCIe isn’t going to be completely replaced anytime soon on a large scale. NVIDIA will still support PCIe – in fact the blocks can talk PCIe or NVLink – and even in NVLink setups there are certain command and control communiques that must be sent through PCIe rather than NVLink. The best case scenario for NVLink right now is that it takes hold in servers, while workstations and consumers would continue to use PCIe as they do today.
    Meanwhile, though NVLink won’t even be shipping until Pascal in 2016, NVIDIA already has some future plans in store for the technology. Along with a GPU-to-GPU link, NVIDIA’s plans include a more ambitious CPU-to-GPU link, in large part to achieve the same data transfer and synchronization goals as with inter-GPU communication. As part of the OpenPOWER consortium, NVLink is being made available to POWER CPU designs, though no specific CPU has been announced. Meanwhile the door is also left open for NVIDIA to build an ARM CPU implementing NVLink (Denver perhaps?) but again, no such product is being announced today. If it did come to fruition though, then it would be similar in concept to AMD’s abandoned “Torrenza” plans to utilize HyperTransport to connect CPUs with other processors (e.g. GPUs).
    Finally, NVIDIA has already worked out some feature goals for what they want to do with NVLink 2.0, which would come on the GPU after Pascal (which by NV’s other statements should be Volta). NVLink 2.0 would introduce cache coherency to the interface and processors on it, which would allow for further performance improvements and the ability to more readily execute programs in a heterogeneous manner, as cache coherency is a precursor to tightly shared memory.
    Wrapping things up, with an attached date for Pascal and numerous features now billed for that product, NVIDIA looks to have set the wheels in motion for developing the GPU they’d like to have in 2016. The roadmap alteration we’ve seen today is unexpected to say the least, but Pascal is on much more solid footing than old Volta was in 2013. In the meantime we’re still waiting to see what Maxwell will bring NVIDIA’s professional products, and it looks like we’ll be waiting a bit longer to get the answer to that question.


    More...

  10. RSS Bot FEED
    #3760

    Anandtech: Asustor AS-304T: 4-Bay Intel Evansport NAS Review

    Intel's Evansport NAS platform was meant to take on ARM's dominance in the low to mid-range consumer / SOHO NAS market. We covered it in detail while reviewing the Thecus N2560, a 2-bay solution. How does the platform fare in a unit with four bays? Read on for our report from the evaluation of Asustor's AS-304T to find out.

    More...
