
Thread: Anandtech News

  1. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #4341

    Anandtech: Intel’s Haswell-EP Xeons with DDR3 and DDR4 on the Horizon?

Johan’s awesome overview of the Haswell-EP ecosystem showed that the server processor line from Intel is firmly on track for DDR4 memory, along with the associated benefits of lower power consumption, higher absolute frequencies and higher capacity modules. At that point, we all assumed that all Haswell-EP Xeons using the LGA2011-3 socket were DDR4 only, requiring each new CPU to be used with the newer generation modules. However, thanks to ASRock’s server team, ASRock Rack, it would seem that there will be some Xeons for sale from Intel with both DDR3 and DDR4 support.
Caught by Patrick at ServeTheHome, ASRock Rack had released their motherboard line without much of a fuss. There is nothing strange about that in itself; however, the following four models were the subject of interest:
    A quick email to our contacts at ASRock provided the solution: Intel is going to launch several SKUs with a dual DDR3/DDR4 controller. These processors are available in eight, ten and twelve core flavors, ranging from 85W to 120W:
QVL CPUs for ASRock Rack EPC612D8T:
E5-2629 v3: 8 cores / 16 threads, 2.4 GHz base, 20 MB L3 cache, 85W TDP
E5-2649 v3: 10 cores / 20 threads, 2.3 GHz base, 25 MB L3 cache, 105W TDP
E5-2669 v3: 12 cores / 24 threads, 2.3 GHz base, 30 MB L3 cache, 120W TDP
At the current time there is no release date or pricing for these DDR3 Haswell-EP processors; however, it would seem that ASRock Rack is shipping these motherboards to distributors already, meaning that Intel cannot be far behind. It does offer a server team the ability to reuse the expensive DDR3 memory they already have, especially given the DDR4 premium, although the choice of processors is limited.
CPU-World suggested that these processors have dual memory controllers, and we received confirmation that this is true. This could suggest that all Xeons have dual memory controllers but with DDR3 disabled. Note that these motherboards would reject a DDR4-only CPU as a result of their layout. It does potentially pave the way for combination DDR3/DDR4 based LGA2011-3 motherboards in the future. We have also been told that the minimum order quantity for these CPUs might be higher than average, and thus server admins will have to contact their Intel distribution network for exact numbers. This might prevent smaller deployments from keeping their DDR3.
    Source: ServeTheHome, ASRock Rack


    More...

  2. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #4342

    Anandtech: Hands On With ODG's R-7: Augmented Reality Glasses

While it's still unclear to me what the future of wearables will be, I must admit that all things considered I feel that glasses are a better form factor than watches. If the goal is glanceable information, a heads-up display is probably as good as it gets. This brings us to the ODG R-7, which is part of Qualcomm's Vuforia for Digital Eyewear (VDE) platform. VDE brings new capabilities for augmented reality; what this really means is that developers no longer need to worry about coming up with their own system for aligning content from the headset to the real world, as the platform makes it a relatively simple process. Judging by the ODG R-7, there's no need for a 3D camera to pull this off.
So let's talk about the ODG R-7, one of the most fascinating wearables I've ever seen. While its primary purpose is government and industrial use, it isn't a far leap to see the possibilities for consumers. For reference, the ODG R-7 that I saw at this show is an early rev, and effectively still a prototype; however, the initial specs have been established. This wearable has a Qualcomm Snapdragon 805 SoC running at 2.7 GHz, with anywhere from one to four gigabytes of RAM and 16 to 128 gigabytes of storage. There are two see-through 720p LCoS displays running at a 100 Hz refresh rate. There's one 5MP camera on the front to enable the augmented vision aspects. There's also a battery on each side of the frame, making up a 1400 mAh battery that likely runs at a 3.8V nominal voltage.
While the specs are one thing, the actual device itself is another. In person this is clearly still a prototype, as it feels noticeably front heavy on the face; the front is where all of the electronics are contained. It's quite obvious that this is running up against thermal limits, as there is a noticeable heat sink running along the top of the glasses. This area gets noticeably hot during operation and easily feels to be around 50-60C, although the final product is likely to be much cooler in operation.
    However, these specs aren't really what matter so much as the use cases demonstrated. While it's effectively impossible to really show what it looks like, one demo shown was a terrain map. When this was detected by the glasses, it automatically turned the map into a 3D model that could be viewed from any angle. In addition, a live UAV feed was just above the map, with the position of the UAV indicated by a 3D model orbiting around the map.
It's definitely not a long shot to guess the next logical steps for such a system. Overlaying directions for turn by turn navigation is one obvious use case, as is simple notification management, similar to Android Wear watches. If anything, the potential for glasses is greater than watches, as glasses are much harder to notice in day to day use since they rely on gravity instead of tension like a watch band. It could be that I'm biased though, as I've worn glasses all my life.


    More...

  3. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #4343

    Anandtech: NVIDIA GameWorks: More Effects with Less Effort

While NVIDIA's hardware is the big star of the day, the software that we run on the hardware is becoming increasingly important. It's one thing to create the world's fastest GPU, but what good is the GPU if you don't have anything that can leverage all that performance? As part of their ongoing drive to improve the state of computer graphics, NVIDIA has a dedicated team of over 300 engineers whose primary focus is the creation of tools and technologies to make the lives of game developers better.
    Gallery: NVIDIA GameWorks Overview


    GameWorks consists of several items. There's the core SDK (Software Development Kit), along with IDE (Integrated Development Environment) tools for debugging, profiling, and other items a developer might need. Beyond the core SDK, NVIDIA has a Visual FX SDK, a PhysX SDK, and an Optix SDK. The Visual FX SDK offers solutions for complex, realistic effects (e.g. smoke and fire, faces, waves/water, hair, shadows, and turbulence). PhysX is for physics calculations (either CPU or GPU based, depending on the system). Optix is a ray tracing engine and framework, often used to pre-calculate ("bake") lighting in game levels. NVIDIA also provides sample code for graphics and compute, organized by effect and with tutorials.
    Many of the technologies that are part of GameWorks have been around for a few years, but NVIDIA is constantly working on improving their GameWorks library and they had several new technologies on display at their GM204 briefing. One of the big ones has already been covered in our GM204 review, VXGI (Voxel Global Illumination), so I won't rehash that here; basically, it allows for more accurate and realistic indirect lighting. Another new technology that NVIDIA showed is called Turf Effects, which properly simulates individual blades of grass (or at least clumps of grass). Finally, PhysX FleX also has a couple new additions, Adhesion and Gases; FleX uses PhysX to provide GPU simulations of particles, fluids, cloth, etc.
    Still images don't do justice to most of these effects, and NVIDIA will most likely have videos available in the future to show what they look like. PhysX FleX for example has a page with a currently unavailable video, so hopefully they'll update that with a live video in the coming weeks. You can find additional content related to GameWorks on the official website.
    The holiday 2014 season will see the usual avalanche of new games, and many of the AAA titles will sport at least one or two technologies that come from GameWorks. Here's a short list of some of the games, and then we'll have some screen shots to help illustrate what some of the specific technologies do.
    Upcoming Titles with GameWorks Technologies
    Assassin’s Creed: Unity HBAO+, TXAA, PCSS, Tessellation
    Batman: Arkham Knight Turbulence, Environmental PhysX, Volumetric Lights, FaceWorks, Rain Effects
    Borderlands: The Pre-Sequel PhysX Particles
    Far Cry 4 HBAO+, PCSS, TXAA, God Rays, Fur, Enhanced 4K Support
    Project CARS DX11, Turbulence, PhysX Particles, Enhanced 4K Support
    Strife PhysX Particles, HairWorks
    The Crew HBAO+, TXAA
    The Witcher 3: Wild Hunt HairWorks, HBAO+, PhysX, Destruction, Clothing
    Warface PhysX Particles, Turbulence, Enhanced 4K Support
    War Thunder WaveWorks, Destruction
    In terms of upcoming games, the two most prominent titles are probably Assassin's Creed Unity and Far Cry 4, and we've created a gallery for each. Both games use multiple GameWorks elements, and NVIDIA was able to provide before/after comparisons for FC4 and AC Unity. Batman: Arkham Knight and The Witcher 3: The Wild Hunt also incorporate many effects from GameWorks, but we didn't get any with/without comparisons.
    Gallery: GameWorks - Assassin's Creed Unity


Starting with HBAO+ (Horizon Based Ambient Occlusion), this is a newer way of performing ambient occlusion calculations (SSAO, Screen Space AO, being the previous solution that many games have used). Games vary in how they perform AO, but if we look at the AC Unity comparison between HBAO+ and the default AO (presumably SSAO), HBAO+ clearly offers better shadows. HBAO+ is also supposed to be faster and more efficient than other AO techniques.
    TXAA (Temporal Anti-Aliasing) basically combines a variety of filters and post processing techniques to help eliminate jaggies, something which we can all hopefully appreciate. There's one problem I've noticed with TXAA however, which you can see in the above screenshot: it tends to make the entire image look rather blurry in my opinion. It's almost as though someone took Photoshop's "blur" filter and applied it to the image.
    PCSS (Percentage Closer Soft Shadows) was introduced a couple years back, which means it's now time for it to start showing up in shipping games. You can see the video from 2012, and AC Unity and Far Cry 4 are among the first games that will offer PCSS.
Tessellation has been around for a few years now in games, and the concepts behind tessellation go back much further. The net result is that tessellation allows developers to extrude geometry from an otherwise flat surface, creating a much more realistic appearance when used appropriately. The cobblestone streets and roof shingles in AC Unity are great examples of the difference tessellation makes.
    God rays are a lighting feature that we've seen before, but now NVIDIA has implemented a new way of calculating the shafts of light. They now use tessellation to extrude the shadow mapping and actually create transparent beams of light that they can render.
    HairWorks is a way to simulate large strands of hair instead of using standard textures – Far Cry 4 and The Witcher 3 will both use HairWorks, though I have to admit that the hair in motion still doesn't look quite right to me. I think we still need an order of magnitude more "hair", and similar to the TressFX in Tomb Raider this is a step forward but we're not there yet.
    Gallery: GameWorks - Upcoming Games Fall 2014


    There are some additional effects being used in other games – Turbulence, Destruction, FaceWorks, WaveWorks, PhysX, etc. – but the above items give us a good idea of what GameWorks can provide. What's truly interesting about GameWorks is that these libraries are free for any developers that want to use them. The reason for creating GameWorks and basically giving it away is quite simple: NVIDIA needs to entice developers (and perhaps more importantly, publishers) into including these new technologies, as it helps to drive sales of their GPUs among other things. Consider the following (probably not so hypothetical) exchange between a developer and their publisher, paraphrased from NVIDIA's presentation on GameWorks.
    A publisher wants to know when game XYZ is ready to ship, and the developer says it's basically done, but they're excited about some really cool features that will just blow people away, and it will take a few more months to get those finished up. "How many people actually have the hardware required to run these new features?" asks the publisher. When the developers guess that only 5% or so of the potential customers have the hardware necessary, you can guess what happens: the new features get cut, and game XYZ ships sooner rather than later.
    We've seen this sort of thing happen many times – as an example, Crysis 2 shipped without DX11 support (since the consoles couldn't support that level of detail), adding it in a patch a couple months later. Other games never even see such a patch and we're left with somewhat less impressive visuals. While it's true that great graphics do not an awesome game make, they can certainly enhance the experience when used properly.
It's worth pointing out that GameWorks is not necessarily exclusive to NVIDIA hardware. While PhysX as an example was originally ported to CUDA, developers have used PhysX on CPUs for many games, and as you can see in the above slide there are many PhysX items that are supported on other platforms. Several of the libraries (Turbulence, WaveWorks, HairWorks, ShadowWorks, FlameWorks, and FaceWorks) are also listed as "planned" for being ported to the latest generation of gaming consoles. Android is also a growing part of NVIDIA's plans, with the Tegra K1 effectively bringing the same feature set over to the mobile world that we've had on PCs and notebooks for the past couple of years.
    NVIDIA for their part wants to drive the state of the art forward, so that the customers (gamers) demand these high-end technologies and the publishers feel compelled to support them. After all, no publisher would expect great sales from a modern first-person shooter that looks like it was created 10 years ago [insert obligatory Daikatana reference here], but it's a bit of a chicken vs. egg problem. NVIDIA is trying to push things along and maybe hatch the egg a bit earlier, and there have definitely been improvements thanks to their efforts. We applaud their efforts, and more importantly we look forward to seeing better looking games as a result.


    More...

  4. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #4344

    Anandtech: The NVIDIA GeForce GTX 980 Review: Maxwell Mark 2

    At the start of this year we saw the first half of the Maxwell architecture in the form of the GeForce GTX 750 and GTX 750 Ti. Based on the first generation Maxwell based GM107 GPU, NVIDIA did something we still can hardly believe and managed to pull off a trifecta of improvements over Kepler. GTX 750 Ti was significantly faster than its predecessor, it was denser than its predecessor (though larger overall), and perhaps most importantly consumed less power than its predecessor. In GM107 NVIDIA was able to significantly improve their performance and reduce their power consumption at the same time, all on the same 28nm manufacturing node we’ve come to know since 2012. For NVIDIA this was a major accomplishment, and to this day competitor AMD doesn’t have a real answer to GM107’s energy efficiency.
However, GM107 was only the start of the story. Deviating from their typical strategy of launching a high-end GPU first – either a 100/110 or 104 GPU – NVIDIA told us up front that they were launching at the low end first because that made the most sense for them, and that they would be following up on GM107 later this year with what at the time was being called “second generation Maxwell”. Now, 7 months later and true to their word, NVIDIA is back in the spotlight with the first of the second generation Maxwell GPUs, GM204.


    More...

  5. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #4345

    Anandtech: Microsoft Details Direct3D 11.3 & 12 New Rendering Features

Back at GDC 2014 in March, Microsoft and its hardware partners first announced the next full iteration of the Direct3D API. Now on to version 12, this latest version of Direct3D would be focused on low level graphics programming, unlocking the greater performance and greater efficiency that game consoles have traditionally enjoyed by giving seasoned programmers more direct access to the underlying hardware. In particular, low level access would improve performance both by reducing the overhead high level APIs incur, and by allowing developers to better utilize multi-threading by making it far easier to have multiple threads submitting work.
    At the time Microsoft offered brief hints that there would be more to Direct3D 12 than just the low level API, but the low level API was certainly the focus for the day. Now as part of NVIDIA’s launch of the second generation Maxwell based GeForce GTX 980, Microsoft has opened up to the press and public a bit more on what their plans are for Direct3D. Direct3D 12 will indeed introduce new features, but there will be more in development than just Direct3D 12.
    Direct3D 11.3

    First and foremost then, Microsoft has announced that there will be a new version of Direct3D 11 coinciding with Direct3D 12. Dubbed Direct3D 11.3, this new version of Direct3D is a continuation of the development and evolution of the Direct3D 11 API and like the previous point updates will be adding API support for features found in upcoming hardware.
    At first glance the announcement of Direct3D 11.3 would appear to be at odds with Microsoft’s development work on Direct3D 12, but in reality there is a lot of sense in this announcement. Direct3D 12 is a low level API – powerful, but difficult to master and very dangerous in the hands of inexperienced programmers. The development model envisioned for Direct3D 12 is that a limited number of code gurus will be the ones writing the engines and renderers that target the new API, while everyone else will build on top of these engines. This works well for the many organizations that are licensing engines such as UE4, or for the smaller number of organizations that can justify having such experienced programmers on staff.
However, for those same reasons a low level API is not suitable for everyone. High level APIs such as Direct3D 11 exist for a good reason after all; their abstraction not only hides the quirks of the underlying hardware, but it makes development easier and more accessible as well. For these reasons there is a need to offer both high level and low level APIs. Direct3D 12 will be the low level API, and Direct3D 11 will continue to be developed to offer the same features through a high level API.
    Direct3D 12

    Today’s announcement of Direct3D 11.3 and the new features set that Direct3D 11.3 and 12 will be sharing will have an impact on Direct3D 12 as well. We’ll get to the new features in a moment, but at a high level it should be noted that this means that Direct3D 12 is going to end up being a multi-generational (multi-feature level) API similar to Direct3D 11.
    In Direct3D 11 Microsoft introduced feature levels, which allowed programmers to target different generations of hardware using the same API, instead of having to write their code multiple times for each associated API generation. In practice this meant that programmers could target D3D 9, 10, and 11 hardware through the D3D 11 API, restricting their feature use accordingly to match the hardware capabilities. This functionality was exposed through feature levels (ex: FL9_3 for D3D9.0c capable hardware) which offered programmers a neat segmentation of feature sets and requirements.
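To make the mechanism concrete, here is a minimal sketch of how feature levels are requested through the existing Direct3D 11 API (standard d3d11.h calls; error handling trimmed for brevity): the application passes an ordered list of levels it can work with, and the runtime hands back the highest one the hardware supports.

```cpp
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

// Minimal sketch: ask for the highest feature level the GPU supports,
// falling back from FL11_0 down to FL9_3-class (D3D9.0c) hardware.
const D3D_FEATURE_LEVEL requested[] = {
    D3D_FEATURE_LEVEL_11_0,
    D3D_FEATURE_LEVEL_10_1,
    D3D_FEATURE_LEVEL_10_0,
    D3D_FEATURE_LEVEL_9_3,
};

ID3D11Device*        device  = nullptr;
ID3D11DeviceContext* context = nullptr;
D3D_FEATURE_LEVEL    granted = {};

HRESULT hr = D3D11CreateDevice(
    nullptr,                       // default adapter
    D3D_DRIVER_TYPE_HARDWARE,
    nullptr, 0,
    requested,
    static_cast<UINT>(sizeof(requested) / sizeof(requested[0])),
    D3D11_SDK_VERSION,
    &device, &granted, &context);

if (SUCCEEDED(hr)) {
    // 'granted' now holds the highest supported level; the renderer
    // restricts its feature use accordingly (e.g. no compute on FL9_3).
}
```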
    Direct3D 12 in turn will also be making use of feature levels, allowing developers to exploit the benefits of the low level nature of the API while being able to target multiple generations of hardware. It’s through this mechanism that Direct3D 12 will be usable on GPUs as old as NVIDIA’s Fermi family or as new as their Maxwell family, all the while still being able to utilize the features added in newer generations.
    Ultimately for users this means they will need to be mindful of feature levels, just as they are today with Direct3D 11. Hardware that is Direct3D 12 compatible does not mean it supports all of the latest feature sets, and keeping track of feature set compatibility for each generation of hardware will still be important going forward.
    11.3 & 12: New Features

Getting to the heart of today’s announcement from Microsoft, we have the newly announced features that will be coming to Direct3D 11.3 and 12. It should be noted that at this point in time this is not an exhaustive list of all of the new features that we will see, and Microsoft is still working to define a new feature level to go with them (in the interim they will be accessed through cap bits), but nonetheless this is our first detailed view of what are expected to be the major new features of 11.3/12.
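As a rough illustration of what "accessed through cap bits" means in practice, the sketch below queries optional capabilities through ID3D11Device::CheckFeatureSupport. Note that the D3D11_FEATURE_D3D11_OPTIONS2 enum and the structure fields shown are assumptions about how the eventual 11.3 headers will expose these bits; they were not finalized at the time of writing.

```cpp
#include <d3d11_3.h>

// Hypothetical sketch: query the new capabilities via cap bits rather than
// a feature level. The OPTIONS2 structure and field names are assumptions.
void QueryNewCaps(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_D3D11_OPTIONS2 opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D11_FEATURE_D3D11_OPTIONS2, &opts, sizeof(opts))))
    {
        bool rovs         = opts.ROVsSupported != 0;
        bool typedUavLoad = opts.TypedUAVLoadAdditionalFormats != 0;
        bool volumeTiled  = opts.TiledResourcesTier >= 3;          // tier 3 = volume tiled
        bool conservative = opts.ConservativeRasterizationTier != 0; // 0 = not supported
        // Enable the corresponding render paths only when the bits are set.
        (void)rovs; (void)typedUavLoad; (void)volumeTiled; (void)conservative;
    }
}
```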
    Rasterizer Ordered Views

First and foremost of the new features is Rasterizer Ordered Views (ROVs). As hinted at by the name, ROVs are focused on giving the developer control over the order that elements are rasterized in a scene, so that elements are drawn in the correct order. This feature specifically applies to Unordered Access Views (UAVs) being generated by pixel shaders, which by their very definition are initially unordered. ROVs offer an alternative to UAVs' unordered nature, which would otherwise result in elements being rasterized simply in the order they were finished. For most rendering tasks unordered rasterization is fine (deeper elements would be occluded anyhow), but for a certain category of tasks having the ability to efficiently control the access order to a UAV is important to correctly render a scene quickly.
    The textbook use case for ROVs is Order Independent Transparency, which allows for elements to be rendered in any order and still blended together correctly in the final result. OIT is not new – Direct3D 11 gave the API enough flexibility to accomplish this task – however these earlier OIT implementations would be very slow due to sorting, restricting their usefulness outside of CAD/CAM. The ROV implementation however could accomplish the same task much more quickly by getting the order correct from the start, as opposed to having to sort results after the fact.
    Along these lines, since OIT is just a specialized case of a pixel blending operation, ROVs will also be usable for other tasks that require controlled pixel blending, including certain cases of anti-aliasing.
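A tiny worked example helps show why ordering matters here: the standard "over" blend used for transparency is not commutative, so blending the same two fragments in a different order produces a different pixel. This is the problem that either sorting (classic OIT) or ROV-enforced ordering has to solve; the snippet below is purely illustrative and not tied to any particular API.

```cpp
#include <cstdio>

// Illustrative sketch: src-over compositing depends on the order of the
// fragments, so writing transparent fragments to a UAV in arbitrary
// completion order gives the wrong result unless the order is controlled.
struct Frag { float color; float alpha; };   // single channel for brevity

float BlendOver(float dst, Frag f) {         // src-over compositing
    return f.color * f.alpha + dst * (1.0f - f.alpha);
}

int main() {
    Frag nearFrag = { 1.0f, 0.5f };          // values chosen purely for illustration
    Frag farFrag  = { 0.2f, 0.5f };
    float background = 0.0f;

    float correct = BlendOver(BlendOver(background, farFrag), nearFrag); // back-to-front
    float wrong   = BlendOver(BlendOver(background, nearFrag), farFrag); // arbitrary order
    std::printf("back-to-front: %.3f  wrong order: %.3f\n", correct, wrong);
    // Prints 0.550 and 0.350: the same fragments, two different pixels.
}
```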
    Typed UAV Load

    The second feature coming to Direct3D is Typed UAV Load. Unordered Access Views (UAVs) are a special type of buffer that allows multiple GPU threads to access the same buffer simultaneously without generating memory conflicts. Because of this disorganized nature of UAVs, certain restrictions are in place that Typed UAV Load will address. As implied by the name, Typed UAV Load deals with cases where UAVs are data typed, and how to better handle their use.
    Volume Tiled Resources

    The third feature coming to Direct3D is Volume Tiled Resources. VTR builds off of the work Microsoft and partners have already done for tiled resources (AKA sparse allocation, AKA hardware megatexture) by extending it into the 3rd dimension.
    VTRs are primarily meant to be used with volumetric pixels (voxels), with the idea being that with sparse allocation, volume tiles that do not contain any useful information can avoid being allocated, avoiding tying up memory in tiles that will never be used or accessed. This kind of sparse allocation is necessary to make certain kinds of voxel techniques viable.
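As a plain CPU-side illustration of the sparse allocation idea (this is not the Direct3D tiled resources API, just a hypothetical sketch), a voxel volume can be split into fixed-size tiles where a tile only receives backing memory the first time something is written into it:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Illustrative sketch: a sparsely allocated voxel volume. A tile is only
// backed by memory once something is written into it, so empty space costs
// nothing, which is the point of volume tiled resources.
// Coordinates are assumed non-negative for brevity.
constexpr int kTile = 32;                      // 32x32x32 voxels per tile

struct TileKey {
    int x, y, z;
    bool operator==(const TileKey& o) const { return x == o.x && y == o.y && z == o.z; }
};
struct TileKeyHash {
    size_t operator()(const TileKey& k) const {
        return std::hash<int64_t>()(((int64_t)k.x << 40) ^ ((int64_t)k.y << 20) ^ k.z);
    }
};

class SparseVolume {
public:
    void Write(int x, int y, int z, uint8_t value) {
        TileKey key{ x / kTile, y / kTile, z / kTile };
        auto& tile = tiles_[key];              // tile allocated on first touch
        if (tile.empty()) tile.resize((size_t)kTile * kTile * kTile, 0);
        tile[Index(x, y, z)] = value;
    }
    uint8_t Read(int x, int y, int z) const {
        auto it = tiles_.find(TileKey{ x / kTile, y / kTile, z / kTile });
        return it == tiles_.end() ? 0 : it->second[Index(x, y, z)];
    }
    size_t AllocatedTiles() const { return tiles_.size(); }
private:
    static size_t Index(int x, int y, int z) {
        return (size_t)(x % kTile) + (size_t)kTile * ((y % kTile) + (size_t)kTile * (z % kTile));
    }
    std::unordered_map<TileKey, std::vector<uint8_t>, TileKeyHash> tiles_;
};
```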
    Conservative Rasterization

Last but certainly not least among Direct3D’s new features will be conservative rasterization. Conservative rasterization is essentially a more accurate but more performance intensive way of figuring out whether a polygon covers part of a pixel. Instead of doing a quick and simple test to see if the center of the pixel is bounded by the lines of the polygon, conservative rasterization checks whether the polygon covers any part of the pixel by testing it against the corners of the pixel. This means that conservative rasterization will catch cases where a polygon was too small to cover the center of a pixel, which results in a more accurate outcome, be it better identifying the pixels a polygon resides in, or finding polygons too small to cover the center of any pixel at all. This in turn is where the “conservative” aspect of the name comes from, as the rasterizer conservatively includes every pixel touched by a triangle, as opposed to just the pixels where the triangle covers the center point.
    Conservative rasterization is being added to Direct3D in order to allow new algorithms to be used which would fail under the imprecise nature of point sampling. Like VTR, voxels play a big part here as conservative rasterization can be used to build a voxel. However it also has use cases in more accurate tiling and even collision detection.
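For the curious, here is a simplified CPU-side sketch of the difference between the two coverage tests (illustrative only, not NVIDIA's or Microsoft's implementation). It uses standard edge functions: the center test checks the pixel center against each edge, while the conservative test pushes each edge outward by the pixel's half-extent so that any pixel the triangle touches at all is counted.

```cpp
#include <cmath>
#include <cstdio>

struct Edge { float a, b, c; };              // E(x,y) = a*x + b*y + c

// Edge function for the directed edge (x0,y0) -> (x1,y1). For a
// counter-clockwise triangle, E >= 0 on the interior side.
Edge MakeEdge(float x0, float y0, float x1, float y1) {
    return { y0 - y1, x1 - x0, x0 * y1 - x1 * y0 };
}

// Standard rasterization: is the pixel *center* inside the triangle?
bool CoversCenter(const Edge e[3], float cx, float cy) {
    for (int i = 0; i < 3; ++i)
        if (e[i].a * cx + e[i].b * cy + e[i].c < 0.0f) return false;
    return true;
}

// Conservative rasterization (simplified): shift each edge outward by the
// half-extent of a 1x1 pixel along the edge normal, so the pixel counts as
// covered if the triangle touches any part of its square.
bool CoversConservative(const Edge e[3], float cx, float cy) {
    for (int i = 0; i < 3; ++i) {
        float slack = 0.5f * (std::fabs(e[i].a) + std::fabs(e[i].b));
        if (e[i].a * cx + e[i].b * cy + e[i].c + slack < 0.0f) return false;
    }
    return true;
}

int main() {
    // A tiny triangle inside pixel (0,0) that misses the pixel center (0.5, 0.5).
    Edge e[3] = { MakeEdge(0.1f, 0.1f, 0.4f, 0.1f),
                  MakeEdge(0.4f, 0.1f, 0.1f, 0.4f),
                  MakeEdge(0.1f, 0.4f, 0.1f, 0.1f) };
    std::printf("center test: %d, conservative: %d\n",
                CoversCenter(e, 0.5f, 0.5f), CoversConservative(e, 0.5f, 0.5f));
}
```

In this example the triangle sits entirely inside the pixel's square but misses its center, so the center test rejects the pixel while the conservative test correctly flags it as covered.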
    Final Words

    Wrapping things up, today’s announcement of Direct3D 11.3 and its new features offers a solid roadmap for both the evolution of Direct3D and the hardware that will support it. By confirming that they are continuing to work on Direct3D 11 Microsoft has answered one of the lingering questions surrounding Direct3D 12 – what happens to Direct3D 11 – and at the same time this highlights the hardware features that the next generation of hardware will need to support in order to be compliant with the latest D3D feature level. And with Direct3D 12 set to be released sometime next year, these new features won’t be too far off either.


    More...

  6. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #4346

    Anandtech: Acer Releases XBO Series: 28-inch UHD/4K with G-Sync for $800

    Monitors are getting exciting. Not only are higher resolution panels becoming more of the norm, but the combination of different panel dimensions and feature sets means that buying the monitor you need for the next 10 years is getting more difficult. Today Acer adds some spice to the mix by announcing pre-orders for the XB280HK – a 28-inch TN monitor with 3840x2160 resolution that also supports NVIDIA’s G-Sync to reduce tearing and stuttering.
Adaptive frame rate technologies are still in the early phases of adoption by the majority of users. AMD’s FreeSync is still a few quarters away from the market, and NVIDIA’s G-Sync requires an add-in card which started off as an interesting, if expensive, monitor upgrade. Fast forward a couple of months and, as you might expect, the best place for G-Sync to go is into some of the more impressive monitor configurations. 4K is becoming a go-to resolution for anyone with deep enough wallets, although some might argue that the 21:9 monitors might be better for gaming immersion at least.
The XB280HK will support 3840x2160 at 60 Hz via DisplayPort 1.2, along with a 1 ms gray-to-gray response time and a fixed frequency up to 144 Hz. The stand adjusts up to 155mm in height with 40º of tilt, and there is also 120º of swivel and a full quarter turn of pivot, allowing for portrait style use. The brightness of the panel is rated at 300 cd/m2, with an 8 bit+HiFRC TN display that has a typical contrast ratio of 1000:1 and 72% NTSC coverage. A 100x100mm VESA mount is supported and the monitor includes a USB 3.0 hub, although there are no speakers.
    The XB280HK is currently available for pre-order in the UK at £500, but will have a US MSRP of $800. Also part of the Acer XBO range is the XB270H, a 27-inch 1920x1080 panel with G-Sync with an MSRP of $600. Expected release date, according to the pre-orders, should be the 3rd of October.
    Source: Acer
    Gallery: Acer Releases XBO Series: 28-inch UHD/4K with G-Sync for $800



    More...

  7. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #4347

    Anandtech: Short Bytes: NVIDIA GeForce GTX 980 in 1000 Words

    To call the launch of NVIDIA's Maxwell GM204 part impressive is something of an understatement. You can read our full coverage of the GTX 980 for the complete story, but here's the short summary. Without the help of a manufacturing process shrink, NVIDIA and AMD are both looking at new ways to improve performance. The Maxwell architecture initially launched earlier this year with GM107 and the GTX 750 Ti and GTX 750, and with it we had our first viable mainstream GPU of the modern era that could deliver playable frame rates at 1080p while using less than 75W of power. The second generation Maxwell ups the ante by essentially tripling the CUDA core count of GM107, all while adding new features and still maintaining the impressive level of efficiency.
    It's worth pointing out that "Big Maxwell" (or at least "Bigger Maxwell") is enough of a change that NVIDIA has bumped the model numbers from the GM100 series to GM200 series this round. NVIDIA has also skipped the desktop 800 line completely and is now in the 900 series. Architecturally, however, there's enough change going into GM204 that calling this "Maxwell 2" is certainly warranted.
NVIDIA is touting a 2X performance per Watt increase over GTX 680, and they've delivered exactly that. Through a combination of architectural and design improvements, NVIDIA has moved from 192 CUDA cores per SMX in Kepler to 128 CUDA cores per SMM in Maxwell, and a single SMM is still able to deliver around 90% of the performance of an SMX at equivalent clocks. Put another way, NVIDIA says the new Maxwell 2 architecture is around 40% faster per CUDA core than Kepler. What that means in terms of specifications is that GM204 only needs 2048 CUDA cores to compete with – and generally surpass! – the performance of GK110 with its 2880 CUDA cores, which is used in the GeForce GTX 780 Ti and GTX Titan cards.
    In terms of new features, some of the changes with GM204 come on the software/drivers side of things while other features have been implemented in hardware. Starting with the hardware side, GM204 now implements the full set of D3D 11.3/D3D 12 features, where previous designs (Kepler and Maxwell 1) stopped at full Feature Level 11_0 with partial FL 11_1. The new features include Rasterizer Ordered Views, Typed UAV Load, Volume Tiled Resources, and Conservative Rasterization. Along with these, NVIDIA is also adding hardware features to accelerate what they're calling VXGI – Voxel accelerated Global Illumination – a forward-looking technology that brings GPUs one step closer to doing real-time path tracing. (NVIDIA has more details available if you're interested in learning more).
NVIDIA also has a couple of new techniques to improve anti-aliasing, Dynamic Super Resolution (DSR) and Multi-Frame Anti-Aliasing (MFAA). DSR essentially renders a game at a higher resolution and then down-sizes the result to your native resolution using a high-quality 13-tap Gaussian filter. It's similar to super sampling, but the great benefit of DSR over SSAA is that the game doesn't need any knowledge of DSR; as long as the game can support higher resolutions, NVIDIA's drivers take care of all of the work behind the scenes. MFAA (please, no jokes about "mofo AA") is supposed to offer essentially the same quality as 4x MSAA with the performance hit of 2x MSAA, through a combination of custom filters and looking at previously rendered frames. MFAA can also function in a 4x AA mode to provide an alternative to 8x MSAA.
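As a rough sketch of the DSR idea (not NVIDIA's actual filter; the 13-tap kernel is replaced with a short 5-tap one purely to keep the example readable), downscaling a supersampled image with a separable Gaussian looks something like this:

```cpp
#include <vector>

// Illustrative sketch of DSR-style downscaling: the scene is rendered at 2x
// the native resolution, then filtered down with a separable Gaussian.
static const float kKernel[5] = { 1.f/16, 4.f/16, 6.f/16, 4.f/16, 1.f/16 };

// Downscale a single-channel image by 2x in each dimension.
std::vector<float> DownscaleGaussian2x(const std::vector<float>& src, int w, int h)
{
    auto clampi = [](int v, int lo, int hi) { return v < lo ? lo : (v > hi ? hi : v); };
    int ow = w / 2, oh = h / 2;

    // Horizontal blur + decimate.
    std::vector<float> tmp((size_t)ow * h, 0.f);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < ow; ++x) {
            float s = 0.f;
            for (int k = -2; k <= 2; ++k)
                s += kKernel[k + 2] * src[(size_t)y * w + clampi(2 * x + k, 0, w - 1)];
            tmp[(size_t)y * ow + x] = s;
        }

    // Vertical blur + decimate.
    std::vector<float> dst((size_t)ow * oh, 0.f);
    for (int y = 0; y < oh; ++y)
        for (int x = 0; x < ow; ++x) {
            float s = 0.f;
            for (int k = -2; k <= 2; ++k)
                s += kKernel[k + 2] * tmp[(size_t)clampi(2 * y + k, 0, h - 1) * ow + x];
            dst[(size_t)y * ow + x] = s;
        }
    return dst;
}
```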
    The above is all well and good, but what really matters at the end of the day is the actual performance that GM204 can offer. We've averaged results from our gaming benchmarks at our 2560x1440 and 1920x1080 settings, as well as our compute benchmarks, with all scores normalized to the GTX 680. Here's how the new GeForce GTX 980 compares with other GPUs. (Note that we've omitted the overclocking results for the GTX 980, as it wasn't tested across all of the games, but on average it's around 18% faster than the stock GTX 980 while consuming around 20% more power.)
    Wow. Obviously there's not quite as much to be gained by running such a fast GPU at 1920x1080, but at 2560x1440 we're looking at a GPU that's a healthy 74% faster on average compared to the GTX 680. Perhaps more importantly, the GTX 980 is also on average 8% faster than the GTX 780 Ti and 13.5% faster than AMD's Radeon R9 290X (in Uber mode, as that's what most shipping cards use). Compute performance sees some even larger gains over previous NVIDIA GPUs, with the 980 besting the 680 by 132%; it's also 16% faster than the 780 Ti but "only" 1.5% faster than the 290X – though the 290X still beats the GTX 980 in Sony Vegas Pro 12 and SystemCompute.
    If we look at the GTX 780 Ti, on the one hand performance hasn't improved so much that we'd recommend upgrading, though you do get some new features that might prove useful over time. For those that didn't find the price/performance offered by GTX 780 Ti a compelling reason to upgrade, the GTX 980 sweetens the pot by dropping the MSRP down to $549, and what's more it also uses quite a bit less power:
    This is what we call the trifecta of graphics hardware: better performance, lower power, and lower prices. When NVIDIA unveiled the GTX 750 Ti back in February, it achieved the same trifecta for the $150 market segment, but it seemed almost too much to hope for a repeat in the high performance GPU arena. NVIDIA doesn't disappoint, however, dropping power consumption by 18% relative to the GTX 780 Ti while improving performance by roughly 10% and dropping the launch price by just over 20%. If you've been waiting for a reason to upgrade, GeForce GTX 980 is about as good as it gets, though the much less expensive GTX 970 might just spoil the party. We'll have a look at the 970 next week.


    More...

  8. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #4348

    Anandtech: Samsung Acknowledges the SSD 840 EVO Read Performance Bug - Fix Is on the Way

During the last couple of weeks, numerous reports of the Samsung SSD 840 and 840 EVO having low read performance have surfaced around the Internet. The most extensive one is probably a forum thread over at Overclock.net, which was started about a month ago and currently has over 600 replies. For those who are not aware of the issue, there is a bug in the 840 EVO that causes the read performance of old blocks of data to drop dramatically, as the HD Tach graph below illustrates. The odd part is that the bug only seems to affect LBAs that have old data (>1 month) associated with them, because freshly written data will read at full speed, which also explains why the issue was not discovered until now.
    Source: @p_combe
    I just got off the phone with Samsung and the good news is that they are aware of the problem and have presumably found the source of it. The engineers are now working on an updated firmware to fix the bug and as soon as the fix has been validated, the new firmware will be distributed to end-users. Unfortunately there is no ETA for the fix, but obviously it is in Samsung's best interest to provide it as soon as possible.
I do not have any further details about the nature of the bug at this point, but we will be getting more details early next week, so stay tuned. It is a good sign that Samsung acknowledges the bug and that a fix is in the works, but for now I would advise against buying the 840 EVO until there is a resolution for the issue.


    More...

  9. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #4349

    Anandtech: NVIDIA 344.11 Drivers Available

    In the crazy rush to wrap up the GeForce GTX 980 review in time for the NDA lift yesterday, news of the first R343 driver release may have been lost in the shuffle. This is a full WHQL driver release from NVIDIA, and it's available for Windows 8.1, 7, Vista, and even XP (though I don't know what you'd be doing with a modern GPU on XP at this point). Notebooks also get the new drivers, though only for Windows 7 and 8 it seems. You can find the updates at the usual place, or they're also available through GeForce Experience (which has also been updated to version 2.1.2.0 if you're wondering).
    In terms of what the driver update provides, this is the Game Ready driver for Borderlands: The Pre-Sequel, The Evil Within, F1 2014, and Alien: Isolation – all games that are due to launch in early to mid-October. Of course this is also the publicly available driver for the GeForce GTX 980 and GTX 970, which are apparently selling like hotcakes based on the number of "out of stock" notifications we're seeing (not to mention some hefty price gouging on the GTX 970 and GTX 980).
    The drivers also enable NVIDIA's new DSR (Dynamic Super Resolution), with hooks for individual games available in the Control Panel->Manage 3D Settings section. It's not clear whether DSR will be available for other GPUs, but it's definitely not enabled on my GTX 780 right now and I suspect it will be limited to the new Maxwell GM204 GPUs for at least a little while.
    There are a host of other updates, too numerous to go into, but you can check the release notes for additional information.


    More...

  10. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #4350

    Anandtech: DisplayPort Alternate Mode for USB Type-C Announced - Video, Power, & Data

Earlier this month the USB Implementers Forum announced the new USB Power Delivery 2.0 specification. Long awaited, the Power Delivery 2.0 specification defined new standards for power delivery to allow Type-C USB ports to supply devices with much greater amounts of power than the previous standard allowed, now up to 5A at 5V, 12V, and 20V, for a maximum power delivery of 100W. However, also buried in that specification was an interesting, if cryptic, announcement regarding USB Alternate Modes, which would allow different (non-USB) signals to be carried over the USB Type-C connector. At the time the specification simply theorized just what protocols could be carried over Type-C as an alternate mode, but today we finally know what the first alternate mode will be: DisplayPort.
    Today the VESA is announcing that they are publishing the “DisplayPort Alternate Mode on USB Type-C Connector Standard.” Working in conjunction with the USB-IF, the DP Alt Mode standard will allow standard USB Type-C connectors and cables to carry native DisplayPort signals. This is designed to open up a number of possibilities for connecting monitors, computers, docking stations, and other devices with DisplayPort video while also leveraging USB’s other data and power capabilities. With USB 3.1 and Type-C the USB-IF was looking to create a single cable that could carry everything, and now that DisplayPort can be muxed over Type-C, USB is one step closer to that with the ability to carry native video.
    The Tech & The Spec

    From a technical level the DP Alt Mode specification is actually rather simple. USB Type-C – which immediately implies using/supporting USB 3.1 signaling – uses 4 lanes (pairs) of differential signaling for USB Superspeed data, which are split up in a 2-up/2-down configuration for full duplex communication. Through the Alt Mode specification, DP Alt Mode will then in turn be allowed to take over some of these lanes – one, two, or all four – and run DisplayPort signaling over them in place of USB Superspeed signaling. By doing so a Type-C cable is then able to carry native DisplayPort video alongside its other signals, and from a hardware standpoint this is little different than a native DisplayPort connector/cable pair.
    From a hardware perspective this will be a simple mux. USB alternate modes do not encapsulate other protocols (ala Thunderbolt) but instead allocate lanes to those other signals as necessary, with muxes at either end handling the switching to determine what signals are on what lanes and where they need to come from or go. Internally USB handles this matter via the CC sense pins, which are responsible for determining cable orientation. Alongside determining orientation, these pins will also transmit a Standard IDentification (SID), which will be how devices negotiate which signals are supported and which signals to use. After negotiation, the devices at either end can then configure themselves to the appropriate number of lanes and pin orientation.
    Along with utilizing USB lanes for DP lanes, the DP Alt Mode standard also includes provisions for reconfiguring the Type-C secondary bus (SBU) to carry the DisplayPort AUX channel. This half-duplex channel is normally used by DisplayPort devices to carry additional non-video data such as audio, EDID, HDCP, touchscreen data, MST topology data, and more. Somewhat perversely in this case, the AUX channel has even been used to carry USB data, which dutifully enough would still be supported here for backwards compatibility purposes.
Since the main DisplayPort lanes and AUX channel can be carried over Type-C, when utilized in this fashion Type-C comes very close to becoming a superset of DisplayPort. In a full (4 lane) DisplayPort configuration, along with all of the regular DisplayPort features a Type-C cable also carries the standard USB 2.0 interface and USB power, which always coexist alongside alt mode. So even in these configurations Type-C allows dedicated high power and USB 2.0 functionality, something the DisplayPort physical layer itself is not capable of. And of course when using a less-than-full configuration, 2-3 of those lanes on the Type-C cable can be left running USB Superspeed signaling, allowing USB 3.1 data to be carried alongside the narrower DisplayPort signal.
    Meanwhile since DP Alt Mode means that Type-C carries native DisplayPort signaling, this enables several different interoperability options with other Type-C devices and legacy DisplayPort devices. On the hardware side Type-C ports can be used for the sink (displays) as well as the source (computers), so one could have a display connected to a source entirely over Type-C. Otherwise simple Type-C to DisplayPort cables can be constructed which from the perspective of a DisplayPort sink would be identical to a real DisplayPort cable, with the cable wired to expose just the DisplayPort signals to the sink. Or since these cables will be bidirectional, a legacy DisplayPort source could be connected to a Type-C sink just as well.
This also means that, since DP Alt Mode is such a complete implementation of DisplayPort, DisplayPort conversion devices will work as well. DisplayPort to VGA, DVI, and even HDMI 2.0 adapters will all work at the end of a Type-C connection, and the VESA will be strongly encouraging cable makers to develop Type-C to HDMI 2.0 cables (and only HDMI 2.0, no 1.4) to make Type-C ports usable with HDMI devices. In fact the only major DisplayPort feature that won’t work over a Type-C connector is Dual-Mode DisplayPort (aka DP++), which is responsible for enabling passive DisplayPort adapters. So while adapters work over Type-C, all of them will need to be active adapters.
From a cabling standpoint DP Alt Mode will have similar allowances and limitations as USB over Type-C, since it inherits the physical layer. DisplayPort 1.3’s HBR3 mode will be supported, but like USB’s Superspeed+ (10Gbps) mode this is officially only specified to work on cables up to 1M in length, while at up to 2M in length DisplayPort 1.2’s HBR2 mode can be used. Meanwhile DP Alt Mode is currently only defined to work on passive USB cables, with the VESA seemingly picking their words carefully on the use of “currently.”
    The Ecosystem & The Future

    Because of the flexibility offered through the DP Alt Mode, the VESA and USB-IF have a wide range of options and ideas for how to make use of this functionality, with these ideas ultimately converging on a USB/DisplayPort ecosystem. With the ability to carry video data over USB, this allows for devices that make use of both in a fashion similar to Thunderbolt or DockPort, but with the greater advantage of the closer cooperation of the USB-IF and the superior Type-C physical layer.
    At its most basic level, DP Alt Mode means that device manufacturers would no longer need to put dedicated display ports (whether DisplayPort, VGA, or HDMI) on their devices, and could instead fill out their devices entirely with USB ports for all digital I/O. This would be a massive boon to Ultrabooks and tablets, where the former only has a limited amount of space for ports and the latter frequently only has one port at all. To that end there will even be a forthcoming identification mark (similar to DP++) that will be used to identify Type-C ports that are DP Alt Mode capable, to help consumers identify which ports they can plug their displays into. The MUX concept is rather simple for hardware but I do get the impression that devices with multiple Type-C ports will only enable it on a fraction of their ports, hence the need for a logo for consumers to identify these ports. But we’ll have to see what shipping devices are like.
    More broadly, this could be used to enable single-cable connectivity for laptops and tablets, with a single Type-C cable providing power to the laptop/tablet while also carrying input, audio, video, additional USB data, and more. This would be very similar to the Thunderbolt Display concept, except Type-C would be able to be a true single cable solution since it can carry the high-wattage power that Thunderbolt can’t. And since Type-C can carry DisplayPort 1.3 HBR3, this means that even when driving a 4K@60Hz display there will still be 2 lanes of USB Superspeed+ available for any devices attached to the display. More likely though we’ll see this concept first rolled out in dock form, with a single dock device connecting to an external monitor and otherwise serving as the power/data hub for the entire setup.
Speaking of which, this does mean that USB via DP Alt Mode will be competing more directly with other standards such as Thunderbolt and DockPort. Thunderbolt development will of course be an ongoing project for Intel; for DockPort, however, this is basically the end of the road. The standard, originally developed by AMD and TI before being adopted by the VESA, will continue on as-is and will continue to be supported over the DisplayPort physical layer as before. However it’s clear from today’s announcement that DisplayPort over USB has beaten USB over DisplayPort as the preferred multi-signal cabling solution, leaving DockPort with a limited lifespan on the market.
    It’s interesting to note though that part of the reason DP Alt Mode is happening – and why it’s going to surpass DockPort – is because of the Type-C physical layer. In designing the Type-C connector and cabling, the USB-IF has specific intentions of having the Type-C connector live for a decade or more, just like USB Type-A/B before it. That means they’ve done quite a bit of work to future-proof the connector, including plenty of pins with an eye on supporting speeds greater than 10Gbps in the future.
For that reason the possibility is on the table of ditching the DisplayPort physical layer entirely and relying solely on Type-C. Now to be clear this is just an option the technology enables, but for a number of reasons it would be an attractive option for the VESA. As it stands the DisplayPort physical layer tops out at 8.1Gbps per lane for HBR3, while Superspeed+ over Type-C tops out at 10Gbps with the design goal of further bandwidth increases. As the complexity and development costs of faster external buses go up, one could very well see the day where DisplayPort is merely the protocol and signaling standard for monitors while Type-C is the physical layer, especially since DisplayPort and USB Superspeed are so similar in the first place, both using 4 lanes of differential signaling. But this is a more distant possibility; for now the DP Alt Mode ecosystem needs to take off for the kinds of mobile devices it’s designed for, and only then would anyone be thinking about replacing the DisplayPort physical layer entirely.
    Wrapping things up, the VESA tells us that they are going to hit the ground running on DP Alt Mode and are seeing quite a bit of excitement from manufacturers. The VESA is expecting the first DP Alt Mode capable devices to appear in 2015, which is the same year Type-C ports begin appearing on devices as well. So if everything goes according to schedule, we should see the first DP Alt Mode devices in just over a year.
The all-in-one cable concept has been a long time coming, and with DockPort and Thunderbolt having stumbled, the market does look ripe for DP Alt Mode, so long as the execution is there, manufacturers are willing to use it, and device compatibility lives up to the promises. Getting video over USB is the ultimate Trojan horse – unlike mDP, USB is already everywhere and will continue to be – so this may very well be the X factor needed to see widespread adoption where other standards have struggled.


    More...
