Results 5,571 to 5,580 of 12095

Thread: Anandtech News

  1. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,807
    Post Thanks / Like
    #5571

    Anandtech: HP Z27q Monitor Review: Aiming For More Pixels

    Almost a year ago, we reviewed the HP Z27x monitor, which was a 27-inch display capable of covering a very wide gamut. It had a reasonable 2560x1440 resolution, which was pretty common for this size of display. But at CES 2015, HP announced the HP Z27q monitor, which takes a step back on gamut and manageability, but takes two steps forward with resolution. The HP Z27q is a ‘5K’ display, which means it has an impressive 5120x2880 resolution. This easily passes the UHD or ‘4K’ levels which are becoming more popular. The HP Z27q is one of a handful of 5K displays on the market now, and HP came in with a pretty low launch price of $1300. When I say pretty low, it’s of course relative to the other 5K displays in the market, but it undercuts the Dell UP2715K by several hundred dollars, even today.
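    For a sense of scale, here is a quick back-of-the-envelope pixel-count comparison (a minimal Python sketch; the resolutions are the standard figures and the script itself is purely illustrative):
```python
# Pixel counts: the Z27q's 5K panel versus common desktop resolutions.
five_k = 5120 * 2880            # 14,745,600 pixels (~14.7 MP)
others = {
    "2560x1440 (QHD)": 2560 * 1440,       # ~3.7 MP
    "3840x2160 (UHD '4K')": 3840 * 2160,  # ~8.3 MP
}

print(f"5120x2880 (5K): {five_k / 1e6:.1f} MP")
for name, px in others.items():
    print(f"{name}: {px / 1e6:.1f} MP -> 5K has {five_k / px:.2f}x as many pixels")
# -> 4.00x the pixels of QHD and ~1.78x the pixels of UHD '4K'
```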

    More...

  2. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,807
    Post Thanks / Like
    #5572

    Anandtech: NVIDIA Releases 361.43 WHQL Game Ready Driver

    The holiday season is fast approaching its peak, and the stream of driver updates isn’t letting up. NVIDIA has brought us a stocking loaded with bug fixes and even a couple of new features.
    There are quite a few resolved issues this time around. The first of note is an issue with hot unplugging a display from an output, which caused any display hot plugged in afterward to be ignored. There were also issues with Star Wars Battlefront players on SLI-enabled systems experiencing lag after updating to driver 359.06. Lastly, there was an issue covered over at PC Perspective a couple of months ago, where Maxwell-based cards (GM20x) were found to rise from idle clock speeds to keep up with the output bandwidth required for refresh rates above 120Hz, leading to more power draw and more noise from the system.
    This driver update is also the first release under the R361 branch. As a major version change, this update carries a larger number of changes than the usual updates we see. Along with the resolved issues above, there are some notable feature changes this time around. First on the list is added WDDM 2.0 support for Fermi-based GPUs. Unfortunately, WDDM 2.0 is only enabled for single-GPU setups; SLI users will have to wait a little longer. This is also a good time to note that DX12 support for Fermi is not yet enabled, though it will come in a future update (more on this later today).
    With Fermi out of the way, professional users may be intrigued to hear that, through GameWorks VR 1.1, NVIDIA has enabled VR SLI support for OpenGL, which during NVIDIA’s internal testing delivered a claimed 1.7x scaling from one GPU to two. While obviously a case of diminishing returns, that is still a large enough performance gain to allow much more complexity in one’s workflow, or to take a job with unbearable performance to a much more pleasant framerate.
    Anyone interested can download the updated drivers through GeForce Experience or on the NVIDIA driver download page.



    More...

  3. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,807
    Post Thanks / Like
    #5573

    Anandtech: G.Skill Introduces 64GB DDR4-3200 Memory Kits

    Not so long ago, enthusiasts building high-end personal computers had to choose between capacity and performance for their memory sub-systems. This year G.Skill, Corsair and a number of other makers of advanced memory modules introduced 16GB unbuffered DDR4 DIMMs capable of working at high clock-rates, marrying performance and capacity. G.Skill recently announced the industry’s first 64GB DDR4 memory kits that can operate at DDR4-3200 speeds.
    G.Skill’s new 16GB DDR4 memory modules are rated to function at 3200 MT/s with CL14 14-14-35 or CL15 15-15-35 latency settings at 1.35V, which is higher than the industry-standard 1.2V. The modules are based on G.Skill’s printed circuit boards designed for high clock-rates as well as Samsung’s 8Gb memory chips made using 20nm fabrication technology. Such DRAM devices offer both high capacities and high frequencies. The new modules will be sold as 32GB and 64GB memory kits under the Trident Z and Ripjaws V brands. Both product families come with efficient aluminum heat spreaders.
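    For a rough idea of what DDR4-3200 means in bandwidth terms, here is a small illustrative Python sketch (standard DDR bus-width arithmetic; the single/dual/quad channel counts are assumptions for typical Z170 and X99 builds, not G.Skill figures):
```python
# Theoretical peak bandwidth of DDR4: transfer rate (MT/s) x 8 bytes per 64-bit channel.
def ddr4_peak_bandwidth_gbs(mt_per_s: int, channels: int = 1) -> float:
    bytes_per_transfer = 64 // 8            # 64-bit channel = 8 bytes per transfer
    return mt_per_s * bytes_per_transfer * channels / 1000  # GB/s (decimal)

for channels in (1, 2, 4):
    print(f"DDR4-3200, {channels} channel(s): {ddr4_peak_bandwidth_gbs(3200, channels):.1f} GB/s peak")
# -> 25.6 GB/s per channel, 51.2 GB/s dual-channel, 102.4 GB/s quad-channel
```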
    The new 16GB DDR4 memory modules from G.Skill feature XMP 2.0 profiles in their SPD (serial presence detect) chips, and hence can automatically set their maximum clock-rates on supporting platforms.
    G.Skill officially claims that the new 16GB memory modules were validated on the Intel Core i7-6700K central processing unit and the ASUS Z170 Deluxe motherboard. Nonetheless, the new quad-channel 64GB kits consisting of four modules should also be compatible with advanced Intel X99-based motherboards running multi-core Intel Core i7 “Haswell-E” processors thanks to XMP 2.0 technology.
    Earlier this year G.Skill demonstrated a 128GB DDR4 memory kit — consisting of eight 16GB modules — running at DDR4-3000 with CL14 14-14-35 timings on the Intel Core i7-5960X processor and the ASUS Rampage V Extreme motherboard.
    It is not an easy task to build high-capacity memory modules (e.g., 16GB, 32GB, etc.) capable of working at high frequencies. Server-class registered DIMMs use over 16 memory ICs (integrated circuits) or specially packaged memory chips along with buffers that enable flawless operation of such modules. RDIMMs work at default frequencies, but can barely be overclocked. Previous-generation 8Gb memory chips produced using older, larger manufacturing nodes were only moderate overclockers. Samsung’s 20nm 8Gb memory chips can operate at high clock-rates and are used to build high-capacity memory modules for PCs and servers.
    G.Skill’s 64GB DDR4 kit rated to operate at DDR4-3200 with CL15 15-15-35 timings will cost $499.99 in the U.S. The 64GB DDR4-3200 kit with CL14 14-14-34 latency settings will be priced at $579.99.
    Gallery: G.Skill Introduces 64GB DDR4 Memory Kits with 3.2GHz Frequency




    More...

  4. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,807
    Post Thanks / Like
    #5574

    Anandtech: Club3D Releases Their DisplayPort 1.2 to HDMI 2.0 Adapter: The Real McCoy

    Though we don't typically cover adapter news, this one is worth a special exception. Late last month Club3D announced their DisplayPort 1.2 to HDMI 2.0 adapter, and since then there has been some confusion over just what their adapter actually supports - a problem brought on by earlier adapters on the market that essentially only supported a subset of the necessary HDMI 2.0 specification. As a result Club3D sent over a second note last week more explicitly calling out what their adapter can do, and that yes, it supports HDMI 2.0 with full 4:4:4 chroma subsampling.
    But before we get too far ahead of ourselves, perhaps it's best we start with why these adapters are necessary in the first place. While 4K TVs are becoming increasingly prevalent and cheap, it's only in the last 18 months that the HDMI 2.0 standard has really hit the market, and with it the ability to drive enough bandwidth for full quality uncompressed 4K@60Hz operation. Somewhat frustratingly from a PC perspective, PCs have been able to drive 4K displays for some time now, and the de facto PC-centric DisplayPort standard has offered the necessary bandwidth for a few years, since DisplayPort 1.2. However, with DisplayPort almost never appearing on TVs, there have been few good options for driving 4K TVs at both full quality and 60Hz.
    An interim solution - and where some of Club3D's promotional headaches come from - has been to use slower HDMI 1.4 signaling to drive these displays, using Chroma Subsampling to reduce the amount of color information presented, and as a result reducing the bandwidth requirements to fit within HDMI 1.4's abilities. While chroma subsampling suffices in movies and television, as it has for decades, it degrades desktop environments significantly, and can render some techniques such as subpixel text rendering useless.
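    To make the bandwidth argument concrete, here is a rough calculation as an illustrative Python sketch (it assumes the commonly cited 594MHz CTA pixel clock for 3840x2160@60 and nominal link rates with 8b/10b coding overhead; exact figures vary with timings):
```python
# Why 4K60 4:4:4 needs HDMI 2.0 (or DP 1.2), while HDMI 1.4 only fits 4:2:0.
PIXEL_CLOCK_4K60 = 594e6   # Hz, standard CTA timing for 3840x2160@60 (includes blanking)

def video_rate_gbps(bits_per_pixel: float) -> float:
    return PIXEL_CLOCK_4K60 * bits_per_pixel / 1e9

# Approximate payload rates after 8b/10b coding overhead.
links = {
    "HDMI 1.4 (10.2 Gbps TMDS)": 10.2 * 8 / 10,
    "HDMI 2.0 (18 Gbps TMDS)":   18.0 * 8 / 10,
    "DP 1.2 HBR2 (21.6 Gbps)":   21.6 * 8 / 10,
}

for fmt, bpp in (("4:4:4, 8bpc", 24), ("4:2:0, 8bpc", 12)):
    need = video_rate_gbps(bpp)
    print(f"4K60 {fmt}: needs ~{need:.1f} Gbps")
    for link, have in links.items():
        print(f"  {link}: ~{have:.1f} Gbps payload -> {'fits' if have >= need else 'does not fit'}")
```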
    Meanwhile HDMI 2.0 support has been slow to reach PC video cards. NVIDIA offered it on the high-end the soonest with the Maxwell 2 family - though taking some time to trickle down to lower price HTPC-class video cards - while AMD missed out entirely as their initial plans for HDMI 2.0 were scratched alongside any planned 20nm GPUs. Thankfully PC video cards have supported DisplayPort 1.2 for quite some time, so DisplayPort to HDMI adapters were always an option.
    However early DisplayPort 1.2 to HDMI 2.0 adapters were in reality using HDMI 1.4 signaling and chroma subsampling to support 4K@60Hz at reduced image quality. As the necessary controllers were not yet on the market this was making the best of a bad situation, but it was not helped by the fact that many of these adapters were labeled HDMI 2.0 without supporting HDMI 2.0's full bandwidth. So with the release of the first proper HDMI 2.0 adapters, this has led to some confusion.
    And that brings us to Club3D's DisplayPort 1.2 to HDMI 2.0 adapter, the first such full HDMI 2.0 adapter to reach the market. Club3D's adapter should allow any DP 1.2 port to be turned into an HDMI 2.0 port with full support for 4K60p at full image quality with 4:4:4 chroma. After the releases of pseudo-HDMI 2.0 adapters over the last several months, this is finally the real McCoy for HDMI 2.0 adapters.
    The key here today is that unlike those early pseudo-2.0 adapters, Club3D's adapter finally enables full HDMI 2.0 support with video cards that don't support native HDMI 2.0. This includes AMD's entire lineup, pre-Maxwell 2 NVIDIA cards, and Intel-based systems that have a DisplayPort output but lack an HDMI 2.0 LSPCon. In fact, AMD explicitly stated support for DP 1.2 to HDMI 2.0 dongles in their recent driver update, paving the way to using this adapter with their cards.
    While we're covering the specifications, it also bears mentioning that Club3D's adapter also supports HDCP 2.2. Though as HDCP 2.2 is an end-to-end standard this means that the host video card still needs to support HDCP 2.2 to begin with, as Club3D's adapter simply operates as a repeater. As a result compatibility with 4K content on older cards will be hit and miss, as services like Netflix require HDCP 2.2 for their 4K content.
    Finally, Club3D will be offering two versions of the adapter: a full size DisplayPort version that should work with most desktop video cards, and a Mini DisplayPort version for laptops and other devices with Mini DisplayPort outputs. And with a roughly $30 asking price listed today, it is an attractive option for anyone who would otherwise have to replace a video card just to get HDMI 2.0.
    Buy Club3D DisplayPort 1.2 to HDMI 2.0 Adapter on Amazon.com


    More...

  5. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,807
    Post Thanks / Like
    #5575

    Anandtech: AMD Dual-Fiji “Gemini” Video Card Delayed To 2016, Aligned With VR Headset

    I’m going to start off this post a little differently tonight. Since our DX12 Ashes of the Singularity article back in October, I have been looking for an opportunity to do a bit of deep analysis on the current state of SLI, Crossfire, and Alternate Frame Rendering technology. I had expected that chance to come along when AMD launched their dual-GPU Fiji card in 2015. However as AMD is now confirming the card is not launching this year, clearly things have changed. Instead we’re now looking at a 2016 launch for that card, and in light of AMD publicly commenting on the fact that they are going to be focused on VR with the card, now is probably the best time to have that discussion about AFR in order to offer some more insight on AMD’s strategy.
    But first, let’s get to the big news for the day: the status of AMD’s dual-GPU Fiji card, codename Gemini. As our regular readers likely recall, back at AMD’s Fiji GPU launch event in June, the company showed off four different Fiji card designs. These were the Radeon R9 Fury X, the R9 Fury, the R9 Nano, and finally the company’s then-unnamed dual-GPU Fiji card (now known by the codename Gemini). At the time Gemini was still in development – with AMD showing off an engineering sample of the board – stating that they expected the card to be released in the fall of 2015.
    AMD GPU Specification Comparison

                            R9 Fury X    R9 Fury      R9 Nano      Gemini (Dual Fiji)
    Stream Processors       4096         3584         4096         2 x ?
    Texture Units           256          224          256          2 x ?
    ROPs                    64           64           64           2 x 64
    Boost Clock             1050MHz      1000MHz      1000MHz      ?
    Memory Clock            1Gbps HBM    1Gbps HBM    1Gbps HBM    ?
    Memory Bus Width        4096-bit     4096-bit     4096-bit     2 x 4096-bit
    VRAM                    4GB          4GB          4GB          2 x 4GB
    FP64                    1/16         1/16         1/16         1/16
    TrueAudio               Y            Y            Y            Y
    Transistor Count        8.9B         8.9B         8.9B         2 x 8.9B
    Typical Board Power     275W         275W         175W         ?
    Manufacturing Process   TSMC 28nm    TSMC 28nm    TSMC 28nm    TSMC 28nm
    Architecture            GCN 1.2      GCN 1.2      GCN 1.2      GCN 1.2
    GPU                     Fiji         Fiji         Fiji         Fiji
    Launch Date             06/24/15     07/14/15     09/10/15     2016
    Launch Price            $649         $549         $649         (Unknown)
    However since that original announcement we haven’t heard anything further about Gemini, with AMD/RTG more focused on launching the first three Fiji cards. But with today marking the start of winter, RTG has now officially missed their original launch date for the card.
    With winter upon us, I reached out to RTG last night to find out the current status of Gemini. The response from RTG is that Gemini has been delayed to 2016 to better align with the launch of VR headsets.
    The product schedule for Fiji Gemini had initially been aligned with consumer HMD availability, which had been scheduled for Q415 back in June. Due to some delays in overall VR ecosystem readiness, HMDs are now expected to be available to consumers by early Q216. To ensure the optimal VR experience, we’re adjusting the Fiji Gemini launch schedule to better align with the market.
    Working samples of Fiji Gemini have shipped to a variety of B2B customers in Q415, and initial customer reaction has been very positive.
    And as far as video card launches go – and this will be one of the oddest things I’ve ever said – I have never before been relieved to see a video card launch delayed. Why? Because of the current state of alternate frame rendering.
    Stepping to the side of the Gemini delay for the moment, I want to talk a bit about internal article development at AnandTech. Back in November when I was still expecting a November/December launch for Gemini, I began doing some preliminary planning for the Gemini review. What cards/setups we’d test, what games we’d use, resolutions and settings, etc. The potential performance offered by dual-GPU cards (and multi-GPU setups in general) means that to properly test them we need to put together some more strenuous tests to fit their performance profile, and we also need to assemble scenarios to evaluate other factors such as multi-GPU scaling and frame pacing consistency. What I found for testing disappointed me.
    To cut right to the chase, based on my preliminary planning, things were (and still are) looking troubled for the current state of AFR. AFR compatibility among recently launched games is the lowest I have seen it in the last decade, which is rendering multi-GPU setups increasingly useless for improving gaming performance. For that reason, back in November I was seriously considering how I would go about broaching the matter with RTG; how to let them know point blank (and before I even reviewed the card) that Gemini was going to fare poorly if reviewed as a traditional high-performance video card. That isn’t the kind of conversation I normally have with a manufacturer, and it’s rarely a good thing when I do.
    This is why, as awkward as it’s going to be for RTG to launch a dual-GPU 28nm video card in 2016, I’m relieved that RTG has pushed back the launch of the card. Had RTG launched Gemini this year as a standard gaming card, they would have run straight into the current problems facing AFR. Instead, by delaying it to 2016 and focusing on VR – one of the only use cases that still consistently benefits from multi-GPU setups – RTG gets a chance to salvage their launch and their engineering efforts. There’s definitely an element of making the most of a bad situation here, as RTG will be launching Gemini in the same year we finally expect to see FinFET GPUs, but it’s the best course of action they can take at this time.
    The Current State of Multi-GPU & AFR

    I think it’s fair to say that I am slightly more conservative on multi-GPU configurations than other PC hardware reviewers, as my typical advice is to hold off on multi-GPU until you’ve exhausted single-GPU upgrades. In the right situations multi-GPU is a great way to add performance, offering more than any single GPU can, but in the wrong scenarios it can add nothing, or worse, drive performance below that of a single GPU.
    The problem facing RTG (and NVIDIA as well) is that game compatibility with alternate frame rendering – the heart of SLI and CrossFire – is getting worse and worse, year by year. In preparing for the Gemini review I began looking at new games, and the list of games that came up with issues was longer than ever before.
    AFR Compatibility (Fall 2015)

    Game                                    Compatibility
    Batman: Arkham Knight                   Not AFR Compatible
    Just Cause 3                            Not AFR Compatible
    Fallout 4                               60fps Cap, Tied To Game Sim Speed
    Anno 2205                               Not AFR Compatible
    Metal Gear Solid V: The Phantom Pain    60fps Cap, Tied To Game Physics
    Star Wars Battlefront                   AFR Compatible
    Call of Duty: Black Ops III             AFR Compatible
    Out of the 7 games I investigated, 3 of them outright did not (and will not) support multi-GPU. Furthermore another 2 of them had 60fps framerate caps, leading to physics simulation issues when the cap was lifted. As a result there were only two major fall of 2015 games that were really multi-GPU ready: Call of Duty: Black Ops III and Star Wars Battlefront.
    I’ll start things off with the subject of framerate caps. While these aren’t a deal-breaker for multi-GPU use, as benchmarks they’re not very useful. But more importantly, I would argue that the kind of gamers investing in a high-end multi-GPU setup are also the kind of gamers who are going to want framerates higher than 60fps. Next to 4K gaming, the second biggest use case for multi-GPU configurations is to enable silky-smooth 120Hz/144Hz gaming, something that has been made far more practical these days thanks to the introduction of G-Sync and FreeSync. Unfortunately, even when you can remove the cap, in the cases of both Fallout 4 and Metal Gear Solid V the caps are tied to the games’ simulations, and as a result removing the cap can break the game in minor ways (MGSV’s grenades) or major ways (Fallout 4’s entire game speed). Consequently multi-GPU users are left with the less than palatable choice of limiting the usefulness of their second GPU, or breaking the game in some form.
    But more concerning are the cases where AFR is outright incompatible. I touched upon this a bit in our DX12 Ashes article: games are increasingly using rendering methods that are either inherently AFR-incompatible, or at best are so difficult to work around that the development time and/or performance gains aren’t worth it. AFR-incompatible games aren’t a new thing, as we’ve certainly had those off and on for years now – but I cannot recall a time where so many major releases were incompatible.
    The crux of the issue is that game engines are increasingly using temporal reprojection and similar techniques in one form or another in order to create more advanced effects. The problem with temporal reprojection is, as implied by the name, it requires reusing data from past frames. For a single GPU setup this is no problem and it allows for some interesting effects to be rendered with less of a performance cost than would otherwise be necessary. However this is a critical problem for multi-GPU setups due to the fact that AFR means that while one GPU is still rendering frame X, the other GPU needs to start rendering frame X+1.
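    As an illustration of the problem, here is a deliberately simplified Python sketch (not any vendor's actual driver logic; the frame/GPU bookkeeping is purely hypothetical):
```python
# Simplified model of alternate frame rendering (AFR) with a temporal effect.
# GPUs take turns rendering frames; a temporal technique needs the previous
# frame's output, which usually lives on the *other* GPU, forcing a sync/copy.
NUM_GPUS = 2

def render_frame(frame: int, history: dict) -> str:
    gpu = frame % NUM_GPUS                 # AFR: alternate frames between GPUs
    prev = history.get(frame - 1)          # temporal reprojection input
    if prev is not None and prev["gpu"] != gpu:
        # Cross-GPU dependency: wait for the other GPU and copy its result,
        # eating into (or erasing) the scaling AFR was supposed to provide.
        note = f"needs frame {frame - 1} from GPU{prev['gpu']} (sync + copy)"
    else:
        note = "no cross-GPU dependency"
    history[frame] = {"gpu": gpu}
    return f"frame {frame}: rendered on GPU{gpu}, {note}"

history = {}
for f in range(4):
    print(render_frame(f, history))
```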

    Temporal reprojection in Lords of the Fallen (top) and the Frostbite Engine (bottom)
    Frame interdependency problems like this have been at the heart of a lot of AFR issues over the years, and a number of workarounds have been developed to make AFR continue to work, typically at the driver level. In easier cases the cost is that there is some additional synchronization required, which brings down the performance gains from a second GPU. However in harder cases the workarounds can come with a higher performance hit (if the problem can be worked around at all), to the point where additional GPUs aren’t an effective means to improve performance.
    Ultimately the problem is that in the short-to-mid run, these kinds of issues are going to get worse. Developers are increasingly using AFR-unfriendly rendering techniques, and in this age of multiplatform releases where big-budget AAA games are simultaneously developed for two consoles and the PC, PC users are already a minority of sales, and multi-GPU users are a smaller fraction still. Consequently from a developer’s perspective – one that by the numbers needs to focus on consoles first and the PC second – AFR is something of a luxury that typically cannot come before creating an efficient renderer for the consoles.
    Which is not to say that AFR is doomed. DirectX 12’s explicit multi-adapter modes are designed in part to address this issue by giving developers direct control over how multiple GPUs are assigned work and work together in a system. However it is going to take some unknown amount of time for the use of DirectX 12 in games to ramp up – with the delay of Fable Legends, the first DX12 games will not be released until 2016 – so until that happens we’re left with the status quo of DirectX 11 and driver-assisted AFR. And that ultimately means that 2015 is a terrible time to launch a product heavily reliant on AFR.
    Virtual Reality: The Next Great Use Case for Multi-GPU

    That brings me to the second point of today’s analysis: one possible future for multi-GPU setups. If AFR is increasingly unreliable due to frame interdependency issues, then for multi-GPU configurations to keep improving performance, there needs to be a way to use multiple GPUs where frames are minimally dependent on each other. As it turns out, stereoscopic virtual reality is just such a use case.
    In its most basic implementation, stereoscopic VR involves rendering the same scene twice: once for each eye (view), with the position of the camera slightly offset to mimic how human eyes are separated. There are a number of optimizations that can be done here to reduce the workload, but in the end many parts of a scene need to be completely re-rendered because the second view can see things the first view cannot (or sees them slightly differently).
    The beauty of stereoscopic rendering as far as multi-GPU is concerned is that because each view is rendering the same scene, there’s little-to-no interdependency between what each view is doing; one view doesn’t affect the other. This means that assigning a GPU to each view is a relatively straightforward operation. And if rendering techniques like temporal reprojection are in use, those again are per-view, so each GPU can reference its past frame in a single-GPU like manner.
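    Here is the same sketch style applied to stereo rendering (again purely illustrative; real implementations go through vendor SDKs such as VR SLI or Affinity Multi-GPU, and the 64mm eye separation is just a typical value):
```python
# Stereo VR: each eye renders the same scene from a slightly offset camera.
# Assigning one GPU per eye keeps any temporal history local to that GPU,
# so there is no cross-GPU frame dependency as there is with AFR.
IPD = 0.064  # interpupillary distance in metres (typical figure, illustrative)

def eye_camera(center, eye: str):
    offset = -IPD / 2 if eye == "left" else IPD / 2
    return (center[0] + offset, center[1], center[2])

def render_stereo_frame(frame: int, center):
    for gpu, eye in enumerate(("left", "right")):   # GPU0 -> left eye, GPU1 -> right eye
        cam = eye_camera(center, eye)
        print(f"frame {frame}: GPU{gpu} renders {eye} eye from camera {cam} "
              f"(temporal history stays on GPU{gpu})")

for f in range(2):
    render_stereo_frame(f, center=(0.0, 1.7, 0.0))
```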
    At the same time, because VR requires re-rendering large parts of a frame twice, coupled with the need for low latency, it has very high performance requirements, especially if you want to make the jump to VR without significantly compromising on image quality. The recommended system requirements for the Oculus Rift include a Radeon R9 290, a GeForce GTX 970, or equivalent. And if users want better performance or developers want better looking games, then a still more powerful setup is required.
    This, in a nutshell, is why VR is the next great use case for multi-GPU setups. There is a significant performance need, and VR rendering is much friendlier to multi-GPU setups. RTG and NVIDIA are in turn both well aware of this, which is why both of them have multi-GPU technologies in their SDKs, Affinity Multi-GPU and VR SLI respectively.
    Gemini: The Only Winning Move Is To Play VR

    Finally, bringing this back to where I began, we have RTG’s Gemini. If you go by RTG’s statements, they have been planning to make Gemini a VR card since the beginning. And truthfully I have some lingering doubts about this, particularly because the frontrunner VR headset, the Oculus Rift, already had a Q1’16 launch schedule published back in May, which was before AMD’s event. But at the same time I will say that RTG was heavily showing off VR at their Fiji launch event in June, and the company had announced LiquidVR back in March.
    Regardless of what RTG’s original plans were though, I believe positioning Gemini as a video card for VR headsets is the best move RTG can make at this time. With the aforementioned AFR issues handicapping multi-GPU performance in traditional games, releasing Gemini now likely would have been a mistake for RTG from both a reviews perspective and a sales perspective. There will always be a market for multi-GPU setups, be it multiple cards or a singular multi-GPU card, but that market is going to be far smaller if AFR compatibility is poor, as multi-GPU is a significant investment for many gamers.
    As for RTG, for better and for worse at this point they have tied Gemini’s future to the actions of other companies, particularly Oculus and HTC. The good news is that even with the most recent delay on the HTC Vive, both companies appear to be ramping up for their respective releases, with word coming just today that game developers have started receiving finalized Rift units for testing. Similarly, both companies are now targeting roughly the same launch window, with the Rift set to arrive in Q1’16 (presumably late in the quarter), and the Vive a month later in April. The risk then for RTG is whether these units arrive in large enough numbers to help Gemini reach RTG’s sales targets, or if the ramp-up for VR will bleed over into what’s expected to be the launch of FinFET GPUs. As is typically the case, at this point only time will tell.


    More...

  6. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,807
    Post Thanks / Like
    #5576

    Anandtech: The CUBOT H1 Smartphone Test: A Month with 3-4 Days of Battery per Charge

    The last time I fully road tested a smartphone, I was moving from a rather decrepit Samsung Galaxy S2 to the 'glorious' 6-inch HTC One max, at a time when my smartphone use case consisted of taking pictures and basic gaming. Two years on, and I'm upgrading again, because the One max has become frightfully slow and I now use my phone a lot for writing reviews on the road. My phone of choice for this next round comes from a whimsical tale but is an obscure number, from a Chinese company based in Shenzhen called CUBOT.

    More...

  7. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,807
    Post Thanks / Like
    #5577

    Anandtech: US ITC Finds NVIDIA Guilty of Infringing Three Samsung Patents

    An administrative law judge at the U.S. International Trade Commission on Tuesday found that NVIDIA Corp. infringed several patents of Samsung Electronics. The judge ruled that NVIDIA’s graphics processing units (GPUs) and system-on-chips (SoCs) infringed three fundamental patents belonging to Samsung. NVIDIA said that the patents were outdated, but if the full commission finds that there was a violation, certain NVIDIA products could be banned in the U.S.
    David P. Shaw, an administrative law judge from Washington D.C., found that NVIDIA infringed U.S. patents 6,147,385, 6,173,349 and 7,804,734. The patents cover an implementation of SRAM, a shared computer bus system with an arbiter, and a memory sub-system with a data strobe buffer. Some of the patents were granted back in the nineties, and such inventions may either be considered fundamental technologies or may not be used at all by modern chips. One of the patents will expire next year, and hence will have little lasting effect on NVIDIA’s business even though it was found to be infringed.
    NVIDIA’s lawyers in turn have said that Samsung had “chosen three patents that have been sitting on the shelf for years collecting nothing but dust.” NVIDIA hopes that when several judges review the case in the coming months, it will be found not guilty of patent infringement.
    “We are disappointed,” Hector Marinez, a spokesman for NVIDIA, said in a statement. “We look forward to seeking review by the full ITC which will decide this case several months from now.”
    Samsung accused NVIDIA of infringing its patents in mid-November, 2014, two months after the Santa Clara, California-based developer of chips sued the Suwon, South Korea-based conglomerate. NVIDIA asserted that graphics processing units integrated into Samsung’s Exynos system-on-chips as well as into Qualcomm’s Snapdragon SoCs infringe its fundamental graphics patents. NVIDIA asked ITC to ban sales of Samsung’s smartphones and tablets that use Exynos and Snapdragon chips, which allegedly infringed its patents, in the U.S. Samsung also asked the commission to stop sales of certain NVIDIA-based products in the U.S.
    Last week a group of six ITC judges issued their final ruling concerning NVIDIA’s allegations against Samsung and Qualcomm. They found that Samsung and Qualcomm did not infringe two out of three of NVIDIA’s patents, while the third patent was ruled invalid. The ruling was very important not only for Samsung and Qualcomm, but for numerous other companies who license graphics processing technologies from companies like ARM Holdings and Imagination Technologies, or buy SoCs from Qualcomm.
    NVIDIA is the world’s largest supplier of discrete graphics processing units for personal computers. The company’s Tegra system-on-chips for mobile devices did not become very popular among makers of smartphones and tablets, which is why the company changed its SoC strategy in 2014 – 2015 to go after vehicles, drones and other embedded applications.
    NVIDIA Corp. has been trying to monetize its intellectual property by licensing its technologies and patents to third parties since mid-2013. So far, NVIDIA has not managed to license its Kepler or Maxwell graphics cores to any other chip developer. Moreover, Samsung and Qualcomm will not pay NVIDIA either, because they did not violate any of its patents according to the ITC rulings.
    The results of the legal fight between Samsung and NVIDIA are not final. Although the U.S. ITC has so far found no violations by Samsung and has found infringements by NVIDIA, patent-related legal battles usually last for many years.
    Image of gavel by Douglas Palmer, Flickr.



    More...

  8. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,807
    Post Thanks / Like
    #5578

    Anandtech: NVIDIA Releases Android Marshmallow Update For SHIELD Tablet K1

    NVIDIA this week released an over-the-air (OTA) update to the Google Android 6.0 Marshmallow operating system for its SHIELD Tablet K1. The new OS is expected to improve the functionality of the gaming tablet as well as prolong its battery life. The arrival of the update once again demonstrates the importance of gaming devices for NVIDIA.
    The NVIDIA SHIELD Tablet K1 is based on the company’s Tegra K1 system-on-chip (four high-performance ARM Cortex-A15 R3 cores, one low-power ARM Cortex-A15 core, a graphics engine with 192 stream processors based on the Kepler architecture) and comes with an 8” multi-touch capacitive screen (1920x1080 resolution), 2GB DDR3L dynamic random access memory, 16GB or 32GB NAND flash storage, 802.11n 2x2 MIMO Wi-Fi, Bluetooth 4.0, 3G/4G module (32GB version only), stereo speakers, HDMI output and so on. The SHIELD Tablet K1 costs $199.
    The tablet was designed primarily for gamers. NVIDIA sells a special SHIELD Controller ($59.99) and offers its GeForce Now video game streaming service to owners of its tablets. While the company no longer targets mainstream smartphones and slates with its Tegra system-on-chips (SoCs), it still considers gaming devices a primary market for those SoCs. For NVIDIA, it is important to keep its SHIELD tablets up-to-date, which is why the device gets operating system upgrades ahead of many competing devices.
    The gaming slate from NVIDIA was originally launched as the SHIELD Tablet in mid-2014. In mid-2015, the company recalled its tablets because of battery issues and then re-launched the product in November under a slightly different name. Although the SHIELD Tablet and the SHIELD Tablet K1 feature the same hardware, only the newer model gets the Google Android 6.0 Marshmallow upgrade right now. The original slate will get the new operating system only in early 2016, according to NVIDIA.
    The Google Android 6.0 Marshmallow operating system is the latest version of Google’s mobile platform with improved functionality and user experience. The new OS, which was officially released in October 2015, sports a new power management system that reduces background activity, new application programming interfaces for contextual assistants, native support for fingerprint recognition and a number of other enhancements. One of the key improvements of Android 6.0 Marshmallow is Google’s Now on Tap function, which helps to quickly find relevant information based on the content currently displayed. Another advantage of Android 6.0 that gamers may consider important is better support for microSD storage.
    Owners of the NVIDIA SHIELD Tablet K1 can check Settings -> About -> Check for Updates to get the new operating system.


    More...

  9. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,807
    Post Thanks / Like
    #5579

    Anandtech: Hard Disk Drives with HAMR Technology Set to Arrive in 2018

    While many client devices use solid-state storage technologies nowadays, hard disk drives (HDDs) are still used by hundreds of millions of people and across virtually all datacenters worldwide. Heat-assisted magnetic recording (HAMR) technology promises to increase the capacities of HDDs significantly in the coming years. Unfortunately, mass production of actual hard drives featuring HAMR has been delayed a number of times already, and now it turns out that the first HAMR-based HDDs are due in 2018.
    Storage Demands Are Increasing

    Analysts from International Data Corp. and Western Digital Corp. estimate that data storage capacity shipped by the industry in 2020 will total approximately 2900 exabytes (1EB = 1 million TB), up from around 1000EB in 2015. Demand for storage will be driven by various factors, including Big Data, Internet-of-Things, user-generated content, enterprise storage, personal storage and so on. Samsung Electronics believes that the NAND flash industry will produce 253EB of flash memory in 2020, up from 84EB in 2015. Various types of solid-state storage will account for less than 10% of the storage market in terms of bytes shipped, whereas hard drives, tape and some other technologies will account for over 90%, if the estimates by IDC, Samsung and Western Digital are correct.
    In a bid to meet demand for increased storage needs in the coming years, the industry will need to expand production of NAND flash memory as well as to increase capacities of hard disk drives. Modern HDDs based on perpendicular magnetic recording (PMR) and shingled magnetic recording (SMR) platters have areal density of around ~0.95 Terabit per square inch (Tb/in²) and can store up to 10TB of data (on seven 1.43TB platters). Technologies like two-dimensional magnetic recording (TDMR) can potentially increase areal density of HDD disks by 5 to 10 per cent, which is significant. Moreover, Showa Denko K.K. (SDK), the world’s largest independent maker of hard drive platters, has outlined plans to mass produce ninth-generation PMR HDD media with areal density of up to 1.3Tb/in² next year.
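    As a quick sanity check on those figures, here is the arithmetic spelled out (an illustrative Python snippet using only the estimates quoted above):
```python
# Arithmetic check on the projections and drive geometry quoted above.
total_2020_eb = 2900     # total capacity shipped in 2020, EB (IDC/Western Digital estimate)
nand_2020_eb = 253       # NAND flash output in 2020, EB (Samsung estimate)
print(f"NAND share of 2020 shipments: {nand_2020_eb / total_2020_eb:.1%}")  # ~8.7%, i.e. under 10%

# A 10TB drive built from ~1.43TB PMR/SMR platters:
print(f"Platters per 10TB drive: {10 / 1.43:.1f}")                          # ~7 platters
```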
    HAMR: The Key to Next-Gen HDDs

    Companies like Seagate Technology and Western Digital believe that to hit areal densities beyond 1.5Tb/in², HAMR technology along with higher-anisotropy media will be required because of the superparamagnetic limit: the magnetic grains on the platters become so tiny that keeping them stable requires high-anisotropy media, which in turn cannot be written with the magnetic fields that a conventional head can produce in the space available inside an HDD.
    Certain principles of heat-assisted magnetic recording were patented back in 1954, even before IBM demonstrated the very first commercial hard disk drive. Heat-assisted magnetic recording technology briefly heats magnetic recording media with a special laser close to Curie point (the temperature at which ferromagnetic materials lose their permanent magnetic properties) to reduce its coercivity while writing data on it. HAMR HDDs will feature a new architecture, require new media, completely redesigned read/write heads with a laser as well as a special near-field optical transducer (NFT) and a number of other components not used or mass produced today.
    According to Seagate, its HAMR heads heat media to approximately 450°C using a laser with an 810nm wavelength and 20mW of power. The company does not disclose any details about its HAMR recording heads because they are the most crucial part of the next-generation hard drives. HDD makers, independent producers of recording heads, universities and various other parties have researched HAMR heads for years. The NFT is a very important component of any HAMR head: it has to deliver the right amount of energy into a spot 30nm or smaller in diameter. The NFT also has to be durable and reliable, which is something many researchers are still working on.
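    For a sense of the energy densities involved, here is a rough upper-bound calculation from the figures above (illustrative Python; in practice only a fraction of the laser's 20mW actually couples through the NFT into the media, so this overstates the delivered intensity):
```python
import math

# Upper bound on power density if the full 20 mW landed in a 30 nm spot.
laser_power_w = 20e-3        # 20 mW laser (Seagate figure quoted above)
spot_diameter_m = 30e-9      # <=30 nm spot delivered by the near-field transducer

spot_area_m2 = math.pi * (spot_diameter_m / 2) ** 2
print(f"Spot area: {spot_area_m2:.2e} m^2")
print(f"Power density (upper bound): {laser_power_w / spot_area_m2:.2e} W/m^2")
# ~2.8e13 W/m^2 as an upper bound; the actual coupled power is far lower, but
# still enough to briefly heat the media toward its Curie point under the head.
```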
    This month Showa Denko disclosed its roadmap for next-generation hard drive media. While such plans tend to change as products get closer to mass production, at present SDK expects its first 2.5” platters for HAMR drives to feature 1.2TB – 1.5TB capacity (areal density of 1.5Tb/in² – 1.95Tb/in²). By the end of the decade, capacity of 2.5” disks for HDDs is projected to increase to 2TB. Showa Denko’s forecasts clearly show the benefits of HAMR technology and its potential.
    In Development for Years

    Manufacturers of hard disk drives, heads and HDD media have been working on technologies to enable HAMR-based HDDs for well over ten years now, ever since they realized that at some point HAMR technology would be required to build hard drives with higher capacities.
    Starting from the mid-2000s, various HDD manufacturers have demonstrated prototype drives using HAMR technology on a number of occasions. For example, Western Digital showcased a 2.5” hard drive that used HAMR tech back in late 2013, and in mid-2015 Seagate displayed a NAS powered by multiple drives featuring heat-assisted magnetic recording.
    Numerous demonstrations of HAMR-based HDDs in action prove that the technology actually works. Over the years, producers of hard drives, platters and recording heads have revealed various possible timeframes for commercial availability of drives with HAMR technology, but those predictions have not proven accurate. At present, there are still reliability issues with the technology, according to Seagate. In recent months both Seagate and Showa Denko indicated that HAMR drives would be delayed again.
    Still Not Ready for Commercial Products

    Seagate plans to ship prototypes of its HAMR-based drives to select customers in late 2016 or early 2017. The drives will be intended mostly for test purposes. They will help Seagate and the company's clients to understand how reliable the HAMR-powered HDDs are in actual datacenters, whether they are compatible with existing infrastructure and how fast they are in real-world applications. Evaluation will take a long time and chances that Seagate starts volume shipments of hard disk drives with HAMR technology in 2017 are low.
    Last week Showa Denko also said that its platters for hard disk drives that use heat-assisted magnetic recording technology would be delayed to 2018.
    “As for new generation technologies, HAMR or TAMR, the start of mass production will be [slightly] delayed to 2018,” said Hideo Ichikawa, president of Showa Denko. The official mid-term business plan of the company reads that the new-generation media "will be launched in or after 2018".
    While it is evident that HAMR-powered hard drives are not ready for prime time, producers of HDDs do not reveal the nature of the issues. Seagate indicated earlier this year that HAMR-based drives were not stable enough, but did not elaborate.
    Higher-Capacity HDDs Are Incoming

    Even though HAMR seems to be at least two years away, hard drive makers will continue to increase capacities of their flagship drives going forward.
    SDK promises to start volume production of its ninth-generation perpendicular magnetic recording platters next year. So far, the company has announced that the ninth-gen PMR disks for 2.5” HDDs will feature 1TB capacity. Eventually the tech could be applied to 3.5” platters to increase their capacity up to around 2TB.
    Earlier this year Seagate introduced its 2TB hard disk drive in 2.5” form-factor that is just 7mm thick. The drive is based on two 1TB platters, which feature leading-edge 1.3Tb/in² areal density. The same technology will inevitably be used for 3.5” HDDs, enabling Seagate to introduce enterprise-class hard drives with over 10TB capacity in the coming years.
    Western Digital Corp. builds high-capacity platters in-house. While exact plans of the company are unclear, its HGST division has consistently offered the world’s highest-capacity hard drives for several years in a row now.
    Overall, while HAMR faces another delay, leading producers of hard disk drives will be able to expand capacities of their HDDs using PMR and SMR platters in the coming years.


    More...

  10. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,807
    Post Thanks / Like
    #5580

    Anandtech: ASUS and ASRock Prep Gaming Motherboards for Intel Xeon E3 v5 Processors

    ASUS and ASRock, two major makers of computer motherboards, are rolling out gaming-oriented platforms for Intel Xeon central processing units (CPUs). While Xeon chips are typically more expensive than comparable Intel Core processors, they feature a number of technologies that make them rather attractive to end-users.
    Xeons and Desktops

    Intel Core processors for desktops offer performance and a feature set tailored for average users, but server-class Intel Xeon processors for single-socket machines sport technologies such as ECC memory support, vPro and Trusted Execution, which may be important for users with custom requirements. Moreover, unlike the Intel Core i5, all quad-core Xeon chips for uniprocessor systems feature 8MB of cache and Hyper-Threading technology, which means slightly higher performance in single-threaded and multi-threaded applications. Intel Core CPUs have their own advantages over Xeon processors, such as unlocked multipliers on select models as well as Intel Identity Protection, but there are still users who prefer to use Xeons.
    Previously, Intel Xeon microprocessors for uniprocessor computers were compatible with premium desktop chipsets, and Intel Core chips could work in systems designed for Xeons. However, starting with the Skylake generation, Intel decided to change its approach to workstation- and desktop-class PCs: Intel Xeon E3-1200 v5 microprocessors are incompatible with Intel 100-series desktop chipsets. This helps Intel and its partners better position their products for personal and professional usage.
    Even though Intel would prefer to keep its Xeon central processing units away from the consumer market, makers of motherboards plan to give users a choice and are rolling out Intel C232-based platforms with LGA1151 sockets with features for gamers. The Intel C232 is not the most advanced core-logic for the Xeon E3-1200 v5 processors — it only has eight PCI Express 3.0 lanes, up to six USB 3.0 ports, up to six Serial ATA-6Gb/s ports, does not support vPro or Rapid Storage technologies, etc. However, it is also cheaper than the fully-fledged Intel C236 chipset, which is used for higher-end workstations.
    ASUS Readies Four Xeon Motherboards for Desktop PCs

    This week ASUS introduced its E3 Pro Gaming V5 mainboard, which supports server-grade Intel Xeon E3-1200 v5 processors as well as a variety of features for desktop PCs used by gamers, such as DDR4 memory overclocking, high-quality integrated audio, an M.2 slot for solid-state drives, USB 3.1 support and so on. ASRock is also working on its Intel C232-based Fatal1ty E3V5 Performance Gaming platform compatible with the latest Intel Xeon E3 v5 chips.
    The ASUS E3 Pro Gaming V5 motherboard is compatible with all central processing units in the LGA1151 form-factor, including the Intel Core i3/i5/i7 and the Intel Xeon E3-1200 v5 families of chips. The board features a digital eight-phase voltage regulator module for CPUs with solid-state chokes and high-quality capacitors. The platform comes with four 288-pin DDR4 DIMM slots, which support memory overclocking and XMP profiles, but do not support ECC technology. The motherboard also features two PCI Express 3.0 x16 slots for graphics cards or high-performance SSDs (officially, only AMD’s CrossFireX multi-GPU technology is supported), two PCIe 3.0 x1 and two PCI slots for add-in boards, one M.2 slot for SSDs (with NVMe support), six Serial ATA-6Gb/s ports for storage devices and so on. The ASUS E3 Pro Gaming V5 is equipped with the ASMedia ASM1142 USB 3.1 controller (with one USB 3.1 Type-A port and one USB 3.1 Type-C port), an Intel I219LM Gigabit Ethernet controller with ASUS GameFirst software that prioritizes gaming traffic, 7.1-channel SupremeFX audio with a Realtek ALC1150 codec and so on. The E3 Pro Gaming V5 motherboard is compatible with liquid-cooling solutions and features onboard thermal sensors as well as automatic fan controls.
    The Intel C232-based motherboard from ASUS is clearly a consumer-oriented system board, yet it supports Xeon processors. In fact, even the layout of the E3 Pro Gaming V5 resembles that of the ASUS B150 Pro Gaming/Aura, an affordable platform for gamers. Next year the company plans to introduce three ASUS Signature-series motherboards — including the E3-Pro V5 (ATX), the E3M-ET V5 and the E3M-Plus V5 in micro ATX (mATX) form-factor — that will also support Intel Xeon chips. Apparently, ASUS plans to offer relatively inexpensive platforms for Intel Xeon E3 v5 microprocessors.
    ASRock Preps Two Xeon Motherboards for Gamers

    ASRock is another company designing consumer-grade motherboards featuring Intel C232 core-logic and compatible with workstation-class Intel Xeon chips as well as desktop-class Intel Core CPUs. The ASRock Fatal1ty E3V5 Performance Gaming/OC and the ASRock E3V5 WS will be the company’s first two mainboards to support Xeons along with certain desktop features.
    The two Intel C232-based motherboards from ASRock will share one design, but will sport different styling and different BIOS versions with slightly different feature sets. The motherboards will have a high-quality five-phase digital VRM, four slots for DDR4 memory with or without ECC, two PCI Express x16 slots, three PCIe x1 slots, multiple Serial ATA-6Gb/s ports, USB 3.0 support, integrated audio and so on.
    The ASRock Fatal1ty E3V5 Performance Gaming/OC was designed primarily for enthusiasts and gamers. The motherboard sports ASRock’s technologies like Gaming Armor, Key Master and Fatal1ty mouse port that are typically found on desktop-class platforms from the company. In addition, the Fatal1ty E3V5 Performance Gaming/OC features DDR4 memory and BCLK overclocking, something that server-class motherboards do not typically support.

    The ASRock E3V5 WS was designed for workstations. It cannot overclock CPUs or memory and is not equipped with gaming features. However, it uses an onboard Intel I219LM Gigabit Ethernet chip that was developed with servers in mind. The E3V5 WS motherboard is compatible with AMD FirePro and NVIDIA Quadro professional graphics cards as well as with server operating systems.
    Worth Considering?

    While Intel’s Xeon E3-1200 processors are not designed for gamers or enthusiasts in general, given the shortage of Intel’s Core i7-6700K chips, some may consider buying a Xeon E3 v5 processor with 8MB of cache, Hyper-Threading and other technologies instead.
    Since the Intel Xeon E3-1200 v5 processors and the Intel C232 core-logic do not officially support overclocking, end-users who would like to boost the performance of their central processing units will have to experiment with BCLK overclocking, which may not provide very good results. Moreover, the C232 does not offer as many PCI Express 3.0 lanes as the Intel Z170 or the Intel C236, which means the expansion capabilities of desktops that use this core-logic will be limited.
    The motherboards for the Intel Xeon E3 v5 CPUs from ASUS and ASRock look rather solid in terms of quality, but can hardly be called feature-rich. Nonetheless, if their price is right, a Xeon-based gaming system may be worth considering.


    More...
