
Thread: Anandtech News

  1. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #4051

    Anandtech: Nvidia and Epic Games Showcase the Power of Tegra K1 With Unreal Engine 4

    For the past few years there have been claims that mobile graphics performance and capabilities are about to reach those of gaming consoles like the Xbox 360 and PlayStation 3. Given how long this has been going on, that point obviously hasn't quite been reached yet. But if a new tech demo from NVIDIA and Epic Games is any indication of where graphics performance is headed, the goal of matching the previous generation of game consoles on a mobile device may not be far off. The video below was made in Unreal Engine 4 and rendered on NVIDIA's Tegra K1.

    This tech demo was played during the keynote at Google IO. To achieve some of the effects in the video, the teams at Epic Games and NVIDIA used Google's new Android Extension Pack and OpenGL ES 3.1, which are supported in the upcoming Android L release. The Android Extension Pack is a set of extensions to OpenGL ES that provides features like tessellation, which improves the detail of rendered geometry, and geometry shaders, which can also be used to add detail to a scene as well as shadows. The Android Extension Pack also includes support for compute shaders and Adaptive Scalable Texture Compression (ASTC), which we've talked about in depth previously.
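    Of the extension pack features mentioned above, ASTC is the easiest to quantify: every compressed block occupies a fixed 128 bits, and only the block's pixel footprint varies. A quick sketch of the resulting bits-per-pixel (the footprint sizes below are standard ASTC options, not figures from the article):

```python
# ASTC stores every compressed block in a fixed 128 bits; the block's
# pixel footprint is what varies, so bits-per-pixel falls as it grows.
def astc_bits_per_pixel(block_w: int, block_h: int) -> float:
    BLOCK_BITS = 128  # fixed ASTC block size
    return BLOCK_BITS / (block_w * block_h)

for w, h in [(4, 4), (6, 6), (8, 8), (12, 12)]:
    print(f"{w}x{h}: {astc_bits_per_pixel(w, h):.2f} bpp")
```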
    Of course software is just one half of the equation. The GPU in NVIDIA's Tegra K1 breaks free of the old GeForce ULP design and uses the same architecture as NVIDIA's desktop GPUs. Specifically, the GPU in Tegra K1 is a Kepler based GPU with 192 CUDA cores, 4 ROPs (render output units), and 8 texture units. The 64-bit version of NVIDIA's Tegra K1 will also be one of the first chips to ship in a new wave of 64-bit Android L devices, with Google having updated the OS and their ART runtime to support the ARMv8 instruction set. It will be exciting to see a new generation of games enabled by more powerful hardware like NVIDIA's Tegra K1.
    Source: Unreal Engine on Youtube



    More...

  2. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #4052

    Anandtech: Intel’s "Knights Landing" Xeon Phi Coprocessor Detailed

    Continuing our ISC 2014 news announcements for the week, next up is Intel, who has taken to ISC to announce further details about the company's forthcoming Knights Landing processor, the next generation of Intel's Xeon Phi processors.
    While Knights Landing in and of itself is not a new announcement – Intel initially announced it last year – Intel had offered very few details on the makeup of the processor until now. However, with Knights Landing scheduled to ship roughly a year from now, Intel is ready to offer up further details about the processor and its capabilities.
    As previously announced, as the successor to Intel's existing Knights Corner (1st generation Xeon Phi), Knights Landing makes the jump from using Intel's enhanced Pentium 1 (P54C) x86 cores to using the company's modern Silvermont x86 cores, which currently lie at the heart of Intel's Atom processors. These Silvermont cores are far more capable than the older P54C cores and should significantly improve Intel's single threaded performance. All the while these cores are further modified to incorporate AVX units, allowing AVX-512F operations that provide the bulk of Knights Landing's computing power and are a similarly potent upgrade over Knights Corner's more basic 512-bit SIMD units.
    All told, Intel is planning on offering Knights Landing processors containing up to 72 of these cores, with double precision floating point (FP64) performance expected to exceed 3 TFLOPs. This will of course depend in part on Intel’s yields and clockspeeds – Knights Landing will be a 14nm part, a node whose first products won’t reach end-user hands until late this year – so while Knights Landing’s precise performance is up in the air, Intel is making it extremely clear that they are aiming very high.
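    As a sanity check on that 3 TFLOPs figure, peak FP64 throughput is just cores × vector units × FP64 lanes × 2 (for fused multiply-add) × clock. The article gives only the core count and the target, so the two-FMA-units-per-core and ~1.3GHz clock below are illustrative assumptions, not Intel's published specs:

```python
# Back-of-envelope peak FP64 throughput for a many-core AVX-512 part.
# Unit count and clock are assumptions chosen to land near the ">3 TFLOPs"
# target; Intel has not published these figures.
def peak_fp64_tflops(cores: int, fma_units_per_core: int, clock_ghz: float) -> float:
    lanes = 512 // 64   # 8 FP64 lanes per 512-bit vector
    flops_per_fma = 2   # a fused multiply-add counts as 2 FLOPs
    return cores * fma_units_per_core * lanes * flops_per_fma * clock_ghz / 1000

print(peak_fp64_tflops(72, 2, 1.3))  # ~3.0 TFLOPs with these assumptions
```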
    Which brings us to this week and Intel’s latest batch of details. With last year focusing on the heart of the beast, Intel is spending ISC 2014 explaining how they intend to feed the beast. A processor that can move that much data is not going to be easy to feed, so Intel is going to be turning to some very cutting edge technologies to do it.
    First and foremost, when it comes to memory Intel has found themselves up against a wall. With Knights Corner already using a very wide (512-bit) GDDR5 memory bus, Intel is in need of an even faster memory technology to replace GDDR5 for Knights Landing. To accomplish this, Intel and Micron have teamed up to bring a variant of Hybrid Memory Cube (HMC) technology to Knights Landing.

    Hybrid Memory Cube (HMC)
    Through the HMC Consortium, both Intel and Micron have been working on developing HMC as a next-generation memory technology. By stacking multiple DRAM dies on top of each other, connecting those dies to a controller at the bottom of the stack using Through Silicon Vias (TSVs), and then placing those stacks on-package with a processor, HMC is intended to greatly increase the amount of memory bandwidth that can be used to feed a processor. This is accomplished by putting said memory as close to the processor as possible to allow what’s essentially an extremely wide memory interface, through which an enormous amount of memory bandwidth can be created.

    Image Courtesy InsideHPC.com
    For Knights Landing, Intel and Micron will be using a variant of HMC designed just for Intel’s processor. Called Multi-Channel DRAM (MCDRAM), Intel and Micron have taken HMC and replaced the standard memory interface with a custom interface better suited for Knights Landing. The end result is a memory technology that can scale up to 16GB of RAM while offering up to 500GB/sec of memory bandwidth (nearly 50% more than Knights Corner’s GDDR5), with Micron providing the MCDRAM modules. Given all of Intel’s options for the 2015 time frame, the use of a stacked DRAM technology is among the most logical and certainly most expected (we've already seen NVIDIA plan to follow the same route with Pascal); however the use of a proprietary technology instead of HMC for Knights Landing comes as a surprise.
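    The "nearly 50% more than Knights Corner's GDDR5" claim is easy to check against the bus width given above. The 5.5 GT/s data rate below is an assumption (the rate of the top Knights Corner SKUs, not stated in the article):

```python
# Sanity-checking MCDRAM's 500GB/sec against Knights Corner's GDDR5.
# Knights Corner: 512-bit bus; 5.5 GT/s data rate assumed, not from the article.
def gddr5_bandwidth_gbps(bus_bits: int, data_rate_gt_s: float) -> float:
    return bus_bits / 8 * data_rate_gt_s  # bytes per transfer * transfers/sec

knc = gddr5_bandwidth_gbps(512, 5.5)  # GB/s
knl_mcdram = 500.0                    # GB/s, per Intel's announcement
print(f"KNC: {knc:.0f} GB/s, uplift: {knl_mcdram / knc - 1:.0%}")
```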
    Moving on, while Micron’s MCDRAM solves the immediate problem of feeding Knights Landing, RAM is only half of the challenge Intel faces. The other half of the challenge for Intel is in HPC environments where multiple Knights Landing processors will be working together on a single task, in which case the bottleneck shifts to getting work to these systems. Intel already has a number of fabrics at hand to connect Xeon Phi systems, including their own True Scale Fabric technology, but like the memory situation Intel needs a much better solution than what they are using today.
    For Knights Landing Intel will be using a two part solution. First and foremost, Intel will be integrating their fabric controller onto the Knights Landing processor itself, doing away with the external fabric controller, the space it occupies, and the potential bottlenecks that come from using a discrete fabric controller. The second part of Intel's solution comes from developing a successor to True Scale Fabric – dubbed Omni Scale Fabric – to offer even better performance than Intel's existing fabric solution. At this point Intel is being very tight-lipped about the Omni Scale Fabric specifications and just how much of an improvement in inter-system communications Intel is aiming for, but we do know that it is part of a longer term plan. Eventually Intel intends to integrate Omni Scale Fabric controllers not just into Knights Landing processors but into traditional Xeon CPUs too, further coupling the two processors by allowing them to communicate directly through the fabric.
    Last but not least, thanks in large part to the consolidation offered by using MCDRAM, Intel is also going to be offering Knights Landing in a new form factor. Along with the traditional PCIe card form factor that Knights Corner is available in today, Knights Landing will also be available in a socketed form factor, allowing it to be installed alongside Xeon processors in appropriate motherboards. Again looking to remove any potential bottlenecks, by socketing Knights Landing Intel can directly connect it to other processors via Quick Path Interconnect as opposed to the slower PCI-Express interface. Furthermore, by being socketed Knights Landing would inherit the Xeon processor's current NUMA capabilities, sharing memory and memory spaces with Xeon processors and allowing them to work together on a workload heterogeneously, as opposed to Knights Landing operating as a semi-isolated device at the other end of a PCIe connection. Ultimately Intel is envisioning programs being written once and run across both types of processors, and with Knights Landing being binary compatible with Haswell, socketing Knights Landing is the other part of the equation that is needed to make Intel's goals come to fruition.
    Wrapping things up, with this week’s announcements Intel is also announcing a launch window for Knights Landing. Intel is expecting to ship Knights Landing in H2’15 – roughly 12 to 18 months from now. In the meantime the company has already lined up its first Knights Landing supercomputer deal with the National Energy Research Scientific Computing Center, who will be powering their forthcoming Cori supercomputer with 9300 Knights Landing nodes. Intel currently enjoys being the CPU supplier for the bulk of the Top500 ranked supercomputers, and with co-processors becoming increasingly critical to these supercomputers Intel is shooting to become the co-processor vendor of choice too.


    More...

  3. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #4053

    Anandtech: Apple Updates 16GB iPod Touch With Color Options and Rear-Facing Camera

    Today Apple quietly refreshed their iPod Touch lineup, with the changes specifically being focused on the 16GB iPod Touch. The fifth generation iPod Touch line was announced in September of 2012. The device was launched in 32GB and 64GB variants with six different color options and a hardware platform that borrowed from many different Apple devices on the market at the time. Internally it used the same Apple A5 chip used in the iPhone 4S which is a dual core Cortex-A9 based design running at 800MHz paired with a PowerVR SGX543MP2 GPU and 512MB of LPDDR2 memory. On the front it has the same 1136x640 IPS display used in the iPhone 5, 5c, and 5s, as well as the 1.2MP front-facing camera from the iPhone 5. On the back it has the 5MP rear-facing CMOS sensor used in the iPhone 4 paired with the optical system of the iPhone 5. The 32GB and 64GB models were priced at $299 and $399 respectively.
    Eight months after the initial launch of the new iPod Touch, Apple introduced a less expensive 16GB model which only came with a silver back and a black front face, and did not include the rear-facing camera or wrist strap from the more expensive iPod Touch models. This version was priced at $199 in the United States, and until now its tradeoffs were the only option for consumers who didn't want the larger storage capacity models.
    Apple's new 16GB iPod Touch introduced today now comes in the full array of colors that only the more expensive models offered previously. It also includes the rear-facing camera and the wrist strap, effectively making it an identical device to the more expensive models apart from the amount of internal storage. The refreshed 16GB iPod Touch still costs $199 and is currently only available in the United States, with availability in other countries coming in the near future. In addition to the upgrades to the 16GB model, Apple has dropped the price of the 32GB and 64GB versions to $249 and $299 respectively.
    Source: Apple


    More...

  4. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #4054

    Anandtech: PlayStation Plus July 2014 Games Preview

    Summer is upon us now, and with that comes new free games for PlayStation Plus members. Sony has now moved to a two-games-per-system-per-month model, so it should be easier to keep track of what games are coming out when. If you missed last month's games, don't forget to go pick them up soon! June's games can be found here. On Tuesday July 1, the new round of games will be as follows:
    PlayStation 4

    For July, PlayStation Plus members on the PlayStation 4 will have access to TowerFall Ascension and Strider.
    TowerFall Ascension:

    “TowerFall Ascension is the definitive version of the hit archery combat game. Inspired by classics from the golden age of couch multiplayer, it’s a local party game centering around hilarious, intense versus matches. The core mechanics are simple and accessible, but hard to master and combat is fierce. Loot treasure chests for game-changing power-ups, master the art of catching arrows out of the air, or descend on your foes and stomp them into submission. TowerFall is best played competitively with friends, cross-legged on the floor within punching distance of each other.”
    Strider:

    “Strider returns in a brand new adventure, complete with incredible side-scrolling action, and lightning fast combat all in a massive interconnected world! Download the full game now and become the original assassin!”
    PlayStation 3

    PlayStation 3 owners get both Dead Space 3 and Vessel in July.
    Dead Space 3:

    “Dead Space 3 brings Isaac Clarke and merciless soldier John Carver on a journey across space to discover the source of the Necromorph outbreak. Crash-landed on the frozen planet of Tau Volantis, Isaac must comb the harsh environment for raw materials and scavenged parts. He will then put his engineering skills to the ultimate test to create and customize weapons and survival tools. Play together with a friend or alone as Isaac Clarke using the seamless new drop in, drop out co-op functionality. Each mode offers unique story elements and gameplay.”
    Vessel:

    “Vessel is built on an optimized liquid simulation featuring flowing water, scalding lava and steam, reactant chemicals, glowing goo, the mysterious ‘protoplasm’, and more. Each liquid has unique properties and mixes with other liquids for dramatic effects.”
    PlayStation Vita

    Finally, PlayStation Vita owners get both Muramasa Rebirth and Doki-Doki Universe.
    Muramasa Rebirth:

    “From master video game developer Vanillaware comes Muramasa Rebirth, an action RPG that blends the vibrant and beautiful world of Japanese mythology and high flying, fast-paced action! Enter a world where demon, samurai and other deadly enemies vie to destroy you as you search for the fabled Demon Blades. Become the possessed princess Momohime, as you travel West, or the fugitive ninja Kisuke, as you travel East, through mythical Japan battling anyone who stands in your way.”
    Doki-Doki Universe:

    “Embark on a journey with QT3 to discover humanity and engage with bizarre characters to learn more about them, and hopefully, learn more about yourself along the way.”


    More...

  5. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #4055

    Anandtech: First Impressions and Hands On of Android L

    Today, Google finally posted the system images for Android L on the Nexus 5 and 7, so I decided to take a look at them to see what’s going on. After flashing the images through fastboot, the first thing I noticed was just how much longer it takes to get past the first startup. This is definitely a significant departure from the Dalvik era, as the ahead of time compilation process happens on the first boot for system applications. There’s also a new boot animation that is a modification of the KitKat boot animations. The best description I can give is that the colors now orbit each other like electrons.
    After booting, the setup process remains mostly unchanged from 4.4. Things definitely start to change once you get into the main UI though. While it’s hard to show some of the animations, there’s definitely a great deal more depth to the UI than before. One of the first things that I noticed was the change in the notification drawer. Now, instead of tapping a button to get to the quick settings, it’s just another swipe down to view that panel. It definitely has a sense of depth as well, as the icons seem to scroll out underneath the notification panel.
    Once you actually go into the settings menus, things start to look very different. The old menu still remained rather dark in its design, but the new menu uses a white backdrop for a lighter feel. In general, it feels very much like Sense 6 in this regard. There’s also a new landscape view to increase information density when compared to previous versions. The new overscroll animations are also much more reactive than before, and the shape of the overscroll varies based upon where your finger is. This same reactive animation behavior can be seen throughout the UI now.
    It seems that the most consistent motif in this preview release is responsiveness, and not just in animations. For one, scrolling through a listview is the smoothest experience I’ve ever had in Android, bar none. It’s strange that I’ve come to expect this, but trying to scroll through a long comment thread in a Reddit client before caused pauses and stutters without fail. The same is no longer true in this build on the Nexus 5. Scrolling through a ~700 comment thread happens with no perceivable stutter. It’s still possible to get the device to choke though, and the Play Store home page still seems to have some stutters and pauses while scrolling. It’s definitely smoother than doing the same on the One (M8).
    There are also changes to the lockscreen. For now, it seems that lockscreen widgets are gone. The new lockscreen also adds an iOS-style notification display, which is definitely a useful feature. Swiping away these notifications is relatively simple as well. Swiping right on the lockscreen now brings up the phone application, and swiping left brings up the camera application as always. I did notice a bit of bugginess, as swiping down on the lockscreen seems to hide both the clock widget and notification bar with no way to get it back unless you unlock the phone. Swiping down from this state brings down the quick settings, but it’s no longer attached to the notification drawer. Also, it seems that there’s some sort of charge estimation display now, as on the lock screen it displayed the time left until the phone was fully charged.
    The new multitasking UI is also surprisingly usable. In this regard I think the information density has been increased, as it’s theoretically possible to show up to four application tiles at one time instead of the three that used to be shown. The same use of depth is also helpful in this design, as it help to establish a sense of chronology that wasn’t quite there with the old multitasking UI. Here, scrolling through even the longest of histories is flawlessly smooth and without pauses. As always, apps can be closed by swiping left or right to remove them from the multitasking UI.
    Going through the settings and digging a bit deeper, I’ve managed to find some information about this build. Based upon the build.prop, this release seems to be quite new as it was built on June 18th, just a week before the keynote. There are also some new settings in the developer options menu, such as WiFi verbose logging, simulated color space, and a NuPlayer option.
    Overall, I’m quite excited to see how Android L turns out by the time a release OTA rolls around. The only real issue I have at this point is that some UI elements such as the clear all notifications button have disappeared with this build. I suspect that this version of Android will be a significant change unlike the updates from 4.2 to 4.4. With any luck we’ll be able to track the changes between each preview release to see how Android L evolves until its release in the fall.


    More...

  6. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #4056

    Anandtech: Google IO 2014 Day 2 Recap Video

    I just finished up another day at Google IO and wanted to share a few tidbits prior to getting this stuff down in text. I spend a little bit of time talking about how ART improves garbage collection performance in the Android L release, as well as how cloud pairing of mobile devices to Chromecast works using ultrasonic authentication. The big news is obviously Android Wear, and I spend a bit of time talking about Google's plans for notifications in Android and my initial experience with LG's G Watch.


    More...

  7. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #4057

    Anandtech: Hands On With AMD’s Gaming Evolved Client Game DVR

    Over the last few years we have seen both AMD and NVIDIA become increasingly interested in developing and nurturing software ecosystems around their products. Born out of a desire to improve sales by offering additional functionality and to get a leg up on the competition in what’s otherwise a market of near-perfect substitutes, we have seen everything from APIs to middleware to software rolled out alongside video cards to develop these ecosystems.
    The most recent spurt of ecosystem development has been especially focused on the software aspect, with AMD and NVIDIA branching out beyond drivers and control panels to offer more functionality and features through utility applications. In this respect the originator of this trend and the leader thus far has been NVIDIA, who kicked off this latest wave of ecosystem development with the release of their GeForce Experience utility a bit over a year ago. Since then NVIDIA has continued to flesh out GeForce Experience, and while AMD's position has largely been one of copying NVIDIA and catching up, there's something to be said for knowing when to copy a good idea for the benefit of your users.
    First announced back at their Hawaii event and released in beta shortly thereafter, AMD's take on the GeForce Experience style utility is the AMD Gaming Evolved Client. Rather than developing it internally, AMD has been developing it as a sort of second-party utility that is a customized version of the Raptr client. This has led to a sometimes odd dichotomy between the utility functions that AMD needs and the monetization that Raptr needs, but nonetheless over the last half year or so the GEC has progressively improved in functionality and presentation.

    An earlier version of the Gaming Evolved Client
    With the GEC’s game optimization service now nailed down, AMD and the Raptr developers have turned their eyes towards game recording and broadcasting, modeled after NVIDIA’s ShadowPlay feature. The first fruits of that labor were released last week with the release of the GEC’s Game DVR feature.
    Initially released in beta form, Game DVR leverages AMD’s hardware H.264 video encoder – the Video Codec Engine (VCE) – to offer game recording and broadcasting functionality. Game DVR allows for both manual recording and an always-on rolling buffer of video up to 10 minutes long, allowing the buffer to be permanently saved at will (and hence the DVR name).
    Traditional video recording through applications such as FRAPS comes with a significant overhead, on top of the time required to transcode a video for distribution. Leveraging the VCE in this manner is intended to significantly reduce the performance hit from recording, and for the first time makes a DVR-like function practical. AMD is entering a slightly more crowded field for video recording utilities than NVIDIA did last year – MSI’s Afterburner recently added similar functionality, VCE and all – but this nonetheless marks the first time that VCE-accelerated recording has been available in an official AMD utility.
    Also of note, along with recording functionality GEC is also adding Twitch broadcasting support. This leverages the same basic capture and encode paths as recording, but instead of going to disk gets uploaded to Twitch’s live video streaming service. As most Twitch clients are still software driven, the same benefits to recording apply to broadcasting here, allowing footage to be broadcasted without the significant overhead that normally comes with it.
    Going Hands On

    With the above in mind, we took some time to take a look at the GEC’s new Game DVR function. While it’s still in beta (and clearly so), AMD and the Raptr team have put together a solid first shot at a utility that should do for AMD’s users what Shadowplay did for NVIDIA’s users: making high performance/low overhead H.264 video recording accessible and practical for all users.
    Diving right in then, as Game DVR is primarily keyboard driven, what little there is to look at takes place in the GEC preferences pane. Here you can set the quality of the recording, the recording location, and the keyboard shortcuts.
    The GEC offers 3 default settings plus a customization setting, intended for 480p, 720p, and 1080p captures respectively. Recording maxes out at 50Mb/sec at 60fps, with a maximum resolution of 1080p. Meanwhile the replay buffer defaults to a surprisingly low 15 seconds, but can be increased to as long as 10 minutes. 16:10 and other non-16:9 users will find that Game DVR only supports 16:9 resolutions, so recording at 1920x1200 and other aspect ratios is not possible, though we’d like to see AMD take a page from NVIDIA’s playbook and ultimately enable these resolutions.
    Game DVR Default Settings
    High Quality: 1080p60 @ 50Mbps
    Medium Quality: 720p30 @ 30Mbps
    Low Quality: 480p30 @ 10Mbps
    The files generated by Game DVR are MP4 files for maximum compatibility. AMD is using High profile H.264 here, while audio is 192Kbps ABR AAC-LC. The actual bitrate of the recordings ends up being in flux, presumably because AMD’s encoder has to make a best guess at how many frames it will encode. The actual framerates of our recordings were usually reported at between 40fps and 50fps, despite the 290X’s ability to deliver well over 60fps, which is likely a factor in the inconsistent bitrates. When running decoupled – 60fps recording against an uncapped framerate above 60fps – it looks like Game DVR is often capturing frames at a fraction of the frame rate rather than skipping frames in an irregular manner. Recording with v-sync to lock framerates at 60fps produces far more consistent results (generally sustaining 60fps recorded) and is recommended in this case.
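    For a sense of what those bitrates mean on disk, here is the rough per-minute footprint of the High Quality preset. This is simple arithmetic on the figures above; actual files will come out smaller given the fluctuating encoder bitrates just described:

```python
# Rough on-disk footprint of Game DVR's top preset: 50Mbps H.264 video
# plus 192Kbps AAC-LC audio. Bitrates are nominal maximums, so this is
# an upper bound rather than a typical file size.
def recording_mb_per_minute(video_mbps: float, audio_kbps: float) -> float:
    video = video_mbps * 1e6 / 8 * 60 / 1e6  # MB of video per minute
    audio = audio_kbps * 1e3 / 8 * 60 / 1e6  # MB of audio per minute
    return video + audio

print(round(recording_mb_per_minute(50, 192), 1))  # 376.4 MB per minute
```

A full 10-minute replay buffer at this preset therefore works out to something on the order of 3.7GB.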
    Metro LL: Game DVR
    Moving on to image quality, at its highest settings Game DVR’s output is looking solid. At this bitrate the encoding is reasonably transparent, and even the driving rains of Crysis 3 are handled reasonably well. The only real knock, other than some slight softening, is that Crysis 3 exposes some of the pitfalls of the RGB to 4:2:0 colorspace conversion, giving some fine anti-aliased edges a purple/green tint.
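    That fringing is inherent to 4:2:0 subsampling: each 2x2 block of pixels shares a single chroma (color) sample, so single-pixel color detail on anti-aliased edges gets averaged away. A toy sketch of that averaging (real encoders filter YCbCr planes; this just shows the loss of per-pixel color):

```python
# Minimal illustration of 4:2:0 chroma subsampling: one averaged chroma
# sample per 2x2 pixel block, which blurs fine alternating-color edges.
def subsample_420(chroma_rows: list[list[int]]) -> list[list[float]]:
    out = []
    for y in range(0, len(chroma_rows), 2):
        row = []
        for x in range(0, len(chroma_rows[0]), 2):
            block = (chroma_rows[y][x] + chroma_rows[y][x + 1] +
                     chroma_rows[y + 1][x] + chroma_rows[y + 1][x + 1])
            row.append(block / 4)  # one chroma sample per 2x2 block
        out.append(row)
    return out

# A 1-pixel-wide color edge (chroma 0 next to 128) collapses into a blend:
print(subsample_420([[0, 128], [0, 128]]))  # [[64.0]]
```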
    As for performance, Game DVR’s performance hit is minimal as promised. Across the five games we benchmarked with Game DVR on, the performance hit was consistent, but also consistently less than 3%, a generally inconsequential reduction. This ability to maintain high framerates even with Game DVR enabled is essential to making the DVR/replay functionality viable, and to that end AMD has delivered the performance they need.
    But with the above said, as we mentioned earlier Game DVR is still considered to be in beta, and as we’ve found out in our testing this is an apt description. Of the 6 games we tested, only 4 worked correctly; on the other two we encountered different errors.
    First and foremost, in the case of Thief there’s an unusual gamma problem that results in the recording coming out far darker than it should be. This problem is only in the recording while the original image as displayed is correctly balanced as you’d expect. Meanwhile Game DVR compatibility is still spotty, with some games/applications not being detected and recorded. Bioshock: Infinite was one such game, and 3DMark (2013) also could not be recorded by Game DVR. Game DVR does not have a desktop recording capability – AMD/Raptr haven’t mentioned whether they’re working on it, but we’d certainly hope they are – so any game that Game DVR fails to detect is currently unrecordable.

    Thief With Game DVR Gamma Problems
    The developers for their part say they are aware of both issues and are working on it, but for the time being it reflects on the fact that Game DVR really is still in beta. Ultimately when Game DVR works it works well, however until compatibility is further improved there are going to be cases where it falls flat on its face. AMD and Raptr have built a solid base for Game DVR thus far, so this should just be a matter of working out the kinks.
    Finally, it’s worth pointing out that Game DVR is for the moment limited to Direct3D. OpenGL and Mantle cannot currently be captured; the former’s absence is largely expected, though the inability to record Mantle with AMD’s own utility is a bit ironic. Given the low level nature of Mantle and the resulting inability for utilities to hook into it like Direct3D, it will be interesting to see just what the solution ends up being.
    Briefly, as for Twitch, the broadcasting functionality works as expected. GEC’s Twitch capture support allows for broadcasting at up to 1080p60 at 3.5Mbps, which is in line with other Twitch software. The software includes the ability to do webcam and chat overlays, covering the basic overlay functions that most broadcasters should need, though there will still be a need for more intricate third-party utilities for advanced users.
    Closing Thoughts

    Wrapping things up, AMD tells us that they are expecting the Game DVR and Twitch features to exit beta later this summer. AMD and Raptr clearly have some polishing to do between now and then, but in the interim it’s already in a very usable and useful state.
    As it stands AMD seems to have the basics nailed down for Game DVR, but from a competitive standpoint I don’t think there’s any getting around the fact that NVIDIA’s lead in developing recording software means that they still hold the edge in compatibility and functionality. AMD isn’t publishing a roadmap or feature list like NVIDIA did for Shadowplay so we won’t spend too much time speculating on what might be, but I’d certainly hope to see AMD continue to close the feature gap with NVIDIA. AMD doesn’t need 1:1 feature parity, but after having been spoiled by NVIDIA’s desktop capture, that’s really the last feature they need to make Game DVR a well-rounded utility.


    More...

  8. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #4058

    Anandtech: ADATA Premier SP610 SSD (256GB & 512GB) Review: Say Hello to an SMI Controller

    The ADATA Premier SP610 is the first SSD with a new Silicon Motion SM2246EN controller to enter our test labs. This is a drive going after the lower cost markets, but performance is almost a complete unknown. Now we're ready to see what the SMI controller can do, so join us as we see if it can challenge Crucial's MX100 and Samsung's 840 EVO in the value market.

    More...

  9. RSS Bot FEED
    #4059

    Anandtech: Best Video Cards: June 2014

    We’re back once again with our monthly guide to video cards and video card industry recap, this time for June of 2014.
    June has been another quiet month from a product standpoint. There were no new product launches from either AMD or NVIDIA in the more exciting over-$100 market; however, NVIDIA did quietly launch the GT 740 and GT 730 for the sub-$100 market.
    NVIDIA New Video Cards (June 2014)
                            GT 740 (GDDR5)  GT 740 (DDR3)  GT 730 (GDDR5)  GT 730 (DDR3)  GT 730 (Fermi)
    CUDA Cores              384             384            384             384            96
    Texture Units           32              32             32              32             16
    ROPs                    16              16             8               8              4
    Core Clock              993MHz          993MHz         902MHz          902MHz         700MHz
    Boost Clock             N/A             N/A            N/A             N/A            N/A
    Memory Clock            5GHz GDDR5      1.8GHz DDR3    5GHz GDDR5      1.8GHz DDR3    1.8GHz DDR3
    Memory Bus Width        128-bit         128-bit        64-bit          64-bit         128-bit
    VRAM                    1GB             2GB            1GB             2GB            1GB
    FP64                    1/24            1/24           1/24            1/24           1/12
    TDP                     64W             64W            23W             38W            49W
    Transistor Count        1.3B            1.3B           N/A             N/A            N/A
    Manufacturing Process   TSMC 28nm       TSMC 28nm      TSMC 28nm       TSMC 28nm      TSMC 28nm
    Architecture            Kepler          Kepler         Kepler          Kepler         Fermi
    GPU                     GK107           GK107          GK208           GK208          GF117
    Launch Date             06/01/14        06/01/14       06/01/14        06/01/14       06/01/14
    Launch Price            ~$89            ~$89           ~$75            ~$75           ~$75
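The memory rows in the table largely determine how these cards stack up against one another, since peak bandwidth is simply the effective memory clock multiplied by the bus width. A quick sketch using the table's figures (the helper function is our own, not from the article):

```python
# Peak memory bandwidth = effective clock (GT/s) * bus width (bytes).
# Effective clocks and bus widths taken from the spec table above.

def bandwidth_gbps(effective_clock_ghz: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s."""
    return effective_clock_ghz * bus_width_bits / 8

cards = {
    "GT 740 (GDDR5)": (5.0, 128),   # 80.0 GB/s
    "GT 740 (DDR3)":  (1.8, 128),   # 28.8 GB/s
    "GT 730 (GDDR5)": (5.0, 64),    # 40.0 GB/s
    "GT 730 (DDR3)":  (1.8, 64),    # 14.4 GB/s
    "GT 730 (Fermi)": (1.8, 128),   # 28.8 GB/s
}

for name, (clock, width) in cards.items():
    print(f"{name}: {bandwidth_gbps(clock, width):.1f} GB/s")
```

The nearly 3x bandwidth gap between the GDDR5 and DDR3 variants of the GT 740 is why the GDDR5 version is the one to get, and why the 128-bit GF117 GT 730 is not as outclassed by the GK208 DDR3 card as its core count suggests.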
    The GeForce GT 740 is for all practical purposes a rebadge of the GTX 650 with a lower retail price of $89. NVIDIA’s official specifications call for 993MHz on the core clock versus GTX 650’s 1058MHz, but most cards are going to be above this (some exceptionally so). As this is a rebadge, we’re looking at another GK107 product. While NVIDIA already has the more efficient GM107 GPU, GK107 will still be with us for some time to come: GK107 is notably smaller than GM107, making it cheaper to fab and a better fit for these sub-$100 cards.
    Since it is GK107 based, power consumption follows the previous GTX 650: the official TDP is 64W, but an external PCIe power connector is still required. This means the GTX 750 retains its edge (and its higher price premium) thanks to its combination of lower power consumption and greater performance. Finally, we should point out that NVIDIA has launched two GT 740 variants, a GDDR5 version and a DDR3 version. The GDDR5 version will be vastly superior to the DDR3 version, though NVIDIA is making the usual memory size/speed tradeoff here, with the GDDR5 version shipping with a standard 1GB of VRAM versus a standard 2GB for the DDR3 version.
    Moving on, as is usually the case for NVIDIA’s lowest-price products, the GeForce GT 730 is an unfortunate mix of different GPUs. NVIDIA is offering GK208 and GF117 versions of the card – yes, Fermi is back once again – which means performance is going to depend on which version you get. All told NVIDIA is offering 3 versions of the card: GK208 with 1GB of GDDR5, GK208 with 2GB of DDR3, and GF117 with 1GB of DDR3. The GK208 GDDR5 version should be the strongest performer, while the GF117 card occupies a particularly odd place, pairing a small CUDA core count (96 Fermi cores) with a 128-bit memory bus that is twice as wide as GK208’s 64-bit bus.
    As it stands the sole GT 730 card on Newegg is the GF117 card, so for the moment there isn’t a choice. At $75 it looks to be unremarkable, and for all practical purposes we don’t expect to see much of this card in North America as these extreme budget cards are typically focused on APAC and other price-sensitive regions. But if nothing else it’s a stark reminder of just how much GPU vendors have to sacrifice to make these sub-$100 cards, which is why $100 to $150 is the performance sweet spot.
    As for AMD, although there are no new products to speak of we’ve seen the bulk of AMD’s products gradually decrease in price over the last month. Radeon R7 260X, R9 270X, R9 280X, and R9 290 have all fallen by $10-$20 over the last month, offering even better values for the money and further improving AMD’s competitive positioning in the process. Consequently we’ve made a few tweaks to our recommendations since last month, most notably replacing 250X with 260X at the $99 price point.
    Moving on, rather than repeat ourselves at every tier by recapping the current game bundles, we’ll go over it once at the beginning. AMD is continuing to offer their Never Settle Forever bundle, with their flagship titles being Thief and Murdered: Soul Suspect. The 260 and 270 series cards get 2 games, while the 280 and 290 series get 3. Meanwhile for their GTX 750 series cards NVIDIA is offering their $150 free-to-play bundle, while the GTX 760 and higher include Watch Dogs.
    Finally, speaking of Watch Dogs we’d like to throw out a Watch Dogs inspired question to our readers for this month’s guide. While Watch Dogs’ PC launch has been rough, one thing it has nonetheless made clear is that we’re on the cusp of another shift in VRAM requirements, due in large part to the fact that both the PS4 and XB1 have 8GB of RAM. While not all of that RAM is used for assets and other rendering data, a significant portion is, and as a result the consoles can dedicate more RAM to rendering than many video cards can today. So with that in mind we’d like to get your feedback on the importance of VRAM.
    Is 2GB/3GB still enough for a new GTX 770/R9 280X, or should these cards come with more memory? And how much more would you be willing to pay for double the memory? Keep in mind that it has been 2+ years since the base versions of these cards came out at 2GB and 3GB respectively, so we’re wondering where you guys stand since these cards are at a greater risk of being VRAM limited going forward.
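As a rough illustration of one part of that VRAM pressure, consider render target sizes alone. This is our own sketch, assuming plain 4-byte RGBA8 buffers with no MSAA; the figures are illustrative and not from the article:

```python
# Approximate VRAM consumed by a set of 32-bit (4 bytes/pixel) render
# targets at common resolutions. This is a loose lower bound: real games
# also keep textures, geometry, and intermediate buffers in VRAM.

def framebuffer_mb(width: int, height: int, targets: int = 4) -> float:
    """Approximate VRAM for `targets` RGBA8 buffers, in MiB."""
    return width * height * 4 * targets / (1024 ** 2)

for label, (w, h) in {"1080p": (1920, 1080),
                      "1440p": (2560, 1440),
                      "4K":    (3840, 2160)}.items():
    print(f"{label}: {framebuffer_mb(w, h):.0f} MB")
```

Framebuffers themselves remain a small slice of a 2GB card; it is the console-sized texture and asset pools that push total usage toward and past the 2GB/3GB marks.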
    Anyhow, market summaries behind us, let’s look at individual recommendations. As always, we’ve laid out our ideas of price/performance bands and recommendations in our table below, with our full explanations and alternative options to follow. In the case of the sub-$200 market it’s worth pointing out that there’s a video card for roughly every $10, so picking a good video card is as much about budgets as it is finding an especially strong card.
    June 2014 GPU Performance Guide
    Performance Band          Price Range   Recommendation
    1080p (Low)               $99-$149      AMD Radeon R7 260X
    1080p (Med)               $149-$179
    1080p (High)              $179-$279
    1440p (Med)               $279-$389
    1440p (High)              $389-$649
    1440p (Max)               $649+
    4K/Multi-Monitor (High)   $800+
    As a general recommendation for gaming, we suggest starting at $99. There are cards below this price, but the amount of performance you have to give up below $99 far outweighs the savings. Even then, performance gains will generally exceed the price increases up to $150 or so.
    Meanwhile for gamers looking for high quality 1080p gaming or better, that will start at around $199. Going above that will find cards that are good for 1440p, 4K, and multi-monitor, while going below that will find cards that will require some quality sacrifices to stay at 1080p.
    Finally, this guide is by its very nature weighted towards price/performance, based on the passionate feedback we've received from our readers. For these purposes we consider AMD and NVIDIA to be equal from a functionality and compatibility perspective, but it should be said that both parties have been building out their ecosystem in the past year, and this will only continue to grow as the two companies try to differentiate themselves. So if you need or want functionality beyond the core functionality a video card offers, it may be worthwhile to familiarize yourself with the NVIDIA and AMD ecosystems, including Gameworks, Eyefinity, Mantle, GeForce Experience, and more.
    Budget

  10. RSS Bot FEED
    #4060

    Anandtech: Ask the Experts - ARM Fellow Jem Davies Answers Your GPU Questions

    When we ran our Ask the Experts with ARM CPU guru Peter Greenhalgh some of you had GPU questions that went unanswered. A few weeks ago we set out to address the issue and ARM came back with Jem Davies to help. Jem is an ARM Fellow and VP of Technology in the Media Processing Division, and he's responsible for setting the GPU and video technology roadmaps for the company. Jem is also responsible for advanced product development as well as technical investigations of potential ARM acquisitions. Mr. Davies holds three patents in the fields of CPU and GPU design and got his bachelor's from the University of Cambridge.
    If you've got any questions about ARM's Mali GPUs (anything on the roadmap at this point), the evolution of OpenGL, GPU compute applications, video/display processors, GPU power/performance, heterogeneous compute or more feel free to ask away in the comments. Jem himself will be answering in the comments section once we get a critical mass of questions.

    More...
