
Thread: Anandtech News

1. RSS Bot FEED (#4501)

    Anandtech: Nokia Announces N1 Tablet: 7.9” & Powered By Android

    Since Nokia sold their mobile device division to Microsoft, there has been much contemplation over the company's future. As it turns out, that future is not all that different from the past, except perhaps that there's a lot more Android in it.
    As part of the Slush 2014 conference, Nokia has announced their next consumer gadget, a new tablet going by the name Nokia N1. Measuring 7.9” on the diagonal, powered by an Intel CPU, and running Android Lollipop, the N1 is Nokia's first tablet since selling their mobile device division.
    From a design perspective I’m not sure there’s anything to call the N1 other than an unabashed duplicate of the iPad Mini. Built out of a single piece of aluminum, the N1 incorporates the iPad’s 7.9” diagonal size and many of its stylings, including curves, button placements, and even the location of the headphone jack. Short of the iPad’s home button, at first glance you would be hard pressed to tell the N1 and iPad Mini apart.
    In any case, while in many ways Nokia is looking to learn from the masters here, the N1's design does have some elements that set it apart from (and ahead of) the iPad Mini and similar tablets. Nokia has been able to drive the tablet down to just 6.9mm thick and 318g – thinner and lighter than any iPad Mini. Meanwhile the display has been fully laminated, with Nokia eliminating any air gap between the display panel and the cover glass.
    In terms of technical specifications Nokia is tapping Intel's Atom Z3580 to power the device. The Z3580 includes a quad-core Silvermont processor running at 2.3GHz, along with an Imagination PowerVR G6430 GPU running at 533MHz. Paired with the processor is 2GB of LPDDR3-1600, Bluetooth 4.0, and 802.11ac WiFi. Meanwhile the 7.9” 4:3 IPS display is 2048x1536 pixels, once again identical to the Retina iPad Mini. Powering the device will be an 18.5Whr battery.
    Tablet Specification Comparison

    | | Nokia N1 | iPad Mini 3 | NVIDIA SHIELD Tablet |
    |---|---|---|---|
    | SoC | Intel Atom Z3580 | Apple A7 | Tegra K1 |
    | CPU | 4x Silvermont @ 2.3GHz | 2x Cyclone @ 1.3GHz | 4x Cortex A15r3 @ 2.2GHz |
    | GPU | PowerVR G6430 @ 533MHz | PowerVR G6430 | Kepler (1 SMX) |
    | RAM | 2GB LPDDR3-1600 | 1GB LPDDR3 | 2GB DDR3L-1866 |
    | NAND | 32GB (eMMC 5.0) | 16GB/64GB/128GB | 16GB/32GB + microSD |
    | Display | 7.9" 2048x1536 IPS LCD | 7.9" 2048x1536 IPS LCD | 8" 1920x1200 IPS LCD |
    | Dimensions | 200.7 x 138.6 x 6.9mm, 318g | 200 x 134.7 x 7.5mm, 331g | 221 x 126 x 9.2mm, 390g |
    | Cameras | 8MP rear, 5MP front | 5MP rear, 1.2MP front | 5MP rear, 5MP front |
    | Battery | 5300 mAh, 3.7V (18.5 Whr) | 23.8 Whr | 5197 mAh, 3.8V (19.75 Whr) |
    | OS | Android 5.0 + Nokia Z Launcher | iOS 8 | Android 4.4.2 |
    | Connectivity | 802.11a/b/g/n/ac + BT 4.0, USB Type-C (USB 2.0) | 802.11a/b/g/n + BT 4.0, Lightning (USB 2.0) | 2x2 802.11a/b/g/n + BT 4.0, USB 2.0, mini HDMI 1.4a |
    | Price | $249 (32GB) | $399 (16GB), $499 (64GB) | $299 (16GB/WiFi), $399 (32GB/LTE) |
    In one notable first, Nokia is utilizing the new USB Type-C connector for the tablet, though not entirely in the way we'd expect. With the Type-C connector serving as Nokia's analogue to Apple's Lightning, Nokia is using what they call a “Micro-USB 2.0 with a Type-C reversible connector” setup, which means that while this is a Type-C connector, it is only wired up for USB 2.0 and not USB 3.0. Given the design goals of the Type-C connector, we expect this will be the first of many mobile devices to make use of it in the coming months.
    Vision for the N1 will be provided by a pair of cameras on the front and back. The rear camera is an 8MP camera with autofocus and is capable of recording video at 1080p. Meanwhile the smaller front camera is 5MP and utilizes fixed focus. Finally, the tablet comes in a single storage configuration of 32GB, with Nokia’s NAND driven through eMMC 5.0.
    On the software side of matters, the N1 will run a semi-customized version of Android Lollipop. In this case Nokia has made only a handful of changes, primarily replacing the standard Android launcher with their newly released Z Launcher.
    Finally, while the N1 is a Nokia branded product, Nokia is calling special attention to their manufacturing arrangement with tablet partner Foxconn. As part of Nokia’s development strategy, the industrial design, IP, and Z Launcher software are being licensed to Foxconn for the production of the tablet. Foxconn in turn basically assumes all further responsibilities for the product, including business execution, engineering, and support, in many ways making this a Foxconn tablet with Nokia software and branding.
    No doubt due in part to this arrangement, the N1 will be launching first in China before coming to other regions. Nokia's official announcement states that it will launch in China in Q1'15 for the equivalent of $249 USD (before taxes), with no further markets announced at this time. Meanwhile BGR is reporting that it is expected to launch in China after Chinese New Year, with further releases in Russia and parts of Europe in the following months. There are currently no announced plans to bring the N1 to North America, though at this stage that by no means rules out a later release.
    Gallery: Nokia N1 Tablet




    More...

2. RSS Bot FEED (#4502)

    Anandtech: NVIDIA 344.75 Drivers - MFAA Arrives for Maxwell

    When NVIDIA initially briefed us on their new GM204 GPUs (aka Maxwell 2.0), there were several new features discussed. Most of these are now publicly available – Dynamic Super Resolution (DSR) was initially available only on GM204 GPUs but was later enabled on Kepler and even Fermi GPUs. Other features required changes to the hardware; for example, Voxel Global Illumination (VXGI) cannot be implemented on older architectures – at least not without a severe performance hit. A third feature, Multi-Frame Anti-Aliasing (MFAA), was also discussed and demonstrated, and with their latest 344.75 driver NVIDIA's MFAA is finally publicly available. Like VXGI, MFAA apparently requires new hardware to function, which means MFAA will only be enabled on GM204 GPUs – specifically, you need a GTX 980, GTX 970, GTX 980M, or GTX 970M GPU.
    Where things get a bit interesting is when we discuss exactly what it is that MFAA is doing. Anti-aliasing is used to smooth out edges of polygons in order to eliminate jaggies – the stair step edges that are created when a line is rendered on the pixel grid of a modern display. Many people despise jaggies, but there's one big issue with anti-aliasing: it can exact a pretty heavy performance penalty. Depending on your hardware and tolerance for jaggies, it's often necessary to balance the use of anti-aliasing with other aspects of image quality.
    It's worth noting at this point that there are many ways of accomplishing anti-aliasing. From an algorithmic point of view, the easiest approach is to simply render a scene at a higher resolution and then scale that image down with a high quality filter (e.g. bicubic), and in many ways this also produces the best overall result as everything on the screen gets anti-aliased. This method of anti-aliasing is called super-sampling (SSAA), and while it looks good it also tends to be the most costly in terms of performance. At the other end of the spectrum is multi-sample anti-aliasing (MSAA), which focuses only on polygon edges and samples multiple points for each pixel to determine coverage. This tends to be the least costly in terms of performance, but it doesn't always produce the best result unless you use at least four samples.
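    To make the super-sampling idea concrete, here is a minimal sketch of the general technique (not NVIDIA's implementation): render at twice the target resolution in each dimension, then average each 2x2 block down to one pixel. A box filter is used for brevity where a real renderer might use a higher quality kernel such as bicubic.

```python
import numpy as np

def ssaa_downscale(hi_res: np.ndarray, factor: int = 2) -> np.ndarray:
    """Downscale a (H*factor, W*factor, 3) image to (H, W, 3) by
    averaging each factor-by-factor block: the essence of SSAA."""
    h, w, c = hi_res.shape
    assert h % factor == 0 and w % factor == 0
    blocks = hi_res.reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))

# Toy "scene": a hard diagonal edge rendered at 2x the sample density.
hi = np.zeros((64, 64, 3))
ys, xs = np.mgrid[0:64, 0:64]
hi[ys > xs] = 1.0                  # aliased edge at high resolution
aa = ssaa_downscale(hi, factor=2)  # 32x32 output with a smoothed edge
print(aa.shape, aa.min(), aa.max())
```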
    While NVIDIA's DSR is in many ways just a new take on SSAA, MFAA is a modification to MSAA plus a bit of extra "secret sauce". The sauce in this case consists of one hardware change – the sample patterns are now programmable and stored in RAM instead of ROM so they can be varied between frames or even within a frame – and there's also some form of inter-frame filtering. ATI (now AMD) first started doing this a while back as well, though not necessarily in the same way as NVIDIA is doing MFAA. The idea is also similar to Temporal AA, which swaps sampling patterns every other frame in order to approximate 4xMSAA while only doing 2xMSAA, but now MFAA is doing some image synthesis over the frames to help detect and remove jaggies.
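    The alternating-pattern trick is easy to demonstrate. In this conceptual sketch (not NVIDIA's actual filter, which synthesizes across frames in hardware-specific ways), we compute MSAA-style edge coverage with two samples per pixel, alternate the sample offsets between even and odd frames, and average the two frames; for a static scene the result matches a single 4-sample frame exactly.

```python
import numpy as np

def coverage(offsets, size=32):
    """Fraction of sub-pixel samples falling under the edge y < x,
    i.e. MSAA-style coverage for a diagonal polygon edge."""
    img = np.zeros((size, size))
    ys, xs = np.mgrid[0:size, 0:size]
    for dx, dy in offsets:
        img += ((ys + dy) < (xs + dx)).astype(float)
    return img / len(offsets)

pattern_a = [(0.25, 0.25), (0.75, 0.75)]   # frame N: 2 samples/pixel
pattern_b = [(0.75, 0.25), (0.25, 0.75)]   # frame N+1: rotated pattern
msaa4     = pattern_a + pattern_b          # reference: 4 samples at once

temporal = 0.5 * (coverage(pattern_a) + coverage(pattern_b))
print(np.allclose(temporal, coverage(msaa4)))  # True for a static scene
```

    The catch, of course, is motion: once the scene changes between frames, a naive average smears, which is why MFAA adds inter-frame filtering rather than a plain blend.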
    NVIDIA's goal with MFAA is to provide quality that approaches the level of 4xMSAA with 2xMFAA (or 8xMSAA using 4xMFAA). Finally, let's also clear up how MFAA differs from TXAA. MFAA can be programmed to implement MSAA or TXAA, but it has additional flexibility as well, so effectively it's a superset of MSAA and TXAA. The key difference is that the sample patterns can now be altered rather than being hard-coded into ROM. NVIDIA has also posted a video overview of MFAA and anti-aliasing in general that's worth watching if you're not familiar with some of the concepts.
    Investigating MFAA Performance

    While the discussion of how MFAA works is interesting, ultimately it comes down to two questions: what is the performance hit of MFAA compared to MSAA, and what is the image quality comparison between MFAA and MSAA? To test this, we have used two games from NVIDIA's list of currently supported titles, Assassin's Creed: Unity and Crysis 3. The complete list of supported games (at the time of writing – more should be added over time) can be seen in the following table.
    I've actually got a separate Benchmarked piece with further investigations of Assassin's Creed: Unity (ACU), which I'll try to finish up later today. Suffice it to say that ACU is extremely demanding when it comes to GPU performance, particularly at the highest quality settings. For the testing of MFAA, I have used just a single GTX 970 and GTX 980 GPU, and I tested at 1080p with High and Ultra settings in ACU, while for Crysis 3 I used Very High textures with High and Very High machine spec. I then tested both with and without MFAA enabled (this is a new setting in the NVIDIA Control Panel). Here's how things look in terms of performance, with the minimum frame rates being an average of the lowest 1% of frames.
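    For reference, the "lowest 1%" minimum frame rate works as in this minimal sketch, shown here with hypothetical frame-rate data rather than our actual logs:

```python
import numpy as np

def one_percent_low(fps_samples):
    """Average of the slowest 1% of frames, reported as a frame rate."""
    fps = np.sort(np.asarray(fps_samples))   # ascending: slowest first
    n = max(1, int(len(fps) * 0.01))         # size of the bottom 1%
    return fps[:n].mean()

# Fake benchmark run: 5000 frames around 60 fps (illustration only).
fps = np.random.default_rng(0).normal(60, 8, 5000).clip(20, 90)
print(f"avg: {fps.mean():.1f} fps, 1% low: {one_percent_low(fps):.1f} fps")
```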
    [Note: I'm still running the ACU tests on the GTX 980; I'll add the charts as soon as I'm finished!]
    There does appear to be a small hit from enabling MFAA, though it's small enough that in most cases it wouldn't be noticeable without running benchmarks. Using 2xMFAA to deliver the equivalent of 4xMSAA looks to be viable, at least from the performance figures in our testing. The other aspect of course is image quality, so let's look at that as well.
    Gallery: Assassin's Creed: Unity MFAA vs. MSAA


    Gallery: Crysis 3 MFAA vs. MSAA


    Here's where things get a bit interesting. It's a bit hard to compare them in the games we're using, as you have to restart the game in order to enable/disable MFAA, so we can't just use a static shot (though the first image from ACU is basically a static location). I also feel like MFAA has a better chance to apply a temporal filter when the game isn't in motion, so I consider the "moving" sequences to be more important in some ways. Of course, when the game is in motion, seeing jaggies also becomes more difficult, but that's another topic… Regardless, there are clearly differences between MSAA and MFAA in terms of rendering.
    2xMSAA is easy to pick out as having the most jaggies still present, but picking a "best" image from the 2xMFAA, 4xMSAA, and 4xMFAA options would be difficult. Then toss in the rain from the sequence in Crysis 3 and it becomes very difficult to see what's going on. One cool aspect of MFAA is that similar to TXAA, it has the ability to help remove jaggies on transparent textures and not just on polygon edges. You can see this on some of the elements in the above images (e.g. look at the fence in the second set of images for Assassin's Creed), though it's not always perfect.
    Game Ready

    Wrapping things up, there are a few other items to note with the 344.75 drivers that we've used for testing. Along with enabling MFAA for GM204 graphics cards, the 344.75 drivers are NVIDIA's latest "Game Ready" WHQL release, with newly added support for Far Cry 4, Dragon Age: Inquisition, and The Crew. The first two of those were officially launched today, November 18, while The Crew is currently slated for release on December 2. We're working to get those games for additional testing as part of our Benchmarked series of articles, so stay tuned for that.
    Closing Thoughts

    Getting rid of jaggies is always a good thing in my book, and if you can do it better/faster then that's great. While there's definitely going to be some debate on whether 2xMFAA actually looks as good as 4xMSAA, for the most part I think it's close enough that it's the way to go unless you're already hitting sufficiently high frame rates. The performance aspect on the other hand is a clear win, at least in the games we tested. Assassin's Creed: Unity and Crysis 3 can absolutely punish even high-end GPUs, but other games can be equally demanding at higher resolutions. Perhaps the biggest issue isn't whether 2xMFAA can produce a good approximation of 4xMSAA with less "work"; it's simply a question of support for MFAA. The current list of 20 games is hardly all-encompassing, and several of the games on the list aren't even known to be all that demanding (I'd put about half of the games on the list into that category, including DiRT 3, DiRT Showdown, F1 2013, F1 2014, Just Cause 2, and Saints Row IV). What we need now is more titles where MFAA will work, and hopefully that will come sooner rather than later.
    One nice thing with MFAA is that it currently ties into the existing MSAA support in games, so there's no need for extra programming work on the part of the developers (unlike TXAA). Of course there are drawbacks with MSAA, specifically the fact that it doesn't work with deferred rendering techniques, which is why some games only support FXAA or SSAA for anti-aliasing. MFAA doesn't do anything for such titles, but that's no surprise. Considering the performance benefit of MFAA over MSAA and the fact that it can be enabled in the control panel and will automatically work on supported games, I don't see much problem in simply leaving it enabled for most users.


    More...

3. RSS Bot FEED (#4503)

    Anandtech: Intel's Xeon Phi: After Knights Landing Comes Knights Hill

    As SC’14 rolls on this week, Intel is taking part in the show’s events to deliver an update on the Xeon Phi lineup. As Intel already delivered a sizable update on Xeon Phi at ISC 2014 earlier this year, their SC’14 announcement is lighter fare, but we now know the name of the next generation of Xeon Phi.
    First and foremost, Intel has reiterated that the forthcoming Knights Landing generation of Xeon Phi is still on schedule for H2’15. Built on Intel’s 14nm process, Knights Landing should be a substantial upgrade to the Xeon Phi family by virtue of its integration of Silvermont x86 cores and a new stacked memory technology, Intel & Micron’s MCDRAM.

    Image Courtesy V3.co.uk
    Meanwhile Intel also used this occasion to announce the next generation of Xeon Phi. Dubbed Knights Hill, it will be built on Intel’s forthcoming 10nm process technology. As with their early announcement of Knights Landing, Intel isn’t discussing the part in any more detail, though they have mentioned that it will use the next generation of their Omni-Path interconnect architecture. Given the use of 10nm and the timing of the Knights Landing launch, this is likely a 2017+ part.
    Finally, Intel has also released a bit more information on Omni-Path. Previously announced alongside Knights Landing and newly renamed from Omni-Scale, Intel has announced that Omni-Path will offer link speeds of up to 100Gbps and its companion switch will offer 48 ports. As Intel has positioned Omni-Path as a competitor to Infiniband, at this point Intel is touting that they will be able to offer 56% lower switch fabric latency from the technology, as well as better density and scaling due to the larger number of ports available on the Omni-Path switch (48 vs. 36).

    Image Courtesy V3.co.uk

    More...

4. RSS Bot FEED (#4504)

    Anandtech: GIGABYTE Launches X99M-Gaming 5: Micro-ATX for Haswell-E

    One noted trend in computing is the desire for more compute in a smaller space. Imagine mixing high performance computing with small form factors, if such a thing were possible without excessive heat or extreme fan noise. Haswell-E is the peak of multithreaded throughput on the consumer platform, and the choices for small systems based on this CPU line are limited to custom system integrators or micro-ATX motherboards for home builds. At present there are three micro-ATX X99 motherboards on the market, and GIGABYTE becomes the fourth by entering the fray with the X99M-Gaming 5.
    The X99M-Gaming 5 matches the current generation gaming motherboards from GIGABYTE with its G1.Gaming styling, and unlike other motherboards in this segment uses the CPU PCIe lanes (with a 40 lane CPU) in an x16/x16/x8 arrangement rather than x16/x16 with an additional PCIe 2.0 x4 from the chipset. This means that users with single slot GPUs or PCIe co-processors can use three devices with direct bandwidth to the CPU. Users with an i7-5820K, with 28 PCIe 3.0 lanes, will have access to an x16/x8/x4 arrangement.
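    In other words, both arrangements exactly exhaust the CPU's available lane budget:

\[
16 + 16 + 8 = 40 \quad \text{(40-lane CPUs)}, \qquad 16 + 8 + 4 = 28 \quad \text{(i7-5820K)}
\]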
    GIGABYTE’s custom additions to the chipset begin with their upgraded Realtek ALC1150 audio solution with upgradeable OP-AMPs as well as gold plated audio connectors. Quad USB DAC-UP provides cleaner power to four of the rear panel USB ports, and the bundled rear IO shield takes advantage of GIGABYTE’s Ambient LED system that we covered at the X99 launch.
    Networking comes via a Qualcomm Atheros Killer network port and an M.2 x1 slot for a WiFi M.2 module, which can route the antenna to the back panel - users will have to purchase this separately, or GIGABYTE might make a WiFi edition if demand is sufficient. The X99M-Gaming 5 will support both M.2 x2 and SATA Express, with all ten SATA 6 Gbps ports from the chipset being used. The power delivery also continues GIGABYTE’s design with International Rectifier PowIRStage ICs and server level chokes.
    As this is an announcement from GIGABYTE’s headquarters in Taiwan, we are awaiting North American pricing from the US offices. The micro-ATX X99 motherboards currently on the market retail for $235, $250 and $260, so one might expect a similar target here.
    Source: GIGABYTE


    More...

5. RSS Bot FEED (#4505)

    Anandtech: The OnePlus One Review

    The OnePlus One has been one of the most hyped smartphones of 2014. There's really not much else to be said, as OnePlus' marketing has been quite noticeable amongst Android enthusiasts. The OnePlus One seems to come from nowhere, although there is a noticeable resemblance to the Oppo Find 7a, which is produced in the same factory. The OnePlus One is said to be a flagship killer, as its high-end specs come with a mid-range price. The 16GB version starts at 299 USD and the 64GB version starts at 349 USD. With a 5.5" 1080p display, Snapdragon 801 SoC, and plenty of other bits and pieces to go around, the specs are certainly enough to make it into a flagship phone. Of course, the real question is whether it really is one. After all, while specs provide the foundation, what makes a phone bad, good, or great has to do with the entire phone, not just the spec sheet. To find out how it does in our testing, read on for the full review.

    More...

6. RSS Bot FEED (#4506)

    Anandtech: EIZO Announces the IPS 26.5” FlexScan EV2730Q Monitor with 1920x1920 Resolution

    I will be honest, the nearest I think I have come to a square monitor is the 1024x768 panel I use as a tiny second screen on my main computer. When I first saw EIZO’s press release regarding this new 1920x1920 monitor it took me aback, imagining what it might feel like to actually use. The consumer monitor market is expanding to various screen sizes, usually following 16:9, 16:10 or 21:9 aspect ratios. But after a few minutes, I realized that non-standard monitor formats are most likely abundant in various industries, such as medical, where they are designed for a specific purpose and quality. So while a 1:1 monitor is something interesting to see in the consumer space, perhaps it might not be so new when considering industrial use scenarios. That all being said, it would be interesting to see this one in the flesh.
    1920x1920 means 3.7 megapixels, the same as 2560x1440. This compares to the regular desktop sizes of 1080p (2MP), 1200p (2.3MP), 3200x1800 (5.8MP) and 2160p (8.3MP), which indicates that if this monitor were to be used for gaming, performance would put it directly in the 1440p category. That being said, EIZO is not exactly targeting this monitor at gaming. The extra vertical space is better suited to writers, coders or CAD users who require many items on the screen at once, often side by side. As an editor, I often have an image on one side of my screen while writing my reviews on the other, so I can certainly see this marketing angle.
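    The pixel math checks out exactly; the two resolutions share the same pixel count:

\[
1920 \times 1920 = 3{,}686{,}400 = 2560 \times 1440 \approx 3.7\ \text{MP}
\]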
    The basic specification list gives the IPS-based EV2730Q as a 16.8 million color display with 178 degree viewing angles, a 300 nit brightness, a 1000:1 contrast ratio and 5ms gray-to-gray response time. Video inputs are via DisplayPort and a dual-link DVI-D, with a maximum refresh rate of 60 Hz. Two 1W speakers are built in, along with a 2-port USB 2.0 hub. 100mm VESA is supported with 344° of swivel and 35°/5° of tilt. Height is also adjustable.
    The button controls are on the front of the panel, and EIZO provides three preset profiles called sRGB, Movie and Paper along with two user customizable profiles. The Paper profile reduces the amount of blue in the image to prevent eyestrain while reading or coding against a white background. A feature called Auto EcoView can detect the ambient light level and adjust the screen’s brightness to reduce eyestrain and power consumption. It can also detect when a user leaves the desk, powering down the monitor and powering it back on when the user returns.
    We are contacting EIZO to find out which markets the EV2730Q will be sold in, as well as pricing. EIZO has announced that the monitor will be available from Q1 2015, though this will vary by country. With any luck, it will be on display at CES.
    Source: EIZO via TFTCentral
    Gallery: EIZO Announces the IPS 26.5” FlexScan EV2730Q Monitor with 1920x1920 Resolution



    More...

7. RSS Bot FEED (#4507)

    Anandtech: Encryption and Storage Performance in Android 5.0 Lollipop

    As alluded to in our Nexus 6 review, our normal storage performance benchmark was no longer giving valid results as of Android 5.0. While Androbench was not a perfect benchmark by any stretch of the imagination, it was a reasonably accurate test of basic storage performance. However, with the Nexus 5 on Android’s developer preview, we saw anywhere between a 2-10x improvement in Androbench’s storage performance results with no real basis in reality. It seems that this is because the benchmark relied on another function for timing, and that function’s behavior changed with Android 5.0.
    While we haven’t talked too much about AndEBench, it has a fully functional storage test that we can compare to our Androbench results. While we’re unsure of the 256K sequential and random read results, it seems that the results are equivalent to Androbench on Android 4.4 when a 1.7x scaling factor is applied. However, AndEBench results should be trustworthy as we saw no difference in results when updating devices from 4.4 to 5.0. In addition, the benchmark itself uses low level operations that shouldn’t be affected by updates to Android.
    | Nexus 5 Androbench | Android 4.4 KitKat | Android 5.0 Lollipop |
    |---|---|---|
    | Random Read | 10.06 MB/s | 27.70 MB/s |
    | Random Write | 0.75 MB/s | 13.09 MB/s |
    | Sequential Read | 76.29 MB/s | 182.78 MB/s |
    | Sequential Write | 15.00 MB/s | 47.10 MB/s |
    As you can see, the results show a degree of improvement that is well beyond what could realistically be accomplished with any sort of software optimizations. The results for the random write test are the most notable, with a result that suggests the performance is over 17x faster on Android Lollipop, which could not be the case. This required further investigation, and it's one of the reasons why we were hesitant to post any storage benchmarks in the Nexus 6 review.
    The other factor affecting the results of the benchmarks on the Nexus 6 specifically is Android Lollipop's Full Disk Encryption (FDE). Android has actually had this ability since Android 3.0 Honeycomb, but Lollipop is the first time it's being enabled by default on new devices. When FDE is enabled, all writes to disk have the information encrypted before it's committed, and all reads have the information decrypted before they're returned to the process. The key to decrypt is protected by the lockscreen password, which means that the data should be safe from anyone who takes possession of your device. However, unlike SSDs, which often have native encryption, eMMC has no such standard. In addition, most SoCs don't have the type of fixed-function blocks necessary to enable FDE with little to no performance penalty.
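    To get an intuition for the write-path cost, here is a minimal user-space sketch (assuming the third-party `cryptography` package; Android's actual FDE runs in the kernel via dm-crypt, so this only models the extra CPU work of encrypting before committing, not the real I/O path):

```python
import os, time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY, NONCE = os.urandom(32), os.urandom(16)   # AES-256 key + CTR nonce
DATA = os.urandom(64 * 1024 * 1024)           # 64 MB test buffer

def timed_write(path, buf, encrypt=False):
    """Write buf to path, optionally encrypting first (the FDE-style
    write path: data is encrypted before being committed). Returns MB/s."""
    start = time.perf_counter()
    if encrypt:
        enc = Cipher(algorithms.AES(KEY), modes.CTR(NONCE)).encryptor()
        buf = enc.update(buf) + enc.finalize()
    with open(path, "wb") as f:
        f.write(buf)
        f.flush()
        os.fsync(f.fileno())                  # force the data to disk
    return len(buf) / (time.perf_counter() - start) / 1e6

print("plain:     %.0f MB/s" % timed_write("plain.bin", DATA))
print("encrypted: %.0f MB/s" % timed_write("enc.bin", DATA, encrypt=True))
```

    On an SoC without dedicated crypto blocks, the encryption step sits squarely in the hot path like this, which is why the penalty scales with throughput rather than being a fixed cost.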
    As a result, we've observed significant performance penalties caused by the use of FDE on the Nexus 6. Motorola was kind enough to reach out and provide a build with FDE disabled so we could compare performance, and we've put the results in the graphs below. For reference, the Nexus 5 (Lollipop) numbers are run using Andebench, while the original values are read out from Androbench on Android 4.4.
    As you can see, there's a very significant performance penalty that comes with enabling FDE, with a 62.9% drop in random read performance, a 50.5% drop in random write performance, and a staggering 80.7% drop in sequential read performance. This has serious negative implications for device performance in any situation where applications are reading or writing to disk. Google's move to enable FDE by default also may not be very helpful for real world security without a change in user behaviour, as much of the security comes from the use of a passcode. This poses a problem, because users that don't set a passcode don't really benefit from FDE, but they're still subject to its penalties.
    When the Nexus 6 review was published, I commented that there were performance issues that weren't present on the Nexus 5 running Android Lollipop. Many users commented that the FDE may have been to blame. Like I mentioned earlier, Motorola provided us with a build of Android with FDE disabled. Unfortunately, I haven't noticed any improvements to many of the areas where there are significant frame rate issues such as Messenger and Calendar. I speculated in the Nexus 6 review that the performance issues may simply be the result of insufficient GPU performance or memory bandwidth to drive the QHD display.
    To me, the move to enable FDE by default in Lollipop seems like a reactionary move to combat the perception that Android is insecure or more prone to attack than iOS, even if that perception may not actually be accurate. While it's always good to improve the security of your platform, the current solution results in an unacceptable hit to performance. I hope Google will either reconsider their decision to enable FDE by default, or implement it in a way that doesn't have as significant of an impact on performance.


    More...

8. RSS Bot FEED (#4508)

    Anandtech: Benchmarked - Assassin's Creed: Unity

    The fifth game in the Assassin's Creed series (depending on how you want to count), Assassin's Creed: Unity is perhaps one of the most demanding games to ever hit the PC. It has also had a rather bumpy start, though some of that may be due to people expecting their system to handle the game without turning down a few of the graphics options. Can your GPU handle Unity, or does it need a few judicious upgrades first – or maybe just a couple more patches to the game? That's what we're here to find out.

    More...

9. RSS Bot FEED (#4509)

    Anandtech: Intel Haswell-EP Xeon 14 Core Review: E5-2695 V3 and E5-2697 V3

    Moving up the Xeon product stack, dies become larger and more complicated, and yields drop accordingly. Intel builds its 14-18 core Xeons from a top end design that weighs in at over six billion transistors, and we have had two of the 14C models in for review: the E5-2695 V3 (2.3 GHz, 3.3 GHz turbo) and E5-2697 V3 (2.6 GHz, 3.6 GHz turbo).

    More...

10. RSS Bot FEED (#4510)

    Anandtech: AMD Announces Upcoming Samsung FreeSync Displays

    Today at AMD's Future of Compute event in Singapore, AMD announced partnerships with several companies. One of the more noteworthy announcements is that Samsung will be making FreeSync enabled displays that should be available in March 2015. The displays consist of the 23.6" and 28" UD590, and there will be 23.6", 28", and 31.5" variants of the UE850. These are all UHD (4K) displays, and Samsung has stated their intention to support Adaptive-Sync (and thereby FreeSync) on all of their UHD displays in the future.
    FreeSync is AMD's alternative to NVIDIA's G-SYNC, with a few key differences. The biggest difference is that AMD proposed an extension to DisplayPort called Adaptive-Sync, and the VESA group accepted this extension as an amendment to the DisplayPort 1.2a specifications. Adaptive-Sync is thus an open standard that FreeSync leverages to enable variable refresh rates. As far as system requirements for FreeSync go, other than a display that supports DisplayPort Adaptive-Sync, you need a supported AMD GPU with a DisplayPort connection and a driver from AMD with FreeSync support.
    FreeSync is also royalty free, which means displays with FreeSync support should cost less than G-SYNC equivalents. There are other costs to creating a display that can support Adaptive-Sync, naturally, so we wouldn't expect price parity with existing LCDs in the near term. On the FreeSync FAQ, AMD notes that the manufacturing and validation requirements to support variable refresh rates without visual artifacts are higher than traditional LCDs, and thus cost-sensitive markets will likely hold off on adopting the standard for now. Over time, however, if Adaptive-Sync catches on then economies of scale come into play and we could see widespread adoption.
    Being an open standard does have its drawbacks. NVIDIA was able to partner up with companies to develop G-SYNC and deploy it about a year ago, and there are now 4K 60Hz G-SYNC displays (Acer's XB280HK) and a QHD 144Hz G-SYNC display (ASUS' ROG Swift PG278Q) that have been shipping for several months. In many ways G-SYNC showed the viability of adaptive refresh rates, but regardless of who gets credit the technology is quite exciting. If Adaptive-Sync does gain traction, as an open standard there's nothing to stop NVIDIA from supporting the technology and altering G-SYNC to work with Adaptive-Sync displays, but we'll have to wait and see on that front.
    Pricing for the Samsung displays has not been announced, though the existing UD590 models tend to cost around $600 for the 28" version. I'd expect the Adaptive-Sync enabled monitors to have at least a moderate price premium, but we'll see when they become available some time around March 2015.


    More...
