
Thread: Anandtech News

  1. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5061

    Anandtech: Intel Announces Thunderbolt 3 - Thunderbolt Meets USB (At Last)

    A lot has been happening in the world of external communication buses over the past year. In the last 12 months the USB consortium has announced both 10Gbps “Superspeed+” USB 3.1 and the new USB Type-C connector, USB’s compact, reversible connector that is designed to carry the standard for the next decade or more. Meanwhile the introduction of USB Alternate Mode functionality – the ability for USB Type-C to carry other protocols along with (or instead of) USB Superspeed data – has made USB more flexible than ever, with the VESA announcing that DisplayPort will support alternate mode to deliver DisplayPort video over USB Type-C ports and cabling.
    As a result, the introduction of USB Type-C has led to a definite and relatively rapid transition over to the new standard. With the USB consortium having designed a very capable and desirable physical layer for Type-C, and then alternate modes allowing anyone to use that physical layer, there have been a number of other technologies that have started aligning themselves with USB in order to take advantage of what is becoming an even more common platform for external buses.

    USB Type-C Connector On Apple's MacBook
    This brings us to today, with the announcement of Thunderbolt 3 from Intel. With the advancements occurring elsewhere in the world of external communication buses, Intel has not been sitting idly by and letting other standards surpass Thunderbolt. Rather they have been hard at work on the next generation of Thunderbolt, one that in the end seeks to combine the recent developments of the USB Type-C physical layer with all of the feature and performance advantages of Thunderbolt, culminating in Thunderbolt 3 and its incredibly fast 40Gbps bus.
    As a bit of background, the last time Intel updated the Thunderbolt specification was in 2013 for Thunderbolt 2, AKA Falcon Ridge. By aggregating together two of Thunderbolt 1’s 10Gbps channels, Intel was able to increase the available bandwidth over a single channel from 10Gbps to 20Gbps, at the cost of reducing the total number of channels from two full duplex channels to one full duplex channel. Of particular note here is that with Thunderbolt 2 the Thunderbolt signaling layer didn’t change – Thunderbolt 2 still operated at 10Gbps for each of its four underlying lanes – so in reality the Thunderbolt signaling layer has remained unchanged since it was introduced in 2011.
    Now at 4 years old, it’s time for the Thunderbolt signaling layer to change in order to support more bandwidth per cable than what Thunderbolt 1 and 2 could drive. To accomplish this upgrade in signaling layers, Intel has needed to change the physical layer as well. Thunderbolt 1 and 2 used the Apple-developed mini-DisplayPort interface for their cables, but with the VESA signaling that it may eventually replace the DisplayPort physical layer with USB Type-C, the DisplayPort physical layer’s days are likely numbered. Consequently mini-DisplayPort’s days are numbered as well, as consumer devices and the development of new standards both shift over to Type-C.
    This has put Thunderbolt in an interesting situation that has Thunderbolt moving forwards and backwards at the same time. As originally planned, Intel wanted to have Thunderbolt running through USB ports, only for the USB consortium to strike down that idea, resulting in the shift over to mini-DisplayPort. Now however with the waning of DisplayPort and the introduction of USB Type-C and its alternate modes, Thunderbolt is back to where Intel wanted to start all along, as a standard built on top of the common USB port.
    The end result of this upgrade of virtually every aspect of Thunderbolt is the latest generation of the technology, Thunderbolt 3, which seeks to combine the strengths and capabilities of the Thunderbolt platform with the strengths and capabilities of USB Type-C. This means bringing together Thunderbolt’s very high data speeds and the flexibility of its underlying PCI-Express protocol with the simple, robust design of the Type-C connector, all enabled via the USB alternate mode specification. Throw in Type-C’s associated power delivery standards, and you have what Intel believes to be the most powerful and capable external communications bus on the market.
    Thunderbolt Versions
    (Thunderbolt 1 | Thunderbolt 2 | Thunderbolt 3)
    Max Channel Bandwidth: 10Gbps (Full Duplex) | 20Gbps (Full Duplex) | 40Gbps (Full Duplex)
    Channels: 2 | 1 | 1
    Max Cable Bandwidth: 40Gbps | 40Gbps | 80Gbps
    DisplayPort: 1.1 | 1.2 | 1.2 x 2
    USB At Devices: Optional, Attached Controller | Optional, Attached Controller | Yes, Built Into Alpine Ridge Controller
    Power: 10W | 10W | 15W + Up To 100W USB PD (Optional)
    Passive Cable Option: No | No | Yes (20Gbps)
    Interface Port: Mini DisplayPort | Mini DisplayPort | USB Type-C
    Along with the change to using the USB Type-C port, the big news here is that Thunderbolt 3 is doubling the amount of bandwidth available to Thunderbolt devices. With Thunderbolt 2 topping out at a single full duplex 20Gbps channel, Thunderbolt 3 is increasing that to 40Gbps. Compared to DisplayPort 1.3 and USB 3.1, this is 1.5 to 4 times the available bandwidth, with DisplayPort 1.3 topping out at 25.9Gbps (after overhead), and USB 3.1 topping out at 10Gbps per channel (with Type-C carrying 2 such channels).
    From a signaling standpoint, Thunderbolt 3 is being implemented as a USB alternate mode, taking over the 4 lanes of high-speed data that Type-C offers. This is the same number of lanes as Thunderbolt 1 and 2 used, so the bandwidth increase comes as a result of doubling the amount of data carried per lane from 10Gbps (half duplex) to 20Gbps, which when aggregated at either end is what gives us 40Gbps full duplex.
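    The lane math above works out as a quick back-of-the-envelope check; the helper function and names here are just illustrative, using the figures from the article:

    ```python
    # Sanity-check of the Thunderbolt lane aggregation described above.
    # Figures come from the article; the helper name is mine.

    def cable_bandwidth_gbps(lanes: int, per_lane_gbps: float) -> float:
        """Total raw bandwidth across all lanes of the cable."""
        return lanes * per_lane_gbps

    # Thunderbolt 1/2: 4 lanes at 10Gbps half duplex -> 40Gbps total raw,
    # i.e. 20Gbps in each direction (full duplex) once aggregated.
    tb2_total = cable_bandwidth_gbps(lanes=4, per_lane_gbps=10)
    tb2_full_duplex = tb2_total / 2
    assert tb2_full_duplex == 20

    # Thunderbolt 3: the same 4 Type-C lanes, but 20Gbps per lane -> 80Gbps
    # total raw, or 40Gbps full duplex.
    tb3_total = cable_bandwidth_gbps(lanes=4, per_lane_gbps=20)
    tb3_full_duplex = tb3_total / 2
    assert tb3_full_duplex == 40
    ```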
    To handle the new Type-C interface and the increased data rates, Intel is rolling out a new type of active cable for the new Thunderbolt standard. Like previous generation cables, the new cable includes significant active electronics at both ends, allowing Intel to achieve greater bandwidth than what passive cabling would allow, at the cost of increased cable prices. The new cable retains the distinctive Thunderbolt logo and is a bit larger than a passive cable at both ends to accommodate the electronics, but other than the change to the Type-C port it is similar in concept to Thunderbolt 1 and 2’s active cables.
    Meanwhile, because it’s built on Type-C, Thunderbolt 3 will also introduce support for passive cabling using the now-standard Type-C cable. When using a Type-C cable, Thunderbolt drops down to 20Gbps full duplex – the amount of bandwidth available in a normal Type-C cable today – sacrificing some bandwidth for cost. With Type-C cables expected to eventually cost only a few dollars compared to thirty dollars or more for traditional Thunderbolt cables, this makes Thunderbolt far more palatable as far as cable costs go, not to mention allowing cables to be more robust and more easily replaced.
    Driving these new cables in turn will be Intel’s new Alpine Ridge controller for Thunderbolt 3. The latest generation of the Ridge family, this controller steps up in capabilities to match Thunderbolt 3’s 40Gbps speeds. Alpine Ridge also integrates its own USB 3.1 (Superspeed+) host controller, which in turn serves dual purposes. When serving as a host controller for a USB Type-C port, this allows Alpine Ridge to directly drive USB 3.1 devices if they’re plugged into an Alpine Ridge-backed Type-C port (similar to how DisplayPort works today with Thunderbolt ports). And when serving as a device controller (e.g. in a Thunderbolt monitor), this allows devices to utilize and/or offer USB 3.1 ports on their end.
    The addition of USB host controller functionality further increases the number of protocols that Thunderbolt 3 carries in one way or another. Along with PCI-Express and DisplayPort, the use of Alpine Ridge ensures that USB 3.1 is also available, as it’s now a built-in function of the controller. The only notable difference here is that while DisplayPort video and PCI-Express data are encapsulated in the Thunderbolt data stream, USB 3.1 is implemented on top of the PCI-Express connection that Thunderbolt already carries rather than being encapsulated separately.
    Speaking of encapsulation, Thunderbolt 3 also includes an update to the DisplayPort side of matters, though likely not what everyone has been expecting. With the increase in bandwidth, Thunderbolt 3 is able to carry twice as much video data as before. However Intel is not implementing the latest version of DisplayPort – DisplayPort 1.3 – into the Thunderbolt 3 standard. Instead they are doubling up on DisplayPort 1.2, expanding the number of equivalent DisplayPort lanes carried from 4 to 8, essentially allowing one Thunderbolt 3 cable to carry 2 full DisplayPort 1.2 connections. The end result is that Thunderbolt 3 will not be able to drive the kind of next-generation displays DisplayPort 1.3 is geared towards – things like 8K displays and 5K single-tile displays – but it will be able to drive anything 1 or 2 DisplayPort 1.2 connections can drive today, including multiple 4K@60Hz monitors or 5K multi-tile displays.
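    A rough bandwidth estimate shows why one DP 1.2 link handles 4K@60Hz while 5K@60Hz needs the doubled-up lanes. The constants are standard DisplayPort 1.2/HBR2 figures; the ~10% blanking overhead is a simplifying assumption on my part:

    ```python
    # Approximate video bandwidth vs. DisplayPort 1.2 link capacity.
    # DP 1.2 HBR2: 4 lanes x 5.4 Gbps, with 8b/10b encoding (80% efficient).
    DP12_LANE_GBPS = 5.4
    LANES = 4
    ENCODING_EFFICIENCY = 0.8

    link_gbps = DP12_LANE_GBPS * LANES * ENCODING_EFFICIENCY  # 17.28 Gbps usable

    def video_gbps(width, height, hz, bits_per_pixel=24, blanking=1.1):
        """Uncompressed video bandwidth with a rough ~10% blanking allowance."""
        return width * height * hz * bits_per_pixel * blanking / 1e9

    four_k = video_gbps(3840, 2160, 60)  # ~13.1 Gbps -> fits one DP 1.2 link
    five_k = video_gbps(5120, 2880, 60)  # ~23.4 Gbps -> needs two DP 1.2 links
    assert four_k < link_gbps < five_k
    ```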
    Meanwhile gamers will be happy to hear that Intel is finally moving forward on external graphics via Thunderbolt, and after more than a few false starts, external GPUs now have the company’s blessing and support. While Thunderbolt has in theory always been capable of supporting external graphics (it’s just a PCIe bus), the biggest hold-up has always been handling what to do about GPU hot-plugging and the so-called “surprise removal” scenario. Intel tells us that they have since solved that problem, and are now able to move forward with external graphics. The company is initially partnering with AMD on this endeavor – though nothing excludes NVIDIA in the long run – with concepts being floated for both a full power external Thunderbolt card chassis, and a smaller “graphics dock” which contains a smaller, cooler (but still more powerful than an iGPU) mobile discrete GPU.
    Another concept Intel has been floating around that will finally be getting some traction with Thunderbolt 3 is Thunderbolt networking. By emulating a 10GigE Ethernet connection, 2 computers can be networked via Thunderbolt cable, and with 10GigE still virtually unseen outside of servers and high-end workstations, this is a somewhat more practical solution for faster-than-GigE networking. Thunderbolt networking has been around since 2013 in OS X, and in 2014 Intel demonstrated the technology working on PCs. However, since it was a feature added to Thunderbolt 2 after its launch, the number of PCs with the necessary drivers for Thunderbolt networking has been quite low. With Thunderbolt 3 this is now a standard feature at launch, so system support for it should be greater.
    Moving on, by building Thunderbolt 3 on top of USB Type-C, Intel is also inheriting Type-C power delivery capabilities, which they will be making ample use of. With Type-C’s Power Delivery 2.0 specification allowing for chargers that can supply up to 100W of power, it will be possible (though optional) to use Thunderbolt 3 to deliver that same power, allowing for uses such as having a Thunderbolt dock or display charge a laptop over the single Thunderbolt cable (the one thing Apple’s Thunderbolt display can’t do today with Thunderbolt 2). That said, the USB power delivery standard is distinct from Thunderbolt’s bus power standard, so this doesn’t necessarily mean that all Thunderbolt hosts can provide 100W of power, or even any USB charging power for that matter. For standard bus-powered Thunderbolt devices, the Thunderbolt connection will now carry 15W of power, up from 10W for Thunderbolt 2.
    Finally, with the change in cabling, Intel is also clarifying how Thunderbolt backwards compatibility will work. Thunderbolt 3 to Thunderbolt adapters will be developed, which in turn will allow Thunderbolt 1/2 and Thunderbolt 3 hosts and devices to interoperate, so that older devices can work on newer hosts, and newer devices can work on older hosts. We’re not yet clear whether the adapter will provide a simple bridge between the cable types (with the necessary signal regeneration), or whether there will be a complete Alpine Ridge controller in the adapter.
    Wrapping things up, Intel tells us that they expect to see Thunderbolt 3 products begin shipping by the end of the year, with a larger volume of products in 2016. Given this timing we’re almost certain to see Thunderbolt shipping alongside Skylake products, though Intel is making it clear that at a technical level Skylake and Thunderbolt 3 are not interconnected, and that it would be possible to pair Alpine Ridge Thunderbolt 3 controllers with other devices, be it Broadwell, Haswell-E, or other products.
    As for whether Intel will see more success with Thunderbolt 3 than the previous versions of Thunderbolt, this remains to be seen. The switch to a Type-C port definitely makes it a bit easier for OEMs to stomach – DisplayPort on laptops has been fairly rare outside of Apple – so now OEMs can integrate Thunderbolt without having to install a port they don’t see much value in. On the other hand this is still an external controller of additional cost, which incurs power, space, and cooling considerations, all of which would add to the cost of a desktop/laptop as opposed to pure USB 3.1. As was the case with Thunderbolt 1 and 2, Intel’s greatest argument in favor of the technology is docking, as the use of PCI-Express and now the addition of USB Power Delivery gives Thunderbolt a degree of flexibility and performance that USB Type-C alone doesn’t match.


    More...

  2. RSS Bot FEED
    #5062

    Anandtech: ASUS Releases Android Lollipop For The ZenFone 4, 5, and 6

    Today ASUS is rolling out an update to Android Lollipop for their first generation ZenFones. The update covers most of the original ZenFone devices offered by ASUS.
    While the Qualcomm based ZenFone 5 A500KL has been running Lollipop since April, this update brings the Intel powered A500CG and A501CG versions up to date as well. The ZenFone 4 A400CG is also receiving its Lollipop update. The updates for all these devices are being sent out over the air, but users can download them directly from ASUS to flash themselves if they don't want to wait.
    In addition to the ZenFone 4 and ZenFone 5, the ZenFone 6 A600CG and A601CG are also being updated to Lollipop. The firmware updates for these two phones are not yet being sent out as OTA updates, but the files are already available to download from ASUS.
    The fact that ASUS is updating their original ZenFones to Lollipop sends a good message to potential ZenFone 2 buyers who were worried about ASUS's commitment to keeping their devices updated. Hopefully ASUS can continue to keep their older devices updated as newer generations of ZenFones are introduced.
    Source: ASUS via GSMArena


    More...

  3. RSS Bot FEED
    #5063

    Anandtech: AMD Launches Carrizo: The Laptop Leap of Efficiency and Architecture Updates

    Perform a small test for me. Close your eyes, and spend 15 seconds considering the state of the laptop market and what devices interest you, are available, or are on the horizon. Done? Let me hazard a guess – Apple’s offerings loomed large over $800, with $1500+ gaming laptops on the periphery. At $300 we’re more in tablet-first territory with a mix of cheap clamshell rubbish. In the middle is a $400-$700 assortment of good but not always great 2-in-1s (like the Surface) and clamshells (like the ASUS UX305), divided mostly on price and features, and 95% of them contain Intel. Today’s launch of Carrizo by AMD is hoping to change that perception, particularly at $400-$700 and at 15W.

    More...

  4. RSS Bot FEED
    #5064

    Anandtech: Microsoft Confirms You Can Clean Install Windows 10 After Upgrading

    This is one question that a lot of people have been asking, and Gabe Aul, the head of the Windows Insider program, finally answered it on Twitter today. Credit goes to Brad Sams at Neowin for catching this since it was a reply to another tweet.
    Gabe states:
    Once you upgrade W10 w/ the free upgrade offer you will able to clean reinstall Windows 10 on same device any time
    There’s not a lot else to be said, but he also noted that they are working on some more information to make this clearer. What it does mean is that in order to get the free upgrade, you need to upgrade from an eligible device, and once done, you can then blow that away and do a clean install. I guess we’re not sure yet if that means you can do a reset using the Windows Recovery tools, or if you can actually start with a new hard drive or ISO in order to do the clean install.
    Hopefully we’ll get the final bit of clarification on this soon, but since this is one of the most asked questions that I have seen, I felt it was worth letting everyone know.
    Source: Gabe Aul via Neowin


    More...

  5. RSS Bot FEED
    #5065

    Anandtech: AMD Confirms June 16th Date for Upcoming GPU Announcement

    After an earlier vague deadline of Q2’15 and more than a few teases in the interim, AMD has finally announced when they’ll be revealing their forthcoming high-end video card.
    AMD will be hosting an event on June 16th at 9am PST to release the details on the card, in a presentation dubbed “AMD Presents: The New Era of PC Gaming.” The presentation will be taking place at the Belasco Theater in Los Angeles, CA during E3 week, and happens to be where the AMD-sponsored PC Gaming Show also takes place that evening. This event is open to the public, or can be viewed via webcast.
    I would quickly note here that at no point does AMD specifically call this a launch. And for that matter, the last time they held a public event like this – the Radeon 200/Hawaii reveal – Hawaii didn’t launch until a month later. In this case AMD has already committed to a June launch for the card, but at the moment we’re not expecting to see the card go on sale on the 16th.



    More...

  6. RSS Bot FEED
    #5066

    Anandtech: Gigabyte Updates Gaming Laptops With Broadwell And Details New Aorus Model

    With today’s launch of the quad-core Broadwell laptop parts, there are going to be a lot of devices making the jump over to the new CPU. Pretty much the entire lineup from Gigabyte is getting some attention today, with plenty of news in their P Series lineup.
    There are a lot of products from Gigabyte moving to the Core i7-5700HQ processor, which is a 2.7-3.5 GHz quad-core 47 watt part. Here is a table of the models for reference.
    Gigabyte P Series Notebooks
    (P37 | P35 | P55)
    Processor: Intel Core i7-5700HQ (2.7-3.5 GHz, 14nm, 47W TDP) on all models
    X Series GPU: NVIDIA GeForce GTX 980M | NVIDIA GeForce GTX 980M | N/A
    W Series GPU: NVIDIA GeForce GTX 970M on all models
    K Series GPU: NVIDIA GeForce GTX 965M on all models
    Memory: 4-8 GB DDR3 (Max 16 GB) | 8 GB (Max 16 GB) | 8 GB (Max 16 GB)
    Display: 17.3" 1920x1080 IPS | 15.6" 1920x1080 IPS or 2880x1620 IPS | 15.6" 1920x1080 IPS
    Dimensions: 417 x 287 x 22.5 mm (16.4 x 11.3 x 0.9 inches) | 385 x 270 x 20.9 mm (15.2 x 10.6 x 0.82 inches) | 380 x 269 x 26.8-34 mm (15 x 10.6 x 1.05-1.34 inches)
    Weight: 2.7-2.8 kg (5.95-6.17 lbs) | 2.2-2.3 kg (4.85-5.07 lbs) | 2.4-2.5 kg (5.3-5.5 lbs)
    Gigabyte also has their Aorus line of gaming laptops, and they have a couple of updates here as well. The X7 line is one of the first notebooks available with NVIDIA’s G-SYNC technology. The X7 Pro-SYNC is the 17.3” model which features a 1080p IPS G-SYNC display, and is powered by the Haswell Core i7-4870HQ processor and dual GTX970M GPUs in SLI.
    New to the lineup is the X5 model, a 15.6” gaming laptop with GTX 965M in SLI and the Core i7-5700HQ Broadwell processor. It is just 0.9 inches (22.9 mm) thick and weighs 5.5 lbs, which is fairly thin and light considering it has two GPUs inside. It comes with 8 GB of DDR3L-1866 memory but has four slots and can handle up to 32 GB of RAM. The display is a 2880x1620 IPS panel and also supports G-SYNC.
    AORUS X Series Notebooks
    (X7 Pro-SYNC | X5)
    Processor: Intel Core i7-4870HQ (2.5-3.7 GHz, 22nm, 47W TDP) | Intel Core i7-5700HQ (2.7-3.5 GHz, 14nm, 47W TDP)
    GPU: NVIDIA GeForce GTX 970M SLI | NVIDIA GeForce GTX 965M SLI
    Memory: 8GB DDR3L (32 GB Max) on both models
    Display: 17.3" 1920x1080 IPS with G-SYNC | 15.6" 2880x1620 IPS with G-SYNC
    Dimensions: 428 x 305 x 22.9 mm (16.9 x 12.0 x 0.9 inches) | 390 x 272 x 22.9 mm (15.4 x 10.7 x 0.9 inches)
    Weight: 3 kg (6.6 lbs) | 2.5 kg (5.51 lbs)
    Gigabyte is launching all of this immediately, and you can currently find the notebooks on their sites at www.gigabyte.com and www.aorus.com.


    More...

  7. RSS Bot FEED
    #5067

    Anandtech: AMD Demonstrates FreeSync-over-HDMI Concept Hardware at Computex 2015

    While AMD wasn’t the first GPU vendor to implement a system for variable refresh, the company has made up for lost time with zeal. Since demonstrating their FreeSync proof-of-concept laptop demo back at CES 2014, AMD has been able to get the necessary signaling and refresh technology implemented into the DisplayPort standard as an optional annex of 1.2a, more commonly known as DisplayPort Adaptive-Sync. With DPAS implemented into monitors, AMD was able to roll out their FreeSync implementation of variable refresh back in March of this year, when the first DPAS-enabled monitors shipped.
    Since then AMD has been relatively quiet (no doubt gearing up for their big GPU launch). However as it turns out they have been hard at work at expanding FreeSync past the realm of DisplayPort monitors, and they are for the first time showing off that technology at their suite at Computex 2015.
    AMD's demonstration and the big revelation from the company is that they now have a prototype implementation of FreeSync-over-HDMI up and running. Powered by an R9 200 series card, AMD's demonstration involved running their windmill FreeSync demo on the prototype FreeSync-enabled HDMI monitor to showcase the viability of FreeSync-over-HDMI.
    We wasted no time in tracking down AMD's Robert Hallock for more details, and while AMD isn't going super deep at this time – it’s a proof-of-concept prototype, after all – we do have a basic understanding of what they are up to.
    The monitor in question is running a Realtek TCON, with AMD and Realtek developing the prototype together. The TCON itself is by all indications a bog-standard TCON (i.e. not custom hardware), with the only difference being that Realtek has developed a custom firmware for it to support variable refresh operation and the FreeSync-over-HDMI technology.
    On the signaling side, AMD tells us that they're running a custom protocol over HDMI 1.4a. As one might expect, the necessary functionality doesn't currently exist in HDMI, so AMD went and added the necessary functionality to their driver and the Realtek firmware in order for both ends to operate. Compared to FreeSync-over-DisplayPort all other operation is the same from what we're told, so the end result is the same kind of variable refresh support currently found in DPAS-enabled monitors, except now over HDMI instead of DisplayPort.
    The goal here from AMD is very similar to what they did with DisplayPort last year. The company wants to introduce variable refresh support into the HDMI standard, making it a standardized (and common) feature of HDMI. The payoff for AMD and their users would be that getting variable refresh support into HDMI would allow FreeSync to potentially work with many more monitors, as DisplayPort is not found in all monitors whereas HDMI is. This is especially the case in cheaper monitors, which of course make up the bulk of monitor sales.
    Because AMD has so far only been working with Realtek on this, it is unknown whether other TCON manufacturers would have issues writing similar firmware. However, if variable refresh were implemented into the HDMI standard, there's no reason to believe it wouldn't eventually be a solved issue. Meanwhile the fact that Realtek is doing this via custom firmware on a standard TCON does technically open up the possibility of flashing existing monitors to enable such functionality, but given that this hasn't happened for DisplayPort monitors, it's unlikely here as well.
    In any case, AMD isn't saying too much else at this time. With a proof-of-concept up and running, AMD can now begin attempting to influence the necessary parties to add the feature to HDMI, and to get customers demanding the technology.
    First 30Hz Minimum Refresh Rate DisplayPort Adaptive-Sync Monitor

    Along with the FreeSync-over-HDMI demo, AMD also had one bit of FreeSync news at Computex. As regular readers are likely aware, all of the current DPAS monitors have a minimum refresh rate over 30Hz – this being a monitor limitation, not a DPAS/FreeSync limitation – which unfortunately for AMD’s FreeSync efforts is counterproductive to their goals since you lose the bulk of the benefits of FreeSync when framerates fall below the minimum refresh rate.
    To that end, monitor vendor Nixeus has announced the first 30Hz minimum refresh rate DPAS monitor, the NX-VUE24. The NX-VUE24 is a 24”, 1080p TN display that supports variable refresh rates from 30Hz up to 144Hz. A 1080p TN monitor is admittedly not likely to set the world on fire at this point, but it’s still an important milestone in getting 30Hz-minimum DPAS displays out into the market. And at 1080p and just 24”, this will likely be the most affordable variable refresh monitor yet.
    Nixeus has not yet announced a release date or price for the monitor, but they tell us it should be coming soon.
    FreeSync’s Teething Issues

    Finally, while we had a chance to talk to AMD about FreeSync, we asked them about some of the barbs NVIDIA has been flinging their way lately, particularly on the subject matter of minimum refresh rates and pixel overdrive. Though NVIDIA is not above poking at AMD when it suits them, these were still important points that we wanted to hear the answer to.
    On the matter of pixel overdrive, AMD has clarified that pixel overdrive can work with FreeSync, but it is up to the monitor manufacturers. DPAS/FreeSync doesn’t offer any control over overdrive to the video card, so whether any overdrive happens is up to the monitor manufacturer, who would need to implement it in their scaler. Ultimately pixel overdrive is not a required part of the DPAS standard, so its presence is going to be on a monitor-by-monitor basis, and the quality of any overdrive solution is up to the vendor. As with NVIDIA’s solution, this all boils down to doing frame delivery prediction and adjusting the overdrive values accordingly, with DPAS monitor manufacturers needing to do that in their scaler as part of their per-frame operations (just as it works today on fixed refresh rate monitors).
    Meanwhile on the subject of minimum refresh rates, AMD’s comments were a bit less concrete, but also a bit more optimistic. How minimum refresh periods are handled is ultimately up to FreeSync; there needs to be a refresh within the maximum pixel decay period in order to maintain the display, but it’s up to AMD how they want to do those refreshes. For 144Hz monitors this can mean just running a quick refresh taking about 7ms, whereas for 60Hz monitors the subject is a bit trickier since a refresh takes nearly 17ms.
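    The timing arithmetic behind this is straightforward; a small sketch (the function names are mine, illustrating the figures mentioned above):

    ```python
    # Refresh-interval arithmetic behind the minimum-refresh-rate discussion:
    # a full refresh at 144Hz takes ~7ms, while at 60Hz it takes ~16.7ms.

    def refresh_interval_ms(hz: float) -> float:
        """Time for one refresh cycle at the given refresh rate."""
        return 1000.0 / hz

    assert round(refresh_interval_ms(144), 1) == 6.9
    assert round(refresh_interval_ms(60), 1) == 16.7

    # A frame arriving slower than the panel's minimum refresh rate must be
    # padded with driver-inserted refreshes to respect pixel decay limits.
    def needs_forced_refresh(frame_interval_ms: float, min_refresh_hz: float) -> bool:
        return frame_interval_ms > refresh_interval_ms(min_refresh_hz)

    assert needs_forced_refresh(40.0, 30)      # 25 fps content on a 30Hz-min panel
    assert not needs_forced_refresh(20.0, 30)  # 50 fps content is fine
    ```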
    In any case the message from AMD has been that they have the ability to change their minimum refresh behavior as they see fit or as they develop better ways to handle the situation. However absent any concrete details at this time, all we can do is wait and see if and when AMD makes any changes. AMD clearly isn’t wanting to commit to anything right now, at least not until they have something ready to deploy.


    More...

  8. RSS Bot FEED
    #5068

    Anandtech: Xeon E3-1200 v4 Launch: Only With Integrated GPU

    Intel's server CPU portfolio just got more diversified and complex with the launch of the Intel Xeon E3-1200 v4 at Computex 2015. It is basically the same chip as the Core i7 "Broadwell" desktop that Ian reviewed yesterday: inside we find four Broadwell cores and the new Crystal Well based Iris Pro GPU, baked with Intel's state-of-the-art 14 nm process. The Xeon enables ECC RAM support, VT-d and PCI passthrough, something that the desktop chips obviously lack.
    But the current line-up of the Xeon E3-1200 v4 based upon Broadwell is not a simple replacement for the current Xeon E3-1200 v3 "Haswell", which we tested a few months ago. Traditionally, the Xeon E3 was about either workstations or all kinds of low-end servers.
    It looks like the current Xeon E3-1200 v4 is somewhat of a niche product. Besides being a chip for workstations with moderate graphics power, Intel clearly positions the chip as a video transcoding and VDI platform. It looks like – once again – Intel is delivering what AMD promised a long time ago. AMD's Berlin, a quad-core Steamroller APU with a Radeon GPU, was supposed to address this market, but the product did not seem to convince the OEMs.

    Intel claims that the 65W TDP E3-1285L was able to decode 14 1080p (at 30 fps) 20Mbps streams, four (or 40%) more than the Xeon E3-1286L v3, which could only sustain 10 video streams. Another use case is virtual desktops that use PCI device passthrough to give the virtual machine (VM) full access to the GPU. That way of working is very attractive for an IT manager: it enables centralized management of graphical workstations in a secure datacenter.
    But it should be noted that this kind of virtualization technology comes with drawbacks. First of all, only one VM gets access to the GPU: one VM literally owns the GPU (unlike NVIDIA's GRID technology). Secondly, you add network latency, something that many graphical designers will not like as it adds lag compared to working on a workstation with a beefy OpenGL card.
    Below you can find a table of the 5 new SKUs. I added a sixth column with the Xeon-D so you can easily compare.
    Intel Xeon E3 Broadwell Lineup (with Xeon D-1540 for comparison)
    SKU: E3-1258L v4 | E3-1265L v4 | E3-1278L v4 | E3-1285 v4 | E3-1285L v4 | Xeon D-1540
    Price: $481 | $417 | $546 | $556 | $445 | $581
    Cores: 4 | 4 | 4 | 4 | 4 | 8
    Threads: 8 | 8 | 8 | 8 | 8 | 16
    Base CPU Freq.: 1.8 GHz | 2.3 GHz | 2 GHz | 3.3 GHz | 3.5 GHz | 2 GHz
    Turbo CPU Freq.: 3.2 GHz | 3.3 GHz | 3.3 GHz | 3.8 GHz | 3.8 GHz | 2.6 GHz
    Graphics: HD5700 @ 1 GHz | Iris Pro 6300 (GT3e) @ 1.05 GHz | Iris Pro 6300 (GT3e) @ 1 GHz | Iris Pro 6300 (GT3e) @ 1.15 GHz | Iris Pro 6300 (GT3e) @ 1.15 GHz | none
    TDP: 47W | 35W | 47W | 95W | 65W | 45W
    DRAM Freq. (DDR3L): 1600MHz | 1866MHz | 1600MHz | 1866MHz | 1866MHz | DDR4-2133
    L3 Cache: 6MB | 6MB | 6MB | 6MB | 6MB | 12 MB
    L4 Cache: none | 128MB (Crystal Well) | 128MB (Crystal Well) | 128MB (Crystal Well) | 128MB (Crystal Well) | none
    It is pretty clear that the Xeon-D is a much more attractive server chip for most purposes: twice the number of cores, twice the L3 cache, all while remaining inside a 45W TDP power envelope. On top of that, the new Xeon E3 still needs a separate C226 chipset and is limited to 32 GB of RAM. The Xeon-D does not need a separate chipset and supports up to 128 GB of DDR4.
    In summary, the current Xeon E3-1200 v4 lineup is only interesting if you need a server chip for video transcoding or centralized workstations, or a local workstation with relatively modest graphical needs.
    The Atom C2000 and hopefully the X-Gene 2 chips are the SoCs to watch if you want ultra-dense and relatively cheap server CPUs for basic serving tasks (static web content, object caching). The Xeon E3-1240L v3 is probably still the best "single/lightly threaded performance"-per-watt champion. And the Xeon-D? Well, we will be reviewing that one soon...





    More...

  9. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5069

    Anandtech: HTC Announces The One ME With MediaTek's Helio X10

    Today the HTC One ME was officially announced in China. While it's not likely that this device will ever be sold in other markets, it's worth taking a look at to see what differences there are from the devices that HTC ships globally. Below you can see the specifications of the new HTC One ME.
    HTC One ME
    SoC MediaTek Helio X10, 8 x Cortex A53 at 2.2GHz,
    PowerVR G6200 GPU at 700MHz
    Memory and Storage 3GB LPDDR3 RAM, 32GB NAND + MicroSDXC
    Display 5.2" 2560x1440 IPS LCD
    Cellular Connectivity 2G / 3G / 4G LTE (MediaTek Category 4 LTE)
    Dimensions 150.99 x 71.99 x 9.75 mm, 155g
    Cameras 20MP Rear Facing w/ 1.12 µm pixels, 1/2.4" CMOS size, f/2.2, 27.8mm (35mm effective)

    4MP Front Facing, 2.0 µm pixels, f/2.0 26.8mm (35mm effective)
    Battery 2840 mAh (10.79Wh)
    Other Connectivity 802.11a/b/g/n/ac + BT 4.1, GNSS, NFC, DLNA
    Operating System Android 5.0 Lollipop with HTC Sense
    SIM Dual NanoSIM
    As you can see, this is definitely positioned as a high end device. As far as HTC's overall lineup goes, the HTC One M9 is probably the best device to make comparisons to. The most obvious difference is with the SoC. While the One M9 uses Qualcomm's Snapdragon 810, the One ME uses MediaTek's Helio X10 SoC. This is one of MediaTek's high end chips, and it's really only second to the recently launched Helio X20. I wouldn't want to judge how the One ME's performance compares to the One M9 based on spec sheets, but I'm very interested in seeing comparisons of the two phones once the One ME gets into the hands of users.
    Moving on from the SoC, we see specs that mostly mirror those of the One M9. The battery capacity, cameras, RAM, and NAND are all exactly the same. The biggest specification change is to the display. While the One M9 sports a 5" 1920x1080 panel, the One ME has a higher resolution 5.2" 2560x1440 panel. This means that the One ME is also slightly larger and thicker than the One M9, and ever so slightly lighter.
    For me the most interesting thing about the HTC One ME is probably the fingerprint scanner on the bottom. Whether it's a swipe style sensor like the HTC One Max or a touch and hold sensor like the iPhone is currently unknown, but having a fingerprint scanner at all when the One M9 doesn't is notable to say the least. I also like the design where it's set between two speaker grilles.
    There's currently no indication of what the HTC One ME will cost, or when it will begin shipping in China. When it does go on sale, it'll be available in rose gold, gold sepia, and black. It's doubtful that it'll ever be seen on North American shores, although I would love to get my hands on one.
    Source: HTC via GSMArena


    More...

  10. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5070

    Anandtech: The Huawei P8 Review

    It’s been a month now since Huawei launched its new smartphone flagship, the P8. Huawei started its Ascend P line of smartphones back in 2012 with the launch of the Ascend P1, and has since iterated every year with the follow-up P6, P7, and this year’s P8. Read on as we review Huawei's new smartphone.

    More...
