Thread: Anandtech News

  1. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #6761

Anandtech: BlackBerry KEYone Announced: Snapdragon 625 with QWERTY, $549

This week at MWC, TCL announced the BlackBerry KEYone, which follows in BlackBerry's traditional style with a distinctive hardware QWERTY keyboard, but this time offers a more polished take on an Android implementation. The KEYone implements a high-capacity battery, Qualcomm’s Snapdragon 625 SoC, Android 7.1, a 4.5” LCD display, and a USB-C connector for power and data. The KEYone will arrive in April at around $549.
Last year BlackBerry Limited announced its intention to exit the development and manufacturing of smartphones, deciding to focus on creating and licensing its brand, other IP, and primarily its secure software suite for mobile devices. TCL became the primary licensee, and it is the company that will produce BlackBerry-branded devices going forward. TCL is currently the only global licensee of the trademark, and the KEYone is the first BlackBerry-branded device developed under that arrangement, aiming at similar markets to BlackBerry's previous products.
The KEYone is not the first BlackBerry-branded handset from TCL (the TCL-developed DTEK50, released in 2016, essentially uses the same platform as the Alcatel Idol 4), but it is the first one developed to be a BlackBerry from the ground up, in close collaboration with BlackBerry Limited on the design. So, while the KEYone is produced by TCL, engineers from BBL have added their touch to the product. In addition, the phone comes with pre-loaded software from BlackBerry, including the BlackBerry DTEK application that monitors the phone's security.
One of the product messages surrounding the launch was that when BBL and TCL started development of the KEYone, they set themselves a number of goals they wanted to achieve: keep the BlackBerry heritage, offer fast connectivity over today’s networks, ensure a long battery life and snappy multitasking performance, and make the device sturdy, yet stylish. We're sure that the sales numbers will be the marker of how well they succeed.
    First off, the quintessential value add for BlackBerry users has always been the physical keyboard, so this stays. TCL decided to add functionality to the keyboard beyond just typing, which is why the keyboard becomes an extension to the display as it supports swiping and programmable shortcuts. TCL lists that this functionality is useful for scrolling, photo editing, and opening apps (with various gestures).
    Battery life is a major concern of virtually all smartphone users. To make the KEYone last as long as possible, TCL did two things: it installed a ~3500 mAh battery into the handset and also picked up the Qualcomm Snapdragon 625 SoC. The S625 is an SoC we're going to see a lot of in 2017, as an alternative to S652 phones: rather than using 2xA72/4xA53 on 28nm, the S625 offers 8xA53 but on Samsung's 14nm LPP process, so while the latter might actually be lower in peak performance, the smaller node and lower power cores enable significant battery life improvements. On the connectivity side of things, the Snapdragon 625 supports 802.11ac, Bluetooth 4.2 and integrates Qualcomm’s X9 LTE modem (Cat 7 LTE, up to 300 Mbit/s downlink and up to 150 Mb/s uplink).
BlackBerry KEYone Specifications
SoC: Qualcomm Snapdragon 625 (MSM8953), 8x ARM Cortex-A53 @ 2.0 GHz, Adreno 506
RAM: 3 GB LPDDR3
Storage: 32 GB (eMMC)
Display: 4.5-inch 1620x1080 (434 ppi) with Gorilla Glass 4
Network: 2G: GSM/EDGE
    3G: WCDMA (DB-DC-HSDPA, DC-HSUPA), TD-SCDMA, EV-DO, CDMA1x
    4G: depends on the version
    - Canada, LATAM, APAC, US V1: LTE 1, 2, 3, 4, 5, 7, 8, 12, 13, 17, 19, 20, 28, 29, 30; TDD LTE 38, 39, 40, 41
    - EMEA: LTE 1, 2, 3, 4, 5, 7, 8, 13, 17, 20, 28; TDD LTE 38, 40
    - US V2: LTE 1, 2, 3, 4, 5, 7, 12, 13, 20, 25, 26, 28, 29, 30; TDD LTE 41; CDMA BC 0, 1, 10
LTE Speeds: 300 Mb/s down, 150 Mb/s up
Audio: stereo speakers, 3.5-mm TRRS audio jack
Dimensions: see pictures
Rear Camera: 12 MP, f/2.0 aperture, dual LED flash
Front Camera: 8 MP
Battery: 3505 mAh with Qualcomm Quick Charge 3.0
OS: Android 7.1
Connectivity: 802.11ac Wi-Fi, Bluetooth 4.1, USB-C
Sensors: fingerprint, accelerometer, gyroscope, magnetometer, proximity, ambient light
Navigation: GPS, GLONASS (?)
SIM Size: NanoSIM
Colors: Black/Metallic
Launch Countries: NA, EMEA (parts), APAC (?)
Price: $549/€599/£499
The BlackBerry look is distinctive. There are virtually no phones with a keyboard today, and given the more modern design (compared to, say, the Passport), this handset should stand out for more positive reasons. The keyboard uses stainless-steel strips between the rows for touch-typing assistance and to aid the look. Since part of the KEYone’s front panel is occupied by its QWERTY keyboard, the IPS display has a 3:2 aspect ratio at its 1620x1080 resolution, which is unusual for a smartphone.
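As a quick sanity check, the quoted density and that unusual ratio fall straight out of the resolution and diagonal (a minimal sketch, using only the figures from the spec table above):

```python
import math

# BlackBerry KEYone panel: 1620x1080 over a 4.5-inch diagonal
w_px, h_px, diag_in = 1620, 1080, 4.5

print(f"aspect ratio: {w_px / h_px:.1f}")                      # 1.5, i.e. 3:2
print(f"density: {math.hypot(w_px, h_px) / diag_in:.0f} ppi")  # ~433, in line with the quoted 434
```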
The KEYone has a rather unique texture on the back of the device (the material TCL uses for the back side is unknown), which looks like processed leather or carbon fiber, but which is designed to be both sturdy and oleophobic. Moreover, this coating is designed to prevent the phone from slipping from the hand during use. As for the overall feel, the KEYone feels very solid, but its thickness is 0.37”, which is considerably thicker than most modern smartphones of comparable dimensions (Apple’s iPhone 7 is 0.28”, the iPhone 7 Plus is 0.29”). Part of this is down to the battery.

When it comes to imaging, the BlackBerry KEYone uses Sony’s Exmor IMX378 sensor with a dual-tone flash as its primary camera, as well as an 8 MP sensor (with selfie flash via the LCD) on the front. Given that the KEYone is primarily targeting business users, nothing extra special was needed here.

    As for pricing and availability, the BlackBerry KEYone will hit the market in April in multiple countries at $549/€599/£499 price points. It's going to be interesting to see how many octo-A53 devices ever reach that price point.


    More...

  2. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #6762

    Anandtech: MediaTek Announces Helio X30 Availability: 10 CPU Cores On 10nm

    MediaTek first unveiled the Helio X30—its next-generation high-end SoC—last fall, but today at Mobile World Congress the Taiwanese company announced its commercial availability. The Helio X30 is entering mass production and should make its debut inside a mobile device sometime in Q2 2017.
    The Helio X30, like the Helio X20 family before it, incorporates 10 CPU cores arranged in a Max.Mid.Min tri-cluster configuration. Two of ARM’s latest A73 CPU cores replace the two A72s in the Max cluster, which should improve performance and reduce power consumption. The Mid cluster still uses 4 A53 cores, but they receive a 10% frequency boost relative to the top-of-the-line Helio X27. In the X30’s Min cluster we find the first implementation of ARM’s most-efficient A-series core. The A35 consumes 32% less power than the A53 it replaces (same process/frequency), while delivering 80%-100% of the performance, according to ARM. With a higher peak frequency of 1.9GHz, the X30’s A35 cores should deliver about the same or better performance than the X20’s A53 cores and still consume less power.
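Taking ARM's 80-100% figure at face value, that "same or better" claim for the Min cluster is simple scaling arithmetic (a sketch, assuming performance scales linearly with frequency):

```python
# X20 Min cluster: Cortex-A53 @ 1.4GHz; X30 Min cluster: Cortex-A35 @ 1.9GHz
freq_scale = 1.9 / 1.4        # ~1.36x clock advantage for the X30's A35 cluster
per_clock = (0.8, 1.0)        # ARM: A35 delivers 80-100% of A53 performance per clock

low, high = (freq_scale * p for p in per_clock)
print(f"estimated A35 cluster vs X20 A53 cluster: {low:.2f}x to {high:.2f}x")
# -> roughly 1.09x to 1.36x: the same performance or better, at lower power
```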
MediaTek Helio X20 vs. Helio X30
                   Helio X20                           Helio X30
CPU                2x Cortex-A72 @ 2.1GHz              2x Cortex-A73 @ 2.5GHz
                   4x Cortex-A53 @ 1.85GHz             4x Cortex-A53 @ 2.2GHz
                   4x Cortex-A53 @ 1.4GHz              4x Cortex-A35 @ 1.9GHz
GPU                ARM Mali-T880MP4 @ 780MHz           PowerVR 7XTP-MT4 @ 800MHz
Memory Controller  2x 32-bit LPDDR3 @ 933MHz           4x 16-bit LPDDR4x @ 1866MHz
                   (14.9GB/s bandwidth)                (29.9GB/s bandwidth)
Video Encode       2160p30 H.264 / HEVC w/HDR          2160p30 H.264 / HEVC w/HDR / VP9
Video Decode       2160p30 10-bit H.264 / HEVC / VP9   2160p30 10-bit H.264 / HEVC / VP9
Camera/ISP         Dual ISP: 32MP @ 24fps (single)     Dual 14-bit ISP: 28MP @ 30fps (single)
                   or 13MP + 13MP @ 30fps (dual)       or 16MP + 16MP (dual)
Integrated Modem   LTE Category 6                      LTE Category 10
                   DL 300Mbps (2x20MHz CA, 64-QAM)     DL 450Mbps (3x20MHz CA, 64-QAM)
                   UL 50Mbps (1x20MHz CA, 16-QAM)      UL 150Mbps (2x20MHz CA, 64-QAM)
                   FDD-LTE / TD-LTE / TD-SCDMA /       FDD-LTE / TD-LTE / TD-SCDMA /
                   WCDMA / CDMA / GSM                   WCDMA / CDMA / GSM
Mfc. Process       TSMC 20SoC (planar)                 TSMC 10nm FinFET
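The table's bandwidth figures follow directly from memory clock, double-data-rate signaling, and total bus width; a quick verification sketch:

```python
def peak_bandwidth_gbs(clock_mhz, total_bus_bits):
    """Peak DRAM bandwidth: clock x 2 (double data rate) x bus width in bytes."""
    return clock_mhz * 1e6 * 2 * (total_bus_bits / 8) / 1e9

# Both SoCs have a 64-bit-wide interface in total; only clocks and channel layout differ.
print(f"Helio X20, 2x 32-bit LPDDR3  @ 933MHz:  {peak_bandwidth_gbs(933, 64):.1f} GB/s")   # 14.9
print(f"Helio X30, 4x 16-bit LPDDR4x @ 1866MHz: {peak_bandwidth_gbs(1866, 64):.1f} GB/s")  # 29.9
```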
    The Helio X30 will also be the first SoC to use TSMC’s 10nm process, which will offer significant power savings relative to TSMC’s 20nm planar process that the Helio X20 family uses. According to MediaTek, the X30 consumes 50% less power than the X20 when running an unspecified CPU workload and 60% less power when running GFXBench T-Rex. These power savings will increase battery life and improve sustained performance with less thermal throttling.
    Alongside the Helio X30, MediaTek is launching CorePilot 4.0, which manages CPU frequency and task scheduling. Optimized for its unique tri-cluster CPU configuration, CorePilot keeps track of the SoC’s power budget by monitoring temperature, and the global task scheduler component is responsible for migrating tasks between clusters based on workload and user experience parameters, such as frames per second. It also adjusts CPU frequency using Fast DVFS technology that increases sampling rate, allowing for faster voltage/frequency adjustments that better follow changes in workload. The overall goal of CorePilot 4.0 is to achieve the best possible performance at the lowest power levels.
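MediaTek has not published CorePilot 4.0's internals, so purely as an illustration of the idea, here is a minimal sketch of how a tri-cluster scheduler of this sort might place a task; the thresholds, names, and inputs are all invented for the example:

```python
# Hypothetical tri-cluster placement logic in the spirit of the Max/Mid/Min
# scheme described above; not MediaTek's actual CorePilot implementation.
def place_task(cpu_demand, fps_behind_target, thermal_headroom):
    """Pick the cheapest cluster that satisfies the workload.
    cpu_demand:        load relative to one Min-cluster core (may exceed 1.0)
    fps_behind_target: True if a user-visible frame rate is slipping
    thermal_headroom:  0..1 fraction of the SoC power budget remaining"""
    if thermal_headroom < 0.1:
        return "min (A35)"     # throttling: pin work to the efficient cluster
    if fps_behind_target or cpu_demand > 1.8:
        return "max (A73)"     # user experience first: burst on the big cores
    if cpu_demand > 1.0:
        return "mid (A53)"
    return "min (A35)"         # light work stays on the low-power cores

print(place_task(cpu_demand=0.4, fps_behind_target=False, thermal_headroom=0.8))  # min (A35)
print(place_task(cpu_demand=1.2, fps_behind_target=True,  thermal_headroom=0.5))  # max (A73)
```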
    In a short presentation at its MWC booth Monday, Executive Vice President & Co-COO Jeffrey Ju stated that MediaTek expects only a limited number of phones to use the Helio X30, perhaps less than ten. He also mentioned that a low yield rate for TSMC’s 10nm process has delayed the X30’s availability. The X30 sounds compelling on paper, so it will be interesting to see how many design wins it can actually achieve.


    More...

  3. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #6763

    Anandtech: The AnandTech Podcast, Episode 41: Let's Talk Server, with Patrick Kennedy

    While in San Francisco for AMD’s Ryzen Tech Day, I had a chance to catch up with a good friend by the name of Patrick Kennedy, who runs the tech news website ServeTheHome. We frequently battle STH here at AnandTech to be the first to break news on new server platforms, but it is a friendly rivalry where often we end up picking each other’s brains for information or to bounce ideas off of each other. To that end, I managed to convince Patrick to be a guest on our podcast, to talk about the recent issue with Avoton and Rangeley C2000 CPUs as well as the launch of C3000 and discuss what the upcoming Naples platform can do for AMD.
Apologies in advance for parts of the recording. We did this in a high-rise hotel during a freak San Francisco storm, which caused wind to whistle through the room's vents, with no way to close them. I tried to clean up the audio as best I could; alas, I am no expert. Experts, please apply to be our podcast editors, and tell us what equipment we should be using.

    Patrick Kennedy (ServeTheHome), Ian Cutress (AnandTech) and David Kanter (Microprocessor Report)
    Photo Taken by Raja Koduri (AMD). David was declared the winner of the 'Bring Your Suit A-Game' contest.
    The AnandTech Podcast #41: Let's Talk Server

    Featuring


    iTunes
    RSS - mp3, m4a
    Direct Links - mp3, m4a
    Total Time: 28 minutes 39 seconds
    Outline mm:ss
    00:00 – Introduction
    00:15 – Patrick’s 2000 cores
    01:41 – Atom C2000 Avoton/Rangeley Hardware Bug
    09:22 – Denverton and C3000
    15:17 – Xeon D-1500 Networking CPUs
    18:02 – Opportunities for AMD Naples
    28:39 – FIN
    Related Reading

    The Intel Atom C2000 Series Bug (via ServeTheHome)
    Intel launches Denverton C3000 Series
    AMD Naples Motherboard Analysis

    More...

  4. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #6764

Anandtech: AMD GDC 2017: Asynchronous Reprojection for VR, Vega Gets a Server Customer

In what has become something of an annual tradition for AMD's Radeon Technologies Group, their Game Developers Conference Capsaicin & Cream event just wrapped up. Unlike the company’s more outright consumer-facing events such as their major product launches, AMD’s Capsaicin events are focused more on helping the company further their connections with the game development community. This is a group that on the one hand has been banging away on the Graphics Core Next architecture in consoles for a few years now, and on the other hand operates in a world where, in the PC space, NVIDIA is still 75% of the dGPU market even with AMD’s Polaris-powered gains. As a result, despite the sometimes-playful attitude of AMD at these events, they are part of a serious developer outreach effort for the company.
    For this year’s GDC then, AMD had a few announcements in store. It bears repeating that this is a developers’ conference, so most of this is aimed at developers, but even if you’re not making the next Doom it gives everyone a good idea of what AMD’s status is and where they’re going.
    Vive/SteamVR Asynchronous Reprojection Support Coming in March

    On the VR front, the company has announced that they are nearly ready to launch GPU support for the Vive/SteamVR’s asynchronous reprojection feature. Analogous to Oculus’s asynchronous timewarp feature, which was announced just under a year ago, asynchronous reprojection is a means of reprojecting a combination of old frame data and new input data to generate a new frame on the fly if a proper new frame will not be ready by the refresh deadline. The idea behind this feature is that rather than redisplaying an old frame and introducing judder to the user – which can make the experience anything from unpleasant to stomach-turning – instead a warped version of the previous frame is generated, based on the latest input data, so that the world still seems to be updating around the user and matching their head motions. It’s not a true solution to a lack of promptly rendered frames, but it can make VR more bearable if a game can’t quite keep up with the 90Hz refresh rate the headset demands.
As to how this relates to AMD, this feature relies rather heavily on the GPU, as the SteamVR runtime and GPU driver need to quickly dispatch and execute the command for reprojection to make sure it gets done in time for the next display refresh. In AMD’s case, they not only want to catch this scenario but improve upon it, by using their asynchronous execution capabilities to get it done sooner. Valve launched this feature back in November; however, at the time it was only available on NVIDIA-based video cards, so for AMD Vive owners this will be a welcome addition. AMD in turn will be enabling this in a future release of their Radeon Software, with a target release date of March.
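In rough pseudocode, the compositor-side logic works like this (an illustrative sketch of the general technique only; the function names are invented, and this is not Valve's or AMD's actual runtime code):

```python
# Illustrative async reprojection loop for a 90Hz HMD; all names are hypothetical.
def composite_next_frame(app, hmd, last_frame):
    deadline = hmd.next_vsync_time()
    if app.frame_ready_by(deadline):
        frame = app.take_frame()                  # the real, freshly rendered frame
    else:
        # The app missed vsync: rather than redisplaying the old frame (judder),
        # warp it to the newest head pose, sampled just before the deadline.
        latest_pose = hmd.sample_head_pose()
        frame = reproject(last_frame, from_pose=last_frame.pose, to_pose=latest_pose)
    hmd.present(frame)                            # must land before the next refresh
    return frame
```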
    Forward Rendering Support for Unreal Engine 4

    Moving on, since the last GDC AMD has been working on some new deals and partnerships, which they have announced at this year’s Capsaicin event. On the VR front, the company has been working with long-time partner (and everyone’s pal) Epic Games on improving VR support in the Unreal Engine. Being demoed at GDC is a new forward rendering path for Unreal Engine 4.15.
    Traditional forward rendering has fallen out of style in recent years as its popular alternative, deferred rendering, allows for cheap screen space effects (think ambient occlusion and the like). The downside to deferred rendering is that it pretty much breaks any form of real anti-aliasing, such as MSAA. This hasn’t been too big of a problem for traditional games, where faux/post-process AA like FXAA can hide the issue enough to please most people. But it’s not good enough for VR; VR needs real, sub-pixel focused AA in order to properly hide jaggies and other aliasing effects on what is perceptually a rather low density display.
    By bringing back forward rendering in a performance-viable form, AMD believes they can help solve the AA issue, and in a smarter, smoother manner than hacking MSAA into deferred rendering. The payoff of course being that Unreal Engine remains one of the most widely used game engines out there, so features that they can get into the core engine upstream with Epic are features that become available to developers downstream who are using the engine.
    Partnering with Bethesda: Vulkan Everywhere

    Meanwhile, the company has also announced that they have inked a major technology and marketing partnership deal with publisher Bethesda. Publisher deals in one form or another are rather common in this industry – with both good and bad outcomes for all involved – however what makes the AMD deal notable is the scale. In what AMD is calling a “first of its kind” deal, the companies aren’t inking a partnership over just one or two games, but rather they have formed what AMD is presenting as a long term, deep technology partnership, making this partnership much larger than the usual deals.
    The biggest focus here for the two companies is on Vulkan, Khronos’s low-level graphics API. Vulkan has been out for just over a year now and due to the long development cycle for games is still finding its footing. The most well-known use right now is as an alternative rendering path for Bethesda/id’s Doom. AMD and Bethesda want to get Vulkan in all of Bethesda’s games in order to leverage the many benefits of low-level graphics APIs we’ve been talking about over the past few years. For AMD this not only stands to improve the performance of games on their graphics cards (though it should be noted, not exclusively), but it also helps to spur the adoption of better multi-threaded rendering code. And AMD has an 8-core processor they’re chomping at the bit to start selling in a few days…
From a deal-making perspective, the way most of these deals work is that AMD will be providing Bethesda’s studios with engineers and other resources to help integrate Vulkan support and whatever other features the two entities want to add to the resulting games. Not talked about in much detail at the Capsaicin event was the marketing side of the equation. I’d expect that AMD has a lock on including Bethesda games as part of promotional game bundles, but I’m curious whether there will be anything else to it.
    AMD Vega GPUs to Power LiquidSky Game Streaming Service

    Finally, while AMD isn’t releasing any extensive new details about their forthcoming Vega GPUs at the show (sorry gang), they are announcing that they’ve already landed a deal with a commercial buyer to use these forthcoming GPUs. LiquidSky, a game service provider who is currently building and beta-testing an Internet-based game streaming service, is teaming up with AMD to use their Vega GPUs with their service.
The industry as a whole is still working to figure out the technology and the economics of gaming-on-demand services, but a commonly identified component is using virtualization and other means to share hardware over multiple users to keep costs down. And while this is generally considered a solved issue for server compute tasks – one needs only look at the likes of Microsoft Azure and Amazon Web Services – it’s still a work in progress for game streaming, where the inclusion of GPUs and the overall need for consistent performance coupled with real-time responsiveness adds a couple of wrinkles. AMD believes they have a good solution in the form of their GPUs’ Single Root Input/Output Virtualization (SR-IOV) support.
From what I’ve been told, besides their Vega GPUs being a good fit for their high performance and user-splitting needs, LiquidSky is also looking to take advantage of AMD’s Radeon Virtualized Encode, which is a new Vega-specific feature. Unfortunately AMD isn’t offering a ton of detail on this feature, but from what I’ve been able to gather, AMD has implemented an optimized video encoding path for virtualized environments on their GPUs. A game streaming service requires that the contents of several virtual machines at once be encoded quickly, so this would be the logical next step for AMD’s on-board video encoder (VCE): making it work efficiently with virtualization.


    More...

  5. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #6765

    Anandtech: NVIDIA Unveils GeForce GTX 1080 Ti: Available Week of March 5th for $699

    In what has now become a bona fide tradition for NVIDIA, at their GDC event this evening the company announced their next flagship video card, the GeForce GTX 1080 Ti. Something of a poorly kept secret – NVIDIA’s website accidentally spilled the beans last week – the GTX 1080 Ti is NVIDIA’s big Pascal refresh for the year, finally rolling out their most powerful consumer GPU, GP102, into a GeForce video card.
    The Ti series of cards isn’t new for NVIDIA. The company has used the moniker for their higher-performance cards since the GTX 700 series back in 2013. However no two generations have really been alike. For the Pascal generation in particular, NVIDIA has taken the almighty Titan line in a more professional direction, so whereas a Ti card would be a value Titan in past generations – and this is still technically true here – it serves as more of a flagship for the Pascal generation GeForce.
    At any rate, we knew that NVIDIA would release a GP102 card for the GeForce market sooner or later, and at long last it’s here. Based on a not-quite-fully-enabled GP102 GPU (more on this in a second), like its predecessors the GTX 1080 Ti is meant to serve as a mid-generation performance boost for the high-end video card market. In this case NVIDIA is aiming for what they’re calling their greatest performance jump yet for a Ti product – around 35% on average – which would translate into a sizable upgrade for GeForce GTX 980 Ti owners and others for whom GTX 1080 wasn’t the card they were looking for.
    NVIDIA GPU Specification Comparison
    GTX 1080 Ti NVIDIA Titan X GTX 1080 GTX 980 Ti
    CUDA Cores 3584 3584 2560 2816
    Texture Units 224 224 160 176
    ROPs 88 96 64 96
    Core Clock ? 1417MHz 1607MHz 1000MHz
    Boost Clock 1600MHz 1531MHz 1733MHz 1075MHz
    TFLOPs (FMA) 11.5 TFLOPs 11 TFLOPs 9 TFLOPs 6.1 TFLOPs
    Memory Clock 11Gbps GDDR5X 10Gbps GDDR5X 10Gbps GDDR5X 7Gbps GDDR5
    Memory Bus Width 352-bit 384-bit 256-bit 384-bit
    VRAM 11GB 12GB 8GB 6GB
    FP64 1/32 1/32 1/32 1/32
    FP16 (Native) 1/64 1/64 1/64 N/A
    INT8 4:1 4:1 N/A N/A
    TDP 250W 250W 180W 250W
    GPU GP102 GP102 GP104 GM200
    Transistor Count 12B 12B 7.2B 8B
    Die Size 471mm2 471mm2 314mm2 601mm2
    Manufacturing Process TSMC 16nm TSMC 16nm TSMC 16nm TSMC 28nm
    Launch Date 03/2017 08/02/2016 05/27/2016 06/01/2015
Launch Price $699 $1200 $599 (Founders $699) $649
    We’ll start as always with the GPU at the heart of the card, GP102. With NVIDIA’s business now supporting a dedicated compute GPU – the immense GP100 – GP102 doesn’t qualify for the “Big Pascal” moniker like past iterations have. But make no mistake, GP102 is quite a bit larger than the GP104 GPU at the heart of the GTX 1080, and that translates to a lot more hardware for pushing pixels.
GTX 1080 Ti ships with 28 of GP102’s 30 SMs enabled. For those of you familiar with the not-quite-consumer NVIDIA Titan X (Pascal), this is the same configuration as that card, and in fact there are a lot of similarities between the two cards. Though for this generation the situation is not going to be as cut and dried as in the past; the GTX 1080 Ti is not strictly a subset of the Titan.
    The big difference on the hardware front is that NVIDIA has stripped GP102 of some of its memory/ROP/L2 capacity, which was fully enabled on the Titan. Of the 96 ROPs we get 88; the last ROP block, its memory controller, and 256KB of L2 cache have been disabled.
However what the GTX 1080 Ti lacks in functional units it partially makes up in clockspeeds, both in regards to the core and the memory. While the base clock has not yet been disclosed, the boost clock of the GTX 1080 Ti is 1.6GHz, about 70MHz higher than its Titan counterpart. More significantly, the memory clock on the GTX 1080 Ti is 11Gbps, a 10% increase over the 10Gbps found on the Titan and the GTX 1080. Combined with the 352-bit memory bus, we’re looking at 484GB/sec of memory bandwidth for the GTX 1080 Ti.
Taken altogether then, the GTX 1080 Ti offers just shy of 11.5 TFLOPS of FP32 performance. This puts the expected shader/texture performance of the card 29% ahead of the current GTX 1080, while the ROP throughput advantage stands at 27%, and the memory bandwidth advantage at a much greater 51.2%. Real-world performance will of course be influenced by a blend of these factors, so I’ll be curious to see how much the major jump in memory bandwidth helps given that the ROPs aren’t seeing the same kind of throughput boost. Otherwise, relative to the NVIDIA Titan X, the two cards should end up quite close, trading blows now and then.
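Those percentages can be reproduced from the spec table above; a quick sketch of the arithmetic:

```python
def fp32_tflops(cuda_cores, boost_ghz):
    return 2 * cuda_cores * boost_ghz / 1000   # 2 FLOPs per core per clock (FMA)

def mem_bw_gbs(gbps_per_pin, bus_bits):
    return gbps_per_pin * bus_bits / 8

ti = {"FP32 TFLOPS": fp32_tflops(3584, 1.600),
      "ROP Gpix/s":  88 * 1.600,
      "Mem GB/s":    mem_bw_gbs(11, 352)}
gtx1080 = {"FP32 TFLOPS": fp32_tflops(2560, 1.733),
           "ROP Gpix/s":  64 * 1.733,
           "Mem GB/s":    mem_bw_gbs(10, 256)}

for key in ti:
    print(f"{key}: {ti[key]:.1f} vs {gtx1080[key]:.1f} "
          f"(+{100 * (ti[key] / gtx1080[key] - 1):.0f}%)")
# FP32 +29%, ROP throughput +27%, memory bandwidth +51%
```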
Speaking of the Titan, on an interesting side note, it doesn’t look like NVIDIA is going to be doing anything to hurt the compute performance of the GTX 1080 Ti to differentiate the card from the Titan, which has proven popular with GPU compute customers. Crucially, this means that the GTX 1080 Ti gets the same 4:1 INT8 performance ratio of the Titan, which is critical to the cards’ high neural networking inference performance. As a result the GTX 1080 Ti actually has slightly greater compute performance (on paper) than the Titan. And NVIDIA has been surprisingly candid in admitting that unless compute customers need the last 1GB of VRAM offered by the Titan, they’re likely going to buy the GTX 1080 Ti instead.
Speaking of memory, as I mentioned before, the card will be shipping with 11 pieces of 11Gbps GDDR5X. The faster memory clock comes courtesy of a new generation of GDDR5X memory chips from partner Micron, who, after a bit of a rocky start with GDDR5X development, is finally making the kind of progress on memory speeds that definitely has NVIDIA pleased. Until now NVIDIA’s GPUs and boards have been ready for higher frequency memory, and the memory is just now catching up.
    Moving on, the card’s 250W TDP should not come as a surprise. This has been NVIDIA’s segment TDP of choice for Titan and Ti cards for a while now, and the GTX 1080 Ti isn’t deviating from that.
However the cooling system has seen a small but important overhaul: the DVI port is gone, opening up the card to be a full-slot blower. In order to offer a DVI port along with a number of DisplayPorts/HDMI ports, NVIDIA has traditionally blocked part of the card’s second slot to house the DVI port. But with the GTX 1080 Ti that port is finally gone, which gives the card the interesting distinction of being the first unobstructed high-end GeForce card since the GTX 580. The end result is that NVIDIA is promising a decent increase in cooling performance relative to the GTX 980 Ti and similar designs. We’ll have to see how NVIDIA has tuned the card to understand the full impact of this change, but it will likely further improve on NVIDIA’s already great acoustics.
    Meanwhile the end result of removing the DVI port means that the GTX 1080 Ti’s display I/O has been pared down to just a mix of HDMI and DisplayPorts. Altogether we’re looking at 3x DisplayPort 1.4 ports and 1x HDMI 2.0 port. As a consolation to owners who may still be using DVI-based monitors, the company will be including a DisplayPort to DVI adapter with the card (presumably DP to SL-DVI and not DL-DVI), but it’s clear that DVI’s days are now numbered over at NVIDIA.
    Moving on, for card designs NVIDIA is once again going to be working with partners to offer a mix of reference and custom designs. The GTX 1080 Ti will initially be offered in a Founder’s Edition design, while partners are also bringing up their own semi and fully custom designs to be released a bit later. Importantly however, unlike the GTX 1080 & GTX 1070, NVIDIA has done away with the Founder’s Edition premium for the GTX 1080 Ti. The MSRP of the card will be the MSRP for both the Founder’s Edition and partners’ custom cards. This makes pricing more consistent, though I’m curious to see how this plays out with partners, as they benefitted from the premium in the form of more attractive pricing for their own cards.
    Finally, speaking of pricing, let’s talk about the launch date and availability. Just in time for Pi Day, NVIDIA will be launching the card on the week of March 5th (an exact date has yet to be revealed). As for pricing, long-time price watchers may be surprised. NVIDIA will be releasing the card at $699, the old price of the GTX 1080 Founder's Edition (which itself just got a price cut). This does work out to a bit higher than the GTX 980 Ti - it launched at $649 two years ago - but it's more aggressive than I had been expecting given the GTX 1080's launch price last year.
    In any case, at this time the high-end video card market is NVIDIA’s to command. AMD doesn’t offer anything competitive with the GTX 1070 and above, so the GTX 1080 Ti will stand alone at the top of the consumer video card market. Long-term here AMD isn’t hesitating to note their work on Vega, but that’s a bridge to be crossed only once those cards get here.
    Gallery: GeForce GTX 1080 Ti




    More...

  6. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #6766

Anandtech: GeForce GTX 1080 Price Cut to $499; NVIDIA Partners To Begin Selling 10-Series Cards with Faster Memory

Along with this evening’s news of the GeForce GTX 1080 Ti, NVIDIA has a couple of other product announcements of sorts. First off, starting tomorrow, the GeForce GTX 1080 is getting an official $100 price cut, bringing the card's price to $499. Since the card launched back in May at $599, prices have held fairly steady around that MSRP, so once this price cut goes into effect, it will have a significant impact on street prices. Though it should be noted that this is the base price for vendor custom cards; the Founder's Edition card was not mentioned. If it maintains its $100 premium, then that card would be coming down to $599.
    As for the second announcement of the evening, NVIDIA has announced that their partners are going to be selling GeForce GTX 1080 and GTX 1060 6GB cards with faster memory. Partners will now have the option to outfit these cards with 11Gbps GDDR5X and 9Gbps GDDR5 respectively, to be sold as factory overclocked cards.
    To understand the change, let’s talk briefly about how board partners work. Depending on the partner, the parts, and the designs, partners can buy anything from just the GPU, to the GPU and RAM, up to a fully assembled board (the Founder’s Edition). With the release of faster GDDR5X and GDDR5 bins, NVIDIA is now giving their board partners an additional option to use these faster memories.
    GeForce 10 Series Memory Clocks
    GTX 1080 GTX 1060
    Official Memory Clock 10Gbps GDDR5X 8Gbps GDDR5
    New "Overclock" Memory Clock 11Gbps GDDR5X 9Gbps GDDR5
    To be clear, NVIDIA isn’t releasing a new formal SKU for either card. Nor are the cards' official specifications changing. However, if partners would like, they can now buy higher speed memory from NVIDIA for use in their cards. The resulting products will, in turn, be sold as factory overclocked cards, giving partners more configuration options for their factory overclocked SKUs.
    As factory overclocking has always been done at the partner level, this doesn’t change the nature of the practice. Partners have, can, and will sell cards with factory overclocked GPUs and memory, with or without NVIDIA's help. However with NVIDIA’s official specs already driving the memory clocks so hard, there hasn’t been much headroom left for partners to play with; factory overclocked GTX 1080 cards don’t ship much above 10.2Gbps. So the introduction of faster memory finally opens up greater memory overclocking to the partners.
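The bandwidth these faster bins unlock is easy to quantify from each card's bus width (a quick sketch; the 256-bit and 192-bit bus widths are the standard GTX 1080 and GTX 1060 6GB configurations):

```python
def mem_bw_gbs(gbps_per_pin, bus_bits):
    return gbps_per_pin * bus_bits / 8

print(f"GTX 1080 (256-bit): {mem_bw_gbs(10, 256):.0f} -> {mem_bw_gbs(11, 256):.0f} GB/s")  # 320 -> 352
print(f"GTX 1060 (192-bit): {mem_bw_gbs(8, 192):.0f} -> {mem_bw_gbs(9, 192):.0f} GB/s")    # 192 -> 216
```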



    More...

  7. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #6767

    Anandtech: ZTE Announces The Blade V8 Mini And Blade V8 Lite

    After announcing the Blade V8 Pro at CES 2017, ZTE added two more phones to the Blade V8 family: the Blade V8 Mini and Blade V8 Lite. Both phones are smaller than the previously announced models, featuring 720p 5-inch IPS LCD screens, and they target lower price points.

    ZTE Blade V8 Mini
    The Blade V8 Mini uses the same Qualcomm Snapdragon 435 SoC as the Blade V8, with 8 A53 CPU cores that top out at 1.4GHz. The Blade V8 Lite swaps to a MediaTek MT6750 SoC, which also has an octa-core A53 CPU configuration that reaches up to 1.5GHz. Given the similarities, all three phones should deliver similar system performance, but none of them will excel at gaming. Both the Mini and the Lite come with 2GB of LPDDR3 RAM and 16GB of internal storage, which is somewhat limiting, but storage is expandable via microSD card.
ZTE Blade V8 Series *
Columns: ZTE Blade V8 Lite | ZTE Blade V8 Mini | ZTE Blade V8
SoC: MediaTek MT6750 (4x Cortex-A53 @ 1.5GHz + 4x Cortex-A53 @ 1.0GHz, Mali-T860MP2) | Qualcomm Snapdragon 435 (MSM8940) (4x Cortex-A53 @ 1.4GHz + 4x Cortex-A53 @ 1.1GHz, Adreno 505) | Qualcomm Snapdragon 435 (MSM8940) (4x Cortex-A53 @ 1.4GHz + 4x Cortex-A53 @ ?GHz, Adreno 505)
Display: 5.0-inch 1280x720 IPS LCD | 5.0-inch 1280x720 IPS LCD | 5.2-inch 1920x1080 IPS LCD
Dimensions: 143.0 x 71.0 x 8.0 mm, ? grams | 143.5 x 70.0 x 8.9 mm, ? grams | 148.4 x 71.5 x 7.7 mm, ? grams
RAM: 2GB LPDDR3 | 2GB LPDDR3 | 2GB / 3GB LPDDR3
NAND: 16GB (eMMC 5.1) + microSD | 16GB (eMMC 5.1) + microSD | 16GB / 32GB (eMMC 5.1) + microSD
Battery: 2730 mAh, non-replaceable | 2800 mAh, non-replaceable | 2730 mAh, non-replaceable
Front Camera: 5MP | 5MP, f/2.2, flash | 13MP, flash
Rear Camera: 8MP, HDR | dual (13MP + 2MP), f/2.0, auto HDR, LED flash | dual (13MP + 2MP), auto HDR, LED flash
Modem: MediaTek (integrated), 2G / 3G / 4G LTE (Category 6) | Qualcomm X9 (integrated), 2G / 3G / 4G LTE (Category 7/13) | Qualcomm X9 (integrated), 2G / 3G / 4G LTE (Category 7/13)
Wireless: 802.11b/g/n, BT 4.1, FM radio, GPS (all three models)
Connectivity: microUSB 2.0, 3.5mm headset (all three models)
Launch OS: Android 7 with MiFavor 4.2 (all three models)
* Blade V8 Pro not shown (5.5-inch 1080p LCD, Snapdragon 625, 3140 mAh)
    While both new phones come with a 5MP selfie camera, only the Mini has a front-facing flash feature, which is far from ubiquitous and nice to see on a lower-cost phone. The Mini also includes the same dual camera (13MP + 2MP) module as the Blade V8. The second 2MP sensor helps capture depth information that can be used to apply a bokeh effect to images or to adjust the object of focus after a photo has been taken, features usually reserved for more expensive phones.
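As an illustration of what that depth information enables, here is a minimal, hypothetical sketch of a depth-assisted bokeh pass; the function and its parameters are invented for the example, and ZTE's actual pipeline is not public:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_bokeh(image, depth, focus_depth, tolerance=0.1, blur_sigma=6.0):
    """Blur everything whose depth is far from the chosen focal plane.
    image: HxWx3 float array; depth: HxW array normalized to [0, 1]."""
    blurred = np.stack([gaussian_filter(image[..., c], blur_sigma)
                        for c in range(3)], axis=-1)
    in_focus = (np.abs(depth - focus_depth) < tolerance)[..., None]
    # Refocusing after the fact is just re-running this with a new focus_depth.
    return np.where(in_focus, image, blurred)
```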

    ZTE Blade V8 Mini
The smaller, one-hand-friendly V8s have a lightly-textured aluminum chassis with plastic RF windows at the top and bottom. There’s an inset fingerprint sensor on the back of each, too. The rear camera module, which sits proud of the back surface, is an elongated oval centered at the top of the phone. In the Mini’s case, there’s a camera at each end of the module with an LED flash in between, but offset from center. The Lite uses a similar design for the camera module, with its lone 8MP sensor offset to the left of the centered flash.

    ZTE Blade V8 Lite
    Both phones use backlit capacitive navigation buttons. The larger, circular home button is a nice visual touch, and the other buttons are simple dots, allowing for customization. There’s a black border around the screen that’s noticeable on the color combinations with white fronts, but this is typical for phones in this price bracket.

    ZTE Blade V8 Lite
    All of the phones in ZTE’s Blade V8 series, except for the mid-range Blade V8 Pro that uses USB Type-C, still come with microUSB ports. There’s no 802.11ac Wi-Fi or NFC either, although they do come with Bluetooth and FM radio. It’s nice to see the Mini and Lite ship with Android 7 too.
    The ZTE Blade V8 Mini in silver, gold, black, red, and pink is coming to select markets in Asia and Europe, while the Blade V8 Lite will be available in Italy, Germany, and Spain in silver, gold, and black colors. Pricing is not available at this time, but should be well below $200 USD considering the Blade V8 Pro’s price.


    More...

  8. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #6768

    Anandtech: Making AMD Tick: A Very Zen Interview with Dr. Lisa Su, CEO

    AMD held a Tech Day a week before the launch of Zen to go over the details of of the new Ryzen product with the technology press. As part of these talks, we were able to secure Dr. Lisa Su, the CEO of AMD, for 30 minutes to discuss Zen, Ryzen, and AMD.
    Profile:

Dr. Lisa Su, CEO of AMD. Born in Taiwan and educated at MIT, Dr. Su comes fully equipped with a Bachelors, Masters, and Ph.D. in Electrical Engineering. Before AMD, Dr. Su had appointments as CTO of Freescale Semiconductor, Director of Emerging Products at IBM, and a stint at Texas Instruments. It was at IBM that Dr. Su worked alongside Mark Papermaster, who is currently AMD’s CTO. Dr. Su was initially hired as SVP and GM at AMD in 2012, overseeing the global business units, and became COO and then CEO in 2014. Unfortunately still a rare occurrence, Dr. Su is one of a small handful of female C-level executives in the semiconductor industry. Dr. Su is consistently highly ranked in many 'top people to watch' lists of technology industry visionaries.

    The pin-layout of Ryzen
    Ian Cutress: Congratulations on formally releasing Zen!
Q1: Both you and AMD have explicitly stated that AMD needed to return to the high-performance CPU market. We can all assume that there are still many hurdles ahead, but is getting Zen into retail silicon the spark that sets off the rest of the roadmap?
Lisa Su: I think launching Zen in desktop was a big, big hurdle. That being said, we have many others to go, and you can imagine how happy I am. I know I’m only as good as my last product, so there’s a lot of focus on: Vega, Naples, Notebook, and 2018.
    Q2: When we speak to some companies, they’ll describe that internally they have engineers working on the next generation of product, and fewer engineers working for the product after that, and continuing on for three, five or up to seven years of products. Internally, how far ahead in the roadmap do you have engineers working on product and product design?
    LS: It’s at least 3 to 4 years. If you look at what we have on CPU and GPU, we have our roadmap out to 2020. It’s not set in stone, but we know the major markets and we adjust timings a quarter here or there as necessary.
    Q3: A lot of analysts widely regard that rehiring Jim Keller was the right move for AMD, although at the time AMD was going through a series of ups and downs with products and financial issues. Was the 'new' CPU team shielded from those issues from day one, or at what point could the Zen team go full throttle?
    LS: If I put credit where credit is due, Mark Papermaster had incredible vision of what he wanted to do with CPU/GPU roadmap. He hired Jim Keller and Raja Koduri, and he was very clear when he said he needed this much money to do things. We did cut a bunch of projects, but we invested in our future. Sure we cut things, but it was very clear. A note to what you said about Jim Keller though - he was definitely a brilliant CPU guy, but he was a part of that vision of what we wanted to do.
    Q4: With Bulldozer, AMD had to work with Microsoft due to the way threads were dispatched to cores to ensure proper performance. Even though Zen doesn't have that issue, was there any significant back-and-forth with Microsoft to enable performance in Windows (e.g. XFR?)
LS: Zen is a pretty traditional x86 architecture as an overall machine, but there is optimization work to do. What makes this a bit different is that most of our optimization work is on the developer side – we work with them to really understand the bottlenecks in their code on our microarchitecture. I see many apps being tuned and getting better as we move forward on this.
    Q5: How vital was it to support Simultaneous Multi Threading?
    LS: I think it was very important. I think it was very complicated! Our goal was to have a very balanced architecture. We wanted high single threaded performance, and SMT was important given where the competition is. We didn’t want to apologize for anything with Zen – we wanted high single thread, we wanted many cores, but sorry we don’t have SMT? We didn’t want to say that, we wanted to be ambitious and give ourselves the time to get it done.
    IC: Can you have your cake and eat it too?
    LS: Yes! The key is to help the team to believe it can be done.
    Q6: It has been noted that AMD has been working with ASMedia on the chipset side of the platform, using a 55nm PCIe 3.0x4 based chipset. Currently your competition implements a large HSIO model that can support up to 24 PCIe 3.0 lanes, albeit with limited bandwidth, but enables extensive networking, PCIe storage, and other features. What does AMD need to do to reach semi-parity for I/O ?
    LS: I think we will continue to want a rich ecosystem. On the chipset side we may not develop the chipsets ourselves but we certainly want to be in partnership with others to develop a wide set of IO. I think if you look at the set of motherboard partners that we have, and the extensive testing we’ve done with them, I would expect that as we gain some market share in the high-end, you will see that system build up.
    Q7: A couple of years ago, AMD's market value was lower than its assets. Today, it is trading over $10. A cynic would say that the increase has been a good Polaris launch combined with recent marketing cycles slowly giving out small bits of information. What's your response to AMD's recent share price rise over the past 12 months?
LS: My view is that I can never predict the market, so I never try! The market has a mind of its own. But what I can say is that we had some very key fundamentals. We are in key markets like the PC market, the cloud market, the infrastructure market, gaming – these are big, big markets. They are growing too – I think that the markets we are in are good. We had to convince people fundamentally that we could execute a competitive roadmap, and 18 months ago I’d say people didn’t really believe. They thought ‘ah well, maybe, but we don’t know’ – it was a PowerPoint. Over the past 6-9 months we’ve proven that we can gain graphics share, with the launch of Polaris, and with the launch of Ryzen I think you’ll see that we can convince people that we can execute on high performance CPUs. Importantly, our customers have become convinced too. The key thing with our customers is that when it was a PowerPoint, they weren’t sure: it was interesting, but it could have been six months late or have 20% less performance. When we actually gave them samples, they were like ‘Wow, this is actually real!’ and they started pulling in their schedules. So when you ask me about investors it’s something like that. I think people want some proof points to believe they can trust us, and that if we execute we’ll do OK.
    Q8: Do you find that OEMs that haven’t worked with AMD are suddenly coming on board?
    LS: I will say that we have engagements with every OEM now on the high-performance space. Twelve months ago, a number of them would have said that they don’t have the resources to do multiple platforms. So yes, I think momentum helps in this space.
    Q9: At Intel's recent Investor Day we learned that future chips will incorporate multiple dies on the same package. This allows a semiconductor firm to focus on smaller chips and potentially better yields at the expense of some latency. Given what we predict will happen, what is your opinion on having large 600mm2 silicon? Is there a future?
    LS: There has been a lot of debate on this topic. I find it a very interesting debate. Certainly on the graphics side we view High Bandwidth Memory (HBM) and the ability to get that interconnect between the GPU and memory to be extremely differentiating. So certainly we will use that throughout our graphics roadmap. If you look at our CPU roadmap, I do think there’s a case to be made for multi-chip modules. It depends on the trade-offs you have to do, the bandwidth requirements you have, but yes as the process technology becomes more complicated, breaking up the tasks does make sense.
    Q10: With high-end GPUs, we commonly approach 250-300W power consumption. Why has the CPU stalled around 75-140W by comparison? Does AMD ever look to designing a CPU that actually aims for a power/efficiency sweet-spot north of 200W? Why hasn’t the high-performance CPU market gone and matched the GPU market in power consumption?
    LS: That’s a good question, let me see if I’ve thought about that. I think we’re limited by other things on the CPU side. I think we’re limited by some reliability.
    IC: But if you engineer for a specific power window…
    LS: Sure but if you think about it, GPUs tend to be a lot more parallel, so that’s what drives the power. With CPUs, you might argue with me about whether you actually need eight cores, or not! I have to think about that answer, but I think that’s the right one – the difference between a very parallel machine and one that is less parallel.
    Q11: Despite the differences, a lot of fingers point to the Zen microarchitecture floorplan and see significant similarities in the layout with Intel's Core microarchitecture. Without a fundamental paradigm shift, it seems we might be 'stuck' (for lack of a better term) with this kind of layout for perhaps a decade. How does AMD approach this, given your main competitor can easily invest in new seed firms or IP?
LS: The way I look at it, and I get asked this question very often (sometimes phrased a bit differently) – your competition can invest so much more than you can, how can we be competitive? I think the simple answer is that yes, we are smaller, but I think that we are also more focused. I think that sometimes with constraints comes creativity, and so when you’re talking about what processors look like 5-10 years from now, if you look at the innovation in the last 10 years, a bunch of that has come from AMD. You tend to solve problems when you’re put in a box that you have to live in, so when we look at possible microarchitectures, there are still a lot of ideas out there. There’s still a lot of opportunity to incrementally improve performance. I think the difference is that you used to be able to say ‘let me just shrink it’ and it will go faster, and that is a process that lends itself to money as you can just buy equipment to shrink it. Today you have to handcraft it a bit more, and that lends itself to more creativity I would say.
    Q12: We've recently seen your competitor announce a change in strategy regarding new process nodes, new architectures, and how markets will take advantage of the latest CPU designs. With Zen, AMD is first launching desktop, then server, then mobile: you've already mentioned Zen-plus on the roadmap - is the desktop-server-mobile roll-out method the best way for AMD to move forward?
LS: Not necessarily. I think for this generation [our strategy] made a lot of sense. I think the desktop and server use very similar kinds of tuning; they’re both tuned for higher frequency and higher performance. The desktop is a bit simpler; the ecosystem for desktop is a bit simpler. The server has a more complicated testing setup that needs to run, so that gives some context there. We really wanted to have a product in the high-end space. It was more a market strategy than a technical strategy.
    Q13: AMD currently has a very active semi-custom business, particularly when it comes to silicon design partnerships and when it comes to millions of Consoles. Speaking of custom silicon in consoles, current generation platforms currently use AMD’s low power ‘cat’ cores. Now that AMD has a small x86 core in Zen, and I know you won’t tell me what exactly is coming in the future, but obviously future consoles will exist, so can we potentially see Zen in consoles?
LS: I think you know our console partners are always very secretive about what they are trying to do, but hypothetically, yes, I would expect Zen to show up in semi-custom opportunities, including perhaps consoles, at a certain point in time.
    Q14: AMD is in a unique position for an x86 vendor, with significant GPU IP at its beck and call. Despite the recent redefinition of the GPU group under RTG, would/has AMD ever consider using joint branding (similar to how ASUS markets ROG motherboards and GPUs for example)?
LS: What I’d like to say is (and you’ll appreciate this) that I love CPUs and GPUs equally, and the markets are a little bit different. And so think about it this way: our GPUs have to work with Intel CPUs, and our CPUs have to work with NVIDIA GPUs, so we have to maintain that level of compatibility and clarity. That being said, I think with Ryzen you can imagine that as we launch Vega later in the year, you might hear about Ryzen plus Vega in systems, because they are both high-performance parts and we want to build great systems! I don’t know about co-marketing, but the idea of being able to say that ‘A plus A builds a great system’ is something we will do.
    On the vendors who do motherboards and GPUs, I agree it takes a lot of work to bring out a new brand. We do view though that the way that people who buy CPUs and the people who buy GPUs do overlap, but they are still quite distinct.
    Q15: Ryzen is priced very competitively compared to the competition, and it follows the common thinking that AMD is still the option for a cheaper build. Is that a characteristic that sits well with AMD?
LS: I think you should judge that question by what the 200 system integrators will be building with Ryzen, and what the OEMs will be building. My view is that the system needs to be good enough for the CPU being put in it. We are very picky about it, and we want Ryzen to be in high-end systems. Will we see Ryzen in some mainstream systems? Sure, but if you look at our introduction order, there’s a reason why we’re going Ryzen 7 first: it sets the brand. You know I want to sell millions and millions of processors, and I’ll sell a bunch that are less than eight cores, but having that eight cores and sixteen threads defines what we’re capable of.
    Q16: We've all seen the details of how desktop PC sales are down, but PC gaming revenue is increasing. There is no doubt that Ryzen will be marketed as a chip for gamers; how do you see consumers reacting to Ryzen?
LS: I think PC gaming is doing quite well; it is one of the hot markets. We are addressing gamers as a very important segment, but they are one of many important segments of users for us. We think Ryzen is a great gaming CPU, and you’ll test that for yourself – we’re not going to win every head-to-head, but if you think about gaming, do you want theoretical performance, or do you want the CPU to be good enough to showcase your GPU? I think what Ryzen allows is for those folks to do something more than just gaming. So your gaming CPU might only use four cores, but if you are doing video editing or streaming it will do a lot more. So I think what we’re trying to address is maybe the forward-thinking users, not just today’s gamer.
    Q17: A perennial question we get asked is 'Core i7, i5 or i3?'. There have been countless reviews and YouTube videos on the subject. With Ryzen in the mix, what becomes the new question?
    LS: I think you should help your users through that! I really think that’s the case. Ryzen 7 offers phenomenal capability for an eight-core, sixteen-thread device. As we introduce the next families you will see positioning but the end result is that you will see a top to bottom stack with a processor for everyone. At every price we will offer more performance. You will be able to see that in your own testing.
    Q18: Can you comment on whether Bristol Ridge will be available to consumers at any point?
LS: Yes, good question. The answer is yes. The idea is that if you have an AM4 platform you can put an APU in there. As you saw, we put Ryzen 7 first; I think the intent was to ensure that AM4 was solidified on Ryzen. I say yes because the strategy has always been for the AM4 socket to have a very long life and a very broad range. Exact timing has not yet been finalized.
    Q19: At the high-end, Ryzen 7 is competing with Intel's high-end desktop market. But where Intel has 28/40 native PCIe 3.0 lanes, Ryzen only has sixteen. Avago PLX switches are almost $100 each, which means a route to 2 full-bandwidth GPUs or >3 GPUs or more NVMe is hindered. Was sixteen PCIe 3.0 the right choice?
    LS: We think so, I mean we do. If you look at the distribution of PCs going into workstation I think that’s where the volume is. We still believe in the relative balance of performance and power in our decision, but it’s an interesting point.
    Q20: Your competition has had success with both 'Core' parts and 'Xeon' parts, with the latter being ECC and vPro and having the professional feature-set. With the launch, AMD does have its 'Pro' CPU line, but this is more for business agreements and not yet dealing with Ryzen. Does AMD discuss this internally?
LS: I think what you should expect is that Naples will go head to head in the server space, including single socket and dual socket. What you might be asking is if we are going to do something in between on the workstation stuff – I think it is fair to say that we view it all as interesting markets. To your question of ‘do we think about it?’, sure. Everything you roll out, you roll out with a set of priorities: we set on consumer first, then we are going to do the data center, and then we will see what comes next.
    Q21: Is there room for an 'Opteron' as a brand name?
    LS: You tell me!
    IC: I think so! I spoke to a few people today and reactions were mixed. Obviously coming up with a new brand name for a product line is difficult. I assume there is conversation internally, as I don’t suspect the product to be called Naples at launch.
    LS: I think you have hit on a topic that has had a good amount of debate [internally]. As we get closer to launch we will talk about branding on the server. But yes, it will not be called Naples in the market.
    Q22: AMD is formally launching Ryzen with AMD benchmark data and pre-orders on 2/22, but no third-party data until the official launch of 3/2. A lot of customers see value on third-party independent verified data, and might perhaps see this tactic in a bad light. Can you comment on the reasons for the launch structure?
LS: We had a couple of things going on internally, to give you our full thought process. Because we were going so wide on the ecosystem in terms of sampling – motherboard sampling, system integrators, a lot of OEMs, and a lot of ODMs – there has been (at least against my expectation) more chatter in the marketplace than I would have thought, just in terms of things that are out there. Some of them were true, some were not true…
    IC: Fake News!
LS: Ha! Our view was that we wanted to go through the review process very, very diligently, and we also wanted to own the news as the news cycle was coming out. So rather than the whole idea of taking the press out on the town to some boat or something, we thought that what these guys are writing is real, so give them time to perhaps not do a full review, but at least get comfortable with what they are seeing. I’ve said it before and I’ll say it again: everything we show you, what you see is what you get.
IC: The problem is the perception that, up to that point, every person in the chain that has ‘data’ wants to sell you something.
    LS: My view is that you guys are very smart guys, and you said you can do a lot in six hours!
(I had mentioned previously that I had brought 10kg of testing kit to Tech Day to test Ryzen in my hotel room that evening before catching a flight out. As this is being posted, we're still frantically writing up the review, so stay tuned as the official embargo lifts at 9am ET, and not a minute sooner...)




    More...

  9. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #6769

    Anandtech: The AMD Zen and Ryzen 7 Review: A Deep Dive on 1800X, 1700X and 1700

For over two years the collective AMD vs Intel personal computer battle has been sitting on the edge of its seat. Back in 2014, when AMD first announced it was pursuing an all-new microarchitecture, old hands recalled the days when the battle between AMD and Intel was fun to be a part of, and users were happy that the competition led to innovation: not long after those days ended, the Core microarchitecture became the dominant force in modern personal computing. Through the various press release cycles stemming from that original Zen announcement, the industry has been whipped into a frenzy waiting to see if AMD, through rehiring guru Jim Keller and laying the foundations of a wide and deep processor team for the next decade, can hold the incumbent to account. With AMD’s first use of a 14nm FinFET node on CPUs, today is the day Zen hits the shelves and benchmark results can be published: Game On!

    More...

  10. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #6770

    Anandtech: Meizu Unveils Super mCharge: Fast Charging At 55W

    Meizu unveiled a new fast-charging technology—called Super mCharge—at MWC 2017 that’s capable of fully charging a 3000 mAh battery in just 20 minutes. Rapid charging has grown from novelty to highly desirable feature in a short period of time, with it being particularly popular in China, Meizu’s home market.
    Great Scott!

    While not powerful enough to send a DeLorean back to the future, the 55W rating for Super mCharge (11V, 5A) is significantly higher than anything we’ve yet seen. For comparison, Motorola’s TurboPower is rated for 28.5W, and Qualcomm’s Quick Charge 3.0 hits 18W.
    Meizu is using a charge pump, a type of DC to DC converter that uses an external circuit to control the connection of capacitors to the input voltage. By disconnecting the capacitor from the source via a switch and reconfiguring the circuit with additional switches, the charge pump’s output voltage can be raised or lowered relative to the input. Keeping the capacitors small and the switching frequency high improves efficiency. Meizu is claiming 98% efficiency for its design, and while charge pumps are known for high efficiency, this seems a little high at first glance.
    For Super mCharge, Meizu is dividing the input voltage in half, which doubles the output current. To accommodate the current increase, Meizu is pairing its new fast-charging circuit with a new lithium-based 3000 mAh battery made with “advanced manufacturing processes” that can handle 4x the current of previous batteries. This new battery is said to retain 80% of its original charge capacity after 800 complete charge cycles, where a charge cycle is defined as any possible sequence that ultimately goes from 100% to 0% to 100%. This rating is actually at the high end of the scale, with most fast-charging methods rated for 500 cycles or a little more. Battery life is likely improved by keeping temperature in check; Meizu claims that battery temperature does not exceed 38 °C (100 °F), a full 6 °C less than a competing solution in its testing.
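The headline numbers hang together, as a quick sketch shows (the divide-by-two battery-side voltage is inferred from Meizu's description above, not officially published):

```python
# Super mCharge: 11V @ 5A at the connector, divide-by-2 charge pump, 98% efficiency
v_in, i_in, efficiency = 11.0, 5.0, 0.98

p_in = v_in * i_in                 # 55 W at the connector
v_out = v_in / 2                   # halving the voltage...
i_out = p_in * efficiency / v_out  # ...roughly doubles the current: ~9.8 A
print(f"{p_in:.0f} W in -> {v_out:.1f} V @ {i_out:.1f} A on the battery side")

# Sanity check against the 20-minute claim for a 3000 mAh cell:
print(f"implied average charge current: {3.0 / (20 / 60):.0f} A")  # 9 A, within the ~9.8 A available
```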
    Super mCharge includes voltage, current, and temperature monitoring for battery health and safety. Because the USB Type-C cable conducts more than 3A of current, it includes an E-mark IC (electronically marked safety chip) on one connector.
    Meizu did not say when we’ll see Super mCharge in a shipping device, but I would not be surprised to see it later this year.


    More...
