Thread: Anandtech News

  1. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,807
    Post Thanks / Like
    #6391

    Anandtech: Qualcomm Announces New X50 5G Modem, First Gigabit Class LTE Network and R

    Qualcomm made three announcements about connectivity products today at its 2016 4G/5G Summit in Hong Kong. The first announcement, aimed at easing the transition between the 4G LTE networks we have today and the 5G networks of the future, is a partnership with Netgear, Telstra, and Ericsson to deliver the first commercial gigabit class LTE device and network. Qualcomm sees gigabit LTE as a necessary building block and foundation for 5G, providing more bandwidth over a wider area until even higher bandwidth, shorter range 5G networks come online. Telstra, an Australian telecommunications provider, is currently building and testing the first gigabit LTE network, with equipment and software from Ericsson, in a bid to bring higher bandwidth internet access to the home without the expense of laying fiber cables.
    Telstra is pushing for a commercial launch in the next few months and will be providing its customers with the new Netgear Mobile Router MR1100—a gigabit LTE endpoint and Wi-Fi router that delivers up to 1Gbps download speeds through a combination of 3x carrier aggregation, MIMO technology (4x4 MIMO on two aggregated carriers plus 2x2 MIMO on the third carrier), and 256-QAM modulation. Inside is a Qualcomm Wi-Fi solution and Qualcomm’s Snapdragon X16 LTE modem paired with Qualcomm’s WTR5975 RF transceiver.
    This is the Snapdragon X16’s first appearance after it was unveiled back in February of this year at Qualcomm’s annual Analyst Day. The X16 is a sixth-generation discrete LTE modem manufactured on a 14nm FinFET process that’s capable of 1Gbps (Category 16) on the downlink and 150Mbps (Category 13) on the uplink. According to Qualcomm:
    The Snapdragon X16 LTE modem is designed to reach Gigabit Class LTE speeds using the same amount of spectrum as Category 9 LTE devices. By using carrier aggregation and 4x4 MIMO, the Snapdragon X16 LTE modem can receive 10 unique streams of data using only three 20 MHz carriers. Its support for 256-QAM boosts the peak throughput of each stream from ~75 Mbps to ~100 Mbps, with additional gains possible with modem data compression. Additionally, with support for Licensed Assisted Access (LAA) as well as LTE-U, this combination reduces the amount of licensed spectrum required—to 40 MHz or less—to vastly expand the number of operators that can deploy Gigabit Class LTE speeds around the globe.
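    To sanity-check those numbers, here is a minimal Python sketch of the peak-rate arithmetic; the stream counts and per-stream rates come from Qualcomm's description above, while the back-of-the-envelope model itself is ours rather than anything Qualcomm publishes:

        # Back-of-the-envelope peak downlink math for the Snapdragon X16,
        # per the stream/carrier breakdown in Qualcomm's description above.
        streams = 2 * 4 + 1 * 2   # 4x4 MIMO on two carriers + 2x2 MIMO on a third
        per_stream_64qam = 75     # ~75 Mbps per stream with 64-QAM
        per_stream_256qam = 100   # ~100 Mbps per stream with 256-QAM

        print(streams)                      # 10 unique streams
        print(streams * per_stream_64qam)   # 750 Mbps without 256-QAM
        print(streams * per_stream_256qam)  # 1000 Mbps -> "Gigabit Class" (Cat 16)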
    The X16 LTE modem will find a home integrated inside the next, currently unnamed, Snapdragon SoC, which is the second big announcement today. This next generation SoC will be powering flagship phones in 2017, enabling faster download speeds as carriers begin rolling out gigabit capable networks over the next 12 months, according to Qualcomm.
    The final big announcement is Qualcomm’s first 5G modem, the Snapdragon X50. Designed to work only in the 28GHz millimeter-wave spectrum initially (supporting the Verizon 5GTF and KT 5G-SIG specifications), it’s capable of 8x100MHz carrier aggregation, giving it access to 800MHz of spectrum versus 80MHz (4x20MHz) for Qualcomm’s X16 LTE modem. All of this extra bandwidth enables up to 5Gbps on the downlink.
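    The spectrum gap between the two modems is straightforward arithmetic; a quick sketch using the carrier counts and widths from the paragraph above:

        # Aggregated spectrum: Snapdragon X50 (5G mmWave) vs. Snapdragon X16 (LTE).
        x50_mhz = 8 * 100  # 8x100 MHz mmWave carriers = 800 MHz
        x16_mhz = 4 * 20   # 4x20 MHz LTE carriers     = 80 MHz
        print(x50_mhz / x16_mhz)  # 10.0 -> 10x the spectrum, enabling up to 5 Gbps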
    The downside to using higher frequencies, however, is their limited penetration and range. Qualcomm looks to mitigate these issues by using an array of many small antennas and employing adaptive beamforming and beam tracking to enable non-line-of-sight operation, essentially using surface reflections to see around corners and objects.
    The Snapdragon X50 5G modem will be paired with two of the new SDR051 millimeter-wave transceivers and the PMX50 power management IC to support fixed wireless applications or future mobile devices. Because it only supports millimeter-wave frequencies, it needs to be paired with an additional 4G LTE modem to provide data uplink and voice capabilities and to allow seamless handoff between 4G and 5G networks depending on availability.
    Qualcomm expects the Snapdragon X50 to begin sampling in the second half of 2017, with the first commercial products incorporating the new components showing up in the first half of 2018.



    More...

  2. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,807
    Post Thanks / Like
    #6392

    Anandtech: Razer Acquires Majority Of THX Ltd Assets

    Today, Razer acquired the majority of the assets of THX, including the management and employees of THX. THX will continue to operate as an independent subsidiary of Razer, under its own management. Financial terms of the deal were not disclosed.
    This is a bit of an interesting acquisition for Razer, a company that focuses quite heavily on the PC gaming market, from systems to accessories. THX is an audio company founded in 1983 by George Lucas, and for the past thirty-three years it has focused on cinema audio systems, home audio systems, and audio certification.
    THX will continue to focus on its three main areas of business. The Certification business is the one they are likely best known for: the THX logo on a piece of equipment means that the equipment has been tested against a set of criteria determined by THX, and over the years that certification has expanded to more devices and more levels of certification. THX Live! is a second area of the company, focused on providing the best possible audio experience at large venues, as well as replicating that kind of experience in the home. THX Inside focuses on the technologies inside of equipment.
    One big change from this announcement is that THX is now going to offer additional certification lines for headphones, Bluetooth speakers, streaming video, set-top boxes, and connected speakers. It's pretty easy to see the Razer influence here, since Razer makes many of these kinds of peripherals, and collaboration between the two seems likely in the future. A THX certified set of Razer Man'O'War 7.1 headphones would certainly be a great place to start, and any assistance from the audio engineers at THX in making the product better would be a good result. Razer offers not only headphones but also computer speakers like the Leviathan, which also offers Bluetooth connectivity, and it even has a streaming system in the Forge TV, so the advantages to Razer of owning an audio engineering company seem pretty solid. THX has also been dabbling in VR, and Razer is a major sponsor of the OSVR kit.
    The advantage to THX is, of course, better funding. Razer has been very successful as a brand, and an infusion of funding will enhance THX's efforts.
    Although this is not an acquisition most people would have foreseen, there are some nice advantages for both parties. The THX brand has been diluted a bit over the years, so if anything it would be great if an infusion of funds from Razer helped stabilize it, and in return, solid audio engineering for Razer's products definitely seems like a win for them. I'm looking forward to seeing the improvements the two companies can make together on the products I use.
    Source: THX via Razer


    More...

  3. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,807
    Post Thanks / Like
    #6393

    Anandtech: NVIDIA Announces GeForce GTX 1050 Ti & GTX 1050: Entry-Level Cards Launchi

    After a break of a couple of months in their Pascal launch schedule, NVIDIA is back again to launch new Pascal desktop products. Following the near-perfect top-down launch schedule that started with the GeForce GTX 1080 in May, being announced today and formally launching next week are the GeForce GTX 1050 Ti and GeForce GTX 1050. Aimed at the entry-level discrete video card market, these products will round out the GeForce 10-series desktop lineup.
    Launching their low-tier cards last instead of first for the Pascal generation marks an interesting inversion of what happened with the Maxwell generation. In 2014 it was the low-end Maxwell 1 parts that launched first, only to be followed by the Maxwell 2 parts later in the year. As a result, the Maxwell 2 family went through a full cycle – from release to retirement – before NVIDIA's entry-level cards were refreshed. Out of all of the segments in the NVIDIA product stack, it's fair to say that the entry level was the one most due for a refresh.
    And to that end, here we are with the GeForce GTX 1050 series. The previous GeForce GTX 750 series did very well for NVIDIA, so much so that the new 1050 series follows a number of the beats laid out by its predecessor. NVIDIA is launching two cards – both of which are based on the new GP107 – which set up a two-tier product offering for the entry-level market. The faster of the two cards is the GTX 1050 Ti, while the GTX 1050 follows closely to offer a bit less performance at a lower price point. And in order to maximize compatibility, both cards are being offered in configurations that draw their power entirely from the PCIe bus, forgoing the need for an external power connection.
    NVIDIA GPU Specification Comparison

                           GTX 1060 3GB  GTX 1050 Ti   GTX 1050      GTX 750 Ti     GTX 750
    CUDA Cores             1152          768           640           640            512
    Texture Units          72            48            40            40             32
    ROPs                   48            32            32            16             16
    Core Clock             1506MHz       1290MHz       1354MHz       1020MHz        1020MHz
    Boost Clock            1709MHz       1392MHz       1455MHz       1085MHz        1085MHz
    Memory Clock           8Gbps GDDR5   7Gbps GDDR5   7Gbps GDDR5   5.4Gbps GDDR5  5Gbps GDDR5
    Memory Bus Width       192-bit       128-bit       128-bit       128-bit        128-bit
    VRAM                   3GB           4GB           2GB           2GB            1GB
    FP64                   1/32          1/32          1/32          1/32           1/32
    TDP                    120W          75W           75W           60W            55W
    GPU                    GP106         GP107         GP107         GM107          GM107
    Transistor Count       4.4B          3.3B          3.3B          1.87B          1.87B
    Manufacturing Process  TSMC 16nm     Samsung 14nm  Samsung 14nm  TSMC 28nm      TSMC 28nm
    Launch Date            08/18/2016    10/25/2016    10/25/2016    02/18/2014     02/18/2014
    Launch Price           $199          $139          $109          $149           $119
    Diving into the specs, we'll start with the GTX 1050 Ti. Based on a fully enabled GP107 GPU, this card is arguably the backbone of NVIDIA's entry-level offerings. All told, it has 6 SMs enabled – 60% that of GP106/GTX 1060 – so GP107 is a bit more than half of a GP106. The rest of the Pascal architecture has been scaled similarly; GP107/GTX 1050 Ti retains 2/3rds of the ROP and memory controller configuration, meaning we're looking at 32 ROPs attached to a 128-bit memory bus. Notably, this is double the number of ROPs found on the GTX 750 series, so all other factors held equal, GTX 1050 Ti will see a massive jump in ROP throughput compared to its predecessor.
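    The CUDA core counts in the table above fall directly out of the SM counts; consumer Pascal parts pair 128 CUDA cores with each SM, so a quick sketch:

        # Consumer Pascal: 128 CUDA cores per SM.
        CORES_PER_SM = 128
        print(6 * CORES_PER_SM)  # 768  -> GTX 1050 Ti (fully enabled GP107)
        print(5 * CORES_PER_SM)  # 640  -> GTX 1050 (one SM disabled)
        print(9 * CORES_PER_SM)  # 1152 -> GTX 1060 3GB (cut-down GP106)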

    Unofficial GP107 Block Diagram
    Feeding the GTX 1050 Ti is 4GB of GDDR5 memory, clocked at 7Gbps. This is a budget card – and a power-limited one at that – so NVIDIA has pulled back on the memory clocks compared to the other Pascal cards. Meanwhile power consumption starts at 75W, the maximum amount allowed to be pulled from a PCIe slot. I say “starts” because NVIDIA will be allowing partners to sell factory overclocked cards, and these cards will feature a higher TDP and an external power connector in order to meet the card's power needs. The significance of offering a 75W-and-under card cannot be overstated; there is a sizable market of end users who would like to upgrade an OEM system whose power supply doesn't offer an external power connector, and this is a role the preceding GTX 750 filled very well. Meanwhile HTPC users who were holding out for a 75W card will be equally pleased, as Pascal's suite of media features is now available on a lower-power card.
    Joining the GeForce GTX 1050 Ti is its smaller, cheaper sibling, the GTX 1050. Based on a cut-down GP107 GPU, the GTX 1050 drops 1 SM and 2GB of memory. This leaves us with a 5 SM (640 CUDA core) card paired with 2GB of GDDR5 running at 7Gbps. Otherwise it has the full ROP complement and memory bus, so while the GTX 1050 loses some shader and geometry throughput, in other areas it holds firm. In fact, due to the unusual clockspeeds of these cards – more on this in a moment – the GTX 1050 is actually clocked higher than the GTX 1050 Ti. So the net performance difference on paper is less than the loss of the SM would suggest; the smaller card should offer around 87% of the GTX 1050 Ti's performance. With that said, unlike the last generation you don't save any power versus the Ti card when going by the official TDP, as the GTX 1050 is also a 75W card – 20W more than the GTX 750. Consequently, while it's still a card that can run on just PCIe slot power, by NVIDIA's own numbers we may be looking at a relatively sizable increase in power consumption relative to its predecessor.
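    That ~87% figure is simple paper math – shader count times boost clock – ignoring the ROP and memory side, which the two cards share. A quick sketch:

        # Paper shader throughput = CUDA cores x boost clock (MHz).
        gtx1050ti = 768 * 1392
        gtx1050 = 640 * 1455
        print(gtx1050 / gtx1050ti)  # ~0.871 -> ~87% of the GTX 1050 Ti on paper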
    GP107 – An Enigma of a GPU

    Having covered the basic specifications, I want to spill a bit more ink talking about the GP107 GPU. Reading the specs table, the GTX 1050 series cards are very unusual compared to their more powerful siblings. To be sure they’re still Pascal cards, but certain elements we take for granted about the Pascal family don’t apply here. At the same time there are certain elements we take for granted about x50 series cards which also aren’t applicable here. GP107 is, at the moment, an enigma of a GPU.
    I’ll address what’s likely the elephant in the room first, which is the manufacturing process. To date all Pascal GPUs have been fabbed over at TSMC on their 16nm FinFET process. GP107 is not one of those GPUs. Instead, it’s fabbed on a 14nm process – NVIDIA’s specification sheet doesn’t technically state whose process – but by simple elimination it’s a very safe bet that they’re making the chip over at Samsung. Feature size is a red herring here, and instead the significance of this deal is that NVIDIA has not used a fab other than TSMC for GPUs for a long time. In fact we’d have to go back to 2003 to find an NVIDIA GPU fabbed somewhere else, when NVIDIA tapped IBM to help fab the ill-fated NV3x series (GeForce FX).
    Suffice it to say, tapping another fab is a very big deal. There's no second-sourcing here – GP107 is only being made on Samsung's 14nm process and GP106+ only on TSMC's 16nm process – but splitting orders like this may well be new territory for NVIDIA. As this is just a product announcement, NVIDIA hasn't said anything about the change in fabs, so let your imagination go wild here, but it definitely has some ramifications. I really need to get the GTX 1050 cards in house and on the testbed to understand the full impact of this change, but I think the most important consequence is that a new process from a new vendor means that the voltage/frequency curve we've come to know with TSMC 16nm and Pascal has essentially been thrown out the window.
    This in turn may explain the clockspeeds of the GTX 1050 cards. All of the other desktop GeForce 10-series cards have an official boost clock of 1600MHz+, with all but one of those cards being 1700MHz+. The massive jump in clockspeed relative to Maxwell 2 is one of the signature elements of the Pascal architecture, and a major factor driving the significant performance gains of this generation compared to the last. The GTX 1050 series, by comparison, is only rated to boost up to 1455MHz for the GTX 1050, and lower still for the GTX 1050 Ti at 1392MHz.
    Given that these are power-constrained cards, the final specifications of the cards are bound by a larger number of variables than usual – power curves, attainable frequency range, and now total power consumption – so I’m not even going to try to insinuate that the lower clockspeeds are solely a function of the change in fabs. However it’s very important to keep in mind that these lower clockspeeds come with a sometimes sizable increase in TDP relative to the GTX 750 series; instead of 55W/60W cards, we have 75W cards. So to use the fully enabled GTX 1050 Ti as an anchor point, power consumption has gone up 15W (25%) for a 28% increase in the boost clock, 1 more SM (20%), and somewhat decoupled from this, the doubled ROP count.
    It’s telling then that NVIDIA has informed the press that the higher TDP cards with an external power connector are going to have much higher boost clocks. Whatever is going on under the hood, power plays a big part, and at a TDP limit of 75W, GP107 isn’t getting all the room it needs to stretch. Meanwhile it’s also noteworthy that NVIDIA’s own marketing materials call for GTX 1050 to have a 3x performance increase over GTX 650, and only a bit over 50% increase over GTX 750 Ti.
    At the same time though, keep in mind we're looking at a generation-and-a-half architectural jump from the GTX 750 series (GM107) to the GTX 1050 series (GP107). So NVIDIA has had to spend quite a bit of their transistor budget on supporting new features – not just graphical features like SMP and Feature Level 12_1, but also features like the new video and display blocks and the full fixed-function HEVC encode and decode blocks. By virtue of being the smallest Pascal, GP107 spends relatively more of its die area on non-graphics features. For those reasons the transistor count is quite a bit larger than GM107's; NVIDIA has gone from 1.87B transistors to 3.3B, an increase of 76% (greater than the increase for any of the other Pascal GPUs). Or to put this another way, GP107 has 75% of the transistor count of GP106. Die size meanwhile stands at 135mm2, down a bit from the 148mm2 die size of GM107.
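    For reference, the generational deltas quoted in the last few paragraphs all check out against the spec table; a quick sketch of the percentages:

        # GTX 750 Ti (GM107) -> GTX 1050 Ti (GP107) deltas from the spec table.
        print((75 - 60) / 60)        # TDP: +25% (60W -> 75W)
        print((1392 - 1085) / 1085)  # Boost clock: ~+28%
        print((3.3 - 1.87) / 1.87)   # Transistors: ~+76% (1.87B -> 3.3B)
        print(3.3 / 4.4)             # GP107 has 75% of GP106's transistors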
    Ultimately GP107 is not just another Pascal GPU. While it offers the same feature set, there’s more than meets the eye, and it will be interesting to see how things shake out in benchmarking and overclocking. Ahead of launch, this is easily the least predictable GPU and card set of the entire Pascal family.
    GTX 1050 Series Launch Info – No Reference Cards, GTX 1050 Ti Available Next Week

    Getting back to the cards at hand, let's talk about positioning, pricing, and availability. As these are NVIDIA's entry-level cards, it goes without saying that they're not targeted at NVIDIA's usual horde of PC gaming enthusiasts. These cards won't have the power to run games at 1080p with all the bells and whistles turned up – that's really what the GTX 1060 is for – but instead they're meant to be the next step up from integrated GPUs. This is a market that the GTX 750 series served very well (and judging from market share, sold to very well), and the GTX 1050 series will do the same. From specifications alone, the GTX 1050 series should be head and shoulders above the Intel GT2 iGPU found on Skylake/Kaby Lake.
    Unlike the GTX 750 series, NVIDIA won’t be producing any reference cards this time around, Founders Edition or otherwise. So all GTX 1050 series cards are going to be vendor custom designs. Expect a mix of cards that follow the 75W TDP and cards that offer factory overclocks in exchange for a higher TDP. If for some reason you’re after a slot-powered card, be sure to check the specifications closely.
    Meanwhile, although NVIDIA is listing the launch date as the 25th for both cards, from what NVIDIA has told me at the product briefing, in reality this isn’t quite the case. GTX 1050 Ti should in fact be available next week, right in time for the launch. However NVIDIA is not expecting GTX 1050 to be in stock for a few more weeks. Vendors can sell the cards as soon as they have them, so indeed the retail embargo ends on the 25th, but if the GTX 1050 interests you, don’t expect to be able to buy it until around mid-November.
    Finally, let’s take a look at the completive landscape. Besides the outgoing Maxwell 1 and Maxwell 2 cards, the current-generation competition for the GTX 1050 series is AMD’s already-launched Radeon RX 460. The $109 GTX 1050 is the very clear counterpart to the 2GB RX 460. Meanwhile the $139 GTX 1050 Ti is in a bit of a different place; the cheaper 4GB RX 460 cards are running closer to $120 right now, so the GTX 1050 Ti is essentially $20 more expensive, which is a small but still significant difference given the overall low prices of entry-level cards. Still, it means that AMD’s two-month run as the sole supplier of current-generation entry-level video cards is about to come to an end.
    Gallery: GeForce GTX 1050 Series Announcement Slide Deck




    More...

  4. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,807
    Post Thanks / Like
    #6394

    Anandtech: The Samsung 960 Pro (2TB) SSD Review

    A year ago, Samsung brought their PCIe SSD technology to the retail market in the form of the Samsung 950 Pro, an NVMe M.2 SSD with Samsung's 3D V-NAND flash memory. The 950 Pro didn't appear out of nowhere—Samsung had shipped two generations of M.2 PCIe SSDs to OEMs, but before the 950 Pro they hadn't targeted consumers directly.
    Now, the successor to the 950 Pro is about to hit the market. The Samsung 960 Pro is from one perspective just a generational refresh of the 950 Pro: the 32-layer V-NAND is replaced with 48-layer V-NAND that has twice the capacity per die, and the UBX SSD controller is replaced by its Polaris successor that debuted earlier this year in the SM961 and PM961 OEM SSDs. However...

    More...

  5. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,807
    Post Thanks / Like
    #6395

    Anandtech: Huawei announces the HiSilicon Kirin 960: 4xA73 + 4xA53, G71MP8, CDMA

    Last year we had the opportunity to attend HiSilicon's launch event for the Kirin 950. The 950 was the first SoC to employ ARM's Cortex A72 in a smartphone, and the big.LITTLE 4x4 A72/A53 configuration of the 950 powered most of Huawei's flagships for 2016. It also marked a significant jump in both performance and efficiency compared to previous Kirin chipsets. The CPU in particular made a very good impression in our review of the Mate 8; at the time we estimated that it would easily trade blows with Qualcomm's and Samsung's own custom ARM microarchitecture designs, Kryo and Exynos M1, in their respective SoCs and flagship smartphones. Ultimately the Kirin 950 exceeded our expectations, in part due to the rather lackluster showing of this year's custom-CPU powered Android devices.
    The Kirin 950 initially launched in the Huawei Mate 8, their then-new flagship smartphone. The SoC then followed into the Huawei P9, P9 Plus, and the Honor 8 over the course of 2016 - essentially all the premium devices for Huawei and Huawei's sub-brands - contributing to the 106 million smartphone sales Huawei recorded in 2016 (30% of which were devices over $450). This is the success that Huawei wants to emulate with the Kirin 960.
    Earlier this year, ARM announced their next generation microarchitecture IP in the form of the Cortex-A73 general-purpose core and the Mali-G71 graphics core. The A73 is a departure from the design of the A72 - the A72 was related to the A57 and A15 (the Austin µarch family), whereas the A73 is a successor to the little-used Cortex A17 of the Sophia µarch family. Notable differences between the A73 and the A72 include a reduction of the decode width from 3 to 2, but don't let this change fool you, as the new microarchitecture is supposed to be faster, smaller, and more efficient than the A72. On the GPU side, the Mali G71 is an even bigger upgrade for ARM, as it marks the first IP of the Bifrost architecture, which beyond a nomenclature change from the Mali-Txxx series also brings significant performance and efficiency improvements.
    Back in May we said that we would expect SoCs and devices with these new IPs by the end of the year, and indeed, the new Kirin 960 SoC from Huawei's semiconductor subsidiary HiSilicon combines these two new technologies, promising the first commercial device to use them in due course.
    While more details of the Kirin 960 will come forth over time as it makes its way into various devices, the official launch in Shanghai reiterated the partnership between HiSilicon and TSMC, using the 16FF+ process as the primary technology to enable the new SoC.
    HiSilicon High-End Kirin Lineup

    SoC                Kirin 950                     Kirin 960
    CPU                4x Cortex-A72 @ 2.3 GHz       4x Cortex-A73 @ 2.4 GHz
                       4x Cortex-A53 @ 1.8 GHz       4x Cortex-A53 @ 1.8 GHz
    Memory Controller  2x 32-bit LPDDR3 or LPDDR4    2x 32-bit LPDDR3 or LPDDR4
                       (hybrid controller)           (hybrid controller)
    GPU                ARM Mali-T880MP4 @ 900 MHz    ARM Mali-G71MP8 @ 900 MHz
    Interconnect       ARM CCI-400                   ARM CCI-550
    Encode/Decode      1080p H.264 Decode & Encode   ? Decode & Encode
                       2160p30 HEVC Decode           2160p30 HEVC Decode
    Camera/ISP         Dual 14-bit ISP, 940MP/s      ?
    Sensor Hub         i5                            i6
    Storage            eMMC 5.0                      UFS 2.1
    Integrated Modem   Balong Integrated             Integrated
                       UE Cat. 6 LTE                 UE Cat. 12 LTE
                                                     4x CA, 4x4 MIMO
    The Kirin 960 features the same 4x4 big.LITTLE design as the Kirin 950, but with four Cortex A73 cores at 2.4 GHz (by contrast, the Cortex A72 cores ran at 2.3 GHz in the Kirin 950 and 2.5 GHz in the Kirin 955). The little cores remain Cortex A53s in the same 1.8 GHz configuration.
    Performance for the Kirin 960, as presented in Huawei's announcement, is +10% in single-core and +18% in multi-core CPU performance compared to the Kirin 950. (This isn't quite as much as we would have hoped from an A73 design, though.) HiSilicon presented GeekBench 4 scores for the new chipset, and the Kirin 960 seems to benchmark around 2000 for single-core and 6400 for multi-core. The single-core score slightly beats other Android SoCs on the market, however it isn't too drastic an improvement. We don't have more exact sub-scores yet, but one thing we do know about is the vastly improved memory bandwidth of the new design, which seems to beat all other SoCs at the moment. This is surely a result of both the Cortex A73's strong memory subsystem as well as the new use of the CCI-550 in the Kirin 960, which also promises an increase in effective memory bandwidth over the preceding CCI-400 design.
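    Working backwards from Huawei's claimed uplifts gives a rough implied baseline for the Kirin 950; a sketch, treating the ~2000/~6400 GeekBench 4 figures and the +10%/+18% claims at face value:

        # Implied Kirin 950 GeekBench 4 scores from Huawei's claimed uplifts.
        k960_single, k960_multi = 2000, 6400
        print(round(k960_single / 1.10))  # ~1818 implied single-core for the 950
        print(round(k960_multi / 1.18))   # ~5424 implied multi-core for the 950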
    Initially we expected A73 designs to use the improved power envelope to drive clock frequencies higher, however in the case of the Kirin 960 the A73 is merely 100MHz above the Kirin 950, and actually 100MHz slower than the 955's A72. The Kirin 950 already had the best sustainable CPU power envelope among this year's SoCs, so it would be surprising if HiSilicon decided to lower the power envelope even further at the cost of performance - however we have confirmation that that is exactly what happened, as we've been told the CPU's power envelope has been lowered in favour of the GPU. This matches ARM's philosophy for the new A73 that was presented back at TechDay, so the Kirin 960 seems to put that philosophy into practice. While there wasn't any talk about power efficiency, we expect it to improve given that performance has gone up while power is supposed to have gone down.
    A Bigger, Faster, More Powerful GPU

    As mentioned earlier, the GPU sees a significant shift as we move from a Mali T880MP4 to a Mali-G71MP8, which doubles the number of shader cores employed. HiSilicon's decision to go with a wider GPU on the same process node was undoubtedly made much easier by the G71's much better die area efficiency. The new GPU runs at 900 MHz, the same as in the older SoC. The combination of doubling the shader cores and moving to a new architecture results in a claimed 180% increase in GPU performance compared to the Kirin 950. As a result, Huawei is positioning the Kirin 960 directly above the other high-end SoCs launched this year (though we expect the other SoC vendors to post respective increases with new generations soon). Huawei's internal benchmark shows the performance of the G71MP8 as above both the Snapdragon 820 and the Exynos 8890, although Huawei did not elaborate on any testing details.
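    If we read the claimed 180% increase as a 2.8x total uplift (an assumption on our part - Huawei did not break the number down), the core-count doubling and the per-core architectural gain separate cleanly:

        # Decomposing the claimed Kirin 960 GPU uplift, reading "+180%" as 2.8x total.
        total_uplift = 2.8
        core_uplift = 8 / 4                # T880MP4 -> G71MP8 at the same 900 MHz
        print(total_uplift / core_uplift)  # ~1.4x left for Bifrost per-core gains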
    One of the popular criticisms of previous Huawei SoCs is the lack of graphics horsepower, with designs often opting for a four-core 'MP4' configuration despite using the high-performance graphics cores available from ARM. An MP4 configuration kept the die size of the SoC small and easier to implement. The downside of this is typically efficiency, suggesting that high graphics loads with fewer cores run at a worse point of the power efficiency curve to get an acceptable result.
    Arguably most graphics scenarios on a smartphone, apart from extreme gaming titles and VR, can easily be served by an MP4 configuration, and that was HiSilicon's main rationale for past GPU configurations. However, as Huawei devices mature and get more premium, there is also the fact that when buying a device above $450, the user should expect something performing near the best in the market, and it was clear that an MP4 design could not do that. The fact that the Kirin 960 moves to an MP8 design, while increasing die area and maximum power consumption, means that a Kirin SoC moves up to compete with larger GPU configurations such as Samsung's large Mali designs in Exynos SoCs, Qualcomm's Adreno in the Snapdragons, or the 6-cluster PowerVR GPU in Apple's A-series. The mention of higher maximum power consumption comes with rather mixed feelings, as reasonably low GPU power was one of the aspects of the Kirin 950 that we especially praised compared to other SoC designs. We'll have to wait for real devices to see whether this improvement is a double-edged sword that introduces more severe GPU thermal throttling to the Kirin lineup.
    Nevertheless, the improvement paves the way for things like VR, especially given Google's recent announcement of Daydream VR (starting with the Snapdragon 821 in the Google Pixel smartphone). With the right software support and display, we would expect future devices using the Kirin 960 to support Daydream VR. On the API side, the Kirin 960 will be promoted with Vulkan support.
    New Category 12/13 Modem with a Custom CDMA Implementation

    The Kirin 960 also features a new HiSilicon based modem, allowing up to LTE UE Category 12/13 connectivity for up to 600 Mbps transfers, equaling the specifications of Qualcomm's Snapdragon 820 or Samsung's Exynos 8890 modems. The modem is being promoted as the first commercial SoC to provide quad carrier aggregation to meet that speed, although it will obviously require carrier support in the region in order to do so. The modem supports 4x4 MIMO (at only 2xCA) as well as up to 256QAM spatial stream modulation.
    One of the more interesting announcements from the Kirin briefing was the implementation of CDMA in the modem. Currently three smartphone modem providers have CDMA solutions (Qualcomm in integrated and discrete modems, Intel with discrete, and MediaTek with VIA-based integrated), and we spoke with HiSilicon to confirm that this is a brand new custom CDMA solution, rather than a licensed platform. The value of CDMA is mixed, although it is a required element for certain carriers in China and the US, and it means Huawei can now offer competitive Kirin 960 devices on those networks. It should be pointed out that CDMA certification for the US via the FCC takes 18-24 months, and I was unable to confirm when the process was started, so we may have to wait another year for US-focused CDMA devices.
    We noticed that previous Kirin modem designs were marketed under the 'Balong' name, however that name has been dropped for this model. We were told that Balong is an old brand and is being retired at this performance level, although it is unclear if the modem in the Kirin 960 or future SoCs will be branded at all. This may be related to the fact that, when asked about the presence of CEVA LTE IP in the modem similar to previous models, we were told that the LTE design in the new modem is a custom internal design without CEVA.
    A Good Showing

    Naturally we expect Huawei's next generation flagships to implement the Kirin 960 and its variants over the next few months, especially as we move towards Mobile World Congress in February. Similarly, we would expect updates to the Kirin line, such as clock speed increases (the 955 is +200 MHz over the 950), moving into the middle of next year, along with an Honor device or two as time goes on. Huawei still keeps its HiSilicon SoC portfolio purely in-house, such that other smartphone OEMs do not have access to it, and given Huawei's current success in unit sales and revenue, we would expect them to continue the status quo on this front.
    As perhaps was to be expected, when we asked Huawei about competitors' 10nm plans (especially in light of recent news), no official statement was made. It's very hard not to theorise comparisons between the Kirin 960 and next year's Exynos and Snapdragons; while the former does bring substantial improvements to the table, it'll be a hard fight competing against the new generation. Given Huawei and HiSilicon's semiconductor product cycles, we might expect to hear more when TSMC's foundry business starts making more public noise about the availability of future nodes and silicon mass production (or even Intel, depending on relationships). We will probably have to tune in next year, perhaps at a similar time, for more concrete news about chipset roadmaps.
    We expect the first devices to use the Kirin 960 to come out shortly.
    Further Reading

    Andrei's Analysis of the ARM Cortex-A73 'Artemis' CPU Microarchitecture
    Ryan's Analysis of the ARM Mali-G71 'Bifrost' GPU Architecture
    The Huawei Mate 8 Review
    The Honor 8 / Huawei P9 Review


    More...

  6. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,807
    Post Thanks / Like
    #6396

    Anandtech: Netgear Launches 802.11ad-Enabled Nighthawk X10 (R9000) Wi-Fi Router

    The Wi-Fi router market has seen a yearly cadence of flagship releases since the first 802.11ac routers came into the market in early 2012. Starting with 3x3 solutions, the market moved on to 4x4 and MU-MIMO-enabled Wave 2 solutions. We also saw Broadcom taking the lead with a dual 5 GHz radio solution (that has now been adopted by Qualcomm Atheros also). Many networking equipment vendors introduced solutions based on this. The recent mesh networking craze is also an offshoot of this dual radio solution - a dedicated 'invisible' channel is used to link the satellite to the main router (a discussion for another day). At CES earlier this year, the TP-Link Talon announcement indicated that tri-band Wi-Fi was getting traction in the market. Tri-band Wi-Fi involves radios operating in 2.4 GHz (802.11n), 5 GHz (802.11ac) and 60 GHz (802.11ad) bands. Today, Netgear is launching their first tri-band Wi-Fi solution - the flagship Nighthawk X10. It will be marketed as an AD7200-class router and will carry the R9000 model number.
    Similar to the other AD7200 routers in the market, the Nighthawk X10 adopts a dual band 4x4 802.11ac solution for (1733 Mbps + 800 Mbps) and a 1x1 802.11ad solution for 4600 Mbps to justify the AD7200 tag.
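    As with AC-class router naming, the AD7200 label is typically just the rounded sum of the per-radio peak PHY rates listed above; a quick sketch:

        # "AD7200" = rounded sum of the three radios' peak PHY rates (Mbps).
        rates = {"5 GHz 4x4 802.11ac": 1733,
                 "2.4 GHz 802.11n": 800,
                 "60 GHz 1x1 802.11ad": 4600}
        print(sum(rates.values()))  # 7133 -> marketed as AD7200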
    There are currently two 802.11ad routers in the market, the Acelink BR-6774AD and the TP-LINK AD7200 (Talon). Both models are pure Qualcomm solutions (i.e., the radios, switches and network processor are all from Qualcomm). The Nighthawk X10 aims to differentiate itself from the existing solutions by integrating first-to-market features - both in terms of hardware and software. It is the first consumer router to sport a 10G network interface, and it is the first router capable of running Plex with transcoding capabilities in a standalone manner.
    Qualcomm's IPQ solutions are pure network processors and do not have a video processing engine to enable the Plex capabilities that Netgear wanted to integrate in the Nighthawk X10. Therefore, Netgear decided to go with Qualcomm only for the radios (and, likely, the switches too). The network processing as well as the media capabilities are handled by a quad-core Annapurna Labs SoC running at 1.7 GHz. While Netgear didn't officially confirm the SoC model number, the specifications point to the Annapurna Labs AL-514 as the likely candidate. If this SoC sounds familiar to readers, it is the same as the one we saw in the Synology DS2015xs. It has four Cortex-A15 cores running at 1.7 GHz and comes with multiple network interfaces (including native 10G capabilities). Netgear claims that the AL-514 is the fastest router processor available for the consumer market. The exact layout of the board (number of switches and the way the link aggregation-capable network interfaces are hooked up to the SoC) will be interesting to analyze once the R9000 is out in the open market.
    Netgear introduced their active antenna design (moving the TX power amplifiers from the board onto the detachable antennas) in the Nighthawk X8. In the X10, the RX power amplifiers are also moved out of the main board, further reducing noise concerns and improving robustness.
    In addition to the obvious bandwidth advantages, 802.11ad also brings about a significant reduction in latency compared to 802.11ac. The 60 GHz band is not subject to interference the way the 2.4 GHz and, increasingly, even the 5 GHz bands are. These aspects make the technology ideal for a number of use-cases that were simply not a good fit for traditional Wi-Fi. VR gaming and 4K streaming are being promoted by Netgear as ideal applications for the Nighthawk X10.
    Unlike 802.11ac, where USB WLAN adapters were available for purchase when the first routers came into the market, the 802.11ad ecosystem is just getting started. There is an existing install base in terms of dedicated docking solutions, and Intel's newer vPro-enabled platforms have WiGig support. However, it is not clear whether all existing WiGig products in the market will be interoperable with the AD7200 routers. It must also be remembered that 802.11ad is a short-range, line-of-sight technology - this means that the AD7200 routers can't be tucked away out of sight in a closet. The industrial design of the R9000 is similar to previous routers, and I am not entirely sure the market appreciates that in the living room (often cited as the reason for the attractively designed mesh routers becoming popular). The other strange feature is the 10G SFP+ fiber port. My market research revealed no consumer NAS models with that network interface. A 10GBASE-T port would have been nice, but that increases the BOM cost. Consumers must note that the real-world performance of 802.11ad, like all other Wi-Fi technologies, is nowhere near the theoretical numbers. Under ideal conditions, real-world TCP throughputs of the order of 1.7 Gbps can be expected.
    Netgear is also aiming to simplify setup and operation of the unit with a new 'Netgear Up' router installation app for both iOS and Android. Various housekeeping tasks will also be supported in that app.
    In addition, the usual set of Nighthawk features such as ReadyCloud and OpenVPN is also supported. The R9000 will also be supported by the My Open Router Netgear open source community. However, installation of open source firmware will result in the loss of value-add features such as Plex.
    The Netgear R9000 Nighthawk X10 is now available for purchase in retail as well as e-tail stores for $500.



    More...

  7. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,807
    Post Thanks / Like
    #6397

    Anandtech: Acer Begins Selling 9.98-mm Swift 7 Kaby Lake Laptop in the U.S.: Starts a

    This week Acer has started to sell its first Swift 7 notebook in the U.S. The black and gold Swift 7 laptop is based on Intel's Kaby Lake-Y series CPU, and with its FHD display it is the industry's first clamshell PC thinner than one centimeter. The system is available now for $1099 from a number of retailers.
    The Acer Swift 7 (SF713-51-M90J) comes in a black and gold aluminum unibody to emphasize that the device is one of the premium products in the manufacturer’s lineup. The thickness of the laptop with a 13.3” FHD display covered with Corning Gorilla Glass 4 is 9.98 mm (0.39”), which is thinner than Apple’s MacBook as well as Dell’s Adamo XPS, both of which are renowned for their thin form factors. As for mass, the device weighs 1.12 kilograms (2.48 lbs), which is a tad heavier than the MacBook.
    Inside the Acer Swift 7 ultra-thin notebook there is an Intel Core i5-7Y54 processor (2C/4T, 1.2GHz base/3.2GHz turbo, HD Graphics 615, 4.5 W), 8 GB of LPDDR3 RAM, as well as a 256 GB SSD (the manufacturer does not specify whether it is AHCI or NVMe). The laptop uses an 802.11ac Wi-Fi + BT 4.0 wireless adapter to communicate with the outside world. It also has a 720p webcam and two 5 Gbps USB 3.1 Type-C ports to connect peripherals and a display, as well as for charging. Moving on to the audio sub-system, we see a TRRS connector for headsets as well as two stereo speakers that are Dolby Audio certified.
    Acer Swift 7 (SF713-51-M90J)

    CPU         Intel Core i5-7Y54 (2C/4T)
                1.2 GHz base, 3.2 GHz turbo, 4.5 W TDP
    GPU         Intel HD Graphics 615 (GT2), 24 EUs, Gen 9
                300 MHz base, 950 MHz turbo
    DRAM        8 GB LPDDR3
    SSD         256 GB
    Display     13.3-inch 1920x1080 IPS LCD
    Ports       2x USB 3.1 (Gen 1) Type-C, 3.5mm combo jack
    Network     2x2:2 802.11ac with BT 4.0
    Battery     2770 mAh (52.9 Wh?)
    Dimensions  H: 9.98 mm (0.39")  W: 32.46 cm (12.78")  D: 22.96 cm (9.04")
    Weight      2.48 lbs (1.12 kg)
    Colors      Gold and Black
    Price       $1099.99
    Since the Swift 7 is powered by Intel's Kaby Lake, expect significantly improved video encoding/decoding capabilities, better GPU performance, as well as Speed Shift v2 technology. This should make the new machine generally faster and snappier than ultra-thin notebooks based on previous-gen CPUs.
    When it comes to battery life, Acer claims that the Swift 7 (SF713-51-M90J) has a 4-cell Li-Ion battery with 2770 mAh of capacity, which enables it to work for up to nine hours on one charge. Do note however that Acer does not specify conditions or workloads for that number.
    The Acer Swift 7 (SF713-51-M90J) notebook is available directly from Acer as well as from leading retailers (Amazon, Newegg, etc.) for $1099.99. It comes with a one-year warranty. Finally, keep in mind that Acer plans to release a family of Swift 7 notebooks, as it revealed at IFA. So expect to see further models of the Swift 7, including a more affordable version based on Intel’s Core i3 at $1000, as well as a more advanced system featuring Intel’s Core i7-7Y75 and a higher price.
    Gallery: Acer Starts to Sell 9.98-mm Thick Swift 7 Kaby Lake Laptop in the U.S.: Starts at $1100




    More...

  8. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,807
    Post Thanks / Like
    #6398

    Anandtech: Quantenna Announces 802.11ax Draft 1.0-Compliant Wi-Fi Chipset

    Quantenna this week introduced the industry’s first 802.11ax Draft 1.0-compliant Wi-Fi chipset for access points. The QSR10G-AX is pin-to-pin compatible with Quantenna’s commercial QSR10G Wave 3 802.11ac-compliant Wi-Fi platform and can thus be easily integrated into existing APs (e.g., routers). The next-gen 802.11ax Wi-Fi standard promises to improve spectral efficiency of Wi-Fi operation in dense deployments and also to increase maximum theoretical bandwidth of Wi-Fi networks to around 10 Gbit/s.
    Wi-Fi nowadays is a pervasive technology used by loads of different devices. Every office or residential building has tens of Wi-Fi APs visible, and many people carry two or more Wi-Fi-enabled devices with them (a smartphone, a laptop, and most recently smart watches). As a result, the number of connections that every AP needs to handle is getting higher every year, which requires higher spectral efficiency and more channels to connect devices. In recent years the 802.11 standard introduced multiple ways to improve the spectral efficiency of Wi-Fi, including multiple-input multiple-output antennas (MIMO), multi-user MIMO (MU-MIMO), higher order modulation and so on. The next-gen 802.11ax will add OFDMA (Orthogonal Frequency-Division Multiple Access) to allow different devices to be served by one channel, by dedicating different sub-carriers to individual client devices.
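    To illustrate the OFDMA idea in the simplest possible terms, here is a toy sketch; the client names and the resource-unit split are illustrative stand-ins, not values taken from the 802.11ax draft:

        # Toy OFDMA model: one 20 MHz 802.11ax channel is carved into resource
        # units (RUs) of subcarriers, each serving a different client in the same
        # transmission - versus legacy OFDM, where one client takes the whole channel.
        CHANNEL_TONES = 242                # usable tones in a 20 MHz ax channel
        allocation = {"phone": 106, "laptop": 106, "smartwatch": 26}  # illustrative

        assert sum(allocation.values()) <= CHANNEL_TONES
        for client, tones in allocation.items():
            print(f"{client}: {tones} subcarriers in this transmission")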
    Quantenna’s QSR10G-AX is one of the industry's first 802.11ax Draft 1.0-compliant Wi-Fi chipsets, and supports the major features of the upcoming standard (including OFDMA, 1024-QAM and so on). Furthermore, to simplify deployment, it can be installed into existing designs due to pin-compatibility with the current QSR10G chipset. From a physical standpoint, devices based on the QSR10G-AX will use Quantenna’s 12-stream Wave 3 802.11ac dual-band Wi-Fi platform, which offers 8 streams in the 5 GHz band and a further 4 streams in the 2.4 GHz band. This essentially means that manufacturers using the QSR10G can simply throw in a more advanced chip and build an 802.11ax Draft 1.0-compliant router.
    Right now, Quantenna does not specify the speeds that devices based on its QSR10G-AX will be able to provide, primarily because a lot will depend on actual client equipment. In the best-case scenario with currently available devices we are looking at 1733 and 2167 Mbps using four 802.11ac streams, at roughly 433 and 542 Mbps per stream respectively. Nonetheless, keep in mind that the whole point of 802.11ax is not to skyrocket maximum bandwidth (even though maximum bandwidth remains important and does improve), but to ensure the ability to robustly serve the greatest number of clients using a single AP.
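    Those class rates divide back into per-stream rates cleanly; a quick sketch:

        # Four-stream 802.11ac link rates -> per-stream rates (80 MHz channel).
        print(1733 / 4)  # ~433 Mbps per stream with 256-QAM
        print(2167 / 4)  # ~542 Mbps per stream with 1024-QAM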
    Meanwhile, the availability timeframe for such routers is unknown. Quantenna plans to make the QSR10G-AX chip available to its partners in early 2017. Then it is going to take them some time to ensure that everything operates as planned (no easy thing to do given the lack of 802.11ax-compliant client equipment), and only then will actual devices hit the market.


    More...

  9. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,807
    Post Thanks / Like
    #6399

    Anandtech: AnandTech Call for Writers: 2016

    The Call for Writers has become something of an annual tradition over here at AnandTech. As anyone who follows the site knows very well, the list of things we have to review/cover usually exceeds our available time. So the call for writers gives us a chance to find new talent and new opportunities to grow, be it into new coverage areas entirely or just covering more of the existing products our readers have come to enjoy over the years.
    The ultimate purpose of the Call for Writers is to find new talent. To continue to grow and improve our content, we need your help. We're looking for writers with a true passion for the technology we cover, a deep understanding of what's out there and a thirst for more knowledge.
    We're looking for contributors to help out both with reviews as well as our short to medium form Pipeline coverage. The areas in particular we're looking for help with are listed below:

    • News/Pipeline (US & Euro)
    • Systems/Laptops (US-only)
    • GPUs (US-only)
    • Mobile/Smartphones
    • Wearables
    • Podcast Editor
    • Networking
    • Memory
    • Monitors
    • Home Automation/IoT
    • Professional Graphics/GPU

    If you find yourself at the intersection of knowledge and passion about any of those areas, and have some time to contribute, you're exactly what we're looking for. These are paid, part-time positions that we're looking to fill, with most positions open on a world-wide basis, and certain positions primed for a quick promotion to full-time. What I need is a writing sample that demonstrates your ability to talk about any one of these topics. Your sample can be in the form of a review, a pipeline post or an analysis piece - it should be something that looks like it would fit in on AnandTech.
    Once you've produced it, send it on over to callforwriters@anandtech.com. Please also include a description of what subject(s) you would be interested in writing about, and some basic information about your background and where you're located. We'll read through all samples but can't guarantee a reply due to the sheer volume of submissions we tend to receive. If we like what you've sent and there's a potential fit on the team, we'll be in touch.
    I'll conclude this post with a passage from our About page:
    In the early days of technology reporting on the web the focus was almost exclusively on depth. We had a new medium for content that didn't come with the same restrictions as more traditional forms. We could present as much data as we felt was necessary and we could do it quicker.
    As the web grew, so did the approach to gaining readership. In many cases, publishers learned from the tips and tricks of more traditional media to grow their audience. The focus shifted away from ultimate understanding of what was being reported, to producing content significantly motivated by increasing traffic, or revenue, or both. Thorough observations were out; sensationalism, link baiting, and the path to shallow 10-o'clock-news reporting were in.
    While I believe it's definitely easier to produce content by going this route, I don't believe it's the only way to build a well read website.
    If the above resonates with you and you'd like to help by being a part of something different, I'd encourage you to submit a writing sample.
    Frequently Asked Questions

    Q: How old do I need to be to work for AnandTech?
    A: You need to be old enough to legally work in your country of residence without significant restriction. Otherwise we have no specific requirements so long as you can do the job well. Anand started the site at 14, after all...
    Q: Do I need to be located in the United States to work for AnandTech?
    A: Some positions do require that you be in the US for logistical reasons, and those specific positions are noted. However unless otherwise noted, most positions are open on a world-wide basis.
    Q: Do I need to supply my own products for testing or contacts at companies? (i.e. do I need to be an insider?)
    A: No. Assuming for the moment you have a computer to write on, then you already have the most important piece of equipment that you need. Meanwhile you will need some knowledge of the field at hand, but we will introduce you to the people you need to know for your position at AnandTech.
    Q: Can I really work for AnandTech even though I don't have a Ph.D in electrical engineering?
    A: Yes! We are first and foremost looking for people with a passion to learn, and the knack to make it happen. There's a certain degree of baseline knowledge needed for any given position, but if you can read existing AnandTech articles then you're already half-way there.
    Q: Is there a submission deadline?
    A: We have a tentative end point for the middle of November.

    More...

  10. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,807
    Post Thanks / Like
    #6400

    Anandtech: Chinese Tech Giant LeEco Enters US Market with Le Pro3 and Le S3 Smartphon

    LeEco announced its official entry into the U.S. market at its “Ignite the Eco World” event in San Francisco, an important step in the company's globalization plan. Its initial products include two large-screen smartphones, a range of 4K TVs, and new content partnerships.
    LeEco may be the biggest tech company you’ve never heard of. The Chinese company began life as LeTV in 2004 as a streaming content provider and is often referred to as the “Netflix of China.” It went on to create its own movie and TV content with its Le Vision Pictures studio and then expanded into hardware, selling TVs that plugged into its streaming service. More recently, it has expanded its hardware lineup to include smartphones and even a connected bicycle and electric car. It even has its own Uber-like taxi service in China, Yidao.
    LeEco now looks to extend its reach further west. Earlier this year it purchased U.S.-based TV maker Vizio, the second largest brand in the U.S., for $2 billion, giving it a familiar name and a foothold in a market wary of Chinese brands. LeEco also purchased Dichotomy Creative Group in September, rebranding it Le Vision Entertainment. The new division is headquartered in Los Angeles and will be responsible for creating original English-language content spanning movies, TV, VR, and the Web. LeEco is even setting up its global headquarters in Silicon Valley (its campus in Beijing will remain its headquarters for China), having purchased a 48-acre plot of land in San Jose, California from Yahoo that will eventually employ some 12,000 people.
    LeEco's US Ecophone Lineup

                  Le Pro3                           Le S3
    SoC           Qualcomm Snapdragon 821           Qualcomm Snapdragon 652
                  (MSM8996 Pro)                     (MSM8976)
                  2x Kryo @ 2.34GHz                 4x Cortex-A72 @ 1.80GHz
                  2x Kryo @ 2.19GHz                 4x Cortex-A53 @ 1.44GHz
                  Adreno 530 @ 653MHz               Adreno 510
    Display       5.5-inch 1920x1080 IPS LCD        5.5-inch 1920x1080 IPS LCD
    Dimensions    151.4 x 73.9 x 7.5 mm, 177 g      151.1 x 74.2 x 7.5 mm, 153 g
    RAM           4GB LPDDR4                        3GB LPDDR3-1866
    NAND          64GB                              32GB (eMMC 5.1)
    Battery       4070 mAh, non-replaceable,        3000 mAh, non-replaceable,
                  Qualcomm Quick Charge 3.0         Qualcomm Quick Charge 3.0
    Front Camera  8MP, 1.4µm pixels, f/2.2          8MP, 1.4µm pixels, f/2.2
    Rear Camera   16MP, f/2.0, PDAF,                16MP, f/2.0, PDAF, HDR,
                  dual-tone LED flash               dual-tone LED flash
    Modem         Qualcomm X12 LTE (Integrated)     Qualcomm X8 LTE (Integrated)
                  2G / 3G / 4G LTE (Category 12/13) 2G / 3G / 4G LTE (Category 7)
    SIM Size      2x NanoSIM (dual standby)         2x NanoSIM (dual standby)
    Wireless      802.11a/b/g/n/ac, BT 4.2, NFC,    802.11a/b/g/n/ac, BT 4.1,
                  IrLED, GPS/GNSS                   IrLED, GPS/GNSS
    Connectivity  USB 2.0 Type-C                    USB 2.0 Type-C
    Launch OS     Android 6.0.1 with EUI 5.8        Android 6.0.1 with EUI 5.8
    Launch Price  $399                              $249
    To kickstart its U.S. strategy, LeEco is offering two smartphones: the Le Pro3 and the Le S3. Both phones have 5.5-inch 1080p displays, but use different internal hardware. The Le Pro3 includes Qualcomm’s latest Snapdragon 821 SoC, 4GB of LPDDR4 RAM, and 64GB of internal storage. Given its emphasis on streaming content it’s probably not too surprising it does not offer additional storage tiers or microSD card support. Its most interesting feature is a large 4070mAh battery, which should help keep the screen lit for extended movie watching.
    The Le S3 comes with a Qualcomm Snapdragon 652 SoC, which combines four ARM Cortex-A72 CPU cores, four Cortex-A53 cores, and an Adreno 510 GPU that supports all of the latest 3D graphics APIs. We saw its smaller sibling, the Snapdragon 650, which comes with two fewer A72 cores, push the performance of Xiaomi’s Redmi Note 3 Pro to near-flagship levels, so the Le S3 has good performance potential; however, in my brief time with the phone, I noticed some stuttering when navigating the UI and far too many instances where the frame rate dropped below 60fps, more so than with the Redmi Note 3 Pro.
    The Le S3 also comes with 3GB of LPDDR3 RAM, which should be adequate, but only 32GB of internal storage. For a device that needs to cache lots of music and videos, and still have room for pictures and apps, we definitely would like to see more NAND, especially without microSD card support. The Le S3 also does not get a big battery like the Pro3; its 3000mAh cell is a typical size for 5.1- to 5.2-inch phones rather than a 5.5-inch phablet.

    Le Pro3 (left), Le S3 (right)
    Both phones have a similar camera setup, with an 8MP front-facing camera and a 16MP primary camera that has an f/2.0 lens and PDAF for faster focusing but does not include optical image stabilization (OIS). Video recording options are pretty basic, supporting only 2160p30 (4K), 1080p30, and 720p30 modes, and with no electronic image stabilization (EIS) option, videos can look pretty shaky.

    Le S3 (rose gold)
    LeEco eliminated the 3.5mm headphone jack from its phones before Apple axed the analog port in the iPhone 7, opting to pass audio over the USB Type-C port instead. Both the Le Pro3 and Le S3 support LeEco’s Continual Digital Lossless Audio (CDLA) technology, which passes digital audio over USB. CDLA-compatible earphones, included with both phones, have their own DAC and audio processor that’s supposed to reduce noise and improve channel separation relative to traditional analog audio from a 3.5mm jack. A Type-C to 3.5mm adapter is also included so you can use traditional headphones too.
    Both phones also support Dolby Atmos surround-sound technology thanks to their Snapdragon SoCs, which produces more realistic positional audio and even accounts for an object’s vertical position in the soundfield.
    The Le Pro3 and Le S3 both use aluminum unibody construction and share a similar design language. On the back of the phones we find horizontal plastic antenna lines at the top and bottom, with a large, not-quite-circular camera surround and fingerprint scanner. LeEco was the first company to use Qualcomm’s ultrasonic Sense ID fingerprint sensor, but these phones are using the more common capacitive variety. The most obvious visual difference between the phones is the finish applied to the aluminum: The Le Pro3 has a brushed finish while the Le S3 has a sandblasted finish.
    The phones also look quite similar from the front, with relatively small upper and lower bezels and a symmetric layout. Hiding behind the lower bezel is a set of capacitive, backlit buttons. The LeEco logo replaces the standard home glyph on the Pro3 and is flanked by the recents and back buttons on the left and right, respectively. The only other difference is that the Pro3 uses 2.5D curved glass, while the S3’s edge-to-edge Corning Gorilla Glass 3 is flat all the way to the edge, giving it a sharper feel in the hand.
    One thing that’s immediately obvious as soon as you turn on either phone is the ridiculously large black bezel on all sides of the screen. The black area is significantly larger in person than LeEco’s press renderings show. I have not seen a border this large on a phone in some time, at least not on a phone that costs more than $200. For me, it’s large enough to be distracting and, in my opinion, it gives the phones a low-cost look.
    Both phones include an infrared LED on the top edge for remote control duties, which fits in with LeEco’s philosophy of enabling its hardware to work together. Centered on the bottom edge is a USB 2.0 Type-C port flanked by two perforated grilles. On the Pro3 they cover a pair of speakers, although having both so close together and firing downward will diminish the stereo effect.
    Switching our focus to software, we find that both phones are still running Android 6 Marshmallow with LeEco’s Ecosystem User Interface (EUI) 5.8 on top. It’s disappointing that the phones will not ship with Android 7, but the demo models I handled were at least running recent security patches (either September or October). What’s unique about EUI are the included apps and built-in support for accessing LeEco’s streaming content. Swiping to the left of the default home screen opens LeView, a list of suggestions for video content that learns from your viewing habits, somewhat analogous to HTC’s BlinkFeed or Flipboard for news. Tapping the Live icon opens a 3x3 grid of live video previews of content available for streaming. The Le app gives more comprehensive access to LeEco’s catalog of videos, TV episodes, and music.
    It should be obvious by now that LeEco’s business strategy is more Amazon than Apple; it makes money from selling content and uses hardware as a delivery system. Because its revenue is not hardware-focused like with many of its competitors, it can sell its products at competitive prices, even at cost if necessary. LeEco’s ultimate goal is to create an internet-connected ecosystem encompassing hardware, software, and content that provides seamless access to all forms of digital entertainment. That’s no easy task, especially in the U.S. where many entrenched companies, including cable providers, movie studios, TV broadcasters, and internet companies provide similar services, albeit not always in a holistic way.
    LeEco thinks it has the answer to this problem with its EcoPlatform that’s open to third-party content providers and hardware partners, essentially giving them access to LeEco’s content delivery system with its apps and LeCloud network that encompasses more than 750 CDN nodes. LeEco announced a number of launch partners at its event, including Lionsgate, Showtime, Esquire, the History Channel, and Metro-Goldwyn-Mayer to name just a few, and it mentioned that another major content partner will be announced on November 2.
    To get the most from an ecosystem, whether it’s Apple’s or Google’s or Amazon’s, you need to fully buy in. This is especially true with LeEco, where signing up for Le EcoPass grants you access to 5TB of cloud storage, unlimited monthly movie viewing, extended warranties on LeEco hardware, and two EcoPass days (think Black Friday sales) at LeMall.com, its online storefront. It also offers the LeRewards program that grants further discounts on products.
    Both the Le Pro3 and Le S3, along with LeEco’s other products, will be available for purchase on November 2 at LeMall.com. The Le Pro3 will cost $399 and the Le S3 will cost $249. There will be a $100 LeRewards discount available for both phones, which will drop the prices to $299 and $149, respectively. The Le Pro3 will be available in two colors—gray and gold—while the Le S3 will be available in three colors—gray, gold, and rose gold. Both phones will be unlocked and are compatible with GSM networks from AT&T, T-Mobile, MetroPCS, and Cricket Wireless.

    More...
