Results 5,561 to 5,570 of 12095

Thread: Anandtech News

  1. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,807
    Post Thanks / Like
    #5561

    Anandtech: Apple and UnionPay Will Bring Apple Pay To China In Early 2016

    Today Apple and China UnionPay announced plans to bring support for Apple Pay to China by the beginning of 2016. China UnionPay is the only bank card organization in China, and so partnering with them was essential to Apple expanding the Apple Pay service to the country. Apple Pay is Apple's mobile payment service that utilizes NFC and either an iPhone or an Apple Watch to make mobile payments at merchants that have the necessary payment terminals to support contactless payments.
    At launch, Apple Pay will be available through 15 of China's leading banks. It's not specified exactly which banks have signed on for the initial rollout, but if the service expands the way it has in the United States, that number will end up growing fairly quickly. The launch of the service will be subject to approval by Chinese regulators, which could delay Apple's planned rollout timeline. With China UnionPay having issued bank cards to hundreds of millions of customers, the expansion to China could potentially provide a big boost to the number of people using the service. The expansion into China also makes sense when one considers Apple's recent attempts to gain a better foothold in the Chinese market.
    As for Apple Pay in general, the launch of Apple Pay in China will mark the fifth expansion of the service. It originally launched exclusively in the United States before expanding to the United Kingdom, and it was recently introduced in Australia and Canada through a partnership with American Express.


    More...

  2. #5562

    Anandtech: AMD Releases Crimson 15.12 WHQL Drivers

    Just last month we saw the release of AMD's new Radeon Software Crimson Edition. This release included the brand new Radeon Settings and promised a new commitment to more frequent driver updates alongside better support for WHQL certification. To that end, today AMD has released a new driver set, Crimson 15.12.
    Functionally there is nothing new here, and the display driver version number is identical. Aside from fixing two minor Crimson control panel issues, all AMD has done today is remove the beta status from 15.11.1, rename it 15.12 (following their year.month naming scheme), and give it WHQL certification. While not groundbreaking by any means, one difference here is that AMD is officially moving a driver from beta to release, which has not happened for a while and will be a welcome change moving forward.
    As always, those interested in reading more or installing the updated WHQL drivers for AMD's desktop, mobile, and integrated GPUs can find them either under the driver update section in Radeon Settings or on AMD's Radeon Software Crimson Edition download page.


    More...

  3. #5563

    Anandtech: Going for Gaming: An Interview with MSI VPs Charles Chiang and Ted Hung on

    MSI’s march on the gaming market has been well documented with plenty of pushes into notebooks, motherboards, graphics and an attempt to move the barrier forward with both brand recognition and user experience. One of the great things about going to trade shows is that we can often organize some special interview time with the individuals that actually make the decisions about products and corporate strategy within the companies that we talk about all the time. Way back at Computex, I had a rather extensive and wide ranging interview with two VPs from MSI deeply involved in product and strategy, and we were joined by a long-time contact with AnandTech who is now a regional MSI President.

    More...

  4. #5564

    Anandtech: Seagate: Hard Disk Drives Set to Stay Relevant for 20 Years

    The very first hard disk drives (HDDs) were demonstrated by IBM back in 1956, and by the early 1980s they had become the dominant storage technology for all types of computers. Some say hard drives are no longer relevant now that solid-state drives offer higher performance. According to Seagate Technology, however, HDDs will remain in the market for at least 15 to 20 years. In a bid to remain the primary bulk storage device for both clients and servers, hard drives will adopt a multitude of new technologies in the coming decade.
    “I believe HDDs will be around for at least 15 to 20 years,” said David Morton, chief financial officer of Seagate, at the Nasdaq 33rd Investor Program Conference.
    Sales of HDDs Decrease, But Technology Keeps Evolving

    Sales of hard disk drives have been decreasing for several years now. Total available market of HDDs dropped to 118 million units in the third quarter of 2015, according to estimates by Seagate Technology and Western Digital Corp. By contrast, various makers of hard drives sold approximately 164 million units in Q3 2010, the two leading manufacturers claim.
    Shipments of HDDs are decreasing due to a variety of factors, including the growing popularity of solid-state drives (SSDs), the drop in PC sales, the increasing use of cloud storage and so on. Nonetheless, HDDs remain the most popular data storage technology, and also the cheapest in terms of per-gigabyte cost. While SSDs are generally getting more affordable, high-capacity solid-state drives are not going to become as inexpensive as hard drives any time soon. As a result, HDDs will remain a key bulk storage technology for a long time.
    To stay relevant in the long term, hard disk drives need to keep increasing their capacity. Last year the ASTC (Advanced Storage Technology Consortium), an international organization that unites companies that develop, manufacture or use hard disk drives, unveiled its vision of the HDD future. According to this technology roadmap, the capacity of hard drives will rise to 100TB by 2025. In the coming years HDDs will adopt many new recording technologies in a bid to bolster their storage capacities.
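    The 100TB-by-2025 target implies a steep compound growth rate. A quick sketch of the math, assuming a roughly 10TB flagship drive in 2015 (our assumption for illustration; the roadmap itself only states the 2025 endpoint):

```python
# Rough CAGR needed to reach the ASTC roadmap's 100TB-by-2025 target.
# The 10TB starting point (2015 flagship HDDs) is an assumption for
# illustration, not a roadmap figure.
start_tb, target_tb = 10, 100
years = 2025 - 2015
cagr = (target_tb / start_tb) ** (1 / years) - 1
print(f"Required capacity growth: {cagr:.1%} per year")  # ~25.9%
```

    That is well above what PMR alone has been delivering in recent years, which is why the roadmap leans on the new recording technologies discussed below.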
    PMR, SMR and Helium

    Modern hard disk drives are based on perpendicular magnetic recording (PMR) and shingled magnetic recording (SMR) technologies. PMR-based drives have been around for about a decade and will remain popular for years to come thanks to their relatively high sequential performance, low cost per gigabyte, and well-understood reliability.
    HDDs that use shingled recording write new tracks that overlap part of the previously written magnetic tracks. The overlapping tracks may slow down writing because the architecture requires HDDs to rewrite adjacent tracks after a write operation. While SMR allows the areal density of hard drives to reach 1.1Tbit per square inch or even higher, the performance of such HDDs may be lower than that of PMR-based devices. In a bid to “conceal” the peculiarities of SMR that can slow down performance, HDD makers develop special firmware or even alter the software applications that use such hard drives in datacenters.
    To boost the capacities of PMR and SMR hard disk drives today without increasing areal densities, HDD makers need to install more platters into their devices. While it is possible to fit six disks into a standard 3.5” HDD thanks to new technologies, more platters require major redesigns and the use of helium inside the drives. The density of helium is one-seventh that of air, which reduces the drag force acting on the spinning disk stack and lowers the fluid flow forces affecting the platters and the heads. The lower density of helium makes it possible to fit up to seven disks into one drive today, reduce the power consumption of HDD motors and improve the accuracy of arm positioning, something that is also important for high bit densities. Unfortunately, hermetically sealed helium-filled HDDs are rather expensive to manufacture, which is why at present they are positioned for datacenters.
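    The one-seventh figure follows directly from standard gas densities, and since aerodynamic drag scales roughly linearly with gas density at these speeds, it translates almost directly into windage savings:

```python
# Back-of-the-envelope look at why helium helps: drag on the spinning
# platter stack scales roughly linearly with gas density, so swapping
# air for helium cuts windage losses to about one-seventh.
# Densities are standard figures at ~20°C and 1 atm.
rho_air = 1.204     # kg/m^3
rho_helium = 0.166  # kg/m^3
ratio = rho_helium / rho_air
print(f"Helium/air density ratio: {ratio:.2f} (~1/7)")
print(f"Approximate windage power reduction: {1 - ratio:.0%}")
```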
    At present only HGST, a wholly-owned subsidiary of Western Digital, ships helium-filled Ultrastar He and Ultrastar Ha hard disk drives in high volume. The company sold around 1.1 million sealed HDDs in Q3 2015 and demand for such drives is increasing. Seagate Technology plans to introduce its helium-filled HDDs in the first half of 2016.
    TDMR Incoming

    Earlier this year Seagate announced plans to start commercial shipments of hard disk drives featuring two dimensional magnetic recording (TDMR) technology in the next couple of years. Heat-assisted magnetic recording (HAMR) technology — which has been demonstrated multiple times by various manufacturers of hard drives, heads and platters — is still not ready for prime time, according to Seagate. The world’s second largest maker of HDDs claims that the reliability of HAMR-based devices is not yet sufficient, which is why the tech will be used commercially at a later date.
    “We talked last year about two dimensional magnetic recording, we will be ready to ship that in the next year or two,” said Dave Mosley, president of operations and technology at Seagate, at the company’s analyst and investor strategic update conference in September. “HAMR is still not ready for prime time, I was not tremendously happy with the progress made last year, but there was progress.”
    TDMR technology makes it possible to increase the areal density of hard disk drives by making tracks narrower and track pitches even smaller than today. Since tracks are projected to become narrower than the actual magnetic read heads, it will get increasingly hard for the latter to perform read operations because nearby tracks will create too much inter-track interference (ITI). HDDs featuring TDMR counteract ITI by reading data from multiple nearby tracks and then determining which data is needed. The industry is working on several implementations of TDMR. It is possible to read data from multiple adjacent tracks using one read head, but that greatly reduces the performance of the HDD. Alternatively, it is possible to use an array of heads to read data from several nearby tracks. While such an approach guarantees rather high performance, it is very hard to build a complex array of multiple readers. TDMR lets HDD makers increase areal density by 5 to 10 percent, according to Seagate. Moreover, it also solves the ITI problems that will likely occur in the future. As a result, it is logical to expect all HDD makers to use two dimensional magnetic recording technology going forward.
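    The multi-reader idea can be sketched as a small linear-algebra problem: each reader picks up its own track plus a fraction of the neighboring one, and with as many readers as tracks the crosstalk can be solved out exactly. The coupling coefficient and signal values below are made-up illustrative numbers, not real channel parameters:

```python
# Toy model of multi-reader TDMR: each reader sees its target track plus
# a fraction (alpha) of the adjacent track. Two readers over two
# neighboring tracks give two linear equations, so the clean per-track
# signals can be recovered. All values are invented for illustration.
def cancel_iti(r_main, r_side, alpha):
    """Recover (main, side) track signals from two overlapping readers.
    r_main = main + alpha*side,  r_side = side + alpha*main."""
    det = 1 - alpha ** 2
    main = (r_main - alpha * r_side) / det
    side = (r_side - alpha * r_main) / det
    return main, side

main_true, side_true, alpha = 1.0, 0.5, 0.2
r_main = main_true + alpha * side_true   # 1.10: main track plus crosstalk
r_side = side_true + alpha * main_true   # 0.70: side track plus crosstalk
print(cancel_iti(r_main, r_side, alpha))  # recovers ~(1.0, 0.5)
```

    A single-reader implementation would instead have to re-read neighboring tracks on separate revolutions to build the same system of equations, which is where the performance penalty mentioned above comes from.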
    HAMR Not Ready for Prime Time

    Hard drives featuring heat-assisted magnetic recording technology — as the name implies — record data on high-stability magnetic media, using laser thermal assistance to reduce the media's coercivity for a very short amount of time. Seagate’s HAMR technology heats the media to approximately 450°C using a laser with an 810nm wavelength and 20mW of power, according to details revealed by the company earlier this year. The method helps to reduce the size of magnetic “pitches” without undesirable effects on readability, writability and reliability.
    Hard drives with HAMR technology will sport significantly higher areal densities — around 1.5Tbit per square inch initially and 2Tbit per square inch shortly after introduction — and will be able to store noticeably more data than today’s HDDs featuring perpendicular recording technology. Eventually, companies like Seagate expect that HAMR will help to increase bit densities of hard disk drives to 5Tbit per square inch. While TDMR technology is important, HAMR will mean a breakthrough for areal densities and capacities of HDDs.
    “The highest areal density that we see today have to be written with HAMR,” said Mr. Mosley. “We still have some issues working through the reliability. We have actually solved a lot of problems, but the whole industry — through various consortiums — is really focused on getting the last of the problems solved so we could get [HAMR] into the products.”
    Seagate hopes to deliver its first HAMR-powered hard disk drives with 4TB capacity sometime in late 2016 or early 2017 to select clients. These customers are expected to try out the HDDs in their datacenters and verify that HAMR drives are compatible with their infrastructure and generally reliable. Volume shipments of HAMR-based HDDs are now expected to start in late 2017, or even in 2018. Unfortunately, at this point the HAMR roadmap is not completely clear.
    Long Road Ahead

    In addition to two dimensional magnetic recording and heat-assisted magnetic recording — which Seagate expects in commercial products in 2016 – 2017 and beyond — other technologies are being researched by the industry. Among those already disclosed are Bit Patterned Media Recording (BPMR), Heated Dot Magnetic Recording (HDMR), Microwave-Assisted Magnetic Recording (MAMR) and some others. In the future, hard disk drives will adopt combinations of various technologies to maximize bit densities and capacities.
    Technologies like TDMR, HAMR and BPMR will be commercialized by manufacturers like Seagate, Western Digital and Toshiba. However, there are also many companies and universities exploring technologies for future hard disk drives. For example, the reliability of HAMR-based HDDs is being addressed by the whole industry, not just by Seagate or Western Digital. This collaborative approach, combined with continuing investment in fundamental magnetic recording research, increases the likelihood of technological breakthroughs going forward — which may ensure that HDDs remain relevant for a long time, for at least 15 to 20 years, according to Seagate.
    If HDDs remain in the market in 2035, the technology will have served humankind for about 80 years by that time, which is a very long period for any high-tech industry. By then, HDDs will have outlived floppy disks, cassettes, CRT displays and televisions, as well as numerous manufacturers of hard drives.


    More...

  5. #5565

    Anandtech: Windows 10 Mobile Update For Older Lumias Pushed To 2016

    Earlier this year Microsoft stated that their plan was to begin the roll out of Windows 10 Mobile to existing Lumia devices in December. With December being half over by this point, the fact that the update would probably be pushed to 2016 was something of an unofficial but accepted truth. Today Microsoft has made that truly official. The company said the following in a statement to ZDNet:
    "This November we introduced Windows 10 to phones including brand new features such as Continuum and Universal Windows Apps with the introduction of the Lumia 950 and 950 XL. The Windows 10 Mobile upgrade will begin rolling out early next year to select existing Windows 8 and 8.1 phones."
    It's not made explicitly clear why the update has been delayed, as the OS is already available on Microsoft's recently launched Lumia 950 and 950 XL. From my experience with the beta/developer releases that Microsoft has made available, it's entirely possible that they still need to work on ironing out bugs and improving performance. Many reviews of the new Lumia phones share a similar sentiment, and with many of the older Lumia devices running less capable hardware from Qualcomm's Snapdragon 400 series than the 950 and 950 XL, it wouldn't be such a bad thing to have the update delayed to ensure it doesn't cripple performance on those phones.
    At this time it's also not known how many devices will receive the update to Windows 10 Mobile. Microsoft has stated that phones will need 8GB of NAND, but it's not clear if there are other hardware requirements. Given that the Lumia 550 just launched with Snapdragon 210 and runs Windows 10 Mobile, I would hope that Microsoft plans to update a significant number of existing devices.


    More...

  6. #5566

    Anandtech: Q&A Session with ASUS at CES 2016: 10 Years of the Republic of Gamers

    As part of our coverage of CES 2016, now just a few short weeks away, we have teamed up with ASUS for a round-table discussion of their Republic of Gamers (ROG) brand, which is celebrating its 10th birthday throughout 2016. In the round table we will be discussing the origins of ROG, with some insight into those first initial products and how the brand is perceived today, along with a few questions from our readers. This is where you come in!
    As part of the discussion, we have assembled a very interesting group of individuals, including all of the motherboard senior editors of AnandTech dating back well over a decade:
    Vivian Lien Kralevich — Chief Marketing Officer, ASUS USA; ASUS Marketing from 2006
    Gary Key — Director of Marketing, ASUS USA; AnandTech Motherboard Senior Editor 2005-2008
    Rajinder 'Raja' Gill — Technical PR Manager, ASUS USA; AnandTech Motherboard Senior Editor 2008-2010
    Ian Cutress — 10 Years of ROG Round Table Chair; current AnandTech Motherboard Senior Editor from 2011
    Between Gary, Rajinder and myself, we have covered the Republic of Gamers brand from its inception, with both Gary and Raja now involved at various levels with the team that designs, develops, tests and pushes the ROG ecosystem, as well as managing the perception of it as part of the ASUS brand within North America. Back when Gary was probing the original models, Vivian was one of his direct ASUS contacts, ensuring that direct line of communication and filling him in on the details. Then when Gary joined ASUS, Raja had Gary as his main contact, and so on, meaning that for this discussion we have the ASUS-AnandTech contact line right from the initial ROG launch.
    You may remember we interviewed Dr Albert Chang, Senior Division Director of ASUS Motherboard Business Unit Research and Development back in 2014 about the general path for motherboard design, and how the ROG team is designed to be that skunkworks element of engineering. Raja assists ROG’s internal impromptu extreme overclocking events with top overclockers as well as community management, so we will pick his brains on how design ideas from the forums and events assist product design. With any luck, we will also have some old ROG boxes or hardware on hand through to the newest Maximus line.
    This round-table and Q&A session will be video recorded then uploaded after CES, and we invite questions from you. Please leave them in the comments below!


    More...

  7. #5567

    Anandtech: Price Check: DDR4 Memory Down Nearly 40% in 6 Months, Expected To Continue

    Today we're launching a new feature on the AnandTech Pipeline: Price Check. Here we'll periodically examine hardware prices and analyze what's behind price changes.
    Just a year ago DDR4 dynamic random access memory (DRAM) was rather expensive and was sold at a noticeable premium compared to DDR3. Today, DDR4 memory modules cost less than DDR3 modules cost a year ago and continue to get more affordable. Next year prices of DDR4 are expected to decline further as manufacturers of DRAM are gradually increasing production of memory in general and DDR4 in particular.
    DDR4 Gets Cheaper as Price Premium Over DDR3 Erodes

    The average spot price of one 4Gb DDR4 memory chip rated to run at 2133MHz was $2.221 at press time, according to DRAMeXchange, one of the world’s top DRAM and NAND market trackers, based in Taipei, Taiwan. The spot price of a similar memory integrated circuit (IC) was $2.719 in late September and $3.618 in late June, 2015. As it turns out, the price of a single 4Gb DDR4 DRAM IC has dropped 38.62% in about half a year.
    Spot prices of DDR3 memory are also declining. One 4Gb DDR3 chip rated to operate at 1600MHz cost $1.878 in Taiwan at press time. A similar chip was priced at $2.658 in late June, which means that the spot price of a 4Gb DDR3 IC has dropped 29.4% in less than six months.
    The price difference between a 4Gb DDR3 memory chip and a 4Gb DDR4 DRAM IC was approximately 26.5% in June. Today, a 4Gb DDR4 chip costs about 18.5% more than a 4Gb DDR3 memory IC.
    Spot prices of DRAM chips directly affect the prices of actual memory modules. At present one 4GB DDR4 SO-DIMM costs $18 in Taiwan, according to DRAMeXchange, while a 4GB DDR3 SO-DIMM is priced at $16.75. For many PC configurations the price difference between DDR3 and DDR4 memory modules is already negligible. Next year it will erode further, and the new type of memory will replace DDR3 as the mainstream DRAM for personal computers and servers.
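    The figures above can be cross-checked with a few lines of arithmetic. The eight-chips-per-4GB-module layout used below is a common configuration (8 x 4Gb = 32Gb = 4GB), not something DRAMeXchange specifies:

```python
# Sanity-checking the spot-price figures: the ~38.6% DDR4 drop and the
# ~$18 module price both follow from the quoted per-chip prices.
ddr4_june, ddr4_now = 3.618, 2.221   # USD per 4Gb DDR4 chip
drop = (ddr4_june - ddr4_now) / ddr4_june
print(f"DDR4 spot price drop since June: {drop:.1%}")   # ~38.6%

chips_per_4gb_module = 8             # 8 x 4Gb chips = 4GB of DRAM
chip_cost = chips_per_4gb_module * ddr4_now
print(f"DRAM cost in a 4GB module: ${chip_cost:.2f}")   # ~$17.77 vs $18 quoted
```

    The small gap between the $17.77 chip cost and the $18 module price shows how thin the margin on top of raw DRAM is for commodity SO-DIMMs.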
    Retail Prices of DDR4 Modules Drop by Over 50% This Year

    While spot and contract prices give a good idea of ongoing trends and help to understand the market in general, they do not reveal the retail situation, which is what matters to the end user. As it turns out, some DDR4 memory kits have become over 50% cheaper this year.
    Kingston’s HyperX Fury Black 16GB kit (2*8GB) rated to operate at 2133MHz with CL14 latency used to cost $229.20 in March at Amazon.com, according to CamelCamelCamel. Today, the dual-channel kit costs $108.99. Kingston’s HyperX memory modules are relatively affordable solutions for PC enthusiasts, which are used by system integrators too.
    G.Skill’s latest Ripjaws V family of memory modules only started to show up at retail in October or November, but they have already become more affordable. Based on checks from CamelCamelCamel, the G.Skill Ripjaws V DDR4 16GB dual-channel kit rated to operate at 3200MHz cost $176.64 in early November, but the product is available today for $136.59.
    Memory modules for high-end desktop systems tend to get cheaper faster and more significantly than solutions for mainstream PCs. Corsair’s Dominator Platinum 64GB DDR4 quad-channel kit capable of operating at 2666MHz with CL15 latency used to cost up to $1759.99 at Amazon.com early in 2015. By the middle of the year the price of the kit declined to around $1000 and currently the set of four premium 16GB DDR4 memory modules is available for $679.99.
    Since the Dominator Platinum series is designed for ultra-high-end systems, it is not surprising that they are generally overpriced. Nonetheless, even such DDR4 memory solutions get more affordable these days.
    Supply Exceeds Demand

    There are two key reasons why computer memory is getting more affordable. Firstly, demand for DRAM is not high these days. Secondly, makers of memory chips are transitioning to thinner process technologies, effectively increasing their output. Since supply exceeds demand, prices are falling.
    Sales of personal computers as well as tablets dropped this year, which decreased demand for DRAM by the industry. According to International Data Corp. (IDC), shipments of PCs in the third quarter of 2015 totaled 71 million units, a 10.8% decline from the same period a year ago, but a 7.4% increase from the second quarter of 2015. Sales of tablets in Q3 2015 reached 48.7 million units, which is 12.6% less than in Q3 2014, but 8.94% more than in Q2 2015. By contrast, the industry shipped 355.2 million smartphones in the third quarter, up 6.8% year-over-year and 5.3% sequentially.
    The vast majority of personal computers and many tablets use commodity DDR3 or DDR4 memory, whereas contemporary smartphones use LPDDR3 or LPDDR4 memory. Typically, when demand for PCs and commodity DRAM drops, memory makers start to increase output of more expensive server DRAM as well as LPDDR memory to offset revenue declines. According to DRAMeXchange, 40% of global DRAM bit output was LPDDR in Q3 2015.
    Modest growth of smartphone sales amid declines of PCs and tablets in the third quarter barely helped DRAM makers to maintain their revenue at approximately the same level as in the second quarter. Global DRAM revenue in Q3 2015 totaled $11.298 billion, down 1.2% from Q2 2015, DRAMeXchange found.
    DRAM Prices to Keep Declining

    While there are only three major makers of DRAM left on the planet, they continue to fight for market share and profits. In a bid to cut down costs, memory manufacturers have to adopt thinner process technologies, which decrease the size of memory cells and thus increase bit output per wafer. As a result, global supply of DRAM surges and weighs on prices.
    This year Samsung Electronics continued its transition to 20nm DRAM manufacturing technology, whereas its rivals — Micron Technology and SK Hynix — only started to use their 20nm and 21nm fabrication processes. The thinner production technology helped Samsung to increase its profit margins and market share. The company controlled 46.7% of the DRAM market in Q3 2015, up from 45.1% in the second quarter. SK Hynix and Micron commanded 28% and 19.2% of the memory market, respectively, according to the market tracker.
    Analysts from DRAMeXchange believe that transition to 20nm/21nm manufacturing technologies, slow economy and weak demand for electronics will negatively affect prices of DRAM going forward as supply will exceed demand. To stay profitable, DRAM makers will have to migrate to thinner fabrication processes faster and balance their product mixes.
    “Looking ahead to next year’s DRAM market, the annual demand and supply bit growth rates are projected around 23% and 25% respectively,” said Avril Wu, research director of DRAMeXchange. “Supply will still outpace demand by bit and average sales prices will continue to drop. Whether suppliers can turn a profit will mainly depend on their progression in technology migration and product-mix strategies.”
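    Those two growth rates sound close together, but compounding them over a year (from a balanced starting point, which is our simplifying assumption) shows how the oversupply builds:

```python
# What the projected bit growth rates imply: supply growing 25% against
# demand growing 23% leaves supply roughly 1.6% ahead after one year,
# assuming the market starts the year in balance.
supply_growth, demand_growth = 0.25, 0.23
excess = (1 + supply_growth) / (1 + demand_growth) - 1
print(f"Supply overshoot after one year: {excess:.1%}")
```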
    One piece of good news for DRAM makers is that 20nm and 21nm process technologies help to reduce the cost of DDR4 ICs in general and 8Gb DDR4 memory chips in particular. Such DRAM ICs are required to build high-capacity — 32GB, 64GB, 128GB — memory modules, which are sold at a considerable premium to server makers and help to bolster the revenue and profits of memory producers.


    More...

  8. #5568

    Anandtech: The Angelbird Wings PX1 M.2 Adapter Review: Do M.2 SSDs Need Heatsinks?

    The M.2 form factor has quickly established itself as the most popular choice for PCIe SSDs in the consumer space. The small size fits easily into most laptop designs, and the ability to provide up to four lanes of PCI Express accommodates even the fastest SSDs. By comparison, SATA Express never caught on and never will due to its two-lane limitation. And the more recent U.2 (formerly SFF-8639) does have traction, but has seen little adoption in the client market.
    Meanwhile, although M.2 has its perks it also has its disadvantages, often as a consequence of space. The limited PCB area of M.2 can constrain capacity: Samsung's single-sided 950 Pro is only available in 256GB or 512GB capacities while the 2.5" SATA 850 Pro is available in up to 2TB. And for Intel, the controller used in their SSD 750 is outright too large for M.2, as it's wider than the most common M.2 form factor (22mm by 80mm). Finally and most recently, as drive makers have done more to take advantage of the bandwidth offered by PCIe, a different sort of space limitation has come to the fore: heat.
    When testing the Samsung SM951 we found that our heavier sustained I/O tests could trigger thermal throttling that would periodically restrict the drive's performance. We also had a brief opportunity to run some of our tests on the SM951 using the heatsink from Plextor's M6e Black Edition. We found that extra cooling made noticeable differences in performance on some of our synthetic benchmarks, but our more realistic AnandTech Storage Bench tests showed little or no change. But other than the quick look at the SM951, we haven't had the chance to do a thorough comparison of how cooling affects high-performance M.2 drives, until now.
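    The throttle-then-recover behavior we observed can be sketched as a simple feedback loop: temperature rises with activity, and the controller drops to a lower speed state above a limit and restores full speed once the drive cools. Every constant below is a made-up illustrative value, not a measurement of any real drive:

```python
# Toy simulation of SSD thermal throttling: heat input scales with
# transfer speed, cooling scales with the temperature above ambient, and
# the controller toggles between a fast and a slow state with hysteresis.
# All constants are invented for illustration.
def simulate(steps, t_limit=70.0, t_resume=60.0, ambient=30.0):
    temp, fast, history = ambient, True, []
    for _ in range(steps):
        speed = 1500.0 if fast else 300.0               # MB/s, hypothetical
        temp += 0.02 * speed - 0.25 * (temp - ambient)  # heat in, cooling out
        if fast and temp > t_limit:
            fast = False                                # throttle
        elif not fast and temp < t_resume:
            fast = True                                 # recover
        history.append(speed)
    return history

speeds = simulate(200)
print(f"Time spent throttled: {speeds.count(300.0) / len(speeds):.0%}")
```

    With these (invented) constants the drive spends most of a sustained workload in the slow state, which is the pattern our heavier I/O tests trigger and the pattern a heatsink is meant to break.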

    More...

  9. #5569

    Anandtech: Dell Issues Patch For Content Adaptive Brightness Control On The XPS 13

    The XPS 13 was one of the best laptops of the year, but it did have some issues, as all devices do. One that was very frustrating to deal with during the review was the aggressive Content Adaptive Brightness Control (CABC) which was enabled by default, with no way to disable it. CABC is a common method of saving power, since the backlight can be lowered depending on what content is on the display. Unfortunately, it was so aggressive that trying to accurately establish battery life was difficult, since we set the displays to 200 nits. With the CABC, brightness would vary quite substantially just with webpages flashing onto the screen.
    It was also an issue when trying to calibrate the display. The calibration software first sets a baseline brightness on white (200 nits again is what we use) and then flashes various shades of gray and color to create a profile for the display. Once again, the CABC would get in the way, changing the brightness that the software was expecting.
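    To illustrate why this breaks a fixed-brightness workflow, here is a toy model of CABC-style backlight scaling: the panel is set to a 200-nit target, but the firmware scales the backlight with average content luminance, so the measured brightness moves with whatever is on screen. The scaling curve is invented for illustration; Dell has not published the actual CABC behavior:

```python
# Toy CABC model: backlight scale is a linear function of average content
# luminance (0 = black, 1 = white), so darker content dims the panel even
# though the user-set target never changes. The curve is hypothetical.
def measured_nits(target_nits, avg_content_luma, min_scale=0.7):
    """Return the brightness a meter would read for given content."""
    scale = min_scale + (1 - min_scale) * avg_content_luma
    return target_nits * scale

for content, luma in [("white calibration patch", 1.0),
                      ("typical web page", 0.8),
                      ("dark gray test patch", 0.2)]:
    print(f"{content}: {measured_nits(200, luma):.0f} nits")
```

    A battery test or calibration pass assumes the first number holds for all three cases, which is exactly the assumption CABC violates.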
    I think for most people it would be something they would notice, but not something that would bother them too much, unless they were doing certain tasks where it kicks in. I am all for power-saving features, but any time you add something like this, you need to provide a way to disable it for customers who don't want it. Luckily, Dell is now offering a patch to disable this feature.
    At the moment, the only way to get the patch is to contact Dell support. It would be nice if they would just offer it as a link to download, but for the moment this is what we have.
    Being able to remove the aggressive CABC fixes one of my biggest issues with the XPS 13, and it was already one of the best laptops of the year. With this fix, it moves up a bit more.
    Source: Dell



    More...

  10. #5570

    Anandtech: Host-Independent PCIe Compute: Where We're Going, We Don't Need Nodes

    The typical view of a cluster or supercomputer that uses a GPU, an FPGA or a Xeon Phi type device is that each node in the system requires one host or CPU to communicate through the PCIe root complex to 1-4 coprocessors. In some circumstances, the CPU/host model adds complexity, when all you really need is more coprocessors. This is where host-independent compute comes in.
    The CPU handles network transfers and, in combination with the south bridge, manages IO and other features. Some arrangements allow the coprocessors to talk directly with each other, and the CPU/host side allows large datasets to be held in local host DRAM. However, for some compute workloads all you need is more coprocessor cards. Storage and memory might be decentralized, and adding in hosts creates cost and complexity - a host that seamlessly has access to 20 coprocessors is easier to handle than 20 hosts with one coprocessor each. This is the goal of EXTOLL as part of the DEEP (Dynamical Exascale Entry Platform) Project.
    At SuperComputing 15, one of the academic posters on display, from Sarah Neuwirth and her team at the University of Heidelberg, covered developing the hardware and software stacks to allow for host-independent PCIe coprocessors through a custom fabric. This in theory would allow compute nodes in a cluster to be split specifically into CPU and PCIe compute nodes, depending on the needs of the simulation, and also allows for failover or multi-user access. All of this is developed through their EXTOLL network interface chip, which has subsequently been spun out into a commercial entity.
    A side note - in academia, it is common enough that the best ideas, if they are not locked down by funding terms and conditions, are spun out into commercial enterprises. With enough university or venture capital in exchange for a percentage of ownership, an academic team can hire external experts to turn their ideas into a commercial product. These ideas either succeed or fail, and sometimes the intellectual property is sold up the chain to a tech industry giant.
    The concept of EXTOLL is to act as a mini-host that initializes the coprocessor, but it also handles the routing and memory address translation so that the process is transparent to all parties involved. A coprocessor equipped with EXTOLL can be connected into a fabric of other compute, storage and host nodes and yet be accessible to all of them. Multiple hosts can connect into the fabric, and coprocessors in the fabric can communicate directly with each other without needing to go out to a host. This is all controlled via MPI command extensions, for which the interface is optimised.
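    To make the address-translation idea concrete, a toy model (entirely hypothetical, not EXTOLL's actual scheme) might carve a single global fabric address space into fixed-size windows, one per node, so a remote write can be routed to the owning node and local offset without any host involvement:

```python
# Toy model of fabric-wide address translation (illustrative only;
# not EXTOLL's actual scheme). Each node exports a window of local
# memory into one global address space.
WINDOW = 1 << 20  # assume 1 MiB exported per node

def translate(global_addr):
    """Split a global fabric address into (node_id, local_offset)."""
    return global_addr // WINDOW, global_addr % WINDOW

def remote_put(memories, global_addr, data):
    """Write into the owning node's memory, no host in the path."""
    node, off = translate(global_addr)
    memories[node][off:off + len(data)] = data

# Three nodes, each with its exported window
mems = {n: bytearray(WINDOW) for n in range(3)}
remote_put(mems, 2 * WINDOW + 16, b"payload")
assert mems[2][16:23] == b"payload"
```

In real hardware this lookup would be done by the NIC itself, which is what makes the translation invisible to both ends.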
    The top-level representation of EXTOLL provides seven external ports, supporting cluster topologies up to a 3D torus plus one extra link. The internal switch manages which network port is in use, derived from the translation layer provided by the IP blocks: VELO is the Virtualized Engine for Low Overhead, which deals with MPI and in particular small messages; RMA is the Remote Memory Access unit, which implements put/get with one-or-zero-copy operations and zero CPU interaction; and SMFU is the Shared Memory Function Unit, for exporting segments of local memory to remote nodes. All of this communicates with the PCIe coprocessor via the host interface, which supports either PCIe 3.0 or HyperTransport 3.0.
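    The seven ports map naturally onto a 3D torus: six links for the ±x, ±y and ±z directions, plus one spare. As a quick illustrative sketch (assumed coordinate scheme, not EXTOLL's routing code), the wraparound neighbours of any node in such a torus can be computed as:

```python
# Neighbour computation for a 3D torus, the largest topology the
# seven EXTOLL ports support (six torus links plus one extra port).
def torus_neighbors(coord, dims):
    """Return the six wraparound neighbours of a node in a 3D torus."""
    x, y, z = coord
    dx, dy, dz = dims
    return [
        ((x + 1) % dx, y, z), ((x - 1) % dx, y, z),
        (x, (y + 1) % dy, z), (x, (y - 1) % dy, z),
        (x, y, (z + 1) % dz), (x, y, (z - 1) % dz),
    ]

# A corner node in a 4x4x4 torus still has six neighbours, thanks
# to the wraparound links.
print(torus_neighbors((0, 0, 0), (4, 4, 4)))
# -> [(1, 0, 0), (3, 0, 0), (0, 1, 0), (0, 3, 0), (0, 0, 1), (0, 0, 3)]
```

The wraparound is what keeps the hop count low: the worst-case distance in each dimension is half the ring, not the full length.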
    From a topology point of view, EXTOLL is not meant to replace the regular network fabric; it adds a separate fabric layer. In the diagram above, the exploded view shows compute and host nodes (CN) offering standard fabric options, booster interface nodes (BI) that sit on both the standard fabric and the EXTOLL fabric, and booster nodes (BN) which are just a PCIe coprocessor with an EXTOLL NIC. With this there can be a one-to-many or a many-to-many arrangement depending on what is needed, and in most cases the BI and BN can be combined into a single unit. From the end user's perspective, this should all be seamless.
    I discussed this with the team and was told that several users could each allocate themselves a certain number of coprocessors, or an administrator can set limits depending on login or other queued workloads.
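    As a rough sketch of what such an allocation policy could look like (hypothetical, not the DEEP scheduler), a pool of booster nodes might be handed out per user against an admin-set cap:

```python
# Toy allocator for booster nodes (illustrative only): users request
# coprocessors from a shared pool, subject to a per-user cap.
class BoosterPool:
    def __init__(self, total, per_user_limit):
        self.free = set(range(total))   # ids of unallocated boosters
        self.limit = per_user_limit     # admin-set cap per user
        self.owned = {}                 # user -> set of booster ids

    def allocate(self, user, count):
        held = self.owned.setdefault(user, set())
        if len(held) + count > self.limit or count > len(self.free):
            raise RuntimeError("request exceeds per-user cap or pool size")
        grant = {self.free.pop() for _ in range(count)}
        held |= grant
        return grant

    def release(self, user):
        self.free |= self.owned.pop(user, set())

pool = BoosterPool(total=20, per_user_limit=8)
mine = pool.allocate("alice", 4)
assert len(mine) == 4 and len(pool.free) == 16
```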
    On the software side, EXTOLL sits between the coprocessor driver and the hardware as a virtual PCI layer. It communicates with the hardware through the EXTOLL driver, telling it to perform the required address translation, MPI messaging and so on. The driver provides the tools to do the necessary translation of PCI commands across its own API.
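    The layering can be sketched as follows (all names and register semantics here are illustrative, not EXTOLL's API): the unmodified coprocessor driver performs what it believes are local PCI configuration accesses, and the virtual PCI layer forwards them over the fabric instead of a local PCIe root complex:

```python
# Hypothetical sketch of a virtual-PCI shim over a fabric.
class FabricLink:
    """Stand-in for the EXTOLL driver/hardware path."""
    def __init__(self):
        self.remote_regs = {}  # (node, addr) -> value

    def send(self, node, addr, value):
        self.remote_regs[(node, addr)] = value

    def recv(self, node, addr):
        return self.remote_regs.get((node, addr), 0)

class VirtualPCI:
    """Presents a local PCI device; turns accesses into fabric messages."""
    def __init__(self, link, node):
        self.link, self.node = link, node

    def config_write(self, addr, value):
        self.link.send(self.node, addr, value)

    def config_read(self, addr):
        return self.link.recv(self.node, addr)

link = FabricLink()
dev = VirtualPCI(link, node=7)
dev.config_write(0x10, 0xFEED0000)  # driver thinks it is programming a BAR
assert dev.config_read(0x10) == 0xFEED0000
```

The point of the indirection is that the coprocessor's own driver needs no modification; only the layer beneath it knows the device is remote.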
    The goal of something like EXTOLL is to be part of the PCIe coprocessor itself, similar to how Omni-Path will be on Knights Landing, either as a custom IC on the package or integrated into the die. That way, EXTOLL-connected devices can be built in a different physical format to standard PCIe coprocessor cards, perhaps with integrated power and cooling to make the design more efficient. The first generation was built on an FPGA and used as an add-in on a power-and-data-only PCIe interface. The second generation is similar, but has moved to a 65nm TSMC ASIC, reducing power and increasing performance. The latest version is the Tourmalet card, using upgraded IP blocks and featuring 100 GB/s per direction and 1.75 TB/s of switching capacity.

    Early hardware in the DEEP Project, to which EXTOLL is a key part
    Current tests with the second-generation card, the Galibier, in a dual-node design gave LAMMPS (a molecular dynamics library) a speed-up of 57%.
    The concept of host-less PCIe coprocessors is one of the next steps towards exascale computing, and EXTOLL is now straddling the line between selling commercial products and presenting its research as part of academic endeavours, even though there is the goal of external investment, similar to a startup. I am told they already have interest and proof-of-concept deployments with two partners, but this sort of technology needs to be integrated into the coprocessor itself - having something the size of a motherboard with several coprocessors talking via EXTOLL without external cables should be part of the endgame here, as long as power and cooling can be controlled. The other factor is ease of integration with software. If it fits easily into current MPI-based codes and libraries, in C++ and Fortran, and it can be supported as new hardware is developed with new use cases, then it is a positive step. Arguably EXTOLL thus needs to be brought into one of the large tech firms, most likely as an IP purchase, or others will develop something similar depending on patents. Arguably the best fit would be Intel with its Omni-Path, but consider that FPGA vendors have close ties to InfiniBand, so there could be potential there as well.
    Relevant Paper: Scalable Communication Architecture for Network-Attached Accelerators


    More...
