
Thread: Anandtech News

  1. RSS Bot FEED (#4181)

    Anandtech: Intel Broadwell Architecture Preview: A Glimpse into Core M

    Typically we would see Intel unveil the bulk of the technical details of their forthcoming products at their annual Intel Developer Forum, and with the next IDF scheduled for the week of September 9th we’ll see just that. However today Intel will be breaking from their established standards a bit by not waiting until IDF to deliver everything at once. In a presentation coinciding with today’s embargo, dubbed Advancing Moore’s Law in 2014, Intel will be offering a preview of sorts for Broadwell and the 14nm process.
    Today’s preview and Intel’s associated presentation are based around the forthcoming Intel Core M microprocessor, the Broadwell configuration otherwise known as Broadwell-Y. The reason for this is a culmination of several factors, and in all honesty it’s probably driven as much by investor relations as by consumer/enthusiast relations: Intel would like to convince consumers and investors alike that they are on the right path to take control of the mobile/tablet market through superior products, superior technology, and superior manufacturing. Hence today’s preview will be focused on the part and the market Intel feels are the most competitive and most at risk for the next cycle: the mobile market that Core M will be competing in.

    More...

  2. RSS Bot FEED (#4182)

    Anandtech: AMD’s Big Bet on ARM Powered Servers: Opteron A1100 Revealed

    It has been a full seven months since AMD released detailed information about its Opteron A1100 server CPU, and twenty-two months since its announcement. Today, at the Hot Chips conference in Cupertino, AMD revealed the final missing pieces of its ARM-powered server strategy, headlined by the A1100. One thing is certainly clear: AMD is betting heavily on ARM-powered servers by delivering one of the most disruptive server CPUs yet, and it is getting closer to launch.


    More...

  3. RSS Bot FEED (#4183)

    Anandtech: Intel’s 14nm Technology in Detail

    Much has been made of Intel’s 14nm process over the past year, and admittedly that is as much Intel’s doing as it is the public’s. As one of the last Integrated Device Manufacturers and the leading semiconductor manufacturer in the world, Intel has set and continues to set the pace for the semiconductor industry. This means that Intel’s efforts to break the laws of physics roughly every 2 years mark an important milestone in the continuing development of semiconductor technology, and offer a roadmap of sorts to what other semiconductor manufacturers might expect.
    To that end, at a time when ramping up new process nodes is more complex and more expensive than ever, Intel’s 14nm process is especially important. Although concerns over the immediate end of Moore’s Law remain overblown and sensationalistic, there is no denying that keeping pace with Moore’s Law has only gotten more difficult. And as the company at the forefront of semiconductor fabrication, if anyone is going to see diminishing returns on Moore’s Law first, it’s going to be Intel.
    Today Intel is looking to put those concerns to rest. Coinciding with today’s embargo on Intel’s 14nm technology and a preview of Intel’s upcoming Broadwell architecture based Core M processor, Intel will be holding a presentation dubbed Advancing Moore’s Law in 2014. Intel, for their part, is nothing short of extremely proud of the advancements they have made over the last several years to make their 14nm process a reality, and with that process now in volume production at their Oregon fab and being replicated to other fabs around the world, Intel is finally ready to share more information about it.
    We’ll start off our look at Intel’s 14nm process with Intel’s yields. Yields are important for any number of reasons, and in the case of the 14nm process they tell a story of their own.
    Intel’s 14nm process has been their most difficult process to develop yet, a fact that Intel is being very straightforward about. Throughout the life of the 14nm process so far its yields have trailed those of 22nm at equivalent points in time, and while yields are now healthy enough for volume production, Intel still has further work to do to bring the process up to par with 22nm. In fact, at present Intel’s 22nm process is the company’s highest yielding (lowest defect density) process ever, which goes to show just how big a set of shoes the up-and-coming 14nm process needs to fill to completely match its predecessor.
    Concerns over these yields have no doubt played a part in Intel’s decision to go ahead with today’s presentation, for if nothing else they need to showcase their progress to their investors and justify the company’s heavy investment in 14nm and other R&D projects. While 14nm has made it into production in 2014 and the first 14nm products will hit retail by the end of the year, these yield issues have made 14nm late for Intel. Intel’s original plans, which would have seen the bulk of their Broadwell lineup launch in 2014, have been reduced to the single Broadwell-Y SKU this year, with the rest of the Broadwell lineup launching in 2015.
    Ultimately, while 14nm is still catching up to 22nm, Intel is increasingly confident that they will be able to finish catching up, forecasting that 14nm will reach parity with 22nm on a time adjusted basis in the first quarter of 2015, or roughly 6 months from now. Intel is already in the process of replicating their 14nm process at their other fabs, with fabs in Arizona and Ireland expected to come online later this year and in 2015 respectively. These fab ramp-ups will in turn allow Intel to further increase their manufacturing capacity, with Intel projecting that they will have sufficient volume to handle multiple 14nm product ramps in H1’2015.
    Moving on to the specifications and capabilities of the 14nm process, Intel has provided minimum feature size data for 3 critical measurements: transistor fin pitch, transistor gate pitch, and interconnect pitch. From 22nm to 14nm these features have been reduced in size by between 22% and 35%, which is consistent with the (very roughly) 30%-35% reduction in feature size that one would expect from a full node shrink.
    Intel is especially proud of their interconnect scaling on the 14nm node, as the 35% reduction in the minimum interconnect pitch is better than normal for a new process node.
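    To put those percentages in perspective, here is a back-of-the-envelope sketch in Python showing how linear pitch reductions of that size translate into the roughly 2x logic density improvement expected of a full node. The cell area ≈ gate pitch x metal pitch approximation, and the pairing of the two endpoints of the quoted range, are our own simplifications rather than Intel’s figures.

        # Rough sanity check of the scaling figures quoted above (our approximation).
        # A "full node" has historically meant a ~0.7x linear shrink, i.e. ~0.5x area.

        def linear_scale(reduction_pct):
            """Convert a quoted % reduction in pitch to a linear scale factor."""
            return 1.0 - reduction_pct / 100.0

        # The two endpoints of Intel's quoted 22%-35% range; the 35% figure is the
        # interconnect (metal) pitch, the 22% figure is assumed here to be the gate pitch.
        gate_pitch_scale = linear_scale(22)    # 0.78x
        metal_pitch_scale = linear_scale(35)   # 0.65x

        # Standard cell area scales roughly with gate pitch x metal pitch (track count
        # held constant), so these reductions imply close to a halving of logic area:
        area_scale = gate_pitch_scale * metal_pitch_scale
        print(f"implied logic area scale: {area_scale:.2f}x of 22nm (~0.5x = one full node)")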
    Along with the immediate feature size improvements that come with a smaller manufacturing node, Intel has also been iterating on their FinFET technology, which is now in its second generation for the 14nm process. Compared to the 22nm process, the 14nm process’s fins are more tightly packed, thinner, taller, and fewer in number (per transistor).
    Each one of these changes in turn improves the performance of the FinFETs in some way. The tighter density goes hand-in-hand with 14nm’s feature size reductions, while the taller, thinner fins allow for increased drive current and increased performance. Meanwhile by reducing the number of fins per transistor, Intel is able to improve on density once again while also reducing the transistor capacitance that results from those fins.
    Intel is also reporting that they have been able to maintain their desired pace at improving transistor switching speeds and reducing power leakage. Across the entire performance curve the 14nm process offers a continuum of better switching speeds and/or lower leakage compared to Intel’s 22nm process, which is especially important for Intel’s low power ambitions with the forthcoming Core M processor.
    Plotted differently, here we can see how the last several generations of Intel’s process nodes compare across mobile, laptop, and server performance profiles. All 3 profiles are seeing a roughly linear increase in performance and decrease in active power consumption, which indicates that Intel’s 14nm process is behaving as expected and is offering gains similar to those of past processes. In this case the 14nm process should deliver a roughly 1.6x increase in performance per watt, just as past processes have.
    Furthermore, these base benefits, when coupled with Intel’s customized 14nm process for Core M (Broadwell-Y) and Broadwell’s power optimizations, have allowed Intel to more than double performance per watt compared to Haswell-Y.
    Moving on to costs, Intel offers a breakdown of costs on a cost per mm² basis and pairs that with a plot of transistor sizes. By using more advanced double patterning on their 14nm node Intel was able to achieve better than normal area scaling, as we can see here. The tradeoff is that wafer costs continue to rise from generation to generation, as double patterning requires additional time and ever-finer tools that drive up the cost of production. The end result is that while Intel’s cost per transistor is not decreasing as quickly as the area per transistor, the cost is still decreasing, and significantly so. Even with the additional wafer costs of the 14nm process, on a cost per transistor basis the 14nm process is still slightly ahead of Intel’s historical norm.
    At the same time, costs per transistor continuing to come down at a steady rate may be par for the course, but the fact that Intel has been able to maintain that course at all is a very significant accomplishment. As the cost of wafers and fabbing has risen over the years, there has been concern that transistor costs would plateau, which would leave chip designers able to increase performance only by increasing prices, as opposed to the past 40 years of cheaper transistors allowing prices to hold steady while performance increased. So for Intel this is a major point of pride, especially in light of complaints from NVIDIA and others in recent years that their costs on new nodes aren’t scaling nearly as well as they would like.
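    The relationship at work here is simple: cost per transistor is just cost per unit area multiplied by area per transistor, so as long as area shrinks faster than per-mm² cost rises, transistors keep getting cheaper. A minimal sketch with purely illustrative numbers (ours, not Intel’s):

        # Illustrative only: the relative cost and area figures below are hypothetical,
        # chosen to show the relationship, not Intel's actual data.

        def cost_per_transistor(cost_per_mm2, area_per_transistor_mm2):
            return cost_per_mm2 * area_per_transistor_mm2

        old_cost_per_mm2, old_area = 1.00, 1.00   # normalized previous-node baseline
        new_cost_per_mm2, new_area = 1.20, 0.50   # pricier wafers, but ~2x density

        relative = (cost_per_transistor(new_cost_per_mm2, new_area)
                    / cost_per_transistor(old_cost_per_mm2, old_area))
        print(f"relative cost per transistor: {relative:.2f}x")   # 0.60x: still falling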
    Which brings us to the final subject of Intel’s 14nm presentation, the competitive landscape. Between the ill-defined naming of new process nodes across the entire industry and Intel’s continuing lead in semiconductor manufacturing, Intel likes to point out how their manufacturing nodes compare to foundry competitors such as TSMC and the IBM alliance. Citing 3rd party journal articles for comparison, Intel claims that along with their typical lead in rolling out new nodes, as of the 14nm node they are going to have a multiple generation technical advantage. They expect that their 14nm node will offer significantly smaller feature sizes than competing 14nm nodes, allowing them to maintain consistent logic area scaling at a time when their competitors (i.e. TSMC) cannot.
    For what it’s worth the basic facts do appear to check out, but we would be the first to point out that there is more to semiconductor manufacturing than just logic area scaling. At least until Intel’s competitors start shipping their FinFET products this is going to be speculative, and doesn’t quantify how well those competing process nodes will perform. But then again, the fact that Intel is already on their second FinFET node when their competitors are still ramping up their first is no small feat.

    More...

  4. RSS Bot FEED (#4184)

    Anandtech: HTC Brings the Desire 816 to the US

    HTC is expanding their lineup of devices in the United States with the official launch of the Desire 816 on Virgin Mobile USA. We talked about the Desire 816 when it launched earlier this year at MWC, and much like the recently launched Desire 610 on AT&T, it has taken quite some time for the Desire 816 to make its way to the US. In many ways the Desire 816 can be viewed as a big brother to the Desire 610, with improved specs across the board. It's also a device that helps HTC combat inexpensive phablets like the Huawei Ascend Mate2. The full specifications of the Desire 816 are laid out below.
    HTC Desire 816
    SoC: Qualcomm Snapdragon 400 (MSM8928), 4 x Cortex A7 at 1.6GHz, Adreno 305
    Memory and Storage: 8GB NAND + MicroSDXC, 1.5GB LPDDR2
    Display: 5.5” 1280x720 Super LCD2 at 267ppi
    Cellular Connectivity: 2G / 3G (EVDO) / 4G LTE (Qualcomm MDM9x25, UE Category 4 LTE)
    Dimensions: 156.6 x 78.7 x 7.9 mm, 165g
    Camera: 13MP f/2.2 rear facing, 5MP f/2.8 front facing
    Battery: 2600 mAh (9.88 Whr)
    Other Connectivity: 802.11 b/g/n + BT 4.0, USB2.0, GPS/GNSS, NFC
    SIM Size: Nano-SIM
    Operating System: Android 4.4 KitKat with HTC Sense 5.5
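    As a small aside, the battery line’s mAh and Whr figures are consistent with a typical lithium-ion nominal cell voltage; a quick check (our arithmetic, not an HTC spec):

        # Energy (Wh) = capacity (Ah) x nominal voltage; solve for the implied voltage
        capacity_ah = 2.6    # 2600 mAh
        energy_wh = 9.88
        print(f"implied nominal voltage: {energy_wh / capacity_ah:.2f} V")   # ~3.80 V, typical Li-ion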
    Looking at the specs there's not a whole lot to talk about. The Snapdragon 400 platform has become ubiquitous among devices in this price bracket, and the 720p display is also fairly standard. It's interesting to compare the Desire 816 to the Huawei Ascend Mate2. While they don't compete on the same carrier in the US, they share similar specifications right down to the camera resolutions. The big difference comes with the larger display on the Mate2 and the subsequent larger battery due to the increased physical size of the device. Overall, HTC looks to have put together a very decent device for its price bracket. Features like multiple color choices and HTC's front facing Boomsound speakers will also help to differentiate the Desire 816 from the other competing devices that have a similar hardware platform.
    The Desire 816 launches on Virgin Mobile USA on August 12 for $299 off contract. This variant will support the Sprint EVDO network that Virgin Mobile utilizes. HTC has also stated that they intend to bring other smartphones in the Desire lineup to the United States later this year. Whether that also means expanded carrier availability for the Desire 610 and 816, only time will tell.


    More...

  5. RSS Bot FEED (#4185)

    Anandtech: Lenovo Announces New ThinkStation P Series Desktop Workstations

    As much as I would like to be at SIGGRAPH, one of the reasons to visit would be to see Lenovo’s latest launch of their Haswell-E Desktop Workstation series. One of the key goals in the workstation market in recent quarters has been to develop a professional-grade system that can encompass all the critical industries that require horsepower under the desk: engineering, media, energy, medical, finance and others. These systems have to be verified against industry standards to even be considered by these markets, and the shift to Haswell-E and DDR4 will be an important factor for those that rely on speed and performance. One of the issues these system manufacturers have is defining themselves in the market – Lenovo is already a big player in many other PC-related industries, so listening to customers is all-important when trying to develop market share.
    The new ThinkStation P series will be based around the LGA2011-3 socket, using Xeon processors and high capacity DDR4 memory. Given the nature of the platform, we can assume that the DDR4 will be ECC by default. For GPU compute, Quadro cards are being used, with the top-line P900 model supporting dual Xeons alongside up to three Quadro K6000 graphics cards and up to 14 storage devices. All the P series will be certified for all key ISV applications, and they are quoted (via Intel) as supporting Thunderbolt 2, which should make for interesting reading regarding the PCIe lane distribution or PLX chip arrangement, depending on whether it is onboard or via an optional add-in card.
    In terms of that all-important product differentiation, the P series will use ‘tri-channel cooling’ and air baffles to direct cool air immediately to the component in question and then out of the chassis without touching other components. This is essentially a more integrated solution than the compartmentalized chassis we see in the consumer market, except that when the company makes the whole system, it can control the experience to a much tighter level.
    The P series also runs a trio of ‘FLEX’ themed additions. The FLEX Bay is designed to support an optical drive or the FLEX module, which can hold an ultraslim ODD, media reader or FireWire hub. The FLEX Tray on the P900 allows each of the seven HDD trays to support either one 3.5” drive or two 2.5” drives, hence the fourteen-drive support mentioned earlier. The FLEX Connector is a mezzanine card allowing users to add storage-related cards without sacrificing rear PCIe slots, meaning that this connector keeps the extra card away from the other devices, presumably at right angles. Lenovo also wants to promote tool-less power supply removal, without having to adjust the cables, on the P700 and P500, which suggests that the PSU connects into a daughter PCB with all the connectors pre-connected, allowing the PSU to be replaced easily.
    Lenovo is also adorning their components with QR codes, so that if a user has an issue the code can be scanned and the user directed to the specific webpage dealing with that component. The chassis will have integrated handles for easier movement or rack mounting. Lenovo is also promoting its diagnostic port, allowing the user to plug in an Android smartphone or tablet via USB for system analysis using the ThinkStation app.
    Until Haswell-E and the motherboard chipsets are officially announced, Lenovo unfortunately cannot say more about the specifications of the series beyond memory capacities, DRAM support and power supply numbers; however, they do seem confident in their ability to provide support and a good experience to their ThinkStation users. We have been offered a review sample later in the year, when we can test some of these features.
    Source: Lenovo
    Gallery: Lenovo Announces New Thinkstation P Series Desktop Workstations


    Addition: Ryan met with Lenovo last week, where we were allowed to take some images of the chassis and design:



    More...

  6. RSS Bot FEED (#4186)

    Anandtech: Short Bytes: Intel's Core M and Broadwell-Y SoC

    Intel has slowly been feeding us information about their upcoming Broadwell processors for a couple years now, with the first real details kicking off almost a year ago at IDF 2013. Since then, the only other noteworthy piece of information came back in March when it was revealed that socketed Broadwell CPUs with unlocked multipliers will be available with Iris Pro Graphics. Today, Intel is ready to begin providing additional information, and it starts with the Broadwell-Y processor, which Intel is now referring to as an SoC (System On Chip). We have an in-depth article on the subject, but for Short Bytes we want to focus on the bottom line: what does this mean for end users?
    The big news for Broadwell is that it will be the first 14nm processor available to the public, following on the success of Intel's 22nm process technology. Shrinking the process technology from 22nm to 14nm can mean a lot of things, but the primary benefit this time appears to be smaller chip sizes and lower power requirements. The first parts will belong to the Core M family of products, a new line catering specifically to low power, high mobility form factors (typically tablets and hybrid devices). With Core M, Intel has their sights set on the fanless computing market with sub-9mm thick designs, and they have focused on reducing power requirements in order to meet the needs of this market. This brings us to Broadwell-Y, the lowest power version of Broadwell and the successor to Haswell-Y and the codename behind the new Core M.
    The reality of Intel's Y-series of processors is that they haven't been used all that much to date. Only a handful of devices used Haswell-Y (and even fewer used Ivy Bridge-Y), mostly consisting of 2-in-1 devices that can function as both a laptop and a tablet. For example, the base model Surface Pro 3 uses a Core i3-4020Y, and Dell's XPS 11 and certain Venue Pro 11 tablets also use Y-series parts; Acer, HP, Sony, and Toshiba also have some detachable hybrid devices with the extreme low power processors. Unfortunately, pricing on the Y-series is generally much higher than competing solutions (i.e. ARM-based SoCs), and there have been criticisms of Intel's higher power requirements and lower overall battery life as well.
    Core M thus serves marketing needs as well as technical requirements: it replaces the Core i3/i5/i7 Y-series parts and gives Intel a brand they can market directly at premium tablets/hybrids. And in another move likely driven by marketing, Core M will be the launch part for Intel's new 14nm process technology. Transitions between process technology usually come every 2-3 years, so the 14nm change is a big deal and launching with their extreme low power part makes a statement. The key message of Broadwell is clear: getting into lower power devices and improving battery life is a critical target. To that end, Broadwell-Y probably won't be going into any smartphones, but getting into more premium tablets and delivering better performance with at least competitive battery life relative to other SoCs is a primary goal.
    Compared to the Haswell-Y parts, Intel has made some significant advances in performance as well as power use, which we've covered elsewhere. The cumulative effect of the improvements Intel is bringing is that Broadwell-Y has a greater than 2X reduction in TDP (Thermal Design Power) compared to Haswell-Y. It also has a 50% smaller and 30% thinner package and uses 60% lower idle power. Intel points out that Broadwell-Y is set to deliver more than a 2X improvement in performance per Watt over Haswell-Y, though that's a bit more of a nebulous statement (see below). Many of the improvements come thanks to Intel's increased focus on driving down power requirements. Where previous Intel processors targeted laptops and desktops as the primary use case and then refined and adjusted the designs to get into lower power envelopes, with Broadwell Intel is putting the Y-series requirements center stage. The term for this is "co-optimization" of the design process, and these co-optimizations for Broadwell-Y are what allow Intel to talk about "2x improvements". But you need to remember what is being compared: Haswell-Y and Broadwell-Y.
    Broadwell parts in general will certainly be faster/better than the current Haswell parts – Intel doesn't typically "go backwards" on processor updates – but you shouldn't expect twice the performance at the same power. Instead, Broadwell-Y should offer better performance than Haswell-Y using much less power, but if you reduce total power use by 2X you could increase performance by just 5% and still claim a doubling of performance per Watt. And that's basically what Intel is doing here. Intel estimates the core Broadwell architecture to be around 5% faster than Haswell at the same clocks; specifically, IPC (Instructions Per Cycle) is up ~5% on average. Similarly, changes and improvements to the graphics portion of the processor should deliver more performance at a lower power draw. Add in slightly higher clock speeds and you get a faster part than last generation that uses less power. These are all good improvements, but ultimately it comes down to the final user experience and the cost.
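    To make the arithmetic behind that claim explicit, here is a minimal sketch using the figures quoted above (a roughly 2X power reduction and a ~5% performance gain; the normalized numbers are illustrative, not Intel's measurements):

        # Performance-per-Watt arithmetic: halve power, add ~5% performance.
        haswell_perf, haswell_power = 1.00, 1.00       # normalized Haswell-Y baseline
        broadwell_perf = haswell_perf * 1.05           # ~5% higher IPC at similar clocks
        broadwell_power = haswell_power / 2.0          # ">2X reduction in TDP"

        gain = (broadwell_perf / broadwell_power) / (haswell_perf / haswell_power)
        print(f"performance per Watt: {gain:.2f}x")    # 2.10x, i.e. "more than 2X"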
    Everywhere you go, people are increasingly using tablets and smartphones for many of their daily computing needs, and being left out of that market is the road to irrelevance. Core M (Broadwell-Y) is Intel's latest push to make inroads into these extremely low power markets, and on paper it looks like Intel has a competitive part. It's now up to the device vendors to deliver compelling products, as fundamentally the choice of processor is only one element of an electronics device. Being the first company to deliver 14nm parts certainly gives Intel an edge over the competition, but high quality Android and iOS tablets sell for $300-$500, so there's not a lot of room for a $100+ processor – which is why Intel has their Atom processors (due for the 14nm treatment with Braswell, if you're wondering).
    Core M is going after the premium tablet/hybrid market, with benefits including full Windows 8 support, but will it be enough? If you're interested in such a device and you don't already own the Haswell-Y version, Core M products should deliver slimmer and lighter devices with improved battery life and better performance. Don't expect a 10" Core M tablet to deliver the same battery life as a 7" Android/iOS device (at least, not without a larger battery), since the display and other components contribute a lot to power use and Windows 8 has traditionally been far less battery friendly than Android; still, Core M tablets may finally match or perhaps even exceed the battery life of similarly sized iOS/Android tablets. The first retail products with Core M should be shipping before the end of the year, so we'll find out later this year and early next how well Broadwell-Y is able to meet its lofty goals. And we'll also find out how much the Core M products cost.


    More...

  7. RSS Bot FEED (#4187)

    Anandtech: Browser Face-Off: Battery Life Explored 2014

    It has been five years since we did a benchmark of the various web browsers and their effect on battery life, and a lot has changed. Our testing then included Opera 9 & 10, Chrome 2, Firefox 3.5.2, Safari 4, and IE8. Just looking at those version numbers is nostalgic. Not only have the browsers gone through many revisions since then, but computer hardware and the Windows operating system are very different. While there has been a lot of talk, there hasn't been a lot of data comparing browser battery usage. Today we're going to put the latest browsers to the test and deliver some concrete numbers.


    More...

  8. RSS Bot FEED (#4188)

    Anandtech: AMD Completes FirePro Refresh, Adds 4 New FirePro Cards

    Kicking off a busy day for professional graphics, AMD is first up, announcing that they will be launching a quartet of new FirePro cards. As part of the company’s gradual FirePro refresh that began with the W9100 in April and was followed by the W8100 in June, today AMD is gearing up to refresh the rest of their FirePro lineup with new products for the mid-range and low-end segments of the pro graphics market.
    Being announced today are the FirePro W7100, W5100, W4100, and W2100. These parts are based on a range of AMD GPUs – including Tonga, a new GPU that has yet to show up in any other AMD products – and are designed for the sub-$2500 market segment that the current W8100 sits at the top of. With a handful of exceptions, the bulk of these upgrades are straightforward, focused on making AMD’s entire FirePro lineup 4K capable, improving performance across the board, and doubling the amount of VRAM compared to the past generation to allow for larger data sets to be used.
    AMD FirePro W Series Specification Comparison
    (AMD FirePro W7100 | W5100 | W4100 | W2100)
    Stream Processors: 1792 | 768 | 512 | 320
    ROPs: ? | 16 | 16 | 8
    Memory Clock: 5GHz GDDR5 | 6GHz GDDR5 | 5.5GHz GDDR5 | ?
    Memory Bus Width: 256-bit | 128-bit | 128-bit | 64-bit
    VRAM: 8GB | 4GB | 4GB | 2GB
    Double Precision: ? | 1/16 | 1/16 | 1/16
    TDP: ? | ? | 50W | 26W
    GPU: Tonga | Bonaire | Cape Verde | Oland
    Architecture: GCN 1.1 | GCN 1.1 | GCN 1.0 | GCN 1.0
    Display Outputs: 4 | 4 | 4 | 2
    Starting at the top, from a technical perspective the W7100 is the most interesting of the new FirePro cards. Whereas the previous-generation W7000 was based on a second-tier version of AMD’s venerable Tahiti GPU, the W7100 gets a brand new GPU entirely, one that we haven’t seen before. Named Tonga, this new GPU is a smaller, lower performance GCN 1.1 part that slots in under the Hawaii GPU used in the W9100 and W8100. However, while AMD is announcing the W7100 today, they are not disclosing any additional information on Tonga, so while we can draw some basic conclusions from the W7100’s specifications, a complete breakdown of this new GPU will have to wait for another day.
    From a specification point of view AMD is not disclosing the GPU clockspeed or offering any floating point throughput performance numbers, but we do know that W7100 will feature 1792 stream processors. Coupled with that is 8GB of GDDR5 clocked at 5GHz sitting on a 256-bit memory bus.
    The W7100 is designed to be a significant step up from the outgoing W7000. Along with doubling the W7000’s memory from 4GB to 8GB, the Tonga GPU in the W7100 inherits Hawaii’s wider geometry front-end, allowing the W7100 to process 4 triangles/clock versus the W7000’s 2 tris/clock. Overall compute/rendering performance should also be greatly increased due to the much larger number of stream processors (1792 vs. 1280), but without clockspeeds we can’t say for sure.
    Like the W7000 before it, the W7100 is equipped with 4 full size DisplayPorts, allowing for a relatively large number of monitors to be used with the card. Because this is a GCN 1.1 GPU it gets the newer GCN 1.1 display controller, which for AMD’s 4K ambitions means that the card can drive 3 4K@60Hz displays, or 4 4K displays if some operate at 30Hz.
    In AMD’s product stack the W7100 is designed to be a budget alternative to the W9100 and W8100, offering reduced performance but at a much lower cost. AMD is primarily targeting the engineering and media markets with the W7100, as its compute performance and 8GB of VRAM should be enough for most engineering workloads, while its VRAM capacity and ability to drive 4 4K displays make it a good fit for 4K video manipulation.
    The second card being introduced today is the W5100. This part is based on AMD’s Bonaire GPU, a GCN 1.1 GPU that has been in AMD’s portfolio for over a year now but has not made it into a FirePro part until now. W5100 replaces the outgoing W5000, which was a heavily cut-down Pitcairn part.
    In terms of specifications, the W5100 utilizes a slightly cut-down version of Bonaire with 768 SPs active. It is clocked at approximately 910MHz, which puts its compute performance at 1.4 TFLOPS for single precision. Feeding W5100 is 4GB of VRAM attached to a 128-bit memory bus and clocked at 6GHz.
    Compared to the outgoing W5000, the W5100 gains the usual VRAM capacity upgrade that the rest of the Wx100 cards have seen, while the other specifications are a mixed bag on paper. Compute performance is only slightly improved – from 1.28 TFLOPS to 1.4 TFLOPS – and memory bandwidth has actually regressed slightly from 102GB/sec. Consequently the biggest upgrade will be found in memory capacity bound scenarios; otherwise the W5100’s greatest improvements come from its GCN 1.1 lineage.
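    Those numbers are easy to sanity check: GCN stream processors execute one fused multiply-add (2 FLOPs) per clock, and bandwidth is simply bus width times effective data rate. A quick sketch using the figures above:

        # Back-of-the-envelope check of the W5100 figures quoted above.
        sps = 768
        core_clock_ghz = 0.910
        sp_tflops = sps * core_clock_ghz * 2 / 1000    # 2 FLOPs per SP per clock (FMA)
        print(f"single precision: {sp_tflops:.2f} TFLOPS")        # ~1.40 TFLOPS

        bus_width_bits = 128
        mem_data_rate_ghz = 6.0
        bandwidth_gbs = bus_width_bits / 8 * mem_data_rate_ghz
        print(f"memory bandwidth: {bandwidth_gbs:.0f} GB/s")      # 96 GB/s vs. the W5000's 102 GB/s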
    Speaking of which, with 4 full size DisplayPorts the W5100 has the same 4K display driving capabilities as the W7100. However with lower performance and half the VRAM, it’s decidedly a mid-range card and AMD treats it as such. This means it’s targeted towards lower power usage scenarios where the high compute performance and 8GB+ VRAM capacities of the W7100 and higher aren’t needed.
    The third of today’s new FirePro cards is the W4100, based on AMD’s older Cape Verde GPU. This is not the first time that Cape Verde has appeared in a FirePro product, but it is the first time it has appeared in a workstation part, its previous appearance being the display wall niche W600. At the same time the W4100 doesn’t have a true analogue in AMD’s previous generation FirePro stack, which stopped at the W5000, so the W4100 marks a new, lower priced and lower performance tier for FirePro.
    With just 512 SPs active the W4100 tops out at only 50W power consumption, reflecting the fact that it is targeted towards lower power use cases. AMD has paired the card with 2GB of VRAM, and based on Cape Verde’s capabilities we expect that this is on a 128-bit bus. AMD has not provided any more technical details on the card, but it goes without saying that this is not a card meant to be a performance powerhouse.
    AMD’s target market for this card is lightweight 2D and 3D workloads such as finance and entry level CAD. The 4 mini-DisplayPorts allow the card to directly drive up to 4 displays, though because this is a GCN 1.0 GPU it doesn’t have the same display flexibility as the W5100.
    The final FirePro card being introduced today is the FirePro W2100, which is AMD’s new entry-level FirePro card. Like the W4100 it has no true analogue in AMD’s older product stack, but functionally it replaces the old Turks based V4900, a card which AMD kept around even after the launch of GCN to serve as their entry level FirePro product.
    W2100 is based on AMD’s Oland GPU, which marks the first time that this existing AMD GPU has appeared in a FirePro product. W2100 uses a cut down version of Oland with 320 SPs active and attached to 2GB of memory on a 64-bit bus. Oland is a very limited functionality GPU, and while it’s more than suitable for basic imaging it should be noted that it doesn’t have a video decoder.
    At a TDP of just 26W, the W2100 is AMD’s lowest power, lowest performance card. Functionally it’s a cheaper alternative to the W4100 for users who don’t need to drive 4 displays, with W2100 featuring just 2 DisplayPorts. The targeted market is otherwise similar, with a focus on lightweight 2D and 3D workloads over 1-2 monitors.
    Meanwhile along with today’s product announcements AMD is also announcing that they will be bringing their low-level Mantle API over to the FirePro family. The nature of the pro graphics market means that it will likely be some time before we see Mantle put in meaningful use here since the API is still under development, but once AMD gets the API locked down they believe that Mantle can offer many of the same benefits for professional graphics workloads as it can gaming. The greatly reduced draw call overhead should be a boon here for many 3D workloads, and Mantle’s ability to more easily transition between compute and graphics workloads would map well towards engineering tasks that want to do both at the same time.
    Wrapping things up, AMD has not revealed final pricing for these cards at this time, though we expect pricing to follow the previous generation W series cards. Meanwhile the W2100, W4100, and W5100 will be available next month. Otherwise no doubt owing to its use of the new Tonga GPU, W7100 will be farther out, with availability expected in Q4 of this year.


    More...

  9. RSS Bot FEED (#4189)

    Anandtech: NVIDIA Refreshes Quadro Lineup, Launches 5 New Quadro Cards

    Continuing today’s spate of professional graphics announcements, along with AMD’s refresh of their FirePro lineup NVIDIA is announcing that they are undertaking their own refresh of their Quadro lineup. Being announced today and shipping in September are 5 new Quadro cards that will come just short of a top-to-bottom refresh of the Quadro lineup.
    With the exception of NVIDIA’s much more recently introduced Quadro K6000 – which will continue its reign as NVIDIA’s most powerful professional GPU – NVIDIA’s Quadro refresh comes as the bulk of the current Quadro K5000 family approaches 2 years of age. At this point NVIDIA is looking to offer an across-the-board boost to their Quadro lineup, increasing performance and memory capacity at every tier. As a result this refresh will involve replacing NVIDIA’s Quadro cards with newer models based on larger and more powerful Kepler and Maxwell GPUs, released as the Quadro Kx200 series. All told, NVIDIA is shooting for an average performance improvement of 40%, on top of any benefits from the larger memory amounts.
    NVIDIA Quadro Refresh Specification Comparison
    (Quadro K5200 | K4200 | K2200 | K620 | K420)
    CUDA Cores: 2304 | 1344 | 640 | 384 | 192
    Core Clock: 650MHz | 780MHz | 1GHz | 1GHz | 780MHz
    Memory Clock: 6GHz GDDR5 | 5.4GHz GDDR5 | 5GHz GDDR5 | 1.8GHz DDR3 | 1.8GHz DDR3
    Memory Bus Width: 256-bit | 256-bit | 128-bit | 128-bit | 128-bit
    VRAM: 8GB | 4GB | 4GB | 2GB | 1GB
    Double Precision: ? | 1/24 | 1/32 | 1/32 | 1/24
    TDP: 150W | 105W | 68W | 45W | 41W
    GPU: GK110 | GK104 | GM107 | GM107? | GK107?
    Architecture: Kepler | Kepler | Maxwell | Maxwell | Kepler
    Displays Supported (Outputs): 4 (4) | 4 (3) | 4 (3) | 4 (2) | 4 (2)
    We’ll start things off with the Quadro K5200, NVIDIA’s new second-tier Quadro card. Based on a cut down version of NVIDIA’s GK110 GPU, the K5200 is a very significant upgrade over the K5000 thanks to the high performance and unique features found in GK110, a combination which elevates the K5200 much closer to the K6000 than to the K5000 it replaces.
    The K5200 ships with 12 SMXes (2304 CUDA cores) enabled and utilizes a 256-bit memory bus, making this the first NVIDIA GK110 product we’ve seen ship without the full 384-bit memory bus. NVIDIA has put the GPU clockspeed at 650MHz while the memory clock stands at 6GHz. Meanwhile the card has the second largest memory capacity of the Quadro family, doubling K5000’s 4GB of VRAM for a total of 8GB.
    Compared to the K5000, K5200 offers an increase in shader/compute throughput of 36%, and a smaller 11% increase in memory bandwidth. More significant however are GK110’s general enhancements, which elevate K5200 beyond K5000. Whereas K5000 and its GK104 GPU made for a strong graphics card, it was a relatively weak compute card, a weakness that GK110 resolved. As a result K5200 should be similar to K6000 in that it’s a well-balanced fit for mixed graphics/compute workloads, and the ECC memory support means that it offers an additional degree of reliability not found on the K5000.
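    As a rough cross-check of those percentages, the sketch below compares theoretical throughput and bandwidth. Note that the K5000 figures used here (1536 CUDA cores at roughly 706MHz, with 5.4GHz GDDR5 on a 256-bit bus) are recalled from the older card’s published specifications rather than taken from today’s announcement, so treat them as assumptions:

        # Kepler GPUs execute one FMA (2 FLOPs) per CUDA core per clock.
        def gflops(cores, clock_ghz):
            return cores * clock_ghz * 2

        def bandwidth_gbs(bus_bits, data_rate_ghz):
            return bus_bits / 8 * data_rate_ghz

        k5000_flops = gflops(1536, 0.706)          # assumed K5000 configuration
        k5200_flops = gflops(2304, 0.650)
        k5000_bw = bandwidth_gbs(256, 5.4)
        k5200_bw = bandwidth_gbs(256, 6.0)

        print(f"compute gain: {k5200_flops / k5000_flops - 1:.0%}")     # ~38%, in line with the ~36% quoted
        print(f"bandwidth gain: {k5200_bw / k5000_bw - 1:.0%}")         # ~11%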
    As is usually the case when rolling out a refresh wave of cards based on existing GPUs, because performance has gone up, power consumption has as well. NVIDIA has clamped the K5200 at 150W (important for workstation compatibility), which is much lower than the full-fledged K6000 but 28W more than the K5000. Nonetheless the performance gains should easily outstrip the power consumption increase.
    Meanwhile display connectivity remains unchanged from the K5000 and K6000. NVIDIA’s standard Quadro configuration is a DL-DVI-I port, a DL-DVI-D port, and a pair of full size DisplayPorts, with the card able to drive up to 4 displays in total through a combination of those ports and MST over DisplayPort.
    NVIDIA’s second new Quadro card is the K4200. Replacing the GK106 based K4000, the K4200 sees NVIDIA’s venerable GK104 GPU find a new home as NVIDIA’s third-tier Quadro card. Unlike K5200, K4200’s GPU shift doesn’t come with any kind of dramatic change in functionality, so while it will be an all-around more powerful card than the previous K4000, it’s still going to be primarily geared towards graphics like the K4000 and K5000 before it.
    For the K4200 NVIDIA is using a cut down version of GK104 to reach their performance and power targets. Comprised of 7 active SMXes (1344 CUDA cores), the K4200 is paired with 4GB of VRAM. Clockspeeds stand at 780MHz for the GPU and 5.4GHz for the VRAM.
    On a relative basis the K4200 will see some of the greatest performance gains of this wave of refreshes. Its 2.1 TFLOPS of compute/shader performance blasts past K4000 by 75%, and memory bandwidth has been increased by 29%. However the 4GB of VRAM makes for a smaller increase in VRAM than the doubling most other Quadro cards are seeing. Otherwise power consumption is once again up slightly, rising from 80W to 105W in exchange for the more powerful GK104 GPU.
    Finally, as was the case with the K5200, display connectivity remains unchanged. Since the K4200 is a single slot card like the K4000 before it, this means NVIDIA uses a single DL-DVI-I port along with a pair of full size DisplayPorts. Like other Kepler products the card can drive up to 4 displays, though doing this will require a DisplayPort MST hub to get enough outputs. On that note, users looking to pair this card with multiple monitors will be pleased to find that Quadro Sync is supported on the K4200 for the first time, having previously been limited to the K5000 and higher.
    In NVIDIA’s refreshed Quadro lineup, the K4200 will primarily serve as the company’s highest-end single-slot offering. As with other GK10x based GPUs compute performance is not its strongest suit, while for graphics workloads such as CAD and modeling it should offer a nice balance of performance and price.
    Moving on, NVIDIA’s third Quadro refresh card is the K2200. This replaces the GK107 based K2000 and marks the first Quadro product to utilize one of NVIDIA’s newest generation Maxwell GPUs, tapping NVIDIA’s GM107 GPU. The use of Maxwell on a Quadro K part makes for an amusing juxtaposition, though the architectural similarities between Maxwell and Kepler mean that there isn’t a meaningful feature difference despite the generation gap.
    As was the case with NVIDIA’s consumer desktop GM107 cards, NVIDIA is aiming to produce an especially potent sub-75W card with the K2200. Here NVIDIA uses a fully enabled GM107 GPU – all 5 SMMs (640 CUDA cores) are enabled – and it’s paired with 4GB of VRAM on a 128-bit bus. Meanwhile based on NVIDIA’s performance figures the GPU clockspeed should be just north of 1GHz while the memory clock stands at 5GHz.
    Since the K2200 is replacing a GK107 based card, the performance gains compared to the outgoing K2000 should be significant. On the consumer desktop side we’ve seen GM107 products come close to doubling GK107 parts, and we’re expecting much the same here. K2200’s 1.3 TFLOPS of single precision compute/shader performance is 78% higher than K2000’s, which means that K2200 should handily outperform its predecessor. Otherwise the 4GB of VRAM is a full doubling over the K2000’s smaller VRAM pool, greatly increasing the size of the workloads K2200 can handle.
    Meanwhile display connectivity is identical to the new K4200 and the outgoing K2000. The K2200 can drive up to 4 displays by utilizing a mix of its DL-DVI port, two DisplayPorts, and a DisplayPort MST hub.
    In NVIDIA’s new Quadro lineup the K2200 will serve as their most powerful sub-75W card. As we’ve seen in other NVIDIA Maxwell products, this is an area the underlying GM107 excels at.
    NVIDIA’s fourth Quadro card is the K620. This is another Maxwell card, and while NVIDIA doesn’t specify the GPU we believe it to be based on GM107 (and not GM108) due to the presence of a 128-bit memory bus. K620 replaces the GK108 based K600, and should offer substantial performance gains similar to what is happening with the K2200.
    K620’s GM107 GPU features 3 SMMs (384 CUDA cores) enabled, and it is paired with 2GB of DDR3 operating on a 128-bit memory bus. Like the K2200 the GPU clockspeed appears to be a bit over 1GHz, while the memory clockspeed stands at 1.8GHz.
    Compared to the K600, overall performance should be significantly improved. It’s worth pointing out, though, that since memory bandwidth is identical to NVIDIA’s previous generation card, in memory bandwidth bound scenarios the K620 may not pull ahead by much. Nonetheless the memory pool has been doubled from 1GB to 2GB, so in memory capacity constrained situations the K620 should fare much better. Power consumption is just slightly higher this time, at 45W versus the K600’s 41W.
    As this is a 3 digit Quadro product, NVIDIA considers this an entry level card and it is configured accordingly. A single DL-DVI port and a single full size DisplayPort are the K620’s output options, with an MST hub being required to attach additional monitors to make full use of its ability to drive 4 displays. By going with this configuration however NVIDIA is able to offer the K620 in a low profile configuration, making it suitable for smaller workstations that can’t accept full profile cards.
    Finally, NVIDIA’s last new Quadro card is the K420. Dropping back to a Kepler GPU (likely GK107), it replaces the Quadro 410. From a performance perspective this card won’t see much of a change – the number of CUDA cores is constant at 192 – but memory bandwidth has been doubled alongside the total VRAM pool, which is now 1GB.
    Like K620, K420 can drive a total of 4 displays, while the physical display connectors are composed of a single DL-DVI port and a single full size DisplayPort. This low profile card draws 41W, the same as the outgoing 410.
    With all but 1 of these cards receiving a doubled VRAM pool and significantly improved performance, NVIDIA expects that these cards should be well suited to accommodating the larger datasets that newer applications use, especially in the increasingly important subject of 4K video. Coupled with NVIDIA’s existing investment in software – both ISVs and their own cloud technology ecosystem – NVIDIA expects to remain ahead of the curve on functionality and reliability.
    Wrapping things up, NVIDIA tells us that the Quadro refresh cards will be shipping in September. In the meantime we’ll be reviewing some of these cards later this month, so stay tuned.


    More...

  10. RSS Bot FEED (#4190)

    Anandtech: NVIDIA Launches Next GeForce Game Bundle - Borderlands: The Pre-Sequel

    After letting their previous Watch Dogs bundle run its course over the past couple of months, NVIDIA sends word this afternoon that they will be launching a new game bundle for the late-summer/early-fall period.
    Launching today, NVIDIA and their partners will be bundling Gearbox and 2K Australia’s forthcoming FPS Borderlands: The Pre-Sequel with select video cards. This latest bundle is for the GTX 770 and higher, so buyers accustomed to seeing NVIDIA’s bundles will want to take note that this bundle is a bit narrower than usual since it doesn’t cover the GTX 760.
    As for the bundled game itself, Borderlands: The Pre-Sequel is the not-quite-a-sequel to Gearbox’s well received 2012 title Borderlands 2. As was the case with Borderlands 2 before it, this latest Borderlands game will be receiving PhysX enhancements courtesy of NVIDIA, leveraging the PhysX particle, cloth, and fluid simulation libraries for improved effects.
    NVIDIA Current Game Bundles
    Video Card | Bundle
    GeForce GTX 770/780/780Ti/Titan Black | Borderlands: The Pre-Sequel
    GeForce GTX 750/750Ti/760 | None
    Meanwhile on a lighter note, it brings a chuckle to see that NVIDIA is bundling what will most likely be a Direct3D 9 game with their most advanced video cards. This, if nothing else, is a testament to the longevity of the API, which has long outlasted the hardware it originally ran on.
    Finally, as always, these bundles are being distributed in voucher form, with retailers and etailers providing vouchers with qualifying purchases. So buyers will want to double check whether their purchase includes a voucher for the above deal. Checking NVIDIA’s terms and conditions, the codes from this bundle are good through October 31st, so it looks like this bundle will run for around 2 months.


    More...
