
Thread: Anandtech News

  1. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #2721

    Anandtech: Imagination Technologies Confirms PowerVR SGX 544 IP used in Exynos 5 Octa

    ARM was being unusually coy when talking about the GPU IP used in Samsung's recently announced Exynos 5 Octa. We eventually found out why: unlike the Exynos 5 Dual and Exynos 4 silicon, ARM's Mali GPU isn't included in the Exynos 5 Octa's floorplan. Through a bit of digging we concluded that Samsung settled on a PowerVR SGX 544MP3 GPU. We couldn't disclose how we came to this conclusion publicly, but thankfully today Imagination Technologies confirmed the use of their IP in the Exynos 5 Octa 5410. All Imagination confirmed was the use of PowerVR SGX 544 IP in the Exynos 5 Octa; however, we still believe Samsung is using three cores running at up to 533MHz.
    Thankfully we should be able to confirm a lot of this very soon. The Exynos 5 Octa is widely expected to be used in the international variants of Samsung's upcoming Galaxy S 4. We will be at Samsung's Galaxy S 4 Unpacked launch event in NYC this Thursday to find out.
    Gallery: Imagination Technologies Confirms PowerVR SGX 544 IP used in Exynos 5 Octa




    More...

  2. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #2722

    Anandtech: Inside AnandTech 2013: All-SSD Architecture

    When it comes to server hardware failures, I've seen them all with our own infrastructure. With the exception of CPUs, I've seen virtually every other component that could fail, fail in the past 16 years of running AnandTech. Motherboards, power supplies, memory and of course, hard drives.
    By far the most frequent failure in our infrastructure had to be mechanical drives. Within the first year after the launch of Intel's X25-M in 2008, I had transitioned all of my testbeds to solid state drives. The combination of performance and reliability was what I needed. Most of my testbeds were CPU bound, so I didn't necessarily need a ton of IO performance - but having the headroom offered by a good SSD meant that I could get more consistent CPU performance results between runs. The reliability side was simple to understand - with a good SSD, I wouldn't have to worry about my drive dying unexpectedly. Living in fear of a testbed hard drive dying over the weekend before a big launch was a thing of the past.
    When it came to rearchitecting the AnandTech server farm, these very same reasons for going the SSD route on all of our testbeds (and personal systems) were just as applicable to the servers that ran AnandTech.
    Our infrastructure is split up between front end application servers and back end database servers. With the exception of the boxes that serve our images, most of our front end app servers don't really stress IO all that much. The three 12-core virtualized servers at the front end would normally be fine with some hard drives; however, we decided to go with mainstream SSDs to lower the risk of a random mechanical failure. We didn't need the endurance of an enterprise drive in these machines since they weren't being written to all that frequently, but we needed reliable drives. We settled on 160GB Intel X25-M G2s - quite old by today's standards - but partitioned the drives down to 120GB in order to ensure they'd have a very long lifespan.
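    As a rough illustration of what that partitioning buys you, here's a quick back-of-the-envelope sketch of the extra spare area involved (a simplification; real-world endurance also depends on the controller, write amplification and the workload):

        # Rough spare-area math for partitioning a 160GB drive down to 120GB.
        # Illustrative only - actual endurance gains depend on the controller,
        # write amplification and the workload, none of which are modeled here.
        advertised_gb = 160    # the drive's capacity as sold
        partitioned_gb = 120   # the capacity actually exposed to the OS

        extra_spare_gb = advertised_gb - partitioned_gb
        op_vs_used = extra_spare_gb / partitioned_gb         # relative to usable space
        op_vs_advertised = extra_spare_gb / advertised_gb    # relative to advertised space

        print(f"Extra spare area: {extra_spare_gb} GB")
        print(f"Overprovisioning vs. usable capacity: {op_vs_used:.0%}")
        print(f"Overprovisioning vs. advertised capacity: {op_vs_advertised:.0%}")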

    Where performance matters more is in our back end database servers. We run a combination of MS SQL and MySQL, and our DB workloads are particularly IO intensive. In the old environment we had around a dozen mechanical drives in various RAID configurations powering all of the databases that ran the site. To put performance in perspective, I grabbed our old Forum Database server and took a look at the external SAS RAID array we had created. Until last year, the Forums were powered by a combination of 6 x Seagate Barracuda ES.2s and 4 x Seagate Cheetah 10K.7s.
    For the new Forums DB we moved to 6 x 64GB Intel X25-Es. Again, old by modern standards, but a huge leap above what we had before. To put the performance gains in perspective I ran some of our enterprise IO benchmarks on the old array and the new array to compare. In the old environment the DB workload was split across the Barracuda ES.2 array (6-drive RAID-10) and the Cheetah array (4-drive RAID-5); to keep things simple I just created a 4-drive RAID-0 using the Cheetahs, which should, if anything, overstate the peak performance of the old hardware:
    AnandTech Forums DB IO Performance Comparison - 2013 vs 2007
                                                  | MS SQL - Update Daily Stats | MS SQL - Weekly Stats Maintenance | Oracle Swingbench
    Old Forums DB Array (4 x 10K RPM SAS RAID-0)  | 146.1 MB/s                  | 162.9 MB/s                        | 2.8 MB/s
    New Forums DB Array (6 x X25-E RAID-10)       | 394.4 MB/s                  | 450.5 MB/s                        | 55.8 MB/s
    Performance Increase                          | 2.7x                        | 2.77x                             | 19.9x
    The two SQL tests are actually from our own environment, so the performance gains are quite applicable. The advantage here is only around 2.7x. In reality the gains can be even greater, but we don't have good traces of our live DB load - just some of our most IO intensive tasks on the DB servers. The final benchmark however does give us some indication of what a more random enterprise workload can enjoy with a move to SSDs from a hard drive array. Here the performance of our new array is nearly 20x the old HDD array.
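    For completeness, the speedup row above is just the ratio of the new array's throughput to the old array's; a trivial sketch of the math:

        # Throughput figures (MB/s) taken from the table above.
        old_array = {"Update Daily Stats": 146.1, "Weekly Stats Maintenance": 162.9, "Swingbench": 2.8}
        new_array = {"Update Daily Stats": 394.4, "Weekly Stats Maintenance": 450.5, "Swingbench": 55.8}

        for test, old_mbps in old_array.items():
            new_mbps = new_array[test]
            print(f"{test}: {old_mbps} -> {new_mbps} MB/s ({new_mbps / old_mbps:.1f}x)")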
    Note that there's another simplification that comes along with our move to SSDs: we rely completely on Intel's software RAID. There are no third party RAID controllers, no extra firmware/drivers to manage and validate, and there's no external chassis needed to get more spindles. We went from a 4U HP DL585 server with a 2U Promise Vtrak J310s chassis and 10 hard drives, down to a 2U server with 6 SSDs - and came out ahead in the performance department. Later this week I'll talk about power savings, which ended up being a much bigger deal.

    This is just the tip of the iceberg. In our specific configuration we went from old hard drives to old SSDs. With even greater demands you could easily go to truly modern enterprise SSDs or even PCIe based solutions. Using a combination of consumer and enterprise drives isn't a bad idea if you want to transition to an all-SSD architecture. Deploying reliable consumer drives in place of lightly used hard drives is a way to cut down the number of moving parts in your network, while moving to higher performing/higher endurance enterprise SSDs can deliver significant performance benefits as well.



    More...

  3. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #2723

    Anandtech: Inside AnandTech 2013: The Hardware

    By the end of 2010 we realized two things. First, the server infrastructure that powered AnandTech was getting very old and we were seeing an increase in component failures, leading to higher than desired downtime. Secondly, our growth over the previous years had begun to tax our existing hardware. We needed an upgrade.
    Ever since we started managing our own infrastructure back in the late 90s, the bulk of our hardware has always been provided by our sponsors in exchange for exposure on the site. It also gives them a public case study, which isn't always possible depending on who you're selling to. We always determine what parts we go after and the rules of engagement are simple: if there's a failure, it's a public one. The latter stipulation tends to worry some, and we'll get to that in one of the other posts.
    These days there's a tempting alternative: deploying our infrastructure in the cloud. With low (to us) hardware costs however, doing it internally still makes more sense. Furthermore, it also allows us to do things like performance analysis and create enterprise level benchmarks using our own environment.
    Spinning up new cloud instances at Amazon did have its appeal though. We needed to embrace virtualization and the ease of deployment benefits that came with it. The days of one box per application were over, and we had more than enough hardware to begin to consolidate multiple services per box.
    We actually moved to our new hardware and software infrastructure last year. With everything going on last year, I never got the chance to actually talk about what our network ended up looking like. With the debut of our redesign, I had another chance to do just that. What will follow are some quick posts looking at storage, CPU and power characteristics of our new environment compared to our old one.
    To put things in perspective: the last major hardware upgrade we did at AnandTech was back in the 2006 - 2007 timeframe. Our Forums database server had 16 AMD Opteron cores inside; it just took 8 dual-core CPUs to get there. The world has changed over the past several years, and our new environment is much higher performing, more power efficient and definitely more reliable.
    In this post I want to go over, at a high level, the hardware behind the current phase of our infrastructure deployment. In the subsequent posts (including another one that went live today) I'll offer some performance and power comparisons, as well as some insight into why we picked each component.
    I'd also like to take this opportunity to thank Ionity, the host of our infrastructure for the past 12 months. We've been through a number of hosts over the years, and Ionity marks the best yet. Performance is typically pretty easy to guarantee when it comes to any hosting provider at a decent datacenter, but it's really service, response time and competence of response that become the differentiating factors for us. Ionity delivered on all fronts, which is why we're there and plan on continuing to be so for years to come.
    Out with the Old

    Our old infrastructure featured more than 20 servers, a combination of 1U dual-core application servers and some large 4U - 5U database servers. We had to rely on external storage devices in order to give us the number of spindles we needed in order to deliver the performance our workload demanded. Oh how times have changed.

    For the new infrastructure we settled on a total of 12 boxes, 6 of which are deployed now and another 6 that we'll likely deploy over the next year for geographic diversity as well as to offer additional capacity. That alone gives you an idea of the increase in compute density that we have today vs. 6 years ago: what once required 20 servers and more than a single rack can easily be done in 6 servers and half a rack (at lower power consumption too).
    Of the six, a single box currently acts as a spare - the remaining five are divided as follows: two are on database duty, while the remaining three act as our application servers.
    Since we were bringing our own hardware, we needed relatively barebones server solutions. We settled on Intel's SR2625, a fairly standard 2U rackmount with support for the Intel Xeon L5640 CPUs (32nm Westmere Xeons) we would be using. Each box is home to two of these processors, each of which features six cores and a 12MB L3 cache.
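    As a quick sanity check on the compute density claim, here's a rough tally of what that configuration adds up to (thread counts assume Hyper-Threading, i.e. two threads per core, which the L5640 supports):

        # Back-of-the-envelope tally of the new deployment's compute resources.
        # Counts are from the post; two threads per core is Hyper-Threading.
        active_boxes = 5        # five of the six deployed boxes are active, one is a spare
        sockets_per_box = 2     # dual-socket Intel SR2625
        cores_per_cpu = 6       # Xeon L5640 (32nm Westmere)
        threads_per_core = 2    # Hyper-Threading

        cores_per_box = sockets_per_box * cores_per_cpu
        threads_per_box = cores_per_box * threads_per_core
        print(f"Per box: {cores_per_box} cores / {threads_per_box} threads")
        print(f"Active servers: {active_boxes * cores_per_box} cores / "
              f"{active_boxes * threads_per_box} threads across {active_boxes} boxes")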

    We're a bit light on the memory side, but each database server features 48GB of Kingston DDR3-1333 while the application servers only use 36GB each. At the time we specced out our consolidation plans we didn't need a ton of memory, but going forward it's likely something we'll have to address.

    When it comes to storage, the decision was made early on to go all solid-state. The problem we ran into there is most SSD makers at the time didn't want to risk a public failure of their SSDs in our environment. Our first choice declined to participate at the time due to our requirement of making any serious component failures public. Things are different today as the overall quality of all SSDs has improved tremendously, but back then we were left with one option: Intel.
    Our application servers use 160GB Intel X25-M G2s, while our database servers use 64GB Intel X25-Es. The world has since moved from SLC NAND to enterprise grade MLC, but at the time the X25-Es were our best bet to guarantee write endurance for our database servers. As I later discovered, using heavily overprovisioned X25-M G2s would've been fine for a few years, but even I wanted to be more cautious back then.
    The application servers each use 6 x X25-M G2s, while the database servers use 6 x X25-Es. To keep the environment simple, I opted against using any external RAID controllers - everything here is driven by the on-board Intel SATA controllers. We need multiple SSDs not for performance reasons but rather to get the capacities we need. Given that we migrated from a many-drive HDD array, the fact that we only need a couple of SSDs worth of performance per box isn't too surprising.
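    To put the per-box storage in numbers, here's a rough capacity sketch. The RAID-10 layout for the database arrays comes from the earlier All-SSD post; applying the same mirrored layout to the application servers is an assumption made purely for illustration:

        # Rough per-box capacity of the new servers. Drive counts and sizes are
        # from the post; RAID-10 (50% usable) for the app servers is an assumption.
        def raid10_usable_gb(drive_count, drive_size_gb):
            """Usable capacity of a RAID-10 array: half of the raw capacity."""
            return drive_count * drive_size_gb / 2

        app_raw_gb = 6 * 160   # 6 x Intel X25-M G2 160GB per application server
        db_raw_gb = 6 * 64     # 6 x Intel X25-E 64GB per database server

        print(f"App server: {app_raw_gb} GB raw, ~{raid10_usable_gb(6, 160):.0f} GB usable (assumed RAID-10)")
        print(f"DB server:  {db_raw_gb} GB raw, ~{raid10_usable_gb(6, 64):.0f} GB usable (RAID-10)")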

    Storage capacity is our biggest constraint today. We actually had to move our image hosting needs to our host's cloud environment due to our current capacity constraints. NAND lithographies have shrunk dramatically since the days of the X25-Es and X25-Ms, so we'll likely move image hosting back on to a few very large capacity drives this year.
    That's the high level overview of what we're running on; I also posted some performance data here on the improvement we saw in going to SSDs in our environment.



    More...


  5. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #2725

    Anandtech: Calxeda's ARM server tested

    ARM based servers hold the promise of extremely low power and excellent performance per Watt ratios. It's possible to place an incredible number of servers into a single rack - there are already implementations with as many as 1000 ARM servers in one rack (48 server nodes in a 2U chassis). And all of those nodes consume less than 5kW (or around 5W per quad-core ARM node).
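    The per-node figure follows straight from the rack-level numbers; a quick sketch (the chassis count per rack is my own assumption, chosen to land near the quoted 1000 nodes):

        # Rough per-node power implied by the rack-level figures above.
        nodes_per_chassis = 48   # server nodes in a 2U chassis
        chassis_per_rack = 21    # assumption: ~21 x 2U chassis gives roughly 1000 nodes
        rack_power_w = 5000      # "less than 5kW" for the whole rack

        nodes_per_rack = nodes_per_chassis * chassis_per_rack
        print(f"~{nodes_per_rack} nodes per rack")
        print(f"~{rack_power_w / nodes_per_rack:.1f} W per quad-core ARM node")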
    But whenever a new technology is hyped, it is good to remain skeptical. The media hypes and raves about new trends because people love to read about something new, but at the end of the day, the system administrator has to keep his IT services working and convince his boss to invest in new technologies.
    Hundreds of opinion pages have been and will be written about the ARM vs. x86 server war, but nothing can beat a test run with real world benchmarks, and that is what we'll look at today. We have put some heavy loads on our Boston Viridis cluster system running 24 web sites—among other applications—and measured throughput, response times, and power. We'll be comparing it with the lower power Xeons to see how the current ARM servers compare to the best Intel Xeon offerings. Performance per Watt, Performance per dollar, whatever your metric is, we have the hard numbers.


    More...

  6. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #2726

    Anandtech: CPU Air Cooler Roundup: Six Coolers from Noctua, SilverStone, be quiet!, a

    Now that CPU cooler reviews have begun in earnest here at AnandTech, it's been interesting to see just how conventional wisdom plays out in practice. There's been a pervasive attitude that closed loop coolers are only really competitive with the highest end air coolers, and there may be some truth to that, but we have at least one of those flagship coolers on hand today along with parts from SilverStone and Cooler Master.

    More...

  7. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #2727

    Anandtech: Samsung's Unpacked 2013 Launch Event Live Blog

    We're live from Samsung's Unpacked event in NYC where the company is about to announce their next-generation flagship smartphone. Keep it parked here for live updates!


    More...

  8. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #2728

    Anandtech: Samsung's Galaxy S 4: Introduction & Hands On

    Since 2010 Samsung has grown to become not only the clear leader in the Android smartphone space, but the largest smartphone manufacturer in the world. Its annual iteration of the Galaxy S platform is now arguably one of the most widely anticipated smartphone launches each year.
    Like clockwork, tonight Samsung announced the Galaxy S 4: a 5-inch 1080p smartphone, and the new flagship for the Galaxy brand. We just finished learning about the device and spent a short time playing around with it.


    More...

  9. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #2729

    Anandtech: Apple TV 2013 (A1469) Short Review: Analysis of a New A5

    The Apple TV is an incredibly relevant device today. It’s Apple’s attempt to augment the Netflixes and Hulu Pluses of the world with an on-demand cable TV alternative. Other companies are working on the same problem, with different solutions. Even Intel recently threw its hat into the ring.
    Apple famously refers to the Apple TV as a hobby, but I view it as more of an experiment. A test for the infrastructure, the delivery mechanism and a test of how to work with content companies. Long term if the Apple TV is to become something much more than it is, it’ll have to be more aggressive in delivering content, but for now it exists as a very successful experiment.
    Apple shipped five million Apple TVs last year. That’s $495M in revenue for the year just from the Apple TV. That’s not a lot of money for Apple, but it’s a business of considerable size. Given the low (relative to Apple’s other products) price point for the Apple TV, and steadily ramping shipment volume, it makes sense that the device would be a target for cost optimization. And that’s exactly what appears to have happened with the latest update to the platform.
    Apple still refers to this new Apple TV as a 3rd generation platform, and it doesn’t introduce any new features, but it does carry a different model string:
    Apple TV Models
                               | Year Released | Model | New Features
    Apple TV (1st gen)         | 2007          | A1218 | Initial Release
    Apple TV (2nd gen)         | 2010          | A1378 | New Platform
    Apple TV (3rd gen)         | 2012          | A1427 | 1080p, WLAN+
    Apple TV (3rd gen, rev2)   | 2013          | A1469 | New Silicon
    The small increment in model number gives you an indication of the magnitude of change here. No new functionality is added, but the device just gets cheaper for Apple to make. How Apple got there is particularly interesting.
    Out with the Old, in with the Same

    The Apple TV is a great device for anyone who lives purely within the Apple ecosystem. Users looking to play their own content that’s not already in an iTunes friendly format will have to either transcode or look elsewhere for something a bit more flexible. Apple must walk a fine line between tending to the needs of its customers while at the same time not upsetting the content owners that work with the company on the iTunes side of things. If you’re looking for a pirate box, the Apple TV is not the best solution.

    The Apple TV runs its own OS and there’s no application compatibility between it and the iPhone/iPad, despite running on very similar hardware and software. I suspect Apple recognizes the difficulty in simply opening the floodgates for a bunch of applications that were optimized for touch to run on a platform that’s controlled with a tiny remote.
    The Apple TV OS itself saw a major update last year, substantially changing the UI. Its functionality remained largely unchanged with this update, and since then not all that much has changed either - although there have been improvements since our review last year. Hulu Plus is now a supported streaming service on the Apple TV.
    As Brian summed up in our last review, the Apple TV remains a competent Netflix box and does a great job of interfacing with all iTunes services (Photo Stream, iTunes Match, as well as iTunes video content). There’s always room for improvement of course, but if you do live in the Apple/iTunes ecosystem the experience is pretty decent.
    The Apple TV also acts as an AirPlay sink: you can use it as a wireless receiver for a display streamed from a Mac running Mountain Lion, as well as from your iPhone/iPad.
    None of this has changed with the new Apple TV, nor has the external hardware - we’re still dealing with the same chassis and port configuration as before. To truly appreciate what’s new about A1469 however, you have to dive inside.
    Inside A1469



    A1427 (left) vs. A1469 (right)
    Getting inside the new revision of the Apple TV is no different than with the previous model: the bottom snaps into place, so you'll need to pry it open with some (strong) plastic tools. Once the bottom is somewhat separated, just pull it out and you're done.
    Internally the name of the game is cost reduction. Whereas the previous model (A1427) had a metal slab stacked on top of the PCB, the new Apple TV moves the heatspreader to the bottom of the chassis entirely - simplifying assembly.
    The power supply remains unchanged (3.4V, 1.75A), and there are just two cables running to the Apple TV’s PCB: one for the PSU and one for the power/status LED. Remove a few screws and we can pull out the PCB.
    The overall PCB size hasn't changed tremendously, but the layout and component arrangements have. The changes to the bottom of the PCB (what you first see when you open up the Apple TV) aren't significant; it's what happens on the flip side that's more interesting.

    A1427 (left) vs. A1469 (right)
    Apple moved to a highly integrated ceramic package from USI for the WiFi/BT solution, which saved a good amount of board area. Apple also went back to a single antenna design, further reducing complexity from the short stint with the dual-antenna design in the A1427 model.
    Removing the single large EMI shield we see the remaining changes. The single-core A5 SoC saw a package size reduction, and the DRAM is no longer integrated in a PoP (Package-on-Package) stack but is rather a discrete component.
    Apple Silicon Evolution
    Internal Name | External Name | Used In              | Fab + Process Node
    S5L8940       | Apple A5      | iPad 2, iPhone 4S    | Samsung 45nm
    S5L8942       | Apple A5r2    | iPad 2,4, Apple TV 3 | Samsung 32nm
    S5L8945       | Apple A5X     | iPad 3               | Samsung 45nm
    S5L8947       | Apple A5      | Apple TV 3r2         | Samsung 32nm
    S5L8950       | Apple A6      | iPhone 5             | Samsung 32nm
    S5L8955       | Apple A6X     | iPad 4               | Samsung 32nm
    The old A5 package measured roughly 14mm x 13mm, while the new package is approximately 12mm x 12mm. Chipworks removed and de-lidded the new chip, determining that it's truly a new piece of silicon with a single core ARM Cortex A9 and a dual-core GPU. The previous part was a die harvested A5 with one CPU core fused off (S5L8942), but this new chip physically removes the unused core (S5L8947). The GPU seems to be untouched. There are other changes as well, resulting in a 37.8mm^2 die, down from 69mm^2 in the previous A5 design.
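    A quick bit of arithmetic on those figures (package dimensions are approximate, as noted above):

        # Package and die area changes for the new single-core A5 (figures from above).
        old_pkg_mm2 = 14 * 13    # old A5 package, ~14mm x 13mm
        new_pkg_mm2 = 12 * 12    # new A5 package, ~12mm x 12mm
        old_die_mm2 = 69.0       # 32nm dual-core A5 (S5L8942)
        new_die_mm2 = 37.8       # new single-core A5 (S5L8947)

        print(f"Package: {old_pkg_mm2} -> {new_pkg_mm2} mm^2 ({1 - new_pkg_mm2 / old_pkg_mm2:.0%} smaller)")
        print(f"Die:     {old_die_mm2} -> {new_die_mm2} mm^2 ({1 - new_die_mm2 / old_die_mm2:.0%} smaller)")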

    Thanks to Chipworks’ analysis we know that both of these chips are still made on Samsung’s 32nm process, meaning that Apple’s experimenting with a new silicon revision isn’t to act as a pipe cleaner for a new process (as it was with the previous gen Apple TV) but rather to reduce cost.
    The move to a smaller die directly impacts cost, as does the move away from a PoP stack and to external DRAM. It could very well be that Apple is finally selling enough Apple TVs to warrant a custom A5 of its own rather than continue to ship die harvested A5s from iPhones/iPads. The problem with relying exclusively on die harvesting is that eventually, as yields improve, you end up selling fully functional (and unnecessarily expensive) silicon into a market that's unwilling to pay for the added performance. If you've got the volumes to justify it, it usually makes sense to bring out custom silicon for major price points. This is why Intel ships multiple configurations in its processor families (e.g. there are distinct dual-core and quad-core Ivy Bridge die in Intel's lineup, which avoids Intel having to sell a disabled $300 quad-core chip as a $100 dual-core chip).

    32nm Apple A5 - S5L8942 (left) vs. new Apple A5 for ATV S5L8947 (right) - Chipworks
    There’s also the possibility that Apple would use this part in another device entirely.
    I was curious to see if power was impacted at all, but as we’ve seen in previous Apple TVs the power draw at the wall is very low - on the order of a couple of watts. Slight silicon changes require much finer grained power analysis.
    Pulling a page from our recent foray into measuring tablet power consumption, I wired an external power supply to the Apple TV motherboard and measured total platform power draw. There aren’t exactly any benchmarks for the Apple TV, but I put together a few tests to stress video decode, CPU and a little bit of GPU performance.
    All of my tests were run on Ethernet, but I did connect to a 5GHz 802.11n network to see if there were any changes in power consumption due to the new wireless stack.
    On Brian’s suggestion I streamed the hilariously awesome Netflix 29.97 short, as well as the 1080p Skyfall trailer. Both of these tasks should be handled by the A5’s video decode block.
    I also enabled Photo Stream on the Apple TV, and recorded power consumption while scrolling back and forth through a gallery of my last 68 photos. This test drives CPU usage and power consumption.
    Finally I ran an idle power test.
    Apple TV (3rd gen) Platform Power Consumption
                                                 | A1427 (2012) | A1469 (2013)
    Idle - Min Power (Ethernet Connected)        | 1.41W        | 0.70W
    Photo Stream Scrolling (CPU Test)            | 1.84W        | 1.07W
    Skyfall 1080p iTunes Trailer (Ethernet)      | 1.58W        | 0.81W
    Skyfall 1080p iTunes Trailer (5GHz WiFi)     | 1.55W        | 0.85W
    Netflix 29.97 Short (Ethernet)               | 1.62W        | 0.85W
    The power savings are nothing short of significant. The previous generation Apple TV wasn’t really a power hog, with platform power maxing out at around 1.6W, but the new model tops out at just a watt. Overall the power savings seem to be around 800mW across the board.
    With no change to process technology, I can only assume that the reduction in power consumption came from other architectural or silicon optimizations. The significant power reduction is the only thing that makes me wonder if this new A5 silicon isn't destined for another device, perhaps one powered by a battery. That's pure speculation however; it could very well be that the A5 in the Apple TV is just lower power for the sake of being better designed.

    Floorplan of new Apple A5 S5L8947 for ATV - Chipworks
    Brian asked me how long it would take to make up the cost of the new Apple TV compared to the previous model (A1427) in power savings alone. Assuming you’re using the Apple TV for watching video 8 hours a day, every day of the year, you’d save about $0.26 per year on your power bill (assuming $0.11/kWh). You’d break even on the $99 cost of a new Apple TV in about 385 years. Maybe by then we’ll actually have a true replacement to cable TV.
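    For anyone who wants to check the math, here's the same back-of-the-envelope calculation spelled out (using the ~800mW saving, 8 hours of use per day and the $0.11/kWh rate assumed above):

        # Reproducing the payback estimate from the paragraph above.
        power_saving_w = 0.8     # ~800mW platform power saving
        hours_per_day = 8
        days_per_year = 365
        price_per_kwh = 0.11     # USD
        device_cost = 99.0       # USD, new Apple TV

        kwh_per_year = power_saving_w * hours_per_day * days_per_year / 1000
        savings_per_year = kwh_per_year * price_per_kwh
        print(f"Energy saved per year: {kwh_per_year:.2f} kWh (~${savings_per_year:.2f})")
        print(f"Years to recoup ${device_cost:.0f}: {device_cost / savings_per_year:.0f}")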
    I didn’t notice any performance difference between the two platforms, but given the nature of the Apple TV it’s kind of difficult to really say for sure.
    Wireless Performance

    With the previous model (A1427), Apple improved WiFi performance through the use of two antennas and driving the primary antenna at a higher gain. With the A1469 model, Apple moves back to a lower gain, single antenna design, this time driven by a new WiFi solution (likely BCM4334 based).
    I was curious to see if there was a noticeable difference in WiFi performance; however, in my testing I noted very similar performance to the A1427 version of the 3rd gen Apple TV:
    Apple TV (3rd gen) WiFi Performance
                          | Apple TV (A1469)                                     | Apple TV (A1427)
                          | Signal (dBm) | Noise (dB) | Rate (Mbps) | Band (GHz) | Signal (dBm) | Noise (dB) | Rate (Mbps) | Band (GHz)
    Location 1 (Close)    | -38          | -95        | 54          | 5          | -44          | -96        | 54          | 5
    Location 2 (Far)      | -70          | -88        | 54          | 2.4        | -69          | -90        | 54          | 2.4
    Final Words

    The latest Apple TV doesn’t change functionality, nor does it appear to be a step back in performance. The A1469 model really helps Apple reduce costs, both through better engineering and through a physically smaller A5 SoC.

    The implications of this smaller, lower power A5 SoC are unclear to me at this point. It seems to me that the Apple TV now sells well enough to warrant the creation of its own SoC, rather than using a hand-me-down from the iPad/iPhone lineup. The only question that remains is whether or not we'll see this unique A5 revision appear in any other devices. There's not a whole lot of room for a single-core Cortex A9 in Apple's existing product lineup, so I'm inclined to believe that this part is exclusively for the Apple TV. Then again, I'm not much of a fortune teller.



    More...

  10. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #2730

    Anandtech: Inside AnandTech 2013: CPU Performance

    Last week I kicked off a short series on the hardware behind our current network infrastructure. In the first post I presented a high level overview of the hardware that runs AnandTech, while in the second post I detailed our All-SSD Architecture. Today I wanted to do some rough CPU comparisons to show how much faster the new hardware is, and on Monday I'll conclude the series with a discussion of power savings.
    Our old infrastructure was built in the mid-2000s, mostly around dual-core processors. For our most compute heavy systems, we had to rely on multiple dual-core processors to deliver the CPU performance we needed. Once again I'm using our old Forums database server, an HP DL585, as an example here.
    This old 4U box used AMD's Opteron 880 CPUs. The Opteron 880 was a 90nm 2.4GHz part with 2 x 1MB L2 caches, and a 95W TDP. Our DL585 configuration had four of these 880s, each on their own processor card, bringing the total core count up to 8.
    We replaced it (and all of our other servers) with 2U dual-socket systems based on Intel's Xeon L5640 processors. Each L5640 features six cores, making each server a 12-core machine, running at 2.26GHz with a 60W max TDP.
    The spec comparison is pretty interesting:
    AnandTech Server CPU Comparison
                                    | AT Forums DB Server (2006)  | AT Forums DB Server (2013)
    Server Size                     | 4U                          | 2U
    CPU                             | 4 x AMD Opteron 880         | 2 x Intel Xeon L5640
    Total Cores / Threads           | 8 / 8                       | 12 / 24
    Manufacturing Process           | 90nm                        | 32nm
    Release Year                    | 2005                        | 2010
    Number of Cores per Chip        | 2                           | 6
    L1 / L2 / L3 Cache per Chip     | 2 x 64KB / 2 x 1MB / 0MB    | 6 x 64KB / 6 x 256KB / 12MB
    On-die Memory Interface         | 2 x 64-bit DDR-400          | 2 x 64-bit DDR3-1333
    Max Frequency (Non-Turbo)       | 2.40GHz                     | 2.26GHz
    Max Turbo Frequency             | -                           | 2.80GHz
    Max TDP                         | 95W                         | 60W
    Die Size per Chip               | 199 mm^2                    | 240 mm^2
    Transistor Count per Chip       | 233M                        | 1.17B
    Launch Price per Chip           | $2649                       | $996
    Although die area has gone up a bit, you get 3x the number of cores, a lot more cache and much more memory bandwidth. The transistor count alone shows you how much things have improved from 2005 to 2010. It's also far more affordable to deliver this sort of compute. Although I won't touch on it here (saving that for the final installment), you get all of this with a nice reduction in power consumption.
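    One way to visualize that improvement is transistor density; a rough calculation from the per-chip figures in the table:

        # Transistor density implied by the per-chip figures in the table above.
        chips = {
            "Opteron 880 (90nm)": (233e6, 199),    # (transistors, die size in mm^2)
            "Xeon L5640 (32nm)": (1.17e9, 240),
        }
        for name, (transistors, die_mm2) in chips.items():
            print(f"{name}: ~{transistors / die_mm2 / 1e6:.1f}M transistors per mm^2")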
    Now the question is how much has performance improved? Simulating our live workload on a single box without the infrastructure that goes along with it is a bit difficult, so I turned to some quick Integer and FP benchmarks to give a general idea of the magnitude of improvement:
    AnandTech Server CPU Performance Comparison
                                     | 4 x AMD Opteron 880 | 2 x Intel Xeon L5640 | Speedup
    7-Zip Benchmark (1 Thread)       | 2194                | 3053                 | 1.39x
    7-Zip Benchmark (Multithreaded)  | 16130               | 38764                | 2.40x
    Cinebench 11.5 (Multithreaded)   | 4.5                 | 11.43                | 2.54x
    Unlike what we saw in our SSD vs. HDD comparison, there are no 19x gains here. Single threaded performance only improves by 39% over a span of 5 years at roughly similar clocks (it's a little worse if you take into account the L5640's turbo boost). The days of huge/easy improvements in single threaded performance ended a while ago. Multithreaded performance shows much better gains thanks to the fact that we have 50% more cores and 3x the number of threads in the new server.
    All of this comes at a lower TDP and lower price point, despite the 20% larger die area. The market was very different back when the Opteron 880 launched.
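    A rough way to quantify that is throughput per watt and per dollar, using the multithreaded 7-Zip scores together with the per-chip TDPs and launch prices from the spec table. Treat this as a ballpark sketch only: TDP stands in for actual power draw and the rest of the platform is ignored:

        # Very rough performance-per-watt and per-dollar comparison using the figures above.
        old = {"score": 16130, "chips": 4, "tdp_w": 95, "price_usd": 2649}   # 4 x Opteron 880
        new = {"score": 38764, "chips": 2, "tdp_w": 60, "price_usd": 996}    # 2 x Xeon L5640

        for label, s in (("4 x Opteron 880", old), ("2 x Xeon L5640", new)):
            total_tdp = s["chips"] * s["tdp_w"]
            total_price = s["chips"] * s["price_usd"]
            print(f"{label}: {s['score'] / total_tdp:.0f} 7-Zip points per CPU watt, "
                  f"{s['score'] / total_price:.1f} points per CPU dollar")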
    If you're still running on large, old and outdated hardware, you can easily double performance by moving to something more modern - all while reducing power and rackspace. If your workload remained the same (or very similar), you could theoretically replace multiple 4U servers with half as many 2U servers given the sort of performance/density improvements we saw here. And 2U boxes aren't even as dense as they get. If space/rack density is a concern, there are options in the 1U space or by going to blades.
    Looking back at what we had almost 7 years ago makes me wonder about what the enterprise market will look like 5 - 7 years from now. Presumably we'd be able to deliver similar performance to what we have deployed today with a much smaller, even more power efficient platform (~10nm). There's a lot of talk of going to many smaller cores vs. the big beefy Xeon cores we have here. There's also the question of how much headway the ARM players will make into the enterprise space over the coming years. I suspect, at a minimum, we'll see substantially increased price pressure on Intel. AMD was headed in that direction prior to its recent struggles, but the ARM folks should be able to deliver once they navigate their own 64-bit transition.
    This will be a very interesting post to revisit around 2017 - 2018, when our L5640s will be 7+ years old...



    More...
