
Thread: Anandtech News

  1. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #3161

    Anandtech: HTC Announces One mini - 4.3 inch display, aluminum, and Snapdragon 400

    We knew it was coming, and after a long wait and endless leaks the HTC One mini is upon us. Smaller phones seem to be something everyone wants more of as a counterpoint to the ever-growing size of the flagships, and with the HTC One mini we get some of that, although the miniaturized HTC One isn't quite as powerful as its full-fledged brethren. The One mini isn't exactly the miniaturized flagship everyone was looking for, but rather a more midrange, cost-reduced version of the One with a number of concessions made to get there.
    Starting off, the HTC One mini continues the same predominantly aluminum construction and virtually the same design language, although there is visibly more polycarbonate around the edges. I'm told that the One mini doesn't use exactly the same construction methods as the One, and you can see this borne out in the photos, with the plastic wrapping around the edges a bit more on the front and back. The backside is still curved and segmented into three pieces, with the bottom and top strips serving as the primary and secondary cellular antennas from what I can tell. The plastic band for the top antenna separation also still houses a secondary microphone, for stereo audio on video and ambient noise suppression on calls. You'll notice the vertical strip running along the middle to the camera module is gone, and with it the NFC functionality which necessitated it. The power button is also now silver since there's no IR Tx/Rx port behind it, and the volume rocker is now two discrete buttons instead of one.
    The flash moves to a centered 12 o'clock position above the rear-facing camera aperture, which is still 4.0 MP with 2.0 µm "ultrapixels," although there's no OIS this time around for cost reasons, which is a bit unfortunate since that was half of what made the HTC One's camera exciting.
    [TR="class: tgrey"]
    [TD="colspan: 5, align: center"] HTC One mini Specifications[/TD]
    [/TR]
    [TR="class: tlblue"]
    [TD="class: tlgrey, width: 171"] [/TD]
    [TD="width: 401, align: center"] HTC One mini[/TD]
    [/TR]
    [TR]
    [TD="class: tlgrey"] SoC[/TD]
    [TD="align: center"] 1.4 GHz Snapdragon 400
    (MSM8930 - 2 x Krait 200 CPU, Adreno 305 GPU)[/TD]
    [/TR]
    [TR]
    [TD="class: tlgrey"] RAM/NAND/Expansion[/TD]
    [TD="align: center"] 1GB LPDDR2, 16 GB NAND[/TD]
    [/TR]
    [TR]
    [TD="class: tlgrey"] Display[/TD]
    [TD="align: center"] 4.3-inch LCD 720p, 341 ppi[/TD]
    [/TR]
    [TR]
    [TD="class: tlgrey"] Network[/TD]
    [TD="align: center"] 2G / 3G / 4G LTE (MSM8930 MDM9x15 IP block)[/TD]
    [/TR]
    [TR]
    [TD="class: tlgrey"] Dimensions[/TD]
    [TD="align: center"] 132 x 63.2 x 9.25 mm, 122 grams[/TD]
    [/TR]
    [TD="class: tlgrey"] Camera[/TD]
    4.0 MP (2688

    More...

  2. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #3162

    Anandtech: NVIDIA GeForce 326.19 Beta Drivers Available

    Following the release of the WHQL 320.49 drivers earlier this month, NVIDIA has moved on to their next driver branch, R325. The first release of these drivers, 326.01, was published in a limited form as part of the Windows 8.1 preview, and today NVIDIA is following that up by releasing the first full beta of R325 with the 326.19 beta drivers.
    As is usually the case, NVIDIA’s release notes mostly focus on the performance of these drivers. Among other things, NVIDIA notes that 326.19 “Increases performance by up to 19% […] in several PC games vs. GeForce 320.49”, calling out Tomb Raider and Codemasters’ racing games in particular.
    Meanwhile, users on the cutting edge of display hardware will want to note that this is the first full GeForce driver to support “tiled” 4K displays such as the Asus PQ321Q, which requires 2-device “mosaic” mode support to properly drive the display at 3840x2160@60Hz. Outside of 3-display Surround modes, this functionality was previously limited to NVIDIA’s Quadro cards as a product differentiator.
    As usual, you can grab the drivers for all current desktop and mobile NVIDIA GPUs over at NVIDIA’s driver download page. And thanks to reader SH SOTN for the heads up.



    More...

  3. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #3163

    Anandtech: NZXT Phantom 530 Case Review

    With the 530 model, NZXT continues to fill out their product line with a Phantom for every season and price tag. But is the 530 the Phantom they and we needed?


    More...

  4. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #3164

    Anandtech: The AnandTech Podcast: Episode 23

    In this episode Brian and I talk about Nokia's Lumia 1020, Microsoft's struggles in the phone space, the HTC One mini, a giant battery for the Galaxy S 4, Aptina and ARM's Cortex A12.
    The AnandTech Podcast - Episode 23
    featuring Anand Shimpi, Brian Klug
    iTunes
    RSS - mp3, m4a
    Direct Links - mp3, m4a

    Total Time: 1 hour 4 minutes
    Outline h:mm
    Nokia Lumia 1020, Windows Phone - 0:00
    HTC One mini - 0:23
    HTC One 4.2.2 - 0:27
    SGS4 with a Giant Battery - 0:29
    SGS4 Wireless Charging Pad - 0:33
    ARM Cortex A12 - 0:36
    Aptina - 0:55




    More...

  5. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #3165

    Anandtech: The AnandTech Podcast: Episode 22

    This is a special episode where Dustin and I debate the merits of Haswell on the desktop, from an enthusiast's perspective.
    The AnandTech Podcast - Episode 22
    featuring Anand Shimpi, Dustin Sklavos
    iTunes
    RSS - mp3, m4a
    Direct Links - mp3, m4a

    Total Time: 1 hour 28 minutes
    Outline h:mm
    Haswell on the Desktop - The Entire Time




    More...

  6. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #3166

    Anandtech: NVIDIA Confirms Ship Date for Shield - July 31

    After a bout of recent bad news about Shield's launch date slipping due to a mechanical issue with a third-party component supplier, NVIDIA has confirmed it will meet its end-of-July ship deadline for the Tegra 4-packing handheld gaming console. Earlier today, NVIDIA sent out an email with some good news to customers who have preordered Shield, confirming that units with the mechanical issue fixed will ship on July 31.
    “We want to thank you for your patience and for sticking with us through the shipment delay of your SHIELD. We have great news to share with you - your SHIELD will ship on July 31st.

    Our goal has always been to ship the perfect product, so we made sure we submitted SHIELD to the most rigorous mechanical testing and quality assurance standards in the industry. We built SHIELD because we love playing games, and we hope you enjoy it as much as we do.”
    We played with NVIDIA's final Shield hardware a while back and came away decently impressed. Now all that remains is the full review.
    Source: NVIDIA Blog



    More...

  7. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #3167

    Anandtech: Khronos @ SIGGRAPH 2013: OpenGL 4.4, OpenCL 2.0, & OpenCL 1.2 SPIR Announced

    Kicking off this week is the annual SIGGRAPH conference, the graphics industry’s yearly professional event. Outside of the individual vendor events and individual technologies we cover throughout the year, SIGGRAPH is typically the major venue for new technology and standards announcements. This year will be no exception, with Khronos announcing major new additions to OpenGL and OpenCL, along with the long-awaited intermediate SPIR format for OpenCL 1.2.


    More...

  8. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #3168

    Anandtech: BitFenix Ronin Case Review

    While cases north of $100 tend to offer the best performance and features, $99 and under is a place where vendors can aggressively innovate and differentiate between one another by carefully choosing what to add and where to cut. Yet with the Ronin, BitFenix may have missed the balance.


    More...

  9. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #3169

    Anandtech: Is Haswell Ready for Tablet Duty? Battery Life of Haswell ULT vs Modern ARM

    With the exception of article production or live blogging, my on-the-road notebook usage model is filled with tons of idle time. Last week I was in a large conference room, sitting through presentations from 9AM - 5PM every day. There was an hour break for lunch, and a couple of 15 minute breaks spread throughout, but for the most part I had my notebook open, taking notes and occasionally pulling up websites to reference/research after having a thought.
    My in-meeting notebook usage is actually a lot like what my notebook usage was as a student in college. Very light web browsing (unless I really didn’t have to pay attention), coupled with background IM, email and tons of note taking. I have a feeling this usage model isn’t all that unique to me. On the contrary, I bet it’s quite common. Which makes this next part hilarious: notebook PCs actually did a terrible job of running this very scenario.
    Power efficiency was always a problem. The only notebooks you wanted to carry around with you were the ones that had tiny batteries. The larger notebooks had big batteries but also had big screens and power-hungry components. We used to have a battery life test that simply measured how long it would take a notebook to die if we left it idling at the Windows desktop. Just two years ago, it wasn’t unusual to see notebooks incapable of breaking 5 hours of idle battery life.
    The truth is that it wasn’t just display quality, terrible track pads and sluggish mechanical hard drives that drove people to tablets. Great platform idle power coupled with very efficient mobile OSes really made the current tablet revolution possible.
    Back in the early 2000s, Matthew Witheiler (our first graphics editor) was on a tablet PC kick. He searched high and low for anyone who’d bring a tablet to market. In college at the time, I understood why he wanted a tablet. The experience fell short at the very same points every time. Tablets back then were too big, too slow and had terrible battery life. The PaceBlade PaceBook lasted under 3 hours on a single charge back when we reviewed it in 2002. It also took 11 seconds to wake up from standby, and 84 seconds to boot (the Transmeta Crusoe TM5600 inside was slower than a 433MHz Celeron at the time).
    The current crop of ARM based tablets largely fixed this problem. They aren’t all that quick if you compare to modern high-end CPU and GPU architectures, but they benefit from much lighter weight apps and OSes that are more efficient.
    When Intel first started talking about Haswell and Ultrabooks, it did so under the banner of fixing the “ARM problem” and merging the best of tablets and notebook PCs. Looking around at the first implementation of Haswell ULT and the Ultrabooks based on it, they just look like better versions of the systems that came before them. Haswell ULT definitely posts better battery life than any previous Intel Core microarchitecture, but everything the world did with it seemed so very...predictable. Even Apple just slotted Haswell ULT into the same chassis as Sandy Bridge and Ivy Bridge ULV.
    The idea for this article struck me as I was in meetings last month. Sitting in that conference room for 8 hours straight each day would’ve killed my rMBP13 without plugging it in. The 2013 MacBook Air on the other hand did just fine. At one point there was some drama around a few power outlets not working. Much like using a tablet, I didn’t care. Even when I had only 50% of my battery charged, I had more than enough juice to get through the day without hunting for a power outlet. Given a very light usage model, Haswell ULT behaved like an ARM tablet platform. The difference being that if/when I needed more performance, it was available.
    This whole situation convinced me to run a test that a few AnandTech readers had asked me for a few weeks ago: run our tablet battery life workload on the 2013 MacBook Air. Even our lightest Mac battery life workload is still heavier than what we run on smartphones/tablets, so the light workload battery life numbers aren’t really representative of a tablet usage model with Haswell ULT. Luckily our tablet battery life tests are fairly portable, so I prepped the 2013 13-inch MBA the same way I would one of our tablets: brightness calibrated to 200 nits, running the very same workload we’d run on a tablet. You’ll notice two bars for the 2013 MacBook Air, one indicating its result and one with that result scaled down to simulate what would happen if it had 78.7% of its actual battery capacity, putting it on equal footing with the 42.5Wh iPad 4. With workload and performance constant, it’s safe to assume that battery life scales at best linearly with battery capacity. In other words, our MacBook Air numbers at 42.5Wh should be indicative of what we’d expect if the 13-inch MBA actually had a 42.5Wh battery rather than its 54Wh unit.
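    To make that normalization concrete, here's a minimal sketch of the capacity scaling described above; only the two battery capacities (54Wh and 42.5Wh) come from the article, and the runtime used in the example is a placeholder rather than an actual result.

```python
# Minimal sketch of the battery-capacity normalization described above.
# Only the two capacities come from the article; the runtime below is a
# placeholder, not an actual test result.

MBA_CAPACITY_WH = 54.0     # 2013 13-inch MacBook Air battery
IPAD4_CAPACITY_WH = 42.5   # 4th generation iPad battery


def scale_to_capacity(measured_hours, actual_wh=MBA_CAPACITY_WH,
                      target_wh=IPAD4_CAPACITY_WH):
    """Scale a measured runtime linearly down to a smaller battery."""
    return measured_hours * (target_wh / actual_wh)


print(f"Scaling factor: {IPAD4_CAPACITY_WH / MBA_CAPACITY_WH:.1%}")  # ~78.7%

hypothetical_hours = 12.0  # placeholder runtime for illustration only
print(f"Scaled to 42.5Wh: {scale_to_capacity(hypothetical_hours):.1f} hours")
```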
    First off, our WiFi web browsing test:
    We regularly load web pages at a fixed interval until the battery dies (all displays are calibrated to 200 nits as always). The differences between this test and our previous one boil down to the amount of network activity and CPU load.
    On the network side, we've done a lot more to prevent aggressive browser caching of our web pages. Some caching is important otherwise you end up with a baseband/WiFi test, but it's clear what we had previously wasn't working. Brian made sure that despite the increased network load, the baseband/WiFi still have the opportunity to enter their idle states during the course of the benchmark.
    We also increased CPU workload along two vectors: we decreased pause time between web page loads and we shifted to full desktop web pages, some of which are very js heavy. The end result is a CPU usage profile that mimics constant, heavy usage beyond just web browsing. Everything you do on your device ends up causing CPU usage peaks - opening applications, navigating around the OS and of course using apps themselves. Our 5th generation web browsing battery life test should map well to more types of mobile usage, not just idle content consumption of data from web pages.
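    For anyone curious what such a workload looks like mechanically, below is a rough, hypothetical sketch of a fixed-interval page-load loop. It is not AnandTech's actual harness; the URLs and interval are placeholders, and on a real device the equivalent loop runs in the browser until the battery dies.

```python
# Hypothetical sketch of a fixed-interval web-browsing workload; this is
# not AnandTech's actual test harness. URLs and the interval are placeholders.
import itertools
import time
import urllib.request

PAGES = [
    "https://example.com/page1",  # placeholders; the real test cycles through
    "https://example.com/page2",  # full desktop pages, some of them js-heavy
]
INTERVAL_S = 15.0  # placeholder pause between page loads


def load_page(url):
    """Fetch a page, discouraging caches, and return the byte count."""
    req = urllib.request.Request(url, headers={"Cache-Control": "no-cache"})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return len(resp.read())


# Cycle through the page list on a fixed cadence; on a device under test
# this loop would run until the battery is exhausted.
for url in itertools.cycle(PAGES):
    start = time.monotonic()
    try:
        load_page(url)
    except OSError:
        pass  # keep the cadence even if a single load fails
    time.sleep(max(0.0, INTERVAL_S - (time.monotonic() - start)))
```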
    This is what I hinted at during Podcast #21: total platform power of the 2013 13-inch MacBook Air is lower than Apple’s 4th generation iPad. Even if you take into account battery capacity, the 13-inch MBA lasts around 18% longer on a single charge.
    What we aren’t taking into account however are the different display panels. The MacBook Air uses a 13.3-inch 1440 x 900 panel, compared to a 9.7-inch 2048 x 1536 panel on the 4th gen iPad. I’m not sure how big of a difference the delta would make. DisplayMate measured 7W for the 3rd gen iPad’s backlight, compared to 2.7W for the iPad 2. If we assume the delta is around 4.2W, that’s roughly another 10% hit that the Haswell ULT platform would have to take in order to bring its display power consumption in line with the iPad. With an 18% advantage in battery life in this test, it looks like even moving to a similar panel would deliver equal if not slightly better platform power consumption for Haswell ULT. Similarly, deploying Haswell ULX instead (lower TDP/SDP version of Haswell) could drive battery life even higher.
    Tablets are very often used for video playback, so this next test is just as important as a more interactive workload:
    Here I'm playing a 4Mbps H.264 High Profile 720p rip I made of the Harry Potter 8 Blu-ray. The full movie plays through and is looped until the battery dies. Once again, the displays are calibrated to 200 nits.
    The video playback results show exactly where Intel needs to focus on improving power efficiency. Granted I’m using QuickTime here, which I can only assume offloads video decode to Intel’s video engine. The video playback story looks better than it did on Microsoft’s Surface Pro, but it’s still not great at all. Modern ARM based SoCs have extremely low power video decoders integrated into the silicon. I wonder if Haswell’s video decode engine just isn’t as low power as what you can get in most ultra mobile SoCs today. Intel’s public documentation tends to focus on transcoding power efficiency relative to software based encode/decode, but not decode power efficiency alone.
    What This Means

    With Haswell ULT, Intel finally got its platform power story in order. Haswell ULT and, eventually, Haswell ULX platforms appear to have idle power characteristics that are at least within the range of high-end ARM based tablets. It’s finally possible to use Core in a tablet (the thermal considerations can be negated by going with a Y-series or even lower power SKU). Video decode power consumption remains a question in my mind. Assuming the results I saw weren’t due to software, I’d be willing to bet that video decode power efficiency becomes a target for improvement in Broadwell and/or Skylake.

    Microsoft Surface Pro (left) vs. 4th gen iPad (right)
    There are obvious implications for the next generation of Microsoft’s Surface Pro. It’s unclear whether Microsoft will wait until Broadwell to reduce the thickness of Surface Pro, or if it’ll go with a Y-series Haswell ULX part this year and release something that’s much thinner immediately. The challenges Microsoft would face there are similar to those Apple faced with the 2013 MacBook Air, namely Microsoft would have to accept a CPU performance regression in exchange for a significant improvement in battery life (and form factor). Broadwell should deliver (some of) the best of both worlds, but that’s another year/generation of waiting for Microsoft.
    What about Apple? I am not convinced that Apple would leave the intersection of iPad and MacBook Air alone. Tablets are under heavy pricing pressure, and Apple itself has established upper bounds to iPad pricing. As the world continues to shift towards tablets and lower cost/margin computing devices, Apple needs a solution to keep ASPs high. With iPad sales shifting to the mini, a higher end convergence solution between (replacing?) the iPad and 11-inch MBA might not be a bad idea.
    At this year’s WWDC, Apple made it very clear that idle power optimizations were high on the list for OS X Mavericks: reducing the number of CPU cycles used by active but visually occluded application windows, and putting idle applications into a nap mode. These optimizations obviously benefit the Mac notebook lineup, but they’re also very important should Apple try to build a Surface Pro competitor.
    The platform could run OS X with a modified launchpad in tablet mode, or the standard OS X desktop in docked mode. Perhaps I’m just projecting Windows 8/Surface onto Apple, but I feel like the possibility is there.
    Final Words

    If you look at the first Haswell ULT systems, they generally don’t appear all that different from the Ivy Bridge ULV systems that came before them. The biggest change is a tremendous increase in battery life, due to idle power platform optimizations, but in terms of functionality they’re largely unchanged. This brings two thoughts to mind:
    The first is that Haswell ULT will ultimately do nothing to change the current trajectory of the PC industry. The problem isn’t in the silicon (for the most part), but rather in the traditional implementation of the silicon by Intel’s OEM partners. From Apple to the army of Ultrabook OEMs, Haswell ULT has only been used to enable good ultraportable notebooks and nothing more exotic. Companies invested in a return to growth in the PC industry won’t find it as a result of Haswell ULT. The question you should be asking instead is how much worse would things have been had Haswell ULT not been as good as it is.
    The second is that the best has yet to come. I have high hopes for the second generation of Microsoft’s Surface Pro. Microsoft could build the second generation into a true convergence device that further blurs the lines between tablet and productivity notebook. For the first time in quite a while Microsoft could have a product that shows significant improvement year over year, for multiple years in a row. The first Surface Pro was good, a Haswell ULT/ULX based device could really make the experience more tablet-like and a Broadwell ULT/ULX successor could make it even thinner.
    The next few generations won’t be a walk in the park for Intel, however. There’s a ton of catching up to do. Just because Intel now has a single-chip Haswell SoC solution doesn’t mean that Intel and the ARM ecosystem are at parity in terms of capabilities. Qualcomm is quick to point out that the CPU island in its Snapdragon SoCs can be around 15% of the total die area, the rest of the SoC being devoted to GPU, ISP, video encode/decode, connectivity, etc... While I don’t expect a high-end Core based SoC to be only 15% CPU cores, I do expect that Intel will need to integrate similarly high-quality, high-performance and low-power IP blocks in its flagship silicon. At this point all we’ve established is that on a largely CPU-driven workload Haswell ULT can be competitive (from a power efficiency standpoint) with a high-end ARM based SoC. The video playback results alone point out that there’s so much more to the story that matters.
    The OS vendors have to similarly make sure they're adequately prepared for this transition. The name of the game is making all usage appear as idle as possible. We've seen improvements along this front in Windows 8, and promised in OS X Mavericks. To blur the lines between tablet and notebook hardware, you need to do the same between tablet and notebook OSes.
    Intel prepping its Core family of microarchitectures for low power tablet duty matters quite a bit to notebook OEMs. I don’t believe the computing world will top out at $499, but I do believe that any solution above $499 will have to be something more unique than just a really thin notebook. In my Surface Pro review I talked about that device being a tablet that could serve as a notebook, or as a desktop when docked to a large display/kb/mouse. I’m not much of a visionary, but I feel like such a flexible device might not be a bad idea.



    More...

  10. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #3170

    Anandtech: Intel re-architecting the datacenter

    Intel is hosting the "Datacenter Days" in San Francisco. The basic message is that the datacenter should be much more flexible and that the datacenter should be software defined. So when a new software service is launched, storage, network and compute should all be adapted in a matter of minutes instead of weeks.
    One example is networking. Configuring the network for a new service takes a lot of time and manual intervention: think of router access lists, gateway/firewall configurations and so on. It requires a lot of very specialized people: the Netfilter expert does not necessarily master the intricacies of Cisco's IOS. Even if you master all the skills it takes to administer a network, it still takes a lot of time to log in to all those different devices.
    Intel wants the proprietary network devices to be replaced by software, running on top of its Xeons. That should allow you to administer all your network devices from one centralized controller. And the same method should be applied to storage and the proprietary SANs.
    If this "software defined datacenter" sounds very familiar to you, you have been attention to the professional IT market. That is also what VMWare, HP and even Cisco have been preaching. We all know that, at this point in time, it is nothing more than a holy grail, a mysterious and hard to reach goal. Intel and others have been showing a few pieces of the puzzle, but the puzzle is not complete at all. We will get into more detail in later articles.
    But there were some interesting tidbits of news we'd like to share with you.
    First of all, there was the announcement of the new Broadwell SoC. Broadwell is the successor to Haswell, but Intel also decided to introduce a highly integrated SoC version. So we get the "brawny" Broadwell cores inside a SoC that integrates network, storage, etc., just like the Avoton SoC. As this might be a very powerful SoC for microservers, it will be interesting to see how much room is still left for the Denverton SoC - the successor to the Atom-based Avoton SoC - and the ARM server SoCs.
    Jason Waxman, General Manager of the Cloud Infrastructure Group, also showed a real Avoton SoC package.
    A quick recap: the Atom Avoton is the 22 nm successor to the dual-core Atom S1260 Centerton.
    The Avoton SoC has up to 8 cores and integrates SATA, Gigabit Ethernet, USB and PCIe.
    Intel promises up to 4x better performance per watt, but no details were given at the conference. The interesting details that we hardware enthusiasts love can be found at the end of the PDF, though. Performance per watt was measured with SPEC CPU INT rate 2006. The dual-core Atom S1260 (2 GHz, HT enabled) scored 18.7 (base) while the Atom C2xxx (clock speed 1.5 GHz?, Turbo disabled) on an alpha motherboard (Intel Mohon) reached 69. Both platforms included a 250 GB hard disk and a small motherboard. The Atom "Avoton" had twice as much memory (16 vs 8 GB), but the whole platform needed 19W while the S1260 platform needed 20W. Doubling the amount of memory is not unfair if you have four times as many cores (and thus SPEC CPU INT instances). So from these numbers it is clear that Intel's Avoton is a great step forward. The SPEC numbers tell us that Intel is able to fit four times as many cores in the same power envelope without (tangibly) lowering single-threaded performance (the lower clock speed is compensated for by the IPC improvements in Silvermont).
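    Working the quoted figures through shows how the "up to 4x" claim holds up; the quick check below uses only the numbers given above.

```python
# Quick check of Intel's performance-per-watt claim using only the
# figures quoted above (SPEC CPU INT rate 2006 base scores and platform power).
s1260_score, s1260_watts = 18.7, 20.0    # dual-core Atom S1260 platform
avoton_score, avoton_watts = 69.0, 19.0  # 8-core Atom C2xxx alpha platform

s1260_perf_per_watt = s1260_score / s1260_watts
avoton_perf_per_watt = avoton_score / avoton_watts

print(f"S1260:  {s1260_perf_per_watt:.2f} per watt")
print(f"Avoton: {avoton_perf_per_watt:.2f} per watt")
print(f"Improvement: {avoton_perf_per_watt / s1260_perf_per_watt:.1f}x")  # ~3.9x
```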
    Intel does not stop at integrating more features inside a SoC. Intel also wants to make the server and rack infrastructure more efficient. Today, several vendors have racks with shared cooling and power. Intel is currently working on servers with a rack fabric with optical interconnects. And in the future we might see processors with embedded RAM but without a memory controller, placed together inside a compute node and with a very fast interconnect to a large memory node. The idea is to have very flexible, centralized pools of compute, memory and storage.
    The Avoton server at the conference was showing some of these server and rack-based innovations. Not only did it have 30 small compute nodes...
    ... it also did not have any individual PSUs, drawing power from a centralized PSU instead.
    In summary, it looks like the components in the rack will be very different in the near future: multi-node servers without PSUs, SANs replaced by storage pools, and proprietary network gear replaced by specialized x86 servers running networking software.
    Gallery: Intel re-architecting the datacenter





    More...
