
Thread: Anandtech News

  1. RSS Bot FEED (#2731)

    Anandtech: Samsung ATIV Smart PC: Revisiting Clover Trail Convertibles

    The Windows 8 tablet space at launch consisted exclusively of Tegra 3-based or Core i5/i7 ULV-based systems. That changed with the release of Krait and Clover Trail tablets like the ATIV Tab and Acer W510, respectively, but with 7W IVB and AMD Z60 on the very near horizon, we’re seeing the Windows 8 tablet market start to expand and evolve quite rapidly. After a very positive initial experience with the Windows RT slates, I was eager to get my hands on an x86-based tablet, so when Anand gave me the chance to review one, I jumped at the opportunity. And so we have the Samsung ATIV Smart PC, known as the Samsung Series 5 Slate 500T in other parts of the world. It’s an 11.6” 1366 x 768 Clover Trail tablet that ships with Windows 8, 64GB of NAND, a laptop dock, and an MSRP of $749.

    It and the ASUS VivoTab TF810C were the two slates I had marked as most interesting in the lead-up to the Windows 8 launch. Clover Trail meant good battery life and x86 compatibility, the inclusion of Wacom active digitizers was exciting, and the 11.6” PLS/S-IPS displays seemed promising. The two are very comparable devices, though the ASUS is priced higher at $799 and no longer includes the laptop dock (it did at launch). That gives the Samsung a pretty sizable price advantage, as $749 is only about $50 more than the 64GB Windows RT tablets once the keyboard accessory cost is included—more than worth it given the disparity in features and capability. This is even more true when you consider that the street price of the ATIV Smart PC has fallen to $549 at Amazon without the laptop dock, or $649 with it.
    It seemed like the ATIV Smart PC would offer a good compromise between the mobility of the ARM-based slates and the power and features of the Intel Core-based ones, something aiming for the sweet spot of the Windows tablet lineup. After spending an extended amount of time with it, I think it’s close, but there are some definite areas for improvement. Read on for our full review.


    More...

  2. RSS Bot FEED (#2732)

    Anandtech: ReadyNAS 100, 300 and 500 Series Reboots Netgear's SMB NAS Lineup

    Netgear got into the SMB / SOHO / consumer NAS market with the purchase of Infrant Technologies in May 2007. The first generation ReadyNAS NV products were based on Infrant's own chips (using a SPARC core). A few years later, Netgear also started producing models based on Intel platforms. These included the Pro, Ultra and Ultra Plus models. Netgear soon realized that the market served by Infrant's old chips (low to mid-range SMB / SOHO / consumer) was being taken over by models based on Marvell's ARM-based platforms. To address this, the ReadyNAS Duo v2 and ReadyNAS NV+ v2 were introduced in late 2011. The result was that Netgear had three variants of its RAIDiator OS with different features (one for the SPARC-based Infrant chips, one for x86 and one for ARM). The naming convention for the models was also not consumer-friendly.
    Today, Netgear is taking steps to correct these issues with the launch of new models as well as a completely new operating system, the ReadyNAS OS 6.
    Hardware Refresh:
    The Marvell-based Duo v2 and NV+ v2 are being replaced with the next-generation ARMADA 370-based ReadyNAS 102 and 104, respectively. The amount of DRAM is also doubled, from 256 MB to 512 MB. Models in the 300 series are based on the Intel Atom D2701 platform, while the 500 series is based on the Intel Core i3-3220. The final digit in each model number refers to the number of bays available. A comparison of the different models is provided in the table below:
    Some of the interesting hardware features include the addition of an IR receiver in some of the models, as well as a touchscreen in the 500 series. Netgear is also introducing Expansion Disk Array (EDA) units to provide scalability using the eSATA port in the main device.
    ReadyNAS OS 6.0:
    In order to maintain a consistent feature set across all models, Netgear has decided to start with a clean slate. Therefore, ReadyNAS OS 6.0 is not going to be made available for any of the earlier models (including the Duo v2 / NV+ v2). The file system has been updated from ext3 / ext4 to BTRFS, which allows Netgear to provide advanced snapshotting capabilities usually present only in enterprise NAS units. We have already seen the capabilities of ReadyNAS Replicate, a $45 add-on for scheduling secure backups across different NAS units in physically different locations. With ReadyNAS OS 6.0, this feature is included for free. The operating system also includes an antivirus engine which provides real-time protection rather than just scheduled scans. Full iSCSI support for virtualized environments is available (with VMware and Microsoft certifications).
    A firmware update in late May is scheduled to bring encryption support to the OS. AES-256 will be used, and the key will be stored on a USB dongle that must be connected to the NAS. An unfortunate aspect is that none of the models in the 100 / 300 / 500 series have hardware-accelerated encryption support.
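    To illustrate the general idea (this is not Netgear's implementation, which hasn't been disclosed), here is a minimal Python sketch of AES-256 encryption with the key kept on removable media; the mount path and file name are hypothetical:

    # Hypothetical sketch: AES-256-GCM with the key stored on a USB dongle.
    # The /mnt/usb path and file name are assumptions for illustration only.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    KEY_PATH = "/mnt/usb/nas.key"   # dongle must be mounted for this to work

    def load_or_create_key():
        if os.path.exists(KEY_PATH):
            with open(KEY_PATH, "rb") as f:
                return f.read()
        key = AESGCM.generate_key(bit_length=256)   # 256-bit key -> AES-256
        with open(KEY_PATH, "wb") as f:
            f.write(key)
        return key

    aes = AESGCM(load_or_create_key())
    nonce = os.urandom(12)                          # unique per message
    ciphertext = aes.encrypt(nonce, b"file contents", None)
    assert aes.decrypt(nonce, ciphertext, None) == b"file contents"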
    Amongst consumer-targeted features, ReadyNAS OS 6.0 supports 'cloud-based' discovery, where the user can just enter the serial number of the NAS unit to be set up online, and Netgear's backend handles the firmware initialization and first-time actions. This has typically been handled by the RAIDar utility (which will also continue to be supported). The ReadyCLOUD feature can be used not only for discovery, but also for management and access. Support is also in place for local and remote backup / restore with Time Machine. DLNA is a standard feature in all NAS units now; Netgear claims that a single DLNA server can service both local and remote devices. The ReadyDROP feature provides Dropbox-like real-time file synchronization between mobile devices / PCs and a ReadyNAS device. Netgear's Genie Marketplace is also available on the new devices for access to free as well as paid apps which extend the functionality of the device.
    Pricing and Availability:
    The ReadyNAS 100 and 300 series are available for purchase today, while the 500 series will make its appearance in the market next month. Both diskless and populated models are available. MSRPs for diskless configurations are provided below:

    • ReadyNAS 102 : $199
    • ReadyNAS 312 : $449
    • ReadyNAS 516 : $1299

    Netgear is also introducing the 4-bay rackmount ReadyNAS 2120 for $1229. More details regarding the internal hardware platform of this model will be made available later.



    More...

  3. RSS Bot FEED (#2733)

    Anandtech: NVIDIA and Continuum Analytics Announce NumbaPro, A Python CUDA Compiler

    As NVIDIA’s GPU Technology Conference 2013 kicks off this week, there will be a number of announcements coming down the pipeline from NVIDIA and their partners. The biggest and most important of these will come Tuesday morning with NVIDIA CEO Jen-Hsun Huang’s keynote speech, while some other product announcements such as this one are being released today with the start of the show.
    Starting things off is news from NVIDIA and Continuum Analytics, who are announcing that they are bringing Python support to CUDA. Specifically, Continuum Analytics will be introducing a new Python CUDA compiler, NumbaPro, for their high performance Python suite, Anaconda Accelerate. With the release of NumbaPro, Python will be joining C, C++, and Fortran (via PGI) as the 4th major CUDA language.
    For NVIDIA, of course, the addition of Python is a big deal, opening the door to another substantial subset of programmers. Python is used in several different areas; though perhaps most widely known as an easy to learn, dynamically typed language common in scripting and prototyping, it’s also used professionally in fields such as engineering and “big data” analytics, the latter of which is where Continuum’s specific market comes into play. For NVIDIA this brings both the benefit of making CUDA more accessible due to Python’s reputation for simplicity, and at the same time opens the door to new HPC industries.
    Of course this is very much a numbers game for NVIDIA. Python has been one of the more widely used programming languages for a number of years now – though by quite how much depends on who’s running the survey – so after getting C++ under their belts it’s a logical language for NVIDIA to focus on to quickly grow their developer base. At the same time Python has a much larger industry presence than something like Fortran, so it’s also an opportunity for NVIDIA to further grow beyond academia and into industry.
    Meanwhile, though NumbaPro can’t claim to be the first such Python CUDA compiler – other projects such as PyCUDA have come first – Continuum’s Python compiler is set up to become all but the de facto Python implementation for CUDA. Like The Portland Group’s Fortran compiler, NVIDIA has singled out NumbaPro for a special place in their ecosystem, effectively adopting it as a 2nd party CUDA compiler. So while Python isn’t a supported language in the base CUDA SDK, NVIDIA considers it a principal CUDA language through the use of NumbaPro.
    Finally, NVIDIA is also using NumbaPro to tout the success of their 2011 CUDA LLVM initiative. One of the goals of bringing CUDA support to LLVM was to make it easier to add support for new programming languages to CUDA, which in this case is exactly what Continuum has used to build their Python CUDA compiler. NVIDIA’s long term goal remains to bring more languages (and thereby more developers) to CUDA, and being able to discuss success stories involving their LLVM compiler is a big part of accomplishing that.
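    As a quick illustration of what GPU programming in Python looks like, here is a minimal vector-addition kernel. It is written against the open-source Numba CUDA decorator style that NumbaPro builds on, so treat the import path and API details as an approximation rather than NumbaPro's exact interface:

    # Hedged sketch: a CUDA kernel written in Python via numba's decorators.
    # NumbaPro's actual import paths / decorators may differ slightly.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def vec_add(a, b, out):
        i = cuda.grid(1)          # global thread index
        if i < out.size:          # guard threads past the end of the array
            out[i] = a[i] + b[i]

    n = 1 << 20
    a = np.random.rand(n).astype(np.float32)
    b = np.random.rand(n).astype(np.float32)
    out = np.zeros_like(a)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    vec_add[blocks, threads_per_block](a, b, out)   # arrays are copied to/from the GPU

    assert np.allclose(out, a + b)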


    More...

  4. RSS Bot FEED (#2734)

    Anandtech: Inside AnandTech 2013: Power Consumption

    Two of the previous three posts I've made about our upgraded server infrastructure have focused on performance. In the second post I talked about the performance (and reliability) benefits of going with our all-SSD architecture, while in the third post I talked about the increase in CPU performance between our old and new infrastructures. Today however, it's time to focus on power consumption.
    Our old server infrastructure came from a time when power consumption mattered, but hadn't yet been prioritized. This was before Nehalem's 2:1 rule (a 2% performance increase for every 1% power increase), and it was before power gating. Once again I turned to our old HP DL585 server with four AMD Opteron 880s (8 cores total) as an example of just how much things have changed.
    As a recap, we moved from the DL585 (and over 20 other 1U, 2U and 4U machines with similar or slightly newer class processors) to an array of 6 Intel SR2625s (dual-socket 6-core Westmere based platforms), with another 6 to be deployed this year. All of our previous servers used hard drives, while all of our new servers use SSDs. The combination resulted in more than a doubling of peak CPU performance, and an increase in IO performance of anywhere from a near tripling to over an order of magnitude.
    Everything got better, but the impressive part is that power consumption went down dramatically:
    AnandTech Forums DB Server Power Consumption (2006 vs 2013)

                         Off      Idle     7-Zip Bench  Cinebench  Heavy IO
    HP DL585 (2007)      29.5W    524W     675W         655W       693.1W
    AnandTech 2013       12.8W    105.6W   267W         247W       170W
    With both machines plugged in to a power outlet but completely off, the new server already draws considerably less power. The difference at idle, however, is far more impressive. Without power gating and without a clear focus on minimizing power consumption, our old DL585 pulled over 500W when completely idle. It shocked me at first, but remembering how things used to be back then, it stopped being so surprising. There was a time when even our single socket CPU testbeds would pull over 200W at idle.
    Under heavy integer (7-zip) and FP (Cinebench) workloads, the difference is still staggering. You could run 2.5 of the new servers in the same power envelope as a single one of the old machines.
    The power consumption under heavy IO needs a bit of explaining. We were still on an all 3.5-inch HDD architecture back then, so we had to rely on a combination of internal drives as well as an external Promise VTrak J310s chassis to give us enough spindles to deliver the performance we needed. The 693.1W I report above includes the power consumption of the VTrak chassis (roughly 150W). In reality, all of the other tests here (idle, 7-zip, Cinebench) should include the VTrak's power consumption as well, since the combination of the two was necessary to service the needs of the Forums alone. With the new infrastructure everything can be handled by this one tiny 2U box. So whereas under a heavy IO load our old setup would pull nearly 700W, the new server only needs 170W.
    Datacenter power pricing varies depending on the size of the customer and the location of the datacenter, but if you assume roughly $0.10 per kWh, our old server's 524W idle draw works out to about 4,590 kWh, or $459 per year (assuming a 100% idle workload), compared to $92.50 per year for the new one. That's a considerable savings per year, just for a single box - and that's the best case scenario (it also doesn't include the J310s external chassis). For workloads that don't necessarily demand huge increases in performance, modernizing your infrastructure can come with significant power and space savings (not to mention a positive impact on reliability).
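    The arithmetic is simple enough to sanity-check; here is a small sketch using the idle figures from the table above and the assumed $0.10/kWh rate:

    # Annual energy cost at idle: watts -> kWh/year -> dollars.
    HOURS_PER_YEAR = 24 * 365           # 8,760 hours
    RATE = 0.10                         # dollars per kWh, as assumed above

    def annual_cost(watts):
        kwh_per_year = watts * HOURS_PER_YEAR / 1000
        return kwh_per_year * RATE

    print(annual_cost(524))    # old DL585 at idle  -> ~$459/year
    print(annual_cost(105.6))  # new server at idle -> ~$92.50/year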

    Keep in mind that we're only looking at a single machine here. While the DL585 was probably the worst example from our old setup, there were over a dozen other offenders in our racks (e.g. dual-socket Pentium 4 based Xeons). It's no wonder that power consumption in datacenters became a big issue very quickly.
    Our old infrastructure at our old datacenter was actually at the point where we were power limited. Although we only used a rack and a half of space we had to borrow power from adjacent racks because our requirements were so high. The new setup not only allows us better performance, but it gives us headroom on the power consumption side as well.
    As I mentioned in my first post, we went down this path back in 2010 - there have been further power (and performance) enhancements since then. A move to 22nm based silicon could definitely help further improve things. For some workloads, this is where the impact of microservers can really be felt. While I don't see us moving to a microserver environment for our big database servers, it's entirely possible that the smaller, front-end application servers could see a power benefit. The right microprocessor architectures aren't available yet, but as Intel moves to its new 22nm Atom silicon and as ARM moves to 20nm Cortex A57/A53 things could be different.



    More...

  5. RSS Bot FEED (#2735)

    Anandtech: Hardware Tricks: Can You Fix a Failing Mobile GPU with a Hair Dryer?

    Over the years, I’ve encountered my fair share of hardware failures while writing for AnandTech. For example, nearly every SFF I reviewed back in my early days failed within a couple of years (usually a dead motherboard); both of the first AM2 motherboards I reviewed also died within six months. I’ve seen more than a few bad sticks of memory, particularly overclocking RAM that couldn’t handle long-term use at higher voltages. And let’s not even talk about hard drives—lately I’ve noticed an uptick in the number of people coming to me with laptops that have a dead hard drive; so far I’ve only managed to successfully recover data from one drive, using the famous (infamous?) “put your hard drive in the freezer” trick.
    Needless to say, when a friend came to me with an old Gateway P-6831 FX from early 2008—a laptop I awarded a Gold Editors’ Choice award, no less!—and it was giving him a “Code 43” error on the GeForce 8800M GTS graphics, I didn’t have much hope of fixing the problem. Still, five years out of a $1300 gaming notebook isn’t too bad, and when I saw some suggestions online that I might be able to fix the GPU by putting it under the heat of a hair dryer for a couple of minutes, I figured, “What do we have to lose?” Well, what we had to lose was about four hours of my time, as this particular notebook is something of a pain to disassemble down to the GPU. But in the interest of testing out the “hair dryer” trick, I thought it worth a shot. Here’s the video footage of the process.

    Much to my surprise, all of the effort proved worthwhile, at least in the short term. Most fixes of this nature will only prolong the lifetime of failing hardware, but if you can get another several months—or dare we hope for a year?—out of a laptop with such a simple solution, that’s pretty good. I did take a moment to at least do a quick check of graphics performance. Five years ago, the 8800M GTS was one of the fastest mobile GPUs on the block—surpassed only by the more expensive 8800M GT and 8800M GTX. 64 DX10 CUDA cores running at 500MHz might not seem like much, but the 256-bit memory interface (clocked at 1600MHz) is nothing to scoff at.
    And what sort of performance does the 8800M GTS deliver? Even when paired with a now-decrepit Core 2 Duo T5450 (1.66GHz), the notebook still managed a reasonable score of just under 7000 in 3DMark06. To put that in perspective, however, Intel’s HD 4000 with a standard voltage mobile CPU now manages around 7500. Of course, 3DMark06 optimizations are pretty common, but we’re basically looking at top-end mobile GPU performance from five years back now being found in Intel’s IGP. When Haswell launches in a few months with GT3 and GT3e mobile parts, we’ll likely see IGP performance start to encroach on decent midrange GPUs like the GT 640M and HD 7730M—at least, that’s what I’m hoping to get!
    Anyway, if you’ve got a failing GPU or other component and you’re ready to throw it in the trash, you might give this hair dryer trick a shot if you’ve got a bit of time. I’ve seen others recommend baking a GPU PCB in the oven at 200F for eight minutes, and while that could work as well, it seems more likely to burn out some other component if you’re not careful. Sadly, the hair dryer trick and the freezer trick both failed on another recent HDD failure; next up on my list of hardware tricks to try: transplanting a dead HDD’s platters into a working drive. Wish me luck; my dad’s data needs it!
    Gallery: Gateway P-6831 FX Repairs





    More...

  6. RSS Bot FEED (#2736)

    Anandtech: Sigma Designs Updates Z-Wave SoC Portfolio for Affordable Home Automation

    The rise of connected devices has brought about an increased interest in home automation amongst consumers. Readers looking for a brief background on the various home automation (HA) technologies can peruse our primer piece from last year. HA technologies have been around since the 1970s, but the costs (mainly due to the technology's complexity and the necessity for custom installers) have kept it out of the reach of the common man. However, the usage of Wi-Fi in HA devices has suddenly made the technology more accessible.
    Sigma Designs is known for its video decoder chipsets, but the company has been trying to transform itself into a one-stop shop for 'powering the new digital home' by making some strategic acquisitions. One of these was the 2008 purchase of Zensys, the Danish company responsible for creating the Z-Wave home control technology.
    Sigma Designs is announcing the fifth generation of Z-Wave SoCs today. The cost of the SoCs has gone down compared to the previous generation, and Sigma claims improved RF performance and lower power consumption compared to previous generation products. Platform developers now also have more memory in the SoCs to work with. With this generation, the company is taking extra steps to ensure better returns for their customers by providing customizable reference designs and enabling faster time-to-market for end products. A feature-heavy middleware stack is also being supplied.
    Z-Ware and Z-Ware Apps form the APIs and customizable UI designs for multiple platforms. ZIPR is the reference router design that handles translation between an IP network and the Z-Wave mesh network. Z/IP Gateway is the gateway reference design (transforming IP commands to Z-Wave commands and vice versa), while UZB is a reference design to enable Z-Wave functionality over a USB port.
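    To make the gateway's translation role concrete, here is a deliberately toy Python sketch of the pattern a Z/IP-style gateway performs; the JSON shape, command table, and frame layout are invented for illustration and are not the actual Z-Wave protocol:

    # Toy illustration of an IP-to-Z-Wave gateway's translation step.
    # The JSON schema, command codes, and frame layout below are entirely
    # hypothetical; real Z-Wave framing is defined by the Z-Wave spec.
    import json

    COMMAND_CODES = {"on": 0x01, "off": 0x00}   # invented mapping

    def ip_command_to_frame(payload: bytes) -> bytes:
        """Translate a JSON command from the IP side into a toy mesh frame."""
        msg = json.loads(payload)                # e.g. {"node": 5, "command": "on"}
        return bytes([0x55, msg["node"], COMMAND_CODES[msg["command"]]])

    frame = ip_command_to_frame(b'{"node": 5, "command": "on"}')
    print(frame.hex())   # -> 550501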
    The SoCs being introduced today are the SD3503, a Z-Wave serial interface for use in HA gateways, and the SD3502, a general purpose SoC for use in HA devices. The ZM5101 and ZM5202 are modules that combine these SoCs with pre-FCC / CE approved RF designs for faster time to market.
    An essential aspect of today's introductions is that they are backward compatible, so existing Z-Wave controllers should be able to interface with HA products using the new Z-Wave SoCs. The combination of Z-Wave, ZigBee and Wi-Fi will rule the HA space for the next few years, and Sigma's new platforms ensure that Z-Wave will continue to stay relevant.



    More...

  7. RSS Bot FEED (#2737)

    Anandtech: NVIDIA's GPU Technology Conference 2013 Keynote Live Blog

    We're live at NVIDIA's 2013 GPU Technology Conference (GTC), seated and ready to go. Anand, Ryan, and I are here and expecting Jen-Hsun's keynote to get under way shortly.


    More...

  8. RSS Bot FEED (#2738)

    Anandtech: Cooler Master Storm Scout II Advanced Case Review: Falling Behind the Curve

    Cooler Master has been fairly gung ho on the PR side about their Storm Scout II Advanced. While we missed the opportunity to review its predecessor, the Storm Scout II, we aim to rectify that omission by putting this new semi-portable ATX chassis through its paces. Cooler Master has a long history of strong enthusiast offerings (with their HAF line being particularly well loved), but does the Storm Scout II Advanced inherit that legacy of greatness, or is it falling behind the curve?


    More...

  9. RSS Bot FEED (#2739)

    Anandtech: NVIDIA Updates Tegra Roadmap Details at GTC - Logan and Parker Detailed

    We're at NVIDIA's GTC 2013 event where team green just updated their official roadmap and shared some more details about their Tegra portfolio, specifically additional information about Logan and Parker, the codenames for the SoCs beyond Tegra 4. First up is Logan, which will be NVIDIA's first SoC with CUDA inside, specifically courtesy of a Kepler-architecture GPU capable of CUDA 5.0 and OpenGL 4.3. There are no details on the CPU side of things, but we're told to expect Logan demos (and samples) inside 2013 and production devices in early 2014.
    After Logan comes Parker, which NVIDIA shared will contain the 64-bit Denver CPU the company is working on, alongside a Maxwell-generation GPU. Parker will also be built using 3D FinFET transistors, likely from TSMC. We're going to see what more we can find out about Logan and Parker later today.

    In addition, NVIDIA showed off a new product named Kayla, a small mITX-like board running a Tegra 3 SoC and an unnamed new low-power Kepler-family GPU.



    More...

  10. RSS Bot FEED (#2740)

    Anandtech: Piz Daint Supercomputer Announced, Powered By Tesla K20X

    Along with NVIDIA’s keynote this morning (which should be wrapping up by the time this article goes live), NVIDIA has a couple of other announcements hitting the wire. The first is that NVIDIA has landed another major supercomputer contract, this time with the Swiss National Supercomputing Centre (CSCS).
    CSCS will be building a new Cray XC30 supercomputer, “Piz Daint.” Like Titan last year, Piz Daint is a mixed supercomputer that will pack both a large number of CPUs – Xeon E5s to be precise – and, of great interest to NVIDIA, a large number of Tesla K20X GPUs. We don’t have the complete specifications for Piz Daint at this time, but when completed it is expected to exceed 1 PFLOPS in performance and be the most powerful supercomputer in Europe.
    Piz Daint will be filling in several different roles at CSCS. Its primary role will be weather and climate modeling, working with Switzerland’s national weather service MeteoSwiss. Along with weather work, CSCS will also be using time on Piz Daint for other science fields, including astrophysics, life science, and material science.
    For NVIDIA, of course, this marks another big supercomputer win. Though supercomputer wins are not a huge business on their own relative to the complete Tesla business, wins like Titan and Piz Daint are prestigious for the company due to the importance of the work done on these machines and the name recognition they bring.


    More...
