
Thread: Anandtech News

  1. RSS Bot FEED
    Join Date: 09-07-07
    Posts: 34,802
    #2571

    Anandtech: Vizio's New Touch Notebook and AIO PCs at CES

    Vizio used CES as the platform to debut the third revision of its PC lineup, which currently consists mostly of ultrabooks and all-in-ones. The first revision was the initial launch last summer, while the second revision brought touchpad updates (replacing the godawful Sentelic pads with better Synaptics units) and Windows 8. This third revision brings touchscreens and quad-core CPUs across the board, to notebooks and all-in-ones alike. 

    Vizio’s notebook lineup is presently structured with a Thin+Light and a Notebook; the former is available in two form factors (14” 900p and 15.6” 1080p) with Intel’s ULV processors, solid state storage, and integrated graphics, while the Notebook is 15.6” 1080p with quad-core IVB processors, Nvidia’s GT 640M LE graphics, and a 1TB hard drive paired with a 32GB caching drive. Across the board, we see IPS display panels, fully aluminum chassis, and uniform industrial design. 
    The new Thin+Light Touch again comes in 14” and 15” models, now exclusively with either AMD A10 or Ivy Bridge i7 quad-core CPUs; AMD dedicated graphics are available with the AMD model. The dual-core and ULV parts are gone, and with nary a mention of the CN15 Notebook, it would appear it has been killed off because of too much overlap with the Thin+Light Touch. Since both quad-core CPUs and dedicated GPUs are available in the Thin+Light Touch you’re not losing much, though it does mean there is no longer an Intel quad + dGPU configuration on offer.

    As can probably be surmised from the name, the Thin+Light Touch is available exclusively with a capacitive multitouch display. This adds a bit of thickness and weight to the chassis, but the 15.6” model is still only 4 pounds (up from 3.89 lbs before), so the penalty is small. Other improvements include a much more structurally sound palmrest and interior, which results in significantly less flex in both the body and the keyboard. This is likely the most significant of the chassis-level upgrades, and it fixes the last major flaw of the second-revision notebooks. Battery capacity has been “nearly doubled,” which suggests something close to 100Wh (the previous Thin+Light used a 57.5Wh pack), with the hope of substantially improving battery life.
    Gallery: Vizio Laptops


    It seems like a pretty targeted generational update, with all of the pain points from the first two notebooks fixed. I think I’d still like to see some improvements in terms of ports on offer (2xUSB and no SD slot just isn’t enough), but the gorgeous IPS display and nice industrial design make up for any remaining flaws. Price points are expected to be similar to the previous Thin+Light, and availability is expected to be in the early spring timeframe. 
    Vizio also had its All-in-One Touch series desktops at its suite in the Wynn, though these are not new products. Vizio updated the AIO series with touchscreen displays and Synaptics touchpads at the Windows 8 launch, and simply brought those to Las Vegas to complement its new notebook, tablet, and HDTV products on the show floor.
    Gallery: Vizio All-In-Ones








    More...

  2. RSS Bot FEED
    Join Date: 09-07-07
    Posts: 34,802
    #2572

    Anandtech: Intel's Quick Sync: Coming Soon to Your Favorite Open Source Transcoding A

     
    Intel's hardware accelerated video transcode engine, Quick Sync, was introduced two years ago with Sandy Bridge, and I was immediately sold. With proper software support you could transcode content at frame rates multiple times faster than even the best GPU-based solutions, and you could do so without taxing the CPU cores. 
     
    While Quick Sync wasn't meant for high quality video encoding for professional production, it produced output that was more than good enough for use on a smartphone or tablet. Given the incredible rise in popularity of those devices over recent history and given that an increasing number of consumers moved to notebooks as primary PCs, a fast way of transcoding content without needing tons of CPU cores was exactly what the market needed.
     
    There was just one problem with Quick Sync: it had zero support in the open source community. The open source x264 encoder didn't support Quick Sync, and by extension applications like Handbrake didn't either; you had to rely on Cyberlink's Media Espresso or ArcSoft's Media Converter. Last week, Intel set the wheels in motion to change all of this. 
     
    With the release of the Intel Media SDK 2013, Intel open sourced its dispatcher code. The dispatcher simply detects what driver is loaded on the machine and returns whether or not the platform supports hardware or software based transcoding. The dispatcher is the final step before handing off a video stream to the graphics driver for transcoding, but previously it was a proprietary, closed source piece of code. For open source applications whose license requires that all components contained within the package are open source as well, the Media SDK 2013 should finally enable Quick Sync support. I believe that this was the last step in enabling Quick Sync support in applications like Handbrake.
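     
    To make the dispatcher's role a bit more concrete, here is a minimal sketch in C against the Media SDK dispatcher API. It is illustrative only, not Handbrake's or Intel's actual integration code: the application asks the dispatcher for a hardware (Quick Sync) session and falls back to the software library if no capable driver is loaded.
     
        #include <stdio.h>
        #include "mfxvideo.h"              /* Media SDK dispatcher header */
        
        int main(void)
        {
            mfxVersion ver = { {0, 1} };   /* request API 1.0 or newer */
            mfxSession session;
            mfxIMPL impl;
        
            /* Ask the dispatcher for a hardware-accelerated session first... */
            mfxStatus sts = MFXInit(MFX_IMPL_HARDWARE_ANY, &ver, &session);
            if (sts != MFX_ERR_NONE) {
                /* ...and fall back to the software library if no Quick Sync capable driver is found. */
                sts = MFXInit(MFX_IMPL_SOFTWARE, &ver, &session);
            }
            if (sts != MFX_ERR_NONE) {
                printf("Media SDK is not available on this system\n");
                return 1;
            }
        
            /* Report which implementation the dispatcher actually selected. */
            MFXQueryIMPL(session, &impl);
            printf("Transcode path: %s\n",
                   MFX_IMPL_BASETYPE(impl) == MFX_IMPL_SOFTWARE ? "software" : "hardware (Quick Sync)");
        
            MFXClose(session);
            return 0;
        }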
     
    I'm not happy with how long it took Intel to make this move, but I hope to see the results of it very soon. 






    More...

  3. RSS Bot FEED
    Join Date: 09-07-07
    Posts: 34,802
    #2573

    Anandtech: The Tegra 4 GPU, NVIDIA Claims Better Performance Than iPad 4

    At CES last week, NVIDIA announced its Tegra 4 SoC featuring four ARM Cortex A15s running at up to 1.9GHz and a fifth Cortex A15 running between 700 and 800MHz for lighter workloads. Although much of CEO Jen-Hsun Huang's presentation focused on the improvements in CPU and camera performance, GPU performance should see a significant boost over Tegra 3.
    The big disappointment for many was that NVIDIA maintained the non-unified architecture of Tegra 3, and won't fully support OpenGL ES 3.0 with the T4's GPU. NVIDIA claims the architecture is better suited for the type of content that will be available on devices during the Tegra 4's reign.
     
    Despite the similarities to Tegra 3, components of the Tegra 4 GPU have been improved. While we're still a bit away from a good GPU deep-dive on the architecture, we do have more details than were originally announced at the press event.

        
    Tegra 4 features 72 GPU "cores", which are really individual components of Vec4 ALUs that can work on both scalar and vector operations. Tegra 2 featured a single Vec4 vertex shader unit (4 cores), and a single Vec4 pixel shader unit (4 cores). Tegra 3 doubled up on the pixel shader units (4 + 8 cores). Tegra 4 features six Vec4 vertex units (FP32, 24 cores) and four 3-deep Vec4 pixel units (FP20, 48 cores). The result is 6x the number of ALUs as Tegra 3, all running at a max clock speed that's higher than the 520MHz NVIDIA ran the T3 GPU at. NVIDIA did hint that the pixel shader design was somehow more efficient than what was used in Tegra 3. 
     
    If we assume a 520MHz max frequency (where Tegra 3 topped out), a fully featured Tegra 4 GPU can offer more theoretical compute than the PowerVR SGX 554MP4 in Apple's A6X. The advantage comes as a result of a higher clock speed rather than a larger die area. This won't necessarily translate into better performance, particularly given Tegra 4's non-unified architecture. NVIDIA claims that at final clocks, it will be faster than the A6X both in 3D games and in GLBenchmark. The leaked GLBenchmark results are apparently from a much older silicon revision running nowhere near final GPU clocks.
     
    Mobile SoC GPU Comparison
    GPU                   Used In          SIMD Name   # of SIMDs   MADs per SIMD   Total MADs   GFLOPS @ Shipping Frequency
    GeForce ULP (2012)    Tegra 3          core        3            4               12           12.4
    PowerVR SGX 543MP2    A5               USSE2       8            4               32           16.0
    PowerVR SGX 543MP4    A5X              USSE2       16           4               64           32.0
    PowerVR SGX 544MP3    Exynos 5 Octa    USSE2       12           4               48           51.1
    PowerVR SGX 554MP4    A6X              USSE2       32           4               128          71.6
    GeForce ULP (2013)    Tegra 4          core        18           4               72           74.8
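     
    As a sanity check, the GFLOPS column follows directly from the MAD counts: a MAD is a multiply plus an add, i.e. 2 FLOPs per clock, so peak GFLOPS = total MADs x 2 x shader clock in GHz. The quick sketch below reproduces three of the entries; note that 520MHz is the Tegra 3 figure from the text (and the working assumption used for the Tegra 4 estimate), while ~280MHz for the A6X's SGX 554MP4 is an assumed, commonly cited clock rather than an official spec.
     
        #include <stdio.h>
        
        /* Peak throughput: each MAD unit performs a multiply + add per clock (2 FLOPs),
           so peak GFLOPS = total MADs * 2 * shader clock in GHz. */
        static double peak_gflops(int total_mads, double clock_ghz)
        {
            return total_mads * 2.0 * clock_ghz;
        }
        
        int main(void)
        {
            printf("Tegra 3    (12 MADs @ 0.52 GHz):  %.1f GFLOPS\n", peak_gflops(12, 0.52));   /* ~12.5 */
            printf("A6X 554MP4 (128 MADs @ 0.28 GHz): %.1f GFLOPS\n", peak_gflops(128, 0.28));  /* ~71.7 */
            printf("Tegra 4    (72 MADs @ 0.52 GHz):  %.1f GFLOPS\n", peak_gflops(72, 0.52));   /* ~74.9 */
            return 0;
        }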
     
    Tegra 4 does offer some additional enhancements over Tegra 3 in the GPU department. Real multisampling AA is finally supported as well as frame buffer compression (color and z). There's now support for 24-bit z and stencil (up from 16 bits per pixel). Max texture resolution is now 4K x 4K, up from 2K x 2K in Tegra 3. Percentage-closer filtering is supported for shadows. Finally, FP16 filter and blend is supported in hardware. ASTC isn't supported.
     
    If you're missing details on Tegra 4's CPU, be sure to check out our initial coverage. 






    More...

  4. RSS Bot FEED
    Join Date: 09-07-07
    Posts: 34,802
    #2574

    Anandtech: Checking Their Pulse: Hisense's Google TV Box at CES

    So, Google TV is still happening. Indeed, more players are getting into the game than ever. Hisense is a Chinese OEM/ODM that's seen steady growth in the television market internationally, and hopes to build a big presence in the US this year. Their Google TV box, Pulse, was announced as among the first to be built around the Marvell Armada 1500 chipset, and we've been waiting for it patiently ever since. It's available on Amazon right now, and we'll hopefully have it in for review soon. For now, we got a chance to take a peek at Hisense's interpretation of Google TV while on the show floor at CES. 
     
    To recap, Google TV is the stab at altering the television viewing paradigm by Mountain View's finest. It has gone through some pretty immense transformations since it was first introduced, and while all implementations share a basic UI paradigm, Google has allowed OEMs to skin parts of the experience. The latest software iteration (V3, in their parlance) has three key conceits: Search, Voice and a recommendation engine. Search, understandably, is Google's strong suit, and is leveraged to great success. Voice's execution is good, though the value is limited. Primetime is the recommendation engine, and while it's no doubt quite good, it feels little different from the similar features provided by Netflix and the like. 
     
    Hisense isn't shipping V3 software just yet, but a few things stand out about their software. We'll start with the Home screen. Lightly skinned and functional, the screen is fairly satisfying. The dock and the three featured apps across the top are static, but that "Frequently Used" field is populated automatically based on your usage. That area below the video field would make a great place for a social feed widget, or perhaps some other useful data, but, as usual, is instead devoted to ad space. Just off the Home button is a new button that maps to an old function. Previously, hitting the Home button from the Home screen brought you to a field where a user could configure widgets. Here that "double tap" is moved to a separate button, but the result looks largely the same. 
     
         
    The remote control is a many-buttoned affair, with a large touchpad (complete with scroll regions) on one side and a QWERTY keyboard on the back. The touchpad is quite large, though responsiveness was a bit hit or miss; it's hard to blame the BT/WiFi-powered hardware in such a spectrum-crowded environment. The button layout is oddly cramped for such a large remote, thanks to that touchpad and a similarly large set of directional keys. The QWERTY keyboard on the back, though, benefits from the acreage and has a good layout. No motion controls are on offer here; this is a tactile interface all the way. And truly, I'm not going to miss waving a wand around. 
    There are three hardware things a Google TV needs to get right, and so far none have hit on all three. Video decode needs to be flawless and extensive; if local file playback is available, it shouldn't be limited to just a handful of codecs and containers, and it shouldn't ever falter. 3D rendering should at least be passable; as an Android device, it'd be nice to be able to play some games on these things, and so far that's something that's been ignored. More important than 3D though, 2D composition must be fast, no matter how many effects you throw at the screen. In many past devices, the UI was generally sluggish, but it slowed to an absolute crawl when you asked it to overlay a UI component over video. Imagine our surprise, then, when Hisense pulled it off without a hiccup. 
     
    Hitting the Social button while watching a video brings up this lovely widget, which shows your Twitter and Facebook feeds and even offers sharing and filtering options. The filtering options are the most intriguing, since they'd allow you to follow a content-based hashtag (say #TheBigGame) and participate in the conversation related to the content you're watching, all on the same screen. For terrestrial content the widget shifts the video into the upper left region so that none of it is obscured by the widget. 
     
    But as nifty as the widget may be, what really set it apart was how quickly its components were drawn and updated. From the time the button was pressed to the moment the fully composited and updated widget appeared couldn't have been more than a second. Jumping from there to the Home screen was quicker still, and opening Chrome and navigating to our home page happened without noticeable stutter. 
     
    Chatting with Marvell later, we discussed how they used their own IP to develop their composition engine and targeted just this sort of use case for it. Based on our time with their solution on the show floor, they and Hisense have done some good work. We can't wait to get our hands on the hardware ourselves and see just how good it gets. 






    More...

  5. RSS Bot FEED
    Join Date: 09-07-07
    Posts: 34,802
    #2575

    Anandtech: Dragging Core2Duo into 2013: Time for an Upgrade?

    As any ‘family source of computer information’ will testify, every so often a family member will want an upgrade.  Over the final few months of 2012, I did this with my brother’s machine, fitting him out with a Sandy Bridge CPU, an SSD and a good GPU to tackle the newly released Borderlands 2 with, all for free.  The only problem he really had up until that point was a dismal FPS in Runescape.
    The system he had been using for the two years previous was an old hand-me-down I had sold him – a Core2Duo E6400 with 2x2 GB of DDR2-800 and a pair of Radeon HD4670s in Crossfire.  While he loves his new system with double the cores, a better GPU and an SSD, I wondered how much of an upgrade it had really been.
    I have gone through many upgrade philosophies over the past decade.  My current advice to friends and family who ask about upgrades is that, if they are happy installing new components, they should upgrade one component at a time to one of the best in its class, rather than settling for an overall mediocre setup, as much as budget allows.  This tends towards outfitting a system with a great SSD, then a GPU, PSU, and finally a motherboard/CPU/memory upgrade with one of those being great.  Over time the other two of that trio also get upgraded, and the cycle repeats.  Old parts are sold and some cost is recouped in the process, but at least some of the hardware is always on the cutting edge, rather than a middling computer shop off-the-shelf system that could be full of bloatware and dust.
    As a result of upgrading my brother's computer, I ended up with his old CPU/motherboard/memory combo, full of dust, sitting on top of one of my many piles of boxes.  I decided to pick it up and run the system with a top-range GPU and an SSD through my normal benchmarking suite to see how it fared against the likes of the latest FM2 Trinity and Intel offerings, both at stock and with a reasonable overclock.  Certain results piqued my interest, but for normal web browsing and the like it still feels as tight as a drum.
    The test setup is as follows:
    Core2Duo E6400 – 2 cores, 2.13 GHz stock
    2x2 GB OCZ DDR2 PC8500 5-6-6
    MSI i975X Platinum PowerUp Edition (supports up to PCIe 1.1)
    Windows 7 64-bit
    AMD Catalyst 12.3 + NVIDIA 296.10 WHQL (for consistency between older results)
    My recent testing procedure in motherboard reviews pairs the motherboard with an SSD and a HD7970/GTX580, and given my upgrading philosophy above, I went with these for comparable results.  
    The system was tested at stock (2.13 GHz and DDR2-533 5-5-5) and with a mild overclock (2.8 GHz and DDR2-700 5-5-6).  
    Gaming Benchmarks
    Games were tested at 2560x1440 (another ‘throw money at a single upgrade at a time’ possibility) with all the eye candy turned up, and results were taken as the average of four runs.
    Metro2033
    While an admirable effort by the E6400, and overclocking helps a little, the newer systems get that edge.  Interestingly the difference is not that much, with an overclocked E6400 being within 1 FPS of an A10-5800K at this resolution and settings while using a 580.
    Dirt3
    The bump by the overclock makes Dirt3 more playable, but it still lags behind the newer systems.
    Computational Benchmarks
    3D Movement Algorithm Test
    This is where it starts to get interesting.  At stock the E6400 lags at the bottom, though within reach of an FX-8150 at 4.2 GHz, but with an overclock the E6400 at 2.8 GHz easily beats the Trinity-based A10-5800K at 4.2 GHz.  Part of this can be attributed to the way the Bulldozer/Piledriver CPUs deal with floating point calculations, but it is incredible that a July 2006 processor can beat an October 2012 model.  One could argue that a mild bump on the A10-5800K would put it over the edge, but in our overclocking of that chip anything above 4.5 GHz was quite tough (we perhaps got a bad sample to OC).
    Of course the situation changes when we hit the multithreaded benchmark, with the two cores of the E6400 holding it back.  However, if we were using a quad core Q6400, stock CPU performance would be on par with the A10-5800K in an FP workload, although the Q6400 would have four FP units to calculate with and the A10-5800K only has two (as well as the iGPU).
    WinRAR x64 3.93 - link
    In a variable threaded workload, the DDR2 equipped E6400 is easily outpaced by any modern processor using DDR3.
    FastStone Image Viewer 4.2 - link
    Despite FastStone being single threaded, the increased IPC of the later generations usually brings home the bacon - the only difference being the Bulldozer based FX-8150, which is on par with the E6400.
    Xilisoft Video Converter
    Similarly with XVC, more threads and INT workloads win the day.
    x264 HD Benchmark
    Conclusions
    When I start a test session like this, my first test is usually 3DPM in single thread mode.  When I got that startling result, I clearly had to dig deeper, but the conclusion produced by the rest of the results is clear: in terms of actual throughput benchmarks, the E6400 is slow compared to all the modern home computer processors, either limited by cores or by memory. 
    This was going to be obvious from the start.
    In the sole benchmark which does not rely on memory or thread scheduling and is purely floating point based the E6400 gives a surprise result, but nothing more.  In our limited gaming tests the E6400 copes well at 2560x1440, with that slight overclock making Dirt3 more playable. 
    But the end result is that if everything else is upgraded, and the performance boost is cost effective, even a move to an i3-3225 or A10-5800K will yield real world tangible benefits, alongside all the modern advances in motherboard features (USB 3.0, SATA 6 Gbps, mSATA, Thunderbolt, UEFI, PCIe 2.0/3.0, Audio, Network). 
    My brother enjoys playing his games at a more reasonable frame rate now, and he says normal usage has sped up a bit, making watching video streams a little smoother if anything.  The only question is where Haswell will fit into all of this, and that is a question I look forward to answering.






    More...

  6. RSS Bot FEED
    Join Date: 09-07-07
    Posts: 34,802
    #2576

    Anandtech: ASRock Z77 OC Formula Review: Living In The Fast Lane

    Enthusiasts and speed freaks are always looking for an edge – a little something that will help push their gear that little bit faster.  There is already a market for pre-overclocked GPUs, and now SSDs are shipping with internal RAID to push past the limits of a single SATA connection.  These require little to no knowledge of overclocking and are essentially plug and play.  When it comes to pushing GPUs and motherboards higher, we get a dichotomy of ‘easy to OC’ against ‘advanced options to push the limits’.  In order to meet these two markets, the top four motherboard manufacturers have all come out with their respective weapons for Z77 and Ivy Bridge, aiming for either ~$220 or ~$380, and all of them have broken overclocking records at one stage or another since their release.  First up on our battle bridge is the ASRock Z77 OC Formula, designed by ASRock’s in-house overclocker Nick Shih, which commands a paltry $240 for all the goodies.





    More...

  7. RSS Bot FEED
    Join Date: 09-07-07
    Posts: 34,802
    #2577

    Anandtech: Vizio's AMD Z60 Hondo-based Windows 8 Tablet PC at CES 2013


    Even with the comprehensive overhaul of their notebook lineup, the big news out of Vizio’s CES booth was definitely their new Windows 8 tablet. The Vizio Tablet PC is the first system we’ve come across with AMD’s Z60 APU inside. It’s a 1GHz dual-core part, with a pair of Bobcat cores and an HD 6250 GPU onboard. The low clock speed allows it to hit a TDP of roughly 4.5W, easily the lowest of AMD’s APUs, but it also means that compute performance will likely be similar to or slightly worse than Clover Trail. This isn’t unexpected, since we saw the same situation play out with Ontario last year - basically a faster microarchitecture clocked significantly lower such that it performed roughly on par with Atom, except with significantly better GPU performance.
    In addition to the AMD Z60, the Vizio Tablet PC comes with an 11.6” 1080p display, 2GB of memory, a 64GB SSD, stereo speakers, and Vizio’s now customary industrial design and attention to detail. The chassis is pretty thin at 0.4”, and at 1.66lbs isn’t too heavy for a system of this form factor. It’s a nice design, very flat and clean, and feels good in hand. The frame is aluminum, with a soft-touch back and glass front. I'll explore the hardware fully in the review, but for now, just know that it's a good looking, well executed design.

    My main comparison point was the Samsung ATIV Smart PC 500T, a Clover Trail-based 11.6” (1366x768) tablet which weighs a very similar 1.64lbs. The ATIV isn’t a particularly well designed system, which I’ll get into in my review, so the Vizio is unsurprisingly a much nicer piece of hardware design, but what really got me was the performance of Z60. Even at 1080p, the Vizio feels smoother throughout the Windows 8 UI than Clover Trail at WXGA. The extra GPU horsepower of the APU certainly makes itself felt when compared to the PowerVR SGX545 in Atom Z2760. This is a good sign, and all of the hardware acceleration capabilities that opens up should make Z60 a much more livable computing situation than Atom. Obviously, it won’t come anywhere near 7W IVB, which I’d say is the current preferred Windows 8 tablet platform (and should be until Haswell comes) but it should be a good deal cheaper. 
    The display is supposedly not IPS but is definitely some wide-angle panel type, so perhaps it’s a Samsung-sourced PLS panel or something similar. It’s pretty crisp too - 1080p on an 11.6” panel is fantastic from a pixel density standpoint. We have no indication of price or release date, but Vizio says that it will be priced “competitively”. Competitive to what still remains a question, since the Z60-based Vizio kind of bridges the gap between Clover Trail and Ivy Bridge tablets, but I wouldn’t be shocked to see it drop at around $800. That puts it on par with the ASUS VivoTab 810C (the Atom one, not the one we reviewed) and just above the ATIV Smart PC ($749) but well below the 1080p Ivy Bridge tablets ($899 for Surface Pro, $949 for Acer’s W700). 
    Gallery: Vizio Tablets and Smartphones


    I’m excited, it looks like a pretty decent offering and I’m glad to see AMD get such a solid design win. Intel has long owned the mobile and ultramobile PC space, so it’s nice to see AMD finally put out a viable chip that will hopefully shake things up going forward. 






    More...

  8. RSS Bot FEED
    Join Date: 09-07-07
    Posts: 34,802
    #2578

    Anandtech: OpenCompute servers and AMD Open 3.0

    Remember our review of Facebook's first OpenCompute server? Facebook designed a server for its own purposes, but quickly released all the specs to the community. The result was a sort of "open-source" - or rather "open-specifications" and "open-CAD" - hardware. The idea was that releasing the specifications to the public would advance and improve the platform quickly. The "public" in this case is mostly ODMs, OEMs and other semiconductor firms.
    The really cool thing about this initiative is that the organisation managed to convince Intel and AMD to standardize certain aspects of the hardware. Yes, they collaborated. The AMD and Intel motherboards will have the same form factor, mounting holes, management interface and so on. The ODM/OEM has to design only one server: the AMD board can be swapped out for the Intel one and vice versa. The Mini-Mezzanine slot and the way the power supply is connected are also standardized.
    AMD is the first out with this new "platform", which, contrary to Intel's own current customized version of OpenCompute 2.0, is targeted at the mass market. The motherboard is designed and produced by several partners (Tyan, Quanta) and based upon the specifications of large customers such as Facebook. But again, this platform is not just about the Facebooks of this world; the objective is to lower the power, space and cost of traditional servers. So although AVnet and Penguin Computing will be the first integrators to offer complete server systems based upon this spec, there is nothing stopping DELL, HP and others from doing the same. The motherboard design can be found below.
     
     
    The T shape makes it possible to place the PSU on the left, on the right, or on both sides (redundant PSUs). In all cases the PSU sits behind the rest of the hardware and thus does not heat up the air flowing over the other components, as you can see below.
    The voltage regulation is capable of running EE and SE CPUs ranging from 85 to 140W TDP, and it disables several phases when they are not necessary in order to save power.  
    Servers can be 1U, 1.5U, 2U or 3U high. The platform is highly modular, and the solutions built upon it can be widely different; AMD sees three different target markets.
    Motherboards will not offer more than 6 SATA ports, but with the help of PCIe cards you can get up to 35 SATA/SAS drives in there to build a storage server. The HPC market demands exactly the opposite: in most cases CPU power and memory bandwidth matter most. There will be an update around late February that will support faster 1866 MHz DIMMs (1 DIMM per channel).
    Our first impression is that this is a great initiative, building further upon the excellent ideas on which OpenCompute was founded. It should bring some of the cost and power savings that Facebook and Google enjoy to the rest of us. The fact that the specifications are open and standardized should definitely result in some serious cost savings, as vendors cannot lock you in like they do in the traditional SAN and blade server markets. We are curious how the final management hardware and software will turn out. We don't expect it to be at the "HP iLO Advanced" level, but we hope it is as good as an "Intel barebone server" management solution: being able to boot directly into the BIOS, a solid remote management solution and so on. The previous out-of-band management solution was very minimal, as the first OpenCompute platform was mainly built for "hyperscale" datacenters.
    The specifications of AMD Open 3.0 are available here.
     






    More...

  9. RSS Bot FEED
    Join Date: 09-07-07
    Posts: 34,802
    #2579

    Anandtech: Fusion-io Launches ioScale for Hyperscale Market

    We haven't even had time to cover everything we saw at CES last week, but there are already more product announcements coming in. Fusion-io launched their new ioScale product line at the Open Compute Summit, which was originally started by a few Facebook engineers who were looking for the most efficient and economical way to scale Facebook's computing infrastructure. Fusion-io's aim with the ioScale is to provide a product that makes building an all-flash datacenter more practical, the key benefits being the data density and pricing.
    Before we look more closely at the ioScale, let's talk briefly about its target market: hyperscale companies. The term hyperscale may not be familiar to all, but in essence it means a computing infrastructure that is highly scalable. Good examples of hyperscale companies would be Facebook and Amazon, both of which must constantly expand their infrastructure due to increasing amounts of data. Not all hyperscale companies are as big as Facebook or Amazon, though; there are lots of smaller companies that may need just as much scalability.
    Since hyperscale computing is all about efficiency, it's also common that commodity designs are used instead of pricier blade systems, and along with that go expensive RAID arrays, networking solutions and redundant power supplies, for instance. The idea is that high availability and scalability should be the result of smart software, not of expensive and - even worse - complex hardware. That way the cost of infrastructure investment and management is kept as low as possible, which is crucial for a cloud service where a big portion of the income is often generated through ads or low-cost services. The role of software is simply huge in hyperscale computing, and to improve the software, Fusion-io also provides an SDK called ioMemory that will assist developers in optimizing their software for flash memory based systems (for example, the SDK allows SSDs to be treated as DRAM, which will cut costs even more since less DRAM will be needed). 
    The ioScale comes in capacities from 400GB up to 3.2TB (in a single half-length PCIe slot), making it one of the highest-density commercially available drives. Compared to traditional 2.5" SSDs, the ioScale provides significant space savings, as you would need several 2.5" SSDs to build a 3.2TB array. The ioScale doesn't need RAID for parity as there is built-in redundancy, similar to SandForce's RAISE (some of the NAND is reserved for parity data, so the data can be rebuilt even if one or more NAND dies fail). 
    The ioScale is all MLC NAND based, although Fusion-io couldn't specify the process node or manufacturer because they source their NAND from multiple manufacturers (which makes sense given the volume Fusion-io requires). Different grades of MLC are also used, but Fusion-io promises that all their SSDs will meet the specifications regardless of the underlying components.
    The same applies to the controller: Fusion-io uses multiple controller vendors, so they couldn't specify the exact controller used in the ioScale. One of the reasons is extremely short design intervals, because the market and technology are evolving very quickly. Most of Fusion-io's drives are sold to huge data companies or governments, who are obviously very deeply involved in the design of the drives and also do their own validation/testing, so it makes sense to provide a variety of slightly different drives. In the past I've seen at least Xilinx FPGAs used in Fusion-io's products, so it's quite likely that the company stuck with something similar for the ioScale.
    What's rather surprising is that the ioScale is a single-controller design, even at up to 3.2TB. Usually such high-capacity drives use a RAID approach, where multiple controllers are put behind a RAID controller to make the drive appear as a single volume. There are benefits to that approach too, but using a single controller often results in lower latencies (no added overhead from the RAID controller), lower prices (fewer components needed) and a smaller footprint. 
    The ioScale has previously been available only to clients buying in big volumes (think tens of thousands of units), but starting today it will be available in minimum order quantities of 100 units. Pricing starts at $3.89 per GB, which puts the 400GB model at $1,556. For Open Compute platforms, Fusion-io is offering an immediate 30% discount, which puts the ioScale at just $2.72/GB. For comparison, a 400GB Intel SSD 910 currently retails at $2,134, so the ioScale is rather competitive on price, which is one of Fusion-io's main goals. Volume discounts obviously play a major role, so the quoted prices are just a starting point.
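     
    As a quick sanity check on those figures, the quoted prices are internally consistent; a small sketch using only the numbers mentioned above (actual volume pricing will differ):
     
        #include <stdio.h>
        
        int main(void)
        {
            const double list_price_per_gb = 3.89;     /* starting price quoted above */
            const double open_compute_discount = 0.30; /* 30% discount for Open Compute platforms */
            const double base_capacity_gb = 400.0;     /* smallest ioScale capacity */
        
            printf("400GB ioScale at list price: $%.0f\n", base_capacity_gb * list_price_per_gb);          /* ~$1,556 */
            printf("Open Compute price per GB:   $%.2f\n", list_price_per_gb * (1.0 - open_compute_discount)); /* ~$2.72 */
            return 0;
        }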






    More...

  10. RSS Bot FEED
    Join Date: 09-07-07
    Posts: 34,802
    #2580

    Anandtech: Lenovo Introduces Rugged Chromebook Aimed at K-12

    Google's Chromebook initiative hasn't really caught on as well as their other OS of choice, Android, but with the latest updates and reduced pricing there's still life in the initiative. Acer's C7, for instance, is apparently the fastest selling "laptop" on Amazon.com, no doubt helped by the $199 price point. Today Lenovo is joining the Chromebook ranks with their ThinkPad X131e, which takes a different approach.
    Unlike the other Chromebooks to date, Lenovo is specifically touting the ruggedness of the X131e as a major selling point, highlighting the benefits such a laptop can offer to educational K-12 institutions. The X131e Chromebook is "built to last with rubber bumpers around the top cover and stronger corners to protect the Chromebook against wear and tear." The hinges are also rated to last more than 50K open/close cycles.
    Other specifications include an 11.6" 1366x768 anti-glare LCD, a low-light webcam, HDMI and VGA ports, and three USB ports (2x USB 3.0, 1x USB 2.0). Battery life is stated as 6.5 hours, which should be sufficient for the entire school day. The X131e weighs just under four pounds (3.92 lbs / 1.78 kg) with the 6-cell battery and measures 1.27" (32.2mm) thick. Storage consists of a 16GB SSD, and the X131e comes with 4GB of DDR3-1600. Lenovo does not state the specific processor being used, merely listing it as "latest generation Intel", which presumably means an Atom CPU, though Celeron or Pentium are certainly possible. Customization options including colors, asset tagging, and school logo etching are also available.
    Besides the rugged build quality, Lenovo cites other advantages of Chrome OS for the K-12 environment. There's built-in protection since all apps are curated through the Google Play store, and Lenovo's Chromebook allows IT teams to manage security and scalability through a management console, where they can configure, assign, and manage devices from a single interface.
    The ThinkPad X131e Chromebook will be available starting February 26th via special bid volume pricing starting at $429. That's certainly higher than other options, but for a laptop that can actually withstand the rigors of the K-12 environment that's not too bad.






    More...
