
Thread: Anandtech News

  1. RSS Bot FEED
    #5101

    Anandtech: Plextor M7e PCIe SSD to Ship in Q3, M7V TLC SSD in 2016 & New Software Features

    Plextor first showed off the M7e at CES earlier this year, and at Computex we got an update on the release schedule. Plextor is now aiming for a Q3 release, meaning that we will likely hear about the final release at Flash Memory Summit in August. Specifications have not really changed: the M7e still utilizes the same Marvell PCIe 2.0 x4 AHCI controller, with performance rated at up to 1.4GB/s read and 1GB/s write, as well as up to 125K random read and 140K random write IOPS. The M7e will be available in both M.2 and PCIe card form factors with capacities ranging from 256GB to 1TB, so it may very well be the first M.2 2280 drive to break the 1TB barrier.
    Regarding the TLC drive M6V (or M7V, as Plextor now calls it), Plextor is taking its time to fine-tune the firmware to squeeze every megabyte of performance out of the drive and, more importantly, to ensure high reliability and endurance. Plextor told me that its firmware can boost the endurance to 2,000 P/E cycles with 15nm TLC, so if the claim holds true then I'm fine with Plextor taking a little longer and pushing the release to 2016.
    On the software side, Plextor actually had three new items to show. The first one is an updated PlexTurbo, which now carries version number 3 and increases the maximum cache size to 16GB. The cache size is now user adjustable, and multiple disks are supported, so one can decide which Plextor SSD to speed up with PlexTurbo.
    The first truly new addition to Plextor's software suite is PlexVault, which creates a hidden partition for storing sensitive data. The partition is completely hidden and isn't even visible in Disk Management, so other users won't know that such a hidden partition exists. Accessing the partition works through a hotkey, although a password can also be set to protect the hidden partition from accidental access. I'm not sure how useful the feature really is, but I guess it creates another layer of security for NSFW (not safe for the wife) content for those who may need it.
    The final piece of new software is PlexCompressor, which is an automated compression utility. If a file is not accessed for 30 days, PlexCompressor will automatically compress it to increase free space. The file is then decompressed when accessed, which obviously takes back a bit of the free space since the file will again be stored in uncompressed form for another 30 days. The compression is transparent to the user and is done fully in software (i.e. by the CPU), so it's not SandForce-like hardware compression. There is no impact on SSD performance, although as compression consumes some CPU cycles there may be an impact on CPU-heavy workloads and especially on battery life. Of the three pieces of software Plextor showed, I think PlexCompressor is the most potent because it results in concrete extra free space for the end user, and with SSD prices still being relatively high (compared to HDDs), it makes sense to get the most out of the storage one has.
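    Plextor hasn't published how PlexCompressor is implemented, but the behavior described above amounts to a simple age-based policy. Here is a minimal Python sketch of that idea; the 30-day threshold comes from the article, while the directory walk, the gzip format and the function names are my own illustrative assumptions.

    Code (Python):

        import gzip
        import os
        import shutil
        import time

        IDLE_DAYS = 30  # threshold from the article; PlexCompressor's internals are unknown

        def compress_idle_files(root: str) -> None:
            """Compress files not accessed for IDLE_DAYS (gzip is an illustrative choice)."""
            cutoff = time.time() - IDLE_DAYS * 86400
            for dirpath, _, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    if name.endswith(".gz") or os.path.getatime(path) > cutoff:
                        continue  # recently accessed, or already compressed
                    with open(path, "rb") as src, gzip.open(path + ".gz", "wb") as dst:
                        shutil.copyfileobj(src, dst)
                    os.remove(path)

        def read_file(path: str) -> bytes:
            """Transparently restore the uncompressed copy on access, as the utility does."""
            gz_path = path + ".gz"
            if os.path.exists(gz_path):
                with gzip.open(gz_path, "rb") as src, open(path, "wb") as dst:
                    shutil.copyfileobj(src, dst)  # the file now counts as accessed again
                os.remove(gz_path)
            with open(path, "rb") as f:
                return f.read()

    The real utility presumably hooks file access at the filesystem level rather than renaming files, but the trade-off is the same: idle data shrinks, and touching it again costs one decompression.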


    More...

  2. RSS Bot FEED
    #5102

    Anandtech: Synology Launches RC18015xs+ / RXD1215sas High-Availability Cluster Solution

    Synology is no stranger to high-availability (HA) systems. Synology High Availability is touted as one of the features that differentiate Synology's NAS units from other vendors' for small business and enterprise usage. Put simply, Synology HA allows two NAS units (of the same model) to be connected to each other directly through their LAN ports, while also being connected to the main network through their other LAN ports. One of the NAS units is designated as the active unit, while the other passively tracks updates made to that unit. In case of any failure in the active unit, the other one can seamlessly take over without any downtime.
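    Synology's implementation is proprietary, but the active/passive pattern described above boils down to state replication plus a heartbeat over the direct link. The Python sketch below shows the shape of the idea; the interval, timeout and class names are illustrative assumptions, not Synology's actual design.

    Code (Python):

        import time

        HEARTBEAT_INTERVAL = 1.0  # seconds; illustrative, not Synology's real value
        FAILOVER_TIMEOUT = 3.0    # declare the active unit dead after ~3 missed beats

        class PassiveNode:
            """Mirrors the active unit over the dedicated link; takes over on silence."""

            def __init__(self) -> None:
                self.last_heartbeat = time.monotonic()
                self.active = False

            def on_heartbeat(self, replicated_update: bytes) -> None:
                # The passive unit applies every update the active unit makes,
                # so its state is already current when a failover happens.
                self.apply(replicated_update)
                self.last_heartbeat = time.monotonic()

            def monitor(self) -> None:
                while not self.active:
                    if time.monotonic() - self.last_heartbeat > FAILOVER_TIMEOUT:
                        self.promote()  # clients keep talking to the same address
                    time.sleep(HEARTBEAT_INTERVAL)

            def apply(self, update: bytes) -> None:
                ...  # write the replicated change to local storage

            def promote(self) -> None:
                self.active = True  # assume the cluster's service IP and start serving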
    Synology is now extending this concept to a high-availability cluster. The products being introduced today are the RackStation RC18015xs+ compute node and the 12-bay RXD1215sas expansion unit.
    Unlike Synology's traditional RackStation products, the compute node doesn't come with storage bays. It is just a 1U server sporting a Xeon E3-1230 v2 (4C / 8T Ivy Bridge running at 3.3 GHz) CPU. The specifications of the RC18015xs+ are provided below.
    The PCIe 3.0 x8 slot allows for installation of 10 GbE adapters, if required. The compute node is priced at $4000. The expansion unit comes with the following specifications, and it is priced at $3500.
    In order to set up a high-availability cluster, two compute nodes and at least one expansion unit are needed (as shown in the diagram on top). The operation of the cluster and its high-availability features are similar to Synology HA. Performance numbers are on the order of 2,300 MBps and 330K IOPS using dual 10G adapters. All DSM (v5.2) features such as SSD caching and virtualization certifications are available. High availability is also ensured through redundancy of hardware components (PSUs / SAS connectors / fans etc.).
    The other important aspect of today's announcement is the usage of btrfs for the file system. As of now, the only COTS NAS units with btrfs support in this market segment have been those from Netgear and Thecus. So, it is heartening to see Synology also adopting it. btrfs brings along many advantages, including snapshots with minimal overhead and protection against bit-rot. The unfortunate aspect is that it is currently only available in this high-availability cluster solution. We hope it becomes an option for other NAS models soon.
    Coming to the pricing aspect, consumers need to buy two compute nodes and one expansion unit at the minimum, bringing the cost of a diskless configuration to $11,500. This is pretty steep, considering that Quanta's cluster-in-a-box solutions (with similar computing performance) can be had along with Windows Server licenses for around half the price. Synology's products have always carried a premium (deservedly so, for the ease of setup and maintenance), so the pricing strategy here is not a surprise.



    More...

  3. RSS Bot FEED
    #5103

    Anandtech: Nantero Exits Stealth: Using Carbon Nanotubes for Non-Volatile Memory with

    The race for next-generation non-volatile memory technology is already on at full throttle. We covered Crossbar's ReRAM announcement last year, and last week a very exciting company with a different non-volatile technology exited stealth mode and shed light on its technology and commercialization plans. The company is called Nantero, and it has been developing its NRAM technology for well over a decade now.

    Before we talk about the technology itself, let's briefly discuss the company and its key persons, as Nantero is probably an unfamiliar name to many (it was for me, at least). The company was founded by Greg Schmergel, Dr. Tom Rueckes and Dr. Brent M. Segal in 2001. Mr. Schmergel and Dr. Rueckes are both still with the company and serve as CEO and CTO respectively, but Dr. Segal left the company in 2008 as part of the acquisition of Nantero's Government Business Unit by Lockheed Martin. Mr. Schmergel is a renowned serial entrepreneur who founded ExpertCentral, which was later acquired by About.com, where Mr. Schmergel served as a Senior Vice President before co-founding Nantero. While Mr. Schmergel brings valuable business expertise to the company, the technology comes from Dr. Tom Rueckes, who holds a Harvard Ph.D. in chemistry and is the inventor of the NRAM technology.
    The Board of Directors includes several semiconductor industry veterans. Mr. Lai was one of the leading developers of NAND technology at Intel and also led Intel's Phase Change Memory (PCM) team. Dr. Makimoto is a former Chief Technologist of Sony and Hitachi, and Mr. Scalise is the former President of the Semiconductor Industry Association (SIA) who also served briefly as an Executive Vice President at Apple in the late 90s. Mr. Raam may also be a familiar name to some, since he is the former CEO of SandForce (the SSD controller company), which is now owned by Seagate.
    The Technology

    It goes without saying that Nantero is packed with semiconductor experience and know-how, but its technology is no less interesting. NRAM is made out of carbon nanotubes, which are among the strongest materials known to man and provide far better thermal and electrical conductivity than virtually any other known material.
    The way NRAM works is in fact relatively simple. Essentially there are two nanotubes, which have low resistance when in physical contact and high resistance when separated. The amount of resistance then determines whether the cell is considered to be programmed as '0' or '1'. The program operation (or "SET" as Nantero calls it) works by applying a voltage to one of the nanotubes, which then attracts the other nanotube and creates a bond. The SET operation is very fast, taking only ~20 nanoseconds, which is on par with DRAM latency. The bond is kept intact by van der Waals interactions and is practically immortal, with data retention of over ten years even at 300°C. In an erase operation (or RESET as Nantero calls it), the voltage is simply applied in the other direction, which "heats up" (given the scale, it's more like vibration) the nanotube contacts and causes them to separate. Given that carbon nanotubes are among the strongest materials in the world, the write/erase endurance is practically infinite: an independent university study has shown Nantero's NRAM technology to withstand over 10^11 P/E cycles (for reference, 10^11 translates to 100 billion).
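    To make the read/SET/RESET logic above concrete, here is a toy Python model of a single cell. Only the contact-equals-low-resistance behavior and the two operations come from the article; the resistance values and the mapping of low resistance to '1' are invented for illustration.

    Code (Python):

        class NRAMCell:
            """Toy model: two nanotubes whose contact state encodes the bit."""

            LOW_OHMS = 1e3    # tubes touching: conductive path (value is made up)
            HIGH_OHMS = 1e9   # tubes separated (value is made up)

            def __init__(self) -> None:
                self.in_contact = False  # start erased

            def set(self) -> None:
                """SET (~20 ns): a voltage pulls the tubes together until van der
                Waals forces hold them bonded, even with power removed."""
                self.in_contact = True

            def reset(self) -> None:
                """RESET: a reverse-direction voltage vibrates the junction apart."""
                self.in_contact = False

            def read(self) -> int:
                resistance = self.LOW_OHMS if self.in_contact else self.HIGH_OHMS
                return 1 if resistance < 1e6 else 0  # low resistance read as '1' here

        cell = NRAMCell()
        cell.set()
        assert cell.read() == 1  # non-volatile: holds until an explicit RESET
        cell.reset()
        assert cell.read() == 0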
    The other great news is that carbon nanotubes are extremely small. One nanotube can have a diameter of only 2nm, and the pitch between the two nanotubes in the off state can be an even tinier 1nm, so the technology has the potential to scale below 5nm. NRAM can also scale vertically, i.e. go 3D, and since the cell structure and manufacturing process are both quite simple, 3D stacking should, in theory, be much easier than it is for today's 3D NAND, with no need for high aspect ratio etching, for example.
    The Manufacturing Process

    The process of making an NRAM wafer starts by taking a normal CMOS wafer with the usual cell select and array line circuitry, which is then spin coated with carbon nanotubes. Carbon nanotubes are grown from iron, which would normally contaminate a clean room, so Nantero had to develop a patented process that creates 'pure' carbon nanotubes with fewer than one in a billion particles being foreign (the standard for the highest quality clean rooms). Nantero has worked hard over the past two years to bring the cost of carbon nanotubes down, and the company currently says that the nanotubes have a negligible impact on chip cost, meaning that making NRAM isn't inherently more expensive than any other semiconductor.
    Top-down SEM of NRAM
    With the nanotubes on the wafer, the top electrode is deposited on top of the nanotubes, followed by the photoresist, which is then patterned using a single mask. Finally the wafer is etched to cut the nanotubes into smaller pieces (i.e. more memory cells) and that’s it in a nutshell. Obviously there are other general semiconductor processing steps involved, but those are the same for all memory technologies, so the fundamental process of manufacturing NRAM isn’t that complex. All that is needed is a normal CMOS fab because the NRAM process requires no special or additional tools.
    Fortunately, NRAM isn't just a technology that exists on paper. Nantero's NRAM process has already been installed in seven production CMOS fabs ranging from 20nm to 250nm, and mass production has been taking place for several years now, although only in small, few-megabit capacities. As a matter of fact, Nantero completed a successful space test with NASA on Space Shuttle Atlantis back in 2009, in which NRAM operated without any shielding throughout the trip and without any errors despite the intense radiation, because, as I mentioned earlier, the nanotube bonds are practically unbreakable and are not affected by heat, magnetism, radiation and the like.
    Nantero’s Business Plan: Bringing NRAM to Everyone

    Because Nantero is an IP licensing company, it relies solely on its partners for production. It's a logical strategy, because a decent-sized fab requires an investment on the order of billions of dollars, and in the end the company would have to compete against Intel, Samsung and the rest of the semiconductor giants. Actual end products will be sold under the manufacturer's brand (e.g. Intel), so you won't see any Nantero-branded products on the market.
    Nantero isn't disclosing any of its partners at this point as most of them are still developing products that have the potential for higher volume production. While Nantero has its own chip team that is developing high capacity (several gigabits) dies, every partner is also doing its own work to implement NRAM at a larger scale, which makes sense given that the big semiconductor companies have far more resources and are familiar with high capacity memory devices.
    Aside from semiconductor companies, Nantero has also partnered with several more consumer-facing companies to develop concepts and products around NRAM technology. Since NRAM provides the same level of performance as DRAM but is non-volatile, NRAM could open the doors for products that aren't achievable (at least properly) with today's NAND and DRAM technology. As examples Nantero mentions 3D smartphones and commercial 3D printers (although to be frank both already exist to some extent), but practically anything that's handicapped by IO performance and volatility can be fixed with NRAM in the future.
    Since it will take several years before NRAM is even close to modern NAND capacities, Nantero has a three-step strategy for bringing NRAM to the market. In the first step, Nantero is simply offering a class of memory (both standalone and embedded) that has DRAM's performance characteristics and NAND's non-volatility. Technically that means NRAM is competing against current MRAM and ReRAM products for a specialized niche market that really needs high performance and non-volatility. The consumer market is obviously not one of those, and even for the enterprise NRAM is likely too small in capacity and too expensive, but industrial and especially space/military applications should benefit from NRAM despite the high initial cost.
    The next step is to grow NRAM to gigabit-class capacities and offer a non-volatile alternative to DRAM. Going to gigabit-class certainly opens the doors for NRAM as a mainstream memory, because it could be used for a variety of caching applications that benefit from non-volatility (SSDs with their DRAM caches for the NAND mapping table are a prime example). The tape-out of the first gigabit NRAM wafers is still about 18 months away, so I would expect to see something shipping perhaps in late 2017 or 2018.
    The final step, of course, is a terabit-class die to replace NAND (FYI, Samsung is projecting 1Tbit NAND die in 2017). Achieving that requires work on both lithography scaling and 3D integration technologies because such a high capacity die is only economical with either multiple layers or advanced lithography, or both.
    NRAM also has the potential to operate in MLC mode for further density improvements, but for now Nantero is focusing on scaling NRAM down and adding layers through 3D to increase density. Once the work on those two is done and has been implemented to a production fab, Nantero will start commercializing NRAM MLC technology, but that is likely at least several years away.
    Final Words

    The announcement is intriguing, to say the least. From a technology standpoint NRAM sounds very exciting, because it effectively brings us non-volatile DRAM performance, and better yet, the cell design is scalable, whereas DRAM has major struggles going below 20nm. I like the fact that Nantero has decided to go with an IP licensing model, because it means that NRAM is a technology available to everyone. The reason why DRAM and NAND are where they are today is that there are multiple companies producing them, resulting in competition with billions of R&D dollars.
    I wonder if any of the big semiconductor companies have partnered with Nantero yet. Most of them have been tight-lipped about their post-NAND plans, but maybe Nantero's announcement will, sooner rather than later, force the companies to talk about their strategies. Obviously a lot depends on how far 3D NAND can efficiently scale, but from what I have heard, the transition to next-generation memory technologies should begin around 2020. The future of memory isn't here yet, but it's certainly getting closer, and it will be interesting to see which technology ends up taking the crown.


    More...

  4. RSS Bot FEED
    #5104

    Anandtech: Oculus Rift Controllers, VR Games, And Software Features Announced

    On the eve of E3, Oculus held a livestream to announce some more details of the upcoming Oculus Rift Virtual Reality headset. Just about a month ago, they announced that they were targeting a Q1 2016 release, and with that time fast approaching, they have given some more details on the unit itself, as well as what kind of experiences you can expect with it. Oculus has re-affirmed the Q1'16 launch date, and now we finally know the specs for the retail consumer unit.
    One of the key points they brought up was that the unit itself needs to be comfortable, and part of that comfort is weight. January seems like a long time ago when I got to try out the Crescent Bay version of the Rift, but at the time I was impressed with how it felt, and I don’t recall the weight at all which I guess is the point. Audio is also a big part of the experience, and the included headphones were quite good, but today they said that you will be able to wear your own headphones as well if you prefer that. The directional audio is a key piece to the immersion and the Oculus team has done a great job with that aspect.
    Another part, though, is the displays. When we met with Oculus CEO Brendan Iribe at CES, one of the interesting things he told us was that they have found that interleaving a black frame between video frames can prevent ghosting. In order to do this, though, the refresh rate needs to be pretty high, with the unit we tested running at 90 Hz. Today they announced a tiny bit about the hardware: the Oculus Rift will ship with two low-persistence OLED panels. The OLED panels sit behind optical lenses which help the user focus on a screen so close to the eye without eye strain, and here the interpupillary distance is important. There will be an adjustment dial that you can tweak to make the Rift work best for you.
    Tracking of your head movement is done with the help of an IR LED constellation tracking system, unlike the HoloLens, which does all of the tracking itself with its own cameras. This makes installation a bit more difficult, but it should be more precise and it reduces the overall weight of the head unit.
    For those that wear glasses, the company has improved the design to better allow for glasses, and they also make it easy to replace the foam surrounding the headset.
    One thing that was still unknown was what kind of control mechanism Oculus was going to employ. In the demos I did at CES there was no interaction; you were basically a bystander. Oculus announced today that every Rift will ship with an Xbox One wireless controller and the just-announced wireless adapter for Windows. This is a mutually beneficial agreement to say the least, with Microsoft getting in on the VR action and Oculus getting access to a mature controller design. Oculus even stated that the controller is going to be the best way to play a lot of VR games. However, they also announced their own controller, for a new genre of VR games, to give an even more immersive experience.
    Oculus Touch is the name of the new controller system that Oculus has come up with. Each controller has a traditional analog thumbstick, two buttons, an analog trigger, and a "hand trigger" input mechanism. The two controllers are mirror images of each other, with one for each hand. They are wireless, and they use the same IR LED constellation system so that they can be tracked in space. The controllers will also offer haptic feedback so that they can be used to simulate real-world touch experiences. They also detect some finger poses (not full finger tracking) in order to perform whatever task is assigned to that pose. These should be pretty cool, and I can't wait to try them out.

    Hardware is certainly part of the story, but software is going to be possibly an even bigger part. The Rift needs to launch with quality games, and it looks like Oculus has some developers on board with EVE: Valkyrie, Chronos, and Edge of Nowhere being some of the featured games.
    They also showed off their 2D homescreen which they are projecting into the 3D rift world. There will be easy access to social networks and of course multiplayer gaming in virtual reality.
    In addition to the Xbox controller, Oculus has also worked with Microsoft to enable the upcoming Xbox game streaming into the Rift, so that you can be fully immersed. This will not magically turn Xbox games into 3D VR worlds; instead it will project the Xbox game onto a big 2D screen inside the Rift and block out all distractions.
    I’ve been a bit of a VR skeptic, but my time with the Rift was pretty cool. I can see a lot of applications for this outside of gaming, but of course gaming is going to be a big part of VR and Oculus looks to be lining up a pretty nice looking launch. A big part is going to be quality titles for the Rift and Oculus is working hard on that aspect. The hardware is now pretty polished.
    Source: Oculus


    More...

  5. RSS Bot FEED
    #5105

    Anandtech: JMicron SSD Controller Roadmap: JMF680 SATA 6Gbps & JMF815 PCIe Controller

    JMicron is getting ready to ship its new JMF670H controller to its customers, and we also have reference design samples in for testing, but in its suite at Computex JMicron shed light on its plans for future controllers. We stopped by JMicron last year as well, and the plans have since changed a bit.
    JMicron is already working on the successor to the JMF670H, which will simply be called the JMF680. That's still a SATA 6Gbps design, but it will bring support for TLC NAND thanks to what JMicron calls 'advanced ECC'. JMicron is confident that its ECC implementation will be competitive against the LDPC engines of its competitors. Ultimately, I believe that LDPC is more of a marketing term at this point, because everyone's ECC algorithms and implementations are slightly different anyway, but the market is associating strong ECC and TLC enablement with LDPC.
    Another new feature in the JMF680 is increased capacity support, which will go up to 2TB. That is thanks to an updated (and larger) DRAM controller, which can now address up to 2GB, as modern drives typically need about 1MB of DRAM cache per 1GB of NAND. The four NAND channels also get an upgrade to the Toggle 3.0 and ONFi 4.0 standards to support upcoming NAND dies with faster interfaces. The JMF680 also supports Write Booster, JMicron's SLC caching feature that debuts in the JMF670H (more on that in our upcoming JMF670H review).
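    The 1MB-per-1GB rule of thumb above maps directly onto the 2GB DRAM ceiling. As a quick back-of-the-envelope check (the 4-bytes-per-4KB-page rationale in the comment is the usual explanation for this ratio, not something JMicron stated):

    Code (Python):

        # ~1 MB of DRAM per 1 GB of NAND: the flash translation layer keeps
        # roughly a 4-byte pointer per 4 KB page, i.e. a 1:1024 ratio.

        def dram_cache_mb(nand_gb: int) -> int:
            return nand_gb  # 1 MB per GB of NAND

        for capacity_gb in (512, 1024, 2048):
            print(f"{capacity_gb} GB of NAND -> ~{dram_cache_mb(capacity_gb)} MB of DRAM")
        # 2048 GB (2 TB) -> ~2048 MB (2 GB), which is exactly the JMF680's new limit.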
    On the PCIe side, JMicron has canceled the JMF810 and JMF811 controllers and will now focus solely on the JMF815. JMicron made the decision to concentrate on the value segment, and thus the JMF815 is a PCIe 3.0 x2 design with four NAND channels (no NVMe, unfortunately). A four-lane design would have required moving to a 28nm process node, which would have increased the cost substantially, and the packaging would have had to move from BGA to FCBGA (used by e.g. Phison and SandForce in their upcoming PCIe controllers), which would further increase the cost. I think it's a good play from JMicron to focus on a segment that isn't as crowded, because right now everyone is focusing solely on performance with PCIe, but ultimately cost and power consumption will be major factors in widespread adoption, and JMicron should have an advantage there if the JMF815 is executed well.
    First engineering samples of the JMF680 and JMF815 are expected to be ready in Q4'15 with first retail products entering the market in early 2016.
    One of the trends I saw at Computex was the move towards DRAM-less SSD controllers. The JMF608 has been relatively popular in China given its ultra-low cost, and its successor, the JMF60F, will be available within the next few months. It features an improved ECC engine and larger capacity support, as well as new, cheaper QFN packaging. Following this trend, I wouldn't be surprised if JMicron also has plans for DRAM-less versions of the JMF680 and JMF815.
    All in all, JMicron has a pretty solid roadmap for 2016. It's not aiming to be the performance leader, but to offer cost-efficient designs for the value segment. We will have to wait and see how JMicron executes its PCIe controller, but in the meantime stay tuned for our JMF670H review, which will be up in the coming weeks!


    More...

  6. RSS Bot FEED
    #5106

    Anandtech: LIFX White 800 Smart Bulb Review

    The Internet of Things (IoT) revolution has sparked increased interest in home automation, and lighting is one of its major aspects. LIFX is one of the popular crowdfunded companies in this space to have come out with a successful product. The success of their multi-colored LED bulbs brought venture capital funding, allowing them to introduce a new product in their lineup: the White 800. In this review, we take a look at the White 800 platform and our usage experience.

    More...

  7. RSS Bot FEED
    #5107

    Anandtech: NVIDIA Acquires Game Porting Group & Tech From Transgaming

    While NVIDIA's core businesses and gaming have been inseparable since the start, it's only relatively recently that NVIDIA has become heavily involved in game creation itself, rather than just supplying the hardware that games are played on. The launch of the company's Tegra ARM SoCs, their SHIELD product lineup, and the overall poor state of the Android gaming market have led the company to invest rather significantly in bringing higher-quality games over to SHIELD and Android devices. This has culminated in NVIDIA paying for the Android ports of a number of games, some of the most famous being the Android ports of Valve's Half-Life 2 and Portal.
    Meanwhile, with the launch of the SHIELD Android TV, NVIDIA is essentially doubling down on Android gaming as part of their efforts to become the premier Android TV set top box. And now, as part of those efforts, the company has announced that they are acquiring the Graphics & Portability Group (GPG) from game tool developer Transgaming.
    Transgaming is best known for their work developing Cider, a WINE-derived Windows compatibility layer used to quickly port Windows games over to OS X. With the rise of Apple's fortunes and the Mac's move to x86, Transgaming has been responsible for either directly porting a number of Windows games over to OS X or supplying Cider to developers to do so. However, in a blink-and-you'll-miss-it moment back in March of this year, the company announced that they were also going to use their technology and expertise to port games to other architectures, partnering with NVIDIA to bring Metal Gear Rising: Revengeance to the SHIELD Android TV.

    NVIDIA's SHIELD Console: The Reason For The Acquisition
    Now, just 3 months later, NVIDIA is acquiring the GPG outright from Transgaming. The acquisition will see the group open a new office in Toronto, while structurally it is folded into NVIDIA's GameWorks division. And although NVIDIA doesn't state precisely what they intend to do with the group and its technology beyond the fact that the "acquisition will enrich our GameWorks effort," it's a safe bet that NVIDIA intends to do more game ports for their SHIELD devices. Given their existing (if short) relationship, the acquisition is not too surprising; however, it is a bit interesting, since the bulk of the group's experience is in porting games among different x86 OSes, not porting games to new architectures entirely.
    As for Transgaming, having sold the GPG to NVIDIA, the company has retained their SwiftShader (software 3D rendering) technology and their GameTree TV business. Transgaming has indicated that they are going to focus on providing apps for the Smart TV market, which they see as a greater growth opportunity than porting games.

    Games Published By Transgaming GPG On the Mac App Store
    Finally, while this acquisition will undoubtedly be a big deal for NVIDIA's efforts to bring more major games to SHIELD, perhaps the more profound ramifications of this deal concern what it means for Mac gaming. Though NVIDIA doesn't definitively state what they will do with Cider, the fact that they have their own platform to worry about certainly gives pause for thought. A large number of games have received native Mac ports over the years, but Cider has still been used in everything from Metal Gear Solid to EVE Online. If Cider becomes unavailable to developers, this may cut down on the number of Windows games that get ported to OS X, especially those games whose marginal sales make a native port impractical. In any case, with this acquisition NVIDIA seems to have co-opted a lot of the technology and relationships behind Mac game porting, which should be a boon for their SHIELD platform.


    More...

  8. RSS Bot FEED
    #5108

    Anandtech: EVGA expands the SuperNOVA G2 PSU series

    As users become more and more aware of how PSUs operate and what the real energy requirements of their systems are, sales of high-wattage units are declining relative to mid-range units. Many manufacturers realize this, and they have begun marketing high-performance products of reasonable power output and pricing instead of focusing their efforts on high-output units. In that light, EVGA has expanded their very popular G2 PSU series downwards, adding 550W and 650W models to it.
    EVGA's G2 series is synonymous with an excellent balance between cost, quality and performance. We have seen its capabilities in our review of the 850W version; after all, there is good reason why the Super Flower Leadex platform is so popular. The new 550W and 650W models are physically smaller but share the same features, so it is very likely that they are based on a Super Flower platform as well.
    According to EVGA, the main features of the new 550 G2 and 650 G2 PSUs are:

    • 80 PLUS Gold certified, with 90% (115VAC) / 92% (220VAC~240VAC) efficiency or higher under typical loads (see the quick efficiency check after this list)
    • Highest quality Japanese brand capacitors ensure long-term reliability
    • Fully Modular to reduce clutter and improve airflow
    • NVIDIA SLI & AMD Crossfire Ready
    • Heavy-duty protections, including OVP (Over Voltage Protection), UVP (Under Voltage Protection), OCP (Over Current Protection), OPP (Over Power Protection), and SCP (Short Circuit Protection)
    • Ultra Quiet Fan with ECO Intelligent Thermal Control Fan system (Zero Fan Noise < 45°C)
    • Unbeatable 7 Year Warranty and unparalleled EVGA Customer Support.
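
    To put the 80 PLUS Gold figures in the list above into perspective, here is a quick sanity check of what they mean at the wall. The arithmetic is mine, not EVGA's; efficiency is simply DC output divided by AC input.

    Code (Python):

        # Wall draw and waste heat for a fully loaded 550 G2 at the article's
        # quoted efficiencies (90% on 115 VAC, 92% on 220-240 VAC).

        def wall_draw_w(dc_load_w: float, efficiency: float) -> float:
            return dc_load_w / efficiency

        for mains, eff in (("115 VAC", 0.90), ("230 VAC", 0.92)):
            ac = wall_draw_w(550.0, eff)
            print(f"550 W load on {mains}: {ac:.0f} W from the wall, {ac - 550:.0f} W as heat")
        # ~611 W drawn / ~61 W of heat on 115 VAC; ~598 W / ~48 W on 230 VAC.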

    We should note that both units are rated at 50°C and have a ridiculous number of connectors for their power output. Even the 550W version has three PCI Express connectors (two 8-pin and one 6-pin) and nine SATA connectors. Apparently, EVGA is very confident about the capabilities of their new units, or of their OCP, at least. Nevertheless, the 550W version should be able to easily power any system with a single CPU and a single GPU, with the possible exception of the extreme combination of an AMD FX-9590 and an R9 295X2.
    The new G2 series units are available as of the 12th of June.


    More...

  9. RSS Bot FEED
    #5109

    Anandtech: Comparing OpenGL ES To Metal On iOS Devices With GFXBench Metal

    In the past couple of years we've seen the creation of a number of new low level graphics APIs. Arguably the first major initiative was AMD's Mantle API, which promised to improve performance on any GPUs that used their Graphics Core Next (GCN) architecture. Microsoft followed suit in March of 2014 with the announcement of DirectX 12 at the 2014 Game Developers Conference. While both of these APIs promise to give developers more direct access to graphics hardware in the PC space, there was still no low level graphics API for mobile devices, with the exception of future Windows tablets. That changed in the middle of 2014 at WWDC, where Apple surprised a number of people by revealing a new low level graphics and compute API that developers could use on iOS. That API is called Metal.
    The need for a low level graphics API in the PC space has been fairly obvious for some time now. The level of abstraction in earlier versions of DirectX and OpenGL allows them to work with a wide variety of graphics hardware, but this comes with a significant amount of overhead. One of the biggest issues caused by this is reduced draw call throughput. A simple explanation of a draw call is that it is the command sent by the CPU which tells the GPU to render an object (or part of an object) in a frame. CPUs are already hard-pressed to keep up with high-end GPUs even with a low level API, and the increased overhead of a high level graphics API further reduces the amount that can be issued in a given period of time. This overhead mainly exists because most graphics APIs will do shader compilation and state validation (ensuring API use is valid) when a draw call is made, which takes up valuable CPU time that could be used to do other things like physics processing or drawing more objects.
    Because a draw call involves the CPU preparing materials to be rendered, developers can use tricks such as batching, which involves grouping together items of the same type to be rendered with a single draw call. Even this can present its own issues, such as objects not being culled when they are out of the frame. Another trick is instancing, which involves making a draw call for a single object that appears many times, and having the GPU duplicate it at various coordinates in the frame. Despite these tricks, the overhead of the graphics API, combined with the time it takes the CPU itself to issue a draw call, ultimately limits how many calls can be made. This reduces the number of unique objects developers can put on screen, as well as the amount of CPU time that is available to perform other tasks. Low level graphics APIs aim to address this by removing much of the overhead that exists in current graphics APIs.
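    A rough cost model makes the batching/instancing argument concrete. In the Python sketch below, every number is invented for illustration; the point is only that a fixed per-call CPU cost dominates when thousands of objects each get their own draw call, which is exactly the overhead a low level API attacks.

    Code (Python):

        CALL_OVERHEAD_US = 50.0  # hypothetical fixed CPU cost per draw call (validation etc.)
        PER_OBJECT_US = 0.5      # hypothetical CPU cost of appending one object's data

        def cpu_time_ms(objects: int, draw_calls: int) -> float:
            return (draw_calls * CALL_OVERHEAD_US + objects * PER_OBJECT_US) / 1000.0

        n = 10_000
        print(f"naive, one call per object: {cpu_time_ms(n, n):.1f} ms of CPU per frame")
        print(f"instanced, a single call:   {cpu_time_ms(n, 1):.1f} ms of CPU per frame")
        # At 60 fps the whole frame budget is ~16.7 ms, so the naive path alone blows
        # the budget; a lower-overhead API effectively shrinks CALL_OVERHEAD_US instead.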
    The question to ask is why Apple and iOS developers would need a low level graphics API for their mobile games. The answer ends up being the same as in the PC space. While the mobile space has seen tremendous improvements in both CPU and GPU processing power, the pace of CPU improvements is slowing compared to GPU improvements. In addition, the increases in GPU processing power have always been of a greater magnitude than the CPU increases. You can see this in the chart above, which shows the level of CPU and GPU performance of the iPad relative to the original model. Having CPU performance improve by a factor of twelve in less than five years is extremely impressive, yet it pales in comparison to the GPU performance which, in the case of the iPad Air 2, is 180 times faster than that of the original version.
    Because of this widening gap between CPU and GPU speeds, it appears that even mobile devices have begun to experience the issue of the GPU being able to draw things much faster than the CPU can issue commands to do so. Metal aims to address this issue by cutting through much of the abstraction that exists in OpenGL ES, and this is possible in part because of Apple's control over the hardware and software in their devices. Apple designs their own CPU architectures, and while they don't design the GPU architecture, it's clear they're free to do what they desire with the IP to create the GPUs they need.
    The other side of the discussion is compatibility. Much of the abstraction in higher level graphics APIs is done to support a wide variety of hardware. Low level graphics APIs often are not as portable or widely compatible as high level ones, and this is also true of Metal. The iOS Metal API currently only works on devices that use GPUs based on Imagination Technologies' Rogue architecture, which limits it to devices that use Apple's A7, A8, and A8X SoCs.
    This can pose a dilemma for developers, as programming only for Metal limits the number of users they can target with their application. The number of older iPads and iPhones still in use, as well as Apple's insistence on selling the original iPad Mini and iPod Touch which use their A5 SoC from 2011, can limit the market for games that use Metal. If I were to make a prediction, it would be that Metal's adoption among iOS developers will grow substantially in the next year or two, when devices that use the A5 and A6 chips are retired from sale.
    Kishonti Informatics, the developer of the GFXBench GPU benchmarking application, has released a new version of its benchmark. The new benchmark is called GFXBench Metal, and it's essentially the same benchmark as the normal GFXBench 3.0 / 3.1. The difference is that this version of the benchmark has been built to use Apple's Metal API rather than OpenGL ES. Although it's not one of the first Metal applications, it's one of the first benchmarks that can give some insight into what improvements developers and users can expect when games and other 3D applications are built using Metal rather than OpenGL ES.
    Before getting into the results, I did want to address one disparity that may be noticed about the non-Metal iPad Air 2 results. It appears that Apple has been making some driver optimizations for the A8X GPU in the iOS releases that have come out since our original review. Because of this, the iPad Air 2's performance in the OpenGL version of GFXBench 3.0 is noticeably improved over our original results. To avoid incorrectly characterizing the improvements that Metal brings to the table, all of the iPad tests for the OpenGL and Metal versions of the benchmark were re-run on iOS 8.3. Those are the results that are used here. Testing with the iPhone 5s and 6 revealed that there are no notable improvements to the performance of Apple A7 and A8 devices.
    GFXBench 3.0's driver overhead test is one we don't normally publish, but in this circumstance it's one of the most important tests to examine. What this test does is render a large number of very simple objects. While that sounds like an easy task, the test renders each object one by one, and issues a separate draw call for each. This is essentially the most inefficient way possible to render the scene, as the GPU will be limited by the draw call throughput of the CPU and the graphics API managing them.
    In this test, it's clear that Metal provides an enormous increase in performance. Even the lowest performance improvement for a device on Metal compared to OpenGL is still well over a 3x increase. While this test is obviously very artificial, it's an indication that Metal does indeed provide an enormous improvement in draw call throughput for developers to take advantage of.
    While the driver overhead test is an interesting way of looking at how Metal allows for more draw call throughput, it's important to look at how it performs with actual graphics tests that simulate the type of visuals you would see in a 3D game. In both the Manhattan and T-Rex HD parts of GFXBench we do see an improvement when using Metal instead of OpenGL ES, but the gains are not enormous. The iPad Air 2 shows the greatest improvement, with an 11% increase in frame rate in T-Rex HD, and an 8.5% increase in Manhattan.
    The relatively small improvements in these real world benchmarks illustrate an important point about Metal, which is that it is not a magic bullet to boost graphics performance. While there will definitely be small improvements due to general API efficiency and lower overhead, Metal's real purpose is to enable new levels of visual fidelity that were previously not possible on mobile devices. An example of this is the Epic Zen Garden application from Epic Games. The app renders at 1440x1080 with 4x MSAA on the iPad, and it displays 3500 animated butterflies on the screen at the same time. This scene has an average of 4000 draw calls per frame, which is well above what can currently be achieved with OpenGL ES on mobile hardware.
    I think that Metal and other low level graphics APIs have a bright future. The introduction of Metal on OS X can simplify the process of bringing games to both Apple's desktop and mobile platforms. In the mobile space, developers of the most complicated 3D applications and games will be eager to adopt Metal as they begin to hit the limits of what visuals can be accomplished under OpenGL ES. While there are titles like Modern Combat 5 which use both Metal and OpenGL ES depending on the device, that method of development prevents developers from using Metal's advantages to full effect, as they will not scale down to the OpenGL ES version. I cannot stress enough how much the continued sale of Apple A5 and A6 devices impedes the transition to using Metal only, and I hope that by the time Apple updates their product lines again those devices will be gone from sale, and eventually gone from use. Until that time, we'll probably see OpenGL ES continue to be used in most mobile game titles, with Metal serving as a glimpse of the mobile games that are yet to come.


    More...

  10. RSS Bot FEED
    #5110

    Anandtech: The Acer Aspire R 13 Review: Convertible Notebook With A Twist

    The world of the convertible notebook has come a long way in just a couple of years, but we seem to have settled on two basic types of convertible devices. There are tablet-style devices where the display can be removed from the keyboard and used separately, and there are notebook-style devices where the keyboard can be rotated around and under the display so the device can act like a tablet. Acer has decided to try something different with the Aspire R 13, which features their Ezel Aero hinge.


    More...
