
Thread: Anandtech News

  1. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5911

    Anandtech: The MSI Z170A SLI PLUS Review: Redefining the Base Line at $130

    MSI’s motherboard range seems to expand every generation. Alongside the channel range, there’s MSI Gaming, micro-ATX Gaming, OC Certified, Krait, ECO, SLI PLUS, PC Mate and a few others I’m probably forgetting. Each line can span a mix of chipsets, depending on its target market. The SLI PLUS line is relatively new, with the Z170A SLI PLUS in this review being the latest model. The goal of the SLI PLUS is form, function and application at a low price, with a few future-proof features and enough hardware for most PC enthusiasts’ systems. They seem to sell well, so we got a sample in to see what the fuss is about. Two-word verdict: pleasantly surprised. Read on to see why.

    More...

  2. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5912

    Anandtech: USB-C Authentication Tech to Restrict Usage of Uncertified USB-C Accessories and Cables

    The USB Implementers Forum has announced a new addition to the USB Type-C specification, designed to restrict the use of uncertified or potentially malicious accessories with reversible USB-C connectors. The USB Power Delivery 3.0 specification contains a special extension, the USB Type-C Authentication specification, which promises to help host devices identify chargers, cables, storage solutions and hosts before making connections. However, to take advantage of the tech, new devices will be needed.
    USB interconnections are expected to get more popular than ever thanks to the convenience of reversible USB Type-C, its ability to deliver up to 100 W of power, and its support for custom features. However, the expanded functionality requires more sophisticated cables with multiple wires and special ID chips, which are more expensive to make than traditional USB cables. As it has turned out in recent months, many cheap cables are not compliant with the USB-IF’s requirements: they either do not support high data rates, cannot charge USB-C devices, or may even damage the products they are connected to. USB authentication promises to end such frustrations and, as an added bonus, make future USB-C devices a little more secure.
    Devices compliant with USB PD 3.0’s USB-C authentication tech will be able to verify the capabilities of accessories that implement the authentication technology, and whether or not those accessories have been certified by the USB-IF. The verification information will be exchanged right after devices are connected, before any data or energy is transferred. The USB-IF will make it possible to set up policies that restrict the use of incompatible or uncertified accessories with particular host devices.
    The USB-C authentication scheme divides accessories into three types: USB devices, USB power delivery devices (e.g., chargers) and USB Type-C alternate mode devices (e.g., displays). The authentication messages will be transmitted over different communication paths (USB bus, USB PD or mixed) and will be encrypted using 128-bit methods.
    USB Type-C Authentication Cryptographic Methods
    Method | Use
    Framework (ITU X.509), OIDs (ITU-T X.402), DER encoding (ITU-T X.690) | Certificate format
    ECDSA (ANSI X9.62) using the NIST P-256 curve (NIST FIPS 186-4) | Digital signing of certificates and authentication messages
    SHA-256 (NIST FIPS 180-4) | Hash algorithm
    NIST-compliant PRNG source (SP 800-90A) seeded with a 256-bit full-entropy value (SP 800-90B) | Random numbers
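    Two of the table's rows can be illustrated with Python's standard library. This is only a sketch of the primitives involved, not the actual protocol: the ECDSA signing step requires a dedicated crypto library and is omitted, and the message layout here is invented for illustration.

```python
import hashlib
import secrets

def make_challenge() -> bytes:
    """256-bit random value, standing in for the spec's NIST SP 800-90A
    DRBG seeded with 256 bits of full entropy (here: the OS CSPRNG)."""
    return secrets.token_bytes(32)

def digest_message(msg: bytes) -> bytes:
    """SHA-256, the hash algorithm named by the specification."""
    return hashlib.sha256(msg).digest()

# Hash an illustrative authentication message (the b"GETCERT" prefix
# is hypothetical, not a real protocol message)
digest = digest_message(b"GETCERT" + make_challenge())
```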
    Based on what is known about USB authentication, the technology can restrict the use of uncertified cables only in cases where such use is prohibited by manufacturers or by end users themselves. Moreover, it will only be fully supported by fully-featured cables compatible with the USB Power Delivery 3.0 specification, which contain an ID chip and support optional vendor-defined messages.
    According to the USB-IF, it is possible to add the USB-C authentication protocol to host devices by updating their software and firmware, but that will depend on device manufacturers. Since it is not feasible to update things like chargers or cables, they will need to be replaced, or their usage will have to be permitted by software-defined security policies. Owners of PCs, tablets and smartphones will be able to authorize only certain accessories to work with their devices, making it impossible, for example, to plug an unauthorized USB flash drive into a host containing confidential data. Nonetheless, once an accessory is authorized, it can still harm hosts or infect them with viruses; the new USB technology is therefore not a replacement for antivirus software.
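    The software-defined policy idea above can be sketched as a toy host-side check. Everything here, the function name, the accessory IDs and the policy labels, is hypothetical and purely illustrative; the published spec does not define this API.

```python
# Hypothetical registry of accessories the host's owner has certified/approved
CERTIFIED_IDS = {"vendor-a:charger-100w", "vendor-b:cable-gen2"}

def may_connect(accessory_id: str, verified: bool, policy: str) -> bool:
    """Decide whether an accessory may exchange data or power.

    policy: 'open'   -> allow anything (legacy behaviour)
            'strict' -> require a cryptographically verified, certified accessory
    """
    if policy == "open":
        return True
    # In 'strict' mode, the accessory must have passed authentication
    # AND appear in the approved registry
    return verified and accessory_id in CERTIFIED_IDS
```

    A host set to 'strict' would, for example, refuse an unknown USB flash drive even if its signature verifies, which is the usage-restriction scenario the USB-IF describes.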
    It remains to be seen how different manufacturers take advantage of the new technology. If implemented too strictly, some hosts may become incompatible with the majority of cheap USB-C products on the market.
    At present we do not know when the USB-IF plans to start certifying devices with the USB authentication technology, or how the organization plans to certify thousands of cables and chargers. Perhaps Intel, the company that developed USB PD 3.0, will reveal more information at its IDF trade show in the coming days, so stay tuned.
    Gallery: USB-C Authentication Tech to Restrict Usage of Uncertified USB-C Accessories and Cables




    More...

  3. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5913

    Anandtech: Thunderbolt 3 in Action: Akitio Thunder3 Duo Pro DAS Review

    A lot of attention has been paid to high-speed I/O interfaces for computing systems over the last five years. Flash-based storage media capable of multi-Gbps throughput have become very affordable. Display resolutions have also increased rapidly. The necessity to support multiple such devices in both consumer and professional computing solutions has exposed the limitations of traditional external I/O interfaces. Intel has been attempting to solve this problem with Thunderbolt Technology since 2011. Unfortunately, uptake outside the Apple ecosystem for the first two versions has been minimal at best. The introduction of Thunderbolt 3, however, has been a game-changer. Systems and motherboards with Thunderbolt 3 support started coming to market in late 2015. The first Thunderbolt 3 peripheral to appear on the market was the Akitio Thunder3 Duo Pro, a hardware RAID solution with two drive slots. Why do we think that Thunderbolt 3 is a game changer? How does the Akitio Thunder3 Duo Pro perform? What advantages does it deliver over other standard 2-bay hardware RAID solutions? Read on for our detailed analysis and review.

    More...

  4. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5914

    Anandtech: HTC 10: A Quick Look At Battery Life & Storage Performance

    While I’m still working on the full review of the HTC 10, there are obviously a lot of questions flying around about at least a few of the claims that HTC is making. While I’m still not quite sure how I feel about the audio and camera on the HTC 10, I can at least start to talk about battery life. While the HTC 10 uses the same Snapdragon 820 SoC we've already seen in a couple of other places - and as a result application performance doesn't deviate much from those other phones - the same cannot be said for battery life, as the choice of display and firmware optimizations play a major role here.
    As mentioned in part 1 of the Galaxy S7 review, our new web browsing battery life test attempts to use much more realistic workloads. In addition to updating the test pages to modern websites, we’ve added a scrolling component, which makes for a more realistic and reasonable test. While we don’t have a touchscreen CPU boost interrupt firing upon scrolling, scrolling introduces a workload that tests how well the governor can select a proper CPU frequency to complete a fixed amount of work over a given amount of time, rather than simply detecting a high CPU load and sending the CPU into a maximum performance state. The scrolling component also means that we no longer give PSR more credit than it realistically should have, and it provides some stress on the display pipeline that we previously left unexplored.
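    The governor behavior this test probes can be illustrated with a toy model: given a fixed amount of work and a time budget, an ideal governor picks the lowest frequency that still meets the deadline rather than racing to the maximum state. A hypothetical sketch (real governors are far more elaborate):

```python
def pick_frequency(freqs_mhz, work_cycles, deadline_s):
    """Idealized governor: return the lowest frequency that finishes
    `work_cycles` within `deadline_s`; fall back to the maximum if none can."""
    for f in sorted(freqs_mhz):
        if work_cycles / (f * 1e6) <= deadline_s:
            return f
    return max(freqs_mhz)

# A governor that merely detects "high load" would jump straight to the top
# state; matching work to the deadline lets it settle lower and save power.
```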
    On WiFi, we can pretty clearly see that the HTC 10 actually has a notable lead over the Galaxy S7 with Snapdragon 820, although the difference isn’t necessarily enormous. I’m actually pretty surprised by this showing from the HTC 10, because our test is specifically designed to decrease the average APL with a few pages that have dark themes. Another factor that tilts the scales against the HTC 10 when comparing it against phones like the Galaxy S7 is that the Galaxy S7 has an undefeatable power save mode which drops brightness by more than half and throttles the CPU noticeably, while the HTC 10 was run with all power save options disabled.
    On the LTE side, we see the same sort of pattern continue. Qualcomm’s modem prowess is showing here on the S820 devices, as they’ve managed to get WiFi and LTE power drain to be effectively equal in situations where the power amplifier on the transmit side isn’t trying to pull a watt or so to keep a connection to an eNodeB. Interestingly enough, one thing that our old web browsing test didn’t catch is that LTE modem efficiency on the Galaxy S6 is pretty disappointing. Our new test simulates the effects of ads on webpage loads, so I suspect that the Verizon Galaxy S6’s modem is not very efficient at idle, while the sort of race-to-sleep workload that we saw in the old web browsing test wouldn’t show these issues.
    In the interest of having a comparison that is basically almost all static display at this point, I also ran our last-generation web browsing test. As another data point it’s pretty interesting, because it actually suggests that Samsung’s AMOLED is more efficient than HTC’s LCD, which is probably a function of the much higher subpixel density, as HTC is using an RGB stripe instead of PenTile. Given that the HTC 10’s Tianma display definitely doesn’t use photoalignment to achieve higher contrast ratios, and given its relatively low maximum brightness compared to the Galaxy S7, this result is really more in line with what I was expecting. I’m honestly curious as to what optimizations HTC is doing to pull off better power efficiency in our 2016 web browsing test, because the SoC bin in the HTC 10 review unit is noticeably worse than in the Galaxy S7 and Galaxy S7 edge that I received. On top of this, the display doesn’t seem to be particularly efficient compared to the Galaxy S7’s AMOLED display, as seen in the test results above.
    In the interest of discussing throttling performance and getting an idea of the lower bound of battery life on the HTC 10, I also went ahead and put it through our Basemark OS II and GFXBench battery life tests. While we’re actively transitioning to GFXBench 4 and moving to a newer Manhattan 3.1 battery life test, for 1440p Snapdragon 820 devices our traditional T-Rex rundown is sufficient to show throttling behavior and give an idea of what the lower bound for performance looks like.
    Starting with Basemark OS II, we’re really seeing the effects that a worse bin has on the HTC 10 as the device noticeably trails the Galaxy S7 here, but overall runtime is pretty similar as the difference in display efficiency is going to be minimized. We’re also seeing a difference in the kind of load that the two devices can sustain, as the HTC 10’s aluminum unibody means the maximum allowed skin temperature is going to be lower than what the glass-backed Galaxy S7 will allow. Whatever the case, the Galaxy S7 is ahead here.
    In GFXBench again we can see a similar sort of pattern in which the HTC 10 lasts about as long as the Galaxy S7, but in general it tends to throttle faster. However, it’s possible to see how the Galaxy S7’s throttling is best described as oddly configured as Samsung seems to prefer some oscillating behavior which negatively affects power efficiency before settling into steady state in the long run. It’s probably not a surprise that both have the same steady state as the back cover of the HTC 10 distributes heat quite evenly and both are roughly the same size. At any rate, the HTC 10 ends up being quite similar if you’re only comparing runtime and steady state frame rate, although the Galaxy S7 does manage to sustain more time before throttling down.
    Whatever the case, overall battery life between the Galaxy S7 and HTC 10 is going to be similar, although how similar will depend on the workload and on the bin of the SoC you end up getting. If your workload is almost purely display-bound, the Galaxy S7 seems to come out on top. If your usage is more mixed and primarily stresses the CPU, like our 2016 web browsing test, the HTC 10 will edge out the Galaxy S7 and iPhone 6s by a nose. If you intend on running power viruses on your phone, it's likely that the bin of your SoC will matter more than anything else, but runtime on both devices will be similar. Of course, due to the aluminum unibody limiting maximum skin temperatures, the HTC 10 will probably start to throttle sooner, but steady-state performance should be similar. If you want a clear upgrade in battery life, basically the only choice at this time seems to be to go to a larger device like the Galaxy S7 edge.
    Outside of battery life, one inevitable question is whether the HTC 10's eMMC storage is a detriment to the device. While this is by no means an exhaustive examination of storage performance, we can look at AndroBench 4 to get a good idea of performance. Unfortunately, with Android 6 AndroBench 3.6 has broken timers yet again, leading to wildly inaccurate performance figures, and AndroBench 4 varies significantly from run to run for random reads and writes, so for now we can really only disclose sequential figures with any confidence that they are comparable to AndroBench 3.6 and StorageBench performance figures.
    If you were fully expecting the HTC 10 to perform worse than the Samsung Galaxy S7 here like I was, you'll probably be surprised to learn that it doesn't actually do worse all the time. In this test at least, write performance of the HTC 10 is 75% greater than the Samsung MLC UFS solution in the Galaxy S7 due to the use of an SLC write cache. However, sequential reads on the Galaxy S7 are about 35% higher than what they are on the HTC 10. The same sort of pattern repeats itself in the random read and write tests for AndroBench 4, so at a high level it's pretty fair to say that things like burst camera photos, app updates, and similar write-intensive operations are going to be faster on the HTC 10, while read-intensive operations like loading apps may be slightly faster if storage reads are the critical path.
    Overall, it's clear to me that the HTC 10 could be a contender for high end Android smartphones. If you were just to go down the spec sheet, it's probably fairly easy to conclude that HTC can't really go toe to toe with Samsung. However, with our tests so far there is a surprising amount of nuance to all of these comparisons that has to be considered. Battery life seems to be worse than the Galaxy S7 if you just consider display or SoC efficiency, but in a mixed use scenario HTC manages to close the gap. NAND performance seems to inevitably trail the Galaxy S7, but with the right eMMC selection HTC has a significant lead in write performance in all scenarios. In my experience, this seems to be the overall story of the HTC 10 thus far, although there are cases where one device is clearly superior to the other. Of course, this will have to wait for the full review, which should be in the near future.


    More...

  5. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5915

    Anandtech: GIGABYTE Adds 75W GeForce GTX 950 to Lineup

    GIGABYTE has quietly added a low-power GeForce GTX 950 video card to its lineup. The product does not require an auxiliary PCIe power connector and can be powered entirely by a PCIe x16 slot. Low-power graphics cards featuring the GM206 graphics chip were released by multiple manufacturers recently; GIGABYTE’s board will compete against similar products from three other makers.
    The GIGABYTE GV-N950D5-2GD graphics card is based on the GeForce GTX 950 GPU in its default configuration (768 stream processors, 48 texture units, 32 ROPs, 128-bit GDDR5 memory interface), but does not require external power, unlike Nvidia’s reference design. Power consumption of the GV-N950D5-2GD does not exceed 75 W, which is why it can be powered by the PCIe x16 slot alone. Due to the reduced power consumption, the new graphics card from GIGABYTE does not feature a significant factory overclock; it comes with GPU frequencies of 1051/1228 MHz (base/boost) and thus offers performance close to that of Nvidia’s reference card.
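    The 75 W figure matters because the PCI Express CEM specification caps the power an x16 slot can deliver at 75 W; any board above that needs an auxiliary connector. A trivial sketch of that check (the helper name is ours, not any real API):

```python
PCIE_X16_SLOT_BUDGET_W = 75  # maximum slot power per the PCIe CEM spec

def needs_aux_connector(board_power_w: float) -> bool:
    """True if a card cannot be powered by the x16 slot alone."""
    return board_power_w > PCIE_X16_SLOT_BUDGET_W
```

    A 75 W card like the GV-N950D5-2GD sits exactly at the slot budget, while the 90 W reference design exceeds it and needs the six-pin connector.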
    The board is equipped with 2 GB of GDDR5 memory, two DVI connectors, one HDMI 2.0 port and one DisplayPort output. The product uses a rather simple dual-slot cooling system with one fan, which should be sufficient given the low-power nature of the device. In a bid to ensure a long lifespan for the product, GIGABYTE uses high-quality components, such as solid-state inductors and capacitors, to build the GV-N950D5-2GD.
    GIGABYTE added its low-power GV-N950D5-2GD graphics card to its lineup after ASUS, EVGA and MSI quietly introduced their GeForce GTX 950-based products with 75 W power consumption that do not require auxiliary six-pin PCIe power connectors. Such adapters can be used to upgrade inexpensive PCs that do not have an extra power connector inside, or to build low-power gaming or HTPC systems. Since the GM206 GPU is still the only GPU on the market that supports hardware-accelerated decoding and encoding of H.265 (HEVC) video, as well as HDCP 2.2 content protection over HDMI 2.0 (which is required for Ultra HD Blu-ray playback), those who build modern HTPCs do not have a lot of choice.
    NVIDIA GeForce GTX 950 Graphics Cards Specification Comparison
    (columns: GIGABYTE GV-N950D5-2GD | EVGA 02G-P4-0954 | EVGA 02G-P5-258 | MSI 2GD5 OCV2 | ASUS GTX950-2G | NVIDIA reference)
    CUDA Cores: 768 (all)
    Texture Units: 48 (all)
    ROPs: 32 (all)
    Core Clock: 1051 | 1025 | 1076 | 1076 | 1026 | 1024 MHz
    Boost Clock: 1228 | 1190 | 1253 | 1253 | 1190 | 1188 MHz
    Memory Clock: 6.6 Gbps GDDR5 (all)
    Memory Bus Width: 128-bit (all)
    VRAM: 2 GB (all)
    TDP: 75 W (all cards) | 90 W (reference)
    Outputs: DVI-D, DVI-I, DP 1.2, HDMI 2.0 (GIGABYTE and both EVGA cards) | DVI-I, DisplayPort 1.2, HDMI 2.0 (MSI)
    Architecture/GPU: Maxwell 2, GM206, 2.94 B transistors, TSMC 28nm (all)
    Launch Date: Apr '16 (GIGABYTE) | Mar '16 | Aug '15 (reference)
    Launch Price: unknown (GIGABYTE) | $159 (reference)
    With GIGABYTE’s addition of a GM206-251-based graphics board to its product family, low-power GeForce GTX 950 cards are now available from virtually all well-known suppliers of video cards. Pricing of GIGABYTE’s GV-N950D5-2GD video card is unknown, but GeForce GTX 950-based adapters are generally inexpensive.
    Gallery: GIGABYTE Adds GeForce GTX 950 with 75W Power Consumption to Lineup




    More...

  6. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5916

    Anandtech: AMD Announces 32GB FirePro W9100 - Pro Graphics Gets a Memory Bump

    Thanks to the proliferation of 8Gb GDDR5 memory modules, we’ve seen the memory capacities of professional graphics cards rise over the last few months. For the professional graphics market this is always a welcome development, as datasets are already massive and always growing, especially in the content creation field.
    Due to various technical considerations - primarily a larger memory bus - AMD has traditionally offered the highest-capacity professional graphics cards over the past generation, with the current FirePro W9100 topping out at 16GB. Last month, however, NVIDIA surpassed AMD with the launch of the 24GB Quadro M6000. Now, this week, in advance of the 2016 NAB Show, AMD is firing back and retaking the top spot with a capacity bump of its own, updating the FirePro W9100 to 32GB.
    AMD FirePro W Series Specification Comparison
    AMD FirePro W9100 (32GB) AMD FirePro W9100 (16GB) AMD FirePro W9000 AMD FirePro W8100
    Stream Processors 2816 2816 2048 2560
    Texture Units 176 176 128 160
    ROPs 64 64 32 64
    Core Clock 930MHz 930MHz 975MHz 824MHz
    Memory Clock 5GHz GDDR5 5GHz GDDR5 5.5GHz GDDR5 5GHz GDDR5
    Memory Bus Width 512-bit 512-bit 384-bit 512-bit
    VRAM 32GB 16GB 6GB 8GB
    Double Precision 1/2 1/2 1/4 1/2
    Transistor Count 6.2B 6.2B 4.31B 6.2B
    TDP 275W 275W 274W 220W
    Manufacturing Process TSMC 28nm TSMC 28nm TSMC 28nm TSMC 28nm
    Architecture GCN 1.1 GCN 1.1 GCN 1.0 GCN 1.1
    Warranty 3-Year 3-Year 3-Year 3-Year
    Launch Price (List) $4999 $3999 $3999 $2499
    Launch Date Q2 2016 April 2014 August 2012 July 2014
    The updated FirePro W9100 picks up right where the previous model left off. Based around a fully enabled version of AMD’s Hawaii GPU, the specifications outside of memory capacity are unchanged. As for the memory itself, this update sees AMD replace their 4Gb GDDR5 chips with 8Gb chips, moving from a 32 x 4Gb configuration to a 32 x 8Gb configuration. Consequently, any performance impact depends on data set size: performance essentially doesn’t change for data sets that already fit within memory, while sets between 16GB and 32GB that were previously slow because they didn’t fit on the card can now be loaded in their entirety.
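    The capacity and bandwidth arithmetic behind the bump is straightforward; a quick sketch (the helper names are ours):

```python
def capacity_gb(chips: int, density_gbit: int) -> int:
    """Total VRAM in GB: chip count x per-chip density (Gb), / 8 bits per byte."""
    return chips * density_gbit // 8

def bandwidth_gbps(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate x bus width / 8."""
    return data_rate_gbps * bus_width_bits / 8

# 32 x 4Gb chips -> 16 GB, 32 x 8Gb chips -> 32 GB; the 5 Gbps GDDR5 on a
# 512-bit bus is unchanged, so peak bandwidth stays the same across both cards.
```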
    With their latest capacity bump, AMD becomes the first company to ship a 32GB pro graphics card, and consequently retakes their top spot in the market. At the same time AMD will have final bragging rights for this generation, as AMD and NVIDIA have now both maxed out the memory capacity of their current cards.
    The 32GB FirePro W9100 will be launching this quarter through AMD’s usual distribution and OEM partners. The MSRP will be $4999, which is closely aligned with competitor NVIDIA’s own pricing, though also higher than that of the 16GB card it supplants. Meanwhile AMD will continue to ship the 16GB card as well; while there isn’t a current MSRP attached to it, it’s currently available from retailers for around $3000.


    More...

  7. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5917

    Anandtech: The PNY CS1311 and CS2211 SSD Review: MLC vs TLC at 15nm

    PNY's latest consumer SSDs incorporate Toshiba 15nm NAND and are based on the Phison S10 controller. The TLC-based PNY CS1311 and MLC-based PNY CS2211 offer the rare opportunity of a direct comparison of MLC against TLC on the same hardware platform. In addition, they are also reasonably priced for the entry-level and mainstream segments of the SSD market.


    More...

  8. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5918

    Anandtech: ASUS Announces Three New Displays with Adaptive-Sync Technology

    ASUS has introduced three new inexpensive displays for gamers with Full HD and 4K resolutions: the MG248Q, the MG28UQ and the MG24UQ. The new monitors support VESA’s Adaptive-Sync technology and thus should be compatible with video cards that feature AMD’s FreeSync dynamic refresh rate technology. While the monitors do not yet carry the FreeSync badge, they will likely gain it eventually.
    Specifications of ASUS MG-Series Displays
    (columns: MG248Q | MG24UQ | MG28UQ)
    Panel: 24" TN | 23.6" IPS | 28" TN
    Resolution: 1920 x 1080 | 3840 x 2160 | 3840 x 2160
    Refresh Rate: 40 Hz - 144 Hz | 30 Hz - 60 Hz | 30 Hz - 60 Hz
    Adaptive-Sync Range: unknown | 40 Hz - 60 Hz | 40 Hz - 60 Hz
    Response Time: 1 ms gray-to-gray | 4 ms gray-to-gray | 1 ms gray-to-gray
    Brightness: 350 cd/m² | 300 cd/m² | 330 cd/m²
    Contrast: 100,000,000:1 (ASUS smart contrast ratio, all)
    Viewing Angles (horizontal/vertical): 170°/160° | 178°/178° | 170°/160°
    PPI: 92 ppi | 186 ppi | 157 ppi
    Pixel Pitch: 0.276 mm | 0.136 mm | 0.16 mm
    Colors: 16.7 million | 1.07 billion | 1.07 billion
    Color Saturation: unknown
    Inputs: DisplayPort 1.2, HDMI 1.4, DVI-D | DisplayPort 1.2, HDMI 2.0, 2 x HDMI 1.4 | DisplayPort 1.2, HDMI 2.0, 2 x HDMI 1.4
    Audio: 2 x 2 W speakers (all)
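    The PPI and pixel-pitch figures above follow directly from each panel's resolution and diagonal; a quick cross-check:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal resolution in pixels over diagonal size in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

def pixel_pitch_mm(ppi_value: float) -> float:
    """Distance between pixel centers in millimeters (25.4 mm per inch)."""
    return 25.4 / ppi_value

# A 28" 4K panel works out to roughly 157 ppi and a ~0.16 mm pitch,
# matching the MG28UQ column.
```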
    The biggest display among the newcomers is the ASUS MG28UQ, which is based on a TN panel with 3840×2160 resolution, a 100,000,000:1 ASUS smart contrast ratio and 330 cd/m² brightness. The MG28UQ has a default refresh rate of 60 Hz and supports dynamic refresh rates between 40 and 60 Hz, which is typical for 4K monitors. The display is equipped with one DisplayPort 1.2 input and three HDMI inputs, a dual-port USB 3.0 hub with quick-charge support, as well as two 2 W speakers. The unit also features tilt, swivel, pivot and height adjustments and is compatible with VESA wall mounts.
    The ASUS MG28UQ is available now for $549 from Amazon. The product does not seem very affordable for a TN-based display, possibly because ASUS charges a premium for the Adaptive-Sync feature. Nonetheless, the monitor is not too expensive either.
    Buy ASUS MG28UQ on Amazon.com
    Next up, the MG24UQ is not as big as its larger brother (it has a 23.6” diagonal), but it will be a more interesting option for those who prefer IPS panels with high pixel density. The monitor sports 3840×2160 resolution with up to a 60 Hz refresh rate, a 100,000,000:1 ASUS smart contrast ratio and 300 cd/m² brightness. Adaptive-Sync works for refresh rates between 40 and 60 Hz, just as on the MG28UQ. The monitor features one DisplayPort 1.2 input, three HDMI inputs, and two 2 W speakers. The design of the MG24UQ is very similar to that of the MG28UQ (hence it sports the same set of adjustments and VESA mounts), with the exception of dimensions and the lack of a USB hub on the smaller model.
    The ASUS MG24UQ can be pre-ordered now for $399 on Amazon.
    Buy ASUS MG24UQ on Amazon.com
    Finally, the ASUS MG248Q is designed for gamers who value high dynamic refresh rates above all other features. This display will be the company’s first 24” monitor to support up to a 144 Hz refresh rate as well as Adaptive-Sync technology. The monitor uses a TN panel with 1920×1080 resolution, a 100,000,000:1 ASUS smart contrast ratio and 350 cd/m² brightness, offering slightly better specifications and a more aggressive visual design compared to the VG247H and the VG248QE. The display supports dynamic refresh rates between 40 and 144 Hz, according to ASUS, which is a very decent range. As an added bonus, thanks to the extremely high refresh rate, the MG248Q could be used with NVIDIA's 3D Vision stereo-3D kit.
    ASUS plans to start selling the MG248Q in the coming weeks for an undisclosed price. Typically, such monitors are not expensive; thus, the MG248Q could be used to build relatively affordable, ultra-fast multi-monitor setups with Adaptive-Sync.
    ASUS is one of the leading suppliers of displays for gamers, with a huge market share according to the company. The new MG-series monitors should help ASUS better address the segment of affordable gaming displays.
    Gallery: ASUS Announces Three New Displays with Adaptive-Sync Technology




    More...

  9. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5919

    Anandtech: Corsair Extends Warranty of Advanced PSUs to 10 Years

    Unlike CPUs, video cards, motherboards and memory, power supplies do not become outdated after two or three years, which is why high-end PSUs usually survive multiple system configurations. However, only a few companies offer warranties longer than five years on advanced PSUs. This month, Corsair became the second supplier to extend the warranty on its high-end PSUs to 10 years.
    Effective immediately and retroactively, Corsair has increased the warranty of all AXi range (launched in 2012), HXi range (launched in 2014) and RMi/RMx range (launched in 2015) PSUs from 7 years to 10 years. The extended warranty covers not only newly purchased power supplies, but also all the PSUs that belong to the aforementioned families sold to date.
    Note that Corsair did not extend warranties on its older AX, HX and RM/RX PSUs, which were on the market for a long time but are no longer available from the company. While those power supplies are considered good, Corsair upgraded capacitors, fans, heatsinks and some other internal components in its new PSU lineups (see our review of the RM1000i and the RM1000x PSUs for details), not to mention changed OEMs in some cases.

    Most power supplies on the market today are covered by 3- or 5-year warranties, though Antec, Seasonic and Thermaltake offer 7-year warranties on high-end PSUs. To date, only EVGA has covered select PSUs with a 10-year warranty. Evidently, Corsair considers its AXi, HXi, RMi and RMx power supplies reliable enough to warrant the longer coverage.
    Owners of Corsair’s power supplies do not need to register or contact the manufacturer in any way to get the extra warranty coverage.
    Source: Corsair (via The Tech Report)


    More...

  10. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5920

    Anandtech: OpenPOWER Gains Support as Inventec, Inspur, Supermicro Develop POWER8-Based Servers

    When IBM, Google, Mellanox, NVIDIA and Tyan founded the OpenPOWER Foundation three years ago, the initiative was supported by only two server manufacturers: Google, which builds servers for itself, and Tyan. Today, OpenPOWER has expanded significantly in terms of membership. Moreover, major server producers, including Inventec, Supermicro, Wistron and some others, are developing POWER8-based servers under the OpenPOWER initiative.
    IBM to Expand Power Systems: LC Server Family with Support from Supermicro

    Last week IBM disclosed plans to expand the lineup of its Power LC servers, which are based on POWER8 microprocessors and the Linux operating system. In particular, the company intends to add Open Compute Project-compliant systems for big data analytics to the Power Systems LC portfolio, which will be important both for the company and for the Open Compute Project in general. In addition, Supermicro will develop two servers that will be sold under IBM’s Power LC brand.
    Supermicro is currently working on a 2-way IBM POWER8-based 2U server with up to 512 GB of DDR4 memory, 12 LFF/SFF hot-swap drive bays, and either two NVIDIA Tesla K80 or two Alpha-Data KU3 CAPI adapters (based on the Xilinx UltraScale KU115 FPGA). Another machine that Supermicro is working on is a 1U server featuring two IBM POWER8 processors, 512 GB of DDR4 memory, either one NVIDIA Tesla K80 or two Alpha-Data KU3 CAPI adapters, and four hot-swap drive bays. The servers feature Supermicro’s “Ultra” architecture, which enables the company to maximize the density of devices inside the chassis, providing greater expansion capabilities and flexibility for the platform.
    Right now, IBM sells the Power S812LC (1-way system with up to 10 cores, up to 1 TB of memory and up to 14 storage devices in 1U form-factor) and the Power S822LC (2-way system with up to 20 cores, up to 1 TB of memory, two NVIDIA Tesla K80 accelerators in 2U form-factor) developed by Tyan and Wistron, respectively. Adding Supermicro, one of the world’s largest producers of x86 servers, to the list of POWER8 suppliers is an important step for IBM. Still, at present, it does not look like Supermicro plans to sell POWER8-based servers under its own brand directly.
    POWER8-Based Machines from Inventec and Wistron Incoming

    Supermicro is not the only big server maker developing POWER8-based machines. Inventec, a major Taiwan-based ODM that sells, among other things, servers to companies such as Dell and Lenovo, is also working on an OpenPOWER project. The platform will be based on a single IBM POWER8 CPU with NVLink as well as two NVIDIA Tesla P100 compute accelerators. The machine will also feature 16 DDR4 DIMMs (thanks to IBM’s Centaur memory buffer chip) and will thus support a large amount of memory, at least for a 1-way system. Inventec’s POWER8-based platform is designed primarily for high-performance computing (HPC), and it remains to be seen in what form it will actually reach the market (and whether it will reach it at all). Right now this is only a motherboard project; it could therefore be a prototype for evaluation by customers, or one built for an undisclosed interested party (which is also interesting, given Inventec’s list of customers).
    Wistron, another major Taiwan-based server ODM, is working with NVIDIA, IBM and Mellanox on a prototype of a 2-way IBM POWER8-based machine with four Tesla P100 accelerators aimed at HPC applications, which we told you about last week. However, the company is developing three more OpenPOWER products. The first, available today, is the Wistron Polaris (co-developed with E4 Computer Engineering), a 2-way 2U system featuring IBM POWER8 CPUs with CAPI support, aimed primarily at HPC applications. Later on, Wistron plans to offer its Polaris Plus, which will resemble the machine co-developed with IBM and NVIDIA: it will feature two POWER8 processors with NVLink as well as four NVIDIA Tesla accelerators based on the Pascal architecture. Finally, the company is working on the Dark King project, a 4-way 4U server with POWER8 CPUs, 128 DIMMs (and thus several terabytes of memory) and CAPI support, designed for large-scale data analysis.
    Speaking of E4, it naturally offers what it co-developed with Wistron: the E5 OP205, a 2-way 2U server with support for CAPI accelerators, aimed at industries that require high-performance computing.
    POWER Gains Support from Chinese Server Makers

    Another noteworthy company, which is designing two POWER8-based servers, is China-based Inspur, one of the largest server makers in the country. The manufacturer is working on 1-way and 2-way 4U machines for new data center and big data applications. Little is known about Inspur’s 2-way IBM POWER8 4U server, except that it will support up to 2 TB of DDR3 memory (using 64 DIMMs).
    Undisclosed partners from the OpenPOWER Foundation also helped Beijing Neu Cloud Oriental System Technology to develop the NL2200, a 2-way server that supports IBM POWER8 chips with up to 12 cores, up to 1 TB of memory, two GPU-based compute accelerators, two hard drives, a PCIe-based SSD, and a 100 Gbps InfiniBand adapter. Such a machine could be used for HPC and other applications, and its development shows that some China-based companies are investing in OpenPOWER.
    One more Chinese server maker that demonstrated its products at the OpenPOWER Summit last week was Zoom Server, which showcased its Redpower C210/C220 and P210 machines. The C210/C220 is a 2-way system with up to 12 storage devices, aimed at data storage, database and other applications. The P210, by contrast, is a more advanced server for HPC, powered by two IBM POWER8+ CPUs that support four NVIDIA Tesla compute processors with NVLink (e.g., the Tesla P100) and up to 1 TB of memory.
    While three server makers from China is not a lot, keep in mind that the OpenPOWER Foundation is only a few years old and the vast majority of datacenter owners are not familiar with IBM’s POWER8 processors. The fact that Inspur and smaller companies are building POWER8-based systems therefore demonstrates the potential and viable performance of such servers, though not that they are gaining market share just yet.
    Availability of Open Compute Project-Compliant Servers to Expand

    In addition to custom systems from large server manufacturers, a number of companies (including Mark III Systems, Penguin Computing and StackVelocity) are also offering or working on Open Compute Project-compliant OpenPOWER systems. Such machines are important because server makers will sell them to software developers who would like to optimize their programs for IBM’s POWER8, as well as to companies who would like to try POWER-based systems. In the long term, this could help increase the market share of IBM’s POWER8 platforms in datacenters.
    For example, Penguin Computing offers the Magna 2001 for software development, the Magna 1015 (1-way IBM POWER8) for Open Rack infrastructure and virtualization workloads, as well as the Magna 2002 (2-way IBM POWER8, NVIDIA Tesla K80 or M40 accelerator) for accelerated computing and machine learning applications. Mark III Systems will offer IBM POWER8-based servers that rely on the Open Compute Project design specification and follow the Barreleye server design by Rackspace (2-way IBM POWER8, 32 DIMMs, CAPI, etc.). Meanwhile, StackVelocity intends to sell a machine based on the Barreleye server design as well as its Saba 2U system with POWER8 and CAPI accelerators for big data analytics and HPC applications.
    POWER9-Based Systems in Development

    It is also noteworthy that Google and Rackspace are already working on a server architecture specification based on the upcoming IBM POWER9 microprocessors, which may indicate that the two companies are interested in the chip as well as in next-gen POWER servers.
    This custom server, co-developed by Google and Rackspace, is code-named Zaius. The machine will not come online until IBM officially releases the POWER9 CPUs on its roadmap, but Google is already sharing some details. The Zaius will be based on two IBM POWER9 processors (with an unknown number of cores) and will feature 32 DDR4 DIMM slots, doubling the amount of memory versus Barreleye. The POWER9 processors will support both NVLink and CAPI; therefore, the Zaius will be compatible not only with FPGA-based accelerators but also with NVIDIA’s Tesla P100 and the company’s other upcoming GPU-based compute solutions. Such compatibility will let Google and Rackspace deploy NVIDIA’s Tesla processors more broadly than they do today. Importantly, POWER9 is also expected to bring support for PCIe 4.0 next year, which means higher bandwidth for storage and various accelerators.
    The Zaius server will be compatible with the proposed Open Rack 48V standard and will be 1.25U in height. It will be able to house two full-length, full-height PCIe 4.0 x16 cards, one half-length, half-height PCIe 4.0 x16 card, one device connected via a mezzanine PCIe 4.0 x16-based OCP connector, as well as up to 15 2.5” SAS/SATA/NVMe storage devices.
    According to Google, many of its cloud services, including Gmail, already run on systems featuring IBM’s POWER8 processors. Apparently, the company finds the performance of these CPUs competitive and IBM’s microprocessor roadmap promising, which is why it is co-developing a POWER9-based server with Rackspace.
    A long-term commitment from companies like Google and Rackspace is not something to underestimate. It shows that two major server users plan to continue adopting IBM POWER-based software and hardware, thus helping to create an alternative to Intel’s Xeon platform (which is important for the market in general).
    Sources: OpenPOWER Foundation, Nikkei IT Pro.
    Image Sources: IBM Power Systems Japan, Rackspace, Nikkei IT Pro.


    More...
