Thread: Anandtech News

  1. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #11641

    Anandtech: First PCIe Gen5 SSDs Finally Hit Shelves - But The Best Is Yet To Come

    This week, consumer-grade PCIe 5.0 M.2 drives finally hit the U.S. market, well over a year after the first client PC platforms supporting PCIe Gen5 became available. The new drives offer higher performance than the flagship PCIe 4.0 drives they supplant, albeit with some trade-offs, such as high prices and a greater need for good cooling. Meanwhile, for better or worse, the current crop of drives is largely an interim solution; as faster NAND becomes more readily available later this year, drive vendors will be able to push out even speedier drives based on the same controllers.
    Up to 10 GB/sec Now for $170/TB

    Gigabyte and Inland (a Micro Center brand) are the first companies to offer PCIe Gen5 consumer SSDs in the U.S. Gigabyte's Aorus Gen5 10,000 and Inland's TD510 drives come in a 2TB configuration and are rated for a maximum sequential read speed of 10GB/sec and a maximum sequential write speed of 9.5GB/sec. Compared to the roughly 7GB/sec limit of high-end PCIe 4.0 drives, this is a notable improvement in sequential read speeds for the same form factor.
    Both drives are based on Phison’s PS5026-E26 controller (Arm Cortex-R5 cores, special-purpose CoXProcessor 2.0 accelerators, LDPC, eight NAND channels with ONFI 5.x and Toggle 5.x interfaces at up to 2400 MT/s data transfer speeds) as well as 3D TLC NAND memory. To sustain high performance levels even under high loads, Gigabyte equipped its SSD with a massive passive cooling system with a heat pipe.
    Whereas Gigabyte has built its own drive, the drive that Inland/Micro Center sells is thought to be made by Phison itself (or at least under its supervision). The company not only offers turn-key solutions featuring controllers with firmware and a reference design, but can also produce actual SSDs and let its partners resell them under their own brands. Compared to the Gigabyte drive, the Inland drive comes with a rather compact cooling system, but one equipped with a small fan that is expected to produce a decent bit of noise (as small fans are wont to do).
    Since these are the first PCIe Gen5 SSDs for client PCs on the market and they carry 2TB of raw 3D NAND memory, it is not surprising that they are quite expensive. Amazon and Newegg were charging $340 per drive, but quickly sold out of the units they had. Micro Center offers its product for $399, but with an immediate $50 discount it can be had for $349 once it's back in stock.
    But Faster Drives Incoming

    While this current crop of drives is already hitting 10GB/sec reads, as we often see with first-generation products, they are still leaving performance on the table. Because the NAND needed to make the most of the Phison E26 controller has only recently become available (and only in small quantities at that), these initial drives, as fast as they are, are being held back by overall NAND throughput.
    After Phison formally introduced its PS5026-E26 controller in September 2021, it demonstrated prototype E26-powered SSDs with 12.5 GB/s reads and 10.2 GB/s writes on a number of occasions. In fact, several of the company's partners, such as MSI, even announced E26-based drives with similar performance characteristics, but Gigabyte's Aorus Gen5 10,000 and Inland's TD510 instead start things off a bit slower.
    Under the hood, with 8 channels of NAND to pull from, the E26 controller needs NAND running at 2400 MT/s in order to saturate its own internal throughput. These data rates, in turn, only recently became available via NAND built to the new Toggle NAND 5.0 and ONFi 5.0 standards. Micron's ONFi 5.0 232-layer 3D TLC NAND chips were used for Phison's prototype drives, but while Micron is slowly ramping up production of 232-layer NAND in general, the company has slowed the ramp of 232-layer NAND running at 2400 MT/s. Meanwhile, Phison has yet to validate SK Hynix's 2400 MT/s NAND with its controller.
    As a result, due to the scarce availability of 2400 MT/s NAND, SSD makers have to use 1600 MT/s NAND in their PCIe Gen5 SSDs for now. Once faster NAND is more readily available, drive vendors can start using it to build E26-based drives that will be able to hit 12.3 GB/sec, making the most of the E26 controller and surpassing the performance of this initial generation of drives.
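    For a rough sense of the bandwidth math at play, here's a back-of-the-envelope sketch in Python (our own arithmetic, not Phison's figures; it ignores ECC, command overhead, and flash management entirely):

        # Rough aggregate NAND interface bandwidth for an 8-channel controller.
        # Each NAND channel moves one byte per transfer, so MT/s maps directly to MB/s.

        CHANNELS = 8  # the Phison E26 has eight NAND channels

        def raw_nand_bandwidth_gb_s(mt_per_s: int, channels: int = CHANNELS) -> float:
            """Aggregate raw NAND interface bandwidth in GB/s."""
            return channels * mt_per_s / 1000

        print(raw_nand_bandwidth_gb_s(1600))  # 12.8 GB/s raw -> today's ~10 GB/s drives
        print(raw_nand_bandwidth_gb_s(2400))  # 19.2 GB/s raw -> headroom for ~12.3 GB/s

    With 1600 MT/s NAND, the raw interface bandwidth leaves little margin above the 10 GB/sec rating once overhead is accounted for, which is why 2400 MT/s NAND is needed for the controller's full 12.3 GB/sec.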


    More...

  2. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #11642

    Anandtech: Intel Scraps Rialto Bridge GPU, Next Server GPU Will Be Falcon Shores In 2

    On Friday afternoon, Intel published a letter by Jeff McVeigh, the company’s interim GM of their Accelerated Computing Systems and Graphics group (AXG). In it, McVeigh offered a brief update on the state of Intel’s server GPU product lineups and customer adoption. But, more importantly, his letter offered an update to Intel’s server GPU roadmap – and it’s a bit of a bombshell. In short, Intel is canceling multiple server GPU products that were planned to be released over the next year and a half – including their HPC-class Rialto Bridge GPU – and going all-in on Falcon Shores, whose trajectory has also been altered, delaying it until 2025.

    More...

  3. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #11643

    Anandtech: Intel Shares Stopgap Solution For Erratic Connection Drops With I226-V Eth

    The transition to 2.5Gbps Ethernet has not been an easy one for Intel. The company's I225/I226 2.5 GbE Ethernet controllers (codename Foxville), a prevalent choice on Intel platform motherboards for the last few years, have presented a fair share of issues since their introduction, including random network disconnections and stuttering. And while Intel has been working through the issues with multiple revisions of the hardware, they apparently haven't hammered out all of the bugs yet, as evidenced by the latest bug mitigation suggestion from the company. In short, Intel is suggesting that users experiencing connection issues on the latest I226-V controller disable some of its energy efficiency features, which appear to be a major contributor to the connection stability issues the I226-V has been seeing.
    To mitigate the connection problems on the I226-V Ethernet controller, Intel is advising affected users to disable Energy-Efficient Ethernet (EEE) mode through Windows Device Manager. The same guidance applies to Linux users as well. EEE mode aims to lower power consumption when the Ethernet connection is in an idle state. The issue is that EEE mode seems to activate when an Ethernet connection is in active use, causing it to drop out momentarily.
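    For reference, on Linux the EEE toggle is exposed through the ethtool utility. Here's a minimal sketch of applying the workaround from a script (the interface name is a placeholder for illustration; ethtool and root privileges are assumed):

        import subprocess

        IFACE = "enp5s0"  # placeholder interface name; find yours with `ip link`

        # Show the current Energy-Efficient Ethernet state for the NIC
        subprocess.run(["ethtool", "--show-eee", IFACE], check=True)

        # Disable EEE, per Intel's suggested workaround for the I226-V
        subprocess.run(["ethtool", "--set-eee", IFACE, "eee", "off"], check=True)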
    And while deactivating EEE does reportedly improve connection stability, it doesn't seem to be the ultimate solution. Intel has received reports that some users still experienced disconnections with EEE mode disabled. Furthermore, disabling EEE mode forgoes its intended benefits – such as reducing power draw by up to 50% when an Ethernet connection is idling – so it's not a feature that power-conscious users would normally want to turn off.
    Intel has also released an updated driver set for the I226-V/I225-V family of Ethernet controllers that automatically makes this adjustment. Specifically, the patch deactivates EEE mode for connection speeds above 100 Mbps, but users may have to disable it entirely if the workaround doesn't work with their combination of hardware. MSI and Asus have already deployed the new Ethernet driver for their respective Intel 700-series motherboards, so other vendors shouldn't take long to do the same.
    In the interim, Intel will continue investigating the root cause in order to provide a concrete solution for motherboards with the I226-V Ethernet controller. The Foxville family of Intel Ethernet controllers has a long history of connectivity quirks – going back to the original I225-V in 2019 and the E3100 in 2020 – ultimately requiring multiple hardware revisions (B1, B2, & B3 steppings) before many of its issues were resolved. As a result, it's not off the table that the I226-V Ethernet controller may suffer the same fate.


    More...

  4. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #11644

    Anandtech: The FSP Hydro PTM X Pro 1000W ATX 3.0 PSU Review: Premium Platinum Power

    Today we're taking a look at the Hydro PTM X Pro 1000W ATX 3.0, FSP's latest ATX 3.0-compliant unit. FSP released one of the first ATX 3.0 units on the market, the Hydro G Pro series, which we took a look at a couple of months ago. Compared to that power supply, the PTM X Pro series is aimed at system builders seeking a higher level of performance, with the primary discernible difference being the 80 Plus Platinum efficiency certification. To get there, FSP had to build a platform with even better power regulation, giving the Hydro PTM X Pro a level of electrical excellence that few PSUs can match.

    More...

  5. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #11645

    Anandtech: NVIDIA Releases Hotfix For GeForce Driver To Resolve CPU Usage Spikes

    NVIDIA released the company's GeForce 531.18 WHQL driver on February 28th. It didn't take long before user reports started to pile up on the NVIDIA forums about a strange bug causing processor usage to spike. The problem would occur after the user exited a game, and the issue would persist until the system was restarted. Now, a week later, NVIDIA has solved the problem and deployed a hotfix to replace the GeForce 531.18 WHQL driver, bringing the version number up to 531.26.
    The GeForce 531.18 WHQL driver was an installment with several notable features, including DLSS 3 support and optimizations to Atomic Heart and the closed beta for The Finals. More importantly, the GeForce 531.18 WHQL driver enabled support for RTX Video Super Resolution (VSR), an upscaling feature that uses AI to improve streaming video in Google Chrome and Microsoft Edge.
    User feedback revealed the bug would increase processor usage by anywhere between 10% and 15%. While it's not a system-breaking issue, NVIDIA's hotfix has restored things to normal by eliminating the CPU usage bug. Surprisingly, the problem didn't impact every GeForce system. According to a discussion in a Reddit thread, the NVIDIA Game Session Telemetry plugin (NvGSTPlugin.dll), which is loaded by the NVIDIA Display Container service, could have been the perpetrator of the unusual processor spikes. Users previously had to block or erase the DLL file to temporarily solve the problem; unfortunately, the latter would render the control panel unserviceable, since it depends on the NVIDIA Container service. A sounder alternative was rolling back the driver to a previous version, though that meant losing the optimizations and new functionality. NVIDIA's hotfix comes at just the right time.
    When it comes to hotfixes, it's worth noting that, as the name implies, these are quick, interim solutions that typically don't go through the same lengthy QA process as a standard GeForce driver. In other words, NVIDIA supplies these hotfixes to consumers as-is to fix a notable bug. So installing the hotfix is only recommended for systems that are affected by the bug; otherwise, users should wait for the next WHQL driver release as usual.


    More...

  6. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #11646

    Anandtech: AMD Announces The Last of Us Part 1 Game Bundle For Radeon RX 6000 & 7000

    The Last of Us Part 1, a remaster of the original PlayStation hit, is making its way to the PC on March 28. To celebrate the release – or to let gamers relive the TV series – AMD has kicked off a new game bundle offer for Radeon RX 6000 and RX 7000 video card purchases. The promotion also applies to pre-built gaming desktops that use one of the eligible Radeon cards. AMD's The Last of Us Part 1 bundle, which starts today and runs until April 15, 2023, arrives just in time, now that the previous Radeon bundle with The Callisto Protocol and Dead Island 2 has ended.
    The new bundle applies to AMD's entire lineup of Radeon RX 6000 and Radeon RX 7000 desktop video cards, unlike the previous bundle that only focused on AMD's last-generation 6000 series graphics cards. This means everything from the top of the stack Radeon RX 7900 XTX down to the $140 entry-level RX 6400 all qualify for a free copy of the game.
    AMD Current Game Bundles (March 2023)

    Video Card (incl. systems and OEMs)    Game
    Radeon RX 7000 Desktop (All)           The Last of Us, Part 1
    Radeon RX 6000 Desktop (All)           The Last of Us, Part 1
    Unfortunately, Naughty Dog, the developer behind the title, hasn't revealed the system requirements for The Last of Us Part 1, so we don't know how much graphics firepower gamers will need to run the game. The Radeon RX 6400 can conceivably handle it, but likely only with image fidelity or resolution compromises. In any event, the game is launching with solid technical underpinnings for the AMD crowd, with support for AMD's latest FidelityFX Super Resolution (FSR) 2.2 upscaling technology.
    The latest game bundle comes as we're seeing some movement in video card pricing. While the flagship Radeon RX 7900 XTX's price tag has remained stagnant, street prices on the Radeon RX 7900 XT have fallen a bit from its $899 MSRP. For example, ASRock's Phantom Gaming Radeon RX 7900 XT currently retails for $799 on Newegg, so the Radeon RX 7000 series is getting cheaper – at its own creeping pace. As is almost always the case, bundles such as these are offered as an alternative to cutting prices, with AMD using the added game to instead add value to the product.
    AMD's The Last of Us Part 1 gaming bundle is available in different parts of the world. The participating retailers in the U.S. include Amazon, AVADirect Custom Computers, Best Buy, Cybertron PC (CLX), iBuypower, Maingear, Memory Express, Meta PCs, Micro Center, Newegg, Origin PC, and Xidax.



    More...

  7. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #11647

    Anandtech: Cadence Delivers Technical Details on GDDR7: 36 Gbps with PAM3 Encoding

    When Samsung teased the ongoing development of GDDR7 memory last October, the company did not disclose any technical details of the incoming specification. But Cadence recently introduced the industry's first verification solution for GDDR7 memory, and in the process has revealed a fair bit of additional detail about the technology. As it turns out, GDDR7 memory will use both PAM3 and NRZ signaling, and will support a number of other new features, with a goal of hitting data rates as high as 36 Gbps per pin.
    A Short GDDR History Lesson

    At a high level, the evolution of GDDR memory in recent years has been rather straightforward: newer memory iterations boosted signaling rates, increased burst sizes to keep up with those signaling rates, and improved channel utilization – all without substantially increasing the internal clocks of the memory cells. For example, GDDR5X and then GDDR6 increased their burst size to 16 bytes, and then switched to dual-channel 32-byte access granularity. While not without its challenges in each generation of the technology, ultimately the industry players have been able to crank up the frequency of the memory bus with each version of GDDR to keep the performance increases coming.
    But even "simple" frequency increases are becoming anything but simple. And this has driven the industry to look at solutions other than cranking up the clocks.
    With GDDR6X, Micron and NVIDIA replaced traditional non-return-to-zero (NRZ/PAM2) encoding with four-level pulse amplitude modulation (PAM4) encoding. PAM4 increases the effective data transmission rate to two data bits per cycle using four signal levels, thus enabling higher data transfer rates. In practice, because GDDR6X has a burst length of 8 bytes (BL8) when it operates in PAM4 mode, it is not faster than GDDR6 at the same data rate (or rather, signaling rate), but rather is designed to be able to reach higher data rates than what GDDR6 can easily accomplish.
    Four-level pulse amplitude modulation has an advantage over NRZ when it comes to signal loss. Since PAM4 requires half the baud rate of NRZ signaling for a given data rate, the signal losses incurred are significantly reduced. As higher frequency signals degrade more quickly as they travel through a wire/trace - and memory traces are relatively long distances by digital logic standards - being able to operate at what's essentially a lower frequency bus makes some of the engineering and trace routing easier, ultimately enabling higher data rates.
    The trade-off is that PAM4 signaling in general is more sensitive to random and induced noise; in exchange for a lower frequency signal, you have to be able to correctly identify twice as many states. In practice, this leads to a higher bit error rate at a given frequency. To reduce BER, equalization at the Rx end and pre-compensation at the Tx end have to be implemented, which increases power consumption. And while it's not used in GDDR6X memory, at higher frequencies (e.g. PCIe 6.0), forward-error correction (FEC) is a practical requirement as well.
    And, of course, GDDR6X memory subsystems require all-new memory controllers, as well as a brand-new physical interface (PHY) for both processors and memory chips. These complex implementations are to a large degree the main reason why four-level coding had, until very recently, been almost exclusively used for high-end datacenter networking, where the margins are there to support such cutting-edge technology.
    GDDR7: PAM3 Encoding for Up to 36 Gbps/pin

    Given the trade-offs mentioned above in going with either PAM4 or NRZ signaling, it turns out that the JEDEC members behind the GDDR7 memory standard are instead taking something of a compromise position. Rather than adopting PAM4, GDDR7 memory is set to use PAM3 encoding for high-speed transmissions.
    As the name suggests, PAM3 sits between NRZ/PAM2 and PAM4, using three-level pulse amplitude modulation (-1, 0, +1), which allows it to transmit 1.5 bits per cycle (or rather, 3 bits over two cycles). PAM3 offers a higher data transmission rate per cycle than NRZ – reducing the need to move to higher memory bus frequencies and the signal loss challenges those entail – while requiring a less stringent signal-to-noise ratio than PAM4. Overall, GDDR7 promises higher performance than GDDR6, as well as lower power consumption and implementation costs than GDDR6X.
    And for those keeping score, this is actually the second major consumer technology we've seen introduced that uses PAM3. USB4 v2 (aka 80Gbps USB) is also using PAM3 for similar technical reasons. To quote from our initial coverage back in 2021:
    So what on earth is PAM3?
    PAM3 is a signaling technology where the data line can carry a -1, a 0, or a +1. The system combines two PAM3 transmits into a 3-bit data signal, such that 000 is a -1 followed by a -1. This gets complex, so here is a table:
    PAM3 Encoding

    Bits     Transmit 1    Transmit 2
    000      -1            -1
    001      -1             0
    010      -1            +1
    011       0            -1
    100       0            +1
    101      +1            -1
    110      +1             0
    111      +1            +1
    Unused    0             0
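    The mapping is small enough to express directly in code. Here's a minimal Python sketch of the 3-bits-to-2-symbols translation from the table above (our own illustration, not actual line-coding logic from any spec):

        # PAM3 packs each 3-bit group into a pair of ternary symbols (-1, 0, +1),
        # i.e. 1.5 bits per symbol; the (0, 0) pair is left unused.
        ENCODE = {
            "000": (-1, -1), "001": (-1, 0), "010": (-1, +1), "011": (0, -1),
            "100": (0, +1), "101": (+1, -1), "110": (+1, 0), "111": (+1, +1),
        }
        DECODE = {pair: bits for bits, pair in ENCODE.items()}

        def pam3_encode(bits: str) -> list:
            """Encode a bit string (length must be a multiple of 3) as PAM3 symbols."""
            if len(bits) % 3:
                raise ValueError("bit string length must be a multiple of 3")
            symbols = []
            for i in range(0, len(bits), 3):
                symbols.extend(ENCODE[bits[i:i + 3]])
            return symbols

        print(pam3_encode("100111"))  # [0, 1, 1, 1]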
    When we compare NRZ to PAM3 and PAM4, we can see that PAM3's data transfer rate falls in the middle, between NRZ and PAM4. The reason PAM3 is being used here is to achieve that higher bandwidth without the extra limitations that enabling PAM4 requires.
    NRZ vs PAM3 vs PAM4

    Encoding    Bits    Cycles    Bits Per Cycle
    NRZ         1       1         1
    PAM3        3       2         1.5
    PAM4        2       1         2

    With that said, it remains to be seen how much power a 256-bit memory subsystem running at the 36 Gbps data transfer rate promised by Samsung will use. The GDDR7 spec itself has yet to be ratified, and the hardware itself is still being developed (which is where tools like Cadence's come in). But keeping in mind how bandwidth-hungry AI, HPC, and graphics applications are, that bandwidth will always be welcome.
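    To put the headline figure in perspective, here's a quick calculation (our own, using a hypothetical 256-bit bus) of the aggregate bandwidth, plus the per-pin symbol rate each encoding would need to deliver 36 Gbps:

        # Aggregate bandwidth of a hypothetical 256-bit GDDR7 subsystem at 36 Gbps/pin,
        # and the per-pin symbol rate each encoding needs to hit that data rate.
        DATA_RATE_GBPS = 36
        BUS_WIDTH_BITS = 256

        print(f"Aggregate: {DATA_RATE_GBPS * BUS_WIDTH_BITS / 8:.0f} GB/s")  # 1152 GB/s

        for name, bits_per_symbol in [("NRZ", 1.0), ("PAM3", 1.5), ("PAM4", 2.0)]:
            print(f"{name}: {DATA_RATE_GBPS / bits_per_symbol:.0f} Gbaud/pin")
        # NRZ: 36 Gbaud, PAM3: 24 Gbaud, PAM4: 18 Gbaud

    The PAM3 compromise is visible in the numbers: it only needs two-thirds the symbol rate of NRZ to hit the same data rate, without PAM4's tighter signal-to-noise requirements.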
    Optimizing Efficiency and Power Consumption

    In addition to increased throughput, GDDR7 is expected to feature a number of ways to optimize memory efficiency and power consumption. In particular, GDDR7 will support four different read clock (RCK) modes, in a bid to enable the clock only when it is needed:

    • Always running: RCK runs continuously and stops only during sleep modes;
    • Disable: RCK stops running;
    • Start with RCK Start command: The host can start RCK by issuing the RCK Start command before reading out data, and stop it with the RCK Stop command when needed;
    • Start with Read: RCK automatically starts running when the DRAM receives any command that involves reading out data. It can be stopped using the RCK Stop command.

    In addition, GDDR7 memory subsystems will be able to issue two independent commands in parallel. For example, Bank X can be refreshed by issuing a per-bank Refresh command on CA[2:0], while Bank Y is simultaneously read by issuing a read command on CA[4:3]. Also, GDDR7 will support a linear-feedback shift register (LFSR) data training mode to determine the appropriate voltage levels and timings needed to ensure consistent data transfers. In this mode, the host will keep track of each individual eye (connection), allowing it to apply appropriate voltages to better optimize power consumption.
    Finally, GDDR7 will be able to shift between PAM3 encoding and NRZ encoding modes based on bandwidth needs. In high bandwidth scenarios, PAM3 will be used, while in low bandwidth scenarios the memory and memory controllers can shift down to more energy efficient NRZ.
    Cadence Delivers First GDDR7 Verification Solution

    While JEDEC has not formally published the GDDR7 specification, this latest technical data dump comes as Cadence has launched its verification solution for GDDR7 memory devices. The solution fully supports PAM3 simulation using a real-number representation, and it supports binary bus, strength modeling, and real-number modeling.
    The verification IP also supports various modes of error injection in multiple fields of transactions, both during array data transfers and interface training. Furthermore, it comes with a waveform debugger to visualize transactions in waveform viewers for faster debugging and verification.
    "With the first-to-market availability of the Cadence GDDR7 VIP, early adopters can start working with the latest specification immediately, ensuring compliance with the standard and achieving the fastest path to IP and SoC verification closure," a statement by Cadence reads.
    When Will GDDR7 Land?

    While GDDR7 promises major performance increases without major increases in power consumption, perhaps the biggest question from technical audiences is when the new type of memory will become available. Absent a hard commitment from JEDEC, there isn't a specific timeframe for GDDR7's release. But given the work involved and the release of a verification system from Cadence, it would not be unreasonable to expect GDDR7 to enter the scene alongside the next generation of GPUs from AMD and NVIDIA. Keeping in mind that these two companies tend to introduce new GPU architectures on a roughly two-year cadence, that would mean we'd start seeing GDDR7 show up in devices later on in 2024.
    Of course, given how many AI and HPC companies are working on bandwidth-hungry products these days, it is possible that one or two of them will release solutions relying on GDDR7 memory sooner. But mass adoption of GDDR7 will almost certainly coincide with the ramp of AMD's and NVIDIA's next-generation graphics boards.


    More...

  8. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #11648

    Anandtech: Lenovo Teams Up with Aston Martin for New ThinkStations: Up to 120 Cores,

    Lenovo has introduced its all-new ThinkStation machines designed for performance-hungry professionals. The new ThinkStation P-series lineup consists of three machines based on up to two Intel Sapphire Rapids processors and up to four Nvidia RTX-series professional graphics cards. One of the more interesting wrinkles in Lenovo's announcement is that the chassis of the new workstations were co-designed with Aston Martin, the automaker whose designers use ThinkStations to design cars.
    Lenovo's latest ThinkStation P-series family of workstations is composed of three base machines: the top-of-the-range ThinkStation PX, based on two 4th Generation Xeon Scalable 'Sapphire Rapids' processors with up to 120 cores in total, as well as up to four Nvidia RTX 6000 Ada Lovelace graphics cards with 48GB of GDDR6 memory onboard; the high-end ThinkStation P7, powered by an Intel Xeon W-3400-series processor with up to 56 cores and up to three Nvidia RTX 6000 Ada Lovelace graphics boards; and the relatively compact ThinkStation P5, with an Intel Xeon W-2400-series CPU featuring up to 24 cores as well as up to two Nvidia RTX A6000 Ampere-based graphics cards with 48GB of memory that can be linked using NVLink. All of the updated ThinkStations have a baseboard management controller and can be serviced remotely.
    Rob Herman, Vice President of Lenovo's Workstation and Client AI Business Unit, states: "We partnered closely with Intel, Nvidia, and Aston Martin to ensure these new systems offer the best of form and functionality by combining a premium chassis with ultra high-end graphics, memory, and processing power."

    Lenovo ThinkStation PX
    Speaking of Aston Martin, all of the workstations feature a front panel inspired by the front grille of Aston Martin's DBS vehicles (e.g., the Aston Martin DBS Superleggera). The chassis, co-designed by Lenovo and Aston Martin, features Lenovo's tri-channel cooling system and can be serviced without tools.
    Lenovo ThinkStation P5, P7, and PX: General Specifications

    CPU
        P5: Intel Xeon W-2400, up to 24 cores
        P7: Intel Xeon W-3400, up to 56 cores
        PX: 2x Intel Xeon Scalable, up to 120 cores
    Chipset
        P5: W790 | P7: W790 | PX: C741
    RAM
        P5: Up to 512 GB DDR5-4800 with ECC
        P7: Up to 1 TB DDR5-4800 with ECC
        PX: Up to 2 TB DDR5-4800 with ECC
    GPU
        P5: Up to 2x Nvidia RTX A6000 (Ampere)
        P7: Up to 3x Nvidia RTX 6000 (Ada Lovelace)
        PX: Up to 4x Nvidia RTX 6000 (Ada Lovelace)
    Storage
        P5: Up to 6 drives: 3x M.2 (12 TB) + 3x 3.5" (36 TB)
        P7: Up to 7 drives: 4x M.2 (16 TB) + 3x 3.5" (36 TB),
            or up to 6 drives: 5x M.2 (20 TB) + 1x 3.5" (12 TB)
        PX: Up to 9 drives: 7x M.2 (28 TB) + 2x 3.5" (24 TB),
            or up to 7 drives: 3x M.2 (12 TB) + 4x 3.5" (48 TB)
        RAID: M.2: 0/1/10/5; SATA: 0/1/5 (PX: SATA 0/1/10/5)
    Expansion
        P5: 2x PCIe 5.0 x16, 1x PCIe 4.0 x8, 3x PCIe 4.0 x4
        P7: 3x PCIe 5.0 x16, 1x PCIe 4.0 x16, 1x PCIe 4.0 x8, 1x PCIe 5.0 x4, 1x PCIe 4.0 x4
        PX (dual CPU): 4x PCIe 5.0 x16, 4x PCIe 4.0 x16, 1x PCIe 4.0 x8
        PX (single CPU): 2x PCIe 5.0 x16, 2x PCIe 4.0 x16
    Networking
        P5: 1GbE; Wi-Fi 6E 2x2 + BT 5.2
        P7: 10GbE + 1GbE; Wi-Fi 6E 2x2 + BT 5.2
        PX: 10GbE + 1GbE; Wi-Fi 6E 2x2 + BT 5.2
    I/O
        P5: Front: audio combo jack; optional: 2x USB-A 3.2 Gen 2, 2x USB-C 3.2 Gen 2
            Rear: 2x USB 2.0, 3x USB-C 3.2 Gen 2, 1x USB-C 3.2 Gen 2x2, 1GbE, line in, line out, serial (optional)
        P7: Front: audio combo jack; optional: 2x USB-A 3.2 Gen 2, 2x USB-C 3.2 Gen 2
            Rear: 2x USB 2.0, 3x USB-A 3.2 Gen 2, 1x USB-C 3.2 Gen 2x2, 1GbE, 10GbE, line in, line out, serial (optional)
        PX: Front (optional): 2x USB-A 3.2 Gen 2, 2x USB-C 3.2 Gen 2
            Rear: 2x USB 2.0, 4x USB-A 3.2 Gen 1, 1x USB-C 3.2 Gen 2x2, 1GbE, 10GbE, line in, line out, serial (optional)
    Dimensions
        P5: 165 x 453 x 440 mm (6.5 x 17.8 x 17.3 in)
        P7: 175 x 508 x 435 mm (6.9 x 20.0 x 17.1 in)
        PX: 220 x 575 x 435 mm (8.7 x 22.6 x 17.1 in)
    PSU
        P5: 750W or 1000W
        P7: 1000W or 1400W
        PX: 1850W, with optional redundancy
    Security
        P5/P7: TPM 2.0, self-healing BIOS, power-on password, UEFI Secure Boot, Kensington lock slot, padlock loop
        PX: TPM 2.0, self-healing BIOS, power-on password, UEFI Secure Boot, Kensington lock slot
    OS
        Preloaded: Windows 11 Pro for Workstations; Windows 10 Pro for Workstations (preinstalled through downgrade rights in Windows 11 Pro); Ubuntu Linux
        Supported: Windows 10 Enterprise Edition; Red Hat Enterprise Linux (certified)
    ThinkStation PX: An Ultimate Machine

    The range-topping Lenovo ThinkStation PX is an old-school, no-compromise dual-socket workstation with up to two Intel 4th Generation Xeon Scalable processors offering up to 120 general-purpose cores in total, coupled with up to 2 TB of DDR5-4800 memory, as well as up to four Nvidia RTX 6000 Ada 48 GB GDDR6 graphics cards. Regarding storage, the ThinkStation PX can house either seven M.2 SSDs and two 3.5-inch hard drives, or three M.2 SSDs and four 3.5-inch hard drives. Depending on the configuration, that works out to 28 TB or 12 TB of NAND flash storage and 24 TB or 48 TB of HDD storage, respectively. For the first time, Lenovo's most powerful workstation cannot be equipped with an optical disc drive.
    Connectivity and expansion (or rather, flexibility) are among the key features of workstations, since these machines are used for a wide range of professional applications, including animation, professional visualization, simulation, rendering, and video editing, among other things. To address these needs, the ThinkStation PX can be equipped with up to four PCIe 5.0 x16 add-in boards (e.g., four graphics cards), four PCIe 4.0 x16 AIBs, and one PCIe 4.0 x8 card. The machine also has one 10GbE port, one GbE connector, an Intel AX210 Wi-Fi 6E and Bluetooth 5.2 adapter, one USB 3.2 Gen2x2 Type-C port on the back, multiple USB 3.2 Gen2 Type-A and Type-C ports on the front and back, and audio connectors. Surprisingly, the workstation lacks Thunderbolt 4 and USB4 connectivity.
    While the ThinkStation PX is an ultimate machine with unprecedented performance, Lenovo understands that such dual-socket workstations are headed for extinction. The system comes in a rack-optimized chassis and can be used remotely and/or for virtual desktop infrastructure (VDI) applications.
    ThinkStation P7: A Versatile Xeon W Workstation

    Lenovo's ThinkStation P7 sits below the PX, but this single-socket machine offers formidable performance to address the needs of demanding architects, content creators, designers, engineers, and data scientists. Just like its bigger brother, this unit comes in a rack-optimized case.
    The ThinkStation P7 packs Intel's Xeon W-3400-series CPUs with up to 56 cores, accompanied by up to 1 TB of DDR5-4800 memory and up to three Nvidia RTX 6000 Ada graphics cards with 48 GB of memory each. The machine can be equipped with up to four M.2 SSDs and three 3.5-inch hard drives, providing up to 52 TB of storage space.
    As for expansion capabilities, the unit can accommodate three PCIe 5.0 x16 AIBs, one PCIe 5.0 x4 card, one PCIe 4.0 x16 board, one PCIe 4.0 x8 AIB, and a PCIe 4.0 x4 card. It has one 10GbE port, one GbE connector, an Intel AX210 Wi-Fi 6E and Bluetooth 5.2 adapter, a USB 3.2 Gen2x2 Type-C on the back, multiple USB 3.2 Gen2 Type-A and Type-C ports, and audio jacks. Again, the machine lacks TB4 and USB4 connectivity.
    ThinkStation P5: Compact and Powerful

    The ThinkStation P5 may not be as advanced or as powerful as the P7 and PX, but it packs more punch than almost any high-performance desktop and is aimed at a variety of performance-hungry workloads. Meanwhile, measuring 165mm x 453mm x 440mm, this relatively compact, traditional desktop is not meant for rack installation (although that is probably not impossible with appropriate third-party kits).
    Lenovo's ThinkStation P5 is powered by Intel's Xeon W-2400-series processors with up to 24 cores, mated with up to 512 GB of DDR5-4800 memory and up to two Nvidia RTX A6000 graphics boards with 48 GB of GDDR6 SGRAM that can be linked to each other using NVLink. The workstation can be equipped with three M.2 SSDs and three 3.5-inch HDDs for a total of 48 TB of storage space.
    The machine has two PCIe 5.0 x16 slots, one PCIe 4.0 x8 slot, and three PCIe 4.0 x4 slots. It also comes with a GbE port, an Intel AX210 Wi-Fi 6E and Bluetooth 5.2 adapter, a USB 3.2 Gen2x2 Type-C port, several USB 3.2 Gen2 connectors, and audio jacks. Unfortunately, like its bigger brothers, the system lacks Thunderbolt 4 and USB4 ports.
    Availability and Pricing

    Lenovo plans to start selling its new ThinkStation PX, P7, and P5 workstations this May. The company has not disclosed pricing for these machines, though it will likely resemble the pricing of its current-generation dual-socket machines and P700- and P500-series machines.


    More...

  9. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #11649

    Anandtech: AMD AM5 Motherboards Finally Reach $125 Mark with ASRock's mATX B650M-HDV/

    When AMD introduced its Ryzen 7000-series processors and AM5 platform last September, it said that, over time, motherboards for the new CPUs would reach mainstream price points. Yet, nearly six months after launch, AM5 motherboard prices have proven stubbornly high, which is one of the reasons for the platform's slow adoption by the masses. Fortunately, this week ASRock started to sell a new budget microATX motherboard, the B650M-HDV/M.2, which has become the first AM5 board to land on retail shelves for $125.
    Most existing AM5 motherboards, including those based on the mid-range B650 chipset, are generally aimed at enthusiasts who are looking for extra features and are willing to pay for them. As a result, there's been a dearth of truly cheap B650 boards on the market. But, at long last, things are starting to turn around with ASRock's B650M-HDV/M.2, which at $125 brings the advantages of AMD's latest platform down to a lower price point.
    Given that this is a $125 motherboard, it's fair to say it's not packed to the gills with frills. But ASRock seems to have done a good job balancing features against costs. The resulting board offers just two DDR5 DIMM slots (1 DPC), and as a non-Extreme motherboard its PCIe x16 slot is just PCIe 4.0. But ASRock has still been able to build in a VRM powerful enough to support all AM5 processors (including the top-of-the-range Ryzen 9 7950X3D). Meanwhile, I/O connectivity includes two physical PCIe 4.0 x16 slots (x16 and x4 electrical), a PCIe 5.0 x4 M.2 slot and a PCIe 4.0 x4 M.2 slot for SSDs, a USB 3.2 Gen2 Type-C port, four SATA connectors, a 2.5GbE port, an M.2-2230 slot for a Wi-Fi adapter, and a Realtek ALC897 7.1-channel audio codec.
    The end result is a relatively cheap AM5 board that, on paper, looks like it should still be more than enough for building a high-performance Ryzen 7000 system.
    "When AM5 launched I said that we would see motherboards starting at $125," wrote David McAfee, CVP and GM of Ryzen channel business at AMD, in a Tweet. "As HotHardware noticed, my timing 'might' have been a bit off, but I'm happy to see that ASRock is first to market with a $125 B650 board for AMD Ryzen."
    The motherboard has a basic 8+2+1-phase CPU VRM that is not meant for overclocking, but is good enough for running processors at stock clocks. And since it only has two DIMM slots, it only supports up to 64 GB of DDR5-6400 memory. That said, besides the fact that 64 GB should be enough for most desktop workloads, Ryzen 7000 processors take a significant memory frequency hit with more than one DIMM per channel anyhow, making four DIMM slots non-ideal. The platform does not offer a PCIe 5.0 x16 slot for graphics boards, but there aren't any consumer graphics cards with such an interface anyway. Otherwise, it should be noted that the motherboard does not have built-in Wi-Fi support, though M.2-2230 adapters are inexpensive and easy to come by if Wi-Fi is needed on a desktop computer.
    For now, ASRock's B650M-HDV/M.2 is the only $125 motherboard for AM5 processors, but we hope to see other board makers follow suit and offer cheaper boards to allow for more inexpensive Ryzen 7000 system builds. Eventually, AMD is expected to introduce its cut-down A620 chipset, which will allow motherboard makers to offer even cheaper AM5 platforms. But for now, the inexpensive B650M-HDV/M.2 for $125 at Newegg seems like a reasonable choice.


    More...

  10. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #11650

    Anandtech: Fujitsu Preps Monaka Datacenter CPU to Succeed A64FX: Greater Efficiency a

    Fujitsu has revealed that it is prepping a successor to its A64FX processor for high-performance computing. The company's second-generation Arm-based server CPU is slated to offer considerably higher performance and energy efficiency than its predecessor, and will add features to address AI and data analytics applications. The CPU is codenamed Monaka; it will arrive sometime in 2027 and will power a next-generation supercomputer due in 2028.
    Like the original A64FX, Fujitsu's Monaka will once again be an Arm ISA processor. But it will also integrate hardware to accelerate artificial intelligence (AI) and data analytics applications, according to details released by the company at its ActivateNow: Technology Summit at the Computer History Museum in Mountain View, California, reports The Register.
    The promise to boost performance in traditional HPC and emerging AI workloads is logical. Although Fujitsu's existing A64FX already supports 512-bit Scalable Vector Extensions (SVE) and can operate in FP64, FP32, FP16, and INT8 modes for a variety of AI and traditional supercomputer applications, the rapidly developing field of AI workloads has been adopting new data formats beyond FP16 and INT8. Meanwhile, retaining the Arm architecture will ensure that the Monaka processor will be able to run code developed for the original A64FX CPU as well as for other Arm-based datacenter system-on-chips.
    "The next-generation DC CPU (Monaka) that we are developing will have a wider range of features and will prove more energy efficient," a Fujitsu spokesperson told The Register. "The range of potential applications is wider than that of the A64FX, which has special characteristics (e.g., interconnects) specific to Fugaku.
    One of Fujitsu's main goals with Monaka is to provide 'overwhelming energy efficiency' compared with competing processors available at the time, according to The Register, citing the company's officials. The firm is aiming to deliver 70% higher overall performance and 100% higher performance-per-watt than competing chips. Though with delivery not expected until 2027, it goes without saying that any competitive performance expectations are aspirational at best.
    Fujitsu's current 48+4-core A64FX processor for HPC has proven that the Arm architecture is perfectly capable of powering supercomputers, in this case Fugaku, which was the world's fastest supercomputer from 2020 to 2022. But the CPU is chiefly tailored for traditional supercomputer workloads, and as a result it's only been used in a handful of systems, including Fugaku, Fujitsu's PrimeHPC FX700 and FX1000 systems (which are available for purchase), and HPE's Apollo 80 HPC platform.
    Monaka, in turn, will allow Fujitsu to take a stab at supplying the broader HPC market with a high performance Arm processor. While the company isn't offering specific technical details at this time, they are making it clear that they're designing the chip for a wider audience, as opposed to the supercomputer-focused A64FX and its niche features like on-package HBM2 and the Tofu Interconnect D fabric to connect multiple nodes in a cluster. Shifting to a broader audience opens up more sales opportunities for Fujitsu, but it will put the company in more direct competition with other Arm server CPU vendors such as NVIDIA, Ampere, and the many internal projects at hyperscalers.
    In any case, it'll be interesting to see how things unfold once Monaka arrives in 2027. The Arm server CPU market has quickly blossomed over the last few years, so by the time Monaka hits the scene, it's going to be coming into a market with lots of opportunity for Arm servers and Arm software, but also a market with no shortage of companies trying to claim their piece of the pie.


    More...
