Thread: Anandtech News

  1. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,802
    Post Thanks / Like
    #11721

    Anandtech: Samsung Foundry Vows to Surpass TSMC Within Five Years

    The head of Samsung's semiconductor unit acknowledged last week that the company's current leading-edge process technologies in mass production are a couple of years behind TSMC's most advanced nodes. But Samsung is working hard to catch up with, and ultimately surpass, its larger rival within five years.
    "To be honest, Samsung Electronics' foundry technology lags behind TSMC," said Dr. Kye Hyun Kyung, the head of the Samsung Electronics Device Solutions Division, overseeing global operations of the Memory, System LSI and Foundry business units," at a lecture at the Korea Advanced Institute of Science & Technology (KAIST), reports Hankyung. "We can outperform TSMC within five years."
    Samsung has been investing tens of billions of dollars in its foundry division in recent years in a bid to catch up with TSMC and Intel, both in terms of production capacity for logic chips and in process technology. The company has significantly closed the gap with its rivals, but it is still not quite on par with TSMC's fabrication technologies when it comes to performance, power, area (transistor density), and cost metrics.
    While Samsung Foundry is the first contract chipmaker to adopt gate-all-around (GAA) transistors with its SF3E (3GAE, 3 nm, gate-all-around early) node, and the company's customers are enthusiastic about the technology and its novel transistor architecture, this process is not used for Samsung's own leading-edge smartphone system-on-chips.
    "Customers' response to Samsung Electronics' 3nm GAA process is good," said Dr. Kye Hyun Kyung.
    Meanwhile, Samsung's latest Galaxy S23 series uses Qualcomm's Snapdragon 8 Gen 2 SoC, which is made by TSMC on its N4 fabrication process.
    Samsung Foundry's most advanced technology that can be used to make highly-complex SoCs for smartphones and other demanding applications is SF4 (4LPP, 4 nm, low-power plus), which, as the company admits, is significantly behind TSMC's N3 (N3B) node, the process rumored to be used at this time for mass production of Apple's highly-complex SoCs.
    The company may somewhat close the gap with TSMC's N3 and N4P nodes with its SF4P (4LPP+) technology, which will be available to customers later this year, according to a clarification published by @Tech_Reve.
    Samsung Foundry will have a better chance to catch up with TSMC when its SF3 (3GAP) fabrication node enters high-volume production in 2024, though by then TSMC will also be offering its more advanced N3P manufacturing technology. Around the same time, Samsung also plans to offer SF4X (4HPC), a 4 nm-class fabrication technology that will (as the name suggests) address high-performance CPUs and GPUs.
    Samsung reportedly believes that transitioning to GAA transistors in the 2022 ~ 2023 timeframe makes great sense, since it will have time to fix the teething problems of the new transistor architecture ahead of its rivals, most notably Intel and TSMC. As a result, when they start fabbing chips on their 2 nm-class technologies (20A, N2) in 2024 – 2025 and possibly encounter the same issues that Samsung is solving today, its SF2 node will be able to offer a better combination of power, performance, transistor density, cost, and yield.
    Source: Hankyung.com (via @Tech_Reve)



    More...

  2. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,802
    Post Thanks / Like
    #11722

    Anandtech: Noctua Publishes Roadmap: Next-Gen AMD Threadripper Coolers Incoming

    Unlike other makers of cooling systems, Noctua publishes its roadmap on its website and regularly updates it to reflect changes in its product development plans. The company's May 2023 roadmap brings several surprises, as it adds 'next-gen AMD Threadripper coolers' and removes white fans from its plans.
    The main thing that strikes the eye in Noctua's roadmap is the mention of 'next-gen AMD Threadripper coolers' coming in the third quarter. These products were not on the roadmap in January, per a slide published by Tom's Hardware. AMD has been rumored to be readying its next-generation Ryzen Threadripper processors for workstations for a while, but this is one of the first times we have seen a more or less official confirmation of such plans, albeit not from AMD itself, but from one of its partners.
    Since the confirmation does not come from the CPU developer, we would not bet on the next-generation Ryzen Threadripper, based on the Zen 4 microarchitecture, launching in Q3. Meanwhile, it is reasonable to expect AMD's codenamed Storm Peak processors to arrive sooner rather than later, since the company has not updated this lineup in a while.
    Other notable items in Noctua's roadmap are a range of Chromax black products due in Q4, a 24V-to-12V voltage converter set to arrive in Q2, and a 24V 40-mm fan, which emphasizes that the company considers the ATX12VO ecosystem essential to address. In addition, the firm is prepping its next-generation 140-mm fans, which will arrive in Q1 2024 in regular colors and then later in the year in Chromax black versions.
    Unfortunately, Noctua's next-generation NH-D15 cooler, which was once promised for Q1 2023, is now slated for sometime in 2024. Meanwhile, the company's roadmap no longer includes white fans, for a reason we cannot explain. Perhaps the company decided to devote its resources elsewhere, or maybe the white plastic it considered for the fans did not meet its expectations.
    Source: Noctua
    Gallery: Noctua Publishes Roadmap: Next-Gen AMD Threadripper Coolers Incoming





    More...

  3. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,802
    Post Thanks / Like
    #11723

    Anandtech: NVIDIA Launches Diablo IV Bundle for GeForce RTX 40 Video Cards

    NVIDIA is launching a new game bundle for its latest generation GeForce RTX 40-series graphics cards and OEM systems. This time, NVIDIA has teamed up with Activision Blizzard to offer a free copy of the latest iteration of their wildly popular action RPG series, Diablo IV.
    This promotion will run globally, starting now and running through June 16, 2023. For more than a month, customers purchasing GeForce RTX 4090, 4080, 4070 Ti, 4070 graphics cards or desktops containing one of them from various vendors will get a free digital download code of Diablo IV Standard Edition on Battle.net. The code for the title must be redeemed before July 13, 2023.
    NVIDIA Current Game Bundles (May 2023)
    Video Card (incl. systems and OEMs): Game
    GeForce RTX 40 Series Desktop (All): Diablo IV
    GeForce RTX 30 Series Desktop (All): None
    GeForce RTX 40 Series Laptop (All): None
    GeForce RTX 30 Series Laptop (All): None
    For NVIDIA, Diablo IV will also be a technology showcase, as it is set to support the company's DLSS 3 upscaling technology as well as its Reflex latency-reduction technology out of the box at launch. Ray tracing is also slated to be added at some point after the game launches. At retail pricing, Activision Blizzard's Diablo IV Standard Edition costs $69.99 at Battle.net, though NVIDIA is undoubtedly getting a bulk deal.
    It should be noted that this latest game bundle is just for NVIDIA's RTX 40 series desktop cards. Unlike the since-expired Redfall bundle, NVIDIA is not offering Diablo IV (or any other games) with GeForce-based laptops. Nor are any remaining GeForce RTX 30 series products covered.
    Diablo IV will officially release on June 6, 2023.
    Source: NVIDIA



    More...

  4. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,802
    Post Thanks / Like
    #11724

    Anandtech: AMD To Host AI and Data Center Event on June 13th - MI300 Details Inbound?

    In a brief note posted to its investor relations portal this morning, AMD has announced that they will be holding a special AI and data center-centric event on June 13th. Dubbed the “AMD Data Center and AI Technology Premiere”, the live event is slated to be hosted by CEO Dr. Lisa Su, and will be focusing on AMD’s AI and data center product portfolios – with a particular spotlight on AMD’s expanded product portfolio and plans for growing out these market segments.
    The very brief announcement doesn’t offer any further details on what content to expect. However, the very nature of the event points a clear arrow at AMD’s forthcoming Instinct MI300 accelerator. MI300 is AMD’s first shot at building a true data center/HPC-class APU, combining the best of AMD’s CPU and GPU technologies. AMD has offered only a handful of technical details about MI300 thus far – we know it’s a disaggregated design, using multiple chiplets built on TSMC’s 5nm process, and using 3D die stacking to place them over a base die – and with MI300 slated to ship this year, AMD will need to fill in the blanks as the product gets closer to launch.
    As we noted in last week’s AMD earnings report, AMD’s major investors have been waiting with bated breath for additional details on the accelerator. Simply put, investors are treating data center AI accelerators as the next major growth opportunity for high-performance silicon – eyeing the high margins these products have afforded over at NVIDIA and other AI-adjacent rivals – so there is a lot of pressure on AMD to claim a slice of what’s expected to be a highly profitable pie. MI300 is a product that has been in the works for years, so the pressure is more of a reaction to the money than the silicon itself, but still, MI300 is expected to be AMD’s best opportunity yet to capture a meaningful portion of the data center GPU market.
    MI300 aside, given the dual AI and data center focus of the event, this is also where we’re likely to see more details on AMD’s forthcoming EPYC “Genoa-X” CPUs. The 3D V-Cache-equipped version of AMD’s current-generation EPYC 9004 series Genoa CPUs, Genoa-X has been on AMD’s roadmap for a while. And with their consumer equivalent parts already shipping (Ryzen 7000X3D), AMD should be nearing completion of the EPYC parts. AMD has previously confirmed that Genoa-X will ship with up to 96 CPU cores, with over 1GB of total L3 cache available on the chip to further boost performance on workloads that benefit from the extra cache.
    AMD’s ultra-dense EPYC Bergamo chip is also in the pipeline, though given the high-performance aspects of the presentation, it’s a little more questionable whether it will be at the show. Based on AMD’s compacted Zen4c architecture, Bergamo is aimed at cloud service providers who need lots of cores to split up amongst customers, with up to 128 CPU cores on a single Bergamo chip. Like Genoa-X, Bergamo is slated to launch this year, so further details about it should come to light sooner than later.
    But whatever AMD does (or doesn’t) show at their event, we’ll find out on June 13th at 10am PT (17:00 UTC). AMD will be live streaming the event from their website as well as YouTube.


    More...

  5. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,802
    Post Thanks / Like
    #11725

    Anandtech: Samsung to Unveil Refined 3nm and Performance-Enhanced 4nm Nodes at VLSI Symposium

    Samsung Foundry is set to detail its second-generation 3 nm-class fabrication technology as well as its performance-enhanced 4 nm-class manufacturing process at the upcoming 2023 Symposium on VLSI Technology and Circuits in Kyoto, Japan. Both technologies are important for the contract chipmaker, as SF3 (3GAP) promises to offer tangible improvements for mobile SoCs, whereas SF4X (4HPC) is designed specifically for the most demanding high-performance computing (HPC) applications.
    2nd Generation 3 nm Node with GAA Transistors

    Samsung's upcoming SF3 (3GAP) process technology is an enhanced version of the company's SF3E (3GAE) fabrication process, and relies on its second-generation gate-all-around transistors – which the company calls Multi-Bridge-Channel field-effect transistors (MBCFETs). The node promises additional process optimizations, though the foundry prefers not to compare SF3 with SF3E. Compared to SF4 (4LPP, 4 nm-class, low power plus), SF3 claims a 22% performance boost at the same power and complexity, or a 34% power reduction at the same clocks and transistor count, as well as a 21% logic area reduction, though it is unclear whether the company has achieved any scaling for SRAM and analog circuits.
    In addition, Samsung claims that SF3 will provide additional design flexibility facilitated by varying nanosheet (NS) channel widths of the MBCFET device within the same cell type. Curiously, variable channel width is a feature of GAA transistors that has been discussed for years, so the way Samsung is phrasing it in the context of SF3 might mean that SF3E does not support it.
    Thus far neither Samsung LSI, the conglomerate's chip development arm, nor other customers of Samsung Foundry have formally introduced a single highly-complex processor mass produced on SF3E/3GAE process technology. In fact, it looks like the only publicly-acknowledged application that uses the industry's first 3 nm-class fabrication process is a cryptocurrency mining chip, according to TrendForce. This is not particularly surprising as usage of Samsung's 'early' nodes is typically quite limited.
    By contrast, Samsung's 'plus' technologies are typically used by a wide range of customers, so the company's SF3 (3GAP) process is likely to see much higher volumes when it becomes available sometime in 2024.
    SF4X for Ultra-High-Performance Applications

    In addition to SF3, which is designed for a variety of possible use cases, Samsung Foundry is prepping its SF4X (4HPC, 4 nm-class high-performance computing) designed for performance-demanding applications like datacenter-oriented CPUs and GPUs.
    To address such chips, Samsung's SF4X offers a performance boost of 10% coupled with a 23% power reduction. Samsung doesn't explicitly specify what process node that comparison is being made against, but presumably, this is against their default SF4 (4LPP) fabrication technology. To achieve this, Samsung redesigned transistors' source and drain after reassessing their stresses (presumably under high loads), performed further transistor-level design-technology co-optimization (T-DTCO), and introduced a new middle-of-line (MOL) scheme.
    The new MOL scheme enables SF4X to offer a 60 mV reduction in silicon-proven CPU minimum voltage (Vmin), a 10% decrease in the variation of off-state current (IDDQ), guaranteed high-voltage (Vdd) operation at over 1 V without performance degradation, and an improved SRAM process margin.
    Samsung's SF4X will be a rival for TSMC's N4P and N4X nodes, which are due in 2024 and 2025, respectively. Based on claimed specifications alone, it is hard to tell which technology will offer the best combination of performance, power, transistor density, efficiency, and cost. That said, SF4X will be Samsung's first node in recent years that was specifically architected with HPC in mind, which implies that Samsung has (or is expecting) enough customer demand to make it worth its while.



    More...

  6. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,802
    Post Thanks / Like
    #11726

    Anandtech: Philips Reveals Dual Screen Display: a 24-Inch LCD with E Ink Secondary Screen

    Although E Ink technology has remained a largely niche display tech over the past decade, it's nonetheless excelled in that role. The electrophoretic technology closely approximates paper, providing significant power advantages versus traditional emissive displays, not to mention being significantly easier on readers' eyes in some cases. And while the limitations of the technology make it unsuitable for use as a primary desktop display, Philips thinks there's still a market for it as a secondary display. To that end, Philips this week has introduced their novel, business-oriented Dual Screen Display, which combines both an LCD panel and an E Ink panel into a single display, with the aim of capturing the benefits of both technologies.
    The Philips Dual Screen Display (24B1D5600/96) is a single display that integrates both a 23.8-inch 2560x1440 IPS panel as well as a 13.3-inch, greyscale 1200x1600 resolution E Ink display. With each display operating independently, the idea is similar to previous concepts of multi-panel monitors; however Philips is taking things in a different direction by using an E Ink display as a second panel – combining two otherwise very different display technologies into a single product. By offering an E Ink panel in this product, Philips is looking to court the market for users who would prefer the reduced eye strain of an E Ink display, but are working at a desktop computer, where an E Ink display would not be viable as a primary monitor.
    As you might expect from the basic layout of the monitor, the primary panel is a rather typical office display that's designed for video and productivity applications – essentially anything where you need a modern, full color LCD. The secondary E Ink display, on the other hand, is a greyscale screen whose strength is the lack of flicker that comes from not being backlit by a PWM light. Both screens act independently, but since they are encased into the same chassis, they are meant to work together. For example, the secondary monitor can display supplementary information in text form, whereas the primary monitor can display photos.
    Ultimately, Philips is pitching the display on the idea that the secondary screen can reduce the eye strain of the viewer while viewing documents. It's a simple enough concept, but one that requires buyers to overlook the trade-offs of E Ink, and the potential drawbacks of having two dissimilar displays directly next to each other.
    Under the hood, the LCD panel on the Dual Screen Display is an unremarkable office-grade display. Philips is using a 23.8-inch anti-glare 6-bit + Hi FRC IPS panel with a 2560x1440 resolution, which can hit a maximum brightness of 250 nits while delivering 178-degree viewing angles. Meanwhile, the E Ink panel is a 13.3-inch 4-bit greyscale electrophoretic panel with a resolution of 1200x1600. Notably, there is no backlighting here; the E Ink panel is meant to be environmentally lit (e.g. office lighting) to truly minimize eye strain.
    When it comes to connectivity, the primary screen is equipped with a DisplayPort 1.2 and a USB Type-C input (with DP Alt mode and USB Power Delivery support), a USB hub, and a GbE adapter. Meanwhile, the secondary screen connects to host using a USB Type-C connector that also supports DP Alt Mode, and Power Delivery.
    Specifications of the Philips Dual Screen Display (24B1D5600/96)
    (Primary Screen / Secondary Screen)
    Panel: 23.8" IPS 6-bit + Hi FRC / 13.3" E Ink 4-bit
    Native Resolution: 2560 × 1440 / 1200 × 1600
    Maximum Refresh Rate: 75 Hz / ?
    Response Time: 4 ms / ?
    Brightness: 250 cd/m² (typical) / ?
    Contrast: 1000:1 / ?
    Viewing Angles: 178°/178° horizontal/vertical / high
    HDR: none / none
    Dynamic Refresh Rate: none / none
    Pixel Pitch: 0.2058 mm / 0.2058 mm
    Pixel Density: 123 ppi / 150 ppi
    Display Colors: 16.7 million / 4-bit greyscale
    Color Gamut Support: NTSC 99%, sRGB 99% / -
    Aspect Ratio: 16:9 / 3:4
    Stand: Height: +/-100 mm; Tilt: -5°/23°; Swivel: 45°
    Inputs: 1 × DisplayPort (HDCP 1.4), 1 × USB-C (HDCP 1.2 + PD) / 1 × USB-C (HDCP 1.4 + PD)
    Outputs: - / -
    USB Hub: USB 3.0 hub / -
    Launch Date: Q2 2023
    The Philips Dual Screen Display has a rather sleek stand which can adjust height, tilt, and swivel. It makes the whole unit look like one monitor rather than like two separate screens. Though to be sure, the E Ink portion of the display can be angled independently from the LCD panel, allowing the fairly wide monitor to contour to a user's field of view a bit better.
    When it comes to pricing, Philips's Dual Screen Display is available in China for $850 (according to Liliputing), which looks quite expensive for a 24-inch IPS LCD and a 13.3-inch secondary screen. Though as this is a rather unique product, it is not surprising that it is sold at a premium.



    More...

  7. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,802
    Post Thanks / Like
    #11727

    Anandtech: Asus Formally Unveils ROG Ally Portable Console: Eight Zen 4 Cores and RDNA 3 Graphics

    Asus on Thursday officially introduced the ROG Ally, its first handheld gaming PC. With numerous handheld gaming systems around, most notably Steam Deck, Asus needed something special to be successful and fulfill the promise of the ROG brand. To that end, the ROG Ally promises a unique combination of performance enabled by AMD's latest mobile CPU, high compatibility due to usage of Windows 11, portability, and other features.
    Performance: To Extreme, or Not to Extreme?

    First teased by Asus last month, the ROG Ally is the company's effort to break into the handheld gaming PC space, which Valve has essentially broken open in the past year with the Steam Deck.
    When developing the ROG Ally, Asus wanted to build a no-compromise machine that would combine the performance of mobile PCs with the portability of a handheld device. This is where AMD's recently-launched Zen 4-based Ryzen Z1 and Ryzen Z1 Extreme SoCs, which are aimed specifically at ultra-portable devices, come into play.
    Based on AMD's 4nm Phoenix silicon, the eight-core Ryzen Z1 Extreme processor and its 12 CU RDNA 3-based GPU resembles the company's Ryzen 7 7840U CPU. Meanwhile Asus is also offering a version of Ally using the lower-tier Z1 chip, which still uses eight CPU cores and pairs that with a 4 CU GPU. On paper, the Z1 Extreme chip is significantly more powerful in graphics tasks as a result (~3x), however in practice the chips are closer, as thermal and memory bandwidth limits keep the Extreme chip from running too far ahead.
    Speaking of graphics performance, it should be noted that Asus's ROG Ally console is equipped with the ROG XG Mobile connector (a PCIe 3.0 x8 link for data and a USB-C interface for power and USB connections) that can be used to connect an Asus ROG XG Mobile eGFX dock to the handheld. The XG docks come with a range of GPUs installed, up to a GeForce RTX 4090 Laptop GPU. The XG dock essentially transforms the ROG Ally into a high-performance gaming system, albeit by supplanting much of its on-board functionality. The fact that Asus offers eGFX capability right out of the box is a significant feature differentiator for the ROG Ally, though be prepared to invest $1,999.99 if you want the top-end GeForce RTX 4090 Laptop-equipped XG dock.
    Both versions of ROG Ally will come with 16GB of LPDDR5-6400 memory and a 512GB SSD in an M.2-2230 form-factor with a PCIe 4.0 interface. While replacing the M.2 drive is reportedly a relatively easy task, for those who want to expand storage space without opening anything up, the console also has an UHS-II-compliant microSD card slot.
    Display: Full-HD at 120 Hz

    The ROG Ally is not only the first handheld with the Ryzen Z1 Extreme CPU, but will also be among the first portable game consoles with a 1920x1080 resolution 7-inch display; and one that supports a maximum refresh rate of 120 Hz, no less. The Gorilla Glass Victus-covered display uses an IPS-class panel with a peak luminance of 500 nits as well as Dolby Vision HDR support to make games more appealing.
    In addition to the Dolby Vision HDR-badged display, the Asus ROG Ally also has a Dolby Atmos-certified audio subsystem with Smart Amp speakers and noise-cancelation technology.
    Ergonomics: 600 Grams and All the Controls

    When it comes to mobile devices, ergonomics is crucial. Yet it is pretty hard to design a handheld game console that essentially uses laptop-class silicon, with all of its peculiarities. When Asus began work on the ROG Ally, it asked mobile gamers what they thought was the most important feature for a portable console, and apparently it was weight. So Asus set about designing a device that would weigh around 600 grams and would be comfortable to use.
    "When we go through survey with our focus group, the number one thing that they wanted was a balanced weight handheld device," said Shawn Yen, vice president of Asus's Gaming Business Unit responsible for ROG products. "The target was 600 grams because the current handheld devices in the market today are too heavy. It is not something that they can engage for a very long period of time. So, their game time got cut down because it is not comfortable. So, uh, when we first thought about the design target for ROG Ally, we were thinking about a device that can get into gamers' hands for hours of fun time."

    The display and chassis are among the heaviest components of virtually all mobile devices, so there is little that can be done about those. But in a bid to optimize the weight and distribute it across the device, the company had to implement a very well-thought-out motherboard design, and use anti-gravity heat pipes to ensure proper cooling at all times without using too many of them, as this increases weight. Meanwhile, Asus still had to use two fans and a radiator with 0.1 mm ultra-thin fins to ensure that the CPU is cooled properly, as it can still dissipate up to 30W of heat. To further optimize weight, Asus opted for a polycarbonate chassis.
    Since the Asus ROG Ally is essentially a Windows 11-based PC, albeit in a portable game console form factor, the company had to incorporate all the pads and buttons featured on conventional gamepads, plus some additional controls for Windows (e.g., a touchscreen) and ROG Ally-specific things like the Armoury Crate game launcher and two macro buttons. It's also worth noting that, seemingly because of the use of Windows 11, the Ally is not capable of consistently suspending games while it sleeps, a notable difference compared to other handheld consoles.
    Meanwhile, the trade-off to hitting their weight target while still using a relatively powerful SoC has been battery life. The Ally comes with a 40 Wh battery, and Asus officially advertises the handheld as offering up to 2 hours of battery life in heavy gaming workloads. Early reviews, in turn, have matched this, if not coming in below 2 hours in some cases. The higher-resolution display and high-performance AMD CPU are both key differentiating factors for the Ally, but these parts come at a high power cost.
    Vast Connectivity

    Being a PC, the ROG Ally is poised to offer the connectivity that one comes to expect from a portable computer. The unit features a Wi-Fi 6E and Bluetooth adapter, a microSD card slot for additional storage, a USB Type-C port for both charging and display output, an ROG XG Mobile connector for external GPUs, and a TRRS audio connector for headsets.
    The Price

    The ROG Ally with AMD's Ryzen Z1 Extreme CPU is set to launch globally on June 13, 2023, at a price point of $699.99. Meanwhile, the non-Extreme Z1 version of the Ally has been listed for $599.99, though no release date has been set. The first reviews are already out, so Asus is giving potential customers a long lead time to evaluate the console before it's released next month.



    More...

  8. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,802
    Post Thanks / Like
    #11728

    Anandtech: Asus Unveils Two Slimmer GeForce RTX 4090 Video Cards: ROG Strix LC and TUF OG

    Asus has expanded the company's GeForce RTX 40-series product portfolio with two new RTX 4090 graphics cards. The ROG Strix LC GeForce RTX 4090 and TUF Gaming GeForce RTX 4090 OC, which are available in regular and OC editions, have arrived to compete in the high-end segment. What makes these cards notable, in turn, is their reduced size: the new cards are physically smaller than Asus' early RTX 4090 offerings, as well as many of the competitors on the market.
    The GeForce RTX 4090 is a 450W gaming graphics card, with large coolers to match. Even NVIDIA's hard-to-get GeForce RTX 4090 Founders Edition is a triple-slot graphics card, and air-cooled AIB cards tend to be larger still. So for the size-conscious gamer, this leaves liquid cooled cards, which brings us to Asus's new ROG Strix LC GeForce RTX 4090. The closed-loop card moves a lot of its bulk off to an attached 240 mm radiator block, bringing the card itself down to 2.6-slots wide.
    The ROG Strix LC GeForce RTX 4090's hybrid cooling system packs a cold plate that cools the large AD102 GPU and the neighboring GDDR6X memory chips. The heat is transferred to the 240 mm radiator through 560 mm tubing, so there won't be an issue with large cases. A low-profile heatsink with a blower-style cooling fan keeps the other power delivery components cool. Meanwhile, the radiator itself is equipped with a pair of 120 mm ARGB cooling fans to dissipate the heat once it gets there.
    Asus GeForce RTX 4090 Specifications
    (ROG Strix LC GeForce RTX 4090 / TUF Gaming GeForce RTX 4090 OG / TUF Gaming GeForce RTX 4090)
    Regular Edition Boost Clock (Default / OC): 2,520 MHz / 2,550 MHz on all three cards
    OC Edition Boost Clock (Default / OC): 2,610 MHz / 2,640 MHz; 2,565 MHz / 2,595 MHz; 2,565 MHz / 2,595 MHz
    Display Outputs: 2 × HDMI 2.1a + 3 × DisplayPort 1.4a on all three cards
    Design: 2.6 Slot; 3.2 Slot; 3.65 Slot
    Power Connectors: 1 × 16-pin on all three cards
    Dimensions: 293 × 133 × 52 mm; 325.9 × 140.2 × 62.8 mm; 348.2 × 150 × 72.6 mm
    Radiator Dimensions: 272 × 121 × 54 mm; N/A; N/A
    Asus's other new RTX 4090 card, the air-cooled TUF Gaming GeForce RTX 4090 OG, is a unique case of its own. Technically, it's a new SKU; however, the graphics card reuses the TUF Gaming cooler from the TUF Gaming GeForce RTX 3090 Ti.
    This is notable because the TUF cooler used on the 3090 Ti was a good bit smaller than Asus's first RTX 4090 cooler. The net result is that these changes bring the new OG card's width from 3.65 slots (and arguably, wide enough that you need to leave a 4th slot open for air flow) down to 3.2 slots - just enough room for proper airflow if the neighboring 4th slot is occupied. Altogether, the OG model is smaller in every dimension, shaving off 6% of its height, 7% of its length, and 13% of its width. Asus doesn't list the weight of its graphics cards, so we cannot comment on whether the new OG version has lost weight.
    By most accounts, Asus's current RTX 4090 cooler is highly effective – it's just also really big. So offering a separate SKU with a smaller cooler makes a good deal of sense, especially given how popular NVIDIA's true triple-slot Founders Edition card has been. The smaller TUF cooler is rated for the same 450W TDP as the larger TUF 4090 cooler, but, as always, there may be performance/acoustic tradeoffs involved.
    There's one other change that Asus doesn't advertise with the TUF Gaming GeForce RTX 4090 OG: the renders on the product page show the graphics card with a longer PCB. One of the advantages of the more compact PCB on the previous model was that it permitted Asus (and NVIDIA) to vent heat out of the back side of the card, as well as to optimize the trace layouts and component placement. Meanwhile, with the longer PCB, Asus has relocated the 16-pin power connector: instead of being placed in the middle of the card, it now sits toward the right side.
    Gallery: Asus GeForce RTX 4090 GPUs


    Between the two new cards, the ROG Strix LC GeForce RTX 4090 ends up with the edge in clockspeeds, flaunting boost clock speeds up to 2,640 MHz when in its highest performance mode. Meanwhile, the TUF Gaming GeForce RTX 4090 OG series have the same clock speeds as the vanilla models, with a rated boost clock of 2520 MHz stock and 2595 MHz when the OC card is in its highest mode. In addition, the ROG Strix LC GeForce RTX 4090 and TUF Gaming GeForce RTX 4090 OG have other attributes in common, including using a single 16-pin power connector and a display output layout consisting of two HDMI 2.1a ports and three DisplayPort 1.4a outputs.
    Asus hasn't revealed the pricing or availability of the new graphics cards. For reference, the TUF Gaming GeForce RTX 4090 and OC Edition retail for $1,599 and $1,799, respectively. The OG counterparts likely have similar price tags. Meanwhile, we'd expect the ROG Strix LC GeForce RTX 4090 to carry a more considerable premium due to the AIO liquid cooling design.



    More...

  9. #11729

    Anandtech: Voltage Lockdown: Investigating AMD's Recent AM5 AGESA Updates on ASRock's

    It's safe to say that the last couple of weeks have been a bit chaotic for AMD and its motherboard partners. Unfortunately, it's been even more chaotic for some users with AMD's Ryzen 7000X3D processors. There have been several reports of Ryzen 7000 processors burning up in motherboards, and in some cases, burning out the chip socket itself and taking the motherboard with it.
    Over the past few weeks, we've covered the issue as it's unfolded, with AMD releasing two official statements and motherboard vendors scrambling to ensure their users have been updating firmware in what feels like a grab-it-quick fire sale, pun very much intended. Not everything has been going according to plan, with AMD having released two new AGESA firmware updates through its motherboard partners to try and address the issues within a week.
    The first firmware update made available to vendors, AGESA 1.0.0.6, addressed reports of SoC voltages being too high. This AGESA version put restrictions in place to limit that voltage to 1.30 V, and was quickly distributed to all of AMD's partners. More recently, motherboard vendors have pushed out even newer BIOSes which include AMD's AGESA 1.0.0.7 (BETA) update. With even more safety-related changes made under the hood, this is the firmware update AMD and their motherboard partners are pushing consumers to install to alleviate the issues – and prevent new ones from occurring.
    In this article, we'll be taking a look at the effects of all three sets of firmware (AGESA 1.0.0.5c through 1.0.0.7) running on our ASRock X670E Taichi motherboard. The goal is to uncover what changes, if any, there are to key variables such as SoC voltages and current drawn under intensive memory-based workloads, using the AMD Ryzen 9 7950X3D.


    More...

  10. #11730

    Anandtech: Solidigm D5-P5430 Addresses QLC Endurance in Data Center SSDs

    Solidigm has been extremely bullish on QLC SSDs in the data center. Compared to other flash vendors, their continued use of a floating gate cell architecture (while others moved on to charge trap configurations) has served them well in bringing QLC SSDs to the enterprise market. The company realized early on that the market was hungry for a low-cost, high-capacity SSD to drive per-rack capacity. To address this with their 144L 3D NAND generation, Solidigm created the D5-P5316. While that lineup did include a 30TB SKU for less than $100/TB, the QLC characteristics in general, and the use of a 16KB indirection unit (IU) in particular, limited its use-cases to read-heavy workloads and large-sized sequential / random writes.
    Solidigm markets their data center SSDs under two families - the D7 line is meant for demanding workloads with 3D TLC flash. The D5 series, on the other hand, uses QLC flash and targets mainstream workloads and specialized non-demanding use-cases where density and cost are more important. The company further segments this family into the 'Essential Endurance' and 'Value Endurance' line. The popular D5-P5316 falls under the 'Value Endurance' line.
    The D5-P5430 being introduced today is a direct TLC replacement drive in the 'Essential Endurance' line. This means that, unlike the D5-P5316's 16K IU, the D5-P5430 uses a 4KB IU. The company had provided an inkling of this drive in their Tech Field Day presentation last year.
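The effect of the IU size on small random writes can be sketched with a back-of-the-envelope calculation (our illustration, not Solidigm's internal math): any host write smaller than the IU forces a read-modify-write of the whole IU, so the NAND absorbs at least one IU-sized write per host write.

```python
# Illustrative lower bound on write amplification from IU granularity
# alone; real drives add garbage-collection and over-provisioning effects.

def min_write_amplification(io_size_bytes: int, iu_bytes: int) -> float:
    """Minimum NAND bytes written per host byte for a given IO size."""
    if io_size_bytes >= iu_bytes:
        return 1.0
    return iu_bytes / io_size_bytes

KIB = 1024
# 4KB random writes: the D5-P5316's 16KB IU vs the D5-P5430's 4KB IU
print(min_write_amplification(4 * KIB, 16 * KIB))  # 4.0
print(min_write_amplification(4 * KIB, 4 * KIB))   # 1.0
```

This is why moving to a 4KB IU lets the D5-P5430 serve as a drop-in TLC replacement rather than being confined to large-block workloads.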
    Despite being a QLC SSD, Solidigm is promising very competitive read performance and higher endurance ratings compared to previous generation TLC drives from its competitors. In fact, Solidigm believes that the D5-P5430 can be quite competitive against TLC drives like the Micron 7450 Pro and Kioxia CD6-R.
    Solidigm D5-P5430 NVMe SSD Specifications
    Form Factor: 2.5" 15mm U.2 / E3.S / E1.S
    Interface, Protocol: PCIe 4.0 x4, NVMe 1.4c
    Capacities: 3.84 TB / 7.68 TB / 15.36 TB (E1.S, U.2, E3.S); 30.72 TB (U.2, E3.S)
    3D NAND Flash: Solidigm 192L 3D QLC
    Sequential Performance: 7.0 GB/s reads, 3.0 GB/s writes (128KB @ QD 256)
    Random Access: 971K IOPS reads, 120K IOPS writes (4KB @ QD 256)
    Latency (Typical): 108 us reads, 13 us writes (4KB @ QD 1)
    Power Draw: ?? W (128KB seq. read), 25.0 W (128KB seq. write), ?? W (4KB rand. read), ?? W (4KB rand. write), 5.0 W (idle)
    Endurance (DWPD): 1.83 (100% 128KB sequential writes), 0.58 (100% 4KB random writes)
    Warranty: 5 years
    Based on market positioning, the Micron 6500 ION launched earlier today is the main competition for the D5-P5430. The sequential write and power consumption numbers are not particularly attractive for the Solidigm drive on a comparative basis, but the D5-P5430 does win out on the endurance aspect - 0.3 RDWPD for the 6500 ION against 0.58 RDWPD for the D5-P5430 (surprising for a QLC drive). Solidigm prefers a total NAND writes limit as a better estimate of endurance and quotes 32 PBW as the endurance rating for the D5-P5430's maximum capacity SKU. Another key aspect here is that the D5-P5430 is only available in capacities up to 15.36 TB today. The 30 TB SKU is slated to appear later this year. In comparison, the 30 TB SKU for the 6500 ION is available now. On the other hand, the D5-P5430 is available in a range of capacities and form-factors, unlike the 6500 ION. The choice might just end up being dependent on how each SSD performs for the intended use-cases.
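The quoted 32 PBW figure lines up with the DWPD rating over the warranty period, which can be verified with a quick conversion (our arithmetic, not Solidigm's exact methodology):

```python
# Convert a DWPD (drive writes per day) rating into total petabytes
# written over the warranty period.

def dwpd_to_pbw(dwpd: float, capacity_tb: float, warranty_years: float = 5.0) -> float:
    """Total writes (PB) implied by a DWPD rating over the warranty."""
    return dwpd * capacity_tb * 365 * warranty_years / 1000

# 0.58 DWPD (4KB random) on the upcoming 30.72 TB SKU over 5 years:
print(round(dwpd_to_pbw(0.58, 30.72), 1))  # ~32.5 PB, close to the quoted 32 PBW
```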



    More...
