Thread: Anandtech News

  1. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #10581

    Anandtech: AMD's B550 Motherboards Start Appearing Online

    Today marks the next stage of the AMD AM4 B550 motherboard rollout: numerous vendors have started listing their models ahead of an expected launch on June 16th. One big feature B550 offers that B450 didn't have in its specifications is PCIe 4.0 support, so this will be a sizable uplift with the new motherboards. Vendors are today unveiling their more price-conscious models, at least when directly compared to the premium X570 boards.
    The B550 chipset has been touted for many months, with much speculation on its feature set, compatibility, and which AMD Ryzen processors users will opt for when pairing up a new board. One prevalent issue which AMD has addressed recently is that its impending Zen 3 and Ryzen 4000 processors will now be supported on B450 and X470, albeit without the benefits of PCIe 4.0 and its increased bandwidth. With this in mind, one of the main advantages of the new B550 chipset is that it will officially support both PCIe 4.0 and Zen 3, which gives users more affordable options when selecting a new PC, whether it is a budget gaming system or an AMD Ryzen 9 3950X 16-core powerhouse.
    AMD recently announced its Ryzen 3 3000 series processors, the Ryzen 3 3300X and Ryzen 3 3100, which we reviewed. Users looking to buy a new motherboard for AMD's more affordable Ryzen 5 and Ryzen 3 processors may not want to spend the big bucks some vendors are asking for some of their X570 models. Enter B550: with over 50 models to choose from across all the prominent vendors, it's likely a user will be able to find one that not only matches their style requirements but also covers the feature set they need.
    AMD X570, B550 and B450 Chipset Comparison
    Feature | X570 | B550 | B450
    PCIe Interface from CPU | 4.0 | 4.0 | 3.0
    PCIe Interface from Chipset | 4.0 | 3.0 | 2.0
    Max PCH PCIe Lanes | 24 | 24 | 24
    USB 3.1 Gen2 | 8 | 2 | 0
    Max USB 3.1 (Gen2/Gen1) | 8/4 | 2/6 | 0/6
    DDR4 Support | 3200 | ? | 2667
    Max SATA Ports | 8 | 6 | 6
    PCIe GPU Config | x16, x8/x8, x8/x8+x8* | x16, x8/x8, x16/+x4 | x16, x16/+x4
    Memory Channels (Dual) | 2/2 | 2/2 | 2/2
    Integrated 802.11ac Wi-Fi MAC | N | N | N
    Chipset TDP | 11W | ?W | 4.8W
    Overclocking Support | Y | Y | Y
    XFR2/PB2 Support | Y | Y | Y
    The biggest benefit of going from B450 to B550 is official support for PCIe 4.0 devices in the full-length PCIe slot driven by the processor. This means that the top full-length slot will run at PCIe 4.0 x16, with some models allowing for x8/x8 from a second full-length slot, with official support for NVIDIA SLI. From the CPU support lists that vendors have published, B550 will only be compatible with AMD's Ryzen 3000 series processors. For users looking to use PCIe 4.0 storage devices, B550 models include a single PCIe 4.0 x4 M.2 slot, with any additional M.2 slots coming via PCIe 3.0 lanes from the chipset itself.
    Another benefit is that the B550 chipset supports up to two USB 3.1 G2 ports, which B450 didn't offer. B550 also retains the same USB 3.1 G1 capability as B450, with up to six ports available from the chipset. Boards can also offer up to six SATA ports from the chipset, with the onus on vendors to add re-drivers or extra SATA controllers if they want to push the numbers further, at the cost of PCH lanes. The more premium B550 models leverage Realtek's ALC1220 audio codec, while the ALC1200 features on boards such as MSI's MAG B550 Tomahawk.

    The GIGABYTE B550 Aorus Master motherboard
    Motherboard vendors across the world have been making their B550 product stacks public online. Notable models from GIGABYTE include the B550 Aorus Pro AC, which provides one PCIe 4.0 x4 M.2 slot, an additional PCIe 3.0 x4 slot driven from the chipset, and an Intel Wi-Fi 3168 wireless interface supporting the Wi-Fi 5 standard. ASUS has also announced its stack, with the top model, the ROG Strix B550-E Gaming, offering an Intel I225-V 2.5 G Ethernet controller and three full-length slots, running at x16 or x8/x8 for NVIDIA SLI multi-graphics configurations, with the third full-length slot operating at PCIe 3.0 x4.
    MSI has also dropped its B550 stack publicly, with the popular Tomahawk series represented by the MAG B550 Tomahawk, which supports up to DDR4-4866 memory and offers a full-length PCIe 4.0 x16 slot, a secondary full-length PCIe 3.0 x4 slot, and two PCIe 3.0 x1 slots. Users looking for small form factor models will find at least three mini-ITX models, with the most prominent coming from ASUS via the ROG Strix B550-I with Intel's I225-V 2.5 G Ethernet onboard. There are also plenty of micro-ATX models, with five from ASUS alone expected at launch, although some models may be region dependent.
    With regard to pricing on AMD's B550 models, only ASUS has announced anything so far, with prices starting at $134 for the ASUS Prime B550M-A and rising to $279 for the ASUS ROG Strix B550-E Gaming, which has Intel's AX200 Wi-Fi 6 wireless interface, an Intel I225-V 2.5 G Ethernet controller, and a SupremeFX S1220A HD audio codec. We expect more pricing to be available in the coming days and weeks.
    Related Reading




    More...

  2. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #10582

    Anandtech: Hot Chips 32 (2020) Schedule Announced: Tiger Lake, Xe, POWER10, Xbox Seri

    I’ve said it a million times and I’ll say it again – the best industry conference I go to every year is Hot Chips. The event has grown over the years, to around 1700 people in 2019 if I remember correctly, but it involves two days of presentations about the latest hardware that has hit the market. This includes new and upcoming parts that change the industry we work in, including deep dives into some of the most important silicon at play in the market today. There are also extensive keynote presentations from the most prominent members of the industry that give insights into how these people (and the companies) work, but also where the future is going.
    This week the lid was lifted on the provisional Hot Chips 2020 schedule. With COVID-19 in mind, this will also be the first year the conference is offered online-only for attendees. Hot Chips 2020 is scheduled for August 16th to August 18th.


    More...

  3. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #10583

    Anandtech: Avantek's Arm Workstation: Ampere eMAG 8180 32-core Arm64 Review

    Arm desktop systems are quite a rarity. In fact, it’s quite an issue for the general Arm software ecosystem in terms of having appropriate hardware for developers to actually start working in earnest on more optimised Arm software.
    To date, the solution to this has mostly been using cloud instances of various Arm server hardware – it can be a legitimate option and new powerful cloud instances such as Amazon’s Graviton2 certainly offer the flexibility and performance you’d need to get things rolling.
    However, if you actually wanted a private, local, physical system, you'd mostly be relegated to small, low-performing single-board computers, which more often than not had patchy software support. It's only been in the last year or two that Arm-based laptops with Qualcomm Snapdragon chips have suddenly become a viable developer platform, thanks to WSL on Windows.
    For somebody who wants a bit more power, and in particular is looking to make use of peripherals such as large amounts of storage or PCIe connectivity, there are options such as Avantek's eMag Workstation system.

    More...

  4. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #10584

    Anandtech: NVIDIA Reports Q1 FY2021 Earnings: Let The Good Times Roll

    This week NVIDIA announced their earnings for the first quarter of their 2021 fiscal year. The current fiscal year is an especially important one for NVIDIA on both a business level and a product level, as the company closes its Mellanox acquisition, all the while opening up shipments of their new datacenter-class A100 accelerators. Especially coming off of last year’s crypto-hangover, NVIDIA has started their new fiscal year with the good times rolling on.
    NVIDIA Q1 FY2021 Financial Results (GAAP)
    Metric | Q1'FY2021 | Q4'FY2020 | Q1'FY2020 | Q/Q | Y/Y
    Revenue | $3080M | $3105M | $2220M | -1% | +39%
    Gross Margin | 65.1% | 64.9% | 58.4% | +0.2% | +6.7%
    Operating Income | $1028M | $990M | $358M | +4% | +187%
    Net Income | $917M | $950M | $394M | -4% | +133%
    EPS | $1.47 | $1.53 | $0.64 | -4% | +130%
    For Q1’FY21, NVIDIA booked $3.08B in revenue. Compared to the year-ago quarter, this is a jump in revenue of 39%, making for a very strong first quarter that was only a hair under Q4, which is commonly a very strong quarter for NVIDIA. Those sizable revenues, in turn, are reflected in NVIDIA’s profits: the company booked $917M in net income for the quarter, more than double Q1’FY20. In fact it’s the second-best Q1 ever for the company; only Q1’FY19, which fell in the middle of the crypto boom, was better.
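    As a sanity check on the headline figures, the quarter-over-quarter and year-over-year changes can be recomputed straight from the revenue numbers reported above (simple arithmetic on the table's own figures, rounded to the nearest percent):

    ```python
    # Recompute NVIDIA's Q/Q and Y/Y revenue changes from the reported figures.
    # Revenue is in millions of dollars, GAAP, as in the table above.
    q1_fy2021 = 3080
    q4_fy2020 = 3105
    q1_fy2020 = 2220

    def pct_change(new: float, old: float) -> int:
        """Percentage change from old to new, rounded to the nearest percent."""
        return round((new - old) / old * 100)

    qoq = pct_change(q1_fy2021, q4_fy2020)  # vs. the prior quarter
    yoy = pct_change(q1_fy2021, q1_fy2020)  # vs. the year-ago quarter

    print(f"Revenue Q/Q: {qoq:+d}%")  # prints "Revenue Q/Q: -1%"
    print(f"Revenue Y/Y: {yoy:+d}%")  # prints "Revenue Y/Y: +39%"
    ```

    The same one-liner reproduces the per-platform Q/Q and Y/Y columns in the revenue breakdown further down.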
    What was a record, however, was NVIDIA’s gross margin. For the quarter NVIDIA booked a GAAP gross margin of 65.1%, edging out the previous quarter and beating even Q1’FY19. As NVIDIA’s revenues have shifted increasingly towards higher-margin products like accelerators, it’s helped the already profitable NVIDIA to extend that profitability even further.
    NVIDIA Quarterly Revenue Comparison (GAAP, $ in millions)
    Platform | Q1'FY2021 | Q4'FY2020 | Q1'FY2020 | Q/Q | Y/Y
    Gaming | $1339 | $1491 | $1055 | -10% | +27%
    Professional Visualization | $307 | $331 | $266 | -7% | +15%
    Datacenter | $1141 | $968 | $634 | +18% | +80%
    Automotive | $155 | $163 | $166 | -5% | -7%
    OEM & IP | $138 | $152 | $99 | -9% | +39%
    Breaking down NVIDIA’s revenue by platform, while there are no great surprises per-se, the company has reached some milestones that are strong indicators of where things are going. Starting with NVIDIA’s datacenter revenue, that platform of their business has set a record for revenue for a second consecutive quarter, with $1.141B in revenue. This marks the first time NVIDIA’s datacenter business has booked more than $1B in revenue in a single quarter, and NVIDIA doesn’t expect it to be the last.
    While the picture will get muddled a bit next quarter as Mellanox revenue is folded into this mix, the big picture is that datacenter accelerator sales are strong, and set to grow. NVIDIA’s Ampere-based A100 accelerators began shipping for revenue in Q1, helping to boost the numbers there, while Q2 will be the first full quarter of sales. According to NVIDIA, they’re already seeing broad demand for datacenter products, with the major hyperscalers quickly picking up A100s. Overall, NVIDIA’s Volta-generation accelerators were extremely successful for the company, almost but not quite growing the datacenter business to one billion dollars per quarter, and the company is eager to repeat and extend that success with Ampere.
    Meanwhile, NVIDIA’s largest business, gaming, was also strong for the quarter, with the company booking $1.339B in revenue. While down seasonally as usual, NVIDIA is reporting that they have weathered the current pandemic similar to other chipmakers, with soft sales in some areas being counterbalanced by greater demand for chips for home computers as workers shift to working from home.
    Interestingly, there’s a very real chance that this could be one of the last quarters where gaming is NVIDIA’s biggest revenue generator. Along with folding Mellanox into the company – and into the datacenter segment – NVIDIA’s datacenter business as a whole has been growing at a much greater clip than gaming. NVIDIA has made it very clear that they’re pushing for a more diversified revenue stream than their traditional gaming roots, and if the datacenter business grows too much more they may just get there this year. Though it will be interesting to see what the eventual launch of Ampere-based gaming products does for gaming revenue, as NVIDIA’s revenue also reflects the fact that they’re nearing the end of the Turing generation of products.
    Bringing up third place was NVIDIA’s professional visualization platform, which saw $307M in revenue. As with gaming sales, the company is seeing a boost in sales due to work from home equipment purchases. This comes on top of the day-to-day demand for workstation laptops, which NVIDIA has been increasingly invested in.
    Meanwhile NVIDIA’s automotive business ended up being something of a laggard for Q1’FY21. The segment booked $155M in revenue, which is down 7% from the year-ago quarter. NVIDIA’s automotive business moves at a much different pace than its GPU businesses – in part because it’s not set to really take off until self-driving cars become a retail reality – so the business tends to ebb and flow.
    Finally, NVIDIA booked $138M in OEM & IP revenue for Q1’FY21. While this platform is small potatoes compared to gaming and datacenter, on a percentage basis it’s actually another big jump for NVIDIA; the segment grew 39% over the year-ago quarter. According to NVIDIA, the main driving factor here was increased entry-level GPU sales for OEM systems.
    Wrapping things up, looking ahead to Q2 of FY2021, NVIDIA’s current predictions call for another strong quarter. Having closed the Mellanox deal, Mellanox’s earnings will be folded into NVIDIA’s numbers starting in Q2, helping to push the company to what should be record revenue. Meanwhile on the product side of matters, Q2 will be the first whole quarter for A100 accelerator shipments, which should help NVIDIA further grow their datacenter business.


    More...

  5. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #10585

    Anandtech: Gaming AIs: NVIDIA Teaches A Neural Network to Recreate Pac-Man

    Following last week’s virtual GTC keynote and the announcement of their Ampere architecture, this week NVIDIA has been holding the back-half of their conference schedule. As with the real event, the company has been posting numerous sessions on everything NVIDIA, from Ampere to CUDA to remote desktop. But perhaps the most interesting talk – and certainly the most amusing – is coming from NVIDIA’s research group.
    Tasked with developing future technologies and finding new uses for current technologies, today the group is announcing that they have taught a neural network Pac-Man.
    And no, I don’t mean how to play Pac-Man. I mean how to be the game of Pac-Man.
    The reveal, timed to coincide with the 40th anniversary of the ghost-munching game, is coming out of NVIDIA’s research into Generative Adversarial Networks (GANs). At a very high level, GANs are a type of neural network where two neural networks are trained against each other – typically one learning how to do a task and the other learning how to spot the first doing that task – with the end goal being that the competition between the networks can help make the two networks better by forcing them to improve to win. In terms of practical applications, GANs have most famously been used in research projects to create programs that can create realistic-looking images of real-world items, upscale existing images, and other image synthesis/manipulation tasks.
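    The adversarial setup described above can be sketched in miniature. The toy below is a hypothetical illustration (not NVIDIA's GameGAN): a one-parameter "generator" produces samples around a learnable offset, while a logistic-regression "discriminator" learns to tell them apart from real samples drawn around 4.0. Each network ascends its own objective, and the competition drags the generator toward the real distribution.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    REAL_MEAN, NOISE_STD = 4.0, 0.5
    g = 0.0          # generator parameter: fake samples are g + noise
    w, b = 0.0, 0.0  # discriminator: D(x) = sigmoid(w*x + b)
    lr_d, lr_g, batch, steps = 0.05, 0.05, 64, 4000

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    history = []
    for _ in range(steps):
        real = REAL_MEAN + NOISE_STD * rng.standard_normal(batch)
        fake = g + NOISE_STD * rng.standard_normal(batch)

        # Discriminator step: ascend log D(real) + log(1 - D(fake)).
        d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
        w += lr_d * ((1 - d_real) * real - d_fake * fake).mean()
        b += lr_d * ((1 - d_real) - d_fake).mean()

        # Generator step: ascend log D(fake), i.e. fool the discriminator.
        d_fake = sigmoid(w * fake + b)
        g += lr_g * ((1 - d_fake) * w).mean()
        history.append(g)

    # The generator should have drifted toward the real mean of 4.0.
    print(f"final generator offset (averaged): {np.mean(history[-500:]):.2f}")
    ```

    In a real GAN the scalar parameter and the logistic discriminator are replaced by deep networks, but the alternating gradient steps are the same idea; GameGAN goes further by also conditioning the generator on player inputs and previous frames, which is how it learns a game's rules and not just its look.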
    For Pac-Man, however, the researchers behind the fittingly named GameGAN project took things one step further, focusing on creating a GAN that can be taught how to emulate/generate a video game. This includes not only recreating the look of a game, but perhaps most importantly, the rules of a game as well. In essence, GameGAN is intended to learn how a game works by watching it, not unlike a human would.
    For their first project, the GameGAN researchers settled on Pac-Man, which is as good a starting point as any. The 1980 game has relatively simple rules and graphics, and crucially for the training process, a complete game can be played in a short amount of time. As a result, over 50K “episodes” of training, the researchers taught a GAN how to be Pac-Man solely by having the neural network watch the game being played.
    And most impressive of all, the crazy thing actually works.
    In a video released by NVIDIA, the company is briefly showing off the Pac-Man-trained GameGAN in action. While the resulting game isn’t a pixel-perfect recreation of Pac-Man – notably, GameGAN’s simulated resolution is lower – the game nonetheless looks and functions like the arcade version of Pac-Man. And it’s not just for looks, either: the GameGAN version of Pac-Man accepts player input, just like the real game. In fact, while it’s not ready for public consumption quite yet, NVIDIA has already said that they want to release a publicly playable version this summer, so that everyone can see it in action.
    Fittingly for a gaming-related research project, the training and development for the GameGAN was equally silly at times. Because the network needed to consume thousands upon thousands of gameplay sessions – and NVIDIA presumably doesn’t want to pay its staff to play Pac-Man all day – the researchers relied on a Pac-Man-playing bot to automatically play the game. As a result, the AI that is GameGAN has essentially been trained in Pac-Man by another AI. And this is not without repercussions – in their presentation, the researchers have noted that because the Pac-Man bot was so good at the game, GameGAN has developed a tendency to avoid killing Pac-Man, as if it were part of the rules. Which, if nothing else, is a lot more comforting than finding out that our soon-to-be AI overlords are playing favorites.
    All told, training the GameGAN for Pac-Man took a quad GV100 setup four days, over which time it monitored 50,000 gameplay sessions. To put the amount of hardware used in perspective, 4 GV100 GPUs add up to 84.4 billion transistors, almost 10 million times as many transistors as are found in the original arcade game’s Z80 CPU. So while teaching a GAN how to be Pac-Man is incredibly impressive, it is, perhaps, not an especially efficient way to execute the game.
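    That transistor comparison is easy to verify. Each GV100 die is commonly cited at roughly 21.1 billion transistors, and the Zilog Z80 at around 8,500; these are ballpark public figures, not exact die audits:

    ```python
    # Rough transistor-count comparison: quad GV100 training box vs. the Z80.
    GV100_TRANSISTORS = 21_100_000_000  # ~21.1 billion per GV100 die
    Z80_TRANSISTORS = 8_500             # commonly cited count for the Zilog Z80

    setup = 4 * GV100_TRANSISTORS       # the quad-GV100 setup used for training
    ratio = setup / Z80_TRANSISTORS

    print(f"quad GV100: {setup / 1e9:.1f} billion transistors")  # 84.4 billion
    print(f"ratio vs. Z80: {ratio / 1e6:.1f} million times")     # ~9.9 million
    ```

    So "almost 10 million times" checks out, at roughly 9.9 million to one.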
    Meanwhile, figuring out how to teach a neural network to be Pac-Man does have some practical goals to it as well. According to the research group, one big focus right now is in using this concept to more quickly train simulators, which traditionally have to be carefully constructed by humans in order to capture all of the possible interactions. If a neural network can instead learn how something behaves by watching what’s happening and what inputs are being made, this could conceivably make creating simulators far faster and easier. Interestingly, the entire concept leads to something of a self-feedback loop, as the idea is to then use those simulators to then train other neural networks how to perform a task, such as NVIDIA’s favorite goal of self-driving cars.
    Ultimately, whether it leads to real-world payoffs or not, there’s something amusingly human about a neural network learning a game by observing – even (and especially) if it doesn’t always learn the desired lesson.



    More...

  6. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #10586

    Anandtech: ViewSonic Announces Elite XG270QC Monitor: 1440p@165 Hz, Curved For Gaming

    The latest monitor in Viewsonic's large and varied portfolio comes via the XG270QC, part of its gaming-focused Elite series. Available in the US now, the 27-inch Viewsonic Elite XG270QC features a 1500R curved screen with a 165 Hz refresh rate, and is certified for VESA DisplayHDR 400.
    Designed with gaming in mind, the Viewsonic Elite XG270QC comes with many of the features you'd expect of a contemporary gaming display, including a 27-inch, 2560x1440 VA panel with a fast 165 Hz refresh rate, variable refresh support including AMD's FreeSync Premium Pro certification, and VESA DisplayHDR 400 certification. Although it officially has a 3 ms response time, Viewsonic also quotes a 1 ms MPRT response time, made possible by its PureXP motion blur reduction technology. The curve of the panel is rated at 1500R, which Viewsonic claims provides a more immersive gaming experience.
    Looking at the dimensions, it's 24.1 inches wide with a 4-inch depth. It has an adjustable height of between 18.97 and 23.59 inches, with a net weight of 7.5 kg with the stand installed. For users looking to attach it to a monitor arm or wall mount, it has VESA 100 x 100 mm mounting on the rear and weighs 4.9 kg without the stand. The XG270QC has a black glossy finish and includes a single DisplayPort 1.4 input, two HDMI 2.0 inputs, a 3.5 mm audio output, and, for security, a Kensington lock slot. Provided with the Elite XG270QC is Viewsonic's Elite Display Controller software, which connects to the monitor via a supplied USB Type-A cable and allows users to adjust the integrated RGB LED lighting. It is certified to work with Thermaltake's TT RGB Plus and Razer's popular Chroma RGB ecosystems.
    Touching on some of the finer details of the 27-inch panel, it has 178-degree viewing angles and offers VESA Adaptive-Sync support. It features AMD FreeSync Premium Pro certification, AMD's own classification system for grading monitors, which ensures among other things a wide enough refresh rate range for Low Framerate Compensation support, as well as low-latency HDR support. In terms of color reproduction, Viewsonic is claiming 16.7 million colors, with a 3,000:1 static contrast ratio and a 120 million:1 dynamic contrast ratio. For power, Viewsonic states that the monitor is optimized for 45 W in Eco mode, with a typical consumption of 55 W and a maximum of up to 59 W.
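    As a back-of-the-envelope check that a single DisplayPort 1.4 input comfortably drives this panel, the uncompressed video data rate at 2560x1440 and 165 Hz can be estimated. This sketch assumes 24-bit color and ignores blanking overhead (which adds a further margin in practice); the 25.92 Gbps figure is DP 1.4's four-lane HBR3 payload rate after 8b/10b encoding:

    ```python
    # Estimate the video bandwidth needed for 2560x1440 @ 165 Hz, 24-bit color.
    width, height, refresh_hz, bits_per_pixel = 2560, 1440, 165, 24

    data_rate_bps = width * height * refresh_hz * bits_per_pixel
    HBR3_PAYLOAD_BPS = 25.92e9  # DP 1.4 HBR3 payload, 4 lanes, after 8b/10b

    print(f"needed:    {data_rate_bps / 1e9:.1f} Gbps")  # ~14.6 Gbps
    print(f"available: {HBR3_PAYLOAD_BPS / 1e9:.2f} Gbps (HBR3)")
    ```

    At roughly 14.6 Gbps of pixel data against a 25.92 Gbps payload, the link has headroom even before considering reduced-blanking timings.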
    Viewsonic has said that the Elite XG270QC is available to purchase in the US now, priced around the $460 mark. Users in the EU, AU, and other regions around the world will, however, need to wait until June.
    Related Reading




    More...

  7. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #10587

    Anandtech: Arm's New Cortex-A78 and Cortex-X1 Microarchitectures: An Efficiency and P

    2019 was a great year for Arm. On the mobile side of things one could say it was business as usual, as the company continued to see successes with its Cortex cores, particularly the new Cortex-A77 which we’ve now seen employed in flagship chipsets such as the Snapdragon 865. The bigger news for the company over the past year however hasn’t been in the mobile space, but rather in the server space, where one can today rent Neoverse-N1 CPUs such as Amazon’s impressive Graviton2 chip, with more vendors such as Ampere expected to release their server products soon.
    While the Arm server space is truly taking off as we speak, aiming to compete against AMD and Intel, Arm hasn't reached the pinnacle of the mobile market – at least, not yet. Arm’s mobile Cortex cores have lived in the shadow of Apple’s custom CPU microarchitectures over the past several years, as Apple has seemingly always managed to beat Cortex designs by significant amounts. While there are certainly technical reasons for the differences, a lot of it also came down to business rationale on Arm’s side.
    Today for Arm’s 2020 TechDay announcements, the company is not just releasing a single new CPU microarchitecture, but two. The long-expected Cortex-A78 is indeed finally making an appearance, but Arm is also introducing its new Cortex-X1 CPU as the company’s new flagship performance design. The move is not only surprising, but marks an extremely important divergence in Arm’s business model and design methodology, finally addressing some of the company’s years-long product line compromises.

    More...

  8. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #10588

    Anandtech: Arm Announces The Mali-G78: Evolution to 24 Cores

    Today as part of Arm’s 2020 TechDay announcements, alongside the release of the brand-new Cortex-A78 and Cortex-X1 CPUs, Arm is also revealing its brand-new Mali-G78 and Mali-G68 GPU IPs.
    Last year, Arm unveiled the Mali-G77, the company’s newest GPU design, based on a brand-new compute architecture called Valhall. The design promised major improvements for the company’s GPU IP, shedding some of the disadvantages of past iterations and adapting the architecture to more modern workloads. It was a big change in the design, with implementations seen in chips such as the Samsung Exynos 990 and the MediaTek Dimensity 1000.
    The new Mali-G78, in comparison, is more of an iterative update to the microarchitecture, making key improvements to the scalability of possible configurations and to the balance of the design across workloads, along with some more radical changes such as a complete redesign of its FMA units.


    More...

  9. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #10589

    Anandtech: Arm Announces Ethos-N78 NPU: Bigger And More Efficient

    Yesterday Arm announced the new Cortex-A78 and Cortex-X1 CPUs as well as the new Mali-G78 GPU. Alongside these “key” IPs from the company, we also saw the reveal of the newest Ethos-N78 NPU, Arm’s new second-generation machine learning design.
    Over the last few years we’ve seen an explosion of machine learning accelerators in the industry, with a veritable wild west of different IP solutions out there. On the mobile front in particular there’s been a huge number of custom solutions developed in-house by SoC vendors, including designs from Qualcomm, HiSilicon, MediaTek and Samsung LSI. For vendors who do not have the design ability to deploy their own IP, there’s the possibility of licensing something from an IP vendor such as Arm.
    Arm’s “Ethos” machine learning IP is aimed at client-side inferencing workloads; it was originally described under the “Project Trillium” umbrella, with the first implementation seeing life in the form of the Ethos-N77. It’s been a year since the release of that first generation, and Arm has been working hard on the next iteration of the architecture. Today, we’re covering the “Scylla” architecture that’s being used in the new Ethos-N78.


    More...

  10. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #10590

    Anandtech: The ASRock Z490 Taichi Motherboard Review: Punching LGA1200 Into Life

    In our first Intel Z490 motherboard review, the ASRock Z490 Taichi takes center stage. With its recognizable Taichi clockwork inspired design, a 12+2 power delivery, three PCIe 3.0 x4 M.2 slots, and a Realtek 2.5 gigabit Ethernet port on the rear panel, it looks to leave its stamp on the Z490 market. The Taichi remains one of ASRock's perennial premium mid-range models.


    More...
