Thread: Anandtech News

  1. RSS Bot FEED (post #9991)

    Anandtech: Micron Finally Announces A 3D XPoint Product: Micron X100 NVMe SSD

    Micron and Intel co-developed 3D XPoint memory as a high-performance alternative to flash, but so far only Intel has brought products to market, under their Optane brand. Despite owning the fab where 3D XPoint memory is produced, the closest Micron has come to commercializing that tech for themselves was their announcement in 2016 that upcoming Micron products using 3D XPoint memory would be branded as Micron QuantX, their counterpart to Intel's Optane brand. Years later, we finally have a concrete product announcement, and they seem to have abandoned the QuantX name.
    The new Micron X100 is a high-end enterprise NVMe SSD to compete against Intel's upcoming second-generation Optane SSDs and any specialized low-latency SLC NAND their competitors can come up with (e.g. Samsung Z-NAND, Toshiba XL-FLASH). Micron has not yet released full specs for the X100, but the top line performance numbers are 2.5M IOPS for 4kB random reads and around 10GB/s for sequential transfers—both likely to be new records for a single SSD if they can ship it soon enough. A preview video posted by Micron includes a graph that labels the 2.5M IOPS figure as being tested at QD1, which sounds too good to be true: almost 5x the performance of Intel's current Optane SSDs. Micron says the X100 should be good for at least 9GB/s for reads, writes, or mixed workloads, reflecting how much closer 3D XPoint is to symmetrical read/write performance than any flash memory. (And also suggesting that the controller may be the bottleneck for sequential transfers more than the 3D XPoint memory itself.) For QoS, Micron is listing both read and write latencies of 8µs or less, slightly better than the 10µs that Intel's current Optane SSDs promise.
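    One way to see why the QD1 label looks suspect is to run the numbers: at queue depth 1, each command has to complete before the next one is issued, so achievable IOPS is capped at the reciprocal of the per-command latency. The quick sanity check below is only a minimal sketch; the figures are Micron's, the arithmetic is ours.

    # Back-of-envelope check of the quoted QD1 figure, using Micron's own numbers.
    # At queue depth 1, commands are serialized, so IOPS <= 1 / per-command latency.
    quoted_qd1_iops = 2_500_000   # 2.5M random read IOPS, labeled QD1 in Micron's graph
    quoted_latency_s = 8e-6       # 8 µs read latency, also quoted by Micron

    implied_latency_s = 1 / quoted_qd1_iops      # latency a true QD1 figure would imply
    qd1_ceiling_at_8us = 1 / quoted_latency_s    # IOPS ceiling if each read takes 8 µs

    print(f"Implied per-command latency: {implied_latency_s * 1e6:.2f} us")   # 0.40 us
    print(f"QD1 ceiling at 8 us latency: {qd1_ceiling_at_8us:,.0f} IOPS")     # 125,000 IOPS

    In other words, the 2.5M IOPS figure is almost certainly a high-queue-depth number, which would be consistent with the skepticism above.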
    The card Micron is showing off today is a full-height half-length PCIe x16 add-in card, so it should be able to reach full throughput even on PCIe 3.0 systems. Micron says the X100 will be in limited sampling to select customers sometime this quarter, so it's not going to be shaking up the storage market much in the immediate future but it is far enough past the vaporware stage that Micron should be able to deliver the rest of the specs soon—including the range of available capacities. Since Micron hasn't said anything about a second generation of 3D XPoint memory being ready, the density and costs of the X100 shouldn't be drastically different from Intel's Optane offerings.



    More...

  2. RSS Bot FEED (post #9992)

    Anandtech: Intel Announces Q3 FY 2019 Earnings: Record Results

    Today Intel announced their earnings for the third quarter of the 2019 fiscal year, which ended September 29, and the company has set a record for revenue thanks to growth in their data-centric businesses. Revenue for the quarter came in at $19.2 billion, beating Q3 2018 by $27 million, which works out to a mere 0.14% growth over last year, but enough to make this the highest revenue ever for the company. Gross margin was 58.9%, down from 64.5% a year ago. Operating income was down 12% to $6.4 billion, and net income was down 6% to $6.0 billion. This resulted in earnings-per-share of $1.35, down 2% from a year ago.
    Intel Q3 2019 Financial Results (GAAP)
                                      Q3'2019    Q2'2019    Q3'2018
    Revenue                           $19.2B     $16.5B     $19.2B
    Operating Income                  $6.4B      $4.6B      $7.3B
    Net Income                        $6.0B      $4.2B      $6.4B
    Gross Margin                      58.9%      59.8%      64.5%

    Segment Revenue                   Q3'2019    vs Q2'19   vs Q3'18
    Client Computing Group            $9.7B      +10%       -5%
    Data Center Group                 $6.4B      +28%       +4%
    Internet of Things                $1.0B      +1%        +9%
    Mobileye                          $229M      +14%       +20%
    Non-Volatile Memory Solutions     $1.3B      +38%       +19%
    Programmable Solutions            $507M      +3.7%      +2%
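    For those curious how the headline figures fit together, the quick arithmetic below reconstructs them from the rounded values in the table and the $27 million year-over-year delta mentioned above; it is only a rough sketch, so the implied dollar amounts are approximate.

    # Rough reconstruction of the year-over-year changes from Intel's rounded figures.
    q3_2019_revenue = 19.190e9   # ~$19.2B
    q3_2018_revenue = 19.163e9   # ~$27M lower, per the article

    yoy = (q3_2019_revenue - q3_2018_revenue) / q3_2018_revenue
    print(f"Total revenue growth YoY: {yoy:.2%}")   # ~0.14%

    # Segment revenue and YoY change as listed in the table (rounded),
    # used to back out the approximate dollar change versus Q3 2018.
    segments = {
        "Client Computing Group":        (9.7e9,  -0.05),
        "Data Center Group":             (6.4e9,  +0.04),
        "Internet of Things":            (1.0e9,  +0.09),
        "Mobileye":                      (229e6,  +0.20),
        "Non-Volatile Memory Solutions": (1.3e9,  +0.19),
        "Programmable Solutions":        (507e6,  +0.02),
    }
    for name, (revenue, yoy_change) in segments.items():
        prior = revenue / (1 + yoy_change)
        print(f"{name}: {(revenue - prior) / 1e6:+,.0f}M USD vs. Q3 2018 (approx.)")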
    Intel splits their business into two main areas: the Client Computing Group covers their PC-centric products, while everything else falls under the company's data-centric businesses. Despite the contraction of the PC market over the last several years, the PC business has continued to be Intel's main source of revenue, and that remains the case this quarter as well, but only by a small margin. Client Computing Group revenue was down 5% year-over-year to $9.7 billion. Intel attributes this drop to lower year-on-year platform volume, although the decline was partially offset by a richer mix of higher-cost products, especially in the commercial segment.
    Although the overall PC side from Intel was down, Intel has only just launched their latest 10th generation Core products, which won't make up much of Q3's numbers due to the quarter's end-of-September cut-off. Intel now has over 30 devices launched based on the 10 nm Ice Lake platform, signalling the beginning of the end for 14 nm, which has been iterated on many times over the last several years as Intel struggled to get their 10 nm process off the ground. The good news for Intel is that despite the initial setbacks for 10 nm, they have stated that 10 nm yields are actually ahead of their internal expectations for this point in the node's lifecycle, which should help alleviate some of the production backlog the company has been facing, assuming they can use the improved yields to transition more of their lineup over to 10 nm a bit quicker. Intel has also stated that despite the years lost on 10 nm, they are moving back to a 2 to 2.5 year process cadence, with 7 nm on track for their GPU lineup in 2021.
    Intel lumps the rest of their business into the "Data-Centric" category, and this side of the company has been making strong gains over the last several years; it now almost matches the Client Computing Group in total revenue, at $9.5 billion versus $9.7 billion for the CCG. Data-Centric includes not only the Data Center Group, but also Internet of Things, Mobileye, Non-Volatile Memory Solutions, and Programmable Solutions. Altogether these segments achieved record revenue, up 6% from 2018. Individually, the Data Center Group was up 4% to $6.4 billion with a strong mix of Xeon sales and growth in all segments. Internet of Things also had record revenue, up 9% to $1.0 billion. Mobileye is also on the record train with a 20% year-over-year gain to $229 million, as is the Non-Volatile Memory Solutions Group, which was up 19% to $1.3 billion. Programmable Solutions was the only data-centric segment to not hit a record in revenue, but it was still up 2% to $507 million for the quarter, and Intel shipped their first 10 nm Agilex FPGAs this quarter as well.
    Looking ahead to Q4, Intel is expecting revenue around $19.2 billion with earnings-per-share of $1.28.
    Source: Intel Investor Relations



    More...

  3. RSS Bot FEED (post #9993)

    Anandtech: Razer’s Raptor 27 Gaming Monitor Now Available: QHD with 144 Hz FreeSync &

    Razer this week started sales of its unique Raptor 27 gaming display, which it first introduced earlier this year. The monitor packs numerous gaming-oriented features such as AMD's FreeSync, and it comes with a one-of-a-kind stand that offers some relatively extreme tilt options, as well as programmable Razer Chroma RGB lighting on the bottom.
    The Razer Raptor 27 is a non-glare 27-inch IPS display featuring a 2560×1440 resolution, 420 nits peak luminance, a 1000:1 contrast ratio, a 144 Hz maximum refresh rate, and a 1 ms response time with motion blur reduction enabled; all of which is fairly typical for a QHD IPS gaming monitor nowadays. A more distinctive feature of the Raptor 27 is its internal 10-bit dimming processor that, as its name suggests, controls the backlighting. The same processor appears to be responsible for managing the backlight's color output, allowing the monitor to cover 95% of the DCI-P3 color space, something that not all gaming LCDs can do.
    Meanwhile, as a gaming monitor, the Raptor 27 supports AMD’s FreeSync variable refresh rate technology, and is also listed as NVIDIA G-Sync compatible. The monitor is also HDR capable, as the VESA DisplayHDR 400 badge will attest to, but like other DisplayHDR 400 monitors, only marginally so.
    The chassis of the Raptor 27, meanwhile, sports ultra-thin 2.3-mm bezels on three sides, as well as a CNC-machined stand with integrated cable management. The stand can tilt all the way to 90º, providing easy access to the display's inputs.
    Speaking of inputs, the Raptor 27 has a DisplayPort 1.4 input, an HDMI 2.0b port, and a USB Type-C port (with DP 1.4 alt-mode) that can also power a laptop. For peripherals, the monitor offers a dual port USB 3.0 Type-A hub, as well as a headphone jack.
    The Razer Raptor 27 Gaming Display
    General Specifications
    Display Size 27-inch
    Panel Type IPS
    Resolution 2560x1440
    Refresh Rate 144 Hz with FreeSync
    Response time 7ms typical
    4ms Overdrive
    1ms with Motion Blur Reduction
    Contrast Ratio 1000:1
    Brightness 420 nits
    Color Gamut 95% DCI-P3
    HDR DisplayHDR 400
    Other 10-bit dimming processor
    Connectivity 1 x HDMI 2.0b
    1 x DisplayPort 1.4
    1 x USB Type-C with power delivery
    1 x Headphone output
    2 x USB 3.0
    Availability October 2019
    Price $699.99
    Razer's Raptor 27 monitor is hitting the streets at $699.99, which puts it at the high end of the price range for 27-inch gaming displays with comparable characteristics.


    Source: Razer


    More...

  4. RSS Bot FEED (post #9994)

    Anandtech: The ASUS ROG Crosshair VIII Impact: A Sharp $430 Impulse on X570

    One of the most interesting unveilings from the X570 launch earlier this year came from ASUS, with the reintroduction of the ROG Impact series of small form factor motherboards. Not seen since the days of Intel's Z170 platform, the ASUS ROG Crosshair VIII Impact is the vendor's first truly high-end AMD SFF model. With its SO-DIMM.2 slot for dual PCIe 4.0 x4 M.2 SSDs, a SupremeFX S1220 HD audio codec, and support for up to DDR4-4800 memory, the Impact looks to leave its mark on AM4 for enthusiasts just like previous iterations have done on Intel platforms. The only difference this time around is that it's not a true Mini-ITX board like the previous Impact designs.


    More...

  5. RSS Bot FEED (post #9995)

    Anandtech: My First Time Playing Minecraft, Ever: Testing The Ray Tracing Beta

    Earlier this year at Gamescom, NVIDIA and Mojang showed off an early beta build of the popular game Minecraft with additional ray tracing features. Ray tracing is a rendering technique that should, in principle, reproduce an environment more accurately, offering a more immersive experience. Throughout 2018 and 2019, NVIDIA has been a key proponent of bringing ray tracing acceleration hardware to the consumer market, and as the number of titles supporting NVIDIA's ray tracing increases, the company believes that enabling popular titles like Minecraft is going to be key to promoting the technology (and driving hardware sales). NVIDIA UK offered some of the press a short hands-on with the Minecraft beta, and it is actually my first proper Minecraft experience.


    More...

  6. RSS Bot FEED (post #9996)

    Anandtech: GlobalFoundries Teams Up with Singapore University for ReRAM Project

    GlobalFoundries has announced that the company has teamed up with Singapore's Nanyang Technological University and the National Research Foundation to develop resistive random access memory (ReRAM). The next-generation memory technology could ultimately pave the way for very fast, high-capacity, non-volatile embedded caches. The project will take four years and will cost S$120 million ($88 million).
    Under the terms of the agreement, the National Research Foundation will provide the necessary funding to Nanyang Technological University, which will spearhead the research. GlobalFoundries will support the project with its in-house manufacturing resources, just like it supports other universities on promising technologies, the company says.
    Right now, GlobalFoundries (and other contract makers of semiconductors) use eFlash (embedded flash) for chips that need relatively high-capacity onboard storage. This technology has numerous limitations, such as endurance and performance, particularly when manufactured using today's advanced logic technologies (i.e., sub-20nm nodes), which is something increasingly required of embedded memories. This is the main reason why GlobalFoundries and other chipmakers are looking at magnetoresistive RAM (MRAM) to replace eFlash in future designs, as it is considered the most durable non-volatile memory technology available today that can be made using contemporary logic fabrication processes.
    MRAM relies on reading the magnetic anisotropy (orientation) of two ferromagnetic films separated by a thin barrier, and thus does not require an erase cycle before writing data, which makes it substantially faster than eFlash. Furthermore, its writing process requires considerably less energy. On the flip side, MRAM's density is relatively low, and its magnetic anisotropy decreases at low temperatures, which rules it out for some applications, though it remains very promising for the majority of use cases that do not involve low temperatures.
    This brings researchers to ReRAM, which relies on changing the resistance across a dielectric material (from '0' to '1' and back) with an electrical current. The technology also doesn't require an erase cycle, promises very high endurance, and, assuming that the right materials are used, can work over a wide range of temperatures. Meanwhile, the materials used for ReRAM have to be very stable in order to survive millions of switching cycles and retain data, even when memory cells are produced using 'thin' modern fabrication processes (e.g., GF's 12LP or 12FDX). Finding the right substances for ReRAM will be the main topic of NTU's research, whereas GlobalFoundries will have to find a cost-efficient way to produce the new type of memory at its facilities if the research is successful.
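    To make the "no erase cycle" point concrete, here is a deliberately simplified toy model; it is not any vendor's controller logic, and it ignores the out-of-place writes and garbage collection real flash controllers use to hide erases. It simply contrasts an erase-before-program memory with a write-in-place memory such as ReRAM or MRAM.

    # Toy model: why write-in-place memories avoid flash's erase-before-program penalty.
    BLOCK_SIZE = 16  # pseudo "pages" per erase block in this toy model

    def flash_update(block: list, index: int, value: int) -> int:
        """eFlash-style in-place update: erase the whole block, then re-program
        every page. Returns the number of operations performed."""
        saved = block.copy()
        block[:] = [0] * len(block)          # 1 block erase
        saved[index] = value
        for i, v in enumerate(saved):
            block[i] = v                     # re-program each page
        return 1 + len(block)

    def write_in_place_update(block: list, index: int, value: int) -> int:
        """ReRAM/MRAM-style update: the cell's resistance (or magnetic state)
        is switched directly, so a single write suffices."""
        block[index] = value
        return 1

    block = list(range(BLOCK_SIZE))
    print("eFlash ops:        ", flash_update(block, 3, 99))           # 17
    print("write-in-place ops:", write_in_place_update(block, 3, 42))  # 1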
    For years to come, GlobalFoundries (and its rivals) will use MRAM for a wide variety of applications, as the technology is mature enough, fast enough, and durable enough. The company's eMRAM implementation 'integrates well' with both FinFET and FD-SOI process technologies (although the FinFET implementation is not yet ready), the company says, so expect it to be used widely. According to the foundry, it has multiple 22FDX eMRAM tape-outs planned for 2019 and 2020.
    GlobalFoundries is not standing still and is evaluating several eNVM technologies for its roadmap beyond 2020, including ReRAM. The company does not expect the research to come to fruition before 2021, but it certainly hopes that ReRAM will become another useful embedded memory technology.
    It is noteworthy that companies like Western Digital are working on ReRAM-based storage class memory (SCM) to compete against Intel’s 3D XPoint and other SCM technologies. SCM-class ReRAM will have its differences when compared to embedded ReRAM that GlobalFoundries is particularly interested in, which once again shows that the technology could be applied very widely.


    Sources: GlobalFoundries, Crossbar, Everspin, ChannelNewsAsia


    More...

  7. RSS Bot FEED (post #9997)

    Anandtech: The Intel Core i9-9990XE Review: All 14 Cores at 5.0 GHz

    Within a few weeks, Intel is set to launch its most daring consumer desktop processor yet: the Core i9-9900KS, which offers eight cores all running at 5.0 GHz. There's going to be a lot of buzz about this processor, but what people don't know is that Intel already has an all-5.0 GHz processor, and it actually has 14 cores: the Core i9-9990XE. This ultra-rare chip isn't sold to consumers – Intel only sells it to select partners, and even then it is only sold via an auction, once per quarter, with no warranty from Intel. How much would you pay for one? Well, we got one to test.


    More...

  8. RSS Bot FEED (post #9998)

    Anandtech: GlobalFoundries and TSMC Sign Broad Cross-Licensing Agreement, Dismiss Law

    GlobalFoundries and TSMC have announced this afternoon that they have signed a broad cross-licensing agreement, ending all of their ongoing legal disputes. Under the terms of the deal, the two companies will license each other's semiconductor-related patents granted so far, as well as any patents filed over the next 10 years.
    Previously, GlobalFoundries had accused TSMC of patent infringement. At the time of the first lawsuit in August, TSMC said that the charges were baseless and that it would defend itself in court. In October, TSMC countersued its rival and, in turn, accused GlobalFoundries of infringing multiple patents. Now, less than a month after the countersuit, the two companies have agreed to sign a broad cross-licensing agreement and dismiss all ongoing litigation.
    According to the agreement, GlobalFoundries and TSMC will cross-license each other's existing worldwide semiconductor patents, as well as any patents the two companies file in the next 10 years. Broadly speaking, GlobalFoundries and TSMC have thousands of semiconductor-related patents between them, some of which were originally granted to AMD and IBM.
    Cross-licensing agreements are not uncommon in the high-tech world. Instead of fighting each other in expensive legal battles, companies with a broad portfolio of patents just sign cross-licensing agreements with peers, freeing them up to focus on innovating with their products rather than having to find ways to avoid infringing upon rivals' patents.


    Source: GlobalFoundries/TSMC Press Release


    More...

  9. RSS Bot FEED (post #9999)

    Anandtech: Apple Unveils AirPods Pro: A New Design with Active Noise Cancellation

    Apple today has introduced a new version of its AirPods wireless earbuds, which the company is calling the AirPods Pro. Designed to be an even more premium version of Apple's earbuds, the AirPods Pro features a revamped design equipped with a custom high dynamic range amplifier, as well as support for active noise cancellation. And with a price tag of $249, Apple's high-end earbuds will carry a price premium to match their new premium features.
    Apple's AirPods Pro is based on the company's H1 system-in-package, the same SiP that is used in the 2nd generation AirPods introduced earlier this year. The new earbuds feature a new design with soft silicone ear tips (the company will ship the AirPods Pro with three different tips) as well as a new vent system that promises to minimize the discomfort of using in-ear headphones. The earbuds come with a new custom high dynamic range amplifier, which is used to power a low-distortion speaker that can provide bass down to 20 Hz. Meanwhile, according to Apple, the H1 SiP and its Adaptive EQ technology automatically tune the low and mid frequencies of the audio to the shape of an individual's ear.
    The sweat- and water-resistant AirPods Pro come with outward-facing and inward-facing microphones. These are able to detect external sounds, allowing the headset to support active noise cancellation. According to Apple, the AirPods sample the environment at 200 Hz, allowing them to quickly respond to changes in outside noise. Meanwhile, the new AirPods also add a feature that Apple is calling Transparency mode, which allows the user to hear the environment around them while using the earbuds, essentially offering an option to reduce or eliminate the noise-blocking properties of the earbuds.
    Meanwhile, the new AirPods also support an Ear Tip Fit Test, which can detect whether the headset has a good fit. And of course, the earbuds also fully support the usual AirPods features, including hands-free ‘Hey Siri’ functionality and everything that is derived from that.
    Apple's AirPods Pro can work for up to 4.5 hours on one charge with ANC or Transparency mode activated, or for up to 5 hours without them. Talk time of the new headset is 3.5 hours.
    The new AirPods Pro are compatible with a variety of Apple’s devices running iOS 13.2 or later, iPadOS 13.2 or later, watchOS 6.1 or later, tvOS 13.2 or later, or macOS Catalina 10.15.1 or later.
    Apple’s AirPods Pro with a wireless charging case will be available starting Wednesday, October 30 in the US and 25 other countries. In the US, the product will cost $249.


    Source: Apple




    More...

  10. RSS Bot FEED (post #10000)

    Anandtech: NVIDIA Announces GeForce GTX 1650 Super: Launching November 22nd

    Alongside today's GeForce GTX 1660 Super launch, NVIDIA is also taking the wraps off of one more GeForce Super card. Having already given a Super mid-generation refresh to most of their lineup, they will now be giving one to one of their last untouched product lines, the GTX 1650 series. The resulting product, the GeForce GTX 1650 Super, promises to be an interesting card when it actually launches next month on November 22nd, as NVIDIA will be aiming significantly higher than the original GTX 1650 that it supplants. And it will be just in time to do battle with AMD's Radeon RX 5500 series.
    NVIDIA GeForce Specification Comparison
                            GTX 1660          GTX 1650 Super    GTX 1650          GTX 1050 Ti
    CUDA Cores              1408              1280              896               768
    ROPs                    48                32                32                32
    Core Clock              1530MHz           1530MHz           1485MHz           1290MHz
    Boost Clock             1785MHz           1725MHz           1665MHz           1392MHz
    Memory Clock            8Gbps GDDR5       12Gbps GDDR6      8Gbps GDDR5       7Gbps GDDR5
    Memory Bus Width        192-bit           128-bit           128-bit           128-bit
    VRAM                    6GB               4GB               4GB               4GB
    Single Precision Perf.  5 TFLOPS          4.4 TFLOPS        3 TFLOPS          2.1 TFLOPS
    TGP                     120W              100W              75W               75W
    GPU                     TU116 (284 mm2)   TU116 (284 mm2)   TU117 (200 mm2)   GP107 (132 mm2)
    Transistor Count        6.6B              6.6B              4.7B              3.3B
    Architecture            Turing            Turing            Turing            Pascal
    Manufacturing Process   TSMC 12nm "FFN"   TSMC 12nm "FFN"   TSMC 12nm "FFN"   Samsung 14nm
    Launch Date             03/14/2019        11/22/2019        04/23/2019        10/25/2016
    Launch Price            $219              TBA               $149              $139
    Like the other Super cards this year, the GTX 1650 Super is intended to be a mid-generation kicker for the GeForce family. However unlike the other Super cards, NVIDIA is giving the GTX 1650 Super a much bigger jump in performance. With a planned increase in GPU throughput of 46%, and paired with faster 12Gbps GDDR6 memory, the new card should be much farther ahead of the GTX 1650 than what we saw with today’s GTX 1660 Super launch, relatively speaking.
    The single biggest change here is the GPU. While NVIDIA is calling the card a GTX 1650, in practice it’s more like a GTX 1660 LE; NVIDIA has brought in the larger, more powerful TU116 GPU from the GTX 1660 series to fill out this card. There are cost and power consequences to this, but the payoff is that it gives NVIDIA a lot more SMs and CUDA Cores to work with. Coupled with that is a small bump in clockspeeds, which pushes the on-paper shader/compute throughput numbers up by just over 46%.
    Such a large jump in GPU throughput also requires a lot more memory bandwidth to feed the beast. As a result, just like the GTX 1660 Super, the GTX 1650 Super is getting the GDDR6 treatment as well. Here NVIDIA is using slightly lower (and lower power) 12Gbps GDDR6, which will be attached to the GPU via a neutered 128-bit memory bus. Still, this one change will give the GTX 1650 Super 50% more memory bandwidth than the vanilla GTX 1650, very close to its increase in shader throughput.
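    For reference, the on-paper gains quoted above fall straight out of the specification table; the sketch below assumes the usual 2 FLOPs per CUDA core per clock at the listed boost clocks, so expect small rounding differences versus NVIDIA's quoted figures.

    # Recomputing the on-paper shader and memory bandwidth gains from the spec table.
    def shader_tflops(cuda_cores: int, boost_ghz: float) -> float:
        return 2 * cuda_cores * boost_ghz / 1000   # FMA = 2 FLOPs per core per clock

    def mem_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
        return data_rate_gbps * bus_width_bits / 8

    gtx1650       = (shader_tflops(896, 1.665),  mem_bandwidth_gbs(8, 128))
    gtx1650_super = (shader_tflops(1280, 1.725), mem_bandwidth_gbs(12, 128))

    print(f"GTX 1650:       {gtx1650[0]:.2f} TFLOPS, {gtx1650[1]:.0f} GB/s")              # 2.98 TFLOPS, 128 GB/s
    print(f"GTX 1650 Super: {gtx1650_super[0]:.2f} TFLOPS, {gtx1650_super[1]:.0f} GB/s")  # 4.42 TFLOPS, 192 GB/s
    print(f"Shader gain:    {gtx1650_super[0] / gtx1650[0] - 1:.0%}")   # ~48% here; ~46% with the rounded TFLOPS figures
    print(f"Bandwidth gain: {gtx1650_super[1] / gtx1650[1] - 1:.0%}")   # 50%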
    Do note, however, that not all aspects of the GPU are being scaled out to the same degree. In particular, the GTX 1650 Super will still only have 32 ROPs, with the rest of TU116’s ROPs getting cut off along with its spare memory channels. This means that while the GTX 1650 Super will have 46% more shader performance, it will only have 4% more ROP throughput for pushing pixels. Counterbalancing this to a degree will be the big jump in memory bandwidth, which will keep those 32 ROPs well-fed, but at the end of the day the GPU is getting an uneven increase in resources, and gaming performance gains are likely to reflect this.
    The drawback to all of this, then, is power consumption. While the original GTX 1650 is a 75 Watt card – making it the fastest thing that can be powered solely by a PCIe slot – the Super-sized card will be a 100 Watt card. This gives up the original GTX 1650's unique advantage, and it means builders looking for even faster 75W cards won't get their wish, but that extra power is the price of the GTX 1650 Super's higher performance. Traditionally, NVIDIA has held pretty steadfast at 75W for their xx50 cards, so I'll be curious to see what this means for consumer interest and sales; but then again, at the end of the day, despite the name, this is closer to a lightweight GTX 1660 than it is a GTX 1650.
    Speaking of hardware features, besides giving NVIDIA a good deal more in the way of GPU resources to play with, the switch from the TU117 GPU to the TU116 GPU will also have one other major ramification that some users will want to pay attention to: video encoding. Unlike TU117, which got the last-generation NVENC Volta video encoder block for die space reasons, TU116 gets the full-fat Turing NVENC video encoder block. Turing’s video encode block has been turning a lot of heads for its level of quality – while not archival grade, it’s competitive with x264 medium – which is important for streamers. This also led to TU117 and the GTX 1650 being a disappointment in some circles, as an otherwise solid video card was made far less useful for video encoding. So with the GTX 1650 Super, NVIDIA is resolving this in a roundabout way, thanks to the use of the more powerful TU116 GPU.
    Moving on, the GTX 1650 Super is set to launch on November 22nd. And, while NVIDIA does not directly call out AMD in its product descriptions, the card's configuration and timing make a very compelling case that this is meant to be NVIDIA's answer to AMD's impending Radeon RX 5500. The first Navi 14-based video card is set to launch at retail sometime this quarter, and in their promotional material, AMD has been comparing it to the vanilla GTX 1650. So adding a GTX 1650 Super card allows NVIDIA to get ahead, in a fashion, by making available another (relatively) cheap card that, knowing NVIDIA, they expect to outperform what AMD has in the works. Of course the proof is in the pudding, so to speak, and at this point we're waiting on both AMD and NVIDIA to actually launch their respective products before we can see how the competing cards actually stack up.
    The other major wildcard here will be pricing. While NVIDIA is announcing the full specifications of the GTX 1650 Super today, they are withholding pricing information. This admittedly isn't unusual for NVIDIA (they rarely release it more than a few days in advance), but in this case in particular, both NVIDIA and AMD seem to be playing a bit of a game of chicken. Neither side has announced where their card will be priced, and it would seem that each is waiting on the other to go first so that they can counter with the best possible position for their respective card. Though with NVIDIA's card not set to launch for another month, and AMD's card more indeterminate still, we're all going to be waiting for a while regardless.
    At any rate, we’ll have more to talk about over the next month or so as the GTX 1650 Super and the rest of this holiday season’s video cards start hitting store shelves. So stay tuned.
    Q4 2019 GPU Pricing Comparison
    AMD                   Price    NVIDIA
    Radeon RX 5700 XT     $399     GeForce RTX 2060 Super
    Radeon RX 5700        $329     GeForce RTX 2060
                          $279     GeForce GTX 1660 Ti
                          $229     GeForce GTX 1660 Super
                          $219     GeForce GTX 1660
    Radeon RX 590         $199
    Radeon RX 580         $179
    Radeon RX 5500        ?        GeForce GTX 1650 Super
                          $149     GeForce GTX 1650


    More...
