Thread: Anandtech News

  1. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,799
    Post Thanks / Like
    #11541

    Anandtech: Intel To Wind Down Optane Memory Business - 3D XPoint Storage Tech Reaches

It appears that the end may be in sight for Intel’s beleaguered Optane memory business. Tucked inside a brutal Q2’2022 earnings release for the company (more on that a bit later today) is a very curious statement in a section on non-GAAP adjustments: “In Q2 2022, we initiated the winding down of our Intel Optane memory business.” As well, Intel’s earnings report notes that the company is taking a $559 million “Optane inventory impairment” charge this quarter.
    Beyond those two items, there is no further information about Optane inside Intel’s earnings release or their associated presentation deck. We have reached out to company representatives seeking more information, and are waiting for a response.
    Taking these items at face value, then, it would seem that Intel is preparing to shut down its Optane memory business and development of the associated 3D XPoint technology. To be sure, there is a high degree of nuance around the Optane name and product lines – which is why we’re looking for clarification from Intel – as Intel has several Optane products, including “Optane memory,” “Optane persistent memory,” and “Optane SSDs”. Nonetheless, within Intel’s previous earnings releases and other financial documents, the complete Optane business unit has traditionally been referred to as their “Optane memory business,” so it would appear that Intel is indeed winding down the Optane business unit, and not just the Optane Memory product.
    Update: 6:40pm ET
    Following our request, Intel has sent out a short statement on the Optane wind-down. While not offering much in the way of further details on Intel's exit, it does confirm that Intel is indeed exiting the entire Optane business.
    We continue to rationalize our portfolio in support of our IDM 2.0 strategy. This includes evaluating divesting businesses that are either not sufficiently profitable or not core to our strategic objectives. After careful consideration, Intel plans to cease future product development within its Optane business. We are committed to supporting Optane customers through the transition.
    Intel, in turn, used 3D XPoint as the basis of two product lineups. For its datacenter customers, it offered Optane Persistent Memory, which packaged 3D XPoint into DIMMs as a partial replacement for traditional DRAM. Optane DIMMs offered greater bit density than DRAM, which, combined with their persistent, non-volatile nature, made for an interesting offering for systems that needed massive working memory sets and could benefit from non-volatility, such as database servers. Meanwhile, Intel also used 3D XPoint as the basis of several storage products, including high-performance SSDs for the server and client markets, and as a smaller high-speed cache for use with slower NAND SSDs.
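The app-direct usage model that Optane DIMMs enabled – plain load/store access to persistent media – can be approximated with a memory-mapped file. A minimal Python sketch follows; note that it uses an ordinary temp file as a stand-in for a file on a DAX-mounted persistent-memory filesystem, which is how real Optane PMem would be mapped:

```python
import mmap
import os
import tempfile

# Stand-in for a persistent memory region; with real Optane PMem this
# would be a file on a DAX-mounted filesystem, mapped the same way.
path = os.path.join(tempfile.gettempdir(), "pmem_demo.bin")

with open(path, "wb") as f:
    f.truncate(4096)  # reserve a 4 KiB "persistent" region

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as m:
        m[0:5] = b"hello"  # a plain store, no write() syscall per update
        m.flush()          # analogous to flushing CPU caches to media

with open(path, "rb") as f:
    print(f.read(5))  # b'hello' survives the unmap, as across a power cycle
```

The appeal of database servers as a use case follows directly from this model: an index larger than affordable DRAM can live in the mapped region and still be byte-addressable.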
    3D XPoint’s unique attributes have also been a challenge for Intel since the technology launched, however. Despite being designed for scalability via layer stacking, 3D XPoint manufacturing costs have remained higher than NAND on a per-bit basis, making the tech significantly more expensive than even higher-performance SSDs. Meanwhile Optane DIMMs, while filling a unique niche, were equally expensive and offered slower transfer rates than DRAM. So, despite Intel’s efforts to offer a product that could cross over the two product spaces, for workloads that don’t benefit from the technology’s unique abilities, 3D XPoint ended up being neither as good as DRAM nor as good as NAND at their respective tasks – making Optane products a hard sell.
    As a result, Intel has been losing money on its Optane business for most (if not all) of its lifetime, including hundreds of millions of dollars in 2020. Intel does not break out Optane revenue information on a regular basis, but on the one-off occasions where they have published those numbers, they have been well in the red on an operating income basis. As well, reports from Blocks & Files have claimed that Intel is sitting on a significant oversupply of 3D XPoint chips – on the order of two years’ worth of inventory as of earlier this year. All of which underscores the difficulty Intel has encountered in selling Optane products, and adds to the size of the write-down/write-off Intel is taking today with its $559M Optane impairment charge.
    Consequently, a potential wind-down for Optane/3D XPoint has been in the tea leaves for a while now, and Intel has been taking steps to alter or curtail the business. Most notably, the dissolution of the Intel/Micron IMFT joint venture left Micron with possession of the sole production fab for 3D XPoint, while Micron itself abandoned its own 3D XPoint plans. And after producing 3D XPoint memory into 2021, Micron eventually sold the fab to Texas Instruments for other uses. Since then, Intel has not had access to a high-volume fab for 3D XPoint – though if the inventory reports are true, they haven’t needed to produce more of the memory in quite some time.
    Meanwhile on the product side of matters, winding down the Optane business follows Intel’s earlier retreat from the client storage market. While the company has released two generations of Optane products for the datacenter market, it never released a second generation of consumer products (i.e., a successor to the Optane 905P). And, having sold their NAND business to SK hynix (where it now operates as Solidigm), Intel no longer produces other types of client storage. So retiring the remaining datacenter products is the logical next step, albeit an unfortunate one.

    Intel's Former Optane Persistent Memory Roadmap: What Will Never Be
    Overall, Intel has opted to wind down the Optane/3D XPoint business at a critical juncture for the company. With their Sapphire Rapids Xeon CPUs launching this year, Intel was previously scheduled to launch a third generation of Optane products, most importantly their “Crow Pass” 3rd generation persistent memory DIMMs, which among other things would update the Optane DIMM technology to use a DDR5 interface. While development of Crow Pass is presumably complete or nearly complete at this point (given Intel’s development schedule and the Sapphire Rapids delays), actually launching and supporting the product would still incur significant up-front and long-term costs, requiring Intel to support the technology for another generation.
    In lieu of Optane persistent memory, Intel’s official strategy is to pivot towards CXL memory technology (CXL.mem), which allows attaching volatile and non-volatile memory to a CPU over a CXL-capable PCIe bus. This would accomplish many of the same goals as Optane (non-volatile memory, large capacities) without the costs of developing an entirely separate memory technology. Sapphire Rapids, in turn, will be Intel’s first CPU to support CXL, and the overall technology has much broader industry backing.

    Astera Labs: CXL Memory Topology
    Still, Intel’s retirement of Optane/3D XPoint marks an unfortunate end to an interesting product lineup. 3D XPoint DIMMs were a novel idea even if they didn’t quite work out, and 3D XPoint made for ridiculously fast SSDs thanks to its massive random I/O advantage – a feature it doesn’t look like any other SSD vendor will be able to fully replicate any time soon. So for the solid state storage market, this is the end of an era.


    More...

  2. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,799
    Post Thanks / Like
    #11542

    Anandtech: The Intel Core i9-12900KS Review: The Best of Intel's Alder Lake, and the

    As far as top-tier CPU SKUs go, Intel's Core i9-12900KS processor stands in noticeably sharp contrast to the launch of AMD's Ryzen 7 5800X3D processor with its 96 MB of 3D V-Cache. Whereas AMD's over-the-top chip was positioned as the world's fastest gaming processor, for their fastest chip, Intel has kept their focus on trying to beat the competition across the board and across every workload.
    As the final 12th Generation Core (Alder Lake) desktop offering from Intel, the Core i9-12900KS is unambiguously designed to be the most powerful one. It's a "special edition" processor, meaning that it's a low-volume, high-priced chip aimed at customers who need or want the fastest thing possible, damn the price or the power consumption.
    It's a strategy that Intel has employed a couple of times now – most notably with the Coffee Lake-generation i9-9900KS – and which has been relatively successful for Intel. And to be sure, the market for such a top-end chip is rather small, but the overall mindshare impact of having the fastest chip on the market is huge. So, with Intel looking to put some distance between itself and AMD's successful Ryzen 5000 family of chips, Intel has put together what is meant to be the final (and fastest) word in Alder Lake CPU performance, shipping a chip with peak (turbo) clockspeeds ramped up to 5.5GHz for its all-important performance cores.
    For today's review we're putting Alder Lake's fastest to the test, both against Intel's other chips and AMD's flagships. Does this clockspeed-boosted 12900K stand out from the crowd? And are the tradeoffs involved in hitting 5.5GHz worth it for what Intel is positioning as the fastest processor in the world? Let's find out.

    More...

  3. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,799
    Post Thanks / Like
    #11543

    Anandtech: Akasa AK-ENU3M2-07 USB 3.2 Gen 2x2 SSD Enclosure Review: 20Gbps with Excel

    Storage bridges have become a ubiquitous part of today's computing ecosystems. The bridges may be external or internal, with the former enabling a range of direct-attached storage (DAS) units. These may range from thumb drives using a UFD controller to full-blown RAID towers carrying InfiniBand and Thunderbolt links. From a bus-powered DAS viewpoint, Thunderbolt has been restricted to premium devices, but the variants of USB 3.2 have emerged as mass-market high-performance alternatives. USB 3.2 Gen 2x2 enables the highest performance class (up to 20 Gbps) in USB devices without resorting to PCIe tunneling. The key challenges for enclosures and portable SSDs supporting 20Gbps speeds include handling power consumption and managing thermals. Today's review takes a look at the relevant performance characteristics of Akasa's AK-ENU3M2-07 - a USB 3.2 Gen 2x2 enclosure for M.2 NVMe SSDs.
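As a back-of-the-envelope sketch, the ceiling on usable throughput for a 20 Gbps Gen 2x2 link can be estimated from the line rate and the 128b/132b encoding that Gen 2 signaling uses (real-world numbers land lower still, once USB and NVMe protocol overhead is counted):

```python
# Back-of-envelope throughput ceiling for USB 3.2 Gen 2x2
# (2 lanes x 10 Gbps), assuming 128b/132b line encoding.
line_rate_gbps = 20.0             # raw 20 Gbps signaling rate
encoding_eff = 128 / 132          # 128b/132b encoding overhead
payload_gbps = line_rate_gbps * encoding_eff
payload_GBps = payload_gbps / 8   # bits -> bytes
print(f"{payload_GBps:.2f} GB/s") # ~2.42 GB/s before protocol overhead
```

This is why 20 Gbps enclosures typically benchmark in the low-2 GB/s range at best.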

    More...

  4. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,799
    Post Thanks / Like
    #11544

    Anandtech: OpenCAPI to Fold into CXL - CXL Set to Become Dominant CPU Interconnect St

    With the 2022 Flash Memory Summit taking place this week, not only is there a slew of solid-state storage announcements in the pipe over the coming days, but the show is also increasingly a popular venue for discussing I/O and interconnect developments as well. Kicking things off on that front, this afternoon the OpenCAPI and CXL consortiums are issuing a joint announcement that the two groups will be joining forces, with the OpenCAPI standard and the consortium’s assets being transferred to the CXL consortium. With this integration, CXL is set to become the dominant CPU-to-device interconnect standard, as virtually all major manufacturers are now backing the standard, and competing standards have bowed out of the race and been absorbed by CXL.
    Pre-dating CXL by a few years, OpenCAPI was one of the earlier standards for a cache-coherent CPU interconnect. The standard, backed by AMD, Xilinx, and IBM, among others, was an extension of IBM’s existing Coherent Accelerator Processor Interface (CAPI) technology, opening it up to the rest of the industry and placing its control under an industry consortium. In the last six years, OpenCAPI has seen a modest amount of use, most notably being implemented in IBM’s POWER9 processor family. Like similar CPU-to-device interconnect standards, OpenCAPI was essentially an application extension on top of existing high speed I/O standards, adding things like cache-coherency and faster (lower latency) access modes so that CPUs and accelerators could work together more closely despite their physical disaggregation.
    But, as one of several competing standards tackling this problem, OpenCAPI never quite caught fire in the industry. Born from IBM, IBM remained its biggest user at a time when the company's share of the server space was on the decline. And even consortium members on the rise, such as AMD, ended up passing on the technology, leveraging their own Infinity Fabric architecture for server CPU/GPU connectivity instead. This has left OpenCAPI without a strong champion – and without a sizable userbase to keep things moving forward.
    Ultimately, the desire of the wider industry to consolidate behind a single interconnect standard – for the sake of both manufacturers and customers - has brought the interconnect wars to a head. And with Compute Express Link (CXL) quickly becoming the clear winner, the OpenCAPI consortium is becoming the latest interconnect standards body to bow out and become absorbed by CXL.
    Under the terms of the proposed deal – pending approval by the necessary parties – the OpenCAPI consortium’s assets and standards will be transferred to the CXL consortium. This would include all of the relevant technology from OpenCAPI, as well as the group’s lesser-known Open Memory Interface (OMI) standard, which allowed for attaching DRAM to a system over OpenCAPI’s physical bus. In essence, the CXL consortium would be absorbing OpenCAPI; and while they won’t be continuing its development for obvious reasons, the transfer means that any useful technologies from OpenCAPI could be integrated into future versions of CXL, strengthening the overall ecosystem.
    With the sublimation of OpenCAPI into CXL, this leaves the Intel-backed standard as the dominant interconnect standard – and the de facto standard for the industry going forward. The competing Gen-Z standard was similarly absorbed into CXL earlier this year, and the CCIX standard has been left behind, with its major backers joining the CXL consortium in recent years. So even with the first CXL-enabled CPUs not quite shipping yet, at this point CXL has cleared the neighborhood, as it were, becoming the sole remaining server CPU interconnect standard for everything from accelerator I/O (CXL.io) to memory expansion over the PCIe bus.


    More...

  5. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,799
    Post Thanks / Like
    #11545

    Anandtech: Solidigm Announces P41 Plus SSD: Taking Another Shot at QLC With Cache Tie

    Although Intel is no longer directly in the SSD market these days, their SSD team and related technologies continue to live on under the SK hynix umbrella as Solidigm. Since their initial formation at the very end of 2021, Solidigm has been in the process of reestablishing their footing, continuing to sell and support Intel’s previous SSD portfolio while continuing development of their next generation of SSDs. On the enterprise side of matters this recently culminated in the launch of their new D7 SSDs. Meanwhile on the consumer side of matters, today at Flash Memory Summit the company is announcing their first post-Intel consumer SSD, the Solidigm P41 Plus.
    The P41 Plus is, at a high level, the successor to Intel’s 670p SSD, the company’s second-generation QLC-based SSD. And based on that description alone, a third-generation QLC drive from Solidigm is something that few AnandTech readers would find remarkable. QLC makes for cheap high(ish) capacity SSDs, which OEMs love, while computing enthusiasts are decidedly less enthusiastic about them.
    But then the P41 Plus isn’t just a traditional QLC drive.
    One of the more interesting ventures out of Intel’s time as a client SSD manufacturer was the company’s forays into cache tiering. Whether it was using flash memory as a hard drive cache, using 3D XPoint as a hard drive cache, or even using 3D XPoint as a flash memory cache, Intel tried several ways to speed up the performance of slower storage devices in a cost-effective manner. And while Intel’s specific solutions never really caught on, Intel’s core belief that some kind of caching is necessary proved correct, as all modern TLC and QLC SSDs come with pseudo-SLC caches for improved burst write performance.
    While they are divorced from Intel these days, Solidigm is picking up right where Intel left off, continuing to experiment with cache tiering. Coming from the same group that developed Intel’s mixed 3D XPoint/QLC drives such as the Optane Memory H20, Solidigm no longer has access to Intel’s 3D XPoint memory (and soon, neither will Intel). But they do have access to flash memory. So for their first solo consumer drive as a stand-alone subsidiary, Solidigm is taking a fresh stab at cache tiering, expanding the role of the pSLC cache to serve as both a write cache and a read cache.
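The idea of a pSLC tier doing double duty as a write buffer and a read cache can be illustrated with a toy model. To be clear, the class name, eviction policy, and promotion-on-read behavior below are illustrative assumptions of ours, not Solidigm's actual firmware logic:

```python
# Toy model of an SSD with a fast pSLC tier over a slow QLC backing
# store. Illustrative only; real firmware policies are far richer.
from collections import OrderedDict

class TieredDrive:
    def __init__(self, slc_blocks=4):
        self.slc = OrderedDict()   # fast pSLC tier, kept in LRU order
        self.qlc = {}              # slow QLC backing store
        self.slc_blocks = slc_blocks

    def write(self, lba, data):
        # Writes land in pSLC first for burst speed.
        self.slc[lba] = data
        self.slc.move_to_end(lba)
        self._evict()

    def read(self, lba):
        if lba in self.slc:        # pSLC hit: fast path
            self.slc.move_to_end(lba)
            return self.slc[lba]
        data = self.qlc[lba]       # miss: slow QLC read...
        self.write(lba, data)      # ...then promote into the pSLC tier
        return data

    def _evict(self):
        # Fold the least-recently-used blocks down into QLC.
        while len(self.slc) > self.slc_blocks:
            lba, data = self.slc.popitem(last=False)
            self.qlc[lba] = data

drive = TieredDrive(slc_blocks=2)
for lba in (1, 2, 3):
    drive.write(lba, f"block{lba}".encode())
print(sorted(drive.qlc))  # [1] -- oldest write folded down to QLC
print(drive.read(1))      # b'block1', promoted back into the pSLC tier
```

The novelty in the P41 Plus, as described, is the read-caching half: conventional pSLC caches only absorb writes, whereas this model also pulls hot data back up on reads.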

    More...

  6. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,799
    Post Thanks / Like
    #11546

    Anandtech: Compute Express Link (CXL) 3.0 Announced: Doubled Speeds and Flexible Fabr

    While it’s technically still the new kid on the block, the Compute Express Link (CXL) standard for host-to-device connectivity has quickly taken hold in the server market. Designed to offer a rich I/O feature set built on top of the existing PCI-Express standards – most notably cache-coherency between devices – CXL is being prepared for use in everything from better connecting CPUs to accelerators in servers, to being able to attach DRAM and non-volatile storage over what’s physically still a PCIe interface. It’s an ambitious and yet widely-backed roadmap that in three short years has made CXL the de facto advanced device interconnect standard, leading to rival standards Gen-Z, CCIX, and, as of yesterday, OpenCAPI all dropping out of the race.
    And while the CXL Consortium is taking a quick victory lap this week after winning the interconnect wars, there is much more work to be done by the consortium and its members. On the product front the first x86 CPUs with CXL are just barely shipping – largely depending on what you want to call the limbo state that Intel’s Sapphire Rapids chips are in – and on the functionality front, device vendors are asking for more bandwidth and more features than were in the original 1.x releases of CXL. Winning the interconnect wars makes CXL the king of interconnects, but in the process, it means that CXL needs to be able to address some of the more complex use cases that rival standards were being designed for.
    To that end, at Flash Memory Summit 2022 this week, the CXL Consortium is at the show to announce the next full version of the CXL standard, CXL 3.0. Following up on the 2.0 standard, which was released at the tail-end of 2020 and introduced features such as memory pooling and CXL switches, CXL 3.0 focuses on major improvements in a couple of critical areas for the interconnect. The first of which is the physical side, where CXL is doubling its per-lane throughput to 64 GT/second. Meanwhile, on the logical side of matters, CXL 3.0 is greatly expanding the logical capabilities of the standard, allowing for complex connection topologies and fabrics, as well as more flexible memory sharing and memory access modes within a group of CXL devices.
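The per-lane doubling translates directly into link bandwidth. A quick raw-rate sketch (this deliberately ignores FLIT, FEC, and protocol overhead, so real usable bandwidth lands somewhat lower):

```python
# Raw-rate arithmetic for CXL link bandwidth. Each transfer moves one
# bit per lane, so GT/s x lanes / 8 gives raw GB/s per direction.
def raw_bw_GBps(gt_per_s, lanes):
    return gt_per_s * lanes / 8  # bits -> bytes

print(raw_bw_GBps(32, 16))  # CXL 2.0 x16: 64.0 GB/s per direction
print(raw_bw_GBps(64, 16))  # CXL 3.0 x16: 128.0 GB/s per direction
```

At x16 widths, that raw 128 GB/s per direction is what makes memory expansion over the PCIe bus plausible as a DRAM-adjacent tier.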


    More...

  7. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,799
    Post Thanks / Like
    #11547

    Anandtech: UCIe Consortium Incorporates, Adds NVIDIA and Alibaba As Members

    Among the groups with a presence at this year’s Flash Memory Summit is the UCIe Consortium, the recently formed group responsible for the Universal Chiplet Interconnect Express (UCIe) standard. First unveiled back in March, the UCIe Consortium is looking to establish a universal standard for connecting chiplets in future chip designs, allowing chip builders to mix-and-match chiplets from different companies. At the time of the March announcement, the group was looking for additional members as it prepared to formally incorporate, and for FMS they’re offering a brief update on their progress.
    First off, the group has now become officially incorporated. And while this is largely a matter of paperwork for the group, it’s nonetheless an important step as it properly establishes them as a formal consortium. Among other things, this has allowed the group to launch their work groups for developing future versions of the standard, as well as to offer initial intellectual property rights (IPR) protections for members.
    More significant, however, is the makeup of the incorporated UCIe board. While UCIe was initially formed with 10 members – a veritable who’s who of many of the big players in the chip industry – there were a couple of notable absences. The incorporated board, in turn, has picked up two more members who have bowed to the peer (to peer) pressure: NVIDIA and Alibaba.
    NVIDIA for its part has already previously announced that it would support UCIe in future products (even if it’s still pushing customers to use NVLink), so their addition to the board is not unexpected. Still, it brings on board what’s essentially the final major chip vendor, firmly establishing support for UCIe across all of the ecosystem’s big players. Meanwhile, like Meta and Google Cloud, Alibaba represents another hyperscaler joining the group, who will presumably be taking full advantage of UCIe in developing chips for their datacenters and cloud computing services.
    Overall, according to the Consortium the group is now up to 60 members total. And they are still looking to add more through events like FMS as they roll on towards getting UCIe 1.0 implemented in production chiplets.


    More...

  8. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,799
    Post Thanks / Like
    #11548

    Anandtech: SK hynix Announces 238 Layer NAND - Mass Production To Start In H1'2023

    As the 2022 Flash Memory Summit continues, SK hynix is the latest vendor to announce their next generation of NAND flash memory at the show, showcasing for the first time the company’s forthcoming 238 layer TLC NAND, which promises both improved density/capacity and improved bandwidth. At 238 layers, SK hynix has, at least for the moment, secured bragging rights for the greatest number of layers in a TLC NAND die – though with mass production not set to begin until 2023, it’s going to be a while until the company’s newest NAND shows up in retail products.
    Following closely on the heels of Micron’s 232L TLC NAND announcement last week, SK hynix is upping the ante ever so slightly with a 238 layer design. Though the difference in layer counts is largely inconsequential when you’re talking about NAND dies with 200+ layers to begin with, in the highly competitive flash memory industry it gives SK hynix bragging rights on layer counts, breaking the previous stalemate between them, Samsung, and Micron at 176L.
    From a technical perspective, SK hynix’s 238L NAND further builds upon the basic design of their 176L NAND. So we’re once again looking at a string stacked design, with SK hynix using a pair of 119 layer decks, up from 88 layers in the previous generation. This makes SK hynix the third flash memory vendor to master building decks over 100 layers tall, and is what’s enabling them to produce a 238L NAND design that holds the line at two decks.
    SK hynix’s NAND decks continue to be built with their charge-trap, CMOS under Array (CuA) architecture, which sees the bulk of the NAND’s logic placed under the NAND memory cells. According to the company, their initial 512Gbit TLC part has a die size of 35.58mm2, which works out to a density of roughly 14.39 Gbit/mm2. That’s a 35% improvement in density over their previous-generation 176L TLC NAND die at equivalent capacities. Notably, this does mean that SK hynix will be ever so slightly trailing Micron’s 232L NAND despite their total layer count advantage, as Micron claims they’ve hit a density of 14.6 Gbit/mm2 on their 1Tbit dies.
    SK hynix 3D TLC NAND Flash Memory

                            238L                    176L
    Layers                  238                     176
    Decks                   2 (x119)                2 (x88)
    Die Capacity            512 Gbit                512 Gbit
    Die Size                35.58 mm2               ~47.4 mm2
    Density                 ~14.39 Gbit/mm2         10.8 Gbit/mm2
    I/O Speed               2400 MT/s (ONFi 5.0)    1600 MT/s (ONFi 4.2)
    CuA / PuC               Yes                     Yes
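The quoted density figure can be sanity-checked from the published die capacity and size. A quick back-of-the-envelope check (note that straight division of the published figures gives a ~33% generational gain rather than the quoted 35%, with the gap presumably down to rounding in the published die sizes):

```python
# Sanity-checking SK hynix's quoted density from die capacity and size.
def density_gbit_per_mm2(capacity_gbit, die_mm2):
    return capacity_gbit / die_mm2

d238 = density_gbit_per_mm2(512, 35.58)  # 238L: 512 Gbit in 35.58 mm^2
print(round(d238, 2))                    # ~14.39 Gbit/mm^2, as quoted
print(round((d238 / 10.8 - 1) * 100))    # ~33% gain over 176L's 10.8
```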
    Speaking of 1Tbit, unlike Micron, SK hynix is not using the density improvements to build higher capacity dies – at least, not yet. While the company has announced that they will be building 1Tbit dies next year using their 238L process, for now they’re holding at 512Gbit, the same capacity as their previous generation. So all other factors held equal, we shouldn’t expect the first wave drives built using 238L NAND to have any greater capacity than the current generation. But, if nothing else, at least SK hynix’s initial 238L dies are quite small – though whether that translates at all to smaller packages remains to be seen.
    Besides density improvements, SK hynix has also improved the performance and power consumption of their NAND. Like the other NAND vendors, SK hynix is using this upcoming generation of NAND to introduce ONFi 5.0 support. ONFi 5.0 is notable for not only increasing the top transfer rate to 2400 MT/second – a 50% improvement over ONFi 4.2 – but it also introduces a new NV-LPDDR4 signaling method. As it’s based on LPDDR signaling (unlike the DDR3-derived mode in ONFi 4.x), NV-LPDDR4 offers tangible reductions in the amount of power consumed by NAND signaling. SK hynix isn’t breaking their power consumption figures out to this level of detail, but for overall power consumption, they’re touting a 21% reduction in energy consumed for read operations. Presumably this is per bit, so it will be counterbalanced by the 50% improvement in bandwidth.
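Assuming the 21% figure is indeed energy per bit read, a two-line sketch shows why total power at full speed can still rise even as efficiency improves:

```python
# If the 21% energy saving is per bit read, power at full speed still
# rises, because ONFi 5.0 moves 50% more bits per second.
energy_per_bit = 1.00 * (1 - 0.21)  # 0.79x relative energy per bit
bandwidth = 1.50                    # 1.5x bits/s (2400 vs 1600 MT/s)
relative_power = energy_per_bit * bandwidth
print(round(relative_power, 3))     # ~1.185x power draw while reading
```

In other words, the efficiency gain shows up as more work per joule, not as a lower absolute power draw at peak transfer rates.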
    This week’s announcement comes as SK hynix has begun shipping samples of the 238L NAND to their customers. As previously mentioned, the company is not planning on kicking off mass production until H1’2023, so it will be some time before we see the new NAND show up in retail products. According to SK hynix, their plan is to start with shipping NAND for consumer SSDs, followed by smartphones and high-capacity server SSDs. That, in turn, will be followed up with the introduction of 1Tbit 238L NAND later in 2023.



    More...

  9. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,799
    Post Thanks / Like
    #11549

    Anandtech: ASRock Industrial NUC BOX-1260P and 4X4 BOX-5800U Review: Alder Lake-P and

    The past few years have seen Intel and AMD delivering new processors in a staggered manner. In the sub-45W category, Intel's incumbency has allowed it to deliver products for both the notebook and ultra-compact form factor (UCFF) within a few months of each other. On the other hand, AMD's focus has been on the high-margin notebook market, with the chips filtering down to the desktop market a year or so down the road. In this context, AMD's Cezanne (most SKUs based on the Zen 3 microarchitecture) and Intel's Tiger Lake went head-to-head last year in the notebook market, while Rembrandt (based on Zen3+) and Alder Lake-P are tussling it out this year. In the desktop space, Cezanne-based mini-PCs started making an appearance a few months back, coinciding with the first wave of Alder Lake-P systems. ASRock Industrial launched the NUC BOX-1200 series (Alder Lake-P) and the 4X4 BOX-5000 series (Cezanne) within a few weeks of each other. The company sent over the flagship models in both lineups for review, giving us a chance to evaluate the performance and value proposition of the NUC BOX-1260P and 4X4 BOX-5800U. Read on to find out how Alder Lake-P and Cezanne stack up against each other in the mini-PC space, and a look into what helps ASRock Industrial introduce mini-PCs based on the latest processors well ahead of its competitors.

    More...

  10. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,799
    Post Thanks / Like
    #11550

    Anandtech: The AlphaCool Eisbaer Aurora 360 AIO Cooler Review: Improving on Expandabl

    Today, we are taking a look at the updated version of the Alphacool Eisbaer AIO CPU cooler, the Eisbaer Aurora. For its second-generation product, Alphacool has gone through the Eisbaer design and improved every single part of this cooler, from the pump to the radiator and everything in-between. Combine that with a modular design that allows additional blocks to be attached to this otherwise closed-loop cooler, and Alphacool has a unique and powerful CPU cooler on its hands – albeit one that carries a price premium to match.

    More...
