
Thread: Anandtech News

  1. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #7631

    Anandtech: Western Digital, Toshiba And Bain Capital Settle Disputes Over Sale Of Tos

    Western Digital announced in a press conference this afternoon that they have reached a settlement agreement with Toshiba over Western Digital's objections to the sale of Toshiba's stake in their NAND flash memory joint ventures to a consortium led by Bain Capital.

    Toshiba's financial troubles came to a head a year ago with the acknowledgement of severe losses from Toshiba's nuclear power subsidiary. To maintain solvency, Toshiba was forced to offer up a share of its NAND flash memory business, by far the most lucrative portion of Toshiba's conglomerate. Over the course of several months as the severity of Toshiba's financial situation became clearer, their plans shifted to a complete sale of the memory business, valued around $18 billion.

    Western Digital acquired SanDisk in May 2016 and with it, SanDisk's half of the Toshiba-SanDisk partnerships to develop and manufacture NAND flash memory. Citing rights stemming from these partnerships, Western Digital claimed that Toshiba needed their consent before Toshiba could spin off and sell their side of the partnerships. Western Digital had been unable to keep pace in the bidding war for the spun-off Toshiba Memory Corporation (TMC) and sought to use whatever leverage they had to strengthen their position as one of the few major NAND manufacturers.

    Western Digital initiated arbitration proceedings against Toshiba in May 2017. Toshiba responded with a lawsuit in Japanese courts alleging unfair competition and mishandling of Toshiba trade secrets by Western Digital. As the disputes escalated, they also became more acrimonious, with Western Digital accusing Toshiba of cutting off some Western Digital employees from accessing shared databases and facilities. Toshiba also shut Western Digital out of the initial round of investment into a new fab.

    While Toshiba and Western Digital did manage to re-open negotiations, Toshiba eventually decided to sell TMC to a consortium led by Bain Capital and including US companies like Apple, Seagate and Dell as investors, as well as competing NAND manufacturer SK Hynix. That deal was signed in September and approved by the Toshiba board and shareholders a month later.

    At first glance, today's settlement and cessation of all hostilities looks like a loss for Western Digital, which will not be acquiring a larger share of the joint ventures. However, several of the agreements have been extended through 2027 and 2029 (one had already been extended to 2029), and the terms have been updated to strengthen the protections for the joint venture intellectual property: while Western Digital competitors like SK Hynix and Seagate are investors in the purchase of TMC, they will not gain access to any of the IP. Western Digital has also secured the right to match Toshiba's future investments in the new Fab 6 at Toshiba's Yokkaichi operations and in a new wafer fabrication project in Iwate. This is crucial to Western Digital's long-term competitiveness, because they would not be able to begin manufacturing NAND flash outside of the joint ventures without essentially starting from scratch.

    Updated Outlook

    Western Digital also provided an update on their outlook for the current second quarter of fiscal year 2018, and for 2018 overall. They expect to finish calendar year 2017 with over 65% of their NAND flash bit output transitioned to 3D NAND, with over 90% of that 3D NAND output on their current 64L BiCS3 process. Their 96L BiCS4 process will debut in a retail product due to ship this week (!), but no further details on that are available at this time. Longer term, Western Digital expects to continue reducing NAND costs at a faster pace than the 2D to 3D transition allowed, though evolving 3D NAND is expected to be slower than the progression of process shrinks was for 2D NAND. Modest price declines for NAND are expected as the industry-wide shortage relaxes, though one factor delaying this is that some competitors have chosen to migrate some 2D NAND fabs to DRAM manufacturing instead of 3D NAND, in order to address the shortage of DRAM that also exists.




    More...

  2. RSS Bot FEED
    #7632

    Anandtech: AMD Releases Radeon Pro Software Adrenalin Edition 17.12.1: Adrenalin and

    Alongside the much more expansive Radeon Software Adrenalin Edition, today AMD has released Radeon Pro Software Adrenalin Edition. As implied by the name, 17.12.1 Pro includes Radeon Pro ReLive, Radeon Pro Overlay, and a Pro-oriented ‘Connect Tab,’ with the same functionality covered in our primary Radeon Software Adrenalin Edition article. Supporting Windows 10 Fall Creators Update and Mixed Reality, today’s release also brings a good number of ProRender updates, rounding out AMD’s professional driver take on Adrenalin.
    Additionally, AMD is announcing a change to the release schedule of Radeon Pro Software Enterprise Drivers. For 2018, release dates will fall on the 2nd Wednesday of the 2nd month of each quarter, instead of the 4th Thursday of the quarter. Consequently, the first enterprise driver of 2018 will release on February 14th instead of January 25th.
    17.12.1 Pro continues to offer the optional Driver Options feature, which allows single Radeon Pro WX or Radeon Vega Frontier Edition cards on Windows 10 to utilize Radeon Software gaming drivers. A specialized “Radeon Software Adrenalin Edition for Radeon Pro” driver is available for this purpose, and is downloadable from within Radeon Pro settings.
    Radeon ProRender has also been enhanced with several new features and updated plugins. The renderer now has interactive viewport denoising, which can speed up 3D render times, and GL Transmission Format (glTF) support, a Khronos Group specification that compacts 3D asset size while reducing the processing time needed to unpack those assets. In turn, this assists in exporting assets to other applications while preserving materials.

    Autodesk 3ds Max 2018 and Autodesk Maya 2018 are now supported by ProRender plugins, and AMD also pre-announced macOS support for Autodesk Maya and Blender. ProRender now also has a ‘Game Engine Importer’ for importing SolidWorks geometry and materials into Unreal Engine for CAD visualization in a VR environment.
    And touching on virtualization, AMD announced their new Guest Interface Manager (GIM) open source KVM host OS driver, and stated that it was available on GitHub.
    In terms of fixed issues, 17.12.1 Pro resolves the following:

    • Corruption may be observed when applying poly extrude face on a selected plane in Maya 2017
    • Uninstaller may not remove amdssg64.sys during Radeon Pro SSG uninstall
    • On Vega-based hardware, unexpected behavior when running 3DMark Fire Strike on a six-display 4K configuration

    As listed earlier in 17.Q4, 17.12.1 Pro carries the same ISV certification notes with regard to the Autodesk Maya 2017/2018 issue #41945 and the Arnold to Maya issue #3142. ISV applications, certified drivers, and product compatibility are all listed on AMD’s ISV Application Certified Drivers page.
    The amdgpu-pro + amdgpu-all-open Radeon Pro Software Adrenalin Edition 17.12.1 for Linux also makes its mark today, and more information can be found on its release notes.
    The updated drivers for AMD’s professional workstation GPUs are available online at AMD’s professional graphics driver download page. More information on this update and further issues can be found in the Radeon Pro Software Adrenalin Edition 17.12.1 release notes.
    Gallery: Radeon Pro Software Adrenalin Edition 17.12.1 Slide Deck & Infographics




    More...

  3. RSS Bot FEED
    #7633

    Anandtech: Apple Starts iMac Pro Sales on Dec. 14: 8/10-Core Models At Launch, 14/18-

    Apple has announced that sales of its previously announced iMac Pro all-in-one workstations will start on Thursday, December 14. On that day, the company is expected to make its new systems available at least in some retail locations as well as through its website. Initially, Apple is expected to release iMac Pro workstations with eight- and ten-core processors, whereas the higher-end models featuring CPUs with up to 18 cores will hit the market in early 2018.
    Apple’s iMac Pro AIO workstations are aimed at professional users with demanding workloads, such as video editing, 3D animation, scientific research, software development, and others. To a large degree, the iMac Pro addresses the same crowd as the Mac Pro introduced four years ago (and never updated since then) and offers them a 27” AIO featuring modern multi-core CPUs, up-to-date GPUs, loads of RAM, plenty of solid-state storage, and advanced connectivity that includes four Thunderbolt 3 ports and one 10 GbE header. The company is still working on an all-new desktop computer that will replace the Mac Pro sometime in 2018, but for a while the new iMac Pro will be Apple’s most powerful PC.
    The Apple iMac Pro comes in an aluminum space gray chassis and is outfitted with a 27” display panel with a 5120x2880 resolution and up to 500 nits brightness that supports the DCI-P3 color gamut and 10-bit spatial and temporal dithering (no word on HDR10 support, though). Since many professional workloads require more than one monitor, Apple suggests using two of the four Thunderbolt 3 ports to connect two additional 5K displays (such as LG’s UltraFine 5K launched a year ago).

    Inside, the iMac Pro resembles a high-end desktop more than an AIO machine. The system is based on Intel’s Xeon W processors with eight, ten, 14, or 18 cores and up to 42 MB of L2+L3 cache. The 14-core iMac Pro was not a part of Apple’s original announcement, but a blogger was told by Apple that the fourth iMac Pro SKU would be available as well. Apple does not disclose the exact CPU models that it intends to use in the iMac Pro, but it looks like we are dealing with off-the-shelf Xeon W CPUs with up to 140 W TDP.
    By default, an entry-level iMac Pro is equipped with 32 GB of DDR4-2666 ECC memory, expandable to 64 GB or even 128 GB if needed. As for storage, a 1 TB SSD is the default option, but the iMac Pro can be equipped with a 2 TB or 4 TB SSD as well. All the drives use the NVMe protocol over a PCIe 3.0 x4 interface and offer up to 3 GB/s peak sequential read speed. While it looks like Apple is going to use standard memory modules, the iMac Pro does not seem to be user-upgradeable, unlike regular iMacs.
    For graphics, Apple picked AMD’s latest Radeon Pro Vega 56 with 8 GB of HBM2 for prêt-à-porter SKUs and the Radeon Pro Vega 64 with 16 GB of HBM2 for build-to-order configurations and, perhaps, for machines that feature an 18-core CPU and 128 GB of RAM. Neither Radeon Pro graphics adapter comes as a card; both are soldered to the iMac Pro’s motherboard, based on a picture supplied by Apple. The company does not disclose the frequencies of the bespoke Radeon Pro Vega GPUs it uses, but says that their maximum FP32 compute performance is 11 TFLOPS (which points to around a 1340 MHz clock rate for the Vega 64) and their peak memory bandwidth is 400 GB/s (indicating about 1600 MT/s memory speed), which is slower than the Radeon RX Vega cards for desktops. The main reasons why Apple downclocks its GPUs are, of course, power consumption and heat dissipation. The company says that the iMac Pro’s cooling system can cope with up to 500 W of heat, so to avoid overheating it cannot run a 140 W CPU alongside a full-power 295 W GPU.
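    The clock and bandwidth estimates above can be sanity-checked with a quick back-of-the-envelope calculation. This sketch assumes the desktop Vega 64 shader count (4096 stream processors) and a 2048-bit HBM2 bus, neither of which Apple has confirmed for its bespoke parts:

```python
# Back-of-the-envelope check of the figures above. Assumptions (not confirmed
# by Apple): 4096 stream processors for the Vega 64 part, 2048-bit HBM2 bus.

def clock_from_tflops(tflops: float, shaders: int) -> float:
    """Implied core clock in MHz, given FP32 TFLOPS (2 FLOPs per shader per clock via FMA)."""
    return tflops * 1e12 / (2 * shaders) / 1e6

def bandwidth_gbs(bus_width_bits: int, mts: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and transfer rate in MT/s."""
    return bus_width_bits / 8 * mts / 1000

print(clock_from_tflops(11.0, 4096))  # ≈ 1343 MHz, matching the ~1340 MHz estimate
print(bandwidth_gbs(2048, 1600))      # 409.6 GB/s, i.e. the quoted ~400 GB/s
```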
    Moving on to connectivity. The iMac Pro will feature an 802.11ac Wi-Fi + Bluetooth 4.2 module (there is no word on 802.11ac Wave 2 support, so it could be the same Broadcom controller used inside the latest MacBook Pro laptops), a 10 GbE connector (no word on the controller or its developer), four USB 3.0 Type-A headers, one SDXC card slot, a 3.5-mm audio jack, and four Thunderbolt 3 ports to connect additional displays, RAID storage, and other peripherals that demand high bandwidth. The iMac Pro also has an integrated 1080p webcam, stereo speakers, an array of microphones, and so on. Some rumours say that the iMac Pro will feature a voice-activated “Siri” assistant and for this reason integrate a recent A-series SoC, but Apple has yet to confirm this.
    Apple iMac Pro Brief Specifications
    iMac Pro 27"
    Display 27" with 5120 × 2880 resolution
    500 cd/m² brightness
    DCI-P3 support
    10-bit spatial and temporal dithering
    CPU Intel Xeon W-2145
    8C/16T
    3.7/4.5 GHz
    8 MB L2
    11 MB L3
    140 W
    Intel Xeon W-2155
    10C/20T
    3.7/4.5 GHz
    10 MB L2
    13.75 MB L3
    140 W
    Intel Xeon W-2175
    14C/28T
    2.5/4.3 GHz
    14 MB L2
    19.25 MB L3
    140 W
    Intel Xeon W-2195
    18C/36T
    2.4/4.3 GHz
    18 MB L2
    24.75 MB L3
    140 W
    PCH C422
    Graphics AMD Radeon Pro Vega 56 with 8 GB HBM2 or Radeon Pro Vega 64 with 16 GB HBM2
    Memory 32 GB DDR4-2666 with ECC
    Configurable to 64 GB or 128 GB DDR4-2666 with ECC
    Storage 1 TB SSD (NVMe, PCIe 3.0 x4)
    Configurable to 2 TB SSD or 4 TB SSD
    Wi-Fi IEEE 802.11ac Wi-Fi + BT 4.2
    Ethernet 10 GbE
    Display Outputs 4 × Thunderbolt 3
    Audio Stereo speakers
    Integrated microphones
    1 × audio out
    USB 4 × USB 3.0 Type-A (5 Gbps)
    4 × USB 3.1 Gen 2 Type-C (via TB3)
    Other I/O FHD webcam
    SDXC card reader
    Dimensions Width 65 cm | 25.6"
    Height 51.6 cm | 20.3"
    Depth 20.3 cm | 8"
    PSU ~ 500 W (to be confirmed)
    OS Apple macOS High Sierra
    The iMac Pro will ship with a space gray wireless Magic Keyboard with a numeric keypad, as well as a choice between the Magic Mouse 2 and the Magic Trackpad 2. We do not know the official price just yet, or details about support and warranty, but there are unofficial indications that the cheapest iMac Pro will sell for $4999.



    Sources: Apple, MacRumours, Marques Brownlee


    More...

  4. RSS Bot FEED
    #7634

    Anandtech: MediaTek Launches Sensio MT6381 Biosensor

    In a change of pace from the usual SoC-related MediaTek news, today we see the announcement of a new discrete biosensor from the Taiwanese chip manufacturer. The new Sensio MT6381 is a health monitoring solution which integrates six key measurement data points into an all-in-one package that can be built into smartphones.
    While the Helio series includes the popular SoC lineup of MediaTek, the Sensio branding is a new marketing effort by the company to give more exposure to its new and future sensor business, of which the new biosensor is the first announced product.
    The industry leader in biosensor solutions is currently Maxim Integrated, which is also the component supplier of the heart-rate monitor module integrated into Samsung Galaxy smartphones, undoubtedly the line-up which popularized and set the trend for integrating health monitoring solutions into mobile phones.
    While the wearable market has seen more widespread adoption of biosensor integration, outside of Samsung Galaxy phones there has been little to no adoption of such solutions (please correct me if a device comes to mind!), which is rather odd considering that Samsung first introduced the sensor in the Galaxy S5 close to four years ago.
    As such, MediaTek sees opportunity and demand from its OEM partners to enter this market. The MT6381 doesn’t just try to be a follower in terms of functionality, but tries to one-up rival component offerings by claiming to be the first 6-in-1 biosensor solution. The six data points that the sensor package is able to collect are as follows:

    • Heart-rate – heartbeats per minute
    • Heart-rate variability – variation in time between heartbeats
    • Blood pressure trends – measured range of data over time
    • Peripheral oxygen saturation (SpO2) – measurement of blood oxygen levels
    • Electrocardiography (ECG) – electrical activity of the heart over a period of time
    • Photoplethysmography (PPG) – measurement of blood volume changes

    The first four features can be found in today’s Galaxy phones and existing biosensor solutions, but it’s the integration of ECG and PPG that stands out as additional features exclusive to MediaTek’s new offering. The sensor is a discrete pre-assembled integrated package consisting of the package housing, substrate, the front-end IC, and the sensor assembly, protected by a glass cover. Red and infra-red LEDs serve as the illumination sources for the light-sensitive sensors, which measure the light absorption of a user’s fingertip and, with the help of some processing, convert that input data into heart-rate, blood pressure, and oxygen measurements.
    The ECG and PPG functions, which we haven’t so far seen integrated into smartphones, are also handled by the package with the help of two additional simple electrodes that the OEM has to implement into a device’s housing. When the user then touches both electrodes, one with a finger of each hand, this creates a closed loop between the device and the heart to enable ECG measurements.
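    As a rough illustration of the PPG principle described above (and emphatically not MediaTek's actual algorithm), a heart rate can be estimated by counting peaks in the sampled light-absorption waveform; a real implementation would also filter out motion artifacts and ambient-light noise first:

```python
# Illustrative sketch: deriving beats-per-minute from a sampled PPG waveform
# by counting local maxima above the signal mean.
import math

def heart_rate_bpm(samples: list, sample_rate_hz: float) -> float:
    """Count local maxima above the signal mean and convert to BPM."""
    mean = sum(samples) / len(samples)
    peaks = sum(
        1
        for i in range(1, len(samples) - 1)
        if samples[i] > mean and samples[i - 1] < samples[i] >= samples[i + 1]
    )
    duration_s = len(samples) / sample_rate_hz
    return peaks * 60 / duration_s

# Synthetic 1.2 Hz (72 BPM) pulse, sampled at 50 Hz for 10 seconds:
signal = [math.sin(2 * math.pi * 1.2 * t / 50) for t in range(500)]
print(heart_rate_bpm(signal, 50))  # → 72.0
```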
    The MT6381 is marketed as a complete solution, and with that also comes software support. MediaTek offers all related software as well as in-house and 3rd-party applications for health monitoring, which should ease adoption for low-cost OEMs.
    All in all, while maybe not as exciting as a new SoC, MediaTek’s new biosensor does open up the possibility of wider market adoption of integrated health sensors in smartphones. MediaTek also explains that the target application device for the MT6381 is indeed smartphones as opposed to wearables, a decision linked to simple market opportunity and demand for an integrated solution in this segment. The new Sensio sensor will be made available to MediaTek partners in early 2018.


    More...

  5. RSS Bot FEED
    #7635

    Anandtech: The DeepCool Captain 240 EX RGB AIO Cooler Review: Pump it Up, Without the

    Today we are taking a look at the latest AIO liquid cooler from DEEPCOOL, the Captain 240 EX RGB. As its name hints, one of the major features of the cooler is RGB lighting. Its design and stock fans suggest a product designed for low-noise operation, trying to combine the fanciness of RGB lighting with good everyday performance.

    More...

  6. RSS Bot FEED
    #7636

    Anandtech: PowerColor Announces Gaming Box: The Devil Box’s Spawn, TB3 eGFX Enclosure

    PowerColor recently announced its second eGFX enclosure, named the Gaming Station. The Gaming Station, like the Devil Box preceding it, is an accessory that enables gamers to connect desktop-level video cards to laptops, AIOs, or SFF PCs using the Thunderbolt 3 interface. The Gaming Box is slightly smaller and moves away from the angular look of the Devil Box to a more traditional black, rectangular, almost UPS-like appearance. It supports select NVIDIA and AMD GPUs (the latter through AMD's XConnect technology) and comes with a 550W power supply able to easily power the compatible video cards. The Gaming Box joins an increasing number of TB3-based external enclosures with the ability to run desktop-level video cards.
    On the outside, the PowerColor Gaming Box is matte black with the Gaming Station and PowerColor names, as well as two USB 3.0 ports, on the front panel. All sides of the enclosure are closed off except for the left side, where small circular vents take up most of the panel to let fresh air enter and cool the video card inside. Surprisingly, we do not see any (RGB) LEDs on the enclosure. The dimensions come in at 343.2 x 163 x 245mm (13.5” x 6.4” x 9.6”) versus 400 x 172 x 242mm for the Devil Box, so portability has improved, if only by a small amount.
    The device uses a single Thunderbolt 3 port on the back panel offering 40 Gbps of bandwidth. Internally, this translates to the installed graphics card having access to PCIe 3.0 x4 lanes. Both the TB and I/O cards are the same ones found in the Devil Box, according to PowerColor. As time goes on and other options hit the market, we are now seeing dual TB3 connectivity for increased bandwidth and flexibility; PowerColor did not mention when or if this will be included in future iterations. In addition to the Thunderbolt 3 port, there are three more USB 3.0 ports as well as a Gigabit Ethernet port to enable high-speed wired networking on ultra-thin laptops or other devices that do not feature GbE.
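    For context on the numbers above: Thunderbolt 3's 40 Gbps is the total link rate, while the PCIe 3.0 x4 tunnel available to the graphics card tops out at roughly 32 Gbps of raw lane bandwidth. A quick sketch of that arithmetic:

```python
# PCIe 3.0 runs each lane at 8 GT/s with 128b/130b line encoding,
# so the x4 link behind the TB3 port carries about 31.5 Gbps (~3.9 GB/s).

def pcie3_gbps(lanes: int) -> float:
    """Raw PCIe 3.0 bandwidth in Gbps after 128b/130b encoding overhead."""
    return lanes * 8.0 * 128 / 130  # 8 GT/s per lane

print(pcie3_gbps(4))  # ≈ 31.5 Gbps, i.e. ~3.94 GB/s
```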
    Internally, the Gaming Box supports GPUs up to 310 x 157 x 46mm (12.2” x 6.2” x 1.8”), enough for a double-wide video card, through a full-length PCIe slot running in PCIe 3.0 x4 mode. The power supply now uses an SFX format and is rated at 550W 80 Plus Gold, an upgrade from the Devil Box's 500W. Unlike the Devil Box, the Gaming Box does away with 2.5” HDD/SSD support. If storage expansion through an eGFX enclosure is a goal, users will need to select the Devil Box instead.
    PowerColor Gaming Box Specifications
    Max Video Card Size Double-Wide, 12.2" Long
    (310 × 157 × 46 mm)
    Max Video Card Power 375 W
    Connectivity 1 × Thunderbolt 3 (40 Gbps) port to connect to host PCs and charge them
    5 × USB 3.0 Type-A (2x Front Panel, 3x Back Panel)
    1 × Gigabit Ethernet
    Chassis Size 6.4 × 13.5 × 9.6 inches
    (163 × 343 × 245 mm)
    Internal PSU 550 W
    System Requirements Thunderbolt 3 eGFX Certified PC
    Thunderbolt 3 w/Active Cable (included - 50cm)
    Windows 10 64bit Only
    Shipping Date 1Q 2018
    Price $379 / €419
    For GPU compatibility, the Gaming Box lists the AMD Radeon R9 285/290/290X/300 series, R9 Nano/Fury, and the RX 400 and RX 500 series. On the NVIDIA side, support ranges from the Maxwell-based GTX 750/750 Ti through the 900 series, the GTX 1060/1070/1080/1080 Ti, and the Titan X/Xp, as well as select Quadro cards.
    PowerColor Gaming Box Video Card Compatibility List
    AMD NVIDIA
    Radeon RX 500 Series GeForce GTX 1080 / 1080 Ti
    Radeon RX 400 Series GeForce GTX 1070
    Radeon R9 Fury GeForce GTX 1060
    Radeon R9 Nano GeForce GTX Titan X / Titan Xp
    Radeon R9 300 Series GeForce GTX 980 Ti
    Radeon R9 290X GeForce GTX 980
    Radeon R9 290 GeForce GTX 970
    Radeon R9 285 GeForce GTX 960
    GeForce GTX 950
    GeForce GTX 750/750 Ti
    NVIDIA Quadro P4000 / 5000 / 6000 / GP100
    The Gaming Station will be available 12/15 with an MSRP of $329. This is priced less than the Devil Box's MSRP upon release ($399).




    More...

  7. RSS Bot FEED
    #7637

    Anandtech: Synaptics Unveils Clear ID In-Display Fingerprint Sensor for 18:9, 20:9 Sm

    Synaptics this week introduced its first in-display optical fingerprint sensor, the Clear ID FS9500, which can be installed under the surface of an OLED screen. The fingerprint reader works faster than facial recognition and can be used in any environment, including dark rooms and on sunny beaches, Synaptics says. The first smartphones featuring the technology are due in Q1 2018.
    Biometric authentication has become a mandatory feature of every smartphone in recent years, but integrating a fingerprint sensor has become a challenge as screen-to-body ratios grow and many front panels no longer have space for the reader. Some smartphone makers install the fingerprint sensor on the back panel of their devices; others, like Apple, introduce facial recognition technologies that require IR and RGB sensors accompanied by appropriate processing resources. Synaptics argues that fingerprint sensors on the back are uncomfortable to use, whereas facial recognition technologies are relatively slow and can be fooled. By contrast, Synaptics’ latest Clear ID FS9500 sensor is hidden under the screen on the front, and it works as fast as users have come to expect from similar devices in recent years.
    The Synaptics Clear ID FS9500 fingerprint sensor is a tiny CMOS device that sits below the AMOLED display assembly and captures the reflected fingerprint between and through the OLED pixels. Synaptics said that the captured fingerprint has a very high resolution, but naturally does not quantify this claim. To make a fingerprint reflect, a part of the screen has to be lit up, so while the device itself only draws 80 mA, the screen consumes some additional power too (Synaptics does not disclose the lowest display brightness required for scanning, as it depends on many factors). Meanwhile, Synaptics uses accelerometers and other sensors inside the phone to activate the sensor and turn on the appropriate area of the screen. The activation happens instantly; it then takes 0.7 seconds to scan the fingerprint, match it, and grant or deny access. By contrast, Synaptics says, it takes modern smartphones about 1.4 seconds to scan a face.

    The FS9500 does not support Match-In-Sensor technology, so the matching is performed by the host using Synaptics’ Quantum Matcher software. It is noteworthy that the software Synaptics supplies with its sensors is quite complex. Apart from matching, the software is responsible for activating the reader and the screen whenever the phone is touched, and for other things (like taking into account outside conditions that may affect minutiae). The new fingerprint sensor from Synaptics connects to the host over the SPI bus. Depending on the application and requirements, a device manufacturer may choose to use an AES-encrypted SecureLink interface (e.g., if the FS9500 is installed into a bezel-less tablet or a laptop display).
    The Clear ID FS9500 is “smart” enough to detect fingerprints in sunlight and bright conditions, take into account wet and/or cold fingers, detect spoofed fingerprints, and so on. It remains to be seen how the sensor behaves as individual light-emitting pixels burn in over time, but since Synaptics supplies a robust software stack with its sensors, it can tweak them in software to compensate for screen degradation.

    According to Synaptics, the Clear ID FS9500 sensor itself is only 0.69 mm thick, and its integration does not make smartphones significantly thicker. The integration has to be performed by the screen manufacturer at a fab where display assemblies are made, and Synaptics is working with the appropriate makers. The process is not very complex, so it does not make final devices considerably more expensive, says the developer.

    Synaptics initiated mass production of its Clear ID FS9500 sensors this month and will start commercial shipments in the coming weeks. The company says that the first smartphone to use the in-display fingerprint reader will be available in early 2018. Synaptics naturally does not disclose the manufacturer of the device, saying only that it is one of the top 5 smartphone suppliers. Meanwhile, since the FS9500 sensor only works with OLED screens, this big smartphone producer has to have access to AMOLED technology. Synaptics plans to showcase the handset at CES, but it is unclear whether the device will have been announced by that time, or whether Synaptics will show it camouflaged. Either way, the first smartphone featuring the FS9500 is incoming and will be available soon.




    More...

  8. RSS Bot FEED
    #7638

    Anandtech: Toshiba Launches MN06ACA 10 TB HDD for NAS: 7 Platters, Up to 249 MB/s

    Toshiba continues to apply its seven-platter HDD platform to new applications and market segments, increasing hard drive capacity to 10 TB and slightly improving performance. Recently the company introduced its 10 TB HDD for SOHO and SMB NAS appliances, designed to operate 24/7 in vibrating multi-drive environments. The drive promises to be faster than competitors of the same class (and even some higher-end rivals), at the cost of higher power consumption.
    Toshiba’s MN06ACA 10 TB hard drive for NAS is based on the company’s HDD platform that leverages seven 1.43 TB PMR platters along with a 7200 RPM spindle speed, a 256 MB cache buffer and a SATA 6 Gbps interface. The 10TB HDD for NAS is well prepared to work in vibrating multi-bay environments — it features top and bottom attached motors, rotational vibration (RV) sensors that detect and compensate for transient vibrations, second-generation dual-stage actuators, and some other enhancements.
    To a large degree, the MN06ACA resembles Toshiba’s recently announced MG06ACA HDDs for enterprise/cloud datacentres and MD06ACA-V HDDs for video surveillance, which use the same platform but come with different firmware and components. The MN06ACA10T HDD supports 512e technology (4K physical sectors on the platter with a 512-byte logical configuration reported to the host) to maintain compatibility with legacy applications, but is also compatible with modern NAS appliances. Meanwhile, unlike the enterprise-class MG06, the MN06 does not support a persistent write cache with power loss protection. As for reliability, Toshiba rates the MN06ACA HDDs for a 180 TB annual workload and a one-million-hour MTBF.
    Brief Specifications of Toshiba's MN06ACA HDD
    Capacity 10 TB
    P/N 512e MN06ACA10T
    RPM 7200 RPM
    Interface SATA 6 Gbps
    DRAM Cache 256 MB
    Sustained Transfer Rate 249 MB/s
    MTBF 1 million hours
    Rated Annual Workload 180 TB
    Acoustics (Seek) 34 dBA (?)
    Power Consumption Operating 9.2 W
    Idle 7.2 W
    Warranty 3 Years
    Since Toshiba’s new 10 TB HDD is based on the company’s seven-platter platform, its performance is rather high — up to 249 MB/s sustained transfer rate, which is significantly faster than other 10 TB hard drives for SOHO/SMB NAS (which feature a 5400 RPM spindle speed) and even outperforms higher-end models for big business/enterprise NAS. Meanwhile, not everything is rosy with the power consumption of the MN06ACA10T. The HDD consumes 9.2 W in operating mode and 7.2 W in active idle mode, which is 61% and 157% higher (respectively) than the latest generation helium-filled WD Red/WD Red Pro drives.
    If you run a four-bay NAS at home or in the office of a small company and the HDDs sleep most of the time, power consumption may not be an issue worth talking about. But if you run a fully populated 8-bay NAS that is accessed 24/7 and spends 50% of its time in active idle (I am omitting the cases when the drives actually perform read and write operations, but those naturally don’t play in Toshiba’s favour here), then the difference in power consumption between Toshiba’s MN06 10 TB and WD’s Red Pro 10 TB over three to five years will be rather noticeable ($48 in three years and $80 in five years, see the table below for details), but not critical, if Toshiba prices its drives right. Obviously, the relatively high power consumption is the price of not adopting helium for the seven-platter drives, but avoiding helium lowers costs, so Toshiba gets more flexibility in terms of pricing.
    Brief Comparison of 10 TB HDDs for NAS Appliances (50% active idle)
    Drive | Spindle Speed | Sustained Transfer Rate | Active Idle Power | Power per Year (4-bay / 8-bay) | Electricity Costs* per Year (4-bay / 8-bay)
    Toshiba MN06 | 7200 RPM | 249 MB/s | 7.2 W | 126 kWh / 252 kWh | $13 / $26
    Seagate IronWolf | 5400 RPM | 210 MB/s | 5 W | 88 kWh / 175 kWh | $9 / $18
    Seagate IronWolf Pro | 7200 RPM | 214 MB/s | (shared with IronWolf above) | |
    WD Red | 5400 RPM | 210 MB/s | 2.8 W | 49 kWh / 98 kWh | $5 / $10
    WD Red Pro | 7200 RPM | 240 MB/s | (shared with Red above) | |
    *According to the U.S. Energy Information Administration, the average retail price of a kWh was $0.1041 as of January 2017. In Alaska, California, Connecticut, New York, Rhode Island and Vermont electricity costs significantly more, at $0.14 - $0.17 per kWh, so the HDD electricity costs there will be different.

    Note: All numbers are rounded.
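    The kWh and dollar figures in the table can be reproduced with a quick sketch, assuming 50% active idle duty and the EIA average rate quoted above:

```python
# Back-of-the-envelope check of the electricity-cost table above, assuming
# drives sit in active idle 50% of the time at the EIA average retail rate
# of $0.1041/kWh (both assumptions taken from the article).
HOURS_PER_YEAR = 24 * 365
RATE_USD_PER_KWH = 0.1041

def annual_cost(idle_watts: float, bays: int, idle_share: float = 0.5):
    """Return (kWh per year, USD per year) for a NAS full of identical drives."""
    kwh = idle_watts * bays * idle_share * HOURS_PER_YEAR / 1000
    return kwh, kwh * RATE_USD_PER_KWH

# Toshiba MN06: 7.2 W active idle, in a 4-bay and an 8-bay NAS
for bays in (4, 8):
    kwh, usd = annual_cost(7.2, bays)
    print(f"{bays}-bay: {kwh:.0f} kWh/year, ${usd:.0f}/year")
    # matches the table: 126 kWh / $13 (4-bay) and 252 kWh / $26 (8-bay)
```

    The same function with 2.8 W (WD Red/Red Pro active idle) reproduces the 49/98 kWh rows, which is where the three- and five-year deltas quoted earlier come from.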
    Toshiba understands that power consumption is not a strong point of the MN06ACA10T and officially positions the drive for SOHO and SMB NAS appliances as well as for archive and data backup applications, where it is not going to be a significant problem (archives and backups are rarely accessed and spend most of the time sleeping). Moreover, since the MN06 is formally positioned to compete against Seagate’s IronWolf and WD’s Red drives, it has an edge over those rivals when it comes to performance.
    Toshiba says that its 10 TB HDDs for NAS are already available to interested parties. The company does not disclose pricing, which is subject to negotiations, nor the planned retail availability timeframe.

  9. #7639

    Anandtech: Western Digital to Use RISC-V for Controllers, Processors, Purpose-Built P

    Western Digital recently announced plans to use the RISC-V ISA across its existing product stack as well as for future products that will combine processing and storage. The company plans to develop RISC-V cores internally and license them from third parties to use in its own controllers and SoCs, along with using third-party RISC-V based controllers. To develop the RISC-V ecosystem, Western Digital has already engaged in partnerships and investments in various companies working on RISC-V projects. For example, recently Western Digital invested in Esperanto Technologies, a company led by experienced CPU designers.
    Given the diverse portfolio of products that Western Digital has today, as well as its longer-term intention to move compute closer to data (by embracing in-storage computing, for example), it is evident that Western Digital is going to need a diverse portfolio of compute cores with significantly different performance and feature sets. In fact, Western Digital will need two groups of cores: one for storage devices and another for processing data. Western Digital says that it does not want to develop all the silicon it needs in-house, but it will likely have to increase its chip investments in the future.
    “We intend to develop some processor cores internally, we also expect to use many other companies’ processor cores to complement our own and are currently evaluating several technologies,” a statement by Western Digital reads.
    Since the RISC-V ecosystem is in the early stages of development, the transition to new cores is not going to happen overnight; it will likely be slow and gradual and will span many years, if not decades. The first products from Western Digital with RISC-V cores will ship in late 2019 or early 2020, says Western Digital, without going into details.
    Zettabytes and PetaFLOPS

    As the world generates more data, new storage devices need to store zettabytes of information. HDDs and SSDs have been evolving rather rapidly in recent years, but their evolution requires not only relentlessly increasing areal density for magnetic media and higher bit density for NAND flash (or other types of memory), but also more processing horsepower. Modern HDDs and SSDs use controllers that contain multiple processing cores for management, signal processing, contemporary ECC algorithms and other operations.
    For example, for today’s TLC/3D TLC NAND memory, SSD controllers already use LDPC with 120 bits/1 KB recovery in order to correct the read errors that occur with the latest types of NAND and thus maximize the endurance rating of modern flash memory. Going forward, SSD controllers will need to use stronger algorithms as the industry moves to higher bit densities with TLC and then QLC architectures. As a result, SSD controllers will have to use more capable cores with higher performance.
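    The 120 bits/1 KB figure translates into a modest storage overhead; a rough sketch of the arithmetic (assuming 1 KB means an 8192-bit payload per codeword, since the exact framing used by real controllers varies):

```python
# Rough overhead arithmetic for an LDPC scheme protecting 1 KB of user data
# with 120 parity bits (the figure from the article). The 1 KB = 8192-bit
# framing is an assumption for illustration.
DATA_BITS = 1024 * 8   # 1 KB payload = 8192 bits
PARITY_BITS = 120

code_rate = DATA_BITS / (DATA_BITS + PARITY_BITS)  # share of each codeword that is user data
overhead = PARITY_BITS / DATA_BITS                 # extra bits stored per data bit

print(f"code rate: {code_rate:.4f}")       # ~0.9856
print(f"parity overhead: {overhead:.2%}")  # ~1.46%
```

    Stronger correction for QLC means more parity bits per codeword, i.e. a lower code rate, on top of the extra decoder compute.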
    Hard drives, for their part, do not require compute resources as vast as SSDs do, but their processing requirements are also growing because of new magnetic recording technologies, increasing areal densities and more sophisticated functionality (e.g., NAND flash-based caching, health management, QoS enhancements, etc.). Western Digital says that its current products consume about a billion compute cores per year and that the number is rising. Given the current SSD/HDD trends, Western Digital is going to consume more cores going forward, and at least some of those cores will have to be more powerful than the ones the company uses today. Considering that at present there are almost no commercial products based on RISC-V, Western Digital’s plan essentially involves bringing development of a substantial number of SoCs, controllers, microcontrollers and other chips in-house (more on this later).


    Earlier this year IDC and Seagate published a paper claiming that 16.1 ZB (zettabytes) of data was generated worldwide in 2016 and predicting that the global datasphere will grow by an order of magnitude to 163 ZB by 2025. If the prediction turns out to be correct, demand for high-end storage products from Western Digital, Seagate, Samsung, Micron, Toshiba and others will only grow going forward. This is an important factor for the existing business of these companies, but there is another aspect that is not discussed often.
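    For a sense of scale, the implied annual growth rate behind IDC's forecast can be worked out in one line:

```python
# Implied compound annual growth rate (CAGR) behind the IDC/Seagate
# forecast: 16.1 ZB generated in 2016, 163 ZB projected for 2025.
start_zb, end_zb = 16.1, 163.0
years = 2025 - 2016  # nine years of growth

cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # roughly 29% per year
```

    In other words, the forecast assumes the datasphere grows by nearly a third every year for nine years straight.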
    The amount of data that requires real-time processing and low latency is growing, and there are cases when Endpoint devices (smartphones, cameras, wearables, cars, PCs, etc.) cannot process it locally, which is why the data has to be sent to more powerful computers. Since processing has to be done as soon as possible, the data is sent to the nearest servers featuring the appropriate horsepower. Such machines belong to the Edge category and include server rooms, servers in the field, and smaller datacenters located regionally (as IDC puts it). Since the amount of generated data is going to grow, Edge servers and Core servers (located in large datacenters) will naturally require more compute horsepower. In the end, the more zettabytes stored, the more PetaFLOPS (or even ExaFLOPS) needed to process them.

    Meanwhile, Western Digital believes that traditional server architectures will not be efficient for Big Data and Fast Data applications in the future. The company believes that processing has to be brought closer to the data to minimize data movement at the Edge and within Core datacenters. Apparently, this is where Western Digital sees an opportunity for solutions that not only store data, but also process it. This is the second part of Western Digital’s plans concerning the RISC-V architecture: using processing cores (and SoCs) that are powerful enough for Big Data and Fast Data processing.
    Western Digital Accelerates Chip Development Efforts

    As mentioned above, Western Digital nowadays uses a variety of controllers for its HDDs, SSDs, NAS, DAS and various storage platforms tailored for particular needs (such as HGST’s Active Archive System and SanDisk’s/Fusion-io’s InfiniFlash). Over the years, Western Digital and SanDisk have acquired numerous developers of enterprise-class SSDs and flash storage solutions, which designed their own controllers and other hardware. By now all of these assets have been integrated into various product families, while some have been discontinued. Meanwhile, all controllers for Western Digital’s products (no matter where they were developed) use compute cores based on the ARM, ARC and MIPS architectures.

    At present, there are no proven RISC-V-based controllers for storage devices, so transitioning to the architecture essentially means that Western Digital will have to develop some controllers itself and encourage other designers to use RISC-V cores. Western Digital confirms that it plans to develop some of the RISC-V cores itself and license other cores from third parties. It is unknown whether companies like Marvell (which supplies select controllers to Western Digital) have any plans concerning RISC-V, but it is safe to say that Western Digital expects at least some controller developers to adopt the architecture. In fact, the company is eager to help other companies start using RISC-V for relevant projects.
    “We are committing to help lead the advancement of and transition to data-centric, open compute architectures through the work of the RISC-V Foundation,” said Western Digital in a statement. “[We are] committed to advancing RISC-V and is engaged in active partnerships and investments in RISC-V ecosystem partners.”
    As Western Digital transitions to controllers featuring RISC-V cores, it will gradually reduce and eventually cease purchasing third-party controllers based on other architectures. For developers of controllers as well as CPU architecture licensors (Arm, Tallwood MIPS, and Synopsys) this means lost revenue. For Western Digital, it means lower royalty payments, increased development costs, and the ability to differentiate its storage products from those using off-the-shelf controllers. It does not seem that Western Digital wants to move development of all the controllers it uses in-house, but some of the things it buys from others today will have to be developed internally tomorrow. In fact, further vertical integration of Western Digital is unavoidable as the company moves beyond NAND flash in the coming years. We do know that the company has big plans for ReRAM storage class memory, and at least initially the controllers for SCM-based storage solutions will have to be developed in-house.

    It is interesting to note that apart from HDD/SSD controllers, Western Digital uses Intel’s x86 CPUs for NAS and some other devices. Such chips offer significant performance, so replacing them is not currently feasible, which is why WD will continue working with Intel. Nonetheless, it looks like the storage company expects RISC-V-based SoCs to catch up with its NAS requirements in the future.
    “As we transform from a data storage company to a data technology company, in general, we expect to continue our existing, highly valued partnerships, while building new relationships with companies and organizations that share a vision for enabling the data-centric world,” said Western Digital.
    Moving Computing Closer to Data

    Besides using RISC-V-based compute cores for SSD, HDD, NAS, DAS and other controllers, the company plans to advance “RISC-V technology for use in mission-critical applications so that it can be deployed in its products”. In particular, Western Digital wants to create purpose-built architectures for tomorrow’s workloads of Big Data and Fast Data applications. Here is how Western Digital describes data-centric compute architectures that it plans to develop, or at least help develop:
    “Data-centric compute architectures will need the ability to scale resources independent of one another,” explained Western Digital. “The architectures for tomorrow will need to enable purpose-built solutions with data-optimized levels of OS processing, specialty processing, memory, storage and interconnect. The extreme data and compute workloads for analytics, machine learning, artificial intelligence and smart systems demand purpose-built architectures.”

    It is noteworthy that throughout its RISC-V-related press release, the company avoided using the term “in-storage computing” (ISC). There are dozens of companies experimenting with ISC, and early results look quite promising: offloading select tasks from the CPU to SSDs can reduce latencies by a factor of 2-3 while also decreasing power consumption. The key purpose of ISC is to reduce (or even avoid) "expensive" data transfers from a storage device to a processor by performing computing operations on the former. Latency reductions will be crucial in the looming 5G era, especially for edge computing environments.
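    A toy sketch illustrates why ISC cuts data movement; the record layout, sizes and selectivity below are all made-up illustrative assumptions, not anything from Western Digital:

```python
# Toy illustration of in-storage computing (ISC): a host-side filter must
# pull every record over the storage bus, while an in-storage filter ships
# only the matching records. All figures here are illustrative assumptions.
records = [{"id": i, "hot": i % 100 == 0} for i in range(10_000)]
RECORD_BYTES = 128  # assumed fixed record size

# Conventional path: transfer everything, then filter on the host CPU.
host_bytes_moved = len(records) * RECORD_BYTES
hot = [r for r in records if r["hot"]]

# ISC path: the drive's controller runs the filter; only results cross the bus.
isc_bytes_moved = len(hot) * RECORD_BYTES

print(f"data moved, host filter: {host_bytes_moved} B")
print(f"data moved, ISC filter:  {isc_bytes_moved} B")
print(f"reduction: {host_bytes_moved // isc_bytes_moved}x")
```

    With a 1% hit rate, 99% of the bus traffic disappears; the real-world savings depend entirely on how selective the offloaded operation is.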
    Western Digital has yet to share details concerning its RISC-V-related ISC projects, but the company did reveal its general vision of data-centric compute architectures back at FMS 2016 (albeit with some bias towards SCM, which is fair because SCM is a good fit for ISC). In general, the company seems to bet on small, inexpensive, purpose-built CPU cores, but their actual performance and capabilities are not disclosed at the moment.



    Meanwhile, since no actual roadmap has been shown, it does not make a lot of sense to speculate what exactly the company plans to do and when.
    Strategic Investment in Esperanto Technologies

    Besides announcing its RISC-V plans, Western Digital also disclosed that it had made a strategic investment in Esperanto Technologies, a developer of RISC-V-based SoCs. The company was founded by Dave Ditzel, who co-founded Transmeta in 1995 and more recently worked at Intel developing HPC products. A strategic investment is a recognition of Esperanto’s potential, but it does not automatically mean that Western Digital intends to use cores developed by the company.
    In the meantime, Esperanto’s ongoing projects demonstrate the potential of the RISC-V ISA in general. So far, Esperanto has developed the ET-Maxion core, which maximizes single-thread performance, as well as the energy-efficient ET-Minion core with a vector FPU. These cores will be used in an upcoming 7 nm SoC for AI and machine learning workloads. In addition, these are the cores that Esperanto will license to other companies.
    Esperanto's Upcoming AI-Focused SoC
    High-Performance Cores: 16 “ET-Maxion” 64-bit RISC-V cores for highest single-thread performance
    Energy-Efficient Cores: 4096 “ET-Minion” energy-efficient RISC-V cores with vector FPUs
    Lithography: 7 nm
    Long Road Ahead

    Western Digital has supported the RISC-V Foundation for years and therefore understands how well the architecture can scale for its short-term and long-term needs. The disclosure that it has officially become an adopter of the RISC-V architecture probably means that it already has a roadmap covering cores, controllers and, perhaps, even products based on them. Meanwhile, the transition is going to take quite a while: Western Digital says that once it is complete, it expects to be shipping two billion RISC-V cores annually. Such core volumes imply significant growth in product unit shipments, which does not happen overnight.


    Sources: Western Digital, Esperanto Technologies, IDC/Seagate



  10. #7640

    Anandtech: Spotted: 960 GB & 1.5 TB Intel Optane SSD 900P

    Intel’s Optane SSD 900P drives, featuring 3D XPoint memory, have an edge over NAND flash-based SSDs when it comes to performance and promise to exceed them in endurance. Meanwhile, the Optane SSD 900P lineup has been criticized for its relatively low capacities: only 240 GB and 480 GB models are available now, which is not enough for hosting large virtual machines. Apparently, Intel has 960 GB and 1.5 TB models up its sleeve.
    Intel on Thursday issued a product change notification informing its customers about regulatory and other label changes for the Optane SSD 900P. Among other things, the document lists 960 GB and 1.5 TB Intel Optane SSD 900P drives. The SSDs are mentioned alongside their voltage and current ratings, which may indicate that we are dealing with products that already have specs, at least when it comes to power consumption. Meanwhile, Intel does not list part numbers for the higher-capacity 960 GB and 1.5 TB Optane drives, so it is unclear whether the SKUs are meant for general availability or for select customers only.
    Intel intends to start shipments of Optane SSD 900P products with the new labels on December 27, but it is unknown when we are going to see the 900P with the enlarged capacities. Intel officially positions the Optane SSD 900P for workstations and high-end desktops, which is why two of the three available models come in the HHHL form-factor. Therefore, a potential launch of the 900P 960 GB/1.5 TB models in the U.2 form-factor may indicate an expansion of 3D XPoint into servers that store massive amounts of data. In the meantime, Intel has already confirmed plans to expand the capacity of its Optane SSD DC P4800X for datacenters to 1.5 TB, so Optane capacity increases are on the table for Intel.
    We have reached out to Intel for comments and will update the news story once we get more information.



    Source: Intel (via ServeTheHome)


