
Thread: Anandtech News

  1. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5671

    Anandtech: Securifi Updates Smart Home Hub Lineup with New Almond 3 Wireless Router

    Securifi is famous for bringing out the world's first commercially available touchscreen router. We have covered them a couple of times in the past. They were one of the first vendors to realize the potential of integrating radios for home automation protocols (ZigBee and Z-Wave) in a wireless router. Google also seems to be adopting this strategy with the OnHub routers which integrate Bluetooth and 802.15.4 support.
    Securifi's family of routers (the Almonds) consists of two product lines - one targeting the high-end market and the other aiming to be more affordable. At CES, Securifi launched the Almond 3, a new member in the second category. The following extract from the press brochure shows how the currently available models compare against each other.
    It is obvious that the Almond+ belongs to the high-end line, while the Almond 3 belongs to the affordable category. Like the Almond 2015, the Almond 3 is also based on a Mediatek chipset. The chipset used is likely the MT7612E along with the MT7621N SoC (we are awaiting confirmation from Securifi). It is an AC1200 router (2x2 802.11ac for 867 Mbps in the 5 GHz band and 2x2 in the 2.4 GHz band for 300 Mbps).
    The above specifications indicate that the Almond 3 is a definite step up from the Almond 2015, which was a 100 Mbps N300 router. The built-in siren enables some interesting scenarios, particularly as a security alarm when combined with ZigBee door / window magnetic reed sensors. Considering the specifications and the focus on user experience with the touchscreen interface, the Almond 3 targets the average consumer.
    Integrating home automation radios into the router hardware is only one side of the equation. Other vendors (such as Google, via TP-Link and ASUS, and TP-Link itself) have also started to explore this area. The other important ingredient for market success is the user experience. Perfecting the web user interface as well as the mobile apps is a challenge, particularly when home automation is involved.
    At CES, Securifi demonstrated their mobile app, and I have to say that it has one of the most user-friendly interfaces for setting up 'rules and scenes' (i.e., how changes reported by one sensor (or even just the time) can be used to trigger events in other connected devices). They also talked about an innovative idea for implementing geofencing by recognizing the connection status of the user's smartphone in the router.
    Securifi has also opened up their WebSockets API. This should help power users and third-party developers interface with the Almonds / home automation devices and develop their own applications. The other important takeaway from my conversation with Securifi was that they have implemented full cloud-less control of all supported home automation devices on all the Almond routers. I have always been a big proponent of isolating home automation devices from the Internet for security and reliability purposes. Power users on the go have multiple ways to obtain access to the home automation controller (in this case, the Almond device) over the Internet - including, but not restricted to, running a VPN server in the home network. On a general note, I am waiting for a consumer networking equipment vendor to make VPNs more accessible to the general audience. This would be very useful for consumers who don't want their home automation devices to be at the mercy of a cloud server somewhere on the Internet.
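    As a rough illustration of the sort of integration such a WebSockets API enables, below is a minimal Python sketch of a client that connects to a hub on the local network and asks for a device list. The hostname, port, authentication field and JSON message format here are placeholders chosen for illustration, not Securifi's documented protocol, so treat it as a template rather than working code for an Almond.

        # Hypothetical sketch of a local WebSockets client for a home automation hub.
        # The endpoint, port and message fields are illustrative assumptions only.
        import asyncio
        import json

        import websockets  # third-party package: pip install websockets


        async def list_devices(host: str, password: str) -> None:
            uri = f"ws://{host}:7681"  # assumed LAN endpoint and port
            async with websockets.connect(uri) as ws:
                request = {
                    "CommandType": "DeviceList",  # assumed command name
                    "Password": password,         # assumed authentication field
                }
                await ws.send(json.dumps(request))
                reply = json.loads(await ws.recv())
                for device in reply.get("Devices", []):
                    print(device.get("Name"), device.get("Status"))


        if __name__ == "__main__":
            asyncio.run(list_devices("almond.local", "my-password"))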
    The Almond 3 is slated to become available later this quarter. It will retail for $119. Coupled with a sensor such as the ZigBee door / window sensors mentioned above, we believe it is a value-focused solution for the average consumer's networking and security alarm needs.


    More...

  2. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5672

    Anandtech: Microsoft to Recall Power Cables for Previous-Gen Surface Pro Tablets

    Microsoft plans to recall power cables for previous-generation Surface Pro tablets. The cords can overheat and pose a fire hazard, according to reports. While Microsoft is recalling millions of cables, the company insists that only a very small number of them can actually be dangerous.
    The power cables of the Surface Pro, Surface Pro 2 and Surface Pro 3 are vulnerable to overheating and could pose a fire hazard after they are sharply or repeatedly bent or tightly wrapped, according to Microsoft. Microsoft did not name the supplier of the power cords it shipped for about three years. The potentially dangerous cables look like the regular power cords used with a variety of notebook PSUs. Such cables are not very bendable and, as it appears, can be damaged. Fortunately, they are detachable, and users who want to replace their cables now can do so without waiting for Microsoft.
    On Wednesday the company confirmed to ZDNet that the recall will be taking place, and it will officially issue a statement on the matter of Surface Pro power cables early on Friday. The voluntary recall applies to all devices sold before mid-July 2015, worldwide. Eligible customers wishing to get a replacement will have to order it via a special website. Microsoft plans to advise customers to stop using the potentially dangerous power cords and to dispose of them in accordance with local regulations.

    [Image: the Microsoft Surface Pro charger is on the left side of the picture.]
    Microsoft’s Surface (non-Pro) slates as well as the latest Surface Pro 4 tablets are not affected, the software giant said, reports Channelnomics.eu.
    The first-generation Surface Pro was introduced along with the Windows 8 operating system in October, 2012. It became available in early 2013 and was replaced by the Surface Pro 2 later that year. The third-generation Surface Pro hit the market in mid-2014. To date, Microsoft has sold millions of its slates, which it positions as notebook replacement tablets.
    Many power cords should not be bent or wrapped too tightly because they can be damaged this way. Some companies try to use softer cables and/or equip their cables with some form of cable management. Unfortunately, the power cords of the Microsoft Surface Pro only come with a tiny hook.
    Keeping in mind that so far there have been no reports about overheating cables or PSUs of Microsoft's Surface Pro tablets, the cables should be generally safe to use. Nonetheless, it is unfortunate that Microsoft did not discover the potential issue earlier.


    More...

  3. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5673

    Anandtech: Silicon Motion at CES: 3D NAND support for SM2246EN and roadmap update

    2015 was a great year for SSD controller designer Silicon Motion. Their SM2246EN controller was at the heart of some of the best mainstream and value SATA SSDs, while their DRAM-less SM2246XT and their TLC-compatible SM2256 each had several design wins for even more affordable SSDs. At CES, Silicon Motion showed off their full range of products and shared some of their plans to stay competitive through 2016.
    The most important development for the SSD market in 2016 will almost certainly be the availability of 3D NAND from companies other than Samsung (who's been shipping 3D NAND since 2014 and will be rolling out their third generation of it this year). Silicon Motion has updated their firmware for the SM2246EN controller to support 3D MLC NAND, and they showed off drives using 3D NAND flash sourced from Intel, Micron, and Hynix. This demonstrates that Silicon Motion is ready for the transition to 3D NAND and that we can expect drives to be hitting the shelves as soon as the flash itself is available in bulk on the open market. It's also nice to have independent confirmation that both IMFT and Hynix are on track with their 3D NAND development. Conspicuously absent from the lineup was 3D NAND from the Toshiba and SanDisk joint venture. We already expected them to be last to ship 3D NAND due to their fab for it not being scheduled to begin mass production until this year, so it's no surprise if they're keeping things under wraps for a little longer.
    To support 3D TLC NAND, Silicon Motion will be releasing a SM2258 controller as the successor to SM2256, but this new controller was not on display and we don't have information on what other changes it may bring to the table. SM2258 should be ready by the middle of the year, so it shouldn't be too long before we have more details.
    The last big update concerns the SM2260 PCIe SSD controller. A launch date hasn't been announced, but we were told to expect a more interesting demo at Flash Memory Summit, suggesting it will be ready to ship in the second half of 2016. The expected performance specifications have changed slightly from what we last heard in June 2015: sequential read speed is up from 2200 MB/s to 2400 MB/s while sequential write is down from 1100 MB/s to 1000 MB/s. Random read and write ratings remain at 200K and 125K IOPS respectively. With the exception of random write those numbers are a bit below what Samsung advertises for the 950 Pro, but close enough that SM2260-based drives can probably be competitive by just undercutting Samsung's pricing by a little bit. 3D NAND support has also been added to the feature list, and NVMe version 1.2 will be supported. To make use of the higher speeds of the PCIe 3.0 x4 interface, the SM2260 uses a dual-core ARM processor instead of the single-core ARC processor used by Silicon Motion's SATA SSD controllers and the SM2260 has 8 NAND interface channels compared to 4 channels for SM2246EN and SM2256.
    Silicon Motion still has no direct successor planned for the DRAM-less SM2246XT controller but they confirmed that all of their controllers could be used in a DRAM-less configuration with appropriate firmware, so a DRAM-less TLC SSD could be built using SM2256 if somebody thought the cost savings were worth the firmware development efforts. Silicon Motion was also showing off their current lineup of solutions for USB flash drives including Type-C and Lightning port support, as well as their eMMC and single-package SSD products intended mainly for industrial, automotive and other embedded applications.
    Gallery: Silicon Motion at CES 2016




    More...

  4. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5674

    Anandtech: CES 2016: Phison previews upcoming SSD controllers

    Phison may not be a household name, but they're a major player in the SSD market. Where Marvell's SSD controllers are typically sold to drive vendors who then pair them with custom or third-party firmware, and SandForce and Silicon Motion controllers are typically bundled with firmware, Phison's controllers are mostly sold as part of a turnkey drive platform that's ready to be put into a branded case and put on store shelves. This business model has made Phison the favorite supplier for new players in the SSD market with no existing drive manufacturing infrastructure, and for established brands that need to update their product line but can't stomach the high R&D costs of staying competitive with custom controllers or firmware.
    For 2016, the mainstay of Phison's controller lineup will continue to be the PS3110-S10, which has been used in drives sold by OCZ/Toshiba, Mushkin, Corsair, Zotac, Patriot, Kingston, PNY and others, and paired with both TLC and MLC NAND. Squeezing in below the S10 and more or less displacing the S9 will be the new PS3111-S11 low-cost SATA controller, which can operate as a DRAM-less controller and provides only two NAND channels, but also brings the first low-density parity check (LDPC) error correction support from Phison. Thanks to SLC caching support, its peak performance numbers only suffer slightly, and its support of capacities up to 1TB should be sufficient for this year's value SSDs, but don't expect the S11 to sustain great performance on heavy workloads.
    The much more exciting product is Phison's PCIe 3.0 x4 NVMe SSD controller, the PS5007-E7. The E7 controller is very close to launch and we've already seen numerous product announcements based on that platform. The E7 is aiming to be the highest performance consumer SSD controller and will be competing directly against Samsung's 950 Pro. The controller hardware has been finalized and the firmware is in the last stages of performance optimization. Phison plans to finalize the firmware in February and drives should be on the shelves in March.
    We've previously seen prototypes of the E7 controller from G.Skill at Computex last year and from Mushkin at CES 2015. Since Computex the write performance specifications have improved slightly: sequential write is up from 1400MB/s to 1500MB/s, and random write is up from 200k IOPS to 250k IOPS. Sequential read and random read speeds published by Phison match what G.Skill said at Computex: 2600MB/s sequential read and 300k IOPS for random read, though Phison notes their random performance numbers as being burst performance. They are also claiming a sustained random performance of 36k IOPS, presumably referring to steady-state random writes. Those numbers are all for planar MLC NAND, but the E7 controller also supports TLC and 3D NAND. Given the imminent availability of 3D NAND, Phison is also able to declare support for capacities up to 4 TB where G.Skill's demo only promised up to 2TB.
    Phison E7 drives will be available in a variety of form factors. M.2-2280 has been the most popular choice for client PCIe SSDs, but some E7-based drives will be opting for the longer M.2-22110 size. This will provide room for 8 flash packages instead of 4, allowing for higher capacities or cheaper NAND packaging by stacking fewer dies per package. Most importantly, the larger M.2 card will make it possible to populate all 8 channels on the E7 controller while still using standard off the shelf flash packages. The longer M.2 size won't be usable with as many motherboards and will have even more trouble in the notebook market, but many SSD vendors targeting the enthusiast market are willing to make those compromises.
    Several vendors will also be selling drives in a PCIe half-height half-length add-in card form factor. This relatively spacious PCB allows for the highest capacities and better passive cooling with or without a heatsink. Phison's reference model also included power loss protection capacitors on the card, though they won't be present on all retail models—Patriot's Hellfire AIC didn't have the capacitor bank populated. Phison also showed a 2.5" U.2 model, but we didn't encounter any vendors that were showing off that option.
    The add-in cards and U.2 drives may be more popular in the enterprise market, which Phison is confident they can break into. However, Phison teamed up with Kingston and Liqid to demonstrate an add-in card that puts four M.2 drives under a heatsink and provides power loss protection capacitors. This can allow for better density and utilization of PCIe slots than a single-controller PCIe x4 add-in card and drop-in compatibility for server platforms that don't have U.2 backplanes, so even in the enterprise space M.2 might win out.
    Gallery: Phison E7 drives at CES 2016




    More...

  5. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5675

    Anandtech: Corsair and G.Skill Introduce 128 GB (8x16 GB) DDR4-3000 Memory Kits

    An average personal computer nowadays is equipped with 8 GB of DRAM or less, according to analysts from DRAMeXchange. Given the requirements of the Microsoft Windows 10 operating system, 8 GB may be enough for general-purpose computing. But there are PCs, particularly at the high-end desktop and workstation level, which need a lot of memory for software, computation, RAM disks or even RAM caches, to the point where motherboard manufacturers are now including such software in their bundles. To fulfill demand from owners of high-end desktops, Corsair and G.Skill this month unveiled 128 GB quad-channel DDR4 memory kits consisting of eight DRAM modules.
    Corsair and G.Skill's 128 GB DDR4 memory kits are rated to run at a 3000 MT/s per-pin data rate (DDR4-3000) and are accordingly designed for Intel's X99 platform, where the quad-channel memory bus allows for up to 96 GB/s of bandwidth with 4 or 8 DIMMs. These quad-channel kits consist of eight 16 GB unbuffered memory modules, which are based on 8 Gb DRAM chips made by Samsung using its 20 nm fabrication process. The memory sticks fully support Intel XMP 2.0 SPD profiles and can automatically set their clock rates when installed into appropriate PCs.
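    As a quick sanity check on that 96 GB/s figure, peak DDR4 bandwidth is simply the per-pin transfer rate multiplied by the width of each channel (64 bits, i.e. 8 bytes) and the number of channels. A minimal sketch of the arithmetic, using the numbers quoted above:

        # Peak theoretical DDR4 bandwidth: transfers/s x 8 bytes per 64-bit channel x channels.
        def ddr4_bandwidth_gb_s(data_rate_mt_s: float, channels: int) -> float:
            bytes_per_transfer_per_channel = 64 / 8  # 64-bit channel -> 8 bytes
            return data_rate_mt_s * 1e6 * bytes_per_transfer_per_channel * channels / 1e9

        print(ddr4_bandwidth_gb_s(2133, 4))  # JEDEC DDR4-2133 on X99: ~68.3 GB/s
        print(ddr4_bandwidth_gb_s(3000, 4))  # these DDR4-3000 quad-channel kits: 96.0 GB/s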
    Corsair’s Black Vengeance LPX 128 GB DDR4-3000 memory kit comes with CL16 18-18-36 latency settings as well as a 1.35 V operating voltage, higher than the standard 1.2 V for DDR4. The modules are equipped with black aluminum heat-spreaders to aid with cooling. Corsair also supplies their Vengeance Airflow cooling system, a removable 40mm fan cooling bracket, with the kit. The kit costs $1174.99 without tax and is currently available from the company’s online store under the official name CMK128GX4M8B3000C16.
    Meanwhile, G.Skill’s Ripjaws V 128 GB DDR4-3000 set of DRAM modules for high-end desktops features surprisingly low latencies of CL14 14-14-34, as well as the higher 1.35 V voltage. G.Skill’s Ripjaws V memory comes with black or red aluminum heat-spreaders, and we assume these kits also come with extra fan cooling similar to G.Skill's other high-end kits. G.Skill’s 128 GB DDR4 memory kit will be priced at $999.99 when it becomes available later this month under the SKU name F4-3000C16-16GVK.
    It is noteworthy that despite the more aggressive timings and potentially higher real-world performance, G.Skill’s 128 GB DDR4 memory kit costs less than Corsair’s 128 GB DDR4 set of modules. The two companies are addressing a relatively small segment of the market with their 128 GB DRAM kits, hence the competition between Corsair and G.Skill is inevitable. The reason for the high price of both kits comes down to binning - the ICs used for these are typically sold by the IC manufacturer as a certain bin (e.g. DDR4-2400 low voltage) and are then individually tested by the memory stick manufacturer to fit within certain frequency ranges. At DDR4-3000 C14, for example, the testing process might only produce one memory kit per 10,000 ICs tested (educated guess) - and then the modules have to be tweaked to ensure they run together. We always recommend buying a single kit for a PC, especially for high-speed memory, because the modules are designed to work together, whereas two separate kits hold no guarantee, especially if the secondary and tertiary sub-timings are tuned close to the limit (typically these are slightly loosened for larger kits).
    At present both Corsair and G.Skill market their 16 GB DDR4-3000 memory modules as solutions for overclockers because the highest JEDEC data rate validated by Intel’s Haswell-E processors is 2133 MT/s. As JEDEC’s DDR4 memory standard supports data rates up to 3200 MT/s, eventually we might see high-speed 16 GB+ memory sticks becoming normal for workstations with memory-speed-limited workloads.
    Source: Corsair, G.Skill


    More...

  6. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5676

    Anandtech: CES 2016: ASRock Shows mini-STX 5x5 for Business and Education

    Ever since Intel announced their 5x5 platform (that’s 5-inch by 5-inch), we have had several requests from users asking ‘when?’. At the time of the announcement, it was difficult to see where Intel was trying to place the platform – the goal seemed to be to offer something for embedded platforms that also had a socketed processor. This would allow customers to choose how much processing power they needed, up to 91W if the system is built for it, or potentially upgrade later down the line. This is compared to the NUC, which runs mobile processors in an even smaller form factor. Despite the interest from end-users, it has always come across as a non-consumer play. ASRock’s showcase at CES pushes it further into that B2B market with specific verticals in mind.
    We learned that 5x5 now has an ‘official’ name in Mini-STX, similar in spirit to mini-ITX, which is 6.7-inch square. On display from ASRock were a single motherboard, the H110M-STX, and a prebuilt system called the H110M-STX Mini PC.
    As the H110 name implies, this system is for Skylake processors and built on the H110 chipset. The motherboard uses a three-phase power delivery, rated at 65W, and memory comes via two DDR4 SO-DIMM slots supporting up to 32GB of DDR4-2133 (we wouldn’t really expect anything higher than 2133 in this form factor anyway). The socket area pushes right up against what would be the rear IO panel because of space, and the ports here have a low z-height to ensure cooler compatibility.
    Storage comes via an M.2 2280 slot supporting SATA 6 Gbps – the specifications say it also has two SATA 6 Gbps ports, but unless they’re available through a breakout cable I can’t see the traditional way to connect these to a motherboard. Network connectivity is through the Intel I219-V NIC as well as an M.2 2230 slot for WiFi and BT. Video output is designed to come through the processor (so Intel HD Gen 9) and the rear IO has a VGA, HDMI and DP port for use. There are two USB 3.0 ports on the back as well as one on the front, two USB 2.0 headers, and a custom USB-C header for the H110M-STX Mini-PC. Audio comes via a Realtek ALC283 codec using the onboard header. TPM 2.0 is also included.
    As for the Mini-PC system ASRock showed, this is designed specifically for this motherboard only and comes in at 1.92 liters (155 x 155 x 80 mm). It will be boxed with the Intel stock fan, and come with a 2.5-inch drive bay as well as a Kensington Lock. Separate SIs will have to decide what CPUs, DRAM and WiFi modules to use, as well as the M.2 slot for storage. Power for the system is provided by a DC-In port on the rear of the system, and given that the socket is designed for up to 65W in this case, I’d imagine that the power brick should be in the 90W range. It is also worth noting that to use the VGA connector, there seems to be a long cable from that odd port next to the DRAM to the VGA connector on the rear.
    We saw a few other 5x5 systems on display at CES, although they all pretty much aim for the same business crowd – either verticals such as education or digital signage/gambling, which is essentially what a lot of NUCs end up in. 5x5 is clearly a play for more performance, attempting to reduce costs, but it seems Intel is letting its partners get the first bite of the cherry – we did see a 5x5 from ECS, who plays a big part in Intel’s NUC production. Then on the other side we have people like Zotac, who end up doing their own custom designs anyway.
    But for now, it seems ASRock is keeping this as a B2B play and testing the water. We’ve not heard if this is going to be worldwide or a specific market play, but as a result pricing will be relative to the market and interest, meaning interested parties should contact their local ASRock sales offices.
    Source: ASRock
    Gallery: CES 2016: ASRock Shows mini-STX 5x5 for Business and Education




    More...

  7. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5677

    Anandtech: The Apple iPad Pro Review

    At this point it probably isn’t a secret that tablet sales have leveled off, and in some cases they have declined. Pretty much anywhere you care to look you’ll see evidence that the tablet market just isn’t as strong as it once was. It’s undeniable that touch-only tablets have utility, but it seems that the broader market has been rather lukewarm about tablets. I suspect at least part of the problem here is that the rise of the phablet has supplanted small tablets. Large tablets are nice to have, but almost feel like a luxury good when they’re about as portable as an ultrabook. While a compact laptop can’t easily be used while standing, or any number of other situations where a tablet is going to be better, a compact laptop can do pretty much anything a touch-only tablet can. A laptop is also going to be clearly superior for a significant number of cases, such as typing or precise pointing.
    This brings us to the iPad Pro. This is probably the first time Apple has seriously deviated from traditional iPad launches, putting together a tablet built for (limited) productivity and content creation rather than just simple content consumption, creating what's arguably the iPad answer to the Surface Pro. To accomplish this, Apple has increased the display size to something closer to that of a laptop, and we see the addition of a stylus and a keyboard cover for additional precision inputs. Of course, under the hood there have been a lot of changes as well. Read on for the full review of the Apple iPad Pro.

    More...

  8. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5678

    Anandtech: Interview with Ian Livingstone CBE: Gaming in VR and Development in the UK

    This week I decided last minute to attend PG Connects, a trade show conference on mobile gaming, attended by developers and businesses looking to promote or sell their games and services. As part of the conference, several presentation tracks relating to mobile gaming, such as promotion, media interaction and ‘tales of the industry’, were included to help educate the (mostly young) developers present. There were also a few of the old guard of the UK games industry presenting, and I jumped at the opportunity to speak to Ian Livingstone for a quick fifteen minutes.
    Ian Livingstone is a well-known figure, particularly in the UK, for the many roles he has played in developing the sector from starting with text and table-top based imagination gaming right the way through to full on graphical immersion.
    - Ian started in 1975 by co-founding Games Workshop, the miniature wargaming company that quickly spread as a haven for Dungeons & Dragons and Warhammer enthusiasts to gain supplies to build battlefields, paint figurines, or teach newcomers. As part of this, Games Workshop brought the official original D&D to the UK.
    - Ian is also the co-founder and co-writer of the Fighting Fantasy series of RPG novels, part of the Choose Your Own Adventure style of story-telling. This was the ‘to turn left, go to page 72’ sort of dungeon crawlers that would explain the narrative but still leave the important decisions to the reader. I have fond memories of these books.
    - On the videogame side, Ian is the former Life President of Eidos Interactive, originally investing in and doing design work for publisher Domark before it was acquired by Eidos. Part of Ian’s role involved securing the popular Eidos franchises and IP such as Tomb Raider and Hitman as the industry evolved. Eidos was acquired by Square Enix in 2009, and since then Ian has been a champion of the UK games industry. In 2011, he was tasked by the UK government to produce a report reviewing the UK video games industry, described as ‘a complete bottom up review of the whole education system relating to games’. Ian’s current interests, aside from promoting the strength of UK gaming, involve investing in talent for the gaming industry and the future.
    - In recognition of his work, Ian was appointed an OBE and a CBE for services to the gaming industry, has won the BAFTA Interactive Special Award and Fellowship and a British Inspiration Award, and holds an Honorary Doctorate of Technology from the University of Abertay, Dundee.
    Virtual Reality

    Ian Cutress: What are your thoughts on VR (Virtual Reality)?
    Ian Livingstone: Technology evolves in the gaming industry like no other entertainment industry. There’s always a new platform that comes along that gets people very excited when it comes to leveraging their content to new areas, new technologies and new audiences. Of course VR is causing that excitement right now. We have seen in previous years, and not too long ago, places like Facebook become a great platform for commercial games, and mobile became an amazing platform for games for people who didn’t even think of themselves as gamers. It became a mass market entertainment industry because of Apple coming along with swipe technology, and then everyone was able to play a game. People were no longer intimidated by sixteen-button controllers, which were the realm of console gamers. So video games become a mass market if they are intuitive - if people don’t have to learn any particular rules or even learn how to play. Therefore I would hope that VR, at the starting point, is a mass market entertainment device in allowing people to play intuitively.
    Now clearly Mark (Zuckerberg) didn’t buy Oculus merely as a games platform – he sees it as an immersive social platform that will include games but is going to be much wider in scope. But from a games point of view it is a fantastic opportunity yet again, allowing people to have experiences they couldn’t have without it. My worry about it is that there is going to be too much content on a device that is going to be too expensive at launch.
    IC: So your thoughts on $600 for Oculus?
    IL: It’s a lot. In many ways it is a peripheral, and peripherals have never been hugely successful unless they became the technology of the day. Take a peripheral-based idea like Guitar Hero – it was hugely successful and people were prepared to pay a lot of money for a single-trick device. Clearly VR gives you the scope to play many games on the device, but in the short term, as far as developers are concerned, they are more likely to be getting revenue from the hardware manufacturers rather than consumers. It is a somewhat strange launch point because people are wary of VR, not being used to having a device around their head for more than five minutes when playing games, or of motion sickness due to any sort of acceleration that makes some people feel a bit queasy. I think there’s a huge amount of excitement, a huge amount of opportunity, but it’s not going to be a slam dunk. I think there are going to be a lot of people who don’t succeed, but there are going to be some fantastic success stories.
    IC: When you say succeed; are you speaking more on hardware or software?
    IL: On the software side. I mean everyone seems to be creating some sort of VR opportunity today and the consumers can’t possibly digest it all. I’m just caveating the excitement behind VR with a little bit of realism! This is quite a change in games.
    IC: What price would a headset have to be to become more widely accepted?
    IL: One of the issues is that you can buy a console for less!
    IC: So does a VR headset have to be an integrated gaming system on its own, or does it have to reduce down?
    IL: I would think it has to reduce down to that $150 mark. At $600 it can’t be a mass market proposition today. But as we know, technology always starts off expensive – the early adopters are going to buy it no matter what the price, and over time the market will sort out what price it should be in order for it to be successful. But in many ways, hardware is a tough business to be in. I mean Sega pulled out of hardware, and Nintendo has had its highs and its lows in hardware. It’s a tough business, and by comparison software is a lot easier.
    IC: How many of the headsets have you tried personally? Any favourites?
    IL: I’ve tried three, but I don’t feel qualified to comment on any in particular! I’ve enjoyed the experience if there’s no acceleration involved because I do feel a little bit queasy. Apart from games I have toured the Serengeti and climbed a couple of mountains, and that has been fantastic. I’ve sat in a cockpit of a plane too.
    IC: Today in your talk you mentioned that the App Store and Google Play were essentially the world’s largest shops with the smallest shop windows, referring to the top lists where everyone is trying to game the system. Is there anything that could be done to improve it? Is this even a problem?
    IL: I think everyone is tired of seeing the same top ten! Users want to know more, so the App Store has to give a way for greater discoverability for great games that aren’t being seen. That is easier said than done, and there isn’t a single answer. But I know it would be welcomed by consumers and creators alike.
    The UK and Gaming Education

    IC: What makes the UK a good place to make games? We’ve seen other regional industries dissolve but the UK is still strong.
    IL: We have a rich heritage of making games, and got off to a flying start in the 1980s when kids were coding in schools – plus we are a naturally creative nation with our film, our fashion, our music, architecture, design, our publishing and now of course our games industry. We have that ability to create entertainment that resonates with global audiences and most of our content is admired around the world. We have that ability to create unique entertainment – it’s a magic fairy dust that makes you come back time and time again and we punch way above our weight in content creation. So combine creativity with the early adoption of technology and hey presto: video games!
    IC: Are there any video games made in the UK that you feel don’t get that ‘made in UK’ recognition?
    IL: There are many cases of games that people would not know originated in the UK. Grand Theft Auto V, developed in Scotland by Rockstar North, is the incredible and largest entertainment franchise in any medium, and it is not always known that it was developed in the UK. There is the success of companies like Jagex with Runescape, or the fact that Tomb Raider was originally developed in the UK. Games like Football Manager probably have been mostly acknowledged as being from the UK! But there are companies like Creative Assembly with their Total War series, or Moshi Monsters, CSR Racing. There’s a huge list of content and new successes – Batman from Rocksteady for example. The list is seemingly endless, but most people assume that video games are developed in the United States or Japan, so they don’t get recognized as being from the UK, plus we’re not very good at blowing our own trumpet! We don’t shout about our successes. That’s why I always try to get the message out to media, to parents and to investors that we are very good at making games, it’s a great British success story, it’s a proper job and it’s a real investment opportunity – so go for it.

    Ian Livingstone's TEDxZurich talk on 'The Power of Play'
    IC: You’ve been working with the UK Government on a number of projects for the gaming industry. Can you talk about what you’ve done in this field in recent years?
    IL: I’m delighted with the way the UK Government is now very supportive of the video games industry here. I’ve worked a lot with Ed Vaizey, the Culture Minister, on a number of projects. I was chair of the Computer Games Skills Council for Creative Skillset for seven years and we mapped out every university course with the word ‘games’ in it. Out of the 144 courses, we only felt able to accredit ten as being fit for purpose to earn the Creative Skillset Kite at the time.
    As an industry we’re struggling to find enough computer programmers of a high enough quality for some of the games in development. It was crazy that in the early days we had so many young people unemployed and we were so good at making games and programming that we had to outsource production overseas. Also the fact that a lot of our (UK) companies had to be bought out because they couldn’t access finance because the investment community didn’t understand the value of digital intellectual property or the ability to scale great games very profitably and globally.
    So the government tasked Alex Hope (the Managing Director of Double Negative, a major UK video effects studio) and I to write a review called Next Gen which was published by Nesta and we made twenty recommendations about education and additional education (for the skills related to the gaming industry). We found that IT taught in schools was largely a strange hybrid of office skills. Kids were being bored to death with Word, PowerPoint and Excel. Against all odds we were actually putting them off technology while they ran their lives through social media, using a phone as almost a part of their brain. Effectively ICT was teaching kids how to read but not how to write. They could use an application but not make an application. They could play a game but not make a game. What we wanted to do was turn them from consumers to creators of technology, so our number one recommendation in Next Gen was to put Computer Science as an essential discipline on the national curriculum. Next Gen came out in 2011, and the Department for Education at first said they weren’t interested in our recommendations and that ICT was perfectly fine. It might have been fine for what it was but it was outdated, outmoded and absolutely no good for the 21st century skills required.
    So we started the Next Gen Skills Coalition backed by UKIE, the trade body association for UK Interactive Entertainment, for campaigning and talks and being mad campaigners for about four years when we finally got to meet Michael Gove’s (the Education Minister at the time) special advisors. Eric Schmidt (current Executive Chairman of Alphabet, formerly Google) also referenced Next Gen in his MacTaggart Lecture in 2011. We finally got to meet Michael Gove himself, and to his credit he isn’t always Mr. Popular when it comes to further education, but he did take on-board our recommendations and said he would change the curriculum. 2014 saw the new curriculum coming to English schools so now every child can have the opportunity to learn how to code, and more importantly how to think computationally, problem solve and give them better skills for the 21st century and for jobs that don’t yet exist rather than training for jobs that will no longer exist. So we’re getting from the passenger seat to the driver seat in technology and hopefully the UK might be able to create the next Google, Facebook or Twitter, as well as its games.
    There are a lot more university courses now accredited aside from those initial ten, but the important thing was changing the curriculum in schools, moving away from entry level digital literacy to a much higher set of skills. Not everyone is going to become a coder or a programmer but they should understand how code works to be a true digital citizen. You have to understand its place, so I think digital literacy is as important as literacy and numeracy for the 21st century and you could argue that computer science is the new Latin because it underpins the digital world in the way that Latin underpins the analogue world. So we have to think about digital creativity and to make things interesting – get kids to build an app, make a game, build a website, do some robotics and to learn by doing in order to create.
    I think games are also misunderstood as a medium. You can park your prejudice against one or two titles and think about what is happening when you play the game – you problem solve, you learn intuitively, you’re in a fair and safe environment, you’re almost incentivised to try again, you’re not punished for your mistakes and it enables creativity. Like Minecraft where you are building these wonderful 3D architectural worlds like digital Lego and sharing them with your friends. For me games are a wonderful learning tool, and why can’t learning be fun and playful – there’s no reason not to be.
    The second thing with Ed Vaizey is that he did understand the need for access to finance and helped bring about the introduction of tax credits, because film and TV already had that access and the games industry had never had any help. There’s no BFI (British Film Institute) or Film Council equivalent. There were certainly no tax incentives. So now we’ve got production tax credits, so we can build games that would not ordinarily have been built, from a cultural sense or from an economic sense.
    IC: When you see somebody that has a good idea for a game or for content, what is the barrier to production (talent, financial, etc.)?
    IL: All of the above!
    IC: Are there any current bottlenecks?
    IL: The best thing to do is to make a game, learn from your mistakes, and then make another game. Fail fast. There’s no point in saying you had an idea for a game – having the idea is very easy and we can all say that. You have to find out if you’re up to doing it. But don’t be put off by failure – failure is just success in progress. Angry Birds was Rovio’s 51st game, not their first game. So you have to have some real passion and follow your heart. Hopefully one day you will find an audience and find a way.
    IC: What are your current projects?
    IL: I’m currently applying to open a free school (a non-profit, state-funded school not run by the state, similar to an academy, subject to the same rules as state schools). Its aim is to be the flagship school for all the things I’ve been campaigning for. So more creativity in the classroom, more computer science, more computational thinking, more project-based work and more learning by doing, to get people creative in games as a cross-disciplinary approach to problem solving rather than rote learning of siloed subjects. It will have greater engagement and greater traction with kids because Generation Z is different. They naturally collaborate, they naturally share, and collaboration shouldn’t be seen as cheating because it’s what we do in the workplace. So let’s work with that and bring the workplace closer to the classroom and vice versa.
    Many thanks to Ian for his time at PG Connects, and best of luck in his future endeavours. Hopefully in a few years we can loop back and get his opinion again on how the industry is changing.
    Relevant Links

    Ian Livingstone's Twitter: https://twitter.com/ian_livingstone
    Next Gen Report: http://www.nesta.org.uk/publications/next-gen
    The Power of Play, Ian Livingstone's TEDxZurich talk: https://www.youtube.com/watch?v=58P8JU5p_Z4
    How British Video Games Became a Billion Pound Industry (BBC): http://www.bbc.co.uk/timelines/zt23gk7
    Eric Schmidt’s MacTaggart Lecture 2011: https://www.youtube.com/watch?v=hSzEFsfc9Ao
    Creative Skillset: http://creativeskillset.org/
    Free School Application: http://www.bbc.co.uk/news/technology-29550486


    More...

  9. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5679

    Anandtech: Logitech Formally Exits OEM Mouse Market

    In a bit of news that is a sign of the times, this week Logitech announced that it had completed its exit from the OEM mouse business. The company no longer sells OEM mice, which for a long time accounted for a large portion of Logitech’s revenue. Instead the company will continue to focus on new categories of premium products for retail markets.
    Logitech was among the first companies to mass-produce computer mice back in the eighties. For decades, its mice were supplied with PCs made by various manufacturers, and for a long time Logitech’s brand was synonymous with pointing devices. In fact, Logitech’s U96 is among the world’s most famous optical mice since it was bundled with millions of PCs. However, a lot has changed for Logitech in recent years. As sales of desktop PCs began to stagnate in the mid-2000s and the competition intensified, OEM margins dropped sharply. At some point, the OEM business ceased to make sense for Logitech: there was no growth and profitability was minimal.
    Last March the company announced plans to stop selling OEM devices, and in December Logitech made its final shipments, entirely depleting its inventory. Sales of OEM hardware accounted for about 4.45% of the company’s revenue in Q3 FY2016, which ended on December 31, 2015. Due to razor-thin margins, Logitech’s OEM business was not exactly something that could be sold for a lot, according to the company. Moreover, it did not make a lot of sense for Logitech to sell it and license the brand to a third party.
    Logitech has been expanding its product portfolio for many years now and while mice, trackballs and keyboards remain three key types of products for the company, they no longer account for the lion’s share of Logitech’s revenue. The manufacturer recognizes gaming gear (which includes mice, keyboards, speakers, headsets, controllers and other devices), mobile speakers, video collaboration as well as tablet and other accessories as its key growth categories of products. Net sales of Logitech's growth category products totaled $224.87 million in Q3 FY2016, net sales of traditional devices totaled $368.87 million, whereas OEM business brought only $26.512 million in revenue. The lack of OEM mice in Logitech's portfolio will be offset by growing sales of other products.
    Ultimately, even though Logitech has stopped selling cheap mice to PC makers, it remains one of the world’s largest suppliers of pointing devices and keyboards, and many premium personal computers still come equipped with the company’s advanced keyboards and mice designed for gamers. These days the company has also taken on a more well-rounded portfolio, with significant presences in speakers, PC headsets, webcams, remotes and other devices.


    More...

  10. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,809
    Post Thanks / Like
    #5680

    Anandtech: GDDR5X Standard Finalized by JEDEC: New Graphics Memory up to 14 Gbps

    In Q4 2015, JEDEC (a major semiconductor engineering trade organization that sets standards for dynamic random access memory, or DRAM) finalized the GDDR5X specification, with accompanying white papers. This is the memory specification expected to be used for next-generation graphics cards and other devices. The new technology is designed to improve the bandwidth available to high-performance graphics processing units without fundamentally changing the memory architecture of graphics cards or the memory technology itself, similar to previous generations of GDDR, although these new specifications arguably push the physical limits of the technology and hardware in its current form.
    The GDDR5X SGRAM (synchronous graphics random access memory) standard is based on the GDDR5 technology introduced in 2007 and first used in 2008. The GDDR5X standard brings three key improvements to the well-established GDDR5: it increases data-rates by up to a factor of two, it improves energy efficiency of high-end memory, and it defines new capacities of memory chips to enable denser memory configurations of add-in graphics boards or other devices. What is very important for developers of chips and makers of graphics cards is that the GDDR5X should not require drastic changes to designs of graphics cards, and the general feature-set of GDDR5 remains unchanged (and hence why it is not being called GDDR6).
    Performance Improvements

    Nowadays, highly binned GDDR5 memory chips can operate at 7 Gbps to 8 Gbps data rates. While it is possible to increase the performance of the GDDR5 interface for command, address and data in general, according to Micron Technology, one of the key designers of GDDR5X, there are limitations when it comes to array speed and command/address protocols. In a bid to improve the performance of GDDR5 memory, engineers had to change the internal architecture of the memory chips significantly.
    The key improvement of the GDDR5X standard compared to the predecessor is its all-new 16n prefetch architecture, which enables up to 512 bit (64 Bytes) per array read or write access. By contrast, the GDDR5 technology features 8n prefetch architecture and can read or write up to 256 bit (32 Bytes) of data per cycle. Doubled prefetch and increased data transfer rates are expected to double effective memory bandwidth of GDDR5X sub-systems. However, actual performance of graphics cards will depend not just on DRAM architecture and frequencies, but also on memory controllers and applications. Therefore, we will need to test actual hardware to find out actual real-world benefits of the new memory.
    Just like its predecessor, GDDR5X functions with two different clock types - a differential command clock (CK) to which address and command inputs are referenced, as well as a forwarded differential write clock (WCK) to which read and write data are referenced. WCK runs at a frequency that is two times higher than the CK. The data can be transmitted at double data rate (DDR) or quad data rate (QDR) relative to the differential write clock (WCK), depending on whether the 8n prefetch or 16n prefetch architecture and protocols are used. Accordingly, if makers of chips manage to increase the CK clock to 1.5 GHz, then the data rate in QDR/16n mode will rise to 12 Gbps.
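    To make the clock relationships concrete, the per-pin data rate can be worked out directly from the CK/WCK arrangement described above. This is a small sketch of our own arithmetic, not a formula taken from the JEDEC document:

        # Per-pin data rate from GDDR5/GDDR5X clocking:
        # WCK runs at 2x CK; data toggles at DDR (2 transfers per WCK cycle) in 8n mode
        # or QDR (4 transfers per WCK cycle) in 16n mode.
        def per_pin_data_rate_gbps(ck_ghz: float, qdr: bool) -> float:
            wck_ghz = 2 * ck_ghz
            transfers_per_wck_cycle = 4 if qdr else 2
            return wck_ghz * transfers_per_wck_cycle

        print(per_pin_data_rate_gbps(1.25, qdr=False))  # GDDR5-style 8n/DDR: 5 Gbps
        print(per_pin_data_rate_gbps(1.5, qdr=True))    # GDDR5X 16n/QDR at CK = 1.5 GHz: 12 Gbps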
    Since the GDDR5X protocol and interface training sequence are similar to those of GDDR5, it should be relatively easy for developers of chips to adjust their memory controllers to the new type of memory. However, since the QDR mode (which is called Ultra High Speed mode in Micron’s materials) mandates the usage of PLLs/DLLs (Phase Locked Loops, Delay Locked Loops), there will be certain changes to the design of high-end memory chips.
    JEDEC’s GDDR5X SGRAM announcement discusses data rates from 10 to 14 Gbps, but Micron believes that eventually they could be increased to 16 Gbps. It is hard to say whether commercial chips will actually hit such data rates, keeping in mind that there are new types of memory incoming. However, even a 256-bit GDDR5X memory sub-system running at 14 Gbps could provide up to 448 GB/s of memory bandwidth, just 12.5% lower than that of AMD’s Radeon R9 Fury X (which uses first-gen HBM).
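    Extending the same arithmetic to a whole memory sub-system reproduces the figures quoted above and in the table below: total bandwidth is simply the per-pin data rate multiplied by the bus width. Again, this is our own back-of-the-envelope sketch:

        # Total memory bandwidth: per-pin data rate (Gb/s) x bus width (bits) / 8 bits per byte.
        def total_bandwidth_gb_s(per_pin_gbps: float, bus_width_bits: int) -> float:
            return per_pin_gbps * bus_width_bits / 8

        gddr5x_256 = total_bandwidth_gb_s(14, 256)   # 448 GB/s
        fury_x_hbm = total_bandwidth_gb_s(1, 4096)   # 512 GB/s (first-gen HBM)
        print(gddr5x_256, fury_x_hbm)
        print(f"{(fury_x_hbm - gddr5x_256) / fury_x_hbm:.1%} lower")  # 12.5% lower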
    GPU Memory Math
    Product                                          Total Capacity  B/W Per Pin  Chip/Stack Capacity  No. Chips/Stacks  B/W Per Chip/Stack  Bus Width  Total B/W  Est. DRAM Power
    AMD Radeon R9 290X                               4 GB            5 Gb/s       2 Gb                 16                20 GB/s             512-bit    320 GB/s   30 W
    NVIDIA GeForce GTX 980 Ti                        6 GB            7 Gb/s       4 Gb                 12                28 GB/s             384-bit    336 GB/s   31.5 W
    NVIDIA GeForce GTX 960                           2 GB            7 Gb/s       4 Gb                 4                 28 GB/s             128-bit    112 GB/s   10 W
    AMD Radeon R9 Fury X                             4 GB            1 Gb/s       1 GB                 4                 128 GB/s            4096-bit   512 GB/s   14.6 W
    Samsung's 4-Stack HBM2 based on 8 Gb DRAM        16 GB           2 Gb/s       4 GB                 4                 256 GB/s            4096-bit   1 TB/s     n/a
    Theoretical GDDR5X 256-bit sub-system            8 GB            14 Gb/s      1 GB (8 Gb)          8                 56 GB/s             256-bit    448 GB/s   20 W
    Theoretical GDDR5X 128-bit sub-system            4 GB            14 Gb/s      1 GB (8 Gb)          4                 56 GB/s             128-bit    224 GB/s   10 W
    Capacity Improvements

    Performance was not the only thing that the developers of GDDR5X had to address. Many applications require not only high-performance memory, but a lot of high-performance memory. Increased capacities of GDDR5X chips will enable their adoption by broader sets of devices in addition to graphics/compute cards, game consoles, network equipment and other areas. One would expect the initial high-density configurations to be slightly conservative on frequency to begin with.
    The GDDR5 standard covered memory chips with 512 Mb, 1 Gb, 2 Gb, 4 Gb and 8 Gb capacities. The GDDR5X standard defines devices with 4 Gb, 6 Gb, 8 Gb, 12 Gb and 16 Gb capacities. Typically, the mainstream DRAM industry tends to double capacities of memory chips for economic and technological reasons. However, with GDDR5X the industry decided to ratify SGRAM configurations with rather unusual capacities — 6 Gb and 12 Gb.
    The mobile industry already uses LPDDR devices with 3 Gb, 6 Gb and 12 Gb capacities in a bid to maximize flexibility of memory configurations for portable electronics. As it appears, companies developing standards for graphics DRAM also wanted to capitalize on flexibility. A GDDR5X chip with 16 Gb capacity made using 20 nm or 16/18 nm process technology would have a rather large die size and thus high cost. However, the size and cost of a 12 Gb DRAM IC should be considerably lower and such a chip could arguably address broader market segments purely on cost.
    Just like in the case of GDDR5, the GDDR5X standard fully supports clamshell mode, which allows two 32-bit memory chips to be driven by one 32-bit memory controller by sharing the address and command bus while reducing the number of each DRAM IC’s I/Os to 16. Such operation has no impact on system bandwidth, but allows doubling the number of memory components per channel. For example, it should be theoretically possible to build a graphics card with 64 GB of GDDR5X using one GPU with a 512-bit memory bus and 32 16 Gb GDDR5X memory chips.
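    The capacity side of that 64 GB example is straightforward to verify; the sketch below simply follows the clamshell arithmetic described above (our own illustration, not part of the specification):

        # Capacity of a clamshell GDDR5X configuration:
        # a 512-bit bus gives 16 x 32-bit channels; clamshell hangs two chips off each channel.
        def clamshell_capacity_gb(bus_width_bits: int, chip_density_gbit: int) -> float:
            channels = bus_width_bits // 32       # 32-bit wide GDDR5X devices
            chips = channels * 2                  # clamshell: two chips share one channel
            return chips * chip_density_gbit / 8  # gigabits -> gigabytes

        print(clamshell_capacity_gb(512, 16))  # 32 chips x 16 Gb = 64 GB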
    Unusual capacities will help GDDR5X better address all market segments, including graphics cards, HPC (high performance computing), game consoles, network equipment and so on. However, it should be noted that GDDR5X has an extremely potent rival, second-gen HBM, which offers a number of advantages, especially in the high-end segment of the graphics and HPC markets.
    Energy Efficiency

    Power consumption and heat dissipation are two major limiting factors of compute performance nowadays. When developing the GDDR5X standard, the industry implemented a number of ways to keep power consumption of the new graphics DRAM in check.
    Supply voltage and I/O voltages of the GDDR5X were decreased from 1.5V on today’s high-end GDDR5 memory devices to 1.35V. Reduction of Vdd and Vddq should help to cut power consumption of the new memory by up to 10%, which is important for high-performance and mobile devices where the memory can take a sizable chunk of the available power budget.
    The reduction of supply and I/O voltages is not the only measure to cut power consumption of the new memory. The GDDR5X standard makes temperature-sensor-controlled refresh rate a compulsory feature of the technology, something that could help to optimize power consumption in certain scenarios. Moreover, there are a number of other features and commands, such as per-bank self refresh, hibernate self refresh, partial array self refresh and others, that were designed to shrink the energy consumption of the new SGRAM.
    Due to lower voltages and a set of new features, power consumption of a GDDR5X chip should be lower compared to that of a GDDR5 chip at the same clock-rates. However, if we talk about target data rates of the GDDR5X, then power consumption of the new memory should be similar or slightly higher than that of GDDR5, according to Micron. The company says that GDDR5X’s power consumption is 2-2.5W per DRAM component and 10-30W per board. Even with similar/slightly higher power consumption compared to the GDDR5, the GDDR5X is being listed as considerably more energy efficient due to its improved theoretical performance.

    We do not know specifications of next-generation graphics adapters (for desktops and laptops) from AMD and NVIDIA, but if developers of GPUs and DRAMs can actually hit 14 Gb/s data-rates with GDDR5X memory, they will double the bandwidth available to graphics processors vs GDDR5 without significantly increasing power consumption. Eventually, more efficient data-rates and unusual capacities of the GDDR5X could help to actually decrease power consumption of certain memory sub-systems.
    Implementation

    While internally a GDDR5X chip is different from a GDDR5 one, the transition of the industry to GDDR5X is a less radical step than the upcoming transition to the HBM (high-bandwidth memory) DRAM. Moreover, even the transition from the GDDR3/GDDR4 to the GDDR5 years ago was considerably harder than transition to the GDDR5X is going to be in the coming years.
    The GDDR5X-compliant memory chips will come in 190-ball grid array packaging (as compared to 170-ball packaging used for current GDDR5), thus, they will not be pin-to-pin compatible with existing GDDR5 ICs or PCBs for modern graphics cards. But while the GDDR5X will require development of new PCBs and upgrades to memory controllers, everything else works exactly like in case of the GDDR5: the interface signal training features and sequences are the same, error detection is similar, protocols have a lot of resemblances, even existing GDDR5 low and high speed modes are supported to enable mainstream and low-power applications. BGA packages are inexpensive, and they do not need silicon interposers nor use die-stacking techniques which HBM requires.
    Implementation of GDDR5X should not be too expensive both from R&D and production perspectives; at least, this is something that Micron implied several months ago when it revealed the first details about the technology.
    Industry Support

    The GDDR5X is a JEDEC standard supported by its members. The JEDEC document covering the technology contains vendor IDs for three major DRAM manufacturers: Micron, Samsung and SK Hynix. Identification of the memory producers is needed for controllers to differentiate between various vendors and different devices, and listing the memory makers demonstrates that they participated in development, considered features and balloted on them at JEDEC’s meetings, which may indicate their interest in supporting the technology. Unfortunately, exact plans for each of the companies regarding GDDR5X production are unknown, though we would expect GDDR5X parts to fit between the current GDDR5 high end and anything implementing HBM, or for implementing higher memory capacity on lower-end GPUs. Micron plans to start mass production of its GDDR5X memory chips in mid-2016, so we might see actual GDDR5X-based memory sub-systems in less than six months from now.
    NVIDIA, currently the world’s largest supplier of discrete graphics processors, said that as a member of JEDEC it participates in the development of industry standards like GDDR5X. AMD is also a member of JEDEC and usually plays a key role in the development of memory standards. Both of these companies also employ compression algorithms to alleviate the stress on texture transfers between the GPU and memory, and thus an increase in bandwidth (as shown by Fiji) plus an increase in density can bring benefits in texture-rich or memory-bound compute scenarios.
    While the specific plans of various companies regarding GDDR5X are unclear, the technology has great potential if the numbers are accurate (they have to be, it's a standard) and has every chance of being adopted by the industry. The main rival of GDDR5X, second-generation HBM, can offer higher bandwidth, lower power consumption and smaller form factors, but at the cost of design and manufacturing complexities. In fact, what remains to be seen is whether HBM and GDDR5X will actually compete directly against each other or simply become two complementary types of memory. Different applications nowadays have different requirements, and an HBM memory sub-system with 1 TB/s of bandwidth makes perfect sense for a high-end graphics adapter. However, mainstream video cards should work perfectly well with GDDR5X, and chances are we will see both in play at different market focal points.


    More...
