Thread: Anandtech News

  1. RSS Bot FEED (#6781)

    Anandtech: The NVIDIA GeForce GTX 1080 Ti Founder's Edition Review: Bigger Pascal

    Unveiled last week at GDC and launching tomorrow is the GeForce GTX 1080 Ti. Based on NVIDIA’s GP102 GPU – aka Bigger Pascal – the job of GTX 1080 Ti is to serve as a mid-cycle refresh of the GeForce 10 series. Like the GTX 980 Ti and GTX 780 Ti before it, that means taking advantage of improved manufacturing yields and reduced costs to push out a bigger, more powerful GPU to drive this year’s flagship video card. And, for NVIDIA and their well-executed dominance of the high-end video card market, it’s a chance to run up the score even more.

    More...

  2. RSS Bot FEED (#6782)

    Anandtech: The Chuwi LapBook 14.1 Review: Redefining Affordable

    In this industry, it is all too easy to focus only on the high end of the PC market. Manufacturers want to show off their best side, and often provide samples of high-end, high-expense devices more readily than their other offerings. While these devices are certainly exciting, and can set the bar for how products should perform, focusing on them leaves a gap in coverage of the other end of the market. When Chinese manufacturer Chuwi reached out with an opportunity to take a look at the Chuwi LapBook 14.1, it was a great chance to see how this market has evolved over the last several years, and how another manufacturer tackles the inescapable compromises of this end of the market. The Chuwi LapBook 14.1 offers a lot of computer for the money.

    More...

  3. RSS Bot FEED (#6783)

    Anandtech: Sony Demonstrates Concept Xperia Ear Headphones and Xperia Touch Android Projector

    As we move through our MWC meeting writeup backlog this week, one of the interesting developments we saw was from Sony. Apart from new smartphones, Sony showed two interesting devices at MWC 2017: the Xperia Ear Open-Style Concept as well as the Xperia Touch Android projector. Both devices use a number of Sony’s proprietary technologies, and Sony states that their usage models differ from what we expect from today’s devices. The open-style headphones are officially a concept device, with Sony seeking feedback, whereas the projector is a product that is about to ship.
    The Xperia Ear Open-Style Concept: Headphones That Let You Listen to the Outside World

    Sony showcased its wireless stereo headphones, called the Xperia Ear Open-Style Concept, on the show floor. The headphones enable users to listen to music and receive notifications from their apps while still hearing sounds from the outside world. Sony's reasoning is that people wearing conventional headphones may not be aware of what is happening in their periphery: they may not notice an approaching car or something heavier, and they cannot be warned audibly because their ears are occupied.
    Sony does not go deep into explaining how the Xperia Ear Open-Style concept device works, saying only that it has two spatial acoustic conductors and driver units transmitting sound to the ear canal. Based on the look of two of Sony's prototypes (one was demonstrated at MWC, another was shown in images from Sony's labs), the driver units seem to be rather large, and it is unclear whether the company can make them considerably smaller. Moreover, keep in mind that everything is wireless, which adds its own complexities (e.g., power consumption and the stability of the connection to the audio source).
    Sony compares the Xperia Ear Open-Style to its Xperia Ear headset, so we are talking about a device that connects to an Android smartphone and supports Sony’s Agent assistant. Based on Sony’s and our own pictures, the Xperia Ear Open-Style is rather large, which may indicate that it has more compute inside than audio alone requires, perhaps teasing other functionality. If the concept goes into production it will likely shrink a bit, but exact dimensions are probably something even Sony does not know right now.
    The Xperia Touch: Android Apps Outside the Phone

    Android apps run on Android-based smartphones, tablets or Chromebooks, which makes it hard to use them collaboratively unless you happen to own one of those 32-inch table(t)s. Sony wants to change that with its Xperia Touch projector. The device not only projects images but also senses interaction with them, turning any flat surface into a 23” touchscreen; it can also project an 80” image onto a wall. Sony pitches this as a useful tool for collaborative family entertainment, but it could also be used for collaborative work or in public venues such as cafés.
    The Sony Xperia Touch is a fully fledged Google Android 7.0-based computer, built around an unknown SoC and equipped with 3 GB of LPDDR3, 32 GB of eMMC storage, an array of sensors (e-compass, GPS, ambient light, barometer, temperature, humidity, and human detection that also recognizes certain gestures), communication capabilities (802.11ac, Bluetooth 4.2, NFC, USB Type-C, HDMI), a microphone that can be used for voice commands, stereo speakers, as well as a battery (good for one hour at half brightness). The display system uses a laser diode-based 0.37” SXRD projection unit with a 1366×768 resolution, 100 nits brightness and a 4000:1 contrast ratio. To detect what users are doing, the Xperia Touch uses an IR sensor and Sony Exmor RS RGB sensors that capture images at 60 fps.
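    Those two projection sizes put the fixed 1366×768 resolution in perspective. A quick back-of-envelope pixel-density calculation (a Python sketch of mine, assuming square pixels and the quoted diagonals):

```python
import math

def projected_ppi(h_px: int, v_px: int, diagonal_in: float) -> float:
    """Pixel density of a projected image, assuming square pixels."""
    diagonal_px = math.hypot(h_px, v_px)   # diagonal length in pixels
    return diagonal_px / diagonal_in

# Xperia Touch: 1366x768 projected at 23" (table) or 80" (wall)
for size in (23, 80):
    print(f'{size}" image: {projected_ppi(1366, 768, size):.0f} ppi')
# 23" image: 68 ppi
# 80" image: 20 ppi
```

    At 80 inches the image drops to roughly 20 ppi, which puts the "relatively low resolution" caveat below into concrete numbers.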

    Sony originally introduced its touch-sensing projector as a prototype at MWC 2016 a year ago, but did not share any details about availability or pricing back then. In one year the device has evolved into a commercial product, which Sony plans to start selling this spring in select markets in Europe for €1599. Given the relatively low resolution, the lack of a significant library of consumer-grade software designed with the Xperia Touch in mind, the rather short battery life and the high price, the projector is hardly aimed at a mainstream audience at this point. For the time being, this product will be aimed at companies and individuals who already have ideas about how to use it.
    Gallery: Sony Xperia Touch Projector Hands-On at MWC 2017




    More...

  4. RSS Bot FEED (#6784)

    Anandtech: SD Association Announces UHS-III (up to 624 MB/s), A2 Class, LV Signaling

    The SD Association has made three important announcements over the past couple of weeks. First is the introduction of its UHS-III bus, which increases the potential maximum throughput of SD cards. Second is the Application Performance Class 2 (A2) standard, for cards that meet a new set of IOPS-related criteria. Third is a new Low-Voltage Signaling (LVS) specification that has the potential to reduce the complexity and power consumption of future applications featuring SD memory cards. Both A2 and LVS are part of the SD 6.0 specification.
    UHS-III: Up to 624 MB/s

    As 4K, 8K and 360° content becomes more widespread, the performance requirements for storage in cameras and similar devices keep increasing. To support them, the SDA is introducing a new UHS-III interface bus that increases potential read/write bandwidth to 624 MB/s, double that of UHS-II. The UHS-III high-speed interface signals are assigned to the second row of SD card pins also present on UHS-II cards, which means that upcoming UHS-III cards will be backward compatible with UHS-II and UHS-I devices as well as any other SD hosts.
    Comparison of UHS Bus Performance
    UHS-I: 50 - 104 MB/s
    UHS-II: 156 MB/s full duplex, or 312 MB/s half duplex
    UHS-III: up to 624 MB/s (full duplex?)
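    To make the doubling concrete, here is a rough Python sketch of what each bus ceiling means for offloading a full card; the 256 GB capacity is an arbitrary example, and real cards rarely sustain the full bus rate:

```python
# Back-of-envelope: time to offload a hypothetical 256 GB card at each
# bus ceiling (peak rates; sustained card performance will be lower).
CARD_GB = 256
bus_mb_s = {"UHS-I": 104, "UHS-II": 312, "UHS-III": 624}  # MB/s, peak

for bus, rate in bus_mb_s.items():
    minutes = CARD_GB * 1000 / rate / 60
    print(f"{bus}: {minutes:.1f} min")
# UHS-I: 41.0 min
# UHS-II: 13.7 min
# UHS-III: 6.8 min
```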
    Since the SD Association only manages standards, it cannot make announcements regarding the availability of devices and memory cards supporting UHS-III. Given that the new bus requires a redesign of SoCs currently on the market (adding a faster PHY may not be too hard from an engineering point of view, but it is time consuming), plus qualification and other efforts, it is logical to expect UHS-III applications about a year from now, if not later. Keep in mind that a couple of years passed between the announcement of UHS-II and the commercial availability of cards supporting that standard.
    The SDA expects the camera industry, as well as various emerging devices that need high data throughput, to adopt UHS-III first, with others following sometime later. Not many smartphones currently support even UHS-II, so it remains to be seen when mobile devices adopt UHS-III.
    A2 Class: Higher Performance and New Functionality

    Last November the SDA introduced its Application Performance Class set of requirements for SD cards, kicking off with the A1 class, which guarantees a minimum of 500 write IOPS, 1500 read IOPS and 10 MB/s of sustained throughput. Today, A1-compliant SD and microSD cards are on the market from various manufacturers.
    Application Performance Class: Minimum Performance Requirements
    A1: 10 MB/s sustained sequential, 1500 read IOPS, 500 write IOPS
    A2: 10 MB/s sustained sequential, 4000 read IOPS, 2000 write IOPS
    At MWC 2017 the SDA expanded the Application Performance Class system with a new A2 rating. This rating requires SD cards to provide random performance of 2000 write IOPS and 4000 read IOPS, while leaving the sustained sequential read/write requirement at 10 MB/s.
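    The class requirements are easy to express as a simple threshold check. A minimal Python sketch (the function and sample numbers are mine; the minimums are the SDA figures quoted above):

```python
def app_perf_class(read_iops: int, write_iops: int, seq_mb_s: float) -> str:
    """Map measured card performance to an Application Performance Class,
    using the A1/A2 minimums published by the SDA."""
    if seq_mb_s >= 10 and read_iops >= 4000 and write_iops >= 2000:
        return "A2"
    if seq_mb_s >= 10 and read_iops >= 1500 and write_iops >= 500:
        return "A1"
    return "unclassified"

# Hypothetical measurements for two cards:
print(app_perf_class(read_iops=4500, write_iops=2200, seq_mb_s=12))  # A2
print(app_perf_class(read_iops=2000, write_iops=700, seq_mb_s=11))   # A1
```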
    The A2 Class is part of the SD 6.0 protocol specification, which means that apart from the higher random performance numbers, some A2 cards will support functions such as command queuing, caching and 'self-maintenance' to hit the performance targets. For example, command queuing can optimize random read performance, whereas caching can improve random write performance by letting cards commit data to a higher-speed NAND flash cache first (in a manner similar to what TLC-based SSDs do with their pSLC caches). High-performance random writes are important not only for programs run from cards, but also for devices that write data intensively (e.g., 360° cameras).
    The SDA does not explain much about the 'self-maintenance' element of the standard, but says that it contributes to “better memory access performance” and allows “internal background data management.” In general, it looks like upcoming SD 6.0-compliant cards will support some sort of garbage collection, akin to that performed by SSDs. Keep in mind that some already-available SD cards can hit the A2 performance requirements today, so manufacturers can put the appropriate logo on them to signify the performance level even though they may not be SD 6.0-compliant.
    At present, multiple makers of SD cards are sampling their A2-rated products, and they are expected to release commercial A2 products in the coming months. As mentioned, cards with the A2 logo are not mandated to support all of the features of the SD 6.0 specification, only to guarantee the aforementioned performance numbers. Meanwhile, since command queuing and caching help cards hit those targets, many A2 cards will feature this functionality. Keep in mind that today’s hosts do not support the SD 6.0 spec, so certain cards with the A2 logo may not demonstrate all of their advantages on hosts compatible with the SD 5.0 and SD 5.1 specs.
    LVS: Bringing the Voltage Down

    When the Secure Digital standard was designed in the late 1990s, its developers chose 3.3 V signaling because at the time it was considered low enough even for mobile devices. Eventually the SDA added 1.8 V signaling for the UHS-I and UHS-II modes, but kept 3.3 V signals for initialization and for operation with legacy hosts; as a result, modern hosts have had to support both voltages. SD 6.0 introduces low-voltage (LV) cards that support either 1.8 V or 3.3 V signaling with an auto-detection mechanism, eliminating the need for upcoming hosts to support both and thus decreasing complexity and saving some power thanks to the reduced signaling voltage. The new cards will carry the LV logo and will be backward compatible with existing hosts, but LV hosts will only work with LV cards.
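    The compatibility rules as stated reduce to a small truth table. A Python sketch of the article's compatibility claims (not of the actual SD 6.0 negotiation protocol, which is not detailed here):

```python
# LV cards auto-detect and run at either voltage, so they work with any
# host; LV-only hosts drop 3.3 V support, so they accept only LV cards.
def compatible(card_is_lv: bool, host_is_lv_only: bool) -> bool:
    if host_is_lv_only:
        return card_is_lv   # LV-only host: 1.8 V signaling exclusively
    return True             # legacy/dual-voltage host: 3.3 V always works

for card in (True, False):
    for host in (True, False):
        print(f"LV card={card}, LV-only host={host} -> {compatible(card, host)}")
```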
    Putting It All Together

    Despite the somewhat confusing mess surrounding SD cards, performance and metrics, the three announcements made by the SDA are independent of each other. Some new cards coming to the market will be UHS-III-only, some cards will be UHS-III with LVS, and others will be A2 with or without LV. As always, host support will be critical.


    More...

  5. RSS Bot FEED (#6785)

    Anandtech: Infineon Shows Off Future of eSIM Cards

    At MWC this year, Infineon showcased a lineup of its current and future embedded SIM (eSIM) products. The company demonstrated not only the industry-standard MFF2 eSIM chip, but also considerably smaller ICs designed for future miniature devices (many of which may not even exist yet as a category) as well as for M2M (machine-to-machine) applications. It is noteworthy that to manufacture an eSIM the size of a match head, Infineon uses GlobalFoundries' 14LPP process technology, taking advantage of leading-edge lithography to bring the size of a simple device down.
    The first SIM cards were introduced in 1991 along with the world’s first GSM network, operated by Radiolinja in Finland (the company is now called Elisa). Back then, mobile phones were so bulky that a card the size of a credit card (1FF) could fit inside. Eventually handsets got smaller: Mini-SIMs (2FF) replaced full-sized SIMs, and then Micro-SIM (3FF) and Nano-SIM (4FF) cards took over. While mobile phones have evolved considerably in terms of features over the last 25 years, the function of the SIM card has remained the same: it stores an integrated circuit card identifier (ICCID), an international mobile subscriber identity (IMSI), a location area identity and an authentication key (Ki; this part actually requires a basic 16- or 32-bit compute unit), as well as a phone book and some SMS messages.
    By today’s standards, the amount of data each SIM card stores is so tiny that its physical dimensions are simply not justified. Even Nano-SIMs are too large for applications like smartwatches, and this is where embedded SIMs come into play: their form factor is considerably smaller, they can be used with various operators (which makes them more flexible in general), and some of them have an expanded feature set (e.g., hardware crypto-processors). Today there is one internationally recognized form factor for eSIMs, the MFF2, which is used inside devices like Samsung’s Gear-series smartwatches with GSM/3G connectivity. Looking inside the Gear S2 smartwatch, the eSIM is actually one of the largest components, yet its functionality is disproportionately small for its dimensions.
    At MWC 2017 Infineon demonstrated two more eSIM implementations, which have not been standardized (yet?) but are already used inside millions of devices.
    The first, when packaged, measures 2.5×2.7×0.5 mm, which essentially means it has almost no packaging at all. The IC is produced using a mature 65 nm process technology, which makes it very cheap.
    The second eSIM implementation Infineon demonstrated is even tinier: fully packaged and ready to use, it measures just 1.5×1.1×0.37 mm. The IC is made using GlobalFoundries' 14LPP process technology, and the foundry charges the chip developer accordingly. Using a leading-edge process technology to make eSIMs is not common, but the approach enables device designers to take advantage of the smallest chips possible (other advantages of such chips are lower voltages and power consumption).
    It remains to be seen when the industry will formally adopt eSIM standards smaller than MFF2, but the dimensions of the eSIMs Infineon is demonstrating clearly indicate that there are ways to make these cards smaller. Moreover, companies that are not afraid of proprietary/non-standard form factors are already using Infineon's offerings. It is up for debate whether using leading-edge process technology for eSIMs makes sense in general (after all, not all devices require the tiny dimensions or expanded functionality of such eSIMs, like crypto-processors), but with 10 million non-standard eSIMs shipped to date, it is obvious that there are mass-market devices that can absorb such chips even at potentially premium pricing.


    More...

  6. RSS Bot FEED (#6786)

    Anandtech: MWC 2017: Panasonic Demonstrates Store Window as a Transparent Screen

    At Mobile World Congress this year, Panasonic demonstrated a pane of glass that can be turned into a display in an instant. The solution relies on a thin film between two sheets of glass that can quickly change its properties when electricity is applied, allowing a rear projector to focus and provide an image. The system is currently aimed at retailers that want to attract more attention to their stores and shelves. The company says that the first deployments of the technology are expected this spring.
    There are typically two ways for stores to attract the attention of passers-by: either put something interesting in the shop window, or replace the window with LCD screens that showcase something appealing. The new solution Panasonic is showing blends traditional showcases and displays, letting store owners have both. The technology behind it appears to be relatively simple: Panasonic takes two sheets of glass and puts a special film between them.
    The film is matte and can display images projected onto it by a conventional off-the-shelf projector, but when electricity is applied to the film, it becomes transparent. Similar switchable glass technologies are in frequent use: a potential difference applied across two electrodes embedded in the glass causes the larger particles in the electrolyte between them to self-assemble, allowing light to pass through. This ends up being a natural extension of the large glass projection display technology Panasonic has shown at other recent events.
    At MWC 2017, Panasonic's booth featured a mannequin wearing a red dress, a pair of black shoes and a green handbag, with the projector's lens camouflaged against the surroundings. Once the film is “switched”, the 1×2 meter window can be used as a screen, and on it Panasonic demonstrated a video of a model wearing that exact red dress (albeit with red shoes). The manufacturer says that the resolution of the display depends entirely on the resolution of the projector, though the density of the non-transparent particles as well as the placement of the projector affect quality too. Meanwhile, since the videos are displayed using a projector, it should not be too hard for stores to set everything up for transparent screens.
    Panasonic does not reveal the technology behind its smart glass, and since there are multiple types of film that change their properties when electricity is applied, it is difficult to guess which one is used without an official announcement. What is important here is that the glass can either be a screen or be completely transparent. So, unless several panes are stacked together, the window will be either a window or a display at any given moment, which limits the number of applications that can use the tech.
    At present, a 1×2 meter wall is the maximum size of Panasonic’s “transparent screen”, so anyone who wants a larger wall has to use several panes and projectors in sync. The total cost of a single 1×2 meter display will be around $3000-$4000 according to a Panasonic rep at the booth (it was not clear whether this includes the projector, though it does not sound like it does, and the price excludes a support contract). Panasonic states that it already has customers interested in these products who are basically ready to accept delivery. The high price of Panasonic’s transparent screen glass reflects not only its capabilities but also the fact that everything has to be rugged and work properly across different weather and temperature conditions. Panasonic plans to start selling its “transparent screens” in Japan first and then look for customers in other parts of the world as well.


    More...

  7. RSS Bot FEED (#6787)

    Anandtech: Intel to Acquire Mobileye for $15 Billion

    In an interesting announcement today, Intel and Mobileye have entered into an agreement whereby Intel will commence a tender offer for all issued and outstanding ordinary shares of Mobileye. At $63.54 per share, this equates to a value of approximately $15 billion.
    Mobileye is currently one of a number of competitors actively pursuing the visual computing space, and the top item on that agenda is automotive. We have seen Mobileye announcements over the last few years detailing relationships with car manufacturers on the road to fully autonomous vehicles. Intel clearly wants a piece of that action, alongside its own moves into automotive as well as the cloud computing required for various automotive tasks.
    Intel estimates that the vehicle systems, data and services market for automotive will be worth around $70 billion by 2030, spanning everything from the edge through the backhaul into the cloud. This includes predictions that each vehicle will generate 4TB of data per day, which is going to require planning in infrastructure. Intel’s expertise in elements such as RealSense technology and high-performance general compute will be an interesting match for Mobileye’s portfolio.
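    For a sense of scale, a quick sketch of what a sustained 4TB/day stream works out to (straight arithmetic; in practice much of that data would presumably be processed in the vehicle rather than sent over the backhaul):

```python
# What 4 TB of data per day per vehicle means as a sustained rate.
TB_PER_DAY = 4
bytes_per_day = TB_PER_DAY * 1e12
mb_per_s = bytes_per_day / 86_400 / 1e6   # seconds per day, bytes -> MB

print(f"{mb_per_s:.1f} MB/s sustained")        # ~46.3 MB/s
print(f"{mb_per_s * 8:.0f} Mbit/s sustained")  # ~370 Mbit/s
```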
    “This acquisition is a great step forward for our shareholders, the automotive industry, and consumers,” said Brian Krzanich, Intel CEO. “Intel provides critical foundational technologies for autonomous driving including plotting the car’s path and making real-time driving decisions. Mobileye brings the industry’s best automotive-grade computer vision and strong momentum with automakers and suppliers. Together, we can accelerate the future of autonomous driving with improved performance in a cloud-to-car solution at a lower cost for automakers.”
    The acquisition will combine the two into a single organization under Intel’s Automated Driving Group, headquartered in Israel and led by Prof. Amnon Shashua, Mobileye’s co-founder, Chairman and CTO. All current Mobileye contracts with automotive OEMs and tier-one suppliers will be retained under the single group, which will be overseen by Doug Davis, Intel SVP.
    Mobileye’s current roadmap includes products such as the EyeQ4 and EyeQ5 SoCs, aimed at level 3/4 autonomy in 2018 and 2020 respectively, as well as high-performance FPGAs for vision analysis techniques. Intel’s acquisition of Altera over a year ago as a step into the FPGA market may come into play here, as may Intel’s semiconductor manufacturing facilities. As with Altera, it will likely take some time before full integration between Intel’s resources and Mobileye’s technology occurs.
    There will be an investor call webcast about this announcement on 3/13 at 8:30 am (ET). The full transaction is expected to close within nine months, subject to regulatory approval, and is not subject to any financing conditions. Intel intends to fund the acquisition with cash from its balance sheet.
    As we get more information we will let you know.
    Additional 1: For scope, Intel's purchase of Altera was $16.7 billion, as we reported here.
    Additional 2: Here is the Investor Call slide deck.
    Gallery: Investor Call Slide Deck


    Additional 3: It will require purchasing 95% of the ordinary stock, and will use offshore cash that Intel has not repatriated into the US.



    More...

  8. RSS Bot FEED (#6788)

    Anandtech: MWC 2017: Oppo Demonstrates 5X Optical Zoom for Smartphones

    This year at MWC, Oppo showed off a smartphone prototype that uses a new dual-camera implementation to offer 5X optical zoom. The company did not reveal any actual plans to use it in products, nor the cost of the implementation, but it is likely to reach the market sometime in the future.
    The imaging capabilities of smartphones have evolved rapidly since the introduction of the first handsets with cameras. Throughout the history of camera phones, manufacturers have developed new lens packs, new CMOS sensors and extensive ISPs (image signal processors) to improve the capability and/or quality of images. For a while, a number of makers tended to install higher-resolution sensors simply because the 'megapixel number' was easier to market than the quality of optics or advanced ISPs. A lot has changed in recent years, as various smartphone makers have invested in high-end lenses (co-developed with Carl Zeiss, Leica, etc.), developed their own SoCs/ISPs for image processing, and pursued other potential differentiators in a crowded smartphone ecosystem.
    So at MWC 2017, multiple smartphone manufacturers demonstrated products with dual back-facing sensors (RGB+RGB or RGB+IR) to further improve their photography acumen. One of them was Oppo, which used two sensors to build a portable camera system with 5X optical zoom in a very different configuration from what we have seen before.
    Optical zoom is nothing new for smartphones, but Oppo’s approach is a little different from that of other makers. Oppo's 5X dual-camera optical zoom relies on two image sensors:

    • The first is placed inline with the motherboard (just like the sensors in all smartphones) and is equipped with a regular lens pack, so light hits the sensor with minimal adjustment.
    • The second is placed perpendicular to the motherboard and is equipped with separate optics providing image stabilization and optical zoom. It is possible that the lens system here can physically move to allow for further adjustment.

    To direct light to the second sensor, Oppo uses a special prism mirror placed perpendicular to the motherboard (so, basically, everything works like a periscope), whose angle it can regulate precisely, in steps as small as 0.0025 degrees, to compensate for shaking. To enable the 5X optical zoom, an unnamed ISP processes images from both sensors.
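    To see why such a tiny angular step matters, treat the prism face as a plane mirror: a tilt of θ deflects the reflected ray by 2θ, shifting the image by roughly f·tan(2θ). A Python sketch with assumed numbers (the focal length and pixel pitch are illustrative stand-ins, not Oppo's figures):

```python
import math

# Image shift at the sensor for a 0.0025-degree mirror adjustment.
theta_deg = 0.0025   # smallest adjustment step, per Oppo
f_mm = 13.0          # assumed effective focal length of the tele module
pixel_um = 1.12      # assumed sensor pixel pitch

# A plane mirror tilted by theta deflects the reflected ray by 2*theta.
shift_um = f_mm * math.tan(math.radians(2 * theta_deg)) * 1000
print(f"image shift: {shift_um:.2f} um (~{shift_um / pixel_um:.1f} px)")
# image shift: 1.13 um (~1.0 px)
```

    Under these assumptions, one adjustment step moves the image by about a single pixel, which is the sort of granularity optical stabilization needs.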
    At its MWC 2017 booth, Oppo demonstrated promo videos describing the added qualities of its optical zoom, as well as its optical image stabilization, and the company allowed visitors to try out the prototype devices. One concern when using mirrors to redirect light is that luminous intensity drops, and image quality can drop with it. In its video at the trade show, Oppo showed that photos taken in dark conditions with the prototype featuring its 5X dual-camera optical zoom were better than images made by an 'unknown' rival. In a brief hands-on, we noticed no immediate problems shooting images in good lighting. It should be noted that there are other phones on the market that use prisms, though not quite in this way.
    Oppo did not mention which smartphones are going to use its 5X dual-camera optical zoom technology, nor when. The reference system on the show floor looks slim, so it could be installed into various Oppo handsets, giving the company the option to use it in top-of-the-range smartphones with large displays, or perhaps in smaller models as well (provided they have appropriate SoCs/ISPs).
    It is noteworthy that in its briefing materials Oppo did not state the type of sensors in use, emphasizing only the 5X dual-camera optical zoom. This is likely a work in progress for a future device, which may or may not be a smartphone.


    More...

  9. RSS Bot FEED (#6789)

    Anandtech: GDC 2017 Roundup: VR for All - Pico Neo CV, Tobii, & HTC

    Now that I’ve wrapped up the major GDC product launches, I want to spend a bit of time talking about the rest of GDC.
    The annual show has always been a big draw for game developers and hardware companies alike, and since the end of the Great Recession that process has only accelerated. But without a doubt the fastest growth in terms of developer and vendor presence at the show has been VR. GDC 2016’s VR sessions exceeded any and all expectations – the show management had to scramble to move them to larger spaces because the attendance was so high – and it took all of half a year for VR to become its own stand-alone show as well with the GDC spinoff VRDC. Suffice it to say, the amount of attention being paid and resources being invested in VR is very significant, both for software and hardware developers.
    So for GDC 2017, I spent an afternoon on the expo floor dedicated to VR meetings, to see what new hardware was on display. While a common theme throughout is that everyone is still looking for the killer app of VR – both in terms of hardware design and the actual must-have game/application – it’s clear that there’s a lot of progress being made for future VR headsets, and that developers aren’t afraid to experiment in the workshop and show off those experiments to the public.
    Pico Neo CV – Stand-alone VR Headset with Inside-Out Tracking

    The first stop was Pico Interactive’s booth, where the company was showing off their Pico Neo CV headset. Pico is one of several companies developing headsets around Qualcomm’s Snapdragon SoCs, and VR has become a priority for Qualcomm. Already a major force in high-end smartphones, Qualcomm believes that the Snapdragon is a great fit for VR given the mix of portability and high performance required. As a result the company has gone all-in on VR, dedicating quite a bit of engineering and marketing resources towards helping their customers develop VR headsets and bring them to market.
    The Pico Neo CV, in turn, is more than just a stand-alone headset for the purposes of on-board processing; arguably Pico’s big claim to fame in the world of VR development is their inside-out position tracking, which is designed to do one better than current VR headsets. Whereas setups like the Samsung Gear VR and various Cardboard headsets rely primarily on inertial tracking, the Pico Neo CV can do true inside-out tracking, fixing itself relative to the outside world on an absolute basis.
    The advantage of absolute positional tracking is much greater accuracy, which in turn allows for greater freedom of movement than inertial tracking. You can actually do a lot by interpolating accelerometer and gyroscope data from inertial tracking, and as a result it’s generally satisfactory for rotation – think 360-degree videos and fixed-position gaming such as Gunjack – but it is ultimately limiting for interactive experiences, where errors add up. The drawback of absolute tracking is that it normally takes an external camera or beacon of some kind – such as the Vive Lighthouse system – which in the case of stand-alone, untethered headsets is antithetical to their portability.
    The solution then, as several companies like Pico are playing with, is inside-out tracking. In the case of the Pico Neo CV, the company combines the usual gyroscope and accelerometer data with a camera looking at the outside world, using computer vision processing to extract the user’s position relative to the rest of the world. Computer vision is a fairly straightforward solution to the problem – witness the number of self-driving cars and other projects using CV for similar purposes thanks to the explosion in deep learning – but it’s made all the more interesting on a headset given the processing requirements.
    In the case of the Pico Neo CV, while the company won’t be shipping the headset until later this year, they already have a prototype up and running, inside-out tracking and all. In my hands-on time with the headset, the positioning of the Neo seemed very accurate; the demo software always reacted to my head position as I felt it should across all six degrees of freedom, and pulling off the headset to check my actual position revealed that I was positioned where (and facing where) I should be. It’s an experience that in principle is no different than using external tracking, but then that’s the point of inside-out tracking: it is meant to be the same thing, but without the external gear.
    That said, like the first-generation of PC headsets, I suspect the Pico Neo CV is going to be a transitional product as the hardware further improves. The camera-based tracking system only updates at 20Hz, meaning there’s 50ms between position updates. Without getting deeper into the headset I’m not sure what the actual input lag is, but the low refresh rate is noticeable if you turn your head quickly. In my experience it’s not nauseating in any way, but like some of the other drawbacks of first-generation VR headsets, there’s clear room for improvement. The headset display itself operates at 90Hz, so it’s a matter of getting tracking operating at the same frequency.
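    One plausible way to bridge a 20Hz absolute fix and a 90Hz display is to dead-reckon from the IMU every frame and blend in the camera pose whenever it arrives, complementary-filter style. A toy 1-D Python sketch of that idea (my illustration of the general technique, not Pico's disclosed algorithm):

```python
import math

RENDER_HZ, CAMERA_HZ, BLEND = 90, 20, 0.3

def true_position(t):
    """Ground truth: the user sways side to side (meters)."""
    return 0.1 * math.sin(2 * math.pi * t)

def imu_velocity(t):
    """Derivative of the motion, plus a small drift bias (m/s)."""
    return 0.2 * math.pi * math.cos(2 * math.pi * t) + 0.02

position, camera_timer = true_position(0.0), 0.0
for frame in range(1, RENDER_HZ + 1):            # one second of frames
    t = frame / RENDER_HZ
    position += imu_velocity(t) / RENDER_HZ      # dead-reckon from the IMU
    camera_timer += 1 / RENDER_HZ
    if camera_timer >= 1 / CAMERA_HZ:            # optical fix ~every 4.5 frames
        camera_timer = 0.0
        # pull the drifting estimate toward the absolute camera fix
        position += BLEND * (true_position(t) - position)

print(f"error after 1 s: {abs(position - true_position(1.0)):.4f} m")
```

    The IMU keeps the estimate smooth between fixes, while each camera update cancels the accumulated drift; raising the optical rate toward 90Hz would shrink the window in which drift and fast head turns go uncorrected.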
    Part of the catch, I suspect, is processing power. The Pico Neo CV is based around a Snapdragon 820 SoC, which, although powerful by SoC standards, is now splitting its time between rendering VR and processing the additional tracking information. Future SoCs should go a long way towards helping with this problem.
    Looking at the rest of the headset, Pico has clearly set out to develop something better than the vast array of cellphone-powered VR experiences out there. Pico has combined their tracking gear and the 820 with a pair of 1.5K displays, so the total pixel count – and resulting DPI – is a lot higher than on a Cardboard or Daydream setup. Along with built-in audio, the Pico Neo CV has everything needed for stand-alone, mobile-caliber VR gaming.
    Pico hasn’t yet announced a precise launch date for the headset, but they expect to start selling it later this year. As one of the first serious efforts at a stand-alone Snapdragon-based headset, it should be interesting to see where these kinds of devices fall into the market, and just how much more Pico can improve the inside-out tracking before the headset’s launch.
    Tobii – VR Eye Tracking

    Going from the inside looking out, let’s talk about the inside looking even further inside. One of the technologies various companies have been investigating for second-generation VR headsets is eye tracking. Besides enabling a more immersive experience, eye tracking could also potentially change how VR rendering works by allowing foveated rendering. By using eye tracking to keep tabs on what direction a user is looking, foveated rendering would allow games to efficiently render in a non-uniform fashion, rendering at full quality only where a user is looking, and rendering at a lower level of quality outside of that focus area.
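    In code form, the core of foveated rendering boils down to choosing a shading rate per region from its angular distance to the tracked gaze point. A minimal Python sketch (the band edges and rates are illustrative, not taken from any shipping headset):

```python
def shading_rate(eccentricity_deg: float) -> float:
    """Fraction of full resolution to render at a given angular
    distance from the gaze point (illustrative bands)."""
    if eccentricity_deg < 5:      # fovea: full quality
        return 1.0
    if eccentricity_deg < 20:     # near periphery: half resolution
        return 0.5
    return 0.25                   # far periphery: quarter resolution

for angle in (2, 10, 35):
    print(f"{angle:>2} deg from gaze -> {shading_rate(angle):.2f}x resolution")
```

    The payoff is that most of the frame can be shaded at a fraction of full cost, but only if the gaze estimate is fast and accurate enough that the user never catches the low-detail periphery, which is exactly the problem Tobii is working on.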
    But to get there you first need to be able to accurately track users’ eyes, and that’s where Tobii comes in. The company, which focuses on eye tracking for gaming and other applications, has already made a name for itself with their eye-tracking cameras, which are available both stand-alone and integrated into some laptops and displays. The use of external eye tracking has proven a bit gimmicky, but the technology is sound, and VR stands to be a much more useful application.
    To that end, the company was at GDC showcasing a modified HTC Vive headset with their eye tracking technology installed. The company’s demo was primarily focused on how eye tracking can improve the gaming experience, both as an input method and as a way to add life to avatars, and true to their claims, it worked. The eye tracking implementation in the company’s modified headset was very rapid, to the point that it didn’t feel like it was operating any slower than the headset tracking. And while it took some practice to get used to – it’s a bit jarring at first that where you look actively matters – once I got used to it, it worked very well.
    But from a technical perspective, perhaps the most impressive part was just how well the company had integrated the eye tracking hardware into the headset itself. While the external cameras were by no means big to begin with, I was surprised just how easily the hardware fit into the prototype. Adding eye tracking did not make the headset feel significantly heavier, and the sensors easily fit into the already limited free space inside the headset. From a hardware perspective, this very much felt like a technology that is already at a point where it could be integrated into a commercial headset tomorrow.
    Consequently, if Tobii’s technology (or similar eye tracking tech) shows up in second-generation VR headsets, I would not be the least bit surprised. While I’m not sold on the gaming aspects of the tech – it’s neat, but not a must-have – it’s the kind of thing where I expect developers will need some time to play with it to figure out if it’s useful and what the best use cases are. Otherwise the big use case here is going to be foveated rendering, which is likely to prove critical for higher-resolution VR headsets. The latter is out of Tobii’s hands, but offering a good eye tracking experience is the first and most important part of making that happen.
    HTC Vive – Hands-On with the Deluxe Audio Strap & Tracker

    My final stop for the afternoon was HTC’s private demo room, where the company was showing off some new games and other software technology being developed for the Vive. We’re at least a year too early for second-generation headsets, so the company wasn’t showing off anything new in that respect, but they did have on-hand their new Deluxe Audio Strap and the Tracker device for third-party peripherals. Both of these devices have been previously announced, but this is the first time I’ve had a chance to actually use them.
    The Deluxe Audio Strap is an interesting device. Despite its plain-sounding name, it’s a lot more than just an audio solution for the headphone-free Vive. In adding earphones, HTC went and radically altered the entire strapping mechanism for the headset. As a result the Deluxe Audio Strap not only rectifies one of the competitive drawbacks of the Vive – it requires a pair of headphones/earbuds on top of everything else – but it also greatly improves the fit of the headset. The latter has always been of particular interest to me; the original Vive strap system just never fit my admittedly oversized head very well. So improving this would go a long way towards making the Vive more comfortable to wear over a long period of time.
    Coming from the original strap system, I found the difference rather pronounced. With the Deluxe Audio Strap installed, the Vive is not only easier to adjust, but feels a lot more secure as well. The former comes thanks to a small dial (a “sizing dial”) on the back of the harness, which replaces the Velcro straps along the sides of the headset. Now you can just turn the dial to adjust the fit, which is easy enough to do both while wearing the headset and with it off. Combined with some other general fitting tweaks HTC has made to the strap, it feels like the strap they should have had for the headset’s launch last year.
    Meanwhile, the new earphones are similarly impressive. Relative to the Rift, HTC has gone with something a little bigger and a little more versatile. The drivers HTC is using are larger than those in the Rift’s earphones and should give it a bit more kick in the bass, though that’s something that would need to be tested. The fit of the earphones is also very good; the ratchet mechanism keeps them pushed towards the ears, while it’s easy enough to flip one or both earphones out to hear the world around you (or in my case, the engineer giving the presentation). While I doubt most Vive owners will want to buy the new strap solely on the basis of audio since they already have headphones or another solution, combined with the new strap system it’s a very compelling offering.
    Also on display was the Vive Tracker. The external widget is designed to be used with the Vive’s Lighthouse system, allowing for Lighthouse tracking to be added to third party objects. The tracker itself does look a bit weird, owing to its need to match the pitted appearance of the Vive headset that the Lighthouse system is meant to work with, but it does its job well. Besides the obvious use case of third party controllers – which could prove interesting for developers since it’s just the Tracker and not the entire controller being tracked – HTC was also using it for more unusual applications such as attaching it to a camera to allow accurately superimposing recorded footage (i.e. unsuspecting editors) over the rendered game itself.
    The Deluxe Audio Strap is available immediately for developers and other commercial firms who are buying the Vive Business Edition. Otherwise larger-scale consumer sales will start a bit later this year; HTC is pricing it at $99.99 and pre-orders start on May 2nd, while HTC will begin shipping it in June. Meanwhile the Tracker will go on sale to developers on the 27th of this month, also for $99.99.
    Gallery: GDC 2017 Roundup: VR for All - Pico Neo CV, Tobii, & HTC




    More...

  10. RSS Bot FEED (#6790)

    Anandtech: MWC 2017: Netgear Nighthawk M1 Coming to Europe in Mid-2017

    Earlier this year Netgear introduced its Nighthawk M1 router, which is powered by Qualcomm’s X16 LTE modem and is the first Gigabit LTE router on the market. Right now the device is available on Telstra’s 4GX LTE network in Australia, but the router made a surprise appearance at MWC 2017 and will actually hit the market in Europe later this year. There is a catch, however: there will not be many Gigabit LTE deployments at first because of technical challenges.
    The Netgear Nighthawk M1 is based on Qualcomm’s Snapdragon X16 LTE modem (paired with Qualcomm’s WTR5975 RF transceiver), which uses 4×4 MIMO, three-carrier aggregation (3CA) and 256QAM modulation to download data at up to 1 Gbps (in select areas), as well as 64QAM and 2CA to upload data at up to 150 Mbps. The Nighthawk M1 is designed for those who need an ultra-fast mobile broadband connection but do not have, or do not want, an incoming physical data connection. The router is equipped with Qualcomm’s 2×2 802.11b/g/n/ac Wi-Fi solution, which can connect up to 20 devices simultaneously using the 2.4 GHz and 5 GHz bands concurrently. Generally speaking, the Nighthawk M1 is aimed at mobile workgroups that need a high-speed Internet connection where there is no broadband. In Australia there are areas where Telstra’s 4GX LTE network is available but regular broadband is not, so the device makes a lot of sense there.
    Netgear’s Nighthawk M1 router is clearly one of the company's flagship products. Nonetheless, it was still a bit surprising to see the device at MWC, given its current Australia-exclusive status. When asked about availability in Europe, a company representative said that the Nighthawk M1 is coming to Europe this summer and will be available from multiple operators, which means a number of European operators will be ready to deploy Gigabit LTE later this year. Netgear did not discuss which operators, which geographies or what pricing, because all of that will depend on the operators that bundle the Nighthawk M1 with their service packages.
    While more Gigabit LTE deployments are coming, do not expect them to be widespread on 4G networks in the next couple of years. To enable Gigabit LTE, devices and operators have to support 4×4 MIMO, carrier aggregation (CA) and 256QAM modulation. It is not particularly easy to enable 4×4 MIMO and 256QAM because of interference; in fact, not all networks today use even 64QAM. Moreover, operators need enough spectrum and backhaul bandwidth to carry all the data, so offering Gigabit LTE means upgrading infrastructure both at the base stations and in the backhaul. Some operators may be reluctant to upgrade because right now there are not many announced devices featuring the technology, and not all operators have enough customers who need it and are prepared to pay for routers like the Nighthawk M1. Despite this, wireless gigabit networks are coming: first with 4G/LTE in select areas, and then with 5G sometime from 2020 onwards.
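    For reference, the "Gigabit" figure falls out of simple arithmetic on those three ingredients. A back-of-envelope Python sketch (the ~75 Mbps per-layer baseline and the 4+4+2 layer split are a commonly cited simplification for this class of modem, not an official spec breakdown):

```python
# Rough decomposition of "Gigabit LTE" peak throughput.
per_layer_64qam = 75        # Mbps: one spatial layer, 20 MHz carrier, 64QAM
layers = 4 + 4 + 2          # 4x4 MIMO on two carriers, 2x2 on the third (3CA)
modulation_gain = 8 / 6     # 256QAM carries 8 bits/symbol vs 6 for 64QAM

peak = per_layer_64qam * layers * modulation_gain
print(f"~{peak:.0f} Mbps")  # ~1000 Mbps, i.e. "Gigabit LTE"
```

    Drop any one ingredient (fewer aggregated carriers, 2×2 instead of 4×4, or 64QAM instead of 256QAM) and the peak falls well short of a gigabit, which is why network-side support is the gating factor.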
    Even if there are not many Gigabit LTE deployments across Europe this year, the Netgear Nighthawk M1 still has enough advantages to attract customers seeking a high-end mobile router that can work for up to 24 hours on a charge (it comes with a 5040 mAh battery). In Australia the Nighthawk M1 is available for less than $300 from Telstra, but we know nothing about European pricing yet.




    More...
