
Thread: Anandtech News

  1. RSS Bot FEED (#5611)

    Anandtech: Hands On With the Huawei Honor 5X

    We’ve reviewed Huawei Honor devices before, but by and large they were designed to target China and similar markets. There were also a number of growing pains, as seen in our Huawei Honor 6 review. However, in the time since that review Huawei has done quite a bit of growing up when it comes to resolving some of their weaknesses and improving upon their strengths. Their Kirin SoCs started off with some notable implementation issues, but with the Kirin 950 we’ve seen a major leap in performance and power efficiency. To keep their momentum going, Huawei Honor is bringing their first phone to the US, the Honor 5X.
    Huawei Honor 5X
    SoC: Qualcomm Snapdragon 615 (4x Cortex A53 @ 1.5 GHz + 4x Cortex A53 @ 1.2 GHz)
    RAM: 2GB
    NAND: 16GB + microSD
    Display: 5.5" 1920x1080 IPS LCD
    Dimensions: 151.3mm x 76.3mm x 8.15mm, 158g
    Camera: 13MP rear-facing, f/2.0, 28mm equivalent, Sony IMX214; 5MP front-facing, f/2.4, 22mm equivalent, OmniVision OV5648
    Battery: 3000 mAh (11.4 Wh)
    OS: Android 5.1.1 with EMUI 3.1
    Connectivity: 802.11b/g/n (2.4 GHz only), Bluetooth 4.1, GPS/GNSS, Micro USB 2.0
    Network: 2G / 3G / 4G LTE Category 4
    The basic specs aren’t really going to be all that fascinating at this point as Snapdragon 615 is a known quantity. Huawei continues their trend of shipping odd WiFi configurations as this device only supports 2.4 GHz 802.11b/g/n WiFi. The rear camera is a rather well-understood Sony IMX214 sensor and the front camera sensor is a similarly common OmniVision OV5648 sensor.
    However, the Honor 5X actually manages to hit the right point for price and features. The display is a 5.5” 1080p LCD, and the phone has an aluminum unibody design. There’s also the usual dual SIM capability, along with a decently sized battery and the same FPC1020 fingerprint scanner used in the Ascend Mate7. At 200 USD, this has the potential to beat out the Moto G for best value smartphone in that price range.
    Subjectively, the in-hand feel and overall build quality are shockingly good for the price. The Ascend P8 Lite that we reviewed last year was pretty much par for the course when it came to materials and in-hand feel for a ~200 USD phone, so going from rather hard, cheap-feeling plastic to an aluminum unibody that is basically comparable to the HTC One M9 in feel is quite a leap in the course of less than a year. The comparison to the One M9 is rather apt here, as the phone has a brushed finish that can be seen, but not really felt in the hand.
    Unfortunately, the performance of the Honor 5X is a bit wanting. I suspect that Cortex A53 cores alone aren’t quite enough to make Android run perfectly smoothly: while some parts of the UI were perfectly fluid, in transitions such as opening and closing app folders I saw noticeable frame drops and similar issues.
    Casual use of the fingerprint scanner was quite impressive, though. The Honor 5X behaves pretty much identically to the Ascend Mate7 in that the scanner will automatically detect and read a fingerprint even when the screen is off, so with fingerprint unlock set up it’s possible to unlock the phone by simply placing a finger over the scanner and waiting for the phone to wake and unlock automatically.
    As previously mentioned, Huawei is selling the Honor 5X for 199.99 USD. It will be available for preorder starting January 6th, with general availability starting January 31st on HiHonor.com and Amazon. Although it would have been exciting to see something like Snapdragon 650 show up in this phone, at the price it’s going for it could be a viable option if Huawei has managed to nail down the details without show-stopping issues.


    More...

  2. RSS Bot FEED (#5612)

    Anandtech: ZOTAC to Expand Lineup of SSDs with PCIe Offerings

    ZOTAC is primarily known for its NVIDIA GeForce-based video cards, but in recent years the company has started to sell motherboards, small form-factor personal computers and various accessories. Last year ZOTAC introduced its entry-level solid-state drives to add another revenue stream. At the International CES 2016, ZOTAC announced its new-generation PCIe SSDs, which are expected to address higher-end market segments.
    The new solid-state drives from ZOTAC will be powered by Phison’s PS5007-E7 controller as well as multi-level cell (MLC) NAND flash memory produced by Toshiba. ZOTAC claims that its new SSDs will have sequential read performance of up to 2400 MB/s and sequential write performance of up to 1200 MB/s. The drives will come in a half-height, half-length PCI Express 3.0 x4 card form factor and will fully support the NVMe protocol. The first model in ZOTAC’s PCIe SSD lineup will feature 480 GB capacity and will be available sometime in February, according to the manufacturer.
    ZOTAC has not revealed many details about its new solid-state drives, but since they are based on the Phison PS5007-E7 controller, expect support for NVMe 1.2, error correction with 120-bit/2KB BCH code, NVMe L1 power sub-states, end-to-end data path protection, advanced global wear-leveling, an AES-256 engine and so on. The AIC form factor also means that the controller will be able to use all of its eight transfer channels, thus maximizing performance.
    Since the Phison PS5007-E7 controller was developed not only for gaming PCs, but also for enterprise and datacenter applications, it can enable SSDs with up to 350,000 4KB random read IOPS (input/output operations per second) and up to 250,000 random write IOPS. While consumer SSDs featuring the PS5007-E7 may not hit maximum IOPS performance, they will definitely be considerably faster than any previous-generation solid-state drives.
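    As a quick back-of-the-envelope (my arithmetic, not a figure from Phison or ZOTAC), the relationship between the random IOPS rating and raw throughput at a 4 KB transfer size is simply

    $$ 350{,}000 \times 4\,\text{KB} \approx 1.4\,\text{GB/s}, $$

    which lands in the same ballpark as the quoted sequential figures and illustrates how far PCIe/NVMe drives have pushed random I/O compared to SATA parts.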
    ZOTAC’s first-generation SSDs were arguably a business experiment for the company and its parent, PC Partner Group, which specializes in the production of graphics cards, motherboards and other similar products, but not storage devices. ZOTAC’s initial SSDs use a Serial ATA interface and deliver moderate levels of performance. The cautious approach makes a lot of sense. Nowadays end-users demand SSDs with maximum durability and reliability. ZOTAC has yet to become a well-known maker of solid-state drives, and if its products are not rock-solid, its brand will be harmed. As a result, the company decided to focus on maximum quality rather than on maximizing sequential reads and writes.
    With its new PCIe SSDs, ZOTAC plans to deliver rather extreme levels of performance. ZOTAC’s solid-state drives with PCI Express 3.0 x4 interface may not be as fast as Samsung’s 950 Pro (at least, on paper), but if the price and performance have the right balance, many end-users will gladly buy them.


    More...

  3. RSS Bot FEED (#5613)

    Anandtech: Razer Launches The Razer Blade Stealth Ultrabook And Razer Core At CES 2016

    Today we have some news that is kind of unexpected. Razer, the company known for gaming peripherals and gaming laptops such as the Razer Blade, has decided to enter the Ultrabook market with the launch of the Razer Blade Stealth. Not only is an Ultrabook not something expected from Razer, it is also priced very competitively, undercutting the competition.
    Razer did not cut any corners when designing the Stealth, either. Just like its more powerful and higher-priced siblings, it is built out of a CNC-milled aluminum chassis, which is a defining feature of Razer laptops. But despite the solid frame, the laptop is still only 0.52 inches thick and weighs in at just 2.75 lbs. On the styling front, it keeps the black finish of other Razer laptops, but also outdoes them with a full “Chroma” keyboard with individually lit RGB keys. I’ve been hoping they would do this ever since reviewing the Razer Blade, so it’s great to see the RGB keyboard come to the Stealth model.
    The 12.5-inch display comes in two options. The base model is a QHD (2560x1440) resolution, but you can also opt for a UHD (3840x2160) model with full Adobe RGB color gamut. I need to check in with Razer on how they are going to handle the wider color gamut, and will let you know after we get some hands-on time on the show floor.
    The Stealth, as an Ultrabook, is going to be powered by Ultrabook class components, which in this case is the Intel Core i7-6500U processor. This Skylake chip features two cores, hyperthreading, and a base/turbo frequency of 2.5 GHz / 3.1 GHz. I was hoping that Razer would also offer a model with Intel’s Iris GPU, but that won’t be the case, at least at launch. The only memory option is 8 GB of LPDDR3-1866, and storage options range from 128 GB to 256 GB of PCIe storage on the QHD model, and 256 GB to 512 GB on the UHD model. The battery life will need to be tested, but the laptop has a 45 Wh battery, so it’s not going to be class leading in that regard.
    For connectivity, the Stealth will have two USB 3.0 ports and a USB 3.1 Type-C connector with Thunderbolt 3 support. Thunderbolt 3 is a key component of the Stealth, thanks to the accessory that Razer is also launching.
    The Razer Core is a Thunderbolt 3 connected external GPU enclosure, which also acts as a docking station for the Stealth. Over a single cable, the laptop gets support for a 375W GPU, can drive an external display, and picks up all of the docking connections, including four USB 3.0 ports and Gigabit Ethernet.
    The Core features a built-in 500W power supply, and the GPU support is for any single card which is full-length and double-wide, which means pretty much any GPU out there. The Core also features two additional Chroma lighting zones so that you can tailor it to your liking.
    Razer has not yet announced any updates to the Razer Blade or Razer Blade Pro, but I would expect that both of these will also feature support for the Core when they do get their next refresh.
    The Core supports plug and play with validated graphics cards, without the need to reboot.
    The addition of the Razer Core brings back some of the gaming performance that Razer has been known for, although with a U series CPU it will be interesting to see what level of GPU is required to become CPU bound, especially with DX 12. If we can track down a review unit, we’ll try to sort that out.
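    For a rough sense of the bandwidth involved (generic interface math, not figures from Razer): Thunderbolt 3 carries up to 40 Gbps, and its PCIe tunnel behaves like a PCIe 3.0 x4 link, i.e.

    $$ 4 \times 8\,\text{GT/s} \times \tfrac{128}{130} \approx 31.5\,\text{Gb/s} \approx 3.9\,\text{GB/s}, $$

    versus roughly 15.8 GB/s for a desktop x16 slot, so the external card has about a quarter of the usual host bandwidth before the U-series CPU even enters the picture.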
    The Razer Blade Stealth will be on sale starting today, with a starting price of just $999. Considering the high resolution panel and Core i7, this undercuts most, if not all, of the Ultrabook competition on price. The top-end model with the UHD display and 512 GB of storage will be $1599.
    Source: Razer


    More...

  4. RSS Bot FEED (#5614)

    Anandtech: Razer Launches The Stargazer Webcam With Intel RealSense3D At CES 2016

    The webcam market may seem fairly pedestrian, but Razer is trying to kickstart it with a new webcam designed for the modern game streamer. Many of the major game streaming sites have begun the move to 60 FPS video, but the modern webcam is basically stuck at 30 FPS. Razer is directly targeting this market with the Stargazer webcam.
    The 60 FPS video capture can be done at 720p, and the camera also supports 1080p at 30 FPS. It also features an automatic noise-cancelling, dual-array microphone.
    The Stargazer is powered by the Intel RealSense SR300 camera, which means that it also brings 3D to the mix. This may sound like a waste, but it brings quite a few benefits. The first obvious one is Windows Hello support, for facial recognition in Windows 10.
    The part that Razer is most excited about, though, is the Dynamic Background Removal capability, which uses the 3D camera to filter out everything in the image except the person. Traditionally when streaming, the game occupies most of the screen with the player shown in a box in one of the corners, but with the 3D camera Razer can isolate just the gamer, eliminating the video box and leaving only the person. Achieving a similar result has generally required investing in an elaborate green screen, and the Stargazer delivers it for much less cost.
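    Neither Razer nor Intel has published the algorithm, but the core idea of depth-keyed background removal is easy to sketch. The function below is a hypothetical illustration (plain C, not the RealSense SDK): it keeps a color pixel only when the aligned depth sample is closer than a cutoff, and makes everything else transparent so the game shows through.

    [CODE]
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical depth-keyed background removal (illustrative, not Intel's or
     * Razer's code). depth_mm is a depth map aligned to the BGRA color frame,
     * with 0 meaning "no depth data". Pixels farther than cutoff_mm (or with no
     * data) get their alpha zeroed so they drop out of the streaming overlay. */
    void remove_background(const uint16_t *depth_mm, uint8_t *bgra,
                           size_t width, size_t height, uint16_t cutoff_mm)
    {
        for (size_t i = 0; i < width * height; ++i) {
            int keep = depth_mm[i] != 0 && depth_mm[i] < cutoff_mm;
            if (!keep)
                bgra[4 * i + 3] = 0;  /* zero alpha: pixel becomes transparent */
        }
    }
    [/CODE]

    In practice the depth edge would also be cleaned up (morphology, temporal smoothing) to avoid a flickering outline, but the depth threshold is what replaces the green screen.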
    On the other side of gaming, the Stargazer can be used to scan real objects into a digital world, for use as in-game assets, potentially speeding up development.
    Finally, the Stargazer supports gesture and facial recognition with up to 78 points on the face and 22 points on each hand. Developers can leverage this for in-game actions, and it is something that Intel is promoting with its RealSense camera system, so we’ll have to see if it gains traction with developers.
    It may be just a webcam, but as one of the first Windows Hello compatible devices launched, it already has a place with some people. The game streaming crowd will gain the bulk of the benefit from this, and that market is growing quite a bit.
    The Stargazer will be available starting in Q2 for $199.99 USD.
    Source: Razer



    More...

  5. RSS Bot FEED (#5615)

    Anandtech: Imagination Announces PowerVR Series7XT Plus Family - Rogue Gets Improved

    A regular sight at CES most years is a new PowerVR graphics announcement from the crew over at Imagination, and this year is no exception. Shortly before CES last year we were introduced to the company’s PowerVR Series7XT family, a significant iteration on their base Rogue architecture that added full support for the Android Extension Pack to their GPUs, along with targeted improvements to energy efficiency, overall graphics performance, and compute performance. Imagination also used Series7XT to lay the groundwork for larger designs containing more GPU clusters, giving the architecture the ability to scale up to a rather sizable 16 clusters.
    After modernizing Rogue’s graphics capabilities with Series7XT, for their follow-up Imagination is taking a slightly different path. This year they are turning their efforts towards compute, while also working on energy and memory efficiency on the side. To that end the company is using CES 2016 to announce the next iteration of the Rogue architecture, PowerVR Series7XT Plus.
    With Series7XT Plus, Imagination is focusing first and foremost on improving Rogue’s compute performance and compute capabilities. To accomplish this they are making two important changes to the Rogue architecture. First and foremost, Imagination is upgrading Rogue’s integer ALUs to more efficiently handle smaller integer formats.
    Though Imagination hasn’t drawn out the integer ALUs in previous generations’ architecture diagrams, the architecture has always contained INT32 ALUs. What has changed for Series7XT Plus, then, is how those ALUs handle smaller INT16 and INT8 formats. Previously those formats would be run through the integer ALUs as INT32s, which though practical meant that there were few performance gains from using smaller integers, since they weren’t really processed as smaller numbers. Series7XT Plus significantly changes this: the integer ALUs can now combine operations into a single operation based on their width. One ALU can now process 1 INT32, 2 INT16s, or 4 INT8s.
    Imagination’s press release doesn’t offer a ton of detail on how they are doing this, however I suspect that they have gone with the traditional (and easiest) method, which is to simply bundle like operations. An example would be bundling 4 INT8 adds into what is essentially one large INT32 addition, an action that requires minimal additional work from the ALU. If this is the case, then the actual performance gains from using and combining smaller operations will depend on how often like operations can be bundled, though since we’re talking about parallel computing, that should happen quite often.
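    Imagination doesn’t spell out the mechanism, but the general idea of retiring several narrow integer operations through one wide ALU can be illustrated with the classic SWAR ("SIMD within a register") trick below. This is purely an illustration of the bundling concept, not Imagination’s hardware: four independent, wrapping 8-bit additions are performed with a single 32-bit add by masking the top bit of each byte lane so carries cannot spill into the neighbouring lane.

    [CODE]
    #include <stdint.h>

    /* Illustrative SWAR add: four independent 8-bit additions packed into one
     * 32-bit operation (wrap-around per lane, no cross-lane carries). */
    static uint32_t add_u8x4(uint32_t a, uint32_t b)
    {
        uint32_t low = (a & 0x7F7F7F7Fu) + (b & 0x7F7F7F7Fu); /* add low 7 bits of each lane */
        uint32_t top = (a ^ b) & 0x80808080u;                 /* recover each lane's top bit */
        return low ^ top;
    }
    [/CODE]

    Dedicated lane logic in a GPU ALU would do this without the masking dance, but the net effect is the same: one 32-bit datapath retiring four INT8 operations per clock.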
    From an architecture perspective this is an interesting and unexpected departure from Imagination’s usual design. One of the traditional differences between PowerVR and competitor ARM’s Mali designs is that Imagination went with dedicated FP16 and FP32 ALUs, whereas ARM would combine operations to fill out a 128-bit SIMD. The dedicated ALU approach has traditionally allowed for greater power efficiency (your ALUs are simpler), but it also means you can end up with ALUs going unused. So for Imagination to go this route for integers is surprising, though I suspect the fact that integer ALUs are simpler to begin with has something to do with it.
    As for why Imagination would care about integer performance, this brings us back to compute workloads. Rather like graphics, not all compute workloads require full INT32/FP32 precision, with computer vision being the textbook example. Consequently, by improving their handling of lower precision integers, Imagination can boost their performance in these workloads. For a very low precision workload making heavy use of INT8s, the performance gains can be up to 4x compared to using INT32s on Series7XT. Pragmatically speaking I’m not sure how much computer vision work phone SoCs will actually be subjected to – it’s still a field looking for its killer apps – but at the same time from a hardware standpoint I expect that this was one of the easier changes Imagination could make, so there’s little reason not to do it. Though it should also be noted that Rogue has far fewer integer ALUs than FP ALUs - there is just 1 integer pipeline per USC as opposed to 16 floating point pipelines - so even though smaller integers are now faster, in most cases floating point should be faster still.
    Moving on, along with augmenting their integer ALUs, Imagination is also bringing OpenCL 2.0 support to their GPUs for the first time with Series7XT Plus. Previous PowerVR parts were only OpenCL 1.2 capable, so for Imagination 2.0 support is a big step up, and one that required numerous small changes to various areas of the Rogue architecture to support 2.0’s newer features.
    We’ve already covered OpenCL 2.0 in depth before, so I won’t go too deep here, but for Imagination the jump to OpenCL 2.0 will bring them several benefits. The biggest change here is that OpenCL 2.0 adds support for shared virtual memory (and pointers) between CPU and GPU, which is the cornerstone of heterogeneous computing. Imagination of course also develops the MIPS architecture, so they now have a very straightforward path towards offering customers a complete heterogeneous computing environment if they need one. Otherwise from a performance perspective, OpenCL 2.0’s dynamic parallelism support should improve compute performance in certain scenarios by allowing compute kernels to directly launch other compute kernels. This ultimately makes Imagination just the second mobile SoC vendor to announce support for OpenCL 2.0, behind Qualcomm and the Adreno 500 series.
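    As a rough sketch of what shared virtual memory buys a developer, the generic OpenCL 2.0 host code below (not PowerVR-specific, error handling omitted, and the kernel is a trivial stand-in) allocates memory with clSVMAlloc and hands the kernel the raw pointer, so no cl_mem buffer objects or explicit copies are needed.

    [CODE]
    #define CL_TARGET_OPENCL_VERSION 200
    #include <CL/cl.h>
    #include <stdio.h>

    int main(void)
    {
        cl_platform_id plat; cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, NULL, NULL);

        const char *src =
            "__kernel void square(__global int *data) {"
            "    size_t i = get_global_id(0);"
            "    data[i] = data[i] * data[i];"
            "}";
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "square", NULL);

        size_t n = 1024;
        /* One allocation visible to host and device at the same address. */
        int *data = (int *)clSVMAlloc(ctx, CL_MEM_READ_WRITE, n * sizeof(int), 0);

        /* Coarse-grained SVM: map before the host touches the memory. */
        clEnqueueSVMMap(q, CL_TRUE, CL_MAP_WRITE, data, n * sizeof(int), 0, NULL, NULL);
        for (size_t i = 0; i < n; ++i) data[i] = (int)i;
        clEnqueueSVMUnmap(q, data, 0, NULL, NULL);

        clSetKernelArgSVMPointer(k, 0, data);  /* pass the raw pointer, no cl_mem */
        clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);

        clEnqueueSVMMap(q, CL_TRUE, CL_MAP_READ, data, n * sizeof(int), 0, NULL, NULL);
        printf("data[10] = %d\n", data[10]);   /* expect 100 */
        clEnqueueSVMUnmap(q, data, 0, NULL, NULL);
        clFinish(q);

        clSVMFree(ctx, data);
        return 0;
    }
    [/CODE]

    With OpenCL 1.2 the same data would have to live in a cl_mem buffer and be explicitly written and read (or mapped) around every kernel launch; with fine-grained SVM even the map/unmap calls go away.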
    Aside from compute improvements, for Series7XT Plus Imagination has also made some smaller general improvements to Rogue to further improve power efficiency. Of particular note here is the Image Processing Data Master, a new command processor specifically for 2D workloads. By routing 2D operations through this simpler command processor, Imagination can save power by not firing up the more complex pixel/vertex data masters, making this another example of how mobile GPUs have slowly been adding more dedicated hardware, as power matters more than the die size cost. Meanwhile Imagination’s press release also notes that they have made some memory system changes, including doubling the memory burst size to match newer fabrics and components (presumably an optimization for DDR4), and tweaking the caches and their respective sizes to reduce off-chip memory bandwidth needs by 10% or so.
    Overall these efficiency changes don’t appear to be as extensive as what we saw with Series7XT – and Imagination isn’t treating them as nearly as big of a deal – so the jump from Series7XT to Series7XT Plus shouldn’t be as great as what came before. Series7XT Plus in that regard is definitely a more incremental upgrade of Rogue, with Imagination focusing on improving a few specific use cases over the last year.
    PowerVR GPU Comparison (Series7XT Plus | Series7XT | Series6XT)
    Clusters: 2 - 16 | 2 - 16 | 2 - 8
    FP32 FLOPS/Clock: 128 - 1024 | 128 - 1024 | 128 - 512
    FP16 Ratio: 2:1 | 2:1 | 2:1
    INT32 OPS/Clock: 8 - 64 | 8 - 64 | 8 - 32?
    INT8 Ratio: 4:1 | 1:1 | 1:1
    Pixels/Clock (ROPs): 4 - 32 | 4 - 32 | 4 - 16
    Texels/Clock: 4 - 32 | 4 - 32 | 4 - 16
    OpenGL ES: 3.2 | 3.2 | 3.1
    Android Extension Pack / Tessellation: Yes | Yes | Optional
    OpenCL: 2.0 | 1.2 EB (base), 1.2 FP (optional) | 1.2 EB
    Architecture: Rogue | Rogue | Rogue
    Finally, along with announcing the overarching Series7XT Plus family and its architecture, Imagination is also announcing two initial GPU designs for this family: GT7200 Plus and GT7400 Plus. As alluded to by their names, these are Series7XT Plus versions of the existing two-cluster GT7200 and four-cluster GT7400 designs. That Imagination is only announcing smartphone designs is a bit odd – both of these designs are smaller than the GT7600 used in number-one customer Apple’s A9 smartphone SoC – though as Apple is the only customer using such a large design in a phone, for Imagination’s other customers these designs are likely more appropriate.
    In any case, while Imagination does not formally announce when to expect its IP to show up in retail products, if history is any indicator we should be seeing Series7XT Plus designs by the end of this year and leading into 2017.


    More...

  6. RSS Bot FEED (#5616)

    Anandtech: Oculus VR Reveals Retail Price of Its Virtual Reality Headset: $599

    Oculus VR on Wednesday revealed the price of its Oculus Rift virtual reality headset as well as its launch date. The price of the VR hardware is considerably higher than gamers and industry analysts expected. The developer claims that the high price is a result of high costs and the use of custom hardware. However, such a price point may slow down adoption of virtual reality technologies by the masses.
    The Oculus Rift bundle includes the VR headset, an Xbox One gamepad, a sensor, the Oculus Remote controller as well as the EVE: Valkyrie and Lucky's Tale VR games. The initial bundle will not include the Oculus Touch controllers, which were recently delayed to the second half of the year. The Oculus Rift virtual reality headset is available for pre-order for $599 on the company’s website and will ship starting March 28, 2016, to 20 countries. Select retailers will also sell Oculus Rift hardware in April. In addition, makers of gaming PCs plan to offer Oculus Ready PCs with the headset next month starting at $1499.
    Back in early October 2015, Palmer Luckey, the founder of Oculus VR, said in an interview that the price of one Oculus Rift headset was in the “$350 ballpark”, but that it was “going to cost more than that”. As it turns out, the virtual reality head mounted display (HMD) costs nearly twice that. The $599 price point is yet another indicator that first-generation VR headsets are expensive to make in general. However, that price is too high for the mass market and for many gamers, believes Jon Peddie, the head of Jon Peddie Research, which tracks sales of graphics adapters and PC gaming hardware.
    A Lot of Custom Hardware

    While the virtual reality HMD is available for pre-order now, Oculus VR still has to confirm its final technical specifications. Based on what the company revealed about six months ago, the Oculus Rift uses two custom AMOLED panels (one per eye) with a combined 2160×1200 resolution (1080×1200 per eye) and a 90 Hz refresh rate. The AMOLED displays were architected for low persistence: they display each image for about 2 ms in a bid to minimize delays and avoid effects like motion blur, which can cause nausea. The headset also features specially designed adjustable lenses to enable a wide field of view. Each headset has integrated headphones and a microphone. In addition, the Oculus Rift sports various sensors, including the company’s own Constellation system based on infrared sensors, which tracks the position of the user’s head.
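    For context, a quick back-of-the-envelope on the raw pixel rate the panels demand (my arithmetic, not an official Oculus figure):

    $$ 2160 \times 1200 \times 90\,\text{Hz} \approx 2.3 \times 10^{8}\ \text{pixels/s}, $$

    versus roughly $1920 \times 1080 \times 60 \approx 1.2 \times 10^{8}$ pixels/s for an ordinary 1080p60 monitor, i.e. nearly twice the fill rate before any VR-specific rendering overhead is taken into account.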
    To connect to a PC, the Oculus Rift and the devices that accompany it (gamepad, sensor, remote, etc.) use one HDMI 1.3/1.4 connection, three USB 3.0 connections and one USB 2.0 connection.
    The Oculus Rift virtual reality headset uses a lot of custom components that were designed specifically for this device. For example, the low-persistence AMOLED display panels were co-developed by Oculus and Samsung Electronics. Oculus VR claims that they wanted to make a device that would offer the best virtual reality experience possible today, which is why they tried to avoid any trade-offs or compromises. Due to extensive usage of parts that are not mass-produced today, the cost of each Oculus Rift should be rather high, which is one of the reasons why the headset is priced at $599.
    High-End PC Needed

    Since the Oculus Rift should run games at 2160×1200 resolution at 90 Hz with minimal latency, it requires a rather powerful personal computer to offer a comfortable experience. Oculus VR recommends a PC with a quad-core Intel Core i5-4590 microprocessor, an AMD Radeon R9 290 or NVIDIA GeForce GTX 970 graphics adapter, as well as 8GB of RAM. The company admits that the more powerful your system is, the better the experience with the Oculus Rift will be.
    Developers of graphics processing units have implied multiple times that for the best VR experience a dual-GPU graphics sub-system is required today. For example, AMD plans to align the release of its new dual-chip Fiji video card with the availability of VR headsets in the second quarter. In a dual-GPU graphics sub-system, each graphics chip renders its own part of the scene for one eye. Such an approach doubles performance and lowers latency. However, two GPUs also require a more powerful central processing unit as well as a high-end power supply unit.
    For makers of computer hardware, the launch of the first VR headset for gamers is a chance to improve sales of their higher-end products. Not only manufacturers of video cards and microprocessors can benefit from the availability of the Oculus Rift; producers of RAM, solid-state drives and motherboards can also take advantage of the headset as enthusiasts begin to build new systems. Unfortunately, the significant investment in hardware may slow down adoption of virtual reality HMDs by both gamers and the general public.
    Oculus VR: 100+ Virtual Reality Games to Be Available in 2016

    Oculus VR claims that more than 100 games designed for virtual reality and compatible with the Rift are set to be available by the end of 2016, including “dozens of full-length AAA” games. The company does not reveal a lot of names, but in addition to the titles bundled with the VR headset, the firm mentions Rockband VR by Harmonix, Edge of Nowhere by Insomniac, and The Climb by Crytek.
    While over a hundred titles that support VR is a lot, only a handful of them will actually attract users to the platform. Since $599 is a significant investment for many gamers, there need to be several compelling titles that not only demonstrate the technology itself, but also make people want to play.
    A Lot of Excitement

    There is a lot of excitement about virtual reality technologies not only among gamers, but also among developers of hardware and software. While the technology itself has a lot of potential for video games and beyond, the very first Oculus Rift headset is designed primarily for games. The price of the HMD is high for many gamers, but for general users it is prohibitively expensive. Therefore, sales of the device will likely be rather limited. In fact, even Facebook, the owner of Oculus VR, does not expect to sell a lot of VR headsets this year.
    Sales of enthusiast-class graphics cards, which cost $399 and higher, total approximately three million units a year, according to Jon Peddie Research. There are many PC gamers nowadays, but only a fraction of them invest thousands of dollars in hardware. Various analysts make different predictions about sales of the first-generation VR gear; some are optimistic and some are pessimistic. For example, according to a report released by Juniper Research several months ago, cumulative sales of VR headsets in their first year of availability (i.e., 2016) will be approximately three million units. There are three major VR devices to be released this year: the Oculus Rift, the Vive from HTC and the PlayStation VR from Sony. It is highly likely that the majority of hardcore enthusiast gamers will buy only one of them. Juniper predicts that cumulative sales of VR headsets will hit around 30 million units by 2020 as hardware and software evolve.
    It remains to be seen how many virtual reality head-mounted displays Oculus VR will sell this year. Palmer Luckey said in an interview that the first consumer version of the Oculus Rift was developed to offer a great experience and to show the potential of the technology to the world. Hopefully, it will deliver on that promise.


    More...

  7. RSS Bot FEED (#5617)

    Anandtech: Revisiting Keyssa: Commercial Availability, Products in Q1 2016

    While we talked about Keyssa at CES last year, details were rather sparse as the technology was still in the early stages of getting off the ground. However, this year Keyssa’s connector technology is now commercially available. Based upon discussions with those at Keyssa, products with this new technology could ship as early as Q1 2016.
    For those that haven’t seen Keyssa in action before, it’s hard to really understand the potential of this technology. At a high level, it’s basically like NFC in the sense that this is very short range wireless, with a range of roughly an inch or a few centimeters before the signal disappears completely. However, within that range you get 6 Gbps of bandwidth at relatively low power compared to something like 802.11ad/WiGig. Unlike 802.11ad WiFi, the connector and chip needed to enable this technology are almost absurdly tiny, as the chip is no more than a few square millimeters. This is purely a physical layer technology, which means that at the operating system level a Keyssa connector can appear to be USB, DisplayPort, HDMI, SATA, PCIe, or pretty much any point-to-point digital connection protocol you can name today.
    As a result, Keyssa has the potential to completely do away with physical data ports in devices. Probably the most obvious example of this would be 2-in-1 hybrid devices like the Surface Book, which in theory could completely do away with all of the wired connections that introduce additional engineering challenges when designing a device like the Surface Book.
    Keyssa has also discussed the potential to replace flex cables internally in smartphones and other devices, which could reduce board area and/or z-height along with simplifying design and reducing cost as flex cables would no longer need to be laid out by hand.
    The connector can also be combined with simply shaped plastic, such as tubes, to introduce directionality and make wire-like connections over a distance without the need for actual wires or physical connectors.
    Overall, Keyssa shows great potential and judging by the discussions I’ve had there’s a significant amount of interest from OEMs and ODMs for this technology, with hints that devices with this technology are already in development. It’s hard to say what the full potential of this technology is, but it’s definitely going to be interesting to see how this develops.


    More...

  8. RSS Bot FEED (#5618)

    Anandtech: Ambarella CES 2016 Tour

    Yesterday Josh and I met with Ambarella and went on a tour of their exhibits. The main topic was their new line of SoCs, along with the various products and projects that the video encoding and decoding capabilities of these SoCs enable.
    The high-end SoC in Ambarella's line of chips for cameras is the H2. H2 is built on Samsung's 14nm process, incorporates a quad-core 1.2GHz Cortex A53 cluster, and is capable of encoding 4K HEVC video at 60fps, or 4K AVC video at 120fps, the latter of which makes it capable of 4K slow-motion video by playing back the 120fps footage at 30fps. H2 also includes support for capturing video with 10-bit color, as well as support for HDR, which has recently been integrated into the Blu-ray and UHD standards.
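    As a quick back-of-the-envelope (my arithmetic, not Ambarella's figures), the 4Kp120 mode corresponds to an encoder throughput of roughly

    $$ 3840 \times 2160 \times 120 \approx 1.0 \times 10^{9}\ \text{pixels/s}, $$

    and playing that footage back at 30fps gives a $120 / 30 = 4\times$ slow-motion effect.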
    The next SoC in Ambarella's line is the H12. H12 isn't shown in the image above, but it's capable of encoding 4Kp30 video using AVC or HEVC. It uses a single 1GHz Cortex A9 core and is built on a 28nm process.
    The last two SoCs are the A9SE and A120. The A120 is an entry-level chip, while the A9SE has some more advanced functionality but is intended for devices priced lower than those incorporating H2. The A9SE offers 4Kp30 support, and can do 1080p60 video with electronic image stabilization.
    One of the demos that Ambarella showed was an example of their electronic image stabilization for 4K video. Part of the drive behind this is the fact that stabilization on drones has had to be implemented using a mechanical system that shifts the camera along each axis to keep the sensor in the same position. This type of system increases the size, mass, and cost of the drone, and so it's obviously something that drone makers would be keen to eliminate in order to allow for reduced prices and improved battery life. Above you can see a short video which compares two real time video feeds with EIS on and off. As you can see, the difference is dramatic, and the level of stabilization that can be done by the SoC is extremely impressive.
    Another exhibit showcased the ability to record 4Kp120 video. This is the first time that I’ve seen any 4K footage recorded at a high enough frame rate to slow it down an appreciable amount.
    Several of Ambarella’s exhibits related to technology that will be used in self-driving cars. Some of this builds on demos that were shown at last year’s CES. The demo that I found most interesting is the electronic mirror. Essentially this is a mirror that integrates a display streaming footage from a rear-mounted camera on the car. The reasons for using an electronic mirror include a wider field of view, no obstruction from passengers in the rear seats, and better visibility at night thanks to the HDR processing the SoC can perform to make the car behind you visible without the headlights becoming overpoweringly bright. It’s important to note that the mirror can act as a normal mirror in conditions where the camera is not necessary.

    Another car-related demo from Ambarella involved mapping the environment around a vehicle using cameras mounted on the various sides. This isn’t exactly a new concept, but it does tie in with their new SoCs. Some things demoed included environment mapping for self-driving cars, and using the cameras to view an environment in order to implement features like automatic parking.
    The last demo that I found quite interesting demonstrated the image de-warping capabilities of the H2 and H12 SoCs. The demonstration gave the example of a fisheye camera being used in a security camera mounted on a door, with the de-warping being used to put the image into a state that is easy to view.
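    Ambarella hasn’t detailed its implementation, but the basic geometry of de-warping is straightforward to sketch. The function below is purely illustrative (grayscale, nearest-neighbour sampling, and it assumes the common equidistant fisheye model r = f*theta rather than whatever lens model Ambarella actually uses): for every pixel of the rectilinear output it builds a viewing ray, converts that ray to an angle from the optical axis, and maps the angle back to a radius in the fisheye image.

    [CODE]
    #include <math.h>
    #include <stdint.h>

    /* Illustrative fisheye-to-rectilinear de-warp (not Ambarella's algorithm).
     * fish: equidistant fisheye image (fw x fh), f_fish: fisheye scale in
     * pixels per radian. out: rectilinear output (ow x oh), focal_out: pinhole
     * focal length of the output view in pixels. */
    void dewarp_fisheye(const uint8_t *fish, int fw, int fh, double f_fish,
                        uint8_t *out, int ow, int oh, double focal_out)
    {
        double cx = fw / 2.0, cy = fh / 2.0;
        for (int y = 0; y < oh; ++y) {
            for (int x = 0; x < ow; ++x) {
                double vx = x - ow / 2.0;   /* viewing ray through the output pixel */
                double vy = y - oh / 2.0;
                double vz = focal_out;
                double theta = atan2(sqrt(vx * vx + vy * vy), vz); /* angle from axis */
                double r = f_fish * theta;                          /* equidistant model */
                double phi = atan2(vy, vx);
                int sx = (int)(cx + r * cos(phi));
                int sy = (int)(cy + r * sin(phi));
                out[y * ow + x] = (sx >= 0 && sx < fw && sy >= 0 && sy < fh)
                                      ? fish[sy * fw + sx] : 0;
            }
        }
    }
    [/CODE]

    A hardware implementation would precompute this mapping into a warp table and sample with proper interpolation, but the projection math is the same.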
    As far as video encode and decode goes, the tech being shown off by Ambarella definitely impresses. I haven’t seen 4Kp120 recording in anything else that is consumer-focused, and the push for improved 4Kp60 HEVC encoding with 10-bit color and HDR support is something that will be necessary as new standards for UltraHD video are adopted.



    More...

  9. RSS Bot FEED (#5619)

    Anandtech: Dell Demonstrates 30-inch 4K OLED Display

    Dell does not produce its own display panels, but when it comes to unique “world’s first” monitors, it is sometimes years ahead of all of its rivals. At the International CES 2016, Dell introduced its UltraSharp 30-inch OLED display, the company’s first monitor to use an organic light-emitting diode panel. The product is designed for professionals and carries a rather extreme price tag, but this is going to be a dream display for years to come.
    The Dell UltraSharp UP3017Q is a 30-inch display with a 3840×2160 resolution, a 0.1 ms response time and an unspecified (but presumably very high) refresh rate. The monitor can reproduce 1.07 billion colors, and it covers 100% of the Adobe RGB color space as well as 97.8% of the DCI-P3 color space (used for digital movie projection by the U.S. movie industry and expected to be adopted in televisions and home cinema), according to Dell. Only a few professional displays nowadays cover 100% of Adobe RGB. The manufacturer declares a 400,000:1 dynamic contrast ratio, but admits the value is only that low because its testing equipment won't measure higher.
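    For reference, the 1.07 billion color figure corresponds to 10 bits per channel:

    $$ 2^{10} \times 2^{10} \times 2^{10} = 2^{30} \approx 1.07 \times 10^{9}\ \text{colors}, $$

    versus the roughly 16.7 million ($2^{24}$) colors of an 8-bit-per-channel panel.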
    The UltraSharp UP3017Q ultra-high-definition display has very narrow bezels; the monitor itself is thin, but not remarkably thin like OLED TVs, possibly because it features an internal power supply unit as well as complex logic inside. The monitor features a mini DisplayPort (mDP) connector, an HDMI port, as well as a USB Type-C port, which can be used for video and data connectivity as well as for power delivery (the monitor can be powered using a Type-C cable, or deliver power to another device).
    The emissive electroluminescent layer in an organic light-emitting diode is made of an organic compound that emits light in response to an electric current. The organic semiconductor layer is situated between two electrodes and does not require a backlight. As a result, it can display truly deep black levels, unlike liquid crystal display (LCD) panels, which use various kinds of backlighting. Besides, since the emissive electroluminescent layer is very thin and can take different shapes, it is possible to build ultra-thin and even curved monitors and TVs using OLEDs.
    While OLED technology can deliver deep blacks, a high contrast ratio and exceptional colors, it is not free of drawbacks. The organic layer degrades over a prolonged period of time, and colors can shift. To maximize the lifespan of the OLED panel inside the UltraSharp UP3017Q, Dell integrated a presence detector into the front panel of the display, which switches the monitor off when nobody is using it. Another disadvantage of OLEDs is the possibility of static image burn-in. To reduce the chance of burn-in, the UP3017Q has a special pixel-shifting technology.
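    Dell hasn’t described how its pixel shifting works, but as a generic illustration of the technique (my sketch, not Dell’s firmware), a display controller can periodically nudge the whole image by a pixel along a small repeating path so that static content never drives exactly the same subpixels indefinitely:

    [CODE]
    #include <stdint.h>

    /* Illustrative pixel-orbit schedule (not Dell's algorithm). Every period_ms
     * (assumed > 0) the frame offset advances one step around a small path
     * within a 3x3 neighbourhood of the nominal position. */
    void pixel_orbit_offset(uint64_t elapsed_ms, uint32_t period_ms, int *dx, int *dy)
    {
        static const int path[8][2] = {
            { 0, 0}, { 1, 0}, { 1, 1}, { 0, 1},
            {-1, 1}, {-1, 0}, {-1,-1}, { 0,-1}
        };
        uint64_t step = (elapsed_ms / period_ms) % 8;
        *dx = path[step][0];
        *dy = path[step][1];
    }
    [/CODE]

    The shift is small and slow enough to be invisible in normal use while still spreading wear across neighbouring subpixels.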
    The Dell UltraSharp 30 OLED monitor will cost a whopping $4,999 when it becomes available on March 31, 2016, in the United States. At this point the display is only aimed at professionals who work in color-critical environments such as graphic arts and photography. However, due to its exceptional colors and contrast as well as ultra-fast response time, the UltraSharp UP3017Q will be a dream display for gamers, prosumers and other users who value quality.
    OLED panels are considerably more expensive to produce than modern LCD panels, partly because of lower yields. Last year an executive from LG Electronics said that yields of OLED panels had reached 80% and would continue to grow. At the International CES 2016, Kwon Bong-suk, the head of LG’s TV business, said that the company had cut prices of OLED TVs in the U.S. by 45% late in 2015. As a result, LG now expects sales of OLED televisions to triple this year. Price reduction of OLED TVs indicates that production costs of organic light-emitting diode panels are going down. Perhaps, over time, the Dell UltraSharp UP3017Q will also become more affordable, or Dell will release an OLED display for a wider audience.


    More...

  10. RSS Bot FEED (#5620)

    Anandtech: Zeiss Smart Optics: Discreet Smart Glasses

    With Google Glass, one of the major barriers to adoption was just how glaringly obvious it was that you were wearing Google Glass. The display was a field sequential color LCoS with a simple projection system to make the display visible at a short distance away from the eye. Unfortunately, the problem with this system was that the display was completely obvious and wasn’t really integrated into traditional thin lens glasses. It was also pretty obvious when the display was active, as you could see light coming out of the LCoS array without getting uncomfortably close to the person wearing Glass.
    Some of the other systems I’ve seen for projecting a display for smart glasses have also been pretty obvious to spot, such as any of ODG’s smart glasses, although those aren’t really designed to be subtle in any way as they try to pack a full tablet’s worth of hardware into something head-worn. Sony’s SmartEyeGlass gets closer to something subtle, but it’s still glaringly obvious that you’re wearing something odd.
    Zeiss identified this as an issue, and in response created an internal team to try to make an optical system that resolves all of these issues. Their solution is what they’re now calling Smart Optics. This optical system takes a display mounted at the edge of the lens and projects it directly into the eye from an arbitrary position on the lens, with an arbitrary focus that can either place the displayed image a short distance away from the eye (~2m) or at infinity to create a true HUD.
    In essence, this optical system relies upon total internal reflection and a Fresnel structure to transmit the light from the display through the lens and into the eye. A complex prism design reflects the light from the display at the edge of the lens into the lens itself, where the Fresnel structure then reflects the light out of the lens and into the eye. The Fresnel structure is index-matched with the lens itself, which makes it almost invisible to the eye unless you have the right lighting to highlight the structure.
    The entire design is made from injection-molded polycarbonate, which means that it’s capable of being mass-produced in a method similar to most current glasses. Based on discussions with those that worked on this project, the prism in particular was especially difficult to create as its shape is complex and voids and other defects would appear in the polycarbonate as it cooled. Zeiss also emphasized that their design was covered with over 250 patents to drive home the difficulty of making this optical system work.
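    To put a number on the total internal reflection that makes this work (a back-of-the-envelope using a typical refractive index for polycarbonate, n ≈ 1.59; Zeiss hasn’t published the actual material data): light travelling inside the lens stays trapped whenever it meets the surface at more than the critical angle

    $$ \theta_c = \arcsin\!\left(\frac{n_{\text{air}}}{n_{\text{lens}}}\right) = \arcsin\!\left(\frac{1}{1.59}\right) \approx 39^\circ, $$

    so the lens itself can act as a light guide until the index-matched Fresnel structure deliberately redirects the light out toward the eye.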
    Zeiss showed off an early prototype, which even at this stage was impressive: there was no visible translucency that could occlude vision, and the projection worked flawlessly. Unfortunately, as the design is meant to be made with a prescription for those that need one, I couldn’t quite get the full experience, since the corrective lenses they had for the prototype weren’t quite strong enough for my eyes. Still, their rapid prototyping rig worked well and showed acceptable resolution and luminance.
    I wasn’t really able to get much in the way of details regarding whether any devices with this optical system are imminent, but it sounds like Zeiss is working with partners that can put their optics to use. Based upon a lot of the discussions I’ve had with various people working on wearables, it sounded like smart glasses were at least 5-10 years out, but with technologies like this I wouldn’t be too surprised if smart glasses using this optical system start to reach mass consumer adoption by the time the 7nm node arrives.


    More...
