
Thread: Anandtech News

  1. RSS Bot FEED
    #4791

    Anandtech: Qualcomm @ MWC 2015: Cat 11 LTE, Cat 6 Dual-Sim LTE, & LTE/Wi-Fi Link Aggregation

    Not to be outdone by Qualcomm’s SoC group, Qualcomm’s communication groups are busy at MWC 2015 as well. Though Qualcomm Technologies and Qualcomm Atheros are not announcing any major new products at this moment, the two of them are on the show floor to demonstrate the status of their various LTE initiatives that we should see in upcoming and future products, in conjunction with infrastructure partner Ericsson.
    First and foremost, Qualcomm and Ericsson will be offering the first public demonstration of LTE category 11 hardware in action. LTE category 11 increases the download rate of LTE to 600Mbps through a combination of tri-band (3x20MHz) carrier aggregation and the use of QAM256 encoding, with the latter being the major addition of category 11. Due to the use of QAM256 and the higher SNR required to use it – not to mention 60MHz of spectrum – category 11 is being targeted at small scale deployments where cleaner signals and more spectrum are readily available, such as indoor deployments and carefully constructed outdoor environments.
    LTE Categories
    Category Max Download Max Upload
    Category 6 300Mbps 50Mbps
    Category 7 300Mbps 100Mbps
    Category 9 450Mbps 50Mbps
    Category 10 450Mbps 100Mbps
    Category 11 600Mbps 100Mbps
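    As a rough illustration of where the 600Mbps figure comes from (our own back-of-envelope math, not a Qualcomm disclosure): under Category 6 assumptions a single 20MHz carrier with 2x2 MIMO and 64QAM delivers roughly 150Mbps; moving to QAM256 raises the bits per symbol from 6 to 8, and aggregating three such carriers gets you to 600Mbps.

        // Back-of-envelope LTE Category 11 peak rate (editorial illustration, not a spec-level calculation)
        #include <iostream>

        int main() {
            const double cat6_per_carrier_mbps = 150.0; // one 20MHz carrier, 2x2 MIMO, 64QAM
            const double qam256_scaling = 8.0 / 6.0;    // 8 bits/symbol vs. 6 bits/symbol
            const int aggregated_carriers = 3;          // 3x20MHz carrier aggregation

            double per_carrier = cat6_per_carrier_mbps * qam256_scaling; // ~200 Mbps
            double peak = per_carrier * aggregated_carriers;             // ~600 Mbps

            std::cout << "Approximate Cat 11 peak downlink: " << peak << " Mbps\n";
            return 0;
        }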
    Qualcomm is not currently announcing the modem being used in this demonstration. However, we are likely looking at the successor to Qualcomm’s current X12 LTE modem (9x45), which tops out at category 10.
    Meanwhile Qualcomm will also be demonstrating the ability to use Category 6 LTE with dual SIMs. Qualcomm’s forthcoming hardware will support dual standby with dual receive.
    Finally, Qualcomm will also be demonstrating their current progress on implementing LTE/Wi-Fi call handoff and LTE/Wi-Fi link aggregation. With call handoff – or as Qualcomm likes to call it, Call Continuity – VoLTE calls can be seamlessly transferred between LTE and Wi-Fi, allowing phones to tap into Wi-Fi for call handling when possible and avoiding the greater network expense of using LTE. Meanwhile the first public demonstration of LTE/Wi-Fi link aggregation builds off of handoff to utilize both networks at once, taking advantage of Wi-Fi speeds while allowing operators to better control a call via the normal LTE channel. Link aggregation essentially brings Wi-Fi access points under the control of the LTE network itself – limiting it to operator owned/controlled access points – and is being created as a solution to reliability concerns over using disparate, independent Wi-Fi networks.



    More...

  2. RSS Bot FEED
    #4792

    Anandtech: Lenovo at MWC 2015: VIBE Shot SmartPhone/Camera Crossover Announced

    As part of our booth tour at Lenovo during Mobile World Congress, the recently announced Lenovo VIBE Shot was on display and we managed to get some hands-on time. The VIBE Shot is described by Lenovo as a ‘2-in-1 camera smartphone’, attempting to bridge the gap between smartphones and point-and-shoot cameras. The device attempts this by placing camera buttons on the side of the smartphone, similar to how a point-and-shoot would, as well as having a full-frame 16:9 16MP low-light sensor and a tri-color flash.
    The 5-inch full HD device includes optical image stabilization as well as providing simple and pro modes with a button adjustment on the top. Simple mode is equivalent to the auto mode on most cameras, whereas the pro mode offers manual adjustments such as exposure, white balance, focus mode, saturation and more. Hardware under the hood includes an eight-core Snapdragon 615 (A53/A53) at a 1.7 GHz peak on the fast cluster with 3GB DRAM and 32GB of internal storage.
    Battery capacity comes in at 2900 mAh, with LTE Cat-4 and Android 5.0. The device will be offered in a dual Nano-SIM arrangement, weighs 145g and comes in at 7.3mm thin. Storage is expandable, with guaranteed support of up to 128GB via a microSD.
    The phone felt pretty solid in hand, and the thinness is remarkable. What wasn't remarkable was the aluminium band on the back along the camera side, as it attracted fingerprints. The display unit had seen a lot of use, and it was quite hard to clean.
    The VIBE Shot will be available in red, white and grey, and come to Lenovo’s regular markets in June starting at $349.


    More...

  3. RSS Bot FEED
    #4793

    Anandtech: Intel at MWC 2015: SoFIA, Rockchip, Low Cost Integrated LTE, Atom Renaming

    With day zero of Mobile World Congress already boasting some impressive releases, Intel tackles their platform on day one on several different fronts. As part of a pre-briefing, we were invited into the presentation where Intel discussed the current state of their mobile portfolio along with looking to the future. The pre-briefing was run by Aicha Evans, Corporate Vice President and General Manager of the Wireless Platform Research and Development Group, who you may remember was interviewed by Anand in a series of videos back in 2013. Ms. Evans' focus is on the connectivity side of the equation, making sure that Intel’s portfolio develops into a strong base for future platforms.
    One of the big elements for Intel is the rebranding of their mobile Atom line of SoCs. Up until this point, the SoCs have had difficult-to-follow and very similar names, such as Z3580 or Z3760. This is being adjusted into three different segments as follows:
    Similar to their personal computing processor line, the Intel Atom structure will take on x3/x5/x7 naming, mirroring the i3/i5/i7 of the desktop and notebook space. This is not to be confused with Qualcomm’s modem naming scheme, or anything by BMW.
    The x3 sits at the bottom and comprises Bay Trail-based SoCs at the 28nm node, all previously part of the SoFIA program aimed at emerging markets. There will be three x3 parts – a dual-core x3, a quad-core x3 from the Rockchip agreement, and a final quad-core x3 with an integrated LTE modem.
    This set raises some interesting points to discuss. Firstly, 28nm is not one of Intel's own nodes, and these parts should thus be derived from a TSMC source. It is also pertinent to note that for these SoCs Intel is using a Mali GPU rather than its own Gen 8 graphics IP. This is due to the SoFIA program being aimed at bringing costs down and getting functionality into low price points with a competitive time-to-market.
    The Rockchip model, indicated by the ‘RK’ at the end of the name of the SoC, comes from the partnership with Rockchip we reported on back in May 2014. At the time Intel discussed the roadmap for producing a quad core SoC with 3G for the China market in the middle of 2015, which this provides.
    The final part of the x3 arrangement revolves around combining a 5-mode LTE modem onto the same die. Intel is going to support 14 LTE bands on a single SoC, along with a PMIC, WiFi and geolocation technologies (GPS, GLONASS, BeiDou).
    The Atom x5 and x7 SoCs represent the next step up, implementing Intel’s 14nm process and bringing Cherry Trail to market. The x5 and x7 SoCs are aimed primarily at tablets, including sub-10.1-inch designs, providing an interesting counterbalance to the high price premium of Intel's 4.5W Core M products based on Broadwell-Y. While the x3 line will focus first on Android before moving onto Windows, the x5 and x7 are designed to target both, particularly with the bundled Gen 8 graphics and LTE via the XMM 726x, supporting Cat-6 and carrier aggregation.
    Not a lot of detail was provided about the x5 and x7, suggesting that they are aimed more at late 1H/2H 2015. This coincides with Intel's next-generation XMM 7360 modem, featuring up to 450 Mbps downlink and support for up to 29 LTE bands.
    One interesting element in the x5/x7 scenario was the bundled platform block diagram provided by Intel, showing clearly the two dual-core Airmont CPUs each with 1MB of L2 cache, Gen 8 graphics, separate security processors and ISP, as well as USB 3.0 support.
    Finally, Intel addressed the obvious lack of a high-end mobile SoC that fits into the performance smartphone category. Intel is still working on development of such an SoC in the form of Broxton, and we'll have more news on this in the future.
    We are lining up a chance to interview Ms. Evans about Intel’s Atom lineup later this week at MWC, so stay tuned for that.


    More...

  4. RSS Bot FEED
    #4794

    Anandtech: Broadcom at MWC 2015: BCM4359 and BCM43455 Wifi Combo Chips Announced

    Today Broadcom took the lead by announcing two new Wifi combo chip solutions meant for the smartphone and tablet market. The BCM4359 is a high-end 2x2 MIMO solution for high-performance smartphones, while the BCM43455 is an updated 1x1 MIMO 802.11ac solution for mass-market phones.
    Taking a closer look at the BCM4359, we see several innovative new features, the most notable being the inclusion, for the first time, of Real Simultaneous Dual Band (RSDB). RSDB enables the chip to connect to both the 2.4GHz and 5GHz bands simultaneously. This is achieved by doubling up on the baseband processors on the combo chip: Broadcom uses ARM Cortex-R4 cores as the processing units of the IC, and the 4359 uses two of them. What this enables is a sort of "full duplex" operation across the two frequency bands, instead of having the baseband switch between them in an interleaving manner. The PHY bandwidth has also been upped to 867 Mbps in the two-stream MIMO mode.
    In the demo that Broadcom showed us, we had two test devices and a TV as the showcase setup. One device running the BCM4356 was streaming a video via the 2.4GHz band to a secondary device employing the BCM4359, which in turn would then stream via Wifi Display on 5GHz to the TV. As a comparison, we had the same setup next to it, but with both streaming devices equipped with only a BCM4356 solution. While the BCM4359 setup managed to achieve enough bandwidth to receive and forward the stream to the TV in full 1080p, the BCM4356 side would only be fluid if the quality was reduced to 480p.
    Another advantage of RSDB is that it enables the chip to scan for networks on both bands simultaneously, reducing the time needed to show available Wifi networks – effectively a 2x speed improvement.
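    To picture where that 2x figure comes from, here is a toy sketch (our own illustration with made-up scan durations, not Broadcom firmware behavior) comparing a single baseband that sweeps the two bands back to back against two basebands sweeping both bands at once:

        // Toy illustration of sequential vs. concurrent dual-band scanning (hypothetical timings)
        #include <chrono>
        #include <iostream>
        #include <thread>

        using namespace std::chrono;

        void scan_band(const char* band, milliseconds duration) {
            std::this_thread::sleep_for(duration); // stand-in for a real channel sweep
            std::cout << band << " scan done\n";
        }

        int main() {
            // Sequential: one baseband interleaving between bands.
            auto t0 = steady_clock::now();
            scan_band("2.4GHz", milliseconds(200));
            scan_band("5GHz", milliseconds(200));
            auto sequential = duration_cast<milliseconds>(steady_clock::now() - t0);

            // RSDB-style: two basebands, both bands scanned at once.
            t0 = steady_clock::now();
            std::thread a(scan_band, "2.4GHz", milliseconds(200));
            std::thread b(scan_band, "5GHz", milliseconds(200));
            a.join();
            b.join();
            auto concurrent = duration_cast<milliseconds>(steady_clock::now() - t0);

            std::cout << "Sequential: " << sequential.count() << " ms, concurrent: "
                      << concurrent.count() << " ms\n";
            return 0;
        }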
    The BCM43455 is also a new member of the Broadcom family and serves as a solution for the mass market, meaning a cheaper price point. It is a 1x1 HT80 802.11ac 2.4 and 5GHz solution, enabling up to a 433Mbps PHY rate at 80MHz channel bandwidth. The chip is able to reduce the BoM by 50%, although Broadcom didn't specify what this was compared against.
    One key aspect of this new generation of Wifi chips is that SDIO has been retired as the connection interface to the SoC (though it is still available as a secondary option) and replaced by PCIe. The BCM4358 was the first such chip to take advantage of this switch, and was employed in, for example, the Galaxy Note 4. The PCIe interface not only provides higher bandwidth beyond what SDIO is capable of, but also enables crucial power advantages such as low power states on the bus, and bonuses such as Direct Memory Access (DMA) for the Wifi chipset.
    Both the BCM4359 and BCM43455 are sampling now and will be available in devices later in the year.


    More...

  5. RSS Bot FEED
    #4795

    Anandtech: AMD Lays Out Future of Mantle: Changing Direction In Face of DX12 and glNext

    Much has been made over the advent of low-level graphics APIs over the last year, with APIs based on this concept having sprouted up on a number of platforms in a very short period of time. For game developers this has changed the API landscape dramatically in the last couple of years, and it’s no surprise that as a result API news has been centered on the annual Game Developers Conference. With the 2015 conference taking place this week, we’re going to hear a lot more about it in the run-up to the release of DirectX 12 and other APIs.
    Kicking things off this week is AMD, who is going first with an update on Mantle, their in-house low-level API. Mantle was the first of the low-level APIs to be announced and is so far limited to AMD’s GCN architecture, and there has been quite a bit of pondering over the future of the API in light of the more recent developments of DirectX 12 and glNext. AMD in turn is seeking to answer these questions first, before Microsoft and Khronos take the stage later this week for their own announcements.
    In a news post on AMD’s gaming website, AMD has announced that due to the progress on DX12 and glNext, the company is changing direction on the API. The API will be sticking around, but AMD’s earlier plans have partially changed. As originally planned, AMD is transitioning Mantle application development from a closed beta to a (quasi) released product – via the release of a programming guide and API reference this month – however AMD’s broader plans to also release a Mantle SDK to allow full access, particularly allowing it to be implemented on other hardware, have been shelved. In place of that AMD is refocusing Mantle on being a “graphics innovation platform” to develop new technologies.
    As far as “Mantle 1.0” is concerned, AMD is acknowledging at this point that Mantle’s greatest benefit – reduced CPU usage due to low-level command buffer submission – is something that DX12 and glNext can deliver just as well, negating the need for Mantle in this context. For AMD this is still something of a win because it has led to Microsoft and Khronos implementing the core ideas of Mantle in the first place, but it also means that Mantle would be relegated to a third wheel. As a result AMD is shifting focus, and advising developers looking to tap Mantle for its draw call benefits (and other features also found in DX12/glNext) to just use those forthcoming APIs instead.
    Mantle’s new focus in turn is going to be a testbed for future graphics API development. Along with releasing the specifications for “Mantle 1.0”, AMD will essentially keep the closed beta program open for the continued development of Mantle, building it in conjunction with a limited number of partners in a fashion similar to how Mantle has been developed so far.
    The biggest change here is that any plans to make Mantle open have been put on hold for the moment with the cancelation of the Mantle SDK. With Mantle going back into development and made redundant by DX12/glNext, AMD has canned what was admittedly the hardest-to-deliver aspect of the API, keeping it proprietary (at least for now) for future development. Which is not to say that AMD has given up on their “open” ideals entirely though, as the company is promising to deliver more information on their long-term plans for the API on the 5th, including their future plans for openness.

    Mantle Pipeline States
    As for what happens from here, we will have to see what AMD announces later this week. AMD’s announcement is essentially in two parts: today’s disclosure on the status of Mantle, and a further announcement on the 5th. It’s quite likely that AMD already has their future Mantle features in mind, and will want to discuss those after the DX12 and glNext disclosures.
    Finally, from a consumer perspective Mantle won’t be going anywhere. Mantle remains in AMD’s drivers and Mantle applications continue to work, and for that matter there are still more Mantle enabled games to come (pretty much anything Frostbite, for a start). How many more games beyond 2015 though – basically anything post-DX12 – remains to be seen, as developers capable of targeting Mantle will almost certainly want to target DX12 as well as soon as it’s ready.


    More...

  6. RSS Bot FEED
    #4796

    Anandtech: Next Generation OpenGL Becomes Vulkan: Additional Details Released

    Continuing this week’s GDC-2015 fueled blitz of graphics API news releases, we have Khronos, the industry consortium behind OpenGL, OpenCL, and other cross-platform compute and graphics APIs. Back in August of 2014 Khronos unveiled their own foray into low-level graphics APIs, announcing the Next Generation OpenGL Initiative (glNext). Designed around similar goals as Mantle, DirectX 12, and Metal, glNext would bring a low-level graphics API to the Khronos ecosystem, and in the process making it the first low-level cross-platform API. 2014’s unveiling was a call for participation, and now at GDC Khronos is announcing additional details on the API.
    First and foremost glNext has a name: Vulkan. In creating the API Khronos has made a clean break from OpenGL – something that game industry developers have wanted to do since OpenGL 3 was in development – and as a result they are also making a clean break on the name as well so that it’s clear to users and developers alike that this is not OpenGL. Making Vulkan distinct from OpenGL is actually more important than it would appear at first glance, as not only does Vulkan not bring with it the compatibility baggage of the complete history of OpenGL, but like other low-level APIs it will also have a higher skill requirement than high-level OpenGL.
    Naming aside, Vulkan’s goals remain unchanged from the earlier glNext announcement. Khronos has set out to create an open, cross-platform low-level graphics API, bringing the benefits of greatly reduced draw call overhead and better command submission multi-threading – not to mention faster shader compiling by using intermediate format shaders – to the entire ecosystem of platforms that actively support Khronos’ graphics standards. Which these days is essentially everything outside of the gaming consoles. This is also Khronos’s unifying move for graphics APIs, doing away with the separate branches of OpenGL – the desktop OpenGL and the mobile/scaled-down OpenGL ES – and replacing them with the single Vulkan.
    Being announced this week at GDC are some additional details on the API, which given the intended audience is admittedly a bit developer centric. Vulkan is not yet complete – the specification itself is not being released in any form – but Khronos is further detailing the development and execution flows for how Vulkan will work.
    Development tools have been a long-standing struggle for Khronos on OpenGL, and with Vulkan they are shooting to get it right, especially given the almost complete lack of hand-holding a low-level graphics API provides. For this reason the Vulkan specification includes provisions for common validation and debug layers that can be inserted into the rendering chain and used during development, allowing developers to perform in-depth debugging on the otherwise bare-bones API. Meanwhile conformance testing is also going to be heavily pushed and developed, having been something OpenGL lacked for many years and something that was a big help in developing Khronos’ more recent APIs such as WebCL. This being Khronos, even the conformance testing is “open” in a way, with developers able to submit new tests and Khronos encouraging it.
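    For context on what inserting a validation layer looks like on the application side, here is a minimal sketch using the Vulkan C API as it eventually shipped – the layer name below comes from later SDK releases and is our own example rather than part of this week's disclosure. The layer is enabled at instance creation and then sits between the application and the driver for every call:

        // Sketch: enabling a validation layer at Vulkan instance creation (API as it later shipped).
        // Requires the Vulkan SDK; "VK_LAYER_KHRONOS_validation" is the shipping layer name, not a GDC 2015 detail.
        #include <vulkan/vulkan.h>
        #include <cstdio>

        int main() {
            const char* layers[] = { "VK_LAYER_KHRONOS_validation" };

            VkApplicationInfo app = {};
            app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
            app.pApplicationName = "layer-demo";
            app.apiVersion = VK_API_VERSION_1_0;

            VkInstanceCreateInfo info = {};
            info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
            info.pApplicationInfo = &app;
            info.enabledLayerCount = 1;          // the validation layer intercepts every call made below this instance
            info.ppEnabledLayerNames = layers;

            VkInstance instance = VK_NULL_HANDLE;
            VkResult res = vkCreateInstance(&info, nullptr, &instance);
            std::printf("vkCreateInstance returned %d\n", static_cast<int>(res));
            if (res == VK_SUCCESS) {
                vkDestroyInstance(instance, nullptr);
            }
            return 0;
        }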
    The actual Vulkan API itself has yet to be finalized, however at this point in time Khronos expects it to behave very similarly to Mantle and DX12, so developers capable of working on the others shouldn’t have much trouble with Vulkan. In fact Khronos has confirmed that AMD has contributed Mantle towards the development of Vulkan, and though we need to be clear that Vulkan is not Mantle, Mantle was used to bootstrap the process and speed its development. What has changed is that Khronos has gone through a period of refinement, throwing out portions of Mantle that didn’t work well in Vulkan – mainly anything that would prevent it from being cross-vendor – and replacing them with the other necessary/better functionality.
    Meanwhile from a shader programming perspective, Vulkan will support multiple backends for shaders. GLSL will be Vulkan’s initial shading language, however long-term Khronos wants to enable additional languages to be used as shading languages, particularly C++ (something Apple’s Metal already supports). Bringing support for other languages as shaders will take some effort, as those languages will need graphics bindings extended into them.
    As for hardware support for Vulkan, Khronos tells us that Vulkan should work on any platform that supports OpenGL ES 3.1 and later, which is essentially all modern GPUs, and desktop GPUs going some distance back. To be very clear here whether a platform’s owner actually develops and enables their Vulkan runtime is another matter entirely, but in principle the hardware should support it. Though this comes as something of an interesting scenario, as a bare minimum of OpenGL ES 3.1 implies that tessellation and geometry shaders will not be a required part of the standard. As these are common features in desktop parts and more recent mobile parts that are Android Extension Pack capable, this means that these will be optional features for developers to either use (and require) or not at their own discretion.
    Wrapping up our look at the API, Khronos tells us that they’re on schedule to release initial specifications this year, with initial platform implementations shortly behind that. Given the fact that Khronos tends to do preliminary releases of APIs first, this puts Vulkan a bit behind DirectX 12 (which will see its shipping implementation this year), but not too far behind. By which time we should have a better idea of what platforms and GPUs will see Vulkan support added, and what the first games are that will support the API.
    SPIR-V

    Finally, no discussion of Vulkan can be complete without a discussion of its language frontend. Vulkan’s frontend will be powered by SPIR-V, the latest version of Khronos’ Standard Portable Intermediate Representation.
    By basing Vulkan around SPIR-V, developers gain the ability to write to Vulkan in more languages, being able to feed Vulkan almost any code that can be compiled down to SPIR. This is similar to what SPIR has done for OpenCL – which is what SPIR was initially created for – allowing for many languages to work on OpenCL-capable hardware through SPIR. As a side benefit for Vulkan, this also means that Vulkan shaders can be shipped in intermediate format, rather than as raw high-level GLSL code as OpenGL’s shader compiler path currently requires.
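    To make the intermediate-format point concrete, here is a small sketch (our own example, using tool and function names from the Vulkan ecosystem as it later shipped) of the application side of this: the shader is compiled offline, for example with glslangValidator, and the application ships and loads only the resulting SPIR-V words, never the GLSL source.

        // Sketch: loading a precompiled SPIR-V binary and checking its header.
        // The driver is handed these words (e.g. via vkCreateShaderModule); no GLSL compiler runs at load time.
        #include <cstdint>
        #include <cstdio>
        #include <fstream>
        #include <vector>

        int main(int argc, char** argv) {
            const char* path = (argc > 1) ? argv[1] : "shader.spv"; // produced offline, e.g. glslangValidator -V shader.vert -o shader.spv
            std::ifstream file(path, std::ios::binary | std::ios::ate);
            if (!file) { std::fprintf(stderr, "cannot open %s\n", path); return 1; }

            std::vector<uint32_t> words(static_cast<size_t>(file.tellg()) / sizeof(uint32_t));
            file.seekg(0);
            file.read(reinterpret_cast<char*>(words.data()), words.size() * sizeof(uint32_t));

            // Every SPIR-V module begins with the magic number 0x07230203.
            if (!words.empty() && words[0] == 0x07230203u) {
                std::printf("%s: valid SPIR-V module, %zu words, ready for the driver\n", path, words.size());
            } else {
                std::printf("%s: not a SPIR-V module\n", path);
            }
            return 0;
        }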
    In putting together SPIR-V, what Khronos has done is essentially extend Vulkan’s graphics constructs into the intermediate representation, allowing SPIR-V to service both compute and graphics workloads. In the short term this is unlikely to make much of a difference for developers (who will be busy just learning the graphics side of Vulkan), but in the long run this would allow developers to more freely mix graphics and compute workloads, as the underlying runtime is all the same. This is also where Vulkan’s ability to extend its shading language from GLSL to other languages comes from, as SPIR’s flexibility is what allows multiple languages to all target SPIR.
    SPIR-V also brings with it some compute benefits as well, but for that we need to talk about OpenCL 2.1…


    More...

  7. RSS Bot FEED
    #4797

    Anandtech: Khronos Announces OpenCL 2.1: C++ Comes to OpenCL

    Alongside today’s announcements of Vulkan and SPIR-V, Khronos is also using the occasion of the 2015 Game Developers Conference to announce the next iteration of OpenCL, OpenCL 2.1.
    OpenCL 2.1 marks two important milestones for OpenCL. First and foremost, OpenCL 2.1 marks the point where OpenCL (compute) and graphics (Vulkan) come together under a single roof in the form of SPIR-V. With SPIR-V now in place, developers can write graphics or compute code using SPIR, forming a common language frontend that will allow Vulkan and OpenCL to accept many of the same high level languages.
    But the more significant aspect of OpenCL 2.1 is that after several years of proposals and development, OpenCL is now gaining support for an official C++ dialect, extending the usability of OpenCL into even higher-level languages. OpenCL originally launched with the OpenCL C dialect in 2008, and there was almost immediate demand for the ability to write OpenCL code in C++, something that has taken the hardware and software some time to catch up to. And though C++ is not new to GPU computing – NVIDIA’s proprietary CUDA has supported it for some time – this marks the introduction of C++ to the cross-platform OpenCL API.

    OpenCL 2.1’s C++ support comes in the form of a subset of C++, stripping out a few parallel-compute unfriendly features such as catch/throw, function pointers, and virtual functions. What remains then is virtually everything else, including classes, templates, and C++’s powerful lambda functionality. This opens up OpenCL programming to the same general benefits that C++ enables over C, giving developers access to a higher level language that is more capable, and generally speaking better known as well.

    The addition of C++ to OpenCL is driven by the use of SPIR-V, with Khronos creating an OpenCL C++ to SPIR-V compiler to compile C++ down to the intermediate representation, and then the OpenCL runtime executing the SPIR-V code from there. And meanwhile though OpenCL C isn’t going anywhere for both compatibility and tuning reasons, this is the overall direction that Khronos wants to go with OpenCL, pushing everything through SPIR so that the languages supported are largely a function of what compilers are available, and not what the OpenCL runtime can do.
    Meanwhile, in the long run C++ support should help Khronos and its partners to better push and deploy OpenCL, with C++ support making the API more useful and accessible than before. Differences such as these have been a big part of the reason that NVIDIA’s CUDA has remained so popular despite being limited to NVIDIA platforms, and though OpenCL C++ arguably won’t erase the gap between the two APIs, it should cut down on the gap significantly. That said, part of this may come down to whether NVIDIA implements OpenCL 2.1 support; with their current dominance with CUDA, NVIDIA has yet to even implement OpenCL 2.0 support, which greatly limits how many discrete GPUs can run the newest versions of OpenCL.
    Finally, along with the addition of OpenCL C++, OpenCL 2.1 also adds a few extra features to the overall API to improve the API’s flexibility. Low latency device timers should allow for much more reliable/accurate profiling of code execution than relying on potentially divergent clocks, and kernel cloning functionality has been introduced via the clCloneKernel command.
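    On the host side, kernel cloning is a one-call affair. The sketch below (our own example, assuming an OpenCL 2.1-capable platform and driver are installed) builds a trivial kernel and clones it; the point of the feature is that each kernel object then carries independent argument state, so different threads or queues can set arguments without stepping on each other.

        // Sketch: cloning a kernel with OpenCL 2.1's clCloneKernel (assumes an OpenCL 2.1 runtime; error checking omitted)
        #define CL_TARGET_OPENCL_VERSION 210
        #include <CL/cl.h>
        #include <cstdio>

        int main() {
            cl_platform_id platform;
            cl_device_id device;
            clGetPlatformIDs(1, &platform, nullptr);
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, nullptr);
            cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);

            // A trivial OpenCL C kernel; the host-side cloning is what is being illustrated here.
            const char* src =
                "kernel void scale(global float* data, float factor) {"
                "    data[get_global_id(0)] *= factor; }";
            cl_program prog = clCreateProgramWithSource(ctx, 1, &src, nullptr, nullptr);
            clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
            cl_kernel original = clCreateKernel(prog, "scale", nullptr);

            cl_int err = CL_SUCCESS;
            cl_kernel clone = clCloneKernel(original, &err); // new in OpenCL 2.1

            // Each kernel object keeps its own argument state, so the two can be used independently.
            float a = 2.0f, b = 0.5f;
            clSetKernelArg(original, 1, sizeof(float), &a);
            clSetKernelArg(clone, 1, sizeof(float), &b);
            std::printf("clCloneKernel returned %d\n", err);

            clReleaseKernel(clone);
            clReleaseKernel(original);
            clReleaseProgram(prog);
            clReleaseContext(ctx);
            return 0;
        }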

    Wrapping things up, as is common for Khronos, OpenCL 2.1 is initially being released as a provisional specification. While Khronos isn’t commenting on a finalization date just yet, given how early it is in the year, we would be surprised not to see a final version of the API before the year is out.



    More...

  8. RSS Bot FEED
    #4798

    Anandtech: ARM At GDC 2015: Geomerics Enlighten 3 Released

    One of ARM’s more unusual acquisitions in recent years has been Geomerics, a fellow UK company that specializes in video game lighting technology. Geomerics is a far cry from ARM’s day-to-day business of developing hardware blocks and ISAs to license to customers who want to put together their own chips, but Geomerics has been a long-term play for the company. By investing in a company with strong ties to the video gaming industry, ARM would in turn gain an important tool in helping to bring higher quality lighting to SoC-class GPUs, and also help to ensure that such important middleware was including SoC-class GPUs in its feature & performance targets.
    With GDC 2015 taking place this week, ARM is seeing the first real payoff from their acquisition with the release of the latest version of Geomerics’ lighting technology, Enlighten 3. Enlighten 3 in turn is intended to be one of the most advanced global illumination systems on the market, designed to scale from mobile up to desktop PCs. Previous versions of Enlighten were already in several games and engines, including the Frostbite 2 engine backing Battlefield 3, and now with Enlighten 3 the company is hoping to extend its reach further with its inclusion in the ever-popular (particularly on mobile) Unity 5 engine, and as an add-on for the similarly popular Unreal Engines 3 and 4.
    From a feature standpoint Enlighten 3 introduces several new features, including a greatly improved indirect lighting system. Also on the docket is a richer materials system, allowing for improved support for transparent surfaces, which in turn allows for the lighting to be updated to reflect when the transparency of a surface has changed. Alternatively, for scenarios without real-time lighting, the middleware also has increased the quality of lightmaps it can bake.

    Ultimately ARM tells us that they believe 2015 will be a big year for Geomerics in the mobile space, saying they expect a number of mobile titles to use the technology. To that end, as part of their GDC launch, ARM and Geomerics are showcasing several Enlighten 3 demos, including an in-house demo they are calling Subway, and a demo showcasing Enlighten 3 running inside Unreal Engine 4.


    More...

  9. RSS Bot FEED
    #4799

    Anandtech: AMD’s LiquidVR Announced: AMD Gets Expanded VR Headset Functionality

    2015 is going to be known as the year of virtual reality at GDC. Before the expo floor has even opened, VR pitches, announcements, and press conference invitations are coming fast and furious. Though Oculus is still the favored child in the PC space, a number of other companies are either trying to pitch their own headsets, or alternatively are working on the middleware portion of the equation – bridging the gap between current systems and the VR hardware. Recent developments in the field have clearly sparked a lot of consumer and developer interest in the idea, and now we are in the rapid expansion phase of technological growth.
    As one of the curators of the best system to drive these VR setups – the venerable PC – AMD is taking the stage at GDC 2015 to showcase their own VR plans. If the PC is going to be the center of high-performance VR, then it’s the GPU that’s going to be the heart, and that makes AMD’s role in all of this development a very important one. This in turn is a role they are embracing today as they announce their LiquidVR family of technologies.
    Having good hardware is a start, but from a software perspective VR is a very different challenge than traditional PC desktop rendering. Latency is paramount – a fully immersed mind has a much lower tolerance for irregularities – and at the same time the big goal of delivering AAA caliber graphics in a VR environment requires all the GPU processing you can throw at it (and then some). As a result there are a series of improvements and optimizations AMD will be making available to developers through LiquidVR and its alternative rendering path, with the ultimate goal of making AMD’s Radeon video cards capable of delivering a great VR experience.
    The collection of features/technologies in LiquidVR is based around 3 core concepts for AMD: comfort (latency/warping), compatibility (work with more headsets, bypass OSes when necessary), and compelling content (multi-GPU/be fast enough to work). In fact if you are already familiar with NVIDIA’s VR Direct initiative, then what AMD is doing today should seem very similar. Facing the same problems AMD has engineered very similar solutions, which comes as no surprise as both companies have been following Oculus VR’s best practice suggestions in developing the technologies.
    Moving on then, for AMD’s comfort goal the company is introducing 2 technologies in LiquidVR to reduce latency and warping. These are Latest Data Launch and Asynchronous Shaders.
    Latest Data Launch and Async Shaders go hand-in-hand in this case, the two needing each other to implement a single latency reduction and warping system. With Async Shaders, AMD implements the technical means to do time warping – modifying/warping the rendered image via shaders, using the very latest tracking data to cut down on the perceived latency – and Latest Data Launch is the means to grab that tracking data. Meanwhile the asynchronous aspect of Async Shaders refers to the fact that these shader operations can take place while the next frame is already being rendered, further cutting down on latency.
    Of all of the LiquidVR technologies, Async Shaders is arguably going to be the most important. While AMD can (and does) reduce latency elsewhere, there is a minimum amount of time needed to generate the game state and render a frame that hardware cannot get around, and warping in turn is a relatively cheap hack to reduce how much latency is perceived. By fudging the image after the fact and running a few final tasks after the main frame render job is complete, warping via Async Shaders can make latency feel a lot lower than it really is.
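    A heavily simplified way to see what warping buys you (our own conceptual sketch, not AMD's LiquidVR API): the frame is rendered against the head pose sampled when rendering began, and just before display the finished image is nudged by whatever small rotation the head has made since then, so the view on screen tracks the newest tracking sample even though the expensive render is already done.

        // Conceptual time-warp sketch (editorial illustration only, not AMD's implementation)
        #include <cstdio>

        struct Pose { float yaw_deg; }; // reduced to a single rotation axis for clarity

        // How far the finished frame must be rotated/shifted at display time.
        float warpCorrection(Pose rendered_with, Pose latest) {
            return latest.yaw_deg - rendered_with.yaw_deg;
        }

        int main() {
            Pose at_render_start = { 30.0f };      // head pose sampled when the frame began rendering
            Pose just_before_display = { 31.2f };  // newest tracking data, grabbed late (a la Latest Data Launch)

            float correction = warpCorrection(at_render_start, just_before_display);
            std::printf("Warp the finished frame by %.1f degrees before scan-out\n", correction);
            return 0;
        }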
    Moving on, we have AMD’s compelling content goal, which is backed by their Affinity Multi-GPU technology. Short and to the point, Affinity Multi-GPU allows for each eye in a VR headset to be rendered in parallel by a GPU, as opposed to taking the traditional PC route of alternate frame rendering (AFR), which has the GPUs alternate on frames and in the process can introduce quite a bit of lag. Though multi-GPU setups are not absolutely necessary for VR, the performance requirements for high quality VR combined with the simplicity of this solution make it an easy way to improve performance (reduce latency) just by adding in a second GPU.
    At a lower level, Affinity Multi-GPU also implements some rendering pipeline optimizations to get rid of some of the CPU overhead that would come from dispatching two jobs to render two frames. With each eye being nearly identical, it’s possible to cut down on some of this work by dispatching a single job and then using masking to hide from each eye what it can’t actually see.
    AMD’s final LiquidVR technology is Direct-to-Display, which implements their compatibility goal. Direct-to-Display really sets out to solve two problems: improving latency/compatibility by going around the OS at times, and making it easier for AMD to support VR headsets from multiple vendors. As far as the OS goes, it can add quite a bit of latency on its own, so Direct-to-Display allows the OS to be bypassed and the VR image sent straight to the headset, shaving off some of that latency. Meanwhile by limiting what the OS has to do, it becomes easier to support multiple headsets since they don’t need to interact with the OS nearly as much.
    Ultimately, with the combination of LiquidVR technologies, AMD is aiming to offer a top-tier VR experience: reduce latency, improve compatibility, make multi-GPU work for each eye, and above all enable time warping and re-warping to cut down the amount of perceived latency. All of these are small but necessary steps to enable the kind of VR experience that Oculus and other VR headset makers have been seeking to create.
    At this point AMD’s LiquidVR technology is still in early development, with the Alpha 1.0 SDK being released today to AMD’s registered partners. AMD for their part has wasted no time in getting the ball rolling on partnerships, working of course with Oculus on hardware, and with game developers such as Crytek on software. Meanwhile at this point AMD isn’t offering any indication of when LiquidVR will be in a shipping state, but given the overall conservative approach of the VR industry – the Oculus Rift’s shipping date is TBD – I suspect any launch will be a phased launch, with partners getting successively newer versions of the toolkit and working with what they have when the VR headsets themselves finally become available.
    Longer term here it will be interesting to see how the middleware situation evolves for VR. With AMD announcing LiquidVR we now have NVIDIA and AMD producing their own branded solutions, and meanwhile 3rd party interlopers such as Oculus and Valve with their SteamVR technology are influencing outcomes as well. These aren’t competing technologies per se, and everyone already seems to be converging towards the same solutions, in which case we may see true standardization come very quickly for what’s still a very young market.


    More...

  10. RSS Bot FEED
    #4800

    Anandtech: NVIDIA GDC 2015 Liveblog

