Results 5,391 to 5,400 of 12095

Thread: Anandtech News

  1. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,807
    Post Thanks / Like
    #5391

    Anandtech: AMD Releases Catalyst 15.9.1 Beta Drivers

    In a rare episode for AMD, we get to see a second beta driver release in the same week. Earlier this week (Tuesday, September 29th) AMD released their Catalyst 15.9 Beta driver, which brought optimizations for the recently showcased Fable Legends benchmark and the upcoming Star Wars Battlefront beta, along with a grocery list of other fixes.
    Unfortunately, a major memory leak was discovered shortly after release that could be triggered when a browser or other windows were resized, with the result that all of the video memory on the graphics card was eventually consumed. Upon becoming aware of the problem, AMD promptly announced through Twitter and on their own site that there was an issue and recommended that affected users revert their drivers.
    Late on Wednesday AMD re-released their 15.9 beta driver as the 15.9.1 Beta driver to address this issue. The Display Driver version is still 15.201.1151, leaving the only change here being the fix for the memory leak. If you have updated to 15.9, it is advisable to update again to 15.9.1 to avoid the memory leak.
    Those looking for the driver update can find it on AMD's Catalyst Beta download page.


    More...

  2. #5392

    Anandtech: The Samsung Galaxy Note5 and Galaxy S6 edge+ Review

    The Galaxy Note line has long been one of Samsung’s greatest assets in the mobile market. While other Android OEMs have made phablets, Samsung was essentially the first OEM to ship a high-end device in this segment, and although competitors have entered the space in the time since, Samsung continues to have a strong hold on this market.
    Unlike previous iterations of the Note family, the Galaxy Note5/S6 edge+ represents a significant change in design compared to previous generations, integrating many of the design aspects of the Galaxy S6 across the whole family. In many ways, the Galaxy Note5 resembles the Galaxy S6 in a different size. Meanwhile the Note5's companion device, the Galaxy S6 edge+, is effectively a second take on the Galaxy Note5, aiming for a design closer to a large-format phone than the phablet as originally envisioned by Samsung. To that end the Galaxy S6 edge+ adopts many of the design accents of the Galaxy S6 edge, such as the curved display, all the while getting rid of the stylus.
    To see how these phablets perform, read on for the full review.

    More...

  3. #5393

    Anandtech: Windows 10 Feature Focus: .NET Native

    Programming languages seem to be cyclical. Low-level languages give developers the chance to write very fast code with the minimum of instructions necessary, but the closer you code to the hardware, the more difficult development becomes, and the developer needs a good grasp of that hardware to get the best performance. In theory everything could be written in assembly language, but that has obvious limitations. Over time, programming languages have been abstracted away from the hardware they run on, which benefits developers in that they do not have to micro-manage their code, and the code itself can be compiled for different architectures.
    In the end the processor just executes machine code, and the job of moving from a developer’s mind to machine code can be done in several ways. Two of the most common are Ahead-of-Time (AOT) and Just-in-Time (JIT) compilation. Each has its own advantages, but AOT can yield better performance because the CPU is not translating code on the fly. For desktops this has not necessarily been a big issue, since they generally have sufficient processing power anyway, but in the mobile space processors are much more limited in resources, especially power.
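    The distinction can be sketched with a loose analogy in Python (our own illustration, not part of Microsoft's toolchain): eval() on a source string re-translates the code on every call, much like translating on the fly, while compile() performs the translation once ahead of time, leaving only execution for later calls.

```python
import timeit

# A loose analogy: eval() on a string re-parses the source on every call,
# while compile() does the parsing once up front so later calls only
# execute the ready-made bytecode.
source = "sum(i * i for i in range(100))"

def translate_every_time():
    return eval(source)            # parse + execute on each call

code = compile(source, "<expr>", "eval")   # one-time, ahead-of-time step

def run_precompiled():
    return eval(code)              # execute only

assert translate_every_time() == run_precompiled()

t_fresh = timeit.timeit(translate_every_time, number=2000)
t_ahead = timeit.timeit(run_precompiled, number=2000)
print(f"re-translate each call: {t_fresh:.4f}s, precompiled: {t_ahead:.4f}s")
```

    The precompiled path typically wins because the translation cost is paid once, which is the same trade AOT compilation makes at a much larger scale.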
    We’ve seen Android move to AOT with ART just last year, where the actual compilation of the code is done on the device after the app is downloaded from the store. With Windows, apps can be written in more than one language: apps in the Windows Store can be written in C++, which is compiled AOT, in C#, which runs as JIT code using Microsoft’s .NET framework, or even in HTML5, CSS, and JavaScript.
    With .NET Native, Microsoft is now allowing C# code to be pre-compiled to native code, eliminating the need to fire up the majority of the .NET framework and runtime, which saves time at app launch. Visual Studio will now be able to perform the compilation to native code, but Microsoft is implementing quite a different system than Google has with Android’s ART. Rather than having the developer do the compilation and upload the result to the cloud, and rather than having the code be compiled on the device, Microsoft will be doing the compilation themselves once the app is uploaded to the store. This allows them to use their very well-known C++ compiler, and any tweaks they make to the compiler going forward can be applied to all .NET Native apps in the store without having to get developers to recompile. Microsoft's method is actually very similar to Apple's latest changes as well, since Apple will also recompile apps on their end if they make changes to their compiler.
    They have added some pretty interesting functionality to their toolkit to enable .NET Native. Let’s take a look at the overview.
    Starting with C# code, the source is compiled into Intermediate Language (IL) code using the same compiler used for any C# code that runs on the .NET framework as JIT code. The resulting IL is exactly what you would get if you wanted to run the code as JIT; if you were not going to compile to native, you would stop right here.
    To move this IL code to native code, the code is next run through the Intermediate Language Compiler (ILC). Unlike a JIT compiler, which runs on the fly and only ever sees a tiny portion of the code, the ILC can see the entire program and can therefore make larger optimizations. The ILC also has access to the .NET framework to add in the necessary code for standard functions. It will actually create C# code for any WinRT calls made, to avoid the framework having to be invoked during execution; that C# code is then fed back into the toolchain as part of the overall project so that all of these calls can be static calls in native code. The ILC then performs any required transforms on the C# code; C# can rely on the framework for certain functions, and these need to be transformed since the framework will not be invoked once the app is compiled as native. The resulting output from the ILC is a single file which contains all of the code, all of the optimizations, and whatever parts of the .NET framework are necessary to run the app. Finally, this monolithic output is put through a Tree Shaker, which looks at the entire program, determines what code is and is not being used, and expunges code that is never going to be called.
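    The reachability analysis a tree shaker performs can be sketched as a simple graph traversal. This is a toy model of our own; the call graph and function names below are made up, and the real ILC operates on IL, not Python dictionaries:

```python
# Illustrative sketch only: a toy "tree shaker" that keeps just the
# functions reachable from the entry point of a static call graph.
def tree_shake(call_graph, entry):
    """Return the set of functions reachable from `entry`."""
    keep, stack = set(), [entry]
    while stack:
        fn = stack.pop()
        if fn in keep:
            continue
        keep.add(fn)
        stack.extend(call_graph.get(fn, []))
    return keep

# Hypothetical app: Main uses Render and SaveFile; Telemetry is never called.
graph = {
    "Main":       ["Render", "SaveFile"],
    "Render":     ["DrawText"],
    "SaveFile":   [],
    "DrawText":   [],
    "Telemetry":  ["UploadLogs"],   # dead code: unreachable from Main
    "UploadLogs": [],
}

live = tree_shake(graph, "Main")
dead = set(graph) - live
print(sorted(live))  # ['DrawText', 'Main', 'Render', 'SaveFile']
print(sorted(dead))  # ['Telemetry', 'UploadLogs'] -- shaken out
```

    The payoff is the same as in the real toolchain: anything not reachable from the program's entry points never makes it into the final binary.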
    The resulting output from ILC.exe is Machine Dependent Intermediate Language (MDIL), which is very close to native code but retains a light layer of abstraction above it. MDIL contains placeholders which must be replaced before the code can run, through a process called binding.
    The binder, rhbind.exe, turns the MDIL into the final set of outputs: a .exe file and a .dll file. This final output runs as native code, but it still relies on a minimal runtime which contains the garbage collector.
    The process might seem pretty complicated, with a lot of steps, but there is a method to the madness. By keeping the developer’s code as an IL and treating it just like it would be non-native code, the debugging process is much faster since the IDE can run the code in a virtual machine with JIT and avoid having to recompile all of the code to do any debugging.
    The final compile will be done in the cloud for any apps going to the store, but the tools will be available locally as well to assist with testing the app before it gets deployed to ensure there are no unseen consequences of moving to native code.
    All of this work is done for really one reason: performance. Microsoft’s numbers for .NET Native show up to 60% quicker cold startup times for apps, and 40% quicker launches from a warm startup. Memory usage will also be lower, since the operating system will not need to keep the runtime active at the same time. All of this is important for any system, but especially for low-powered tablets and phones.
    Microsoft has been testing .NET Native with two of their own apps. Both Wordament and Fresh Paint have been running as native code on Windows 8.1. This is the case for both the ARM and x86 versions of the app, and the compilation in the cloud ensures that the device downloading will be sure to get the correct executable for its platform.
    Microsoft Fresh Paint
    .NET Native has been officially released with Visual Studio 2015, and is available for use with the Universal Windows App platform and therefore apps in the store. At this time, it is not available for desktop apps, although that is certainly something that could come with a future update. It’s not surprising though, since they really want to move to the new app platform anyway.
    There is a lot more under the covers to move from managed code to native code, but this is a high level overview of the work that has been done to provide developers with a method to get better performance without having to move away from C#.


    More...

  4. #5394

    Anandtech: AMD Announces FirePro Mobile W7170M, W5170M, & W5130M

    In what’s turning out to be a busy week for mobile workstation announcements, AMD is announcing this week that they are launching three new FirePro Mobile parts. This latest announcement roughly coincides with the announcement of new mobile workstations from partners such as Dell, who are in turn gearing up for their first mobile workstations based on the recently announced Xeon E3-1500M v5, the very first mobile Xeon.
    AMD FirePro Mobile Specification Comparison

                        FirePro W7170M   FirePro W5170M   FirePro W5130M
    Stream Processors   2048             640              512
    GPU Clock           723MHz

  5. #5395

    Anandtech: Windows 10 Feature Focus: CompactOS

    Microsoft took a serious look at how to save space from the operating system files with Windows 8.1 Update. What they came up with at the time was WIMBoot, which used the recovery partition’s compressed WIM image file as the basis for most of the system files needed by Windows. Since the recovery partition is at least 4 GB in size, this is a pretty substantial savings especially on the lower cost devices which WIMBoot was targeted at.
    I’ve discussed the changes with Windows 10 a couple of times, but a recent blog post from Michael Niehaus outlines how the new system works, what it is called, and how to manually enable it.
    The last bit there is pretty important, since moving to WIMBoot was not something that could be done easily. It had to be done at the time the system image was put onto the computer, and there were a couple of extra steps OEMs could take in order to incorporate their own software into the WIMBoot.
    Standard Partition with Windows 8.1 | WIMBoot Enabled Windows 8.1
    This also led to some of the first issues with WIMBoot. The actual size of the recovery partition, if it held just Windows, would be around 4 GB, but once an OEM adds in their own software, along with perhaps a copy of Microsoft Office, the recovery partition could suddenly bloat to 10 GB or more. This was a major issue because, unlike with a standard install of Windows, the recovery partition cannot be removed on a WIMBoot system, leaving a large chunk of a possibly small drive used up with no way to reclaim that space.
    The other issue was that over time the WIMBoot partition would become less and less used: when there were security updates to the operating system, key system files would be replaced with full uncompressed versions, but the original versions would still be part of the WIM, which could not be modified. Over time Windows would grow to fill more and more of the drive, and the WIMBoot concept was clearly not working out as intended.
    So with Windows 10, Microsoft has moved away from the recovery partition altogether. When you do a system reset, Windows will be rebuilt from the components in the \Windows\winsxs folder. This means that the system will also be almost fully patched after a reset, unlike with earlier versions of Windows where any restore off of the recovery partition would revert you back to whatever set of files was used to create the WIM. Only the most recent 30 days of patches will be installed, and this was a design decision in case the reset itself is due to something going wrong within the last 30 days.
    The other part of the space savings comes from a compression tool Microsoft is calling CompactOS. This goes back to WIMBoot in a way, since the system files are compressed into what amounts to a WIM file. The big difference here is that, unlike WIMBoot, CompactOS can be enabled and disabled on the fly.
    From an administrative command prompt, simply use the commands:

    Compact.exe /CompactOS:query   (queries whether CompactOS is currently enabled)
    Compact.exe /CompactOS:always  (enables CompactOS)
    Compact.exe /CompactOS:never   (disables CompactOS)
    I ran CompactOS on an ASUS TP200S, which has 64 GB of eMMC storage. Windows 10 did not enable CompactOS automatically since it was not needed, but manually enabling it saved over 3 GB of space on the C: drive. Luckily ASUS has included enough storage in the TP200S that compression is not really necessary out of the box, but on any system with 32 GB or less this could be a big help.
    There is going to be a performance impact, of course, since the files will need to be decompressed when accessed; the actual difference is something I hope to have a chance to test and document in the not too distant future.
    In the end, CompactOS looks to be a nice upgrade over WIMBoot which had a lot of promise, but was not as effective as hoped.


    More...

  6. #5396

    Anandtech: AMD Releases New PRO Mobile APUs: Carrizo up to A12-8800B

    We touched upon this very briefly in our recent HP Elitebook news, but at the end of September AMD officially launched four new professional mobile APUs under the AMD PRO line. The PRO line is similar to the commercial line of APUs that end up in the hands of casual users, except they are mostly sold in machines aimed at the professional market, and might have some slightly different arrangements in configuration to ensure a long-tail support program. This typically means that features such as TrustZone (using ARM IP) embedded in the processors go through ISV (independent software vendor) certification to ensure a fully functioning product.
    The four AMD PRO processors being released today all use AMD’s latest microarchitecture, codenamed Carrizo, which comprises one or two ‘Excavator’ class modules and Radeon Rx graphics. In a change to the regular AMD A-Series nomenclature, the top processor of the stack is now an ‘A12’ class design, which reaches greater parity with previous microarchitecture designs on the desktop. This means a dual-module design paired with eight graphics compute units, giving what AMD calls 12 compute cores in total, with ‘R7’ graphics.
    AMD’s Carrizo platform was built with a focus on the 15W TDP window, although AMD will allow its partners to boost the designs with a configurable TDP of up to 35W on the A12, A10 and A8. AMD is also promising an enterprise package with partners to ensure a 36-month extended OEM warranty, 24-month product longevity, 18-month image stability and a ‘richer configuration’ package. That last point is promoted through the use of Qualcomm’s Snapdragon X5 LTE modem (Cat 4) in certain HP-branded professional notebook designs.
    Carrizo’s raison d’être was to bring use cases that previously required high-end laptop configurations (>$800) down into the mainstream ($500-$700), which matters if a business is considering deploying several hundred devices at once along with a support package to go with them. The PRO APUs will also support DASH for remote desktop management, as well as AMD PRO Control Center for SMBs.
    AMD expects a number of partners to release information over the next few months. We are working towards obtaining a suitable Carrizo unit for testing as well.
    Source: AMD
    More...

  7. #5397

    Anandtech: The Acer Aspire S7-393 Review: Broadwell Comes To Acer's Ultrabook

    The last time we got a chance to try out the Acer Aspire S7 was back in 2013. At the time it was a big step up for Acer, and the Ivy Bridge based S7 came with one of the slimmest and lightest bodies of its era. But that was 2013, and in 2015 the competition in the Ultrabook space has not sat idly by. One thing is certain in the technology sector: no matter what kind of lead you have, if you stand still, you will be passed.

    More...

  8. #5398

    Anandtech: Software Guard Extensions on Specific Skylake CPUs Only

    Through the staggered release of Intel’s 6th Generation Core processors, known as Skylake, we reported in our architecture deep dive that Intel would be introducing a raft of new features, including Software Guard Extensions (SGX) among others. These extensions allow a program to allocate a region of DRAM, resources and a runtime environment (known as an enclave) specifically for that software alone, such that other programs cannot access its functions or violate its memory area through zero-day intrusions. At the time we were under the impression that the SGX extensions would be enabled across all Skylake CPUs (or at least a specific subset, similar to TXT) from day one, but some sleuthing from Tech Report has determined this is not the case.
    As described in a Product Change Notification (essentially a PDF released via the website and to major partners), only certain upcoming versions of Skylake processors will have SGX capabilities enabled. Rather than changing the commonly used nomenclature that identifies these processors (Core i7, i5, etc.), the ones with SGX enabled will carry a different S-Spec code. This code is a series of letters and numbers printed on the processor (and the box it came in) to identify the processor in Intel’s internal database. So while the outward name might not change (e.g. i7-6700K), the S-Spec can change for a number of reasons (stepping, updates or source), and this will not be readily apparent to the end user unless they get a chance to see the code before purchasing the product. The S-Spec change should be seamless, meaning no BIOS or microcode updates are required for existing systems, which makes it harder to confirm without running an SGX detection tool or checking whether SGX appears in the supported instruction list.
    Normally with this sort of change we would expect a difference in the stepping of the processor, e.g. a move from C-0 to C-1 or something similar, but Intel has not done this here. As a result, it could be speculated that an issue with the first few batches of processors rendered this part of the silicon not completely viable or consistent, and that tweaks to the process (rather than new masks) have brought the issue under control for manufacturing.
    Many users have noted that sourcing Skylake processors is still rather difficult outside the two overclockable versions and their non-K counterparts, and this might have something to do with it, if Intel was waiting for the full extension set to be enabled. It might not be considered that big of a deal, despite the fact that SGX has been part of Intel’s software mantra since at least 2013. We would imagine that specific enterprise software packages from vendors have been expecting these extensions to go live with certified systems since the launch of Skylake, meaning there might be some confusion if two identically named processors are separated only by the S-Spec code. As far as we know from Intel, we are also expecting a relevant update to current operating systems to allow SGX to work.

    In the document, the new SGX enabled S-Spec codes are provided on the right.
    To that extent, Intel has stated in the PDF which specific processors will have the change, covering the Skylake Core i7, i5 and Xeon E3 v5 parts in both OEM and boxed form. These new parts will be available to customers from October 26th, and in systems by November 30th, without the need for requalification. For non-business and non-enterprise use, we imagine both sets of parts will be in the supply chain for a good while, although one would expect that Intel will solely be producing the SGX-enabled parts from now on.
    Source: via Tech Report


    More...

  9. #5399

    Anandtech: Google’s Chromecast 2 is Powered By Marvell’s ARMADA 1500 Mini Plus - Dual

    When Google originally announced the second-generation Chromecast last week, in typical Google fashion they focused on features and uses over specifications. Given the capabilities of the new product we knew that there had to have been some changes – at a minimum the wireless component has changed – and thanks to a press release from Marvell we finally know what chips are in the new media receiver.
    The Chromecast 2 is powered by Marvell’s ARMADA 1500 Mini Plus (88DE3006), one of Marvell’s lower-end “digital entertainment processors.” The Mini Plus is the successor to the ARMADA 1500 Mini (88DE3005), which in turn was first introduced for the Chromecast 1 back in 2013. Like the original Mini, the Mini Plus is essentially tailor-made for the Chromecast, as it’s geared to be a low-cost solution for simple streaming devices.
    Google Chromecast Family

                       Chromecast (1)             Chromecast (2)                  Chromecast Audio
    SoC                Marvell ARMADA 1500 Mini   Marvell ARMADA 1500 Mini Plus   Marvell ARMADA 1500 Mini Plus
                       (88DE3005)                 (88DE3006)                      (88DE3006)
    CPU                1x Cortex-A9               2x Cortex-A7 (1.3GHz?)          2x Cortex-A7 (1.3GHz?)
    Memory             512MB                      512MB                           N/A
    Wireless           1x1 2.4GHz 802.11n         1x1 2.4GHz/5GHz 802.11ac        1x1 2.4GHz/5GHz 802.11ac
    Display Output     1080p                      1080p                           N/A
    Max Video Decode   1080p30                    1080p                           N/A
    Ports              HDMI,                      HDMI,                           3.5mm Combo Jack
                       Micro-USB (Power)          Micro-USB (Power)               (Analog + Optical Audio),
                                                                                  Micro-USB (Power)
    Launch Date        07/24/2013                 09/29/2015                      09/29/2015
    Launch Price       $35                        $35                             $35
    Unfortunately, in-depth details on the Mini Plus are hard to come by at the moment – and Marvell never published all that much about the original Mini either – but we do know that, unlike with the original Mini, Marvell has put a bit more customization work into the Mini Plus. The original Mini was in a few ways a cut-down version of Marvell’s more powerful Cortex-A9 based chips, for instance implementing just a single CPU core versus multiple cores. This time around the Mini Plus drops the single Cortex-A9 for a dual-core Cortex-A7 implementation, and it is the only ARMADA product utilizing the A7.
    Officially Marvell isn’t specifying clockspeeds; however, they are advertising that the Mini Plus delivers “up to 4900 DMIPS”. This is notable since the Cortex-A7 has an estimated ratio of 1.9 DMIPS/MHz, which puts the maximum CPU clockspeed at roughly 1.3GHz (4900 DMIPS / 2 cores / 1.9 DMIPS/MHz = ~1289MHz). Meanwhile, according to Marvell’s press release the Mini Plus is supposed to deliver 2.5x the CPU performance of the Mini, which is especially interesting because the A7, though not too far off the A9, is still a simpler part with lower IPC. The fact that CPU performance is ahead of the A9-based Mini even after factoring out the second CPU core (1.25x) therefore bodes well, indicating that Google hasn’t traded single-threaded performance for a multi-threaded gain. Curiously, this implies that the Mini in the original Chromecast was clocked quite low (~800MHz), but for the moment these are the numbers we have to work with.
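    The arithmetic behind those estimates can be written out explicitly. Note that the 2.5 DMIPS/MHz figure for the Cortex-A9 below is our own assumption based on commonly cited estimates, not something Marvell has stated:

```python
# Back-of-the-envelope check of the clock estimates, using the figures
# quoted in the article (4900 DMIPS total, 2 cores, ~1.9 DMIPS/MHz for A7).
total_dmips = 4900          # Marvell's quoted figure for the Mini Plus
cores = 2
a7_dmips_per_mhz = 1.9      # estimated Cortex-A7 ratio, per the article

clock_mhz = total_dmips / cores / a7_dmips_per_mhz
print(f"Mini Plus: ~{clock_mhz:.0f} MHz per core")    # ~1289 MHz, i.e. ~1.3GHz

# Working backwards for the original Mini (single Cortex-A9):
# Marvell quotes the Mini Plus at 2.5x the CPU performance of the Mini.
mini_dmips = total_dmips / 2.5
a9_dmips_per_mhz = 2.5      # assumption: commonly cited Cortex-A9 estimate
mini_clock_mhz = mini_dmips / 1 / a9_dmips_per_mhz
print(f"Mini: ~{mini_clock_mhz:.0f} MHz")             # ~784 MHz, i.e. ~800MHz
```

    Under those assumptions the numbers line up with both the ~1.3GHz Mini Plus estimate and the surprisingly low ~800MHz implied clock of the original Mini.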
    As for the GPU, Marvell is saying even less, only that it is an OpenGL ES 2.0 part. ES 2.0 parts are still very common in pure media streaming devices and TVs, as for the most part you don’t need much more GPU performance than is necessary for basic drawing and compositing at 1080p. All of the other ARMADA 1500 parts have used Vivante GPUs, and I expect the story is the same for the Mini Plus.
    The bigger question on the video processing side is whether the Mini Plus has HEVC support or not. All ARMADA 1500 parts launched since mid-2014 have included HEVC support, however as the Mini Plus is a low-cost part, it remains to be seen whether Marvell was willing to spend the die space and licensing costs on support for HEVC in a device only designed for 1080p in the first place. By and large, HEVC is being utilized by media streaming firms for 4K media rather than 1080p.
    Finally, Marvell’s press release also sheds light on the wireless networking chip used with the Mini Plus. Here Marvell has dropped the third-party AzureWave solution for their own Avastar 88W8887 (ed: that’s a lot of eights), a “quad radio” solution offering support for WiFi, Bluetooth, NFC, and FM radio receive. In the case of the Chromecast 2, Google is only making use of the WiFi functionality, where the 88W8887 supports 802.11ac with 1 spatial stream, allowing it to transfer up to 433Mbps over 5GHz. Otherwise, it’s interesting to note that of all of the technical changes that come with the switch from the Mini to the Mini Plus, it’s the improved WiFi capabilities of the 88W8887 that have seen the most promotion from Google itself.
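    As a sanity check of our own (not from Marvell's release), the 433Mbps figure is consistent with the 802.11ac PHY rate for a single spatial stream on an 80MHz channel at VHT-MCS 9, i.e. 256-QAM with rate-5/6 coding and a short guard interval:

```python
# 802.11ac single-stream PHY rate, 80MHz channel, VHT-MCS 9
data_subcarriers = 234     # data subcarriers in an 80MHz VHT channel
bits_per_subcarrier = 8    # 256-QAM carries 8 bits per subcarrier
coding_rate = 5 / 6        # MCS 9 coding rate
symbol_time = 3.6e-6       # OFDM symbol duration with 400ns short guard interval

rate_mbps = data_subcarriers * bits_per_subcarrier * coding_rate / symbol_time / 1e6
print(f"{rate_mbps:.1f} Mbps")  # 433.3
```

    That works out to 433.3Mbps, matching the marketing figure for 1x1 802.11ac devices.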


    More...

  10. #5400

    Anandtech: Microsoft Windows 10 Devices Day NYC Live Blog

