
Thread: Anandtech News

  1. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #5271

    Anandtech: NVIDIA Announces GameStream Co-Op, Beta Next Month

    Alongside today’s launch of the GeForce GTX 950, NVIDIA is also announcing a new streaming mode for GeForce Experience, the company’s multi-feature game streaming and optimization tool. The new feature, dubbed GameStream Co-op, is a case where the feature does exactly what it says on the tin, allowing for players to engage in co-op gaming in supported games via GeForce Experience game streaming.
    The feature, designed to allow another player to remotely join a game, is at a technical level a fairly straightforward extension of NVIDIA's existing GameStream technology. Rather than taking over primary control of a game via GameStream, the remote client acts as an additional or mirrored controller – a second, remote player. The ultimate idea here is that this makes games that have local co-op but lack network co-op playable over the internet, complete with integrated voice chat to better replicate the couch co-op experience, and it allows co-op with only a single copy of the game instead of copies on each end.
    What is especially interesting though is that for the first time in a GameStream feature, the remote endpoint does not need to be an NVIDIA Shield device. Rather, via a new plugin for Google’s Chrome browser, the endpoint can be any PC fast enough to decode the H.264 video stream and send back commands (officially NV specs the minimum as a Core i3-2100). Given the limited proliferation of Shield devices this makes GameStream co-op much more widely usable, as it would now be accessible from most Windows 7+ PCs.
    From a usability standpoint GameStream Co-Op is going to incur the same kinds of latency penalties as straight-up GameStream, which is to say that it will depend on the game and internet connection. NVIDIA likes to promote GameStream as low-latency – and strictly speaking this is true for the NVENC video encoder – but NVIDIA doesn’t control the rest of the network. Some games will handle this better than others, and playing with a friend in the same city will usually be a better experience than in the next country over.
    As for the host side, GameStream co-op will work with most devices that currently support GameStream. The one exception for now is that GameStream Co-Op is limited to desktops only, with laptop support coming at a future date (much like GameStream initially). NVIDIA is also recommending a relatively high 7Mbps upload for the feature, in-line with previous GameStream internet requirements.
    Moving on, while GameStream Co-Op is being announced today alongside the GTX 950 launch, unlike the GTX 950 it is not available today. A beta will begin in September, with the feature coming out of beta at a later date, similar to previous NVIDIA GeForce Experience feature betas.
    Finally, the fact that NVIDIA now allows a degree of GameStream support to non-Shield devices is an interesting development. The company has until now kept GameStream and Shield tied close together, declining requests to allow game streaming to other PCs. Though the announcement of GameStream co-op doesn’t truly enable the full GameStream experience to any remote PC, all the pieces are now in place if and when NVIDIA decides to enable it.



    More...

  2. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #5272

    Anandtech: The SilverStone SX600-G SFX PSU Review

    With the recent strong penetration of computers into the living room and other applications that often require small form factor cases, the demand for quality SFX PSUs is slowly but surely rising. Today we are reviewing SilverStone's SX600-G, one of the most powerful SFX form factor PSUs currently available: 600 Watts, fully modular, and 80Plus Gold certified, all within a small box that can fit into the palm of a hand, and it is full of surprises.

    More...

  3. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #5273

    Anandtech: Understanding Intel's Dynamic Platform and Thermal Framework 8.1: Smarter Throttling

    In mobile, thermal throttling is effectively a fact of life, as modern, thin tablets and smartphones leave little room for implementing high-performance fans. We can use CMOS scaling to try to reduce power consumption, but in order to keep up with increasing performance demands it's important to scale performance as well.
    This means that while performance per watt will increase from generation to generation thanks to manufacturing and architectural improvements, finding ways to allow CPUs to use more power is also part of the equation in order to get the best possible performance out of a passively cooled device. This has been evidenced in recent years by the ever-increasing dynamic power ranges for mobile CPUs, which has seen idle power consumption drop while maximum load power consumption has risen.
    By increasing the dynamic range of these CPUs, it has allowed manufacturers to further optimize their devices for workloads that require high CPU performance for only short periods of time, a surprisingly common workload. For the end user then, there’s a clear benefit to both effective turbo and thermal management, as any kind of race to sleep workload sees benefit from turbo clocks, while long-running high-load workloads benefit significantly from smart thermal management.
    Although Intel's Dynamic Platform and Thermal Framework (DPTF) 8.1.x has been out for months now, these features haven't really received much attention so far. For those unaware of what Intel's DPTF is, it's effectively Intel's solution to managing throttling in a smart manner according to the TDP limits of the device, based upon thermal sensors and power monitoring, for x86 tablets, 2-in-1s, and PCs in general. If you think this sounds a lot like ARM's Intelligent Power Allocation in some ways, you'd be right.
    For the most part, previous iterations of DPTF have been pretty standard in the sense that they rely on a fixed correlation between temperature sensors and critical values like Tskin max and Tjunction max of all chips on the board. As these devices are unable to directly read skin temperatures, the system must instead infer what Tskin should be. And once certain temperature sensors read out certain values, the system assumes that the skin temperature has reached a maximum value, which means it’s necessary to begin throttling the system. Similarly, if an on-die chip sensor reads a specific value that is close to the maximum junction temperature, the system will react by throttling appropriately.
    However in the case of DPTF 8.1, this system has changed. Instead of a fixed correlation, the system is now adaptive depending upon a number of factors. One of the key examples cited is device orientation, as how a device is placed has a significant impact on its ability to cool itself. For example, when a tablet is placed flat on a table with the display up, the back of the tablet is unable to rely on convection and ambient air flow to cool the back cover. With previous iterations of DPTF, this worst-case style setup was what was used to determine how to correlate temperature sensors with skin temperatures.
    The problem with that approach was that when the device was placed in a situation where cooling was better, such as held vertically in the air or held in a dock with a circulation fan, DPTF wouldn't change the temperature sensor correlations to skin temperature. This meant that in long-run TDP-gated situations the device was throttled to a greater extent than truly necessary.
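    To make the difference concrete, here is a minimal sketch of the two throttling approaches in Python. The sensor-to-skin correlation coefficients, the skin temperature limit, and the orientation states are invented for illustration; they are not Intel's actual DPTF values or API.

        # Hypothetical sketch of fixed vs. orientation-adaptive skin temperature
        # estimation, loosely modelled on the DPTF behaviour described above.
        # All coefficients and thresholds are illustrative, not Intel's values.

        T_SKIN_MAX = 43.0  # assumed skin temperature limit in Celsius

        # Pre-8.1 style: one worst-case correlation, tuned for a tablet lying
        # flat on a table with the display up.
        def skin_temp_fixed(sensor_c):
            return 0.85 * sensor_c + 5.0

        # 8.1 style: pick a correlation based on how well the current
        # orientation or docking state can shed heat.
        CORRELATIONS = {
            "flat_on_table": (0.85, 5.0),   # worst case, same as the fixed model
            "held_vertical": (0.70, 4.0),   # convection helps the back cover
            "active_dock":   (0.55, 3.0),   # fan-assisted cooling
        }

        def skin_temp_adaptive(sensor_c, orientation):
            slope, offset = CORRELATIONS[orientation]
            return slope * sensor_c + offset

        def should_throttle(sensor_c, orientation="flat_on_table", adaptive=True):
            est = (skin_temp_adaptive(sensor_c, orientation) if adaptive
                   else skin_temp_fixed(sensor_c))
            return est >= T_SKIN_MAX

        # With the same on-die sensor reading, the adaptive model leaves more
        # power headroom when the device is held vertically or docked:
        print(should_throttle(45.0, "flat_on_table"))  # True  (throttle now)
        print(should_throttle(45.0, "held_vertical"))  # False (keep boosting)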
    It turns out this one change has enormous effects on performance in these thermally limited situations. With a vertical orientation, heat dissipation and thereby power headroom increases by 66%. With an active cooling dock, power headroom increases by 97%. As Intel reasons and as their data backs up, there are clear benefits in not being conservative with throttling in situations where physics says cooling performance is better than the worst case scenario.
    Of course, system performance won’t increase by quite those levels due to the fact that CPU power draw increases quadratically with clock speed. According to Intel, in benchmarks this leads to an average performance increase of about 35%, with some use cases showing as much as double the performance in this mode.
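    A rough back-of-the-envelope check shows why the performance gain is smaller than the headroom gain. The quadratic power-versus-clock relationship is the one stated above; treating performance as proportional to clock speed is a simplification.

        # If power scales roughly with the square of clock speed, extra power
        # headroom buys clock speed only as its square root.
        headroom = {"vertical orientation": 1.66, "active cooling dock": 1.97}

        for mode, power_gain in headroom.items():
            clock_gain = power_gain ** 0.5  # P ~ f^2  =>  f ~ sqrt(P)
            print(f"{mode}: +{(power_gain - 1) * 100:.0f}% power headroom -> "
                  f"roughly +{(clock_gain - 1) * 100:.0f}% clock speed")

        # vertical orientation: +66% power headroom -> roughly +29% clock speed
        # active cooling dock:  +97% power headroom -> roughly +40% clock speed
        # which brackets the ~35% average benchmark gain Intel quotes.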
    Ultimately, coming from IDF 2015 it isn’t clear at this time when we can expect this to show up in 2-in-1s, tablets, and other devices. But given that DPTF is a software suite it’s well within possibility that devices already out there with DPTF could receive an update that implements these improved throttling mechanisms.



    More...

  4. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #5274

    Anandtech: Intel Launches New Socketed 5x5 mini-PC Motherboards

    Over the last couple of years, the ultra-compact form factor (UCFF) has emerged as one of the bright spots in the troubled PC market. Kickstarted by Intel's NUC (Next Unit of Computing) designs, it has been successfully cloned by other vendors such as GIGABYTE (BRIX), Zotac (C-series nano) and ASRock (Beebox). With platform performance increasing every generation, and performance requirements getting tempered by the rise of the not-so-powerful smartphones and tablets, Intel could pack a heavy punch with their 102x102mm NUC motherboards.
    Atom-based units (using Bay Trail) could provide very good performance for most users. Intel tried to shrink the PC even further by releasing a Compute Stick based on the Bay Trail Atom Z series SoCs earlier this year. ECS, with their LIVA designs, has adopted the Mini Lake reference design for their UCFF PCs. All of these UCFF PCs come with BGA CPUs / SoCs. The configurability aspect is minimal from an end-user's perspective. Looking at the mini-ITX form factor immediately leads us to a hole in the mini-PC lineup between it and the NUC.
    At IDF last week, Intel quietly launched the new 5x5 motherboard form factor. Coming in at 147x140mm, it is closer to the NUC in the fact that it can operate directly off DC power and takes SODIMM memory. Approaching from the mini-ITX side gives us the LGA socket for a Core processor. Unfortunately, at this size, we have to make do without the full length PCIe slot.
    Intel suggests that solutions using the 5x5 boards could come in with a 39mm height for a volume of 0.89L (when using M.2 drives and a heat sink suitable for 35W TDP CPUs). 65W TDP CPUs and 2.5" drive support would obviously increase the height requirements.
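    As a quick sanity check of the quoted volume (only the 147x140mm board size, the 39mm height, and the 0.89L figure come from Intel; the square chassis footprint below is an assumption):

        # Volume of the bare 5x5 board footprint at the suggested 39 mm height.
        board_volume_l = 147 * 140 * 39 / 1e6
        print(round(board_volume_l, 2))        # ~0.8 L

        # Intel's 0.89 L figure therefore implies a chassis footprint a few
        # millimetres larger than the bare board on each side.
        chassis_area_mm2 = 0.89e6 / 39
        print(round(chassis_area_mm2 ** 0.5))  # ~151 mm per side, if square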
    Many usage areas which required custom-sized embedded boards (such as digital signage / point of sale terminals / kiosks etc.) have now opened up for the PC, thanks to the NUC and other similar form factors that were introduced over the last year or so. The new 5x5 form factor ensures that a mini-PC is available for every size and performance requirement. As of now, it looks like In-Win has a chassis design ready for the new form factor. We are awaiting more information on the board(s) and availability details.


    More...

  5. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #5275

    Anandtech: ECS LIVA x2 Review: A Compact & Fanless Braswell PC

    The popularity of the NUC form factor has led to a resurgence in the nettop category. Thankfully, the core computing performance of the new systems has been miles ahead of the nettops of the past, and this has created an overall positive sentiment for the ultra-compact form factor (UCFF) in the minds of consumers. ECS has been attempting to differentiate in the UCFF space with fanless systems using Mini Lake boards and custom-designed chassis in their LIVA series. The feature set and pricing of the LIVA units target developing and cost-sensitive markets. We have already reviewed two of their Bay Trail-based systems, the original LIVA and the LIVA X. Read on to find out what ECS manages to deliver with the Braswell-based LIVA x2.

    More...

  6. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #5276

    Anandtech: Qualcomm Details Hexagon 680 DSP in Snapdragon 820: Accelerated Imaging

    Although we tend not to focus too much on the tertiary aspects of an SoC, they are often important to enabling many aspects of the user experience. DSPs are important for a number of unique applications such as voice processing, audio processing, and other input processing tasks.
    Today at Hot Chips, Qualcomm elected to reveal a number of details about their Hexagon 680 DSP, which will ship in the Snapdragon 820. Those that have followed our coverage regarding the Snapdragon 820 ISP features will probably be able to guess that a number of features on the Snapdragon 820 Spectra ISP are enabled through the use of this newer DSP.
    For those that are unfamiliar with DSPs, the basic idea is that they are a sort of in-between point in architecture design between highly efficient fixed function hardware (think: video decoders) and highly flexible CPUs. DSPs are programmable, but are rigid in design and are built to do a limited number of tasks well, making them efficient at those tasks relative to a CPU, but more flexible than fixed function hardware. These design goals are typically manifested in DSPs as in-order architectures, which means that there's much less power and area dedicated in silicon to parallelizing code on the fly. This means that while a DSP can do a number of workloads that would otherwise be impossible on a completely fixed-function block, you wouldn't want to try and use one to replace a CPU.
    Consequently the architecture of DSPs like the Hexagon 680 is relatively alien compared to standard CPUs, as optimization is everything in the applications where DSPs make sense. For example, DSP instruction sets are often VLIW (very long instruction word), in which multiple execution units are driven in parallel by a single instruction. Certain arithmetic operations are also highly accelerated with special instructions in order to enable key algorithms for signal processing such as the Fast Fourier Transform (FFT).
    In the case of the Hexagon 680, one of the key features Qualcomm is focusing on for this launch is Hexagon Vector Extensions (HVX). HVX is designed to handle significant compute workloads for imaging applications such as virtual reality, augmented reality, image processing, video processing, and computer vision. This means that tasks that might otherwise be running on a relatively power hungry CPU or GPU can run on a comparatively efficient DSP instead.
    The HVX extension to Hexagon has 1024-bit vector registers, with the ability to address up to four of these slots per instruction, which allows for up to 4096 bits per cycle. There are 32 of these vector registers, which appear to be split between two HVX contexts. There is support for up to 32-bit fixed point operations, but floating point is not supported in order to reduce die size and power consumption, as the previously mentioned applications for the Hexagon 680 don't need floating point support. As DSPs tend to have ISAs tailored for the application, the Hexagon 680 HVX units support sliding window filters, LUTs, and histogram acceleration at the ISA level. The performance of these units is said to be sufficient for 4K video post-processing, 20MP camera burst processing, and other applications with similar compute requirements.
    Outside of these per-context details, the threading model and memory hierarchy of the Hexagon 680 are quite unique. For scalar instructions, four threads are available with a 4-way VLIW architecture running at 500 MHz per thread. These scalar units all share an L1 instruction cache, L1 data cache, and L2 cache. The two HVX contexts in the Hexagon 680 can be controlled by any two scalar threads and also run at 500 MHz without stalling other scalar units not involved in controlling the vector units. This level of hardware-level multithreading, along with QoS systems and L2 soft partitioning on a per-thread basis, helps to make sure audio and imaging tasks aren't fighting for execution time on the Hexagon DSP.
    Meanwhile the vector units are fed exclusively from the L2 cache that is shared with the scalar units, a choice Qualcomm made due to the overhead that comes with an L1 cache for image processing workloads. This L2 cache can do load-to-use in a single cycle though, so one could argue that it is technically an L1 cache at times anyhow. The Hexagon 680 in the Snapdragon 820 will also be able to have data from the camera sensor streamed directly to the L2 cache and shared with the ISP to avoid the power cost of going off-die to DRAM. There's also an SMMU (System Memory Management Unit) which allows for no-copy data sharing with the CPU for multiple simultaneous applications. DSP memory writes will also snoop-invalidate the CPU cache without the need for the CPU to do any cache maintenance work, which reduces power consumption and improves performance.
    Relative to a quad-core Krait, the advantages of running some workloads on a DSP are enormous based on Qualcomm's internal benchmarks. According to Qualcomm, the NEON units in the Krait CPU are generally representative of NEON units within the industry, which is the reason why they've been used as the reference point here. Within a single logical "core", Krait will only support 128-bit NEON with a single SIMD pipeline, compared to the 4-way, 1024-bit SIMD units of the Hexagon 680. SIMD threads also run out of a 512KB L2-but-almost-L1 cache, as opposed to the 32KB L1 instruction/data caches of Krait, which helps to hide the latency effects of DRAM. The NEON units of Krait and many other ARM CPUs are capable of floating point, but in a workload like low light video enhancement the Hexagon 680 will be able to complete the same amount of work at three times the speed, while using an order of magnitude less power, due to the inherent advantages of a task-specific DSP architecture. The four scalar threads available in the DSP also mean that entire algorithms can be off-loaded to the DSP instead of partially running on the CPU, which also reduces power consumption and makes it easier for developers to take advantage of the DSP.
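    As a rough illustration of the raw vector-width gap being described (clock speeds, utilization, and instruction mix are ignored, so this is not a performance claim):

        # Peak SIMD bits processed per cycle, ignoring clocks and utilization.
        krait_neon_bits  = 128        # single 128-bit NEON pipeline per core
        hexagon_hvx_bits = 4 * 1024   # four 1024-bit slots per instruction

        print(hexagon_hvx_bits // krait_neon_bits)  # 32x wider per cycle
        print(hexagon_hvx_bits // 8)                # 512 8-bit pixels per cycle

        # The ~3x real-world speedup Qualcomm quotes is far smaller than this
        # raw ratio because the DSP runs its threads at 500 MHz, the workloads
        # are not purely vectorizable, and the CPU comparison uses four cores;
        # the larger win is the order-of-magnitude power saving.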
    While Hexagon 680’s vector and scalar engines are useful for heavy-duty signal processing workloads, the addition of the low power island (LPI) DSP makes it possible to do away with separate sensor hubs in smartphones. According to Qualcomm, this DSP is completely separate from the scalar and vector compute DSP previously discussed (yet still part of the overall Hexagon DSP design), and sits on its own power island so the rest of the SoC can be shut down while keeping the LPI on. This also shouldn’t have a different process technology or a radically different standard cell library, as the advantages from the leading edge FinFET process should help significantly with power consumption.
    It’s said that this low power island with an independent DSP and newer process node is enough to improve power efficiency by up to three times in certain workloads compared to Snapdragon 808. I suspect that this was done instead of a comparison to the MSM8974/Snapdragon 800 generation because the Hexagon DSP was updated in the move from Snapdragon 805 to 808. Qualcomm emphasized the choice of a DSP over an MCU for this task, as in their internal testing a DSP delivers better power efficiency than a Cortex M-class MCU for more advanced sensor algorithms. The software stack for all of these features is already said to be quite complete, with a framework and algorithms included for OEM development. The broader Hexagon 600 series SDK is also quite extensive, with a number of utilities to allow for faster and easier development.
    If you're like me, after going through all of this information you might be wondering what the value of these vector DSP extensions is. In discussions with Qualcomm, it seems that the reasoning behind pushing a number of image processing tasks to the Hexagon DSP core is mostly because the algorithms behind things such as HDR video, HDR image merging, low light image enhancement, and other advanced processing are still in flux even from software update to software update. As a result, it isn't viable to implement these aspects of the imaging pipeline in fixed-function hardware. Without the use of the Hexagon DSP, these tasks could potentially end up running on the CPU or GPU, affecting user experience in the form of higher shot-to-shot latency, reduced battery life when using the camera, and higher skin temperatures. It remains to be seen whether OEMs using Snapdragon 820 will use these DSPs to the fullest extent, but the Snapdragon 820 is shaping up to be a promising 2016 high-end SoC.


    More...

  7. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #5277

    Anandtech: Microsoft Makes Cortana For Android Available As A Public Beta

    Earlier this year Microsoft announced that their digital assistant Cortana would be making its way to iOS and Android in addition to its launch on Windows Phone and Windows 10. Today Microsoft opened the public beta for Cortana on Android, allowing users to run the same digital assistant on their Android smartphones and tablets as the one on their Windows 10 computers.
    Because of Android's ability to choose what applications are used for certain tasks, users can alter the long press of their device's home button to trigger Cortana instead of Google Now. As of right now, Cortana on Android has a similar interface and functionality to its Windows counterpart, but at this point in the beta there's no way to use Cortana to toggle settings, launch apps, or to activate Cortana itself by saying "Hey Cortana."
    Users interested in trying the public beta for Cortana on Android can use this link to become a beta tester.


    More...

  8. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #5278

    Anandtech: The Windows 10 Review: The Old & New Face of Windows

    Let's flash back to 2012. About three years ago, Windows 8, the previous major release of Microsoft's ubiquitous operating system, was released to manufacturers. This was to be Microsoft's most ambitious release yet. Traditional PC sales were in decline, and more personal devices such as the iPad were poised to displace the dominant PC platform. Microsoft's response was to change Windows more than in any previous release, in a bid to make it usable with the tablet form factor. Windows 8 launched in October 2012 to much fanfare, but did not help the struggling PC market recover. Windows 10 is here to fix what ailed Windows 8, with a goal of driving adoption from all older versions with a combination of returning features, new features, and a great set of improvements.

    More...

  9. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #5279

    Anandtech: Podcast 34: IDF Special! Interview with Dr Genevieve Bell, Intel Fellow

    Intel’s Developer Forum is an annual event in San Francisco focusing on how Intel attempts to enable product design and by extension, experience. Part of that is along the technical route, covering developments in Intel’s latest technologies in both software and hardware and how to use them best. The other side of the coin is more towards positioning Intel for future development, especially when it comes to IoT, homebrew and more focal applications.
    Due to some chance meetings (such as randomly bumping into each other after 9pm near the show floor), we were able to secure some time to interview Dr Genevieve Bell, Intel Fellow, resident anthropologist and Vice President of Intel’s Strategy Office.
    It might seem odd for a company like Intel to hire an anthropologist, but Dr Bell has been examining the intersection of technology and human interaction at Intel for almost 16 years, covering everything from how computational assistants develop a personality or are given a gender, to where that data is shared and used, and what unique joys and frustrations can arise from the different perceptions of technology based on the users' environment.

    Image from @hgw1967 of Intel
    Dr Bell presented a keynote talk at IDF focusing on the maker community currently revolving around Intel, which included examples of historic inventors (Curie, Edison) who could be considered the makers of their time, right up to the present day with an appearance by Dale Dougherty, the publisher behind Make magazine and numerous maker events/hackathons around the US and worldwide. As a result, during our time with Dr Bell we focused on that intersection of makers and the maker community, the altruistic intentions of designers competing with corporate interests in this space, and how the perception of the maker community is currently in a large state of flux from the perspective of both end-users and regulators. Someone also had to bring up the recent cricket scoreline, in classic Aussie vs Brit style.
    Unfortunately IDF was being dismantled around us after the 16 minute mark, so there is some background noise.
    The AnandTech Podcast - Episode 34
    Featuring

    • Dr Ian Cutress, Host, Senior Editor (@IanCutress)
    • Dr Genevieve Bell, Special Guest, Senior Anthropologist and VP of the Corporate Strategy Group at Intel (@FeralData)

    iTunes
    RSS - mp3, m4a
    Direct Links - mp3, m4a

    Total Time: 20m 39s
    Many thanks to Dr Bell for her time – having followed her thoughts online for a while, finally getting a chance to sit and talk was an almost overwhelming experience. I subsequently forgot most of the questions I had mentally stored, perhaps dwelling on a couple of points with long-winded questioning. I promise I'll be better prepared next time!


    More...

  10. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    34,805
    Post Thanks / Like
    #5280

    Anandtech: Understanding Smartwatch Design

    Although I mentioned this in passing in the Apple Watch review, I’ve increasingly realized that it is difficult for both manufacturers and consumers to really understand what makes a wearable well-designed. Manufacturers are still struggling to figure out what the real market is for smartwatches, and as a result what kind of designs they should be chasing. Meanwhile manufacturers have also done a so-so job communicating the purpose, use cases, and abilities of their devices with consumers, and as a result consumers are unsure what makes one product better or worse than the next.
    After giving the matter some thought, I wanted to take a look at the subject of smartwatch design and by extension who manufacturers are designing for. Who is the real audience for smartwatches? Which features does that audience really need? How do those features mesh with consumer expectations? The market is broad - and that goes for both audiences and manufacturers - but there are several common threads across all smartwatches that can help everyone better understand who the primary audience is and what makes a good smartwatch for that audience.
    Assumptions

    To start things off, I think it’s worth talking about audiences, as there are two very different audiences that have to be addressed in the overall watch marketplace. The first audience is the audience that wears traditional watches. I don’t necessarily speak for this audience since I don't wear a traditional watch, but it’s clear that features like long battery life, always-on watchface, standardized bands, highly refined mechanical movements, and general craftsmanship are major points of focus.
    However, at the same time the traditional watch market is inherently limited because even the cheapest cellphone has integrated timekeeping capabilities; if all one needs is a device to keep time, phones can already do this and more. As a result, the second audience can be defined in opposition to the first, as the people who don't wear watches on a regular basis and have very different needs and expectations from the first group. There's also increasing evidence to suggest that there are significant demographic shifts, as younger people tend not to wear watches as often.
    The problem with this sort of disruption of the traditional watch market by the cellphone is that in the end there will be a growing segment of the market that doesn't necessarily see the value of a traditional watch. To me, as a college student who has grown up with cellphones, a traditional watch is mostly unnecessary because I already have a way of checking the time, setting alarms/timers, and running a stopwatch. Combined with the discomfort of existing bands, I simply found it difficult to justify the need for a watch, as there were issues with ergonomics on top of a general lack of utility. Although the discomfort aspect may be a personal problem, I suspect I'm hardly alone in the latter. As a result, as I see it, outside of niche use cases the only real remaining purpose of a traditional watch is as a fashion accessory.
    Implications

    If we assume that people in general no longer wear traditional watches - and hence don't value the functionality provided by those devices - then the implications are significant for how the design of a smartwatch should be approached. The first, and perhaps most obvious, change is that battery life is irrelevant when comparing to traditional watches. After all, current quartz-based watches last years between battery changes, but regardless of battery life, someone that doesn't wear a watch obviously won't care how long a watch lasts. Battery life still matters in terms of hours versus days, but no amount of battery life a smartwatch could ever attain would be enough for someone who sets their standards based on traditional watches.
    Similarly, things like always-on display are surprisingly not strictly necessary. Even if a traditional watch has an “always-on display”, it once again doesn’t matter because our design assumes that the average person has already decided that they don’t need a traditional watch, that they don't need to be able to see the time at every second of every day. Of course, responsiveness to make sure that the display is on instantly/just in time for the user to see the display matters, but it certainly doesn’t matter if other people can see what’s on the display. Things like mechanical movements also become irrelevant because the people that care about mechanical movements are people that have already bought a watch. Accurate time-keeping does matter, but a precise quartz-based clock is sufficient for this.
    With that in mind, while there are a number of traditional watch features that are arguably unimportant to the audience for smartwatches, there clearly are other features that matter just as much to smartwatches as to traditional watches. Probably the easiest way to turn users off is to have a smartwatch with an uncomfortable or otherwise irritating design. Even ergonomic problems that aren't immediately obvious will compound over time. In some ways, this is like scratching skin: at first it might feel neutral or even soothing, but with enough time it becomes painful and even harmful.
    The other problem here is also industrial and material design. I’m probably not the right person to consult regarding the “right” design choices in this regard, but even I can recognize when a design is just not particularly tasteful. Given that a smartwatch is inevitably even more personal than a smartphone, poor design is basically unacceptable here. In some ways, lessons can be drawn from the watch industry and applied to this problem, but regardless of the approach taken it’s important for the watch to be broadly appealing in design.
    Plastic watches in this context may have some appeal to the same people that like rubberized Casio G-Shock watches, but browsing for watches above 200 USD in price on Amazon shows that pretty much every watch is going to have leather and/or metal construction. Materials that aren’t authentic like pleather will inevitably affect the end user perception of value, even if polymers are enormously useful and often superior in some ways to metal or leather.
    Moving on, now that we’ve covered what doesn’t matter to our assumed audience and what will make this hypothetical watch easily abandoned or returned, we can start to talk about what is going to sell the watch. This is easily the hardest part of all three aspects discussed here, which is only obvious if you start with the assumption that a significant portion of buyers have already decided that they don’t care for a traditional watch. If your product is effectively a traditional watch but with extra smart functionality, it may end up in the uncomfortable position of being too “digital” for those that want traditional mechanical watches, and not particularly interesting to those that aren’t interested in traditional watches because the extra functionality isn’t enough to justify the expense.
    We can't use traditional watches as a model for how smartwatches should function, but we can draw some inspiration from smartphones. This doesn't mean that an OEM should try to cram a smartphone into a watch form factor and expect success, but the same sort of general model will help to create a solid foundation on which the rest of the user experience can rest. This means adding significant amounts of general purpose compute, which includes elements like a CPU, GPU, RAM, and NAND.
    However, it's dangerous to assume that this means using a smartphone SoC with some BSP-level changes, as just the difference in TDP and battery size between a smartphone and a watch means that it's important to recalibrate expectations for clock speeds. The severe constraints on PCB size also mean that there is an even greater need for integration on an SoC relative to smartphones, especially because unlike smartphones there are hard size constraints based upon wrist size. No one wants to wear a wall clock on their wrist, so this is especially critical.
    We can also draw a lot of lessons learned from the smartphone industry for displays. As the display is going to be the main way the end user is going to receive information from the watch, it has to be extremely effective at this task. In order to do this, we need a display with high resolution, reasonably high frame rate, and an acceptably wide color range/grayscale.
    From the smartphone space, we've discussed display resolution before, and there's a sliding scale of acceptable resolutions. However, smartwatch displays will inevitably have a much larger minimum viewing distance, because no one is going to hold a smartwatch close to the eye the way a smartphone can be held when reading in bed or used for VR, which effectively puts the display as close to the eye as possible. Smartwatches as a result need less absolute resolution, but the average distance from the eye is going to be similar to smartphones, so 300-400 PPI will be needed to maximize spatial resolution and allow for maximum information density.
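    To sanity-check that 300-400 PPI range, assume roughly 20/20 visual acuity (about one arcminute per pixel) and a wrist-to-eye distance in the same 25-45 cm range as a phone; both the acuity figure and the distances are assumptions for illustration, not numbers from this article.

        import math

        # PPI at which one pixel subtends roughly 1 arcminute (approx. 20/20
        # acuity) at a given viewing distance.
        def ppi_for_one_arcmin(distance_mm):
            pixel_pitch_mm = distance_mm * math.tan(math.radians(1 / 60))
            return 25.4 / pixel_pitch_mm

        for d in (250, 300, 450):
            print(f"{d} mm viewing distance -> ~{ppi_for_one_arcmin(d):.0f} PPI")

        # 250 mm -> ~349 PPI
        # 300 mm -> ~291 PPI
        # 450 mm -> ~194 PPI
        # At phone-like distances, 300-400 PPI is roughly the point where
        # individual pixels stop being resolvable.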

    Frame rate is also important, as smartphones have shown that it’s important to have a UI that responds quickly to user input and other changes in the system. E-Ink is therefore effectively unacceptable here, as the end user is going to end up spending a lot of time waiting on the display to refresh in order to continue navigating a user interface, and animation is effectively impossible. 30 FPS might be acceptable here to try and reduce power consumption, but 60 FPS or higher is necessary for fluid UI and good user experience (cinematic or not).
    Color range is the last element, but an especially critical one. Although sRGB/Rec. 709 is the industry-standard gamut, this really just represents a starting point, and as technology advances, wider gamuts like Rec. 2020 will become industry standards and will be necessary to accurately reproduce content. Grayscale isn't really color, but the two problems are generally related from a calibration standpoint. Accurate grayscale reproduction falls under calibration, but the other aspects that affect grayscale include the peak contrast of the display and the reflectivity of the display.
    The former is a generally well-understood problem, and true infinite contrast can be achieved by using an OLED display. The latter has generally been overlooked even in the smartphone space, as testing reflectance is often well beyond the scope of most websites, and subjective observation of reflectance is strongly affected by changes in ambient light conditions. Although single crystal sapphire can avoid scratches to the display, without anti-reflection coatings the reflectance of the material is significantly higher than that of traditional aluminosilicate glass. Regardless, it's clear here that experience in the mobile industry will help significantly with getting displays right in wearables.
    If the display is the output, then the other side of the problem that needs to be tackled is input. In this regard, the challenges are even more significant than they were with smartphones. Unlike on a smartphone, a touchscreen keyboard is absurd and unacceptable from a user experience standpoint. Touchscreens can definitely be helpful, but the precision of a touchscreen on such a small display means that large touch targets are necessary. Trying to solve this problem is difficult, but we've definitely seen viable solutions already in the form of the digital crown and Force Touch on the Apple Watch and Google ATAP's Project Soli. I'm not sure how many other ways there are to implement similarly precise input solutions, but this is one case where experience in smartphones is insufficient to deal with the new challenges presented in wearables.
    Outside of these hardware challenges, probably the most important aspect of the user experience will be software. Existing smartphone OSes can definitely serve as a useful base, but the entire UX has to be redesigned to deal with the realities of smartwatches and also to enable use cases that will actually make the watch worth buying. This isn't nearly as easy as it sounds, because pretty much every wearable OS I've tried so far has been rather disappointing here. watchOS arguably stands alone right now as the only wearable OS that is complete enough to provide a user experience that justifies a smartwatch, but as I noted in the review, even watchOS has a long way to go before it's worth recommending to a mass-market audience.
    A smartwatch OS has to have a sort of hierarchy of information, in which there is information that is immediately given to the user at first glance, such as the time, notifications, and other quick information. Equally quick actions should be possible. However, it's also important to have actions that might not necessarily be quick but allow for useful interaction without the significant context switch that happens when using a smartphone. Key use cases here include opening a messaging application to read and send messages, reading/managing email, viewing/creating calendar events, turn-by-turn GPS navigation from the wrist, short checks of social media/news like BlinkFeed, and various metrics from sensor tracking such as fitness. There is no step-by-step guide for how to design such a UI, but we can draw lessons from smartphone UI design. However, it's important to keep in mind that blind translation of smartphone UI to smartwatch UI is an excellent way to end up with poor information density and frustrating user experiences.
    A lot of these previously discussed use cases are definitely already present on smartphones, which may be confusing to those that haven't used smartwatches before, but it's really important to emphasize that the smartwatch avoids the loss of context that comes from using a smartphone. This is because using a smartwatch means the display is usually kept a good distance away, and even if it is the center of focus it's still relatively easy to keep track of one's surroundings. With a smartphone, the display is usually sufficiently large that it's easy to focus only on the display content, as evidenced by anyone that uses their smartphone while walking.
    However, these advantages are only obvious to those that already have experience with smartwatches. It’s also important to appeal to people that haven’t had any experience with smartwatches. This means implementing functionality that is otherwise impractical on a smartphone. The first, and perhaps most obvious way of doing this is health tracking, which can be done by using pulse oximeters, accelerometer/gyroscope sensor fusion to track distance/exercise/standing, and various other sensors. Sleep tracking is also reasonably viable with a smartwatch, but this imposes some pretty significant battery life requirements if charge time isn’t less than about half an hour to an hour.
    In general, the applications here beyond fitness are difficult to think of because they need to effectively exploit the placement of a wrist-mounted general compute platform for applications that would otherwise be impossible to accomplish on a smartphone. Applications like using a smartwatch to unlock locks would definitely fall into this category, but this requires significant infrastructure in addition to buying the watch, which represents a barrier to adoption. Intel's proof of concept for a security wristband could definitely be extended to a number of applications that would make sense in a smartwatch.
    Final Words

    In the end, smartwatch design isn't something that can be fully explained in a single short article. Despite this, it's clear to me that a number of companies are simply making smartwatches as a possible growth market without really understanding the value of smartwatches, and people in general don't seem to understand the value of smartwatches either. This is hardly surprising though, as this segment of the market has yet to hit a point of widespread adoption.
    As a result of the immature state of the market, the industry as a whole can't rely on consumer feedback either. People who haven't worn watches in years won't know what they want from a smartwatch until they see it. Even people who continue to wear traditional watches won't be the ideal source of information on what a smartwatch should do and what it should look like, because a smartwatch cannot just be a better watch. Experience from the watch and smartphone industries can be applied to solve some engineering challenges in the smartwatch form factor, but in other cases completely novel solutions must be created, especially in regards to the user interface.
    Of course, one lingering doubt remains here, as just about every argument here is built upon the assumption that most people no longer wear watches every day. However, even if this assumption is wrong, it's still important to consider when viewing how smartwatches are designed. After all, the new features and capabilities a wearable brings to the market should be able to stand alone. If the "smart" functionality isn't enough to stand on its own, all we're left with is a watch. If all we're left with is a watch, is it really worth calling a smartwatch?


    More...
