Foldable Displays Scheduled for 2016

image

Foldable phones

We have been waiting for foldable smartphones for some years, but 2016 could really be the year we finally get one.

Back in 2013, Samsung leaked a five-year plan for its mobile technology. Under the screen section, it pegged 2016 for foldable displays. These phones would fold up to fit in your pocket, but could have a screen the size of a tablet’s.

image

Samsung has already shown something called Project Valley (pictured below), so we even have a hint at what 2016’s folding phone could look like.

image

 

Rollable screens

If folding your phone like a wallet does not get your attention, what about rolling up a larger display like a poster?

2016 might be just a few weeks old, but LG has already publicly shown an 18-inch rollable screen. It is only the display that LG has managed to make super-flexible, with all the other hardware housed in a solid case at the bottom, but it’s still pretty impressive.

Given the screen size, it seems LG is thinking big tablet rather than smartphone, but there’s no reason the company couldn’t spend 2016 testing something like the picture below.

Foldable Display

Saving the touch-screen: the search for new nanotech crystals

image

Until the late 1980s, touch-screen technology was considered unreliable and imprecise, but in the era of smartphones and tablets, fast, accurate capacitive touchscreens are everywhere.

Such screens are based on transparent crystals that conduct electricity, and the compound most commonly used is indium tin oxide (ITO).

As a powder, indium tin oxide looks a pale yellow-green, but when deposited as a thin film it becomes transparent and conducts electricity.

However, indium is running out and becoming increasingly expensive. Internationally, the race is on to find an alternative.

Whatever that turns out to be, it must be low cost, boast high conductivity and deliver good optical transparency.

Fujitsu pioneered the flexible polymer PEDOT. While this has nowhere near the conductivity of ITO and its stability is questionable, it is improving with new formulations.

Silver nanowires could offer a way forward, as could graphene and a large number of other compounds. However, easily etchable zinc oxide is a likely candidate - if it can be made more stable. It would also be cheaper and more environmentally friendly.

The University of Wisconsin-Madison has a cross-disciplinary team developing chemically coated zinc oxide touch-screen alternatives. This involves coating zinc oxide crystals in different chemicals to change how they behave, and then finding out whether they could be a viable option for use as semiconductors.

How to really erase any drive -- even SSDs -- in 2016

image

There are easier and safer methods for erasing hard drives - including SSDs - than there were 10 years ago. Windows has two easy methods and Mac OS X has another; they are built into the operating system and free to use. Also included here is a further method for regulated industries or frequent erasures.

SSDs - now the standard in industrial systems - are a little different. Because of the Flash Translation Layer (FTL), the OS does not know where the data physically resides. As a result, the Mac's "Secure Empty Trash" command has been removed because it cannot be sure that the data has actually been erased. The easy workaround for SSDs is to encrypt, reformat and re-encrypt, as described below.

 

Secure Erase

Delete does not delete the data. All delete does is erase the file's reference information in the disk directory, marking the blocks as free for reuse. The data is still there, even though the OS cannot see it. That is what "file recovery" programs look for: data in blocks that the directory says aren't in use.

The Secure Erase command built into the ATA standard overwrites every track on the disk - including bad blocks, the data left at the end of partly overwritten blocks, directories, everything. There is no data recovery from Secure Erase.
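The contrast between a normal delete and a full overwrite can be sketched in a few lines of Python. This is a toy model, not a real disk tool - the class and method names are purely illustrative - but it shows why "file recovery" works after a delete and fails after an overwrite.

```python
# Toy model (not a real disk utility): contrasts a normal "delete", which
# only removes the directory entry, with an ATA-style Secure Erase, which
# overwrites every block. All names here are illustrative.

BLOCK_SIZE = 4

class ToyDisk:
    def __init__(self, num_blocks):
        self.blocks = [b"\x00" * BLOCK_SIZE for _ in range(num_blocks)]
        self.directory = {}  # filename -> list of block indexes

    def write_file(self, name, data):
        used = {i for idxs in self.directory.values() for i in idxs}
        free = [i for i in range(len(self.blocks)) if i not in used]
        chunks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
        idxs = free[:len(chunks)]
        for i, chunk in zip(idxs, chunks):
            self.blocks[i] = chunk.ljust(BLOCK_SIZE, b"\x00")
        self.directory[name] = idxs

    def delete(self, name):
        # A normal delete: drop the directory entry, leave the blocks alone.
        del self.directory[name]

    def secure_erase(self):
        # Secure Erase: overwrite every block, regardless of the directory.
        self.blocks = [b"\x00" * BLOCK_SIZE for _ in range(len(self.blocks))]

    def scavenge(self):
        # What a "file recovery" tool does: read blocks the directory ignores.
        used = {i for idxs in self.directory.values() for i in idxs}
        return b"".join(self.blocks[i] for i in range(len(self.blocks))
                        if i not in used)

disk = ToyDisk(num_blocks=8)
disk.write_file("secret.txt", b"top secret")
disk.delete("secret.txt")
recovered = disk.scavenge()  # the deleted data is still sitting in the blocks
disk.secure_erase()
gone = disk.scavenge()       # after the overwrite, nothing is left to recover
```

After the delete, `scavenge` still finds the file's bytes; after `secure_erase`, it finds only zeros.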

Secure Erase is sometimes difficult to use because it has been disabled in the BIOS, but there are other ways to achieve data security.

 

Encrypt, Reformat and Encrypt Again

Full disk encryption is built into Windows (Vista, 7, 8 and 10) and Mac OS X, and will work on any attached drive. However, the Windows encryption tool - BitLocker - usually requires a system with a TPM (Trusted Platform Module) chip. If the system does not have a TPM module, it will either be unable to access BitLocker or will generate an error message (this varies with Windows releases and versions).

 

Windows

To run BitLocker, go to the Control Panel, click System and Security and then click on BitLocker Drive Encryption. Select the drive and start the process. Encryption will take a few hours on a large disk, but the system can still be used while encryption completes.

If BitLocker cannot be used, the drive can still be erased by performing a standard - NOT quick - format of the drive. From Control Panel, click Computer Management, click Storage, then Disk Management, then the drive to erase. Right click on the disk, choose New Simple Volume, and let the wizard guide you until you get to the Format window. Make sure that Perform a quick format is NOT checked.

A standard format overwrites the entire drive and on a hard drive will again take a few hours. If a hard drive format takes less than a minute, go back and make sure a standard format has been selected.

 

Mac OS

The Mac OS FileVault 2 function (10.7 and later) is accessed from System Preferences > Security & Privacy > FileVault. Choose Turn On FileVault, select a password option, enable any other accounts - in this case none - and click Restart. The encryption process will begin and, like Windows, will take some hours for a large drive.

 

Encrypted – Next Step

After the drives are encrypted, they can be reformatted as new drives and encrypted again. Since the drive is now empty, the second encryption will be much faster.

The second encryption ensures the first encryption key - which is usually kept on the drive - is overwritten. A zealous decrypter could otherwise recover that key and decrypt the data. With the second encryption, only the second key can be recovered, and since the older data is also encrypted it still cannot be read.
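The logic of encrypt, reformat and encrypt again can be demonstrated with a toy cipher. This is not how BitLocker or FileVault actually work - a throwaway XOR stream cipher built from SHA-256 stands in for real disk encryption, and the key names are invented - but the key-overwriting argument is the same.

```python
# Toy illustration (not BitLocker/FileVault): why encrypt -> reformat ->
# re-encrypt protects old data. A simple XOR stream cipher derived from
# SHA-256 stands in for real disk encryption; all names are illustrative.
import hashlib

def stream_cipher(key: bytes, data: bytes) -> bytes:
    # XOR the data with a keystream of SHA-256(key || counter) blocks.
    # Applying the same key twice returns the original data.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

plaintext = b"old customer records"
key1 = b"first-disk-key"    # stored on the drive by the OS
key2 = b"second-disk-key"   # the second encryption overwrites key1

# First encryption: ciphertext1 is what physically remains on the platters.
ciphertext1 = stream_cipher(key1, plaintext)

# Second encryption covers the leftover blocks, and key1 is overwritten.
ciphertext2 = stream_cipher(key2, ciphertext1)

# An attacker who recovers key2 only gets back ciphertext1, which is
# unreadable without the (now overwritten) key1.
attacker_view = stream_cipher(key2, ciphertext2)
```

Even with `key2` in hand, the attacker's best result is `ciphertext1` - the old data still wrapped in the first, unrecoverable key.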

 

Wiping to Meet Legal Requirements

In a regulated industry - such as health care or finance - with a regulatory or fiduciary responsibility for data protection, the foregoing will protect the data, but may not protect against claims for mishandled data. For that something stronger is required.

For this type of security, something like the StarTech Standalone Eraser Dock is required. It will invoke the ATA Secure Erase function and print a receipt to document the fact. Since the Secure Erase option is NIST approved and exceeds the DoD requirement, it gives strong legal protection. It does, however, require removing the drives from their enclosures.

A faster option is to physically destroy the drive's disk platters. Few players will attempt a recovery from a physically damaged drive.

Google makes its $149 Nik Collection photo editing software free to download

Back in 2012, Google purchased German company Nik Software, the firm behind a series of photo editing plug-ins designed for amateur and professional photographers. The acquisition was primarily so Google could get its hands on popular photo app Snapseed, but the company also reduced the cost of Nik's collection of seven plug-ins from $499 to $149. Now, the price has dropped even more, right down to zero.

The collection includes Analog Efex Pro, Color Efex Pro, Silver Efex Pro, Viveza, HDR Efex Pro, Sharpener Pro and Dfine. It's free to download for Windows and Mac, and provides “a powerful range of photo editing capabilities — from filter applications that improve color correction, to retouching and creative effects, to image sharpening that brings out all the hidden details, to the ability to make adjustments to the color and tonality of images,” according to a statement from Google.

image

“We’re excited to bring the powerful photo editing tools once only used by professionals to even more people now,” said Google.

The company does say in its Google+ post that it is continuing to focus on building photo editing tools like Snapseed and Google Photos for mobile, so there's a chance that the Nik Collection may stop receiving support and updates now that it's free. You can download Google Nik Collection for free here.

RAM Requirements for High End Systems

 

With the arrival of Skylake earlier this year, many are looking into Intel's latest platform. This requires not just a Core i7-6700K or Core i5-6600K processor, but also a new Z170 motherboard and DDR4 memory.

Even with current memory prices, there is still a premium of between 20% and 40% to pay for DDR4 memory versus DDR3, but regardless of the move to Intel's new platform and DDR4, the question of how much RAM to use still remains.

What is the performance difference between using 8GB or 16GB of RAM?

When building a high-end Core i7 system with a high-end GPU and a fast SSD, the 16GB memory kit is going to be one of the smaller considerations.

However, when building a Core i3 / i5 system with on-board graphics or a low-end graphics card, knowing whether the extra 8GB of memory is going to be of benefit is an important consideration.

Current productivity applications can consume upwards of 4GB, so there is little argument for not going with at least 8GB of memory. However, the need for 16GB of memory is a hotly debated topic.

Below are a number of benchmark tests, which indicate where this much memory might be useful for power users.

Test System Specification

Skylake Performance PC
  • Intel Core i7-6700K (4.0GHz - 4.2GHz)
  • Asrock Z170 Gaming K6+
  • Dual-Channel: 16GB DDR4-2666 RAM
  • Dual-Channel: 8GB DDR4-2666 RAM
  • Single-Channel: 4GB DDR4-2666 RAM
  • GeForce GTX 980
  • Crucial MX200 1TB
  • SilverStone Essential Gold 750w
  • Windows 10 Pro 64-bit

 

Application Performance

In reality, it is difficult to find any commonly used program that requires more than 4GB of system memory on its own.

As an example, take a Windows 10 desktop machine with two browsers open and over a dozen tabs between the two, the Postbox email client, Adobe Photoshop, Microsoft Word and Excel, two IM clients, Sublime Text, an SFTP application, Plex Server, Dropbox, OneDrive, Malwarebytes and other system tools running in the background. Multi-tasking between these programs, RAM usage would max out at around 70%.

Once there is 'enough' memory for all applications to run, having more memory does not increase performance any further. What that means is that for most regular work, there is no tangible performance difference between 8GB and 16GB of system memory.

Among the programs tested, Adobe Premiere CC proved to be very demanding, as showcased below:

 

 

This custom workload features a 17 minute video made up of dozens of small clips, images and audio tracks. To maximize system memory usage the bitrate was turned right down and this saw a total system memory usage of 12GB when encoding.

With 16GB of memory installed the workload took 290 seconds. Surprisingly, with just 8GB of RAM the encoding time was not greatly impacted, taking 300 seconds. It was not until system memory was dropped to just 4GB that a massive drop in performance was seen - 38% slower than the 8GB configuration, to be exact.

Moving on to 7-Zip... the normal benchmark default is to use a 32MB dictionary, which is generally enough to represent true compression performance. It should be noted that when compressing a number of files that measure in the gigabytes then a much higher dictionary size would be preferred. Larger dictionaries often make the process slower and require more system memory but result in a smaller file (better compression).

When testing with a 32MB dictionary, the Skylake Core i7-6700K processor is good for 25120 MIPS and this test only requires around 1.7GB of available system memory. Doubling the dictionary size to 64MB requires 3.1GB and 128MB requires 6.1GB.

For the test a 512MB dictionary size was used. This overwhelmed the system memory, as it requires 24GB of available memory to operate, so the system has to rely on the Windows pagefile to pick up the slack. The more data that has to be paged out to the SSD, the slower the system becomes.
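The dictionary-size effect described above can be reproduced with Python's standard-library lzma module (LZMA is 7-Zip's default algorithm). The sizes here are scaled right down so it runs in seconds: two copies of the same random block are compressed with a dictionary first smaller, then larger, than the distance between the copies.

```python
# Sketch of the dictionary-size effect using Python's stdlib lzma module
# (LZMA is 7-Zip's default algorithm). Sizes are scaled down so it runs
# quickly: two copies of the same 256KB random block, compressed with a
# dictionary smaller and then larger than the repeat distance.
import lzma
import os

block = os.urandom(256 * 1024)
data = block + block   # the second copy repeats at a 256KB distance

def compress_with_dict(data: bytes, dict_size: int) -> int:
    # dict_size bounds how far back LZMA2's matcher can look for repeats.
    filters = [{"id": lzma.FILTER_LZMA2, "dict_size": dict_size}]
    return len(lzma.compress(data, format=lzma.FORMAT_XZ, filters=filters))

small = compress_with_dict(data, 64 * 1024)        # 64KB: repeat not visible
large = compress_with_dict(data, 4 * 1024 * 1024)  # 4MB: repeat is found

# With the small dictionary the matcher cannot see back far enough to find
# the repeated block, so the output stays near the full 512KB; with the
# large dictionary the second copy compresses to almost nothing.
```

The same trade-off drives the memory usage in the 7-Zip benchmark: a bigger dictionary finds more distant repeats, at the cost of holding far more data in RAM.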

 

With 16GB of RAM the system is still able to produce 9290 MIPS, while the 8GB configuration is over 3x slower.

 

Looking at the kilobytes per second data we see that the 8GB configuration is 11x slower than the 16GB configuration.

While having 16GB is a real advantage here, an extreme and unlikely scenario had to be created. Those looking at compressing with such a large dictionary would likely realise the need for more system memory and go with 32GB of RAM.

 

Graphics and Workstation Performance Group

SPEC/GWPG (Graphics & Workstation Performance Group) is a non-profit corporation formed to establish, maintain and endorse a standardised set of relevant benchmarks that can be applied to the newest generation of high-performance computers. SPEC develops benchmark suites and also reviews and publishes submitted results from member organisations and other benchmark licensees. The group supports the development of a range of graphics and workstation benchmarks that have a value to the user and vendor communities.

The Standard Performance Evaluation Corporation's SPECwpc V1.2 benchmark does not feature many tests that are relevant to the average user, but here are some of the few benchmark programs that occasionally exceed 8GB of memory.

 

Blender

Blender is free/open source 3D creation software which is commonly used by professionals all around the world.

 

Unfortunately, while Blender can be a heavy user of RAM, the SPECwpc test only pushed total system usage to 6.1GB, so it is not ideal for looking at the difference between 8GB and 16GB of RAM. There was not even a significant drop-off when using just 4GB of memory.

 

LAMMPS

Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a classical molecular dynamics code that is distributed under open source GPL terms. The LAMMPS Molecular Dynamics Simulator test did see system memory usage reach 10.5GB in the SPECwpc test, so it is ideal for looking at the difference between 8GB and 16GB of memory.

Here we see that performance improved by 10%, which is not a significant difference, though the workload only exceeds the 8GB memory capacity by 31%. A much larger deficit can be seen when comparing 4GB to 8GB, as the larger memory capacity offered 306% more performance.

 

NAMD

NAMD is another molecular dynamics application designed for high-performance simulation of large biomolecular systems.

The workload pushed system memory usage to 7.2GB, which wasn't enough to overwhelm 8GB of memory; in fact, the 16GB configuration was not even 10% faster than the 4GB configuration.

 

Rodinia

SPECwpc categorises Rodinia as a product development benchmark and is designed to help system architects study emerging platforms such as GPUs (Graphics Processing Units).

Rodinia includes applications and kernels which target multi-core CPU and GPU platforms.

Rodinia was included because it pushed system memory usage to 9.5GB, but despite exceeding the installed 8GB of memory, the 16GB configuration was just 4% faster.

 

Conclusion

For those building new systems or considering an upgrade to 16GB, the answer is simple: do not bother unless you have a specific requirement driven by software. For general usage there is no advantage to be had from 16GB of RAM.

Even with applications such as Adobe Premiere CC, which pushed system memory usage to 12GB, there was little advantage to using 16GB of system memory instead of 8GB. It was noted, though, that a significant drop-off was seen when lowering the system memory to just 4GB. Users who do encoding will appreciate having at least 8GB of memory.

Just one of the SPECwpc tests provided a performance advantage when using 16GB of memory, though the margin was not significant. Anyone running professional grade software would probably not be concerned with the extra cost of the system memory anyway.

The only application that really benefited from 16GB was 7-Zip, but a big dictionary size had to be used, to create heavy RAM demand. Given that a 128MB dictionary only requires around 6GB of memory, it seems unlikely that many would require more.

Virtualisation is a very different situation however, since these types of applications require dedicated resources away from the host PC. If more than one VM is running, or other related specialised work, it's safe to say much more RAM will be required.

This obviously does not apply to regular PC usage.

Overall, 8GB is seen as the standard right now, while 16GB of RAM is only essential for specific high-demand scenarios.

Mattel ThingMaker 3D Printer

 
Had you been a child in the 1960s, you might remember something called the ThingMaker, a do-it-yourself toy creation kit from Mattel that came out decades before 3D printing would itself become a thing. In that respect, it was an idea well ahead of its time, so Mattel is bringing it back as a reimagined product for the modern era.

This time around, the relaunched ThingMaker is a pre-assembled 3D printer that's quick and easy to set up and use. It links by WiFi to the ThingMaker Design app developed by Autodesk, which is available free of charge for both iOS and Android devices. A simple push of a button exports the files to the printer.

 

Mattel Thingmaker

Like many 3D printers, the ThingMaker uses different coloured spools of PLA (polylactic acid) filament. When printing, the printer door automatically locks to prevent access. After it has finished printing, the heated print head retracts into a recess where it cannot be reached.

ThingMaker App

Mattel deserves praise for branching out of what might be its comfort zone and entering the maker movement. It's a category that's growing in popularity, but isn't quite mainstream yet, which tends to scare big companies away from participating. One of the reasons 3D printing isn't mainstream yet is that it is not cheap: 3D printers can run to several thousand pounds.

The ThingMaker is being sold by Amazon.

Intel SSD Caching

 

image

Introduction

SSD caching (also known as Intel Smart Response Technology) is not new, as it was introduced with the Z68 chipset. SSD caching is intended to provide improved performance for computers that use traditional hard drives in a way that is both cost effective and easy to configure. This is done by using a small, relatively cheap SSD drive to cache or store commonly accessed data. Since SSDs are much faster than traditional hard drives, this allows the computer to read the cached data much faster than if it had to read the same data directly from the hard drive.

SSD caching reduces the time it takes to load commonly used programs, but there is a limit to the benefits. If the data is already stored in the computer's RAM, then SSD caching does not improve load times at all since the computer's RAM is much faster than even the fastest SSD drive currently on the market. The main advantage of SSD cache comes into play when booting into Windows or when a program is run for the first time after a reboot or power off. Since the data in RAM gets cleared each time the computer power cycles, the data is not present in the RAM whereas it is still present on the SSD cache drive. 

With SSD caching set up and configured, all that the cache needs in order to function is for a program to run once. After that, the data is stored for future access on the cache drive. On a chipset level, Intel SSD caching is currently only compatible with certain chipsets and their mobile equivalents, but some motherboard manufacturers have released software that does much the same thing as Intel SSD caching. In this article we will only be examining the performance of SSD caching as provided directly by Intel.

 

Behind the Scenes

In order to understand SSD caching and how it works, you need to understand some of what is going on behind the scenes when a computer is trying to find the data it needs. When a program is first run, all of the different files needed to launch that program (the main .exe, dll files, etc.) are read from the hard drive and loaded into the different levels of temporary storage found in a computer. On most computers, the basic hierarchy goes: 

image

Whenever a computer needs to find data, it goes through this hierarchy of storage locations, starting with the CPU cache and working all the way back to the hard drive. As you go down the list, the storage gets slower, so ideally the most commonly accessed data should be higher up the list. For example, reading data from the CPU cache is much faster than reading from RAM, which itself is much faster than reading directly from the hard drive. SSD caching adds an extra step between the RAM and the hard drive, so the process becomes:

image

Since SSD drives are much faster than traditional hard drives, this gives the system one more place to look for data before having to read from the comparatively slow hard drive.

Due to space limitations, not all of the data in a computer can be stored in the relatively small CPU cache or RAM, and SSD caching is no different. While the algorithm that decides what data gets cached is confidential, testing shows that, at the very least, files larger than a few MB are not stored in the SSD cache.
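The two behaviours described above - a fixed-size cache that evicts old data, and a bypass for large files - can be sketched as a toy least-recently-used (LRU) cache. Intel's real algorithm is confidential, so this is purely illustrative; the class name, sizes and policy are assumptions, not Intel's implementation.

```python
# Illustrative sketch only: Intel's actual caching algorithm is confidential.
# This toy LRU read cache models the two behaviours described in the text:
# a fixed capacity with least-recently-used eviction, and a bypass for
# files larger than a few MB.
from collections import OrderedDict

class ToySSDCache:
    def __init__(self, capacity_mb, max_file_mb):
        self.capacity_mb = capacity_mb
        self.max_file_mb = max_file_mb
        self.entries = OrderedDict()  # filename -> size in MB

    def read(self, name, size_mb):
        if name in self.entries:
            self.entries.move_to_end(name)  # mark as most recently used
            return "ssd"                    # fast path: served from the cache
        if size_mb <= self.max_file_mb:     # large files bypass the cache
            self.entries[name] = size_mb
            while sum(self.entries.values()) > self.capacity_mb:
                self.entries.popitem(last=False)  # evict least recently used
        return "hdd"                        # slow path: read from hard drive

cache = ToySSDCache(capacity_mb=8, max_file_mb=4)
first = cache.read("app.exe", 3)     # cold: comes from the hard drive
second = cache.read("app.exe", 3)    # warm: now served from the cache
movie = cache.read("movie.mkv", 20)  # too large: never cached
```

A program's first read misses and lands on the hard drive; after that it is served from the cache, while oversized files never enter it at all - mirroring the "run once, then fast" behaviour described above.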

 

Setup and Configuration

Setting up an SSD cache is very easy as long as the following requirements are met:

  1. The chipset supports Intel Smart Response Technology
  2. A traditional primary hard drive
  3. A secondary SSD drive
  4. The SATA controller must be set to RAID mode (no arrays need to be configured)
  5. The Intel RST software must be installed.

Before we get into actually setting up SSD caching, there are a few things to note. First, a platter hard drive is required, since the benefits of SSD caching are completely non-existent if you are already using an SSD as your primary drive. Second, SSD caching is limited to 64GB. If the SSD is larger than 64GB, the remaining space is partitioned as a standard drive. You can either leave this space empty, or format it for use as additional storage through Windows' Disk Management utility.

Once the five requirements listed above are fulfilled, simply follow the four easy steps shown below to enable and configure SSD caching:

1) Launch the Intel RST software and click on the "Accelerate" button. If the button is not present, either you are not in RAID mode or one of the other requirements has not been met.

 

2) On the next screen, click on the link "Enable acceleration"

 

3) A window comes up allowing you to configure the SSD cache. From here, you can select the cache size (with a maximum of 64GB), the drive to accelerate (likely the OS drive) and can choose between Enhanced* and Maximized** mode.

 

4) After clicking OK, the SSD cache is fully configured and ready for use. If you want to change the cache mode or disable SSD caching, you can do so from the Accelerate tab.

 

*Enhanced Mode: Acceleration optimised for data protection. This mode uses the write-through cache method to write data to the cache memory and the disk simultaneously. In the event that the accelerated disk or volume becomes inaccessible, fails, or is disconnected, there is no risk of data loss because data on the disk is always synchronized with the data in the cache memory. For data safety reasons, this mode is the default acceleration setting.

**Maximized Mode: Acceleration optimized for input/output performance. This mode uses the write-back cache method where data is written to the disk at intervals. In the event that the cache device or the accelerated disk or volume becomes inaccessible or disconnected, there is a chance of data loss. However, if the device was missing and can be reconnected, reboot the system and caching activity will resume from where it stopped. To remove the cache device in the future, make sure that acceleration is first disabled on that disk or volume.
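The difference between the two modes comes down to write policy, which can be sketched as a toy write-through versus write-back cache. This is an illustrative model only - the class and names are invented, not Intel's code - but it shows why write-back risks data loss if the cache device disappears before a flush.

```python
# Toy contrast of the two write policies described above (illustrative only).
# Write-through (Enhanced) writes to cache and disk simultaneously;
# write-back (Maximized) defers disk writes, trading safety for speed.

class WriteCache:
    def __init__(self, write_back: bool):
        self.write_back = write_back
        self.cache = {}
        self.disk = {}
        self.dirty = set()   # blocks written to cache but not yet to disk

    def write(self, block, data):
        self.cache[block] = data
        if self.write_back:
            self.dirty.add(block)    # disk write is deferred
        else:
            self.disk[block] = data  # disk is always kept in sync

    def flush(self):
        # Write-back catches up with the disk "at intervals".
        for block in self.dirty:
            self.disk[block] = self.cache[block]
        self.dirty.clear()

    def crash(self):
        # Cache device lost before a flush: only the disk contents survive.
        return dict(self.disk)

enhanced = WriteCache(write_back=False)
enhanced.write("b1", "report.docx")
safe = enhanced.crash()      # data survives: the disk was always in sync

maximized = WriteCache(write_back=True)
maximized.write("b1", "report.docx")
lost = maximized.crash()     # data lost: it never reached the disk
```

In Enhanced mode the write survives the simulated crash; in Maximized mode it is gone unless a flush happened first - exactly the trade-off the two mode descriptions spell out.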

Once these four steps are completed, SSD caching is fully set up and will start working immediately; no reboots or additional configuration are needed. At first, not much performance improvement will be seen, but this is simply because no data has yet been stored on the SSD cache drive. As use continues, more and more data will be saved to the cache, resulting in better and better system performance.

 

Performance Benefits

To test the performance benefit of SSD caching, a laptop system with the following hardware was tested:

While most of the testing was performed with the operating system installed on the traditional hard drive and the SSD configured as the cache drive, testing was also carried out without the SSD cache, as well as with the OS installed directly onto the SSD. This gives a good look at the difference between not using SSD caching, using SSD caching in both modes, and running the system directly from the SSD.

To benchmark the performance advantages, testing measured the difference SSD caching makes to Windows boot times. This was done using BootRacer to record how long Windows took to load in each configuration. BootRacer provides measurements from two different points: when the base OS is loaded and ready to start user-specific applications, and when the OS is fully loaded and completely ready to go.

Below are both readings, although the total is of most interest. Note that these results are only for the Windows boot time and do not include pre-boot actions such as POST or BIOS hardware detection.

Suffice it to say, the results were impressive. Boot times were only 60% of what they were without SSD caching, and essentially identical to simply using the SSD as the boot drive. The only unexpected result was how small the difference between the Enhanced and Maximized caching modes turned out to be. Given the risks associated with Maximized mode, using the Enhanced cache setting is highly recommended if the primary goal is to improve Windows boot times.

Moving on to the application load testing, three different programs were run and the time each took to fully load was recorded: GIMP (image editing), Avid Studio (video editing) and Google SketchUp (3D modelling). This is a fairly limited number of programs, but it gives an insight into SSD caching performance without getting too caught up in how SSD caching improves performance on a per-application basis.

Each of the programs had been previously launched (so that they would be present in the SSD cache when available) but the system was rebooted before taking measurements to ensure that they were not simply running from the computer's RAM.

 

 

Load times in seconds, with each result as a percentage of the uncached time in brackets:

                       GIMP        Avid Studio   Google SketchUp
No Cache               14.2        45.9          4.7
Enhanced Cache         4.3 (30%)   10.2 (22%)    1.6 (34%)
Maximized Cache        3.9 (27%)   11.9 (26%)    1.7 (36%)
OS Installed On SSD    3.8 (27%)   10.9 (24%)    1.5 (33%)
RAM (additional runs)  3.0 (21%)   6.9 (15%)     1.0 (21%)

From this testing it is clear that SSD caching provides great benefits to program load times. Once again, it has to be noted that this is only for the first time a program is run after a reboot. Any subsequent runs will be served from the computer's RAM, so load times will be much better no matter what caching is used.

While it is impossible to define a trend using so few data points, the results appear to show that the longer a program normally takes to load, the greater the benefit gained from using SSD caching. With load times reduced to 25-35% of the standard load times, it's clear that SSD caching is not just a hype tool, but is very useful in minimising load times.

What was really surprising was how well SSD caching performed compared to having the OS run directly from the SSD. Installing the OS and programs directly onto the SSD provided slightly better load times, but the results were close enough to validate Intel's claims of SSD-like performance from SSD caching.

The testing performed just barely scratches the surface of the various programs that can benefit from using SSD caching. To learn more about the performance benefits of SSD caching, we recommend the Anandtech and Tom's Hardware articles about Intel SSD caching. These articles are a little bit dated, but the technology for SSD caching has not changed much since the launch of Z68 so they are still very valid.

 

Conclusion

Overall, it is impressive how much OS and program load times were reduced when using SSD caching. One thing to keep in mind, however, is that SSD caching does not result in a system-wide increase in performance. Reading data over a few MB in size, and writing data, must still be done directly from the hard drive and will not benefit from having an SSD as a cache drive. Depending on the size of the cache drive, some data might also get "evicted" from the cache as newer data gets written to it.

The performance improvement from SSD caching depends on what the computer is used for. If it is mostly used for opening lots of different programs, but rarely any large files, SSD caching will likely give a very good performance boost. If it is used for reading and saving very large files, SSD caching will still give a performance boost when opening programs, but the actual reading and writing of the files will not be any faster. At the very least, Windows load times will be greatly improved no matter what the computer is used for.

Z170 vs Z97: A Comparison

 

The new Skylake-S CPUs use a new microarchitecture, which means that the socket on the CPU and motherboard is physically different from the previous generation. To accommodate this change, Intel has also launched the Z170 chipset to go along with the new CPUs. In addition to the change in socket, there have been a number of other improvements to both the CPU and the Z170 chipset, including DDR4 support, a faster connection between the chipset and the CPU (via DMI 3.0), and more PCI-E lanes through the chipset.

Unlike previous launches where Intel released a number of new chipsets and CPUs at the same time, for Skylake-S only the top chipset and unlocked (K-series) CPUs will be available at first. There is expected to be a range of Skylake-S CPUs and at least two lower-end chipsets available at some point, but Intel has not yet announced an official launch date for those products.

 

Chipset Specification Comparison

                                              Z97                           Z170
Processor Support                             Haswell/Broadwell (LGA 1150)  Skylake-S (LGA 1151)
Graphics Support                              1x16 or 2x8 or 1x8+2x4        1x16 or 2x8 or 1x8+2x4
DRAM Support                                  DDR3                          DDR3/DDR4
Mem/DIMMs Per Channel                         2/2                           2/2
DMI Version                                   2.0                           3.0
Intel RST12                                   Yes                           Yes
Intel Smart Response Technology               Yes                           Yes
Small Business Advantage                      No                            No
USB Total (USB 3.0)                           14 (6)                        14 (10)
Total SATA 6Gb/s                              6                             6
Additional PCI-E lanes*                       8x PCI-E 2.0                  20x PCI-E 3.0
Independent Display Support                   3                             3
CPU Overclocking                              Yes                           Yes
Max PCIe Storage (x4 M.2 or x2 SATA Express)  1 (x2 M.2) PCI-E 2.0          3 PCI-E 3.0

*In addition to the 16 PCI-E 3.0 lanes from the CPU

From an official chipset perspective, there are a number of very important differences between the Z97 and Z170 chipsets. The first and most important change is the move to the new socket 1151 in order to support the Skylake-S CPUs. This change in socket means that you cannot use a Skylake-S CPU in a Z97 board or a Haswell/Broadwell CPU in a Z170 board. However, the heatsink mounting is still identical so any heatsink that worked on socket 1150 (or socket 1155/1156 for that matter) will still work on socket 1151.

Along with the change in socket is the addition of DDR4 memory support. DDR4 is still a bit more expensive than DDR3, but it is faster, allows for twice the density, and uses less power than DDR3. Z170 will continue to only allow four physical sticks of RAM to be used (in dual channel mode) but with the density of DDR4 you should be able to use up to 64GB of RAM with Z170 versus only 32GB with Z97. 16GB sticks are not available today (except in Reg. ECC which is not supported by this platform) but expect them to be available by the end of 2015.

In addition to the RAM update, the connectivity between the CPU and chipset has been upgraded to DMI 3.0. By using DMI 3.0, the chipset is now able to support 20 PCI-E 3.0 lanes versus the 8 PCI-E 2.0 lanes that are possible on the Z97 chipset. Most of these lanes will likely go towards features like USB 3.1, onboard WiFi, or Thunderbolt, but this increase in PCI-E lanes technically means a motherboard manufacturer could put up to three x4 M.2 PCI-E 3.0 ports on a Z170 motherboard. M.2 may not be incredibly popular in desktops today, but with faster and faster storage being developed (like the recent breakthrough in memory chips by Intel and Micron), M.2 is expected to increase in popularity over the next few years.
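The practical difference between the two DMI revisions can be estimated from their published link parameters: both use four lanes, but DMI 2.0 runs at 5 GT/s with 8b/10b encoding while DMI 3.0 runs at 8 GT/s with the more efficient 128b/130b encoding. A quick back-of-the-envelope calculation:

```python
# Rough usable DMI link bandwidth from published link parameters.
def dmi_bandwidth_gbs(lanes, gt_per_s, encoding_efficiency):
    # GT/s * encoding efficiency = usable Gbit/s per lane; divide by 8 for GB/s.
    return lanes * gt_per_s * encoding_efficiency / 8

dmi2 = dmi_bandwidth_gbs(4, 5.0, 8 / 10)      # DMI 2.0: 8b/10b encoding
dmi3 = dmi_bandwidth_gbs(4, 8.0, 128 / 130)   # DMI 3.0: 128b/130b encoding
print(f"DMI 2.0 = {dmi2:.2f} GB/s, DMI 3.0 = {dmi3:.2f} GB/s")
# Roughly 2.0 GB/s vs 3.94 GB/s: nearly double the chipset-to-CPU bandwidth.
```

This extra headroom is what makes 20 PCI-E 3.0 chipset lanes (and fast x4 M.2 storage behind the chipset) practical on Z170.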

Both chipsets support CPU overclocking and while there has not been an increase in the total number of native USB ports, ten of the fourteen USB ports on Z170 are now USB 3.0 ports (unfortunately, USB 3.1 is still too new to be a native part of the chipset). As far as their additional feature sets, both Z97 and Z170 support Rapid Storage Technology and Smart Response Technology (otherwise known as SSD Caching), but do not support Small Business Advantage.

 

Conclusion

Overall, there have been a number of very significant changes in the Z170 chipset. Not only does it support the new Skylake-S CPUs but it also adds DDR4 support, more USB 3.0 ports, and more PCI-E lanes on the chipset. The addition of DDR4 and more PCI-E lanes especially are great improvements that really increase the capabilities of the Z170 chipset. Add in the significant performance increases with the Skylake-S CPUs and there is very little reason to use the Z97 chipset over the Z170 chipset.

Perhaps the biggest problem with Skylake-S right now is that you are very limited in terms of choices. Only two CPUs are available at launch (the i5 6600K and the i7 6700K) alongside the Z170 chipset. We expect more CPUs and chipsets to be released relatively soon, but if you want a low to mid-range system you will simply have to wait for the rest of the product line to become available.

CPU Coolers

Market Leading CPU Coolers

http://static.techspot.com/articles-info/1130/images/best-cpu-cooling.jpg

There is no one-size-fits-all product when it comes to CPU coolers. A large, spacious full-tower PC will have no issue using a massive tower-style cooler such as the Noctua NH-D15, while those who are more cost conscious might lean toward more affordable options such as the Cryorig H7.

Even if there is space available, some users prioritise operating volume over temperatures, in which case the Be Quiet! Dark Rock Pro 3 offers a fine balance. If any level of fan noise is an issue, then a big passive cooler such as the Thermalright Le Grand Macho will be in order.

Those with limited headroom will want to check out the ever-growing range of compact CPU coolers, such as Thermalright's AXP-200R.

If air cooling comes off as unadventurous, an all-in-one liquid cooler may be of interest. This category has a wide range of options now, such as the Corsair Hydro Series H115i (formerly the H110i GTX).

 

Performance Air-Cooler

Noctua NH-D15

image

The NH-D15 is not outrageously expensive and yet it seems to make no compromises. This is an air-cooler that is designed to deliver the best performance while generating as little noise as possible.

Despite the NH-D15's massive size, compatibility generally isn't an issue, though you will need at least 165mm of clearance for it. Conveniently, Noctua also provides an extensive motherboard compatibility list on its website covering several hundred models.

Out of the box the NH-D15 is a universal product supporting all current AMD and Intel platforms as well as older sockets such as AMD's FM1 or Intel's LGA1156.

 

Value Air-Cooler

Cryorig H7 Universal

image

Cooler Master's 212 products have been recommended for low-cost cooling for a long time. The CM Hyper 212 Evo remains a favorite for overclocking on a budget simply because it delivers exceptional performance at a low cost without sounding like a leaf blower.

However, the Cryorig H7 has gained a lot of followers recently, and for good reason. For a little extra money it provides superior cooling performance and generates less noise than the CM 212. The Cryorig H7 also has a better installation process and arguably looks better with its hive-fin design.

Compatibility shouldn't be an issue either, as the Cryorig H7 measures just 145mm tall, making it one of the smallest tower coolers on the market to support a 120mm fan. In other words, the H7 will fit in most mid-tower PC chassis.

The 98mm depth also means that there is no limitation on memory module height as the Cryorig H7 does not cover the DIMM slots. Out of the box it will support mainstream Intel LGA1150, 1151, 1155 and 1156 platforms as well as all current AMD platforms.

 

Low Noise Cooler

Be Quiet! Dark Rock Pro 3

image

Those who prioritize operating volume over performance will be more interested in the Be Quiet! Dark Rock Pro 3 than Noctua's NH-D15.

Like the NH-D15, the Dark Rock Pro 3 touts top-notch build quality and that's to be expected considering it matches the Noctua in cost.

It is thought the Dark Rock Pro 3 would perform similarly to the NH-D15 using the same fans at the same operating speed. However, as a package the Dark Rock Pro 3 has more of a bias towards low operating volume.

The Dark Rock Pro 3 comes with a 135mm fan sandwiched between the towers, while a 120mm fan can be found on the front forcing air through the heatsink.

 

Passive Cooler

Thermalright Le Grand Macho CPU Cooler

image

If you find even the whisper-quiet operating volume of the Be Quiet! Dark Rock Pro 3 too much, then perhaps a completely passive solution is the answer. One of the best options for a passive heatsink is the massive Thermalright Le Grand Macho.

Despite its truly epic size, the Le Grand Macho is not expensive for such a massive lump of neatly crafted metal. While it has a passive design, the Le Grand Macho is intended to be used with high-end desktop processors such as the Core i7-5960X, as it can dissipate up to 300W. Boasting an enlarged copper base, the Le Grand Macho covers the entire surface area of large Haswell-E (LGA2011-3) processors.

Measuring 140mm x 125mm and 159mm tall, the heatsink tips the scales at 900g without a fan, thanks to 35 aluminium fins (0.4mm thick, spaced 3.1mm apart) and six 6mm heatpipes.

The Le Grand Macho is to be offered in a silver finish with black-anodised lid, and will be supplied without fans but including mounting parts and a long-neck screwdriver for installation. Thermalright is also selling an optional 120mm or 140mm fan duct that allows users to better direct airflow from their case fans over the heatsink.

 

Low-Profile Cooler

Thermalright AXP-200R

image

There is a good range of low-profile HTPC-type coolers now thanks to the ever popular Mini-ITX platform. Stand-out low-profile coolers include the Silverstone NT06-Pro, Noctua NH-L12, Raijintek Pallas and the one featured here, the Thermalright AXP-200.

The Thermalright AXP-200 is probably the market leader in this category, but there is also the considerably cheaper Raijintek Pallas. If the focus is on cost then the Pallas is probably the better option, as it is essentially a clone of the AXP-200.

As with all clones, the quality of the Pallas isn't as good as that of the more expensive AXP-200 and the installation process is not as refined. Still, it is hard to ignore that at just half the price it delivers a similar level of performance.

Alongside the AXP-200, Thermalright has also introduced the AXP-200R and AXP-200 Muscle. The AXP-200R is designed to match the Asus ROG series and, as expected, the fan is red and black. The AXP-200 Muscle is a black and white version.

 

AIO Liquid Cooler

Corsair Hydro Series H115i

image

All-in-one liquid coolers are extremely popular these days as they generally outperform air coolers and do so while generating less noise. With most units costing just a little more than a high-end air cooler such as the Noctua NH-D15, going liquid is a viable option.

Adding to the appeal of liquid cooling is the ease of installation of all-in-one kits as they come pre-assembled and ready to mount in the system. There are no hoses to connect, no reservoirs to fill and no leaks to check for. These units allow typical users to get in on the benefits of liquid cooling with almost none of the pitfalls.

With dozens of AIO liquid coolers to choose from, narrowing it down to a single one is no easy task and perhaps not even possible. The H115i is priced competitively, performs well, installs easily and features excellent build quality along with impeccable reliability.

Other products, such as EK's Predator series, are hideously expensive and do not manage to outperform the Corsair H115i. The Predator 240 is more flexible, as additional parts can be added into the loop without any trouble, but the extra cost is hard to justify. Not only that, but so far reliability has proven to be a real issue for EK's Predator range.

The Corsair Hydro Series H115i provides the very best performance in its class and thus far has proven to be extremely reliable.

Intel 100 Series Chipsets Compared


http://static.techspot.com/articles-info/1088/images/intel-chipsets-comparison.jpg

The Z170 chipset has been available for some time now, but due to Intel's staggered launch of Skylake-S, the other chipsets from this generation have only recently become available.

In addition to the Z170, there are now five other chipsets available: the H170 and H110 for consumers and the B150, Q150, and Q170 for business. With the move to the new Skylake-S CPUs, all of these chipsets bring some large changes over their predecessors, such as the move to DDR4, but they also differ from each other in a few key ways.

 

Consumer Chipsets (Z170, H170, H110)

 

Specifications

|   | Z170 | H170 | H110 |
| --- | --- | --- | --- |
| Processor support | Skylake-S LGA 1151 | Skylake-S LGA 1151 | Skylake-S LGA 1151 |
| CPU overclocking | Yes | No | No |
| Processor PCIe configuration | 1x16 or 2x8 or 1x8+2x4 | 1x16 | 1x16 |
| Chipset PCI-E lanes (Gen)* | 20 (3.0) | 16 (3.0) | 6 (2.0) |
| Max PCIe storage (x4 M.2 or x2 SATA Express) | 3 | 2 | 0 |
| DMI version | DMI3 (8GT/s) | DMI3 (8GT/s) | DMI2 (5GT/s) |
| Independent display ports/pipes | 3/3 | 3/3 | 3/2 |
| Mem/DIMMs per channel | 2/2 | 2/2 | 2/1 |
| USB total (USB 3.0) | 14 (10) | 14 (8) | 10 (4) |
| Total SATA 6Gb/s | 6 | 6 | 4 |
| Maximum HSIO lanes** | 26 | 22 | 14 |

*In addition to the 16 PCI-E 3.0 lanes from the CPU
**This represents roughly how many PCI-E devices (LAN, USB, Thunderbolt, etc.) are able to use the available chipset PCI-E lanes.

 

Features

|   | Z170 | H170 | H110 |
| --- | --- | --- | --- |
| Intel Smart Sound Technology | Yes | Yes | No |
| Intel RST12 for SATA/PCI-E RAID | Yes | Yes | No |
| Intel Smart Response Technology | Yes | Yes | No |
| Intel Small Business Advantage | No | Yes (select boards) | No |
| Intel Small Business Basics | No | Yes | Yes |

There are a large number of differences between the three consumer chipsets, but we have marked what should be the most important for the average consumer in red. The first and most commonly known difference is the fact that the Z170 chipset fully supports CPU overclocking, while the H-series chipsets do not.

The second major difference is in regards to the PCIe lanes. Modern Intel-based systems actually have two sets of PCIe lanes: one from the CPU and one from the chipset. The CPU PCIe lanes are used primarily for graphics cards and other add-on PCIe devices. All Skylake-S CPUs provide 16 PCIe 3.0 lanes, and on the Z170 chipset those lanes can be split two or three ways, which allows multiple video cards (or simply more PCIe devices) to be connected directly to the CPU as long as they do not need to run at full x16 speeds.

The chipset lanes are a bit different - while a few may be used for add-on devices, they are mostly there for additional features the manufacturer has built into the motherboard that are not native to the chipset like WiFi, more USB ports, additional LAN ports, etc. The number and speed of these lanes changes based on the chipset: Z170 has 20 PCIe 3.0 lanes, H170 has 16 PCIe 3.0 lanes, and H110 has just 6 lanes that run at the slower PCIe 2.0 speeds. The biggest impact of having fewer lanes is that there is less opportunity for manufacturers to add additional features to the board, although another factor is the number of x4 M.2 or SATA Express devices that can be used on the chipset: Z170 can have 3 such devices, H170 can have 2 and H110 can have none. In addition to having fewer and slower PCIe lanes, H110 also still uses the older DMI 2.0 revision which means the connection between the chipset and the CPU is a bit slower than it is on the other chipsets.
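The lane budgets above can be turned into a quick feasibility check. This is a simplification (real boards route devices through flexible HSIO muxing, so the accounting is not this strict), but it illustrates why H110 boards cannot offer x4 M.2 slots while Z170 boards can offer three:

```python
# Chipset PCIe budgets from the tables above. The feasibility check itself is
# a simplified model, not how motherboard designers actually allocate lanes.
CHIPSETS = {
    "Z170": {"lanes": 20, "gen": 3.0, "max_x4_storage": 3},
    "H170": {"lanes": 16, "gen": 3.0, "max_x4_storage": 2},
    "H110": {"lanes": 6,  "gen": 2.0, "max_x4_storage": 0},
}

def fits(chipset, m2_x4_slots, other_lanes_used):
    """Can a board design fit this many x4 M.2 slots plus other devices?"""
    spec = CHIPSETS[chipset]
    if m2_x4_slots > spec["max_x4_storage"]:
        return False  # chipset caps the number of x4 storage devices
    return m2_x4_slots * 4 + other_lanes_used <= spec["lanes"]

print(fits("Z170", m2_x4_slots=3, other_lanes_used=8))  # True: 12 + 8 = 20
print(fits("H170", m2_x4_slots=3, other_lanes_used=0))  # False: max is 2
print(fits("H110", m2_x4_slots=1, other_lanes_used=0))  # False: max is 0
```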

As far as connectivity goes, Z170 and H170 can both power 6 SATA drives and have the same total number of USB ports (14), although Z170 can have two more USB 3.0 ports than H170 (10 versus 8). H110, being the more budget-oriented chipset, can only power 4 SATA drives and can have only 10 USB ports (4 of which can be USB 3.0).

For the additional feature sets, both Z170 and H170 support Smart Sound Technology, Rapid Storage Technology, and Smart Response Technology (otherwise known as SSD caching). For business customers who do not wish to use the business chipsets for whatever reason, both H170 and H110 support Small Business Basics, while only the H170 chipset supports Small Business Advantage (and only on select boards).

 

Business Chipsets (Q170, Q150, B150)

 

Specifications

|   | Q170 | Q150 | B150 |
| --- | --- | --- | --- |
| Processor support | Skylake-S LGA 1151 | Skylake-S LGA 1151 | Skylake-S LGA 1151 |
| CPU overclocking | No | No | No |
| Processor PCIe configuration | 1x16 or 2x8 or 1x8+2x4 | 1x16 | 1x16 |
| Chipset PCI-E lanes (Gen)* | 20 (3.0) | 10 (3.0) | 8 (3.0) |
| Max PCIe storage (x4 M.2 or x2 SATA Express) | 3 | 0 | 0 |
| DMI version | DMI3 (8GT/s) | DMI3 (8GT/s) | DMI3 (8GT/s) |
| Independent display ports/pipes | 3/3 | 3/3 | 3/3 |
| Mem/DIMMs per channel | 2/2 | 2/2 | 2/2 |
| USB total (USB 3.0) | 14 (10) | 14 (8) | 12 (6) |
| Total SATA 6Gb/s | 6 | 6 | 6 |
| Maximum HSIO lanes** | 26 | 20 | 18 |

*In addition to the 16 PCI-E 3.0 lanes from the CPU
**This represents roughly how many PCI-E devices (LAN, USB, Thunderbolt, etc.) are able to use the available chipset PCI-E lanes.

 

Features

|   | Q170 | Q150 | B150 |
| --- | --- | --- | --- |
| Intel SIPP | Yes | Yes | No |
| Intel vPro Technology | Yes | No | No |
| Intel RST12 for SATA/PCI-E RAID | Yes | No | No |
| Intel Smart Response Technology | Yes | No | No |
| Intel Small Business Advantage | Yes | Yes | Yes |
| Intel Small Business Basics | Yes | Yes | Yes |

Unlike the consumer chipsets, there is actually not a huge amount that is different between the three business chipsets, but we have marked what we consider to be the most important ones in red.

Like the consumer chipsets, one of the key differences between these chipsets is in regards to the PCIe lanes. As we stated in the previous section, modern Intel-based systems actually have two sets of PCIe lanes: one from the CPU and one from the chipset. The CPU PCIe lanes are used primarily for graphics cards and other add-on PCIe devices. All Skylake-S CPUs provide 16 PCIe 3.0 lanes, and on the Q170 chipset those lanes can be split two or three ways, which allows multiple video cards (or simply more PCIe devices) to be connected directly to the CPU as long as they do not need to run at full x16 speeds.

The chipset lanes are a bit different - while a few may be used for add-on devices, they are mostly there for additional features the manufacturer has built into the motherboard that are not native to the chipset like WiFi, more USB ports, additional LAN ports, etc. The number and speed of these lanes changes based on the chipset: Q170 has 20 PCIe 3.0 lanes, Q150 has 10 PCIe 3.0 lanes, and B150 has just 8 PCIe 3.0 lanes. The biggest impact of having fewer lanes is that there is less opportunity for manufacturers to add additional features to the board, although another factor is the number of x4 M.2 or SATA Express devices that can be used on the chipset: Q170 can have 3 such devices, while Q150 and B150 can have none.

As far as connectivity goes, all of the chipsets are able to power 6 SATA drives. Q170 and Q150 have the same total number of USB ports (14), although Q170 can have two more USB 3.0 ports than Q150 (10 versus 8). B150, being the more budget-oriented chipset, can only have 12 USB ports (6 of which can be USB 3.0).

For the additional feature sets, all of the business chipsets support Small Business Basics and Small Business Advantage. The key difference in terms of features is that only the Q170 supports vPro and only the Q170 and Q150 chipsets support SIPP (Stable Image Platform Program).

 

Conclusion

Keep in mind that the chipset is only one of the many factors that should be taken into consideration when choosing a motherboard. If there is a specific feature needed, like CPU overclocking or M.2 support, knowing what each chipset offers gives a great starting place.

In general, the Z170 chipset is best for users who want to be sure they are getting all the features they may possibly need. However, H170 can be great in small form factor systems since things like additional PCIe lanes are not a big deal for mini-ITX motherboards that have only a single PCIe slot.

For most applications either the Z170 or H170 motherboards would be suitable. The only time a business-class chipset is required is to get a feature specifically needed such as vPro, SIPP, or Small Business Advantage. In most other cases, a consumer chipset is going to give you a wider range of options and will generally be easier to source and maintain.
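One way to use the tables above is as a simple feature-to-chipset lookup. In this sketch the short feature keys (`"overclocking"`, `"vpro"`, and so on) are our own shorthand for rows in the feature tables, and the matrix is condensed from them:

```python
# Feature support per chipset, condensed from the tables in this article.
# Keys are informal shorthand: rst = Rapid Storage Technology,
# srt = Smart Response Technology, sba/sbb = Small Business Advantage/Basics.
FEATURES = {
    "Z170": {"overclocking", "rst", "srt", "smart_sound"},
    "H170": {"rst", "srt", "smart_sound", "sba", "sbb"},
    "H110": {"sbb"},
    "Q170": {"vpro", "sipp", "rst", "srt", "sba", "sbb"},
    "Q150": {"sipp", "sba", "sbb"},
    "B150": {"sba", "sbb"},
}

def candidate_chipsets(required):
    """Return the chipsets that support every required feature."""
    return sorted(name for name, feats in FEATURES.items() if required <= feats)

print(candidate_chipsets({"overclocking"}))  # ['Z170']
print(candidate_chipsets({"vpro"}))          # ['Q170']
print(candidate_chipsets({"sba"}))           # ['B150', 'H170', 'Q150', 'Q170']
```

As the last lookup shows, needing Small Business Advantage alone does not force a business chipset, which matches the advice above that a consumer chipset often gives a wider range of options.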

"One of the biggest factors between the Z170 and lesser 100 series chipsets is memory support. None of the H170, H110 or B150 motherboards support memory overclocking and are therefore limited to DDR4-2133. This is a big issue for the Core i3 and Pentium processors that need that extra boost from higher clocked DDR4 memory."