
In September, Microsoft finally announced Azure encryption support for data at rest, a long-awaited feature for the many companies bound by regulatory and compliance requirements. The great thing is that you can easily enable encryption by toggling it to ‘On’ in the storage account options within the Azure portal. Any existing data on the storage account will not be encrypted, but any data written going forward will be. Keep in mind that there is no support in the classic Azure Service Manager (ASM) portal. One of the most common customer questions is how encryption will affect performance, so I took some time to do some benchmarking with Crystal Disk Mark for general performance and Diskspd to test general SQL activity.
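If you would rather script the change than click through the portal, the toggle corresponds to a PATCH of the storage account's encryption properties through the Azure Resource Manager REST API. The sketch below is a minimal Python illustration; the subscription, resource group, and account names are placeholders, and the api-version is an assumption based on the Resource Manager storage API of that era.

```python
# Sketch of the ARM REST call that turns Storage Service Encryption on.
# The names below are placeholders, and the api-version is an assumption;
# the portal toggle performs an equivalent update behind the scenes.
SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
ACCOUNT = "<storage-account>"

# Request body enabling encryption for the blob service.
payload = {
    "properties": {
        "encryption": {
            "services": {"blob": {"enabled": True}},
            "keySource": "Microsoft.Storage",
        }
    }
}

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
    f"/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.Storage/storageAccounts/{ACCOUNT}"
    "?api-version=2016-12-01"  # assumed API version
)

print(url)
# To apply it (needs an Azure AD bearer token):
#   requests.patch(url, headers={"Authorization": f"Bearer {token}"}, json=payload)
```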


Test Method & Settings

  • Azure Standard DS5 v2 (16 Cores, 56GB RAM) Virtual Machine
  • Standard Storage Drive (read/write caching set to none)
  • P10 Premium Drive (read/write caching set to none)
  • P20 Premium Drive (read/write caching set to none)
  • P30 Premium Drive (read/write caching set to none)
  • Crystal Disk Mark for general performance testing (1GB – 5 Pass Sequential Read/Write)
  • Diskspd for SQL performance testing
    • Single 75GB File
    • 128KB Block Size
    • 16 threads
    • 60 Seconds
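The Diskspd settings above translate into a command line roughly like the one assembled below. This is a sketch: only the file size, block size, thread count, and duration come from the list above, while the target path, outstanding-IO depth (-o), caching flag (-Sh), and latency flag (-L) are assumptions for illustration.

```python
# Assemble the Diskspd invocation used for the SQL-style tests.
# Only -c, -b, -t, and -d come from the settings listed above; the
# target path and the -o/-Sh/-L flags are assumptions for illustration.
settings = {
    "file_size": "75G",   # -c  single 75GB test file
    "block":     "128K",  # -b  128KB block size
    "threads":   16,      # -t  16 threads
    "duration":  60,      # -d  60 seconds
}

def diskspd_cmd(write_pct: int, target: str = r"F:\test.dat") -> str:
    """Build a Diskspd command line; write_pct=0 is a pure read test."""
    return (
        f"diskspd.exe -c{settings['file_size']} -b{settings['block']} "
        f"-t{settings['threads']} -d{settings['duration']} "
        f"-o8 -Sh -L -w{write_pct} {target}"
    )

print(diskspd_cmd(0))    # read test
print(diskspd_cmd(100))  # write test
```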


Crystal Disk Mark Multi-Threaded Results

When we look at the multi-threaded sequential read/write tests, there doesn’t appear to be much of a performance hit, with the exception of the P10 storage. I’ve included Microsoft’s advertised throughput for their non-encrypted storage for comparison.

Azure Multi-Threaded Sequential Read
Azure Multi-Threaded Sequential Write


Crystal Disk Mark Single-Threaded Results

Azure Single-Threaded Sequential Read
Azure Single-Threaded Sequential Write


Diskspd SQL Results

diskspd-sql-read

diskspd-sql-write


Conclusion

Based on the Crystal Disk Mark results, we see almost no performance loss with encryption enabled for both reads and writes across the standard, P20, and P30 storage. The P10 result was a bit of a surprise, as it doesn’t fall in line with the P20/P30 multi-threaded results, but then I remembered that P10 disks are limited to only 500 IOPS each and I was only using 128KB blocks during my Diskspd benchmark testing.