Oracle RAC on Bare Metal Cloud at 55 GB/s

Our first performance results for a 3-node Oracle RAC cluster on Oracle's own Bare Metal Cloud Services (BMCS) with FlashGrid software are quite spectacular! We focused our testing on I/O performance because CPU performance is generally the same whether the servers run on-premises or in the cloud. I/O in the cloud is more challenging and is commonly considered much slower than on-premises storage. But things are changing fast. Just look at our results.

Calibrate_IO

  • Max IOPS = 6,413,125 - that is 6.4 million IOPS
  • Latency = 0 - meaning it is below the measurement threshold of the tool
  • Max MB/s = 55,880 - nearly 56 GB/s (yes, that is more than 18 times the 3 GB/s of an EMC XtremIO flash array!)

SLOB 

  • Physical Reads per sec: 537,000
  • Physical Writes per sec: 120,000

It is interesting to compare these numbers with the results of on-premises deployments. The common perception has been that an in-house data center is faster than an IaaS deployment. Looking at our results, it is clear that this is no longer true: Oracle BMCS DenseIO instances unleash the full power of NVMe flash devices and literally leave traditional flash arrays in the dust.

Here is what it takes to get these results:

  • Three DenseIO.36 instances (each has nine 3.2TB NVMe SSDs)
  • Oracle Linux 7
  • Oracle Grid Infrastructure 12.1 and Oracle Database 12.1
  • FlashGrid Cloud Area Network software
  • FlashGrid Storage Fabric software with FlashGrid Read-Local Technology (our secret sauce behind the 55 GB/s)

But performance alone is not enough. We also need capacity and HA. In this configuration we have 25.6 TB of usable capacity. And the data is mirrored by Oracle ASM across the three nodes, each in a separate Availability Domain. Yes, three copies of every data block at different physical sites. Not bad! Can't wait to get access to the new AWS i3 instances that will have similar storage capabilities with up to eight NVMe SSDs per instance.

Some configuration and test details are below.

P.S. On February 9 we are showing a live demo at Oracle Cloud Day / NoCOUG at Oracle headquarters in Redwood Shores.

 

[root@rac1 ~]# flashgrid-cluster
FlashGrid 17.1.31.83301 #a56855ae17a1a7efefd3ecdab5a8de24a8710ad8
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
FlashGrid running: OK
Clocks check: OK
Configuration check: OK
Network check: OK

Querying nodes: rac1, rac2, rac3 ...

Cluster Name: BMCS1
Cluster status: Good
------------------------------------------------------------
Node  Status  ASM_Node  Storage_Node  Quorum_Node  Failgroup
------------------------------------------------------------
rac1  Good    Yes       Yes           No           RAC1
rac2  Good    Yes       Yes           No           RAC2
rac3  Good    Yes       Yes           No           RAC3
------------------------------------------------------------
---------------------------------------------------------------------------------------------------
GroupName  Status  Mounted   Type    TotalMiB  FreeMiB   OfflineDisks  LostDisks  Resync  ReadLocal
---------------------------------------------------------------------------------------------------
DATA       Good    AllNodes  HIGH    82413720  80766288  0             0          No      Enabled
GRID       Good    AllNodes  NORMAL  30720     21394     0             0          No      Enabled
---------------------------------------------------------------------------------------------------

[root@rac1 ~]# flashgrid-cluster drives
FlashGrid 17.1.31.83301 #a56855ae17a1a7efefd3ecdab5a8de24a8710ad8
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Querying nodes: rac1, rac2, rac3 ...

Cluster drives:
-----------------------------------------------------------------------------------------------------------------------
DriveName            Status  SizeGiB  Slot            WritesUsed  ASMName              ASMSizeGiB  DiskGroup  ASMStatus
-----------------------------------------------------------------------------------------------------------------------
rac1.s2lhna0h801288  Good    2980     /SYS/DBP/NVME4  0%          RAC1$S2LHNA0H801288  2980        DATA       ONLINE
rac1.s2lhna0h801289  Good    2980     /SYS/DBP/NVME0  0%          RAC1$S2LHNA0H801289  2980        DATA       ONLINE
rac1.s2lhna0h801300  Good    2980     /SYS/DBP/NVME2  0%          RAC1$S2LHNA0H801300  2980        DATA       ONLINE
rac1.s2lhna0h801301  Good    2980     /SYS/DBP/NVME1  0%          RAC1$S2LHNA0H801301  2980        DATA       ONLINE
rac1.s2lhna0h801308  Good    2980     /SYS/DBP/NVME7  0%          RAC1$S2LHNA0H801308  2980        DATA       ONLINE
rac1.s2lhna0h801331  Good    2980     /SYS/DBP/NVME8  0%          RAC1$S2LHNA0H801331  2980        DATA       ONLINE
rac1.s2lhna0h801334  Good    2980     /SYS/DBP/NVME6  0%          RAC1$S2LHNA0H801334  2980        DATA       ONLINE
rac1.s2lhna0h802857  Good    2980     /SYS/DBP/NVME3  0%          RAC1$S2LHNA0H802857  2980        DATA       ONLINE
rac1.s2lhna0h802915  Good    2980     /SYS/DBP/NVME5  0%          RAC1$S2LHNA0H802915  2980        DATA       ONLINE
rac1.vg1-grid        Good    10       N/A             N/A         RAC1$VG1_GRID        10          GRID       ONLINE
rac1.vg1-gridtmp     Good    10       N/A             N/A         RAC1$VG1_GRIDTMP     N/A         N/A        N/A
rac2.s2lhna0h905742  Good    2980     /SYS/DBP/NVME4  0%          RAC2$S2LHNA0H905742  2980        DATA       ONLINE
rac2.s2lhna0h905748  Good    2980     /SYS/DBP/NVME8  0%          RAC2$S2LHNA0H905748  2980        DATA       ONLINE
rac2.s2lhna0h906102  Good    2980     /SYS/DBP/NVME6  0%          RAC2$S2LHNA0H906102  2980        DATA       ONLINE
rac2.s2lhna0h906185  Good    2980     /SYS/DBP/NVME2  0%          RAC2$S2LHNA0H906185  2980        DATA       ONLINE
rac2.s2lhna0h906188  Good    2980     /SYS/DBP/NVME7  0%          RAC2$S2LHNA0H906188  2980        DATA       ONLINE
rac2.s2lhna0h906216  Good    2980     /SYS/DBP/NVME1  0%          RAC2$S2LHNA0H906216  2980        DATA       ONLINE
rac2.s2lhna0h906226  Good    2980     /SYS/DBP/NVME5  0%          RAC2$S2LHNA0H906226  2980        DATA       ONLINE
rac2.s2lhna0h906236  Good    2980     /SYS/DBP/NVME0  0%          RAC2$S2LHNA0H906236  2980        DATA       ONLINE
rac2.s2lhna0h906237  Good    2980     /SYS/DBP/NVME3  0%          RAC2$S2LHNA0H906237  2980        DATA       ONLINE
rac2.vg1-grid        Good    10       N/A             N/A         RAC2$VG1_GRID        10          GRID       ONLINE
rac2.vg1-gridtmp     Good    10       N/A             N/A         RAC2$VG1_GRIDTMP     N/A         N/A        N/A
rac3.s2lhnaah664492  Good    2980     /SYS/DBP/NVME2  0%          RAC3$S2LHNAAH664492  2980        DATA       ONLINE
rac3.s2lhnaah668538  Good    2980     /SYS/DBP/NVME8  0%          RAC3$S2LHNAAH668538  2980        DATA       ONLINE
rac3.s2lhnaah668803  Good    2980     /SYS/DBP/NVME1  0%          RAC3$S2LHNAAH668803  2980        DATA       ONLINE
rac3.s2lhnaah669157  Good    2980     /SYS/DBP/NVME4  0%          RAC3$S2LHNAAH669157  2980        DATA       ONLINE
rac3.s2lhnaah669338  Good    2980     /SYS/DBP/NVME7  0%          RAC3$S2LHNAAH669338  2980        DATA       ONLINE
rac3.s2lhnaah669347  Good    2980     /SYS/DBP/NVME0  0%          RAC3$S2LHNAAH669347  2980        DATA       ONLINE
rac3.s2lhnaah669348  Good    2980     /SYS/DBP/NVME5  0%          RAC3$S2LHNAAH669348  2980        DATA       ONLINE
rac3.s2lhnaah669351  Good    2980     /SYS/DBP/NVME3  0%          RAC3$S2LHNAAH669351  2980        DATA       ONLINE
rac3.s2lhnaah669356  Good    2980     /SYS/DBP/NVME6  0%          RAC3$S2LHNAAH669356  2980        DATA       ONLINE
rac3.vg1-grid        Good    10       N/A             N/A         RAC3$VG1_GRID        10          GRID       ONLINE
rac3.vg1-gridtmp     Good    10       N/A             N/A         RAC3$VG1_GRIDTMP     N/A         N/A        N/A
-----------------------------------------------------------------------------------------------------------------------

[grid@rac1 ~]$ asmcmd lsdg -g
Inst_ID  State    Type    Rebal  Sector  Block       AU  Total_MB   Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
      1  MOUNTED  HIGH    N         512   4096  4194304  82413720  80766288          6104720        24887189              0             N  DATA/
      3  MOUNTED  HIGH    N         512   4096  4194304  82413720  80766288          6104720        24887189              0             N  DATA/
      2  MOUNTED  HIGH    N         512   4096  4194304  82413720  80766288          6104720        24887189              0             N  DATA/
      1  MOUNTED  NORMAL  N         512   4096  1048576     30720     21394            10240            5577              0             Y  GRID/
      3  MOUNTED  NORMAL  N         512   4096  1048576     30720     21394            10240            5577              0             Y  GRID/
      2  MOUNTED  NORMAL  N         512   4096  1048576     30720     21394            10240            5577              0             Y  GRID/
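The Usable_file_MB column above follows directly from ASM's redundancy arithmetic: in a HIGH redundancy (triple-mirrored) diskgroup, every block is stored three times, so usable space is roughly (Free_MB - Req_mir_free_MB) / 3. A quick sanity check against the DATA diskgroup numbers from the lsdg output:

```python
# ASM usable-space arithmetic for a HIGH redundancy (3-way mirrored) diskgroup.
# Numbers taken from the `asmcmd lsdg -g` output above (DATA diskgroup).
free_mb = 80_766_288          # Free_MB
req_mir_free_mb = 6_104_720   # Req_mir_free_MB (headroom kept to re-mirror after a failure)
usable_file_mb = (free_mb - req_mir_free_mb) // 3  # three copies of every block
print(usable_file_mb)         # 24887189, matching the Usable_file_MB column
```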

[oracle@rac1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Feb 8 22:15:00 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
  2  lat INTEGER;
  3  iops INTEGER;
  4  mbps INTEGER;
  5  BEGIN DBMS_RESOURCE_MANAGER.CALIBRATE_IO (27, 10, iops, mbps, lat);
  6  DBMS_OUTPUT.PUT_LINE ('max_iops = ' || iops);
  7  DBMS_OUTPUT.PUT_LINE ('latency = ' || lat);
  8  DBMS_OUTPUT.PUT_LINE ('max_mbps = ' || mbps);
  9  end;
 10  /

max_iops = 6413125
latency = 0
max_mbps = 55880

PL/SQL procedure successfully completed.
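To put max_mbps in perspective, a simple unit conversion (nothing Oracle-specific here) ties it back to the headline number and the XtremIO comparison:

```python
max_mbps = 55_880        # CALIBRATE_IO result from above, in MB/s
gb_per_s = max_mbps / 1000
xtremio_gb_per_s = 3     # EMC XtremIO figure quoted in the text
print(round(gb_per_s, 2), round(gb_per_s / xtremio_gb_per_s, 1))  # 55.88 18.6
```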

 
 
SLOB 2.3 workload profile: db_block_size=8kb, UPDATE_PCT=20, REDO_STRESS=LITE

I/O stats from the AWR report during the SLOB test:

System Statistics - Per Second               DB/Inst: ORCL/orcl1  Snaps: 12-13

             Logical     Physical     Physical         Redo        Block         User
  I#         Reads/s      Reads/s     Writes/s   Size (k)/s    Changes/s      Calls/s      Execs/s     Parses/s   Logons/s       Txns/s
---- --------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ---------- ------------
   1      215,379.21    191,924.2     42,372.6     34,329.2     84,556.7          4.6      3,331.4         13.6       0.45        644.7
   2      218,059.56    187,808.2     42,662.8     34,784.0     85,712.0          3.9      3,361.4         11.2       0.48        653.0
   3      179,747.89    157,875.1     35,342.9     28,734.2     70,660.7          4.3      2,765.9         11.0       0.48        537.8
 ~~~ ~~~~~~~~~~~~~~~ ~~~~~~~~~~~~ ~~~~~~~~~~~~ ~~~~~~~~~~~~ ~~~~~~~~~~~~ ~~~~~~~~~~~~ ~~~~~~~~~~~~ ~~~~~~~~~~~~ ~~~~~~~~~~ ~~~~~~~~~~~~
 Sum      613,186.65    537,607.4    120,378.2     97,847.4    240,929.5         12.8      9,458.8         35.8       1.41      1,835.5
 Avg      204,395.55    179,202.5     40,126.1     32,615.8     80,309.8          4.3      3,152.9         11.9       0.47        611.8
 Std       21,387.54     18,584.4      4,144.9      3,369.2      8,376.3          0.3        335.5          1.5       0.01         64.3
      --------------------------------------------------------------------------------------------------------------------
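As a sanity check, the per-instance rates in the AWR excerpt add up (to within display rounding) to the cluster-wide figures quoted in the SLOB summary above, roughly 537,000 physical reads/s and 120,000 physical writes/s:

```python
# Per-instance Physical Reads/s and Writes/s from the AWR
# "System Statistics - Per Second" table above.
reads_per_s  = [191_924.2, 187_808.2, 157_875.1]
writes_per_s = [42_372.6, 42_662.8, 35_342.9]
total_reads  = sum(reads_per_s)    # ~537,600 reads/s across the 3-node cluster
total_writes = sum(writes_per_s)   # ~120,400 writes/s
print(round(total_reads, 1), round(total_writes, 1))
```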