Because Oracle Sharding is built on a shared-nothing architecture, OLTP applications designed for it can elastically scale (data, transactions, and concurrent users) to any level, on any platform, simply by deploying new shards on additional stand-alone servers. In addition, the unavailability or slowdown of a shard due to an unplanned outage or planned maintenance affects only the users of that shard; it does not affect the availability or performance of the application for users of other shards (i.e., faults are contained). This blog covers the results of the Oracle Sharding MAA benchmark on Oracle Bare Metal Cloud.
The objectives of this benchmark are to:
- Elastically scale out the Sharded Database (SDB) on Oracle Bare Metal IaaS Cloud, following Oracle Maximum Availability Architecture (MAA) best practices
- Demonstrate the linear scalability of relational transactions with Oracle Sharding
- Observe the fault isolation aspects of a sharded database
Figure 1. Sharded Database Topology on Oracle Bare Metal Cloud
As shown in Figure 1, the sharded database used in this benchmark has two shardgroups, one in each availability domain of the Oracle Bare Metal Cloud. The network latency between the availability domains is less than 1 ms. Shardgroup1 in Availability_Domain1 has 100 primary shards, and shardgroup2 in Availability_Domain2 has 100 HA standby shards. Each shard is hosted on a dedicated bare metal server with 36 cores, 512 GB of RAM, and four 12.8 TB NVMe flash disks. These flash disks are used to create two ASM disk groups (+DATA, +RECO) with normal redundancy. Three shard directors were deployed in each availability domain for high availability. The shard catalog is placed in Availability_Domain1, and its standby is located in Availability_Domain2.
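A topology of this shape is typically described to the shard catalog with GDSCTL. The sketch below is illustrative only, assuming the two availability domains are mapped to GDS regions named region1 and region2; the host names, service names, and credentials are placeholders, not the values used in this benchmark:

```
-- Create the shard catalog with Data Guard replication
GDSCTL> create shardcatalog -database cathost:1521/catpdb -user mysdbadmin/****
        -repl DG -sdb sdb1 -region region1,region2

-- Register a shard director (GSM) in each region; repeat for the others
GDSCTL> add gsm -gsm sharddirector1 -catalog cathost:1521/catpdb -region region1

-- One shardgroup of primaries, one of active standbys
GDSCTL> add shardgroup -shardgroup shardgroup1 -deploy_as primary -region region1
GDSCTL> add shardgroup -shardgroup shardgroup2 -deploy_as active_standby -region region2
```

With `-deploy_as active_standby`, shards added to shardgroup2 become Active Data Guard standbys of their counterparts in shardgroup1.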
For this benchmark, the Swingbench Order Entry application was used, with the customer_id column defined as the sharding key. The data model was modified so that every sharded table in the table family contains the sharding key, and tables holding reference data were defined as duplicated tables. On the application side, the connection pooling code was modified to use the new Oracle Sharding API calls to check out a connection to a given shard based on the sharding key. Once the connection is checked out, transactions and queries execute within the scope of that shard.
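Schema changes of this kind follow the standard Oracle Sharding DDL. The following is a simplified sketch, not the actual benchmark schema: column lists are abbreviated and the tablespace set name is illustrative:

```sql
-- Root of the table family, sharded by customer_id
CREATE SHARDED TABLE customers
( customer_id   NUMBER NOT NULL,
  name          VARCHAR2(60),
  CONSTRAINT customers_pk PRIMARY KEY (customer_id)
)
PARTITION BY CONSISTENT HASH (customer_id)
PARTITIONS AUTO
TABLESPACE SET ts1;

-- Child table: carries the sharding key and is equi-partitioned
-- with its parent by reference
CREATE SHARDED TABLE orders
( order_id      NUMBER NOT NULL,
  customer_id   NUMBER NOT NULL,
  order_date    DATE,
  CONSTRAINT orders_pk PRIMARY KEY (customer_id, order_id),
  CONSTRAINT orders_fk FOREIGN KEY (customer_id)
    REFERENCES customers (customer_id)
)
PARTITION BY REFERENCE (orders_fk);

-- Reference data replicated in full to every shard
CREATE DUPLICATED TABLE products
( product_id    NUMBER PRIMARY KEY,
  description   VARCHAR2(128)
);
```

Partitioning the child table by reference keeps each customer's orders on the same shard as the customer row, so cross-shard joins are avoided within the table family.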
Role-based global services were created so that read-write transactions run on the primary shards and queries run on the standby shards. The total size of the sharded database is 50 TB, with 500 GB on each of the 200 shards.
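Such role-based global services are created through GDSCTL. A minimal sketch with hypothetical service names (the -role attribute ensures a service runs only on databases in the matching Data Guard role):

```
-- Read-write service, active only on primaries
GDSCTL> add service -service oltp_rw_srvc -role primary
GDSCTL> start service -service oltp_rw_srvc

-- Read-only service, active only on Active Data Guard standbys
GDSCTL> add service -service oltp_ro_srvc -role physical_standby
GDSCTL> start service -service oltp_ro_srvc
```

After a failover, the services follow the roles automatically: the read-write service moves to the new primary without any application change.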
Replication for this sharded database uses Active Data Guard in Maximum Availability mode. Shards were deployed with the automated “CREATE SHARD” method, so the creation of the shards and the configuration of Active Data Guard replication are handled by the Oracle Sharding management tier. Data Guard broker configurations with Fast-Start Failover (FSFO) are then enabled automatically for all shards, and the FSFO observers are started automatically on the regional shard director.
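With the CREATE SHARD method, each shard is first registered against a shardgroup and then materialized by a single deploy. An illustrative sketch (destination host names and the credential name are placeholders):

```
GDSCTL> create shard -shardgroup shardgroup1 -destination shardhost001 -credential oracle_cred
GDSCTL> create shard -shardgroup shardgroup2 -destination shardhost101 -credential oracle_cred
...
GDSCTL> deploy
```

The `deploy` command launches the database creation jobs on the destination hosts and sets up the replication between the primary and standby shardgroups, which is what makes the scale-out in this benchmark a largely hands-off operation.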
Here are the steps taken for the execution of the benchmark:
1. Begin the application workload on 25 primary shards and 25 standby shards
2. Bring up 25 more primary and 25 more standby shards
3. Repeat step 2 until all 200 shards are online across both availability domains
4. Observe the linear scaling of transactions per second (TPS) and queries per second (QPS)
5. Compute the total read-write transactions per second and queries per second across all shards in the sharded database
The following table (Table #1) shows the raw data of the TPS and QPS observed, as the sharded database is elastically scaled-out.
| Primary shards | Standby shards | Read/Write Transactions per Second | Queries per Second |
| --- | --- | --- | --- |
| … | … | … | … |
Table 1. Linear scalability of relational transactions as shards are added
Figure 2. Transactions and queries scaled linearly as shards are added
Figure 2 illustrates that as the number of shards was doubled, tripled, and quadrupled, the rate of transactions and queries doubled, tripled, and quadrupled accordingly. This demonstrates the frictionless linear scaling enabled by the shared-nothing architecture of Oracle Sharding. Altogether, we were able to execute 11.2 million transactions per second: 4.38 million read-write transactions per second across the 100 primary shards plus 6.82 million read-only transactions per second across the 100 active standby shards.
The study also illustrated that Oracle Sharding provides extreme data availability. When a fault was induced on a given shard, there was no impact on the other shards in the SDB, because the shards share no hardware or software.
Figure 3. A shard outage has zero impact on surviving shards
Figure 3 depicts that the application remained available on the other shards even while a fault was induced on a given shard.
This blog covered the results of the Oracle MAA study on Oracle Sharding and highlighted its linear scalability and fault isolation benefits. The study showcased that the high-performance, reliable Oracle Bare Metal Cloud infrastructure is an ideal platform for running Oracle sharded databases. Visit www.oracle.com/goto/oraclesharding for cookbooks on SDB deployment on Oracle Cloud and for the Oracle Sharding MAA Best Practices white paper (to be published soon). Follow me on Twitter @nageshbattula