Load Balance the Reader Farm with Oracle GDS

Many organizations maintain one or more replicas of their production databases in local and/or geographically disparate data centers to meet various business requirements such as high availability, disaster recovery, content localization and caching, scalability, optimal performance for local clients or compliance with local laws. Oracle Active Data Guard and Oracle GoldenGate are the strategic replication technologies native to Oracle Database used to synchronize one or more replicated copies for such purposes.

Achieving high performance and high availability by distributing workload across multiple database replicas, however, presents challenges that extend beyond the capabilities of the replication technology. Workload must be intelligently load balanced to effectively utilize all resources and to achieve the best performance.

Oracle Global Data Services (GDS) is a feature of Oracle Database 12c that provides connect-time and run-time load balancing, region affinity, replication lag tolerance based workload routing, and enables inter-database service failover over a set of replicated databases.

In this blog post, I will talk about how you can perform load balancing of your read-only workload across the reader farm (a set of Active Data Guard standby databases or a set of Oracle GoldenGate replicas) using Oracle GDS.

GDS provides connect-time and run-time load balancing (within and across data centers) on a reader farm. Client connections are load balanced at connect time. Connection-pool-based Oracle integrated clients can also subscribe to RLB events and load balance work requests at run time. By balancing read-only workload across an Active Data Guard or Oracle GoldenGate reader farm, GDS enables better resource utilization and higher scalability.

Figure 1: Load balancing of Read-Only workloads on a reader farm

The figure above depicts GDS enabled for an Active Data Guard or Oracle GoldenGate reader farm with physical standbys/replicas located in both local and remote data centers. The Order Entry (read-write) global service runs on the primary/master database, while the Reporting (read-only) global services run on the reader farm. Client connections are load balanced among the read-only global services running on the reader farm (within or across data centers).

To load balance the read-only workload over the reader farm, the global service should be defined as shown below.

Oracle GoldenGate Example:

In this example, DB01 is the master and DB02, DB03, DB04 are the replicas.

GDSCTL>add service -service reporting_srvc -gdspool sales -preferred DB02,DB03,DB04 -clbgoal LONG -rlbgoal SERVICE_TIME

Active Data Guard Example:

GDSCTL>add service -service reporting_srvc -gdspool sales -preferred_all -role PHYSICAL_STANDBY -clbgoal LONG -rlbgoal SERVICE_TIME

Additional Notes on CLBGOAL and RLBGOAL attributes:

With the CLBGOAL attribute of a global Service, we can attain connect-time load balancing i.e. choosing the least loaded database instance for establishing a new connection.

CLBGOAL supports two values – LONG and SHORT. Use the LONG connection load balancing method for applications that have long-lived connections. This is typical for connection pools and SQL*Forms sessions. LONG is the default connection load balancing goal. Use the SHORT connection load balancing method for applications that have short-lived connections. When using connection pools that are integrated with FAN, set CLBGOAL to SHORT.

Run-time load balancing is a feature of Oracle connection pools that can distribute client work requests across persistent connections that span databases. GDS supports the run-time load balancing feature of connection-pool-based clients (OCI, JDBC, ODP.NET, and WLS) that are integrated with the Oracle database, i.e. for a particular work request, picking a connection in the pool that belongs to the least loaded instance.

RLBGOAL supports two values – SERVICE_TIME and THROUGHPUT. With RLBGOAL set to SERVICE_TIME, connection pools route work requests to database instances to minimize response time; the load balancing advisory is based on the elapsed time for work requests using the service and on network latency. With RLBGOAL set to THROUGHPUT, connection pools route work requests to database instances to maximize the total throughput of the system; the advisory is based on the rate of work completion in the service plus the available bandwidth to the service.
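To make the run-time side concrete, here is a minimal sketch of how a connection pool could consume an RLB advisory: each instance's advisory percentage is used as a weight when picking a connection for the next work request. The instance names and percentages are hypothetical, not Oracle's actual advisory format.

```python
import random

# A hypothetical RLB advisory: the percentage of work that should go to each
# database instance, as published by the GSMs (names and values invented).
advisory = {"DB02": 50, "DB03": 30, "DB04": 20}

def pick_instance(advisory, rng=random):
    """Pick an instance for the next work request, weighted by the advisory."""
    instances = list(advisory)
    weights = [advisory[i] for i in instances]
    return rng.choices(instances, weights=weights, k=1)[0]

# Over many requests, the workload distribution approaches the advisory.
counts = {i: 0 for i in advisory}
for _ in range(10_000):
    counts[pick_instance(advisory)] += 1
```

This is only the selection step; the real pools (UCP, OCI session pools, ODP.NET) also react to FAN events and connection affinity.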

To learn how to set up and configure GDS, take a look at the GDS Cookbook: http://www.oracle.com/technetwork/database-features/database-ee/gds-with-ogg-cookbook-4004663.pdf

For more info on GDS, visit the GDS OTN Portal.

Workload routing based on replication lag – Global Data Services (GDS) with Active Data Guard

As we studied in the previous blog posts, Oracle Global Data Services (GDS) is a feature of Oracle Database 12c that provides connect-time and run-time load balancing, region affinity, replication lag tolerance based workload routing, and enables inter-database service failover over a set of replicated databases.

In this blog post, I will talk about how you can perform workload routing across the standby databases based on replication lag tolerance.

A Data Guard standby database can lag behind its primary for various reasons. With GDS, applications can choose between accessing real-time and slightly out-of-date data. Applications can set a maximum acceptable lag limit for a global service, and GDS routes requests only to standby databases whose replication lag is below that limit. As shown in Figure 1, when the replication lag at a given standby database exceeds the configured lag limit, GDS brings the global service down on that database and routes new requests to a database that satisfies the lag limit.

Note: If no standby database satisfies the lag limit, the global service is shut down. Once the lag falls back within the limit, GDS automatically brings the global service back up.
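The routing decision described above can be sketched in a few lines: filter the standbys by the service's lag limit, and treat an empty result as the service being down. The standby names and lag figures here are hypothetical.

```python
# Hedged sketch of the lag-based routing decision, not Oracle's implementation.
LAG_LIMIT = 180  # seconds, as set with the -lag service attribute

# Hypothetical per-standby replication lag, in seconds.
standby_lag = {"sfo_stby": 45, "chi_stby": 600, "nyc_stby": 120}

def eligible_standbys(standby_lag, lag_limit):
    """Databases whose replication lag is within the service's lag limit."""
    return [db for db, lag in standby_lag.items() if lag <= lag_limit]

targets = eligible_standbys(standby_lag, LAG_LIMIT)
# If the list is empty, GDS would shut the global service down until some
# standby's lag falls back within the limit.
service_up = bool(targets)
```

Here chi_stby (600 s of lag) is excluded, so new requests would be routed to the remaining standbys.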

 

Figure 1: Routing after replication lag exceeded the tolerance

In Oracle Database 12c Global Data Services, the lag attribute is supported only for Active Data Guard. To use this capability, define the global service with the -lag attribute, specified in seconds, as shown below.

GDSCTL>add service -service reporting_srvc -gdspool sales -preferred_all -role PHYSICAL_STANDBY -lag 180

By setting the maximum tolerable replication lag at the service level, customers can make sure that their applications always access standby databases that provide the right data for the business.

For more info, do visit the Oracle GDS portal on Oracle Technology Network (OTN). Follow me on Twitter @nageshbattula

GDS Cookbook for Oracle GoldenGate

Oracle GDS extends the familiar RAC-style connect-time and run-time load balancing, service failover and management capabilities – so far applicable only to a single database – to a set of replicated databases, within or across data centers. With the new concept of a Global Service, the Oracle GDS framework extends the notion of a database service to a set of replicas running on any combination of single-instance databases, Oracle RAC, Oracle engineered systems, Active Data Guard and Oracle GoldenGate.

I’ve compiled a detailed cookbook that walks through the deployment and configuration of Global Data Services (GDS) for Oracle GoldenGate based replicated environments. The cookbook covers:

  • Installation of Global Service Managers
  • Creation of GDS catalog
  • GDS Setup
  • Configuration of global services for the following GDS features: inter-database global service failover, connect-time and run-time load balancing, and locality-based workload routing

Here is the link to the cookbook: http://www.oracle.com/technetwork/database-features/database-ee/gds-with-ogg-cookbook-4004663.pdf. I strongly encourage you to try it out in your lab.

For more info on GDS, do visit GDS OTN Portal

Manage workloads across replicas via GDS Global Service attributes

Oracle Global Data Services (GDS) is a feature of Oracle Database 12c that provides connect-time and run-time load balancing, region affinity, replication lag tolerance based workload routing, and enables inter-database service failover over a set of replicated databases. GDS maximizes performance and availability of the applications and it allows database workloads to be managed across distributed replicas with a single unified framework.

Achieving the capabilities of GDS is as simple as setting the attributes of the global services; no custom code needs to be written. This blog post lists some of the key attributes, which are set as part of the global service definition.

Note: All the attributes of a global service are applicable to both Oracle GoldenGate and Oracle Active Data Guard, except the Role and Lag attributes, which are currently supported only for Active Data Guard.

Preferred, Available, Preferred_All: When a Global Service is created, we can define which databases support that service – known as the preferred databases for the service. We can also define available databases that take over a Global Service if a preferred database fails. If a service should run on all the replicas of a given GDS pool, we can set the preferred_all attribute. Following are some examples of setting the preferred, available and preferred_all attributes for global services:

GDSCTL>add service -service floor_report1 -gdspool sales -preferred sfo -available chi

GDSCTL>add service -gdspool sales -service sales_reporting_srvc  -preferred_all

If a database running a global service crashes, GDS, taking into account the service placement attributes, automatically performs an inter-database service failover to another available database in the pool. It then sends Fast Application Notification (FAN) events so that client connection pools can reconnect to the new database where the global service has been started.

CLBGOAL: With the CLBGOAL attribute of a global service, we can attain connect-time load balancing, i.e. choosing the least loaded database instance for establishing a new connection. This applies to any client that uses a GSM to connect to a global service. GDS takes into account the load statistics from all GDS pool databases, inter-region network latency, locality and the CLBGOAL attribute.

CLBGOAL supports two values – LONG and SHORT.

Use the LONG connection load balancing method for applications that have long-lived connections. This is typical for connection pools and SQL*Forms sessions. LONG is the default connection load balancing goal.

Use the SHORT connection load balancing method for applications that have short-lived connections. When using connection pools that are integrated with FAN, set CLBGOAL to SHORT.

Here is an example for setting the CLBGOAL:

GDSCTL>add service -service sales_clb_srvc -gdspool sales -preferred_all -clbgoal LONG

RLBGOAL: Run-time load balancing is a feature of Oracle connection pools that can distribute client work requests across persistent connections that span databases. GDS supports the run-time load balancing feature of connection-pool-based clients (OCI, JDBC, ODP.NET, and WLS) that are integrated with the Oracle database, i.e. for a particular work request, picking a connection in the pool that belongs to the least loaded instance. RLB does not require establishing new connections, so it can react much faster and more often.

GDS, as part of advisory compilation, takes into account per-service performance data from the pool databases, inter-region network latency, locality and the RLBGOAL setting. GDS compiles an array in which each entry corresponds to the percentage of work requests that ought to be sent to a particular database instance in the GDS pool. GDS publishes this array every 30 seconds to regional client connection pools as an RLB event via ONS channels. Based on the RLB advisory, clients distribute work requests across persistent connections spanning GDS pool database instances.
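As a rough illustration of how such an advisory array might be derived, the sketch below assigns percentages inversely proportional to each instance's per-service response time. This is purely illustrative: Oracle's actual algorithm also weighs network latency, locality and the RLBGOAL setting, and the instance names and timings here are invented.

```python
# Hypothetical per-service response times, in milliseconds.
service_time_ms = {"DB02": 20.0, "DB03": 40.0, "DB04": 80.0}

def advisory_percentages(service_time_ms):
    """Advise proportionally more work to instances with lower response time."""
    inv = {db: 1.0 / t for db, t in service_time_ms.items()}
    total = sum(inv.values())
    return {db: round(100 * v / total) for db, v in inv.items()}

advice = advisory_percentages(service_time_ms)
```

The fastest instance (DB02) ends up with the largest share of the advised workload, mirroring the behavior the advisory is designed to produce.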

RLBGOAL supports two values – SERVICE_TIME and THROUGHPUT. With RLBGOAL set to SERVICE_TIME, connection pools route work requests to database instances to minimize response time; the advisory is based on the elapsed time for work requests using the service and on network latency. With RLBGOAL set to THROUGHPUT, connection pools route work requests to database instances to maximize the total throughput of the system; the advisory is based on the rate of work completion in the service plus the available bandwidth to the service. Here is an example of setting the RLBGOAL:

GDSCTL>add service -service sales_rlb_srvc -gdspool sales -preferred_all -rlbgoal SERVICE_TIME

When there is a long-term change in instance performance, there may be a need to redistribute pooled connections among the instances. The process of automatically moving unused connections from under-loaded to heavily loaded instances is termed gravitation. When database instance load changes for an extended period, the client may be unable to follow the RLB advisory because it does not have enough connections to push the workload to a particular instance, while another instance may have unused connections. The gravitation algorithm implemented in the connection pool detects the imbalance in the connection distribution and creates or terminates connections accordingly. GDS supports the gravitation feature of Oracle connection pools.
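The core of the gravitation idea can be sketched as follows: compare each instance's share of pooled connections with the share of work the RLB advisory assigns it, and compute the connection distribution the pool should gravitate toward. The pool sizes, advisory values and instance names are hypothetical, and the real pool moves connections incrementally rather than in one step.

```python
# Illustrative sketch only; not the actual Oracle connection pool algorithm.
pool = {"DB02": 10, "DB03": 10, "DB04": 10}    # current connections per instance
advice = {"DB02": 60, "DB03": 30, "DB04": 10}  # advised % of work per instance

def rebalance(pool, advice):
    """Target connection distribution aligned with the RLB advisory."""
    total = sum(pool.values())
    return {db: round(total * advice[db] / 100) for db in pool}

target = rebalance(pool, advice)
# The pool would then drift connections from DB03/DB04 toward DB02 by
# terminating idle connections on one side and creating them on the other.
```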

Besides supporting gravitation, GDS proactively handles the instance UP and DOWN events sent through ONS. This allows dead connections to be removed from the pool quickly, preventing the application from using them, and connections to be proactively established to an instance that was restarted or added to the pool.

When a GDS pool contains replicas hosted on database servers of different processor generations and with different resources (CPU, memory, I/O), GDS understands how the global services are performing across these replicas running on hardware of varying power. In such scenarios, GDS directs proportionally more work to the more powerful databases and sends work to the less powerful databases only as needed, aiming to equalize the response time.

 


Figure 1: GDS does intelligent load balancing even across asymmetrical database servers

Figure 1 showcases a benchmark against three database servers – ORLBb, ORLBc and ORLBd – that have 4 CPUs, 3 CPUs and 2 CPUs respectively. The top graph depicts the run-time load balancing information that is sent to connection-pool-based clients; accordingly, a higher percentage of the workload is advised to be sent to ORLBb. The bottom graph demonstrates how GDS directed the client load in a way that equalized the response time.

GDS understood that the faster hardware is able to crunch more workload, and thus sent run-time load balancing information advising the clients to direct more work requests to the faster hardware relative to the slower resources.

This is a neat capability: customers can transparently phase in faster, more efficient infrastructure, and GDS lets them take advantage of the newer hardware by pushing more workload to it. Thus, GDS does intelligent load balancing even across asymmetrical database servers.
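As a first-order approximation of the benchmark above: if each server's capacity scales with its CPU count, routing work in proportion to capacity tends to equalize response times. The sketch below uses the 4/3/2 CPU counts from Figure 1; treating CPU count as a direct proxy for capacity is a simplifying assumption, since the real advisory is driven by measured service performance.

```python
# CPU counts of the benchmark servers from Figure 1 (capacity proxy assumed).
cpus = {"ORLBb": 4, "ORLBc": 3, "ORLBd": 2}

def capacity_shares(cpus):
    """Workload share per server, proportional to its CPU count."""
    total = sum(cpus.values())
    return {host: round(100 * n / total) for host, n in cpus.items()}

shares = capacity_shares(cpus)  # ORLBb gets the largest share
```

This roughly matches the shape of the advisory in the top graph: the 4-CPU server is advised about twice the work of the 2-CPU server.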

Locality: Customers can also set preferences on which region the given applications should connect to. With the Locality attribute, GDS controls geographical affinity between clients and databases.

With the Locality attribute set to ANYWHERE, the client connections and work requests are routed to the best database in any region that satisfies CLB and RLB goals for the given Global Service. For example:

GDSCTL>add service -service sales_reader_srvc -gdspool sales -locality ANYWHERE -preferred_all

With the Locality attribute set to LOCAL_ONLY, the client connections and work requests are routed to the best database in the client’s local region that satisfies CLB and RLB goals for the given Global Service. For example:

GDSCTL>add service -service sales_reader_srvc -gdspool sales -locality LOCAL_ONLY -preferred_all

With the Locality attribute set to LOCAL_ONLY together with -region_failover, the client connections and work requests are routed to the best database in the client’s local region that satisfies the CLB and RLB goals for the given global service; if no database in the local region offers the service, the client request is forwarded to the best database in another region. For example:

GDSCTL>add service -service sales_reader_srvc -gdspool sales -locality LOCAL_ONLY -region_failover -preferred_all

Lag (for Active Data Guard): In a replicated environment, a replica database may lag behind its primary database. With GDS, customers can choose the lag tolerance (in seconds) that is acceptable for a given application. GDS routes requests to a standby whose replication lag is below the limit. When the replication lag exceeds the lag limit, the global service is brought down on that database and new requests are routed to a database that satisfies the lag limit. In Oracle Database 12c Global Data Services, the lag attribute is supported for Active Data Guard only. An example of setting the lag attribute is shown below:

GDSCTL>add service -service sales_reader_lag180_srvc -gdspool sales -preferred_all -role PHYSICAL_STANDBY -lag 180

Role (for Active Data Guard): The role attribute is applicable to GDS pools containing an Oracle Data Guard broker configuration. When the role attribute is set, GDS ensures that a global service is started on a database only if the database’s role matches the service’s role attribute. GDS provides this functionality without requiring Oracle Clusterware to be installed.

GDSCTL>add service -gdspool sales -service sales_reporting_srvc  -preferred_all -role physical_standby

If the -role option is PHYSICAL_STANDBY, GDS allows the -failover_primary option, which enables the service to fail over to the primary database if all standby databases are down. If other standby databases are still available, however, they are given priority as failover targets before the primary database is tried.

GDSCTL>add service -gdspool sales -service sales_reporting_srvc -preferred_all -role physical_standby -failover_primary

For both manual and Fast-Start Failover (FSFO) based Data Guard role transitions, GSMs ensure that the appropriate global services are started based on the specified role. Fast Application Notification (FAN) events are published after a role change, enabling Fast Connection Failover (FCF) of client connections to an appropriate database instance within the GDS pool.

As I stated earlier, attaining GDS capabilities is as simple as setting the attributes of the global services; no custom code needs to be written. For the complete list of global service attributes, refer to the Oracle GDS documentation.

Components & Architecture of Oracle Global Data Services (GDS)

With automated workload balancing and service failover capabilities, GDS improves performance, availability, scalability, and manageability for all databases that are replicated within a data center and across the globe. GDS improves the ROI of Active Data Guard and GoldenGate investments.

Note: GDS is included with the Active Data Guard and Oracle GoldenGate licenses.

This blog post describes the key components of the Oracle GDS architecture, namely, Global Service, GDS Configuration, GDS Pool, GDS Region, Global Service Manager (GSM), GDS Catalog, GDSCTL and GDS ONS Network.

gds arch.png

Figure 1 illustrates a typical GDS architecture that has two data centers and two sets of replicated databases (SALES, HR).

Global Service

Distributed workload management for replicated databases relies on the use of Global Services. Global Services hide the complexity of a set of replicated databases by providing a single system image to database clients. The set of databases may include clustered or non-clustered Oracle databases (hosted on homogeneous or heterogeneous server platforms) that are synchronized with some form of replication technology such as Oracle Data Guard or Oracle GoldenGate. A client request for a Global Service can be forwarded to any database in the set.

GDS Configuration

A GDS Configuration is a set of databases integrated by the GDS framework into a single virtual server offering global services. The databases in a GDS configuration can be located in any data center. Within a GDS configuration, there can be various sets of replicated databases belonging to different domains that do not share anything besides GDS framework components that manage them.

Clients connect by specifying the global service name and need not know the architectural topology of the GDS configuration.

GDS Pool

A set of replicated databases within a GDS configuration that provides a unique set of global services and belongs to a certain administrative domain is termed a GDS Pool – for example, a SALES pool or an HR pool. This segregation of databases in a GDS configuration into pools allows simplified service management and better security.

A given GDS configuration can comprise multiple GDS Pools spanning multiple GDS Regions.

GDS provides scalability on demand by allowing dynamic provisioning of databases. This means GDS provides the ability to add another replicated database to a pool (for example, SALES POOL) dynamically and transparently (as needed) to obtain additional resource capability to scale the application workloads. GDS allows this with no change to the application configuration or client connectivity.  The same transparency is attained during the removal of the databases from a given pool.

If an application suddenly needs a lot of resources – say, to process massive quarter-end reports – a replica can be added to the given GDS Pool, and the application transparently starts taking advantage of the newly added replica as work requests are automatically sent to it. This does not require any application-level code modification.

GDS Region

A set of databases in a GDS configuration, together with the database clients, is said to be in the same GDS Region if they share network proximity, such that the network latency between members of a given region is typically lower than between members of different regions. A GDS Region typically corresponds to a data center and the clients in geographical proximity to it – for example, an East region and a West region.

A GDS Region can contain databases belonging to different GDS Pools. However, all the GDS Pools belong to the same GDS configuration. A database can be associated with at most one GDS Region, one GDS Pool and one GDS configuration. For high availability purposes, each region in a GDS configuration should have a designated buddy region, which is a region that contains global service managers that can provide continued access to a GDS configuration if the global services managers in the local region become unavailable.

Global Service Manager (GSM)

The GSM is the “brain” of the GDS technology and the central component of a GDS configuration. At a minimum, there must be one GSM per GDS Region. GSMs can be installed on any commodity server similar to those used for the application tier. The recommended approach is to deploy three GSMs per region for availability and scalability.

A GSM can be associated with one and only one GDS configuration. All the GDS Pools share all the GSMs in a GDS configuration. Every GSM manages all global services supported by the GDS configuration.

A Global Service Manager provides the following set of functions:

  • Acts as a regional listener that the clients use to connect to global services. Any GSM can forward a connection request to any database (in any GDS Region) that provides a given global service.
  • Manages GDS configuration by making changes to the configuration data in all GDS components
  • Gathers connection load balancing and performance metrics from all instances of the databases in the GDS configuration. Measures network latency between its own Region and all other GDS Regions and relays this information to all GSMs in other Regions.
  • Performs connect-time load balancing (CLB), routing client connections to the best database instance servicing a given global service in the GDS configuration, based on CLB metrics, network latency and the region affinity of the global service
  • Generates the FAN run-time load balancing (RLB) advisory and publishes it to client connection pools, based on performance metrics integrated with estimated network latency
  • Manages failover of global services
  • Monitors availability of database instances and global services, and notifies clients via FAN HA events upon failure incidents

GDS Catalog

A GDS Catalog is a repository that keeps track of the configuration data and the run-time status of a given GDS configuration. It contains information pertinent to global services, their attributes, GDS Pools, Regions, GSMs and the database instances in the GDS configuration. Each GDS catalog is dedicated to one and only one GDS configuration. It is recommended to host the GDS Catalog schema along with other catalogs, such as the EM Cloud Control repository and the RMAN catalog, in a single consolidated database that is protected with Oracle-recommended MAA technologies.

GDSCTL / Enterprise Manager

A GDS configuration can be administered either through the GDSCTL command-line interface or through the Oracle Enterprise Manager Cloud Control (EMCC) 12c graphical user interface. Administrators who have used SRVCTL will be quite familiar with the look and feel of GDSCTL. GDS is also supported by the EMCC database plug-in starting from 12.1.0.5.

GDS ONS Network

Oracle 12c FAN clients use the Oracle Notification Service (ONS) to receive FAN events. ONS runs outside the database and across the system with no database dependencies. When a database is stopped or fails, FAN posts the status change events immediately, and ONS delivers them immediately. Starting with Oracle Database 12.1, FAN events are posted by Global Data Services (GDS), spanning data centers. GDS takes the service placement attributes into account and automatically performs an inter-database service failover to another available database for planned maintenance that involves a data center change; for unplanned outages, it notifies clients of the failure of an entire database. Oracle clients and connection pools are interrupted on failure events and notified when a global service has been newly started.

Each GSM instance contains an Oracle Notification Service (ONS) server. The ONS server is used to publish the FAN HA events and RLB metrics to which the clients subscribe. All the ONS servers of the regional GSMs are interconnected and make up a regional ONS network. GDS clients connect to the ONS servers of all GSMs in their region and its buddy region and subscribe to the relevant FAN events.

Client Connectivity in GDS

In a GDS-enabled environment, clients connect to the GSM listeners instead of the database listeners. The GSM forwards the connection to the local listener (bypassing the SCAN listeners).

The TNS entry on the client side specifies the endpoints of the GSM listeners. It must contain a list of addresses for each GDS Region – one list of local GSMs for load balancing and intra-region failover, and another list of addresses of remote GSMs for inter-region failover. CLB balances connection requests across GSM listeners and includes connect-time failover. Client connection requests are randomly distributed among all regional GSMs first.

The following snippet shows an example connect descriptor mapped to a TNS alias using GSM Listeners in the tnsnames.ora file.

sales_reporting_srvc =
  (DESCRIPTION =
    (CONNECT_TIMEOUT=90)(RETRY_COUNT=30)(RETRY_DELAY=3)(TRANSPORT_CONNECT_TIMEOUT=3)
    (FAILOVER=ON)
    # GSMs of Datacenter1
    (ADDRESS_LIST =
      (LOAD_BALANCE=ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = gsm-host1a)(PORT = 1571))
      (ADDRESS = (PROTOCOL = TCP)(HOST = gsm-host2a)(PORT = 1571))
      (ADDRESS = (PROTOCOL = TCP)(HOST = gsm-host3a)(PORT = 1571))
    )
    # GSMs of Datacenter2
    (ADDRESS_LIST =
      (LOAD_BALANCE=ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = gsm-host1e)(PORT = 1572))
      (ADDRESS = (PROTOCOL = TCP)(HOST = gsm-host2e)(PORT = 1572))
      (ADDRESS = (PROTOCOL = TCP)(HOST = gsm-host3e)(PORT = 1572))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = sales_reporting_srvc.sales.oradbcloud)
      (REGION = WEST)
    )
  )

Note: For clients close to Datacenter2, it would be better to put the second address list before the first in the tnsnames entry.

Clients load balance among local GSMs and use the remote GSMs only if all the local GSMs are unavailable. Because it is possible for a client to connect to a GSM in another region, the client’s region cannot be inferred from the GSM it connects to; clients therefore specify the global service name and the region they originate from.
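The address-selection behavior described above can be sketched as follows: the client randomizes among the local region's GSM listeners first (LOAD_BALANCE=ON) and falls back to the remote region's listeners only if every local address fails. The host names follow the tnsnames example; the actual ordering logic lives inside Oracle Net, so this is only an illustration.

```python
import random

# GSM listener hosts from the tnsnames example (Datacenter1 local, Datacenter2 remote).
LOCAL = ["gsm-host1a", "gsm-host2a", "gsm-host3a"]
REMOTE = ["gsm-host1e", "gsm-host2e", "gsm-host3e"]

def connect_order(local, remote, rng=random):
    """Addresses in the order a client would try them."""
    local = local[:]
    remote = remote[:]
    rng.shuffle(local)   # random distribution among regional GSMs first
    rng.shuffle(remote)  # remote GSMs only as a fallback
    return local + remote

order = connect_order(LOCAL, REMOTE)
```

Every attempt sequence starts with some permutation of the three local GSMs, then the remote ones, which is exactly the failover behavior the two-address-list descriptor encodes.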

Simplification of client configuration

In the client configuration files for a given replicated setup, customers typically have to list all the database servers on which a given service can run. Any addition or deletion of these servers requires them to modify the configuration files on each of the client nodes. Some customers, as part of a role change, opt to repoint a DNS alias to the current primary/master. Role-based PL/SQL triggers are used to start the services for the appropriate role in each replicated configuration.

With GDS, all these complexities disappear. GDS handles these scenarios automatically and requires no changes to client configuration files, no DNS modifications and no role-based triggers on any database of any GDS Pool under the governance of the shared GDS infrastructure.

Client Compatibility

GDS supports connect-time load balancing for all clients. For clients that use Oracle connection pools and drivers (JDBC/UCP, OCI and Oracle Data Provider for .NET), it also supports run-time load balancing. Oracle WebLogic Active GridLink, Apache Tomcat, IBM WebSphere and Red Hat JBoss can leverage UCP. It is recommended that clients use version 12.1 or greater.

GDS – A Shared infrastructure for multiple replicated configurations

GDS is a shared infrastructure and in Oracle Database 12c, a single GDS configuration can manage up to:

  • 300 Database Instances
  • 1000 Global Services
  • 20 GDS Pools
  • 10 GDS Regions
  • 5 GSMs per Region
  • 1000 Mid-Tier connection pool based clients

Recommended GDS configuration:

  • GDS catalog database (can be consolidated with EM Cloud Control repository and RMAN catalog) in a Data Guard configuration for HA/DR (MAA best practice)
  • 3 GSMs per Region (GDS binaries installed on each GSM node)

Note: GDS supports both policy-managed and administrator-managed RAC databases. It is recommended that the Data Guard configuration be broker-enabled.

I plan to cover various use cases of GDS, in a series of blog posts. Coming soon!

My takeaways from – “High Availability and Sharding with Next Generation Oracle Database” OOW-2017 conference session

In his High Availability (HA) conference session, Wei Hu, VP of Product Development, covered the new Oracle Sharding and HA features of 18c and how the Oracle Maximum Availability Architecture (MAA) powers the Oracle Autonomous Database Cloud.

Here are my key takeaways from the session:

  • Oracle Autonomous Database Cloud Service relies completely on Oracle Maximum Availability Architecture (MAA) for automated protection and repair for both planned and unplanned downtime:
    • System Failure – Exadata, RAC, ASM
    • Regional Outage – Active Data Guard
    • Patches – RAC Rolling Updates
    • Major Upgrades – Transient Logical Standby
    • Table Changes – Online Redefinition
    • User Error – Flashback
  • Sharding 1.0 (Oracle 12.2) is great for internet-style applications
  • Sharding 2.0 (Oracle 18c) expands the range of use cases. Following are some of the key Sharding features in 18c.
    • User-defined sharding – Partition data across shards by RANGE or LIST. By explicitly mapping data to shards, you gain better control, compliance, and application performance. This sharding method supports geo-distributed, hybrid cloud, and cloud-bursting use cases.
    • Swim-lanes – Establish mid-tier affinity with shards to improve mid-tier cache locality and connection management and, for geo-distributed shards, to eliminate chatty mid-tier-to-shard connections across regions.
    • Multishard queries – Proxy routing now supports all sharding methods and all query shapes.
    • RAC Sharding – Affinitizes table partitions to RAC instances. Requests that specify the sharding key are routed to the RAC instance that logically holds the partition. Affinity yields better cache utilization and fewer block pings across instances. Oracle RAC on steroids, I say.
  • Wei showcased the first customer implementation of Oracle Sharding: China Telecom went from setup to go-live in only 3 months. China Telecom’s rationale: “Migration cost is too high if we go to other data stores… DBAs and developers are familiar with Oracle Database. Since Oracle has sharding, why not use Oracle Sharding?” Details of China Telecom’s implementation will be covered in my Oracle Sharding conference session.
  • Some of the Active Data Guard 18c Enhancements:
    • Automated Nologging support in Active Data Guard
    • Update-like capabilities on Active Data Guard for “Mostly Read, Occasional Updates” applications
    • Automatic password file updates on Active Data Guard standbys – changing the Admin password on the primary automatically updates the standbys
    • Standby-first encryption – encrypt tablespaces on the standby first, switch over, then encrypt on the old primary. How cool is that!
    • New Broker “Validate” command – For validation and proactive health checks to automatically detect and fix issues before they happen
  • He also covered automated backups with the Zero Data Loss Recovery Appliance, online operations, online patching enhancements, the Zero Downtime upgrade utility, and many other capabilities.
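To make the user-defined sharding method from the takeaways above concrete, a sharded table can be partitioned by LIST so that each partition is explicitly placed in a tablespace that resides on a chosen shard. This is a minimal, hypothetical sketch: the table, column, and tablespace names are illustrative, and the tablespaces are assumed to have been created in the appropriate shardspaces beforehand.

```sql
-- Hypothetical user-defined sharding example (Oracle 18c).
-- Each LIST partition is pinned to a tablespace on a specific
-- shard, giving explicit control over data placement.
CREATE SHARDED TABLE accounts
( id    NUMBER       NOT NULL,
  name  VARCHAR2(60),
  geo   VARCHAR2(8)  NOT NULL,
  CONSTRAINT accounts_pk PRIMARY KEY (id, geo)
)
PARTITION BY LIST (geo)
( PARTITION p_east VALUES ('EAST') TABLESPACE ts_east,
  PARTITION p_west VALUES ('WEST') TABLESPACE ts_west
);
```

Because the DBA, not the system, decides which data lands on which shard, this method suits the compliance and geo-distribution use cases mentioned above; the trade-off is that the DBA also owns rebalancing.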

Oracle Database 18c HA features are the core building blocks for the reliability of the Oracle Autonomous Database cloud. On-premises deployments can attain the same availability as the Oracle Cloud by adopting the same MAA architecture used by the Oracle Autonomous Database Cloud Service.

Are your on-premises database deployments as resilient and reliable as the Autonomous Database cloud? Something to ponder.

 

Oracle Sharding – Licensing

This blog post covers how Oracle Sharding is licensed for on-premises and Oracle Cloud deployments.

Q: How is Oracle Sharding Licensed for On-premises?

A: For a sharded database (SDB) with three or fewer primary shards, Oracle Sharding is included with EE (which includes Data Guard). There is no limit on the number of standby shards.

For an SDB with more than three primary shards, in addition to EE, all shards must be licensed for either Active Data Guard, Oracle GoldenGate, or RAC.

So if your licensing agreement covers EE and one of the HA options (Active Data Guard, Oracle GoldenGate, or RAC), you can use Oracle Sharding at no additional cost.

Q: How is Oracle Sharding Licensed for Oracle Cloud (PaaS)?

A: With DBCS EE and DBCS EE-High Performance (HP), use is limited to three primary shards (no limit on the number of standby shards).

With DBCS EE-Extreme Performance (EP) and Exadata Cloud Service (ECS), there is no limit on the number of primary shards or standby shards.

Q: What is the Sharding BYOL model for Oracle Cloud (IaaS)?

A: For a sharded database (SDB) with three or fewer primary shards, Oracle Sharding is included with EE (which includes Data Guard), and there is no limit on the number of standby shards. If Active Data Guard, Oracle GoldenGate, or RAC is required on those three primary shards, the corresponding option must be licensed.

For an SDB with more than three primary shards, in addition to EE, all shards must be licensed for either Active Data Guard, Oracle GoldenGate, or RAC.

Ref: Oracle Database Licensing Information User Manual, 12c Release 2 (12.2)