Oracle University Podcast

Oracle Database@AWS: Monitoring, Logging, and Best Practices



Running Oracle Database@AWS is most effective when you have full visibility and control over your environment. In this episode, hosts Lois Houston and Nikita Abraham are joined by Rashmi Panda, who explains how to monitor performance, track key metrics, and catch issues before they become problems. Later, Samvit Mishra shares key best practices for securing, optimizing, and maintaining a resilient Oracle Database@AWS deployment.

Oracle Database@AWS Architect Professional: https://mylearn.oracle.com/ou/course/oracle-databaseaws-architect-professional/155574
Oracle University Learning Community: https://education.oracle.com/ou-community
LinkedIn: https://www.linkedin.com/showcase/oracle-university/
X: https://x.com/Oracle_Edu

Special thanks to Arijit Ghosh, Anna Hulkower, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.

------------------------------------------------------
Episode Transcript:

00:00

Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!

00:26

Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Communications and Adoption with Customer Success Services.

Lois: Hello again! Last week's discussion was all about how Oracle Database@AWS stays secure and available. Today, we're joined by two experts from Oracle University. First, we'll hear from Rashmi Panda, Senior Principal Database Instructor, who will tell you how to monitor and log Oracle Database@AWS so your environment stays healthy and reliable.

Nikita: And then we're bringing in Samvit Mishra, Senior Manager, CSS OU Cloud Delivery, who will break down the best practices that help you secure and strengthen your Oracle Database@AWS deployment. Let's start with you, Rashmi. Is there a service that allows you to monitor the different AWS resources in real time?

Rashmi: Amazon CloudWatch is the cloud-native AWS monitoring service that monitors different AWS resources in real time. It allows you to collect resource metrics, create customized dashboards, and even take action when certain criteria are met. Integrating Oracle Database@AWS with Amazon CloudWatch lets you monitor the metrics of the different database resources provisioned in Oracle Database@AWS.

Amazon CloudWatch collects raw data and processes it into near real-time metrics. Metrics collected for the resources are retained for 15 months, which lets you analyze historical data to understand and compare the performance, trends, and utilization of the database service resources across different time intervals. You can set up alarms that continuously monitor the resource metrics for breaches of user-defined thresholds, and configure alert notifications or automated actions in response to a threshold being reached.
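As a rough sketch, the alarm setup Rashmi describes can be expressed as the parameters for CloudWatch's put_metric_alarm API. The metric name (CpuUtilization), dimension (ResourceId), and the sample threshold and ARN below are illustrative assumptions, not documented values; check the AWS/ODB namespace in your account for the exact names:

```python
# Sketch of a CloudWatch alarm definition for an Oracle Database@AWS metric.
# Metric and dimension names are assumptions; verify them in the AWS/ODB
# namespace in the CloudWatch console before using this.

def build_cpu_alarm(resource_id: str, threshold: float, sns_topic_arn: str) -> dict:
    """Build the parameter dict for cloudwatch.put_metric_alarm()."""
    return {
        "AlarmName": f"odb-cpu-high-{resource_id}",
        "Namespace": "AWS/ODB",                      # namespace used by Oracle Database@AWS
        "MetricName": "CpuUtilization",              # assumed metric name
        "Dimensions": [{"Name": "ResourceId", "Value": resource_id}],  # assumed dimension
        "Statistic": "Average",
        "Period": 300,                               # evaluate 5-minute averages
        "EvaluationPeriods": 3,                      # threshold must be breached 3 periods in a row
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],             # notify an SNS topic on breach
    }

alarm = build_cpu_alarm("vmcluster-123", 85.0,
                        "arn:aws:sns:us-east-1:111122223333:odb-alerts")
# With boto3: boto3.client("cloudwatch").put_metric_alarm(**alarm)
```

Keeping the parameters in a plain dict like this makes the alarm definition easy to review and reuse across resources before handing it to boto3.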

02:19

Lois: What monitoring features stand out the most in Amazon CloudWatch?

Rashmi: With Amazon CloudWatch, you can monitor Exadata VM Cluster, container database, and Autonomous Database resources in Oracle Database@AWS. Oracle Database@AWS reports resource-specific metrics data in the AWS/ODB namespace of Amazon CloudWatch. Metrics can be collected only when the database resource is in an available state in Oracle Database@AWS.

Each resource type has its own metrics defined in the AWS/ODB namespace, for which metrics data are collected.

02:54

Nikita: Rashmi, can you take us through a few metrics?

Rashmi: For the Exadata VM Cluster, there are metrics for CPU utilization, memory utilization, swap space, and storage file system utilization. There is also the load average on the server, the node status, the number of allocated CPUs, et cetera. For the container database, there are CPU utilization, storage utilization, block changes, parse count, execute count, and user calls, which are important elements that can provide metrics data on database load. And for Autonomous Database, metrics data include DB time, CPU utilization, logins, IOPS and IO throughput, RedoSize, parse, execute, and transaction counts, and a few others.
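To pull one of these metrics back out for analysis, you can build a get_metric_statistics request against the AWS/ODB namespace. This is a minimal sketch; the metric name and ResourceId dimension are assumptions to be confirmed against your account:

```python
from datetime import datetime, timedelta, timezone

def build_metric_query(metric_name: str, resource_id: str, hours: int = 24) -> dict:
    """Parameter dict for cloudwatch.get_metric_statistics() over the last `hours`."""
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/ODB",
        "MetricName": metric_name,                   # e.g. a CPU utilization metric
        "Dimensions": [{"Name": "ResourceId", "Value": resource_id}],  # assumed dimension
        "StartTime": now - timedelta(hours=hours),
        "EndTime": now,
        "Period": 3600,                              # one datapoint per hour
        "Statistics": ["Average", "Maximum"],
    }

q = build_metric_query("CpuUtilization", "vmcluster-123")
# With boto3: boto3.client("cloudwatch").get_metric_statistics(**q)
```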

03:32

Nikita: Once you've collected these metrics and analyzed database performance, what tools or services can you use to automate responses or handle specific events in your Oracle Database@AWS environment?

Rashmi: Then there is Amazon EventBridge, which can monitor events from AWS services and respond automatically with actions that you define. You can monitor events from Oracle Database@AWS in EventBridge; Oracle Database@AWS sends event data to EventBridge continuously, in near real time. EventBridge forwards the event data to targets such as AWS Lambda and Amazon Simple Notification Service to perform actions when certain events occur.

Oracle Database@AWS events are structured messages that indicate changes in the life cycle of a database service resource. EventBridge can filter events based on rules you define, process them, and deliver them to one or more targets. An event bus is the router that receives the events, optionally transforms them, and then delivers them to the targets. Events from Oracle Database@AWS can be generated in two ways: from Oracle Database@AWS in AWS, or directly from OCI, to be received by EventBridge in AWS.

You can monitor Exadata Database and Autonomous Database resource events. Ensure that the Exadata infrastructure is in an available state. You can configure how the events are handled for these resources: you define rules in EventBridge to filter the events of interest and the target that will receive and process those events. You can filter events based on a pattern that depends on the event type, and apply this pattern using the Amazon EventBridge put-rule API, with the default event bus, to route only the matching events to targets.
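The put-rule step above can be sketched as an event pattern plus the rule parameters. The source ("aws.odb") and detail-type strings here are hypothetical; confirm the exact values emitted by Oracle Database@AWS in your EventBridge console before relying on them:

```python
import json

# Assumed source/detail-type values for Oracle Database@AWS events;
# verify the real strings in a sample event before deploying a rule.
EVENT_PATTERN = {
    "source": ["aws.odb"],                         # assumed event source
    "detail-type": ["ODB Resource State Change"],  # assumed detail type
}

def build_put_rule(rule_name: str) -> dict:
    """Parameter dict for events.put_rule() on the default event bus."""
    return {
        "Name": rule_name,
        "EventBusName": "default",                  # AWS-generated events land here
        "EventPattern": json.dumps(EVENT_PATTERN),  # put_rule expects a JSON string
        "State": "ENABLED",
    }

params = build_put_rule("odb-state-changes")
# With boto3: boto3.client("events").put_rule(**params), then put_targets()
# to attach a Lambda function or SNS topic.
```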

05:13

Lois: And what about events that AWS itself generates?

Rashmi: Events generated in AWS for Oracle Database@AWS resources are delivered to the default event bus of your AWS account. These include lifecycle changes of the ODB network: successful creation or failure to create the ODB network, and successful deletion or failure to delete the ODB network.

When you subscribe to Oracle Database@AWS, an event bus with the prefix aws.partner/odb is created in your AWS account. All events generated in OCI for Oracle Database@AWS resources are received on this event bus. When you create a filter pattern using the Amazon EventBridge put-rule API, you must set the event bus name to this event bus. Make sure you do not delete this event bus. The events generated in OCI and received on this event bus are extensive: they include events for the Oracle Exadata infrastructure, VM Cluster, container databases, and pluggable databases.
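For the OCI-generated events, the same put-rule call must point at the partner event bus instead of the default one. A minimal sketch, where the bus name is a placeholder (the real name starts with aws.partner/odb followed by account-specific segments, and must be looked up in your account):

```python
import json

# Placeholder: substitute the actual partner bus name from your account.
PARTNER_BUS = "aws.partner/odb/EXAMPLE"

def build_oci_event_rule(rule_name: str, bus_name: str = PARTNER_BUS) -> dict:
    """Rule parameters for events arriving from OCI on the partner event bus."""
    return {
        "Name": rule_name,
        "EventBusName": bus_name,  # must be the partner bus, not "default"
        # EventBridge prefix matching catches all sources under aws.partner/odb.
        "EventPattern": json.dumps({"source": [{"prefix": "aws.partner/odb"}]}),
        "State": "ENABLED",
    }

r = build_oci_event_rule("oci-odb-events")
```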

06:14

Lois: If you want to look back at what's happened in your environment, like who made the changes or accessed resources, what's the best AWS service for logging and auditing all that activity?

Rashmi: Amazon CloudTrail is a logging service in AWS that records the actions taken by a user, a role, or an AWS service. Oracle Database@AWS is integrated with Amazon CloudTrail, which enables logging of all the different events on Oracle Database@AWS resources.

Amazon CloudTrail captures all API calls to Oracle Database@AWS as events. These include calls from the Oracle Database@AWS console and code calls to Oracle Database@AWS API operations. The log files are delivered to an Amazon S3 bucket that you specify. These logs record the identity of the caller who made the request to Oracle Database@AWS, the IP address from which the call originated, the time of the call, and some additional details.
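The caller identity, source IP, and timestamp Rashmi mentions are standard fields of a CloudTrail event record, so extracting them is straightforward. A small sketch; the field names are standard CloudTrail record fields, but the sample values (including the event name) are made up for illustration:

```python
# Illustrative CloudTrail record; values are fabricated for the example.
SAMPLE_RECORD = {
    "eventTime": "2025-06-01T12:34:56Z",
    "eventName": "CreateCloudVmCluster",   # hypothetical ODB API operation
    "sourceIPAddress": "203.0.113.10",
    "userIdentity": {"type": "IAMUser", "userName": "db-admin"},
}

def summarize(record: dict) -> str:
    """One-line audit summary: who called what, from where, and when."""
    who = record["userIdentity"].get("userName", "unknown")
    return f'{record["eventTime"]} {who}@{record["sourceIPAddress"]} called {record["eventName"]}'

print(summarize(SAMPLE_RECORD))
```

The same extraction works on records pulled from the S3-delivered log files or from the CloudTrail event history API.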

CloudTrail event history stores an immutable record of the past 90 days of management events in an AWS region. You can view, search, and download these records from CloudTrail event history. Creating an AWS account automatically gives you access to CloudTrail event history. If you would like to retain the logs beyond 90 days, you can create CloudTrail trails or a CloudTrail Lake event data store.

Management events in AWS provide information about management operations that are performed on the resources in your AWS account. Management operations are also called control plane operations. Thus, the control plane operations in Oracle Database@AWS are logged as management events in CloudTrail logs.

07:59

Are you a MyLearn subscriber? If so, you're automatically a member of the Oracle University Learning Community! Join millions of learners, attend exclusive live events, and connect directly with Oracle subject matter experts. Enjoy the latest news, join challenges, and share your ideas. Don't miss out! Become an active member today by visiting mylearn.oracle.com.

08:25

Nikita: Welcome back! Samvit, let's talk best practices. What should teams keep in mind when they're setting up and securing their Oracle Database@AWS environment?

Samvit: Use IAM roles and policies with least privilege to manage Oracle Database@AWS resources. This ensures only authorized users can provision or modify DB resources, reducing the risk of accidental or malicious changes.
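As a least-privilege sketch, a read-only policy might look like the following. The "odb:" action names are assumptions modeled on typical AWS action naming; verify the actual Oracle Database@AWS actions in the IAM service authorization reference before using this:

```python
import json

# Read-only policy sketch: Get*/List* only, no create, modify, or delete.
# The odb: action prefix is an assumption to be verified against IAM docs.
READ_ONLY_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "OdbReadOnly",
            "Effect": "Allow",
            "Action": ["odb:Get*", "odb:List*"],
            "Resource": "*",
        }
    ],
}

policy_json = json.dumps(READ_ONLY_POLICY, indent=2)
# Attach via IAM: iam.create_policy(PolicyName="OdbReadOnly", PolicyDocument=policy_json)
```

Provisioning rights would go in a separate policy granted only to the operators who need them, keeping the blast radius of any one credential small.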

Oracle Data Safe monitors database activity, user risk, and sensitive data, while AWS CloudTrail records all AWS API calls. Together, they give full visibility across the database and cloud layers.

Autonomous Database supports Oracle Database Vault for enforcing separation of duties. Exadata Database Service can integrate with Audit Vault and Database Firewall to prevent privileged users from bypassing security controls.

Enable multifactor authentication for AWS IAM users managing Oracle Database@AWS. This adds a strong second layer of protection against stolen credentials.

Always deploy your Oracle Database@AWS in private subnets without public IPs. Use AWS security groups and NACLs to strictly limit inbound and outbound traffic, allowing access only from trusted applications.
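A security group rule that follows this advice can be sketched as the parameters for EC2's authorize_security_group_ingress: open only the Oracle listener port to a trusted application CIDR. The group ID and CIDR below are placeholders:

```python
# Restrictive ingress sketch: Oracle SQL*Net listener (1521) from the app tier only.
def build_db_ingress(group_id: str, trusted_cidr: str) -> dict:
    """Parameter dict for ec2.authorize_security_group_ingress()."""
    return {
        "GroupId": group_id,
        "IpPermissions": [
            {
                "IpProtocol": "tcp",
                "FromPort": 1521,   # Oracle listener port
                "ToPort": 1521,
                "IpRanges": [
                    {"CidrIp": trusted_cidr, "Description": "app tier only"}
                ],
            }
        ],
    }

rule = build_db_ingress("sg-0abc123", "10.0.1.0/24")
# With boto3: boto3.client("ec2").authorize_security_group_ingress(**rule)
```

Everything not explicitly allowed here stays blocked, which is the behavior you want for a database in a private subnet.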

Exadata Database Service supports integration with OCI Vault for key lifecycle management. In the case of Autonomous Database, the transparent data encryption keys are managed automatically, but you can bring your own keys with OCI Vault. Key rotation ensures compliance and reduces the risk of key compromise.

Oracle Database@AWS enforces encrypted connections by default. Ensure clients connect with TLS 1.2 or 1.3 to protect data in transit from interception or tampering.

Use Oracle Data Safe's user assessment features to detect dormant users or excessive privileges. Disable unused accounts and rightsize permissions to reduce insider threats and security gaps.

Export database audit logs to Oracle Data Safe or Audit Vault, or to AWS S3 with Object Lock for immutability. This prevents log tampering and ensures audit evidence is preserved for compliance.

11:25

Lois: OK, that covers security. Do you have any tips for making sure your Oracle Database@AWS setup is reliable and resilient?

Samvit: Start with clear recovery objectives. Define how much downtime and data loss each workload can tolerate. These targets drive your HADR architecture and backup strategy.

Implement business continuity measures to deliver maximum uptime for your databases. As a best practice, configure a disaster recovery environment for your critical databases so that, if a disaster affects the primary database, applications can fail over immediately to the DR environment, ensuring minimal application downtime and zero or minimal data loss. With Oracle Database@AWS, you can automate the creation and management of DR environments for your database services using different deployment capabilities. You can configure either cross-availability-zone DR in the same region or cross-region DR. Since cross-availability-zone DR only protects against site failure, you must also configure cross-region DR to protect against regional failure.

A DR plan is only effective if tested. Regular failover and switchover drills validate that people, processes, and systems can recover as designed.

For Exadata Database Service, the Autonomous Recovery Service provides automated backup validation, recovery guarantees, and protection against accidental data loss or corruption.

Oracle-managed backups are fully managed by OCI. When you create your Oracle Exadata Database, you can enable automatic backups by choosing Enable Automatic Backups in the OCI Console. When you do, you can select Amazon S3, OCI Object Storage, or the Autonomous Recovery Service as the backup destination.

Don't just take backups; test them too. Regularly restore backups into a non-production environment to validate integrity and recovery time.

Plan beyond just the database. Map application and middleware dependencies to ensure end-to-end business resilience. A database failover is useless if dependent apps can't reconnect.

14:09

Nikita: Another area of interest is performance and cost. What practices help teams balance the two?

Samvit: You should enable ADB auto-scaling. Autonomous Database automatically scales CPU and storage as workloads grow, which ensures performance during peaks while avoiding overprovisioning.

Monitor CPU, memory, and IO metrics with AWS CloudWatch to rightsize your compute. Scale up or down based on actual utilization instead of static provisioning.

Autonomous Database continuously evaluates and creates indexes automatically. This improves query performance without requiring manual tuning.

Use connection pooling in your applications to optimize database connections. Minimizing round trips reduces latency and improves throughput.

Apply AWS tags to database and related resources for cost allocation and chargeback. Tagging also helps with governance and cost visibility.
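A consistent tag set can be generated once and applied to every database-related resource. A small sketch; the tag keys are illustrative and should follow your organization's tagging standard:

```python
# Cost-allocation tag set in the list-of-dicts shape most AWS tagging APIs accept.
def build_cost_tags(cost_center: str, environment: str, owner: str) -> list:
    return [
        {"Key": "CostCenter", "Value": cost_center},   # chargeback dimension
        {"Key": "Environment", "Value": environment},  # prod / dev / test
        {"Key": "Owner", "Value": owner},              # accountable team
    ]

tags = build_cost_tags("FIN-1234", "prod", "dba-team")
# Pass as the Tags parameter of the resource-creation or tagging API call.
```

Centralizing the tag builder keeps keys consistent, which is what makes cost-allocation reports and governance queries actually work.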

Choose between bring your own license and license-included models for Oracle Database@AWS. The right model depends on your existing license portfolio and cost strategy.

Not all workloads need long backup retention. Adjust retention policies based on business needs to balance compliance with storage costs.

Exadata Database supports Oracle multitenant with pluggable databases. Consolidating databases reduces infrastructure footprint and licensing costs.

Performance tuning isn't just technical. Align metrics with business KPIs: correlating DB performance to user experience and revenue impact helps prioritize optimizations.

16:20

Lois: Before we wrap up, Samvit, let's look at operational efficiency. What advice do you have for making day-to-day operations more efficient?

Samvit: Use infrastructure as code tools like Terraform or AWS CloudFormation to automate provisioning. This ensures consistent, repeatable deployments with minimal manual errors.

For Autonomous Database, enable auto-start/stop to optimize costs by running databases only when needed. This is ideal for dev/test or seasonal workloads.

Exadata Database Service provides fleet maintenance to patch multiple systems consistently. This reduces downtime and simplifies lifecycle management.

Integrate AWS CloudWatch for performance monitoring and EventBridge for event-driven automation. This helps detect issues early and trigger automated workflows.

Oracle Data Safe provides ready-to-use audit and compliance reports. Use these to streamline governance and reduce the effort of manual compliance tracking.

For Autonomous Database, Performance Hub simplifies monitoring, while Exadata users benefit from AWR and ASH reports. Together, they give deep insights into performance trends.

Automated tagging policies and change management workflows help maintain governance. They ensure resources are tracked properly and changes are auditable.

Monitor storage consumption and growth patterns using AWS CloudWatch and the ADB Console. Proactive tracking helps avoid capacity issues and unexpected costs.

Send CloudTrail logs into EventBridge to trigger automated incident responses. This shortens response time and builds operational resilience.
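This CloudTrail-to-EventBridge wiring can be sketched as a rule matching management events delivered via CloudTrail. "AWS API Call via CloudTrail" is the standard detail-type for such events; the eventName values below are hypothetical destructive operations chosen for illustration:

```python
import json

# Match destructive management calls recorded by CloudTrail so a target
# (Lambda, SNS, etc.) can run an automated incident response.
INCIDENT_PATTERN = {
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        # Assumed operation names; substitute the real ones from your logs.
        "eventName": ["DeleteCloudVmCluster", "DeleteOdbNetwork"],
    },
}

rule_params = {
    "Name": "odb-destructive-api-calls",
    "EventBusName": "default",
    "EventPattern": json.dumps(INCIDENT_PATTERN),
    "State": "ENABLED",
}
# With boto3: boto3.client("events").put_rule(**rule_params), then attach
# a response target with put_targets().
```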

18:36

Nikita: Samvit and Rashmi, thanks for spending time with us today. Your insights always help bring the bigger picture into focus.

Lois: They definitely do. And if you'd like to go deeper into everything we covered, head over to mylearn.oracle.com and look up the Oracle Database@AWS Architect Professional course. Until next time, this is Lois Houston…

Nikita: And Nikita Abraham, signing off!

19:03

That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

Oracle University Podcast, by Oracle Corporation
