Posts

Who makes the decisions? I thought it was the Optimizer

Some decisions are actually not taken by the optimizer. I’ve moved my blog from https://insanedba.blogspot.com to https://dincosman.com. Please update your bookmarks and follow/subscribe at the new address for all the latest updates and content; a more up-to-date version of each post may be available there. We started to experience a performance problem with an SQL query whose application code we could not change, so we had to find a solution solely in the database layer. The real query was a sophisticated one, but the problem can be summarized with the short one below, followed by its plan output. The PRODUCTS table was large, exceeding 50 GB, and the query's execution time was 180-200 seconds. The query searched the PRODUCTS table for rows where HTML_CONTENT is not null, but the HTML_CONTENT column was null for every row, and as this column is not indexed, the query was performing a full ...
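Based on the description above, a minimal sketch of the simplified query and of reproducing its plan might look like the following; the table and column names are taken from the post, but the exact statements in the original post may differ:

    -- Simplified form of the problematic query (sketch; the real query was far more complex)
    SELECT *
      FROM products
     WHERE html_content IS NOT NULL;

    -- Reproduce the execution plan to confirm the full table scan on PRODUCTS
    EXPLAIN PLAN FOR
    SELECT * FROM products WHERE html_content IS NOT NULL;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);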

Reset "_gc_policy_minimum" parameter to its default.

No more frequent remastering of objects. The default value of "_gc_policy_minimum" (formerly known as "_gc_affinity_minimum") is quite low for busy environments, so if you have been setting this parameter to 15000 according to Best Practices and Recommendations for RAC databases with SGA size over 100GB (Doc ID 1619155.1), I have good news for you: from 19.20 (19c DBRU JUL '23) on, you may reset it to its default value. Due to internal bug 34729755, 15000 is the new default in 23c, 19c DBRU JUL '23, and 19c ADB. DBAs with compulsive tuning disorder like me won't have to tune this parameter any more in those releases or later. According to DRM - Dynamic Resource Management (Doc ID 390483.1), DRM attribu...
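For reference, resetting the parameter back to its default on all RAC instances could be sketched as below; the X$ lookup and the SCOPE/SID clauses are my assumptions, so adjust them to how the parameter was originally set:

    -- Check the current value of the hidden parameter (requires SYS access to X$ views)
    SELECT x.ksppinm  AS parameter,
           y.ksppstvl AS current_value,
           y.ksppstdf AS is_default
      FROM x$ksppi x, x$ksppcv y
     WHERE x.indx = y.indx
       AND x.ksppinm = '_gc_policy_minimum';

    -- Revert it to the default; takes effect after the instances are restarted
    ALTER SYSTEM RESET "_gc_policy_minimum" SCOPE=SPFILE SID='*';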

Database Patching Tips #JoelKallmanDay

Bonus Step - What a Wonderful Patch (Patching Tips). Failing to patch your database software may lead to data loss that costs money and reputation; that is the security side of it. There are also many other benefits, including new features and fixes for bugs that make your applications run slowly or not work as intended. Every database administrator should have a patching strategy that fits his or her organization. I will share my tips; they may be useful for others but not suitable for everyone, because every organization has its own constraints and unique environment. Generally Applicable * First, know your database environment and identify all the features you are using actively (one way is sketched below). Kno...
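One way to approach that first tip is a quick look at DBA_FEATURE_USAGE_STATISTICS; this is a sketch of my own, not taken from the post:

    -- Sketch: list features that have been detected in use at least once in this database
    SELECT name,
           version,
           detected_usages,
           last_usage_date
      FROM dba_feature_usage_statistics
     WHERE detected_usages > 0
     ORDER BY last_usage_date DESC NULLS LAST;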

Cleaning old Oracle Grid and Database Homes #JoelKallmanDay

Step 5 - Another One Patches the Dust. (Cleaning Old Oracle Homes) After you patch your database servers' grid and database homes, whether with the out-of-place method or the fleet patching method, you should remove the old Oracle homes. As a precautionary measure, I keep the old Oracle homes for at least 15 days, and once I am comfortable with the new ones, I remove the old ones. Removing the old Oracle Grid Infrastructure home with the deinstall command: the deinstall command detects the cluster nodes, displays a short summary, asks for confirmation, operates on all nodes, warns you to run root.sh on all nodes, and usually leaves some leftovers, so I also remove the old directory. After deinstallation, the old grid home is flagged as deleted ...

Update Oracle RAC Database Using Fleet Maintenance #JoelKallmanDay

Step 4 - Patchin' Alive. (Oracle RAC Database Patching Using Fleet Maintenance) I will patch the database software on all my database servers from 19.16 to 19.20. As there are more than 20 servers to patch, we will use fleet patching. First, I patched a 2-node cluster's database homes using the out-of-place (OOP) patching methodology through runInstaller in silent mode, in Step 2 - Patch My Breath Away (DB OOP Patching). Now I will use the patched Oracle RAC database home to create a gold image for fleet patching of all Oracle RAC database homes. Some useful references: * Primary Note for Database Patching Using Enterprise Manager 13c Cloud Control Fleet Maintenance (Doc ID 2435251.1) ...

Update Grid Infrastructure Using Fleet Maintenance #JoelKallmanDay

Step 3 - Patching in the Deep. (Grid Patching Using Fleet Maintenance) I will patch the grid infrastructure on all my database servers from 19.16 to 19.20. As there are more than 20 servers to patch, we will use fleet patching. First, I patched a 2-node cluster's grid infrastructure using the out-of-place (OOP) patching methodology through gridSetup.sh in silent mode, in Step 1 - Patch Me If You Can (Grid OOP Patching). Now I will use the patched grid homes to create a gold image for fleet patching of all grid homes. Some useful references: * Primary Note for Database Patching Using Enterprise Manager 13c Cloud Control Fleet Maintenance (Doc ID 2435251.1) ...

Database Out-of-Place Patching Through runInstaller #JoelKallmanDay

Step 2 - Patch My Breath Away. (DB OOP Patching) I will patch the database software on all my database servers from 19.16 to 19.20. As there are more than 20 servers to patch, we will use fleet patching. First, I will patch a 2-node cluster's database homes using the out-of-place (OOP) patching methodology through runInstaller in silent mode. Later, I will use the patched database homes to create a gold image for fleet patching of all database homes. Setup List: * Database 19.3 Base Release (LINUX.X64_193000_db_home.zip) * Database 19.20 RU (p35320081_190000_Linux-x86-64.zip) * Grid 19.20 August MRP, which also includes the DB August MRP (p35656840_1920000DBRU_Linux-x...

Grid Infrastructure Out-of-Place (OOP) Patching Through gridSetup.sh #JoelKallmanDay

Step 1 - Patch Me If You Can. (Grid OOP Patching) I will patch the grid infrastructure on all my database servers from 19.16 to 19.20. As there are more than 20 servers to patch, we will use fleet patching. First, I will patch a 2-node cluster's grid infrastructure using the out-of-place (OOP) patching methodology through gridSetup.sh in silent mode. Doc ID 2853839.1 and the video Patching Oracle Grid Infrastructure 19c using out-of-place SwitchGridHome by Daniel Overby Hansen can be used as references. Later, I will use the patched grid homes to create a gold image for fleet patching of all grid homes. Setup List: * Grid 19.3 Base Release (LINUX.X64_193000_grid_home.zip) * Grid 19.20 RU (p35319490_190...

Cleaning Orphan Records from the Context Index Dictionary

DRG-10507 - Duplicate Index Name - False Positive. During some regular checks on SYSAUX tablespace occupants, we detected that the CTXSYS.DR$PENDING table had grown to 2 GB and held 15 million records, even though we have daily periodic context index sync scheduler jobs. We used the query below to inspect unsynced context index records and then searched for the context index with id 2243 in the CTXSYS.DR$INDEX table. It looked like a user-created context index; the index owner and name are changed here to KARTAL.IDXCTX_SOMETABLE_SOMECOLUMN. I searched dba_objects to get the creation date of the object, but there was no record named KARTAL.IDXCTX_SOMETABLE_SOMECOLUMN. That looked strange. This record seemed like a...
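The inspection query itself is truncated out of this excerpt; a minimal sketch of such a check, assuming the standard CTXSYS dictionary columns (PND_CID in DR$PENDING, IDX_ID and IDX_NAME in DR$INDEX), could look like this:

    -- Sketch: count pending (unsynced) rows per context index id
    SELECT pnd_cid, COUNT(*) AS pending_rows
      FROM ctxsys.dr$pending
     GROUP BY pnd_cid
     ORDER BY pending_rows DESC;

    -- Sketch: look up the index that owns a given id (2243 in the post)
    SELECT idx_id, idx_name
      FROM ctxsys.dr$index
     WHERE idx_id = 2243;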

10035 event tracing no longer records sql_texts to alert.log

Cursordump is the way to go. Parse failures are not stored in the data dictionary and therefore cannot be identified by querying it. As of Oracle 10g, event 10035 can be used to report all failed parses. With the 19.16 Release Update, however, Oracle no longer records the SQL text in the alert.log: any statement that fails at the parsing stage is recorded only with the error number and the process OSPID. Also, setting the hidden "_kks_parse_error_warning" parameter to 1 does not help report ...
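For reference, the classic syntax for turning event 10035 on and off is shown below (a sketch using the generic event syntax; the post's exact commands are not visible in this excerpt):

    -- Enable event 10035 so parse failures are reported to the alert log
    ALTER SYSTEM SET EVENTS '10035 trace name context forever, level 1';

    -- Disable it again once the investigation is done
    ALTER SYSTEM SET EVENTS '10035 trace name context off';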

Select query on dba_mviews takes too long

Who should tune the "select * from dba_mviews" query? In one of our databases, whenever the "select * from dba_mviews" statement is executed with no other filter, it takes 45 seconds to complete. That is not an acceptable duration and calls for a detailed investigation. DBA_MVIEWS is a data dictionary view; its defining query, shown below, is a really complex one (feel free to skip it), and a great number of subqueries run behind the scenes. As this database serves an air-gapped environment, I do not have execution plans from the real-time scenario; below is the execution plan from my test environment, and the access paths are similar. The execution plan contains some full table scans, such as SYS.SUMDELTA$, SYS....
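Reproducing the plan in a test environment, as described above, could be sketched like this (my own sketch, not the post's exact commands):

    -- Show the defining query of the dictionary view
    SELECT text
      FROM dba_views
     WHERE owner = 'SYS'
       AND view_name = 'DBA_MVIEWS';

    -- Reproduce the execution plan for the unfiltered query
    EXPLAIN PLAN FOR
    SELECT * FROM dba_mviews;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);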

Queries against DBA_SEGMENTS are not shared (not using bind variables)

In one of our mission-critical databases, I discovered excessive memory utilization by SQL statements querying DBA_SEGMENTS that were not using bind variables. Total sharable memory usage for SQLs sharing the same plan_hash_value was up to 7 GB across 4 instances. Sadly, it was an internal query issued by the FBDA (Flashback Data Archiver) process. Before I dive deep into the issue, I want to explain what a hard parse is; experienced DBAs can skip this section. What is Hard Parse? When an application or user issues a SQL statement, a parse call is made to prepare the statement for execution. The parse call opens a cursor, which is a handle for the session-specific private SQL area that holds a parsed SQL statement and other pr...
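A sketch of the kind of check described above, measuring how much sharable memory the non-shared DBA_SEGMENTS cursors consume (my own approximation; the post's actual query is not visible in this excerpt):

    -- Sketch: total sharable memory of DBA_SEGMENTS cursors, grouped by plan
    SELECT plan_hash_value,
           COUNT(*)                               AS cursor_count,
           ROUND(SUM(sharable_mem) / 1024 / 1024) AS sharable_mb
      FROM v$sqlarea
     WHERE UPPER(sql_text) LIKE '%DBA_SEGMENTS%'
     GROUP BY plan_hash_value
     ORDER BY sharable_mb DESC;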