Posts

Secure PostgreSQL: Patroni, etcd, pgBackRest Included

Step-by-step guide to securing your PostgreSQL database environment. In today's world, keeping data secure is absolutely crucial. As companies depend more and more on databases to handle all sorts of sensitive information, making sure those databases are locked down tight is a top priority. Ensuring the overall security of PostgreSQL is a multifaceted task that involves implementing various measures to protect the database from potential threats. Measures can be taken at every layer of the database environment; I will explain some in detail and summarize others in one sentence. Let's start with the database itself. Secure the PostgreSQL Database: 1. Authentication and Authorization: Use RBAC (role-based access control) and separate schemas for apps, and use advanced authentication methods like LDAP, Kerberos, and client certificate authentication. 2. Encryption: Utilize encryption for both data in transit and data at rest. There are some encryption options in PostgreSQL but not at f
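As a minimal sketch of the RBAC idea above (the role names and the app schema are illustrative, not from the original post):

    -- Hypothetical roles: read-only and read-write access to one application schema
    CREATE ROLE app_readonly NOLOGIN;
    CREATE ROLE app_rw NOLOGIN;
    GRANT USAGE ON SCHEMA app TO app_readonly, app_rw;
    GRANT SELECT ON ALL TABLES IN SCHEMA app TO app_readonly;
    GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA app TO app_rw;
    -- A login user inherits only the privileges of the role it is granted
    CREATE ROLE report_user LOGIN PASSWORD 'change_me' IN ROLE app_readonly;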

Speed Up All Spatial Operations with the SPATIAL_VECTOR_ACCELERATION Parameter

Hundreds of times faster spatial operators and functions. Last week, I tried to diagnose an application problem that mostly consisted of queries involving spatial operators. When I delved into the issue, I discovered that the SPATIAL_VECTOR_ACCELERATION parameter was set to false on my mission-critical databases, even though the Spatial component was actively used. Although many bloggers had written about it, I was not aware of this parameter, so I wanted to draw everyone's attention to it. According to Vector Performance Acceleration in Oracle Spatial and Graph (Doc ID 1506885.1), in Oracle Database versions 12c and above, significant enhancements have been made to improve vector operations through the utilization of new vector performance acceleration capabilities. These enhancements lead to better index performance, increased efficiency of the geometry engine, optimized secondary filter optimizations for spatial operators, and enhanced CPU and memory util
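A minimal sketch of checking and enabling the parameter, assuming your version allows changing it at session and system level (test on a non-production system first):

    -- Check the current value
    SELECT name, value FROM v$parameter WHERE name = 'spatial_vector_acceleration';
    -- Try it for the current session, then persist it instance-wide once verified
    ALTER SESSION SET spatial_vector_acceleration = TRUE;
    ALTER SYSTEM SET spatial_vector_acceleration = TRUE SCOPE = BOTH;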

How to Upgrade PostgreSQL, PostGIS and Patroni in Air-Gapped Environments

Upgrade Your PostgreSQL Database: Major Steps and Minor Touches. In this blog post, I will show how a complete PostgreSQL environment major update can be done. Minor upgrades are not a tough task; they are no different from a standard Linux package upgrade. As my production environment runs in an air-gapped environment, I will complete the entire major upgrade with no internet connection. I downloaded all the required .whl and .rpm packages before the upgrade. Before starting, a short description of major and minor PostgreSQL upgrades is provided below. Major Upgrade: Upgrading PostgreSQL to a new major version, such as moving from 10 to 11 or 12 to 13. Major upgrades bring significant changes, introduce new features, and may require thorough testing and planning due to potential incompatibilities. Minor Upgrade: Updating PostgreSQL to a newer minor version within the same major release, like going from 13.1 to 13.2 or 12.5 to 12.6. Minor upgrades focus on bug fixes, security patches,
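Before and after the upgrade, a quick way to confirm what you are actually running (a generic check, not a step from the post; output formatting differs slightly between versions):

    SELECT version();                 -- full PostgreSQL version string, e.g. major.minor
    SHOW server_version;              -- the same information as a setting
    SELECT postgis_full_version();    -- PostGIS (plus GEOS/PROJ) versions, if the extension is installed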

Oracle Grid Release Update by Using Ansible Playbooks

Ansible is a friend in need and a friend indeed for DBAs. In this blog post, I will talk about Red Hat Ansible and show how you can patch your Grid Infrastructure by using it. Ansible is an open-source, command-line IT automation application written in Python. It can configure systems, deploy software, and orchestrate advanced workflows to support application deployment, system updates, and more. Ansible's main strengths are simplicity and ease of use. It is a little bit similar to dcli commands in Exadata environments, but much more powerful, as you may run playbooks that consist of different tasks. Ansible stands out from other configuration management tools because it doesn't need agents installed on target hosts. Instead, it leverages SSH to connect to them and executes commands directly through the shell. This means Ansible communicates with remote hosts securely using SSH, eliminating the need for additional software on the managed machines. It's like h

Oracle Database Release Update by Using Ansible Playbooks

Ansible to Rule Them All. In the previous post "Oracle Grid Software Patching by using Ansible Playbooks", I described how Ansible orchestration can make Oracle Grid software patching cleaner and simpler. Now I will also patch the database software by using Ansible playbooks. In this post, I will use the blt01 host as the Ansible host. The target environment is a two-node RAC database (gns01, gns02). I will patch the database software to 19.22 by using the out-of-place methodology with no downtime. Although there are lots of useful blog posts about how you may use Ansible for automating tasks, there is not much about complete Oracle database software patching. Ansible is simply installed with "yum install ansible" from the EPEL repository. Add your hosts (gns01, gns02) to the hosts file under /etc/ansible/hosts and create SSH equivalency for the oracle and root users. We need:     * OPatch - p6880880_122010_Linux-x86-64.zip     * 19.22 RU - p35940989_190000_Linux-x86-64.
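After the playbooks finish, one way to confirm the RU really landed on each instance (a generic post-patch check, not part of the original playbooks):

    -- From the OS: $ORACLE_HOME/OPatch/opatch lspatches shows what is in the home.
    -- Inside the database, the SQL patch registry shows what datapatch applied:
    SELECT patch_id, patch_type, status, description
      FROM dba_registry_sqlpatch
     ORDER BY action_time DESC;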

ACS and bind variable peeking do not kick in when using TOAD

The GOOD NEWS is, it will kick in soon. Last week, when I was investigating a performance issue, I observed that an identical query produced different execution plans in TOAD and SQL*Plus. According to "Doc ID 1671642.1 How To Diagnose Why an Identical Query Has Different Plans (and Performance) in Different Environments", there may be different factors in plan selection; although not all of them, some are as follows.   Statistics     * Base Object Statistics     * System Statistics     * Dynamic Statistics (OPTIMIZER_DYNAMIC_SAMPLING)     * Statistics Feedback (_OPTIMIZER_USE_FEEDBACK)      Physical Differences     * Physical and Logical Layout of the Database         * Different volume and distribution of data         * Different block sizes (DB_BLOCK_SIZE)     * Schema Changes      Settings and Features     * Parallelism     * Baselines, Profiles and Stored Outlines     * Bind Variables      Optimizer and System Parameters     * SPFILE/PFILE     * PGA and SORT AREA
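A quick way to see whether the same statement really has different child cursors and plans per environment (the sql_id is a placeholder you supply):

    -- Different plan_hash_value or optimizer_env_hash_value per child usually
    -- points to session-level differences, e.g. TOAD vs SQL*Plus settings.
    SELECT child_number, plan_hash_value, optimizer_env_hash_value,
           parsing_schema_name, executions
      FROM v$sql
     WHERE sql_id = '&sql_id';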

How to Implement a Filtered Index in Oracle

Previously, in my blog post "Sometimes a histogram existence may mislead to a suboptimal plan", I analyzed a poorly performing query in detail and found that the existence of a histogram can sometimes mislead the optimizer into a suboptimal plan. I listed some possible solutions, but what I applied as a solution was to emulate a partial/filtered index in Oracle. I know the partial index concept from PostgreSQL; it is also available in MS SQL Server and some other databases. What is a partial index? ... A partial index is an index built over a subset of a table; the subset is defined by a conditional expression (called the predicate of the partial index). The index contains entries only for those table rows that satisfy the predicate. Partial indexes are a specialized feature, but there are several situations in which they are useful. One major reason for using a partial index is to avoid indexing common values. Since a query searching for a common value (one that accounts for more than a few percent of all
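The common way to emulate a partial index in Oracle is a function-based index that maps unwanted rows to NULL, so they are simply not stored in the index (table, column and index names below are illustrative, not the ones from the post):

    -- Only rows with STATUS = 'PENDING' get index entries; entirely-NULL keys
    -- are not stored in a B*Tree index, so the index stays tiny.
    CREATE INDEX orders_pending_ix
      ON orders (CASE WHEN status = 'PENDING' THEN status END);

    -- The query must use the same expression to pick up the index:
    SELECT order_id
      FROM orders
     WHERE (CASE WHEN status = 'PENDING' THEN status END) = 'PENDING';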

Sometimes a histogram existence may mislead to a suboptimal plan

INTRO   In one of our critical OLTP databases, we faced a "latch: cache buffers chains" wait event issue (a long-running query) on a commonly used page. The long-running query was called inside a PL/SQL package function. To alleviate the situation, we first changed the function to a deterministic function and rewrote the query's function call as a scalar subquery, which allows Oracle to use scalar subquery caching to optimize the number of function calls. As these measures could not cure the issue and segment access statistics were not evenly distributed, we changed the table partitioning strategy from list-partitioned to hash-partitioned. The problematic query was fixed, but a batch job that used to complete in 5 minutes started to experience performance problems, and I dug into the issue. The root cause was the change of partitioning strategy: one query suffered deeply. This is how I investigated the poor query performance and cured it.  ANALYSIS After changing the partitioning strategy o
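The scalar subquery caching rewrite mentioned above looks roughly like this (function and table names are made up for illustration):

    -- Before: the PL/SQL function is called once per row
    SELECT t.id, pkg.get_rate(t.currency) AS rate FROM trades t;

    -- After: wrapping the call in a scalar subquery lets Oracle cache the result
    -- per distinct input value, drastically reducing the number of function calls
    SELECT t.id, (SELECT pkg.get_rate(t.currency) FROM dual) AS rate FROM trades t;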

Using the table_stats hint to force a direct path read but with a rownum predicate surprise

Manipulating table statistics for the duration of a statement. In my blog post "Who makes the decisions? I thought it was Optimizer", I used a parallel hint to force a direct path read. This time I will try to make use of the table_stats hint to force a direct path read. There is not much documentation about the usage of this hint; Roger Macnicol's blog post (Correct syntax for the table_stats hint) offers some useful information about it. These are the variables that play a role in the direct path read decision.  STT stands for small table threshold = _db_block_buffers * 0.02 = 3875820 * 0.02 = 77516  MTT stands for medium table threshold = _small_table_threshold * 5 = 77516 * 5 = 387580  VLOT stands for very large object threshold = _db_block_buffers * 5 = 3875820 * 5 = 19379100  I will show how the thresholds affect the direct path decision. We will use the nsmtio (non-smart IO) event to trace the direct path or non-direct path/buffered decision.  Before the 11.2.0.2 release, the direct path read d
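A sketch of the two pieces used in the post: the NSMTIO trace that records the buffered vs direct path decision, and the table_stats hint syntax along the lines of Roger Macnicol's post (table name and figures are illustrative):

    -- Trace the direct path / buffered read decision for this session
    ALTER SESSION SET EVENTS 'trace[NSMTIO] disk highest';

    -- Pretend the table is far larger than MTT so a direct path read is considered
    SELECT /*+ table_stats(SCOTT.BIG_TAB set rows=10000000 blocks=500000) */ COUNT(*)
      FROM scott.big_tab;

    ALTER SESSION SET EVENTS 'trace[NSMTIO] off';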

Child Cursors Related to LANGUAGE_MISMATCH

The reason behind it is NLS settings. I generally use Stew Ashton's query below to summarize the reasons why cursors are not shared. In one of our production databases, the number of child cursors related to "LANGUAGE_MISMATCH" attracted my attention, and I tried to clarify the reason behind it. According to Database Hang With 'cursor: mutex X' Contention Due To High Version Count Under LANGUAGE_MISMATCH (Doc ID 2542447.1), whenever NLS settings are changed in different combinations on every execution, a new child cursor is created and the version count increases with LANGUAGE_MISMATCH. Exactly the same queries and PL/SQL packages are called from different applications. Different applications connect to the database with different database users but execute some common SQL statements. These applications are also deployed to different servers with different operating systems and client environments. Some example child cursor reasons are as below. <Child
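For a quick look at which sharing criteria are flagged for one statement (the sql_id is a placeholder), something along these lines can be used; Stew Ashton's query referenced above is a more elaborate version of the same idea:

    -- A 'Y' in a column means the existing child could not be shared for that reason
    SELECT child_number, language_mismatch, optimizer_mismatch, bind_mismatch
      FROM v$sql_shared_cursor
     WHERE sql_id = '&sql_id'
     ORDER BY child_number;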

Who makes the decisions? I thought it was Optimizer

Some decisions are actually not taken by the optimizer. We started to experience a performance problem with a SQL query whose application code we could not change, so we tried to find a solution solely in the database layer. The SQL query was a sophisticated one, but the problem can be summarized with the short one below. Here is the plan output. The PRODUCTS table was a large table exceeding 50 GB in size, and the query's execution time was 180-200 seconds. The query was searching the PRODUCTS table for rows where HTML_CONTENT is not null, but for all rows the HTML_CONTENT column value was null, and as this column is not indexed, the query was performing a full table scan. In addition, the HTML_CONTENT column's data type was CLOB, so I could not create a function-based index, and I could not rewrite the query as the source code is not available. When I executed the query with a /*+ PARALLEL(2) */ hint, execution completed within milliseconds. In the Parallel Execution with Oracle Database white paper, it
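A simplified version of the experiment (PRODUCTS, HTML_CONTENT and the hint come from the post; the exact query text here is illustrative):

    -- Serial execution: buffered full table scan of the ~50 GB table, 180-200 s
    SELECT COUNT(*) FROM products WHERE html_content IS NOT NULL;

    -- Statement-level parallel hint: the parallel full scan reads the segment with
    -- direct path reads, bypassing the buffer cache, and returned within milliseconds
    SELECT /*+ PARALLEL(2) */ COUNT(*) FROM products WHERE html_content IS NOT NULL;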

Reset "_gc_policy_minimum" parameter to its default.

No more frequent remastering of objects. As the default value of "_gc_policy_minimum" (formerly known as _gc_affinity_minimum) is quite low for busy environments, those of you who set this parameter to 15000 according to Best Practices and Recommendations for RAC databases with SGA size over 100GB (Doc ID 1619155.1), I have good news for you: from 19.20 (19c DBRU JUL '23) on, you may reset it to its default value. Due to internal bug 34729755, 15000 is the new default in 23c, 19c DBRU JUL '23, and 19c ADB. DBAs with a compulsive tuning disorder like me won't have to tune this parameter any more in those releases or later. According to DRM - Dynamic Resource Management (Doc ID 390483.1), DRM attributes are intentionally undocumented since they may change depending on the version; these attributes should not be changed without discussing with Support. You can access much more information about Dynamic Resource Management from the "RAC object rema
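Resetting it back to the default on 19.20 or later would look something like this in a RAC environment (confirm with Oracle Support before touching underscore parameters):

    -- Remove the explicit setting from the spfile for all instances;
    -- it takes effect after the instances are restarted.
    ALTER SYSTEM RESET "_gc_policy_minimum" SCOPE = SPFILE SID = '*';

    -- Verify nothing is left in the spfile
    SELECT sid, name, value FROM v$spparameter WHERE name = '_gc_policy_minimum';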