Oracle DBA 2 Q&A
Startup nomount: In this phase, the database reads the initialization parameters (pfile/spfile). If any invalid parameter is defined in the pfile, it will throw an error.
Startup mount: In this phase, the database checks the consistency of the control file, which records all the physical structures of the database (datafiles, redo logs, etc.).
Startup open: During this phase the database tries to open in read-write mode for end users. Here it checks the consistency of the datafiles and redo logs. In case of any inconsistency, it will try to recover the database from the redo logs.
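As a quick illustration, these phases can be stepped through manually in SQL*Plus (a minimal sketch; commands only, no output shown):

SQL> startup nomount;        -- instance started, parameters read from pfile/spfile
SQL> alter database mount;   -- control file read and checked
SQL> alter database open;    -- datafiles/redo checked, database opened for end users
SQL> select status from v$instance;   -- shows STARTED, MOUNTED or OPEN at each stage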
3. What are the different states of a redo log member and their significance?
UNUSED – Online redo log has never been used. This is the status of a redo log that was newly added, or just after a RESETLOGS, and not being used yet.
CURRENT– Means this redo log is currently getting written. This implies that the redo log is
active.
ACTIVE– Log is active but is not the current log. It is needed for crash recovery. It may be in use
for block recovery. It may or may not be archived.
CLEARING– Log is being re-created as an empty log after an ALTER DATABASE CLEAR
LOGFILE statement. After the log is cleared, the status changes to UNUSED.
CLEARING CURRENT- Current log is being cleared of a closed thread. The log can stay in this
status if there is some failure in the switch such as an I/O error writing the new log header.
INACTIVE– Log is no longer needed for instance recovery. It may be in use for media recovery.
It might or might not be archived.
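These states can be checked with a quick query against the v$ views (a minimal sketch):

SQL> select group#, thread#, sequence#, status, archived from v$log;
SQL> select group#, member, status from v$logfile;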
3. What do you mean by multiplexing controlfile and how to achieve this.
4. What are the different partitioning techniques in oracle db .
5. What is global and local partitioned index.
6. What are the methods of converting a non-partitioned table to a partitioned table? Explain.
4. What is an ACL.
From 11g onwards, to access network packages like utl_mail, utl_http and utl_smtp, we need an additional privilege granted through an ACL. For example, an ACL must be configured before utl_mail can be used to send mail from the database, as sketched below.
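A minimal sketch of granting network access through an ACL using the 11g-style DBMS_NETWORK_ACL_ADMIN API (the ACL name, user APPUSER and host smtp.example.com are illustrative assumptions):

BEGIN
  -- create the ACL and grant the 'connect' privilege to the hypothetical user APPUSER
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(
    acl         => 'mail_access.xml',
    description => 'Allow SMTP access for mail sending',
    principal   => 'APPUSER',
    is_grant    => TRUE,
    privilege   => 'connect');
  -- associate the ACL with the SMTP host and port (hypothetical host)
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(
    acl        => 'mail_access.xml',
    host       => 'smtp.example.com',
    lower_port => 25,
    upper_port => 25);
  COMMIT;
END;
/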
5. How do you make a user's password non-expiring?
Password parameters are defined in profiles. So to make a password non-expiring, we first need to create a profile with PASSWORD_LIFE_TIME set to UNLIMITED and assign that profile to the user (the user will inherit all the password limits of the profile).
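A minimal sketch (the profile and user names are illustrative):

SQL> create profile no_expiry_profile limit password_life_time unlimited;
SQL> alter user appuser profile no_expiry_profile;
SQL> select profile, resource_name, limit from dba_profiles
     where profile = 'NO_EXPIRY_PROFILE' and resource_name = 'PASSWORD_LIFE_TIME';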
6. How to drop a private database link, when you are not the owner.
We should reset the online logs when we do incomplete recovery or recovery using a backup control file. When we open the database with RESETLOGS, a new incarnation number is generated for the database. Incarnation details can be viewed in v$database_incarnation.
SCN means system change number. The SCN is the logical point in time at which changes are made to a database. Oracle assigns every committed transaction a unique SCN. The database uses these SCNs to query and track the changes. For example, if a transaction inserts a row, the database records the SCN at which this insert occurred.
Oracle keeps copies of database blocks in an area of the SGA known as the buffer cache. The cache may hold more than one copy of a block from different points in time, and may contain 'dirty' blocks, i.e. blocks which have been updated but not yet flushed back to disk.
The buffer cache hit ratio measures how many times a required block was found in memory rather than having to perform an expensive read operation on disk to get the block.
A buffer cache hit ratio above 80% is generally considered good.
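One common way to compute the ratio from v$sysstat (a sketch using the standard statistic names):

SQL> select round(100 * (1 - phy.value / (db.value + cons.value)), 2) as buffer_cache_hit_ratio
     from   v$sysstat phy, v$sysstat db, v$sysstat cons
     where  phy.name  = 'physical reads'
     and    db.name   = 'db block gets'
     and    cons.name = 'consistent gets';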
The pfile is read at instance startup time to get specific instance characteristics. Any changes made to the pfile only take effect when the database is restarted.
The spfile is a server-side initialization parameter file and it permits dynamic changes without requiring you to bring down the instance.
To check whether the instance was started with an spfile:

SQL> show parameter spfile;

NAME       VALUE
---------- --------------------------------------------------
spfile     /fsys1/oracle/product/9.2.0/spfileTEST.ora
15. What is the default block size in database? Can we use different block sizes and how?
Starting from Oracle 10g, the default block size (DB_BLOCK_SIZE) is 8 KB. In earlier versions it was 2 KB.
For using a 2K block size, set db_2k_cache_size in the init pfile (for example db_2k_cache_size=2G) and bounce the database.
Similarly, for using block sizes of other types, the parameters below can be used.
DB_2K_CACHE_SIZE
DB_4K_CACHE_SIZE
DB_16K_CACHE_SIZE
DB_32K_CACHE_SIZE
We need to open the database using the RESETLOGS option after an incomplete database recovery.
It recreates the online redo logs and resets the log sequence to 1.
It updates all the datafiles and online redo log files with the new resetlogs timestamp and SCN.
A new incarnation number is generated.
Note – It is always recommended to take a fresh full DB backup after opening the database with RESETLOGS.
When we make changes to the database and commit, the modified blocks are not written to the datafiles immediately; the changes are first recorded in the redo log. A checkpoint is the event at which these modified (committed) blocks are flushed from the buffer cache to the datafiles. These blocks are also known as dirty blocks.
A checkpoint number is the SCN at which all the dirty blocks are written to the datafiles (disk).
The following events make a checkpoint occur: a log switch, an ALTER SYSTEM CHECKPOINT command, a consistent shutdown, and a tablespace being taken offline, made read only, or placed in hot backup mode.
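A checkpoint can also be forced manually, and the resulting checkpoint SCN can be compared between the controlfile and the datafile headers (a minimal sketch):

SQL> alter system checkpoint;
SQL> select checkpoint_change# from v$database;                 -- checkpoint SCN recorded in the controlfile
SQL> select file#, checkpoint_change# from v$datafile_header;   -- checkpoint SCN in each datafile header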
SCN is the system change number. Oracle assigns a unique number to each committed transaction in the database. The SCN value gets incremented with each transaction, and SCN information is recorded in the controlfile.
GLOBAL_NAMES specifies whether a database link is required to have the same name as the database to which it connects. GLOBAL_NAMES is set to either FALSE or TRUE.
If the value of GLOBAL_NAMES is FALSE, then any name can be given to the DB link. If the value is TRUE, then the database link name should be the same as that of the database it points to.
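A small sketch of how the parameter influences link naming (the remote database name, link name and credentials are illustrative):

SQL> show parameter global_names
SQL> alter system set global_names = true;
SQL> -- with GLOBAL_NAMES=TRUE the link name must match the remote database's global name,
SQL> -- e.g. for a hypothetical remote database REMDB.EXAMPLE.COM:
SQL> create database link remdb.example.com connect to scott identified by tiger using 'REMDB';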
It stores the data dictionary, i.e. the metadata (data about the different objects in the database).
This tablespace was introduced in Oracle 10g to reduce the workload on the SYSTEM tablespace. Like the SYSTEM tablespace, it is created when the database is created and it cannot be dropped.
A work area is a private allocation of PGA memory used for memory-intensive operations. For example:
Sort area – > a sort operator uses the sort area to sort a set of rows.
Hash area – > similarly, a hash join operator uses a hash area to build a hash table from its left input.
Bitmap merge area – > a bitmap merge uses the bitmap merge area to merge data retrieved from scans of multiple bitmap indexes.
24. What is the job of DBWR process and how many DBWR processes are there?
DBWR means database writer; it is mainly responsible for writing modified blocks (i.e. dirty blocks) from the buffer cache to the physical data files (disks).
DBWR writes dirty buffers to disk when:
A checkpoint is issued
There are too many dirty buffers in the buffer cache
There is no free space in the buffer cache
During database shutdown (except the abort method)
A tablespace is dropped, taken offline, placed in read-only mode, or put in hot backup mode.
The parameter DB_WRITER_PROCESSES controls the number of DBWR processes you want to use. It can be from 1 to 20. Using more DBWR processes can increase write performance; however, it will also increase CPU usage on the DB server.
LGWR, i.e. the log writer, writes the contents of the redo log buffer to the online redo log file.
We can have multiple DBWR processes using the parameter db_writer_processes, but by default there is only one LGWR process. However, from 12c onwards, two additional log writer slave processes are introduced (ora_lg00, ora_lg01).
There is a hidden parameter, _max_outstanding_log_writes, that controls the number of log writer slave processes. Its default value is 2. These slave processes are technically called outstanding log writer processes.
When DBWR wants to write to the datafiles and finds that some of the redo information in the log buffer has not yet been written to the redo log file, DBWR will ask LGWR to copy that redo information to the redo log file first, so that DBWR can write the dirty buffers to disk after that (write-ahead logging).
28. What is the function of CKPT process?
MMON, i.e. Manageability Monitor, is a background process which performs tasks like taking AWR snapshots and running ADDM analysis.
When an instance terminates abnormally or crashes, the database goes down in an inconsistent state; that means all ongoing transactions, committed or uncommitted, were not completed. So before it can be opened, the database must be brought back to a consistent state.
Hence SMON plays a critical role in recovering the database. Using the last SCN in the control file, Oracle applies committed and uncommitted transactions from the redo logs, which is known as roll forward. In this state the database is in MOUNT mode. The database then checks the accessibility of the UNDO segments and opens the database. Now uncommitted transactions are rolled back with the help of UNDO, which is called rollback.
33. Difference between dedicated server vs shared server configuration in oracle db?
Local connections are known as bequeath connections, i.e. when we connect to the database from the same DB server (for example, sqlplus / as sysdba), a bequeath connection is created.
35. What is user global area? and where it is located?
A direct path read I/O operation reads data into the session's PGA instead of the SGA.
We cannot create a local partitioned index on a non-partitioned table, but we can create a global partitioned index on a non-partitioned table.
Delete will not change the high water mark (HWM) of a table, whereas truncate will reduce the HWM.
Also, we can restore deleted data if it is still present in undo, but truncated data cannot be restored at all.
43. What happens when you put the database in hot backup mode?
DBWn checkpoints the tablespace (writes out all dirty blocks as of a given SCN).
CKPT stops updating the Checkpoint SCN field in the datafile headers and begins updating the Hot Backup Checkpoint SCN field instead.
LGWR begins logging full images of changed blocks the first time a block is changed after being written by DBWn.
Full block image logging during backup eliminates the possibility that the backup will contain unresolvable split blocks. To understand this reasoning, you must first understand what a split block is. Typically, Oracle database blocks are a multiple of O/S blocks. For example, most Unix filesystems have a default block size of 512 bytes, while Oracle's default block size is 8k. This means that the filesystem stores data in 512 byte chunks, while Oracle performs reads and writes in 8k chunks or multiples thereof. While backing up a datafile, your backup script makes a copy of the datafile from the filesystem, using O/S utilities such as copy, dd, cpio, or OCOPY. As it is making this copy, your process is reading in O/S-block-sized increments. If DBWn happens to be writing a DB block into the datafile at the same moment that your script is reading that block's constituent O/S blocks, your copy of the DB block could contain some O/S blocks from before the database performed the write, and some from after. This would be a split block. By logging the full block image of the changed block to the redologs, Oracle guarantees that in the event of a recovery, any split blocks that might be in the backup copy of the datafile will be resolved by overlaying them with the full legitimate image of the block from the archivelogs. Upon completion of a recovery, any blocks that got copied in a split state into the backup will have been resolved by overlaying them with the block images from the archivelogs. All of these mechanisms exist for the benefit of the backup copy of the files and any future recovery. They have very little effect on the current datafiles and the database being backed up. Throughout the backup, server processes read the datafiles and DBWn writes them, just as when a backup is not taking place. The only difference in the open database files is the frozen Checkpoint SCN and the active Hot Backup Checkpoint SCN.
Yes, DBWR writes uncommitted data to the datafiles. If the buffer cache is full, then uncommitted (dirty) data can be written to the datafiles.
NA
46. What are the different types of Buffer states in Buffer cache?
Free buffers – the buffer data is the same as that of the block on disk and it has not been changed.
Dirty buffers – the buffer has been modified, but not yet copied to disk.
49. If I flush the buffer cache, what will happen to the uncommitted transactions in buffer?
If we flush the buffer cache, then all the uncommitted transactions, i.e. dirty blocks, will be written to disk. We can flush the buffer cache manually using the command ALTER SYSTEM FLUSH BUFFER_CACHE;
50. If I issue the ALTER SYSTEM FLUSH BUFFER_CACHE command in a live production database with a huge number of transactions, what will be the impact?
This is not recommended in production. As all the buffered data will be flushed, subsequent transactions have to do physical reads from disk, i.e. it will impact I/O for some time.
51. What if i flush the shared pool in a live production database? Will there be any impact?
If we flush the shared pool, then all the queries need to be hard parsed again, and it will slow down database transactions for some time, until the hard parse rate comes back down.
52. In RAC, how can you flush shared pool or buffer cache of all the nodes in one command?
Just need to add global keyword at the end of the normal command.
Row chaining occurs when one row cannot fit in a single block and has to span multiple blocks. This usually has serious performance implications, and we need to avoid it.
60. Are redo entries for undo segments stored in the redo log file?
Yes, undo-related redo entries are also stored in the redo log file. Just as changes to data blocks or index blocks are written to redo records, changes to undo records are also written to the redo log.
Why?
During recovery, part of the roll forward process is to re-create the undo segments. Just like data and index blocks, undo segments are also rolled forward. Once the roll forward is complete, the datafile blocks contain both committed and uncommitted changes, and the undo segments also contain committed (inactive) and uncommitted (active) changes. The undo segment header block contains the transaction table, so the rollback process accesses this information and rolls back the uncommitted transactions.
So it is necessary for Oracle to keep undo-related redo in the redo log file.
61. Does dml activities on temporary table generates Undo and redo?
62. What is force logging in oracle? In which scenarios do we need to put the database in force logging mode?
If force_logging is enabled at the database/tablespace level, then redo will be generated even for nologging operations. We usually enable force_logging in a Data Guard setup, as sketched below.
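A minimal sketch of enabling and verifying force logging at the database level:

SQL> alter database force logging;
SQL> select force_logging from v$database;
SQL> -- to disable it again:
SQL> alter database no force logging;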
63. What is the difference between force logging and supplemental logging?
64. What is shared pool? What are the components inside shared pool?
Shared pool is a part of the SGA. Below are the components of the shared pool:
Library Cache: – > holds parsed SQL statements, their execution plans and PL/SQL code.
Dictionary Cache: – > information like table/index definitions, referential constraint relations etc. Whenever a user wants to access a table definition, it gets the details from the dictionary cache instead of hitting the SYSTEM datafile. This is also known as the row cache, because this cache contains rows instead of full blocks.
66. Explain the mechanism of how a SQL query decides whether to do hard parsing or soft parsing in the library cache.
67. What you know about parent cursor and child cursor?
A cursor is a memory area in the library cache allocated to a SQL statement, which stores various information about the SQL statement like its text, execution plan, statistics etc.
Each SQL statement has
– One Parent cursor
– One or more child cursor
Parent cursor: – > it contains the sql_text of the query.
Child cursor: – > a parent cursor can have more than one child cursor. It stores information about the execution plan, bind values, environment settings etc.
Two textually identical SQL statements will have one parent cursor, but can have different child cursors.
When a statement is issued, after going through syntax and semantic checks, Oracle searches the library cache for an existing cursor. If the cursor is not present, then it creates a new cursor for the statement (which is a resource-intensive operation, i.e. a hard parse). If the cursor is already present, then it will do a soft parse (less expensive than a hard parse). But even soft parsing needs some resources for searching and for handling latches.
Now, when the session cursor cache is enabled, session cursors of repetitive statements will be stored in the session cursor cache (i.e. PGA/UGA). The session cursor cache then contains a pointer into the library cache even after the cursor is closed. So when a SQL is resubmitted, the syntax and semantic checks are bypassed (the presence of the cursor in the session cursor cache guarantees this).
So when the query is submitted, if the cursor is closed, then it will still be a soft parse (but without the syntax and semantic checks).
But if the cursor is open, then it will skip the soft parse altogether.
For example, we have a PL/SQL block inside which we need to execute a SQL statement 100 times.
Without the session cached cursor, soft parsing will happen around 100 times (the 1st one might be a hard parse).
But with the session cached cursor, parsing will be skipped completely (once the cursor is cached after 1 or 2 executions). A sketch of this scenario is shown below.
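A small sketch of this scenario (the table EMP, the filter and the cursor cache value are illustrative assumptions):

SQL> alter session set session_cached_cursors = 50;

DECLARE
  l_cnt NUMBER;
BEGIN
  -- the same statement executed 100 times; after the first few executions its cursor
  -- is held in the session cursor cache, so repeated parse work is avoided
  FOR i IN 1 .. 100 LOOP
    SELECT COUNT(*) INTO l_cnt FROM emp WHERE deptno = 10;
  END LOOP;
END;
/

(PL/SQL also caches cursors for static SQL on its own, which gives a similar benefit inside stored code.)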
We can say that a good value of session_cached_cursors can help in reducing library cache latch contention.
However, too high a session_cached_cursors value when the number of sessions is very high can cause ORA-4031 (out of memory) issues.
69. Are hash_value and plan_hash_value the same or different? Explain more on this.
70. Can two queries have same hash_value but different plan_hash_value?
71. Can two queries have same plan_hash_value but different hash_value?
72. Let's say for the query select * from EMP, the cursor is present in the library cache. Now I have dropped a column from the EMP table. What will happen now?
If any DDL operation happens on the table, then the cursors of all the statements using this table will be invalidated. So the next time the query hits the library cache, it needs to do hard parsing again.
73. How do i know whether a specific query is doing soft parsing or hard parsing?
We can check the parse_calls column in v$sql. If parse_calls is 1, it means the statement was parsed for the first time.
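A sketch of checking this from v$sql (the sql_text filter is illustrative):

SQL> select sql_id, parse_calls, loads, executions
     from   v$sql
     where  sql_text like 'select * from EMP%';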
74. Explain the different values of the cursor_sharing parameter. What is the default value? How does this parameter behave with different values?
The default value is cursor_sharing=EXACT. With EXACT, only statements with identical text share a cursor; with FORCE, Oracle replaces literals with system-generated bind variables so that similar statements can share a cursor. (The SIMILAR value is deprecated in recent releases.)
76. What is the difference between oracle foreground events and background events?
Foreground events are those which happen due to the server processes, like row lock contention, buffer busy waits etc.
But background events are those which occur due to the activities of background processes like LGWR, DBWR etc. Examples are events such as log file parallel write.
79. When does crash recovery happen, and how does Oracle determine whether it needs crash recovery or not?
Crash recovery happens when we try to open the database after the instance has been terminated abruptly.
So while opening the database, it checks the SCNs of each datafile and the SCN in the control file. If the SCNs are the same, then the database is consistent and will be opened without any need for crash recovery. But if the SCNs are different, then crash recovery will happen. Redo will be used to roll the database forward by processing both committed and uncommitted transactions. After that, the uncommitted transactions will be rolled back with the help of UNDO.
Again, there will be another question – why is the SCN of a datafile different from the SCN of the controlfile? Because for any change that happens in the database the SCN is incremented, but the SCN in the datafile headers is updated only when a checkpoint happens. So until a checkpoint occurs, the controlfile SCN will differ from the datafile SCN. These SCNs can be compared as sketched below.
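A minimal sketch of the comparison:

SQL> select checkpoint_change# from v$database;                -- SCN recorded in the controlfile
SQL> select file#, checkpoint_change# from v$datafile;         -- controlfile's view of each datafile
SQL> select file#, checkpoint_change# from v$datafile_header;  -- SCN stored in the datafile headers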
The SCN is incremented.
Data from the log buffer is written to the logfile.
Locks will be released.
81. Difference between smallfile tablespace and bigfile tablespace. Explain the pros and cons of both.
Bigfile tablespace
A bigfile tablespace consists of a single datafile, whose size can be extended up to 32 TB. This is useful in very large environments. Because the number of datafiles in a database is controlled by the db_files parameter, creating a bigfile tablespace of a large size can help here.
Also, we can resize the tablespace using the alter tablespace command, instead of the alter database datafile command used for smallfile tablespaces.
It is always recommended to use bigfile tablespaces with ASM (which provides striping; otherwise parallel processes might be impacted).
Smallfile tablespace
It can contain many datafiles (up to 1022), each with a maximum size of 32 GB. It gives us the flexibility to create datafiles in different directories or ASM diskgroups.
The only thing is, if we want to resize a smallfile tablespace, we cannot use alter tablespace; rather we need to resize each datafile of that tablespace individually.
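A minimal sketch of both variants (tablespace names, sizes and file locations are illustrative):

SQL> create bigfile tablespace big_ts datafile '+DATA' size 100g autoextend on;
SQL> alter tablespace big_ts resize 200g;    -- bigfile: resize at the tablespace level

SQL> create smallfile tablespace small_ts datafile '/u01/oradata/TEST/small_ts01.dbf' size 10g;
SQL> alter database datafile '/u01/oradata/TEST/small_ts01.dbf' resize 20g;   -- smallfile: resize each datafile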
The block header contains information about the type of block (table block, index block, and so on) and transaction information when relevant.
The table directory, if present, contains information about the tables that store rows in this block.
The row directory contains information describing the rows that are to be found in the block. This is an array of pointers to where the rows are to be found in the data portion of the block.
These three pieces of the block are collectively known as the block overhead.
Temp files never have REDO generated for them, although they can have UNDO generated. Thus, there will be some REDO generated when working with temporary tables, since UNDO is always protected by REDO.
When you drop a table, the effects of that drop are written to the redo log. The data from the table you dropped is not written; however, the recursive SQL that Oracle performs to drop the table does generate redo. For example, Oracle will delete a row from the SYS.OBJ$ table (and other internal dictionary objects), and this will generate redo. And if various modes of supplemental logging are enabled, the actual DROP TABLE statement will be written into the redo log stream.
86. What are granules in the SGA?
Memory is allocated to the various pools in the SGA in units called granules. A single granule is an area of memory of 4MB, 8MB, or 16MB in size. The granule is the smallest unit of allocation, so if you ask for a Java pool of 5MB and your granule size is 4MB, Oracle will actually allocate 8MB to the Java pool (8 being the smallest number greater than or equal to 5 that is a multiple of the granule size of 4). The size of a granule is determined by the size of your SGA (this sounds recursive to a degree, as the size of the SGA is dependent on the granule size). You can view the granule sizes used for each pool by querying V$SGA_DYNAMIC_COMPONENTS. In fact, we can use this view to see how the total SGA size might affect the size of the granules:
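A sketch of such a query:

SQL> select component, granule_size, current_size
     from   v$sga_dynamic_components;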
The fixed SGA contains a set of variables that point to the other components of the SGA, as well as variables that contain the values of various parameters.
There is no standard formula for this; the size will vary depending upon the database transaction volume.
A good rule of thumb is to size the redo logs such that one log switch happens roughly every 15 minutes.
You can have redo log groups of different sizes, but all the members inside a group should be of the same size.
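To gauge the current switch frequency, the log history can be grouped by hour (a minimal sketch):

SQL> select to_char(first_time, 'YYYY-MM-DD HH24') as hour, count(*) as log_switches
     from   v$log_history
     group  by to_char(first_time, 'YYYY-MM-DD HH24')
     order  by 1;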
94. What is shared pool? What are the components inside shared pool?
Shared pool is a part of the SGA. Below are the components of the shared pool:
Library Cache: – > holds parsed SQL statements, their execution plans and PL/SQL code.
Dictionary Cache: – > information like table/index definitions, referential constraint relations etc. Whenever a user wants to access a table definition, it gets the details from the dictionary cache instead of hitting the SYSTEM datafile. This is also known as the row cache, because this cache contains rows instead of full blocks.
95. What is the role of the library cache in the memory architecture and what does it store?
When Oracle executes a SQL statement which is not present in the shared pool, it will do hard parsing.
Soft parsing means the SQL statement has already been executed before and its execution plan is present in the shared pool.
97. Explain the mechanism of how a SQL query decides whether to do hard parsing or soft parsing in the library cache.
98. What is cursor? Explain about parent cursor and child cursor?
A cursor is a memory area in the library cache allocated to a SQL statement, which stores information like the SQL text, statistics and execution plans.
Each sql statement will have one parent cursor and one or more child cursor.
Child cursor – it stores information like the execution plan, bind variables, statistics and environment details.
Two identical SQL queries will share the same parent cursor, but the child cursor may or may not be shared.
When a statement is issued, after going through syntax and semantic checks, Oracle searches the library cache for an existing cursor. If the cursor is not present, then it creates a new cursor for the statement (which is a resource-intensive operation, i.e. a hard parse). If the cursor is already present, then it will do a soft parse (less expensive than a hard parse). But even soft parsing needs some resources for searching and for handling latches.
Now, when the session cursor cache is enabled, session cursors of repetitive statements will be stored in the session cursor cache (i.e. PGA/UGA). The session cursor cache then contains a pointer into the library cache even after the cursor is closed. So when a SQL is resubmitted, the syntax and semantic checks are bypassed (the presence of the cursor in the session cursor cache guarantees this).
So when the query is submitted, if the cursor is closed, then it will still be a soft parse (but without the syntax and semantic checks).
But if the cursor is open, then it will skip the soft parse altogether.
For example, we have a PL/SQL block inside which we need to execute a SQL statement 100 times.
Without the session cached cursor, soft parsing will happen around 100 times (the 1st one might be a hard parse).
But with the session cached cursor, parsing will be skipped completely (once the cursor is cached after 1 or 2 executions).
We can say that a good value of session_cached_cursors can help in reducing library cache latch contention.
However, too high a session_cached_cursors value when the number of sessions is very high can cause ORA-4031 (out of memory) issues.
100. Are hash_value and plan_hash_value the same or different? Explain more on this.
101. Can two queries have same hash_value but different plan_hash_value?
102. Can two queries have same plan_hash_value but different hash_value?
103. Let's say for the query select * from EMP, the cursor is present in the library cache. Now I have dropped a column from the EMP table. What will happen now?
If any DDL operation happens on the table, then the cursors of all the statements using this table will be invalidated. So the next time the query hits the library cache, it needs to do hard parsing again.
104. How do i know whether a specific query is doing soft parsing or hard parsing?
We can check the parse_calls column in v$sql. If parse_calls is 1, it means the statement was parsed for the first time.
So if parse_calls is 1, then it is doing a hard parse.
Yes, they can. The archivelogs are simply archived copies of the online redologs (once they are filled). So just as the online redologs contain both committed and uncommitted data, the archivelogs do the same.
109. Can i create a tablespace with 16k block size and explain how you will do it?
For that we need to set db_16k_cache_size to a value, and then create the tablespace with a 16k block size, as sketched below.
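A minimal sketch (the cache size, tablespace name and datafile path are illustrative):

SQL> alter system set db_16k_cache_size = 256m scope=both;
SQL> create tablespace ts_16k datafile '/u01/oradata/TEST/ts_16k01.dbf' size 1g blocksize 16k;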
111. Explain about cross platform migration and what different methods we can do for this?
112. Do you remember how many phases are there in db upgrade (in 12c/19c etc)?
113. What will be the impact after a DB upgrade if I am not doing the timezone upgrade?
No impact
115. Is it recommended to enable huge pages and transparent hugepages in Linux for running an Oracle database? Explain.
Why hugepages:
By default, the operating system memory page is 4KB. With hugepages we can use memory pages greater than 4KB. Keeping a large page size helps minimize the resources required for managing the page table entries. The hugepage size can vary from 2MB to 256MB.
If transparent hugepages are enabled, the memory is allocated at run time, which might cause delays in allocation.
So Oracle recommends standard hugepages, with which memory is preallocated while starting the instance.
Force logging means even nologging operations will be logged. Supplemental logging means additional information about table rows is captured in the redolog (mostly used for GoldenGate extracts).
Endianness is the storage method of multi-byte data types in memory; in other words, it determines the byte order of the data.
Little endian – > means the least significant byte is stored first.
Big endian – > means the most significant byte is stored first.
In a Data Guard environment, all instances will have the same db_name but a different db_unique_name.
Yes, the user will see the deleted data. The session will see all the data as it was at the time its transaction (query) started.
125. We know both the PGA and the temp tablespace are used for sorting. So what is the difference between these two?
Sorting is always done in the PGA. But if the PGA becomes full before the sorting operation is completed, then the PGA contents are swapped to temp. When required, they are read back into the PGA.
127. Does the db upgrade time depend upon the size of the database? I mean, will upgrading a 5TB db take more time than upgrading a 10GB database?
No, size doesn't matter for the upgrade. The upgrade time depends upon the number of components present in the database.
128. Does applying a patch on a 1TB database take more time than applying a patch on a 50 GB database?
129. What do you mean by pinning in oracle? like cursor pin, buffer pin?
If a buffer in the buffer cache is being changed, it first gets pinned to ensure other processes don't replace this buffer.
Same for a cursor: if a statement is being executed, then its cursor gets pinned so that the cursor memory doesn't get deallocated.
130. what is the difference between alter system kill session and kill -9 <pid>?
131. what is the difference between rman and expdp backup?