
about Oracle 11g Database

Please find here all the posts I've written about Oracle 11g Database between 2009 and 2010.

June 7, 2010
New Physical Database Limits of 11.2
It has been a while since I've gone through the documentation part of Oracle Database Server that talks about limits! I think the last one was when the rowid format changed from Oracle7 to Oracle8.


This post is not just about republishing those limits on another website; it also comments on them. If you want to read the original text click here; if you want to read my remarks on the limitations, keep scrolling down. They are in parentheses next to each limit (but don't expect rocket science!)
Database Block Size (Minimum): 2048 bytes; must be a multiple of the operating system physical block size
Database Block Size (Maximum): Operating system dependent; never more than 32 KB
Database Blocks (Minimum in initial extent of a segment): 2 blocks (simply because one is for data and the other is for the segment header)
Database Blocks (Maximum per datafile): Platform dependent; typically 2^22 - 1 blocks
Controlfiles (Number of control files): 1 minimum; 2 or more (on separate devices) strongly recommended
Controlfiles (Size of a control file): Dependent on operating system and database creation options; maximum of 20,000 x (database block size)
Database files (Maximum per tablespace): Operating system dependent; usually 1022 (here bigfile tablespaces can actually index much more space with just one datafile, because their rowid drops the datafile encoding, freeing bits to index more blocks)
Database files (Maximum per database): 65533; may be less on some operating systems. Limited also by the size of database blocks and by the DB_FILES initialization parameter for a particular instance
Database extents (Maximum per dictionary managed tablespace): 4 GB * physical block size (with K/M modifier); 4 GB (without K/M modifier)
Database extents (Maximum per locally managed (uniform) tablespace): 2 GB * physical block size (with K/M modifier); 2 GB (without K/M modifier)
Database file size (Maximum): Operating system dependent; limited by maximum operating system file size. Typically 2^22 (about 4 million) blocks
MAXEXTENTS (Default value): Derived from the tablespace default storage or the DB_BLOCK_SIZE initialization parameter
MAXEXTENTS (Maximum): Unlimited (you should put a brake here: don't let a segment grow past 100,000 extents on huge objects, 10,000 on big ones, and fewer than 1,000 on average-sized ones; typically huge is terabyte sized, big is hundreds of gigs, and average-sized is less than 10 gigs)
Redo Log Files (Maximum number of logfiles): Limited by the value of the MAXLOGFILES parameter in the CREATE DATABASE statement; the control file can be resized to allow more entries, so ultimately it is an operating system limit (always change the default setting, especially in a clustered environment)
Redo Log Files (Maximum number of logfiles per group): Unlimited
Redo Log File Size (Minimum size): 4 MB (always start with at least 100 MB on OLTP and 500 MB on DWH shops)
Redo Log File Size (Maximum size): Operating system limit; typically 2 GB
Tablespaces (Maximum number per database): 64 K; the number of tablespaces cannot exceed the number of database files, because each tablespace must include at least one file
Bigfile Tablespaces (Number of blocks): A bigfile tablespace contains only one datafile or tempfile, which can contain up to approximately 4 billion (2^32) blocks. The maximum size of the single datafile or tempfile is 128 terabytes (TB) for a tablespace with 32 K blocks and 32 TB for a tablespace with 8 K blocks.
Smallfile (traditional) Tablespaces (Number of blocks): A smallfile tablespace is a traditional Oracle tablespace, which can contain 1022 datafiles or tempfiles, each of which can contain up to approximately 4 million (2^22) blocks.
External Tables file (Maximum size): Dependent on the operating system. An external table can be composed of multiple files.
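
A practical aside on the MAXEXTENTS remark above: if you want to check whether any of your segments is drifting past those thresholds, a quick data dictionary query does it (a hedged sketch; the 10,000 cutoff is just my guideline above, not a hard rule):

SQL> select owner, segment_name, segment_type, extents
  2  from dba_segments
  3  where extents > 10000
  4  order by extents desc;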

May 13, 2010
Best Practices for Statistics Gathering on Oracle 11g
Some people look at statistics gathering as taking vitamins: nice to have. And that's wrong.
You shouldn't base your tuning strategy on dynamic sampling alone, or on luck. Some people freak out because they once generated stats for some system and it all went south. That's the same as saying that because cars crash and people get killed, we shouldn't drive at all.
You have to use stats. Period.
So my small contribution to make your life easier is to give 5 tips if you are using Oracle 11g:
1) Run all queries against empty tables to populate column usage (histograms will be gathered based on this column usage).
2) Run stats after data has been loaded but before indexes are created (the creation of the index gathers stats on them automatically).
3) Find (business) correlations between columns and inform the optimizer about them by creating Column Groups. You can also use this technique (extended statistics on expressions) in place of function-based indexes.
4) Always gather stats with AUTO_SAMPLE_SIZE so the new stats gathering algorithm of Oracle 11g can kick in.
5) If you use partitioned tables you *must* use synopses, so global stats can be derived much, much faster, in an incremental fashion.
I guess 1, 2 and 4 raise no doubts.
Regarding 3 and 5, here's the usual techie yada yada:
3) Column Groups are created by means of Extended Optimizer Statistics, using a new function of the DBMS_STATS package: CREATE_EXTENDED_STATS.
See an example in this very good article.
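
For the impatient, here's a minimal sketch (SCOTT.TABLE_XYZ and the column names are placeholders of mine, not from that article): create the column group, then gather stats so it gets populated:

SQL> select dbms_stats.create_extended_stats('SCOTT','TABLE_XYZ','(COL_A,COL_B)') from dual;
SQL> exec DBMS_STATS.GATHER_TABLE_STATS('SCOTT','TABLE_XYZ', method_opt => 'FOR ALL COLUMNS SIZE AUTO');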

5) Synopses are created when you enable incremental stats at the table or system level:

table level: SQL> exec DBMS_STATS.SET_TABLE_PREFS('SCOTT','TABLE_XYZ','INCREMENTAL','TRUE');

system level: SQL> exec DBMS_STATS.SET_GLOBAL_PREFS('INCREMENTAL','TRUE');

This avoids rescanning partitions that haven't changed since the last stats generation, making the whole run faster.
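
And to double-check the preference actually stuck (same placeholder names as above):

SQL> select dbms_stats.get_prefs('INCREMENTAL','SCOTT','TABLE_XYZ') from dual;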

So remember: ASICS, it's a matter of survival, not an option.

LMC.


November 27, 2009
The Difference between Online and Offline Compression
Imagine you have two tables, each with 250,000 records, just created with CTAS. One was created using regular compression and the other with Advanced Compression. Let's check how many 8 KB data blocks each one has:

TABLE_NAME                     COMPRESS COMPRESS_FOR     BLOCKS
------------------------------ -------- ------------ ----------
TAB1                           ENABLED  BASIC              3584
TAB2                           ENABLED  OLTP               4096
TAB1 was compressed using the BASIC compression method and has 3584 blocks, whilst TAB2 was compressed using the Advanced Compression algorithm (OLTP) and, strangely, has more blocks: 4096 in total.
What's going on here?
Is Advanced Compression worse than regular compression? If so, why does Oracle charge extra for that feature?
I think the answer is in the future! Let's insert another 250,000 rows in each table and see what size each one ends up at. I have a table called NPTAB, with millions of records, on which those two tables were based. So I'll just go to that table and fetch another 250k records:
SQL> insert into rep.tab1 select * from rep.nptab where rownum < 250001;
250000 rows created.
SQL> c/tab1/tab2
  1* insert into rep.tab2 select * from rep.nptab where rownum < 250001
SQL> r
  1* insert into rep.tab2 select * from rep.nptab where rownum < 250001
250000 rows created.
SQL> commit;
Commit complete.
And now let's check how many blocks there are in each table:
TABLE_NAME                     COMPRESS COMPRESS_FOR     BLOCKS
------------------------------ -------- ------------ ----------
TAB1                           ENABLED  BASIC             16384
TAB2                           ENABLED  OLTP               9216
What the hell? The Advanced Compressed table is now about seven thousand blocks smaller than TAB1, which was compressed with the basic method. What happened?
Easy! Advanced Compression is not just good compression, it's online compression, built for data that is compressed once and read many times. Data is compressed as it comes in; it's not a two-step process, it's all done at the same time. So data inserted into TAB2 already takes less space on arrival, whilst the other table (TAB1) had its data compressed when it was created, and everything that came in afterwards through conventional inserts stayed uncompressed.
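Side note, as a hedged sketch: with BASIC compression the only way to squeeze those conventionally inserted rows back down is to rebuild the segment, something like:

SQL> alter table rep.tab1 move compress;

Bear in mind a move marks the table's indexes UNUSABLE, so they'd need a rebuild afterwards. With OLTP compression no such maintenance step is needed.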
Queries take less time because objects are smaller. Take a look at how much time it takes to count all the records:
SQL> select count(*) from rep.tab1;
  COUNT(*)
----------
    499999
Elapsed: 00:00:16.03
SQL> select count(*) from rep.tab2;
  COUNT(*)
----------
    499999
Elapsed: 00:00:07.45
 
The Advanced Compressed table takes almost 9 seconds less to scan!
Less space, less time to execute queries, wow! Is it all good? I'm inclined to say yes, but data insertion into an Advanced Compressed table suffers a performance penalty. How much? Let's truncate both tables and see how long it takes to put all 499,999 records back in:
SQL> insert into rep.tab1 select * from rep.nptab where rownum < 500000;
499999 rows created.
Elapsed: 00:00:41.37
SQL> insert into rep.tab2 select * from rep.nptab where rownum < 500000;
499999 rows created.
Elapsed: 00:01:25.87
There is no perfect world: data insertion took roughly twice as long (41 seconds versus 86).
You decide what’s best for you: slower data insertion or less storage and far faster queries?
Take your pick.
LMC.
"How to transport a tablespace from 9i to 11g".
It's easy. No science here. You just have to follow the basics steps of the regular from 9i to 9i TTS procedure. I'll reproduce the drill here of my testings with 9.2.0.1 to 11.1.0.6.
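One hedged caveat before the drill: a 9i datafile can only be plugged straight in like this if source and target platforms share the same endian format, since cross-platform conversion (RMAN CONVERT) only arrived in 10g. On the 11g side you can list the known platforms with:

SQL> select platform_name, endian_format from v$transportable_platform order by platform_name;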
SQL> create tablespace lmc datafile 'c:\oracle\oradata\orc9\lmc_01.dbf' size 10m;
Tablespace created.
SQL> create table mytable tablespace lmc as
2  select * from dba_objects where rownum < 101;
Table created.
SQL> EXEC DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'LMC', incl_constraints => TRUE);
PL/SQL procedure successfully completed.
SQL> SELECT * FROM transport_set_violations;
VIOLATIONS
--------------------------------------------------------------------------------
Sys owned object  MYTABLE in tablespace LMC not allowed in pluggable set
-- ooooopsie daisy
SQL> drop table mytable;
Table dropped.
SQL> conn system/oracle
Connected.
SQL> create table mytable tablespace lmc as
2  select * from dba_objects where rownum < 101;
Table created.
SQL> conn sys as sysdba
Enter password:
Connected.
SQL> EXEC DBMS_TTS.TRANSPORT_SET_CHECK(ts_list => 'LMC', incl_constraints => TRUE);
PL/SQL procedure successfully completed.
SQL> SELECT * FROM transport_set_violations;
no rows selected
-- Cool, we can go on
SQL> alter tablespace lmc read only;
Tablespace altered.
exp USERID='sys/oracle AS SYSDBA' TRANSPORT_TABLESPACE=y TABLESPACES=LMC FILE=C:\oracle\oradata\orc9\lmc_tt.dmp
------- On Database 11G ------
Copy the file "lmc_01.dbf" to "c:\oracle\oradata\orco"
imp USERID='sys/oracle AS SYSDBA' TRANSPORT_TABLESPACE=y DATAFILES='C:\oracle\oradata\orco\LMC_01.DBF' TABLESPACES=lmc FILE=C:\oracle\oradata\orc9\lmc_tt.dmp
SQL> alter tablespace lmc read write;
Tablespace altered.
SQL> conn system/oracle
Connected.
SQL> desc mytable
 Name                                      Null?    Type
 ----------------------------------------- -------- -------------
 OWNER                                              VARCHAR2(30)
 OBJECT_NAME                                        VARCHAR2(128)
 SUBOBJECT_NAME                                     VARCHAR2(30)
 OBJECT_ID                                          NUMBER
 DATA_OBJECT_ID                                     NUMBER
 OBJECT_TYPE                                        VARCHAR2(18)
 CREATED                                            DATE
 LAST_DDL_TIME                                      DATE
 TIMESTAMP                                          VARCHAR2(19)
 STATUS                                             VARCHAR2(7)
 TEMPORARY                                          VARCHAR2(1)
 GENERATED                                          VARCHAR2(1)
 SECONDARY                                          VARCHAR2(1)
SQL> select count(*) from mytable;
COUNT(*)
----------
100
Bingo. As if by magic, the table arrives at the 11g database untouched!
Hope you liked it.
LMC.
September 8, 2009
Oracle 11gR2 feature of the Day!
I did this on a CentOS 5 box with Oracle 11.2.0.1; it shows the latest new hint you can use when copying rows from one table to another and there are collisions on unique/primary keys.
The example is hilarious!
SQL> create table exemplo (
  2  col1 number primary key,
  3  col2 varchar2(100));
Table created.
SQL> insert into exemplo values (1,'Primeira Linha');
1 row created.
SQL> insert into exemplo values (2,'Segunda Linha');
1 row created.
SQL> commit;
Commit complete.
SQL> -- NOW HERE IT COMES THE 11gR2 TWEAK
SQL> -- Create a new table based on EXEMPLO
SQL> create table exemplo2 as select * from exemplo;
Table created.
SQL> -- Insert an extra line in this new table
SQL> insert into exemplo2 values (3,'Terceira Linha');
1 row created.
SQL> commit;
Commit complete.
SQL> -- And NOW, if we insert all the lines from the 2nd table into the first,
SQL> -- lines 1 and 2 will collide
SQL> insert into exemplo select * from exemplo2;
insert into exemplo select * from exemplo2
*
ERROR at line 1:
ORA-00001: unique constraint (LMC.SYS_C0014102) violated
SQL> -- Buahhhhh But I wanted Oracle to be smart without me having the need to learn PL/SQL...
SQL> -- What? It is possible now?
SQL> -- How?
SQL> -- Like this:
SQL> insert --+IGNORE_ROW_ON_DUPKEY_INDEX
  2  into exemplo select * from exemplo2;
insert --+IGNORE_ROW_ON_DUPKEY_INDEX
*
ERROR at line 1:
ORA-00001: unique constraint (LMC.SYS_C0014102) violated
SQL> -- What did I do wrong?
SQL> -- See ...
SQL> insert --+IGNORE_ROW_ON_DUPKEY_INDEX(exemplo(col1))
  2  into exemplo select * from exemplo2;
1 row created.
SQL> commit;
Commit complete.
SQL> select * from exemplo;
 COL1 COL2
---------- ------------------------------
 1 Primeira Linha
 2 Segunda Linha
 3 Terceira Linha
SQL> -- The 3rd line just went in and the collisions were ignored!!
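Hedged footnote: the hint only kicks in when you tell it which key to ignore, either by listing the key columns as above or by naming the index itself; with the system-generated index from this example that would look something like:

SQL> insert --+IGNORE_ROW_ON_DUPKEY_INDEX(exemplo, sys_c0014102)
  2  into exemplo select * from exemplo2;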
March 5, 2009
A new paradigm: Oracle 11g's feature for context objects invalidation (COI)
Attention: revolution has come to the Oracle RDBMS!
How many times have you modified a table and, just because of that simple change, had all objects (views, subprograms, etc.) dependent on that table get invalidated?
It's probably the best known Oracle "feature" among developers!
Well guys, this is all about to change with a 180-degree turn! Only the changes that actually affect dependent objects will invalidate them, which should eliminate something like 90% of total invalidations.
Let's use a simple, very simple scenario:
1 - Table with one column
2 - View based on this table but created with "SELECT *"
3 - We add a new column to the table
4 - The view not only isn't invalidated, it also discards the new column from its "SELECT *"
Wow!!! This is freaky! It is a new paradigm for developers.
Here's the code for those who need to see it to understand it:
SQL*Plus: Release 11.1.0.7.0 - Production on Thu Mar 5 00:37:25 2009
Copyright (c) 1982, 2008, Oracle.  All rights reserved.
SQL> conn hr/hr
Connected.
SQL> create table inva (n number);
Table created.
SQL> create or replace view invaview as select * from inva;
View created.
SQL> select count(*) from user_objects where status='INVALID';
  COUNT(*)
----------
         0
SQL> alter table inva add (n2 number);
Table altered.
SQL> select count(*) from user_objects where status='INVALID';
  COUNT(*)
----------
         0
SQL> insert into inva values (1,1);
1 row created.
SQL> select * from  invaview;
         N
----------
         1
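And a hedged counter-check: a change that really does affect the view still invalidates it, as you'd expect. Dropping the column the view was built on should bump the INVALID count to 1:

SQL> alter table inva drop column n;
SQL> select count(*) from user_objects where status='INVALID';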
February 26, 2009
Oracle 11g new System Partitioning feature
Oracle 11g is an exciting release in terms of new features, and there are tons of them when it comes to partitioning: from virtual column partitioning to new combinations of subpartitioning.
One of those new partitioning features is system partitioning, which lets you create a table that is, in practice, dynamically partitioned. You just choose the number and names of the partitions, and then at DML time you decide where each row goes.
Let's take it slowly. First create the table with two partitions, each in a different tablespace. Here we use two default tablespaces, but you should use "real" permanent tablespaces:
SQL> create table SMALL_PARTS_ISOLATED
  2  (ncol number, xar varchar2(80))
  3  PARTITION BY SYSTEM
  4  (partition P1 tablespace users,
  5   partition P2 tablespace example);
Table created.
The key is that DML is done in a different fashion, because you have to decide which partition to put your rows in. Let's imagine you don't know this is a system-partitioned table and you do a "normal" insert. In that case you get the following error:
SQL> insert into SMALL_PARTS_ISOLATED values (1,'Nomeansno');
insert into SMALL_PARTS_ISOLATED values (1,'Nomeansno')
            *
ERROR at line 1:
ORA-14701: partition-extended name or bind variable must be used for DMLs on
tables partitioned by the System method
Wow! This error message clearly wasn't produced at the regular shop where Oracle usually makes them! Why? Because it's crystal clear.
And now the right way to do DML on a system-partitioned table, using the PARTITION keyword:
SQL> insert into SMALL_PARTS_ISOLATED PARTITION (p2) values (1,'YesMan');
1 row created.
SQL> insert into SMALL_PARTS_ISOLATED PARTITION (p1) values (2,'TwiceIsCheaper');
1 row created.
When you select from it, this is a regular table like any other:
SQL> select * from small_parts_isolated;
      NCOL XAR
---------- --------------------------------------------------------------------------------
         2 TwiceIsCheaper
         1 YesMan
But how the heck do we know the name of the partition, in order to insert into the right place? This table has partitions P1 and P2, so it might seem reasonable to assume there's also a partition called "P3":
SQL> insert into SMALL_PARTS_ISOLATED PARTITION (p3) values (30,'NonExistentPartition');
insert into SMALL_PARTS_ISOLATED PARTITION (p3) values (30,'NonExistentPartition')
                                            *
ERROR at line 1:
ORA-02149: Specified partition does not exist
Oopsy daisy... Looks like our assumption was wrong...
The way I see it, Oracle Corp. should have had the idea to incorporate partition names into the DESCRIBE command, but that is asking too much from those folks. So describing the table won't help a thing:
SQL> desc small_parts_isolated
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 NCOL                                               NUMBER
 XAR                                                VARCHAR2(80)
Looks like we're going to have to take a trip to the dreadful Oracle data dictionary. For this we used a query from the 24th of June 1997:
SQL> select partition_name
  2  from user_tab_partitions
  3  where table_name = 'SMALL_PARTS_ISOLATED';
PARTITION_NAME
------------------------------
P1
P2
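One last hedged tip: if the partition you want doesn't exist yet, system-partitioned tables let you add one on the fly and then target it (the tablespace is picked arbitrarily for the sketch):

SQL> alter table SMALL_PARTS_ISOLATED add partition P3 tablespace users;
SQL> insert into SMALL_PARTS_ISOLATED PARTITION (p3) values (30,'NowItExists');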
11 comments:

  1. Luis, I am testing performance of our application on Oracle 11gR2 and comparing the statistics gathering step with what we used to have on Oracle 11gR1. Same scope of data and same instance configuration: statistics gathering on 11gR2 takes twice as long. Originally I had estimate_pct set to NULL. So I gathered statistics on the R2 schema first with estimate_pct set to NULL - took 17s - and then on the same R2 instance with auto_sample_size - took 30s. So it seems that it takes longer on R2 than on R1, and auto_sample_size did not help. Would you have any suggestions or explanations?
    Best regards.

  2. If it's 11.2.0.2 then you can try gathering stats in parallel, which will speed up the process.
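    (A hedged sketch of what "in parallel" can mean here: the 11.2.0.2 CONCURRENT preference for gathering several objects at once, plus a parallel degree within each object; SCOTT is a placeholder:)

    SQL> exec DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT','TRUE');
    SQL> exec DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT', degree => DBMS_STATS.AUTO_DEGREE);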

  3. [...] Luis Moreno Campos – Best Practices for Statistics Gathering on Oracle 11g [...]

  4. Quick Question. I'm referring to point number 2.

    My scenario:

    On a weekly basis we 'duplicate' a table onto a new tablespace, load the data, build the indexes and analyze.

    My question:

    If I understand correctly, it's safe to say that you first have to analyze the table after the data load, before creating the indexes? We are running on Oracle 10g, not 11.

  5. It's safe to say for both 10g and 11g.
    LMC

  6. Interesting suggestions. I'm curious about the importance of 1 & 2, trying to imagine a situation where I have empty tables in production, or tables without indexes. This seems to assume you create a new DB and run scripts to create tables, then run load jobs. We use export/import at the schema level to migrate everything. For us it would be hard to justify the former approach "just" to achieve 1 & 2.

  7. Thanks for the feedback, Steve. You might want to load the tables with Data Pump as well, but doing it in stages: first the structure, then run the load, then the data, and then you might want to create the indexes (which you can extract as a script from the Data Pump file). Does this make sense for you?
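    (A hedged sketch of such a staged Data Pump run; the directory, file and user names are made up:)

    impdp system DIRECTORY=dp_dir DUMPFILE=app.dmp CONTENT=METADATA_ONLY EXCLUDE=INDEX
    impdp system DIRECTORY=dp_dir DUMPFILE=app.dmp CONTENT=DATA_ONLY
    impdp system DIRECTORY=dp_dir DUMPFILE=app.dmp SQLFILE=create_indexes.sql INCLUDE=INDEX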

  8. Hi, neat post. There is a problem with your website in Internet Explorer; could you check this? IE is still the market leader, and a big chunk of people will miss your wonderful writing because of this problem.

  9. Hi. Thanks for the comment. I've tested with IE8 and it seems to be working fine. Can you be more precise on what the problem seems to be?
    Cheers.

  10. Catching up on stats with a production system means step 1 is not feasible. I cannot run it against empty tables. So does it make sense to run it for a longer period to try and approximate? Or is there some way to interpret the results given that the tables did not start out empty?

  11. Sure, you can't run it against production because the tables are not empty. But you could run it against an empty set of the same tables, to which you've just imported the production stats. Once you've got the ball rolling it's a matter of using incremental stats.
