Feed aggregator

Procedure that deletes an input from table with foreign key constraints

Tom Kyte - Sun, 2017-10-22 21:26
I have written a procedure that has as input the CUSTOMER_ID. Then the procedure deletes the corresponding customer from the table CUSTOMERS. CREATE OR REPLACE PROCEDURE DELETE_CUSTOMER (CUSTOMER_ID NUMBER) AS TOT_CUSTOMERS ...
Categories: DBA Blogs

Parameterized dynamic query

Tom Kyte - Sun, 2017-10-22 21:26
Hi, I'm trying to improve the performance of some queries in my production environment. The current scenario is: 1) retrieve some data from "ParentTable". The query, in software code, is composed dynamically, based on the specified s...
Categories: DBA Blogs

SQLcl works worse with pipelined function than SQL*Plus

Tom Kyte - Sun, 2017-10-22 21:26
Let's imagine we have a pipelined function like the one shown in this example: https://oracle-base.com/articles/misc/pipelined-table-functions#pipelined_table_functions My aim is to show whether that function really returns data row by row, rather than at o...
Categories: DBA Blogs

Database is more than persistence. It is shared. My thoughts about Microservice Database.

Yann Neuhaus - Sun, 2017-10-22 14:36

The development teams often see the database as a ‘persistence layer’ only. And sometimes it is mentioned that the database is always the cause of the problems, especially when implementing continuous integration and trying to be agile. Then comes the idea to have this ‘persistence layer’ in an isolated environment, such as a Docker container, with one database per developer. However, this overlooks the real cause of the problems, which is not the persistence function of the database, but the fact that it is shared. And when you share something, in a multi-user environment, you reach another level of complexity. And if you are not prepared for that, you perceive it as a problem.

This philosophical blog post contains some cool words which, in my opinion, must be carefully considered when dealing with databases: agile, persistence, stateless, docker, microservices, NoSQL, containers, lake, cloud,…


When I hear about a ‘persistence layer’ it always reminds me of a meeting a few years ago where the architects of a new application drew on the whiteboard something that they could have called microservices a few years later. Dedicated web services reading data, processing it, and writing the result to the ‘persistence layer’. And when the data was only a transient interface between two of those services, they wrote ‘temporary persistence’ on the whiteboard diagram. When you need two antonyms to define something, there is probably something wrong in the design.

Commitment

In order to understand what’s beyond persistence, you should think about what happens when you:
SQL> commit work;
Commit complete.

First, the commit statement closes a transaction that has been started by the first DML. That’s the first purpose of a database management system: processing transactions. This is already something that is not very clear to some development architects. Thinking about transactions is not easy when there is no one-to-one relationship between the database session and the end-user (as in client/server on a dedicated terminal), so they designed stateless services. Then they encountered issues such as lost updates as soon as they had more than one user, and then implemented optimistic locking in their application server, not always in the right layer. I recommend this talk from Vlad Mihalcea about transactions.
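To make the lost-update problem concrete, here is a minimal sketch of optimistic locking done in the database itself rather than re-invented in the application server. The ACCOUNTS table and its VERSION column are hypothetical, introduced only for illustration:

-- Re-read-and-retry optimistic locking: the update succeeds only if the row
-- still carries the version number we saw when we read it.
update accounts
   set balance = :new_balance,
       version = version + 1
 where account_id = :id
   and version = :version_seen_at_read;
-- SQL%ROWCOUNT = 0 means another session changed the row in the meantime:
-- the application must re-read and retry instead of silently overwriting.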

Second, the commit statement ensures that changes are persistent and durable. The changes are written to disk, to backups, and to standby databases. This could be as simple as a write call, but it is a bit more complex for performance reasons: random writes are done on the cache and written to disk asynchronously, while the redo transaction log is written sequentially and synced on commit. Here, the developer does not need to care about the underlying mechanisms, beyond changing only what is needed and committing only when needed.

Third, the commit marks the changes as publicly visible. Because the data is shared among multiple users. This is why developing and testing on your own personal database is limited to unit tests. As soon as you have to work on implementing a real use-case, you must work on a database shared by different services. I have seen enterprises go early to agile Database as a Service, where each developer was working on his own copy (thin clone) of the database. They are now asking for common environments where multiple developers can work and test their different parts of the application. Back to the pre-agile idea, and back to the same problem: the database is shared.

Finally, I think that some developers like to see the database as only a ‘persistence layer’ just because it is easier. You can be lazy and let the database system and the database administrators manage the durability of the data. The non-optimal performance will be compensated for by software and hardware. And ignoring the two other properties of a database system is just walking away from the complexity: NoSQL to ignore transactions and consistency, and containers to ignore the sharing concern.

Impedance Mismatch

This is not new. ‘Object Oriented’ was the buzzword before ‘buzzword’ itself was a buzzword. And OO development was fine until it had to manipulate data from the database. The problems encountered when trying to match the object-oriented model with the relational model were called the ‘impedance mismatch’. And frameworks were built to do this matching as if it were simple, such as matching the Object Identity concept with the Primary Key concept. My experience is that this was nice for building good Proofs of Concept, but failed in production on consistency, performance, and scalability. Object Oriented development is good for non-shared transient objects. A GUI is based on objects, such as a window or a text field, where the object identity is the address in memory. If you restart the application, it is another object, with a different memory address. And the GUI on your colleague’s computer is again another object. It is not persistent but transient. And it is not shared: all are different. Applying this model to data doesn’t work. You can simulate persistence with an Object-Relational Mapping (ORM), but sharing will be a problem. Those ORMs usually work on cached proxy objects in the application server, trying to re-invent the management of concurrent changes, without using the database system which is built for that.

RDBMS

The current shift from ‘database technology’ to ‘developer community’ is probably a good idea, but only if we do not make the same mistakes, such as using the database as a black box to persist objects. We must keep in mind what Relational Database Management Systems were built for.

Before RDBMS, data was modeled hierarchically. This was good for performance (because data is clustered) but was good for only one use-case. Other use-cases had to replicate the data into another physical design. Relational modeling stores data in a way that it can be used by all use-cases. For example, you can look at the orders from one customer, or at the orders for one product. The business entities are in different tables and are joined dynamically when queried. This is a data point of view. The developer builds something different because the developer works on one use-case. This was the trend with XML a few years ago, and with JSON now, bringing back the hierarchical model that failed decades ago. Very good for working on your use-case, but very bad when data is shared with other use-cases. You have a good Proof of Concept and good unit tests. But integration will be impossible. I think that we will see the same with microservices: each one designed for its own usage without thinking about other (including future) use-cases.

Before RDBMS, data definition was included in the code of each use-case. A COBOL program had a data division describing the structure of the data that would be accessed by the program. This was not good for shared databases because one structural change required changing all programs. And this is why the relational model was introduced, with physical and logical independence. The logical model is designed for the application, and it is the common interface between the developers and the database administrators. Here again, I’ve seen some application architects go backward, using ORM mappings in the same way the COBOL data division was used in the past.

Microservice Database

Today, developers are running their code in containers. This has the same properties as the OO development I described before: isolated and transient. Docker starts a few processes in its own image of the disk. This is perfect. But, like they did with OO, they try to bring this idea to the database. And that will fail again because, except in the early stages of development, you need a database that is shared and persistent.

In my opinion, running the Oracle Database in a Docker container sends the wrong message to the developer community. A database involves a lot of resources and is highly coupled with the OS. My lab databases on my laptop are VirtualBox VMs: easy to build, easy to start, and easy to share. I tested databases on Docker three years ago to see how we could build data virtualization and thin clones for development, with a standby database on Docker. I’ve written an article about that for DOAG. The technology has evolved, but I think that it is still not a good idea, except for the fun of trying something new.

Today, we have containers on the database itself, with multitenant pluggable databases and application containers. You can provision a pluggable database for each Docker container running the application. And you have easy ways to copy, clone, and refresh them, or to share common data. This is OK for development.
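For illustration, here is a sketch of provisioning such a developer PDB as a clone of a master application PDB. All names and paths are hypothetical:

-- Clone a master application PDB for one developer (12c multitenant syntax).
create pluggable database DEV_ALICE from APP_MASTER
  file_name_convert=('/u02/oradata/APP_MASTER/','/u02/oradata/DEV_ALICE/');
alter pluggable database DEV_ALICE open;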

Application containers have been introduced for SaaS: each application tenant has its own pluggable database. The data in the PDB is still shared by all application use-cases, and metadata (and some reference tables) is shared in the application root by several applications.

However, we also see in 12cR2, and probably more in 18c, some cross-PDB DML which allows queries and modifications among several pluggable databases. It is not yet stable, using database links for modifications (see here). But I can imagine that it will be fixed in 18c or later, making it possible to update different PDBs within the same transaction.
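On the query side, the CONTAINERS() clause already allows reading a table across all open PDBs from the root, as in this sketch (the ORDERS table is hypothetical):

-- Count rows per PDB for a table that exists in each container.
select con_id, count(*)
  from containers(orders)
 group by con_id;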

And then, I foresee how this will be perceived by some developer architects (those ignoring that the database must be shared). They will build microservices, with small application servers running in a Docker container. And they will map one PDB for each service, doing something like a Microservice Database. Some will store data in XML, others in JSON, and some in relational tables. All those will be consolidated into a multi-tenant database to be managed easily. And my fear is that we will see a large bowl of spaghetti to exchange data between those Microservice PDBs using database links, common views, cross-PDB DML, and maybe some logical replication.

In my opinion, microservices can be an interesting design for future applications, but trying to map the same idea to the database will be a failure if developers don’t realize that the database is not there only to persist data but also to share it. But that’s just my own current opinion and I’ve no idea about the future. Maybe all this data modified by microservices will be shared in real time in a big Data Lake universally available on the cloud, and all IT problems about concurrency and latency will be solved.

 

The article Database is more than persistence. It is shared. My thoughts about Microservice Database. appeared first on Blog dbi services.

Archivelog deletion policy on Data Guard configuration

Yann Neuhaus - Sun, 2017-10-22 14:02

The deletion policy on a Data Guard configuration should be:

CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
for the site where you don’t backup. It can be the standby or the primary.

and:

CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY BACKED UP 1 TIMES TO DISK;
for the site where you do the backups. It can be the primary or the standby.

I’ve always configured it in this way, but I recently discovered that the order of the subclauses matters. Do not CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK APPLIED ON ALL STANDBY; because no archivelogs will be reclaimable, and your recovery area will be full. This is probably a bug. I’ll update this post when I have more information about it.

Test case

I’m on the standby database where I do the backups:

 
Recovery Manager: Release 12.2.0.1.0 - Production on Sun Oct 22 17:37:18 2017
 
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
 
RMAN-06568: connected to target database: CDB2 (DBID=693620717, not open)
 
RMAN>
RMAN-03029: echo set on
 

I start with all default configuration:

RMAN> show all;
RMAN-06607: RMAN configuration parameters for database with db_unique_name CDB2B are:
RMAN-01005: CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
RMAN-01005: CONFIGURE BACKUP OPTIMIZATION OFF; # default
RMAN-01005: CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
RMAN-01005: CONFIGURE CONTROLFILE AUTOBACKUP ON; # default
RMAN-01005: CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
RMAN-01005: CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
RMAN-01005: CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
RMAN-01005: CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
RMAN-01005: CONFIGURE MAXSETSIZE TO UNLIMITED; # default
RMAN-01005: CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
RMAN-01005: CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
RMAN-01005: CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
RMAN-01005: CONFIGURE RMAN OUTPUT TO KEEP FOR 7 DAYS; # default
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
RMAN-01005: CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/app/oracle/product/12.2.0/dbhome_1/dbs/snapcf_CDB2B.f'; # default

I have one full backup and then some archivelog backups:

RMAN> list backup summary;
RMAN-06345: List of Backups
RMAN-06346: ===============
RMAN-06347: Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
RMAN-06348: ------- -- -- - ----------- --------------- ------- ------- ---------- ---
RMAN-06349: 9 B F A DISK 22-OCT 17:05:16 1 1 NO TAG20171022T170250
RMAN-06349: 10 B F A DISK 22-OCT 17:06:23 1 1 NO TAG20171022T170250
RMAN-06349: 11 B F A DISK 22-OCT 17:07:18 1 1 NO TAG20171022T170250
RMAN-06349: 12 B F A DISK 22-OCT 17:07:35 1 1 NO TAG20171022T170732
RMAN-06349: 13 B A A DISK 22-OCT 17:15:09 1 1 NO TAG20171022T171502
RMAN-06349: 14 B F A DISK 22-OCT 17:15:19 1 1 NO TAG20171022T171518
RMAN-06349: 15 B A A DISK 22-OCT 17:27:28 1 1 NO TAG20171022T172721
RMAN-06349: 16 B F A DISK 22-OCT 17:27:39 1 1 NO TAG20171022T172737

I have no obsolete backups:

RMAN> report obsolete;
RMAN-06524: RMAN retention policy will be applied to the command
RMAN-06511: RMAN retention policy is set to redundancy 1
RMAN-06147: no obsolete backups found

APPLIED ON ALL STANDBY

I have 1.08% of the recovery area that has been applied, and it is therefore reclaimable:

RMAN> configure archivelog deletion policy to applied on all standby;
RMAN-06601: new RMAN configuration parameters:
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
RMAN-06604: new RMAN configuration parameters are successfully stored
 
 
RMAN> select percent_space_used,percent_space_reclaimable,number_of_files from v$recovery_area_usage where file_type='ARCHIVED LOG';
RMAN-01005: PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
RMAN-01005: ------------------ ------------------------- ---------------
RMAN-01005: 1.3 1.08 3

BACKED UP 1 TIMES TO DISK

I have 1.3% of the recovery area that has been backed up:

RMAN> configure archivelog deletion policy to backed up 1 times to disk;
RMAN-06600: old RMAN configuration parameters:
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
RMAN-06601: new RMAN configuration parameters:
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;
RMAN-06604: new RMAN configuration parameters are successfully stored
 
 
RMAN> select percent_space_used,percent_space_reclaimable,number_of_files from v$recovery_area_usage where file_type='ARCHIVED LOG';
RMAN-01005: PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
RMAN-01005: ------------------ ------------------------- ---------------
RMAN-01005: 1.3 1.3 3

This looks good. I have some archived logs that have been backed up but not yet applied.

Both in the ‘bad’ order

But now I want to combine both:

RMAN> configure archivelog deletion policy to backed up 1 times to disk applied on all standby;
RMAN-06600: old RMAN configuration parameters:
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;
RMAN-06601: new RMAN configuration parameters:
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK APPLIED ON ALL STANDBY;
RMAN-06604: new RMAN configuration parameters are successfully stored
 
 
RMAN> select percent_space_used,percent_space_reclaimable,number_of_files from v$recovery_area_usage where file_type='ARCHIVED LOG';
RMAN-01005: PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
RMAN-01005: ------------------ ------------------------- ---------------
RMAN-01005: 1.3 0 3

Nothing is reclaimable here, which is in my opinion a bug.

Both in the ‘right’ order

Trying the same but with different order:

RMAN> configure archivelog deletion policy to applied on all standby backed up 1 times to disk;
RMAN-06600: old RMAN configuration parameters:
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK APPLIED ON ALL STANDBY;
RMAN-06601: new RMAN configuration parameters:
RMAN-01005: CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY BACKED UP 1 TIMES TO DISK;
RMAN-06604: new RMAN configuration parameters are successfully stored
 
 
RMAN> select percent_space_used,percent_space_reclaimable,number_of_files from v$recovery_area_usage where file_type='ARCHIVED LOG';
RMAN-01005: PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
RMAN-01005: ------------------ ------------------------- ---------------
RMAN-01005: 1.3 1.08 3

This looks good: 1.08% has been applied and backed up, and is therefore reclaimable.

I’ve verified the same behavior when backups are done on the primary or on the standby, and the same in 11.2.0.4, 12.1 and 12.2 with the latest RU.
When the deletion policy starts with the backup clause before the standby clause, the files are not marked as reclaimable. However, they are still deleted by a ‘delete archivelog’ statement without the ‘force’ option.
The behavior is the same with ‘shipped to’ instead of ‘applied on’.

So what?

Be careful with the syntax: the ‘applied’ or ‘shipped’ clause must be written before the ‘backup’ one.
Check that archived logs are reclaimable. The query I use for that is in: https://blog.dbi-services.com/archivelog-deletion-policy-for-standby-database-in-oracle-data-guard/. When more archived log detail is required, I also have this query: https://blog.dbi-services.com/drilling-down-vrecoveryareausage/
Always have a look at the recovery area usage after you change the deletion policy or the site where you back up, and after a switchover.
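As an additional sanity check (a sketch, not the exact query from the posts linked above), v$archived_log shows whether each archived log has been applied, backed up, and deleted:

select dest_id, sequence#, applied, backup_count, deleted
  from v$archived_log
 where standby_dest='NO'
 order by dest_id, sequence#;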

 

The article Archivelog deletion policy on Data Guard configuration appeared first on Blog dbi services.

OGG-00868/ORA-01291 – Missing log files with Integrated Extract

DBASolved - Sun, 2017-10-22 10:05

Oracle GoldenGate is reliant on the Oracle redo logs and archive logs when capturing transactions. Since I do a lot of testing on VMs (limited space) and in the cloud (limited space … don’t want to burn too much $), I often delete my archive logs. Normally this is not a problem; however, every once in a while I delete more archive logs than I should. This throws the Integrated Extract (IE) into a state where it will not start because of OGG-00868/ORA-01291 – Missing Log Files.

On some level this is to be expected, but when you are using IE you have to remember that the extract is registered with the database. Since I’m using an Integrated Extract, I have to reset how the extract is registered with the database. The steps below show how this should be done:

Note: Registering/unregistering an extract has to be done at the container database (CDB) level.

adminclient> dblogin useridalias <alias> domain <domain>
adminclient> stop extract <extract_name>
adminclient> unregister extract <extract_name> database
adminclient> register extract <extract_name> database container <container_name>
adminclient> start extract <extract_name>
adminclient> info extract <extract_name>

Once the extract is re-registered with the database, the Integrated Extract will start.
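As a quick sanity check (a sketch; an Integrated Extract appears as a capture process in the database), you can query DBA_CAPTURE from the CDB root as a privileged user:

select capture_name, status, required_checkpoint_scn
  from dba_capture;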

Enjoy!!!


Filed under: Golden Gate
Categories: DBA Blogs

How to install Oracle R Enterprise (ORE) on Exadata

Syed Jaffar - Sun, 2017-10-22 08:38
Very recently, we deployed the ORE (R Distribution and R Enterprise) 3.1.1 packages on a four-node Exadata environment. This blog post discusses the prerequisites and the procedure to deploy Oracle R Distribution v3.1.1.

Note: Ensure you have a recent system (root and /u01) backup before you deploy the packages on the DB server.

What are R and Oracle R Enterprise?

R is third-party, open source software. Open source R is governed by the GNU General Public License (GPL) and not by Oracle licensing. Oracle R Enterprise requires an installation of R on the server computer and on each client computer that interacts with the server.

Why Oracle R Distribution? 
  • Oracle R Distribution simplifies the installation of R for Oracle R Enterprise.
  • Oracle R Distribution is supported by Oracle for customers of Oracle Advanced Analytics, Oracle Linux, and Oracle Big Data Appliance.

What is needed for R Distribution deployment for Oracle Linux 6?
The Oracle R Distribution RPMs for Oracle Linux 6 are listed as follows:

http://public-yum.oracle.com/repo/OracleLinux/OL6/addons/x86_64/getPackage/R-3.1.1-2.el6.x86_64.rpm
http://public-yum.oracle.com/repo/OracleLinux/OL6/addons/x86_64/getPackage/R-core-3.1.1-2.el6.x86_64.rpm
http://public-yum.oracle.com/repo/OracleLinux/OL6/addons/x86_64/getPackage/R-devel-3.1.1-2.el6.x86_64.rpm
http://public-yum.oracle.com/repo/OracleLinux/OL6/addons/x86_64/getPackage/libRmath-3.1.1-2.el6.x86_64.rpm
http://public-yum.oracle.com/repo/OracleLinux/OL6/addons/x86_64/getPackage/libRmath-devel-3.1.1-2.el6.x86_64.rpm
http://public-yum.oracle.com/repo/OracleLinux/OL6/addons/x86_64/getPackage/libRmath-static-3.1.1-2.el6.x86_64.rpm
 
If the following dependent RPM is not automatically included, then download and install it explicitly:
texinfo-tex-4.13a-8.el6.x86_64.rpm

The ORE client/server installation steps are depicted in Figure 1-2 of the Oracle documentation (figure not reproduced here).
Oracle R Distribution on Oracle Linux Using RPMs

Oracle recommends that you use yum to install Oracle R Distribution, because yum automatically resolves RPM dependencies. However, if yum is not available, then you can install the RPMs directly and resolve the dependencies manually. Download the required RPMs and their dependencies from the following link:

http://yum.oracle.com/repo/OracleLinux/OL6/addons/x86_64/index.html

To learn more about the RPMs and their dependencies, visit the following Oracle documentation page:

https://docs.oracle.com/cd/E57012_01/doc.141/e57007/install_r.htm#BABECIBB

You can install the RPMs in the following order:

yum localinstall libRmath-3.1.1-2.el6.x86_64.rpm
yum localinstall libRmath-devel-3.1.1-2.el6.x86_64.rpm
yum localinstall libRmath-static-3.1.1-2.el6.x86_64.rpm
yum localinstall R-core-3.1.1-2.el6.x86_64.rpm
yum localinstall R-devel-3.1.1-2.el6.x86_64.rpm
yum localinstall R-3.1.1-2.el6.x86_64.rpm

Once the RPMs are installed, you can validate the installation using the procedure below.

Go to the /usr/lib64/R directory on the database server and, as the oracle user, type R. You should see the R startup banner (screenshot not reproduced here).

Type q() to exit the R interface.

Repeat this on the rest of the DB nodes if you are on RAC.

To uninstall Oracle R Distribution, remove the RPMs in the following order:

rpm -e R-<Rversion>
rpm -e R-devel
rpm -e R-core
rpm -e libRmath-devel
rpm -e libRmath
 
 
In the next blog post, I will demonstrate how to configure Oracle R Enterprise.

Comparing queries…

Bar Solutions - Sun, 2017-10-22 02:25

How do you compare a rewritten query to its original version? Most of the time I just run a MINUS operation on the original and new query. Actually I execute two. Old query (A) MINUS New query (B) and vice versa (B) MINUS (A). Both should result in no rows. That way I thought I had proven that the resultsets for both queries are equal.
But there is a flaw in this assumption.

What if there are duplicate rows in one of the resultsets? The MINUS operator removes a row from the resultset if it exists in both collections. But if one of the collections has the row twice and the other collection has it once, it is removed completely.
Let’s say we have two queries:

-- first query
select 'alpha' nato_spelling from dual union all
select 'bravo' nato_spelling from dual union all
select 'bravo' nato_spelling from dual union all
select 'charlie' nato_spelling from dual
NATO_SPELLING
-------------
alpha
bravo
bravo
charlie

and

-- second query
select 'alpha' nato_spelling from dual union all
select 'bravo' nato_spelling from dual union all
select 'charlie' nato_spelling from dual
NATO_SPELLING
-------------
alpha
bravo
charlie

As you can see by just eyeballing the queries, the resultsets are different.
But when you execute the MINUS operator on the queries, you’ll get the impression the resultsets are the same:

select *
from
((-- first query
select 'alpha' nato_spelling from dual union all
select 'bravo' nato_spelling from dual union all
select 'bravo' nato_spelling from dual union all
select 'charlie' nato_spelling from dual
 )
 MINUS
 (-- second query
select 'alpha' nato_spelling from dual union all
select 'bravo' nato_spelling from dual union all
select 'charlie' nato_spelling from dual
 )
)

NATO_SPELLING
----------------------------
select *
from
((-- second query
select 'alpha' nato_spelling from dual union all
select 'bravo' nato_spelling from dual union all
select 'charlie' nato_spelling from dual
 )
 MINUS
 (-- first query
select 'alpha' nato_spelling from dual union all
select 'bravo' nato_spelling from dual union all
select 'bravo' nato_spelling from dual union all
select 'charlie' nato_spelling from dual
 )
)
NATO_SPELLING
----------------------------

It gets worse when you compare queries that include a UNION or a UNION ALL operator:

select *
from
(
 (-- first query
  select 'alpha' nato_spelling from dual union all
  select 'bravo' nato_spelling from dual union all
  select 'bravo' nato_spelling from dual union all
  select 'charlie' nato_spelling from dual
 )
 UNION
 (-- third query
  select 'x-ray' nato_spelling from dual union all
  select 'yankee' nato_spelling from dual union all
  select 'zulu' nato_spelling from dual
 )
)
NATO_SPELLING
-------------
alpha
bravo
charlie
x-ray
yankee
zulu

6 rows selected

and

select *
from
(
 (-- first query
  select 'alpha' nato_spelling from dual union all
  select 'bravo' nato_spelling from dual union all
  select 'bravo' nato_spelling from dual union all
  select 'charlie' nato_spelling from dual
 )
 UNION ALL
 (-- third query
  select 'x-ray' nato_spelling from dual union all
  select 'yankee' nato_spelling from dual union all
  select 'zulu' nato_spelling from dual
 )
)
NATO_SPELLING
-------------
alpha
bravo
bravo
charlie
x-ray
yankee
zulu

7 rows selected

Clearly the results are different. This is because of how the UNION operator works: it removes duplicates from the resultset. But when I check it using the MINUS operator (both ways):

select *
  from 
  (
   (select *
      from
      (
       (-- first query
        select 'alpha' nato_spelling from dual union all
        select 'bravo' nato_spelling from dual union all
        select 'bravo' nato_spelling from dual union all
        select 'charlie' nato_spelling from dual
       )
       UNION
       (-- third query
        select 'x-ray' nato_spelling from dual union all
        select 'yankee' nato_spelling from dual union all
        select 'zulu' nato_spelling from dual
       )
      )
   )
   MINUS
   (select *
      from
      (
       (-- first query
        select 'alpha' nato_spelling from dual union all
        select 'bravo' nato_spelling from dual union all
        select 'bravo' nato_spelling from dual union all
        select 'charlie' nato_spelling from dual
       )
       UNION ALL
       (-- third query
        select 'x-ray' nato_spelling from dual union all
        select 'yankee' nato_spelling from dual union all
        select 'zulu' nato_spelling from dual
       )
      )
   )
  )
NATO_SPELLING
-------------

and

select *
  from
  (
   (select *
      from
      (
       (-- first query
        select 'alpha' nato_spelling from dual union all
        select 'bravo' nato_spelling from dual union all
        select 'bravo' nato_spelling from dual union all
        select 'charlie' nato_spelling from dual
       )
       UNION ALL
       (-- third query
        select 'x-ray' nato_spelling from dual union all
        select 'yankee' nato_spelling from dual union all
        select 'zulu' nato_spelling from dual
       )
      )
   )
   MINUS
   (select *
      from
      (
       (-- first query
        select 'alpha' nato_spelling from dual union all
        select 'bravo' nato_spelling from dual union all
        select 'bravo' nato_spelling from dual union all
        select 'charlie' nato_spelling from dual
       )
       UNION
       (-- third query
        select 'x-ray' nato_spelling from dual union all
        select 'yankee' nato_spelling from dual union all
        select 'zulu' nato_spelling from dual
       )
      )
   )
  )
NATO_SPELLING
-------------

These results tell me the queries have the same resultsets, when in fact they do not.
A simple (extra) check is to verify that both queries return the same number of rows.
In my opinion the resultsets are equal when both MINUS queries give no results AND the row counts of the individual queries are the same.
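Another option is to make the comparison itself duplicate-aware: group each resultset by its columns with a COUNT(*), then MINUS both ways. A row then only cancels out when it occurs the same number of times in both sets. A sketch using the first and second queries from above:

(select nato_spelling, count(*) occurrences
   from (-- first query
         select 'alpha' nato_spelling from dual union all
         select 'bravo' nato_spelling from dual union all
         select 'bravo' nato_spelling from dual union all
         select 'charlie' nato_spelling from dual)
  group by nato_spelling
 minus
 select nato_spelling, count(*) occurrences
   from (-- second query
         select 'alpha' nato_spelling from dual union all
         select 'bravo' nato_spelling from dual union all
         select 'charlie' nato_spelling from dual)
  group by nato_spelling)
union all
(select nato_spelling, count(*) occurrences
   from (-- second query
         select 'alpha' nato_spelling from dual union all
         select 'bravo' nato_spelling from dual union all
         select 'charlie' nato_spelling from dual)
  group by nato_spelling
 minus
 select nato_spelling, count(*) occurrences
   from (-- first query
         select 'alpha' nato_spelling from dual union all
         select 'bravo' nato_spelling from dual union all
         select 'bravo' nato_spelling from dual union all
         select 'charlie' nato_spelling from dual)
  group by nato_spelling)

This returns ('bravo', 2) and ('bravo', 1), immediately exposing the duplicate.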
Is there something I am still missing here?
I hope not.

Ubuntu 17.10: Upgrade from xorg to wayland: window decorations broken!

Dietrich Schroff - Sat, 2017-10-21 14:14
Today I upgraded my Ubuntu from 16.04 to 17.10 (upgrade dialog screenshots omitted).

Up to this step everything worked fine.
But after the reboot I got tiny window decorations. This had to do with my earlier changes to the Unity desktop - I used Compiz there, and this broke the new Wayland configuration.

I found some hints about

dpkg --purge compiz-plugins

but this did not really help.
The way out was this:

mv .compiz .compiz.old
mv .gconf .gconf.old
mv .config .config.old

After moving these old configuration directories aside, the desktop looked OK.
For changing the window decorations I had to install:

apt install gnome-tweak-tool

Sorry that I do not post any screenshot of the new 17.10 desktop - the screenshot application does not work anymore ;-)

VirtualBox 5.2 exports the VM to the Oracle Cloud

Yann Neuhaus - Sat, 2017-10-21 10:32

The new release of Oracle VM VirtualBox (aka VirtualBox) is there, with new functionality to export a VM to Oracle Cloud Compute (aka Oracle Cloud Infrastructure). That can be interesting: prepare a VM on my laptop and move it to the Cloud to make it accessible from everywhere. Here’s my first try. In my opinion, it’s a good idea but it probably needs further evolution.

VirtualBox

Here is what is new: in addition to .ova you can export to an Oracle Public Cloud image:
[screenshot]

This takes some time, as it compresses and writes all the disk images

[screenshot]

The result is a .tar.gz for each disk attached to my VM. It is actually the image of the disk (.img) that is tar-ed and then gzipped. My VM (called VM101) had two disks (VM101-disk1.vdi and VM101-disk2.vdi). The export generated VM101.tar.gz (containing VM101-disk002.img, which looks like my first disk) and VM101-disk003.tar.gz (containing VM101-disk003.img, which looks like my second disk).

Here is the content:


$ tar -ztvf VM101.tar.gz
-rw-r----- vboxopc10/vbox_v5.2.VBOX_VERSION_PATCHr11 4294967296 2017-10-19 21:23 VM101-disk002.img
 
$ tar -ztvf VM101-disk003.tar.gz
-rw-r----- vboxopc10/vbox_v5.2.VBOX_VERSION_PATCHr11 27917287424 2017-10-19 21:25 VM101-disk003.img

The .img is the image of the disk, with the partition and boot sector.

Compute Cloud

In the Oracle Public Cloud I can import this image: Compute Classic -> Images -> Upload Image

[screenshot]

I upload only the image of the first disk, which contains the root filesystem:

[screenshot]

[screenshot]

And then I create the compute instance with the ‘Associate Image’ button:

[screenshot]

Now, I’m ready to create an instance for it: Instance -> Customize -> Private Images

[screenshot]

Then, I can define the shape (OCPU and memory), upload my SSH public key, and add storage (I could add my second disk here) and create the instance.

Here I’ve started it:

[screenshot]

Unfortunately, my VM still has the network interface defined for my VirtualBox environment, and so I have no way to connect to it. I hope that this feature will evolve to also export virtual network interfaces.

I have not seen any way to open a terminal on the console. The only thing I can do is take snapshots of it:

[screenshot]

Ok, so there’s a problem way before the network interfaces. My VM from Oracle VM VirtualBox (aka VirtualBox) now starts on Oracle VM (aka OVM), and besides the similar marketing name, they are different hypervisors (OVM runs Xen). Probably a driver to access block devices is missing, maybe related to Bug 21244825.

That’s probably all for my tests on this until the next version. It is currently not easy to have a VM that can be started on different hypervisors and network environments.

So what?

Nothing very special here. Moving a VM from one hypervisor to another is not an easy thing, but it is a good idea. And I hope that the integration into the Oracle Cloud will be easier in the future, with virtual disks and network interfaces. For the Oracle Cloud, it would be nice to have access to the console, but at least a screenshot may help to troubleshoot.

 

The article VirtualBox 5.2 exports the VM to the Oracle Cloud appeared first on Blog dbi services.

Fragmentation

Tom Kyte - Sat, 2017-10-21 08:46
Can tables which only experience inserts and updates, and no deletes, become fragmented?
Categories: DBA Blogs

Join with where clause

Tom Kyte - Sat, 2017-10-21 08:46
Hi there, I have an example below for which I'm not sure how Oracle executes the where clause. Select a.id, b.column_A, b.column_B, b.column_C from A left join B on a.id = b.id where b.column_C = 'Yes' My question is: does Oracle execute the...
Categories: DBA Blogs

Are results from SELECT with no order by and with ROWNUM predicate used in WHERE condition stable?

Tom Kyte - Sat, 2017-10-21 08:46
Consider the following simple scenario: PREPARATIONS: 1) Let's say we have a table USERS with one COLUMN NAME: create table USERS(NAME varchar(100)); 2) Let's put some values there: insert into USERS(name) values('User1'); insert into USERS...
Categories: DBA Blogs

difference of explain plan, autotrace and tkprof

Tom Kyte - Sat, 2017-10-21 08:46
I am confused about explain plan, autotrace and tkprof. Explain plan shows us a prediction of how the optimizer will work to produce the result, and tkprof shows the real thing that happened; so what about autotrace? In autotrace there are two ...
Categories: DBA Blogs

About BLOB and BFILE

Tom Kyte - Sat, 2017-10-21 08:46
Hi Tom. We are designing an office automation application and we want to store images of letters and documents. Our organization is very large (it has 100 departments) and we have to use replication. Now we want to know: is BLOB a proper type for sto...
Categories: DBA Blogs

Getting RMAN-05502 when trying to duplicate from backup

Tom Kyte - Sat, 2017-10-21 08:46
I have very little experience with Oracle. We are a SQL Server shop with one legacy Oracle 11.2 database that serves an obscure but critical function. As part of our testing routine, this database needs to be copied from our production instance into ...
Categories: DBA Blogs

nVision Performance Tuning: 2. Effects of Performance Options

David Kurtz - Sat, 2017-10-21 04:16
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

In this post, I examine the effect of different settings for the tree performance options on how the SQL is generated.  It is common to see different performance options in use on different trees in the same SQL query.  It is important to be able to look at a piece of SQL generated by nVision and work out which performance options have been set on which trees.
  • Access Method: join to tree selector -v- use literal values
  • Tree Selectors: static -v- dynamic
  • Selector Options: single values -v- inequalities/between
Access Method
This option allows you to control how the tree selector table is combined with the ledger table.  The choices of interest are whether to join the tree selector, or whether to use literal values where feasible.
Join

Here, two tree selectors are joined to the ledger table:
SELECT L4.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT)
FROM PS_LEDGER A, PSTREESELECT10 L4, … PSTREESELECT10 L
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2017

AND L.SELECTOR_NUM=73130
AND A.CHARTFIELD1=L.RANGE_FROM_10
AND A.CURRENCY_CD='GBP'
GROUP BY L4.TREE_NODE_NUM
It is not feasible to use literal values for L4 because the data is grouped by L4.TREE_NODE_NUM; however, it is feasible to use literal values for L because no column from that table is included in either the SELECT or GROUP BY clauses.
Literal Values

When 'use literal values' is selected, nVision replaces the join to the tree selector table with a set of literal values generated from the contents of the tree selector table.
In this example, the join to the CHARTFIELD1 tree selector has been replaced with a series of literal value predicates.  There is one for every selected tree leaf.  A simple equality condition for single value leaves, and a BETWEEN condition for ranged leaves.
SELECT L2.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT)
FROM PS_LEDGER A, PSTREESELECT10 L2
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD=1

AND L2.SELECTOR_NUM=101142
AND A.ACCOUNT=L2.RANGE_FROM_10
AND (A.CHARTFIELD1='0070700'
OR A.CHARTFIELD1='0078999'

OR A.CHARTFIELD1 BETWEEN '0091100' AND '0091101'
OR A.CHARTFIELD1='0150204')
AND A.CURRENCY_CD='GBP'
GROUP BY L2.TREE_NODE_NUM
Recommendations

In general, the literal values option is very beneficial.
  • It simplifies the queries generated by nVision by removing a table from the from clause and leaves the optimizer with fewer choices when determining the execution plan.
  • Data is filtered on the scan of the ledger table rather than when it is joined to the tree selector.  On an engineered system these predicates can be pushed down to the storage cells.
However, there are some disadvantages
  • As the number of leaves on a tree increase, so the number of literal predicates in the query increases, and the time taken to parse the SQL increases.  As a rough guideline, I would say that the parse time starts to become significant as the tree exceeds 2000 leaves, and you might be better using the Join Option.
  • Also, as the number of literal predicates increase the time taken to evaluate them increases.  This applies to both conventional and engineered systems.
  • Some PeopleSoft customers have found it generally beneficial to the system to raise the value of OPTIMIZER_DYNAMIC_SAMPLING from the default of 2 to 4 so that Oracle collects dynamic statistics if there are expressions in where clause predicates or multiple predicates on the same table (see Database SQL Tuning Guide -> Managing Optimizer Statistics).  However, this can significantly increase the parse overhead of nVision SQL using many predicates.  If necessary, this parameter can be reset at session level for nVision with a trigger (see the sketch after this list).
  • The nVision client also has to do more work to generate the SQL.
  • Literal values increase the variability of SQL generated, reports using different tree nodes will generate SQL with different numbers of predicates and there is effectively no chance that any two statements would be similar enough to use any of Oracle's plan stability technologies (SQL baselines, SQL profiles, SQL plan directives).
  • In theory, cursor sharing could reduce the parse, but in my experience is that the optimizer does not correctly cost the predicates resulting in different execution plans with poorer performance.
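A minimal sketch of such a trigger, assuming nVision sessions can be identified by a program name like 'PSNVS%' (adjust the predicate to however you identify nVision in your environment, and note the trigger owner needs SELECT privilege on V$SESSION); it resets the parameter back to its default for those sessions only:

create or replace trigger nvision_dynamic_sampling
after logon on database
declare
  l_program v$session.program%type;
begin
  select program into l_program
  from v$session
  where sid = sys_context('userenv','sid');
  if upper(l_program) like 'PSNVS%' then
    execute immediate 'alter session set optimizer_dynamic_sampling = 2';
  end if;
exception
  when no_data_found then null; -- defensive: leave other sessions untouched
end;
/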
Tree Selectors

This option allows you to choose whether a tree selector is managed in a static or dynamic manner.  As you will see, the Tree Selectors performance option is tied in with the Selector Options performance option.
Static

Where a tree uses a static selector, the entire tree is extracted into the tree selector.  There will be no tree node number criteria in the extract statement.
INSERT INTO PSTREESELECT10
(SELECTOR_NUM,TREE_NODE_NUM,RANGE_FROM_10,RANGE_TO_10)
SELECT 78722,L.TREE_NODE_NUM, SUBSTR(L.RANGE_FROM,1,10),SUBSTR(L.RANGE_TO,1,10)
FROM PSTREELEAF L
WHERE L.SETID='GLOBE'
AND L.SETCNTRLVALUE=' '
AND L.TREE_NAME='FUNCTION'
AND L.EFFDT=TO_DATE('1901-01-01','YYYY-MM-DD')
Note that the tree leaf was not joined to the underlying detail table.  Ranged leaves become ranged selectors.  This is because there is no mechanism to determine when the detail table has changed and thus when the static selector would be out of date and need to be refreshed.  This is why single value joins cannot be performed with static selectors.
Tree node number criteria will appear in the nVision queries to identify the required selectors.
SELECT L4.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_LEDGER A, PSTREESELECT10 L4, PSTREESELECT10 L2, PSTREESELECT10 L
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD=1
AND L4.SELECTOR_NUM=89052
AND A.CHARTFIELD3>= L4.RANGE_FROM_10
AND A.CHARTFIELD3 <= L4.RANGE_TO_10
AND L4.TREE_NODE_NUM BETWEEN 1000000000 AND 2000000000
A control row is maintained on the table PSTREESELCTL.  This row indicates that a particular selector is static, and the version number is compared to the version number of PSTREEDEFN to determine whether the selector is valid, or whether the tree has changed.  If the tree has changed it will have a higher version number, and the selector will be extracted again.  The selector number from the query can be looked up on this table to identify the tree.
Dynamic

Dynamic selectors are built on-the-fly by nVision as the report runs.  They select just the rows from the tree that are required by the report.
INSERT INTO PSTREESELECT10
(SELECTOR_NUM,TREE_NODE_NUM,RANGE_FROM_10,RANGE_TO_10)
SELECT DISTINCT 108090,L.TREE_NODE_NUM,D.ACCOUNT,D.ACCOUNT FROM
PS_GL_ACCOUNT_TBL D, PSTREELEAF L
WHERE L.SETID='GLOBE' AND L.SETCNTRLVALUE=' '
AND L.TREE_NAME='ACCOUNT'
AND L.EFFDT=TO_DATE('1901-01-01','YYYY-MM-DD')
AND L.RANGE_FROM<>L.RANGE_TO
AND D.ACCOUNT BETWEEN L.RANGE_FROM AND L.RANGE_TO
AND D.SETID='GLOBE' AND
(L.TREE_NODE_NUM BETWEEN 789473386 AND 793372019
OR L.TREE_NODE_NUM BETWEEN 810915873 AND 812865189

OR L.TREE_NODE_NUM BETWEEN 1089668204 AND 1095516154
OR L.TREE_NODE_NUM BETWEEN 1113060008 AND 1115009324)
All the tree node number predicates are found in the query that populates the selector table and not in the query that generates the report.
SELECT L4.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_LEDGER A, PSTREESELECT10 L4, PSTREESELECT10 L2, PSTREESELECT10 L
WHERE A.LEDGER='S_USMGT'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD=1

AND L.SELECTOR_NUM=96774
AND A.CHARTFIELD1=L.RANGE_FROM_10 AND
A.CURRENCY_CD='USD' GROUP BY L4.TREE_NODE_NUM
nVision will delete dynamic selectors after they have been used.  However, if the report crashes or times out, rows will be left in the selector table, and this debris can build up over time.  So there is a requirement to manage these tables.
Note that in this example when the selector was extracted, the tree leaf table was joined to the underlying table on which the tree was based (in this case GL_ACCOUNT_TBL).  This occurs when the single value join option is selected.  The tree selector contains a row for each value rather than for each tree leaf.  This will result in larger tree selectors where ranged leaves are used.
PeopleSoft do not provide any way to identify the tree used to create a dynamic selector.  From the SQL query, all we can see is the field to which it was joined.  The above example is a tree related to CHARTFIELD1.
Recommendations

Although static selectors are easier to manage, the decision to use dynamic selectors is driven by the need to use single value joins.
  • It is important to maintain up-to-date statistics on the selector tables (see the sketch after this list).  As new selectors are extracted, a series of ever-increasing selector numbers are allocated from a sequence maintained on the table PSTREESELNUM.  If not maintained, the selector numbers will exceed the high value recorded in the column statistics for column SELECTOR_NUM, and the database will not correctly calculate the number of rows expected, resulting in inappropriate execution plans.
  • It is easier to manage optimizer statistics on static selectors because the PSTREESELCTL table records when they have been updated.  In fact, it is possible to put a trigger on that table to maintain statistics.  However, static selectors imply range predicates that bring other problems that I discuss in the next section.
  • Using compound triggers on the selector tables it is possible to maintain a log of selectors, maintain statistics on them and purge selectors left behind by nVision reports that have not cleaned up after themselves.  This is also used to identify the trees used in dynamic selectors.  I will describe this solution later.
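As a sketch of the statistics maintenance mentioned in the first point above (the SYSADM schema is typical of a PeopleSoft installation; adjust to your environment):

begin
  dbms_stats.gather_table_stats(
    ownname    => 'SYSADM',
    tabname    => 'PSTREESELECT10',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO');
end;
/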
Recommendation: Prior to Oracle 11g, I recommended using static selectors and ranged joins; with careful management of statistics on the tree selectors and indexing of the ledger tables, good results can be obtained.  The tree selectors are Cartesian-joined together before using index lookups on the ledger tables.
However, this approach is generally not effective for queries that reference more than 3 tree selectors, nor for very large trees.  The size of the Cartesian product is the product of the number of rows extracted from each tree selector table.  For example, three selectors with 100 rows each will result in 1 million lookups on the ledger table.  As the size of the Cartesian product grows, the number of index look-ups also grows.  There comes a point when it is better to remove the least selective tree from the Cartesian product and hash join it after looking up the ledger.  Striking the correct balance can be difficult, and leads to using different performance options in different reports.
From Oracle 11g, and especially on engineered systems, I now prefer dynamic selectors because they permit the use of single value joins that can use Bloom filters and Extended Statistics.  However, where smaller trees are in use that do not significantly cut down the query result set, range joins may still be preferable, in which case so might static selectors.
Selector Options

This performance option controls how the tree selector is joined to the ledger table.  The choice is between single value equality joins or inequality joins.
Ranged Joins

There are two forms of ranged joins, either using the BETWEEN keyword or using a pair of inequalities.  There is no difference to Oracle because the optimizer rewrites BETWEEN as a pair of inequalities.
NB: If you specify single value joins on a static selector you still get an inequality ranged join because only the tree leaves are extracted to the selector.  nVision still does this even if all of the leaves in the tree are single values and not ranges.
SELECT L4.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_LEDGER A, PSTREESELECT10 L4, PSTREESELECT10 L2, PSTREESELECT10 L
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD=1
AND L4.SELECTOR_NUM=89052
AND A.CHARTFIELD3>= L4.RANGE_FROM_10
AND A.CHARTFIELD3 <= L4.RANGE_TO_10

AND A.CURRENCY_CD='GBP'
GROUP BY L4.TREE_NODE_NUM
Single Value Joins

If single value joins are selected, nVision generates an equality join between the tree selector and the ledger table.  The tree selector must be dynamic, and nVision extracts all the intermediate values between the tree node ranges on the underlying detail table.
SELECT L4.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_LEDGER A, PSTREESELECT10 L4, PSTREESELECT10 L2, PSTREESELECT10 L
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD=1

AND L.SELECTOR_NUM=96774
AND A.CHARTFIELD1=L.RANGE_FROM_10
AND A.CURRENCY_CD='GBP'
GROUP BY L4.TREE_NODE_NUM
Recommendations

In general, I would recommend using single value joins and therefore also dynamic selectors.  The main advantage is that they enable two particular optimisations in Oracle.
  • Bloom filters only work with equality predicates, and therefore only with single value joins.  They enable a filter on one column of one table to be roughly mapped to another column of another table upon which there is an equality join.  This filters data earlier in a query, cutting down the size of the eventual join operation.  It is a rough match in that it may return some false positives to the subsequent hash operation, but it will not omit any results.  The hash join then does an exact match of the rows to produce the correct result set.  On an Engineered Oracle system this filter can also be pushed down to the storage cells. 
  • This is an example of the kind of execution plan that we should see when a Bloom filter is used.  A filter has been created on PSTREESELECT10 at line 3, and it is applied to PS_LEDGER at line 6.
-------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 12876 (100)| | | |
| 1 | HASH GROUP BY | | 501 | 35571 | 12876 (42)| 00:00:01 | | |
| 2 | HASH JOIN | | 975K| 66M| 12822 (41)| 00:00:01 | | |
| 3 | JOIN FILTER CREATE | :BF0000 | 2577 | 59271 | 7 (0)| 00:00:01 | | |
| 4 | PARTITION RANGE SINGLE | | 2577 | 59271 | 7 (0)| 00:00:01 | 87968 | 87968 |
| 5 | INDEX FAST FULL SCAN | PS_PSTREESELECT10 | 2577 | 59271 | 7 (0)| 00:00:01 | 87968 | 87968 |
| 6 | JOIN FILTER USE | :BF0000 | 1715K| 78M| 12804 (41)| 00:00:01 | | |
| 7 | PARTITION RANGE ITERATOR | | 1715K| 78M| 12804 (41)| 00:00:01 | 27 | 34 |
| 8 | MAT_VIEW REWRITE ACCESS STORAGE FULL| PS_LEDGER | 1715K| 78M| 12804 (41)| 00:00:01 | 27 | 34 |
-------------------------------------------------------------------------------------------------------------------------------
  • Extended statistics (i.e. on groups of columns) also only work with equality predicates.  Single value join queries can therefore benefit from them, as in the sketch below.
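For example, a column group can be created on the ledger table for columns that are frequently filtered together (the column pair here is illustrative only, as is the SYSADM schema):

select dbms_stats.create_extended_stats(
         ownname   => 'SYSADM',
         tabname   => 'PS_LEDGER',
         extension => '(ACCOUNT,CHARTFIELD1)')
from dual;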
There are drawbacks to be overcome.
  • Single value joins require the use of dynamic selectors.  That requires management of statistics on the dynamic selectors, and cleanup of the debris left in the tree selector tables by failed reports that have not cleaned up after themselves.
  • Single value joins can result in larger tree selector tables with many more rows than tree leaves.  For large trees or trees with many values in leaf ranges, resulting in more than about 2000 rows on the tree selector tables, it can be advantageous to change the Access Method performance option from using literal values to join because the parse overhead becomes significant.

Submitted two talks to Collaborate 18

Bobby Durrett's DBA Blog - Fri, 2017-10-20 19:52

I submitted my two talks to Collaborate 18 through IOUG. These are the same two that I submitted to RMOUG Training Days. Hopefully one of the talks will get accepted at one of the two conferences, but I don’t know. The nice thing is that they are both professional development talks, and a user group meeting might be more open to that sort of talk than a vendor (i.e. Oracle OpenWorld) conference. But there is a lot of competition out there, so we will see. Hopefully I will get to put some of my new Toastmaster skills to work at a conference early next year. 🙂

Bobby

Categories: DBA Blogs

ORA-04091: table is mutating, trigger/function may not see it

Tom Kyte - Fri, 2017-10-20 14:26
Hi, I am getting "ORA-04091: table is mutating, trigger/function may not see it" in my trigger. Please find the test case details as follows: create table test(bug_number number, subject varchar2(50),ANALYZED_BY varchar2(50)); insert...
Categories: DBA Blogs

Neat and optimized code

Tom Kyte - Fri, 2017-10-20 14:26
I need the logic to derive the direction and line hour. Direction is Inbound if either one of org and dest, or both, falls under the jrny_in list and the rt list. Direction is Outbound if either one of org and dest, or both, falls under the jrny_out ...
Categories: DBA Blogs
