Feed aggregator

IM_DOMAIN$

Jonathan Lewis - 5 hours 19 min ago

A few months ago Franck Pachot wrote about a recursive SQL statement that kept appearing in the library cache. I discovered the note today because I had just found a client site where the following statement suddenly appeared near the top of the “SQL ordered by Executions” section of their AWR reports after they had upgraded to 18c.


select domain# from sys.im_domain$ where objn = :1 and col# = :2

I found Franck’s article by the simple expedient of typing the entire query into a Google search – his note was the first hit on the list, and he had a convenient example (based on the SCOTT schema) to demonstrate the effect, so I built the tables from the schema and ran a simple test with extended SQL tracing (event 10046) enabled.

Here’s an extract (with no deletions) from the resulting trace file:

PARSING IN CURSOR #139819795591784 len=110 dep=0 uid=104 oct=3 lid=104 tim=31306461773 hv=3572295767 ad='6bf8b8a0' sqlid='8n2bcc3aftu2r'
select /*+ leading(EMP DEPT) USE_HASH(DEPT) USE_HASH(BONUS) */ * from DEPT natural join EMP natural join BONUS
END OF STMT
PARSE #139819795591784:c=59,e=59,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=4262704031,tim=31306461772

PARSING IN CURSOR #139819795585328 len=64 dep=1 uid=0 oct=3 lid=0 tim=31306461966 hv=1240924087 ad='69a8b760' sqlid='0b639nx4zdzxr'
select domain# from sys.im_domain$ where objn = :1 and col# = :2
END OF STMT
PARSE #139819795585328:c=37,e=37,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=2321277860,tim=31306461965
EXEC #139819795585328:c=32,e=31,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=2321277860,tim=31306462058
FETCH #139819795585328:c=17,e=17,p=0,cr=1,cu=0,mis=0,r=0,dep=1,og=4,plh=2321277860,tim=31306462098
STAT #139819795585328 id=1 cnt=0 pid=0 pos=1 obj=10422 op='TABLE ACCESS BY INDEX ROWID IM_DOMAIN$ (cr=1 pr=0 pw=0 str=1 time=21 us cost=0 size=39 card=1)'
STAT #139819795585328 id=2 cnt=0 pid=1 pos=1 obj=10423 op='INDEX UNIQUE SCAN IM_DOMAIN_UK (cr=1 pr=0 pw=0 str=1 time=18 us cost=0 size=0 card=1)'
CLOSE #139819795585328:c=5,e=5,dep=1,type=1,tim=31306462287

EXEC #139819795591784:c=484,e=484,p=0,cr=1,cu=0,mis=0,r=0,dep=0,og=1,plh=4262704031,tim=31306462316
FETCH #139819795591784:c=0,e=804,p=0,cr=44,cu=0,mis=0,r=0,dep=0,og=1,plh=4262704031,tim=31306463191
STAT #139819795591784 id=1 cnt=0 pid=0 pos=1 obj=0 op='HASH JOIN  (cr=45 pr=0 pw=0 str=1 time=1222 us cost=72 size=97 card=1)'
STAT #139819795591784 id=2 cnt=4 pid=1 pos=1 obj=0 op='HASH JOIN  (cr=45 pr=0 pw=0 str=1 time=1001 us cost=70 size=232 card=4)'
STAT #139819795591784 id=3 cnt=4 pid=2 pos=1 obj=117764 op='TABLE ACCESS FULL EMP (cr=22 pr=0 pw=0 str=1 time=259 us cost=35 size=152 card=4)'
STAT #139819795591784 id=4 cnt=4 pid=2 pos=2 obj=117765 op='TABLE ACCESS FULL DEPT (cr=22 pr=0 pw=0 str=1 time=81 us cost=35 size=80 card=4)'
STAT #139819795591784 id=5 cnt=0 pid=1 pos=2 obj=117766 op='TABLE ACCESS FULL BONUS (cr=0 pr=0 pw=0 str=1 time=4 us cost=2 size=39 card=1)'
CLOSE #139819795591784:c=24,e=24,dep=0,type=1,tim=31306508552

PARSE #139819795591784:c=41,e=42,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=4262704031,tim=31306508798
PARSE #139819795585328:c=21,e=22,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=2321277860,tim=31306509010
EXEC #139819795585328:c=132,e=132,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=2321277860,tim=31306509220
FETCH #139819795585328:c=20,e=19,p=0,cr=1,cu=0,mis=0,r=0,dep=1,og=4,plh=2321277860,tim=31306509415
CLOSE #139819795585328:c=8,e=8,dep=1,type=3,tim=31306509494
EXEC #139819795591784:c=682,e=704,p=0,cr=1,cu=0,mis=0,r=0,dep=0,og=1,plh=4262704031,tim=31306509558
FETCH #139819795591784:c=588,e=1246,p=0,cr=44,cu=0,mis=0,r=0,dep=0,og=1,plh=4262704031,tim=31306511014
CLOSE #139819795591784:c=23,e=22,dep=0,type=3,ti

As you can see, every time I do a parse call for the query against the SCOTT schema (PARSE #139819795591784), Oracle does a parse/exec/fetch/close for the query against im_domain$ (PARSE #139819795585328) – and this happens even when the SCOTT query is in the session cursor cache!

As Franck points out, this looks like something to do with the In Memory option even though the option wasn’t enabled in his database, and wasn’t enabled in my client’s database. Once you’ve got a reproducible example of a problem, though, you can start fiddling to see if you can bypass it. In this case I decided to check all the parameters to do with the in-memory option – which is a bit like hard work because there are 208 parameters that include the word “inmemory”. After checking the descriptions of the first twenty or so I decided there was an easier option – if Oracle is asking about “domains” for columns, possibly it’s something to do with the relatively new “join group” feature for in-memory columnar compression, so I ran a query to produce the list of parameter names and descriptions for parameters with the words “join” and “group” in their names – there are two:
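For reference, here's a sketch of the kind of query involved; note that x$ksppi is a SYS-only fixed table, so this has to run as SYS, and ksppinm/ksppdesc are the parameter name and description columns:

select
        ksppinm         parameter_name,
        ksppdesc        description
from
        x$ksppi
where
        ksppinm like '%join%group%'
order by
        ksppinm
;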


_sqlexec_join_group_aware_hj_enabled              enable/disable join group aware hash join
_sqlexec_join_group_aware_hj_unencoded_rowset     minimum number of unencoded rowsets processed before adaptation 

The first one looks rather promising – its default value is TRUE, and it can be changed by “alter session” or “alter system”. So I executed:


alter session set "_sqlexec_join_group_aware_hj_enabled" = false;
alter system flush shared_pool;

Then I ran my test again and voila! there it wasn’t. No more queries against im_domain$.

Problem solved (until the client decides they want to use the in-memory option, perhaps).

There may be other reasons why this recursive query appears which aren’t to do with hash joins, of course, but join groups are specifically there to allow efficient hash joins with the in-memory option (it’s a trick to do with common encoding for compression to allow Bloom filtering to eliminate CUs without decoding), so I’m hoping I won’t have to track down and eliminate another source of the query.

 

[Video] How To Create Database on Oracle’s Gen2 Cloud (OCI)

Online Apps DBA - 6 hours 57 min ago

[VIDEO] How to Create Database on Oracle’s Gen2 Cloud (OCI) Watch Here: https://k21academy.com/clouddba41 Three basic things you need to know: ✔ Database Types on Oracle Gen2 Cloud ✔ Provisioning your First User-Managed Database ✔ How to Connect Deployed Database Machine On Cloud Learn above 3 points in this latest Video, where Oracle ACE & Cloud Expert, Atul Kumar […]

The post [Video] How To Create Database on Oracle’s Gen2 Cloud (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

How To Push An Image Into Amazon ECR With Docker

Yann Neuhaus - 10 hours 25 min ago
8 Steps To Push An Image Into Amazon ECR With Docker

Please bear in mind that Amazon Elastic Container Registry (ECR) is a managed AWS Docker registry service. In this topic, we will use the Docker CLI to push a CentOS image into Amazon ECR.

1. Install Docker desktop for Windows and AWS CLI

Verify that each has been installed properly by checking its version (see below):

  • docker --version
  • aws --version
2. Authentication to AWS

Open Power Shell interface with administration privileges and enter the following commands:

  • aws configure
  • Access key: ****
  • Secret key: ****

The region name and output format information are not mandatory.
The data above can be found in the IAM service of the AWS Management Console.

3. Log in to AWS Elastic Container Registry

Use the get-login command to log in to AWS Elastic Container Registry and save the output to a text file (see below):

  • aws ecr get-login --region eu-west-3 > text.txt
4. Authenticate Docker to AWS Elastic Container Registry

Replace aws_account_id with the account id found in the text file saved previously, and specify the password:

  • docker login -u AWS https://aws_account_id.dkr.ecr.eu-west-3.amazonaws.com
  • Password: *****
5. Download the CentOS image

Use the pull command to download the CentOS image:

  • docker pull centos:6.6
6. Create a repository
  • aws ecr create-repository --repository-name centos

The repository has been created successfully into Amazon Elastic Container Registry (see below):

[screenshot: the centos repository listed in Amazon ECR]

Before proceeding to the next step, make sure that the following requirements are met:

  • Docker version must be greater than or equal to 1.7
  • The repository is created and the user has sufficient privileges to access it
  • The Docker authentication is successful
7. List the images stored in Docker and tag them
  • docker images


  • docker tag centos:6.6 aws_account_id.dkr.ecr.eu-west-3.amazonaws.com/centos:6.6 (replace aws_account_id with your account id)

Verify that the image has been tagged:

  • docker images


8. Push the CentOS image into Amazon ECR

Use the push command to push the centos image into Amazon Elastic Container Registry:

  • docker push aws_account_id.dkr.ecr.eu-west-3.amazonaws.com/centos:6.6 (replace aws_account_id with your account id)

From the Amazon Management Console, verify that the image has been pushed properly into Amazon Elastic Container Registry (see below):

[screenshot: the centos image in the Amazon ECR console]

If you are in a test environment, to avoid extra costs, make sure to delete the image and the repository from Amazon Elastic Container Registry.

Use the following command to delete the image:

  • aws ecr batch-delete-image --repository-name centos --image-ids imageTag=6.6

Use the following command to delete the repository:

  • aws ecr delete-repository --repository-name centos
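Putting steps 3 to 8 together, here is a minimal shell sketch of the whole workflow. It assumes the AWS CLI v1 get-login syntax used above; the account id and region are placeholders to replace with your own values:

#!/bin/bash
set -e

# Placeholders - replace with your own values
ACCOUNT_ID=123456789012
REGION=eu-west-3
REGISTRY=${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com

# Authenticate Docker to Amazon ECR (AWS CLI v1 get-login syntax)
$(aws ecr get-login --no-include-email --region ${REGION})

# Pull the image, create the repository (fails if it already exists),
# then tag and push
docker pull centos:6.6
aws ecr create-repository --repository-name centos --region ${REGION}
docker tag centos:6.6 ${REGISTRY}/centos:6.6
docker push ${REGISTRY}/centos:6.6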

If you need further details about Docker basics for Amazon ECR, click here.

The article How To Push An Image Into Amazon ECR With Docker first appeared on Blog dbi services.

Oracle Database 19c Automatic Indexing: Predicted Back In 2008 (A Better Future)

Richard Foote - Mon, 2019-03-18 19:24
I’ve recently received a number of correspondences regarding one of my most popular blog posts, dating back to February 2008: Index Rebuild vs. Coalesce vs. Shrink Space (Pigs – 3 Different Ones). In the comments section, there’s an interesting discussion where I mention: “If Oracle19 does everything for you and all the various indexes structures get […]
Categories: DBA Blogs

How to count pairs in a consecutive number of rows

Tom Kyte - Mon, 2019-03-18 18:06
I have the following example: COLUMN 19 20 26 28 29 32 33 34 I need to count the rows based on pairs, 19-20, 28-29, 32-33. I'm having difficulty to check if a pair is already counted or not, any sugestions ? The result should be s...
Categories: DBA Blogs

Error generating DUMP: ORA-39006: internal error

Tom Kyte - Mon, 2019-03-18 18:06
Hi, I have a problem creating a dump with SQL Developer, the PL/SQL generated is: <code> set scan off set serveroutput on set escape off whenever sqlerror exit DECLARE h1 number; s varchar2(1000):=NULL; errorvarchar varchar2(1...
Categories: DBA Blogs

ORA-06533: Subscript Beyond Count error

Tom Kyte - Mon, 2019-03-18 18:06
Hi I have the following PLSQL code - if run 1st time - it works fine - running 2nd or 3rd time it fails with "Subscript beyond count" error If I make the declaration of g_response private to the procedure (not globally in the package) - it works...
Categories: DBA Blogs

Partner Webcast – Oracle Cloud Business Analytics Data Visualizations

Providing fast and flexible analysis of any data from any source is a business requirement these days. Oracle Analytics Cloud is a cloud-first analytics platform, built on the industry-leading Oracle...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Insights into IT Governance and Security

Chris Warticki - Mon, 2019-03-18 11:52
CXOTALK with Oracle’s Brennan Baybeck

 

Businesses continue to expand their digital footprint while cyber risks are rapidly increasing in quantity and complexity, which is why it’s more important than ever to embed security solutions and IT governance processes into the fabric of your business—to protect your digital assets, customers, and your bottom line.

As an IT professional, you should make it a priority to understand how these crucial relationships, combined with ongoing support from a trusted source, can help you develop a solid IT governance program to protect your business now and in the future.

Good Governance Program

“Having a good governance program gives you many benefits,” says Oracle’s Vice President of Global IT Risk Management Brennan Baybeck. “The first one is protecting the business and the critical assets of that business. One of the main critical assets is the data. Good governance also helps drive compliance with contractual requirements with customers [and] regulatory compliance with regulators.”

Brennan continues, “additionally, it also helps with ensuring that the various components of your security program are covered, whether it's security operations, change management, configuration management, patching, or threat intelligence.”

Hear from Brennan Baybeck, Oracle VP Global, IT Risk Management

In this short CXOTALK, Baybeck and industry analyst and CXOTALK host Michael Krigsman further explore why having a strong security governance program is absolutely essential for all businesses today, including:

  • What is excellent security
  • Top three benefits of a strong security governance program
  • Security measures you need to think about for the future
  • Best practices for managing threats and vulnerabilities
  • The growing role of governance and risk management in DevOps

 

Watch the CXOTALK: Insights into IT Governance and Security to learn more.

 

 

Brennan Baybeck, Oracle VP, Global IT Risk Management


Resources:
  • See the CXOTALK #1: Why Every Business Needs Trusted Support.
  • Learn more about Oracle Premier Support.
  • Protect Your Business From Cybercrime

Data Pump Exit Codes

Learn oracle 12c database management - Mon, 2019-03-18 11:48


oracle@Linux01:[/u01/oracle/DPUMP] $ exp atoorpu file=abcd.dmp log=test.log tables=sys.aud$
About to export specified tables via Conventional Path ...
. . exporting table                           AUD$     494321 rows exported
Export terminated successfully without warnings.

oracle@qpdbuat211:[/d01/oracle/DPUMP] $ echo $?
0


oracle@Linux01:[/u01/oracle/DPUMP] $ imp atoorpu file=abcd.dmp logifle=test.log
LRM-00101: unknown parameter name 'logifle'

IMP-00022: failed to process parameters, type 'IMP HELP=Y' for help
IMP-00000: Import terminated unsuccessfully

oracle@Linux01:[/u01/oracle/DPUMP] $ echo $?
1
This can be used in export shell scripts for status verification:

status=$?
if test $status -eq 0
then
    echo "export was successful."
else
    echo "export was not successful."
fi
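A minimal wrapper sketch putting this together (the file names are the ones from the example above; the final exit propagates the failure code to the caller):

#!/bin/bash
# Run the export and capture its exit code immediately
exp atoorpu file=abcd.dmp log=test.log tables=sys.aud$
status=$?

if test $status -eq 0
then
    echo "export was successful."
else
    echo "export was not successful (exit code: $status)."
    exit $status
fi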
Also check the page below for reference:
Categories: DBA Blogs

Automate recyclebin purge in oracle

Learn oracle 12c database management - Mon, 2019-03-18 11:46


Set up this simple scheduler job as sysdba to purge the objects in the recycle bin.
This is one of the most space-consuming locations that DBAs often forget to clean up; the dropped objects pile up and occupy a lot of space. Based on how long you want to keep these dropped objects, set up a job under the scheduler to run the PL/SQL block below either daily, weekly or monthly (a sample scheduler job is sketched after the blocks).

I suggest running it weekly.


--For user_recyclebin purge--
-- plsql --

declare
VSQL varchar2(500);
begin
VSQL:='purge user_recyclebin';
execute immediate VSQL;
dbms_output.put_line('USER RECYCLEBIN has been purged.');
end;

/




--For dba_recyclebin purge--
-- plsql --

declare
VSQL varchar2(500);
begin
VSQL:='purge dba_recyclebin';
execute immediate VSQL;
dbms_output.put_line('DBA RECYCLEBIN has been purged.');
end;

/
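To actually schedule the weekly run, a minimal DBMS_SCHEDULER sketch could look like this (the job name and the Sunday 3am schedule are illustrative choices):

begin
  dbms_scheduler.create_job(
    job_name        => 'PURGE_DBA_RECYCLEBIN_JOB',   -- illustrative name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin execute immediate ''purge dba_recyclebin''; end;',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=3',
    enabled         => true,
    comments        => 'Weekly purge of the DBA recyclebin');
end;
/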





Prerequisites
The database object must reside in your own schema or you must have the DROP ANY ... system privilege for the type of object to be purged, or you must have the SYSDBA system privilege. To perform the PURGE DBA_RECYCLEBIN operation, you must have the SYSDBA or PURGE DBA_RECYCLEBIN system privilege.
Categories: DBA Blogs

Oracle Again Cited as a Leader in Data Management Solutions for Analytics

Oracle Press Releases - Mon, 2019-03-18 07:00
Press Release
Oracle Again Cited as a Leader in Data Management Solutions for Analytics
Oracle positioned highest for ability to execute and furthest for completeness of vision in latest Gartner Magic Quadrant for Data Management Solutions for Analytics

Redwood Shores, Calif.—Mar 18, 2019

Oracle today announced it was positioned highest for ability to execute and furthest for completeness of vision in Gartner’s 2019 “Magic Quadrant for Data Management Solutions for Analytics” report1.

Oracle’s leadership in data management emanates from its deep portfolio of database management solutions, including the self-driving Oracle Autonomous Database. Oracle believes these innovations enabled the company to be positioned 13 consecutive times as a Leader in this report.

“Oracle is proud to be positioned highest for ability to execute and furthest for completeness of vision in Gartner's 2019 Magic Quadrant for Data Management Solutions for Analytics," said Andrew Mendelsohn, Executive Vice President, Oracle Database Server Technologies. "Oracle's self-driving Autonomous Data Warehouse combines the power of a Data Warehouse with the flexibility of Big Data to drive all analytic data management use cases."

Oracle Autonomous Database consists of Oracle Autonomous Transaction Processing, optimized for running transactions and mixed workloads, and Oracle Autonomous Data Warehouse for running analytic database workloads. Both options of autonomous database offer self-driving, self-securing, and self-repairing capabilities that can automatically discover threats and remediate them while the database is running. Oracle Autonomous Database can help customers dramatically improve data security, be more efficient in the face of budget constraints, and quickly drive innovation that creates a competitive advantage.

Download a complimentary copy of Gartner’s 2019 Magic Quadrant Data Management Solution for Analytics here.

[1] Source: Gartner, Magic Quadrant Data Management Solution for Analytics, Adam Ronthal, Roxane Edjlali, Rick Greenwald, 21 January 2019.

Contact Info
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
Gartner Disclaimer

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly-Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

[BLOG] The Heartbeat Table of Oracle GoldenGate (12.2)

Online Apps DBA - Mon, 2019-03-18 05:25

Are you learning GoldenGate but are unaware of the built-in Heartbeat Table feature that has been added in Oracle GoldenGate 12.2? If yes, then visit: https://k21academy.com/goldengate34 and learn all about: ✔Features of Heartbeat Table ✔How you can add it on Target and Source ✔Managing and Viewing Heartbeat Data & much more… Are you learning GoldenGate […]

The post [BLOG] The Heartbeat Table of Oracle GoldenGate (12.2) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

When you change the UNDO_RETENTION parameter, the LOB segment’s retention value is not modified

Yann Neuhaus - Mon, 2019-03-18 03:30

Below, I will try to explain a particular case of the general error ORA-01555: snapshot too old.

Normally, when we get this error, we try to adapt the retention parameters or to tune our queries.

SQL> show parameter undo;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled                    boolean     FALSE
undo_management                      string      AUTO
undo_retention                       integer     3600 --extended from 900,
undo_tablespace                      string      UNDOTBS1

But there are some scenarios where the above rule does not work.

From the alert log file of the DB, we got the sql_id that caused the issue: pmrbk5fdfd665

But when you search for it in V$SQL/V$SQLAREA, it is not there:

SQL> select sql_fulltext from v$sql where sql_id like '%pmrbk5fdfd665%';

no rows selected

Why?

It seems that the sql_id is present in V$OPEN_CURSOR, with an entry in the SQL_TEXT column.
The issue comes from the fact that the statement is accessing a LOB column, which causes Oracle to generate a new sql_id.
The execution part related to the LOBs will not appear in V$SQL/V$SQLAREA and is not captured in AWR reports.

SQL>  select distinct * from v$open_cursor
  2     where rownum < 25
  3     and sql_id like '%pmrbk5fdfd665%';

SADDR                   SID USER_NAME                      ADDRESS          HASH_VALUE SQL_ID        SQL_TEXT                                                     LAST_SQL SQL_EXEC_ID CURSOR_TYPE
---------------- ---------- ------------------------------ ---------------- ---------- ------------- ------------------------------------------------------------ -------- ----------- ---------------
0000000670A19780         74 my_user                   00000002EB91F1F0 3831220380 pmrbk5fdfd665 table_104_11_XYZT_0_0_0
00000006747F0478        131 my_user                   00000002EB91F1F0 3831220380 pmrbk5fdfd665 table_104_11_XYZT_0_0_0

Apparently, the string in the SQL_TEXT column is a hex representation of the object_id that is being accessed.
In our case it is: XYZT

SQL>    select owner, object_name, object_type
  2    from dba_objects
  3    where object_id = (select to_number('&hex_value','XXXXXX') from dual);
Enter value for hex_value: XYZT
old   3:   where object_id = (select to_number('&hex_value','XXXXXX') from dual)
new   3:   where object_id = (select to_number('XYZT','XXXXXX') from dual)

                                                                                                                    
OWNER                  OBJECT_TYPE                                               OBJECT_NAME
---------------------- --------------------------------------------------------------------------
my_user                TABLE                                                     my_table


SQL> desc my_user.my_table;
 Name                  Type
 -------------------   ----------------
 EXPERIMENT_ID          VARCHAR2(20)
 DOCUMENT               BLOB
............….

If we look at the retention on the DOCUMENT column, we see:

SQL> select table_name, pctversion, retention,segment_name from dba_lobs where table_name in ('my_table');

TABLE_NAME                                                                               
                                                  PCTVERSION  RETENTION                  SEGMENT_NAME
---------------------------------------------------------------------------------------- ------------------------------------
my_table                                                       900                       SYS_LOB0000027039C00002$$

To fix it, run this command to adapt the retention of the BLOB column to the new value of the UNDO_RETENTION parameter:

ALTER TABLE my_table MODIFY LOB (DOCUMENT) (RETENTION);
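The RETENTION keyword takes no literal value; it makes the LOB segment pick up the current UNDO_RETENTION setting (3600 in our case). Afterwards, re-running the dba_lobs query from above should show the new value:

SQL> select table_name, pctversion, retention, segment_name from dba_lobs where table_name in ('my_table');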

The article When you change the UNDO_RETENTION parameter, the LOB segment’s retention value is not modified first appeared on Blog dbi services.

Microsoft Azure: How to use waagent (Microsoft Azure Linux Agent)

Dietrich Schroff - Sat, 2019-03-16 15:35
After installing waagent on my Ubuntu server, I tried to use this tool.
My first guess was to read the man pages, but there is no entry for waagent:
root@ubuntuserver:~# man waagent
No manual entry for waagent
See 'man 7 undocumented' for help when manual pages are not available.

So for documentation you have to visit the Microsoft Azure portal:
https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/agent-linux



Here are some commands I tried:
root@ubuntuserver:~# waagent -show-configuration
AutoUpdate.Enabled = True
AutoUpdate.GAFamily = Prod
Autoupdate.Frequency = 3600
CGroups.EnforceLimits = False
CGroups.Excluded = customscript,runcommand
DVD.MountPoint = /mnt/cdrom/secure
DetectScvmmEnv = False
EnableOverProvisioning = True
Extension.LogDir = /var/log/azure
Extensions.Enabled = True
HttpProxy.Host = None
HttpProxy.Port = None
Lib.Dir = /var/lib/waagent
Logs.Verbose = False
OS.AllowHTTP = False
OS.CheckRdmaDriver = False
OS.EnableFIPS = False
OS.EnableFirewall = True
OS.EnableRDMA = False
OS.HomeDir = /home
OS.OpensslPath = /usr/bin/openssl
OS.PasswordPath = /etc/shadow
OS.RootDeviceScsiTimeout = 300
OS.SshClientAliveInterval = 180
OS.SshDir = /etc/ssh
OS.SudoersDir = /etc/sudoers.d
OS.UpdateRdmaDriver = False
Pid.File = /var/run/waagent.pid
Provisioning.AllowResetSysUser = False
Provisioning.DecodeCustomData = False
Provisioning.DeleteRootPassword = True
Provisioning.Enabled = False
Provisioning.ExecuteCustomData = False
Provisioning.MonitorHostName = False
Provisioning.PasswordCryptId = 6
Provisioning.PasswordCryptSaltLength = 10
Provisioning.RegenerateSshHostKeyPair = False
Provisioning.SshHostKeyPairType = rsa
Provisioning.UseCloudInit = True
ResourceDisk.EnableSwap = False
ResourceDisk.Filesystem = ext4
ResourceDisk.Format = False
ResourceDisk.MountOptions = None
ResourceDisk.MountPoint = /mnt
ResourceDisk.SwapSizeMB = 0
or list all commands:
root@ubuntuserver:~# waagent -help
usage: /usr/sbin/waagent [-verbose] [-force] [-help] -configuration-path:-deprovision[+user]|-register-service|-version|-daemon|-start|-run-exthandlers|-show-configuration]


Monitoring Database in AWS Aurora After Migrating from Oracle to PostgreSQL

Pakistan's First Oracle Blog - Fri, 2019-03-15 19:08
Suppose you have an Oracle database on-premises, which you have now moved over to AWS Cloud in AWS Aurora PostgreSQL.
For your Oracle database, you have been using v$ views to monitor the runtime performance of the instance, long-running operations, top SQLs from ASH, blocking, etc. How do you continue doing that when you migrate your database to the cloud, especially to AWS Aurora-based PostgreSQL?

Well, PostgreSQL provides statistics collection views, a subsystem that collects runtime dynamic information about certain server activities, such as statistical performance information. For example, you can use the pg_stat_activity view to check for long-running queries.
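As a concrete sketch, a query along these lines flags statements that have been running for a while (the 5-minute threshold is an arbitrary illustration):

SELECT pid,
       now() - query_start AS duration,
       state,
       query
FROM   pg_stat_activity
WHERE  state = 'active'
  AND  now() - query_start > interval '5 minutes'
ORDER  BY duration DESC;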

There are various other statistics views in PostgreSQL too, such as pg_stat_all_tables to see how a table is accessed (sequential/full scans vs. index scans), and so on. There are other views to check IO on tables and indexes, and a plethora of others.

In addition to these statistics views, Aurora PostgreSQL provides a nifty tool called Performance Insights. Performance Insights monitors Amazon RDS or Aurora databases (both MySQL and PostgreSQL) and captures workloads so that you can analyze and troubleshoot database performance. Performance Insights visualizes the database load and provides very useful filtering on various attributes such as waits, SQL statements, hosts, or users.

As part of operational excellence, it’s imperative after a database migration that performance is monitored, documented and continuously improved. Performance Insights and the statistics views are great for proactive and reactive database tuning in AWS RDS and AWS Aurora.
Categories: DBA Blogs

Playing with oracleasm and ASMLib

Michael Dinh - Fri, 2019-03-15 19:02

Forgot about a script I wrote some time ago: Be Friend With awk/sed | ASM Mapping

[root@racnode-dc1-1 ~]# cat /sf_working/scripts/asm_mapping.sh
#!/bin/sh -e
for disk in `/etc/init.d/oracleasm listdisks`
do
oracleasm querydisk -d $disk
#ls -l /dev/*|grep -E `oracleasm querydisk -d $disk|awk '{print $NF}'|sed s'/.$//'|sed '1s/^.//'|awk -F, '{print $1 ",.*" $2}'`
# Alternate option to remove []
ls -l /dev/*|grep -E `oracleasm querydisk -d $disk|awk '{print $NF}'|sed 's/[][]//g'|awk -F, '{print $1 ",.*" $2}'`
echo
done

[root@racnode-dc1-1 ~]# /sf_working/scripts/asm_mapping.sh
Disk "CRS01" is a valid ASM disk on device [8,33]
brw-rw---- 1 root    disk      8,  33 Mar 16 10:25 /dev/sdc1

Disk "DATA01" is a valid ASM disk on device [8,49]
brw-rw---- 1 root    disk      8,  49 Mar 16 10:25 /dev/sdd1

Disk "FRA01" is a valid ASM disk on device [8,65]
brw-rw---- 1 root    disk      8,  65 Mar 16 10:25 /dev/sde1

[root@racnode-dc1-1 ~]#

HOWTO: Which Disks Are Handled by ASMLib Kernel Driver? (Doc ID 313387.1)

[root@racnode-dc1-1 ~]# oracleasm listdisks
CRS01
DATA01
FRA01

[root@racnode-dc1-1 dev]# ls -l /dev/oracleasm/disks
total 0
brw-rw---- 1 oracle dba 8, 33 Mar 15 10:46 CRS01
brw-rw---- 1 oracle dba 8, 49 Mar 15 10:46 DATA01
brw-rw---- 1 oracle dba 8, 65 Mar 15 10:46 FRA01

[root@racnode-dc1-1 dev]# ls -l /dev | grep -E '33|49|65'|grep -E '8'
brw-rw---- 1 root    disk      8,  33 Mar 15 23:47 sdc1
brw-rw---- 1 root    disk      8,  49 Mar 15 23:47 sdd1
brw-rw---- 1 root    disk      8,  65 Mar 15 23:47 sde1

[root@racnode-dc1-1 dev]# /sbin/blkid | grep oracleasm
/dev/sde1: LABEL="FRA01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="205115d9-730d-4f64-aedd-d3886e73d123"
/dev/sdd1: LABEL="DATA01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="714e56a4-210c-4836-a9cd-ff2162c1dea7"
/dev/sdc1: LABEL="CRS01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="232e214d-07bb-4f36-aba8-fb215437fb7e"
[root@racnode-dc1-1 dev]#

Various commands to retrieve oracleasm info and more.

[root@racnode-dc1-1 ~]# cat /etc/oracle-release
Oracle Linux Server release 7.3

[root@racnode-dc1-1 ~]# cat /etc/system-release
Oracle Linux Server release 7.3

[root@racnode-dc1-1 ~]# uname -r
4.1.12-61.1.18.el7uek.x86_64

[root@racnode-dc1-1 ~]# rpm -q oracleasm-`uname -r`
package oracleasm-4.1.12-61.1.18.el7uek.x86_64 is not installed

[root@racnode-dc1-1 ~]# rpm -qa |grep oracleasm
oracleasmlib-2.0.4-1.el6.x86_64
oracleasm-support-2.1.8-3.1.el7.x86_64
kmod-oracleasm-2.0.8-17.0.1.el7.x86_64

[root@racnode-dc1-1 ~]# oracleasm -V
oracleasm version 2.1.9

[root@racnode-dc1-1 ~]# oracleasm -h
Usage: oracleasm [--exec-path=<exec_path>] <command> [ <args> ]
       oracleasm --exec-path
       oracleasm -h
       oracleasm -V

The basic oracleasm commands are:
    configure        Configure the Oracle Linux ASMLib driver
    init             Load and initialize the ASMLib driver
    exit             Stop the ASMLib driver
    scandisks        Scan the system for Oracle ASMLib disks
    status           Display the status of the Oracle ASMLib driver
    listdisks        List known Oracle ASMLib disks
    querydisk        Determine if a disk belongs to Oracle ASMlib
    createdisk       Allocate a device for Oracle ASMLib use
    deletedisk       Return a device to the operating system
    renamedisk       Change the label of an Oracle ASMlib disk
    update-driver    Download the latest ASMLib driver

[root@racnode-dc1-1 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface [oracle]:
Default group to own the driver interface [dba]:
Start Oracle ASM library driver on boot (y/n) [y]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done

[root@racnode-dc1-1 ~]# oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=oracle
ORACLEASM_GID=dba
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"

[root@racnode-dc1-1 ~]# cat /etc/sysconfig/oracleasm
#
# This is a configuration file for automatic loading of the Oracle
# Automatic Storage Management library kernel driver.  It is generated
# By running /etc/init.d/oracleasm configure.  Please use that method
# to modify this file
#

# ORACLEASM_ENABLED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=true

# ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
ORACLEASM_UID=oracle

# ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
ORACLEASM_GID=dba

# ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
ORACLEASM_SCANBOOT=true

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER=""

# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE=""

# ORACLEASM_USE_LOGICAL_BLOCK_SIZE: 'true' means use the logical block size
# reported by the underlying disk instead of the physical. The default
# is 'false'
ORACLEASM_USE_LOGICAL_BLOCK_SIZE=false

[root@racnode-dc1-1 ~]# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

[root@racnode-dc1-1 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...

[root@racnode-dc1-1 ~]# oracleasm querydisk -d DATA01
Disk "DATA01" is a valid ASM disk on device [8,49]

[root@racnode-dc1-1 ~]# oracleasm querydisk -p DATA01
Disk "DATA01" is a valid ASM disk
/dev/sdd1: LABEL="DATA01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="714e56a4-210c-4836-a9cd-ff2162c1dea7"

[root@racnode-dc1-1 ~]# oracleasm-discover
Using ASMLib from /opt/oracle/extapi/64/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.4 (KABI_V2)]
Discovered disk: ORCL:CRS01 [104853504 blocks (53684994048 bytes), maxio 1024]
Discovered disk: ORCL:DATA01 [104853504 blocks (53684994048 bytes), maxio 1024]
Discovered disk: ORCL:FRA01 [104853504 blocks (53684994048 bytes), maxio 1024]

[root@racnode-dc1-1 ~]# lsmod | grep oracleasm
oracleasm              57344  1

[root@racnode-dc1-1 ~]# ls -la /etc/sysconfig/oracleasm
lrwxrwxrwx 1 root root 24 Mar  5 20:21 /etc/sysconfig/oracleasm -> oracleasm-_dev_oracleasm

[root@racnode-dc1-1 ~]# rpm -qa | grep oracleasm
oracleasmlib-2.0.4-1.el6.x86_64
oracleasm-support-2.1.8-3.1.el7.x86_64
kmod-oracleasm-2.0.8-17.0.1.el7.x86_64

[root@racnode-dc1-1 ~]# rpm -qi oracleasmlib-2.0.4-1.el6.x86_64
Name        : oracleasmlib
Version     : 2.0.4
Release     : 1.el6
Architecture: x86_64
Install Date: Tue 18 Apr 2017 10:56:40 AM CEST
Group       : System Environment/Kernel
Size        : 27192
License     : Oracle Corporation
Signature   : RSA/SHA256, Mon 26 Mar 2012 10:22:51 PM CEST, Key ID 72f97b74ec551f03
Source RPM  : oracleasmlib-2.0.4-1.el6.src.rpm
Build Date  : Mon 26 Mar 2012 10:22:44 PM CEST
Build Host  : ca-build44.us.oracle.com
Relocations : (not relocatable)
Packager    : Joel Becker <joel.becker@oracle.com>
Vendor      : Oracle Corporation
URL         : http://oss.oracle.com/
Summary     : The Oracle Automatic Storage Management library userspace code.
Description :
The Oracle userspace library for Oracle Automatic Storage Management
[root@racnode-dc1-1 ~]#

Explicitly providing values in a WHERE clause showing much better performance compared to using sub query

Tom Kyte - Fri, 2019-03-15 16:46
Hi I am new to oracle and not sure how to provide the liveSQL link. I have 2 tables to join huge_table contains about 1 billion rows big_table contains about 100 million rows and small tables contains 999 rows providing the condition to fil...
Categories: DBA Blogs

Compare columns in two tables and report which column is different

Tom Kyte - Fri, 2019-03-15 16:46
Compare columns in two tables and list out column names which are different. for ex:- create table t1(c1 number(2), c2 varchar2(10)); create table t2(c1 number(2), c2 varchar2(10)); insert into t1 values(1,'a'); insert into t2 values(1,'b'); result ...
Categories: DBA Blogs

Documentum : Dctm job locked after docbase installation

Yann Neuhaus - Fri, 2019-03-15 11:19

A correct configuration of Documentum jobs is paramount; that’s why it is the first thing we do after the docbase installation.
A few days ago, I configured the jobs on a new docbase using DQL, and I got an error because a job was locked by the user dmadmin.

The error message was:

DQL> UPDATE dm_job OBJECTS SET target_server=' ' WHERE target_server!=' ' ;
...
[DM_QUERY_F_UP_SAVE]fatal:  "UPDATE:  An error has occurred during a save operation."

[DM_SYSOBJECT_E_LOCKED]error:  "The operation on dm_FTQBS_WEEKLY sysobject was unsuccessful because it is locked by user dmadmin."

I checked the status of this job:

API> ?,c,select r_object_id from dm_job where object_name ='dm_FTQBS_WEEKLY';
r_object_id
----------------
0812D68780000ca6
(1 row affected)

API> dump,c,0812D68780000ca6
...
USER ATTRIBUTES

  object_name                     : dm_FTQBS_WEEKLY
  title                           :
  subject                         : qbs weekly job
...
  start_date                      : 2/28/2019 05:21:15
  expiration_date                 : 2/28/2027 23:00:00
...
  is_inactive                     : T
  inactivate_after_failure        : F
...
  run_now                         : T
...

SYSTEM ATTRIBUTES

  r_object_type                   : dm_job
  r_creation_date                 : 2/28/2019 05:21:15
  r_modify_date                   : 2/28/2019 05:24:48
  r_modifier                      : dmadmin
...
  r_lock_owner                    : dmadmin
  r_lock_date                     : 2/28/2019 05:24:48
...

APPLICATION ATTRIBUTES

...
  a_status                        :
  a_is_hidden                     : F
...
  a_next_invocation               : 3/7/2019 05:21:15

INTERNAL ATTRIBUTES

  i_is_deleted                    : F
...

The job was locked 3 minutes after the creation date… and was still locked 4 days later.

Let’s check job logs:

[dmadmin@CONTENT_SERVER1 ~]$ ls -rtl $DOCUMENTUM/dba/log/repository1/agentexec/*0812D68780000ca6*
-rw-r--r--. 1 dmadmin dmadmin   0 Feb 28 05:24 /app/dctm/server/dba/log/repository1/agentexec/job_0812D68780000ca6.lck
-rw-rw-rw-. 1 dmadmin dmadmin 695 Feb 28 05:24 /app/dctm/server/dba/log/repository1/agentexec/job_0812D68780000ca6
[dmadmin@CONTENT_SERVER1 ~]$
[dmadmin@CONTENT_SERVER1 ~]$ cat /app/dctm/server/dba/log/repository1/agentexec/job_0812D68780000ca6
Thu Feb 28 05:24:50 2019 [ERROR] [LAUNCHER 20749] Detected while preparing job ? for execution: Command Failed: connect,repository1.repository1,dmadmin,'',,,try_native_first, 
status: 0, with error message [DM_DOCBROKER_E_NO_SERVERS_FOR_DOCBASE]error:  "The DocBroker running on host (CONTENT_SERVER1:1489) does not know of a server for the specified docbase (repository1)"
...NO HEADER (RECURSION) No session id for current job.
Thu Feb 28 05:24:50 2019 [FATAL ERROR] [LAUNCHER 20749] Detected while preparing job ? for execution: Command Failed: connect,repository1.repository1,dmadmin,'',,,try_native_first, status: 0, with error message .
..NO HEADER (RECURSION) No session id for current job.

I noted three important pieces of information here:
1. The DocBroker considered the docbase stopped when the AgentExec sent the request.
2. The timestamp corresponds to the installation date of the docbase.
3. LAUNCHER 20749.

I checked the install logs to confirm the first point:

[dmadmin@CONTENT_SERVER1 ~]$ egrep " The installer will s.*. repository1" $DOCUMENTUM/product/7.3/install/logs/install.log*
/app/dctm/server/product/7.3/install/logs/install.log.2019.2.28.8.7.22:05:03:24,757  INFO [main]  - The installer will start component process for repository1.
/app/dctm/server/product/7.3/install/logs/install.log.2019.2.28.8.7.22:05:24:39,588  INFO [main]  - The installer will stop component process for repository1.
/app/dctm/server/product/7.3/install/logs/install.log.2019.2.28.8.7.22:05:26:49,110  INFO [main]  - The installer will start component process for repository1.

The AgentExec logs:

[dmadmin@CONTENT_SERVER1 ~]$ ls -rtl $DOCUMENTUM/dba/log/repository1/agentexec/*agentexec.log*
-rw-rw-rw-. 1 dmadmin dmadmin    640 Feb 28 05:24 agentexec.log.save.02.28.19.05.27.54
-rw-rw-rw-. 1 dmadmin dmadmin    384 Feb 28 05:36 agentexec.log.save.02.28.19.05.42.26
-rw-r-----. 1 dmadmin dmadmin      0 Feb 28 05:42 agentexec.log.save.02.28.19.09.51.24
...
-rw-r-----. 1 dmadmin dmadmin 569463 Mar  8 09:11 agentexec.log
[dmadmin@CONTENT_SERVER1 ~]$
[dmadmin@CONTENT_SERVER1 ~]$ cat $DOCUMENTUM/dba/log/repository1/agentexec/agentexec.log.save.02.28.19.05.27.54
Thu Feb 28 05:17:48 2019 [INFORMATION] [LAUNCHER 19584] Detected during program initialization: Version: 7.3.0050.0039  Linux64
Thu Feb 28 05:22:19 2019 [INFORMATION] [LAUNCHER 20191] Detected during program initialization: Version: 7.3.0050.0039  Linux64
Thu Feb 28 05:22:49 2019 [INFORMATION] [LAUNCHER 20253] Detected during program initialization: Version: 7.3.0050.0039  Linux64
Thu Feb 28 05:24:19 2019 [INFORMATION] [LAUNCHER 20555] Detected during program initialization: Version: 7.3.0050.0039  Linux64
Thu Feb 28 05:24:49 2019 [INFORMATION] [LAUNCHER 20749] Detected during program initialization: Version: 7.3.0050.0039  Linux64

I found here the LAUNCHER 20749 noted above ;) So, this job corresponds to the last job executed by the AgentExec before it was stopped.
The AgentExec was up, so the docbase should have been up as well, but the DocBroker said that the docbase was down :(

Now, the question is: when exactly was the DocBroker informed that the docbase had been shut down?

[dmadmin@CONTENT_SERVER1 ~]$ cat $DOCUMENTUM/dba/log/repository1.log.save.02.28.2019.05.26.49
...
2019-02-28T05:24:48.644873      20744[20744]    0112D68780000003        [DM_DOCBROKER_I_PROJECTING]info:  "Sending information to Docbroker located on host (CONTENT_SERVER1) with port (1489).  
Information: (Config(repository1), Proximity(1), Status(Server shut down by user (dmadmin)), Dormancy Status(Active))."

To recapitulate:
– 05:24:48.644873 : Docbase shut down and DocBroker informed
– 05:24:49 : AgentExec sent request to DocBroker

So, we can say that the AgentExec was still alive after the docbase stopped!

Now, to resolve the issue it is easy :D

API> unlock,c,0812D68780000ca6
...
OK

I didn’t find in the logs when exactly the docbase stops the AgentExec; I guess the docbase requests the stop (kill) but doesn’t check whether it has really stopped.
I confess that I have encountered this error many times after docbase installation, which is why it is useful to know why it happens and how to resolve it quickly. I advise you to configure Dctm jobs after each installation, and at least check whether the r_lock_date is set and whether it is justified.
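As a quick routine check, a DQL query along these lines (using the same single-space convention for empty attributes as the UPDATE at the beginning of this post) lists the jobs that are still locked:

DQL> SELECT object_name, r_lock_owner, r_lock_date FROM dm_job WHERE r_lock_owner != ' ' ORDER BY r_lock_date;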

The article Documentum : Dctm job locked after docbase installation first appeared on Blog dbi services.
