Feed aggregator

Merge statement failing when selecting from dual

Tom Kyte - Mon, 2018-01-22 20:46
Hi Tom, How do i make use of the query in the ref cursor inside a merge statement. in the "ON" condition it says "Src.column_name " is an invalid identifier. create or replace PROCEDURE TingTong ( in_tax_year IN tdir_le...
Categories: DBA Blogs

Optimizer choosing Index Full Scan over Index Fast Full Scan

Tom Kyte - Mon, 2018-01-22 20:46
Per Connor's request, I asked our DBA to generate the trace files and sent them to the asktom_us e-mail address. Thanks. Alfred ------------------------------------------------------------------------------ Hello, I have observed in an Ora...
Categories: DBA Blogs

Tables with no tablespace

Tom Kyte - Mon, 2018-01-22 20:46
Hi Tom, This may sound a little silly question but have not come across such situation yet. I have few tables create by user (not owned by sys or system) which do not have tablespace name. The following query returns table names. select ...
Categories: DBA Blogs

ORA-14300 on Interval partition

Tom Kyte - Mon, 2018-01-22 20:46
Team, We have an application, where we receive a set of files to be processed by application in 24/7. Each time a file recieves we generate a file_key in master_table using sequence, and take that file_key to load the file contents into this tab...
Categories: DBA Blogs

SQL*Loader-510: Physical record size limit

Tom Kyte - Mon, 2018-01-22 20:46
Dear Tom, I tried to load a text file to Oracle db using SQL*Loader. Here is the table definition: create table mytable ( key varchar2(40), adate date, atag number(10), aclob clob, version number(10) ); Here is...
Categories: DBA Blogs

Query not working in Oracle , Missing expression error

Tom Kyte - Mon, 2018-01-22 20:46
i want to get column 2 of y table if column 1 of y table is equal to column 1 of z table this query working in mysql but,this query not working in oracle am getting missing expression error....can u please help me to sort this issue
Categories: DBA Blogs

The chaotic politics of privacy

DBMS2 - Mon, 2018-01-22 09:23

Almost nobody pays attention to the real issues in privacy and surveillance. That’s gotten only slightly better over the decade that I’ve written about the subject. But the problems with privacy/surveillance politics run yet deeper than that.


The politics of privacy and surveillance are confused, in many countries around the world. This is hardly surprising. After all:

  • Privacy involves complex technological issues. Few governments understand those well.
  • Privacy also involves complex business issues. Few governments understand those well either.
  • Citizen understanding of these issues is no better.

Technical cluelessness isn’t the only problem. Privacy issues are commonly framed in terms of civil liberties, national security, law enforcement and/or general national sovereignty. And these categories are inherently confusing, in that:

  • Opinions about them often cross standard partisan lines.
  • Different countries take very different approaches, especially in the “civil liberties” area.
  • These categories are rife with questionably-founded fears, such as supposed threats from terrorism, child pornographers, or “foreign interference”.

Data sovereignty regulations — which are quite a big part of privacy law — get their own extra bit of confusion, because of the various purposes they can serve. Chief among these are: 

  • Preventing foreign governments or businesses from impinging on citizens’ privacy.
  • Helping their own governments impinge on citizens’ privacy.
  • Providing a pretext to favor local companies at the expense of foreign ones.

The United States

Specifically in the United States, I’d like to drill into two areas:

  • An important bit of constitutional confusion.
  • Just how bipartisan this all gets in our generally hyper-partisan times.

The constitutional confusion goes something like this:

  • A new communication technology is invented, such as telephones or email.
  • The courts rule that there is no Fourth Amendment expectation of privacy in using such optional services, because:
    • Given how the technology works, the information is temporarily under a third party’s control.
    • If you weren’t willing to give up your privacy, you wouldn’t have used the technology in the first place.
  • Later the technology becomes so central to everyday life that courts start finding the previous reasoning to be inaccurate, and extend the Fourth’s protection of your “papers and effects” to the new communication medium.
  • In the meantime, laws are passed regulating privacy for that particular medium.

For example:

Those links are all to Wikipedia. At the time of this writing, the ones on Warshak and the SCA go into considerable constitutional depth.

The Email Privacy Act is also the single best example of this post’s premises about the general chaos of privacy politics.

  • It passed the House of Representatives unanimously in 2016 — 419-0 — which is an honor usually reserved for such noncontroversial subjects as renaming post offices.
  • Even so, it was shot down in the Senate, under opposition from Senators of both parties,* never coming up for vote.
  • It was passed by voice vote in the House again in 2017.
  • It again didn’t come up for vote in the Senate.

Last week’s FISA reauthorization is another example; it wouldn’t have passed without senior-level Democratic support in the House and Senate alike.

*A chief opponent among the Democrats was Dianne Feinstein, who despite representing California is commonly hostile to technological good sense. She voted for FISA reauthorization as well.

Like many folks, I’ve been distracted by all the other political calamities that have befallen us since November, 2016. But the time to refocus on privacy/surveillance is drawing near.

Related links

  • I wrote about similar subjects in May, 2016, and offered many links then.
Categories: Other

Modify your nodemanager.properties in wlst

Darwin IT - Mon, 2018-01-22 09:12
In 2016 I did several posts on automatic installs of Fusion Middleware, including domain creation using wlst.

With WebLogic 12c you automatically get a pre-configured per-domain node manager. But you might find that the configuration does not completely suit your wishes.

It would be nice to update the nodemanager.properties file with your own properties in the same script.

Today I started with upgrading our Weblogic Tuning and Troubleshooting training to 12c, and one of the steps is to adapt the domain creation script. In the old script, the AdminServer is started right away, to add the managed server to the domain. In my previously mentioned script, I do that offline. But since I like to be able to update the nodemanager.properties file, I figured that out.

Earlier, I created a function to just write a new property file:
# Create a NodeManager properties file.
def createNodeManagerPropertiesFile(javaHome, nodeMgrHome, nodeMgrType, nodeMgrListenAddress, nodeMgrListenPort):
    print ('Create Nodemanager Properties File for home: '+nodeMgrHome)
    print (lineSeperator)
    nmProps = os.path.join(nodeMgrHome, 'nodemanager.properties')
    fileNew = open(nmProps, 'w')
    fileNew.write('#Node manager properties\n')
    fileNew.write('#%s\n' % str(datetime.now()))
    fileNew.write('DomainsFile=%s/%s\n' % (nodeMgrHome,'nodemanager.domains'))
    fileNew.write('NodeManagerHome=%s\n' % nodeMgrHome)
    fileNew.write('JavaHome=%s\n' % javaHome)
    fileNew.write('ListenAddress=%s\n' % nodeMgrListenAddress)
    fileNew.write('ListenPort=%s\n' % nodeMgrListenPort)
    if nodeMgrType == 'ssl':
        fileNew.write('SecureListener=true\n')
    fileNew.write('LogFile=%s/%s\n' % (nodeMgrHome,'nodemanager.log'))
    fileNew.close()

But this one just rewrites the file, and so I need to determine the values for properties like DomainsFile, JavaHome, etc., which are already set correctly in the original file. I only want to update the ListenAddress, and ListenPort, and possibly the SecureListener property based on the nodemanager type. Besides that, I want to backup the original file as well.

So I adapted this function to:
# Update the Nodemanager Properties
def updateNMProps(nmPropertyFile, nodeMgrListenAddress, nodeMgrListenPort, nodeMgrType):
    nmProps = ''
    print ('Read Nodemanager properties file: %s' % nmPropertyFile)
    f = open(nmPropertyFile)
    for line in f.readlines():
        if line.strip().startswith('ListenPort'):
            line = 'ListenPort=%s\n' % nodeMgrListenPort
        elif line.strip().startswith('ListenAddress'):
            line = 'ListenAddress=%s\n' % nodeMgrListenAddress
        elif line.strip().startswith('SecureListener'):
            if nodeMgrType == 'ssl':
                line = 'SecureListener=true\n'
            else:
                line = 'SecureListener=false\n'
        # making sure these properties are set to true:
        elif line.strip().startswith('QuitEnabled'):
            line = 'QuitEnabled=%s\n' % 'true'
        elif line.strip().startswith('CrashRecoveryEnabled'):
            line = 'CrashRecoveryEnabled=%s\n' % 'true'
        elif line.strip().startswith('weblogic.StartScriptEnabled'):
            line = 'weblogic.StartScriptEnabled=%s\n' % 'true'
        elif line.strip().startswith('weblogic.StopScriptEnabled'):
            line = 'weblogic.StopScriptEnabled=%s\n' % 'true'
        nmProps = nmProps + line
    f.close()
    # Print the result for validation
    print (nmProps)
    # Backup file
    nmPropertyFileOrg = nmPropertyFile + '.org'
    print ('Rename File %s to %s ' % (nmPropertyFile, nmPropertyFileOrg))
    os.rename(nmPropertyFile, nmPropertyFileOrg)
    # Save New File
    print ('\nNow save the changed property file to %s' % nmPropertyFile)
    fileNew = open(nmPropertyFile, 'w')
    fileNew.write(nmProps)
    fileNew.close()
It first reads the property file, denoted by nmPropertyFile, line by line.
If a line starts with a particular property that I want to set specifically, then the line is replaced. Each line is then added to the nmProps variable. For completeness and validation I print the resulting variable.
Then I rename the original file to nmPropertyFile+'.org' using os.rename(). And lastly, I write the contents of nmProps to the original file name in one go.
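Outside WLST, the same read-modify-backup-write pattern can be sketched in plain Python. This is a minimal illustration, not the script from the post: the property names are the standard per-domain node manager ones, and the overrides dictionary is my own generalization of the if/elif chain above.

```python
import os, tempfile

def update_properties(path, overrides):
    """Rewrite selected key=value lines of a Java-style properties file,
    keep every other line untouched, and back up the original as <path>.org."""
    new_lines = []
    src = open(path)
    for line in src:
        key = line.split('=', 1)[0].strip()
        if key in overrides:
            line = '%s=%s\n' % (key, overrides[key])
        new_lines.append(line)
    src.close()
    os.rename(path, path + '.org')   # keep a backup, as in the wlst script
    dst = open(path, 'w')
    dst.writelines(new_lines)        # write the new contents in one go
    dst.close()

# Demo on a scratch copy of a minimal nodemanager.properties:
props = os.path.join(tempfile.mkdtemp(), 'nodemanager.properties')
f = open(props, 'w')
f.write('ListenAddress=\nListenPort=5558\nQuitEnabled=false\n')
f.close()
update_properties(props, {'ListenPort': '5556', 'QuitEnabled': 'true'})
print(open(props).read())
```

The point of the pattern is that untouched properties (DomainsFile, JavaHome, etc.) keep the values the domain creation already set correctly.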

This brings me again one step further to a completely scripted domain.

Partner Webcast – Exadata Database Machine X7 is the biggest change in Exadata in many years

We share our skills to maximize your revenue!
Categories: DBA Blogs

Data Guard And Notification

Tom Kyte - Mon, 2018-01-22 02:26
As part of the my project, we are migrating legacy system to new system. As a part of one process, we have reporting data base from which different reports are generated. Report generation is started by trigger(alert) once the replication of producti...
Categories: DBA Blogs

how to recover a bad block in a big file tablespace

Tom Kyte - Mon, 2018-01-22 02:26
some days ago,I asked a question about big file tablespace vs normal small file tablespace, you said "That said, its rare to need to recover a particular datafile - the most common scenarios are recover a database, or recover/fix some blocks. In eith...
Categories: DBA Blogs

How many context switches my session does?

Tom Kyte - Mon, 2018-01-22 02:26
Hello, My questios is already in subject. What performance views has information about about PL/SQL and SQL context switches? SELECT n.Name FROM v$statname n WHERE n.Name LIKE '%context%' Returns only "OS Voluntary context switches" and "OS...
Categories: DBA Blogs


Tom Kyte - Mon, 2018-01-22 02:26
would like to know why it is not advisable to use "two_tasks" Thank you Regards
Categories: DBA Blogs

Primary Key

Tom Kyte - Mon, 2018-01-22 02:26
Dear Sir, When asked what is the difference between Primary Key and Unique Key , People say Primary Key cannot be NULL, and Unique Key can be NULL. My Question is If we add a NOT NULL constraint on Unique Key , we can use Unique key constraint...
Categories: DBA Blogs

Announcement: Oracle Indexing Internals Seminars Coming to New Zealand in March 2018 (Shipyards of New Zealand)

Richard Foote - Sun, 2018-01-21 19:14
I’m very pleased to announce I’ve now finalised some dates in New Zealand for my popular and highly acclaimed “Oracle Indexing Internals and Best Practices” seminar. They are: Wellington 12-13 March 2018: Tickets and Registration Link Auckland 15-16 March 2018: Tickets and Registration Link As usual, numbers will be strictly limited due to the small class nature […]
Categories: DBA Blogs

Oracle SOA Suite: Installing the Fusion Middleware SOA Suite

Dietrich Schroff - Sun, 2018-01-21 14:30
After the installation of the Fusion Middleware Infrastructure, the next step is to install the SOA Suite software.

The software can be found here:

The first try failed with this error:

java -d64 -jar fmw_12.
Launcher log file is /tmp/OraInstall2017-10-07_11-47-20PM/launcher2017-10-07_11-47-20PM.log.
Not enough free disk space in /tmp/orcl3797124329273264119.tmp to extract the installer. Current: 2796 MB. Required: 3532 MB.

Ok. Some cleanups inside /tmp and then:

Next step: Run the config.sh to create a SOA Suite Server....

Be Friend With awk/sed | ASM Mapping

Michael Dinh - Sun, 2018-01-21 11:10

I had a request to add disks to an ASM Disk Group, without any further details about which new disks had been added.

I needed to figure out which disks are on ASM now, and which disks should be used as the new ones.

Got lazy and created scripts for this for future use.

[root@racnode-dc1-1 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks:               [  OK  ]
[root@racnode-dc1-1 ~]#

[oracle@racnode-dc1-1 ~]$ /etc/init.d/oracleasm listdisks

--- [8,49] is major,minor for device
[oracle@racnode-dc1-1 ~]$ oracleasm querydisk -d DATA01
Disk "DATA01" is a valid ASM disk on device [8,49]

--- Extract major,minor for device
[oracle@racnode-dc1-1 ~]$ oracleasm querydisk -d DATA01|awk '{print $NF}'

--- Remove [] brackets
[oracle@racnode-dc1-1 ~]$ oracleasm querydisk -d DATA01|awk '{print $NF}'|sed s'/.$//'
[oracle@racnode-dc1-1 ~]$ oracleasm querydisk -d DATA01|awk '{print $NF}'|sed s'/.$//'|sed '1s/^.//'

--- Alternative option to remove []
[oracle@racnode-dc1-1 ~]$ oracleasm querydisk -d DATA01|awk '{print $NF}'|sed 's/[][]//g'

--- Create patterns for grep
[oracle@racnode-dc1-1 ~]$ oracleasm querydisk -d DATA01|awk '{print $NF}'|sed s'/.$//'|sed '1s/^.//'|awk -F, '{print $1 ",.*" $2}'
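For comparison, the whole awk/sed pipeline above (grab the last field, strip the brackets, build the grep pattern) collapses to a single regular expression; here is a sketch in Python, using the sample querydisk output line from above:

```python
import re

def grep_pattern(querydisk_output):
    """Turn the trailing '[major,minor]' of an 'oracleasm querydisk -d'
    output line into the 'major,.*minor' pattern fed to grep -E above."""
    m = re.search(r'\[(\d+),(\d+)\]\s*$', querydisk_output)
    return '%s,.*%s' % (m.group(1), m.group(2))

line = 'Disk "DATA01" is a valid ASM disk on device [8,49]'
print(grep_pattern(line))  # -> 8,.*49
```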

--- Test grep using pattern
[oracle@racnode-dc1-1 ~]$ ls -l /dev/* | grep -E '8,.*49'
brw-rw---- 1 root    disk      8,  49 Jan 21 16:42 /dev/sdd1
[oracle@racnode-dc1-1 ~]$

--- Test grep with command line syntax
[oracle@racnode-dc1-1 ~]$ ls -l /dev/*|grep -E `oracleasm querydisk -d DATA01|awk '{print $NF}'|sed s'/.$//'|sed '1s/^.//'|awk -F, '{print $1 ",.*" $2}'`
brw-rw---- 1 root    disk      8,  49 Jan 21 16:42 /dev/sdd1
[oracle@racnode-dc1-1 ~]$

--- Run script
[oracle@racnode-dc1-1 ~]$ /sf_working/scripts/asm_mapping.sh
Disk "CRS01" is a valid ASM disk on device [8,33]
brw-rw---- 1 root    disk      8,  33 Jan 21 21:42 /dev/sdc1

Disk "DATA01" is a valid ASM disk on device [8,49]
brw-rw---- 1 root    disk      8,  49 Jan 21 21:42 /dev/sdd1

Disk "FRA01" is a valid ASM disk on device [8,65]
brw-rw---- 1 root    disk      8,  65 Jan 21 21:42 /dev/sde1

[oracle@racnode-dc1-1 ~]$

--- ASM Lib version
[oracle@racnode-dc1-1 ~]$ rpm -qa|grep asm
[oracle@racnode-dc1-1 ~]$

--- Script
[oracle@racnode-dc1-1 ~]$ cat /sf_working/scripts/asm_mapping.sh

#!/bin/sh -e
for disk in `/etc/init.d/oracleasm listdisks`
do
  oracleasm querydisk -d $disk
  #ls -l /dev/*|grep -E `oracleasm querydisk -d $disk|awk '{print $NF}'|sed s'/.$//'|sed '1s/^.//'|awk -F, '{print $1 ",.*" $2}'`
  # Alternate option to remove []
  ls -l /dev/*|grep -E `oracleasm querydisk -d $disk|awk '{print $NF}'|sed 's/[][]//g'|awk -F, '{print $1 ",.*" $2}'`
  echo
done

[root@racnode-dc1-1 ~]# fdisk -l /dev/sdd1

Disk /dev/sdd1: 8587 MB, 8587837440 bytes, 16773120 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@racnode-dc1-1 ~]#

Using SQL*Plus to Seed your Dockerized Oracle Database

Debu Panda - Sat, 2018-01-20 19:28
In my last blog, you learned to create a containerized Oracle database for your development/testing purpose. You also know how to connect to your container and run commands in the container.

Most applications require some reference data. For example, my OrderApp application, based on Apache TomEE, requires catalog data to be pre-populated before I can test my application.

One of the readers asked me how to run a SQL script from his local or shared drive to seed the containerized database.

In this blog, I will show how you can execute scripts with SQL*Plus inside the container to seed your dockerized Oracle database.

Connecting to SQLPlus

In the last blog, we learned that ORACLE_HOME for the database running in the container is /u01/app/oracle/product/12.1.0/dbhome_1.

I can connect to the database by running the following command:

docker exec -it orcldb /u01/app/oracle/product/12.1.0/dbhome_1/bin/sqlplus system/welcome1@ORCL

SQL*Plus: Release Production on Sat Jan 20 06:22:58 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Sun Jan 14 2018 03:09:54 +00:00

Connected to:
Oracle Database 12c Standard Edition Release - 64bit Production


You might remember ORCL was the instance name that I provided for my database.

Also note that when I run the command, SQL*Plus is getting executed inside the container.

Running a SQL script Using SQLPlus

As the command is getting executed inside the container running the Oracle database, the SQL script has to be accessible from the container.

My Script

My application depends upon a user in the PDB. My script creates the user, creates tables in that user's schema and populates data in those tables. 

I have a script named user.sql that I want to execute and here are the contents of /Users/dpanda/orderapp2/orcl/sql/user.sql script.

create user orderapp identified by orderapp2
default tablespace users temporary tablespace temp;

alter user orderapp quota unlimited on users;
grant connect, resource to orderapp;
connect orderapp/orderapp2@pdb1

As I am invoking SQL*Plus inside the container, I have to specify a path that is valid inside the container.

Mapping Volume from the Container to Local or Shared Drive

You might remember from the last blog that when I started the database container, I mapped /u04/app in the container to /Users/dpanda/orderapp2/orcl by using the -v option as below:

docker run -d --env-file db.properties -p 1521:1521 -p 5500:5500 --name orcldb --net appnet --shm-size="4g" -v /Users/dpanda/orderapp2/orcl:/u04/app container-registry.oracle.com/database/standard

The script directory has to be specified as /u04/app/sql, as my script is located in the /Users/dpanda/orderapp2/orcl/sql directory on my Mac.
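The host-to-container path translation follows directly from the -v mapping; here is a small sketch of that logic (the helper name and error handling are my own, the paths are the ones from this post):

```python
def container_path(host_path, host_mount, container_mount):
    """Translate a host file path into the path the container sees,
    given a docker -v host_mount:container_mount bind mount."""
    root = host_mount.rstrip('/')
    if not host_path.startswith(root + '/'):
        raise ValueError('%s is not under the mounted directory %s'
                         % (host_path, host_mount))
    # Replace the host mount prefix with the container mount point.
    return container_mount.rstrip('/') + host_path[len(root):]

print(container_path('/Users/dpanda/orderapp2/orcl/sql/user.sql',
                     '/Users/dpanda/orderapp2/orcl', '/u04/app'))
# -> /u04/app/sql/user.sql
```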

Here is the docker command I can use to run my script:

docker exec -it orcldb
/u01/app/oracle/product/12.1.0/dbhome_1/bin/sqlplus system/welcome1@PDB1 @/u04/app/sql/user

As you can see, I am connecting to the pdb1 database by executing the SQL*Plus command and running the user.sql script.

Here is the output you will get

SQL*Plus: Release Production on Sat Jan 13 06:16:32 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Sat Jan 13 2018 06:16:19 +00:00

Connected to:
Oracle Database 12c Standard Edition Release - 64bit Production

User created.

User altered.

Grant succeeded.


1 row created.

1 row created.

1 row created.

Commit complete.

Disconnected from Oracle Database 12c Standard Edition Release - 64bit Production

Hope this helps to automate your script to seed your containerized Oracle database.

In a future blog, I will demonstrate how you can use Oracle Instant Client in a Docker container to automate your scripts.

Unplug an Encrypted PDB (ORA-46680: master keys of the container database must be exported)

Yann Neuhaus - Sat, 2018-01-20 16:16

In the Oracle Database Cloud DBaaS you provision a multitenant database where tablespaces are encrypted. This means that when you unplug/plug the pluggable databases, you also need to export/import the encryption keys. You cannot just copy the wallet, because the wallet contains all CDB keys. Usually you can be guided by the error messages, but this one needs a little explanation and an example.

Here I’ll unplug PDB6 from CDB1 and plug it into CDB2

[oracle@VM122 blogs]$ connect /@CDB1 as sysdba
SQLcl: Release 17.4.0 Production on Fri Jan 19 22:22:44 2018
Copyright (c) 1982, 2018, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release - 64bit Production
22:22:46 SQL> show pdbs

Here are the master keys:

SQL> select con_id,tag,substr(key_id,1,6)||'...' "KEY_ID...",creator,key_use,keystore_type,origin,creator_pdbname,activating_pdbname from v$encryption_keys;

Export keys and Unplug PDB

Let’s try to unplug PDB6:
22:22:51 SQL> alter pluggable database PDB6 close immediate;
Pluggable database PDB6 altered.
22:23:06 SQL> alter pluggable database PDB6 unplug into '/var/tmp/PDB6.xml';
Error starting at line : 1 in command -
alter pluggable database PDB6 unplug into '/var/tmp/PDB6.xml'
Error report -
ORA-46680: master keys of the container database must be exported

This message is not clear. You don’t export the container database (CDB) key. You have to export the PDB ones.

Then, I have to open the PDB, switch to it, and export the key:

SQL> alter session set container=PDB6;
Session altered.
SQL> administer key management set keystore open identified by "k3yCDB1";
Key MANAGEMENT succeeded.
SQL> administer key management
2 export encryption keys with secret "this is my secret password for the export"
3 to '/var/tmp/PDB6.p12'
4 identified by "k3yCDB1"
5 /
Key MANAGEMENT succeeded.

Note that I opened the keystore with a password. If you use an autologin wallet, you have to close it in CDB$ROOT and open it with the password.

Now I can unplug the database:

SQL> alter pluggable database PDB6 close immediate;
Pluggable database PDB6 altered.
SQL> alter pluggable database PDB6 unplug into '/var/tmp/PDB6.xml';
Pluggable database PDB6 altered.

Plug PDB and Import keys

I’ll plug it in CDB2:

SQL> connect /@CDB2 as sysdba
SQL> create pluggable database PDB6 using '/var/tmp/PDB6.xml' file_name_convert=('/CDB1/PDB6/','/CDB2/PDB6/');
Pluggable database PDB6 created.

When I open it, I get a warning:

18:05:45 SQL> alter pluggable database PDB6 open;
ORA-24344: success with compilation error
24344. 00000 - "success with compilation error"
*Cause: A sql/plsql compilation error occurred.
*Action: Return OCI_SUCCESS_WITH_INFO along with the error code
Pluggable database PDB6 altered.

The PDB is opened in restricted mode and then I have to import the wallet:

SQL> show pdbs
SQL> select name,cause,type,status,message,action from pdb_plug_in_violations;
PDB6 Wallet Key Needed ERROR PENDING PDB needs to import keys from source. Import keys from source.

Then I open the destination CDB wallet and import the PDB keys into it:

SQL> alter session set container=PDB6;
Session altered.
SQL> administer key management set keystore open identified by "k3yCDB2";
Key MANAGEMENT succeeded.
SQL> administer key management
2 import encryption keys with secret "this is my secret password for the export"
3 from '/var/tmp/PDB6.p12'
4 identified by "k3yCDB2"
5 with backup
6 /
Key MANAGEMENT succeeded.

Now the PDB can be opened for all sessions

SQL> alter session set container=CDB$ROOT;
Session altered.
SQL> alter pluggable database PDB6 close;
Pluggable database PDB6 altered.
SQL> alter pluggable database PDB6 open;
Pluggable database PDB6 altered.

Here is a confirmation that the PDB has the same key as in the origin CDB:

SQL> select con_id,tag,substr(key_id,1,6)||'...' "KEY_ID...",creator,key_use,keystore_type,origin,creator_pdbname,activating_pdbname from v$encryption_keys;


The article Unplug an Encrypted PDB (ORA-46680: master keys of the container database must be exported) first appeared on Blog dbi services.

Who used apex.oracle.com in 2017?

Joel Kallman - Sat, 2018-01-20 09:55
A number of years ago, I provided statistics on the geographic distribution of people who used the free, evaluation service for Oracle Application Express at https://apex.oracle.com.  I did this in 2008, 2009 and 2011.  It's time for an update.

I've included a graphic of the top 30 and the full report below, and here are my observations:

  • Since 2008, the number of visitor sessions to apex.oracle.com increased 280 times! That's not percentage, but times.  Percentage-wise, that's 27,985 % growth from 2008 to 2017.
  • In 2008, there were 1 or more visitor sessions from 122 countries/territories.  In 2017, that number increased to 212 countries/territories.
  • The USA, India and United Kingdom remain in the top 3.  But the UK is barely hanging onto the #3 position.
  • Colombia vaulted from #11 in 2011 to #6 in 2017.  Bangladesh jumped from #26 to #14.  Japan jumped from #19 to #9.  Wow!
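As a quick check on the times-versus-percent arithmetic in the first bullet: an N-times increase corresponds to (N - 1) x 100 % growth, so the two figures are consistent if the raw multiple is about 280.85 (the back-calculation is my own):

```python
def growth_percent(multiple):
    """Percentage growth implied by an N-times increase."""
    return (multiple - 1) * 100

print(growth_percent(280))   # a flat 280x increase -> 27900 % growth
# The quoted 27,985 % figure implies a raw multiple of about 280.85:
print(27985 / 100.0 + 1)     # -> 280.85
```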

Usage of apex.oracle.com doesn't necessarily translate into usage of APEX.  These statistics are only a measurement of who is logging into apex.oracle.com - the actual usage of APEX is far greater.  Also, I fully anticipate the comment "...but as a percentage of population size, our country has the top usage in the world!" (here's looking at you, Netherlands).  But just because someone's country has declined in the rankings doesn't mean that APEX isn't growing there....it's just growing in other countries too!

Lastly, the statistics presented below are sessions, not to be confused with activity.  At the time of this writing, there are 36,133 workspaces on apex.oracle.com.  In the past 7 days, there were 4,643,958 page views, and 1,808 new workspaces requested and approved.

Not only is usage of APEX growing on a global basis, but that growth is accelerating.


Subscribe to Oracle FAQ aggregator