Feed aggregator

PL/SQL procedure and HTTPS request

Tom Kyte - 2 hours 34 min ago
Hi Tom, I'm trying to invoke a RESTful service (using HTTPS and JSON objects) from a procedure. The version of Oracle installed is: Oracle Database 11g Release 11.2.0.4.0 - 64bit Production PL/SQL Release 11.2.0.4.0 - Production "CORE 11.2...
Categories: DBA Blogs

group records by interval of 3 seconds

Tom Kyte - 2 hours 34 min ago
Hi Tom, I am not able to find the exact answer to my question although I think I might need to use LAG and LEAD: I have records like this: Time Amount 11/22/2003 12:22:01 100 11/22/2003 12:22:03 200 11/22/2003 12:22:04 ...
Categories: DBA Blogs

enable trace for specific schema

Tom Kyte - 2 hours 34 min ago
Hi, We are having slowness issues so I would like to activate the trace for 1 hour in my database for a specific schema. I have reviewed several websites and I have these steps: 1- ALTER SESSION SET TRACEFILE_IDENTIFIER = "TEST_TRACE"; 2- A...
Categories: DBA Blogs

add subpartitions to partitioned table using alter table

Tom Kyte - 2 hours 34 min ago
Hi Tom, I have a list based partitioned table as shown below. create table t( col1 number, col2 date, data_id varchar2(10), archived char(1)) partition by list (data_id) (partition a_data_id values ('A'), partition b_data_id values ('B') )...
Categories: DBA Blogs

"Table access by index rowid" is slower than "table access by user rowid"

Tom Kyte - 2 hours 34 min ago
Hi Tom, Anonymous procedure having a type for table rowid on one column for a billion plus record table, i open a cursor selecting rowid of the table for rownum less than 200000, then i fetch a cursor bulk collecting from the table by limiting ever...
Categories: DBA Blogs

Select for update skip locked failing

Tom Kyte - 2 hours 34 min ago
Hi, I have a scenario where I have to get first 10 rows from filtered and ordered result set for update. As we are on multi user environment we are trying to use SELECT FOR UPDATE SKIP LOCKED. Let me put it in a simple example Scenario-1...
Categories: DBA Blogs

Expand your reach with Oracle Cloud Marketplace

Oracle Cloud Marketplace is an online store dedicated to marketing cloud business apps and professional services offered by Oracle Cloud partners. Oracle Cloud Marketplace was launched back in...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Installing ZFS on OEL7 UEK4 for Docker storage

Yann Neuhaus - Thu, 2018-07-19 23:08

The Oracle Database is fully supported on Docker provided that the host runs Red Hat Enterprise Linux 7 or Oracle Enterprise Linux 7 with Unbreakable Enterprise Kernel 4. This is documented in MOS Note 2216342.1. Given that an Oracle database weighs several gigabytes even when empty, that the installation at image build time updates many files, and that datafiles are modified at the block level, a block-level copy-on-write filesystem is a must, and deduplication and compression are welcome. This makes ZFS a good option for the Docker storage driver, and for external volumes as well. By the way, the Docker documentation about storage drivers mentions that zfs is a good choice for high-density workloads such as PaaS, and this of course includes Database as a Service.

I’ve run this example on OEL 7.2 created in the Oracle Cloud:
[Screenshot: creating the Compute instance]

kernel-uek-devel

We need to install the kernel headers. Of course, it is probably better to run a ‘yum update’ and reboot in order to run the latest kernel.
Here, I’m just installing the headers for the current kernel:

[root@localhost opc]# yum -y install kernel-uek-devel-$(uname -r)
...
Installed:
kernel-uek-devel.x86_64 0:4.1.12-112.14.13.el7uek
Dependency Installed:
cpp.x86_64 0:4.8.5-28.0.1.el7_5.1 gcc.x86_64 0:4.8.5-28.0.1.el7_5.1 glibc-devel.x86_64 0:2.17-222.el7
glibc-headers.x86_64 0:2.17-222.el7 kernel-headers.x86_64 0:3.10.0-862.9.1.el7 libdtrace-ctf.x86_64 0:0.8.0-1.el7
libmpc.x86_64 0:1.0.1-3.el7 mpfr.x86_64 0:3.1.1-4.el7
Dependency Updated:
glibc.x86_64 0:2.17-222.el7 glibc-common.x86_64 0:2.17-222.el7 libgcc.x86_64 0:4.8.5-28.0.1.el7_5.1
libgomp.x86_64 0:4.8.5-28.0.1.el7_5.1
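
Before going further, it is worth a quick sanity check that the installed headers actually match the running kernel (just a sketch; these are the standard RPM locations):

uname -r
rpm -q kernel-uek-devel-$(uname -r)
ls -d /usr/src/kernels/$(uname -r)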

DKMS

We need Dynamic Kernel Module Support to load the ZFS modules. I’ve had problems with this in the past, so I install it step by step to verify that everything is ok. First, enable the EPEL repository:

[root@localhost opc]# yum install -y yum-utils
[root@localhost opc]# yum-config-manager --enable ol7_developer_EPEL

Then install DKMS:

[root@localhost opc]# yum -y install dkms
...
Installed:
dkms.noarch 0:2.4.0-1.20170926git959bd74.el7
Dependency Installed:
elfutils-default-yama-scope.noarch 0:0.170-4.el7 elfutils-libelf-devel.x86_64 0:0.170-4.el7
kernel-debug-devel.x86_64 0:3.10.0-862.9.1.el7 zlib-devel.x86_64 0:1.2.7-17.el7
Dependency Updated:
elfutils-libelf.x86_64 0:0.170-4.el7 elfutils-libs.x86_64 0:0.170-4.el7 zlib.x86_64 0:1.2.7-17.el7

Install ZFS repository

There is a zfs-release package that installs the /etc/yum.repos.d/zfs.repo:

[root@localhost opc]# sudo rpm -Uvh http://download.zfsonlinux.org/epel/zfs-release.el7_4.noarch.rpm
Retrieving http://download.zfsonlinux.org/epel/zfs-release.el7_4.noarch.rpm
warning: /var/tmp/rpm-tmp.yvRURo: Header V4 RSA/SHA256 Signature, key ID f14ab620: NOKEY
Preparing... ################################# [100%] Updating / installing...
1:zfs-release-1-5.el7_4 ################################# [100%]

Basically, all it contains is the following enabled section:

[zfs]
name=ZFS on Linux for EL7 - dkms
baseurl=http://download.zfsonlinux.org/epel/7.4/$basearch/
enabled=1
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux

Install ZFS

This is the important part, installing ZFS:

[root@localhost opc]# sudo yum install -y zfs
...
======================================================================================================================
Package Arch Version Repository Size
======================================================================================================================
Installing:
zfs x86_64 0.7.9-1.el7_4 zfs 413 k
Installing for dependencies:
kernel-devel x86_64 3.10.0-862.9.1.el7 ol7_latest 16 M
libnvpair1 x86_64 0.7.9-1.el7_4 zfs 30 k
libuutil1 x86_64 0.7.9-1.el7_4 zfs 35 k
libzfs2 x86_64 0.7.9-1.el7_4 zfs 130 k
libzpool2 x86_64 0.7.9-1.el7_4 zfs 591 k
lm_sensors-libs x86_64 3.4.0-4.20160601gitf9185e5.el7 ol7_latest 41 k
spl x86_64 0.7.9-1.el7_4 zfs 29 k
spl-dkms noarch 0.7.9-1.el7_4 zfs 456 k
sysstat x86_64 10.1.5-13.el7 ol7_latest 310 k
zfs-dkms noarch 0.7.9-1.el7_4 zfs 4.9 M

The most important thing is to check that the zfs module was built and installed correctly:

zfs.ko:
Running module version sanity check.
- Original module
- No original module exists within this kernel
- Installation
- Installing to /lib/modules/4.1.12-112.14.13.el7uek.x86_64/extra/

I’ve seen cases where it was not, and then the module cannot be loaded. You can also check:

[root@localhost opc]# dkms status
spl, 0.7.9, 4.1.12-112.14.13.el7uek.x86_64, x86_64: installed
zfs, 0.7.9, 4.1.12-112.14.13.el7uek.x86_64, x86_64: installed

If you have a problem (such as “modprobe: FATAL: Module zfs not found” when loading the module), check the status and maybe re-install it with:

dkms remove zfs/0.7.9 --all
dkms --force install zfs/0.7.9

If everything is ok, you can load the module:

[root@localhost opc]# /sbin/modprobe zfs
[root@localhost opc]#
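
Note that this loads the module for the current boot only. To have it loaded automatically at startup, a common approach on a systemd-based system like OEL7 (an assumption on my side, not something shown in this demo) is a modules-load.d entry, plus enabling the ZFS units shipped with the 0.7 packages:

echo zfs | sudo tee /etc/modules-load.d/zfs.conf
sudo systemctl enable zfs-import-cache zfs-mount zfs.target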

Create a ZFS filesystem

If the ZFS module is not loaded, you get this error:

[root@localhost opc]# zpool list
The ZFS modules are not loaded.
Try running '/sbin/modprobe zfs' as root to load them.

If it has been loaded correctly, you have no ZFS Storage Pool yet:

[root@localhost opc]# zpool list
no pools available

First I need to add a disk to my machine. Here I have only one disk created when I created the Compute Service:

[root@localhost opc]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdb 202:16 0 128G 0 disk
├─xvdb1 202:17 0 500M 0 part /boot
└─xvdb2 202:18 0 127.5G 0 part
├─vg_main-lv_root 249:0 0 123.5G 0 lvm /
└─vg_main-lv_swap 249:1 0 4G 0 lvm [SWAP]

I add a new disk in the Storage tab:
[Screenshot: creating a new storage volume]
And attach it to my Cloud Instance:
[Screenshot: attaching the volume to the instance]

Here is the new disk visible from the system:

[root@localhost opc]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdb 202:16 0 128G 0 disk
├─xvdb1 202:17 0 500M 0 part /boot
└─xvdb2 202:18 0 127.5G 0 part
├─vg_main-lv_root 249:0 0 123.5G 0 lvm /
└─vg_main-lv_swap 249:1 0 4G 0 lvm [SWAP]
xvdc 202:32 0 120G 0 disk
[root@localhost opc]# ls -l /dev/xvdc /dev/block/202:32
lrwxrwxrwx 1 root root 7 Jul 19 15:05 /dev/block/202:32 -> ../xvdc
brw-rw---- 1 root disk 202, 32 Jul 19 15:05 /dev/xvdc

Here is where I add a ZFS Storage Pool for Docker:

[root@localhost opc]# zpool create -f zpool-docker -m /var/lib/docker /dev/xvdc
 
[root@localhost opc]# zpool status
pool: zpool-docker
state: ONLINE
scan: none requested
config:
 
NAME STATE READ WRITE CKSUM
zpool-docker ONLINE 0 0 0
xvdc ONLINE 0 0 0
 
[root@localhost opc]# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zpool-docker 119G 118K 119G - 0% 0% 1.00x ONLINE -

While I’m there, I set some attributes to enable compression and deduplication. And since Docker writes to layers with 32k I/O, I set the recordsize accordingly:

zfs set compression=on zpool-docker
zfs set dedup=on zpool-docker
zfs set recordsize=32k zpool-docker
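
To verify that the attributes took effect, zfs get accepts a comma-separated property list:

zfs get compression,dedup,recordsize zpool-docker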

Just to test that everything is ok, I install Docker as I did in a previous post:

[root@localhost opc]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@localhost opc]# yum-config-manager --enable ol7_addons
[root@localhost opc]# yum -y install docker-ce
[root@localhost opc]# systemctl start docker
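
Docker should auto-detect the zfs driver because /var/lib/docker sits on a ZFS dataset, but you can also pin it explicitly. A sketch, assuming no other daemon options are configured yet:

cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "storage-driver": "zfs"
}
EOF
sudo systemctl restart docker
docker info | grep -i 'storage driver'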

Docker layers

I pull a small image and start a container on it:

[root@localhost opc]# docker run oraclelinux:7-slim

Here is the image and the ZFS dataset for its layer, mounted under /var/lib/docker/zfs:

[root@localhost opc]# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
oraclelinux 7-slim b1af4ba0cf19 12 days ago 117MB
 
[root@localhost opc]# docker inspect oraclelinux:7-slim | jq -r .[0].GraphDriver
 
{
  "Data": {
    "Dataset": "zpool-docker/fe31ff466872588506b1a3a3575c64d458beeb94d15bea593e5048237abf4fcc",
    "Mountpoint": "/var/lib/docker/zfs/graph/fe31ff466872588506b1a3a3575c64d458beeb94d15bea593e5048237abf4fcc"
  },
  "Name": "zfs"
}

And here is the container layer:

[root@localhost opc]# docker container ls -a
 
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9eb7610c1fc5 oraclelinux:7-slim "/bin/bash" 6 minutes ago Exited (0) 6 minutes ago inspiring_shannon
 
[root@localhost opc]# docker inspect inspiring_shannon | jq -r .[0].GraphDriver
 
{
  "Data": {
    "Dataset": "zpool-docker/5d1022761b6dee28a25e21ec8c5c73d99d09863f11439bbf86e742856f844982",
    "Mountpoint": "/var/lib/docker/zfs/graph/5d1022761b6dee28a25e21ec8c5c73d99d09863f11439bbf86e742856f844982"
  },
  "Name": "zfs"
}

If you don’t have jq just ‘yum install jq’. It is very convenient to filter and display the ‘inspect’ output.
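
For example, to grab just the mountpoint of a layer in a script:

docker inspect oraclelinux:7-slim | jq -r '.[0].GraphDriver.Data.Mountpoint'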

We can see those datasets from ZFS list:

[root@localhost opc]# zfs list -o creation,space,snapshot_count,written -r | sort
 
CREATION NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD SSCOUNT WRITTEN
Thu Jul 19 15:13 2018 zpool-docker 115G 126M 0B 964K 0B 125M none 964K
Thu Jul 19 15:38 2018 zpool-docker/fe31ff466872588506b1a3a3575c64d458beeb94d15bea593e5048237abf4fcc 115G 125M 0B 125M 0B 0B none 0
Thu Jul 19 15:39 2018 zpool-docker/5d1022761b6dee28a25e21ec8c5c73d99d09863f11439bbf86e742856f844982 115G 87K 0B 87K 0B 0B none 87K
Thu Jul 19 15:39 2018 zpool-docker/5d1022761b6dee28a25e21ec8c5c73d99d09863f11439bbf86e742856f844982-init 115G 46K 0B 46K 0B 0B none 0

Here, sorted by creation time, we see the datasets used by each layer. The initial files, before having any image, are less than 1MB. The image uses 125MB. The container creation wrote 46KB (the ‘-init’ dataset) and the running container has written an additional 87KB.
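
Since compression and deduplication are enabled, it is also interesting to see what they achieve; compressratio is a per-dataset property and the dedup ratio appears in the pool listing:

zfs get -r compressratio zpool-docker
zpool list zpool-docker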

 

The article Installing ZFS on OEL7 UEK4 for Docker storage first appeared on the dbi services blog.

Rowtype variable in dynamic SQL

Tom Kyte - Thu, 2018-07-19 20:06
How to pass below variable in dynamic sql? v_emp_rec org.emp%rowtype; I am getting ORA-01008: not all variables bound error for below code. declare v_sql varchar2(1000); begin v_sql:='demo_pkg.demo_rates(p_rec=>v_emp_rec);'; execute ...
Categories: DBA Blogs

Line Level Logging using controlled FND_LOG\DBMS_OUTPUT statements vs DBMS_UTILITY.FORMAT_ERROR_BACKTRACE

Tom Kyte - Thu, 2018-07-19 20:06
Hi Tom, I have used DBMS_UTILITY.FORMAT_ERROR_BACKTRACE for understanding the exact line where error occurred in a package which is exposed using a service outside. However one of my client feels I should write diagnostic message after certain set ...
Categories: DBA Blogs

hierarchical query with date consideration

Tom Kyte - Thu, 2018-07-19 20:06
How to include an effectivity date range check in hierarchical query? Such that a loop may exist if the date range is excluded, but considering the range a loop would not. <code>with output_tab as (select 'P1' as output_id, 100 as output_key, to_...
Categories: DBA Blogs

Oracle Security Training by Pete Finnigan in 2018

Pete Finnigan - Thu, 2018-07-19 19:46
Are you worried about the data in your databases being stolen? GDPR has just become law across the EU and the UK and affects businesses in other countries that process EU citizens' data. Maybe you store and process credit card....[Read More]

Posted by Pete On 19/07/18 At 02:04 PM

Categories: Security Blogs

Slides from SV JUG Jul-18th Meetup

Kuassi Mensah - Thu, 2018-07-19 14:28
SV JUGers, it was a great Meetup.
The slides are here.

Enjoy!

Oracle VBCS - Pay As You Go Cloud Model Experience Explained

Andrejus Baranovski - Thu, 2018-07-19 14:03
If you are considering starting to use the VBCS cloud service from Oracle, maybe this post will be useful. I will share my experience with the Pay As You Go model.

Two payment models are available:

1. Pay As You Go - good when accessing VBCS from time to time; can be terminated at any time
2. Monthly Flex - good when you need to run VBCS 24/7; requires a commitment and can't be terminated at any time

When you create an Oracle Cloud account, you initially get a 30-day free trial period. At the end of that period (or earlier), you can upgrade to a billable plan. To upgrade, go to account management and choose to upgrade the promotional offer - you will be given the choice of Pay As You Go or Monthly Flex:


As soon as you upgrade to Pay As You Go, you will start seeing the monthly usage amount in the dashboard. It also shows the hourly usage of the VBCS instance, for which you will be billed:


Click on the monthly usage amount and you will see a detailed billing view per service. When the VBCS instance is stopped (in the case of Pay As You Go), you are billed only for hardware storage (Compute Classic), which is a relatively small amount:


There are two options for creating a VBCS instance - either autonomous VBCS or customer-managed VBCS. To be able to stop/start the VBCS instance and avoid billing when the instance is not used (in the case of Pay As You Go), make sure to go with customer-managed VBCS. In this example, the VBCS instance was used for only 1 hour and then stopped; it can be started again at any time:


To manage the VBCS instance, navigate to the Oracle Cloud Stack UI. From here you can start/stop both the DB and VBCS in a single action. It is not enough to stop VBCS; make sure to stop the DB too if you are not using it:

Playing With Service Relocation 12c

Michael Dinh - Thu, 2018-07-19 09:14
With 12c, use the verbose option to display running services.
[oracle@racnode-dc1-1 rac_relocate]$ srvctl -V
srvctl version: 12.1.0.2.0

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status instance -d hawk -i hawk1 -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status instance -d hawk -i hawk2 -v
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.

There is an option to provide a comma-delimited list of services when checking status.
Unfortunately, that option is not available for relocation, which I fail to understand.
[oracle@racnode-dc1-1 rac_relocate]$ srvctl status service -d hawk -s "p11,p12,p13,p14"
Service p11 is running on instance(s) hawk1
Service p12 is running on instance(s) hawk1
Service p13 is running on instance(s) hawk1
Service p14 is running on instance(s) hawk1

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status service -d hawk -s "p21,p22,p23,p24,p25"
Service p21 is running on instance(s) hawk2
Service p22 is running on instance(s) hawk2
Service p23 is running on instance(s) hawk2
Service p24 is running on instance(s) hawk2
Service p25 is running on instance(s) hawk2

It is puzzling that checking service status accepts a delimited list whereas relocation does not.
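
A small wrapper can bridge the gap. This is only a sketch (the service list here is an example): split the delimited list and issue one relocate per service, which is exactly what the scripts below automate:

SVCS="p11,p12,p13,p14"
for s in ${SVCS//,/ }   # split the comma-delimited list on commas
do
  srvctl relocate service -d hawk -service $s -oldinst hawk1 -newinst hawk2
done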

I have blogged about new features for service failover: 12.1 Improved Service Failover

Another test shows that it’s working as it should be.

[oracle@racnode-dc1-1 ~]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.

[oracle@racnode-dc1-1 ~]$ srvctl stop instance -d hawk -instance hawk1 -failover

[oracle@racnode-dc1-1 ~]$ srvctl status database -d hawk -v
Instance hawk1 is not running on node racnode-dc1-1
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
[oracle@racnode-dc1-1 ~]$


[root@racnode-dc1-1 ~]# crsctl stop crs
[root@racnode-dc1-1 ~]# crsctl start crs


[oracle@racnode-dc1-1 ~]$ srvctl status database -d hawk -v
Instance hawk1 is not running on node racnode-dc1-1
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
[oracle@racnode-dc1-1 ~]$

[oracle@racnode-dc1-1 ~]$ srvctl start database -d hawk

[oracle@racnode-dc1-1 ~]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
[oracle@racnode-dc1-1 ~]$

However, the requirement here is to relocate services rather than fail them over.

Here are scripts and demo for that.

The scripts only work for a two-node RAC where each service runs on one instance only.

[oracle@racnode-dc1-1 ~]$ srvctl config service -d hawk |egrep 'Service name|instances'
Service name: p11
Preferred instances: hawk1
Available instances: hawk2
Service name: p12
Preferred instances: hawk1
Available instances: hawk2
Service name: p13
Preferred instances: hawk1
Available instances: hawk2
Service name: p14
Preferred instances: hawk1
Available instances: hawk2
Service name: p21
Preferred instances: hawk2
Available instances: hawk1
Service name: p22
Preferred instances: hawk2
Available instances: hawk1
Service name: p23
Preferred instances: hawk2
Available instances: hawk1
Service name: p24
Preferred instances: hawk2
Available instances: hawk1
Service name: p25
Preferred instances: hawk2
Available instances: hawk1
[oracle@racnode-dc1-1 ~]$
DEMO:
[oracle@racnode-dc1-1 rac_relocate]$ ls *relocate*.sh
relocate_service.sh  validate_relocate_service.sh

[oracle@racnode-dc1-1 rac_relocate]$ ls *restore*.sh
restore_service_instance1.sh  restore_service_instance2.sh
[oracle@racnode-dc1-1 rac_relocate]$


========================================================================
+++++++ SAVE SERVICES LOCATION AND PREVENT ACCIDENTAL OVERWRITE
========================================================================
[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v > /tmp/service.org

[oracle@racnode-dc1-1 rac_relocate]$ chmod 400 /tmp/service.org; ll /tmp/service.org; cat /tmp/service.org
-r-------- 1 oracle oinstall 222 Jul 18 14:54 /tmp/service.org
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v > /tmp/service.org
-bash: /tmp/service.org: Permission denied
[oracle@racnode-dc1-1 rac_relocate]$

	
========================================================================
+++++++ RELOCATE SERVICES FROM INSTANCE 1 TO 2

Validate is similar to RMAN validate.
No relocation is performed; the commands that would be executed are only displayed for verification.
========================================================================
[oracle@racnode-dc1-1 rac_relocate]$ ./validate_relocate_service.sh
./validate_relocate_service.sh: line 4: 1: ---> USAGE: ./validate_relocate_service.sh -db_unique_name -oldinst# -newinst#

[oracle@racnode-dc1-1 rac_relocate]$ ./validate_relocate_service.sh hawk 1 2
+ OUTF=/tmp/service_1.conf
+ srvctl status instance -d hawk -instance hawk1 -v
+ ls -l /tmp/service_1.conf
-rw-r--r-- 1 oracle oinstall 109 Jul 18 14:59 /tmp/service_1.conf
+ cat /tmp/service_1.conf
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
+ set +x

**************************************
***** SERVICES THAT WILL BE RELOCATED:
**************************************
srvctl relocate service -d hawk -service p11 -oldinst hawk1 -newinst hawk2
srvctl relocate service -d hawk -service p12 -oldinst hawk1 -newinst hawk2
srvctl relocate service -d hawk -service p13 -oldinst hawk1 -newinst hawk2
srvctl relocate service -d hawk -service p14 -oldinst hawk1 -newinst hawk2


[oracle@racnode-dc1-1 rac_relocate]$ ./relocate_service.sh hawk 1 2
-rw-r--r-- 1 oracle oinstall 109 Jul 18 15:00 /tmp/service_1.conf
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
+ srvctl relocate service -d hawk -service p11 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p12 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p13 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p14 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl status instance -d hawk -instance hawk1 -v
Instance hawk1 is running on node racnode-dc1-1. Instance status: Open.
+ srvctl status instance -d hawk -instance hawk2 -v
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
+ set +x
[oracle@racnode-dc1-1 rac_relocate]$


========================================================================
+++++++ RELOCATE SERVICES FROM INSTANCE 2 TO 1
========================================================================
[oracle@racnode-dc1-1 rac_relocate]$ ./relocate_service.sh hawk 2 1
-rw-r--r-- 1 oracle oinstall 129 Jul 18 15:02 /tmp/service_2.conf
Instance hawk2 is running on node racnode-dc1-2 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
+ srvctl relocate service -d hawk -service p11 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p12 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p13 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p14 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p21 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p22 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p23 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p24 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl relocate service -d hawk -service p25 -oldinst hawk2 -newinst hawk1
+ set +x
+ srvctl status instance -d hawk -instance hawk2 -v
Instance hawk2 is running on node racnode-dc1-2. Instance status: Open.
+ srvctl status instance -d hawk -instance hawk1 -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
+ set +x
[oracle@racnode-dc1-1 rac_relocate]$


========================================================================
+++++++ RESTORE SERVICES FOR INSTANCE
========================================================================
[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14,p21,p22,p23,p24,p25. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2. Instance status: Open.

[oracle@racnode-dc1-1 rac_relocate]$ ./restore_service_instance2.sh
./restore_service_instance2.sh: line 4: 1: ---> USAGE: ./restore_service_instance2.sh -db_unique_name

[oracle@racnode-dc1-1 rac_relocate]$ ./restore_service_instance2.sh hawk
+ srvctl relocate service -d hawk -service p21 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p22 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p23 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p24 -oldinst hawk1 -newinst hawk2
+ set +x
+ srvctl relocate service -d hawk -service p25 -oldinst hawk1 -newinst hawk2
+ set +x

[oracle@racnode-dc1-1 rac_relocate]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1 with online services p11,p12,p13,p14. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2 with online services p21,p22,p23,p24,p25. Instance status: Open.
[oracle@racnode-dc1-1 rac_relocate]$
CODE:


========================================================================
+++++++ validate_relocate_service.sh
========================================================================
#!/bin/sh -e
DN=`dirname $0`
BN=`basename $0`
DB=${1:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
OLD=${2:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
NEW=${3:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
set -x
OUTF=/tmp/service_${OLD}.conf
srvctl status instance -d ${DB} -instance ${DB}${OLD} -v > $OUTF
ls -l $OUTF;cat $OUTF
set +x
export svc=`tail -1 $OUTF | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
IFS=","
echo
echo "**************************************"
echo "***** SERVICES THAT WILL BE RELOCATED:"
echo "**************************************"
for s in ${svc}
do
  echo "srvctl relocate service -d ${DB} -service ${s} -oldinst ${DB}${OLD} -newinst ${DB}${NEW}"
done
exit

========================================================================
+++++++ relocate_service.sh
========================================================================
#!/bin/sh -e
DN=`dirname $0`
BN=`basename $0`
DB=${1:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
OLD=${2:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
NEW=${3:?"---> USAGE: $DN/$BN -db_unique_name -oldinst# -newinst#"}
OUTF=/tmp/service_${OLD}.conf
srvctl status instance -d ${DB} -instance ${DB}${OLD} -v > $OUTF
ls -l $OUTF;cat $OUTF
export svc=`tail -1 $OUTF | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
IFS=","
for s in ${svc}
do
  set -x
  srvctl relocate service -d ${DB} -service ${s} -oldinst ${DB}${OLD} -newinst ${DB}${NEW}
  set +x
done
set -x
srvctl status instance -d ${DB} -instance ${DB}${OLD} -v
srvctl status instance -d ${DB} -instance ${DB}${NEW} -v
set +x
exit

========================================================================
+++++++ restore_service_instance1.sh
========================================================================
#!/bin/sh -e
DN=`dirname $0`
BN=`basename $0`
DB=${1:?"---> USAGE: $DN/$BN -db_unique_name"}
export svc=`head -1 /tmp/service.org | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
IFS=","
for s in ${svc}
do
  set -x
  srvctl relocate service -d ${DB} -service ${s} -oldinst ${DB}2 -newinst ${DB}1
  set +x
done
exit

========================================================================
+++++++ restore_service_instance2.sh
========================================================================
#!/bin/sh -e
DN=`dirname $0`
BN=`basename $0`
DB=${1:?"---> USAGE: $DN/$BN -db_unique_name"}
export svc=`tail -1 /tmp/service.org | awk -F" " '{print $11}'|awk '{$0=substr($0,1,length($0)-1); print $0}'`
IFS=","
for s in ${svc}
do
  set -x
  srvctl relocate service -d ${DB} -service ${s} -oldinst ${DB}1 -newinst ${DB}2
  set +x
done
exit

How to install Docker Enterprise Edition on CentOS 7 ?

Yann Neuhaus - Thu, 2018-07-19 07:54

In this blog we are going to see how to install the Docker EE trial edition on CentOS 7 hosts. As you may or may not know, Docker has two editions: Docker Community Edition (CE) and Docker Enterprise Edition (EE). To make it simple, let's say that Docker EE is designed for production environments. More info here.

 

This will be our architecture:

  • 1 manager node
    • hostname: docker-ee-manager1
  • 1 worker node + Docker Trusted Registry (DTR) node
    • hostname: docker-ee-worker1

Both nodes should be in the same network range.

We will assume that CentOS 7 is already installed on all hosts:

[root@docker-ee-manager1 ~] cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core)

 

[root@docker-ee-worker1 ~]$ cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core)

 

 Create docker user and group
[root@docker-ee-manager1 ~]$ groupadd docker
[root@docker-ee-manager1 ~]$ useradd -g docker docker
[root@docker-ee-manager1 ~]$ echo "docker ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
[root@docker-ee-manager1 ~]$ su - docker

Do the same on worker

[root@docker-ee-worker1 ~]$ groupadd docker
[root@docker-ee-worker1 ~]$ useradd -g docker docker
[root@docker-ee-worker1 ~]$ echo "docker ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
[root@docker-ee-worker1 ~]$ su - docker

 

 

Get the Docker URL for installing Docker EE

Then you need to go to this link. Make sure that you already have a Docker account; it's free and you can create one very quickly.

 

 

Fill in the form and you will have access to this:

[Screenshot: the Docker Store page for Docker EE]

[Screenshot: the setup page with the repository URL and license key]

 

Copy the URL and save the license key in a safe location; you will need them later.

 

1. Configure Docker URL
[docker@docker-ee-manager1 ~]$ export DOCKERURL="<YOUR_LINK>"
[docker@docker-ee-manager1 ~]$ sudo -E sh -c 'echo "$DOCKERURL/centos" > /etc/yum/vars/dockerurl'

 

2. We install required packages
[docker@docker-ee-manager1 ~]$ sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2

 

3. Add the Docker-EE repository
[docker@docker-ee-manager1 ~]$ sudo -E yum-config-manager \
--add-repo \
"$DOCKERURL/centos/docker-ee.repo"

 

4. Install docker-ee package
[docker@docker-ee-manager1 ~]$ sudo yum -y install docker-ee
[docker@docker-ee-manager1 ~]$ sudo systemctl enable docker.service
[docker@docker-ee-manager1 ~]$ sudo systemctl start docker.service
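
At this point, a quick sanity check doesn't hurt (a sketch; the exact version strings depend on the EE channel you subscribed to):

docker --version
sudo systemctl is-active docker
sudo docker info --format '{{.ServerVersion}}'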

 

Repeat step 1 to 4 for worker1 node

 

Install UCP on manager

A simple command; just run this on your manager:

 

[docker@docker-ee-manager1 ~]$ docker container run --rm -it --name ucp   -v /var/run/docker.sock:/var/run/docker.sock   docker/ucp:3.0.2 install   --host-address <YOUR_IP>   --interactive
INFO[0000] Your engine version 17.06.2-ee-15, build 64ddfa6 (3.10.0-514.el7.x86_64) is compatible with UCP 3.0.2 (736cf3c)
Admin Username: admin
Admin Password:
Confirm Admin Password:
WARN[0014] None of the hostnames we'll be using in the UCP certificates [docker-ee-manager1 127.0.0.1 172.17.0.1 <YOUR_IP>] contain a domain component.  Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect.  You can use the --san flag to add more aliases

You may enter additional aliases (SANs) now or press enter to proceed with the above list.
Additional aliases:
INFO[0000] Found existing UCP config com.docker.ucp.config-2
Do you want to proceed with the install with config com.docker.ucp.config-2? (y/n): y
y
INFO[0032] Installing UCP with host address 10.29.14.101 - If this is incorrect, please specify an alternative address with the '--host-address' flag
INFO[0032] Deploying UCP Service... (waiting for all 2 nodes to complete)
INFO[0083] Installation completed on docker-ee-manager1 (node uvzvuefehznf22k4wa5zg9cy1)
INFO[0083] Installation completed on docker-ee-worker1 (node z7gq7z3336jnwcyojyqq1h3wa)
INFO[0083] UCP Instance ID: x0fg0phnkgzm5730thoncucn2
INFO[0083] UCP Server SSL: SHA-256 Fingerprint=E6:2F:38:69:5D:26:A8:06:D3:8B:11:69:D9:DC:3A:77:CE:16:EA:23:9C:D0:D8:8F:34:D6:97:9D:4B:D2:E2:D2
INFO[0083] Login to UCP at https://<YOUR_IP>:443
INFO[0083] Username: admin
INFO[0083] Password: (your admin password)

If the browser warns about an insecure connection, ignore it and accept the exception. We can now see the UCP admin interface. Enter your credentials and upload your license key.

[Screenshot: UCP login page]

[Screenshot: UCP admin interface]

Adding a worker node

[Screenshot: the Nodes page]

Click on Add Node.

[Screenshot: the Add Node button]

Then tell UCP that you want to deploy a new worker node and copy the command displayed.

[Screenshot: selecting the worker role]

Connect to the worker and run this command to join it to the cluster:

[docker@docker-ee-worker1 ~]$ docker swarm join --token SWMTKN-1-4kt4gyk00n69tiywlzhst8dwsgo4oblylnsl1aww2048isi44u-7j9hmcrsn3lr048yu30xlnsv7 <IP_OF_MANAGER>:2377
This node joined a swarm as a worker.

 

Now, we have two nodes: one manager and one worker
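
You can double-check from the command line; run this on the manager and both nodes should show a Ready status (illustrative, output not reproduced here):

docker node ls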

 

[Screenshot: UCP showing the two nodes]

 

Install Docker Trusted Registry

 

Docker EE includes DTR, a secure registry where you can store your Docker images. DTR will be installed on the worker node; it is not recommended to install it on a manager node.

To install it, you just need to run this command:

[docker@docker-ee-worker1 ~]$ docker run -it --rm docker/dtr install --ucp-node docker-ee-worker1 --ucp-url https://<IP_OF_MANAGER> --ucp-username admin --ucp-password <YOUR_PASSWORD> --ucp-ca "-----BEGIN CERTIFICATE-----
MIIBggIUJ+Y+MFXH1XcyJnCU4ACq26v5ZJswCgYIKoZIzj0EAwIw
HTEbMBkGA1UEAxMSVUNQIENsaWVudCBSb290IENBMB4XDTE4MDcxOTA4MjEwMFoX
DTIzMDcxODA4MjEwMFowHTEbMBkGA1UEAxMSVUNQIENsaWVudCBSb290IENBMFkw
EwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEDJxHOIhHoV4NBZGnEQClFShjQfpoL5mQ
LH7E6x6GL4AexYtdWgGIcOlV2NXQpdadBK9cZG2z6r7+zwCj7EP/iqNFMEMwDgYD
VR0P7ojp1CIMAoGCCqGSM49BAMCA0gAMEUCIQDqbBiCqXgFdtIb6uP9
EdDTI1YGWn97AFPU+YJ9s1/CSAIgBsqIn1v7BVNjJ3AeUQfo1d8Kfc//ZwHYr4XW
uWIHmkM=
-----END CERTIFICATE-----"

You can find the certificate here: https://<IP_OF_MANAGER>/manage/settings/certs

[Screenshot: the UCP certificates settings page]
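
Rather than copy-pasting the certificate from that page, you may be able to fetch it directly; UCP documents an unauthenticated /ca endpoint for exactly this purpose (a hedged sketch; adapt the placeholders to your setup):

UCP_CA=$(curl -ks https://<IP_OF_MANAGER>/ca)
docker run -it --rm docker/dtr install --ucp-node docker-ee-worker1 \
  --ucp-url https://<IP_OF_MANAGER> --ucp-username admin \
  --ucp-password <YOUR_PASSWORD> --ucp-ca "$UCP_CA"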

Then go to the DTR URL which is https://<IP_OF_WORKER>  and enter your credentials

 

[Screenshot: DTR login page]

 

 

 

Here we are:

 

[Screenshot: the DTR web UI]

Congratulations, you have just installed Docker EE. Hope this helps :-)

 

 

The article How to install Docker Enterprise Edition on CentOS 7 ? first appeared on the dbi services blog.

Google Cloud Spanner – inserting data

Yann Neuhaus - Thu, 2018-07-19 04:17

In a previous post I’ve created a Google Cloud Spanner database and inserted a few rows from the GUI. This is definitely not a solution for many rows, so here is a post about using the command line.

If I start the Google Shell from the icon on the Spanner page for my project, everything is set. But if I run it from elsewhere, using https://console.cloud.google.com/cloudshell as I did in A free persistent Google Cloud service with Oracle XE, I have to set the project:

franck_pachot@cloudshell:~$ gcloud config set project superb-avatar-210409
Updated property [core/project].
franck_pachot@superb-avatar-210409:~$

Instance

I create my Spanner instance with 3 nodes across the world:
franck_pachot@superb-avatar-210409:~$ time gcloud spanner instances create franck --config nam-eur-asia1 --nodes=3 --description Franck
Creating instance...done.
 
real 0m3.940s
user 0m0.344s
sys 0m0.092s

Database

and the Spanner database – created in about 7 seconds:

franck_pachot@superb-avatar-210409:~$ time gcloud spanner databases create test --instance=franck
Creating database...done.
real 0m6.832s
user 0m0.320s
sys 0m0.128s

Table

The DDL for table creation can also be run from there:

franck_pachot@superb-avatar-210409:~$ gcloud spanner databases ddl update test --instance=franck --ddl='create table DEMO1 ( ID1 int64, TEXT string(max) ) primary key (ID1)'
DDL updating...done.
'@type': type.googleapis.com/google.protobuf.Empty

I’m now ready to insert one million rows. Here is my table:

franck_pachot@superb-avatar-210409:~$ gcloud spanner databases ddl describe test --instance=franck
--- |-
CREATE TABLE DEMO1 (
ID1 INT64,
TEXT STRING(MAX),
) PRIMARY KEY(ID1)

Insert

The gcloud command line has only a limited way to insert rows:

franck_pachot@superb-avatar-210409:~$ time for i in $(seq 1 1000000) ; do gcloud beta spanner rows insert --table=DEMO1 --database=test --instance=franck --data=ID1=${i},TEXT=XXX${i} ; done
commitTimestamp: '2018-07-18T11:09:45.065684Z'
commitTimestamp: '2018-07-18T11:09:50.433133Z'
commitTimestamp: '2018-07-18T11:09:55.752857Z'
commitTimestamp: '2018-07-18T11:10:01.044531Z'
commitTimestamp: '2018-07-18T11:10:06.285764Z'
commitTimestamp: '2018-07-18T11:10:11.106936Z'
^C

Ok, let’s stop there. Calling the service once per row is not efficient, with a latency of about 5 seconds per insert.

API

I’ll use the API from Python. Basically, a connection is a Spanner Client:

franck_pachot@superb-avatar-210409:~$ python3
Python 3.5.3 (default, Jan 19 2017, 14:11:04)
[GCC 6.3.0 20170118] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from google.cloud import spanner
>>> spanner_client = spanner.Client()
>>> instance = spanner_client.instance('franck')
>>> database = instance.database('test')
>>>

Batch Insert

With this I can send a batch of rows to insert. Here is the full Python script I used to insert one million rows, in batches of 1000:

from google.cloud import spanner
spanner_client = spanner.Client()
instance = spanner_client.instance('franck')
database = instance.database('test')
for j in range(1000):
    records = []
    for i in range(1000):
        records.append((1+j*1000+i, u'XXX'+str(i)))
    with database.batch() as batch:
        batch.insert(table='DEMO1', columns=('ID1', 'TEXT',), values=records)

This takes just under 3 minutes:

franck_pachot@superb-avatar-210409:~$ time python3 test.py
 
real 2m52.707s
user 0m21.776s
sys 0m0.668s
franck_pachot@superb-avatar-210409:~$

If you remember my list of blogs on Variations on 1M rows insert, that’s not so fast. But remember that the rows are distributed across 3 nodes on 3 continents, and that inserting constantly increasing key values sends all batched rows to the same node. The PRIMARY KEY in Google Spanner is not only there to declare a constraint; it also determines the organization of the data.

Query

The select can also be run from there, from a read-only transaction called a ‘Snapshot’ because it does MVCC consistent reads:

franck_pachot@superb-avatar-210409:~$ python3
Python 3.5.3 (default, Jan 19 2017, 14:11:04)
[GCC 6.3.0 20170118] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from google.cloud import spanner
>>> with spanner.Client().instance('franck').database('test').snapshot() as snapshot:
...     results = snapshot.execute_sql('SELECT COUNT(*) FROM DEMO1')
...     for row in results:
...         print(row)
...
[1000000]

The advantage of the read-only transaction is that it can do consistent reads without locking. The queries executed in a read-write transaction have to acquire some locks in order to guarantee consistency when reading across multiple nodes.
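
The same count can also be run from the shell, handy for quick checks, assuming your gcloud version ships the execute-sql command (it runs as a single-use read-only transaction):

gcloud spanner databases execute-sql test --instance=franck \
  --sql='SELECT COUNT(*) FROM DEMO1'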

Interleave

So, you can look at the PRIMARY KEY as a partition by range, and we also have reference partitioning with INTERLEAVE IN PARENT. This reminds me of the Oracle CLUSTER segment, which is so rarely used because storing the tables separately is usually the better compromise between performance and flexibility for a multi-purpose database.

Here is my creation of DEMO2 where ID1 is a foreign key referencing DEMO1

franck_pachot@superb-avatar-210409:~$ time gcloud spanner databases ddl update test --instance=franck --ddl='create table DEMO2 ( ID1 int64, ID2 int64, TEXT string(max) ) primary key (ID1,ID2), interleave in parent DEMO1 on delete cascade'
DDL updating...done.
'@type': type.googleapis.com/google.protobuf.Empty
 
real 0m24.418s
user 0m0.356s
sys 0m0.088s

I’m now inserting 5 detail rows for each parent row:

from google.cloud import spanner
database = spanner.Client().instance('franck').database('test')
for j in range(1000):
    records = []
    for i in range(1000):
        for k in range(5):
            records.append((1+j*1000+i, k, u'XXX'+str(i)+' '+str(k)))
    with database.batch() as batch:
        batch.insert(table='DEMO2', columns=('ID1','ID2','TEXT'), values=records)

This ran in 6 minutes.

Join (Cross Apply)

Here is the execution plan for

SELECT * FROM DEMO1 join DEMO2 using(ID1) where DEMO2.TEXT=DEMO1.TEXT

where I join the two tables and apply a filter on the join:
[Screenshot: execution plan with local Cross Apply]

Thanks to the INTERLEAVE, the join runs locally. Each row from DEMO1 (the Input of the Cross Apply) is joined with DEMO2 (the Map of the Cross Apply) locally. Only the result is serialized. On this small number of rows we do not see the benefit of having the rows on multiple nodes. There are only 2 nodes with rows here (2 local executions) and probably one node contains most of the rows. The average time per node is 10.72 seconds and the elapsed time is 20.9 seconds, so I guess that one node ran in 20.9 seconds and the other in 1.35 seconds only.

The same query without the tables interleaved (created here as DEMO3) is faster to insert, but the join is more complex because DEMO1 must be distributed to all nodes.
[Screenshot: execution plan with Distributed Cross Apply]
Without interleave, the input table of the local Cross Apply is a Batch Scan, which is actually like a temporary table distributed to all nodes (it seems to have 51 chunks here), created by the ‘Create Batch’. This is called a Distributed Cross Apply.

So what?

Google Spanner has only some aspects of SQL and relational databases. But, like the NoSQL databases, it is still a database where the data model is focused on a single use case, because the data model and the data organization have to be designed for a specific data access pattern.

 

The article Google Cloud Spanner – inserting data first appeared on the dbi services blog.

SYS_CONTEXT('userenv','module') behaviour in Database Vault

Tom Kyte - Thu, 2018-07-19 01:46
Hello Tom, I have implemented DB Vault on a 12.2.0.1.0 Oracle database. I created a Vault policy to block adhoc access to application schema using DB tools like Toad etc. The policy should allow only application connection to DB from application s...
Categories: DBA Blogs

Speaking At DOAG 2018 Conference And IT Tage 2018

Randolf Geist - Wed, 2018-07-18 15:29
I will be speaking at the yearly DOAG conference in December as well as at the IT Tage in November. My talk will be "Oracle Optimizer System Statistics Update 2018" where I summarize the history and current state of affairs regarding System Statistics and I/O calibration in recent Oracle versions like 12c and 18c.

A Quick Look At What's New In Oracle JET v5.1.0

OTN TechBlog - Wed, 2018-07-18 12:11

On June 18th, the v5.1.0 release of Oracle JET was made available. It was the 25th consecutive on-schedule release for Oracle JET. Details on the release schedule are provided here in the FAQ.

As indicated by the release number, v5.1.0 is a minor release, aimed at tweaking and consolidating features throughout the toolkit. As in other recent releases, new features have been added to support development of composite components, following the Composite Component Architecture (CCA). For details, see the entry on the new Template Slots in Duncan Mills's blog. Also, take note of the new design time metadata, as described in the release notes.

Aside from the work done in the CCA area, the key new features and enhancements to be aware of in the release are listed below, sorted alphabetically:

- oj-chart: new "data" attribute. Introduces new attributes, slots, and custom elements.
- oj-film-strip: new "looping" attribute. Specifies filmstrip navigation behavior, bounded ("off") or looping ("page").
- oj-form-layout: enhanced content flexibility. Removes restrictions on the types of children allowed in the "oj-form-layout" component.
- oj-gantt: new "dnd" attribute and "ojMove" event. Provides new support for moving tasks via drag and drop.
- oj-label-value: new component. Provides enhanced layout flexibility for the "oj-form-layout" component.
- oj-list-view: enhanced "itemTemplate" slot. Supports including the <LI> element in the template.
- oj-swipe-actions: new component. Provides a declarative way to add swipe-to-reveal functionality to items in the "oj-list-view" component.

For all the details on the items above, see the release notes.

Note: Be aware that in Oracle JET 7.0.0, support for Yeoman and Grunt will be removed from generator-oraclejet and ojet-cli. As a consequence, the ojet-cli will be the only way to use the Oracle JET tooling, e.g., to create new Oracle JET projects from that point on. Therefore, if you haven't transferred from using Yeoman and Grunt to ojet-cli yet, e.g., to command line calls such as "ojet create", take some time to move in that direction before the 7.0.0 release.
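
If you are still on Yeoman and Grunt, the switch is small; here is a minimal sketch of the equivalent ojet-cli workflow (the app name is just an example):

npm install -g @oracle/ojet-cli
ojet create myjetapp      # replaces 'yo oraclejet myjetapp'
cd myjetapp
ojet serve                # replaces the grunt serve task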

As always, your comments and constructive feedback are welcome. If you have questions, or comments, please engage with the Oracle JET Community in the Discussion Forums and also follow @OracleJET on Twitter.

For organizations using Oracle JET in production, you're invited to be highlighted on the Oracle JET site, with the latest addition being a brand new Customer Success Story by Capgemini.

On behalf of the entire Oracle JET development team: "Happy coding!"
