Feed aggregator

Accelerate your #BI Performance with #Exasol

The Oracle Instructor - 2 hours 42 min ago

Do your BI users complain about the slow performance of their analytical queries? Is this your status quo?

Tableau was taken as a popular example for ad-hoc analytics, but it might be any of the others, like MicroStrategy or Looker, you name it. The good news is that this problem can be solved quite easily and without having to spend a fortune trying to speed up your legacy DWH to keep up with the BI demands:

Exasol High Performance Sidecar

Using Exasol as a High Performance Sidecar to take away the pain from your BI users is an easy and fast cure for your problem! This is actually the most common way Exasol arrives at companies. More often than not, it later leads to a complete replacement of the legacy DWH by Exasol:

Exasol replaces legacy DWH

That’s what adidas, Otto and Zalando did, to name a few of our customers.

Don’t take our word for it; we are more than happy to do a PoC!

Categories: DBA Blogs

Foglight

Yann Neuhaus - Mon, 2018-12-10 11:37

Like many Oracle database administrators, I am a regular user of Oracle Enterprise Manager for monitoring or administering my clients’ databases. I am lucky because most of them have the Diagnostic Pack and the Tuning Pack :=).

But we always have to be curious and discover new products. As described by my colleague Nicolas Penot, I installed Foglight:

https://blog.dbi-services.com/foglight-monitoring-solution-for-databases-part-01/

https://blog.dbi-services.com/foglight-monitoring-solution-for-databases-part-02/

The installation was successful and very quick to complete. Remember that when you have to install OEM 13c, even if you know the product well and have a powerful server, you are pretty efficient if you can get it installed in half a day.

I successfully discovered two Oracle databases (versions 12.2.0.1 and 18.3.0.0). As you can see in the picture below, you can also monitor MySQL, Postgres, DB2 and SQL Server.

fg1

 

The first database summary page shows a lot of useful information (process activity, memory activity, logical reads and so on):

fg2

 

fg3

We can navigate through different menus, for example the Storage Tablespace Summary:

fg4

We can display the datafile I/O summary:

fg5

 

We can display the pluggable databases and get an overview of workload metrics with nice graphs:

fg6

 

fg7

 

Foglight also displays the alert.log file error messages in four categories (Informational, Critical, Warning or Fatal).

To test this, I created false critical messages in the alert.log file, using dbms_system.ksdwrt() to generate fake ORA-00600 or ORA-04031 error messages (a sketch of the call follows the screenshot):

fg8
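For reference, this is roughly the kind of call I used; dbms_system.ksdwrt is an undocumented procedure (use it in test environments only), and the first argument selects the destination (1 = trace file, 2 = alert.log, 3 = both):

begin
  -- dest = 2 writes the message to the alert.log only
  sys.dbms_system.ksdwrt(2, 'ORA-00600: internal error code, arguments: [fake]');
  sys.dbms_system.ksdwrt(2, 'ORA-04031: unable to allocate 4096 bytes of shared memory (fake)');
end;
/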

 

You can visualize, enable or disable the alarms, and edit and modify the collections; the configuration is intuitive:

fg9

The report menu is also interesting: many templates for different databases are predefined:

fg10

It is very difficult to choose between Foglight and OEM. My first opinion (although I did not test Foglight in an active production environment) is that Foglight is a monitoring solution for many types of databases (MySQL, Oracle, Postgres, SQL Server, DB2), whereas OEM is a monitoring and administration tool for Oracle databases.

Admittedly, you can add the SQL Server or MySQL plugins, or the Medora plugin for Postgres, to OEM in order to administer SQL Server, MySQL or Postgres databases, but you will never reach the powerful functionality OEM offers for monitoring or administering Oracle databases.

The big advantage of both OEM and Foglight is their thresholds, their notification functionality and their reporting facilities. I did not test SQL PI in Foglight much, and I am asking myself whether, as with Oracle, we have the possibility to see a historical graph of what happened the week before, to display the execution plan and, of course, to run the SQL Tuning Advisor to propose another execution plan.

The time dedicated to administering Foglight seems to be equivalent to OEM. You cannot install such enterprise products and just let them live: like OEM, Foglight seems to be a complex system and requires significant time to run effectively.

If you do not have a lot of databases or a big budget, you can use simple SQL scripts to keep yourself alerted. If you have several different types of databases to monitor, it might be a good idea to test and use Foglight. If most of your databases are Oracle-based and you have paid for the Tuning and Diagnostic Packs, use OEM to fully benefit from its administration and monitoring console.
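As a minimal sketch of such a script (the 90% threshold, and how you schedule and mail the output, are assumptions to adapt to your environment), a tablespace usage check could look like this:

-- list tablespaces that are more than 90% full;
-- schedule it (cron, DBMS_SCHEDULER) and mail the output to get alerted
select t.tablespace_name,
       round(100 * (1 - f.free_mb / t.total_mb), 1) as pct_used
from  (select tablespace_name, sum(bytes) / 1024 / 1024 as total_mb
       from   dba_data_files
       group by tablespace_name) t
join  (select tablespace_name, sum(bytes) / 1024 / 1024 as free_mb
       from   dba_free_space
       group by tablespace_name) f
  on  f.tablespace_name = t.tablespace_name
where (1 - f.free_mb / t.total_mb) > 0.90;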

 

The post Foglight appeared first on Blog dbi services.

Case Study

Jonathan Lewis - Mon, 2018-12-10 07:10

A recent thread on the ODC database forum highlighted a case where the optimizer was estimating 83,000 for a particular index full scan when the SQL Monitor output for the operation showed that it was returning 11,000,000 rows.

Apart from the minor detail that the OP didn’t specifically ask a question, the information supplied was pretty good. The OP had given us a list of bind variables, with values, and the SQL statement, followed by the text output of the Monitor’ed SQL and, to get the predicate section of the plan, the output from a call to dbms_xplan. This was followed by the DDL for the critical index and a list of the stats for all the columns in the index.

Here’s the critical line of the plan (from the SQL Monitor report) followed by its predicate section (from the dbms_xplan output, but cosmetically enhanced) and some details of the columns used in the predicate:

SQL Plan Monitoring Details (Plan Hash Value=3210215320)
=================================================================================================================================================================================================================================
| Id    |            Operation            |         Name            |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read  | Read  | Write | Write | Mem  | Temp | Activity |       Activity Detail       | Progress | 
|       |                                 |                         | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs  | Bytes | Reqs  | Bytes |      |      |   (%)    |         (# samples)         |          |
=================================================================================================================================================================================================================================
|    11 |             INDEX FULL SCAN     | PK_HOUSEHOLD_GDC        |   83917 | 22799 |        86 |     +1 |     1 |      11M |     9 | 73728 |       |       |      |      |    24.21 | Cpu (77)                    |          |
=================================================================================================================================================================================================================================

  11 - filter(
        (    TO_DATE(:SYS_B_00||TO_CHAR("MONTH")||:SYS_B_01||TO_CHAR("YEAR"),:SYS_B_02)>=ADD_MONTHS(TRUNC(TO_DATE(:SYS_B_03,:SYS_B_04),:SYS_B_05),(-:SYS_B_06)) 
         AND TO_DATE(:SYS_B_00||TO_CHAR("MONTH")||:SYS_B_01||TO_CHAR("YEAR"),:SYS_B_02)<=TRUNC(TO_DATE(:SYS_B_07,:SYS_B_08),:SYS_B_09)-:SYS_B_10)
        )

COLUMN_NAME                    DATA_TYPE       NUM_DISTINCT  DENSITY  NUM_NULLS LAST_ANALYZED       HISTOGRAM
------------------------------ --------------- ------------ -------- ---------- ------------------- ---------------
YEAR                           NUMBER                     5        0          0 2018-12-02 13:19:10 FREQUENCY
MONTH                          NUMBER                    12        0          0 2018-12-02 13:19:10 FREQUENCY

I’ve included the full Monitor output at the end of the posting, or you could visit the ODC page if you want to see it, but if we look at just this line we can see that the index full scan starts running in the first second of the query (‘Start Active’), runs once (‘Execs’) and, as the OP said, retrieved 11M rows in that one scan compared to an estimated 83,917.

When we examine the predicate section we can understand why the optimizer could make such a large error – the SQL requires Oracle to combine two columns from the table with various bits of bind variables to construct a date, which is then compared with a couple of constant dates (derived from several input bind variables) using range-based comparisons.

This is an example of Oracle using a fixed estimate of 5% for the selectivity of “unknown range-based comparison” – but with two comparisons the selectivity becomes 5% of 5% = 0.25% (i.e. 1/400).

If we look at the column definitions and stats we see that we seem to have 5 possible years and 12 possible months (which could mean a range as small as 3 years and 2 months) – so a selectivity of 1/400 would be in the right ballpark if we were querying for a date range of roughly 4.5 days. Working the figures the other way around – if 83,917 is 1/400 of the data then there are about 33.5M rows in the table and we are querying for something more like 1/3 of the table.

Observations

I find it curious that the optimizer used an “index full scan” to fetch a huge amount of data from the index when there is no requirement for sorting (there is a subsequent “hash unique”, rather than “sort unique nosort”). I would have expected an “index fast full scan” so I am curious to know if some optimizer parameters have been fiddled with to get the optimizer to bypass the fast full scan. Possibly a change in parameter settings would result in a very different plan.
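If experimenting is possible, a quick way to test that idea would be to force the fast full scan with an index_ffs() hint and compare the resulting plan and cost. This is only a sketch, since the original query isn’t shown; the table name below is an assumption (the post only names the index PK_HOUSEHOLD_GDC):

select  /*+ index_ffs(h pk_household_gdc) */
        count(*)
from    household_gdc h    -- hypothetical table name behind the index
;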

The names of the bind variables are of the form “SYS_B_nn” – which means that the original query has been subject to the effects of forced cursor sharing. Since we are apparently expecting to identify and manipulate millions of rows this looks like the type of query where you don’t want to use cursor sharing. If the session can set “cursor_sharing=exact” before running the query, or inject the hint /*+ cursor_sharing_exact */ into the query then perhaps we’d get a better estimate of rows (and a better plan). If hinting or setting session parameters is possible then setting optimizer_dynamic_sampling to level 3, or possibly 4, might be sufficient.
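As a minimal sketch of those options (the query shape here is hypothetical, using the t1 demo table built below):

-- session level: stop the literal replacement and raise dynamic sampling
alter session set cursor_sharing = exact;
alter session set optimizer_dynamic_sampling = 3;

-- statement level alternative, using hints
select  /*+ cursor_sharing_exact dynamic_sampling(4) */
        count(*)
from    t1
where   month = 4
and     year = 2018
;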

The messy expression combining month and year is a crippling handicap to the optimizer – so fixing the query to make the literals visible isn’t actually going to help. This is Oracle 12c, though – so we could add a virtual date column (declared as invisible to avoid the threat of inserts that don’t specify column lists) and gather stats on it. The combination of virtual column and literal values might give the optimizer the information it really needs. Here’s a little script to demonstrate:


rem
rem     Script:         virtual_study.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2018
rem     Purpose:
rem
rem     Last tested
rem             12.1.0.2

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        sysdate - (5 * 365) + rownum / 550      d1,
        to_number(
                to_char(
                        (sysdate - (5 * 365) + rownum / 550),
                        'MM'
                )
        )                                       month,
        to_number(
                to_char(
                        (sysdate - (5 * 365) + rownum / 550),
                        'YYYY'
                )
        )                                       year,
        lpad(rownum,10,'0')                     v1
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format issue
;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1 for columns month size 12 for columns year size 6'
        );
end;
/

I’ve created a table with a million rows with data going back roughly 5 years from current date, which means I need roughly 550 rows per day. I’ve then created histograms on the month and year columns to match the original posting. Now I’ll set up the bind variables and values specified by the OP and run a simple query to show the date information that the bind variables give, and the 1/400 selectivity of the OP’s predicate:


var SYS_B_00 varchar2(32);
var SYS_B_01 varchar2(32);
var SYS_B_02 varchar2(32);
var SYS_B_03 varchar2(32);
var SYS_B_04 varchar2(32);
var SYS_B_05 varchar2(32);
var SYS_B_06 number;
var SYS_B_07 varchar2(32);
var SYS_B_08 varchar2(32);
var SYS_B_09 varchar2(32);
var SYS_B_10 number;

exec :SYS_B_00:='01/';
exec :SYS_B_01:='/';
exec :SYS_B_02:='dd/MM/yyyy';
exec :SYS_B_03:='10/04/2018';
exec :SYS_B_04:='MM/dd/yyyy';
exec :SYS_B_05:='q';
exec :SYS_B_06:=12;
exec :SYS_B_07:='10/04/2018';
exec :SYS_B_08:='MM/dd/yyyy';
exec :SYS_B_09:='q';
exec :SYS_B_10:=1;

select
        to_date(:sys_b_00||to_char(month)||:sys_b_01||to_char(year),:sys_b_02)  d1, 
        add_months(trunc(to_date(:sys_b_03,:sys_b_04),:sys_b_05),(-:sys_b_06))  c1,
        to_date(:sys_b_00||to_char(month)||:sys_b_01||to_char(year),:sys_b_02)  d2,
        trunc(to_date(:sys_b_07,:sys_b_08),:sys_b_09)-:sys_b_10                 c2
from
        t1
where
        rownum = 1
;

set serveroutput off
alter session set statistics_level = all;

select  count(*)
from    t1
where
        (    to_date(:sys_b_00||to_char(month)||:sys_b_01||to_char(year),:sys_b_02) >= add_months(trunc(to_date(:sys_b_03,:sys_b_04),:sys_b_05),(-:sys_b_06)) 
         and to_date(:sys_b_00||to_char(month)||:sys_b_01||to_char(year),:sys_b_02) <= trunc(to_date(:sys_b_07,:sys_b_08),:sys_b_09)-:sys_b_10 )
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

===========================================

D1        C1        D2        C2
--------- --------- --------- ---------
01-DEC-13 01-OCT-17 01-DEC-13 30-SEP-18


  COUNT(*)
----------
    200750

--------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:07.39 |    4980 |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:07.39 |    4980 |
|*  2 |   FILTER            |      |      1 |        |    200K|00:00:06.42 |    4980 |
|*  3 |    TABLE ACCESS FULL| T1   |      1 |   2500 |    200K|00:00:04.59 |    4980 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(TRUNC(TO_DATE(:SYS_B_07,:SYS_B_08),:SYS_B_09)-:SYS_B_10 .ge. ADD_MON
              THS(TRUNC(TO_DATE(:SYS_B_03,:SYS_B_04),:SYS_B_05),(-:SYS_B_06)))
   3 - filter((TO_DATE(:SYS_B_00||TO_CHAR("MONTH")||:SYS_B_01||TO_CHAR("YEAR")
              ,:SYS_B_02) .ge. ADD_MONTHS(TRUNC(TO_DATE(:SYS_B_03,:SYS_B_04),:SYS_B_05),(-:SYS_B
              _06)) AND TO_DATE(:SYS_B_00||TO_CHAR("MONTH")||:SYS_B_01||TO_CHAR("YEAR"),:SYS
              _B_02) .le. TRUNC(TO_DATE(:SYS_B_07,:SYS_B_08),:SYS_B_09)-:SYS_B_10))


Note: in this and subsequent execution plans I’ve had to use .le. to represent “less than or equal to” and .ge. to represent “greater than or equal to”.

This shows us that the first row in my table has a date component of 1st Dec 2013, while the date range required by the OP was one year’s worth of data between 1st Oct 2017 and 30th Sept 2018. The optimizer’s estimate of 2,500 rows out of 1M is the 1/400 we expect.

Let’s test the effect of running the query using literals (i.e. in the OP’s environment, stopping the “cursor_sharing = force” effect):


select
        count(*)
from    t1
where
        (    to_date('01/'||to_char(month)||'/'||to_char(year),'dd/MM/yyyy') >= add_months(trunc(to_date('10/04/2018','dd/MM/yyyy'),'q'),(-12)) 
         and to_date('01/'||to_char(month)||'/'||to_char(year),'dd/MM/yyyy') <= trunc(to_date('10/04/2018','dd/MM/yyyy'),'q')-1 )
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'))

========================================================

 COUNT(*)
----------
    200750


--------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |   892 (100)|      1 |00:00:05.17 |    4980 |
|   1 |  SORT AGGREGATE    |      |      1 |      1 |            |      1 |00:00:05.17 |    4980 |
|*  2 |   TABLE ACCESS FULL| T1   |      1 |   2500 |   892  (30)|    200K|00:00:04.30 |    4980 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter((TO_DATE('01/'||TO_CHAR("MONTH")||'/'||TO_CHAR("YEAR"),'dd/MM/yyyy') .ge. TO_DAT
              E(' 2017-04-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              TO_DATE('01/'||TO_CHAR("MONTH")||'/'||TO_CHAR("YEAR"),'dd/MM/yyyy') .le. TO_DATE(' 2018-03-31
              00:00:00', 'syyyy-mm-dd hh24:mi:ss')))


We can see that the literals have echoed through the plan to the predicate section, but the optimizer hasn’t changed its estimate. Let’s create the virtual column, gather stats on it, and try again:


alter table t1 add v_date invisible generated always as (
        to_date('01/'||to_char(month)||'/'||to_char(year),'dd/MM/yyyy')
) virtual
;

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for columns v_date size 1')

select  /* virtual column */
        count(*)
from    t1
where
        (    to_date('01/'||to_char(month)||'/'||to_char(year),'dd/MM/yyyy') >= add_months(trunc(to_date('10/04/2018','dd/MM/yyyy'),'q'),(-12)) 
         and to_date('01/'||to_char(month)||'/'||to_char(year),'dd/MM/yyyy') <= trunc(to_date('10/04/2018','dd/MM/yyyy'),'q')-1 )
;

 select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'));

=======================================================================

 COUNT(*)
----------
    200750

--------------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |   950 (100)|      1 |00:00:06.27 |    4980 |
|   1 |  SORT AGGREGATE    |      |      1 |      1 |            |      1 |00:00:06.27 |    4980 |
|*  2 |   TABLE ACCESS FULL| T1   |      1 |    236K|   950  (34)|    200K|00:00:04.78 |    4980 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter((TO_DATE('01/'||TO_CHAR("MONTH")||'/'||TO_CHAR("YEAR"),'dd/MM/yyyy') .ge. TO_DAT
              E(' 2017-04-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              TO_DATE('01/'||TO_CHAR("MONTH")||'/'||TO_CHAR("YEAR"),'dd/MM/yyyy') .le. TO_DATE(' 2018-03-31
              00:00:00', 'syyyy-mm-dd hh24:mi:ss')))



The optimizer sees that the expression involving month and year matches the virtual column definition, evaluates the two date expressions to produce simple constants, and gives us a cardinality estimate in the right ballpark.

Conclusion

Cursor sharing and “big” queries don’t mix. If you have queries that have to manipulate large volumes of data then the overhead of optimising each one separately is likely to be insignificant, and the threat of cardinality errors introduced by bind variables being re-used could be significant.

If you have to make use of an existing (bad) table definition, and can’t manage to write predicates that allow the optimizer to use existing column statistics, remember that you might be able to create a virtual (and invisible) column that captures the necessary definition, thereby allowing you to give Oracle some statistics about the critical predicate.

Footnote

In case you didn’t want to scan through the ODC page, here’s the full SQL Monitor output for the original query:


Global Stats
==============================================================================================
| Elapsed |   Cpu   |    IO    | Cluster  |  Other   | Buffer | Read | Read  | Write | Write |
| Time(s) | Time(s) | Waits(s) | Waits(s) | Waits(s) |  Gets  | Reqs | Bytes | Reqs  | Bytes |
==============================================================================================
|     320 |      76 |      140 |       39 |       66 |     8M | 257K |   2GB |  1528 | 306MB |
==============================================================================================
 
 
SQL Plan Monitoring Details (Plan Hash Value=3210215320)
=================================================================================================================================================================================================================================
| Id    |            Operation            |         Name            |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read  | Read  | Write | Write | Mem  | Temp | Activity |       Activity Detail       | Progress | 
|       |                                 |                         | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs  | Bytes | Reqs  | Bytes |      |      |   (%)    |         (# samples)         |          |
=================================================================================================================================================================================================================================
|  -> 0 | SELECT STATEMENT                |                         |         |       |       180 |   +142 |     1 |        0 |       |       |       |       |      |      |          |                             |          |
|  -> 1 |   SORT UNIQUE                   |                         |    1093 | 52574 |       180 |   +142 |     1 |        0 |       |       |   534 | 107MB |   2M | 113M |     0.94 | Cpu (3)                     |          |
|  -> 2 |    NESTED LOOPS                 |                         |    1093 | 52573 |       180 |   +142 |     1 |       3M |       |       |       |       |      |      |     0.31 | Cpu (1)                     |          |
|  -> 3 |     NESTED LOOPS                |                         |    1118 | 52573 |       180 |   +142 |     1 |       3M |       |       |       |       |      |      |     0.31 | Cpu (1)                     |          |
|  -> 4 |      HASH JOIN RIGHT SEMI       |                         |    1118 | 52238 |       189 |   +133 |     1 |       3M |       |       |       |       | 153M |      |     1.57 | Cpu (5)                     |          |
|     5 |       VIEW                      |                         |    157K | 31145 |         9 |   +134 |     1 |       2M |       |       |       |       |      |      |          |                             |          |
|     6 |        WINDOW SORT              |                         |    157K | 31145 |        57 |    +86 |     1 |       4M |  3777 | 199MB |   994 | 199MB |      |      |     3.14 | Cpu (5)                     |     100% |
|       |                                 |                         |         |       |           |        |       |          |       |       |       |       |      |      |          | direct path read temp (5)   |          |
|     7 |         HASH JOIN               |                         |    157K | 29653 |        50 |    +85 |     1 |       4M |       |       |       |       |      |      |     1.26 | Cpu (4)                     |          |
|     8 |          VIEW                   |                         |   81771 | 23273 |         1 |    +86 |     1 |       1M |       |       |       |       |      |      |          |                             |          |
|     9 |           HASH UNIQUE           |                         |   81771 | 23273 |        75 |    +12 |     1 |       1M |       |       |       |       |      |      |     1.89 | Cpu (6)                     |          |
|    10 |            FILTER               |                         |         |       |        78 |     +9 |     1 |      11M |       |       |       |       |      |      |     0.31 | Cpu (1)                     |          |
|    11 |             INDEX FULL SCAN     | PK_HOUSEHOLD_GDC        |   83917 | 22799 |        86 |     +1 |     1 |      11M |     9 | 73728 |       |       |      |      |    24.21 | Cpu (77)                    |          |
|    12 |          INDEX FULL SCAN        | PK_ADV_HOUSEHOLD_ACCT   |      8M |  6332 |        49 |    +86 |     1 |       8M |       |       |       |       |      |      |    12.58 | gc cr block 2-way (37)      |          |
|       |                                 |                         |         |       |           |        |       |          |       |       |       |       |      |      |          | gc current block 2-way (3)  |          |
| -> 13 |       INDEX FULL SCAN           | PK_ADV_HOUSEHOLD_ACCT   |      8M |  6332 |       180 |   +142 |     1 |       7M |       |       |       |       |      |      |     0.63 | Cpu (2)                     |          |
| -> 14 |      INDEX RANGE SCAN           | IDX4_LPL_BETA_CUST_RLTN |       1 |     1 |       181 |   +141 |    3M |       3M | 75759 | 592MB |       |       |      |      |    23.27 | gc current grant 2-way (1)  |          |
|       |                                 |                         |         |       |           |        |       |          |       |       |       |       |      |      |          | Cpu (21)                    |          |
|       |                                 |                         |         |       |           |        |       |          |       |       |       |       |      |      |          | db file parallel read (52)  |          |
| -> 15 |     TABLE ACCESS BY INDEX ROWID | IMPL_LPL_BETA_CUST_RLTN |       1 |     1 |       180 |   +142 |    3M |       3M |  177K |   1GB |       |       |      |      |    29.56 | Cpu (12)                    |          |
|       |                                 |                         |         |       |           |        |       |          |       |       |       |       |      |      |          | db file parallel read (81)  |          |
|       |                                 |                         |         |       |           |        |       |          |       |       |       |       |      |      |          | db file sequential read (1) |          |
=================================================================================================================================================================================================================================

Oracle Cloud Infrastructure (OCI): Compute Update Dec 2018

Online Apps DBA - Mon, 2018-12-10 06:45

Do you know what the OCI Compute updates are for Dec 2018? Upgrade yourself with new information in my [blog] Oracle Cloud Infrastructure (OCI): Compute Update Dec 2018 at https://k21academy.com/oci23 and grab the knowledge about: ✔ Things You Should Know About Compute ✔ OCI Compute New Features: Dec 2018 ✔ Changes in VCN (Virtual Cloud […]

The post Oracle Cloud Infrastructure (OCI): Compute Update Dec 2018 appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Disabling Spring Security if you don't require it

Pas Apicella - Sun, 2018-12-09 17:42
When using the Spring Cloud Services Starter Config Client dependency, for example, Spring Security will also be included (Config Servers are protected by OAuth2). As a result, basic authentication will also be enabled on all the service endpoints of your application, which may not be the desired result if you’re just building a demo, for example.

Add the following to conditionally disable security in your Spring Boot main class
  
package com.example.employeeservice;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.WebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@SpringBootApplication
@EnableDiscoveryClient
public class EmployeeServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(EmployeeServiceApplication.class, args);
    }

    // Nested security configuration that tells Spring Security to ignore
    // every request, switching off the basic authentication that the
    // starter dependency pulls in transitively.
    @Configuration
    static class ApplicationSecurity extends WebSecurityConfigurerAdapter {

        @Override
        public void configure(WebSecurity web) throws Exception {
            web
                .ignoring()
                .antMatchers("/**");
        }
    }
}
Categories: Fusion Middleware

Polymorphic Table Functions – Part 2

Bar Solutions - Sun, 2018-12-09 00:04

In my previous post I wrote about a possible use-case for Polymorphic Table Functions. I also wrote that I had some ideas for implementing extra functionality to make the function more generic.
After attending DOAG Konferenz und Ausstellung 2018 and UKOUG Tech18 I finally came around to publishing my new version.
Let’s start off with a table to hold my data; I chose to use the NATO phonetic alphabet for this:

create table t_alphabet
(thecount number
,alphabet varchar2(4000)
)
/
begin
  insert into t_alphabet(alphabet, thecount) values ('Alfa;Bravo;Charlie;Delta;Echo;Foxtrot;Golf',7);
  insert into t_alphabet(alphabet, thecount) values ('Hotel;India;Juliett;Kilo;Lima;Mike;November',7);
  insert into t_alphabet(alphabet, thecount) values ('Oscar;Papa;Quebec;Romeo;Sierra;Tango;Uniform',7);
  insert into t_alphabet(alphabet, thecount) values ('Victor;Whiskey;;X-ray;Yankee;;Zulu',5);
  insert into t_alphabet(alphabet, thecount) values ('Alfa.Bravo.Charlie.Delta.Echo.Foxtrot.Golf',7);
  insert into t_alphabet(alphabet, thecount) values ('Hotel.India.Juliett.Kilo.Lima.Mike.November',7);
  insert into t_alphabet(alphabet, thecount) values ('Oscar.Papa.Quebec.Romeo.Sierra.Tango.Uniform',7);
  insert into t_alphabet(alphabet, thecount) values ('Victor.Whiskey..X-ray.Yankee..Zulu',5);
  commit;
end;
/

First of all, I learned from attending the presentations by Andrej Pashchenko (Polymorphic Table Functions in 18c: Einführung und Beispiele) and Keith Laker (Patterns and Use Cases For Polymorphic Tables) that there is no need for global (package) variables to have access to the parameters supplied. The more I can rely on Oracle to take care of the values of variables, the better I like it.
I won’t bore you with all the intermediate versions of the code; let’s jump straight into the ‘final’ result.
The package:

create or replace package separated_ptf is
  function describe(tab        in out dbms_tf.table_t
                   ,cols       in dbms_tf.columns_t default null
                   ,coltosplit in varchar2 default null
                   ,separator  in varchar2 default ';') return dbms_tf.describe_t;

  procedure fetch_rows(coltosplit in varchar2 default null
                      ,separator  in varchar2 default ';');
end separated_ptf;
/

The package body:

create or replace package body separated_ptf as
  function describe(tab        in out dbms_tf.table_t
                   ,cols       in dbms_tf.columns_t default null
                   ,coltosplit in varchar2 default null
                   ,separator  in varchar2 default ';') return dbms_tf.describe_t as
    -- metadata for column to add
    l_new_col dbms_tf.column_metadata_t;
    -- table of columns to add
    l_new_cols dbms_tf.columns_new_t;
    -- make sure the column to split is in the correct format (uppercase with doublequotes)
    l_coltosplit dbms_quoted_id := dbms_assert.enquote_name(str => coltosplit, capitalize => true);
  begin
    -- if the coltosplit parameter is null then
    if coltosplit is null then
      -- Mark the first column ReadOnly and don't display it anymore
      tab.column(1).for_read := true;
      tab.column(1).pass_through := false;
    else
      -- if the coltosplit parameter is not null then
      -- check every column from the source table
      for indx in tab.column.first .. tab.column.last loop
        -- if this is the column we want to split then
        if tab.column(indx).description.name = l_coltosplit then
          -- Mark this column ReadOnly and don't display it anymore
          tab.column(indx).for_read := true;
          tab.column(indx).pass_through := false;
        end if;
      end loop;
    end if;
    -- Add the new columns, as specified in the cols parameter
    for indx in 1 .. cols.count loop
      -- define metadata for column named cols(indx)
      -- that will default to a datatype of varchar2 with
      -- a length of 4000
      l_new_col := dbms_tf.column_metadata_t(name => cols(indx));
      -- add the new column to the list of new columns
      l_new_cols(l_new_cols.count + 1) := l_new_col;
    end loop;
    -- Instead of returning NULL we will RETURN a specific
    -- DESCRIBE_T that adds new columns
    return dbms_tf.describe_t(new_columns => l_new_cols);
  end;

  procedure fetch_rows(coltosplit in varchar2 default null
                      ,separator  in varchar2 default ';') is
    -- define a table type of varchar2 tables
    type colset is table of dbms_tf.tab_varchar2_t index by pls_integer;
    -- variable to hold the rowset as retrieved
    l_rowset dbms_tf.row_set_t;
    -- variable to hold the number of rows as retrieved
    l_rowcount pls_integer;
    -- variable to hold the number of put columns
    l_putcolcount pls_integer := dbms_tf.get_env().put_columns.count;
    -- variable to hold the new values
    l_newcolset colset;
    -- get the name of the column to be split from the get columns
    l_coltosplit dbms_quoted_id := trim('"' from dbms_tf.get_env().get_columns(1).name);
  begin
    --    dbms_tf.Trace(dbms_tf.Get_Env);
    -- fetch rows into a local rowset
    -- at this point the rows will have columns
    -- from the the table/view/query passed in
    dbms_tf.get_row_set(l_rowset, l_rowcount);
    -- for every row in the rowset...
    for rowindx in 1 .. l_rowcount loop
      -- for every column
      for colindx in 1 .. l_putcolcount loop
        -- split the row into separate values
        --  FUNCTION Row_To_Char(rowset Row_Set_t,
        --                       rid    PLS_INTEGER,
        --                       format PLS_INTEGER default FORMAT_JSON)
        --           return VARCHAR2;
        -- splitting the regexp way: http://nuijten.blogspot.com/2009/07/splitting-comma-delimited-string-regexp.html
        l_newcolset(colindx)(rowindx) := trim(separator from regexp_substr(json_value(dbms_tf.row_to_char(l_rowset, rowindx), '$.' || l_coltosplit)
                                                           ,'[^' || separator || ']*' || separator || '{0,1}'
                                                           ,1
                                                           ,colindx));
      end loop; -- every column
    end loop; -- every row in the rowset
    -- add the newly populated columns to the rowset
    for indx in 1 .. l_putcolcount loop
      dbms_tf.put_col(columnid => indx, collection => l_newcolset(indx));
    end loop;
  end;
end separated_ptf;
/

The wrapper function:

create or replace function separated_fnc(p_tbl      in table
                                        ,cols       columns default null
                                        ,coltosplit in varchar2 default null
                                        ,separator  in varchar2 default ';') return table
  pipelined row polymorphic using separated_ptf;
/

Back to the improvements I suggested earlier.
Supporting duplicate separators:
Using the regular expression ‘[^;]+’ didn’t make this possible, because a double ; (;;) would be regarded as one. So it had to be changed into ‘[^;]*;{0,1}’. This expression says (in my words): find all the characters which are not a ;, followed by zero or one ;. Since this results in a string with a ; at the end, I had to add the trim function around it. A quick illustration follows.
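Here is a small sketch showing why the * variant preserves empty elements (the expected items are Victor, Whiskey, an empty element and X-ray; with ‘[^;]+’ the empty element would be lost):

select level as pos
,      trim(';' from regexp_substr('Victor;Whiskey;;X-ray', '[^;]*;{0,1}', 1, level)) as item
from   dual
connect by level <= 4
/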
Making the column to be split up a parameter
To find the column to be split, I need to make the value look the same as the tab.column(indx).description.name value. I could of course add quotes around the parameter myself, but I think it is better to use the built-in sys.dbms_assert.enquote_name function to do this (a quick illustration follows). If the value is not supplied or null, I just (try to) split the first column.
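A small sketch of what this function does (the expected results are in the comments):

select dbms_assert.enquote_name('alphabet') as normal_name   -- capitalized and quoted: "ALPHABET"
,      dbms_assert.enquote_name('"MiXeD"')  as quoted_name   -- already quoted, left as-is: "MiXeD"
from   dual
/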
Making the separator character a parameter
This seemed like an easy task: just replace every ; in my code with a variable. But when running a couple of tests I received an error which put me on the wrong path.
Calling the function using positional parameters works like a charm, just as you would expect:
select *
from separated_fnc(t_alphabet, columns(first, second, third, fourth, fifth, sixth, seventh),'alphabet','.')
/

  THECOUNT FIRST                SECOND               THIRD                FOURTH               FIFTH                SIXTH                SEVENTH
---------- -------------------- -------------------- -------------------- -------------------- -------------------- -------------------- --------------------
         7 Alfa;Bravo;Charlie;D                                                                                                          
           elta;Echo;Foxtrot;Go                                                                                                          
           lf                                                                                                                            
         7 Hotel;India;Juliett;                                                                                                          
           Kilo;Lima;Mike;Novem                                                                                                          
           ber                                                                                                                           
         7 Oscar;Papa;Quebec;Ro                                                                                                          
           meo;Sierra;Tango;Uni                                                                                                          
           form                                                                                                                          
         5 Victor;Whiskey;X-ray                                                                                                          
           ;Yankee;Zulu                                                                                                                  
         7 Alfa                 Bravo                Charlie              Delta                Echo                 Foxtrot              Golf
         7 Hotel                India                Juliett              Kilo                 Lima                 Mike                 November
         7 Oscar                Papa                 Quebec               Romeo                Sierra               Tango                Uniform
         5 Victor               Whiskey                                   X-ray                Yankee                                    Zulu

8 rows selected

But when I tried to use named parameters like this:

select *
  from separated_fnc(t_alphabet, columns(first, second, third, fourth, fifth, sixth, seventh),coltosplit => 'alphabet',separator => '.')
/

it resulted in the following error:

ORA-62573: new column (FIRST) is not allowed with describe only polymorphic table function

After the DOAG conference I looked at this together with Chris Saxon, but we couldn’t figure out what was going on. So we contacted Keith Laker, and he told me that it was a bug in the version (18.3.0.0.0) of the database I am using and that it should be fixed in an upcoming (patch) release.
As long as you know about this behavior, I think it is quite a useful function (and with the upcoming fixes it will become even more useful).

If you have any suggestions or maybe ideas for other use cases for Polymorphic Table Functions, please don’t hesitate to use the comments.

Cloud Control 13c Release 3 Database Template

Michael Dinh - Sat, 2018-12-08 18:20

Got excited to find:
Creating a Database Instance with Preconfigured Repository Using Database Templates

Templates can be downloaded at:
Database Template (with EM 13.3.0.0 repository pre-configured) for Installing Oracle Enterprise Manager Cloud Control 13c Release 3 (13.3.0.0)

Got disappointed to find that the template is only available for 12.1.0.2.0 and 18.1.0.0.0 databases.

Creating a Database Instance with Preconfigured Repository Using Database Templates Caution:

Make sure that the 18.1.0.0.0_Database_Template_for_EM13_3_0_0_0_.zip file is used only for Enterprise Manager Cloud Control 13c Release 3.

It looks like the template version must match exactly, as the example below shows.

EM 13c: Creating an Enterprise Manager 13c Cloud Control Repository using a Database Template Fails: ORA-00603: ORACLE server session terminated by fatal error (Doc ID 2291220.1)	

While the 12.2.0.1 version of the database is supported for EM 13.2 repository use, a database template for the EM 13.2 repository database is only supplied for the 12.1.0.2 version of the database, and they are specific for the version:

Oracle skipped the 12.2 DB version when creating the DB templates, which is disappointing.

Does the database really need to be on 12.2?

First immersion in the Docker Conference EU 2018

Yann Neuhaus - Sat, 2018-12-08 12:02

In short, a very interesting event for both Devs and Ops. Every day was organized around workshops, hands-on-labs sessions and hallway tracks. My colleague Mehdi Bada and I tried to attend as many sessions as possible, but obviously it was difficult to cover all the topics.

blog 149 - 0 - dockercon small

Anyway, the workshops and hands-on-labs were very interesting, especially if you like to mix theory and practice. But I have to admit we sometimes struggled to keep up with the pace of some workshops. We ran into what I call the “context switch” issue of following the trainer’s explanation and doing the exercises at the same time :) The migrating .NET applications to Docker workshop with Elton Stoneman (Docker) is one that comes to mind in this case :)

As database specialists at dbi services we obviously had a special focus on storage-oriented topics, and we were interested in attending sessions and workshops in this area, including Use Cases and Practical Solutions for Docker Container Storage on Swarm and K8s with Don Stewart (Docker) and Mark Church (Docker), as well as the Container Storage Panel Q&A with Ed Beauvais (Oracle), Chris Brandon (Storage OS Inc) and Keith Hudgins (Docker). We got an overview of the different possible solutions for implementing file-based, block-based and object-based storage in order to address different workload patterns, including fileserver, OLTP, BigData etc. The container storage landscape is currently large and vendor-specific, but the Docker storage team announced plans to introduce first snapshot / restore capabilities and to provide a CSI (Container Storage Interface) to offer a simple, community-driven approach and a more predictable and functional interface for the most common use cases. Let’s see what happens in the future, but my guess (speculation mode) is that for “specific” applications like databases, vendor storage drivers will likely remain the most viable option when performance is at the heart of concerns.

blog 149 - 1 - storage drivers

Even if containers were formally designed to handle stateless applications, it is not uncommon to see databases in such infrastructures nowadays. After all, databases are also (special) applications, right? I was already convinced that containerization infrastructure is now mature enough to handle database workloads, and these sessions reinforced my strong belief that Docker Swarm or K8s are production-ready for databases, at least from a storage perspective.

We also got the opportunity to attend workshops and sessions around container orchestrator topics, mainly Docker Swarm and K8s orchestration. It was interesting to see the same question often raised by attendees during these sessions: do we have to use Swarm over K8s, or vice versa? This is also a question we are going to ask in a dbi services internal project, by the way, and obviously there is no black-or-white answer. What is certain is that Docker Swarm remains important for customers, as confirmed by Steve Singh during the first general session on Tuesday 4th December 2018. We got feedback from customer stories like Citizens Bank that the orchestrator choice depends on different factors, and in the context of this customer, they are using both from Docker EE :) We also attended other interesting Swarm and K8s infrastructure topics, including Swarm Orchestration – features and workflows by Bret Fisher (Docker Captain) and Container Networking for Swarm and Kubernetes in Docker Enterprise by Guillaume Morini (Docker). Finally, probably one of my favorite workshops was troubleshooting with sysdig by Michael Ducy. Sysdig is one of the well-known monitoring and troubleshooting tools for containers on the market. Let’s say it was a subtle combination of deep-dive immersion into Linux kernel principles and practical scenarios using the sysdig tools to fix container issues, as confirmed by my working desktop below:

blog 149 - 2- WS sysdig

In addition to the sessions, workshops and hands-on-labs, new announcements were made at DockerCon EU 2018 during the general sessions, with Steve Singh (CEO) and Scott Johnston (Chief Product Officer) as the main speakers.

blog 149 - 3 - dockercon general session 1

The first general session’s announcements included new tools such as docker-app and docker-assemble, enhanced docker stack support for both Swarm and Kubernetes since Docker EE 2.0, and probably the most anticipated one: Docker Desktop Enterprise. It turns out that the adoption of Docker Desktop by developers was a real success, but it was not really designed to scale to enterprise-class environments, and this is basically what the Docker Desktop Enterprise product is supposed to address.

It was also an opportunity to get some interesting figures about Docker (EE) adoption across the world:

  • 1M new developers
  • 5M new applications
  • 1B containers downloaded every week
  • 650+ customers on Docker EE
  • 76.4% of companies running mission-critical apps in containers in production

The last one is by far my favorite because it highlights that most Docker environments are no longer limited to developer-scoped environments. Moreover, it is worth noting that the following figures seem to point out that Docker is no longer just a visionary developer whim, and that it drives strong customer adoption thanks to an interesting ROI:

  • 69% differentiating products and services vs competitors
  • 70% increasing sales of products
  • 71% bringing products to market faster

Finally, let’s finish with the second general session, which was more Docker-community-oriented, and I know how important a community can be for contribution and for bringing people together. As Microsoft has its MVPs, Docker Captains are the Docker counterpart, and they were thanked for their wonderful contribution during this event. But obviously contribution goes beyond MVPs, ACEs or Docker Captains, and Kal De (CTO, EVP, Product Development) explained how to contribute to the different Docker projects and showed contribution figures from the community through GitHub:

  • Compose 1 MM monthly
  • 25K new compose files published on GitHub per week
  • 14K GitHub contributors – 280+ people

This first immersion in the Docker world was definitely a good experience and a great opportunity to feel the momentum around Docker and the future directions set by the company. I also appreciated the discussions and feedback from other attendees during the networking track, which will help us prepare our future challenges on this topic.

 

 

 

The post First immersion in the Docker Conference EU 2018 appeared first on Blog dbi services.

The First Open, Multi-cloud Serverless Platform for the Enterprise Is Here

Pas Apicella - Sat, 2018-12-08 05:30
That’s Pivotal Function Service, and it’s available as an alpha release today. Read more about it here

https://content.pivotal.io/blog/the-first-open-multi-cloud-serverless-platform-for-the-enterprise-is-here-try-out-pivotal-function-service-today

The docs are as follows:

https://docs.pivotal.io/pfs/index.html
Categories: Fusion Middleware

AWS: IAM & security - Best practices: Using a non-root user

Dietrich Schroff - Fri, 2018-12-07 18:35
After my successful Solutions Architect practice exam, I knew that I had to take care of my shortcomings in security.
So I decided to visit the IAM (Identity and Access Management) section of AWS:

So let's move to IAM users:
Click an "add user"

Then insert a "user name", choose an access type and click next:

Then you have to create the first group:

 I created a group with full AWS access:



Then move on with creating the user:

 Additional Tags:
 And finally click "create user"
 This will show you a page with an AWS management console URL:

Use this URL to login with the new user:
(I had to change the password, because the checkbox "require password reset" was set)

And then I am logged into my AWS Management Console with this non-root user:


If you want to log in with your root user, you have to use the link below the "sign in" button:

PostgreSQL : Get my database server name

Yann Neuhaus - Fri, 2018-12-07 16:32

I was looking for a built-in function to get the hostname of the server hosting my PostgreSQL cluster.
But it seems that there is no built-in function. Looking through the extensions, I found the hostname extension, which allows getting the database server host name.
In this blog I explain how to install it and how to use it. The installation is very easy. The first step is to download it here.

After that, let’s go to the directory where the archive was decompressed and run the command make:

14:46:26 postgres@dbi-pg-essentials:/home/postgres/pg-hostname-master/pg-hostname-master/ [PG1] ls
Changes  doc  hostname.control  Makefile  META.json  README.md  sql  src  test
14:46:48 postgres@dbi-pg-essentials:/home/postgres/pg-hostname-master/pg-hostname-master/ [PG1] make
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fpic -I. -I./ -I/u01/app/postgres/product/95/db_0/include/server -I/u01/app/postgres/product/95/db_0/include/internal -D_GNU_SOURCE -I/usr/include/libxml2   -c -o src/hostname.o src/hostname.c
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -O2 -fpic -L/u01/app/postgres/product/95/db_0/lib -Wl,--as-needed -Wl,-rpath,'/u01/app/postgres/product/95/db_0/lib',--enable-new-dtags  -shared -o src/hostname.so src/hostname.o
cp sql/hostname.sql sql/hostname--1.0.0.sql
14:46:55 postgres@dbi-pg-essentials:/home/postgres/pg-hostname-master/pg-hostname-master/ [PG1]

Once done, let’s run the command make install:

pg-hostname-master/ [PG1] make install
/bin/mkdir -p '/u01/app/postgres/product/95/db_0/share/extension'
/bin/mkdir -p '/u01/app/postgres/product/95/db_0/share/extension'
/bin/mkdir -p '/u01/app/postgres/product/95/db_0/lib'
/bin/mkdir -p '/u01/app/postgres/product/95/db_0/share/doc/extension'
/bin/install -c -m 644 .//hostname.control '/u01/app/postgres/product/95/db_0/share/extension/'
/bin/install -c -m 644 .//sql/hostname--1.0.0.sql .//sql/hostname--unpackaged--1.0.0.sql  '/u01/app/postgres/product/95/db_0/share/extension/'
/bin/install -c -m 755  src/hostname.so '/u01/app/postgres/product/95/db_0/lib/'
/bin/install -c -m 644 .//doc/hostname.mmd '/u01/app/postgres/product/95/db_0/share/doc/extension/'
14:47:29 postgres@dbi-pg-essentials:/home/postgres/pg-hostname-master/pg-hostname-master/ [PG1]

If everything is OK we should now have the extension files in our $PGHOME/share/extension directory:

15:02:39 postgres@dbi-pg-essentials:[PG1] ls -ltra *hostname*
-rw-r--r--. 1 postgres postgres 140 Dec  5 14:47 hostname.control
-rw-r--r--. 1 postgres postgres  96 Dec  5 14:47 hostname--1.0.0.sql
-rw-r--r--. 1 postgres 

And that’s all. We now just have to install the extension in the database:

postgres=# CREATE EXTENSION hostname;
CREATE EXTENSION

And now we can get the hostname of our server:

postgres=# SELECT hostname();
     hostname
-------------------
 dbi-pg-essentials
(1 row)

postgres=#
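Since hostname() is just a regular SQL function, it can be combined with the built-in functions, for example to tag the output of monitoring queries (a small sketch):

postgres=# SELECT hostname() AS server, current_database() AS db, now() AS checked_at;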

The post PostgreSQL : Get my database server name appeared first on Blog dbi services.

PostgreSQL 12 : New option --socketdir for pg_upgrade

Yann Neuhaus - Fri, 2018-12-07 16:30

PostgreSQL 12 is under development, but tests can already be done. Steps to install this version can be found in this dbi blog. Many thanks to Daniel.
While reading the documentation I found that there is a new parameter for pg_upgrade: --socketdir.
Why this parameter?
In fact, the path name of a UNIX socket is allowed to be at most 108 characters long. Before PostgreSQL 12, the default directory for the sockets created for the temporary postmasters started by pg_upgrade was the current directory, and depending on the current directory the path name might become too long to be a socket name. In PostgreSQL 12 the default location is still the current working directory, but the parameter --socketdir now allows us to specify another location.

To better understand, I am going to upgrade from PostgreSQL 10 to PostgreSQL 12.

20:59:44 postgres@dbi-pg-essentials:/u02/pgdata/PG12TEST/ [PG12TEST] psql -U postgres -d postgres
psql (12devel dbi services build)
Type "help" for help.

postgres=# select version();
                                                            version
---------------------------------------------------------------------------------------------------------------------
 PostgreSQL 12devel dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11
(1 row)

postgres=#

Now let’s create a very deeply nested directory

mkdir -p ttttttttttttttttttttttttttttttttttttttttt/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd/UUUUUUUUUUUUUUUUUUUUUUUUUUUUUuuuuuuuuuuuuuuuuuuuuuuuuuuuu/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd

And let’s cd into this new directory and try to do an upgrade:

$ export PGDATAOLD=/u02/pgdata/PG3
$ export PGBINOLD=/u01/app/postgres/product/10/db_1/bin
$ export PGDATANEW=/u02/pgdata/PG12TEST
$ export PGBINNEW=/u01/app/postgres/product/12dev/db_0/bin

When running pg_upgrade with the check option, we get the following error:

$ pg_upgrade -c
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok

*failure*
Consult the last few lines of "pg_upgrade_server.log" for
the probable cause of the failure.

connection to database failed: Unix-domain socket path "/home/postgres/ttttttttttttttttttttttttttttttttttttttttt/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd/UUUUUUUUUUUUUUUUUUUUUUUUUUUUUuuuuuuuuuuuuuuuuuuuuuuuuuuuu/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd/.s.PGSQL.50432" is too long (maximum 107 bytes)

could not connect to source postmaster started with the command:
"/u01/app/postgres/product/10/db_1/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/u02/pgdata/PG3" -o "-p 50432 -b  -c listen_addresses='' -c unix_socket_permissions=0700 -c unix_socket_directories='/home/postgres/ttttttttttttttttttttttttttttttttttttttttt/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd/UUUUUUUUUUUUUUUUUUUUUUUUUUUUUuuuuuuuuuuuuuuuuuuuuuuuuuuuu/ddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd'" start
Failure, exiting
$

It seems that the path name for the socket is indeed too long.
If we use this new parameter, we can specify a new location for the sockets, and we can see that the checks are now successful

$ pg_upgrade -c --socketdir=/home/postgres/
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDs                               ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

*Clusters are compatible*

We can then upgrade the cluster using this new parameter, while still staying in this newly created directory

$ pg_upgrade  --socketdir=/home/postgres/
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDs                               ok
Creating dump of global objects                             ok
Creating dump of database schemas
                                                            ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows in the new cluster                        ok
Deleting files from new pg_xact                             ok
Copying old pg_xact to new server                           ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Copying old pg_multixact/offsets to new server              ok
Deleting files from new pg_multixact/members                ok
Copying old pg_multixact/members to new server              ok
Setting next multixact ID and offset for new cluster        ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster
                                                            ok
Copying user relation files
                                                            ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to analyze new cluster                      ok
Creating script to delete old cluster                       ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh
$
Conclusion:

In this blog we have seen the new option --socketdir for pg_upgrade. It’s good to know that this parameter exists, but in most cases the current working directory should be fine for an upgrade.

The post PostgreSQL 12 : New option --socketdir for pg_upgrade appeared first on Blog dbi services.

Ubuntu: if WLAN is gone after an update and lspci lists no wlan devices

Dietrich Schroff - Fri, 2018-12-07 12:42
On my Ubuntu laptop the WLAN networking was gone after an update. The first check was to boot into Windows to see whether the WLAN device was still working:
The WLAN still worked with Windows, so the worst case (hardware error) did not hit me ;-)

So I switched back to Ubuntu and tried the following:

$ lspci -nnk | grep -iA2 net;
02:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 15)
    Subsystem: Acer Incorporated [ALI] RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [1025:104c]
    Kernel driver in use: r8169
    Kernel modules: r8169
Very strange - the WLAN network adapter was not even listed with lspci!
Another check shows:
$ lshw|grep -iA2 network
WARNUNG: Sie sollten dieses Programm mit Systemverwalterrechten (root) ausführen.
           *-network
                Beschreibung: Ethernet interface
                Produkt: RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
It looks like the hardware controller is missing, but the check with Windows explicitly showed that the WLAN is still ok.

The solution was to run:
apt-get install bcmwl-kernel-source
(after connecting the laptop via Ethernet to my router)

Then the commands listed the WLAN adapter again:
$ lspci -nnk | grep -iA2 net;
02:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 15)
    Subsystem: Acer Incorporated [ALI] RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [1025:104c]
    Kernel driver in use: r8169
    Kernel modules: r8169
03:00.0 Network controller [0280]: Qualcomm Atheros QCA6174 802.11ac Wireless Network Adapter [168c:003e] (rev 32)
    Subsystem: Lite-On Communications Inc QCA6174 802.11ac Wireless Network Adapter [11ad:0807]
    Kernel driver in use: ath10k_pci
    Kernel modules: ath10k_pci, wl

$ lshw|grep -iA2 network
WARNUNG: Sie sollten dieses Programm mit Systemverwalterrechten (root) ausführen.
           *-network
                Beschreibung: Ethernet interface
                Produkt: RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
--
           *-network
                Beschreibung: Kabellose Verbindung
                Produkt: QCA6174 802.11ac Wireless Network Adapter
                Hersteller: Qualcomm Atheros
                Physische ID: 0

I found the following with Google:
https://ubuntuforums.org/showthread.php?t=1411216


So let's try this:

# apt-mark hold bcmwl-kernel-source
bcmwl-kernel-source auf Halten gesetzt.
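
To verify later that the package is still pinned, apt-mark can list all held packages:

$ # lists all packages currently on hold; should now include bcmwl-kernel-source
$ apt-mark showhold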

Plans and Trees

Jonathan Lewis - Fri, 2018-12-07 11:58

Prompted by a question on the ODC database forum – and also because I failed to get to the “Bonus slides” on my presentation on basic execution plans at both the DOAG and UKOUG conferences, here’s a small set of slides demonstrating how to convert a text execution plan into a tree that you can read using the mechanism described in Oracle’s white paper by the phrase: “start from the bottom left and work across and then up”.

The file is a Microsoft Powerpoint file (early version).

 

Misdirection

Jonathan Lewis - Fri, 2018-12-07 05:48

A recent post on the ODC database forum prompted me to write a short note about a trap that catches everyone from time to time. The trap is following the obvious; and it’s a trap because it’s only previous experience that lets you decide what’s obvious, and the similarity between what you’re looking at and your previous experience may be purely coincidental.

The question on OTN (paraphrased) was as follows:

When I run the first query below Oracle doesn’t use the index on column AF and is slow, but when I run the second query Oracle uses the index and it’s fast. So when the input starts with ‘\\’ the indexes are not used. What’s going on?


SELECT * FROM T WHERE AF = '\\domain\test\1123.pdf';
SELECT * FROM T WHERE AF = 'a\\domain\test\1123.pdf';

Looking at the two queries my first thought was that it’s obvious what’s (probably) happening, and my second thought was the more interesting question: “why does this person think that the ‘\\’ is significant ?”

The cause of the difference in behaviour is probably related to the way that Oracle stores statistics (specifically histograms) about character columns, and the way in which the cardinality calculations can go wrong. If two character strings match over the first few characters, the numeric representation of those strings that Oracle uses in a histogram is identical, and if they are long enough even the “actual value” stored would be identical. It looks as if this person is storing URLs, and it’s quite likely that there are a lot of long URLs that start with the same (long) string of characters – it’s a very old problem – and it’s an example of a column where you probably want to be absolutely sure that you don’t gather a histogram.
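
As a sketch of how you might enforce that (using the table and column names from the question purely for illustration), the histogram can be blocked explicitly through method_opt while the remaining columns keep their default treatment:

begin
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 'T',
                -- 'size 1' means a single bucket, i.e. no histogram, for column af
                method_opt => 'for all columns size auto for columns size 1 af'
        );
end;
/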

But why did the OP decide that the ‘\\’ was the significant bit? I don’t know, of course, but how about this:

  • No contrary tests: Perhaps every single time the query misbehaved the value started with ‘\\’ and it never went wrong for any other starting values. And maybe the OP tested several different domain names – it would be much easier to see the ‘\\’ as the common denominator rather than “repetitive leading character string” if you tested with values that spanned different domains.

combined with

  • An easily available “justification”: In many programming languages (including SQL) ‘\’ is an escape character – if you don’t really know much about how the optimizer works you might believe that that could be enough to confuse the optimizer.

It can be very difficult when you spot an obvious pattern to pause long enough to consider whether you’ve identified the whole pattern, or whether you’re looking at a special case that’s going to take you in the wrong direction.

 

API for Amazon SageMaker ML Sentiment Analysis

Andrejus Baranovski - Thu, 2018-12-06 13:50
Assume you manage a support department and want to automate some of the workload which comes from users requesting support through Twitter. Probably you are already using a chatbot to send back replies to users. But this is not enough - some of the support requests must be handled with special care by humans. How do you know when a tweet should be escalated and when not? The Machine Learning for Business book has an answer. I recommend reading this book; my post today is based on Chapter 4.

You can download the source code for Chapter 4 from the book website. The model is trained on a sample dataset from Kaggle - Customer Support on Twitter. It is trained on a subset of the available data, using around 500 000 Twitter messages. The book authors converted and prepared the dataset to be suitable to feed into Amazon SageMaker (the dataset can be downloaded together with the source code).

The model is trained in such a way that it doesn't check whether a tweet is simply positive or negative. The sentiment analysis is based on whether the tweet should be escalated or not. Even a positive tweet may need to be escalated.

I followed the instructions from the book and was able to train and host the model. I created an AWS Lambda function and an API Gateway to be able to call the model from the outside (this part is not described in the book, but you can check my previous post to get more info about it - Amazon SageMaker Model Endpoint Access from Oracle JET).

To test the trained model, I took two random tweets addressed to the Lufthansa account and passed them to the predict function. I exposed the model through an AWS Lambda function and created an API Gateway, which allows a REST request to be initiated from a tool such as Postman. A response of __label__1 means escalation is needed, while __label__0 means it is not. The second tweet is more direct and refers to immediate feedback; it was labeled for escalation by our sentiment analysis model. The first tweet is a bit abstract, so no escalation for this one:
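
Such a request could look like the following (a sketch only; the API Gateway URL and the JSON key are placeholders, not the actual values from my setup):

$ curl -X POST \
       -H "Content-Type: application/json" \
       -d '{"tweet": "@lufthansa my bag did not arrive, I need help now"}' \
       https://<api-id>.execute-api.<region>.amazonaws.com/prod/predict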


This is the AWS Lambda function; it gets the data from the request, calls the model endpoint and returns the prediction:
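
A minimal sketch of such a handler could look like this (the endpoint name and the request key are assumptions for illustration, not my exact code):

import json
import boto3

# hypothetical endpoint name - use the name shown in the SageMaker console
ENDPOINT_NAME = 'customer-support-blazingtext'

runtime = boto3.client('sagemaker-runtime')

def lambda_handler(event, context):
    # with Lambda proxy integration, API Gateway passes the POST payload in event['body']
    body = json.loads(event['body'])

    # BlazingText classification endpoints accept a JSON document with an 'instances' list
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType='application/json',
        Body=json.dumps({'instances': [body['tweet']]})
    )

    # the result is a list such as [{'label': ['__label__1'], 'prob': [0.87]}]
    result = json.loads(response['Body'].read())

    return {'statusCode': 200, 'body': json.dumps(result)}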

Let's have a quick look at the training dataset. Around 20% of the tweets are marked for escalation. This shows that there is no need for a 50%/50% split in the training dataset. In real life the number of escalations is probably less than half of all requests, and this realistic scenario is represented in the dataset:


The ML model is built using the Amazon SageMaker BlazingText algorithm:


Once the ML model is built, we deploy it to an endpoint. The predict function is invoked through the endpoint:

Leveraging Google Cloud Search to Provide a 360 Degree View to Product Information Existing in PTC® Windchill® and other Data Systems

Most organizations have silos of content spread out amongst databases, file shares, and one or more document management systems. Without a unified search system to tap into this information, knowledge often remains hidden and the assets employees create cannot be used to support design, manufacturing, or research objectives.

An enterprise search system that can connect these disparate content stores and provide a single search experience for users can help organizations increase operational efficiencies, enhance knowledge sharing, and ensure compliance. PTC Windchill provides a primary source for the digital product thread, but organizations often have other key systems storing valuable information. That is why it is critical to provide workers with access to associated information regardless of where it is stored.

This past August, Fishbowl released its PTC Windchill Connector for Google Cloud Search. Fishbowl developed the connector for companies needing a search solution that allows them to spend less time searching for existing information and more time developing new products and ideas. These companies need a centralized way to search their key engineering information stores, like PLM (in this case Windchill), ERP, quality database, and other legacy data systems. Google Cloud Search is Google’s next generation, cloud-based enterprise search platform from which customers can search large data sets both on-premise and in the cloud while taking advantage of Google’s world-class relevancy algorithms and search experience capabilities.

Connecting PTC Windchill and Google Cloud Search

Through Google Cloud Search, Google provides the power and reach of Google search to the enterprise. Fishbowl’s PTC Windchill Connector for Google Cloud Search provides customers with the ability to leverage Google’s industry-leading technology to search PTC Windchill for Documents, CAD files, Enterprise Parts, Promotion Requests, Change Requests, and Change Notices. The PTC Windchill Connector for Google Cloud Search assigns security to all items indexed through the connector based on the default ACL configuration specified in the connector configuration. The connector allows customers to take full advantage of additional search features provided by Google Cloud Search including Facets and Spelling Suggestions just as you would expect from a Google solution.

To read the rest of this blog post and see an architecture diagram showing how Fishbowl connects Google Cloud Search with PTC Windchill, please visit the PTC LiveWorx 2019 blog.

The post Leveraging Google Cloud Search to Provide a 360 Degree View to Product Information Existing in PTC® Windchill® and other Data Systems appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

[Solved] Oracle EBS (R12) Installation Issue: Could not find the main class: oracle.apps.ad.rapidwiz.RIWizard

Online Apps DBA - Thu, 2018-12-06 06:52

Troubled with an issue while running rapidwiz on an Exadata database machine to install Oracle EBS R12.2? If yes, visit: https://k21academy.com/appsdba39 and consider our new blog covering: ✔ What is rapidwiz ✔ Issues, causes and solutions for installing through rapidwiz

The post [Solved] Oracle EBS (R12) Installation Issue: Could not find the main class: oracle.apps.ad.rapidwiz.RIWizard appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

DockerCon18 Barcelona – Day 3

Yann Neuhaus - Thu, 2018-12-06 06:37

Wednesday was the last day of DockerCon18 Europe. Like the previous day, we started the journey with a two-hour keynote, this time more oriented towards the “Docker Community”. The core message of the keynote: the community is one of the pillars of open source technologies, and Docker Inc wants to push the community aspect more and more. The community is growing very fast and is very competitive.

They took the opportunity to award the community leader of the year and to announce a new Docker Captain, Bret Fisher.


Then we attended an interesting session: Docker Storage with Swarm and Kubernetes.

The presenter started the session with a funny part: Container Storage Fake News! For a few minutes he listed, one by one, all the fake news related to storage in the container world. The best fake news for us:

“RDBMS and databases cannot be run in containers”: NO! Official images are available from the vendors. The best example is SQL Server, which provides a very competitive Docker image for its users.

The core message of the session is that database containers are coming more and more and will be widely used and deployed. Also very interesting is the collaboration between Docker and the storage providers, who are developing APIs for Docker compatibility, so in the future each storage provider will have its own API to communicate with Docker containers.

The last but not least session of the day for me was about Provisioning and Managing Storage for Docker Containers.

The goal of the session was to explain how we can easily manage storage operations for containers.

The Docker EE platform with Kubernetes (with PVs and PVCs) will help us in the future to manage storage operations for containers.

PV = Persistent Volume
PVC = Persistent Volume Claim

They also presented the difference between static and dynamic provisioning in Kubernetes, and the future of storage management in Docker using CSI.
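
As a minimal sketch of the dynamic case (all names here are hypothetical), a PVC simply references a StorageClass and lets its provisioner create the matching PV; in the static case an administrator pre-creates the PV instead:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data                # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast       # hypothetical StorageClass; its provisioner creates the PV on demand
  resources:
    requests:
      storage: 10Gi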

Core message: Docker is making storage a priority.


The post DockerCon18 Barcelona – Day 3 appeared first on Blog dbi services.
