Feed aggregator

Raspberry Pi on Ubuntu: yarn: Cannot find module 'worker_threads'

Dietrich Schroff - Fri, 2021-11-26 12:54

This evening I tried to install a Node.js application with yarn on my Raspberry Pi. This failed with:

/usr/local/bin/yarn install
    throw err;
Error: Cannot find module 'worker_threads'
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:636:15)
    at Function.Module._load (internal/modules/cjs/loader.js:562:25)
    at Module.require (internal/modules/cjs/loader.js:692:17)
    at require (internal/modules/cjs/helpers.js:25:18)
    at /opt/zwavejs2mqtt/.yarn/releases/yarn-3.1.0-rc.8.cjs:287:2642
    at Object.<anonymous> (/opt/zwavejs2mqtt/.yarn/releases/yarn-3.1.0-rc.8.cjs:585:7786)
    at Module._compile (internal/modules/cjs/loader.js:778:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:789:10)
    at Module.load (internal/modules/cjs/loader.js:653:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:593:12)

This error occurs because the Node.js version delivered by Ubuntu is v10.19.0, where the worker_threads module is still hidden behind the --experimental-worker flag.

You have to download the ARMv8 package from https://nodejs.org/en/download/ instead.

With v16.13.0 the error was gone.


Question from USER399341 for Database In-Memory Office Hours - 2021-11-24

Tom Kyte - Wed, 2021-11-24 08:06
The following question was asked, but got attached to the Nov 24th session which was re-scheduled to this Nov 18th session: Question from USER399341 for Database In-Memory Office Hours - 2021-11-24 I have a table with all transaction amounts (both +ve and -ve amounts). I have to find a combination of these amounts that equates to a given sum transaction amount stored in another table. There will be tens of thousands of transactions in the first table. I am using Oracle 19c. Appreciate your help.
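This is the classic subset-sum problem, which has no efficient exact solution in general, so with tens of thousands of rows a brute-force search is hopeless. Purely to illustrate the shape of the problem (outside the database, with made-up amounts - not the Oracle SQL answer), a minimal sketch in Python:

```python
from itertools import combinations

def find_matching_subset(amounts, target, max_size=None):
    """Return the first combination of transaction amounts that sums to
    the target, or None. Brute force: fine for small inputs only -- for
    tens of thousands of rows a dynamic-programming or constraint-solver
    approach would be needed."""
    limit = max_size or len(amounts)
    for size in range(1, limit + 1):
        for combo in combinations(amounts, size):
            if sum(combo) == target:
                return combo
    return None

# Transactions with positive and negative amounts, as in the question
txns = [120, -30, 55, 200, -45, 80]
print(find_matching_subset(txns, 145))   # -> (120, -30, 55), since 120 - 30 + 55 = 145
```

The combinatorial explosion is the real issue the questioner faces: the number of candidate subsets grows as 2^n, which is why this needs pruning or approximation at scale, whatever language it is written in.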
Categories: DBA Blogs

Unnest a nested table with the extracted data in single row

Tom Kyte - Tue, 2021-11-23 13:46
Hi, I have a nested table with three columns within the nested column. I have 3 entries for the same ID within the nested column. I want to unnest this table and get the 3 entries as separate columns in a single row. How do I do it? Below is the code: <code>create or replace TYPE "TEST" AS OBJECT ( point NUMBER(3), latitude NUMBER(10), longitude NUMBER(10) ); create or replace TYPE "TESTS" IS TABLE OF TEST; CREATE TABLE TEST_TABLE ( "ID" NUMBER(3,0), "LOCATION" "SYS"."TESTS" ) SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "SYSTEM" NESTED TABLE "LOCATION" STORE AS "LOCATIONS" (PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 LOGGING STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) NOCOMPRESS TABLESPACE "SYSTEM" ) RETURN AS VALUE; Insert into TEST_TABLE (ID,LOCATION) values (161,SYS.TESTS(SYS.TEST(0, 4009716, 50056416), SYS.TEST(1, 4324450, 51769233), SYS.TEST(2, 5570283, 51604983), SYS.TEST(3, 5845666, 49989300))); Insert into TEST_TABLE (ID,LOCATION) values (162,SYS.TESTS(SYS.TEST(0, 4862133, 43994149), SYS.TEST(1, 3183550, 43960533), SYS.TEST(2, 3970383, 45314300), SYS.TEST(3, 5032600, 44909200)));</code> Expected Output: <code>ID POINT1 LATITUDE1 LONGITUDE1 POINT2 LATITUDE2 LONGITUDE2 POINT3 LATITUDE3 LONGITUDE3 POINT4 LATITUDE4 LONGITUDE4 --- ------- --------- ---------- ------ --------- ---------- ------ --------- ---------- ------ --------- ---------- 161 0 4009716 50056416 1 4324450 51769233 2 5570283 51604983 3 5845666 49989300 162 0 4862133 43994149 1 3183550 43960533 2 3970383 45314300 3 5032600 44909200</code>
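In SQL this is typically an unnest of the collection (a join to TABLE(t.location)) followed by a pivot on the point number. Purely as an illustration of the long-to-wide reshape involved - not the Oracle answer - here is that second step sketched in Python, using the first two points per id from the question's data:

```python
# One row per (id, point) pair -- the shape TABLE(t.location) unnesting yields
rows = [
    (161, 0, 4009716, 50056416),
    (161, 1, 4324450, 51769233),
    (162, 0, 4862133, 43994149),
    (162, 1, 3183550, 43960533),
]

# Pivot long -> wide: one dict of numbered columns per id
wide = {}
for id_, point, lat, lon in rows:
    cols = wide.setdefault(id_, {})
    n = len(cols) // 3 + 1          # 1-based entry number within this id
    cols[f"point{n}"] = point
    cols[f"latitude{n}"] = lat
    cols[f"longitude{n}"] = lon

print(wide[161]["latitude1"])   # -> 4009716
print(wide[162]["longitude2"])  # -> 43960533
```

The SQL version has the same two moving parts: a row source that numbers each entry within its id, and an aggregation that turns those numbered rows into POINT1/LATITUDE1/... columns.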
Categories: DBA Blogs

Json_Transform in Oracle 21c

Tom Kyte - Tue, 2021-11-23 13:46
Team, Given this JSON, how do I increment the quantity by two at each level? I tried the below using json_transform but ended up with an error. Is that possible using the json_transform function? Kindly help. <code> demo@XEPDB1> select json_serialize(y pretty) y 2 from t 3 where x =2; Y ---------------------------------------- { "produce" : [ { "fruit" : "apple", "quantity" : 10 }, { "fruit" : "orange", "quantity" : 15 } ] } demo@XEPDB1> select json_serialize( 2 json_transform( y, set '$.produce[*].quantity' = 3 '$.produce[*].quantity' + 2 ) pretty ) 4 from t 5 where x =2; '$.produce[*].quantity' + 2 ) pretty ) * ERROR at line 3: ORA-01722: invalid number demo@XEPDB1> </code> I was able to get this done using JSON object types from PL/SQL. I would like to know if the same can be done using the JSON_TRANSFORM function in SQL. <code> demo@XEPDB1> create or replace function update_json( p_input json ) 2 return json 3 as 4 l_data json_object_t; 5 l_size number := 0; 6 l_ele json_element_t; 7 l_array json_array_t; 8 l_obj json_object_t; 9 l_qty number := 0; 10 begin 11 l_data := json_object_t( p_input ); 12 13 if l_data.has('produce') then 14 l_ele := l_data.get('produce'); 15 if l_ele.is_array then 16 l_size := l_ele.get_size()-1; 17 l_array := json_array_t( l_ele ); 18 for i in 0..l_size 19 loop 20 l_obj := treat( l_array.get(i) as json_object_t ); 21 l_qty := l_obj.get_Number('quantity'); 22 l_obj.put( 'quantity', l_qty+2 ); 23 end loop; 24 end if; 25 end if; 26 return l_data.to_json; 27 end; 28 / Function created. 
demo@XEPDB1> select json_serialize(y) 2 from t t1 3 where x =2; JSON_SERIALIZE(Y) -------------------------------------------------------------------------------------- {"produce":[{"fruit":"apple","quantity":10},{"fruit":"orange","quantity":15}]} demo@XEPDB1> select update_json(y) 2 from t t1 3 where x =2; UPDATE_JSON(Y) -------------------------------------------------------------------------------- {"produce":[{"fruit":"apple","quantity":12},{"fruit":"orange","quantity":17}]} demo@XEPDB1> </code>
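The ORA-01722 suggests the right-hand side of the SET is being evaluated as an ordinary SQL expression: the string literal '$.produce[*].quantity' gets an implicit TO_NUMBER before the + 2, which fails. The per-element update itself is trivial outside the database; as an illustration only, here is the same +2 transformation in Python, mirroring what the PL/SQL function above does:

```python
import json

doc = json.loads(
    '{"produce":[{"fruit":"apple","quantity":10},'
    '{"fruit":"orange","quantity":15}]}'
)

# Same update as the PL/SQL function: add 2 to every quantity in the array
for item in doc.get("produce", []):
    item["quantity"] += 2

print(json.dumps(doc))
# -> {"produce": [{"fruit": "apple", "quantity": 12}, {"fruit": "orange", "quantity": 17}]}
```

Whether JSON_TRANSFORM can express per-element arithmetic over a [*] wildcard in a single SET is exactly the open question here; the explicit loop (PL/SQL or otherwise) sidesteps it.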
Categories: DBA Blogs

Your Kitchen Design Choices: Find the Best Habits

OraQA - Tue, 2021-11-23 08:11

In contrast to interior design, the decorating service does not need much labor. Interior decorating, which does not interfere with the building’s internal structure, tries to maximize available space by experimenting with different materials, volumes, colors, and furnishings. Look no further if you’re seeking ideas to decorate your terrace, duplex, or family house. This is especially important for modern kitchen designs.

Assign Your Interior Design Project To A Professional

Entrusting your interior design project to a professional means that you will be able to benefit from the advice of a specialist in trends, colors, and materials who will perfectly sublimate your interior by organizing the space, optimizing it, and highlighting your furniture and objects.

Make an appointment: diagnostics and recommendations for redecorating your home’s interior

The requirements for your project are established at our initial meeting, which takes into consideration your wishes, your inspirations, your demands, and your budget. During this can’t-miss occasion, we will have the opportunity to get to know one another better as well as immerse ourselves in the possibilities of your house.

We evaluate every factor when defining a style and designing a house that is tailored to your needs. Colors, furniture arrangement, displays: we consider everything.

Following this first meeting, we create a custom quotation that includes all of the services that we have discussed and agreed on together. Our prices are computed based on the square footage of your house and the services you want to include. With Foyr Neo you can find all the support you need.

Research for the interior design project

Discover your new layout: furnishings, a design plan for the interior, we create for you a new interior that is designed to your taste, reflects your personality, and is, above all, one-of-a-kind. Using our knowledge, we put at your disposal a turnkey project that will meet all of your requirements as quickly as possible. We provide a shopping list, material and color ideas, and a turnkey project that will satisfy all of your requirements.

Implementation and monitoring of the development plan

It is unnecessary for you to contact craftsmen since we will handle everything for you. We will gather numerous quotations from artisans and specialists who are ready to assist you in the completion of your project upon your request.

We act as neither a project manager nor a site coordinator; nevertheless, you may appoint us to accompany you on your project site as an adviser and to keep track of the progress made with the many service providers you have chosen. As a result, we will carry out the following tasks on your behalf: site inspection, weekly reports, and financial oversight of your work.

Finally, we will assist and advise you throughout the process of selecting furniture and décor to meet your specific requirements, as well as throughout the integration of your new purchases into your existing interior. We will remain by your side from the beginning to the completion of your project, providing you with our advice and experience, as well as putting the finishing touches on your new interior design project.

What is the best way to build a home fence?

Take a pencil, a tape measure, a saw, and a drill-driver with you to get started. Define the position of the structure by taking the height and breadth of the battens from floor to ceiling and multiplying them by the length of the structure.


The post Your Kitchen Design Choices: Find the Best Habits appeared first on ORA QA.

[DP-300] Day3 Q/A Review: Implement A Secure Environment

Online Apps DBA - Mon, 2021-11-22 07:27

Implement a secure environment Azure SQL Database has several authentication and authorization options that are different from the options in SQL Server. This is because Azure SQL Database and Azure SQL Managed Instance rely on Azure Active Directory instead of Windows Server Active Directory. In Azure, while implementing a secure environment we explore encryptions, firewalls, […]

The post [DP-300] Day3 Q/A Review: Implement A Secure Environment appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Introduction to Apache Spark (PySpark) Q & A: Day 8 Live Session Review

Online Apps DBA - Mon, 2021-11-22 07:18

Apache Spark Apache Spark is a lightning-fast cluster computing technology, designed for fast computation. It is based on Hadoop MapReduce and it extends the MapReduce model to efficiently use it for more types of computations, which includes interactive queries and stream processing. The main feature of Spark is its in-memory cluster computing that increases the processing speed […]

The post Introduction to Apache Spark (PySpark) Q & A: Day 8 Live Session Review appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs


Jonathan Lewis - Mon, 2021-11-22 05:12

Prompted by a recent question on the MOSC community forum (link needs a support account) I thought I’d dust off this little script (that I wrote nearly 20 years ago for Oracle 9.2) to see if it still worked, needed any new columns, or had acquired extra rows in 21c.

The script is just a simple report of v$statistics_level, which reports the various real-time statistics collections and advisors that could be enabled, the view that holds associated results (where relevant) and the “activation_level” – in effect telling you whether or not it is necessary to set the statistics_level to “all” before you can enable a particular collection.

The results are not as helpful as you might hope, however, and may result in a mild panic attack if you are prone to getting worried about licensing requirements.

rem     Script:         statistics_level.sql
rem     Author:         Jonathan Lewis
rem     Dated:          July 2002
rem     Purpose:        Simple script to pick up 9.2 statistics activation details

column statistics_name          format a40
column statistics_view_name     format a24
column description              format a64 word_wrapped

column system_status            heading "Sys"
column session_Status           heading "Ses"
column session_settable         heading "Set"

break on activation_level skip 1

set linesize 160
set pagesize  90
set trimspool on

spool statistics_level

select
        activation_level,
        statistics_name,
        statistics_view_name,
        system_status,
        session_status,
        session_settable,
        description
from
        v$statistics_level
order by
--      length(description) desc,
        activation_level,
        statistics_name
;

clear breaks

spool off

By default the script has to be run by the SYS user, and you’ll note that I’ve omitted the con_id column, which always seemed to be zero whether I query from the root or from a pluggable database on my 21.3 instance.

Here are the results I got from the root container with the statistics_level set to typical:

ACTIVAT STATISTICS_NAME                          STATISTICS_VIEW_NAME     Sys      Ses      Set DESCRIPTION
------- ---------------------------------------- ------------------------ -------- -------- --- ----------------------------------------------------------------
ALL     Plan Execution Statistics                V$SQL_PLAN_STATISTICS    DISABLED DISABLED YES Enables collection of plan execution statistics
        Timed OS Statistics                                               DISABLED DISABLED YES Enables gathering of timed operating system statistics

TYPICAL Active Session History                   V$ACTIVE_SESSION_HISTORY ENABLED  ENABLED  NO  Monitors active session activity using MMNL
        Adaptive Thresholds Enabled                                       ENABLED  ENABLED  NO  Controls if Adaptive Thresholds should be enabled
        Automated Maintenance Tasks                                       ENABLED  ENABLED  NO  Controls if Automated Maintenance should be enabled
        Automatic DBOP Monitoring                V$SQL_MONITOR            ENABLED  ENABLED  YES Controls if automatic DBOP Monitoring should be enabled
        Bind Data Capture                        V$SQL_BIND_CAPTURE       ENABLED  ENABLED  NO  Enables capture of bind values used by SQL statements
        Buffer Cache Advice                      V$DB_CACHE_ADVICE        ENABLED  ENABLED  NO  Predicts the impact of different cache sizes on number of
                                                                                                physical reads

        Column Tracking Level                                             ENABLED  ENABLED  YES Sets Up Column Tracking Level
        Global Cache Statistics                                           ENABLED  ENABLED  NO  RAC Buffer Cache statistics
        Longops Statistics                       V$SESSION_LONGOPS        ENABLED  ENABLED  NO  Enables Longops Statistics
        MTTR Advice                              V$MTTR_TARGET_ADVICE     ENABLED  ENABLED  NO  Predicts the impact of different MTTR settings on number of
                                                                                                physical I/Os

        Modification Monitoring                                           ENABLED  ENABLED  NO  Enables modification monitoring
        OLAP row load time precision                                      ENABLED  ENABLED  YES Sets precision of olap row load time statistics
        Object Activity Tracking                                          ENABLED  ENABLED  YES Sets Up Object Activity Tracking (OATS)
        PGA Advice                               V$PGA_TARGET_ADVICE      ENABLED  ENABLED  NO  Predicts the impact of different values of pga_aggregate_target
                                                                                                on the performance of memory intensive SQL operators

        Plan Execution Sampling                  V$ACTIVE_SESSION_HISTORY ENABLED  ENABLED  YES Enables plan lines sampling
        SQL Monitoring                           V$SQL_MONITORING         ENABLED  ENABLED  YES Controls if SQL Monitoring should be enabled
        Segment Level Statistics                 V$SEGSTAT                ENABLED  ENABLED  NO  Enables gathering of segment access statistics
        Shared Pool Advice                       V$SHARED_POOL_ADVICE     ENABLED  ENABLED  NO  Predicts the impact of different values of shared_pool_size on
                                                                                                elapsed parse time saved

        Streams Pool Advice                      V$STREAMS_POOL_ADVICE    ENABLED  ENABLED  NO  Predicts impact on Streams perfomance of different  Streams pool

        Threshold-based Alerts                                            ENABLED  ENABLED  NO  Controls if Threshold-based Alerts should be enabled
        Time Model Events                        V$SESS_TIME_MODEL        ENABLED  ENABLED  YES Enables Statics collection for time events
        Timed Statistics                                                  ENABLED  ENABLED  YES Enables gathering of timed statistics
        Ultrafast Latch Statistics                                        ENABLED  ENABLED  NO  Maintains statistics for ultrafast latches in the fast path
        Undo Advisor, Alerts and Fast Ramp up    V$UNDOSTAT               ENABLED  ENABLED  NO  Transaction layer manageability features
        V$IOSTAT_* statistics                                             ENABLED  ENABLED  NO  Controls if I/O stats in v$iostat_ should be enabled

27 rows selected.

The key thing to note from this output is that there are only two statistics collections that are enabled by setting statistics_level to all, the “Plan Execution Statistics” and the “Timed OS Statistics”. Make the change and you’ll see these two statistics reported as ENABLED. If you look behind the scenes you’ll also find that the parameter “timed_os_statistics” has changed from 0 to 60 (and v$sysstat now starts reporting values for the statistics with names like ‘%OS%’) and the hidden parameter “_rowsource_execution_statistics” has changed from false to true (and when you execute new queries and use the option format=>’allstats [last]’ in calls to dbms_xplan.display_cursor you get execution stats appearing in every line of the resulting execution plan).

Guideline: do not set statistics_level to all at the system level; the overheads can be significant. You might want to set it occasionally for a single session for a brief interval to investigate a performance problem – bearing in mind that setting the parameter might actually introduce a whole new performance problem.

User-friendly: NOT

The question on the forum that prompted this note was asking if there were licensing implications of setting the statistics_level, in particular whether there was any requirement to license the diagnostic and performance packs. I didn’t reply to the question – someone else took it on – but the answer is no.

If you look at the output above, though, you’ll see that it reports both “Active Session History” and “Plan Execution Sampling” as ENABLED – when I know that I’ve set the parameter control_management_pack_access to none. And when I query v$active_session_history the rowcount is always zero – so it’s not enabled.

Without going through the manuals, checking the dynamic performance views in the output above, and looking for related parameters (e.g. view V$MTTR_TARGET_ADVICE and parameter fast_start_mttr_target) I can’t really be sure what it means to say that an entry in the output is “Enabled”.

I wonder if my query to check whether v$active_session_history was populated will have set the feature audit to say that I’ve used the active session history!


There have been a couple of changes over time in this view. In 12.2, for example, I noted a statistic called “Global Cache CPU Statistics” which wasn’t present in 21.3; conversely, in 21.3 I noted a statistic “Object Activity Tracking” that wasn’t present in 12.2.

You could disable most of the stats collections and advisors at the system level by setting the statistics_level to “basic” – but (a) you might find that the attempt to do so raises Oracle errors if you haven’t previously disabled some of the default mechanisms, e.g.:

ORA-32017: failure in updating SPFILE
ORA-00830: cannot set statistics_level to BASIC with auto-tune SGA enabled

and (b) you will probably find one day that 20/20 hindsight tells you that you really could have used some of the advisors to find out why your production system is misbehaving.

OpenAI GPT-3 API Overview

Andrejus Baranovski - Sun, 2021-11-21 14:16
A GPT-3 API review. I walk through a few examples and show how it works in OpenAI's playground. You will see how GPT-3 generates SQL statements from natural text, how it creates an outline for an essay, and how it generates recipe directions from food ingredients. There is also an option to use the GPT-3 API in your applications through a REST interface.


SQL Server Migration Assistant for Oracle

Hemant K Chitale - Sat, 2021-11-20 08:08

Here's a video demo of the SQL Server Migration Assistant for Oracle: https://youtu.be/zNTF1ncr45g

The tool is available for download here

Categories: DBA Blogs

Introduction To Pandas In Python & Hands-On Exercise (Data Analysis Using Pandas)

Online Apps DBA - Sat, 2021-11-20 07:54

Pandas is a software library written for the Python programming language for data manipulation and analysis. It provides fast, flexible, and expressive data structures designed to make working with “relational” or “labeled” data easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real-world data analysis in Python. Pandas in […]
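A minimal taste of those labeled data structures (a hypothetical toy data set, just to show the flavour of the API):

```python
import pandas as pd

# A small "labeled" data set: exam scores per student and topic
df = pd.DataFrame({
    "name":  ["Asha", "Ben", "Chitra", "Ben"],
    "topic": ["sql", "sql", "python", "python"],
    "score": [88, 72, 95, 80],
})

# Expressive operations on labeled columns: aggregate and filter by name
print(df.groupby("topic")["score"].mean())   # mean score per topic
print(df[df["score"] > 85])                  # rows with score above 85
```

Operations that would be loops and index bookkeeping in plain Python become one-liners over named columns, which is the "easy and intuitive" part the post refers to.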

The post Introduction To Pandas In Python & Hands-On Exercise (Data Analysis Using Pandas) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Step by Step set up Single Sign-on between multiple OCI tenancies Oracle Identity Cloud Service (IDCS)

Online Apps DBA - Sat, 2021-11-20 07:40

Oracle’s next-generation security (single sign-on) and identity management platform is cloud-native and designed to be a part of the enterprise security fabric, providing modern identity for modern applications. A single instance of the software and supporting infrastructure serves a single client. With a single tenancy, each and every client has his/her own independent database & instance of […]

The post Step by Step set up Single Sign-on between multiple OCI tenancies Oracle Identity Cloud Service (IDCS) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

AZ-900 achieved: Microsoft Azure Fundamentals

Dietrich Schroff - Sat, 2021-11-20 02:30

Yesterday evening I passed Microsoft's AZ-900 exam:

Taking the exam on site was not an option because of COVID-19, so I tried the online option for the first time. Nice thing: there are many time slots, and I chose 20:45.

As an examinee you have to start your online session half an hour earlier, and you really need this time for the onboarding:

  1. Download the software to your PC and run some checks (audio, network, ...)
    This is an .exe, so only Windows PCs are possible
  2. Install the app "Pearson VUE" on your smartphone to provide
    1. a selfie
    2. your passport/driver's license/...
    3. photos of your room
  3. Talk to a proctor
    You are not allowed to wear a headset - even a watch is not allowed

After that, the exam is about 40 questions in 45 minutes - quite fair.

The questions are about these topics:

  • Describe cloud concepts (20-25%)
  • Describe core Azure services (15-20%)
  • Describe core solutions and management tools on Azure (10-15%)
  • Describe general security and network security features (10-15%)
  • Describe identity, governance, privacy, and compliance features (15-20%)
  • Describe Azure cost management and Service Level Agreements (10-15%)

More information can be found here: https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE3VwUY

If you want to do this exam, start here:

How to change the file name dynamically for an external table creation (ex ABC_YYYYMMDDHH24MISS)

Tom Kyte - Fri, 2021-11-19 18:06
Team, <i>The version of Oracle we use is "Oracle Database 19c Enterprise Edition Release".</i> Let's consider our basic scott.emp table for this scenario, limited to a few columns. The external table creation script goes something like this: <code>CREATE TABLE employee ( empno NUMBER(4,0), ename VARCHAR2(10), job VARCHAR2(9), mgr NUMBER(4,0) ) ORGANIZATION EXTERNAL ( TYPE oracle_loader DEFAULT DIRECTORY scott ACCESS PARAMETERS ( RECORDS DELIMITED BY NEWLINE SKIP 1 FIELDS TERMINATED BY "," MISSING FIELD VALUES ARE NULL REJECT ROWS WITH ALL NULL FIELDS ) LOCATION ( 'EMPLOYEE_20211116203018.csv' ) ) REJECT LIMIT UNLIMITED; The file name passed over here is: 'EMPLOYEE_20211116203018.csv'</code> If we look at the file name structure, it is something like EMPLOYEE_YYYYMMDDHH24MISS: YYYY-2021, MM-11, DD-16, HH24:MI:SS-203018. If the file name had a suffix of only YYYYMMDD, this could have been handled using dynamic SQL with execute immediate: <code>select 'EMPLOYEE_'||to_char(sysdate,'YYYYMMDD')||'.csv' AS tablename from dual; CREATE TABLE employee ( empno NUMBER(4,0), ename VARCHAR2(10), job VARCHAR2(9), mgr NUMBER(4,0) ) ORGANIZATION EXTERNAL ( TYPE oracle_loader DEFAULT DIRECTORY scott ACCESS PARAMETERS ( RECORDS DELIMITED BY NEWLINE SKIP 1 FIELDS TERMINATED BY "," MISSING FIELD VALUES ARE NULL REJECT ROWS WITH ALL NULL FIELDS ) LOCATION ( tablename ) ) REJECT LIMIT UNLIMITED;</code> But unfortunately the file name includes HH24MISS along with YYYYMMDD. The timestamp is not a fixed value, as it may vary at least in minutes and seconds. Any code snippet to handle the timestamp as well in the external table creation script is most welcome and appreciated. Regards, Satyam Reddy.
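One possible approach, since the exact timestamp cannot be predicted: keep the external table definition fixed, find the newest matching file in the directory, and repoint the table with a dynamic ALTER TABLE ... LOCATION (valid Oracle DDL for external tables). The helpers below are an illustrative sketch in Python rather than PL/SQL; the file-picking trick is that the fixed-width YYYYMMDDHH24MISS timestamp makes plain string comparison equivalent to chronological comparison:

```python
import re
from pathlib import Path

# EMPLOYEE_<YYYYMMDDHH24MISS>.csv -- 14 fixed-width digits
TS_NAME = re.compile(r"EMPLOYEE_(\d{14})\.csv$")

def newest(filenames):
    """Newest matching file name; fixed-width timestamps sort lexically."""
    matching = [f for f in filenames if TS_NAME.match(f)]
    return max(matching, default=None)

def latest_employee_file(directory):
    """Scan an OS directory (the one the Oracle DIRECTORY object points at)."""
    return newest(p.name for p in Path(directory).glob("EMPLOYEE_*.csv"))

def alter_location_sql(filename):
    # Dynamic DDL that repoints the existing external table at the new file
    return f"alter table employee location ('{filename}')"

print(alter_location_sql("EMPLOYEE_20211116203018.csv"))
# -> alter table employee location ('EMPLOYEE_20211116203018.csv')
```

In the database itself the same idea would be an EXECUTE IMMEDIATE of that ALTER statement after determining the file name (e.g. from a directory listing produced by the loading job); the scanning helper here is purely hypothetical scaffolding.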
Categories: DBA Blogs

Free Resource!! Sample Questions for Professional Scrum Master (PSM) Certification Exam

Online Apps DBA - Fri, 2021-11-19 07:11

Check the blog at https://k21academy.com/scrum15 & download the exam guide now. The Professional Scrum Master certification credential showcases that you have the experience and skills to lead an agile team successfully and serves as a differentiator, showing potential employers that you have the edge over your counterparts. This certification equips you with the right skills needed […]

The post Free Resource!! Sample Questions for Professional Scrum Master (PSM) Certification Exam appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs


The Anti-Kyte - Fri, 2021-11-19 05:38

“Tempus fugit!”… may well be similar to the collection of syllables that springs to mind when you find out the “fun” way that your carefully crafted DBMS_SCHEDULER calendar string doesn’t quite do what you thought it did.
Fortunately, the gift of clairvoyance is within your grasp if only you follow the Wisdom of the Ancient DBAs and their mantra of Legere mandata (Lit. “Read the instructions”)…

The path of enlightenment will eventually lead to the mystical and ancient (well, at least since 11g) writings of :


… which reveals how to acquire the power to foretell the next date and time match for a given calendar string.

Let’s start with a simple example, a calendar for the same time every day :

declare
    v_cal_string varchar2(4000);
    v_next_date timestamp;
begin
    dbms_output.put_line('Now is : '||to_char(sysdate, 'DY DD-MON-YYYY HH24:MI'));
    -- Calendar string for every day at 06:15
    v_cal_string := 'FREQ=DAILY; BYHOUR=6; BYMINUTE=15';
    dbms_scheduler.evaluate_calendar_string(
        calendar_string => v_cal_string,
        start_date => null,
        return_date_after => null,
        next_run_date => v_next_date);
    dbms_output.put_line('Next Run Date is : '||to_char(v_next_date, 'DY DD-MON-YYYY HH24:MI'));    
end;
/

Run this and we can see that the next run date is tomorrow morning, as expected (unless you’ve started work especially early today) :

No chickens were sacrificed in the making of this screenshot

Note that all of the parameters to EVALUATE_CALENDAR_STRING are mandatory.

If you want something a bit more exciting, how about we pretend that payday is the third Thursday of the month…

declare
    v_cal_string varchar2(4000);
    v_next_date timestamp;
begin
    dbms_output.put_line('Now is : '||to_char(sysdate, 'DY DD-MON-YYYY HH24:MI'));
    -- Third Thursday of every month
    v_cal_string := 'FREQ=MONTHLY; BYDAY=3 THU; BYHOUR=0; BYMINUTE=0';
    dbms_scheduler.evaluate_calendar_string(
        calendar_string => v_cal_string,
        start_date => null,
        return_date_after => null,
        next_run_date => v_next_date);
    dbms_output.put_line('Next Payday is : '||to_char(v_next_date, 'DY DD-MON-YYYY HH24:MI'));    
end;
/

That’s simple enough, but what if we want to check for something a bit more subtle, such as a string only for weekdays – or weekends, for that matter?
If fiddling around with the FIXED_DATE init.ora parameter in your development environment doesn’t appeal, you have the rather simpler option of setting a value for RETURN_DATE_AFTER :

declare
    v_weekday_string varchar2(4000) := 'FREQ=DAILY; BYDAY=MON,TUE,WED,THU,FRI; BYHOUR=6; BYMINUTE=15';
    v_next_run timestamp;
    v_start_from timestamp;
begin
    -- I want to know that the calendar string will hit Monday - Friday but not Saturday or Sunday
    v_start_from := to_timestamp(next_day(sysdate, 'TUESDAY'));

    for i in 1..5 loop
        dbms_scheduler.evaluate_calendar_string(
            calendar_string => v_weekday_string,
            start_date => null,
            return_date_after => v_start_from,
            next_run_date => v_next_run);
        dbms_output.put_line(to_char( v_next_run, 'Day DDth Month YYYY HH24:MI'));
        -- set RETURN_DATE_AFTER to be the date last found by the string
        v_start_from := v_next_run;
    end loop;    
end;
/

Alternatively, we can specify the start_date parameter. This time, we’ll test a calendar string for weekends…

declare
    v_weekend_string varchar2(4000) := 'FREQ=DAILY; BYDAY=SAT,SUN; BYHOUR=9; BYMINUTE=30'; 
    -- I like a lie-in at weekends 
    v_start_date timestamp;
    v_next_run timestamp;
begin
    v_start_date := to_timestamp(next_day(systimestamp, 'MONDAY'));
    for i in 1..2 loop
        dbms_scheduler.evaluate_calendar_string(
            calendar_string => v_weekend_string,
            start_date => v_start_date,
            return_date_after => null,
            next_run_date => v_next_run);
        dbms_output.put_line(to_char( v_next_run, 'Day DDth Month YYYY HH24:MI'));
        -- Unlike the RETURN_DATE_AFTER parameter, we need to increment the start date by a day
        -- in each loop iteration
        v_start_date := v_next_run + 1;
    end loop;    
end;
/

Whatever your future may hold, you can now be confident that it does not include further scheduling surprises.

DOAG 2021

Yann Neuhaus - Fri, 2021-11-19 03:11

All virtual, rumour says about 1,000 participants, over 200 sessions, most of them interesting, many people to meet: that was DOAG 2021.

I personally enjoy virtual conferences, as it’s easier to get in contact with people, easier to escape back to normal (family) life, and there is no travel time, which makes them eco-friendly. See the dbi services booth in 8 bits:

Larry Ellison finally discovered dbi services


One drawback: no food and drinks. Another one: many technical incidents. Everybody was supportive and in the end everything could be fixed, but time was wasted. If you want to organise such an event yourself, either contact the DOAG office or have a look at gather.town for meetups, Big Blue Button (open source) to record streams and chat, and vimeo to distribute the streams. And as usual with digital solutions: test, test, test, and instruct the people involved well.

Next, my three favourites among the roughly 15 sessions I followed. That is way too much; I recommend concentrating on only three per day.

25 Years of Tips and Techniques – Connor McDonald

Connor is technically brilliant and highly entertaining. Did you know there is a default commit when you accidentally exit SQL*Plus? Once in panic mode, many people close a session instead of rolling back. So usability-wise, it is better to deactivate this:

SQL> set exitcommit off;

Are you using Oracle RDBMS Enterprise Edition and looking for a reliable disaster recovery solution? Since you have probably licensed more than one server anyway, use Data Guard, which is included with Enterprise Edition at no extra cost. You put your primary database (the one users connect to and write to) on one server and a physical standby database on another.
Bonus tip from my side: create a second standby database that applies the redo logs not in real time but with a delay of, say, 24 hours. Most errors that lead to data restore and recovery are reported to DBAs within 24 hours, so with the second standby there is only recovery, no restore.
Data Guard has more advantages. One is certainly that you can open a standby database read-write (while pausing redo apply), use it for reports or tests, and then get it back in sync with the primary again. So personally, I see no reason not to use Data Guard if you have more than one licensed server. If you want to learn more about Data Guard, why not get instructed by my colleagues Marc, Mouhamadou or Hervé?
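A delayed second standby like the one described above can be sketched as follows (a hedged example: the database name is a placeholder, and the exact commands depend on whether you manage the configuration manually or with the Data Guard broker):

```sql
-- Without the broker, on the delayed standby:
-- apply redo with a 24-hour (1440-minute) delay.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  DELAY 1440 DISCONNECT FROM SESSION;

-- With the broker, the same delay can be set as a property
-- ('stby2' is a placeholder database name):
-- DGMGRL> EDIT DATABASE 'stby2' SET PROPERTY DelayMins = 1440;
```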
To sum up, Connor had more than 25 tips for everybody dealing with the Oracle RDBMS; have a look at his slides. And next time you get the chance, it is worth your time to follow one of his sessions.

A postgres-compatible DB on K8s for microservices: YugabyteDB – Franck Pachot / The Importance of a Distributed Database – Frits Hoogland

OK, unfair: those are two sessions, both about Yugabyte. Frits and Franck, the latter a former dbi colleague, are great team players: one was presenting while the other answered questions in the session chat. When listening, I constantly ask questions (only to myself, sometimes out loud) to better understand, to sum things up, or simply out of respect for my conversation partner. So having questions answered promptly is a real plus for me (and probably for others too).

I first got in touch with database replication while developing on Lotus Notes (around 1996), and again over the past year when dealing with Oracle Unified Directory (which uses multi-master replication). Frits explained the replication principle and its advantages (like rolling updates) well, whereas Franck showed how to scale up and down without service interruption. How cool would that be with the Oracle RDBMS, with a free choice between cloud and on-premises and without an unnecessary licensing nightmare.
Personally, I wish Yugabyte all the best, as they are still in startup mode. To me, a new star is shining.

Brücken bauen: mit LEGO SERIOUS PLAY® den Wandel begleiten – Dr. Reinhard Ematinger

In my experience, dealing with people is more important than dealing with technical stuff. Besides, people stay, while the technology is constantly updated. A second problem is that not only IT is becoming more and more complex. Simplicity is a true value in my opinion, but seldom a goal among features.
However, the cool thing is that people can change their minds, and, being an optimistic person, I believe there is a solution to every problem. Most problems are people-related, though. But how do you develop a common understanding of a problem? That is where LEGO SERIOUS PLAY® comes into play. It was originally invented and used to overcome a crisis at the LEGO corporation itself in the '90s. For some years now the method has been open source, making it easier to get trained. By building with LEGO, people seem to get focused again and start working to understand each other better. It seems very powerful to me, and not only because I like playing with LEGO myself and with my family.

So I hope to see you again, or for the first time, at a DOAG conference. They are always very well organised. Thanks to the people behind the conference for their hard work; well done.

Cet article DOAG 2021 est apparu en premier sur Blog dbi services.

Move data to warehouse

Tom Kyte - Thu, 2021-11-18 23:46
Looking for the best way to move data from an Oracle 19c OLTP system (size 13 TB) to an Oracle 19c warehouse DB. Currently using a standby database to fetch the modified rows and add them to the warehouse system. This process is taking a lot of time.
Categories: DBA Blogs

dblink blocked

Tom Kyte - Thu, 2021-11-18 23:46
Hi, I have a simple SQL like "SELECT max(case when day=:1 then tra else 0 end) tra, max(case when tra=1 and day<:2 then day else '0' end) lasttradingday FROM settlement.t_calendar@dl_otcdb WHERE day <= :3". The query often blocks, but I cannot find any blocking session in v$session. After I kill the session, the query works again. How could it be blocked? The query runs on 19c, and the remote database is 11g.
Categories: DBA Blogs
