Union All of two tables based on month of a date in Oracle

I have two tables table_a and table_b as below. column invdate is of date datatype and column amount is of number datatype.

table_a

invdate amount
20-01-2021 50
20-01-2021 100
20-02-2021 50
20-03-2021 50

table_b

invdate amount
01-01-2021 250
01-02-2021 300
01-03-2021 40
01-03-2021 50

I am doing a UNION ALL of both tables to get the sum of the amounts grouped by month and year only.

SELECT to_char(invdate, 'MM-YYYY') as "Date", sum(amount) as "Total"
FROM (
    SELECT to_char(invdate, 'MM-YYYY'), amount FROM table_a
    UNION ALL
    SELECT to_char(invdate, 'MM-YYYY'), amount FROM table_b
)
GROUP BY to_char(invdate, 'MM-YYYY')
ORDER BY to_char(invdate, 'MM-YYYY') asc;

The final output I want looks like this:

Date Total
01-2021 400
02-2021 350
03-2021 140

but it gives me the below error.

ORA-00904: "INVDATE": invalid identifier
00904. 00000 - "%s: invalid identifier"
*Cause:
*Action:
Error at Line: 10 Column: 18

What am I doing wrong here?
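For reference, this is the shape I was expecting to work, with the inner TO_CHAR expression given an explicit alias that the outer query can reference (I'm not sure this is the right fix):

```sql
-- Sketch only: alias the expression in the inner query,
-- then group and order on that alias in the outer query.
SELECT mon AS "Date", SUM(amount) AS "Total"
FROM (
    SELECT TO_CHAR(invdate, 'MM-YYYY') AS mon, amount FROM table_a
    UNION ALL
    SELECT TO_CHAR(invdate, 'MM-YYYY') AS mon, amount FROM table_b
)
GROUP BY mon
ORDER BY mon ASC;
```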

Connect two remote Oracle databases as schemas on each other

This Question is about Oracle Database.

I have two remote databases, DB_1 and DB_2, and each database has two schemas named DB_1 and DB_2. I want to reach schema DB_2 on database DB_2 from schema DB_1 on database DB_1. For example, when I try this query in schema DB_1 on database DB_1:

SELECT * FROM DB_2.example_table

I get an error, because there is no table "example_table" in schema DB_2 on database DB_1. But when I run the same query as schema DB_2 on database DB_2, I get the correct result (some data).

So I need to configure my databases in some way so that I can run SELECT * FROM DB_2.example_table from database DB_1.

But there is another problem: I don't have the privilege to CREATE DATABASE LINK. Moreover, when I query SELECT * FROM ALL_DB_LINKS on my production database DB_1, schema DB_1, there are no links at all. So I need to figure out how my production databases were linked (unfortunately, my colleagues don't know either). On my production database DB_1, schema DB_1, I can fetch data with the query:

SELECT * FROM DB_2.example_table

And the data I get is the same as when I run the same query on production database DB_2, schema DB_2.

But there are no public/private database links on the DB_1/DB_2 production databases at all. Is there another way to create a "connection" between my databases? (I queried ALL_DB_LINKS from DB_1 schema DB_1; is it possible that I would see other links if I ran the same query from DB_1 schema DB_2?)
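In case it helps with diagnosis, here is a sketch of checks I can run from DB_1 (my assumption, which may be wrong: the data could come from local objects such as views or synonyms in a local DB_2 schema rather than from a database link):

```sql
-- What kind of objects does the local DB_2 schema actually contain?
SELECT owner, object_name, object_type
FROM   all_objects
WHERE  owner = 'DB_2';

-- Are there synonyms redirecting to another schema or over a link?
SELECT owner, synonym_name, table_owner, table_name, db_link
FROM   all_synonyms
WHERE  owner = 'DB_2' OR table_owner = 'DB_2';
```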

Thanks in anticipation!

Oracle 11gR2 => Oracle Data Pump (expdp, impdp) => can a backup be safely taken during runtime?

Until now I have created a daily logical backup of my Oracle 11gR2 database at midnight, while the database is running but the client application is idle, so that no queries are executed against the database.

Now I also want to implement a second backup during the day, while the database and the client application are both up and running and queries (select/update/insert/delete) are executed.

Because I already have well-tested backup and restore scripts, I want to continue using expdp and impdp.

This second "during the day" backup would not be directly imported into the production system after a potential data loss. I would import it into a mirrored test system and then manually use OracleSqlExplorer to query the lost data.

This leads to the following questions:

  1. If I use expdp to perform a backup while the database is running and SQL statements are executed during the export, is it guaranteed that the resulting dump preserves the database's integrity and consistency?
  2. Do I need to add certain parameters to the expdp command to guarantee consistency? I found this:

"expdp options for creating a consistent export dump: FLASHBACK_SCN, FLASHBACK_TIME, CONSISTENT=Y"

So far I use this Linux shell command:

$ORACLE_HOME/bin/expdp \"$USERNAME/$PASSWORD as sysdba\" SCHEMAS=<csv list of schemas> REUSE_DUMPFILES=Yes DIRECTORY=backup DUMPFILE=$BACKUP_NAME.dmp
  3. Can I use a backup created with expdp during database runtime as a valid source for impdp, without having to worry about integrity and consistency? For question number 2 I found a thread that says NO.
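For what it's worth, this is the variant of my command I am considering (my assumption, to be confirmed: FLASHBACK_TIME makes the dump consistent to a single point in time, and CONSISTENT=Y is only the legacy-mode alias for it):

```
$ORACLE_HOME/bin/expdp \"$USERNAME/$PASSWORD as sysdba\" \
    SCHEMAS=<csv list of schemas> \
    FLASHBACK_TIME=SYSTIMESTAMP \
    REUSE_DUMPFILES=Yes \
    DIRECTORY=backup \
    DUMPFILE=$BACKUP_NAME.dmp
```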

I am confused about when the Oracle database won't do parsing

I am confused about when the Oracle database won't do parsing. In the AWR report there is a metric called "execute to parse", which increases when more SQL is executed without being parsed. But the Oracle documentation describes it as: "When an application issues a SQL statement, the application makes a parse call to the database to prepare the statement for execution." It seems that every time a SQL statement is issued, parsing is called. So I am wondering: when does Oracle skip parsing, making "execute to parse" a larger number? Or have I just misunderstood?

What the Oracle documentation says is:

SQL Parsing The first stage of SQL processing is parsing. The parsing stage involves separating the pieces of a SQL statement into a data structure that other routines can process. The database parses a statement when instructed by the application, which means that only the application, and not the database itself, can reduce the number of parses. When an application issues a SQL statement, the application makes a parse call to the database to prepare the statement for execution.

https://docs.oracle.com/database/121/TGSQL/tgsql_sqlproc.htm#TGSQL178

So if "an application issues a SQL statement, the application makes a parse call", then how applications "can reduce the number of parses"?

Oracle 19 ORA-01450: maximum key length (6398) exceeded

I am using Oracle 19 and recently changed the collation to XGERMAN_AI. This required changing MAX_STRING_SIZE to EXTENDED, as described in the following link: https://docs.oracle.com/database/121/REFRN/GUID-D424D23B-0933-425F-BC69-9C0E6724693C.htm#REFRN10321

Now, trying to create tables with unique constraints on VARCHAR2 columns produces the error:

ORA-01450: maximum key length (6398) exceeded.

Here is my table

CREATE TABLE MY_TABLE (
    ID NUMBER(38, 0) DEFAULT PKS_MY_TABLE_SEQ.nextval NOT NULL,
    VERSION NUMBER(38, 0) DEFAULT 0 NOT NULL,
    CREATED_BY VARCHAR2(50 CHAR) NOT NULL,
    CREATED_AT TIMESTAMP NOT NULL,
    MODIFIED_BY VARCHAR2(50 CHAR) NOT NULL,
    MODIFIED_AT TIMESTAMP NOT NULL,
    ID_OLD NUMBER(38, 0),
    RISK_LEVEL VARCHAR2(255 CHAR) NOT NULL,
    REMARKS VARCHAR2(255 CHAR),
    IS_DEACTIVATED NUMBER(1) DEFAULT 0 NOT NULL,
    COLOR VARCHAR2(255 CHAR) NOT NULL,
    CONSTRAINT MY_TABLE_PK PRIMARY KEY (ID),
    UNIQUE (RISK_LEVEL)
)

I googled this error, and all the results say it occurs because the index size is bigger than the block size, which is 8 KB. But in my case the column RISK_LEVEL is only 255 characters long (255 * 4 bytes = 1020 bytes), which is much less than both 8 KB and 6398.

Any idea how to fix this?
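One thing I noticed while digging, though I'm not sure it's the cause: with a linguistic collation such as XGERMAN_AI, the index key may be built on NLSSORT(RISK_LEVEL) rather than on the raw column bytes, and that collation key can be much longer than 1020 bytes. A sketch to measure it:

```sql
-- How many bytes does the XGERMAN_AI collation key for a 255-char value take?
SELECT LENGTHB(NLSSORT(RPAD('x', 255, 'x'), 'NLS_SORT=XGERMAN_AI')) AS key_bytes
FROM   dual;
```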

Oracle impdp INCLUDE parameter

I have a script that uses impdp over a database link (no expdp used) to import some data from our PROD environment into a DEV environment.

Some tables are rather gigantic and partitioned, and we want just a sample of them, like the last 30 days of data. I noticed that even when specifying a QUERY parameter, impdp still slowly goes through every partition, displaying several "0 rows imported" messages and taking time to get to the desired partitions.

Is it possible to specify the partition names inside the INCLUDE parameter? (This is where I am listing the desired tables; it is a subquery of table names on the PROD environment.)

I know there's a TABLES parameter that is usually used for this, but can it be dynamically populated from a query, like the INCLUDE one can?

I imagine I can generate a string in the script prior to calling impdp, but I would prefer a native solution using impdp itself.

Below is a sample of the command structure I am using; it is not filtered as it stands.

impdp user/"pwd"@bd DIRECTORY=SOMEDIR NETWORK_LINK=somelink schemas=someschema logfile=somefile.log remap_tablespace=TBS_BLA:TBS_BLE CONTENT=DATA_ONLY PARALLEL=40 include=TABLE:\"IN \(\'MY_TABLE\' \)\"

If I use TABLES=MY_TABLE:SYS_P01 it works, but I would like to define this dynamically, since a new partition is created every day.

It would be nice if I could specify partition names in the INCLUDE parameter, but I am not sure it is supported.
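To make it concrete, here is the kind of dictionary query I imagine generating the list from, over the same database link, before calling impdp (the schema, table, and link names are from my example above; the "newest 30 partitions" filter is just an illustration and uses 12c+ FETCH FIRST syntax):

```sql
-- Build a TABLE:PARTITION list from the newest partitions on PROD.
SELECT table_name || ':' || partition_name
FROM   all_tab_partitions@somelink
WHERE  table_owner = 'SOMESCHEMA'
AND    table_name  = 'MY_TABLE'
ORDER  BY partition_position DESC
FETCH  FIRST 30 ROWS ONLY;
```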

ORACLE execute error: ORA-01950: no privileges on tablespace ‘PDATA’

I'm new to Oracle, so my question may be stupid. I use Oracle only for data storage. I have done some research but I'm stuck. I use Oracle 12c. I created a PDB with an admin user PEEI_SYS like this:

create pluggable database PEEI admin user PEEI_SYS identified by PEEI roles = (DBA);
-- open PDB PEEI
alter pluggable database PEEI open read write;

I have created another user called PEEI which should only do SELECT, UPDATE, and INSERT on tables owned by PEEI_SYS. I created the user PEEI like this:

CREATE USER "PEEI" IDENTIFIED BY "PEEI" DEFAULT TABLESPACE PDATA TEMPORARY TABLESPACE TEMP PROFILE DEFAULT ACCOUNT UNLOCK;` Now I would like that the user PEEI could insert rows in the table PEEI_SYS.PEEI_P_TRACKING. This table is created like this:  `CREATE TABLE PEEI_SYS.PEEI_P_TRACKING (  "CODE_WORKFLOW" VARCHAR2(30 BYTE),  "STATUS" VARCHAR2(15 BYTE),  "DATE_UPDATE" DATE,  "USER_UPDATE" VARCHAR2(20 BYTE),  "DEB_WORKFLOW" DATE,  "FIN_WORKFLOW" DATE,  "TIME_SECOND" NUMBER ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "PDATA" ; GRANT SELECT ON PEEI_SYS.PEEI_P_TRACKING TO ROLE_PEEI_READ; GRANT DELETE ON PEEI_SYS.PEEI_P_TRACKING TO ROLE_PEEI_WRITE; GRANT INSERT ON PEEI_SYS.PEEI_P_TRACKING TO ROLE_PEEI_WRITE; GRANT UPDATE ON PEEI_SYS.PEEI_P_TRACKING TO ROLE_PEEI_WRITE; 

When I got the error, I granted an unlimited quota to user PEEI on the PDATA tablespace like this:

ALTER USER PEEI QUOTA UNLIMITED ON PDATA;

I still get the error. Could you please help me? Thank you very much in advance. Kind regards,
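P.S. A diagnostic sketch I put together while writing this (my assumption, which may be wrong: the inserted rows go into a segment owned by PEEI_SYS, so the quota that matters for ORA-01950 might be PEEI_SYS's, not PEEI's):

```sql
-- Which users actually have a quota on PDATA?
SELECT username, tablespace_name, bytes, max_bytes
FROM   dba_ts_quotas
WHERE  username IN ('PEEI', 'PEEI_SYS');
```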

Oracle Enterprise Manager 11g Database Control – Agent Unreachable

Single-instance DB. I can log in to OEM. However, it says "Status – Agent Unreachable" on the Home page.

Most tasks, such as checking RMAN status, adding datafiles, etc., can still be done through OEM. I do not know how "Agent Unreachable" impacts OEM in this case (DB Control).

I checked the status of the agent with $ORACLE_HOME/bin/emctl status agent; it showed "The Agent is not Running".

So I want to start the agent, but $ORACLE_HOME/bin/emctl start agent does not work; the command is not available. The Oracle docs mention that the command emctl start agent should be executed from AGENT_HOME/bin, but I do not know the AGENT_HOME path.
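If it helps, here is what I am considering trying next, based on my (possibly wrong) assumption that with 11g Database Control the agent is bundled into the dbconsole process rather than having a separate AGENT_HOME:

```
# DB Control bundles its agent with the console process:
$ORACLE_HOME/bin/emctl status dbconsole
$ORACLE_HOME/bin/emctl start dbconsole

# Locate any other emctl binaries that might belong to a standalone agent:
find $ORACLE_HOME -name emctl -type f
```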

Please help. Thanks in advance.

How to improve Oracle Standard Edition’s performance for testing?

There’s a great post on StackOverflow about improving Postgres performance for testing.

https://stackoverflow.com/questions/9407442/optimise-postgresql-for-fast-testing/9407940#9407940

However, there aren't any resources on doing the same for Oracle Database. I don't have a license for Enterprise Edition, which has features like 'In-Memory' columnar storage that would almost certainly improve performance.

https://docs.oracle.com/en/database/oracle/oracle-database/19/inmem/intro-to-in-memory-column-store.html

I'm really limited in what I can try in Standard Edition. It's running in a Docker container in a CI pipeline. I've tried putting the tablespace on a RAM disk, but that doesn't improve performance at all. I've tried fiddling with FILESYSTEMIO_OPTIONS, but saw no performance change.

Would anyone know of some more obvious things I can do in OracleDB in a CI environment?
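To make the question concrete: the kind of trade I assume is acceptable in CI is durability for speed, analogous to fsync=off in the Postgres answer. For example (I'm unsure whether these actually help on Standard Edition):

```sql
-- Don't wait for the redo write at COMMIT; losing the last transactions
-- on a crash is fine for throwaway CI databases.
ALTER SYSTEM SET commit_logging = BATCH;
ALTER SYSTEM SET commit_wait = NOWAIT;
```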