## Oracle 11gR2 => Oracle Data Pump (expdp, impdp) => can a backup safely be taken during runtime?

Until now I have created a daily logical backup of my Oracle 11gR2 database at midnight. The database is running at that time, but the client application is idle, so no queries are executed against the database.

Now I also want to implement a second backup during the day, while the database and the client application are both up and running and queries (select/update/insert/delete) are being executed.

Because I already have well-tested backup and restore scripts, I want to continue using expdp and impdp.

This second "during the day" backup would not be imported directly into the production system after a potential data loss. Instead, I would import it into a mirrored test system and then manually use OracleSqlExplorer to query the lost data.

This leads to the following questions:

1. If I use expdp to perform a backup while the database is running and SQL statements are being executed, is it guaranteed that the resulting dump preserves the database's integrity and consistency?
2. Do I need to add certain parameters to the expdp command to guarantee consistency? I found this:

"expdp options for creating a consistent export dump: FLASHBACK_SCN, FLASHBACK_TIME, CONSISTENT=Y"

So far I use this Linux shell script:

    $ORACLE_HOME/bin/expdp \"${USERNAME}/${PASSWORD} as sysdba\" \
        SCHEMAS=<csv list of schemas> \
        REUSE_DUMPFILES=Yes \
        DIRECTORY=backup \
        DUMPFILE=${BACKUP_NAME}.dmp
3. Can I use a backup created with expdp while the database is running as a valid source for impdp, without having to worry about integrity and consistency? For question 2 I found a thread that says NO.
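If a consistent dump is the goal, the script above could be extended along these lines (a sketch under my assumptions, based on the FLASHBACK_TIME parameter mentioned in the quote; it pins all exported table data to a single point in time and needs enough undo retention to cover the duration of the export):

```shell
# Sketch only: same export as above, but pinned to one point in time so
# that every table is dumped from a single consistent snapshot.
$ORACLE_HOME/bin/expdp \"${USERNAME}/${PASSWORD} as sysdba\" \
    SCHEMAS=<csv list of schemas> \
    REUSE_DUMPFILES=Yes \
    DIRECTORY=backup \
    DUMPFILE=${BACKUP_NAME}.dmp \
    FLASHBACK_TIME=SYSTIMESTAMP
```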

## Parse data at runtime into JSON

I have been trying to work out a subscription form for my game and thanks to you guys here, I came to a solution which works but still needs some changes.

The idea is to have an input field in the scene that takes the user's email and converts it into JSON (to do a POST request). I have made the POST request and everything works fine, but I have to manually type the email in the inspector. What I want is to get that value from the input field!

    public class testing : MonoBehaviour
    {
        public TMP_InputField myField;
        //public InputField field;
        [SerializeField]
        private Email _email = new Email();
        private string URL = "";

        public void SaveData()
        {
            string data = JsonUtility.ToJson(_email); // this part here needs to be like _email.text
            System.IO.File.WriteAllText(Application.persistentDataPath + "Data.json", data);
            StartCoroutine(SaveIntoJson(URL, data));
        }

        IEnumerator SaveIntoJson(string url, string data)
        {
            var request = new UnityWebRequest(url, UnityWebRequest.kHttpVerbPOST);
            request.SetRequestHeader("Content-Type", "application/json");
            var jsonBytes = Encoding.UTF8.GetBytes(data);
            request.uploadHandler = new UploadHandlerRaw(jsonBytes);
            request.downloadHandler = new DownloadHandlerBuffer();

            yield return request.SendWebRequest();
            if (request.isNetworkError || request.isHttpError)
            {
                Debug.Log(request.error);
                Debug.Log(request.downloadHandler.text);
            }
            else
            {
                Debug.Log("Form upload complete!");
            }
            Debug.Log(data);
        }
    }

    [System.Serializable]
    public class Email
    {
        public List<Profiles> profiles = new List<Profiles>();
    }

    [System.Serializable]
    public class Profiles
    {
        [SerializeField]
        public string email; // this part here works but i need to type the email in the inspector
    }

also the requirements for the json are like this

 {"profiles":[{"email":"testing@tes.com"}]} 
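For what it's worth, here is a minimal sketch of the missing piece (my suggestion, assuming `myField` is wired to the scene's input field): build the `Email` payload from `myField.text` at save time instead of relying on the inspector value.

```csharp
// Sketch: inside the testing class above, read the address from the
// TMP_InputField when saving, rather than from the serialized field.
public void SaveData()
{
    _email.profiles.Clear();
    _email.profiles.Add(new Profiles { email = myField.text });

    // Serializes to: {"profiles":[{"email":"<whatever was typed>"}]}
    string data = JsonUtility.ToJson(_email);
    System.IO.File.WriteAllText(Application.persistentDataPath + "Data.json", data);
    StartCoroutine(SaveIntoJson(URL, data));
}
```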

## Using several render textures at the same time during runtime

I want to achieve a system where I can use render textures as portraits/icons. So when I select something, I want to show it in my UI. Currently I do:

1. Spawn my prefab with the model (I have one of these prefabs for each model), in my case this prefab has a camera (for render texture), a model, and a spotlight.
2. Show this render texture in my UI

This is fairly straightforward when I only need one render texture at a time. But say I select a house and want to see all of its residents; now I need 6-10 render textures, all showing different models.

Should I create/destroy render texture assets at runtime for this kind of feature, since each camera would need its own render texture? I'm worried it's an expensive feature that costs more than it's worth.

Or do I need to create one render texture asset for each model in my game, and simply point to it in the prefab mentioned above?

Is there a smarter way?

## How to Reconcile Apparent Discrepancy in this Algorithm’s Runtime?

I’m currently working through Algorithms by Dr. Jeff Erickson. The following is an algorithm presented in the book:

    NDaysOfChristmas(gifts[2 .. n]):
        for i ← 1 to n
            Sing “On the ith day of Christmas, my true love gave to me”
            for j ← i down to 2
                Sing “j gifts[j],”
            if i > 1
                Sing “and”
            Sing “a partridge in a pear tree.”

Here’s the runtime analysis of the algorithm presented by Dr. Erickson:

The input to NDaysOfChristmas is a list of $$n − 1$$ gifts, represented here as an array. It’s quite easy to show that the singing time is $$\Theta(n^{2})$$; in particular, the singer mentions the name of a gift $$\sum_{i=1}^n i = \frac{n(n + 1)}{2}$$ times (counting the partridge in the pear tree). It’s also easy to see that during the first $$n$$ days of Christmas, my true love gave to me exactly $$\sum_{i=1}^{n}\sum_{j=1}^{i} j = \frac{n(n + 1)(n + 2)}{6} = \Theta(n^3)$$ gifts.

I can’t seem to grasp how it is possible that your “true love” gave you $$\Theta(n^3)$$ gifts, while a computer scientist looking at this algorithm would say its runtime complexity is $$\Theta(n^2)$$.

Dr. Erickson also says the name of a gift is mentioned $$\frac{n(n+1)}{2}$$ times, which is in $$\Theta(n^2)$$.
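One way to see the two numbers apart (a quick check of my own, not from the book): the singing time tracks gift-name *mentions*, which the loops actually perform, while the number of gifts *given* is a larger quantity that no loop ever iterates over one gift at a time.

```python
# Quick check (not from the book): compare the number of gift-name mentions
# (what the singing loops actually do) with the number of gifts given.
def christmas_counts(n):
    # Day i mentions i gift names, so mentions = sum_{i=1}^{n} i.
    mentions = sum(i for i in range(1, n + 1))
    # Day i delivers sum_{j=1}^{i} j gifts, giving a cubic total.
    gifts = sum(j for i in range(1, n + 1) for j in range(1, i + 1))
    return mentions, gifts

mentions, gifts = christmas_counts(12)
assert mentions == 12 * 13 // 2      # n(n+1)/2      = 78,  Theta(n^2)
assert gifts == 12 * 13 * 14 // 6    # n(n+1)(n+2)/6 = 364, Theta(n^3)
```

The runtime is proportional to `mentions`, not `gifts`: the gifts are counted in the lyrics, but the algorithm never spends a step per gift.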

## Is it possible for the runtime and input size in an algorithm to be inversely related?

I’m wondering whether there can be algorithms whose runtime decreases monotonically with the input size, just as a fun mental exercise. If not, is it possible to disprove the claim? I haven’t been able to come up with an example or a counterexample so far, and this sounds like an interesting problem.

P.S. Something like $$O(\frac{1}{n})$$, I guess (if it exists)

## Bubble Sort: Runtime complexity analysis like Cormen does

I’m trying to analyze Bubble Sort’s runtime in a manner similar to Cormen’s analysis of Insertion Sort in "Introduction to Algorithms, 3rd Ed." (shown below). I haven’t found a line-by-line analysis like Cormen’s for this algorithm online, only multiplied summations of the outer and inner loops.

For each line of bubblesort(A), I have worked out the times it runs, shown below. I’d appreciate any guidance on whether this analysis is correct and, if not, how it should be done. Also, I do not see the best case where $$T(n) = n$$, as it appears the inner loop always runs completely. Maybe that applies to an "optimized" bubble sort, which is not shown here?

Times for each line, with constant cost $$c_k$$ where $$k$$ is the line number:

Line 1: $$c_1 n$$

Line 2: $$c_2 \sum_{j=2}^n j$$

Line 3: $$c_3 \sum_{j=2}^n (j - 1)$$

Line 4: $$c_4 \sum_{j=2}^n (j - 1)$$

Worst case:

$$T(n) = c_1 n + c_2 (n(n+1)/2 - 1) + c_3 (n(n-1)/2) + c_4 (n(n-1)/2)$$

$$T(n) = c_1 n + c_2 (n^2/2) + c_2 (n/2) - c_2 + c_3 (n^2/2) - c_3 (n/2) + c_4 (n^2/2) - c_4 (n/2)$$

$$T(n) = (c_2/2 + c_3/2 + c_4/2) n^2 + (c_1 + c_2/2 - c_3/2 - c_4/2) n - c_2$$

$$T(n) = an^2 + bn - c$$
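The closed forms used for the two summations above can be sanity-checked numerically (my own check, not from CLRS):

```python
# Numeric sanity check: the loop-test line runs sum_{j=2}^{n} j times and
# the loop-body lines run sum_{j=2}^{n} (j - 1) times.
def trip_counts(n):
    tests = sum(j for j in range(2, n + 1))      # line 2
    body = sum(j - 1 for j in range(2, n + 1))   # lines 3-4
    return tests, body

for n in range(2, 50):
    # Closed forms: n(n+1)/2 - 1 and n(n-1)/2, as used in T(n) above.
    assert trip_counts(n) == (n * (n + 1) // 2 - 1, n * (n - 1) // 2)
```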

## Runtime for Search in Unordered_map C++ [closed]

I have come across a lot of articles and questions suggesting that unordered_map is a lookup table that offers O(1) search time complexity. I wonder how this is possible: they say lookup is amortized O(1), with an O(n) worst case. Even after an extensive search, I haven’t found out when the lookup time hits O(n), or how unordered_map is actually implemented under the hood.

## Runtime error: How do I avoid it for a large test case?

I have been solving the CSES problem set and I am stuck on the following problem : CSES-Labyrinth

Here is my solution :

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        int n, m, distance = 0, x = 0, y = 0;
        string str1 = "NO", str2 = "";
        cin >> n >> m;
        char grid[n + 1][m + 1];
        int vis[n + 1][m + 1];
        int dis[n + 1][m + 1];
        string path[n + 1][m + 1];
        int dx[] = {0, 0, 1, -1};
        int dy[] = {1, -1, 0, 0};
        char dz[] = {'R', 'L', 'D', 'U'};
        queue<pair<int, int>> s;

        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++) {
                cin >> grid[i][j];
                if (grid[i][j] == 'A') {
                    x = i; y = j;
                }
                vis[i][j] = 0;
                dis[i][j] = 0;
                path[i][j] = "";
            }

        s.push({x, y});
        while (!s.empty()) {
            pair<int, int> a = s.front();
            s.pop();
            if (grid[a.first][a.second] == 'B') {
                distance = dis[a.first][a.second];
                str1 = "YES";
                x = a.first; y = a.second;
                break;
            }
            if (vis[a.first][a.second] == 1)
                continue;
            else {
                vis[a.first][a.second] = 1;
                for (int i = 0; i < 4; i++) {
                    if (a.first + dx[i] < n && a.first + dx[i] >= 0 &&
                        a.second + dy[i] < m && a.second + dy[i] >= 0 &&
                        (grid[a.first + dx[i]][a.second + dy[i]] == '.' ||
                         grid[a.first + dx[i]][a.second + dy[i]] == 'B')) {
                        s.push({a.first + dx[i], a.second + dy[i]});
                        dis[a.first + dx[i]][a.second + dy[i]] = dis[a.first][a.second] + 1;
                        path[a.first + dx[i]][a.second + dy[i]] = path[a.first][a.second] + dz[i];
                    }
                }
            }
        }
        if (str1 == "YES") {
            cout << str1 << endl << distance << endl << path[x][y];
        }
        else
            cout << str1;
    }

I am getting a runtime error on 3 of 15 test cases (the other 12 are accepted), and this was the best result I could reach. How do I avoid runtime errors? What is wrong with my solution?

## Improve Prim’s algorithm runtime

Assume we run Prim’s algorithm when we know all the weights are integers in the range {1, …, W}, where W is logarithmic in |V|. Can you improve Prim’s running time?

By ‘improving’, I mean reaching at least $$O(|E|)$$.

My question is: without using a priority queue, is it even possible? Currently, we learned that Prim’s runtime is $$O(|E|\log|E|)$$.

I proved that I can get to $$O(|E|)$$ when the weights are from {1, …, W} with W constant, but when W is logarithmic in |V|, I can’t manage to prove or disprove it.
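Not an answer to the lower-bound question, but for concreteness, here is a sketch (my own, not from the course) of Prim with a bucket queue instead of a comparison-based priority queue. With integer keys in {1, …, W}, each "decrease-key" is an O(1) re-insert and each extraction scans at most W+1 buckets, for O(|E| + |V|·W) total:

```python
# Sketch: Prim's algorithm with a bucket priority queue over keys 0..W.
# Stale bucket entries are skipped lazily when popped.
def prim_bucket(n, adj, W):
    # adj[u] = list of (v, w) pairs with 1 <= w <= W; returns MST weight.
    INF = float("inf")
    key = [INF] * n              # cheapest known edge into the tree
    in_mst = [False] * n
    buckets = [[] for _ in range(W + 1)]  # buckets[k]: vertices with key k
    key[0] = 0
    buckets[0].append(0)
    total = 0
    for _ in range(n):
        u = None
        for k in range(W + 1):   # find the minimum live key
            while buckets[k]:
                v = buckets[k].pop()
                if not in_mst[v] and key[v] == k:
                    u = v
                    break
            if u is not None:
                break
        if u is None:
            break                # graph is disconnected
        in_mst[u] = True
        total += key[u]
        for v, w in adj[u]:
            if not in_mst[v] and w < key[v]:
                key[v] = w       # O(1) "decrease-key": just re-insert
                buckets[w].append(v)
    return total

# Triangle with weights 1, 2, 3: the MST uses the two cheapest edges.
adj = {0: [(1, 1), (2, 3)], 1: [(0, 1), (2, 2)], 2: [(0, 3), (1, 2)]}
assert prim_bucket(3, adj, 3) == 3
```

With W = O(log|V|) this gives O(|E| + |V| log|V|), which is O(|E|) only for sufficiently dense graphs, so it does not by itself settle the question.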

Thanks

## Is it correct or incorrect to say that an input, say $C$, causes an average run-time of an algorithm?

I was going through the text Introduction to Algorithms by Cormen et al. when I came across an excerpt which I felt required a bit of clarification.

As far as I have learned, the best-case and worst-case time complexities of an algorithm arise for certain concrete inputs (say an input $$A$$ causes the worst-case running time of an algorithm, or an input $$B$$ causes the best-case running time, asymptotically). But there is no such concrete input which causes the average-case runtime of an algorithm: the average-case runtime is, by definition, the runtime averaged over all possible inputs. It is something which, I suppose, exists only mathematically.

On the other hand, an input which is neither the best-case input nor the worst-case input should lie somewhere between the two extremes, and the algorithm's performance on it is measured by none other than the average-case time complexity, since the average-case complexity lies between the worst-case and best-case complexities just as our input lies between the two extremes.

Is it correct or incorrect to say that an input say $$C$$ causes an average run-time of an algorithm?

The excerpt from the text which made me ask such a question is as follows:

In context of the analysis of quicksort,

In the average case, PARTITION produces a mix of “good” and “bad” splits. In a recursion tree for an average-case execution of PARTITION, the good and bad splits are distributed randomly throughout the tree. Suppose, for the sake of intuition, that the good and bad splits alternate levels in the tree, and that the good splits are best-case splits and the bad splits are worst-case splits. Figure(a) shows the splits at two consecutive levels in the recursion tree. At the root of the tree, the cost is $$n$$ for partitioning, and the subarrays produced have sizes $$n-1$$ and $$0$$: the worst case. At the next level, the subarray of size $$n-1$$ undergoes best-case partitioning into subarrays of size $$(n-1)/2 - 1$$ and $$(n-1)/2$$. Let’s assume that the boundary-condition cost is $$1$$ for the subarray of size $$0$$.

The combination of the bad split followed by the good split produces three subarrays of sizes $$0$$, $$(n-1)/2 - 1$$, and $$(n-1)/2$$ at a combined partitioning cost of $$\Theta(n)+\Theta(n-1)=\Theta(n)$$. Certainly, this situation is no worse than that in Figure(b), namely a single level of partitioning that produces two subarrays of size $$(n-1)/2$$, at a cost of $$\Theta(n)$$. Yet this latter situation is balanced!