## Finding the worst case running time of this piece of code?

I am working with this code:

```
function strange(list a[0..n-1] of integers such that abs(a[i]) ≤ n for every 0 ≤ i ≤ n - 1,
                 list b[0..2n] of zeroes)
    for i ← 0 to n - 1 do
        a[i] ← a[i] + n
    for i ← 0 to n - 1 do
        for j ← 0 to abs(a[i]) - 1 do
            b[j] ← b[j] + 1
    return b
```

I am trying to figure out the worst-case running time of the code above. So far I'm guessing that the first for loop runs n times, but I'm not sure how to prove this, and I don't know how to approach the second and third for loops at all. If possible, could someone help me solve this?
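For intuition, here is a runnable Python sketch of the pseudocode (reading the inner bound as `abs(a[i]) - 1`, inclusive) that also counts the inner-loop iterations; the names `strange` and `ops` are just for illustration:

```python
def strange(a):
    n = len(a)
    b = [0] * (2 * n + 1)
    ops = 0
    # First loop: exactly n iterations; afterwards every a[i] is in [0, 2n].
    a = [x + n for x in a]
    # Second/third loops: the inner body runs a[i] times for each i.
    for i in range(n):
        for j in range(a[i]):
            b[j] += 1
            ops += 1
    return b, ops

# Worst case: every a[i] = n, so after the shift each entry is 2n
# and the total inner work is n * 2n = 2n^2, i.e. Θ(n^2).
n = 50
b, ops = strange([n] * n)
```

The first loop is trivially Θ(n) (one pass over the list). The total work of the nested loops is the sum of the shifted values, which is at most 2n per element, so the whole function is O(n²), and the all-`n` input above shows this bound is attained.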

## Difficulty understanding the use of arbitrary function for the worst case running time of an algorithm

In CLRS the authors said:

"Technically, it is an abuse to say that the running time of insertion sort is $$O(n^2)$$, since for a given $$n$$, the actual running time varies, depending on the particular input of size $$n$$. When we say “the running time is $$O(n^2)$$,” we mean that there is a function $$f(n)$$ that is $$O(n^2)$$ such that for any value of $$n$$, no matter what particular input of size $$n$$ is chosen, the running time on that input is bounded from above by the value $$f(n)$$. Equivalently, we mean that the worst-case running time is $$O(n^2)$$. "

What I have difficulty understanding is why the authors talked about an arbitrary function $$f(n)$$ instead of $$n^2$$ directly.

I mean, why didn't they write

"When we say “the running time is $$O(n^2)$$,” we mean that for any value of $$n$$, no matter what particular input of size $$n$$ is chosen, the running time on that input is bounded from above by the value $$cn^2$$ for some positive constant $$c$$ and all sufficiently large $$n$$. Equivalently, we mean that the worst-case running time is $$O(n^2)$$".

I have very limited understanding of this subject so please forgive me if my question is too basic.
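One way to read the passage is to make the quantifiers explicit by defining $$f(n)$$ as the worst-case running time itself:

```latex
f(n) \;=\; \max_{\substack{I \text{ an input} \\ \lvert I \rvert = n}} T(I),
\qquad
f(n) = O(n^2) \iff \exists\, c > 0,\ n_0 \ \text{s.t.}\ \forall n \ge n_0:\ f(n) \le c\,n^2 .
```

Here $$T(I)$$ denotes the running time on a particular input $$I$$ (my notation, not CLRS's). The point of the arbitrary $$f$$ is that "the running time" is not a function of $$n$$ alone, since it also depends on which input of size $$n$$ is chosen, and $$O$$-notation needs a genuine function of $$n$$ to attach to. Your proposed rewording appears to say the same thing: bounding every size-$$n$$ input by $$cn^2$$ is exactly the statement $$f(n) \le cn^2$$ for $$n \ge n_0$$.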

## Finding the Time Complexity – Worst Case (Big-Θ) – Array List, BST

Hi, I'm a bit confused about how to find the worst-case time complexity of the following, in terms of big-Θ. I've figured out (1) and (2).

What is the worst-case time complexity, in terms of big-Θ, of each of these operations?

(1) insert an element in the array list = Θ(1)

(2) remove an element from the array list (e.g. remove an occurrence of the number 5) = Θ(n)

(3) remove the second element from the array list (i.e. the one in position 1)

(4) count the number of unique elements it contains (i.e. the number of elements excluding duplicates; e.g. [6,4,1,4,3] has 4 unique elements)
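For (4), here is a sketch in Python of the two natural approaches (function names are mine): the nested-loop count is Θ(n²) in the worst case, while a hash set gives expected Θ(n):

```python
def count_unique_quadratic(a):
    # Count elements whose first occurrence is at their own index.
    # Worst case Θ(n^2): the inner scan looks at up to i earlier elements.
    count = 0
    for i in range(len(a)):
        if all(a[j] != a[i] for j in range(i)):
            count += 1
    return count

def count_unique_hashing(a):
    # Expected Θ(n) using a hash set.
    return len(set(a))
```

A third option, sorting first and counting adjacent changes, runs in Θ(n log n) worst case, which is the best guaranteed bound among the three if only comparisons are available.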

Suppose you have an initially empty array list with an underlying array of length 10. What is the length of the underlying array after:

(5) inserting 10 elements in the array list

(6) inserting 10 more elements in the array list (i.e. 20 elements in total)

(7) inserting 10000 more elements in the array list (i.e. 10020 elements in total)
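The question doesn't state the growth policy, so assuming the common one of doubling the capacity whenever the array is full, a small simulation (names are mine) answers (5)–(7):

```python
def capacity_after(inserts, initial_capacity=10):
    # Simulate an array list that doubles its underlying array when full
    # (assumed policy; the exercise doesn't specify one).
    cap, size = initial_capacity, 0
    for _ in range(inserts):
        if size == cap:
            cap *= 2  # allocate a new array twice as long, copy elements
        size += 1
    return cap
```

With that policy the answers would be 10, 20, and 10240 (the capacity doubles from 10 up through 10 · 2¹⁰ = 10240, the first power-of-two multiple that fits 10020 elements). Other policies, such as growing by 50% as Java's `ArrayList` does, give different numbers.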

What is the worst-case time complexity, in terms of big-Θ, of each of these operations on binary search trees?

(8) add an element to the tree (assuming that the tree is balanced)

(9) add an element to the tree (without assuming that the tree is balanced)

(10) find the largest element in the tree (assuming that the tree is balanced)

After each operation, we should still have a valid binary search tree.
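For (9), the worst case is a degenerate "stick". A sketch (my own minimal BST, not from the exercise): inserting keys in sorted order into a plain unbalanced BST makes the last insert cost n − 1 comparisons, i.e. Θ(n):

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Standard unbalanced BST insert; returns (root, comparisons made)."""
    comparisons = 0
    if root is None:
        return Node(key), comparisons
    node = root
    while True:
        comparisons += 1
        if key < node.key:
            if node.left is None:
                node.left = Node(key)
                break
            node = node.left
        else:
            if node.right is None:
                node.right = Node(key)
                break
            node = node.right
    return root, comparisons

# Sorted insertions build a right-leaning chain of depth n - 1.
root, n = None, 64
for key in range(1, n + 1):
    root, comparisons = insert(root, key)
```

By contrast, when the tree is kept balanced (case 8) the height is Θ(log n), and for (10) the largest element is found by following right children from the root, also Θ(log n) in a balanced tree.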

## What is Simple Uniform Hashing, and why searching a hashtable has complexity Θ(n) in the worst case

Can anyone explain nicely what simple uniform hashing is, and why searching a hash table has complexity Θ(n) in the worst case if we don't have uniform hashing (where n is the number of elements in the hash table)?
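To see where the Θ(n) comes from: if the keys are not spread uniformly and an adversary picks keys that all land in one bucket, a chained hash table degenerates into a linked list, and a search must walk the whole chain. A minimal sketch (the class and names are mine; it relies on CPython hashing a small int to itself, so multiples of the bucket count all collide):

```python
class ChainedHashTable:
    """Toy hash table with separate chaining, for demonstration only."""

    def __init__(self, buckets=8):
        self.buckets = [[] for _ in range(buckets)]

    def _index(self, key):
        return hash(key) % len(self.buckets)

    def insert(self, key, value):
        self.buckets[self._index(key)].append((key, value))

    def search(self, key):
        # Returns (value, number of chain entries examined).
        probes = 0
        for k, v in self.buckets[self._index(key)]:
            probes += 1
            if k == key:
                return v, probes
        return None, probes

# Adversarial input: 100 keys that all hash into bucket 0.
table = ChainedHashTable(buckets=8)
for k in range(0, 800, 8):
    table.insert(k, k)
```

Searching for the last-inserted key examines all 100 entries, i.e. Θ(n) probes. Simple uniform hashing is exactly the assumption that rules this out on average: each key is equally likely to hash to any bucket, independently, so the expected chain length is n/m (the load factor) and search is expected Θ(1 + n/m).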

## Why has the Final Fantasy series largely changed for the worse (or JRPGs/RPGs in general)?

Compared with what many remember of the '90s and even the 2000s, when RPGs/JRPGs were open, explorable, full of side quests, and built on challenging battles and tactics, the genre now seems largely diminished. I'm referencing the FF series in particular since it is among the "top dogs" of the genre. I get the impression that a lot of what made the old games good is lost:

1. What was once an explorable main navigation element has become more centered, linear, and/or restrictive. You can have nicer walking animations and prettier backgrounds, but you get the same "tunnel" of forward movement, or else aimless wandering in huge open areas. That replaces the special emphasis of the simple old rooms (often smaller), which gave less graphically but more in terms of travel, exploration, and sense of place than just "hunting" or "running around and grinding." If you make a large area, you should at least give it different elements beyond "lots of space." If you scale up, that "something" needs to scale up too; otherwise the result is just emptier.

2. The old freely explorable overworlds/world maps and the airship/flying-ship mechanics (even the revisit mechanics) are almost always chopped down or implemented much less attractively (think of how this started with FFX and continued onwards: you can "explore" quickly, but it's largely a watered-down process). The whole "open world" aspect of the classics has been reduced to large areas/fields: no longer a blend of different terrains, sub-areas, and sections, or the general feel of traveling between, entering, and exiting different areas, but storyline/linear rules imposed on every area and path.

3. The battle system is definitely a hot topic: some will tell you the new mechanics bring fresh flesh to the table, while others feel the older system worked best and has been "slaughtered" in the attempt to "spice up" something people already liked and had settled into over time. The thing is, if the battle system is to be made "better," it should keep the skeleton the original battle systems were built on. The old turn-based games kept that skeleton even when they became "active time" battles in which every turn is up for grabs by the quickest. The idea was that you could maintain the same skeletal base of the mechanics and only tweak it, but a lot of the newer entries go down a completely new path of experimentation, impositions, rules, and unneeded "extra steps." Basically, the game's old, functioning system gets sidelined while its old DNA is "spliced" under the impression that you can improve an old thing by heading in a completely new "frankenstein" direction, rather than sprucing up the initial base in a specific, targeted way that honors its original lifeblood.

4. Always an extreme. Nowadays the games in this series seem to be either too linear or not linear at all; there's no longer a good balance between the two. One game may make specific areas so massive and explorable that you're bound to get lost, tired, or stuck grinding excessively in one area before moving to the next, rinse and repeat. On the other hand, a game can go super linear (think FFXIII), where everything is just "new area -> go straight -> battle -> story -> repeat." What arguably made the classics more "wow" is that the game, when it needed to, switched from storyline linearity to open, explorable stretches (to pique the natural exploring instinct), and back to restriction when danger arose (a defensive mechanism, a protective inhibition). Because both of these angles match human behavior, they suit gameplay. Make a game either too open or too linear and you force one side for too long, and it no longer aligns with the natural cycle of engagement well enough to have proper "ups" and "downs."

5. More "complex" systems for leveling, power-ups, and so on. In the old games these were often fairly simple and direct, and that directness let the system expand and keep delivering. What has replaced "easy but expandable" seems to be "complex and rigid": a steeper learning curve, but less direction to go once you "have it." I think the series has slowly gone this way, possibly starting around FFXIII. Something simple that expands as needed has become something complex that doesn't give much over time. Junctioning in FF8, for example, starts simple but scales up to cool things as the game goes on, especially once GFs feed into character stats. In FFXIII, by contrast, the "powering system" feels like a weak remnant of FFX and FFXII in both combat and stat growth.

6. Games and their scenes (this probably applies outside this genre/series too) are now largely presented as cinematics/films with bits of gameplay, blurring whether a title is innately a movie with gameplay or gameplay with cinematics. In the older games of the series, "movies" in the FMV sense were much less emphasized as part of the overall game; cutscenes were mostly seamless or passive. Now they are expected to carry more weight (thanks to the graphics) and "fill" a part of the game and its impression, rather than being a seamless flow where only particular moments get extra emphasis. In the old FFs, how much of the story would you lose by removing the dialogue, locked moments, and cutscenes? Compare that with how much would be lost in a modern game. If there is more to lose from the cutscenes overall, then maybe they are relied on too much to shape the impression or experience of the game.

## Worst Case for AVL Tree Balancing after Deletion

After deleting a node in an AVL tree, self-balancing (single or double "zig-zag" rotations, i.e. left-right balancing) maintains the O(log n) height that is not guaranteed in unbalanced trees (like a plain BST).

A single balancing operation (one rotation) is said to be O(1).

What is the worst case for balancing after a deletion?

Is there a specific type of tree that produces the worst case?
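On the "specific type of tree" question: the usual worst-case family is the Fibonacci tree, the AVL tree of height h with the fewest possible nodes. Deleting from one can force a rebalancing rotation at every level on the way back up, i.e. Θ(log n) rotations of O(1) each (unlike insertion, which needs at most one single or double rotation). A sketch of the minimal-node-count recurrence (assuming the convention N(0) = 1, N(1) = 2):

```python
import math

def min_nodes(h):
    # Minimum node count N(h) of an AVL tree of height h (a Fibonacci tree):
    #   N(0) = 1, N(1) = 2, N(h) = 1 + N(h-1) + N(h-2)
    # (root plus a minimal subtree of height h-1 and one of height h-2).
    if h == 0:
        return 1
    if h == 1:
        return 2
    a, b = 1, 2
    for _ in range(h - 1):
        a, b = b, a + b + 1
    return b
```

Because N(h) grows like a Fibonacci number, inverting it gives the classic height bound h < 1.4405 · log₂(n + 2), which is why every AVL operation, including the rotation cascade after a delete, stays O(log n).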

## What a malicious website can do in the worst scenario on a upgraded system [closed]

I use the latest Debian stable (buster, as of June 2020).

• system upgraded every day (and browser add-ons updated automatically)
• Firefox 68.9.0esr (64-bit) (the one from the `apt` package system)
• decent hardware (less than 5 years old)
• Debian security upgrades enabled

I'm aware of the security concerns, and I…

• verify (before clicking a link) that a link which looks like `example.org` isn't in fact `example.org.random.tracker.io`, for example (I watch out for phishing and tracking)
• take care with untrusted `X509` certificates on `https` websites
• avoid using untrusted Firefox add-ons
• never open suspicious files from the web or from mail
• don't use weak passwords (and never reuse one across two websites)
• never run Firefox as root (who does that?)
• use the HTTPS Everywhere, uBlock Origin, Ghostery, and Decentraleyes Firefox add-ons

So my question:

• what is the risk of opening a malicious website (one not in the Google Safe Browsing DB)? What can it do, in the worst case, apart from phishing? (I guess crypto-mining at least, or exploiting a Firefox vulnerability…)

## Which algorithm is the best and which one is the worst, and why?

Three different techniques to solve the GCD problem:

1. Euclid's algorithm

2. Consecutive integer checking algorithm (an algorithm based on the definition of GCD)

3. Middle-school procedure (prime factorization)

Of the above, which algorithm is the best and which one is the worst, and why?
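For concreteness, here are runnable sketches of the three techniques (my own minimal implementations, assuming positive inputs):

```python
def gcd_euclid(m, n):
    # Euclid: gcd(m, n) = gcd(n, m mod n); O(log min(m, n)) divisions.
    while n:
        m, n = n, m % n
    return m

def gcd_consecutive(m, n):
    # Definition-based: try t = min(m, n), then t-1, t-2, ...
    # Worst case Θ(min(m, n)) iterations, e.g. for coprime inputs.
    t = min(m, n)
    while t > 0:
        if m % t == 0 and n % t == 0:
            return t
        t -= 1
    return max(m, n)  # reached only if one argument is 0

def gcd_middle_school(m, n):
    # Factor both numbers and multiply the common prime powers.
    def factors(x):
        f, d = {}, 2
        while d * d <= x:
            while x % d == 0:
                f[d] = f.get(d, 0) + 1
                x //= d
            d += 1
        if x > 1:
            f[x] = f.get(x, 0) + 1
        return f
    fm, fn = factors(m), factors(n)
    g = 1
    for p in fm:
        g *= p ** min(fm[p], fn.get(p, 0))
    return g
```

Euclid's algorithm is generally considered the best of the three: it needs only O(log min(m, n)) division steps. Consecutive integer checking can take Θ(min(m, n)) iterations, and the middle-school procedure pays for trial-division factorization on top of not even being fully specified as an algorithm until you pick a factoring method, which is why it is usually ranked the worst.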

## Lower bound and worst case scenario

We know that a lower bound is the minimum amount of work needed to solve a problem. So for a given problem, say x, with its best (most efficient) algorithm, say y, the lower bound derived for x is the least time in which x can be solved by any algorithm.

So why do we state the lower bound for this problem on the worst-case input? Why not on the best-case input? I mean, the lower bound is the minimum amount of work, which therefore occurs in the best-case scenario. Yet every time I see a decision-tree argument for the lower bound of some sorting problem, the phrase is always "the worst-case lower bound is blah blah blah," which confuses me so much! Someone please fix my understanding 🙁
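The standard example behind those decision-tree arguments: any comparison-based sort corresponds to a decision tree with at least n! leaves (one per permutation), so some input must travel a root-to-leaf path of length at least ⌈log₂(n!)⌉. That maximum over inputs is why it's called a worst-case lower bound. A small sketch:

```python
import math

def comparison_lower_bound(n):
    # A decision tree sorting n items has at least n! leaves, so its
    # height -- the worst-case number of comparisons -- is at least
    # ceil(log2(n!)).
    return math.ceil(math.log2(math.factorial(n)))
```

The "worst case" refers to the deepest leaf of the tree: a best-case input can finish much sooner (an already-sorted array needs only n − 1 comparisons in insertion sort), so the meaningful guarantee, the bound no algorithm can beat on every input, is stated over the worst input.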

## Is it safe to assume that my computer’s clock will always be synced with actual time to within a second, or a few seconds at worst?

Years ago, I was running a service where the moderators were able to perform various actions with massive privacy implications if the accounts or contributions were younger than a short period of time. I implemented this by checking the timestamp against the current Unix epoch, allowing for X hours/days. Normally, this worked well.

One day, the server this was hosted on was “knocked offline” in the data centre I was renting it from, according to the hosting company. When it came back up, its clock had been reset to the factory default, which was many years in the past.

This resulted in all my moderators potentially being able to see every single account’s history and contributions in my service until I came back, noticed the wrong time (which I might never have done!), and re-synced it. After that, I hardcoded a timestamp into the code which the current time had to exceed, or else the service would trigger “offline mode,” to avoid any potential disasters like this in the future. I also set up some kind of automatic timekeeping mechanism (in FreeBSD).

You’d think that by now, not only would every single computer be auto-synced by default, with plenty of fallback mechanisms so the clock is never, ever out of sync with “actual time” (at least down to the second, if not more accurately); it would also be impossible or extremely difficult to set the clock to anything but the current actual time, even if you went out of your way to do it.

I can’t remember my Windows computer ever having had the wrong time in the last many years. However, I do important logging of events in a system running on it. Should I just assume that the OS can keep the time at all times? Or should I use some kind of time-syncing service myself, like a free HTTPS API, where I make a lookup every minute and force the system clock to whatever it reports? Or should I just leave it be and assume that this is “taken care of”/solved?
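If the main worry is a sudden clock jump (as in the server story) rather than slow drift, one cheap in-process cross-check is to compare the wall clock against the monotonic clock, which the OS guarantees never goes backwards and cannot be set. A minimal Python sketch (function names and the 5-second tolerance are my own choices):

```python
import time

def make_clock_check(tolerance_s=5.0):
    """Snapshot wall clock vs. monotonic clock; later calls report drift."""
    wall0, mono0 = time.time(), time.monotonic()

    def check():
        # If nobody touched the wall clock, it should have advanced by
        # the same amount as the monotonic clock since the snapshot.
        expected = wall0 + (time.monotonic() - mono0)
        drift = time.time() - expected
        return abs(drift) <= tolerance_s, drift

    return check

check = make_clock_check()
# ... later, e.g. before a time-sensitive moderation decision:
ok, drift = check()
```

This only detects jumps since the process started; for actual synchronization you would still rely on an NTP client (e.g. chrony or systemd-timesyncd on Linux, or the built-in Windows time service) rather than polling an HTTPS API by hand.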