Find out whether a number is a power of two

The task is taken from LeetCode:

Given an integer, write a function to determine if it is a power of two.

Example 1:

Input: 1

Output: true

Explanation: 2^0 = 1

Example 2:

Input: 16

Output: true

Explanation: 2^4 = 16

Example 3:

Input: 218

Output: false

My solution

/**
 * @param {number} n
 * @return {boolean}
 */
var isPowerOfTwo = function(n) {
  if (n <= 0) { return false; }
  if (n <= 2) { return true; }
  let num = n;
  do {
    const x = num / 2;
    // a non-integer quotient means n has an odd factor
    if ((x | 0) !== x) { return false; }
    if (x === 2) { return true; }
    num = x;
  } while (num);
  return false;
};
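A common alternative to the division loop (not part of my original solution) is the bit trick: a positive power of two has exactly one set bit, so `n & (n - 1)` is zero exactly for those numbers. A sketch in TypeScript:

```typescript
// Returns true when n is a positive power of two.
// A power of two has exactly one set bit, so n & (n - 1) === 0.
function isPowerOfTwo(n: number): boolean {
  return Number.isInteger(n) && n > 0 && (n & (n - 1)) === 0;
}

console.log(isPowerOfTwo(1));   // true  (2^0)
console.log(isPowerOfTwo(16));  // true  (2^4)
console.log(isPowerOfTwo(218)); // false
```

This runs in constant time instead of O(log n), though for LeetCode-sized 32-bit inputs both are fine.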

Check whether $f \mapsto f+ \frac{df}{dx}$ is injective or surjective!!

Consider the map $ C^{\infty} \to C^{\infty}$ such that $ f \mapsto f+ \frac{df}{dx}$ . We have to check whether this map is injective or surjective.

My try: The map is clearly not injective, since both $ x$ and $ x+e^{-x}$ map to $ x+1$ (the derivative of $ e^{-x}$ cancels it).

Now to check whether the map is surjective: consider $ g \in C^{\infty}$ . I tried taking $ f=g-\int_0^xg$ , which gives $ f+\frac{df}{dx}=g-\int_0^xg+\frac{dg}{dx}-g=\frac{dg}{dx}-\int_0^xg$ , but this does not recover $ g$ , so I still do not have a proof of whether the map is surjective or not.
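One standard approach here (not part of the attempt above, just the usual ODE technique) is the integrating factor $e^x$, which turns $f + f' = g$ into an exact derivative:

```latex
% Integrating-factor sketch: multiplying f + f' = g by e^x
% makes the left side the derivative of e^x f.
\[
  \frac{d}{dx}\bigl(e^{x} f(x)\bigr) = e^{x}\bigl(f(x) + f'(x)\bigr) = e^{x} g(x),
\]
% so for any given g in C^infty the function
\[
  f(x) = e^{-x} \int_0^{x} e^{t} g(t)\, dt
\]
% is smooth and satisfies f + f' = g.
```

Since such an $f$ exists for every $g \in C^{\infty}$, this would establish surjectivity.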

How to determine whether custom or “standard” cursors are to be used in a software application?

There are many applications that rely only on the standard cursors (a normal cursor for pointing at something, a busy cursor showing that an operation is in progress, a text cursor, etc.) that are shipped with the operating system. There are also a lot of applications that have their own custom set of cursors. And last but not least, we have applications whose custom cursors are confusing (even after a decent amount of use), and applications that might have had better UX if they had custom cursors (imho Blender could have done with some of those).

On the one hand, standard cursors give the user something familiar in the new environment (the software application), just like using OS-styled buttons, scrollbars, text fields, etc.

On the other hand, custom cursors (the bucket tool, pencil, etc. in Paint-like applications are a nice example of these) can provide essential application-specific information that may otherwise prove too difficult to visualize.

My question is probably too broad, but I would like to know how people usually determine whether or not to use custom cursors. At what point exactly does a designer say, “OK, we need to introduce custom cursor X because Y”?

I want to know whether it is a case of deadlock or livelock in Java multithreading

I request you to review my code below and let me know whether it is a case of deadlock or livelock. I have deliberately used join() inside the run method of the threads.

public class ThreadTask extends Thread {

  @Override
  public void run() {
    System.out.println(Thread.currentThread().getName() + " started running ...");
    try {
      // note: join() with no target is this.join() - the thread waits on itself
      join();
    } catch (InterruptedException e) {
      e.printStackTrace();
    }
  }
}

The test program is given below.

public class Test {

  public static void main(String[] args) {
    Thread t1 = new ThreadTask();
    t1.setName("Thread-1");
    Thread t2 = new ThreadTask();
    t2.setName("Thread-2");
    t1.start();
    t2.start();
  }
}

An NTM that checks, as efficiently as possible, whether a 3-dimensional map can be coloured with 42 colours

I really need your help to solve this problem. Can somebody give me an idea of how to solve the following problem with an NTM?

“We are in the year 2500. People also populate the oceans. Since people in the water no longer rely on solid ground for their homes, countries have established themselves at different depths of the oceans. So it may happen that different countries are neighbours not only in the horizontal dimensions but also in the vertical dimension. However, to keep the resulting 3-dimensional maps a little clearer, neighbouring countries on the map should always have different colours. Fortunately, in the year 2468, people succeeded in building NTMs. Describe the operation of an NTM that checks, as efficiently as possible, whether a given 3-dimensional map can be coloured with 42 colours so that neighbouring countries always have different colours on the map.”
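For what it's worth, the usual NTM pattern for this kind of problem is guess-and-verify: nondeterministically guess a colour in {1, …, 42} for every country, then deterministically verify that every pair of neighbours differs. The verification phase could be sketched like this (the adjacency-list encoding of the map is my own assumption, not part of the exercise):

```typescript
// Sketch of the deterministic verification phase of a guess-and-verify NTM:
// given a guessed colouring, check that no two neighbouring countries share
// a colour. The adjacency-list graph encoding is an assumption of this sketch.
type CountryId = number;

function isValidColouring(
  neighbours: Map<CountryId, CountryId[]>, // country -> its neighbours
  colour: Map<CountryId, number>           // guessed colour, 1..42
): boolean {
  for (const [country, adjacent] of neighbours) {
    for (const other of adjacent) {
      if (colour.get(country) === colour.get(other)) {
        return false; // two neighbours share a colour: reject this guess
      }
    }
  }
  return true; // the guessed colouring is valid: accept
}
```

The NTM accepts iff some guessed colouring passes this check; the check itself runs in time polynomial in the number of countries and borders, which is what makes the overall nondeterministic procedure efficient.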

Any idea could help me solve this problem. Thank you 🙂

When do we still check whether a theorem (proven statement) works for a particular example or not?

This long title is inspired by this on-hold question, in which the OP presented a “counterexample” to the four color theorem. Interestingly, in the comments and the two answers given, no one simply mentioned that it is a theorem; instead, the answers constructively colored the suggested figure with four colors. We might think that people were just trying to be nice and constructive. But then there would be nothing to be inspired by 🙂

Instead, we might think of the historical debate surrounding the proof of the four color theorem and the doubts that may have remained for some. In addition, there is the rather big list in “Widely accepted mathematical results that were later shown to be wrong?” So it seems reasonable to be constructive from time to time and simply check an example that is supposed to be a counterexample to a theorem.

This rang an educational bell for me. Quite often, students check a theorem against examples (after the theorem has been proven for them). The usual interpretation is that they have not yet understood what a mathematical proof means. But it seems there might be more to this checking-by-examples process. Hence the title: When do we still check whether a theorem (a proven statement) works for a particular example or not? Have you (as a mathematician) ever reached a point where you somehow felt you had to check a particular “counter-theorem example” (a term I just made up)?

Your answers might help us better understand students’ conceptions of proof. That is why I tag the question with “mathematics education” and “teaching”. Feel free to add more tags. Also, could someone please make this a wiki, as it doesn’t have a single correct answer.

Must we define methods as async when we don’t know whether the implementation is synchronous or asynchronous?

I think I know the answer to this, but it’s particular enough that I don’t want to go telling other people stuff until I’m 100% certain.

Suppose I have a class with some dependency:

public interface IDependency
{
    int DoSomething(string value);
}

public class DependsOnSomething
{
    private readonly IDependency _dependency;

    public DependsOnSomething(IDependency dependency)
    {
        _dependency = dependency;
    }

    public int GetSomeValue(string input)
    {
        return _dependency.DoSomething(input);
    }
}

IDependency is an abstraction. I haven’t yet determined what its implementation will be. I don’t know whether it will be CPU-bound or perhaps make some API call.

What’s more, the implementation of IDependency could have its own dependencies, and the same could be true of those. They may or may not call async methods.

Would it be correct to say that

  • If I consider it likely that something, somewhere, will be async, I should make all of these methods async?
  • If nothing in any of the dependencies is async, but at some point that changes and I want to take advantage of the opportunity to free up a thread instead of letting it wait, I would need to go back through all of my methods and make everything async?

Generally I can plan for what does or doesn’t need to be asynchronous, but I’m trying to understand the potential cost of a) guessing synchronous and b) guessing wrong.

Is my understanding of this correct?

One workaround to the problem might be, in some cases, to define both synchronous and asynchronous methods on interfaces. But that feels wrong because then the interface is describing implementation details, and if the underlying implementation isn’t really asynchronous then my interface is lying. (And it could lead to me or someone else writing even more async methods to call something that isn’t really async.)
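To illustrate the trade-off (the question is about C#, so this TypeScript analogue is only an illustration I'm adding, with hypothetical names): an interface can commit to an asynchronous shape up front, and a purely synchronous implementation can still satisfy it by returning an already-resolved promise:

```typescript
// Hypothetical names, mirroring the C# example above only loosely.
interface Dependency {
  // Committing to Promise up front keeps callers unchanged if the
  // implementation later becomes genuinely asynchronous.
  doSomething(value: string): Promise<number>;
}

// A synchronous implementation can still satisfy the async-shaped interface.
class SyncDependency implements Dependency {
  doSomething(value: string): Promise<number> {
    return Promise.resolve(value.length); // no real awaiting happens
  }
}

class DependsOnSomething {
  constructor(private readonly dependency: Dependency) {}

  async getSomeValue(input: string): Promise<number> {
    return await this.dependency.doSomething(input);
  }
}
```

The asymmetry this suggests: guessing “async” and being wrong costs mostly ceremony (promise wrapping and a little scheduler overhead), while guessing “sync” and being wrong forces a signature change that ripples through every caller, as the second bullet above describes.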

SUM( OFFSET(range, rowOffsetsArray, 0) ) behavior varies based on whether or not it is an array parameter to another function?

In trying to systematically enumerate the possibilities when rolling four identical but loaded four-sided dice, I came across some unusual Excel behavior. I'm hoping someone can shed some light on what’s going on under the hood.

The following table illustrates the possible rolls of a die:

1000 A

0100 B

0010 C

0001 D

Each row is a possible outcome with a distinct probability.

In trying to display all possible combinations of four rolls of such a die (where the first combination might be A + A + A + A, or 4000, the second might be 3 1 0 0, and so on), I decided to systematically offset A by 0, 1, 2, or 3 rows for each of the four rolls and then sum the results. Oddly, though, I get the following (all formulas are array formulas keyed in with Ctrl+Shift+Enter).

=TRANSPOSE( SUM( OFFSET( A, 4x1ArrayOfRowOffsets, 0)))

displays the correct sum, and likewise if =TRANSPOSE(…) is replaced by =INDEX(…,1,1). I take it this is because both functions natively support array arguments. However,

=SUM( OFFSET( A, 4x1ArrayOfRowOffsets, 0))

does not work; it seems that here the summation is conducted along the 4 rows returned by OFFSET, each of which has value 1. Oddly,

=SUM( TRANSPOSE( OFFSET( A, 4x1ArrayOfRowOffsets, 0)))

does not work either; the transpose makes the summation run properly along the columns returned by OFFSET, but it seems to throw out all but the first column. Interweaving INDEX calls does not fix the problem.

All that is to say, calling SUM(OFFSET(…)) with array arguments seems to produce varied output depending on what is doing the calling. Why is this? What is actually going on here?
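Independent of Excel's array-evaluation quirks, the intended computation (offset the A row by 0 to 3 for each of four rolls, then sum the resulting indicator rows) can be sketched outside Excel like this; all names here are my own, not from the workbook:

```typescript
// Each roll outcome is an indicator row: A = [1,0,0,0], B = [0,1,0,0], ...
// Offsetting A by k rows selects outcome k; summing four such rows gives
// the face-count vector for one combination of four rolls.
const FACES = 4;

function outcomeRow(offset: number): number[] {
  const row = new Array(FACES).fill(0);
  row[offset] = 1; // analogue of OFFSET(A, offset, 0)
  return row;
}

function combine(offsets: number[]): number[] {
  // Sum the indicator rows, mirroring SUM over the OFFSET results.
  const total = new Array(FACES).fill(0);
  for (const k of offsets) {
    outcomeRow(k).forEach((v, i) => (total[i] += v));
  }
  return total;
}

console.log(combine([0, 0, 0, 0])); // A+A+A+A -> [4, 0, 0, 0]
console.log(combine([0, 0, 0, 1])); // three As and one B -> [3, 1, 0, 0]
```

This is what the TRANSPOSE/INDEX-wrapped formulas compute; the Excel question is why the bare SUM(OFFSET(…)) form evaluates the array differently.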