Can someone please explain what is happening in the recursive nature of this algorithm?

I’ve implemented a program to print all possible outcomes of N dice rolls.

Like:

    import java.util.*;

    public class RollNDice {
        private static int calls = 0;
        private static int base = 0;

        public static void main(String[] args) {
            int n = 2;
            rollNDices(n);
        }

        private static void rollNDices(int N) {
            List<Integer> chosen = new ArrayList<>();
            rollNDiceHelper(N, chosen, 1);
            System.out.println("Total Calls: " + calls + " Base Hit: " + base);
        }

        private static void rollNDiceHelper(int dices, List<Integer> chosen, int index) {
            calls++;
            //System.out.println("rollNDiceHelper(" + dices + ", " + chosen + ")");
            if (0 == dices) {
                base++;
                System.out.println(chosen);
            } else {
                for (int i = 1; i <= 6; i++) {
                    chosen.add(i);
                    rollNDiceHelper(dices - 1, chosen, i);
                    chosen.remove(chosen.size() - 1);
                }
            }
        }
    }

This works fine and prints all the possible outcomes, ending with the analysis line: Total Calls: 43 Base Hit: 36

Which makes sense, because for N dice there are 6^N possible outcomes.

However,

When I change it slightly like:

    private static void rollNDiceHelper(int dices, List<Integer> chosen, int index) {
        calls++;
        //System.out.println("rollNDiceHelper(" + dices + ", " + chosen + ")");
        if (0 == dices) {
            base++;
            System.out.println(chosen);
        } else {
            for (int i = index; i <= 6; i++) {
                chosen.add(i);
                rollNDiceHelper(dices - 1, chosen, i);
                chosen.remove(chosen.size() - 1);
            }
        }
    }

Everything starts to change strangely.

Now not all possible outcomes are printed, and the analysis line becomes: Total Calls: 28 Base Hit: 21

I know we don’t need to use index for this specific task, but I wanted to know what will happen if we use it.

Can someone please help me understand what is happening here? Where can we use index (as it reduces the number of calls dramatically)? What would be the best example to clarify both cases (use of the index parameter and no use of it)? Also, what will the complexity be in the other case?
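A sketch of the difference (my own illustration, with hypothetical helper names): starting the loop at index means a die can never show a value smaller than the previous die, so the second version enumerates only non-decreasing sequences, i.e. combinations with repetition. There are $\binom{N+5}{N}$ of those, which for $N = 2$ is $\binom{7}{2} = 21$; adding the 1 root call and the 6 first-level calls gives the observed 28.

```java
// Sketch: count both enumerations for N six-sided dice.
// permutations(n) counts ordered outcomes (6^n), matching the first version.
// combinations(n, lo) counts non-decreasing outcomes, matching the "index"
// version, because the loop never goes below the previous value.
public class DiceCount {
    static int permutations(int n) {
        if (n == 0) return 1;
        int total = 0;
        for (int i = 1; i <= 6; i++) total += permutations(n - 1);
        return total;
    }

    static int combinations(int n, int lo) {
        if (n == 0) return 1;
        int total = 0;
        for (int i = lo; i <= 6; i++) total += combinations(n - 1, i);
        return total;
    }

    public static void main(String[] args) {
        System.out.println(permutations(2));    // 36
        System.out.println(combinations(2, 1)); // 21 = C(7, 2)
    }
}
```

The index pattern is the standard way to generate combinations (subsets or multisets) rather than permutations, which is exactly why the call count drops.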

Recursive definition character counter

Question Screenshot

This is my definition for part 1 (in LaTeX form):

\begin{alignat*}{2}
\text{Base Case: }& && \mathtt{ones}(\varepsilon) = 0 \qquad \mbox{($\varepsilon$ is the empty string)} \\
\text{Inductive Step: }& &&\text{if } v \in \Sigma^{\ast} \text{ and } x \in \{1\} \text{ then } \mathtt{ones}(vx) = \mathtt{ones}(v) + 1 \\
& &&\text{if } v \in \Sigma^{\ast} \text{ and } x \in \{0\} \text{ then } \mathtt{ones}(vx) = \mathtt{ones}(v)
\end{alignat*}
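As a quick sanity check of part 1 (my own sketch, not part of the assignment), the two-branch definition translates directly into code, recursing on the string minus its last symbol:

```java
// Sketch: the recursive definition of ones(), implemented literally.
// ones("") = 0; ones(v + "1") = ones(v) + 1; ones(v + "0") = ones(v).
public class Ones {
    static int ones(String s) {
        if (s.isEmpty()) return 0;                    // base case: empty string
        String v = s.substring(0, s.length() - 1);    // the prefix v
        char x = s.charAt(s.length() - 1);            // the last symbol x
        return ones(v) + (x == '1' ? 1 : 0);          // the two inductive branches
    }

    public static void main(String[] args) {
        System.out.println(ones("10110")); // 3
    }
}
```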

For part 2, I was thinking of :

  1. inducting on $v$ by taking $v = v^{\prime}x$

  2. assuming the hypothesis $\mathtt{ones}(w ++ v^{\prime}) = \mathtt{ones}(w) + \mathtt{ones}(v^{\prime})$

  3. proceeding to show that $\mathtt{ones}\left(w ++ (v^{\prime}x)\right) = \mathtt{ones}(w) + \mathtt{ones}(v)$ holds for both $x \in \{1\}$ and $x \in \{0\}$.

Like so,

For $x \in \{1\}$:

\begin{align*}
\mathtt{ones}(w ++ (v^{\prime}x)) &= \mathtt{ones}((w ++ v^{\prime})x) \quad \mbox{Concatenation} \\
&= \mathtt{ones}(w ++ v^{\prime}) + 1 \quad \mbox{Definition of } \mathtt{ones} \\
&= \mathtt{ones}(w) + \mathtt{ones}(v^{\prime}) + 1 \quad \mbox{Induction Hypothesis} \\
&= \mathtt{ones}(w) + \mathtt{ones}(v^{\prime}x) \quad \mbox{Definition of } \mathtt{ones} \\
&= \mathtt{ones}(w) + \mathtt{ones}(v) \quad \mbox{since } v = v^{\prime}x
\end{align*}

For $x \in \{0\}$:

\begin{align*}
\mathtt{ones}(w ++ (v^{\prime}x)) &= \mathtt{ones}((w ++ v^{\prime})x) \quad \mbox{Concatenation} \\
&= \mathtt{ones}(w ++ v^{\prime}) + 0 \quad \mbox{Definition of } \mathtt{ones} \\
&= \mathtt{ones}(w) + \mathtt{ones}(v^{\prime}) + 0 \quad \mbox{Induction Hypothesis} \\
&= \mathtt{ones}(w) + \mathtt{ones}(v^{\prime}x) \quad \mbox{Definition of } \mathtt{ones} \\
&= \mathtt{ones}(w) + \mathtt{ones}(v) \quad \mbox{since } v = v^{\prime}x
\end{align*}

I’m concerned about the following:

  1. Is inducting on $v$ the correct way to go about it? I chose it because it seems to match how concatenation is defined.

  2. I haven’t seen any definition with two branches in lectures. Is splitting the domain of $x$ in two OK, given that I show that both cases follow? Is there a better and/or simpler alternative?
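On the second concern, one possible single-branch alternative (my own sketch, assuming the course allows identifying each symbol with the number it denotes):

```latex
% Merged definition over Sigma = {0, 1}: the amount added is the symbol itself.
\begin{align*}
\mathtt{ones}(\varepsilon) &= 0 \\
\mathtt{ones}(vx) &= \mathtt{ones}(v) + x \qquad (v \in \Sigma^{\ast},\; x \in \{0,1\})
\end{align*}
```

Substituting $x = 1$ and $x = 0$ recovers both branches of the original definition, and the case split in the part 2 proof collapses into a single calculation.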

Recursive Call Inside Argument List (C++)

So, my professor asked me to implement recursion in different ways to compute $a^n$ ($a$ and $n$ being integers) and rank them according to their space efficiency. Here is one of the methods I came up with:

    int Power(int a, int n) {
        if (n == 0)
            return 1;
        if (n % 2 == 0)
            return Power(Power(a, n/2), 2);
        else
            return Power(Power(a, n/2), 2) * a;
    }

The code compiles fine, but leads to a segmentation fault. On debugging, I came to the conclusion that a recursive call within the argument list is not acceptable. That is, something like

  return Power(Power(a, n/2), 2) 

or

    int m = Power(a, n/2);
    return Power(m, 2);

is not allowed but

    int m = Power(a, n/2);
    return m*m;

is allowed. Why is this the case? Is this true only in C++, or is it a general phenomenon?
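Not a verdict on the language question, but one way to probe it (my own instrumentation, written in Java to match the first snippet in this digest; the control flow is identical in C++): cap the total number of calls and see whether the nested-call version ever returns. A terminating $O(\log n)$ recursion would use only a handful of calls, so hitting a generous cap suggests the recursion itself never bottoms out, regardless of language.

```java
// Sketch: the same recursion with a call counter and a generous limit,
// so a runaway recursion raises an exception instead of overflowing the stack.
public class PowerTrace {
    static int callCount;
    static final int LIMIT = 10_000;

    static int power(int a, int n) {
        if (++callCount > LIMIT) throw new IllegalStateException("runaway");
        if (n == 0) return 1;
        if (n % 2 == 0) return power(power(a, n / 2), 2);
        return power(power(a, n / 2), 2) * a;
    }

    // hypothetical helper: true if power(a, n) blows past the call limit
    static boolean runsAway(int a, int n) {
        callCount = 0;
        try {
            power(a, n);
            return false;
        } catch (IllegalStateException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(runsAway(2, 8)); // true: the recursion never bottoms out
        System.out.println(runsAway(2, 0)); // false: n == 0 returns immediately
    }
}
```

Tracing by hand shows why: any n >= 1 eventually evaluates power(1, 2), which calls power(1, 1), which calls power(1, 2) again, and so on forever.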

The complexity of a recursive function

I am trying to work out how the time complexity of this small function was calculated. I am studying for an exam and found this question with the final answer given, but I want to understand how they reached that answer. I tried solving the problem iteratively, but when I tried to find the number of iterations of this function I got stuck.

What I tried: let $ T(n,k)$ represent the time complexity of $ g$ . It satisfies the recurrence

$$ T(n,k) = ck + \sum_{j=1}^{i} (2^j - 1)k + T(n-i,\, 2^i k) $$

where $i$ is the number of iterations of this function. According to the function, the recursion ends when $n \le k$, which means $n - i = 2^i k$, but I couldn’t extract $i$ from the equation.

Here is the function, whose time and space complexity are stated to be $ \Theta(n)$ and $ \Theta(\log n)$ :

    int g(int n, int k) {
        if (n <= k) return 1;
        int result = 0;
        for (int i = k; i > 0; --i, ++result);
        return result + g(n - 1, 2 * k);
    }

    int f2(int n) {
        return g(n, 2);
    }
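To see where $\Theta(n)$ and $\Theta(\log n)$ come from empirically, here is a sketch (my own instrumentation, ported to Java; measure and the counters are hypothetical names) that tallies the loop iterations (the work) and the recursion depth. Since $k$ doubles on every call while $n$ only drops by 1, the work is the geometric series $2 + 4 + \cdots + 2^d \approx 2n$, while the depth $d$ is $\Theta(\log n)$.

```java
// Sketch: instrumented version of g(n, k). "work" counts how many times the
// empty for-loop would spin in total; "depth" counts recursive calls.
public class GCount {
    static long work;
    static int depth;

    static int g(int n, int k) {
        depth++;
        if (n <= k) return 1;
        work += k;                    // the for-loop spins k times
        return k + g(n - 1, 2 * k);   // result equals k after the loop
    }

    // hypothetical helper: run g(n, 2) and report {work, depth}
    static long[] measure(int n) {
        work = 0;
        depth = 0;
        g(n, 2);
        return new long[]{work, depth};
    }

    public static void main(String[] args) {
        long[] r = measure(1000);
        System.out.println("work = " + r[0] + ", depth = " + r[1]); // work = 1022, depth = 10
    }
}
```

Doubling n to 2000 roughly doubles the work but adds only one level of depth; summing the doubling series against a logarithmic depth is exactly what yields the stated $\Theta(n)$ time and $\Theta(\log n)$ space.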

How does this recursive algorithm work?

One question from the Grokking Algorithms book:

Implement a max([]int) function, returning the biggest element in the array.

Here’s my solution in Golang (adapted from the Python solution in the book):

    func maxNumber(arr []int) (r int) {
        // base case
        if len(arr) == 2 {
            if arr[0] > arr[1] {
                return arr[0]
            } else {
                return arr[1]
            }
        }

        // recursive case
        subMax := maxNumber(arr[1:])

        if arr[0] > subMax {
            fmt.Println("arr[0] > subMax")
            return arr[0]
        } else {
            fmt.Println("arr[0] < subMax")
            return subMax
        }
    }

It works, but I can’t wrap my head around it. How is the third block reached if, every time maxNumber(arr[1:]) is called, the function executes again with an array one element shorter? At some point it will hit the first block (the base case) and return either the first or the second element.

I know the function works, but I’m missing the how.

Could someone help me out, explain-like-I’m-5 style? Haha.

Thanks!
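In case a concrete trace helps, here is a sketch (my own illustration, ported to Java; the mechanics are identical in Go): every call pauses at the recursive line and waits for the shorter-array call to finish, so the comparison below it only runs afterwards, while the stack unwinds. The base case only ever sees the last two elements.

```java
import java.util.Arrays;

// Sketch: the same recursion with indentation showing the depth, so the
// order of comparisons during the unwind is visible. Like the original,
// it assumes the array has at least two elements.
public class MaxTrace {
    static int maxNumber(int[] arr, int depth) {
        String pad = "  ".repeat(depth);
        // base case: two elements left
        if (arr.length == 2) {
            int m = Math.max(arr[0], arr[1]);
            System.out.println(pad + "base case " + Arrays.toString(arr) + " -> " + m);
            return m;
        }
        // recursive case: this frame waits here until the sub-call returns
        int subMax = maxNumber(Arrays.copyOfRange(arr, 1, arr.length), depth + 1);
        int m = Math.max(arr[0], subMax);
        System.out.println(pad + "compare " + arr[0] + " vs subMax " + subMax + " -> " + m);
        return m;
    }

    public static void main(String[] args) {
        System.out.println(maxNumber(new int[]{3, 7, 2, 9}, 0)); // 9
    }
}
```

The trace prints the base case first and the outermost comparison last: that reversed order is the stack unwinding, and it is exactly when the "third block" runs.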

Recursive vs. recursively enumerable vs. non-recursively enumerable vs. uncomputable

I can’t tell whether there is any difference between non-recursively-enumerable and uncomputable. What makes a language one or the other?

Is it safe to assume that all recursive languages are decidable, or is it possible to have an undecidable language that’s recursive?

Are all recursively enumerable languages semi-decidable (like the halting problem)?

Please give examples if possible. I’ve spent hours trying to wrap my head around these concepts and it still doesn’t feel right.