Calculating the amount left at the end of a repetitive cutting-down cycle

Let's say I have a string of X characters that I want to cut down. I use a method similar to a 'find-and-replace-all' tool: each pass it takes a given amount of X's, deletes them, and replaces them with another, smaller amount. For example, I can state that I want every run of 8 X's to be replaced with 5 X's, starting from an original amount of 14 X's. This cycle repeats forever, or until there isn't a sufficient remainder left to replace anymore.


So, you start with 14 X's and end with 5 X's, and the process carried out here can be summarised as '14, 11, 8, 5'. The find-and-replace process itself can be described as (8, 5), since 8 X's are replaced with 5 each time. Now, the question here is:

  1. When you initially have a string of 1,000,000 X's and apply the find-and-replace process (99, 90), what is the summary of the process? By "summary of the process", I mean the '14, 11, 8, 5' list that shows how many X's are left after each step.

I tried solving this manually, starting the process summary as '1,000,000, 909,091, 826,453, 751,321, 683,020, 620,929, 564,481, 513,172, 466,525', but I realised this is a very inefficient way of solving it and prone to mistakes. Can anyone help me do it in a better, easier way?
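The process described above can be mechanized in a few lines. A sketch in Python (`summary` is a name I'm introducing); one replace-all pass turns a length n into repl*(n // find) + n % find, because every disjoint run of `find` X's is replaced by `repl` X's at once:

```python
def summary(start, find, repl):
    """Lengths after each replace-all pass, e.g. summary(14, 8, 5).
    Assumes repl < find, so each pass shrinks the string and the
    loop terminates."""
    out = [start]
    n = start
    while n >= find:
        # every disjoint run of `find` X's becomes `repl` X's at once
        n = (n // find) * repl + n % find
        out.append(n)
    return out
```

For example, `summary(14, 8, 5)` gives [14, 11, 8, 5], and `summary(1000000, 99, 90)` reproduces the 1,000,000 → 909,091 → 826,453 → 751,321 → … sequence above instantly.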

Follow-up question 2: For the (7, 3) process, find all start lengths which eventually result in an end length of 6.

I can see that 10 works for this, but manually testing every number to find a suitable match is also quite inefficient. If you can help me with this question too, please do. Also, please explain in simple math terms rather than symbols (I am still a beginner). Thank you!
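Such start lengths can be searched for mechanically. A Python sketch (`end_length` is a name I'm introducing, and the search bound of 100 is arbitrary, just for illustration):

```python
def end_length(start, find=7, repl=3):
    """Run the (find, repl) replace-all process to completion
    and return the final length."""
    n = start
    while n >= find:
        n = (n // find) * repl + n % find
    return n

# arbitrary illustrative search range
matches = [s for s in range(1, 100) if end_length(s) == 6]
print(matches)
```

This confirms 10 (10 → 6) and turns up others, e.g. 6 itself (already below 7) and 14 (14 → 6).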

This problem is a slightly varied version of a question from a math contest.

Calculating expectation and variance for rolling 1 and 6 twice each in 12 rolls of a die

First, I calculated the probability of getting each possible number {1, 2, 3, 4, 5, 6} exactly twice in 12 rolls (event A):

$\Pr[A] = \frac{\binom{12}{2,2,2,2,2,2}}{6^{12}}.$

Then there are two random variables:

X – the number of times a 1 was rolled.

Y – the number of times a 6 was rolled.

Before calculating $E(X), Var(X), E(Y), Var(Y)$, I'm uncertain how I should calculate the probability distributions of X and Y.
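If X is taken unconditionally (just the number of 1s in 12 fair rolls), it is binomial with n = 12 and p = 1/6, and Y behaves the same way by symmetry; conditioning on the event A would change the distributions. A quick numeric sanity check of the unconditional case in Python:

```python
from math import comb

# X ~ Binomial(12, 1/6): pmf, expectation, and variance computed directly
n, p = 12, 1 / 6
pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]
E = sum(k * pmf[k] for k in range(n + 1))               # matches n*p = 2
Var = sum((k - E) ** 2 * pmf[k] for k in range(n + 1))  # matches n*p*(1-p) = 5/3
```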

When we are calculating the cost of implementing a new web application, should we have a separate cost for providing the source code to the client?

I am working on a new Core MVC web application as an outsourced senior software developer/architect. I held many requirements-gathering sessions with the client and came up with a detailed requirements and design document, which has been signed by the client. I then broke the document down into tasks and calculated the total effort for designing, developing, testing, and hosting the web application, something as follows:

  1. Database design and creation: 50 hours, cost 4,000 USD.
  2. Developing user registration and profile creation: 10 hours, cost 800 USD.
  3. Developing the transaction workflow: 200 hours, cost 16,000 USD.
  4. Further tasks for development, hosting, and UAT follow.
  5. Total: 500 hours, cost 40,000 USD.

So the above covers the full implementation time and cost for the project. But should this by default cover giving the client the application source code, or is that not necessary? And if the client needs or asks for the source code, can we then provide a separate cost for it?

Calculating the HD Wallet Balance

While the notion of a "balance" is not strictly correct in the context of a wallet, it is something that has caused me some confusion in the context of an HD wallet, especially when importing a seed into a new wallet.

It is my understanding that with an HD wallet you can generate a tree structure of infinite depth. But with such a structure, how would a wallet feasibly verify the unspent transaction outputs (UTXOs), if it would have to check the entire blockchain for every address the wallet can generate?
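For context on the "infinite tree" worry: in practice HD wallets do not scan every derivable address. BIP44-style wallets derive addresses sequentially and stop once a "gap limit" (commonly 20) of consecutive addresses shows no on-chain history. A sketch of that discovery loop, where `derive` and `has_history` are hypothetical callbacks standing in for key derivation and a chain lookup:

```python
def discover_addresses(derive, has_history, gap_limit=20):
    """Scan sequentially derived addresses until `gap_limit`
    consecutive ones show no history (BIP44-style discovery).
    derive(i) -> address string; has_history(addr) -> bool."""
    used, gap, i = [], 0, 0
    while gap < gap_limit:
        addr = derive(i)
        if has_history(addr):
            used.append(addr)
            gap = 0          # reset the gap counter on any hit
        else:
            gap += 1
        i += 1
    return used
```

The wallet then sums the UTXOs of only the discovered addresses, which is why a restored seed finds its balance without checking the whole tree.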

Lighting abnormal when calculating in tangent space. Probably something wrong with the coordinate transformation matrix

I'm trying to calculate lighting in tangent space, but I keep getting abnormal results. I was modifying the book's demo code, and I wonder if there may be something wrong with the transformation matrix I created.

I'm having trouble solving a problem from Introduction to 3D Game Programming with DirectX 11. I tried to use the TBN matrix

Tx, Ty, Tz,
Bx, By, Bz,
Nx, Ny, Nz

as the book provides, but I found the light vector to be wrongly transformed to tangent space, and now I have no clue how to debug this shader.

Here is my Pixel Shader:

float4 PS1(VertexOut pin, uniform int gLightCount, uniform bool gUseTexure, uniform bool gAlphaClip,
           uniform bool gFogEnabled, uniform bool gReflectionEnabled) : SV_Target
{
    // Interpolating normal can unnormalize it, so normalize it.
    pin.NormalW = normalize(pin.NormalW);
    pin.TangentW = normalize(pin.TangentW);

    // The toEye vector is used in lighting.
    float3 toEye = gEyePosW - pin.PosW;

    // Cache the distance to the eye from this surface point.
    float distToEye = length(toEye);

    // Calculate normalMapSample
    float3 normalMapSample =
        normalize(SampledNormal2Normal(gNormalMap.Sample(samLinear, pin.Tex).rgb));

    // normalize toEye
    toEye = normalize(toEye);

    // Default to multiplicative identity.
    float4 texColor = float4(1, 1, 1, 1);
    if (gUseTexure)
    {
        // Sample texture.
        texColor = gDiffuseMap.Sample(samLinear, pin.Tex);

        if (gAlphaClip)
        {
            // Discard pixel if texture alpha < 0.1.  Note that we do this
            // test as soon as possible so that we can potentially exit the shader
            // early, thereby skipping the rest of the shader code.
            clip(texColor.a - 0.1f);
        }
    }

    //
    // Lighting.
    //

    float4 litColor = texColor;
    if (gLightCount > 0)
    {
        // Start with a sum of zero.
        float4 ambient = float4(0.0f, 0.0f, 0.0f, 0.0f);
        float4 diffuse = float4(0.0f, 0.0f, 0.0f, 0.0f);
        float4 spec = float4(0.0f, 0.0f, 0.0f, 0.0f);

        // Sum the light contribution from each light source.
        [unroll]
        for (int i = 0; i < gLightCount; ++i)
        {
            float4 A, D, S;
            ComputeDirectionalLightInTangent(gMaterial, gDirLights[i],
                normalMapSample, World2TangentSpace(pin.NormalW, pin.TangentW, gTexTransform), toEye,
                A, D, S);

            ambient += A;
            diffuse += D;
            spec += S;
        }

        litColor = texColor*(ambient + diffuse) + spec;

        if (gReflectionEnabled)
        {
            float3 incident = -toEye;
            float3 reflectionVector = reflect(incident, normalMapSample);
            float4 reflectionColor = gCubeMap.Sample(samLinear, reflectionVector);

            litColor += gMaterial.Reflect*reflectionColor;
        }
    }

    //
    // Fogging
    //

    if (gFogEnabled)
    {
        float fogLerp = saturate((distToEye - gFogStart) / gFogRange);

        // Blend the fog color and the lit color.
        litColor = lerp(litColor, gFogColor, fogLerp);
    }

    // Common to take alpha from diffuse material and texture.
    litColor.a = gMaterial.Diffuse.a * texColor.a;

    return litColor;
}

And here are the functions SampledNormal2Normal, World2TangentSpace, and ComputeDirectionalLightInTangent:

float3 SampledNormal2Normal(float3 sampledNormal)
{
    float3 normalT = 2.0f*sampledNormal - 1.0f;
    return normalT;
}

float3x3 World2TangentSpace(float3 unitNormalW, float3 tangentW, float4x4 texTransform)
{
    // Build orthonormal basis.
    float3 N = unitNormalW;
    float3 T = normalize(tangentW - dot(tangentW, N)*N);
    float3 B = cross(N, T);

    float3x3 TBN = float3x3(T, B, N);
    /*float3x3 invTBN = float3x3(T.x, T.y, T.z, B.x, B.y, B.z, N.x, N.y, N.z);
    return invTBN;*/

    float3 T_ = T - dot(N, T)*N;
    float3 B_ = B - dot(N, B)*N - (dot(T_, B)*T_) / dot(T_, T_);
    float3x3 invT_B_N = float3x3(T_.x, T_.y, T_.z, B_.x, B_.y, B_.z, N.x, N.y, N.z);
    return invT_B_N;
}

void ComputeDirectionalLightInTangent(Material mat, DirectionalLight L, float3 normalT, float3x3 toTS, float3 toEye,
                                      out float4 ambient, out float4 diffuse, out float4 spec)
{
    // Initialize outputs.
    ambient = float4(0.0f, 0.0f, 0.0f, 0.0f);
    diffuse = float4(0.0f, 0.0f, 0.0f, 0.0f);
    spec = float4(0.0f, 0.0f, 0.0f, 0.0f);

    // The light vector aims opposite the direction the light rays travel.
    float3 lightVec = -L.Direction;
    lightVec = mul(lightVec, toTS);
    lightVec = normalize(lightVec);

    // toEye to Tangent Space
    toEye = mul(toEye, toTS);
    toEye = normalize(toEye);

    // Add ambient term.
    ambient = mat.Ambient * L.Ambient;

    // Add diffuse and specular term, provided the surface is in
    // the line of sight of the light.

    float diffuseFactor = dot(lightVec, normalT);

    // Flatten to avoid dynamic branching.
    [flatten]
    if (diffuseFactor > 0.0f)
    {
        float3 v = reflect(-lightVec, normalT);
        float specFactor = pow(max(dot(v, toEye), 0.0f), mat.Specular.w);

        diffuse = diffuseFactor * mat.Diffuse * L.Diffuse;
        spec = specFactor * mat.Specular * L.Specular;
    }
}

The results I get seem much darker in most places and too bright in several highlight areas. I wonder if anyone can help me with my code or give me advice on how to debug an HLSL shader. A thousand thanks!

Why does the iterative code time out when calculating the number of trailing zeros in A factorial?

Both pieces of code calculate the number of trailing zeros in A!.

Here is the iterative code; it times out for large input:

public int trailingZeroes(int A) {
    int count = 0;
    for (int i = 5; A/5 >= 1; i = i*5) {
        count += A/i;
    }
    return count;
}

And here is the recursive code, which completes in time:

public int trailingZeroes(int A) {
    return A == 0 ? 0 : (A/5 + trailingZeroes(A/5));
}

Is recursive code always faster than iterative code, e.g. for tree traversals?
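One observation worth checking: the iterative version's loop condition `A/5 >= 1` never involves `i`, so for any A ≥ 5 the loop never terminates. The timeout is an infinite loop, not iteration being inherently slower than recursion. A corrected iterative version, sketched in Python for brevity:

```python
def trailing_zeroes(a):
    """Count trailing zeros of a! by summing a//5 + a//25 + a//125 + ...
    The loop variable i, not a, controls termination."""
    count = 0
    i = 5
    while i <= a:
        count += a // i
        i *= 5
    return count
```

This runs in O(log a) steps, the same as the recursive version.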

Calculating all products of $n-1$ factors when given $n$ factors

Let's assume we have an operator $\times: E^2 \to E$ of which we merely know that it is associative. Let's say a multiplication $e \times f$ always takes time $M$, for all $e, f \in E$.

We're now given $n$ elements $e_1, \dots, e_n \in E$, and are tasked to calculate all $n$ products $$\def\bigtimes{\mathop{\vcenter{\huge\times}}} p_j := \bigtimes_{\substack{i=1 \\ i \neq j}}^n e_i.$$

Naively multiplying out all $n$ products takes $O(n^2 M)$ time.

A more sophisticated approach that runs in $O(n^{3/2} M)$ splits $\bigtimes_{i=1}^n e_i$ at arbitrary positions into $k = \sqrt n$ blocks of factors.
For each of the $n$ products, we now only have to recompute the one block containing $e_j$ (leaving $e_j$ out), and multiply the resulting factor together with the other $k-1$ block factors.
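The block approach just described can be sketched concretely. A Python sketch (the function name is mine); note the blocks are kept in order, since $\times$ is only assumed associative, not commutative:

```python
import math
from functools import reduce
from operator import mul

def products_excluding_one(elems, op=mul):
    """O(n^{3/2}) block approach: split the factors in order into
    ~sqrt(n) blocks, precompute each block's product, then for each j
    recompute only the block that contains e_j."""
    n = len(elems)
    k = max(1, math.isqrt(n))                      # block size ~ sqrt(n)
    blocks = [elems[i:i + k] for i in range(0, n, k)]
    block_prods = [reduce(op, b) for b in blocks]
    result = []
    for j in range(n):
        bj = j // k                                # which block holds e_j
        rest = [e for i, e in enumerate(blocks[bj]) if i != j - bj * k]
        parts = (block_prods[:bj]
                 + ([reduce(op, rest)] if rest else [])
                 + block_prods[bj + 1:])
        result.append(reduce(op, parts))
    return result
```

For example, `products_excluding_one([1, 2, 3, 4])` gives [24, 12, 8, 6].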

What is the fastest algorithm for this problem, and what does it look like?

Calculating the cube root

Inspired by a question on SO, I was checking how to calculate the cube root of an integer and found a C-ish solution in Hacker's Delight:

// Program for computing the integer cube root.
// Max line length is 57, to fit in
#include <stdio.h>
#include <stdlib.h>     // To define "exit", req'd by XLC.

// Execution time is 3 + (11 + mul)11 = 124 + 11*mul (avg) cycles.
// ------------------------------ cut ----------------------------------
int icbrt1(unsigned x) {
   int s;
   unsigned y, b;

   y = 0;
   for (s = 30; s >= 0; s = s - 3) {
      y = 2*y;
      b = (3*y*(y + 1) + 1) << s;
      if (x >= b) {
         x = x - b;
         y = y + 1;
      }
   }
   return y;
}

I tried to adapt this to Common Lisp and came up with this:

(defun icbrt (x)
  "Returns the integer cube root of X."
  (assert (plusp x) (x) "Please provide a positive integer! ~D < 0" x)
  (loop for s downfrom 30 to 0 by 3
        for y integer = 0 then (* 2 y)
        for b integer = (ash (1+ (* 3 y (1+ y))) s)
        when (>= x b)
        do (incf y)
           (setf x (- x b))
        finally (return y)))

While it works, I am still wondering if it could be expressed more idiomatically. I am especially unsure about the setf within the loop construct.

Any comment is gratefully acknowledged.

Calculating elapsed time

The code below compares recv.rcpt_dtim (a DATETIME column) against the current date/time. It calculates an elapsed time as hours and minutes, formatted like "04:22". It took me a while to get it functional, which it now is, but it just seems sloppy. Does anyone have any tips to clean it up?

TRIM(
    (((CURRENT YEAR TO SECOND - recv.rcpt_dtim)::INTERVAL SECOND(9) TO SECOND)/3600)::VARCHAR(12)
    || ':'
    || CASE
           WHEN (MOD(MOD(((CURRENT YEAR TO MINUTE - recv.rcpt_dtim)::INTERVAL MINUTE(9) TO MINUTE)::VARCHAR(12)::INT, 60), 60)) < 10
           THEN "0"
           ELSE ""
       END
    || (MOD(MOD(((CURRENT YEAR TO MINUTE - recv.rcpt_dtim)::INTERVAL MINUTE(9) TO MINUTE)::VARCHAR(12)::INT, 60), 60))::VARCHAR(12)
)
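For reference when checking edge cases, here is the intended computation sketched in Python (not Informix SQL; the function name is mine): total elapsed minutes, split into zero-padded hours and minutes.

```python
from datetime import datetime

def elapsed_hhmm(rcpt_dtim, now=None):
    """Format elapsed time between rcpt_dtim and now as 'HH:MM'."""
    now = now or datetime.now()
    total_minutes = int((now - rcpt_dtim).total_seconds()) // 60
    return f"{total_minutes // 60:02d}:{total_minutes % 60:02d}"
```

For example, a receipt 4 hours 22 minutes old formats as "04:22".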

Calculating the Complexity of a Two Part Algorithm

This is in relation to this post I made. I eventually solved it with the following approach:

  1. Take the unordered file with all the purchasing data and use the UNIX sort utility to order the file on the customer identifier. Since the file can be 500 million lines (and grows over time), I used the -T option to direct the utility to use my hard drive for its sorting computation. The sort utility uses an external sort.
  2. I created a program which takes this ordered file and computes the required values in one pass.
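Step 2's one-pass computation over the customer-sorted file can be sketched like this in Python (the `customer_id,amount` line format is a hypothetical stand-in for the real file layout; the one-pass trick only works because the file is already sorted by customer):

```python
from itertools import groupby

def per_customer_totals(sorted_lines):
    """One pass over lines already sorted by customer id.
    groupby yields each customer's contiguous run of rows exactly once."""
    for cust, rows in groupby(sorted_lines, key=lambda l: l.split(",")[0]):
        yield cust, sum(float(r.split(",")[1]) for r in rows)
```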

Step 2 is clearly O(n). But what about step 1? What is its complexity, and more to the point of the question, what is the overall complexity of this two-stage solution?