Does using onclick to redirect with parameters impact SEO?

For SEO I need to have my link https://www.example.com/ on a third party website. However, I also need this website to pass parameters to my site.

If the third party website used the structure below, would it pass a backlink to my main URL https://www.example.com/, or would it pass a backlink to https://www.example.com/?xyz=55 ?

Would this be seen as legitimate from an SEO perspective, or is it pushing it? Is there a better way to accomplish this?

<script>
  var val = 55;
</script>

Link to <a href="https://www.example.com/" onclick="location.href=this.href+'?xyz='+val;return false;">My Site</a>

How to determine values of parameters such that an inequality is satisfied?

Given the inequality f[x, y] > 0, where f[x, y] = 1/16 (-1 + x (2 - x + x^3 (-1 + y)^2 y^2)), how can one find the values of x (keeping y fixed) such that the inequality is satisfied, and then repeat the same to find y (keeping x fixed)? The answer appears in equation (107) of this article:

$$ 2(\sqrt{2}-1) \le x \le 1, \qquad \frac{1}{2}\left( 1 - \sqrt{ \frac{x^2 + 4x - 4}{x^2} } \right) \le y \le \frac{1}{2}\left( 1 + \sqrt{ \frac{x^2 + 4x - 4}{x^2} } \right) $$
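
A minimal sketch of the kind of Reduce call I have in mind (not taken from the article; the restriction of x and y to [0, 1] is my own assumption and should be adjusted to whatever domain the article actually uses):

f[x_, y_] := 1/16 (-1 + x (2 - x + x^3 (-1 + y)^2 y^2));

(* condition on y, with x treated as a fixed parameter *)
Reduce[f[x, y] > 0 && 0 <= x <= 1 && 0 <= y <= 1, y, Reals]

(* range of x for which at least one admissible y exists *)
Reduce[Exists[y, 0 <= y <= 1, f[x, y] > 0] && 0 <= x <= 1, x, Reals]

The second call eliminates y, which should give the admissible range of x directly.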

Solving coupled ODEs symbolically for the parameters

I’m trying to solve the following system:

$$ \dot{x}_1 = -i w_1 x_1 + i s_1 x_2 $$
$$ \dot{x}_2 = -i w_2 x_2 - i s_2 x_1 $$

For $X_{1,2}(t)$, where $x_{1,2} = X_{1,2}(t)\cdot e^{i p_{1,2}(t)}$.

I know the solution is:

$$ \dot{X}_1 = s_1 X_2 \sin(p_1 - p_2) $$
$$ \dot{X}_2 = s_2 X_1 \sin(p_1 - p_2) $$

I tried the following:

x1 = X1[t]*E^(I*p1[t]); x2 = X2[t]*E^(I*p2[t]);

x1dot = -I*w1*x1 + I*s1*x2;
x2dot = -I*w2*x2 + I*s2*x1;

x1d = D[x1, t]; x2d = D[x2, t];

{{X1s[t], X2s[t]}} =
  {X1[t], X2[t]} /.
   Simplify[
    Solve[
     {x1d == x1dot, x2d == x2dot},
     {X1[t], X2[t]}]]

But the result I get is:

$$ -\frac{i \left(\text{x1d} \text{p2}'(t)+\text{s1} e^{i \text{p2}(t)} \text{X2}'(t)+\text{w2} \text{x1d}\right)}{e^{i \text{p1}(t)} \left(-\text{w1} \text{p2}'(t)+\text{s1} \text{s2}-\text{w1} \text{w2}\right)}, \quad -\frac{i \left(\text{w1} \text{X2}'(t)+\frac{\text{s2} \text{x1d}}{e^{i \text{p2}(t)}}\right)}{-\text{w1} \text{p2}'(t)+\text{s1} \text{s2}-\text{w1} \text{w2}} $$

I’m not sure if Mathematica can use Euler’s formula to simplify the complex exponentials and get to the known solution. I guess it’s possible, but I’m not sure how to do it.
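
A sketch of the kind of manipulation I have in mind (my own attempt at a direction, not a working solution): substitute the ansatz into each equation, divide out that equation’s phase factor, and separate real and imaginary parts with ComplexExpand, which treats all remaining symbols (X1, X2, p1, p2, w1, w2, s1, s2) as real. Note that eq2 below uses the minus sign in front of s2 from the system as stated above, whereas the code attempt uses +I*s2.

eq1 = D[X1[t] E^(I p1[t]), t] == -I w1 X1[t] E^(I p1[t]) + I s1 X2[t] E^(I p2[t]);
eq2 = D[X2[t] E^(I p2[t]), t] == -I w2 X2[t] E^(I p2[t]) - I s2 X1[t] E^(I p1[t]);

(* divide an equation by its phase factor, then split it into real and imaginary parts *)
split[eq_, phase_] :=
  With[{lhs = Simplify[First[eq]/phase], rhs = Simplify[Last[eq]/phase]},
   {Simplify[ComplexExpand[Re[lhs]] == ComplexExpand[Re[rhs]]],
    Simplify[ComplexExpand[Im[lhs]] == ComplexExpand[Im[rhs]]]}]

split[eq1, E^(I p1[t])]
split[eq2, E^(I p2[t])]

The real parts should reproduce the amplitude equations quoted above, and the imaginary parts give the corresponding phase equations.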

How to pass parameters to a Blueprint function called from C++?

I have found this piece of code showing how to call a blueprint function from C++:

UFunction* Func = Obj->GetClass()->FindFunction(FName("FuncName"));
if (Func == nullptr) { return; }

FStructOnScope FuncParam(Func);
UProperty* ReturnProp = nullptr;

for (TFieldIterator<UProperty> It(Func); It; ++It)
{
    UProperty* Prop = *It;
    if (Prop->HasAnyPropertyFlags(CPF_ReturnParm))
    {
        ReturnProp = Prop;
    }
    else
    {
        // FillParam here
    }
}

Obj->ProcessEvent(Func, FuncParam.GetStructMemory());

But… I don’t know what to put at // FillParam here.

How can I fill the FuncParam with the parameters that I need to pass?

SSRS report resets the parameters to default after I click on “View Report”

We are using SSRS version 16 (build 13.0.x) in Native mode.

I built a report using Report Builder with 2 parameters (independent). Everything works as it should when I run the report in Report Builder.

When I publish it to the WebPortal, a seemingly random set of users has the following issue:

  1. Through the WebPortal URL, they click on the report to open it in the browser; we have always used Chrome.
  2. They populate the parameters and click on "View Report".
  3. The report starts "Loading" and immediately the parameters are reset to default (or blank) and the report area goes blank. They cannot even see the column headers or the menu bar in the report area.

Apart from a handful of seemingly random users, everyone else has no issues.

We use SQL accounts for authentication and all users have permissions to fetch data from tables. I checked whether this issue occurs only with certain kinds of parameters, but there seems to be no common denominator.

I’ve come up dry on the Power BI and TechNet forums. I appreciate any help or troubleshooting steps. Thanks 🙂

PostgreSQL connection URI fails, but parameters work

Running PostgreSQL 13.2 with SSL on.

I need to connect to a database from a third party application that requires a connection URI, but it is not working. So I tested connecting with psql using two different formats with the same credentials. One worked and one didn’t.

The following command logs me into mydb as user myuser:

# psql -U myuser -d mydb -h 127.0.0.1 -p 5432 -W
Password:
psql (13.2 (Debian 13.2-1.pgdg100+1))
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.

mydb=>

However this command fails:

# psql postgresql://myuser:MYPASSWORD@127.0.0.1:5432/mydb?sslmode=require
psql: error: FATAL: password authentication failed for user "myuser"

I am using exactly the same credentials. I’ve verified it more than 10 times. Removing "sslmode=require" does not fix the problem.

My pg_hba.conf file contains:

host   mydb   myuser   127.0.0.1/32   password 

I made it the first line in my pg_hba.conf file, so it can’t be getting hung up on any other line.

What am I doing wrong?

Can I indicate (via robots.txt, meta robots or another approach) that one or more queryString parameters should be ignored by crawlers?

I’ve written my own SiteSearch Script in PHP.

The SiteSearch parses two GET parameters from the queryString:

  • search // the search-phrase
  • filters (optional) // indicating which parts of the site to include or ignore

I don’t mind GoogleBot and other crawlers reading the search parameter.

But I would like to advise crawlers to ignore the filters parameter, because a very high number of configurations of that parameter would simply return the same results – and an identical, duplicate page as far as the crawlers are concerned.


As much as I would like to add to my robots.txt file something like:

User-agent: *
IgnoreParameter: filters

this isn’t an option.

And a meta robots directive like:

<meta name="robots" content="ignoreparams[filters]"> 

isn’t an option either.

Is there any creative way I can enable crawling of the page and have crawlers ignore the filters parameter in the queryString?

Or am I stuck with a directive as unrefined as:

<meta name="robots" content="noindex"> 

if I don’t want to risk pages with identical search parameters (but different filters parameters) being crawled?

Plot with three functions and three parameters

I think my code doesn’t have any problem, but the program shows me this error and I am confused: Plot::argr: Plot called with 1 argument; 2 arguments are expected. My plot doesn’t run; I want to run this function with the mean mesi.

a = 1; l = 2;
w1[x_, c_, theta1_] := x^c + theta1;
f[x_] := (l^a*x^(a - 1)*Exp[-l*x])/Gamma[a];
mesi = Integrate[x*f[x], {x, 0, Infinity}];
mesi1 = Integrate[(x^c + theta1)*((l^a*x^(a - 1)*Exp[-l*x])/Gamma[a]), {x, 0, Infinity}];
fw1[x_, c_, theta1_] := ((x^c + theta1)*((l^a*x^(a - 1)*Exp[-l*x])/Gamma[a]))/(Integrate[(x^c + theta1)*((l^a*x^(a - 1)*Exp[-l*x])/Gamma[a]), {x, 0, Infinity}]);
DH1[x_, c_, theta1_] :=
  ((Integrate[(x^c + theta1)*((l^a*x^(a - 1)*Exp[-l*x])/Gamma[a]), {x, 0, Infinity}])/
     ((Integrate[(x^c + theta1)*((l^a*x^(a - 1)*Exp[-l*x])/Gamma[a]), {x, 0, Infinity}]) + theta1))*
    (((x^c + theta1)*((l^a*x^(a - 1)*Exp[-l*x])/Gamma[a]))/
     (Integrate[(x^c + theta1)*((l^a*x^(a - 1)*Exp[-l*x])/Gamma[a]), {x, 0, Infinity}])) +
   (theta1/(Integrate[(x^c + theta1)*((l^a*x^(a - 1)*Exp[-l*x])/Gamma[a]), {x, 0, Infinity}]) + theta1)*
    ((l^a*x^(a - 1)*Exp[-l*x])/Gamma[a]);

Manipulate[Plot[DH1[x, c, theta1]], {x, 0, 100}, , {c, 0, 10}, {theta1, 0, 10}]
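
As far as I can tell, the Plot::argr message comes from the Manipulate call above: Plot receives only one argument because the plotting range {x, 0, 100} is placed among the Manipulate controls (and there is a stray extra comma). A possible corrected call is sketched below, together with an optional refinement that is my own suggestion rather than part of the original code: evaluating the normalising integral once, symbolically, so it is not recomputed at every plotted point (the Assumptions on c and theta1 are mine; a and l are as defined above).

(* plotting range moved inside Plot; only c and theta1 remain as Manipulate controls *)
Manipulate[Plot[DH1[x, c, theta1], {x, 0, 100}], {c, 0, 10}, {theta1, 0, 10}]

(* optional: evaluate the normalising integral once instead of inside DH1 at every point *)
norm[c_, theta1_] = Integrate[(x^c + theta1) (l^a x^(a - 1) Exp[-l x])/Gamma[a],
   {x, 0, Infinity}, Assumptions -> c >= 0 && theta1 >= 0];

DH1 (and fw1) can then be rewritten in terms of norm[c, theta1], which should make the Manipulate much more responsive.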

Response time optimization – Getting record count based on Input Parameters

I’m trying to optimize the process of calculating the count of records based on variable input parameters. The whole process spans several queries, functions and stored procedures.

1/ Basically, the front-end sends a request to the DB (it calls a stored procedure) with an input parameter (a DataTable). This DataTable (the input parameter collection) contains 1 to X records. Each record corresponds to one specific rule.

2/ The SP receives the collection of rules (as a custom typed table) and iterates through them one by one. Apart from other metadata, each rule contains the name of a specific function that should be used to evaluate that rule.

For every rule, the SP prepares a dynamic query wherein it calls the mentioned function with 3 input parameters.

a/ a custom-type memory-optimized table (hash index)
b/ a collection of lookup values (usually INTs) that the SELECT query uses to filter data, i.e. "get me all records that have fkKey in (x1, x2, x3)"
c/ a BIT determining whether this is the first rule in the whole process

Each function has an IF statement that determines, based on the c/ parameter, whether it should return "all" records that fulfill the input criteria (b/), or whether it should apply the criteria on top of joining the result of the previous rule, which is contained in the custom table (a/).

3/ Once the function is run, its result is INSERTed into a table variable called @tmpResult. @result is then compared to @tmpResult, and records that are not in @tmpResult are DELETEd from @result.

  • @result is a table variable (a custom memory-optimized table type) that holds the intermediate result during the whole SP execution. It is fully filled on the first rule; every subsequent rule only removes records from it.

4/ The cycle repeats for every rule until all of the rules are done. At the end, a count is taken of the records in @result and returned as the result of the SP.

Few things to take into account:

  • There are dozens of different types of rules, and the list of rules only grows bigger over time. That’s why a dynamic query is used.
  • The most effective way to temporarily store records between individual rule executions has so far proved to be a custom memory-optimized table type. We tried a lot of things, but this one seems to be the fastest.
  • The number of records usually returned for a single rule is roughly somewhere between 100 000 and 3 000 000. That’s why a bucket_size of 5 000 000 is used for the HASHed temporary tables. And even though we tried a nonclustered index, it was slower than the HASH.
  • The input collection of rules can vary strongly. There can be anything from 1 rule up to dozens of rules used at once.
  • Most rules can be defined with a minimum of 2 lookup values, and at most with dozens or, in a few cases, even hundreds of values. For a better understanding of the rules, here are some examples:

Rule1Color, {1, 5, 7, 12}
Rule2Size, {100, 200, 300}
Rule3Material, {22, 23, 24}

Basically, every rule is specified by its Designation, which corresponds to a specific Function, and by its collection of Lookup values. The possible lookup values differ based on the designation.

What we have done to optimize the process so far:

  • Where a big number of records needs to be temporarily stored, we use memory-optimized table variables (we also tried temp tables, but performance was basically the same as with the memory-optimized variants).
  • We strongly reduced and optimized the source tables the SELECT statements are run against.

Currently, the overall load is balanced roughly 50/50 between the I/O costs pertaining to the SELECT statements and the manipulation of records between temporary tables, which is frankly not so good. Ideally the only bottleneck should be the I/O operations, but so far we have not been able to come up with a better solution, since the whole process has a lot of variability.

I will be happy for any idea you can throw my way. Of course feel free to ask questions if I failed to explain some part of the process adequately.

Thank you