Product detected as Article in Google structured data

I have fixed all errors and warnings on my website with the Schema & Structured Data For WP plugin.

The problem is that my product pages are defined as Article, so they rank badly. I’m not very familiar with coding.

Is there any simple way to fix this problem?

<meta property="og:type" content="article"/> 

Google structured data tester (screenshot)

Please let me know if you need more information.

An algorithm for detecting whether noisy univariate data is constant or a sum of step functions

In an algorithm I’m writing, there is a stage where I need to determine whether a noisy univariate series is constant or a sum of step functions.

For example, defining foo as the algorithm I’m after (writing in Python):

    assert foo([0]*20) == False
    assert foo([0]*20 + [1]*20) == True
    assert foo([0]*20 + [5]*30 + [1]*70) == True

  • The data in the examples is not noisy, but assume the real data is (only slightly: one can pinpoint where a step might occur by looking at a plot of the data).

I’d be happy to hear any ideas, thank you.
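
A minimal sketch of one possible heuristic for foo (an assumption on my part, not an established method, and it presumes the noise is roughly independent from sample to sample): estimate the noise level from successive differences, which are insensitive to plateaus, and report a step whenever the overall spread of the data is much larger than that noise.

    import numpy as np

    def foo(values, factor=3.0):
        """Return True if the series looks like a sum of step functions,
        False if it looks constant up to noise."""
        y = np.asarray(values, dtype=float)
        # Differences of neighbouring samples cancel the plateaus, so their
        # spread estimates the noise level alone (sqrt(2) accounts for differencing).
        noise = np.diff(y).std() / np.sqrt(2.0)
        # A step inflates the spread of the raw data but not of the differences.
        return bool(y.std() > factor * noise)

    assert foo([0]*20) == False
    assert foo([0]*20 + [1]*20) == True
    assert foo([0]*20 + [5]*30 + [1]*70) == True

The factor would need tuning against the real noise level; for steps that are small relative to the noise, a proper change-point test would be more robust.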


Does anyone know how to make extra data, such as the number of jobs on a page, show in Google search results?

I have noticed that when I google for jobs, for example ‘plumber jobs in Melbourne’, some results show a piece of data such as ‘407 jobs’ prepended before the normal meta description.

Example of such a search result (screenshot)

Does anyone know what Seek has done to get this data shown in Google search results?

What are the best practices in a manufacturing/production facility for data retention?

My scenario: a production facility that uses data sets provided by customers to produce personalized goods in bulk. Data sets can range from 100,000 to 2,000,000 names and addresses in the US. This isn’t PCI data and doesn’t fall under HIPAA or the Sarbanes-Oxley Act.

In a job shop environment, how long is too long to keep data lists provided by customers? Project Managers would love to keep “everything” indefinitely to refer to. Network admins would like data sets scrubbed once a project has shipped.

I’d like to strike a solid balance. What are some of the best practices in this area, and what sources do you refer to when setting policy?

NMinimize doesn’t work with a defined function and data set

I have a data set

    data = {{-35., 0.315382}, {-30., 0.510487}, {-25., 0.808823}, {-20., 1.25604},
            {-15., 1.91404}, {-10., 2.86533}, {-5., 4.21811}, {0., 6.11213},
            {5., 8.7253}, {10., 12.2811}, {15., 17.0568}, {20., 23.3919},
            {25., 31.6982}, {30., 42.4692}, {35., 56.2906}, {40., 73.8511},
            {45., 95.9534}, {50., 123.525}, {55., 157.628}, {60., 199.474},
            {65., 250.427}, {70., 312.022}, {75., 385.967}, {80., 474.158},
            {85., 578.681}, {90., 701.827}, {95., 846.09}, {100., 1014.18},
            {105., 1209.02}, {110., 1433.77}, {115., 1691.8}, {120., 1986.71}}

and a function

f[t_, a_, b_, c_] := Exp[a + b/(c + t)]; 

Now I run NMinimize to find the parameters a, b, c with the command:

    NMinimize[
      Total[((f[data[[All, 1]], a, b, c] - data[[All, 2]])/data[[All, 2]])^2],
      {a, b, c}]

The output parameters are wrong. Please let me know what the problem is. Why does NMinimize give wrong results?

Thank you

Chi^2 fitting for correlated data

Suppose you have $N$ correlated data points $\vec{y}_\mathrm{data}$ and a model that is a function of $M$ parameters $\vec{x}$. The associated $\chi^2$ statistic is

$\chi^2 = (\vec{y}_\mathrm{data} - \vec{y}_\mathrm{theo}(\vec{x})) \cdot C^{-1} \cdot (\vec{y}_\mathrm{data} - \vec{y}_\mathrm{theo}(\vec{x}))$,

where $\vec{y}_\mathrm{data}$ is the vector of data points, $\vec{y}_\mathrm{theo}(\vec{x})$ is the fit function, and $C$ is the covariance matrix associated with the data points. To find the best fit, one minimizes $\chi^2$ with respect to $\vec{x}$.

What is the best function in Mathematica to do this? NonlinearModelFit doesn’t handle correlated data, and FindMinimum doesn’t provide useful statistics (like the covariance matrix associated with $\vec{x}$).
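
For reference, one standard reformulation of the same statistic (a general observation, not a Mathematica-specific answer): since $C$ is symmetric positive definite, it admits a Cholesky factorization $C = L L^{\mathsf{T}}$, and in terms of the whitened residuals $\vec{r} = L^{-1}(\vec{y}_\mathrm{data} - \vec{y}_\mathrm{theo}(\vec{x}))$ the statistic becomes an ordinary uncorrelated sum of squares,

$\chi^2 = \vec{r} \cdot \vec{r} = \sum_i r_i^2,$

so any least-squares routine that assumes independent data points can be applied to the transformed residuals and minimizes exactly the same $\chi^2$.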

Data structure to query intersection of a line and a set of line segments

We want to preprocess a set $S$ of $n$ line segments into a data structure, so that we can answer the following query: given a query line $l$, report how many line segments in $S$ it intersects.

The query should be answered in $O(\log{n})$ time. The data structure itself may use up to $O(n^{2})$ storage and should be built in $O(n^{2}\log{n})$ time. It is suggested that the problem be solved in the dual plane.

I understand that the question may require me to count the number of double wedges that a query point lies in, but I can’t think of an efficient data structure to report that information. Any suggestions?
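
For reference, a sketch of the standard dual setup (only a reminder of the transform, not a solution): a point $p = (p_x, p_y)$ dualizes to the line $p^{*}: y = p_x x - p_y$, and a non-vertical line $y = m x + b$ dualizes to the point $(m, -b)$; this map preserves above/below relations. A segment $\overline{pq}$ then corresponds to the double wedge bounded by the lines $p^{*}$ and $q^{*}$ (the pair of opposite wedges not containing the vertical direction), and a line intersects the segment exactly when its dual point lies in that double wedge. The query therefore amounts to counting how many of the $n$ double wedges contain the dual of the query line.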

This is basically a homework question from the textbook Computational Geometry by de Berg et al. (Question 8.15). My apologies if it is not very exciting.

Should backup jobs be partitioned by data type and applicable regulations?

Looking at how Microsoft categorizes data, customer data can be broken into a few different categories:

  • Customer Data
  • Customer Content
  • Personal Data

Should backup jobs be configured in a way that supports special needs such as:

  • retention
  • deletion (partial or full, for the right to be forgotten)
  • access control (catalog, or metadata)

How to get data from a database week-wise between two given dates?

I have a SQL database where I save data for every date in a table. Given any two dates, I want all the data between them in a weekly format, where the week starts on Monday. If the given start date falls on a Thursday, for example, then the data from that Thursday to Sunday should be displayed as one week’s data.

Here is an example. Say I choose the start date 08-01-2020 and the end date 08-03-2020, a span of about two months. I want the data for all of those days, but week-wise from Monday to Sunday. Here the given start date falls on a Wednesday, so that first week’s data would start on Wednesday and end on Sunday.

Hope you got my point. Thanks in advance.
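
Not the SQL itself, but a minimal sketch of the week-bucketing logic in Python, assuming DD-MM-YYYY dates as in the example and rows already fetched for the requested range; the idea of truncating each date to the Monday of its week is what a SQL GROUP BY expression would have to reproduce.

    from datetime import date, timedelta

    def group_by_week(rows, start, end):
        """Group (day, value) rows into buckets keyed by the Monday of each week;
        the first and last buckets may be partial, matching the requested range."""
        buckets = {}
        for day, value in rows:
            if not (start <= day <= end):
                continue
            monday = day - timedelta(days=day.weekday())  # Monday of that week
            key = max(monday, start)                      # clip the first partial week
            buckets.setdefault(key, []).append((day, value))
        return dict(sorted(buckets.items()))

    # Example: the start date 8 January 2020 falls on a Wednesday, so the first
    # bucket covers Wednesday..Sunday and later buckets run Monday..Sunday.
    rows = [(date(2020, 1, 8) + timedelta(days=i), i) for i in range(60)]
    for week_start, days in group_by_week(rows, date(2020, 1, 8), date(2020, 3, 8)).items():
        print(week_start, len(days))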

Packets contained no EAPOL data; unable to process this AP

I’m trying to hack my own WiFi using aircrack but have had no success. With aircrack I cannot achieve a successful handshake as the deauth doesn’t seem to have any effect on my targeted devices. This is what it outputs:

    root@RPI02:~# aircrack-ng -w password.lst *.cap
    Opening WIFI_APPLE.cap-01.cap..
    Read 180751 packets.

       #  BSSID              ESSID                     Encryption

       1  F1:2E:DG:F2:EE:0F  WIFI APPLE                WPA (0 handshake)

    Choosing first network as target.

    Opening WIFI_APPLE.cap-01.cap..
    Read 180751 packets.

    1 potential targets

    Packets contained no EAPOL data; unable to process this AP.

What exactly does this line mean?

Packets contained no EAPOL data; unable to process this AP.