Scrapebox + Automator: How to grab links from hundreds of websites?

Hello,
I have Scrapebox and the premium Automator plugin.
I have a text file with 800 domains (one domain per line).
I would like to run Automator's Grab Links function on each domain, one by one, with a crawl level of 4.
Right now I create the "Grab Links" task manually, so for 800 domains I would need to create the task in Automator 800 times, with a different domain each time.
Does anyone know how I can grab links from 800 domains more quickly in Automator?
Thank you very much!
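I don't know whether Automator can feed a domain list from a file into a single Grab Links task, but the underlying job is simple enough to sketch outside Scrapebox. A rough Python illustration, assuming the requests and beautifulsoup4 packages and a file named domains.txt (this is not Automator's API):

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

def grab_links(start_url, max_depth=4):
    # Breadth-first crawl of start_url, collecting every link found.
    # Only pages on the starting domain are fetched, so max_depth
    # means "clicks away from the homepage", like a level-4 crawl.
    domain = urlparse(start_url).netloc
    seen, frontier, links = {start_url}, [start_url], set()
    for _ in range(max_depth):
        next_frontier = []
        for url in frontier:
            try:
                html = requests.get(url, timeout=10).text
            except requests.RequestException:
                continue
            for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
                link = urljoin(url, a["href"])
                links.add(link)
                if link not in seen and urlparse(link).netloc == domain:
                    seen.add(link)
                    next_frontier.append(link)
        frontier = next_frontier
    return links

with open("domains.txt") as f:  # the 800-domain list, one per line
    for line in f:
        domain = line.strip()
        if domain:
            for link in grab_links("http://" + domain):
                print(link)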

Why is my view selecting hundreds of duplicates?

This view selects 696 entries. The CSV file has 48 entries.

CREATE OR REPLACE VIEW insert_3_char_abts AS
SELECT
    ext.construct_id,
    n_term,
    enz_name,
    c_term,
    cpp,
    mutations,
    ext.g_batch,
    ext.p_batch,
    emptycol,
    c_batch,
    abts5_mean,
    abts5_SD,
    abts5_n,
    abts5_method,
    abts5_study_id,
    abts7_mean,
    abts7_SD,
    abts7_n,
    abts7_method,
    abts7_study_id,
    pur.pk_purified_enz_id
FROM EXTERNAL (
    (
        construct_id NUMBER(10),
        n_term VARCHAR2(50),
        enz_name VARCHAR2(50),
        c_term VARCHAR2(50),
        cpp VARCHAR2(50),
        mutations VARCHAR2(50),
        g_batch VARCHAR2(50),
        p_batch VARCHAR2(50),
        emptycol VARCHAR2(50),
        c_batch VARCHAR2(50),
        abts5_mean NUMBER(5, 2),
        abts5_SD NUMBER(5, 2),
        abts5_n NUMBER(3),
        abts5_method VARCHAR2(50),
        abts5_study_id VARCHAR2(8),
        abts7_mean NUMBER(5, 2),
        abts7_SD NUMBER(5, 2),
        abts7_n NUMBER(3),
        abts7_method VARCHAR2(50),
        abts7_study_id VARCHAR2(8)
    )
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY data_to_input
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        SKIP 1
        BADFILE bad_files:'badflie_view_before_insert_char_abts.bad'
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        MISSING FIELD VALUES ARE NULL
    )
    LOCATION ('CHAR_ABTS.CSV')
    REJECT LIMIT UNLIMITED
) ext
INNER JOIN purified_enz pur
    ON ext.p_batch = pur.p_batch
INNER JOIN produced pr
    ON pr.pk_produced_id = pur.fk_produced_id;

If I finish the final join's ON clause with

    AND pr.fk_construct_id = ext.construct_id;

it selects 46 out of 48 records, which is better, but not great.

String filtering – process hundreds to millions of filters

What would be the most efficient way (whether with algorithms, CPU(s), databases and SQL, distributed computing, etc.) to process many strings, say ~1,000 per minute, and run each one through hundreds to potentially millions of different filter parameters? A parameter can be a simple statement, such as including or not including the word "cat", or including "dog" but not "cat", and as "complicated" as multiple boolean gates with added timestamp ranges (logs). Each individual filter that evaluates true would be collected, and some operation would run for each.
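One common pattern for the simple keyword case is an inverted index from words to the filters that require them, so each incoming string only evaluates the filters it could possibly match instead of all of them. A minimal Python sketch under that assumption (the filter representation below is made up for illustration; richer boolean trees and timestamp ranges would be compiled into candidates the same way):

from collections import defaultdict

# Each filter: (id, words it requires, words it forbids).
filters = [
    (1, {"dog"}, {"cat"}),   # "dog" AND NOT "cat"
    (2, {"cat"}, set()),     # just "cat"
]

# Inverted index: word -> ids of the filters that require it.
index = defaultdict(set)
for fid, required, _ in filters:
    for word in required:
        index[word].add(fid)

by_id = {fid: (req, forb) for fid, req, forb in filters}

def matching_filters(text):
    words = set(text.lower().split())
    # Candidates share at least one required word with the input,
    # so millions of unrelated filters are never touched.
    candidates = set()
    for w in words:
        candidates |= index.get(w, set())
    hits = []
    for fid in candidates:
        required, forbidden = by_id[fid]
        if required <= words and not (forbidden & words):
            hits.append(fid)
    return hits

print(matching_filters("the dog barked"))  # -> [1]
print(matching_filters("dog chased cat"))  # -> [2]

At ~1,000 strings per minute, the cost is dominated by how many candidate filters each string touches, which is exactly what the index bounds; sharding the index across machines extends the same idea toward millions of filters.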

Posts, links from hundreds of sites. Variety of niches. DA up to 54.

High-Quality domains with traffic, DA up to 54, age from 2 to 20 years.

Here is the list of sites: https://drive.google.com/open?id=1dlzJXNL97hIN4e9E3rf-HgfVLk4gQTje

You can sort websites by categories, traffic, DA, PA, creation date.

Niches of sites:

  • Education
  • Business
  • Medicine
  • Women
  • Food
  • Tourism
  • Photo
  • Nature
  • Mass media
  • Children
  • Culture, etc.

For more information…


Communication Architecture for hundreds or thousands of devices deployed in the field

I'm building an application (both client- and server-side code) where probably thousands of devices will be deployed in the field, connected to the internet via cellular connections. Each device will have a SIM in it. These are embedded devices with very limited capability in terms of software options: the only communication protocols they support are TCP, UDP, and HTTP/S (server and client). You can't even add external libraries to support more protocols like MQTT; the development environment is very closed and only allows very specific and limited types of external libraries.

So I've decided not to use UDP or TCP, because they're very low-level and the development time, and hence cost, would be much higher. That leaves HTTP. The application needs to poll some data from the devices at defined intervals, and sometimes on demand as well (when the user requests it). This suggests putting the server on the devices and just using GET/POST etc. to acquire the data periodically or whenever the user demands. But since these devices will be put in very remote and geographically scattered locations, we would have to assign a fixed IP (or a domain name) to each device so that the server can send requests. This architecture doesn't seem very good, because managing the IPs would be a huge headache.

The last option I have is to put an HTTP server on the actual physical server and have each device ping it periodically (every 5 or 10 seconds?) via a POST request. As the response to that POST, the server can ask for whatever it wants in JSON format, be it new data or configuration changes. The ping also acts as a heartbeat. On the server side, due to some limitations, I'm bound to use Python, so I'm thinking of using Flask behind a web server like Nginx.
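A minimal sketch of that poll/heartbeat endpoint in Flask; the route name, JSON fields, and the in-memory command queue are illustrative assumptions, not a finished design:

from flask import Flask, request, jsonify

app = Flask(__name__)
pending = {}  # device_id -> commands queued by on-demand user requests

@app.route("/heartbeat", methods=["POST"])
def heartbeat():
    body = request.get_json(force=True)
    device_id = body.get("device_id")
    # The periodic POST doubles as a liveness signal and as the
    # channel for delivering queued on-demand requests to the device.
    commands = pending.pop(device_id, [])
    return jsonify({"commands": commands, "poll_interval": 10})

if __name__ == "__main__":
    app.run()  # behind Nginx you would run this under gunicorn/uWSGI instead

With a 5-10 second poll interval, thousands of devices works out to a few hundred requests per second, which a small handler like this behind Nginx and a WSGI server such as gunicorn can typically sustain.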

I need some suggestions: is this the only way I can implement this, or are there better solutions/architectures? Also, will Flask + Nginx be able to handle thousands of requests, or are there better options for implementing the web server in Python?

Server Hunter – Easily browse thousands of virtual and dedicated servers from hundreds of providers

Server Hunter is an online platform for searching and comparing virtual and dedicated server plans from hundreds of different hosting providers. Offers and pricing are automatically retrieved and updated every 24 hours. Visit us at serverhunter.com.

Now you can easily search and compare among thousands of virtual, hybrid, and dedicated servers from hundreds of providers. Learn more about Server Hunter: https://www.serverhunter.com/about/.

What is a good way to organize hundreds of sortable items in a list?

I have many pages of items, each page containing m x n items, with the m and n dimensions of the layout editable by the user. I want the order of the items on a page to be editable as well, so I thought about using the jQuery UI sortable connected lists shown here: https://jqueryui.com/sortable/#connect-lists

Each page would be a list, and the items could be ordered based on their position in the grid-like array of connected lists. Is this a good approach, or should I go in a different direction?

Hundreds of empty-username logins in WordPress site

A WordPress site I manage has the WordFence security plugin installed.

Legitimate users occasionally get blocked by the Brute Force protection aspect of the plugin, even though they haven’t made any bungled login attempts.

When I look into the Live Traffic I see hundreds of invalid login attempts, mostly at /xmlrpc.php but some at /wp-login.php.

However, they don’t look like hacking attempts, since they all use a blank username:

Mountain View, United States attempted a failed login using an invalid username “”. https://my_domain.com/xmlrpc.php 5/2/2019 1:00:24 PM (22 hours 5 mins ago)

When I look up the IP addresses in WhoIs, it tells me that it’s Google LLC.

Has anyone else come across this issue before?

Is it possible that some Google product, perhaps a Chrome browser tool, is inadvertently causing this issue?

Weirdly, when I blocked the IP address range of the attempts in WordFence, I was locked out of the site myself, even though my IP address isn’t within the range.

PS: The site uses the W3 Total Cache plugin.

Neural Network sine wave regression requires hundreds of hidden units to be effective?

I am currently attempting to implement a feed-forward neural network with one hidden layer using numpy, as a project for university. We are to use hyperbolic tangent activation functions for the hidden layer; the function for the output layer is unspecified. The network is to regress a sine wave.

I have already implemented this code for the XOR classification problem and it seemed to work fine. The sine wave regression code only has minor alterations.

I have been tinkering with the hyperparameters for hours now, added momentum (which only seems to make things worse), and have seen little success in getting it working.

For the project, we are to run regression with 3 hidden units, and then with 20. Both of these options seem to dramatically underfit the data. About half the time, it will yield just a horizontal line, and sometimes it will plot what appears to be a single tanh function.

The only thing that seems to make it work is an excessive number of units in the hidden layer (Hn in the learn function). A learning rate of 0.1-0.2, an epoch count of 2000-5000, and 100-200 hidden units regresses the sine wave reasonably well.

I’ve been messing with this for hours and frankly am out of ideas. Any help would be greatly appreciated.

Here’s the code:

import numpy as np
import matplotlib.pyplot as plt


class partB:
    def propegate(self, Xs):
        # Forward pass. The biases are added *before* the activations;
        # the original added them after tanh/sigmoid, which breaks the
        # gradient derivation in backProp.
        self.A1 = np.dot(self.W1, Xs) + self.b1
        self.Z1 = np.tanh(self.A1)
        self.A2 = np.dot(self.W2, self.Z1) + self.b2
        self.Z2 = self.A2  # linear output unit: regression, not classification
        return self.Z2

    def backProp(self, momentumFac):
        # Gradients of the squared error with a linear output unit
        dA2 = self.Z2 - self.targets
        dW2 = (1 / self.m) * np.dot(dA2, self.Z1.T)  # was A1.T: bug
        db2 = (1 / self.m) * np.sum(dA2, axis=1, keepdims=True)
        dA1 = np.dot(self.W2.T, dA2) * (1 - np.power(self.Z1, 2))  # tanh'
        dW1 = (1 / self.m) * np.dot(dA1, self.data.T)
        db1 = (1 / self.m) * np.sum(dA1, axis=1, keepdims=True)

        # Momentum accumulates in velocity terms. The original multiplied
        # the *weights* by momentumFac, which is weight decay, not momentum.
        self.vW1 = momentumFac * self.vW1 - self.learnRate * dW1
        self.vW2 = momentumFac * self.vW2 - self.learnRate * dW2
        self.W1 += self.vW1
        self.W2 += self.vW2
        self.b1 -= self.learnRate * db1
        self.b2 -= self.learnRate * db2

    def logLoss(self):
        # Despite the name, this is the root-mean-square error
        return np.sqrt(np.mean(np.square(self.Z2 - self.targets)))

    def learnBitch(self, learnRate, epochs, moment):
        # Noisy sine-wave training data on [-1, 1]
        self.data = 2 * np.random.random_sample((1, 50)) - 1
        self.targets = (0.5 * np.sin(2 * np.pi * self.data) + 0.5
                        + 0.3 * np.random.random_sample((1, 50)))
        Xn, Yn = self.data.shape[0], self.targets.shape[0]
        Hn = 20  # hidden units (3 or 20 for the assignment)
        self.m = self.data.shape[1]
        self.learnRate = learnRate

        # Zero-centred initial weights. random_sample() alone gives
        # all-positive weights, so the tanh units start out nearly
        # collinear and the net underfits to a flat or single-tanh curve.
        self.W1 = np.random.random_sample((Hn, Xn)) - 0.5
        self.W2 = np.random.random_sample((Yn, Hn)) - 0.5
        self.b1 = np.zeros((Hn, 1))
        self.b2 = np.zeros((Yn, 1))
        self.vW1 = np.zeros_like(self.W1)
        self.vW2 = np.zeros_like(self.W2)

        costXs, costArr = [], []
        for itt in range(epochs):
            self.propegate(self.data)  # bound methods: no explicit self argument
            cost = self.logLoss()
            self.backProp(moment)
            if itt % 25 == 0:
                print('cost:', cost)
                costXs.append(itt)
                costArr.append(cost)

        # Plot the learning curve, then the fit on an evenly spaced grid
        plt.plot(costXs, costArr)
        plt.show()
        self.xVals = np.linspace(-1, 1, 50).reshape(1, -1)
        self.yVals = self.propegate(self.xVals)
        plt.scatter(self.data, self.targets)
        plt.plot(np.squeeze(self.xVals), np.squeeze(self.yVals))
        plt.show()


if __name__ == "__main__":
    partB().learnBitch(0.1, 5000, 0.9)

Manage hundreds of product attributes in a simple way

For the type of products I manage in my eCommerce shop, I would need to create about 300 product attributes to build a complete technical sheet. Is there a way to add a feature to a product page tab, like an "Add item" button, to add only the attributes that affect that single product? It would be very difficult to keep a page with 300 fields legible.
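Whatever the cart platform, the usual data-model answer is to store attributes sparsely (entity-attribute-value style), so each product carries only the attributes that apply to it and the edit form can offer "Add item" instead of 300 fixed fields. A rough Python sketch with made-up names:

# Sparse, EAV-style attribute storage: each product stores only
# the attributes that actually apply to it.
catalog_attributes = {"voltage", "weight_kg", "material"}  # ~300 in practice

products = {}  # product_id -> {attribute_name: value}

def add_attribute(product_id, name, value):
    # Reject names outside the shared attribute catalogue so the
    # technical sheets stay consistent across products.
    if name not in catalog_attributes:
        raise ValueError("unknown attribute: " + name)
    products.setdefault(product_id, {})[name] = value

add_attribute("sku-1", "voltage", "230 V")
add_attribute("sku-1", "weight_kg", 1.2)
print(products["sku-1"])  # only the two attributes added, not all 300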