The number of threads keeps decreasing. Help me.

My VPS is fine. CPU usage stays within 10% and RAM usage is about 200 MB.

I bought 50 private proxies from Solidseo and I am using them.

At first, with threads set to 100, the count goes back and forth between 90 and 100.
But after 9 hours, it goes back and forth between 30 and 40.

I’m using catchall emails, three of them.

Also, I am using verified site lists. I do not use scraping.

Newbie questions – only 17 threads running of 200 available

Hi!

I’m testing my first GSA SER project, and everything seems to work fine except for one major problem.
I set a maximum of 200 threads, but GSA runs far fewer (for example, 10–50 threads at a time with ~50–75 LPM), sometimes up to 100+ threads with 130+ LPM. System resources are almost entirely free; please find the attached screenshot. I use a premium link list. Speedtest shows about 100–150 Mbps. XEvil runs on another machine and has plenty of free resources.
Why can’t GSA run the maximum number of threads?

Hope Sven or somebody else can help me with it.
Best regards.

Cores, threads and sockets: what do the calculation $T = tcs$ and the number in the Windows Task Manager Performance tab mean?

Well, suppose we have a CPU system such as:

Thread(s) per core $\equiv t$: 4

Core(s) per socket $\equiv c$: 4

Socket(s) $\equiv s$: 1

Then we perform the simple calculation $4 \cdot 4 \cdot 1 = 16$.

Therefore, in general we have:

$$ T = t \cdot c \cdot s \tag{1} $$

My understanding is that equation $(1)$ gives the total number of threads your system can handle simultaneously.

On the other hand, consider the following figure from Windows Task Manager:


In the red box we clearly see the number of "Threads". So I would like to know:

What is the difference between the number given by formula $(1)$ and the number shown by Windows Task Manager?
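The two numbers measure different things, and the difference can be demonstrated directly. In this sketch (the 4·4·1 values are the example figures from the question; `os.cpu_count()` reports your own machine's value), equation (1) counts hardware logical processors, while Task Manager's "Threads" counts software threads created by all running processes, which can be far larger because the OS time-slices them across the hardware:

```python
import os
import threading

# Hardware side of equation (1): logical processors =
# threads/core x cores/socket x sockets (example values from the question).
t, c, s = 4, 4, 1
T = t * c * s
print("logical processors per equation (1):", T)        # 16 for this example
print("logical processors reported by the OS:", os.cpu_count())

# Software side: a single process can create many more threads than T.
# Task Manager's "Threads" column sums these across every process.
workers = [threading.Thread(target=lambda: None) for _ in range(100)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print("threads created by this one process:", len(workers))  # 100
```

So formula (1) is a ceiling on how many threads can *execute at the same instant*, while Task Manager counts how many threads *exist* and are waiting for their turn on those logical processors.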

200 Lifetime Threads ONE-TIME Payment for 99.95 USD Only

200 Lifetime Threads, ONE-TIME Payment, UNLIMITED Solves!

CAPTCHAs.IO has decided to give its users and developers an offer that will greatly help with their CAPTCHA automation projects and applications, at a lower cost.

End-users and developers can now enjoy UNLIMITED solves with 200 threads for a ONE-TIME payment of $99.95 USD (fixed price).

To order please email admin@captchas.io

Thanks!

what determines the amount of threads you can use

What determines the number of threads you can use? (In order to get more done in the same amount of time.)

The reason I’m asking is that I have seen some posts say they would need a minimum of 16 GB of RAM,
but also that GSA SER is 32-bit and that anything over 3 GB of RAM would not make any difference.

I’ve taken account of various things to try to make it more efficient, and I’m keeping an eye on
memory and CPU usage at the bottom of the GSA SER interface.

I have 4 GB of RAM, so would getting more RAM make any difference?
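The "over 3 GB makes no difference" claim comes from pointer width, and the arithmetic is easy to check. A short sketch (the 2 GiB default and the large-address-aware figure are general Windows behaviour for 32-bit processes, not something specific to GSA SER):

```python
# A 32-bit process uses 32-bit pointers, so it can address at most
# 2**32 bytes regardless of how much RAM is installed.
address_bits = 32
max_addressable = 2 ** address_bits            # bytes
print(max_addressable // 2**30, "GiB ceiling")  # 4 GiB hard ceiling

# On Windows, a 32-bit process normally gets 2 GiB of user address
# space, or up to 4 GiB if the binary is built large-address-aware.
# RAM beyond that is only usable by other processes and the OS cache.
```

So extra RAM beyond roughly 4 GB cannot be used by the 32-bit process itself, though it can still help the rest of the system (OS file cache, other tools running alongside).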

1000 Threads w/ Unlimited Daily Solves for Just $250 a Month!

I’d like to offer you another special offer. We now have the MEGA #2 special package. This package is very different from those currently offered on our site.
This package is a big budget saver. You can enjoy 1000 threads for just $250 USD a month, and it comes with UNLIMITED daily solves too…
Features:
1. 1000 Threads
2. UNLIMITED Daily Solves
3. 20 Elite Proxies
To subscribe just go to https://captchas.io/?p=19
Thank you and God bless!

Any possibility of calculating / running the threads on the graphics card?

Would it be possible to employ the graphics card instead of the CPU for the thread work, as is done for video rendering or anything else that requires high processing power?
I just bought this one for video rendering: PNY NVIDIA Quadro P4000, 8 GB GDDR5.

Thanks for your answer; perhaps this could be added to the wishlist.
Regards,
Michal

Image segmentation of a high resolution 2D binary image into clusters, threads and points

I use Python 3 to find the proportions of the image features mentioned below. The originals are 8-bit greyscale TIFF images with a resolution of 2048×2168 pixels. I have binarised them into an image composed of the matrix (white) and component particles (black). The particles have random morphologies. I would like to broadly categorise them as:

  • Points, which range from 1×1 to 3×3 blocks of independent square pixels completely surrounded by the matrix
  • Threads, which are linear or diagonal sets of contiguous pixels at least 3 pixels long and at most 3 pixels wide
  • Clusters, which are randomly shaped closed morphologies with more than 10 pixels in overall area (or any arbitrarily high value)
  • Others, which are whatever is not included in the three categories above

Here is an example (400×400 pixel) portion of the image.

First of all, I am confused about the order in which to proceed. I could scan the whole image pixel by pixel and extract the points in a first pass. A second pass could look for threads, and a final pass could look for clusters using boundary tracing.

As you can see, the component is spread very unevenly. To the human eye, threads appear as blocks with a very low aspect ratio (AR), points as noise, and clusters as blocks of distinguishably larger area. The accuracy of this classification scheme therefore need not be high. The objective, however, is minimal user interaction (fully automatic). Another thing to note is that holes within clusters or threads (ones that do not break them) can be ignored. The ultimate aim is to get the area percentage of each type of object, so the detection method can be chosen with that in mind.
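The label-then-classify idea above can be sketched in a few lines. This is a minimal illustration, assuming the area and bounding-box thresholds stated in the question; it uses a pure-Python BFS for connected-component labelling on a tiny synthetic array so it is self-contained, but for 2048×2168 images you would swap in `scipy.ndimage.label` or OpenCV's `connectedComponentsWithStats`:

```python
import numpy as np
from collections import deque

def classify_components(binary):
    """Label 8-connected components of a binary image (1 = particle) and
    classify each as point / thread / cluster / other using the
    bounding-box and area rules from the question."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    results, next_label = [], 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] and labels[y, x] == 0:
                next_label += 1
                labels[y, x] = next_label
                q, pixels = deque([(y, x)]), []
                while q:                      # BFS flood fill, 8-neighbourhood
                    cy, cx = q.popleft()
                    pixels.append((cy, cx))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and binary[ny, nx] and labels[ny, nx] == 0):
                                labels[ny, nx] = next_label
                                q.append((ny, nx))
                ys, xs = [p[0] for p in pixels], [p[1] for p in pixels]
                bh = max(ys) - min(ys) + 1    # bounding-box height
                bw = max(xs) - min(xs) + 1    # bounding-box width
                area = len(pixels)
                if bh <= 3 and bw <= 3:
                    kind = "point"            # fits in a 3x3 box
                elif min(bh, bw) <= 3 and max(bh, bw) >= 3:
                    kind = "thread"           # long, at most 3 px wide
                elif area > 10:
                    kind = "cluster"
                else:
                    kind = "other"
                results.append((kind, area))
    return results

# Tiny synthetic example: one point, one horizontal thread, one blob.
img = np.zeros((10, 12), dtype=np.uint8)
img[1, 1] = 1          # 1x1 point
img[4, 2:9] = 1        # 7-px-long, 1-px-wide thread
img[6:10, 6:10] = 1    # 4x4 blob, area 16 -> cluster
print(classify_components(img))  # [('point', 1), ('thread', 7), ('cluster', 16)]
```

Because each component carries its pixel count, the area percentage per category falls out directly by summing the areas and dividing by the image size.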

Some specific questions:

  1. Suppose I identify a large cluster of pixels within the image. How can I split it into a surface (high AR) and a thread (low AR)? (Something similar to the watershed algorithm.)
  2. Should I go for an OpenCV contouring method such as border following or border tracing (later ignoring the holes), or something more suitable?
  3. I was curious to know whether there have been approaches that used random sampling of pixels instead of a pixel-by-pixel scan across the entire image.

I would like to know the steps a computer scientist would follow in such a scenario. I am a beginner in image processing and any reference material would be appreciated. For anybody interested in metallography, the images are micrographs and what we see are defects. I am trying to separate cracks, porosities and other openings based on pixel density.