Selling Limited Quantity of Dedicated Servers $99+ | 32 Threads, 48 GB RAM+ | for Bots

Selling Limited Quantity of Dedicated Servers $99+ | 16 Cores / 32 Threads @ 2.6 GHz | 48 GB RAM+ | SSD | Great for SEO & Bots | Convert into multiple VPSs

[Image: 1fqPYm5.jpg]

Selling cheap dedicated servers from $99 per month!
High bandwidth: 20 TB included monthly. (That's more than enough for 99% of users unless you run a big tube site.)
You'll get root access. This is an unmanaged server; you'll be the only person with access to its OS.
USA datacenter with a 1 Gbps network.
FREE IP-KVM included (so you can fully control your server and OS).

The setup fee is not lost: you get one more month of server use after you stop paying. It exists so we don't have to take immediate action if you don't plan on paying for another month.
Do not use them for anything illegal or offensive.
An active subscription is required to maintain access to your server. Access is removed one month (the setup month) plus 48 hours after your subscription is cancelled.
Bulk discounts are available. If you start with one server and later buy a few more, contact us and we'll adjust your invoice to the new bulk pricing.

No vouch copies are available. We've been selling dedicated servers to our private customers for a while now and have just decided to open the service to the public.
If the server isn't suitable for whatever you need, request a full refund within 3 days of ordering!
Try it risk free with our 3-day money-back guarantee!

Pricing! Monthly payments!
Intel E5-2670 2.60 GHz, 16 cores / 32 threads, 48 GB DDR3, 480 GB SSD = $99
Intel E5-2670 2.60 GHz, 16 cores / 32 threads, 64 GB DDR3, 480 GB SSD = $112
Intel E5-2680 2.70 GHz, 16 cores / 32 threads, 96 GB DDR3, 480 GB SSD = $140
Intel E5-2680 2.70 GHz, 16 cores / 32 threads, 128 GB DDR3, 480 GB SSD = $168
2×1 TB HDDs are provided free of charge on request with every variant (for RAID, storage, backups, etc.)

You can split the server into multiple VPSs! We can help you with that.
Example: you could easily convert a server into 10 VPSs,
giving each one 8 cores and 4.4 GB RAM (example with the 48 GB variant). Put the most heavily used ones on the SSD and the rest on the HDDs.
Pro tip: combine that with our Shared Proxy Pack (you'll get to use one pack, which is limited to 1 IP, across all the VPSs).

No other VPS provider can beat the value for the price on VPSs like these.

Payments via :
PayPal
Bitcoin
Altcoins
WebMoney 
Perfect Money
Skrill
Payoneer
Neteller
Paysera
Monese
TransferWise
IBAN
US bank account
CashApp
Do you want to pay via another method? Let me know; maybe I can add it.

Limited quantity available! Get yours before the stock runs out!

Contact me via PM, on site at https://www.bywex.com/contact/, or via Skype at

[Image: live%3Asupport_42971.png]

What should be the minimum value when the two threads are executed concurrently?

int count = 0;

void *thfunc()
{
    int ctr = 0;
    for (ctr = 0; ctr < 100; ctr++)
        count++;
}

If thfunc() is executed by two threads concurrently on a uniprocessor system, what will be the minimum value of count when both threads complete their execution? Assume that count++ is performed using three instructions: (1) read the value of count from memory into a CPU register R, (2) increment R, (3) store the value back to memory.

(a) 200

(b) 2

(c) 100

(d) None of the above

In my opinion, the answer should be 100. I can't find any execution sequence in which the value of count can drop below 100, but my manual says that the answer is 2.

Can anyone please tell me what I am doing wrong? Also, please explain how the answer 2 is obtained?

Thanks in advance!
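For context, a deterministic Python sketch (not the original C) of one possible schedule, with count++ expanded into its read/increment/store steps so that the lost updates are visible:

```python
# Two threads, A and B, each perform 100 increments of a shared counter.
# count++ is simulated as three explicit steps: read, increment, write.
count = 0

# A reads 0 into its register, then is preempted.
a_reg = count                  # A: read (R = 0)

# B runs its first 99 iterations to completion.
for _ in range(99):
    b_reg = count
    b_reg += 1
    count = b_reg              # count == 99

# A resumes: increments its stale register and stores 1,
# wiping out all 99 of B's increments.
a_reg += 1
count = a_reg                  # count == 1

# B starts its 100th iteration: reads 1, then is preempted.
b_reg = count                  # B: read (R = 1)

# A runs its remaining 99 iterations to completion.
for _ in range(99):
    a_reg = count
    a_reg += 1
    count = a_reg              # count == 100

# B resumes: stores its stale 2, wiping out A's 99 increments.
b_reg += 1
count = b_reg                  # count == 2

print(count)
```

Each thread still executes exactly 100 loop iterations; only two stores survive, which is where the minimum of 2 comes from.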

Accessing a file in multiple python processes or threads

I have one Python script that generates data and another that trains a neural network on this data with TensorFlow. Both need an instance of the neural network.

Since I haven't set the "allow growth" flag, each process takes the full GPU memory. Therefore I simply give each process its own GPU. (Maybe not a good solution for people with only one GPU… yet another unsolved problem.)

The actual problem is as follows: both instances need access to the network's weights file. I recently had a bunch of crashes because both processes tried to access the weights at the same time. I tried to come up with a solution like semaphores in C, but today I found this post on Stack Exchange.

The idea of renaming seems quite simple and effective to me. Is this good practice in my case? In the learning process, I'll just create the weights file with my function

save(self, path='weights.h5.tmp')

rename it after saving with

os.rename('weights.h5.tmp', 'weights.h5')

and load them in my data generating process with function

load(self, path='weights.h5') 

?

Will this renaming overwrite the old file? And what happens if the other process is currently loading? I would also appreciate other ideas on how I could multithread my script. I just realized that generating data, learning, generating data, … in a sequential script is not really performant.

How does immutability remove the need for locks when two threads are trying to update the shared state?

Okay so I read through this:

Does immutability entirely eliminate the need for locks in multi-processor programming?

And this was the main takeaway for me:

Now, what does it get you? Immutability gets you one thing: you can read the immutable object freely, without worrying about its state changing underneath you

But that was only regarding reading.

What happens when two threads are trying to generate a new shared state? Let's say they're both reading some immutable number N and want to increment it. They can't mutate it directly, so they both generate two completely new values at the same time, both of which are just N + 1.

How do you reconcile this problem so that the shared state becomes N + 2? Or am I missing something and that’s not how it works?
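The usual reconciliation mechanism is an atomic reference with a compare-and-set (CAS) retry loop: each thread builds a new immutable value and only publishes it if the shared reference still holds the value it read; the loser retries against the winner's result, so both increments land and the state reaches N + 2. A toy Python sketch of the idea (the lock here merely stands in for the hardware CAS instruction that real runtimes use):

```python
import threading

class AtomicRef:
    """Toy compare-and-set reference. Real implementations use a single
    hardware CAS instruction; a lock plays that role in this sketch."""

    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def get(self):
        return self._value

    def compare_and_set(self, expected, new):
        # Swap in `new` only if the current value is still `expected`.
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def increment(ref):
    # Retry loop: build a new immutable value, then try to publish it.
    while True:
        old = ref.get()
        if ref.compare_and_set(old, old + 1):
            return
```

With two threads each running increment, a failed CAS means the other thread won the race; retrying from the fresh value guarantees no update is lost, without any lock around the readers.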

If I need to read lots of files, will it get faster if I break the problem into multiple threads?

I need some help. I had an interview with NetApp recently for a C++ role (they do big data storage systems). I wrote some code to answer an interview question. My response from them was “You failed”. It was very difficult to get feedback, as it usually is after failing an interview. After some very polite begging for feedback I got a little bit. But it still didn’t quite make sense.

Here’s the Problem:

Given a bunch of files in a directory, read them all and count the words. Create a bunch of threads to read the files in parallel. The consensus at NetApp (people who know a lot about storage) is that it should get faster with more threads. I think in most circumstances you are so I/O bound that it will get slower after 1 or 2 threads. I just don't see how it's possible to get faster unless you are under some known special circumstances (like a SAN or maybe RAID arrays). Even in those cases the number of sequential channels to the disk saturates and you are I/O bound again after only a few threads.

I think my code was great (of course); I've been writing C++ for many years, and I think I know some things about what makes good code. It should have passed on style alone. Hehe. As a general rule, performance optimizations are not something you should guess at; they should be tested and measured. I only had limited time to run experiments, but now I'm curious.

The code is in my GitHub account here. https://github.com/MenaceSan/CountTextWords

Anyone have any opinions on this? Shed some light on what they might have been thinking? Any other criticisms of the code?

I base part of my opinion on this:

https://stackoverflow.com/questions/902425/does-multithreading-make-sense-for-io-bound-operations

A small sample of the code:

namespace SSFI {

    class cThreadFileReader : public cThreadBase
    {
        // Read a file on a separate thread.
        // Q: We don't bother reading single files on more than one thread at a time.
        //    Assume files are serial on a single device. A SAN array would make this NOT true.

    protected:
        void FlushWord();
        void ReadFile(const fsx::path& filePath);
        virtual void Run();

    private:
        std::string _word;

    public:
        cThreadFileReader(cApp& app)
            : cThreadBase(app)
        {
        }
    };
}

Application only seeing half the threads available on dual-processor machine

We recently acquired a dual-processor Dell workstation, equipped with two Xeon 6138 Gold CPUs. Each CPU has 20 physical cores (40 logical cores), so there is a total of 40 physical cores or 80 logical cores.

Both Linux Fedora and Windows 10 Professional are installed on this machine using a dual-boot setup. Note that I have not installed this machine myself.

The Windows task manager correctly displays 80 logical cores. These 80 cores are also available on Linux when looking under /proc.

When running PBRT (https://www.pbrt.org/) on Linux, the application correctly uses (and saturates) 80 cores.

On Windows, however, the process only uses 40 of the 80 logical cores. I haven't checked, but I am pretty sure that PBRT uses std::thread::hardware_concurrency(), which is a good way to determine the number of cores. If I force PBRT to use 80 threads via a command-line option, the Windows Task Manager does not show all cores saturated; only half of them are. It seems to me that a single Windows process cannot use all 80 logical cores.

Is this a limitation of Windows? This is surprising.

Am I supposed to install a specific version of Windows to make sure all cores are available to a single process?

Using python threads to download images

I am using python threads to download images. I have a JSON file that contains a URL to the image in the following structure:

images = {'images': [{'url': 'https://contestimg.wish.com/api/webimage/5468f1c0d96b290ff8e5c805-large',
                      'imageId': '1'}, ...]}

There are over 1,000,000 images. Using 20 threads, I only collected ~600,000 of them, along with ~1,000 exceptions due to 500 status codes from the URLs. I have a feeling that my code is incorrect. Can someone please check my code?

import requests
import threading
import json
import pickle

train_data = json.load(open("train.json", "rb"))

images = train_data["images"]

threads = []

def save_image(images_part):
    errors = []
    for i in images_part:
        Picture_request = requests.get(i['url'])
        if Picture_request.status_code == 200:
            with open(f"train/{i['imageId']}.jpeg", "wb") as f:
                f.write(Picture_request.content)
        else:
            errors.append((i, Picture_request.status_code))
            print(f"error in {i['imageId']} with {Picture_request.status_code}")
    return errors

for i in range(0, 20):
    start = i * 50727
    end = (i + 1) * 50727
    if i == 19:
        end = None
    t = threading.Thread(target=save_image, args=(images[start:end],))
    threads.append(t)
    t.start()
    print(f"initiating {i}th thread")
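For comparison, a hedged sketch using concurrent.futures (the helper names are hypothetical, and urllib stands in for requests): the snippet above never joins its threads and discards save_image's return value, so the error list is lost if the main thread exits first. An executor waits for completion and hands the results back:

```python
import concurrent.futures
import urllib.request

def fetch_one(img):
    # Hypothetical per-image downloader: returns None on success,
    # (img, reason) on failure, instead of printing and forgetting.
    try:
        with urllib.request.urlopen(img['url'], timeout=30) as resp:
            data = resp.read()
        with open(f"train/{img['imageId']}.jpeg", "wb") as f:
            f.write(data)
        return None
    except Exception as exc:
        return (img, str(exc))

def download_all(images, fetch=fetch_one, workers=20):
    # The executor joins its workers on exit, so every per-image
    # error is collected and returned to the caller.
    errors = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as ex:
        for err in ex.map(fetch, images):
            if err is not None:
                errors.append(err)
    return errors
```

Making `fetch` a parameter also lets you retry transient 500s or stub the network out in tests.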

Question about threads in C#

Good morning. I'm studying C# and have run into the need to make my program pause either until x seconds have passed or until the user triggers an event. I've tried to carry my Java threading knowledge over to C#, but it doesn't quite work correctly. My question is: can the program be paused so that an event fires either when the user performs an action or when x seconds have passed? PS: I'm building a console application in .NET. PS2: Thanks in advance.

Better way to set number of threads used by NumPy


Background

When NumPy is linked against multithreaded implementations of BLAS (like MKL or OpenBLAS), the computationally intensive parts of a program run on multiple cores (sometimes all cores) automatically.

This is bad when:

  • you are sharing resources
  • you know of a better way to parallelize your program.

In these cases it is reasonable to restrict the number of threads used by MKL/OpenBLAS to 1, and parallelize your program manually.
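A coarser, process-wide alternative worth mentioning: the standard thread-count environment variables. They only work if set before NumPy is imported for the first time, because the BLAS libraries read them once at load time:

```python
import os

# Must be set before NumPy's first import: the linked BLAS
# (MKL, OpenBLAS, or an OpenMP runtime) reads these once, at load time.
os.environ['OMP_NUM_THREADS'] = '1'
os.environ['MKL_NUM_THREADS'] = '1'
os.environ['OPENBLAS_NUM_THREADS'] = '1'

# import numpy  # only now, after the variables are in place
```

Unlike a context manager, this cannot be toggled per code block once NumPy is loaded, which is the gap the runtime approach below tries to fill.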

My solution below involves loading the libraries at runtime and calling the corresponding C functions from Python.

Questions

  1. Are there any best/better practices in solving this problem?
  2. What are the pitfalls of my approach?
  3. Please comment on code quality in general.

Example of use

import numpy

# this uses however many threads MKL/OpenBLAS uses
result = numpy.linalg.svd(matrix)

# this uses one thread
with single_threaded(numpy):
    result = numpy.linalg.svd(matrix)

Implementation

  1. Imports

    import subprocess
    import re
    import sys
    import os
    import glob
    import warnings
    import ctypes
  2. Class BLAS, abstracting a BLAS library with methods to get and set the number of threads:

    class BLAS:
        def __init__(self, cdll, kind):
            if kind not in (MKL, OPENBLAS):
                raise ValueError(f'kind must be {MKL} or {OPENBLAS}, got {kind} instead.')

            self.kind = kind
            self.cdll = cdll

            if kind == MKL:
                self.get_n_threads = cdll.MKL_Get_Max_Threads
                self.set_n_threads = cdll.MKL_Set_Num_Threads
            else:
                self.get_n_threads = cdll.openblas_get_num_threads
                self.set_n_threads = cdll.openblas_set_num_threads
  3. Function get_blas, returning a BLAS object given an imported NumPy module.

    def get_blas(numpy_module):
        LDD = 'ldd'
        LDD_PATTERN = r'^\t(?P<lib>.*{}.*) => (?P<path>.*) \(0x.*$'

        NUMPY_PATH = os.path.join(numpy_module.__path__[0], 'core')
        MULTIARRAY_PATH = glob.glob(os.path.join(NUMPY_PATH, 'multiarray*.so'))[0]

        ldd_result = subprocess.run(
            args=[LDD, MULTIARRAY_PATH],
            check=True,
            stdout=subprocess.PIPE,
            universal_newlines=True
        )

        output = ldd_result.stdout

        if MKL in output:
            kind = MKL
        elif OPENBLAS in output:
            kind = OPENBLAS
        else:
            return None

        pattern = LDD_PATTERN.format(kind)
        match = re.search(pattern, output, flags=re.MULTILINE)

        if match:
            lib = ctypes.CDLL(match.groupdict()['path'])
            return BLAS(lib, kind)
        else:
            return None
  4. Context manager single_threaded, that takes an imported NumPy module, sets number of threads to 1 on enter, resets to previous value on exit.

    class single_threaded:
        def __init__(self, numpy_module):
            self.blas = get_blas(numpy_module)

        def __enter__(self):
            if self.blas is not None:
                self.old_n_threads = self.blas.get_n_threads()
                self.blas.set_n_threads(1)
            else:
                warnings.warn(
                    'No MKL/OpenBLAS found, assuming NumPy is single-threaded.'
                )

        def __exit__(self, *args):
            if self.blas is not None:
                self.blas.set_n_threads(self.old_n_threads)
                if self.blas.get_n_threads() != self.old_n_threads:
                    message = (
                        f'Failed to reset {self.blas.kind} '
                        f'to {self.old_n_threads} threads (previous value).'
                    )
                    raise RuntimeError(message)