Fastest solution to use FTP/Aspera/MediaShuttle with a SAN network

At my post-production studio we have a SAN network (StorNext 6). There is also a MediaShuttle and FTP server in a VM on the firewall (I know, I know… not my fault, really xD). The VM shares the SAN over CIFS, so every time we need to send some files we have to upload them to the MediaShuttle or FTP server at 1 Gbps Ethernet speed, which is awful when you try to upload, for example, 200 GB. It takes 5 hours or so, if the VM doesn’t hang in the process.

My ABSOLUTELY TEMPORARY solution to this mess is using a Windows SAN client with a Storage Server for MediaShuttle. This brings the transfer down to a tolerable 45 minutes.

This is only temporary, because something like MediaShuttle/FTP/Aspera needs to live in the DMZ and never be connected directly to the SAN network. Netflix, HBO, and the TPN (Trusted Partner Network) also forbid doing that.

I was thinking about having a server in the DMZ connected to the firewall through a 10 GbE card, and then another 10 GbE card to our SAN. It won’t be as fast as being directly connected to the SAN, but… I cannot think of a better solution for this.

Am I missing something? Thank you all!

Fastest solution to print a string of characters with their occurrences

I have worked out a solution to this problem; however, I am trying to reach a more efficient, single-pass solution without the use of two for loops. The output should read a3b2c4d1 for the code below.

That is, I want to be able to describe which approach is “greedy” and the trade-offs of each.

Here is my current solution:

let countLetters = (str) => {
  let arr = str.split(''),
    map = {},
    ret = '';

  for (var i = 0; i < arr.length; i++) {
    map[arr[i]] = str.match(new RegExp(arr[i], 'g')).length;
  }

  for (let i in map) {
    ret += `${i + map[i]}`;
  }

  return ret;
};

console.log(countLetters('aaabbccccd'));

Can someone explain the time complexity of the current solution, and possibly how to think about reaching a better time complexity?
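For reference: the version above scans the whole string with a regex for every character, which makes it O(n²) overall; a single pass with a hash map is O(n). A minimal sketch in Python (the same idea carries over to a JS object or Map):

```python
def count_letters(s):
    # Single pass: O(n). Dicts preserve insertion order (Python 3.7+),
    # so the output lists characters in first-seen order.
    counts = {}
    for ch in s:
        counts[ch] = counts.get(ch, 0) + 1
    # Join each character with its count.
    return ''.join(f'{ch}{n}' for ch, n in counts.items())

print(count_letters('aaabbccccd'))  # a3b2c4d1
```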

Find a subsequence with fastest time for specific distance

I have two arrays.

seconds = [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, ..., 2680]

totalMeters = [0, 3.2, 7.1, 10.6, 14.1, 17.9, 21.5, 24.2, 27.8, 32.5, 36.9, ..., 5000]

The totalMeters array holds the total meters I have run at each point in time; the sampling interval is 2 seconds.

I would like to find where in the array is the fastest 100 meters I have run, as well as its time.
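One way to attack this (a sketch, assuming totalMeters is non-decreasing and aligned with seconds) is a two-pointer sweep: for each start index, advance an end pointer until at least 100 meters have been covered, and keep the window with the smallest elapsed time. Since the end pointer only ever moves forward, the whole scan is O(n):

```python
def fastest_window(seconds, total_meters, distance=100.0):
    """Return (start_idx, end_idx, elapsed_seconds) for the fastest
    stretch covering at least `distance` meters, or None if no such
    stretch exists."""
    best = None
    end = 0
    for start in range(len(total_meters)):
        if end < start:
            end = start
        # Advance `end` until the window covers the target distance.
        while (end < len(total_meters)
               and total_meters[end] - total_meters[start] < distance):
            end += 1
        if end == len(total_meters):
            break  # no window starting here reaches the distance
        elapsed = seconds[end] - seconds[start]
        if best is None or elapsed < best[2]:
            best = (start, end, elapsed)
    return best
```

Note that the result is quantized to the 2-second samples; interpolating within the boundary samples would give a slightly more precise split.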

Give you the worldwide fastest 12,000+ proxy lists for $1

I will give you a high-quality list of 12,000+ proxies. They are fast, all from the USA and Europe, and you will get them super fast, within 24 hours… HTTP and SOCKS proxies from the proxy list we maintain. Checked with Scrapebox v2.0.0.93. Location: USA. Timeout: 60s, Threads: 60. You can use these proxies for your web traffic bot, views bot, plays bot, and the like. Thank you, order now!

by: chanux7186
Created: —
Category: Servers
Viewed: 216


Fastest way to find dataframe indexes of column elements that exist as lists

I asked this question here: https://stackoverflow.com/q/55640147/5202255 and was told to post on this forum. I would like to know whether my solution can be improved or if there is another approach to the problem. Any help is really appreciated!

I have a pandas dataframe in which the column values exist as lists. Each list has several elements and one element can exist in several rows. An example dataframe is:

X = pd.DataFrame([(1,['a','b','c']),(2,['a','b']),(3,['c','d'])], columns=['A','B'])

X =
   A          B
0  1  [a, b, c]
1  2     [a, b]
2  3     [c, d]

I want to find all the rows, i.e. dataframe indexes, corresponding to elements in the lists, and create a dictionary out of it. Disregard column A here, as column B is the one of interest! So element ‘a’ occurs in index 0,1, which gives {‘a’:[0,1]}. The solution for this example dataframe is:

Y = {'a':[0,1],'b':[0,1],'c':[0,2],'d':[2]} 

I have written code that works fine and gives the correct result. My problem is more to do with the speed of computation: my actual dataframe has about 350,000 rows, and the lists in column ‘B’ can contain up to 1,000 elements, but at present the code has been running for several hours! I was wondering whether my solution is very inefficient. Any help with a faster, more efficient approach would be really appreciated! Here is my solution code:

import itertools
import pandas as pd

X = pd.DataFrame([(1,['a','b','c']),(2,['a','b']),(3,['c','d'])], columns=['A','B'])

B_dict = []
for idx, val in X.iterrows():
    B = val['B']
    B_dict.append(dict(zip(B, [[idx]]*len(B))))
    B_dict = [{k: list(itertools.chain.from_iterable(
                  list(filter(None.__ne__, [d.get(k) for d in B_dict]))))
               for k in set().union(*B_dict)}]

print('Result:', B_dict[0])

Output

Result: {'d': [2], 'c': [0, 2], 'b': [0, 1], 'a': [0, 1]} 

The code for the final line in the for loop was borrowed from https://stackoverflow.com/questions/45649141/combine-values-of-same-keys-in-a-list-of-dicts and https://stackoverflow.com/questions/16096754/remove-none-value-from-a-list-without-removing-the-0-value
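As a point of comparison (a sketch, under the same example data): the dict merge inside the loop re-scans all previously collected dicts on every iteration, which makes the whole thing roughly quadratic in the number of rows. A single pass that appends each row index under its list elements is linear in the total number of elements:

```python
import pandas as pd

X = pd.DataFrame([(1, ['a', 'b', 'c']), (2, ['a', 'b']), (3, ['c', 'd'])],
                 columns=['A', 'B'])

# One pass over column B: append the row index under each element.
Y = {}
for idx, lst in X['B'].items():
    for el in lst:
        Y.setdefault(el, []).append(idx)

print(Y)  # {'a': [0, 1], 'b': [0, 1], 'c': [0, 2], 'd': [2]}
```

`X['B'].explode()` followed by a groupby on the exploded values is another linear option (pandas ≥ 0.25).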

What is the fastest way to research the Bitcoin blockchain?

I would like to do some research on the Bitcoin blockchain. Because I would like to do massive amounts of processing and lookups, I need a fast way to search the blockchain.

Http requests to insight.io just won’t cut it…

I know of ABE, but it seems no longer maintained, and I don’t know whether it is up to par with the current implementation of the blockchain.

The environment I’m programming in is python.

Any ideas?

Fastest algorithm to decide whether a (always halting) TM accepts a general string

Given a TM $M$ that halts on all inputs, and a general string $w$, consider the most trivial algorithm to decide whether $M$ accepts $w$:

Simply simulate $M$ on $w$ and answer what $M$ answers.

The question here is, can this be proven to be the fastest algorithm to do the job?

(I mean, it’s quite clear there could not be a faster one. Or could it?)

C#: Fastest way to get values from a string

I have a C# app that receives the following commands via TCP sockets.

{ key = "foo", value = 1.6557, } 

I’m currently using this method to get the key/value pairs and store them in a class’s auto-properties.

private Regex _keyRegex = new Regex("\"(.)*\"");
private Regex _valueRegex = new Regex(@"\d*\.{1}\d*");

private MyClass CrappyFunction(string nomnom)
{
  // Gets a match for the key
  var key = _keyRegex.Match(nomnom);
  // Gets a match for the value
  var value = _valueRegex.Match(nomnom);
  // Tests if it got matches for both. If not, returns null.
  if (!key.Success || !value.Success) return null;
  // Found both values, so it creates a new MyClass and returns it.
  // Also removes the " chars from the key.
  return new MyClass(
      key.ToString().Replace("\"", string.Empty),
      value.ToString());
}

Even though it works, I have a really bad feeling looking at this particular piece of code. It’s ugly in the sense that I’m using two regex objects to achieve my goal. Any suggestions will be appreciated.
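One common alternative is a single regex with named groups that captures both fields in one match. A sketch in Python for brevity (.NET’s Regex class supports the same named-group idea with `(?<name>…)` syntax); the pattern assumes the message always has the `key = "…", value = …` shape shown above:

```python
import re

# One combined pattern with named groups replaces the two separate
# regexes, and anchors the key and value to their labels so a stray
# quoted string or number elsewhere in the message cannot match.
PAIR = re.compile(
    r'key\s*=\s*"(?P<key>[^"]*)"\s*,\s*value\s*=\s*(?P<value>\d+\.\d+)')

m = PAIR.search('{ key = "foo", value = 1.6557, }')
if m:
    print(m.group('key'), m.group('value'))  # foo 1.6557
```

Using `[^"]*` for the key also avoids the greedy `"(.)*"` span matching across two quoted strings in one message.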