Job assignment where each worker handles two non-consecutive jobs

There are $N$ workers and $2N$ jobs, named $J_1$ through $J_{2N}$. A matrix $M$ denotes the subset of jobs that can be handled by each worker: if $M_{i, j}$ is true, then worker $i$ can do job $j$.

Our task is to assign exactly 2 jobs to each worker, such that each job is handled by exactly one worker, respecting $M$. (So far, the problem can be solved with max flow.) Moreover, if a worker $i$ handles job $j$, it can't also handle job $j+1$.
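(For the basic version only, without the non-consecutive restriction, the max-flow formulation mentioned above can equivalently be written as a bipartite matching. The sketch below is purely illustrative, with assumed names and 0-based indices: each worker is split into two copies, and a perfect matching of the $2N$ copies to the $2N$ jobs is exactly a valid basic assignment.)

// Sketch only: the basic assignment (each worker gets exactly 2 jobs, each job
// exactly one worker, respecting M) as bipartite matching. Worker i is split
// into two copies (2*i and 2*i + 1); a perfect matching of the 2N copies to
// the 2N jobs is a valid basic assignment. The non-consecutive and interval
// constraints are NOT handled here. M is an N x 2N boolean matrix, 0-based.
function basicAssignment(M) {
  const n = M.length;                         // number of workers
  const jobs = 2 * n;                         // number of jobs
  const matchJob = new Array(jobs).fill(-1);  // job index -> matched worker copy

  // Kuhn's augmenting-path search starting from worker copy `u`.
  function tryAugment(u, seen) {
    const worker = Math.floor(u / 2);
    for (let j = 0; j < jobs; j++) {
      if (!M[worker][j] || seen[j]) continue;
      seen[j] = true;
      if (matchJob[j] === -1 || tryAugment(matchJob[j], seen)) {
        matchJob[j] = u;
        return true;
      }
    }
    return false;
  }

  for (let u = 0; u < 2 * n; u++) {
    if (!tryAugment(u, new Array(jobs).fill(false))) return null; // no valid assignment
  }

  // Collect the two jobs assigned to each worker.
  const assigned = Array.from({ length: n }, () => []);
  for (let j = 0; j < jobs; j++) assigned[Math.floor(matchJob[j] / 2)].push(j);
  return assigned;
}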

The problem asks:

  1. Does such an assignment exist?
  2. If it does, find a solution achieving $\max_{\text{assignment}} \min_{i} \left| J1_i - J2_i \right|$, where $J1_i$ is the first job assigned to worker $i$ and $J2_i$ is the second. In other words, maximize the minimum, over all workers, of the interval between a worker's two jobs.
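As a small illustration (assuming every entry of $M$ is true): for $N = 2$, the assignment in which worker 1 takes $(J_1, J_3)$ and worker 2 takes $(J_2, J_4)$ is valid, since no worker holds two consecutive jobs, and its objective value is $\min(|1 - 3|, |2 - 4|) = 2$; the alternative $(J_1, J_4)$ and $(J_2, J_3)$ is ruled out because $J_2$ and $J_3$ are consecutive.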

Service worker / caching a whole API in a Flask app?

I’m building a small-scale app based on the Flask microframework. In it, I have a service worker that caches the basic shell of my app (HTML, CSS, and JS). However, I also have dynamic content that is updated when some event occurs (e.g. a click on a button sends a request to an API endpoint, the backend does a bit of filtering, and then it sends the processed data back to the UI).

How can I approach caching this API, and is that possible at all? One way I thought of is to cache the responses request by request until, gradually, I have cached all of the possible responses. However, I’m not sure whether there is another solution to make my app more usable in offline mode.
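For illustration, a minimal "serve from cache, refresh in the background" handler scoped to the API routes could look like the sketch below. The /api/ prefix and the cache name are assumptions for the sketch, not taken from the actual app:

// Hypothetical sketch: cache API GET responses as they are requested,
// serve from cache when offline, and refresh the cache in the background.
const API_CACHE = 'api-cache-v1'; // assumed cache name

self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  if (event.request.method !== 'GET' || !url.pathname.startsWith('/api/')) {
    return; // let the app-shell logic handle everything else
  }

  event.respondWith(
    caches.open(API_CACHE).then(async (cache) => {
      const cached = await cache.match(event.request);
      // Always try the network and update the cache for next time.
      const network = fetch(event.request)
        .then((response) => {
          if (response.ok) cache.put(event.request, response.clone());
          return response;
        })
        // Offline: fall back to whatever is cached, or a network error response.
        .catch(() => cached || Response.error());
      // Serve the cached copy immediately if present, otherwise wait for the network.
      return cached || network;
    })
  );
});

With this pattern, responses that have been requested at least once remain available offline, which matches the "cache request by request" idea above.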

What is the best way to secure web API calls from worker apps running on Windows

I have a particular problem: how to securely call a web API from machines that do automated data collection on documents. The computers that run the code are Windows machines, and the server is a Kubernetes cluster running on Linux. We use Azure AD to sign regular users in to the web app, and I know there is a device code login that could work, but my problem is that the user needs to do the two-factor auth manually when the token expires, and I don’t want this to happen during the night when everyone is sleeping and the document collection workers stop working. The worker machines can’t be on Azure due to the software and hardware required to run them.

I was thinking of storing a certificate in Azure Key Vault, creating JWT tokens with it on the client, and using the same certificate to verify the tokens on the server. I feel there must be a best practice for this, but I don’t know it. I think we could run both the machines and the server inside a VPN, but I would like an extra layer of security for the API calls. If there were a way to use Windows AD for this that would be great, but I am unable to find a recommendation for this scenario.
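To make the certificate idea concrete, here is a minimal sketch assuming a Node.js worker and the jsonwebtoken package purely for illustration (the real workers may be written in something else, and the claim names and file paths are placeholders): the worker signs a short-lived JWT with the private key held in Key Vault, and the API verifies it with the matching public key.

// Hypothetical sketch of the "client-signed JWT" idea using Node.js and the
// jsonwebtoken package. The worker holds the private key (e.g. pulled from
// Azure Key Vault at startup); the API only needs the matching public key.
const jwt = require('jsonwebtoken');
const fs = require('fs');

// Worker side: mint a short-lived token for each batch of API calls.
function mintWorkerToken(privateKeyPem) {
  return jwt.sign(
    { sub: 'document-collector-01', scope: 'documents:write' }, // assumed claims
    privateKeyPem,
    { algorithm: 'RS256', expiresIn: '15m', issuer: 'worker-fleet' }
  );
}

// API side: verify signature, expiry and issuer before serving the request.
function verifyWorkerToken(token, publicKeyPem) {
  return jwt.verify(token, publicKeyPem, {
    algorithms: ['RS256'], // never accept "none" or HMAC here
    issuer: 'worker-fleet',
  });
}

// Example wiring (paths are placeholders):
const privateKey = fs.readFileSync('worker-private.pem');
const publicKey = fs.readFileSync('worker-public.pem');
const token = mintWorkerToken(privateKey);
console.log(verifyWorkerToken(token, publicKey)); // prints decoded claims or throws

For comparison, Azure AD’s client credentials flow with a certificate credential implements essentially this pattern as a managed, non-interactive service, so it may be worth weighing against rolling your own verification.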

How to get a Celery worker running on Ubuntu 18 with systemd

I’m trying to run a Celery worker using systemd. I have followed the official documentation plus some blog guides, but the worker doesn’t start; instead it shows:

Jun 30 07:13:45 ubuntu-s-1vcpu-1gb-sgp1-01 systemd[1]: Started Celery workers.
Jun 30 07:13:49 ubuntu-s-1vcpu-1gb-sgp1-01 bash[10878]: celery multi v3.1.24 (Cipater)
Jun 30 07:13:49 ubuntu-s-1vcpu-1gb-sgp1-01 bash[10878]: > w1@ubuntu-s-1vcpu-1gb-sgp1-01: DOWN

I have the following config in my /etc/systemd/system/celery.service:

[Unit]
Description=Celery workers
After=network.target redis.target

[Service]
Type=forking
User=root
Group=root
# PIDFile=/var/run/celery/single.pid
WorkingDirectory=/home/isppme/isppme_wa/
ExecStart=/bin/bash -c '/home/isppme/bin/celery multi start w1 worker --time-limit=300 -A isppme.taskapp --concurrency=8 --loglevel=DEBUG --logfile=/var/log/celery/w1%$
ExecStop=/bin/bash -c '/home/isppme/bin/celery multi stopwait w1 --pidfile=/var/run/celery/w1.pid'
ExecReload=/bin/sh -c '/home/isppme/bin/celery multi restart w1 -A isppme.taskapp --pidfile=/var/run/celery/w1.pid --logfile=/var/log/celery/w1%I.log --loglevel=DEBUG'

[Install]
WantedBy=multi-user.target

This is the output from the service’s log:

Jun 30 07:13:43 ubuntu-s-1vcpu-1gb-sgp1-01 systemd[1]: Starting Celery workers...
Jun 30 07:13:45 ubuntu-s-1vcpu-1gb-sgp1-01 bash[10809]: celery multi v3.1.24 (Cipater)
Jun 30 07:13:45 ubuntu-s-1vcpu-1gb-sgp1-01 bash[10809]: > Starting nodes...
Jun 30 07:13:45 ubuntu-s-1vcpu-1gb-sgp1-01 bash[10809]:         > w1@ubuntu-s-1vcpu-1gb-sgp1-01: OK
Jun 30 07:13:45 ubuntu-s-1vcpu-1gb-sgp1-01 bash[10809]:         > worker@ubuntu-s-1vcpu-1gb-sgp1-01: OK
Jun 30 07:13:45 ubuntu-s-1vcpu-1gb-sgp1-01 systemd[1]: Started Celery workers.
Jun 30 07:13:49 ubuntu-s-1vcpu-1gb-sgp1-01 bash[10878]: celery multi v3.1.24 (Cipater)
Jun 30 07:13:49 ubuntu-s-1vcpu-1gb-sgp1-01 bash[10878]: > w1@ubuntu-s-1vcpu-1gb-sgp1-01: DOWN

Browser crashes while using the OffscreenCanvas.convertToBlob method on a large file in a web worker

I’m trying to show a TIFF file in the browser. I read the TIFF successfully using UTIF.js, and I use a web worker to read the TIFF file. Some files are very large, e.g. 10,000 px in height and 13,000 px in width, and I need to show them in the browser. The browser crashes while executing the OffscreenCanvas.convertToBlob method, which returns a Promise.

This is where I use the web worker and OffscreenCanvas. I have tried the convertToBlob method with different parameters, such as quality 0.6 and lower, but the browser still crashes.

UTIF.decodeImage(ubuf, utif[k]);
var ubuf1 = UTIF.toRGBA8(utif[k]);
var a = new Uint8ClampedArray(ubuf1);
var imgData = new ImageData(a, utif[k].width, utif[k].height);
var canvas1 = new OffscreenCanvas(utif[k].width, utif[k].height);
var ctx = canvas1.getContext('2d');
ctx.putImageData(imgData, 0, 0);
var that = self;
if (utif[k].width > 2048) {
  canvas1.convertToBlob({ type: "image/jpeg", quality: 0.3 }).then(function (blob) {
    that.postMessage(blob);
  });
} else {
  canvas1.convertToBlob({ type: "image/jpeg", quality: 1 }).then(function (blob) {
    that.postMessage(blob);
  });
}
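As an aside, not part of the code above: one way to cut peak memory before encoding is to downscale the decoded pixels first. A rough sketch, reusing imgData and utif[k] from the snippet above, assuming createImageBitmap with resize options is available in the worker (the 4096 px cap is an arbitrary placeholder):

// Sketch only: shrink very large frames before convertToBlob to reduce peak memory.
var MAX_DIM = 4096; // arbitrary placeholder cap
var scale = Math.min(1, MAX_DIM / Math.max(utif[k].width, utif[k].height));
var outW = Math.round(utif[k].width * scale);
var outH = Math.round(utif[k].height * scale);

createImageBitmap(imgData, { resizeWidth: outW, resizeHeight: outH, resizeQuality: 'high' })
  .then(function (bitmap) {
    var small = new OffscreenCanvas(outW, outH);
    small.getContext('2d').drawImage(bitmap, 0, 0);
    bitmap.close(); // free the bitmap memory as soon as it has been drawn
    return small.convertToBlob({ type: "image/jpeg", quality: 0.7 });
  })
  .then(function (blob) {
    self.postMessage(blob);
  });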

I expect the browser not to crash in the large-file scenario.

Thanks a lot in advance.

Not able to join worker nodes using kubectl with updated aws-auth configmap

I’m setting up an AWS EKS cluster using Terraform from an EC2 instance. Basically the setup includes an EC2 launch configuration and autoscaling for the worker nodes. After creating the cluster, I am able to configure kubectl with aws-iam-authenticator. When I ran

kubectl get nodes  

It returned

No resources found

as the worker nodes had not joined. So I tried updating the aws-auth-cm.yaml file

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

with the IAM role ARN of the worker node, and ran

kubectl apply -f aws-auth-cm.yaml 

It returned

ConfigMap/aws-auth created

Then I realised that the role ARN configured in aws-auth-cm.yaml was the wrong one, so I updated the same file with the correct worker node role ARN.

But this time I got a 403 when I ran kubectl apply -f aws-auth-cm.yaml again.

It returned

Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=configmaps", GroupVersionKind: "/v1, Kind=ConfigMap"
Name: "aws-auth", Namespace: "kube-system"
Object: &{map["apiVersion":"v1" "data":map["mapRoles":"- rolearn: arn:aws:iam::XXXXXXXXX:role/worker-node-role\n username: system:node:{{EC2PrivateDNSName}}\n groups:\n - system:bootstrappers\n - system:nodes\n"] "kind":"ConfigMap" "metadata":map["name":"aws-auth" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]]]}
from server for: "/home/username/aws-auth-cm.yaml": configmaps "aws-auth" is forbidden: User "system:node:ip-XXX-XX-XX-XX.ec2.internal" cannot get resource "configmaps" in API group "" in the namespace "kube-system"

I’m not able to reconfigure the ConfigMap after this step.

I’m getting 403 for commands like

kubectl apply
kubectl delete
kubectl edit

for configmaps. Any help?

Service Worker Uncaught (in promise) DOMException

My service worker fails with the error Uncaught (in promise) DOMException. My SW runs perfectly and shows a prompt to install the PWA on mobile, but it also produces this error. My URL is Milyin. It takes 2-3 visits for this error to start appearing in the console log.

Just 3 visits caused it to produce over 260 of these errors. I am not able to debug it. I assume this is because the SW has consumed all the storage space on my device, because if I reload the page with F5 it shows the error, but a hard refresh with CTRL + SHIFT + R produces no error.
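To sanity-check the storage theory, the standard Storage and Cache APIs can report how much of the origin's quota is in use; the snippet below is generic and not specific to this site (run it in the DevTools console):

// Rough check of the storage-quota theory: how much has this origin used,
// and which caches exist?
navigator.storage.estimate().then(({ usage, quota }) => {
  console.log(`Using ${(usage / 1024 / 1024).toFixed(1)} MB of ${(quota / 1024 / 1024).toFixed(1)} MB`);
});
caches.keys().then((names) => console.log('Caches:', names));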

// This is the service worker with the Advanced caching

const CACHE = "Milyin";
const precacheFiles = [
  '/',
  '/wp-content/themes/hestia/assets/js/parallax.min.js?ver=1.0.2',
  'https://fonts.googleapis.com/css?family=Poppins%3A300%2C400%2C500%2C700'
];

// TODO: replace the following with the correct offline fallback page i.e.: const offlineFallbackPage = "/";
const offlineFallbackPage = '/';

const networkFirstPaths = [
  /* Add an array of regex of paths that should go network first */
  // Example: /\/api\/.*/
];

const avoidCachingPaths = [
  '/wp-content/plugins/ultimate-member/',
  '/wp-admin/',
  '/chat/'
];

function pathComparer(requestUrl, pathRegEx) {
  return requestUrl.match(new RegExp(pathRegEx));
}

function comparePaths(requestUrl, pathsArray) {
  if (requestUrl) {
    for (let index = 0; index < pathsArray.length; index++) {
      const pathRegEx = pathsArray[index];
      if (pathComparer(requestUrl, pathRegEx)) {
        return true;
      }
    }
  }

  return false;
}

self.addEventListener("install", function (event) {
  console.log("[PWA Builder] Install Event processing");

  console.log("[PWA Builder] Skip waiting on install");
  self.skipWaiting();

  event.waitUntil(
    caches.open(CACHE).then(function (cache) {
      console.log("[PWA Builder] Caching pages during install");

      return cache.addAll(precacheFiles).then(function () {
        if (offlineFallbackPage === "offline.html") {
          return cache.add(new Response("TODO: Update the value of the offlineFallbackPage constant in the serviceworker."));
        }

        return cache.add(offlineFallbackPage);
      });
    })
  );
});

// Allow the SW to take control of the current page
self.addEventListener("activate", function (event) {
  console.log("[PWA Builder] Claiming clients for current page");
  event.waitUntil(self.clients.claim());
});

// If any fetch fails, it will look for the request in the cache and serve it from there first
self.addEventListener("fetch", function (event) {
  if (event.request.method !== "GET") return;

  if (comparePaths(event.request.url, networkFirstPaths)) {
    networkFirstFetch(event);
  } else {
    cacheFirstFetch(event);
  }
});

function cacheFirstFetch(event) {
  event.respondWith(
    fromCache(event.request).then(
      function (response) {
        // The response was found in the cache, so we respond with it and update the entry.
        // This is where we call the server to get the newest version of the
        // file to use the next time we show the view.
        event.waitUntil(
          fetch(event.request).then(function (response) {
            return updateCache(event.request, response);
          })
        );

        return response;
      },
      function () {
        // The response was not found in the cache, so we look for it on the server.
        return fetch(event.request)
          .then(function (response) {
            // If the request succeeded, add or update it in the cache
            event.waitUntil(updateCache(event.request, response.clone()));

            return response;
          })
          .catch(function (error) {
            // The following validates that the request was for a navigation to a new document
            if (event.request.destination !== "document" || event.request.mode !== "navigate") {
              return;
            }

            console.log("[PWA Builder] Network request failed and no cache." + error);
            // Use the precached offline page as fallback
            return caches.open(CACHE).then(function (cache) {
              cache.match(offlineFallbackPage);
            });
          });
      }
    )
  );
}

function networkFirstFetch(event) {
  event.respondWith(
    fetch(event.request)
      .then(function (response) {
        // If the request succeeded, add or update it in the cache
        event.waitUntil(updateCache(event.request, response.clone()));
        return response;
      })
      .catch(function (error) {
        console.log("[PWA Builder] Network request failed. Serving content from cache: " + error);
        return fromCache(event.request);
      })
  );
}

function fromCache(request) {
  // Check to see if it is in the cache; if so, return the response.
  // If not in the cache, reject so the caller can fall back.
  return caches.open(CACHE).then(function (cache) {
    return cache.match(request).then(function (matching) {
      if (!matching || matching.status === 404) {
        return Promise.reject("no-match");
      }

      return matching;
    });
  });
}

function updateCache(request, response) {
  if (!comparePaths(request.url, avoidCachingPaths)) {
    return caches.open(CACHE).then(function (cache) {
      return cache.put(request, response);
    });
  }

  return Promise.resolve();
}

addEventListener('fetch', event => {
  event.respondWith(async function () {
    // Respond from the cache if we can
    const cachedResponse = await caches.match(event.request);
    if (cachedResponse) return cachedResponse;

    // Else, use the preloaded response, if it's there
    const response = await event.preloadResponse;
    if (response) return response;

    // Else try the network.
    return fetch(event.request);
  }());
});

The service worker is registered through inline JS:

<script type="text/javascript">
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/SW.js')
  .then(function(registration) {
    registration.addEventListener('updatefound', function() {
      // If updatefound is fired, it means that there's
      // a new service worker being installed.
      var installingWorker = registration.installing;
      console.log('A new service worker is being installed:',
        installingWorker);

      // You can listen for changes to the installing service worker's
      // state via installingWorker.onstatechange
    });
  })
  .catch(function(error) {
    console.log('Service worker registration failed:', error);
  });
} else {
  console.log('Service workers are not supported.');
}
</script>

You should definitely see the error on repeated visits to pages of my site, though your mileage may vary based on how much storage your browser allows.