Service worker / caching a whole API in a Flask app?

I’m building a small-scale app based on the Flask microframework. In it, I have a service worker that caches the basic shell of my app (HTML, CSS and JS). However, I have dynamic content that is updated when some event occurs (e.g. a click on a button sends a request to an API endpoint, the backend does a bit of filtering and then sends the processed data back to the UI).

How can I approach caching this API, and is that possible at all? One way I thought of was to cache request by request until, gradually, I have cached all of the possible responses. However, I’m not sure if there is another solution that would make my app more usable in offline mode.
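The request-by-request idea is essentially a network-first strategy with a cache fallback: try the network, store every successful response, and serve the stored copy when offline. A minimal sketch of that logic (`ApiClient` and `fetch_fn` are illustrative names, not Flask or browser APIs):

```python
class ApiClient:
    """Network-first client that fills its cache one request at a time."""

    def __init__(self, fetch_fn):
        self.fetch_fn = fetch_fn   # callable that performs the real request
        self.cache = {}            # url -> last successful response body

    def get(self, url):
        try:
            response = self.fetch_fn(url)
        except ConnectionError:
            # Offline: serve the last cached copy if we have one
            if url in self.cache:
                return self.cache[url]
            raise
        # Success: gradually fill the cache, one endpoint at a time
        self.cache[url] = response
        return response
```

In an actual service worker the same shape is `fetch(event.request).then(cache-and-return).catch(() => caches.match(event.request))` inside the `fetch` handler.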

What is the best way to secure web API calls from worker apps running on Windows?

I have a particular problem: how to securely call a web API from machines that do automated data collection on documents. The computers that run the code are Windows machines, and the server is a Kubernetes cluster running on Linux. We use Azure AD to authenticate regular users on the web app, and I know there is a device code login that could work, but the problem is that the user needs to do the two-factor auth manually when the token expires. I don’t want that to happen during the night, when everyone is sleeping and the document collection workers would stop working. The worker machines can’t be on Azure because of the software and hardware they require. I was thinking of storing a certificate in Azure Key Vault, creating JWT tokens with it on the client, and using the same certificate to verify the tokens on the server. I feel there must be a best practice for this, but I don’t know it. We could also run both the machines and the server inside a VPN, but I would like an extra layer of security for the API calls. If there were a way to use Windows AD for this, that would be great, but I am unable to find a recommendation for this scenario.
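The scheme described (mint a token on the worker, verify it on the server with shared key material) is essentially a self-issued JWT. Below is a minimal sketch of the sign/verify round trip, using an HMAC shared secret for brevity; a production setup along the lines proposed would instead sign with the Key Vault certificate’s private key (RS256) via a proper JWT library, so the secret never needs to be shared:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_token(secret: bytes, claims: dict) -> str:
    """Build a signed HS256 JWT from a claims dict (worker side)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(secret: bytes, token: str) -> dict:
    """Check signature and expiry, return the claims (server side)."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    padded = payload + "=" * (-len(payload) % 4)
    claims = json.loads(base64.urlsafe_b64decode(padded))
    if claims.get("exp", 0) < time.time():
        raise ValueError("expired")
    return claims
```

Because the worker holds the key itself, tokens can be re-minted automatically at any hour, which avoids the nightly MFA problem with device code flow.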

How to get a Celery worker running on Ubuntu 18 with systemd

I’m trying to run a Celery worker using systemd. I have followed the official documentation plus some blog guides, but the worker doesn’t start; instead it shows:

Jun 30 07:13:45 ubuntu-s-1vcpu-1gb-sgp1-01 systemd[1]: Started Celery workers.
Jun 30 07:13:49 ubuntu-s-1vcpu-1gb-sgp1-01 bash[10878]: celery multi v3.1.24 (Cipater)
Jun 30 07:13:49 ubuntu-s-1vcpu-1gb-sgp1-01 bash[10878]: > w1@ubuntu-s-1vcpu-1gb-sgp1-01: DOWN

I have the following config in my /etc/systemd/system/celery.service:

[Unit]
Description=Celery workers

[Service]
Type=forking
User=root
Group=root
# PIDFile=/var/run/celery/
WorkingDirectory=/home/isppme/isppme_wa/
ExecStart=/bin/bash -c '/home/isppme/bin/celery multi start w1 worker --time-limit=300 -A isppme.taskapp --concurrency=8 --loglevel=DEBUG --logfile=/var/log/celery/w1%I.log'
ExecStop=/bin/bash -c '/home/isppme/bin/celery multi stopwait w1 --pidfile=/var/run/celery/'
ExecReload=/bin/sh -c '/home/isppme/bin/celery multi restart w1 -A isppme.taskapp --pidfile=/var/run/celery/ --logfile=/var/log/celery/w1%I.log --loglevel=DEBUG'

[Install]

This is the output from the service’s log:

Jun 30 07:13:43 ubuntu-s-1vcpu-1gb-sgp1-01 systemd[1]: Starting Celery workers...
Jun 30 07:13:45 ubuntu-s-1vcpu-1gb-sgp1-01 bash[10809]: celery multi v3.1.24 (Cipater)
Jun 30 07:13:45 ubuntu-s-1vcpu-1gb-sgp1-01 bash[10809]: > Starting nodes...
Jun 30 07:13:45 ubuntu-s-1vcpu-1gb-sgp1-01 bash[10809]:         > w1@ubuntu-s-1vcpu-1gb-sgp1-01: OK
Jun 30 07:13:45 ubuntu-s-1vcpu-1gb-sgp1-01 bash[10809]:         > worker@ubuntu-s-1vcpu-1gb-sgp1-01: OK
Jun 30 07:13:45 ubuntu-s-1vcpu-1gb-sgp1-01 systemd[1]: Started Celery workers.
Jun 30 07:13:49 ubuntu-s-1vcpu-1gb-sgp1-01 bash[10878]: celery multi v3.1.24 (Cipater)
Jun 30 07:13:49 ubuntu-s-1vcpu-1gb-sgp1-01 bash[10878]: > w1@ubuntu-s-1vcpu-1gb-sgp1-01: DOWN
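For comparison, here is a sketch of a unit file closer to the layout in the Celery daemonization docs, assuming the paths from the question. Two things stand out in the original: `ExecStart` names two nodes (`w1 worker` — the log indeed starts both `w1@…` and `worker@…`) while `ExecStop` only stops `w1`, and with `Type=forking` systemd needs consistent pidfile paths to track the daemonized workers; the `[Install]` section is also missing a `WantedBy=` line, without which `systemctl enable` has no effect:

```ini
[Unit]
Description=Celery workers
After=network.target

[Service]
Type=forking
User=isppme
Group=isppme
RuntimeDirectory=celery
WorkingDirectory=/home/isppme/isppme_wa/
ExecStart=/bin/bash -c '/home/isppme/bin/celery multi start w1 \
    -A isppme.taskapp --time-limit=300 --concurrency=8 \
    --pidfile=/var/run/celery/w1.pid \
    --logfile=/var/log/celery/w1.log --loglevel=INFO'
ExecStop=/bin/bash -c '/home/isppme/bin/celery multi stopwait w1 \
    --pidfile=/var/run/celery/w1.pid'

[Install]
WantedBy=multi-user.target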

Browser crashes when using OffscreenCanvas.convertToBlob on a large file in a web worker

I’m trying to show a TIFF file in the browser. I successfully read the TIFF using UTIF.js, inside a Web Worker. Some files are very large, around 10,000 px high and 13,000 px wide, and I need to show them in the browser. The browser crashes while executing OffscreenCanvas.convertToBlob, which returns a Promise.

This is where I use the Web Worker and OffscreenCanvas. I have tried convertToBlob with different quality parameters (0.6 and lower), but the browser still crashes.

UTIF.decodeImage(ubuf, utif[k]);
var ubuf1 = UTIF.toRGBA8(utif[k]);
var a = new Uint8ClampedArray(ubuf1);
var imgData = new ImageData(a, utif[k].width, utif[k].height);
var canvas1 = new OffscreenCanvas(utif[k].width, utif[k].height);
var ctx = canvas1.getContext('2d');
ctx.putImageData(imgData, 0, 0);
var that = self;
if (utif[k].width > 2048) {
  canvas1.convertToBlob({ type: "image/jpeg", quality: 0.3 }).then(function (blob) {
    that.postMessage(blob);
  });
} else {
  canvas1.convertToBlob({ type: "image/jpeg", quality: 1 }).then(function (blob) {
    that.postMessage(blob);
  });
}
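Lowering the JPEG quality only shrinks the encoded output; the crash is more likely caused by the raw pixel buffers (13,000 × 10,000 RGBA is roughly 520 MB). A common mitigation is to draw the decoded image into a proportionally smaller OffscreenCanvas before calling convertToBlob. A sketch of the scale calculation (`safe_scale` and the 16 MP budget are illustrative, not a browser API):

```python
import math

def safe_scale(width, height, max_pixels=16_000_000):
    """Return a scale factor that keeps width*height under max_pixels,
    preserving the aspect ratio (1.0 means no downscaling needed)."""
    pixels = width * height
    if pixels <= max_pixels:
        return 1.0
    # Scaling both dimensions by sqrt(budget/pixels) scales the area by budget/pixels
    return math.sqrt(max_pixels / pixels)
```

In the worker this would translate to creating the OffscreenCanvas at `width * scale` by `height * scale` and using `ctx.drawImage` (or `createImageBitmap` with `resizeWidth`/`resizeHeight`) instead of `putImageData` at full size.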

I expect the browser not to crash in the large-file scenario.

Thanks a lot in advance.

Not able to join worker nodes using kubectl with updated aws-auth configmap

I’m setting up an AWS EKS cluster using Terraform from an EC2 instance. Basically the setup includes an EC2 launch configuration and autoscaling for the worker nodes. After creating the cluster, I was able to configure kubectl with aws-iam-authenticator. When I ran

kubectl get nodes  

It returned

No resources found

because the worker nodes had not joined. So I tried updating the aws-auth-cm.yaml file

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

with the IAM role ARN of the worker node, and ran

kubectl apply -f aws-auth-cm.yaml 

It returned

ConfigMap/aws-auth created

Then I realized that the role ARN configured in aws-auth-cm.yaml was the wrong one, so I updated the same file with the exact worker node role ARN.

But this time I got 403 when I did kubectl apply -f aws-auth-cm.yaml again.

It returned

Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=configmaps", GroupVersionKind: "/v1, Kind=ConfigMap"
Name: "aws-auth", Namespace: "kube-system"
Object: &{map["apiVersion":"v1" "data":map["mapRoles":"- rolearn: arn:aws:iam::XXXXXXXXX:role/worker-node-role\n  username: system:node:{{EC2PrivateDNSName}}\n  groups:\n  - system:bootstrappers\n  - system:nodes\n"] "kind":"ConfigMap" "metadata":map["name":"aws-auth" "namespace":"kube-system" "annotations":map["":""]]]}
from server for: "/home/username/aws-auth-cm.yaml": configmaps "aws-auth" is forbidden: User "system:node:ip-XXX-XX-XX-XX.ec2.internal" cannot get resource "configmaps" in API group "" in the namespace "kube-system"

I’m not able to reconfigure the ConfigMap after this step.

I’m getting 403 for commands like

kubectl apply
kubectl delete
kubectl edit

for configmaps. Any help?
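The error message suggests what happened: the API server now authenticates your kubectl identity as `system:node:ip-…`, because the role your EC2 instance uses was mapped into the `system:nodes` group by the ConfigMap you applied, and that group cannot manage ConfigMaps. If the cluster was created by a different IAM user or role, one recovery path worth trying is to switch kubectl to that identity, since EKS grants the cluster creator `system:masters` access outside of aws-auth, and then re-apply a corrected ConfigMap. Cluster name and region below are placeholders:

```shell
# Run with the credentials of the IAM identity that created the cluster
# (that identity retains admin access even with a broken aws-auth).
aws eks update-kubeconfig --name my-cluster --region us-east-1
kubectl apply -f aws-auth-cm.yaml
```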

Service Worker Uncaught (in promise) DOMException

My service worker fails with the error Uncaught (in promise) DOMException. The SW otherwise runs perfectly and shows a prompt to install the PWA on mobile, but it still produces this error. My site is Milyin. It takes 2-3 visits for the error to start appearing in the console log.

Just 3 visits caused it to produce over 260 of these errors, and I am not able to debug it. I assume this is because the SW has consumed all the storage space on my device: if I reload the page with F5, it shows the error, but a hard refresh with CTRL + SHIFT + R produces no error.

// This is the service worker with the Advanced caching

const CACHE = "Milyin";
const precacheFiles = [
  '/',
  '/wp-content/themes/hestia/assets/js/parallax.min.js?ver=1.0.2',
  ''
];

// TODO: replace the following with the correct offline fallback page i.e.: const offlineFallbackPage = "/";
const offlineFallbackPage = '/';

const networkFirstPaths = [
  /* Add an array of regex of paths that should go network first */
  // Example: /\/api\/.*/
];

const avoidCachingPaths = [
  '/wp-content/plugins/ultimate-member/',
  '/wp-admin/',
  '/chat/'
];

function pathComparer(requestUrl, pathRegEx) {
  return requestUrl.match(new RegExp(pathRegEx));
}

function comparePaths(requestUrl, pathsArray) {
  if (requestUrl) {
    for (let index = 0; index < pathsArray.length; index++) {
      const pathRegEx = pathsArray[index];
      if (pathComparer(requestUrl, pathRegEx)) {
        return true;
      }
    }
  }

  return false;
}

self.addEventListener("install", function (event) {
  console.log("[PWA Builder] Install Event processing");

  console.log("[PWA Builder] Skip waiting on install");
  self.skipWaiting();

  event.waitUntil(
    caches.open(CACHE).then(function (cache) {
      console.log("[PWA Builder] Caching pages during install");

      return cache.addAll(precacheFiles).then(function () {
        if (offlineFallbackPage === "offline.html") {
          return cache.add(new Response("TODO: Update the value of the offlineFallbackPage constant in the serviceworker."));
        }

        return cache.add(offlineFallbackPage);
      });
    })
  );
});

// Allow sw to take control of the current page
self.addEventListener("activate", function (event) {
  console.log("[PWA Builder] Claiming clients for current page");
  event.waitUntil(self.clients.claim());
});

// If any fetch fails, it will look for the request in the cache and serve it from there first
self.addEventListener("fetch", function (event) {
  if (event.request.method !== "GET") return;

  if (comparePaths(event.request.url, networkFirstPaths)) {
    networkFirstFetch(event);
  } else {
    cacheFirstFetch(event);
  }
});

function cacheFirstFetch(event) {
  event.respondWith(
    fromCache(event.request).then(
      function (response) {
        // The response was found in the cache, so we respond with it and update the entry.
        // This is where we call the server to get the newest version of the
        // file to use the next time we show the view.
        event.waitUntil(
          fetch(event.request).then(function (response) {
            return updateCache(event.request, response);
          })
        );

        return response;
      },
      function () {
        // The response was not found in the cache, so we look for it on the server
        return fetch(event.request)
          .then(function (response) {
            // If the request was a success, add or update it in the cache
            event.waitUntil(updateCache(event.request, response.clone()));

            return response;
          })
          .catch(function (error) {
            // The following validates that the request was for a navigation to a new document
            if (event.request.destination !== "document" || event.request.mode !== "navigate") {
              return;
            }

            console.log("[PWA Builder] Network request failed and no cache." + error);
            // Use the precached offline page as fallback
            return caches.open(CACHE).then(function (cache) {
              return cache.match(offlineFallbackPage);
            });
          });
      }
    )
  );
}

function networkFirstFetch(event) {
  event.respondWith(
    fetch(event.request)
      .then(function (response) {
        // If the request was a success, add or update it in the cache
        event.waitUntil(updateCache(event.request, response.clone()));
        return response;
      })
      .catch(function (error) {
        console.log("[PWA Builder] Network request failed. Serving content from cache: " + error);
        return fromCache(event.request);
      })
  );
}

function fromCache(request) {
  // Check to see if it is in the cache and return the response;
  // if it is not in the cache, reject so the caller can fall back
  return caches.open(CACHE).then(function (cache) {
    return cache.match(request).then(function (matching) {
      if (!matching || matching.status === 404) {
        return Promise.reject("no-match");
      }

      return matching;
    });
  });
}

function updateCache(request, response) {
  if (!comparePaths(request.url, avoidCachingPaths)) {
    return caches.open(CACHE).then(function (cache) {
      return cache.put(request, response);
    });
  }

  return Promise.resolve();
}

addEventListener('fetch', event => {
  event.respondWith(async function () {
    // Respond from the cache if we can
    const cachedResponse = await caches.match(event.request);
    if (cachedResponse) return cachedResponse;

    // Else, use the preloaded response, if it's there
    const response = await event.preloadResponse;
    if (response) return response;

    // Else try the network
    return fetch(event.request);
  }());
});

Service Worker is registered through inline JS

<script type="text/javascript">
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/SW.js')
    .then(function (registration) {
      registration.addEventListener('updatefound', function () {
        // If updatefound is fired, it means that there's
        // a new service worker being installed.
        var installingWorker = registration.installing;
        console.log('A new service worker is being installed:', installingWorker);

        // You can listen for changes to the installing service worker's
        // state via installingWorker.onstatechange
      });
    })
    .catch(function (error) {
      console.log('Service worker registration failed:', error);
    });
} else {
  console.log('Service workers are not supported.');
}
</script>

You should definitely see the error on repeated visits to pages of my site, though your mileage may vary based on how much storage your browser allows.
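If storage exhaustion really is the cause, one common mitigation is to cap each cache at a fixed number of entries and evict the oldest ones after every write. Here is the eviction logic sketched with a plain ordered map; in the worker itself the same loop would use `cache.keys()` and `cache.delete()` after each `cache.put` (the function names and the limit of 50 are illustrative):

```python
from collections import OrderedDict

def trim_cache(cache: OrderedDict, max_entries: int) -> None:
    """Evict oldest-first until the cache is within its entry budget."""
    while len(cache) > max_entries:
        cache.popitem(last=False)  # drop the oldest entry

def put(cache: OrderedDict, url: str, response: str, max_entries: int = 50) -> None:
    """Store a response, then trim so storage cannot grow without bound."""
    cache[url] = response
    trim_cache(cache, max_entries)
```

The point of trimming inside every `put` is that the cache can never exceed the budget, no matter how many pages are visited.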

Worker pool implementation

With the new additions in C++11 and C++17, I wanted to create a simple thread pool implementation.

I would like your opinion on:

  • Thread safety
  • API
  • Performance
  • and general code quality

I would also like to know whether it is a good idea to have the wait_until_empty method. Without it I probably could have avoided using a mutex.

#ifndef WORKER_POOL_H
#define WORKER_POOL_H

#include <../cpp11-on-multicore/common/sema.h>

#include <atomic>
#include <condition_variable>
#include <functional>
#include <future>
#include <memory>
#include <mutex>
#include <optional>
#include <queue>
#include <thread>
#include <vector>

#if __cplusplus < 201703L
#error "Compile using c++17 or later"
#endif

/**
 * Simplistic implementation of thread pool
 * using C++17.
 */
class worker_pool {
private:
  /**
   * Inner class that represents individual workers.
   */
  class worker {
  private:
    worker_pool *wp;
    long id;

  public:
    worker(worker_pool *_wp, long _id) : wp(_wp), id(_id){};

    /**
     * Main worker loop.
     */
    void operator()() {
      // work until asked to stop
      while (!wp->stop.load()) {
        auto t = wp->fetch();
        // when asked to stop workers will wake up
        // and receive a nullopt
        if (t.has_value())
          t.value()();
      }
    };
  };

  std::vector<std::thread> workers;
  std::queue<std::function<void(void)>> job_queue;
  // access control for the queue
  std::mutex queue_mutex;
  Semaphore queue_sem;

  // these 2 are used to notify that the queue has been emptied
  std::condition_variable cv_empty;
  std::mutex mx_empty;

  // stop indicates that we were asked to stop but workers are not terminated
  // yet
  std::atomic<bool> stop;
  // term means that workers are terminated
  std::atomic<bool> term;

  /**
   * Thread safe job fetching
   */
  std::optional<std::function<void(void)>> fetch() {
    queue_sem.wait();
    std::unique_lock l(queue_mutex);
    // return nothing if asked to stop
    if (stop.load())
      return std::nullopt;
    auto res = std::move(job_queue.front());
    job_queue.pop();
    // if we happen to have emptied the queue notify everyone who is waiting
    if (job_queue.empty())
      cv_empty.notify_all();
    return std::move(res);
  };

public:
  /**
   * Initializing worker pool with n workers.
   * By default the number of workers is equal to the number
   * of cores on the machine.
   */
  worker_pool(long tcount = std::thread::hardware_concurrency())
      : queue_sem(0), stop(false), term(false) {
    for (long i = 0; i < tcount; i++) {
      workers.push_back(std::thread(worker(this, i)));
    }
  }

  /**
   * Terminate all workers before getting destroyed
   */
  ~worker_pool() { terminate(); }

  /**
   * No-copy and no-move
   */
  worker_pool(worker_pool const &) = delete;
  worker_pool &operator=(worker_pool const &) = delete;
  worker_pool(worker_pool &&) = delete;
  worker_pool &operator=(worker_pool &&) = delete;

  /**
   * Thread-safe job submission. Accepts any callable and
   * returns a future.
   */
  template <typename F, typename... ARGS>
  auto submit(F &&f, ARGS &&... args) -> std::future<decltype(f(args...))> {
    std::lock_guard l(queue_mutex);
    // Wrapping callable with arguments into a packaged task
    auto func = std::bind(std::forward<F>(f), std::forward<ARGS>(args)...);
    auto task_ptr =
        std::make_shared<std::packaged_task<decltype(f(args...))()>>(func);
    // Wrapping packaged task into a simple lambda for convenience
    job_queue.push([task_ptr] { (*task_ptr)(); });
    queue_sem.signal();
    return task_ptr->get_future();
  }

  /**
   * Terminate will stop all workers ignoring any remaining jobs.
   */
  void terminate() {
    // do nothing if already terminated
    if (term.load())
      return;
    // wakeup all workers
    queue_sem.signal(workers.size());
    // wait for each worker to terminate
    for (size_t i = 0; i < workers.capacity(); i++) {
      if (workers[i].joinable())
        workers[i].join();
    }
  }

  /**
   * Check how many jobs remain in the queue
   */
  long jobs_remaining() {
    std::lock_guard l(queue_mutex);
    return job_queue.size();
  }

  /**
   * This function will block until all
   * the jobs in the queue have been processed
   */
  void wait_until_empty() {
    std::unique_lock l(mx_empty);
    while (!(job_queue.empty() || stop.load()))
      cv_empty.wait(l, [&] { return job_queue.empty() || stop.load(); });
  }

  /**
   * Check if there was a demand to stop.
   * Note: there may be still workers running.
   */
  bool stopped() { return stop.load(); }

  /**
   * Check if workers have been terminated
   */
  bool terminated() { return term.load(); }
};

#endif // WORKER_POOL_H

nginx + gunicorn + Django: 502 worker timeout on just some pages

I configured a Django app with gunicorn and nginx, and all was working perfectly until the installation of the SSL certificate on the server. At first all pages were served perfectly, but after some time some pages started showing 502 Bad Gateway while others still work fine.

I am not trying to upload a big file or to call a page that has a long loading time; the page should be served instantly. I have tried everything but can’t find the problem. Maybe it’s a configuration error. Please help if you can.

This is the error in gunicorn’s error.log:

[2019-04-20 14:38:24 +0200] [14828] [CRITICAL] WORKER TIMEOUT (pid:21460)
[2019-04-20 12:38:24 +0000] [21460] [INFO] Worker exiting (pid: 21460)
[2019-04-20 14:38:24 +0200] [21500] [INFO] Booting worker with pid: 21500

This is my gunicorn configuration:

import multiprocessing

timeout = 120
bind = 'unix:/tmp/gunicorn.sock'
workers = multiprocessing.cpu_count() * 2 + 1
reload = True
daemon = True
accesslog = './access.log'
errorlog = './error.log'

nginx config

user www-data;
worker_processes auto;
pid /run/;
include /etc/nginx/modules-enabled/*.conf;

events {
        worker_connections 1024;
        # multi_accept on;
}

http {
        fastcgi_buffers 8 16k;
        fastcgi_buffer_size 32k;
        fastcgi_connect_timeout 300;
        fastcgi_send_timeout 300;
        fastcgi_read_timeout 300;

        ##
        # Basic Settings
        ##

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # SSL Settings
        ##

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
        ssl_prefer_server_ciphers on;

        ##
        # Logging Settings
        ##

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        ##
        # Gzip Settings
        ##

        gzip on;

        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        ##
        # Virtual Host Configs
        ##

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

#mail {
#       # See sample authentication script at:
#       # auth_http localhost/auth.php;
#       # pop3_capabilities "TOP" "USER";
#       # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#       server {
#               listen     localhost:110;
#               protocol   pop3;
#               proxy      on;
#       }
#
#       server {
#               listen     localhost:143;
#               protocol   imap;
#               proxy      on;
#       }
#}


upstream your-gunicorn {
  server unix:/tmp/gunicorn.sock fail_timeout=0;
}

# Catch all requests with an invalid HOST header
server {
    server_name "";
    listen      80;
    return      444;
}

server {
  listen 80;
  server_name;
  return 301 $request_uri;
}

server {
  listen 443 default ssl;
  server_name;

  ssl_certificate /etc/letsencrypt/live/;
  ssl_certificate_key /etc/letsencrypt/live/;

  client_max_body_size 4G;
  keepalive_timeout 70;

  access_log /var/log/nginx/example.access_log;
  error_log /var/log/nginx/example.error_log warn;

  root /var/www/django_projects/example;

  location /static/ {
    autoindex off;
    alias /var/www/django_projects/example/static/;
    expires 1M;
    access_log off;
    add_header Cache-Control "public";
    proxy_ignore_headers "Set-Cookie";
  }

  location @proxy_to_app {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://your-gunicorn;
    proxy_read_timeout 90;
    proxy_redirect http://your-gunicorn;
  }

  location / {
    try_files $uri @proxy_to_app;
  }

  location /.well-known/acme-challenge/ {
    root /var/www/django_projects/example/static/;
  }
}
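One mismatch worth checking in the configs above: gunicorn’s `timeout` is 120 s, but nginx’s `proxy_read_timeout` for the app location is 90 s, so the two give up at different times. Note also that the CRITICAL WORKER TIMEOUT line means gunicorn itself killed a worker that blocked for over 120 s, and the dropped connection is what nginx reports as 502; after an SSL migration, a common culprit is a view making an outbound HTTPS call back to the same site that now hangs. A sketch of aligning the proxy timeouts while debugging (values are illustrative):

```nginx
location @proxy_to_app {
    proxy_pass http://your-gunicorn;
    # keep nginx waiting at least as long as gunicorn's worker timeout (120 s)
    proxy_connect_timeout 120;
    proxy_send_timeout 120;
    proxy_read_timeout 120;
}
```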