How to export a large WhatsApp chat with media exactly as it appears on my phone?

I want to export a huge WhatsApp conversation (346,000 messages) along with media (photos, videos and voice notes) to my computer exactly as it appears on WhatsApp.

There is a Chrome extension that does exactly what I want by downloading the conversation from WhatsApp Web as HTML, but it crashes at around 100,000 texts. Is there any other way?

I can’t export it directly through WhatsApp because it’s limited to 40,000 texts. Also, my phone is not rooted so extracting the SQLite files is not an option.

(Technical answers are highly encouraged)

Browser crashes when using OffscreenCanvas.convertToBlob on a large file in a web worker

I’m trying to display TIFF files in the browser. I can read a TIFF successfully using UTIF.js, and I do the decoding in a web worker. Some files are very large, around 10,000 px high and 13,000 px wide, and I need to display them in the browser. The browser crashes while executing OffscreenCanvas.convertToBlob, which returns a Promise.

This is where I use the web worker and OffscreenCanvas. I have tried the convertToBlob method with different parameters, such as quality 0.6 and lower, but the browser still crashes:

    // Decode the current TIFF page and expand it to RGBA pixel data.
    UTIF.decodeImage(ubuf, utif[k]);
    var ubuf1 = UTIF.toRGBA8(utif[k]);
    var a = new Uint8ClampedArray(ubuf1);
    var imgData = new ImageData(a, utif[k].width, utif[k].height);

    // Draw the pixels onto a full-resolution OffscreenCanvas.
    var canvas1 = new OffscreenCanvas(utif[k].width, utif[k].height);
    var ctx = canvas1.getContext('2d');
    ctx.putImageData(imgData, 0, 0);

    // Encode to JPEG and post the blob back to the main thread,
    // using a lower quality for wide images.
    var that = self;
    if (utif[k].width > 2048) {
        canvas1.convertToBlob({ type: "image/jpeg", quality: 0.3 }).then(function (blob) {
            that.postMessage(blob);
        });
    } else {
        canvas1.convertToBlob({ type: "image/jpeg", quality: 1 }).then(function (blob) {
            that.postMessage(blob);
        });
    }

I expect the browser not to crash even in the large-file scenario.
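
One workaround I am considering is to draw into a smaller OffscreenCanvas before encoding, so convertToBlob never has to encode the full-resolution bitmap. This is only a sketch: the 4096 px cap is an arbitrary number I picked, not a documented browser limit, and I have not confirmed that it avoids the crash.

    // Sketch: downscale before encoding to reduce memory pressure.
    // MAX_DIM is an arbitrary guess, not a documented browser limit.
    var MAX_DIM = 4096;
    var scale = Math.min(1, MAX_DIM / Math.max(utif[k].width, utif[k].height));
    var out = new OffscreenCanvas(
        Math.round(utif[k].width * scale),
        Math.round(utif[k].height * scale)
    );
    // drawImage accepts the full-size canvas as a source and scales it down.
    out.getContext('2d').drawImage(canvas1, 0, 0, out.width, out.height);
    out.convertToBlob({ type: "image/jpeg", quality: 0.6 }).then(function (blob) {
        self.postMessage(blob);
    });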

Thanks a lot in advance.

Splitting low-dimensional $p$-local CW complexes for large $p$

Fix a prime $p$. I have a sketch of a proof that if $X$ is a finite simply-connected CW complex with $\mathrm{dim}(X) < p$, then for some $t \in \mathbb{N}$, $\Sigma^t X$ is a wedge of Moore spaces.

(Basically, the idea is that all the interesting attaching maps are Whitehead products, hence stably trivial.)

Questions:

  1. Does anyone know a reference for this?
  2. If it’s not true, I’d love to know that too!

Postgres: speed up index creation for a large table

I have a large Postgres table with 2+ billion rows (1.5 TB) and mostly non-null character varying columns. To speed up inserts, I dropped the indexes before bulk loading. However, it is now taking forever for the b-tree indexes to be created. For one of the runs that I cut short, it had spent more than 12 hours creating the indexes.

Sample table and the indexes I’m trying to create:

            Column         |            Type             | Modifiers
    -----------------------+-----------------------------+-----------
     name                  | character varying           | not null
     id                    | character varying           |
     lifecycle_id          | character varying           |
     dt                    | character varying           |
     address               | character varying           |
     ...
    Indexes:
        "name_idx" PRIMARY KEY, btree (name)
        "id_idx" btree (rec_id)
        "lifecycle_id_idx" btree (lifecycle_id)

The actual table has 18 columns. I’ve set the maintenance_work_mem to 15GB. This is running on Postgres 9.6.11 on RDS. The instance class is db.m4.4xlarge.

Since there are three indexes, it would be hard to sort the data before inserting. Would it be faster to just insert the data without dropping the indexes? Any other suggestions for speeding up the index creation?
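
For reference, the rebuild step amounts to the following statements, sketched here as a Node script with the pg client (big_table is a stand-in for the real table name; the 15GB setting mirrors what I mentioned above):

    // Sketch of the index rebuild step; 'big_table' is a stand-in name.
    const { Client } = require('pg');

    async function rebuildIndexes() {
        const client = new Client(); // connection settings come from PG* env vars
        await client.connect();
        // Per-session setting; larger values speed up the b-tree sort phase.
        await client.query("SET maintenance_work_mem = '15GB'");
        await client.query('ALTER TABLE big_table ADD CONSTRAINT name_idx PRIMARY KEY (name)');
        await client.query('CREATE INDEX id_idx ON big_table (rec_id)');
        await client.query('CREATE INDEX lifecycle_id_idx ON big_table (lifecycle_id)');
        await client.end();
    }

    rebuildIndexes().catch(console.error);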

Efficiently sharing a large node_modules directory between multiple TeamCity build jobs

The CI flow for our Node.js app looks roughly like this:

[diagram of the CI flow]
Currently, this all takes place in a single TeamCity ‘job’ with three ‘steps’ (the Test step runs 4 concurrent child processes).

Problems with the current approach:

  • The whole job takes too long – 15 minutes. (The Test subprocesses run in parallel, but this only shaves about 15% compared to running them serially.)
  • The Test step has jumbled log output from 4 child processes, and it’s painful figuring out what failed.

Desired approach

I want to split the above into six TeamCity jobs, using artifact and/or snapshot dependencies to compose them into the desired flow. This should make better use of our pool of four build agents (better parallelism), and it should make failures easier to pinpoint.

But I’m having trouble with sharing the node_modules from the first step so it can be reused by all four jobs in the Test phase. It takes about 3-5 minutes to run yarn (to set up node_modules), so I want to avoid repeating it on every Test job.

Also, most git pushes don’t actually change the npm dependencies, so the ‘Setup’ phase could often be bypassed for speed. CircleCI has a nice way to do this: it lets you cache your node_modules directory with a custom key such as node_modules.<HASH>, using a hash of your lockfile (yarn.lock or package-lock.json) – because the complete node_modules directory is more or less a function of the lockfile.
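
For illustration, a key like that can be computed in a few lines of Node (this is my own sketch, not CircleCI’s actual implementation):

    // Sketch: derive a cache key from the lockfile, CircleCI-style.
    const crypto = require('crypto');
    const fs = require('fs');

    function cacheKey(lockfilePath) {
        const hash = crypto
            .createHash('sha256')
            .update(fs.readFileSync(lockfilePath))
            .digest('hex');
        return `node_modules.${hash}`;
    }

    console.log(cacheKey('yarn.lock')); // e.g. node_modules.3f2a9c…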

But my company won’t let me use CircleCI. We have to use TeamCity.

What I’ve tried:

  • I’ve configured the first TC job to export node_modules as an artifact, but this seems to take forever on TeamCity (>10 minutes for a large node_modules dir), compared to a few seconds on CircleCI. Also, TC doesn’t make it easy to have a dynamic cache key the way Circle does.
  • I’ve tried a custom solution: I save a tarball of node_modules to S3 (with a cache key based on the lockfile), then each Test job streams it down and untars it into node_modules locally (roughly as sketched below), but this ends up taking just as long as running yarn from scratch on each job, so there’s no point.
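
The custom S3 round-trip looks roughly like this (a sketch only; the bucket name is a stand-in, and it reuses the cacheKey helper from the earlier sketch):

    // Sketch of the S3 tarball round-trip; the bucket name is a stand-in.
    const { execSync } = require('child_process');

    const key = cacheKey('yarn.lock'); // helper from the earlier sketch
    const s3Uri = `s3://my-ci-cache/${key}.tar.gz`;

    // Publish (end of the Setup job):
    execSync(`tar -czf ${key}.tar.gz node_modules`);
    execSync(`aws s3 cp ${key}.tar.gz ${s3Uri}`);

    // Restore (start of each Test job):
    execSync(`aws s3 cp ${s3Uri} ${key}.tar.gz`);
    execSync(`tar -xzf ${key}.tar.gz`);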

I’m stuck. Has anyone had any success setting up a CI flow like this on TeamCity?

Bounds on chromatic number when maximum degree is large

For a regular graph with $n$ vertices and maximum degree $\Delta$, it is easy to see that the chromatic number satisfies $\chi \le \frac{n}{2}$ if $\frac{n}{2} \le \Delta < n-1$. (A regular graph on $n$ vertices with maximum degree $n-2$ is the complete graph with a one-factor removed, so each vertex is non-adjacent to exactly one other vertex, and each such pair can be given the same color; using the handshaking lemma, the chromatic number of such a graph is $\frac{n}{2}$.)

How could this fact be applied to bound the chromatic number of a non-regular graph with large maximum degree? Does this fact have a well-known name, like Reed’s theorem or Brooks’ theorem? Thanks in advance.

Sum of a large series as follows:

I want to know the following summation modulo some prime $m$:

$$\sum_{n=0}^{k} a_n, \qquad a_n = \bigl((n+1)^p - n^p\bigr)\bigl((n+1)^q - n^q\bigr),$$

where

$$1 \leq k \leq 10^9, \qquad 1 \leq p, q \leq 10^5, \qquad m = 10^9 + 7.$$

Say the above problem has to be solved for at least 1000 different inputs of $k$, $p$, $q$.

I have little knowledge of programming, and I am unable to find a solution better than $O(\log_2(10^5) \cdot t \cdot k)$, where $t$ is the number of test inputs and everything else is as specified above. That will surely take more than 5 seconds even when programmed in C or C++, but I want it computed in less than 4 or 5 seconds. I have included the constant factor $\log_2(10^5)$ to account for the cost of modular exponentiation.
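
For concreteness, here is a minimal sketch of that direct approach in JavaScript (using BigInt); it is exactly the $O(\log_2(10^5) \cdot t \cdot k)$ idea above, not an improvement on it:

    // Direct evaluation of the sum with square-and-multiply modpow.
    const M = 1000000007n;

    function modpow(base, exp, mod) {
        let result = 1n;
        base %= mod;
        while (exp > 0n) {
            if (exp & 1n) result = (result * base) % mod;
            base = (base * base) % mod;
            exp >>= 1n;
        }
        return result;
    }

    function sumSeries(k, p, q) {
        let total = 0n;
        for (let n = 0n; n <= k; n++) {
            // a_n = ((n+1)^p - n^p) * ((n+1)^q - n^q), all mod M
            const d1 = (modpow(n + 1n, p, M) - modpow(n, p, M) + M) % M;
            const d2 = (modpow(n + 1n, q, M) - modpow(n, q, M) + M) % M;
            total = (total + d1 * d2) % M;
        }
        return total;
    }

    console.log(sumSeries(3n, 2n, 2n)); // small sanity check: 1 + 9 + 25 + 49 = 84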

Source: https://math.stackexchange.com/questions/3259580/summation-of-product-of-two-terms-depending-on-same-variable-as-follows

Legendre Symbol of a Very, Very Large Value

I’m trying to use FLINT (Fast Library for Number Theory) to calculate the Legendre Symbol of the following:

$$\left(\frac{n! + 1}{p}\right)$$

In my case, $p$ is a positive, odd prime (specifically $1,000,000,000,039$), so I should be able to use the Jacobi symbol in its place when attempting to compute it.

How do I simplify the numerator if $n$ is a very large number, specifically $208,463,325,489$?

My current thought is that I would need to calculate $n! \bmod p$ (which I believe is just a running product modulo $p$) and then add 1 before computing the symbol.
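
In code, the running product I have in mind is simply the following (a sketch in JavaScript with BigInt rather than FLINT’s API; it is far too slow at $n = 208,463,325,489$ and is shown only to pin down what I mean):

    // n! mod p as a running product; fine for small n, too slow for the real n.
    function factorialMod(n, p) {
        let result = 1n;
        for (let i = 2n; i <= n; i++) {
            result = (result * i) % p;
        }
        return result;
    }

    console.log(factorialMod(10n, 13n)); // 6n, since 10! = 3628800 ≡ 6 (mod 13)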

The value of $n! \bmod p$ that I computed using FLINT is $133,008,788,325$, but I’m not sure if that’s the correct value that I should be using in place of $n!$ when computing the symbol.

Is it possible to simplify this mathematically so that I can verify that my computation is correct?

Filtered, large list exporting

I’m currently managing a large list (just over 5,000 items) and have a filtered view, using indexed columns, that returns 143 items. I want to export those 143 items to Excel, but I’m getting an error saying that I “…don’t have permission to view the entire list because it is larger than the list view threshold…”

When I export other (smaller) lists with filtered views, only the filtered results are exported, not the whole list. I had some counts on the list, which I have since removed, and there’s no grouping. What am I missing?