The function doesn't exist after updating a dependency's version in package.json

I'm developing a module that is used by several projects. My module uses polished for color-related functions, specifically version 2.3.1, as declared in its package.json:

    {
      ...
      "peerDependencies": {
        ...
        "polished": "2.3.1"
      },
      ...
    }

Now I want to use a more recent version of polished (3.4.0) because it includes a new function I need. I updated the version number in my package.json, made the change, verified that it worked, and pushed the new package to the repository.

In two of the projects everything works fine. But in the third one, after it updated to the latest version of my module, it stops working. Specifically, it prints this message to the console and execution terminates:

Possible Unhandled Promise Rejection: TypeError: Object(…) is not a function

Following the stack trace, I got to the root of the problem, which is the new function I added (meetsContrastGuidelines).

Looking at the project's package.json (not my module's), I see that it has no reference to polished at all. But it does include another dependency that in turn pulled in an old version of polished containing all the methods we were using (basically, it worked indirectly).

So in that package.json I added the new version of my module and polished as dependencies (the comments are only here, not in the actual file):

    {
      ...
      "dependencies": {
        ...
        "mimodulo": "0.4.0",  // the latest version of my module
        "polished": "3.4.0"   // the version of polished I need
      },
      ...
    }

I also deleted node_modules, installed the packages with yarn install, built and ran the project… and I still get the same error.

I've checked node_modules and the polished version there is 3.4.0 (the correct one). But it looks as if the correct version is not being picked up and the old one is still being used.

I tried clearing the npm and yarn caches and repeating the whole process, but the result is the same. What could be going on, and how can I fix it?
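A possible next check, assuming Yarn 1.x: yarn why polished or yarn list --pattern polished will show every copy of polished that actually ends up in node_modules. If an old copy is still nested under the other dependency, a resolutions entry in the project's package.json can force a single version while debugging (the version shown is just the one from this question):

    {
      ...
      "resolutions": {
        "polished": "3.4.0"
      },
      ...
    }

After another yarn install, yarn why polished should then report only the 3.4.0 copy.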

How can I safely store application secrets/passwords in git and other version control systems?

When I saw this question: Why is storing passwords in version control a bad idea?

I immediately thought that question could be inverted to be: Why is storing passwords in version control a good idea?

  1. True Infrastructure as Code = App + Config + Secrets, all stored as code. (Having this allows results to be replicated reliably.)
  2. Consistency is the best friend of automation/CI/CD pipelines. Having App + Config in source control and Secrets in HashiCorp Vault makes your automation more complex. If all 3 are stored consistently in git, automation becomes much easier.
  3. It’s important to store your config in a version control system. The thing is, .json or .yaml config files with secrets and other sensitive information embedded alongside the configuration are pretty common. Why not just put those in version control too?
  4. Allowing Secrets in git offers the following benefits:
    1. There’s a changelog of when the secret changed and an audit trail of who changed it; this knowledge narrows the scope of debugging.
    2. Sometimes a dev isn’t sure whether their code is wrong or the secret is formatted in some weird and unexpected way. A dev being able to look at a dev version of the secret while working, and an ops person then being able to compare a dev and pre-prod version of a secret, helps debug quicker. (Example: maybe a .txt file was created on Mac/Linux by a dev and then recreated on Windows by an ops person, and the dev vs pre-prod versions of the secret ended up with two different character encodings; or there are missing quotes, rn vs m, an extra space, or other misspellings.)
    3. I’ve run into a scenario where an app was being rapidly developed and a new feature required a new secret. The secret was added to the dev environment; then a pre-prod version of the application was launched, and it took a while to figure out that it wasn’t working because the newly added secret was never created for (much less applied to) the higher environments. (If secrets were consistently stored in git, this would have been obvious by inspection.)

But then I realized there’s a better question beyond:
Why is storing passwords in version control a bad idea?
vs
Why is storing passwords in version control a good idea?

And that’s:
How can I safely store application secrets/passwords in git?
Challenges:

  • It’s obvious that the secret would need to be stored encrypted. But safely storing encrypted data in git requires that decryption keys can never leak:
    If git users decrypt secrets directly with PGP or symmetric keys, then once a decryption key leaks there is no way to revoke or invalidate it, and no way to purge the git history, because git is decentralized.
  • Need a means to audit whether a piece of data was decrypted, and who decrypted it.
  • Need to be able to assign granular access rights over who can decrypt which secrets. Devs shouldn’t be able to decrypt prod secrets. An ops person who can decrypt prod application A’s secrets shouldn’t necessarily be able to decrypt prod application B’s secrets.
  • Need to be able to prevent footgun scenarios, like accidentally decrypting a previously encrypted secret and then committing the decrypted version back to the repo. (One pattern that can cover these constraints is sketched below.)
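One pattern that can cover these constraints (named purely as an illustration; it is not part of the question above) is envelope encryption against a cloud KMS, for example with Mozilla SOPS: decryption goes through the KMS, so access is granted per key via IAM (revocable, scoped per environment) and every decrypt call lands in the provider's audit log. A minimal, hypothetical .sops.yaml; the key ARNs and paths are placeholders:

    # .sops.yaml: hypothetical repository policy; the KMS key ARNs are placeholders.
    creation_rules:
      # Dev secrets are encrypted with a key that developer IAM roles may use.
      - path_regex: secrets/dev/.*\.yaml$
        kms: "arn:aws:kms:us-east-1:111111111111:key/dev-key-id"
      # Prod secrets use a separate key, so dev roles cannot decrypt them.
      - path_regex: secrets/prod/.*\.yaml$
        kms: "arn:aws:kms:us-east-1:111111111111:key/prod-key-id"

Files are then encrypted in place before committing (sops --encrypt --in-place secrets/dev/app.yaml), so what sits in the working tree and what gets committed is the encrypted form, which also reduces the footgun risk of committing plaintext.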

How do I import database data to a new host that runs a newer version of MySQL without ssh access?

My current host for an old website has only MySQL 5.0 (serverVersion=10.2.12-MariaDB-log).

I want to move this website to a host that has MySQL 5.5, 5.6, or 5.7 (depending on which server I move to).

But the only instructions I can find for updating database data from 5.0 to 5.6/5.7 are run from the command line, requiring ssh access that I do not have.
For example, these are the best and clearest instructions I have found, but I cannot use them because, AFAIK, I do not have ssh access, nor do I fully understand the references the author makes (e.g. he says to use --no faults “for simplicity”, but even if I had ssh I don’t know whether I should also use that flag or others):

  • http://mysqlserverteam.com/upgrading-directly-from-mysql-5-0-to-5-7-using-an-in-place-upgrade/
  • https://mysqlserverteam.com/upgrading-directly-from-mysql-5-0-to-5-6-with-mysqldump/

I usually use MySQL Workbench to connect to remote databases, but when I connect to that old host via MySQL Workbench, a message pops up saying Workbench is not compatible with 5.0.
So for that host I have either used MySQL Workbench anyway to make a backup (which probably means the backup is no good), or I use the host’s web-based tool (not my preference, but obviously better). I have also recently installed HeidiSQL because it seems to be compatible with 5.0 (it does not give a warning/error message, anyway). So I have started making backups and minor data changes on that host using HeidiSQL.

The only reason I have continued to use the host running MySQL 5.0 is that I haven’t yet found instructions on how to migrate data for websites on that server, whether via a hosting provider’s online database tool, MySQL Workbench, or HeidiSQL!
Everything I see is for doing step-wise data upgrades using the command line, and/or for upgrading the database server itself.

I need a way to upgrade the data from 5.0 to 5.6 or from 5.0 to 5.7, probably in one step, using a GUI database connection tool or some other independent method. I will not have access to any MySQL servers other than the server I’m migrating away from (5.0) and the server I am migrating to (5.5, 5.6, or 5.7).

Does anyone know how to do this?

EDIT:

  • When I do a database backup, I usually choose “Export” in the GUI and select all tables. I assume this is the same as the “database dump” I see referenced everywhere.
    Is this correct? If not, how do I generate a proper dump file? (See the sketch after this list.)
  • What export “settings” should I use when the goal is to upgrade and migrate?
  • I also see some references to a users table. Do I need to perform any other exports in order to fully transfer and upgrade my database to a new server with a more recent version?
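
If both the old and the new host accept direct remote MySQL connections (the same kind of connection Workbench and HeidiSQL already use), the dump and restore can be driven entirely from a local machine with the stock MySQL client tools, with no ssh on either server. A rough sketch; the host names, user names, and database names are placeholders:

    # Dump the old database over a normal remote MySQL connection (TCP),
    # the same kind of connection Workbench/HeidiSQL use; no ssh involved.
    mysqldump -h old-host.example.com -u olduser -p \
        --single-transaction --routines --triggers \
        --default-character-set=utf8 olddb > olddb.sql

    # Load the dump into the new server, again over a remote connection.
    mysql -h new-host.example.com -u newuser -p newdb < olddb.sql

A per-database dump like this contains the tables and data but not the mysql system schema (where the users/grants live), so accounts would still need to be recreated on the new host, typically through its control panel.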

Is there a noncommutative version of von Neumann’s ergodic theorem?

The two most celebrated ergodic theorems are Birkhoff’s ergodic theorem and von Neumann’s ergodic theorem.
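For reference, von Neumann’s mean ergodic theorem (the statement I would like to see generalized) says that if $U$ is a unitary operator on a Hilbert space $H$ and $P$ is the orthogonal projection onto the subspace $\{x \in H : Ux = x\}$ of $U$-invariant vectors, then the Cesàro averages converge in norm:

$$\frac{1}{N}\sum_{n=0}^{N-1} U^{n}x \;\longrightarrow\; Px \qquad (N \to \infty), \quad \text{for every } x \in H.$$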

E. C. Lance, in his remarkable work (Ergodic Theorems for Convex Sets and Operator Algebras), formulated what can be considered the noncommutative version of Birkhoff’s ergodic theorem, for a von Neumann algebra, a $*$-automorphism $T$, and a faithful $T$-invariant normal state.

I would like to know whether someone has done the same for von Neumann’s ergodic theorem. In other words, is there a noncommutative version of von Neumann’s ergodic theorem?

How to cache multiple versions of a site

I have a site with a feed of user notes, and I want to show users only the top 100 notes (paginated 10 per page). Every note has some tags (like subreddits on reddit), and every user can add tags to a blacklist so they don't see notes with those tags. The problem is that if all of the top 100 notes have a tag from a user's blacklist, that user sees no notes at all. I don't want to generate the page per user because that is expensive, so I am looking for a smart way to cache multiple versions of the site.

I tried (a sketch of the first approach follows below):

  • Get the top 1000 notes, cache them, and filter the cached value per request.
  • Build the cache key from the user's blacklist. Every user can have a different blacklist, so this ends up caching per user, which is not good.
  • Get the top 5 tags from the last hour, build every combination of them, cache each combination, and pick the combination that best matches each user. This is not a good idea either: there are too many tags, calculating the top X isn't cheap, and top 5 can be too few.
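A minimal sketch of the first approach (cache the top N once, filter per request), written in Python with an in-memory cache; the data shapes, TTL, and the fetch_top_notes stub are assumptions for illustration, not from my setup:

    import time

    CACHE_TTL = 60      # seconds to keep the shared top-notes cache
    TOP_N = 1000        # cache more than 100 so filtering still leaves enough
    PAGE_SIZE = 10

    _cache = {"expires": 0.0, "notes": []}

    def fetch_top_notes(n):
        # Stand-in for the expensive query; returns notes ordered by score.
        return [{"id": i, "score": 1000 - i, "tags": {"tag%d" % (i % 7)}} for i in range(n)]

    def get_top_notes_cached():
        # One shared cache entry for all users, refreshed at most once per TTL.
        now = time.time()
        if now >= _cache["expires"]:
            _cache["notes"] = fetch_top_notes(TOP_N)
            _cache["expires"] = now + CACHE_TTL
        return _cache["notes"]

    def page_for_user(blacklist, page=1):
        # Per-request work is only the cheap filter against this user's blacklist.
        blocked = set(blacklist)
        visible = [n for n in get_top_notes_cached() if not (blocked & n["tags"])][:100]
        start = (page - 1) * PAGE_SIZE
        return visible[start:start + PAGE_SIZE]

    # Example: first page for a user who blacklisted "tag3".
    print([n["id"] for n in page_for_user({"tag3"})])

Caching more than 100 notes (here 1000) is what keeps the filtered list long enough to fill 100 visible notes for most blacklists.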

How to upload a document file and its version files using CSOM in C#

I have downloaded a document file (the current version) and its version history files (the old version files) from a list on one of my SharePoint sites. Now I want to upload that current version file into a list on another site. After uploading the current version file, I need to upload its version files into the _vti_history folder and update the version information on the current file.

I can upload the current version file using the CSOM code below:

    // Requires the Microsoft.SharePoint.Client and System.IO namespaces.
    using (var clientContext = new ClientContext(url))
    {
        using (var fs = new FileStream(fileName, FileMode.Open))
        {
            var fi = new FileInfo(fileName);
            var list = clientContext.Web.Lists.GetByTitle(listTitle);
            clientContext.Load(list.RootFolder);
            clientContext.ExecuteQuery();

            // Build the server-relative URL and upload the file content directly.
            var fileUrl = String.Format("{0}/{1}", list.RootFolder.ServerRelativeUrl, fi.Name);
            Microsoft.SharePoint.Client.File.SaveBinaryDirect(clientContext, fileUrl, fs, true);
        }
    }

But I don’t know how to upload the version files or update the version information on the actual file.

Can anyone help me achieve this in C# using CSOM?