Forcing Users to Think Up More Complex Passwords vs. Ease of Remembering Them

Are there any guidelines on the trade-off between forcing users to have complex passwords (longer, including numbers and special characters, etc.) and the reduction in security if users then have to write those passwords down because they can’t remember them?

To clarify: what I’m thinking about here is where users have their own preferred (and memorised) set of passwords, but are forced by sites to make them longer, to add a number, or to meet some strength bar the site itself imposes (hello, Google). Users then have to invent other passwords that fit these particular criteria – and because those are non-standard for them, they are more likely to write them down.

So I guess the question is: what do users actually do when confronted with a site that tries to force them to use passwords with particular formatting?

.NET C# – how to create a stream of object fields with ease?

I would like to create a stream of an object’s fields – say, to transfer data over the network in an expected binary format.

Serialization is not an option, since it carries class information. I need only the bytes of the fields.

I found an interesting article, How to save non-serializable object to stream, but as I understand it, that approach creates a byte array of the whole object. (For 250 bytes of field data I got a 291-byte array.)

Should I implement the streaming manually, handling each field by reference or by name, or is there already a well-known approach to solve this?
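One common way to get exactly the field bytes, with no class metadata, is to write each field explicitly with `BinaryWriter` and read it back in the same order with `BinaryReader`. Here is a minimal sketch; the `Measurement` class and its fields are hypothetical stand-ins for your own type, and field order and widths are fixed by convention on both ends:

```csharp
using System;
using System.IO;

// Hypothetical example type; substitute your own fields here.
class Measurement
{
    public int Id;
    public double Value;
    public short Flags;

    // Writes only the raw field bytes (4 + 8 + 2 = 14 bytes), in a fixed order.
    public void WriteTo(BinaryWriter writer)
    {
        writer.Write(Id);
        writer.Write(Value);
        writer.Write(Flags);
    }

    // Reads the fields back in the same order and widths.
    public static Measurement ReadFrom(BinaryReader reader)
    {
        return new Measurement
        {
            Id = reader.ReadInt32(),
            Value = reader.ReadDouble(),
            Flags = reader.ReadInt16()
        };
    }
}
```

Note that `BinaryWriter` always emits little-endian byte order, so if the wire format expects big-endian you would need to reverse the bytes per field yourself. The cost of this approach is the boilerplate: every field appears in both methods, and they must stay in sync.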

How to ease the pain of lack of diffs when using database migrations?

The pain I’ve often felt when creating database migration files is best described in this Speakerdeck: Sane Database Change Management with Sqitch.

  • Paste entire function to new “up” script
  • Edit the new file
  • Copy the function to the new “down” script
  • Three copies of the function!

And I end up with no clear diff of the function that I can easily git-blame later to understand the change.

I also feel that a sync-based approach to schema migrations is much better.

I’m starting a new greenfield project (as the only developer). The stack I’ve chosen is Postgres (on AWS RDS), Node.js, Express, Apollo (GraphQL), and React (on Heroku).

I have read about sqitch and migra, but have never used them. Are they the cure for the pain I’ve described? Are they compatible with the stack I’m using for this new project, or is there another sync-based migration tool that fits this stack better?

My current workflow is like this: dev and production database models are identical. A new story arises, and an existing database function needs to be altered. I create a new migration file and copy the function to be altered into both the “up” and the “down” part of the new migration. I commit. I then alter the “up” part and commit again (which creates the diff).
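To make the verbosity concrete, here is a sketch of what such a migration file looks like for a trivial, hypothetical Postgres function (`add_tax` and the tax rates are made up for illustration) – note that the function body appears in full twice, even though only one line actually changed:

```sql
-- up: the altered copy of the function (only the rate changed)
CREATE OR REPLACE FUNCTION add_tax(amount numeric) RETURNS numeric AS $$
  SELECT amount * 1.21;
$$ LANGUAGE sql;

-- down: a full second copy, identical to the previous version
CREATE OR REPLACE FUNCTION add_tax(amount numeric) RETURNS numeric AS $$
  SELECT amount * 1.19;
$$ LANGUAGE sql;
```

With a real function of a few hundred lines, the one-line change drowns in the two full copies, and git shows the new file as wholly added rather than as a diff against the old function body.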

This all feels very verbose when only a few lines in the function needed to change. Ideally, I would have the whole schema as code in git: I alter the schema, I commit (creating a diff in the git history), and a tool then generates the required SQL statements and produces the new up and down migration files for me. I don’t mind having to verify the generated migration file.