Algorithm Design Patterns vs. Algorithm Strategies vs. Algorithm Design Techniques vs. Algorithm Design Paradigms

I often encounter these terms, and they seem to carry the same semantics and meaning. I’m almost certain they mean the same thing: categories of algorithms grouped by their implementation strategy/paradigm.

I’m just wondering whether I have this right because, again, these terms are used in the same context in different books and courses. I’m quite critical about the exactness of the terms I read.

Could you please confirm or refute my assumption?

Good UX design patterns to add/edit/remove entities in a mobile app?

I’m looking for Android apps which have good entity management (add/edit/remove) UX. The home page of my app displays a list of configured entity instances. Now I need to give the user the ability to add/edit/remove these entity instances. There could be a “Manage Entities” menu item in the left Drawer menu which would open a “Manage Entities” screen with the following design:

  • The screen could display a search input supporting autosearch.
  • An Add button could appear below the search input to support the add scenario.
  • Autosearch would return a list of matching entity list items below it on the screen.
  • Each entity item in the list could have a checkbox on the left. When a user selects one or more entity items a Delete button could be rendered at the bottom of the screen.
  • Each entity list item could include an Edit button at the bottom.

These are just off-the-cuff ideas about how something like this could potentially work. I’m looking for good Android apps which demonstrate a good implementation of these types of UX design patterns so I can decide on the approach that I want to take in my own mobile app.

The fragmentation of page loading status introduced by multiple design patterns – what is best practice for page loading?

In one of the recent updates to Google Chrome, we have seen yet another method of dealing with page loading status: a loading animation in the favicon area of the browser tab (by the way, Firefox uses the side-to-side indeterminate loading animation made notorious by LinkedIn).

[Screenshot: Chrome’s loading animation shown in the favicon area of the browser tab]

As far as I can tell, this makes at least five or six different ways to indicate a loading status on a page, many of which can occur simultaneously, which makes the current state of the page content rather confusing for users.

So the ones that I have seen include:

  • Browser tab favicon area loading indicator seen in image above (is there a name for this?)
  • Mouse cursor loading indicator
  • Page header loading progress indicator
  • Modal/pop-up page loading progress indicator
  • Call-to-action button progress indicator animation
  • Bottom of the page loading indicator (e.g. when infinite scrolling is implemented)

Assuming that there is a ‘best practice’ when it comes to communicating page content status, is there a reason why there need to be so many different ways of indicating to the user that the page has not finished loading? Doesn’t this create a very inconsistent user experience and add to user frustration?

Bash: output from a file between 2 patterns, but with variables and asterisks

Issue

I need to select everything between 2 patterns in a file, but with variables and with * as a wildcard.

Example (something like this):

myscript var1 var2 var3 

Script.

#!/bin/bash
cat $1 | sed -n "/*$2*$3/,/## end of string ##/p"

It is not working as I expect, because * is not treated as “anything”. The * should match anything between the variables. An example of the file ($1) is this:

## First line, first variable is var2, and second variable is var3.
Information text.
Some text to display.
## end of string ##
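
In other words, I think what I need is something like this sketch (my guess, assuming .* is how sed says “anything”):

    #!/bin/bash
    # Sketch only: use ".*" instead of a bare "*" so sed treats it as "any text",
    # and give the file to sed directly instead of piping it through cat.
    sed -n "/.*$2.*$3/,/## end of string ##/p" "$1"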

Am I clear? Thanks.

System

Linux local 5.0.0-29-lowlatency #31-Ubuntu SMP PREEMPT Thu Sep 12 14:13:01 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux 

Should I be able to see patterns in an HS256-encoded JWT?

I was fiddling with https://jwt.io/ using this header

{
  "alg": "HS256",
  "typ": "JWT"
}

when I realized that replacing the name field in the payload with something repetitive like AAAAAAAAAAAAAAAAAAAA would produce a token such as this:

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkFBQUFBQUFBQUFBQUFBQUFBQUFBIiwiaWF0IjoxNTE2MjM5MDIyfQ.hlXlWvaeyOb6OcrOwd-xfWgF8QlfmTycj5WWZwRr6FY 

You can see that the BQUF substring appears to be repeated. The more As I added to the name, the more BQUFs showed up.
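
For what it’s worth, I can reproduce the repetition with plain base64 on the command line, completely outside of jwt.io:

    # Base64 encodes every 3 input bytes as 4 output characters, so a run of
    # identical "AAA" groups comes out as a run of identical "QUFB" groups
    # ("BQUF" is the same cycle read from a different starting character).
    printf 'AAAAAAAAAAAA' | base64
    # QUFBQUFBQUFBQUFB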

As far as I know, the presence of this kind of pattern makes it considerably easier to work out the encoded contents. What am I missing?

Usability testing of design system components and patterns

This is my first post here, and I haven’t been able to find what I’m searching for yet – I must be very innovative, joke aside. I have been given a mission at my current company, from the C-level, to test all of the components and patterns of our design system. This covers everything from input components to badges, tables, cards, panels, etc. Our design system is structured based on atomic design.

I am, however, not familiar with testing specific components on their own; I have always done it through scenarios and cases where we have whole layouts with components that support our users in their work. Is there any way of performing smaller usability tests without specific cases?

Here’s what I was thinking:

  1. I could test each component against certain criteria.
  2. I could perform the 5-second test (identify how the component is perceived after 5 seconds).
  3. The break-it method, where users and test participants try to find errors and problems in the functionality and usability.
  4. Test participants will compare our components one by one with those of Material Design or Lightning.
  5. Evaluate the components through the CBUQ (Component-Based Usability Questionnaire).
  6. Have small tasks for each component to see how easy they are to use and navigate, e.g. Task 1 – enter data, Task 2 – remove entered data, Task 3 – navigate using the keyboard, etc.

Are any of these ideas good? Are there any others? Please help! Any input is valuable! 🙂

Gmail’s Undo send and other UX design patterns for reversing actions/transactions

When Gmail first brought out the Undo feature for sending emails, it was a very interesting way to address the typical user behaviour of sending something on impulse (or just not checking things properly), allowing the user to undo an action or transaction the same way you can undo in Microsoft Word or any typical desktop application.

I am curious to know whether there are other similar design patterns for less familiar actions/transactions, and whether users end up adjusting to these patterns so that we still end up with many emails sent by mistake simply because the perception is that we can undo those actions. Another unintended consequence: when Gmail tries to predict whether an attachment is missing from an email and prompts the user, the prompt often gets ignored because it frequently asks for missing attachments after picking up the wrong context. When a user actually forgets an attachment but ignores the prompt out of habit, the design pattern introduced to prevent the problem ends up contributing to it.

My question is: have design patterns used for reversing actions/transactions been shown to reduce the number of mistakes users make (which is the purpose of allowing an ‘undo’), or have they simply increased users’ confidence so that they end up making more mistakes (even if some of those are caught by the undo action)?

Interior page design patterns

Since the advent of the lonnnnnnng scrolling home page, I feel like there’s been a surge of sites applying the same type of logic to their interior pages as well: lots of scrolling, no secondary navigation menu (even where one is needed), lots of links to orphan pages with no visual indication in the global nav of where each page sits in the hierarchy, and lots of very stylized content that, because it’s so broken up visually, doesn’t really flow together to cover a single topic. I thought the fad would die out, but if anything I feel like I’m seeing it even more these days. It’s almost like having an entire website consist entirely of landing pages that all look kind of similar. Obviously, some are quite well done, but I feel like I see more that are not.

I know the days of interior pages consisting of a header and paragraph after paragraph of straight text are long gone (thankfully), but I’ve got to think that in most cases this is a bit of overkill… to the point of shooting yourself in the foot. Or am I missing something? Is there data showing that such a complex structure is equally effective for a page that is supposed to be narrowing down the focus?

OAuth access token/API key patterns for large websites

First off, let me preface this post by saying I’m not a security expert.

I’m trying to build regular expressions to find OAuth 2.0 access tokens and API keys for common websites such as Google, Twitter, Facebook, Slack, etc. that may have been embedded in source code.

I couldn’t find the token formats of the large sites documented in one place anywhere so I carried out my own research:

OAuth 2.0

| Site      | Regex                                         | Reference                                                                      |
| --------- | --------------------------------------------- | ------------------------------------------------------------------------------ |
| Slack     | xox.-[0-9]{11}-[0-9]{13}                      | https://api.slack.com/docs/oauth                                               |
| Google    | random opaque string up to 256 bytes          | https://developers.google.com/identity/protocols/OAuth2                        |
| Twilio    | JWT [1]                                       | https://www.twilio.com/docs/iam/access-tokens                                  |
| Instagram | [0-9a-fA-F]{7}\.[0-9a-fA-F]{32}               | https://www.instagram.com/developer/authentication/                            |
| Facebook  | [A-Za-z0-9]{125} (counting letters [2])       | https://developers.facebook.com/docs/facebook-login/access-tokens/             |
| LinkedIn  | undocumented/random opaque string             | https://developer.linkedin.com/docs/v2/oauth2-client-credentials-flow#         |
| Heroku    | [0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12} | https://devcenter.heroku.com/articles/oauth                                    |
| GitHub    | [0-9a-fA-F]{40}                               | https://developer.github.com/apps/building-oauth-apps/authorizing-oauth-apps/  |

API Keys

| Site   | Regex                                                                       | Reference                                                      |
| ------ | --------------------------------------------------------------------------- | -------------------------------------------------------------- |
| GCP    | [A-Za-z0-9_]{21}--[A-Za-z0-9_]{8}                                           | undocumented (obtained by generating token)                     |
| Heroku | [0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12} | https://devcenter.heroku.com/articles/platform-api-quickstart  |
| Slack  | xox.-[0-9]{12}-[0-9]{12}-[0-9]{12}-[a-z0-9]{32}                             | https://api.slack.com/custom-integrations/legacy-tokens        |

Based on this research:

  • We can map credentials to websites with some level of accuracy based on the position of hyphens, periods, etc. (see the grep sketch after this list).
  • A more robust way to detect credentials might be to skip the regex altogether and use Shannon entropy to find “unusually random” strings, as illustrated in this blog: http://blog.dkbza.org/2007/05/scanning-data-for-entropy-anomalies.html
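
For example, a couple of the formats above can be dropped straight into a recursive grep over a checkout (just a sketch; src/ is a placeholder for whatever directory is being scanned):

    # Scan a source tree for strings shaped like the Slack and GitHub token
    # formats from the tables above ("src/" is only a placeholder path).
    grep -R -E -o 'xox.-[0-9]{11}-[0-9]{13}' src/
    grep -R -E -o '\b[0-9a-fA-F]{40}\b' src/   # GNU grep's \b; also matches git SHA-1 hashes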

I also stumbled on truffleHog, which has its own, completely different regexes, which really confused me at first:

    ...
    "Facebook Oauth": "[f|F][a|A][c|C][e|E][b|B][o|O][o|O][k|K].{0,30}['\"\s][0-9a-f]{32}['\"\s]",
    "Twitter Oauth": "[t|T][w|W][i|I][t|T][t|T][e|E][r|R].{0,30}['\"\s][0-9a-zA-Z]{35,44}['\"\s]",
    "GitHub": "[g|G][i|I][t|T][h|H][u|U][b|B].{0,30}['\"\s][0-9a-zA-Z]{35,40}['\"\s]",
    "Google Oauth": "(\"client_secret\":\"[a-zA-Z0-9-_]{24}\")",
    "Heroku API Key": "[h|H][e|E][r|R][o|O][k|K][u|U].{0,30}[0-9A-F]{8}-[0-9A-F]{4}-[0-9A-F]{4}-[0-9A-F]{4}-[0-9A-F]{12}",
    ...

Applying these rules to find a GitHub token would not match the bare token from GitHub’s documentation (e72e16c7e42f292c6912e7710c838347ae178b4a), but it would match an assignment of this value to a suspiciously named variable, e.g.:

$github_key = "e72e16c7e42f292c6912e7710c838347ae178b4a"

# ... but would _not_ match

$key = "e72e16c7e42f292c6912e7710c838347ae178b4a"
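
As a quick check, this is roughly how that behaves with a simplified, case-insensitive version of the GitHub rule (quotes only; /tmp/creds.txt is just a scratch file I made up):

    # Reproduce the two cases above: the contextual rule needs a "github"-ish
    # identifier within ~30 characters of the quoted 40-character value.
    printf '%s\n' \
      '$github_key = "e72e16c7e42f292c6912e7710c838347ae178b4a"' \
      '$key = "e72e16c7e42f292c6912e7710c838347ae178b4a"' > /tmp/creds.txt

    grep -i -E 'github.{0,30}"[0-9a-zA-Z]{35,40}"' /tmp/creds.txt
    # prints only the $github_key line; the bare $key line is not reported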

My questions are:

  1. Does anyone have a more comprehensive list of credential formats?
  2. Does anyone have additional/better detection patterns?

[1] I implemented JWT detection as its own rule, which matches a 3-section block of text delimited by ., where the first two sections are base64-encoded strings beginning with { (so they start with ey or ew):

e(y|w)[^.]+\.e(y|w)[^.]+\.[^.]+ 
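
For example, the rule picks up a made-up minimal token (base64url of {"alg":"HS256"} and {"foo":"bar"} plus a placeholder signature):

    # Made-up token: base64url({"alg":"HS256"}) . base64url({"foo":"bar"}) . base64("sig")
    printf '%s\n' 'eyJhbGciOiJIUzI1NiJ9.eyJmb28iOiJiYXIifQ.c2ln' \
      | grep -E 'e(y|w)[^.]+\.e(y|w)[^.]+\.[^.]+'
    # the line is printed, i.e. the rule matches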

[2] https://www.youtube.com/watch?v=_hF099c0A9M (skip to 1:35)