Best practices for storing long-term access credentials locally in a desktop application?

I’m wondering how applications like Skype and Dropbox store access credentials securely on a user’s computer. I imagine the flow looks something like this:

  1. Prompt the user for a username/password if it’s the first time
  2. Acquire an access token using the user provided credentials
  3. Encrypt the token using a key that is really just a complex combination of static parameters the desktop application can regenerate deterministically. For example, something like:
value = encrypt(data=token, key=[os_version]+[machine_uuid]+[username]+...) 
  4. Store value in the Keychain on macOS or the Credential Manager on Windows.
  5. Decrypt the token when the application needs it by regenerating the key.
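
To make steps 4–5 concrete, here is a minimal sketch of the keychain hand-off as I imagine it, using the keytar npm package (which delegates to the Keychain on macOS and the Credential Manager on Windows); the service name is made up:

    // Sketch: store/retrieve the (encrypted) token in the OS credential store.
    const keytar = require("keytar");

    const SERVICE = "com.example.myapp"; // hypothetical identifier

    async function saveToken(username, value) {
      await keytar.setPassword(SERVICE, username, value); // "value" from step 3
    }

    async function loadToken(username) {
      return keytar.getPassword(SERVICE, username); // resolves to null if absent
    }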

So two questions:

  1. Is what I described remotely close to what a typical desktop application that needs to store user access tokens long term does?
  2. How can a scheme like this be secure? Presumably, any combination of parameters we use to generate the key can also be generated by a piece of malware on the user’s computer. Do most applications just try to make this key as hard to generate as possible and keep their fingers crossed that no one guesses how it is generated?

Local variables in sums and tables – best practices?

I stumbled on Local variables when defining function in Mathematica on math.SE and decided to ask it here. Apologies if it is a duplicate – the only really relevant question with a detailed answer I could find here is How to avoid nested With[]?, but I find it somewhat too technical, and not really the same in essence.

Briefly, things like f[n_]:=Sum[Binomial[n,k],{k,0,n}] are very dangerous, since you never know when you will use a symbolic k: say, f[k-1] evaluates to 0. This was actually a big surprise to me: for some reason I thought that summation variables and the dummy variables in constructs like Table were automatically localized!

As discussed in the answers there, it is not entirely clear what to use here: Module is basically OK but would share variables across stack frames, and Block does not solve the problem. There were also suggestions to use Unique or formal symbols.
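
To illustrate the failure and the Module workaround (a quick sketch):

    f[n_] := Sum[Binomial[n, k], {k, 0, n}];              (* leaks the global k *)
    f[k - 1]                                              (* 0, not 2^(k - 1) *)

    g[n_] := Module[{k}, Sum[Binomial[n, k], {k, 0, n}]]; (* k localized per call *)
    g[k - 1]                                              (* 2^(k - 1), as expected *)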

What is the optimal solution? Is there an option to automatically localize dummy variables somehow?

Best practices for Structured Data to make both Google and Facebook happy

I am the effective webmaster of a small corporation. I want to add a Corporation Structured Data object to our corporate site. I hope to accomplish three things by adding this Structured Data:

  1. Google’s Rich Cards will correctly display my company’s name, logo, etc.
  2. Facebook’s Rich Cards (does Facebook call them something else?) will correctly display my company’s name, logo, etc.
  3. Hopefully I’ll get slightly better SEO

I (generally) understand how to write structured data, but I don’t understand where to put my Corporation object in particular. I need Google/Facebook to understand that my company’s website is www.company.com/home. At the same time, I need Google/Facebook to understand that any URI within this domain (e.g. www.company.com/about) should use my company’s name, logo, etc. Where do I put my Corporation object so that all pages in the domain “belong to” the object, while only the homepage “owns” it?
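
For reference, here is roughly what I had in mind for the homepage (all names and paths are placeholders), plus Open Graph tags, since as far as I can tell Facebook’s link previews read those rather than JSON-LD:

    <!-- On https://www.company.com/home only -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Corporation",
      "name": "Example Corp",
      "url": "https://www.company.com/home",
      "logo": "https://www.company.com/assets/logo.png"
    }
    </script>

    <!-- Open Graph tags, repeated on every page, for Facebook -->
    <meta property="og:site_name" content="Example Corp">
    <meta property="og:image" content="https://www.company.com/assets/logo.png">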

Best Practices for Filters

I have several domains that I want to filter out of my projects so they never attempt to post to them.

I have 2 questions.

1. Is it better to create a .txt file with a list of all the domains I want to filter out and enter the URL of that .txt file in the “Options” → “Filter” section so that GSA will pull the list daily, or is it better to list each domain individually within the filter list itself?

2. If I want to filter/block a domain and ALL of its subdomains, what format do I use? I have seen *domain.com suggested.

For example, if I want to block all art.blog subdomains and I put in *art.blog, would that also block bestart.blog? (I don’t want that; I only want to block art.blog subdomains.)

What are the best practices in a manufacturing/production facility for data retention?

My scenario: a production facility that uses data sets provided by customers to produce personalized goods in bulk. Data sets can range from 100,000 to 2,000,000 names and addresses in the US. This isn’t PCI data and doesn’t fall under HIPAA or the Sarbanes-Oxley Act.

In a job shop environment, how long is too long to keep data lists provided by customers? Project Managers would love to keep “everything” indefinitely to refer to. Network admins would like data sets scrubbed once a project has shipped.

I’d like to have a solid balance. What are some of the best practices in this area and sources that you refer to for setting policy?

What are the common practices for weighting tag relations?

I am working on a webapp (fullstack JS) where users create documents and attach tags to them. They also select a list of tags they are interested in and attach those to their profile.

I am not a math guy, but I have done some NLP as a hobbyist and learned about latent semantic indexing: as I understand it, you create a table where you store each pair of words you parsed, and then add weight to one of these word pairs whenever both words are found next to each other.

I was thinking of doing the same thing with tags: when 2 tags appear on the same document or profile, I increase the weight of that pair. That would allow me to get a ranking of the tags “closest” to a given one.
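
For the record, here is the kind of thing I would write for that first idea (a plain sketch; tag names are arbitrary):

    // Sketch of idea 1: co-occurrence weights for tag pairs.
    const weights = new Map();
    const pairKey = (a, b) => [a, b].sort().join("|");

    function addTagSet(tags) { // call once per document or profile
      for (let i = 0; i < tags.length; i++) {
        for (let j = i + 1; j < tags.length; j++) {
          const key = pairKey(tags[i], tags[j]);
          weights.set(key, (weights.get(key) || 0) + 1);
        }
      }
    }

    function closestTags(tag) { // ranking of the "closest" tags
      const scores = [];
      for (const [key, w] of weights) {
        const [a, b] = key.split("|");
        if (a === tag) scores.push([b, w]);
        else if (b === tag) scores.push([a, w]);
      }
      return scores.sort((x, y) => y[1] - x[1]);
    }

    addTagSet(["javascript", "node", "webapp"]);
    addTagSet(["javascript", "node"]);
    console.log(closestTags("javascript")); // [["node", 2], ["webapp", 1]]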

Then I remembered that I came across web graphs, where websites were represented in a 2D space (x and y coordinates) and placed depending on their links using a force-directed (“force vector”) layout algorithm.

While I do know how I would implement my first idea, I am not sure about the second one. How do I assign coordinates to tags when they are created? Do they all start at x:0, y:0?

Since I assume this is a common data-sorting problem, I wondered what the common/best practices recommended by people in the field would be.

Are there documents, articles, libraries (npm?) or Wikipedia pages you could point me to that would help me understand what can or should ideally be done? Is my first option a good default?

Also, please let me know in the comments if I should add or remove a tag on this question or edit its title: I’m not even sure how to categorize it.

Do best practices eliminate the need for a CSRF token when writing an API server?

I realize that OWASP recommends CSRF tokens but I rarely see them used with public standalone HTTP APIs. This would seem to indicate that they’re not always necessary.

To make this a little more concrete, I would envision the following scenario:

  • The API server serves a limited number of frontends with an explicit CORS whitelist.

  • HTTP method semantics are followed religiously (no writes in GET).

  • All routes require authentication.

  • All POST routes require a request body[1].

  • All routes that take a request body require a JSON content-type header.

  • Cookies are httpOnly but not sameSite.

Based on my understanding of the same-origin policy and CORS, setting a JSON content-type header on a request makes it non-“simple” and should trigger a preflight, which would fail for untrusted origins. If all POST routes require a JSON content-type header, cross-origin POSTs from untrusted origins should always fail the preflight, leaving only GET requests.
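
In code, the setup I have in mind looks roughly like this (Express and the cors package purely as an illustration):

    // Illustration only: CORS whitelist plus a JSON-only gate on writes.
    const express = require("express");
    const cors = require("cors");

    const app = express();
    app.use(cors({ origin: ["https://app.example.com"], credentials: true }));

    // A cross-origin request with Content-Type: application/json is not
    // "simple", so the browser sends a preflight, which the whitelist rejects
    // for untrusted origins. (OPTIONS itself is handled by the cors middleware.)
    app.use((req, res, next) => {
      if (!["GET", "HEAD"].includes(req.method) && !req.is("application/json")) {
        return res.status(415).end(); // unsupported media type
      }
      next();
    });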

So this would not mitigate CSRF attacks against GET routes, but as these can’t be used for exfiltration (SOP prevents the response from being read) and the GET routes should not cause any data modification, guarding these requests with CSRF tokens would not appear to make a practical difference.

Given how viciously some people defend CSRF tokens, I can’t shake the feeling I’m overlooking an obvious problem here. I realize redundant protections may be valuable in their own right, but what I’m trying to understand is whether in the scenario described the CSRF token would really be redundant or not.


[1]: I realise this might be a practical limitation of this approach as in some real-world APIs there are legitimate POST routes that don’t take a request body or there may be routes that need to take a content-type like form-data that won’t trigger a preflight.

Best practices or advice to convince IT admins not to map network drives in privileged sessions with users

We are currently trying to enhance the security posture of our company, and this means changing how some IT personnel work.

Specifically, our IT helpdesk staff now have 2 separate accounts: 1 for normal day-to-day usage (mail, internet, etc.) and 1 for administrative tasks. The latter is a privileged account with several rights in AD and on some servers.

The way they work is not very secure when it comes to supporting users: they use their privileged account to log in to the user’s workstation and perform tasks where admin rights are needed.

But my question is more specifically about network drives being mapped in their privileged account’s profile: they insisted on using the same logon script as with their standard account.

Do you have any recommendations or references to guidelines and/or best practices for such a case? I’d like to present them some resources to convince them that it’s not secure to have network drives mapped in this profile.

I tried to explain to them that if they log in to a ‘contaminated’ workstation, their privileges might spread the infection to the network… but they did not understand, and argued that they need to access some files on the network while assisting users and don’t want to waste time typing UNC paths, etc.
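
The compromise I suggested was mapping a drive on demand in the elevated session, only when it is actually needed (server/share names hypothetical):

    net use X: \\fileserver\share
    ...
    net use X: /delete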

SQL Server Linked Server Best Practices – Step by Step

Hello and thanks for stopping by. I’m an accidental DBA looking for some guidance on creating linked servers the correct way. MS provides complete descriptions of the data types and of all the various parameters for sp_addlinkedserver, sp_addlinkedsrvlogin and sp_serveroption, but no guidance on HOW to combine the various options as best practices for a given situation.

I have examples from other DBAs who simply used the ‘sa’ password, but my research indicates I should be using bespoke logins tailored to their linked-server use. The problem is that so far I’ve been unable to find the right combination and sequence (order of operations) to correctly create all of the parts and pieces, resulting in a linked server that allows limited communication between two servers.

Goal: create a linked server from a Source server to a Destination server that will allow a job step on the Source server to check certain conditions and, if TRUE, invoke sp_start_job on the Destination server… and nothing more.

On advice, I’ve created a pair of SQL Auth logins with the same name/password on Source and Destination, both with limited ‘public’ permissions.

I’ve created linked servers attempting to map the local login to the remote login (thinking that if I got that far, I’d carefully tinker with the permissions of the Destination login to find the permission that allows it to exec sp_start_job).
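
For concreteness, here is the sequence I have been attempting, with placeholder names throughout (DESTSRV, LinkedJobRunner, NightlyJob); corrections welcome:

    -- On the Source server: link to the Destination (DESTSRV = its network name).
    EXEC sp_addlinkedserver @server = N'DESTSRV', @srvproduct = N'SQL Server';

    -- Drop the default self-mapping so only the dedicated login can use the link.
    EXEC sp_droplinkedsrvlogin @rmtsrvname = N'DESTSRV', @locallogin = NULL;

    -- Map the dedicated local login to the same-named remote login.
    EXEC sp_addlinkedsrvlogin
         @rmtsrvname = N'DESTSRV', @useself = N'FALSE',
         @locallogin = N'LinkedJobRunner',
         @rmtuser    = N'LinkedJobRunner', @rmtpassword = N'********';

    -- sp_start_job on the far side is a remote procedure call, so enable RPC Out.
    EXEC sp_serveroption @server = N'DESTSRV', @optname = N'rpc out', @optvalue = N'true';

    -- On DESTSRV, in msdb: grant just enough to start jobs (SQLAgentOperatorRole
    -- can start any job; SQLAgentUserRole only jobs it owns).
    -- CREATE USER LinkedJobRunner FOR LOGIN LinkedJobRunner;
    -- ALTER ROLE SQLAgentOperatorRole ADD MEMBER LinkedJobRunner;

    -- Then, from the Source job step:
    EXEC DESTSRV.msdb.dbo.sp_start_job @job_name = N'NightlyJob';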

But so far, my only reward has been a series of failure notices of various types.

There are a TON of online documents explaining what each proc/param does, but I’m having a difficult time finding some sort of overview explaining how combinations of procs/params lead to different desired outcomes.

I’m hoping for some useful advice, a reference to some ‘yet to be discovered’ tutorial, or maybe even step-by-step instructions on how to achieve my goal and develop a little self-respect. (So far, this task has done nothing but bruise my ego!)

Thank you for your time.

Best Practices for Caption Text for Images

The website in question is focused on long-form content and will include inline images wherever users place them when they create their blog/content entries, hence the number of images in a post is not controllable.

I need to decide on a suitable type size for image captions (as a percentage of the blog body text: say 70%/60%/50% of the main text, if that is, say, 14px). I also need to determine whether italics work better than regular text for image captions.
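
Concretely, I am weighing something like this, where the values are exactly what I am unsure about:

    figcaption {
      font-size: 0.7em;   /* ~70% of the body text, e.g. ~10px on a 14px body */
      font-style: italic; /* or normal, if italics hurt readability */
    }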

Research is providing contradictory information, with some sites suggesting using italics sparingly and other research suggesting italics have no impact on readability. I see italics being used widely for image captions across many content-heavy websites and news sites.

Thanks.