Effective ways to overlay a “close” button on a full-screen application when you don’t know the application’s layout

I am creating a web-based interface to a number of internal web applications. They are all embedded in an iframe for access.

At times I need to allow the iframe to go full screen to give an embedded app as much screen space as possible.

What I want to do is have a “close full screen” button when the iframe is full screen. I can overlay one just fine, and functionally this works. But visually, I am challenged with the fact that the apps all have various layouts – so there is no perfect spot to put the close button for all apps. I want the close button to be ideally in the same spot for consistent user experience.

I should add one big condition: this is on a touch screen terminal, and the user does not use a mouse. Otherwise, I would just have any mouse movement to an edge of the screen expose the close button, just like how a full screen web browser in Windows will expose the top UI if the mouse is moved to the top of the screen.

I can imagine several scenarios on how to do this, but am asking here to see if anyone has found a previous UI design pattern to handle this kind of situation.

Is there an unbiased way to collect feedback on a product idea in a user survey?

I’m working with a stakeholder on a potential major pivot to a product, which gets used both inside and outside our organization. The stakeholder wants to directly ask a question like this in a survey to internal potential users: “If our organization offered a way to (achieve your primary goal) (without either of your two top pain points), would you be interested in using it?”

I see this question as biased because it looks like “who in their right mind wouldn’t say yes?” The stakeholder has worked for us for many years and insists that this won’t be a problem, but I find this hard to believe.

Also, in our context, we have to consider whether or not our users have the time, capacity, and desire to use a product like ours. They can use it in or outside their jobs, but they don’t have to.

Are we out of line to ask a question like this? Even if we are, what is a less biased way to ask it?

Is it a bad idea to push everything in my home folder to a private repo on GitHub?

And if so, why?

I have two Ubuntu 18.04 machines that I use for work, one at home and one in the office. I already use Git and GitHub a lot. I often make configuration changes to files in my home directory (e.g. the .pgpass file for Postgres) and I want those changes replicated between the machines without having to remember to sync them explicitly when I leave each one. Using source control makes sense to me, as I want to be able to rewind changes if I mess something up.

However, I’m worried that if I set up a process to sync everything in my home folder, I might accidentally push files that I shouldn’t (e.g. passwords). I know that I can tell git to ignore everything except the files I explicitly tell it to stage, but then I would lose the option of having it automatically sync everything. Is there a simple .gitignore pattern that would suit my needs?
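For what it’s worth, the “ignore everything except an explicit whitelist” idea can be expressed directly in .gitignore using negation patterns. A sketch (the dotfile names below are just examples, and note the trade-off: newly created files are *not* synced automatically under this scheme):

```gitignore
# Ignore everything in the repository root...
/*
# ...except the files explicitly listed below
!.gitignore
!.bashrc
!.vimrc
# To track files inside a directory, un-ignore the directory first,
# then re-ignore its contents and whitelist what you want:
!.config/
.config/*
!.config/git/
```

Even with a whitelist, be careful with files like .pgpass that literally contain passwords; a private GitHub repo is still a third party holding them.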

I would rather not set up a private repo server as I then have to worry about keeping it running.

Is it a good idea to store TOTP tokens in a (synchronised) password safe?

Bitwarden (as an example) allows you to store your TOTP tokens in it. That is: you can use the mobile app to scan the QR code that (e.g.) Amazon AWS gives you, and then it’ll generate TOTP codes.

So far, so exactly the same as Google Authenticator and similar.

Bitwarden synchronises your “vault” (to their servers by default; you can install your own server), including the TOTP … stuff. This means that your credentials (including TOTP codes) are available on all of your associated devices.

Is that a good thing, from a security point of view? It’s definitely convenient — I just got a new phone, and resetting all of my MFA tokens is … not a pleasant use of my time.
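For context on what is actually being synchronised: the QR code just conveys a shared secret (usually base32-encoded), and a TOTP code is derived from that secret plus the current time, so any device holding the secret can mint codes. A minimal sketch of RFC 6238 (HMAC-SHA1, 30-second steps, 6 digits) using only the Python standard library:

```python
import hmac
import struct
from hashlib import sha1

def totp(secret: bytes, for_time: float, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC over the time-step counter, dynamically truncated."""
    counter = int(for_time) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59
print(totp(b"12345678901234567890", 59))  # -> 287082
```

The point for the security question: the secret itself, not the 6-digit codes, is what lives in the vault, so syncing it trades “something you have” isolation for convenience.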


Is it a bad idea to make a microservice for every endpoint?

Many microservices have only a single endpoint, so I know that a single-endpoint service is not inherently a bad thing.

I’ve noticed that many of the advantages of a microservice over a monolith grow as the service gets smaller. For instance, being able to scale one service of an application independently of the others is an advantage of microservices; if every microservice were as small as possible (managing a single endpoint / command), this advantage would simply become greater.

Other advantages which hold this property include:

  • Deployability
  • Team management (perhaps each team is assigned a group of “related” microservices, and perhaps each member is assigned a couple microservices within that group)
  • Emphasis on security where it counts; i.e. if one endpoint involves communication of more sensitive data, it can be singled out and given a higher security focus
  • Selection of tools / languages; i.e. if one endpoint involves some sort of processing or data retrieval which warrants using a different persistence format (NoSQL vs SQL vs in-memory database, etc) or programming language, it can be singled out and built using those tools and languages

Perhaps there are others as well. My point is, if these advantages simply grow the more you split your services, why don’t we make every microservice as small and granular as possible?

If I think I have a publishable idea on a certain problem, how do I publish and proceed?

I’m new to research. I’m working on a certain problem that has sort of been abandoned by all researchers (as far as publications go). Anyway, I like it for its simplicity and easiness to grasp. However, it’s a new algorithm for a problem and a new way of possibly answering theoretical questions about the problem.

There is so little research out there on this particular problem that it would take you a day or less to review it all and ascertain that the idea indeed hasn’t been looked at yet. I’m applying the well-studied field of integer programming / linear algebra to model each instance of the problem.

Anyhow, regardless of whether I answer a major open problem, I think the approach is valuable in itself, if for no other reason than that it makes answering basic questions about the problem a lot easier, rather like how algebraic geometry answers questions about geometric objects using algebra.

Question: How do I proceed? Do I just spend a month perfecting the paper on my own and put it on arXiv? Is that all?

Thanks.

Is it a good idea to hide logout button with error/warning message?

I’m building a secure website, something close to banking. I’m a bit hesitant to hide the logout link behind the error/warning message, but I think the design looks cleaner when I place the error/warning over the logout panel. I’d like the UX experts out there to help me with this dilemma. Here are some prototype images.

Notice that I have a close button (X) on both the error and warning messages. The user can close the message to see the logout button.

Here is my reasoning for wanting to do it this way:

Basically, the error/warning messages act like notifications; even success messages will go there. I wanted to put this in the header so that notifications sit on the master page rather than on the content page. This way I can show notifications consistently.

(Mockup wireframes created with Balsamiq Mockups.)

USB security idea

So I have been reading up on computer security and decided to come up with a design of my own. Note: this is focused on USB security, not any specific OS. Here it is:

A USB drive is encrypted using VeraCrypt, with a 15-character password that is completely random, meaning no connection to you or anyone you have had contact with; it should not be written down anywhere or told to anyone. Inside this USB drive there is a directory that is again encrypted using VeraCrypt. The password principle is the same here, and both passwords should be separate and random. This directory can contain anything you need. Every other week or so you should format the system and the USB drive, then write a bunch of random data to the drive and format it again, and finally re-encrypt the drive with a new random password, and the same for the directory.

My thinking behind this is:

  1. If you have a clean system (meaning, in a theoretical sense, there are no backdoors in the software or hardware present and you have just installed the operating system), you can set up the drive encryption without worrying that your password is being stolen by a keylogger (again, software/hardware comes into play here). If the password is completely random, it should be hard for your average attacker to gain access to the drive without lots of resources and time. Note: whenever you set up the encryption, make sure you have a stock copy of an operating system (software/hardware comes into play again) and that the operating system and encryption software have been verified as non-malicious before creating a password.
  2. If the password is random, over 10 characters long, and not related to you or easily guessed, cracking it would take more time and resources than your average attacker (say, a hacker looking for information such as banking documents, not a state-backed one) would have.
  3. This part is merely a guess on my part: if the attacker breaks your first layer of encryption, then to access anything they would have to go through the second layer.
  4. Regular drive reformats along with the host operating system (my idea would be to erase the drive with hard-drive-wiping software and format the USB using a live-boot operating system). The passwords should change once the drive is encrypted again. I believe this would prevent attackers from using a keylogger to grab keystrokes and the encryption password, along with most malware that resides on the hard drive, though not firmware-based attacks (a possible solution would be to flash clean firmware).

I am still learning and don’t know what I am talking about yet, so I hope someone here will tell me what is wrong with this setup, how I can fix it, and any ideas or further reading.
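As an aside on the “completely random 15-character password” requirement: generating it with a cryptographically secure RNG rather than by hand avoids human bias. A minimal Python sketch using the standard library’s secrets module (the length and alphabet here just mirror the scheme above):

```python
import secrets
import string

def random_password(length: int = 15) -> str:
    """Generate a password from a cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())  # e.g. 'k;R9v!Qx2@mL7#p' (output varies each run)
```

Of course, this leaves the scheme’s real problem untouched: a truly random, never-written-down password must still be memorised by a human.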

Is such “transparent” testing framework a good idea? [on hold]

Here is an example in R, but the concept applies to any language that supports file IO. I’m presenting it in R just because its user-defined infix operator feature makes the syntax look nicer.

```r
`%<-verify%` <- function(x, new.value) {
  name <- deparse(substitute(x))
  save.file <- paste0(name, '.rds')

  if (!file.exists(save.file)) {
    message(paste('Generating', save.file))
    saveRDS(new.value, save.file)
    assign(name, new.value, inherits = TRUE)
  } else {
    old.value <- readRDS(save.file)
    if (!isTRUE(delta <- all.equal(old.value, new.value))) {
      warning(delta)
    } else {
      message('checked')
    }
    assign(name, old.value, inherits = TRUE)
  }
}
```

Let me explain it in action. Say you just receive a legacy codebase and want to clean that mess up.

```r
> get.answer <- function() bitwXor(0xDEAD, 0xBEEF) %% 100
```

First, you need to run some examples, so that the test framework can learn what the result should be. When a name is assigned with %<-verify% for the first time, its value is stored to a file (named after it to prevent namespace collisions).

```r
> source('test-util.R')
> answer %<-verify% get.answer()
Generating answer.rds
> answer
[1] 42
```

All subsequent %<-verify% assignments to this name automatically check the new value against the saved value, and emit a warning if they don’t agree.

```r
> get.answer <- function() 43
> answer %<-verify% get.answer()
Warning message:
In answer %<-verify% get.answer() : Mean relative difference: 0.02380952
```

Effectively, this prevents the codebase from being broken by the new changes.

```r
> get.answer <- function() 42
> answer %<-verify% get.answer()
checked
```

Note that you should only use %<-verify% on the interfaces you care about, for example the exported functions. If you run i %<-verify% i + 1 within a loop, be prepared for a wall of warnings.

Its main merit is transparency: you can just replace <- with %<-verify% and the check is done automatically. People are lazy, so making tests easier to write leads to more tests, and errors can be detected at an earlier stage.

Do you think this is a good practice?