A/B testing – how to deal with the minority that chose B?

Publishers can test different site layouts and various versions of their content (for example, testing more than one headline on an article).

This can be done with A/B testing. For example, you might find that 60 percent of users prefer layout A and 40 percent prefer layout B. You go with A because the majority of users preferred it.

But what about all those people who preferred layout B?
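
One way I could imagine handling it, as a minimal sketch (TypeScript; the `preferredLayout` storage key is hypothetical): default to the winning layout A, but honor a stored preference for B, so the split becomes a personalization signal rather than a winner-takes-all vote:

```typescript
// Minimal sketch: default to the A/B winner, but let users who preferred
// layout B keep it. "preferredLayout" is a hypothetical storage key.
type Layout = "A" | "B";

const DEFAULT_LAYOUT: Layout = "A"; // the majority choice from the test

function chooseLayout(): Layout {
  const stored = localStorage.getItem("preferredLayout");
  return stored === "B" ? "B" : DEFAULT_LAYOUT;
}

// Called when a user explicitly opts into a layout (e.g. a settings toggle).
function setLayoutPreference(layout: Layout): void {
  localStorage.setItem("preferredLayout", layout);
}

// CSS can then switch layouts off this attribute, e.g. body[data-layout="B"].
document.body.dataset.layout = chooseLayout();
```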

How to simulate screen resolution realistically when testing responsive Web UI?

I have tried testing our Web page at different resolutions with two different approaches:

  • changing the resolution in the Web browser with the Resolution Test extension for Chrome
  • simulating mobile devices with Device Mode in Chrome.

The page renders differently depending on the approach. For instance, in Device Mode the page is zoomed out; e.g., the font size seems to be adapted to the device resolution.

  1. Why does this happen? Based on what information (headers, etc.) does the Web application decide to render the layout differently?

  2. Which approach is more realistic for evaluating the layout at different resolutions, on both mobile devices and standard laptop/desktop monitors?
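
To compare the two setups, here is the kind of snippet I can paste into the DevTools console; the values it logs (CSS viewport, devicePixelRatio, User-Agent, media query matches) are, as far as I understand, what a responsive page keys off, rather than HTTP headers:

```typescript
// Log the values a responsive page typically keys off. Run this once with
// the Resolution Test extension and once in Device Mode, then diff the output.
console.log({
  cssViewport: { width: window.innerWidth, height: window.innerHeight },
  screen: { width: screen.width, height: screen.height },
  devicePixelRatio: window.devicePixelRatio,               // emulated in Device Mode
  userAgent: navigator.userAgent,                          // spoofed in Device Mode
  coarsePointer: matchMedia("(pointer: coarse)").matches,  // touch emulation
  narrowQuery: matchMedia("(max-width: 480px)").matches,   // example breakpoint
});
```

One detail that seems relevant: without a `<meta name="viewport" content="width=device-width, initial-scale=1">` tag, mobile browsers (and Device Mode) lay the page out in a wide virtual viewport and scale it down, which would explain the zoomed-out fonts. Resizing a desktop window only changes the CSS viewport, while the User-Agent and devicePixelRatio keep their desktop values.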

Usability testing of design system components and patterns

This is my first post here, and I have not yet found what I am searching for; I must be very innovative, joking aside. I have been given a mission at my current company, from the C-level, to test all of the components and patterns of our design system. This covers everything from input components to badges, tables, cards, panels, etc. Our design system is structured based on atomic design.

I am, however, not familiar with testing specific components on their own; I have always done it through scenarios and cases with whole layouts of components that support our users in their work. Is there any way of performing smaller usability tests without specific cases?

Here’s what I was thinking:

  1. I could test each component against certain criteria.
  2. I could perform the 5-second test (identify how a component is perceived after 5 seconds).
  3. The break-it method, where users and test participants try to find errors and problems in the functionality and usability.
  4. Test participants could compare our components one by one with those of Material Design or Lightning.
  5. Evaluate the components through the CBUQ (Component-Based Usability Questionnaire).
  6. Have small tasks for each component to see how easy it is to use and navigate, e.g. Task 1 – enter data, Task 2 – remove entered data, Task 3 – navigate using the keyboard, etc. (a rough sketch of this idea follows below).
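
As a rough sketch of idea 6, assuming our React-based design system and Testing Library (`TextInput` and its "Amount" label are hypothetical stand-ins for one of our components); this would complement, not replace, the sessions with real participants:

```tsx
// Hypothetical smoke check mirroring Tasks 1–3 for a single component.
// (Assumes Jest plus @testing-library/jest-dom matchers.)
import { render, screen } from "@testing-library/react";
import userEvent from "@testing-library/user-event";
import { TextInput } from "./design-system"; // hypothetical import

test("TextInput: enter data, remove it, reach it by keyboard", async () => {
  const user = userEvent.setup();
  render(<TextInput label="Amount" />);
  const input = screen.getByRole("textbox", { name: "Amount" });

  await user.tab();                 // Task 3: reachable by keyboard alone
  expect(input).toHaveFocus();

  await user.type(input, "42");     // Task 1: enter data
  expect(input).toHaveValue("42");

  await user.clear(input);          // Task 2: remove entered data
  expect(input).toHaveValue("");
});
```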

Are any of these ideas good? Are there any others? Please help! Any input is valuable! 🙂

User Testing with onboarding

Should I user test a new feature together with its onboarding?

At the moment I do, because if I never tested the onboarding part, how would I know whether its execution is correct? Secondly, if I were to conduct user testing first without onboarding and then with it, it would take far too long.

But I’ve been thinking: if the onboarding covers information that’s unnecessary, how will I ever know to strip it out?

Does anyone have any good, valid resources that talk about this?

Thanks 🙂

Which tool is best for online user testing?

I have a small e-shop and I am trying to improve my information architecture. I want to understand my customers and their behaviour on the website.

I am planning to buy one of the online testing tools and I want to know which one is the best. I found these tools: UXtweak, Hotjar, Validately, Smartlook, FullStory.

The tools have similar features, but with UXtweak I can conduct a task-driven study. It is in open beta now, so I have already tried creating a study, and it is really simple. You can also try it on your live page.

I think that is the best way to learn how to set tasks and get to know customers’ behaviour.

Do you know of any other online platforms with task-driven studies like UXtweak? https://www.uxtweak.com

Let me know about your experience with online testing platforms. It should be designed primarily for e-shops.

Testing Multiplayer Browser Game on Single Computer

I am writing a multiplayer browser-based game and am running into an issue with playtesting simple interactions. When a browser window loses focus, it stops running requestAnimationFrame calls. This effectively pauses the state and all visuals when focus is lost.

I would like to have two instances of the game running at the same time, so I could playtest in one window and observe how a third party would see those actions in another window.

Right now I am only able to toggle between the two windows, and I see teleportation-like behavior as the client state catches up to the new server state.

I am not sure how I might test this without setting up a second computer. Does anyone have any experience with this, or any advice?
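
The closest workaround I can sketch (TypeScript; `applyServerState` and `render` stand in for my own game code) is to drive the simulation from setInterval, which browsers keep firing in background windows (throttled, typically to roughly once per second), and use requestAnimationFrame only for drawing:

```typescript
// Decouple simulation from rendering: the timer advances state regardless of
// focus; rAF only paints, and simply pauses while the window is hidden.
declare function applyServerState(dtSeconds: number): void; // hypothetical
declare function render(): void;                            // hypothetical

let lastTick = performance.now();

function simulate(): void {
  const now = performance.now();
  const dt = (now - lastTick) / 1000; // measure real elapsed time, so state
  lastTick = now;                     // stays correct even when throttled
  applyServerState(dt);
}

setInterval(simulate, 50); // ~20 Hz when focused; throttled but alive when not

function frame(): void {
  render();
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```

Because `simulate` measures real elapsed time rather than assuming a fixed step, the unfocused window's state stays close to the server's, so switching back should no longer look like teleportation.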

Application and API Penetration Testing – SaaS solutions

I have managed projects where we used a third party to do application penetration testing. Based on what I could gather, it entailed manual testing and identified some good issues. We also used ZAP to prepare ourselves before we went to third-party pen testing, so I am familiar with that too.

I was wondering if there were SaaS solutions for pen testing that meet the following criteria:

1 – Easy to use, in that meaningful canned policies exist. Example: you have never done any pen testing on your app before, let’s start here… You are required to meet a specific regulation, try the following policy set …

2 – Have adequate depth and credibility (both subjective) such that the report will be accepted by a Fortune 500 company’s security team or by a SOC 2 auditor (I recognize that auditors do not really care how you did your pen test as long as you did it, given that SOC 2 does not strictly call for a pen test).

Thanks!

Usability testing old and new design at the same time?

One of our designers has redesigned a page of ours using insights from another piece of research we did months ago.

I want to test the old design and the new design at the same time, to evaluate whether the changes are indeed an improvement.

Is it a good idea to set the same three tasks for both prototypes, and evaluate which is easier for participants to use via success rates and ratings?
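
For the evaluation itself, I sketched how the success rates could be compared once collected (TypeScript; the counts are made up). With typical usability sample sizes the result will often be directional rather than statistically significant:

```typescript
// Two-proportion z-test: is the difference in task success rates between the
// old and new design larger than chance would explain? |z| > 1.96 ≈ p < .05.
function twoProportionZ(successOld: number, nOld: number, successNew: number, nNew: number): number {
  const pOld = successOld / nOld;
  const pNew = successNew / nNew;
  const pooled = (successOld + successNew) / (nOld + nNew);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nOld + 1 / nNew));
  return (pNew - pOld) / se; // positive z favors the new design
}

// Made-up example: 14/20 participants succeeded on the old design, 18/20 on the new.
const z = twoProportionZ(14, 20, 18, 20);
console.log(z.toFixed(2), Math.abs(z) > 1.96 ? "significant at p < .05" : "not significant");
```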