Is it common practice to use separate copies of the same image at different resolutions for different screen sizes?

From a design perspective, is it more common to use different-sized images for, say, desktop and mobile, or to use the same image everywhere and display it at different sizes using CSS or otherwise? What are the pros and cons of each approach?

Is there a commonly used term for a number divided by its greatest common divisor?

Does the expression $ \frac{a}{\gcd(a, b)}$ have a common name?

This type of expression occurs frequently in a program I’m writing. Since $ \forall a,b \in \mathbb{N^{*}}: \frac{a}{\gcd(a, b)} \perp \frac{b}{\gcd(a, b)}$ , I’ve been calling this the coprime part or coprime residue. I’d prefer to use a term of art if one exists.
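In code, the quantity is just a division by the gcd; here is a minimal Java sketch (hypothetical class and method names):

```java
// a / gcd(a, b) — what the question calls the "coprime part" of a with respect to b.
// By the identity above, coprimePart(a, b) and coprimePart(b, a) are always coprime.
class CoprimePart {
    static long gcd(long a, long b) {
        return b == 0 ? a : gcd(b, a % b);
    }

    static long coprimePart(long a, long b) {
        return a / gcd(a, b);
    }
}
```

For example, coprimePart(12, 18) = 12 / 6 = 2 and coprimePart(18, 12) = 3, which are coprime.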

Least common multiple of a list of numbers

I’m trying to learn computer science by doing some challenges. One is the following.

2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder.

What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20?

Let’s generalize the problem from $ 1$ to $ n$ .

I thought of a loop from $ n$ down to $ n/2$ , since every number between $ 1$ and $ n/2$ divides some number between $ n/2$ and $ n$ , so the lower half contributes nothing to the result. At each iteration, the result is the least common multiple of the loop variable and the previous result (at the start the result is 1).
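That loop can be sketched in Java (a sketch using BigInteger so larger $ n$ don't overflow, with lcm(x, y) computed as x / gcd(x, y) * y):

```java
import java.math.BigInteger;

class SmallestMultiple {
    // lcm(1..n), folding lcm into the result for each i from n down to n/2 + 1;
    // numbers at or below n/2 all divide some number in the upper half.
    static BigInteger lcmUpTo(int n) {
        BigInteger result = BigInteger.ONE;
        for (int i = n; i > n / 2; i--) {
            BigInteger bi = BigInteger.valueOf(i);
            result = result.divide(result.gcd(bi)).multiply(bi);
        }
        return result;
    }
}
```

For n = 10 this yields 2520, matching the example in the problem statement.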

If I am correct, the complexity of this should be $ \mathcal{O}(n/2)$ . Am I right?

Are there any more efficient algorithms for this problem? If so, which ones, and what are their complexities?

Sequence with an alternating common difference

Consider a sequence like: 3, 7, 13, 17, 23, 27, 33, 37, …

Let’s look at the common difference:

3   7   13   17   23   27   33   37
 \ / \ /  \ /  \ /  \ /  \ /  \ /
  4   6    4    6    4    6    4
   \ / \  /  \ / \  /  \ / \  /
    2   -2    2   -2    2   -2

This particular sequence is produced by first adding 4, then adding 6, then adding 4, then adding 6, and so on. We eventually reach a point at which the difference becomes constant in magnitude but alternating in sign. There is a closed-form formula for the $ n$ th term of this sequence: $$a_n = \frac{1}{2} \left( 10n + (-1)^{n+1} - 5 \right)$$ At one point I remember coming across an explanation of how to find closed-form formulae for sequences of this type, but I can no longer locate the source. Sequences like this are extremely difficult to search for, so I am hoping someone could explain the methodology and perhaps let me know if there is a name for sequences of this type.
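For what it's worth, one standard approach (a sketch suggested by the shape of the sequence, not necessarily the method the post remembers): since the differences repeat with period 2, fit the ansatz $ a_n = \alpha n + \beta + \gamma (-1)^n$ and solve for the constants from the first three terms:

$$
\begin{aligned}
a_1 &= \alpha + \beta - \gamma = 3 \\
a_2 &= 2\alpha + \beta + \gamma = 7 \\
a_3 &= 3\alpha + \beta - \gamma = 13
\end{aligned}
\qquad\Longrightarrow\qquad
\alpha = 5,\quad \beta = -\tfrac{5}{2},\quad \gamma = -\tfrac{1}{2},
$$

which recovers $ a_n = \tfrac{1}{2}\left(10n + (-1)^{n+1} - 5\right)$ . For a difference cycle of length $ p$ , the same idea applies with a linear term plus a periodic correction of period $ p$ (solve a small linear system for its values, or use $ p$ -th roots of unity in place of $ (-1)^n$ ).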

I am further interested in sequences that may have a cycle of more than 2 steps, for example, first add 10, then add 3, then add 5, then add 7, then repeat that cycle of four numbers to keep getting new terms. Any thoughts or direction to sources on such sequences would be greatly appreciated!

Finding multiple “most common values” in a column

I am trying to display the 5 most commonly appearing text values in a column in Google Sheets.

The following will give me the most common value in column A:

=ArrayFormula(INDEX(A:A,MODE(MATCH(A:A,A:A,0)))) 

And this will give me the second most common value in column A, assuming that the formula above is in cell G2 (whose result is being excluded):

=ArrayFormula(INDEX(A:A,MODE(IF(A:A<>G2,MATCH(A:A,A:A,0))))) 

How can I get the third, fourth, fifth, etc most common values? Something like this does not work:

=ArrayFormula(INDEX(A:A,MODE(IF(A:A<>G2:G3,MATCH(A:A,A:A,0))))) 

Basically I need to exclude multiple values from that calculation. Any ideas?
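One workaround that I believe handles multiple exclusions (an untested sketch, assuming the previous results sit in G2:G3 and this formula goes in the next cell down, with the excluded range growing as it is filled down): test membership with MATCH instead of a direct <> comparison:

=ArrayFormula(INDEX(A:A,MODE(IF(ISNA(MATCH(A:A,G$2:G3,0)),MATCH(A:A,A:A,0)))))

ISNA(MATCH(...)) is TRUE only for values not yet listed, so MODE only sees the remaining candidates.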

Are the sha1 hashes used by common ssh configurations insecure?

I got an automated PCI security test result that checked various server configurations. The automated test flagged the server as unsafe due to the use of the SHA-1 algorithm in some elements of the SSH configuration.

The configuration can be seen when running ssh -vvv, so here’s the relevant part of that output. I snipped out the other algorithms that are available on this particular server, but several are available.

debug2: KEX algorithms: ...snip...diffie-hellman-group14-sha1
debug2: MACs ctos: hmac-sha1...snip...

The flagged elements are the use of:

  • diffie-hellman-group14-sha1 in the key exchange algorithms
  • hmac-sha1 in the MACs from client to server

I’ve searched this site a bit and I don’t see much data about whether these algorithms are (1) still in common use and (2) considered insecure for a PCI-compliant site in 2019.
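For context, a typical remediation (a sketch only, assuming a reasonably recent OpenSSH; exact algorithm names vary by version) is to pin the server to SHA-2-based choices in sshd_config and restart sshd:

KexAlgorithms curve25519-sha256,diffie-hellman-group14-sha256
MACs hmac-sha2-512,hmac-sha2-256

Whether that is actually required for PCI is the question being asked; the options above simply drop the two SHA-1 entries from negotiation.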

Getting the SharePoint list fields that are common to all content types contained in the list

I’m trying to get the list fields shared by all the content types in said List. After getting the List by its Title, I used the Fields property to get all the fields in the List:

List list = clientContext.Web.Lists.GetByTitle("list Title");
clientContext.Load(list);
FieldCollection listFields = list.Fields;
clientContext.Load(listFields);
clientContext.ExecuteQuery(); // actually fetch the loaded objects

The problem is I only want the ones shared by all content types in the list. Is there a way to get them without having to go through all the content types and their respective Fields and comparing them to find the ones in common?

Extract Common Data From List of Objects

I have a list of orders, and for some order fields I need to get the data that is common among the orders. If the data is not in common, I should indicate null. I collect the common data in a CommonData object. In addition, I also store the order codes and order ids associated with the CommonData.

I have two approaches that I’m considering:

Approach 1:

public CommonData getCommonData(List<Order> orders) {

    Order firstOrder = orders.get(0);

    LocalDate commonStartDate = firstOrder.getStartDate();
    LocalDate commonEndDate = firstOrder.getEndDate();
    List<Item> commonItems = firstOrder.getItems();

    CommonData commonData = new CommonData();

    for (Order order : orders) {

        commonData.addCode(order.getCode());
        commonData.addId(order.getId());

        if (commonStartDate != null &&
                !commonStartDate.equals(order.getStartDate())) {
            commonStartDate = null;
        }

        if (commonEndDate != null &&
                !commonEndDate.equals(order.getEndDate())) {
            commonEndDate = null;
        }

        if (!equalLists(commonItems, order.getItems())) {
            commonData.setDifferentItems(true);
            commonItems = null;
        }
    }

    commonData.setCommonStartDate(commonStartDate);
    commonData.setCommonEndDate(commonEndDate);
    commonData.setCommonItems(commonItems);
    return commonData;
}

Approach 2:

private boolean haveSameDate(List<Order> orders,
        Function<Order, LocalDate> getDate) {

    return orders.stream()
            .map(getDate)
            .distinct()
            .limit(2)
            .count() == 1;
}

public CommonData getCommonData(List<Order> orders) {

    CommonData commonData = new CommonData();

    if (haveSameDate(orders, Order::getStartDate)) {
        commonData.setCommonStartDate(orders.get(0).getStartDate());
    }

    if (haveSameDate(orders, Order::getEndDate)) {
        commonData.setCommonEndDate(orders.get(0).getEndDate());
    }

    commonData.setCodes(
            orders.stream()
                .map(order -> order.getCode())
                .collect(Collectors.toList()));

    commonData.setIds(
            orders.stream()
                .map(order -> order.getId())
                .collect(Collectors.toList()));

    List<Item> firstOrderItems = orders.get(0).getItems();

    if (orders.stream()
            .map(Order::getItems)
            .allMatch(x -> equalLists(x, firstOrderItems))) {
        commonData.setCommonItems(firstOrderItems);
    } else {
        commonData.setDifferentItems(true);
    }

    return commonData;
}

Even though approach 2 involves multiple iterations over orders, the order and item lists will always be small.

In approach 2, the operations on each piece of data sit in one spot within the method, whereas in approach 1 they are scattered. Approach 1 also relies on negative logic. On the other hand, approach 2 is more convoluted in how it finds the common order items.

Can approach 1 be redesigned to gain the benefits of approach 2 (removing the negative logic and keeping the operations on each piece of data in one spot)? Or should approach 2 be used, perhaps improved to handle finding the common order items more cleanly?
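One possible direction for such a redesign (a sketch built around a hypothetical generic helper, independent of the Order class): factor the "same value everywhere, else null" pattern into a single method, which removes the negative logic and keeps each field's handling in one place:

```java
import java.util.List;
import java.util.Objects;
import java.util.function.Function;

class CommonValueHelper {
    // Returns the getter value shared by every item, or null when any item differs.
    static <T, R> R commonValue(List<T> items, Function<T, R> getter) {
        R first = getter.apply(items.get(0));
        return items.stream()
                    .map(getter)
                    .allMatch(v -> Objects.equals(v, first)) ? first : null;
    }
}
```

getCommonData could then reduce to calls like commonData.setCommonStartDate(commonValue(orders, Order::getStartDate));, with the items check expressed the same way.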