Best practice for creating virtual networks for a CTF

I’ve built a CTF server at home for me and my friends. By CTF server I mean a reasonably well-specced server running XenServer (free edition); I download and deploy a vulnerable VM on it, and the first one of us to hack it wins.

Now I would love to step up my game and see if there are any best practices I can follow for a more professional-looking setup, which is why I’m here asking for your help.

  1. Is there anything better on the market than XenServer (it doesn’t need to be free) for creating virtual networks of VMs?
  2. Is it possible to integrate VMs with real physical devices? Say I want to create a network of two Windows 10 VMs connected to a firewall, where the firewall is a real device rather than a virtualized one, and attack the network from the outside. Is that possible with XenServer? Or, more simply, could I put a real printer on the same network as the VMs running on XenServer?

Any advice or suggested reading would be very much appreciated. Thanks!

Best practice for handling hardware and software updates

I’m part of a development team working with machines. Due to increased demand for our product, our team has doubled in size over the past 1.5 years.

My question concerns the best practice for keeping track of all our software updates. Let me explain. The machines are constantly improved hardware-wise, which means that we developers sometimes need to tweak the codebase a little. This ultimately leaves us with a bunch of different, but similar, hardware configurations and a bunch of different, but similar, code packages.

As we grow, keeping track of which update is pushed to which machine becomes a real issue. Also, an update sometimes needs to be reworked a little to fit a certain machine. What I am looking for is a best practice, or perhaps even a project tool, that helps my team remember what goes where. Right now we are using an improvised board in Monday.com, but we will outgrow it soon and need something more structured in the long run.

Best practice for connecting a physical site to a non-local AWS region?

I was hoping to get some help from you. At this point the information I’m looking for is probably quite high-level. I have been searching for information and have mostly found what I’m looking for.

Essentially I’m looking to connect our UK-based site to an AWS VPC. I have been looking at both Direct Connect and VPN options for this purpose. Initially I would like to use a VPN to get things started. Naturally, I was going to place this VPC in a UK AWS region.

This would be straightforward enough; however, due to a business demand, there is a requirement for us to offer SFTP (or a similar option) in the HK region quickly (before any need for one in the UK), which effectively accelerates the move to AWS. Ultimately, this new VPC in HK would eventually share a set of services with users at a remote site in HK. In the first instance only SFTP is required, however, which means I can set up an externally facing service secured with credentials or tokens, forgoing Direct Connect or a VPN to the HK site for now.

My real question is: would creating a VPN from the UK site to an AWS UK VPC, and then a VPN from the UK VPC to the HK VPC, make sense? Or is it more sensible (security concerns aside, as those would be controlled) to connect a VPN directly from the UK site to the VPC in HK?

I really just need to move some data from our site about 3–4 times a day and make it available to an EC2 instance in the HK region.


Is there any way (or best practice) to measure the perceived workload that a piece of software causes for a given user?

Perceived workload is subjective by nature, but I would like to find a way to measure it for an individual. By this I mean the workload imposed on a person by a single piece of software that they have to work with.

The software I am talking about delivers work items as part of a project team’s workflow. I would like to find out roughly the optimal number of work items (per day, for example) for a single stakeholder in order to avoid fatigue or cognitive overload. (The work items themselves always take roughly the same effort and time.)

Are there any best practices for this, or any (scientifically) proven ways to do it?

Best practice for designing an API for multiple printer attributes?

I have network printers with different attributes (e.g. supported network protocols, languages, status, print modes).

```csharp
public abstract class PrinterAttribute
{
  protected PrinterAttribute(int value, string dsc)
  {
    this.Value = value;
    this.Dsc = dsc;
  }

  protected PrinterAttribute(int value, string dsc, Enum type) : this(value, dsc)
  {
    this.Type = type;
  }

  protected int Value { get; private set; }
  protected string Dsc { get; private set; }
  protected Enum Type { get; private set; }
}
```

And finally I will have a number of classes like NetworkProtocol and PrinterMode.

NetworkProtocol constructors:

```csharp
public class NetworkProtocol : PrinterAttribute
{
  public NetworkProtocol(int value, string dsc)
    : base(value, dsc) {}

  public NetworkProtocol(int value, string dsc, ProtocolType protocolType)
    : base(value, dsc, protocolType) {}
  ...
```

Way 1 – get the desired network protocol by locating it among all possible ones

```csharp
// all possible network protocols that a printer can support
private static IList<NetworkProtocol> allPossibleProtocols;

public static IList<NetworkProtocol> AllPossibleProtocols
{
  get
  {
    if (allPossibleProtocols == null)
    {
      allPossibleProtocols = new List<NetworkProtocol>()
      {
        new NetworkProtocol(0, "None"),
        new NetworkProtocol(1, "FTP"),
        new NetworkProtocol(2, "LPD"),
        new NetworkProtocol(4, "TCP"),
        new NetworkProtocol(8, "UDP"),
        new NetworkProtocol(0x10, "HTTP"),
        new NetworkProtocol(0x20, "SMTP"),
        new NetworkProtocol(0x40, "POP3"),
        new NetworkProtocol(0x80, "SNMP"),
        new NetworkProtocol(0x100, "Telnet"),
        new NetworkProtocol(0x200, "Weblink"),
        new NetworkProtocol(0x400, "TLS"),
        new NetworkProtocol(0x800, "HTTPS")
      };
    }
    return allPossibleProtocols;
  }
}

public static NetworkProtocol POP3
{
  get
  {
    // don't like it because you locate it by value
    return AllPossibleProtocols.Single(x => x.Value == 0x40);
  }
}
```

Way 2 – create a public static NetworkProtocol for every possible protocol

```csharp
private static NetworkProtocol smtp;

public static NetworkProtocol SMTP
{
  get
  {
    if (smtp == null)
    {
      smtp = new NetworkProtocol(0x20, "SMTP");
    }
    return smtp;
  }
}
```

Way 3 – create an enum and look the protocol up by its enum value

```csharp
public enum ProtocolType : int
{
  None, FTP, LPD, TCP, UDP, HTTP, SMTP, POP3, SNMP, Telnet, Weblink, TLS, HTTPS
}

private static IList<NetworkProtocol> allPossibleProtocols3;

protected static IList<NetworkProtocol> AllPossibleProtocols3
{
  get
  {
    if (allPossibleProtocols3 == null)
    {
      allPossibleProtocols3 = new List<NetworkProtocol>()
      {
        new NetworkProtocol(0,     "None",    ProtocolType.None),
        new NetworkProtocol(1,     "FTP",     ProtocolType.FTP),
        new NetworkProtocol(2,     "LPD",     ProtocolType.LPD),
        new NetworkProtocol(4,     "TCP",     ProtocolType.TCP),
        new NetworkProtocol(8,     "UDP",     ProtocolType.UDP),
        new NetworkProtocol(0x10,  "HTTP",    ProtocolType.HTTP),
        new NetworkProtocol(0x20,  "SMTP",    ProtocolType.SMTP),
        new NetworkProtocol(0x40,  "POP3",    ProtocolType.POP3),
        new NetworkProtocol(0x80,  "SNMP",    ProtocolType.SNMP),
        new NetworkProtocol(0x100, "Telnet",  ProtocolType.Telnet),
        new NetworkProtocol(0x200, "Weblink", ProtocolType.Weblink),
        new NetworkProtocol(0x400, "TLS",     ProtocolType.TLS),
        new NetworkProtocol(0x800, "HTTPS",   ProtocolType.HTTPS)
      };
    }
    return allPossibleProtocols3;
  }
}

public static NetworkProtocol GetProtocol(ProtocolType protocolType)
{
  return AllPossibleProtocols3.SingleOrDefault(x => x.Type.CompareTo(protocolType) == 0);
}
```

Is it good practice to use JavaScript template literals to achieve more DRY code?

If we follow the DRY principle, then we should avoid repeating things like identifiers, class names, etc.

If we have a condition that checks a boolean in order to hide or show an element by altering the element’s class:

```javascript
if (isVisible) {
  $('#btn-stop').removeClass('hidden');
} else {
  $('#btn-stop').addClass('hidden');
}
```

we can shorten things a little bit by using the ternary operator like this:

```javascript
isVisible ?
  $('#btn-stop').removeClass('hidden') :
  $('#btn-stop').addClass('hidden');
```

but again the element id and the class name we want to alter are duplicated. Both can be eliminated if we use template literals to compose the method name for each case:

```javascript
$('#btn-stop')[`${isVisible ? 'remove' : 'add'}Class`]('hidden');
```
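As a side note, both the DOM and jQuery already provide a toggle that takes the condition as a second argument, which removes the duplication without a computed method name. A minimal sketch using the plain DOM API (`setVisible` is a hypothetical helper name, not from the question):

```javascript
// classList.toggle(token, force): adds the class when force is true,
// removes it when force is false.
function setVisible(el, isVisible) {
  el.classList.toggle('hidden', !isVisible);
}
```

With jQuery the equivalent is `$('#btn-stop').toggleClass('hidden', !isVisible);`.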

Is it good practice to write something like this to achieve DRY, or is it better and more readable to avoid it and write the expressions out in full?

Reducing Exact Cover to Subset Sum in practice!

The reduction of Exact Cover to Subset Sum has previously been discussed on this forum. What I’m interested in is the practicality of this reduction, which I discuss in section 2 of this post. For those not familiar with these problems, I define them and show the reduction Exact Cover $\leq_p$ Subset Sum in section 1. Readers already familiar with the problems and the reduction can skip ahead to section 2.

Section 1

Exact Cover is defined as follows:

Given a family $\{S_j\}$ of subsets of a set $\{u_i,\ i=1,2,\ldots,t\}$ (often called the universe), find a subfamily $\{T_h\}\subseteq\{S_j\}$ such that the sets $T_h$ are disjoint and $\cup T_h=\cup S_j=\{u_i,\ i=1,2,\ldots,t\}$.

Subset Sum is defined as follows:

Given a set of positive integers $A=\{a_1,a_2,\ldots,a_r\}$ and another positive integer $b$, find a subset $A'\subseteq A$ such that $\sum_{i\in A'}a_i=b$.

For the reduction Exact Cover $\leq_p$ Subset Sum I have followed the one given by Karp, R. M. (1972), Reducibility Among Combinatorial Problems.

Let $d=|\{S_j\}|+1$, and let
$$\epsilon_{ji}=\begin{cases}1 & \text{if } u_i\in S_j, \\ 0 & \text{if } u_i \notin S_j,\end{cases}$$
then
$$a_j=\sum_{i=1}^{t}\epsilon_{ji}d^{i-1}, \tag{1}$$
and
$$b = \frac{d^t-1}{d-1}. \tag{2}$$
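To make the reduction concrete, here is a small sketch in JavaScript (the function name and the toy instance are mine, for illustration only) that computes the numbers $a_j$ from equation (1) and the target $b$ from equation (2), using BigInt so that larger values of $t$ stay exact:

```javascript
// sets: array of subsets, each given as an array of 1-based element indices.
// t: size of the universe. Returns the Subset Sum instance { a, b }.
function exactCoverToSubsetSum(sets, t) {
  const d = BigInt(sets.length + 1);               // d = |{S_j}| + 1
  const a = sets.map(S => {
    let sum = 0n;
    for (const i of S) sum += d ** BigInt(i - 1);  // a_j = sum_i eps_ji * d^(i-1)
    return sum;
  });
  const b = (d ** BigInt(t) - 1n) / (d - 1n);      // b = (d^t - 1)/(d - 1)
  return { a, b };
}

// Toy instance: U = {1,2,3}, S1 = {1,2}, S2 = {3}, S3 = {2,3}.
// {S1, S2} is an exact cover, so a_1 + a_2 should hit the target b.
const { a, b } = exactCoverToSubsetSum([[1, 2], [3], [2, 3]], 3);
console.log(a[0] + a[1] === b); // → true
```

Each universe element contributes one "digit" in base $d$, and choosing $d=|\{S_j\}|+1$ guarantees that no carries can occur, so a subset sums to $b$ exactly when the corresponding sets cover every element once.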

Section 2

In practice (meaning for real-world problems) the size of the universe for the Exact Cover problem can be very large, e.g. $t=100$. This means that if you reduce the Exact Cover problem to the Subset Sum problem, the numbers $a_j$ contained in the set $A$ can be extremely large, and the gap between $\min A$ and $\max A$ can therefore be huge.

For example, say $t=100$ and $d=10$; then it is possible to have one $a_j\propto 10^{100}$ and another $a_i\propto 10$. Implementing this on a computer can be very difficult, since in floating-point arithmetic adding a large number to a small one effectively ignores the small number: $10^{16} + 1 - 10^{16} = 0$ in double precision. You can probably see why this is a problem.
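The precision loss is easy to reproduce. A double has a 53-bit significand, so integers above $2^{53}\approx 9\times 10^{15}$ are no longer exactly representable:

```javascript
// In IEEE-754 double precision, 1e16 + 1 rounds back to 1e16,
// so the small addend is silently lost.
console.log(1e16 + 1 - 1e16);         // → 0
console.log(2 ** 53 === 2 ** 53 + 1); // → true
```

An exact-integer type (such as JavaScript's BigInt or an arbitrary-precision library) avoids this, at the cost of arithmetic that slows down as the numbers grow.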

Is it therefore possible to reduce Exact Cover to Subset Sum in a more practical way that avoids the huge numbers, so that the integers in $A$ are of a more reasonable size?

I know that it is possible to multiply both $A$ and $b$ by an arbitrary factor $c$ to rescale the problem, but the fact remains that the gap between the smallest and largest possible integers in $A$ is astronomical.

Thanks in advance!

Is it good practice to create query processing components and index partitions on WFE servers?

What are the best practices for creating query components and index components?

I have the following SharePoint servers in a farm, and I plan to enable search for an Internet-facing publishing site: 3 application servers, 2 WFE servers, and 1 SQL Server cluster.

On the 3 application servers:

  • One server runs Central Administration and other services.
  • The other two app servers are currently empty, with no service applications.

Search topology and search components – App 1 & App 2:

  • Admin component
  • Crawl component
  • Content processing component
  • Analytics processing component

WFE 1 & WFE 2:

  • Query processing component
  • Index component


  1. Is it good practice to create query processing components and index partitions on WFE servers?

  2. If I move the query processing components to the application servers, will users still be able to search the content?