Is Windows Sandbox a viable alternative to conventional VM solutions considering its design?

The idea of having a fast, disposable VM at my fingertips appeals to me very much. It makes adding an extra layer of security to anything I want to do easy – just launch the sandbox application, wait a few seconds, and you’re done. Of course, that assumes the VM actually does the job it’s supposed to do…

A little disclaimer beforehand – I’ve read the article Beware the perils of Windows Sandbox at Magnitude8, which describes how Windows Sandbox comes with NAT pre-enabled, so any malware running on the guest would still get direct access to your intranet, which is already a serious problem in itself. But for the purpose of this question, let us consider only the host–guest scenarios.

Windows Sandbox claims to “achieve a combination of security, density, and performance that isn’t available in traditional VMs” by taking a different approach to memory and disk management. If I understand things correctly, everything that can in theory be safely shared between the host and the guest gets shared. According to the official documentation, the Sandbox shares both the host’s immutable system files and its physical memory pages.

Despite that, Microsoft seems confident that the solution is secure, as implied by one of the bullet points in the Sandbox overview:

Secure: Uses hardware-based virtualization for kernel isolation. It relies on the Microsoft hypervisor to run a separate kernel that isolates Windows Sandbox from the host.

This obviously raises a lot of questions, because at first glance all this resource sharing should greatly increase the attack surface, leaving more room for exploits to be found. Also, even the most sophisticated technology that changes only the implementation and not the design ultimately makes the discovery of an exploit more time- and resource-consuming, but not impossible, doesn’t it?

So, my question is

Would you consider Windows Sandbox a viable alternative to conventional VM solutions in terms of security, or do the shortcuts used to achieve that performance undermine the VM’s core principles too much? Or am I simply misunderstanding the technology, and everything the Sandbox does is technically safe?

An extra question: Does the situation change when we’re talking about a web-based attack, such as opening a malicious site in a browser from within the Sandbox, or does it come down to the same situation as running an infected executable? (disregarding the extra layer of sandboxing done in the browser itself)

Is testing all executables, without considering any other files in the system, enough to deduce whether the system is infected with malware?

I have come to understand that malicious activity is carried out only by software (programs), whereas malicious files (data fed to the software installed on the system) cannot perform malicious activity directly by themselves; they can, however, be responsible for bringing malicious software onto the system (for example via steganography). Hence that software must also be installed (automatically or manually) before it can perform its activity.

If this is true, is scanning software for malware before it gets installed (whether triggered manually or automatically) enough to say that the system is 100% secure (assuming our detector is ideally 100% accurate)?

Which of the following affect processing power, considering one at a time?

  1. Data bus capability
  2. Addressing scheme
  3. Clock speed

Keeping “data bus capability” and “clock speed” constant, manipulating the addressing scheme only changes the overhead of accessing the opcode or operand; it will not make the processor run faster.

I believe it should be –

  1. Data bus capability
  2. Clock speed
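
A rough back-of-the-envelope calculation of peak transfer rate makes the reasoning concrete (the figures below are purely illustrative and not taken from any particular processor):

# Toy calculation: peak transfer rate = (bus width in bits / 8) * clock rate.
def peak_bytes_per_second(bus_width_bits, clock_hz):
    return bus_width_bits // 8 * clock_hz

print(peak_bytes_per_second(32, 100_000_000))  # 32-bit bus at 100 MHz -> 400 MB/s
print(peak_bytes_per_second(64, 100_000_000))  # doubling the bus width doubles throughput
print(peak_bytes_per_second(32, 200_000_000))  # doubling the clock speed does the same

Changing only the addressing scheme leaves both factors in that product untouched.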

All possible replacements of an expression, considering each term separately

Say I have an expression:

f[a, f[a, b]] 

I would like to replace each a with either x or y. By using Replace, I can do

Replace[f[a, f[a, b]], {{a -> x}, {a -> y}}, All] 

Which gives

{f[x, f[x, b]], f[y, f[y, b]]}

However, I’d like to get (in any order)

{f[x, f[x, b]], f[x, f[y, b]], f[y, f[x, b]], f[y, f[y, b]]}

Essentially, each occurrence of a is considered separately.

I’ve tried looking in the documentation for functions similar to Replace, but to no avail. How do I go about this? Thanks in advance!

Efficiently find connected components in an undirected graph, considering transitive equivalence

I have a set of nodes and a function foo(u, v) that can determine whether two nodes are equal. By “equal” I mean a transitive equivalence: if 1==2 and 2==3, then 1==3; and also, if 1==2 and 1!=4, then 2!=4.

Given a set of nodes, I can find all connected components in the graph by passing every possible pair of nodes to the foo(u, v) function and building the needed edges, like this:

import itertools

import networkx as nx
from matplotlib import pyplot as plt

EQUAL_EDGES = {(1, 2), (1, 3), (4, 5)}


def foo(u, v):
    # This function is simplified; in reality it does a complex calculation
    # to determine whether two nodes are equal.
    return (u, v) in EQUAL_EDGES


def main():
    g = nx.Graph()
    g.add_nodes_from(range(1, 5 + 1))
    for u, v in itertools.combinations(g.nodes, 2):
        are_equal = foo(u, v)
        print('{u}{sign}{v}'.format(u=u, v=v, sign='==' if are_equal else '!='))
        if are_equal:
            g.add_edge(u, v)

    conn_comps = nx.connected_components(g)
    nx.draw(g, with_labels=True)
    plt.show()
    return conn_comps


if __name__ == '__main__':
    main()

The problem with this approach is that it performs many redundant checks that I would like to avoid:

1==2  # ok
1==3  # ok
1!=4  # ok
1!=5  # ok
2!=3  # redundant check: if 1==2 and 1==3, then 2==3
2!=4  # redundant check: if 1!=4 and 1==2, then 2!=4
2!=5  # redundant check: if 1!=5 and 1==2, then 2!=5
3!=4  # redundant check: if 1!=4 and 1==3, then 3!=4
3!=5  # redundant check: if 1!=5 and 1==3, then 3!=5
4==5  # ok

I want to avoid O(n^2) time complexity. What is the correct way (or is there an existing function in any Python library) to efficiently find all connected components using a custom equality function?
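
For example, I imagine something along these lines would skip the checks inside an already-discovered group, by comparing each node against only one representative per group (a rough sketch using the same toy foo as above; the worst case is still O(n * number_of_groups) calls):

# Sketch: rely on foo being a true equivalence relation and compare each
# node only against one representative per group found so far.
EQUAL_EDGES = {(1, 2), (1, 3), (4, 5)}  # same toy data as above


def foo(u, v):
    # placeholder for the expensive equality test; made symmetric here
    return (u, v) in EQUAL_EDGES or (v, u) in EQUAL_EDGES


def group_nodes(nodes):
    groups = []  # each group is a list whose first element is its representative
    for node in nodes:
        for group in groups:
            if foo(group[0], node):  # one check per existing group
                group.append(node)
                break
        else:
            groups.append([node])  # no group matched: the node starts a new one
    return groups


print(group_nodes(range(1, 6)))  # -> [[1, 2, 3], [4, 5]]

With this, nodes 2 and 3 are never compared with each other, and members of different groups are compared only through their representatives.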

wget: what security issues am I not considering?

Questions: based on the details provided below, (a) what potential security vulnerabilities and threats might I face in my use of wget and (b) what recommendations would you give for mitigating those vulnerabilities and threats?

Goals: as per a recommendation by @forest, these are my security goals:

  • successfully complete potentially lengthy wget jobs, such as mirroring a website or recursively downloading a large number of files from a website.
  • avoid attracting the attention of target website admin, others.
  • be untraceable to my actual IP.
  • avoid leaving traces that would enable a web admin or anyone else to detect that different jobs are executed by the same person (me). For example, I might mirror a website roughly once a month, but with some variation; I would be displeased if, despite my efforts to change headers and come out of a different Tor exit node, it was clear to the other side that it was the same person. This one is less important than general untraceability.
  • don’t make myself vulnerable to exploits that a malicious actor without a high level of technical skill could pull off.

Background: I work in due diligence and am new to thinking about digital security. I often evaluate content from the websites of sketchy companies. To streamline this work, I use wget and try to do so in a secure and non-alerting manner. I take the following precautions (a rough sketch of the resulting invocation follows the list):

  1. initial evaluation of website’s probable traffic level
  2. only use wget through torsocks
  3. provide randomly selected HTTP headers for each job
  4. random wait between 0 and 600 seconds
  5. all links converted to local references
  6. cron scheduling varies with each ~weekly execution
  7. jobs executed on a Raspberry Pi modified for additional security
  8. the Raspberry Pi disconnected from the network when evaluating content
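
For concreteness, one job’s invocation looks roughly like the sketch below (the URL, output directory, and user-agent pool are placeholders, and the wait values only approximate my 0–600 second range):

# Rough sketch of a single job, not my exact script.
import random
import subprocess

URL = "https://example.com/"  # placeholder target
USER_AGENTS = [  # hypothetical pool rotated per job (precaution 3)
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (X11; Linux x86_64)",
]

cmd = [
    "torsocks", "wget",                      # route the job through Tor (precaution 2)
    "--mirror",                              # recursive mirror of the site
    "--convert-links",                       # rewrite links to local references (precaution 5)
    "--wait", str(random.randint(1, 400)),   # base delay between requests
    "--random-wait",                         # wget varies each delay around that base (precaution 4)
    "--user-agent", random.choice(USER_AGENTS),
    "--directory-prefix", "mirror-output",   # placeholder output directory
    URL,
]
subprocess.run(cmd, check=True)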

Thank you.

Miner transaction selection considering sigops

There is a limit on how many sigops a block can contain (set to 80000?), and also a limit on how many sigops a single transaction can contain (set to 16000?). It therefore seems possible (and if it is not possible, the rest of this question is void, but then how is it prevented?) for an attacker to submit five special transactions that together consume the whole block sigop limit. If those attacking transactions pay very generous fees per byte, and the miner chooses which transactions to include in a block based on the fee-per-byte metric, it would include the attacking transactions, collecting relatively large fees per byte but not filling the block because of the sigop limit. This means the miner could earn more in fees by avoiding the attacking transactions; the question is whether such selection avoidance is implemented by default.
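
To make the metric I have in mind concrete, here is a toy illustration (made-up numbers, an assumed 20-bytes-per-sigop conversion factor, and not a claim about how any miner software actually selects transactions) of how scoring fees against a sigop-adjusted size, rather than raw size, would stop such transactions from looking attractive:

# Toy illustration only: score a transaction by fee / max(size, sigops * BYTES_PER_SIGOP),
# charging it for whichever block resource it exhausts faster.
BYTES_PER_SIGOP = 20  # assumed conversion factor between the byte and sigop limits


def effective_size(size_bytes, sigops):
    return max(size_bytes, sigops * BYTES_PER_SIGOP)


# hypothetical attacker transaction: small, maximal sigop usage, generous fee
attacker = {"fee": 50_000, "size": 400, "sigops": 16_000}
# hypothetical ordinary transaction
normal = {"fee": 5_000, "size": 400, "sigops": 4}

for name, tx in (("attacker", attacker), ("normal", normal)):
    naive = tx["fee"] / tx["size"]
    adjusted = tx["fee"] / effective_size(tx["size"], tx["sigops"])
    print(f"{name}: fee/byte = {naive:.2f}, fee/effective-byte = {adjusted:.4f}")

Under the naive metric the attacking transaction looks ten times better than the ordinary one; under the adjusted metric it looks far worse.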

How do the laws treat storing the country details of visitors to my website? (calculated from their IP addresses) [migrated]

I wanted to add the country of the people who leave a review on a website, but I was wondering how this relates to privacy laws. I have done quite a bit of research into this topic, but there weren’t any solid answers.

Does anyone know whether I am allowed to store the country data of people who post a review? And what rules apply to this in different locations, e.g. Europe and America?

Example data:

rating | comment | createdOn  | countryOfOrigin
5      | "hello" | 00-00-0000 | The Netherlands