Suppose I trained a Gaussian process classifier with a linear kernel (using the GPML toolbox) and obtained a weight for each input feature.
My question is then:
When, if ever, does it make sense to interpret these weights as indicating the real-world importance of each feature, or to interpret them at the group level by averaging the weights over a group of features?
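For context on where such weights come from: with a linear kernel k(x, x') = xᵀx', the latent function is linear in the inputs, so the dual (kernel) coefficients can always be collapsed into explicit per-feature weights. A minimal numpy sketch, using ridge-style regression for simplicity and entirely synthetic data (all names and values here are illustrative, not GPML's API):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))          # 50 samples, 3 features
y = X @ np.array([2.0, -1.0, 0.0]) + 0.1 * rng.normal(size=50)

K = X @ X.T                           # linear-kernel Gram matrix
alpha = np.linalg.solve(K + 1e-2 * np.eye(50), y)  # dual coefficients

# Effective primal weights implied by the kernel machine:
w = X.T @ alpha

# Predictions via the kernel and via the recovered weights agree exactly:
X_new = rng.normal(size=(5, 3))
pred_kernel = (X_new @ X.T) @ alpha
pred_weights = X_new @ w
print(np.allclose(pred_kernel, pred_weights))  # True
```

The equivalence shows the weights are well defined mathematically; whether they reflect "real-life importance" still depends on feature scaling and correlations between features, which is presumably the crux of the question.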
As part of a lab exercise, I have a privileged reverse shell on a Windows 10 box and am trying to migrate my process into lsass.exe to dump credential hashes. Windows Defender is detecting this and automatically rebooting the machine. I did the same thing on a Windows 8.1 machine and it worked just fine. Is this different behaviour in Windows 10? Do you have any suggestions on how I can get around this to retrieve the hashes?
My company is redesigning the onboarding process for Emergency Service within the app. The process is as follows:
- Page 1: intro to Emergency Service
- Page 2: TOS
- Page 3: permissions needed from the user
- Pages 4-9: collection of personal/medical data from users, for use in an emergency situation
There’s an exit button on pages 1-3 but not on pages 4-9. However, a “Back” button is present on pages 4-9.
My rationale for removing the exit button from pages 4-9 is as follows:
1. Users have three opportunities to exit the process at the beginning.
2. I checked 20 other apps and found that none of them has an exit button once you have agreed to the TOS and it starts asking for your detailed information.
Could anyone weigh in on, or validate, the user experience of having no exit button after page 3?
Any feedback is appreciated.
I have a process that loads into memory like any other process. It contains a special key. Our goal is to read this key in memory, or while it is in transit across the data bus from the CPU. The catch is that our solution has to be stealthy and undetected by the kernel, so no DMA, drivers, or anything that invokes traditional system calls/routines. Anything that leverages the kernel can be detected by the kernel.
Assume the system in question is infected by a rootkit, and that the rootkit employs everything specified here plus further unknown anti-debug routines: https://github.com/LordNoteworthy/al-khaser So all the traditional Windows routines (like ObRegisterCallbacks) are hooked.
Is there a digital forensics device for this use case? As far as I can tell, the conventional means of volatile memory collection for forensic purposes (scraping/dumping) can be detected.
Note1: There is a “magic” number associated with the bytes surrounding the key, so we don’t have to worry about being overwhelmed by heaps of data, we can filter for those magic bytes.
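The filtering step in Note1 is straightforward once you have any raw image of memory: scan for the magic marker and take the bytes that follow. A minimal sketch, where `MAGIC`, `KEY_LEN`, and the key's position relative to the marker are all assumptions for illustration:

```python
# Hypothetical layout: the key immediately follows a fixed magic marker.
MAGIC = b"\xde\xad\xbe\xef"
KEY_LEN = 32

def find_keys(image: bytes) -> list[bytes]:
    """Return the KEY_LEN bytes following each occurrence of MAGIC."""
    keys, pos = [], 0
    while (pos := image.find(MAGIC, pos)) != -1:
        start = pos + len(MAGIC)
        keys.append(image[start:start + KEY_LEN])
        pos = start
    return keys

# Simulated dump: padding, marker, key, padding.
dump = b"\x00" * 100 + MAGIC + b"K" * KEY_LEN + b"\x00" * 100
print(find_keys(dump))  # one 32-byte candidate key recovered
```

This only addresses the easy part (triage of the captured image); the hard part of the question, capturing memory without touching the kernel, is untouched by it.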
Note2: We could in theory use non-volatile memory for RAM, then shut off the computer while the key is in it. However, the key is only good as long as the process remains open; it is randomly generated, so the key cannot be cracked. This is also somewhat of a side-channel-attack question, I suppose. I assume reading the cache is out of the question, since it’s usually embedded on the CPU or motherboard.
Note3: Running this in a hypervisor might be the way to go, but that adds the extra hurdle of avoiding sandbox/virtualization detection. I would rather use a solution that avoids virtualization.
Note4: I originally asked in the EE section about using some type of logic analyzer to read the key as it came over the PCIe bus, but tapping the bus would disturb the signal itself (changing its resistance and impedance characteristics).
I’m preparing for an introductory information security examination at university, and this is one of the tutorial questions on network protocol attacks. I tried (a) and came to this conclusion: since EPbX() is a public-key encryption operation, C can decrypt any encrypted message to recover the original message m, as though it were either of the two people exchanging messages.
However, when I re-read the question, decryption requires the private keys, which means it might be impossible to recover the message unless C impersonates the other party to each of A and B, takes part in the key exchange, and ends up generating two key pairs, which seems redundant. This confusion prevents me from doing the later part (b).
Can anyone suggest the thought process and solution to the above problem?
Here is the question description. Sorry, the actual paper document is not formatted in a way that allows copying it over.
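Since the paper itself isn't reproduced here, the following is only a generic sketch of the man-in-the-middle pattern this kind of question usually targets: C never decrypts with anyone else's private key; instead C substitutes its own public key during the unauthenticated exchange, reads the plaintext, and re-encrypts for the real recipient. Toy RSA with tiny, insecure primes, purely for illustration:

```python
# Toy RSA (insecure, tiny primes) illustrating a MITM on an
# unauthenticated public-key exchange between A and B.
def keygen(p, q, e):
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)            # modular inverse (Python 3.8+)
    return (n, e), (n, d)          # (public key, private key)

enc = lambda m, pub: pow(m, pub[1], pub[0])
dec = lambda c, prv: pow(c, prv[1], prv[0])

pub_b, prv_b = keygen(61, 53, 17)  # B's real keypair
pub_c, prv_c = keygen(67, 71, 17)  # C's own keypair

# A asks for B's public key; C intercepts and hands over pub_c instead.
m = 42
c_to_c = enc(m, pub_c)             # A unknowingly encrypts for C
recovered = dec(c_to_c, prv_c)     # C reads the plaintext...
c_to_b = enc(recovered, pub_b)     # ...and re-encrypts for B
print(dec(c_to_b, prv_b))          # B decrypts 42, none the wiser
```

Note that C ends up with exactly one keypair of its own (or one per impersonated direction, if the exchange is mutual), which matches the "impersonate both sides" intuition in the question without any private key ever being "broken".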
I’ve been trying to understand how operating systems protect processes from each other. My understanding of Windows security is that a process can call OpenProcess() (thereby gaining read and write access to another process’s virtual memory) as long as it has SeDebugPrivilege and an integrity level at least as high as that of the target process.
It also looks as if a process can call OpenProcess() without SeDebugPrivilege when targeting a process that belongs to the same user.
FYI: my testing to confirm this was done on a Win2008 R2 server. My method for testing whether a process could write to another process was using Meterpreter’s migrate function, which (among other things) makes an OpenProcess() call to a target process to create a Meterpreter thread inside of it.
- Are the statements above correct, or have I screwed up my testing somewhere?
What are the specific criteria that must be met for an OpenProcess() call to succeed? At the moment, it looks like the caller:
- has a sufficient integrity level
- has SeDebugPrivilege OR runs under the same SID as the target process
If this is true, isn’t there a huge amount of information an attacker can read or tamper with on a computer they’ve compromised without having root access?
- What is the Unix equivalent of this? By default, can all of a user’s processes read and write to each other? Is this true for root too?
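On the Unix side, same-user access is gated by the kernel's ptrace access-mode checks (and, on many Linux distributions, the Yama `ptrace_scope` sysctl, which by default restricts even same-UID attachment to parent/child relationships). A process can always read its own address space through `/proc/self/mem`; a minimal Linux sketch, noting that reading *another* same-UID process this way additionally requires passing those ptrace checks:

```python
import ctypes
import os

# A process reading its own memory through /proc/self/mem.
buf = ctypes.create_string_buffer(b"secret-key")
addr = ctypes.addressof(buf)          # virtual address of the buffer

fd = os.open("/proc/self/mem", os.O_RDONLY)
data = os.pread(fd, 10, addr)         # read 10 bytes at that address
os.close(fd)

print(data)  # b'secret-key'
```

Root (more precisely, `CAP_SYS_PTRACE`) bypasses both the UID check and Yama, which is roughly the Unix analogue of SeDebugPrivilege.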
I want to design a page with fields such as trip number, estimated time, and a process icon. Can anyone suggest an effective solution?
I have a `description` character array in a multidimensional binary tree node in my small neural network library. I want to process data using a function, so I can do this:

```c
nX = parse("add 30 dimensions to data structure");
process(nX);                 /* takes data and processes the output */
MDMDBT->description = nX;    /* store the result in the node */
```
I just started working for a web development agency. We work mainly with government, so we want to position ourselves in that market with a professional, creative, and functional website. We have a great blog and want to highlight our content and whitepapers, plus who we are, our vision, and our services.
We just started the Discovery phase of the project and are in the process of creating personas and user journey maps. I work there in project support and partly as a junior UX designer, alongside the contract designer, who is more of a UI designer than a UX designer, so it’s a bit of a learn-as-we-go process. BUT we have a pretty strict deadline of Christmas, by which we’d like the final designs signed off so we can start developing the site in January.
Our current plan/timeline is:
- initial planning, including pain points and wants for our existing website (done)
- persona planning (done)
- user journey mapping and a value journey canvas for our 4 personas (in progress)
- content analysis
- IA card sorting, tree testing, review and iteration
- component mapping and content architecture
- sketching session for new components
- wireframing all vanilla pages
- wireframing custom pages
- prototyping key user journeys for customer testing
- customer testing script
- audience testing
- preparing a customer insights deck to present
- presentation of customer insights
- refining prototypes/wireframes
- UI: theme and brand
- UI review
That is quite a lot, and it definitely won’t get done by Christmas this year. So my question is: what is the absolute minimum process required for a website redesign?
My boss has insisted for a while that our login process be divided into two steps on separate pages: the first page asks for the user’s e-mail address; the second page either displays “user found” and asks for the password, or “user not found” and asks for the e-mail again.
He insists it is much easier for our users, who often forget their credentials or whether they even have an account. I’ve long been against this approach because it forces every user through two steps every time they log in. (I also wonder if it’s a security concern, since it confirms whether an e-mail address exists in our system.) I would prefer a more standard login page with e-mail (all of our usernames are e-mails) and password, and one very clear “forgot password?” link.
Neither of us has any hard data to support our theories, and we have more important things to do than A/B test something he doesn’t even think is a problem. I was just wondering if anyone here could provide arguments or hard data for or against either approach. I enjoy thinking about UX design, but I am not an expert.