Is there any security benefit to winmgmt operating outside of svchost via the command
winmgmt /standalonehost ?
It's clearly useful for changing WBEM authentication levels, but does it have any security benefit beyond that?
- WbemAuthenticationLevelDefault (0), moniker: Default — WMI uses the default Windows authentication setting. This is the recommended setting that allows WMI to negotiate to the level required by the server returning data. However, if the namespace requires encryption, use WbemAuthenticationLevelPktPrivacy.
- WbemAuthenticationLevelNone (1), moniker: None — Uses no authentication.
- WbemAuthenticationLevelConnect (2), moniker: Connect — Authenticates the credentials of the client only when the client establishes a relationship with the server.
- WbemAuthenticationLevelCall (3), moniker: Call — Authenticates only at the beginning of each call, when the server receives the request.
- WbemAuthenticationLevelPkt (4), moniker: Pkt — Authenticates that all data received is from the expected client.
- WbemAuthenticationLevelPktIntegrity (5), moniker: PktIntegrity — Authenticates and verifies that none of the data transferred between client and server has been modified.
- WbemAuthenticationLevelPktPrivacy (6), moniker: PktPrivacy — Authenticates all previous impersonation levels and encrypts the argument value of each remote procedure call. Use this setting if the namespace to which you are connecting requires an encrypted connection.
Does changing the Default Impersonation Level on Windows machines to 2 or 1 help mitigate WMI exploitation?
- wbemImpersonationLevelAnonymous (1), moniker: Anonymous — Hides the credentials of the caller. Calls to WMI may fail with this impersonation level.
- wbemImpersonationLevelIdentify (2), moniker: Identify — Allows objects to query the credentials of the caller. Calls to WMI may fail with this impersonation level.
- wbemImpersonationLevelImpersonate (3), moniker: Impersonate — Allows objects to use the credentials of the caller. This is the recommended impersonation level for Scripting API for WMI calls.
- wbemImpersonationLevelDelegate (4), moniker: Delegate — Allows objects to permit other objects to use the credentials of the caller. This impersonation will work with Scripting API for WMI calls but may constitute an unnecessary security risk.
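For context, both levels above are typically selected in the WMI moniker string itself. A minimal illustrative sketch, pure string construction only (it does not connect to WMI; the namespace and default level names are just examples drawn from the tables above):

```python
# Build a WMI moniker string that selects impersonation and authentication
# levels. The security settings go in braces before the namespace path.
def wmi_moniker(impersonation="impersonate", authentication="pktPrivacy",
                namespace=r"root\cimv2"):
    return (f"winmgmts:{{impersonationLevel={impersonation},"
            f"authenticationLevel={authentication}}}!\\\\.\\{namespace}")

print(wmi_moniker())
# winmgmts:{impersonationLevel=impersonate,authenticationLevel=pktPrivacy}!\\.\root\cimv2
```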
I’m considering making a new language based on SQL SELECT statements to allow users to export CSV data in the manner they please. I’m confident in being able to interface this with a permissions system by inspecting the resulting AST from parsing before turning it into a SELECT statement to execute, so I’m not really concerned about this leading to unauthorized data access.
This language would be pretty much a 1-to-1 mapping of SQL SELECT statements, except for a few changes regarding joins and a few other things.
Users are relatively few and can be easily traced and contacted. It’s not the public at large.
The underlying DB would be MariaDB.
What should I be concerned about from this idea? If it’s a bad idea, why?
I thought about the possibility of making a query that doesn't terminate by using
WITH RECURSIVE, so I'm not going to support that syntax, and I asked the following question on the DBA Stack Exchange to see what other ways a SELECT statement could be made non-terminating (I thought of a few more while writing that question):
What are all the ways that a SELECT statement could be made to not terminate or take a very long time?
Besides that, is there anything more? Any particular risk? Is it possible to build some kind of resource bomb with it, for example one that consumes all available memory?
Access to this language could be put under a permission so only very privileged users could use it, but I wonder if that’s needed.
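As a defense-in-depth sketch to pair with the AST inspection, MariaDB can cap a single statement's runtime with `max_statement_time`. A minimal, hypothetical wrapper; the keyword blocklist and the timeout value are purely illustrative, and real enforcement should happen on the parsed AST rather than on raw text:

```python
# Naive illustration only: a text-level pre-check plus a MariaDB
# per-statement timeout wrapper (SET STATEMENT ... FOR <stmt>).
FORBIDDEN = ("WITH RECURSIVE", "BENCHMARK(", "SLEEP(")

def guard_select(user_sql: str, timeout_s: int = 5) -> str:
    upper = user_sql.upper()
    if not upper.lstrip().startswith("SELECT"):
        raise ValueError("only SELECT statements are allowed")
    for kw in FORBIDDEN:
        if kw in upper:
            raise ValueError(f"forbidden construct: {kw}")
    # MariaDB aborts the statement if it runs longer than timeout_s seconds.
    return f"SET STATEMENT max_statement_time={timeout_s} FOR {user_sql}"

print(guard_select("SELECT name FROM accounts"))
# SET STATEMENT max_statement_time=5 FOR SELECT name FROM accounts
```

A server-side timeout like this also bounds the cases you can't anticipate syntactically, such as pathological join orders.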
I’m using WordPress to host a few sites. Recent versions include a feature called Site Health Status. This information has been valuable in part, but it also rubs me the wrong way that I can’t get it to show "green" because of things I’d consider non-issues 😉
Here is how the "critical issue" looks.
Here’s the relevant text excerpt, because search engines aren’t all too good with indexing text from screenshots:
1 critical issue
Background updates are not working as expected [Security]
Background updates ensure that WordPress can auto-update if a security update is released for the version you are currently using.
- Warning: The folder /vhosts/sitename was detected as being under version control.
- Error Your installation of WordPress prompts for FTP credentials to perform updates. (Your site is performing updates over FTP due to file ownership. Talk to your hosting company.)
/vhosts/sitename is indeed under version control, and the actual blog is under /vhosts/sitename/blog, which is what the web server serves as the webroot. However, /vhosts/sitename/wp-config.php contains the blog configuration. Since WordPress allows it to live outside the webroot, that’s what I opted for, for security reasons. Anyway, the conclusion from this first (yellow) point should be that there’s no way anyone could glean the contents of the version control system, since it lives entirely outside the webroot.
The second (and red) point is about FTP credentials. This one I find particularly unnerving. I have scripts in place, I have 2FA, and the servers in question are only accessible via SSH (and, by extension, SFTP). WordPress doesn’t support SFTP, nor would I want to enable FTP at all. In fact, the files inside the webroot have tight file modes, so that even if a breach occurred, very little could be done. In other words, I update WordPress in a semi-automated fashion, triggered manually. Unlike some WordPress setups with FTP enabled that I have seen or administrated in the past, I haven’t had a breach, going by all the indicators available to me. So to me this is the desired setting. But someone decided to categorize this as a critical issue.
So my questions (two actually):
- Is there a way to dismiss and ignore these exact two items in the future?
- Should I trust a WordPress dev who doesn’t know my exact setup over myself for security advice, or should I spend (mental) energy on actively ignoring the issue (under the assumption that it can’t be dismissed and ignored for the future)?
NB: I am not interested in having the overall feature (or the visible widget) removed. I simply want this feature to be valuable, and that means not raising the alarm when, as far as I’m concerned, nothing is wrong.
I am a web developer, but I have only a rudimentary grasp of security, e.g., be careful to sanitize inputs, store as little user data as possible, encrypt passwords, keep up with security issues of libraries and packages, etc.
Today, I was approached by a client who does financial planning about replacing a spreadsheet he gives clients with a web-based form. The spreadsheet asks users to input certain financial data – e.g., current value of various investment accounts, business interests, etc. These numbers are put into a formula and a value is generated which is supposed to help the user decide whether the consulting could be useful to them.
The phone call was very short, and my questions focused on more mundane matters about user experience, desired UI elements, etc. No commitments have been made, and I’m analyzing the project to see if it’s something I can do. I began to think about potential security issues, and I realized I really don’t know where to start. So far it seems the client wants the form to be accessed via a magic link, and that the user would not enter any personally identifying information. I do not yet know whether my potential client wants to store the generated value, a simple dollar amount representing the ‘benefit’ the user could get by using the service. The impression I got is that my potential client simply wants to use this value as a motivator for clients to inquire further about his services.
My question is this: In this scenario, what security-related matters should I consider?
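One concrete item to reason about early is the magic link itself: it should be unguessable, expiring, and tamper-evident. A minimal sketch using only the standard library (the secret handling and all names are illustrative, not a production design):

```python
import hashlib
import hmac
import secrets
import time

# Illustrative only: in practice this would be a persistent server-side secret.
SECRET = secrets.token_bytes(32)

def make_magic_token(form_id: str, ttl_s: int = 86400) -> str:
    """Create a signed token embedding an expiry timestamp."""
    expires = str(int(time.time()) + ttl_s)
    payload = f"{form_id}.{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify_magic_token(token: str) -> bool:
    """Reject tokens with a bad signature or a past expiry."""
    try:
        form_id, expires, sig = token.rsplit(".", 2)
    except ValueError:
        return False
    expected = hmac.new(SECRET, f"{form_id}.{expires}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and int(expires) > time.time()

token = make_magic_token("intake-form-42")
print(verify_magic_token(token))        # True
print(verify_magic_token(token + "x"))  # False: signature no longer matches
```

Even with no personally identifying information in the form, the link controls who can submit data, so its lifetime and distribution channel (email, SMS) are part of the threat model.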
We have a website using PWA client calls and a mobile app, all using the same APIs, which are exposed to the public. Currently our APIs are not secured, meaning that anyone can inspect an API’s signature via developer tools or a proxy tool and hit the API.
We want our APIs to be hit only by verified clients. Verified clients do not mean logged-in clients; our website can be used by non-logged-in users as well. Clients here means users accessing the website via browsers or the app.
So for that, we are planning to allow only those API calls that carry a registered/enabled token, sent via a header.
Now to generate the token:
- Device —sends Token(B) request—> Server
- Server generates Token(B), stores it in Redis, and returns it
- Device —sends Token(B) enable request—> Server
- Server enables it
- Device sends Token(B) in all subsequent requests
- Server checks whether the token exists in Redis in the enabled state
Since these register/enable token APIs are also exposed publicly, to ensure no one is able to hack this process:
- While enabling the token, we also send an encrypted token(A) along with the actual token(B).
- At the server, we decrypt token(A) and match it against the plain token(B).
Encryption is done using a private key known only to the client and server.
Is this the right approach, or is it vulnerable? The only issue I see is that the register/enable token APIs are exposed publicly. But we have also added the security described above; is that good enough?
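If I understand the flow correctly, the "encrypted token(A) that must match token(B)" check is essentially a message authentication code over token(B). A minimal sketch of the register/enable/verify flow, with an in-memory dict standing in for Redis (the shared key and all names are illustrative; note that any secret shipped inside a public web client can be extracted, which is the core weakness of this scheme):

```python
import hashlib
import hmac
import secrets

# Illustrative only: a key embedded in a browser/app can be extracted.
SHARED_KEY = b"demo-shared-secret"
store = {}  # stands in for Redis: token -> "pending" / "enabled"

def register_token() -> str:
    """Server side: issue a fresh token(B) in the pending state."""
    token_b = secrets.token_urlsafe(32)
    store[token_b] = "pending"
    return token_b

def client_proof(token_b: str) -> str:
    """Client side: what the write-up calls token(A), a keyed digest of token(B)."""
    return hmac.new(SHARED_KEY, token_b.encode(), hashlib.sha256).hexdigest()

def enable_token(token_b: str, token_a: str) -> bool:
    """Server side: enable only if the proof matches and the token is pending."""
    expected = hmac.new(SHARED_KEY, token_b.encode(), hashlib.sha256).hexdigest()
    if store.get(token_b) == "pending" and hmac.compare_digest(token_a, expected):
        store[token_b] = "enabled"
        return True
    return False

def is_enabled(token_b: str) -> bool:
    """Server side: the check performed on every subsequent request."""
    return store.get(token_b) == "enabled"

t = register_token()
print(enable_token(t, client_proof(t)))  # True
print(is_enabled(t))                     # True
```

Seen this way, the scheme raises the bar against casual replay from developer tools, but anyone who reverse-engineers the client recovers `SHARED_KEY` and can mint valid proofs, so it should not be treated as strong client verification.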
Is there a reliable site or resource that lists password managers that have been thoroughly tested by users and that are the most reliable?
- From the point of security: that they do not have access to your data, that it is impossible to hack their data, etc.
- From the point of reliability (say, the software crashes with all of your passwords: what would you do if you had entrusted it with all of them? Or their servers are blocked in your country, or their country blocks the service)
- From the point of usability. Say, you need specific features, such as an Android app with local storage and the ability to create offline password archives, or you want it to generate passwords in certain patterns, or to have both auto-generated passwords and the option to enter passwords yourself.
In general I do not like the idea of entrusting all of my passwords to some software, which is just software and may crash or cease to exist at any time. It seems even more reliable to store all of the passwords in just a notebook, or in a text file with several copies.
Consider the below scenario:
There’s a checkout webpage that can be accessed at checkout.example.com. This page has a decent security policy. But just to prevent any credit card info leakage, the credit card editing panel is in an iframe, and this panel is loaded from cc.example.com.
Now, are there any security benefits to having a good Content Security Policy on cc.example.com when we load it in an iframe on checkout.example.com?
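For what it’s worth, the CSP directive most directly relevant to the embedded panel is `frame-ancestors`, which is served by cc.example.com itself and controls which origins may frame it. An illustrative policy (directive values are examples only, not a recommendation for this specific site):

```http
Content-Security-Policy: frame-ancestors https://checkout.example.com; default-src 'self'; script-src 'self'
```

Here `frame-ancestors` stops other sites from framing the card panel, while `default-src`/`script-src` still protect the panel’s own document if an injection occurs inside the iframe, independently of whatever policy checkout.example.com serves.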
I am currently working on a product similar to smart locks that work with BLE identification. I can handle the hardware of the product very well (mechanics/electronics/embedded software); however, I am totally unaware of how to implement the security layer in the product, or how to validate it if it exists.
In a standard 48-bit MAC address, the 7th most significant bit of the first octet specifies whether it is a universally-administered address (UAA) or a locally-administered address (LAA).
If it is 0, then the MAC address is a UAA, and the first 24 bits are the organizationally-unique identifier (OUI) of the manufacturer of the network interface card (NIC).
If it is 1, then the MAC address is just an LAA.
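That U/L bit corresponds to the value 0x02 in the first octet. A small sketch showing how to test and set it (the addresses used are illustrative):

```python
def is_locally_administered(mac: str) -> bool:
    """True if the U/L bit (0x02 in the first octet) is set, i.e. an LAA."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)

def make_locally_administered(mac: str) -> str:
    """Return the same address with the U/L bit forced to 1."""
    parts = mac.split(":")
    parts[0] = f"{int(parts[0], 16) | 0x02:02x}"
    return ":".join(parts)

print(is_locally_administered("00:1a:2b:3c:4d:5e"))    # False: a UAA
print(make_locally_administered("00:1a:2b:3c:4d:5e"))  # 02:1a:2b:3c:4d:5e
```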
Many drivers and NICs allow users to modify the MAC address of their device.
But it seems Windows does not allow modifying MAC addresses to universal ones (i.e., UAAs): https://superuser.com/questions/1265544/
What is the reason for this restriction? Are there security implications if this were not the case? Or is it perhaps merely to prevent someone from spoofing a device as some legitimate company’s network communications product (to their ISP)?