How should you configure PowerShell logging?

I’m currently trying to figure out the best way to configure Windows PowerShell logging, so that

  1. it is secure (attackers cannot extract sensitive data from it)
  2. it helps in DFIR (digital forensic and incident response) cases

The CIS Benchmark for Windows 10 (latest is v1.5.0 for release 1803) recommends completely disabling PowerShell logging because of the insecure default ACL in Windows, which allows essentially everyone to read the logs.

However because I know how valuable such a log can be in a DFIR case, I’d prefer to enable as much logging as possible but secure the access to the logs.

I found a good blog article on Microsoft TechNet which describes a way to configure the SDDL (Security Descriptor Definition Language): MS Technet Blog: Securing Your PS Operational Logs
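As an illustration of that approach, a channel's ACL can be tightened with `wevtutil` by setting a custom SDDL on the PowerShell Operational log. The SDDL string below is only a hedged example (it grants access to SYSTEM and Administrators and omits the usual read grant for ordinary users); verify the exact rights bits and trustees against your own policy before applying anything like it:

```shell
# Set a restrictive channel-access SDDL on the PowerShell Operational log.
# SY = SYSTEM, BA = Builtin Administrators; 0x7 = read | write | clear.
# This SDDL is an illustrative example, not a vetted baseline.
wevtutil set-log "Microsoft-Windows-PowerShell/Operational" /ca:"O:BAG:SYD:(A;;0x7;;;SY)(A;;0x7;;;BA)"

# Inspect the resulting configuration, including the channelAccess value:
wevtutil get-log "Microsoft-Windows-PowerShell/Operational"
```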

Does anyone have experience with PowerShell logging configurations regarding these aspects?

Passwordless flow: What to do when user enters an invalid email when logging in

I’m planning to make my app passwordless, and I’ve seen sites implement things differently when someone tries to log in with an email they haven’t actually used for an account on the site before.

On Medium, if I put in a new email I’ll get the message “We just emailed a confirmation link to youremail@gmail.com. Click the link to complete your account set-up.”

This makes the signup process fast, but I’m not sure this message is strong enough to make it clear to some users that a new email was being used. This could frustrate users who had forgotten which email was used for their existing account and would now take the time to realize they’re using a new account before trying to log in again with the right email. However, that scenario may not occur often enough that it’s worth designing the UX to prevent it.

Zeit does design for this: if I don’t have an account and I put in a new email, I’ll get the message “There is no ZEIT account associated with this email address. Continue with signup?” and I can click to sign up.

This helps one realize they used the wrong email, but it adds an extra step to signing up, and I think it could confuse users who weren’t paying attention to the fact that they clicked login rather than signup.

What do you think is a better flow and why?

How to check if user is logged in after logging in using HTTP POST?

I’m developing a scraping app to extract some information from a site. To get that information I have to be logged in to that site.

So I use HTTP POST, pass the data needed for login using FormData, and log in successfully, so I can browse the private content of that site.

My question is: how can I tell whether the user is logged in? What is a simple way to do that, using session cookies or something like that?

I’m currently checking the connection by sending an HTTP GET request to a URL that I know is available only to registered users.

So before I try to log in again, I use this method, isLoggedIn, to check the connection. But it is not perfect; I mean, it seems kind of tricky and not the best way to do that.
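A minimal sketch of that heuristic in Python (the question doesn’t name a language, so every name here is illustrative): probe a members-only URL and treat a 200 response that was not redirected back to the login page as “logged in”.

```python
def is_logged_in(status_code: int, final_url: str, login_url: str) -> bool:
    """Heuristic session check after probing a members-only URL.

    status_code: HTTP status of the probe response.
    final_url:   URL the probe ended up at after following redirects.
    login_url:   the site's login page.

    Logged in == the probe succeeded AND we were not bounced back
    to the login page.
    """
    return status_code == 200 and not final_url.startswith(login_url)


# A probe that gets redirected to the login page means the session expired.
print(is_logged_in(200, "https://example.com/account", "https://example.com/login"))
print(is_logged_in(200, "https://example.com/login?next=/account", "https://example.com/login"))
```

A more robust alternative is to check the probe response body for a marker that only appears on authenticated pages (e.g. a logout link), since some sites return 200 even for the login form.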

After logging in, going to black screen

I recently tried to update my graphics drivers without fully knowing what I was doing, and after downloading and installing the updates for Ubuntu, I restarted my HP EliteBook 8440p laptop. Now, when I try to log back in (I’ve tried on both of my accounts), it goes to a black screen and sits there indefinitely.

I’ve scrolled through several threads on this forum and others about problems similar to mine, such as the login loop issue, but I don’t seem to have a loop like that. I also tried Ctrl+Alt+F2 to attempt to restore my graphics drivers, but the text in the terminal is cut off at the top of the screen, so I can’t tell what is there or whether I logged in. I’ve gone into the BIOS and looked around, but I’m an amateur at this, obviously, so I didn’t see anything that would help.

Any help would be appreciated on how I can fix this, or whether it is fixable.

If I haven’t given enough info about my computer (an HP laptop running Ubuntu, with an Nvidia graphics card), please tell me what else I need to post; I’m not sure what information I need to share to get help fixing this.

Do I use Repository or Service Object to Perform Logging?

I’m working in Java Spring, and I have typical service and repository layers. The repository grabs a JSON response and passes it along to the service; the service maps the repository response to a DTO.

I also need to perform some event logging afterwards (send these events to an auditing REST service), which requires some properties that are not part of the DTO, but rather part of the repository response object. Note that these particular properties are actually used as part of the business logic to perform the mapping.

So, you may say that for obvious reasons I should just go ahead and use the repository response object to perform this event logging, since some of the properties are not present in the DTO.

However, mapping the repository response to the DTO requires quite a bit of calculation, business rules, etc., which the event logging now also needs if I use only the repository response. In other words, I have to execute again the same business rules I performed during mapping and use them for the event logging process, all because the event logging needs a few properties that the repository response object has but the DTO does not.

To make matters worse, the same DTO object is used by a few controllers to ultimately send results back as part of the JSON.

There are two solutions I pondered:

  1. Include the needed properties in the DTO object so they can be used by the event logging process, but omit them during marshalling/unmarshalling with the JSON library. If additional event logging scenarios are introduced with more properties missing from the DTO, I have to keep adding them here
  2. Do the portion of logging that requires those properties missing from the DTO in place, while performing the mapping. The issue is that I’m now heavily coupling the mapping from one object to another with partial or complete event logging, which will be terrible for unit testing and the general laws of the universe. Also, as additional event logging scenarios come along, this coupling becomes deeper and deeper
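For concreteness, option 1 could look like the following sketch (written in Python purely for brevity, though the question concerns Java Spring; the class and field names are made up): the DTO carries the audit-only properties but mutes them when marshalling to JSON.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical properties needed only by event logging, never by API clients.
AUDIT_ONLY = {"internal_account_ref", "source_system"}

@dataclass
class CustomerDTO:
    name: str
    balance: float
    # Present for the event logging process, muted during marshalling:
    internal_account_ref: str = ""
    source_system: str = ""

    def to_json(self) -> str:
        # Marshal the DTO while omitting the audit-only fields (option 1).
        public = {k: v for k, v in asdict(self).items() if k not in AUDIT_ONLY}
        return json.dumps(public)

dto = CustomerDTO("Alice", 42.0, internal_account_ref="ACC-7", source_system="legacy")
print(dto.to_json())  # {"name": "Alice", "balance": 42.0}
```

In Java the same effect is commonly achieved with serialization annotations on the audit-only fields; the drawback the question already notes remains, namely that every new audit scenario grows this list.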

I wanted to know: are there other solutions/design patterns I can utilize that would be more sensible, extensible, and maintainable?

Is it worth logging HTTP requests when they enter an API server?

I’m designing an API and have reached the topic of logging. I’m going to store my logs in Elasticsearch.

I’m certainly going to do some logging at the time the HTTP response is sent back to the client, with info such as processing time, response code, user id, URL.

Is it best practice to also send a record to the logging system right when the HTTP request enters the API server?

What I have in mind here are situations when the response never materializes, e.g. because the server dies, or takes forever processing the request (e.g. due to bad business logic). If this occurred, I’d have no record at all of the client making a request.
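One common way to get that entry record is to log a line, tagged with a request id, before any processing begins, so a crash or hang still leaves a trace. A minimal sketch (none of these names come from a specific framework; they are all illustrative):

```python
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("api")

def handle(method: str, url: str, user_id: str, process):
    """Wrap a request handler so an entry record exists even if
    `process` never returns (server death, runaway business logic)."""
    request_id = uuid.uuid4().hex
    started = time.monotonic()
    # Entry record: written BEFORE any processing happens.
    log.info("request_start id=%s method=%s url=%s user=%s",
             request_id, method, url, user_id)
    status = process()
    elapsed_ms = (time.monotonic() - started) * 1000
    # Exit record: carries the same id plus outcome and timing.
    log.info("request_end id=%s status=%s time_ms=%.1f",
             request_id, status, elapsed_ms)
    return status

handle("GET", "/orders/17", "u42", lambda: 200)
```

If the process dies mid-request, Elasticsearch will contain a `request_start` with no matching `request_end`, which is exactly the evidence the question is after.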

API logging: request, response, or both?

I’m designing a REST API and have reached the topic of logging. I’m going to store my logs in Elasticsearch.

Is it best practice to log both HTTP request and response, with some correlation id to match them in the logs? What are the advantages and challenges of doing it this way, as opposed to only logging requests or responses?

(I have some thoughts on this of my own: suspect it is best practice and see some advantages & challenges, but feel there’s a lack of an expert treatment of this subject online. Hoping this question will result in one.)

Edit:

I’m NOT asking about whether to store in the logs the contents of every request and response. I’m asking whether to store some basic record for each request and response (e.g. timestamp, URL, IP, response code, some form of user id), or maybe just for requests, or maybe just for responses.
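A hedged sketch of the "both, with a correlation id" variant: each request gets an id at entry, the same id is attached to the response record, and the two can later be joined in Elasticsearch. The field names here are illustrative, not a standard:

```python
import uuid
from datetime import datetime, timezone

def request_record(method, url, client_ip, user_id):
    """Basic per-request record; the correlation id ties it to the response."""
    return {
        "type": "request",
        "correlation_id": uuid.uuid4().hex,
        "ts": datetime.now(timezone.utc).isoformat(),
        "method": method,
        "url": url,
        "ip": client_ip,
        "user_id": user_id,
    }

def response_record(correlation_id, status_code, duration_ms):
    """Matching response record: same correlation id, outcome fields only."""
    return {
        "type": "response",
        "correlation_id": correlation_id,
        "ts": datetime.now(timezone.utc).isoformat(),
        "status": status_code,
        "duration_ms": duration_ms,
    }

req = request_record("GET", "/orders/17", "203.0.113.9", "u42")
resp = response_record(req["correlation_id"], 200, 12.5)
assert req["correlation_id"] == resp["correlation_id"]  # join key for the logs
```

An advantage of logging both sides is that a request with no matching response pinpoints exactly which calls were lost; the main costs are roughly doubled log volume and the need to propagate the id through the handler.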

Logging Config – in code vs in config file

The built-in logging module of Python 3.x allows for three ways to define a custom logger:

  1. INI-formatted file
  2. dict, json, yaml
  3. python (directly in code)

In my opinion, it is easier to define a custom logger’s config directly in the code, because this does not require anyone to understand other formats (it is not obvious what the dependencies are in the INI, YAML, and JSON formats). If the config is written in Python, then any coder can understand how the Formatter, Handler, and Logger relate to each other programmatically, if not conceptually.

Are there reasons why it is better to define the logging config in a separate config file? I’m guessing this would have to do with wanting multiple deploys with different logging levels, but I can’t imagine a compelling use case for this.
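For comparison, here is the same trivial setup in both styles: the dict below is the shape `logging.config.dictConfig` accepts (and could equally be loaded from a JSON/YAML file), and the code form builds the identical objects directly:

```python
import logging
from logging.config import dictConfig

# Declarative style: a dict that could live in a JSON or YAML file.
dictConfig({
    "version": 1,
    "formatters": {"simple": {"format": "%(levelname)s %(name)s: %(message)s"}},
    "handlers": {"console": {
        "class": "logging.StreamHandler",
        "formatter": "simple",
    }},
    "loggers": {"myapp": {"handlers": ["console"], "level": "DEBUG"}},
})

# In-code style: the equivalent objects, wired up directly.
logger = logging.getLogger("myapp.incode")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s: %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
```

One compelling case for the file-based styles is exactly the multi-deploy scenario: ops can raise a logger to DEBUG in production by editing a config file, without touching or redeploying the code.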

SharePoint 2013: “Forced due to logging gap, cached” error when trying to open page in another language

I have a page that was working in both the English and Arabic versions. Last week we got some problem on the server, so we restored it to one week before. The English page works fine, but when I try to open the Arabic version I get this error:

[Forced due to logging gap, cached @ 12/07/2019 09:39:25.00, Original Level: Verbose] SQL connection time: 0.1406009 

Category : Database , EventID: ahjqp.

[Forced due to logging gap, Original Level: Verbose]GetUriScheme[/ar/Pages/Home.aspx]  

Category : General, EventID: g3ql.

I have read some answers, but I don’t know if it’s the same problem.

Does the responsibility of logging method/function calls fall to the caller or the callee?

What are the pros and cons to each of the following ways of logging a function call? This code is written in Ruby but I feel the question applies to programming in general.

Responsibility belongs to the callee

    class Foo
      def do_cool_stuff
        # Do some cool stuff
        Logger.info("I did some cool stuff.")
      end
    end

    class Bar
      def self.trigger_cool_stuff
        foo = Foo.new
        foo.do_cool_stuff
      end
    end

    Bar.trigger_cool_stuff

Responsibility belongs to the caller

    class Foo
      def do_cool_stuff
        # Do some cool stuff
      end
    end

    class Bar
      def self.trigger_cool_stuff
        foo = Foo.new
        foo.do_cool_stuff
        Logger.info("I did some cool stuff.")
      end
    end

    Bar.trigger_cool_stuff

My initial thought is that it’s a better idea for the callee to log its own actions, because it’s just clearer that your logs are coming from the bit of code that is actually performing the actions that you’re logging about in the first place. However, it does seem to violate Single Responsibility Principle. Additionally, if I wanted to control some state regarding the logs, like whether or not the log happens at all, the callee now must also be aware of that, which also violates SRP.

Is there a generally accepted way of doing this, or is it more down to a matter of opinion/preference?