Is this a secure way to log users in to a web app and an API?

I am splitting a monolithic Laravel application, which has both API and web routes, into two separate applications.

The browser makes AJAX calls to the new API service (all /api routes are routed to the new API service).

The client-side JS login is done through a call to the API service at /api/login with the credentials; a JWT is returned and stored in localStorage.

This token is then sent with every request to /api/*, via the Authorization: Bearer {token} header.

In the web service, the session and route authentication are handled by Laravel 5.5, through a custom middleware I created that checks the user roles and whether the user is logged in. It uses the same database as the API service (users, roles tables, and so on).

The route authentication logic makes the following examples possible:

url /myhome should be allowed for roles:* (any logged-in user)

url /admin should be allowed for roles:admin (logged in and must have the admin role)

url /whatever should be allowed for roles:admin,finance (logged in and must have the admin or finance role)

url /login has no roles middleware; it is a public route for anyone …

Right now, when a user clicks the “log in” button in the browser, the following steps happen:

  1. I make a first call from the client side to /api/login and get back the JWT, which I store in localStorage.
  2. When the AJAX call succeeds, I submit the form (a POST request to /login) via JavaScript, form.submit().

The /login route is handled by the web service (Laravel): it authenticates the user once more, creates the session, and redirects to /home.

From then on the user can access the routes that require login (those using the roles:* middleware), as well as the routes for which the user has the specific role needed.

I find this an extremely ugly and inefficient solution, because there are two requests (one to /api/login, then one to /login) and the web service needs database access.

My goal is to have database access only in the API service; the web service will have at most a cache layer for the user session (which I want to use for the server-side role/login route authentication, as in the examples above).

My proposed solution:

When the user submits the login form, there will now be only one POST request, to /login (which is handled by the Laravel web service).

The web service will call the API service with the credentials and get the JWT back, then create the user session. Now I need a way to return this JWT to the browser and use it in JavaScript; I am thinking of putting it in a cookie.

Then the JS HTTP client will check whether the JWT cookie is set; if so, it will read it, store it in localStorage, and delete the cookie.

The API service accepts the token only via the Authorization header, so I think this is CSRF-safe (the JWT cookie is never read for authentication; the JWT is used only from the Authorization header, which is set by the axios JS client. I will also delete the cookie after reading it).

What do you think? Does this solution make sense, or does it have any security flaws?

I would also really appreciate any other suggestions.

Thanks in advance

Logging SFTP File Transfers in Docker container

I want to log SFTP file transfers. My sshd_config is:

    Protocol 2
    HostKey /etc/ssh/ssh_host_rsa_key
    HostKey /etc/ssh/ssh_host_ed25519_key
    PermitRootLogin yes
    AuthorizedKeysFile      .ssh/authorized_keys
    PasswordAuthentication yes
    UsePAM yes
    AllowTcpForwarding no
    X11Forwarding no
    UseDNS no
    Subsystem sftp /usr/libexec/openssh/sftp-server -l INFO -f AUTH

The host keys were generated like this:

    $ ssh-keygen -N "" -t rsa -f ssh_host_rsa_key
    $ ssh-keygen -N "" -t ed25519 -f ssh_host_ed25519_key

The SFTP server runs in a Docker container created by this Dockerfile:

    FROM centos:7

    RUN yum install -y openssh-server
    RUN echo 'root:123456' | chpasswd

    COPY ssh_host_rsa_key /etc/ssh/ssh_host_rsa_key
    COPY ssh_host_ed25519_key /etc/ssh/ssh_host_ed25519_key
    COPY sshd_config /etc/ssh/sshd_config

    RUN chmod 400 /etc/ssh/*

    EXPOSE 22

    CMD ["/usr/sbin/sshd", "-D", "-e"]

And this is the only output I get after a successful login and file upload:

    $ docker run --rm --name sftp-server -p "2222:22" test/sftp
    Server listening on 0.0.0.0 port 22.
    Server listening on :: port 22.
    Accepted keyboard-interactive/pam for root from 172.17.0.1 port 58556 ssh2

However, I would expect output like that described in the OpenSSH wiki:

    Oct 22 11:59:50 server internal-sftp[4929]: open "/home/fred/foo" flags WRITE,CREATE,TRUNCATE mode 0664
    Oct 22 11:59:50 server internal-sftp[4929]: close "/home/fred/foo" bytes read 0 written 928

What might be the problem with my setup?

How to stop logging for a specific user?

I’m the root user on my server, and there are a bunch of users on the system. As you know, when you type w you can see who is and who isn’t online. With the last command you can check which users last logged in to the system, based on /var/log/wtmp.

Is there any way to stop all logging throughout the system for a specific user?

I know I can do something like cat /dev/null > /var/log/wtmp, but that removes all of the logs.

It can be done by rootkits like vlany, but how can we do it without them, i.e. with plain commands?

Examples of handling exceptions in ways other than logging

This is something I haven’t quite grasped.

You catch exceptions if you can “handle” them. For example, you check whether a file exists and perform an action if it does, but if the file has since been deleted, an exception is thrown; this could be “handled” by prompting for different input, etc. Other examples are using a different web service if one is down, or anything built with the Polly framework.
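To make the “fallback” idea concrete for myself, I picture something like this (a rough Python sketch just to illustrate; the service clients are made up, and in my real C# code this would probably use Polly):

    import logging

    # Made-up example: recover from a failure by calling a backup service,
    # instead of only logging the exception and giving up.
    def fetch_exchange_rates(primary_service, backup_service):
        try:
            return primary_service.get_rates()
        except ConnectionError as exc:
            logging.warning("primary service failed: %s", exc)
            # the "handling" is the fallback call, not the log line
            return backup_service.get_rates()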

However, are there examples where you can handle an exception with anything more than logging? I just seem to have a mental block with this. So basically, I am looking for examples of any code other than logging in a catch block.

Thanks

Simple Audit Logging Design

I have a C# MVC application. One of the requirements is to maintain an audit log of everything that happens on a particular ‘entity’ page. To make the example concrete, let’s say there is a customer page, with sub-entities like addresses and phone numbers.

I have an audit service that logs a particular action type, so in the GET action that displays the customer page, I log an ‘access’ audit record. Similarly, when a user creates or updates a customer, I have an audit logging event in the respective controller method.

The design issue I am encountering, and would like some advice on, is how to handle the ‘access’ audit event. When the user first lands on the page, I capture and log the access audit event I am looking for. However, when the user interacts with the page and, say, adds an address or a phone number to the demographic table, I get another access audit recorded, because after the user adds a sub-entity record to the parent entity (the customer in this example) a redirect fires back to the customer’s GET method and another access audit record is logged.

Therefore, in a short period of time, I can record multiple accesses to the same page. This goes beyond the intent of the access audit record. I simply wish to show that the ‘customer’ page was accessed by the user at a particular date and time. Logging the access every time a Post-Redirect-Get cycle occurs on the page is both redundant and unnecessary. What I want is an audit of the access, not a record of every page view (which is what I have now).

What would be a better approach to this? I hope I have explained the design issue appropriately. I could just not worry about it, and when I display the audit log to an admin, filter the ‘access’ records by time frame, i.e. only show an access record if the times are more than, say, 10 minutes apart. But I feel like there may be a better solution that I’m not seeing.
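For illustration, the time-based filtering I have in mind would look roughly like this (a Python sketch just to show the idea; the record fields are hypothetical and my real code is C#):

    from datetime import timedelta

    # Keep only 'access' audit records that are at least 10 minutes apart
    # per (user, entity), passing every other record type through unchanged.
    def collapse_access_records(records, window=timedelta(minutes=10)):
        last_access = {}  # (user, entity_id) -> timestamp of last kept access
        filtered = []
        for rec in sorted(records, key=lambda r: r["timestamp"]):
            if rec["type"] != "access":
                filtered.append(rec)
                continue
            key = (rec["user"], rec["entity_id"])
            previous = last_access.get(key)
            if previous is None or rec["timestamp"] - previous >= window:
                filtered.append(rec)
                last_access[key] = rec["timestamp"]
        return filtered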

How should we do error/usage-logging and business metrics logging in one application / environment?


Challenge

We offer several services that give our clients more insight into their data (sales, inventory, customers, leads). We are currently investigating ways to get a better grip on how our deliverables are performing, e.g.: what errors occurred in an algorithm, how many times a specific dashboard was used, and how much business value was added by an application (explained further below). We want to do this in a generic way, because at the moment it depends on the application whether specific logs or metrics are stored at all. We don’t want to reinvent the wheel and want to use existing solutions that have proven their value as much as possible. We would like to ask for your advice and thoughts on the matters below.

As I said, I want to explain what these metrics/logs/aggregations could look like.

Errors: Pretty straightforward; one example is exceptions thrown during the runtime of an application.
Usage: How many times is an endpoint called? How many times is a specific page opened in a dashboard (may be Angular or Tableau)?
Model performance: What was the error of our machine learning model over time?
Aggregations/sums etc: How many times are all articles read? How many articles have been added last week (drill down per category)?
Business value: How much money was saved after using our inventory optimising module?

For security reasons, we don’t want our logging environment to be able to access every other environment (read: access databases and gather information). An application/environment should decide which information may be shared.

Our thoughts and findings

Of course we have thought about this challenge extensively. We came up with some ideas about how to implement this in all our applications and how to gather and store all the information.

Step 1: Gathering information

To reduce the number of dependencies in an application (and reduce the problems our data scientists may encounter), we want to write a library (acting as a facade) which communicates with an API that is responsible for handling the different logs/metrics and storing them in the right place. I think offering the functionality as a decorator for saving (parts of) input & output or intercepting exceptions would be great, for example:

    # 'value' refers to the property extracted from the model after the function completes
    @logging.success(metric_type=MetricType.BUSINESS_METRIC,
                     metric_name='model_accuracy', value='accuracy')
    @logging.error()
    def train(...):
        # function execution
        model = ...
        return model

But we also want to make the class and its methods available, so logs can be sent during a function call. The example below could also be implemented with the decorator, but in some cases this approach is necessary:

    def get_posts_last_week(...):
        posts_last_week = ...  # get the number of posts from last week
        logger = Logger()
        logger.log(metric_type=MetricType.BUSINESS_METRIC,
                   metric_name='posts_last_week', value=posts_last_week)

I just made the above examples up, but I hope you get the point: we want to create a library that operates as a facade between the apps and the metric/log storage. This is to make sure our data scientists do not need to know tons of different libraries, each responsible for storing its own kind of information.

This step will be the least challenging for us if the data we want to store is already available within a function call that is already triggered or scheduled. But there is a catch: some metrics aren’t available yet (they have to be calculated). For example, the deviation of our sales forecasting model over time relative to the real sales, or aggregates of articles read per category last week. We indicated that we do not want our logging environment to be able to query the data; we only want it to be able to access the data that has been sent to it, and this data has never been sent to the logging environment because it has never been processed.

Step 2: Logs/metrics storage

Once we’ve gathered all this information, we need to store it somewhere. Again, we don’t want to reinvent the wheel, so we want to use existing services/libraries et cetera if they fit what we want to do. We came up with the tools below, among others.

CloudWatch for error logging
We are using countless services within AWS. A logical thought was therefore to use CloudWatch to store all our logs, metrics et cetera. Yet it doesn’t feel right to store business performance metrics (sensitive data) in a tool that is focused on app metrics and logs. We are convinced that CloudWatch can (and probably will) be part of the overall solution, but for storing errors only.

Google Analytics for usage logging
A team member suggested that Google Analytics is the designated service to monitor dashboard usage. I agree with him on this, but it only applies to the client side and doesn’t cover the usage of our endpoints (we also expose endpoints to our customers). For Tableau, this option won’t be suitable either.

Database for business metric logging (or even all logs)
Another idea was to create our own database in which we store all metrics/logs/aggregates generated by our applications. We can think of a generic model that matches all our requirements; a quick example:

{     "application" : String, // unique identifier of the application     "timestamp": Date, // Timestamp the metric & value relates to, let's say today     "metric": String, // What is the metric saying something about, for example: "ArticlesAddedLastWeek"     "value": any // Value of the metric, for example 5 would say that 5 articles have been added last week }  

We could choose to also store the error logs within this database (SQL, NoSQL, whatever), or choose to log errors into CloudWatch and keep them separate from the sensitive data. But that is arguable.

What would be your approach to calculating and storing these metrics without creating much overhead (i.e. adjusting the existing code or adding a lot of dependencies) or giving our logging environment direct access to the data?

Step 3: Creating insights

My colleague found Grafana, an analytics and monitoring tool for creating insights that can communicate with 50+ different sources. We are pretty excited and it seems very promising, but we’re not at the point of visualising anything yet; that will be the next step.

Catches

As I noted, some solutions only partially solve some problems, and we do not want to end up using dozens of tools/dependencies for logs & metrics. Take usage monitoring, for example: Google Analytics will only work with our Angular dashboards, so we still have to find a good way to store the metrics for our endpoints, and we have to find a way to extract the views of a Tableau workbook from our database.

This brings us to the following: some of the business metrics we want to store are aggregates that are not even calculated within the application. We could solve this by creating new functions within the application which do the calculation and send the result to the logging facade. We would then have to trigger those functions every X interval, or run a scheduled script within the same environment as the application to handle this.
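For example, such a scheduled calculation could look roughly like this (again made up, reusing the hypothetical Logger/MetricType facade from step 1; count_articles_added is an imaginary query helper inside the application):

    from datetime import datetime, timedelta

    # Runs on a schedule (cron, task scheduler, ...) inside the application's own
    # environment, so the logging environment never queries our database directly.
    def report_articles_added_last_week():
        since = datetime.utcnow() - timedelta(days=7)
        articles_added = count_articles_added(since)  # imaginary query helper
        logger = Logger()
        logger.log(metric_type=MetricType.BUSINESS_METRIC,
                   metric_name='articles_added_last_week',
                   value=articles_added)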

But before we do that, we want to ask what your approach would be, what tools/services you would use, and what you think of our findings. What do you think about putting these two logging purposes (error logging AND storing business metrics) in one application/environment, or do you see a solution in keeping them separate?

Manipulation-safe logging

I have to keep a journal of all transactions that are performed in my application. Think of an invoice journal.

What options do I have? For example, an option in syslogd or a table configuration in MySQL would be great.

My goal is to at least be able to prove that the journal has not been manipulated.

Currently I am computing a rolling hash over the entries, and I send the hash value to a trusted third party every 100th entry. That way, I can prove that no one has changed a journal entry and recomputed the whole hash chain.
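For reference, the rolling hash chain I am computing works roughly like this (a simplified Python sketch; journal_entries and send_to_trusted_third_party stand in for my actual storage and escrow mechanism):

    import hashlib

    # Each hash chains over the previous one, so changing any past entry changes
    # every later hash value, including the ones escrowed with the third party.
    def chain_hash(previous_hash, entry):
        return hashlib.sha256((previous_hash + entry).encode("utf-8")).hexdigest()

    rolling = "0" * 64  # genesis value
    for i, entry in enumerate(journal_entries, start=1):
        rolling = chain_hash(rolling, entry)
        if i % 100 == 0:
            send_to_trusted_third_party(rolling)  # escrow every 100th entry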

Macbook restarts when logging in

My Macbook Pro (Mid-2015, Mojave) restarts after I type my password to log in. The progress bar fills to about three-quarters before this happens.

I have tried:

  1. Resetting PRAM
  2. Resetting SMC
  3. Running Diagnostics (holding ‘d’ on startup) [only says battery not at peak capacity]
  4. Running fsck in single-user mode (cmd-s on startup) [all ok]
  5. Booting into recovery mode (cmd-r on startup) – causes macbook to go into infinite boot loop. Mac logo appears, then immediately reboots.
  6. Booting into safe mode – login bar fills all the way, then macbook restarts.

Does anyone have any further ideas of what I could try? Thanks!!