How is client-side OAuth secure?

I want to make a static webpage, but secure it with a login.

To do this, I looked at this link. Basically, I want users to log in with their Google account to be able to view the webpage.

But I don’t understand how this is secure. The client loads the whole webpage, so they can read the JavaScript. They will see which URL the page redirects to after the Google login. Can’t they just copy that URL and open it directly, thereby accessing the page without logging in to Google?
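
To make the concern concrete, here is a sketch of the kind of client-side gate I have in mind (the handler name and the protected URL are made up):

// Everything below ships to the browser, so any visitor can read this source,
// find the "protected" URL, and navigate to it without ever signing in.
function onGoogleSignIn(user) {
  if (user) {
    window.location.href = 'https://example.com/secret-page.html';
  }
}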

Google’s recommendation of server-side authentication over client-side

I’m currently building an Android application that utilises some Google APIs. I have also created a back-end using Flask, to which my Android application makes calls.

One main problem I encountered was getting the user to authorise the app to access their Google account data (such as their Google Calendar) by logging in to their Google account.

Whilst looking at possibly handling this within the Android application (client-side), I encountered this page, where at the top it states:

Although it is recommended that G Suite APIs are called from a server using server-side authentication, these APIs can also be called using the Android SDK.

Note: It is highly recommended to call G Suite APIs from a server environment rather than a mobile environment.

Why is this the case? What are the advantages and disadvantages of handling authentication on the client-side?

What are the risks of leaving the offending anchor portion of a URL in place after a failed client-side XSS attack?

As a simple example, let’s assume that I have implemented a key-value lookup within a pre-populated, static JavaScript dictionary. Let’s say that the dict is:

var a = { 'one': 'uno', 'two': 'dos' };

The dict is accessed with

https://example.com#key 

For example

https://example.com#one 

will display a page showing uno. An attacker may attempt to exploit this using an XSS payload such as

https://example.com#<script>alert('xss');</script> 

The JavaScript has a whitelist lookup against legal keys and performs no action if the whitelist lookup fails. Basically:

var arg = window.location.hash.substr(1);
// Whitelist lookup: plain objects have no indexOf, so test key membership instead.
if (Object.prototype.hasOwnProperty.call(a, arg)) {
  // do stuff
}

Even though the attack fails, the anchor portion of the URL in the URL bar continues to have the script code in it.

What are the risks of leaving this malicious-looking anchor in the user’s URL bar?
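
If it matters, the mitigation I’m considering is clearing the fragment after a failed lookup (a sketch using the standard History API):

// Sketch: drop the rejected fragment from the address bar without reloading.
// replaceState rewrites the current history entry, so Back still behaves normally.
var arg = window.location.hash.substr(1);
if (!Object.prototype.hasOwnProperty.call(a, arg)) {
  history.replaceState(null, '', window.location.pathname + window.location.search);
}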

Applications for Service Discovery outside of Client-Side Load Balancing

I’ve been told that service discovery and client-side load balancing are two distinct concepts, however:

  1. I don’t see what you would use service discovery for outside of client-side load balancing; and
  2. I don’t see how you could implement auto-scale-enabled client-side load balancing without service discovery!

My understanding of service discovery is that you have some kind of client/agent running on each of your nodes that all use a consensus service (Consul, ZooKeeper, Eureka, etc.) to communicate the IPs of the healthy/active instances of all the backing services/resources that your nodes depend on. So if a 5-node Service A talks to a 10-node Service B, and one of those 10 Service B nodes goes “down”, the consensus service will alert all 5 Service A nodes not to talk to that particular Service B instance (IP). To me, this is client-side load balancing.

My understanding of client-side load balancing is that each node of Service A makes the decision as to which Service B node it talks to. The advantage of this, as opposed to a classic centralized load balancer sitting in front of all Service B nodes, is that there is no single point of failure (SPoF) should that centralized load balancer go down. But the only way (that I can see!) to implement this and enable auto-scaling of both services is to use service discovery.
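
To illustrate the overlap I’m describing, here is a sketch of client-side load balancing built directly on top of a service registry (this assumes Consul’s health endpoint; the service name is made up):

// Sketch: Service A resolves healthy Service B instances from the registry
// and picks one itself, so no central load balancer is involved.
async function callServiceB(path) {
  const res = await fetch('http://localhost:8500/v1/health/service/service-b?passing=true');
  const healthy = await res.json();
  // Client-side load balancing: naive random pick among healthy instances.
  const pick = healthy[Math.floor(Math.random() * healthy.length)];
  return fetch('http://' + pick.Service.Address + ':' + pick.Service.Port + path);
}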

So I ask: how are these concepts really different if you can’t have one without the other? Or is there a whole universe of functionality that you get with service discovery that has nothing to do with client-side load balancing?!

OpenID Connect: Client-side app and API

I’m developing a client-side application (Vue.js) that consumes an API. I want to secure the API with OpenID Connect. In this case, the Vue.js application is the client and the API is the resource server.

When I use the implicit flow, the client receives an ID token and an access token. The ID token is intended for the client, while the access token is intended for the API.

Now I wonder how I can tell the API which user it is. For example, if the user is calling the API for the first time and is therefore not yet registered, the API needs some basic information such as their email address. The API must also be able to identify the user on each request.
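
For context, the part I already have is the client attaching the access token to each call (a sketch; the endpoint name is made up):

// Sketch: the SPA sends the access token (not the ID token) with each API request.
fetch('https://api.example.com/profile', {
  headers: { Authorization: 'Bearer ' + accessToken }
});

My assumption is that the API would then validate the token and derive the user from its claims (e.g. sub) or from the userinfo endpoint, but that is exactly the part I’m unsure about.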

Is it worth using a client-side XSS-focused library like DOMPurify?

That may not be the best way to phrase the question, but it gives the basic idea.

If everything client-side is malleable, is it even worth it to use a JS library for sanitization?

I’ve seen DOMPurify recommended a few times recently and have read their security document (though it didn’t provide the information I thought it might), and this isn’t intended as a criticism of the effort or people behind it, or its quality.

However, if you can’t trust anything on the user’s end, and you ultimately need to process on the server anyway, is it even worth adding another dependency and download?
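
For concreteness, the client-side usage under discussion is tiny (this follows DOMPurify’s documented sanitize API; the variable names are placeholders):

import DOMPurify from 'dompurify';

// Sanitize untrusted markup before inserting it into the DOM.
const clean = DOMPurify.sanitize(untrustedHtml); // strips scripts, event handlers, etc.
container.innerHTML = clean;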

It might be said that a library like that isn’t large (especially compared to the JS and CSS frameworks people are using), and that it also helps protect against less advanced attacks, so it adds some benefit without much cost.

But I’m still not sure whether it adds enough value to be worth including, as XSS by definition involves targeting other users through the Web (and thus servers).

What do you think?