Solution to Users' Initial HTTP Requests Being Sent Unencrypted Despite HTTPS Redirection?

It is my understanding that requests from a client browser to a web server will initially follow the specified protocol, e.g. HTTPS, and default to HTTP if no protocol is specified (tested in Firefox). On the server side it is desirable to enforce HTTPS for all connections, for the privacy of request headers, so HTTPS redirects are used. The problem is that any initial request in which the client does not explicitly request HTTPS will be sent unencrypted. For example, the client enters the URL below into the browser.

google.com/search?q=unencrypted-get

google.com will redirect the client browser to use HTTPS, but the initial HTTP request and GET parameters have already been sent unencrypted, possibly compromising the client's privacy. Obviously there is nothing foolproof the server can do to mitigate this vulnerability, but:

  1. Could this misuse compromise the subsequent TLS security, possibly through a known-plaintext attack (KPA)?
  2. Are there any less obvious measures that can be taken to mitigate this, possibly through some DNS-level solution?
  3. Would it be sensible for a future client standard to always attempt HTTPS first by default?
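
To make the concern concrete, here is a minimal sketch (assuming PHP with the curl extension; google.com is used only because it appears in the example above) that follows the redirect chain: the initial hop, query string included, goes out over plain HTTP before the server's redirect to HTTPS can take effect.

<?php
// Minimal sketch: follow the redirect chain for the plain-HTTP URL above and
// report where the request ended up. The very first request, including the
// query string, travels over unencrypted HTTP before any redirect applies.
$ch = curl_init('http://google.com/search?q=unencrypted-get');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_exec($ch);
echo 'Started at:    http://google.com/search?q=unencrypted-get' . PHP_EOL;
echo 'Redirect hops: ' . curl_getinfo($ch, CURLINFO_REDIRECT_COUNT) . PHP_EOL;
echo 'Ended up at:   ' . curl_getinfo($ch, CURLINFO_EFFECTIVE_URL) . PHP_EOL;
curl_close($ch);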

Hardening ASP.NET against session fixation: Should I change the session ID despite the additional Auth cookie?


Situation

I am the developer responsible for an ASP.NET application that uses the “Membership” (username and password) authentication scheme. I am presented with the following report from a WebInspect scan:

WebInspect has found a session fixation vulnerability on the site. Session fixation allows an attacker to impersonate a user by abusing an authenticated session ID (SID).

Reproduction

I tried to reproduce the typical attack, using the guide on OWASP:

  1. I retrieve the login page. When inspecting the cookies with Google Chrome’s Developer Tools (F12), I get:

    • ASP.NET_SessionId w4bce3a0e5j4fmxj3b0lqkw2
  2. After authentication on the login page, I get an additional

    • .ASPXAUTH F0B9C00FC624E3F2C0CD2EC9E5EF7D10D91A6D62A26BAEE67A38D0608198750A2428E1F5D7278DCE6312C32EE2788D6C79E8112EA35B2397DB84FBB2BE1DBDA815A304B12505D2B786B00038B1EB0BE854DBDAF13072AFEDB3A21E36A7BCD7CD0032A0BCE8E90ECEAFA5FF487D6D2E2C

    • while the session cookie stays the same (which is the precondition for a session fixation attack)

  3. Attack: However, if I steal (or make up and fix) only the ASP.NET_SessionId and inject it into another browser, the request is not authenticated. It is authenticated only after also stealing the .ASPXAUTH cookie, which is only available AFTER login.

Conclusion

I come to the following conclusion:

While the typical precondition for a session fixation attack is met (a non-changing session ID), the attack will fail because of the missing, required additional .ASPXAUTH cookie, which is provided only AFTER successful authentication.

Question

So, should I really change the session cookie after login? Will this only satisfy the WebInspect scan, or is there real value here?

Note: I am very likely facing the exact scenario described in Session Fixation: A token and an id, but I am not asking whether it is vulnerable; I am asking what I should do with regard to the report.
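
For reference, the generic pattern the scanner is asking for (issuing a brand-new session identifier at the moment of authentication) can be sketched as follows. This is plain PHP purely to illustrate the idea, not the ASP.NET Membership API, and credentialsAreValid() is a hypothetical stub.

<?php
// Illustration only: the generic session-fixation defence in plain PHP.
// Whatever session ID the browser arrived with is discarded at login,
// so a fixated ID never becomes an authenticated one.

// Hypothetical credential check, stubbed out for this sketch.
function credentialsAreValid(string $user, string $password): bool
{
    return $user === 'demo' && $password === 'demo';
}

session_start();

if (credentialsAreValid($_POST['user'] ?? '', $_POST['password'] ?? '')) {
    // Issue a fresh session ID and delete the old session data.
    session_regenerate_id(true);
    $_SESSION['user'] = $_POST['user'];
}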

How can I keep Twitter from banning me despite using a VPN and different phone numbers? [on hold]

I keep getting banned on Twitter for (unironically) attempting to evade bans. This last time I used a VPN and a new phone number, and despite this they caught me before I even sent a tweet, so I am assuming they have a way to trace me even when I use a VPN. My question is: what, if anything, can I do to prevent this? I was curious whether using a different computer would help.

SVC endpoint says a DLL is missing despite it being in the GAC

I have a custom service with one DLL. Everything was working fine. However, I decided to simply add one line to the code:

RequestFormat = WebMessageFormat.Json, 

Specifying the request format, nothing huge. I published the WSP package, deployed it with the -GacDeployment flag, and now I'm getting the following error in the event log:

Exception: System.ServiceModel.ServiceActivationException: The service '/_vti_bin/XXX/XXX/XXX.svc' cannot be activated due to an exception during compilation.  The exception message is: Could not load file or assembly 'Microsoft.SharePoint.Client.ServerRuntime, Version=15.0.0.0, PublicKeyToken=71e9bce111e9429c' or one of its dependencies. The system cannot find the file specified 

I’ve checked the GAC, and both the 15.0.0 and 16.0.0 versions of this DLL are present under this folder:

%WINDIR%/Microsoft.Net/Assembly/Gac_msil/  

Any ideas?

Why does 18.04 LVM default to 4GB despite more space available?

I’ve just noticed that a default install of Ubuntu 18.04 with LVM results in only 4GB of space allocated to the root partition, with the rest of the 500GB drive left unused. I find this goes against the principle of least surprise; I expected Ubuntu to use the whole drive, or to ask how much to use like 16.04 did.

Why wasn’t it provisioned with more space? Is this a bug? I’ve googled but couldn’t find anything other than this: Ubuntu Server 18.04 LVM out of space with improper default partitioning

It talks about how to fix it – and I have fixed it – but I don’t understand why 18.04 defaults to such a small root partition with LVM?

Thanks in advance!

Google suspicious activity warning persists for 4 days despite my saying I recognize it; does this indicate an additional security issue?

I had a Google suspicious activity warning four days ago, triggered by logging in from a location I don’t use often. I accepted the offer to view the activity and acknowledged that I recognized it. This normally makes the warning go away.

However, in this instance, every time I have logged back in over the last four days I get a red warning bar at the top of the signed-in Google page.

[screenshot: warning bar]

When I click Review your recent activity, I get the critical security alert box grayed out, with a white box over it saying “You’ve already replied that you recognize this activity”, followed by two options: Change reply and OK.

Of course I click “OK” every time because I do not want to change my reply and lock my account.

QUESTION: Does this behavior indicate that there is a further security issue that I’m unaware of? What would happen if I clicked Change reply instead? Right now I have two-factor authentication turned off, so I’m very concerned about getting locked out. Are there any further problems that I could somehow look for?

  • macOS & Chrome incognito mode

I should note that in the last few weeks I received two dialogues from Google when initiating a search, asking me to verify that I was not a robot due to “unusual traffic from (my) computer network” (also shown below).

That happened on a shared WiFi connection using a DSL line.


[screenshots: the two verification dialogues]

Error when using regionset despite enough vendor resets being available

I’ve got a new PC where the DVD region is set to ‘4’ and regionset gives me an error when trying to set it to ‘2’ even though it says I have 4 vendor resets available.

adam@gondolin:~$ sudo regionset /dev/sr0
regionset version 0.1 -- reads/sets region code on DVD drives
Current Region Code settings:
RPC Phase: II
type: SET
vendor resets available: 4
user controlled changes resets available: 4
drive plays discs from region(s): 4, mask=0xF7

Would you like to change the region setting of your drive? [y/n]:y
Enter the new region number for your drive [1..8]:2
New mask: 0xFFFFFFFD, correct? [y/n]:y
ERROR: Region code could not be set!
adam@gondolin:~$

This is the info from the hardware with smartmontools:

adam@gondolin:~$ sudo smartctl -i /dev/sr0
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.15.0-54-generic] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:               HL-DT-ST
Product:              DVD+-RW GA31N
Revision:             A102
User Capacity:        730,214,400 bytes [730 MB]
Logical block size:   2048 bytes
>> Terminate command early due to bad response to IEC mode page
A mandatory SMART command failed: exiting.
To continue, add one or more '-T permissive' options.
adam@gondolin:~$

I don’t like that error!

I figure it might just be something to do with the fact that it’s a DVD drive and not a normal hard drive.

There’s currently a CD in the drive – it won’t accept a DVD and therefore regionset won’t run with a DVD, but according to regionset docs, that should be irrelevant.

This is Ubuntu 18.04 Bionic, from the server install where I then installed Xwfm4 on top, if that makes any difference.

Is the DVD drive some sort of Chinese no-name that will only work in Windows or something? Do I need a new one, or is there some Linux wizardry I can do?

Cannot connect to my WiFi network even though my network card is working

So, I just installed Ubuntu on my old laptop. My network card works fine, as Windows can connect to and recognise all available networks. However, although the network card is also recognised by Ubuntu, not all networks are showing (my own network is not showing at all).

I have two WiFi networks available, one of them hidden. I cannot connect to either of them, even when I manually enter the networks along with the password and the rest of the info. However, some networks appear without a problem; all of them are distant ones.

I’ve also done some tests with other laptops and other distros. On the same laptop, the same issue continues with Tails, Kali and Lubuntu. With other laptops or network cards, I don’t have the issue at all.

Any idea what is going on?

Magento 2 Curl is setting Content-Type: application/x-www-form-urlencoded despite providing Content-Type: application/json

I am facing a weird issue while using the default Magento 2.2.6 Curl class Magento\Framework\HTTP\Client\Curl.php to send a curl request.
Magento is somehow automatically adding another Content-Type: application/x-www-form-urlencoded despite my providing Content-Type: application/json using:

$this->curl->setHeaders(array(
    'Content-Type: application/json',
    'Content-Length:' . strlen($jsonData)
));

I am trying to send a JSON value via a POST request.

$url = trim($gwUrl, '/') . '/api/' . $method . '?format=JSON';
$jsonData = json_encode($params);
try {
    $this->curl->setOption(CURLINFO_HEADER_OUT, true);
    $this->curl->setOption(CURLOPT_FOLLOWLOCATION, 1);
    $this->curl->setOption(CURLOPT_SSL_VERIFYPEER, 0);
    $this->curl->setHeaders(array(
        'Content-Type: application/json',
        'Content-Length:' . strlen($jsonData)
    ));
    $this->curl->post($url, $jsonData);
    $result = $this->curl->getBody();
} catch (\Exception $e) {
    $result["errorMsg"] = $this->getServerDownMsg();
    $result = json_encode($result);
}

return $result;

When I added $this->curl->setOption(CURLINFO_HEADER_OUT, true); I found out that Magento is sending an extra Content-Type in the request header. [screenshot: Magento header out info]

Please let me know if I am doing something wrong, as the core PHP curl functions seem to get the correct response from the server, in that no extra header is set there. But I wish to use the core Magento way.
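
One thing worth checking (an assumption on my part, not something confirmed above): if Magento's Curl client joins header names and values itself, then passing pre-formatted 'Name: value' strings would leave curl without a usable Content-Type, and curl falls back to application/x-www-form-urlencoded for string POST bodies. Under that assumption, here is a sketch that reuses the variables from the snippet above but passes an associative array instead:

// Sketch under the assumption that setHeaders() expects name => value pairs
// rather than already-joined "Name: value" strings.
$jsonData = json_encode($params);

$this->curl->setHeaders([
    'Content-Type'   => 'application/json',
    'Content-Length' => strlen($jsonData),
]);
$this->curl->post($url, $jsonData);
$result = $this->curl->getBody();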