On Role-playing True-to-Canon Elves Despite Fellow Players Promoting Orcs

While I have read all the 3.5 sourcebooks (and had some practically memorised), I haven’t played that many campaigns. In one of the few that I did, I went with a Lawful Neutral Grey Elf Wizard, and as part of my backstory I wanted my character to be very conservative and traditional, entrenched in his beliefs and steeped in everything important to Elven culture, most specifically the Elven hatred, loathing, contempt, and desire to utterly expunge the Orcs. I felt it meshed well with the rest of my backstory, with our campaign setting, and with the campaign details I was given.

However, unbeknownst to me, another player chose to be a half-Orc. This would have been problematic were it not for the fact that his love of leaping into combat got him killed right off the bat, but his replacement character, a human Paladin, made it his mission to proselytize to the Orcs and convert them into law-abiding beings, while one of my long-term goals was to eventually cleanse the Material Plane of every last drop of Orc blood. I chose to roleplay this and take the conflicting dynamic as an opportunity for immersion, but my fellow players grew rather frustrated by my character’s unwillingness to give up his heritage for another’s whims, and the scenario collapsed before any solution arose.

I am not trying to argue that Orcs shouldn’t be considered capable of alignment shift or civilised behaviour, or that my character’s actions were Good (his moral component of alignment was Neutral). But how does one reconcile character motivations that conflict in such a way that for one to succeed the other must fail? Do we go by precedence, allowing the Elf’s mission to succeed because he was around longer? Do we drag modern-era moral systems into feudal societies and ban canon racial hatred as racist, despite how pertinent such attitudes and behaviours were in those societies, thus removing part of the realism that makes the game so well-beloved?

I appear to be completely invisible on Google despite their Search Console claiming that I am indexed [duplicate]

I recently put the site for my web design business up on Google. It has been online for about two weeks and, according to Google Search Console, has been indexed several times. Despite this, I can only find it on Google if I type the actual URL into the search box. Typing the business name verbatim doesn’t work, and typing the contents of the meta description tag doesn’t work either. The things that do come up are a worse match: I can type my business name perfectly and results that are off by one or more characters will show up instead. I’ve gone to the 15th page of Google search results and found nothing. Google Search Console even claims that my page has 7 total impressions with an average position of 1; I can’t tell whether that is just from me typing the URL exactly or whether it’s actually organic.

Important Note: My server automatically responds with a 301 redirect to the HTTPS URL when someone visits the HTTP URL. Does the Google indexing bot mishandle this somehow?
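For reference, that redirect behaviour is easy to sanity-check by hand; example.com below is just a stand-in for my actual domain:

$ curl -sI http://example.com/
# expect a single "HTTP/1.1 301 Moved Permanently" with a "Location: https://example.com/..." header, not a chain of redirects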

Here is what Google Search Console looks like:

[screenshot: Google Search Console report]

I have a sitemap.xml and a robots.txt. Google Search Console claims that it has discovered my URL as a result of the sitemap.xml. I don’t understand why the site only comes up in a Google search when I type the exact URL. The contents of the meta description and meta author tags ought to be unique, and they still don’t bring anything up.
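For completeness, a minimal robots.txt that allows crawling and advertises the sitemap looks roughly like the following (illustrative, not a verbatim copy of mine):

User-agent: *
Disallow:

Sitemap: https://example.com/sitemap.xml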

I should also note that I ran Lighthouse on my site and got a 95/100 on the SEO portion.

Localhost website not accessible from Public IP despite port forwarding

My tiny office has one router, which is connected to an ADSL line on one end and to my laptop on the other. In the office, the laptop’s local IP is 192.168.1.2.

On the office router, I have set up port forwarding for port 22 (SSH access). I also have a DuckDNS script that lets me ssh -v -t -L 5900:localhost:5900 myname.duckdns.org into my office laptop whenever I want.
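The DuckDNS side is just the standard updater run from cron, roughly as below (YOURSUBDOMAIN and YOURTOKEN are placeholders):

$ echo url="https://www.duckdns.org/update?domains=YOURSUBDOMAIN&token=YOURTOKEN&ip=" | curl -k -o ~/duckdns/duck.log -K -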

I followed the same port-forwarding procedure to configure my router to forward port 8082 to 192.168.1.2 (TCP, WAN interface is pppoe2). I then ran a Python/Node.js HTTP server listening on 0.0.0.0:8082.
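The server itself is nothing exotic; a minimal stand-in for it, assuming Python 3, is:

$ python3 -m http.server 8082 --bind 0.0.0.0    # listen on all interfaces, including 192.168.1.2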

If I try to access my newly spun-up server from the public IP, I get a timeout. This is the problem: I can SSH into my remote machine, but the website hosted on it doesn’t work.

Steps tried:

I take a remote desktop session to the office laptop (using port 5900, forwarded over SSH) and find that Firefox can open localhost:8082, 127.0.0.1:8082 and 192.168.1.2:8082 properly.

I tried shutting down extra services like gogs and nginx (which was listening on port 80 even though I didn’t tell it to) via systemctl, but still no luck.
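Concretely, that was along these lines (unit names from memory):

$ sudo systemctl stop nginx gogs    # stop the extra services
$ sudo ss -tlnp                     # double-check what is still listening, and on which addresses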

Further, curl http://PUBLIC_IP:8082 gives different outputs:

  1. At home, in my Cmder I get curl: (7) Failed to connect to PUBLIC_IP port 8082: Timed out
  2. However, in SSH terminal (i.e. of remote machine), I get curl: (7) Failed to connect to PUBLIC_IP port 8082: Connection refused

Why is connection refused?

Thanks to @davidgo, I tried

$ sudo tcpdump -vv -i enp7s0 | grep 8082
tcpdump: listening on enp7s0, link-type EN10MB (Ethernet), capture size 262144 bytes

If I curl localhost:8082 or 192.168.1.2:8082, I see a 200 in the server logs but no output from the tcpdump command above.
But if I curl PUBLIC_IP:8082 from

  1. inside the SSH session, I get
    duckDNSsubDomain.40626 > abts-north-dynamic-031.P3.P2.P1.airtelbroadband.in.8082: Flags [S], cksum 0x469a (incorrect -> 0x84f5), seq 18095393, win 64240, options [mss 1460,sackOK,TS val 2474578357 ecr 0,nop,wscale 7], length 0
    abts-north-dynamic-031.P3.P2.P1.airtelbroadband.in.8082 > duckDNSsubDomain.40626: Flags [R.], cksum 0x8cea (correct), seq 0, ack 18095394, win 0, length 0

and a quick “connection refused” complaint from curl (BTW, my public IPv4 looks like P1.P2.P3.31).

  2. And if I do the same curl from my home computer, I see
    157.32.251.70.50664 > duckDNSsubDomain.8082: Flags [S], cksum 0x299d (correct), seq 132055921, win 64240, options [mss 1370,nop,wscale 8,nop,nop,sackOK], length 0
    157.32.251.70.50664 > duckDNSsubDomain.8082: Flags [S], cksum 0x299d (correct), seq 132055921, win 64240, options [mss 1370,nop,wscale 8,nop,nop,sackOK], length 0
    157.32.251.70.50664 > duckDNSsubDomain.8082: Flags [S], cksum 0x299d (correct), seq 132055921, win 64240, options [mss 1370,nop,wscale 8,nop,nop,sackOK], length 0
    157.32.251.70.50664 > duckDNSsubDomain.8082: Flags [S], cksum 0x299d (correct), seq 132055921, win 64240, options [mss 1370,nop,wscale 8,nop,nop,sackOK], length 0
    157.32.251.70.50664 > duckDNSsubDomain.8082: Flags [S], cksum 0x299d (correct), seq 132055921, win 64240, options [mss 1370,nop,wscale 8,nop,nop,sackOK], length 0

and curl fails with a timeout.

Now I am guessing my ISP doesn’t like random ports, so I tried hosting my webserver on port 80. Again, localhost and 192.168.1.2 work as expected, but http://PUBLIC_IP:80/ opens up the router control panel 🙁

So I tried hosting it on a well-known port that’s not 80 or 443. I chose 21 (FTP) and used sudo to run the webserver listening on 0.0.0.0:21, but Firefox/Chrome won’t let me open it and curl hangs for a while before failing with a timeout.

Solution to User Initial HTTP Requests Unencrypted Despite HTTPS Redirection?

It is my understanding that requests from a client browser to a web server will initially follow the specified protocol, e.g. HTTPS, and default to HTTP if none is specified (tested in Firefox). On the server side, it is desirable to enforce HTTPS strictly for all connections, for the privacy of request headers, so HTTPS redirections are used. The problem is that any initial request in which the client does not explicitly request HTTPS will be sent unencrypted. For example, the client instructs the browser with the URL below.

google.com/search?q=unencrypted-get

google.com will redirect the client browser to use HTTPS, but the initial HTTP request and GET parameters were already sent unencrypted, possibly compromising the privacy of the client. Obviously there is nothing foolproof the server can do to mitigate this, but:

  1. Could this misuse compromise the subsequent TLS security, possibly through a known-plaintext attack (KPA)?
  2. Are there any less obvious measures that can be taken to mitigate this, possibly through some DNS protocol solution?
  3. Would it be sensible for a future client standard to always attempt HTTPS first by default?
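For concreteness, the exposure described above is easy to observe; with verbose output, curl shows the request line and query string leaving in clear text before any redirect arrives (illustrative only, the exact response depends on the server):

$ curl -v "http://google.com/search?q=unencrypted-get"
# the line "GET /search?q=unencrypted-get HTTP/1.1" is sent over plain TCP,
# and only afterwards does the server answer with a redirect to the https:// URL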

Hardening ASP.NET against session fixation: Should I change the session ID despite the additional Auth cookie?


Situation

I am the responsible developer for an ASP.NET application that uses the “Membership” (username and password) authentication scheme. I am presented with the following report from a WebInspect scan:

WebInspect has found a session fixation vulnerability on the site. Session fixation allows an attacker to impersonate a user by abusing an authenticated session ID (SID).

Reproduction

I tried to reproduce the typical attack, using the guide on OWASP:

  1. I retrieve the login page. When inspecting the cookies with Google Chrome’s Developer Tools (F12), I get:

    • ASP.NET_SessionId w4bce3a0e5j4fmxj3b0lqkw2
  2. After authentication on the login page, I get an additional

    • .ASPXAUTH F0B9C00FC624E3F2C0CD2EC9E5EF7D10D91A6D62A26BAEE67A38D0608198750A2428E1F5D7278DCE6312C32EE2788D6C79E8112EA35B2397DB84FBB2BE1DBDA815A304B12505D2B786B00038B1EB0BE854DBDAF13072AFEDB3A21E36A7BCD7CD0032A0BCE8E90ECEAFA5FF487D6D2E2C

    • while the session cookie stays the same (which is the precondition for a session fixation attack)

  3. Attack: However, if I steal (or make up) and fix only the ASP.NET_SessionId and inject it into another browser, the request is not authenticated. It is authenticated only after also stealing the .ASPXAUTH cookie, which is only available AFTER login.

Conclusion

I come to the following conclusion:

While the typical precondition for a session fixation attack is met (a non-changing session ID), an attack will fail because of the missing, required additional .ASPXAUTH cookie, which is provided only AFTER successful authentication.

Question

So, should I really change the session cookie after login? Will this only satisfy the WebInspect scan, or is there real value here?

Note: I am very likely dealing with the exact scenario described in Session Fixation: A token and an id, but I am not asking whether it is vulnerable; I am asking what I should do with regard to the report.

How can I keep Twitter from banning me despite using a VPN and different phone numbers? [on hold]

I keep getting banned on Twitter for (unironically) attempting to evade bans. This last time I used a VPN and a new phone number, and despite this they caught me before I even sent a tweet, so I am assuming they have a way to trace me even when I use a VPN. My question is: what, if anything, can I do to prevent this? I was also curious whether using a different computer would help.

SVC endpoint tells that DLL is missing despite it being in GAC

I have a custom service with one DLL. Everything was working fine. However, I decided to simply add one line to the code:

RequestFormat = WebMessageFormat.Json, 

Specifying the request format, nothing huge. I published the WSP package, deployed it with the -GacDeployment flag, and now I’m getting this error in the event log:

Exception: System.ServiceModel.ServiceActivationException: The service '/_vti_bin/XXX/XXX/XXX.svc' cannot be activated due to an exception during compilation.  The exception message is: Could not load file or assembly 'Microsoft.SharePoint.Client.ServerRuntime, Version=15.0.0.0, PublicKeyToken=71e9bce111e9429c' or one of its dependencies. The system cannot find the file specified 

I’ve checked the GAC, and both the 15.0.0 and 16.0.0 versions of this DLL are present under the folder:

%WINDIR%/Microsoft.Net/Assembly/Gac_msil/  
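If it matters, the registration can also be listed from a Developer Command Prompt, assuming gacutil from the Windows SDK is available:

gacutil /l Microsoft.SharePoint.Client.ServerRuntime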

Any ideas?

Why does 18.04 LVM default to 4GB despite more space available?

I’ve just noticed that a default install of Ubuntu 18.04 with LVM results in only 4 GB of space allocated for the root partition, with the rest of the 500 GB drive left unused. I find this goes against the principle of least surprise; I expected Ubuntu to use the whole drive, or to ask how much to use as 16.04 did.

Why wasn’t it provisioned with more space? Is this a bug? I’ve googled but couldn’t find anything other than this: Ubuntu Server 18.04 LVM out of space with improper default partitioning

It talks about how to fix it – and I have fixed it – but I don’t understand why 18.04 defaults to such a small root partition with LVM.
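For context, the fix boils down to growing the root logical volume and its filesystem, roughly as below; ubuntu-vg/ubuntu-lv are the installer’s default names and may differ on other setups:

$ sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
$ sudo resize2fs /dev/ubuntu-vg/ubuntu-lv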

Thanks in advance!

Google suspicious activity warning persists for 4 days despite my saying I recognize it, does this indicate additional security issue?

I had a Google suspicious activity warning four days ago, triggered by logging in from a location I don’t use often. I accepted the offer to view the activity and acknowledged that I recognized it. This normally makes the warning go away.

However, in this instance, every time I have logged back in over the last four days I get a red warning bar at the top of the signed-in Google page.

[screenshot: red warning bar]

When I click “Review your recent activity”, I get the critical security alert box grayed out, with a white box over it saying “You’ve already replied that you recognize this activity”, followed by two options: “Change reply” and “OK”.

Of course I click “OK” every time because I do not want to change my reply and lock my account.

QUESTION: Does this behavior indicate that there is a further security issue I’m unaware of? What would happen if I clicked “Change reply” instead? Right now I have two-factor authentication turned off, so I’m very concerned about getting locked out. Are there any further problems I could look for somehow?

  • macOS & Chrome incognito mode

I should note that in the last few weeks I have received two dialogues from Google when initiating a search, asking me to verify that I was not a robot due to “unusual traffic from (my) computer network” (also shown below).

That happened at a shared WiFi connection using a DSL line.


[screenshots of the warnings mentioned above]