I have a question about the Tomcat vulnerability CVE-2020-1938, aka Ghostcat. The security researcher who discovered the vulnerability created a write-up here: https://www.chaitin.cn/en/ghostcat and a PoC here: https://github.com/YDHCUI/CNVD-2020-10487-Tomcat-Ajp-lfi.
Can this vulnerability still be exploited when Apache is acting as the reverse proxy for Tomcat (and communicating with it using AJP) or would it only work when communicating directly to the AJP service on Tomcat?
I can’t get the PoC to work when going through Apache as a proxy, but I don’t know whether that’s due to my lack of experience with Apache, Tomcat, and AJP, due to the PoC not supporting exploitation through such a setup, or because the vulnerability is in fact only exploitable when communicating directly with the AJP service on port 8009 on Tomcat.
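For context, my lab setup looks roughly like the following minimal sketch of an Apache reverse-proxy configuration forwarding to Tomcat’s AJP connector (the hostname, port, and paths are placeholders for my own configuration; mod_proxy and mod_proxy_ajp are enabled):

```apache
# Hypothetical httpd.conf fragment for my lab; tomcat.example.local
# and 127.0.0.1:8009 stand in for my actual host and connector.
<VirtualHost *:80>
    ServerName tomcat.example.local
    # Forward all requests to Tomcat's AJP connector on port 8009
    ProxyPass        / ajp://127.0.0.1:8009/
    ProxyPassReverse / ajp://127.0.0.1:8009/
</VirtualHost>
```

With this in place, the PoC pointed at port 80 fails, while pointing it directly at port 8009 works.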
After the POODLE vulnerability was publicized, can MITM attacks now see HTTPS traffic? Previously only HTTP websites exposed information when sniffed, but now, what about HTTPS websites?
When running vulnerability scans, a particular version of, say, Node.js is often reported as vulnerable along with a recommendation to update to a higher version. Then there are also insecure TLS/SSL protocols like TLS 1.0 and SSL 3.0, which it is recommended to disable altogether. For me, each of these recommendations is a change that needs to be applied to a given application, host, etc. Now I’m wondering how one can make sure that either of these two changes does not lead to reduced or compromised security. How can one make sure that the new Node.js version does not introduce even more severe weaknesses or vulnerabilities? How does change management fit into this? In the end, updating the Node.js version or disabling insecure TLS/SSL protocols is a change request, isn’t it?
This is a question for android security pros:
What IT-security-grade tools are there for Android vulnerability assessment?
In the old days there was Android-VTS, which was open source, tested known vulnerabilities on the actual device, and reported the ones it found with info about the respective CVEs.
This is exactly what I’m looking for, unfortunately Android-VTS isn’t active anymore. Looking for something which will:
- Test kernel and framework-level vulnerabilities
(so potential root-level compromise)
- Work on Android 9 phones.
There are so many security apps now that it has become hard to find the ones that actually work. I’m not interested in catalog-type apps that don’t test anything, general-public antivirus-type apps, or anything that just scans apps.
I’m trying to figure this out (it could be SQL injection or any other vulnerability; it does not matter): is it mainly a design problem or an implementation problem? I believe it is the direct consequence of lazy programming, i.e.:
- lack of input sanitization
- lack of parameterization (not using APIs and instead building SQL commands directly in code using strings)
Some people say it is mainly a design flaw, but I cannot figure out why.
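To make the implementation side of my argument concrete, here is a minimal sketch (using Python’s sqlite3 and a made-up users table, purely for illustration) contrasting the lazy string-built query with the parameterized one:

```python
import sqlite3

# Toy in-memory database with a hypothetical users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

malicious = "nobody' OR '1'='1"

# Lazy version: the attacker-controlled string becomes part of the SQL itself,
# so the OR clause leaks every row.
unsafe_sql = "SELECT * FROM users WHERE name = '" + malicious + "'"
leaked = conn.execute(unsafe_sql).fetchall()   # returns alice's row

# Parameterized version: the driver treats the input strictly as data,
# so the query matches no user named "nobody' OR '1'='1".
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious,)
).fetchall()                                   # returns nothing

print(len(leaked), len(safe))
```

The fix is a one-line change at each query site, which is what makes me lean toward calling it an implementation problem.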
I am trying to understand the difference between these two roles. I know they are quite similar, since both vulnerability research and product security involve finding flaws in applications, popular or not; however, VR may also involve reverse engineering to dig up more vulnerabilities, and the two overlap.
Case 1: in product security, I find an overflow, confirm it by overwriting EIP, and product security stops there and patches it. But what about VR? Does it find the flaw, develop the exploit, and then report it?
Application security / product security: a vulnerability research service is an attack simulation to expose critical vulnerabilities in an application. This service is based entirely on manual and technical audits. We perform vulnerability research for client-based, server-based, and web-based applications. The detailed vulnerability research service covers the following top critical vulnerabilities:
- buffer overflow
- input validation
- dangling pointers
- remote code execution
- SQL injection
- authentication bypass
- code injection
It may also involve:
- reverse engineering
- target application binary analysis and debugging
- exploit development
If I wanted to move from independent work to a solid plan for building a startup, it would be based on what I have worked on at corporates: penetration testing and some red team engagements, though for the last few years I have focused on vulnerability research, which I would now like to apply to business too. Of course, I cannot start with everything, but how do companies offering vulnerability research sell the service without having a product like Metasploit or Core Impact, or without selling exploits to the government? The closest thing to this area is application vulnerability analysis / product security, which ranges from web apps to software, looking for loopholes and giving remediation advice. Depending on engagement restrictions, it may also involve the following:
- fuzzing
- reverse engineering
- protocol analysis
- data injection
- target application binary analysis and debugging
- session manipulation
- flow analysis
I’m not sure where to turn, so I turned to this forum. Please don’t downvote me for nothing; I’m just trying to ask a question.
Hello! So, I’ve been trying to find a system vulnerable to Dirty COW and I can’t find any (at least, none that can compile C code…).
Things I tried
- Using the Dirty COW PoC on Ubuntu 10 LTS and Ubuntu 9 LTS; it didn’t work. (See my previous question.)
- Installing Debian 7 or Debian 6, but they don’t come with gcc installed.
So my question is: which .iso can I install in order to try out Dirty COW? (I’m writing an article, so I want to test it for myself.)
NOTE: I’ve checked the kernel versions of everything I tested, and they seemed vulnerable, so I’m not sure what’s wrong. I ran all the virtual machines in VirtualBox/VMware with 3 cores and plenty of RAM and disk space.
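In case it helps pinpoint my confusion: my understanding is that Dirty COW (CVE-2016-5195) was fixed upstream in kernels 4.8.3, 4.7.9, and 4.4.26, but distros like Ubuntu and Debian backport the fix into older version numbers, so a raw version check like the sketch below (my own quick test, not from any PoC) can report “vulnerable” for a kernel that has actually been patched:

```python
# Naive kernel-version check for Dirty COW (CVE-2016-5195).
# CAVEAT: distro kernels backport fixes, so this is unreliable on
# Ubuntu/Debian -- which may be why my VMs "looked" vulnerable.

# First upstream stable releases containing the fix.
FIXED = [(4, 8, 3), (4, 7, 9), (4, 4, 26)]

def parse(release: str) -> tuple:
    """Turn a release string like '3.13.0-24-generic' into (3, 13, 0)."""
    base = release.split("-")[0]
    parts = (base.split(".") + ["0", "0"])[:3]
    return tuple(int(p) for p in parts)

def naive_vulnerable(release: str) -> bool:
    """Judge vulnerability by version number alone (ignores backports)."""
    v = parse(release)
    for fix in FIXED:
        if v[:2] == fix[:2]:      # same stable series as a fixed release
            return v < fix
    return v < (4, 8, 3)          # anything older than the mainline fix

print(naive_vulnerable("3.13.0-24-generic"))  # True by version number alone
```

So perhaps the kernels I tested only *seemed* vulnerable by version number while actually carrying the backported fix.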
Kudelski Security has put up an interesting explanation of what the CVE-2020-0601 vulnerability actually is and how it can potentially be exploited.
After reading this, I understand the basics of what was wrong in the Windows implementation and how the PoC is supposed to work. The site also has a PoC setup where they generate a certificate that is signed using a rogue private key for a known CA (generated by manipulating the curve parameter G, given the known public key of the CA).
I downloaded the generated certificate and used OpenSSL to view its details:
$ openssl x509 -inform der -in cert.crt -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            13:96:a7:9a:d9:71:d8:47:c3:fe:89:b2:b7:b6:57:40:28:9b:38:01
        Signature Algorithm: ecdsa-with-SHA256
        Issuer: C=CH, ST=Vaud, L=Lausanne, O=Kudelski Security PoC, OU=Research Team, CN=github.com
        Validity
            Not Before: Jan 16 00:03:54 2018 GMT
            Not After : Oct 12 00:03:54 2020 GMT
        Subject: C=CH, ST=Vaud, L=Lausanne, O=Kudelski Security, CN=github.com
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:c6:54:aa:2c:11:14:b6:f5:c4:39:ea:80:95:7b:
                    2c:b3:76:b0:90:f5:17:ec:7d:d6:48:6e:cd:63:58:
                    cb:80:71:6b:bc:97:f5:26:4d:d0:7f:7b:cf:cb:05:
                    0c:24:f3:29:55:5d:52:1d:74:2d:89:78:d9:9d:91:
                    96:12:c5:cb:be
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        X509v3 extensions:
            X509v3 Subject Alternative Name:
                DNS:*.kudelskisecurity.com, DNS:*.microsoft.com, DNS:*.google.com, DNS:*.wouaib.ch
    Signature Algorithm: ecdsa-with-SHA256
         30:65:02:31:00:f9:1b:4a:7b:d5:01:4d:f4:e3:42:5a:17:8c:
         45:6f:39:ce:fd:ec:38:04:f0:78:93:84:5d:db:9c:db:41:07:
         a3:97:cf:f3:6d:f6:8b:7b:38:5b:95:4e:a7:1f:9e:4a:0e:02:
         30:08:29:0e:f2:d8:9c:e3:e4:15:67:b7:22:f6:de:80:56:18:
         01:a0:d8:3e:28:ec:6c:bf:2a:28:a2:8f:fb:8a:b7:1e:c7:8f:
         25:36:22:cd:86:1d:bf:6d:fa:fd:0f:a0:6f
-----BEGIN CERTIFICATE-----
MIICTzCCAdWgAwIBAgIUE5anmtlx2EfD/omyt7ZXQCibOAEwCgYIKoZIzj0EAwIw
fDELMAkGA1UEBhMCQ0gxDTALBgNVBAgMBFZhdWQxETAPBgNVBAcMCExhdXNhbm5l
MR4wHAYDVQQKDBVLdWRlbHNraSBTZWN1cml0eSBQb0MxFjAUBgNVBAsMDVJlc2Vh
cmNoIFRlYW0xEzARBgNVBAMMCmdpdGh1Yi5jb20wHhcNMTgwMTE2MDAwMzU0WhcN
MjAxMDEyMDAwMzU0WjBgMQswCQYDVQQGEwJDSDENMAsGA1UECAwEVmF1ZDERMA8G
A1UEBwwITGF1c2FubmUxGjAYBgNVBAoMEUt1ZGVsc2tpIFNlY3VyaXR5MRMwEQYD
VQQDDApnaXRodWIuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAExlSqLBEU
tvXEOeqAlXsss3awkPUX7H3WSG7NY1jLgHFrvJf1Jk3Qf3vPywUMJPMpVV1SHXQt
iXjZnZGWEsXLvqNRME8wTQYDVR0RBEYwRIIWKi5rdWRlbHNraXNlY3VyaXR5LmNv
bYIPKi5taWNyb3NvZnQuY29tggwqLmdvb2dsZS5jb22CCyoud291YWliLmNoMAoG
CCqGSM49BAMCA2gAMGUCMQD5G0p71QFN9ONCWheMRW85zv3sOATweJOEXduc20EH
o5fP8232i3s4W5VOpx+eSg4CMAgpDvLYnOPkFWe3IvbegFYYAaDYPijsbL8qKKKP
+4q3HsePJTYizYYdv236/Q+gbw==
-----END CERTIFICATE-----
The certificate appears to be using a valid EC curve, P-256. How can a person or process inspecting the certificate verify that its EC parameters have indeed been manipulated and that it is a fake?
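My current guess, to check my understanding: the PoC relies on encoding the curve as explicit EC parameters with a rigged generator instead of referencing the named curve prime256v1, so an inspector could extract the generator from the explicit parameters and compare it against the standard P-256 base point. A rough sketch of that comparison (the standard coordinates are from SEC 2 / FIPS 186-4; actually extracting gx and gy from the certificate’s ASN.1 is left out):

```python
# Standard NIST P-256 (prime256v1) base point, per SEC 2 / FIPS 186-4.
P256_GX = 0x6B17D1F2E12C4247F8BCE6E563A440F277037D812DEB33A0F4A13945D898C296
P256_GY = 0x4FE342E2FE1A7F9B8EE7EB4A7C0F9E162BCE33576B315ECECBB6406837BF51F5

def generator_is_standard(gx: int, gy: int) -> bool:
    """Return True only if (gx, gy) is the genuine P-256 generator."""
    return gx == P256_GX and gy == P256_GY

# A CVE-2020-0601-style certificate chain carries explicit parameters whose
# generator differs from the standard one, so this check would return False
# for the rigged G -- is that the right way to detect it?
```

Is comparing the generator against the named-curve constant the correct detection approach, or is there a more standard check?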
I was involved in a conversation concerning our in-house vulnerability management program. One of the statements made was that management is generally not willing to accept risk, and that the aim should be to mitigate it, preferably in the form of patching.
On the other hand, there are cases of applications that are vulnerable (most of them have critical and high severity levels) and they are going to be decommissioned by the end of 2020.
The problem is that no one wants to put money on the table for fixing the vulnerabilities, because the product or products will be decommissioned in a few months’ time.
I’m now wondering: aren’t they somehow already accepting the risk by not providing funds for fixing the vulnerabilities, or is it more an example of neglect?
Furthermore, shouldn’t, in this case, a formal process of risk management exist to weigh the cost against the potential loss caused by a possible exploitation leveraging the vulnerability?