I have recently started using Burp as a proxy for hunting bugs on websites, and I see many submissions where people have intercepted and modified requests/responses to exploit logic flaws in web applications. This is only possible because we have installed Burp’s CA certificate in our browser, which allows Burp to decrypt the traffic to and from the web application. In a realistic scenario, though, an attacker would have to conduct a MITM attack to intercept or modify that traffic. This makes me wonder what the point of traffic interception with Burp is.
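To make concrete what "interception" means here, a rough sketch (hypothetical, standard-library Python only, plain HTTP) of the core of an intercepting proxy: it terminates the client's connection, tampers with the request, and forwards it upstream. For HTTPS this only works because the browser trusts the proxy's CA certificate, which is exactly the step an external attacker cannot reproduce.

```python
import socket

def rewrite_request(raw: bytes) -> bytes:
    """Modify a request in flight, e.g. inject a marker header.

    This is the 'tamper' step; a real tool would parse the request
    properly and let the user edit it interactively.
    """
    head, sep, body = raw.partition(b"\r\n\r\n")
    lines = head.split(b"\r\n")
    lines.insert(1, b"X-Intercepted: 1")  # inject after the request line
    return b"\r\n".join(lines) + sep + body

def handle(client: socket.socket, upstream_host: str, upstream_port: int) -> None:
    """Relay one (simplified: single-read) request to the upstream server."""
    request = client.recv(65536)
    with socket.create_connection((upstream_host, upstream_port)) as upstream:
        upstream.sendall(rewrite_request(request))
        client.sendall(upstream.recv(65536))
    client.close()
```

The interesting part for testing is only ever your *own* traffic: the proxy sits between your browser and the target, so no MITM against a third party is involved.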
BlackHatKings: Proxies and VPN Section Posted By: tisocks Post Time: 26th Jun 2020 at 02:52 AM
I’m currently deploying a pair of HA firewall devices that will act as a transparent forward proxy for outbound traffic (traffic is directed through the proxy via routing rather than by configuring a proxy URL on the client machines). I have the high-availability configuration in place and working, and I can see that TCP sessions are shared across both devices. When failover is triggered, the passive device assumes the IP addresses of the previously active instance (actually the whole network adapter is moved across, as this is on AWS).
As a test, I set up a web server and created a large HTML file. I then used a client machine to retrieve this file with wget and curl (via my proxies), and during the download I performed a manual failover. The wget download got stuck (the same happened with curl). After I added connection timeouts, the wget command timed out and restarted the download, which then worked fine, although I could see that a new TCP session was created. One thing to note is that this is a cloud setup, where failover is much slower than on high-spec on-premises devices, so failover can take between 15 and 60 seconds to complete. I’m trying to ensure that my deployment will not significantly impact applications, which will mainly be sending HTTP traffic.
Is it reasonable to expect an HTTP download to continue after a failover, or should the client use timeouts and retries to start the download again?
Am I likely to need application teams to change their timeout and retry settings? What is considered normal for timeout and retry settings for applications that regularly send API requests?
Is TCP session sharing only really suitable for long-lived pure TCP connections (database connections, etc.), or is there any real use for HTTP connections?
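For reference, the kind of client-side behaviour I'm asking about could be sketched like this (hypothetical Python, standard library only; the timeout and retry values are illustrative, not recommendations):

```python
import time
import urllib.request
import urllib.error

def fetch_with_retries(url: str, connect_timeout: float = 5.0,
                       retries: int = 5, backoff: float = 2.0) -> bytes:
    """GET `url`, retrying with exponential backoff on timeouts/resets.

    With these example values the client sleeps 2 + 4 + 8 + 16 s between
    attempts, plus up to 5 s of socket timeout per attempt; tune the
    numbers so the total comfortably spans your worst-case failover window.
    """
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=connect_timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError, OSError):
            if attempt == retries - 1:
                raise  # give up after the final attempt
            time.sleep(backoff ** (attempt + 1))
```

A resumable client could additionally send a `Range` header on retry instead of restarting the download from byte zero, which is what wget's `--continue` does.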
Please, I need your professional help.
I built a new site three months ago, and because my page speed was too low I wanted to improve it. But I made a big mistake! I came across a platform called "Ezoic.com", followed their instructions, and changed the nameserver entries at my DNS provider. That locked me out: my website is no longer accessible, neither on the domain name nor at wp-admin. I don't know what to do anymore. I have restored the settings on my server, but…
Proxy Error, DNS lookup failure for -SOS-
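In case it helps with diagnosis, here is a small sketch (hypothetical Python, standard library only) of how to check what the local resolver currently returns for the domain while waiting for the reverted DNS records to propagate:

```python
import socket

def resolve(hostname: str) -> list[str]:
    """Return the A/AAAA addresses the local resolver currently gives."""
    try:
        infos = socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        return []  # matches the proxy's "DNS lookup failure" symptom
    return sorted({info[4][0] for info in infos})
```

Comparing the returned addresses against the IPs the original hosting provider expects shows whether the revert has propagated yet; an empty list means the name still does not resolve at all.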
BlackHatKings: Proxies and VPN Section Posted By: tisocks Post Time: 20th May 2020 at 10:47 AM
What’s the difference between these two Chrome extensions, which provide VPN functionality for browsing via Chrome:
Urban Shield: https://chrome.google.com/webstore/detail/urban-shield/almalgbpmcfpdaopimbdchdliminoign?hl=en
Urban Free VPN proxy Unblocker: https://chrome.google.com/webstore/detail/urban-free-vpn-proxy-unbl/eppiocemhmnlbhjplcgkofciiegomcon
They are both developed by the same company, but I couldn’t find any explanation regarding the differences between the two.
BlackHatKings: Proxies and VPN Section Posted By: tisocks Post Time: 29th Apr 2020 at 11:27 AM
There is an ongoing discussion in our company about what security measures to put in place for workstation access to the company network and the internet.
Current situation:
- Employees have Linux laptops with encrypted SSDs
- These SSDs hold the company's intellectual property
- Employees have unrestricted root access to these machines
- Antivirus is installed and running
Goals:
- Protect the company's intellectual property against theft while still being able to work from anywhere in the world
- Use a VPN to tunnel all network traffic (including internet traffic) through the company
- Do not allow direct internet access via the VPN; instead, enforce that a proxy server has to be used
Does the additional proxy server for internet access provide more security than it (potentially) costs in effort (additional client configuration, extra programs and services, …)?
Laptop <-> VPN <-> Proxy <-> Internet vs. Laptop <-> VPN <-> Internet
If the laptop is compromised (e.g. a backdoor is running), how does the VPN protect the data anyway, given that the user has root access and can change the network configuration (routes, iptables, …) as they please? What additional security does a company proxy give?
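To make the enforcement question concrete, a hypothetical sketch (standard-library Python) of a check for whether direct egress, bypassing the proxy, is possible from a laptop:

```python
import socket

def direct_egress_possible(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection (no proxy) to host:port succeeds.

    In the Laptop <-> VPN <-> Proxy <-> Internet design, the firewall at the
    company end of the tunnel permits outbound traffic only to the proxy, so
    this should return False for arbitrary internet hosts. A root user can
    change local routes and iptables at will, but cannot change what the far
    end of the VPN tunnel is willing to forward -- that server-side
    enforcement is what the extra proxy hop buys.
    """
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False
```

This also hints at the limit of the design: once the laptop is compromised, the proxy can log and filter what leaves, but it cannot stop exfiltration disguised as allowed proxy traffic.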
I need your clarification about an issue I’m having at the moment.
I currently use SB to check proxies that I found through port scanning.
My problem is this: after checking the proxies in SB I might get, for instance, 400 working proxies. I test them again within a 5-minute window and they are still working.
I then take them to SER to check, and all of them fail except maybe 3 or 5 proxies.
When I immediately go back to SB to check the proxies, only about 7 fail, but when I re-check in SER, the same issue persists.
What could I be doing wrong?
Or are there specific proxy types that SB works with that differ from those GSA SER uses?
If yes, how do I go about searching for proxies that work with GSA SER in particular?
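For context, my understanding is that different tools may test proxies at different depths, which could explain the disagreement. A hypothetical sketch (standard-library Python; the test URL is a placeholder) of the two levels of check:

```python
import socket
import urllib.request
import urllib.error

def tcp_connect_ok(host: str, port: int, timeout: float = 5.0) -> bool:
    """Shallow check: can we even open the port? (What a fast checker may do.)"""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def http_via_proxy_ok(host: str, port: int,
                      test_url: str = "http://example.com/",
                      timeout: float = 10.0) -> bool:
    """Deep check: relay a real HTTP request through the candidate proxy.

    Open ports found by a port scan can pass the shallow check while not
    actually being working HTTP proxies, so they fail here -- one plausible
    reason two tools disagree on the same list.
    """
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": f"http://{host}:{port}"}))
    try:
        with opener.open(test_url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

Other depth differences (HTTPS CONNECT support, SOCKS vs. HTTP, anonymity-level checks) would show the same pattern: the stricter tool rejects proxies the lenient one accepts.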
Thanks a lot for your usual understanding.