While trying to answer this question it occurred to me that, while there are many good answers about the strengths and weaknesses of SSL/TLS in terms a security professional or software developer can understand, there are not many good responses that a layman could properly understand.
For instance, we describe some variants of TLS/SSL as “insecure”, which in the security world has a somewhat specialized meaning that might be summarized as “there are known vulnerabilities that significantly degrade the security, and you should likely disable this variant on your servers”. A layman might interpret “insecure” as “simple to exploit”, which isn’t necessarily true.
So can someone provide a good layman’s explanation of the current security level offered by SSL/TLS? The answer should describe the attacker’s required resources, effort, and access, and (possibly) the cost.
The answer might also include other ways to achieve the same goal without attacking SSL/TLS, and risks we all take for granted every day. (My credit card, for instance, was compromised and used for fraud last year when Newegg got hacked)
I have a 2016 SharePoint farm hosted on-prem, with several hundred contractors offsite on secured public networks. When they try to connect to our SharePoint site, they are prompted to log in. Every time they try, no matter how they format their user name, the prompt bounces back without ever failing out.
The loopback check is irrelevant since they are not on the local box, but it is disabled regardless.
The farm was originally configured for NTLM only, but I changed it to Negotiate (Kerberos) and enabled NTLMv2, since I assume authentication is falling back to that. I have also left only TLS 1.2 enabled. They still can’t get in.
I’ve validated that every other location except this particular network operates fine.
We don’t have the luxury of running Fiddler from their client boxes, and our insight into that network is very limited. We’ve asked them to help us get connected, but it would be best if we could solve this without involving their network administrator.
How can I get these users to authenticate?
I am aware that installing and updating packages through apt-get should be fairly secure, both because an attacker supposedly cannot interfere with or inject packets into the downloads, and because the packages are signed, with the checksums being verified before(?) installation.
Consider the case of an attacker performing a man-in-the-middle attack on an apt-get command. If the attacker poisoned a DNS cache and redirected the downloads to a server he controls (especially since the downloads are requested over plain HTTP), couldn’t he serve the system a compromised version of the Release and Packages files, and then push compromised versions of packages to it? Wouldn’t that all look correct to apt-get, which could then go on to install a compromised package?
Can the attacker not make a mirror of an official repository, compromise some of the packages (say only the firefox or tor packages), modify the Release and Packages files accordingly with the new checksums/hashes, and then redirect the system via DNS spoofing to download from it?
I’m limiting the discussion to downloads from official repositories only.
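For what it's worth, the protection described above forms a hash chain anchored in a signature, which can be sketched as a toy model (the file contents and names below are illustrative, not apt's real formats): the GPG-signed Release file pins the hash of the Packages index, and the Packages index pins the hash of each .deb, so a mirror operator cannot swap in a modified package without breaking a link in the chain.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Toy model of apt's trust chain: the signed Release file pins the hash of
# the Packages index, and the Packages index pins the hash of each .deb.
package = b"original firefox .deb contents"
packages_index = f"firefox SHA256:{sha256(package)}".encode()
release_file = f"Packages SHA256:{sha256(packages_index)}".encode()
# In real apt, release_file is additionally covered by a GPG signature
# (InRelease / Release.gpg) made with the archive key, which a
# DNS-spoofing attacker does not hold.

def verify(release: bytes, index: bytes, deb: bytes) -> bool:
    """Check the hash chain from the (signature-protected) Release file down."""
    ok_index = sha256(index) in release.decode()
    ok_deb = sha256(deb) in index.decode()
    return ok_index and ok_deb

assert verify(release_file, packages_index, package)       # honest mirror passes
tampered = b"trojaned firefox .deb contents"
assert not verify(release_file, packages_index, tampered)  # swapped .deb fails
```

If the attacker also rewrites the Packages index to match the trojaned .deb, its hash no longer matches the Release file; rewriting the Release file in turn breaks the GPG signature check, which is the step a fake mirror cannot forge.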
We know that information can be retrieved after it has been deleted. There are several tools for file “undeletion” (Recuva, FTK, some tools contained in Caine, etc.).
I have heard as well that data can be recovered even after it has been overwritten. For this exact reason, the DoD used to approve methods ranging from 3 (DoD 5220.22-M) to 7 (DoD 5220.22-ECE) overwrite passes. This is still a low bar, considering there are algorithms with 35 passes (Gutmann).
Why, though? What papers, articles, or use cases have been published that demonstrate successful data recovery after a single or dual pass of overwriting?
Which software, methods, or tools allow me to analyze a given HDD for “further layers” (?) of recovery or overwriting?
(I know there is a different approach and dynamic to SSD, so for the time being, let’s not meddle into it)
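For concreteness, a software overwrite "pass" in the DoD/Gutmann sense is just rewriting the same logical extent repeatedly. A minimal file-level sketch (assuming the filesystem rewrites blocks in place, which journaling filesystems and SSDs may not; real wipe tools operate on the whole device for that reason):

```python
import os

def overwrite_file(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random data `passes` times, then delete it.

    Note: this only overwrites the logical file extent; journaling,
    copy-on-write, and SSD wear-levelling can leave old physical blocks
    untouched, which is why whole-device tools are preferred for forensics.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # push each pass to the device, not just the cache
    os.remove(path)
```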
I’m wondering about the security of downloading things from Universe. The Ubuntu help page seems to say that the community can make updates to pieces of software in that repo. Does that mean that just anybody can make changes, or that the maintainers of the original software can make changes?
For example, there’s an instance of Discord maintained in Universe, so can just anybody edit that instance, or only the Discord developers? Or do I misunderstand the whole thing?
I saw this design recently in an infotainment product. The goal is mutual authentication between two ECUs, E1 and E2. They only care about each other. The basic idea is to keep both keys secret and let each ECU have one. Let’s call the keys k1 and k2, instead of public key and secret key, or E and D. Both keys are large.
Suppose E1 has k1, and E2 has k2. To perform mutual authentication in a cost efficient way:
- E1 generates random data D of a fixed length and encrypts hash(D) with k1, resulting in S1. D and S1 are sent to E2.
- E2 decrypts S1 with k2 and checks whether it matches hash(D).
- If OK, E2 calculates the binary complement of D, denoted D’. It then encrypts hash(D’) with k2, resulting in S2. S2 is sent to E1.
- E1 calculates D’, decrypts S2 with k1, and checks whether it matches hash(D’).
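The steps above can be simulated with textbook RSA on toy parameters (tiny primes and a hash reduced mod n, purely for illustration; the real design would presumably use full-size keys and proper padding):

```python
import hashlib
import os

# Toy RSA parameters, for illustration only.
p, q = 61, 53
n = p * q                              # modulus shared by both ECUs
k2 = 17                                # E2's key (plays the role of RSA "e")
k1 = pow(k2, -1, (p - 1) * (q - 1))    # E1's key (the matching "d"); Python 3.8+

def h(data: bytes) -> int:
    # Hash reduced mod n so it fits the toy modulus.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

# Step 1: E1 challenges E2.
D = os.urandom(16)
S1 = pow(h(D), k1, n)            # E1 "encrypts" hash(D) with k1

# Step 2: E2 verifies E1, then answers with the complement.
assert pow(S1, k2, n) == h(D)    # E2 decrypts with k2 and compares
D_prime = bytes(b ^ 0xFF for b in D)
S2 = pow(h(D_prime), k2, n)      # E2 encrypts hash(D') with k2

# Step 3: E1 verifies E2's response.
assert pow(S2, k1, n) == h(D_prime)
```

Having it written out makes the trust assumptions explicit: the whole scheme rests on both k1 and k2 staying secret, and on a listener not being able to do anything useful with the (D, S1, S2) triples it observes.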
I have a hunch this design is risky, but I can’t pin down the weakness. Is it secure enough for the real world?
I am facing a programming challenge which I do not know how to solve in a most appropriate way.
We are programming an Android App which is using an API we are providing for this purpose.
While the data the API provides is not secret, we still want to limit the number of requests a user can make. Per request a user gets search results for about 10–20 items around their location. We would like to prevent someone from simply scripting the API and scraping the entire database by sending requests for many different locations.
An important feature of the app is that it works without registration.
So here is my challenge: how can I identify the individual devices the app is installed on, and verify that those are real devices? I could have the app send e.g. the IMEI number with each request, but I would not be able to verify on the server side that the IMEI is real and not faked.
Is there maybe a download verification token that gets generated when the app is downloaded from the Play Store? That way individual installations could be identified, and blocked if their token makes malicious requests.
I would just like to ensure that only real app installations are allowed to use the API, and that bots, DoS attacks, etc. are blocked.
I would appreciate any hint to the right direction.
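Whatever per-install identifier you end up with, the server-side throttling itself can be a simple per-token bucket. A minimal sketch, assuming each install presents some opaque token (how that token is minted and attested is out of scope here, and the numbers are placeholders):

```python
import time
from dataclasses import dataclass, field

@dataclass
class Bucket:
    """Token bucket: allows short bursts, then refills at a steady rate."""
    capacity: float = 10.0     # burst size
    refill_rate: float = 0.1   # tokens/second, i.e. ~6 sustained requests/minute
    tokens: float = 10.0
    last: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

buckets: dict[str, Bucket] = {}

def handle_request(device_token: str) -> bool:
    """Return True if this device's request is within its rate budget."""
    return buckets.setdefault(device_token, Bucket()).allow()
```

This caps how fast any one token can sweep locations, but it does not by itself prove the token belongs to a real device; that still needs some attestation step on top.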
If an iOS device is jailbroken, I understand an attacker cannot extract key material from the Secure Enclave, but would they be able to use keys within the enclave via CryptoKit to encrypt a password stored in the keychain? Or do the CryptoKit APIs perform some sort of system integrity check for compromise before accessing the key material in the enclave?