I am trying to use SQL Server Migration Assistant for the first time from my home PC. I have SQL Server in one Docker container and Oracle in another. I can connect to Oracle from SSMA, however when trying to connect to SQL Server I see this error:
I have read plenty of questions on here that explain how to resolve the problem when it is seen when connecting from SQL Server Management Studio (SSMS), e.g. this one: The certificate chain was issued by an authority that is not trusted. I have no problem connecting from SSMS – just SSMA. How can I connect to SQL Server from SSMA?
I have tried unticking ‘Encrypt connection’ on the SSMA SQL Server login window, but I see the same error.
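For what it's worth, in other Microsoft clients this exact error is usually resolved by the `TrustServerCertificate` connection-string keyword, which accepts the server's self-signed certificate (the default for SQL Server in Docker) without chain validation. SSMA may or may not expose this as an option in your version, but the ADO.NET-style equivalent looks like the following sketch (server, credentials, and values are all placeholders):

```text
Server=localhost,1433;Database=master;User Id=sa;Password=...;Encrypt=True;TrustServerCertificate=True
```

The alternative, which avoids disabling validation, is to export the certificate the container presents and install it into the Windows trusted root store on the machine running SSMA.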
I am trying to understand how PKI is used to boot an ARM board.
The following image relates to BL1:
The booting steps state:
The certificate used in step 1 appears to be a content certificate. The diagram suggests it contains the public key used to sign a hash, and the signed hash for BL2. Referring to the X.509 certificate structure:
My question is: from the description above, is ARM not using the Subject Public Key Info field of X.509, and instead adding the public key used to verify the hash in the extension field, and the signed hash in the digital-signature field?
The diagram also indicates that the trusted key certificate contains 3 keys (ROTPK, TWpub, NWpub). Does that mean they put all 3 keys in the extension field, then added the signed hash of (perhaps) TWpub + NWpub in the digital signature, and again didn’t use the Subject Public Key Info field (with the certificate later verified using the ROTPK from the extension field)?
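Whichever X.509 field the hash travels in, the check BL1 performs in step 1 has a simple shape: first the content certificate's own signature is verified (with a key anchored in the trusted-key certificate), then the hash it carries is compared against a hash of the loaded BL2 image. A minimal sketch of that second half, with placeholder firmware bytes (real BL1 code reads BL2 from flash and the hash from the certificate extension):

```python
import hashlib

def verify_bl2(loaded_image: bytes, hash_from_content_cert: bytes) -> bool:
    """BL1's integrity check, run after the content certificate's own
    signature has already been verified."""
    return hashlib.sha256(loaded_image).digest() == hash_from_content_cert

# Placeholder firmware bytes (assumption for illustration only).
bl2 = b"BL2 firmware payload"
cert_hash = hashlib.sha256(bl2).digest()   # value carried in the certificate

assert verify_bl2(bl2, cert_hash)                    # genuine image boots
assert not verify_bl2(bl2 + b"tamper", cert_hash)    # modified image is rejected
```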
Our website also has a blog at "example.org/blog", hosted by a third party. Our load balancer routes all requests to "/blog" (and subpaths) to our blog host’s servers. We don’t distrust them, but we’d prefer if security issues with the blog host can’t affect our primary web app.
Here are the security concerns I’m aware of, along with possible solutions.
- The requests to the blog host will contain our users’ cookies.
  - Solution: Have the load balancer strip cookies before forwarding requests to the blog host.
- An XSS on the blog allows the attacker to inject JS and read the cookie.
  - Solution: Use "HTTP-only" cookies.
- An XSS on the blog allows the attacker to inject JS and make an AJAX request to "example.org" with the user’s cookies. Because of the same-origin policy, the browser allows the attacker’s JS to read the response.
  - Solution: Have the load balancer add some Content-Security-Policy to the blog responses? What’s the right policy to set?
  - Solution: Suborigins (link) looks nice, but we can’t depend on browser support yet.
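Assuming the load balancer is nginx (an assumption; equivalent directives exist in most proxies, and the upstream name is a placeholder), the mitigations above might be sketched as:

```nginx
location /blog {
    proxy_pass https://blog-host.example.net;   # hypothetical blog upstream
    proxy_set_header Cookie "";                 # strip our users' cookies
    proxy_hide_header Set-Cookie;               # blog can't set cookies on example.org

    # CSP sandbox (without allow-same-origin) gives blog responses an opaque
    # origin, so injected JS cannot make credentialed same-origin reads:
    add_header Content-Security-Policy "sandbox allow-scripts allow-forms" always;
}
```

The `sandbox` directive without `allow-same-origin` is the key part: it addresses the third concern by denying the blog's JS the example.org origin entirely, rather than trying to enumerate what it may fetch.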
Is there a way to safely host the blog on the same domain?
I have 2 certificates (one root and one intermediate).
In Windows, the root certificate is in the Trusted Root Certification Authorities store (for the current user). The intermediate certificate (signed by the root CA) is in the Intermediate Certification Authorities store (also for the current user).
I am using SSL verification in one of my client applications (Confluent Kafka) and realized the client only enumerates certificates in the root store, so the SSL handshake fails (the intermediate CA is needed to build the chain).
One solution is to import that certificate into the Trusted Root Certification Authorities store. With that solution, SSL verification at the client works. However, is there any concern in doing so?
From a security point of view, does it make a difference whether the intermediate CA sits in the Root store vs the Intermediate store on Windows?
UPDATE: If more context is needed about what exactly I am facing, see the issue here: https://github.com/edenhill/librdkafka/issues/3025
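An alternative to moving the intermediate into the Root store is to bypass the Windows store entirely and point the client at a PEM bundle containing the whole chain, which librdkafka supports via its `ssl.ca.location` property. A configuration sketch, assuming you have exported both certificates to PEM (file names and paths are placeholders):

```text
# ca-bundle.pem = intermediate.pem followed by root.pem, concatenated
ssl.ca.location=C:\certs\ca-bundle.pem
```

The cleanest fix of all, where you control the server, is to configure the Kafka broker to present the intermediate in its own handshake chain, so clients only ever need the root.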
A family of N people (where N >= 3) are members of a cult. A suggestion is floated anonymously among them to leave the cult. If, in fact, every single person secretly harbors the desire to leave, it would be best if the family knew about that so that they could be open with each other and plan their exit. However, if this isn’t the case, then the family would not want to know the actual results, in order to prevent infighting and witch hunting.
Therefore, is there some scheme by which, if everyone in the family votes yes, the family knows, but all other results (all no, any combination of yes and no) are indistinguishable from each other for all family members?
- N does have to be at least 3 – N=1 is trivial, and N=2 is impossible, since a yes voter could deduce the other person’s vote from the result.
- The anonymous suggestor is not important – it could well be someone outside the family, such as someone distributing propaganda.
- It is important that all no is indistinguishable from mixed yes and no – we do not want the family to discover that there is some kind of schism. However, if that result is impossible, I’m OK with a result where any unanimous result is discoverable, but any mixed vote is indistinguishable.
Some things I’ve already tried:
- Of course, this can be done with a trusted third party – they all tell one person their votes, and the third party announces whether all the votes are yes. However, this isn’t quite a satisfying answer to me, since the third party could get compromised by a zealous no voter (or other cult member) to figure out who the yes voters are. Plus, this person knows the votes, and may, in a mixed-vote situation, meet with the yes voters in private to help them escape, which the no voters won’t take kindly to.
- One can use a second third party to anonymize the votes – one party (which could really just be a shaken hat) collects the votes and sends them anonymized to the second party, who reads them and announces the result. This is the best solution I could think of; however, I’d still like to do better – after all, in a live-in settlement cult, there probably isn’t any trustworthy third party you could find. I’d like a solution that uses a third party that isn’t necessarily trusted.
- However, I do recognize that you need at least something to hold secret information, because if you’re working with an entirely public ledger, then participants could make secret copies of the information and simulate what effect their votes would have, before submitting their actual vote. In particular, if all participants vote yes but the last one has yet to vote, they can simulate a yes vote and find out that everyone else has voted yes, but then themselves vote no – they are now alone in knowing everyone else’s yes votes, which is power that you would not want the remaining no voter to have.
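For what it's worth, the property being asked for is essentially the inverse of an anonymous veto protocol (in the style of DC-nets): treat a "no" as a veto, and the protocol reveals only whether anyone vetoed. A toy sketch under the honest-but-curious assumption, using pairwise shared randomness that cancels to zero: each yes voter broadcasts the sum of their pairwise keys, each no voter broadcasts a random value, and the broadcasts sum to zero (mod a prime) if and only if everyone voted yes, except with negligible probability. Note this sketch does not address the last-voter simulation attack described above; real protocols need commitment rounds for that.

```python
import secrets

P = 2**61 - 1  # prime modulus for the toy arithmetic

def run_round(votes):
    """votes: list of bools (True = yes). Returns True iff all voted yes
    (a mixed or all-no round sums to zero only with probability 1/P)."""
    n = len(votes)
    # Pairwise keys with k[i][j] + k[j][i] == 0 (mod P), agreed in private.
    k = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = secrets.randbelow(P)
            k[i][j] = r
            k[j][i] = (-r) % P
    broadcasts = []
    for i, yes in enumerate(votes):
        if yes:
            broadcasts.append(sum(k[i]) % P)     # keys cancel if everyone does this
        else:
            broadcasts.append(secrets.randbelow(P))  # random value hides the dissent
    return sum(broadcasts) % P == 0

print(run_round([True, True, True]))   # True (deterministically)
print(run_round([True, False, True]))  # False, except with probability 1/P
```

Because every non-all-yes outcome produces a uniformly random sum, all-no and mixed results are statistically indistinguishable, which matches the stronger requirement in the question.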
I could have transferred it via Payoneer, meaning Skrill to Payoneer and then Payoneer to PayPal. Of course, we cannot transfer money from Payoneer to PayPal directly; instead, we would use the Payoneer card in PayPal.
The problem is that Skrill needs a Payoneer Euro account before I can transfer the funds to Payoneer, and Payoneer has not approved my application. Now I have to find a trusted exchange. If you know of any, please let me know.
I’ve been looking at the following figure, which shows how, with the Arm TrustZone architecture, the resources of a system can be divided into a Rich Execution Environment (REE) and a Trusted Execution Environment (TEE).
Here I’m trying to understand the following: Suppose a remote party wants a particular trusted application (TA) running in the TEE to do some computation on its input. How can this remote party be assured that the computation is actually done by the correct TA?
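The property being asked about is usually called remote attestation: the TEE measures (hashes) the TA, then signs that measurement together with a verifier-chosen nonce using a device key the remote party can check. A toy sketch of the shape of that exchange, using a symmetric key for brevity (an assumption; real schemes use asymmetric device keys with a certificate chain back to the manufacturer):

```python
import hashlib
import hmac
import secrets

# Hypothetical attestation key provisioned into the TEE at manufacture and
# known to the verifier via the manufacturer (illustrative simplification).
device_key = secrets.token_bytes(32)

def ta_attest(ta_binary: bytes, nonce: bytes):
    """TEE side: measure the TA and authenticate measurement + nonce."""
    measurement = hashlib.sha256(ta_binary).digest()
    tag = hmac.new(device_key, measurement + nonce, hashlib.sha256).digest()
    return measurement, tag

# --- remote verifier side ---
ta_image = b"trusted application image"   # verifier knows the TA it expects
nonce = secrets.token_bytes(16)           # freshness: prevents replayed reports
measurement, tag = ta_attest(ta_image, nonce)

expected_measurement = hashlib.sha256(ta_image).digest()
ok = (measurement == expected_measurement and
      hmac.compare_digest(tag, hmac.new(device_key, measurement + nonce,
                                        hashlib.sha256).digest()))
```

If `ok` holds, the verifier knows a device holding the key measured exactly the expected TA for this session; any substituted TA yields a different measurement.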
A trusted execution environment (TEE) provides a way to deploy tamper-proof programs on a device. The most prominent example of a TEE for PCs seems to be Intel SGX.
What I wonder is whether an equivalent solution exists for mobile devices. For example, I want to deploy an arbitrary application on a smartphone that even a malicious OS can’t tamper with. Is there such a solution at the moment?
As you may know, a CA certificate issued by Sectigo expired recently. This is affecting certain mobile apps and possibly websites, rendering them unable to connect to required network resources. In particular, some older devices that do not have the replacement CA certificate are unable to connect to servers using such certificates.
A work-around is possible (How can I work around the expired Sectigo certificate on my old mobile device?)… but is it safe? What dangers, if any, is a user potentially exposed to by copying a CA certificate from an already-trusted device to a device which does not have said certificate?
(For this purpose, please ignore the broader issue of whether using a device that is not receiving updates is safe. We all know that has its own risks. I am asking specifically if installing a CA certificate that is already widely trusted is any more dangerous than simply using such devices without installing the new certificate.)
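The main danger specific to the copy itself is substitution: installing a certificate that isn't actually the one you believe it is. That is ruled out by comparing the certificate's SHA-256 fingerprint against the value published by the CA (or displayed on the already-trusted device) before installing it. A minimal sketch, assuming you have the exported DER-encoded certificate bytes:

```python
import hashlib

def cert_fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate, in the form that
    certificate viewers and CA download pages typically display."""
    return hashlib.sha256(der_bytes).hexdigest()

# der_bytes would come from the file exported off the trusted device; compare
# the result against the CA's published fingerprint before installing it.
```

If the fingerprints match, the installed trust anchor is byte-for-byte the one already trusted elsewhere, so no new trust is introduced beyond what the ecosystem already grants.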
What is the best approach to use in identifying, characterizing and detecting compromised CAs? I do not mean an invalid certificate or invalid CA that can be identified during the X.509 validation process. I am looking for a tool/approach that can identify and detect a “trusted” CA that is actually compromised – for example, an attacker impersonating the CA, or compromising the CA key and trying to issue fraudulent certificates or a fake CRL.
This is apart from existing methods such as CT, key pinning, DANE, etc., which partly address some issues related to CA compromise.
Is there a way that methods like blockchain, machine learning, or a rule-based approach could be used to first identify, characterize, and detect whether a trusted CA is really compromised?
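Of the existing methods mentioned, key pinning illustrates why a compromised-but-trusted CA is so hard to detect generically: the client stops relying on "chains to a trusted root" at all and instead accepts only certificates whose Subject Public Key Info hashes to a pre-agreed value. A toy sketch of the HPKP-style check (key material is a placeholder):

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """HPKP-style pin: base64 of SHA-256 over the DER SubjectPublicKeyInfo."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

# Placeholder key material; a real client pins the SPKI of keys it expects.
known_good_spki = b"...DER-encoded SubjectPublicKeyInfo..."
PINNED = {spki_pin(known_good_spki)}

def pin_ok(presented_spki: bytes) -> bool:
    # Even a certificate that chains to a trusted (but compromised) CA is
    # rejected if its key does not match a pinned value.
    return spki_pin(presented_spki) in PINNED
```

Any detection approach beyond this (CT-log monitoring, anomaly detection over issuance patterns, etc.) is ultimately looking for certificates that validate correctly yet shouldn't exist, which is why it needs out-of-band ground truth rather than X.509 validation alone.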