I can copy and paste between my iPhone and MacBook Pro; it’s a great feature that I find myself using frequently. In particular, I am frequently copying and pasting from my password manager to log in to different sites. I have a few questions about the security of this cross-device copy and paste:
- Does Apple get access to the clipboard?
- How is apple securing this cross device copy and paste?
- Can this feature be turned off?
- Should I turn off this feature to improve my security?
Does Apple’s T2 chip have an endorsement key (or an equivalent mechanism) to prove that another T2-generated key can only be used inside the Secure Enclave? We are looking for something like what a TPM provides, so that a remote system can be assured that the key in use is secured by the T2 chip.
Much thought has gone into creating the decentralised, minimum-knowledge contact tracing approach DP-3T. It has been implemented in the contact tracing apps of several countries, e.g. the German Corona-Warn-App. Under this approach, there is no central instance that can identify users’ contact histories.
However, since the apps depend on specialised APIs provided by the Google Play Services (Android) and corresponding iOS features, my question is:
Can Apple and Google, e.g. by logging API usage, bypass the decentralised approach? In effect, can they create contact history profiles?
Please note: this is a theoretical question about an implementation that I do not understand. I am not implying that there is any abuse of this kind; I just wonder whether it would be technically possible.
Several of my personal accounts were hacked by my former employer (files were altered). I confronted them and mentioned the platforms, but only sent them evidence of my Google account being hacked, knowing that they might try to sweep it under the rug.
They conducted an “internal investigation” and concluded that Apple Mail on my work device triggered those sign-ins. Aside from the fact that this doesn’t explain why my other accounts were hacked, I tested their theory and couldn’t replicate it. I looked it up, and some people said that the periodic mail fetches Apple Mail performs don’t trigger sign-in events. Can anyone confirm?
In addition, I did more digging and downloaded my Facebook data, and this is what I found: it records everything about each sign-in, including the browser used, which a third-party app such as Apple Mail would not show up as.
The hacks coincide with both a complaint I submitted to IT about their questionable practices and false allegations (made by IT a week after my complaint) that led to my dismissal.
The goal of COVID-19 exposure notification is to notify people that they were exposed to someone who later tested positive for the virus. Protecting privacy in this process requires some cryptography, and avoiding excessively granular detail on user locations. But providing data useful for disease prevention requires adequate detail in measuring the length of exposures.
There is a new API for such exposure notification from Apple and Google, but there is a tension between its 5- and 10-minute numbers that I don’t see how to resolve.
The cryptography specification, v1.2.1, specifies 10-minute intervals as inputs to the hash: “in this protocol, the time is discretized in 10 minute intervals that are enumerated starting from Unix Epoch Time. ENIntervalNumber allows conversion of the current time to a number representing the interval it’s in.”
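The 10-minute discretisation quoted from the spec can be sketched as follows (a minimal illustration derived from the quoted definition; the function name mirrors the spec’s ENIntervalNumber):

```python
def en_interval_number(unix_timestamp: int) -> int:
    """Number of the 10-minute interval (counted from the Unix epoch)
    that contains the given timestamp, per the quoted v1.2.1 definition."""
    return unix_timestamp // (60 * 10)
```

A timestamp only maps to a new interval number every 600 seconds, which is the granularity that feeds into the hash.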
Meanwhile the FAQ, v1.1, specifies 5-minute increments in the output: “Public health authorities will set a minimum threshold for time spent together, such that a user needs to be within Bluetooth range for at least 5 minutes to register a match. If the contact is longer than 5 minutes, the system will report time in increments of 5 minutes up to a maximum of 30 minutes to ensure privacy.”
How will the system report times in 5-minute increments when the interval numbers fed into the hash are only updated once every 10 minutes?
I’m looking to implement Sign in with Apple on a mobile device. We have a backend API that expects to receive the id_token (once we get it from Apple). I have a question about using a nonce in this flow.
From my understanding, nonces are used to prevent replay attacks. That is, if we have a nonce tied to a single user session, we can match the nonce against that user session and prevent malicious users from re-using the id_token. In this link they recommend generating a nonce and sending the hash of the nonce to Apple with every authentication request.
It then says that “Firebase validates the response by hashing the original nonce and comparing it to the value passed by Apple”. Does this mean the id_token plus the nonce is sent over the wire to Firebase, which simply hashes the nonce (that we send) and makes sure it is contained in the id_token? If that is the case, couldn’t someone intercept that request and replay it? Or is Firebase/the server already aware of the original nonce beforehand?
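As a rough sketch of the flow that link describes (names here are illustrative, not Firebase’s actual implementation): the client generates a random raw nonce, sends only its SHA-256 hash to Apple with the sign-in request, and the verifier later re-hashes the raw nonce it is given and compares it with the nonce claim embedded in the signed id_token.

```python
import hashlib
import secrets

def sha256_hex(value: str) -> str:
    """SHA-256 digest of a string, as lowercase hex."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

# Client side: generate a raw nonce for this session and hash it.
raw_nonce = secrets.token_hex(16)        # kept by the client/session
hashed_nonce = sha256_hex(raw_nonce)     # included in the request to Apple

# Verifier side (illustrative): Apple echoes the hashed nonce back inside
# the signed id_token; the verifier re-hashes the raw nonce and compares.
def nonce_matches(raw: str, nonce_claim_in_id_token: str) -> bool:
    return secrets.compare_digest(sha256_hex(raw), nonce_claim_in_id_token)
```

The idea is that a replayed id_token fails validation because a fresh session has a fresh raw nonce that no longer hashes to the value baked into the old token.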
I’ve just set up my first iPad, and first Apple device.
To be extra cautious when setting it up, I created a brand new @icloud.com username, to use as the Apple ID.
The username was 10 characters long and consisted of random alphanumeric characters.
The password was also completely random, and over 12 characters long.
Just one day after setting up the device, I received a message telling me that someone was attempting to sign into a device in another city, with my ID.
I obviously selected “Do not allow”, but I’m very concerned that this has happened at all.
The ID didn’t exist before yesterday, and the only place it has ever been used is on this particular device, and on my secure home network.
How is it possible that someone would be trying to use my ID? I really can’t believe that someone could have guessed such a unique username.
Does Apple publish their iCloud usernames somewhere?
In recent times, there has been escalating demand from legislators in the US and around the world to be able to decrypt phones that come pre-configured with strong encryption. Key escrow is commonly suggested as a solution, with the risk seeming to arise from the escrow agent misusing or not appropriately securing the keys — allowing remote, illegal, or surreptitious access to the secured data.
Could a system secure from remote attack be devised by adding an offline tamper-evident key to the device? This could be an unconnected WLCSP flash chip or a barcode within the device with the plaintext of a decryption key.
I recognize the evil maid attack, but presume a tamper seal could be made sufficiently challenging to thwart all but the most motivated attackers from surreptitious access to the data.
What would be lost in this scheme relative to the security currently afforded by a consumer-grade pre-encrypted device (cf. the iPhone)? Bitcoin wallets, subpoena efficacy, and other scenarios vulnerable to “smash and grab” tactics come to mind.
Is screen sharing with ara.apple.com safe?
I had a screen-sharing session with ara.apple.com (https://ara-prn.apple.com/), which is an official Apple website for Apple product support. They wanted me to install an app after I entered a session key, and they said the app would self-destruct once the support session ended.
Is there a possibility that Apple can still track and monitor that computer after the support session has ended, even though they claim the screen-sharing app self-destructs?
Apple has now started rolling out the feature that allows users to decide for themselves whether Siri recordings should be forwarded to Apple for improvement.
People on the betas of iPadOS 13.2, iOS 13.2, tvOS 13.2, watchOS 6.1, or macOS 10.15.1 should also be able to delete their Siri and dictation history. Doing so erases all Siri data Apple holds on its servers.
These new features can be found among the phone's settings.
The English shortcuts look like this:…
Apple and it's Siri recordings, now deleteable?