Triple handshake attack – what are the implications of not supporting RFC 7627: “Session Hash and Extended Master Secret Extension”?

The referenced RFC describes a mitigation for what appears to be a way to compromise a TLS connection via an attack known as the ‘triple handshake attack’.

How serious is this vulnerability? How could this vulnerability be exploited and what would the impact be?
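As a quick practical check of exposure, recent OpenSSL builds (1.1.0 and later) report in the `s_client` session summary whether the extension was negotiated. A sketch, with a placeholder host:

```shell
# Check whether a server negotiates the Extended Master Secret
# extension (RFC 7627). "example.com" is a placeholder host.
host="example.com"
out=$(echo | openssl s_client -connect "$host:443" 2>/dev/null || true)
case "$out" in
  *"Extended master secret: yes"*)
    echo "EMS negotiated" ;;
  *)
    echo "EMS not negotiated (or not reported by this OpenSSL build)" ;;
esac
```

The exact output format varies between OpenSSL versions; older builds may not print the line at all even when the extension is supported.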

As of now, is there a Linux distribution supporting the Apple keyboard and trackpad out of the box?

Half a year ago I tried to multi-boot Ubuntu 18.04 on my MacBook Pro (2018). While it did boot from the attached USB stick, the keyboard and trackpad were not working, and an external keyboard for a laptop is not a portable solution.

I have heard that the Apple keyboard and trackpad need specific drivers, and that there are efforts to provide these drivers for the Linux kernel. So far, everything I have found requires building these drivers and adding them to the kernel manually, which I would eventually do, but it is a lot of work.

What is the exact current status of this work? Is there any distribution, as of the end of 2019, that supports at least the keyboard out of the box?

At which phase of the boot process can one modify the scancode/keycode translation tables of keyboard drivers that support the Linux input layer API?

I am using keyfuzz to map Alt-Eject to Alt-SysRq on a Mac keyboard (see here). On recent (X)Ubuntu releases, though, the preferred way is to use a systemd service to run the needed command at startup. How early can that service be executed? That is, which WantedBy=, After=, Before= and similar attributes should I use so that the configuration works and will not be overwritten? Will it then also work when booting into rescue mode?

Here is some reference material about the dependencies between the different systemd targets.
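One possible shape for such a unit is sketched below; the unit name, device path and keyfuzz invocation are placeholders, since the actual command comes from the linked page. Setting DefaultDependencies=no and hooking into sysinit.target makes the remap run early, and because rescue.target requires sysinit.target, it should apply in rescue mode as well.

```ini
# /etc/systemd/system/keyfuzz-remap.service  (hypothetical name and paths)
[Unit]
Description=Remap Alt-Eject to Alt-SysRq with keyfuzz
DefaultDependencies=no
# Wait until udev has started so the input device nodes exist
After=systemd-udevd.service local-fs.target
Before=sysinit.target

[Service]
Type=oneshot
RemainAfterExit=yes
# Placeholder: substitute the actual keyfuzz command and device here
ExecStart=/usr/sbin/keyfuzz -s -d /dev/input/by-id/CHANGE-ME

[Install]
# sysinit.target is pulled in even by rescue.target, so the remap
# is applied in rescue mode too
WantedBy=sysinit.target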

Front-end supporting multiple back-end versions – Maintaining backward compatibility

I know it’s a broad question so I’ll try to be as specific as possible. This question is probably as much an “organisational” question as a technical one.

Our company is selling our software/platform to let’s say 10 customers (also companies). These customers are all running their own installation of our platform, and let’s say all these customers are currently running version 1 of our platform.

Now we are creating version 2 of our backend, to include new features and possibly changing some existing data models. In short, this will introduce breaking changes for front-end version 1.

However, not all of our customers will purchase backend version 2, but they should still receive new (minor) front-end features. Some of these new front-end features may rely on backend version 2 for the customers who DO purchase it, while for the other customers we would make them rely on existing endpoints in backend version 1.

Now the big question is: what would be the best way to handle this in the front-end? One could suggest making a front-end version 2 that relies on back-end version 2. But as said before, sometimes new front-end features would have to work both on version 2 of the backend and on version 1 for the customers that will not purchase version 2 of the backend.

— The big downside here would be that we would need to create every new feature in the front-end twice (once in version 1, and once in version 2 of the front-end).

Another approach could be to keep one version of the front-end, and have a new feature rely on either backend version 1 or 2, based on a setting/variable that indicates if the customer has backend version 2 or not.

— Of course, the big downside here would be that for every feature, a lot of if/else statements would need to be implemented (if (hasVersion2) { /* use backend v2 */ } else { /* use backend v1 */ }).
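One common way to contain those conditionals is to hide the version check behind a single adapter boundary, so each feature is coded once against an interface and the if/else lives in exactly one place. A minimal sketch (class names and the fake endpoint data are invented for illustration):

```python
# Sketch: one adapter per backend version; features depend only on the
# BackendClient interface, so the version branch appears exactly once.
from abc import ABC, abstractmethod

class BackendClient(ABC):
    @abstractmethod
    def fetch_orders(self) -> list:
        """Return the customer's orders, whatever the backend version."""

class BackendV1Client(BackendClient):
    def fetch_orders(self) -> list:
        # Would call a v1 endpoint here (illustrative stub data)
        return [{"id": 1, "source": "v1"}]

class BackendV2Client(BackendClient):
    def fetch_orders(self) -> list:
        # Would call a v2 endpoint with the new data model (illustrative)
        return [{"id": 1, "source": "v2", "extra": "new-field"}]

def make_client(has_version2: bool) -> BackendClient:
    # The only place the per-customer version check appears
    return BackendV2Client() if has_version2 else BackendV1Client()

# Every feature is then written once, against the interface:
orders = make_client(has_version2=False).fetch_orders()
```

The per-customer setting mentioned above feeds `make_client` at startup; individual features never see it.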

Now I am looking for some input / feedback / experience on how this could be handled most effectively. Maybe you guys have some relevant experience.

TL;DR: How do you handle new front-end features and improvements that need to work both with a new version of a backend and with a legacy version of that backend?

Common strategies for testing complicated scenarios that may also depend on 3rd parties

I have an app with a backend (Ruby on Rails) that provides APIs, a web app, an iOS app and an Android app that consume these APIs.

The app is used by people from lots of different countries. To use the app there is a set of verifications the customer must pass – for example, they provide their address and we then verify it is theirs. We use 3rd parties to perform this verification. The verification requirements vary from country to country.

Web, iOS and Android submit the verification info to the backend using the APIs, and the APIs then talk to our verification service, which in turn talks to the 3rd parties.

Web, iOS and Android engineers would like the ability to easily test the app as users from different countries. They would also like to test different scenarios: user submits verification info and the result is that verification failed, or the result is that more info is required etc.

The 3rd-party services we rely on do not provide a sandbox or staging environment, or the ability to deterministically elicit a certain result (verification failed, was successful, etc.).

The folks working on the verification service abstraction layer are also not currently in a position to provide support for this type of testing.

So, I am looking into various solutions to make this type of testing possible for our client engineers without waiting for the services to implement this support. Some ideas I have:

  1. The verification APIs will accept an additional set of parameters in the development and staging environments, nested in a testing attribute. When the testing param is present, the APIs will mock the desired result (including database records) without ever talking to the verification service, and return that. Client engineers would add support in their apps to make it easy for developers to use these params in dev/staging/beta environments. Example param: {testing: {target_status: 'failed', failure_reason: 'bad info'}}

  2. Similar to approach (1) but instead of params, the verification APIs will accept the testing options in custom request headers. This helps keep the API contract uncontaminated.

  3. Use ‘canned’ values: if the verification info uploaded contains a first name of ‘xyz’, then return failure with a reason code of ‘bad info’, etc. This is the least amount of work for client engineers but has a few shortcomings imo: (1) the number of canned values will keep increasing (for example, one canned value for each possible failure reason); (2) some APIs do not accept anything other than a file upload, which would require either using a canned file with some smart detection on the backend to identify it, or relying on user attributes such as the user's first name. I do not like depending on user attributes, since you can then test only one type of result with a given user.
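For approaches (1) and (2), the key safety property is that the test path is unreachable in production. A small framework-agnostic sketch of the header variant, with all names invented:

```python
# Sketch of approach (2): a test-only override carried in a custom
# request header, honored only outside production. All names invented.
import json
import os

TEST_HEADER = "X-Verification-Test"

def resolve_verification(headers: dict, call_real_service) -> dict:
    """Return a verification result, short-circuiting to a mocked
    outcome when the test header is present in a non-production env."""
    if os.environ.get("APP_ENV", "production") != "production":
        raw = headers.get(TEST_HEADER)
        if raw:
            opts = json.loads(raw)
            return {
                "status": opts.get("target_status", "failed"),
                "failure_reason": opts.get("failure_reason"),
                "mocked": True,
            }
    # Production (or no header): talk to the real verification service
    return call_real_service()

# Example: a staging request asking for a deterministic failure
os.environ["APP_ENV"] = "staging"
result = resolve_verification(
    {TEST_HEADER: json.dumps({"target_status": "failed",
                              "failure_reason": "bad info"})},
    call_real_service=lambda: {"status": "passed", "mocked": False},
)
```

The same guard works for approach (1) with body params instead of a header; the environment check is what keeps the contract safe either way.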

I haven’t worked at companies where we had to solve this type of problem. Online searches showed companies using canned values (for example, PayPal’s sandbox lets you create test accounts with certain characteristics) or mocked APIs.

My question for this community is: based on your experience, what is the best way to solve this problem?

Thank you!

VFS Global and Supporting Documents Submission

After having my supporting documents for a UK Visit Visa scanned and uploaded at a VFS Global centre, my understanding, based on what I had read on the website, was that the actual hard copies were then no longer required to be submitted and would be handed back to the applicant immediately after being scanned. However, this was not the case with my documents: upon completion of the scanning, the VFS employee did not hand them back. It was only when I requested their return that I was told that, unless I really needed them, sending the actual hard copies as well would support my application.

My questions are as follows:

Firstly, considering that VFS Global have – one would assume – correctly and proficiently scanned and uploaded the supporting documents themselves, as opposed to a DIY applicant who might mess it up, why would VFS want to keep and submit the hard copies as well? And secondly, can anyone confirm whether the hard copies are in fact sent to the embassy, or whether they are kept at the VFS centre for reference if and when required by the relevant embassy?


With the Pixel 2 supporting USB Audio Class 1/2/3, which USB audio devices work? Or *don’t* work?

Pixel 2 supports audio adapters and headsets that communicate digitally over USB-C as defined by USB Audio Class 1/2/3. Pixel 2 does not support an analog audio signal over USB-C and subsequently will display an unsupported notification warning when an analog audio device is attached.

So it got me thinking, what kinds of USB audio devices are actually supported?

  • Audio mixers, like this one from Behringer? Or does the Pixel only support USB audio output, not input? I have such a mixer, but I don’t have a Pixel 2 to test with.

  • Car stereos, to my knowledge, have never supported any USB audio device class. I hope this changes, but all I’ve seen is support for USB mass storage mode (only on old versions of Android), then MTP (which only plays formats the car stereo itself supports; no FLAC, maybe no AAC), and then AOA v2, which seems to have been abandoned for some reason (the Pixel 2 doesn’t support it, nor does the Moto X4).

  • USB headphones: some recommended models are listed on the Google post I linked above. Are there any that you’ve found not to work?

  • USB DACs/amps: here’s a thread; they seem to be well supported.

  • Speaker docks: they only support iPhones, don’t they? Has any manufacturer chosen to make one for the Pixel 2, or are there any that support a class of USB Audio?