What is the best unique identifier for a phone user, one that cannot be repeated?

I’m currently developing an Android application for my company (and probably an iOS one in the future).

I was wondering what the best unique identifier is for authenticating users: a piece of data that cannot be repeated across users.

For example:

Email? The user can log in from another phone using the same email and password.

Phone number? It could be the most unique, but it would require verifying the phone, and I would have to set up an SMS validation service the way WhatsApp does.

IMEI? It pretty much identifies the unique phone, but it can be spoofed, and it changes when the phone is replaced. I also don’t know whether the application requires special permissions to read it.

EDIT: Maybe a mix of all these methods?

My main goal is to save this data in a database, make it the primary key, and thereby know exactly which user is really using the company’s web services.
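For what it’s worth, a common pattern (my own suggestion, with hypothetical names, not something from the post) is to make none of these device-side values the primary key: let the server mint an opaque ID, and keep email/phone/IMEI as mutable attributes used only for login or verification. A minimal Python sketch:

```python
import uuid

# Hypothetical sketch (not from the post): the server, not the device,
# generates the identifier. The UUID is the primary key; the email is a
# unique but changeable column used only as a login credential.
def create_user(email):
    return {
        "id": str(uuid.uuid4()),  # primary key; collisions are negligible
        "email": email,           # unique credential, but may change later
    }

alice = create_user("alice@example.com")
bob = create_user("bob@example.com")
```

This way, a user switching phones (or SIMs) keeps the same primary key, and device identifiers never need to be trusted.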

I hope you guys can help me.

Thank you.

Custom data grabber with regex issue

Hello,

I’m looking to use Scrapebox to scrape all domain-name mentions from a list of just under 4000 web page URLs.

The domain names are formatted on the pages like so:

Scrapeboxforum.com
Scrapeboxinfo.net
Scrapeboxhub.org

The domain names are plain text. They are not hyperlinks.

If it helps, they are also always in between <td> and </td> elements.  

I already have my list of almost 4000 urls I want to scan.

I am using 5 private proxies that have been tested and saved.
I think they’re being applied when using the Custom Data Grabber, but honestly I struggle with Scrapebox.

I created inbound and outbound rules for Scrapebox in Windows Firewall.
Other things I do with Scrapebox work fine, such as grabbing internal links from the domain I’m getting the URLs from.

I created a Custom Data Grabber Module and under that a Module Mask:

https://imgur.com/a/TpER4Q3

I tried several regex examples and found this one:

Code:
^(?=.{1,253}\.?$)(?:(?!-|[^.]+_)[A-Za-z0-9-_]{1,63}(?<!-)(?:\.|$)){2,}$

Source: https://stackoverflow.com/a/41193739/5048548

I tested it using the tool on https://regex101.com/ and 3 sample URLs come up as matches (as far as I can tell):

https://imgur.com/iVR422q
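One likely culprit (an assumption on my part, since I can’t see the Module config): the Stack Overflow pattern is anchored with ^ and $, so it only matches when the entire tested string is a domain. That is why it looks fine on regex101 against bare domains but finds nothing inside a full page. An unanchored pattern keyed to the <td> wrappers would find the mentions; a minimal Python sketch of the idea:

```python
import re

# Hypothetical sample of page HTML like the one described in the post.
html = """
<table>
  <tr><td>Scrapeboxforum.com</td></tr>
  <tr><td>Scrapeboxinfo.net</td></tr>
  <tr><td>Scrapeboxhub.org</td></tr>
</table>
"""

# Unanchored pattern: a bare domain sitting between <td> and </td>.
# The ^/$ anchors from the Stack Overflow answer are dropped because they
# only match when the WHOLE input is a domain, not a page containing one.
pattern = re.compile(r"<td>\s*([A-Za-z0-9-]{1,63}(?:\.[A-Za-z]{2,})+)\s*</td>")

domains = pattern.findall(html)
print(domains)  # ['Scrapeboxforum.com', 'Scrapeboxinfo.net', 'Scrapeboxhub.org']
```

The same unanchored capture group should be usable in the Module Mask; the key change is removing the ^ and $ anchors.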

However, when I run my Module all I get is this:

https://imgur.com/dGgD3Ft

The Module data folder contains a CSV for every time I run the Module, each containing two odd characters in the first cell:

https://imgur.com/OS3uupX

I ran several of the urls through browseo.net and the domain names on those urls are readable according to that tool.

Does anyone know where I’m going wrong here?
Or is there a better way to scrape domain name MENTIONS from a list of urls?

Thank you in advance!

Is it secure to rely on the data in a lambda context authorizer claims?

I am working on lambda authorization and I learned that there are generally two options.

Either use the default authorizer at the API Gateway level, which does all the heavy lifting (validating the tokens), or write a custom authorizer, which would require me to implement all the logic, including token validation, which I would like to avoid if possible. I don’t want to write such code; I want to use something that is time-proven and tested.

My question is: is it considered secure to write code in my Lambda (e.g. a Python decorator) that does authorization based on the data in the Lambda context.authorizer.claims, assuming of course that everything I need is there (e.g. cognito:groups, cognito:username, etc.)?

Can I treat the authorizer data in the context as solid (i.e., as having already passed security validation)?
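For context, here is a minimal sketch of the decorator idea, assuming the claims were already validated upstream by the Cognito authorizer before the Lambda runs; the event shape shown and the `require_group` name are my own illustration, not an AWS API:

```python
from functools import wraps

def require_group(group):
    """Run the handler only if the caller's Cognito groups claim
    (validated upstream by the API Gateway authorizer) contains `group`."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(event, context):
            claims = (event.get("requestContext", {})
                           .get("authorizer", {})
                           .get("claims", {}))
            groups = claims.get("cognito:groups", "").split(",")
            if group not in groups:
                return {"statusCode": 403, "body": "Forbidden"}
            return handler(event, context)
        return wrapper
    return decorator

@require_group("admins")
def handler(event, context):
    return {"statusCode": 200, "body": "ok"}

# Hypothetical event, shaped like what API Gateway passes after validation.
event = {"requestContext": {"authorizer": {"claims": {
    "cognito:username": "alice", "cognito:groups": "admins,users"}}}}
print(handler(event, None))  # {'statusCode': 200, 'body': 'ok'}
```

The decorator itself does no cryptography; it only trusts claims that the gateway has already verified, which is exactly the question being asked.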

How can I lock rows for update without returning data over the connection?

I want to do something like this:

begin;
select * from foos where owner_id = 123 and unread = true for update;
update foos set unread = false where owner_id = 123;
end;

In the statement where I acquire the lock, I don’t need any info about the rows; I just want to lock those particular rows. Is there a way (elegant or hacky) to tell Postgres not to do the work of actually sending me the data?

P2P torrents: downloaded data greater than torrent size?

How can downloaded data be greater than the torrent size? In this case I had already downloaded everything (100%), but after a few days 0.01% of the data was missing.

I tried to find an explanation on the internet; some users talk about bad sectors on the HDD, but this has happened to me a few times, and I checked my (external) HDD and it passes error tests (GSmartControl).

Is it possible that someone edits the downloaded files, for example an mp4 file, in order to install some monitoring tool on my PC? Or to detect my real IP (even though I used a VPN)?

image from bittorrent

Multi-business data sharing and trust issues

Is there a way to establish data sharing among multiple businesses (a couple dozen) when some of the companies don’t trust the others?

This means that these companies are willing to share some of their sensitive data with only a select few, while they are OK with sharing less sensitive data with everyone. (They can further restrict who sees it based on its sensitivity.) The data includes things like financial information, so each company would want to aggregate all the data sources it has access to in order to grasp the current situation.

Protect sensitive data in memory

Is there a way to protect sensitive data which is in RAM? Our setup is a microcontroller with no hardware support for security. When there is a need to encrypt data, the secret key exists in RAM. Even further, the plaintext exists in RAM. So if anyone can get access to RAM (e.g. via JTAG), the sensitive data is in danger?

Finding a data structure using hash tables

I learnt about “perfect square hashing”: a static hashing scheme that, given a subset S of U containing N keys, creates a hash table of size O(N^2) and maps the keys into it in O(N). After the table is created, each key search takes O(1).

But I want to work under a different assumption:

At the start of the algorithm we only get N, the number of keys that will be added to the data structure over the entire run; the keys themselves arrive one at a time during the run, and we do not know in advance what they will be (only that they come from our key space U). We want running time O(1) (worst case, not in expectation) for every insertion and search during the run.

I need to propose a data structure (and algorithms) that, given N (the number of keys that will enter the data structure), is initialized in O(N^2) and then supports insertion and search such that, with probability greater than 1/2, every one of these operations runs in O(1) during the run.

I can’t find anything that works as required; can you help me?

PCI DSS 1.2.1 Restrict inbound and outbound traffic to that which is necessary for the cardholder data environment

A strict interpretation of that rule would seem to prohibit web browsing by PCs on the same LAN as a card-processing PC. However, in practice the rule appears to be interpreted as though it said “Restrict inbound and outbound traffic to that which is necessary for the business environment.” Can anyone provide confirmation or clarification?