Most effective way of logging user query activity

We log and permanently store every query made about private persons in our database. Under the GDPR we must be able to show, for each person, which entities have requested their personal data and at what times. Queries can be single lookups or batches covering hundreds of thousands of individuals.

What is the most cost-effective way of keeping such logs? The primary index should be the unique personal code, which allows efficient retrieval of a person's query history. For analytics purposes we sometimes need to index by entity and/or query date, but those lookups don't need to be optimized. There are about 1.5 million unique personal codes, and the number of entities is ~300,000 and growing.

I have tried InfluxDB, but its limitation is that the timestamp forms part of the key, and all the entries produced by a batch query share the same timestamp. I have tried storing the data as JSON in a relational database, but updating both single and batch entries takes too long. With DynamoDB the problem was batch-write efficiency: it took about 2 hours to batch-write a million items at 500 WCU.
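For reference, the DynamoDB writes were roughly along these lines (a simplified sketch using boto3's batch_writer; the table name and flat item shape are illustrative, not my exact schema):

    import boto3

    # Illustrative table keyed on personal_code.
    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("query_log")

    def batch_write(items):
        # batch_writer() buffers puts into 25-item BatchWriteItem calls and
        # resends unprocessed items, but overall throughput is still capped
        # by the table's provisioned write capacity (500 WCU in my test).
        with table.batch_writer() as batch:
            for item in items:
                batch.put_item(Item=item)

    # Example item for a single logged query:
    # {"personal_code": 123456789, "entity": "entity_1",
    #  "query_type": "query_type_1", "ts": "2018-09-25 12:00:00.000000"}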

The data could look like this:

    {'personal_code': 123456789,
     'data_requests': {
         'entity_1': {
             'query_type_1': ['2018-09-25 12:00:00.000000', ...],
             'query_type_2': ['2018-10-02 12:00:00.000000', ...]
         },
         'entity_2': {
             'query_type_1': ['2018-09-25 12:00:00.000000', ...],
             'query_type_2': ['2018-10-02 12:00:00.000000', ...]
         }
     }
    }
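Logging one new query against this model means appending a timestamp deep inside the nested structure. A minimal sketch of that append in plain Python (the function name is illustrative):

    from datetime import datetime

    def append_request(doc, entity, query_type, ts=None):
        # Append one query timestamp to a person's nested document in place.
        ts = ts or datetime.utcnow()
        timestamps = (doc.setdefault('data_requests', {})
                         .setdefault(entity, {})
                         .setdefault(query_type, []))
        timestamps.append(ts.strftime('%Y-%m-%d %H:%M:%S.%f'))

    doc = {'personal_code': 123456789, 'data_requests': {}}
    append_request(doc, 'entity_1', 'query_type_1')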

The goal is to use as few hardware resources as possible while achieving latencies under 2 seconds both for retrieving the data by personal code and for appending new timestamps to the arrays.
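In other words, whatever store is used only has to support two hot-path operations within that budget; a sketch of the interface I have in mind (names are purely illustrative):

    from datetime import datetime
    from typing import Protocol

    class QueryLogStore(Protocol):
        # Both calls need to complete in under ~2 seconds.

        def get_by_personal_code(self, personal_code: int) -> dict:
            # Return the full nested data_requests document for one person.
            ...

        def append_timestamp(self, personal_code: int, entity: str,
                             query_type: str, ts: datetime) -> None:
            # Record one query event, singly or as part of a large batch.
            ...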