We have a set of $n$ products; the $i$-th product must be kept at a temperature between $c_i$ and $h_i$.
We have to buy the fewest number of fridges for these products. Each fridge can only hold one fixed temperature.
I think of this problem as intervals of product temperatures placed on an axis.
My idea is to find the product whose temperature range overlaps with the most other products’ ranges, then place those products in one fridge.
But the algorithm for this would be inefficient.
What’s a simple greedy solution for this? Any ideas?
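For what it’s worth, the textbook greedy for this kind of problem (minimum number of points piercing all intervals) sorts the products by their upper bound $h_i$ and opens a new fridge, set to that upper bound, whenever the current product’s range isn’t already covered by the last fridge. A minimal sketch (the function name `min_fridges` is my own):

```python
def min_fridges(products):
    """Greedy minimum-piercing-points solution.

    products: list of (c_i, h_i) temperature ranges.
    Sort by upper bound; whenever a product's lower bound exceeds the
    temperature of the most recently opened fridge, open a new fridge
    set to that product's upper bound. Every interval that contains
    that temperature can share the fridge.
    """
    fridge_temps = []
    for c, h in sorted(products, key=lambda p: p[1]):
        if not fridge_temps or c > fridge_temps[-1]:
            fridge_temps.append(h)  # new fridge at this product's upper bound
    return len(fridge_temps)
```

Sorting dominates, so this runs in O(n log n), and the exchange argument for interval piercing shows the count is optimal.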
I don’t want to get hung up on technical terms, just laying out basics for this question: I understand personal identifying information (PII) as that info which is not apparent to people who cross paths with you day to day and which could be used to prove your identity. For example, my name and face are not really private because anyone I casually do business with could get that info. My birthdate and address are much less apparent and are considered PII. My social security number is a whole different tier of private, being sensitive personal information (SPI).
I grew up in the wild-west internet (there’s fringe PII: approximate age) and was advised never to reveal PII-type info. Basically, conceal one’s real identity as much as possible, for safety’s sake.
Knowing more now, I wonder if this precaution is warranted, especially in the context of persona persistence between platforms, which could leak some PII. For most internet use, sure, I don’t want my name tied to it, but I don’t feel like I need to cover my tracks in general. Conversely, I see some benefit in letting my actual or pseudonymous identities persist online, and I wouldn’t be opposed to lightly-vetted or simply-determined users connecting dots between personas, i.e. friends or acquaintances knowing two different profiles both represent me, including a PII-filled one like LinkedIn. I’m asking if my intuition here is right or more risky than I think.
The risks of revealing PII are I think:
- Identity theft
- Planning crimes
For those reasons, I can see a reason to use a pseudonym when posting publicly. But I also don’t see those threats as particularly concerning in general, like when meeting someone on a message board or a stranger on Facebook or LinkedIn. Someone finding my profile on LinkedIn already has a lot of information that could be used to harass me, just as it’s useful for potential employers to vet me. It has to do with target incentive: why me among numerous others? And even if someone online pursued one of those malicious acts, how would it be any different or more likely than encountering that malevolence in a completely offline relationship? Is it that the internet is vaster (so a greater chance of running into bad apples) and might offer a deeper look into my life (so greater vulnerability when encountering bad apples)? An online criminal could choose from any number of other profiles to glean info from, so as long as I don’t give away SPI, it seems like revealing basic PII and my online activity is no worse than revealing my PII and “in real life” activity day to day.
Why should relatively-public personal identifying information be kept secret online if at all?
I have an application that needs to be able to work offline, but the requirement is to authenticate every time the application is opened.
So if I also keep a password hash on the client side so that it can authenticate when there is no internet, is there anything I should be concerned about?
Thank you very much in advance.
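To make the scheme concrete, here is a minimal sketch of what I mean by keeping a hash on the client, assuming a slow salted KDF like PBKDF2 (function names, iteration count, and the record layout are all just illustrative):

```python
import hashlib
import hmac
import os

def make_offline_record(password: str) -> dict:
    """Derive a salted, deliberately slow hash of the password to cache
    on the client for offline checks. 200k iterations is illustrative;
    tune the count for your hardware."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return {"salt": salt, "digest": digest}

def verify_offline(password: str, record: dict) -> bool:
    """Check an offline login attempt against the cached record."""
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), record["salt"], 200_000
    )
    # Constant-time comparison to avoid leaking timing information
    return hmac.compare_digest(candidate, record["digest"])
```

The obvious trade-off is that anyone with access to the device’s storage can run an offline brute-force attack against the cached record, which is why the KDF needs to be slow and the password reasonably strong.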
I already know how to add SANs to a CSR, and I know it’s viable to add them once again to the CRT using openssl x509 like this:
openssl x509 -req -extfile <(printf "subjectAltName=IP:xxx") -days xxx -in xxx.csr -signkey xxx.key -out xxx.crt
but I want to find a way to do that in one command line without using a config file.
Using most OAuth 2.0 flows, a client application can identify itself to the authorization server by means of a “client id” and “client secret.”
The OAuth 2 specification says that the client secret should indeed be kept secret.
However, if the client secret is inside the application, then it’s not secret: someone can use a debugger, disassembler, etc., to view it.
So I am not sure about the effectiveness and/or purpose of this client secret. Furthermore, are there any recommendations for securing a client secret on a client under the control of the general populace? The purpose here is to establish some form of trust between the client application and the authorization server, independent of the resource owner (user).
Finally, what is the difference between using an OAuth flow without a client secret vs. using one with a client secret and not keeping that “client secret” actually secret?
A few days ago I started seeing this message:
[ch720-02:~]$ sudo apt upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages have been kept back:
  cinnamon-l10n
0 to upgrade, 0 to newly install, 0 to remove and 1 not to upgrade.
As mentioned in some of the articles here (e.g. "The following packages have been kept back:" Why and how do I solve it?), I tried
sudo apt-get --with-new-pkgs upgrade and also sudo apt-get dist-upgrade, but then I get the same output saying that the package cinnamon-l10n will be kept back. The only difference I get is with sudo apt-get install cinnamon-l10n, but this call tries to remove cinnamon altogether, which is obviously not what I want. Does anyone have a clue what is going wrong here? How can I update cinnamon-l10n without removing the other packages?
I have worked with public APIs in only one small project, but I recently learned that if one were to distribute a project with API keys inside, this is a security risk.
So I have two questions:
- What does an API key contain that would pose a security risk?
- How does one create an application that makes use of public APIs and distribute that application without posing a security risk?
Surely if someone can reverse engineer the application, they could extract any API keys that are present.
I am a fresh computer science graduate so an explanation of this would be much appreciated.
For an Animal Companion to attack, the Ranger has to use his action to command it.
I am looking to see some math on just why this restriction is in place. Is the ranger way overpowered if the animal companion can keep attacking once ordered, or if it gets to attack as an interact-with-object or verbal command from the ranger?
I’d like to see calculations for the following 3 scenarios:
- act as rules as written
- continue an action once given (1st attack takes a ranger action to activate)
- act as an interact with object by the ranger
How does the above compare with an identical ranger with colossus slayer?
I am hoping to understand why the designers limited it so much.
Redis 4 added active memory defragmentation (source: release notes):
Active memory defragmentation. Redis is able to defragment the memory while online if the Jemalloc allocator is used (the default on Linux). Useful for workloads where the allocator cannot keep the fragmentation low enough, so the only possibility is for Redis and the allocator to collaborate in order to defragment the memory.
With Redis 5, the feature (now referred to as version 2) has been improved:
Source 1: tweet from Salvatore Sanfilippo, the Redis main developer
Active defragmentation version 2. Defragmenting the memory of a running server is black magic, but Oran Agra improved his past effort and now it works better than before. Very useful for long running workloads that tend to fragment Jemalloc.
Source 2: AWS announcement of Redis 5
One of the highlights of the previous release was the fact that Redis gained the capability to defragment the memory while online. The way it works is very clever: Redis scans the keyspace and, for each pointer, asks the allocator if moving it to a new address would help to reduce the fragmentation. This release ships with what can be called active defrag 2: It’s faster, smarter, and has lower latency. This feature is especially useful for workloads where the allocator cannot keep the fragmentation low enough, so the strategy is for both Redis and the allocator to cooperate. For this to work, the Jemalloc allocator has to be used. Luckily, it’s the default allocator on Linux.
Question: Assuming you are already using Jemalloc, is there any reason not to always enable activedefrag?
Given that the alternative is to restart the instance to deal with fragmentation (which is highly problematic), and given that the overhead of activedefrag seems quite low from what I saw so far, the option seems to be too useful to be disabled by default.
Or are there any situations where it will harm performance?
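For reference, turning it on looks roughly like this in redis.conf (the directive names are the real ones; the threshold values below are illustrative, not a recommendation, so check the self-documented defaults shipped with your version):

```conf
# Active defragmentation is off by default
activedefrag yes
# Don't bother below this amount of fragmentation waste (illustrative)
active-defrag-ignore-bytes 100mb
# Fragmentation percentage at which defrag starts / goes full-throttle
active-defrag-threshold-lower 10
active-defrag-threshold-upper 100
# Min/max CPU effort the defrag cycle is allowed to consume
active-defrag-cycle-min 5
active-defrag-cycle-max 75
```

The cycle-min/cycle-max knobs are presumably where any performance cost would show up, since defrag work competes with command processing on the main thread.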