What spells shouldn’t be given to Warlock?

On page 287 of the Dungeon Master's Guide:

Be cautious when changing the warlock spell list. Since Warlocks regain their spell slots after a short rest, they have the potential to use certain spells more times in a day than other classes do.

What exactly would make a spell unbalanced on a short-rest recharge? (i.e., attributes like damage, duration, or effect)

Take Geas, for example. Would it be unbalanced to add that to the Warlock spell list?

Images appearing in search results (when they shouldn’t)

We have two SP2013 on-prem site collections: one where we explore and build tests, and another for everyone in our company to use. In our "public"-facing site's search results we are trying to show only pages and documents that our users would be interested in. So we've gone through the list/library settings and set "Allow items in this document library to appear in search results?" to No.

For example, the Style Library, Images, and Site Assets libraries. Those are not results our users will find valuable.

In our development site this works great, but in our public site I'm still seeing some graphics from the Images library. For example, we have a graphic for our department's logo. Someone might search on the name of our department and get the logo back as a result. Not useful. The link returned is also strange.


Not sure why the link points to the item's display form.

I've checked that the library is set not to be included, and I've done a reindex (from the UI) for both the library and the site, but that didn't fix it.

I added a new image to the library and strangely it doesn’t appear in search results.

I've checked my result sources, the search results web part's query builder, and the search and offline availability settings, and they are the same between the development and public sites. Not really sure where to go next.

Why shouldn’t I create a class for every property?

In a particular program I had written, I noticed I had a few classes with this pattern:

class IdObject:
    '''Objects with generated id properties'''
    def __init__(self, id_generator):
        self.id_generator = id_generator
        self.id = id_generator()

class Node(IdObject):
    '''Represents a node in the graph'''
    def __init__(self, id_generator):
        super().__init__(id_generator)

That is, there was a particular property that I wanted a class to have, so I made a class that instantiated that property, and then just subclassed from that other class.

However, I realized I could do this for literally every other property. Having been a programmer for a while, doing that strikes me as just wrong, but it would be helpful for some more experienced and knowledgeable programmers to help me discover exactly why doing that would be wrong.

For another example, it doesn’t seem right to do:

class HasLength:
    def __init__(self, length):
        self.length = length

class HasWidth:
    def __init__(self, width):
        self.width = width

class Rectangle(HasLength, HasWidth):
    def __init__(self, length, width):
        HasLength.__init__(self, length)
        HasWidth.__init__(self, width)

This very much reminds me of Java’s interfaces, but with properties.

So is it bad form to create a new class for each new property? Why or why not?
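For contrast, this is the flat version I would otherwise write, with the attributes simply assigned on one class (shown here as a dataclass; the `area` method is just for illustration):

```python
from dataclasses import dataclass

# The same Rectangle without a class per property: the attributes are
# assigned directly and there is no extra hierarchy.
@dataclass
class Rectangle:
    length: float
    width: float

    def area(self) -> float:
        return self.length * self.width

r = Rectangle(3.0, 4.0)
print(r.area())  # 12.0
```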

Shouldn’t NASA JPL’s network be secure against Raspberry Pi connections, authorized or not?

The question What information was stolen from JPL during the Raspberry Pi hack? refers to an event in recent news (e.g. Engadget’s A rogue Raspberry Pi helped hackers access NASA JPL systems) and references NASA’s Office of Inspector General June 2019 report Cybersecurity Management and Oversight at the Jet Propulsion Laboratory, which states on page 17, in the section titled Incomplete and Inaccurate System Component Inventory:

Moreover, system administrators did not consistently update the inventory system when they added devices to the network. Specifically, we found that 8 of 11 system administrators responsible for managing the 13 systems in our sample maintain a separate inventory spreadsheet of their systems from which they periodically update the information manually in the ITSDB. One system administrator told us he does not regularly enter new devices into the ITSDB as required because the database’s updating function sometimes does not work and he later forgets to enter the asset information. Consequently, assets can be added to the network without being properly identified and vetted by security officials. The April 2018 cyberattack exploited this particular weakness when the hacker accessed the JPL network by targeting a Raspberry Pi computer that was not authorized to be attached to the JPL network. The device should not have been permitted on the JPL network without the JPL OCIO’s review and approval.

To me it sounds like the report overall laments the failure to keep an updated list of devices that are authorized to be connected to the network as the major security issue; if they’d only had better record-keeping, this wouldn’t have happened. But it seems to me that no matter how well you document after the fact what’s supposed to be connected, that doesn’t in any way prevent the inadvertent, accidental, or purposeful connection of an unauthorized device, for example “just for a minute” to download a Raspberry Pi update of some kind.

Question: Shouldn’t such a valuable US government network be secure against all connections equally, authorized or not?

This answer starts to outline the seriousness of the breach.

Shouldn’t Apple consider allowing use of Apple Pencil while it’s charging?

I have a 2018 iPad with a 1st-generation Apple Pencil. I am trying to use it for taking notes while it is connected to USB-cable charging, but surprisingly it doesn’t work, even though the iPad already recognizes it (I can see its charging status on the notification page).

So I am wondering: shouldn’t Apple consider this issue and fix the problem?

Is it worth contacting them about this?

Why shouldn’t an event pertain to multiple aggregates?

In my experience, it is often said that in event sourcing “an event must belong to one aggregate” and also “an aggregate is your biggest transactional boundary”.

Why is this?

What harm would come of events like this:

{
  type: "BoblesSent",
  aggregates: [1, 2],
  from: 1,
  to: 2,
  amount: 420
}

It seems like it could be recorded atomically, queried efficiently when reconstructing just one (either) aggregate, and applied in isolation in both push/projection and lazy/reconstruction scenarios.

Where is the catch? Is it a performance scalability thing?

The aggregates can also be heterogeneous:

{
  type: "ActivityCreated",
  aggregates: [1, 999],
  customer: 1,
  activity: 999
}
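To make the "applied in isolation" claim concrete, here is a minimal sketch of what I have in mind (the in-memory store and the function names are my own invention, not any particular framework): the single event is indexed under both aggregate ids, and each aggregate can be reconstructed from its own slice of the log alone.

```python
from collections import defaultdict

# Hypothetical in-memory event store: each event is indexed under every
# aggregate id it mentions.
events_by_aggregate = defaultdict(list)

def record(event):
    for agg_id in event["aggregates"]:
        events_by_aggregate[agg_id].append(event)

def balance(agg_id):
    """Reconstruct one aggregate's balance from only its own event slice."""
    total = 0
    for e in events_by_aggregate[agg_id]:
        if e["type"] == "BoblesSent":
            if e["from"] == agg_id:
                total -= e["amount"]
            if e["to"] == agg_id:
                total += e["amount"]
    return total

record({"type": "BoblesSent", "aggregates": [1, 2],
        "from": 1, "to": 2, "amount": 420})

print(balance(1), balance(2))  # -420 420
```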

Set size is changing even though it shouldn’t (Python)

class Test:
    TheFlag = True
    StartNodeQuery = {1, 2, 3, 4, 5}

    def parsa(self):
        while self.TheFlag:
            SNQ = self.StartNodeQuery
            self.iterator(SNQ)

    def iterator(self, CurrentNodeQuery):

        # it prints {1, 2, 3, 4, 5}
        print(CurrentNodeQuery)

        if len(CurrentNodeQuery) < 100:
            b = len(CurrentNodeQuery) * 2
            c = len(CurrentNodeQuery) * 3
            self.StartNodeQuery.update({b, c})

            # it prints {1, 2, 3, 4, 5, 10, 15}
            print(CurrentNodeQuery)

        else:
            self.TheFlag = False

        assert 0

obj = Test()
obj.parsa()

As you can see, I deliberately ended the program with assert 0. The main issue is: before the function is finished, the parameter that is passed to it gets changed!

As you can see, StartNodeQuery = {1, 2, 3, 4, 5} and SNQ = self.StartNodeQuery.

So why, when I change the size of self.StartNodeQuery inside the function before it’s finished, does CurrentNodeQuery (which I thought was a different variable with the same values as self.StartNodeQuery, i.e. SNQ) get changed as well, even though we didn’t pass the new self.StartNodeQuery to CurrentNodeQuery yet?
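A minimal version of what I am seeing, stripped of the class (the names a and b here are just for illustration):

```python
# Assigning a set to a second name does not copy it; both names end up
# referring to the same underlying set object.
a = {1, 2, 3, 4, 5}
b = a                 # b is another name for the same set, not a copy
a.update({10, 15})    # mutating through one name...
print(b)              # ...is visible through the other: {1, 2, 3, 4, 5, 10, 15}
print(a is b)         # True
```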

I hope you understand my problem, if you have the solution, please help a guy out

Why shouldn’t this prove the Prime Number Theorem?

Denote by $\mu$ the Möbius function. It is known that for every integer $k>1$, the number $\sum_{n=1}^{\infty} \frac{\mu(n)}{n^k}$ can be interpreted as the probability that a randomly chosen integer is $k$-free.

Letting $k\rightarrow 1^+$, why shouldn’t this entail the Prime Number Theorem in the form

$$\sum_{n=1}^{\infty} \frac{\mu(n)}{n}=0,$$

since the probability that an integer is “$1$-free” is zero?
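(For reference, the probabilistic interpretation above comes from the standard Dirichlet-series identity, valid for $k>1$,

$$\sum_{n=1}^{\infty} \frac{\mu(n)}{n^k} = \frac{1}{\zeta(k)},$$

and since $\zeta(k)\rightarrow\infty$ as $k\rightarrow 1^+$, the right-hand side formally tends to $0$. My question is why this limiting argument shouldn’t suffice.)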

Why shouldn’t future Bitcoin Core releases be in Python?

Python offers various advantages, including simpler coding rules and ease of readability. It offers OOP and cross-platform compatibility, and has numerous libraries that have been added over time. It is understandable why the original Bitcoin Core client was written in C++, as Python was not as popular then as it is now.

Apart from having to tear down the entire code base, rewrite it in Python, and check for vulnerabilities, why aren’t the core Bitcoin developers thinking of migrating the entire reference client implementation to Python?