Shouldn’t NASA JPL’s network be secure against Raspberry Pi connections authorized or not?

The question What information was stolen from JPL during the Raspberry Pi hack? refers to a recent news event (e.g. Engadget’s A rogue Raspberry Pi helped hackers access NASA JPL systems) and references NASA’s Office of Inspector General June 2019 report Cybersecurity Management and Oversight at the Jet Propulsion Laboratory, which states on page 17, in the section titled Incomplete and Inaccurate System Component Inventory:

Moreover, system administrators did not consistently update the inventory system when they added devices to the network. Specifically, we found that 8 of 11 system administrators responsible for managing the 13 systems in our sample maintain a separate inventory spreadsheet of their systems from which they periodically update the information manually in the ITSDB. One system administrator told us he does not regularly enter new devices into the ITSDB as required because the database’s updating function sometimes does not work and he later forgets to enter the asset information. Consequently, assets can be added to the network without being properly identified and vetted by security officials. The April 2018 cyberattack exploited this particular weakness when the hacker accessed the JPL network by targeting a Raspberry Pi computer that was not authorized to be attached to the JPL network. 32 The device should not have been permitted on the JPL network without the JPL OCIO’s review and approval.

To me it sounds like the report treats the failure to keep an updated list of devices authorized to connect to the network as the major security issue; if only they’d had better record-keeping, this wouldn’t have happened. But it seems to me that no matter how well you document, after the fact, what’s supposed to be connected, that doesn’t in any way prevent inadvertent, accidental, or deliberate connection of an unauthorized device, for example “just for a minute” to download a Raspberry Pi update of some kind.

Question: Shouldn’t such a valuable US government network be secure against all connections equally, authorized or not?

This answer starts to outline the seriousness of the breach.

Shouldn’t Apple consider allowing use of Apple Pencil while it’s charging?

I have a 2018 iPad with a 1st-generation Apple Pencil. I am trying to use it for taking notes while it is connected to a USB cable for charging, but surprisingly it doesn’t work, even though the iPad has already recognized it (I can see its charging status in the notification panel).

So I am wondering: shouldn’t Apple look into this and fix the problem?

Is it worth contacting them about it?

Why shouldn’t an event pertain to multiple aggregates?

In my experience, it is often said that in event sourcing “an event must belong to one aggregate” and also “an aggregate is your biggest transactional boundary”.

Why is this?

What harm would come of events like this:

{
  type: "BoblesSent",
  aggregates: [1, 2],
  from: 1,
  to: 2,
  amount: 420
}

Seems like it could be atomically recorded, efficiently queried when reconstructing just one (either) aggregate, and applied in isolation in both push/projection and lazy/reconstruction scenarios.

Where is the catch? Is it a performance scalability thing?

The aggregates could also be heterogeneous:

{
  type: "ActivityCreated",
  aggregates: [1, 999],
  customer: 1,
  activity: 999
}
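A minimal in-memory sketch of what the question proposes (all names here are illustrative, not a real event-store library). In a single process this works exactly as described: one atomic append, and the event is visible when reconstructing either aggregate. The catch usually appears once aggregates live in different partitions or services, where a multi-aggregate event is no longer one atomic write:

```python
from collections import defaultdict

class EventStore:
    """Hypothetical single-process event store with a global log."""

    def __init__(self):
        self.log = []                           # append-only global log
        self.by_aggregate = defaultdict(list)   # index: aggregate id -> events

    def append(self, event):
        # One atomic append to the global log...
        self.log.append(event)
        # ...and the per-aggregate streams are derived indexes, so a single
        # event can be replayed into several aggregates.
        for agg_id in event["aggregates"]:
            self.by_aggregate[agg_id].append(event)

store = EventStore()
store.append({"type": "BoblesSent", "aggregates": [1, 2],
              "from": 1, "to": 2, "amount": 420})

# Reconstructing either aggregate sees the same single event:
assert store.by_aggregate[1][0] is store.by_aggregate[2][0]
```

Note that the atomicity above comes entirely from the global log living in one place; the usual "one event, one aggregate" advice is about keeping that guarantee once streams are sharded.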

Set size is changing even though it shouldn’t (Python)

class Test:
    TheFlag = True
    StartNodeQuery = {1, 2, 3, 4, 5}

    def parsa(self):
        while self.TheFlag:
            SNQ = self.StartNodeQuery
            self.iterator(SNQ)

    def iterator(self, CurrentNodeQuery):
        # it prints {1, 2, 3, 4, 5}
        print(CurrentNodeQuery)

        if len(CurrentNodeQuery) < 100:
            b = len(CurrentNodeQuery) * 2
            c = len(CurrentNodeQuery) * 3
            self.StartNodeQuery.update({b, c})

            # it prints {1, 2, 3, 4, 5, 10, 15}
            print(CurrentNodeQuery)
        else:
            self.TheFlag = False

        assert 0

obj = Test()
obj.parsa()

As you can see, I deliberately ended the program with assert 0. The main issue is: before the function has finished, the parameter that was passed to it gets changed!

As you can see, StartNodeQuery = {1, 2, 3, 4, 5} and SNQ = self.StartNodeQuery.

So why, when I change the size of self.StartNodeQuery inside the function before it has finished, does CurrentNodeQuery (which I thought was a different variable with the same values as self.StartNodeQuery, or SNQ) get changed as well, even though we haven’t passed the new self.StartNodeQuery to CurrentNodeQuery yet?

I hope you understand my problem; if you have the solution, please help a guy out.
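The behavior in the question comes from the fact that assignment in Python never copies an object; it only binds another name to the same object. So SNQ and CurrentNodeQuery are both aliases of self.StartNodeQuery, and mutating the set through any name is visible through all of them. A minimal standalone demonstration:

```python
# Assignment binds a new name to the SAME object; it does not copy.
a = {1, 2, 3, 4, 5}
b = a                  # alias: b and a are one and the same set
b.add(10)

print(sorted(a))       # [1, 2, 3, 4, 5, 10] -- "a" changed too
print(a is b)          # True: one object, two names

# To get an independent snapshot, copy explicitly:
c = a.copy()           # or set(a)
c.add(99)
print(99 in a)         # False: the copy is a separate object
```

In the question's code, `SNQ = self.StartNodeQuery` (and the later parameter binding) would need to be `self.StartNodeQuery.copy()` to behave the way the asker expected.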

Why shouldn’t this prove the Prime Number Theorem? [on hold]

Denote by $\mu$ the Möbius function. It is known that for every integer $k>1$, the number $\sum_{n=1}^{\infty} \frac{\mu(n)}{n^k}$ can be interpreted as the probability that a randomly chosen integer is $k$-free.

Letting $k\rightarrow 1^+$, why shouldn’t this entail the Prime Number Theorem in the form

$$\sum_{n=1}^{\infty} \frac{\mu(n)}{n}=0,$$

since the probability that an integer is “$1$-free” is zero?
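For what it’s worth, the gap can be located precisely. This is a standard observation, sketched here in LaTeX:

```latex
% For k > 1 the series converges absolutely and the Euler product gives
\[
  \sum_{n=1}^{\infty} \frac{\mu(n)}{n^{k}} = \frac{1}{\zeta(k)},
\]
% and indeed 1/\zeta(k) \to 0 as k \to 1^{+}, since \zeta has a simple
% pole at k = 1. The heuristic implicitly interchanges a limit with an
% infinite sum:
\[
  \lim_{k\to 1^{+}} \sum_{n=1}^{\infty} \frac{\mu(n)}{n^{k}}
  \overset{?}{=}
  \sum_{n=1}^{\infty} \frac{\mu(n)}{n}.
\]
% At k = 1 the series is at best conditionally convergent, so absolute
% convergence no longer justifies this step; the Tauberian argument
% needed to justify it is classically equivalent to the Prime Number
% Theorem itself, which is why the heuristic cannot prove it.
```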

Why shouldn’t future Bitcoin Core releases be written in Python?

Python offers various advantages, including simpler coding rules and ease of readability. It offers OOP and cross-platform compatibility, and has numerous libraries that have been added over time. It is understandable that the original Bitcoin Core client was written in C++, as Python was not as popular then as it is now.

Apart from having to tear down the entire codebase, rewrite it in Python, and check for vulnerabilities, why aren’t the core Bitcoin developers considering migrating the entire reference client implementation to Python?
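One concrete piece of the answer is that much of Bitcoin Core is consensus-critical: a reimplementation must reproduce the C++ client’s behavior bit for bit, or nodes fork. Even the simplest consensus rule, the double-SHA256 block-header hash, illustrates the exactness required. A sketch (using the well-known, publicly documented genesis block parameters):

```python
import hashlib
import struct

def block_header_hash(version, prev_hash_hex, merkle_root_hex,
                      timestamp, bits, nonce):
    """Serialize an 80-byte Bitcoin block header and double-SHA256 it."""
    header = (
        struct.pack("<I", version)
        + bytes.fromhex(prev_hash_hex)[::-1]   # hashes serialize little-endian
        + bytes.fromhex(merkle_root_hex)[::-1]
        + struct.pack("<I", timestamp)
        + struct.pack("<I", bits)
        + struct.pack("<I", nonce)
    )
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    return digest[::-1].hex()                  # displayed big-endian

# The genesis block's published header fields:
genesis = block_header_hash(
    version=1,
    prev_hash_hex="00" * 32,
    merkle_root_hex="4a5e1e4baab89f3a32518a88c31bc87f618f7667"
                    "3e2cc77ab2127b7afdeda33b",
    timestamp=1231006505,
    bits=0x1d00ffff,
    nonce=2083236893,
)
print(genesis)  # the famous genesis hash, starting 000000000019d668...
```

Expressing the protocol in Python is easy; the hard part of a migration is that every such rule, including historical quirks of the C++ implementation, must match exactly, and Python’s performance for block validation is a further practical objection.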

A good reason why docker containers shouldn’t use standard ports?

I don’t have much practical experience with containers, yet I see a lot of people using alternative ports to deploy their services. As a consequence, here’s a very basic question: Is there a good reason why in docker containers we should avoid standard TCP/UDP ports?

Popular examples for such ports are 80 for HTTP, 21 for FTP, 443 for HTTPS, 22 for SSH, etc. Often these are substituted with ports like 8080 or 3000 for 80, 8443 for 443, 1022 for 22,…

There are good reasons to do these substitutions in general:

  • Ports under 1024 are reserved for system processes and thus can be bound only by the root user.
  • These system ports are often avoided in development to prevent conflicts with other services that might already be running.
  • Sometimes such alternative ports are used as a way to achieve a level of security-by-obscurity.

However, to me it seems that the isolated nature of containers lends itself to using standard ports. This could bring some benefits, such as easier development and testing thanks to default configurations.

Finding and removing a particular page that shouldn’t be active

So I have rogue pages on my site that shouldn’t be there. I’m using a theme, and I believe these are left over from it.

I’m new to WordPress, and I’m trying to figure out how to find the page and set it to private, or just delete it altogether.

Any instruction on finding these pages from within the web admin would be great!

thanks.

Shouldn’t only the last digits of a Dynamic IP change when we are assigned a new temp IP?

I kept track of my IP over the past months and noticed that for a few months I had, for example, an IP starting with 92. Then the next month I had a different one starting with 79. How is this possible? I thought only the last digits of an IP change when we have a dynamic one. I didn’t move to another location or change provider, and the geo-location remained the same.
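The "only the last digits change" intuition corresponds to the host part of a single address prefix; ISPs, however, commonly assign dynamic addresses from several unrelated prefixes. A quick illustration with Python's ipaddress module (the /16 prefixes below are hypothetical, chosen only to echo the 92.x and 79.x observation):

```python
import ipaddress

# If the ISP drew only from one pool, just the host bits would vary:
pool_a = ipaddress.ip_network("92.0.0.0/16")   # hypothetical pool
pool_b = ipaddress.ip_network("79.0.0.0/16")   # hypothetical second pool

old = ipaddress.ip_address("92.0.3.7")
new = ipaddress.ip_address("79.0.9.1")

print(old in pool_a)   # True  -- same prefix, only last digits differ
print(new in pool_a)   # False -- a lease from an entirely different pool
print(new in pool_b)   # True
```

So a first-octet change simply means the DHCP lease came from a different block the provider owns; it says nothing about the subscriber having moved.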

Thanks.