TeamViewer on Jetson Nano: CheckCPU: unknown architecture aarch64

I’ve installed TeamViewer on a Jetson Nano following this instruction. It fails in two ways:

  • If I launch it from the command line, I get the following output:
  Init...   Error: CheckCPU: unknown architecture 'aarch64' 
  • If I connect to it after setting a password through the command line, it lets me enter the login remotely, but after I enter the password it immediately disconnects.

I tried changing WaylandEnable in /etc/gdm3/custom.conf, but it doesn’t help. What else can I try?

What are the security roles/levels for architecture?

Given a sample system architecture for a company, where different types of users access the company’s Web App and databases over the public internet, what are the security roles/levels of the systems in this architecture with respect to information security and IAM?

Is it correct to assume these security roles/levels include the system level, transmission level, application level, and even ‘people’, etc.?

Thanks.

Restoring a database on a multi-server architecture with two front-end servers

I have a multi-server architecture with two front-end servers. I want to restore a content database in one of the web applications, and the change should be reflected in both site collections.
But when I restore the database, only one site collection loads while the other throws an error. It does not load.
What would be a possible approach to restoring the database? Awaiting your suggestions.

CMake giving error “compute_20”, even though the architecture is not explicitly stated in the makefile

I have the same error as the person in this question; however, when I tried the solution (just deleting the target for compute_20), I’m still getting the error, even after cleaning the project.

To be specific: I’m trying to install Caffe by following these instructions, step by step. Even more specifically, here are my steps.

I run sudo cmake .. and get the following output:

CMake Warning (dev) at cmake/Misc.cmake:32 (set):
  implicitly converting 'BOOLEAN' to 'STRING' type.
Call Stack (most recent call first):
  CMakeLists.txt:24 (include)
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Found Boost: /usr/include (found suitable version "1.65.1", minimum required is "1.46") found components:  system thread filesystem chrono date_time atomic
-- Found gflags  (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libgflags.so)
-- Found glog    (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libglog.so)
-- Found PROTOBUF Compiler: /usr/bin/protoc
-- HDF5: Using hdf5 compiler wrapper to determine C configuration
-- HDF5: Using hdf5 compiler wrapper to determine CXX configuration
-- Found lmdb    (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/liblmdb.so)
-- Found LevelDB (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libleveldb.so)
-- Found Snappy  (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libsnappy.so)
-- CUDA detected: 10.1
-- Automatic GPU detection failed. Building for all known architectures.
-- Added CUDA NVCC flags for: sm_20 sm_21 sm_30 sm_35 sm_50
-- OpenCV found (/usr/share/OpenCV)
-- Found OpenBLAS libraries: /usr/lib/x86_64-linux-gnu/libopenblas.so
-- Found OpenBLAS include: /usr/include/x86_64-linux-gnu
-- NumPy ver. 1.11.0 found (include: /usr/local/lib/python2.7/dist-packages/numpy/core/include)
-- Found Boost: /usr/include (found suitable version "1.65.1", minimum required is "1.46") found components:  python
-- Found NCCL (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libnccl.so)
-- Detected Doxygen OUTPUT_DIRECTORY: ./doxygen/
--
-- ******************* Caffe Configuration Summary *******************
-- General:
--   Version           :   0.15.14
--   Git               :   v0.15.14-16-g4b8d54d8-dirty
--   System            :   Linux
--   C++ compiler      :   /usr/bin/c++
--   Release CXX flags :   -O3 -DNDEBUG -fPIC -Wall -Wno-sign-compare -Wno-uninitialized
--   Debug CXX flags   :   -g -fPIC -Wall -Wno-sign-compare -Wno-uninitialized
--   Build type        :   Release
--
--   BUILD_SHARED_LIBS :   ON
--   BUILD_python      :   ON
--   BUILD_matlab      :   OFF
--   BUILD_docs        :   ON
--   CPU_ONLY          :   OFF
--   USE_OPENCV        :   ON
--   USE_LEVELDB       :   ON
--   USE_LMDB          :   ON
--   ALLOW_LMDB_NOLOCK :   OFF
--
-- Dependencies:
--   BLAS              :   Yes (open)
--   Boost             :   Yes (ver. 1.65)
--   glog              :   Yes
--   gflags            :   Yes
--   protobuf          :   Yes (ver. 3.0.0)
--   lmdb              :   Yes (ver. 0.9.21)
--   LevelDB           :   Yes (ver. 1.20)
--   Snappy            :   Yes (ver. ..)
--   OpenCV            :   Yes (ver. 3.2.0)
--   CUDA              :   Yes (ver. 10.1)
--
-- NVIDIA CUDA:
--   Target GPU(s)     :   Auto
--   GPU arch(s)       :   sm_20 sm_21 sm_30 sm_35 sm_50
--   cuDNN             :   Not found
--   NCCL              :   Yes
--
-- Python:
--   Interpreter       :   /usr/bin/python2.7 (ver. 2.7.15)
--   Libraries         :   /usr/lib/x86_64-linux-gnu/libpython2.7.so (ver 2.7.15+)
--   NumPy             :   /usr/local/lib/python2.7/dist-packages/numpy/core/include (ver 1.11.0)
--
-- Documentaion:
--   Doxygen           :   /usr/bin/doxygen (1.8.13)
--   config_file       :   /home/par/caffe/.Doxyfile
--
-- Install:
--   Install path      :   /home/par/caffe/build/install
--
-- Configuring done
-- Generating done
-- Build files have been written to: /home/par/caffe/build

NOTE: The configuration says specifically that it’s compiling for the sm_20 compute capability, which is not what I want. I searched all my files for the “sm_20” token, and the results are all in /caffe/build/src/caffe/CMakeFiles/cuda_compile_1.dir/, meaning they are not contained in a makefile.
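The search itself was just a recursive scan over the build tree; a rough Python equivalent of what I ran is below (the root path is from my layout, so adjust it):

```python
from pathlib import Path

def find_arch_flags(root, tokens=("compute_20", "sm_20")):
    """Recursively scan generated build files for lingering arch flags."""
    root = Path(root)
    hits = []
    if not root.exists():
        return hits
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip it
        for tok in tokens:
            if tok in text:
                hits.append((str(path), tok))
    return hits

if __name__ == "__main__":
    for path, tok in find_arch_flags("caffe/build"):
        print(f"{tok} found in {path}")
```

Every hit it reports is under the cuda_compile_1.dir directory mentioned above, nowhere else.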

I searched the same files for a compute_20 mention, and once again the only mentions are in the directory above. I removed them, keeping the syntax intact, and then ran make -j"$(nproc)", which gives the following error:

[  6%] Building NVCC (Device) object src/caffe/CMakeFiles/cuda_compile_1.dir/layers/cuda_compile_1_generated_crop_layer.cu.o
nvcc fatal   : Unsupported gpu architecture 'compute_20'
CMake Error at cuda_compile_1_generated_math_functions.cu.o.Release.cmake:219 (message):
  Error generating
  /home/par/caffe/build/src/caffe/CMakeFiles/cuda_compile_1.dir/util/./cuda_compile_1_generated_math_functions.cu.o

What is the cause of this, if neither compute_20 nor sm_20 is mentioned in any makefile?

Should we save clean architecture entities in DB?

I’m a little confused about how to store entities in the DB. According to Uncle Bob, entities encapsulate enterprise-wide business rules. This means that they can have methods, for example.

But is it correct to pass my entity as a parameter to the GatewayInterface / Repository?

I’ve been reading some comments here, and some of them say it is not recommended to pass entities to outer layers, because you would be exposing enterprise business logic.

Should I pass only a DTO to my repository? Something the repository can map and then save?
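To make the question concrete, here is a rough Python sketch of what I mean (all names are made up, and the "repository" is just an in-memory dict): the entity keeps its behaviour, and the use case maps it to a plain DTO before handing it to the repository.

```python
from dataclasses import dataclass

class Order:
    """Entity: carries enterprise-wide business rules (it has behaviour)."""
    def __init__(self, order_id, amount):
        self.order_id = order_id
        self.amount = amount

    def apply_discount(self, pct):
        self.amount -= self.amount * pct / 100

@dataclass
class OrderDTO:
    """Plain data carrier: no behaviour, safe to hand to outer layers."""
    order_id: int
    amount: float

def to_dto(order):
    return OrderDTO(order_id=order.order_id, amount=order.amount)

class OrderRepository:
    """The repository sees only the DTO, never the entity."""
    def __init__(self):
        self._rows = {}

    def save(self, dto):
        self._rows[dto.order_id] = dto.amount

order = Order(1, 100.0)
order.apply_discount(10)        # business rule stays inside the entity
repo = OrderRepository()
repo.save(to_dto(order))
print(repo._rows)  # {1: 90.0}
```

Is this mapping step the recommended approach, or is it acceptable to pass the Order entity itself into save()?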

Thanks

How would one change this architecture to avoid synchronous REST calls?

Right now I have an architecture where several “microservices” are daisy-chained together via synchronous REST calls. Obviously this is far from ideal, since synchronous communication between microservices is strongly discouraged and, as mentioned on slide 37 here, the overall availability of your application drops exponentially with the number of services chained together behind it.

Picture a flow somewhat like this:

From the front-end, the user submits an application form. This starts a process for either accepting or denying the application.

First, service1 is called, which inserts the form data into a database.

Once the insertion is complete, service1 calls service2, which does some preliminary sanity checks.

If those checks pass, service2 calls service3 to perform some more advanced checks.

service3 in turn calls serviceA, serviceB, and serviceC in parallel, and aggregates their responses into a final decision.

Some challenges are:

  • service3 has to come after we know the results of service2’s checks, because service3’s checks are more expensive; if the sanity checks already fail, we don’t want to bother calling service3 at all.
  • The calls to serviceA, serviceB, and serviceC should technically be considered “query” calls (under CQS/CQRS), since we expect a return value and there is no change in state. However, some or all of these services are in fact complex machine learning models. This differentiates them from regular query calls in two ways: 1. they can be slower than what you would usually expect from a “query” call, and 2. we don’t have the option of keeping a local copy of the data in service3’s DB, since said data is generated on the fly.
  • It is a requirement that we have an answer (to accept or deny the application) in real time: a few seconds after the customer clicks submit, they should know whether we accepted them or not. At the same time, the financial impact of making the wrong decision, even for a short time, is too high to risk, so giving some answer and then correcting it later (eventual consistency) is not an option here.
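To make the flow concrete, here is a rough Python sketch of the chain as it stands (service names and check logic are placeholders for the real REST calls); the one mitigation already in place is that service3 fans out to A/B/C in parallel rather than serially:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for the three ML model services; each returns a partial verdict.
def service_a(form): return form["income"] > 1000
def service_b(form): return form["age"] >= 18
def service_c(form): return not form["flagged"]

def service2_sanity_checks(form):
    # Cheap preliminary checks that gate the expensive calls below.
    return all(k in form for k in ("income", "age", "flagged"))

def service3_advanced_checks(form):
    # Fan out to the three expensive calls in parallel, then aggregate.
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(s, form) for s in (service_a, service_b, service_c)]
        return all(f.result() for f in futures)

def decide(form):
    # service1's DB insert would happen here; omitted in this sketch.
    if not service2_sanity_checks(form):
        return "deny"
    return "accept" if service3_advanced_checks(form) else "deny"

print(decide({"income": 2000, "age": 30, "flagged": False}))  # accept
```

The real-time requirement means the caller still blocks on decide() end to end, which is exactly the coupling I’d like to reduce.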

Any ideas on how I could reorganize an architecture like this? Or do the requirements in this case mean we have to live with the coupling?

Thanks.