Error detection in sequence of integers

Let’s say I receive a large sequence s of numbers n0, n1, n2, … Later I expect to receive the same sequence s* with the same numbers n0, n1, n2, …, but errors might occur; for instance, n2 gets altered. I’d like to detect errors as soon as possible with high probability.

  1. Solution: store a cumulative hash every k elements (but then I detect the error too late; see the sketch below).
  2. Solution: store the whole sequence (way too large to store on disk).

Is there anything in between those two trade-offs that allows me to detect the error per element with high probability?
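To make solution 1 concrete, here is a minimal sketch of the checkpointing idea as I understand it, assuming a standard cryptographic hash and a placeholder checkpoint interval k; an error anywhere in a block only surfaces at that block’s checkpoint, i.e. up to k - 1 elements late:

import hashlib

def checkpoints(seq, k):
    """Hash of the running prefix, recorded every k elements (solution 1)."""
    h = hashlib.sha256()
    out = []
    for i, n in enumerate(seq, 1):
        h.update(n.to_bytes(8, "little", signed=True))
        if i % k == 0:
            out.append(h.hexdigest())  # hashlib allows digests mid-stream
    return out

# Altering one element changes every checkpoint from its block onward.
assert checkpoints([1, 2, 3, 4], 2) != checkpoints([1, 9, 3, 4], 2)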

What’s the worst-case complexity of Robert W. Floyd’s cycle detection algorithm?

Givens. I understand Floyd’s algorithm can determine the length $\lambda$ of the loop and the length $m$ of the tail. The hare will not necessarily catch the tortoise on the first cycle, but it is guaranteed to catch it after a number $k$ of cycles, where $k$ is a natural number.

Question. Given I know these facts, how can I deduce the worst-case complexity of the algorithm?
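For reference, a minimal sketch of the phases being discussed, assuming the sequence is generated by repeatedly applying a function f to a starting value x0 (both placeholder names); each while loop advances its pointers on the order of $m + \lambda$ times at most:

def floyd(f, x0):
    # Phase 1: tortoise moves 1 step, hare moves 2, until they meet in the cycle.
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(f(hare))

    # Phase 2: restart the tortoise at x0; moving both one step at a time,
    # they meet at the cycle entrance after exactly m steps.
    m, tortoise = 0, x0
    while tortoise != hare:
        tortoise, hare, m = f(tortoise), f(hare), m + 1

    # Phase 3: walk the hare once around the cycle to measure lambda.
    lam, hare = 1, f(tortoise)
    while tortoise != hare:
        hare, lam = f(hare), lam + 1
    return m, lam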

Configure Apple Watch to not dial 911 during “Fall Detection”?

my need:
I do not want “Fall Detection” to dial 911. And, I’d like to use “Fall Detection” to stop other people from dialing 911 on my behalf.

my condition:
I will be walking, and I have a seizure… my next memory is that I am in the Emergency Room. An ambulance ride and ER visit costs $1800. I receive no treatment in the ER; all I need to do is rest. I refuse to pay $1800, and I hope “Fall Detection” can solve this problem.

the situation:
I am walking, have a seizure, stumble and fall to the ground. Apparently I sit there confused and resting. Apparently, I am able to talk with people who ask if I’m ok. But, I’m not able to convince them not to call an ambulance. A few times I remember talking to EMTs telling them to leave me alone, but they always force you to get in the ambulance (I guess for financial reasons). I guess I’m too weak / exhausted to physically resist them.

If “Fall Detection” could get an “Emergency Contact” of mine to talk with a good Samaritan who finds me and convince them not to dial 911, that would be such a gift. So amazing. What is the best way to configure “Fall Detection” to make this strategy work?

thanks so much.

Logoff interception based on application detection with user prompt

Good Morning,

I’ve noticed there is a shortage of license keys where I work, as people consistently don’t log out of licensed applications when they’re not in use. So, as a means to address the problem and teach myself a little Python, I thought I might try to write a little something (hopefully).

I’d like to be able to intercept a logoff attempt when there is an open instance of licensed software and gently remind the user to be better.

For clarity, I’ve drawn a rough flowchart of how I want the script(?) to function: Flowchartery

I have a vague idea of using Tasklist to check against a defined list of licensed software.
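For instance, a rough sketch of that Tasklist idea, with a purely hypothetical list of licensed process names:

import subprocess

# Hypothetical process names of the licensed applications to watch for.
LICENSED = {"matlab.exe", "acad.exe"}

def running_licensed():
    """Return which licensed processes appear in tasklist's CSV output."""
    out = subprocess.run(["tasklist", "/fo", "csv", "/nh"],
                         capture_output=True, text=True).stdout
    names = {line.split('","')[0].strip('"').lower()
             for line in out.splitlines() if line}
    return names & LICENSED

if __name__ == "__main__":
    print(running_licensed())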

I am not sure whether something like this would need to be constantly running in the background to intercept the logoff, or whether the act of logging off could trigger it to run.

This is not my area in any way, so any general steer on this little project would be greatly appreciated; be it pointing me in the direction of a more appropriate fount of knowledge, problems you foresee, or just telling me it’s not possible. Happy to supply any further information if necessary.

Cheers!

What makes detection of optimized-away memory clearing non-trivial?

In this 35C3 talk, it is said that while it is possible to manually inspect whether a package optimizes away a memset() that clears sensitive memory, doing it automatically would be challenging. Assuming that the binary is compiled with -ggdb, which IIRC contains a source-to-binary mapping, what makes such detection difficult?

Single-bit Error Detection through CRC (Cyclic Redundancy Check)

I was going through some problems related to single-bit error detection with CRC generators and was trying to analyse which generators detect single-bit errors and which don’t.

Suppose I have the CRC generator polynomial $x^4 + x^2$. Now I want to know whether it guarantees the detection of a single-bit error or not.

According to references 1 and 2, I conclude the following points:

1) If $k = 1, 2, 3$ for the error polynomial $x^k$, then the remainders will be $x$, $x^2$, $x^3$ respectively when dividing by the generator polynomial $x^4 + x^2$. According to the references, if the generator has more than one term and the coefficient of $x^0$ is $1$, then all single-bit errors can be caught. But they do not say that if the coefficient of $x^0$ is not $1$, then a single-bit error can’t be detected. They say that “In a cyclic code, those $e(x)$ errors that are divisible by $g(x)$ are not caught.”

2) I have to check the remainder of $\frac{E(x)}{g(x)}$, where $E(x)$ (say $x^k$ for $k = 1, 2, 3, \ldots$) is the error polynomial and $g(x)$ is the generator polynomial. If the remainder is zero then I can’t detect the error, and when it is non-zero I can detect it (see the quick check below).
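As a quick sanity check of point 2, here is a small sketch that computes the remainder of $x^k$ divided by $g(x) = x^4 + x^2$ over GF(2), encoding polynomials as bitmasks (bit $i$ is the coefficient of $x^i$):

def gf2_mod(e, g):
    """Remainder of e(x) divided by g(x) over GF(2)."""
    dg = g.bit_length() - 1
    while e and e.bit_length() - 1 >= dg:
        # Subtract (XOR) a shifted copy of g(x) to cancel e's leading term.
        e ^= g << (e.bit_length() - 1 - dg)
    return e

g = 0b10100  # x^4 + x^2
for k in range(1, 16):
    r = gf2_mod(1 << k, g)  # single-bit error E(x) = x^k
    print(f"x^{k} mod g(x) = {bin(r)} -> {'detected' if r else 'MISSED'}")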

So, according to me, the generator polynomial $x^4 + x^2$ guarantees the detection of single-bit errors, based on the above two points.

Please confirm whether I am right or not.

Heuristics-based detection and behavioral detection

I’m working on my thesis and I have a couple of questions about the differences between heuristics-based detection and behavioral detection.

Does heuristics-based detection rely only on statically examining malicious code, in order to find certain kinds of instructions, or does the anti-virus application have its own virtual machine which can execute the malicious code?

Is behavioral detection, which observes how the program executes, done in a virtual machine like VirtualBox or VMware, or is it done on a real machine?

I’m a little bit confused because this article states that during heuristics-based detection the program is emulated. What exactly does that mean? Does it mean what I said above, that the AV app has its own, basic virtual machine?

How to get back the co-ordinate points corresponding to the intensity points obtained from a Faster R-CNN object detection process?

As a result of the Faster R-CNN method of object detection, I have obtained a set of boxes of intensity values (each bounding box can be thought of as a 3D matrix with a depth of 3 for RGB intensity, plus a width and a height, which can then be converted into a 2D matrix by taking the grayscale) corresponding to the region containing the object. What I want to do is obtain the corresponding co-ordinate points in the original image for each cell of intensity inside the bounding box. Any ideas how to do so?
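If it helps, here is a minimal sketch of the offset arithmetic involved, assuming hypothetical image dimensions and a box given as normalized [0, 1] corner coordinates (a common Faster R-CNN output convention): a cell (i, j) inside the cropped box maps back to (top + i, left + j) in the original image.

import numpy as np

img_h, img_w = 480, 640                  # hypothetical original image size
y1, x1, y2, x2 = 0.25, 0.40, 0.75, 0.90  # hypothetical normalized box corners

# Pixel extents of the box in the original image.
top, left = int(y1 * img_h), int(x1 * img_w)
bottom, right = int(y2 * img_h), int(x2 * img_w)

# Original-image coordinates for every cell inside the box.
rows, cols = np.meshgrid(np.arange(top, bottom), np.arange(left, right),
                         indexing="ij")
print(rows.shape, cols.shape)  # each is (bottom - top, right - left)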

Tensorflow detection API slow processing on GPU

The Tensorflow detection API runs slowly on the GPU.

It sees the GPU and loads the video memory, but processing a single photo takes around 7 seconds.

The code is from the ssd_mobilenet_v1_coco example:

import numpy as np
import tensorflow as tf
import cv2 as cv

# Read the graph.
with tf.gfile.FastGFile('frozen_inference_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Session() as sess:
    # Restore session
    sess.graph.as_default()
    tf.import_graph_def(graph_def, name='')

    # Read and preprocess an image.
    img = cv.imread('example.jpg')
    rows = img.shape[0]
    cols = img.shape[1]
    inp = cv.resize(img, (300, 300))
    inp = inp[:, :, [2, 1, 0]]  # BGR2RGB

    # Run the model
    out = sess.run([sess.graph.get_tensor_by_name('num_detections:0'),
                    sess.graph.get_tensor_by_name('detection_scores:0'),
                    sess.graph.get_tensor_by_name('detection_boxes:0'),
                    sess.graph.get_tensor_by_name('detection_classes:0')],
                   feed_dict={'image_tensor:0': inp.reshape(1, inp.shape[0], inp.shape[1], 3)})

    # Visualize detected bounding boxes.
    num_detections = int(out[0][0])
    for i in range(num_detections):
        classId = int(out[3][0][i])
        score = float(out[1][0][i])
        bbox = [float(v) for v in out[2][0][i]]
        if score > 0.3:
            x = bbox[1] * cols
            y = bbox[0] * rows
            right = bbox[3] * cols
            bottom = bbox[2] * rows
            cv.rectangle(img, (int(x), int(y)), (int(right), int(bottom)),
                         (125, 255, 51), thickness=2)
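In case the 7 seconds includes one-off setup, here is a small timing sketch meant to be appended inside the "with tf.Session() as sess:" block above (it reuses sess and inp from that code); the very first sess.run typically pays one-time graph and CUDA initialization costs, so several consecutive runs are timed separately:

import time

fetches = [sess.graph.get_tensor_by_name('num_detections:0'),
           sess.graph.get_tensor_by_name('detection_scores:0'),
           sess.graph.get_tensor_by_name('detection_boxes:0'),
           sess.graph.get_tensor_by_name('detection_classes:0')]
feed = {'image_tensor:0': inp.reshape(1, inp.shape[0], inp.shape[1], 3)}
for i in range(5):
    t0 = time.time()
    sess.run(fetches, feed_dict=feed)
    print('run %d: %.2f s' % (i, time.time() - t0))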

Specs

Windows 10

i7 8700

GTX 1060 GB

Where could the mistake be?

Trying to modularise OpenCV detection algorithms

I have made a project involving the use of OpenCV to detect faces. The project is yet to grow tremendously over the course of this year, so the fact that I am failing to modularise my code to make it cleaner and easier to read is worrisome.

I use a camera feed to detect faces with Haar cascades, so OpenCV must do its analysis frame by frame, in a loop, like this:

project_dir = dirname(dirname(__file__))
face_cascade_path = join(project_dir, "haarcascades/haarcascade_frontalface_default.xml")

face_cascade = cv2.CascadeClassifier(face_cascade_path)

while camera.view.isOpened():
    ret_val, frame = camera.view.read()
    frame = cv2.resize(frame, (camera.width, camera.height))

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.05, 6)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 0), 2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = frame[y:y+h, x:x+w]

    cv2.imshow(camera.name, frame)
    cv2.startWindowThread()
    cv2.namedWindow(camera.name, cv2.WINDOW_NORMAL)

    # q to quit
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

As may be evident here, I am already trying to modularise this as much as possible by using classes for the camera object in OpenCV (camera.view), as well as assigning the class some other parameters (such as camera.width and camera.height). However, the fact that OpenCV must run the algorithm in a while loop for every frame of video makes me feel extremely limited. For example, if the code were also to include eye detection, it would look something like this:

project_dir = dirname(dirname(__file__))
face_cascade_path = join(project_dir, "haarcascades/haarcascade_frontalface_default.xml")
eye_cascade_path = join(project_dir, "haarcascades/haarcascade_eye.xml")

face_cascade = cv2.CascadeClassifier(face_cascade_path)
eye_cascade = cv2.CascadeClassifier(eye_cascade_path)

while camera.view.isOpened():
    ret_val, frame = camera.view.read()
    frame = cv2.resize(frame, (camera.width, camera.height))

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.05, 6)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 0), 2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = frame[y:y+h, x:x+w]

        eyes = eye_cascade.detectMultiScale(roi_gray)
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (0, 255, 0), 2)

It seems to me that if I further apply analysis for other objects, or any other transformations of the frame, the code will only get messier and harder to use. Is there any way to modularise this, to separate the code and make it more maintainable?
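One possible direction, purely as a sketch with hypothetical names: treat each cascade as a small detector object and have the frame loop iterate over a list of them, so adding eye (or any other) detection means appending to the list instead of nesting more code.

import cv2

class CascadeDetector:
    """Wraps one Haar cascade plus its tuning parameters."""
    def __init__(self, cascade_path, color, scale=1.05, neighbors=6):
        self.cascade = cv2.CascadeClassifier(cascade_path)
        self.color = color
        self.scale = scale
        self.neighbors = neighbors

    def draw(self, gray, frame):
        # Detect on the grayscale image, draw boxes on the color frame.
        for (x, y, w, h) in self.cascade.detectMultiScale(
                gray, self.scale, self.neighbors):
            cv2.rectangle(frame, (x, y), (x + w, y + h), self.color, 2)

def process_frame(frame, detectors):
    """Run every detector on one frame; the while loop stays tiny."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for det in detectors:
        det.draw(gray, frame)
    return frame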