Developing mobile apps for research purposes & data collection

Essentially I want to create a mobile application that has users perform some tasks and logs the data of various interactions, including:

  • Answers to questions at the beginning of the app
  • Time it takes them to complete a task, measured from the start of a page
  • Whether they clicked on certain objects or not, and how many times
  • Information about the device they are using

I’m wondering what people with experience building these test apps generally do. I’m OK with something I’ll have to hard-code, so long as there are guidelines out there I can learn from. I don’t want to hook the app up to an analytics tool that will cost me a lot of money, as this is for basic research purposes.
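For concreteness, the kind of per-interaction record I imagine writing to a local log looks roughly like this (a Python sketch only; every field name here is my own invention, not an established schema):

import json
import time

page_start = time.time()  # recorded when the task page is shown

# ... user interacts with the page ...

# hypothetical shape of one logged interaction event
event = {
    "participant_id": "P042",        # assigned after the opening questions
    "task_id": "task_3",
    "event_type": "tap",             # e.g. tap, page_start, task_complete
    "target": "submit_button",       # which object was clicked
    "tap_count": 2,                  # how many times it was clicked
    "elapsed_ms": int((time.time() - page_start) * 1000),
    "device": {"model": "...", "os": "...", "screen": "..."},
}

# appending one JSON line per event to a local file avoids paid analytics tools
with open("events.jsonl", "a") as f:
    f.write(json.dumps(event) + "\n")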

Hopefully this is the right place to ask. I know it’s more “HCI” than UX, but this seemed like the best fit.

Developing a neural network for image modification

For the project I am currently working on, my goal is to train a neural network to convert images of circles into images of ellipses, in a way that models the convolution/blurring of real imaging processes.

What remains is to construct a neural network, preferably a CNN, that produces the desired results, i.e. takes an image with circles as input and returns an image with ellipses. However, I have not been able to do this. The neural nets (including CNNs) I have tried so far have at best returned blurred versions of the circle images. I can’t tell whether the fault lies with the network or with the preprocessing code I am using.

Below, I will show you my code.

First, importing the necessary modules:

import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.models import Model, load_model
from keras.layers import Dense, Dropout, Flatten, Activation, Reshape
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
import numpy as np
import pandas as pd
from collections import OrderedDict
import itertools
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import random
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
import math
from math import sqrt

Next, creating and storing the input (circle) and output (ellipse) images:

def create_blank_image(size):
    # np.ndarray() returns uninitialized memory; start from zeros instead of
    # looping over every pixel
    return np.zeros((size, size))

def circle_randomizer(size):
    # pass size explicitly instead of relying on the global defined below
    number_of_circles = random.randint(4, 10)
    radius_list = [random.uniform(8, 10) for _ in range(number_of_circles)]

    # first center
    center_coords = np.zeros((2, 1))
    center_coords[0, 0] = random.uniform(0, size)
    center_coords[1, 0] = random.uniform(0, size)

    # draw each remaining center, rejecting candidates that overlap any earlier
    # circle (my original loop concatenated a candidate on every inner
    # iteration, so it appended too many centers)
    for i in range(1, number_of_circles):
        while True:
            candidate = np.array([[random.uniform(0, size)],
                                  [random.uniform(0, size)]])
            overlapping = any(
                sqrt((candidate[0, 0] - center_coords[0, j]) ** 2
                     + (candidate[1, 0] - center_coords[1, j]) ** 2)
                < radius_list[i] + radius_list[j]
                for j in range(i))
            if not overlapping:
                break
        center_coords = np.concatenate((center_coords, candidate), axis=1)

    return radius_list, center_coords

def image_creator(centers, radii, img_data, size):
    # render each circle as a hemispherical height profile
    x = np.arange(1, size, 1)
    y = np.arange(1, size, 1)

    # centers has shape (2, n_circles), so iterate over its columns;
    # my original looped over range(len(centers)), which is only 2
    for c in range(centers.shape[1]):
        x0 = centers[0, c]
        y0 = centers[1, c]
        radius = radii[c]
        for i in range(0, size - 1):
            for j in range(0, size - 1):
                height2 = radius ** 2 - (x[i] - x0) ** 2 - (y[j] - y0) ** 2
                if height2 >= 0:
                    img_data[i, j] = sqrt(height2)

    return img_data

def make_ellipses(size, radii, center_coords):
    # idea: use a random number generator to create a random rotation of the
    # x,y axes for the ellipse; each ellipse keeps its circle's center and has
    # semi-axes radius and radius/2
    my_label = np.zeros((size, size))
    x = np.arange(1, size, 1)
    y = np.arange(1, size, 1)

    # same column-wise fix as in image_creator
    for c in range(center_coords.shape[1]):
        x0 = center_coords[0, c]
        y0 = center_coords[1, c]
        #theta = random.uniform(0, 6.28318)
        theta = 0.775  # fixed rotation for now

        for i in range(0, size - 1):
            for j in range(0, size - 1):
                xprime = (x[i] - x0) * math.cos(theta) + (y[j] - y0) * math.sin(theta)
                yprime = -(x[i] - x0) * math.sin(theta) + (y[j] - y0) * math.cos(theta)
                height2 = (0.5 * radii[c]) ** 2 - 0.25 * xprime ** 2 - yprime ** 2
                if height2 >= 0:
                    my_label[i, j] = sqrt(height2)

    return my_label

size = 128

# Make labels and samples consistent with the rest of the code
N = 100
circle_images = []
ellipse_images = []
coords = []
for sample in range(0, N):
    blank_image = create_blank_image(size)
    radii, centers = circle_randomizer(size)
    temp_image = image_creator(centers, radii, blank_image, size)
    circle_images.append(temp_image)
    temp_output = make_ellipses(size, radii, centers)
    ellipse_images.append(temp_output)
    coords.append(centers)

Storing the images in files:

filenames = []
for i in range(0, N):
    np.save('ellipses_' + str(i) + '.npy', ellipse_images[i])
    filenames.append('ellipses_' + str(i) + '.npy')
    np.save('circles_' + str(i) + '.npy', circle_images[i])
circles_stack = np.stack(circle_images, axis=0)
ellipses_stack = np.stack(ellipse_images, axis=0)
np.save('ellipses_stack.npy', ellipses_stack)
np.save('circles_stack.npy', circles_stack)

Loading the images:

# load training images and corresponding "labels" (training samples)
training_images_path = 'circles_stack.npy'
labels_path = 'ellipses_stack.npy'

X = np.load(training_images_path, 'r') / 20.
y = np.load(labels_path, 'r') / 20.

Defining the image preprocessing functions (I’m not sure why preprocessing_X and preprocessing_Y are different; this is code I’ve partially adapted from a research paper):

# Preprocessing for training images: add a channel axis and min-max normalize
def preprocessing_X(image_data, image_size):
    image_data = image_data.reshape(image_data.shape[0], image_size[0], image_size[1], 1)
    image_data = image_data.astype('float32')
    image_data = (image_data - np.amin(image_data)) / (np.amax(image_data) - np.amin(image_data))
    return image_data

# Preprocessing for "labels" (ground truth): one-hot encodes every pixel into
# nb_classes channels, i.e. it treats each pixel value as a class index
def preprocessing_Y(image_data, image_size):
    n_images = 0
    label = np.array([])
    for idx in range(image_data.shape[0]):
        img = image_data[idx, :, :]
        n, m = img.shape
        img = np.array(OneHotEncoder(n_values=nb_classes).fit_transform(img.reshape(-1, 1)).todense())
        img = img.reshape(n, m, nb_classes)
        label = np.append(label, img)
        n_images += 1
    label_4D = label.reshape(n_images, image_size[0], image_size[1], nb_classes)
    return label_4D
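If I understand preprocessing_Y correctly, it turns each pixel into a one-hot vector of length nb_classes, so an n x m image of integer class labels becomes an n x m x nb_classes stack. A tiny self-contained example of what I think it does (using the same old-style OneHotEncoder API as above):

from sklearn.preprocessing import OneHotEncoder
import numpy as np

# a 2x2 "image" of integer class labels in [0, 4)
img = np.array([[0, 1],
                [3, 2]])
onehot = np.array(OneHotEncoder(n_values=4).fit_transform(img.reshape(-1, 1)).todense())
print(onehot.reshape(2, 2, 4)[0, 1])  # pixel value 1 -> [0. 1. 0. 0.]

What worries me is that this presumes integer-valued pixels in [0, nb_classes), whereas my ellipse images are continuous heights divided by 20, so maybe the label encoding, not the network, is at fault here.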

Preprocessing the images:

# Split into train/test and make the tensor shapes compatible with the
# TensorFlow format
nb_classes = 10
target_size = (128, 128)

# The line below randomizes which images are picked for the train/test sets;
# ~20% go to test.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train = preprocessing_X(X_train, target_size)
X_test = preprocessing_X(X_test, target_size)
y_train = preprocessing_Y(y_train, target_size)
y_test = preprocessing_Y(y_test, target_size)

The Keras model that I have been using:

model = Sequential()
model.add(Conv2D(nb_classes, kernel_size=3, padding='same',
                 activation='relu',
                 input_shape=(128, 128, 1)))
model.add(MaxPooling2D(pool_size=(2, 2), padding='same'))
model.add(Conv2D(32, kernel_size=3, activation='relu', padding='same'))
#model.add(MaxPooling2D(pool_size=(2, 2), padding='same'))
#model.add(Dropout(0.25))
#model.add(Flatten())
#model.add(Dense(128, activation='relu'))
#model.add(Dropout(0.5))
#model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))

model.add(UpSampling2D((2, 2)))

model.add(Conv2D(nb_classes, (1, 1), activation='linear', padding='same'))
model.add(Activation('softmax'))

Compiling the model:

epochs = 10  # epochs was never defined in the snippet above; set it explicitly

model.compile(loss='mean_squared_error',
              optimizer='Adam',
              metrics=['accuracy'])

model.fit(X_train, y_train,
          batch_size=128,
          epochs=epochs,
          verbose=1,
          validation_data=(X_test, y_test))
score = model.evaluate(X_test, y_test)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

model.save("/content/artificial_label_train.h5")
model.save_weights("/content/artificial_label_train_weights.h5")

Somebody suggested an encoder-decoder pair, but I do not know how to implement this. Any suggestions?
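For what it’s worth, here is my rough guess at what such a convolutional encoder-decoder might look like in Keras. This is untested; the 32/64 layer widths are my own guesses, and it regresses the ellipse image directly through a single linear output channel and MSE loss rather than predicting one-hot pixel classes:

from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model

inputs = Input(shape=(128, 128, 1))

# encoder: two conv + pool stages compress the circle image
x = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs)
x = MaxPooling2D((2, 2), padding='same')(x)
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
encoded = MaxPooling2D((2, 2), padding='same')(x)

# decoder: mirror the encoder back up to 128x128
x = Conv2D(64, (3, 3), activation='relu', padding='same')(encoded)
x = UpSampling2D((2, 2))(x)
x = Conv2D(32, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)

# one linear output channel: regress the ellipse image directly, so the
# targets would be the raw ellipse stacks, not the one-hot-encoded labels
decoded = Conv2D(1, (3, 3), activation='linear', padding='same')(x)

autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer='adam', loss='mean_squared_error')

If this is the right shape of solution, I assume I would train it on the raw ellipse images, skipping preprocessing_Y entirely?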

Developing accessory for iOS devices without MFi Program

I am developing a device that will communicate with an app installed on an iOS device. I have some questions:

  • If I use an MFi cable, do I need to license my product?

  • To communicate by serial port, is it necessary to license the product?

  • Is there any way to communicate over the USB cable that does not require the license?

  • I have not been able to find the cost of licensing a product anywhere, for either development or manufacturing.

The device will be connected and powered over USB.

Note: A proprietary iOS application will be developed to communicate with the hardware.

Developing asp.net core using (Bootstrap & HTML-5) without using (Angular.js, React.js or Vue.js) [on hold]

From my asp.net mvc-4 and mvc-5 experience, when I want to start a new project, the first thing I do is search for online web templates. I always choose templates that do NOT use third-party JavaScript frameworks such as Angular.js, React.js, or Vue.js; mainly I rely on Bootstrap + HTML-5 + pure JavaScript/jQuery code.

Now I want to start a new asp.net MVC core web project, so I am searching for online templates I can use, such as the ones in this link @ https://wrapbootstrap.com/theme/beyondadmin-responsive-admin-app-WB06R48S4?ref=clevision. But I am not sure: is it necessary to use third-party JavaScript frameworks such as Angular.js, React.js, or Vue.js? I am planning to build an online workflow management system that lets users create new workflows, where each workflow contains around 10 steps; registered users can access the steps (based on their permissions), add data, and upload documents. The web application will also have internal chat (text-based only).

Developing SDK for a Device? [on hold]

I need to develop an SDK that will run on a device. This SDK should facilitate communication between any device using our SDK and our API service.

I have specified some methods for a device; they are:

  1. Register the device (itself) to the platform
  2. Init the device (after registration or after a reboot)
  3. Load sensors (if any are added)
  4. Load actuators (if any are added)
  5. CRUD (Create/Update/Delete/Get) sensors
  6. CRUD actuators
  7. Get notifications from sensors and actuators (e.g. if someone wants to turn the light on/off)

Here is what I have thought of so far.

I need a class that will keep our API information and some device data:

public class DeviceSettings
{
    public string DeviceId { get; private set; }
    public string PlatformMqttAddress { get; private set; }
    public string PlatformMqttUserName { get; private set; }
    public string PlatformMqttPassword { get; private set; }
}

I need a class that is responsible for registering the device with the platform:

public interface IProvisioningDeviceClient
{
    Task<DeviceRegistrationResult> RegisterAsync();
}

I need a class that is responsible for all the CRUD operations on the device:

public interface IDeviceServiceClient
{
    Task<Device> GetDeviceInformation(string deviceId);
    Task<Sensor> CreateSensorAsync(string sensorName, string currentValue, bool twoWayCommunication = false);
    Task<Sensor> UpdateSensorAsync(string sensorId, string sensorName, string currentValue);
    Task RemoveSensorAsync(string sensorId);
    Task<List<Sensor>> GetSensorsAsync(string deviceId);
}

In addition to that, I need a class that is responsible for initializing the device. Init means it should get its device information, sensors, and actuators from the platform after registration or whenever the device is rebooted.

After getting this information from the platform, it should subscribe to those sensors and actuators over an MQTT connection. If another device on the platform wants to turn the light on through our device, our device should get a notification from the platform.

So I named another class DeviceManager. Then I realized I should inject IDeviceServiceClient into DeviceManager, because if any CRUD operation occurs on any sensor on the device, DeviceManager must manage its subscriptions. For example, it should unsubscribe from sensors that are deleted, and if a new sensor is added to the device, it should subscribe to it automatically. So DeviceManager started to manage the CRUD operations too, by using IDeviceServiceClient. That seems fine to me.

Then I thought DeviceManager should be a singleton, because I want only one connection point to the platform per device, managed by one single instance:

public sealed class DeviceManager
{
    private static readonly DeviceManager instance = new DeviceManager();

    static DeviceManager()
    {
    }

    private DeviceManager()
    {
    }

    public static DeviceManager GetInstance
    {
        get
        {
            return instance;
        }
    }

    private static bool IsConnected;
    private readonly IDeviceServiceClient _deviceServiceClient;
    private readonly IDeviceManagementServiceClient _deviceManagementServiceClient;
    private readonly DeviceSettings _settings;

    public Device Device { get; set; }
    public DeviceManagementModule DeviceManagementModule { get; set; }
    public List<Sensor> Sensors { get; set; }

    public async Task InitAsync()
    {
        var sensors = await _deviceServiceClient.GetSensorsAsync(_settings.DeviceId);
        // todo: subscribe to each sensor over MQTT
    }

    public void AddSensor() { /* todo */ }
    public void UpdateSensor() { /* todo */ }
    public void DeleteSensor() { /* todo */ }

    public event EventHandler<SensorNotification> SensorEventHandler;
    // this event was missing from my first draft but is used below
    public event EventHandler<DeviceManagementNotification> DeviceManagementEventHandler;

    public void OnDeviceManagementEvent(Notification notification)
    {
        // we should map the notification onto a DeviceManagementNotification
        DeviceManagementEventHandler?.Invoke(this, new DeviceManagementNotification());
    }

    public void OnSensorEvent(Notification notification)
    {
        if (Sensors != null)
        {
            var sensor = Sensors.FirstOrDefault(x => x.Id == notification.Id);

            if (sensor == null)
            {
                // logging
            }
            else
            {
                var sensorNotification = new SensorNotification()
                {
                    SensorId = sensor.Id,
                    SensorName = sensor.Name,
                    Data = notification.Data,
                };

                SensorEventHandler?.Invoke(this, sensorNotification);
            }
        }
        else
        {
            // logging
        }
    }

    public async Task PushDataToPlatform(string sensorId, string message)
    {
    }
}

How should I design the DeviceManager class? Should it be a singleton? As you can guess, these are all concepts, not an actual implementation. If you have any experience, I need your advice on designing a simple SDK for a device to communicate with a platform. What should I do to make it better?

What are the pros and cons of companies like Apple or Google developing their own programming languages? [on hold]

Even though they are promoted as open source and community-owned, it seems that languages like Go or Swift (what are other good examples?) are owned by big companies that have a lot of influence over these languages' design decisions.

So I wonder: why do these companies develop their own programming languages? Is this good for programming and the software business in general? Is it good for society?

Developing an automated lawn mower that searches an N by N grid

I am developing software for a lawn mower that will mow all the grass on an N by N grid. The grid contains boulders at certain spots; for example, there may be boulders at grid coordinates [3,2] and [5,1]. The mower cannot go over boulders.

Currently my mower works on grids that have at most one boulder. Some of the harder maps contain 10 boulders on a 6 by 6 grid, and on those my mower fails to mow most of the lawn.

My current algorithm is flawed: from the mower's current location, I scan the surrounding cells and move to one that has grass to mow. The problem is that if the mower's current cell has grass to both the east and the west, the mower simply goes east and forgets about the grass it detected to the west.

Should I change my algorithm to DFS? Should I use backtracking? Is this a graph problem? Any suggestions on a better algorithm?
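In case it helps clarify the question, here is a toy sketch (Python, not my actual mower code) of the DFS idea as I understand it, treating each grid cell as a graph node and boulders as blocked nodes:

def mow(grid, start):
    # DFS coverage of an N x N grid; grid[r][c] == 1 marks a boulder
    n = len(grid)
    mowed = set()

    def dfs(r, c):
        if not (0 <= r < n and 0 <= c < n):
            return                   # off the lawn
        if (r, c) in mowed or grid[r][c] == 1:
            return                   # already mowed, or a boulder
        mowed.add((r, c))            # mow this cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            dfs(r + dr, c + dc)      # backtracking is the implicit return

    dfs(*start)
    return mowed

# 6x6 grid with boulders at [3,2] and [5,1]
grid = [[0] * 6 for _ in range(6)]
grid[3][2] = 1
grid[5][1] = 1
print(len(mow(grid, (0, 0))))  # 34 of the 36 cells are reachable

The part I'm unsure about is that the DFS visit order is not a physically contiguous path; the real mower would have to drive back along already-mowed cells when it backtracks. Is that the right way to think about it?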