Interaction between fog cloud and new vision errata rule

I am trying to understand, from a RAW point of view, how the spell Fog Cloud interacts with the errata'd rules for vision and lighting.

First, we have the normal vision rules for heavily obscured:

A heavily obscured area–such as darkness, opaque fog, or dense foliage–blocks vision entirely. A creature effectively suffers from the blinded condition (see appendix PH-A) when trying to see something in that area.

Next, we have the errata:

Vision and Light (p. 183). A heavily obscured area doesn’t blind you, but you are effectively blinded when you try to see something obscured by it.

As we know, Fog Cloud creates a sphere of heavily obscuring fog in its area of effect. According to the errata, it would seem that being inside the fog does not blind you; hence, you can attack creatures outside the fog with advantage, per the unseen-attacker rules.

Am I understanding this correctly? Am I missing something?

Please advise, and thanks in advance.

Unable to use NVIDIA GPU on Ubuntu 18.04 for computer vision task (GeForce GTX 1060)

My machine is an ASUS ROG SCAR-GL703GM-EE033T with an NVIDIA GeForce GTX 1060. I installed Ubuntu 18.04 on it, along with the software needed to run deep learning applications on its GPU.

I installed:

  • CUDA 10.0
  • cuDNN 7.5.0
  • tensorflow-gpu 1.13.1

and everything looked fine: when I ran a GPU-availability check in a Python terminal, it printed the characteristics of the NVIDIA card and “True”. So the installation looked correct.
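The standard TensorFlow 1.x check of this kind looks like the following sketch (`tf.test.is_gpu_available` is the usual idiom; the import guard is only there so the snippet degrades gracefully on a machine without TensorFlow):

```python
# Sketch of the usual TensorFlow 1.x GPU check. tf.test.is_gpu_available
# logs the GPU's characteristics and returns True when a CUDA-capable
# device is visible to TensorFlow.
try:
    import tensorflow as tf
    gpu_ok = tf.test.is_gpu_available()  # also logs the card's characteristics
    print(gpu_ok)                        # True on a working installation
except ImportError:
    gpu_ok = None                        # tensorflow-gpu is not installed here
```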

However, when running a computer vision application that uses pytesseract, I was surprised by the long run time, which may indicate the GPU isn't really being used. To verify this, I ran nvidia-smi in another terminal while the code was running; it produced the output shown in the screenshot.

As you can see, the NVIDIA driver is 418.67. This is not the one recommended by the NVIDIA website (found via GeForce / GeForce 10 Series / GeForce GTX 1060 / Linux 64-bit). I tried unsuccessfully to install driver 430.67, which failed with the following message:

nvidia-installer log file '/var/log/nvidia-installer.log'
creation time: Thu Jul 11 17:56:00 2019
installer version: 430.34

PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin

nvidia-installer command line:
    ./nvidia-installer

Unable to load: nvidia-installer ncurses v6 user interface
Using: nvidia-installer ncurses user interface
-> Detected 12 CPUs online; setting concurrency level to 12.
ERROR: An NVIDIA kernel module 'nvidia-uvm' appears to already be loaded in your kernel.  This may be because it is in use (for example, by an X server, a CUDA program, or the NVIDIA Persistence Daemon), but this may also happen if your kernel was configured without support for module unloading.  Please be sure to exit any programs that may be using the GPU(s) before attempting to upgrade your driver.  If no GPU-based programs are running, you know that your kernel supports module unloading, and you still receive this message, then an error may have occured that has corrupted an NVIDIA kernel module's usage count, for which the simplest remedy is to reboot your computer.
ERROR: Installation has failed.  Please see the file '/var/log/nvidia-installer.log' for details.  You may find suggestions on fixing installation problems in the README available on the Linux driver download page at

I then tried driver 418, with which TensorFlow recognized the available GPU, so I continued with it.

Also, as you can see in the screenshot, no process is using the GPU. The CUDA version displayed is 10.1, although before installing 10.0 (which seemed to work best) I erased all traces of the previously installed 10.1. Why would nvidia-smi report CUDA version 10.1?

Do you know how, with my current configuration/installation, to make the GPU permanently available to my running code? Do you see anything missing from my installation?

Thanks for your help; I tried to provide as much detail as I could!


Google Play Services update to version 17.4.55 slows down GMS vision based App

I have an app based on the GMS vision Google APIs for Android. It has worked well and fast enough for months, using about 50% of the cores on my smartphones (Huawei P20 Lite and Samsung Galaxy S8), but after the recent upgrade of Google Play Services to version 17.4.55 it has suddenly become slow and uses only 1 of the 8 cores.

If I uninstall the Google Play Services update and go back to the factory version, the App is fast as before.

I have no idea what's new in the latest Play Services release that might cause this behaviour. Any hints?

Write a Powerful Mission and Vision Statement for $17

I will write a powerful mission and vision statement, a slogan, and 3 values (350 words). A mission and vision statement is essential for every serious organization: you will be able to inform stakeholders, employees, and customers of your purpose, direction, and intentions. With 10+ years of experience, I know how to write a powerful mission and vision statement that gives your organization the clear voice it needs. With my statements you will be able to establish your brand identity to the public and internally provide a compass for decision making. My services will be authentic, professional, and accurate, writing your:

  • Mission – What you are dedicated to doing.
  • Vision – The ultimate goal you work towards.
  • Values – The essentials that are part of every step you take.
  • Tagline – A catchy line that gives your profile more character!

A professional About Us page can also be written to provide even more depth on your history and activities. The content can be customized for any platform, including your:

  • About Us page
  • Biography / bio
  • Executive summary
  • Social media pages (Facebook, Twitter, etc.)
  • Website

by: Markybiani
Created: —
Category: Content & Writing
Viewed: 51

Computer Vision and translation of real movement to program

Hello, I am really new to all of this, but I want to use computer vision to do the following:

Using computer vision, I want the computer to recognize a specific object and its complex movement (the object will be a glove with markers). The glove has a digital replica in a game engine, and the real movement should be translated to the game in real time. How do I start with this?
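A minimal sketch of the first building block, marker detection: find the centroid of pixels that match the marker. This is pure Python on a synthetic frame (an assumption for illustration; a real pipeline would grab camera frames and use OpenCV's cv2.inRange and cv2.moments, then stream the pose to the game engine):

```python
# Hypothetical sketch: locate a bright marker by thresholding and
# computing the centroid of the matching pixels. A tiny synthetic
# "frame" (rows of brightness values) stands in for a camera image.

def marker_centroid(frame, threshold=200):
    """Return the (x, y) centre of all pixels >= threshold, or None."""
    xs, ys = [], []
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if value >= threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A 4x4 frame with a bright 2x2 marker in the top-left corner.
frame = [
    [255, 255, 0, 0],
    [255, 255, 0, 0],
    [0,   0,   0, 0],
    [0,   0,   0, 0],
]
print(marker_centroid(frame))  # → (0.5, 0.5)
```

Tracking the centroid frame-by-frame gives the marker's 2D trajectory; recovering full 3D hand motion then needs multiple markers and a calibrated camera.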

Ingredients app that connects to a Firebase Realtime Database, using a barcode reader built on the Google Vision API

I am creating an Android app using Android Studio that connects to Firebase. The initial idea is an app that uses a barcode scanner and a manual-entry activity to input ingredients into a database, which then recognises the ingredients entered and finds a matching meal idea. So far I have managed to connect a working barcode scanner, but I am not sure how to do the next stage (matching a meal idea using the realtime Firebase database).

I have made classes such as:

  • BarcodeFragment
  • FirebaseDatabaseHelper
  • Ingredient
  • IngredientDetailsActivity
  • ListActivity
  • ScannerActivity
  • ManualActivity
  • MealActivity
  • ProfileActivity
  • RecyclerView_Config

and I have imported the Barcode Reader Using Google Vision API module.

This is a very basic design of how I think the Firebase Realtime Database would work, but I'm not sure:


ingredients
  1  insertIngredient: “Mince”
  2  insertIngredient: “Onion”
  3  insertIngredient: “Puree”
  4  insertIngredient: “Tomatoes”
  5  insertIngredient: “Garlic”

recipe
  1  insertRecipe: “Bolognese”
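Independent of Firebase, the matching step itself is essentially set containment: a recipe matches once every ingredient it requires has been entered. A minimal Python sketch of that logic, using the names above (the real version would run against the data fetched from Firebase):

```python
# Hypothetical matching step: a recipe matches when every ingredient it
# requires is contained in the set of entered/scanned ingredients.
recipes = {
    "Bolognese": {"Mince", "Onion", "Puree", "Tomatoes", "Garlic"},
}
entered = {"Mince", "Onion", "Puree", "Tomatoes", "Garlic"}

matches = [name for name, needed in recipes.items() if needed <= entered]
print(matches)  # → ['Bolognese']
```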


Tools to simulate active stereo vision algorithms [on hold]

I would like to simulate and experiment with various active stereo vision algorithms.

Let's say I have a laser that can project a point or line beam onto some 3D geometry (which I have as an STL file), and the reflected light is captured by a camera.

My question is: which software tools (preferably open-source tools like OpenCV, Python packages, VTK, …) would you recommend for the job?
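For context on what such a simulation has to reproduce: with a laser-camera baseline b, camera focal length f (in pixels), and observed pixel disparity d of the laser spot, depth follows the standard triangulation relation z = f * b / d. A minimal sketch (a pinhole-camera assumption; the numbers are made up for illustration):

```python
# Minimal active-stereo triangulation sketch (assumed pinhole model):
# depth z = f * b / d, where f is the focal length in pixels, b the
# laser-camera baseline in metres, and d the disparity in pixels of
# the detected laser spot.
def depth_from_disparity(f_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

print(depth_from_disparity(800.0, 0.1, 40.0))  # → 2.0 (metres)
```

Any tool you pick ultimately has to ray-cast the laser onto the STL mesh, render the spot into the camera image, and let you invert the geometry like this.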