Is there a realistically implementable algorithm for testing the termination of a given Petri net?

I am implementing a Petri net simulator. Among its specifications, it has to return a map of the markings reachable from the current state. I don't want it to throw an OutOfMemoryError (or similar) when given a Petri net with infinitely many reachable markings. Can this be handled more elegantly?

In my case the net's input and output arcs can carry any natural-number weight, and the net may also contain reset and inhibitor arcs.
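For what it's worth, with reset arcs in the picture, boundedness is undecidable in general, so a pragmatic option is a breadth-first exploration with an explicit budget that reports whether the search completed. Below is a minimal sketch (in Python, though your simulator is presumably in Java); the transition representation (`in`/`out`/`reset`/`inhibit` keyed by place index) and the `reachable` helper are my own assumptions, not an existing API:

```python
from collections import deque

def enabled(marking, t):
    """A transition is enabled if every input place holds enough tokens
    and every inhibitor place is below its threshold."""
    for p, w in t.get("in", {}).items():
        if marking[p] < w:
            return False
    for p, w in t.get("inhibit", {}).items():
        if marking[p] >= w:
            return False
    return True

def fire(marking, t):
    """Consume input tokens, empty reset places, then add output tokens."""
    m = list(marking)
    for p, w in t.get("in", {}).items():
        m[p] -= w
    for p in t.get("reset", ()):
        m[p] = 0
    for p, w in t.get("out", {}).items():
        m[p] += w
    return tuple(m)

def reachable(initial, transitions, budget=10_000):
    """Breadth-first search over markings.

    Returns (markings, complete): `complete` is False when the budget
    was exhausted, i.e. the reachability set may be infinite (or just
    larger than you are willing to materialize).
    """
    seen = {initial}
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        for t in transitions:
            if enabled(m, t):
                nxt = fire(m, t)
                if nxt not in seen:
                    if len(seen) >= budget:
                        return seen, False  # budget hit: possibly unbounded
                    seen.add(nxt)
                    queue.append(nxt)
    return seen, True
```

Returning the `complete` flag (rather than throwing) lets callers distinguish "this net has N reachable markings" from "exploration was cut off", which is about the best you can promise given the undecidability.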

Paper prototype testing on checkout?

I would like to run a few iterations of user testing on a checkout redesign. The project is still at an early stage, so I would like to take full advantage of that and run the testing sessions on paper prototypes. The testing will therefore focus on the checkout experience, filling in forms and so on, which makes me wonder whether it is a good idea to test paper prototypes on such a focused aspect of the website.

Do you think it is still a good idea? Any tips on how to set up the tasks?

What are the best methods for conducting usability testing with people who are neither experts nor end-users?

My team and I have developed a prototype of an augmented reality mobile application for teaching primary school students human anatomy.

We are going to conduct usability testing and evaluation with the primary school students using the Fun Toolkit, and we are also going to conduct an expert review using heuristic evaluation and a cognitive walkthrough.

Furthermore, we also want the teachers to test the app and to evaluate its usability in the context of their students' usage. However, the teachers are neither usability experts nor end-users, so what is the most appropriate method for them in terms of usability testing, survey design, etc.?

Usability testing with multiple devices in the same session with the same user?

We ran a pilot of our usability testing. We only planned to test on a desktop PC, because we believe that is the main device used by our user group.

After the pilot we had a feedback conversation with our teacher. One of his suggestions was that we should add some tasks to be performed on a mobile device.

I disagreed immediately but couldn't come up with any good explanation for why this is a bad idea. My opinion was not taken seriously because the teacher is "the pro". Now I would like to know: is it really a standard practice to use multiple devices in the same testing session?

To me it sounds like finding issues here and there without focusing on anything. You would most likely find more issues overall, but wouldn't it be more important to find the "famous 80% of the problems" on one device? In my opinion, the experience with the first device affects the use of the second device, because the system being tested is only part of a website.

In our case we cannot recruit more participants.