How to automate unit tests for an HMI application?

I have a requirement like this: write a Python script that runs on the test machine and interacts with the HMI application.

· The automated test can trigger a command from the HMI (button click action), take a screenshot of a widget or the whole screen for the status indication, and compare it with the expected result (an expected value or pixmap); a minimal sketch of this loop follows the list.

· The automated tests run every night and generate a test report of the executed test cases.
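
For illustration, here is a minimal sketch of the click / screenshot / compare loop in Python. It assumes the HMI window is visible on the test machine's desktop and uses pyautogui (clicks and screenshots) plus Pillow (pixel comparison); these libraries, the coordinates, the capture region and the reference image path are my own placeholder choices, not part of the original requirement.

# Minimal sketch: drive the HMI from the desktop, assuming pyautogui + Pillow.
import pyautogui
from PIL import Image, ImageChops

def trigger_button(x, y):
    """Simulate the HMI button click at screen coordinates (x, y)."""
    pyautogui.click(x, y)

def widget_matches_reference(region, reference_path):
    """Capture the widget area and compare it pixel-by-pixel with a stored pixmap."""
    actual = pyautogui.screenshot(region=region).convert("RGB")   # region = (left, top, width, height)
    expected = Image.open(reference_path).convert("RGB")
    diff = ImageChops.difference(actual, expected)
    return diff.getbbox() is None      # None means no differing pixels

# Hypothetical usage: click a "Start" button, then verify the status widget.
trigger_button(120, 240)
assert widget_matches_reference((100, 300, 64, 64), "expected_status_led.png")

For the nightly run and the report, checks like these can be wrapped in pytest test functions, scheduled with cron or Windows Task Scheduler, and reported with the pytest-html plugin (pytest --html=report.html).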

Please suggest some good solutions, as this is very urgent.

Thanks, Aniprada

Does it make sense to write one unit test and loop it through similar components?

I have a situation where I have different forms, each with 4 or 5 steps (components), and I decided to write one unit test per step.

After doing it, I noticed they were very similar and I could just loop through them, changing some values based on the loop index. All good and it worked fine, until one of the forms failed and I couldn't figure out which step was failing. I could come up with another generic solution for it, but this failure made me wonder whether it really makes sense to reuse code for tests like this.
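
One middle ground, sketched here with pytest purely for illustration (the same idea exists in most test frameworks), is to keep the loop as a parametrization but give each case an explicit id, so a failure is reported with the step name. The step names and data below are hypothetical.

import pytest

# Hypothetical per-step data; in a real suite each entry would describe one
# form step and its fixture values.
FORM_STEPS = {
    "personal-details": {"required": ["name", "email"]},
    "shipping-address": {"required": ["street", "city"]},
    "payment": {"required": ["card_number"]},
}

@pytest.mark.parametrize("step,spec", list(FORM_STEPS.items()), ids=list(FORM_STEPS))
def test_form_step_declares_required_fields(step, spec):
    # Placeholder assertion; the real test would render the step component and
    # snapshot it. The point is the failure report includes the step id, e.g.
    # FAILED test_forms.py::test_form_step_declares_required_fields[payment]
    assert spec["required"], f"step {step} must declare required fields"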

Well… as developers, we always want to reduce code duplication, but I think for unit/snapshot tests it's a best practice to keep every test explicit. The advantage I see in duplicating code in this situation is that it makes the tests clearer and easier to debug when a failure happens, and since it's test code, it doesn't impact production code. What are your thoughts on it? Is it worth having generic code to test several similar components, or is it better and safer to keep it explicit, even if you have to duplicate the code?

I also read this article on Twitter a few days ago, which opened my mind to this approach even more: https://www.sandimetz.com/blog/2016/1/20/the-wrong-abstraction WDYT?

Where should I start with integration tests for legacy software in .NET?

Currently, I'm planning a new CI/CD project with Azure DevOps (with Git, already committed) for an old solution, which contains 17 C# projects.

Technically, we have access to the source code, and we would need to write all the unit tests ourselves (since the system wasn't designed with them in mind); however, as advised in this article:

Integration Testing made Simple for CRUD applications with SqlLocalDB

the best solution is to perform integration tests rather than unit tests, for several reasons:

  • It has been in the market for a considerable amount of time without major changes (a legacy system), more than 5 years.
  • There is not enough documentation of the entire system.
  • Support is limited to minor bug fixes.
  • It integrates several technologies like C#, SQL, ASP MVC, Console, SAP, etc.
  • Most of the people involved in this project no longer work here; therefore, knowledge of the business logic is minimal.
  • There would be thousands of cases to evaluate, which means a considerable amount of money and time.

I'd like to know if someone has related experience or any advice on how to perform these tests: which approach should I follow?

In my case, I'd like to focus specifically on the business logic, such as CRUD operations, but what should this involve? A parallel database for storing data? Any specific technology? xUnit, NUnit, MSBuild? Or how would you handle it?

P.S.

I see a potential issue with the article above, since it uses SqlLocalDB, and I read that it isn't supported in Azure; it's probably the same in Azure DevOps.

What’s the best kind of test for complex calculations without access to external resources?

I have two libraries that handle the mapping from one family of objects to another one. I had to create a middle set of objects for other transformations.

So the NativeConverters library converts NativeElement objects to MiddleElement, and the ViewModelConverters library converts MiddleElement to ViewModelElement.

I have unit tests (with NUnit) for both NativeConverters and ViewModelConverters. So the single conversion works well.

Now, I want to test the whole process: given a converter from NativeConverters and another one from ViewModelConverters, I want to test that a NativeElement gets converted correctly into a ViewModelElement.

I don't need access to a DB, the file system or whatever, so I'm not sure that integration tests are the best choice. But I'm not testing a single method, so it shouldn't be a unit test.
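
One way to look at it is as a slightly wider test that wires the two real converters together, with no test doubles and no external resources. The shape of such a test is sketched below in Python purely for illustration (the real test would be C#/NUnit, reusing your existing converter instances); the class names and data are hypothetical stand-ins.

# Hypothetical stand-ins for a NativeConverters / ViewModelConverters pair.
class NativeToMiddleConverter:
    def convert(self, native):
        return {"middle_value": native["value"] * 2}

class MiddleToViewModelConverter:
    def convert(self, middle):
        return {"display": str(middle["middle_value"])}

def test_native_to_viewmodel_pipeline():
    native = {"value": 21}
    middle = NativeToMiddleConverter().convert(native)
    view_model = MiddleToViewModelConverter().convert(middle)
    # Assert only on the end-to-end result: NativeElement in, ViewModelElement out.
    assert view_model == {"display": "42"}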

What kind of test do you think could best fit this case?
Do you know any library for C#?

How to write a correct test? [on hold]

I can write some unit tests, but I have no idea how to write a test for createAccount(), which ties the other functions together.

createAccount() contains some steps in order:

  1. Handle Email

  2. Handle Password

  3. Handle Security Keyword

  4. Instantiate new account object

Every step has some test cases. So my questions are: 1. How do I write test cases for createAccount()? Should I list all possible combinations of test cases and then test them?

For example:

TestCase0. Email is invalid

TestCase1. App stops after retrying email 3 times

TestCase2. Email is ok, password is not valid

TestCase3. Email is ok, password is valid, the 2nd password doesn't match the first one

TestCase4. Email is ok, password is valid, both passwords match, security keyword is valid

TestCase5. Email is ok, password is valid, both passwords match, security keyword is valid, the account was created successfully

  2. Is it that I don't know how to test this because my createAccount() is badly designed? If so, how should I refactor it for easier testing?

This is my code:

class RegisterUI:

    def getEmail(self):
        return input("Please type your email:")

    def getPassword1(self):
        return input("Please type a password:")

    def getPassword2(self):
        return input("Please confirm your password:")

    def getSecKey(self):
        return input("Please type your security keyword:")

    def printMessage(self, message):
        print(message)


class RegisterController:

    def __init__(self, view):
        self.view = view

    def displaymessage(self, message):
        self.view.printMessage(message)

    def handleEmail(self, email):
        """Get email from user, check email."""
        self.email = email
        email_obj = Email(self.email)
        status = email_obj.isValidEmail() and not accounts.isDuplicate(self.email)
        if not status:
            raise EmailNotOK("Email is duplicate or incorrect format")
        return True

    def handlePasswordValid(self, password):
        """Get password from user, check that it is valid."""
        self.password = password
        status = Password.isValidPassword(self.password)
        if not status:
            raise PassNotValid("Pass isn't valid")
        return True

    def handlePasswordMatch(self, password):
        """Get the second password from user, check that both match."""
        password_2 = password
        status = Password.isMatch(self.password, password_2)
        if not status:
            raise PassNotMatch("Pass doesn't match")
        return True

    def createAccount(self):
        retry = 0
        while True:
            try:
                email = self.view.getEmail()
                self.handleEmail(email)
                break
            except EmailNotOK as e:
                retry = retry + 1
                self.displaymessage(str(e))
                if retry > 3:
                    return

        while True:
            try:
                password1 = self.view.getPassword1()
                self.handlePasswordValid(password1)
                break
            except PassNotValid as e:
                self.displaymessage(str(e))

        while True:
            try:
                password2 = self.view.getPassword2()
                self.handlePasswordMatch(password2)
                break
            except PassNotMatch as e:
                self.displaymessage(str(e))

        self.seckey = self.view.getSecKey()
        account = Account(Email(self.email), Password(self.password), self.seckey)
        message = "Account was created successfully"
        self.displaymessage(message)
        return account


class Register(Option):

    def execute(self):
        view = RegisterUI()
        controller_one = RegisterController(view)
        controller_one.createAccount()


"""========================Code End=============================="""

"""Testing"""

@pytest.fixture(scope="session")
def ctrl():
    view = RegisterUI()
    return RegisterController(view)


def test_canThrowErrorEmailNotValid(ctrl):
    email = 'dddddd'
    with pytest.raises(EmailNotOK) as e:
        ctrl.handleEmail(email)
    assert str(e.value) == 'Email is duplicate or incorrect format'


def test_canHandleEmail(ctrl):
    email = 'hello@gmail.com'
    assert ctrl.handleEmail(email) == True


def test_canThrowErrorPassNotValid(ctrl):
    password = '123'
    with pytest.raises(PassNotValid) as e:
        ctrl.handlePasswordValid(password)
    assert str(e.value) == "Pass isn't valid"


def test_PasswordValid(ctrl):
    password = '1234567'
    assert ctrl.handlePasswordValid(password) == True


def test_canThrowErrorPassNotMatch(ctrl):
    password1 = '1234567'
    ctrl.password = password1
    password2 = 'abcdf'
    with pytest.raises(PassNotMatch) as e:
        ctrl.handlePasswordMatch(password2)
    assert str(e.value) == "Pass doesn't match"


def test_PasswordMatch(ctrl):
    password1 = '1234567'
    ctrl.password = password1
    password2 = '1234567'
    assert ctrl.handlePasswordMatch(password2)
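
One way to test createAccount() end-to-end, sketched below, is to hand RegisterController a scripted fake view instead of the real RegisterUI, so no input() is needed. The FakeView class and the register module name are hypothetical; the sketch assumes the Email, Password, Account, accounts and exception classes used above are importable, and that "hello@gmail.com" is valid and not a duplicate (as in your existing tests).

from register import RegisterController   # hypothetical module holding the code above

class FakeView:
    """Stands in for RegisterUI: returns pre-scripted answers and records messages."""
    def __init__(self, emails, passwords1, passwords2, seckey="pet"):
        self._emails = iter(emails)
        self._passwords1 = iter(passwords1)
        self._passwords2 = iter(passwords2)
        self._seckey = seckey
        self.messages = []

    def getEmail(self):
        return next(self._emails)

    def getPassword1(self):
        return next(self._passwords1)

    def getPassword2(self):
        return next(self._passwords2)

    def getSecKey(self):
        return self._seckey

    def printMessage(self, message):
        self.messages.append(message)


def test_createAccount_happy_path():
    view = FakeView(["hello@gmail.com"], ["1234567"], ["1234567"])
    account = RegisterController(view).createAccount()
    assert account is not None
    assert "successfully" in view.messages[-1]


def test_createAccount_stops_after_too_many_bad_emails():
    view = FakeView(["dddddd"] * 4, [], [])      # four invalid emails in a row
    account = RegisterController(view).createAccount()
    assert account is None

With the view faked like this, each test case you listed (invalid email, mismatched passwords, and so on) becomes one scripted sequence of inputs plus an assertion on the returned account and the recorded messages.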

Kotlin delegation, what should I test?

In Kotlin the powerful construct of delegation can be used to extend functionality of existing interfaces by reusing existing implementations.

class Demo(map: Map<String, Int> = HashMap()) : Map<String, Int> by map

Questions:

  • What should I be testing? Testing HashMap from the example is not the target of this test. It seems very verbose to verify the complete implementation; I would rather verify that delegation to the proper members takes place.
  • When using mutation testing, e.g. with Pitest, how do I catch all mutations? The report shows quite a few mutations, correctly I believe, since the Kotlin compiler generates bytecode for all the delegated methods.

How can I make Extent Reports generate an HTML report when the test fails outside the @Test tag?

I created a report template for my test project that has met my needs so far; however, if the test fails outside the '@Test' tag, the report is not generated (for example a connection error, a driver error, etc.). I believe this happens because the test does not go through the @AfterMethod and @AfterTest tags when errors of this kind occur, so I cannot execute the extent.flush() and extent.close() commands and the HTML file is not created. Any suggestions on what I can do? The Java code follows below:

@BeforeTest
public void startTest() {

    className = this.getClass().getName();
    String dateName = new SimpleDateFormat("dd-MM-yyyy hhmmss").format(new Date());
    String userDir = System.getProperty("user.dir");
    nomePasta = className.replace("MOBILEX_AUTOMACAO.TEST.", "") + " " + dateName;

    new File(userDir + "\\target\\reports\\" + nomePasta);

    extent = new ExtentReports(userDir + "\\target\\reports\\" + nomePasta + "\\"
            + className.replace("MOBILEX_AUTOMACAO.TEST.", "") + "REPORT.html", true);
    extent.addSystemInfo("Nome APP", "MobileX");

    extent.loadConfig(new File(userDir + "\\extent-config.xml"));
}

@AfterMethod
public void getResult(ITestResult result) throws Exception {

    if (result.getStatus() == ITestResult.FAILURE) {
        String screenshotPath = getScreenhot(result.getName());
        logger.log(LogStatus.FAIL, "Test Case Failed is " + result.getThrowable());
        logger.log(LogStatus.FAIL, "Test Case Failed is " + result.getName());

        logger.log(LogStatus.FAIL, logger.addScreenCapture(screenshotPath));
    } else if (result.getStatus() == ITestResult.SUCCESS) {
        logger.log(LogStatus.PASS, "Test Case passed is " + result.getName());
    }

    extent.endTest(logger);
    DriverFactory.killDriver();
}

@AfterTest
public void endReport() throws IOException {

    extent.flush();
    extent.close();
}

My test project targets mobile; I am using TestNG version 6.10 for my Java tests, with Appium version 7.0.

Security test within a staging environment. Is SOAPUI sufficient as a test tool?

Currently I am working on a project in which I am supposed to define the following aspects as a test manager:

  • Conception of a penetration test for a test staging environment
  • Planning security guidelines for REST API development
  • Use of REST API scans via SOAPUI (create security test cases)

So my planning within the staging environment includes functional test procedures, integration tests, as well as test procedures at the REST API level. I am, however, undecided whether a pure security test with the SOAPUI solution alone is enough to achieve high security coverage.

Therefore I am also planning another option for the test environment with more specialized tools that cover more than just the REST API level: integrating W3af or Vooki into the staging environment.

Questions: Is a security test run via SOAPUI sufficient? Should I use different tools in the stage (pre-production) and production environments? Should I run security tests with another tool within the staging environment?