Is it bad practice to have a helper package in Go for testing purposes, or is this introducing dependencies?

I find myself repeating the same code when writing unit tests. For example, when writing functions that work with files, in the setup for the test I often write some code to create a file (in a directory specified by an environment variable) and populate it; then, after I have run the test, I destroy the file and folder.
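For illustration, the repeated boilerplate looks more or less like this minimal sketch (the TEST_DATA_DIR variable name and the file contents are just placeholders):

    package mypkg

    import (
        "os"
        "path/filepath"
        "testing"
    )

    func TestWithDataFile(t *testing.T) {
        // Create a folder under the directory named by an environment
        // variable, so the test works the same on any OS.
        dir := filepath.Join(os.Getenv("TEST_DATA_DIR"), "TestWithDataFile")
        if err := os.MkdirAll(dir, 0o755); err != nil {
            t.Fatal(err)
        }
        // Destroy the file and folder once the test has run.
        defer os.RemoveAll(dir)

        // Populate the file the code under test will read.
        path := filepath.Join(dir, "input.txt")
        if err := os.WriteFile(path, []byte("test data"), 0o644); err != nil {
            t.Fatal(err)
        }

        // ... exercise the code under test against path ...
    }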

To me this seems the correct way of doing things, as the test is completely independent of anything other than an environment variable, which can easily be set on multiple operating systems.

However, I am clearly violating the DRY principle by writing this file-creation code every time, so I thought I could make a helper package that simplifies it (a sketch of what I have in mind follows the questions below). Yet I feel that would make the tests "dependent" on the helper package.

So the questions are:

  1. In this situation, should the DRY principle be violated in order to avoid unnecessary dependencies?

  2. Is it OK to create a helper package as long as it can be imported from an external location like GitHub?

  3. Is there another approach (perhaps using dependency injection)?
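For reference, the helper package I have in mind would be a minimal sketch along these lines (the package name, the API, and the TEST_DATA_DIR variable are invented for illustration):

    // Package testfiles is the hypothetical helper package in question.
    package testfiles

    import (
        "os"
        "path/filepath"
        "testing"
    )

    // TempInput creates a populated file under the directory named by the
    // TEST_DATA_DIR environment variable and removes the file and its
    // folder again when the test finishes.
    func TempInput(t *testing.T, name string, data []byte) string {
        t.Helper()
        dir := filepath.Join(os.Getenv("TEST_DATA_DIR"), t.Name())
        if err := os.MkdirAll(dir, 0o755); err != nil {
            t.Fatal(err)
        }
        path := filepath.Join(dir, name)
        if err := os.WriteFile(path, data, 0o644); err != nil {
            t.Fatal(err)
        }
        t.Cleanup(func() { os.RemoveAll(dir) })
        return path
    }

A test would then shrink to a single call, e.g. path := testfiles.TempInput(t, "input.txt", []byte("test data")); that call is exactly the dependency I am worried about.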

UI Testing for multi-screen setups

I was wondering whether there are any tools/systems capable of simulating multiple screens for use in UI testing.

Context: We are developing an application which can consist of multiple windows, depending on the user's setup. We would like to test (manually) such setups with more than 2 monitors, even if the developer/tester only has 1-2 monitors. We do not need to automate this testing; being able to view it from time to time when we make larger UI changes would be sufficient.

Thanks.

An atomic method updates dozens of properties. Am I “testing too much”?

This question is about designing unit tests, something I started learning a short while ago.

I know the principle that testing too much in a single unit test is a smell, either in the code or in the design itself.

Scenario

I am computing a tax return form based on tax figures computed over the course of a year. Those figures are stored in the database in different rows on a monthly basis.

In order to compute the figures for the tax return, some columns have to be copied "in the right place", while others have to be summed up across the 12 months.

The final form has a few flat columns, a Map divided by month with 10 columns for each entry, and another Map that is a view on a different aggregation (hence the sum), where each entry has 4 columns.

Mocking the database is trivial, so I have a consistent set of tax figures along with the expected results hardcoded (this is a kind of code duplication I like to have in tests).
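For instance, the hardcoded test set behind the mock is a sketch along these lines (MonthlyFigures and its setters are hypothetical stand-ins for my real model):

    // Hypothetical fixture: one consistent row of tax figures per month,
    // matching the hardcoded expected results used in the assertions.
    private List<MonthlyFigures> getTheTestSet() {
        List<MonthlyFigures> rows = new ArrayList<>();
        for (Month month : Month.values()) {
            MonthlyFigures row = new MonthlyFigures();
            row.setMonth(month);
            row.setColumnA(new BigDecimal("100.00")); // gets copied "in the right place"
            row.setColumnB(new BigDecimal("25.50"));  // gets summed across the 12 months
            rows.add(row);
        }
        return rows;
    }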

The problem here is that, while still writing the test, I find myself producing a lot of testing code.

The method that computes the form should be atomic (i.e. set all the columns at once) because it runs in a single transaction creating a new consistent object, but it is all about setting dozens of different columns. I have ended up testing more than 30 different properties.

Simple question

  • Given that my method under test works on columnA..columnZ at the same time
  • Given that the failure of one assertion (thanks to JUnit's ErrorCollector) causes the test to fail but still evaluates and reports all other failed assertions together

From the test design perspective, and in particular with respect to the principle "don't test too much", is it preferable to…

  1. Write a single test that performs 26 assertions, one for each column?
  2. Write multiple tests that run the same method with the same data set, but where each test checks a single property of the output?
  3. Other strategy?

Pseudo code

The following is a sample of how the current code is structured. I really have plenty of assertions.

@Test
public void testCreateTaxForm() {
    // Create the mocks, e.g.
    MonthlyDao dao = mock(MonthlyDao.class);
    when(dao.list(any())).thenReturn(getTheTestSet());

    TaxFormManager uut = new TaxFormManagerImpl();
    uut.setDao(dao);

    // Test
    TaxForm form = uut.create(....);

    // Verify
    getErrorCollector().checkThat(form, hasProperty("flatColumnA", equalTo(expectedValueForA)));
    getErrorCollector().checkThat(form, hasProperty("flatColumnB", equalTo(expectedValueForB)));
    // ...
    getErrorCollector().checkThat(form, hasProperty("flatColumnZ", equalTo(expectedValueForZ)));

    // Following is wrapped in a method for my comfort
    for (Month month : Month.values()) {
        getErrorCollector().checkThat(form, hasProperty("monthMap",
                hasEntry(equalTo(month), hasProperty("flatMonthlyColumnA", equalTo(....)))));
        // ...
    }

    for (ExemptionType exemption : ExemptionType.values()) {
        getErrorCollector().checkThat(form, hasProperty("exemptions",
                hasEntry(equalTo(exemption), hasProperty("flatExemptionColumnA", equalTo(....)))));
        // ...
    }
}

Pseudo alternative

In the pseudo alternative, I would have to write dozens of different methods (at least 120, each testing one column in one month), trying to reuse as much initialization logic as possible to keep the test code base from growing too fat (see the sketch after the listing below).

public void testTaxReturn_flatColumnA
public void testTaxReturn_flatColumnB
public void testTaxReturn_flatColumnC
public void testTaxReturn_january_columnA
public void testTaxReturn_january_columnB
public void testTaxReturn_january_columnC
public void testTaxReturn_retirementFunds_columnA
public void testTaxReturn_retirementFunds_columnB
public void testTaxReturn_retirementFunds_columnC
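For example, a minimal sketch of how such per-property tests could share their initialization (reusing the mocks from the pseudo code above; expected values are placeholders):

public class TaxFormManagerPerPropertyTest {

    @Rule
    public ErrorCollector errorCollector = new ErrorCollector();

    private TaxForm form;

    @Before
    public void setUp() {
        // Shared initialization: every per-property test reuses the same
        // mock DAO, the same test data set, and the same computed form.
        MonthlyDao dao = mock(MonthlyDao.class);
        when(dao.list(any())).thenReturn(getTheTestSet());
        TaxFormManager uut = new TaxFormManagerImpl();
        uut.setDao(dao);
        form = uut.create(....); // same arguments as in the pseudo code above
    }

    @Test
    public void testTaxReturn_flatColumnA() {
        errorCollector.checkThat(form, hasProperty("flatColumnA", equalTo(expectedValueForA)));
    }

    @Test
    public void testTaxReturn_january_columnA() {
        errorCollector.checkThat(form, hasProperty("monthMap",
                hasEntry(equalTo(Month.JANUARY), hasProperty("flatMonthlyColumnA", equalTo(....)))));
    }

    // ... one such method per column, at least 120 in total ...
}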

Product Research Platform Testing

Hey guys!

Our company is looking for Shopify sellers to test out a new product research tool we just built that offers tons of metrics. This tool helps find the top-selling products on Shopify-hosted domains.

The testing process takes about 20 minutes, enough to go through all the features the platform has to offer.

When you finish, I’ll send you a short questionnaire where you can leave your thoughts.

As a token of our appreciation, we’ll also offer the first 20 participants a…


Conditional vs Logical Testing

I would like to get your thoughts and views on using conditional vs. logical testing.

For example:

To test that all of the following variables are true, given that their current status is as follows:

a = True
b = True
c = True
d = True
e = True

One could use as test as follows to ensure that they all are true:

  1. Method 1:

If a And b And c And d And e Then x = "Passed" Else x = "Failed"

  2. Method 2:

If a = b = c = d = e Then x = "Passed" Else x = "Failed"

  3. Method 3:

If a Then If b Then If c Then If d Then If e Then x = "Passed" Else x = "Failed"

With all variables being True, Method 2 is the slowest and Method 3 (nested conditionals) is the fastest.

If any of the variables were changed from True to False, then Method 3 (conditional) always wins by a big margin.

I do tend to test 2 or more conditions for truth using And logic (we all do, I guess). But isn't it quicker to use a nested "If ... If" statement?

Looking forward to your valuable feedback and input.

Kind regards

How to perform a security test/review/penetration test of Ethernet ports?

Recently I was engaged by a client who wants its Ethernet ports checked to verify whether port security is functioning effectively.

  1. What approach or steps can be used to check Ethernet port security?

  2. What tools can be used to do the same?

  3. Consider the scenario where I am a third party who enters an
    organization with a laptop and sees that there are Ethernet
    ports around, then decides to plug in a cable and try to get into the network. What could I do to achieve that?

What are the success factors for security and performance testing?

I'm performing research regarding security and performance verification (testing and reviews). I would like to know the opinions of practitioners regarding my findings. Could you answer my survey at http://www.spvsurvey.com/ ?

Besides contributing to the advancement of knowledge, you will be helping those who need it: we will donate R$ 1 to the Red Cross (Brazil) for each valid survey response.

Thank you very much!

ReferenceError: fetch is not defined in Jest testing

I have a ReactJS application. The code more or less looks like this:

  componentDidMount() {
    this.fetchRdsInstanceList();
  }

  fetchRdsInstanceList = () => {
    fetch('/api/foo', {
      credentials: 'include',
    })
      .then((response) => {
        return response.json();
      })
      .then((json) => {
        this.setState(json);
      });
  }

Then in my test I wrote this:

jest.unmock('./Home');

describe('Home', () => {
  let mockProps;

  beforeEach(() => {
    fetchRdsInstanceList = jest.fn();
    mockProps = {
      'users': [
        {'name': 'Bobby'}
      ]
    };
  });

  describe('Rendering', () => {
    it('Render Home table', () => {
      const container = shallow(<Home {...mockProps} />);
      console.log(container);
    });
  });
});

I get the following error:

  ● Home › Rendering › Render Home table

    ReferenceError: fetch is not defined

      44 |
      45 |   fetchRdsInstanceList = () => {
    > 46 |     fetch('/api/foo', {
         |     ^
      47 |       credentials: 'include',
      48 |     })
      49 |       .then((response) => {

What am I missing? Note that the application itself runs just fine.