Is it a bad idea to use helper functions in integration tests?

At my job we have a small disagreement about whether we should use helper functions for creating test data, especially in the Laravel framework. A sample test is:

namespace Tests\MyAppTests;

use PHPUnit\Framework\TestCase;
use Carbon\Carbon;

class MyTest extends TestCase
{
    private function makeATwoHourSpan(Carbon $now, &$start, &$end)
    {
        $start = new Carbon($now);
        $start->modify("-1 hour");

        $end = new Carbon($now);
        $end->modify("+1 hour");
    }

    public function testSomething()
    {
        $now = Carbon::now();
        $start = null;
        $end = null;

        $this->makeATwoHourSpan($now, $start, $end);
        // Rest of test here
    }

    public function testSomethingElse()
    {
        $now = Carbon::now();
        $start = null;
        $end = null;

        $this->makeATwoHourSpan($now, $start, $end);
        // Rest of test here
    }
}

My supervisor argues that even though the makeATwoHourSpan method keeps the code DRY, it does not aid the readability of the test. He also mentioned that a test should be autonomous and easy to run standalone, without any helper functions beyond the tools provided by the framework.

So my question is: should I avoid “helper” functions when creating test data for tests, or is having a function for data creation the way to go?
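For comparison, the same helper could return its values instead of filling by-reference output parameters, which keeps the data creation in one place while making each test read top-to-bottom. A minimal sketch of that variant (same Carbon-based logic as above, names illustrative):

    private function makeATwoHourSpan(Carbon $now): array
    {
        // DateTime::modify() returns the modified object, so the calls chain.
        $start = (new Carbon($now))->modify("-1 hour");
        $end = (new Carbon($now))->modify("+1 hour");

        return [$start, $end];
    }

    public function testSomething()
    {
        // Unpacking at the call site makes the helper's outputs explicit.
        [$start, $end] = $this->makeATwoHourSpan(Carbon::now());
        // Rest of test here
    }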

MPGS (Mastercard) Integration Security

According to the MPGS integration guide, if the merchant wants to display a receipt when a payment completes successfully, MPGS redirects to the merchant site's callback URL with a resultIndicator. The merchant site then needs to compare the resultIndicator with the successIndicator stored previously.

A hacker could first launch an MPGS payment session without paying, then pretend to be MPGS and repeatedly call the merchant site's callback URL with different resultIndicator values until one matches the stored successIndicator. The merchant would then believe the hacker has paid.

Is this possible? If so, how can this loophole be avoided? Should the merchant call the Inquiry API upon receiving the callback, rather than trusting the resultIndicator?
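For what it's worth, a common pattern is to treat the redirect as a hint only and confirm the payment server-to-server before fulfilling the order. A minimal sketch, where loadStoredSuccessIndicator() and verifyWithGateway() are hypothetical stand-ins (a storage lookup and a wrapper around the gateway's order-inquiry call), not the actual MPGS API:

    function handleCallback(string $resultIndicator, string $orderId): bool
    {
        // Hypothetical lookup of the successIndicator saved when the
        // payment session was created.
        $successIndicator = loadStoredSuccessIndicator($orderId);

        // Constant-time comparison, so timing differences leak nothing
        // to a brute-forcing caller.
        if (!hash_equals($successIndicator, $resultIndicator)) {
            return false;
        }

        // Defence in depth: confirm the payment status directly with the
        // gateway (e.g. an order-inquiry/retrieve call) instead of
        // trusting the redirect parameter alone.
        return verifyWithGateway($orderId);
    }

Rate-limiting the callback endpoint and invalidating the stored indicator after a small number of failed attempts would further shrink the brute-force window.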

MQ integration: How to notify consumers of upcoming message format changes?

We have multiple microservices communicating over MQ. As the MQ messages are the interface/contract between the services, whenever we change the MQ message published by a service, we need to make the same adjustments in the services that consume the message.

As of now, the number of services is small enough that we know which services communicate with each other, and we can keep the MQ message contract in sync between them. But as the number of services grows, this becomes harder.

Option 1: Break things first, then fix it

I’ve been thinking of implementing some kind of health check. Let’s say service A during normal operation may emit message type X, which is consumed by service B. Service A could then, on startup, emit a health-check type of message, something along the lines of a message X dry-run. When service B receives this, it simply verifies that the message conforms to the contract. If not, for example if service A has removed a critical field from the message, service B will reject the message, which in turn will end up on a dead-letter exchange, which will trigger a warning notification to the devops staff.

This approach won’t prevent us from deploying non-compatible message types, but will notify us pretty much instantly when we do. For our use case, this might work due to our very small number of developers and projects, so if we break things like this we’d be able to fix it quite quickly.
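A consumer-side contract check for such a dry-run can stay very simple: validate the fields, and reject without requeueing so the broker dead-letters the message. A sketch, with illustrative field names and a made-up dryRun flag:

    function validateContract(array $message): bool
    {
        // Required fields are illustrative.
        foreach (['id', 'timestamp', 'payload'] as $field) {
            if (!array_key_exists($field, $message)) {
                return false; // missing field => contract broken
            }
        }
        return true;
    }

    function onMessage(string $body): bool
    {
        $message = json_decode($body, true);

        if (!is_array($message) || !validateContract($message)) {
            // Returning false stands for "reject without requeue" (e.g.
            // basic_nack with requeue = false in php-amqplib), so the
            // broker routes the message to the dead-letter exchange and
            // the devops notification fires.
            return false;
        }

        if (!empty($message['dryRun'])) {
            return true; // contract OK; acknowledge and discard
        }

        // ...normal processing...
        return true;
    }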

Option 2: Early probes

A variation on this might be that we start versioning the MQ message format (which we probably should and will do anyway). Then, when service A plans to upgrade from version 1 of message type X to version 2, service A could start emitting a “dry-run” variant of version 2 of message type X early on. This would cause service B to drop the message. Say this happens a few days or weeks before service A performs the actual switch from version 1 to version 2; the devops team would then have time to add support for version 2 in the meantime.
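With versioning in place, the consumer-side check from option 1 becomes a version gate: the consumer keeps a list of versions it understands and drops everything else. A sketch (the version field and values are assumptions):

    const SUPPORTED_VERSIONS = [1]; // widen to [1, 2] once version 2 is implemented

    function acceptsVersion(array $message): bool
    {
        return in_array($message['version'] ?? null, SUPPORTED_VERSIONS, true);
    }

A version-2 dry-run arriving while the list still reads [1] gets rejected to the dead-letter exchange, warning the team well before service A performs the real switch.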

Option 3: Manually detecting conflicts before deployment

Another approach would be to have some way of detecting, before the actual deployment, that service A is about to start emitting non-compatible messages in the first place. This would mean we would need to maintain some kind of matrix of which versions of message X are supported by which consumers, and defer deploying service A (with the new version of message X) until all the consumers are ready for it. How to implement this effectively I don’t know.
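One low-tech implementation would be a hand-maintained matrix checked into version control and evaluated by a pre-deployment script; everything in this sketch (service names, versions) is made up:

    // message type => [consumer => supported versions]
    $matrix = [
        'X' => [
            'service-B' => [1],
            'service-C' => [1, 2],
        ],
    ];

    function safeToDeploy(array $matrix, string $messageType, int $newVersion): bool
    {
        foreach ($matrix[$messageType] ?? [] as $consumer => $versions) {
            if (!in_array($newVersion, $versions, true)) {
                echo "Blocked: $consumer does not support $messageType v$newVersion\n";
                return false;
            }
        }
        return true;
    }

The obvious weakness is that the matrix has to be kept accurate by hand, which is exactly the synchronization problem being solved.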

Other alternatives

How do others handle message type compatibility between services that communicate using MQ? How do you know that when your service A makes a change to message type X, it won’t break any of the consumers?

PS. I posted this over at Reddit a few days ago, but due to the lack of feedback I decided to post here as well.

How to design generic bug-tracker integration?

I’m looking for ideas on how to design a “generic” bug-tracker integration architecture for Kiwi TCMS (an open source test case management system).

Background: at the moment we support integration with several systems: Bugzilla, JIRA, GitHub and GitLab. The supported behaviours are:
– link existing bugs to test cases/test executions (TEs)
– add comments to existing bugs when a TE fails
– create new bugs from a TE (semi-automatically for now)
– display a link to open all reported bugs in a list format (if supported by the other system)

Other features required:
– fully automatic creation of new bugs on failure
– richer display of information from the bug-tracker inside Kiwi TCMS.

Currently we have an interface class which defines the entry points for these actions, and then a separate implementation for each of the 3rd-party systems. This also means different settings for each of them and different 3rd-party libraries we need to ship in our code.

This also means that whenever somebody requests a different system, we don’t have a ready solution.
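To make the current shape concrete, the design is roughly an interface like the following, sketched in PHP to match the earlier example (the method names are illustrative, not the actual Kiwi TCMS code, which is Python):

    interface BugTrackerIntegration
    {
        public function linkBug(string $bugUrl, int $executionId): void;
        public function addComment(string $bugUrl, string $comment): void;
        public function createBug(array $fields): string;   // returns the new bug's URL
        public function allBugsUrl(array $bugIds): ?string; // null if the tracker can't do this
    }

    // One implementation per system, each pulling in its own settings
    // and 3rd-party client library:
    // class JiraIntegration implements BugTrackerIntegration { ... }
    // class GitHubIntegration implements BugTrackerIntegration { ... }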

For systems that ship multiple versions, e.g. Jira, parts of the API may not be compatible with the current version of the library we’re using.

I am looking for something that will make it possible to integrate more easily with anything out there (in terms of link & display), because this is what most people use. Maybe the Open Graph Protocol will help us here, although none of the systems seem to support it (except GitHub).

Then, for comments and automatic creation, I’m thinking of webhooks, but I’m not really sure how to define anything beyond that. 3rd-party systems can be vastly different from one another, and the list of required fields differs as well. So while it is easy to just execute a webhook, I have no idea how to let the user properly configure its payload. There can also be cases where you need to make several HTTP calls (e.g. login, create, add/update) to complete the operation.
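One way to let users configure the payload without writing code is a template with placeholders that the application substitutes at runtime; a sketch, where the {placeholder} syntax and field names are assumptions rather than an existing feature:

    // Admin-supplied template; placeholders are filled from the failing
    // test execution's context.
    $template = '{"title": "{summary}", "body": "Failed: {execution_url}"}';

    function renderPayload(string $template, array $context): string
    {
        return preg_replace_callback(
            '/\{(\w+)\}/',
            fn(array $m) => $context[$m[1]] ?? '',
            $template
        );
    }

    $payload = renderPayload($template, [
        'summary'       => 'TE-42 failed', // illustrative values
        'execution_url' => 'https://tcms.example.com/execution/42',
    ]);
    // Real code would JSON-escape the substituted values. A multi-step
    // integration (login, create, update) could then be modelled as an
    // ordered list of such templated HTTP calls.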

Ultimately I’d like to make this part of the software easier on the maintainers and also a bit easier for admins who would need to integrate these 3rd party systems with our software.

What do you reckon? Has anyone done something similar for a web app?

When do you have enough automated testing to be confident in your continuous integration pipeline?

Continuous integration with testing is useful for making sure that you have “shippable” code checked in all the time.

However, it is really difficult to keep up a comprehensive suite of tests, and often it feels like the build is going to be buggy anyway.

How many tests should you have before you feel confident in your CI pipeline’s testing? Do you use some sort of metric to decide when there are enough tests?