Is it a bad idea to use helper functions in integration tests?

At work we have a small disagreement about whether we should use helper functions to create test data, especially in the Laravel framework. A sample test looks like this:

namespace Tests\MyAppTests;

use PHPUnit\Framework\TestCase;
use Carbon\Carbon;

class MyTest extends TestCase
{
    private function makeATwoHourSpan(Carbon $now, &$start, &$end)
    {
        $start = new Carbon($now);
        $start->modify("-1 hour");

        $end = new Carbon($now);
        $end->modify("+1 hour");
    }

    public function testSomething()
    {
        $now = Carbon::now();
        $start = null;
        $end = null;

        $this->makeATwoHourSpan($now, $start, $end);
        // Rest of test here
    }

    public function testSomethingElse()
    {
        $now = Carbon::now();
        $start = null;
        $end = null;

        $this->makeATwoHourSpan($now, $start, $end);
        // Rest of test here
    }
}

My supervisor's argument is that even though the makeATwoHourSpan method keeps the code DRY, it does not help the readability of the test. He also mentioned that a test should be autonomous and easy to run standalone, without any helper functions beyond the tools provided by the framework.

So my question is: should I avoid helper functions when creating test data for tests, or is having a dedicated function for data creation the way to go?
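For what it's worth, a helper can also return its values instead of filling by-reference out-parameters, which often reads better in a test. A minimal, language-agnostic sketch of the same idea in Python's unittest (all names here are invented for illustration):

```python
import unittest
from datetime import datetime, timedelta

def make_two_hour_span(now):
    """Return (start, end): one hour before and one hour after `now`."""
    return now - timedelta(hours=1), now + timedelta(hours=1)

class MyTest(unittest.TestCase):
    def test_something(self):
        now = datetime(2024, 1, 1, 12, 0)
        start, end = make_two_hour_span(now)
        # rest of the test here
        self.assertEqual(end - start, timedelta(hours=2))
```

Returning a tuple keeps each test body to one visible line of setup, which partly addresses the readability concern while staying DRY.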

Should tests perform a single assertion, or are multiple related assertions acceptable?

Assume a client makes a request to an API endpoint that returns a JSON response whose structure and data change depending on whether the request was successful. Possible responses may look as follows:

Scenario with a success response

{
  "status": XXX,
  "data": [{
    ...
  }]
}

Scenario with a failure response

{
  "status": XXX,
  "errors": [{
    ...
  }]
}

Example of test scenarios for the above would be:

  • Assert the expected status code
  • Assert the expected JSON structure
  • Assert the expected JSON data

When you have more than one assertion you could make, is it recommended to write a separate test for each assertion, or to group related assertions into a single test?
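To make the trade-off concrete, here is a sketch in Python's unittest that groups the three related assertions into one test. The endpoint and payload are invented stand-ins, not a real API:

```python
import json
import unittest

def fake_api_call():
    """Stand-in for a real HTTP call; returns (status_code, body)."""
    return 200, json.dumps({"status": 200, "data": [{"id": 1}]})

class SuccessResponseTest(unittest.TestCase):
    def test_success_response(self):
        status_code, body = fake_api_call()
        payload = json.loads(body)
        # One logical behaviour ("successful request"), three related assertions:
        self.assertEqual(status_code, 200)             # expected status code
        self.assertIn("data", payload)                 # expected JSON structure
        self.assertEqual(payload["data"][0]["id"], 1)  # expected JSON data
```

Grouping like this means one request exercises one behaviour; splitting into three tests would repeat the request but make each failure message more specific.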

Design pattern for exposing static functions in C/C++ only to unit tests

I have some static free functions (they don't belong to a class) in a C++ file. I want them to remain visible only within that module and to stay free functions, but I also want to cover them with unit tests.

As far as I can see, there are several possible options:

  • Using macros: enable/disable the static keyword depending on whether a preprocessor symbol (TEST_MODE) is defined:
#ifdef TEST_MODE
#define static
#endif
  • Use ‘private headers’: create a header file whose name ends with _private and whose location is in include/private.

  • Declare the functions in the normal header file, marking a "private" section with code comments, e.g. /////// PRIVATE SECTION ///////

  • Include the *.cpp file in which they are defined directly from the test file (personally I don't like this option at all)

  • Group the private functions in a "private" namespace, e.g. namespace myNamespace { namespace detail { .... } } (private itself is a reserved keyword in C++, so the inner namespace needs another name, such as detail). The problem with this one is that it can't be used in plain C.

  • Use compiler directives for hiding symbols, like __attribute__((visibility("hidden")))

I would say my favourite is the private header: it seems clean and does not need namespaces, in case the language doesn't support them.

Which one is considered the best practice?

Summative user tests that allow you to compare against competitors?

I want to understand when it would be appropriate to conduct summative user tests focused on evaluating the performance of our product against our competitors.

We have a finished, live product which we know can be improved in many ways, and we want a way to evaluate it against others in the market. We've done some competitor analysis, but we want to see how people use our product and what they like and dislike. Would this kind of user testing work?

Unit Tests vs System Tests

I've always known unit tests to be something you do in code: you write functions to test other functions. Our team has been working with the same project manager for a while now, and he's always referring to system tests as unit tests. By system tests I mean the kind of test where a user follows a test plan and performs actions on an application to ensure it meets the requirements. This PM is an Indian offshore resource.

Is this use of terminology something unique to this person? Is it an Indian thing (like “do the needful”)? Or am I just not aware that there’s more than one definition for what a unit test is?

Running Selenium tests as part of the main test suite or separately?

I have a large suite of RSpec tests for a Rails application, with unit tests, controller tests, and feature tests using Capybara. I also have Selenium tests that are excluded from that suite and run separately.

Is it better practice to run the Selenium tests together with the main suite? Apart from saving time, is there any other reason for keeping them separate?

How do I write unit tests for legacy code (that I don’t understand)?


I’ve read a lot of things before asking this question, including many relevant questions right here on SE:

  • (Software Engineering SE) Writing tests for code whose purpose I don’t understand
  • (Software Engineering SE) Unit testing newbie team needs to unit test
  • (Software Engineering SE) Best practices for retrofitting legacy code with automated tests
  • (Software Engineering SE) How to unit test large legacy systems?
  • (Blog post) How to mock up your Unit Test environment

However, after all that reading, I can't help but feel that the itch still hasn't been scratched.


How do I write unit tests for legacy code that I can’t run, simulate, read about, or easily understand? What regression tests are useful to a component that presumably works as intended?

The Whole Picture

I'm a returning summer intern, transitioning into grad school. My tasking involves these requirements:

  1. For a particular product, evaluate whether our software team can upgrade their IDE and JUnit version without losing compatibility with their existing projects.
  2. Develop unit tests for some component in the existing Java code (it’s largely not Java). We want to convince the software team that unit testing and TDD are invaluable tools that they should be using. (There’s currently 0% code coverage.)
  3. Somehow, end the days of cowboy coding for a critical system.

After obtaining a copy of the source code, I tried to build and run it so that I could understand what this product does and how it works. I couldn't. I asked my supervisors how to proceed, and I was issued a new standalone machine capable of building it, complete with the build scripts that actually work. That didn't work either because, as they should have expected, their production code only runs on the embedded system it's designed for. However, they have a simulator for this purpose, so they obtained it and put it on the machine for me. The simulator didn't work either. Instead, I finally received a printout of the GUI for a particular screen. They also have no code comments anywhere within the 700,000+ lines of Java, making it even harder to grasp. Furthermore, there were issues evaluating whether their projects were compatible with newer IDEs; in particular, their code didn't load properly even into the very IDE version they use.

My inventory is looking like this:

  • NetBeans 8, 9, 10, 11
  • JUnit 4, 5
  • Their source code for a particular product (includes 700,000+ Java LOC)
  • Virtually no code comments (occasionally a signature)
  • No existing tests
  • A physical photo of a GUI window
  • A software design document (109 p.) that doesn’t discuss the component in the picture

I at least have enough to theoretically write tests that can execute. So I tried a basic unit test on said component. However, I couldn't initialize the objects it had as dependencies, which included models, managers, and DB connections. I don't have much JUnit experience beyond basic unit testing, so follow me to the next section.

What I’ve Learned From My Reading

  1. Mocking: If I write a unit test, it likely needs to have mock variables for production dependencies that I can’t easily initialize in setUp.
  2. Everyone here liberally suggests the book “Working Effectively with Legacy Code” by Michael Feathers.
  3. Regression tests are probably a good place to start. I don't think I have enough weaponry to attempt integration testing, and regression tests would provide more instant gratification for our software team. However, I don't have access to their known bugs, though I could possibly ask.
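The mocking idea in point 1 looks roughly like the sketch below. It uses Python's unittest.mock for brevity, with a hypothetical DB-backed component standing in for the real dependencies; a JUnit project would typically reach for Mockito instead:

```python
from unittest import mock

class ReportBuilder:
    """Hypothetical legacy-style component with a DB dependency."""
    def __init__(self, db_connection):
        self.db = db_connection

    def build(self):
        rows = self.db.fetch_rows()  # hard to set up against a real DB
        return {"count": len(rows)}

# In a test, replace the dependency with a mock instead of a real connection:
fake_db = mock.Mock()
fake_db.fetch_rows.return_value = [1, 2, 3]

report = ReportBuilder(fake_db).build()
assert report == {"count": 3}
fake_db.fetch_rows.assert_called_once()
```

The point is that the component under test never knows it was handed a fake; the test controls the dependency's behaviour and can verify how it was used.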

And now, an attempt to articulate the uncertainty I still have as a question. Essentially, I don't understand the how of writing these tests. Assuming I don't receive any further guidance from my supervisors (likely), it's up to me not only to learn what this component does but also to decide which tests are actually useful as regression tests.

As professionals who’ve worked with projects like this longer than I have, can you offer any guidance on how to write unit tests in this kind of situation?

How to create a Python class that is a subclass of another class, but fails issubclass and/or isinstance tests?

I know this is probably bad design, but I've run into a case where I need to create a subclass Derived of a class Base on the fly, and make instances of Derived fail the issubclass(Derived, Base) or isinstance(derived_obj, Base) checks (i.e. return False).

I’ve tried a number of approaches, but none succeeded:

  • Creating a property named __class__ in Derived. This can only be used to make the checks return True.
  • Overriding the __instancecheck__ and __subclasscheck__ methods of Base. This doesn’t work because CPython only calls these methods when conventional checks return False.
  • Assigning the __class__ attribute during __init__. This is no longer allowed in Python 3.6+.
  • Making Derived subclass object and assigning all its attributes and methods (including special methods) to those of Base. This doesn't work because certain methods (e.g. __init__) cannot be called on an instance that is not a subclass of Base.

Can this possibly be done in Python? The approach could be interpreter specific (code is only run in CPython), and only needs to target Python versions 3.6+.

To illustrate a potential usage of this requirement, consider the following function:

def map_structure(fn, obj):
    if isinstance(obj, list):
        return [map_structure(fn, x) for x in obj]
    if isinstance(obj, dict):
        return {k: map_structure(fn, v) for k, v in obj.items()}
    # check whether `obj` is some other collection type
    ...
    # `obj` must be a singleton object, apply `fn` on it
    return fn(obj)

This method generalizes map to work on arbitrarily nested structures. However, in some cases we don’t want to traverse a certain nested structure, for instance:

# `struct` is a user-provided structure; we create a list for each element
struct_list = map_structure(lambda x: [x], struct)
# somehow add stuff into the lists
...
# now we want to know how many elements are in each list, so we want to
# prevent `map_structure` from traversing the inner-most lists
struct_len = map_structure(len, struct_list)

If the said functionality can be implemented, then the above could be changed to:

pseudo_list = create_fake_subclass(list)
struct_list = map_structure(lambda x: pseudo_list([x]), struct)
# ... and the rest of the code should work
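For comparison, one workaround that sidesteps the subclassing question entirely is composition: wrap the list in a plain object that genuinely is not a list, so the isinstance check fails for free. OpaqueList below is an invented name for illustration, and this only helps if the wrapper does not actually need to be a list subclass:

```python
class OpaqueList:
    """Holds a list but is not one, so isinstance(x, list) is False."""
    def __init__(self, items):
        self._items = list(items)

    def append(self, item):
        self._items.append(item)

    def __len__(self):
        return len(self._items)

wrapped = OpaqueList([1, 2])
wrapped.append(3)
assert not isinstance(wrapped, list)  # a list-traversing map would skip it
assert len(wrapped) == 3
```

With this, map_structure would treat each wrapper as a singleton and apply fn (e.g. len) to it directly instead of traversing it.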

Methodology: Writing unit tests for another developer

I was thinking about software development and writing unit tests, and I came up with the following idea:

Let's assume we have pairs of developers. Each pair is responsible for a part of the code. One member of the pair implements a feature (writes the code) and the other writes unit tests for it. Tests are written after the code. In my idea they help each other, but work largely separately. Ideally they would work on two similar-sized features and then swap for test preparation.

I think that this idea has some upsides:

  • tests are written by someone who can see more about the implementation,
  • work should go a little faster than in pair programming (two features are developed at the same time),
  • both the tests and the code have a person responsible for them,
  • the code is tested by at least two people,
  • maybe searching for errors in code written by the person who is testing your code would give special motivation for writing better code and avoiding cut corners.

Maybe it would also be a good idea to add another developer for a code review between the code and test development.

What are the downsides of this idea? Is it already described as some methodology unknown to me and used in software development?

PS. I'm not a professional PM, but I know something about the project development process and a few of the most popular methodologies – this idea doesn't sound familiar to me.

What’s the proper name for code that tests another code?

That seems like a stupid question, but I can't find a proper name for "code that tests other code". Most of the literature calls that kind of code just "tests", but that is way too general in my understanding (obviously I am not a native English speaker, so I might be wrong) – code might be tested by people, or by some external machine that is not "code" at all. The same applies to "code tests".

I was thinking about "testing code", but this seems a bit confusing, as it reads more like a verb phrase than a noun.