Would decoupling using interfaces/templates make the system easier to maintain, at the cost of over-engineering?

I have been practicing this hybrid approach to dependency injection for the last couple of days, and I am wondering whether it should also apply to components that live within the same package.

For example:

I have a GPIO module that talks to the device chip and therefore needs to be mocked for unit testing. I also exposed its Pins as interfaces, so their consumers would not be coupled to it.

I have a Motor component that lives in another package and consumes the GPIO Pins’ interfaces.

Then a ControlAgent component that lives within the same package as the Motor component, and consumes it.

One benefit of using an interface for the Motor and a template class for its implementation seems to be that it makes construction a little more generic (as long as I provide what’s needed at compile time), and that it makes unit testing easier.
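To make the mocking side concrete, here is a minimal sketch of that interface-plus-template split. All names here (IMotor, Motor, MockMotor, Pin) are illustrative, not taken from your actual codebase:

```cpp
#include <utility>

// Illustrative interface for the Motor; consumers depend only on this.
struct IMotor {
    virtual ~IMotor() = default;
    virtual void setSpeed(double rpm) = 0;
    virtual double speed() const = 0;
};

// Template implementation: the concrete Pin type is bound at compile time,
// so the real GPIO pin (or a fake) is injected without any runtime cost.
template <typename Pin>
class Motor : public IMotor {
public:
    explicit Motor(Pin pin) : pin_(std::move(pin)) {}
    void setSpeed(double rpm) override { speed_ = rpm; /* drive pin_ here */ }
    double speed() const override { return speed_; }
private:
    Pin pin_;
    double speed_ = 0.0;
};

// A test double for consumers of IMotor: no GPIO, no template machinery.
struct MockMotor : IMotor {
    void setSpeed(double rpm) override { lastSpeed = rpm; }
    double speed() const override { return lastSpeed; }
    double lastSpeed = 0.0;
};
```

A consumer written against IMotor can be unit-tested with MockMotor alone, while production code instantiates Motor over the real pin type.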

But I also have three more components (PID/Encoder/Odometry) and potentially more that are consumed by the ControlAgent.

It seems like a big effort in development time and complexity to set up each of those as interfaces when they are all part of the same package.

What is the long term benefit (if any) in the ControlAgent consuming all of its neighboring components as std::unique_ptrs (or any pointers) to interfaces rather than friends/members?

Especially since the implementations use templates, so the types must be known at compile time.
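To put the trade-off side by side, here is a hedged sketch of both wirings (all names are hypothetical). Compile-time injection keeps the collaborator types known at compile time; interface injection buys run-time swappability at the cost of vtable dispatch and one interface definition per collaborator:

```cpp
#include <memory>
#include <utility>

// Option A: compile-time wiring. ControlAgent is a template over its
// collaborators; tests pass in fake types directly, no interfaces needed.
template <typename MotorT, typename PidT>
class ControlAgent {
public:
    ControlAgent(MotorT& motor, PidT& pid) : motor_(motor), pid_(pid) {}
    void step(double target, double measured) {
        motor_.setSpeed(pid_.update(target - measured));
    }
private:
    MotorT& motor_;
    PidT&   pid_;
};

// Option B: run-time wiring through interfaces; collaborators can be
// swapped after construction, but every call goes through a vtable and
// each neighbor needs its own interface.
struct IMotor { virtual ~IMotor() = default; virtual void setSpeed(double) = 0; };
struct IPid   { virtual ~IPid() = default;   virtual double update(double) = 0; };

class ControlAgentDyn {
public:
    ControlAgentDyn(std::unique_ptr<IMotor> m, std::unique_ptr<IPid> p)
        : motor_(std::move(m)), pid_(std::move(p)) {}
    void step(double target, double measured) {
        motor_->setSpeed(pid_->update(target - measured));
    }
private:
    std::unique_ptr<IMotor> motor_;
    std::unique_ptr<IPid> pid_;
};
```

Both give you the same testing seam; the difference is binding time, and for same-package neighbors the template version avoids maintaining an interface per component.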

Does the over-engineering make the implementation less readable but also more maintainable?

Decoupling User Stories in Agile Development

Based on what I’ve read, user stories are often cast in a “who”, “what”, and “why” format, i.e. “[Who] wants the system to do [What], so that [Why]”. The “who” and “what” seem easy to grasp. For the example of an ATM:

As a customer
I want to be able to deposit money

The “why” line seems like it could have a significant impact on the scope of the software. For example, all of the following seem like reasonable justifications for the feature; however, the first implies security, insurance, data redundancy, etc.; the second and third imply the existence of entirely separate systems; and all three imply data persistence.

So that it will be protected by the bank
So that I can manage my finances online
So that I can make payments with my debit card

To put it briefly, within the context of agile software development, how are complex/coupled functional requirements handled so that they can be sanely developed?

For example, would the development team derive sets of use cases from such high-level user stories? Or would they rewrite the user stories so that they had a more limited scope?


Swappable state object or decoupling data and functions

I come from the OOP paradigm, and I also know a bit about functional programming and its advantages. Over time I came to like the separation of data from the transformations applied to it by pure functions. While I like the OOP idea of encapsulating data and the operations you can perform on it using classes, I have also started to dislike the mess that comes from keeping data and functions at the same level. However, for many reasons I don’t feel like switching to functional programming entirely (one of them being that I work in an environment where this would cause massive disruption), and I prefer to take baby steps in selected areas.

So I started down a middle way, with half-baked things like creating a props object to hold the instance properties that really define the state of the object (in the sense that you can serialize or store just the props object and recreate the object from it) – sort of the way the FB React framework does it in its components. I like the idea that I can store instanceA’s props object externally, swap in a temporary one, use the usual instanceA methods to play with that temporary state, and then return to the old props.
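As a sketch of that props idea (the Counter and CounterProps names are made up for illustration), all state-defining fields live in one plain struct that the object can hand back and swap:

```cpp
#include <utility>

// All state that defines an instance lives in one plain struct, so it can
// be stored, serialized, or temporarily swapped while the methods stay put.
struct CounterProps {
    int count = 0;
    int step  = 1;
};

class Counter {
public:
    explicit Counter(CounterProps props) : props_(props) {}

    void tick() { props_.count += props_.step; }
    int  count() const { return props_.count; }

    // Swap in another state; returns the previous one so the caller can
    // restore it later or keep it as a snapshot.
    CounterProps swapProps(CounterProps next) {
        std::swap(props_, next);
        return next; // after the swap, `next` holds the old props
    }

private:
    CounterProps props_;
};
```

To simulate an operation without touching the real state: save the result of swapProps(temporary), run the usual methods against the temporary props, then swapProps(saved) to restore.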

There are tons of use cases, mostly having to do with reusing the same subsystem ‘engine’, in its current configuration and with its dependencies, to a) manage some temporary state, b) simulate some operation without affecting the actual current state, or c) implement a material-and-tool metaphor where you use a tool to transform the material in some way, etc.

I started to look for a design pattern describing that strategy – having objects with decoupled, swappable state – to see what its limitations, other uses, or possible improvements are, but what I found is that this is most often regarded as an anti-pattern in OOD (it breaks the ‘an object should only manipulate its own data’ idea). It is obviously also not a purely functional approach.

So my question is: given that I haven’t invented the wheel here, is this a recognized pattern? Or an anti-pattern for a really good reason? Or maybe a pattern that can become an anti-pattern when used incorrectly, like Singleton, but that has some well-described merits, limits, and drawbacks?

(For a bit more domain context: I spend most of my time programming custom multimedia websites, mixing an event-driven architecture with 60fps updates and lots of non-standard UI solutions that often require some improvisation and are subject to late-in-the-process experimentation, so the overall architecture needs to be very open to massive changes of requirements in the middle of a project – which means I can (and do) live with a bit of punk-rock programming patterns.)

EDIT: To be more specific, I usually need the ‘tool’ object to have some state, configuration, and dependencies of its own, and that’s what keeps me from using just pure functions or static methods.