Make a class depend on its own ports instead of injecting dependency interfaces

I have an idea about dependency management in a Spring IoC environment that seems better than the typical approach, but I am not sure, because I can’t find any references or examples out there that talk about it, even though the idea is very simple. I will explain the idea by comparing it to the typical approach. Please help me see whether the idea is valid, whether it has a formal theoretical name, and whether there are books that already discuss it.

Typical Approach

In a typical Spring application, when class A has class B as a dependency, we inject the interface of B into A. For example, let’s say we have a class ProductManager that depends on ProductService.

class ProductManager {
   private ProductService productService;

   public ProductManager(ProductService productService) {
      this.productService = productService;
   }

   public void foo() {
      ... // some logic
      // somewhere in this method
      String result = productService.bar1();
      ... // some logic
   }
}

interface ProductService {
   String bar1();
   void bar2();
}

class ProductServiceImpl implements ProductService {
   @Override
   public String bar1() { ... }

   @Override
   public void bar2() { ... }
}

@Configuration
public class ApplicationConfiguration {
   @Bean
   public ProductManager productManager(ProductService productService) {
      return new ProductManager(productService);
   }
}
@RunWith(SpringRunner.class)
@SpringBootTest
class ProductManagerTest {
   @Autowired
   private ProductManager classUnderTest;

   @MockBean
   private ProductService productService;

   @Test
   public void testFoo() {
      when(productService.bar1()).thenReturn("bar");
      // do the test
   }
}

In this approach, ProductManager directly depends on ProductService. This has some problems, which I will explain in the comparison below.

My Idea

I want ProductManager and ProductService to not know about each other at all. ProductManager depends only on a Port that it defines itself. We plug ProductService#bar1() into the port in the configuration class.

interface ProductManagerPort {
   String baz();
}

class ProductManager {
   private ProductManagerPort port;

   public ProductManager(ProductManagerPort port) {
      this.port = port;
   }

   public void foo() {
      ... // some logic
      // somewhere in this method
      String result = port.baz();
      ... // some logic
   }
}
// Notice we don't use an interface anymore
class ProductService {
   public String bar1() { ... }
   public void bar2() { ... }
}

@Configuration
public class ApplicationConfiguration {
   @Bean
   public ProductManager productManager(ProductService productService) {
      // in a more complex case, we can create an adaptor class instead of using a lambda
      return new ProductManager(() -> productService.bar1());
   }
}
class ProductManagerTest {
   @Test
   public void testFoo() {
      ProductManager classUnderTest = new ProductManager(this::mockBaz);
      // do the test
   }

   private String mockBaz() { return "bar"; }
}

The Differences

To explain why I came up with the idea, I will compare the two approaches on the following topics.

Dependency Decoupling

In the typical approach, ProductManager depends tightly on ProductService. Even though the dependency is bound via an interface, that only hides the implementation details of ProductService from ProductManager; ProductManager still has to be concerned with how ProductService works in general.

Suppose you assign two developers, one to each class: John writes ProductManager and Mary takes care of ProductService. Both have to agree on the interface before doing their own work. John has some assumption in mind about how ProductService#bar1() works and writes ProductManager#foo() based on that assumption. When Mary finds something new during implementation that affects the agreement, she notifies John, and John has to rework his code to support the new assumption.

In the My Idea approach, ProductManager and ProductService are completely independent. ProductManager declares its own Port. John can capture his assumption in the Port without having to fear that Mary will find some problem while implementing ProductService. Mary, in turn, does not have to fear that her design changes will bother anyone.

To integrate ProductService into ProductManager, we can create a ProductServiceToProductManagerPortAdaptor that implements ProductManagerPort. If there is a mismatch between John’s and Mary’s assumptions, it is reconciled in the adaptor. In the example code above, I use a lambda as the adaptor to show how simple the adaptor can be.
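For illustration, a minimal sketch of what such an adaptor could look like (the conversion logic here is hypothetical; in this trivial case it just delegates to bar1()):

class ProductServiceToProductManagerPortAdaptor implements ProductManagerPort {
   private final ProductService productService;

   ProductServiceToProductManagerPortAdaptor(ProductService productService) {
      this.productService = productService;
   }

   @Override
   public String baz() {
      // any mismatch between John's and Mary's assumptions
      // (naming, formatting, error handling) would be reconciled here
      return productService.bar1();
   }
}

The configuration class would then return new ProductManager(new ProductServiceToProductManagerPortAdaptor(productService)) instead of using the lambda.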

Single Implementation Anti-Pattern

The typical approach requires us to make ProductService an interface and put the implementation in ProductServiceImpl. We are forced to do this only to avoid wiring dependencies to concrete classes.

In the My Idea approach, we are free from that limitation because the wiring is done through the class’s own ports. We can write straightforward code using concrete classes and reserve interfaces for where a true abstraction is really needed.

Interface Segregation Principle

The interface-segregation principle (ISP) states that no client should be forced to depend on methods it does not use.

Wiring component to component makes it easy to violate this principle. For example, ProductManager only depends on ProductService#bar1(), but it is forced to know about ProductService#bar2() just because the two methods are packed into the same component. You have to read the implementation to be sure whether ProductService#bar2() is used in ProductManager or not.

When writing tests in the typical approach, you are pushed toward a mocking framework to avoid having to stub the unused method ProductService#bar2(). In the My Idea approach, you mock only as much as ProductManager actually needs, without a mocking framework, as sketched below.
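To make the contrast concrete, here is a rough sketch of hand-written test doubles in each approach (FakeProductService and ProductManagerPortTest are hypothetical names used only for this illustration):

// Typical approach: a hand-written fake must implement the whole ProductService
// interface, including bar2(), even though the test never calls it.
class FakeProductService implements ProductService {
   @Override
   public String bar1() { return "bar"; }

   @Override
   public void bar2() { throw new UnsupportedOperationException("not used in this test"); }
}

// My Idea approach: the test supplies only the single method the port needs.
class ProductManagerPortTest {
   @Test
   public void testFoo() {
      ProductManager classUnderTest = new ProductManager(() -> "bar");
      // do the test
   }
}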

Question

I want to encourage my team to use this pattern in our next project, but I cannot find any references to support the idea. I am afraid that someone has already considered it and rejected it due to some limitation.

Please share if you have experience using this pattern in your applications, or if there is a standard concept for it. I have looked into Hexagonal Architecture, but it seems to be more about system architecture than a class-level coding pattern.

Passing uint8_t instead of uint32_t into a function (in C)

While going over some code written in C, I stumbled upon the following:

// we have a parameter
uint8_t num_8 = 4;

// we have a function
void doSomething(uint32_t num_32) { ... }

// we call the function, passing the parameter
doSomething(num_8);

I have trouble understanding whether this is a correct function call or not. Is this a cast, or just a bug?

In general, I know that in C / C++ only a copy of the variable is passed into the function; however, I am not sure what is actually happening here. Does the compiler allocate 32 bits on the stack for the passed parameter, zero all the bits, and only then copy the parameter into them?

Could anyone point me to a resource explaining the mechanics behind parameter passing?

Why did early telephones use a rotary dial instead of 10 individual buttons?

I was watching a video of little kids trying to figure out how to use a rotary phone, and it was not immediately clear to any of them how the rotary mechanism was supposed to work. That got me thinking: why was the rotary mechanism used in phones at all? It seems like individual buttons would be more intuitive for everyone and might even have fewer mechanical problems than a mechanism that has to rotate thousands and thousands of times over its lifetime. Why was that design choice made, why were rotary phones popular, and why did they stick around so long even after phones with buttons came on the market?

edit: The first touch-tone phone was introduced by AT&T in 1963. Even though integrated circuits weren’t practical for commercial use at that time, apparently transistors were:

By the early 1960s, low-cost transistors and associated circuit components made the introduction of touch-tone into home telephones possible. Extensive human factors tests determined the position of the buttons to limit errors and increase dialing speed even further. The first commercial touch-tone phones were a big hit in their preview at the 1962 Seattle World’s Fair. (1)

The fact that Bell Labs had invented the transistor probably helped that process along. And I understand that the change in how telephone networks worked (from pulse dialing to DTMF) drove a change in the electrical design of phones. But while transistors let them make that protocol change, was it really impossible to implement pulse dialing with buttons but without transistors? From Wikipedia:

In the 1950s, AT&T conducted extensive studies of product engineering and efficiency and concluded that push-button dialing was preferable to rotary dialing (2).

That suggests the designers didn’t know that buttons were better than a dial (if they did know, why would they have done extensive studies?). It also suggests that it WAS possible to make a phone with buttons. They would have had to build button prototypes for their studies, right?

(1) http://www.corp.att.com/attlabs/reputation/timeline/64touch.html

(2) https://en.wikipedia.org/wiki/Push-button_telephone

Does a spell cast instead of an opportunity attack still observe the timing of an opportunity attack?

The War Caster feat states that

When a hostile creature’s movement provokes an opportunity attack from you, you can use your reaction to cast a spell at the creature, instead of making an opportunity attack.

The rules of the opportunity attack state that

The attack occurs right before the creature leaves your reach.

Does the former override the latter? That is, does the timing given in the opportunity attack rules still hold even when the opportunity attack itself never actually takes place?

Xanathar’s Guide to Everything states that

If you’re unsure when a reaction occurs in relation to its trigger, here’s the rule: the reaction happens after its trigger completes, unless the description of the reaction explicitly says otherwise.

So, given this information, can I perform the action described in "At what point in a creature's movement does an opportunity attack take place?"

Security risks in confirming the username instead of confirming the username and password combination [duplicate]


Most login pages, like Google, Outlook and Yahoo!, confirm the username first and then ask for the password, instead of confirming the username and password combination together. Isn’t the former practice less safe, since an intruder can guess the username first and then guess the password, whereas in the latter case the intruder has to guess both at once?

Also, is there a website where I can find the industry standards for login flow?

Computer unable to boot to Windows 7 Home Premium, and instead opens up Startup Repair

Every time I boot my Dell PC it opens Startup Repair, which can’t repair the computer. I don’t know how to fix it. I get the following problem signatures:

Problem Event name: StartupRepairOffline

Problem Signature 01: 6.1.7600.16385

Problem Signature 02: 6.1.7600.16385

Problem Signature 03: unknown

Problem Signature 04: 21200281

Problem Signature 05: AutoFailover

Problem Signature 06: 12

Problem Signature 07: NoRootCause

OS Version: 6.1.6700.2.1.0.256.1

Why are datatypes marked as thread-safe instead of procedures?

In Rust, the Send (or Sync) marker traits are used to indicate whether a value of a type (or a reference to it) can be worked on in a threaded context.

However, thread safety is usually considered an attribute of a function or a procedure, as frequently seen in C function man pages (e.g. man 3 rand).

So, why is Rust designed to apply such attributes to datatypes instead of functions? For example, something like:

struct Foo { ... }

unsafe sync fn thread_safe_fn(foo: &Foo) { ... }

This way, any type could be used anywhere, but only sync functions could operate on shared data, which would make it possible to have, for example, a single Rc with both atomic (sync) and non-atomic (!sync) operations defined on it.