Vue.js Two-Way Communication Using Objects As Props – Is this an Anti-Pattern?

Typically the way to communicate between parent and child components in Vue.js is to pass the child a prop and emit an event from the child, which the parent listens to.

However, if you pass an object to a child as a prop, updating the object in the child will also update it in the parent, because objects are passed by reference rather than copied.

This basically provides two-way communication in a much simpler way; however, I never see this method referenced in any official tutorials. That suggests it may be an anti-pattern (you aren't supposed to modify props in the child), but if it is, why? It achieves what I want. What could be the consequences of communicating this way?

Selecting objects to maximize value while under multiple constraints

I think it will be best to explain the problem first and then explain what I am thinking:

I have a dataset similar to the following:

fruit,calories,cost
apple,100,1
apple,200,1.5
apple,150,2
pear,300,3
pear,100,.5
pear,250,2
orange,100,1
orange,120,1
orange,400,2

And I am trying to maximize calories while keeping my cost within a certain range. At this point it is just a knapsack problem, but what if I also have to have exactly 1 apple, 1 pear, and 2 oranges, or some other arbitrary set of fruit counts?

I can’t really wrap my head around how this should work, given the extra constraint. I’ve thought about trying to merge my cost and calories into one metric somehow, but I am pretty sure that can’t work, since it loses the distinction between the quantity I am maximizing and the one I am constraining.

My most recent thought is to keep track of the fruit counts in a list, and once the proper amount of a fruit has been reached, skip to the next fruit. I am thinking about this as a knapsack with three constraints: weight, value, and size, where size is essentially the fruit count.
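Your three-constraint intuition is close: this is a grouped (multiple-choice) knapsack. One sketch, under the assumption that you must pick an exact count per fruit: enumerate every way to pick the required number of each fruit, then fold the groups together with a cost-to-best-calories table (`best_selection`, the required counts, and the budget of 5.0 are my own illustration):

```python
from itertools import combinations

# (calories, cost) options per fruit type, from the dataset above.
items = {
    'apple':  [(100, 1.0), (200, 1.5), (150, 2.0)],
    'pear':   [(300, 3.0), (100, 0.5), (250, 2.0)],
    'orange': [(100, 1.0), (120, 1.0), (400, 2.0)],
}
required = {'apple': 1, 'pear': 1, 'orange': 2}   # exact counts to buy
budget = 5.0

def best_selection(items, required, budget):
    # states: total cost spent so far -> best calories achievable at that cost
    states = {0.0: 0}
    for fruit, options in items.items():
        k = required[fruit]
        # Every way to pick exactly k distinct options of this fruit.
        picks = [(sum(c for c, _ in combo), sum(p for _, p in combo))
                 for combo in combinations(options, k)]
        new_states = {}
        for spent, cal in states.items():
            for pick_cal, pick_cost in picks:
                cost = round(spent + pick_cost, 2)
                if cost <= budget and new_states.get(cost, -1) < cal + pick_cal:
                    new_states[cost] = cal + pick_cal
        states = new_states
    return max(states.values()) if states else None
```

Enumerating combinations per group is fine when the per-fruit counts are small; for larger counts each group can itself be built with an inner DP.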

Hoping someone on here can tell me if I am heading in the right direction or if there is an algorithm that does this that I can look into.

Thank you!

Are Result objects a cleaner way to handle failure than exceptions? [duplicate]

This question already has an answer here:

  • Are error variables an anti-pattern or good design? 11 answers

I was watching the following video by Vladimir Khorikov, “Refactoring Away from Exceptions” (pluralsight.com – Applying Functional Principles in C# – Refactoring Away from Exceptions), which recommends using a Result object instead of exceptions. You can also find a blog post about it here: enterprisecraftsmanship.com – Functional C#: Handling failures, input errors

To summarize, the recommendation is to prefer returning a Result object over throwing an exception. Exceptions should be used to signal bugs only. The arguments for this approach are the following:

  • Methods that throw exceptions are not “honest”. You can’t tell from a method’s signature whether it is expected to fail.
  • Exception handling adds a lot of boilerplate code.
  • When exceptions are used to control flow, they have “goto” semantics, letting you jump to a specific line of code.

On the other hand, return values can be ignored (at least in C#), whereas exceptions cannot.

Is it a good idea to refactor an existing enterprise application in this direction? Or is a less radical approach better? (I believe it certainly makes sense to avoid vexing exceptions by using return types for methods like ValidateUserInput(string input).)

Note that Are error variables an anti-pattern or good design? is a similar question. The difference is that I am not talking about “errors by magic values” (returning an error code or, even worse, null), which is clearly an anti-pattern. I am talking about the pattern presented by Vladimir Khorikov, which doesn’t have the same drawbacks as returning a primitive error code. (For example, Result objects carry an error message, as exceptions do.)
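To make the trade-off concrete, here is a minimal sketch of the Result shape being discussed, written in Python for brevity (the `Result` class and `validate_user_input` are my own illustration, not Khorikov's API):

```python
class Result:
    """Success/failure wrapper that carries an error message, like an exception would."""
    def __init__(self, ok, value=None, error=None):
        self.ok = ok
        self.value = value
        self.error = error

    @classmethod
    def success(cls, value=None):
        return cls(True, value=value)

    @classmethod
    def failure(cls, error):
        return cls(False, error=error)


def validate_user_input(text):
    # Expected failure: visible in the return type, not hidden in a throw.
    if not text.strip():
        return Result.failure("input must not be empty")
    return Result.success(text.strip())
```

The “honesty” argument is exactly this: a caller of `validate_user_input` must confront `Result.ok` to get at the value, whereas an exception-based version looks infallible at the call site.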

SQLAlchemy with Plain Old Python Objects (POPO)

In SQLAlchemy, a model’s attributes are Column objects, which contain metadata about the table in the database, such as:

id = Column(Integer, primary_key=True, autoincrement=True, nullable=False)

When I used Hibernate, we had DTO classes (a.k.a. POJOs) which contained only the variables required to represent the object.

QUESTION: Should this pattern be used with SQLAlchemy? The DTO objects will be passed and eventually serialized in my API.
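A minimal sketch of that split (`FruitDTO` and `to_dto` are hypothetical names; the SQLAlchemy model is shown only in a comment so the snippet stays dependency-free): the mapped class keeps the Column metadata, while a plain dataclass crosses the API boundary:

```python
from dataclasses import dataclass, asdict
from types import SimpleNamespace

# Assume a SQLAlchemy model along the lines of:
#   class Fruit(Base):
#       __tablename__ = 'fruit'
#       id = Column(Integer, primary_key=True, autoincrement=True, nullable=False)
#       name = Column(String, nullable=False)

@dataclass
class FruitDTO:
    """Plain object handed to the API layer; no Column metadata, trivially serializable."""
    id: int
    name: str

def to_dto(row):
    # Works for any object exposing the mapped attributes (e.g. a Fruit row).
    return FruitDTO(id=row.id, name=row.name)

def to_json_dict(dto):
    return asdict(dto)
```

The mapping function is the only place that knows both shapes, so the API never serializes ORM internals by accident.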


What is a good Object-Oriented design for geometry objects when building libraries dealing with geometry operations?

I am trying to design an object-oriented library to handle geometric calculations. I am deliberately pushing toward being “tightly” object-oriented and applying the relevant “best practices”. I know there is no reason to be dogmatic about patterns; I am just trying to squeeze out every possibility before concluding there is no different way to do what I am about to ask.

Consider trying to design a path, composed of segments. I consider having a common interface for segments, which offers a method to calculate points of intersection with another segment. In short, the tentative signature of such a method might look like:

abstract class Segment {
    ...

    Point[] Intersection(Segment other);

    ...
}

However, when implementing such a method, it might be necessary to check what actual implementation lies behind the “other” Segment. This can be done through run-time type checks, given that the language supports it, otherwise, I have to use some kind of enum to differentiate between segments and, potentially, cast the object to call corresponding methods. In any case, I cannot “extract” some common interface for this kind of design.

I have considered “forcibly” establishing a base assumption that all path segments boil down to sequences of points, and unifying the algorithmic intersection process as always being a line-to-line intersection between all sub-segments, but this would rob the design of a very significant (in my opinion) optimization opportunity. Considering the ubiquity and “temporal density” of the geometry-based operations the library will be built to support, it is very important, in terms of performance, to take advantage of special shapes having “closed forms” for calculating intersections between them (such as a circular arc with a line), instead of testing a multitude of small line-segment pairs to identify intersection points.

Apart from that, if I make the simplifying assumption of paths consisting of point sequences, I will have to make another relatively pervasive (for such a library) design choice: that of the point density used to trace, for example, the various segment types. This would be, in my opinion, a reasonably architecture-relevant parameter when considering an end result of drawing, e.g. on-screen, in order to achieve a given level of smoothness. However, I feel this is, conceptually, an unsuitable abstraction for operations between pure geometric abstractions. A circle is not a series of line segments and should not need 5, 10, or 100 coordinate pairs to be represented. It is just a center and a radius.

My question is: is there any other way to be object-oriented when dealing with base classes for geometry entities, or is the “usual” way of doing things an enumeration, with implementations checking the segment type and exploiting specific geometric relations to optimize the procedures?

The reason I am giving so much thought on this is that I might find myself having to implement special segment types, such as, for example, parametric curves, in the future, or simply allow extension of the API outside of the API itself. If I use the enum-based, type-checked everything-with-everything intersection tests (and do so also for other spatial operations between segments besides intersection), “outsider” extensions of the API will have to “register” themselves in the segment-types enumeration, which would necessitate either changing and rebuilding the original API, or providing a mechanism for external registrations. Or simply make a true global capture of all possible segment geometric forms to account for everything.

To make it simple, assume that I implement this only with line segments, and then add a new implementation of a circular arc. I can “teach” the circular arc how to intersect itself with straight line segments (by checking the segment type for “line”, for example), but I cannot “teach” the line segment how to intersect itself with arcs without going back and changing the original library.

I understand that there are methods or techniques to provide all this flexibility (I could make segments register special “injected” intersection methods for specific identifiers, which would be determined by the external API extension objects, so that lines will first check whether the object they intersect with is such a “special” type, or simply make intersection methods virtual, so that the developer trying to extend my API will be able to manually “teach” all existing segment implementations how to intersect themselves with my original objects). All I am asking is, is there any other elegant way to tackle this situation?

The top-voted answer to this question suggests excluding the method entirely and delegating it to a different class. This sounds somewhat counter-intuitive, given that segments do know their own geometries. However, segments do not know other segments’ geometries, so it appears to be a reasonable design decision to “outsource” the intersection method, albeit one that still requires knowing the segment type at run-time, which is at odds with my aim of using the segment interface while staying ignorant of the underlying implementation. Apart from that, I would not resort to empty marker interfaces to differentiate between classes. An external “intersector”-like class does look interesting, though I would avoid making it static, in order to allow for extensions and potential changes of strategy (different implementations, optimizing for speed, employing snapping, etc.).
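The registration idea from the previous paragraphs can be sketched as a registry-based double dispatch, here in Python (all names are hypothetical, not from any particular library): an external intersector keyed by the pair of concrete types, so new segment types can register routines without modifying the existing classes.

```python
class Intersector:
    """Looks up a closed-form routine by the pair of concrete segment types."""
    def __init__(self):
        self._handlers = {}

    def register(self, type_a, type_b, fn):
        self._handlers[(type_a, type_b)] = fn

    def intersect(self, a, b):
        # Try both argument orders so each pair only registers once.
        fn = self._handlers.get((type(a), type(b)))
        if fn:
            return fn(a, b)
        fn = self._handlers.get((type(b), type(a)))
        if fn:
            return fn(b, a)
        raise NotImplementedError(
            f"no intersection routine for {type(a).__name__}/{type(b).__name__}")


class Line:
    def __init__(self, p1, p2):
        self.p1, self.p2 = p1, p2

class Arc:
    def __init__(self, center, radius):
        self.center, self.radius = center, radius


inter = Intersector()
# Placeholder routines standing in for the real closed-form math.
inter.register(Line, Line, lambda a, b: [])
# An Arc author can add this pairing later without touching Line:
inter.register(Line, Arc, lambda line, arc: [])
```

Because the instance is not static, you can swap in a differently-populated registry (snapping, tolerance-aware, etc.) without changing the segment classes themselves.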

Need some design advice on my ORM for Immutable objects & ref passing

I’m adding immutable-object support to my micro ORM, “Symbiotic”.

In the case of a create, I need to pass back a newly created version of the value passed in, because the object is immutable and I need to update the newly generated identity. For mutable values I just set the identity property, but obviously I can’t do that here. So I’m thinking of adding a new method called CreateImmutable() that takes the value by ref and replaces it with a newly created immutable object carrying the new identity value (and possibly a changed RowVersion property). The current method’s return value is the changed record count, so I want to leave that as is.

I know I could take the easy route and force private property setters, but I would rather support pure immutable types.

Does anyone have any thoughts, or see potential unforeseen issues with my approach?

current method:

public int Create(object value) 

proposed new method:

public int CreateImmutable(ref object value) 
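One alternative worth weighing against the ref parameter is returning the new instance (or a small result pairing the change count with the new value), leaving the caller's reference untouched. A sketch of the shape in Python (`Order`, `create`, and the generated key 42 are made up for illustration; Symbiotic itself is not shown):

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass(frozen=True)
class Order:
    id: Optional[int]          # None until the database assigns it
    total: float
    row_version: int = 0

def create(order):
    # Hypothetical persistence step: pretend the database generated the key.
    new_id = 42
    # Return a fresh immutable instance instead of mutating through a ref.
    return replace(order, id=new_id, row_version=order.row_version + 1)

original = Order(id=None, total=9.99)
saved = create(original)       # original is untouched; saved carries the identity
```

In C# this could translate to returning a `(int count, T value)` tuple, which keeps the signature honest without `ref` and makes it harder to accidentally keep using the stale pre-insert value.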

Object attributes as special parameter objects in python

I am writing a library that will be used with a GUI, and I would like to build an interface where the user can see and/or change most of an object’s parameters, and also write some of these parameters into separate tables of an SQL database.

Instead of hard-coding GUI representations for each object, I want to create them from the objects’ parameters, so each parameter should have a corresponding type of GUI element associated with it. In the same manner, I want to write parameters to SQL tables based on each parameter’s sql_tables property.

The best I have come up with so far is the following:

class Param:
    def __init__(self, name, value, gui_element, sql_tables):
        self.name = name
        self.value = value
        self.gui_element = gui_element
        # Names of the tables into which the parameter will be written
        self.sql_tables = sql_tables


class FruitParams:
    def __init__(self, fruit_type, init_amount, sold_amount):
        self.fruit_type = Param('Fruit type', fruit_type, 'Label', ('Fruits', 'Inventory'))
        self.init_amount = Param('Amount harvested', init_amount, 'EditableLabel', ('Inventory',))
        self.sold_amount = Param('Amount sold', sold_amount, 'Slider', ('Inventory',))


class Fruit:
    def __init__(self, params):
        self.params = params


apple_params = FruitParams('Apple', 100, 0)
apple = Fruit(apple_params)
# proceed to building GUI from apple.params ...

There will be dozens of parameters for each object. Is there a better approach than the one I am considering? It seems like a fairly common problem, so I don’t want to reinvent the wheel. Also, I already have a big chunk of the library written, so if I change all parameters into the proposed form, I would have to add .value to every existing usage of each parameter, which I would prefer to avoid.
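One way to avoid sprinkling .value over existing code is to make the parameter a descriptor: instance access yields the raw value, while the metadata stays reachable on the class. A sketch (`ParamDescriptor` and the attribute names are illustrative):

```python
class ParamDescriptor:
    """Descriptor: obj.attr reads/writes the plain value; Cls.attr exposes metadata."""
    def __init__(self, name, gui_element, sql_tables):
        self.name = name
        self.gui_element = gui_element
        self.sql_tables = sql_tables

    def __set_name__(self, owner, attr):
        self._attr = '_' + attr          # per-instance storage slot

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self                  # class access returns the descriptor itself
        return getattr(obj, self._attr)

    def __set__(self, obj, value):
        setattr(obj, self._attr, value)


class Fruit:
    fruit_type = ParamDescriptor('Fruit type', 'Label', ('Fruits', 'Inventory'))
    init_amount = ParamDescriptor('Amount harvested', 'EditableLabel', ('Inventory',))

    def __init__(self, fruit_type, init_amount):
        self.fruit_type = fruit_type
        self.init_amount = init_amount


apple = Fruit('Apple', 100)
# apple.init_amount is the plain int; Fruit.init_amount.gui_element is the metadata.
```

A GUI or SQL writer can then iterate the class dict for `ParamDescriptor` instances while all existing call sites keep reading plain values, which sidesteps the mass .value migration.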

Knapsack problem with specified amount of objects

Suppose I need exactly $X$ flowerpots.

I have $Y$ flowerpots to choose from, and $Y > X$.

Each of the $Y$ flowerpots has a cost and a capacity. I have a fixed budget to buy flowerpots. The budget and the cost and capacity of each flowerpot are all strictly positive integers. Assume that the total cost of the cheapest $X$ flowerpots does not exceed my budget.

My goal is to maximize total capacity subject to not exceeding my budget (I can spend less, but not more).

If I didn’t need exactly $X$ flowerpots, this would be the standard 0/1 knapsack problem. However, I need exactly $X$ flowerpots.

What does this problem become, and what algorithm can I use to solve this?
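This variant is sometimes called the exact $k$-item knapsack problem: a 0/1 knapsack with one extra DP dimension counting how many items have been chosen. A sketch (`max_capacity` is a hypothetical helper), running in $O(Y \cdot X \cdot \text{budget})$:

```python
def max_capacity(pots, budget, x):
    """pots: list of (cost, capacity). Pick exactly x pots with total cost <= budget;
    return the best total capacity, or None if no such selection exists."""
    NEG = float('-inf')
    # dp[k][b]: best capacity using exactly k pots at total cost exactly b.
    dp = [[NEG] * (budget + 1) for _ in range(x + 1)]
    dp[0][0] = 0
    for cost, cap in pots:
        # Iterate k and b downward so each pot is used at most once (0/1).
        for k in range(x, 0, -1):
            for b in range(budget, cost - 1, -1):
                if dp[k - 1][b - cost] != NEG:
                    dp[k][b] = max(dp[k][b], dp[k - 1][b - cost] + cap)
    best = max(dp[x])          # any total cost up to the budget is allowed
    return best if best != NEG else None
```

The only change from textbook 0/1 knapsack is the `k` dimension; reading the answer from row `dp[x]` enforces "exactly $X$ pots", and the `-inf` sentinel marks unreachable (count, cost) states.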