Given their huge variety, why is it so often concluded that the penalties needed to use a Weapon of Legacy are never worth it?

A common trend when discussing Weapons of Legacy is to compare their benefits to those of normal magic items, notice that they roughly match up, and then conclude that because Weapons of Legacy have extra penalties attached, they are clearly inferior to any comparable normal magic item. However, some of the benefits of some Weapons of Legacy are all but unique (for their level, at least), so how is it so frequently and easily concluded that their extra penalties outweigh their unique benefits? What I’ve just linked does a fairly good job of assessing the value and rarity of those benefits, but despite how often I’ve seen the claim that the penalties of Weapons of Legacy always outweigh their benefits, I’ve never seen any comparable analysis of the penalties themselves, which vary considerably. What property of these penalties is so severe that it allows Weapons of Legacy to be dismissed without any further analysis, despite that variety?

Legacy CE vs current

I recently updated a SQL Server instance to the current version. I’m seeing major performance issues with some queries; for example, this query takes 8 seconds with the legacy CE:

But 5 minutes with CE 150:

I updated statistics for the whole database and ran a FULLSCAN statistics update on the tables involved in this query.
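One workaround I’m aware of, though I’m not sure it’s the right long-term fix, is pinning the affected query to the legacy CE with a hint, or flipping the whole database back. Sketched below against a made-up table, since the real query is much longer:

```sql
-- Hypothetical table; the real query is more complex.
SELECT o.OrderID, o.Total
FROM dbo.Orders AS o
WHERE o.CustomerID = 42
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));

-- Or, at the database level (affects all queries in the database):
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;
```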

Any idea how I can fix this performance issue?

Thanks in advance!


Security assessment of a legacy SSL/TLS implementation on an IoT device

I am doing a security assessment of the communication security of a legacy IoT device. The objective is to assess and find security gaps in the current design/implementation. The assessment is manual, based primarily on the existing design and code. It covers only the client side on the device; the server is cloud-based. The device uses a GSM module (SIMCom SIM900) and makes HTTPS connections to the server over the internet using GSM AT commands.

Based on my understanding of SSL/TLS, I am considering the following parameters, or criteria, for this assessment:

a. TLS protocol version

b. Cipher suites used

c. Certificate and key management

d. Root CAs installed on device

e. Embedded PKI aspect for device identity management

f. Hardware crypto aspect (SHE/TPM)

Am I approaching this the right way? I realize the list above is not specific to the device’s HW/SW platform but rather generic; then again, I suppose that’s how it should be: the parameter list stays much the same, while the actual assessment against each parameter depends on the security requirements and on aspects like the device’s footprint and platform.

Is the assessment parameter list I am considering good and adequate? I would appreciate your input to validate or correct my approach.
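To make criteria (a) and (b) concrete, here is a minimal sketch (Python, purely illustrative and independent of the SIM900 itself) of how one might enumerate what a TLS stack will accept; on the actual device, the same questions would be answered from the module’s AT command set and firmware documentation:

```python
import ssl

# Build a client context the way a modern TLS client would.
ctx = ssl.create_default_context()

# (a) Protocol version: refuse anything below TLS 1.2.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# (b) Cipher suites: list what the stack would offer, and flag weak ones.
offered = [c["name"] for c in ctx.get_ciphers()]
weak = [name for name in offered
        if "RC4" in name or "3DES" in name or "NULL" in name]

print(f"TLS floor: {ctx.minimum_version.name}")
print(f"{len(offered)} suites offered, {len(weak)} flagged as weak")
```

The same enumeration, done against the legacy stack’s configuration, gives a concrete gap list for criteria (a) and (b).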

How to improve Legacy Form submission (>20 input fields)

The product I am working on has a lot of form inputs on one page. I’d ideally like to bring the entire UX up to 2019 (the application was built in the early 2000s).

One idea I’ve tried is breaking the form down into wizards, which worked in a few cases. But I’m looking for different ideas on how to break it down or make it look better.

The forms I’m talking about are close to how a data-entry person enters product details. So a form includes product name, ID, manufacturer details, sizes, etc.; it gets really long.

Any help would be lovely 🙂

Automating removal of legacy entries in /etc/group

I’m removing entries in /etc/group programmatically.

Because I cannot use grep, cat, or cut for this exercise, I wrote my own program that can read a file and write its contents to stdout. If you can write your solution with grep, awk, sed, cat, echo, etc., I can adapt it.

I have root access and can remove groups manually, but since n groups will contain a ‘+’ character, I need a script that checks for this.

At first I assumed I could prefix any line containing ‘+’ with a #, but I’m now doubting that this is how you programmatically manage /etc/group. I haven’t found good documentation yet and was wondering if someone here might have a better idea on how to disable groups deemed ‘legacy’ via the ‘+’ character.
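In the meantime, here is a sketch of the comment-out approach as a standalone program (assuming that prefixing a line with # is an acceptable way to disable it, which is worth verifying: on systems using compat mode, a leading ‘+’ line in /etc/group has special NIS meaning, so test on a copy first):

```python
import shutil

GROUP_FILE = "/etc/group"  # point this at a copy while testing

def disable_legacy_groups(path):
    """Comment out any group entry whose line contains a '+' character."""
    with open(path) as f:
        lines = f.readlines()
    out = []
    for line in lines:
        if "+" in line and not line.startswith("#"):
            out.append("#" + line)   # disable, but keep the entry for audit
        else:
            out.append(line)
    with open(path, "w") as f:
        f.writelines(out)

# Always work on a backup first:
# shutil.copy2(GROUP_FILE, GROUP_FILE + ".bak")
# disable_legacy_groups(GROUP_FILE)
```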

Where should I start with Integration Tests for Legacy Software in .NET?

Currently, I’m planning a new CI/CD project with Azure DevOps (with Git, already committed) for an old solution containing 17 C# projects.

Technically, we have access to the source code, and we would be required to write all the unit tests (since the system wasn’t built with them); however, as advised in this article:

Integration Testing made Simple for CRUD applications with SqlLocalDB

the best solution is to perform integration tests rather than unit tests, for several reasons:

  • It has been in the market for a considerable amount of time without major changes (a legacy system), more than 5 years.
  • There is not enough documentation of the entire system.
  • The support is limited to minor bugs.
  • It integrates several technologies like C#, SQL, ASP MVC, Console, SAP, etc.
  • Most of the people involved in this project are no longer working here; therefore, knowledge of the business logic is minimal.
  • There would be thousands of cases to evaluate, which means a considerable amount of money and time.

I’d like to know if anyone has related experience or advice on how to perform such tests; what approach should I follow?

In my case, I’d like to focus specifically on the business logic, like CRUD operations, but what should this involve? A parallel database for storing data? Any specific technology: xUnit, NUnit, MSTest? Or how would you handle it?


I see a potential issue with the article above, since it uses SQL LocalDB, and I’ve read that it’s not supported in Azure; probably the same applies to Azure DevOps.
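For concreteness, the shape of test I have in mind (sketched here in Python, with SQLite standing in for a throwaway parallel database; in .NET the equivalent would be xUnit/NUnit against SQL LocalDB or a containerized SQL Server) provisions a fresh schema per test and exercises one CRUD path end to end:

```python
import sqlite3

def make_test_db():
    """Provision a throwaway database with a simplified version of the schema."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
    return conn

def create_product(conn, name):
    cur = conn.execute("INSERT INTO products (name) VALUES (?)", (name,))
    conn.commit()
    return cur.lastrowid

def get_product(conn, product_id):
    row = conn.execute("SELECT name FROM products WHERE id = ?", (product_id,)).fetchone()
    return row[0] if row else None

def test_create_then_read():
    conn = make_test_db()                      # fresh database, no shared state
    pid = create_product(conn, "Widget")
    assert get_product(conn, pid) == "Widget"  # exercise the full CRUD path
    conn.close()
```

The point of the pattern is that each test owns its database, so thousands of cases can run in parallel without stepping on production data.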

Legacy deep-inheritance XML schemas: how to design API patterns that map them to and from flat schemas?

Consider a purely hypothetical legacy proprietary library for XML models, which has some really deep nested inheritance within its corresponding POJOs: 1-10 fields per class, lots of special instance classes that extend archetypal classes, types as wrappers of lists of type instances, and so on. The resulting model looks really pretty, with some dubious performance characteristics, but that’s beside the point.

I want to make this work with the ugly, high-performance flat models that the kids these days (and people who claim not to have a drinking or substance-abuse problem) prefer for some reason. So say my beautiful, shiny model is something like this:

<QueryRequestSubtypeObject>
  <QueryRequestHeaders>
    <QueryReqParams>
      <Param value="4"/>
      <ParamDescWrapper index="12">
        <WrappedParamDesc>Foobar</WrappedParamDesc>
      ...

And the corresponding object as modeled by the vendor instead looks like

{
    paramVal = 4
    paramTypeIndex = 12
    paramDesc = "Foobar"
}

There are also regular updates to this ivory Tower of Babel, as well as updates to the business logic as vendor specs change.

Now, the part where I convert my ancient classic into a teen flick is straightforward enough, however ugly it might be. Something like the snippet below would be used by a query constructor, and that would be enough abstraction for all the business logic involved:

def extractParamVal(queryRequestSubtypeObject):
    return (queryRequestSubtypeObject
            .getQueryRequestHeaders()
            .getQueryReqParams()
            .getParam()
            .getValue())

Alas, it is not that simple. Now I want to convert whatever ugly, flat, subsecond-latency response comes back into our elegant, delicate model (with 5-10 second latency; patience is a virtue, after all!), with some code like this:

queryRequestSubtypeObject = new QueryRequestSubtypeObject()
queryRequestHeaders = new QueryRequestHeaders()
queryReqParams = new QueryReqParams()
queryReqParamList = new ArrayList()
param = new Param()
param.setValue(4)
queryReqParamList.add(param)
queryReqParams.setQueryReqParamList(queryReqParamList)
queryRequestSubtypeObject.setQueryRequestHeaders(queryRequestHeaders)
...

Code like this needs to exist somewhere, somehow, for each and every field that is returned, if anyone is to convert data into this hypothetical format. Some solutions I have tried:

  • External libraries: Libraries like Dozer use reflection, which does not scale well for bulk-mapping massive objects like this. MapStruct et al. use code generation, which does not cope well with deep nesting involving cases like the list wrappers I mentioned.

  • Factory approach: Generic response factories that take a set of transformation functions. The idea is to bury all model-specific implementation under business-logic-based abstractions. In practice this results in some very FAT functions.

  • Chain of responsibility: Methods that handle initialization of each field, other methods that decide what goes where from the vendor response, some other methods that handle creation of a portion of the mapping, some others that handle a sub-group… loooooong chains of responsibility.

Given that all of these approaches resulted in technical nightmares of some sort, is there an established way to handle cases like this? Ideally it would involve minimal non-business-logic abstraction while providing enough granularity to implement updates, and be technically solid as well. Bonus points for the ability to isolate any given component for unit testing, wherever it sits in the model hierarchy, without null pointer exceptions getting thrown somewhere.
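One shape I have not tried and would welcome opinions on is a purely declarative mapping: a single table of (flat field, nested path) pairs plus one generic walker that creates intermediate nodes on demand. A sketch in Python (all names hypothetical, with a plain dict standing in for the generated POJOs):

```python
# Each entry: flat field name -> path of nested attributes to create/set.
MAPPING = {
    "paramVal":       ["QueryRequestHeaders", "QueryReqParams", "Param", "value"],
    "paramTypeIndex": ["QueryRequestHeaders", "QueryReqParams", "ParamDescWrapper", "index"],
    "paramDesc":      ["QueryRequestHeaders", "QueryReqParams", "ParamDescWrapper", "WrappedParamDesc"],
}

class Node(dict):
    """Stand-in for the generated POJOs: a dict with nested Node children."""

def build_nested(flat):
    """Walk the mapping table, creating intermediate nodes only as needed."""
    root = Node()
    for field, path in MAPPING.items():
        if field not in flat:
            continue                   # absent components stay absent, not null
        node = root
        for step in path[:-1]:
            node = node.setdefault(step, Node())
        node[path[-1]] = flat[field]
    return root
```

A vendor spec change would then touch only the MAPPING table, never the walker, and any subtree can be built in isolation for unit tests by passing a partial flat dict.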

As-is analysis of legacy systems

I’ve been tasked with performing an ‘as is’ analysis of a monolithic legacy system in my organisation. I’ve been conducting interviews with the technical team responsible for developing the system, along with other techniques (such as, but not limited to, observing people using the system, surveys, meetings, etc.), and I’ve been creating data flow diagrams (DFDs) to map out the system.

As I’ve never conducted an ‘as is’ analysis of a legacy system before, what is the best way to model the system? I’ve been creating DFDs, although I’m beginning to question whether this is the most appropriate approach.