At which phase of the boot process can one modify the scancode/keycode translation tables of keyboard drivers supporting the Linux input layer API?

I am using keyfuzz to map Alt-Eject to Alt-SysRq on a Mac keyboard (see here). On recent (X)ubuntu releases, though, the preferred way to run the needed command at startup is a systemd service. How early can that service be run? That is, which WantedBy=, After=, Before= and similar directives should I use so that the configuration takes effect and is not overwritten later? Would it also work when booting into rescue mode?

Here is some reference material about the dependencies between the different targets.
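
For concreteness, this is roughly the kind of unit file I have in mind; the ordering directives, device path and scancode/keycode pair are guesses on my part, not a known-working configuration:

# Hypothetical /etc/systemd/system/keyfuzz-mac.service (a sketch, not a tested unit)
[Unit]
Description=Remap Alt-Eject to Alt-SysRq with keyfuzz
# Guess: run once the input device nodes exist, before the graphical session starts.
After=systemd-udev-settle.service
Before=graphical.target

[Service]
Type=oneshot
RemainAfterExit=yes
# Placeholders: the real scancode/keycode pair and device path depend on the keyboard.
ExecStart=/bin/sh -c 'echo "<scancode> <keycode>" | /usr/sbin/keyfuzz -s -d /dev/input/by-id/<keyboard-device>'

[Install]
WantedBy=multi-user.target

In particular I am unsure whether WantedBy=multi-user.target is early enough, and whether something like sysinit.target would be needed for the mapping to also apply in a rescue-mode boot.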

Separating Persistence Layer and Domain Layer

I have been reading a lot about Domain-Driven Design lately and am starting to feel a little more confident than when I first touched the topic. I'm using an ASP.NET Core project with class libraries for:

  • PersistanceLayer
  • DomainLayer(Service)
  • ApplicationLayer
  • UILayer

Let's say I have an entity called “Company”; it will have a model in:

  • PersistanceLayer (Persistence model/PM)
  • DomainLayer (Domain model/DM)
  • Application/UI-Layer (Data transfer object/DTO)

I see many people talk about doing the mapping from PM to DM in the repository, but when some of my models have five hundred properties it is not practical to have a constructor with that many parameters.

I know some of you will say that such a model can be refactored, but I'm writing software for my organization against old systems that have been around for 30 years, so that is not an option at the moment. I would also like domain events to be raised inside the DM, so assembling a “Company” in the service class will trigger events.

What I basically need is a factory function inside my domain model that accepts a persistence model and does not trigger any events (see the sketch below). Is this a code smell or acceptable? And what are the pros and cons of going with this approach?
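
To make it concrete, this is roughly the shape I have in mind; CompanyPersistenceModel and the two properties shown are placeholders for my much larger real models:

using System.Collections.Generic;

// Sketch only: placeholder types standing in for the real (much larger) models.
public class CompanyPersistenceModel
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
    // ... hundreds more properties in the real system
}

public class Company
{
    private readonly List<object> _domainEvents = new();

    public int Id { get; private set; }
    public string Name { get; private set; } = "";

    // Normal path: changing state raises a domain event.
    public void Rename(string newName)
    {
        Name = newName;
        _domainEvents.Add(new { Event = "CompanyRenamed", Id, NewName = newName });
    }

    // Rehydration path: build the aggregate from persistence without raising events.
    public static Company FromPersistence(CompanyPersistenceModel pm)
    {
        return new Company
        {
            Id = pm.Id,
            Name = pm.Name
            // map the remaining properties here (or via a mapper)
        };
    }
}

The idea is that FromPersistence is used only when rehydrating from the repository, while normal behaviour methods such as Rename still raise events.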

Actions including layer fill.

I'm working on Windows, in Photoshop CC 2019.

I'm recording an action that includes a layer fill. I want the action to fill the layer with whatever foreground colour is selected at the time I run it, each time I run it.

But whenever I record the action, it embeds the foreground colour value at recording time (say 100% cyan), so every subsequent run fills with 100% cyan no matter what foreground colour is currently selected.

I know…

Which architecture layer is GraphQL part of?

I'm currently dealing with a GraphQL project that has poor architecture. I read about where to put AuthZ checks, and the advice refers to a “business layer” that is called from a GraphQL resolver.

Up until now I thought GraphQL belonged in the Data Access Layer (or at least JUST BEFORE the DAL), but it now looks like it belongs in the outermost layer, similar to a “controller”.

Which layer does it belong to?
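
For reference, this is the shape I now picture after reading those articles; the types are placeholders of my own and not tied to any particular GraphQL library:

// Sketch only: placeholder types standing in for my real business layer.
public record CompanyDto(string Id, string Name);

public interface ICompanyService            // business layer
{
    CompanyDto GetById(string id);
}

public interface IAuthorizationCheck        // AuthZ check, invoked from the resolver
{
    void EnsureCanRead(string userId, string resource, string id);
}

// The resolver as a thin "controller": it delegates and never touches the DAL directly.
public class CompanyQueryResolver
{
    private readonly ICompanyService _companies;
    private readonly IAuthorizationCheck _authz;

    public CompanyQueryResolver(ICompanyService companies, IAuthorizationCheck authz)
    {
        _companies = companies;
        _authz = authz;
    }

    // A GraphQL engine would invoke something like this for a company(id: ID!) field.
    public CompanyDto GetCompany(string userId, string id)
    {
        _authz.EnsureCanRead(userId, "Company", id);
        return _companies.GetById(id);
    }
}

If that shape is right, the resolver sits in the outermost layer like a controller, and only the business layer behind it talks to the DAL; that is exactly what I would like confirmed.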

Serverless Architecture – Integrating with Data Layer

My question is in the context of serverless architecture (e.g. AWS Lambda) and how one interacts with databases in such a system.

Typically, in a 3-tier architecture, we have a web service that interacts with the database. The idea is to ensure that each database table is owned by exactly one component, so a change to a table does not require changes in multiple places, and the clear sense of ownership makes scaling and security easier to manage.

However, when moving to a serverless architecture this ownership is no longer clear, and exposing a web service just to access a database, then having a Lambda call that web service, does not make sense to me.

I would like to know about the common patterns and practices around this.
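
To make the ownership question concrete, here is a minimal sketch of the option I am weighing, where each function owns its table and reaches it through its own thin data-access abstraction rather than through another web service; the types are placeholders, not the real AWS SDK:

// Sketch only: placeholder types, not the actual Lambda or database SDK.
public record Order(string Id, decimal Total);

// Hypothetical data-access abstraction owned by this one function.
public interface IOrdersTable
{
    Order? Get(string orderId);
}

public class GetOrderFunction
{
    private readonly IOrdersTable _orders;

    public GetOrderFunction(IOrdersTable orders) => _orders = orders;

    // Entry point a Lambda-style runtime would invoke with the request payload.
    public Order? Handle(string orderId) => _orders.Get(orderId);
}

Whether that thin layer should live inside each function, be shared as a library, or sit behind a dedicated data service is exactly the part I am unsure about.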

Ruby Array being passed incorrectly to the JavaScript layer

I have an application where I build an array of elements in my controller. I want to pass this array to a JavaScript view, convert it to JSON, and parse it.

This is my View:

<% content_for :page_meta do %>
    <script>
        const filterItems = "<%= @filter_options %>";
        const productItems = "<%= @mdms_products %>";
    </script>

If I debug @filter_options, I get this in IRB:

@filter_options.class = Array 

And the value:

[{:name=>"Solution Type", :uid=>"application", :component=>"HierarchicalListFilter", :props=>{:rootUrl=>"/insulation/commercial/enclosure/applications", :rootText=>"Enclosure Solutions"}, :values=>[{:name=>"Walls", :slug=>"walls", :children=>[{:name=>"Framed", :slug=>"framed", :children=>[{:name=>"Steel Stud", :slug=>"steel-stud"}, {:name=>"Wood Stud", :slug=>"wood-stud"}]}, {:name=>"Masonry", :slug=>"masonry", :children=>[{:name=>"Concrete Masonry Unit", :slug=>"concrete-masonry-unit"}]}, {:name=>"Concrete", :slug=>"concrete", :children=>[{:name=>"Precast", :slug=>"precast"}, {:name=>"Tilt-up", :slug=>"tilt-up"}, {:name=>"Cast-in-place", :slug=>"cast-in-place"}]}, {:name=>"Metal Building", :slug=>"metal-building"},  

Everything looks perfect, right? But when I inspect the value of productItems in the JavaScript console, I get this weird string:

[{:name=&gt;&quot;Solution Type&quot;, :uid=&gt;&quot;application&quot;, :component=&gt;&quot;HierarchicalListFilter&quot;, :props=&gt;{:rootUrl=&gt;&quot;/insulation/commercial/enclosure/applications&quot;, :rootText=&gt;&quot;Enclosure Solutions&quot;}, :values=&gt;[{:name=&gt;&quot;Walls&quot;, :slug=&gt;&quot;walls&quot;, :children=&gt;[{:name=&gt;&quot;Framed&quot;, :slug=&gt;&quot;framed&quot;, :children=&gt;[{:name=&gt;&quot;Steel Stud&quot;, :slug=&gt;&quot;steel-stud&quot;}, 

And of course, when I try to do JSON.parse(filterItems), I get a parse error.

So, what is the proper way to pass a Ruby Array as JSON to JavaScript?

Add another layer of security to accounts besides the phone number

I recently came across articles in which an attacker went to a telecom company and got another SIM registered with the target's phone number. After doing so, he was able to reset the passwords of almost all of the target's accounts (Gmail, Outlook, etc.) and hijack everything.

This also defeats 2FA (at least with Google), as anybody who takes over my phone number can reset my Gmail password. Once an attacker has access to the Google account, they can go on to gain access to almost every other website where my Gmail address is the verification email. This includes AWS and Google Cloud, where I could be charged significant amounts.

So my first question is: how do I prevent this from happening? Is there any way to tell Google to stop using my phone number as a fallback for verification, or to require something else in addition to the phone number?

Second question: if this does happen, how can I recover my accounts (Google and all the connected accounts) as quickly as possible? The attacker will most likely change all the passwords, and might go one step further and change the verification emails/phone numbers, effectively locking me out of my account. Even if I somehow manage to regain access to Gmail, the attacker could meanwhile hijack all my other accounts tied to Gmail and change the email address on them, preventing me from resetting the passwords there.

Microservice implementation decoupled by a fully asynchronous layer

The IT architect at my company has just presented to us the high-level architecture principles of the new system we plan to work on. Overall it is a backend system exposing APIs for a desktop application (point of sale) on a private network and for a web application on the internet (administration/back-office management).

One of the things that seems weird to me is that all front-end APIs will be decoupled from the real service implementations by a Kafka bus.

Here is a diagram of the general architecture:

[architecture diagram]

The arguments for using the Kafka message broker are:

  1. To separate the entry point into the system from the service implementations
  2. To allow easier scaling of components with the pub/sub pattern
  3. To let the broker act as a persistent message buffer when services cannot process events as fast as they come in

I think it will add some complexity to have all services working in this asynchronous way; the sketch below shows the request/reply plumbing I expect each front-end call would need. What do you think of this kind of architecture?
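
To illustrate what I mean, here is a rough sketch of that plumbing; IMessageBus is a placeholder abstraction of my own, not a specific Kafka client:

using System;
using System.Threading.Tasks;

// Sketch of what "synchronous API in front, Kafka in between" seems to imply:
// the HTTP layer has to publish a command and then wait for a correlated reply.
public interface IMessageBus
{
    Task PublishAsync(string topic, string correlationId, string payload);
    Task<string> WaitForReplyAsync(string replyTopic, string correlationId, TimeSpan timeout);
}

public class SaleEndpoint
{
    private readonly IMessageBus _bus;

    public SaleEndpoint(IMessageBus bus) => _bus = bus;

    // A "synchronous" endpoint that is really asynchronous request/reply over the broker.
    public async Task<string> CreateSale(string salePayload)
    {
        var correlationId = Guid.NewGuid().ToString();
        await _bus.PublishAsync("sales.commands", correlationId, salePayload);

        // The extra moving parts: correlation IDs, reply topics and timeouts for every call.
        return await _bus.WaitForReplyAsync("sales.replies", correlationId, TimeSpan.FromSeconds(5));
    }
}

Every point-of-sale or back-office call would need something like this, which is the complexity I am worried about.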

Keeping the steps of a long-running process in sync, and creating a common layer to avoid code repetition

I have one long-running process wrapped inside a method, and it exists for two different types, as shown below:

  • Type1
  • Type2

Code:

public interface IBaseType
{
    MyResult LongRunningProcess(int jobId, int noOfTimes); // doesn't save the long-running process data in the database; just returns the results to the consumer
    void LongRunningProcess(int noOfTimes);                // saves the results of the long-running process in the database; background job, on demand as well as scheduled
}

public class Type1 : IBaseType
{
    public void LongRunningProcess(int jobId, int noOfTimes)
    {
        try
        {
            // Step 1:
            var type1Manager = new Type1Manager(params);
            for (int i = 0; i < noOfTimes; i++)
            {
                var con = ConnectionFactory.OpenConnection();
                type1Manager.Start(con);
                // Save the results of that processing
            }

            // Step 2:
            IVersioning versioning = new Versioning();
            string version = versioning.GetVersion();
            using (connection = new SqlConnection(connectionString))
            {
                connection.Open();
                using (var transaction = connection.BeginTransaction())
                {
                    try
                    {
                        Repository.UpdateVariantVersioning(connection, transaction, jobId, version);
                        Repository.UpdateCategoryWithVersion(connection, transaction, versioning.Category, version);
                        transaction.Commit();
                    }
                    catch (Exception ex)
                    {
                        transaction.Rollback();
                        // code to delete everything that has been performed in step 1
                        throw ex;
                    }
                }
            }
        }
        catch (Exception ex)
        {
            Repository.UpdateErrorDetails(connectionString, jobId, ex.Message);
        }

        // Step 3: if step 1 and step 2 were successful, mark this job as succeeded, else failed
        // Update the time of the whole process in the table
    }
}

public class Type2 : IBaseType
{
    public void LongRunningProcess(int jobId, int noOfTimes)
    {
        try
        {
            // Step 1:
            var type2Manager = new Type2Manager(params);
            for (int i = 0; i < noOfTimes; i++)
            {
                var con = ConnectionFactory.OpenConnection();
                type2Manager.Start(con);
                // Save the results of that processing
            }

            // Step 2:
            IVersioning versioning = new Versioning();
            string version = versioning.GetVersion();
            using (connection = new SqlConnection(connectionString))
            {
                connection.Open();
                using (var transaction = connection.BeginTransaction())
                {
                    try
                    {
                        Repository.UpdateVariantVersioning(connection, transaction, jobId, version);
                        Repository.UpdateCategoryWithVersion(connection, transaction, versioning.Category, version);
                        transaction.Commit();
                    }
                    catch (Exception ex)
                    {
                        transaction.Rollback();
                        // code to delete everything that has been performed in step 1
                        throw ex;
                    }
                }
            }
        }
        catch (Exception ex)
        {
            Repository.UpdateErrorDetails(connectionString, jobId, ex.Message);
        }

        // Step 3: if step 1 and step 2 were successful, mark this job as succeeded, else failed
        // Update the time of the whole process in the table
    }
}

As you can see, the step 2 and step 3 code is repeated for both types, so I want to remove this code repetition.

Secondly, I want to keep step 1 and step 2 in sync, so that when step 2 fails, whatever has been done during the entire step 1 process is rolled back.

I am a bit unsure about moving the versioning into a base abstract class, because it would then probably be tightly coupled to this long-running process (see the sketch below for the direction I mean). I want to design this so that if I decide to remove versioning tomorrow, it does not break my current design and code.
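
For illustration, this is roughly the template-method direction I was considering; IJobRepository and IVersioning here are simplified placeholders for my real repository and versioning classes:

using System;

// Rough sketch of a template-method base class; placeholder abstractions, not my real code.
public interface IVersioning
{
    string GetVersion();
    string Category { get; }
}

public interface IJobRepository
{
    void SaveVersion(int jobId, string version, string category); // step 2, in one transaction
    void MarkJob(int jobId, bool succeeded);                      // step 3
    void LogError(int jobId, string message);
}

public abstract class LongRunningJobBase
{
    private readonly IJobRepository _repository;
    private readonly IVersioning _versioning;

    protected LongRunningJobBase(IJobRepository repository, IVersioning versioning)
    {
        _repository = repository;
        _versioning = versioning;
    }

    // Step 1 differs per type (Type1Manager vs Type2Manager), so only it is abstract.
    protected abstract void RunStep1(int noOfTimes);

    // Step 1 is not transactional, so the subclass must also know how to undo it.
    protected abstract void UndoStep1();

    public void LongRunningProcess(int jobId, int noOfTimes)
    {
        try
        {
            RunStep1(noOfTimes);
            try
            {
                // Step 2: shared versioning work, committed atomically by the repository.
                _repository.SaveVersion(jobId, _versioning.GetVersion(), _versioning.Category);
            }
            catch
            {
                UndoStep1(); // keep step 1 and step 2 in sync by compensating
                throw;
            }
            _repository.MarkJob(jobId, succeeded: true); // step 3
        }
        catch (Exception ex)
        {
            _repository.LogError(jobId, ex.Message);
            _repository.MarkJob(jobId, succeeded: false);
        }
    }
}

My doubt is whether baking the versioning call into the base class like this couples it too tightly to the process.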

Can anybody please help me with this?

Update: added the versioning code.

interface IVersion
{
    string CreateVersion();
}

public class Version : IVersion
{
    public string Category { get; private set; }
}