PCI Compliance for developers accessing a production database for support

As a developer, when an incident comes in and reaches Tier 3 support (the development team), how can the developers get access to query the production database while remaining PCI compliant? I’m admittedly a newbie when it comes to PCI compliance. Is this just a case of read-only accounts? Is this a case of data masking? Is this a case of having a production copy within production so devs can’t hit a "live" db? What’s the easiest compliant way for developers to be able to perform application incident support in production?

How do I approach building an abstract production rule interpreter for this situation of converting XML to a Python or Java class?

If I am asking in the wrong place, please forgive me and direct me to a more suitable one.

So I have an XML file like this:

<range>
   unconstrained
   <span>
      <rttype>
         String
</range>

<range>
   x type
   <span>
      <rttype>
         int
      <assert>
         $ > 0
</range>

<range>
   Simple class reference
   <span>
      <rttype>
         SimpleClass
</range>

<range>
   Simple class set
   <span>
      <rttype>
         ArrayList<SimpleClass>
</range>

<class>
   Simple class

   <attribute>
      x
      <range>
         x type
   </attribute>

   <attribute>
      state
   </attribute>

   <action>
      initializer
      <guarantees>
         x has been set to zero
      </guarantees>
      <pimaction>
         .@a x @ = 0
      </pimaction>
   </action>

   <action>
      update x
      <parameter>
         new x
         x type
      <guarantees>
         x has been set to new x
      </guarantees>
      <pimaction>
         .@a x @ = @i new x @
      </pimaction>
   </action>

   <state>
      Exists
   </state>

   <state>
      Doesn't exist
   </state>

   <event>
      <<new>>
   </event>

   <event>
      <<destroy>>
   </event>

   <event>
      update
   </event>

   <transition>
      Doesn't exist
      <<new>>
      Exists
      <transitionaction>
         initializer
   </transition>

   <transition>
      Exists
      <<destroy>>
      Doesn't exist
   </transition>

   <transition>
      Exists
      update
      Exists
      <transitionaction>
         update x
   </transition>

I have a Java compiler (let’s call this ToJavaCompiler) that will compile this into a Java class.

And another compiler, also written in Java (let’s call this ToPythonCompiler), that will compile this into a Python class:

class SimpleClass:

    # State Enum Declaration
    # see MMClass.ruleStateEnumDeclaration for implementation

    SimpleClass_states = Enum("SimpleClass_states", "EXISTS DOESNTEXIST")

    # Attribute instance variables
    # see MMClass.ruleAttributeInstVarList for implementation

    _x: int
    _state: SimpleClass_states

    # Class level attribute
    # All class members accessor

    SimpleClassSet: ClassVar[List[SimpleClass]] = []

    # Constructor
    # See MMClass.ruleConstructorOperation
    # See constructEvent.ruleConstructorOperation
    def __init__(self):
        # requires
        #    none
        # guarantees
        #    --> x has been set to zero and state == Exists
        self._initializer()
        self._state = SimpleClass.SimpleClass_states.EXISTS
        SimpleClass.SimpleClassSet.append(self)

    # Attribute getters

    @property
    def x(self) -> int:
        # requires
        #   none
        # guarantees
        #   returns the x
        return self._x

    @property
    def state(self) -> SimpleClass_states:
        # requires
        #   none
        # guarantees
        #   returns the state
        return self._state

    # Pushed events

    def destroy(self) -> None:
        # requires
        #    none
        # guarantees
        #   state was Exists --> state == Doesn't exist
        if self._state == SimpleClass.SimpleClass_states.EXISTS:
            self._state = SimpleClass.SimpleClass_states.DOESNTEXIST
            SimpleClass.SimpleClassSet.remove(self)

    def update(self, new_x: int) -> None:
        # requires
        #    none
        # guarantees
        #   state was Exists --> x has been set to new x
        if self._state == SimpleClass.SimpleClass_states.EXISTS:
            self._update_x(new_x)

    # Private transition actions

    def _initializer(self):
        # requires
        #   none
        # guarantees
        #   x has been set to zero
        self._x = 0

    def _update_x(self, new_x: int):
        # requires
        #   none
        # guarantees
        #   x has been set to new x
        self._x = new_x

The thing is, my production rules need access to instance variable data from the model object they are compiling.

For example, to generate the instance variable declarations I need a production rule written in Java like the one below, which requires access to the underlying model itself via Context.model():

public void ruleAttributeInstVarList() {
    // description
    // this rule emits the set of (private) instance variable declarations, if any
    //
    // Class.#ATTRIBUTE_INST_VAR_LIST -->
    // foreach anAttribute in class
    // anAttribute.#DEFINE_INST_VAR
    //
    // requires
    // none
    // guarantees
    // all attributes of this class have been declared as instance variable of the
    // PIM Overlay run-time type
    if (Context.model().isVerbose()) {
        Context.codeOutput().indent();
        Context.codeOutput().println("# Attribute instance variables");
        Context.codeOutput().println("# see MMClass.ruleAttributeInstVarList for implementation");

        Context.codeOutput().println("");
        if (!attributeSet.isEmpty()) {
            for (Attribute anAttribute : attributeSet) {
                anAttribute.ruleDefineInstVarAsPrivate();
            }
        } else {
            if (Context.model().isVerbose()) {
                Context.codeOutput().indent();
                Context.codeOutput().println("# none");
            }
        }
        Context.codeOutput().indentLess();
    } else {
        for (Attribute anAttribute : attributeSet) {
            anAttribute.ruleDefineInstVarAsPrivate();
        }
    }
    Context.codeOutput().println("");
    Context.codeOutput().println("");
}

I wonder if there’s an easier way to add target languages or frameworks without creating separate codebases per target language.

For example, I now have ToJavaCompiler and ToPythonCompiler as two separate codebases.

So I am asking here whether there’s a way I can create an abstract production rule interpreter that suits my needs. My aim is to ultimately produce model classes in the target language or framework (such as Django or Rails) from a single codebase that allows extensions for different target languages/frameworks.

I am okay with moving away from Java if there’s a better language suited to what I am trying to do.
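One way to avoid a codebase per target, sketched very roughly below, is to keep a single compiler that walks the model and to push every language-specific decision behind an interface, so that adding a target means adding one backend rather than another compiler. Every name in the sketch (LanguageBackend, Attribute, and so on) is hypothetical rather than taken from your ToJavaCompiler/ToPythonCompiler code:

import java.io.PrintWriter;
import java.util.List;

// Every name below is hypothetical; this is a sketch of the
// "one model walk, pluggable per-language emitters" idea only.

// Stand-in for your model's Attribute class.
interface Attribute {
    String name();
    String typeName();
}

// The only part that knows about the target language.
interface LanguageBackend {
    String comment(String text);                          // "# ..." vs "// ..."
    String instVarDeclaration(String name, String type);  // "_x: int" vs "private int x;"
}

class PythonBackend implements LanguageBackend {
    public String comment(String text) { return "# " + text; }
    public String instVarDeclaration(String name, String type) { return "_" + name + ": " + type; }
}

class JavaBackend implements LanguageBackend {
    public String comment(String text) { return "// " + text; }
    public String instVarDeclaration(String name, String type) { return "private " + type + " " + name + ";"; }
}

// The production rule is written once and stays language-neutral.
class AttributeInstVarListRule {
    void apply(List<Attribute> attributeSet, LanguageBackend backend, PrintWriter out) {
        out.println(backend.comment("Attribute instance variables"));
        for (Attribute a : attributeSet) {
            out.println(backend.instVarDeclaration(a.name(), a.typeName()));
        }
        out.println();
    }
}

Taking the same idea further, the per-language backends can be replaced by template sets (tools such as StringTemplate or Xtend are commonly used for this kind of model-to-text generation), which tends to make a Django- or Rails-flavoured target a matter of writing new templates rather than new compiler code.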

Made a mistake with mysql restore on the production site instead of the staging one. Can I use mysql general log to restore?

Made a mistake with a mysql restore on the production site instead of the staging one. Can I use the mysql general log to restore? It seems the general log is all I have. I don’t seem to have a binlog with MariaDB’s Aria storage engine.

What should I do now? Does anyone have experience using the general log to recover a database?

Is EBNF a formal grammar? If yes, how can we generate production rules from an EBNF expression?

According to the Wikipedia definition of EBNF, EBNF is a formal grammar.

My question is: how could I generate production rules based on an EBNF expression?

For example:

Expression:

letter = "A" | "B" | "C" ;

digit = "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9" ;

identifier = letter , { letter | digit | "_" } ;

Generates production rules:

letter ⟶ "A"

letter ⟶ "B"

letter ⟶ "C"

digit ⟶ "0"

digit ⟶ "1"

digit ⟶ "2"

digit ⟶ "3"

digit ⟶ "4"

digit ⟶ "5"

digit ⟶ "6"

digit ⟶ "7"

digit ⟶ "8"

digit ⟶ "9"

identifier ⟶ letter

identifier ⟶ letter noname_nonterminal

noname_nonterminal ⟶ letter

noname_nonterminal ⟶ digit

noname_nonterminal ⟶ "_"
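One common way to expand the repetition braces { ... } is to introduce a fresh recursive nonterminal (the name id_rest below is made up), which also covers identifiers longer than two symbols:

identifier ⟶ letter id_rest

id_rest ⟶ ε

id_rest ⟶ letter id_rest

id_rest ⟶ digit id_rest

id_rest ⟶ "_" id_rest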

Thank you for reading.

How have you secured production data (PII) on non-prod environments?

Data protection laws including GDPR state:

“Personal data shall be obtained only for one or more specified and lawful purposes, and shall not be further processed in any manner incompatible with that purpose or those purposes.”

GDPR stipulates that data should not be used in non-production systems unless it is anonymized or pseudonymized.

Generally speaking, a customer would not expect their information to be used in a test environment or for the purpose of new technology solutions, and hence we can argue about whether we do or do not have a case for legitimate processing of PII in test environments.

I have a requirement: I want to use personally identifiable information (PII) to develop new technology. The data quality is poor, so I need to ingest the PII into an AWS dev environment, clean the data in a dev/test system, and send it to a production environment after I have proved that the data cleansing works. Obfuscating the data in some fashion is not an option, as we need to transform the poor-quality data into good data.

I will encrypt the relevant AWS services using KMS, and data access will be limited to a small group of developers. Data will be deleted at the end of the dev/test period. All AWS services will be tightly controlled via security groups and IAM policies. This seems like an easier option than anonymization or pseudonymization, which are difficult and cumbersome.

Does this seem like a good approach? How have others secured live (PII) data in non-prod environments?

What is the appropriate recommendation concerning creating new indexes on our production database?

We are working on an ERP application with a SQL Server 2008 R2 database at compatibility level 80. I’m working as the SQL Server DBA and I want to do performance tuning on our database, but I’m facing many obstacles: our application may not be compatible with a higher compatibility level, so I can’t use the DMVs that could help me find the most expensive queries running frequently against our production database.

I tried running SQL Server Profiler to extract a workload file and ran this .trc file through the Database Engine Tuning Advisor to explore its recommendations concerning our database, including index creation and SQL Server statistics. I found many opinions saying not to blindly apply DTA recommendations.

I also tried running SQL Server Activity Monitor to discover the most expensive queries and displayed their execution plans, and there too I found recommendations to create non-clustered indexes.

My questions are:

How much can I depend on the DTA or on execution plan recommendations to tune performance?

If I apply these index recommendations and then face a performance regression, can I drop the new indexes easily without any risk? And will they be recreated automatically during an index rebuild operation, or does a rebuild only drop and recreate indexes that already exist?

What are the best practices for creating new indexes?

Given a CFG $G=(V_N, V_T, R, S)$ and one of its nonterminals $v$, determine if there exists a production chain $S \Rightarrow^* v\alpha$?

I am supposed to find an algorithm solving the following problem:

Given a CFG $G=(V_N, V_T, R, S)$ and a nonterminal $v \in V_N$, determine if there exists a production chain $S \Rightarrow^* v\alpha$, where $\alpha \in (V_N \cup V_T)^*$.

Not sure if that’s the right term, but in other words we are trying to check if you can yield $v$ from $S$, the starting symbol.

I don’t know anything about the form of the grammar, and I can’t convert it into Chomsky normal form as that would introduce new nonterminals and possibly remove $v$. Where do I start with this? Any suggestions?
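Not a full answer, but here is a rough sketch of one standard approach: compute the nullable nonterminals first, then do an ordinary reachability search that is only allowed to step into a symbol of a production body when everything to the left of that symbol is nullable. The representation (a map from every nonterminal, including those with no alternatives, to its list of alternatives, with terminals being any symbol that is not a key) is an assumption made up for the sketch:

import java.util.*;

// A rough sketch only; the grammar representation and helper names are
// assumptions made for illustration, not a definitive implementation.
public class LeftReachability {

    // Nonterminals that can derive the empty string, computed as a fixpoint.
    static Set<String> nullableSet(Map<String, List<List<String>>> productions) {
        Set<String> nullable = new HashSet<>();
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Map.Entry<String, List<List<String>>> e : productions.entrySet()) {
                if (nullable.contains(e.getKey())) continue;
                for (List<String> rhs : e.getValue()) {
                    // an alternative derives epsilon if every symbol in it is a nullable nonterminal
                    if (nullable.containsAll(rhs)) {
                        nullable.add(e.getKey());
                        changed = true;
                        break;
                    }
                }
            }
        }
        return nullable;
    }

    // S =>* v alpha holds iff v is reachable from S in the graph with an edge A -> B
    // whenever some production A -> X1 ... Xk B ... has a nullable prefix X1 ... Xk.
    static boolean leftReachable(Map<String, List<List<String>>> productions, String start, String v) {
        Set<String> nullable = nullableSet(productions);
        Deque<String> work = new ArrayDeque<>();
        Set<String> seen = new HashSet<>();
        work.push(start);
        seen.add(start);
        while (!work.isEmpty()) {
            String a = work.pop();
            if (a.equals(v)) return true;
            for (List<String> rhs : productions.getOrDefault(a, Collections.emptyList())) {
                for (String symbol : rhs) {
                    if (!productions.containsKey(symbol)) break;   // a terminal ends the nullable prefix
                    if (seen.add(symbol)) work.push(symbol);       // edge A -> symbol
                    if (!nullable.contains(symbol)) break;         // prefix is no longer nullable
                }
            }
        }
        return false;
    }
}

The intuition is that $S \Rightarrow^* v\alpha$ holds exactly when $v$ can be reached from $S$ through positions whose entire prefix within the production body derives $\varepsilon$, so no conversion to Chomsky normal form is needed.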

Thanks

Is AES the recommended symmetric cipher for production level software?

I was considering developing application-level software for file encryption after stress testing many of my implementations of popular symmetric ciphers. I would love to support multiple algorithms like AES (GCM / CBC / CTR), XChaCha20-Poly1305, etc. But I’m at a crossroads when choosing a well-recommended symmetric cipher for my application when it comes to performance as well as secure implementation. AES has been tried and tested for years and is still the most popular symmetric cipher in the world. It’s well documented and also standardized by various governments (the FIPS standard, for example), but very difficult to implement securely unless hardware acceleration is available. A pure software implementation of AES is slow, and my application should be able to achieve the same performance on many devices. Also, AES-GCM has a maximum message size limit of ~64 GB, and I really want authentication together with encryption!

  • Should I stick with AES and its various modes (e.g. CTR with HMAC-SHA256), or is it time to adopt a newer symmetric cipher like XChaCha20 or XSalsa20 with Poly1305, which is easy and secure to implement in software but unfortunately not standardized yet?

  • Also, why is using a standardized cipher recommended in production-quality software?
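For illustration only, here is a minimal sketch of authenticated encryption with AES-GCM through the standard Java javax.crypto API; the class name and constants are made up, it assumes hardware AES is available for performance, and it deliberately sidesteps key management and nonce persistence:

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;

// A minimal sketch of AES-GCM authenticated encryption via the standard JCA API.
// Key management, nonce storage and per-key message limits are the caller's problem.
public class AesGcmSketch {

    private static final int GCM_TAG_BITS = 128;   // authentication tag length
    private static final int GCM_IV_BYTES = 12;    // recommended nonce size for GCM

    public static byte[] encrypt(SecretKey key, byte[] plaintext, byte[] ivOut) throws Exception {
        new SecureRandom().nextBytes(ivOut);                       // fresh, never-reused nonce
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, ivOut));
        return cipher.doFinal(plaintext);                          // ciphertext || tag
    }

    public static byte[] decrypt(SecretKey key, byte[] iv, byte[] ciphertext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
        return cipher.doFinal(ciphertext);                         // throws AEADBadTagException on tampering
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();

        byte[] iv = new byte[GCM_IV_BYTES];
        byte[] ct = encrypt(key, "hello".getBytes(), iv);
        System.out.println(new String(decrypt(key, iv, ct)));      // prints "hello"
    }
}

Whichever cipher you choose, the ~64 GB per-message GCM bound and the rule that a (key, nonce) pair must never repeat still apply at this layer, which is one reason large-file designs often chunk the input or prefer XChaCha20-Poly1305 with its larger nonce.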

What is a safe way to update production data when it is found to be inconsistent?

I am in a small company and need to fix a bunch of inconsistent data in production. I have written the script to handle the fix.

I understand that writing SQL to fix data is far riskier than fixing normal front-end or back-end code. For front-end or back-end code, we can write thousands of tests to check the result automatically, and even if a little bug slips through, the effect might not be as dramatic.

How do you normally fix data in production when a lot of it is inconsistent? Are there any methods or strategies that can reduce the risk?