Designing Models from complex ERP SQL tables

I have a project coming up where I'm supposed to upgrade an existing Access app to a .NET architecture. I'm wondering how to design object-oriented models from the giant tables and the multiple relationships between them in an ERP-like system.

For example, if I have an Order object, the orders table alone has 20+ columns, many of which would translate to plain properties, but an order also has multiple foreign keys that each map to their own complex objects such as items, customers, suppliers, etc.

Obviously I'm also not super familiar with the database and its tables and fields in detail, so do I just translate every column? Or do I only model the relevant stuff and fix it later if it turns out I need column/property X after all?

Also, since it's an upgrade of an Access app, the queries are already written and it would be silly not to reuse them, so no Entity Framework, I think? I'm also wondering how best to initialize the objects from the database.

What I usually end up with in these bigger systems is something like this to initialize an order completely: get all orders, get all items, get all customers, get all suppliers, then loop through the orders and attach the appropriate item/customer/supplier from the lists.
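To make that concrete, here is a minimal sketch of that hydration pattern, written in Python purely for brevity (the target stack is .NET, and the field and key names below are placeholders, not the real schema): load each table once, index the related rows by id, and hydrating each order becomes a constant-time lookup instead of a scan over the lists.

    # Hypothetical sketch: hydrate orders from already-fetched row lists by
    # indexing the related tables on their primary keys.
    def hydrate_orders(orders, items, customers, suppliers):
        items_by_id = {row["id"]: row for row in items}
        customers_by_id = {row["id"]: row for row in customers}
        suppliers_by_id = {row["id"]: row for row in suppliers}

        for order in orders:
            # one dictionary lookup per foreign key, not a pass over the whole list
            order["item"] = items_by_id.get(order["item_id"])
            order["customer"] = customers_by_id.get(order["customer_id"])
            order["supplier"] = suppliers_by_id.get(order["supplier_id"])
        return orders

The same shape carries over to .NET as Dictionary<TKey, TValue> lookups built once per table load.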

Why do getData() and toArray() on a collection have different behaviour for EAV tables such as Product?

    $collection = $this->productCollectionFactory->create();
    $collection->addAttributeToSelect('*');
  • $collection->getData() -> returns only part of the data (mostly columns from the main table, unless an attribute is explicitly requested via addAttributeToSelect('my_attr_code')).
  • $collection->toArray() -> returns the complete data.

The two methods behave the same on flat tables, so why do they differ on an EAV table such as product?

What are considered best practices when designing data tables?

I’ll soon be tasked with designing data tables with a ton of data. I’d really appreciate it if you guys could point me in the right direction. I’ll share some of the links I’ve found while researching.

Links:

https://medium.com/mission-log/design-better-data-tables-430a30a00d8c

https://medium.muz.li/complex-tables-356826d11861

https://design-nation.icons8.com/intro-to-data-tables-design-349f55861803

https://uxdesign.cc/design-better-data-tables-4ecc99d23356

https://medium.com/nona-web/data-tables-best-practices-f5edbb917823

How to keep track of db tables used in various apps

I have N applications using ORMs, SQL statements and stored procedures to access M tables on a Microsoft SQL Server 2017 instance. Some shared tables are used by several of the apps. Let's say I am forced to alter an existing attribute in one of these shared tables; I do not want to miss updating any application that uses this table/attribute.

What is the best way to keep track of things like that? Is there a best practice? My first thought was a documentation-related solution that has to be maintained manually. Is there a better approach?

States and behaviour for progress bars embedded in tables

These days it is not uncommon for data tables to contain more complex UI elements (i.e. not just data): pills (or tags), call-to-action buttons, icons, and even graphs and charts (e.g. sparklines) may all be embedded.

However, I haven't actually seen the specific behaviour for these embedded UI elements specified in the context of a child element in a table cell.

So the question is, what happens to a progress bar (and other UI elements) when the table row cycles through different states (e.g. hover-over, active, selected, etc.) and how does the styling and behaviour change compared to when they are outside of a table?


A specific example of this is to consider what happens to a table cell containing a progress bar (which is actually not an uncommon thing to see) if it is selected. Should it be:

  1. Unchanged (even though there might be some contrast issues with the table cell’s selected state).
  2. Modified, by making changes to its colour or styling.
  3. Handled by a custom rule in the table's behaviour to accommodate the interaction.

If you can include any screenshots of actual examples of applications (rather than CodePen or design concepts) that would be very useful for illustrating the answer.

Which tables are inserted into when importing products in Magento 2?

I want to know which tables are inserted into when importing products in Magento 2.

Or, what are the main tables that need to be populated just to show products on the home page (don't think big, just enough to display them in the UI)?

I have 1 million records, and I can't upload a file that is larger than 2 MB.

It is not possible for me to split the file and import the pieces.

I can create insert into ... queries from the CSV file if I know the table names and their relations.

If anyone knows, please answer me.

Copy Multiple Excel Tables to VBA Dictionary

I have a number of Excel tables that I want to populate into VBA dictionaries. Most of the code is repetitive but there are some unique things for each table that prevent me from making one general purpose routine and calling it for each table. The basic code:

Public Sub MakeDictionary( _
    ByVal Tbl As ListObject, _
    ByRef Dict As Scripting.Dictionary)

    ' This routine loads an Excel table into a VBA dictionary

    Dim ThisList As TableHandler
    Set ThisList = New TableHandler

    Dim Ary As Variant

    If ThisList.TryReadTable(Tbl, False, Ary) Then
        Set Dict = New Scripting.Dictionary

        pInitialized = True

        Dim I As Long
        For I = 2 To UBound(Ary, 1)
            Dim ThisClass As MyClass
            Set ThisClass = New MyClass

            ThisClass.Field1 = Ary(I, 1)
            ThisClass.Field2 = Ary(I, 2)

            If Dict.Exists(Ary(I, 1)) Then
                MsgBox "Entry already exists: " & Ary(I, 1)
            Else
                Dict.Add Ary(I, 1), ThisClass
            End If
        Next I
    Else
        MsgBox "Error while reading Table"
    End If
End Sub

TableHandler and TryReadTable are a class and a function that read a table into a variant array. Field1 and Field2 are the columns of the Excel table. My tables range from 1 to 8 columns, depending on the table.

I don’t know how to make these lines generic:

            Dim ThisClass As MyClass
            Set ThisClass = New MyClass

            ThisClass.Field1 = Ary(I, 1)
            ThisClass.Field2 = Ary(I, 2)

At this point I have to make 10-12 versions of MakeDictionary where only the class name and class fields are unique. If that's the best way to do it, I can live with it, but I'd like to make my code more general. Any suggestions?
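One possible direction, sketched below in Python only to keep the illustration short rather than as working VBA (in VBA the rough equivalent would be a per-table list of field names combined with CallByName to set properties by name, which is an untested assumption on my part): drive the per-table differences from data, i.e. a list of field names and a key column, instead of hard-coding them in each routine.

    # Hypothetical sketch: one generic loader driven by a field-name list per
    # table, instead of one hand-written MakeDictionary variant per class.
    def make_dictionary(rows, field_names, key_field):
        # rows is a 2-D list with the header in row 0;
        # field_names gives the column order for this particular table.
        result = {}
        for row in rows[1:]:                      # skip the header row
            record = dict(zip(field_names, row))  # map column values to named fields
            key = record[key_field]
            if key in result:
                print("Entry already exists:", key)
            else:
                result[key] = record
        return result

    # Usage: only the field list and key column change per table, not the code.
    orders = make_dictionary(
        [["OrderId", "Customer"], [1001, "Acme"], [1002, "Globex"]],
        field_names=["OrderId", "Customer"],
        key_field="OrderId",
    )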

Which is the better way to create child database tables

Imagine there are two tables, a users table and a comments table. tbl_2 references tbl_1 with a users_id column.

tbl_1: id, name

tbl_2: id, users_id, comment

Now let's say we want to loop through all the comments. Easy enough, but to get the name we also need to loop through all the users inside the first loop.

    @foreach ($comments as $comment)
        {{ $comment->comment }}
        @foreach ($users as $user)
            @if ($user->id === $comment->users_id)
                {{ $user->name }}
            @endif
        @endforeach
    @endforeach

What worries me is: what if we have 10,000 users and there are 20 comments on the page?

Does this mean we are iterating through 10,000 users for every comment, i.e. 200,000 iterations, and that's just for one GET request?

And what if we add 100 concurrent users to the mix?
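For what it is worth, the nested loop in the first example can be avoided without duplicating any data: index the users by id once, and each comment then needs only a single lookup. A minimal sketch of the idea, in Python for brevity (in PHP the same thing would be an associative array keyed by the user id; the sample data is made up):

    # Sample data standing in for the two tables.
    users = [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]
    comments = [
        {"id": 10, "users_id": 2, "comment": "First!"},
        {"id": 11, "users_id": 1, "comment": "Nice post"},
    ]

    # Build the lookup once: roughly len(users) steps in total,
    # not len(users) steps per comment.
    users_by_id = {user["id"]: user for user in users}

    for comment in comments:
        author = users_by_id.get(comment["users_id"])  # O(1) lookup per comment
        name = author["name"] if author else "unknown"
        print(comment["comment"], "-", name)

With 10,000 users and 20 comments that is roughly 10,020 steps instead of 200,000, and in practice a SQL join would push even that work into the database.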

As a second option, we could also store the user's name with the comment when the comment is created.

tbl_1: id, name

tbl_2: id, users_id, name, comment

    @foreach ($comments as $comment)
        {{ $comment->comment }} {{ $comment->name }}
    @endforeach

In this example we can just loop through the comments table, and since the name has been stored with the comment, there is no need to loop through all the users.

This way doesn't seem DRY, though, as data is being duplicated in the child table.

So which is the better way, or what would be some good resources to read?

Input data in data tables

I have to design a data table that is going to be 288 rows by 15 columns. I want to make it easy for users to input data. The data covers one day in 5-minute intervals (i.e. the 288 rows are the 5-minute intervals). As per my research, this is how users input data:

  1. Most users put the same data across columns.
  2. During some critical situations they put similar data for blocks of rows, e.g. the first 60 rows will have the same data, the next 40 rows the same, and so on.
  3. Users sometimes copy and paste data directly from Excel sheets.

I was planning to have a collapsible table where users can select a time duration and enter bulk values that populate that entire duration. But if I do this, copying and pasting from Excel will be an issue.

PostgreSQL query across two tables based on a condition in Python takes too long

I'm writing a Python script that does a database query to get the ids of employees who are in one table but not in another. I'm using the psycopg2 module with Python 2 and PostgreSQL as the database. After querying one table based on a condition, I query a second table for each result row to work out the difference between the tables. The problem is that the full procedure takes a long time. Is there another method or technique that can make it faster? Below is the code I use:

def find_difference_assignment_pls_count(self):
    counter = 0
    emp_ids = []
    self.test_cursor.execute("""Select id, emp_id from test_test
        where flag = true and emp_id is not null
        and ver_id in (select id from test_phases where state = 'test')""")
    matching_records = self.test_cursor.fetchall()

    for match_record in matching_records:
        self.test_cursor.execute("""Select id from test_table
            where test_emp_id = %s and state = 'test_state'""",
            (match_record['emp_id'],))
        result = self.test_cursor.fetchall()

        if result:
            continue
        else:
            emp_ids.append(match_record['emp_id'])
            counter += 1

    print "Employees of test_test not in test_table: ", counter
    return emp_ids

I run these queries on two tables that each have more than 500,000 records, so the performance is really slow.
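For comparison, here is a hedged sketch of pushing the whole comparison into one set-based query instead of one query per row; the table and column names are copied from the snippet above, the same dict-style cursor is assumed, and NOT EXISTS is just one possible anti-join formulation:

    def find_difference_assignment_pls_count(self):
        # Single round trip: let PostgreSQL compute the anti-join instead of
        # issuing one extra query per matching row from Python.
        self.test_cursor.execute("""
            select t.emp_id
            from test_test t
            where t.flag = true
              and t.emp_id is not null
              and t.ver_id in (select id from test_phases where state = 'test')
              and not exists (
                  select 1
                  from test_table x
                  where x.test_emp_id = t.emp_id
                    and x.state = 'test_state'
              )""")
        emp_ids = [row['emp_id'] for row in self.test_cursor.fetchall()]
        print "Employees of test_test not in test_table: ", len(emp_ids)
        return emp_ids

If this is still slow, an index covering test_table (test_emp_id, state) is usually the first thing worth checking.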