Delayed or Asynchronous DML Trigger Execution

I have created a DML trigger (AFTER INSERT, UPDATE, DELETE) on a table.
The trigger's logic takes about 30 seconds to execute.

So even if you change just one row, the statement takes about 30 seconds because of the trigger execution.
A developer asked me, "Is there a chance that the trigger could be a fire-and-forget action?"

I said no, but is that really the case?

Question:

Can a trigger be executed in "asynchronous" mode?
The application updates a couple of rows in a few milliseconds, considers the transaction completed, and the trigger is then silently executed under the hood?

I understand that this does not look good from a consistency point of view, but still, is it possible?

Vectorization vs Asynchronous parallelism

I am taking a course called "Programming for Performance" at my college, and in the first week of the course I came across vectorization and asynchronous parallelism. But I am unable to figure out the relation and the difference between the two of them. In the slides, the professor provided something like this:

[slide comparing vectorization and asynchronous parallelism as finer-grained vs. coarser-grained parallelism]

What does it actually mean that parallelism is finer-grained or coarser-grained?

Here is another instance where he tried to point out the difference: we are given two code examples intended to explain the concept, but I am not able to get it.

[slide: two code examples contrasting vectorization and asynchronous parallelism]

What I got is that in the first one neither vectorization nor asynchronous parallelism is possible, since there is a data dependency, but in the second one vectorization can be done as there is no dependency, while asynchronous parallelism is not achievable, since, say, we have these two instances:

A(1) = A(2) + B(2) | A(2) = A(3) + B(3)

So in this case A(2) may get overwritten by the second instance even before the first instance of the code is executed, hence asynchronous parallelism is not possible.
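To check this reasoning, I wrote a small self-contained TypeScript sketch (the array contents are made up by me, not from the slides); it compares serial order, "vector" semantics (read all right-hand sides first, then write), and an out-of-order execution:

    const n = 6;
    const B = [0, 10, 20, 30, 40, 50];
    const makeA = () => [1, 2, 3, 4, 5, 6];

    // Serial loop A(i) = A(i+1) + B(i+1): iteration i reads the *old* A[i+1],
    // because A[i+1] is only overwritten by the later iteration i+1 (an anti-dependence).
    const serial = makeA();
    for (let i = 0; i < n - 1; i++) serial[i] = serial[i + 1] + B[i + 1];

    // "Vector" semantics: read every right-hand side first, then write all results.
    // The old values of A are used, so the result matches the serial loop.
    const vec = makeA();
    const rhs = Array.from({ length: n - 1 }, (_, i) => vec[i + 1] + B[i + 1]);
    for (let i = 0; i < n - 1; i++) vec[i] = rhs[i];

    // "Asynchronous" (out-of-order) execution: if iteration i+1 runs before iteration i,
    // it overwrites A[i+1] before iteration i has read it, giving a different answer.
    const outOfOrder = makeA();
    for (let i = n - 2; i >= 0; i--) outOfOrder[i] = outOfOrder[i + 1] + B[i + 1];

    console.log(serial);     // [12, 23, 34, 45, 56, 6]
    console.log(vec);        // same as serial
    console.log(outOfOrder); // [156, 146, 126, 96, 56, 6] -- earlier iterations see later writes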

Is my understanding of this right?

When do we use Asynchronous and Synchronous data transmission? [closed]

I'm doing an AS in Computer Science and I've been taught about different types of data transmission; the ones I was taught are serial and parallel, together with asynchronous and synchronous data transmission. My question is: what type of transmission do the two cable types use? (I understand how they work, e.g. parallel cables have n wires for n bits, but I don't know which type the cables use or how it is determined.)

What is a time stamp in asynchronous transmission?

I am new to CS and am learning about asynchronous transmission.

Asynchronous Transmission typically also uses SYNC word/bits to provide occasional time stamp for receiver to synchronise its clock to the transmitter clock.

Could anyone explain more specifically how the time stamp synchronizes the transmitter and receiver clocks to reduce the clock skew between them (for example, the process or mechanism used)?
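My current rough understanding, with made-up numbers, is that the receiver re-aligns its sampling clock on each sync/start element, so the two clocks only need to stay close for the length of one frame rather than the whole transmission. A back-of-the-envelope sketch in TypeScript (all numbers are assumptions of mine):

    // Made-up numbers, purely to illustrate why re-aligning on each sync/start bit
    // keeps the accumulated skew small.
    const clockMismatch = 0.01; // assume the receiver clock runs 1% fast
    const bitsPerFrame = 10;    // e.g. start bit + 8 data bits + stop bit

    // Sampling error accumulated by the end of one frame, in bit periods:
    const driftPerFrame = clockMismatch * bitsPerFrame; // 0.1 bit periods

    // Sampling near the centre of each bit tolerates roughly half a bit period of drift,
    // and the drift is reset at the next sync/start element.
    console.log(driftPerFrame < 0.5
        ? "still sampling inside the correct bit at the end of the frame"
        : "would sample the wrong bit before the next re-sync");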

Thank you.

Why is a threshold determined for Byzantine Fault Tolerance of an “Asynchronous” network? (where it cannot tolerate even one faulty node)

In the following answer (LINK: https://bitcoin.stackexchange.com/a/58908/41513), it is shown that for asynchronous Byzantine agreement:

“we cannot tolerate 1/3 or more of the nodes being dishonest or we lose either safety or liveness.”
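As a side note (my own rewriting, with n the total number of nodes and f the number of Byzantine/faulty ones), I read this threshold as:

    f < \frac{n}{3} \qquad \Longleftrightarrow \qquad n \geq 3f + 1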

For this proof, the following conditions/requirements have been considered:

  1. Our system is asynchronous.
  2. Some participants may be malicious.
  3. We want safety.
  4. We want liveness.

A fundamental question:

Considering the well-known paper titled “Impossibility of Distributed Consensus with One Faulty Process” (LINK: https://apps.dtic.mil/dtic/tr/fulltext/u2/a132503.pdf),

which shows that

no completely asynchronous consensus protocol can tolerate even a single unannounced process death,

can we still assume that the network is asynchronous, given that in that case the network cannot tolerate even one faulty node?

Is PREFETCH an asynchronous operation?

I often hear of prefetching as a technique for speeding up, for example, a sequential memory access pattern. The prefetch should occur sufficiently far ahead in time to hide the latency of the memory access, for example in a loop traversing memory linearly.
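To make "sufficiently far ahead" concrete for myself, here is a back-of-the-envelope sketch in TypeScript with latency numbers I made up (they are assumptions, not measurements):

    // Made-up numbers purely to illustrate how far ahead a prefetch has to be issued
    // so the cache line has arrived by the time the loop reaches it.
    const memoryLatencyCycles = 200;  // assumed main-memory latency
    const cyclesPerIteration = 4;     // assumed work per loop iteration
    const elementsPerCacheLine = 16;  // e.g. 4-byte elements in a 64-byte line

    // Iterations needed to cover the latency, and the same distance in cache lines:
    const iterationsAhead = Math.ceil(memoryLatencyCycles / cyclesPerIteration);  // 50
    const cacheLinesAhead = Math.ceil(iterationsAhead / elementsPerCacheLine);    // 4

    console.log(`prefetch about ${cacheLinesAhead} cache lines ahead of the current index`);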

In the famous paper “What Every Programmer Should Know About Memory” by Ulrich Drepper, it is written:

This prefetching would remove some of the costs of accessing main memory since it happens asynchronously with respect to the execution of the program

(emphasis mine)

I could not find references on Google or Wikipedia corroborating or proving this. Does anyone know where I can find out whether this is true?

The only reasoning I can think of is rhetorical: it must be asynchronous, because otherwise prefetching would offer no benefit to sequential access… unless the execution time of prefetching a cache line were less than that of bypassing the prefetch and loading the same cache line from RAM directly into the cache.

Best practices for asynchronous user experiences?

Can someone suggest some good examples and resources around designing for async user experiences?

I'm working on a Web application that needs to process invoice submissions asynchronously to improve the user experience. We want to allow users to submit an invoice, receive feedback that it has been received, and process it asynchronously on the backend. The current synchronous submission requires a lot of processing and can take a couple of minutes or more due to integration with another system. It's not lost on me that waiting 1-2 minutes in a Web UI is horrible.

Note: we can perform enough validation to reduce the chance of errors on submission, but there will still be a small chance of errors.

I'm looking for real-world examples of asynchronous user experiences where this kind of processing is done well. For example, I know that amazon.com does this for processing orders. You receive an almost immediate response that your order has been received, but that it is pending processing. There is still a slight chance of an out-of-stock issue or some other problem. They communicate that back through notifications via email and in the UI.

Users often submit multiple invoices in the same session, so we want them to be able to submit invoices one after the other quickly, in a way that will:

  • Make it clear that the submission was received, but more processing is required.
  • Surface errors in a clear way
  • Improve the perception of speed
  • Provide a more natural and improved UX workflow.

Any examples or resources specific to this type of UX topic are very much appreciated.

SharePoint REST API: wasn't $.ajax supposed to make asynchronous calls?

I don't understand why my code seems to be running synchronously. At least that is what the attached screenshot shows. Are my assumptions correct?

<script src="../SiteAssets/js/lib/jquery-2.2.0.min.js"></script>  <script type="text/javascript" src="_layouts/15/sp.runtime.js"></script> <script type="text/javascript" src="_layouts/15/sp.js"></script> <script type="text/javascript" src="_layouts/15/sp.RequestExecutor.js"></script>  <script>     var oWeb = "https://xxxxxxxxx.sharepoint.com/sites/edpsaomanoel/";      $  (document).ready(function () {         getListItems();         executorGetListItems();     });      var endpointUrl = "_api/web/lists/getbytitle('acoes')/items/?$  select=Id,Title,DueDate,PercentComplete&$  top=200";     var acceptHeaders = {"Accept": "application/json; odata=minimalmetadata"}      function onFail (data, errorCode, errorMessage) {         console.log("Error:",errorMessage,errorCode,data);     }      function executorGetListItems() {             var executor;         executor = new SP.RequestExecutor(oWeb);          executor.executeAsync({             url: endpointUrl,             method: "GET",             headers: acceptHeaders,             error: onFail,             success: function (data){                 console.log("ok executor",JSON.parse(data.body).value; //data.body.constructor === String                      }         });     }      function getListItems() {             $  .ajax({             url: oWeb + ""+ endpointUrl,             method: "GET",             headers: acceptHeaders,             error: onFail,             success: function (data){                 console.log("ok ajax",data.value); //data.value === Object             }         });     }  </script> 


SPFx: is the @pnp/sp get call asynchronous?

I am new to @pnp/sp and I am having problems with my code:

    public render(): React.ReactElement<IVisualizadorPermisosProps> {
        var arbol = [];
        sp.web.get().then((data) => {
            arbol.push({ name: data.Title, children: [] });
        });

        // Other code

        return (
            <div id="principal">
                <Select options={opciones} onChange={this.onUserSelection} />
                <div id="treeWrapper" style={{ width: '50em', height: '20em' }}>
                    <Tree data={arbol} />
                </div>
            </div>
        );
    }

My problem is that when executing this code, the code inside the sp.web.get() block does not run before the rest of the code, as I intended. For example, the return statement executes before arbol.push({name: data.Title, children:[]});, so I get execution errors because the variable arbol cannot be empty. If I put a breakpoint inside the “get” block, it is reached in the middle of the execution of the other code.

This situation makes me believe that the “get” call is asynchronous.

Is that true? What is the solution to my problem?
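For reference, this is the direction I am considering (a minimal sketch with my own names, assuming a React class component and the same sp.web.get() call as in my snippet): perform the call in componentDidMount, store the result in component state, and render from state once the data has arrived.

    import * as React from 'react';
    import { sp } from '@pnp/sp'; // assuming the same @pnp/sp version/import style as my project

    // Minimal shapes, only for this sketch.
    interface ITreeNode { name: string; children: ITreeNode[]; }
    interface ISketchState { arbol: ITreeNode[]; }

    export default class VisualizadorPermisosSketch extends React.Component<{}, ISketchState> {
      public state: ISketchState = { arbol: [] };

      public componentDidMount(): void {
        // The call is asynchronous: react to it in .then() and update state,
        // which triggers a re-render once the data is available.
        sp.web.get().then((data) => {
          this.setState({ arbol: [{ name: data.Title, children: [] }] });
        });
      }

      public render(): React.ReactElement {
        // Render a placeholder until the asynchronous call has completed.
        if (this.state.arbol.length === 0) {
          return <div>Loading...</div>;
        }
        return (
          <div id="treeWrapper" style={{ width: '50em', height: '20em' }}>
            {/* The <Tree data={this.state.arbol} /> from my snippet would go here. */}
            <pre>{JSON.stringify(this.state.arbol, null, 2)}</pre>
          </div>
        );
      }
    }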

Thanks.

Design question on synchronization of two asynchronous data streams

I have two asynchronous streams, say Trip: {tripId, date, city} and Bill: {billId, tripId, date, amount}. I need to design a system that provides a real-time aggregated view of the following shape: City, TripCount, TotalAmount. Events in the two streams can arrive out of order or be duplicated, but the result needs to be accurate and real-time.

My Solution:

1.) Create two different DB tables, Trip and Bill (indexed on tripId and billId). Read the messages from the streams and persist them in these tables with a status column set to pending. Then write a worker that reads from the Bill table and looks in the Trip table for the record with the given tripId. If the record is found, it updates the aggregated view in a third table (City, TripCount, TotalAmount), and we then change the state of the bill and trip records to processed. A background job running periodically removes all records in the processed state from both the Bill and Trip tables.

The problem I see with the above solution is that the indexes on tripId and billId will become a bottleneck if I remove records at a very high frequency. Apart from that, do you see any other problems with this solution? I have also read on the internet that this is considered a well-known anti-pattern, because I am using the database as a queue.

2.) Here is the other solution: take the data from the streams and persist it in the Trip and Bill tables (for auditing and to avoid duplicates). To store the trip data temporarily, use a fast distributed key-value store; I am taking Redis for this purpose. So, after writing trip data to the DB, I write the same data to the cache with tripId as the key and the record as the value. Then I put the bill data in a queue. Workers read from the queue and look up the tripId in the cache. If the tripId is present, the worker reads the data from the cache, updates the aggregated view, deletes the tripId from the cache, and removes the bill message from the queue. If the tripId is not found in the cache, the worker puts the same message back in the queue.

To avoid duplicates, the insertion will fail if we try to insert the same tripId or billId into the tables. When the insertion fails, I do not put the message in the queue or the cache.
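To make solution 2 concrete, here is a minimal self-contained sketch of the worker logic in TypeScript. The Map and the in-memory array are only stand-ins for the Redis cache and the message queue, and all the names and sample values are mine.

    interface Trip { tripId: string; date: string; city: string; }
    interface Bill { billId: string; tripId: string; date: string; amount: number; }
    interface CityAggregate { tripCount: number; totalAmount: number; }

    const tripCache = new Map<string, Trip>();               // stand-in for the Redis cache (key = tripId)
    const billQueue: Bill[] = [];                             // stand-in for the bill message queue
    const aggregatedView = new Map<string, CityAggregate>();  // City -> { TripCount, TotalAmount }

    function processNextBill(): void {
      const bill = billQueue.shift();
      if (!bill) return;

      const trip = tripCache.get(bill.tripId);
      if (!trip) {
        // The trip event has not arrived yet: requeue the bill and try again later.
        billQueue.push(bill);
        return;
      }

      // Trip found: update the aggregated view, then clean up the cache entry.
      const current = aggregatedView.get(trip.city) ?? { tripCount: 0, totalAmount: 0 };
      aggregatedView.set(trip.city, {
        tripCount: current.tripCount + 1,
        totalAmount: current.totalAmount + bill.amount,
      });
      tripCache.delete(bill.tripId);
    }

    // Example: a trip event arrives first, then its bill.
    tripCache.set("t1", { tripId: "t1", date: "2024-01-01", city: "Pune" });
    billQueue.push({ billId: "b1", tripId: "t1", date: "2024-01-01", amount: 250 });
    processNextBill();
    console.log(aggregatedView.get("Pune")); // { tripCount: 1, totalAmount: 250 }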

Experts, please let me know your thoughts on the above two solutions, and please propose a better solution if you have one.