Understanding handle, viable prefix and valid item in the context of LR(0) and LR(1) items

The Dragon Book gives definitions of handle, viable prefix, and valid item in several different places. I am trying to understand these definitions in relation to each other. The various definitions are given below.

In the bottom-up parsing section, it gives the following definition of a handle:

  • Handle: If $S \xrightarrow{*rm} \alpha A \omega \xrightarrow{rm} \alpha\beta\omega$, then production $A \rightarrow \beta$ in the position following $\alpha$ is a handle of $\alpha\beta\omega$. For convenience, we refer to the body $\beta$ rather than $A \rightarrow \beta$ as a handle.

(Above, $\xrightarrow{*rm}$ denotes a rightmost derivation of zero or more steps, and $\xrightarrow{rm}$ denotes a single rightmost derivation step.)

Then, some pages later, in the SLR parser section, it gives the definitions below:

  • Viable prefix: A viable prefix is a prefix of a right sentential form that does not continue past the right end of the rightmost handle of that sentential form.
  • Valid item: We say item $A \rightarrow \beta_1.\beta_2$ is valid for a viable prefix $\alpha\beta_1$ if there is a derivation $S' \xrightarrow{*rm} \alpha A \omega \xrightarrow{rm} \alpha\beta_1\beta_2\omega$.

The book further says:

The fact that $ A\rightarrow \beta_1.\beta_2$ is valid for $ \alpha\beta_1$ tells us a lot about whether to shift or reduce when we find $ \alpha\beta_1$ on the parsing stack. In particular, if $ \beta_2\neq \epsilon$ , then it suggests that we have not yet shifted the handle onto the stack, so shift is our move. If $ \beta_2=\epsilon$ , then it looks as if $ A\rightarrow\beta_1$ is the handle, and we should reduce by this production.
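
To make these definitions concrete for myself, here is a small worked example of my own (not from the book), using the toy grammar $S' \rightarrow S$, $S \rightarrow (S) \mid a$:

  • The rightmost derivation $S' \xrightarrow{rm} S \xrightarrow{rm} (S) \xrightarrow{rm} (a)$ yields the right-sentential form $(a)$, whose handle is $a$ (production $S \rightarrow a$) at the position following the opening parenthesis $($.
  • The viable prefixes of $(a)$ are $\epsilon$, $($, and $(a$; the full string $(a)$ is not a viable prefix because it continues past the right end of the handle.
  • Items $S \rightarrow (.S)$ and $S \rightarrow .a$ are valid for the viable prefix $($, and item $S \rightarrow a.$ is valid for the viable prefix $(a$. In the last case $\beta_2 = \epsilon$, so the item suggests a reduce, whereas $S \rightarrow (S.)$, which is valid for $(S$, has $\beta_2 \neq \epsilon$ and suggests a shift.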

Doubts:

  1. In most discussions, the book uses all these definitions together. However, above, the definitions are given separately rather than together. How can I relate them to each other? Can I relate them as follows:

    a. In the definition of handle, can we say $A \rightarrow \beta$ is a valid item?
    b. In the definition of valid item, can we say $\beta_1.\beta_2$ is a handle?

The definition of handle is given in Section 4.5.2 (Section 4.5 covers bottom-up parsing). The definitions of viable prefix and valid item are given in Section 4.6.5 (Section 4.6 covers SLR parsers). So none of these definitions is given in the context of LR(1) items or CLR(1)/LALR(1) parsers. I want to know whether these definitions apply to LR(1) items without modification, and if not, what the corresponding definitions for LR(1) items would be. The question below details this doubt.

  2. For a canonical-collection state containing the final item $E \rightarrow \gamma.$, an SLR parser reduces $\gamma$ to $E$ if the next input symbol is in $FOLLOW(E)$. Does the above definition of valid items agree with this? That is, does the definition convey that $FIRST(\omega) \subseteq FOLLOW(A)$? (In other words, does this definition apply to LR(0) items?) If yes, how? I feel this definition means $FIRST(\omega) = LOOKAHEAD(A) \neq FOLLOW(A)$, and hence that it is talking about LR(1) items and applies to CLR/LALR parsers, but not to SLR parsers as stated by the book. Am I wrong? If yes, how? Do these definitions apply to both LR(0) and LR(1) items equally, and am I just unable to see how? If even that is not the case (that is, the above definitions apply only to LR(0) items and not to LR(1) items), how can we give equivalent definitions for LR(1) items?

PDO library to handle MySQL queries

This is my library class. It does not have any issues, but I want to know whether I have made any mistakes, or whether there are ways to improve performance.

The variables and the call (example of a SELECT):

include('class_dbmanager.php');
$this->DBMANAGER = new Class_DbManager();
$Query['DatabaseName'] = "SELECT user FROM table_user WHERE username='" . $username . "';";
$BDResult = $this->DBMANAGER->GetData($Query);

The variables and the call (example of an INSERT/UPDATE):

$Query['DatabaseName'][] = "INSERT INTO Tbl_Sys_Usuarios(IdTUser, Username, Password, Email) VALUES ('$IdTUser', '$Username', '$Password', '$Email');";
$BDResult = $this->DBMANAGER->InsertData($Query);

The library:

<?php
class Class_DbManager {
    //for SELECT
    //$Query is an array with database index and query string
    //$Conf is secondary connection data
    public function GetData($Query, $Conf = '') {
        try {
            $DB_R   = [];
            $DB     = [];
            $val    = [];
            $prefix = '';
            $count  = 0;
            if (USEPREFIX == true) {
                $prefix = DB_PRE; //DB prefix
            }
            reset($Query);
            $DB_2USE = key($Query);
            //Connection. I use defined constants...
            $conn = new PDO("mysql:host=" . DB_HOST . ";dbname=" . $prefix . "" . DBSYS . "", DB_USER, DB_PASS);
            //secondary connection
            if (isset($Conf['CONF']['ChangeServ'])) {
                if ($Conf['CONF']['ChangeServ'] == true) {
                    $conn = new PDO("mysql:host=" . $Conf['CONF']['DB_HOST'] . ";dbname=" . $Conf['CONF']['PREFIX2USE'] . "" . $Conf['CONF']['DB2USE'] . "", $Query['CONF']['DB_USER'], $Query['CONF']['DB_PASS']);
                }
            }
            $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
            $conn->exec("set names utf8");
            $conn->exec('USE ' . $DB_2USE);
            //execute the query
            $DB_R['r'] = $conn->query($Query[$DB_2USE], PDO::FETCH_ASSOC);
            $count     = $DB_R['r']->rowCount();
            $DB_R['c'] = $count;
            if ($count == 0) {
                $DB_R['r'] = null;
            } elseif ($count == 1) {
                $DB_R['r'] = $DB_R['r']->fetch(); //fetch the row if there is exactly one result; otherwise return the unfetched result
            }
            $conn = null;
            return $DB_R;
        } catch (PDOException $e) {
            echo '<pre>';
            echo var_dump($e);
            echo '</pre>';
        }
    }

    //for UPDATE and INSERT
    //$Query is an array with database index and query string
    //$Conf is secondary connection data
    public function UpdateData($Query, $Conf = '') {
        try {
            $DB_R   = [];
            $DB     = [];
            $val    = [];
            $prefix = '';
            $cT     = 0;
            if (USEPREFIX == true) {
                $prefix = DB_PRE;
            }
            $conn = new PDO("mysql:host=" . DB_HOST . ";dbname=" . $prefix . "" . DBSYS . "", DB_USER, DB_PASS);
            if (isset($Conf['CONF']['ChangeServ'])) {
                if ($Conf['CONF']['ChangeServ'] == true) {
                    $conn = new PDO("mysql:host=" . $Conf['CONF']['DB_HOST'] . ";dbname=" . $Conf['CONF']['PREFIX2USE'] . "" . $Conf['CONF']['DB2USE'] . "", $Query['CONF']['DB_USER'], $Query['CONF']['DB_PASS']);
                }
            }
            $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
            $conn->beginTransaction();
            $conn->exec("set names utf8");
            foreach ($Query as $DB_2USE => $QArr) {
                $conn->exec('USE ' . $DB_2USE);
                foreach ($QArr as $key => $QString) {
                    $conn->exec($QString);
                    $cT++;
                }
            }
            $conn->commit();
            $conn      = null;
            $DB_R['r'] = true;
            return $DB_R;
        } catch (PDOException $e) {
            #roll back the auto-increment
            $conn->rollback();
            $conn->beginTransaction();
            $conn->exec("set names utf8");
            foreach ($Query as $DB_2USE => $QArr) {
                $conn->exec('USE ' . $DB_2USE);
                foreach ($QArr as $key => $QString) {
                    preg_match('/\binto\b\s*(\w+)/i', $QString, $tables);
                    $conn->exec("ALTER TABLE " . $tables[1] . " AUTO_INCREMENT=1;");
                }
            }
            $conn->commit();
            echo '<pre>';
            echo var_dump($e);
            echo '</pre>';
        }
    }

    //for INSERT
    //$Query is an array with database index and query string
    //$Conf is secondary connection data
    public function InsertData($Query, $Conf = '') {
        try {
            $DB_R   = [];
            $DB     = [];
            $val    = [];
            $prefix = '';
            $cT     = 0;
            if (USEPREFIX == true) {
                $prefix = DB_PRE;
            }
            $conn = new PDO("mysql:host=" . DB_HOST . ";dbname=" . $prefix . "" . DBSYS . "", DB_USER, DB_PASS);
            if (isset($Conf['CONF']['ChangeServ'])) {
                if ($Conf['CONF']['ChangeServ'] == true) {
                    $conn = new PDO("mysql:host=" . $Conf['CONF']['DB_HOST'] . ";dbname=" . $Conf['CONF']['PREFIX2USE'] . "" . $Conf['CONF']['DB2USE'] . "", $Query['CONF']['DB_USER'], $Query['CONF']['DB_PASS']);
                }
            }
            $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
            $conn->beginTransaction();
            $conn->exec("set names utf8");
            foreach ($Query as $DB_2USE => $QArr) {
                $conn->exec('USE ' . $DB_2USE);
                foreach ($QArr as $key => $QString) {
                    $conn->exec($QString);
                    $cT++;
                }
            }
            $conn->commit();
            $conn      = null;
            $DB_R['r'] = true;
            return $DB_R;
        } catch (PDOException $e) {
            #roll back the auto-increment
            $conn->rollback();
            $conn->beginTransaction();
            $conn->exec("set names utf8");
            foreach ($Query as $DB_2USE => $QArr) {
                $conn->exec('USE ' . $DB_2USE);
                foreach ($QArr as $key => $QString) {
                    preg_match('/\binto\b\s*(\w+)/i', $QString, $tables);
                    $conn->exec("ALTER TABLE " . $tables[1] . " AUTO_INCREMENT=1;");
                }
            }
            $conn->commit();
            echo '<pre>';
            echo var_dump($e);
            echo '</pre>';
        }
    }

    //for DELETE
    //$Query is an array with database index and query string
    //$Conf is secondary connection data
    public function DeleteData($Query, $Conf = '') {
        try {
            $DB_R   = [];
            $DB     = [];
            $val    = [];
            $prefix = '';
            $cT     = 0;
            if (USEPREFIX == true) {
                $prefix = DB_PRE;
            }
            $conn = new PDO("mysql:host=" . DB_HOST . ";dbname=" . $prefix . "" . DBSYS . "", DB_USER, DB_PASS);
            if (isset($Conf['CONF']['ChangeServ'])) {
                if ($Conf['CONF']['ChangeServ'] == true) {
                    $conn = new PDO("mysql:host=" . $Conf['CONF']['DB_HOST'] . ";dbname=" . $Conf['CONF']['PREFIX2USE'] . "" . $Conf['CONF']['DB2USE'] . "", $Query['CONF']['DB_USER'], $Query['CONF']['DB_PASS']);
                }
            }
            $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
            $conn->beginTransaction();
            $conn->exec("set names utf8");
            foreach ($Query as $DB_2USE => $QArr) {
                $conn->exec('USE ' . $DB_2USE);
                foreach ($QArr as $key => $QString) {
                    $conn->exec($QString);
                    $cT++;
                }
            }
            $conn->commit();
            $conn      = null;
            $DB_R['r'] = true;
            return $DB_R;
        } catch (PDOException $e) {
            #roll back the auto-increment
            $conn->rollback();
            $conn->beginTransaction();
            $conn->exec("set names utf8");
            foreach ($Query as $DB_2USE => $QArr) {
                $conn->exec('USE ' . $DB_2USE);
                foreach ($QArr as $key => $QString) {
                    preg_match('/\binto\b\s*(\w+)/i', $QString, $tables);
                    $conn->exec("ALTER TABLE " . $tables[1] . " AUTO_INCREMENT=1;");
                }
            }
            $conn->commit();
            echo '<pre>';
            echo var_dump($e);
            echo '</pre>';
        }
    }
}

How can I force my dedicated GPU to handle all applications or disable my integrated GPU?

I'm a software developer. Electron, a library I'm using, currently lacks a feature to choose whether the dedicated GPU handles the dynamically generated executable during development (each time you save a change, an executable is dynamically generated for quick testing, so I can't just change the Windows settings for this specific executable to use the desired GPU). So I'm wondering if there's a way to force my dedicated (non-CPU-integrated) GPU to handle everything on my system. This isn't really a software question; the context just seems relevant.


How to handle long TCP sessions in ZDD deployments?

I have an application that forwards TCP connections to another app. Currently I am trying to make this application support zero-downtime deployment, so I can deploy a new version at any time, but there is a problem I have not found a solution to.

I can't kill the TCP sessions; some of them can last 5 minutes or 2 hours. I would like to know the generic way to solve this problem: when deploying a new version of my software, new connections should go to the new version without killing the existing ones.

I know that with Docker you can modify the signals that the container receives and handle them, but I still see that at some point during the deployment a "docker rm" command is sent and the container is deleted (currently I am testing with Docker Swarm, and I assume Kubernetes will do something similar).

Is the way to go to use a very long timeout for the deployment, or some kind of blue/green deployment?

Thanks,

Is there a KM-switch that can handle Magic Trackpad 2 in wired mode?

I have a couple of Macs that I want to use with a single keyboard/trackpad. One of them is a 2019 iMac; the other two are Mac minis. The iMac is connected to a second monitor (Thunderbolt to DisplayPort). That same second monitor works as the first monitor for the minis (2x HDMI; the monitor has three inputs). I can attach my Apple wired keyboard to a KM-switch and thus work with it on all computers. But that doesn't work for the trackpad. I tried attaching the trackpad to the keyboard, but that did not work: it is not seen by the computer the KM-switch is connected to.

So, I’m looking for a hardware solution to connect the Apple Magic Trackpad 2 to 3 computers. Is there a solution?

How to handle dependencies in Web APIs

I'm struggling with a decision about how to design a web API where I create new "things". We roughly follow Zalando's API guidelines, which provide a nice starting point for web APIs (https://opensource.zalando.com/restful-api-guidelines/). But there's no guidance on how to handle creating new resources that might have dependencies.

To keep it simple, here is the beloved automotive example.

Assume the following API:

GET /vehicle – will get a list of vehicles
POST /vehicle – will create a new vehicle

The vehicle might look something like this:

class Vehicle {
    VehicleType Type { get; set; }
}

enum VehicleType { // This enum is an example - it might as well be some complex type.
    eCar,
    Car,
    Truck
}

Now, for the POST, I need to know about valid VehicleTypes.

Would I rather do:
GET /vehicle-type or
GET /vehicle/types or
GET /vehicle/dependencies/types or
GET /new-vehicle and include the dependencies?

Which approach is "well-known"? Are there other well-known approaches?
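
To make the first option concrete, this is roughly the shape I have in mind. It is only a minimal sketch assuming an ASP.NET Core controller; the route and type names are placeholders, not an existing API. VehicleType is the enum from above.

using System;
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

// Hypothetical controller for the "GET /vehicle-type" option.
[ApiController]
[Route("vehicle-type")]
public class VehicleTypeController : ControllerBase {
    // Returns the names of all valid VehicleType values so a client
    // knows what it may send in POST /vehicle.
    [HttpGet]
    public ActionResult<IEnumerable<string>> Get() {
        return Ok(Enum.GetNames(typeof(VehicleType)));
    }
}

The other options would only differ in the route, or in embedding the same list in a GET /new-vehicle response.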

Handle stream data with Dataflow

I have a WebSocket connection to a third-party API. The API returns a lot of data; at peak hours it returns hundreds of messages per second. I need to process the data for two purposes: save everything to a DB and send some data to RabbitMQ.

The idea is the following:
I want to save data to the DB when the batch size reaches 1000, or on a timeout of 3 seconds.

I want to publish data to RabbitMQ on a 1-second timeout. It's a kind of throttling, because there is a lot of data. Moreover, I need to select only the last record for each specific ticker; I have done that in the ActionBlock. For example, say we have the following records in the batch:

{"Ticker": "MSFT", "DateTime": "2019-05-14T10:00:00:100"}
{"Ticker": "MSFT", "DateTime": "2019-05-14T10:00:00:150"}
{"Ticker": "AAPL", "DateTime": "2019-05-14T10:00:00:300"}

I need to publish only the last record for each ticker, so after filtering there will be 2 records that I am going to publish:

{"Ticker": "MSFT", "DateTime": "2019-05-14T10:00:00:150"}
{"Ticker": "AAPL", "DateTime": "2019-05-14T10:00:00:300"}
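
In isolation, that filtering is just the GroupBy/OrderByDescending selection you can see inside the publish ActionBlock in the full code below. Here is a minimal standalone sketch, with a hypothetical Quote type standing in for my real message type (timestamps rewritten so DateTime.Parse accepts them):

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical message type, for illustration only.
class Quote {
    public string Ticker { get; set; }
    public DateTime DateTime { get; set; }
}

class FilterDemo {
    static void Main() {
        var batch = new List<Quote> {
            new Quote { Ticker = "MSFT", DateTime = DateTime.Parse("2019-05-14T10:00:00.100") },
            new Quote { Ticker = "MSFT", DateTime = DateTime.Parse("2019-05-14T10:00:00.150") },
            new Quote { Ticker = "AAPL", DateTime = DateTime.Parse("2019-05-14T10:00:00.300") },
        };

        // Keep only the most recent record per ticker.
        var latestPerTicker = batch
            .GroupBy(q => q.Ticker)
            .Select(g => g.OrderByDescending(q => q.DateTime).FirstOrDefault())
            .ToList();

        foreach (var q in latestPerTicker)
            Console.WriteLine($"{q.Ticker} {q.DateTime:HH:mm:ss.fff}"); // MSFT 10:00:00.150, AAPL 10:00:00.300
    }
}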

Full code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;
using System.Timers;

public class StreamMessagePipeline<T> where T : StreamingMessage {
    private readonly BatchBlock<T> _saveBatchBlock;
    private readonly BatchBlock<T> _publishBatchBlock;

    public StreamMessagePipeline() {
        _saveBatchBlock = new BatchBlock<T>(1000);
        _publishBatchBlock = new BatchBlock<T>(500);

        SetupSaveBatchPipeline();
        SetupPublishBatchPipeline();
    }

    private void SetupSaveBatchPipeline() {
        var saveBatchTimeOut = TimeSpan.FromSeconds(3);
        var saveBatchTimer = new Timer(saveBatchTimeOut.TotalMilliseconds);

        saveBatchTimer.Elapsed += (s, e) => _saveBatchBlock.TriggerBatch();

        var actionBlockSave = new ActionBlock<IEnumerable<T>>(x => {
            // Reset the timeout since we got a batch
            saveBatchTimer.Stop();
            saveBatchTimer.Start();

            Console.WriteLine($"Save to DB : {x.Count()}");
        });

        _saveBatchBlock.LinkTo(actionBlockSave, new DataflowLinkOptions {
            PropagateCompletion = true
        });
    }

    private void SetupPublishBatchPipeline() {
        var publishBatchTimeOut = TimeSpan.FromSeconds(1);
        var publishBatchTimer = new Timer(publishBatchTimeOut.TotalMilliseconds);

        publishBatchTimer.Elapsed += (s, e) => _publishBatchBlock.TriggerBatch();

        var actionBlockPublish = new ActionBlock<IEnumerable<T>>(x => {
            var res = x.GroupBy(d => d.Ticker)
                       .Select(d => d.OrderByDescending(s => s.DateTime).FirstOrDefault())
                       .ToList();

            Console.WriteLine($"Publish data to somewhere : {res.Count()}");
            // Reset the timeout since we got a batch
            publishBatchTimer.Stop();
            publishBatchTimer.Start();
        });

        _publishBatchBlock.LinkTo(actionBlockPublish, new DataflowLinkOptions {
            PropagateCompletion = true
        });
    }

    public async Task Handle(T record) {
        await _saveBatchBlock.SendAsync(record);
        await _publishBatchBlock.SendAsync(record);
    }
}
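
For context, this is roughly how I feed messages into the pipeline. It is only a hypothetical sketch: TickMessage and the async message stream are placeholders, not part of the code above.

using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical concrete message type; StreamingMessage (with Ticker/DateTime) comes from my real code.
public class TickMessage : StreamingMessage {
}

public static class PipelineUsage {
    // socketMessages stands in for whatever async stream the websocket client exposes.
    public static async Task FeedAsync(IAsyncEnumerable<TickMessage> socketMessages) {
        var pipeline = new StreamMessagePipeline<TickMessage>();

        // Every incoming message is pushed into both batch blocks via Handle().
        await foreach (var msg in socketMessages) {
            await pipeline.Handle(msg);
        }
    }
}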