How to model hierarchical JSON in Java

I’m a front-end developer who is brand new to backend development. My task is to model JSON as a Java object. For now it’s just some mock data that my controller returns.

{
    "data": {
        "objectId": 25,
        "columnName": [
            "myCategory",
            "myCategoryId"
        ],
        "columnValues": [
            ["Category One", 1],
            ["Category Two", 2],
            ["Category Three", 3],
            ["Category Four", 4],
            ["Category Five", 5]
        ]
    }
}

And here’s my attempt. The controller returns this JSON correctly. But isn’t this too simple? What I believe should be done is to extract the columnName and columnValues arrays into separate classes, but I’m not sure how.

package com.category;

import java.util.List;

public class MyObjectData {

    private int objectId;
    private List<String> columnName;
    private List<List<Object>> columnValues;

    public int getObjectId() {
        return objectId;
    }

    public void setObjectId(int objectId) {
        this.objectId = objectId;
    }

    public List<String> getColumnName() {
        return columnName;
    }

    public void setColumnName(List<String> columnName) {
        this.columnName = columnName;
    }

    public List<List<Object>> getColumnValues() {
        return columnValues;
    }

    public void setColumnValues(List<List<Object>> columnValues) {
        this.columnValues = columnValues;
    }
}

Regarding columnName and columnValues, I feel like I should be doing something like this in the model instead:

private List<ColumnName> columnNames;
private List<ColumnValue> columnValues;

public List<ColumnName> getColumnNames() {
    return columnNames;
}

public void setColumnNames(List<ColumnName> columnNames) {
    this.columnNames = columnNames;
}

public List<ColumnValue> getColumnValues() {
    return columnValues;
}

public void setColumnValues(List<ColumnValue> columnValues) {
    this.columnValues = columnValues;
}

And then I’d have two separate classes for them like this:

package com.category;

public class ColumnName {

    private String columnName;

    public String getColumnName() {
        return columnName;
    }

    public void setColumnName(String columnName) {
        this.columnName = columnName;
    }
}

package com.category;

public class ColumnValue {

    private String columnValue;
    private int columnValueId;

    public String getColumnValue() {
        return columnValue;
    }

    public void setColumnValue(String columnValue) {
        this.columnValue = columnValue;
    }

    public int getColumnValueId() {
        return columnValueId;
    }

    public void setColumnValueId(int columnValueId) {
        this.columnValueId = columnValueId;
    }
}

I feel like I have all the right pieces, but I’m just not sure whether this is a better approach than my initial attempt, which works. Just looking for input. Thanks in advance.

Differences between enumeration-based and hierarchical token typing

When writing a lexer/parser, why or when would a developer be well advised to choose to define the tokens’ types through an enumeration field versus a type hierarchy?

The closest question I’ve found here so far is Lexing: One token per operator, or one universal operator token? by Jeroen Bollen, but it seems to be more about the ideal depth of the token type hierarchy.

As for my personal experience, I’ve used Newtonsoft.Json’s reader, which uses an enumeration, and I’ve read about C#’s Expression types, which seem to use a hierarchy, but also seem to be more than just tokens.

Hierarchical Bayes Model

I am given a zero-inflated Poisson (ZIP) model, where random data $$X_1, \dots, X_n$$ are of the form $$X_i = R_i Y_i$$, where the $$Y_i$$’s have a Poisson($$\lambda$$) distribution and the $$R_i$$’s have a Bernoulli($$p$$) distribution, all independent of each other. Given an outcome $$x = (x_1, \dots, x_n)$$, the objective is to estimate both $$\lambda$$ and $$p$$.

We can use a hierarchical Bayes model:

$$p$$ ~ Uniform(0,1) (prior for $$p$$),
$$(\lambda|p)$$ ~ Gamma(a,b) (prior for $$\lambda$$),
$$(r_i|p, \lambda )$$ ~ Bernoulli($$p$$) independently (from the model above),
$$(x_i|r, \lambda, p )$$ ~ Poisson($$\lambda r_i$$) independently (from the model above)

Since $$a$$ and $$b$$ are known parameters, and $$r = (r_1,…,r_n)$$, it follows that

$$f(x,r, \lambda, p) = \frac{b^a \lambda^{a-1} e^{-b \lambda}}{\Gamma(a)} \prod_{i=1}^n\frac{e^{-\lambda r_i} (\lambda r_i)^{x_i}}{x_i!} p^{r_i}(1-p)^{1-r_i}$$

My question is to obtain the following:

1) $$\lambda|p,r,x \sim Gamma(a + \sum_{i}x_i,\, b + \sum_{i}r_i)$$
2) $$p|\lambda,r,x \sim Beta(1 + \sum_{i}r_i,\, n + 1 - \sum_{i}r_i)$$
3) $$r_i|\lambda,p,x \sim Bernoulli\left(\frac{pe^{-\lambda}}{pe^{-\lambda} + (1-p)\,I\{x_i=0\}}\right)$$

For 1) and 2), I am able to deduce them by integrating out the other variables. However, for 3) I was not able to do so, and eventually obtained the following expression:
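For example, (1) can be read off by collecting the terms of the joint density that involve $$\lambda$$ and treating everything else as constant (using $$(\lambda r_i)^{x_i} = \lambda^{x_i} r_i^{x_i}$$):

$$f(\lambda|p,r,x) \propto \lambda^{a-1} e^{-b\lambda} \prod_{i=1}^n e^{-\lambda r_i}\lambda^{x_i} = \lambda^{a+\sum_i x_i - 1}\, e^{-\lambda(b+\sum_i r_i)},$$

which is the kernel of a Gamma($$a+\sum_i x_i$$, $$b+\sum_i r_i$$) density.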

$$e^{\lambda \sum r_i} r_i^{\sum x_i} p^{\sum r_i} (1-p)^{n-\sum r_i}$$

Can anyone show me whether what I did is correct, and perhaps how to obtain the required expression?

Thank you.

Hierarchical Deterministic (HD) Importing Funds

I’m creating derived addresses from an HD private key.

I have all the data saved: the seed, the HD private key, and the derived addresses with their private keys, plus WIF formats.

I’m receiving funds at each address, but need a way to import all the funds as efficiently as possible into an SPV wallet. At the moment I import each address’s WIF to retrieve its corresponding balance.

Is there a neat way to pull in all funds that derive from a “master key”? Or have I misunderstood the benefits of deterministic wallets, and am trying to achieve a nonexistent feature?

TL;DR: what’s the best way to import funds from multiple addresses?

HSM for Hierarchical Deterministic Wallet

I’d like to use a small HSM to manage an HD Bitcoin wallet (BIP32). The HSM does have the correct ECDSA curve, secp256k1, and can generate keypairs and sign.

My challenge comes from the HD part. In order to create hardened child keys, I need an HMAC-SHA512 of the parent key and chain code. The HSM doesn’t expose the parent key (that’s a “good thing”), and also has no concept of a chain code.

The larger vendors sell expensive HSM units that do have this kind of functionality, so there must be a way.

Specific questions:

1) How can I use an HSM with a private EC key to generate child keys following BIP32?

2) How can I store part of the HD chain (seed, parent, child, etc.) in the HSM, and only expose part of it?

3) Any open source code examples for this?

Thank you!

.NET Custom hierarchical expression to treeview data structure

I’m not good at English, so this may be hard to read. I apologize in advance.

I need an expression parser to draw a diagram (a fault tree). In order to do that, I have to create a data structure from a custom expression:

( (123-A1) AND (123-A2) AND (123-A3) OR (123-A4 AND (123-A5 OR 123-A6)) ) 

The above example is written roughly, the way I imagined it.

1. In some cases, parentheses are wrapped around individual variables for readability.
2. Tokens are read in order within the same parentheses.
3. Tokens are read in order if there are no parentheses.
4. Parentheses around the whole expression can be attached without changing its meaning.
5. The only operators are AND and OR; the only parentheses are ( and ).
6. I don’t know the best way to turn the string into a data structure.
7. The depth and order of the parentheses (and everything else) are all important, because eventually I need to draw the diagram.
           _________OR_________
          |                    |
    _____AND_____          ___AND___
   |      |      |        |         |
   |      |      |        |      ___OR___
   |      |      |        |     |        |
123-A1  123-A2  123-A3  123-A4  123-A5  123-A6

Expression to Token

public class Token
{
    public TokenType Type;  // Operator, Parenthesis, Variable
    public string Label;
    public int Depth;
    public int Group;

    public Token(string label)
    {
        Label = label.Trim();

        if (ExpressionParser.SupportedOperatorHashSet.Contains(Label.ToUpper()))
        {
            Type = TokenType.Operator;
        }
        else if (ExpressionParser.SupportedParenthesesHashSet.Contains(Label))
        {
            Type = TokenType.Parenthesis;
        }
        else
        {
            Type = TokenType.Variable;
        }
    }
}

public enum TokenType
{
    Variable,
    Operator,
    Parenthesis
}

public static class ExpressionParser
{
    private static Regex TokenRegex = new Regex(@"[()]|[\d\w-]+");

    internal static readonly HashSet<string> SupportedOperatorHashSet = new HashSet<string>() { AndGate, OrGate };
    internal static readonly HashSet<string> SupportedParenthesesHashSet = new HashSet<string>() { OpenParenthesis, CloseParenthesis };
    private static readonly List<Token> TokenList = new List<Token>();

    internal const string AndGate = "AND";
    internal const string OrGate = "OR";
    internal const string OpenParenthesis = "(";
    internal const string CloseParenthesis = ")";

    public static List<Token> Parse(string expression)
    {
        try
        {
            // Get '(' ')' '123-A1' 'AND' 'OR'
            MatchCollection matches = TokenRegex.Matches(expression); // @"[()]|[\d\w-]+"
            int depth = 0;

            foreach (Match match in matches)
            {
                Token token = new Token(match.Value);
                TokenList.Add(token);

                // Increase depth when token is open parenthesis
                if (token.Type == TokenType.Parenthesis && token.Label == OpenParenthesis)
                {
                    depth += 1;
                }

                token.Depth = depth;

                // Set group
                if (TokenList.Count > 1)
                {
                    Token prevToken = TokenList[TokenList.Count - 2];
                    if (prevToken.Depth == token.Depth)
                    {
                        token.Group = prevToken.Group;
                    }
                    else
                    {
                        token.Group = prevToken.Group + 1;
                    }
                }

                // Decrease depth after token is close parenthesis
                if (token.Type == TokenType.Parenthesis && token.Label == CloseParenthesis)
                {
                    depth -= 1;
                }
            }

            // Remove parentheses around a single variable  [ex. (123-ab)]
            for (int i = 0; i < TokenList.Count; i++)
            {
                if (i + 2 < TokenList.Count &&
                    TokenList[i].Type == TokenType.Parenthesis && TokenList[i].Label == OpenParenthesis &&
                    TokenList[i + 2].Type == TokenType.Parenthesis && TokenList[i + 2].Label == CloseParenthesis)
                {
                    TokenList.RemoveAt(i + 2);
                    TokenList.RemoveAt(i);
                }
            }

            return new List<Token>(TokenList);
        }
        finally
        {
            TokenList.Clear();
        }
    }
}

Run

ExpressionParser.Parse("( (123-A1) AND (123-A2) AND (123-A3) OR (123-A4 AND (123-A5 OR 123-A6)) )"); 

Result

OR
 ├ AND
 │  ├ 123-A1
 │  ├ 123-A2
 │  └ 123-A3
 └ AND
    ├ 123-A4
    └ OR
      ├ 123-A5
      └ 123-A6

What do input layers represent in a Hierarchical Attention Network

I’m trying to grasp the idea of a Hierarchical Attention Network (HAN). Most of the code I find online is more or less similar to the one here: https://medium.com/jatana/report-on-text-classification-using-cnn-rnn-han-f0e887214d5f :

embedding_layer = Embedding(len(word_index) + 1, EMBEDDING_DIM,
                            weights=[embedding_matrix],
                            input_length=MAX_SENT_LENGTH, trainable=True)
sentence_input = Input(shape=(MAX_SENT_LENGTH,), dtype='int32', name='input1')
embedded_sequences = embedding_layer(sentence_input)
l_lstm = Bidirectional(LSTM(100))(embedded_sequences)
sentEncoder = Model(sentence_input, l_lstm)

review_input = Input(shape=(MAX_SENTS, MAX_SENT_LENGTH), dtype='int32', name='input2')
review_encoder = TimeDistributed(sentEncoder)(review_input)
l_lstm_sent = Bidirectional(LSTM(100))(review_encoder)
preds = Dense(len(macronum), activation='softmax')(l_lstm_sent)
model = Model(review_input, preds)

My question is: what do the input layers here represent? I’m guessing that input1 represents the sentences wrapped with the embedding layer, but in that case what is input2? Is it the output of sentEncoder? If so, it should be floats; or if it’s another layer of embedded words, then it should be wrapped with an embedding layer as well.
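For what it’s worth, here is a minimal sketch (sizes made up) of what the declared shapes and dtypes above imply each input holds: input2 is a whole review as a 2-D array of integer word indices, one row per sentence; the embedding lookup happens inside sentEncoder, which TimeDistributed applies to each row.

```python
import numpy as np

# Hypothetical sizes, for illustration only.
MAX_SENTS, MAX_SENT_LENGTH, VOCAB_SIZE = 3, 5, 1000

# input1 (sentEncoder's input): ONE sentence, i.e. a 1-D vector of integer
# word indices padded/truncated to MAX_SENT_LENGTH.
sentence = np.random.randint(0, VOCAB_SIZE, size=(MAX_SENT_LENGTH,), dtype='int32')

# input2 (the full model's input): ONE review, i.e. MAX_SENTS such sentences
# stacked into a 2-D integer array -- still word indices, not floats, because
# the embedding lookup happens inside sentEncoder, which TimeDistributed
# applies to each row (each sentence) of this array.
review = np.random.randint(0, VOCAB_SIZE, size=(MAX_SENTS, MAX_SENT_LENGTH), dtype='int32')

print(sentence.shape)  # (5,)
print(review.shape)    # (3, 5)
```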

I have added a dropdown item through Hierarchical Select. The newly added item is displayed properly on the client end. However, when a client selects the item and submits it, the system reports the error “Please select a campus”. I checked the system log and got the following messages. May I ask for your help to fix it? Thank you!

Design patterns for hierarchical multiple selection

Given the hypothetical structure:

Representing a group of items that can be classified as type A or B; each subtype can be classified as type C or D, and items within D can be further classified as type E or F.

The user can choose to display all items; all items of type A or of type B; or any combination of C-F from A plus any combination of C-F from B.

This will be used as a filter where there are mutually exclusive states (e.g. you can’t select ALL and A at the same time, or select A and A/C at the same time) and also non-mutually exclusive states (e.g. you can select A/C and B/C). I want to implement this as a ‘toggle map’: if the user selects ALL, then every square will be selected (or unselected, depending on the existing state). This also allows users to select different combinations to analyze different subsets of data.
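A minimal sketch of that toggle behaviour, assuming a hypothetical A/B, C/D, E/F structure (all names made up): each selectable node simply maps to the set of leaf filters beneath it, so the mutual-exclusion rules fall out of plain set operations.

```python
# Each selectable node maps to the leaf filters in its subtree (names hypothetical).
NODES = {
    "ALL":   {"A/C", "A/D/E", "A/D/F", "B/C", "B/D/E", "B/D/F"},
    "A":     {"A/C", "A/D/E", "A/D/F"},
    "B":     {"B/C", "B/D/E", "B/D/F"},
    "A/C":   {"A/C"},
    "A/D":   {"A/D/E", "A/D/F"},
    "A/D/E": {"A/D/E"},
    "A/D/F": {"A/D/F"},
    "B/C":   {"B/C"},
    "B/D":   {"B/D/E", "B/D/F"},
    "B/D/E": {"B/D/E"},
    "B/D/F": {"B/D/F"},
}

def toggle(selected, node):
    """Toggle a node: if its whole subtree is already selected, clear it;
    otherwise select all of it. A parent's state is just the union of its
    leaves, so "ALL vs. A" and "A vs. A/C" never conflict."""
    subset = NODES[node]
    return selected - subset if subset <= selected else selected | subset

state = set()
state = toggle(state, "ALL")   # everything on
state = toggle(state, "A")     # A's subtree off -> only B's leaves remain
state = toggle(state, "B/C")   # B/C off
print(sorted(state))           # ['B/D/E', 'B/D/F']
```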

Is this type of user interaction too complex? Is there an alternate design pattern that allows users to make multiple selections from a hierarchical structure while still showing the relationship between the categories?

Efficiently listing all last-level descendants of each root in a proper hierarchical graph (bipartite DAG)

Let G = (V, E) be a graph which is hierarchical in the sense that its vertices are arranged in levels/layers (say 1 to k) and an edge can only be from a vertex at level i to a vertex at level i+1. Vertices at level 1 and k are called the roots and the leaves, respectively. The problem is to find all descendant leaves of each root. For instance, for the following graph,

the answer would be:

1: 12, 13
2: 12, 13
3: 12, 13, 14, 15, 16
4: 14, 15, 16

A naive solution is to run a DFS from each root, which takes O(r(n+m)) time, where r is the number of roots, n the number of vertices, and m the number of edges.
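That naive solution can be sketched as follows. The edge list below is hypothetical: one layered graph that happens to be consistent with the answer listed above.

```python
from collections import defaultdict

def leaf_descendants(edges, roots, leaves):
    """One DFS per root: O(r(n + m)) total."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)

    result = {}
    for root in roots:
        seen, stack = set(), [root]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend(adj[u])
        result[root] = sorted(seen & leaves)  # keep only last-level vertices
    return result

# A made-up 3-level graph whose answer matches the example above.
edges = [(1, 5), (2, 5), (3, 5), (3, 6), (4, 6),
         (5, 12), (5, 13), (6, 14), (6, 15), (6, 16)]
print(leaf_descendants(edges, roots={1, 2, 3, 4}, leaves={12, 13, 14, 15, 16}))
# {1: [12, 13], 2: [12, 13], 3: [12, 13, 14, 15, 16], 4: [14, 15, 16]}
```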

Can we do better? Another way to see the same graph is that it is a bipartite DAG.