angular-ui-router appending #app

We have a bug in our AngularJS application.

We’re using ui-router to handle our state changes, but ui-router keeps appending #app to the URL whenever the user navigates with the browser’s back and forward buttons. As a result, each such navigation takes two clicks to reach the new subpage.

The browser history looks like so

    http://localhost:8080/app/#/modulename#app     <- unnecessary step
    http://localhost:8080/app/#/anothermodule#app  <- unnecessary step

and so on…

Maybe there’s something wrong with this state configuration that is causing the bug:

    $stateProvider
        .state('app', {
            abstract: true,
            url: '/',
            // parent: true,
            templateUrl: CONFIG.baseUrl + '/js/modules/Core/View/app.html',
        })
        .state('access.404', {
            url: '/404',
            templateUrl: 'tpl/page_404.html'
        });

AngularJS version: 1.2.29

ui-router version: 0.2.10

Appending a new option to an existing full contextMenu

I’m creating a Chrome extension that adds functionality to Gmail when right-clicking on threads.

The issue is that the Chrome context menu for threads is already at capacity (6 top-level items). However, more entries have been added by creating child context items under each section, so that’s my plan as well.

I have tried to add to one of those sections, but I can’t figure out what the parent ID is. I have inspected the context menu, but each item has a unique ID, and while the IDs follow naming conventions, I don’t see a parent ID I can use.

    chrome.contextMenus.create({"title": "testbutton", "contexts": ["all"], "parentId": ??, "id": "1"})

I’m hoping to add a ‘testbutton’ option right above the existing ‘move to tab’ option.
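For what it’s worth, a minimal sketch of the parent/child pattern — assuming a background script with the `contextMenus` permission, and with hypothetical IDs (`my-section`, `testbutton`). Note that `parentId` can only reference a menu item created by your own extension; if the existing entries come from Gmail’s own page menu or another extension, their IDs aren’t reachable:

```javascript
// Sketch only: IDs here are made up, not Gmail's.
function buildMenu(menus) {
  // Create our own parent entry first...
  menus.create({ id: 'my-section', title: 'My section', contexts: ['all'] });
  // ...then hang the child off it; parentId names the item created just above.
  menus.create({
    id: 'testbutton',
    parentId: 'my-section',
    title: 'testbutton',
    contexts: ['all'],
  });
}

// Only runs inside an extension context where chrome.contextMenus exists.
if (typeof chrome !== 'undefined' && chrome.contextMenus) {
  buildMenu(chrome.contextMenus);
}
```

The practical consequence is that you get your own sub-menu next to the existing ones rather than an item inside a section you don’t own.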

Appending new lines in Tomcat Catalina log rotate

An application running on Tomcat and using log4j is, for some unknown reason, randomly appending new log lines somewhere in the middle of the log file.

log4j.properties looks like this:

    log4j.rootLogger=INFO, CATALINA
    log4j.appender.CATALINA=org.apache.log4j.rolling.RollingFileAppender
    log4j.appender.CATALINA.RollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
    log4j.appender.CATALINA.RollingPolicy.FileNamePattern=${catalina.base}/logs/catalina.%d{yyyy-MM-dd}.log
    log4j.appender.CATALINA.layout=com.medallies.log.ThreadIdSupportedPatternLayout
    log4j.appender.CATALINA.layout.ConversionPattern=[TID=%i] %-5p %d{HH:mm:ss,SSS} | %c | %m%n

A Tomcat restart helps for a while, but after some time the issue comes back.

Any thoughts on this?

Appending a secret (pepper) to Argon2 password hashes

I’ve read quite a bit of the StackExchange and HackerNews debates on the use of “peppers” in password hash security. There are a number of different implementations of the idea of a pepper, ranging from an additional hardcoded salt in the code, hash(password, pepper, salt), to encrypting each password hash separately, in which case the secret key is the pepper.

In the case of one of the middle-ground approaches, a shared and secret pepper is included via hash(hmac(password, pepper), salt). This is necessary primarily because many hashing algorithms rely on the Merkle–Damgård construction, which is vulnerable to length-extension attacks.

However, Argon2 is not based on a Merkle–Damgård construction, so this attack would not apply. When using Argon2, can the naive approach of argon2(password, pepper, salt) be used?

Additionally, the introduction of the Argon2 specification seems to indicate that using HMACs, peppers, or secret keys at all is unnecessary. Is Argon2id with strong memory, thread, and time costs enough on its own?
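As a sketch of the hash(hmac(password, pepper), salt) construction described above — with stdlib scrypt standing in for Argon2 purely so the example is self-contained (argon2-cffi’s PasswordHasher would accept the prehashed value the same way), and a made-up pepper value:

```python
import hashlib
import hmac
import os

PEPPER = b"server-side-secret"  # hypothetical value; kept out of the database

def peppered_hash(password: bytes, salt: bytes) -> bytes:
    # hash(hmac(password, pepper), salt): the pepper enters through HMAC,
    # so the outer password hash never handles the raw secret directly.
    prehashed = hmac.new(PEPPER, password, hashlib.sha256).digest()
    # scrypt stands in for Argon2 here so the sketch runs with only the stdlib.
    return hashlib.scrypt(prehashed, salt=salt, n=2**14, r=8, p=1)

salt = os.urandom(16)
digest = peppered_hash(b"correct horse battery staple", salt)
```

Since the length-extension concern that motivates the HMAC wrapper does not apply to Argon2, the simpler concatenation form is not known to be unsafe there; the HMAC form is just the conservative choice either way.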

API Changes to Paragraph Module Stop Appending Paragraphs to Existing Nodes

I know there are posts out there about adding a paragraph to a node programmatically, but it also appears the API has changed, and I cannot reconcile the two.

I am using the following code.

    foreach ($item['commodities'] as $commodity_id => $commodity) {
        $commodity_price_save = $commodity['commodity_price'];

        // Store commodity values in Paragraph content.
        $paragraph = Paragraph::create([
            'type' => 'commodity_name_and_price',
            'field_commodity_namep' => $commodity['commodity_name'],
            'field_commodity_pricep' => $commodity_price_save,
        ]);

        $paragraph->save();

        //$this->_log('Target ID $paragraph->id(): ' . $paragraph->id());
        //$this->_log('Target Revision ID $paragraph->getRevisionId(): ' . $paragraph->getRevisionId());

        $nid = $node->nid;
        $node_apend = Node::load($nid);

        $node_apend->field_commodity_name_and_pricep[] = [
            'target_id' => $paragraph->id(),
            'target_revision_id' => $paragraph->getRevisionId(),
        ];

        $node_apend->save();
    }

I get the following error messages when running this code as part of a custom module running during drush.

    [warning] call_user_func() expects parameter 1 to be a valid callback, class 'Drupal\paragraphs\Entity\Paragraph' does not have a method 'getCurrentUserId' BaseFieldDefinition.php:469
    [warning] array_flip(): Can only flip STRING and INTEGER values! EntityStorageBase.php:264
    [error] Object of class Drupal\Core\Field\FieldItemList could not be converted to string EntityStorageBase.php:131
    [error] E_RECOVERABLE_ERROR encountered; aborting. To ignore recoverable errors, run again with --no-halt-on-error

Any help is greatly appreciated!

Thanks, Josh

Appending large length encoded in DER to custom recovery

I want to install a custom recovery on a device with a locked bootloader by exploiting some bugs, following this guide.

One of the steps in the guide says I have to append a 4K block beginning with the bytes 0x30, 0x83, 0x19, 0x89, 0x64 to twrp-3.2.1-0-riva.img, but how do I append it?

I tried editing the file with a hex editor, but I don’t know what to do next.

Cleaner way of appending data to List in BeautifulSoup

So I’ve been experimenting with various ways to get data from a variety of websites, choosing between JSON and BeautifulSoup. Currently, I have written a scraper to collect data such as [{Title, Description, Replies, Topic_Starter, Total_Views}], but it has pretty much no reusable code. I’ve been trying to work out how to correct my approach of appending data to one singular list for simplicity and reusability, but I’ve pretty much hit a wall with my current ability.

    from requests import get
    from bs4 import BeautifulSoup
    import pandas as pd
    from time import sleep


    url = ''

    list_topic = []
    list_description = []
    list_replies = []
    list_topicStarted = []
    list_totalViews = []


    def getContentFromURL(_url):
        try:
            response = get(_url)
            html_soup = BeautifulSoup(response.text, 'lxml')
            return html_soup
        except Exception as e:
            print('Error.getContentFromURL:', e)
            return None


    def iterateThroughPages(_lastindexpost, _postperpage, _url):
        indices = '/+'
        index = 0
        for i in range(index, _lastindexpost):
            print('Getting data from ' + url)
            try:
                extractDataFromRow1(getContentFromURL(_url))
                extractDataFromRow2(getContentFromURL(_url))
                print('current page index is: ' + str(index))
                print(_url)
                while i <= _lastindexpost:
                    for table in get(_url):
                        if table != None:
                            new_getPostPerPage = i + _postperpage
                            newlink = f'{indices}{new_getPostPerPage}'
                            print(newlink)
                            bs_link = getContentFromURL(newlink)
                            extractDataFromRow1(bs_link)
                            extractDataFromRow2(bs_link)
                            # throttling to prevent spam; waits 0.5 secs before executing
                            sleep(0.5)
                        i += _postperpage
                        print('current page index is: ' + str(i))
                        if i > _lastindexpost:
                            # If i gets more than the input page (e.g. 1770), halt
                            print('No more available post to retrieve')
                            return
            except Exception as e:
                print('Error.iterateThroughPages:', e)
                return None


    def extractDataFromRow1(_url):
        try:
            for container in _url.find_all('td', {'class': 'row1', 'valign': 'middle'}):
                # get data from topic title in table cell
                topic = container.select_one('a[href^="/topic/"]').text.replace("\n", "")
                description = container.select_one('div.desc').text.replace("\n", "")
                if topic or description is not None:
                    dict_topic = topic
                    dict_description = description
                    if dict_description is '':
                        dict_description = 'No Data'
                        # list_description.append(dict_description)
                        # so no empty string
                    list_topic.append(dict_topic)
                    list_description.append(dict_description)
                else:
                    None
        except Exception as e:
            print('Error.extractDataFromRow1:', e)
            return None


    def extractDataFromRow2(_url):
        try:
            for container in'table[cellspacing="1"] > tr')[2:32]:
                replies = container.select_one('td:nth-of-type(4)').text.strip()
                topic_started = container.select_one('td:nth-of-type(5)').text.strip()
                total_views = container.select_one('td:nth-of-type(6)').text.strip()
                if replies or topic_started or total_views is not None:
                    dict_replies = replies
                    dict_topicStarted = topic_started
                    dict_totalViews = total_views
                    if dict_replies is '':
                        dict_replies = 'No Data'
                    elif dict_topicStarted is '':
                        dict_topicStarted = 'No Data'
                    elif dict_totalViews is '':
                        dict_totalViews = 'No Data'
                    list_replies.append(dict_replies)
                    list_topicStarted.append(dict_topicStarted)
                    list_totalViews.append(dict_totalViews)
                else:
                    print('no data')
                    None
        except Exception as e:
            print('Error.extractDataFromRow2:', e)
            return None


    # limit to 1740
    print(iterateThroughPages(1740, 30, url))
    new_panda = pd.DataFrame(
        {'Title': list_topic, 'Description': list_description,
         'Replies': list_replies, 'Topic Starter': list_topicStarted,
         'Total Views': list_totalViews})
    print(new_panda)

I’m sure my use of try is redundant at this point, as is my large collection of lists, and my use of while and for is most likely wrong as well.
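On the single-list question: one common pattern is to build each row as a dict and keep only one list, which pandas can then turn into a DataFrame directly. A minimal sketch with hypothetical values (in the real code the fields would come from the BeautifulSoup selectors):

```python
def make_row(topic, description, replies, topic_starter, total_views):
    # Normalize empty strings to 'No Data' in one place
    # instead of a separate if/elif per field.
    fields = {
        'Title': topic,
        'Description': description,
        'Replies': replies,
        'Topic Starter': topic_starter,
        'Total Views': total_views,
    }
    return {key: (value if value else 'No Data') for key, value in fields.items()}

rows = []
rows.append(make_row('Some topic', '', '3', 'someone', '120'))
# pd.DataFrame(rows) would then build the table without five parallel lists.
```

This also removes the risk of the five lists drifting out of sync when one row is missing a field.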

file not appending

I am making a project about a café in which I need to append to a user’s file the different items he wants to buy, for example drinks. The user enters input such as 1 2 3 4, and the corresponding items from the list in a file named Drinks.txt are copied into his file.

But the problem is that the file takes in only the first item and then stops taking more. Here is the code:

    #include<iostream>
    #include<fstream>
    #include<string>
    using namespace std;

    int main()
    {
        ofstream f;
        string filename;
        cout << "Please enter a file name to write: ";
        getline(cin, filename);
        ofstream outFile(filename.c_str(), ios::app);
        string line;
        ifstream inFile("Drinks.txt");
        int count, c;
        cout << "Choice number of drinks\n";
        cin >> c;
        for (int i = 0; i < c; i++)
        {
            cout << "Enter number of drinks\n";
            cin >> count;
            while (getline(inFile, line))
            {
                if (--count == 0)
                {
                    outFile << line << endl;
                    break;
                }
            }
        }
        outFile.close();
        inFile.close();
    }

I just want to know where I am making a mistake.
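The likely culprit is that `inFile` is never rewound: after the first drink is found, the stream stays positioned past that line, so the next choice counts from wherever the previous search stopped, and once end-of-file is reached `getline` keeps failing and nothing more is copied. A minimal sketch of one fix, reopening the file per lookup (the file name follows the question):

```cpp
#include <fstream>
#include <string>

// Return line `n` (1-based) of `path`, or an empty string if there is no such line.
std::string nthLine(const std::string& path, int n) {
    std::ifstream in(path);  // a fresh stream always starts at the beginning
    std::string line;
    for (int i = 0; i < n; ++i) {
        if (!std::getline(in, line)) return "";  // fewer than n lines
    }
    return line;
}
```

Alternatively, keep the single `inFile` and call `inFile.clear(); inFile.seekg(0);` before each inner `while`, so every count starts again from line one.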

JAVA_HOME is not set correctly after adding and appending to PATH

Ubuntu 14.04 LTS, Bash version 4.3.11(1)-release

I’ve added $JAVA_HOME to ~/.profile (and .bash_profile) like this:

    #
    # This is the default standard .profile provided to sh users.
    # They are expected to edit it to meet their own needs.
    #
    # The commands in this file are executed when an sh user first
    # logs in.
    #
    # $Revision: 1.10 $
    #

    # Set the interrupt character to Ctrl-c and do clean backspacing.
    if [ -t 0 ]
    then
            stty intr '^C' echoe
    fi

    # Set the TERM environment variable
    eval `tset -s -Q`

    # Set the default X server.
    if [ ${DISPLAY:-setdisplay} = setdisplay ]
    then
        if [ ${REMOTEHOST:-islocal} != islocal ]
        then
            DISPLAY=${REMOTEHOST}:0
        else
            DISPLAY=:0
        fi
        export DISPLAY
    fi

    # List files in columns if standard out is a terminal.
    ls()    { if [ -t ]; then /bin/ls -C $*; else /bin/ls $*; fi }

    export JAVA_HOME=$(/usr/bin/java)
    export PATH=$JAVA_HOME/jre/bin:$PATH

But typing echo $JAVA_HOME still yields:

    XXX:~$ echo $JAVA_HOME
    JAVA_HOME /usr/local/lib/jdk-8u25/
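For comparison, a corrected pair of lines — assuming the JDK really lives at /usr/local/lib/jdk-8u25, as the echo output suggests. JAVA_HOME should point at the JDK directory, not at the java binary: `$(/usr/bin/java)` runs java and substitutes its output rather than a path:

```shell
# Point JAVA_HOME at the JDK install directory (path taken from the question's output).
export JAVA_HOME=/usr/local/lib/jdk-8u25
# Prepend its bin directory to PATH so java/javac resolve from this JDK.
export PATH="$JAVA_HOME/bin:$PATH"
```

After editing ~/.profile, the change only takes effect in a new login shell (or after `source ~/.profile`), which may explain why the old value keeps appearing.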