Manipulating JavaScript execution, since Burp can only see HTTP requests/responses

There is a piece of JavaScript executing in my browser that generates the session token. (This was a design requirement for the dev team: the session token is generated on the client side. Don't ask me why, lol.)

I want to be able to modify JavaScript variables during execution (just as if I were debugging in NetBeans, for instance).
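To make it concrete, this is the kind of trap I have in mind from the DevTools console (just a sketch; window.sessionToken is a made-up name, since I don't know the real variable):

    // Console sketch: trap writes to a global (window.sessionToken is a placeholder)
    let _tokenValue;
    Object.defineProperty(window, 'sessionToken', {
      configurable: true,
      get() { return _tokenValue; },
      set(value) {
        debugger; // DevTools pauses here the moment any script assigns the token
        _tokenValue = value;
      },
    });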

I thought I'd use Burp Suite, but it only catches requests, not the building of a request by JS.

What can I do to achieve that?

Also, I thought I'd use the browser debugger, but strangely, none of the loaded JS files seems to be generating the session token. One of them must be doing it, though, because I see the token later in Burp's interceptor.
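The best lead I have so far is to hook the point where the request gets assembled, so the debugger catches whichever script supplies the token. Something like this sketch ('X-Session-Token' is a placeholder for whatever header actually shows up in Burp):

    // Console sketch: break wherever the token is attached to an outgoing request.
    const origSetRequestHeader = XMLHttpRequest.prototype.setRequestHeader;
    XMLHttpRequest.prototype.setRequestHeader = function (name, value) {
      if (name.toLowerCase() === 'x-session-token') { // placeholder header name
        debugger; // the call stack here points at the script that built the value
      }
      return origSetRequestHeader.call(this, name, value);
    };
    // If the app uses fetch() instead, wrap that too (crude: breaks on every call)
    const origFetch = window.fetch;
    window.fetch = function (...args) {
      debugger; // inspect args for the token in the URL, headers, or body
      return origFetch.apply(this, args);
    };

From the paused debugger, walking up the call stack should lead back to the script that generated the value.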

Any help here ?

MySQL InnoDB cluster auto rejoin failed

Three-node cluster, single-primary mode, with heavy read/write load on the primary node. I restarted the primary, and node 3 became the new primary. After the restart, the old primary was stuck in the recovery state:

"recoveryStatusText": "Distributed recovery in progress",                  "role": "HA",                  "status": "RECOVERING"    select * from gr_member_routing_candidate_status; +------------------+-----------+---------------------+----------------------+ | viable_candidate | read_only | transactions_behind | transactions_to_cert | +------------------+-----------+---------------------+----------------------+ | NO               | YES       |                   0 |                 8401 | +------------------+-----------+---------------------+----------------------+ 

This transactions_to_cert count never decreased, even after 15 minutes.
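For what it's worth, I was polling it like this from MySQL Shell's JS prompt (the account and host are placeholders; on my setup the view lives in the sys schema), and the numbers never moved:

    // mysqlsh --js sketch; account/host below are placeholders, not my real values
    shell.connect('clusterAdmin@dev-mysql-01:3306');
    var res = session.runSql('SELECT * FROM sys.gr_member_routing_candidate_status');
    print(res.fetchAll());
    print(dba.getCluster().status()); // recoveryStatusText per member, as quoted above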

Then I tried rebooting node 2, and it also went into recovery mode.

Finally I restarted node 3, and that was the end of it: the cluster is now saying there is no eligible primary, and I am not able to recover it.
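The next thing I am considering is MySQL Shell's AdminAPI reboot path, since that is the documented way out when no primary can be elected. A rough sketch ('myCluster' and the admin account are placeholders):

    // mysqlsh --js, connected to the member believed to have the most recent GTID set
    shell.connect('clusterAdmin@dev-mysql-03:3306');
    var cluster = dba.rebootClusterFromCompleteOutage('myCluster');
    print(cluster.status()); // then rejoin and verify the remaining members

Does that look like the right call in this state?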

ERROR LOG:

2020-06-03T15:24:19.735261Z 2 [Note] Plugin group_replication reported: '[GCS] Configured number of attempts to join: 0'
2020-06-03T15:24:19.735271Z 2 [Note] Plugin group_replication reported: '[GCS] Configured time between attempts to join: 5 seconds'
2020-06-03T15:24:19.735285Z 2 [Note] Plugin group_replication reported: 'Member configuration: member_id: 1; member_uuid: "41add3fb-9abc-11ea-a59d-42010a00040b"; single-primary mode: "true"; group_replication_auto_increment_increment: 7; '
2020-06-03T15:24:19.748017Z 6 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2020-06-03T15:24:19.846752Z 9 [Note] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position 0, relay log './dev-mysql-01-relay-bin-group_replication_applier.000002' position: 4
2020-06-03T15:24:19.846765Z 2 [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
2020-06-03T15:24:19.868161Z 0 [Note] Plugin group_replication reported: 'XCom protocol version: 3'
2020-06-03T15:24:19.868183Z 0 [Note] Plugin group_replication reported: 'XCom initialized and ready to accept incoming connections on port 33061'
2020-06-03T15:24:21.722047Z 2 [Note] Plugin group_replication reported: 'This server is working as secondary member with primary member address dev-mysql-03:3306.'
2020-06-03T15:24:21.722179Z 0 [ERROR] Plugin group_replication reported: 'Group contains 3 members which is greater than auto_increment_increment value of 1. This can lead to an higher rate of transactional aborts.'
2020-06-03T15:24:21.722427Z 24 [Note] Plugin group_replication reported: 'Establishing group recovery connection with a possible donor. Attempt 1/10'
2020-06-03T15:24:21.722550Z 0 [Note] Plugin group_replication reported: 'Group membership changed to dev-mysql-01:3306, dev-mysql-03:3306, dev-mysql-02:3306 on view 15910200188085516:19.'
2020-06-03T15:24:21.803914Z 24 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='dev-mysql-02', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''.
2020-06-03T15:24:21.855802Z 24 [Note] Plugin group_replication reported: 'Establishing connection to a group replication recovery donor bd472ec4-9abc-11ea-976d-42010a00040c at dev-mysql-02 port: 3306.'
2020-06-03T15:24:21.856155Z 26 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2020-06-03T15:24:21.862169Z 26 [Note] Slave I/O thread for channel 'group_replication_recovery': connected to master 'mysql_innodb_cluster_1@dev-mysql-02:3306',replication started in log 'FIRST' at position 4
2020-06-03T15:24:21.918855Z 27 [Note] Slave SQL thread for channel 'group_replication_recovery' initialized, starting replication in log 'FIRST' at position 0, relay log './dev-mysql-01-relay-bin-group_replication_recovery.000001' position: 4
2020-06-03T15:24:42.718769Z 0 [Note] InnoDB: Buffer pool(s) load completed at 200603 15:24:42
2020-06-03T15:24:55.206155Z 41 [Note] Got packets out of order
2020-06-03T15:29:29.682585Z 0 [Warning] Plugin group_replication reported: 'Members removed from the group: dev-mysql-02:3306'
2020-06-03T15:29:29.682635Z 0 [Note] Plugin group_replication reported: 'The member with address dev-mysql-02:3306 has unexpectedly disappeared, killing the current group replication recovery connection'
2020-06-03T15:29:29.682729Z 27 [Note] Error reading relay log event for channel 'group_replication_recovery': slave SQL thread was killed
2020-06-03T15:29:29.682759Z 0 [Note] Plugin group_replication reported: 'Group membership changed to dev-mysql-01:3306, dev-mysql-03:3306 on view 15910200188085516:20.'
2020-06-03T15:29:29.683116Z 27 [Note] Slave SQL thread for channel 'group_replication_recovery' exiting, replication stopped in log 'mysql-bin.000009' at position 846668856
2020-06-03T15:29:29.689073Z 26 [Note] Slave I/O thread killed while reading event for channel 'group_replication_recovery'
2020-06-03T15:29:29.689089Z 26 [Note] Slave I/O thread exiting for channel 'group_replication_recovery', read up to log 'mysql-bin.000009', position 846668856
2020-06-03T15:29:29.700329Z 24 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 2/10'
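And if a primary does come back while one member stays stuck in RECOVERING, like dev-mysql-01 above, my understanding is that the manual rejoin also goes through the AdminAPI (again a sketch with a placeholder account):

    // mysqlsh --js, connected through the current primary
    shell.connect('clusterAdmin@dev-mysql-03:3306');
    dba.getCluster().rejoinInstance('dev-mysql-01:3306'); // re-triggers distributed recovery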