How can I parse it with json_decode and save it to a PostgreSQL database with PHP?

I want to do web scraping with PHP. The URL returns JSON data; I want to pull this data and save it to the PostgreSQL database. This is the code:

<?php
$ch = curl_init();
$url = ""; // the JSON endpoint goes here
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$resp = curl_exec($ch);

if ($e = curl_error($ch)) {
    echo $e;
} else {
    $decoded = json_decode($resp, true);
    print_r($decoded);
}
curl_close($ch);

// your database connection here
$host = "localhost";
$user = "postgres";
$password = "****";
$dbname = "sok";

// Create connection
try {
    $linkid = @pg_connect("host=$host port=5432 dbname=$dbname user=$user password=$password");
    if (!$linkid) {
        throw new Exception("Could not connect to PostgreSQL server.");
    }
} catch (Exception $e) {
    die($e->getMessage());
}

// Insert one row per decoded element, using a parameterized query
foreach ($decoded as $row) {
    // adjust the array key to match your JSON structure
    pg_query_params($linkid, 'INSERT INTO il_adi (il) VALUES ($1)', array($row['il']));
}
pg_close($linkid);
?>

I can view the data I have captured in the array in the terminal. How can I save this to the database?
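For reference, the general pattern (decode the JSON, then loop and insert with parameterized placeholders rather than interpolating values into the SQL string) can be sketched in Python. Here sqlite3 stands in for PostgreSQL purely so the sketch is self-contained, and the JSON payload is a made-up example of the shape described above; a real PostgreSQL driver such as psycopg2 takes the same `execute(sql, params)` shape:

```python
import json
import sqlite3

# Hypothetical JSON payload of the shape the question describes
resp = '[{"il": "Adana"}, {"il": "Ankara"}, {"il": "Istanbul"}]'
decoded = json.loads(resp)

conn = sqlite3.connect(":memory:")  # stand-in for the PostgreSQL connection
conn.execute("CREATE TABLE il_adi (il TEXT)")

# One parameterized INSERT per decoded row
for row in decoded:
    conn.execute("INSERT INTO il_adi (il) VALUES (?)", (row["il"],))
conn.commit()

print(conn.execute("SELECT il FROM il_adi ORDER BY il").fetchall())
```

The placeholder style differs by driver (`?` for sqlite3, `%s` for psycopg2, `$1` for pg_query_params), but the loop-and-bind structure is the same.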

Custom Post Type with Category Separate

I have created two custom post types (movie type1, movie2) in functions.php, but when I make a new category, it shows up under Posts (Articles) and under the other CPTs (movie type1, movie2) as well. Why?


function custom_post_type_week() {
    // Set UI labels for Custom Post Type
    $labels = array(
        'name'                => _x( 'Movie type 1', 'Post Type General Name', 'twentythirteen' ),
        'singular_name'       => _x( 'Movie type 1', 'Post Type Singular Name', 'twentythirteen' ),
        'menu_name'           => __( 'movie type 1', 'twentythirteen' ),
        'parent_item_colon'   => __( 'Parent Movie', 'twentythirteen' ),
        'all_items'           => __( 'All Movies', 'twentythirteen' ),
        'view_item'           => __( 'View Movie', 'twentythirteen' ),
        'add_new_item'        => __( 'Add New Movie', 'twentythirteen' ),
        'add_new'             => __( 'Add New', 'twentythirteen' ),
        'edit_item'           => __( 'Edit Movie', 'twentythirteen' ),
        'update_item'         => __( 'Update Movie', 'twentythirteen' ),
        'search_items'        => __( 'Search Movie', 'twentythirteen' ),
        'not_found'           => __( 'Not Found', 'twentythirteen' ),
        'not_found_in_trash'  => __( 'Not found in Trash', 'twentythirteen' ),
    );

    $args = array(
        'label'               => __( 'movies', 'twentythirteen' ),
        'description'         => __( 'Movie news and reviews', 'twentythirteen' ),
        'labels'              => $labels,
        'supports'            => array( 'title', 'editor', 'excerpt', 'author', 'thumbnail', 'comments', 'revisions', 'custom-fields' ),
        'hierarchical'        => true,
        'public'              => true,
        'show_ui'             => true,
        'show_in_menu'        => true,
        'show_in_nav_menus'   => true,
        'show_in_admin_bar'   => true,
        'menu_position'       => 5,
        'can_export'          => true,
        'has_archive'         => true,
        'exclude_from_search' => false,
        'publicly_queryable'  => true,
        'capability_type'     => 'page',
        'show_in_rest'        => true,
        // This is where we add taxonomies to our CPT
        'taxonomies'          => array( 'category', 'post_tag' ),
    );

    register_post_type( 'movies', $args );
}
add_action( 'init', 'custom_post_type_week', 0 );

What can cause higher CPU time and duration for a given set of queries in trace(s) ran on two separate environments?

I’m troubleshooting a performance issue in a SQL Server DR environment for a customer. They are running queries that consistently take longer in their environment than in our QA environment. We analyzed traces captured in both environments with the same parameters/filters, the same version of SQL Server (2016 SP2), and the exact same database. Both environments were picking the same execution plan(s) for the queries in question, and the number of reads/writes was close in both. However, the total duration of the process in question and the CPU time logged in the trace were significantly higher in the customer environment: total duration of all processes in our QA environment was around 18 seconds versus over 80 seconds for the customer, and our CPU time was close to 10 seconds while theirs was also over 80 seconds. Also worth mentioning, both environments are currently configured with MAXDOP 1.

The customer has less memory (~100 GB vs. 120 GB) and slower disks (10k HDD vs. SSD) than our QA environment, but more CPUs. Both environments are dedicated to this activity and should have little to no external load that wouldn’t match. I don’t have all the details on the CPU architecture they are using; I’m waiting for some of that information now. The customer has confirmed they have excluded SQL Server and the data/log files from their virus scanning. Obviously there could be a ton of issues in the hardware configuration.

I’m currently waiting to see a recent snapshot of their wait stats and system DMVs; the data we originally received didn’t appear to show any major CPU, memory, or disk latency pressure. I recently asked them to check whether the Windows power setting is in performance or balanced mode, though I’m not certain that throttled CPUs would account for the impact we’re seeing.

My question is: what factors can affect CPU time and, ultimately, total duration? Is CPU time, as shown in a SQL trace, based primarily on the speed of the processors, or are there other factors I should be taking into consideration? The fact that both are generating the same query plans, with all other things being as close as possible to equal, makes me think it’s related to the hardware SQL Server is installed on.

SEO – onload components seen as separate pages by Google

I have tried to optimize my blog by loading some components after the page load to improve performance. Since doing this, performance has improved, but I now see that those components have been indexed in Google search.

I have used the following code to load my components:

window.onload = function (e) {
    loadComments();
    loadFeeds();
}

and then one of the functions:

function loadComments() {
    console.log('Loading comments');
    // NOTE: the URL below is emitted by server-side PHP; the fragment is shown as captured
    fetch('".rand(0,1500); ?>', {
        method: 'GET',
        headers: new Headers()
    })
    .then(response => response.text())
    .then((response) => {
        document.getElementById('comments-content').innerHTML = response;
        // PREFEED COMMENT FORM
        const reply_links = document.querySelectorAll(".feed_form");
        for (let x = 0; x < reply_links.length; x++) {
            const local_reply = reply_links[x];
            local_reply.addEventListener("click", feedComment);
        }
    })
    .catch((err) => console.log(err));
}

I can see that the url is now indexed in Google and that’s not what I want.

Should I load the page differently? Or should I add something to the loaded component?
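One common approach (assuming you control the endpoint that serves these fragments) is to mark the fragment responses with an `X-Robots-Tag: noindex` HTTP header, so crawlers drop them from the index while the fetch from your page keeps working. A minimal sketch using only Python's standard library; the `/comments` path and the fragment body are made up for illustration:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class FragmentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<div>comments fragment</div>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # Tell crawlers not to index this partial response
        self.send_header("X-Robots-Tag", "noindex")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), FragmentHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/comments") as r:
    robots_header = r.headers["X-Robots-Tag"]
server.shutdown()
print(robots_header)
```

In PHP the equivalent would be a `header()` call at the top of the fragment script; the key point is that the header travels with the fragment response, not with the main page.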


Has the Underdark ever been a separate plane to the Material Plane?

I play D&D 5e; I am not that familiar with the other editions of D&D. However, I’m looking for lore on the Underdark from any edition, since the settings (e.g. the Forgotten Realms) are still roughly common to most editions (even if certain events have occurred in some editions and not in others).

For context, in my own homebrew universe, I’ve decided that the Underdark is in fact another plane, although it is accessible from the Material Plane via certain tunnels and such that are like subtle portals (similar to Fey Crossings). However, this question is not about my homebrew universe (which I doubt I’ll change regardless of the outcome of this question).

I was looking into the Underdark, searching through information online and in 5e books, to see whether the Underdark is a different plane or simply beneath the “surface” of the Material Plane. It seems to be the latter, which means I’ll have to go to greater efforts to adapt existing adventures written for settings like the Forgotten Realms to my homebrew universe.

However, I believe I got my idea about the Underdark being a different plane from somewhere, so I was wondering if there have ever been any adventures or settings within D&D where the Underdark has been considered a different plane.

I’ve read online that Matt Colville has used this idea from an adventure called “Night Below”, which was apparently an old 2e adventure. I used to watch some of his videos, so maybe that’s where I got this idea from? But even if this lead proves false, are there any adventures or settings that have ever treated the Underdark as a plane separate from the Material Plane?

Why do we need a separate notation for П-types?


I am confused about the motivation behind the need for a separate notation for П-types, which you can find in type systems from λ2 on. The answer usually goes like so: think about how one can represent the signature of the identity function. It can be λa:type.λx:a.x or λb:type.λx:b.x. The subtle part, they say, is that these two signatures are not only unequal, they are not even alpha-equivalent, as the type variables a and b are free variables inside their corresponding abstractions. So to overcome this pesky syntactic issue, we introduce the П binder, which plays nicely with alpha-conversion.

So the question: why is that? Why not just fix the notion of alpha-equivalence?


Oh, silly of me, λa:type.λx:a.x and λb:type.λx:b.x are alpha-equivalent. But why aren’t a:type -> a -> a and b:type -> b -> b, then?

UPDATE suc z:

Aha, interesting, I guess this is a perfect example of selective blindness =D

I am reading the book Type Theory and Formal Proof, and in the chapter about λ2 the author motivates the existence of П using exactly that kind of argument: one can’t say that \t:*.\v:t.v : * -> t -> t, because this would make the two alpha-equivalent terms \t:*.\v:t.v and \g:*.\v:g.v have different types, as the corresponding types are not alpha-equivalent, whereas types like t:* -> t -> t are in fact alpha-invariant. Mind the difference between t:* -> t -> t and * -> t -> t. But doesn’t this make the argument a bit trivial? Is it even meaningful to talk about a type a -> b where a and b are variables unbound by any quantifier? Andrej Bauer pointed out in the comments that П indeed resembles a lambda abstraction with a few additional bells and whistles.
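To make the contrast concrete (my own illustration, not from the book): with П the type variable is bound, so renaming it is ordinary alpha-conversion; in the arrow-only spelling it is free, and alpha-conversion never renames free variables.

```latex
% Bound by \Pi: renaming t to g is alpha-conversion
\lambda t{:}{*}.\,\lambda v{:}t.\,v \;:\; \Pi t{:}{*}.\; t \to t
\qquad
\Pi t{:}{*}.\; t \to t \;=_\alpha\; \Pi g{:}{*}.\; g \to g
% No binder: t occurs free, so these are distinct types
{*} \to t \to t \;\neq_\alpha\; {*} \to g \to g
```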

All in all, I am done with that one, thank you guys.

Why does this Base64 encoding of a password string have the last two characters in a separate encoding?

As I was testing the security of my own network, I visited the login page of my router. I wanted to see how it managed the credentials. That’s when I noticed it transformed the entered password into ciphered text with some obvious visible patterns. I found this via Burp Suite, and it decoded with Base64. However, the decoded text only provided the password in clear text except for the last two characters.

Transformed              | Clear text PW    | Decoded from base64
-------------------------|------------------|--------------------
YWRtaW4%3D               | admin            | admaW4%3D
cGFzc3dvcmQ%3D           | password         | passwocmQ%3D
MTIzNGY%3D               | 1234f            | 123NGY%3D
YWRtaW5hZG1pbjIyMjI%3D   | adminadmin2222   | adminadmin22MjI%3D
YWRtaW5hZG1pbjIyMTE%3D   | adminadmin2211   | adminadmin22MTE%3D

All obfuscated text ends with %3D, which is something I wanted to comment on, but I just found out from this link that it’s due to URL encoding of the ‘=’ sign.

And I just figured out the answer to this question whilst creating it.

The process is: clear text password => Base64 encoding => URL encoding, which turns the trailing '=' padding into %3D. The garbled last characters in the "decoded" column come from running the URL-encoded string through a Base64 decoder without URL-decoding it first.
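That pipeline can be checked in a few lines of Python (the router’s internals are guesswork on my part, but the outputs line up with the table above):

```python
import base64
import urllib.parse

def obfuscate(password: str) -> str:
    """Base64-encode, then URL-encode: the '=' padding becomes '%3D'."""
    encoded = base64.b64encode(password.encode()).decode()
    return urllib.parse.quote(encoded)

print(obfuscate("admin"))     # YWRtaW4%3D
print(obfuscate("password"))  # cGFzc3dvcmQ%3D
print(obfuscate("1234f"))     # MTIzNGY%3D
```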

How can I add a separate damage die to my homebrew magic item so that it rolls two separate dice on Beyond’s new dice roller?

Got curious for when my paladin reaches level 11 paladin / 2 warlock and gets a permanent 1d8 Divine Smite all the time. My main weapon is magical, gifted by my patron, so I’m looking to see if it’s possible to add the extra die as a feature so it’s properly displayed and usable in D&D Beyond’s dice roller. I can’t figure out the right combo on the magic item creation page.

Refresh token using a separate auth server?

I’d like to use JWTs for user authorization. My intention is to use an auth server and an app server, keeping them separate. This way my auth server will be the only JWT-issuing server and the only server with login and sign-up logic.

I’ve recently run into this issue, however: how do I refresh a user’s access token if my auth server is separate?

I’d like to use middleware in Node.js to check the validity of a JWT, but if validation fails, I’d need to contact the auth server and present the user’s refresh token to get a new access token.

So, what’s the best way to do this? Would I use middleware to issue a remote request to get a new JWT? It seems there’s no other way, so I thought I’d check with the community.
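The flow described above, validate locally and only call the auth server when the access token is expired, can be sketched as follows. This is a toy illustration: HMAC-signed strings stand in for real JWTs, and the `refresh` function stands in for the remote request to the auth server’s refresh endpoint; in production you would use a real JWT library and an HTTP call instead.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"shared-verification-key"       # app server only needs the verify key
REFRESH_STORE = {"refresh-abc": "alice"}  # auth server's refresh-token DB (toy)

def issue_access_token(user, ttl=3600):
    """Auth-server side: mint a signed token (stand-in for a real JWT)."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"sub": user, "exp": time.time() + ttl}).encode()
    ).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify(token):
    """App-server middleware: check signature and expiry locally, no network."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims if claims["exp"] > time.time() else None

def refresh(refresh_token):
    """Stand-in for the remote call to the auth server's refresh endpoint."""
    user = REFRESH_STORE.get(refresh_token)
    return issue_access_token(user) if user else None

# Middleware logic: try the access token, fall back to one refresh round-trip
token = issue_access_token("alice", ttl=-1)  # simulate an expired token
claims = verify(token)
if claims is None:
    token = refresh("refresh-abc")           # the only remote request needed
    claims = verify(token)
print(claims["sub"])
```

The design point is that the app server verifies every request locally and only pays the network round-trip to the auth server on expiry, which keeps the two services decoupled.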

Are the Githyanki and the Githzerai separate races?

"If the two races were ever to team up against the illithids" – MToF, pg 85

"The githzerai were born as a race…" – MToF, pg 93, under the heading Githzerai

These two quotes imply that the Githyanki and the Githzerai are (lore-wise) two separate races. However,

"Long gone are the days when the gith race" – MToF, pg 87, under the heading VLAAKITH’S DILEMMA

Are the Gith one race with two subraces, or two races?

I would prefer 5e sources that distinctly mention this, but if nothing else is available, then sources from other editions are acceptable.