Loading scripts and stylesheets together fails

I’m trying to load scripts and stylesheets together in the same function. At first I loaded the stylesheet by itself, and it worked once I added the admin_enqueue_scripts hook in addition to wp_enqueue_scripts.

Now that I’m trying to load the stylesheets and scripts in the same function, the scripts unfortunately fail to load, and the CSS stops loading as well.

I can’t find any typos, or anything online related to my issue.

    function loadAssets() {
        wp_enqueue_style( 'style', get_template_directory_uri() . '/css/style.css' );
        wp_enqueue_script( 'jquery', get_template_directory_uri() . '/js/jquery.js' );
        wp_enqueue_script( 'script', get_template_directory_uri() . '/js/script.js' );
    }

    // Applying those loading references
    add_action( 'wp_enqueue_scripts', 'loadAssets' );
    add_action( 'admin_enqueue_scripts', 'loadAssets' );

How can I load both JavaScript files and CSS files in the same function?
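For reference, a minimal sketch of a single callback that enqueues both kinds of assets (the mytheme-* handles are illustrative, not taken from the question); core’s bundled jQuery is declared as a dependency instead of being re-registered under its reserved 'jquery' handle:

    function mytheme_load_assets() {
        wp_enqueue_style( 'mytheme-style', get_template_directory_uri() . '/css/style.css' );
        wp_enqueue_script(
            'mytheme-script',
            get_template_directory_uri() . '/js/script.js',
            array( 'jquery' ), // depend on WordPress's bundled jQuery rather than shadowing its handle
            '1.0',
            true               // print in the footer
        );
    }
    add_action( 'wp_enqueue_scripts', 'mytheme_load_assets' );
    add_action( 'admin_enqueue_scripts', 'mytheme_load_assets' );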

Is there some mechanism in PHP to assign “less trust” to scripts in a given dir? (not a duplicate) [closed]

Kindly stop redirecting my questions to that unrelated one which doesn’t answer my question whatsoever. I’ve already read every answer there and it doesn’t help at all. If it did, why would I ask this much more specific question?

This has been a continuous worry and problem for me for ages:

For practical and logical reasons, I am forced to trust some third-party PHP libraries. These are installed, updated and managed with Composer, and live in C:\PHP-untrusted-external, entirely separated from my own PHP scripts, which live in C:\PHP-my-own.

The scripts in C:\PHP-my-own include and make use of the libraries in C:\PHP-untrusted-external.

Since there is no way that anyone, especially not I, could ever vet all the third-party code, and all updates, I’m looking for some way to “secure” or “sandbox” these in some way, even if it’s just partial.

Basically, I’m worried that one day, an update will make an edit such as:

unlink('C:\\');

Or:

phone_home_to_hacker_server($contents_of_my_harddrive);

If that happened, the scripts would happily run and do those actions. Nothing prevents them from doing so.

Is there really no way to specify in the php.ini configuration file, something like:

security.sandbox_dir = "C:\PHP-untrusted-external" 

Or:

security.refuse_network_connections_for_dir = "C:\PHP-untrusted-external"
security.refuse_disk_io_for_dir = "C:\PHP-untrusted-external"

… or something like that?

I don’t understand Docker. I have tried it countless times, and it makes no sense whatsoever to me. I don’t want Docker. I don’t want to deal with containers. Correction: I can’t deal with it. I’ve tried to, but didn’t understand it. Several times.

I just want PHP to support this in itself, and it seems more than reasonable to me. Doesn’t it seem reasonable to you?

The saying that “at some point, you have to trust other people” is way too generic/vague to apply here. It’s bypassing the problem. I don’t trust people at all, and for good reason. It seems idiotic that we are (apparently) just supposed to sit around and wait for the disaster to happen. At least if I could prevent the third-party scripts from doing anything with the file system and network, that would go some way toward mitigating this issue. It still won’t stop the scripts from lying about the numbers/data they return to me, but at least they couldn’t directly “phone home” or delete random files.
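For what it’s worth, PHP has no per-directory trust levels; the closest built-in knobs are process-wide, so they restrict the code in C:\PHP-my-own just as much as the Composer packages. A sketch of the real php.ini directives involved:

    ; These apply to the whole PHP process, not to a single directory.
    open_basedir = "C:\PHP-my-own;C:\PHP-untrusted-external"   ; file access outside these trees fails
    disable_functions = exec,passthru,shell_exec,system,proc_open,popen
    allow_url_fopen = Off     ; no http:// or ftp:// streams in fopen()/file_get_contents()
    allow_url_include = Off

None of these can distinguish which directory the calling script lives in, which is the gap the question is about.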

Cannot retrieve script handles of custom scripts; they are not in $wp_scripts

I’m trying to retrieve the handles of my custom scripts by using:

    global $wp_scripts;
    var_dump( $wp_scripts );

But this is only showing me the script handles + info of all the scripts enqueued by WordPress by default. None of my custom scripts, all enqueued and working on their respective pages, is shown in the output. Why?
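One detail worth knowing here: $wp_scripts is filled in as the enqueue hooks run, so dumping it before the custom enqueue callbacks fire (or on a request where they never fire) shows only core’s defaults. A sketch of inspecting it from a late hook instead (the hook and priority are illustrative choices):

    // Runs after the wp_enqueue_scripts callbacks on the same request.
    add_action( 'wp_print_scripts', function () {
        global $wp_scripts;
        var_dump( array_keys( $wp_scripts->registered ) ); // every registered handle
        var_dump( $wp_scripts->queue );                    // handles actually enqueued
    }, 100 );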

CSP: any way to prevent inline scripts dynamically created by a trusted external script?

Let’s say I have a simple web application which uses a single JavaScript (JS) file, loaded from its own domain, and which implements the restrictive Content Security Policy (CSP) of default-src 'self'. There’s a stored XSS in it: the JS file makes an Ajax call to an API that returns content stored in a database, and that content (which came from untrusted user input) has inline JavaScript in it. The JS file creates an element in the page’s document and sets its HTML content to the retrieved content. Let’s assume that this is the necessary way of doing what it needs to do, and let’s assume that sanitising/encoding the input is unfeasible. I know that user input should always be sanitised; just for the purposes of this question, skip that suggestion as a solution.

Is there any way to set a CSP such that this inline JavaScript, dynamically put onto the page by trusted JavaScript, is blocked?

Here’s a minimal working example (you may need to serve it from a simple HTTP server, e.g. php -S localhost:58000, rather than loading it as an .html file).

csp-test.html:

    <!DOCTYPE html>
    <html>
      <head>
        <meta charset="UTF-8" />
        <meta http-equiv="Content-Security-Policy" content="default-src 'self'">
        <script charset="utf-8">
          console.log('script') // blocked, OK
        </script>
        <script src="csp-test.js" charset="utf-8"></script>
      </head>
      <body>
        <img src="x" onerror="console.log('img')"/> <!-- blocked, OK -->
      </body>
    </html>

csp-test.js:

    console.log('trusted ext script') // executed, OK

    i = document.createElement('img')
    i.src = 'y'
    i.addEventListener('error',
      function(){ console.log('img by trusted ext script'); }) // executed, HOW TO BLOCK THIS?
    document.body.append(i)

Result: the console logs only 'trusted ext script' and 'img by trusted ext script'; the inline script and the onerror attribute are blocked as expected (screenshot omitted).
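Note that the addEventListener handler above is not inline script at all from CSP’s perspective; it is simply the trusted external file executing, which is why default-src 'self' has nothing to hook. For contrast, a sketch of the Ajax/innerHTML path described in the question (the endpoint is hypothetical), where an injected handler attribute really is inline and therefore is blocked by the same policy:

    fetch('/api/content')                        // hypothetical API endpoint
      .then(function (r) { return r.text(); })
      .then(function (html) {
        var div = document.createElement('div')
        div.innerHTML = html                     // e.g. '<img src=z onerror="evil()">'
        document.body.append(div)                // the onerror attribute is parsed as an
                                                 // inline handler and blocked by the CSP
      })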

Can/Should I self-host marketing scripts (Facebook Pixel, Google Analytics, etc.)?

Many marketing and plugin providers offer embeddable scripts, but many don’t serve them from a CDN, so their response time is slow. They also DON’T let their scripts be cached for long, because they need to update them from time to time (this applies to the Facebook pixel, Google and the other big players too). This slows down the site loading speed.

I’m thinking of hosting some of the script files myself, either on my servers or on a CDN, and auto-updating them whenever the original source file is updated (this would be done periodically and programmatically).

Can someone tell me the pitfalls of this strategy?

For example, Facebook and Google warn against self-hosting their scripts. I think their main concern is that users will have outdated scripts if these companies update them.

BUT – if I take really good care of that, then is it really an issue? If I check every 5 minutes and update my hosted file accordingly, then I’m assuming I could potentially have lost or bugged out data for those 5 minutes, but no more.
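A sketch of the periodic mirror job this describes, run from cron every few minutes (the upstream URL is the usual Facebook pixel location; the local path is illustrative):

    <?php
    // Re-fetch the upstream script and rewrite the local copy only when
    // the content has actually changed.
    $upstream = 'https://connect.facebook.net/en_US/fbevents.js';
    $local    = __DIR__ . '/public/vendor/fbevents.js';

    $fresh = file_get_contents($upstream);
    if ($fresh !== false
        && (!file_exists($local) || hash('sha256', $fresh) !== hash_file('sha256', $local))) {
        file_put_contents($local, $fresh, LOCK_EX);
    }

Note that this deliberately keeps the stale copy when the fetch fails, which is usually preferable to serving an empty file.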

Using NoScript – what approach to use with randomly named CDN scripts?

I ran across this on namecheap.com. This question isn’t about namecheap.com itself, which I have no problem trusting. Rather, what approach do you recommend in dealing with CDN scripts that have very long random names? Now, I know they’re most likely just how the main site decided to refer to external utility scripts they’ve put on a CDN.

But, if the main site had been hacked, the attackers could also inject a script to fire, and give it a long random name that looks innocuous by resembling a utility script.

Is there any particular reason to be more cautious about these long random named scripts than for the more recognizably-named ones? I suppose not, but they do make me a bit queasy on NoScript. Especially on sites that I will enter credit cards into.

(Screenshot of NoScript’s permissions list, showing several long randomly named CDN scripts, omitted.)

Secure a Jenkins node to only run approved scripts?

We have a series of Jenkins nodes that are used to deploy changes onto our SQL Servers, which works fine as long as everyone behaves and can be trusted.

The worry is that a rogue developer or hacker could simply add something like this to a Jenkinsfile and trash our data or performance:

    node('production') {
        stage('deploy_straight_to_prod') {
            …<do something bad here>
        }
    }

How do we protect against this? Ideally, only scripts that have been actively approved by a DBA should be allowed to run.

How can I reference classes from my scripts in my unit tests?

Bit of a noob question, but I’m having trouble getting unit testing working in Unity. I created a PlayMode test assembly and toggled “Enable playmode tests for all assemblies” as directed in the manual, but when I try to edit the test script, Visual Studio still isn’t recognizing references to the classes I’ve written. There’s also no option to manually add a reference within Visual Studio. Am I doing something wrong, or is this some kind of bug? It’s hard to imagine what kind of unit testing you could even do without references to your main project.
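One common cause, offered here as a sketch rather than a diagnosis: scripts that sit outside any assembly definition compile into the predefined Assembly-CSharp assembly, which a test .asmdef cannot reference by name; giving the game code its own assembly definition and referencing it from the test assembly makes the classes resolvable. Both names below are illustrative, and the exact .asmdef fields vary by Unity version.

MyGame.asmdef (placed at the root of the game scripts folder):

    {
        "name": "MyGame"
    }

Tests.asmdef (the PlayMode test assembly, referencing the game code):

    {
        "name": "Tests",
        "references": [ "MyGame" ],
        "optionalUnityReferences": [ "TestAssemblies" ]
    }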