Theme customisation – how to store common variables externally

I have been developing a custom extension for my theme over the last few weeks.

This is my first time “developing” with WordPress. Everything works fine and I am happy with the resulting functionality, although it needs some tidying.

My extension currently stores all data in an array in the user_meta table. Whenever a page that uses the extension loads, all of these variables are pulled:


```php
if (is_user_logged_in()) {
    $custom_meta = get_user_meta($userid, 'custom' . $postid, true);
    $custom1 = $custom_meta['custom1'];
    $custom2 = $custom_meta['custom2'];
    $custom3 = $custom_meta['custom3'];
    // .....
    $custom8 = $custom_meta['custom8'];
}
```

Is there a way to clean this up by storing these variables outside the main block, so I can use them without defining them all here?

Or is this normal and OK?
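One common tidy-up (a sketch, not the only option) is to stop creating eight named variables at all, or to define them in a single call with `extract()`. `is_user_logged_in()` and `get_user_meta()` are WordPress core functions; everything else below comes from the question:

```php
if ( is_user_logged_in() ) {
    $custom_meta = get_user_meta( $userid, 'custom' . $postid, true );

    // Option 1: keep the array and read $custom_meta['custom3'] etc.
    // wherever a value is needed -- no per-variable assignments at all.

    // Option 2: define $custom1 ... $custom8 in one call.
    if ( is_array( $custom_meta ) ) {
        extract( $custom_meta );
    }
}
```

Passing the array around (option 1), or wrapping the lookup in a small helper function, is generally considered cleaner than `extract()`, which can silently overwrite variables that already exist in scope.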


How can I set up Office 365 email alerts for people externally sharing files from a single document library?

I’ve been using the Office 365 Alerts from Security and Protection to keep tabs on mass downloading and deletion of files throughout SharePoint. However, I was wondering whether it’s possible to limit the scope of an alert to just a single document library. For example, I might want to track external sharing in a higher-security document library housed on a SharePoint site alongside other document libraries whose files are safe to share. I know it’s possible to limit alerts to a single site by adding “*” as a condition, but “*” doesn’t seem to work for a library. Is what I’m asking possible at all, or is there some alternative I’m not thinking of?

Cloned my laptop’s HDD to a new SSD. Now it won’t work unless my old HDD is connected externally

Using Macrium Reflect, I cloned my 2TB HDD to a new 500GB SSD. After installing the new drive, I got hit with “no bootable devices found”.

I ran Macrium’s rescue media to try to resolve the boot problems, but it didn’t fix anything. I could only get my laptop to boot again with the new SSD when I plugged in my old HDD through USB and used it as an entry point to the Windows Recovery Environment. From there I used this answer to fix the issue.

I know now with 100% certainty that my laptop’s using its new SSD when it’s booted up. But here’s the issue: the moment I unplug the old HDD, my laptop immediately bluescreens with error INACCESSIBLE_BOOT_DEVICE, and won’t boot again until I plug it back in.

I feel like there’s something easy that I’m missing here.
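One thing worth checking (a diagnostic sketch, assuming the clone left the Boot Configuration Data store pointing at the old disk): from an elevated Command Prompt on the booted system, inspect which device the boot entries reference, and repoint them at the SSD’s partition if they still name the old drive. `{default}` and `C:` are the usual values but may differ on a given machine:

```
bcdedit /enum all
bcdedit /set {default} device partition=C:
bcdedit /set {default} osdevice partition=C:
```

If `bcdedit /enum all` shows the entries already on the right partition, the dependency on the old disk lies elsewhere and this won’t help.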

Blocking external access to an Azure Web App breaks internal access

I am trying to lock down an Azure website we have running so it is reachable only from our intranet. I have browsed to App Services > Webappname > Networking > Access Restrictions, but when I enter the two “allow” rules for our production and user internal IP subnets, the site breaks and displays the error message “Error 403 – This web app is stopped.”


My desired end-state is to have the app work from our internal network, but not from the WWW.

My access rules are as follows:

  1. Allow
  2. Allow
  3. Deny Any
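For reference, the same rules can be applied with the Azure CLI instead of the portal; the resource group, app name, and subnets below are placeholders, not values from the question:

```shell
az webapp config access-restriction add --resource-group my-rg \
    --name webappname --rule-name allow-prod --action Allow \
    --ip-address 10.1.0.0/24 --priority 100
az webapp config access-restriction add --resource-group my-rg \
    --name webappname --rule-name allow-users --action Allow \
    --ip-address 10.2.0.0/24 --priority 110
```

Note that once any Allow rule exists, an implicit “Deny all” rule is applied automatically, so an explicit Deny Any entry is not strictly required.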

Magento 2 – assign customer group name to a variable in an external JavaScript file


  • Magento version – 2.3.x
  • External JavaScript –

    var mage_customer_group = ''; // other code related to the customer group

  • Pages – Category page and Custom product listing page (External JavaScript manages the product listing using simple AJAX)

I get the customer group name by overriding customer section updates and saving it into mage-cache-storage (localStorage).

Customer Data Management in Magento 2

In the external-js.js file, the 3rd-party vendor has logic to send prices and other information based on the customer group name. For now, product name, image, link and product price display properly. The issue occurs only when the category or custom product listing page loads: mage_customer_group gets a blank value.

Looking at the Network console, the external JavaScript ( loads first, and the customer section update is called afterwards.

Is there any way in Knockout JS to bind customer properties to the external JS?
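A minimal, self-contained sketch of the subscribe pattern Magento’s customer-data module uses: `customerData.get('customer')` returns a Knockout observable, so the external script can subscribe to updates instead of reading the value once at load time. The `observable()` helper below is a stand-in for `ko.observable` so the idea runs anywhere; the `group_name` field is the custom value from the question, not a default Magento field.

```javascript
// Stand-in for ko.observable: holds a value and notifies subscribers.
function observable(initial) {
  var value = initial;
  var subscribers = [];
  function obs() { return value; }
  obs.subscribe = function (cb) { subscribers.push(cb); };
  obs.set = function (next) {
    value = next;
    subscribers.forEach(function (cb) { cb(next); });
  };
  return obs;
}

// Stand-in for customerData.get('customer'):
var customerSection = observable({});

// External JS: subscribe instead of reading the value once on load.
var mage_customer_group = '';
customerSection.subscribe(function (customer) {
  mage_customer_group = customer.group_name || '';
  // ...re-run the price/listing logic here...
});

// Simulate the customer-section AJAX update arriving *after* the
// external script has loaded:
customerSection.set({ group_name: 'Wholesale' });
console.log(mage_customer_group); // -> 'Wholesale'
```

In real Magento code the equivalent is `require(['Magento_Customer/js/customer-data'], function (customerData) { customerData.get('customer').subscribe(...); })`, which fires whenever the section cache refreshes, so the external file no longer depends on load order.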

Run SFC on an Externally connected Hard Drive

Issue: the PRECISE command line text to run SFC (System File Check) on an EXTERNALLY connected hard drive.

Situation:

  • Host computer: Windows 7; its Windows HD partition is “C”.
  • Connected HD: connected DIRECTLY to the host logic board; has Windows 7 installed on it; drive letter of its Windows partition = “E”.

In your response PLEASE show the PRECISE command line in total. Thanks in advance for your attention and response. Patrick
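For the layout described (host Windows on C:, the connected drive’s Windows partition on E:), the offline form of SFC is the relevant one. The drive letters below are taken from the question and should be adjusted if they differ; run from an elevated Command Prompt on the host:

```
sfc /scannow /offbootdir=E:\ /offwindir=E:\Windows
```

`/offbootdir` points at the offline drive’s boot directory and `/offwindir` at its Windows directory, so the scan repairs the connected drive’s system files rather than the host’s.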

Let $S, S_1, S_2$ be circles of radii 5, 3 and 2 respectively, such that $S_1$ and $S_2$ touch externally and both touch $S$ internally.

Let $S, S_1, S_2$ be circles of radii $5, 3$ and $2$ respectively, such that $S_1$ and $S_2$ touch externally and both touch $S$ internally. What is the radius of the circle $S_3$ which touches $S_1$ and $S_2$ externally and $S$ internally?

I tried making a diagram and figuring it out, but cannot find a relation.
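One standard route (not the only one) is Descartes’ circle theorem. Since $3+2=5$, the centres of $S_1$ and $S_2$ lie on a diameter of $S$ and all three pairs of circles are tangent, so the theorem applies, with the enclosing circle carrying negative curvature $k_0=-\tfrac15$ and $k_1=\tfrac13$, $k_2=\tfrac12$:

$$k_3 = k_0 + k_1 + k_2 \pm 2\sqrt{k_0 k_1 + k_1 k_2 + k_2 k_0}.$$

Here the term under the root vanishes, $k_0 k_1 + k_1 k_2 + k_2 k_0 = -\tfrac{1}{15} + \tfrac{1}{6} - \tfrac{1}{10} = \tfrac{-2+5-3}{30} = 0$, so

$$k_3 = -\tfrac15 + \tfrac13 + \tfrac12 = \tfrac{19}{30}, \qquad r_3 = \tfrac{30}{19}.$$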

How to externally access a Kubernetes service of type “NodePort”, using the EC2 public IP

On EC2, I am running a single-node k8s cluster. On the node, a service of type “NodePort” is running with the exposed port “31380”.

I need to access this service externally over port 80.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-nginx
  labels:
    run: demo-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: demo-nginx
  type: NodePort
```

What additional config is needed to access this from ec2 public IP e.g. a successful “curl ec2publicIp:80” or via a browser?

```
~$ kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
demo-nginx   NodePort                 <none>        80:31380/TCP   17m
kubernetes   ClusterIP                <none>        443/TCP        23m
```

Note#1) I’m able to access the service from inside the Node, using the privateIP.


Note#2) I have tried a combination of IPtable rules e.g.-

```
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination
sudo iptables -I INPUT 1 -p tcp --dport 80 -j ACCEPT
sudo iptables -I INPUT 1 -m state --state RELATED,ESTABLISHED -j ACCEPT
```

Note#3) My ec2 security group and rule is configured to allow http traffic.

Note#4) I have updated the IP forwarding on my EC2 instance.

Note#5) The k8s service exposes a simple nginx deployment.

```
$ kubectl get deployments
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
demo-nginx   1/1     1            1           43m
```

Any insight into this issue would be highly appreciated.

N.B. I have already searched a lot of material but could not find a solution.
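For reference, with type NodePort the node listens only on the allocated nodePort (31380 here), not on the service’s port 80, so `curl ec2publicIp:80` will fail without a port redirect. A sketch of the manifest with the port fields made explicit (targetPort and nodePort values assumed from the question’s output):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-nginx
  labels:
    run: demo-nginx
spec:
  type: NodePort
  ports:
  - port: 80        # ClusterIP port, reachable only inside the cluster
    targetPort: 80  # container port on the pod
    nodePort: 31380 # port actually opened on the node itself
    protocol: TCP
  selector:
    run: demo-nginx
```

So `curl ec2publicIp:31380` should work once the security group allows that port; reaching the service on port 80 from outside needs either a DNAT rule from 80 to 31380 on the node, or a nodePort of 80, which requires widening the apiserver’s default `--service-node-port-range` of 30000–32767.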