Drupal maintenance across several environments

I have a production environment for a Drupal 7 site.

This environment is replicated to a staging environment, which we use for testing updates to modules and Drupal core (same major version).

We would like to sync this environment with production every time we want to test an update, and then replicate the changes back.

The way I envision the process is the following:

  1. rsync everything from production to staging, pulling the updated content from production and clobbering any staging changes, deleting any files from staging that do not exist in production (see the sketch after this list)

  2. replicate the database from production to staging, so we can test pages and functionality on a replica of the production content

  3. perform updates on the staging environment

  4. rsync from staging to production, without deleting content in production that might not exist in staging
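
For reference, a rough sketch of what I have in mind for steps 1 and 2 (assuming SSH access between the hosts and Drush site aliases @prod/@stage; every path and hostname below is a placeholder, not our real setup):

    #!/usr/bin/env bash
    # Hypothetical sync script for steps 1-2: production -> staging.
    # Assumes SSH access and Drush site aliases @prod / @stage.
    set -euo pipefail

    PROD="deploy@prod.example.com:/var/www/drupal"  # placeholder
    STAGE="/var/www/drupal"                         # placeholder

    # Step 1: mirror code and files, deleting anything staging-only.
    rsync -az --delete "$PROD/" "$STAGE/"

    # Step 2: copy the production database onto staging.
    drush sql-sync @prod @stage -y

    # Clear all caches so staging serves the fresh content.
    drush @stage cc all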

Question #1: Do updates make changes to the database, or are these file-only changes? Will rsync be enough to get production in sync with staging after the updates?

Question #2: Is there a better, more robust way to achieve this?

Question #3: Should I set the site into maintenance mode while doing #4? Can this be done by setting maintenance mode programmatically in a config file, rsyncing that file along with everything else, then resetting maintenance mode and rsyncing that file again? (I am looking to build a script to automate this.)
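
For #3, what I have in mind for the script is roughly the following (again assuming a Drush alias @prod; paths and hosts are placeholders). I suspect updates touch the database as well (update.php), which is why I have included drush updatedb:

    #!/usr/bin/env bash
    # Hypothetical step-4 wrapper: maintenance mode on, push code,
    # run schema updates, maintenance mode off. Runs on staging.
    set -euo pipefail

    drush @prod vset maintenance_mode 1   # D7 stores this as a variable

    # Push code/files to production WITHOUT --delete, as in step 4.
    rsync -az /var/www/drupal/ deploy@prod.example.com:/var/www/drupal/

    drush @prod updatedb -y               # apply pending database updates
    drush @prod cc all
    drush @prod vset maintenance_mode 0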

Microsoft Visual Studio Code Terminal – Bash does not stably integrate Python, Conda, and environments

I am trying to make full use of Visual Studio Code, for which a functional integrated terminal would be very helpful!

The problem is that every time I reopen the VS Code terminal in Bash, the settings are deleted; in my case that means the Conda and Python integration in Bash. In Git Bash on my computer this is not the case; there the integration is stable. For now I have worked around it by manually re-integrating Conda and Python every time I open a new terminal, with the following lines:

    echo 'export PATH="$PATH:/c/dev_ops/anaconda:/c/dev_ops/anaconda/Scripts"' >> .profile
    echo 'alias python="winpty python.exe"' >> .profile
    source .profile

Now this is an irksome quick fix that works for some use cases, but not for creating or activating Conda environments, because those require restarting the terminal after making changes, and after the restart everything is deleted again.
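
If I understand Bash startup correctly, VS Code may be launching Bash as an interactive non-login shell, which reads ~/.bashrc but not ~/.profile, while Git Bash runs a login shell that does read ~/.profile. If that guess is right, appending the lines to ~/.bashrc instead should make them persist:

    # Run once: append the settings to ~/.bashrc so every new
    # interactive shell picks them up (paths from my workaround above).
    echo 'export PATH="$PATH:/c/dev_ops/anaconda:/c/dev_ops/anaconda/Scripts"' >> ~/.bashrc
    echo 'alias python="winpty python.exe"' >> ~/.bashrc
    source ~/.bashrc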

It would be great if someone knows something about this, thank you!

Is there a technical security standard for internet facing test environments?

We have a number of test environments that are permanently internet-facing to accommodate external and automated testers with dynamic IP addresses. While we regularly check the servers for security vulnerabilities, we found that they had been indexed by Google and other search engines. This led to a situation where customers were clicking on search-engine links and attempting to use the UAT environment for real business. We’ve put a few controls in place to ensure this does not happen again, but to avoid future mistakes I was hoping there is a comprehensive standard available that says, e.g.:

  1. Ensure sites are not searchable by search engines, by making use of robots.txt or robots meta tags (see the sketch after this list)
  2. Clearly mark UAT environments as different from Prod environments
  3. Etc…
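
For illustration, I imagine item 1 looking roughly like this on each test server (the web root path is a placeholder), with the caveat that robots.txt is only advisory; authentication or IP allow-listing would be the real control:

    # Hypothetical: serve a deny-all robots.txt from each test host's
    # web root so compliant crawlers skip the site entirely.
    printf 'User-agent: *\nDisallow: /\n' > /var/www/html/robots.txt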

Is there a standard / checklist available for this specific use case?

Best approach to deploying new features across multiple environments

We currently have four environments in AWS: development, test, beta, and production. Previously, we released from develop to beta at the end of each sprint (2 weeks). We would create feature branches off of develop.

We would then deploy to beta and production every month by merging across environments, such as merging beta into production and test into beta. This process was both slow to deliver value and high-risk, as lots of changes from different teams would build up.

So I’ve attempted to alter the process: we now branch off of master and create pull requests into each environment instead, with the intention of releasing smaller deliverables more often.

However, we’re now dealing with misaligned branches and changes getting out of sync, and we are unable to pull any change other than master into our feature branches without ‘polluting’ them with other teams’ changes.

My initial intention was for us to deploy things to beta/production as soon as they’re ready, but the business is still insisting on two-week cycles, which means changes are building up again.

So it’s led me to despair: surely it’s possible to deploy small sets of changes across several environments? Has anyone found a solution to this, or any advice at all?
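
For context, what I had imagined was roughly a trunk-based flow: every change merges to master exactly once, and environments are promoted by deploying an immutable tag of master rather than by merging environment branches into one another (branch and tag names below are made up):

    # Feature work branches off master and merges back exactly once.
    git checkout -b feature/small-change master
    # ...commit, open a pull request into master, merge...

    # Promotion: tag the commit each environment should run, and let
    # the pipeline deploy the tag, so environments never diverge.
    git tag -a test-2024-05-01 -m "promote to test" origin/master
    git push origin test-2024-05-01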

How to white-balance photos shot in mixed-lighting environments?

I have a dSLR and I often find myself taking pictures of people in ‘mixed lighting’ environments (e.g. tungsten lighting and daylight, fluorescent and tungsten, or even the ‘nightmare lighting scenario’ of mixed fluorescent, tungsten and daylight). Since white balancing won’t work (or at least it won’t work completely) to remove the color cast from these sorts of mixed environments, what can I do to manage the multiple types of lighting in my environment?


Asked by Finer Recliner:

I was at a wedding recently, and the reception hall had these huge windows that let in a lot of sunlight. The overhead lamps used inside the reception hall had a strong yellow tint. Given the two different types of light source, “white” seems to have a vastly different definition in different parts of the photo. I found that a lot of my photos were nearly impossible to white-balance in post-production (I shot in RAW).

Here is an example from the set I shot at the wedding:

[Image: mixed lighting at wedding]

If I set the white balance relative to something outdoors, everything inside looks too yellow (as seen). If I set the white balance relative to something indoors, everything that falls in the sunlight looks too blue. Neither looks particularly “good”.

So, does anyone have tips for how to handle this sort of situation next time I take a shot? I’ll also accept answers that offer post-processing advice.

// As an aside, I’m just a photography hobbyist…not a professional wedding photographer 😉

Is Linux dangerous for secure environments because it’s open source? [duplicate]


I work for a company that sells hardware products with Linux embedded into the product. We lost a job for a defense contractor due to our hardware running Linux. Their reason was “because it’s open source and a security risk”. On the flip side, we sold the product to the US Marines several years ago and they specifically required that we run Linux.

Is Linux more or less secure due to its open source nature?

Hybrid Federated Search – Multiple Environments

We have Dynamics 365 Online, which is currently storing documents in Sharepoint Online. We have a Dev/Sandbox instance for D365 as well as Prod, and each of the two instances is connected to a different SP Online site. (Same tenant, though.) So far, this is all fine.

However, we’re looking at moving to SP On-Prem for our internal file server. In order to search both the Online and On-Prem content, it looks like I can do a Hybrid Search (specifically, the Outbound Hybrid Federated Search). What I’m trying to figure out is this: can I have two different searches configured on the same SP On-Prem server, one that searches all of the internal content plus the D365 Prod SP Online site, and another, for admins only, that searches the internal content plus the D365 Sandbox/Dev SP Online site?

How do I restore Magento Cloud environments deleted by GitHub integration?

I followed the instructions to set up a GitHub integration with my Magento Cloud Pro project. This project has not been made live yet, but it has been in development for several months and therefore has 1k+ commits. I took a snapshot of my Integration environment and ran the command as documented:

magento-cloud integration:add --type=github --project ... 

Additional prompts appeared after I ran it, with what seemed to be reasonable defaults, which I accepted:

    Build pull requests (--build-pull-requests)
    Build every pull request as an environment? [Y|n]

    Build pull requests post-merge (--build-pull-requests-post-merge)
    Build pull requests based on their post-merge state? [y|N]

    Clone data for pull requests (--pull-requests-clone-parent-data)
    Clone the parent environment's data for pull requests? [Y|n]

    Fetch branches (--fetch-branches)
    Fetch all branches from the remote (as inactive environments)? [Y|n]

    Prune branches (--prune-branches)
    Delete branches that do not exist on the remote? [Y|n]

After the last question it created a webhook and set up the integration.

Oh, then it deleted all my environments apart from Master, Production, and Staging.

[Screenshot: “Bye bye bye”]

I’m guessing it’s the last prompt, --prune-branches, that did me in. Shame on me for not pausing to consider what it might do (note: this option isn’t documented in the instructions).
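
My first thought is to stop any further pruning before attempting a restore. Assuming the Magento Cloud CLI mirrors the Platform.sh CLI’s integration commands (I have not verified this; the integration ID is a placeholder):

    # Find the GitHub integration's ID, then turn branch pruning off.
    magento-cloud integration:list
    magento-cloud integration:update <integration-id> --prune-branches false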

What can I do to restore these environments?

Unlike environments deleted through the Magento Cloud GUI, these appear to be gone entirely; they aren’t listed as deactivated.
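
The commits should still exist in my local clone, so my best guess at recovery is to push each branch to GitHub (now the source of truth for the integration) so the integration recreates its environment, then activate it if it comes back inactive. All names below are placeholders, and I have not verified that this works:

    # From a local clone that still has the branch:
    git push github my-feature-branch
    # If the environment is recreated as inactive, activate it:
    magento-cloud environment:activate my-feature-branch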