"A place to share knowledge about life science"
A site about DNA/RNA life science in the cloud.
Domain + functioning site for sale.
Site generates science related content via the Google news API. Runs all on its own.
Valued on GoDaddy at $ 1,359.00
I want to host my own cloud instead of Google Drive/Images, but Google Images can find your pictures by more than just filename: if you type 'dog', Google will show all your dog images.
Is there something like that in ownCloud or Nextcloud?
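Nextcloud has an app called Recognize that does roughly this: it runs local image classification and adds searchable tags to your photos (I'm not aware of a direct ownCloud equivalent). A minimal sketch of enabling it from the command line, assuming a standard Nextcloud install where the web server user is www-data:

```shell
# Install and enable the Recognize app via Nextcloud's occ CLI
sudo -u www-data php occ app:install recognize
sudo -u www-data php occ app:enable recognize
# Classification then runs via background jobs; tagged photos become
# searchable by label (e.g. "dog") in the Photos app.
```

Paths and the web server user differ per distribution, so adjust accordingly.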
I’m running CentOS 7 GNU/Linux on a Google Cloud Compute Engine instance. On my other CentOS servers (not hosted at Google Cloud), I receive the system emails (cron reports, logwatch updates, server error info, etc.) in my mailbox by adding a .forward file in /root. On Google Cloud this doesn’t work: Google appears to block all outbound mail, even mail that originates on the server and is addressed to the Google Cloud account owner’s email address.
I’ve searched the Google Cloud documentation, but all references to sending email seem aimed at people who want to enable bulk email from web applications, or email for large numbers of users sent to arbitrary addresses. I simply want to get my OS status emails to my own mailbox (a Gmail address). Can anyone explain how to do this without paying for a third-party SMTP relay service just to receive a few emails per week from each instance?
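One approach that avoids a paid relay: Google Cloud blocks outbound port 25, but the authenticated submission port 587 to smtp.gmail.com is generally reachable, so Postfix on the instance can relay its system mail through your own Gmail account using an app password. A sketch, assuming Postfix is installed; the exact ports Google permits have changed over time, so verify against the current GCE documentation:

```
# /etc/postfix/main.cf additions: relay all local mail through Gmail's
# authenticated submission service (port 587; port 25 is blocked on GCE)
relayhost = [smtp.gmail.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt

# /etc/postfix/sasl_passwd (your own address and a Gmail app password):
#   [smtp.gmail.com]:587 you@gmail.com:your-app-password
# Then: postmap /etc/postfix/sasl_passwd && systemctl restart postfix
```

With that in place, the existing /root/.forward file should work as it does on your other servers.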
I followed the instructions to set up a GitHub integration with my Magento Cloud Pro project. The project has not gone live yet, but it has been in development for several months and therefore has 1k+ commits. I took a snapshot of my Integration environment and ran the command as documented:
magento-cloud integration:add --type=github --project ...
There were additional prompts that appeared after running, with what seemed to be reasonable defaults, which I accepted.
Build pull requests (--build-pull-requests): Build every pull request as an environment? [Y|n]
Build pull requests post-merge (--build-pull-requests-post-merge): Build pull requests based on their post-merge state? [y|N]
Clone data for pull requests (--pull-requests-clone-parent-data): Clone the parent environment's data for pull requests? [Y|n]
Fetch branches (--fetch-branches): Fetch all branches from the remote (as inactive environments)? [Y|n]
Prune branches (--prune-branches): Delete branches that do not exist on the remote? [Y|n]
After the last question it created a webhook and created the integration.
Oh, then it deleted all my environments apart from Master, Production, and Staging.
I’m guessing it’s the last prompt, --prune-branches, that caught me out. Shame on me for not pausing to consider what it might do (note: this option isn’t documented in the instructions).
What can I do to restore these environments?
Unlike environments deleted through the Magento Cloud GUI, these appear to be gone for good: they aren’t listed as deactivated; they simply aren’t there.
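If the branch code still exists on GitHub or in a local clone, one possible recovery (hedged: data such as databases and media from the pruned environments is likely unrecoverable without a snapshot) is to push the branches back to the Magento Cloud git remote, which should recreate them as inactive environments, and then activate them:

```shell
# "magento" is the Magento Cloud git remote; "my-feature" is an example branch
git fetch origin my-feature
git checkout my-feature
git push magento my-feature
magento-cloud environment:activate my-feature
```

The remote and branch names above are placeholders; check `git remote -v` and `magento-cloud environment:list` for yours.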
So I am working on a custom payment gateway extension. It installs and works beautifully on my test server. Right now I’m trying to get it installed on Magento Cloud.
I followed their installation instructions and can see that my database changes are present, but my payment gateway doesn’t show up in the store configuration.
Here is my repo: https://github.com/apruve/apruve-magento2
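Not an answer for this repo specifically, but a common cause (an assumption on my part) is that Magento Cloud builds go by the module list committed in app/etc/config.php, so a module that works locally stays invisible in Cloud until it is enabled there and the file is committed. The module name below is a placeholder; the real one is in the extension's etc/module.xml:

```shell
# Enable the module locally so it is written into app/etc/config.php,
# then commit that file so the Cloud build picks it up.
php bin/magento module:enable Vendor_ModuleName   # placeholder module name
git add app/etc/config.php
git commit -m "Enable custom payment module"
git push magento master   # "magento" = the Cloud git remote (name may differ)
```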
CloudFormation templates are used both for provisioning AWS services (EC2, VPC, Route 53, S3, …) and for configuration (installation, config, restarts, …) on each service.
Ansible was unreliable for this approach in terms of rollback, error handling, etc.
What is the approach for provisioning and configuration in Azure cloud?
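The closest Azure analogue to CloudFormation is Azure Resource Manager (ARM) templates, nowadays usually written in Bicep; in-VM configuration is typically handled by the Custom Script Extension or Azure Automation DSC rather than by the template itself. A minimal Bicep sketch (resource group and names are examples):

```bicep
// main.bicep -- deploy with:
//   az deployment group create --resource-group my-rg --template-file main.bicep
param location string = resourceGroup().location

// Example resource: a storage account with a project-unique name
resource stg 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: 'stg${uniqueString(resourceGroup().id)}'
  location: location
  sku: { name: 'Standard_LRS' }
  kind: 'StorageV2'
}
```

Like CloudFormation, deployments are declarative and ARM handles rollback on failure (depending on the deployment mode you choose).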
I want to be able to host site images on Google Cloud Storage. One way I’m considering is to add a drive on my Windows server that points to the cloud; my site would then read the images from the Windows drive.
Is there a free way to mount Google Cloud Storage on Windows?
I found ExpanDrive, but that isn’t free. Then I found Google’s FUSE adapter (gcsfuse), but that is Linux-only. Is there a way to mount a Windows drive directly onto my Google Cloud Storage bucket?
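One free option (assuming you can install third-party tools on the server): rclone, together with the WinFsp driver, can mount a Google Cloud Storage bucket as a Windows drive letter. The remote and bucket names below are examples:

```shell
rclone config    # create a remote of type "google cloud storage", e.g. named "gcs"
rclone mount gcs:my-image-bucket X: --vfs-cache-mode writes
```

Keep in mind that any network mount adds latency per read, so for serving site images a CDN or direct public bucket URLs may perform better than reading through a mounted drive.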
We have a typical Dockerized Node/Express app that is deployed to about 100 machines on Digital Ocean. Currently, the entire deploy – not counting testing – takes about an hour.
I am used to deploys that take maybe 10-15 minutes, even for large numbers of machines.
I am a bit confused about what is going on (their deploy system is rather bespoke) and have begun to gather data. The images are built in the cloud, so it’s not something obvious like upload time from someone’s laptop.
However, that’s not my main problem. The main problem is that nobody in this company thinks that one hour is a problematic amount of time for this deploy. (It used to be five hours!)
Can you point me to data about what is a reasonable amount of time?
NOTE: Shaming my co-workers is off-topic. Many of the people here are junior or simply inexperienced, and I am far more senior. I have the knowledge to fix this, but I need to convince leadership that we are far outside the norm.
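For the leadership conversation, a simple back-of-the-envelope model (my own sketch, not an industry benchmark) can make the point: in a rolling deploy, wall-clock time scales with the number of batches, not the number of machines, so 100 machines need not take anywhere near an hour.

```python
import math

def rolling_deploy_minutes(machines: int, batch_size: int, per_machine_minutes: float) -> float:
    """Wall-clock time for a rolling deploy where `batch_size` machines update concurrently."""
    batches = math.ceil(machines / batch_size)
    return batches * per_machine_minutes

# 100 machines, 20 at a time, ~3 min each (pull image, restart, health check):
print(rolling_deploy_minutes(100, 20, 3))   # 15 minutes -- in line with a 10-15 minute deploy
```

The per-machine estimate and batch size are assumptions to measure against; plugging in your own observed numbers shows which factor (image pull, serial batching, health-check waits) dominates the hour.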
INGRESS is set to allow port 22 for all instances, and SSHD is installed and running. Is there any way to delete all firewall rules and have the defaults reset?
I am lost.
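If the goal is just a clean slate, one approach is to delete every rule in the project and recreate the four rules a GCP default network normally ships with. This is a sketch from memory; double-check the rule set against the current Google Cloud docs before running it, since deleting all rules briefly blocks everything, including new SSH connections:

```shell
# Delete every firewall rule in the current project
gcloud compute firewall-rules list --format="value(name)" \
  | xargs -r -n1 gcloud compute firewall-rules delete --quiet

# Recreate the usual defaults for the "default" network
gcloud compute firewall-rules create default-allow-ssh \
  --network=default --direction=INGRESS --allow=tcp:22 --source-ranges=0.0.0.0/0
gcloud compute firewall-rules create default-allow-icmp \
  --network=default --direction=INGRESS --allow=icmp --source-ranges=0.0.0.0/0
gcloud compute firewall-rules create default-allow-rdp \
  --network=default --direction=INGRESS --allow=tcp:3389 --source-ranges=0.0.0.0/0
gcloud compute firewall-rules create default-allow-internal \
  --network=default --direction=INGRESS --allow=tcp:0-65535,udp:0-65535,icmp \
  --source-ranges=10.128.0.0/9
```

The 10.128.0.0/9 source range assumes the auto-mode default network; a custom-mode VPC uses whatever subnet ranges you defined.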