Process a collection after it has been loaded

$categoryId = 43;
$layer = Mage::getModel("catalog/layer");
$category = Mage::getModel("catalog/category")->load($categoryId);
$layer->setCurrentCategory($category);
$attributes = $layer
    ->getFilterableAttributes()
    ->addDisplayInAdvancedSearchFilter();

Mage::log($attributes->getSelect());

foreach ($attributes as $attribute) {
    echo $attribute->getAttributeCode() . " " . $attribute->getStoreLabel() . "<br/>";
}

I want the filterable attributes that are used both in layered navigation and in advanced search. When I log the query, the generated SQL looks correct, but when I iterate over the collection I only get the attributes from getFilterableAttributes(), as if the advanced-search filter were never applied.

The getFilterableAttributes() method of Mage_Catalog_Model_Layer loads the collection before it returns, so my call to addDisplayInAdvancedSearchFilter() happens after the collection has already been loaded and has no effect.

Is there any way to reload the collection after it has already been loaded? If anyone has faced this kind of problem, please help me understand how it works.

In Mage_Catalog_Model_Layer, the getFilterableAttributes() method contains the following code:

public function getFilterableAttributes()
{
    $setIds = $this->_getSetIds();
    if (!$setIds) {
        return array();
    }
    /** @var $collection Mage_Catalog_Model_Resource_Product_Attribute_Collection */
    $collection = Mage::getResourceModel('catalog/product_attribute_collection');
    $collection
        ->setItemObjectClass('catalog/resource_eav_attribute')
        ->setAttributeSetFilter($setIds)
        ->addStoreLabel(Mage::app()->getStore()->getId())
        ->setOrder('position', 'ASC');
    $collection = $this->_prepareAttributeCollection($collection);
    $collection->load();

    return $collection;
}

When I add $collection->addDisplayInAdvancedSearchFilter() before $collection->load(), it works. Like this:

public function getFilterableAttributes()
{
    $setIds = $this->_getSetIds();
    if (!$setIds) {
        return array();
    }
    /** @var $collection Mage_Catalog_Model_Resource_Product_Attribute_Collection */
    $collection = Mage::getResourceModel('catalog/product_attribute_collection');
    $collection
        ->setItemObjectClass('catalog/resource_eav_attribute')
        ->setAttributeSetFilter($setIds)
        ->addStoreLabel(Mage::app()->getStore()->getId())
        ->setOrder('position', 'ASC');
    $collection = $this->_prepareAttributeCollection($collection);
    $collection->addDisplayInAdvancedSearchFilter();
    $collection->load();

    return $collection;
}

This works fine.

However, I don’t want to rewrite the Mage_Catalog_Model_Layer model.
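Not from the original post, but one possible workaround that avoids rewriting the model is to reset the already-loaded collection and re-apply the filter. A rough, untested sketch, assuming Varien_Data_Collection::clear() resets the loaded flag as it does in stock Magento 1:

```php
// Sketch only: reset the collection returned by the layer, add the
// advanced-search filter, and reload with the amended SELECT.
$attributes = $layer->getFilterableAttributes();
$attributes->clear();                             // drop items, mark as not loaded
$attributes->addDisplayInAdvancedSearchFilter();  // now amends the SELECT in time
$attributes->load();                              // re-runs the query
```

This relies on the collection object being reusable after clear(); if the layer caches derived data elsewhere, results may differ.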


What is the point of hiding a webpage until it has loaded?

I have noticed that whenever I go to Namecheap, the page loads abnormally. Instead of the gradual loading most websites show, Namecheap stays completely blank for a few seconds, and then suddenly everything appears. I got curious, looked at the page source, and found this script:

(function (a, s, y, n, c, h, i, d, e) {
    s.className += ' ' + y;
    h.start = 1 * new Date;
    h.end = i = function () {
        s.className = s.className.replace(RegExp(' ?' + y), '');
    };
    (a[n] = a[n] || []).hide = h;
    setTimeout(function () { i(); h.end = null; }, c);
    h.timeout = c;
})(window, document.documentElement, 'async-hide', 'dataLayer', 4000,
    {'GTM-544JFM': true});

This adds the async-hide class to the <html> element (document.documentElement) and removes it again either when the registered callback fires or after the 4000 ms timeout, whichever comes first. There is also a CSS rule, .async-hide { opacity: 0 !important }, that hides the page while the class is present.

To me, this seems very bad from a UX standpoint, but I am wondering if there is any good reason to do this?

Java JDK installed via Homebrew, but an application errors out that it cannot load the JRE; is there something else I need to set?

Title says it all. I installed Java via Homebrew (successfully, no errors) on a fresh install of Mojave. From Terminal, I get the expected responses when I run things like which java and java -version (12.0.1, if that matters), but a desktop application I installed tells me that it cannot load the runtime environment.

On another machine with the same application and OS version, I installed Java using the Oracle installer, with no errors and no issues. So I’m assuming that I need to set an environment variable or something, but I’m not sure what.
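Not from the original post: one common cause (an assumption here, since the failing application isn't named) is that macOS GUI applications locate Java via /usr/libexec/java_home and /Library/Java/JavaVirtualMachines rather than the shell PATH that Homebrew sets up. A hedged sketch of the usual shell-profile and symlink fixes (paths vary by Homebrew prefix and JDK version):

```shell
# In ~/.zshrc or ~/.bash_profile: point JAVA_HOME at the default JDK.
export JAVA_HOME="$(/usr/libexec/java_home)"

# Homebrew's openjdk is keg-only; linking it into the system location
# makes it visible to java_home and to GUI applications.
sudo ln -sfn /usr/local/opt/openjdk/libexec/openjdk.jdk \
     /Library/Java/JavaVirtualMachines/openjdk.jdk
```

Whether this helps depends on how the desktop application discovers the JRE; some bundle their own lookup logic.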

HTTPS works only with Load Balancer DNS – AWS

I have a problem with my HTTPS configuration on AWS; I hope you can help.

What I already have:

  1. EC2 – with an Elastic IP; ports opened via the security group (screenshot attached).
  2. Load Balancer attached to the EC2 instance (with the same security group as the EC2 instance).
  3. SSL certificate from AWS Certificate Manager (ACM).
  4. Domain – transferred from another service (not Amazon), which previously used just the Elastic IP in its DNS configuration. (Can this be the problem?)
  5. Route 53 – configured for the domain with the AWS (SSL) certificate; for the IPv4 (A) record I am using an alias to the Load Balancer.

How it works:

  • EC2: the Elastic IP and public DNS work, but only over HTTP – as they should, I guess.
  • Load Balancer: both HTTPS and HTTP work, but only via the load balancer’s DNS name.
  • Route 53 (domain): only HTTP works; every HTTPS request returns ERR_CONNECTION_REFUSED.

Will it fix the problem if I replace the EC2 instance’s Elastic IP in the domain’s DNS with the Load Balancer’s public DNS name?
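Not from the original post, but for reference: in Route 53 the change described above is normally expressed as an alias A record targeting the load balancer rather than the Elastic IP. A sketch of such a record (all names, zone IDs, and DNS values are placeholders):

```json
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "example.com.",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2EXAMPLEELBZONE",
        "DNSName": "my-load-balancer-1234567890.eu-west-1.elb.amazonaws.com.",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
```

This can be applied with aws route53 change-resource-record-sets; note that the HostedZoneId inside AliasTarget is the load balancer's own hosted zone ID, not the domain's.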

Do I always need to load the kernel manually first? – GRUB

I dual-booted Ubuntu on my MacBook Pro (late 2016), but Ubuntu doesn’t start; instead I land at the ‘GNU GRUB version 2.02 Minimal BASH-like line editing…’ prompt. I found a workaround, so I do the following every time I want to boot into Ubuntu:

grub> ls
(hd0) (hd0,gpt5) (hd0,gpt4) (hd0,gpt3) (hd0,gpt2) (hd0,gpt1)
grub> set root=(hd0,gpt5)
grub> linux /boot/vmlinuz-5.0.0-16-generic root=/dev/nvme0n1p5
grub> initrd /boot/initrd.img-5.0.0-16-generic
grub> boot

and then Ubuntu boots. When I shut down, I have to force it off; otherwise I get many lines like:

[ OK ] Unmounted /boot/efi.
...
[ OK ] Stopped Monitoring of LVM2 mirrors, snapshots etc.
...

and it stops here:

[ OK ] Stopping LVM2 metadata daemon...

Then, after a long wait, I have to hold the power key to force it off.

So I’m looking for a solution so that I don’t have to type the ‘set root’ commands every time, and so that Ubuntu can shut down correctly. Is there any solution for this?
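Not from the original post: the commands typed at the grub> prompt each boot can be preserved as a custom menu entry, which GRUB's own tooling then picks up. A sketch reusing the exact device and kernel version from the commands above (this only addresses booting, not the shutdown hang):

```
# /etc/grub.d/40_custom — after editing, run: sudo update-grub
menuentry "Ubuntu (manual)" {
    set root=(hd0,gpt5)
    linux /boot/vmlinuz-5.0.0-16-generic root=/dev/nvme0n1p5
    initrd /boot/initrd.img-5.0.0-16-generic
}
```

The kernel version is hard-coded here, so the entry would need updating after kernel upgrades; reinstalling GRUB properly avoids that.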

VirtualBox VM won’t load due to a missing file

I have a small problem loading Ubuntu in a VirtualBox VM.

I had used it for a few months without problems, mainly for embedded school projects. Now I have a problem.

I ran sudo apt-get update and then sudo apt-get upgrade to update and upgrade the system. However, now when I try to boot it, this happens: Picture of the problem

When that dialog comes up and I press OK, nothing happens; it basically crashes. What might be the issue here?

Thank you for anyone who helps.

Corosync caused a 100% CPU load after a node disappeared. How to fix it?

This is my very first setup of Corosync + Pacemaker: four virtual servers with a virtual IP, on a private network organised with OpenVPN.

corosync-2.4.3-4.el7.x86_64
corosynclib-2.4.3-4.el7.x86_64
pacemaker-1.1.19-8.el7_6.4.x86_64
pacemaker-cli-1.1.19-8.el7_6.4.x86_64
pacemaker-cluster-libs-1.1.19-8.el7_6.4.x86_64
pacemaker-libs-1.1.19-8.el7_6.4.x86_64
pcs-0.9.165-6.el7.centos.1.x86_64

So I have four VPSs with CentOS 7, running OpenVPN. The cluster status in its normal state:

# pcs status
Cluster name: hacluster
Stack: corosync
Current DC: node2 (version 1.1.19-8.el7_6.4-c3c624ea3d) - partition with quorum
Last updated: Sat Jun 15 14:00:36 2019
Last change: Sat Jun 15 02:25:39 2019 by hacluster via crmd on platinum

4 nodes configured
1 resource configured

Online: [ node1 node2 node3 master ]

Full list of resources:

 virtualIP      (ocf::heartbeat:IPaddr2):       Started node1

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Everything runs fine, and each node can ping the others by address.

If I reboot a VPS, the cluster stays online and the virtual IP moves to another active node. So far everything is fine.

Yesterday, due to a DDoS attack, the IP address of one of the nodes got black-holed. The other three nodes could not connect to it, and from that point corosync started to consume all available CPU and more. I had to killall -9 corosync to bring the servers back to life.

The cluster then showed all nodes as offline, even the local one. Nothing helped; I tried:

pcs cluster localnode remove node1 

restarting the daemons, stopping/starting the cluster, and so on. Each time corosync started, its CPU consumption began to grow again.

I guess I missed something very obvious, but I am still not sure what exactly.

The cluster recovered only after the failed node came back online, after about 4 hours of downtime.

Kindly let me know what I need to tune to keep the cluster online even if one or two nodes are not accessible.
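Not an answer from the post, but for orientation: membership timing and quorum behaviour are tuned in /etc/corosync/corosync.conf. A hedged sketch of the sections usually involved (the values shown are illustrative placeholders, not recommendations):

```
totem {
    version: 2
    # Time (ms) without the token before a node is declared failed.
    token: 10000
    # Retransmits attempted before the token is declared lost.
    token_retransmits_before_loss_const: 10
}

quorum {
    provider: corosync_votequorum
    # With 4 votes, quorum is 3, so the cluster survives one lost node;
    # tolerating two lost nodes needs extra votequorum options.
}
```

Whether timeouts were the actual cause of the 100% CPU spin here is not established by the post.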

Regards, Alex.

AWS SDK for PHP added to Drupal 8 via composer.json, but how to load the SDK in a module…?

I have set up a module in my Drupal 8 project to handle some tasks I need done using AWS.

I added the AWS SDK for PHP to my composer.json and ran composer update successfully. The SDK is now sitting nicely in my /vendor directory under the aws folder.

Composer entry

"aws/aws-sdk-php": "^3.0"

I cannot seem to get it to load within my custom module. I tried referencing the SDK as follows at the top of one of my controllers (after the use statements but before the class keyword):

require_once (\Drupal::root() . '/vendor/autoload.php'); 

Trying this gives me a blank white page, with this error in the Apache log:

PHP message: PHP Fatal error:  require_once(): Failed opening required '/home/dash/public_html/docroot/vendor/autoload.php' (include_path='.:/usr/share/php') in /home/dash/public_html/docroot/modules/custom/awsintegration/src/Controller/AwsiInstanceListController.php on line 9 

I also tried using an absolute path:

require '/home/dash/public_html/vendor/autoload.php'; 

Result: the page loads with no watchdog error, but I get an Apache PHP error as soon as I reference one of the SDK classes:

PHP message: PHP Fatal error:  Class 'Drupal\awsintegration\Controller\Aws\Sdk' not found in /home/dash/public_html/docroot/modules/custom/awsintegration/src/Controller/AwsiInstanceListController.php on line 27 


After Clive’s suggestion in the comments I removed the require calls. I then added some simple code that uses the SDK to create a credentials provider, but I just get errors in the Apache log about a missing class, as per:

PHP Fatal error:  Class 'Drupal\awsintegration\Controller\Aws\S3\S3Client' not found in /home/dash/public_html/docroot/modules/custom/awsintegration/src/Controller/AwsiInstanceListController.php on line 21 

My code is as follows:

namespace Drupal\awsintegration\Controller;

use Drupal\awsintegration\AwsInstances;
use Drupal\Core\Controller\ControllerBase;

use Aws\Credentials\CredentialProvider;
use Aws\S3\S3Client;

class AwsiInstanceListController extends ControllerBase {

    public function content() {

        $s3 = new Aws\S3\S3Client([
            'version'     => 'latest',
            'region'      => 'eu-west-2a',
            'credentials' => [
                'key'    => 'AbAbAbAbDcDcDc',
                'secret' => 'QwertyQweRty'
            ]
        ]);

        return array(
            '#type' => 'markup',
            '#markup' => $this->t('Hello, World!'),
        );
    }

}
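For context (not from the post): the class name in the fatal error, Drupal\awsintegration\Controller\Aws\S3\S3Client, suggests that the unqualified name Aws\S3\S3Client is being resolved relative to the current namespace. A minimal standalone sketch of PHP's name-resolution rule, using made-up namespaces:

```php
<?php
// Illustration of PHP name resolution inside a namespace.
namespace Vendor\Pkg {
    class Client {}
}

namespace App\Controller {
    use Vendor\Pkg\Client;

    // "new Vendor\Pkg\Client()" here would be resolved relative to the
    // current namespace, i.e. App\Controller\Vendor\Pkg\Client, and fail.
    // The "use" import only maps the short name "Client":
    $ok = new Client();        // resolves to Vendor\Pkg\Client
    echo get_class($ok), "\n";
}
```

So writing the short imported name, or a fully qualified name with a leading backslash, avoids the relative lookup.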

Thanks, John

Set Configurable Option Before Product Load

Is there a way to set a configurable product’s option before the first page load?

Right now we have a JavaScript snippet that selects the first option when the page loads, but we’d like to avoid that and hopefully speed up the page.

I thought this might be the way to do it but it doesn’t seem to be working:

$item = $this->model->getAttributeById($attributeId, $this->productRepository->getById($child));
$this->configurableAttributeFactory->create()->setProductAttribute($item);

Can an AWS Classic Load Balancer redirect traffic from a public IP address to an EC2 instance with only private IPs?

I have a Classic Load Balancer that has public IP addresses. Do the EC2 instances it routes traffic to need public IP addresses as well, or will it successfully forward the traffic to a private IP address? They’re all located in different subnets within the same VPC.

The Classic Load Balancer let me add the instances that have only private IP addresses without any sort of complaint or error.