How can I use full URLs for aggregated assets on a specific route?

My organization is using an external search application. The results are intended to be wrapped in a header and footer to mimic the appearance of the main website.

I’ve created a couple of custom routes, /wrapper/header and /wrapper/footer, that simply return a (practically) empty page with a partial template. As far as the markup goes, that’s working fine; the search app can pull both and wrap them around the results as intended.

However, on the header, we need to use full URLs for the aggregated CSS and JS files. I’ve taken a look at hook_page_attachments_alter(), but that appears to deal with assets as they are attached, pre-aggregation. Is there a way for me to override the URLs that get inserted when the asset placeholders are replaced? The CDN module looked promising, but it can’t be restricted to specific routes.
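
To make the question concrete, the direction I’ve been picturing is a response subscriber that rewrites root-relative asset URLs to absolute ones, but only on the wrapper routes. Here is a rough sketch of that idea (the module name, route names, and priority are placeholders on my part, and on newer cores the event class is ResponseEvent rather than FilterResponseEvent):

    <?php

    namespace Drupal\wrapper\EventSubscriber;

    use Drupal\Core\Routing\RouteMatchInterface;
    use Symfony\Component\EventDispatcher\EventSubscriberInterface;
    use Symfony\Component\HttpKernel\Event\FilterResponseEvent;
    use Symfony\Component\HttpKernel\KernelEvents;

    /**
     * Makes asset URLs absolute on the wrapper header/footer routes.
     */
    class WrapperAssetSubscriber implements EventSubscriberInterface {

      /**
       * The current route match.
       *
       * @var \Drupal\Core\Routing\RouteMatchInterface
       */
      protected $routeMatch;

      public function __construct(RouteMatchInterface $route_match) {
        $this->routeMatch = $route_match;
      }

      /**
       * {@inheritdoc}
       */
      public static function getSubscribedEvents() {
        // Negative priority so this runs after core has replaced the
        // CSS/JS placeholders with the aggregated asset tags.
        return [KernelEvents::RESPONSE => ['onResponse', -512]];
      }

      /**
       * Rewrites root-relative href/src attributes to absolute URLs.
       */
      public function onResponse(FilterResponseEvent $event) {
        if (!in_array($this->routeMatch->getRouteName(), ['wrapper.header', 'wrapper.footer'])) {
          return;
        }
        $response = $event->getResponse();
        $base = $event->getRequest()->getSchemeAndHttpHost();
        // The lookahead skips protocol-relative URLs (//example.com/...).
        $html = preg_replace('/(href|src)="\/(?!\/)/', '$1="' . $base . '/', $response->getContent());
        $response->setContent($html);
      }

    }

The subscriber would be registered in the module’s services.yml with the event_subscriber tag. If there is a cleaner way to hook into the placeholder replacement itself, though, I’d love to hear it.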

I will create a URL-shortener website for $10

Yes, I will create a stunning and professional URL-shortener website (example: https://za.gl/). You can make real money with this kind of website. The only thing I need from you is a paid domain and hosting, or a free domain and free hosting. I will deliver your website complete with admin panel access. This offer is only on SEOClerks. Thank you.


Append a string before opening external URLs

I frequently read news articles shared on social media. A lot of times, these articles are full of advertising and other irritating elements. I use a service that strips just the content and formats the article in a nice presentation.

Currently I have to:

  • Long-press the link in the social app
  • Share it using a clipboard copier
  • Open Chrome
  • Type service.com/ and paste the original URL.

Is there a way to automate this? I think it would involve something that deals with intents, but I don’t know how to achieve this.

Basically, I need to be able to tap a link in an app and have an option that prepends “service.com/” to that URL and then opens it.

“Remove URLs Containing” List for Broken Link Building & Filtering Excel Question

Hi!

I wanted to share a file I started that can be used to filter results for broken link building lists. I’m attempting to block social media, big-box news sites, video sites, and other known no-follow sites. I was also hoping that someone else had already started a similar list, so we could share domains and combine the lists; or if anyone has a suggestion on how to better accomplish this, that would be super helpful.

Additionally, is there a way to apply a list such as this to the Excel file that’s exported from ScrapeBox’s broken link checker, without deleting each domain one by one using a blank search-and-replace? Is there a way I could use ScrapeBox to filter the URLs in this list while keeping its format intact? Suggestions?
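
In case it helps frame an answer, what I’m imagining is a small script that reads the export (saved as CSV), drops any row whose domain is on the blocklist, and writes everything else back out unchanged. A rough PHP sketch of the idea (the file names, and the assumption that the URL sits in the first column, are mine):

    <?php

    // Sketch: filter a ScrapeBox export (saved as CSV) against a domain
    // blocklist while leaving the remaining rows untouched.
    // Assumes blocklist.txt has one domain per line and the URL is in
    // the first column of export.csv.

    $blocked = array_filter(array_map('trim', file('blocklist.txt')));
    $urlColumn = 0;

    $in = fopen('export.csv', 'r');
    $out = fopen('filtered.csv', 'w');

    while (($row = fgetcsv($in)) !== FALSE) {
      $host = parse_url($row[$urlColumn], PHP_URL_HOST) ?: '';
      $keep = TRUE;
      foreach ($blocked as $domain) {
        // Block the domain itself and any of its subdomains.
        if ($host === $domain || substr($host, -strlen('.' . $domain)) === '.' . $domain) {
          $keep = FALSE;
          break;
        }
      }
      if ($keep) {
        fputcsv($out, $row);
      }
    }

    fclose($in);
    fclose($out);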

Looking forward to any ideas!

Here’s the file. Anyone can edit it, organize it, and add to it.

https://drive.google.com/file/d/1KqLkRqZ…q_1ce9NY_J

Thank you!

Clean URLs for taxonomy views pager in Drupal 8?

I need to change URLs like taxonomy/term/12?page=1 to taxonomy/term/12/page/1 in Drupal 8. I know about the Cleanpager module, but it only works in D7; its dev version for D8 is not working.

I tried to implement my own solution with inbound and outbound path processors, but I was only able to make it work with the inbound processor, which redirects to the page with the GET query parameter.

However, I need the taxonomy views page to be available entirely under the new URL, with support for all the aliases.

Here is the sample code I tried to implement.

    <?php

    namespace Drupal\dummy\PathProcessor;

    use Drupal\Core\PathProcessor\InboundPathProcessorInterface;
    use Drupal\Core\PathProcessor\OutboundPathProcessorInterface;
    use Drupal\Core\Render\BubbleableMetadata;
    use Symfony\Component\HttpFoundation\Request;

    /**
     * Processes the inbound and outbound pager query.
     */
    class DummyPageProcessor implements InboundPathProcessorInterface, OutboundPathProcessorInterface {

      /**
       * {@inheritdoc}
       */
      public function processInbound($path, Request $request) {
        if (preg_match('/.*\/page\/([0-9]+)$/', $request->getRequestUri(), $matches)) {
          $path = preg_replace('/(.*)\/page\/[0-9]+/', '${1}', $path);
          if ($path == '') {
            $path = '/';
          }
          $request->query->set('page', $matches[1]);
          $request->overrideGlobals();
        }
        return $path;
      }

      /**
       * {@inheritdoc}
       */
      public function processOutbound($path, &$options = [], Request $request = NULL, BubbleableMetadata $bubbleable_metadata = NULL) {
        // The 'page' key never comes into the $options array, so this
        // statement will never work.
        if (!empty($options['query']['page']) || $options['query']['page'] == 0) {
          if ($options['query']['page'] > 0) {
            $path .= '/page/' . $options['query']['page'];
          }
          unset($options['query']['page']);
        }
        return $path;
      }

    }
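
For completeness, the processor is registered in dummy.services.yml along these lines (the priorities are a guess on my part):

    services:
      dummy.path_processor:
        class: Drupal\dummy\PathProcessor\DummyPageProcessor
        tags:
          - { name: path_processor_inbound, priority: 100 }
          - { name: path_processor_outbound, priority: 100 }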

Any thoughts, or a module that would solve this issue?

Without-slash URLs not redirected to with-slash URLs, but canonicalised: any harm?

Hi friends,

Our website’s pages without a trailing slash are not redirecting to the versions with a slash, and vice versa. Both versions return a 200 response code. Both versions point to the with-slash URLs via rel=canonical tags. Is this the right setup, or do we need to redirect one version to the other?

Thanks

How to scrape for Facebook pixels installed on URLs?

I need to scrape a list of URLs and determine if they have a Facebook tracking pixel installed.

If the Facebook pixel is installed directly on the site, I can use the Page Scanner to look at the source code for a footprint like “/fbevents.js” or “https://www.facebook.com/tr” to tell me whether the site has a Facebook pixel installed.

But if the site is using Google Tag Manager to install the Facebook pixel, then the pixel tracking code is injected into the DOM and is not visible in the source code.

How else can I use ScrapeBox to determine whether a site has a Facebook pixel installed (even when it’s using Google Tag Manager)?

Maybe I can scrape for the Facebook cookie somehow?
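
Another idea, in case it’s workable: the GTM container is itself just a static JavaScript file served from googletagmanager.com, so the Facebook footprints should show up inside it even though they never appear in the page source. A rough PHP sketch of that approach (the function name and urls.txt are made up):

    <?php

    // Sketch: detect a Facebook pixel either directly in the page source
    // or inside the site's Google Tag Manager container script.
    function hasFacebookPixel($url) {
      $html = @file_get_contents($url);
      if ($html === FALSE) {
        return FALSE;
      }
      // Pixel installed directly in the page source.
      if (strpos($html, 'fbevents.js') !== FALSE || strpos($html, 'facebook.com/tr') !== FALSE) {
        return TRUE;
      }
      // Otherwise look for a GTM container ID and scan the container JS.
      if (preg_match('/GTM-[A-Z0-9]+/', $html, $m)) {
        $container = @file_get_contents('https://www.googletagmanager.com/gtm.js?id=' . $m[0]);
        if ($container !== FALSE && (strpos($container, 'fbevents.js') !== FALSE
            || strpos($container, 'facebook.com/tr') !== FALSE
            || strpos($container, 'fbq(') !== FALSE)) {
          return TRUE;
        }
      }
      return FALSE;
    }

    // Check every URL in the list exported from ScrapeBox.
    foreach (file('urls.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $url) {
      echo $url . ' => ' . (hasFacebookPixel($url) ? 'pixel found' : 'no pixel') . PHP_EOL;
    }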

How to stop Google from giving too much link juice to particular URLs?

We have a product website with separate pages for product details, product images, product videos, and product reviews.

We want to design a card for our products which we can use everywhere, i.e. in internal website ads, cross-sell blocks, etc. Below is a sample card.

[Image: sample card for a bike toy product]

There is a problem that we see here: this will create too many links to our product review, images, and videos pages. The most important page for us is the product details page, and we want to give maximum link juice to that page.

How can we fix this link juice distribution problem and indicate to Google that the product details page is the most important of all these links?

We are apprehensive about using no-crawl/no-follow directives, as we are not sure they would solve this issue.
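
For reference, if we did go the nofollow route, the card markup would look something like this (URLs and classes are made up); we would just prefer a solution that doesn’t depend on it:

    <div class="product-card">
      <!-- The page we want to concentrate link equity on. -->
      <a href="/toys/bike-123">Bike toy</a>
      <!-- Secondary pages, hinted as less important. -->
      <a href="/toys/bike-123/images" rel="nofollow">Images</a>
      <a href="/toys/bike-123/videos" rel="nofollow">Videos</a>
      <a href="/toys/bike-123/reviews" rel="nofollow">Reviews</a>
    </div>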

Verify URLs (still) not working

Hi,
I’m running into the same error again that I reported here around a month ago.

I created a new project and entered a website. Then, via right-click on the project > Show URLs > Verified, I inserted around 50 mixed URLs: some pointing to the project’s website (the URL field at the top of the project) and many others pointing to other sites, just to check whether SER verifies them correctly.

After pushing the “Verify” button, SER marks ALL the URLs as GREEN (a positive answer), but I know most of those pages contain no links to that website (I checked them manually). And the Link URL column shows N/A, without any link.

Can you please check this feature? It’s something I use a lot, and right now I don’t trust its functionality anymore.

Thanks