How to expand all accordions on a web-page? [closed]

I have access to a web-page with accordions on it. I wish to print this web-page with all the accordion tabs expanded. Is there a way to do that?

Currently, the web-page can be printed with at most one accordion tab open, which is how accordions generally work. There is no "Expand All" option for the accordions.

I am not a web developer and do not wish to modify the web-page's source code. I would prefer a plugin or extension that will allow me to print the page with all accordions expanded.
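
That said, if scripting turns out to be the only route, I imagine a headless-browser sketch along these lines could work (this assumes the Playwright package is installed; the page URL and the .accordion-panel selector are placeholders, since the real CSS rule depends on how the site hides its panels):

    # Rough sketch: open the page headlessly, force the hidden accordion
    # panels visible with injected CSS, then export the result as a PDF.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com/page-with-accordions")  # placeholder URL
        # ".accordion-panel" is a guess; inspect the page for the real class.
        page.add_style_tag(content=".accordion-panel { display: block !important; }")
        page.pdf(path="expanded.pdf")  # PDF export works in headless Chromium
        browser.close()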

Is there a webpage which tells me when the next version of PHP is planned for release?

Before PHP 8.0.0 was released, I was eagerly awaiting it. There was a page with an elaborate table detailing the planned release dates for the "GA" (final) version as well as all the alphas/betas/RCs. Unfortunately, I suspect that page existed only because it was a major new PHP version.

Now that I have tried PHP 8.0.0, found a show-stopping bug, reported it, and had it fixed, I’m eagerly awaiting 8.0.1 or 8.1.0 or whatever the next version of PHP will be.

Sadly, I’ve now looked through the entire PHP website without finding any such page.

Does it exist? PHP 8.0.0 was released "26 Nov 2020", so it seems like it could be due soon, but I want to know (roughly) when.

The mailing lists seem completely dead and offer no insight into PHP development plans.

Can a webpage differ in content if ‘http’ is changed to ‘https’ or if ‘www.’ is added after ‘http://’ (or ‘https://’)?

When I use the Python package newspaper3k and run the code

    import newspaper

    paper = newspaper.build('http://abcnews.com', memoize_articles=False)
    for url in paper.article_urls():
        print(url)

I get a list of URLs for articles that I can download, in which both of these URLs appear:

  • http://abcnews.go.com/Health/coronavirus-transferred-animals-humans-scientists-answer/story?id=73055380
  • https://abcnews.go.com/Health/coronavirus-transferred-animals-humans-scientists-answer/story?id=73055380

As can be seen, the only difference between the two URLs is the s in https.

The question is, can the webpage content differ simply because an s is added to http? If I scrape a news source (in this case http://abcnews.com), do I need to download both articles to be sure I don’t miss any article, or are they guaranteed to have the same content so that I can download only one of them?
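
For what it’s worth, the kind of quick check I have in mind, using the third-party requests library, would be to fetch both variants and compare where they end up and what they return (the bodies could still differ trivially because of ads or timestamps, so this is only a rough signal):

    import requests

    path = "abcnews.go.com/Health/coronavirus-transferred-animals-humans-scientists-answer/story?id=73055380"
    a = requests.get("http://" + path)
    b = requests.get("https://" + path)
    print(a.url)             # final URL after any redirects
    print(b.url)
    print(a.text == b.text)  # do the two variants serve identical bodies?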

I have also noticed that some URLs are duplicated by adding www. after the http:// (or https://). My question is the same here: can this small change cause the webpage content to differ, and is it something I should take into account, or can I simply ignore one of the two URLs?
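
If the variants can safely be treated as equivalent, I picture collapsing them with a small sketch like this, which keys each URL on everything except the scheme and a leading www. (the example URLs are abbreviated versions of the ones above):

    from urllib.parse import urlsplit

    def url_key(url):
        """Key that is identical for http/https and www./bare-host variants."""
        parts = urlsplit(url)
        host = parts.netloc.lower()
        if host.startswith("www."):
            host = host[len("www."):]
        return (host, parts.path, parts.query)

    urls = [
        "http://abcnews.go.com/Health/story?id=73055380",
        "https://abcnews.go.com/Health/story?id=73055380",
        "https://www.abcnews.go.com/Health/story?id=73055380",
    ]
    unique = {url_key(u): u for u in urls}
    print(list(unique.values()))  # one URL instead of three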

Is building part of an href on a webpage from URL parameters a security risk?

I’ve written some code and have a feeling there’s a security issue with it, but I can’t figure out what it is.

Is there a security risk in including URL parameters directly into part of a link on a webpage?

Steps:

  • User visits https://www.example.com/1/guid
  • JS reads the URL, and retrieves part of it, in this case guid
  • JS builds a URL from that data: https://www.example.com/2/guid
  • That new URL is added to the page (the value is escaped when inserted, so injecting JS shouldn’t be a problem, in theory)

Is there any way that displaying or clicking on https://www.example.com/2/<any plain text here> could be a security flaw?
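
To make the escaping step concrete, here is the principle I am relying on, sketched in Python rather than the actual JS (build_link is a made-up helper; the idea is that the untrusted value is percent-encoded so it can only ever be a single path segment):

    from urllib.parse import quote

    def build_link(guid):
        # Encode everything, including "/", so characters such as "/",
        # "?", "#" or quotes cannot alter the structure of the URL.
        return "https://www.example.com/2/" + quote(guid, safe="")

    print(build_link("abc-123"))      # https://www.example.com/2/abc-123
    print(build_link("../admin#x"))   # slashes and hashes get neutralised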

Why do modern sites frequently send JSON blobs to the client and construct the webpage client-side in JavaScript? [closed]

I have noticed something in recent years. Instead of actually building an HTML page on the server, such as a table of data, and sending the finished HTML, many sites now send a “minimal” (in a very broad sense of that word…) webpage that executes JavaScript, which in turn loads JSON blobs and parses them client-side to construct the page that is finally displayed to the user.

One side-effect of this, intentional or not, is that it often makes it much easier for me to “grab” their data, since they frequently just “dump” their internal database’s fields to the client, even the fields they themselves don’t use. So in a way, they are making it easier for people like me to automate things on their websites, whereas I used to have to constantly parse complex, messy, ever-changing HTML.

So while I hate how idiotic this is from a logical/user perspective, as well as from a security one (depending on various factors), it’s actually “good” for me in a way. I just don’t understand it, since it’s significantly more work on their part and an overall extremely strange way of doing things.

Whenever I notice that an HTML page is “empty of content”, I open the network tab in Pale Moon, reload the page, and study the “JSON” blob that appears. It’s bizarre. It’s almost as if they are unofficially providing an “API” without mentioning it openly, while secretly winking/nudging at us “powerusers”?
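
To illustrate what I mean by “grabbing”, this is roughly all it takes on my end once the blob’s URL is visible in the network tab (the endpoint below is made up):

    import requests

    # Made-up endpoint, copied from the browser's network tab.
    url = "https://example.com/api/table-data"
    rows = requests.get(url, headers={"Accept": "application/json"}).json()

    for row in rows:  # every internal field is here, used by the site or not
        print(row)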

How to add different shortcodes on a webpage using apply_filters(‘the_content’)?

I have different shortcodes that are generated by plugins. I have

    $content = apply_filters('the_content', $post->post_content);
    echo $content;

in my code to display my plugin content (for example, Contact Form 7) on the webpage.

I would like to add another shortcode plugin to the webpage. How should I accomplish that with apply_filters, without using echo do_shortcode()?

Hopefully this is not an off-topic question or too stupid to ask. If possible, please point me to a reference so I can learn more about it. Thank you!

What would happen if some random webpage made an Ajax request for http://127.0.0.1/private.txt?

I run a localhost-only webserver (PHP’s built-in one) for all my admin panels and whatnot on my machine. I’m worried that if some random webpage contains a JavaScript snippet that makes an Ajax call to http://127.0.0.1/private.txt, and I visit that webpage, my browser (Firefox) will fetch whatever data that URL returns and the page will be able to use it, for example by sending it back to its own server in another Ajax request.

Let’s assume that http://127.0.0.1/private.txt returns my entire diary since 1958, or anything equally sensitive. I definitely never want it to interact with anything other than my Firefox browser, but from what I can tell, this could be a massive privacy/security issue. I hope I’m wrong in assuming the request would be allowed; I hope some kind of “cross-domain policy” blocks it, especially since it targets 127.0.0.1, which ought to be some kind of special case.

What would stop it from doing this? What am I missing in my reasoning?
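
For what it’s worth, here is how I have been inspecting what my server actually sends back (using the third-party requests library); as far as I understand, a browser only lets a cross-origin page read a response when the server opts in via the Access-Control-Allow-Origin header:

    import requests

    resp = requests.get("http://127.0.0.1/private.txt")
    # PHP's built-in server sends no CORS headers by default, so this prints
    # None; without that opt-in, a cross-origin page cannot read the body.
    print(resp.headers.get("Access-Control-Allow-Origin"))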

I’m thinking of a tree-like layout for my webpage. I think that is the only way to go. Is there an alternative?

I’m trying to design a layout for a process. I have three types of users: x, b, and c. x does some work, then b approves it, then c confirms it. However, b can disapprove and open a ticket for x on that same task, and c can do the same. It’s confusing, but I want to see it all on a single page: x did “work1”, b approved it, but c declined it and created a ticket called “work2”; then x did “work2”, b approved it, and c confirmed it, ending the task. From my perspective this calls for some sort of tree view (I’ve sketched what I mean below). What do you think; what sort of tree view would be best?
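
Here is a rough sketch of how I picture the history as a tree (the data is purely illustrative, not real code from the app):

    from dataclasses import dataclass, field

    @dataclass
    class TaskNode:
        """One step in the x -> b -> c workflow; tickets become child nodes."""
        title: str
        actor: str    # "x", "b" or "c"
        action: str   # "did", "approved", "declined", "confirmed"
        children: list = field(default_factory=list)

    def render(node, depth=0):
        """Print the history as an indented tree."""
        print("  " * depth + f'{node.actor} {node.action} "{node.title}"')
        for child in node.children:
            render(child, depth + 1)

    # The scenario from the question, as nested nodes.
    root = TaskNode("work1", "x", "did", [
        TaskNode("work1", "b", "approved", [
            TaskNode("work1", "c", "declined", [
                TaskNode("work2", "x", "did", [
                    TaskNode("work2", "b", "approved", [
                        TaskNode("work2", "c", "confirmed"),
                    ]),
                ]),
            ]),
        ]),
    ])
    render(root)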