Making Static Buttons

Is it possible to get a button that remains still when clicked? That is, one that does not show the temporary pressed frames.


This also happens when I set Appearance -> "Frameless", as well as with other appearance settings. What I want is to be able to change the background colour of the button under the "Frameless" appearance without this pressed animation.

With Appearance -> "Pressed" there is no change, but the button remains pressed and I cannot change its background color. Instead it shows me the negative of that colour, so I could simply use ColorNegate to make it work. If want green, for example, I could do

Button["Hello", Appearance -> {"Frameless", "Pressed"},   Background -> ColorNegate[Green]] 


I’m wondering, however, whether there is a better approach and how this translates to other operating systems (I’m using Windows 10). Could there be a difference in the way the pressed color is computed?

Google showing static title instead of dynamically set JavaScript title

Basically, let's say I have set my title using HTML to "Something – Example", and with JavaScript I change the title to "Apple – Example". When I Google my website (after waiting for it to update), or when the link gets embedded on things such as Twitter or Discord, it shows the static "Something – Example" title instead of the new title.
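Roughly, the setup looks like this (a minimal sketch; the titles are the example values above):

    <!-- index.html: the static title that crawlers and link embeds pick up -->
    <head>
        <title>Something – Example</title>
    </head>

    <script>
        // Runs in the browser after the page loads; crawlers and link-preview
        // bots that do not execute JavaScript never see this value.
        document.title = "Apple – Example";
    </script>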

How to create a clickable static "call us" button that links to a new field in the general settings

I recently had an admin setting created on my website that allows me to input a contact number to display on the front end via a shortcode.

I am now trying to use this field to also create a clickable, static call button for mobile devices only that appears at the bottom of the screen.

There are plugins like WP Call Button that achieve this, but I am trying to keep my website as light as possible. Any help would be much appreciated!

I think I may need another two admin fields created: one that acts as the text label and one that holds the hyperlink itself, e.g. tel:035555555?
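For reference, the front-end part is just a tel: link; something roughly like this (the class name and styling are placeholders, and the number would come from the admin field):

    <!-- Static call button pinned to the bottom of the screen on mobile -->
    <a class="call-button" href="tel:035555555">Call Us</a>

    <style>
        /* Placeholder styling: fix the button to the bottom of the viewport */
        .call-button {
            position: fixed;
            bottom: 0;
            left: 0;
            right: 0;
            text-align: center;
        }
    </style>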


Here’s the code that was created (thanks again Walter) for my general settings:

    /**
     * Class for adding a new field to the options-general.php page
     */
    class Add_Settings_Field {

        /**
         * Class constructor
         */
        public function __construct() {
            add_action( 'admin_init', array( $this, 'register_fields' ) );
        }

        /**
         * Add new fields to wp-admin/options-general.php page
         */
        public function register_fields() {
            register_setting( 'general', 'phone_number_custom', 'esc_attr' );
            add_settings_field(
                'custom_phone_number',
                '<label for="custom_phone_number">' . __( 'Phone Number', 'phone_number_custom' ) . '</label>',
                array( $this, 'fields_html' ),
                'general'
            );
        }

        /**
         * HTML for extra settings
         */
        public function fields_html() {
            $value = get_option( 'phone_number_custom', '' );
            echo '<input type="text" id="custom_phone_number" name="phone_number_custom" value="' . esc_attr( $value ) . '" />';
        }

    }

    new Add_Settings_Field();

How to create shortcodes that pull custom field data from general settings

Use XML Based Sitemap and/or Static Page Sitemap?

I have a question about which sitemap to use: an XML-based sitemap that is submitted to Google Search Console, or a static HTML-based sitemap that is linked from the website footer.
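For clarity, the two options look roughly like this (the URLs are placeholders):

    <!-- Static HTML sitemap: a normal page linked from the footer -->
    <a href="https://www.example.com/sitemap">Sitemap</a>

    <!-- XML sitemap submitted to Google Search Console (e.g. /sitemap.xml) -->
    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
        <url>
            <loc>https://www.example.com/dump-trucks-for-sale</loc>
        </url>
    </urlset>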

If you search "dump trucks for sale", you will find this result in the third position: https://www.commercialtrucktrader.com/Dump/trucks-for-sale?category=Dump%20Truck%7C2000609

Our website uses faceted navigation to filter inventory results, like the Commercial Truck Trader website I linked above for that search query.

I see the Commercial Truck Trader website has a sitemap link added to their footer. This is a static HTML based sitemap that can help the user navigate parts of the website.

Do you think the Commercial Truck Trader website is showing in the third position on the SERP for "dump trucks for sale" because of the static HTML-based sitemap linked in its footer, or because of an XML-based sitemap submitted to Google?

Are there any compiler IR paradigms other than Three-Address Code and Static Single Assignment (SSA) Form?

Specifically, I have tried boiling down the operations of a VM into the smallest possible standardized units and arrived at this:

    allocate_stack(size)
    fetch(offset_from_stack_pointer)
    store(offset_from_stack_pointer)
    call(index)

So you might have:

    allocate_stack 6
    fetch -2
    store 0
    fetch -1
    store 1
    call 72

Basically, the allocate_stack would tell you how much space you are allocating for fields on the stack. Then fetch would queue up something relative to the stack pointer to be stored (or fetch_a would fetch from an absolute address). Then store would store the fetched value in the slot relative to the stack pointer (or store_a for an absolute position). And finally, call <index> would call that specific function.
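To make the contrast with three-address code concrete, here is a rough sketch (the names x, y and t1 are purely illustrative):

    ; three-address code: one destination and up to two source operands per instruction
    t1 = x + y

    ; the transfer style above: an implicit "fetched" value plus one stack offset per instruction
    fetch -2    ; load the value at stack offset -2
    store 0     ; write the fetched value to stack offset 0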

So basically these are 2-address codes instead of 3-address codes. Does anything like this exist out there, where the idea has been expanded upon to find compiler optimizations and such, the way three-address code and SSA form have been? If not like this, does anything else exist outside of three-address code and SSA form?

Any idea how to keep this text static

Hi guys,

Do you have any idea how to keep the text above the textarea static when you resize the textarea? When the textarea is resized, the parent div expands with it, which moves the text. My intention is to keep everything centered.

The link: https://test-c3848.web.app/

The HTML:

    <div id="div1">
        <div id="div2">
            <p>Paste link in the textarea.</p>
            <textarea type="text" id="txt"></textarea>
        </div>
    </div>

The CSS:

    #div1 { min-height: 10em; position: relative }
    #div1 #div2 { margin:...
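Since the CSS got cut off above, here is a rough sketch of the kind of styling I mean (the values are placeholders, not my exact stylesheet):

    <style>
        /* Placeholder values: center #div2 inside #div1 */
        #div1 { min-height: 10em; position: relative; }
        #div1 #div2 {
            margin: 0 auto;
            width: fit-content;
            text-align: center;
        }
        /* Dragging the textarea's resize handle grows #div2,
           and the centered <p> above it shifts with it. */
    </style>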


How high can the static bonus to an attack roll get?

I was considering this Q&A: What is the highest possible AC?

I was curious about whether any attack roll could ever get high enough to hit such an AC (short of scoring a critical hit, which ignores AC), and that got me thinking: How high can the static bonus to an attack roll get?

I’m not interested in the amount of damage done, just the static bonus, hence I’m also not interested in the actual number rolled on the die, but if it matters, assume it’s not a critical hit. The bonus can be temporary or situational, hence buff spells are allowed.

Any magic items, help from friends, feats, official races or classes or subclasses are allowed, but UA is not allowed, and nor is anything involving any kind of polymorph/wild shape. Also assume that the maximum ability score is 20, with the exception of Barbarians going up to 24 in STR and CON at level 20 (in other words, no Manuals/Tomes to get to 30, and no other magic items that increase your maximum, but class features that do the same are OK). We can also assume rolling ability scores with lucky rolls, so we can have almost any ability score at 20.

The best I can think of off the top of my head is a level 20 Ranger (so proficiency bonus of +6) with 20 DEX and the Archery Fighting Style (+2), shooting their +3 Longbow at one of their favoured enemies and thus adding their WIS modifier (also +5, from 20 WIS) to the attack roll, so that’s 6 + 5 + 5 + 3 + 2 = +21. (There are also probably some buffs that can be added to this, but I can’t recall any at the moment; I’m sure something exists…)

Can we do better than that within the restrictions I’ve specified above?

Why are browsers making PUT requests for static assets on my site?

Our site hosts static assets at /assets/…. While debugging a font-related issue, I looked through our logs for unusual activity and found a bunch of requests like these:

    method  path                         referer
    PUT     /assets/js/40-8b8c.chunk.js  https://mysite.com
    PUT     /assets/fonts/antique.woff2  https://mysite.com/assets/css/mobile-ef45.chunk.css

The requests come from lots of different IP addresses all over the world. I don’t see any pattern in the User-Agents. The only HTTP methods are HEAD (odd, but fine), GET (expected), and PUT (very suspicious).

I haven’t been able to identify any code in our system that would cause a browser to make PUT requests to these paths.

I have no evidence that this activity is malicious. It could certainly be a broken browser plugin.

Has anyone seen this sort of behavior?

Separate prerendered static page for Open Graph crawlers on Netlify (can't redirect by detecting bots) [closed]

I want to create a heavily browser-cached shell app: basically one cached index.html file that works like a SPA and uses progressive enhancement to fill in content. The problem is the Open Graph meta tags for Facebook, LinkedIn and Twitter, and of course SEO as well (yes, I know that these days Google can parse JavaScript-oriented applications, but other crawlers can't). We can't redirect on the server side based on bot detection because we are using static file hosting like Netlify, so the basic idea is this:

Put a canonical URL on every page that sends the crawler to the corresponding page under /seo/, and in the /seo subfolder serve pre-rendered static pages that contain all the Open Graph tags but not all the CSS and JavaScript, just content and HTML tags (this part is not necessary, but the idea is to save unnecessary bandwidth).
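Roughly, the two pieces would look like this (the URLs, titles, and the page name some-page are placeholders):

    <!-- index.html (the cached SPA shell): canonical pointing at the /seo version -->
    <link rel="canonical" href="https://example.com/seo/some-page" />

    <!-- /seo/some-page.html: pre-rendered page carrying the Open Graph tags, no CSS/JS -->
    <head>
        <meta property="og:title" content="Some Page – Example" />
        <meta property="og:description" content="Placeholder description" />
        <meta property="og:url" content="https://example.com/some-page" />
    </head>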

Would this solution be considered good practice? Would it be considered cloaking? What are the downsides? Is it considered bad practice to serve "stripped down" pages to bots (with the same content but not all the functionality)? Do you have any other suggestion for handling static pre-rendered pages that are used for SEO and Open Graph tags and are served only to bots?

The most important question: since I suppose the links in Google search results will then contain the additional /seo/ path segment, which is not good, is there any way to force Google to use the original links, or to redirect to the /og/ URLs only for serving Open Graph tags to Facebook and other social networks, given that Google can actually parse JavaScript today?

Would sitemap.xml or robots.txt somehow be helpful for redirecting just the Facebook parser?