Tilemap is sent to back/disappears behind background when I go into play mode or save (video included)

screencap of the problem here

I’m sure this has a simple solution but I haven’t found it answered yet. I have two tilemaps, one a BG and one a middle layer, and everything works when I paint on the middle layer. It shows up on top of the BG.

But if I save the project or open play mode to test it, the middle layer disappears. Seems like it gets sent behind the BG tilemap but the order of the tilemaps hasn’t changed. Also of note is that nothing changes even if I reorder the tilemaps, and the only way I can make the tilemap "reappear" is by ctrl-z undoing my last action. At a loss, any takers?

Headers already sent error with get_template_part in REST API call

Crosspost from StackOverflow

I’ve looked at multiple other questions about this warning (Headers already sent…etc) and this thorough explanation, but I’m not finding a solution for the circumstances I’m dealing with:

  1. I have a custom WordPress REST API endpoint that returns the output of get_template_part. When I just call get_template_part normally on a page, there’s no warning. It only appears when it runs in register_rest_route’s callback.
  2. I am also fetching data from an external API using this library. The error started when I started making these requests. I’ve tried making the request both inside and outside the template part, the latter either within register_rest_route’s callback or as an init action.
  3. The line that triggers the error is an img tag, where I’m echo-ing a URL as the src using data from the external API response. There are other echo calls all over this template, so I doubt that’s the issue.
  4. The line in question actually works fine and does its job. I just need to get rid of the accursed warning.

Code:

Inside functions.php:

<?php
// the api endpoint declaration
add_action( 'rest_api_init', function () {
    register_rest_route(
        'theme/v2',   // namespace
        '/homepage',  // route
        array(        // options
            'methods'  => 'GET',
            'callback' => 'build_home',
        )
    );
});

// the callback for the wordpress api
function build_home() {
    // this is the external API request, using their PHP library.
    // I linked the library above. The request returns an object with a bunch of data
    include_once( get_template_directory() . "/arena/arena.php" );
    $arena = new Arena();
    $page  = $arena->set_page();

    $per  = 8;
    $slug = 'concepts-bo-6ajlvpqg';

    $channel = $arena->get_channel( $slug, array( 'page' => $page, 'per' => $per ) );

    // I'm passing the response from the external API (in $channel) to get_template_part
    // get_template_part is then the output for the wordpress API request
    get_template_part( 'custom-home-template', null, array( 'channel' => $channel ) );
}
?>

Inside template part:

<?php $channel = $args["channel"]; ?>

<div id="concepts-box" class="bodycontent reg-width">
  <?php foreach ( $channel->contents as $item ) {
      if ( $item->class == "Image" ) {
          $img = $item->image["square"]["url"]; ?>
  <div class="concept-wrapper">
    <!-- the line below is the one that triggers the error -->
    <img lozad src="<?= $img; ?>">
  </div>
  <?php }} ?>
</div>

Full warning:

<b>Warning</b>:  Cannot modify header information - headers already sent by (output started at /filepath/wp-content/themes/theme/custom-home-template.php:63) in <b>/filepath/wp-includes/rest-api/class-wp-rest-server.php</b> on line <b>1372</b><br /> 
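Since the template is echoed from a REST callback, the likely cause of this exact warning is that the echoed HTML starts the response body before WP_REST_Server gets a chance to send its JSON headers. A widely used workaround is to buffer the template’s output and return it as a string from the callback. A minimal sketch (the wrapper name is mine; the $channel setup is unchanged from the question):

```php
<?php
// Sketch: REST callbacks are expected to *return* their response, but
// get_template_part() echoes directly, which starts output before the
// REST server can send its headers. Buffering the echoed HTML and
// returning it avoids the "headers already sent" warning.
function build_home_buffered( $channel ) {
    ob_start();  // begin capturing everything the template echoes
    get_template_part( 'custom-home-template', null, array( 'channel' => $channel ) );
    return ob_get_clean();  // stop capturing; return the HTML as the response
}
```

In the question’s build_home(), that amounts to wrapping the final get_template_part() call in ob_start() / return ob_get_clean().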

How can I ensure that a link sent via email is opened only by user clicks from a mail client and not by bots?

In my web application I generate links in the following format:

https://example.com/^token^ 

Each link, per my spec, is sent via email and should not be scrapable by robots. To avoid robot visits I placed the following robots.txt:

User-agent: *
Disallow: /

And in the page’s <head></head> tag I placed:

    <meta name="robots" content="noindex">
    <meta name="robots" content="nofollow">

The question is: how can I ensure that a link is opened only when a user has clicked it, and not when a random bot/spider scrapes it? Do the length and randomness of the ^token^ in the URL factor into bot-visit prevention?

In my application the ^token^ is a cryptographically random 5-byte value that, once generated, is hex-encoded. So, if token length plays a significant role in making this link non-scrapable, what is the recommended length of the random token?

My application is in Laravel and I use nginx and php-fpm combo in order to serve the content to the browser.
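On the length question: robots.txt and robots meta tags only deter well-behaved crawlers, so the token’s unguessability is the real defense against everything else. 5 random bytes is only 40 bits (about 10^12 possibilities), which is within reach of large-scale guessing; 16 bytes (128 bits) is a commonly recommended size for unguessable capability URLs. A sketch using Python’s secrets module to illustrate the sizes:

```python
import secrets

# 5 bytes = 40 bits of entropy: roughly 1.1e12 possible tokens, which a
# determined attacker can enumerate. 16 bytes = 128 bits is a common
# recommendation for unguessable URL tokens.
short_token = secrets.token_hex(5)    # 10 hex characters
long_token  = secrets.token_hex(16)   # 32 hex characters

print(len(short_token), len(long_token))  # 10 32
```

The PHP equivalent of the 128-bit token is bin2hex(random_bytes(16)). Note also that many mail gateways prefetch links for scanning, so if opening the link has side effects, require a confirming click (e.g. a button that issues a POST) on the landing page.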

Capturing SQL calls sent to a remote server from application

First of all, I’m not well versed in SQL anything at all. Closest I’ve ever needed to get was storing and retrieving data from a local SQLite db.

In essence I think I have a simple problem but it’s hard to orient yourself when everything is new.

My main tool at work is an ERP software, which is basically a front end to an SQL db.

Problem I have with it is that it’s very clumsy and doesn’t allow automation of even the most basic tasks.

What I want to do, is bypass the front-end completely and interact directly with the db to automate most of my tasks with python.

I can connect to the database just fine from python environment, but the schema is gigantic, there’s no way I’ll be able to find whatever it is I might be looking for.

So I need to capture the call the front end sends when I click a button (telling it to display a specific set of data) and use that call as a guide.

Basically, how can I, an SQL noob, capture calls that a desktop application sends to a remote server?

Edit 1: My job is mostly analytical, so all of my automation will be for retrieval, analysis and visualization. I’m not very likely to mess anything up.

Edit 2: Tried running a Profiler and got the message:

"In order to run a trace against SQL Server you must be a member of sysadmin fixed server role or have the ALTER TRACE permission."

I’m not a sysadmin, don’t have an alter trace permission and reeeally don’t feel like asking for it 😀
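Since Profiler (and Extended Events) is off the table without ALTER TRACE, a permission-light alternative is to search the database’s own metadata for likely table and column names from the Python connection that already works. A sketch assuming the ERP backend is SQL Server (the keyword and cursor usage are illustrative; it only builds a query for your existing driver to run):

```python
# Sketch: instead of tracing the front end's queries (which needs ALTER
# TRACE), search SQL Server's INFORMATION_SCHEMA for likely column names.
# Run this with the pyodbc/pymssql connection you already have.

SEARCH_SQL = (
    "SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME "
    "FROM INFORMATION_SCHEMA.COLUMNS "
    "WHERE COLUMN_NAME LIKE ? "
    "ORDER BY TABLE_SCHEMA, TABLE_NAME"
)

def search_params(keyword: str):
    """Parameter tuple for SEARCH_SQL: match any column containing keyword."""
    return (f"%{keyword}%",)

# Usage with your existing connection (hypothetical keyword):
# cursor.execute(SEARCH_SQL, search_params("invoice"))
# for schema, table, column in cursor.fetchall():
#     print(f"{schema}.{table}.{column}")
```

Searching metadata for business terms the front end displays (invoice numbers, customer names) usually narrows a gigantic schema down to a handful of candidate tables quickly.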

Can you identify telepathically received messages sent through spells like Sending as magical via Detect Magic?

An enemy casts Sending to communicate with a player from far away. In this example, the player character doesn’t know anything about the Sending spell and he might think he is just hearing voices or going crazy.

Another player casts Detect Magic to scan the area. Can this player detect the presence of the telepathic message inside the first player’s head via Detect Magic, given that Sending is an evocation spell?

I am seeing an ICMP type 3 error message in my firewall logs. However, I am unable to find the original request sent to that external IP [closed]

No matching connection for ICMP error message: icmp src inside: X.X.X.98 dst outside: X.X.X.11 (type 3, code 2) on inside interface. Original IP payload: udp src X.X.X.11/53 dst X.X.X.98/52906.

Can somebody please help me understand the cause?

CSRF token not sent when calling the back-end?

My system is composed of a NuxtJs and an AdonisJs application. Adonis handles CSRF tokens for us by sending:

set-cookie: adonis-session=XXX; Path=/; HttpOnly
set-cookie: XSRF-TOKEN=XXX; Max-Age=7200; Path=/; SameSite=Strict
set-cookie: adonis-session-values=XXX; Path=/; HttpOnly

Now from what I can see, this sets cookies that can be sent only by a browser, and only if the host is the same. From my understanding, from that point on the browser is the one that auto-attaches such cookies to each request. The problem is, when the Nuxt application makes an API request to the back-end, I do not see any CSRF token being sent when looking at the traffic through BurpSuite.

And naturally Adonis replies with "Invalid CSRF Token" and responds with status code 500.

I’m not sure what I am missing; I fail to understand why the browser is not sending that cookie. As extra information, I’ve also failed to find it through the browser’s inspector window (Storage tab). Is it possible that the cookie is not being set at all?

I’ve seen other posts regarding this issue, but they were not helpful because the solution consisted of reading the cookie and manually sending it as a header, which I would not advise and is not the model I’m going to implement. I would rather leave it to the back-end framework and the browser to do the job for me because, as we all know, that leaves less room for me to make a mistake.

Thank you for reading this.
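For what it’s worth, the non-HttpOnly XSRF-TOKEN cookie suggests Adonis is using the standard double-submit pattern: the client is *expected* to read that cookie and echo it back in an X-XSRF-TOKEN header, because the browser attaching cookies alone proves nothing (cookies are attached to cross-site requests too). HTTP clients such as axios do this read-and-echo automatically when credentials are enabled (its defaults are xsrfCookieName: 'XSRF-TOKEN', xsrfHeaderName: 'X-XSRF-TOKEN'). A sketch of that step, assuming browser-style cookie strings (the helper name is mine):

```javascript
// Sketch of the double-submit step: read the non-HttpOnly XSRF-TOKEN
// cookie and send its value back as a request header. Libraries like
// axios perform this automatically; this shows what they do under the hood.
function readXsrfToken(cookieString) {
  const match = cookieString.match(/(?:^|;\s*)XSRF-TOKEN=([^;]*)/);
  return match ? decodeURIComponent(match[1]) : null;
}

// In the browser this would look like (illustrative endpoint):
// fetch('/api/endpoint', {
//   method: 'POST',
//   credentials: 'include',
//   headers: { 'X-XSRF-TOKEN': readXsrfToken(document.cookie) },
// });
```

So the header is not a workaround to avoid; it is how this CSRF scheme is designed to be validated.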

What’s the term for a hash sent early and plain text revealed later?

I think there is a known pattern where you post the hash of a document, e.g. on Twitter, in order to have its time registered. You can then later publish the document and have it credited to the time of the hash.

I’m sure someone gave this procedure a name. What is that name?

I found trusted timestamping, but that is a thing for digital certificates, which do not come into play here.
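The pattern described above is usually called a commitment scheme: publishing the hash is the "commit", publishing the document later is the "reveal" (services built on this idea have also used the name "proof of existence"). A minimal sketch of the two phases:

```python
import hashlib

# Commit phase: publish only the digest (e.g. tweet it).
document = b"my prediction: ..."
commitment = hashlib.sha256(document).hexdigest()

# Reveal phase: publish the document; anyone can recompute the digest
# and check it against the earlier commitment.
assert hashlib.sha256(document).hexdigest() == commitment
```

One caveat: for short or guessable documents, commit to document plus a random nonce and reveal both, so the commitment cannot be brute-forced before the reveal.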