nmap: Same IP, different domain names, different results?

I’m scanning a network (which I won’t name). It has more than one IP address. When I scanned its subdomains, several of them resolve to the same IP address but return different scan reports (e.g., different ports being reported).

For example:

nmap subdomain1
nmap subdomain2
nmap i.p.v.4          # the IPv4 address both subdomains translate to
nmap subdomain1 -A -p-
nmap subdomain2 -A -p-
nmap i.p.v.4 -A -p-   # this also returns different results

All three return different port findings.

As far as I know, the domain name should simply be resolved to the IP address and then scanned, so I’d expect all three to return the same results.

Why are different results returned? Is it because of domain name resolution (something I missed?) or is it something else?
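One thing worth checking is whether each name really resolves to a single, identical address. A minimal sketch using Python's standard library (hostnames are hypothetical; round-robin DNS with several A records would explain nmap hitting different hosts on different runs):

```python
import socket

def all_ips(hostname):
    # gethostbyname_ex returns *every* IPv4 address behind the name,
    # not just the single one nmap happens to pick for a given run.
    _, _, addresses = socket.gethostbyname_ex(hostname)
    return sorted(addresses)

# Illustrative call; substitute the real subdomains:
print(all_ips("localhost"))  # typically ['127.0.0.1']
```

If the two subdomains return different (or multi-element) address lists, the scans were not necessarily hitting the same machine.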

Also, if given the IP address of a domain and its subdomains that share that same IP, should I just scan the IP (to save time and resources), or should I also scan every subdomain?

If I disable list items from displaying in search results, will it have any impact on indexing?

I have a SharePoint list with 30 columns. I have added a few columns to the “Indexed Columns”. But I don’t want any list items to be displayed in search results, so I want to disable the following option: [screenshot of the list setting]

If I disable the above option, will it have any impact on the indexed columns? I mean, will there be any exceptions, such as a list view threshold error, if I use the indexed columns in my queries (REST, CSOM, etc.)?
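For concreteness, this is the kind of REST query I mean (a sketch with a hypothetical list name and indexed column, not our actual schema; filtering on an indexed column is what normally avoids the list view threshold error on large lists):

```
GET /_api/web/lists/getbytitle('MyList')/items?$filter=Status eq 'Active'
Accept: application/json;odata=verbose
```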

Why is Google indexing our robots.txt file and showing it in search results?

For some reason, Google is indexing the robots.txt file for some of our sites and showing it in search results. See screenshots below.

Our robots.txt file is not linked from anywhere on the site and contains just the following:

User-agent: *
Crawl-delay: 5

This only happens for some sites. Why is this happening and how do we stop it?
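One commonly suggested approach (a sketch, assuming an Apache server with mod_headers enabled; adapt for other servers) is to serve robots.txt with an `X-Robots-Tag: noindex` response header. Google honors this header, and it keeps the file out of search results without affecting Google’s ability to read it:

```
# .htaccess or vhost config (Apache sketch)
<Files "robots.txt">
  Header set X-Robots-Tag "noindex"
</Files>
```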


Screenshot 1: Google Search console…


Upgrade to Ubuntu 19.04 on HP Elitebook 840 G1 results in blinking cursor after bootloader

I used to dual-boot Windows (on /dev/sda1) and Ubuntu 18.04 on my HP EliteBook 840 G1 (SSD). I also have a third partition for most of my files (I believe /dev/sda2, named D:).

I tried upgrading to 18.10, which worked, after which I wanted to upgrade to 19.04. GRUB still finds the boot options I want, but when I try to boot into Ubuntu, I get a blinking cursor (after the Ubuntu logo). Going into “advanced options” in GRUB and selecting 4.15.0-55-generic also gives me the blinking cursor (after the Ubuntu logo).

I wanted to know if anyone had managed to solve this? I have tried:

  • booting from a stick with Ubuntu 19.04 (most of the time it doesn’t even boot from the stick, and when it does, I also get a flashing underscore)
  • booting into recovery mode and changing the GPU settings as proposed by heynnema (gdm3 display manager hangs after booting with Ubuntu 18.10)

The fact that even booting from a stick gives an error may indicate that it is indeed a graphics error?
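A quick way to test the graphics hypothesis (a sketch, not a guaranteed fix) is to boot with kernel modesetting disabled. This can be tried once by pressing `e` at the GRUB menu and appending `nomodeset` to the `linux` line, or made persistent like so:

```
# /etc/default/grub -- add nomodeset to test whether it is a graphics issue,
# then apply the change with: sudo update-grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
```

If the system boots with `nomodeset`, the blinking cursor is very likely a GPU driver/modesetting problem rather than a broken upgrade.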

Enterprise Search Query Rules not affecting results

I am trying to have certain documents listed first in search results on my company’s SharePoint site (on-premises, not Online). To do this, I am creating a new query rule, removing the query conditions, and choosing “Change ranked results by changing the query”. Then I set a filter so that the content type equals the desired content type (I also tried file extension; that did not work either). The test query returns nothing if it is a custom content type I made. The query test does seem to work when sorting documents by last modified time, but when I try it in the site’s search bar I get different results (documents from a year or more ago instead of the ones changed yesterday).

I have tried creating these rules at the site collection level as well as the subsite level; nothing seems to change the results. Any idea what is going on here? Could some other setting somewhere be blocking these changes from taking effect? Does the site just need to be re-indexed? Thanks in advance.
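For reference, the kind of query transform I am attempting is roughly the following KQL (the content type name is hypothetical; `XRANK` boosts matching items rather than filtering everything else out):

```
{searchTerms} XRANK(cb=1000) ContentType:"MyCustomContentType"
```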

Why does the numerical inverse Laplace function FT give erroneous results for small times, and what is the alternative?

I am trying to do the numerical Laplace inverse of a very complicated transfer function, subject to a trapezoidal pulse input. For the sake of understanding, I will use a simple transfer function to pose my question. I want to find the output value during the pulse on-times, but the numerical Laplace inverse gives highly erroneous results on such time scales. Here is the pulse input that I am giving: [plot of the trapezoidal pulse input]

The Mathematica code is the following:

Gtransfer[s_] = 1/(1 + s);
T1 = 10^-9; T2 = 10*10^-9; T3 = 10^-9; Eo = 1;
Eint[t_] = Eo*10^9*t;
Enet[t_] =
  Eint[t] - Eint[t - T1]*UnitStep[t - T1] -
   Eint[t - T1 - T2]*UnitStep[t - T1 - T2] +
   Eint[t - T1 - T2 - T3]*UnitStep[t - T1 - T2 - T3];
Eins[s_] = LaplaceTransform[Enet[t], t, s];
fun1[s_] = Eins[s] Gtransfer[s];

thisDir =
  ToFileName[("FileName" /.
      NotebookInformation[EvaluationNotebook[]])[[1]]];
SetDirectory[thisDir];
<< FixedTalbotNumericalLaplaceInversion.m
t0 = 25 10^-9;
Vnumerical = FT[fun1, t0] (* numerical *)
Vanalytical =
 N[InverseLaplaceTransform[fun1[s], s, t] /. t -> 25 10^-9] (* analytical *)

You will find that for values of t0 < 12*10^-9, the numerical inverse Laplace is extremely erroneous. Could anyone suggest why this is so and how to fix it? My original transfer function is just too complicated to invert analytically. Thanks in advance!
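One possible workaround (a sketch, an assumption on my part rather than something verified against the original transfer function): the tiny inversion times t ~ 10^-9 may be straining the fixed-Talbot contour at machine precision, so rescale time to O(10) units using the identity L{f(k t)}(s) = (1/k) F(s/k):

```
(* Sketch: invert a time-rescaled version of fun1, assuming fun1 as defined above.
   With g[t] = f[tau*t] and tau = 10^-9, L{g}(s) = (1/tau) fun1[s/tau],
   so f[t0] = g[t0/tau] and the inversion point becomes t0/tau = 25. *)
tau = 10^-9;
gfun[s_] := (1/tau) fun1[s/tau];
Vrescaled = FT[gfun, t0/tau]  (* should agree with Vanalytical at t0 *)
```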

New managed property not returned in search results

I am working with SharePoint 2013 on-premises. I mapped new crawled properties to managed properties (both existing managed properties and a new one) and could not get any results using them. I marked the new managed property as retrievable, safe, etc.

It does not matter which content source I add the managed property from; I never retrieve it, as if the SQL database is full or the managed property is not really retrievable/safe.

Old data is updated fine. I ran a full crawl and restarted the search service and the crawl server. At the moment, resetting the index is not possible (6 websites are using it). I’m out of ideas.
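For reference, this is the kind of query where the property comes back empty (the managed property name is hypothetical; `selectproperties` only returns a value if the property is marked retrievable and the content has been recrawled since the mapping):

```
GET /_api/search/query?querytext='*'&selectproperties='MyNewManagedProperty'
```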