Multiple mod_rewrite parameters for the same PHP file

I want

to map to


to map to

as well as (this already works, just including it for semantics) OR

to map to OR

Here are my rewrite rules:

    RewriteEngine on
    RewriteBase /

    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^([^/]+)/([^/]+)/?$ /index.php?view=$1&type=$2 [QSA]

    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_URI} !^.*\.(jpg|css|js|gif|png|bmp|svg|ico)$ [NC]
    RewriteRule ^([^/]+)/([^/]+)/?$ /index.php?view=$1&id=$2 [QSA]

    ## This part already works, but will it interfere with my above rules?
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^(.*)$ /index.php?view=$1 [L,NC,QSA]

What am I doing wrong? This only works for one or the other, but not both!
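For context, the first two RewriteRule patterns are identical (^([^/]+)/([^/]+)/?$), so whichever rule comes first matches every two-segment URL and the other variant can never apply on its own. A minimal sketch of one way around this, assuming the URLs can carry a literal distinguishing segment (the type/ and id/ segments here are hypothetical, not from the original setup):

```apache
RewriteEngine on
RewriteBase /

# view + type, e.g. /shoes/type/red
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^([^/]+)/type/([^/]+)/?$ /index.php?view=$1&type=$2 [L,QSA]

# view + id, e.g. /shoes/id/42
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^([^/]+)/id/([^/]+)/?$ /index.php?view=$1&id=$2 [L,QSA]

# catch-all: view only
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ /index.php?view=$1 [L,QSA]
```

The [L] flags matter here: without them the first rewrite falls through to the later rules, which is one reason the original set behaves as "one or the other".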

Serving files from nginx if apache returns 404

I’m new to nginx and Apache, so please forgive me if my question is difficult to understand.

Currently I have Passenger on Apache serving the assets needed for my application; however, I also have nginx serving the old version of these assets. My goal is to have the request go to nginx if Apache cannot find the file. I also need this to be transparent.

I’ve tried using RewriteCond and RewriteRule with apache but it causes a redirect which I do not want. I also do not want to use the [P] flag or mod_proxy if possible. Is what I want to do at all possible?

Things worth noting:

  • the application and nginx are both running on docker
  • I’m using haproxy as a load balancer
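One way to get a transparent 404 fallback without any redirect or Apache-side proxying is to put nginx in front and let it intercept Apache's 404. This is only a sketch; the upstream address and the old-asset path are assumptions:

```nginx
server {
    listen 80;

    location / {
        proxy_pass http://apache-backend:8080;   # hypothetical Apache/Passenger upstream
        proxy_set_header Host $host;
        # Turn Apache's 404 into an internal retry instead of passing it to the client
        proxy_intercept_errors on;
        error_page 404 = @old_assets;
    }

    location @old_assets {
        root /srv/old-assets;   # wherever the old versions live
        try_files $uri =404;
    }
}
```

Since haproxy is already in front, it could route traffic to this nginx instead of to Apache directly; the client never sees a redirect either way.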

Uninstall TCP/IPv4 (Server 2016)

In Microsoft Windows Server 2016 Datacenter there is a properties window for network interfaces; inside that window are several checkboxes that control protocols, Internet Protocol Version 4 (TCP/IPv4) being one of them.

Several of the protocols listed have an uninstall option. However, this option is grayed out for IPv4. Is there a way to uninstall it?

I have tried the command netsh interface IPv4 uninstall; it says the computer must be restarted to complete the action, but after a restart the protocol is still present on the stack available for the interface.
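For what it's worth, Windows treats TCP/IPv4 as a core component that cannot be uninstalled, but it can be unbound from an individual adapter. A sketch in PowerShell, where the adapter name "Ethernet" is an example:

```powershell
# Unbind TCP/IPv4 from one adapter (equivalent to clearing the checkbox)
Disable-NetAdapterBinding -Name "Ethernet" -ComponentID ms_tcpip

# Re-enable it later
Enable-NetAdapterBinding -Name "Ethernet" -ComponentID ms_tcpip
```

This only disables the binding per interface; the protocol itself stays installed on the stack, which matches what the netsh attempt showed.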

Is there any software that acts as a “routing sandbox”?

So that any network traffic generated by the programs launched inside this “sandbox” are routed through a specific gateway.

In case you are curious about the use case for this:

All traffic on my machine goes through a VPN, but for certain apps (in my case fast-paced multiplayer online games) it is better not to use the VPN and to route directly over the internet, for lower latency.
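If the machine runs Linux, network namespaces can act as exactly this kind of routing sandbox: anything launched inside the namespace uses that namespace's own default route. A rough sketch, with all interface names and addresses as examples (the host still needs NAT or bridging toward the non-VPN gateway):

```shell
# Create the sandbox and a veth pair linking it to the host
ip netns add games
ip link add veth0 type veth peer name veth1
ip link set veth1 netns games

# Address both ends and bring them up
ip addr add 10.200.0.1/24 dev veth0
ip link set veth0 up
ip netns exec games ip addr add 10.200.0.2/24 dev veth1
ip netns exec games ip link set veth1 up

# The sandbox's default route points at the host side, which the host
# then NATs out the physical interface, bypassing the VPN
ip netns exec games ip route add default via 10.200.0.1

# Launch the game inside the sandbox
ip netns exec games sudo -u "$USER" ./game
```

Everything started with ip netns exec games inherits the sandbox routing, while the rest of the system keeps using the VPN.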

Rename copied file when verification is not valid

I’m looking for a command-line tool that does exactly what I describe in the subject. Instead of just copying/overwriting/ignoring a file when the verification fails on both ends, I’d like the copy program to not touch the destination file, but create a renamed version next to it and show a warning in a report.

I have been searching for a cross-platform or Windows command-line tool that does this, so I’m wondering if somebody knows of a tool that does something similar to what I described above. Maybe I overlooked some options on regular utilities; if so, I’d love to know.
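In case no existing tool fits, the described behaviour is small enough to sketch. The following is a hypothetical Python sketch, not a known utility: it copies to a temporary file, verifies the hash on both ends, and on a mismatch leaves the destination untouched and keeps the copy under a renamed file with a warning:

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in chunks so large files are not loaded into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verified_copy(src: Path, dst: Path) -> Path:
    """Copy src over dst only if the copy verifies.

    On a hash mismatch, dst is left untouched and the (bad) copy is
    kept next to it under a renamed file, with a warning printed.
    """
    tmp = dst.with_name(dst.name + ".part")
    shutil.copy2(src, tmp)
    if sha256(tmp) == sha256(src):          # verification passed on both ends
        tmp.replace(dst)                    # atomic overwrite of the destination
        return dst
    # Verification failed: do not touch dst, keep the copy renamed instead
    bad = dst.with_name(dst.stem + ".UNVERIFIED" + dst.suffix)
    tmp.replace(bad)
    print(f"WARNING: verification failed for {src} -> {dst}; wrote {bad}")
    return bad
```

A real tool would add the report file and recursion, but the not-touch-then-rename policy itself is just these few lines.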

IIS URL Rewrite username logging

We migrated a SQL SSRS server from one machine to another. In order to make it so that everyone’s shortcuts and favorites to specific reports still work, I shut down the old SSRS server and created an IIS site on that server that uses URL rewrite to redirect the request to the new server. It works great.

Now we want to, over time, contact the users who have not replaced their shortcuts and favorites with new ones on the new server and get them to do it so that we can shut this old server off.

I thought it would be as easy as making it so that the only authentication method available was “Windows Authentication” (which is how this specific site was set up to begin with) and then look in the log files. But all of the log file lines have no username… which makes me believe that the URL rewrite is taking place BEFORE the authentication.

Anyone have a workaround that would force the authentication to the old server to work, so I can get usernames in the log files?

Kibana 6.5 behind HAProxy: no response from Kibana

First of all, I’m pretty new to the Kibana world and HAProxy.

Installation:

  • CentOS 7
  • HAProxy 1.5.18, installed through yum install haproxy
  • Kibana 6.5, latest release; an “in the box” configuration with ES, Logstash & Redis
  • Firewalld stopped; SELinux port 5601 opened

My problem: HAProxy doesn’t seem to communicate with Kibana.

Haproxy Configuration :

    global
        log local1 debug
        chroot /var/lib/haproxy
        pidfile     /var/run/
        maxconn 4000
        user        haproxy
        group       haproxy
        daemon
        tune.ssl.default-dh-param 2048
        # Default SSL directories
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
        ssl-default-bind-options no-sslv3

    defaults
        log global
        option httplog
        timeout connect 5000
        timeout client  50000
        timeout server  50000

    frontend http-in
        mode http
        bind
        redirect scheme https code 301 if !{ ssl_fc }

    frontend https-in
        bind ssl crt /etc/ssl/private/vmrelkoytst.pem
        reqadd X-Forwarded-Proto:\ https
        acl acl_kibana path_beg /kibana
        use_backend kibana if acl_kibana

    backend kibana
        mode http
        option forwardfor
        option httpchk GET /
        reqrep ^([^\ :]*)\ /kibana/(.*) \ /
        server relk

Kibana configuration:

    server.basePath: "/kibana"
    server.rewriteBasePath: false
    elasticsearch.url: "http://localhost:9200"
    elasticsearch.preserveHost: true

HAProxy doesn’t make a connection to Kibana, judging by the lack of traces in the Kibana log. When I go to the URL http://myserver/kibana I’m correctly redirected to https://myserver/kibana. HAProxy log:

    Dec 19 16:00:05 localhost haproxy[7025]: [19/Dec/2018:16:00:05.903] https-in~ https-in/<NOSRV> -1/-1/2 0 SC 1/1/0/0/0 0/0
    Dec 19 16:00:05 localhost haproxy[7025]: [19/Dec/2018:16:00:05.903] https-in~ https-in/<NOSRV> -1/-1/2 0 -- 0/0/0/0/0 0/0
    Dec 19 16:00:05 localhost haproxy[7025]: [19/Dec/2018:16:00:05.907] https-in~ https-in/<NOSRV> -1/-1/1 0 SC 0/0/0/0/0 0/0

Nothing about connections in kibana.log or messages. The command lsof -i -nP gives this:

    node      6752        kibana   11u  IPv4  55370  0t0  TCP (LISTEN)
    node      6752        kibana   13u  IPv4  55376  0t0  TCP> (ESTABLISHED)
    node      6752        kibana   14u  IPv4  55371  0t0  TCP> (ESTABLISHED)
    node      6752        kibana   15u  IPv4  55372  0t0  TCP> (ESTABLISHED)
    rsyslogd  6948          root    3u  IPv4  59765  0t0  UDP *:514
    rsyslogd  6948          root    4u  IPv6  59766  0t0  UDP *:514
    haproxy   7024       haproxy    5u  IPv4  61408  0t0  UDP *:44174
    haproxy   7025       haproxy    4u  IPv4  61407  0t0  TCP *:80 (LISTEN)
    haproxy   7025       haproxy    5u  IPv4  61408  0t0  UDP *:44174
    haproxy   7025       haproxy    6u  IPv4  61409  0t0  TCP *:443 (LISTEN)

The webpage gives the error: “myserver didn’t send any data” (ERR_EMPTY_RESPONSE).

    Response to curl http://x.x.x.x:5601/app/kibana : OK
    Response to curl http://x.x.x.x:5601/kibana     : 404 Error
    Response to curl http://x.x.x.x:5601/           : No response

I don’t understand why my last two curls don’t respond correctly. Can somebody help me? The answer given in doesn’t seem to work.
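One detail worth double-checking: with server.rewriteBasePath set to false, Kibana expects the proxy to strip the /kibana prefix, yet the 404 on /kibana shows Kibana itself doesn't serve under that path. A sketch of the combination that is usually used instead; the server address 127.0.0.1:5601 is an assumption, since the original server line is truncated:

```yaml
# kibana.yml
server.basePath: "/kibana"
server.rewriteBasePath: true    # Kibana strips the prefix itself (available since 6.3)
```

```
# haproxy backend: the reqrep rewrite is then no longer needed
backend kibana
    mode http
    option forwardfor
    server relk 127.0.0.1:5601 check
```

With rewriteBasePath enabled, curl http://x.x.x.x:5601/kibana should answer directly, which makes it easy to test Kibana before involving HAProxy at all.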

PCI Compliance – SSL certificate doesn’t match hostname (port 25)

I’m working on a server hosting multiple websites for one company. I’m trying to get one of the domains to be PCI compliant, but it’s failing on port 25 (SMTP) because the SSL certificate hostname doesn’t match.

Each domain hosted on the server has its own valid SSL certificate, or some share multi-domain certificates. The PCI scan validates the SSL certificate on port 443.

The mail server is Postfix, and the config uses a valid wildcard SSL certificate that is used for the “main” domain of the company. The domain I’m trying to validate for PCI is another domain.

I don’t really understand how this could be set up to use an SSL certificate on port 25 that will be valid for any domain hosted on this server that needs to pass PCI. This is slightly outside my area of knowledge at the moment.
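For reference, newer Postfix releases can present a per-domain certificate via SNI, though a scanner that connects without SNI will still see the default certificate, so this may not satisfy the scan by itself. A sketch assuming Postfix 3.4 or later, with all paths and hostnames as examples:

```
# main.cf: default certificate plus an SNI lookup table
smtpd_tls_chain_files = /etc/ssl/private/main-domain.pem
tls_server_sni_maps = hash:/etc/postfix/sni_maps
```

```
# /etc/postfix/sni_maps: hostname -> key and certificate chain
mail.otherdomain.example /etc/ssl/private/otherdomain.key /etc/ssl/certs/otherdomain.pem
```

The map is compiled with postmap -F hash:/etc/postfix/sni_maps (the -F flag is required for SNI maps), after which clients that send the matching SNI name get the per-domain certificate.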

OpenVPN Tunnel blocking inbound web connections

I have a server running an OpenVPN client to route all internet traffic via the VPN.

I have excluded the local subnet from the tunnel and this is all working well so far.

The server also has a webserver running, which is publicly accessible using port forwarding from my router.

The web server only works when the VPN client is stopped. I assume that when the VPN is up, the reply packets are sent back over the VPN link rather than back through the router.

Question: is it possible to prevent this?

I’m running Ubuntu Server 18.04.
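A common fix for this is source-based policy routing: replies that originate from the web server's LAN address leave via the normal gateway instead of the tunnel. A sketch only, with the server address 192.168.1.50, gateway 192.168.1.1, and interface eth0 all as examples:

```shell
# Packets sourced from the server's LAN IP consult a separate routing table
ip rule add from 192.168.1.50 table 100

# That table's default route is the real router, not the VPN
ip route add default via 192.168.1.1 dev eth0 table 100

ip route flush cache
```

Outbound traffic the server initiates still uses the VPN default route, while responses to the port-forwarded web connections go back the way they came in.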