How would I obtain error information on a $Failed GeoElevationData?

The following code returns $Failed with a bare GeoElevationData error and no description of what went wrong. I suspect this is because I am asking for elevation data that is too granular.

cellGrid = Flatten[Table[{lat, lon}, {lat, 53.506, 53.508, 0.0005}, {lon, -112.097, -112.095, 0.0005}], 1]
elevationData = GeoElevationData[cellGrid]

When I look at the stack trace, I see an ...Throw[$Failed, "GeoElevationDataError"]. There are no entries in the Help system for GeoElevationDataError.

I will try a variation on this code, but are there good, detailed sources of error information?
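Until a better source turns up, one way to surface more than a bare $Failed is to capture whatever messages the evaluation emits. A minimal Wolfram Language sketch (untested here; the coarser 0.001° step is my assumption, chosen to test whether granularity is the trigger):

```mathematica
(* Sketch: run the same query but keep any messages that fire,
   instead of letting them vanish behind $Failed. *)
cellGrid = Flatten[
   Table[{lat, lon},
     {lat, 53.506, 53.508, 0.001},   (* coarser step, assumed for testing *)
     {lon, -112.097, -112.095, 0.001}], 1];

elevationData = Check[
   GeoElevationData[GeoPosition[cellGrid]],
   (Print["GeoElevationData failed; messages: ", $MessageList]; $Failed)];
```

If $MessageList comes back empty, the failure is internal to the paclet (the Throw you saw), and coarsening the grid or splitting the request into smaller batches is the practical workaround.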

After a failed bootcamp installation of Windows 10, I am stuck with multiple partitions

I canceled a Boot Camp install of Windows 10 because it was taking ages. Now I have a partition on my MacBook I cannot seem to get rid of. If there is nothing on the extra partition (the 58 GB one), how can I merge it back into my regular disk and reclaim that space?

Here is the output of diskutil list:

Lanes-MacBook-Pro:~ lanelawson$ diskutil list
/dev/disk0 (internal):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                         251.0 GB   disk0
   1:                        EFI EFI                     314.6 MB   disk0s1
   2:                  Apple_HFS                         0 B        disk0s2
   3:                 Apple_APFS Container disk2         184.0 GB   disk0s3
   4:       Microsoft Basic Data OSXRESERVED             8.0 GB     disk0s4
   5:                 Apple_APFS Container disk1         58.6 GB    disk0s5
   6:                 Apple_Boot Boot OS X               134.2 MB   disk0s6

/dev/disk1 (synthesized):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      APFS Container Scheme -                      +58.6 GB    disk1
                                 Physical Store disk0s5
   1:                APFS Volume UNTITLED                868.4 KB   disk1s1

/dev/disk2 (synthesized):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      APFS Container Scheme -                      +184.0 GB   disk2
                                 Physical Store disk0s3
   1:                APFS Volume Mac HD                  71.0 GB    disk2s1
   2:                APFS Volume Preboot                 52.3 MB    disk2s2
   3:                APFS Volume Recovery                517.0 MB   disk2s3
   4:                APFS Volume VM                      3.2 GB     disk2s4

/dev/disk3 (disk image):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +17.2 MB    disk3
   1:                  Apple_HFS Transmission            17.2 MB    disk3s1

/dev/disk4 (disk image):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +148.3 MB   disk4
   1:                  Apple_HFS VLC media player        148.2 MB   disk4s1

/dev/disk6 (disk image):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        +202.4 MB   disk6
   1:                  Apple_HFS Final Draft 11 11.0.0   202.3 MB   disk6s1
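Assuming disk1 (the 58.6 GB APFS container holding only the empty UNTITLED volume) and disk0s4 (the 8 GB OSXRESERVED leftover) really contain nothing you need, the usual sequence is to delete those partitions and then grow the main container into the freed space. A hedged sketch, not verified against this exact layout; double-check the identifiers with diskutil list and back up first:

```shell
# 1. Delete the stray 58.6 GB APFS container (this removes disk0s5 with it).
sudo diskutil apfs deleteContainer disk1

# 2. Turn the Boot Camp leftover partition into free space.
sudo diskutil eraseVolume free none disk0s4

# 3. Grow the main container (disk2) into adjacent free space.
#    A size of 0 means "use everything available".
sudo diskutil apfs resizeContainer disk2 0
```

resizeContainer can only absorb free space contiguous with the container; if it refuses, the 134 MB Apple_Boot helper at disk0s6 may need to be dealt with separately.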

Failed to execute ‘json’ on ‘Response’: body stream is locked

I am following this article and get the following error in the browser console:

Uncaught (in promise) TypeError: Failed to execute 'json' on 'Response': body stream is locked
    at e.json (sp-pages-assembly_en-us_5d8862cf2c0cc1538b9ce027f59ea4e9.js:1133)

The code from the article is below:

this.context.aadHttpClientFactory
  .getClient('https://tenant.onmicrosoft.com/6b347c27-f360-47ac-b4d4-af78d0da4223')
  .then((client: AadHttpClient): void => {
    client
      .get('https://myfunction.azurewebsites.net/api/CurrentUser', AadHttpClient.configurations.v1)
      .then((response: HttpClientResponse): Promise<JSON> => {
        return response.json();
      })
      .then((responseJSON: JSON): void => {
        // Display the JSON in a table
        var claimsTable = this.domElement.getElementsByClassName("azFuncClaimsTable")[0];
        for (var key in responseJSON) {
          var trElement = document.createElement("tr");
          trElement.innerHTML = `<td class="${styles.azFuncCell}">${key}</td><td class="${styles.azFuncCell}">${responseJSON[key]}</td>`;
          claimsTable.appendChild(trElement);
        }
      });
  });

In the second .then of the chain the JSON object is undefined, and response.json() also comes back empty.

Is there something I am missing?
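This error usually means response.json() was called on a body that had already been consumed, for example the same Response object being read twice. This is not the article's code, just a minimal standalone sketch of the standard fix, assuming a Fetch-compatible Response (browser or Node 18+): clone the response before the first read.

```typescript
// A Response body is a one-shot stream: reading it twice throws
// "body stream is locked" (browsers) / "body used already" (Node).
// clone() BEFORE any read gives an independent, re-readable copy.
async function demo(): Promise<string> {
  const response = new Response(JSON.stringify({ user: "demo" }));
  const copy = response.clone();                     // clone before reading
  const fromClone = (await copy.json()) as { user: string };
  const fromOriginal = (await response.json()) as { user: string }; // still readable
  return `${fromClone.user}/${fromOriginal.user}`;
}
```

In the SPFx snippet, one hedged adaptation would be returning response.clone().json() from the first .then, or checking whether the web part renders (and therefore reads) the same Response twice.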

SSH failed to start – Missing privilege separation directory: /var/run/sshd

I have a VPS running Ubuntu 16.04.5 that has been running for a number of years with little issue. Today, however, I found I was unable to access the server over SSH, receiving 'connection refused' errors. I accessed the server using my VPS host's serial console service and traced the issue to the OpenSSH server failing to start. Here is the output of service status, service start, and sshd -t following a fresh reboot:

root@167:/# service ssh status
● ssh.service - OpenBSD Secure Shell server
   Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
   Active: failed (Result: start-limit-hit) since Fri 2019-01-18 04:56:42 EST; 24min ago
  Process: 983 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=255)

Jan 18 04:56:42 167 systemd[1]: Failed to start OpenBSD Secure Shell server.
Jan 18 04:56:42 167 systemd[1]: ssh.service: Unit entered failed state.
Jan 18 04:56:42 167 systemd[1]: ssh.service: Failed with result 'exit-code'.
Jan 18 04:56:42 167 systemd[1]: ssh.service: Service hold-off time over, scheduling restart.
Jan 18 04:56:42 167 systemd[1]: Stopped OpenBSD Secure Shell server.
Jan 18 04:56:42 167 systemd[1]: ssh.service: Start request repeated too quickly.
Jan 18 04:56:42 167 systemd[1]: Failed to start OpenBSD Secure Shell server.
Jan 18 04:56:42 167 systemd[1]: ssh.service: Unit entered failed state.
Jan 18 04:56:42 167 systemd[1]: ssh.service: Failed with result 'start-limit-hit'.

root@167:/# service ssh start
Job for ssh.service failed because the control process exited with error code.
See "systemctl status ssh.service" and "journalctl -xe" for details.

root@167:/# sshd -t
Missing privilege separation directory: /var/run/sshd

I’ve attempted some research into this, but nothing that’s come up seems to have an actual solution: just endless cycles of ‘I have this problem’ with no answers, answers that are outdated, or generally unhelpful information.

Does anybody have any ideas on what to do next to troubleshoot/resolve this issue? SSH was last working about 12 hours ago when I logged in to run updates and rebooted the server.
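The sshd -t output already names the fix: /var/run/sshd is missing (it lives on a tmpfs and is normally recreated at service start, which evidently did not happen on this boot). A hedged sketch of the usual recovery on Ubuntu 16.04:

```shell
# Recreate the privilege separation directory sshd checks at startup.
sudo mkdir -p /var/run/sshd
sudo chmod 0755 /var/run/sshd

# /var/run is tmpfs-backed, so make the fix survive future reboots.
echo 'd /var/run/sshd 0755 root root' | sudo tee /etc/tmpfiles.d/sshd.conf

# Clear the start-limit-hit state, then start the unit again.
sudo systemctl reset-failed ssh.service
sudo systemctl start ssh.service
```

If sshd -t then passes, the unit should start normally; if it still fails, its next complaint will point at the real problem.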

AWS Fargate task failed ELB health checks

How can I troubleshoot this further? I am trying to run a simple nginx container, but the load balancer reports that its health checks are failing, and the task does not respond on its IP address either, likely because of the load balancer problem.

I set the listener rule priority to 2 in CloudFormation for the task. If I try to set the priority to 1, the CloudFormation stack fails to deploy. Could that have something to do with it?

  # Create a rule on the load balancer for routing traffic to the target group
  LoadBalancerRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      Actions:
        - TargetGroupArn: !Ref 'TargetGroup'
          Type: 'forward'
      Conditions:
        - Field: path-pattern
          Values: [!Ref 'Path']
      ListenerArn:
        Fn::ImportValue: !Ref LoadBalancerListener
      Priority: !Ref 'Priority'

The resources look like:

Resources:

  # The task definition. This is a simple metadata description of what
  # container to run, and what resource requirements it has.
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: nginx
      Cpu: 256
      Memory: 512
      NetworkMode: awsvpc
      RequiresCompatibilities:
        - FARGATE
      ContainerDefinitions:
        - Name: nginx
          Cpu: 128
          Memory: 256
          Image: nginx
          PortMappings:
            - ContainerPort: 80

  Service:
    Type: AWS::ECS::Service
    DependsOn: LoadBalancerRule
    Properties:
      ServiceName: !Ref 'ServiceName'
      Cluster:
        Fn::ImportValue: !Ref EcsCluster
      LaunchType: FARGATE
      DeploymentConfiguration:
        MaximumPercent: 200
        MinimumHealthyPercent: 75
      DesiredCount: !Ref 'DesiredCount'
      NetworkConfiguration:
        AwsvpcConfiguration:
          AssignPublicIp: ENABLED
          SecurityGroups:
            - !Ref EcsHostSecurityGroup
          Subnets:
            - !ImportValue core-vpc-PublicSubnet1AID
            - !ImportValue core-vpc-PublicSubnet1BID
      TaskDefinition: !Ref 'TaskDefinition'
      LoadBalancers:
        - ContainerName: !Ref 'ServiceName'
          ContainerPort: 80
          TargetGroupArn: !Ref TargetGroup

  TargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 6
      HealthCheckPath: /
      HealthCheckProtocol: HTTP
      HealthCheckTimeoutSeconds: 5
      HealthyThresholdCount: 2
      TargetType: ip
      Name: !Ref 'ServiceName'
      Port: !Ref 'ContainerPort'
      Protocol: HTTP
      UnhealthyThresholdCount: 2
      VpcId: !ImportValue core-vpc-VPCID

  # This security group defines who/where is allowed to access the ECS hosts
  # directly. By default we're just allowing access from the load balancer.
  # If you want to SSH into the hosts, or expose non-load balanced services,
  # you can open their ports here.
  EcsHostSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      VpcId: !ImportValue core-vpc-VPCID
      GroupDescription: Access to the ECS hosts and the tasks/containers that run on them
      SecurityGroupEgress:
        - CidrIp: 0.0.0.0/0
          IpProtocol: "-1"
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: '443'
          ToPort: '443'
          CidrIp: 138.106.0.0/16

  # Create a rule on the load balancer for routing traffic to the target group
  LoadBalancerRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      Actions:
        - TargetGroupArn: !Ref 'TargetGroup'
          Type: 'forward'
      Conditions:
        - Field: path-pattern
          Values: [!Ref 'Path']
      ListenerArn:
        Fn::ImportValue: !Ref LoadBalancerListener
      Priority: !Ref 'Priority'
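One concrete thing to check in the template above: EcsHostSecurityGroup only allows inbound TCP 443 from 138.106.0.0/16, yet the target group health-checks the task over plain HTTP on the container port (80), so the ALB can never reach it. A hedged sketch of an added ingress rule; LoadBalancerSecurityGroup is a hypothetical name for whatever security group your ALB actually uses:

```yaml
      SecurityGroupIngress:
        - IpProtocol: tcp          # existing rule, kept as-is
          FromPort: '443'
          ToPort: '443'
          CidrIp: 138.106.0.0/16
        - IpProtocol: tcp          # assumed addition: let the ALB reach
          FromPort: '80'           # the container port for health checks
          ToPort: '80'
          SourceSecurityGroupId: !Ref LoadBalancerSecurityGroup
```

The listener rule priority (1 vs 2) only affects routing precedence; a deploy failure at priority 1 usually just means another rule on that listener already holds priority 1, and is unrelated to the health checks.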

nginx bind() to 0.0.0.0:443 failed (48: Address already in use)

I have Laravel Valet 2.1.6 installed on Mac OS 10.14.2.

nginx 1.15.8 is installed using brew.

I restarted my Mac without installing any updates or new software, and now all example.test sites are giving a 502 error with the following showing in the /usr/local/var/log/nginx/error.log log:

2019/01/17 20:38:47 [warn] 31277#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/etc/nginx/nginx.conf:1
2019/01/17 20:38:47 [emerg] 31277#0: bind() to 0.0.0.0:443 failed (48: Address already in use)
2019/01/17 20:38:47 [emerg] 31277#0: bind() to 0.0.0.0:443 failed (48: Address already in use)
2019/01/17 20:38:47 [emerg] 31277#0: bind() to 0.0.0.0:443 failed (48: Address already in use)
2019/01/17 20:38:47 [emerg] 31277#0: bind() to 0.0.0.0:443 failed (48: Address already in use)
2019/01/17 20:38:47 [emerg] 31277#0: bind() to 0.0.0.0:443 failed (48: Address already in use)
2019/01/17 20:38:47 [emerg] 31277#0: still could not bind()

At the same time, I get the following in the /Users/Myself/.config/valet/Log/nginx-error.log log:

2019/01/17 20:41:34 [error] 32071#0: *1 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: example.test, request: "GET / HTTP/2.0", upstream: "fastcgi://unix:/Users/Myself/.config/valet/valet.sock:", host: "example.test" 

When I run ps ax -o pid,ppid,%cpu,vsz,wchan,command|egrep '(nginx|PID)' I see this list:

  PID  PPID  %CPU      VSZ WCHAN  COMMAND
32064     1   0.0  4306660 -      nginx: master process /usr/local/opt/nginx/bin/nginx -g daemon off;
32065 32064   0.0  4333284 -      nginx: worker process
32066 32064   0.0  4332260 -      nginx: worker process
32067 32064   0.0  4333284 -      nginx: worker process
32068 32064   0.0  4333284 -      nginx: worker process
32069 32064   0.0  4326116 -      nginx: worker process
32070 32064   0.0  4316900 -      nginx: worker process
32071 32064   0.0  4368236 -      nginx: worker process
32072 32064   0.0  4331236 -      nginx: worker process
32073 32064   0.0  4326116 -      nginx: worker process
32074 32064   0.0  4340452 -      nginx: worker process
32075 32064   0.0  4333284 -      nginx: worker process
32076 32064   0.0  4334308 -      nginx: worker process
36815  1406   0.0  4268060 -      egrep (nginx|PID)

None of the following solves the issue:

  • sudo killall nginx
  • brew services restart nginx
  • brew services restart php
  • valet restart
  • Restarting my Mac
  • valet uninstall && valet install then valet park on the relevant dir

Apache is not running, so it is not the conflicting service.

I tried doing sudo /usr/local/opt/nginx/bin/nginx -g 'daemon off;' and got this:

nginx: [emerg] bind() to 0.0.0.0:80 failed (48: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (48: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:60 failed (48: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (48: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (48: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:60 failed (48: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (48: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (48: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:60 failed (48: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (48: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (48: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:60 failed (48: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (48: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:443 failed (48: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:60 failed (48: Address already in use)
nginx: [emerg] still could not bind()

Running sudo lsof -i tcp:80 produces:

COMMAND   PID USER   FD   TYPE             DEVICE SIZE/OFF NODE NAME
nginx   42220 root    7u  IPv4 0x7ac8eae7874ccb11      0t0  TCP *:http (LISTEN)
nginx   42221 Myself  7u  IPv4 0x7ac8eae7874ccb11      0t0  TCP *:http (LISTEN)
nginx   42222 Myself  7u  IPv4 0x7ac8eae7874ccb11      0t0  TCP *:http (LISTEN)
nginx   42223 Myself  7u  IPv4 0x7ac8eae7874ccb11      0t0  TCP *:http (LISTEN)
nginx   42224 Myself  7u  IPv4 0x7ac8eae7874ccb11      0t0  TCP *:http (LISTEN)
nginx   42225 Myself  7u  IPv4 0x7ac8eae7874ccb11      0t0  TCP *:http (LISTEN)
nginx   42226 Myself  7u  IPv4 0x7ac8eae7874ccb11      0t0  TCP *:http (LISTEN)
nginx   42227 Myself  7u  IPv4 0x7ac8eae7874ccb11      0t0  TCP *:http (LISTEN)
nginx   42228 Myself  7u  IPv4 0x7ac8eae7874ccb11      0t0  TCP *:http (LISTEN)
nginx   42229 Myself  7u  IPv4 0x7ac8eae7874ccb11      0t0  TCP *:http (LISTEN)
nginx   42230 Myself  7u  IPv4 0x7ac8eae7874ccb11      0t0  TCP *:http (LISTEN)
nginx   42231 Myself  7u  IPv4 0x7ac8eae7874ccb11      0t0  TCP *:http (LISTEN)
nginx   42232 Myself  7u  IPv4 0x7ac8eae7874ccb11      0t0  TCP *:http (LISTEN)

Basically the same output when I run that command for port 443.

This Valet issue post suggests valet domain test might fix it, but that didn’t help.

I tried reinstalling PHP, but no luck:

  • brew uninstall --force php
  • brew cleanup
  • brew install php
  • valet uninstall && valet install
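The lsof output shows a master nginx owned by root (PID 42220) that sudo killall nginx apparently never removes for long, which suggests something (a leftover launchd job or a root-level brew service) keeps respawning it. A hedged sketch of a teardown to try, assuming the standard Homebrew/Valet layout:

```shell
# See exactly which processes own the ports right now.
sudo lsof -nP -iTCP:80 -sTCP:LISTEN
sudo lsof -nP -iTCP:443 -sTCP:LISTEN

# Stop nginx at BOTH privilege levels; Valet runs its copy via sudo.
sudo brew services stop nginx
brew services stop nginx

# Look for a stray launchd job that keeps resurrecting the master.
sudo launchctl list | grep -i nginx

# With the ports finally free, let Valet lay down its own nginx config.
valet install
```

If launchctl lists an nginx job outside the homebrew.mxcl.* plists, unloading that job is likely the missing step.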


POP DELE(xxx) FAILED — ERR: There’s no message xxx

For about two weeks I have had this problem on all my catchall accounts.
At first I thought it was because of overlap (I use the same catchall with different projects), then I thought there were simply too many emails. But after hand-cleaning all emails, running only one project, and resetting the parsed email cache, I believe the problem must be somewhere outside my control.

Here’s what happens when I verify (or verify emails only):

Checking E-mail catchall@myserver.com for links (243 verifications waiting)…
Parsing 1705 E-Mails…
Parsing of …. 0% … 100% ) with a lot of data found
Deleting messages of catchall@myserver.com done by 0%
POP3 DELE(1626) failed – ERR There’s no message 1626.

At this point the project stalls: it seems to stop doing anything, and the thread count is 1. When I finally hit STOP, a host of further verification messages comes, for example:

Found 722 URL(s) (720 verify, 0 login) for WordPress Article – https://www.flexiform.co.uk in E-Mails

Very strange. Sometimes it deletes a few messages (17%, for instance) and then hits that DELE error.
Note that when I log in to these accounts by hand afterwards, the number of emails is smaller than the message number in the DELE error (sometimes by 2-5, sometimes by over 200); when I log in before the run, the number is higher. So it seems GSA is deleting messages successfully, but then tries to delete one of them again?

PS: Does "reset data – parsed email cache" really do something? It always finishes very quickly without any notice, and when I run it a few times with a quick Active (VE) in between, verification always starts from the next email server, as if it remembered what it had just done (though that might of course also be down to randomization).

FCGI: attempt to connect to 127.0.0.1:9000 (*) failed Apache 2.4, PHP7.1-FPM with unix socket

I am trying to set up PHP 7.1-FPM on macOS Mojave. I followed this guide and got all the way to the end, when it stopped working.

I’ve got my services installed:

$ sudo brew services list
Name    Status  User Plist
httpd   started root /Library/LaunchDaemons/homebrew.mxcl.httpd.plist
php@7.1 started root /Library/LaunchDaemons/homebrew.mxcl.php@7.1.plist

I’ve set up my httpd.conf:

#
# DirectoryIndex: sets the file that Apache will serve if a directory
# is requested.
#
<IfModule dir_module>
    DirectoryIndex index.php index.html index.htm
</IfModule>

<VirtualHost *:*>
   ProxyPassMatch "^/(.*\.php(/.*)?)$" "fcgi://127.0.0.1:9000/usr/local/var/www/$1"
</VirtualHost>

<FilesMatch \.php$>
    SetHandler "proxy:unix:/usr/var/run/php7.1-fpm.sock|fcgi://localhost/"
</FilesMatch>

In my /etc/php-fpm.d/www.conf I’ve got

listen = /var/run/php/php7.1-fpm.sock 

If I check for the processes it seems all good:

$ ps aux | grep php-fpm
finnlesueur       3718   0.0  0.0  4268052    692 s001  S+   12:28pm   0:00.01 tail -f /usr/local/var/log/php-fpm.log
finnlesueur      30588   0.0  0.0  4268060    812 s000  S+    2:15pm   0:00.00 grep --color=auto php-fpm
_www             29371   0.0  0.0  4520960   1180   ??  S     2:08pm   0:00.00 /usr/local/opt/php@7.1/sbin/php-fpm --nodaemonize
_www             29370   0.0  0.0  4520960   1080   ??  S     2:08pm   0:00.00 /usr/local/opt/php@7.1/sbin/php-fpm --nodaemonize
root             29366   0.0  0.1  4518912  30808   ??  Ss    2:08pm   0:00.08 /usr/local/opt/php@7.1/sbin/php-fpm --nodaemonize

$ ps aux | grep httpd
finnlesueur      29346   0.0  0.0  4345112   1832   ??  S     2:08pm   0:00.00 /usr/local/opt/httpd/bin/httpd -D FOREGROUND
root             29332   0.0  0.0  4309296   2720   ??  Ss    2:08pm   0:00.09 /usr/local/opt/httpd/bin/httpd -D FOREGROUND
finnlesueur      30727   0.0  0.0  4268060    812 s000  S+    2:16pm   0:00.00 grep --color=auto httpd
finnlesueur      29350   0.0  0.0  4328728   1172   ??  S     2:08pm   0:00.00 /usr/local/opt/httpd/bin/httpd -D FOREGROUND
finnlesueur      29349   0.0  0.0  4345112   1180   ??  S     2:08pm   0:00.00 /usr/local/opt/httpd/bin/httpd -D FOREGROUND
finnlesueur      29348   0.0  0.0  4353304   1184   ??  S     2:08pm   0:00.00 /usr/local/opt/httpd/bin/httpd -D FOREGROUND
finnlesueur      29347   0.0  0.0  4335896   1192   ??  S     2:08pm   0:00.00 /usr/local/opt/httpd/bin/httpd -D FOREGROUND

My DocumentRoot has an index.php that just echoes phpinfo();, and that also seems fine, but when I load localhost I see 503 Service Unavailable, and in my Apache error log I get:

[Thu Jan 17 14:18:57.654807 2019] [authz_core:debug] [pid 29347] mod_authz_core.c(817): [client ::1:57866] AH01626: authorization result of Require all granted: granted
[Thu Jan 17 14:18:57.654991 2019] [authz_core:debug] [pid 29347] mod_authz_core.c(817): [client ::1:57866] AH01626: authorization result of <RequireAny>: granted
[Thu Jan 17 14:18:57.655083 2019] [authz_core:debug] [pid 29347] mod_authz_core.c(845): [client ::1:57866] AH01628: authorization result: granted (no directives)
[Thu Jan 17 14:18:57.655119 2019] [proxy_fcgi:debug] [pid 29347] mod_proxy_fcgi.c(108): [client ::1:57866] AH01060: set r->filename to proxy:fcgi://127.0.0.1:9000/usr/local/var/www/index.php
[Thu Jan 17 14:18:57.655162 2019] [proxy:debug] [pid 29347] mod_proxy.c(1246): [client ::1:57866] AH01143: Running scheme fcgi handler (attempt 0)
[Thu Jan 17 14:18:57.655171 2019] [proxy_fcgi:debug] [pid 29347] mod_proxy_fcgi.c(1019): [client ::1:57866] AH01076: url: fcgi://127.0.0.1:9000/usr/local/var/www/index.php proxyname: (null) proxyport: 0
[Thu Jan 17 14:18:57.655183 2019] [proxy_fcgi:debug] [pid 29347] mod_proxy_fcgi.c(1028): [client ::1:57866] AH01078: serving URL fcgi://127.0.0.1:9000/usr/local/var/www/index.php
[Thu Jan 17 14:18:57.655191 2019] [proxy:debug] [pid 29347] proxy_util.c(2313): AH00942: FCGI: has acquired connection for (*)
[Thu Jan 17 14:18:57.655199 2019] [proxy:debug] [pid 29347] proxy_util.c(2367): [client ::1:57866] AH00944: connecting fcgi://127.0.0.1:9000/usr/local/var/www/index.php to 127.0.0.1:9000
[Thu Jan 17 14:18:57.655219 2019] [proxy:debug] [pid 29347] proxy_util.c(2576): [client ::1:57866] AH00947: connected /usr/local/var/www/index.php to 127.0.0.1:9000
[Thu Jan 17 14:18:57.655346 2019] [proxy:error] [pid 29347] (61)Connection refused: AH00957: FCGI: attempt to connect to 127.0.0.1:9000 (*) failed
[Thu Jan 17 14:18:57.655367 2019] [proxy_fcgi:error] [pid 29347] [client ::1:57866] AH01079: failed to make connection to backend: 127.0.0.1
[Thu Jan 17 14:18:57.655375 2019] [proxy:debug] [pid 29347] proxy_util.c(2328): AH00943: FCGI: has released connection for (*)

And nothing makes it to my PHP-FPM log, I guess because the connection has not been made.

I’ve been Googling for hours but can’t seem to find anything that works. Any help would be appreciated!

Let me know if there’s extra information I can provide!

Update 1

$ sudo lsof -U | grep php
php-fpm   29366            root    5u  unix 0xf497a489280ca0c1      0t0      ->0xf497a489280c91e9
php-fpm   29366            root    6u  unix 0xf497a489280c91e9      0t0      ->0xf497a489280ca0c1
php-fpm   29366            root    7u  unix 0xf497a489280c9a81      0t0      /var/run/php/php7.1-fpm.sock
php-fpm   29370            _www    8u  unix 0xf497a489280c9a81      0t0      /var/run/php/php7.1-fpm.sock
php-fpm   29371            _www    8u  unix 0xf497a489280c9a81      0t0      /var/run/php/php7.1-fpm.sock
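One mismatch stands out in the configuration above: php-fpm is listening on /var/run/php/php7.1-fpm.sock (confirmed by both www.conf and the lsof output), but the FilesMatch handler points at /usr/var/run/php7.1-fpm.sock, and the ProxyPassMatch block still targets TCP 127.0.0.1:9000, where nothing listens, which is exactly the connection Apache reports as refused. A hedged httpd.conf sketch that routes everything through the real socket and drops the TCP proxy:

```apache
# Remove (or comment out) the <VirtualHost> ProxyPassMatch block entirely,
# so Apache never tries fcgi://127.0.0.1:9000, and let this handler carry
# all .php requests over the socket php-fpm actually listens on.
<FilesMatch \.php$>
    SetHandler "proxy:unix:/var/run/php/php7.1-fpm.sock|fcgi://localhost/"
</FilesMatch>
```

After changing the config, restart httpd (sudo brew services restart httpd) and re-test localhost; the SetHandler form passes the script path automatically, so no path needs to be hard-coded as in the ProxyPassMatch rule.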

needrestart keeps restarting some failed services

I get this message in the logs:

Jan 16 06:01:02 examplehost systemd[1]: xrdp-sesman.service: Unit entered failed state.

This may stem from the fact that I disconnected from xrdp session without logging out.

OK, so I run needrestart. It keeps restarting the service over and over:

root@examplehost ~ % needrestart
Scanning processes...
Scanning candidates...
Scanning processor microcode...
Scanning linux images...

Running kernel seems to be up-to-date.
The processor microcode seems to be up-to-date.

Restarting services...
 systemctl restart xrdp.service
Service restarts being deferred:
 /etc/needrestart/restart.d/dbus.service
 systemctl restart libvirtd.service
 systemctl restart systemd-journald.service
 systemctl restart systemd-logind.service

No containers need to be restarted.

User sessions running outdated binaries:
 root @ session #1: login[743]
 root @ session #626: sshd[19524]
 root @ user manager service: systemd[1208]

root@examplehost ~ % needrestart
(second run: identical output; xrdp.service is restarted again, the same restarts remain deferred, and the same outdated sessions are listed)

How do I fix this (without rebooting)? And why does restarting xrdp not seem to work, even manually?

OS: Debian 9.6 amd64.
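needrestart only re-runs systemctl restart; it cannot fix a unit that fails on its own, so it will keep flagging xrdp until the underlying failure is resolved or its state is cleared. A hedged sketch of the usual loop-breaking steps (xrdp-sesman.service is taken from the log line above):

```shell
# First find out why the unit actually fails before restarting it again.
journalctl -u xrdp-sesman.service --since today
systemctl status xrdp-sesman.service xrdp.service

# If the failure was transient (e.g. the abandoned xrdp session),
# clear the failed state so needrestart stops re-triggering it.
systemctl reset-failed xrdp-sesman.service
systemctl restart xrdp.service
```

If xrdp-sesman fails again immediately, the journalctl output from the first command should name the real cause (a stale PID file or socket left by the disconnected session is a common one).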