I’m running the RabbitMQ Docker image (rabbitmq:3-management) in AWS ECS. It works fine with no issues.
Then I added a bit more complexity and created a service with the same RabbitMQ image, but now behind an AWS Network Load Balancer (my ultimate goal is a RabbitMQ cluster, so I need a few instances behind a load balancer). The target group is configured on port 5672 and uses the same port for health checks. The interval between health checks is 30 seconds (the maximum available), and the unhealthy threshold is 5. In the ECS service configuration, the health check grace period is 120 seconds, which should be enough for the service to start. What happens is that a few minutes after I start the service, it gets killed and restarted:
service Rabbit-master (instance i-xxx) (port 5672) is unhealthy in target-group Rabbit-cluster-target-group due to (reason Health checks failed)
‘A few minutes’ means 2, or 5, or 9… it varies. It doesn’t happen at startup but after a while. Also, I can see that RabbitMQ itself works fine (in the logs and in the management panel), so it’s definitely the load balancer causing the restart; it’s not that RabbitMQ died first and the load balancer then restarted it.
So my question is: what am I doing wrong, and how can I achieve stable operation of RabbitMQ in ECS behind a load balancer? Is the idea of using port 5672 for health checks wrong? If so, which port should I use instead? 15672?
Sorry if I haven’t provided enough details; I described the ones that seemed relevant to me. If you need anything more, I’ll be happy to elaborate. Thanks!
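For reference, an NLB TCP health check on port 5672 simply opens a TCP connection and counts the target as healthy if the handshake succeeds. A minimal Python sketch of the same probe (the host and port values here are illustrative assumptions, not taken from the question’s setup):

```python
import socket

def tcp_health_check(host: str, port: int, timeout: float = 5.0) -> bool:
    """Mimic an NLB TCP health check: healthy iff the TCP handshake succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe the AMQP port the target group checks (values are illustrative).
print(tcp_health_check("localhost", 5672))
```

Note that these bare probe connections open and immediately close, so RabbitMQ may log them as clients disconnecting unexpectedly; that is cosmetic and separate from the target being marked unhealthy. The management UI on port 15672 can also serve as an HTTP health-check target if TCP checks on 5672 prove troublesome.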
Is there a reason for my local pytest run to die right after I press Ctrl-b z? Is there a signal that is sent and not caught by the tmux server that somehow kills my pytest? (I think that in a non-tmux terminal Ctrl-b z wouldn’t do anything, though I haven’t been able to confirm that yet.)
The same happens when I try to resize a pane (i.e. Ctrl-b followed by the Down arrow).
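For what it’s worth, zooming or resizing a tmux pane changes the pty’s window size, and the kernel then delivers SIGWINCH to the foreground process group. SIGWINCH’s default disposition is to be ignored, so it should only kill a process if something (a handler, a library) changes that. A small standalone sketch (not tied to pytest) to observe the signal arriving rather than dying from it:

```python
import os
import signal

seen = []

def record(signum, frame):
    # Log the signal instead of dying, so we can see what the terminal sends.
    seen.append(signal.Signals(signum).name)

# SIGWINCH is what a pty delivers on resize/zoom; by default it is ignored,
# so a process only dies from it if a handler or library misbehaves.
signal.signal(signal.SIGWINCH, record)

# Simulate what happens on a tmux resize: the kernel sends SIGWINCH.
os.kill(os.getpid(), signal.SIGWINCH)
print(seen)
```

If pytest really dies at that moment, a handler like this (or strace on the pytest process) would help confirm whether SIGWINCH, or some other signal entirely, is involved.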
I have a script that uses sudo mount --bind /dev other/dev and then later uses sudo umount other/dev to unmount. After some random number of runs, we can somehow enter an odd state where the main tty stops existing and the whole desktop environment starts malfunctioning (new applications won’t start or they crash, Firefox can’t redraw pages and they freeze, etc.), tty in my terminal says not a tty, and the terminal fails to open new instances. I’m not using ssh or any other remote functionality, so the loss of the tty for my local terminal is absurd.
The only way to fix it is to reboot, or to enter a different tty with CTRL+ALT+F1 or similar, log in, and force other/dev to unmount (a normal unmount says it is busy). After this the tty is magically revived and everything works again. I’m only actually mounting /dev to get another /dev/null, so an easier workaround probably exists for me, but this is still very strange!
Is there some explanation for this odd behavior? I’m on Ubuntu 18.04.2 LTS.
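As a side note on the symptom: the not a tty message just means that tty(1)’s ttyname() call on stdin failed, which is what you would expect if the terminal’s device node under /dev is yanked away while the mount state is confused. The same check can be reproduced in Python (a generic sketch, nothing specific to the script in question):

```python
import os
import pty

# A real terminal fd: ttyname() resolves to a device node under /dev.
master_fd, slave_fd = pty.openpty()
print(os.isatty(slave_fd), os.ttyname(slave_fd))

# A non-terminal fd (here, a pipe): this is the state `tty` reports as
# "not a tty" -- isatty() is false and ttyname() would raise OSError.
read_fd, write_fd = os.pipe()
print(os.isatty(read_fd))

for fd in (master_fd, slave_fd, read_fd, write_fd):
    os.close(fd)
```

Since the goal is only to get another /dev/null, bind-mounting the single file (mount --bind /dev/null other/dev/null onto an existing empty file) avoids putting the whole of /dev at risk from a stray umount.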
Recently I heard news from Search Engine Land that Google has changed the way they show local results. Again! So, guess what?
Is Google going to update Google+, or will local business listings be closed? What do you guys think?
After turning my rig back on after hibernation, performance is almost always shot. What verifiable explanation is there, if any (the search engines haven’t turned one up), and is there a way to improve performance?
What this question is not:
- This is not a “why doesn’t my system boot in 10 seconds on my SSD?” (it obviously saves a RAM dump).
- This is not a “how do I speed up the boot / shut down” question.
- This is not a “how do I prevent hibernation file from using so much space” question.
- This is not a “why my over/under-clock was reset after rebooting?” question.
Simply put, simple tasks unrelated to networking or the Internet are pointlessly unresponsive (1 TB SSD using about 480 GB of storage) on an eight-core rig with the pagefile disabled. Internet speeds (wired and wireless) are at or below 10% of what they can normally achieve. Obviously, rebooting restores system performance, but the point of hibernation (at least for me) is that my rig is not pointlessly heating the room while I’m away for an extended period (a few hours or more). I should mention that I cannot sleep Windows on my rig, though that is a different question.
I have a really simple query I’m running on a large table (500k rows) to page results.
Originally I was using this query, which is really fast:
select * from deck order by deck.sas_rating desc limit 10
Its EXPLAIN ANALYZE reports a 0.2 ms execution time. Cool.
The sas_rating column has duplicate integer values, and I realized when paging through the results (using OFFSET for the other pages) that I was getting duplicate results. No problem: add the primary key as a secondary ORDER BY. But the performance is terrible.
select * from deck order by deck.sas_rating desc, deck.id asc limit 10
That takes 685ms with an explain analyze of:
Limit  (cost=164593.15..164593.17 rows=10 width=1496) (actual time=685.138..685.139 rows=10 loops=1)
  ->  Sort  (cost=164593.15..165866.51 rows=509343 width=1496) (actual time=685.137..685.137 rows=10 loops=1)
        Sort Key: sas_rating DESC, id
        Sort Method: top-N heapsort  Memory: 59kB
        ->  Seq Scan on deck  (cost=0.00..153586.43 rows=509343 width=1496) (actual time=0.009..593.444 rows=509355 loops=1)
Planning time: 0.143 ms
Execution time: 685.171 ms
It’s even worse on my weaker production server: my search went from 125 ms total to 35 seconds!
I tried adding a multi-column index, but that didn’t improve performance. Is there any way to prevent duplicate results when using LIMIT + OFFSET without destroying the query’s performance?
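One thing worth checking: a multi-column index only removes the sort if its column order and sort directions match the ORDER BY exactly, i.e. something like CREATE INDEX ON deck (sas_rating DESC, id ASC) in Postgres. A sketch of the idea using SQLite for a self-contained demonstration (table name and data are made up; Postgres syntax is analogous), where a matching composite index lets the planner walk the index instead of sorting half a million rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deck (id INTEGER PRIMARY KEY, sas_rating INTEGER)")
conn.executemany(
    "INSERT INTO deck (sas_rating) VALUES (?)",
    [(n % 50,) for n in range(1000)],  # many duplicate ratings, like the real table
)
# Directions must mirror the ORDER BY exactly: sas_rating DESC, id ASC.
conn.execute("CREATE INDEX deck_sas_id ON deck (sas_rating DESC, id ASC)")

rows = conn.execute(
    "SELECT id, sas_rating FROM deck ORDER BY sas_rating DESC, id ASC LIMIT 10"
).fetchall()
print(rows)

# "USE TEMP B-TREE FOR ORDER BY" in the plan would mean a sort is still happening.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM deck ORDER BY sas_rating DESC, id ASC LIMIT 10"
).fetchall()
print(plan)
```

For deep pages, keyset pagination avoids OFFSET’s cost entirely: remember the last row shown and ask for WHERE sas_rating < :last_rating OR (sas_rating = :last_rating AND id > :last_id), with the same ORDER BY and LIMIT.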
For Immediate Release:
What am I to my bank…?
apparently, I am nothing…
At first, I contacted my bank in order to make a legal international transaction
with a seller that used an escrow service in a foreign country. The bank refused
to send the wire, and made it sound as if they did not want to get involved in
an international transaction, which was understandable….
I informed the seller; at this point, I actually found out the seller is
from the US, and he used the foreign…
WAB, A Bank That Kills Business Plans, Rejects $13,000k+ Legal Wire Transaction….