PostgreSQL 13 synchronous replication implementation question and pgpool failback question

In a pgpool-II 4.2 and postgresql 13 environment: 3 servers (each hosting a pgpool and a postgresql)

I understand we can set the parameter below for different durability levels: synchronous_commit = off, local, on, remote_write, or remote_apply.

When I set it to something beyond "local", the documentation implies that the WAL information will not even be sent to the standbys until the local flush completes.

My question is: why don't the two steps below start in parallel?

  1. local flush starts
  2. sending wal info. to slaves

This would save some time by doing them in parallel, correct? Or please let me know if my understanding is off.
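For reference, a sketch of the relevant settings in postgresql.conf, with the documented meaning of each synchronous_commit level (the standby names s1/s2 are placeholders for this 3-server setup, not part of the question):

```ini
# postgresql.conf (primary) -- illustrative fragment, not a full config

# What a committing transaction waits for, weakest to strongest:
#   off          = don't wait, not even for the local WAL flush
#   local        = wait for the local WAL flush only
#   remote_write = also wait until a sync standby has written the WAL to its OS
#   on           = also wait until a sync standby has flushed the WAL to disk
#   remote_apply = also wait until a sync standby has replayed the WAL
synchronous_commit = on

# Which standbys count as synchronous; if empty, replication is asynchronous.
# s1 and s2 are placeholder application_name values for the two standbys.
synchronous_standby_names = 'FIRST 1 (s1, s2)'
```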

The other question: does pgpool-II simplify PostgreSQL failover and failback, and is it reliable? Are there any real-life examples of such setups, with comments? Thanks!
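On the pgpool side, failover automation in pgpool-II is driven by hook scripts you supply rather than being fully built in; a hedged pgpool.conf sketch (the script paths are placeholders, the %-placeholders are the ones documented in the pgpool-II manual):

```ini
# pgpool.conf -- illustrative fragment, assuming streaming-replication mode

# Script pgpool runs when it detects that a backend node is down.
# %d = failed node id, %h = failed node host, %m = new main node id,
# %H = new main node host, %P = old primary node id
failover_command = '/etc/pgpool-II/failover.sh %d %h %m %H %P'

# Script run for each remaining standby after a primary failover,
# typically re-pointing it at the new primary (e.g. via pg_rewind).
follow_primary_command = '/etc/pgpool-II/follow_primary.sh %d %h %m %H'
```

Failback of a repaired ex-primary is usually done by re-attaching it as a standby (e.g. with pcp_recovery_node); how smooth that is in practice is exactly the kind of operational feedback the question asks for.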

Removing a large number of rows from a MySQL 5.7 table + ibdata growth question

I’m working with MySQL 5.7 on Windows.

I’ve got several tables to clean data from. The largest table’s .ibd file is 300 GB and it has almost 1.5 billion rows. I need to keep about 290 million rows, so a very large chunk needs to be removed.

From reading the docs, there are two ways:

  1. a DELETE statement, followed by OPTIMIZE TABLE
  2. copying the rows to keep into a new table, dropping the old one, and renaming the new one.

The second option seems much better in this case, but are there any potential issues to look out for? Downtime is not really an issue.
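For concreteness, a minimal SQL sketch of option 2 (the table name big_table and the keep-condition on created_at are made up for illustration):

```sql
-- Copy-and-swap sketch; big_table / created_at are placeholder names.

-- 1. Create an empty table with the same structure.
CREATE TABLE big_table_new LIKE big_table;

-- 2. Copy only the ~290M rows to keep (for a table this size, copying
--    in primary-key ranges over several batches is gentler on the server).
INSERT INTO big_table_new
SELECT * FROM big_table
WHERE created_at >= '2020-01-01';  -- placeholder keep-condition

-- 3. Swap the tables in one atomic statement, then drop the old data.
RENAME TABLE big_table TO big_table_old, big_table_new TO big_table;
DROP TABLE big_table_old;
```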

Another question: I’ve seen that a DELETE statement on big tables (deleting 1 million+ rows) can get stuck and then cause ibdata to grow by hundreds of MB, even though the innodb_file_per_table option is on.

I assume that has to do with temporary tables somehow, but I can’t find an explanation. Any ideas?

Hi, I am new to GSA. I have some questions; please tell me if you know:

1) Can I create DA 90+ social profile backlinks with GSA Search Engine Ranker?
2) How do I set a campaign to create only high-DA (50+) backlinks, when the GSA SER campaign settings only have a PR option?
3) How many backlinks can be created per minute if I run 700 threads, with high DA and good quality across all backlink types?
4) How many threads can I run on a 16 GB RAM, 6-core VPS?
5) How many backlinks can I get per 30 days if only quality backlinks are created?
6) If I purchase a verified GSA SER list, how can I create a lot of backlinks with it?
I have checked the GSA SER tutorials, but there are only a few videos and they don’t cover much of this. Please tell me if you know.

How do you ‘do something’ to every element in a list except one index? (C# beginner-level question) [closed]

I created a list of 40 buttons; each of these buttons has an ‘int counter’ that counts up incrementally to 5 whenever it is pressed.

If I hit button 1, the other buttons’ counters will reset to 0, but the button I hit can now increase to 2, 3, 4, 5.

How would you loop over the list in a way that doesn’t also reset the button being pressed?

Button itself is a class, and I have a ButtonManager that contains List<Button> Buttons.
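The usual pattern is a single pass over the list with an index check that skips the pressed element. A minimal Python sketch of the idea (Button, the 40-button list, and the cap of 5 mirror the question; the names are illustrative, and the C# version is the same loop over List<Button> with an if (i != pressedIndex) guard):

```python
# Minimal sketch of "reset every button except the pressed one".

class Button:
    def __init__(self):
        self.counter = 0

    def press(self):
        # Count up incrementally, capped at 5 as in the question.
        self.counter = min(self.counter + 1, 5)

def handle_press(buttons, pressed_index):
    for i, button in enumerate(buttons):
        if i != pressed_index:      # the index check that skips one element
            button.counter = 0
    buttons[pressed_index].press()

buttons = [Button() for _ in range(40)]
handle_press(buttons, 3)
handle_press(buttons, 3)           # same button again: climbs to 2
print(buttons[3].counter)          # 2
handle_press(buttons, 7)           # different button: button 3 resets
print(buttons[3].counter, buttons[7].counter)  # 0 1
```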

Laravel multilanguage site: SEO question [duplicate]

It’s my first time creating a multilanguage site, and my one concern is SEO. I don’t use different slugs for my languages. I use Voyager as the back-end admin, and when, for example, a blog page is opened, it gets translated automatically in the controller based on the locale the user selected, but it doesn’t go to a different URL: the page is still https://example.com/blog/{blog-page} and NOT https://example.com/blog/{blog-page}/{language}.

Will that be an issue SEO wise?

What I’m doing is: in AppServiceProvider.php, every time a page loads I set the locale so there won’t be any problems:

    $locale = session()->get('locale');
    if ($locale == null) {
        Session::put('locale', 'el');
    }

Setting the language on the HTML tag:

    <html lang="{{ session()->get('locale') }}">

and also

    @section('title', $locale == 'en' ? 'English Title' : 'Greek Title')

Canonical and alternate links:

    <link rel="canonical" href="{{ Request::fullUrl() }}">
    <link rel="alternate" href="{{ Request::fullUrl() }}" hreflang="el-gr">
    <link rel="alternate" href="{{ Request::fullUrl() }}" hreflang="el">
    <link rel="alternate" href="{{ Request::fullUrl() }}" hreflang="x-default">
    <link rel="alternate" href="{{ Request::fullUrl() }}" hreflang="en">

The website has a lot of traffic and I don’t want to just wait and see how this turns out, so I’m looking for some tips on whether my approach is correct and whether crawlers will handle the website fine.

Workflow question

Is there any way to do all of this completely in Scrapebox:

From a list of domains, I crawl the sites for contact emails
From the list, I scrape the social media accounts
I verify the emails

Is there any way to “relate” everything in one Excel sheet?

What I want is:

    Newssite     Contact              State   Facebook              Facebook email (scraped with another tool)   State
    elpais.com   contact@elpais.com   Valid   facebook.com/elpais

And so on…

Question on the support/deprecation status of browser plug-ins (not extensions) [closed]

I’d appreciate it if someone could confirm or correct what I found about the support/deprecation status of browser plug-ins enabled by NPAPI or PPAPI. Please provide feedback on each of the following:

  1. NPAPI plug-ins

    • As of the first half of 2021, NPAPI plug-ins are deprecated in most major browsers, including Chrome, Safari, Firefox, and Opera, but several other browsers still support them. (link)
  2. PPAPI plug-ins

    • PPAPI plug-ins are supported by Chromium-based browsers, but official support for PPAPI will end in June 2022. (link)
  3. Apple QuickTime

    • QuickTime 7.x used to be provided as a browser plug-in, but plug-in support was dropped in Safari 12 in 2018. From QuickTime 10.x on, the plug-in is no longer provided. Therefore, the QuickTime plug-in is deprecated. (link1, link2)
  4. Adobe Flash

    • The Adobe Flash plug-in is deprecated in most major browsers, but it is still supported as an NPAPI or PPAPI plug-in by several browsers forked from Chromium or Firefox. (link)
  5. Prospect for plug-ins

    • Even after official support for NPAPI or PPAPI reaches end-of-life, the plug-ins might still be supported, for backward compatibility, by browsers forked from Chromium or Firefox, or by other types of browsers.

A question about SQL

Consider the relations:

    EMPLOYEE(ID, SURNAME, LINE_ID)
    CAR(EID, MAKE)

For each employee, we store their unique ID and the ID of their line manager (if one exists), LINE_ID. If an employee has a line manager, then the corresponding LINE_ID is not NULL. A line manager does not have a line manager, i.e., the corresponding LINE_ID is NULL. LINE_ID is a foreign key referencing the ID primary key in the relation EMPLOYEE. In the relation CAR, we store the make(s) of an employee’s car(s), if any. Specifically, for each car, we store its make (e.g., BMW, Jaguar, Aston Martin, Toyota) and the ID of its owner/employee (EID). EID is a foreign key referencing ID in the relation EMPLOYEE. The primary keys are underlined.

Assume the following query:

    SELECT E.SURNAME, L.SURNAME
    FROM EMPLOYEE E, EMPLOYEE L, CAR C
    WHERE E.LINE_ID = L.ID AND C.EID = L.ID

We know that: (i) there are 20 line managers, each one being the line manager of 10 employees; (ii) there are 120 employees who have 1 car each, i.e., the CAR relation has 120 tuples, each corresponding to a different employee.

(a) Provide the canonical tree of the query and estimate the expected number of tuples retrieved.

Have a question

I saw a “No targets to post to” alarm on my project. Does that mean the project is finished?
Also, I did “Delete Target URL Cache” and “Delete Target URL History” to repeat the project after it finished, and then set it active again.
The first time, I got 2000 verified links. After deleting the cache & history and activating it again, can I get roughly 2000 verified links again? I hope I can.
Also, after deleting the cache & history, the second run is slower than the first. What should I do?

Newbie question – what happens after I stop a project?

I’m planning to run GSA SER, and I did not find an answer to my question.
Let’s say I import target URLs from a purchased list file.
I run a campaign for some time. Then I stop it, and start it again later.
Does link building start over from the beginning of that list file, or is there any way to resume from the point where I stopped, even if the program was closed in between?
Thanks.