FTP server and chroot: SSL3 alert write: fatal: protocol version

When I enable "chroot_local_user=YES" in my FTP server config /etc/vsftpd/vsftpd.conf,
the FTP client (WinSCP) fails with the error from the title: SSL3 alert write: fatal: protocol version.

When that option is commented out and I run "service vsftpd restart", the login works, but the user can then browse system directories from /.
This is CentOS 7 Linux.

These are…


Is it possible to create an "Always On" environment from an on-premises SQL Server to a virtual machine on Azure?

I’ve been looking for a similar question here and reading about the "Add Azure Replica Wizard", but I’ve heard it doesn’t work anymore because it’s a deprecated feature.

I used to have a primary server and a secondary server on-premises in an Always On configuration, but because of costs I had to delete the secondary "replica".

I would like to know if it’s possible to recreate this Always On environment with the primary server on-premises and a replicated environment on a virtual machine in Azure.

Then, if something happens to our primary, the secondary replica on Azure would automatically take over.
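
For what it's worth, if this is set up manually as a regular availability group (rather than through the deprecated wizard), the Azure VM ends up being just another replica added over a VPN/ExpressRoute connection. A minimal sketch of the statement involved, with entirely hypothetical server, domain, and group names:

-- Sketch only: assumes an existing availability group [MyAG], an Azure VM named AZUREVM01
-- that is joined to the same domain and WSFC cluster over a VPN, and a database mirroring
-- endpoint listening on port 5022. All names are placeholders.
ALTER AVAILABILITY GROUP [MyAG]
ADD REPLICA ON N'AZUREVM01'
WITH (
    ENDPOINT_URL = N'TCP://AZUREVM01.mydomain.local:5022',
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,   -- synchronous commit is required for automatic failover
    FAILOVER_MODE = AUTOMATIC,
    SECONDARY_ROLE (ALLOW_CONNECTIONS = NO)
);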

SQL Server Agent unable to view network drives

I am unable to get Agent jobs to write their output to a network path. I asked our IT guy to set up a domain account for the Agent service to log in with when it starts. That account does have access to the domain and can see the network drives. If I set the location of the output file to the local C: drive, this works without issue. However, if I set it to a network location, I get the following message:

[SQLSTATE 01000] (Message 0)Unable to open Step output file. The step succeeded. 
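
For reference, the step is pointed at the network location roughly like this when scripted in T-SQL (job name, step, and path below are made up). One detail worth noting: the output file generally has to be a UNC path rather than a mapped drive letter, since drive mappings belong to an interactive logon session that the Agent service account does not have.

-- Sketch only: job/step names and the UNC path are hypothetical.
EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'NightlyExtract',
    @step_id = 1,
    @output_file_name = N'\\fileserver\sqlagent-logs\NightlyExtract_step1.txt';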

Any help would be very much appreciated

FileStream DB SQL Server – merging DB data

We recently upgraded a server from 2014 to 2017. Since downtime was a concern, we migrated some DBs a couple of days before.

The migration completed Sunday afternoon, and we’ve now found out that the FILESTREAM DB was still being used between the restore and the cutover, so we have a data gap to close. I restored the old DB to a backup server and ran a basic ‘not in’ query, which found 12 documents that exist only in the old DB.

Finally, the question: how do we get those 12 documents into the current DB? Is it as simple as an insert from old to new?

If it helps, this is the query that shows the 12-record gap:

SELECT *
INTO #tmpFileStore
FROM OPENQUERY (OldDBSever_LinkedServer, 'SELECT * FROM [FSDB].[dbo].[DocumentFileStore]');

SELECT stream_id
FROM #tmpFileStore
WHERE stream_id NOT IN (SELECT [stream_id] FROM [dbo].[DocumentFileStore])
GO
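
Assuming it really is a straight row copy (and that the FILESTREAM column comes across the linked server cleanly), the insert being asked about would look something like the sketch below. The column list is an assumption; the real table will have more columns than shown here.

-- Sketch only: every column name except stream_id is hypothetical.
INSERT INTO [dbo].[DocumentFileStore] (stream_id, [name], file_stream)
SELECT src.stream_id, src.[name], src.file_stream
FROM OPENQUERY (OldDBSever_LinkedServer,
     'SELECT stream_id, [name], file_stream FROM [FSDB].[dbo].[DocumentFileStore]') AS src
WHERE src.stream_id NOT IN (SELECT stream_id FROM [dbo].[DocumentFileStore]);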

SQL Server Snapshot replication – Failing to start the Publisher snapshot agent

I am trying to set up snapshot replication using an Azure SQL Managed Instance. When I check the Snapshot Agent status, I see this error: Failed to connect to Azure Storage ” with OS error: 53.

While going through the Distribution wizard, I had the option to set the Snapshot folder as well as the Storage account connection string.

I got the storage account connection string from the Azure Portal and pasted that in. I am unsure about the Snapshot folder. What value should I put there?

Is it a folder inside the Azure Storage account, or on the distributor SQL Server instance? The Distribution wizard said that the details of the folder would also be in the Azure portal. Is there a place where I can find them?

I have a feeling that if I get this setting right, my snapshot replication will work just fine.

Can anybody help me track down the problem?
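
For context, when the distributor is configured with T-SQL instead of the wizard, the two values in question appear to map onto parameters of sp_adddistpublisher, and the snapshot/working folder is the UNC path of an Azure file share rather than a local folder. A sketch under that assumption, with placeholder names and key:

-- Sketch only: instance name, storage account, share, and key are placeholders.
EXEC sp_adddistpublisher
    @publisher = N'my-managed-instance.abc123.database.windows.net',
    @distribution_db = N'distribution',
    @working_directory = N'\\mystorageaccount.file.core.windows.net\replshare',   -- Azure file share, i.e. the "snapshot folder"
    @storage_connection_string = N'DefaultEndpointsProtocol=https;AccountName=mystorageaccount;AccountKey=<key>';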

Using server timestamp to synchronize actions across clients


tl;dr

The question is rather simple: how unsynchronized can internal clocks be between different machines?


Context

We have a "hybrid" approach where the Clients are authoritative on some things and the Server on others. That makes our lives easier, because the gameplay doesn’t need to be very tight about those things.

In this case, the Server will pre-determine a sequence of Enemy Types to be spawned at different locations with pre-determined delay between the spawns. We would then let the Clients handle the rest during the gameplay.

However, if we want the Clients to stay mostly in sync on the display of the enemies, they need to start this sequence at the very same time, and we cannot rely on the network to deliver the data to all Clients at the same moment.

The idea we had was to send a timestamp in the format returned by System.currentTimeMillis() indicating when to begin the sequence.

But one question stands in the way: how unsynchronized can internal clocks be between different machines?

Will a Web Server detect a base64-encoded reverse shell at run time?

A vulnerable website blocks almost everything related to PE (Privilege Escalation), but when the ls -al command is encoded as base64, the website doesn’t block the dangerous code (at scan time). Will the web server detect and block the code at run time?

base64 -d <<< bHMgLWFs | sh    # bHMgLWFs is "ls -al" in base64

Web Server: Scanning the input.. Seems fine, I will not block it.

Web Server Inside: ls -al    # Will it block it at run time?

Why do developers put the installer/executable and the file checksum on the same server? [duplicate]

On https://exiftool.org/, there are links to https://exiftool.org/exiftool-12.01.zip and https://exiftool.org/checksums.txt.

Both the ZIP archive and the checksum file are hosted on the same machine. This means that an attacker who has compromised the server can also replace checksums.txt with a fake one matching the malware-infected ZIP archive. So is there any point in checking this checksum at all?

Maybe the answer is "they can’t afford a separate server", which would explain it and is understandable. But then what is the point of publishing the checksum at all?

One idea I had was that maybe it’s implied that I should store these hashes "for the next time", and thus be reasonably sure (or at least slightly less unsure) that somebody hasn’t compromised the server since my last check. However, the checksums are specific to the current files! They are not some kind of general "author’s signature" that I can use to verify that the real author signed the new binaries. So that idea goes out the window as well.

Since the hashes are specific to the current files, there is no value in "pre-fetching" them and storing them locally to compare against next time; they always correspond only to the current version of the binaries.

I refuse to believe that a person smart enough to make such a project would not have thought of this, so I assume that I must be missing something.

(Also, this is far from the only project where I’ve seen this, so this is not specifically about ExifTool. I just used it as an example since I’m trying to code a secure update mechanism for this program.)