Redirect to a subdirectory frontpage without using a WP plugin – what files to edit, and how?

My situation

I have a multisite setup where I use subdirectories for different languages: the main homepage serves the English language frontpage, and subdirectories serve the other languages, for example the Japanese language frontpage.

What I want to do

I would like to redirect people who live in Japan to the Japanese language frontpage without using a WordPress plugin.

What I have tried

Inspired by the third answer of this Stack post, I edited the theme's header.php file by adding the following at the very beginning:

    $a = unserialize(file_get_contents('' . $_SERVER['REMOTE_ADDR']));
    $countrycode = $a['geoplugin_countryCode'];
    if ($countrycode == 'JP') {
        header('Location:');
        exit;
    }

The problem

However, I get the following error:

The page isn’t redirecting properly

An error occurred during a connection to

This problem can sometimes be caused by disabling or refusing to accept cookies.

Note: the redirect works fine with an external website, for example:

    $a = unserialize(file_get_contents('' . $_SERVER['REMOTE_ADDR']));
    $countrycode = $a['geoplugin_countryCode'];
    if ($countrycode == 'JP') {
        header('Location:');
        exit;
    }

Is there a way to fix this error?

I am also open to other solutions not using a WP plugin.

Why is VirusTotal “unable to process file type” for files packed with MPress?

I packed an AutoHotkey script with ahk2exe using MPress and got 13 hits on the VirusTotal online scan for the zipped result. I packed the same script with the same ahk2exe without MPress and got just 6 hits on the zipped result. Both zips were created with identical 7z defaults.

Some of the antivirus engines that raised red flags in the first scan now come up as “Unable to process file type” in the second report. Why is this?

How to securely transfer files from a possibly infected WinXP machine to Linux?

There is an old Windows XP installation that was being used without even an antivirus. This WinXP computer holds important files that need to be moved to a Linux installation. Given the lack of any security practices on the part of the WinXP owner, it seems possible that the data contains malware.

I can now:

  • Ignore this and simply keep using the files on Linux; after all, Linux supposedly doesn't need AV.
    • At the very least the files should be scanned, to avoid accidentally redistributing malware if they are ever sent to anyone else.
    • The files contain, e.g., a multitude of .odt / .doc documents – maybe it's a very remote possibility, I don't know, but aren't malicious macros OS-independent?
  • Install ClamAV on the Linux machine, scan the files, remove ClamAV afterwards.
    • AFAIK ClamAV is known for its poor detection rate – is scanning the files with it only marginally better than not scanning at all?
  • Install an AV on the WinXP machine (Panda Free AV still supports WinXP, doesn't it?), scan the files there, and only transfer them afterwards.
    • This means going online with WinXP once again – which just feels wrong.
  • Any options I overlooked?

I feel stuck and am not sure how to proceed.

Note that I would not like to manually inspect the files, e.g. removing potentially suspicious files such as .exe files while leaving safe files such as .png files intact. The reason is that the data is not mine; I was just asked to transfer it so that someone else may use it.

What is the accepted best practice in a situation like this?

Should XHTML template files for JSF be provided over the webserver?

Assume that I have a Java web application with

  • JavaServer Faces (JSF) 2.x
  • which uses XHTML files (so called Facelets) with the Java Expression language [1]

In the current configuration, it is possible to access the XHTML files over the web server. According to [2], it is possible to prevent such an access.

Do you see any relevant threats towards the current configuration?

I could imagine the following:

  • Information disclosure of secrets, if such secrets are stored in the XHTML file
  • Information disclosure of variable names, which are parsed when rendering the XHTML
  • Information disclosure of relevant business logic, if such logic is part of the XHTML file (which would be bad code style)
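For context, the kind of restriction referred to in [2] is commonly implemented with a security constraint in the deployment descriptor. A sketch of what that could look like in web.xml – this assumes the FacesServlet is mapped to a pattern other than *.xhtml (e.g. *.jsf or /faces/*), otherwise blocking *.xhtml would block the application itself:

```xml
<!-- Deny direct requests for raw Facelet sources;
     requests must go through the FacesServlet mapping instead. -->
<security-constraint>
    <web-resource-collection>
        <web-resource-name>Raw XHTML templates</web-resource-name>
        <url-pattern>*.xhtml</url-pattern>
    </web-resource-collection>
    <!-- An empty auth-constraint means no role may access these URLs. -->
    <auth-constraint />
</security-constraint>
```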



Error caching static js files in my htaccess file. What does this error mean? [duplicate]

I have this in my .htaccess file:

    <IfModule mod_expires.c>
      ExpiresActive on

      # Perhaps better to whitelist expires rules? Perhaps.
      ExpiresDefault                          "access plus 1 month"

      # cache.appcache needs re-requests in FF 3.6 (thx Remy ~Introducing HTML5)
      ExpiresByType text/cache-manifest       "access plus 0 seconds"

      # Your document html
      ExpiresByType text/html                 "access plus 0 seconds"

      # Data
      ExpiresByType text/xml                  "access plus 0 seconds"
      ExpiresByType application/xml           "access plus 0 seconds"
      ExpiresByType application/json          "access plus 0 seconds"

      # RSS feed
      ExpiresByType application/rss+xml       "access plus 1 hour"

      # Favicon (cannot be renamed)
      ExpiresByType image/x-icon              "access plus 1 week"

      # Media: images, video, audio
      ExpiresByType image/gif                 "access plus 1 month"
      ExpiresByType image/png                 "access plus 1 month"
      ExpiresByType image/jpg                 "access plus 1 month"
      ExpiresByType image/jpeg                "access plus 1 month"
      ExpiresByType video/ogg                 "access plus 1 month"
      ExpiresByType audio/ogg                 "access plus 1 month"
      ExpiresByType video/mp4                 "access plus 1 month"
      ExpiresByType video/webm                "access plus 1 month"

      # HTC files (css3pie)
      ExpiresByType text/x-component          "access plus 1 month"

      # Webfonts
      ExpiresByType font/truetype             "access plus 1 month"
      ExpiresByType font/opentype             "access plus 1 month"
      ExpiresByType application/x-font-woff   "access plus 1 month"
      ExpiresByType image/svg+xml             "access plus 1 month"
      ExpiresByType application/ "access plus 1 month"

      # CSS and JavaScript
      ExpiresByType text/css                  "access plus 1 year"
      ExpiresByType application/javascript    "access plus 1 year"
      ExpiresByType text/javascript           "access plus 1 year"

      <IfModule mod_headers.c>
        Header append Cache-Control "public"
      </IfModule>
    </IfModule>

When I do a speed test, I get a C in “cache static content”, because of this:

Leverage browser caching of static assets: 79/100 Learn More

FAILED – (No max-age or expires) –

FAILED – (5.0 minutes) –

FAILED – (15.0 minutes) –

WARNING – (1.8 hours) –

WARNING – (24.0 hours) –|Open+Sans:300,400|Merriweather:400,700

WARNING – (2.0 days) –

WARNING – (2.0 days) –

WARNING – (2.5 days) –

WARNING – (2.5 days) –

WARNING – (2.5 days) –

WARNING – (2.5 days) –

WARNING – (2.5 days) –

WARNING – (2.5 days) –

WARNING – (2.5 days) –

WARNING – (6.9 days) –

WARNING – (6.9 days) –

Why am I getting these errors? I thought I had already cached the JS files…

Do I need to enable Trace Flag 1117 for equally sized data files?

I was reading about the proportional fill algorithm in SQL Server and then I recalled TF1117. BOL states:

When a file in the filegroup meets the autogrow threshold, all files in the filegroup grow. This trace flag affects all databases and is recommended only if it is safe to grow all files in a filegroup by the same amount.

What I can't understand is: if the data files are filled proportionally, won't they also auto-grow proportionally? In that case, can't we simply omit TF1117?

Password checking resistant to GPU attacks and leaked password files without introducing a DoS attack on the server?

In very old times, passwords were stored in clear text. This made it trivial for an attacker to log in if he had access to a leaked password file.

Later, passwords were hashed once and the hashed value stored. If the attacker had a leaked password file, he could hash guesses and, if a hash value matched, use that guess to log in.

Then passwords were salted and hashed thousands of times on the server, and the salt and the resulting hash value were stored. If the attacker had a leaked password file, he could use specialized ASICs to hash guesses and, if a guess matched, use that password to log in.

Can we do better than that?

Can we make password guessing so hard for an attacker that, even if he has the hashed password file, he gains no major advantage over testing passwords against the server – even with specialized ASICs?

UAC Security Issue when Running Batch Files

I was having trouble pushing a batch file to a local user's machine, when it ran just fine for another person. It turns out I was having the file run as the currently logged-in user.

So the problem is that the user is able to run any batch file without being prompted by UAC, even though they have the highest UAC level set and are a local Administrator. Other users with the same level of access and the same UAC setting do get prompted when attempting to run batch files.

Is there something I am missing here? Any ideas would be great!

Thanks! Eatery of Ramen

How can we protect encrypted files and directories from being fingerprinted when stored on online storage services?

Assuming that online storage providers are considered untrusted, if files and directories are encrypted, how can these be protected against fingerprinting?

The files are encrypted using rclone’s implementation of Poly1305 and XSalsa20 before being backed up to the cloud provider.

According to rclone’s documentation, the available metadata is file length, file modification date and directory structure.

  • What can be identified?
  • What can be inferred?
  • What attack vectors are there against the encrypted files and directories if the online storage provider is compromised, assuming the passphrase is at least 24 characters long, is a combination of alphanumeric and special characters (uppercase and lowercase), and is salted with similar entropy?

The encrypted data is considered to be sensitive.

How can I protect those files from being fingerprinted and their contents inferred – ownership, source and the like?
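On the mitigation side, the main lever against length-based fingerprinting is padding: if file sizes are rounded up to fixed buckets (or many files are packed into fixed-size containers) before encryption, the provider sees far fewer distinguishable lengths. A hypothetical sketch of bucketed padding – the 4 KiB bucket size is an arbitrary illustration, and the pre-padded data would have to be what rclone encrypts:

```python
def padded_size(length: int, bucket: int = 4096) -> int:
    """Round a plaintext length up to the next multiple of `bucket` bytes.

    Zero-length files still occupy one bucket, so emptiness is hidden too.
    """
    buckets = (length + bucket - 1) // bucket
    return max(1, buckets) * bucket

def pad(data: bytes, bucket: int = 4096) -> bytes:
    """Append NUL bytes up to the bucket boundary.

    A real scheme must record the true length inside the encrypted payload
    so the padding can be stripped again after decryption.
    """
    return data.ljust(padded_size(len(data), bucket), b"\x00")
```

Padding trades storage and bandwidth for uniformity; directory structure and modification dates would still need separate treatment (e.g. packing whole trees into one container).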