Certification Authority not issuing the right certificates for SCEP client

I’m doing certificate-based WiFi authentication (EAP-TLS). I have set up the CA server and added the Certificates snap-in in the MMC console. In the Certification Authority console, I duplicated the RAS and IAS Server template and added it to the list of templates the CA can issue. I also enabled certificate auto-enrollment in GPO. But when I send a request from the SCEP client, I get an IPSec (Offline request) certificate. What changes should I make so that the CA issues the right certificate (the RAS and IAS Server certificate I configured)?
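For reference, my understanding (an assumption on my part, since I have a default NDES/SCEP setup) is that NDES reads the template names from registry values under HKLM\SOFTWARE\Microsoft\Cryptography\MSCEP and falls back to IPSec (Offline request) when they are not set. Would setting them to my duplicated template, roughly like this, be the right fix? (MyRASAndIASServer below is just a placeholder for whatever my duplicate is actually named.)

:: Assumption: default NDES install; MyRASAndIASServer is a placeholder
:: for the Name (not the Display Name) of my duplicated template.
reg add HKLM\SOFTWARE\Microsoft\Cryptography\MSCEP /v SignatureTemplate /t REG_SZ /d MyRASAndIASServer /f
reg add HKLM\SOFTWARE\Microsoft\Cryptography\MSCEP /v EncryptionTemplate /t REG_SZ /d MyRASAndIASServer /f
reg add HKLM\SOFTWARE\Microsoft\Cryptography\MSCEP /v GeneralPurposeTemplate /t REG_SZ /d MyRASAndIASServer /f

:: Restart IIS so NDES picks up the new values.
iisreset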

If you need additional clarification, please ask me. Thanks a lot.

Risk of issuing a client certificate for every user at the application level

I’m currently working on a project that includes client authorization using client certificates, where my app is the certificate authority. Every user who registers with my web server gets a signed certificate in a PKCS#12 (.p12) bundle, issued with their username as the common name. Currently the app signs every CSR it receives, but only if the CN is not already in the user database.
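For context, the pre-signing check is roughly this (a sketch using the openssl CLI; request.csr is a placeholder file name):

# Extract the subject of the incoming CSR to read its CN
# (request.csr is a placeholder name):
openssl req -in request.csr -noout -subject

# Verify the CSR's self-signature, so the embedded public key really
# belongs to whoever generated the request:
openssl req -in request.csr -noout -verify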

Am I exposed to any potential risk by doing this? Can a client abuse it somehow? Do I need to check the PKCS#12 parameters more carefully?

Thanks

Overstaying Schengen residence permit outside of the issuing country

I have a student residence permit issued in one of the EU/Schengen countries. As far as I understand, it allows me to stay without a visa in any other Schengen country for up to 90 days in any 180-day period.

I want to ask whether and how this rule is enforced, and whether there are, in practice, any possible consequences for overstaying the mentioned 90 days outside of the issuing country.

My situation is that I am currently doing a research internship in Switzerland, which officially lasts 85 days, and I am doing it without any Swiss visa/permit, since my original permit allows me to have such educational/working stays for up to 90 days. After the internship ends, I want to have a holiday and stay in various Schengen countries for up to 15 days in total.

I wonder if any authority can find out that I will be overstaying the 90-day limit. Since there are no border checks within the Schengen area, is that even possible? The only clue I can think of is hotel bookings: I have a hotel booking in Switzerland for the 85 days of my internship, and I will likely stay in a hotel/Airbnb during my trip. As far as I know, hotels are obliged to check my passport and notify the authorities about my stay. So, strictly speaking, I will have hotel bookings for 100 days, which is more than the rule allows. In reality, though, during my internship I made several short trips to the issuing country, 10 days in total, without interrupting the hotel booking. So the real length of my “outside” stay is less than 90 days, but that is hard to prove (I could maybe show some transportation tickets).

So I wonder what really happens when I book a hotel and show my passport/permit at registration. Do the authorities somehow keep track of my stays? Is this information synchronized between different countries?

Has anybody ever had any experience with enforcement of this rule? It seems totally unrealistic for it to be enforced, but I guess if the authorities get really suspicious they could find out, e.g. by requesting bank statements and bookings and checking locations.

Can the visa-issuing country be different from the departure country?

I am an Indian citizen who was residing in the U.S. a few months back. I got a job offer from Italy and therefore applied for a Schengen visa from the U.S. However, I had to come back to India for personal reasons, and now I will be traveling to Italy from India. My visa is valid for one year starting February 20, and I am traveling to Italy on April 29.

My question is: since the issuing country and the point of departure are different, can I still travel from India? Does the delayed date of arrival matter?

Can I lose access to my server by issuing `sudo dpkg-reconfigure openssh-server`?

I’m wondering whether it is safe to execute the following command:

$ sudo dpkg-reconfigure openssh-server

on a remote server where the only access I have is… SSH. Is it safe? I’m not sure about this. If it isn’t, what would you do?

Note that I get the following:

$ sshd -t
Could not load host key: /etc/ssh/ssh_host_rsa_key
Could not load host key: /etc/ssh/ssh_host_dsa_key
Could not load host key: /etc/ssh/ssh_host_ecdsa_key
Could not load host key: /etc/ssh/ssh_host_ed25519_key
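If it matters, my fallback plan (untested, and it assumes a Debian/Ubuntu box with systemd and that my current SSH session stays open across the restart) would be to regenerate the missing host keys first and only restart sshd once the config test passes:

# Regenerate any missing host keys of the default types under /etc/ssh.
sudo ssh-keygen -A

# Re-run the config/key test; it should now come back clean.
sudo sshd -t

# Restart the daemon, keeping the current session open until a second
# login succeeds.
sudo systemctl restart ssh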

nginx try_files issuing an HTTP redirect behind a load balancer; need HTTPS

My nginx instance is behind an SSL-terminating load balancer, and I want all URLs to hit HTTPS endpoints, which means HTTP must be redirected to HTTPS.

All is well when URLs have a trailing slash. They all get redirected nicely (screenshot: good.png).

But when the same URLs have no trailing slash, nginx’s try_files always seems to issue an HTTP redirect (screenshot: bad.png).

Here’s my nginx vhost config:

server {
    listen 80;
    root /app/;
    index index.html index.htm;

    # Redirect all http requests to https
    if ($http_x_forwarded_proto = "http") {
        return 301 https://$host$request_uri;
    }

    location / {
        try_files $uri $uri/ =404;
    }
}

How do I get nginx’s try_files to redirect directly with the https $scheme when it hits the $uri/ parameter (the second parameter in my try_files above) and successfully finds a file matching $uri/<index> (where index is defined by the index directive in the nginx config above)?

I searched through similar questions but still could not find anything remotely relevant.
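One idea I am considering (untested; I believe absolute_redirect requires nginx 1.11.8 or newer) is to make the automatic trailing-slash redirect relative, so the browser keeps whatever scheme it is already on:

server {
    listen 80;
    root /app/;
    index index.html index.htm;

    # Emit relative Location headers, so the directory (trailing-slash)
    # redirect triggered by the $uri/ lookup preserves the client's scheme.
    absolute_redirect off;

    if ($http_x_forwarded_proto = "http") {
        return 301 https://$host$request_uri;
    }

    location / {
        try_files $uri $uri/ =404;
    }
}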

RAID not spinning down – mdadm issuing syncs?

I have an issue with an archival server that is running RAID 5. The server is accessed only every couple of days, so I want the disks to spin down when there is no activity for a while.

Disclaimer: I understand that spinning down disks is normally bad practice. I am not asking for advice on disk lifespan. I am asking for help on how to make spin downs happen. Thank you.

The file system is ext4. I have ramped up the commit interval of ext4 through the appropriate mount option and verified that there is no activity from jbd2. I have also configured systemd-journald to volatile mode and disabled any other non-essential logging. I have 100% verified that no log files are being written and no user-space processes have IO activity. Swapping is off.

Still, iosnoop is showing periodic writes to sectors 2056, 2064, and 2088 of the disks in the array. I suspect that this is where the superblock or related metadata is stored. My working theory is that mdadm is marking the RAID as synced or something like that, but I have not been able to find any relevant information.

Does anyone have an alternative theory or an idea on how I can stop the IO?

Here’s an iosnoop trace for the first disk in the array:

# iosnoop-perf -s -d "8,16"
Tracing block I/O. Ctrl-C to end.
STARTs          COMM         PID    TYPE DEV      BLOCK                 BYTES     LATms
5068.962692     md0_raid5    249    FF   8,16     18446744073709551615  0          0.35
5068.963054     <idle>       0      WFS  8,16     2064                  4096      21.28
5068.990201     md0_raid5    249    FF   8,16     18446744073709551615  0          0.40
5068.990619     <idle>       0      WFS  8,16     2056                  512       18.70
5069.017432     kworker/1:1H 216    FF   8,16     18446744073709551615  0          0.42
5069.017866     <idle>       0      WFS  8,16     2088                  3072      24.86
5069.442687     md0_raid5    249    FF   8,16     18446744073709551615  0          0.40
5069.443104     <idle>       0      WFS  8,16     2064                  4096       7.90
5069.467942     md0_raid5    249    FF   8,16     18446744073709551615  0          0.40
5069.468360     <idle>       0      WFS  8,16     2056                  512       57.62
5074.578771     md0_raid5    249    FF   8,16     18446744073709551615  0          0.41
5074.579195     <idle>       0      WFS  8,16     2064                  4096      21.82
5084.818728     md0_raid5    249    FF   8,16     18446744073709551615  0          0.41
5084.819146     <idle>       0      WFS  8,16     2088                  3072      31.92
5125.794841     md0_raid5    249    FF   8,16     18446744073709551615  0          0.35
5125.795205     <idle>       0      WFS  8,16     2064                  4096      22.49
5125.823437     md0_raid5    249    FF   8,16     18446744073709551615  0          0.41
5125.823855     <idle>       0      WFS  8,16     2056                  512       18.83
5125.850640     kworker/1:1H 216    FF   8,16     18446744073709551615  0          0.42
5125.851071     <idle>       0      WFS  8,16     2080                  4096       8.33
5125.859599     kworker/1:1H 216    FF   8,16     18446744073709551615  0          0.42
5125.860026     <idle>       0      WFS  8,16     2064                  4096       7.67
5126.146833     md0_raid5    249    FF   8,16     18446744073709551615  0          3.50
5126.150353     <idle>       0      WFS  8,16     2064                  4096       8.98
5126.159498     md0_raid5    249    FF   8,16     18446744073709551615  0          4.39
5126.163913     <idle>       0      WFS  8,16     2056                  512       53.75
5131.410989     md0_raid5    249    FF   8,16     18446744073709551615  0          0.41
5131.411412     <idle>       0      WFS  8,16     2064                  4096      22.99
5141.650858     md0_raid5    249    FF   8,16     18446744073709551615  0          0.41
5141.651276     <idle>       0      WFS  8,16     2064                  4096      16.40
5141.667708     <idle>       0      FF   8,16     18446744073709551615  0          0.29
5141.668012     <idle>       0      WFS  8,16     2080                  4096       7.95
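For completeness, here is what I am planning to try next, in case my theory holds that these are write-intent-bitmap and clean/active superblock updates (commands untested on this box; the array is /dev/md0, per the trace above):

# Check whether the array has an internal write-intent bitmap.
mdadm --detail /dev/md0 | grep -i bitmap

# Lengthen the delay (in seconds) before md marks the array "clean"
# again after a write; each clean/active transition rewrites the superblock.
echo 300 > /sys/block/md0/md/safe_mode_delay

# If the periodic writes turn out to be bitmap updates, the bitmap can
# be removed entirely, at the cost of a full resync after a crash.
mdadm --grow --bitmap=none /dev/md0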