Any way to do an NFS export on a casper cow filesystem?

Trying to export a folder from a remastered Ubuntu livecd:

# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             63G     0   63G   0% /dev
tmpfs            13G   18M   13G   1% /run
/dev/sdb        3.3G  3.3G     0 100% /cdrom
/dev/loop0      397M  397M     0 100% /rofs
/cow             63G  3.4G   60G   6% /
tmpfs            63G     0   63G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            63G     0   63G   0% /sys/fs/cgroup
tmpfs            63G  100K   63G   1% /tmp
tmpfs            13G     0   13G   0% /run/user/1000

The root filesystem shows up as ‘/cow’, and NFS doesn’t seem to like it. I have tried various ways to export it, but they all yield “/exports/nfsroot does not support NFS export”:

# exportfs -i 172.16.111.240:/exports/nfsroot
exportfs: /exports/nfsroot does not support NFS export

I suspect NFS doesn’t support the ‘/cow’ filesystem. Is there a kernel module or package I might be missing in order for it to work?
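One thing worth trying, sketched below as an assumption rather than a confirmed fix: knfsd refuses to export filesystems for which it cannot derive a stable filesystem ID (tmpfs- and overlay-backed roots like /cow fall into this category), and assigning an ID manually with the fsid= export option sometimes gets past the “does not support NFS export” error. The option string here is illustrative:

```
# /etc/exports — fsid= gives the kernel an explicit filesystem ID
/exports/nfsroot 172.16.111.240(rw,sync,no_root_squash,no_subtree_check,fsid=1)
```

followed by `exportfs -ra` to re-read the file.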

QEMU: qcow2 and RAW… which filesystem combination (to avoid writing the journal twice)?

Let’s say my host disk uses ext4, and I place an image file for my virtual machine on it, in either QCOW2 or RAW format, which is in turn formatted with ext4 inside. Wouldn’t journal data be written twice, first in the guest and then on the host? Can I safely disable the journal in one of them? Or should I use a combination of two different filesystems on host and guest (if both are Linux)? If the guest is Windows using NTFS, which also seems to be a journaling filesystem, could I safely disable the journal on the host’s ext4 filesystem?
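If it turns out the guest journal is the one to drop (a common choice, since the host journal still protects the image file's metadata), ext4 can be created or converted without one. A sketch, where /dev/vdb stands in for the guest's data disk and is an assumption:

```
# create ext4 without a journal
mkfs.ext4 -O ^has_journal /dev/vdb

# or strip the journal from an existing (unmounted) ext4 filesystem
tune2fs -O ^has_journal /dev/vdb

# verify: "has_journal" should be gone from the feature list
dumpe2fs -h /dev/vdb | grep 'Filesystem features'
```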

Read Only Filesystem for Kiosk Application

I have something like an Intel NUC running Ubuntu that operates as a touchscreen kiosk in an industrial environment. It gets power-cycled regularly, and I want to protect against hard-drive corruption. How do I make the filesystem read-only while still allowing Chrome to operate normally, other code to run (C++, Python), and MySQL database writes? And how would I still allow alterations of /etc/netplan/01-network-manager-all.yaml?
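One pattern that fits this description on Ubuntu is the overlayroot package: the root filesystem is mounted read-only and all writes land in a tmpfs overlay that is discarded on power-cycle, while anything that must persist (the MySQL datadir, the netplan file) lives on a separate writable partition. A hedged sketch; the partition /dev/sda3 and mount point /data are assumptions:

```
# /etc/overlayroot.conf — discard all writes to / on reboot
overlayroot="tmpfs:swap=0,recurse=0"

# /etc/fstab — one small writable partition for persistent state
/dev/sda3  /data  ext4  defaults  0  2
```

MySQL would then be pointed at the writable partition (datadir=/data/mysql in its config), and the real netplan YAML kept under /data with a symlink from /etc/netplan, so edits survive reboots while the rest of the system stays read-only.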

Resize partition: add space to filesystem from neighbouring home

Resizing partitions has been asked before (1, 2), but I just wanted to check for my setup (gparted):

I want to add a couple of GB to sda1. Is it possible to shrink sda5 from its start, so that it effectively moves right (using a GParted live USB)? I don’t know if this is an accurate visual representation of the partition locations.

Also, does one still need swap? Using Xubuntu 18.04.

Thanks!

Unable to fetch table hive_table. java.io.IOException: No FileSystem for scheme: null

I have created an external Hive table from Python code using Spark, but I have a problem when I want to view the table using the Hive shell.

None of these requests work: DESCRIBE hive_table, SELECT, ALTER, …

hive> select count(*) from hive_table;

FAILED: SemanticException Unable to fetch table hive_table. java.io.IOException: No FileSystem for scheme: null

Creation of external hive table:

(df.write.partitionBy(config['col_part'])
    .mode('overwrite')
    .format(config['format_tbl_hive'])
    .saveAsTable(hive_tbl, path=hive_tbl_dir))
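“No FileSystem for scheme: null” usually means the table’s LOCATION was recorded without a URI scheme, so the Hive shell cannot pick a FileSystem implementation for it. A hedged sketch of how to check and repair this from the Hive shell; the namenode address and warehouse path are assumptions:

```
-- inspect the stored location of the table
DESCRIBE FORMATTED hive_table;

-- if the Location line is scheme-less or null, set a fully qualified URI
ALTER TABLE hive_table SET LOCATION 'hdfs://namenode:8020/user/hive/warehouse/hive_table';
```

On the Spark side, passing a fully qualified path (hdfs://… rather than a bare path) as the path argument to saveAsTable avoids the problem at creation time.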

photorec photo recovery software not seeing my mounted filesystem – trying to use photorec to recover lost jpegs


What is my situation?

I am working in a Dev Ops capacity for a service that manages jpeg files online. We had an unfortunate deploy and our media files (jpegs) are completely gone. I anticipate that our loss is probably simple and may be recoverable. I think somehow that the directory that contains the sub-directories that have our jpeg files was unlinked. If this is the case, we should be able to recover them.

What I have done so far and where we are hosted — details

I realized the loss almost right away and fortunately we did not have any users online at that moment. I stopped our service and brought down our server. I did that to prevent any more writes to the filesystem figuring that avoiding writes was essential to file recovery.

We are running Ubuntu 16.04 in DigitalOcean. I have brought the server back up using DigitalOcean’s recovery mode. This permits one to mount the filesystem of the given virtual host without running the virtual host and without running the services one has on the virtual host. This should be sufficient and correct for performing any form of recovery.

I need somewhere to write data for recovery. To that end, I have another server in DigitalOcean in the same data center (SFO1, unfortunately). I have mounted that host’s filesystem using sshfs. I should be able to write any recovery data from my virtual host’s filesystem (which is in recovery mode) to this other host via sshfs.

I selected the following utility to execute my recovery: PhotoRec

That utility is actually two utilities — PhotoRec and TestDisk.

The filesystem of the host we wish to recover is ext4. PhotoRec supports ext4. TestDisk may not support ext4. That’s okay, according to the documentation if the data is still there and largely uncorrupted, then we should be able to recover it with PhotoRec.

Here is the output when I run df -Th. As you can see, the filesystem I wish to recover is /dev/vda1; it is of type ext4 and mounted at /mnt. I installed photorec in /lib/live/mount/overlay, which is the tmpfs. I have mounted another host in the same datacenter via sshfs to put any recovered data on:

root@xxxx-xxxxxx-xxxxxxxxx:~# df -Th
Filesystem             Type        Size  Used Avail Use% Mounted on
udev                   devtmpfs    7.9G     0  7.9G   0% /dev
tmpfs                  tmpfs       1.6G  6.2M  1.6G   1% /run
/dev/sr0               iso9660     251M  251M     0 100% /lib/live/mount/medium
/dev/loop0             squashfs    220M  220M     0 100% /lib/live/mount/rootfs/rescue_rootfs.squashfs
tmpfs                  tmpfs       7.9G   14M  7.9G   1% /lib/live/mount/overlay
overlay                overlay     7.9G   78M  7.8G   1% /
tmpfs                  tmpfs       7.9G     0  7.9G   0% /dev/shm
tmpfs                  tmpfs       5.0M     0  5.0M   0% /run/lock
tmpfs                  tmpfs       7.9G     0  7.9G   0% /sys/fs/cgroup
tmpfs                  tmpfs       1.6G     0  1.6G   0% /run/user/0
root@xxx.xxx.xxx.xxx:/ fuse.sshfs  311G   13G  298G   5% /mnt2/xxxxxx-xxxxxx-xxxxxx
/dev/vda1              ext4        311G   41G  270G  14% /mnt

When I run photorec it only sees:

Disk /dev/sr0 - 252 MB / 250 MiB (RO) - QEMU DVD-ROM

It does not see my filesystem that I want to execute recovery on at all. That is:

/dev/vda1              ext4        311G   41G  270G  14% /mnt 

I have tried this with my filesystem mounted, because that seems right to me. However, some online documentation says that some file recovery tools require filesystems to not be mounted (which seems weird to me – how is that supposed to work?). So I tried executing it with the filesystem unmounted, but same thing; it only sees:

Disk /dev/sr0 - 252 MB / 250 MiB (RO) - QEMU DVD-ROM

Does anyone have any suggestions regarding getting photorec to see my filesystem:

/dev/vda1              ext4        311G   41G  270G  14% /mnt 

I do have some backups, but unfortunately I have about seven days’ worth of un-backed-up photos. We could in theory live without them, reach out to our clients, get the data from them, and reprocess and repost it. But it would be ideal if I could, with just a few clicks, get back this data that is likely still on the filesystem, just unreachable.

Help using photorec for this purpose would be ideal, as would any other suggestions on how to recover my lost/missing files.
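For what it’s worth, PhotoRec can be pointed at a device (or image) directly on the command line instead of relying on its device scan, and it wants raw access to the partition rather than the mounted filesystem. A sketch using the paths from the question; recup_dir is just PhotoRec’s default output directory name:

```
# unmount the filesystem being recovered first
umount /mnt

# run photorec directly against the partition, writing results to the sshfs mount
photorec /log /d /mnt2/xxxxxx-xxxxxx-xxxxxx/recup_dir /dev/vda1
```

If /dev/vda1 still does not appear, checking that it is listed in /proc/partitions from the rescue environment is a reasonable next step.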

Thanks!

mounting DFS filesystem with remote shares in it on Arch Linux

I have a laptop joined to domain AAA. There are two DFS namespace servers, which are also AD DCs running Windows Server 2012 R2. The NAS is a Synology server with CIFS enabled and domain-joined.

Servers:

  • dc1.domain1.local – ip 10.8.0.3
  • dc2.domain1.local – ip 10.8.0.27
  • nas1.domain1.local – ip 10.8.0.7
  • laptop.domain1.local – 10.91.0.2

The whole setup was working until recently (I don’t know what happened; a kernel upgrade? a Windows update?). My sssd and krb5 configs:

[sssd]
domains = domain1.local
config_file_version = 2
services = nss, pam

[domain/domain1.local]
ad_domain = domain1.local
krb5_realm = DOMAIN1.LOCAL
realmd_tags = manages-system joined-with-adcli
cache_credentials = True
enumerate = True
id_provider = ad
default_shell = /bin/bash
fallback_homedir = /home/%d/%u
krb5_lifetime = 1h
krb5_renewable_lifetime = 1d
krb5_renew_interval = 60s
ldap_id_mapping = True
krb5_store_password_if_offline = True
includedir /var/lib/sss/pubconf/krb5.include.d/

[logging]
 default = FILE:/var/log/krb5libs.log

[libdefaults]
 dns_lookup_realm = true
 dns_lookup_kdc = true
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 clockskew = 300
 rdns = false
 default_ccache_name = KEYRING:persistent:%{uid}

/etc/request-key.d/cifs.spnego.conf

create  cifs.spnego    * * /usr/bin/cifs.upcall -t %k 

I’m trying to mount share using

mount -t cifs -o sec=krb5,user=$USER,cruid=$USER,uid=$USER //dc1.domain1.local/namespace1 /mnt/mp1

I can go to /mnt/mp1, but I can’t access anything below it, like //dc1.domain1.local/namespace1/share1 (i.e. /mnt/mp1/share1), which is on the Synology server.

Logs on laptop during mounting:

[   54.894236] No dialect specified on mount. Default has changed to a more secure dialect, SMB2.1 or later (e.g. SMB3), from CIFS (SMB1). To use the less secure SMB1 dialect to access old servers which do not support SMB3 (or SMB2.1) specify vers=1.0 on mount.
[   55.036042] CIFS VFS: Autodisabling the use of server inode numbers on new server.
[   55.036046] CIFS VFS: The server doesn't seem to support them properly or the files might be on different servers (DFS).
[   55.036049] CIFS VFS: Hardlinks will not be recognized on this mount. Consider mounting with the "noserverino" option to silence this message.

When entering /mnt/mp1/share1 I get:

mar 20 08:05:57 LAPTOP.DOMAIN1.LOCAL cifs.upcall[14414]: key description: cifs.spnego;0;0;39010000;ver=0x2;host=DC1.domain.local;ip4=10.8.0.7;sec=krb5;uid=0x460c22f4;creduid=0x460c22f4;user=admin;pid=0x923
mar 20 08:05:57 LAPTOP.DOMAIN1.LOCAL cifs.upcall[14414]: ver=2
mar 20 08:05:57 LAPTOP.DOMAIN1.LOCAL cifs.upcall[14414]: host=DC1.domain1.local
mar 20 08:05:57 LAPTOP.DOMAIN1.LOCAL cifs.upcall[14414]: ip=10.8.0.7
mar 20 08:05:57 LAPTOP.DOMAIN1.LOCAL cifs.upcall[14414]: sec=1
mar 20 08:05:57 LAPTOP.DOMAIN1.LOCAL cifs.upcall[14414]: uid=1175200500
mar 20 08:05:57 LAPTOP.DOMAIN1.LOCAL cifs.upcall[14414]: creduid=1175200500
mar 20 08:05:57 LAPTOP.DOMAIN1.LOCAL cifs.upcall[14414]: user=admin
mar 20 08:05:57 LAPTOP.DOMAIN1.LOCAL cifs.upcall[14414]: pid=2339
mar 20 08:05:57 LAPTOP.DOMAIN1.LOCAL cifs.upcall[14414]: get_cachename_from_process_env: pathname=/proc/2339/environ
mar 20 08:05:57 LAPTOP.DOMAIN1.LOCAL cifs.upcall[14414]: get_cachename_from_process_env: cachename = KEYRING:persistent:1175200500
mar 20 08:05:57 LAPTOP.DOMAIN1.LOCAL cifs.upcall[14414]: get_existing_cc: default ccache is KEYRING:persistent:1175200500:krb_ccache_s3dU4cx
mar 20 08:05:57 LAPTOP.DOMAIN1.LOCAL cifs.upcall[14414]: handle_krb5_mech: getting service ticket for server.poznan.tbhydro.net
mar 20 08:05:57 LAPTOP.DOMAIN1.LOCAL cifs.upcall[14414]: handle_krb5_mech: obtained service ticket
mar 20 08:05:57 LAPTOP.DOMAIN1.LOCAL cifs.upcall[14414]: Exit status 0

Notice that it is asking for a ticket for a different host than the one that resolves to that IP address (10.8.0.7 is nas1.domain1.local).
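The Samba log below shows the concrete failure: the client presents a ticket for cifs/dc1.domain1.local, an SPN the NAS does not have in its keytab, so the Kerberos step of the DFS referral fails. One way to confirm it is an SPN/referral problem rather than a general Kerberos problem is to mount the final target directly (the share name on the NAS is an assumption):

```
mount -t cifs -o sec=krb5,cruid=$USER,uid=$USER //nas1.domain1.local/share1 /mnt/test
```

If that works, the fix likely belongs on the referral/DNS side, so that cifs.upcall requests a ticket for the NAS's own principal instead of the namespace server's.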

And on nas1.domain1.local samba logs:

../source3/lib/access.c:338: [2019/03/20 08:08:50.530826, all 3, pid=26839] allow_access
  Allowed connection from 10.91.0.2 (10.91.0.2)
../source3/smbd/oplock.c:1323: [2019/03/20 08:08:50.530929, locking 3, pid=26839] init_oplocks
  init_oplocks: initializing messages.
../source3/smbd/process.c:1975: [2019/03/20 08:08:50.530968, all 3, pid=26839] process_smb
  Transaction 0 of length 196 (0 toread)
../source3/smbd/smb2_negprot.c:281: [2019/03/20 08:08:50.531044, all 3, pid=26839] smbd_smb2_request_process_negprot
  Selected protocol SMB3_11
../source3/auth/auth_generic.c:246: [2019/03/20 08:08:50.531084, all 3, pid=26839] auth_generic_prepare
  make_auth_context_subsystem [NT_STATUS_OK]
../source3/auth/auth_generic.c:377: [2019/03/20 08:08:50.531400, all 3, pid=26839] auth_generic_prepare
  gensec_set_remote_address: [NT_STATUS_OK]
../source3/smbd/smb2_server.c:2687: [2019/03/20 08:08:50.558318, all 3, pid=26839] smbd_smb2_request_dispatch
  SMB2: cmd=SMB2_OP_NEGPROT [NT_STATUS_OK]
../source3/smbd/smb2_sesssetup.c:811: [2019/03/20 08:08:50.572723, all 3, pid=26839] smbd_smb2_session_setup_send
  in_session_id 0
../source3/auth/auth_generic.c:246: [2019/03/20 08:08:50.572850, all 3, pid=26839] auth_generic_prepare
  make_auth_context_subsystem [NT_STATUS_OK]
../source3/auth/auth_generic.c:377: [2019/03/20 08:08:50.572870, all 3, pid=26839] auth_generic_prepare
  gensec_set_remote_address: [NT_STATUS_OK]
../source3/smbd/smb2_sesssetup.c:866: [2019/03/20 08:08:50.572877, all 3, pid=26839] smbd_smb2_session_setup_send
  auth_generic_prepare [NT_STATUS_OK]
../source3/smbd/smb2_server.c:2687: [2019/03/20 08:08:50.572918, all 3, pid=26839] smbd_smb2_request_dispatch
  SMB2: cmd=SMB2_OP_SESSSETUP [NT_STATUS_OK]
../source3/librpc/crypto/gse.c:503: [2019/03/20 08:08:50.599304, all 1, pid=26839] gse_get_server_auth_token
  gss_accept_sec_context failed with [ Miscellaneous failure (see text): Failed to find cifs/dc1.domain1.local@DOMAIN1.LOCAL(kvno 76) in keytab MEMORY:cifs_srv_keytab (aes256-cts-hmac-sha1-96)]
../auth/gensec/spnego.c:544: [2019/03/20 08:08:50.599342, all 1, pid=26839] gensec_spnego_parse_negTokenInit
  SPNEGO(gse_krb5) NEG_TOKEN_INIT failed: NT_STATUS_LOGON_FAILURE
../auth/gensec/spnego.c:719: [2019/03/20 08:08:50.599360, all 2, pid=26839] gensec_spnego_server_negTokenTarg
  SPNEGO login failed: NT_STATUS_LOGON_FAILURE
../auth/gensec/gensec.c:476: [2019/03/20 08:08:50.599370, all 3, pid=26839] gensec_update_async_trigger
  gensec_update [NT_STATUS_LOGON_FAILURE]
../source3/smbd/smb2_server.c:3111: [2019/03/20 08:08:50.599393, all 3, pid=26839] smbd_smb2_request_error_ex
  smbd_smb2_request_error_ex: smbd_smb2_request_error_ex: idx[1] status[NT_STATUS_LOGON_FAILURE] || at ../source3/smbd/smb2_sesssetup.c:136

Any idea where to look for the answer to this?

PHP: Should I Use the Filesystem as a DB to Store JSON Files?

My question is regarding the use of the file system as a database to simply hold JSON files.

I have come up with the following code, not perfect at all, which I found makes it very easy to store and extract data using JSON files.

My question is: will this DB be good for any large project? Will it be fast? Or is the limitation of this kind of approach simply security-related?

Is there some kind of built-in solution in PHP for this kind of thing?

Any input on this matter from people who know would be appreciated…

class JDB{
    public $path;

    function __construct( $path = __DIR__.'/jdb/' ){
        $this->path = $path;
        if( !file_exists($this->path) ) mkdir($this->path);
    }

    function p($t){
        return $this->path.$t.'.json';
    }

    function get($t){
        return json_decode(file_get_contents( $this->p($t) ));
    }

    function set($t,$c){
        return file_put_contents( $this->p($t), json_encode($c,JSON_PRETTY_PRINT) );
    }

    function create( $t, $d = [] ){
        $s = file_put_contents( $this->p($t), json_encode($d) );
        return $s;
    }

    function destroy(){
        $files = glob($this->path.'*'); // get all file names present in folder
        foreach($files as $file){ // iterate files
            if(is_file($file))
                unlink($file); // delete the file
        }
    }

    function delete( $t ){
        $s = unlink( $this->p($t) );
        return $s;
    }

    function insert( $t, $d = null ){
        if($d) $d['__uid'] = $t.'_'.$this->uid();
        $c = $this->get($t);
        array_push($c,$d);
        $s = $this->set($t,$c);
        if($s) return $d['__uid'];
    }

    function update($t,$conditions,$u){
        $c = $this->get($t);
        $this->search($c,$conditions,function($object) use (&$c,$u){
            foreach ($u as $key => $value) {
                $object->$key = $value;
            }
        });
        $this->set($t,$c);
    }

    function remove($t,$conditions){
        $c = $this->get($t);
        $this->search($c,$conditions,function($object,$key) use (&$c){
            unset($c[$key]);
        });
        $this->set($t,$c);
    }

    function search( $c, $conditions = [], $fn ){
        $l = count($conditions);
        foreach ($c as $key => $object) {
            $f = 0;
            foreach ($conditions as $k => $v) {
                if( property_exists($object,$k) && ($object->$k == $v) ){
                    $f++;
                    if( $f==$l ) $fn($object,$key);
                }else break;
            }
        }
    }

    function select( $t, $conditions = [] ){
        $c = $this->get($t);
        $r = [];
        $this->search($c,$conditions,function($object) use (&$r){
            array_push($r,$object);
        });
        if (count($r) == 0) return false;
        if (count($r) == 1) return $r[0];
        return $r;
    }

    function count($t){
        $c = $this->get($t);
        return count($c);
    }

    function uid($length = 20) {
        $c = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';
        $cl = strlen($c);
        $uid = '';
        for ($i = 0; $i < $length; $i++) {
            $uid .= $c[rand(0, $cl - 1)];
        }
        return $uid;
    }
}

/*
$db = new JDB();

$db->create('users');
$db->create('pages');

$user_uid = $db->insert('users',[
    'name' => 'a',
    'password' => 'hello world',
    'pages'  => []
]);

$user_uid = $db->insert('users',[
    'name' => 'b',
    'password' => 'hello world',
    'pages'  => []
]);

_log($user_uid,'1');

$page_uid = $db->insert('pages',[
    'name' => 'page 1',
    'content' => 'hello world',
    'users'  => [$user_uid]
]);

_log($page_uid);

$user = $db->select('users',['name'  => 'a']);
$page = $db->select('pages',['users'  => [$user_uid]]);

$db->update('users',['name'  => 'b'],['pages' => [$page->__uid]]);
$db->remove('users',['name'  => 'a']);

_log($user);
_log($page);
*/
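As for a built-in solution: PHP ships with SQLite support via PDO (the pdo_sqlite extension, enabled in most default builds), which provides real transactions and indexing that plain JSON files cannot. A minimal sketch, not a drop-in replacement for the class above, storing the same kind of records as JSON text inside SQLite:

```php
<?php
// in-memory SQLite for the example; use a file path like __DIR__.'/app.db' to persist
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// one table of JSON documents, keyed by uid
$pdo->exec('CREATE TABLE users (uid TEXT PRIMARY KEY, doc TEXT NOT NULL)');

// insert: encode the record as JSON, much like JDB does, but atomically
$stmt = $pdo->prepare('INSERT INTO users (uid, doc) VALUES (?, ?)');
$stmt->execute(['users_1', json_encode(['name' => 'a', 'password' => 'hello world'])]);

// select: fetch and decode
$row = $pdo->query("SELECT doc FROM users WHERE uid = 'users_1'")->fetch(PDO::FETCH_ASSOC);
$user = json_decode($row['doc']);
echo $user->name; // a
```

Unlike concurrent writes to a shared JSON file, which can interleave and corrupt data, SQLite serializes writers for you, which is usually the first thing that breaks this kind of flat-file approach on a larger project.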