I’m relatively new to Unity debugging, and I’ve got an issue that shows up every so often, about 2 minutes into playing through a scene. If I could do the following, it would save me a lot of debugging time:
- Pause the scene
- Save the snapshot
- Modify some code
- Play the snapshot saved in step 2
- Repeat steps 3-4
Is this at all possible with Unity, or should I just modify my code so the scene starts running "closer" to the point I’m trying to debug?
I have 40 views in an Oracle 18c GIS database that are used in a map in a workorder management system (WMS).
- The views are served up to the WMS map via a web service/REST.
- The views have an average of 10,000 rows per view.
The views have joins to dblink tables in a separate Oracle database and, as a result, are not fast enough for use in the WMS map (3-second map refresh delay). Furthermore, it seems like a bad idea to compute the views every time a user refreshes the map: the map does not need to be up to date in real time, so the recomputation is an unnecessary burden on the DB.
As an alternative, I would like to take snapshots of the views on a weekly basis. The snapshots would be static tables that would perform much better in the WMS map.
Unfortunately, due to office politics, using technology like materialized views or Oracle GoldenGate to solve this problem is not an option.
What are my options for taking scheduled snapshots of Oracle views (without using materialized views or GoldenGate)?
For example, I could write a .sql script that truncates the static tables and inserts the rows from the views (on some sort of schedule). But as a novice, I don’t know how efficient or risky that option would be, or whether there are better alternatives.
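The truncate-and-insert approach is workable. A minimal sketch, assuming sqlplus is on the PATH, a connect string is supplied in $ORA_CONN, and using hypothetical view and table names (V_ASSETS refreshing into SNAP_ASSETS, and so on):

```shell
#!/bin/sh
# refresh_snapshots.sh -- hypothetical weekly refresh of static snapshot
# tables from GIS views. Assumes sqlplus is installed and $ORA_CONN holds
# a user/password@tns connect string; view and table names are examples.

snap_table_for() {
    # Derive the snapshot table name from a view name: V_ASSETS -> SNAP_ASSETS
    echo "SNAP_${1#V_}"
}

for view in V_ASSETS V_HYDRANTS V_VALVES; do
    table=$(snap_table_for "$view")
    sqlplus -s "$ORA_CONN" <<EOF
WHENEVER SQLERROR EXIT FAILURE
TRUNCATE TABLE $table;
INSERT /*+ APPEND */ INTO $table SELECT * FROM $view;
COMMIT;
EOF
done
```

A crontab entry such as `0 2 * * 0 /path/to/refresh_snapshots.sh` would run it weekly. One caveat: TRUNCATE is DDL and takes effect immediately, so each table is briefly empty during its refresh; if the map must never see an empty layer, load into a staging table and swap names instead.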
I have a cron job in my crontab that is set to fire a snapshot at the first minute of every hour. The cron logs show that it triggered the task, but I do not see the snapshots in the Kibana client. I also curled the Elasticsearch instance directly and do not see the snapshots there. When I run the cron script manually from one of the nodes, it works fine and creates a snapshot, yet surprisingly the crontab job does not.
Here is my cron job entry:
1 * * * * root curl -v -k -s -X PUT https://:@localhost:9201/_snapshot/s3_backup_repository/%3Csnapshot-%7Bnow%2Fh%7Byyyy.mm.dd-hh%7D%7D%3E?wait_for_completion=true >> /opt/elasticsearch/system/system.log 2>&1
This cron job is set up on all three EC2 instances in our ES cluster.
Here is the line in the cron log in /var/log/ that shows the script was triggered on the cron schedule. (I see this on all 3 EC2 instances in the ES cluster, all with the same timestamp.)
Sep 28 18:01:01 ip-10-7-22-136 CROND: (root) CMD (curl -v -k -s -X PUT https://redacteduser:redactedpassword@localhost:9201/_snapshot/s3_backup_repository/)
The system.log file at /opt/elasticsearch/system on all 3 instances has no trace of the snapshots triggered by the cron task.
Any idea why this could be happening? Is it because all 3 instances in the cluster try to trigger the snapshot at the same time?
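One detail worth checking before anything Elasticsearch-side: per crontab(5), an unescaped % in a cron command line is converted into a newline, and everything after it is passed to the command’s standard input. That would explain why the CMD line in the cron log above stops exactly at .../s3_backup_repository/ while a manual run of the same script works. A sketch of the same entry with the percent signs escaped (everything else unchanged):

```
1 * * * * root curl -v -k -s -X PUT https://:@localhost:9201/_snapshot/s3_backup_repository/\%3Csnapshot-\%7Bnow\%2Fh\%7Byyyy.mm.dd-hh\%7D\%7D\%3E?wait_for_completion=true >> /opt/elasticsearch/system/system.log 2>&1
```

Even with that fixed, three nodes PUTting the same snapshot name at the same moment will race: only one request can create the snapshot and the others should get an error response back, so it is usually cleaner to schedule the job on a single node, or to move the curl into a wrapper script (which also sidesteps the % quoting entirely).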
I was having a problem with Time Machine: it showed me a failure message about failing to create a local snapshot.
I opened Terminal and listed all available snapshots. There were about 20. I deleted them all, and at the end I issued the command
sudo tmutil localsnapshot
I forgot to add the trailing /, but the command was successful and Mojave showed me the message
Created local snapshot with date: 2019-05-24-063058
I tried to use Time Machine after that and it was still failing. Then I repeated the command, adding the slash at the end.
sudo tmutil localsnapshot /
Again, the command was successful, with the message
Created local snapshot with date: 2019-05-24-063243
I have tried to use Time Machine after this and it started working.
The problem is that when I type
tmutil listlocalsnapshots /
at the terminal, I do not see the snapshots Mojave said it created.
They are not at
I want to locate and delete them. Where are they? Any ideas?
I’ve been using ZFS for a while and have snapshots piling up.
I believe I can start deleting the old snapshots, but I want to be doubly sure. A ZFS snapshot would be essentially like a git tag: a read-only reference to a point-in-time version of the repository. The active dataset would be HEAD and would remain unaffected if I delete a pointer to that point in time?
So, if that is the case, and I want a 30-day retention policy, can I simply look at each snapshot’s creation date and discard anything older than 30 days?
Are those assumptions accurate?
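Both assumptions are essentially right: a ZFS snapshot is read-only, and destroying one never touches the live dataset (it only frees blocks that nothing else references; the one catch is that zfs destroy refuses if a clone was created from that snapshot). A pruning sketch under those assumptions, using a hypothetical dataset name tank/data and relying on zfs list -p for parseable epoch creation times; the age check is factored out so it can be verified on its own:

```shell
#!/bin/sh
# prune_snapshots.sh -- destroy snapshots of tank/data older than 30 days.
# tank/data is a hypothetical dataset name; assumes a platform where
# `zfs list -p` prints the creation property as a Unix epoch (OpenZFS).

CUTOFF=$(( $(date +%s) - 30 * 24 * 3600 ))

is_expired() {
    # $1 = creation time (epoch seconds), $2 = cutoff (epoch seconds)
    [ "$1" -lt "$2" ]
}

zfs list -H -p -t snapshot -o name,creation -r tank/data |
while read -r name creation; do
    if is_expired "$creation" "$CUTOFF"; then
        echo "destroying $name"
        zfs destroy "$name"
    fi
done
```

A dry run first (comment out the destroy, or use `zfs destroy -nv`) is worthwhile, since blocks referenced only by an expired snapshot are freed for good.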
Context: I was thinking about making snapshots for safety reasons, so I downloaded Timeshift, but once it was installed I had to make a choice between rsync and BTRFS snapshots.
I already read the documentation:
Question: But considering I’m really new to system restore points and snapshots, I can’t figure out which one I should choose. What’s the difference?
VirtualBox has recently damaged the .vbox and .vbox-prev files of a guest virtual machine that has several snapshots, leaving a 3 KB .vbox file with just the machine UUID and default settings.
All .VDI files are still intact, including the snapshots in the \Snapshots directory.
I can rebuild the virtual machine, but how can I re-attach the Snapshots to the virtual machine? (Is it possible?)
I create EC2 instances from AMIs daily, build fresh AMIs every few days, and delete stale AMIs. However, under Elastic Block Store => Snapshots I see an ever-growing list of snapshots.
Right now I have 10 AMIs and 19 instance IDs under Instances, yet I see 81 snapshots.
Can I go ahead and delete all those Snapshots?
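Quite possibly not all of them: deregistering an AMI does not delete the EBS snapshots behind it, which is the usual reason this list keeps growing, and EC2 refuses to delete a snapshot that a registered AMI still references. A sketch for finding the unreferenced ones, assuming a configured AWS CLI; the set-difference step is plain shell so it can be checked without AWS access:

```shell
#!/bin/sh
# orphan_snapshots.sh -- list snapshots you own that no registered AMI
# still references. Assumes the AWS CLI is installed and configured.

aws ec2 describe-snapshots --owner-ids self \
    --query 'Snapshots[].SnapshotId' --output text |
    tr '\t' '\n' | sort > all.txt

aws ec2 describe-images --owners self \
    --query 'Images[].BlockDeviceMappings[].Ebs.SnapshotId' --output text |
    tr '\t' '\n' | sort > in_use.txt

# Orphans = owned snapshots minus AMI-referenced snapshots.
comm -23 all.txt in_use.txt
```

Review the resulting list before deleting anything with `aws ec2 delete-snapshot --snapshot-id <id>`; snapshots still backing your 10 AMIs will not appear in it.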
On my Mac Pro running Mojave, I cannot create a local snapshot to back up. What can I do about that?
Is there a way to populate the Snapshot Manager with snapshots that are listed in the datastore but do not appear under Manage Snapshots? Alternatively, can I use something like vmkfstools to revert to the snapshot manually?
Long story short, I had a hardware failure and had to re-add the datastore to the Linux VM. I believe before the failure that I was using Portal-000003.vmdk but when I try to start the system with that datastore I get the following error: Failed to power on virtual machine Portal. File system specific implementation of LookupAndOpen[file] failed
When I try using Portal.vmdk, it boots up fine but seems to be running code from 2015. Looking at the datastore, I have several vmdk files:
-rw------- 1 root root 21308092416 Jun 23 2016 Portal-000001-delta.vmdk
-rw------- 1 root root 323 Nov 24 2015 Portal-000001.vmdk
-rw------- 1 root root 117391208448 Nov 24 2015 Portal-000002-delta.vmdk
-rw------- 1 root root 330 Nov 24 2015 Portal-000002.vmdk
-rw------- 1 root root 8192512 Apr 19 05:17 Portal-000003-ctk.vmdk
-rw------- 1 root root 521604673536 Apr 19 03:28 Portal-000003-delta.vmdk
-rw------- 1 root root 457 Apr 19 04:35 Portal-000003.vmdk
-rw------- 1 root root 8192512 Apr 18 17:37 Portal-000004-ctk.vmdk
-rw------- 1 root root 32146173952 Apr 17 03:03 Portal-000004-delta.vmdk
-rw------- 1 root root 398 Apr 17 03:02 Portal-000004.vmdk
-rw------- 1 root root 8192512 Apr 19 05:52 Portal-000005-ctk.vmdk
-rw------- 1 root root 2198843392 Apr 17 03:07 Portal-000005-delta.vmdk
-rw------- 1 root root 398 Apr 17 03:07 Portal-000005.vmdk
-rw------- 1 root root 8192512 Apr 18 17:37 Portal-000006-ctk.vmdk
-rw------- 1 root root 4413435904 Apr 17 03:15 Portal-000006-delta.vmdk
-rw------- 1 root root 398 Apr 17 03:10 Portal-000006.vmdk
-rw------- 1 root root 8192512 Apr 18 17:37 Portal-000007-ctk.vmdk
-rw------- 1 root root 31961624576 Apr 17 07:43 Portal-000007-delta.vmdk
-rw------- 1 root root 398 Apr 17 07:10 Portal-000007.vmdk
-rw------- 1 root root 536870912000 Apr 19 05:57 Portal-flat.vmdk
-rw------- 1 root root 8684 Apr 19 05:57 Portal.nvram
-rw------- 1 root root 560 Apr 19 05:56 Portal.vmdk
-rw-r--r-- 1 root root 78 Apr 19 05:40 Portal.vmsd
-rwxr-xr-x 1 root root 3394 Apr 19 05:57 Portal.vmx
-rw-r--r-- 1 root root 367 Apr 18 16:40 Portal.vmxf
Any suggestions on what else I can do to revert to the latest good snapshot?
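It may help to first reconstruct the disk chain: each small Portal-00000N.vmdk descriptor names its parent in a parentFileNameHint line, so you can walk from Portal-000003.vmdk down to the base disk and see where the chain breaks (a LookupAndOpen failure often points to a missing or inconsistent link). A sketch of that walk in plain shell; the file names mirror the listing above, and pointing the .vmx at the topmost healthy descriptor is one possible way back, not a guaranteed fix:

```shell
#!/bin/sh
# walk_chain.sh -- print a VMware snapshot disk chain by following the
# parentFileNameHint line in each descriptor file, e.g.:
#   ./walk_chain.sh Portal-000003.vmdk
# Run it from the datastore directory that holds the descriptors.

walk_chain() {
    disk=$1
    while [ -n "$disk" ]; do
        echo "$disk"
        # Descriptor lines look like: parentFileNameHint="Portal-000002.vmdk"
        disk=$(sed -n 's/^parentFileNameHint="\(.*\)"/\1/p' "$disk")
    done
}

walk_chain "${1:-Portal-000003.vmdk}"
```

If the chain from Portal-000003.vmdk resolves cleanly, the next sanity check is that each child’s parentCID matches its parent’s CID in the same descriptors; a mismatch there is a common cause of this power-on failure.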