SQL Server uses its allocated server memory for different kinds of purposes. Two of these are the plan cache and the data cache, which store execution plans and actual data pages respectively.
My question: do these two caches have separately allocated sections in the buffer pool, or, on the contrary, do they share a single section?
In other words, if the plan cache is filling up, does the space available for the data cache shrink as well?
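For reference, one way to watch how that memory is actually divided is to query the memory clerks DMV; this is just a diagnostic sketch (column names as of SQL Server 2012 and later, where `pages_kb` exists):

```sql
-- Per-clerk memory: the data cache appears as MEMORYCLERK_SQLBUFFERPOOL,
-- the plan cache mostly as CACHESTORE_SQLCP (ad hoc plans) and
-- CACHESTORE_OBJCP (stored procedures, triggers).
SELECT [type], SUM(pages_kb) / 1024 AS size_mb
FROM sys.dm_os_memory_clerks
GROUP BY [type]
ORDER BY size_mb DESC;
```

Comparing these totals over time shows whether plan-cache growth coincides with data-cache shrinkage on a given instance.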
Does linking from one page (A) to another page (B) [an internal link] reduce the rank of page (A)? Assume page A ranks 2nd in a search engine; if I link internally to page B, does that affect the rank of page A?
I finished reading The Code Book by Simon Singh, and I’m interested in playing with some of the ideas in the book to help increase my own understanding. I don’t intend to implement the following in any consequential setting; I’m only interested in exploring the security implications.
I want to generate alternate diceware lists that have quirks, like every word being typed with the left hand only, or the keystrokes alternating hands while typing. Assuming I can generate 7776 different strings, and am able to follow all of the other guidelines of diceware, are all diceware lists equally secure?
In the German Enigma machine, no letter could be encoded to itself (e.g., a cannot be encoded to a). This detail helped crack the code. However, I don’t think that applies here: the strength of the passphrase doesn’t rely on a cipher. I don’t see why 6 or 7 strings randomly chosen from a list of 7776 wouldn’t have the same entropy, no matter the list. Theoretically, the list could just consist of 7776 different binary strings, couldn’t it?
I understand that additional rules for password generation sometimes decrease security. If an attacker knows my diceware list, does it matter that every entry consists of only the 15 unique characters typed with the left hand? Is there less entropy?
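To make the entropy argument concrete, here is a quick back-of-the-envelope check (plain Python; the numbers follow directly from the list size):

```python
import math

# Entropy of a diceware passphrase depends only on the size of the list and
# the number of words drawn uniformly at random from it.
LIST_SIZE = 7776  # 6^5 entries, one per roll of five dice

for words in (6, 7):
    bits = words * math.log2(LIST_SIZE)
    print(f"{words} words: {bits:.1f} bits")  # 6 -> 77.5, 7 -> 90.5

# A left-hand-only list still has 7776 distinct entries, so the per-word
# entropy is unchanged (log2(7776) ~ 12.92 bits), provided the attacker is
# assumed to know the list either way.
```

So as long as each of the 7776 entries is distinct and chosen uniformly, the keyboard quirks of the entries don’t change the entropy of the passphrase itself (they may matter under other attack models, e.g. shoulder surfing).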
I was reading through this paper, which suggests using Dijkstra without edge relaxation, instead just inserting new nodes; cf. page 16 for the pseudocode. But to me the code looks wrong. I think the comparison should be
k <= d[u]
and the update of d[u] in the next line also seems redundant to me. I think the delete-min operation can never return a vertex with a distance label k that is strictly less than d[u], since whenever a vertex–distance pair is inserted into the priority queue, the distance array d is updated. Am I correct in assuming that this is a mistake in the paper?
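For what it’s worth, here is a minimal sketch of this style of Dijkstra with lazy deletion (my own code, not the paper’s pseudocode), where the stale-entry check plays exactly the role of the comparison in question:

```python
import heapq

def dijkstra_lazy(adj, s):
    """Dijkstra without decrease-key: push duplicates, skip stale entries.

    adj: dict mapping vertex -> list of (neighbor, weight) pairs.
    """
    INF = float("inf")
    d = {v: INF for v in adj}
    d[s] = 0
    pq = [(0, s)]                    # (distance label, vertex)
    while pq:
        k, u = heapq.heappop(pq)
        if k > d[u]:                 # stale entry: u was already settled
            continue                 # with a smaller label, so skip it
        # Here k == d[u] always holds: d[u] is updated on every push and
        # labels only ever decrease, so k < d[u] cannot occur at a pop.
        for v, w in adj[u]:
            if k + w < d[v]:
                d[v] = k + w
                heapq.heappush(pq, (d[v], v))
    return d

graph = {
    "s": [("a", 1), ("b", 4)],
    "a": [("b", 2), ("t", 6)],
    "b": [("t", 1)],
    "t": [],
}
print(dijkstra_lazy(graph, "s"))  # shortest s->t distance is 4 via s,a,b,t
```

This supports the reading above: since d is updated at every insertion, a popped label k strictly below d[u] is impossible, so the k < d[u] branch (and the re-assignment of d[u] there) never fires.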
Given a graph $G = (V,A)$, with source $s$, sink $t$, and edge capacities larger than 1 (but not all equal), I know that if we decrease the capacity of one edge by 1, the $s,t$-maximum flow decreases by at most 1. But I would like to know what happens to the max flow if we decrease (or increase) the capacity of all edges by 1. I’d appreciate any comments/insights on this. Thanks!
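To see that the all-edges case behaves differently, here is a small self-contained experiment (a plain Edmonds-Karp sketch; the graph is my own example): with two edge-disjoint $s$-$t$ paths, decreasing every capacity by 1 drops the max flow by 2, one unit per edge crossing the minimum cut.

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow; cap is a dict {(u, v): capacity}."""
    res = defaultdict(int)            # residual capacities
    adj = defaultdict(set)
    for (u, v), c in cap.items():
        res[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)                 # residual back-edge
    flow = 0
    while True:
        parent = {s: None}            # BFS for a shortest augmenting path
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:  # walk back from t to s
            path.append((parent[v], v))
            v = parent[v]
        b = min(res[e] for e in path) # bottleneck capacity
        for u, v in path:
            res[(u, v)] -= b
            res[(v, u)] += b
        flow += b

# Two vertex-disjoint s-t paths, every edge of capacity 2: max flow is 4.
cap = {("s", "a"): 2, ("a", "t"): 2, ("s", "b"): 2, ("b", "t"): 2}
print(max_flow(cap, "s", "t"))   # 4

# Decrease every capacity by 1: the flow drops by 2, one unit per path.
cap1 = {e: c - 1 for e, c in cap.items()}
print(max_flow(cap1, "s", "t"))  # 2
```

Intuitively, every edge of a minimum cut loses one unit, so the max flow can drop by as many units as there are edges crossing that cut, rather than by at most 1.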
I’m not a professional server administrator. I have an Ubuntu server that hosts a WordPress shopping website with almost 200 users per day.
Server config: 512 MB memory with a 25 GB SSD. I need advice and guidance on how to decrease the number of Apache and mysqld subprocesses and also increase server speed.
Some things I have done to keep the server stable:
* Made a swap file for when memory is completely used up
* Restart Apache and MySQL every hour (using a cron job)
* Use Cloudflare instead of running a dedicated DNS server (like BIND)
Now I have some questions:
- What other options and methods can I use to decrease the number of running processes or increase server speed?
- At some times during the week the server uses 100% of the CPU, but I don’t know which process causes this. How can I log the processes that use 100% of the CPU (is this possible?)
- A more general question: is it usual for mysqld and Apache to use this many subprocesses?
Here is a screenshot of the command output:
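For the 100%-CPU question, a minimal cron-friendly sketch (the log path and the interval below are my own choices, not a standard):

```shell
#!/bin/sh
# Append a timestamped snapshot of the top CPU consumers to a log file.
# The path /tmp/cpu-hogs.log is just an example; pick a persistent one.
LOG=/tmp/cpu-hogs.log
{
    date
    ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -n 6
    echo
} >> "$LOG"
```

Run it every minute from cron (`* * * * * /path/to/log-cpu.sh`); after a spike, the log shows which process was on top at that time. Tools such as `pidstat` (from the sysstat package) or `atop` record the same information with history built in.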
I have been checking some systemd-analyze outputs. I don’t actually have a crucial problem with my boot time, but I’m wondering whether I can decrease it further. I’d like to mention that I’m using an SSD and Ubuntu is my only OS.
systemd-analyze
Startup finished in 5.450s (firmware) + 565ms (loader) + 2.632s (kernel) + 10.086s (userspace) = 18.734s
graphical.target reached after 10.071s in userspace
systemd-analyze blame
6.607s NetworkManager-wait-online.service
5.660s fwupd.service
5.042s bolt.service
4.134s plymouth-quit-wait.service
1.579s dev-sda2.device
1.552s systemd-backlight@backlight:intel_backlight.service
1.367s plymouth-read-write.service
1.211s snapd.service
903ms systemd-logind.service
572ms systemd-journald.service
555ms dev-loop9.device
523ms dev-loop6.device
515ms man-db.service
499ms dev-loop8.device
478ms dev-loop5.device
472ms dev-loop13.device
448ms dev-loop7.device
441ms dev-loop11.device
438ms dev-loop10.device
432ms dev-loop12.device
415ms udisks2.service
406ms dev-loop14.device
319ms snap-gnome\x2d3\x2d28\x2d1804-71.mount
graphical.target @10.071s
└─multi-user.target @10.071s
  └─kerneloops.service @10.042s +27ms
    └─network-online.target @10.032s
      └─NetworkManager-wait-online.service @3.422s +6.607s
        └─NetworkManager.service @3.224s +186ms
          └─dbus.service @3.218s
            └─basic.target @3.211s
              └─sockets.target @3.211s
                └─snapd.socket @3.208s +2ms
                  └─sysinit.target @3.205s
                    └─systemd-backlight@backlight:intel_backlight.service @1.468s +1.552s
                      └─system-systemd\x2dbacklight.slice @1.467s
                        └─system.slice @212ms
                          └─-.slice @212ms
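From the blame output, NetworkManager-wait-online.service (6.6 s) is the single biggest userspace contributor. Assuming nothing on this machine needs the network fully up before login (a single-user desktop usually doesn’t), one common and reversible tweak is:

```shell
# Stop boot from blocking on full network connectivity
# (reversible with "enable" if some service turns out to need it).
sudo systemctl disable NetworkManager-wait-online.service

# After a reboot, re-check what dominates the boot.
systemd-analyze blame | head
```

The fwupd.service and bolt.service entries start on demand anyway, so they mostly affect the blame numbers rather than the time until the desktop is usable.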
Suppose I have a table
t1 with two columns:
id, an auto-increment PK, and
age, with no index. If I want to update rows by age and use
update ... where age=XX, it will lock the whole table. The second method:
update ... where id in (select id from t1 where age=XX) — will it lock fewer records? Can it improve concurrency?
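Whichever UPDATE form is used, the usual fix is a secondary index on age, so InnoDB can lock only the matching index records instead of the whole table; a sketch (XX stands for whatever value you target):

```sql
-- With no index on age, InnoDB must scan every row, and under the default
-- REPEATABLE READ isolation it ends up locking essentially the whole table.
ALTER TABLE t1 ADD INDEX idx_age (age);

-- With the index in place, this locks only the rows (and gaps) matching XX:
UPDATE t1 SET age = age + 1 WHERE age = XX;
```

Note also that MySQL normally rejects `update t1 ... where id in (select id from t1 ...)` with error 1093 (you cannot modify a table you are selecting from) unless the subquery is wrapped in a derived table, so the second method may not even run as written.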