## What needs led to the implementation of ASCII codes?

In the domain of computer science, we already have four number systems, namely

`Binary  Decimal  Octal  Hexadecimal`

So what were the needs that led to the implementation of ASCII codes in computer science? Why is this code used in computer architecture? I have searched for it but can't find a satisfactory answer; if someone knows, kindly tell me.
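One way to make the distinction concrete: the four number systems above are different ways of *writing* the same numeric value, while ASCII solves a separate problem, namely agreeing on *which* number stands for each character, so that text stored on one machine means the same thing on another. A minimal Python sketch of that idea:

```python
# ASCII assigns a fixed 7-bit number to each character, so every machine
# that follows the standard agrees on how text is stored.
code = ord('A')            # character -> ASCII code
print(code)                # 65 (decimal)
print(bin(code))           # 0b1000001 (the same value in binary)
print(hex(code))           # 0x41 (and in hexadecimal)
print(chr(code))           # back to 'A'

# Without a shared code like ASCII, the bytes 72 101 108 108 111 would be
# arbitrary numbers; with the standard, they decode to a word:
print(bytes([72, 101, 108, 108, 111]).decode('ascii'))   # Hello
```

The number base (binary, decimal, octal, hexadecimal) is just how the value 65 is displayed; ASCII is the convention that 65 means `'A'` in the first place.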

## Will blocking outgoing traffic on port 80 and only allowing port 443 lead to more security?

Is blocking HTTP connections on a home network and only allowing HTTPS connections reasonable? Will it reduce the chances of a man-in-the-middle attack?

## Does Type:Type lead to inconsistency without general inductive types?

In e.g. Agda, it is possible to derive an element of the empty type by enabling the "type in type" option.

Every proof I have seen (and come up with) involves making a special inductive type definition. Can a contradiction be derived using type in type while, e.g., only using standard library types?

## In perfect hashing, why does a secondary hash table quadratic in size lead to no collisions?

I read the following in CLRS, 3rd Edition (Section 11.5, "Perfect Hashing"):

How does the choice of $$m_j = n_j^2$$ lead to no collisions?
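For context, the CLRS argument is probabilistic rather than absolute: if $$h$$ is drawn from a universal family and maps the $$n_j$$ keys of bucket $$j$$ into $$m_j = n_j^2$$ slots, the expected number of colliding pairs is at most $$\binom{n_j}{2}/m_j < 1/2$$, so by Markov's inequality a randomly chosen $$h$$ is collision-free with probability greater than $$1/2$$, and fewer than two random tries are needed on average. A minimal Python sketch under those assumptions (the prime `p`, the affine hash family, and the example keys are illustrative choices, not from the book's text):

```python
import random

def build_secondary(keys, max_tries=100, seed=0):
    """Pick hash functions h(k) = ((a*k + b) % p) % m at random, with
    m = n^2, until one is collision-free on `keys` (one bucket's keys).
    By the CLRS argument, fewer than 2 tries are needed on average."""
    rng = random.Random(seed)
    n = len(keys)
    m = n * n                     # quadratic secondary table size
    p = 2_147_483_647             # a prime larger than every key (assumed)
    for attempt in range(1, max_tries + 1):
        a = rng.randrange(1, p)
        b = rng.randrange(0, p)
        slots = {((a * k + b) % p) % m for k in keys}
        if len(slots) == n:       # injective: no collisions in this bucket
            return attempt, m
    raise RuntimeError("no collision-free hash function found")

# Example bucket of n = 8 keys: the secondary table gets m = 64 slots.
keys = [3, 17, 42, 99, 256, 1024, 5000, 77777]
attempt, m = build_secondary(keys)
```

In practice `attempt` is almost always 1 or 2, which is exactly what the quadratic table size buys: the retry loop terminates quickly in expectation.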

## Can a TLB hit lead to a page fault?

Can a TLB hit lead to a page fault?

My answer is yes. For example, even after a TLB hit, the page may no longer be resident in memory (e.g., it has been swapped out), which will lead to a page fault. Another case is that the page is read-only and we want to write to it, which will eventually lead to a page fault.

Now, in this context, let me ask a question that was asked in the GATE 2020 exam (one of India's most prestigious examinations for admission to the IITs for Masters and PhD). I know this is not a question-posting site, but for the sake of clearing my doubt I have to, and the concept can be understood well through this question. The question goes like this:

Consider a paging system that uses a $$1$$-level page table residing in main memory and a TLB for address translation. Each main memory access takes $$100$$ ns and a TLB lookup takes $$20$$ ns. Each page transfer to/from the disk takes $$5000$$ ns. Assume that the TLB hit ratio is $$95\%$$ and the page fault rate is $$10\%$$. Assume that for $$20\%$$ of the total page faults, a dirty page has to be written back to disk before the required page is read from disk. TLB update time is negligible. The average memory access time in ns (rounded off to $$1$$ decimal place) is _______.

There is a lot of fuss going around this question, as the answer provided by the exam authority claims it ranges from $$154.5$$ ns to $$155.5$$ ns, but many fellow scholars claim it should be $$725$$ ns, since there can still be page faults even when there is a hit in the TLB. Moreover, this question is worth $$2$$ marks, so its correctness can change the lives of many students.
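For what it's worth, both claimed answers can be reproduced from the same given numbers; the disagreement is only about whether a page fault can occur on the TLB-hit path. A sketch of the two computations (the variable names are my own, not from the exam):

```python
T_TLB, T_MEM, T_DISK = 20, 100, 5000     # ns
HIT, FAULT, DIRTY = 0.95, 0.10, 0.20     # TLB hit ratio, fault rate, dirty fraction

# Average page-fault service time: read the needed page from disk, plus a
# write-back of the victim page for the dirty fraction of faults.
fault_service = T_DISK + DIRTY * T_DISK   # 5000 + 0.2*5000 = 6000 ns

# Interpretation 1 (exam authority): a TLB hit implies the page is resident,
# so page faults occur only on the TLB-miss path.
amat_1 = (HIT * (T_TLB + T_MEM)
          + (1 - HIT) * (T_TLB + T_MEM                     # page-table walk
                         + (1 - FAULT) * T_MEM
                         + FAULT * (fault_service + T_MEM)))

# Interpretation 2 (the 725 ns camp): a page fault can follow even a TLB hit,
# so the fault rate applies to every access.
amat_2 = T_TLB + (1 - HIT) * T_MEM + T_MEM + FAULT * fault_service

print(round(amat_1, 1))   # 155.0
print(round(amat_2, 1))   # 725.0
```

The first interpretation gives $$155.0$$ ns, inside the authority's $$154.5$$–$$155.5$$ range; the second gives exactly $$725.0$$ ns.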

Thank you.

## How to make sure vulnerability management does not lead to reduced or compromised security

When running vulnerability scans, a particular version of, say, Node.js is often reported as vulnerable, along with a recommendation to update to a higher version. We also have insecure TLS/SSL protocols like TLS 1.0 and SSL 3.0, and it is recommended to disable them altogether. For me, either of these recommendations is a change that needs to be applied to a given application, host, etc. Now I'm wondering how one can make sure that such a change does not lead to reduced or compromised security. How can one make sure that the new Node.js version is not introducing even more severe weaknesses/vulnerabilities? How does change management fit into this? In the end, updating the Node.js version or disabling insecure TLS/SSL protocols is a change request, isn't it?

## Why do I always have something missing in my understanding of topics, which always leads me to solve problems incorrectly?

I am a computer science masters student; I come from a background of engineering, not CS. My problem is that whenever I have a problem set, a programming task, or an exam, I always try hard to understand the question and think of the right answer, but I usually either get stuck or arrive at a wrong answer. When I seek help, I figure out that I wasn't completely understanding the topic of the question itself, was missing some part of the information, or even had a wrong understanding of some parts.
So my question is: how can I approach a computer science topic (e.g., operating systems) and gain an understanding with the right depth, so as to have better comprehension and be able to perform better at programming tasks and exams?
