Operating systems memory management: help understanding a question related to segmentation

There is this question in my textbook (Operating Systems: Internals and Design Principles by William Stallings):

Write the binary translation of the logical address 0011000000110011 under the following hypothetical memory management schemes, and explain your answer:

a. A paging system with a 512-address page size, using a page table in which the frame number happens to be half of the page number.

b. A segmentation system with a 2K-address maximum segment size, using a segment table in which bases happen to be regularly placed at real addresses: segment# + 20 + offset + 4,096.

I am having trouble understanding part b (I'm not a native English speaker). Initially, I assumed that "using a segment table in which bases happen to be regularly placed at real addresses" means that the segment number in the logical address is the number of the physical segment, but then I read "segment# + 20 + offset + 4,096" and I am not sure what to make of it. Does this mean that the base field in the segment table contains segment# (from the logical address) + 20 + offset (from the logical address) + 4,096?
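For comparison, part (a) is mechanical once the address is split into fields, and working it through may make the notation in part (b) easier to parse (part (b) itself hinges on the ambiguous base formula, which is exactly what is being asked). A sketch of part (a), with the bit widths derived from the 512-address page size:

```python
# Part (a) sketch: 512 = 2**9 addresses per page, so the 16-bit logical
# address splits into a 7-bit page number and a 9-bit offset.
logical = int("0011000000110011", 2)

OFFSET_BITS = 9
page = logical >> OFFSET_BITS            # high 7 bits -> page number
offset = logical & (2**OFFSET_BITS - 1)  # low 9 bits  -> offset

frame = page // 2                        # "frame number happens to be half the page number"
physical = (frame << OFFSET_BITS) | offset

print(f"page={page} offset={offset} frame={frame}")
print(format(physical, "016b"))
```

This gives page 24, offset 51, frame 12, i.e. the binary translation 0001100000110011.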

Why is a segmentation fault arising? Is it because of these three lines?

The code below gives a segmentation fault, but it works fine:

  1. if I remove the first weird line, “q++;”, or
  2. if I call solve(s,1) in the main function instead of solve(s,0) at the third weird line, or
  3. if I use solve(s,q) at the second weird line.
```
#include <iostream>
using namespace std;

bool solve(string s, int q) {
    q++;              // first weird line
    if (q == 10)
        return true;
    solve(s, q + 1);  // second weird line
    return false;
}

int main() {
    string s;
    cin >> s;
    solve(s, 0);      // third weird line
    return 0;
}
```
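The three observations all point at the same arithmetic: each call increments q twice, once via q++ and once via the +1 in the recursive call, so q advances by 2 per recursion level. A small simulation (not the original C++, just the q arithmetic) shows which values the if (q == 10) check ever sees:

```python
def q_values_at_check(q0, limit=20):
    """Simulate the q values seen at the `if (q == 10)` check.

    Each solve() call does q++ before the check, then recurses with
    q + 1, so q grows by 2 per level of recursion.
    """
    seen = []
    q = q0
    while q <= limit:
        q += 1           # the q++ before the check
        seen.append(q)
        q += 1           # the +1 passed to the recursive call
    return seen

print(q_values_at_check(0))  # odd values only: 10 never appears
print(q_values_at_check(1))  # even values: 10 is reached
```

Starting from solve(s, 0), the check only ever sees odd values, so the base case never fires, the recursion is unbounded, and the stack overflow is reported as a segmentation fault. Starting from solve(s, 1) (or removing either increment) puts 10 back in the sequence, which matches all three observations.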

TensorFlow image segmentation

For my Image colorization project, I am trying to be able to get the class label for a given pixel in an image. For example, I want to be able to know if a given pixel of the image belongs to sky, person, tree, flower, ocean, etc.

I am looking to use a pre-trained model that does this, as my main goal is to implement an image colorizer (similar to this research paper: https://link.springer.com/article/10.1007/s11263-019-01271-4).

I have looked at: https://www.tensorflow.org/tutorials/images/segmentation and https://www.tensorflow.org/tutorials/images/classification

but it looks like those models need to be trained.

I am new to TensorFlow, so if anyone knows of something similar, please let me know.

Relation between size of address bus and memory size; memory segmentation in 8086

My question is related to memory segmentation in the 8086. I learnt that:

The 8086 has a 20-bit address bus, so it can address 2^20 different locations, which means it has a memory size of 2^20 bytes, i.e., 1 MB.

I have a few doubts:

  1. What I understand from the fact that the 8086 has a 20-bit address bus is that there are 2^20 different combinations of 0s and 1s, each of which represents one physical address. What I don’t understand is how 2^20 different address locations amount to 1 MB of addressable memory. How is the total number of distinct addresses related to memory size (in megabytes)?

  2. Also, correct me if I’m wrong: the 16-bit segment registers in the 8086 hold the starting addresses of the different segments in memory (Code, Stack, Data, Extra). My question is, aren’t memory addresses 20 bits wide? Then how can a 16-bit register hold a 20-bit address? If it contains the upper 16 bits of the 20-bit address, how does the processor work out exactly which address location it has to point to?
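Both doubts come down to two well-documented facts about the 8086: memory is byte-addressable (each address names one byte), and real-mode addresses are formed by shifting the 16-bit segment value left 4 bits and adding a 16-bit offset. A short sketch of the arithmetic:

```python
# Doubt 1: each of the 2**20 addresses names one byte (the 8086 is
# byte-addressable), so 2**20 locations = 2**20 bytes = 1024 KB = 1 MB.
print(2 ** 20)  # 1048576

# Doubt 2: the 16-bit segment register does not hold a full 20-bit
# address; the CPU shifts it left by 4 (multiplies by 16) and adds
# the 16-bit offset to form the 20-bit physical address.
def physical_address(segment, offset):
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    return ((segment << 4) + offset) & 0xFFFFF  # 20-bit result

print(hex(physical_address(0x1234, 0x0005)))  # 0x12345
```

So a segment register holds the upper 16 bits of a 20-bit segment start address whose low 4 bits are always zero, and the offset pins down the exact location within that segment.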

P.S.: I am a beginner in microprocessors and totally reliant on self-study, so kindly excuse me if my questions seem a bit silly.

Thanks in advance.

Image segmentation of a high resolution 2D binary image into clusters, threads and points

I use python3 to find the proportions of the image features mentioned below. The originals are 8-bit greyscale TIFF images with a resolution of 2048×2168 pixels. I have binarised each one into an image composed of the matrix (white) and component particles (black). The particles have random morphologies. I would like to broadly categorise them as:

  • Points, which range from 1×1 to 3×3 blocks of independent square pixels completely surrounded by the matrix
  • Threads, which are linear or diagonal sets of continuous pixels, at least 3 pixels in length and at most 3 pixels in width
  • Clusters, which are any randomly shaped closed morphologies with more than 10 pixels in overall area (or any arbitrarily high value)
  • Others, which is anything not included in the three listed above

Here is an example (400×400) portion of the image.

First of all, I am confused about the order in which to proceed. I could scan the whole image pixel by pixel and extract the points in a first pass; a second pass could look for threads, and a final pass could look for clusters using boundary tracing.

As you can see, the component is spread very unevenly. To a human eye, the threads appear as blocks with a very low aspect ratio (AR), points as noise, and clusters as blocks of distinguishably larger area. The accuracy of this classification scheme therefore does not need to be high. The objective, however, is minimal user interaction (fully automatic). Another thing to note is that holes within clusters or threads (ones that do not break them) can be ignored. The ultimate aim is to get the area percentage of each kind of object, so the detection method can be chosen with that in mind.
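One baseline that avoids separate passes entirely is to label connected components once and then classify each component by its bounding box and area. This is only a rough sketch: the 3-pixel and 10-pixel thresholds are taken from the category list above, the names are made up, and a bounding-box heuristic will misjudge some irregular shapes:

```python
from collections import deque

def label_components(grid):
    """4-connected component labelling on a binary grid (1 = particle pixel)."""
    h, w = len(grid), len(grid[0])
    label = [[0] * w for _ in range(h)]
    components = []
    for y in range(h):
        for x in range(w):
            if grid[y][x] and not label[y][x]:
                components.append([])
                lbl = len(components)
                queue = deque([(y, x)])
                label[y][x] = lbl
                while queue:   # flood fill one component
                    cy, cx = queue.popleft()
                    components[-1].append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] and not label[ny][nx]:
                            label[ny][nx] = lbl
                            queue.append((ny, nx))
    return components

def classify(pixels):
    """Bounding-box heuristic matching the categories above (thresholds assumed)."""
    ys = [p[0] for p in pixels]
    xs = [p[1] for p in pixels]
    height = max(ys) - min(ys) + 1
    width = max(xs) - min(xs) + 1
    if height <= 3 and width <= 3:
        return "point"
    if min(height, width) <= 3:   # thin in one direction -> thread
        return "thread"
    if len(pixels) > 10:
        return "cluster"
    return "other"
```

The area percentages then fall out of summing len(pixels) per category and dividing by the image area. In practice OpenCV's connectedComponentsWithStats would replace the hand-rolled labelling, and filling holes first (as noted above) keeps them from splitting components.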

Some specific questions:

  1. Let us say I identify a large cluster of pixels within the image. How can I split it into a surface (high AR) and a thread (low AR)? (Something similar to the watershed algorithm.)
  2. Should I go for an OpenCV contouring method such as border following or border tracing (and later ignore the holes), or something more suitable?
  3. I was curious to know whether there have been approaches in the past that used random sampling of pixels instead of a pixel-by-pixel scan of the entire image.

I would like to know the steps a computer scientist would follow in such a scenario. I am a beginner in image processing, and any reference material would be appreciated. For anybody interested in metallography: the images are micrographs, and what we see are defects. I am trying to separate cracks, porosities and other openings based on pixel density.

Using the Output of Semantic Segmentation in Autonomous Driving

I have read a couple of papers on semantic segmentation and ran this GitHub code (which was trained on Cityscapes) against a KITTI sample image, and it did pretty well (as seen below).

[image: segmentation result]

I get that classification at the pixel level is very important. The problem I am having is: now that I have classified each pixel, how can I use that to make decisions in autonomous driving?

For example: the cityscapes dataset overview defines 30 classes, traffic lights and signs being in the object category.

So now that we have identified which pixels are a traffic light, we still need to get their indices so we can run that ROI through a traffic-sign classification network to know whether it’s a red light, or yellow, a stop sign, etc.

Isn’t a YOLOv3 model then essentially doing the same thing by putting bounding boxes around objects, and doing it much more cheaply and quickly (computationally)?

Moreover, in the above image, there are three cars labelled in blue, so getting their indices does not actually tell me where in the image EACH car is, just where in the image cars exist.
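One standard way to recover per-object regions from a semantic mask is connected-component labelling: pixels of one class that touch each other are grouped into one region, and each region gets its own bounding box (the ROI to crop for a downstream classifier). A sketch under the assumption that the cars do not touch in the mask (this is exactly where it breaks down, which is why instance segmentation exists):

```python
from collections import deque

def instance_boxes(mask):
    """Split a per-class boolean mask (e.g. all 'car' pixels) into
    4-connected regions and return one bounding box (y0, x0, y1, x1)
    per region. A rough stand-in for instance separation."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                y0 = y1 = y
                x0 = x1 = x
                queue = deque([(y, x)])
                seen[y][x] = True
                while queue:   # flood fill one region, growing its box
                    cy, cx = queue.popleft()
                    y0, y1 = min(y0, cy), max(y1, cy)
                    x0, x1 = min(x0, cx), max(x1, cx)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                boxes.append((y0, x0, y1, x1))
    return boxes
```

Two cars whose masks overlap or touch still collapse into one region, which is the core of the complaint above; that gap is what instance/panoptic segmentation (or a detector like YOLOv3) addresses directly.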

I do see it being very useful for drivable-space estimation, but is that it?

Recursion in NASM: error on the return, Segmentation fault (core dumped)

This is the code for Fibonacci. The code runs fine and gives the expected result; the error is that when the recursion finishes, I need to clean up the calls I made to the function, but when I place the ret the program crashes, and I know the error is there. Maybe the error is in the way I placed it, but I have not been able to find another way to do it, and it does not look wrong to me: the idea is that at the end the program only has on the stack the return addresses of the calls to fibonacci, so I placed the ret after the final label so that after the first ret returns to the position following the call, it naturally falls through to the ret again, and so on until the stack is cleaned up.

```
SECTION .data
msg:      db "Ingrese el numero a calcular(1-9)?: "
len:      equ $ - msg

SECTION .bss
count:    resw 1
result:   resb 1

SECTION .text
GLOBAL _start
_start:
    mov edx, len
    mov ecx, msg
    mov ebx, 1
    mov eax, 4
    int 80h

    mov ecx, count
    xor ebx, ebx
    mov eax, 3
    int 80h

    xor ecx, ecx
    mov cx, [count]
    and cx, 0fh

    mov ax, 1
    mov bx, 1
fibonacci:
    push bx
    push ax

    cmp cx, 1
    je final

    pop ax
    pop bx

    mov dx, ax
    push dx
    add ax, bx
    dec cx
    pop bx

    call fibonacci

final:
    ret

    xor ebx, ebx
    mov eax, 1
    int 80h
```

Attempting to start Thunderbird and getting Segmentation Fault (core dumped)

My power went out last night, causing my computer to crash, and afterwards I cannot start Thunderbird.

From the Launcher, the timer spins and then just stops. If I try running thunderbird from a terminal, I get Segmentation fault (core dumped).

Is there a way to get Thunderbird working again?

Ubuntu 18.04, Thunderbird 1:60.9.0+build1-0ubuntu0.18.04.1