## Why is the Nintendo Entertainment System (NES) referred to as an 8-bit system, rather than a 1-byte system?

As far as I’ve understood it, referring to this system as an 8-bit system points out that one can access 8 bits of data in one instruction.

While I understand that we’re not saving vast amounts of time by calling it “one byte” instead of “eight bits”, is there a particular reason why the latter is/was preferred?

## Representation of -40 in an 8-bit computer using 2’s complement

What is the representation of -40 in an 8-bit computer using a 2’s complement integer?
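A quick way to check an answer by hand is to compute the low 8 bits of the value modulo 256. A minimal Python sketch (the function name is mine):

```python
def twos_complement_8bit(n):
    """Return the 8-bit two's-complement bit pattern of n (valid for -128..127)."""
    assert -128 <= n <= 127
    # Masking with 0xFF keeps the low 8 bits, which is exactly the
    # two's-complement pattern: n mod 256.
    return format(n & 0xFF, '08b')
```

For -40 this gives `11011000` (0xD8): take 40 = `00101000`, invert to `11010111`, and add 1.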

## What is the maximum decimal integer that can be stored in the memory of a computer with an 8-bit word?

I am preparing for an exam, and this question was asked in last year’s paper:

The maximum decimal integer number that can be stored in the memory of an 8-bit word processor computer?

a) (128)₁₀
b) (127)₁₀
c) (129)₁₀
d) (255)₁₀

The answer given in the answer key is (b), and I have no idea how they arrived at this result.

According to my understanding, we have 8 bits, which gives 2⁸ = 256 values, so 255 should be the maximum integer we can store.
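Writing both interpretations out side by side (my guess is that the answer key assumes a signed word):

```python
BITS = 8

# Unsigned interpretation: all 8 bits hold magnitude.
unsigned_max = 2**BITS - 1        # 255

# Signed (two's complement) interpretation: one bit is effectively
# reserved for the sign, leaving 7 bits of magnitude.
signed_max = 2**(BITS - 1) - 1    # 127
signed_min = -2**(BITS - 1)       # -128
```

So 255 is correct for an unsigned byte, and 127 only if the word is treated as a signed integer.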

## I want to create an unsigned 8-bit adder/subtractor and implement it in a logic circuit

I am having a hard time trying to implement an adder for 8-bit signed numbers in 1’s complement, without using VHDL, since I am new to this kind of thing. I know that I should use 8 full adders and link them together, but the problem is that I don’t know how to do it.

It is an assignment, and I know you can’t give me the full solution. So I started designing my circuit in an application called “logic circuit”.

And this is the interior of a full adder.
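To check my understanding of how the eight full adders link together before wiring them up, I modelled the carry chain in Python (the names are mine, not from the assignment):

```python
def full_adder(a, b, cin):
    """One full adder: sum bit and carry-out from two input bits plus carry-in."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_add_8bit(x, y, cin=0):
    """Chain eight full adders: each stage's carry-out feeds the next carry-in."""
    result = 0
    carry = cin
    for i in range(8):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry       # 8-bit sum and final carry-out
```

The final carry-out is effectively the 9th output bit; in the circuit it would drive the overflow indicator.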

## Is there a simple image processor that can reduce image color depth (convert 24-bit RGB to 2, 4 or 8-bit indexed color)?

I need a quick/easy image-processing application under Ubuntu 18.04 that will let me view, crop, scale down and reduce the color depth/color mode (number of color bits per pixel) – with and without dithering – of 24-bit JPG input images and export them as PNG. I’d like the option of reducing to 4, 16 or 256 colors (2, 4 or 8 bits per pixel).

GIMP has these capabilities, but I’m looking for something less bulky.

In Windows (and OS/2 before that!), I was a longtime user of PMView, a speedy viewer and processor. I can continue to use PMView through WINE (there is also an IrfanView app that works similarly through WINE), but I would prefer a Linux app.

I have tried a number of applications. I like the simplicity, cropping, resizing and even the color manipulations of gThumb. But gThumb does not allow me to save a PNG at less than 24-bit. Same problem with Shotwell and Mirage. I’ve had no luck getting usable results with Pinta.
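For the record, I know the reduction itself can be scripted with ImageMagick (via `-colors`, `-dither` and the `PNG8:` output prefix), but I’d still prefer an interactive app. For reference:

```shell
# Reduce to a 256-color indexed PNG with Floyd-Steinberg dithering (ImageMagick):
convert input.jpg -dither FloydSteinberg -colors 256 PNG8:output.png

# Same reduction to 16 colors, with dithering disabled:
convert input.jpg +dither -colors 16 output.png
```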

## Conversion from 32-bit to 8-bit values and vice versa in assembly giving a segmentation fault

This is probably my final hurdle in learning x86 assembly language.

The following subroutine is giving me a segmentation fault:

```asm
;=================================================================
; RemCharCodeFromAToB - removes all chars between a and e from str
; arguments:
;   str - string to be processed
;   a   - start
;   e   - end
; return value:
;   n/a
;-------------------------------------------------------------------
RemCharCodeFromAToB:
    ; standard entry sequence
    push    ebp             ; save the previous value of ebp
    mov     ebp, esp        ; copy esp -> ebp so ebp can be used as a frame pointer

    ; accessing arguments
    ; [ebp + 0] = old ebp stack frame
    ; [ebp + 4] = return address
    mov     edx, [ebp + 8]  ; string address

    while_loop_rcc:
        mov cl, [edx]               ; load the current character of the string
        cmp cl, 0                   ; check for the null value
        je  while_loop_exit_rcc     ; exit if the null-character is reached

        mov al, cl              ; save cl
        mov cl, [ebp + 16]      ; end-char
        push cx                 ; push end-char
        mov cl, [ebp + 12]      ; start-char
        push cx                 ; push start-char
        push ax                 ; push ch
        call IsBetweenAandB
        add esp, 12

        cmp eax, 0              ; if(ch is not between 'a' and 'e')
        je inner_loop_exit_rcc

        mov eax, edx            ; copy the current address

        inner_loop_rcc:
            mov cl, [eax+1]
            cmp cl, 0
            je  inner_loop_exit_rcc

            mov [eax], cl

            inc eax
            jmp inner_loop_rcc
        inner_loop_exit_rcc:

        inc edx                 ; increment the address
        jmp while_loop_rcc      ; start the loop again
    while_loop_exit_rcc:

    ; standard exit sequence
    mov     esp, ebp        ; restore esp with ebp
    pop     ebp             ; remove ebp from stack
    ret                     ; return
;===================================================================
```

Can someone point out the error here?

I suspect that something is wrong with the data conversions between 32-bit and 8-bit registers. My understanding of this is not clear yet.
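To double-check my mental model of the two conversions involved, I wrote small Python helpers of my own that model what x86 MOVZX and MOVSX do when widening an 8-bit value:

```python
def zero_extend_8_to_32(b):
    """Model of MOVZX: the upper 24 bits of the destination become 0."""
    return b & 0xFF

def sign_extend_8_to_32(b):
    """Model of MOVSX: bit 7 of the byte is copied into the upper 24 bits."""
    b &= 0xFF
    return b - 0x100 if b & 0x80 else b
```

So the byte 0xD8 widens to 216 when zero-extended, but to -40 when sign-extended.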

Or, is there something wrong in the following part

```asm
mov al, cl              ; save cl
mov cl, [ebp + 16]      ; end-char
push cx                 ; push end-char
mov cl, [ebp + 12]      ; start-char
push cx                 ; push start-char
push ax                 ; push ch
call IsBetweenAandB
add esp, 12
```

?

• Full asm code is here.

• C++ code is here.

• Makefile is here.

## 8-bit to Quoted-Printable encoding in Exim

Is it at all possible, and how, to configure the Exim4 mail server to convert outgoing messages (or message parts) from 8bit encoding to Quoted-Printable (or Base64, although I’d prefer QP) before signing them with DKIM and transferring them?

We currently have a setup where messages containing 8bit parts get an invalid DKIM signature when arriving at the destination server because they are converted by an upstream server (which we have no control of) to Quoted-Printable. Unfortunately, we can’t really complain about the behaviour of the upstream server because RFC4871 clearly states that it is the signing server that has to reencode the mail in the appropriate encoding before signing (see RFC4871 section 5.3):

In order to minimize the chances of such breakage, signers SHOULD convert the message to a suitable MIME content transfer encoding such as quoted-printable or base64 as described in MIME Part One [RFC2045] before signing.

I would therefore expect this conversion to be a basic function of any mail server supporting DKIM, but as far as I have searched in the Exim manuals, there is nothing like that. Is there any known solution to this issue?

## Using Darktable, how to batch convert from RAW to lossless (uncompressed) 8-bit TIFF

I’ve noticed in other answers that there are ways to use the History Stack in Darktable to batch convert a bunch of RAW images. However, since I really only want to use Darktable to convert (export) a lot of RAW images as 8-bit uncompressed/lossless TIFFs, I wondered if this is possible somehow?
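I did come across darktable-cli, which might already cover this; if I read the manual right, something like the following could work (the `bpp` conf key is my guess at the setting behind the GUI’s TIFF export option, so treat it as unverified):

```shell
# Hypothetical batch export: one 8-bit TIFF per RAW file (conf key unverified)
for f in *.RAW; do
  darktable-cli "$f" "${f%.RAW}.tif" \
    --core --conf plugins/imageio/format/tiff/bpp=8
done
```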

## Simple algorithm for IEEE-754 division on 8-bit CPU?

IEEE Std 754-2008 is the modern definition of Floating-Point Arithmetic. It requires that division (among other operations) performs

as if it first produced an intermediate result correct to infinite precision (…), and then rounded that intermediate result, if necessary, to fit in the destination’s format.

I ask for a simple algorithm performing that operation, tailored to 8-bit CPUs with an 8×8→16-bit multiplier; speed is a secondary criterion.

For simplicity I restrict to the binary32 type (1 sign bit, 8 exponent bits, 23 fraction bits; layout figure omitted, image credit: Wikipedia)

and positive “normal” input with exponents such that no overflow or underflow occurs (so that we can ignore any sign or exponent consideration beyond some limited final exponent adjustment according to the result of the division), and rounding to nearest even:

the floating-point number nearest to the infinitely precise result shall be delivered; if the two nearest floating-point numbers bracketing an unrepresentable infinitely precise result are equally near, the one with an even least significant digit shall be delivered.

I ask because I strongly suspect that the AVR libc implementation of `float` division (typically used on the Arduino Uno) deviates from the standard beyond being locked to roundTiesToEven, and I wonder how hard it would be to fix that. The current code seems to be divsf3, which invokes divsf3x to perform the division to an apparently 40-bit mantissa, then rounds with fp_round. Isn’t that very approach doomed, BTW?
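To pin down what “correctly rounded” demands here, I modelled the significand division with round-to-nearest-even in Python (a reference model only, not 8-bit-ready code; the names are mine):

```python
def div_binary32_sig(na, nb):
    """Divide two binary32 significands (24-bit integers with the implicit
    bit set, i.e. in [2**23, 2**24)) and round to nearest, ties to even.
    Returns (24-bit result significand, exponent adjustment)."""
    assert (1 << 23) <= na < (1 << 24) and (1 << 23) <= nb < (1 << 24)
    exp_adj = 0
    if na < nb:                  # quotient would fall below 1.0: pre-shift one bit
        na <<= 1
        exp_adj = -1
    # Long division with 2 extra quotient bits; the remainder feeds the sticky bit.
    q, r = divmod(na << 25, nb)  # q lies in [2**25, 2**26): 24 result bits + 2 extra
    result = q >> 2
    round_bit = (q >> 1) & 1
    sticky = (q & 1) | (r != 0)
    if round_bit and (sticky or (result & 1)):   # round to nearest, ties to even
        result += 1
        if result == 1 << 24:    # rounding carried out of 24 bits: renormalize
            result >>= 1
            exp_adj += 1
    return result, exp_adj
```

For 1.0/3.0 (significands 8388608 and 12582912) the model yields significand 11184811 (0xAAAAAB) with exponent adjustment -1, which matches the binary32 value of 1/3. The point is that 2 extra quotient bits plus a sticky bit derived from the remainder are enough to round correctly, with no need for a wider intermediate mantissa.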