
I am taking an online computer architecture course that taught how a computer works, from gates up to building a working CPU. I know we put instructions in memory, and they are fed to the units in the CPU (PC, ALU, registers, data memory, program memory) using control buses and so on, and with the help of these units the CPU works. I understood all that, but I was curious about how gates and transistors themselves work. So I read about the basics of gates: transistors, FETs, TTL, CMOS, and their gate circuits. There I saw that transistors and gates take HIGH and LOW voltages as input and give us outputs that we treat as 0s and 1s.

But my question is: how do we make those instructions into 0s and 1s in the first place? In order to make binary code, we need to turn inputs on or off (HIGH or LOW) to get outputs in binary form, which will later be stored in registers and fed to the CPU's units. How do we create that input in the first place? Making a binary code requires turning many inputs on and off at once, so who (or what) does that work of turning the circuit's inputs on and off?

ankasaw99

1 Answer


Maybe a duplicate of How does an assembly instruction turn into voltage changes on the CPU?.

Or, since you're asking how 0s and 1s get into the computer in the first place: early computers had front-panel switches that could literally connect a logic input to the power-supply rail or to ground, directly giving a low or high voltage. This retrocomputing question has pictures.
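That front-panel workflow can be sketched in a few lines, assuming the common deposit-and-auto-increment behavior (the switch widths and word values below are purely illustrative):

```python
# Sketch (assumed behavior): the operator sets each bit with a toggle switch
# (up = supply rail = 1, down = ground = 0), then presses DEPOSIT to latch
# the word into memory and advance the address.

def switches_to_word(switch_positions):
    """Each switch is True (connected to the supply rail) or False (ground)."""
    word = 0
    for up in switch_positions:          # most-significant switch first
        word = (word << 1) | (1 if up else 0)
    return word

memory = {}
address = 0

def deposit(switch_positions):
    """Latch the switch word at the current address, then auto-increment."""
    global address
    memory[address] = switches_to_word(switch_positions)
    address += 1

# Operator toggles in two 8-bit words, pressing DEPOSIT after each:
deposit([True, False, True, False, False, False, False, True])
deposit([False, False, False, False, True, True, True, True])
```

The human at the panel is the answer to "who flips the inputs": each word of the program is set by hand, one switch per bit.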

On more modern machines, we have keyboards that work by making or breaking electrical connections, pulling input pins high (e.g. via a pull-up resistor) or low (connecting them to ground) in the keyboard controller.
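A typical keyboard wires its keys in a row/column matrix that the controller scans. This toy model (the matrix size and pressed key are made up for illustration) shows how making a connection turns into a 0 on an input pin:

```python
# Hedged sketch of a keyboard matrix scan: the controller drives one row low
# at a time and reads the column pins. A pressed key connects its row to its
# column, pulling that column low (0); idle columns read high (1) via pull-ups.

PRESSED = {(1, 2)}   # pretend the key at row 1, column 2 is held down

def read_columns(driven_row, n_cols=4):
    """Column voltage levels while `driven_row` is driven low."""
    return [0 if (driven_row, col) in PRESSED else 1 for col in range(n_cols)]

def scan(n_rows=4, n_cols=4):
    """Scan the whole matrix; return the set of pressed (row, col) keys."""
    hits = set()
    for row in range(n_rows):
        for col, level in enumerate(read_columns(row, n_cols)):
            if level == 0:            # pulled low => the switch there is closed
                hits.add((row, col))
    return hits
```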


We also have computer memory that outputs a low or high voltage as data in response to signals on its address lines. For example, one of the simplest kinds of programmable ROM (PROM) is programmed by essentially "blowing fuses" in the chip, i.e. melting the wires that connect a given address's bits to the supply voltage. So when you read it by driving the address lines low or high, you get a low or high voltage output on each data pin.
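As a sketch, a fuse PROM can be modeled as a grid of fuses; the intact-fuse-reads-1 polarity and the 16×8 size here are assumptions for illustration (real parts vary):

```python
# Toy fuse-PROM: programming "blows" (melts) a fuse, disconnecting that bit
# from the supply rail; reading drives the address lines and reports high (1)
# where the fuse is intact and low (0) where it was blown.

WIDTH = 8
fuses = [[True] * WIDTH for _ in range(16)]   # all fuses intact: every word reads 0xFF

def blow_fuse(address, bit):
    fuses[address][bit] = False               # irreversible, like melting a wire

def read(address):
    word = 0
    for bit in range(WIDTH):
        if fuses[address][bit]:               # intact fuse -> high voltage -> 1
            word |= 1 << bit
    return word

# "Program" address 3 to hold 0xA5 by blowing the fuses for its zero bits:
for bit in range(WIDTH):
    if not (0xA5 >> bit) & 1:
        blow_fuse(3, bit)
```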

More modern memory technologies are just fancier versions of that, e.g. electrically erasable PROM (EEPROM) or flash. Or volatile SRAM / DRAM that holds data you store to it but loses it on power failure. An SRAM cell can be built out of just a few transistors.
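The storage core of an SRAM cell is a pair of cross-coupled inverters holding each other at opposite levels; this toy model (ignoring the access transistors' analog details) shows the bistable loop being forced to a value and then holding it:

```python
# Sketch: two inverters, each driving the other's input. Once forced to a
# state, the feedback loop holds it as long as power is applied.

def settle(q, qbar):
    """Let the cross-coupled inverters feed back until stable."""
    for _ in range(4):                 # a few passes is plenty for this toy
        q, qbar = 1 - qbar, 1 - q      # each inverter inverts the other's output
    return q, qbar

def write(value):
    """Force the internal nodes (roughly what the access transistors do)."""
    return settle(value, 1 - value)
```

Writing 1 settles to (q=1, qbar=0) and writing 0 to (q=0, qbar=1); losing power is simply losing the state of the loop, which is why SRAM is volatile.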

On bootup/reset, the CPU has a hard-coded address that it reads code from ("jumps to"), the reset vector. It will fetch bytes from this address and decode them as instructions. Hard-coding that reset vector into the CPU is just a function of how some wiring (or silicon paths) are connected. That, plus some code in ROM, is all you need for a computer to bootstrap itself and load more code, e.g. from disk. (The CPU talks to the disk controller with I/O instructions, or the disk controller can DMA data into RAM for the CPU to read.) Obviously communication over data buses involves electrical low and high voltage levels corresponding to logic 0 and 1 bits, or some more complex encoding.
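The reset-and-fetch sequence can be sketched as a toy CPU; the two-instruction "ISA" (a load-immediate and a halt opcode) and the vector address below are invented purely for illustration:

```python
# Sketch of boot: reset logic hard-wires the program counter to a fixed
# reset vector, then the CPU fetches and decodes bytes from ROM.

RESET_VECTOR = 0xF0                    # fixed by the wiring, not by software

ROM = {0xF0: 0x01, 0xF1: 0x2A,         # 0x01 imm : load immediate into ACC
       0xF2: 0xFF}                     # 0xFF     : halt

def run():
    pc = RESET_VECTOR                  # on reset, PC is forced to this address
    acc = 0
    while True:
        opcode = ROM[pc]; pc += 1      # fetch
        if opcode == 0x01:             # decode + execute: load immediate
            acc = ROM[pc]; pc += 1
        elif opcode == 0xFF:           # halt: stop, return the result
            return acc
```

Note that nothing "typed in" the first instruction at runtime: the bytes at the reset vector were already sitting in ROM as stored voltage levels.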


All this electrically-programmable stuff is great, but how did it all start?

You can physically build memory by hand, instead of electrically programming it. For example, the Apollo program's guidance computer used core rope memory: https://en.wikipedia.org/wiki/Core_rope_memory. There are videos of how those memories were built, encoding hand-written machine code by wrapping wires one way or the other, by hand.

Other early computer memories included punch cards or paper tape, and stuff like https://en.wikipedia.org/wiki/Drum_memory.

Punch cards could be punched by hand, and a punch-card reader would mechanically turn the pattern of holes into patterns of low and high voltages (bits) that the CPU could read with I/O instructions.
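The hole-to-bits step is just a sensor per row; in this sketch the hole-equals-1 convention and the 8-row column are assumptions for illustration (real card codes differed):

```python
# Sketch: the reader senses each column of a punched card; a hole lets a
# contact (or light) through, and the interface turns that into one bit.

def column_to_byte(holes):
    """holes: 8 booleans, top row first; True means a hole was punched."""
    value = 0
    for h in holes:
        value = (value << 1) | (1 if h else 0)
    return value

# A column punched as 01000001 reads back as the byte 0x41:
byte = column_to_byte([False, True, False, False, False, False, False, True])
```

So the "who flips the switches" at the very bottom is a person: punching a hole by hand is setting a bit, and the machinery only converts that physical pattern into voltage levels.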

Peter Cordes