“Bit” is a contraction of the term “binary digit” and is the smallest unit of information in computing and digital communications. A bit holds only one of two values: 0 or 1. A bit can be thought of like a light switch: it is either on or off (1 or 0). At the heart of every digital device, every program is written in bits (or binary). Combinations of bits are put together in a specific order so the computer’s processor can understand the instructions. Originally, all computer programs were written directly in binary. Programming languages evolved and became more human readable by relying on intermediate programs called compilers to translate the code into binary.
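The idea that bit patterns encode meaning can be seen directly in how text is stored. A minimal Python sketch (the text "Hi" is just an illustrative example):

```python
# Show how text is stored as bits: each character maps to a number,
# and that number is stored as a pattern of 0s and 1s.
for ch in "Hi":
    code = ord(ch)              # the character's numeric code
    bits = format(code, "08b")  # the same number as an 8-bit pattern
    print(ch, code, bits)
# Prints:
# H 72 01001000
# i 105 01101001
```

The processor never sees the letter “H”, only the pattern 01001000; the meaning comes from the convention (the character encoding) shared by the software reading it.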
A bit also refers to a location of data in a computer’s memory or on a disk. Again, bits are combined in different ways to create unique “addresses” for data.
Often you may hear terms like 32-bit or 64-bit. This refers to the number of bits a computer can process at once and the amount of memory it can address. Specifically, it is the width of the integers, memory addresses, and data paths (i.e., a value that is 64 bits wide can hold 64 0s or 1s). The term also appears in the names of hardware, such as a “64-bit bus”. A bus is a cable or interface that transfers data from one computer component to another.
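The difference between 32-bit and 64-bit widths can be made concrete by counting the distinct patterns each width allows. A short Python sketch:

```python
# Each added bit doubles the number of distinct patterns,
# so an n-bit value has 2**n possible values.
patterns_32 = 2 ** 32
patterns_64 = 2 ** 64

print(patterns_32)  # 4294967296
print(patterns_64)  # 18446744073709551616
```

This is why a 32-bit address can only distinguish about 4 billion memory locations (roughly 4 GB), while a 64-bit address space is vastly larger.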
According to Wikipedia, the encoding of data by discrete bits was used in the punched cards invented by Basile Bouchon and Jean-Baptiste Falcon (1732), developed by Joseph Marie Jacquard (1804), and later adopted by Semen Korsakov, Charles Babbage, Hermann Hollerith, and early computer manufacturers like IBM. In early computers, a stiff paper card or tape would have an array of hole positions. Early programmers could either punch through a hole position or leave it intact, so each position carried one bit of information. To program a computer, the punched paper would be fed through a reader and understood by the computer.
The encoding of text by bits was also used in Morse code (1844) and early digital communications machines such as teletypes and stock ticker machines (1870).