History of the development of computer technology. Generations of computers.

The first device designed to make counting easier was the abacus. With the help of abacus beads it was possible to perform addition and subtraction and simple multiplication.

1642 - French mathematician Blaise Pascal designed the first mechanical adding machine, the Pascalina, which could mechanically perform the addition of numbers.

1673 - Gottfried Wilhelm Leibniz designed an adding machine that could mechanically perform the four arithmetic operations.

First half of the 19th century - English mathematician Charles Babbage tried to build a universal computing device, that is, a computer. Babbage called it the Analytical Engine. He determined that a computer must contain memory and be controlled by a program. According to Babbage, a computer is a mechanical device for which programs are set using punched cards - cards made of thick paper with information printed using holes (at that time they were already widely used in looms).

1941 - German engineer Konrad Zuse built a small computer based on several electromechanical relays.

1943 - in the USA, at one of the IBM enterprises, Howard Aiken created a computer called "Mark-1". It allowed calculations to be carried out hundreds of times faster than by hand (using an adding machine) and was used for military calculations. It used a combination of electrical signals and mechanical drives. "Mark-1" measured about 15 × 2.5 m and contained 750,000 parts. The machine was capable of multiplying two 32-bit numbers in 4 seconds.

1943 - in the USA, a group of specialists led by John Mauchly and Presper Eckert began to construct the ENIAC computer based on vacuum tubes.

1945 - mathematician John von Neumann was brought in to work on ENIAC and prepared a report on this computer. In his report, von Neumann formulated the general principles of the functioning of computers, i.e., universal computing devices. To this day, the vast majority of computers are made in accordance with the principles laid down by John von Neumann.

1947 - Eckert and Mauchly began development of the first serial electronic machine, UNIVAC (Universal Automatic Computer). The first model of the machine (UNIVAC-1) was built for the US Census Bureau and put into operation in the spring of 1951. The synchronous, sequential UNIVAC-1 computer was created on the basis of the ENIAC and EDVAC computers. It operated at a clock frequency of 2.25 MHz and contained about 5,000 vacuum tubes. Its internal storage of 1,000 12-digit decimal numbers was implemented on 100 mercury delay lines.

1949 - English researcher Maurice Wilkes built the first computer that embodied von Neumann's principles.

1951 - J. Forrester published an article on the use of magnetic cores for storing digital information. The Whirlwind-1 machine was the first to use magnetic core memory. It consisted of two cubes of 32 × 32 × 17 cores, which provided storage of 2,048 words of 16-bit binary numbers with one parity bit.

1952 - IBM released its first industrial electronic computer, the IBM 701, a synchronous parallel computer containing 4,000 vacuum tubes and 12,000 diodes. An improved version, the IBM 704, was distinguished by its high speed: it used index registers and represented data in floating-point form.

After the IBM 704, the IBM 709 was released, which in architectural terms was close to the machines of the second and third generations. It was the first machine to use indirect addressing and the first to have input-output channels.

1952 - Remington Rand released the UNIVAC 1103 computer, which was the first to use software interrupts. Remington Rand employees used an algebraic form of writing algorithms called "Short Code" (the first interpreter, created in 1949 by John Mauchly).

1956 - IBM developed floating magnetic heads on an air cushion. Their invention made it possible to create a new type of memory - disk storage devices, the importance of which was fully appreciated in the following decades of the development of computer technology. The first disk storage appeared in the IBM 305 RAMAC machine, which had a package of 50 metal disks with a magnetic coating that rotated at 1,200 rpm. The surface of each disk contained 100 tracks for recording data, each holding 10,000 characters.

1956 - Ferranti released the Pegasus computer, in which the concept of general-purpose registers (GPRs) was first implemented. With the advent of general-purpose registers, the distinction between index registers and accumulators was eliminated, and the programmer had at his disposal not one but several accumulator registers.

1957 - a group led by J. Backus completed work on the first high-level programming language, called FORTRAN. The language, implemented for the first time on the IBM 704 computer, helped expand the scope of computer applications.

1960s - the 2nd generation of computers: logic elements were implemented on the basis of semiconductor transistor devices, and algorithmic programming languages such as ALGOL and Pascal were developed.

1970s - the 3rd generation of computers: integrated circuits containing thousands of transistors on a single semiconductor wafer. Operating systems and structured programming languages began to be created.

1974 - several companies announced the creation of a personal computer based on the Intel-8008 microprocessor - a device that performs the same functions as a large computer, but is designed for one user.

1975 - the first commercially distributed personal computer Altair-8800 based on the Intel-8080 microprocessor appeared. This computer had only 256 bytes of RAM, and there was no keyboard or screen.

Late 1975 - Paul Allen and Bill Gates (the future founders of Microsoft) created a BASIC interpreter for the Altair computer, which allowed users to communicate with the computer easily and to write programs for it.

August 1981 - IBM introduced the IBM PC personal computer. The main microprocessor of the computer was a 16-bit Intel-8088 microprocessor, which allowed working with 1 megabyte of memory.

1980s - the 4th generation of computers, built on large-scale integrated circuits. Microprocessors were implemented as a single chip, and mass production of personal computers began.

1990s — the 5th generation of computers, built on ultra-large-scale integrated circuits. Processors contain millions of transistors. Global computer networks for mass use emerged.

2000s — the 6th generation of computers: integration of computers and household appliances, embedded computers, and the development of network computing.

At all times, starting from antiquity, people needed to count. At first they counted on their fingers or with pebbles. However, even simple arithmetic operations with large numbers are difficult for the human brain. Therefore, already in ancient times the simplest counting instrument was invented - the abacus, which appeared more than 15 centuries ago in the Mediterranean countries. This prototype of the modern abacus was a set of beads strung on rods and was used by merchants.

In the arithmetic sense, the abacus rods represent decimal places. Each bead on the first rod has a value of 1, on the second rod 10, on the third rod 100, and so on. Until the 17th century the abacus remained practically the only counting instrument.

In Russia, the so-called Russian abacus appeared in the 16th century. It is based on the decimal number system and allows arithmetic operations to be performed quickly (Fig. 6).

Fig. 6. Abacus

In 1614, mathematician John Napier invented logarithms.

A logarithm is the exponent to which a number (the base of the logarithm) must be raised to obtain another given number. Napier's discovery was that any number can be expressed in this way, and that the sum of the logarithms of two numbers is equal to the logarithm of their product. This made it possible to reduce multiplication to the simpler operation of addition. Napier created tables of logarithms. To multiply two numbers, you look up their logarithms in the table, add them, and then find the number corresponding to this sum in the reverse table of antilogarithms. Based on these tables, in 1654 R. Bissaker and, independently, in 1657 S. Partridge developed the rectangular slide rule: the engineer's main calculating device until the middle of the 20th century (Fig. 7).
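
To make the idea concrete, here is a small illustrative sketch in Python (added to this text for clarity; the sample numbers are chosen arbitrarily) showing how multiplication reduces to the addition of logarithms - the principle behind Napier's tables and the slide rule:

import math

def multiply_via_logs(a, b):
    # log(a*b) = log(a) + log(b), so a*b can be recovered as the antilogarithm
    # of the sum of the two logarithms - addition replaces multiplication.
    log_sum = math.log10(a) + math.log10(b)
    return 10 ** log_sum  # the antilogarithm

print(multiply_via_logs(37, 59))  # about 2183.0; the exact product is 2183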

Fig. 7. Slide rule

In 1642, Blaise Pascal invented a mechanical adding machine using the decimal number system. Each decimal place was represented by a wheel with ten teeth, denoting the digits from 0 to 9. There were 8 wheels in total, that is, Pascal's machine was 8-digit.

However, it was not the decimal number system that won out in digital computing, but the binary number system. The main reason is that in nature there are many phenomena with two stable states - "on / off", "voltage present / no voltage", "statement false / statement true" - but no phenomena with ten stable states. Why, then, is the decimal system so widespread? Simply because a person has ten fingers on two hands, and they are convenient for simple mental counting. In electronic computing, however, it is much easier to use a binary number system, with only two stable states of its elements and simple addition and multiplication tables. In modern digital computers the binary system is used not only to record the numbers on which computational operations are performed, but also to record the commands for these calculations and even entire programs of operations. In this case, all calculations and operations in a computer are reduced to the simplest arithmetic operations on binary numbers.
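
As a small illustration (a Python sketch added here for clarity, with arbitrarily chosen numbers), the same value can be written in decimal or binary form, and arithmetic on binary numbers follows the simple two-symbol rules described above:

n = 13
print(bin(n))                # '0b1101' - 13 written in binary
print(int('1101', 2))        # 13 - the binary string converted back to decimal
print(bin(0b1101 + 0b11))    # '0b10000' - 13 + 3 = 16, computed on binary literals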



One of the first to show interest in the binary system was the great German mathematician Gottfried Leibniz. In 1666, at the age of twenty, in his work "On the Art of Combinatorics," he developed a general method that allows any thought to be reduced to precise formal statements. This opened up the possibility of transferring logic (Leibniz called it the laws of thought) from the realm of words to the realm of mathematics, where the relations between objects and statements are defined precisely and definitely. Thus, Leibniz was a founder of formal logic. He also studied the binary number system, and he endowed it with a certain mystical meaning: he associated the number 1 with God and 0 with emptiness. From these two digits, in his opinion, everything arose, and with their help any mathematical concept can be expressed. Leibniz was the first to suggest that the binary system could become a universal logical language.

Leibniz dreamed of building a "universal science." He wanted to single out the simplest concepts, with the help of which, according to certain rules, concepts of any complexity could be formulated. He dreamed of creating a universal language in which any thought could be written down in the form of a mathematical formula, and he thought about a machine that could derive theorems from axioms and about turning logical statements into arithmetic ones. In 1673, he created a new type of adding machine - a mechanical calculator that not only adds and subtracts numbers, but also multiplies, divides, raises to powers, and extracts square and cube roots. It used the binary number system.

The universal logical language was created in 1847 by the English mathematician George Boole. He developed the propositional calculus, which was later named Boolean algebra in his honor. It is formal logic translated into the strict language of mathematics. The formulas of Boolean algebra are similar in appearance to the formulas of the algebra familiar from school, and this similarity is not only external but also internal. Boolean algebra is a fully fledged algebra, subject to the set of laws and rules adopted at its creation. It is a notation system applicable to any objects - numbers, letters and sentences. Using this system, you can encode any statements that need to be proven true or false, and then manipulate them like ordinary numbers in mathematics.

George Boole (1815–1864) - English mathematician and logician, one of the founders of mathematical logic. Developed the algebra of logic (in the works “Mathematical Analysis of Logic” (1847) and “Study of the Laws of Thought” (1854)).

The American mathematician Charles Peirce played a huge role in the spread of Boolean algebra and its development.

Charles Peirce (1839–1914) was an American philosopher, logician, mathematician and natural scientist, known for his work on mathematical logic.

The subject of consideration in the algebra of logic is the so-called statements, i.e. any statements that can be said to be either true or false: “Omsk is a city in Russia,” “15 is an even number.” The first statement is true, the second is false.

Complex statements obtained from simple ones using the conjunctions AND, OR, IF...THEN, negations NOT, can also be true or false. Their truth depends only on the truth or falsity of the simple statements that form them, for example: “If it’s not raining outside, then you can go for a walk.” The main task of Boolean algebra is to study this dependence. Logical operations are considered that allow you to construct complex statements from simple ones: negation (NOT), conjunction (AND), disjunction (OR) and others.
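
The behaviour of these operations can be tabulated directly. The short Python sketch below (an illustration added to this text, not part of the original lesson) prints the truth tables for NOT, AND and OR on the two truth values:

for a in (False, True):
    print(f"NOT {a} = {not a}")

for a in (False, True):
    for b in (False, True):
        print(f"{a} AND {b} = {a and b};  {a} OR {b} = {a or b}")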

In 1804, J. Jacquard invented a weaving machine for producing fabrics with large patterns. The pattern was programmed using a whole deck of punched cards - rectangular cards made of cardboard. On them, information about the pattern was recorded by punching holes (perforations) arranged in a certain order. When the machine was operating, these punched cards were sensed by special pins. It was in this mechanical way that information was read from them to weave a programmed fabric pattern. Jacquard's machine was the prototype of the program-controlled machines created in the twentieth century.

In 1820, Thomas de Colmar developed the first commercial adding machine capable of multiplying and dividing. Since the 19th century, adding machines have become widespread when performing complex calculations.

In 1830, Charles Babbage tried to create a universal analytical engine that was supposed to perform calculations without human intervention. To do this, programs were introduced into it that were pre-recorded on punched cards made of thick paper using holes made on them in a certain order (the word “perforation” means “punching holes in paper or cardboard”). The programming principles for Babbage's Analytical Engine were developed in 1843 by Ada Lovelace, the daughter of the poet Byron.


Fig. 8. Charles Babbage


Fig. 9. Ada Lovelace

The Analytical Engine had to be able to remember data and intermediate results of calculations, that is, to have memory. The machine was to contain three main parts: a device for storing numbers set using gears (memory), a device for operating on numbers (the arithmetic unit), and a device for operating on numbers using punched cards (the program control device). The work on creating the Analytical Engine was not completed, but the ideas contained in it helped to build the first computers in the 20th century (translated from English, the word "computer" means "calculator").

In 1880 V.T. Odner in Russia created a mechanical adding machine with gear wheels, and in 1890 he launched its mass production. Subsequently, it was produced under the name “Felix” until the 50s of the 20th century (Fig. 11).


Fig. 10. V.T. Odner


Fig. 11. Mechanical adding machine "Felix"

In 1888, Herman Hollerith (Fig. 12) created the first electromechanical calculating machine - a tabulator, in which information printed on punched cards (Fig. 13) was deciphered by electric current. This machine made it possible to reduce the counting time for the US Census several times. In 1890, Hollerith's invention was used for the first time in the 11th American Census. The work that 500 employees had previously taken as long as 7 years to complete was completed by Hollerith and 43 assistants on 43 tabulators in one month.

In 1896, Hollerith founded a company called the Tabulating Machine Co. In 1911, this company was merged with two other companies specializing in the automation of statistical data processing, and in 1924 it received its modern name, IBM (International Business Machines). It grew into an electronics corporation, one of the world's largest manufacturers of all types of computers and software and a provider of global information networks. The founder of IBM was Thomas Watson Sr., who took charge of the company in 1914, essentially created the IBM Corporation and led it for more than 40 years. Since the mid-1950s, IBM has held a leading position in the global computer market. In 1981, the company created its first personal computer, which became the industry standard. By the mid-1980s, IBM controlled about 60% of the world's production of electronic computers.


Fig. 12. Thomas Watson Sr.

Fig. 13. Herman Hollerith

At the end of the 19th century, punched tape was invented - a paper or celluloid film on which information was applied with a punch in the form of a set of holes.

Wide punched paper tape was used in the Monotype, a typesetting machine invented by T. Lanston in 1892. The Monotype consisted of two independent devices: a keyboard and a casting apparatus. The keyboard was used to compose a typesetting program on punched tape, and the casting machine produced the type in accordance with the program previously composed on the keyboard, from a special typographic alloy - type metal.

Fig. 14. Punched card

Fig. 15. Punched tapes

The typesetter sat at the keyboard, looked at the text on the copy stand in front of him and pressed the appropriate keys. When one of the letter keys was struck, the needles of the punching mechanism used compressed air to punch a code combination of holes in the paper tape. This combination corresponded to a given letter, sign or the space between them. After each keystroke the paper tape moved one step - 3 mm. Each horizontal row of holes on the punched tape corresponds to one letter, sign or space. The finished (punched) spool of tape was transferred to the casting machine, in which, also using compressed air, the information encoded on it was read and a set of type was automatically cast. Thus, the Monotype was one of the first program-controlled machines in the history of technology. It belonged to the hot-metal typesetting machines and over time gave way first to phototypesetting and then to electronic typesetting.

Somewhat earlier than the Monotype, in 1881, the pianola (or phonola) was invented - an instrument for automatically playing the piano. It also operated using compressed air. In a pianola, each key of an ordinary piano or grand piano corresponds to a hammer that strikes it. All the hammers together make up a counter-keyboard, which is attached to the piano keyboard. A wide paper punched tape wound on a roller is inserted into the pianola. The holes in the tape are made in advance while a pianist is playing - they are a kind of "sheet music." When the pianola operates, the tape is rewound from one roller to another, and the information recorded on it is read by a pneumatic mechanism. It activates the hammers that correspond to the holes in the tape, causing them to strike the keys and reproduce the pianist's performance. Thus, the pianola was also a program-controlled machine. Thanks to the surviving punched piano tapes, it was possible to restore and re-record, using modern methods, the performances of such remarkable pianists of the past as the composer A.N. Scriabin. The pianola was used by the famous composers and pianists Rubinstein, Paderewski and Busoni.

Later, information was read from punched tape and punched cards using electrical contacts - metal brushes, which, when contacted with a hole, closed an electrical circuit. Then the brushes were replaced with photocells, and information reading became optical, contactless. This is how information was recorded and read in the first digital computers.

Logical operations are closely related to everyday life.

Using one OR element with two inputs, two AND elements with two inputs and one NOT element, you can build the logic circuit of a binary half-adder, capable of adding two one-digit binary numbers (i.e., implementing the rules of binary arithmetic):

0 + 0 = 0; 0 + 1 = 1; 1 + 0 = 1; 1 + 1 = 0 with a carry of 1 into the next bit. In doing so, the circuit produces a carry bit.

However, such a circuit does not contain a third input to which a carry signal from the previous bit of the sum of binary numbers can be applied. Therefore, the half-adder is used only in the least significant bit of the logic circuit for summing multi-bit binary numbers, where there cannot be a carry signal from the previous binary bit. A full binary adder adds two multi-bit binary numbers, taking into account the carry signals from the addition in the previous binary bits.

By connecting binary adders in a cascade, you can obtain a logical adder circuit for binary numbers with any number of digits.

With some modifications, these logic circuits are also used to subtract, multiply and divide binary numbers. With their help, the arithmetic devices of modern computers were built.
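
The following Python sketch (an illustration added to this text, built only from the logical operations just described; the example numbers are arbitrary) models a half-adder, a full adder, and a cascade of full adders that sums two multi-bit binary numbers:

def half_adder(a, b):
    """Add two one-bit values; return (sum bit, carry bit)."""
    s = (a or b) and not (a and b)   # XOR expressed through OR, AND, NOT
    carry = a and b
    return s, carry

def full_adder(a, b, carry_in):
    """Add two bits plus the carry from the previous, less significant bit."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 or c2

def add_binary(x_bits, y_bits):
    """Ripple-carry addition of two equal-length bit lists, least significant bit first."""
    result, carry = [], False
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)             # the final carry becomes the most significant bit
    return result

# 3 (binary 011, listed least significant bit first) + 5 (binary 101) = 8 (binary 1000)
print(add_binary([True, True, False], [True, False, True]))
# [False, False, False, True]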

In 1937, George Stibitz (Fig. 16) created a binary adder from ordinary electromechanical relays - a device capable of performing the operation of adding numbers in binary code. And today, the binary adder is still one of the main components of any computer, the basis of its arithmetic device.


Fig. 16. George Stibitz

In 1937–1942 John Atanasoff (Fig. 17) created a model of the first computer that ran on vacuum tubes. It used the binary number system. Punched cards were used to enter data and output calculation results. Work on this machine was almost completed in 1942, but due to the war, further funding was stopped.


Fig. 17. John Atanasoff

In 1937, Konrad Zuse (Fig. 12) created his first computer Z1 based on electromechanical relays. The initial data was entered into it using a keyboard, and the result of the calculations was displayed on a panel with many light bulbs. In 1938, K. Zuse created an improved model Z2. Programs were entered into it using punched tape. It was made by punching holes in used 35mm photographic film. In 1941, K. Zuse built a functioning computer Z3, and later Z4, based on the binary number system. They were used for calculations in the creation of aircraft and missiles. In 1942, Konrad Zuse and Helmut Schreier conceived the idea of ​​converting the Z3 from electromechanical relays to vacuum tubes. Such a machine was supposed to work 1000 times faster, but it was not possible to create it - the war got in the way.


Fig. 18. Konrad Zuse

In 1943–1944, at one of IBM's plants, in collaboration with scientists at Harvard University led by Howard Aiken, the Mark-1 computer was created. It weighed about 35 tons. "Mark-1" was based on electromechanical relays and operated on numbers encoded on punched tape.

When creating it, the ideas laid down by Charles Babbage in his Analytical Engine were used. Unlike Stibitz and Zuse, Aiken did not realize the advantages of the binary number system and used the decimal system in his machine. The machine could manipulate numbers up to 23 digits long; multiplying two such numbers took 4 seconds. In 1947, the Mark-2 machine was created, which already used the binary number system. In this machine, addition and subtraction operations took an average of 0.125 seconds, and multiplication 0.25 seconds.

The abstract science of the algebra of logic is close to practical life. It allows a variety of control problems to be solved.

The input and output signals of electromagnetic relays, like statements in Boolean algebra, also take only two values. When the winding is de-energized, the input signal is 0, and when current is flowing through the winding, the input signal is 1. When the relay contact is open, the output signal is 0, and when the contact is closed, it is 1.

It was precisely this similarity between statements in Boolean algebra and the behavior of electromagnetic relays that was noticed by the famous physicist Paul Ehrenfest. As early as 1910, he proposed using Boolean algebra to describe the operation of relay circuits in telephone systems. According to another version, the idea of using Boolean algebra to describe electrical switching circuits belongs to Peirce. In 1936, the founder of modern information theory, Claude Shannon, combined the binary number system, mathematical logic and electrical circuits in his doctoral dissertation.

It is convenient to describe the connections between electromagnetic relays in circuits using the logical operations NOT, AND, OR, REPEAT (YES), etc. For example, a series connection of relay contacts implements an AND operation, while a parallel connection of these contacts implements a logical OR operation. The operations AND, OR and NOT are performed similarly in electronic circuits, where the role of relays that close and open electrical circuits is played by contactless semiconductor elements - transistors, created in 1947–1948 by the American scientists J. Bardeen, W. Brattain and W. Shockley.
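
A minimal sketch of this correspondence (illustrative Python added to this text, not from the original): a closed contact is modelled as True, an open contact as False, and the circuit topology determines the logical operation:

def series(contact1, contact2):
    # Current flows through contacts in series only if BOTH are closed -> logical AND
    return contact1 and contact2

def parallel(contact1, contact2):
    # Current flows through contacts in parallel if AT LEAST ONE is closed -> logical OR
    return contact1 or contact2

print(series(True, False))    # False: one open contact breaks the series circuit
print(parallel(True, False))  # True: the closed branch still carries current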

Electromechanical relays were too slow, so already in 1943 the Americans began developing a computer based on vacuum tubes. In 1946, Presper Eckert and John Mauchly (Fig. 19) built the first electronic digital computer, ENIAC. Its weight was 30 tons, and it occupied an area of 170 square meters. Instead of thousands of electromechanical relays, ENIAC contained 18,000 vacuum tubes. The machine counted in the binary system and performed 5,000 addition operations or 300 multiplication operations per second. Not only the arithmetic unit but also the storage unit of this machine was built on vacuum tubes. Numerical data were entered using punched cards, while programs were entered by means of plugs and patch panels, that is, thousands of contacts had to be connected for each new program. It therefore took up to several days to prepare to solve a new problem, even though the problem itself was solved in a few minutes. This was one of the main disadvantages of such a machine.


Fig. 19. Presper Eckert and John Mauchly

The work of three outstanding scientists - Claude Shannon, Alan Turing and John von Neumann - became the basis for creating the structure of modern computers.

Claude Shannon (1916–2001) – American engineer and mathematician, the founder of mathematical information theory.

In 1948, he published "A Mathematical Theory of Communication," setting out his theory of the transmission and processing of information, covering all types of messages, including those transmitted along nerve fibers in living organisms. Shannon introduced the concept of the amount of information as a measure of the uncertainty of the state of a system, removed when information is received. He called this measure of uncertainty entropy, by analogy with the similar concept in statistical mechanics. When the observer receives information, the entropy, that is, the degree of his ignorance about the state of the system, decreases.

Alan Turing (1912–1954) – English mathematician. His main works are on mathematical logic and computational mathematics. In 1936–1937 he wrote the seminal paper "On Computable Numbers," in which he introduced the concept of an abstract device later called the "Turing machine." In this device he anticipated the basic properties of the modern computer. Turing called his device a "universal machine," since it was supposed to solve any admissible (theoretically solvable) mathematical or logical problem. Data were to be entered into it from a paper tape divided into cells. Each cell either contained a symbol or did not. The Turing machine could process the symbols read from the tape and change them, that is, erase them and write new ones according to instructions stored in its internal memory.

Neumann John von (1903–1957) - American mathematician and physicist, participant in the development of atomic and hydrogen weapons. Born in Budapest, he lived in the USA since 1930. In his report, published in 1945 and becoming the first work on digital electronic computers, he identified and described the “architecture” of the modern computer.

The next machine, EDVAC, had a more capacious internal memory capable of storing not only the original data but also the calculation program. This idea - to store programs in the memory of the machine - was put forward by the mathematician John von Neumann together with Mauchly and Eckert. He was the first to describe the structure of a universal computer (the so-called "von Neumann architecture" of the modern computer). For universality and efficient operation, according to von Neumann, a computer must contain a central arithmetic-logical unit, a central device for controlling all operations, a storage device (memory) and an information input/output device, and programs should be stored in the computer's memory.

Von Neumann believed that a computer should operate on the basis of the binary number system, be electronic, and perform all operations sequentially, one after another. These principles are the basis of all modern computers.

A machine using vacuum tubes worked much faster than one using electromechanical relays, but the vacuum tubes themselves were unreliable and often failed. To replace them, in 1947 John Bardeen, Walter Brattain and William Shockley proposed using the switching semiconductor elements they had invented - transistors.

John Bardeen (1908–1991) – American physicist, one of the creators of the first transistor (1956 Nobel Prize in Physics, together with W. Brattain and W. Shockley, for the discovery of the transistor effect) and one of the authors of the microscopic theory of superconductivity (second Nobel Prize, 1972, jointly with L. Cooper and J. Schrieffer).

Walter Brattain (1902–1987) - American physicist, one of the creators of the first transistor, winner of the 1956 Nobel Prize in Physics.

William Shockley (1910–1989) - American physicist, one of the creators of the first transistor, winner of the 1956 Nobel Prize in Physics.

In modern computers, microscopic transistors in an integrated circuit chip are grouped into systems of “gates” that perform logical operations on binary numbers. For example, with their help, the binary adders described above were built, which allow adding multi-digit binary numbers, subtracting, multiplying, dividing and comparing numbers with each other. Logic gates, acting according to certain rules, control the movement of data and the execution of instructions in the computer.

The improvement of the first types of computers led in 1951 to the creation of the UNIVAC computer, intended for commercial use. It became the first commercially produced computer.

The serial tube computer IBM 701, which appeared in 1952, performed up to 2200 multiplication operations per second.


IBM 701 computer

The initiative to create this system belonged to Thomas Watson Jr. In 1937, he began working for the company as a traveling salesman. He only stopped working for IBM during the war, when he was a pilot in the United States Air Force. Returning to the company in 1946, he became its vice president and headed IBM from 1956 to 1971. While remaining a member of the IBM board of directors, Thomas Watson served as the United States Ambassador to the USSR from 1979 to 1981.


Thomas Watson (Jr.)

In 1964, IBM announced the creation of six models of the IBM 360 family (System/360), which became the first computers of the third generation. The models had a single instruction set and differed from each other in the amount of RAM and in performance. When creating the models of the family, a number of new principles were used, which made the machines universal and made it possible to use them with equal efficiency both for solving problems in various fields of science and technology and for processing data in management and business. IBM System/360 (S/360) is a family of mainframe-class universal computers. Further developments of the IBM/360 were the 370, 390, z9 and zSeries systems. In the USSR, the IBM/360 was cloned under the name ES EVM (Unified System of Electronic Computers). The clones were software-compatible with their American prototypes, which made it possible to use Western software despite the underdevelopment of the domestic "programming industry."


IBM/360 computer


T. Watson (Jr.) and V. Learson at the IBM/360 computer

The first Soviet computer, the Small Electronic Calculating Machine (MESM), built on vacuum tubes, was created in 1949–1951 under the leadership of Academician S.A. Lebedev. Independently of foreign scientists, S.A. Lebedev developed the principles of constructing a computer with a program stored in memory; MESM was the first such machine. In 1952–1954, under his leadership, the High-Speed Electronic Calculating Machine (BESM) was developed, performing 8,000 operations per second.


Lebedev Sergey Alekseevich

The creation of electronic computers was also led by the prominent Soviet scientists and engineers I.S. Brook, V.M. Glushkov, Yu.A. Bazilevsky, B.I. Rameev, L.I. Gutenmacher and N.P. Brusentsov.

The first generation of Soviet computers included the tube computers “BESM-2”, “Strela”, “M-2”, “M-3”, “Minsk”, “Ural-1”, “Ural-2” and “M-20”.

The second generation of Soviet computers included the semiconductor small computers “Nairi” and “Mir”; the medium-sized computers for scientific calculations and information processing, with speeds of 5–30 thousand operations per second, “Minsk-2”, “Minsk-22”, “Minsk-32”, “Ural-14”, “Razdan-2”, “Razdan-3”, “BESM-4” and “M-220”; the control computers “Dnepr” and “VNIIEM-3”; as well as the ultra-high-speed BESM-6, with a performance of 1 million operations per second.

The founders of Soviet microelectronics were scientists who emigrated from the USA to the USSR: F.G. Staros (Alfred Sarant) and I.V. Berg (Joel Barr). They became the initiators, organizers and managers of the microelectronics center in Zelenograd near Moscow.


F.G. Staros

Third-generation computers based on integrated circuits appeared in the USSR in the second half of the 1960s. The Unified System of computers (ES EVM) and the Small System of computers (SM EVM) were developed and put into mass production. As mentioned above, the ES EVM was a clone of the American IBM/360 system.

Sergei Alekseevich Lebedev was an ardent opponent of the copying of the American IBM/360 system, which in its Soviet version was called the ES EVM and which began in the 1970s. The role of the ES computers in the development of domestic computing is mixed.

At the initial stage, the emergence of ES computers led to the unification of computer systems, made it possible to establish initial programming standards and organize large-scale projects related to the implementation of programs.

The price of this was the widespread curtailment of original domestic developments and complete dependence on the ideas and concepts of IBM, which were far from the best at the time. The abrupt transition from easy-to-use Soviet machines to the much more complex hardware and software of the IBM/360 meant that many programmers had to overcome difficulties associated with the shortcomings and errors of IBM's developers. The initial models of the ES computers were often inferior in performance to the domestic computers of that time.

At a later stage, especially in the 1980s, the widespread introduction of ES computers turned into a serious obstacle to the development of software, databases and interactive systems. After expensive, pre-planned purchases, enterprises were forced to operate obsolete computer systems. In parallel, systems based on small machines and on personal computers were developed and became more and more popular.

At a later stage, with the beginning of perestroika, from 1988–89 the country was flooded with foreign personal computers, and no measures could stop the crisis of the ES computer series. The domestic industry was unable to create analogues or substitutes for the ES computers on a new element base. By that time, the economy of the USSR could no longer afford the gigantic financial resources required to create microelectronic production. As a result, there was a complete transition to imported computers. Programs for the development of domestic computers were finally curtailed, and problems arose of transferring technologies to modern computers, modernizing them, and employing and retraining hundreds of thousands of specialists.

S.A. Lebedev's forecast proved correct. Both in the USA and throughout the world, the path he proposed was subsequently followed: on the one hand, supercomputers are created, and on the other, a whole series of less powerful computers aimed at various applications - personal, specialized, and so on.

The fourth generation of Soviet computers was implemented on the basis of large-scale (LSI) and ultra-large-scale (VLSI) integrated circuits.

An example of large fourth-generation computer systems was the Elbrus-2 multiprocessor complex with a speed of up to 100 million operations per second.

In the 1950s, the second generation of computers, based on transistors, was created. As a result, the speed of the machines increased tenfold, and their size and weight were significantly reduced. Storage devices based on magnetic ferrite cores began to be used, capable of storing information indefinitely even when the computer is switched off; they were designed by Jay Forrester in 1951–1953. Large amounts of information were stored on external media, such as magnetic tape or a magnetic drum.

The first hard disk drive in the history of computing (the "winchester") was developed in 1956 by a group of IBM engineers led by Reynold B. Johnson. The device was called the 305 RAMAC (Random Access Method of Accounting and Control). The drive consisted of 50 aluminum disks with a diameter of 24 inches (about 60 cm) and a thickness of 2.5 cm each. A magnetic layer was applied to the surface of each aluminum platter, onto which recording was carried out. This entire stack of disks on a common axis rotated in operating mode at a constant speed of 1,200 rpm, and the drive itself occupied an area of 3 x 3.5 m. Its total capacity was 5 MB. One of the most important principles used in the design of the RAMAC 305 was that the heads did not touch the surface of the disks but hovered at a small fixed distance. For this purpose, special air nozzles directed a flow of air to the disk through small holes in the head holders, thereby creating a gap between the head and the surface of the rotating platter.

The Winchester (hard drive) provided computer users with the ability to store very large amounts of information and at the same time quickly retrieve the necessary data. After the creation of the hard drive in 1958, magnetic tape media was abandoned.

In 1959, J. Kilby, J. Hoerni, K. Lehovec and R. Noyce (Fig. 14) invented the integrated circuit (chip), in which all electronic components, along with the conductors, were placed inside a single silicon wafer. The use of chips in computers made it possible to shorten the paths of current flow during switching. The speed of calculations increased tenfold, and the dimensions of the machines decreased significantly. The appearance of the chip made it possible to create the third generation of computers. In 1964, IBM began producing IBM-360 computers based on integrated circuits.


Fig. 14. J. Kilby, J. Hoerni, K. Lehovec and R. Noyce

In 1965, Douglas Engelbart (Fig. 19) created the first "mouse" - a hand-held computer pointing device. It was first widely used in the Apple Macintosh personal computer, released later, in 1984.


Fig. 19. Douglas Engelbart

In 1971, IBM began producing the floppy disk, invented by Yoshiro Nakamatsu - a removable flexible magnetic disk for permanent storage of information. Initially the floppy disk had a diameter of 8 inches and a capacity of 80 KB; later it shrank to 5.25 inches. The modern 1.44 MB floppy disk, first released by Sony in 1982, is housed in a hard plastic case and has a diameter of 3.5 inches.

In 1969, the creation of a defense computer network began in the United States - the progenitor of the modern worldwide Internet.

In the 1970s, dot matrix printers were developed to print information output from computers.

In 1971, Intel employee Edward Hoff (Fig. 20) created the first microprocessor, the 4004, by placing several integrated circuits on a single silicon chip. Although it was originally intended for use in calculators, it was essentially a complete microcomputer. This revolutionary invention radically changed the perception of computers as bulky, ponderous monsters. The microprocessor made it possible to create fourth-generation computers that fit on the user's desk.


Fig. 20. Edward Hoff

In the mid-1970s, attempts began to create a personal computer (PC), a computing machine intended for the private user.

In 1974, Edward Roberts (Fig. 21) created the first personal computer, Altair, based on the Intel 8080 microprocessor (Fig. 22). But without software it was ineffective: after all, a private user does not have his own programmer “at hand” at home.


Fig. 21. Edward Roberts


Fig. 22. The first personal computer, Altair

In 1975, Bill Gates, then a Harvard University student, and his friend Paul Allen learned about the creation of the Altair PC (Fig. 23). They were the first to realize the urgent need to write software for personal computers, and within a month they had created a BASIC interpreter for the Altair. That same year they founded Microsoft, which quickly became a leader in personal computer software and one of the richest companies in the world.


Fig. 23. Bill Gates and Paul Allen


Fig. 24. Bill Gates

In 1973, IBM developed a hard magnetic disk (hard drive) for a computer. This invention made it possible to create large-capacity long-term memory, which is retained when the computer is turned off.

The first Altair-8800 microcomputers were just a collection of parts that still needed to be assembled. In addition, they were extremely inconvenient to use: they had neither a monitor, nor a keyboard, nor a mouse. Information was entered into them using switches on the front panel, and the results were displayed using LED indicators. Later they began to display results using a teletype - a telegraph machine with a keyboard.

In 1976, 26-year-old Hewlett-Packard engineer Steve Wozniak created a fundamentally new microcomputer. He was the first to use a typewriter-like keyboard for entering data and an ordinary TV set for displaying information. Symbols were displayed on its screen in 24 lines of 40 characters each. The computer had 8 KB of memory, half of which was occupied by the built-in BASIC language and half of which the user could use for his own programs. This computer was significantly superior to the Altair-8800, which had only 256 bytes of memory. Wozniak provided his new computer with a connector (a so-called "slot") for attaching additional devices. Steve Wozniak's friend Steve Jobs was the first to understand and appreciate the prospects of this computer (Fig. 25), and he proposed setting up a company for its serial production. On April 1, 1976, they founded Apple, officially registering the company in January 1977. They called the new computer the Apple-I (Fig. 26). Within 10 months, they managed to assemble and sell about 200 copies of the Apple-I.


Fig. 25. Steve Wozniak and Steve Jobs


Fig. 26. Apple-I personal computer

At this time, Wozniak was already working on an improved version, called the Apple-II (Fig. 27). The computer was housed in a plastic case and received a graphics mode, sound, color, expanded memory and 8 expansion connectors (slots) instead of one. It used a cassette recorder to save programs. The basis of the first Apple II model was, as in the Apple I, the MOS Technology 6502 microprocessor with a clock frequency of 1 megahertz. BASIC was recorded in permanent memory. The 4 KB of RAM could be expanded to 48 KB. Information was displayed on a color or black-and-white TV operating in the NTSC standard used in the USA. In text mode, 24 lines of 40 characters each were displayed, and in graphics mode the resolution was 280 by 192 pixels (six colors). The main advantages of the Apple II were the ability to expand its RAM to 48 KB and the 8 connectors for attaching additional devices. Thanks to its color graphics, it could be used for a wide variety of games (Fig. 27).


Fig. 27. Apple II personal computer

Thanks to its capabilities, the Apple II has gained popularity among people of various professions. Its users were not required to have knowledge of electronics or programming languages.

The Apple II became the first truly personal computer for scientists, engineers, lawyers, businessmen, housewives and schoolchildren.

In July 1978, the Apple II was supplemented with the Disk II drive, which significantly expanded its capabilities. The disk operating system Apple-DOS was created for it. And at the end of 1978, the computer was improved again and released under the name Apple II Plus. Now it could be used in the business sphere to store information, conduct business, and help in decision making. The creation of such application programs as text editors, organizers, and spreadsheets began.

In 1979, Dan Bricklin and Bob Frankston created VisiCalc, the world's first spreadsheet. This tool was best suited for accounting calculations. Its first version was written for the Apple II, which was often purchased only to work with VisiCalc.

Thus, in a few years, the microcomputer, largely thanks to Apple and its founders Steven Jobs and Steve Wozniak, turned into a personal computer for people of various professions.

In 1981, the IBM PC personal computer appeared, which soon became the standard in the computer industry and displaced almost all competing personal computer models from the market. The only exception was Apple. In 1984, the Apple Macintosh was created, the first computer with a graphical interface controlled by a mouse. Thanks to its advantages, Apple managed to stay in the personal computer market. It has conquered the market in education and publishing, where the outstanding graphics capabilities of Macintoshes are used for layout and image processing.

Today, Apple controls 8–10% of the global personal computer market, and the remaining 90% is IBM-compatible personal computers. Most Macintosh computers are owned by users in the United States.

In 1979, the optical compact disc (CD) appeared, developed by Philips and intended only for listening to music recordings.

In 1979, Intel developed the 8088 microprocessor for personal computers.

The IBM PC personal computer, created in 1981 by a group of IBM engineers led by William C. Lowe, became widespread. The IBM PC had an Intel 8088 processor with a clock frequency of 4.77 MHz, 16 KB of memory expandable to 256 KB, and the DOS 1.0 operating system (Fig. 28). The DOS 1.0 operating system was created by Microsoft. In just one month, IBM managed to sell 241,683 IBM PCs. Under the agreement with Microsoft's executives, IBM paid the creators of the program a certain amount for each copy of the operating system installed on an IBM PC. Thanks to the popularity of the IBM PC, Microsoft's executives Bill Gates and Paul Allen soon became billionaires, and Microsoft took a leading position in the software market.


Fig. 28. The IBM PC personal computer

The IBM PC applied the principle of open architecture, which made it possible to make improvements and additions to existing PC designs. This principle means the use of ready-made blocks and devices in the design when assembling a computer, as well as the standardization of methods for connecting computer devices.

The principle of open architecture contributed to the widespread adoption of IBM PC-compatible clone microcomputers. A large number of companies around the world began assembling them from ready-made blocks and devices. Users, in turn, were able to independently upgrade their microcomputers and equip them with additional devices from hundreds of manufacturers.

In the late 1990s, IBM PC-compatible computers accounted for 90% of the personal computer market.

Over the last decades of the 20th century, computers have greatly increased their speed and the volume of information processed and stored.

In 1965, Gordon Moore, one of the founders of Intel Corporation, a leader in the field of computer integrated circuits ("chips"), suggested that the number of transistors in them would double every year. Over the next 10 years this prediction came true; he then suggested that the number would double every two years. Indeed, the number of transistors in microprocessors has doubled roughly every 18 months, and computer scientists now call this trend Moore's Law.
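
As a rough numerical illustration (a Python sketch added to this text; the starting count of about 2,300 transistors, roughly that of an early microprocessor, is an assumed figure), doubling every 18 months gives roughly a hundredfold growth over one decade:

transistors = 2_300                    # assumed starting point, an early-1970s microprocessor
for year in range(0, 11):
    doublings = (year * 12) / 18       # one doubling every 18 months
    print(year, round(transistors * 2 ** doublings))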


Fig. 29. Gordon Moore

A similar pattern is observed in the development and production of RAM and storage devices. Incidentally, there is no doubt that by the time this book is published, many of the figures quoted here for capacity and speed will already be out of date.

The development of software, without which it is generally impossible to use a personal computer, and, above all, operating systems that ensure interaction between the user and the PC, has not lagged behind.

In 1981, Microsoft developed the MS-DOS operating system for IBM's personal computers.

In 1983, the improved personal computer IBM PC/XT from IBM was created.

In the 1980s, black-and-white and color inkjet and laser printers were created to print information output from computers. They are significantly superior to dot matrix printers in terms of print quality and speed.

In 1983–1993, the global computer network Internet and e-mail were developed and came to be used by millions of users around the world.

In 1992, Microsoft released the Windows 3.1 operating system for IBM PC-compatible computers. This "windowed" operating system allows the user to work with several documents at once and provides a so-called graphical interface: a system of interaction with the PC in which the user deals with "icons" - pictures that can be manipulated with a computer mouse. Such a graphical interface and window system was first created at the Xerox research center in 1975 and first applied on Apple PCs.

In 1995, Microsoft released the Windows 95 operating system for IBM PC-compatible computers, more advanced than Windows 3.1; in 1998 came its modification Windows 98, in 2000 Windows 2000, and in 2001 Windows XP. A number of application programs were developed for them: the Word text editor, the Excel spreadsheet, the Internet Explorer program for using the Internet and e-mail, the Paint graphics editor, standard applications (calculator, clock, dialer), the Microsoft Schedule diary, a universal media player, a sound recorder and a CD player.

In recent years it has become possible to combine text and graphics with sound and moving images on the personal computer. This technology is called "multimedia." Optical CD-ROMs (Compact Disc Read-Only Memory) are used as storage media in such multimedia computers. Outwardly, they do not differ from the audio CDs used in players and music centers.

The capacity of one CD-ROM reaches 650 MB; in terms of capacity, it occupies an intermediate position between floppy disks and a hard drive. A CD drive is used to read CDs. Information on a CD is written only once in an industrial environment, and on a PC it can only be read. A wide variety of games, encyclopedias, art albums, maps, atlases, dictionaries and reference books are published on CD-ROM. All of them are equipped with convenient search engines that allow you to quickly find the material you need. The memory capacity of two CD-ROMs is enough to accommodate an encyclopedia larger in volume than the Great Soviet Encyclopedia.

In the late 1990s, write-once CD-R and rewritable CD-RW optical compact discs and drives were created, allowing the user to make any audio and video recordings to their liking.

In 1990–2000, in addition to desktop personal computers, "laptop" PCs in the form of a portable case were released, as well as even smaller pocket "palmtops" (handhelds) which, as their name suggests, fit in a pocket or on the palm of the hand. Laptops have a liquid-crystal display located in the hinged lid; in palmtops it is on the front panel of the case.

In 1998–2000, miniature solid-state "flash memory" (with no moving parts) was created. Memory Stick memory has the dimensions and weight of a stick of chewing gum, and Panasonic's SD memory has the size and weight of a postage stamp, yet their capacity, retained indefinitely without power, is 64–128 MB or even 2–8 GB and more.

In addition to portable personal computers, supercomputers are being created to solve complex problems in science and technology - weather and earthquake forecasts, rocket and aircraft calculations, nuclear reactions, deciphering the human genetic code. They use from several to several dozen microprocessors that perform parallel calculations. The first supercomputer was developed by Seymour Cray in 1976.

In 2002, the NEC Earth Simulator supercomputer was built in Japan, performing 35.6 trillion operations per second. Today it is the fastest supercomputer in the world.


Fig. 30. Seymour Cray


Fig. 31. Cray-1 supercomputer


Fig. 32. Cray-2 supercomputer

In 2005, IBM developed the Blue Gene supercomputer with a performance of over 30 trillion operations per second. It contains 12,000 processors and has a thousand times more power than the famous Deep Blue, with which world champion Garry Kasparov played chess in 1997. IBM and researchers from the Swiss Polytechnic Institute in Lausanne have attempted to model the human brain for the first time.

In 2006, personal computers turned 25 years old. Let's see how they have changed over the years. The first of them, equipped with an Intel microprocessor, operated with a clock frequency of only 4.77 MHz and had 16 KB of RAM. Modern PCs equipped with a Pentium 4 microprocessor, created in 2001, have a clock frequency of 3–4 GHz, RAM 512 MB - 1 GB and long-term memory (hard drive) with a capacity of tens and hundreds of GB and even 1 terabyte. Such gigantic progress has not been observed in any branch of technology except digital computing. If the same progress had been made in increasing the speed of aircraft, then they would have been flying at the speed of light long ago.

Millions of computers are used in almost all sectors of the economy, industry, science, technology, pedagogy, and medicine.

The main reasons for this progress are the unusually high rates of microminiaturization of digital electronics devices and programming advances that have made the “communication” of ordinary users with personal computers simple and convenient.

The purpose of the lesson:

  1. introduce the history of the development of computer technology, the devices that were the predecessors of computers, and their inventors;
  2. give an idea of the connection between the development of computers and the development of human society;
  3. introduce the main features of computers of different generations;
  4. develop cognitive interest and the ability to use additional literature.

Lesson type: learning new material

Form: lesson-lecture

Equipment and teaching aids: PC, presentation slides showing the main devices, portraits of inventors and scientists.

Lesson plan:

  1. Organizational moment
  2. Updating new knowledge
  3. Background of computers
  4. Generations of computers
  5. The future of computers
  6. Consolidation of new knowledge
  7. Summing up the lesson
  8. Homework

1. Organizational moment

Stage task: Prepare students for work in the lesson. (Check the class’s readiness for the lesson, availability of necessary school supplies, attendance)

2. Updating new knowledge

Stage task: Preparing students for the active assimilation of new knowledge, ensuring students’ motivation and acceptance of the goals of educational and cognitive activities. Setting lesson goals.

Hello! What technical inventions do you think have particularly changed the way people work?

(Students express their opinions on this issue, the teacher corrects them if necessary)

- You are right: the main technical invention that changed human work is the computer - the electronic computing machine. Today in the lesson we will learn what computing devices preceded the appearance of computers, how computers themselves changed, and how a machine designed simply for counting became a complex technical device. The topic of our lesson is "History of computer technology. Generations of computers." The purpose of our lesson: to get acquainted with the history of the development of computer technology, with the devices that preceded computers and with their inventors, and with the main features of computers of different generations.

During the lesson we will work with a multimedia presentation consisting of four sections: "Prehistory of Computers", "Generations of Computers", "Gallery of Scientists", and "Computer Dictionary". Each section has a "Test yourself" subsection - a short test in which you find out your result immediately.

3. Background of computers

Draw the students' attention to the fact that a computer is an electronic computing machine; the name "computer" comes from the English verb "to compute", so the word can be translated as "calculator". In other words, whether we say "electronic computing machine" or "computer", the core meaning is calculation. Yet, as we know well, modern computers allow us not only to calculate but also to create and process texts, drawings, video, and sound. Let's look into history...

(at the same time, we draw up the table “Prehistory of Computers” in a notebook)

"Prehistory of Computers"

Ancient man mastered counting before writing. Man chose his fingers as his first counting aid, and it was the presence of ten fingers that formed the basis of the decimal number system. Different countries speak and write in different languages, but they count in the same way. In the 5th century BC, the Greeks and Egyptians used the abacus for counting - a device similar to the Russian abacus.

Abacus is a Greek word, translated as "counting board". The idea behind its design is a special counting field on which counting elements are moved according to certain rules. Indeed, initially the abacus was a board covered with dust or sand, on which lines could be drawn and pebbles moved. In Ancient Greece the abacus was used primarily for monetary calculations: large monetary units were counted on the left side and small change on the right. Counting was carried out in a bi-quinary scheme (by fives and units). On such a board it was easy to add and subtract by adding or removing pebbles and moving them from column to column.

In Ancient Rome the abacus changed in appearance. The Romans began to make it from bronze, ivory or colored glass. The board had two rows of slots along which counters could be moved. The abacus turned into a real calculating device that could even represent fractions, and it was much more convenient than the Greek one. The Romans called the counting pebbles calculi; from this comes the Latin verb calculare, "to calculate", and from it the word "calculator".

After the fall of the Roman Empire there was a decline in science and culture, and the abacus was forgotten for a time. It was revived and spread throughout Europe only in the 10th century. The abacus was used by merchants, money changers, and artisans. Even six centuries later it remained an essential tool for performing calculations.

Naturally, over such a long period the abacus changed its appearance, and in the 12th–13th centuries it took the form of the so-called counting on lines and between them. This form of counting persisted in some European countries until the end of the 16th century and only then finally gave way to calculation on paper.

In China the abacus has been known since the 4th century BC. Counting sticks were laid out on a special board; gradually they were replaced by multi-colored counters, and in the 5th century the Chinese abacus, the suan-pan, appeared. It was a frame with rods strung with beads arranged in two rows, seven beads on each rod. From China the suan-pan came to Japan in the 15th century, where the device was called the "soroban".

In Russia the abacus appeared at about the same time as in Japan. But the Russian abacus was invented independently, as the following facts show. First, the Russian abacus is very different from the Chinese one. Second, this invention has its own history.

"Counting with bones" was common in Russia. It was close to European line counting, but scribes used fruit pits instead of tokens. In the 16th century the "board abacus" appeared - the first version of the Russian abacus. Such an abacus is now kept in the Historical Museum in Moscow.

Abacuses were used in Russia for almost 300 years and were replaced only by cheap pocket calculators.

The world's first automatic device that could perform addition was built on the basis of a mechanical clock; it was developed in 1623 by Wilhelm Schickard, a professor of oriental languages at a university in Germany. But an invaluable contribution to the development of devices that help perform calculations was certainly made by Blaise Pascal, Gottfried Leibniz, and Charles Babbage.

In 1642, one of the greatest scientists in human history, the French mathematician, physicist, philosopher and theologian Blaise Pascal, invented and built a mechanical device for adding and subtracting numbers - an adding machine. Question for the class: what material do you think the first adding machine in history was made of? (Wood.)

The main idea of the future machine's design was automatic carry between digits. "Each wheel... of a certain order, moving through ten arithmetic digits, makes the next one move by only one digit" - this formula asserted Blaise Pascal's priority in the invention and secured his right to manufacture and sell the machines.

Pascal's machine added numbers on special disks - wheels. The decimal digits of a five-digit number were set by turning disks marked with digit divisions, and the result was read in windows. Each disk had one elongated tooth that carried the count into the next digit.

The initial numbers were set by turning the dial wheels; rotating the handle set various gears and rollers in motion, and special wheels with numbers showed the result of the addition or subtraction.
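
To make the carry mechanism easier to picture, here is a minimal Python sketch (an illustration only, not a model of the actual gearing): each "wheel" holds a digit from 0 to 9, and when a wheel passes 9 it resets and advances the next wheel by one, just as the elongated tooth does.

```python
# Toy imitation of the Pascaline's automatic carry.
# Each "wheel" stores one decimal digit, least significant wheel first.
def add_on_wheels(wheels, amount, position=0):
    """Add `amount` to the wheel at `position` and propagate carries."""
    wheels = wheels.copy()
    wheels[position] += amount
    while position < len(wheels) and wheels[position] > 9:
        carry, wheels[position] = divmod(wheels[position], 10)
        if position + 1 < len(wheels):
            wheels[position + 1] += carry   # the "elongated tooth" nudges the next wheel
        position += 1
    return wheels

# Wheels showing 00199 (read right to left); adding 3 ripples a carry into the hundreds wheel.
print(add_on_wheels([9, 9, 1, 0, 0], 3))    # -> [2, 0, 2, 0, 0], i.e. 00202
```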

Pascal was one of humanity's greatest geniuses. He was a mathematician, physicist, mechanic, inventor, and writer. Theorems of mathematics and laws of physics bear his name. In physics, there is a unit of measurement for pressure called Pascal. In computer science, one of the most popular programming languages ​​bears his name.

In 1673, the German mathematician and philosopher Gottfried Wilhelm Leibniz invented and built an adding machine that could not only add and subtract numbers but also multiply and divide. The crudeness and primitiveness of the first calculating machines did not prevent Pascal and Leibniz from expressing a number of interesting ideas about the future role of computing technology. Leibniz wrote about machines that would work not only with numbers but also with words, concepts, and formulas, and could perform logical operations. This idea seemed absurd to most of Leibniz's contemporaries. In the 18th century Leibniz's views were ridiculed by the great English satirist J. Swift, the author of the famous novel Gulliver's Travels.

Only in the 20th century did the significance of the ideas of Pascal and Leibniz become clear.

Along with computing devices, mechanisms for AUTOMATIC OPERATION ACCORDING TO A SET PROGRAM also developed (music boxes, striking clocks, Jacquard looms).

At the beginning of the 19th century, the English mathematician Charles Babbage, who was engaged in compiling tables for navigation, developed a DESIGN for an "analytical" computing engine based on the PRINCIPLE OF PROGRAM CONTROL. Babbage's innovative idea was picked up and developed by his student Ada Lovelace, daughter of the poet George Byron, who became the world's first programmer. However, the practical implementation of Babbage's project was impossible given the insufficient development of industry and technology of the time.

The main elements of Babbage's machine inherent in a modern computer:

  1. The Store ("warehouse") - a device where initial numbers and intermediate results are kept. In a modern computer this is the memory.
  2. The Mill ("factory") - an arithmetic device in which operations are performed on numbers taken from the Store. In a modern computer this is the processor.
  3. Blocks for entering the initial data - the input devices.
  4. Printing of the results - the output devices.

The architecture of the machine essentially corresponds to the architecture of a modern computer, and the commands that the Analytical Engine was to execute anticipate the basic instruction set of a modern processor.
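
Purely as an illustration of this correspondence (the instruction name and register labels below are invented for the example, not Babbage's), a few lines of Python can mimic the four parts: a Store for numbers, a Mill that performs an operation, and input and output steps driven by a program.

```python
# A toy "analytical engine": the Store plays the role of memory,
# the Mill the role of the arithmetic unit, and the loop the role of the control unit.
store = {"V0": 7, "V1": 5, "V2": 0}            # input: initial data placed in the Store

program = [("ADD", "V0", "V1", "V2")]          # a one-line program (hypothetical notation)

for op, a, b, dest in program:                 # control: step through the program in order
    if op == "ADD":
        store[dest] = store[a] + store[b]      # the Mill operates on numbers from the Store

print(store["V2"])                             # output: prints 12
```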

An interesting historical fact is that the first program for the Analytical Engine was written by Ada Augusta Lovelace, the daughter of the great English poet George Byron. It was Babbage who inspired her with the idea of creating a computing machine.

The idea of programming mechanical devices using punched cards was first implemented in a weaving loom. The Lyon weaver Joseph Marie Jacquard succeeded in this: in 1801 he created an automatic loom controlled by punched cards.

The warp thread rose or fell with each pass of the shuttle depending on whether or not there was a hole in the card, so the transverse thread could pass on one side or the other of each longitudinal thread according to the program on the punched cards, creating an intricate pattern of interwoven threads. This weave is called "jacquard" and is considered one of the most complex and intricate weaves. This program-controlled loom was the first mass-produced industrial device and is considered one of the most advanced machines ever created by man.
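
As a rough sketch of this idea (the real loom's mechanics are far richer), each card row can be thought of as a row of bits: a hole raises the corresponding warp thread, no hole leaves it down, and repeating the rows produces the pattern.

```python
# Toy model of punched-card weaving: '1' = hole (warp thread raised for this pass),
# '0' = no hole (thread stays down). Each card drives one pass of the shuttle.
cards = [
    "10011001",
    "01100110",
    "10011001",
    "01100110",
]

for card in cards:
    # Render raised threads as '#' and lowered ones as '.', one line per shuttle pass.
    print("".join("#" if hole == "1" else "." for hole in card))
```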

The idea of recording a program on punched cards also occurred to the first programmer, Ada Augusta Lovelace. It was she who proposed the use of punched cards in Babbage's Analytical Engine. In one of her letters she wrote: "The Analytical Engine weaves algebraic patterns just as the loom weaves flowers and leaves."

Herman Hollerith also used punched cards in his machine to record and process information. Punch cards were also used in first generation computers.

Until the 1940s computing technology was represented by adding machines, which went from mechanical to electrical: electromagnetic relays spent several seconds multiplying numbers, yet worked on exactly the same principles as the adding machines of Pascal and Leibniz. In addition, they were very unreliable and often broke down. Interestingly, the cause of one breakdown of an electric calculating machine was a moth stuck in a relay; "bug" is English for such an insect, hence the notion of a "bug" as a malfunction in a computer.

Herman Hollerith was born on February 29, 1860 in the American city of Buffalo into a family of German emigrants. Mathematics and natural sciences came easily to Herman, and at the age of 15 he entered the School of Mines at Columbia University. A professor at the university noticed the capable young man and, after his graduation, invited him to the national census bureau, which he headed.

A population census was carried out every ten years. The population was constantly growing, and by that time it numbered about 50 million people in the United States. It was practically impossible to fill out a card for each person by hand and then tally and process the results; the work dragged on for several years, almost until the next census. A way out had to be found. The idea of mechanizing this work was suggested to Herman Hollerith by Dr. John Billings, who headed the department of consolidated data. He proposed using punched cards to record the information. Hollerith called his machine the tabulator, and in 1887 it was tested in Baltimore. The results were positive, and the experiment was repeated in St. Louis; the gain in time was nearly tenfold.

The US government immediately concluded a contract with Hollerith for the supply of tabulators, and in 1890 the census was carried out using his machines. Processing the results took less than two years and saved 5 million dollars. Hollerith's system not only provided high speed but also made it possible to compare statistical data across a variety of parameters. Hollerith developed a convenient keyboard punch that allowed about 100 holes per minute to be punched, simultaneously on several cards, and he automated the feeding and sorting of the punched cards. Sorting was done by a device in the form of a set of boxes with lids: the cards moved along a kind of conveyor, with spring-loaded reading pins on one side and a reservoir of mercury on the other. When a pin passed through a hole in the card, it closed an electrical circuit through the mercury, the lid of the corresponding box opened, and the card fell into it. The tabulator was later used for population censuses in several countries.
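
In spirit (the card fields and values below are invented for the example, not Hollerith's real card layout), the tabulator's job was simply to advance a counter for each hole it detected; the same tally in Python looks like this:

```python
from collections import Counter

# Each "punched card" is sketched as a dict of fields; field names and values
# are made up for illustration only.
cards = [
    {"state": "NY", "occupation": "farmer"},
    {"state": "NY", "occupation": "clerk"},
    {"state": "OH", "occupation": "farmer"},
    {"state": "NY", "occupation": "farmer"},
]

# The electromechanical tabulator advanced a dial counter for every detected hole;
# here a Counter plays that role for one chosen column.
by_state = Counter(card["state"] for card in cards)
print(by_state)          # Counter({'NY': 3, 'OH': 1})
```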

In 1896 Herman Hollerith founded the Tabulating Machine Company (TMC), and his machines were used everywhere - in large industrial enterprises and in ordinary firms alike. The tabulator was used again for the census of 1900. After a series of mergers, the company was renamed IBM (International Business Machines) in 1924.

4. Generations of computers

(at the same time we make notes in notebooks and the table “Generations of computers (computers)”)

COMPUTER GENERATIONS
Generation | Period | Element base | Speed (ops/sec) | Information carriers | Programs | Applications | Example computers
---------- | ------ | ------------ | --------------- | -------------------- | -------- | ------------ | -----------------
I          |        |              |                 |                      |          |              |
II         |        |              |                 |                      |          |              |
III        |        |              |                 |                      |          |              |
IV         |        |              |                 |                      |          |              |
V          |        |              |                 |                      |          |              |

I generation of computers: In the 1930s a breakthrough, a radical revolution, occurred in the development of physics. Computers no longer used wheels, rollers and relays but vacuum tubes, and the transition from electromechanical to electronic elements immediately increased the speed of the machines hundreds of times. The first working computer was built in the USA in 1945 at the University of Pennsylvania by the scientists Eckert and Mauchly and was called ENIAC. The machine was built by order of the US Department of Defense for air-defense systems and for automating their control. To calculate correctly the trajectory and speed of a projectile aimed at an air target, it was necessary to solve a system of six differential equations, and the first computer was meant to solve this problem.

The first computer occupied two floors of a building, weighed 30 tons and consisted of tens of thousands of vacuum tubes connected by wires whose total length was 10 thousand km. When ENIAC was running, the town's electricity supply would go out, so much power did the machine consume; the vacuum tubes quickly overheated and failed, and a whole group of students did nothing but continuously search for and replace burnt-out tubes.

In the USSR the founder of computer technology was Sergei Alekseevich Lebedev, who created the MESM (Small Electronic Calculating Machine, Kyiv, 1951) and the BESM (High-Speed Electronic Calculating Machine, Moscow, 1952).

II generation: In 1948 the American scientists John Bardeen, Walter Brattain and William Shockley invented the TRANSISTOR, a semiconductor device that replaced the vacuum tube. The transistor was much smaller than a vacuum tube, was more reliable and consumed far less electricity; a single transistor replaced 40 vacuum tubes! Computers became smaller and much cheaper, and their speed reached several hundred thousand operations per second. Now a computer was the size of a refrigerator and could be purchased and used by scientific and technical institutes. At that time the USSR kept pace and produced the world-class BESM-6 computer.

III generation: The second half of the 20th century saw the rapid development of science and technology, especially semiconductor physics, and from 1964 transistors began to be placed in integrated circuits formed on the surface of semiconductor crystals. This made it possible to pass the barrier of a million operations per second.

IV generation: From about 1980, designers learned to place several large integrated circuits on a single chip; the development of microelectronics led to the creation of microprocessors. An IC die is smaller and thinner than a contact lens. The performance of modern computers amounts to hundreds of millions of operations per second.

In 1977 the first mass-produced Apple personal computer (the Apple II) appeared. Since 1981 the leader in PC production has been IBM (International Business Machines); this company had been operating in the US market since the 19th century, producing various office devices - counting and adding machines and the like - and had established itself as a reliable company trusted by most business people in the United States. But that is not the only reason IBM PCs became much more popular than Apple's machines. Apple's Macintosh computers were a "black box" for the user: they could not be disassembled, upgraded, or fitted with new devices, whereas the IBM PC was open to the user and could be assembled like a construction set, so most users chose IBM PCs. Although the word "computer" makes us think of a PC, there are tasks that even modern PCs cannot handle and that only supercomputers, whose speed amounts to billions of operations per second, can solve.

Lebedev's scientific school successfully competed in its results with the leading US company, IBM. Among the world's scientists of Lebedev's generation there is no other person with such powerful creative potential, whose scientific work spanned the period from the creation of the first vacuum-tube computers to the ultra-high-speed supercomputer. When the American scientist Norbert Wiener, called the "first cyber-prophet", came to the USSR in 1960, he noted: "They are somewhat behind us in hardware, but far ahead of us in the THEORY of automation." Unfortunately, cybernetics was for a time persecuted in the USSR as a "bourgeois pseudoscience" and its researchers suffered, which is one reason Soviet electronics began to lag noticeably behind foreign developments. Although it became harder to create new machines, no one could stop scientists from thinking, and Russian scientists have remained at the forefront of world thought in the field of automation theory.

To develop computer programs, various programming (algorithmic) languages were created. FORTRAN (FORmula TRANslator), the first high-level language, was created in the mid-1950s by a group led by John Backus. In 1964 BASIC (Beginner's All-purpose Symbolic Instruction Code) appeared, created by T. Kurtz and J. Kemeny. In 1971 Niklaus Wirth, a professor at the Swiss Federal Institute of Technology in Zurich (ETH), created the Pascal language, which he named after the scientist Blaise Pascal. Other languages were also created: Ada, Algol, Cobol, C, Prolog, Fred, Logo, Lisp and others. Pascal remains one of the most popular teaching languages; many later languages borrowed from it clear structure and principles of program construction - most directly the Delphi system (Object Pascal) - and even BASIC, as it evolved, moved toward Pascal-like structure and versatility. In the 11th grade we will study the Pascal language and learn to create programs for solving problems with formulas and for text processing, and learn to draw and create moving pictures.

Supercomputers

5. The future of computing

  • Advantages of artificial intelligence (AI)
  • Molecular computers
  • Biocomputers
  • Optical computers
  • Quantum computers

6. Consolidation of new knowledge

It is possible to consolidate new material using a test in a multimedia presentation for the lesson: the “Test yourself” section in each part of the presentation: “Background of computers”, “Generations of computers”, “Gallery of scientists”.

Knowledge of this topic can be tested using the "History of Computer Science" tests (Appendix 1), in four versions, and the test about scientists "Informatics in Persons" (Appendix 2).

7. Summing up the lesson

Checking the completed tables (Appendix 3)

8. Homework

  • lecture in notebook for presentation, tables “Prehistory of Computers”, “Generations of Computers”
  • prepare a message about the 5th generation of computers (the future of computers)

The ENIAC computer created by Mauchly and Eckert worked a thousand times faster than the Mark-1. But it turned out that most of the time this computer stood idle, because to set the method of calculation (the program) it was necessary to connect the wires in the required way, which took hours or even days, while the calculation itself might then take only minutes or even seconds.

To simplify and speed up the process of setting programs, Mauchly and Eckert began to design a new computer that could store the program in its memory. In 1945, the famous mathematician John von Neumann was brought in to work and prepared a report on this computer. The report was sent to many scientists and became widely known because in it von Neumann clearly and simply formulated the general principles of the functioning of computers, that is, universal computing devices. And to this day, the vast majority of computers are made in accordance with the principles that John von Neumann outlined in his report in 1945. The first computer to embody von Neumann's principles was built in 1949 by the English researcher Maurice Wilkes.

The development of the first electronic serial machine UNIVAC (Universal Automatic Computer) began around 1947 by Eckert and Mauchly, who founded the ECKERT-MAUCHLY company in December of the same year. The first model of the machine (UNIVAC-1) was built for the US Census Bureau and put into operation in the spring of 1951. The synchronous, sequential UNIVAC-1 computer was created on the basis of the ENIAC and EDVAC computers. It operated at a clock frequency of 2.25 MHz and contained about 5,000 vacuum tubes. The internal storage device, with a capacity of 1,000 12-digit decimal numbers, was implemented on 100 mercury delay lines.

Soon after the UNIVAC-1 machine was put into operation, its developers came up with the idea of ​​automatic programming. It boiled down to ensuring that the machine itself could prepare the sequence of commands needed to solve a given problem.

A strong limiting factor for computer designers in the early 1950s was the lack of high-speed memory. According to one of the pioneers of computing, J. P. Eckert, "the architecture of a machine is determined by its memory." Researchers therefore focused their efforts on the magnetic properties of ferrite rings (cores) strung on wire matrices.

In 1951, J. Forrester published an article on the use of magnetic cores for storing digital information. The Whirlwind-1 machine was the first to use magnetic core memory. It consisted of 2 cubes 32 x 32 x 17 with cores that provided storage of 2048 words for 16-bit binary numbers with one parity bit.
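
The parity bit mentioned here is a simple error check: it is chosen so that the total number of 1-bits in the stored word comes out even (or odd, depending on the convention). A small Python sketch, assuming even parity over a 16-bit word:

```python
def even_parity_bit(word, width=16):
    """Return the extra bit that makes the total number of 1-bits even."""
    ones = bin(word & ((1 << width) - 1)).count("1")
    return ones % 2          # 1 if the word currently holds an odd number of 1-bits

word = 0b1011_0010_1100_0001              # a 16-bit value to be stored
p = even_parity_bit(word)                 # stored alongside the word as the extra bit
print(p)                                  # -> 1, since the word has seven 1-bits

# On readout the check is repeated over word + parity bit; a nonzero result signals corruption.
assert even_parity_bit((word << 1) | p, width=17) == 0
```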

Soon IBM joined in the development of electronic computers. In 1952 it released its first industrial electronic computer, the IBM 701, a synchronous parallel computer containing 4,000 vacuum tubes and 12,000 germanium diodes. The improved IBM 704 was distinguished by its high speed; it used index registers and represented data in floating-point form.

IBM 704
After the IBM 704, the IBM 709 was released, which in architectural terms was close to second- and third-generation machines. In this machine indirect addressing was used for the first time, and input-output channels appeared for the first time.

In 1956 IBM developed floating magnetic heads on an air cushion. Their invention made it possible to create a new type of memory - disk storage devices - whose importance was fully appreciated in the following decades of computing. The first disk storage appeared in the IBM 305 RAMAC machine, whose storage unit held a stack of 50 magnetically coated metal disks rotating at 1,200 rpm. Each disk surface contained 100 tracks for recording data, each track holding 10,000 characters.

Following the first production computer UNIVAC-1, Remington-Rand in 1952 released the UNIVAC-1103 computer, which worked 50 times faster. Later, software interrupts were used for the first time in the UNIVAC-1103 computer.

Remington-Rand employees used an algebraic form of writing algorithms called "Short Code" (the first interpreter, created in 1949 by John Mauchly). In addition, one must mention Grace Hopper, a US Navy officer and head of a programming team, then a captain (and later a rear admiral), who developed the first compiler program. The term "compiler" was first introduced by G. Hopper in 1951; her compiling program translated into machine language an entire program written in an algebraic form convenient for processing. G. Hopper is also credited with the term "bug" as applied to computers. Once, a beetle (in English, bug) flew into the laboratory through an open window and, landing on the contacts of a relay, shorted them, causing a serious malfunction of the machine. The burnt beetle was taped into the log where malfunctions were recorded - and so the first computer "bug" was documented.
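
To show in miniature what "translating an algebraic form into machine language" means, here is a toy Python sketch that turns a small expression into a sequence of invented machine-like instructions; it illustrates the idea only, not Hopper's actual compiler or Short Code.

```python
import ast

def compile_expression(source):
    """Translate a small algebraic expression into invented register-machine instructions."""
    instructions, counter = [], 0

    def emit(node):
        nonlocal counter
        if isinstance(node, ast.Name):            # a variable: load it into a fresh register
            reg = f"R{counter}"
            counter += 1
            instructions.append(f"LOAD {node.id} -> {reg}")
            return reg
        if isinstance(node, ast.BinOp):           # an operation: compile both operands first
            left, right = emit(node.left), emit(node.right)
            op = {ast.Add: "ADD", ast.Mult: "MUL"}[type(node.op)]
            reg = f"R{counter}"
            counter += 1
            instructions.append(f"{op} {left}, {right} -> {reg}")
            return reg
        raise ValueError("unsupported construct in this toy compiler")

    emit(ast.parse(source, mode="eval").body)
    return instructions

for line in compile_expression("a + b * c"):
    print(line)
# LOAD a -> R0
# LOAD b -> R1
# LOAD c -> R2
# MUL R1, R2 -> R3
# ADD R0, R3 -> R4
```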

IBM took the first steps in automating programming by creating the "Fast Coding System" for the IBM 701 in 1953. In the USSR, A. A. Lyapunov proposed one of the first programming languages. In 1957 a group led by John Backus completed work on FORTRAN, the first high-level programming language to become popular. The language, first implemented on the IBM 704, helped to expand the range of computer applications.

Alexey Andreevich Lyapunov
In Great Britain in July 1951, at a conference at the University of Manchester, M. Wilkes presented a report “The Best Method for Designing an Automatic Machine,” which became a pioneering work on the fundamentals of microprogramming. The method he proposed for designing control devices has found wide application.

M. Wilkes realized his idea of ​​microprogramming in 1957 when creating the EDSAC-2 machine. In 1951, M. Wilkes, together with D. Wheeler and S. Gill, wrote the first programming textbook, “Composing Programs for Electronic Computing Machines.”

In 1956 Ferranti released the Pegasus computer, which for the first time implemented the concept of general-purpose registers (GPRs). With the advent of GPRs the distinction between index registers and accumulators disappeared, and the programmer had at his disposal not one but several accumulator registers.

The advent of personal computers

Microprocessors were first used in various specialized devices, such as calculators. But in 1974 several companies announced the creation of a personal computer based on the Intel-8008 microprocessor - that is, a device performing the same functions as a large computer but designed for a single user. At the beginning of 1975 the first commercially distributed personal computer, the Altair-8800, based on the Intel-8080 microprocessor, appeared. It sold for about $500, and although its capabilities were very limited (the RAM was only 256 bytes, and there was no keyboard or screen), its appearance was greeted with great enthusiasm: several thousand kits were sold in the first months. Buyers supplied the computer with additional devices - a monitor, a keyboard, memory expansion boards, and so on - and soon these devices began to be produced by other companies. At the end of 1975, Paul Allen and Bill Gates (the future founders of Microsoft) created a Basic interpreter for the Altair, which allowed users to communicate with the machine and write programs for it quite simply. This, too, contributed to the growing popularity of personal computers.

The success of Altair-8800 forced many companies to also start producing personal computers. Personal computers began to be sold fully equipped, with a keyboard and monitor; the demand for them amounted to tens and then hundreds of thousands of units per year. Several magazines dedicated to personal computers appeared. The growth in sales was greatly facilitated by numerous useful programs of practical importance. Commercially distributed programs also appeared, for example the text editing program WordStar and the spreadsheet processor VisiCalc (1978 and 1979, respectively). These and many other programs made the purchase of personal computers very profitable for business: with their help, it became possible to perform accounting calculations, draw up documents, etc. Using large computers for these purposes was too expensive.

In the late 1970s the spread of personal computers even led to a slight decline in the demand for large computers and minicomputers. This became a matter of serious concern for IBM, the leading manufacturer of large computers, and in 1979 IBM decided to try its hand in the personal computer market. However, the company's management underestimated the future importance of this market and viewed the project as a minor experiment, something like one of dozens of efforts under way at the company to create new equipment. In order not to spend too much money on the experiment, the unit responsible for the project was given a degree of freedom unprecedented in the company; in particular, it was allowed not to design the personal computer from scratch but to use components made by other companies. And the unit took full advantage of this chance.

The then-latest 16-bit microprocessor, the Intel-8088, was chosen as the computer's main microprocessor. Its use significantly increased the machine's potential, since the new microprocessor could work with 1 megabyte of memory, whereas all computers available at that time were limited to 64 kilobytes.
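
The jump from 64 KB to 1 MB follows directly from the address width: the 8088 forms 20-bit addresses (from 16-bit registers via segmentation), whereas the earlier 8-bit machines used 16-bit addresses. A quick check of the arithmetic:

```python
# Address-space size implied by the address width: 2**bits bytes.
old_limit = 2 ** 16     # 16-bit addressing: 65_536 bytes = 64 KB
new_limit = 2 ** 20     # 20-bit addressing on the 8086/8088: 1_048_576 bytes = 1 MB

print(old_limit // 1024, "KB")                                              # -> 64 KB
print(new_limit // 1024 ** 2, "MB,", new_limit // old_limit, "times more")  # -> 1 MB, 16 times more
```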

In August 1981, a new computer called the IBM PC was officially introduced to the public, and soon after it gained great popularity among users. A couple of years later, the IBM PC took a leading position in the market, displacing 8-bit computer models.

IBM PC
The secret of the IBM PC's popularity is that IBM did not make its computer a sealed, one-piece device and did not protect its design with patents. Instead, it assembled the computer from independently manufactured parts and did not keep the specifications of those parts, or the way they were connected, a secret; the design principles of the IBM PC were available to everyone. This approach, called the open architecture principle, made the IBM PC a stunning success, although it prevented IBM from keeping the benefits of that success to itself. Here is how the openness of the IBM PC architecture influenced the development of personal computers.

The promise and popularity of the IBM PC made the production of components and add-on devices for it very attractive. Competition between manufacturers led to cheaper components and devices. Very soon many companies were no longer content with the role of component suppliers and began to assemble their own IBM PC-compatible computers. Since these companies did not have to bear IBM's enormous costs for research and for maintaining a huge corporate structure, they were able to sell their computers much cheaper (sometimes two to three times) than comparable IBM machines.

Computers compatible with the IBM PC were initially contemptuously called “clones,” but this nickname did not catch on, as many manufacturers of IBM PC-compatible computers began to implement technical advances faster than IBM itself. Users were able to independently upgrade their computers and equip them with additional devices from hundreds of different manufacturers.

Personal computers of the future

The basis of computers of the future will not be silicon transistors, where information is transmitted by electrons, but optical systems. The information carrier will be photons, since they are lighter and faster than electrons. As a result, the computer will become cheaper and more compact. But the most important thing is that optoelectronic computing is much faster than what is used today, so the computer will be much more powerful.

The PC will be small in size and have the power of modern supercomputers. The PC will become a repository of information covering all aspects of our daily lives, it will not be tied to electrical networks. This PC will be protected from thieves thanks to a biometric scanner that will recognize its owner by fingerprint.

The main way to communicate with the computer will be voice. The desktop computer will turn into a “candy bar”, or rather, into a giant computer screen - an interactive photonic display. There is no need for a keyboard, since all actions can be performed with the touch of a finger. But for those who prefer a keyboard, a virtual keyboard can be created on the screen at any time and removed when it is no longer needed.

The computer will become the operating system of the house, and the house will begin to respond to the owner’s needs, will know his preferences (make coffee at 7 o’clock, play his favorite music, record the desired TV show, adjust temperature and humidity, etc.)

Screen size will not play any role in the computers of the future - it can be as big as a desk or quite small. Larger computer screens will be based on photonically excited liquid crystals with much lower power consumption than today's LCD monitors. Colors will be vibrant and images accurate (plasma displays are also possible). In fact, today's concept of "resolution" will largely lose its meaning.

Computing devices and instruments from antiquity to the present day

The main stages in the development of computer technology are: Manual - until the 17th century, Mechanical - from the mid-17th century, Electromechanical - from the 90s of the 19th century, Electronic - from the 40s of the 20th century.

The manual period began at the dawn of human civilization.

In any activity, man has always invented and created a wide variety of means, devices and tools in order to expand his capabilities and facilitate work.

With the development of trade, the need for counting arose. Many centuries ago, to carry out various calculations people began to use first their own fingers, then pebbles, sticks, knots and the like. But over time the tasks people faced grew more complicated, and it became necessary to find methods and invent devices that could help solve them.

One of the first devices (5th century BC) that facilitated calculation was a special board, later called an abacus (from the Greek for "counting board"). Calculations on it were carried out by moving pebbles or counters in the recesses of boards made of bronze, stone, ivory and the like. In Greece the abacus existed as early as the 5th century BC. One groove corresponded to units, another to tens, and so on. If more than ten pebbles accumulated in a groove while counting, they were removed and one pebble was added to the next groove. The Romans improved the abacus, moving from grooves and pebbles to marble boards with chiseled grooves and marble balls. With its help it was possible to perform the simplest operations of addition and subtraction.

The Chinese variety of the abacus, the suanpan, appeared in the 6th century AD; the soroban is the Japanese abacus, derived from the suanpan, which was brought to Japan in the 15th-16th centuries. In the 16th century the Russian abacus with a decimal number system was created. Over the centuries it underwent significant changes, but it continued in use until the 1980s.

At the beginning of the 17th century the Scottish mathematician J. Napier introduced logarithms, which had a revolutionary impact on calculation. The slide rule, based on logarithms, was still in practical use until quite recently, having served engineers for more than 360 years. It is undoubtedly the crowning achievement of the manual stage of computing.

The development of mechanics in the 17th century became a prerequisite for the creation of computing devices and instruments using a mechanical method of calculation. Among mechanical devices there were adding machines (which could add and subtract) and multiplying devices (which multiplied and divided); in time the two were combined into one, the arithmometer, which could perform all four arithmetic operations.

In the diaries of the brilliant Italian Leonardo da Vinci (1452-1519), a number of drawings were discovered in our own time that turned out to be a sketch of a summing machine built on gear wheels, capable of adding 13-digit decimal numbers. In those distant years the brilliant scientist was probably the only person on Earth who understood the need for devices to ease the labor of calculation. However, the need for them was so small (or rather, it did not exist at all!) that only more than a hundred years after the death of Leonardo da Vinci did another European - the German scientist Wilhelm Schickard (1592-1636), who of course had not read the Italian's diaries - propose his own solution to the problem. The reason that prompted Schickard to develop a calculating machine for summing and multiplying six-digit decimal numbers was his acquaintance with the astronomer Johannes Kepler. Having become familiar with the great astronomer's work, which consisted chiefly of calculations, Schickard was inspired by the idea of helping him in his hard labor. In a letter addressed to Kepler in 1623, he gave a drawing of the machine and described how it worked.

One of the first examples of such mechanisms was the "counting clock" of the German mathematician Wilhelm Schickard. In 1623 he created a machine that became the first automatic calculator. Schickard's machine could add and subtract six-digit numbers and rang a bell when the result overflowed. Unfortunately, history has not preserved information about the machine's further fate.

The inventions of Leonardo da Vinci and Wilhelm Schickard became known only in our time; they were unknown to their contemporaries.

The most famous of the first calculating machines was Blaise Pascal's summing machine: in 1642 he built the Pascalina, an adding machine for eight-digit numbers. Pascal began creating the Pascalina at the age of 19, watching the work of his father, a tax collector who often had to perform long and tedious calculations; his only goal was to help his father in this work.

In 1673 the German mathematician Leibniz created the first arithmometer, which could perform all four arithmetic operations. "...My machine makes it possible to perform multiplication and division over huge numbers instantly, without resorting to successive addition and subtraction," Leibniz wrote to one of his friends. Leibniz's machine became known in most European countries.

The principle of calculations turned out to be successful; subsequently, the model was repeatedly refined in different countries by different scientists.

And from 1881, mass production of adding machines was organized, which were used for practical calculations until the sixties of the 20th century.

The most famous mass-produced model was the Russian-made Felix adding machine, which received a gold medal at the international exhibition in Paris in 1900.

The mechanical period also includes the theoretical work of Charles Babbage on his analytical engines, which were not built for lack of funding. The theoretical development lasted from roughly 1820 to 1871. The Analytical Engine was to be the first machine using the principle of program control and intended to compute any algorithm; input and output were planned via punched cards, and it was to run on a steam engine. The Analytical Engine consisted of the following four main parts: a unit for storing initial, intermediate and resulting data (the store - memory); a data processing unit (the mill - the arithmetic unit); a unit controlling the sequence of calculations (the control unit); and units for entering initial data and printing results (input/output devices). This structure later served as the prototype for the structure of all modern computers.

Lady Ada Lovelace (daughter of the English poet George Byron) worked together with the English scientist. She developed the first programs for the machine, laid down many ideas, and introduced a number of concepts and terms that survive to this day. Countess Lovelace is considered the first computer programmer, and the ADA programming language is named after her. Although the project was never implemented, it was widely known and highly valued by scientists. Charles Babbage was a century ahead of his time.

To be continued…